Merge pull request #1 from LCTT/master

merge the repo from LCTT/TranslateProject
This commit is contained in:
Y.C.S.M 2018-09-14 07:10:18 +08:00 committed by GitHub
commit c8da1a51e3
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
50 changed files with 5618 additions and 2260 deletions

View File

@@ -1,57 +1,27 @@
如何 Docker 化 Python Django 应用程序
======
Docker 是一个开源项目为开发人员和系统管理员提供了一个开放平台可以将应用程序构建、打包为一个轻量级容器并在任何地方运行。Docker 会在软件容器中自动部署应用程序。
Django 是一个用 Python 编写的 Web 应用程序框架,遵循 MVC模型-视图-控制器)架构。它是免费的,并在开源许可下发布。它速度很快,旨在帮助开发人员尽快将他们的应用程序上线。
在本教程中,我将逐步向你展示在 Ubuntu 16.04 中如何为现有的 Django 应用程序创建 docker 镜像。我们将学习如何 docker 化一个 Python Django 应用程序,然后使用一个 `docker-compose` 脚本将应用程序作为容器部署到 docker 环境。
为了部署我们的 Python Django 应用程序,我们需要其它 docker 镜像:一个用于 Web 服务器的 nginx docker 镜像和用于数据库的 PostgreSQL 镜像。
### 我们要做什么?
1. 安装 Docker-ce
2. 安装 Docker-compose
3. 配置项目环境
4. 构建并运行
5. 测试
### 步骤 1 - 安装 Docker-ce
在本教程中,我们将从 docker 仓库安装 docker-ce 社区版。我们将安装 docker-ce 社区版和 `docker-compose`(其支持 compose 文件版本 3。
在安装 docker-ce 之前,先使用 `apt` 命令安装所需的 docker 依赖项。
```
sudo apt install -y \
@@ -71,7 +41,7 @@ sudo add-apt-repository \
   stable"
```
[![安装 Docker-ce](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/1.png)][14]
更新仓库并安装 docker-ce。
@@ -87,16 +57,16 @@ systemctl start docker
systemctl enable docker
```
接着,我们将添加一个名为 `omar` 的新用户并将其添加到 `docker` 组。
```
useradd -m -s /bin/bash omar
usermod -a -G docker omar
```
[![启动 Docker](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/2.png)][15]
`omar` 用户身份登录并运行 `docker` 命令,如下所示。
```
su - omar
@@ -107,13 +77,13 @@ docker run hello-world
[![检查 Docker 安装](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/3.png)][16]
Docker-ce 安装已经完成。
### 步骤 2 - 安装 Docker-compose
在本教程中,我们将使用支持 compose 文件版本 3 的最新 `docker-compose`。我们将手动安装 `docker-compose`
使用 `curl` 命令将最新版本的 `docker-compose` 下载到 `/usr/local/bin` 目录,并使用 `chmod` 命令使其有执行权限。
运行以下命令:
@@ -122,7 +92,7 @@ sudo curl -L https://github.com/docker/compose/releases/download/1.21.0/docker-c
sudo chmod +x /usr/local/bin/docker-compose
```
现在检查 `docker-compose` 版本。
```
docker-compose version
@@ -132,26 +102,26 @@ docker-compose version
[![安装 Docker-compose](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/4.png)][17]
已安装支持 compose 文件版本 3 的 `docker-compose` 最新版本。
### 步骤 3 - 配置项目环境
在这一步中,我们将配置 Python Django 项目环境。我们将创建新目录 `guide01`,并使其成为我们项目文件的主目录,例如包括 Dockerfile、Django 项目、nginx 配置文件等。
登录到 `omar` 用户。
```
su - omar
```
创建一个新目录 `guide01`,并进入目录。
```
mkdir -p guide01
cd guide01/
```
现在在 `guide01` 目录下,创建两个新目录 `project``config`
```
mkdir project/ config/
@@ -159,118 +129,151 @@ mkdir project/ config/
注意:
* `project` 目录:我们所有的 python Django 项目文件都将放在该目录中。
* `config` 目录:项目配置文件的目录,包括 nginx 配置文件、python pip 的`requirements.txt` 文件等。
#### 创建一个新的 requirements.txt 文件
接下来,使用 `vim` 命令在 `config` 目录中创建一个新的 `requirements.txt` 文件。
```
vim config/requirements.txt
```
粘贴下面的配置
粘贴下面的配置
```
Django==2.0.4
gunicorn==19.7.0
psycopg2==2.7.4
```
保存并退出。
#### 创建 Nginx 虚拟主机文件 django.conf
'guide01' 目录下创建新文件 'Dockerfile'
`config` 目录下创建 nginx 配置目录并添加虚拟主机配置文件 `django.conf`
运行以下命令:
```
mkdir -p config/nginx/
vim config/nginx/django.conf
```
粘贴下面的配置:
```
upstream web {
ip_hash;
server web:8000;
}
# portal
server {
location / {
proxy_pass http://web/;
}
listen 8000;
server_name localhost;
location /static {
autoindex on;
alias /src/static/;
}
}
```
保存并退出。
#### 创建 Dockerfile
`guide01` 目录下创建新文件 `Dockerfile`
运行以下命令:
```
vim Dockerfile
```
现在粘贴下面的 Dockerfile 脚本
```
FROM python:3.5-alpine
ENV PYTHONUNBUFFERED 1

RUN apk update && \
    apk add --virtual build-deps gcc python-dev musl-dev && \
    apk add postgresql-dev bash

RUN mkdir /config
ADD /config/requirements.txt /config/
RUN pip install -r /config/requirements.txt
RUN mkdir /src
WORKDIR /src
```
保存并退出。
注意:
我们想要为我们的 Django 项目构建基于 Alpine Linux 的 Docker 镜像Alpine 是最小的 Linux 版本。我们的 Django 项目将运行在带有 Python 3.5 的 Alpine Linux 上,并添加 postgresql-dev 包以支持 PostgreSQL 数据库。然后,我们将使用 python `pip` 命令安装在 `requirements.txt` 上列出的所有 Python 包,并为我们的项目创建新目录 `/src`
#### 创建 Docker-compose 脚本
使用 [vim][18] 命令在 `guide01` 目录下创建 `docker-compose.yml` 文件。
```
vim docker-compose.yml
```
粘贴以下配置内容
粘贴以下配置内容
```
version: '3'
services:
  db:
    image: postgres:10.3-alpine
    container_name: postgres01
  nginx:
    image: nginx:1.13-alpine
    container_name: nginx01
    ports:
      - "8000:8000"
    volumes:
      - ./project:/src
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
  web:
    build: .
    container_name: django01
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py collectstatic --noinput && gunicorn hello_django.wsgi -b 0.0.0.0:8000"
    depends_on:
      - db
    volumes:
      - ./project:/src
    expose:
      - "8000"
    restart: always
```
保存并退出。
注意:
使用这个 `docker-compose` 文件脚本,我们将创建三个服务。使用 alpine Linux 版的 PostgreSQL 创建名为 `db` 的数据库服务,再次使用 alpine Linux 版的 Nginx 创建 `nginx` 服务,并使用从 Dockerfile 生成的自定义 docker 镜像创建我们的 python Django 容器。
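在真正构建之前,可以先用 `docker-compose config` 检查这个 compose 文件的语法是否正确,这是 `docker-compose` 自带的校验命令:

```
cd ~/guide01/
# 解析并打印 docker-compose.yml如果有语法错误会直接报错退出
docker-compose config
```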
[![配置项目环境](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/5.png)][19]
#### 配置 Django 项目
将 Django 项目文件复制到 `project` 目录。
```
cd ~/django
cp -r * ~/guide01/project/
```
进入 `project` 目录并编辑应用程序设置 `settings.py`
```
cd ~/guide01/project/
@@ -279,29 +282,29 @@ vim hello_django/settings.py
注意:
我们将部署名为 “hello_django” 的简单 Django 应用程序。
'ALLOW_HOSTS' 行中,添加服务名称 'web'
`ALLOW_HOSTS` 行中,添加服务名称 `web`
```
ALLOWED_HOSTS = ['web']
```
现在更改数据库设置,我们将使用 PostgreSQL 数据库来运行名为 `db` 的服务,使用默认用户和密码。
```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}
```
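这里的 `HOST` 填的是 compose 文件中的服务名 `db`。等容器启动之后,如果想验证数据库是否可达,可以参考下面的示意命令(假设使用的是 `postgres` 镜像的默认用户):

```
# 进入 db 服务对应的容器,列出现有数据库
docker-compose exec db psql -U postgres -c '\l'
```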
至于 `STATIC_ROOT` 配置目录,将此行添加到文件的末尾。
```
STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
@@ -309,21 +312,21 @@ STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
保存并退出。
[![配置 Django 项目](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/6.png)][20]
现在我们准备在 docker 容器下构建和运行 Django 项目。
### 步骤 4 - 构建并运行 Docker 镜像
在这一步中,我们想要使用 `guide01` 目录中的配置为我们的 Django 项目构建一个 Docker 镜像。
进入 `guide01` 目录。
```
cd ~/guide01/
```
现在使用 `docker-compose` 命令构建 docker 镜像。
```
docker-compose build
@@ -331,7 +334,7 @@ docker-compose build
[![运行 docker 镜像](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/7.png)][21]
启动 `docker-compose` 脚本中的所有服务。
```
docker-compose up -d
@@ -348,7 +351,7 @@ docker-compose ps
docker-compose images
```
现在,你将在系统上运行三个容器列出 Docker 镜像,如下所示。
现在,你将在系统上运行三个容器列出 Docker 镜像,如下所示。
[![docker-compose ps 命令](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/9.png)][23]
@@ -356,17 +359,19 @@ docker-compose images
### 步骤 5 - 测试
打开 Web 浏览器并使用端口 8000 键入服务器地址我的是http://ovh01:8000/
打开 Web 浏览器并使用端口 8000 键入服务器地址,我的是:`http://ovh01:8000/`。
现在你将看到默认的 Django 主页。
[![默认 Django 项目主页](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/10.png)][24]
接下来,通过在 URL 上添加 `/admin` 路径来测试管理页面。
```
http://ovh01:8000/admin/
```
然后你将会看到 Django 管理登录页面。
[![Django administration](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/11.png)][25]
@@ -383,7 +388,7 @@ via: https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-
作者:[Muhammad Arul][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -1,36 +1,30 @@
Kubernetes 网络运维
======
最近我一直在研究 Kubernetes 网络。我注意到一件事情就是,虽然关于如何设置 Kubernetes 网络的文章很多,也写得很不错,但是却没有看到关于如何去运 Kubernetes 网络的文章、以及如何完全确保它不会给你造成生产事故。
在本文中,我将尽力让你相信三件事情(我觉得这些都很合理 :)
* 避免生产系统网络中断非常重要
* 运维联网软件是很难的
* 有关你的网络基础设施的重要变化值得深思熟虑以及这种变化对可靠性的影响。虽然非常“牛x”的谷歌人常说“这是我们在谷歌正在用的”谷歌工程师在 Kubernetes 上正做着很重大的工作!但是我认为重要的仍然是研究架构,并确保它对你的组织有意义)。
我肯定不是 Kubernetes 网络方面的专家,但是我在配置 Kubernetes 网络时遇到了一些问题,并且比以前更加了解 Kubernetes 网络了。
### 运联网软件是很难的
### 运联网软件是很难的
在这里,我并不讨论有关运物理网络的话题(对于它我不懂),而是讨论关于如何让像 DNS 服务、负载均衡以及代理这样的软件正常工作方面的内容。
在这里,我并不讨论有关运物理网络的话题(对于它我不懂),而是讨论关于如何让像 DNS 服务、负载均衡以及代理这样的软件正常工作方面的内容。
我在一个负责很多网络基础设施的团队工作过一年时间,并且因此学到了一些运维网络基础设施的知识!(显然我还有很多的知识需要继续学习)在我们开始之前有三个整体看法:
* 联网软件经常重度依赖 Linux 内核。因此除了正确配置软件之外,你还需要确保许多不同的系统控制(`sysctl`)配置正确,而一个错误配置的系统控制就很容易让你处于“一切都很好”和“到处都出问题”的差别中。
* 联网需求会随时间而发生变化(比如,你的 DNS 查询或许比上一年多了五倍!或者你的 DNS 服务器突然开始返回 TCP 协议的 DNS 响应而不是 UDP 的,它们是完全不同的内核负载!)。这意味着之前正常工作的软件突然开始出现问题。
* 修复一个生产网络的问题,你必须有足够的经验。(例如,看这篇 [由 Sophie Haskins 写的关于 kube-dns 问题调试的文章][1])我在网络调试方面比以前进步多了,但那也是我花费了大量时间研究 Linux 网络知识之后的事了。
我距离成为一名网络运专家还差得很远,但是我认为以下几点很重要:
我距离成为一名网络运专家还差得很远,但是我认为以下几点很重要:
1. 对生产网络的基础设施做重要的更改是很难得的(因为它会产生巨大的混乱)
2. 当你对网络基础设施做重大更改时,真的应该仔细考虑如果新网络基础设施失败该如何处理
3. 是否有很多人都能理解你的网络配置
切换到 Kubernetes 显然是个非常大的更改!因此,我们来讨论一下可能会导致错误的地方!
@@ -39,86 +33,72 @@
在本文中我们将要讨论的 Kubernetes 网络组件有:
* <ruby>覆盖网络<rt>overlay network</rt></ruby>的后端(像 flannel/calico/weave 网络/romana
* `kube-dns`
* `kube-proxy`
* 入站控制器 / 负载均衡器
* `kubelet`
如果你打算配置 HTTP 服务,或许这些你都会用到。这些组件中的大部分我都不会用到,但是我尽可能去理解它们,因此,本文将涉及它们有关的内容。
### 最简化的方式:为所有容器使用宿主机网络
我们从你能做到的最简单的东西开始。这并不能让你在 Kubernetes 中运行 HTTP 服务。我认为它是非常安全的,因为在这里面可以让你动的东西很少。
如果你为所有容器使用宿主机网络,我认为需要你去做的全部事情仅有:
1. 配置 kubelet以便于容器内部正确配置 DNS
2. 没了,就这些!
如果你为每个 pod 直接使用宿主机网络,那就不需要 kube-dns 或者 kube-proxy 了。你都不需要一个作为基础的覆盖网络。
这种配置方式中,你的 pod 们都可以连接到外部网络(同样的方式,你的宿主机上的任何进程都可以与外部网络对话),但外部网络不能连接到你的 pod 们。
这并不是最重要的(我认为大多数人想在 Kubernetes 中运行 HTTP 服务并与这些服务进行真实的通讯),但我认为有趣的是,从某种程度上来说,网络的复杂性并不是绝对需要的,并且有时候你不用这么复杂的网络就可以实现你的需要。如果可以的话,尽可能地避免让网络过于复杂。
### 运一个覆盖网络
### 运一个覆盖网络
我们将要讨论的第一个网络组件是有关覆盖网络的。Kubernetes 假设每个 pod 都有一个 IP 地址,这样你就可以与那个 pod 中的服务进行通讯了。我在说到“覆盖网络”这个词时,指的就是这个意思(即“让你通过它的 IP 地址指向到 pod 的系统”)。
所有其它的 Kubernetes 网络的东西都依赖正确工作的覆盖网络。更多关于它的内容,你可以读 [这里的 kubernetes 网络模型][10]。
Kelsey Hightower 在 [kubernetes 艰难之路][11] 中描述的方式看起来似乎很好,但是,事实上它的作法在超过 50 个节点的 AWS 上是行不通的,因此,我不打算讨论它了。
有许多覆盖网络后端calico、flannel、weaveworks、romana并且规划非常混乱。就我的观点来看我认为一个覆盖网络有 2 个职责:
1. 确保你的 pod 能够发送网络请求到外部的集群
2. 保持一个到子网络的稳定的节点映射,并且保持集群中每个节点都可以使用那个映射得以更新。当添加和删除节点时,能够做出正确的反应。
Okay! 因此!你的覆盖网络可能会出现的问题是什么呢?
* 覆盖网络负责设置 iptables 规则(最基本的是 `iptables -A -t nat POSTROUTING -s $SUBNET -j MASQUERADE`),以确保那个容器能够向 Kubernetes 之外发出网络请求。如果在这个规则上有错误,你的容器就不能连接到外部网络。这并不很难(它只是几条 iptables 规则而已),但是它非常重要。我发起了一个 [拉取请求][2],因为我想确保它有很好的弹性。
* 添加或者删除节点时可能会有错误。我们使用 `flannel hostgw` 后端,我们开始使用它的时候,节点删除功能 [尚未开始工作][3]。
* 你的覆盖网络或许依赖一个分布式数据库etcd。如果那个数据库发生什么问题这将导致覆盖网络发生问题。例如[https://github.com/coreos/flannel/issues/610][4] 上说,如果在你的 `flannel etcd` 集群上丢失了数据,最后的结果将是在容器中网络连接会丢失。(现在这个问题已经被修复了)
* 你升级 Docker 以及其它东西导致的崩溃
* 还有更多的其它的可能性!
我在这里主要讨论的是过去发生在 Flannel 中的问题,但是我并不是要承诺不去使用 Flannel —— 事实上我很喜欢 Flannel因为我觉得它很简单比如类似 [vxlan 在后端这一块的部分][12] 只有 500 行代码),对我来说,通过代码来找出问题的根源成为了可能。并且很显然,它在不断地改进。他们在审查拉取请求方面做的很好。
到目前为止,我运覆盖网络的方法是:
到目前为止,我运覆盖网络的方法是:
* 学习它的工作原理的详细内容以及如何去调试它比如Flannel 用于创建路由的 hostgw 网络后端,因此,你只需要使用 `sudo ip route list` 命令去查看它是否正确即可)
* 如果需要的话,维护一个内部构建版本,这样打补丁比较容易
* 有问题时,向上游贡献补丁
我认为去遍历所有已合并的拉取请求以及过去已修复的 bug 清单真的是非常有帮助的 —— 这需要花费一些时间,但这是得到一个其它人遇到的各种问题的清单的好方法。
对其他人来说他们的覆盖网络可能工作的很好但是我并不能从中得到任何经验并且我也曾听说过其他人报告类似的问题。如果你有一个类似配置的覆盖网络a) 在 AWS 上并且 b) 在多于 50-100 节点上运行,我想知道你运这样的一个网络有多大的把握。
### 运 kube-proxy 和 kube-dns
### 运 kube-proxy 和 kube-dns
现在,我有一些关于运覆盖网络的想法,我们来讨论一下。
现在,我有一些关于运覆盖网络的想法,我们来讨论一下。
这个标题的最后面有一个问号,那是因为我并没有真的去运维过。在这里我还有更多的问题要回答。
这里的 Kubernetes 服务是如何工作的!一个服务是一群 pod 们,它们中的每个都有自己的 IP 地址(像 10.1.0.3、10.2.3.5、10.3.5.6 这样)
1. 每个 Kubernetes 服务有一个 IP 地址(像 10.23.1.2 这样)
2. `kube-dns` 去解析 Kubernetes 服务 DNS 名字为 IP 地址因此my-svc.my-namespace.svc.cluster.local 可能映射到 10.23.1.2 上)
3. `kube-proxy` 配置 `iptables` 规则是为了在它们之间随机进行均衡负载。Kube-proxy 也有一个用户空间的轮询负载均衡器,但是在我的印象中,他们并不推荐使用它。
因此,当你发出一个请求到 `my-svc.my-namespace.svc.cluster.local` 时,它将解析为 10.23.1.2,然后,在你本地主机上的 `iptables` 规则(由 kube-proxy 生成)将随机重定向到 10.1.0.3 或者 10.2.3.5 或者 10.3.5.6 中的一个上。
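如果想亲手验证这条链路,可以在集群内解析服务域名,再到节点上查看 kube-proxy 写入的 NAT 规则。下面是一个粗略的排查示意(`my-svc`、`my-namespace` 只是占位名字):

```
# 在集群内的某个 pod 中解析服务域名,应当返回服务的集群 IP
nslookup my-svc.my-namespace.svc.cluster.local

# 在节点上查看 kube-proxy 生成的 NAT 规则KUBE-SERVICES 是 kube-proxy 使用的链名)
sudo iptables -t nat -L KUBE-SERVICES -n | grep my-svc
```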
@@ -126,9 +106,7 @@ Okay! 因此!你的覆盖网络可能会出现的问题是什么呢?
在这个过程中我能想像出的可能出问题的地方:
* `kube-dns` 配置错误
* `kube-proxy` 挂了,以致于你的 `iptables` 规则没有得以更新
* 维护大量的 `iptables` 规则相关的一些问题
我们来讨论一下 `iptables` 规则,因为创建大量的 `iptables` 规则是我以前从没有听过的事情!
@@ -141,7 +119,6 @@ kube-proxy 像如下这样为每个目标主机创建一个 `iptables` 规则:
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y
```
因此kube-proxy 创建了许多 `iptables` 规则。它们都是什么意思?它对我的网络有什么样的影响?这里有一个来自华为的非常好的演讲,它叫做 [支持 50,000 个服务的可伸缩 Kubernetes][14],它说如果在你的 Kubernetes 集群中有 5,000 服务,增加一个新规则,将需要 **11 分钟**。如果这种事情发生在真实的集群中,我认为这将是一件非常糟糕的事情。
@@ -152,19 +129,16 @@ kube-proxy 像如下这样为每个目标主机创建一个 `iptables` 规则:
但是,我觉得使用 HAProxy 更舒服!它能够用于去替换 kube-proxy我用谷歌搜索了一下然后发现了这个 [thread on kubernetes-sig-network][15],它说:
> kube-proxy 是很难用的,我们在生产系统中使用它近一年了,它在大部分的时间都表现的很好,但是,随着我们集群中的服务越来越多,我们发现它的排错和维护工作越来越难。在我们的团队中没有 iptables 方面的专家,我们只有 HAProxy & LVS 方面的专家,由于我们已经使用它们好几年了,因此我们决定使用一个中心化的 HAProxy 去替换分布式的代理。我觉得这可能会对在 Kubernetes 中使用 HAProxy 的其他人有用,因此,我们更新了这个项目,并将它开源:[https://github.com/AdoHe/kube2haproxy][5]。如果你发现它有用,你可以去看一看、试一试。
因此,那是一个有趣的选择!我在这里确实没有答案,但是,有一些想法:
* 负载均衡器是很复杂的
* DNS 也很复杂
* 如果你有运维某种类型的负载均衡器(比如 HAProxy的经验与其使用一个全新的负载均衡器比如 kube-proxy还不如做一些额外的工作去使用你熟悉的那个来替换或许更有意义。
* 我一直在考虑,我们希望在什么地方能够完全使用 kube-proxy 或者 kube-dns —— 我认为,最好是只在 Envoy 上投入,并且在负载均衡&服务发现上完全依赖 Envoy 来做。因此,你只需要将 Envoy 运维好就可以了。
正如你所看到的,我在关于如何运维 Kubernetes 中的内部代理方面的思路还是很混乱的并且我也没有使用它们的太多经验。总体上来说kube-proxy 和 kube-dns 还是很好的,也能够很好地工作,但是我仍然认为应该去考虑使用它们可能产生的一些问题(例如,“你不能有超过 5000 个 Kubernetes 服务”)。
### 入口
@@ -175,14 +149,12 @@ kube-proxy 像如下这样为每个目标主机创建一个 `iptables` 规则:
几个有用的链接,总结如下:
* [Kubernetes 网络模型][6]
* GKE 网络是如何工作的:[https://www.youtube.com/watch?v=y2bhV81MfKQ][7]
* 上述的有关 `kube-proxy` 上性能的讨论:[https://www.youtube.com/watch?v=4-pawkiazEg][8]
### 我认为网络运很重要
### 我认为网络运很重要
我对 Kubernetes 的所有这些联网软件的感觉是,它们都仍然是非常新的,并且我并不能确定我们(作为一个社区)真的知道如何去把它们运好。这让我作为一个操作者感到很焦虑,因为我真的想让我的网络运行的很好!:) 而且我觉得作为一个组织,运行你自己的 Kubernetes 集群需要相当大的投入,以确保你理解所有的代码片段,这样当它们出现问题时你可以去修复它们。这不是一件坏事,它只是一个事而已。
我现在的计划是,继续不断地学习关于它们都是如何工作的,以尽可能多地减少对我动过的那些部分的担忧。
@@ -192,9 +164,9 @@ kube-proxy 像如下这样为每个目标主机创建一个 `iptables` 规则:
via: https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/
作者:[Julia Evans][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -0,0 +1,198 @@
如何在 Linux 中压缩和解压缩文件
======
![](https://www.ostechnix.com/wp-content/uploads/2018/03/compress-720x340.jpg)
当在备份重要文件和通过网络发送大文件的时候,对文件进行压缩非常有用。请注意,压缩一个已经压缩过的文件会增加额外开销,因此你将会得到一个更大一些的文件。所以,请不要压缩已经压缩过的文件。在 GNU/Linux 中,有许多程序可以用来压缩和解压缩文件。在这篇教程中,我们仅学习其中两个应用程序。
在类 Unix 系统中,最常见的用来压缩文件的程序是:
1. gzip
2. bzip2
### 1. 使用 gzip 程序来压缩和解压缩文件
`gzip` 是一个使用 Lempel-Ziv 编码LZ77算法来压缩和解压缩文件的实用工具。
#### 1.1 压缩文件
如果要压缩一个名为 `ostechnix.txt` 的文件,使之成为 gzip 格式的压缩文件,那么只需运行如下命令:
```
$ gzip ostechnix.txt
```
上面的命令运行结束之后,将会出现一个名为 `ostechnix.txt.gz` 的 gzip 格式压缩文件,代替了原始的 `ostechnix.txt` 文件。
`gzip` 命令还可以有其他用法。一个有趣的例子是,我们可以将一个特定命令的输出通过管道传递,然后作为 `gzip` 程序的输入来创建一个压缩文件。看下面的命令:
```
$ ls -l Downloads/ | gzip > ostechnix.txt.gz
```
上面的命令将会创建一个 gzip 格式的压缩文件,文件的内容为 `Downloads` 目录的目录项。
#### 1.2 压缩文件并将输出写到新文件中(不覆盖原始文件)
默认情况下,`gzip` 程序会压缩给定文件,并以压缩文件替代原始文件。但是,你也可以保留原始文件,并将输出写到标准输出。比如,下面这个命令将会压缩 `ostechnix.txt` 文件,并将输出写入文件 `output.txt.gz`
```
$ gzip -c ostechnix.txt > output.txt.gz
```
类似地,要解压缩一个 `gzip` 格式的压缩文件并指定输出文件的文件名,只需运行:
```
$ gzip -c -d output.txt.gz > ostechnix1.txt
```
上面的命令将会解压缩 `output.txt.gz` 文件,并将输出写入到文件 `ostechnix1.txt` 中。在上面两个例子中,原始文件均不会被删除。
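顺便一提,较新版本的 gzip自 1.6 起)还提供了 `-k`(`--keep`)选项,可以在压缩或解压缩时直接保留原始文件,效果与上面的重定向写法类似:

```
$ gzip -k ostechnix.txt          # 压缩,同时保留原始文件(需要 gzip 1.6 或更新版本)
$ gzip -dk output.txt.gz         # 解压缩,同时保留 .gz 文件
```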
#### 1.3 解压缩文件
如果要解压缩 `ostechnix.txt.gz` 文件,并以原始未压缩版本的文件来代替它,那么只需运行:
```
$ gzip -d ostechnix.txt.gz
```
我们也可以使用 `gunzip` 程序来解压缩文件:
```
$ gunzip ostechnix.txt.gz
```
#### 1.4 在不解压缩的情况下查看压缩文件的内容
如果你想在不解压缩的情况下,使用 `gzip` 程序查看压缩文件的内容,那么可以像下面这样使用 `-c` 选项:
```
$ gunzip -c ostechnix1.txt.gz
```
或者,你也可以像下面这样使用 `zcat` 程序:
```
$ zcat ostechnix.txt.gz
```
你也可以通过管道将输出传递给 `less` 命令,从而一页一页的来查看输出,就像下面这样:
```
$ gunzip -c ostechnix1.txt.gz | less
$ zcat ostechnix.txt.gz | less
```
另外,`zless` 程序也能够实现和上面的管道同样的功能。
```
$ zless ostechnix1.txt.gz
```
#### 1.5 使用 gzip 压缩文件并指定压缩级别
`gzip` 的另外一个显著优点是支持压缩级别。它支持下面给出的 3 个压缩级别:
* **1** 最快 (最差)
* **9** 最慢 (最好)
* **6** 默认级别
要压缩名为 `ostechnix.txt` 的文件,使之成为“最好”压缩级别的 gzip 压缩文件,可以运行:
```
$ gzip -9 ostechnix.txt
```
#### 1.6 连接多个压缩文件
我们也可以把多个需要压缩的文件压缩到同一个文件中。如何实现呢?看下面这个例子。
```
$ gzip -c ostechnix1.txt > output.txt.gz
$ gzip -c ostechnix2.txt >> output.txt.gz
```
上面的两个命令将会压缩文件 `ostechnix1.txt``ostechnix2.txt`,并将输出保存到一个文件 `output.txt.gz` 中。
你可以通过下面其中任何一个命令,在不解压缩的情况下,查看两个文件 `ostechnix1.txt``ostechnix2.txt` 的内容:
```
$ gunzip -c output.txt.gz
$ gunzip -c output.txt
$ zcat output.txt.gz
$ zcat output.txt
```
如果你想了解关于 `gzip` 的更多细节,请参阅它的 man 手册。
```
$ man gzip
```
### 2. 使用 bzip2 程序来压缩和解压缩文件
`bzip2``gzip` 非常类似,但是 `bzip2` 使用的是 Burrows-Wheeler 块排序压缩算法,并使用<ruby>哈夫曼<rt>Huffman</rt></ruby>编码。使用 `bzip2` 压缩的文件以 “.bz2” 扩展结尾。
正如我上面所说的, `bzip2` 的用法和 `gzip` 几乎完全相同。只需在上面的例子中将 `gzip` 换成 `bzip2`,将 `gunzip` 换成 `bunzip2`,将 `zcat` 换成 `bzcat` 即可。
要使用 `bzip2` 压缩一个文件,并以压缩后的文件取而代之,只需运行:
```
$ bzip2 ostechnix.txt
```
如果你不想替换原始文件,那么可以使用 `-c` 选项,并把输出写入到新文件中。
```
$ bzip2 -c ostechnix.txt > output.txt.bz2
```
如果要解压缩文件,则运行:
```
$ bzip2 -d ostechnix.txt.bz2
```
或者,
```
$ bunzip2 ostechnix.txt.bz2
```
如果要在不解压缩的情况下查看一个压缩文件的内容,则运行:
```
$ bunzip2 -c ostechnix.txt.bz2
```
或者,
```
$ bzcat ostechnix.txt.bz2
```
如果你想了解关于 `bzip2` 的更多细节,请参阅它的 man 手册。
```
$ man bzip2
```
### 总结
在这篇教程中,我们学习了 `gzip``bzip2` 程序是什么,并通过 GNU/Linux 下的一些例子学习了如何使用它们来压缩和解压缩文件。接下来,我们将要学习如何在 Linux 中将文件和目录归档。
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/

View File

@@ -0,0 +1,258 @@
理解 ext4 等 Linux 文件系统
======
> 了解 ext4 的历史,包括其与 ext3 和之前的其它文件系统之间的区别。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR)
目前的大部分 Linux 文件系统都默认采用 ext4 文件系统,正如以前的 Linux 发行版默认使用 ext3、ext2 以及更久前的 ext。
对于不熟悉 Linux 或文件系统的朋友而言,你可能不清楚 ext4 相对于上一版本 ext3 带来了什么变化。你可能还想知道在一连串关于替代的文件系统例如 Btrfs、XFS 和 ZFS 不断被发布的情况下ext4 是否仍然能得到进一步的发展。
在一篇文章中,我们不可能讲述文件系统的所有方面,但我们尝试让你尽快了解 Linux 默认文件系统的发展历史,包括它的诞生以及未来发展。
我仔细研究了维基百科里的各种关于 ext 文件系统文章、kernel.org 的 wiki 中关于 ext4 的条目以及结合自己的经验写下这篇文章。
### ext 简史
#### MINIX 文件系统
在有 ext 之前,使用的是 MINIX 文件系统。如果你不熟悉 Linux 历史,那么可以理解为 MINIX 是用于 IBM PC/AT 微型计算机的一个非常小的类 Unix 系统。Andrew Tannenbaum 为了教学的目的而开发了它,并于 1987 年发布了源代码(以印刷版的格式!)。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/ibm_pc_at.jpg?itok=Tfk3hQYB)
*IBM 1980 中期的 PC/AT[MBlairMartin](https://commons.wikimedia.org/wiki/File:IBM_PC_AT.jpg)[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en)*
虽然你可以细读 MINIX 的源代码但实际上它并不是自由开源软件FOSS。出版 Tannenbaum 著作的出版商要求你花 69 美元的许可费来运行 MINIX而这笔费用包含在书籍的费用中。尽管如此在那时来说非常便宜并且 MINIX 的使用得到迅速发展,很快超过了 Tannenbaum 当初使用它来教授操作系统编码的意图。在整个 20 世纪 90 年代,你可以发现 MINIX 的安装在世界各个大学里面非常流行。而此时,年轻的 Linus Torvalds 使用 MINIX 来开发原始 Linux 内核,并于 1991 年首次公布,而后在 1992 年 12 月在 GPL 开源协议下发布。
但是等等,这是一篇以 *文件系统* 为主题的文章不是吗是的MINIX 有自己的文件系统,早期的 Linux 版本依赖于它。跟 MINIX 一样Linux 的文件系统也如同玩具那般小 —— MINIX 文件系统最多能处理 14 个字符的文件名,并且只能处理 64MB 的存储空间。到了 1991 年,一般的硬盘尺寸已经达到了 40-140 MB。很显然Linux 需要一个更好的文件系统。
#### ext
当 Linus 开发出刚起步的 Linux 内核时Rémy Card 从事第一代的 ext 文件系统的开发工作。ext 文件系统在 1992 年首次实现并发布 —— 仅在 Linux 首次发布后的一年!—— ext 解决了 MINIX 文件系统中最糟糕的问题。
1992 年的 ext 使用在 Linux 内核中的新虚拟文件系统VFS抽象层。与之前的 MINIX 文件系统不同的是ext 可以处理高达 2 GB 存储空间并处理 255 个字符的文件名。
但 ext 并没有长时间占统治地位,主要是由于它原始的时间戳(每个文件仅有一个时间戳,而不是今天我们所熟悉的 inode 变更时间、文件访问时间和文件修改时间这三个时间戳。仅仅一年后ext2 就替代了它。
#### ext2
Rémy 很快就意识到 ext 的局限性,所以一年后他设计出 ext2 替代它。当 ext 仍然根植于 “玩具” 操作系统时ext2 从一开始就被设计为一个商业级文件系统,沿用 BSD 的 Berkeley 文件系统的设计原理。
ext2 提供了 GB 级别的最大文件大小和 TB 级别的文件系统大小,使其在 20 世纪 90 年代的地位牢牢巩固在文件系统大联盟中。很快它被广泛地使用,无论是在 Linux 内核中还是最终在 MINIX 中,且利用第三方模块可以使其应用于 MacOS 和 Windows。
但这里仍然有一些问题需要解决ext2 文件系统与 20 世纪 90 年代的大多数文件系统一样,如果在将数据写入到磁盘的时候,系统发生崩溃或断电,则容易发生灾难性的数据损坏。随着时间的推移,由于碎片(单个文件存储在多个位置,物理上其分散在旋转的磁盘上),它们也遭受了严重的性能损失。
尽管存在这些问题,但今天 ext2 还是用在某些特殊的情况下 —— 最常见的是,作为便携式 USB 驱动器的文件系统格式。
#### ext3
1998 年,在 ext2 被采用后的 6 年后Stephen Tweedie 宣布他正在致力于改进 ext2。这成了 ext3并于 2001 年 11 月在 2.4.15 内核版本中被采用到 Linux 内核主线中。
![Packard Bell 计算机][2]
*20 世纪 90 年代中期的 Packard Bell 计算机,[Spacekid][3][CC0][4]*
在大部分情况下ext2 在 Linux 发行版中工作得很好,但像 FAT、FAT32、HFS 和当时的其它文件系统一样 —— 在断电时容易发生灾难性的破坏。如果在将数据写入文件系统的时候发生断电,则可能会将其留在所谓 *不一致* 的状态 —— 事情只完成一半而另一半未完成。这可能导致大量文件丢失或损坏,这些文件与正在保存的文件无关,甚至可能导致整个文件系统无法挂载。
ext3 和 20 世纪 90 年代后期的其它文件系统,如微软的 NTFS使用 *日志* 来解决这个问题。日志是磁盘上的一种特殊的分配区域,其写入被存储在事务中;如果该事务完成磁盘写入,则日志中的数据将提交给文件系统自身。如果系统在该操作提交前崩溃,则重新启动的系统识别其为未完成的事务而将其进行回滚,就像从未发生过一样。这意味着正在处理的文件可能依然会丢失,但文件系统 *本身* 保持一致,且其它所有数据都是安全的。
在使用 ext3 文件系统的 Linux 内核中实现了三个级别的日志记录方式:<ruby>日记<rt>journal</rt></ruby><ruby>顺序<rt>ordered</rt></ruby><ruby>回写<rt>writeback</rt></ruby>
* **日记** 是最低风险模式,在将数据和元数据提交给文件系统之前将其写入日志。这可以保证正在写入的文件与整个文件系统的一致性,但其显著降低了性能。
* **顺序** 是大多数 Linux 发行版默认模式;顺序模式将元数据写入日志而直接将数据提交到文件系统。顾名思义,这里的操作顺序是固定的:首先,元数据提交到日志;其次,数据写入文件系统,然后才将日志中关联的元数据更新到文件系统。这确保了在发生崩溃时,那些与未完整写入相关联的元数据仍在日志中,且文件系统可以在回滚日志时清理那些不完整的写入事务。在顺序模式下,系统崩溃可能导致在崩溃期间文件的错误被主动写入,但文件系统它本身 —— 以及未被主动写入的文件 —— 确保是安全的。
* **回写** 是第三种模式 —— 也是最不安全的日志模式。在回写模式下,像顺序模式一样,元数据会被记录到日志,但数据不会。与顺序模式不同,元数据和数据都可以以任何有利于获得最佳性能的顺序写入。这可以显著提高性能,但安全性低很多。尽管回写模式仍然保证文件系统本身的安全性,但在崩溃或崩溃之前写入的文件很容易丢失或损坏。
跟之前的 ext2 类似ext3 使用 16 位内部寻址。这意味着对于有着 4K 块大小的 ext3 在最大规格为 16 TiB 的文件系统中可以处理的最大文件大小为 2 TiB。
#### ext4
Theodore Ts'o是当时 ext3 主要开发人员)在 2006 年发表的 ext4于两年后在 2.6.28 内核版本中被加入到了 Linux 主线。
Ts'o 将 ext4 描述为一个显著扩展 ext3 但仍然依赖于旧技术的临时技术。他预计 ext4 终将会被真正的下一代文件系统所取代。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dell_precision_380_workstation.jpeg?itok=3EjYXY2i)
*Dell Precision 380 工作站,[Lance Fisher](https://commons.wikimedia.org/wiki/File:Dell_Precision_380_Workstation.jpeg)[CC BY-SA 2.0](https://creativecommons.org/licenses/by-sa/2.0/deed.en)*
ext4 在功能上与 ext3 在功能上非常相似,但支持大文件系统,提高了对碎片的抵抗力,有更高的性能以及更好的时间戳。
### ext4 vs ext3
ext3 和 ext4 有一些非常明确的差别,在这里集中讨论下。
#### 向后兼容性
ext4 特地设计为尽可能地向后兼容 ext3。这不仅允许 ext3 文件系统原地升级到 ext4也允许 ext4 驱动程序以 ext3 模式自动挂载 ext3 文件系统,因此使它无需单独维护两个代码库。
#### 大文件系统
ext3 文件系统使用 32 位寻址,这限制它仅支持 2 TiB 的最大文件大小和 16 TiB 的最大文件系统大小(这是假设块大小为 4 KiB 的情况,一些 ext3 文件系统使用更小的块大小,因此受到进一步的限制)。
ext4 使用 48 位的内部寻址,理论上可以在文件系统上分配高达 16 TiB 大小的文件,其中文件系统大小最高可达 1000000 TiB1 EiB。在早期 ext4 的实现中有些用户空间的程序仍然将其限制为最大大小为 16 TiB 的文件系统,但截至 2011 年e2fsprogs 已经直接支持大于 16 TiB 大小的 ext4 文件系统。例如,红帽企业 Linux 在其合同上仅支持最高 50 TiB 的 ext4 文件系统,并建议 ext4 卷不超过 100 TiB。
#### 分配方式改进
ext4 在将存储块写入磁盘之前对存储块的分配方式进行了大量改进,这可以显著提高读写性能。
##### 区段
<ruby>区段<rt>extent</rt></ruby>是一系列连续的物理块 (最多达 128 MiB假设块大小为 4 KiB可以一次性保留和寻址。使用区段可以减少给定文件所需的 inode 数量,并显著减少碎片并提高写入大文件时的性能。
##### 多块分配
ext3 为每一个新分配的块调用一次块分配器。当多个写入同时打开分配器时很容易导致严重的碎片。然而ext4 使用延迟分配,这允许它合并写入并更好地决定如何为尚未提交的写入分配块。
##### 持久的预分配
在为文件预分配磁盘空间时大部分文件系统必须在创建时将零写入该文件的块中。ext4 允许替代使用 `fallocate()`,它保证了空间的可用性(并试图为它找到连续的空间),而不需要先写入它。对于流和数据库类应用程序,这显著提高了写入数据以及将来读取所写数据的性能。
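在命令行上可以用 `fallocate(1)` 工具直接体验这种预分配(下面是示意命令,文件名随意取):

```
$ fallocate -l 1G bigfile    # 瞬间预留 1 GiB 空间,而无需先写入零
$ ls -lh bigfile             # 查看文件大小
```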
##### 延迟分配
这是一个耐人寻味而有争议性的功能。延迟分配允许 ext4 等待分配将写入数据的实际块直到它准备好将数据提交到磁盘。相比之下即使数据仍然在往写入缓存中写入ext3 也会立即分配块。)
当缓存中的数据累积时,延迟分配块允许文件系统对如何分配块做出更好的选择,降低碎片(写入,以及稍后的读)并显著提升性能。然而不幸的是,它 *增加* 了还没有专门调用 `fsync()` 方法(当程序员想确保数据完全刷新到磁盘时)的程序的数据丢失的可能性。
假设一个程序完全重写了一个文件:
```
fd=open("file", O_TRUNC); write(fd, data); close(fd);
```
使用旧的文件系统,`close(fd);` 足以保证 `file` 中的内容刷新到磁盘。即使严格来说,写不是事务性的,但如果文件关闭后发生崩溃,则丢失数据的风险很小。
如果写入不成功(由于程序上的错误、磁盘上的错误、断电等),文件的原始版本和较新版本都可能丢失数据或损坏。如果其它进程在写入文件时访问文件,则会看到损坏的版本。如果其它进程打开文件并且不希望其内容发生更改 —— 例如,映射到多个正在运行的程序的共享库。这些进程可能会崩溃。
为了避免这些问题,一些程序员完全避免使用 `O_TRUNC`。相反,他们可能会写入一个新文件,关闭它,然后将其重命名为旧文件名:
```
fd=open("newfile"); write(fd, data); close(fd); rename("newfile", "file");
```
*没有* 延迟分配的文件系统下,这足以避免上面列出的潜在的损坏和崩溃问题:因为 `rename()` 是原子操作,所以它不会被崩溃中断;并且运行的程序将继续引用旧的文件。现在 `file` 的未链接版本只要有一个打开的文件文件句柄即可。但是因为 ext4 的延迟分配会导致写入被延迟和重新排序,`rename("newfile", "file")` 可以在 `newfile` 的内容实际写入磁盘内容之前执行,这出现了并行进行再次获得 `file` 坏版本的问题。
为了缓解这种情况Linux 内核(自版本 2.6.30)尝试检测这些常见代码情况并强制立即分配。这会减少但不能防止数据丢失的可能性 —— 并且它对新文件没有任何帮助。如果你是一位开发人员,请注意:保证数据立即写入磁盘的唯一方法是正确调用 `fsync()`
#### 无限制的子目录
ext3 仅限于 32000 个子目录ext4 允许无限数量的子目录。从 2.6.23 内核版本开始ext4 使用 HTree 索引来减少大量子目录的性能损失。
#### 日志校验
ext3 没有对日志进行校验,这给处于内核直接控制之外的磁盘或自带缓存的控制器设备带来了问题。如果控制器或自带缓存的磁盘打乱了写入顺序,则可能会破坏 ext3 的日志事务顺序,从而可能破坏在崩溃期间(或之前一段时间)写入的文件。
理论上,这个问题可以通过写入<ruby>障碍<rt>barrier</rt></ruby>来解决 —— 在挂载文件系统时,你在挂载选项中设置 `barrier=1`,然后设备就会忠实地将 `fsync` 一直执行到底层硬件。但在实践中可以发现,存储设备和控制器经常不遵守写入障碍 —— 以此提高性能(以及与竞争对手相比的性能基准),但这也增加了本应被防止的数据损坏的可能性。
对日志进行校验,允许文件系统在崩溃后第一次挂载时就意识到其中某些条目是无效或无序的。因此,这避免了回滚部分的或无序的日志条目而进一步损坏文件系统的错误 —— 即使部分存储设备谎报或不遵守写入障碍。
#### 快速文件系统检查
在 ext3 下,在 `fsck` 被调用时会检查整个文件系统 —— 包括已删除或空文件。相比之下ext4 标记了 inode 表未分配的块和扇区,从而允许 `fsck` 完全跳过它们。这大大减少了在大多数文件系统上运行 `fsck` 的时间,它实现于内核 2.6.24。
#### 改进的时间戳
ext3 提供粒度为一秒的时间戳。虽然足以满足大多数用途但任务关键型应用程序经常需要更严格的时间控制。ext4 通过提供纳秒级的时间戳,使其可用于那些企业、科学以及任务关键型的应用程序。
ext3 文件系统也没有提供足够的位来存储 2038 年 1 月 18 日以后的日期。ext4 在这里增加了两个位,将 [Unix 纪元][5]扩展了 408 年。如果你在公元 2446 年读到这篇文章,你很有可能已经转移到一个更好的文件系统 —— 如果你还在测量自 1970 年 1 月 1 日 00:00UTC以来的时间这会让我死后得以安眠。
#### 在线碎片整理
ext2 和 ext3 都不直接支持在线碎片整理 —— 即在挂载时会对文件系统进行碎片整理。ext2 有一个包含的实用程序 `e2defrag`,它的名字暗示 —— 它需要在文件系统未挂载时脱机运行。(显然,这对于根文件系统来说非常有问题。)在 ext3 中的情况甚至更糟糕 —— 虽然 ext3 比 ext2 更不容易受到严重碎片的影响,但 ext3 文件系统运行 `e2defrag` 可能会导致灾难性损坏和数据丢失。
尽管 ext3 最初被认为“不受碎片影响”,但对同一文件采用大规模并行写入的过程(例如 BitTorrent清楚地表明情况并非完全如此。一些用户空间的手段和解决方法例如 [Shake][6],以这样或那样的方式解决了这个问题 —— 但它们比真正的、文件系统感知的、内核级的碎片整理过程更慢,并且在各方面都不太令人满意。
ext4 通过 `e4defrag` 解决了这个问题,且是一个在线、内核模式、文件系统感知、块和区段级别的碎片整理实用程序。
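它的基本用法大致如下(`e4defrag` 随 e2fsprogs 一起发布,目录路径仅为示例):

```
$ sudo e4defrag -c /home    # 只评估碎片程度,不做实际整理
$ sudo e4defrag /home       # 对目标目录执行在线碎片整理
```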
### 正在进行的 ext4 开发
ext4正如 Monty Python 中瘟疫感染者曾经说过的那样,“我还没死呢!”虽然它的[主要开发人员][7]认为它只是一个真正的[下一代文件系统][8]的权宜之计,但是在一段时间内,没有任何可能的候选人准备好(由于技术或许可问题)部署为根文件系统。
在未来的 ext4 版本中仍然有一些关键功能要开发,包括元数据校验和、一流的配额支持和大分配块。
#### 元数据校验和
由于 ext4 具有冗余超级块,因此为文件系统校验其中的元数据提供了一种方法,可以自行确定主超级块是否已损坏并需要使用备用块。可以在没有校验和的情况下,从损坏的超级块恢复 —— 但是用户首先需要意识到它已损坏,然后尝试使用备用方法手动挂载文件系统。由于在某些情况下,使用损坏的主超级块安装文件系统读写可能会造成进一步的损坏,即使是经验丰富的用户也无法避免,这也不是一个完美的解决方案!
与 Btrfs 或 ZFS 等下一代文件系统提供的极其强大的每块校验和相比ext4 的元数据校验和的功能非常弱。但它总比没有好。虽然校验 **所有的事情** 都听起来很简单!—— 事实上,将校验和与文件系统连接到一起有一些重大的挑战;请参阅[设计文档][9]了解详细信息。
#### 一流的配额支持
等等,配额?!从 ext2 出现的那天开始我们就有了这些!是的,但它们一直都是事后的添加的东西,而且它们总是犯傻。这里可能不值得详细介绍,但[设计文档][10]列出了配额将从用户空间移动到内核中的方式,并且能够更加正确和高效地执行。
#### 大分配块
随着时间的推移,那些讨厌的存储系统不断变得越来越大。由于一些固态硬盘已经使用 8K 的硬件块大小,因此 ext4 当前 4K 块大小的限制越来越成为制约。较大的存储块可以显著减少碎片并提高性能,代价是增加“松弛”空间(当你只需要块的一部分来存储文件或文件的最后一块时留下的空间)。
你可以在[设计文档][11]中查看详细说明。
### ext4 的实际限制
ext4 是一个健壮、稳定的文件系统。如今大多数人都应该在用它作为根文件系统,但它无法处理所有需求。让我们简单地谈谈你不应该期待的一些事情 —— 现在或可能在未来:
虽然 ext4 可以处理高达 1 EiB 大小(相当于 1,000,000 TiB大小的数据但你 *真的* 不应该尝试这样做。除了能够记住更多块的地址之外,还存在规模上的问题。并且现在 ext4 不会处理(并且可能永远不会)超过 50-100 TiB 的数据。
ext4 也不足以保证数据的完整性。日志记录方面的重大进展早在 ext3 时代就已完成,它并未涵盖数据损坏的许多常见原因。如果数据已经在磁盘上被[破坏][12] —— 由于故障硬件、宇宙射线的影响(是的,真的),或者只是数据随时间衰减 —— ext4 无法检测或修复这种损坏。
基于上面两点ext4 只是一个纯 *文件系统*,而不是存储卷管理器。这意味着,即使你有多个磁盘 —— 也就是有奇偶校验或冗余 —— 理论上可以恢复损坏的数据,但 ext4 无从得知,也无法加以利用。虽然理论上可以在不同的层中分离文件系统和存储卷管理系统而不会丢失自动损坏检测和修复功能,但这不是当前存储系统的设计方式,并且它将给新设计带来重大挑战。
### 备用文件系统
在我们开始之前,提醒一句:要非常小心,没有任何备用的文件系统作为主线内核的一部分而内置和直接支持!
即使一个文件系统是 *安全的*,如果在内核升级期间出现问题,把它用作根文件系统也是非常可怕的。如果你没有准备好从替代介质引导、并在 chroot 里耐心地折腾内核模块、grub 配置和 DKMS……那就不要在一个很重要的系统上对根文件系统冒险。
可能有充分的理由使用你的发行版不直接支持的文件系统 —— 但如果你这样做,我强烈建议你在系统启动并可用后再安装它。(例如,你可能有一个 ext4 根文件系统,但是将大部分数据存储在 ZFS 或 Btrfs 池中。)
#### XFS
XFS 是 Linux 主线中地位最稳固的非 ext 文件系统。它是一个 64 位的日志文件系统,自 2001 年以来内置于 Linux 内核中,为大型文件系统和高度并发性(即大量的进程同时写入文件系统)提供了高性能。
从 RHEL 7 开始XFS 成为 Red Hat Enterprise Linux 的默认文件系统。对于家庭或小型企业用户来说,它仍然有一些缺点 —— 最值得注意的是,重新调整现有 XFS 文件系统是一件非常痛苦的事情,不如创建另一个并复制数据更有意义。
虽然 XFS 是稳定的且是高性能的,但它和 ext4 之间没有足够具体的最终用途差异,因此在它不是默认文件系统的场合(如 RHEL7没有太多理由去使用它除非它解决了 ext4 的某个特定问题,例如大于 50 TiB 容量的文件系统。
XFS 在任何方面都不是 ZFS、Btrfs 甚至 WAFL一个专有的 SAN 文件系统)的“下一代”文件系统。就像 ext4 一样,它应该被视为一种更好的方式的权宜之计。
#### ZFS
ZFS 由 Sun Microsystems 开发,以 zettabyte 命名 —— 相当于 1 万亿 GB —— 因为它理论上可以寻址如此巨大的存储系统。
作为真正的下一代文件系统ZFS 提供卷管理(能够在单个文件系统中处理多个单独的存储设备),块级加密校验和(允许以极高的准确率检测数据损坏),[自动损坏修复][12](其中冗余或奇偶校验存储可用),[快速异步增量复制][13],内联压缩等,[以及更多][14]。
从 Linux 用户的角度来看ZFS 的最大问题是许可证问题。ZFS 许可证是 CDDL 许可证,这是一种与 GPL 冲突的半许可的许可证。关于在 Linux 内核中使用 ZFS 的意义存在很多争议,其争议范围从“它是 GPL 违规”到“它是 CDDL 违规”到“它完全没问题,它还没有在法庭上进行过测试。”最值得注意的是,自 2016 年以来 Canonical 已将 ZFS 代码内联在其默认内核中,而且目前尚无法律挑战。
此时,即使我作为一个非常狂热于 ZFS 的用户,我也不建议将 ZFS 作为 Linux 的根文件系统。如果你想在 Linux 上利用 ZFS 的优势,用 ext4 设置一个小的根文件系统,然后将 ZFS 用在你剩余的存储上,把数据、应用程序以及你喜欢的东西放在它上面 —— 但把 root 分区保留在 ext4 上,直到你的发行版明确支持 ZFS 根目录。
#### Btrfs
Btrfs 是 B-Tree Filesystem 的简称,通常发音为 “butter” —— 由 Chris Mason 于 2007 年在 Oracle 任职期间发布。Btrfs 旨在实现与 ZFS 大部分相同的目标,提供多种设备管理、每块校验、异步复制、内联压缩等,[还有更多][8]。
截至 2018 年Btrfs 相当稳定,可用作标准的单磁盘文件系统,但可能还不应该把它当作卷管理器来依赖。与许多常见用例中的 ext4、XFS 或 ZFS 相比,它存在严重的性能问题,而其下一代功能 —— 复制、多磁盘拓扑和快照管理 —— 的毛病可能非常多,其结果从灾难性的性能降低到实际数据的丢失都有可能。
Btrfs 的维持状态是有争议的SUSE Enterprise Linux 在 2015 年采用它作为默认文件系统,而 Red Hat 于 2017 年宣布它从 RHEL 7.4 开始不再支持 Btrfs。可能值得注意的是该产品支持 Btrfs 部署用作单磁盘文件系统,而不是像 ZFS 中的多磁盘卷管理器,甚至 Synology 在它的存储设备使用 Btrfs但是它在传统 Linux 内核 RAIDmdraid之上分层来管理磁盘。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/ext4-filesystem
作者:[Jim Salter][a]
译者:[HardworkFish](https://github.com/HardworkFish)
校对:[wxy](https://github.com/wxy), [pityonline](https://github.com/pityonline)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-salter
[1]: https://opensource.com/file/391546
[2]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/packard_bell_pc.jpg?itok=VI8dzcwp (Packard Bell computer)
[3]: https://commons.wikimedia.org/wiki/File:Old_packard_bell_pc.jpg
[4]: https://creativecommons.org/publicdomain/zero/1.0/deed.en
[5]: https://en.wikipedia.org/wiki/Unix_time
[6]: https://vleu.net/shake/
[7]: http://www.linux-mag.com/id/7272/
[8]: https://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/
[9]: https://ext4.wiki.kernel.org/index.php/Ext4_Metadata_Checksums
[10]: https://ext4.wiki.kernel.org/index.php/Design_For_1st_Class_Quota_in_Ext4
[11]: https://ext4.wiki.kernel.org/index.php/Design_for_Large_Allocation_Blocks
[12]: https://en.wikipedia.org/wiki/Data_degradation#Visual_example_of_data_degradation
[13]: https://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/
[14]: https://arstechnica.com/information-technology/2014/02/ars-walkthrough-using-the-zfs-next-gen-filesystem-on-linux/

View File

@@ -0,0 +1,107 @@
如何在 Ubuntu 或 Linux Mint 启用 Chromium 硬件加速的视频解码
======
你或许已经注意到了,在 Linux 上使用 Google Chrome 或 Chromium 浏览器在 YouTube 或其它类似网站观看高清视频会增加你的 CPU 使用率,如果你用的是笔记本,电脑会发热而且电池会很快用完。这是因为 Chrome/ChromiumFirefox 也是如此,但是 Firefox 的问题没有办法解决)在 Linux 上不支持硬件加速的视频解码。
这篇文章讲述了如何在 Linux 环境安装带有 VA-API 补丁的 Chromium 开发版,它支持 GPU 加速的视频解码,可以显著减少观看在线高清视频时的 CPU 使用率,这篇教程只适用于 Intel 和 Nvidia 的显卡,我没有 ATI/AMD 的显卡可以试验,也没有使用过这几种显卡。
这是 Chromium 浏览器在 Ubuntu 18.04 中,在没有 GPU 加速视频解码的情况下播放一个 1080p 的 YouTube 视频:
![](https://4.bp.blogspot.com/-KtUQni2PMvE/W3KlJ62yLLI/AAAAAAAABW4/NrNVFaTAkZ8AmwqWwRvWD6czT51ni-R-gCLcBGAs/s1600/chromium-default-no-accel.png)
这是带有 VA-API 补丁的 Chromium 浏览器在 Ubuntu 18.04 中,在带有 GPU 加速视频解码的情况下播放同样的 1080p 的 YouTube 视频:
![](https://4.bp.blogspot.com/-0c-wb4UNhW8/W3KlQBfeFnI/AAAAAAAABW8/WVUAYzM6hA8wRTlCcrPXPMpoXoFVR6b1QCLcBGAs/s1600/chromium-hardware-acceleration-enabled.png)
注意截图中的 CPU 使用率。两张截图都是在我老旧而依然强大的桌面计算机上捕捉的。在我的笔记本电脑上,没有硬件加速的 Chromium 带来更高的 CPU 使用率。
“只需 VA-API 即可在 Linux 启用 VAVDA、VAVEA 和 VAJDA” 这个[补丁][3]在一年多以前就提交给了 Chromium但是它还没有合并。
Chrome 有一个选项可以覆盖软件渲染列表(`#ignore-gpu-blacklist`),但是这个选项不能启用硬件加速的视频解码。启用这个选项以后,你或许会在访问 `chrome://gpu` 时发现这些信息:“Video Decode: Hardware accelerated”,然而这并不意味着它真的可以工作。在 YouTube 打开一个高清视频并用诸如 `htop` 的工具查看 CPU 使用率(这是我在以上截图中用来查看 CPU 使用率的)。因为 GPU 视频解码没有真的被启用,你应该会看到较高的 CPU 使用率。下面有一个部分是关于检查你是否真的在使用硬件加速的视频解码的。
**文中使用的 Chromium 浏览器 Ubuntu 版启用 VA-API 的补丁在[这个地址][1]可以获得**
### 在 Ubuntu 和 Linux Mint 安装和使用带有 VA-API 支持的 Chromium 浏览器
每个人都该知道 Chromium 开发版本没有理想中那么稳定。所以你可能发现 bug它可能会发生崩溃等情况。它现在可能正常运行但是谁知道几次更新以后会发生什么。
还有,如果你想启用 Widevine 支持(这样你才能观看 Netflix 视频和 YouTube 付费视频Chromium dev 分支 PPA 要求你执行一些额外步骤。 如果你想要一些功能,比如同步,也是如此(需要注册 API 密钥还要在你的系统上设置好)。执行这些任务的说明在 [Chromium 开发版本的 PPA][4] 中有详细解释。
对于 Nvidia 显卡vdpau 视频驱动程序需要更新以便显示 vaQuerySurfaceAttributes。所以 Nvidia 需要使用打过补丁的 vdpau-va-driver。值得庆幸的是Chromium-dev PPA 提供了这个打过补丁的包。
带有 VA-API 补丁的 Chromium 也可用于其它 Linux 发行版,在第三方仓库,比如说 [Arch Linux][5](对于 Nvidia 你需要[这个][6]补丁过的 libva-vdpau-driver。如果你不使用 Ubuntu 或 Linux Mint你得自己找那些包。
#### 1、安装带有 VA-API 补丁的 Chromium
有一个带 VA-API 补丁的 Chromium Beta PPA但是它缺少适用于 Ubuntu 18.04 的 vdpau-video。如果你需要你可以使用这个 [Beta PPA][7],而不是我在下面的步骤中使用 [Dev PPA][8],不过如果你使用 Nvidia 显卡,你需要从这个 Dev PPA 中下载安装 vdpau-va-driver并确认 Ubuntu/Linux Mint 不更新这个包(有点复杂,如果你准备根据下面步骤使用 Dev PPA 的话,不需要手动做这些)。
你可以添加 [Chromium 开发分支 PPA][4],并在 Ubuntu 或 Linux Mint及其它基于 Ubuntu 的发行版,如 elementary以及 Ubuntu 或 Linux Mint 的风味版,如 Xubuntu、Kubuntu、Ubuntu MATE、Linux Mint MATE 等等)上安装最新的 Chromium 浏览器开发版:
```
sudo add-apt-repository ppa:saiarcot895/chromium-dev
sudo apt-get update
sudo apt install chromium-browser
```
#### 2、安装 VA-API 驱动
对于 Intel 的显卡,你需要安装 `i965-va-driver` 这个包(它可能早就安装好了)
```
sudo apt install i965-va-driver
```
对于 Nvidia 的显卡(在开源的 Nouveau 驱动和闭源的 Nvidia 驱动上,它应该都有效), 安装 `vdpau-va-driver`
```
sudo apt install vdpau-va-driver
```
#### 3、在 Chromium 启用硬件加速视频选项
复制这串地址,粘贴进 Chromium 的 URL 栏: `chrome://flags/#enable-accelerated-video` (或者在 `chrome://flags` 搜索 `Hardware-accelerated video` )并启用它,然后重启 Chromium 浏览器。
在默认的 Google Chrome / Chromium 版本中,这个选项不可用,但是你可以在启用了 VA-API 的 Chromium 版本中启用它。
#### 4、安装 [h264ify][2] Chrome 扩展
YouTube可能还有其它一些网址也是如此默认使用 VP8 或 VP9 编码解码器,许多 GPU 不支持这种编码解码器的硬件解码。h264ify 会强制 YouTube 使用大多数 GPU 都支持的 H.264 而不是 VP8/VP9。
这个扩展还能阻塞 60fps 的视频,对低性能机器有用。
你可以在视频上右键点击,并且选择 `Stats for nerds` 以查看 YouTube 视频所使用的编码解码器。如果启用了 h264ify 扩展,你可以看到编码解码器是 avc / mp4a如果没有启用编码解码器应该是 vp09 / opus。
### 如何检查 Chromium 是否在使用 GPU 视频解码
在 YouTube 打开一个视频,然后,在 Chromium 打开一个新的标签页并将以下地址输入 URL 栏:`chrome://media-internals`。
`chrome://media-internals` 标签页中,点击视频的 URL为了展开它 往下滚动查看 `Player Properties` 的下面,你应该可以找到 `video_decoder` 属性。如果`video_decoder` 的值是 `GpuVideoDecoder` ,这说明当前在另一个标签页播放的 YouTube 视频正在使用硬件加速的的视频解码。
![](https://4.bp.blogspot.com/-COBJWVT_Y0Q/W3KnG7AeHsI/AAAAAAAABXM/W2XAJA_S0BIHug4eQKTMOdIfXHhgkXhhQCLcBGAs/s1600/chromium-gpuvideodecoder-linux.png)
如果它显示的是 `FFmpegVideoDecoder``VpxVideoDecoder` ,说明加速视频解码无效或者你忘记安装或禁用了 h264ify 这个 Chrome 扩展。
如果无效,你可以通过在命令行运行 `chromium-browser` ,通过查看是否有 VA-API 相关的错误显示出来以调试。你也可以运行 `vainfo`(在 Ubuntu 或 Linux Mint 上安装:`sudo apt install vainfo`)和 `vdpauinfo` (对于 Nvidia在 Ubuntu 或 Linux Mint 上安装:`sudo apt install vdpauinfo`)并且查看是否有显示任何错误。
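下面是一个粗略的排查示意(按关键字过滤只是个简单办法,具体报错信息会因驱动而异):

```
# 从终端启动 Chromium筛选 VA-API / libva 相关的报错
chromium-browser 2>&1 | grep -iE 'vaapi|libva'

# 分别检查 VA-API 和 VDPAU 驱动状态
vainfo
vdpauinfo    # 仅 Nvidia 显卡需要
```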
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/08/how-to-enable-hardware-accelerated.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[GraveAccent](https://github.com/GraveAccent)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://github.com/saiarcot895/chromium-ubuntu-build/tree/master/debian/patches
[2]:https://chrome.google.com/webstore/detail/h264ify/aleakchihdccplidncghkekgioiakgal
[3]:https://chromium-review.googlesource.com/c/chromium/src/+/532294
[4]:https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-dev
[5]:https://aur.archlinux.org/packages/?O=0&SeB=nd&K=chromium+vaapi&outdated=&SB=n&SO=a&PP=50&do_Search=Go
[6]:https://aur.archlinux.org/packages/libva-vdpau-driver-chromium/
[7]:https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-beta
[8]:https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-dev/+packages

View File

@@ -0,0 +1,321 @@
Makefile 及其工作原理
======
> 用这个方便的工具来更有效的运行和编译你的程序。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_liberate%20docs_1109ay.png?itok=xQOLreya)
当你需要在一些源文件改变后运行或更新一个任务时,通常会用到 `make` 工具。`make` 工具需要读取一个 `Makefile`(或 `makefile`)文件,在该文件中定义了一系列需要执行的任务。你可以使用 `make` 来将源代码编译为可执行程序。大部分开源项目会使用 `make` 来实现最终的二进制文件的编译,然后使用 `make install` 命令来执行安装。
本文将通过一些基础和进阶的示例来展示 `make``Makefile` 的使用方法。在开始前,请确保你的系统中安装了 `make`
### 基础示例
依然从打印 “Hello World” 开始。首先创建一个名字为 `myproject` 的目录,目录下新建 `Makefile` 文件,文件内容为:
```
say_hello:
        echo "Hello World"
```
`myproject` 目录下执行 `make`,会有如下输出:
```
$ make
echo "Hello World"
Hello World
```
在上面的例子中“say_hello” 类似于其他编程语言中的函数名。这被称之为<ruby>目标<rt>target</rt></ruby>。在该目标之后的是预置条件或依赖。为了简单起见,我们在这个示例中没有定义预置条件。`echo "Hello World"` 命令被称为<ruby>步骤<rt>recipe</rt></ruby>。这些步骤基于预置条件来实现目标。目标、预置条件和步骤共同构成一个规则。
总结一下,一个典型的规则的语法为:
```
目标: 预置条件
<TAB> 步骤
```
作为示例,目标可以是一个基于预置条件(源代码)的二进制文件。另一方面,预置条件也可以是依赖其他预置条件的目标。
```
final_target: sub_target final_target.c
        Recipe_to_create_final_target
       
sub_target: sub_target.c
        Recipe_to_create_sub_target
```
目标并不要求是一个文件,也可以只是步骤的名字,就如我们的例子中一样。我们称之为“伪目标”。
再回到上面的示例中,当 `make` 被执行时,整条指令 `echo "Hello World"` 会先被显示出来,之后才是真正的执行结果。如果不希望指令本身被打印出来,需要在 `echo` 前添加 `@`
```
say_hello:
        @echo "Hello World"
```
重新运行 `make`,将会只有如下输出:
```
$ make
Hello World
```
接下来在 `Makefile` 中添加如下伪目标:`generate` 和 `clean`
```
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
随后当我们运行 `make` 时,只有 `say_hello` 这个目标被执行。这是因为 `Makefile` 中的第一个目标为默认目标。通常情况下只有默认目标会被调用,这就是为什么你会在大多数项目中看到 `all` 作为第一个目标。`all` 负责调用其它的目标。我们可以通过 `.DEFAULT_GOAL` 这个特殊的伪目标来覆盖掉默认的行为。
`Makefile` 文件开头增加 `.DEFAULT_GOAL`
```
.DEFAULT_GOAL := generate
```
`make` 会将 `generate` 作为默认目标:
```
$ make
Creating empty text files...
touch file-{1..10}.txt
```
顾名思义,`.DEFAULT_GOAL` 伪目标仅能定义一个目标。这就是为什么很多 `Makefile` 会包括 `all` 这个目标,这样可以调用多个目标。
下面删除掉 `.DEFAULT_GOAL`,增加 `all` 目标:
```
all: say_hello generate
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
运行之前,我们再增加一些特殊的伪目标。`.PHONY` 用来定义这些不是文件的目标。`make` 会默认调用这些伪目标下的步骤,而不去检查文件名是否存在或最后修改日期。完整的 `Makefile` 如下:
```
.PHONY: all say_hello generate clean
all: say_hello generate
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
`make` 命令会调用 `say_hello``generate`
```
$ make
Hello World
Creating empty text files...
touch file-{1..10}.txt
```
`clean` 不应该被放入 `all` 中,或者被放入第一个目标中。`clean` 应当在需要清理时手动调用,调用方法为 `make clean`
```
$ make clean
Cleaning up...
rm *.txt
```
现在你应该已经对 `Makefile` 有了基础的了解,接下来我们看一些进阶的示例。
### 进阶示例
#### 变量
在之前的实例中,大部分目标和预置条件是已经固定了的,但在实际项目中,它们通常用变量和模式来代替。
定义变量最简单的方式是使用 `=` 操作符。例如,将命令 `gcc` 赋值给变量 `CC`
```
CC = gcc
```
这被称为递归扩展变量,用于如下所示的规则中:
```
hello: hello.c
    ${CC} hello.c -o hello
```
你可能已经想到了,这些步骤将会在传递给终端时展开为:
```
gcc hello.c -o hello
```
`${CC}``$(CC)` 都能对 `gcc` 进行引用。但如果一个变量尝试将它本身赋值给自己,将会造成死循环。让我们验证一下:
```
CC = gcc
CC = ${CC}
all:
    @echo ${CC}
```
此时运行 `make` 会导致:
```
$ make
Makefile:8: *** Recursive variable 'CC' references itself (eventually).  Stop.
```
为了避免这种情况发生,可以使用 `:=` 操作符(这被称为简单扩展变量)。以下代码不会造成上述问题:
```
CC := gcc
CC := ${CC}
all:
    @echo ${CC}
```
#### 模式和函数
下面的 `Makefile` 使用了变量、模式和函数来实现所有 C 代码的编译。我们来逐行分析下:
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects
.PHONY = all clean
CC = gcc                        # compiler to use
LINKERFLAG = -lm
SRCS := $(wildcard *.c)
BINS := $(SRCS:%.c=%)
all: ${BINS}
%: %.o
        @echo "Checking.."
        ${CC} ${LINKERFLAG} $< -o $@
%.o: %.c
        @echo "Creating object.."
        ${CC} -c $<
clean:
        @echo "Cleaning up..."
        rm -rvf *.o ${BINS}
```
* 以 `#` 开头的行是注释。
* `.PHONY = all clean` 行定义了 `all``clean` 两个伪目标。
* 变量 `LINKERFLAG` 定义了在步骤中 `gcc` 命令需要用到的参数。
* `SRCS := $(wildcard *.c)``$(wildcard pattern)` 是与文件名相关的一个函数。在本示例中,所有 “.c”后缀的文件会被存入 `SRCS` 变量。
* `BINS := $(SRCS:%.c=%)`:这被称为替代引用。本例中,如果 `SRCS` 的值为 `'foo.c bar.c'`,则 `BINS`的值为 `'foo bar'`
* `all: ${BINS}` 行:伪目标 `all` 调用 `${BINS}` 变量中的所有值作为子目标。
* 规则:
```
%: %.o
  @echo "Checking.."
  ${CC} ${LINKERFLAG} $< -o $@
```
下面通过一个示例来理解这条规则。假定 `foo` 是变量 `${BINS}` 中的一个值。`%` 会匹配到 `foo``%`匹配任意一个目标)。下面是规则展开后的内容:
```
foo: foo.o
  @echo "Checking.."
  gcc -lm foo.o -o foo
```
如上所示,`%` 被 `foo` 替换掉了。`$<` 被 `foo.o` 替换掉。`$<`用于匹配预置条件,`$@` 匹配目标。对 `${BINS}` 中的每个值,这条规则都会被调用一遍。
* 规则:
```
%.o: %.c
  @echo "Creating object.."
  ${CC} -c $<
```
之前规则中的每个预置条件在这条规则中都会都被作为一个目标。下面是展开后的内容:
```
foo.o: foo.c
  @echo "Creating object.."
  gcc -c foo.c
```
* 最后,在 `clean` 目标中,所有的二进制文件和编译文件将被删除。
下面是重写后的 `Makefile`,该文件应该被放置在一个有 `foo.c` 文件的目录下:
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects
.PHONY = all clean
CC = gcc                        # compiler to use
LINKERFLAG = -lm
SRCS := foo.c
BINS := foo
all: foo
foo: foo.o
        @echo "Checking.."
        gcc -lm foo.o -o foo
foo.o: foo.c
        @echo "Creating object.."
        gcc -c foo.c
clean:
        @echo "Cleaning up..."
        rm -rvf foo.o foo
```
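在放有 `foo.c` 的目录里运行 `make`,可以看到规则按照依赖顺序依次展开执行。下面的输出是根据上述 `Makefile` 推测的示意,实际输出可能因环境略有差异:

```
$ make
Creating object..
gcc -c foo.c
Checking..
gcc -lm foo.o -o foo
$ make clean
Cleaning up...
rm -rvf foo.o foo
removed 'foo.o'
removed 'foo'
```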
关于 `Makefile` 的更多信息,[GNU Make 手册][1]提供了更完整的说明和实例。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/what-how-makefile
作者:[Sachin Patil][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Zafiry](https://github.com/zafiry)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/psachin
[1]:https://www.gnu.org/software/make/manual/make.pdf

View File

@@ -0,0 +1,122 @@
差异文件diff和补丁文件patch简介
======
> 这篇文章介绍<ruby>差异文件<rt>diff</rt></ruby><ruby>补丁文件<rt>patch</rt></ruby>,以及它们如何在开源项目中使用的例子。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
如果你曾有机会在一个使用分布式开发模型的大型代码库上工作过你就应该听说过类似下面的话“Sue 刚发过来一个<ruby>补丁<rt>patch</rt></ruby>“Rajiv 正在<ruby>签出<rt>checking out</rt></ruby><ruby>差异<rt>diff</rt></ruby>”。可能这些词(补丁、差异文件)对你而言很陌生,而你肯定很想搞懂它们到底指什么。开源软件对上述提到的名词有很大的贡献,作为从 Apache Web 服务器到 Linux 内核等大型项目的开发模型,“基于补丁文件的开发”这一模式贯穿了上述项目的始终。实际上,你可能不知道 Apache 的名字就来自“一系列的代码补丁”LCTT 译注Apache 英文发音和补丁的英文 patch 相似),它们被一一收集起来并针对原来的 [NCSA HTTPd server source code][1] 进行了修订。
你可能认为这只不过是些逸闻,但是一份早期的 [Apache 网站的存档中][2] 声称 Apache 的名字就是来自于最早的“补丁”集合;即“<ruby>打了补丁的<rt>APAtCHy</rt></ruby>”服务器,简化为 Apache。
好了,言归正传,程序员嘴里说的“差异”和“补丁”到底是什么?
首先在这篇文章里我们可以认为这两个术语都指向同一个概念。“diff” 是 ”difference“ 的简写Unix 下的同名工具程序 `diff`剖析了一个或多个文件之间的“差异”。下面我们会看到 `diff` 的例子:
一个“补丁”指的是文件之间的一系列差异,这些差异能被 Unix 的 `diff` 程序应用在源代码树上。我们能使用 `diff` 工具来创建“差异”(或“补丁”),然后使用该工具将它们“打”在一个没有这个补丁的同样的源代码版本上。此外,(我又要开始跑题说些历史轶事了……)“补丁”这个词的来历,真的可以追溯到计算机早期使用打孔纸带的时候:它指用来覆盖在打孔纸带上、以便对软件进行修改的覆盖纸,那个时代打孔纸带就是在计算机处理器上运行的程序。下面来自 [维基页面][3] 的这张图真切地描绘了最初的“打补丁”这个词的出处:
![](https://opensource.com/sites/default/files/uploads/360px-harvard_mark_i_program_tape.agr_.jpg)
现在你对补丁和差异就了一个基本的概念,让我们来看看软件开发者是怎么使用这些工具的。如果你还没有使用过类似于 [Git][4] 或 [subversion][5] 这样的源代码版本控制工具的话,我将会一步步展示最流行的软件项目是怎么使用它们的。如果你将一个软件的生命周期看成是一条时间线的话,你就能看见这个软件的点滴变化,比如在何时源代码加上了一个功能,在何时源代码修复了一个功能缺陷。我们称这些改变的点为“<ruby>提交<rt>commit</rt></ruby>”,“提交”这个词被当今最流行的源代码版本管理工具 Git 所使用,当你想检查在一个提交前后的代码变化的话,(或者在许多个提交之间的代码变化),你都可以使用工具来观察文件差异。
如果你同样在使用 Git 开发软件的话,你可以在你的本地开发环境做些希望交给别的开发者的提交,以添加到他们的源代码树中。为了把你的提交分享给别的开发者,一个方法就是创建一个你本地文件的差异文件,然后将这个“补丁”发送给和你工作在同一个源代码树的别的开发者。别的开发者在“打”了你的补丁之后,就能看到你的代码树上的变化。
### Linux、Git 和 GitHub
这种分享补丁的开发模型正是现今 Linux 内核社区如何处理内核修改提议而采用的模型。如果你有机会浏览任何一个主流的 Linux 内核邮件列表 —— 主要是 [LKML][6],也包括 [linux-containers][7]、[fs-devel][8]、[Netdev][9] 等等,你能看到很多开发者会贴出他们想让其他内核开发者审核、测试或者合入 Linux 官方 Git 代码树某个位置的补丁。当然,讨论 Git 不在这篇文章范围之内Git 是由 Linus Torvalds 开发的源代码控制系统,它支持分布式开发模型以及允许独立于主要代码仓库的补丁包,这些补丁包能被推送或拉取到不同的源代码树上,并遵守这些代码树各自的开发流程。)
在继续我们的话题之前,我们当然不能忽略和补丁和差异这个概念相关的最流行的服务:[GitHub][10]。从它的名字就能猜想出 GitHub 是基于 Git 的,而且它还围绕着 Git 对分布式开源代码开发模型提供了基于 Web 和 API 的工作流管理。LCTT 译注:即<ruby>拉取请求<rt>Pull Request</rt></ruby>)。在 GitHub 上,分享补丁的方式不是像 Linux 内核社区那样通过邮件列表,而是通过创建一个 **拉取请求** 。当你提交你自己的源代码树的改动时你能通过创建一个针对软件项目的共享仓库的“拉取请求”来分享你的代码改动LCTT 译注:即核心开发者维护一个主仓库,开发者去“<ruby>复刻<rt>fork</rt></ruby>”这个仓库待各自的提交后再创建针对这个主仓库的拉取请求所有的拉取请求由主仓库的核心开发者批准后才能合入主代码库。GitHub 被当今很多活跃的开源社区所采用,如 [Kubernetes][11]、[Docker][12]、[容器网络接口 (CNI)][13]、[Istio][14] 等等。在 GitHub 的世界里,用户会倾向于使用基于 Web 页面的方式来审核一个拉取请求里的补丁或差异,你也可以直接访问原始的补丁并在命令行上直接使用它们。
### 该说点干货了
我们前面已经讲了在流行的开源社区里是怎么应用补丁和差异的,现在看看一些例子。
第一个例子包括一个源代码树的两个不同副本,其中一个有代码改动,我们想用 `diff` 来看看这些改动是什么。这个例子里,我们想看的是“<ruby>合并格式<rt>unified</rt></ruby>”的补丁,这是现在软件开发世界里最通用的格式。如果想知道更详细参数的用法以及如何生成差异文件,请参考 `diff` 手册。原始的代码在 `sources-orig` 目录,而改动后的代码在 `sources-fixed` 目录。如果要在你的命令行上用“合并格式”来展示补丁请运行如下命令。LCTT 译注:参数 `-N` 代表如果比较的文件不存在,则认为是个空文件, `-a` 代表将所有文件都作为文本文件对待,`-u` 代表使用合并格式并输出上下文,`-r` 代表递归比较目录)
```
$ diff -Naur sources-orig/ sources-fixed/
```
……下面是 `diff` 命令的输出:
```
diff -Naur sources-orig/officespace/interest.go sources-fixed/officespace/interest.go
--- sources-orig/officespace/interest.go        2018-08-10 16:39:11.000000000 -0400
+++ sources-fixed/officespace/interest.go       2018-08-10 16:39:40.000000000 -0400
@@ -11,15 +11,13 @@
   InterestRate float64
 }
+// compute the rounded interest for a transaction
 func computeInterest(acct *Account, t Transaction) float64 {
   interest := t.Amount * t.InterestRate
   roundedInterest := math.Floor(interest*100) / 100.0
   remainingInterest := interest - roundedInterest
-  // a little extra..
-  remainingInterest *= 1000
-
   // Save the remaining interest into an account we control:
   acct.Balance = acct.Balance + remainingInterest
```
最开始几行 `diff` 命令的输出可以这样解释:三个 `---` 显示了原来文件的名字任何在原文件LCTT 译注:不是源文件)里存在而在新文件里不存在的行将会用前缀 `-`,用来表示这些行被从源代码里“减去”了。而 `+++` 表示的则相反:在新文件里被加上的行会被放上前缀 `+`,表示这是在新文件里被“加上”的行。补丁文件中的每一个补丁“块”(用 `@@` 作为前缀的部分都有上下文的行号这能帮助补丁工具或其它处理器知道在代码的哪里应用这个补丁块。你能看到我们已经修改了“Office Space”这部电影里提到的那个函数移除了三行并加上了一行代码注释电影里那个有点贪心的工程师可是偷偷的在计算利息的函数里加了点“料”哦。LCTT 译注:剧情详情请见电影 https://movie.douban.com/subject/1296424/
如果你想找人来测试你的代码改动,你可以将差异保存到一个补丁里:
```
$ diff -Naur sources-orig/ sources-fixed/ >myfixes.patch
```
现在你有补丁 `myfixes.patch` 了,你能把它分享给别的开发者,他们可以将这个补丁打在他们自己的源代码树上从而得到和你一样的代码并测试他们。如果一个开发者的当前工作目录就是他的源代码树的根的话,他可以用下面的命令来打补丁:
```
$ patch -p1 < ../myfixes.patch
patching file officespace/interest.go
```
现在这个开发者的源代码树已经打好补丁并准备好构建和测试文件的修改了。那么如果这个开发者在打补丁之前已经改动过了怎么办只要这些改动没有直接冲突LCTT 译注比如改在同一行上补丁工具就能自动的合并代码的改动。例如下面的interest.go 文件,它有其它几处改动,然后它想打上 `myfixes.patch` 这个补丁:
```
$ patch -p1 < ../myfixes.patch
patching file officespace/interest.go
Hunk #1 succeeded at 26 (offset 15 lines).
```
在这个例子中,补丁警告说代码改动并不在文件原来的地方,而是偏移了 15 行。如果你的文件改动得很厉害,补丁可能干脆说找不到要应用的地方,还好补丁程序提供了打开“模糊”匹配的选项(这个选项在文档里有预置的警告信息,对其讲解已经超出了本文的范围)。
如果你使用 Git 或者 GitHub 的话你可能不会直接使用补丁或差异。Git 已经内置了这些功能你能使用这些功能和共享一个源代码树的其他开发者交互拉取或合并代码。Git 中一个比较相近的功能是可以使用 `git diff` 来对你的本地代码树生成全局差异,又或者对你的任意两个“引用”(可能是一个代表提交的数字,或一个标记或分支的名字,等等)之间生成补丁。你甚至能简单地将 `git diff` 的输出重定向到一个文件里(这个文件必须严格符合将要使用它的程序的输入要求),然后将这个文件交给一个并不使用 Git 的开发者应用到他的代码上。当然GitHub 把这些功能放到了 Web 上,你能直接在 Web 页面上查看一个拉取请求的文件变动。在 Web 上你能看到所展示的合并差异GitHub 还允许你将这些代码改动下载为原始的补丁文件。
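例如,下面几条命令演示了用 Git 生成补丁文件、再用 `patch` 应用的流程(提交区间只是示例):

```
$ git diff > myfixes.patch                 # 导出工作区相对 HEAD 的全部改动
$ git diff HEAD~3 HEAD > myfixes.patch     # 或者导出最近 3 个提交之间的差异
$ patch -p1 < myfixes.patch                # 在另一份代码副本的根目录应用补丁
```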
### 总结
好了,你已经学到了”差异“和”补丁“是什么,以及在 Unix/Linux 上怎么使用命令行工具和它们交互。除非你还在像 Linux 内核开发这样的项目中工作而使用完全基于补丁文件的开发方式,你应该会主要通过你的源代码控制系统(如 Git来使用补丁。但熟悉像 GitHub 这样的高级别工具的技术背景和技术底层对你的工作也是大有裨益的。谁知道会不会有一天你需要和一个来自 Linux 世界邮件列表的补丁包打交道呢?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/diffs-patches
作者:[Phil Estes][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[David Chen](https://github.com/DavidChenLiang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/estesp
[1]:https://github.com/TooDumbForAName/ncsa-httpd
[2]:https://web.archive.org/web/19970615081902/http:/www.apache.org/info.html
[3]:https://en.wikipedia.org/wiki/Patch_(computing)
[4]:https://git-scm.com/
[5]:https://subversion.apache.org/
[6]:https://lkml.org/
[7]:https://lists.linuxfoundation.org/pipermail/containers/
[8]:https://patchwork.kernel.org/project/linux-fsdevel/list/
[9]:https://www.spinics.net/lists/netdev/
[10]:https://github.com/
[11]:https://kubernetes.io/
[12]:https://www.docker.com/
[13]:https://github.com/containernetworking/cni
[14]:https://istio.io/

View File

@@ -1,6 +1,7 @@
如何在 Ubuntu 18.04 上更新固件
======
通常Ubuntu 和其他 Linux 中的默认软件中心会处理系统固件的更新。但是如果你遇到了错误,你可以使用 `fwupd` 命令行工具更新系统的固件。
我使用 [Dell XPS 13 Ubuntu 版本][1]作为我的主要操作系统。我全新[安装了 Ubuntu 18.04][2],我对硬件兼容性感到满意。蓝牙、外置 USB 耳机和扬声器、多显示器,一切都开箱即用。
@@ -14,7 +15,7 @@
错误消息是:
> Unable to update “Thunderbolt NVM for Xps Notebook 9360”: could not detect device after update: timed out while waiting for device
在这篇文章中,我将向你展示如何在 [Ubuntu][6] 中更新系统固件。
@@ -22,42 +23,42 @@
![How to update firmware in Ubuntu][7]
有一件事你应该知道 GNOME Software即 Ubuntu 18.04 中的软件中心)也能够更新固件。但是在由于某种原因失败的情况下,你可以使用命令行工具 `fwupd`
[fwupd][8] 是一个开源守护进程,可以处理基于 Linux 的系统中的固件升级。它由 GNOME 开发人员 [Richard Hughes][9] 创建。戴尔的开发人员也为这一开源工具的开发做出了贡献。
基本上,它使用 LVFS —— <ruby>Linux 供应商固件服务<rt>Linux Vendor Firmware Service</rt></ruby>。硬件供应商将可再发行固件上传到 LVFS 站点,并且多亏 `fwupd`,你可以从操作系统内部升级这些固件。`fwupd` 得到了 Ubuntu 和 Fedora 等主要 Linux 发行版的支持。
首先打开终端并更新系统:
```
sudo apt update && sudo apt upgrade -y
```
之后,你可以逐个使用以下命令来启动守护程序,刷新可用固件更新列表并安装固件更新。
```
sudo service fwupd start
```
守护进程运行后,检查是否有可用的固件更新。
```
sudo fwupdmgr refresh
```
输出应如下所示:
```
Fetching metadata https://cdn.fwupd.org/downloads/firmware.xml.gz
Downloading… [****************************]
Fetching signature https://cdn.fwupd.org/downloads/firmware.xml.gz.asc
```
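在运行更新之前,也可以先列出 `fwupd` 检测到的设备,确认哪些硬件受支持(`get-devices` 是 `fwupdmgr` 的标准子命令):

```
sudo fwupdmgr get-devices
```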
在此之后,运行固件更新:
```
sudo fwupdmgr update
```
固件更新的输出可能与此类似:
@@ -67,8 +68,8 @@ No upgrades for XPS 13 9360 TPM 2.0, current is 1.3.1.0: 1.3.1.0=same
No upgrades for XPS 13 9360 System Firmware, current is 0.2.8.1: 0.2.8.1=same, 0.2.7.1=older, 0.2.6.2=older, 0.2.5.1=older, 0.2.4.2=older, 0.2.3.1=older, 0.2.2.1=older, 0.2.1.0=older, 0.1.3.7=older, 0.1.3.5=older, 0.1.3.2=older, 0.1.2.3=older
Downloading 21.00 for XPS13 9360 Thunderbolt Controller…
Updating 21.00 on XPS13 9360 Thunderbolt Controller…
Decompressing… [***********]
Authenticating… [***********]
Restarting device… [***********]
```
@ -83,7 +84,7 @@ via: https://itsfoss.com/update-firmware-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,6 +1,8 @@
8 个用于<ruby>业余项目<rt>side projects</rt></ruby>的优秀 Python 库
8 个用于业余项目的优秀 Python 库
======
> 这些库可以使你更容易构架个人项目。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd)
在 Python/Django 的世界里有这样一个谚语:为语言而来,为社区而留。对绝大多数人来说的确是这样的,但是,还有一件事情使得我们一直停留在 Python 的世界里,不愿离开,那就是我们可以很容易地利用一顿午餐或晚上几个小时的时间,把一个想法快速地实现出来。
@ -9,7 +11,7 @@
### 在数据库中即时保存数据Dataset
当我们想要在不知道最终数据库表长什么样的情况下,快速收集数据并保存到数据库中的时候,[Dataset][1] 库将是我们的最佳选择。Dataset 库有一个简单但功能强大的 API因此我们可以很容易的把数据保存下来之后再进行排序
当我们想要在不知道最终数据库表长什么样的情况下,快速收集数据并保存到数据库中的时候,[Dataset][1] 库将是我们的最佳选择。Dataset 库有一个简单但功能强大的 API因此我们可以很容易的把数据保存下来之后再进行整理
Dataset 建立在 SQLAlchemy 之上,所以如果需要对它进行扩展,你会感到非常熟悉。使用 Django 内建的 [inspectdb][2] 管理命令可以很容易地把底层数据库模型导入 Django 中,这使得和现有数据库一同工作不会出现任何障碍。
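下面是一个极简的示意草图(假设已通过 `pip install dataset` 安装,表名与数据均为虚构的演示用例):

```
import dataset

# 连接数据库(这里使用内存中的 SQLite仅作演示
db = dataset.connect('sqlite:///:memory:')

# 表和列会在首次插入时自动创建,无需预先定义模式
table = db['person']
table.insert(dict(name='Ada', age=36))
table.insert(dict(name='Grace', age=45, city='Washington'))  # 新列自动添加

# 查询同样简单
print(table.find_one(name='Ada'))
```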
@ -35,13 +37,13 @@ Dataset 建立在 SQLAlchemy 之上,所以如果需要对它进行扩展,你
### 把 CSV 文件转换到 API 中DataSette
[DataSette][8] 是一个神奇的工具,它可以很容易地把 CSV 文件转换进全特性只读 REST JSON API同时不要把它和 Dataset 库混淆。Datasette 有许多特性,包括创建图表和 geo用于创建交互式图),并且很容易通过容器或第三方网络主机进行部署。
[DataSette][8] 是一个神奇的工具,它可以很容易地把 CSV 文件转换为全特性的只读 REST JSON API同时不要把它和 Dataset 库混淆。Datasette 有许多特性,包括创建图表和 geo用于创建交互式图),并且很容易通过容器或第三方网络主机进行部署。
### 处理环境变量等Envparse
如果你不想在源代码中保存 API 密钥、数据库凭证或其他敏感信息,那么你便需要解析环境变量,这时候 [envparse][9] 是最好的选择。Envparse 能够处理环境变量、ENV 文件、变量类型,甚至还可以进行预处理和后处理(例如,你想要确保变量名总是大写或小写的)
如果你不想在源代码中保存 API 密钥、数据库凭证或其他敏感信息,那么你便需要解析环境变量,这时候 [envparse][9] 是最好的选择。Envparse 能够处理环境变量、ENV 文件、变量类型,甚至还可以进行预处理和后处理(例如,你想要确保变量名总是大写或小写的)
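下面是一个示意性的用法草图(假设已通过 `pip install envparse` 安装,变量名仅为演示用的假设):

```
from envparse import env

# 如果项目根目录有 .env 文件,可以先调用 env.read_envfile() 读取它

# 解析环境变量,同时完成类型转换并提供默认值
DEBUG = env.bool('DEBUG', default=False)
PORT = env.int('PORT', default=8000)
SECRET_KEY = env.str('SECRET_KEY', default='dev-only-secret')

print(DEBUG, PORT, SECRET_KEY)
```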
有什么你最喜欢的用于<ruby>业余项目<rt>side projects</rt></ruby>的 Python 库不在这个列表中吗?请在评论中和我们分享。
有什么你最喜欢的用于业余项目的 Python 库不在这个列表中吗?请在评论中和我们分享。
--------------------------------------------------------------------------------
@ -50,7 +52,7 @@ via: https://opensource.com/article/18/9/python-libraries-side-projects
作者:[Jeff Triplett][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,67 @@
DevOps: The consequences of blame
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga)
Merriam-Webster defines "blame" as both a verb and a noun. As a verb, it means "to find fault with or to hold responsible." As a noun, it means "an expression of disapproval or responsibility for something believed to deserve censure."
Either way, blame isn't a pleasant thing. It can create feelings of fear and shame, foster power imbalances, and cause us to devalue others.
Just think of what it felt like the last time you were yelled at or accused of something. Conversely, consider the opposite of blame: Praise, flattery, and approval. How does it feel to be complimented or commended for a job well done?
You may be wondering what all this talk about blame has to do with DevOps. Read on:
### DevOps and blame
The three pillars of DevOps are flow, feedback, and continuous improvement. How can an organization or a team improve if its members are focused on finding someone to blame? For a DevOps culture to succeed, blame must be eliminated.
For example, suppose your product has a bug or experiences an outage. If your organization's leaders react to this by looking for someone to blame, there's little chance for feedback on how to improve. Look at how blame is flowing in your organization and work to remove it. Strive for blameless post-mortems and move away from _root-cause analysis_, which tends to focus on assigning blame. In today's complex business infrastructure, many factors can contribute to bugs and other problems. Successful DevOps teams practice post-incident reviews to examine the bigger picture when things go wrong.
### Consequences of blame
DevOps is about creating a culture of collaboration and community. This is not possible in a culture of blame. Because blame does not correct behavior, there is no continuous learning. What _is_ learned is how to avoid blame—so instead of solving problems, team members focus on how they can avoid being blamed for them.
What about accountability? Avoiding blame does not mean avoiding accountability or consequences. Here are some tips to create an environment in which people are held accountable without blame:
* When mistakes are made, focus on what steps you can take to avoid making the same mistake in the future. What did you learn, and how can you apply that knowledge to improving things?
* When something goes wrong, people feel stress. Work toward eliminating or reducing that stress. Avoid yelling and putting additional pressure on people.
* Accept that mistakes will happen. Nobody—and nothing—is perfect.
* When corrective actions are necessary, provide them privately, not publicly.
As a child, I loved reading the [Family Circus][1] comic strip, especially the ones featuring “Not Me.” Not Me frequently appeared with “Ida Know” and “Nobody” when Mom and Dad asked an accusatory question. Why did the kids in Family Circus blame Not Me? Look no further than the parents' angry, frustrated expressions. Like the kids in the comic strip, we quickly learn to assign blame or look for faults in others because blaming ourselves is too painful.
In his book, [_Thinking, Fast and Slow_][2], author Daniel Kahneman points out that most of us spend as little time as possible thinking—after all, thinking is hard. To make things easier, we learn from previous experiences, which in turn creates biases. If blame is part of that equation, it will be included in our bias: _“The last time a question was asked in a meeting and I took responsibility, I was chewed out in front of all my co-workers. I won't do that again.”_
When something goes wrong, we want answers and accountability. Uncertainty is scary and leads to stress; we prefer predictable scenarios. This drives us to look for root causes, which often leads to blame.
But what if, instead of assigning blame, we turned the situation into something constructive and helpful—an opportunity for learning? It isn't always easy, but working to eliminate blame will build a stronger DevOps team and a happier, more productive company.
Next time you find yourself starting to look for someone to blame, think of this poem by Rupi Kaur:
_“It takes grace_
_To remain kind_
_In cruel situations”_
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/consequences-blame-your-devops-team
作者:[Dawn Parzych][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dawnparzych
[1]: http://familycircus.com/comics/september-1-2012/
[2]: https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555

View File

@ -0,0 +1,79 @@
What do open source and cooking have in common?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/waffles-recipe-eggs-cooking-mix.png?itok=Fp06VOBx)
What's a fun way to promote the principles of free software without actually coding? Here's an idea: open source cooking. For the past eight years, this is what we've been doing in Munich.
The idea of _open source cooking_ grew out of our regular open source meetups because we realized that cooking and free software have a lot in common.
### Cooking together
The [Munich Open Source Meetings][1] is a series of recurring Friday night events that was born in [Café Netzwerk][2] in July 2009. The meetings help provide a way for open source project members and enthusiasts to get to know each other. Our motto is: “Every fourth Friday for free software.” In addition to adding some weekend workshops, we soon introduced other side events, including white sausage breakfast, sauna, and cooking.
The first official _Open Source Cooking_ meetup was admittedly rather chaotic, but we've improved our routine over the past eight years and 15 events, and we've mastered the art of cooking delicious food for 25-30 people.
Looking back at all those evenings, the similarities between cooking together and working together in open source communities have become clearer.
### FLOSS principles at play
Here are a few ways cooking together is like working together on open source projects:
* We enjoy collaborating and working toward a result we share.
* We've become a community.
* As we share a common interest and enthusiasm, we learn more about ourselves, each other, and what we're working on together.
* Mistakes happen. We learn from them and share our knowledge to our mutual benefit, so hopefully we avoid repeating the same mistakes.
* Everyone contributes what they're best at, as everyone has something they're better at than someone else.
* We motivate others to contribute and join us.
* Coordination is key, but a bit chaotic.
* Everyone benefits from the results!
### Smells like open source
Like any successful open source-related meetup, open source cooking requires some coordination and structure. Ahead of the event, we run a _call for recipes_ in which all participants can vote. Rather than throwing a pizza into a microwave, we want to create something delicious and tasty, and so far we've had Japanese, Mexican, Hungarian, and Indian food, just to name a few.
Like in real life, cooking together requires having respect and mutual understanding for each other, so we always try to have dishes for vegans, vegetarians, and people with allergies and food preferences. A little beta test at home can be helpful (and fun!) when preparing for the big release.
Scalability matters, and shopping for our “build requirements” at the grocery store easily can eat up three hours. We use a spreadsheet (LibreOffice Calc, naturally) for calculating ingredient requirements and costs.
For every dinner course we have a “package maintainer” working with volunteers to make the menu in time and to find unconventional solutions to problems that arise.
Not everyone is a cook by profession, but with a little bit of help and a good distribution of tasks and responsibilities, it's rather easy to parallelize things — at some point, 18 kg of tomatoes and 100 eggs really don't worry you anymore, believe me! The only real scalability limit is the stove with its four hotplates, so maybe it's time to invest in an infrastructure budget.
Time-based releasing, on the other hand, isn't working as reliably as it should, as we usually serve the main dish at a rather “flexible” time between 21:30 and 01:30, but that's not a release blocker, either.
And, as in many open source projects, cooking documentation has room for improvement. Cleanup tasks, such as washing the dishes, surely can be optimized further, too.
### Future flavor releases
Some of our future ideas include:
* cooking in a foreign country,
* finally buying and cooking that large 700 € pumpkin, and
* finding a grocery store that donates a percentage of our purchases to a good cause.
The last item is also an important aspect about the free software movement: Always remember there are people who are not living on the sunny side, who do not have the same access to resources, and who are otherwise struggling. How can the open nature of what we're doing help them?
With all that in mind, I am looking forward to the next Open Source Cooking meetup. If reading about them makes you hungry and you'd like to run your own event, we'd love to see you adapt our idea or even fork it. And we'd love to have you join us in a meetup, and perhaps even do some mentoring and QA.
Article originally appeared on [blog.effenberger.org][3]. Reprinted with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/open-source-cooking
作者:[Florian Effenberger][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/floeff
[1]: https://www.opensourcetreffen.de/
[2]: http://www.cafe-netzwerk.de/
[3]: https://blog.effenberger.org/2018/05/28/what-do-open-source-and-cooking-have-in-common/

View File

@ -1,3 +1,5 @@
Translating by DavidChen
How do groups work on Linux?
============================================================

View File

@ -1,3 +1,4 @@
imquanquan Translating
Trying Other Go Versions
============================================================
@ -109,4 +110,4 @@ via: https://pocketgophers.com/trying-other-versions/
[8]:https://pocketgophers.com/trying-other-versions/#trying-a-specific-release
[9]:https://pocketgophers.com/guide-to-json/
[10]:https://pocketgophers.com/trying-other-versions/#trying-any-release
[11]:https://pocketgophers.com/trying-other-versions/#trying-a-source-build-e-g-tip
[11]:https://pocketgophers.com/trying-other-versions/#trying-a-source-build-e-g-tip

View File

@ -1,3 +1,4 @@
Zafiry translating...
Writing eBPF tracing tools in Rust
============================================================

View File

@ -1,3 +1,5 @@
translating---geekpi
How to build a URL shortener with Apache
======

View File

@ -1,3 +1,5 @@
translating---geekpi
Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution
======
![](https://www.ostechnix.com/wp-content/uploads/2018/08/distrochooser-logo-720x340.png)

View File

@ -0,0 +1,168 @@
How To Quickly Serve Files And Folders Over HTTP In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/08/http-720x340.png)
Today, I came across a whole bunch of methods to share a single file or an entire directory with other systems in your local area network via a web browser. I tested all of them on my Ubuntu test machine, and everything worked just fine as described below. If you have ever wondered how to easily and quickly serve files and folders over HTTP in Unix-like operating systems, one of the following methods will definitely help.
### Serve Files And Folders Over HTTP In Linux
**Disclaimer:** All the methods given here are meant to be used within a secure local area network. Since these methods don't have any security mechanism, it is **not recommended to use them in production**. You have been warned!
#### Method 1 Using simpleHTTPserver (Python)
We have already written a brief guide to set up a simple HTTP server to share files and directories instantly in the following link. If you have a system with Python installed, this method is quite handy.
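For instance, with Python 3 installed, running `python3 -m http.server 8000` inside the directory you want to share is usually all it takes. The following is a minimal, equivalent sketch (the port number is just an example):

```
# a minimal sketch using only the Python 3 standard library
from http.server import HTTPServer, SimpleHTTPRequestHandler

# serve the current working directory at http://<ip-address>:8000/
HTTPServer(('', 8000), SimpleHTTPRequestHandler).serve_forever()
```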
#### Method 2 Using Quickserve (Python)
This method is specifically for Arch Linux and its variants. Check the following link for more details.
#### Method 3 Using Ruby
In this method, we use Ruby to serve files and folders over HTTP in Unix-like systems. Install Ruby and Rails as described in the following link.
Once Ruby is installed, go to the directory that you want to share over the network, for example `ostechnix`:
```
$ cd ostechnix
```
And, run the following command:
```
$ ruby -run -ehttpd . -p8000
[2018-08-10 16:02:55] INFO WEBrick 1.4.2
[2018-08-10 16:02:55] INFO ruby 2.5.1 (2018-03-29) [x86_64-linux]
[2018-08-10 16:02:55] INFO WEBrick::HTTPServer#start: pid=5859 port=8000
```
Make sure port 8000 is open in your router or firewall. If the port is already being used by some other service, use a different port.
You can now access the contents of this folder from any remote system using the URL **http://<IP-address>:8000/**.
![](https://www.ostechnix.com/wp-content/uploads/2018/08/ruby-http-server.png)
To stop sharing press **CTRL+C**.
#### Method 4 Using Http-server (NodeJS)
[**Http-server**][1] is a simple, production-ready command-line HTTP server written in NodeJS. It requires zero configuration and can be used to instantly share files and directories via a web browser.
Install NodeJS as described below.
Once NodeJS is installed, run the following command to install http-server.
```
$ npm install -g http-server
```
Now, go to any directory and share its contents over HTTP as shown below.
```
$ cd ostechnix
$ http-server -p 8000
Starting up http-server, serving ./
Available on:
http://127.0.0.1:8000
http://192.168.225.24:8000
http://192.168.225.20:8000
Hit CTRL-C to stop the server
```
Now, you can access the contents of this directory from local or remote systems on the network using the URL **http://<ip-address>:8000**.
![](http://www.ostechnix.com/wp-content/uploads/2018/08/nodejs-http-server.png)
To stop sharing, press **CTRL+C**.
#### Method 5 Using Miniserve (Rust)
[**Miniserve**][2] is yet another command line utility that allows you to quickly serve files over HTTP. It is a very fast, easy-to-use, cross-platform utility written in the **Rust** programming language. Unlike the above utilities/methods, it provides authentication support, so you can set up a username and password for the shares.
Install Rust in your Linux system as described in the following link.
After installing Rust, run the following command to install miniserve:
```
$ cargo install miniserve
```
Alternatively, you can download the binaries from [**the releases page**][3] and make it executable.
```
$ chmod +x miniserve-linux
```
And then you can run it using the following command (assuming the miniserve binary file was downloaded to the current working directory):
```
$ ./miniserve-linux <path-to-share>
```
**Usage**
To serve a directory:
```
$ miniserve <path-to-directory>
```
**Example:**
```
$ miniserve /home/sk/ostechnix/
miniserve v0.2.0
Serving path /home/sk/ostechnix at http://[::]:8080, http://localhost:8080
Quit by pressing CTRL-C
```
Now, you can access the share from the local system itself using the URL **<http://localhost:8080>** and/or from a remote system with the URL **http://<ip-address>:8080**.
To serve a single file:
```
$ miniserve <path-to-file>
```
**Example:**
```
$ miniserve ostechnix/file.txt
```
Serve file/folder with username and password:
```
$ miniserve --auth joe:123 <path-to-share>
```
Bind to multiple interfaces:
```
$ miniserve -i 192.168.225.1 -i 10.10.0.1 -i ::1 -- <path-to-share>
```
As you can see, I have given only 5 methods. But there are a few more methods given in the link attached at the end of this guide. Go and test them as well. Also, bookmark and revisit it from time to time to check if there are any new additions to the list in the future.
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-quickly-serve-files-and-folders-over-http-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.npmjs.com/package/http-server
[2]:https://github.com/svenstaro/miniserve
[3]:https://github.com/svenstaro/miniserve/releases

View File

@ -1,113 +0,0 @@
How To Enable Hardware Accelerated Video Decoding In Chromium On Ubuntu Or Linux Mint
======
You may have noticed that watching HD videos from Youtube and other similar websites in Google Chrome or Chromium browsers on Linux considerably increases your CPU usage and, if you use a laptop, it gets quite hot and the battery drains very quickly. That's because Chrome / Chromium (Firefox too but there's no way to force this) doesn't support hardware accelerated video decoding on Linux.
**This article explains how to install a Chromium development build which includes a patch that enables VA-API on Linux, bringing support for GPU accelerated video decoding, which should significantly decrease the CPU usage when watching HD videos online. The instructions cover only Intel and Nvidia graphics cards, as I don't have an ATI/AMD graphics card to try this, nor do I have experience with such graphics cards.**
This is Chromium from the Ubuntu (18.04) repositories without GPU accelerated video decoding playing a 1080p YouTube video:
![](https://4.bp.blogspot.com/-KtUQni2PMvE/W3KlJ62yLLI/AAAAAAAABW4/NrNVFaTAkZ8AmwqWwRvWD6czT51ni-R-gCLcBGAs/s1600/chromium-default-no-accel.png)
The same 1080p YouTube video playing in Chromium with the VA-API patch and hardware accelerated video decode enabled on Ubuntu 18.04:
![](https://4.bp.blogspot.com/-0c-wb4UNhW8/W3KlQBfeFnI/AAAAAAAABW8/WVUAYzM6hA8wRTlCcrPXPMpoXoFVR6b1QCLcBGAs/s1600/chromium-hardware-acceleration-enabled.png)
Notice the CPU usage in the screenshots. Both screenshots were taken on my old, but still quite powerful desktop. On my laptop, the Chromium CPU usage without hardware acceleration goes way higher.
The “_Enable VAVDA, VAVEA and VAJDA on linux with VAAPI only_” [patch][3] was initially submitted to Chromium more than a year ago, but it has yet to be merged.
Chrome has an option to override the software rendering list (`#ignore-gpu-blacklist`), but this option does not enable hardware accelerated video decoding. After enabling this option, you may find the following when visiting `chrome://gpu`: “_Video Decode: Hardware accelerated_”, but this does not mean it actually works. Open a HD video on YouTube and check the CPU usage in a tool such as `htop` (this is what I'm using in the screenshots above to check the CPU usage) - you should see high CPU usage because GPU video decoding is not actually enabled. There's also a section below for how to check if you're actually using hardware accelerated video decoding.
**The patches used by the VA-API-enabled Chromium Ubuntu builds in this article are available [here][1].**
### Installing and using Chromium browser with VA-API support on Ubuntu or Linux Mint
**It should be clear to everyone reading this that Chromium Dev Branch is not considered stable. So you might find bugs, it may crash, etc. It works fine right now but who knows what may happen after some update.**
**What's more, the Chromium Dev Branch PPA requires you to perform some extra steps if you want to enable Widevine support** (so you can play Netflix videos and paid YouTube videos, etc.), **or if you need features like Sync** (which needs registering an API key and setting it up on your system). Instructions for performing these tweaks are explained in the [Chromium Dev Branch PPA description][4].
Chromium with the VA-API patch is also available for some other Linux distributions, in third-party repositories, like the [Arch Linux AUR][5].
**1\. Install Chromium Dev Branch with VA-API support.**
There's a [Chromium Beta PPA][7] with the VA-API patch, but it lacks vdpau-video for Ubuntu 18.04. If you want, you can use the `vdpau-va-driver` from the [Chromium Dev Branch PPA packages][8]. You can add the Chromium Dev Branch PPA and install the VA-API-enabled Chromium as follows:
```
sudo add-apt-repository ppa:saiarcot895/chromium-dev
sudo apt-get update
sudo apt install chromium-browser
```
**2\. Install the VA-API driver**
For Intel graphics cards, you'll need to install the `i965-va-driver` package (it may already be installed):
```
sudo apt install i965-va-driver
```
For Nvidia graphics cards (it should work with both the open source Nouveau drivers and the proprietary Nvidia drivers), install `vdpau-va-driver` :
```
sudo apt install vdpau-va-driver
```
**3\. Enable the Hardware-accelerated video option in Chromium.**
Copy and paste the following in the Chrome URL bar: `chrome://flags/#enable-accelerated-video` (or search for the `Hardware-accelerated video` option in `chrome://flags`) and enable it, then restart Chromium browser.
On a default Google Chrome / Chromium build, this option shows as unavailable, but you'll be able to enable it now because we've used the VA-API enabled Chromium build.
**4\. Install the [h264ify][2] Chrome extension.**
YouTube (and probably some other websites as well) uses VP8 or VP9 video codecs by default, and many GPUs don't support hardware decoding for this codec. The h264ify extension will force YouTube to use H.264, which should be supported by most GPUs, instead of VP8/VP9.
This extension can also block 60fps videos, useful on lower end machines.
You can check the codec used by a YouTube video by right-clicking on the video and selecting `Stats for nerds`. With the h264ify extension enabled, you should see avc / mp4a as the codecs. Without this extension, the codec should be something like vp09 / opus.
### How to check if Chromium is using GPU video decoding
Open a video on YouTube. Next, open a new tab in Chromium and enter the following in the URL bar: `chrome://media-internals` .
On the `chrome://media-internals` tab, click on the video url (in order to expand it), scroll down and look under `Player Properties` , and you should find the `video_decoder` property. If the `video_decoder` value is `GpuVideoDecoder` it means that the video that's currently playing on YouTube in the other tab is using hardware-accelerated video decoding.
![](https://4.bp.blogspot.com/-COBJWVT_Y0Q/W3KnG7AeHsI/AAAAAAAABXM/W2XAJA_S0BIHug4eQKTMOdIfXHhgkXhhQCLcBGAs/s1600/chromium-gpuvideodecoder-linux.png)
If it says `FFmpegVideoDecoder` or `VpxVideoDecoder` , accelerated video decoding is not working, or maybe you forgot to install or disabled the h264ify Chrome extension.
If it's not working, you could try to debug it by running `chromium-browser` from the command line and see if it shows any VA-API related errors. You can also run `vainfo` (install it in Ubuntu or Linux Mint: `sudo apt install vainfo`) and `vdpauinfo` (for Nvidia; install it in Ubuntu or Linux Mint: `sudo apt install vdpauinfo`) and see if it shows an error.
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/08/how-to-enable-hardware-accelerated.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://github.com/saiarcot895/chromium-ubuntu-build/tree/master/debian/patches
[2]:https://chrome.google.com/webstore/detail/h264ify/aleakchihdccplidncghkekgioiakgal
[3]:https://chromium-review.googlesource.com/c/chromium/src/+/532294
[4]:https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-dev
[5]:https://aur.archlinux.org/packages/?O=0&SeB=nd&K=chromium+vaapi&outdated=&SB=n&SO=a&PP=50&do_Search=Go
[6]:https://aur.archlinux.org/packages/libva-vdpau-driver-chromium/
[7]:https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-beta
[8]:https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-dev/+packages

View File

@ -1,327 +0,0 @@
pinewall translating
Add GUIs to your programs and scripts easily with PySimpleGUI
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh)
Few people run Python programs by double-clicking the .py file as if it were a .exe file. When a typical user (a non-programmer type) double-clicks an .exe file, they expect it to pop open with a window they can interact with. While GUIs using tkinter are possible with standard Python installations, it's unlikely many programs do this.
What if it were so easy to open a Python program into a GUI that complete beginners could do it? Would anyone care? Would anyone use it? It's difficult to answer because to date it's not been easy to build a custom GUI.
There seems to be a gap in the ability to add a GUI onto a Python program/script. Complete beginners are left using only the command line and many advanced programmers don't want to take the time required to code up a tkinter GUI.
### GUI frameworks
There is no shortage of GUI frameworks for Python. Tkinter, WxPython, Qt, and Kivy are a few of the major packages. In addition, there are a good number of dumbed-down GUI packages that "wrap" one of the major packages, including EasyGUI, PyGUI, and Pyforms.
The problem is that beginners (those with less than six weeks of experience) can't learn even the simplest of the major packages. That leaves the wrapper packages as a potential option, but it will still be difficult or impossible for most new users to build a custom GUI layout. Even if it's possible, the wrappers still require pages of code.
[PySimpleGUI][1] attempts to address these GUI challenges by providing a super-simple, easy-to-understand interface to GUIs that can be easily customized. Even many complex GUIs require less than 20 lines of code when PySimpleGUI is used.
### The secret
What makes PySimpleGUI superior for newcomers is that the package contains the majority of the code that the user is normally expected to write. Button callbacks are handled by PySimpleGUI, not the user's code. Beginners struggle to grasp the concept of a function, and expecting them to understand a call-back function in the first few weeks is a stretch.
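For contrast, here is a minimal sketch of what collecting a single value looks like in plain tkinter, callback and all (the layout and names are purely illustrative):

```
import tkinter as tk

root = tk.Tk()
tk.Label(root, text='Enter your name').pack()
entry = tk.Entry(root)
entry.pack()
result = []

def on_ok():
    # the value must be read inside the callback, before the window is destroyed
    result.append(entry.get())
    root.destroy()

tk.Button(root, text='OK', command=on_ok).pack()
root.mainloop()
print(result[0] if result else 'cancelled')
```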
With most GUIs, arranging GUI widgets often requires several lines of code… at least one or two lines per widget. PySimpleGUI uses an "auto-packer" that automatically creates the layout. No pack or grid system is needed to lay out a GUI window.
Finally, PySimpleGUI leverages the Python language constructs in clever ways that shorten the amount of code and return the GUI data in a straightforward manner. When a widget is created in a form layout, it is configured in place, not several lines of code away.
### What is a GUI?
Most GUIs do one thing: collect information from the user and return it. From a programmer's viewpoint, this could be summed up as a function call that looks like this:
```
button, values = GUI_Display(gui_layout)
```
What's expected from most GUIs is the button that was clicked (e.g., OK, cancel, save, yes, no, etc.) and the values input by the user. The essence of a GUI can be boiled down to a single line of code.
This is exactly how PySimpleGUI works (for simple GUIs). When the call is made to display the GUI, nothing executes until a button is clicked that closes the form.
There are more complex GUIs, such as those that don't close after a button is clicked. Examples include a remote control interface for a robot and a chat window. These complex forms can also be created with PySimpleGUI.
### Making a quick GUI
When is PySimpleGUI useful? Immediately, whenever you need a GUI. It takes less than five minutes to create and try a GUI. The quickest way to make a GUI is to copy one from the [PySimpleGUI Cookbook][2]. Follow these steps:
* Find a GUI that looks similar to what you want to create
* Copy code from the Cookbook
* Paste it into your IDE and run it
Let's look at the first recipe from the book.
```
import PySimpleGUI as sg
# Very basic form.  Return values as a list
form = sg.FlexForm('Simple data entry form')  # begin with a blank form
layout = [
          [sg.Text('Please enter your Name, Address, Phone')],
          [sg.Text('Name', size=(15, 1)), sg.InputText('name')],
          [sg.Text('Address', size=(15, 1)), sg.InputText('address')],
          [sg.Text('Phone', size=(15, 1)), sg.InputText('phone')],
          [sg.Submit(), sg.Cancel()]
         ]
button, values = form.LayoutAndRead(layout)
print(button, values[0], values[1], values[2])
```
It's a reasonably sized form.
![](https://opensource.com/sites/default/files/uploads/pysimplegui_cookbook-form.jpg)
If you just need to collect a few values and they're all basically strings, you could copy this recipe and modify it to suit your needs.
You can even create a custom GUI layout in just five lines of code.
```
import PySimpleGUI as sg
form = sg.FlexForm('My first GUI')
layout = [ [sg.Text('Enter your name'), sg.InputText()],
           [sg.OK()] ]
button, (name,) = form.LayoutAndRead(layout)
```
![](https://opensource.com/sites/default/files/uploads/pysimplegui-5-line-form.jpg)
### Making a custom GUI in five minutes
If you have a straightforward layout, you should be able to create a custom layout in PySimpleGUI in less than five minutes by modifying code from the Cookbook.
Widgets are called elements in PySimpleGUI. These elements are spelled exactly as you would type them into your Python code.
#### Core elements
```
Text
InputText
Multiline
InputCombo
Listbox
Radio
Checkbox
Spin
Output
SimpleButton
RealtimeButton
ReadFormButton
ProgressBar
Image
Slider
Column
```
#### Shortcut list
PySimpleGUI also has two types of element shortcuts. One type is simply other names for the exact same element (e.g., `T` instead of `Text`). The second type configures an element with a particular setting, sparing you from specifying all parameters (e.g., `Submit` is a button with the text "Submit" on it)
```
T = Text
Txt = Text
In = InputText
Input = InputText
Combo = InputCombo
DropDown = InputCombo
Drop = InputCombo
```
#### Button shortcuts
A number of common buttons have been implemented as shortcuts. These include:
```
FolderBrowse
FileBrowse
FileSaveAs
Save
Submit
OK
Ok
Cancel
Quit
Exit
Yes
No
```
There are also shortcuts for more generic button functions.
```
SimpleButton
ReadFormButton
RealtimeButton
```
These are all the GUI widgets you can choose from in PySimpleGUI. If one isn't on these lists, it doesn't go in your form layout.
#### GUI design pattern
The stuff that tends not to change in GUIs are the calls that set up and show a window. The layout of the elements is what changes from one program to another.
Here is the code from the example above with the layout removed:
```
import PySimpleGUI as sg
form = sg.FlexForm('Simple data entry form')
# Define your form here (it's a list of lists)
button, values = form.LayoutAndRead(layout)
```
The flow for most GUIs is:
* Create the form object
* Define the GUI as a list of lists
* Show the GUI and get results
These are line-for-line what you see in PySimpleGUI's design pattern.
#### GUI layout
To create your custom GUI, first break your form down into rows, because forms are defined one row at a time. Then place one element after another, working from left to right.
The result is a "list of lists" that looks something like this:
```
layout = [  [Text('Row 1')],
            [Text('Row 2'), Checkbox('Checkbox 1', OK()), Checkbox('Checkbox 2'), OK()] ]
```
This layout produces this window:
![](https://opensource.com/sites/default/files/uploads/pysimplegui-custom-form.jpg)
### Displaying the GUI
Once you have your layout complete and you've copied the lines of code that set up and show the form, it's time to display the form and get values from the user.
This is the line of code that displays the form and provides the results:
```
button, values = form.LayoutAndRead(layout)
```
Forms return two values: the text of the button that is clicked and a list of values the user enters into the form.
If the example form is displayed and the user does nothing other than clicking the OK button, the results would be:
```
button == 'OK'
values == [False, False]
```
Checkbox elements return a value of True or False. Because the checkboxes defaulted to unchecked, both the values returned were False.
### Displaying results
Once you have the values from the GUI, it's nice to check what values are in the variables. Rather than printing them out using a `print` statement, let's stick with the GUI idea and output the data to a window.
PySimpleGUI has a number of message boxes to choose from. The data passed to the message box is displayed in a window. The function takes any number of arguments. You can simply indicate all the variables you want to see in the call.
The most commonly used message box in PySimpleGUI is MsgBox. To display the results from the previous example, write:
```
MsgBox('The GUI returned:', button, values)
```
### Putting it all together
Now that you know the basics, let's put together a form that contains as many of PySimpleGUI's elements as possible. Also, to give it a nice appearance, we'll change the "look and feel" to a green and tan color scheme.
```
import PySimpleGUI as sg
sg.ChangeLookAndFeel('GreenTan')
form = sg.FlexForm('Everything bagel', default_element_size=(40, 1))
column1 = [[sg.Text('Column 1', background_color='#d3dfda', justification='center', size=(10,1))],
           [sg.Spin(values=('Spin Box 1', '2', '3'), initial_value='Spin Box 1')],
           [sg.Spin(values=('Spin Box 1', '2', '3'), initial_value='Spin Box 2')],
           [sg.Spin(values=('Spin Box 1', '2', '3'), initial_value='Spin Box 3')]]
layout = [
    [sg.Text('All graphic widgets in one form!', size=(30, 1), font=("Helvetica", 25))],
    [sg.Text('Here is some text.... and a place to enter text')],
    [sg.InputText('This is my text')],
    [sg.Checkbox('My first checkbox!'), sg.Checkbox('My second checkbox!', default=True)],
    [sg.Radio('My first Radio!     ', "RADIO1", default=True), sg.Radio('My second Radio!', "RADIO1")],
    [sg.Multiline(default_text='This is the default Text should you decide not to type anything', size=(35, 3)),
     sg.Multiline(default_text='A second multi-line', size=(35, 3))],
    [sg.InputCombo(('Combobox 1', 'Combobox 2'), size=(20, 3)),
     sg.Slider(range=(1, 100), orientation='h', size=(34, 20), default_value=85)],
    [sg.Listbox(values=('Listbox 1', 'Listbox 2', 'Listbox 3'), size=(30, 3)),
     sg.Slider(range=(1, 100), orientation='v', size=(5, 20), default_value=25),
     sg.Slider(range=(1, 100), orientation='v', size=(5, 20), default_value=75),
     sg.Slider(range=(1, 100), orientation='v', size=(5, 20), default_value=10),
     sg.Column(column1, background_color='#d3dfda')],
    [sg.Text('_'  * 80)],
    [sg.Text('Choose A Folder', size=(35, 1))],
    [sg.Text('Your Folder', size=(15, 1), auto_size_text=False, justification='right'),
     sg.InputText('Default Folder'), sg.FolderBrowse()],
    [sg.Submit(), sg.Cancel()]
     ]
button, values = form.LayoutAndRead(layout)
sg.MsgBox(button, values)
```
This may seem like a lot of code, but try coding this same GUI layout directly in tkinter and you'll quickly realize how tiny it is.
![](https://opensource.com/sites/default/files/uploads/pysimplegui-everything.jpg)
The last line of code opens a message box. This is how it looks:
![](https://opensource.com/sites/default/files/uploads/pysimplegui-message-box.jpg)
Each parameter to the message box call is displayed on a new line. There are two lines of text in the message box; the second line is very long and wrapped a number of times.
Take a moment and pair up the results values with the GUI to get an understanding of how results are created and returned.
### Adding a GUI to Your Program or Script
If you have a script that uses the command line, you don't have to abandon it in order to add a GUI. An easy solution is that if there are zero parameters given on the command line, then the GUI is run. Otherwise, execute the command line as you do today.
This kind of logic is all that's needed:
```
import sys

if len(sys.argv) == 1:
    pass  # collect arguments from the GUI
else:
    pass  # collect arguments from sys.argv
```
The easiest way to get a GUI up and running quickly is to copy and modify one of the recipes from the [PySimpleGUI Cookbook][2].
Have some fun! Spice up the scripts you're tired of running by hand. Spend 5 or 10 minutes playing with the demo scripts. You may find one already exists that does exactly what you need. If not, you will find it's simple to create your own. If you really get lost, you've only invested 10 minutes.
### Resources
#### Installation
PySimpleGUI works on all systems that run tkinter, including the Raspberry Pi, and it requires Python 3.
```
pip install PySimpleGUI
```
#### Documentation
+ [Manual][3]
+ [Cookbook][4]
+ [GitHub repository][5]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/pysimplegui
作者:[Mike Barnett][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pysimplegui
[1]: https://github.com/MikeTheWatchGuy/PySimpleGUI
[2]: https://pysimplegui.readthedocs.io/en/latest/cookbook/
[3]: https://pysimplegui.readthedocs.io/en/latest/cookbook/
[4]: https://pysimplegui.readthedocs.io/en/latest/cookbook/
[5]: https://github.com/MikeTheWatchGuy/PySimpleGUI

View File

@ -1,3 +1,5 @@
translating---geekpi
How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions
======
Creating a slideshow of photos is a matter of a few clicks. Heres how to make a slideshow of pictures in Ubuntu 18.04 and other Linux distributions.

View File

@ -1,343 +0,0 @@
Turn your vi editor into a productivity powerhouse
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)
A versatile and powerful editor, vi includes a rich set of potent commands that make it a popular choice for many users. This article specifically looks at commands that are not enabled by default in vi but are nevertheless useful. The commands recommended here are expected to be set in a vi configuration file. Though it is possible to enable commands individually from each vi session, the purpose of this article is to create a highly productive environment out of the box.
### Before you begin
While "vim" is the technically correct name of the newer version of the vi editor, this article refers to it as "vi." vimrc is the configuration file used by vim.
The commands or configurations discussed here go into the vi startup configuration file, vimrc, located in the user home directory. Follow the instructions below to set the commands in vimrc:
(Note: The vimrc file is also used for system-wide configurations in Linux, such as `/etc/vimrc` or `/etc/vim/vimrc`. In this article, we'll consider only user-specific vimrc, present in user home folder.)
In Linux:
* Open the file with `vi $HOME/.vimrc`
* Type or copy/paste the commands in the cheat sheet at the end of this article
* Save and close (`:wq`)
In Windows:
* First, [install gvim][1]
* Open gvim
* Click Edit --> Startup settings, which opens the _vimrc file
* Type or copy/paste the commands in the cheat sheet at the end of this article
* Click File --> Save
Let's delve into the individual vi productivity commands. These commands are classified into the following categories:
1. Indentation & Tabs
2. Display & Format
3. Search
4. Browse & Scroll
5. Spell
6. Miscellaneous
### 1\. Indentation & Tabs
To automatically align the indentation of a line in a file:
```
set autoindent
```
Smart Indent uses the code syntax and style to align:
```
set smartindent
```
Tip: vi is language-aware and provides a default setting that works efficiently based on the programming language used in your file. There are many default configuration commands, including `cindent`, `cinoptions`, `indentexpr`, etc., which are not explained here. `syn` is a helpful command that shows or sets the file syntax.
To set the number of spaces to display for a tab:
```
set tabstop=4
```
To set the number of spaces to display for a “shift operation” (such as >> or <<):
```
set shiftwidth=4
```
If you prefer to use spaces instead of tabs, this option inserts spaces when the Tab key is pressed. This may cause problems for languages such as Python that rely on tabs instead of spaces. In such cases, you may set this option based on the file type (see `autocmd`).
```
set expandtab
```
### 2\. Display & Format
To show line numbers:
```
set number
```
![](https://opensource.com/sites/default/files/uploads/picture01.png)
To wrap text when it crosses the maximum line width:
```
set textwidth=80
```
To wrap text based on a number of columns from the right side:
```
set wrapmargin=2
```
To identify open and close brace positions when you traverse through the file:
```
set showmatch
```
![](https://opensource.com/sites/default/files/uploads/picture02-03.jpg)
### 3\. Search
To highlight the searched term in a file:
```
set hlsearch
```
![](https://opensource.com/sites/default/files/uploads/picture04.png)
To perform incremental searches as you type:
```
set incsearch
```
![](https://opensource.com/sites/default/files/picture05.png)
To search ignoring case (many users prefer not to use this command; set it only if you think it will be useful):
```
set ignorecase
```
To search without considering `ignorecase` when both `ignorecase` and `smartcase` are set and the search pattern contains uppercase:
```
set smartcase
```
For example, if the file contains:

```
test
Test
```

When both `ignorecase` and `smartcase` are set, a search for “test” finds and highlights both lines. A search for “Test” highlights or finds only the second line (`Test`).
### 4. Browse & Scroll
For a better visual experience, you may prefer to have the cursor somewhere in the middle rather than on the first line. The following option sets the cursor position to the 5th row.
```
set scrolloff=5
```
Example:
The first image is with scrolloff=0 and the second image is with scrolloff=5.
![](https://opensource.com/sites/default/files/uploads/picture06-07.jpg)
Tip: `set sidescrolloff` is useful if you also set `nowrap`.
To display a permanent status bar at the bottom of the vi screen showing the filename, row number, column number, etc.:
```
set laststatus=2
```
![](https://opensource.com/sites/default/files/picture08.png)
### 5. Spell
vi has a built-in spell-checker that is quite useful for text editing as well as coding. vi recognizes the file type and checks the spelling of comments only in code. Use the following command to turn on spell-check for the English language:
```
set spell spelllang=en_us
```
### 6. Miscellaneous
Disable creating backup file: When this option is on, vi creates a backup of the previous edit. If you do not want this feature, disable it as shown below. Backup files are named with a tilde (~) at the end of the filename.
```
set nobackup
```
Disable creating a swap file: When this option is on, vi creates a swap file that exists until you start editing the file. Swapfile is used to recover a file in the event of a crash or a use conflict. Swap files are hidden files that begin with `.` and end with `.swp`.
```
set noswapfile
```
Suppose you need to edit multiple files in the same vi session and switch between them. An annoying feature that's not readily apparent is that the working directory is the one from which you opened the first file. Often it is useful to automatically switch the working directory to that of the file being edited. To enable this option:
```
set autochdir
```
vi maintains an undo history that lets you undo changes. By default, this history is active only until the file is closed. vi includes a nifty feature that maintains the undo history even after the file is closed, which means you may undo your changes even after the file is saved, closed, and reopened. The undo file is a hidden file saved with the `.un~` extension.
```
set undofile
```
To set audible alert bells (which sound a warning if you try to scroll beyond the end of a line):
```
set errorbells
```
If you prefer, you may set visual alert bells:
```
set visualbell
```
### Bonus
vi provides long-format as well as short-format commands. Either format can be used to set or unset the configuration.
Long format for the `autoindent` command:
```
set autoindent
```
Short format for the `autoindent` command:
```
set ai
```
To see the current configuration setting of a command without changing its current value, use `?` at the end:
```
set autoindent?
```
To unset or turn off a command, most commands take `no` as a prefix:
```
set noautoindent
```
It is possible to set a command for one file but not for the global configuration. To do this, open the file and type `:`, followed by the `set` command. This configuration is effective only for the current file editing session.
![](https://opensource.com/sites/default/files/uploads/picture09.png)
For help on a command:
```
:help autoindent
```
![](https://opensource.com/sites/default/files/uploads/picture10-11.jpg)
Note: The commands listed here were tested on Linux with Vim version 7.4 (2013 Aug 10) and Windows with Vim 8.0 (2016 Sep 12).
These useful commands are sure to enhance your vi experience. Which other commands do you recommend?
### Cheat sheet
Copy/paste this list of commands in your vimrc file:
```
" Indentation & Tabs
set autoindent
set smartindent
set tabstop=4
set shiftwidth=4
set expandtab
set smarttab
" Display & format
set number
set textwidth=80
set wrapmargin=2
set showmatch
" Search
set hlsearch
set incsearch
set ignorecase
set smartcase
" Browse & Scroll
set scrolloff=5
set laststatus=2
" Spell
set spell spelllang=en_us
" Miscellaneous
set nobackup
set noswapfile
set autochdir
set undofile
set visualbell
set errorbells
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/vi-editor-productivity-powerhouse
作者:[Girish Managoli][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gammay
[1]: https://www.vim.org/download.php#pc

View File

@ -1,132 +0,0 @@
translating---geekpi
Why I love Xonsh
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/shelloff.png?itok=L8pjHXjW)
Shell languages are useful for interactive use. But this optimization often comes with trade-offs against using them as programming languages, which is sometimes felt when writing shell scripts.
What if your shell also understood a more scalable programming language? Say, Python?
Enter [Xonsh][1].
Installing Xonsh is as simple as creating a virtual environment, running `pip install xonsh[ptk,linux]`, and then running `xonsh`.
At first, you might wonder why your Python shell has a weird prompt:
```
$ 1+1
2
```
Nice calculator!
```
$ print("hello world")
hello world
```
We can also call other functions:
```
$ from antigravity import geohash
$ geohash(37.421542, -122.085589, b'2005-05-26-10458.68')
37.857713 -122.544543
```
However, we can still use it like a regular shell:
```
$ echo "hello world"
hello world
```
We can even mix and match!
```
$ for i in range(3):
.     echo "hello world"
.
hello world
hello world
hello world
```
Xonsh supports completion for both shell commands and Python expressions by using the [Prompt Toolkit][2]. Completions are visually informative, showing possible completions and having in-band dropdown lists.
It also supports environment access. It uses a simple but powerful heuristic for applying Python types to environment variables. The default is "string," but, for example, path variables are automatically lists.
```
$ '/usr/bin' in $PATH
True
```
Xonsh accepts either shell-style or Python-style boolean shortcut operators:
```
$ cat things
foo
$ grep -q foo things and echo "found"
found
$ grep -q bar things && echo "found"
$ grep -q foo things or echo "found"
$ grep -q bar things || echo "found"
found
```
This means that Python keywords are interpreted. If we want to print the title of a famous Dr. Seuss book, we need to quote the keywords.
```
$ echo green eggs "and" ham
green eggs and ham
```
If we do not, we are in for a surprise:
```
$ echo green eggs and ham
green eggs
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
xonsh: subprocess mode: command not found: ham
Did you mean one of the following?
    as:   Command (/usr/bin/as)
    ht:   Command (/usr/bin/ht)
    mag:  Command (/usr/bin/mag)
    ar:   Command (/usr/bin/ar)
    nm:   Command (/usr/bin/nm)
```
Virtual environments can get a little tricky. Regular virtual environments, depending as they do on Bash-like syntax, cannot work. However, Xonsh comes with its own virtual environment management system called `vox`.
`vox` can create, activate and deactivate environments in `~/.virtualenvs`; if you've used `virtualenvwrapper`, this is where the environments were.
Note that the currently activated environment doesn't affect `xonsh`. It can't import anything from an activated environment.
```
$ xontrib load vox
$ vox create my-environment                                                    
...
$ vox activate my-environment        
Activated "my-environment".                                                    
$ pip install money                                                            
...
$ python                                                              
...
>>> import money                                                              
>>> money.Money('3.14')                        
$ import money
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
ModuleNotFoundError: No module named 'money'
```
The first line enables `vox`: it is a `xontrib`, a third-party extension for Xonsh. The `xontrib` manager can list all possible `xontribs` and their current state (installed, loaded, or neither).
It's possible to write a `xontrib` and just upload it to `PyPi` to make it available. However, it's good practice to add it to the `xontrib` index so Xonsh knows about it in advance. This allows, for example, the configuration wizard to suggest it.
If you've ever wondered, "can Python be my shell?" then you are only a `pip install xonsh` away from finding out.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/xonsh-bash-alternative
作者:[Moshe Zadka][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[1]: https://xon.sh/
[2]: https://python-prompt-toolkit.readthedocs.io/en/master/

View File

@ -1,110 +0,0 @@
translating---geekpi
Find your systems easily on a LAN with mDNS
======
![](https://fedoramagazine.org/wp-content/uploads/2018/09/mDNS-816x345.jpg)
Multicast DNS, or mDNS, lets systems broadcast queries on a local network to find other resources by name. Fedora users often own multiple Linux systems on a router without sophisticated name services. In that case, mDNS lets you talk to your multiple systems by name — without touching the router in most cases. You also don't have to keep files like /etc/hosts in sync on all the local systems. This article shows you how to set it up.
mDNS is a zero-configuration networking service that's been around for quite a while. Fedora ships Avahi, a zero-configuration stack that includes mDNS, as part of Workstation. (mDNS is also part of Bonjour, found on Mac OS.)
This article assumes you have two systems running supported versions of Fedora (27 or 28). Their host names are meant to be castor and pollux.
### Installing packages
Make sure the nss-mdns and avahi packages are installed on your system. You might have a different version, which is fine:
```
$ rpm -q nss-mdns avahi
nss-mdns-0.14.1-1.fc28.x86_64
avahi-0.7-13.fc28.x86_64
```
Fedora Workstation provides both of these packages by default. If not present, install them:
```
$ sudo dnf install nss-mdns avahi
```
Make sure the avahi-daemon.service unit is enabled and running. Again, this is the default on Fedora Workstation.
```
$ sudo systemctl enable --now avahi-daemon.service
```
Although optional, you might also want to install the avahi-tools package. This package includes a number of handy utilities for checking how well the zero-configuration services on your system are working. Use this sudo command:
```
$ sudo dnf install avahi-tools
```
The /etc/nsswitch.conf file controls which services your system uses to resolve services, and in what order. You should see a line like this in that file:
```
hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname
```
Notice the commands mdns4_minimal [NOTFOUND=return]. They tell your system to use the multicast DNS resolver to resolve a hostname to an IP address. Even if that service works, the remaining services are tried if the name doesn't resolve.
If you don't see a configuration similar to this, you can edit it (as the root user). However, the nss-mdns package handles this for you. Remove and reinstall that package to fix the file, if you're uncomfortable editing it yourself.
Follow the steps above for **both systems**.
### Setting host name and testing
Now that youve done the common configuration work, set up each hosts name in one of these ways:
1. If youre using Fedora Workstation, [you can use this procedure][1].
2. If not, use hostnamectl to do the honors. Do this for the first box:
```
$ hostnamectl set-hostname castor
```
3. You can also edit the /etc/avahi/avahi-daemon.conf file, remove the comment on the host-name setting line, and set the name there. By default, though, Avahi uses the system provided host name, so you **shouldnt** need this method.
Next, restart the Avahi daemon so it picks up changes:
```
$ sudo systemctl restart avahi-daemon.service
```
Then set your other box properly:
```
$ hostnamectl set-hostname pollux
$ sudo systemctl restart avahi-daemon.service
```
As long as your network router is not disallowing mDNS traffic, you should now be able to log in to castor and ping the other box. You should use the default .local domain name so resolution works correctly:
```
$ ping pollux.local
PING pollux.local (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1 (192.168.0.1): icmp_seq=1 ttl=64 time=3.17 ms
64 bytes from 192.168.0.1 (192.168.0.1): icmp_seq=2 ttl=64 time=1.24 ms
...
```
The same trick should also work from pollux if you ping castor.local. Its much more convenient now to access your systems around the network!
Moreover, dont be surprised if your router advertises services. Modern WiFi and wired routers often provide these services to make life easier for consumers.
This process works for most systems. However, if you run into trouble, use avahi-browse and other tools from the avahi-tools package to see what services are available.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/find-systems-easily-lan-mdns/
作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[1]: https://fedoramagazine.org/set-hostname-fedora/

View File

@ -0,0 +1,196 @@
How To Limit Network Bandwidth In Linux Using Wondershaper
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Wondershaper-1-720x340.jpg)
This tutorial will help you easily limit network bandwidth and shape your network traffic in Unix-like operating systems. By limiting network bandwidth usage, you can avoid unnecessary bandwidth consumption by applications such as package managers (pacman, yum, apt), web browsers, torrent clients, and download managers, and prevent bandwidth abuse by one or more users on the network. For the purpose of this tutorial, we will be using a command line utility named **Wondershaper**. Trust me, it is not as hard as you may think. It is one of the easiest and quickest ways I have come across to limit Internet or local network bandwidth usage on your own Linux system. Read on.
Please be mindful that the aforementioned utility can only limit the incoming and outgoing traffic of your local network interfaces, not the interfaces of your router or modem. In other words, Wondershaper will only limit the network bandwidth on your local system itself, not on any other systems in the network. This utility is mainly designed for limiting the bandwidth of one or more network adapters in your local system. Hope you got my point.
Let us see how to use Wondershaper to shape the network traffic.
### Limit Network Bandwidth In Linux Using Wondershaper
**Wondershaper** is a simple script used to limit the bandwidth of your systems network adapter(s). It limits the bandwidth using iproutes tc command, but greatly simplifies its operation.
**Installing Wondershaper**
To install the latest version, git clone the wondershaper repository:
```
$ git clone https://github.com/magnific0/wondershaper.git
```
Go to the wondershaper directory and install it as shown below.
```
$ cd wondershaper
$ sudo make install
```
Then, run the following commands to enable and start the wondershaper service so it starts automatically on every reboot.
```
$ sudo systemctl enable wondershaper.service
$ sudo systemctl start wondershaper.service
```
You can also install it using your distributions package manager (official or non-official) if you dont need the very latest version.
Wondershaper is available in [**AUR**][1], so you can install it in Arch-based systems using AUR helper programs such as [**Yay**][2].
```
$ yay -S wondershaper-git
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install wondershaper
```
On Fedora:
```
$ sudo dnf install wondershaper
```
On RHEL and CentOS, enable the EPEL repository and install wondershaper as shown below.
```
$ sudo yum install epel-release
$ sudo yum install wondershaper
```
Finally, enable and start the wondershaper service so it starts automatically on every reboot.
```
$ sudo systemctl enable wondershaper.service
$ sudo systemctl start wondershaper.service
```
**Usage**
First, find the name of your network interface. Here are some common ways to find the details of a network card.
```
$ ip addr
$ route
$ ifconfig
```
Once you find the network card name, you can limit the bandwidth rate as shown below.
```
$ sudo wondershaper -a <adapter> -d <rate> -u <rate>
```
For instance, if your network card name is **enp0s8** and you want to limit the bandwidth to **1024 Kbps** for **downloads** and **512 Kbps** for **uploads**, the command would be:
```
$ sudo wondershaper -a enp0s8 -d 1024 -u 512
```
Where,
* **-a** : network card name
* **-d** : download rate
* **-u** : upload rate
To clear the limits from a network adapter, simply run:
```
$ sudo wondershaper -c -a enp0s8
```
Or
```
$ sudo wondershaper -c enp0s8
```
In case there is more than one network card available in your system, you need to manually set the download/upload rates for each network interface card as described above.
If you have installed Wondershaper by cloning its GitHub repository, there is a configuration file named **wondershaper.conf** in **/etc/conf.d/**. Make sure you have set the download and upload rates by modifying the appropriate values (network card name, download/upload rate) in this file.
```
$ sudo nano /etc/conf.d/wondershaper.conf
[wondershaper]
# Adapter
#
IFACE="eth0"
# Download rate in Kbps
#
DSPEED="2048"
# Upload rate in Kbps
#
USPEED="512"
```
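After editing the file, restart the service so the new limits take effect (assuming you enabled the systemd unit as shown earlier):
```
$ sudo systemctl restart wondershaper.service
```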
Here is a speed test sample taken before enabling Wondershaper, and another taken after enabling it. As you can see, the download rate has been tremendously reduced after limiting the bandwidth using Wondershaper on my Ubuntu 18.04 LTS server.
For more details, view the help section by running the following command:
```
$ wondershaper -h
```
Or, refer to the man pages.
```
$ man wondershaper
```
As far as I have tested, Wondershaper worked just fine as described above. Give it a try and let us know what you think about this utility.
And, thats all for now. Hope this was useful. More good stuff to come. Stay tuned.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-wondershaper/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://aur.archlinux.org/packages/wondershaper-git/
[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/

View File

@ -1,58 +0,0 @@
translating---geekpi
Two open source alternatives to Flash Player
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB)
In July 2017, Adobe sounded the [death knell][1] for its Flash Media Player, announcing it would end support for the once-ubiquitous online video player in 2020. In truth, however, Flash has been on the decline for the past eight years following a rash of zero-day attacks that damaged its reputation. Its future dimmed after Apple announced in 2010 it would not support the technology, and its demise accelerated in 2016 after Google stopped enabling Flash by default (in favor of HTML5) in the Chrome browser.
Even so, Adobe is still issuing monthly updates for the software, which has slipped from being used on 28.5% of all websites in 2011 to [only 4.4%][2] as of August 2018. More evidence of Flashs decline: Google director of engineering [Parisa Tabriz said][3] the number of Chrome users who access Flash content via the browser has declined from 80% in 2014 to under 8% in 2018.
Although few* video creators are publishing in Flash format today, there are still a lot of Flash videos out there that people will want to access for years to come. Given that the official applications days are numbered, open source software creators have a great opportunity to step in with alternatives to Adobe Flash Media Player. Two of those applications are Lightspark and GNU Gnash. Neither is a perfect substitute, but help from willing contributors could make them viable alternatives.
### Lightspark
[Lightspark][4] is a Flash Player alternative for Linux machines. While its still in alpha, development has accelerated since Adobe announced it would sunset Flash in 2017. According to its website, Lightspark implements about 60% of the Flash APIs and [works][5] on many leading websites including BBC News, Google Play Music, and Amazon Music.
Lightspark is written in C++/C and licensed under [LGPLv3][6]. The project lists 41 contributors and is actively soliciting bug reports and other contributions. For more information, check out its [GitHub repository][5].
### GNU Gnash
[GNU Gnash][7] is a Flash Player for GNU/Linux operating systems including Ubuntu, Fedora, and Debian. It works as standalone software and as a plugin for the Firefox and Konqueror browsers.
Gnashs main drawback is that it doesnt support the latest versions of Flash files—it supports most Flash SWF v7 features, some v8 and v9 features, and offers no support for v10 files. Its in beta release, and since its licensed under the [GNU GPLv3 or later][8], you can help contribute to modernizing it. Access its [project page][9] for more information.
### Want to create Flash?
*Just because most people aren't publishing Flash videos these days, that doesn't mean there will never, ever be a need to create SWF files. If you find yourself in that position, these two open source tools might help:
* [Motion-Twin ActionScript 2 Compiler][10] (MTASC): A command-line compiler that can generate SWF files without Adobe Animate (the current iteration of Adobe's video-creator software).
* [Ming][11]: A library written in C that can generate SWF files. It also contains some [utilities][12] you can use to work with Flash files.
--------------------------------------------------------------------------------
via: https://opensource.com/alternatives/flash-media-player
作者:[Opensource.com][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com
[1]: https://theblog.adobe.com/adobe-flash-update/
[2]: https://w3techs.com/technologies/details/cp-flash/all/all
[3]: https://www.bleepingcomputer.com/news/security/google-chrome-flash-usage-declines-from-80-percent-in-2014-to-under-8-percent-today/
[4]: http://lightspark.github.io/
[5]: https://github.com/lightspark/lightspark/wiki/Site-Support
[6]: https://github.com/lightspark/lightspark/blob/master/COPYING
[7]: https://www.gnu.org/software/gnash/
[8]: http://www.gnu.org/licenses/gpl-3.0.html
[9]: http://savannah.gnu.org/projects/gnash/
[10]: http://tech.motion-twin.com/mtasc.html
[11]: http://www.libming.org/
[12]: http://www.libming.org/WhatsIncluded

View File

@ -0,0 +1,67 @@
6 open source tools for writing a book
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-austen-writing-code.png?itok=XPxRMtQ4)
I first used and contributed to free and open source software in 1993, and since then I've been an open source software developer and evangelist. I've written or contributed to dozens of open source software projects, although the one that I'll be remembered for is the [FreeDOS Project][1], an open source implementation of the DOS operating system.
I recently wrote a book about FreeDOS. [_Using FreeDOS_][2] is my celebration of the 24th anniversary of FreeDOS. It is a collection of how-to's about installing and using FreeDOS, essays about my favorite DOS applications, and quick-reference guides to the DOS command line and DOS batch programming. I've been working on this book for the last few months, with the help of a great professional editor.
_Using FreeDOS_ is available under the Creative Commons Attribution (cc-by) International Public License. You can download the EPUB and PDF versions at no charge from the [FreeDOS e-books][2] website. (I'm also planning a print version, for those who prefer a bound copy.)
The book was produced almost entirely with open source software. I'd like to share a brief insight into the tools I used to create, edit, and produce _Using FreeDOS_.
### Google Docs
[Google Docs][3] is the only tool I used that isn't open source software. I uploaded my first drafts to Google Docs so my editor and I could collaborate. I'm sure there are open source collaboration tools, but Google Docs' ability to let two people edit the same document at the same time, make comments, suggest edits, and track changes—not to mention its use of paragraph styles and the ability to download the finished document—made it a valuable part of the editing process.
### LibreOffice
I started on [LibreOffice][4] 6.0 but I finished the book using LibreOffice 6.1. I love LibreOffice's rich support of styles. Paragraph styles made it easy to apply a style for titles, headers, body text, sample code, and other text. Character styles let me modify the appearance of text within a paragraph, such as inline sample code or a different style to indicate a filename. Graphics styles let me apply certain styling to screenshots and other images. And page styles allowed me to easily modify the layout and appearance of the page.
### GIMP
My book includes a lot of DOS program screenshots, website screenshots, and FreeDOS logos. I used [GIMP][5] to modify these images for the book. Usually, this was simply cropping or resizing an image, but as I prepare the print edition of the book, I'm using GIMP to create a few images that will be simpler for print layout.
### Inkscape
Most of the FreeDOS logos and fish mascots are in SVG format, and I used [Inkscape][6] for any image tweaking here. And in preparing the PDF version of the ebook, I wanted a simple blue banner at the top of the page, with the FreeDOS logo in the corner. After some experimenting, I found it easier to create an SVG image in Inkscape that looked like the banner I wanted, and I pasted that into the header.
### ImageMagick
While it's great to use GIMP to do the fine work, sometimes it's faster to run an [ImageMagick][7] command over a set of images, such as to convert into PNG format or to resize images.
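For example (the file names and target size are placeholders), converting a single screenshot and batch-resizing a directory of images might look like this:
```
$ convert screenshot.ppm screenshot.png        # convert a single image to PNG
$ mogrify -format png -resize 800x600 *.ppm    # batch-convert and resize a set of images
```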
### Sigil
LibreOffice can export directly to EPUB format, but it wasn't a great transfer. I haven't tried creating an EPUB with LibreOffice 6.1, but LibreOffice 6.0 didn't include my images. It also added styles in a weird way. I used [Sigil][8] to tweak the EPUB file and make everything look right. Sigil even has a preview function so you can see what the EPUB will look like.
### QEMU
Because this book is about installing and running FreeDOS, I needed to actually run FreeDOS. You can boot FreeDOS inside any PC emulator, including VirtualBox, QEMU, GNOME Boxes, PCem, and Bochs. But I like the simplicity of [QEMU][9]. And the QEMU console lets you issue a screen dump in PPM format, which is ideal for grabbing screenshots to include in the book.
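As a sketch of that workflow: switch to the QEMU monitor (typically Ctrl+Alt+2 in the graphical display) and issue the screendump command; the file name here is just a placeholder:
```
(qemu) screendump freedos-install.ppm
```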
Of course, I have to mention running [GNOME][10] on [Linux][11]. I use the [Fedora][12] distribution of Linux.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/writing-book-open-source-tools
作者:[Jim Hall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[1]: http://www.freedos.org/
[2]: http://www.freedos.org/ebook/
[3]: https://www.google.com/docs/about/
[4]: https://www.libreoffice.org/
[5]: https://www.gimp.org/
[6]: https://inkscape.org/
[7]: https://www.imagemagick.org/
[8]: https://sigil-ebook.com/
[9]: https://www.qemu.org/
[10]: https://www.gnome.org/
[11]: https://www.kernel.org/
[12]: https://getfedora.org/

View File

@ -0,0 +1,140 @@
Autotrash A CLI Tool To Automatically Purge Old Trashed Files
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/autotrash-720x340.png)
**Autotrash** is a command line utility to automatically purge old trashed files. It will purge files that have been in the trash for more than a given number of days. You dont need to empty the trash folder or press SHIFT+DELETE to permanently purge the files/folders. Autotrash will handle the contents of your Trash folder and delete them automatically after a particular period of time. In a nutshell, Autotrash will never allow your trash to grow too big.
### Installing Autotrash
Autotrash is available in the default repositories of Debian-based systems. To install autotrash on Debian, Ubuntu, Linux Mint, run:
```
$ sudo apt-get install autotrash
```
On Fedora:
```
$ sudo dnf install autotrash
```
For Arch Linux and its variants, you can install it using any AUR helper program, such as [**Yay**][1].
```
$ yay -S autotrash-git
```
### Automatically Purge Old Trashed Files
Whenever you run autotrash, it will scan your **`~/.local/share/Trash/info`** directory and read the **`.trashinfo`** files to find the deletion date of each file. If a file has been in the trash folder for more than the defined number of days, it will be deleted.
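For reference, the `.trashinfo` files follow the FreeDesktop.org Trash specification and record the original path and deletion date of each trashed file; a hypothetical entry looks like this:
```
[Trash Info]
Path=/home/sk/Documents/report.odt
DeletionDate=2018-08-14T18:30:00
```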
Let me show you some examples.
To purge files which are in the trash folder for more than 30 days, run:
```
$ autotrash -d 30
```
As per the above example, if the files in your Trash folder are more than 30 days old, Autotrash will automatically delete them from your Trash. You dont need to manually delete them. Just send the unnecessary junk to your trash folder and forget about it. Autotrash will take care of the trashed files.
The above command only processes the currently logged-in users trash directory. If you want autotrash to process the trash directories of all users (not just the one in your home directory), use the **-t** option like below.
```
$ autotrash -td 30
```
Autotrash also allows you to delete trashed files based on the space left or available on the trash filesystem.
For example, have a look at the following command.
```
$ autotrash --max-free 1024 -d 30
```
As per the above command, autotrash will only purge trashed files that are older than **30 days** if there is less than **1 GB of space left** on the trash filesystem. This can be useful if your trash filesystem is running out of space.
We can also purge files from the trash, oldest first, until there is at least 1 GB of space on the trash filesystem.
```
$ autotrash --min-free 1024
```
In this case, there is no restriction on how old trashed files are.
You can combine both options ( **`--min-free`** and **`--max-free`** ) in a single command like below.
```
$ autotrash --max-free 2048 --min-free 1024 -d 30
```
As per the above command, autotrash starts processing the trash when there is less than **2 GB** of free space. At that point, it removes files older than 30 days, and if there is still less than **1 GB** of free space afterwards, it removes even newer files.
As you can see, all of these commands have to be run manually. You might wonder, how can I automate this task? Thats easy! Just add autotrash as a crontab entry. The commands will then run automatically at the scheduled time and purge the files in your trash depending on the defined options.
To add these commands in crontab file, run:
```
$ crontab -e
```
Add the entries, for example:
```
@daily /usr/bin/autotrash -d 30
```
Now autotrash will purge files which have been in the trash folder for more than 30 days, every day.
For more details about scheduling tasks, refer to the following links.
+ [A Beginners Guide To Cron Jobs][2]
+ [How To Easily And Safely Manage Cron Jobs In Linux][3]
Please be mindful that if you have deleted any important files inadvertently, they will be permanently gone after the defined number of days, so just be careful.
Refer to the man pages to know more about Autotrash.
```
$ man autotrash
```
Emptying the Trash folder or pressing SHIFT+DELETE to permanently get rid of unnecessary stuff on a Linux system is no big deal. It only takes a couple of seconds. However, if you want an extra utility to take care of your junk files, Autotrash might be helpful. Give it a try and see how it works.
And, thats all for now. Hope this helps. More good stuff to come.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/autotrash-a-cli-tool-to-automatically-purge-old-trashed-files/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[2]: https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
[3]: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/

View File

@ -0,0 +1,229 @@
How to Use the Netplan Network Configuration Tool on Linux
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan.jpg?itok=Gu_ZfNGa)
For years Linux admins and users have configured their network interfaces in the same way. For instance, if youre an Ubuntu user, you could either configure the network connection via the desktop GUI or from within the /etc/network/interfaces file. The configuration was incredibly easy and never failed to work. The configuration within that file looked something like this:
```
auto enp10s0
iface enp10s0 inet static
address 192.168.1.162
netmask 255.255.255.0
gateway 192.168.1.100
dns-nameservers 1.0.0.1,1.1.1.1
```
Save and close that file. Restart networking with the command:
```
sudo systemctl restart networking
```
Or, if youre using a non-systemd distribution, you could restart networking the old-fashioned way like so:
```
sudo /etc/init.d/networking restart
```
Your network will restart and the newly configured interface is good to go.
Thats how its been done for years. Until now. With certain distributions (such as Ubuntu Linux 18.04), the configuration and control of networking has changed considerably. Instead of that interfaces file and using the /etc/init.d/networking script, we now turn to [Netplan][1]. Netplan is a command line utility for the configuration of networking on certain Linux distributions. Netplan uses YAML description files to configure network interfaces and, from those descriptions, will generate the necessary configuration options for any given renderer tool.
I want to show you how to use Netplan on Linux to configure a static IP address and a DHCP address. Ill be demonstrating on Ubuntu Server 18.04. One word of warning: the .yaml files you create for Netplan must be consistent in spacing, otherwise theyll fail to work. You dont have to use a specific amount of spacing for each line, it just has to remain consistent.
### The new configuration files
Open a terminal window (or log into your Ubuntu Server via SSH). You will find the new configuration files for Netplan in the /etc/netplan directory. Change into that directory with the command cd /etc/netplan. Once in that directory, you will probably only see a single file:
```
01-netcfg.yaml
```
You can create a new file or edit the default. If you opt to edit the default, I suggest making a copy with the command:
```
sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak
```
With your backup in place, youre ready to configure.
### Network Device Name
Before you configure your static IP address, youll need to know the name of device to be configured. To do that, you can issue the command ip a and find out which device is to be used (Figure 1).
![netplan][3]
Figure 1: Finding our device name with the ip a command.
[Used with permission][4]
Ill be configuring ens5 for a static IP address.
### Configuring a Static IP Address
Open the original .yaml file for editing with the command:
```
sudo nano /etc/netplan/01-netcfg.yaml
```
The layout of the file looks like this (note that the keys are all lowercase and the indentation matters):

network:
  version: 2
  renderer: networkd
  ethernets:
    DEVICE_NAME:
      dhcp4: yes/no
      addresses: [IP/NETMASK]
      gateway4: GATEWAY
      nameservers:
        addresses: [NAMESERVER, NAMESERVER]
Where:
* DEVICE_NAME is the actual device name to be configured.
* yes/no is an option to enable or disable dhcp4.
* IP is the IP address for the device.
* NETMASK is the netmask for the IP address.
* GATEWAY is the address for your gateway.
* NAMESERVER is the comma-separated list of DNS nameservers.
Heres a sample .yaml file:
```
network:
  version: 2
  renderer: networkd
  ethernets:
    ens5:
      dhcp4: no
      addresses: [192.168.1.230/24]
      gateway4: 192.168.1.254
      nameservers:
        addresses: [8.8.4.4,8.8.8.8]
```
Edit the above to fit your networking needs. Save and close that file.
Notice the netmask is no longer configured in the form 255.255.255.0. Instead, it is appended to the IP address in CIDR notation (for example, /24 is equivalent to 255.255.255.0).
### Testing the Configuration
Before we apply the change, lets test the configuration. To do that, issue the command:
```
sudo netplan try
```
The above command will validate the configuration before applying it. If it succeeds, you will see Configuration accepted. In other words, Netplan will attempt to apply the new settings to a running system. Should the new configuration file fail, Netplan will automatically revert to the previous working configuration. Should the new configuration work, it will be applied.
### Applying the New Configuration
If you are certain of your configuration file, you can skip the try option and go directly to applying the new options. The command for this is:
```
sudo netplan apply
```
At this point, you can issue the command ip a to see that your new address configurations are in place.
### Configuring DHCP
Although you probably wont be configuring your server for DHCP, its always good to know how to do this. For example, you might not know what static IP addresses are currently available on your network. You could configure the device for DHCP, get an IP address, and then reconfigure that address as static.
To use DHCP with Netplan, the configuration file would look something like this:
```
network:
  version: 2
  renderer: networkd
  ethernets:
    ens5:
      addresses: []
      dhcp4: true
      optional: true
```
Save and close that file. Test the file with:
```
sudo netplan try
```
Netplan should succeed and apply the DHCP configuration. You could then issue the ip a command, get the dynamically assigned address, and then reconfigure a static address. Or, you could leave it set to use DHCP (but seeing as how this is a server, you probably wont want to do that).
Should you have more than one interface, you could name the second .yaml configuration file 02-netcfg.yaml. Netplan will apply the configuration files in numerical order, so 01 will be applied before 02. Create as many configuration files as needed for your server.
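For instance, a second file for a hypothetical ens6 interface (the device name and addresses are placeholders) might look like this:
```
network:
  version: 2
  renderer: networkd
  ethernets:
    ens6:
      dhcp4: no
      addresses: [10.0.0.10/24]
      gateway4: 10.0.0.1
```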
### Thats All There Is
Believe it or not, thats all there is to using Netplan. Although it is a significant change to how were accustomed to configuring network addresses, its not all that hard to get used to. But this style of configuration is here to stay… so you will need to get used to it.
Learn more about Linux through the free ["Introduction to Linux" ][5]course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-configuration-tool-linux
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/jlwallen
[1]: https://netplan.io/
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan_1.jpg?itok=XuIsXWbV (netplan)
[4]: /licenses/category/used-permission
[5]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,114 @@
What is ZFS? Why People Use ZFS? [Explained for Beginners]
======
Today, we will take a look at ZFS, an advanced file system. We will discuss where it came from, what it is, and why it is so popular among techies and enterprise.
Even though Im from the US, I prefer to pronounce it ZedFS instead of ZeeFS because it sounds cooler. You are free to pronounce it however you like.
Note: You will see ZFS repeated many times in the article. When I talk about features and installation, Im talking about OpenZFS. ZFS (developed by Oracle) and OpenZFS have followed different paths since Oracle shut down OpenSolaris. (More on that later.)
### History of ZFS
The Z File System (ZFS) was created by [Matthew Ahrens and Jeff Bonwick][1] in 2001. ZFS was designed to be a next generation file system for [Sun Microsystems][2] [OpenSolaris][3]. In 2008, ZFS was ported to FreeBSD. The same year a project was started to port [ZFS to Linux][4]. However, since ZFS is licensed under the [Common Development and Distribution License][5], which is incompatible with the [GNU General Public License][6], it cannot be included in the Linux kernel. To get around this problem, most Linux distros offer methods to install ZFS.
Shortly after Oracle purchased Sun Microsystems, OpenSolaris became closed source. All further development of ZFS became closed source as well. Many of the developers of ZFS were unhappy about this turn of events. [Two-thirds of the core ZFS developers][1], including Ahrens and Bonwick, left Oracle due to this decision. They joined other companies and created the [OpenZFS project][7] in September of 2013. The project has spearheaded the open-source development of ZFS.
Lets go back to the license issue mentioned above. Since the OpenZFS project is separate from Oracle, some probably wonder why they dont change the license to something that is compatible with the GPL so it can be included in the Linux kernel. According to the [OpenZFS website][8], changing the license would involve contacting anyone who contributed code to the current OpenZFS implementation (including the initial, common ZFS code from OpenSolaris) and getting their permission to change the license. Since this job is near impossible (because some contributors may be dead or hard to find), they have decided to keep the license they have.
### What is ZFS? What are its features?
![ZFS filesystem][9]
As I said before, ZFS is an advanced file system. As such, it has some interesting [features][10]. Such as:
* Pooled storage
* Copy-on-write
* Snapshots
* Data integrity verification and automatic repair
* RAID-Z
* Maximum 16 Exabyte file size
* Maximum 256 Quadrillion Zettabytes storage
Lets break down a couple of those features.
#### Pooled Storage
Unlike most file systems, ZFS combines the features of a file system and a volume manager. This means that, unlike other file systems, ZFS can create a file system that spans a series of drives or a pool. Not only that, but you can add storage to a pool by adding another drive. ZFS will handle [partitioning and formatting][11].
![Pooled storage in ZFS][12]Pooled storage in ZFS
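As a sketch of what pooled storage looks like in practice (the pool name and device names are placeholders, and these commands destroy any existing data on the disks):
```
$ sudo zpool create mypool /dev/sdb /dev/sdc   # create a pool spanning two drives
$ sudo zpool add mypool /dev/sdd               # grow the pool by adding another drive
$ zpool list                                   # show pools and their capacity
```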
#### Copy-on-write
[Copy-on-write][13] is another interesting (and cool) feature. On most file systems, when data is overwritten, it is lost forever. On ZFS, the new information is written to a different block. Once the write is complete, the file systems metadata is updated to point to the new info. This ensures that if the system crashes (or something else happens) while the write is taking place, the old data will be preserved. It also means that the system does not need to run [fsck][14] after a system crash.
#### Snapshots
Copy-on-write leads into another ZFS feature: snapshots. ZFS uses snapshots to track changes in the file system. “[The snapshot][13] contains the original version of the file system, and the live filesystem contains any changes made since the snapshot was taken. No additional space is used. As new data is written to the live file system, new blocks are allocated to store this data.” If a file is deleted, the snapshot reference is removed as well. So, snapshots are mainly designed to track changes to files, but not the addition and creation of files.
Snapshots can be mounted as read-only to recover a past version of a file. It is also possible to roll back the live system to a previous snapshot. All changes made since the snapshot will be lost.
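A rough example of working with snapshots, assuming a pool named mypool with a dataset called data:
```
$ sudo zfs snapshot mypool/data@before-upgrade   # take a snapshot
$ zfs list -t snapshot                           # list existing snapshots
$ sudo zfs rollback mypool/data@before-upgrade   # revert; changes since the snapshot are lost
```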
#### Data integrity verification and automatic repair
Whenever new data is written to ZFS, it creates a checksum for that data. When that data is read, the checksum is verified. If the checksum does not match, then ZFS knows that an error has been detected. ZFS will then automatically attempt to correct the error.
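You can also trigger a full verification pass manually with a scrub, which reads every block in the pool and checks it against its checksum (the pool name is a placeholder):
```
$ sudo zpool scrub mypool   # verify and repair all data in the pool
$ zpool status mypool       # show scrub progress and any errors found
```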
#### RAID-Z
ZFS can handle RAID without requiring any extra software or hardware. Unsurprisingly, ZFS has its own implementation of RAID: RAID-Z. RAID-Z is actually a variation of RAID-5. However, it is designed to overcome the RAID-5 write hole error, “in which the data and parity information become inconsistent after an unexpected restart”. To use the basic [level of RAID-Z][15] (RAID-Z1), you need at least two disks for storage and one for [parity][16]. RAID-Z2 requires at least two storage drives and two drives for parity. RAID-Z3 requires at least two storage drives and three drives for parity. When drives are added to RAID-Z pools, they have to be added in multiples of two.
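Creating a RAID-Z pool looks much like creating a regular pool; here is a sketch of a RAID-Z1 pool over three placeholder disks:
```
$ sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
```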
#### Huge Storage potential
When ZFS was created, it was designed to be [the last word in file systems][17]. At a time when most file systems were 64-bit, the ZFS creators decided to jump right to 128-bit to future-proof it. This means that ZFS “offers 16 billion billion times the capacity of 32- or 64-bit systems”. In fact, Jeff Bonwick (one of the creators) said that [powering a fully populated 128-bit storage pool][18] would, literally, require more energy than boiling the oceans.
### How to Install ZFS?
If you want to use ZFS out of the box, it would require installing either [FreeBSD][19] or an [operating system using the illumos kernel][20]. [illumos][21] is a fork of the OpenSolaris kernel.
In fact, support for [ZFS is one of the main reasons why some experienced Linux users opt for BSD][22].
If you want to try ZFS on Linux, you can only use it as your storage file system. As far as I know, no Linux distro gives you the option to install ZFS on your root partition out of the box. If you are interested in trying ZFS on Linux, the [ZFS on Linux project][4] has a number of tutorials on how to do that.
### Caveat
This article has sung the praises of ZFS. Now let me tell you about a quick problem with ZFS: using RAID-Z [can be expensive][23] because of the number of drives you need to purchase to add storage space.
Have you ever used ZFS? What was your experience like? Let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][24].
--------------------------------------------------------------------------------
via: https://itsfoss.com/what-is-zfs/
作者:[John Paul][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[1]: https://wiki.gentoo.org/wiki/ZFS
[2]: http://en.wikipedia.org/wiki/Sun_Microsystems
[3]: http://en.wikipedia.org/wiki/Opensolaris
[4]: https://zfsonlinux.org/
[5]: https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License
[6]: https://en.wikipedia.org/wiki/GNU_General_Public_License
[7]: http://www.open-zfs.org/wiki/Main_Page
[8]: http://www.open-zfs.org/wiki/FAQ#Do_you_plan_to_release_OpenZFS_under_a_license_other_than_the_CDDL.3F
[9]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/what-is-zfs.png
[10]: https://wiki.archlinux.org/index.php/ZFS
[11]: https://www.howtogeek.com/175159/an-introduction-to-the-z-file-system-zfs-for-linux/
[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/zfs-overview.png
[13]: https://www.freebsd.org/doc/handbook/zfs-term.html
[14]: https://en.wikipedia.org/wiki/Fsck
[15]: https://wiki.archlinux.org/index.php/ZFS/Virtual_disks#Creating_and_Destroying_Zpools
[16]: https://www.pcmag.com/encyclopedia/term/60364/raid-parity
[17]: https://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/
[18]: https://blogs.oracle.com/bonwick/128-bit-storage:-are-you-high
[19]: https://www.freebsd.org/
[20]: https://wiki.illumos.org/display/illumos/Distributions
[21]: https://wiki.illumos.org/display/illumos/illumos+Home
[22]: https://itsfoss.com/why-use-bsd/
[23]: http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html
[24]: http://reddit.com/r/linuxusersgroup

View File

@ -0,0 +1,168 @@
XiatianSummer translating
13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know
======
Knowing keyboard shortcuts increases your productivity. Here are some useful Ubuntu shortcut keys that will help you use Ubuntu like a pro.
You can use an operating system with a combination of keyboard and mouse, but using keyboard shortcuts saves your time.
Note: The keyboard shortcuts mentioned in this list are intended for the Ubuntu 18.04 GNOME edition. Usually, most of them (if not all) should work on other Ubuntu versions as well, but I cannot vouch for it.
![Ubuntu keyboard shortcuts][1]
### Useful Ubuntu keyboard shortcuts
Lets have a look at some of the must-know keyboard shortcuts for Ubuntu GNOME. I have not included universal keyboard shortcuts like Ctrl+C (copy), Ctrl+V (paste) or Ctrl+S (save).
Note: The Super key in Linux refers to the key with the Windows logo. I have used capital letters in the shortcuts, but it doesnt mean you have to press the shift key. For example, T means the t key only, not Shift+t.
#### 1\. Super key: Opens Activities search
Super Key: Opens the activities menu
If you have to use just one keyboard shortcut on Ubuntu, this has to be the one.
You want to open an application? Press the super key and search for the application. If the application is not installed, it will even suggest applications from software center.
You want to see the running applications? Press super key and it will show you all the running GUI applications.
You want to use workspaces? Simply press the super key and you can see the workspaces option on the right-hand side.
#### 2\. Ctrl+Alt+T: Ubuntu terminal shortcut
![Ubuntu Terminal Shortcut][2]
Use Ctrl+Alt+T to open a terminal
You want to open a new terminal? The combination of the three keys Ctrl+Alt+T is what you need. This is my favorite keyboard shortcut in Ubuntu. I even mention it in various tutorials on Its FOSS when it involves opening a terminal.
#### 3\. Super+L or Ctrl+Alt+L: Locks the screen
Locking screen when you are not at your desk is one of the most basic security tips. Instead of going to the top right corner and then choosing the lock screen option, you can simply use the Super+L key combination.
Some systems also use Ctrl+Alt+L keys for locking the screen.
#### 4\. Super+D or Ctrl+Alt+D: Show desktop
Pressing Super+D minimizes all running application windows and shows the desktop.
Pressing Super+D again will open all the running applications windows as it was previously.
You may also use Ctrl+Alt+D for this purpose.
#### 5\. Super+A: Shows the application menu
You can open the application menu in Ubuntu 18.04 GNOME by clicking on the 9 dots at the bottom left of the screen. However, a quicker way would be to use the Super+A key combination.
It will show the application menu, where you can see the installed applications on your system and can also search for them.
You can use the Esc key to move out of the application menu screen.
#### 6\. Super+Tab or Alt+Tab: Switch between running applications
If you have more than one application running, you can switch between the applications using the Super+Tab or Alt+Tab key combinations.
Keep holding the super key and press tab, and youll see the application switcher appear. While holding the super key, keep tapping the tab key to select between applications. When you are at the desired application, release both the super and tab keys.
By default, the application switcher moves from left to right. If you want to move from right to left, use the Super+Shift+Tab key combination.
You can also use Alt key instead of Super here.
Tip: If there are multiple instances of an application, you can switch between those instances by using Super+` key combination.
#### 7\. Super+Arrow keys: Snap windows
<https://player.vimeo.com/video/289091549>
This is available in Windows as well. While using an application, press Super and left arrow key and the application will go to the left edge of the screen, taking half of the screen.
Similarly, pressing Super and right arrow keys will move the application to the right edge.
Super and up arrow keys will maximize the application window and super and down arrow will bring the application back to its usual self.
#### 8\. Super+M: Toggle notification tray
GNOME has a notification tray where you can see notifications for various system and application activities. You also have the calendar here.
![Notification Tray Ubuntu 18.04 GNOME][3]
Notification Tray
With Super+M key combination, you can open this notification area. If you press these keys again, an opened notification tray will be closed.
You can also use Super+V for toggling the notification tray.
#### 9\. Super+Space: Change input keyboard (for multilingual setup)
If you are multilingual, perhaps you have more than one keyboard layout installed on your system. For example, I use [Hindi on Ubuntu][4] along with English, and I have a Hindi (Devanagari) keyboard installed along with the default English one.
If you also use a multilingual setup, you can quickly change the input keyboard with the Super+Space shortcut.
#### 10\. Alt+F2: Run console
This is for power users. If you want to run a quick command, instead of opening a terminal and running the command there, you can use Alt+F2 to run the console.
![Alt+F2 to run commands in Ubuntu][5]
Console
This is particularly helpful when you have to use applications that can only be run from the terminal.
#### 11\. Ctrl+Q: Close an application window
If you have an application running, you can close the application window using the Ctrl+Q key combination. You can also use Ctrl+W for this purpose.
Alt+F4 is a more universal shortcut for closing an application window.
It does not work on a few applications, such as the default terminal in Ubuntu.
#### 12\. Ctrl+Alt+arrow: Move between workspaces
![Workspace switching][6]
Workspace switching
If you are one of the power users who use workspaces, you can use the Ctrl+Alt+Up arrow and Ctrl+Alt+Down arrow keys to switch between the workspaces.
#### 13\. Ctrl+Alt+Del: Log out
No! Unlike Windows, the famous combination of Ctrl+Alt+Del wont bring up a task manager in Linux (unless you use custom keyboard shortcuts for it).
![Log Out Ubuntu][7]
Log Out
In the normal GNOME desktop environment, you can bring up the power-off menu using the Ctrl+Alt+Del keys, but Ubuntu doesnt always follow the norms, and hence it opens the logout dialog box when you use Ctrl+Alt+Del in Ubuntu.
### Use custom keyboard shortcuts in Ubuntu
You are not limited to the default keyboard shortcuts. You can create your own custom keyboard shortcuts as you like.
Go to Settings->Devices->Keyboard. Youll see all the keyboard shortcuts here for your system. Scroll down to the bottom and youll see the Custom Shortcuts option.
![Add custom keyboard shortcut in Ubuntu][8]
You have to provide an easy-to-recognize name for the shortcut, the command that will be run when the key combination is used and, of course, the keys you are going to use for the shortcut.
### What are your favorite keyboard shortcuts in Ubuntu?
There is no end to shortcuts. If you want, you can have a look at all the possible [GNOME shortcuts][9] here and see if there are some more shortcuts you would like to use.
You can, and you should, also learn keyboard shortcuts for the applications you use most of the time. For example, I use Kazam for [screen recording][10], and the keyboard shortcuts help me a lot in pausing and resuming the recording.
What are your favorite Ubuntu shortcuts that you cannot live without?
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-shortcuts/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/ubuntu-keyboard-shortcuts.jpeg
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/ubuntu-terminal-shortcut.jpg
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/notification-tray-ubuntu-gnome.jpeg
[4]: https://itsfoss.com/type-indian-languages-ubuntu/
[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/console-alt-f2-ubuntu-gnome.jpeg
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/workspace-switcher-ubuntu.png
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/log-out-ubuntu.jpeg
[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/custom-keyboard-shortcut.jpg
[9]: https://wiki.gnome.org/Design/OS/KeyboardShortcuts
[10]: https://itsfoss.com/best-linux-screen-recorders/

View File

@ -0,0 +1,110 @@
3 open source log aggregation tools
======
Log aggregation systems can help with troubleshooting and other tasks. Here are three top options.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr)
How is metrics aggregation different from log aggregation? Cant logs include metrics? Cant log aggregation systems do the same things as metrics aggregation systems?
These are questions I hear often. Ive also seen vendors pitching their log aggregation system as the solution to all observability problems. Log aggregation is a valuable tool, but it isnt normally a good tool for time-series data.
A couple of valuable features in a time-series metrics aggregation system are the regular interval and the storage system customized specifically for time-series data. The regular interval allows a user to derive real mathematical results consistently. If a log aggregation system is collecting metrics in a regular interval, it can potentially work the same way. However, the storage system isnt optimized for the types of queries that are typical in a metrics aggregation system. These queries will take more resources and time to process using storage systems found in log aggregation tools.
So, we know a log aggregation system is likely not suitable for time-series data, but what is it good for? A log aggregation system is a great place for collecting event data. These are irregular activities that are significant. An example might be access logs for a web service. These are significant because we want to know what is accessing our systems and when. Another example would be an application error condition—because it is not a normal operating condition, it might be valuable during troubleshooting.
A handful of rules for logging:
* DO include a timestamp
* DO format in JSON
* DONT log insignificant events
* DO log all application errors
* MAYBE log warnings
* DO turn on logging
* DO write messages in a human-readable form
* DONT log informational data in production
* DONT log anything a human cant read or react to
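To illustrate a few of those rules, here is a hypothetical JSON-formatted event log line (the field names are just an example, not a standard):
```
{"timestamp": "2018-09-04T12:01:32Z", "level": "error", "service": "checkout", "message": "payment gateway timed out after 30s", "request_id": "f3a9c2"}
```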
### Cloud costs
When investigating log aggregation tools, the cloud might seem like an attractive option. However, it can come with significant costs. Logs represent a lot of data when aggregated across hundreds or thousands of hosts and applications. The ingestion, storage, and retrieval of that data are expensive in cloud-based systems.
As a point of reference from a real system, a collection of around 500 nodes with a few hundred apps results in 200GB of log data per day. Theres probably room for improvement in that system, but even reducing it by half will cost nearly $10,000 per month in many SaaS offerings. This often includes retention of only 30 days, which isnt very long if you want to look at trending data year-over-year.
This isnt to discourage the use of these systems, as they can be very valuable—especially for smaller organizations. The purpose is to point out that there could be significant costs, and it can be discouraging when they are realized. The rest of this article will focus on open source and commercial solutions that are self-hosted.
### Tool options
#### ELK
[ELK][1], short for Elasticsearch, Logstash, and Kibana, is the most popular open source log aggregation tool on the market. Its used by Netflix, Facebook, Microsoft, LinkedIn, and Cisco. The three components are all developed and maintained by [Elastic][2]. [Elasticsearch][3] is essentially a NoSQL, Lucene search engine implementation. [Logstash][4] is a log pipeline system that can ingest data, transform it, and load it into a store like Elasticsearch. [Kibana][5] is a visualization layer on top of Elasticsearch.
A few years ago, Beats were introduced. Beats are data collectors. They simplify the process of shipping data to Logstash. Instead of needing to understand the proper syntax of each type of log, a user can install a Beat that will export NGINX logs or Envoy proxy logs properly so they can be used effectively within Elasticsearch.
When installing a production-level ELK stack, a few other pieces might be included, like [Kafka][6], [Redis][7], and [NGINX][8]. Also, it is common to replace Logstash with Fluentd, which well discuss later. This system can be complex to operate, which in its early days led to a lot of problems and complaints. These have largely been fixed, but its still a complex system, so you might not want to try it if youre a smaller operation.
That said, there are services available so you dont have to worry about that. [Logz.io][9] will run it for you, but its list pricing is a little steep if you have a lot of data. Of course, youre probably smaller and may not have a lot of data. If you cant afford Logz.io, you could look at something like [AWS Elasticsearch Service][10] (ES). ES is a service Amazon Web Services (AWS) offers that makes it very easy to get Elasticsearch working quickly. It also has tooling to get all AWS logs into ES using Lambda and S3. This is a much cheaper option, but there is some management required and there are a few limitations.
Elastic, the parent company of the stack, [offers][11] a more robust product that uses the open core model and provides additional options around analytics tools and reporting. It can also be hosted on Google Cloud Platform or AWS. This might be the best option, as this combination of tools and hosting platforms offers a cheaper solution than most SaaS options and still provides a lot of value. This system could effectively replace or give you the capability of a [security information and event management][12] (SIEM) system.
The ELK stack also offers great visualization tools through Kibana, but it lacks an alerting function. Elastic provides alerting functionality within the paid X-Pack add-on, but there is nothing built in for the open source system. Yelp has created a solution to this problem, called [ElastAlert][13], and there are probably others. This additional piece of software is fairly robust, but it increases the complexity of an already complex system.
#### Graylog
[Graylog][14] has recently risen in popularity, but it got its start when Lennart Koopmann created it back in 2010. A company was born with the same name two years later. Despite its increasing use, it still lags far behind the ELK stack. This also means it has fewer community-developed features, but it can use the same Beats that the ELK stack uses. Graylog has gained praise in the Go community with the introduction of the Graylog Collector Sidecar written in [Go][15].
Graylog uses Elasticsearch, [MongoDB][16], and the Graylog Server under the hood. This makes it as complex to run as the ELK stack and maybe a little more. However, Graylog comes with alerting built into the open source version, as well as several other notable features like streaming, message rewriting, and geolocation.
The streaming feature allows for data to be routed to specific Streams in real time while they are being processed. With this feature, a user can see all database errors in a single Stream and web server errors in a different Stream. Alerts can even be based on these Streams as new items are added or when a threshold is exceeded. Latency is probably one of the biggest issues with log aggregation systems, and Streams eliminate that issue in Graylog. As soon as the log comes in, it can be routed to other systems through a Stream without being processed fully.
The message rewriting feature uses the open source rules engine [Drools][17]. This allows all incoming messages to be evaluated against a user-defined rules file enabling a message to be dropped (called Blacklisting), a field to be added or removed, or the message to be modified.
The coolest feature might be Graylogs geolocation capability, which supports plotting IP addresses on a map. This is a fairly common feature and is available in Kibana as well, but it adds a lot of value—especially if you want to use this as your SIEM system. The geolocation functionality is provided in the open source version of the system.
Graylog, the company, charges for support on the open source version if you want it. It also offers an open core model for its Enterprise version that offers archiving, audit logging, and additional support. There arent many other options for support or hosting, so youll likely be on your own if you dont use Graylog (the company).
#### Fluentd
[Fluentd][18] was developed at [Treasure Data][19], and the [CNCF][20] has adopted it as an Incubating project. It was written in C and Ruby and is recommended by [AWS][21] and [Google Cloud][22]. Fluentd has become a common replacement for Logstash in many installations. It acts as a local aggregator to collect all node logs and send them off to central storage systems. It is not a log aggregation system.
It uses a robust plugin system to provide quick and easy integrations with different data sources and data outputs. Since there are over 500 plugins available, most of your use cases should be covered. If they arent, this sounds like an opportunity to contribute back to the open source community.
Fluentd is a common choice in Kubernetes environments due to its low memory requirements (just tens of megabytes) and its high throughput. In an environment like [Kubernetes][23], where each pod has a Fluentd sidecar, memory consumption will increase linearly with each new pod created. Using Fluentd will drastically reduce your system utilization. This is becoming a common problem with tools developed in Java that are intended to run one per node where the memory overhead hasnt been a major issue.
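As a rough sketch of how Fluentd is configured (the paths are placeholders, and the elasticsearch output requires the separate fluent-plugin-elasticsearch plugin), a minimal config that tails an NGINX access log and ships it to Elasticsearch might look like this:
```
<source>
  @type tail                              # follow the file like `tail -f`
  path /var/log/nginx/access.log          # the log to collect
  pos_file /var/log/td-agent/nginx.pos    # remembers the read position across restarts
  tag nginx.access                        # tag used for routing below
  <parse>
    @type nginx                           # parse using the built-in NGINX format
  </parse>
</source>

<match nginx.**>
  @type elasticsearch                     # output plugin, installed separately
  host localhost
  port 9200
  logstash_format true                    # index as logstash-YYYY.MM.DD
</match>
```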
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/open-source-log-aggregation-tools
作者:[Dan Barker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barkerd427
[1]: https://www.elastic.co/webinars/introduction-elk-stack
[2]: https://www.elastic.co/
[3]: https://www.elastic.co/products/elasticsearch
[4]: https://www.elastic.co/products/logstash
[5]: https://www.elastic.co/products/kibana
[6]: http://kafka.apache.org/
[7]: https://redis.io/
[8]: https://www.nginx.com/
[9]: https://logz.io/
[10]: https://aws.amazon.com/elasticsearch-service/
[11]: https://www.elastic.co/cloud
[12]: https://en.wikipedia.org/wiki/Security_information_and_event_management
[13]: https://github.com/Yelp/elastalert
[14]: https://www.graylog.org/
[15]: https://opensource.com/tags/go
[16]: https://www.mongodb.com/
[17]: https://www.drools.org/
[18]: https://www.fluentd.org/
[19]: https://www.treasuredata.com/
[20]: https://www.cncf.io/
[21]: https://aws.amazon.com/blogs/aws/all-your-data-fluentd/
[22]: https://cloud.google.com/logging/docs/agent/
[23]: https://opensource.com/resources/what-is-kubernetes


@ -0,0 +1,644 @@
How To List Available Package Groups In Linux
======
To install a package in Linux, we use the distribution's package manager.
The package manager plays a major role in Linux; it is the tool admins use most often.
But what if you want to install a group of packages in one shot?
Is that possible in Linux? If so, which command does it?
Yes, this can be done with the package manager. Each package manager has its own option for this task; as far as I know, the apt and apt-get package managers do not offer it.
On Debian-based systems, we need to use the tasksel command instead of the official package managers, apt and apt-get.
What is the benefit of installing a group of packages? It saves a lot of effort: to set up LAMP piece by piece you would have to name many individual packages, but a single package-group command pulls them all in at once.
Say, for example, the application team asks you to install LAMP, but you don't know which packages need to be installed; this is where package groups come into the picture.
The group option is a handy feature that installs a set of software in a single step, without headaches.
A package group is a collection of packages that serve a common purpose, for instance System Tools or Sound and Video. Installing a package group pulls in a set of dependent packages, saving considerable time.
**Suggested Read :**
**(#)** [How To List Installed Packages By Size (Largest) On Linux][1]
**(#)** [How To View/List The Available Packages Updates In Linux][2]
**(#)** [How To View A Particular Package Installed/Updated/Upgraded/Removed/Erased Date On Linux][3]
**(#)** [How To View Detailed Information About A Package In Linux][4]
**(#)** [How To Search If A Package Is Available On Your Linux Distribution Or Not][5]
**(#)** [Newbies corner A Graphical frontend tool for Linux Package Manager][6]
**(#)** [Linux Expert should knows, list of Command line Package Manager & Usage][7]
### How To List Available Package Groups In CentOS/RHEL Systems
RHEL and CentOS systems use RPM packages, so we can use the Yum package manager to get this information.
YUM stands for Yellowdog Updater, Modified; it is an open-source, command-line, front-end package-management utility for RPM-based systems such as Red Hat Enterprise Linux (RHEL) and CentOS.
Yum is the primary tool for getting, installing, deleting, querying, and managing RPM packages from distribution repositories, as well as other third-party repositories.
**Suggested Read :** [YUM Command To Manage Packages on RHEL/CentOS Systems][8]
```
# yum grouplist
Loaded plugins: fastestmirror, security
Setting up Group Process
Loading mirror speeds from cached hostfile
* epel: epel.mirror.constant.com
Installed Groups:
Base
E-mail server
Graphical Administration Tools
Hardware monitoring utilities
Legacy UNIX compatibility
Milkymist
Networking Tools
Performance Tools
Perl Support
Security Tools
Available Groups:
Additional Development
Backup Client
Backup Server
CIFS file server
Client management tools
Compatibility libraries
Console internet tools
Debugging Tools
Desktop
.
.
Available Language Groups:
Afrikaans Support [af]
Albanian Support [sq]
Amazigh Support [ber]
Arabic Support [ar]
Armenian Support [hy]
Assamese Support [as]
Azerbaijani Support [az]
.
.
Done
```
To list the packages associated with a group, run the command below. In this example, we list the packages associated with the “Performance Tools” group.
```
# yum groupinfo "Performance Tools"
Loaded plugins: fastestmirror, security
Setting up Group Process
Loading mirror speeds from cached hostfile
* epel: ewr.edge.kernel.org
Group: Performance Tools
Description: Tools for diagnosing system and application-level performance problems.
Mandatory Packages:
blktrace
sysstat
Default Packages:
dstat
iotop
latencytop
latencytop-tui
oprofile
perf
powertop
seekwatcher
Optional Packages:
oprofile-jit
papi
sdparm
sg3_utils
tiobench
tuned
tuned-utils
```
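Once you know a group's name, installing it is a single command. A quick sketch using the group shown above (`yum groupremove` undoes it):
```
# yum groupinstall "Performance Tools"
```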
### How To List Available Package Groups In Fedora
Fedora uses the DNF package manager, so we can use it to get this information.
DNF stands for Dandified yum. DNF is the next generation of the yum package manager (a fork of Yum) that uses the hawkey/libsolv library as its backend. Aleš Kozumplík started working on DNF in Fedora 18, and it was finally launched in Fedora 22.
The dnf command is used to install, update, search for, and remove packages on Fedora 22 and later. It automatically resolves dependencies and makes package installation smooth and trouble-free.
Yum was replaced by DNF because of several long-standing problems in Yum that were never solved. Why not simply patch the Yum issues? Aleš Kozumplík explains that patching was technically hard, the YUM team would not accept the changes immediately, and, most critically, YUM is 56K lines of code while DNF is 29K. So there was no option for further development, except to fork.
**Suggested Read :** [DNF (Fork of YUM) Command To Manage Packages on Fedora System][9]
```
# dnf grouplist
Last metadata expiration check: 0:00:00 ago on Sun 09 Sep 2018 07:10:36 PM IST.
Available Environment Groups:
Fedora Custom Operating System
Minimal Install
Fedora Server Edition
Fedora Workstation
Fedora Cloud Server
KDE Plasma Workspaces
Xfce Desktop
LXDE Desktop
Hawaii Desktop
LXQt Desktop
Cinnamon Desktop
MATE Desktop
Sugar Desktop Environment
Development and Creative Workstation
Web Server
Infrastructure Server
Basic Desktop
Installed Groups:
C Development Tools and Libraries
Development Tools
Available Groups:
3D Printing
Administration Tools
Ansible node
Audio Production
Authoring and Publishing
Books and Guides
Cloud Infrastructure
Cloud Management Tools
Container Management
D Development Tools and Libraries
.
.
RPM Development Tools
Security Lab
Text-based Internet
Window Managers
GNOME Desktop Environment
Graphical Internet
KDE (K Desktop Environment)
Fonts
Games and Entertainment
Hardware Support
Sound and Video
System Tools
```
To list the packages associated with a group, run the command below. In this example, we list the packages associated with the “Editors” group.
```
# dnf groupinfo Editors
Last metadata expiration check: 0:04:57 ago on Sun 09 Sep 2018 07:10:36 PM IST.
Group: Editors
Description: Sometimes called text editors, these are programs that allow you to create and edit text files. This includes Emacs and Vi.
Optional Packages:
code-editor
cssed
emacs
emacs-auctex
emacs-bbdb
emacs-ess
emacs-vm
geany
gobby
jed
joe
leafpad
nedit
poedit
psgml
vim-X11
vim-enhanced
xemacs
xemacs-packages-base
xemacs-packages-extra
xemacs-xft
xmlcopyeditor
zile
```
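As with yum, the group can then be installed in one shot. For example:
```
# dnf group install "Editors"
```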
### How To List Available Package Groups In openSUSE Systems
openSUSE uses the zypper package manager, so we can use it to get this information.
Zypper is a command-line package manager for SUSE and openSUSE distributions. It is used to install, update, search for, and remove packages, manage repositories, perform various queries, and more. Zypper is the command-line interface to the ZYpp system management library (libzypp).
**Suggested Read :** [Zypper Command To Manage Packages On openSUSE & suse Systems][10]
```
# zypper patterns
Loading repository data...
Warning: Repository 'Update Repository (Non-Oss)' appears to be outdated. Consider using a different mirror or server.
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
S | Name | Version | Repository | Dependency
---|----------------------|---------------|-----------------------|-----------
| 64bit | 20150918-25.1 | Main Repository (OSS) |
| apparmor | 20150918-25.1 | Main Repository (OSS) |
i | apparmor | 20150918-25.1 | @System |
| base | 20150918-25.1 | Main Repository (OSS) |
i+ | base | 20150918-25.1 | @System |
| books | 20150918-25.1 | Main Repository (OSS) |
| console | 20150918-25.1 | Main Repository (OSS) |
| devel_C_C++ | 20150918-25.1 | Main Repository (OSS) |
i | enhanced_base | 20150918-25.1 | @System |
| enlightenment | 20150918-25.1 | Main Repository (OSS) |
| file_server | 20150918-25.1 | Main Repository (OSS) |
| fonts | 20150918-25.1 | Main Repository (OSS) |
i | fonts | 20150918-25.1 | @System |
| games | 20150918-25.1 | Main Repository (OSS) |
i | games | 20150918-25.1 | @System |
| gnome | 20150918-25.1 | Main Repository (OSS) |
| gnome_basis | 20150918-25.1 | Main Repository (OSS) |
i | imaging | 20150918-25.1 | @System |
| kde | 20150918-25.1 | Main Repository (OSS) |
i+ | kde | 20150918-25.1 | @System |
| kde_plasma | 20150918-25.1 | Main Repository (OSS) |
i | kde_plasma | 20150918-25.1 | @System |
| lamp_server | 20150918-25.1 | Main Repository (OSS) |
| laptop | 20150918-25.1 | Main Repository (OSS) |
i+ | laptop | 20150918-25.1 | @System |
| lxde | 20150918-25.1 | Main Repository (OSS) |
| lxqt | 20150918-25.1 | Main Repository (OSS) |
i | multimedia | 20150918-25.1 | @System |
| network_admin | 20150918-25.1 | Main Repository (OSS) |
| non_oss | 20150918-25.1 | Main Repository (OSS) |
i | non_oss | 20150918-25.1 | @System |
| office | 20150918-25.1 | Main Repository (OSS) |
i | office | 20150918-25.1 | @System |
| print_server | 20150918-25.1 | Main Repository (OSS) |
| remote_desktop | 20150918-25.1 | Main Repository (OSS) |
| x11 | 20150918-25.1 | Main Repository (OSS) |
i+ | x11 | 20150918-25.1 | @System |
| x86 | 20150918-25.1 | Main Repository (OSS) |
| xen_server | 20150918-25.1 | Main Repository (OSS) |
| xfce | 20150918-25.1 | Main Repository (OSS) |
| xfce_basis | 20150918-25.1 | Main Repository (OSS) |
| yast2_basis | 20150918-25.1 | Main Repository (OSS) |
i | yast2_basis | 20150918-25.1 | @System |
| yast2_install_wf | 20150918-25.1 | Main Repository (OSS) |
```
To list the packages associated with a pattern, run the command below. In this example, we list the packages associated with the “file_server” pattern.
Additionally, zypper allows a user to perform the same action with several different options.
```
# zypper info file_server
Loading repository data...
Warning: Repository 'Update Repository (Non-Oss)' appears to be outdated. Consider using a different mirror or server.
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Information for pattern file_server:
------------------------------------
Repository : Main Repository (OSS)
Name : file_server
Version : 20150918-25.1
Arch : x86_64
Vendor : openSUSE
Installed : No
Visible to User : Yes
Summary : File Server
Description :
File services to host files so that they may be accessed or retrieved by other computers on the same network. This includes the FTP, SMB, and NFS protocols.
Contents :
S | Name | Type | Dependency
---|-------------------------------|---------|------------
i+ | patterns-openSUSE-base | package | Required
| patterns-openSUSE-file_server | package | Required
| nfs-kernel-server | package | Recommended
i | nfsidmap | package | Recommended
i | samba | package | Recommended
i | samba-client | package | Recommended
i | samba-winbind | package | Recommended
| tftp | package | Recommended
| vsftpd | package | Recommended
| yast2-ftp-server | package | Recommended
| yast2-nfs-server | package | Recommended
i | yast2-samba-server | package | Recommended
| yast2-tftp-server | package | Recommended
```
Alternatively, the same information can be listed with the pattern-info subcommand.
```
# zypper pattern-info file_server
Loading repository data...
Warning: Repository 'Update Repository (Non-Oss)' appears to be outdated. Consider using a different mirror or server.
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Information for pattern file_server:
------------------------------------
Repository : Main Repository (OSS)
Name : file_server
Version : 20150918-25.1
Arch : x86_64
Vendor : openSUSE
Installed : No
Visible to User : Yes
Summary : File Server
Description :
File services to host files so that they may be accessed or retrieved by other computers on the same network. This includes the FTP, SMB, and NFS protocols.
Contents :
S | Name | Type | Dependency
---|-------------------------------|---------|------------
i+ | patterns-openSUSE-base | package | Required
| patterns-openSUSE-file_server | package | Required
| nfs-kernel-server | package | Recommended
i | nfsidmap | package | Recommended
i | samba | package | Recommended
i | samba-client | package | Recommended
i | samba-winbind | package | Recommended
| tftp | package | Recommended
| vsftpd | package | Recommended
| yast2-ftp-server | package | Recommended
| yast2-nfs-server | package | Recommended
i | yast2-samba-server | package | Recommended
| yast2-tftp-server | package | Recommended
```
The pattern type can also be given as a plain argument to zypper info.
```
# zypper info pattern file_server
Loading repository data...
Warning: Repository 'Update Repository (Non-Oss)' appears to be outdated. Consider using a different mirror or server.
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Information for pattern file_server:
------------------------------------
Repository : Main Repository (OSS)
Name : file_server
Version : 20150918-25.1
Arch : x86_64
Vendor : openSUSE
Installed : No
Visible to User : Yes
Summary : File Server
Description :
File services to host files so that they may be accessed or retrieved by other computers on the same network. This includes the FTP, SMB, and NFS protocols.
Contents :
S | Name | Type | Dependency
---|-------------------------------|---------|------------
i+ | patterns-openSUSE-base | package | Required
| patterns-openSUSE-file_server | package | Required
| nfs-kernel-server | package | Recommended
i | nfsidmap | package | Recommended
i | samba | package | Recommended
i | samba-client | package | Recommended
i | samba-winbind | package | Recommended
| tftp | package | Recommended
| vsftpd | package | Recommended
| yast2-ftp-server | package | Recommended
| yast2-nfs-server | package | Recommended
i | yast2-samba-server | package | Recommended
| yast2-tftp-server | package | Recommended
```
Finally, the -t option specifies the type explicitly.
```
# zypper info -t pattern file_server
Loading repository data...
Warning: Repository 'Update Repository (Non-Oss)' appears to be outdated. Consider using a different mirror or server.
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Information for pattern file_server:
------------------------------------
Repository : Main Repository (OSS)
Name : file_server
Version : 20150918-25.1
Arch : x86_64
Vendor : openSUSE
Installed : No
Visible to User : Yes
Summary : File Server
Description :
File services to host files so that they may be accessed or retrieved by other computers on the same network. This includes the FTP, SMB, and NFS protocols.
Contents :
S | Name | Type | Dependency
---|-------------------------------|---------|------------
i+ | patterns-openSUSE-base | package | Required
| patterns-openSUSE-file_server | package | Required
| nfs-kernel-server | package | Recommended
i | nfsidmap | package | Recommended
i | samba | package | Recommended
i | samba-client | package | Recommended
i | samba-winbind | package | Recommended
| tftp | package | Recommended
| vsftpd | package | Recommended
| yast2-ftp-server | package | Recommended
| yast2-nfs-server | package | Recommended
i | yast2-samba-server | package | Recommended
| yast2-tftp-server | package | Recommended
```
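To actually install a pattern, pass the same type option to zypper install. For example:
```
# zypper install -t pattern file_server
```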
### How To List Available Package Groups In Debian/Ubuntu Systems
Since the apt and apt-get package managers don't offer this option on Debian/Ubuntu-based systems, we use the tasksel command to get this information.
[Tasksel][11] is a handy tool for Debian/Ubuntu systems that installs a group of software in a single step. Tasks are defined in `.desc` files located under `/usr/share/tasksel`.
By default, tasksel is installed on Debian systems as part of the Debian installer, but it is not installed on Ubuntu desktop editions. The functionality is similar to that of the meta-packages offered by package managers.
Tasksel offers a simple user interface based on zenity (a pop-up graphical dialog box on the command line).
**Suggested Read :** [Tasksel Install Group of Software in A Single Click on Debian/Ubuntu][12]
```
# tasksel --list-task
u kubuntu-live Kubuntu live CD
u lubuntu-live-gtk Lubuntu live CD (GTK part)
u ubuntu-budgie-live Ubuntu Budgie live CD
u ubuntu-live Ubuntu live CD
u ubuntu-mate-live Ubuntu MATE Live CD
u ubuntustudio-dvd-live Ubuntu Studio live DVD
u vanilla-gnome-live Ubuntu GNOME live CD
u xubuntu-live Xubuntu live CD
u cloud-image Ubuntu Cloud Image (instance)
u dns-server DNS server
u kubuntu-desktop Kubuntu desktop
u kubuntu-full Kubuntu full
u lamp-server LAMP server
u lubuntu-core Lubuntu minimal installation
u lubuntu-desktop Lubuntu Desktop
u lubuntu-gtk-core Lubuntu minimal installation (GTK part)
u lubuntu-gtk-desktop Lubuntu Desktop (GTK part)
u lubuntu-qt-core Lubuntu minimal installation (Qt part)
u lubuntu-qt-desktop Lubuntu Qt Desktop (Qt part)
u mail-server Mail server
u postgresql-server PostgreSQL database
i print-server Print server
u samba-server Samba file server
u tomcat-server Tomcat Java server
u ubuntu-budgie-desktop Ubuntu Budgie desktop
i ubuntu-desktop Ubuntu desktop
u ubuntu-mate-core Ubuntu MATE minimal
u ubuntu-mate-desktop Ubuntu MATE desktop
i ubuntu-usb Ubuntu desktop USB
u ubuntustudio-audio Audio recording and editing suite
u ubuntustudio-desktop Ubuntu Studio desktop
u ubuntustudio-desktop-core Ubuntu Studio minimal DE installation
u ubuntustudio-fonts Large selection of font packages
u ubuntustudio-graphics 2D/3D creation and editing suite
u ubuntustudio-photography Photograph touchup and editing suite
u ubuntustudio-publishing Publishing applications
u ubuntustudio-video Video creation and editing suite
u vanilla-gnome-desktop Vanilla GNOME desktop
u xubuntu-core Xubuntu minimal installation
u xubuntu-desktop Xubuntu desktop
u openssh-server OpenSSH server
u server Basic Ubuntu server
```
To see what a task provides, run the command below. In this example, we display the description of the “lamp-server” task.
```
# tasksel --task-desc "lamp-server"
Selects a ready-made Linux/Apache/MySQL/PHP server.
```
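To install a task from the command line, use the install subcommand. For example:
```
# tasksel install lamp-server
```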
### How To List Available Package Groups In Arch Linux-based Systems
Arch Linux-based systems use the pacman package manager, so we can use it to get this information.
pacman stands for package manager utility. It is a command-line utility to install, build, remove, and manage Arch Linux packages. pacman uses libalpm (the Arch Linux Package Management (ALPM) library) as a back-end to perform all these actions.
**Suggested Read :** [Pacman Command To Manage Packages On Arch Linux Based Systems][13]
```
# pacman -Sg
base-devel
base
multilib-devel
gnome-extra
kde-applications
kdepim
kdeutils
kdeedu
kf5
kdemultimedia
gnome
plasma
kdegames
kdesdk
kdebase
xfce4
fprint
kdegraphics
kdenetwork
kdeadmin
kf5-aids
kdewebdev
.
.
dlang-ldc
libretro
ring
lxqt
non-daw
non
alsa
qtcurve
realtime
sugar-fructose
tesseract-data
vim-plugins
```
To list the packages associated with a group, run the command below. In this example, we list the packages associated with the “gnome” group.
```
# pacman -Sg gnome
gnome baobab
gnome cheese
gnome eog
gnome epiphany
gnome evince
gnome file-roller
gnome gdm
gnome gedit
gnome gnome-backgrounds
gnome gnome-calculator
gnome gnome-calendar
gnome gnome-characters
gnome gnome-clocks
gnome gnome-color-manager
gnome gnome-contacts
gnome gnome-control-center
gnome gnome-dictionary
gnome gnome-disk-utility
gnome gnome-documents
gnome gnome-font-viewer
.
.
gnome sushi
gnome totem
gnome tracker
gnome tracker-miners
gnome vino
gnome xdg-user-dirs-gtk
gnome yelp
gnome gnome-boxes
gnome gnome-software
gnome simple-scan
```
Alternatively, running pacman's install command on a group name lists its members and prompts for a selection; press Ctrl+C to abort without installing anything.
```
# pacman -S gnome
:: There are 64 members in group gnome:
:: Repository extra
1) baobab 2) cheese 3) eog 4) epiphany 5) evince 6) file-roller 7) gdm 8) gedit 9) gnome-backgrounds 10) gnome-calculator 11) gnome-calendar 12) gnome-characters 13) gnome-clocks
14) gnome-color-manager 15) gnome-contacts 16) gnome-control-center 17) gnome-dictionary 18) gnome-disk-utility 19) gnome-documents 20) gnome-font-viewer 21) gnome-getting-started-docs
22) gnome-keyring 23) gnome-logs 24) gnome-maps 25) gnome-menus 26) gnome-music 27) gnome-photos 28) gnome-screenshot 29) gnome-session 30) gnome-settings-daemon 31) gnome-shell
32) gnome-shell-extensions 33) gnome-system-monitor 34) gnome-terminal 35) gnome-themes-extra 36) gnome-todo 37) gnome-user-docs 38) gnome-user-share 39) gnome-video-effects 40) grilo-plugins
41) gvfs 42) gvfs-afc 43) gvfs-goa 44) gvfs-google 45) gvfs-gphoto2 46) gvfs-mtp 47) gvfs-nfs 48) gvfs-smb 49) mousetweaks 50) mutter 51) nautilus 52) networkmanager 53) orca 54) rygel
55) sushi 56) totem 57) tracker 58) tracker-miners 59) vino 60) xdg-user-dirs-gtk 61) yelp
:: Repository community
62) gnome-boxes 63) gnome-software 64) simple-scan
Enter a selection (default=all): ^C
Interrupt signal received
```
To find out exactly how many packages a group contains, run the following command.
```
# pacman -Sg gnome | wc -l
64
```
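You can also check which members of a group are already installed by using pacman's query mode:
```
# pacman -Qg gnome
```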
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/
作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/prakash/
[1]: https://www.2daygeek.com/how-to-list-installed-packages-by-size-largest-on-linux/
[2]: https://www.2daygeek.com/how-to-view-list-the-available-packages-updates-in-linux/
[3]: https://www.2daygeek.com/how-to-view-a-particular-package-installed-updated-upgraded-removed-erased-date-on-linux/
[4]: https://www.2daygeek.com/how-to-view-detailed-information-about-a-package-in-linux/
[5]: https://www.2daygeek.com/how-to-search-if-a-package-is-available-on-your-linux-distribution-or-not/
[6]: https://www.2daygeek.com/list-of-graphical-frontend-tool-for-linux-package-manager/
[7]: https://www.2daygeek.com/list-of-command-line-package-manager-for-linux/
[8]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[9]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[11]: https://wiki.debian.org/tasksel
[12]: https://www.2daygeek.com/tasksel-install-group-of-software-in-a-single-click-or-single-command-on-debian-ubuntu/
[13]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/


@ -0,0 +1,109 @@
Randomize your MAC address using NetworkManager
======
![](https://fedoramagazine.org/wp-content/uploads/2018/09/randomizemacaddress-816x345.png)
Today, users run their notebooks everywhere. To stay connected you use the local wifi to access the internet, on the couch at home or in a little cafe with your favorite coffee. But modern hotspots track you based on your MAC address, [an address that is unique per network card][1], and in this way identify your device. Read more below about how to avoid this kind of tracking.
Why is this a problem? Many people use the word “privacy” to talk about this issue. But the concern is not about someone accessing the private contents of your laptop (that's a separate issue). Instead, it's about legibility — in simple terms, the ability to be easily counted and tracked. You can and should [read more about legibility][2]. But the bottom line is legibility gives the tracker power over the tracked. For instance, timed WiFi leases at the airport can only be enforced when you're legible.
Since a fixed MAC address for your laptop is so legible (easily tracked), you should change it often. A random address is a good choice. Since MAC addresses are only used within a local network, a random MAC address is unlikely to cause a [collision.][3]
### Configuring NetworkManager
To apply randomized MAC addresses by default to all WiFi connections, create the following file, `/etc/NetworkManager/conf.d/00-macrandomize.conf`:
```
[device]
wifi.scan-rand-mac-address=yes
[connection]
wifi.cloned-mac-address=stable
ethernet.cloned-mac-address=stable
connection.stable-id=${CONNECTION}/${BOOT}
```
Afterward, restart NetworkManager:
```
systemctl restart NetworkManager
```
Set `cloned-mac-address` to `stable` to generate the same hashed MAC every time a NetworkManager connection activates, but use a different MAC with each connection. To get a truly random MAC with every activation, use `random` instead.
The `stable` setting is useful to get the same IP address from DHCP, or a captive portal might remember your login status based on the MAC address. With `random` you may be required to re-authenticate (or click “I agree”) on every connect. You probably want `random` for that airport WiFi. See the NetworkManager [blog post][4] for a more detailed discussion and instructions for using nmcli to configure specific connections from the terminal.
To see your current MAC addresses, use `ip link`. The MAC follows the word `ether`.
```
$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:5f:d5:4e brd ff:ff:ff:ff:ff:ff
3: wlp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT group default qlen 1000
link/ether 52:54:00:03:23:59 brd ff:ff:ff:ff:ff:ff
```
### When not to randomize your MAC address
Naturally, there are times when you do need to be legible. For instance, on your home network, you may have configured your router to assign your notebook a consistent private IP for port forwarding. Or you might allow only certain MAC addresses to use the WiFi. Your employer probably requires legibility as well.
To change a specific WiFi connection, use nmcli to see your NetworkManager connections and show the current settings:
```
$ nmcli c | grep wifi
Amtrak_WiFi 5f4b9f75-9e41-47f8-8bac-25dae779cd87 wifi --
StaplesHotspot de57940c-32c2-468b-8f96-0a3b9a9b0a5e wifi --
MyHome e8c79829-1848-4563-8e44-466e14a3223d wifi wlp1s0
...
$ nmcli c show 5f4b9f75-9e41-47f8-8bac-25dae779cd87 | grep cloned
802-11-wireless.cloned-mac-address: --
$ nmcli c show e8c79829-1848-4563-8e44-466e14a3223d | grep cloned
802-11-wireless.cloned-mac-address: stable
```
This example uses a fully random MAC for Amtrak (which is currently using the default), and the permanent MAC for MyHome (currently set to stable). The permanent MAC was assigned to your network interface when it was manufactured. Network admins like to use the permanent MAC to see [manufacturer IDs on the wire][5].
Now, make the changes and reconnect the active interface:
```
$ nmcli c modify 5f4b9f75-9e41-47f8-8bac-25dae779cd87 802-11-wireless.cloned-mac-address random
$ nmcli c modify e8c79829-1848-4563-8e44-466e14a3223d 802-11-wireless.cloned-mac-address permanent
$ nmcli c down e8c79829-1848-4563-8e44-466e14a3223d
$ nmcli c up e8c79829-1848-4563-8e44-466e14a3223d
$ ip link
...
```
You can also install NetworkManager-tui to get the nmtui command for nice menus when editing connections.
### Conclusion
When you walk down the street, you should [stay aware of your surroundings][6], and on the [alert for danger][7]. In the same way, learn to be aware of your legibility when using public internet resources.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/randomize-mac-address-nm/
作者:[sheogorath][a],[Stuart D Gathman][b]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/sheogorath/
[b]: https://fedoramagazine.org/author/sdgathman/
[1]: https://en.wikipedia.org/wiki/MAC_address
[2]: https://www.ribbonfarm.com/2010/07/26/a-big-little-idea-called-legibility/
[3]: https://serverfault.com/questions/462178/duplicate-mac-address-on-the-same-lan-possible
[4]: https://blogs.gnome.org/thaller/2016/08/26/mac-address-spoofing-in-networkmanager-1-4-0/
[5]: https://www.wireshark.org/tools/oui-lookup.html
[6]: https://www.isba.org/committees/governmentlawyers/newsletter/2013/06/becomingmoreawareafewtipsonkeepingy
[7]: http://www.selectinternational.com/safety-blog/aware-of-surroundings-can-reduce-safety-incidents


@ -0,0 +1,62 @@
Know Your Storage: Block, File & Object
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/block2_1920.jpg?itok=s1y6RLhT)
Dealing with the tremendous amount of data generated today presents a big challenge for companies that create or consume such data. It's also a challenge for the tech companies that are dealing with the related storage issues.
“Data is growing exponentially each year, and we find that the majority of data growth is due to increased consumption and industries adopting transformational projects to expand value. Certainly, the Internet of Things (IoT) has contributed greatly to data growth, but the key challenge for software-defined storage is how to address the use cases associated with data growth,” said Michael St. Jean, principal product marketing manager, Red Hat Storage.
Every challenge is an opportunity. “The deluge of data being generated by old and new sources today is certainly presenting us with opportunities to meet our customers' escalating needs in the areas of scale, performance, resiliency, and governance,” said Tad Brockway, General Manager for Azure Storage, Media and Edge.
### Trinity of modern software-defined storage
There are three different kinds of storage solutions -- block, file, and object -- each serving a different purpose while working with the others.
Block storage is the oldest form of data storage, where data is stored in fixed-length blocks or chunks of data. Block storage is used in enterprise storage environments and usually is accessed using a Fibre Channel or iSCSI interface. “Block storage requires an application to map where the data is stored on the storage device,” according to SUSE's Larry Morris, Sr. Product Manager, Software Defined Storage.
Block storage is virtualized in storage area networks and software-defined storage systems, which are abstracted logical devices that reside on a shared hardware infrastructure and are created and presented to the host operating system of a server, virtual server, or hypervisor via protocols like SCSI, SATA, SAS, FCP, FCoE, or iSCSI.
“Block storage splits a single storage volume (like a virtual or cloud storage node, or a good old fashioned hard disk) into individual instances known as blocks,” said St. Jean.
Each block exists independently and can be formatted with its own data transfer protocol and operating system — giving users complete configuration autonomy. Because block storage systems aren't burdened with the same investigative file-finding duties as the file storage systems, block storage is a faster storage system. Pairing that speed with configuration flexibility makes block storage ideal for raw server storage or rich media databases.
Block storage can be used to host operating systems, applications, databases, entire virtual machines, and containers. Traditionally, block storage can only be accessed by an individual machine, or machines in a cluster, to which it has been presented.
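To make the block abstraction concrete, here is a minimal Linux sketch that exposes a plain file as a block device via a loop device, then formats and mounts it like any disk; the paths, size, and loop device name are illustrative only.
```
# create a 512 MiB backing file and attach it as a block device
dd if=/dev/zero of=/tmp/blockdev.img bs=1M count=512
losetup --find --show /tmp/blockdev.img    # prints the device, e.g. /dev/loop0

# format and mount it like any other block device
mkfs.ext4 /dev/loop0
mount /dev/loop0 /mnt
```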
### File-based storage
File-based storage uses a filesystem to map where the data is stored on the storage device. It's the dominant technology used on direct- and network-attached storage systems, and it takes care of two things: organizing data and representing it to users. “With file storage, data is arranged on the server side in the exact same format as the clients see it. This allows the user to request a file by some unique identifier — like a name, location, or URL — which is communicated to the storage system using specific data transfer protocols,” said St. Jean.
The result is a type of hierarchical file structure that can be navigated from top to bottom. File storage is layered on top of block storage, allowing users to see and access data as files and folders, but restricting access to the blocks that stand up those files and folders.
“File storage is typically represented by shared filesystems like NFS and CIFS/SMB that can be accessed by many servers over an IP network. Access can be controlled at a file, directory, and export level via user and group permissions. File storage can be used to store files needed by multiple users and machines, application binaries, databases, virtual machines, and can be used by containers,” explained Brockway.
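For instance, consuming file storage over NFS is just a mount away; afterward, clients see the same files and directories the server exports (the server name below is hypothetical):
```
# mount an NFS export and browse it as a normal directory tree
sudo mount -t nfs fileserver.example.com:/export/projects /mnt/projects
ls -l /mnt/projects
```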
### Object storage
Object storage is the newest form of data storage, and it provides a repository for unstructured data which separates the content from the indexing and allows the concatenation of multiple files into an object. An object is a piece of data paired with any associated metadata that provides context about the bytes contained within the object (things like how old or big the data is). Those two things together — the data and metadata — make an object.
One advantage of object storage is the unique identifier associated with each piece of data. Accessing the data involves using the unique identifier and does not require the application or user to know where the data is actually stored. Object data is accessed through APIs.
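As a rough sketch of what that API access looks like, here is an upload and a retrieval against an OpenStack Swift-style HTTP endpoint; the URL, account, container, and token are hypothetical:
```
# PUT an object into a container, then GET it back by its unique identifier
curl -X PUT -T photo.jpg -H "X-Auth-Token: $TOKEN" \
     https://swift.example.com/v1/AUTH_demo/photos/photo.jpg

curl -H "X-Auth-Token: $TOKEN" \
     https://swift.example.com/v1/AUTH_demo/photos/photo.jpg -o photo.jpg
```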
“The data stored in objects is uncompressed and unencrypted, and the objects themselves are arranged in object stores (a central repository filled with many other objects) or containers (a package that contains all of the files an application needs to run). Objects, object stores, and containers are very flat in nature — compared to the hierarchical structure of file storage systems — which allow them to be accessed very quickly at huge scale,” explained St. Jean.
Object stores can scale to many petabytes to accommodate the largest datasets and are a great choice for images, audio, video, logs, backups, and data used by analytics services.
### Conclusion
Now you know about the various types of storage and how they are used. Stay tuned to learn more about software-defined storage as we examine the topic in the future.
Join us at [Open Source Summit + Embedded Linux Conference Europe][1] in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/9/know-your-storage-block-file-object
作者:[Swapnil Bhartiya][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/arnieswap
[1]: https://events.linuxfoundation.org/events/elc-openiot-europe-2018/


@ -0,0 +1,105 @@
XiatianSummer translating
Visualize Disk Usage On Your Linux System
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/filelight-720x340.png)
Finding disk space usage is no big deal in Unix-like operating systems. We have a built-in command named [**du**][1] that can calculate and summarize disk space usage in minutes, and third-party tools like [**Ncdu**][2] and [**Agedu**][3] that can also track down disk usage. As you already know, these are all command-line utilities that show the results in plain text. However, some of you would like to view the results in a visual, or image, format. No worries! I know one such GUI tool to find out the disk usage details. Say hello to **“Filelight”**, a graphical utility that visualizes disk usage on your Linux system and displays the results in a colored radial layout. Filelight is one of the oldest such projects and has been around for a long time. It is completely free to use and open source.
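For comparison, a typical CLI check with du looks like this (summarizing the first level of a home directory, sorted by size; the path is just an example):
```
$ du -h --max-depth=1 /home/user | sort -h
```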
### Installing Filelight
Filelight is part of KDE applications and comes pre-installed with KDE-based Linux distributions.
If you're using a non-KDE distro, Filelight is available in the official repositories, so you can install it using the default package manager.
On Arch Linux and its variants such as Antergos and Manjaro Linux, Filelight can be installed as shown below.
```
$ sudo pacman -S filelight
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt install filelight
```
On Fedora:
```
$ sudo dnf install filelight
```
On openSUSE:
```
$ sudo zypper install filelight
```
### Visualize Disk Usage On Your Linux System
Once installed, launch Filelight from Menu or application launcher.
Filelight graphically represents your filesystem as a set of concentric segmented rings.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/filelight-1-1.png)
As you can see, Filelight displays the disk usage of the **/** and **/boot** filesystems by default.
You can also scan an individual folder of your choice to view the disk usage of that particular folder. To do so, go to **Filelight -> Scan -> Scan Folder** and choose the folder you want to scan.
Filelight excludes the following directories from scanning:
* /dev
* /proc
* /sys
* /root
This option is helpful to skip the directories that you may not have permissions to read, or folders that are part of a virtual filesystem, such as /proc.
If you want to add a folder to this list, go to **Filelight -> Settings -> Scanning**, click the “Add” button, and choose the folder you want to add.
![](http://www.ostechnix.com/wp-content/uploads/2018/09/filelight-settings.png)
Similarly, to remove a folder from the list, choose the folder and click on “Remove”.
If you want to change the way Filelight looks, go to the **Settings -> Appearance** tab and change the color scheme to your liking.
Each segment in the radial layout is represented with different colors. The following image represents the entire radial layout of **/** filesystem. To view the full information of files and folders, just hover the mouse pointer over them.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/filelight-2.png)
You can navigate around the filesystem by simply clicking on the respective segment. To view the disk usage of any file or folder, just click on it and you will get the complete disk usage details of that particular folder/file.
Filelight can scan not just local filesystems but also remote and removable disks. If you're using a KDE-based Linux distribution, it can be integrated into file managers like Konqueror, Dolphin, and Krusader.
Unlike the CLI utilities, you don't have to use any extra arguments or options to view the results in human-readable format. Filelight displays the disk usage in human-readable format by default.
### Conclusion
By using Filelight, you can quickly discover where exactly your disk space is being used in your filesystem and free up space wherever necessary by deleting unwanted files or folders. If you are looking for a simple and user-friendly graphical disk usage viewer, Filelight is worth trying.
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/filelight-visualize-disk-usage-on-your-linux-system/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/find-size-directory-linux/
[2]: https://www.ostechnix.com/check-disk-space-usage-linux-using-ncdu/
[3]: https://www.ostechnix.com/agedu-find-out-wasted-disk-space-in-linux/


@ -0,0 +1,124 @@
How To Configure Mouse Support For Linux Virtual Consoles
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/GPM-1-720x340.png)
I use Oracle VirtualBox to test various Unix-like operating systems. Most of my VMs are headless servers that do not have a graphical desktop environment. For a long time, I wondered how we could use the mouse in the text-based terminals of headless Linux servers. Thanks to **GPM**, today I learned that we can use the mouse in virtual consoles for copy and paste operations. **GPM**, an acronym for **G**eneral **P**urpose **M**ouse, is a daemon that helps you configure mouse support for Linux virtual consoles. Please do not confuse GPM with **GDM** (the GNOME Display Manager); they serve entirely different purposes.
GPM is especially useful in the following scenarios:
* New Linux server installations or for systems that cannot or do not use an X windows system by default, like Arch Linux and Gentoo.
* Use copy/paste operations in the virtual terminals/consoles.
* Use copy/paste in text-based editors and browsers (Eg. emacs, lynx).
* Use copy/paste in text file managers (Eg. Ranger, Midnight commander).
In this brief tutorial, we are going to see how to use the mouse in text-based terminals in Unix-like operating systems.
### Installing GPM
To enable mouse support on text-only Linux systems, install the GPM package. It is available in the default repositories of most Linux distributions.
On Arch Linux and its variants like Antergos and Manjaro Linux, run the following command to install GPM:
```
$ sudo pacman -S gpm
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt install gpm
```
On Fedora:
```
$ sudo dnf install gpm
```
On openSUSE:
```
$ sudo zypper install gpm
```
Once installed, enable and start GPM service using the following commands:
```
$ sudo systemctl enable gpm
$ sudo systemctl start gpm
```
On Debian-based systems, the gpm service is started automatically after you install it, so you need not start the service manually as shown above.
### Configure Mouse Support For Linux Virtual Consoles
There is no special configuration required. GPM starts working as soon as you install it and start the gpm service.
Have a look at the following screenshot of my Ubuntu 18.04 LTS server before installing GPM:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Ubuntu-18.04-CLI.png)
As you can see in the above screenshot, there is no visible mouse pointer on my Ubuntu 18.04 LTS headless server, only a blinking cursor, and it won't let me select, copy, or paste text using the mouse. On CLI-only Linux servers, the mouse is literally not useful at all.
Now check the following screenshot of Ubuntu 18.04 LTS server after installing GPM:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/GPM.png)
See? I can now select the text.
To select, copy and paste text, do the following:
* To select text, press the left mouse button and drag the mouse.
* Once you have selected the text, release the left mouse button and paste the text in the same or another console by pressing the middle mouse button.
* The right button is used to extend the selection, like in `xterm`.
* If you're using a two-button mouse, use the right button to paste text.
It's that simple!
Like I already said, GPM works just fine, and no extra configuration is needed. Here are the sample contents of the GPM configuration file **/etc/gpm.conf** (or `/etc/conf.d/gpm` in some distributions):
```
# protected from evaluation (i.e. by quoting them).
#
# This file is used by /etc/init.d/gpm and can be modified by
# "dpkg-reconfigure gpm" or by hand at your option.
#
device=/dev/input/mice
responsiveness=
repeat_type=none
type=exps2
append=''
sample_rate=
```
In my example, I use a USB mouse. If you're using a different mouse, you might have to change the values of the **device=/dev/input/mice** and **type=exps2** parameters.
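To experiment with a different device or protocol before editing the file, you can also pass these settings to gpm directly on the command line. A quick sketch (`imps2` is just an example protocol type; see the man page for the full list):
```
$ sudo gpm -m /dev/input/mice -t imps2
```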
For more details, refer to the man pages.
```
$ man gpm
```
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-configure-mouse-support-for-linux-virtual-consoles/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/


@ -0,0 +1,335 @@
How subroutine signatures work in Perl 6
======
In the fourth article in this series comparing Perl 5 to Perl 6, learn how signatures work in Perl 6.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G)
In the [first article][1] in this series comparing Perl 5 to Perl 6, we looked into some of the issues you might encounter when migrating code into Perl 6. In the [second article][2], we examined how garbage collection works in Perl 6, and in the [third article][3], we looked at how containers replaced references in Perl 6. Here in the fourth article, we will focus on (subroutine) signatures in Perl 6 and how they differ from those in Perl 5.
### Experimental signatures in Perl 5
If you're migrating from Perl 5 code to Perl 6, you're probably not using the [experimental signature feature][4] that became available in Perl 5.20 or any of the older CPAN modules like [signatures][5], [Function::Parameters][6], or any of the other Perl 5 modules on CPAN with ["signature" in their name][7].
Also, in my experience, [prototypes][8] haven't been used very often in the Perl programs out in the world (e.g., the [DarkPAN][9] ).
For these reasons, I will compare Perl 6 functionality only with the most common use of "classic" Perl 5 argument passing.
### Argument passing in Perl 5
All arguments you pass to a Perl 5 subroutine are flattened and put into the automatically defined `@_` array variable inside. That is basically all Perl 5 does with passing arguments to subroutines. Nothing more, nothing less. There are, however, several idioms in Perl 5 that take it from there. The most common (I would say "standard") idiom in my experience is:
```
# Perl 5
sub do_something {
    my ($foo, $bar) = @_;
    # actually do something with $foo and $bar
}
```
This idiom performs a list assignment (copy) to two (new) lexical variables. This way of accessing the arguments to a subroutine is also supported in Perl 6, but it's intended just as a way to make migrations easier.
If you expect a fixed number of arguments followed by a variable number of arguments, the following idiom is typically used:
```
# Perl 5
sub do_something {
    my $foo = shift;
    my $bar = shift;
    for (@_) {
        # do something for each element in @_
    }
}
```
This idiom depends on the magic behavior of [shift][10], which shifts from `@_` in this context. If the subroutine is intended to be called as a method, something like this is usually seen:
```
# Perl 5
sub do_something {
    my $self = shift;
    # do something with $self
}
```
as the first argument passed is the [invocant][11] in Perl 5.
By the way, this idiom can also be written in the style of the first idiom:
```
# Perl 5
sub do_something {
    my ($foo, $bar, @rest) = @_;
    for (@rest) {
        # do something for each element in @rest
    }
}
```
But that would be less efficient, as it would involve copying a potentially long list of values.
The third idiom revolves around directly accessing the `@_` array.
```
# Perl 5
sub sum_two {
    return $_[0] + $_[1];  # return the sum of the two parameters
}
```
This idiom is typically used for small, one-line subroutines, as it is one of the most efficient ways of handling arguments because no copying takes place.
This idiom is also used if you want to change any variable that is passed as a parameter. Since the elements in `@_` are aliases to any variables specified (in Perl 6 you would say: "are bound to the variables"), it is possible to change the contents:
```
# Perl 5
sub make42 {
    $_[0] = 42;
}
my $a = 666;
make42($a);
say $a;      # 42
```
### Named arguments in Perl 5
Named arguments (as such) don't exist in Perl 5. But there is an often-used idiom that effectively mimics named arguments:
```
# Perl 5
sub do_something {
    my %named = @_;
    if (exists %named{bar}) {
        # do stuff if named variable "bar" exists
    }
}
```
This initializes the hash `%named` by alternately taking a key and a value from the `@_` array. If you call a subroutine with arguments using the fat-comma syntax:
```
# Perl 5
frobnicate( bar => 42 );
```
it will pass two values, `"bar"` and `42`, which will be placed into the `%named` hash as the value `42` associated with key `"bar"`. But the same thing would have happened if you had specified:
```
# Perl 5
frobnicate( "bar", 42 );
```
The `=>` is syntactic sugar for automatically quoting the left side. Otherwise, it functions just like a comma (hence the name "fat comma").
If a subroutine is called as a method with named arguments, this idiom is combined with the standard idiom:
```
# Perl 5
sub do_something {
    my ($self, %named) = @_;
    # do something with $self and %named
}
```
alternatively:
```
# Perl 5
sub do_something {
    my $self  = shift;
    my %named = @_;
    # do something with $self and %named
}
```
### Argument passing in Perl 6
In their simplest form, subroutine signatures in Perl 6 are very much like the "standard" idiom of Perl 5. But instead of being part of the code, they are part of the definition of the subroutine, and you don't need to do the assignment:
```
# Perl 6
sub do-something($foo, $bar) {
    # actually do something with $foo and $bar
}
```
versus:
```
# Perl 5
sub do_something {
    my ($foo, $bar) = @_;
    # actually do something with $foo and $bar
}
```
In Perl 6, the `($foo, $bar)` part is called the signature of the subroutine.
Since Perl 6 has an actual `method` keyword, it is not necessary to take the invocant into account, as that is automatically available with the `self` term:
```
# Perl 6
class Foo {
    method do-something-else($foo, $bar) {
        # do something else with self, $foo and $bar
    }
}
```
Such parameters are called positional parameters in Perl 6. Unless indicated otherwise, positional parameters must be specified when calling the subroutine.
If you need the aliasing behavior of using `$_[0]` directly in Perl 5, you can mark the parameter as writable by specifying the `is rw` trait:
```
# Perl 6
sub make42($foo is rw) {
    $foo = 42;
}
my $a = 666;
make42($a);
say $a;      # 42
```
When you pass an array as an argument to a subroutine, it doesn't get flattened in Perl 6. You only need to accept an array as an array in the signature:
```
# Perl 6
sub handle-array(@a) {
    # do something with @a
}
my @foo = "a" .. "z";
handle-array(@foo);
```
You can pass any number of arrays:
```
# Perl 6
sub handle-two-arrays(@a, @b) {
    # do something with @a and @b
}
my @bar = 1..26;
handle-two-arrays(@foo, @bar);
```
If you want the ([variadic][12]) flattening semantics of Perl 5, you can indicate this with a so-called "slurpy array" by prefixing the array with an asterisk in the signature:
```
# Perl 6
sub slurp-an-array(*@values) {
    # do something with @values
}
slurp-an-array("foo", 42, "baz");
```
A slurpy array can occur only as the last positional parameter in a signature.
If you prefer to use the Perl 5 way of specifying parameters in Perl 6, you can do this by specifying a slurpy array `*@_` in the signature:
```
# Perl 6
sub do-like-5(*@_) {
    my ($foo, $bar) = @_;
}
```
### Named arguments in Perl 6
On the calling side, named arguments in Perl 6 can be expressed very similarly to how they are expressed in Perl 5:
```
# Perl 5 and Perl 6
frobnicate( bar => 42 );
```
However, on the definition side of the subroutine, things are very different:
```
# Perl 6
sub frobnicate(:$bar) {
    # do something with $bar
}
```
The difference between an ordinary (positional) parameter and a named parameter is the colon, which precedes the [sigil][13] and the variable name in the definition:
```
$foo      # positional parameter, receives in $foo
:$bar     # named parameter "bar", receives in $bar
```
Unless otherwise specified, named parameters are optional. If a named argument is not specified, the associated variable will contain the default value, which usually is the type object `Any`.
If you want to catch any (other) named arguments, you can use a so-called "slurpy hash." Just like the slurpy array, it is indicated with an asterisk before a hash:
```
# Perl 6
sub slurp-nameds(*%nameds) {
    say "Received: " ~ join ", ", sort keys %nameds;
}
slurp-nameds(foo => 42, bar => 666); # Received: bar, foo
```
As with the slurpy array, there can be only one slurpy hash in a signature, and it must be specified after any other named parameters.
Often you want to pass a named argument to a subroutine from a variable with the same name. In Perl 5 this looks like: `do_something(bar => $bar)`. In Perl 6, you can specify this in the same way: `do-something(bar => $bar)`. But you can also use a shortcut: `do-something(:$bar)`. This means less typing, and less chance of typos.
### Default values in Perl 6
Perl 5 has the following idiom for making parameters optional with a default value:
```
# Perl 5
sub dosomething_with_defaults {
    my $foo = @_ ? shift : 42;
    my $bar = @_ ? shift : 666;
    # actually do something with $foo and $bar
}
```
In Perl 6, you can specify default values as part of the signature by specifying an equal sign and an expression:
```
# Perl 6
sub dosomething-with-defaults($foo = 42, :$bar = 666) {
    # actually do something with $foo and $bar
}
```
Positional parameters become optional if a default value is specified for them. Named parameters stay optional regardless of any default value.
### Summary
Perl 6 has a way of describing how arguments to a subroutine should be captured into parameters of that subroutine. Positional parameters are indicated by their name and the appropriate sigil (e.g., `$foo`). Named parameters are prefixed with a colon (e.g. `:$bar`). Positional parameters can be marked as `is rw` to allow changing variables in the caller's scope.
Positional arguments can be flattened in a slurpy array, which is prefixed by an asterisk (e.g., `*@values`). Unexpected named arguments can be collected using a slurpy hash, which is also prefixed with an asterisk (e.g., `*%nameds`).
Default values can be specified inside the signature by adding an expression after an equal sign (e.g., `$foo = 42`), which makes that parameter optional.
Signatures in Perl 6 have many other interesting features, aside from the ones summarized here; if you want to know more about them, check out the Perl 6 [signature object documentation][14].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/signatures-perl-6
作者:[Elizabeth Mattijsen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lizmat
[1]: https://opensource.com/article/18/7/migrating-perl-5-perl-6
[2]: https://opensource.com/article/18/7/garbage-collection-perl-6
[3]: https://opensource.com/article/18/7/containers-perl-6
[4]: https://metacpan.org/pod/distribution/perl/pod/perlsub.pod#Signatures
[5]: https://metacpan.org/pod/signatures
[6]: https://metacpan.org/pod/Function::Parameters
[7]: https://metacpan.org/search?q=signature
[8]: https://metacpan.org/pod/perlsub#Prototypes
[9]: http://modernperlbooks.com/mt/2009/02/the-darkpan-dependency-management-and-support-problem.html
[10]: https://perldoc.perl.org/functions/shift.html
[11]: https://docs.perl6.org/routine/invocant
[12]: https://en.wikipedia.org/wiki/Variadic_function
[13]: https://www.perl.com/article/on-sigils/
[14]: https://docs.perl6.org/type/Signature

View File

@ -0,0 +1,395 @@
How to build rpm packages
======
Save time and effort installing files and scripts across multiple hosts.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1)
I have used rpm-based package managers to install software on Red Hat and Fedora Linux since I started using Linux more than 20 years ago. I have used the **rpm** program itself, **yum**, and **DNF**, which is a close descendant of yum, to install and update packages on my Linux hosts. The yum and DNF tools are wrappers around the rpm utility that provide additional functionality, such as the ability to find and install package dependencies.
Over the years I have created a number of Bash scripts, some of which have separate configuration files, that I like to install on most of my new computers and virtual machines. It reached the point that it took a great deal of time to install all of these packages, so I decided to automate that process by creating an rpm package that I could copy to the target hosts and install all of these files in their proper locations. Although the **rpm** tool was formerly used to build rpm packages, that function was removed and a new tool, `rpmbuild`, was created to build new rpms.
When I started this project, I found very little information about creating rpm packages, but I managed to find a book, Maximum RPM, that helped me figure it out. That book is now somewhat out of date, as is the vast majority of information I have found. It is also out of print, and used copies go for hundreds of dollars. The online version of [Maximum RPM][1] is available at no charge and is kept up to date. The [RPM website][2] also has links to other websites that have a lot of documentation about rpm. What other information there is tends to be brief and apparently assumes that you already have a good deal of knowledge about the process.
In addition, every one of the documents I found assumes that the code needs to be compiled from sources as in a development environment. I am not a developer. I am a sysadmin, and we sysadmins have different needs because we don't—or we shouldn't—compile code to use for administrative tasks; we should use shell scripts. So we have no source code in the sense that it is something that needs to be compiled into binary executables. What we have is a source that is also the executable.
For the most part, this project should be performed as the non-root user student. Rpms should never be built by root, but only by non-privileged users. I will indicate which parts should be performed as root and which by a non-root, unprivileged user.
### Preparation
First, open one terminal session and `su` to root. Be sure to use the `-` option to ensure that the complete root environment is enabled. I do not believe that sysadmins should use `sudo` for any administrative tasks. Find out why in my personal blog post: [Real SysAdmins don't sudo][3].
```
[student@testvm1 ~]$ su -
Password:
[root@testvm1 ~]#
```
Create a student user that can be used for this project and set a password for that user.
```
[root@testvm1 ~]# useradd -c "Student User" student
[root@testvm1 ~]# passwd student
Changing password for user student.
New password: <Enter the password>
Retype new password: <Enter the password>
passwd: all authentication tokens updated successfully.
[root@testvm1 ~]#
```
Building rpm packages requires the `rpm-build` package, which is likely not already installed. Install it now as root. Note that this command will also install several dependencies. The number may vary, depending upon the packages already installed on your host; it installed a total of 17 packages on my test VM, which is pretty minimal.
```
dnf install -y rpm-build
```
The rest of this project should be performed as the user student unless otherwise explicitly directed. Open another terminal session and use `su` to switch to that user to perform the rest of these steps. Download a tarball that I have prepared of a development directory structure, utils.tar, from GitHub using the following command:
```
wget https://github.com/opensourceway/how-to-rpm/raw/master/utils.tar
```
This tarball includes all of the files and Bash scripts that will be installed by the final rpm. There is also a complete spec file, which you can use to build the rpm. We will go into detail about each section of the spec file.
As user student, using your home directory as your present working directory (pwd), untar the tarball.
```
[student@testvm1 ~]$ cd ; tar -xvf utils.tar
```
Use the `tree` command to verify that the directory structure of ~/development and the contained files looks like the following output:
```
[student@testvm1 ~]$ tree development/
development/
├── license
│   ├── Copyright.and.GPL.Notice.txt
│   └── GPL_LICENSE.txt
├── scripts
│   ├── create_motd
│   ├── die
│   ├── mymotd
│   └── sysdata
└── spec
    └── utils.spec
3 directories, 7 files
[student@testvm1 ~]$
```
The `mymotd` script creates a “Message Of The Day” data stream that is sent to stdout. The `create_motd` script runs the `mymotd` script and redirects the output to the /etc/motd file. This file is used to display a daily message to users who log in remotely using SSH.
The `die` script is my own script that wraps the `kill` command in a bit of code that can find running programs that match a specified string and kill them. It uses `kill -9` to ensure that they cannot ignore the kill message.
The `sysdata` script can spew tens of thousands of lines of data about your computer hardware, the installed version of Linux, all installed packages, and the metadata of your hard drives. I use it to document the state of a host at a point in time. I can later use it for reference. I used to do this to maintain a record of hosts that I installed for customers.
You may need to change ownership of these files and directories to student.student. Do this, if necessary, using the following command:
```
chown -R student.student development
```
Most of the files and directories in this tree will be installed on Fedora systems by the rpm you create during this project.
### Creating the build directory structure
The `rpmbuild` command requires a very specific directory structure. You must create this directory structure yourself because no automated way is provided. Create the following directory structure in your home directory:
```
~ ── rpmbuild
    ├── RPMS
    │   └── noarch
    ├── SOURCES
    ├── SPECS
    └── SRPMS
```
We will not create the rpmbuild/RPMS/x86_64 directory because that would be architecture-specific for 64-bit compiled binaries. We have shell scripts that are not architecture-specific. In reality, we won't be using the SRPMS directory either, which would contain source files for the compiler.
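One convenient way to create the whole tree at once is a single `mkdir` with brace expansion; this is just a time-saver, not a requirement:
```
mkdir -p ~/rpmbuild/{RPMS/noarch,SOURCES,SPECS,SRPMS}
```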
### Examining the spec file
Each spec file has a number of sections, some of which may be ignored or omitted, depending upon the specific circumstances of the rpm build. This particular spec file is not an example of a minimal file required to work, but it is a good example of a moderately complex spec file that packages files that do not need to be compiled. If a compile were required, it would be performed in the `%build` section, which is omitted from this spec file because it is not required.
#### Preamble
This is the only section of the spec file that does not have a label. It consists of much of the information you see when the command `rpm -qi [Package Name]` is run. Each datum is a single line consisting of a tag, which identifies it, and text data for the value of the tag.
```
###############################################################################
# Spec file for utils
################################################################################
# Configured to be built by user student or other non-root user
################################################################################
#
Summary: Utility scripts for testing RPM creation
Name: utils
Version: 1.0.0
Release: 1
License: GPL
URL: http://www.both.org
Group: System
Packager: David Both
Requires: bash
Requires: screen
Requires: mc
Requires: dmidecode
BuildRoot: ~/rpmbuild/
# Build with the following syntax:
# rpmbuild --target noarch -bb utils.spec
```
Comment lines are ignored by the `rpmbuild` program. I always like to add a comment to this section that contains the exact syntax of the `rpmbuild` command required to create the package. The Summary tag is a short description of the package. The Name, Version, and Release tags are used to create the name of the rpm file, as in utils-1.0.0-1.noarch.rpm. Incrementing the release and version numbers lets you create rpms that can be used to update older ones.
The License tag defines the license under which the package is released. I always use a variation of the GPL. Specifying the license is important to clarify the fact that the software contained in the package is open source. This is also why I included the license and GPL statement in the files that will be installed.
The URL is usually the web page of the project or project owner. In this case, it is my personal web page.
The Group tag is interesting and is usually used for GUI applications. The value of the Group tag determines which group of icons in the applications menu will contain the icon for the executable in this package. Used in conjunction with the Icon tag (which we are not using here), the Group tag allows adding the icon and the required information to launch a program into the applications menu structure.
The Packager tag is used to specify the person or organization responsible for maintaining and creating the package.
The Requires statements define the dependencies for this rpm. Each is a package name. If one of the specified packages is not present, the DNF installation utility will try to locate it in one of the repositories defined in /etc/yum.repos.d and install it if it exists. If DNF cannot find one or more of the required packages, it will throw an error indicating which packages are missing and terminate.
The BuildRoot line specifies the top-level directory in which the `rpmbuild` tool will find the spec file and in which it will create temporary directories while it builds the package. The finished package will be stored in the noarch subdirectory that we specified earlier. The comment showing the command syntax used to build this package includes the option `--target noarch`, which defines the target architecture. Because these are Bash scripts, they are not associated with a specific CPU architecture. If this option were omitted, the build would be targeted to the architecture of the CPU on which the build is being performed.
The `rpmbuild` program can target many different architectures, and using the `--target` option allows us to build architecture-specific packages on a host with a different architecture from the one on which the build is performed. So I could build a package intended for use on an i686 architecture on an x86_64 host, and vice versa.
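For example, a build explicitly targeted at a 32-bit architecture might look like this (illustrative only; the package in this project is noarch):
```
rpmbuild --target i686 -bb utils.spec
```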
Change the packager name to yours and the URL to your own website if you have one.
#### %description
The `%description` section of the spec file contains a description of the rpm package. It can be very short or can contain many lines of information. Our `%description` section is rather terse.
```
%description
A collection of utility scripts for testing RPM creation.
```
#### %prep
The `%prep` section is the first script that is executed during the build process. This script is not executed during the installation of the package.
This script is just a Bash shell script. It prepares the build directory, creating directories used for the build as required and copying the appropriate files into their respective directories. This would include the sources required for a complete compile as part of the build.
The $RPM_BUILD_ROOT directory represents the root directory of an installed system. The directories created in the $RPM_BUILD_ROOT directory are fully qualified paths, such as /usr/local/share/utils, /usr/local/bin, and so on, in a live filesystem.
In the case of our package, we have no pre-compile sources as all of our programs are Bash scripts. So we simply copy those scripts and other files into the directories where they belong in the installed system.
```
%prep
################################################################################
# Create the build tree and copy the files from the development directories    #
# into the build tree.                                                         #
################################################################################
echo "BUILDROOT = $RPM_BUILD_ROOT"
mkdir -p $RPM_BUILD_ROOT/usr/local/bin/
mkdir -p $RPM_BUILD_ROOT/usr/local/share/utils
cp /home/student/development/utils/scripts/* $RPM_BUILD_ROOT/usr/local/bin
cp /home/student/development/utils/license/* $RPM_BUILD_ROOT/usr/local/share/utils
cp /home/student/development/utils/spec/* $RPM_BUILD_ROOT/usr/local/share/utils
exit
```
Note that the exit statement at the end of this section is required.
#### %files
This section of the spec file defines the files to be installed and their locations in the directory tree. It also specifies the file attributes and the owner and group owner for each file to be installed. The file permissions and ownerships are optional, but I recommend that they be explicitly set to eliminate any chance for those attributes to be incorrect or ambiguous when installed. Directories are created as required during the installation if they do not already exist.
```
%files
%attr(0744, root, root) /usr/local/bin/*
%attr(0644, root, root) /usr/local/share/utils/*
```
#### %pre
This section is empty in our lab projects spec file. This would be the place to put any scripts that are required to run during installation of the rpm but prior to the installation of the files.
#### %post
This section of the spec file is another Bash script. This one runs after the installation of files. This section can be pretty much anything you need or want it to be, including creating files, running system commands, and restarting services to reinitialize them after making configuration changes. The `%post` script for our rpm package performs some of those tasks.
```
%post
################################################################################
# Set up MOTD scripts                                                          #
################################################################################
cd /etc
# Save the old MOTD if it exists
if [ -e motd ]
then
   cp motd motd.orig
fi
# If not there already, Add link to create_motd to cron.daily
cd /etc/cron.daily
if [ ! -e create_motd ]
then
   ln -s /usr/local/bin/create_motd
fi
# create the MOTD for the first time
/usr/local/bin/mymotd > /etc/motd
```
The comments included in this script should make its purpose clear.
#### %postun
This section contains a script that would be run after the rpm package is uninstalled. Using rpm or DNF to remove a package removes all of the files listed in the `%files` section, but it does not remove files or links created by the `%post` section, so we need to handle that in this section.
This script usually consists of cleanup tasks that simply erasing the files previously installed by the rpm cannot accomplish. In the case of our package, it includes removing the link created by the `%post` script and restoring the saved original of the motd file.
```
%postun
# remove installed files and links
rm /etc/cron.daily/create_motd
# Restore the original MOTD if it was backed up
if [ -e /etc/motd.orig ]
then
   mv -f /etc/motd.orig /etc/motd
fi
```
#### %clean
This Bash script performs cleanup after the rpm build process. The two lines in the `%clean` section below remove the build directories created by the `rpmbuild` command. In many cases, additional cleanup may also be required.
```
%clean
rm -rf $RPM_BUILD_ROOT/usr/local/bin
rm -rf $RPM_BUILD_ROOT/usr/local/share/utils
```
#### %changelog
This optional text section contains a list of changes to the rpm and files it contains. The newest changes are recorded at the top of this section.
```
%changelog
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
  - The original package includes several useful scripts. it is
    primarily intended to be used to illustrate the process of
    building an RPM.
```
Replace the data in the header line with your own name and email address.
### Building the rpm
The spec file must be in the SPECS directory of the rpmbuild tree. I find it easiest to create a link to the actual spec file in that directory so that it can be edited in the development directory and there is no need to copy it to the SPECS directory. Make the SPECS directory your pwd, then create the link.
```
cd ~/rpmbuild/SPECS/
ln -s ~/development/spec/utils.spec
```
Run the following command to build the rpm. It should only take a moment to create the rpm if no errors occur.
```
rpmbuild --target noarch -bb utils.spec
```
Check in the ~/rpmbuild/RPMS/noarch directory to verify that the new rpm exists there.
```
[student@testvm1 ~]$ cd rpmbuild/RPMS/noarch/
[student@testvm1 noarch]$ ll
total 24
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
[student@testvm1 noarch]$
```
### Testing the rpm
As root, install the rpm to verify that it installs correctly and that the files are installed in the correct directories. The exact name of the rpm will depend upon the values you used for the tags in the Preamble section, but if you used the ones in the sample, the rpm name will be as shown in the sample command below:
```
[root@testvm1 ~]# cd /home/student/rpmbuild/RPMS/noarch/
[root@testvm1 noarch]# ll
total 24
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
[root@testvm1 noarch]# rpm -ivh utils-1.0.0-1.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:utils-1.0.0-1                    ################################# [100%]
```
Check /usr/local/bin to ensure that the new files are there. You should also verify that the create_motd link in /etc/cron.daily has been created.
Use the `rpm -q --changelog utils` command to view the changelog. View the files installed by the package using the `rpm -ql utils` command (that is a lowercase L in `ql`.)
```
[root@testvm1 noarch]# rpm -q --changelog utils
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
- The original package includes several useful scripts. it is
    primarily intended to be used to illustrate the process of
    building an RPM.
[root@testvm1 noarch]# rpm -ql utils
/usr/local/bin/create_motd
/usr/local/bin/die
/usr/local/bin/mymotd
/usr/local/bin/sysdata
/usr/local/share/utils/Copyright.and.GPL.Notice.txt
/usr/local/share/utils/GPL_LICENSE.txt
/usr/local/share/utils/utils.spec
[root@testvm1 noarch]#
```
Remove the package.
```
rpm -e utils
```
### Experimenting
Now you will change the spec file to require a package that does not exist. This will simulate a dependency that cannot be met. Add the following line immediately under the existing Requires line:
```
Requires: badrequire
```
Build the package and attempt to install it. What message is displayed?
We used the `rpm` command to install and delete the `utils` package. Try installing the package with yum or DNF. You must be in the same directory as the package or specify the full path to the package for this to work.
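For example, with DNF the installation might look like the following; the `./` prefix (or a full path) tells DNF to install from the local file, and it will resolve the Requires dependencies from the configured repositories:
```
dnf install ./utils-1.0.0-1.noarch.rpm
```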
### Conclusion
There are many tags and a couple sections that we did not cover in this look at the basics of creating an rpm package. The resources listed below can provide more information. Building rpm packages is not difficult; you just need the right information. I hope this helps you—it took me months to figure things out on my own.
We did not cover building from source code, but if you are a developer, that should be a simple step from this point.
Creating rpm packages is another good way to be a lazy sysadmin and save time and effort. It provides an easy method for distributing and installing the scripts and other files that we as sysadmins need to install on many hosts.
### Resources
  * Edward C. Bailey, Maximum RPM, Sams Publishing, 2000, ISBN 0-672-31105-4
  * Edward C. Bailey, [Maximum RPM][1], updated online version
* [RPM Documentation][4]: This web page lists most of the available online documentation for rpm. It includes many links to other websites and information about rpm.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/how-build-rpm-packages
作者:[David Both][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[1]: http://ftp.rpm.org/max-rpm/
[2]: http://rpm.org/index.html
[3]: http://www.both.org/?p=960
[4]: http://rpm.org/documentation.html

View File

@ -0,0 +1,201 @@
How to turn on an LED with Fedora IoT
======
![](https://fedoramagazine.org/wp-content/uploads/2018/08/LED-IoT-816x345.jpg)
Do you enjoy running Fedora, containers, and have a Raspberry Pi? What about using all three together to play with LEDs? This article introduces Fedora IoT and shows you how to install a preview image on a Raspberry Pi. You'll also learn how to interact with GPIO in order to light up an LED.
### What is Fedora IoT?
Fedora IoT is one of the current Fedora Project objectives, with a plan to become a full Fedora Edition. The result will be a system that runs on ARM (aarch64 only at the moment) devices such as the Raspberry Pi, as well as on the x86_64 architecture.
![][1]
Fedora IoT is based on OSTree, like [Fedora Silverblue][2] and the former [Atomic Host][3].
### Download and install Fedora IoT
The official Fedora IoT images are coming with the Fedora 29 release. However, in the meantime you can download a [Fedora 28-based image][4] for this experiment.
You have two options to install the system: either flash the SD card using the dd command, or use the fedora-arm-installer tool. The Fedora Wiki offers more information about [setting up a physical device][5] for IoT. Also, remember that you might need to resize the third partition.
Once you insert the SD card into the device, you'll need to complete the installation by creating a user. This step requires either a serial connection, or an HDMI display with a keyboard to interact with the device.
When the system is installed and ready, the next step is to configure a network connection. Log in to the system with the user you have just created, and choose one of the following options:
* If you need to configure your network manually, run a command similar to the following. Remember to use the right addresses for your network:
```
$ nmcli connection add con-name cable ipv4.addresses \
192.168.0.10/24 ipv4.gateway 192.168.0.1 \
connection.autoconnect true ipv4.dns "8.8.8.8,1.1.1.1" \
type ethernet ifname eth0 ipv4.method manual
```
* If theres a DHCP service on your network, run a command like this:
```
$ nmcli con add type ethernet con-name cable ifname eth0
```
### **The GPIO interface in Fedora**
Many tutorials about GPIO on Linux focus on the legacy GPIO sysfs interface. This interface is deprecated, and the upstream Linux kernel community plans to remove it completely, due to security and other issues.
The Fedora kernel is already compiled without this legacy interface, so there's no /sys/class/gpio on the system. This tutorial uses a new character device /dev/gpiochipN provided by the upstream kernel. This is the current way of interacting with GPIO.
To interact with this new device, you need to use a library and a set of command line interface tools. The common command line tools such as echo or cat won't work with this device.
You can install the CLI tools by installing the libgpiod-utils package. A corresponding Python library is provided by the python3-libgpiod package.
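On a regular Fedora system, installing both packages might look like the following (on the OSTree-based Fedora IoT host itself, we will instead layer them into a container image, as shown below):
```
$ sudo dnf install libgpiod-utils python3-libgpiod
```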
### **Creating a container with Podman**
[Podman][6] is a container runtime with a command line interface similar to Docker. The big advantage of Podman is it doesn't run any daemon in the background. That's especially useful for devices with limited resources. Podman also allows you to start containerized services with systemd unit files. Plus, it has many additional features.
Well create a container in these two steps:
1. Create a layered image containing the required packages.
2. Create a new container starting from our image.
First, create a file Dockerfile with the content below. This tells podman to build an image based on the latest Fedora image available in the registry. Then it updates the system inside and installs some packages:
```
FROM fedora:latest
RUN  dnf -y update
RUN  dnf -y install libgpiod-utils python3-libgpiod
```
You have created a build recipe of a container image based on the latest Fedora with updates, plus packages to interact with GPIO.
Now, run the following command to build your base image:
```
$ sudo podman build --tag fedora:gpiobase -f ./Dockerfile
```
You have just created your custom image with all the bits in place. You can play with this base container image as many times as you want without installing the packages every time you run it.
### Working with Podman
To verify the image is present, run the following command:
```
$ sudo podman images
REPOSITORY                 TAG        IMAGE ID       CREATED          SIZE
localhost/fedora           gpiobase   67a2b2b93b4b   10 minutes ago  488MB
docker.io/library/fedora   latest     c18042d7fac6   2 days ago     300MB
```
Now, start the container and do some actual experiments. Containers are normally isolated and don't have access to the host system, including the GPIO interface. Therefore, you need to mount it inside while starting the container. To do this, use the device option in the following command:
```
$ sudo podman run -it --name gpioexperiment --device=/dev/gpiochip0 localhost/fedora:gpiobase /bin/bash
```
You are now inside the running container. Before you move on, here are some more container commands. For now, exit the container by typing exit or pressing **Ctrl+D**.
To list the existing containers, including those not currently running, such as the one you just created, run:
```
$ sudo podman container ls -a
CONTAINER ID   IMAGE             COMMAND     CREATED          STATUS                              PORTS   NAMES
64e661d5d4e8   localhost/fedora:gpiobase   /bin/bash 37 seconds ago Exited (0) Less than a second ago           gpioexperiment
```
To create a new container, run this command:
```
$ sudo podman run -it --name newexperiment --device=/dev/gpiochip0 localhost/fedora:gpiobase /bin/bash
```
Delete it with the following command:
```
$ sudo podman rm newexperiment
```
### **Turn on an LED**
Now you can use the container you already created. If you exited from the container, start it again with this command:
```
$ sudo podman start -ia gpioexperiment
```
As already discussed, you can use the CLI tools provided by the libgpiod-utils package in Fedora. To list the available GPIO chips, run:
```
$ gpiodetect
gpiochip0 [pinctrl-bcm2835] (54 lines)
```
To get the list of the lines exposed by a specific chip, run:
```
$ gpioinfo gpiochip0
```
Notice there's no correlation between the number of physical pins and the number of lines printed by the previous command. What's important is the BCM number, as shown on [pinout.xyz][7]. It is not advised to play with the lines that don't have a corresponding BCM number.
Now, connect an LED to physical pin 40, that is, BCM 21. Remember: the shorter leg of the LED (the negative leg, called the cathode) must be connected to a GND pin of the Raspberry Pi with a 330 ohm resistor, and the long leg (the anode) to physical pin 40.
To turn the LED on, run the following command. It will stay on until you press **Ctrl+C** :
```
$ gpioset --mode=wait gpiochip0 21=1
```
To light it up for a certain period of time, add the -b (run in the background) and -s NUM (how many seconds) parameters, as shown below. For example, to light the LED for 5 seconds, run:
```
$ gpioset -b -s 5 --mode=time gpiochip0 21=1
```
Another useful command is gpioget. It gets the status of a pin (high or low), and can be useful to detect buttons and switches.
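For example, reading line 21 on the first chip might look like this; it prints 0 or 1 depending on the line state:
```
$ gpioget gpiochip0 21
```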
![Closeup of LED connection with GPIO][8]
### **Conclusion**
You can also play with LEDs using Python — [there are some examples here][9]. And you can also use the i2c devices inside the container as well. In addition, Podman is not strictly related to this Fedora edition. You can install it on any existing Fedora Edition, or try it on the two new OSTree-based systems in Fedora: [Fedora Silverblue][2] and [Fedora CoreOS][10].
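As a taste of the Python route, here is a minimal sketch using the python3-libgpiod bindings; the exact API used below (Chip, get_line, request) follows the v1 bindings and is an assumption to verify against your installed version:
```
# blink.py -- minimal sketch; assumes the libgpiod v1 Python bindings
import time

import gpiod

chip = gpiod.Chip('gpiochip0')      # open the first GPIO chip
line = chip.get_line(21)            # BCM 21 = physical pin 40
line.request(consumer='blink', type=gpiod.LINE_REQ_DIR_OUT)

line.set_value(1)                   # LED on
time.sleep(5)
line.set_value(0)                   # LED off
line.release()
```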
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/turnon-led-fedora-iot/
作者:[Alessio Ciregia][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://alciregi.id.fedoraproject.org/
[1]: https://fedoramagazine.org/wp-content/uploads/2018/08/oled-1024x768.png
[2]: https://teamsilverblue.org/
[3]: https://www.projectatomic.io/
[4]: https://kojipkgs.fedoraproject.org/compose/iot/latest-Fedora-IoT-28/compose/IoT/
[5]: https://fedoraproject.org/wiki/InternetOfThings/GettingStarted#Setting_up_a_Physical_Device
[6]: https://github.com/containers/libpod
[7]: https://pinout.xyz/
[8]: https://fedoramagazine.org/wp-content/uploads/2018/08/breadboard-1024x768.png
[9]: https://github.com/brgl/libgpiod/tree/master/bindings/python/examples
[10]: https://coreos.fedoraproject.org/

View File

@ -1,73 +1,72 @@
何谓开源编程?
======
> 开源就是丢一些代码到 GitHub 上。了解一下它是什么,以及不是什么?
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82)
最简单的来说,开源编程就是编写一些大家可以随意取用、修改的代码。但你肯定听过关于 Go 语言的那个笑话,说 Go 语言简单到看一眼就可以明白规则,但需要一辈子去学会运用它。其实写开源代码也是这样的。往 GitHub、Bitbucket、 SourceForge 等网站或者是你自己的博客或网站上丢几行代码不是难事,但想要卓有成效,还需要个人的努力付出和高瞻远瞩。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/floorgoban.jpeg?itok=r8gA5jOk)
### 我们对开源编程的误解
首先我要说清楚一点:把你的代码放在 GitHub 的公开仓库中并不意味着把你的代码开源了。在几乎全世界,根本不用创作者做什么,只要作品形成,版权就随之而生了。在创作者进行授权之前,只有作者可以行使版权相关的权力。未经创作者授权的代码,不论有多少人在使用,都是一颗定时炸弹,只有愚蠢的人才会去用它。
有些创作者很善良,认为“很明显我的代码是免费提供给大家使用的。”,他也并不想起诉那些用了他的代码的人,但这并不意味着这些代码可以放心使用。不论在你眼中创作者们多么善良,他们都是*有权力*起诉任何使用、修改代码,或未经明确授权就将代码嵌入的人。
很明显,你不应该在没有指定开源许可证的情况下将你的源代码发布到网上然后期望别人使用它并为其做出贡献,我建议你也尽量避免使用这种代码,甚至疑似未授权的也不要使用。如果你开发了一个函数和例程,它和之前一个疑似未授权代码很像,源代码作者就可以对你就侵权提起诉讼。
举个例子, Jill Schmill 写了 AwesomeLib 然后未明确授权就把它放到了 GitHub 上,就算 Jill Schmill 不起诉任何人,只要她把 AwesomeLib 的完整版权都卖给 EvilCorpEvilCorp 就会起诉之前违规使用这段代码的人。这种行为就好像是埋下了计算机安全隐患,总有一天会为人所用。
没有许可证的代码是危险的,切记!
### 选择恰当的开源许可证
假设你正要写一个新程序,而且打算让人们以开源的方式使用它,你需要做的就是选择最贴合你需求的[许可证][1]。和宣传中说的一样,你可以从 GitHub 所支持的 [choosealicense.com][2] 开始。这个网站设置得像个简单的问卷,特别方便快捷,点几下就能找到合适的许可证。
在选择许可证时不要过于自负,如果你选的是 [Apache License][3] 或者 [GPLv3][4] 这种广为使用的许可证,人们很容易理解其对于权利的规划,你也不需要请律师来排查其中的漏洞。你选择的许可证使用的人越少,带来的麻烦越多
最重要的一点是:*千万不要试图自己编造许可证!*自己编造许可证会给大家带来更多的困惑和困扰,不要这样做。如果在现有的许可证中确实找不到你需要的条款,你可以在现有的许可证中附加上你的要求,并且重点标注出来,提醒使用者们注意。
我知道有些人会说:“我才懒得管什么许可证,我已经把代码发到<ruby>公开领域<rt>public domain</rt></ruby>了。”但问题是,公开领域的法律效力并不是受全世界认可的。在不同的国家,公开领域的效力和表现形式不同。有些国家的政府管控下,你甚至不可以把自己的源代码发到公开领域中。万幸,[Unlicense][5] 可以弥补这些漏洞,它语言简洁,使用几个词清楚地描述了“就把它放到公开领域”,但其效力为全世界认可。
### 怎样引入许可证
确定使用哪个许可证之后,你需要清晰而无疑义地指定它。如果你是在 GitHub、 GitLab 或 BitBucket 这几个网站发布,你需要构建很多个文件夹,在根文件夹中,你应把许可证创建为一个以 `LICENSE.txt` 命名的明文文件。
创建 `LICENSE.txt` 这个文件之后还有其他事要做。你需要在每个有效文件的页眉中添加注释块来申明许可证。如果你使用的是一现有的许可证,这一步对你来说十分简便。一个 `# 项目名 (c)2018 作者名, GPLv3 许可证,详情见 https://www.gnu.org/licenses/gpl-3.0.en.html` 这样的注释块比隐约指代的许可证的效力要强得多。
如果你是要发布在自己的网站上,步骤也差不多。先创建 `LICENSE.txt` 文件,放入许可证,再表明许可证出处。
### 开源代码的不同之处
开源代码和专有代码的一个主要区别是开源代码写出来就是为了给别人看的。我是个 40 多岁的系统管理员,已经写过许许多多的代码。最开始我写代码是为了工作,为了解决公司的问题,所以其中大部分代码都是专有代码。这种代码的目的很简单,只要能在特定场合通过特定方式发挥作用就行。
开源代码则大不相同。在写开源代码时,你知道它可能会被用于各种各样的环境中。也许你的使用案例的环境条件很局限,但你仍旧希望它能在各种环境下发挥理想的效果。不同的人使用这些代码时,你会看到各类冲突,还有你没有考虑过的思路。虽然代码不一定要满足所有人,但最少它们可以顺利解决使用者遇到的问题,就算解决不了,也可以转换回常见的逻辑,不会给使用者添麻烦。(例如“第 583 行出现零除错误”就不能作为错误地提供命令行参数的响应结果)
你的源代码也可能逼疯你,尤其是在你一遍又一遍地修改错误的函数或是子过程后,终于出现了你希望的结果,这时你不会叹口气就继续下一个任务,你会把过程清理干净,因为你不会愿意别人看出你一遍遍尝试的痕迹。比如你会把 `$variable`、`$lol` 全都换成有意义的 `$iterationcounter``$modelname`。这意味着你要认真专业地进行注释(尽管对于头脑风暴中的你来说它并不难懂),但为了之后有更多的人可以使用你的代码,你会尽力去注释,但注意适可而止。
这个过程难免有些痛苦沮丧,毕竟这不是你常做的事,会有些不习惯。但它会使你成为一位更好的程序员,也会让你的代码升华。即使你的项目只有你一位贡献者,清理代码也会节约你后期的很多工作,相信我一年后你更新 app 时,你会庆幸自己现在写下的是 `$modelname`,还有清晰的注释,而不是什么不知名的数列,甚至连 `$lol`也不是。
### 你并不是为你一人而写
开源的真正核心并不是那些代码,是社区。更大的社区的项目维持的时间更长,也更容易为人们接受。因此不仅要加入社区,还要多多为社区发展贡献思路,让自己的项目能够为社区所用。
蝙蝠侠为了完成目标暗中独自花了很大功夫,你用不着这样,你可以登录 Twitter、 Reddit或者给你项目的相关人士发邮件,发布你正在筹备新项目的消息,仔细聊聊项目的设计初衷和你的计划,让大家一起帮忙,向大家征集数据输入,类似的使用案例,把这些信息整合起来,用在你的代码里。你不用看所有的回复,但你要对它有个大概把握,这样在你之后完善时可以躲过一些陷阱。
不发首次通告这个过程还不算完整。如果你希望大家能够接受你的作品,并且使用它,你就要以此为初衷来设计。公众说不定可以帮到你,你不必对公开这件事如临大敌。所以不要闭门造车,既然你是为大家而写,那就开设一个真实、公开的项目,想象你在社区的监督下,认真地一步步完成它。
### 建立项目的方式
你可以在 GitHub、 GitLab 或 BitBucket 上免费注册账号来管理你的项目。注册之后,创建知识库,建立 `README` 文件,分配一个许可证,一步步写入代码。这样可以帮你建立好习惯,让你之后和现实中的团队一起工作时,也能目的清晰地朝着目标稳妥地进行工作。这样你做得越久,就越有兴趣。
用户们会开始对你产生兴趣,这会让你开心也会让你不爽,但你应该亲切礼貌地对待他们,就算他们很多人根本不知道你的项目做的是什么,你可以把文件给他们看,让他们了解你在干什么。有些还在犹豫的用户可以给你提个醒,告诉你最开始设计的用户范围中落下了哪些人。
如果你的项目很受用户青睐,总会有开发者出现,并表示出兴趣。这也许是好事,也可能激怒你。最开始你可能只会做简单的错误修正,但总有一天你会收到拉请求,有可能是特殊利基案例,它可能改变你项目的作用域,甚至改变你项目的初衷。你需要学会分辨哪个有贡献,根据这个决定合并哪个,婉拒哪个。
### 我们为什么要开源?
开源听起来任务繁重,它也确实是这样。但它对你也有很多好处。它可以在无形之中磨练你,让你写出纯净持久的代码,也教会你与人沟通,团队协作。对于一位志向远大的专业开发者来说,它是最好的简历书写者。你的未来雇主很有可能点开你的仓库,了解你的能力范围;而社区项目的开发者也有可能给你带来工作。
最后,为开源工作,意味着个人的提升,因为你在做的事不是为了你一个人,这比养活自己重要得多。
@ -77,7 +76,7 @@ via: https://opensource.com/article/18/3/what-open-source-programming
作者:[Jim Salter][a]
译者:[Valoniakim](https://github.com/Valoniakim)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,258 +0,0 @@
理解 Linux 文件系统ext4 等文件系统
=======
> 了解 ext4 的历史,包括其与 ext3 和之前的其它文件系统之间的区别。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR)
目前的大部分 Linux 发行版都默认采用 ext4 文件系统,正如以前的 Linux 发行版默认使用 ext3、ext2 以及更久前的 ext。
对于不熟悉 Linux 或文件系统的朋友而言,你可能不清楚 ext4 相对于上一版本 ext3 带来了什么变化。你可能还想知道在一连串关于替代的文件系统例如 btrfs、xfs 和 zfs 不断被发布的情况下ext4 是否仍然能得到进一步的发展。
在一篇文章中,我们不可能讲述文件系统的所有方面,但我们尝试让您尽快了解 Linux 默认文件系统的发展历史,包括它的产生以及未来发展。我仔细研究了维基百科里的各种关于 ext 文件系统文章、kernel.org 的 wiki 中关于 ext4 的条目以及结合自己的经验写下这篇文章。
### ext 简史
#### MINIX 文件系统
在有 ext 之前,使用的是 MINIX 文件系统。如果你不熟悉 Linux 历史,那么可以理解为 MINIX 是用于 IBM PC/AT 微型计算机的一个非常小的类 Unix 系统。Andrew Tanenbaum 为了教学的目的而开发了它,并于 1987 年发布了源代码(以印刷版的格式!)。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/ibm_pc_at.jpg?itok=Tfk3hQYB)
*IBM 1980 中期的 PC/AT[MBlairMartin](https://commons.wikimedia.org/wiki/File:IBM_PC_AT.jpg)[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en)*
虽然你可以细读 MINIX 的源代码但实际上它并不是自由开源软件FOSS。出版 Tanenbaum 著作的出版商要求你花 69 美元的许可费来运行 MINIX而这笔费用包含在书籍的费用中。尽管如此在那时来说非常便宜并且 MINIX 的使用得到迅速发展,很快超过了 Tanenbaum 当初使用它来教授操作系统编码的意图。在整个 20 世纪 90 年代,你可以发现 MINIX 的安装在世界各个大学里面非常流行。而此时,年轻的 Linus Torvalds 使用 MINIX 来开发原始 Linux 内核,并于 1991 年首次公布,而后在 1992 年 12 月在 GPL 开源协议下发布。
但是等等,这是一篇以*文件系统*为主题的文章不是吗是的MINIX 有自己的文件系统,早期的 Linux 版本依赖于它。跟 MINIX 一样Linux 的文件系统也如同玩具那般小 —— MINIX 文件系统最多能处理 14 个字符的文件名,并且只能处理 64MB 的存储空间。到了 1991 年,一般的硬盘尺寸已经达到了 40-140MB。很显然Linux 需要一个更好的文件系统。
#### ext
当 Linus 开发出刚起步的 Linux 内核时Rémy Card 从事第一代的 ext 文件系统的开发工作。 ext 文件系统在 1992 首次实现并发布 —— 仅在 Linux 首次发布后的一年! —— ext 解决了 MINIX 文件系统中最糟糕的问题。
1992 年的 ext 使用在 Linux 内核中的新虚拟文件系统VFS抽象层。与之前的 MINIX 文件系统不同的是ext 可以处理高达 2GB 存储空间并处理 255 个字符的文件名。
但 ext 并没有长时间占统治地位,主要是由于它原始的时间戳(每个文件仅有一个时间戳,而不是今天我们所熟悉的有 inode 、最近文件访问时间和最新文件修改时间的时间戳。仅仅一年后ext2 就替代了它。
#### ext2
Rémy 很快就意识到 ext 的局限性,所以一年后他设计出 ext2 替代它。当 ext 仍然根植于 “玩具” 操作系统时ext2 从一开始就被设计为一个商业级文件系统,沿用 BSD 的 Berkeley 文件系统的设计原理。
Ext2 提供了 GB 级别的最大文件大小和 TB 级别的文件系统大小,使其在 20 世纪 90 年代的地位牢牢巩固在文件系统大联盟中。很快它被广泛地使用,无论是在 Linux 内核中还是最终在 MINIX 中,且利用第三方模块可以使其应用于 MacOS 和 Windows。
但这里仍然有一些问题需要解决ext2 文件系统与 20 世纪 90 年代的大多数文件系统一样,如果在将数据写入到磁盘的时候,系统发生崩溃或断电,则容易发生灾难性的数据损坏。随着时间的推移,由于碎片(单个文件存储在多个位置,物理上其分散在旋转的磁盘上),它们也遭受了严重的性能损失。
尽管存在这些问题,但今天 ext2 还是用在某些特殊的情况下 —— 最常见的是,作为便携式 USB 拇指驱动器的文件系统格式。
#### ext3
1998 年, 在 ext2 被采用后的 6 年后Stephen Tweedie 宣布他正在致力于改进 ext2。这成了 ext3并于 2001 年 11 月在 2.4.15 内核版本中被采用到 Linux 内核主线中。
![Packard Bell 计算机][2]
*20 世纪 90 年代中期的 Packard Bell 计算机,[Spacekid][3][CC0][4]*
在大部分情况下Ext2 在 Linux 发行版中工作得很好,但像 FAT、FAT32、HFS 和当时的其他文件系统一样 —— 在断电时容易发生灾难性的破坏。如果在将数据写入文件系统时候发生断电,则可能会将其留在所谓*不一致*的状态 —— 事情只完成一半而另一半未完成。这可能导致大量文件丢失或损坏,这些文件与正在保存的文件无关甚至导致整个文件系统无法卸载。
Ext3 和 20 世纪 90 年代后期的其他文件系统,如微软的 NTFS ,使用*日志*来解决这个问题。日志是磁盘上的一种特殊的分配区域,其写入被存储在事务中;如果该事务完成磁盘写入,则日志中的数据将提交给文件系统自身。如果系统在该操作提交前崩溃,则重新启动的系统识别其为未完成的事务而将其进行回滚,就像从未发生过一样。这意味着正在处理的文件可能依然会丢失,但文件系统*本身*保持一致,且其他所有数据都是安全的。
在使用 ext3 文件系统的 Linux 内核中实现了三个级别的日志记录方式:<ruby>日记<rt>journal</rt></ruby><ruby>顺序<rt>ordered</rt></ruby><ruby>回写<rt>writeback</rt></ruby>
* **日记** 是最低风险模式,在将数据和元数据提交给文件系统之前将其写入日志。这可以保证正在写入的文件与整个文件系统的一致性,但其显著降低了性能。
* **顺序** 是大多数 Linux 发行版默认模式;顺序模式将元数据写入日志而直接将数据提交到文件系统。顾名思义,这里的操作顺序是固定的:首先,元数据提交到日志;其次,数据写入文件系统,然后才将日志中关联的元数据更新到文件系统。这确保了在发生崩溃时,那些与未完整写入相关联的元数据仍在日志中,且文件系统可以在回滚日志时清理那些不完整的写入事务。在顺序模式下,系统崩溃可能导致在崩溃期间文件的错误被主动写入,但文件系统它本身 —— 以及未被主动写入的文件 —— 确保是安全的。
* **回写** 是第三种模式 —— 也是最不安全的日志模式。在回写模式下,像顺序模式一样,元数据会被记录到日志,但数据不会。与顺序模式不同,元数据和数据都可以以任何有利于获得最佳性能的顺序写入。这可以显著提高性能,但安全性低很多。尽管回写模式仍然保证文件系统本身的安全性,但在崩溃或崩溃之前写入的文件很容易丢失或损坏。
跟之前的 ext2 类似ext3 使用 32 位内部寻址。这意味着对于块大小为 4K 的 ext3在最大规格为 16TiB 的文件系统中,可以处理的最大文件大小为 2TiB。
#### ext4
由 Theodore Ts'o(当时 ext3 的主要开发人员)开发的 ext4 于 2006 年发表,并于两年后的 2.6.28 内核版本中被加入到了 Linux 主线。
Tso 将 ext4 描述为一个显著扩展 ext3 但仍然依赖于旧技术的临时技术。他预计 ext4 终将会被真正的下一代文件系统所取代。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dell_precision_380_workstation.jpeg?itok=3EjYXY2i)
*Dell Precision 380 工作站,[Lance Fisher](https://commons.wikimedia.org/wiki/File:Dell_Precision_380_Workstation.jpeg)[CC BY-SA 2.0](https://creativecommons.org/licenses/by-sa/2.0/deed.en)*
ext4 在功能上与 ext3 非常相似,但支持大文件系统,提高了对碎片的抵抗力,有更高的性能以及更好的时间戳。
### ext4 vs ext3
ext3 和 ext4 有一些非常明确的差别,在这里集中讨论下。
#### 向后兼容性
ext4 特地设计为尽可能地向后兼容 ext3。这不仅允许 ext3 文件系统原地升级到 ext4也允许 ext4 驱动程序以 ext3 模式自动挂载 ext3 文件系统,因此使它无需单独维护两个代码库。
#### 大文件系统
ext3 文件系统使用 32 位寻址,这限制它仅支持 2TiB 文件大小和 16TiB 文件系统系统大小(这是假设在块大小为 4KiB 的情况下,一些 ext3 文件系统使用更小的块大小,因此对其进一步被限制)。
ext4 使用 48 位的内部寻址,理论上可以在文件系统上分配高达 16 TiB 大小的文件,其中文件系统大小最高可达 1,000,000 TiB1 EiB。在早期 ext4 的实现中,有些用户空间的程序仍然将其限制为最大大小为 16 TiB 的文件系统,但截至 2011 年e2fsprogs 已经直接支持大于 16 TiB 大小的 ext4 文件系统。例如,红帽企业 Linux 在其合同上仅支持最高 50 TiB 的 ext4 文件系统,并建议 ext4 卷不超过 100 TiB。
#### 分配方式改进
ext4 在将存储块写入磁盘之前对存储块的分配方式进行了大量改进,这可以显著提高读写性能。
##### 区段
<ruby>区段<rt>extent</rt></ruby>是一系列连续的物理块(最多达 128 MiB假设块大小为 4KiB可以一次性保留和寻址。使用区段可以减少给定文件所需的 inode 数量,并显著减少碎片,提高写入大文件时的性能。
##### 多块分配
ext3 为每一个新分配的块调用一次块分配器。当多个写入同时打开分配器时很容易导致严重的碎片。然而ext4 使用延迟分配,这允许它合并写入并更好地决定如何为尚未提交的写入分配块。
##### 持久的预分配
在为文件预分配磁盘空间时大部分文件系统必须在创建时将零写入该文件的块中。ext4 允许替代使用 `fallocate()`,它保证了空间的可用性(并试图为它找到连续的空间),而不需要先写入它。这显著提高了写入和将来读取流和数据库应用程序的写入数据的性能。
##### 延迟分配
这是一个耐人寻味而有争议性的功能。延迟分配允许 ext4 等待分配将写入数据的实际块,直到它准备好将数据提交到磁盘。(相比之下,即使数据仍然在往写入缓存中写入ext3 也会立即分配块。)
当缓存中的数据累积时,延迟分配块允许文件系统对如何分配块做出更好的选择,降低碎片(写入,以及稍后的读)并显著提升性能。然而不幸的是,它*增加*了还没有专门调用 `fsync()` 方法(当程序员想确保数据完全刷新到磁盘时)的程序的数据丢失的可能性。
假设一个程序完全重写了一个文件:
```
fd=open("file" ,O_TRUNC); write(fd, data); close(fd);
```
使用旧的文件系统, `close(fd);` 足以保证 `file` 中的内容刷新到磁盘。即使严格来说,写不是事务性的,但如果文件关闭后发生崩溃,则丢失数据的风险很小。
如果写入不成功(由于程序上的错误、磁盘上的错误、断电等),文件的原始版本和较新版本都可能丢失数据或损坏。如果其他进程在写入文件时访问文件,则会看到损坏的版本。如果其他进程打开文件并且不希望其内容发生更改 —— 例如,映射到多个正在运行的程序的共享库。这些进程可能会崩溃。
为了避免这些问题,一些程序员完全避免使用 `O_TRUNC`。相反,他们可能会写入一个新文件,关闭它,然后将其重命名为旧文件名:
```
fd=open("newfile"); write(fd, data); close(fd); rename("newfile", "file");
```
在*没有*延迟分配的文件系统下,这足以避免上面列出的潜在的损坏和崩溃问题:因为 `rename()` 是原子操作,所以它不会被崩溃中断;并且运行的程序将继续引用旧的文件,确切地说,是 `file` 旧版本的未链接副本,只要还有一个指向它的打开的文件句柄,它就继续存在。但是因为 ext4 的延迟分配会导致写入被延迟和重新排序,`rename("newfile","file")` 可能在 `newfile` 的内容实际写入磁盘之前执行,这又出现了并行进程再次读到 `file` 坏版本的问题。
为了缓解这种情况Linux 内核(自版本 2.6.30 )尝试检测这些常见代码情况并强制立即分配。这会减少但不能防止数据丢失的可能性 —— 并且它对新文件没有任何帮助。如果你是一位开发人员,请注意:保证数据立即写入磁盘的唯一方法是正确调用 `fsync()`
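在上文示意代码的基础上,加入 `fsync()` 调用的写法类似于下面这样(仅作说明的示意):
```
fd=open("newfile"); write(fd, data); fsync(fd); close(fd); rename("newfile", "file");
```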
#### 无限制的子目录
ext3 仅限于 32000 个子目录ext4 允许无限数量的子目录。从 2.6.23 内核版本开始ext4 使用 HTree 索引来减少大量子目录的性能损失。
#### 日志校验
ext3 没有对日志进行校验,这给处于内核直接控制之外的磁盘或带有自己的缓存的控制器设备带来了问题。如果控制器或具有自己的缓存的磁盘脱离了写入顺序,则可能会破坏 ext3 的日记事务顺序,从而可能破坏在崩溃期间(或之前一段时间)写入的文件。
理论上,这个问题可以使用写入<ruby>障碍<rt>barrier</rt></ruby> —— 在安装文件系统时,你在挂载选项设置 `barrier=1` ,然后设备就会忠实地执行 `fsync` 一直向下到底层硬件。通过实践,可以发现存储设备和控制器经常不遵守写入障碍 —— 提高性能(和跟竞争对手比较的性能基准),但增加了本应该防止数据损坏的可能性。
对日志进行校验和,可以让文件系统在崩溃后第一次挂载时意识到其某些条目是无效或无序的。因此,这避免了回滚部分条目或无序日志条目而进一步损坏文件系统的错误——即使部分存储设备作假或不遵守写入障碍。
#### 快速文件系统检查
在 ext3 下,在 `fsck` 被调用时会检查整个文件系统 —— 包括已删除或空文件。相比之下ext4 标记了 inode 表未分配的块和扇区,从而允许 `fsck` 完全跳过它们。这大大减少了在大多数文件系统上运行 `fsck` 的时间,它实现于内核 2.6.24。
#### 改进的时间戳
ext3 提供粒度为一秒的时间戳。虽然足以满足大多数用途但任务关键型应用程序经常需要更严格的时间控制。ext4 通过提供纳秒级的时间戳,使其可用于那些企业、科学以及任务关键型的应用程序。
ext3 文件系统也没有提供足够的位来存储 2038 年 1 月 18 日以后的日期。ext4 在这里增加了两个位,将 [Unix 纪元][5] 扩展了 408 年。如果你在公元 2446 年读到这篇文章,你很有可能已经转移到一个更好的文件系统 —— 如果你还在测量自 1970 年 1 月 1 日 00:00UTC以来的时间这会让我死后得以安眠。
#### 在线碎片整理
ext2 和 ext3 都不直接支持在线碎片整理 —— 即在挂载时会对文件系统进行碎片整理。ext2 有一个包含的实用程序,`e2defrag`,它的名字暗示 —— 它需要在文件系统未挂载时脱机运行。(显然,这对于根文件系统来说非常有问题。)在 ext3 中的情况甚至更糟糕 —— 虽然 ext3 比 ext2 更不容易受到严重碎片的影响,但 ext3 文件系统运行 `e2defrag` 可能会导致灾难性损坏和数据丢失。
尽管 ext3 最初被认为“不受碎片影响”,但对同一文件(例如 BitTorrent采用大规模并行写入过程的过程清楚地表明情况并非完全如此。一些用户空间的手段和解决方法例如 [Shake][6],以这样或那样方式解决了这个问题 —— 但它们比真正的、文件系统感知的、内核级碎片整理过程更慢并且在各方面都不太令人满意。
ext4通过 `e4defrag` 解决了这个问题,且是一个在线、内核模式、文件系统感知、块和区段级别的碎片整理实用程序。
### 正在进行的 ext4 开发
ext4正如 Monty Python 中瘟疫感染者曾经说过的那样,“我还没死呢!”虽然它的[主要开发人员][7]认为它只是一个真正的[下一代文件系统][8]的权宜之计,但是在一段时间内,没有任何可能的候选者(由于技术或许可问题)准备好部署为根文件系统。
在未来的 ext4 版本中仍然有一些关键功能要开发,包括元数据校验和、一流的配额支持和大分配块。
#### 元数据校验和
由于 ext4 具有冗余超级块,因此为文件系统校验其中的元数据提供了一种方法,可以自行确定主超级块是否已损坏并需要使用备用块。可以在没有校验和的情况下,从损坏的超级块恢复 —— 但是用户首先需要意识到它已损坏,然后尝试使用备用方法手动挂载文件系统。由于在某些情况下,使用损坏的主超级块安装文件系统读写可能会造成进一步的损坏,即使是经验丰富的用户也无法避免,这也不是一个完美的解决方案!
与 btrfs 或 zfs 等下一代文件系统提供的极其强大的每块校验和相比ext4 的元数据校验和的功能非常弱。但它总比没有好。虽然校验**所有的事情**都听起来很简单!—— 事实上,将校验和与文件系统连接到一起有一些重大的挑战;请参阅[设计文档][9]了解详细信息。
#### 一流的配额支持
等等,配额?!从 ext2 出现的那天开始我们就有了这些!是的,但它们一直都是事后的添加的东西,而且它们总是犯傻。这里可能不值得详细介绍,但[设计文档][10]列出了配额将从用户空间移动到内核中的方式,并且能够更加正确和高效地执行。
#### 大分配块
随着时间的推移,那些讨厌的存储系统不断变得越来越大。由于一些固态硬盘已经使用 8K 硬件块大小,因此 ext4 目前 4K 块大小的限制越来越成为制约。较大的存储块可以显著减少碎片并提高性能,代价是增加“松弛”空间(当你只需要块的一部分来存储文件或文件的最后一块时留下的空间)。
您可以在[设计文档][11]中查看详细说明。
### ext4 的实际限制
ext4 是一个健壮、稳定的文件系统。如今大多数人都应该在用它作为根文件系统,但它无法处理所有需求。让我们简单地谈谈你不应该期待的一些事情 —— 现在或可能在未来:
虽然 ext4 可以处理高达 1 EiB相当于 1,000,000 TiB大小的数据但你真的、*真的*不应该尝试这样做。除了能够记住更多块的地址之外,还存在规模上的问题。并且现在 ext4 不会处理(并且可能永远不会处理)超过 50~100 TiB 的数据。
ext4 也不足以保证数据的完整性。尽管日志记录在 ext3 时代是一大进步,但它并未涵盖数据损坏的许多常见原因。如果数据已经在磁盘上被[破坏][12]——由于故障硬件、宇宙射线的影响(是的,真的),或者只是数据随时间衰减——ext4 无法检测或修复这种损坏。
基于上面两点ext4 只是一个纯*文件系统*,而不是存储卷管理器。这意味着,即使你有多个带奇偶校验或冗余的磁盘,理论上存在可恢复的损坏数据ext4 也无从知晓和利用。虽然理论上可以在不同的层中分离文件系统和存储卷管理系统而不会丢失自动损坏检测和修复功能,但这不是当前存储系统的设计方式,并且它将给新设计带来重大挑战。
### 备用文件系统
在我们开始之前,提醒一句:要非常小心,没有任何备用的文件系统作为主线内核的一部分而内置和直接支持!
即使一个文件系统是*安全的*,如果在内核升级期间出现问题,将它用作根文件系统也是非常可怕的。如果你不是非常熟悉通过替代介质引导、chroot、摆弄内核模块、grub 配置和 DKMS 等操作……不要在一个很重要的系统上冒险替换根文件系统。
可能有充分的理由使用您的发行版不直接支持的文件系统 —— 但如果您这样做,我强烈建议您在系统启动并可用后再安装它。(例如,您可能有一个 ext4 根文件系统,但是将大部分数据存储在 zfs 或 btrfs 池中。)
#### XFS
XFS 与非 ext 文件系统在 Linux 中的主线中的地位一样。它是一个 64 位的日志文件系统,自 2001 年以来内置于 Linux 内核中,为大型文件系统和高度并发性提供了高性能(即,大量的进程都会立即写入文件系统)。
从 RHEL 7 开始XFS 成为 Red Hat Enterprise Linux 的默认文件系统。对于家庭或小型企业用户来说,它仍然有一些缺点 —— 最值得注意的是,重新调整现有 XFS 文件系统是一件非常痛苦的事情,不如创建另一个并复制数据更有意义。
虽然 XFS 是稳定且高性能的,但它和 ext4 之间没有足够具体的最终用途差异,因此很难推荐在非默认的场合(例如 RHEL7使用它除非它解决了 ext4 的某个特定问题,例如大于 50 TiB 容量的文件系统。
XFS 在任何方面都不是 ZFS、btrfs 甚至 WAFL一个专有的 SAN 文件系统)的“下一代”文件系统。就像 ext4 一样,它应该被视为一种更好的方式的权宜之计。
#### ZFS
ZFS 由 Sun Microsystems 开发,以 zettabyte 命名 —— 相当于 1 万亿 GB —— 因为它理论上可以解决大型存储系统。
作为真正的下一代文件系统ZFS 提供卷管理(能够在单个文件系统中处理多个单独的存储设备),块级加密校验和(允许以极高的准确率检测数据损坏),[自动损坏修复][12](其中冗余或奇偶校验存储可用),[快速异步增量复制][13],内联压缩等,[以及更多][14]。
从 Linux 用户的角度来看ZFS 的最大问题是许可证问题。ZFS 许可证是 CDDL 许可证,这是一种与 GPL 冲突的半许可许可证。关于在 Linux 内核中使用 ZFS 的意义存在很多争议,其争议范围从“它是 GPL 违规”到“它是 CDDL 违规”到“它完全没问题,它还没有在法庭上进行过测试。 ” 最值得注意的是,自 2016 年以来 Canonical 已将 ZFS 代码内联在其默认内核中,而且目前尚无法律挑战。
此时,即使我作为一个非常狂热于 ZFS 的用户,我也不建议将 ZFS 作为 Linux 的 root 文件系统。如果你想在 Linux 上利用 ZFS 的优势,用 ext4 设置一个小的根文件系统,然后将 ZFS 用在你剩余的存储上,把数据、应用程序以及你喜欢的东西放在它上面 —— 但把 root 保留在 ext4 上,直到你的发行版明显支持 zfs 根目录。
#### BTRFS
Btrfs 是 B-Tree Filesystem 的简称,通常发音为 “butter” —— 由 Chris Mason 于 2007 年在 Oracle 任职期间发布。BTRFS 旨在跟 ZFS 有大部分相同的目标,提供多种设备管理、每块校验、异步复制、直列压缩等,[还有更多][8]。
截至 2018 年btrfs 相当稳定,可用作标准的单磁盘文件系统,但可能不应该依赖于卷管理器。与许多常见用例中的 ext4、XFS 或 ZFS 相比,它存在严重的性能问题,其下一代功能 —— 复制、多磁盘拓扑和快照管理 —— 可能非常多,其结果可能是从灾难性地性能降低到实际数据的丢失。
btrfs 的维护状态是有争议的SUSE Enterprise Linux 在 2015 年采用它作为默认文件系统,而 Red Hat 于 2017 年宣布它从 RHEL 7.4 开始不再支持 btrfs。可能值得注意的是这些产品支持的 btrfs 部署是将其用作单磁盘文件系统,而不是像 ZFS 那样的多磁盘卷管理器。甚至 Synology 也在它的存储设备上使用 btrfs但是它是在传统 Linux 内核 RAIDmdraid之上分层来管理磁盘的。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/ext4-filesystem
作者:[Jim Salter][a]
译者:[HardworkFish](https://github.com/HardworkFish)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jim-salter
[1]:https://opensource.com/file/391546
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/packard_bell_pc.jpg?itok=VI8dzcwp (Packard Bell computer)
[3]:https://commons.wikimedia.org/wiki/File:Old_packard_bell_pc.jpg
[4]:https://creativecommons.org/publicdomain/zero/1.0/deed.en
[5]:https://en.wikipedia.org/wiki/Unix_time
[6]:https://vleu.net/shake/
[7]:http://www.linux-mag.com/id/7272/
[8]:https://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/
[9]:https://ext4.wiki.kernel.org/index.php/Ext4_Metadata_Checksums
[10]:https://ext4.wiki.kernel.org/index.php/Design_For_1st_Class_Quota_in_Ext4
[11]:https://ext4.wiki.kernel.org/index.php/Design_for_Large_Allocation_Blocks
[12]:https://en.wikipedia.org/wiki/Data_degradation#Visual_example_of_data_degradation
[13]:https://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/
[14]:https://arstechnica.com/information-technology/2014/02/ars-walkthrough-using-the-zfs-next-gen-filesystem-on-linux/

View File

@ -1,211 +0,0 @@
如何在 Linux 中压缩和解压缩文件
======
![](https://www.ostechnix.com/wp-content/uploads/2018/03/compress-720x340.jpg)
当在备份重要文件和通过网络发送大文件的时候,对文件进行压缩非常有用。请注意,压缩一个已经压缩过的文件会增加额外开销,因此你将会得到一个更大一些的文件。所以,请不要压缩已经压缩过的文件。在 GNU/Linux 中,有许多程序可以用来压缩和解压缩文件。在这篇教程中,我们仅学习其中两个应用程序。
### 压缩和解压缩文件
在类 Unix 系统中,最常见的用来压缩文件的程序是:
1. gzip
2. bzip2
##### 1\. 使用 Gzip 程序来压缩和解压缩文件
Gzip 是一个使用 Lempel-Ziv 编码LZ77算法来压缩和解压缩文件的实用工具。
**1.1 压缩文件**
如果要压缩一个名为 ostechnix.txt 的文件,使之成为 gzip 格式的压缩文件,那么只需运行如下命令:
```
$ gzip ostechnix.txt
```
上面的命令运行结束之后,将会出现一个名为 ostechnix.txt.gz 的 gzip 格式压缩文件,代替原始的 ostechnix.txt 文件。
gzip 命令还可以有其他用法。一个有趣的例子是,我们可以将一个特定命令的输出通过管道传递,然后作为 gzip 程序的输入来创建一个压缩文件。看下面的命令:
```
$ ls -l Downloads/ | gzip > ostechnix.txt.gz
```
上面的命令将会创建一个 gzip 格式的压缩文件,文件的内容为 “Downloads” 目录的目录项。
**1.2 压缩文件并将输出写到新文件中(不覆盖原始文件)**
默认情况下gzip 程序会压缩给定文件,并以压缩文件替代原始文件。但是,你也可以保留原始文件,并将输出写到标准输出。比如,下面这个命令将会压缩 ostechnix.txt 文件,并将输出写入文件 output.txt.gz 。
```
$ gzip -c ostechnix.txt > output.txt.gz
```
类似地,要解压缩一个 gzip 格式的压缩文件并指定输出文件的文件名,只需运行:
```
$ gzip -c -d output.txt.gz > ostechnix1.txt
```
上面的命令将会解压缩 output.txt.gz 文件,并将输出写入到文件 ostechnix1.txt 中。在上面两个例子中,原始文件均不会被删除。
**1.3 解压缩文件**
如果要解压缩 ostechnix.txt.gz 文件,并以原始未压缩版本的文件来代替它,那么只需运行:
```
$ gzip -d ostechnix.txt.gz
```
我们也可以使用 gunzip 程序来解压缩文件:
```
$ gunzip ostechnix.txt.gz
```
**1.4 在不解压缩的情况下查看压缩文件的内容**
如果你想在不解压缩的情况下,使用 gzip 程序查看压缩文件的内容,那么可以像下面这样使用 -c 选项:
```
$ gunzip -c ostechnix1.txt.gz
```
或者,你也可以像下面这样使用 zcat 程序:
```
$ zcat ostechnix.txt.gz
```
你也可以通过管道将输出传递给 less 命令,从而一页一页的来查看输出,就像下面这样:
```
$ gunzip -c ostechnix1.txt.gz | less
$ zcat ostechnix.txt.gz | less
```
另外zless 程序也能够实现和上面的管道同样的功能。
```
$ zless ostechnix1.txt.gz
```
**1.5 使用 gzip 压缩文件并指定压缩级别**
Gzip 的另外一个显著优点是支持压缩级别。它支持下面给出的 3 个压缩级别:
* **1** 最快 (最差)
* **9** 最慢 (最好)
* **6** 默认级别
要压缩名为 ostechnix.txt 的文件,使之成为“最好”压缩级别的 gzip 压缩文件,可以运行:
```
$ gzip -9 ostechnix.txt
```
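压缩完成后,还可以使用 gzip 的 -l 选项查看压缩文件的压缩前后大小和压缩比等信息(这只是一个检查压缩效果的简便方法):
```
$ gzip -l ostechnix.txt.gz
```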
**1.6 连接多个压缩文件**
我们也可以把多个需要压缩的文件压缩到同一个文件中。如何实现呢?看下面这个例子。
```
$ gzip -c ostechnix1.txt > output.txt.gz
$ gzip -c ostechnix2.txt >> output.txt.gz
```
上面的两个命令将会压缩文件 ostechnix1.txt 和 ostechnix2.txt并将输出保存到一个文件 output.txt.gz 中。
你可以通过下面其中任何一个命令,在不解压缩的情况下,查看两个文件 ostechnix1.txt 和 ostechnix2.txt 的内容:
```
$ gunzip -c output.txt.gz
$ gunzip -c output.txt
$ zcat output.txt.gz
$ zcat output.txt
```
如果你想了解关于 gzip 的更多细节,请参阅它的 man 手册。
```
$ man gzip
```
##### 2\. 使用 bzip2 程序来压缩和解压缩文件
bzip2 和 gzip 非常类似,但是 bzip2 使用的是 Burrows-Wheeler 块排序压缩算法,并使用<ruby>哈夫曼<rt>Huffman</rt></ruby>编码。使用 bzip2 压缩的文件以 “.bz2” 扩展名结尾。
正如我上面所说的, bzip2 的用法和 gzip 几乎完全相同。只需在上面的例子中将 gzip 换成 bzip2将 gunzip 换成 bunzip2将 zcat 换成 bzcat 即可。
要使用 bzip2 压缩一个文件,并以压缩后的文件取而代之,只需运行:
```
$ bzip2 ostechnix.txt
```
如果你不想替换原始文件,那么可以使用 -c 选项,并把输出写入到新文件中。
```
$ bzip2 -c ostechnix.txt > output.txt.bz2
```
如果要解压缩文件,则运行:
```
$ bzip2 -d ostechnix.txt.bz2
```
或者,
```
$ bunzip2 ostechnix.txt.bz2
```
如果要在不解压缩的情况下查看一个压缩文件的内容,则运行:
```
$ bunzip2 -c ostechnix.txt.bz2
```
或者,
```
$ bzcat ostechnix.txt.bz2
```
如果你想了解关于 bzip2 的更多细节,请参阅它的 man 手册。
```
$ man bzip2
```
##### 总结
在这篇教程中,我们学习了 gzip 和 bzip2 程序是什么,并通过 GNU/Linux 下的一些例子学习了如何使用它们来压缩和解压缩文件。接下来,我们将要学习如何在 Linux 中将文件和目录归档。
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/
作者:[SK][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/

View File

@ -1,322 +0,0 @@
Makefile及其工作原理
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_liberate%20docs_1109ay.png?itok=xQOLreya)
当你需要在一些源文件改变后运行或更新一个任务时通常会用到 make 工具。make 工具需要读取一个 Makefile或 makefile文件在该文件中定义了一系列需要执行的任务。make 可以用来将源代码编译为可执行程序。大部分开源项目会使用 make 来实现最终二进制文件的编译,然后使用 make install 命令来执行安装。
本文将通过一些基础和进阶的示例来展示make和Makefile的使用方法。在开始前请确保你的系统中安装了make。
### 基础示例
依然从打印“Hello World”开始。首先创建一个名字为myproject的目录目录下新建Makefile文件文件内容为
```
say_hello:
        echo "Hello World"
```
在myproject目录下执行make会有如下输出
```
$ make
echo "Hello World"
Hello World
```
在上面的例子中“say_hello” 类似于其他编程语言中的函数名,在此被称为 target。在 target 之后的是预置条件和依赖。为了简单起见,我们在示例中没有定义预置条件。`echo "Hello World"` 这条命令被称为 recipe。recipe 基于预置条件来实现 target。target、预置条件和 recipe 共同构成一个规则。
总结一下,一个典型的规则的语法为:
```
target: 预置条件
<TAB> recipe
```
在示例中target是一个基于源代码这个预置条件的二进制文件。另外在另一规则中这个预置条件也可以是依赖其他预置条件的target。
```
final_target: sub_target final_target.c
        Recipe_to_create_final_target
sub_target: sub_target.c
        Recipe_to_create_sub_target
```
target 不要求是一个文件,也可以只是方便 recipe 使用的名字,我们称之为伪 target。
再回到上面的示例中,当 make 被执行时,整条指令 `echo "Hello World"` 都被打印出来,之后才是真正的执行结果。如果不希望指令本身被打印出来,需要在 echo 前添加 @。
```
say_hello:
        @echo "Hello World"
```
重新运行make将会只有如下输出
```
$ make
Hello World
```
接下来在Makefile中添加如下伪targetgenerate和clean
```
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
随后当我们运行 make 时,只有 say_hello 这个 target 被执行。这是因为 makefile 中的默认 target 为第一个 target。通常情况下只有默认的 target 会被调用,大多数项目会将 “all” 作为默认 target由 “all” 负责调用其他的 target。我们可以通过 .DEFAULT_GOAL 这个特殊的伪 target 来覆盖掉默认的行为。
在makefile文件开头增加.DEFAULT_GOAL
```
.DEFAULT_GOAL := generate
```
make会将generate作为默认target
```
$ make
Creating empty text files...
touch file-{1..10}.txt
```
顾名思义,.DEFAULT_GOAL伪target仅能定义一个target。这就是为什么很多项目仍然会有all这个target。这样可以保证多个target的实现。
下面删除掉.DEFAULT_GOAL增加all target
```
all: say_hello generate
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
运行之前我们再增加一些特殊的伪 target。.PHONY 用来定义这些不是文件的 target。make 会默认调用这些伪 target 下的 recipe而不去检查文件是否存在或最后修改日期。完整的 makefile 如下:
```
.PHONY: all say_hello generate clean
all: say_hello generate
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
make命令会调用say_hello和generate
```
$ make
Hello World
Creating empty text files...
touch file-{1..10}.txt
```
clean不应该被放入all中或者被放入第一个target。clean应当在需要清理时手动调用调用方法为make clean。
```
$ make clean
Cleaning up...
rm *.txt
```
现在你应该已经对 makefile 有了基础的了解,接下来我们看一些进阶的示例。
### 进阶示例
#### 变量
在之前的示例中,大部分 target 和预置条件是已经固定了的,但在实际项目中,它们通常用变量和模式来代替。
定义变量最简单的方式是使用 `=` 操作符。例如,将命令 gcc 赋值给变量 CC
```
CC = gcc
```
这被称为递归扩展变量,用于如下所示的规则中:
```
hello: hello.c
    ${CC} hello.c -o hello
```
你可能已经想到了recipe将会在传递给终端时展开为
```
gcc hello.c -o hello
```
${CC}和$(CC)都能对gcc进行引用。但如果一个变量尝试将它本身赋值给自己将会造成死循环。让我们验证一下
```
CC = gcc
CC = ${CC}
all:
    @echo ${CC}
```
此时运行make会导致
```
$ make
Makefile:8: *** Recursive variable 'CC' references itself (eventually).  Stop.
```
为了避免这种情况发生,可以使用“:=”操作符(这被称为简单扩展变量)。以下代码不会造成上述问题:
```
CC := gcc
CC := ${CC}
all:
    @echo ${CC}
```
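下面再补充一个小例子(基于上文内容的演示性示意,注意 recipe 行必须以制表符开头),直观体现 `=`(使用时才展开)与 `:=`(定义时立即展开)的差别:
```
A = hello
# 递归扩展:使用时才展开
B = ${A} world
# 简单扩展:定义时立即展开
C := ${A} world
A = goodbye

all:
        @echo ${B}   # 打印 "goodbye world"
        @echo ${C}   # 打印 "hello world"
```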
#### 模式和函数
下面的makefile使用了变量、模式和函数来实现所有C代码的编译。我们来逐行分析下
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects
.PHONY = all clean
CC = gcc                        # compiler to use
LINKERFLAG = -lm
SRCS := $(wildcard *.c)
BINS := $(SRCS:%.c=%)
all: ${BINS}
%: %.o
        @echo "Checking.."
        ${CC} ${LINKERFLAG} $< -o $@
%.o: %.c
        @echo "Creating object.."
        ${CC} -c $<
clean:
        @echo "Cleaning up..."
        rm -rvf *.o ${BINS}
```
* 以 `#` 开头的行是注释。
* `.PHONY = all clean` 定义了 all 和 clean 两个伪 target。
* 变量 `LINKERFLAG` 定义了 recipe 中 gcc 链接时需要用到的参数。
* `SRCS := $(wildcard *.c)`: `$(wildcard pattern)` 是与文件名相关的一个函数。在本示例中,所有 `.c` 后缀的文件会被存入 `SRCS` 变量。
* `BINS := $(SRCS:%.c=%)`: 这被称为替代引用。本例中,如果 `SRCS` 的值为 `foo.c bar.c`,则 `BINS` 的值为 `foo bar`。
* `all: ${BINS}` 这一行:伪 target all 将 `${BINS}` 变量中的所有值作为子 target 调用。
* 规则:
```
%: %.o
  @echo "Checking.."
${CC} ${LINKERFLAG} $< -o $@
```
下面通过一个示例来理解这条规则。假定 foo 是变量 `${BINS}` 中的一个值。`%` 会匹配到 foo`%` 可以匹配任意一个 target。下面是规则展开后的内容
```
foo: foo.o
  @echo "Checking.."
  gcc -lm foo.o -o foo
```
如上所示,“%”被“foo”替换掉了。“$<”被“foo.o”替换掉。“$<”用于匹配预置条件,`$@`匹配target。对“${BINS}”中的每个值,这条规则都会被调用一遍。
* 规则:
```
%.o: %.c
  @echo "Creating object.."
${CC} -c $<
```
之前规则中的每个预置条件在这条规则中都会被作为一个 target。下面是展开后的内容
```
foo.o: foo.c
  @echo "Creating object.."
  gcc -c foo.c
```
* 最后,在 target clean 中,所有的二进制文件和目标文件都将被删除。
下面是重写后的makefile该文件应该被放置在一个有foo.c文件的目录下
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects
.PHONY = all clean
CC = gcc                        # compiler to use
LINKERFLAG = -lm
SRCS := foo.c
BINS := foo
all: foo
foo: foo.o
        @echo "Checking.."
        gcc -lm foo.o -o foo
foo.o: foo.c
        @echo "Creating object.."
        gcc -c foo.c
clean:
        @echo "Cleaning up..."
        rm -rvf foo.o foo
```
关于makefiles的更多信息[GNU Make manual][1]提供了更完整的说明和实例。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/what-how-makefile
作者:[Sachin Patil][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Zafiry](https://github.com/zafiry)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/psachin
[1]:https://www.gnu.org/software/make/manual/make.pdf

View File

@ -1,124 +0,0 @@
差异文件diffs和补丁文件patches简介
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
如果你曾有机会在一个使用分布式开发模型的大型代码库上工作过你就应该听说过类似下面的话“Sue 刚发过来一个补丁”“Rajiv 正在签出差异文件”。可能这些词(补丁、差异文件)对你而言很陌生,而你一定很想搞懂它们到底指什么。开源软件对这些名词的普及有很大的贡献:从 Apache web 服务器到 Linux 内核,“基于补丁文件的开发”这一模式贯穿了这些项目的始终。实际上,你可能不知道 Apache 的名字就来自一系列的代码补丁,它们被一一收集起来,并应用到原来的 [NCSA HTTPd server source code][1] 上。
你可能认为前面说的只不过是些逸闻,但是一份早期的 [capture of the Apache website][2] 声称Apache 这个名字就来自于最早的“补丁文件”集合,是“打了补丁的”a patchy服务器的简称译注Apache 的英文发音与 a patchy 相似)。
好了,言归正传,程序员嘴里说的"差异"和"补丁" 到底是什么?
首先,在这篇文章里,我们可以认为这两个术语指向同一个概念:“差异”就是“补丁”。Unix 下的同名工具程序 diff“差异”和 patch“补丁”就是用来生成和应用一个或多个文件之间的“差异”的。下面我们来具体看看它们的用法。
一个"补丁"指的是文件之间一系列差异,这些差异能被 Unix 的 diff程序应用在源代码树上使之转变为程序员想要的文件状态。我们能使用diff 工具来创建“差异”( 或“补丁”),然后将他们“打” 在一个没有这个补丁的源代码版本上,此外,(我又要开始跑题了...,“补丁” 这个词真的指在计算机的早期使用打卡机的时候,用来覆盖在纸带上的用于修复代码错误的覆盖纸,那个时代纸带(上面有孔)就是在计算机处理器上运行的程序。下面的这张图,来自[Wikipedia Page][3] 真切的描绘了最初的“ 打补丁”这个词的出处:
![](https://opensource.com/sites/default/files/uploads/360px-harvard_mark_i_program_tape.agr_.jpg)
现在你对补丁和差异有了一个基本的概念,让我们来看看软件开发者是怎么使用这些工具的。如果你还没有使用过类似于 [Git][4] 这样的源代码版本控制工具的话,我将会一步步展示最流行的软件项目是怎么使用它们的。如果你将一个软件的生命周期看成是一条时间线的话,你就能看见这个软件的点滴变化,比如在何时源代码树加上了一个功能,在何时源代码树修复了一个功能缺陷。我们称这些改变的点为“提交”,“提交”这个词也被当今最流行的源代码版本管理工具 Git 使用。当你想检查一个提交前后的代码变化(或者许多个提交之间的代码变化)时,你都可以使用工具来查看文件差异。
如果你在使用 Git 开发软件的话,你的本地开发环境中有可能有你想分享给别的开发者的提交。一个方法就是创建一个你本地文件的差异文件,然后将这个“补丁”发送给和你工作在同一个源代码树上的别的开发者。别的开发者在“打”了你的补丁之后,就能在他的代码树上看到你所做的变化。
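以 Git 为例的一个小示意(`git format-patch` 是 Git 的真实命令,生成的补丁文件名取决于提交信息,此处从略):
```
$ git format-patch -1 HEAD    # 为最近一次提交生成一个补丁文件
```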
### Linux, Git, 和 GitHub
这种共享补丁的开发模型正是现今 Linux 内核社区处理内核修改提议所采用的模型。如果你有机会浏览任何一个主流的 Linux 内核邮件列表(主要是 [LKML][6],也包括 [linux-containers][7]、[fs-devel][8]、[Netdev][9] 等等),你能看到很多开发者会贴出他们想让其他内核开发者审核、测试,或者合入 Linux 官方 Git 代码树某个位置的补丁。当然,讨论 Git 不在这篇文章的范围之内Git 是由 Linus Torvalds 开发的源代码控制系统,它支持分布式开发模型,允许独立于主代码仓库的补丁包,这些补丁包能被推送或拉取到不同的源代码树上,并遵守这些代码树各自的开发流程。)
在继续我们的话题之前,我们当然不能忽略与补丁和差异这个概念最相关的服务:[GitHub][10]。从它的名字就能猜想出 GitHub 是基于 Git 的,而且它还围绕着 Git对分布式开源代码开发模型提供了基于 Web 和 API 的工作流管理译注即拉取请求Pull Request。在 GitHub 上,分享补丁的方式不是像 Linux 内核社区那样通过邮件列表,而是通过创建一个 **拉取请求**。当你提交你自己源代码的改动时你能通过创建一个针对软件项目主仓库的“拉取请求”来分享你的代码改动译注即核心开发者维护一个主仓库开发者去“fork”这个仓库在各自提交后再创建针对这个主仓库的拉取请求所有的拉取请求由主仓库的核心开发者批准后才能合入主代码库。GitHub 被当今很多活跃的开源社区所采用,如 [Kubernetes][11]、[Docker][12]、[the Container Network Interface (CNI)][13]、[Istio][14] 等等。在 GitHub 的世界里,用户会倾向于使用基于 Web 页面的方式来审核一个拉取请求里的补丁或差异,你也可以直接访问原始的补丁并在命令行上直接使用它们。
### 该说点干货了
我们前面已经讲了流行的开源社区是怎么应用补丁和 diff 的,现在来看一些例子。
第一个例子涉及一个源代码树的两份不同拷贝,其中一份有代码改动,我们想用 diff 来看看这些改动是什么。这个例子里我们想看的是“合并格式”unified format的补丁这是现在软件开发世界里最通用的格式。如果想知道更详细的参数用法以及如何生成差异文件请参考 diff 手册。原始的代码在 sources-orig 目录,而改动后的代码在 sources-fixed 目录。如果要在你的命令行上用“合并格式”来展示补丁,请运行如下命令。(译注:参数 N 代表如果比较的文件不存在,则认为是个空文件a 代表将所有文件都作为文本文件对待u 代表使用合并格式并输出上下文r 代表递归比较目录)
```
$ diff -Naur sources-orig/ sources-fixed/
```
...下面是 diff命令的输出:
```
diff -Naur sources-orig/officespace/interest.go sources-fixed/officespace/interest.go
--- sources-orig/officespace/interest.go        2018-08-10 16:39:11.000000000 -0400
+++ sources-fixed/officespace/interest.go       2018-08-10 16:39:40.000000000 -0400
@@ -11,15 +11,13 @@
   InterestRate float64
 }
+// compute the rounded interest for a transaction
 func computeInterest(acct *Account, t Transaction) float64 {
   interest := t.Amount * t.InterestRate
   roundedInterest := math.Floor(interest*100) / 100.0
   remainingInterest := interest - roundedInterest
-  // a little extra..
-  remainingInterest *= 1000
-
   // Save the remaining interest into an account we control:
   acct.Balance = acct.Balance + remainingInterest
```
diff 输出的最开始几行可以这样解释:以三个减号(`---`)开头的行显示了原来文件的名字;任何在原文件(译注:不是源文件)里存在而在新文件里不存在的行会以 `-` 为前缀,表示这些行被从源代码里“减去”了。而 `+++` 表示的则相反:在新文件里新加上的行会以 `+` 为前缀,表示这是在新文件里被“加上”的行。每一个补丁“块”(用 `@@` 作为前缀的部分)都有上下文的行号,这能帮助补丁工具(或其他处理器)知道在代码的哪里应用这个补丁块。你能看到我们已经修改了“办公室”这部电影里提到的那个函数(移除了三行并加上了一行代码注释),电影里那个有点贪心的工程师可是偷偷地在计算利息的函数里加了点“料”哦。LCTT 译注:剧情详情请见电影 https://movie.douban.com/subject/1296424/
如果你想找人来测试你的代码改动,你可以将差异保存到一个补丁里:
```
$ diff -Naur sources-orig/ sources-fixed/ >myfixes.patch
```
现在你有了补丁 myfixes.patch你能把它分享给别的开发者他们可以将这个补丁打在自己的源代码树上从而得到和你一样的代码并对其进行测试。如果一个开发者的当前工作目录就是他的源代码树的根目录他可以用下面的命令来打补丁
```
$ patch -p1 < ../myfixes.patch
patching file officespace/interest.go
```
现在这个开发者的源代码树已经打好了补丁,可以着手构建和测试文件的修改了。那么,如果这个开发者在打补丁之前已经改动过代码了怎么办?只要这些改动没有直接冲突(译注:比如改在同一行上),补丁工具就能自动地合并代码的改动。例如,下面的 interest.go 文件,它有其他几处改动,开发者仍然想给它打上 myfixes.patch 这个补丁:
```
$ patch -p1 < ../myfixes.patch
patching file officespace/interest.go
Hunk #1 succeeded at 26 (offset 15 lines).
```
在这个例子中,补丁警告说代码改动并不在文件原来的地方,而是偏移了 15 行。如果你的文件改动得很厉害,补丁可能干脆说找不到要应用的地方。还好,补丁程序提供了打开“模糊”匹配的选项(这个选项在文档里有预置的警告信息,对其讲解已经超出了本文的范围)。
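顺便一提,在正式打补丁之前,还可以先用 GNU patch 的 `--dry-run` 选项测试补丁能否干净地打上(以上文的补丁为例的示意):
```
$ patch -p1 --dry-run < ../myfixes.patch
checking file officespace/interest.go
```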
如果你使用 Git 或者 GitHub 的话,你可能不会直接使用 diff 或 patch。Git 内置了这些功能,你能使用这些功能和共享同一个源代码树的其他开发者交互,拉取或合并代码。Git 中一个比较相近的功能是可以使用 git diff 来对你的本地代码树生成全局差异,又或者对你的任意两个“引用”(可能是一个代表提交的标识,或一个标记或分支的名字,等等)生成全局补丁。你甚至能简单地用管道将 git diff 的输出重定向到一个文件里(这个文件必须严格符合将要使用它的程序的输入要求),然后将这个文件交给一个并不使用 Git 的开发者应用到他的代码上。当然GitHub 把这些功能放到了 Web 上,你能直接在 Web 页面上查看一个拉取请求的文件变动。在 Web 上你能看到所展示的合并差异GitHub 还允许你将这些代码改动下载为原始的补丁文件。
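下面是一个简单的示意(`git diff` 与 `git apply` 均为 Git 的真实命令,补丁文件名沿用上文):
```
$ git diff HEAD~1 HEAD > myfixes.patch   # 将最近一次提交的改动导出为补丁文件
$ git apply --check myfixes.patch        # 先检查补丁能否干净地打上
$ git apply myfixes.patch                # 实际应用补丁
```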
### 总结
好了,你已经学到了“差异”和“补丁”是什么,以及在 Unix/Linux 上怎么使用命令行工具和它们交互。除非你还在像 Linux 内核开发这样完全基于补丁的项目中工作,否则你应该会主要通过你的源代码控制系统(如 Git来使用补丁。但熟悉像 GitHub 这样的高级工具的技术背景和底层原理,对你的工作也是大有裨益的。谁知道会不会有一天,你需要和一个来自 Linux 世界邮件列表的补丁打交道呢?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/diffs-patches
作者:[Phil Estes][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[David Chen](https://github.com/DavidChenLiang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/estesp
[1]:https://github.com/TooDumbForAName/ncsa-httpd
[2]:https://web.archive.org/web/19970615081902/http:/www.apache.org/info.html
[3]:https://en.wikipedia.org/wiki/Patch_(computing)
[4]:https://git-scm.com/
[5]:https://subversion.apache.org/
[6]:https://lkml.org/
[7]:https://lists.linuxfoundation.org/pipermail/containers/
[8]:https://patchwork.kernel.org/project/linux-fsdevel/list/
[9]:https://www.spinics.net/lists/netdev/
[10]:https://github.com/
[11]:https://kubernetes.io/
[12]:https://www.docker.com/
[13]:https://github.com/containernetworking/cni
[14]:https://istio.io/

View File

@ -0,0 +1,346 @@
使用 PySimpleGUI 轻松为程序和脚本增加 GUI
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh)
对于 `.exe` 类型的程序文件,我们可以通过双击鼠标左键打开;但对于 `.py` 类型的 Python 程序,几乎不会有人尝试同样的操作。对于一个(非程序员类型的)典型用户,他们双击打开 `.exe` 文件时预期弹出一个可以交互的窗体。基于 `Tkinter`,可以通过<ruby>标准 Python 安装<rt>standard Python installations</rt></ruby>的方式提供 GUI但很多程序都不太可能这样做。
如果打开 Python 程序并进入 GUI 界面变得如此容易,以至于真正的初学者也可以掌握,会怎样呢?会有人感兴趣并使用吗?这个问题不好回答,因为直到今天创建自定义 GUI 布局仍不是件容易的事情。
在为程序或脚本增加 GUI 这件事上,似乎存在能力的“错配”。(缺乏这方面能力的)真正的初学者被迫只能使用命令行方式,而很多(具备这方面能力的)高级程序员却不愿意花时间创建一个 `Tkinter` GUI。
### GUI 框架
Python 的 GUI 框架并不少,其中 `Tkinter``wxPython``Qt` 和 `Kivy` 是几种比较主流的框架。此外,还有不少在上述框架基础上封装的简化框架,例如 `EasyGUI``PyGUI` 和 `Pyforms` 等。
但问题在于,对于初学者(这里是指编程经验不超过 6 个月的用户)而言,即使是最简单的主流框架,他们也无从下手;他们也可以选择封装过的(简化)框架,但仍难以甚至无法创建自定义 GUI <ruby>布局<rt>layout</rt></ruby>。即便学会了某种(简化)框架,也需要编写连篇累牍的代码。
[`PySimpleGUI`][1] 尝试解决上述 GUI 难题,它提供了一种简单明了、易于理解、方便自定义的 GUI 接口。如果使用 `PySimpleGUI`,很多复杂的 GUI 也仅需不到 20 行代码。
### 秘诀
`PySimpleGUI` 极为适合初学者的秘诀在于,它已经包含了绝大多数原本需要用户编写的代码。`PySimpleGUI` 处理按钮<ruby>回调<rt>callback</rt></ruby>,无需用户编写代码。对于初学者,在几周内掌握函数的概念已经不容易了,要求其理解回调函数似乎有些强人所难。
在大部分 GUI 框架中,布局 GUI <ruby>小部件<rt>widgets</rt></ruby>通常需要写一些代码,每个小部件至少 1-2 行。`PySimpleGUI` 使用了“auto-packer”技术可以自动创建布局。因而布局 GUI 窗口不再需要 `pack``grid` 系统。
LCTT 译注:这里提到的 `pack``grid` 都是 `Tkinter` 的布局管理器,另外一种叫做 `place`
最后,`PySimpleGUI` 框架编写中有效利用 Python 语言特性降低用户代码量并简化GUI 数据返回的方式。在<ruby>窗体<rt>form</rt></ruby>布局中创建小部件时,小部件会被部署到对应的布局中,无需额外的代码。
### GUI 是什么?
绝大多数 GUI 只完成一件事情:收集用户数据并返回。在程序员看来,可以归纳为如下的函数调用:
```
button, values = GUI_Display(gui_layout)
```
绝大多数 GUI 支持的用户行为包括鼠标点击例如“确认”“取消”“保存”“是”和“否”等和内容输入。GUI 本质上可以归结为一行代码。
这也正是 `PySimpleGUI` (简单 GUI 模式)的工作原理。当执行命令显示 GUI 后,除非点击鼠标关闭窗体,否则不会执行任何代码。
当然还有更复杂的 GUI其中鼠标点击后窗口并不关闭例如机器人的远程控制界面聊天窗口等。这类复杂的窗体也可以用 `PySimpleGUI` 创建。
### 快速创建 GUI
`PySimpleGUI` 什么时候有用呢?显然,是你需要 GUI 的时候。仅需不超过 5 分钟,就可以让你创建并尝试 GUI。最便捷的 GUI 创建方式就是从 [PySimpleGUI 经典实例][2]中拷贝一份代码。具体操作流程如下:
* 找到一个与你需求最接近的 GUI
* 从经典实例中拷贝代码
* 粘贴到 IDE 中并运行
下面我们看一下书中的第一个<ruby>经典实例<rt>recipe</rt></ruby>
```
import PySimpleGUI as sg
# Very basic form.  Return values as a list
form = sg.FlexForm('Simple data entry form')  # begin with a blank form
layout = [
          [sg.Text('Please enter your Name, Address, Phone')],
          [sg.Text('Name', size=(15, 1)), sg.InputText('name')],
          [sg.Text('Address', size=(15, 1)), sg.InputText('address')],
          [sg.Text('Phone', size=(15, 1)), sg.InputText('phone')],
          [sg.Submit(), sg.Cancel()]
         ]
button, values = form.LayoutAndRead(layout)
print(button, values[0], values[1], values[2])
```
运行后会打开一个大小适中的窗体。
![](https://opensource.com/sites/default/files/uploads/pysimplegui_cookbook-form.jpg)
如果你只是想收集一些字符串类型的值,拷贝上述经典实例中的代码,稍作修改即可满足你的需求。
你甚至可以只用 5 行代码创建一个自定义 GUI 布局。
```
import PySimpleGUI as sg
form = sg.FlexForm('My first GUI')
layout = [ [sg.Text('Enter your name'), sg.InputText()],
           [sg.OK()] ]
button, (name,) = form.LayoutAndRead(layout)
```
![](https://opensource.com/sites/default/files/uploads/pysimplegui-5-line-form.jpg)
### 5 分钟内创建一个自定义 GUI
在简单布局的基础上通过修改经典实例中的代码5 分钟内即可使用 `PySimpleGUI` 创建自定义布局。
`PySimpleGUI` 中,<ruby>小部件<rt>widgets</rt></ruby>被称为<ruby>元素<rt>elements</rt></ruby>。元素的名称与编码中使用的名称保持一致。
LCTT 译注:`Tkinter` 中使用小部件这个词)
#### 核心元素
```
Text
InputText
Multiline
InputCombo
Listbox
Radio
Checkbox
Spin
Output
SimpleButton
RealtimeButton
ReadFormButton
ProgressBar
Image
Slider
Column
```
#### 元素简写
`PySimpleGUI` 还包含两种元素简写方式。一种是元素类型名称简写,例如 `T` 用作 `Text` 的简写;另一种是元素参数被配置了默认值,你可以无需指定所有参数,例如 `Submit` 按钮默认的文本就是“Submit”。
```
T = Text
Txt = Text
In = InputText
Input = InputText
Combo = InputCombo
DropDown = InputCombo
Drop = InputCombo
```
LCTT 译注:第一种简写就是 Python 类的别名,第二种简写是在返回元素对象的 Python 函数定义时指定了参数的默认值)
#### 按钮简写
一些通用按钮具有简写实现,包括:
```
FolderBrowse
FileBrowse
FileSaveAs
Save
Submit
OK
OkLCTT 译注:这里 `k` 是小写)
Cancel
Quit
Exit
Yes
No
```
此外,还有通用按钮功能对应的简写:
```
SimpleButton
ReadFormButton
RealtimeButton
```
LCTT 译注:其实就是返回 `Button` 类实例的函数)
上面就是 `PySimpleGUI` 支持的全部元素。如果不在上述列表之中,就不会在你的窗口布局中生效。
LCTT 译注:上述都是 `PySimpleGUI` 的类名、类别名或返回实例的函数,自然只能使用列表内的。)
#### GUI 设计模式
对于 GUI 程序,创建并展示窗口的调用大同小异,差异在于元素的布局。
设计模式代码与上面的例子基本一致,只是移除了布局:
```
import PySimpleGUI as sg
form = sg.FlexForm('Simple data entry form')
# Define your form here (it's a list of lists)
button, values = form.LayoutAndRead(layout)
```
LCTT 译注:这段代码无法运行,只是为了说明每个程序都会用到的设计模式)
对于绝大多数 GUI编码流程如下
* 创建窗体对象
* 以“列表的列表”的形式定义 GUI
* 展示 GUI 并获取元素的值
上述流程与 `PySimpleGUI` 设计模式部分的代码一一对应。
#### GUI 布局
要创建自定义 GUI首先要将窗体分割成多个行因为窗体是一行一行定义的。然后在每一行中从左到右依次放置元素。
我们得到的就是一个“列表的列表”,类似如下:
```
layout = [  [Text('Row 1')],
            [Text('Row 2'), Checkbox('Checkbox 1', OK()), Checkbox('Checkbox 2'), OK()] ]
```
上述布局对应的效果如下:
![](https://opensource.com/sites/default/files/uploads/pysimplegui-custom-form.jpg)
### 展示 GUI
当你完成布局、拷贝完用于创建和展示窗体的代码后,下一步就是展示窗体并收集用户数据。
下面这行代码用于展示窗体并返回收集的数据:
```
button, values = form.LayoutAndRead(layout)
```
窗体返回的结果由两部分组成:一部分是被点击按钮的名称,另一部分是一个列表,包含所有用户输入窗体的值。
在这个例子中,窗体显示后用户直接点击 `OK` 按钮,返回的结果如下:
```
button == 'OK'
values == [False, False]
```
`Checkbox` 类型元素返回 `True``False` 类型的值。由于默认处于未选中状态,两个元素的值都是 `False`
### 显示元素的值
一旦从 GUI 获取返回值,检查返回变量中的值是个不错的想法。与其使用 `print` 语句进行打印,我们不妨坚持使用 GUI 并在一个窗口中输出这些值。
LCTT 译注:考虑使用的是 Python 3 版本,`print` 应该是函数而不是语句)
`PySimpleGUI` 中,有多种消息框可供选取。传递给消息框(函数)的数据会被显示在消息框中;函数可以接受任意数目的参数,你可以轻松的将所有要查看的变量展示出来。
`PySimpleGUI` 中,最常用的消息框是 `MsgBox`。要展示上面例子中的数据,只需编写一行代码:
```
MsgBox('The GUI returned:', button, values)
```
### 整合
好了,你已经了解了基础知识,让我们创建一个包含尽可能多 `PySimpleGUI` 元素的窗体吧!此外,为了更好的感观效果,我们将采用绿色/棕褐色的配色方案。
```
import PySimpleGUI as sg
sg.ChangeLookAndFeel('GreenTan')
form = sg.FlexForm('Everything bagel', default_element_size=(40, 1))
column1 = [[sg.Text('Column 1', background_color='#d3dfda', justification='center', size=(10,1))],
           [sg.Spin(values=('Spin Box 1', '2', '3'), initial_value='Spin Box 1')],
           [sg.Spin(values=('Spin Box 1', '2', '3'), initial_value='Spin Box 2')],
           [sg.Spin(values=('Spin Box 1', '2', '3'), initial_value='Spin Box 3')]]
layout = [
    [sg.Text('All graphic widgets in one form!', size=(30, 1), font=("Helvetica", 25))],
    [sg.Text('Here is some text.... and a place to enter text')],
    [sg.InputText('This is my text')],
    [sg.Checkbox('My first checkbox!'), sg.Checkbox('My second checkbox!', default=True)],
    [sg.Radio('My first Radio!     ', "RADIO1", default=True), sg.Radio('My second Radio!', "RADIO1")],
    [sg.Multiline(default_text='This is the default Text should you decide not to type anything', size=(35, 3)),
     sg.Multiline(default_text='A second multi-line', size=(35, 3))],
    [sg.InputCombo(('Combobox 1', 'Combobox 2'), size=(20, 3)),
     sg.Slider(range=(1, 100), orientation='h', size=(34, 20), default_value=85)],
    [sg.Listbox(values=('Listbox 1', 'Listbox 2', 'Listbox 3'), size=(30, 3)),
     sg.Slider(range=(1, 100), orientation='v', size=(5, 20), default_value=25),
     sg.Slider(range=(1, 100), orientation='v', size=(5, 20), default_value=75),
     sg.Slider(range=(1, 100), orientation='v', size=(5, 20), default_value=10),
     sg.Column(column1, background_color='#d3dfda')],
    [sg.Text('_'  * 80)],
    [sg.Text('Choose A Folder', size=(35, 1))],
    [sg.Text('Your Folder', size=(15, 1), auto_size_text=False, justification='right'),
     sg.InputText('Default Folder'), sg.FolderBrowse()],
    [sg.Submit(), sg.Cancel()]
     ]
button, values = form.LayoutAndRead(layout)
sg.MsgBox(button, values)
```
看上面要写不少代码,但如果你试着直接使用 `Tkinter` 框架实现同样的 GUI你很快就会发现 `PySimpleGUI` 版本的代码是多么的简洁。
![](https://opensource.com/sites/default/files/uploads/pysimplegui-everything.jpg)
代码的最后一行打开了一个消息框,效果如下:
![](https://opensource.com/sites/default/files/uploads/pysimplegui-message-box.jpg)
消息框函数中的每一个参数的内容都会被打印到单独的行中。本例的消息框中包含两行,其中第二行非常长而且包含列表嵌套。
建议花一点时间将上述结果与 GUI 中的元素一一比对,这样可以更好的理解这些结果是如何产生的。
### 为你的程序或脚本添加 GUI
如果你有一个命令行方式使用的脚本,添加 GUI 不一定意味着完全放弃该脚本。一种简单的方案如下:如果脚本不需要命令行参数,那么可以直接使用 GUI 调用该脚本;反之,就按原来的方式运行脚本。
仅需类似如下的逻辑:
```
if len(sys.argv) == 1:
    # collect arguments from GUI
else:
    # collect arguments from sys.argv
```
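下面把这个骨架补全成一个可运行的最小示意(沿用上文经典实例中的旧版 API窗体标题和文件处理逻辑均为假设的示例内容
```
import sys
import PySimpleGUI as sg

if len(sys.argv) == 1:
    # 没有命令行参数:用 GUI 收集参数
    form = sg.FlexForm('My script')
    layout = [[sg.Text('请选择要处理的文件')],
              [sg.InputText(), sg.FileBrowse()],
              [sg.Submit(), sg.Cancel()]]
    button, values = form.LayoutAndRead(layout)
    if button != 'Submit':
        sys.exit(0)          # 用户取消或关闭了窗口
    fname = values[0]
else:
    # 有命令行参数:按原来的方式运行
    fname = sys.argv[1]

print('Processing:', fname)  # 这里替换为脚本原有的处理逻辑
```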
创建并运行 GUI 最便捷的方式就是从 [PySimpleGUI 经典实例][2]中拷贝一份代码并修改。
快来试试吧!给你一直疲于手动执行的脚本增加一些趣味。只需 5-10 分钟即可玩转示例脚本。你可能发现一个几乎满足你需求的经典实例;如果找不到,也很容易自己编写一个。即使你真的玩不转,也只是浪费了 5-10 分钟而已。
### 资源
#### 安装方式
支持 `Tkinter` 的系统就支持 `PySimpleGUI`,甚至包括<ruby>树莓派<rt>Raspberry Pi</rt></ruby>,但你需要使用 Python 3。
```
pip install PySimpleGUI
```
#### 文档
* [手册][3]
* [经典实例][2]
* [GitHub repository][1]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/pysimplegui
作者:[Mike Barnett][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pysimplegui
[1]: https://github.com/MikeTheWatchGuy/PySimpleGUI
[2]: https://pysimplegui.readthedocs.io/en/latest/cookbook/
[3]: https://pysimplegui.readthedocs.io/en/latest/

View File

@ -0,0 +1,356 @@
增强 Vim 编辑器,提高编辑效率
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)
编者注:标题和文章最初提到的 `vi` 编辑器,现已更新为编辑器的正确名称:`Vim`。
`Vim` 作为一款功能强大、选项丰富的编辑器,为许多用户所热爱。本文介绍了一些在 `Vim` 中默认未启用但实际非常有用的选项。虽然可以在每个 `Vim` 会话中单独启用,但为了创建一个开箱即用的高效编辑环境,还是建议在 `Vim` 的配置文件中配置这些命令。
## 开始前的准备
这里所说的选项或配置均位于用户主目录中的 `Vim` 启动配置文件 `.vimrc`。 按照下面的说明在 `.vimrc` 中设置选项:
(注意:`vimrc` 文件也用于 `Linux` 中的全局配置,如 `/etc/vimrc``/etc/vim/vimrc`。本文所说的 `.vimrc` 均是指位于用户主目录中的 `.vimrc` 文件。)
`Linux` 系统中:
* 用 `Vim` 打开 `.vimrc` 文件: `vim ~/.vimrc`
* 复制本文最后的 `选项列表` 粘贴到 `.vimrc` 文件
* 保存并关闭 (`:wq`)
译者注:此处不建议使用 `Vim` 编辑 `.vimrc` 文件,因为很可能无法粘贴成功,可以选择 `gedit` 编辑器编辑 `.vimrc` 文件。
`Windows` 系统中:
* 首先,[安装 gvim][1]
* 打开 `gvim`
* 单击 `编辑` -> `启动设置`,打开 `.vimrc` 文件
* 复制本文最后的 `选项列表` 粘贴到 `.vimrc` 文件
* 单击 `文件` -> `保存`
译者注:此处应注意不要使用 `Windows` 自带的记事本编辑该 `.vimrc` 文件。
下面,我们将深入研究提高 `Vim` 编辑效率的选项。主要分为以下几类:
1. 缩进 & 制表符
2. 显示 & 格式化
3. 搜索
4. 浏览 & 滚动
5. 拼写
6. 其他选项
## 1. 缩进 & 制表符
使 `Vim` 在创建新行的时候使用与上一行同样的缩进:
```vim
set autoindent
```
创建新行时使用智能缩进,主要用于 `C` 语言一类的程序。通常,打开 `smartindent` 时也应该打开 `autoindent`
```vim
set smartindent
```
注意:`Vim` 具有语言感知功能,其默认设置可以基于文件所用的编程语言改变配置以提高效率。有许多这样的默认配置选项,包括 `cindent`、`cinoptions`、`indentexpr` 等,没有在这里一一说明。`syn` 是一个非常有用的命令,用于设置文件的语法以更改显示模式。
译者注:这里的 `syn` 是指 `syntax`,可用于设置文件所用的编程语言,开启对应的语法高亮,以及执行自动事件 (`autocmd`)。
设置文件里的制表符 `(TAB)` 的宽度(以空格的数量表示):
```vim
set tabstop=4
```
设置移位操作 `>>``<<` 的缩进长度(以空格的数量表示):
```vim
set shiftwidth=4
```
如果你更喜欢在编辑文件时使用空格而不是制表符,设置以下选项可以使 `Vim` 在你按下 `Tab` 键时用空格代替制表符。
```vim
set expandtab
```
注意:这可能会导致依赖于制表符的 `Python` 等编程语言出现问题。这时,你可以根据文件类型设置该选项(请参考 `autocmd`)。
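例如,可以借助 `autocmd` 按文件类型覆盖该选项(一个简单的示意):
```vim
" Makefile 的 recipe 必须以制表符开头,因此针对该类型关闭 expandtab
autocmd FileType make setlocal noexpandtab
```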
## 2. 显示 & 格式化
要在每行的前面显示行号:
```vim
set number
```
![](https://opensource.com/sites/default/files/uploads/picture01.png)
要在文本行超过一定长度时自动换行:
```vim
set textwidth=80
```
要根据从窗口右侧向左数的列数来自动换行:
```vim
set wrapmargin=2
```
译者注:如果 `textwidth` 选项不等于零,本选项无效。
插入括号时,短暂地跳转到匹配的括号:
```vim
set showmatch
```
![](https://opensource.com/sites/default/files/uploads/picture02-03.jpg)
## 3. 搜索
高亮搜索内容的所有匹配位置:
```vim
set hlsearch
```
![](https://opensource.com/sites/default/files/uploads/picture04.png)
搜索过程中动态显示匹配内容:
```vim
set incsearch
```
![](https://opensource.com/sites/default/files/picture05.png)
搜索时忽略大小写:
```vim
set ignorecase
```
在打开 `ignorecase` 选项的条件下,搜索内容包含部分大写字符时,要使搜索大小写敏感:
```vim
set smartcase
```
例如,如果文件内容是:
> test\
> Test
当打开 `ignorecase``smartcase` 选项时,搜索 `test` 时的突出显示:
> <font color=yellow>test</font>\
> <font color=yellow>Test</font>
搜索 `Test` 时的突出显示:
> test\
> <font color=yellow>Test</font>
## 4. 浏览 & 滚动
为获得更好的视觉体验,你可能希望将光标放在窗口中间而不是第一行,以下选项使光标距窗口上下保留 5 行。
```vim
set scrolloff=5
```
一个例子:
第一张图中 `scrolloff=0`,第二张图中 `scrolloff=5`
![](https://opensource.com/sites/default/files/uploads/picture06-07.jpg)
提示:如果你没有设置选项 `nowrap`,那么设置 `sidescrolloff` 将非常有用。
`Vim` 窗口底部显示一个永久状态栏,可以显示文件名、行号和列号等内容:
```vim
set laststatus=2
```
![](https://opensource.com/sites/default/files/picture08.png)
## 5. 拼写
`Vim` 有一个内置的拼写检查器,对于文本编辑和编码非常有用。`Vim` 可以识别文件类型并仅对代码中的注释进行拼写检查。使用下面的选项打开英语拼写检查:
```vim
set spell spelllang=en_us
```
译者注:中文、日文或其它东亚语字符通常会在打开拼写检查时被标为拼写错误,因为拼写检查不支持这些语种,可以在 `spelllang` 选项中加入 `cjk` 来忽略这些错误标注。
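按照上面译注的做法,一个简单的示意:
```vim
" 在拼写检查语言中加入 cjk以忽略东亚字符的误报
set spell spelllang=en_us,cjk
```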
## 6. 其他选项
禁止创建备份文件:备份功能启用时,`Vim` 会在覆盖文件前创建一个备份,并在文件成功写入后保留该备份。如果不想保留该备份文件,可以按下面的方式关闭:
```vim
set nobackup
```
禁止创建交换文件:默认情况下,`Vim` 会在编辑文件时创建一个交换文件。交换文件用于在崩溃或发生使用冲突时恢复文件,它是以 `.` 开头并以 `.swp` 结尾的隐藏文件。要禁止创建交换文件:
```vim
set noswapfile
```
如果需要在同一个 `Vim` 窗口中编辑多个文件并进行切换。默认情况下,工作目录是打开的第一个文件的目录。而将工作目录自动切换到正在编辑的文件的目录是非常有用的。要自动切换工作目录:
```vim
set autochdir
```
`Vim` 自动维护编辑的历史记录,允许撤消更改。默认情况下,该历史记录仅在文件关闭之前有效。`Vim` 包含一个增强功能,使得即使在文件关闭后也可以维护撤消历史记录,这意味着即使在保存、关闭和重新打开文件后,也可以撤消之前的更改。历史记录文件是使用 `.un~` 扩展名保存的隐藏文件。
```vim
set undofile
```
错误信息响铃,只对错误信息起作用:
```vim
set errorbells
```
如果你愿意,还可以设置错误视觉提示:
```vim
set visualbell
```
## 惊喜
`Vim` 提供长格式和短格式命令,两种格式都可用于设置或取消选项配置。
`autoindent` 选项的长格式是:
```vim
set autoindent
```
`autoindent` 选项的短格式是:
```vim
set ai
```
要在不更改选项当前值的情况下查看其当前设置,可以在 `Vim` 的命令行上使用末尾加上 `?` 的命令:
```vim
set autoindent?
```
在大多数选项前加上 `no` 前缀可以取消或关闭选项:
```vim
set noautoindent
```
可以为单独的文件配置选项,而不必修改全局配置文件。需要的话,请打开文件并输入 `:`,然后键入 `set`命令。这样的话,配置仅对当前的文件编辑会话有效。
![](https://opensource.com/sites/default/files/uploads/picture09.png)
使用命令行获取帮助:
```vim
:help autoindent
```
![](https://opensource.com/sites/default/files/uploads/picture10-11.jpg)
注意:此处列出的命令仅对 `Linux` 上的 `Vim 7.4` 版本和 `Windows` 上的 `Vim 8.0` 版本进行了测试。
这些有用的命令肯定会增强您的 `Vim` 使用体验。你会推荐哪些其他有用的命令?
## 选项列表
复制该选项列表粘贴到 `.vimrc` 文件中:
```vim
" Indentation & Tabs
set autoindent
set smartindent
set tabstop=4
set shiftwidth=4
set expandtab
set smarttab
" Display & format
set number
set textwidth=80
set wrapmargin=2
set showmatch
" Search
set hlsearch
set incsearch
set ignorecase
set smartcase
" Browse & Scroll
set scrolloff=5
set laststatus=2
" Spell
set spell spelllang=en_us
" Miscellaneous
set nobackup
set noswapfile
set autochdir
set undofile
set visualbell
set errorbells
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/vi-editor-productivity-powerhouse
作者:[Girish Managoli][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[idea2act](https://github.com/idea2act)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gammay
[1]: https://www.vim.org/download.php#pc

View File

@ -1,13 +1,18 @@
8 Linux commands for effective process management
heguangzhi Translating
8 个用于有效管理进程的 Linux 命令
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg)
Generally, an application process' lifecycle has three main states: start, run, and stop. Each state can and should be managed carefully if we want to be competent administrators. These eight commands can be used to manage processes through their lifecycles.
一般来说,应用程序的生命周期有三种主要状态:启动、运行和停止。如果我们想成为称职的管理员,每个状态都可以而且应该得到认真的管理。这八个命令可用于管理进程的整个生命周期。
### Starting a process
The easiest way to start a process is to type its name at the command line and press Enter. If you want to start an Nginx web server, type **nginx**. Perhaps you just want to check the version.
### 启动进程
启动进程的最简单方法是在命令行中键入其名称,然后按 Enter 键。如果要启动 Nginx web 服务器,请键入 **nginx** 。也许您只是想看看其版本。
```
alan@workstation:~$ nginx
@ -15,9 +20,11 @@ alan@workstation:~$ nginx -v
nginx version: nginx/1.14.0
```
### Viewing your executable path
The above demonstration of starting a process assumes the executable file is located in your executable path. Understanding this path is key to reliably starting and managing a process. Administrators often customize this path for their desired purpose. You can view your executable path using **echo $PATH**.
### 查看您的可执行路径
以上启动进程的演示假设了可执行文件位于您的可执行路径中。理解这条路径是可靠地启动和管理进程的关键。管理员通常会根据自己的需要定制这条路径。您可以使用 **echo $PATH** 查看您的可执行路径。
```
alan@workstation:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
@ -25,26 +32,36 @@ alan@workstation:~$ echo $PATH
#### WHICH
Use the which command to view the full path of an executable file.
使用 which 命令查看可执行文件的完整路径。
```
alan@workstation:~$ which nginx                                                    
/opt/nginx/bin/nginx
```
I will use the popular web server software Nginx for my examples. Let's assume that Nginx is installed. If the command **which nginx** returns nothing, then Nginx was not found because which searches only your defined executable path. There are three ways to remedy a situation where a process cannot be started simply by name. The first is to type the full path. Although, I'd rather not have to type all of that, would you?
我将使用流行的 web 服务器软件 Nginx 作为我的例子,并假设已经安装了 Nginx。如果 **which nginx** 命令什么也不返回那么就是没有找到 Nginx因为 which 只搜索您定义的可执行路径。有三种方法可以补救一个进程不能简单地通过名字启动的情况。第一种是键入完整路径。不过,我不太情愿输入全部路径,您愿意吗?
```
alan@workstation:~$ /home/alan/web/prod/nginx/sbin/nginx -v
nginx version: nginx/1.14.0
```
The second solution would be to install the application in a directory in your executable's path. However, this may not be possible, particularly if you don't have root privileges.
The third solution is to update your executable path environment variable to include the directory where the specific application you want to use is installed. This solution is shell-dependent. For example, Bash users would need to edit the PATH= line in their .bashrc file.
第二种解决方案是将应用程序安装在可执行路径中的某个目录里。然而,这有时行不通,特别是在您没有 root 权限的情况下。
第三种解决方案是更新您的可执行路径环境变量,把您想使用的应用程序的安装目录包含进来。这个解决方案取决于所用的 shell。例如Bash 用户需要编辑自己 .bashrc 文件中的 PATH= 行。
```
PATH="$HOME/web/prod/nginx/sbin:$PATH"
```
Now, repeat your echo and which commands or try to check the version. Much easier!
现在,重复执行 echo 和 which 命令,或者尝试检查版本。容易多了!
```
alan@workstation:~$ echo $PATH
/home/alan/web/prod/nginx/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
@ -56,24 +73,27 @@ alan@workstation:~$ nginx -v                                  
nginx version: nginx/1.14.0
```
### Keeping a process running
### 保持进程运行
#### NOHUP
A process may not continue to run when you log out or close your terminal. This special case can be avoided by preceding the command you want to run with the nohup command. Also, appending an ampersand (&) will send the process to the background and allow you to continue using the terminal. For example, suppose you want to run myprogram.sh.
注销或关闭终端时,进程可能不会继续运行。把 nohup 命令放在要运行的命令前面,就可以避免这种特殊情况,让进程持续运行。此外,在末尾附加一个 & 符号会把进程放到后台,让您可以继续使用终端。例如,假设您想运行 myprogram.sh。
```
nohup myprogram.sh &
```
One nice thing nohup does is return the running process's PID. I'll talk more about the PID next.
nohup 的一个好处是会返回运行进程的 PID。接下来我会更多地谈论 PID。
### Manage a running process
### 管理正在运行的进程
Each process is given a unique process identification number (PID). This number is what we use to manage each process. We can also use the process name, as I'll demonstrate below. There are several commands that can check the status of a running process. Let's take a quick look at these.
每个进程都有一个唯一的进程标识号 (PID) 。这个数字是我们用来管理每个进程的。我们还可以使用进程名称,我将在下面演示。有几个命令可以检查正在运行的进程的状态。让我们快速看看这些命令。
#### PS
The most common is ps. The default output of ps is a simple list of the processes running in your current terminal. As you can see below, the first column contains the PID.
最常见的是 ps 命令。ps 的默认输出是当前终端中运行的进程的简单列表。如下所示第一列包含PID。
```
alan@workstation:~$ ps
PID TTY          TIME CMD
@ -81,7 +101,8 @@ PID TTY          TIME CMD
24148 pts/0    00:00:00 ps
```
I'd like to view the Nginx process I started earlier. To do this, I tell ps to show me every running process ( **-e** ) and a full listing ( **-f** ).
我想看看我之前开始的 Nginx 进程。为此,我告诉 ps 给我展示每一个正在运行的进程( **-e** ) 和完整的列表 ( **-f** )。
```
alan@workstation:~$ ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
@ -107,25 +128,29 @@ alan     20536 20526  0 10:39 pts/0    00:00:00 pager
alan     20564 20496  0 10:40 pts/1    00:00:00 bash
```
You can see the Nginx processes in the output of the ps command above. The command displayed almost 300 lines, but I shortened it for this illustration. As you can imagine, trying to handle 300 lines of process information is a bit messy. We can pipe this output to grep to filter out nginx.
您可以在上面 ps 命令的输出中看到 Nginx 进程。这个命令实际显示了将近 300 行,但我在这个例子中把它缩短了。可以想象,直接处理 300 行进程信息会有点混乱。我们可以将这个输出通过管道传给 grep过滤出 nginx。
```
alan@workstation:~$ ps -ef |grep nginx
alan     20520  1454  0 10:39 ?        00:00:00 nginx: master process nginx
alan     20521 20520  0 10:39 ?        00:00:00 nginx: worker process
```
That's better. We can quickly see that Nginx has PIDs of 20520 and 20521.
确实更好了。我们可以很快看到 Nginx 的 PID 是 20520 和 20521。
#### PGREP
The pgrep command was created to further simplify things by removing the need to call grep separately.
pgrep 命令让您无需单独调用 grep进一步简化了操作。
```
alan@workstation:~$ pgrep nginx
20520
20521
```
Suppose you are in a hosting environment where multiple users are running several different instances of Nginx. You can exclude others from the output with the **-u** option.
假设您在一个托管环境中,多个用户正在运行几个不同的 Nginx 实例。您可以使用 **-u** 选项将其他人排除在输出之外。
```
alan@workstation:~$ pgrep -u alan nginx
20520
@ -134,7 +159,8 @@ alan@workstation:~$ pgrep -u alan nginx
#### PIDOF
Another nifty one is pidof. This command will check the PID of a specific binary even if another process with the same name is running. To set up an example, I copied my Nginx to a second directory and started it with the prefix set accordingly. In real life, this instance could be in a different location, such as a directory owned by a different user. If I run both Nginx instances, the **ps -ef** output shows all their processes.
另一个好用的命令是 pidof。此命令可以检查特定二进制文件的 PID即使有另一个同名进程正在运行。为了构造一个例子我把 Nginx 复制到了第二个目录并用相应的前缀prefix启动它。在现实中这个实例可能位于不同的位置例如由不同用户拥有的目录。如果我同时运行两个 Nginx 实例,则 **ps -ef** 的输出会显示它们的所有进程。
```
alan@workstation:~$ ps -ef |grep nginx
alan     20881  1454  0 11:18 ?        00:00:00 nginx: master process ./nginx -p /home/alan/web/prod/nginxsec
@ -143,7 +169,8 @@ alan     20895  1454  0 11:19 ?        00:00:00 nginx: master process ng
alan     20896 20895  0 11:19 ?        00:00:00 nginx: worker process
```
Using grep or pgrep will show PID numbers, but we may not be able to discern which instance is which.
使用 grep 或 pgrep 将显示 PID 数字,但我们可能无法辨别哪个实例是哪个。
```
alan@workstation:~$ pgrep nginx
20881
@ -152,7 +179,8 @@ alan@workstation:~$ pgrep nginx
20896
```
The pidof command can be used to determine the PID of each specific Nginx instance.
pidof 命令可用于确定每个特定 Nginx 实例的PID。
```
alan@workstation:~$ pidof /home/alan/web/prod/nginxsec/sbin/nginx
20882 20881
@ -163,7 +191,7 @@ alan@workstation:~$ pidof /home/alan/web/prod/nginx/sbin/nginx
#### TOP
The top command has been around a long time and is very useful for viewing details of running processes and quickly identifying issues such as memory hogs. Its default view is shown below.
top 命令由来已久,对于查看运行进程的细节、快速识别内存占用过高等问题非常有用。其默认视图如下所示。
```
top - 11:56:28 up 1 day, 13:37,  1 user,  load average: 0.09, 0.04, 0.03
Tasks: 292 total,   3 running, 225 sleeping,   0 stopped,   0 zombie
@ -182,7 +210,7 @@ KiB Swap:        0 total,        0 free,        0 used. 14176540 ava
    7 root      20   0       0      0      0 S   0.0  0.0   0:00.08 ksoftirqd/0
```
The update interval can be changed by typing the letter **s** followed by the number of seconds you prefer for updates. To make it easier to monitor our example Nginx processes, we can call top and pass the PID(s) using the **-p** option. This output is much cleaner.
可以通过键入字母 **s** 和您喜欢的更新秒数来更改更新间隔。为了更容易监控我们的示例 Nginx 进程,我们可以使用 **-p** 选项并传入 PID 来调用 top。这样的输出要干净得多。
```
alan@workstation:~$ top -p20881 -p20882 -p20895 -p20896
@ -198,13 +226,17 @@ KiB Swap:        0 total,        0 free,        0 used. 14177928 ava
20896 alan      20   0   12460   1628    912 S   0.0  0.0   0:00.00 nginx
```
It is important to correctly determine the PID when managing processes, particularly stopping one. Also, if using top in this manner, any time one of these processes is stopped or a new one is started, top will need to be informed of the new ones.
在管理进程,特别是终止进程时,正确确定 PID 是非常重要的。此外,如果以这种方式使用 top每当其中一个进程停止或一个新进程启动时都需要把新的 PID 告知 top。
### Stopping a process
### 终止进程
#### KILL
Interestingly, there is no stop command. In Linux, there is the kill command. Kill is used to send a signal to a process. The most commonly used signal is "terminate" (SIGTERM) or "kill" (SIGKILL). However, there are many more. Below are some examples. The full list can be shown with **kill -L**.
有趣的是,并没有 stop 命令Linux 中有的是 kill 命令。kill 用于向进程发送信号。最常用的信号是“终止”SIGTERM或“杀死”SIGKILL不过还有很多其他信号。下面是一些例子完整的列表可以用 **kill -L** 显示。
```
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
@ -213,6 +245,10 @@ Interestingly, there is no stop command. In Linux, there is the kill command. Ki
```
Notice signal number nine is SIGKILL. Usually, we issue a command such as **kill -9 20896**. The default signal is 15, which is SIGTERM. Keep in mind that many applications have their own method for stopping. Nginx uses a **-s** option for passing a signal such as "stop" or "reload." Generally, I prefer to use an application's specific method to stop an operation. However, I'll demonstrate the kill command to stop Nginx process 20896 and then confirm it is stopped with pgrep. The PID 20896 no longer appears.
注意第 9 号信号是 SIGKILL。通常我们会执行像 **kill -9 20896** 这样的命令。默认信号是 15即 SIGTERM。请记住许多应用程序都有自己的停止方法。Nginx 使用 **-s** 选项传递信号,如“停止”stop或“重新加载”reload。通常我更喜欢使用应用程序特定的方法来停止操作。不过这里我将演示用 kill 命令停止 Nginx 进程 20896然后用 pgrep 确认它已经停止。之后 PID 20896 就不再出现了。
```
alan@workstation:~$ kill -9 20896
 
@ -226,6 +262,9 @@ alan@workstation:~$ pgrep nginx
#### PKILL
The command pkill is similar to pgrep in that it can search by name. This means you have to be very careful when using pkill. In my example with Nginx, I might not choose to use it if I only want to kill one Nginx instance. I can pass the Nginx option **-s** **stop** to a specific instance to kill it, or I need to use grep to filter on the full ps output.
pkill 命令类似于 pgrep它也可以按名称搜索这也意味着在使用 pkill 时必须非常小心。在我的 Nginx 示例中,如果我只想杀死一个 Nginx 实例,我可能不会选择使用它。我可以把 Nginx 的 **-s stop** 选项传递给特定的实例来终止它,或者使用 grep 来过滤完整的 ps 输出。
```
/home/alan/web/prod/nginx/sbin/nginx -s stop
@ -233,6 +272,9 @@ The command pkill is similar to pgrep in that it can search by name. This means
```
If I want to use pkill, I can include the **-f** option to ask pkill to filter across the full command line argument. This of course also applies to pgrep. So, first I can check with **pgrep -a** before issuing the **pkill -f**.
如果我想使用 pkill我可以加上 **-f** 选项,让 pkill 对完整的命令行参数进行匹配。这当然也适用于 pgrep。所以在执行 **pkill -f** 之前,我可以先用 **pgrep -a** 确认一下。
```
alan@workstation:~$ pgrep -a nginx
20881 nginx: master process ./nginx -p /home/alan/web/prod/nginxsec
@ -242,6 +284,10 @@ alan@workstation:~$ pgrep -a nginx
```
I can also narrow down my result with **pgrep -f**. The same argument used with pkill stops the process.
我也可以用 **pgrep -f** 缩小搜索结果。把同样的参数用在 pkill 上,就会停止对应的进程。
```
alan@workstation:~$ pgrep -f nginxsec
20881
@ -251,8 +297,14 @@ alan@workstation:~$ pkill -f nginxsec
The key thing to remember with pgrep (and especially pkill) is that you must always be sure that your search result is accurate so you aren't unintentionally affecting the wrong processes.
使用 pgrep尤其是 pkill时要记住的关键点是您必须始终确保搜索结果的准确性这样才不会无意中影响到错误的进程。
Most of these commands have many command line options, so I always recommend reading the [man page][1] on each one. While most of these exist across platforms such as Linux, Solaris, and BSD, there are a few differences. Always test and be ready to correct as needed when working at the command line or writing scripts.
这些命令大多都有许多命令行选项,所以我总是建议阅读每个命令的 [man page][1]。虽然这些命令大多存在于 Linux、Solaris 和 BSD 等平台上,但它们之间也有一些不同之处。在命令行下工作或编写脚本时,要始终进行测试,并随时准备根据需要进行更正。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/linux-commands-process-management

View File

@ -0,0 +1,130 @@
我为什么喜欢 Xonsh
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/shelloff.png?itok=L8pjHXjW)
Shell 语言对交互式使用做了很多优化,但这种优化是以牺牲把它们作为编程语言使用的体验为代价的,这一点在编写 shell 脚本时有时会感觉到。
如果你的 shell 是一种更可伸缩的语言会怎样比如说Python
进入 [Xonsh][1]。
安装 Xonsh 就像创建虚拟环境一样简单:运行 `pip install xonsh[ptk,linux]`,然后运行 `xonsh`。
首先,你可能想知道为什么你的 Python shell 有一个奇怪的提示:
```
$ 1+1
2
```
好的计算器!
```
$ print("hello world")
hello world
```
我们还可以调用其他函数:
```
$ from antigravity import geohash
$ geohash(37.421542, -122.085589, b'2005-05-26-10458.68')
37.857713 -122.544543
```
然而,我们仍然可以像常规 shell 一样使用它:
```
$ echo "hello world"
hello world
```
我们甚至可以混搭!
```
$ for i in range(3):
.     echo "hello world"
.
hello world
hello world
hello world
```
Xonsh 支持使用 [Prompt Toolkit][2] 补全 shell 命令和 Python 表达式。补全有可视化提示,会显示可能的补全并有下拉列表。
它还支持访问环境变量。它使用简单但强大的启发式方法将 Python 类型应用于环境变量。默认类型为字符串,但是像 PATH 这样的路径变量会自动解析为列表。
```
$ '/usr/bin' in $PATH
True
```
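下面是一个补充的小示意(假设在 xonsh 会话中运行,具体输出因系统而异,此处从略):
```
$ import os.path
$ [p for p in $PATH if os.path.isdir(p)]   # $PATH 是列表,可以直接用 Python 迭代
$ $HOME + '/projects'                      # 环境变量的值可以作为 Python 字符串参与运算
```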
Xonsh 接受 shell 形式或 Python 形式的布尔短路运算符:
```
$ cat things
foo
$ grep -q foo things and echo "found"
found
$ grep -q bar things && echo "found"
$ grep -q foo things or echo "found"
$ grep -q bar things || echo "found"
found
```
这意味着 Python 关键字会被解释执行。如果我们想要打印那本著名的 Dr. Seuss 书的标题,我们需要把关键字用引号引起来。
```
$ echo green eggs "and" ham
green eggs and ham
```
如果我们不这样做,我们会感到惊讶:
```
$ echo green eggs and ham
green eggs
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
xonsh: subprocess mode: command not found: ham
Did you mean one of the following?
    as:   Command (/usr/bin/as)
    ht:   Command (/usr/bin/ht)
    mag:  Command (/usr/bin/mag)
    ar:   Command (/usr/bin/ar)
    nm:   Command (/usr/bin/nm)
```
虚拟环境可能会有点棘手。一般的虚拟环境工具(由于它们依赖类似 Bash 的语法无法工作。但是Xonsh 自带了一个名为 `vox` 的虚拟环境管理系统。
`vox` 可以创建、激活和停用 `~/.virtualenvs` 中的环境。如果你用过 `virtualenvwrapper`,这就是那些虚拟环境所在的位置。
请注意,当前激活的环境不会影响 `xonsh`。它无法从激活的环境中导入任何内容。
```
$ xontrib load vox
$ vox create my-environment                                                    
...
$ vox activate my-environment        
Activated "my-environment".                                                    
$ pip install money                                                            
...
$ python                                                              
...
>>> import money                                                              
>>> money.Money('3.14')                        
$ import money
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
ModuleNotFoundError: No module named 'money'
```
第一行启用 `vox`:它是一个 `xontrib`Xonsh 的一个第三方扩展。`xontrib` 管理器可以列出所有可能的 `xontribs` 及其当前状态(已安装,已加载或未加载)。
可以编写一个 `xontrib` 并上传到 `PyPi` 以使其可用。但是,最好将它添加到 `xontrib` 索引中,以便 Xonsh 提前知道它。比如,这能让配置向导建议它。
如果你曾经想过“Python 可以成为我的 shell 吗?”,那么你只要 `pip install xonsh` 一下就能找到答案。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/xonsh-bash-alternative
作者:[Moshe Zadka][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[1]: https://xon.sh/
[2]: https://python-prompt-toolkit.readthedocs.io/en/master/

View File

@ -0,0 +1,108 @@
使用 mDNS 在局域网中轻松发现系统
======
![](https://fedoramagazine.org/wp-content/uploads/2018/09/mDNS-816x345.jpg)
多播 DNSmDNS允许系统在局域网中通过广播查询来发现其他资源。Fedora 用户经常在没有复杂名称服务的路由器后面拥有多个 Linux 系统。在这种情况下mDNS 允许你按名称与多个系统通信,而且多数情况下不需要路由器的参与。你也不必在所有本地系统上同步类似 /etc/hosts 之类的文件。本文介绍如何设置它。
mDNS 是一个零配置网络服务它已经诞生了很长一段时间。Fedora 将 Avahi一个包含 mDNS 的零配置系统作为工作站的一部分。mDNS 也是 Bonjour 的一部分,后者可在 Mac OS 上找到。
本文假设你有两个系统运行受支持的 Fedora 版本27 或 28。它们的主机名是 castor 和 pollux。
### 安装包
确保系统上安装了 nss-mdns 和 avahi 软件包。你安装的版本可能与下面不同,这也没问题:
```
$ rpm -q nss-mdns avahi
nss-mdns-0.14.1-1.fc28.x86_64
avahi-0.7-13.fc28.x86_64
```
Fedora Workstation 默认提供这两个包。如果不存在,请安装它们:
```
$ sudo dnf install nss-mdns avahi
```
确保 avahi-daemon.service 单元已启用并正在运行。同样,这是 Fedora Workstation 的默认设置。
```
$ sudo systemctl enable --now avahi-daemon.service
```
虽然是可选的,但你可能还需要安装 avahi-tools 软件包。该软件包包括许多方便的程序,用于检查系统上的零配置服务的工作情况。使用这个 sudo 命令:
```
$ sudo dnf install avahi-tools
```
/etc/nsswitch.conf 控制系统使用哪个服务用于解析,以及它们的顺序。你应该在那个文件中看到这样的一行:
```
hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname
```
注意其中的 mdns4_minimal [NOTFOUND=return] 配置。它告诉你的系统使用多播 DNS 解析程序将主机名解析为 IP 地址。即使该服务可用,如果名称无法解析,系统也会继续尝试其余服务。
如果你没有看到与此类似的配置,则可以对其进行编辑(以 root 用户身份。但是nss-mdns 包会为你处理此问题。如果你对自己编辑它感到不舒服,请删除并重新安装该软件包以修复该文件。
在**两个系统**中执行同样的步骤 。
### 设置主机名并测试
现在你已完成常见的配置工作,请使用以下方法之一设置每个主机的名称:
1. 如果你正在使用 Fedora Workstation[你可以使用这个步骤][1]。
2. 如果没有,请使用 hostnamectl 来做。在第一台机器上这么做:
```
$ hostnamectl set-hostname castor
```
3. 你还可以编辑 /etc/avahi/avahi-daemon.conf删除主机名设置行上的注释并在那里设置名称。但是默认情况下Avahi 使用系统提供的主机名,因此你**不应该**需要此方法。
接下来,重启 Avahi 守护进程,以便它接收更改:
```
$ sudo systemctl restart avahi-daemon.service
```
然后正确设置另一台机器:
```
$ hostnamectl set-hostname pollux
$ sudo systemctl restart avahi-daemon.service
```
只要你的路由器没有禁止 mDNS 流量,你现在应该能够登录到 castor 并 ping 通另一台机器。你应该使用默认的 .local 域名,以便解析正常工作:
```
$ ping pollux.local
PING pollux.local (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1 (192.168.0.1): icmp_seq=1 ttl=64 time=3.17 ms
64 bytes from 192.168.0.1 (192.168.0.1): icmp_seq=2 ttl=64 time=1.24 ms
...
```
反过来,如果你在 pollux 上 ping castor.local同样的技巧也适用。现在在网络中访问你的系统更方便了
此外,如果你的路由器也在宣告这类服务,请不要感到惊讶。现代 WiFi 和有线路由器通常提供这些服务,以使消费者的生活更轻松。
此过程适用于大多数系统。但是,如果遇到麻烦,请使用 avahi-browse 和 avahi-tools 软件包中的其他工具来查看可用的服务。
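例如,下面两条命令都来自 avahi-tools 软件包(一个简单的示意,输出因网络环境而异,此处从略):
```
$ avahi-browse -at                # 列出局域网中宣告的所有服务
$ avahi-resolve -n pollux.local   # 将 mDNS 主机名解析为 IP 地址
```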
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/find-systems-easily-lan-mdns/
作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[1]: https://fedoramagazine.org/set-hostname-fedora/

View File

@ -0,0 +1,56 @@
Flash Player 的两种开源替代方案
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB)
2017 年 7 月Adobe 为 Flash Media Player 敲响了[丧钟][1],宣布将在 2020 年终止对这个曾经无处不在的在线视频播放器的支持。但事实上在一系列损害了其声誉的零日攻击后Flash 的份额在过去的 8 年中一直在下跌。苹果公司在 2010 年宣布不会支持这项技术后,其未来趋于黯淡;2016 年谷歌在 Chrome 浏览器中停止默认启用 Flash转而支持 HTML5它的消亡进一步加速。
即便如此Adobe 仍然每月发布该软件的更新,截至 2018 年 8 月,它在网站中的使用率已从 2011 年的 28.5% 下降到[仅 4.4%][2]。还有更多证据表明 Flash 在下滑:谷歌工程总监 [Parisa Tabriz 说][3],通过浏览器访问 Flash 内容的 Chrome 用户比例从 2014 年的 80% 下降到 2018 年的 8%。
尽管如今很少有视频创作者以 Flash 格式发布内容,但仍有很多人们希望在未来几年内还能访问的 Flash 视频。鉴于官方支持的日子已经屈指可数,开源软件开发者正好有机会站出来,提供 Adobe Flash Media Player 的替代品。其中两个这样的应用是 Lightspark 和 GNU Gnash。它们都还不是完美的替代品但来自贡献者的帮助可以使它们变得可用。
### Lightspark
[Lightspark][4] 是 Linux 上的 Flash Player 替代品。虽然它仍处于 alpha 状态,但自从 Adobe 在 2017 年宣布废弃 Flash 以来开发速度已经加快。据其网站称Lightspark 实现了 60% 的 Flash API可在许多流行网站包括 BBC 新闻、Google Play 音乐和亚马逊音乐)上[使用][5]。
Lightspark 是用 C++/C 编写的,并在 [LGPLv3][6] 下许可。该项目列出了 41 位贡献者,并正在积极征求错误报告和其他贡献。有关更多信息,请查看其 [GitHub 仓库][5]。
### GNU Gnash
[GNU Gnash][7] 是一个用于 GNU/Linux 操作系统(包括 Ubuntu、Fedora 和 Debian的 Flash Player。它既可以作为独立软件使用也可以作为 Firefox 和 Konqueror 浏览器的插件使用。
Gnash 的主要缺点是它不支持最新版本的 Flash 文件:它支持大多数 Flash SWF v7 功能和一些 v8、v9 功能,但不支持 v10 文件。它处于测试阶段,由于它在 [GNU GPLv3 或更高版本][8]下许可,因此你可以帮助实现它的现代化。访问其[项目页面][9]获取更多信息。
### 想要创建 Flash 吗?
虽然大多数人都不再发布 Flash 视频,但这并不意味着永远不需要创建 SWF 文件。如果你发现自己需要,这两个开源工具可能会有所帮助:
  * [Motion-Twin ActionScript 2 编译器][10]MTASC一个命令行编译器它可以在没有 Adobe AnimateAdobe 当前的视频创建软件)的情况下生成 SWF 文件。
  * [Ming][11]:一个用 C 编写的、可以生成 SWF 文件的库。它还包含一些可用于处理 Flash 的[程序][12]。
--------------------------------------------------------------------------------
via: https://opensource.com/alternatives/flash-media-player
作者:[Opensource.com][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com
[1]: https://theblog.adobe.com/adobe-flash-update/
[2]: https://w3techs.com/technologies/details/cp-flash/all/all
[3]: https://www.bleepingcomputer.com/news/security/google-chrome-flash-usage-declines-from-80-percent-in-2014-to-under-8-percent-today/
[4]: http://lightspark.github.io/
[5]: https://github.com/lightspark/lightspark/wiki/Site-Support
[6]: https://github.com/lightspark/lightspark/blob/master/COPYING
[7]: https://www.gnu.org/software/gnash/
[8]: http://www.gnu.org/licenses/gpl-3.0.html
[9]: http://savannah.gnu.org/projects/gnash/
[10]: http://tech.motion-twin.com/mtasc.html
[11]: http://www.libming.org/
[12]: http://www.libming.org/WhatsIncluded