mirror of https://github.com/LCTT/TranslateProject.git synced 2025-03-24 02:20:09 +08:00

Merge pull request from LCTT/master

Update
This commit is contained in:
Beini Gu 2019-06-03 18:18:05 -04:00 committed by GitHub
commit 3a11aa85b4
111 changed files with 4745 additions and 4164 deletions

View File

@ -0,0 +1,108 @@
[#]: collector: "lujun9972"
[#]: translator: "fuzheng1998"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-10931-1.html"
[#]: subject: "5 open source mobile apps"
[#]: via: "https://opensource.com/article/19/4/mobile-apps"
[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen"
5 个可以满足你的生产力、沟通和娱乐需求的开源手机应用
======
> 你可以依靠这些应用来满足你的生产力、沟通和娱乐需求。
![](https://img.linux.net.cn/data/attachment/album/201906/03/001949brnq19j5qeqn3onv.jpg)
像世界上大多数人一样,我的手似乎就没有离开过手机。多亏了我从 Google Play 和 F-Droid 安装的开源移动应用程序,让我的 Android 设备好像提供了无限的沟通、生产力和娱乐服务一样。
在我的手机上的许多开源应用程序中,当想听音乐、与朋友/家人和同事联系、或者在旅途中完成工作时,以下五个是我一直使用的。
### MPDroid
一个音乐播放器守护进程MPD的 Android 控制器。
![MPDroid][2]
MPD 是将音乐从小型音乐服务器电脑传输到大型的黑色立体声音箱的好方法。它直连 ALSA因此可以通过 ALSA 硬件接口与数模转换器DAC对话它可以通过我的网络进行控制——但是用什么东西控制呢好吧事实证明 MPDroid 是一个很棒的 MPD 控制器。它可以管理我的音乐数据库,显示专辑封面,处理播放列表,并支持互联网广播。而且它是开源的,所以如果某些东西不好用的话……
MPDroid 可在 [Google Play][4] 和 [F-Droid][5] 上找到。
### RadioDroid
一台能单独使用及与 Chromecast 搭配使用的 Android 网络收音机。
![RadioDroid][6]
RadioDroid 是一个网络收音机,而 MPDroid 则管理我音乐的数据库从本质上讲RadioDroid 是 [Internet-Radio.com][7] 的一个前端。此外,通过将耳机插入 Android 设备,通过耳机插孔或 USB 将 Android 设备直接连接到立体声系统,或通过兼容设备使用其 Chromecast 功能,可以享受 RadioDroid。这是一个查看芬兰天气情况听取排名前 40 的西班牙语音乐,或收听最新新闻消息的好方法。
RadioDroid 可在 [Google Play][8] 和 [F-Droid][9] 上找到。
### Signal
一个支持 Android、iOS还有桌面系统的安全即时消息客户端。
![Signal][10]
如果你喜欢 WhatsApp但是因为它与 Facebook [日益密切][11]的关系而感到困扰,那么 Signal 应该是你的下一个产品。Signal 的唯一问题是说服你的朋友们最好用 Signal 取代 WhatsApp。但除此之外它有一个与 WhatsApp 类似的界面;很棒的语音和视频通话;很好的加密;恰到好处的匿名;并且它受到了一个不打算通过使用软件来获利的基金会的支持。为什么不喜欢它呢?
Signal 可用于 [Android][12]、[iOS][13] 和 [桌面][14]。
### ConnectBot
Android SSH 客户端。
![ConnectBot][15]
有时我离电脑很远,但我需要登录服务器才能办事。[ConnectBot][16] 是将 SSH 会话搬到手机上的绝佳解决方案。
ConnectBot 可在 [Google Play][17] 上找到。
### Termux
一个拥有多种熟悉功能的安卓终端模拟器。
![Termux][18]
你是否需要在手机上运行 `awk` 脚本?[Termux][19] 是个解决方案。如果你需要做终端类的工作,而且你不想一直保持与远程计算机的 SSH 连接,请使用 ConnectBot 将文件放到手机上,然后退出会话,在 Termux 中执行你的操作,用 ConnectBot 发回结果。
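如果只是想感受一下,可以在 Termux 里直接跑一个 awk 一行式(下面的数据和命令都只是随手写的示例,并非原文内容):
```
# 计算每行两个数字之和(示例数据)
printf '3 4\n5 6\n' | awk '{ print $1 + $2 }'
# 输出7 和 11
```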
Termux 可在 [Google Play][20] 和 [F-Droid][21] 上找到。
* * *
你最喜欢用于工作或娱乐的开源移动应用是什么呢?请在评论中分享它们。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/mobile-apps
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[fuzheng1998](https://github.com/fuzheng1998)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78
[2]: https://opensource.com/sites/default/files/uploads/mpdroid.jpg "MPDroid"
[3]: https://opensource.com/article/17/4/fun-new-gadget
[4]: https://play.google.com/store/apps/details?id=com.namelessdev.mpdroid&hl=en_US
[5]: https://f-droid.org/en/packages/com.namelessdev.mpdroid/
[6]: https://opensource.com/sites/default/files/uploads/radiodroid.png "RadioDroid"
[7]: https://www.internet-radio.com/
[8]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2
[9]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/
[10]: https://opensource.com/sites/default/files/uploads/signal.png "Signal"
[11]: https://opensource.com/article/19/3/open-messenger-client
[12]: https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms
[13]: https://itunes.apple.com/us/app/signal-private-messenger/id874139669?mt=8
[14]: https://signal.org/download/
[15]: https://opensource.com/sites/default/files/uploads/connectbot.png "ConnectBot"
[16]: https://connectbot.org/
[17]: https://play.google.com/store/apps/details?id=org.connectbot
[18]: https://opensource.com/sites/default/files/uploads/termux.jpg "Termux"
[19]: https://termux.com/
[20]: https://play.google.com/store/apps/details?id=com.termux
[21]: https://f-droid.org/packages/com.termux/

View File

@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10921-1.html)
[#]: subject: (Be your own certificate authority)
[#]: via: (https://opensource.com/article/19/4/certificate-authority)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123)
自己成为一个证书颁发机构CA
======
> 为你的微服务架构或者集成测试创建一个简单的内部 CA。
![](https://img.linux.net.cn/data/attachment/album/201905/31/091023sg9s0ss11rsoseqg.jpg)
传输层安全([TLS][2])模型(有时也称它的旧名称 SSL基于<ruby>[证书颁发机构][3]<rt>certificate authority</rt></ruby>CA的概念。这些机构受到浏览器和操作系统的信任从而*签名*服务器的证书,以用于验证其所有权。
但是,对于内部网络,微服务架构或集成测试,有时候*本地 CA*更有用:一个只在内部受信任的 CA然后签名本地服务器的证书。
这对集成测试特别有意义。获取证书可能会带来负担,因为这会占用服务器几分钟。但是在代码中使用“忽略证书”可能会被引入到生产环境,从而导致安全灾难。
CA 证书与常规服务器证书没有太大区别。重要的是它被本地代码信任。例如,在 Python `requests` 库中,可以通过将 `REQUESTS_CA_BUNDLE` 变量设置为包含此证书的目录来完成。
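下面是一个简单的示意(其中 `ca.crt` 的路径和 `service.test.local` 这个地址都只是示例,并非原文内容):
```
# 让 requests 信任我们的本地 CA证书路径为示例
export REQUESTS_CA_BUNDLE="$PWD/ca.crt"
# 此后 requests 发起的 HTTPS 请求都会用这个证书来校验服务器
python3 -c 'import requests; print(requests.get("https://service.test.local").status_code)'
```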
在为集成测试创建证书的例子中,不需要*长期的*证书:如果你的集成测试需要超过一天,那么你应该已经测试失败了。
因此,计算**昨天**和**明天**作为有效期间隔:
```
>>> import datetime
>>> one_day = datetime.timedelta(days=1)
>>> today = datetime.datetime.utcnow()
>>> yesterday = today - one_day
>>> tomorrow = today + one_day
```
现在你已准备好创建一个简单的 CA 证书。你需要生成私钥,创建公钥,设置 CA 的“参数”然后自签名证书CA 证书*总是*自签名的。最后,导出证书文件以及私钥文件。
```
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import hashes, serialization
from cryptography import x509
from cryptography.x509.oid import NameOID

private_key = rsa.generate_private_key(
    public_exponent=65537,
    key_size=2048,
    backend=default_backend()
)
public_key = private_key.public_key()

builder = x509.CertificateBuilder()
builder = builder.subject_name(x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
]))
builder = builder.issuer_name(x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
]))
builder = builder.not_valid_before(yesterday)
builder = builder.not_valid_after(tomorrow)
builder = builder.serial_number(x509.random_serial_number())
builder = builder.public_key(public_key)
builder = builder.add_extension(
    x509.BasicConstraints(ca=True, path_length=None),
    critical=True)

certificate = builder.sign(
    private_key=private_key, algorithm=hashes.SHA256(),
    backend=default_backend()
)

private_bytes = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.NoEncryption())

public_bytes = certificate.public_bytes(
    encoding=serialization.Encoding.PEM)

with open("ca.pem", "wb") as fout:
    fout.write(private_bytes + public_bytes)

with open("ca.crt", "wb") as fout:
    fout.write(public_bytes)
```
通常,真正的 CA 会需要[证书签名请求][4]CSR来签名证书。但是当你是自己的 CA 时,你可以制定自己的规则!可以径直签名你想要的内容。
继续集成测试的例子,你可以创建私钥并立即签名相应的公钥。注意 `COMMON_NAME` 需要是 `https` URL 中的“服务器名称”。如果你已配置名称查询,你需要服务器能响应对 `service.test.local` 的请求。
```
service_private_key = rsa.generate_private_key(
    public_exponent=65537,
    key_size=2048,
    backend=default_backend()
)
service_public_key = service_private_key.public_key()

builder = x509.CertificateBuilder()
builder = builder.subject_name(x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, 'service.test.local')
]))
# 证书还必须带有颁发者名称和序列号,这里的颁发者就是上面的测试 CA
builder = builder.issuer_name(x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
]))
builder = builder.not_valid_before(yesterday)
builder = builder.not_valid_after(tomorrow)
builder = builder.serial_number(x509.random_serial_number())
builder = builder.public_key(service_public_key)

certificate = builder.sign(
    private_key=private_key, algorithm=hashes.SHA256(),
    backend=default_backend()
)

private_bytes = service_private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.NoEncryption())

public_bytes = certificate.public_bytes(
    encoding=serialization.Encoding.PEM)

with open("service.pem", "wb") as fout:
    fout.write(private_bytes + public_bytes)
```
现在 `service.pem` 文件有一个私钥和一个“有效”的证书:它已由本地的 CA 签名。该文件的格式可以给 Nginx、HAProxy 或大多数其他 HTTPS 服务器使用。
通过将此逻辑用在测试脚本中,只要客户端配置信任该 CA那么就可以轻松创建看起来真实的 HTTPS 服务器。
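如果想确认脚本生成的文件没有问题,可以用 `openssl` 做一个快速检查(假设 `ca.crt` 和 `service.pem` 就在当前目录,这只是一个可选的验证步骤,并非原文内容):
```
# 确认 service.pem 中的证书确实由本地 CA 签发
openssl verify -CAfile ca.crt service.pem
# 查看 CA 证书的主题名与有效期
openssl x509 -in ca.crt -noout -subject -dates
```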
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/certificate-authority
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez/users/elenajon123
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj
[2]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[3]: https://en.wikipedia.org/wiki/Certificate_authority
[4]: https://en.wikipedia.org/wiki/Certificate_signing_request

View File

@ -1,8 +1,8 @@
[#]: collector: "lujun9972"
[#]: translator: "FSSlc"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-10930-1.html"
[#]: subject: "Inter-process communication in Linux: Sockets and signals"
[#]: via: "https://opensource.com/article/19/4/interprocess-communication-linux-networking"
[#]: author: "Marty Kalin https://opensource.com/users/mkalindepauledu"
@ -10,21 +10,21 @@
Linux 下的进程间通信:套接字和信号
======
学习在 Linux 中进程是如何与其他进程进行同步的。
> 学习在 Linux 中进程是如何与其他进程进行同步的。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3)
![](https://img.linux.net.cn/data/attachment/album/201906/02/234437y6gig4tg4yy94356.jpg)
本篇是 Linux 下[进程间通信][1]IPC系列的第三篇同时也是最后一篇文章。[第一篇文章][2]聚焦在通过共享存储(文件和共享内存段)来进行 IPC[第二篇文章][3]则通过管道(无名的或者名的)及消息队列来达到相同的目的。这篇文章将目光从高处(套接字)然后到低处(信号)来关注 IPC。代码示例将用力地充实下面的解释细节。
本篇是 Linux 下[进程间通信][1]IPC系列的第三篇同时也是最后一篇文章。[第一篇文章][2]聚焦在通过共享存储(文件和共享内存段)来进行 IPC[第二篇文章][3]则通过管道(无名的或者名的)及消息队列来达到相同的目的。这篇文章将目光从高处(套接字)然后到低处(信号)来关注 IPC。代码示例将用力地充实下面的解释细节。
### 套接字
正如管道有两种类型(有名和无名一样套接字也有两种类型。IPC 套接字(即 Unix domain socket)给予进程在相同设备(主机)上基于通道的通信能力;而网络套接字给予进程运行在不同主机的能力,因此也带来了网络通信的能力。网络套接字需要底层协议的支持,例如 TCP传输控制协议或 UDP用户数据报协议
正如管道有两种类型(命名和无名一样套接字也有两种类型。IPC 套接字(即 Unix 套接字)给予进程在相同设备(主机)上基于通道的通信能力;而网络套接字给予进程运行在不同主机的能力,因此也带来了网络通信的能力。网络套接字需要底层协议的支持,例如 TCP传输控制协议或 UDP用户数据报协议
与之相反IPC 套接字依赖于本地系统内核的支持来进行通信特别的IPC 通信使用一个本地的文件作为套接字地址。尽管这两种套接字的实现有所不同但在本质上IPC 套接字和网络套接字的 API 是一致的。接下来的例子将包含网络套接字的内容,但示例服务器和客户端程序可以在相同的机器上运行,因为服务器使用了 localhost127.0.0.1)这个网络地址,该地址表示的是本地机器上的本地机器地址。
与之相反IPC 套接字依赖于本地系统内核的支持来进行通信特别的IPC 通信使用一个本地的文件作为套接字地址。尽管这两种套接字的实现有所不同但在本质上IPC 套接字和网络套接字的 API 是一致的。接下来的例子将包含网络套接字的内容,但示例服务器和客户端程序可以在相同的机器上运行,因为服务器使用了 `localhost`127.0.0.1)这个网络地址,该地址表示的是本地机器上的本地机器地址。
套接字以流的形式(下面将会讨论到)被配置为双向的,并且其控制遵循 C/S客户端/服务器端)模式:客户端通过尝试连接一个服务器来初始化对话,而服务器端将尝试接受该连接。假如万事顺利,来自客户端的请求和来自服务器端的响应将通过管道进行传输,直到其中任意一方关闭该通道,从而断开这个连接。
一个`迭代服务器`(只适用于开发)将一直和连接它的客户端打交道:从最开始服务第一个客户端,然后到这个连接关闭,然后服务第二个客户端,循环往复。这种方式的一个缺点是处理一个特定的客户端可能会一直持续下去,使得其他的客户端一直在后面等待。生产级别的服务器将是并发的,通常使用了多进程或者多线程的混合。例如,我台式机上的 Nginx 网络服务器有一个 4 个 worker 的进程池,它们可以并发地处理客户端的请求。在下面的代码示例中,我们将使用迭代服务器,使得我们将要处理的问题达到一个很小的规模,只关注基本的 API而不去关心并发的问题。
一个迭代服务器(只适用于开发)将一直和连接它的客户端打交道:从最开始服务第一个客户端,然后到这个连接关闭,然后服务第二个客户端,循环往复。这种方式的一个缺点是处理一个特定的客户端可能会挂起,使得其他的客户端一直在后面等待。生产级别的服务器将是并发的,通常使用了多进程或者多线程的混合。例如,我台式机上的 Nginx 网络服务器有一个 4 个<ruby>工人<rt>worker</rt></ruby>的进程池,它们可以并发地处理客户端的请求。在下面的代码示例中,我们将使用迭代服务器,使得我们将要处理的问题保持在一个很小的规模,只关注基本的 API而不去关心并发的问题。
最后,随着各种 POSIX 改进的出现,套接字 API 随着时间的推移而发生了显著的变化。当前针对服务器端和客户端的示例代码特意写的比较简单,但是它着重强调了基于流的套接字中连接的双方。下面是关于流控制的一个总结,其中服务器端在一个终端中开启,而客户端在另一个不同的终端中开启:
@ -108,10 +108,10 @@ int main() {
上面的服务器端程序执行典型的 4 个步骤来准备回应客户端的请求,然后接受其他的独立请求。这里每一个步骤都以服务器端程序调用的系统函数来命名。
1. `socket(…)` : 为套接字连接获取一个文件描述符
2. `bind(…)` : 将套接字和服务器主机上的一个地址进行绑定
3. `listen(…)` : 监听客户端请求
4. `accept(…)` :接受一个特定的客户端请求
1. `socket(…)`为套接字连接获取一个文件描述符
2. `bind(…)`将套接字和服务器主机上的一个地址进行绑定
3. `listen(…)`监听客户端请求
4. `accept(…)`:接受一个特定的客户端请求
上面的 `socket` 调用的完整形式为:
@ -121,7 +121,7 @@ int sockfd = socket(AF_INET,      /* versus AF_LOCAL */
                    0);           /* system picks protocol (TCP) */
```
第一个参数特别指定了使用的是一个网络套接字,而不是 IPC 套接字。对于第二个参数有多种选项,但 `SOCK_STREAM``SOCK_DGRAM`(数据报)是最为常用的。基于流的套接字支持可信通道,在这种通道中如果发生了信息的丢失或者更改,都将会被报告。这种通道是双向的,并且从一端到另外一端的有效载荷在大小上可以是任意的。相反的,基于数据报的套接字大多是不可信的,没有方向性,并且需要固定大小的载荷。`socket` 的第三个参数特别指定了协议。对于这里展示的基于流的套接字只有一种协议选择TCP在这里表示的 `0`;。因为对 `socket` 的一次成功调用将返回相似的文件描述符,一个套接字将会被读写,对应的语法和读写一个本地文件是类似的。
第一个参数特别指定了使用的是一个网络套接字,而不是 IPC 套接字。对于第二个参数有多种选项,但 `SOCK_STREAM``SOCK_DGRAM`(数据报)是最为常用的。基于流的套接字支持可信通道,在这种通道中如果发生了信息的丢失或者更改,都将会被报告。这种通道是双向的,并且从一端到另外一端的有效载荷在大小上可以是任意的。相反的,基于数据报的套接字大多是不可信的,没有方向性,并且需要固定大小的载荷。`socket` 的第三个参数特别指定了协议。对于这里展示的基于流的套接字只有一种协议选择TCP在这里表示的 `0`。因为对 `socket` 的一次成功调用将返回相似的文件描述符,套接字可以被读写,对应的语法和读写一个本地文件是类似的。
`bind` 的调用是最为复杂的,因为它反映出了在套接字 API 方面上的各种改进。我们感兴趣的点是这个调用将一个套接字和服务器端所在机器中的一个内存地址进行绑定。但对 `listen` 的调用就非常直接了:
@ -131,9 +131,9 @@ if (listen(fd, MaxConnects) < 0)
第一个参数是套接字的文件描述符,第二个参数则指定了在服务器端处理一个拒绝连接错误之前,有多少个客户端连接被允许连接。(在头文件 `sock.h``MaxConnects` 的值被设置为 `8`。)
`accept` 调用默认将是一个塞等待:服务器端将不做任何事情直到一个客户端尝试连接它,然后进行处理。`accept` 函数返回的值如果是 `-1` 则暗示有错误发生。假如这个调用是成功的,则它将返回另一个文件描述符,这个文件描述符被用来指代另一个可读可写的套接字,它与 `accept` 调用中的第一个参数对应的接收套接字有所不同。服务器端使用这个可读可写的套接字来从客户端读取请求然后写回它的回应。接收套接字只被用于接受客户端的连接。
`accept` 调用默认将是一个阻塞等待:服务器端将不做任何事情直到一个客户端尝试连接它,然后进行处理。`accept` 函数返回的值如果是 `-1` 则暗示有错误发生。假如这个调用是成功的,则它将返回另一个文件描述符,这个文件描述符被用来指代另一个可读可写的套接字,它与 `accept` 调用中的第一个参数对应的接收套接字有所不同。服务器端使用这个可读可写的套接字来从客户端读取请求然后写回它的回应。接收套接字只被用于接受客户端的连接。
在设计上,一个服务器端可以一直运行下去。当然服务器端可以通过在命令行中使用 `Ctrl+C` 来终止它。
在设计上,服务器端可以一直运行下去。当然服务器端可以通过在命令行中使用 `Ctrl+C` 来终止它。
#### 示例 2. 使用套接字的客户端
@ -207,25 +207,25 @@ int main() {
if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0)
```
`connect` 的调用可能因为多种原因而导致失败,例如客户端拥有错误的服务器端地址或者已经有太多的客户端连接上了服务器端。假如 `connect` 操作成功,客户端将在一个 `for` 循环中,写入它的响应然后读取返回的响应。在经过会话后,服务器端和客户端都将调用 `close` 去关闭可读可写套接字,尽管其中一个关闭操作已经足以关闭他们之间的连接,但此时客户端可能就此关闭,但正如前面提到的那样,服务器端将一直保持开放以处理其他事务。
`connect` 的调用可能因为多种原因而导致失败,例如客户端拥有错误的服务器端地址或者已经有太多的客户端连接上了服务器端。假如 `connect` 操作成功,客户端将在一个 `for` 循环中,写入它的请求然后读取返回的响应。在会话后,服务器端和客户端都将调用 `close` 去关闭这个可读可写套接字,尽管任何一边的关闭操作就足以关闭它们之间的连接。此后客户端可以退出了,但正如前面提到的那样,服务器端可以一直保持开放以处理其他事务。
从上面的套接示例中,我们看到了请求信息被回给客户端,这使得客户端和服务器端之间拥有进行丰富对话的可能性。也许这就是套接字的主要魅力。在现代系统中,客户端应用(例如一个数据库客户端)和服务器端通过套接字进行通信非常常见。正如先前提及的那样,本地 IPC 套接字和网络套接字只在某些实现细节上面有所不同一般来说IPC 套接字有着更低的消耗和更好的性能。它们的通信 API 基本是一样的。
从上面的套接字示例中,我们看到了请求信息被回显给客户端,这使得客户端和服务器端之间拥有进行丰富对话的可能性。也许这就是套接字的主要魅力。在现代系统中,客户端应用(例如一个数据库客户端)和服务器端通过套接字进行通信非常常见。正如先前提及的那样,本地 IPC 套接字和网络套接字只在某些实现细节上面有所不同一般来说IPC 套接字有着更低的消耗和更好的性能。它们的通信 API 基本是一样的。
### 信号
一个信号中断一个正在执行的程序,在这种意义下,就是用信号与这个程序进行通信。大多数的信号要么可以被忽略(阻塞)或者被处理(通过特别设计的代码)。`SIGSTOP` (暂停)和 `SIGKILL`(立即停止)是最应该提及的两种信号。符号常数拥有整数类型的值,例如 `SIGKILL` 对应的值为 `9`
信号中断一个正在执行的程序,在这种意义下,就是用信号与这个程序进行通信。大多数的信号要么可以被忽略(阻塞)或者被处理(通过特别设计的代码)。`SIGSTOP` (暂停)和 `SIGKILL`(立即停止)是最应该提及的两种信号。这种符号常量有整数类型的值,例如 `SIGKILL` 对应的值为 `9`
信号可以在与用户交互的情况下发生。例如,一个用户从命令行中敲了 `Ctrl+C`从命令行中终止一个程序;`Ctrl+C` 将产生一个 `SIGTERM` 信号。针对终止,`SIGTERM` 信号可以被阻塞或者被处理,而不像 `SIGKILL` 信号那样。一个进程也可以通过信号和另一个进程通信,这样使得信号也可以作为一种 IPC 机制。
信号可以在与用户交互的情况下发生。例如,一个用户从命令行中敲了 `Ctrl+C`终止一个从命令行中启动的程序;`Ctrl+C` 将产生一个 `SIGTERM` 信号。`SIGTERM` 意即终止,它可以被阻塞或者被处理,而不像 `SIGKILL` 信号那样。一个进程也可以通过信号和另一个进程通信,这样使得信号也可以作为一种 IPC 机制。
考虑一下一个多进程应用,例如 Nginx 网络服务器是如何被另一个进程优雅地关闭的。`kill` 函数:
```
int kill(pid_t pid, int signum); /* declaration */
```
可以被一个进程用来终止另一个进程或者一组进程。假如 `kill` 函数的第一个参数是大于 `0` 的,那么这个参数将会被认为是目标进程的 pid进程 ID假如这个参数是 `0`,则这个参数将会被识别为信号发送者所属的那组进程。
`kill` 的第二个参数要么是一个标准的信号数字(例如 `SIGTERM``SIGKILL`),要么是 `0` ,这将会对信号做一次询问,确认第一个参数中的 pid 是否是有效的。这样将一个多进程应用的优雅地关闭就可以通过向组成该应用的一组进程发送一个终止信号来完成,具体来说就是调用一个 `kill` 函数,使得这个调用的第二个参数是 `SIGTERM`Nginx 主进程可以通过调用 `kill` 函数来终止其他 worker 进程,然后再停止自己。)就像许多库函数一样,`kill` 函数通过一个简单的可变语法拥有更多的能力和灵活性。
可以被一个进程用来终止另一个进程或者一组进程。假如 `kill` 函数的第一个参数是大于 `0` 的,那么这个参数将会被认为是目标进程的 `pid`(进程 ID假如这个参数是 `0`,则这个参数将会被视作信号发送者所属的那组进程。
`kill` 的第二个参数要么是一个标准的信号数字(例如 `SIGTERM``SIGKILL`),要么是 `0` ,这将会对信号做一次询问,确认第一个参数中的 `pid` 是否是有效的。这样优雅地关闭一个多进程应用就可以通过向组成该应用的一组进程发送一个终止信号来完成,具体来说就是调用一个 `kill` 函数,使得这个调用的第二个参数是 `SIGTERM`Nginx 主进程可以通过调用 `kill` 函数来终止其他工人进程,然后再停止自己。)就像许多库函数一样,`kill` 函数通过一个简单的可变语法拥有更多的能力和灵活性。
#### 示例 3. 一个多进程系统的优雅停止
@ -290,9 +290,9 @@ int main() {
上面的停止程序模拟了一个多进程系统的优雅退出,在这个例子中,这个系统由一个父进程和一个子进程组成。这次模拟的工作流程如下:
* 父进程尝试去 fork 一个子进程。假如这个 fork 操作成功了,每个进程就执行它自己的代码:子进程就执行函数 `child_code`,而父进程就执行函数 `parent_code`
* 子进程将会进入一个潜在的无限循环,在这个循环中子进程将睡眠一秒,然后打印一个信息,接着再次进入睡眠状态,以此循环往复。来自父进程的一个 `SIGTERM` 信号将引起子进程去执行一个信号处理回调函数 `graceful`。这样这个信号就使得子进程可以跳出循环,然后进行子进程和父进程之间的优雅终止。在终止之前,进程将打印一个信息。
* 在 fork 一个子进程后,父进程将睡眠 5 秒,使得子进程可以执行一会儿;当然在这个模拟中,子进程大多数时间都在睡眠。然后父进程调用 `SIGTERM` 作为第二个参数的 `kill` 函数,等待子进程的终止,然后自己再终止。
* 父进程尝试去 `fork` 一个子进程。假如这个 `fork` 操作成功了,每个进程就执行它自己的代码:子进程就执行函数 `child_code`,而父进程就执行函数 `parent_code`
* 子进程将会进入一个潜在的无限循环,在这个循环中子进程将睡眠一秒,然后打印一个信息,接着再次进入睡眠状态,以此循环往复。来自父进程的一个 `SIGTERM` 信号将引起子进程去执行一个信号处理回调函数 `graceful`。这样这个信号就使得子进程可以跳出循环,然后进行子进程和父进程之间的优雅终止。在终止之前,进程将打印一个信息。
* 在 `fork` 一个子进程后,父进程将睡眠 5 秒,使得子进程可以执行一会儿;当然在这个模拟中,子进程大多数时间都在睡眠。然后父进程调用 `SIGTERM` 作为第二个参数的 `kill` 函数,等待子进程的终止,然后自己再终止。
下面是一次运行的输出:
@ -309,22 +309,23 @@ Parent sleeping for a time...
My child terminated, about to exit myself...
```
对于信号的处理,上面的示例使用了 `sigaction` 库函数POSIX 推荐的用法)而不是传统的 `signal` 函数,`signal` 函数有轻便性问题。下面是我们主要关心的代码片段:
对于信号的处理,上面的示例使用了 `sigaction` 库函数POSIX 推荐的用法)而不是传统的 `signal` 函数,`signal` 函数有移植性问题。下面是我们主要关心的代码片段:
* 假如对 `fork` 的调用成功了,父进程将执行 `parent_code` 函数,而子进程将执行 `child_code` 函数。在给子进程发送信号之前,父进程将会等待 5 秒:
* 假如对 `fork` 的调用成功了,父进程将执行 `parent_code` 函数,而子进程将执行 `child_code` 函数。在给子进程发送信号之前,父进程将会等待 5 秒:
```
puts("Parent sleeping for a time...");
sleep(5);
if (-1 == kill(cpid, SIGTERM)) {
```
假如 `kill` 调用成功了,父进程将在子进程终止时做等待,使得子进程不会变成一个僵尸进程。在等待完成后,父进程再退出。
* `child_code` 函数首先调用 `set_handler` 然后进入它的可能永久睡眠的循环。下面是我们将要查看的 `set_handler` 函数:
```
void set_handler() {
  struct sigaction current;            /* current setup */
  sigemptyset(&current.sa_mask);       /* clear the signal set */
  current.sa_flags = 0;                /* for setting sa_handler, not sa_action */
@ -333,21 +334,22 @@ if (-1 == kill(cpid, SIGTERM)) {
}
```
上面代码的前三行在做相关的准备。第四个语句将为 `graceful` 设定 handler ,它将在调用 `_exit` 来停止之前打印一些信息。第 5 行和最后一行的语句将通过调用 `sigaction` 来向系统注册上面的 handler。`sigaction` 的第一个参数是 `SIGTERM` ,用作终止;第二个参数是当前的 `sigaction` 设定,而最后的参数(在这个例子中是 `NULL` )可被用来保存前面的 `sigaction` 设定,以备后面的可能使用。
上面代码的前三行在做相关的准备。第四个语句将 `graceful` 设定为句柄,它将在调用 `_exit` 来停止之前打印一些信息。第 5 行和最后一行的语句将通过调用 `sigaction` 来向系统注册上面的句柄。`sigaction` 的第一个参数是 `SIGTERM` ,用作终止;第二个参数是当前的 `sigaction` 设定,而最后的参数(在这个例子中是 `NULL` )可被用来保存前面的 `sigaction` 设定,以备后面的可能使用。
使用信号来作为 IPC 的确是一个很轻量的方法,但确实值得尝试。通过信号来做 IPC 显然可以被归入 IPC 工具箱中。
### 这个系列的总结
在这个系列中,我们通过三篇有关 IPC 的文章,用示例代码介绍了如下机制:
* 共享文件
* 共享内存(通过信号量)
* 管道(名和无名)
* 管道(命名的和无名的)
* 消息队列
* 套接字
* 信号
甚至在今天,在以线程为中心的语言,例如 Java、C# 和 Go 等变得越来越流行的情况下IPC 仍然很受欢迎,因为相比于使用多线程,通过多进程来实现并发有着一个明显的优势:默认情况下,每个进程都有它自己的地址空间,除非使用了基于共享内存的 IPC 机制(为了达到安全的并发,竞争条件在多线程和多进程的时候必须被加上锁),在多进程中可以排除掉基于内存的竞争条件。对于任何一个写过甚至是通过共享变量来通信的基本多线程程序的人来说TA 都会知道想要写一个清晰、高效、线程安全的代码是多么具有挑战性。使用单线程的多进程的确是很有吸引力的,这是一个切实可行的方式,使用它可以利用好今天多处理器的机器,而不需要面临基于内存的竞争条件的风险。
甚至在今天,在以线程为中心的语言,例如 Java、C# 和 Go 等变得越来越流行的情况下IPC 仍然很受欢迎,因为相比于使用多线程,通过多进程来实现并发有着一个明显的优势:默认情况下,每个进程都有它自己的地址空间,除非使用了基于共享内存的 IPC 机制(为了达到安全的并发,竞争条件在多线程和多进程的时候必须被加上锁),在多进程中可以排除掉基于内存的竞争条件。对于任何一个写过即使是基本的通过共享变量来通信的多线程程序的人来说,他都会知道想要写一个清晰、高效、线程安全的代码是多么具有挑战性。使用单线程的多进程的确是很有吸引力的,这是一个切实可行的方式,使用它可以利用好今天多处理器的机器,而不需要面临基于内存的竞争条件的风险。
当然,没有一个简单的答案能够回答上述 IPC 机制中的哪一个更好。在编程中每一种 IPC 机制都会涉及到一个取舍问题:是追求简洁,还是追求功能强大。以信号来举例,它是一个相对简单的 IPC 机制,但并不支持多个进程之间的丰富对话。假如确实需要这样的对话,另外的选择可能会更合适一些。带有锁的共享文件则相对直接,但是当要处理大量共享的数据流时,共享文件并不能很高效地工作。管道,甚至是套接字,有着更复杂的 API可能是更好的选择。让具体的问题去指导我们的选择吧。
@ -360,13 +362,13 @@ via: https://opensource.com/article/19/4/interprocess-communication-linux-networ
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Inter-process_communication
[2]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-2
[2]: https://linux.cn/article-10826-1.html
[3]: https://linux.cn/article-10845-1.html
[4]: http://condor.depaul.edu/mkalin

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10926-1.html)
[#]: subject: (4 open source apps for plant-based diets)
[#]: via: (https://opensource.com/article/19/4/apps-plant-based-diets)
[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)
4 款“吃草”的开源应用
======
> 这些应用使素食者、纯素食主义者和那些想吃得更健康的杂食者找到可以吃的食物。
![](https://img.linux.net.cn/data/attachment/album/201906/01/193302nompumppxnmnxirz.jpg)
减少对肉类、乳制品和加工食品的消费对地球来说更好,也对你的健康更有益。改变你的饮食习惯可能很困难,但是一些开源的 Android 应用可以让你吃的更清淡。无论你是参加[无肉星期一][2]、践行 Mark Bittman 的 [6:00 前的素食][3]指南,还是完全切换到<ruby>[植物全食饮食][4]<rt>whole-food, plant-based diet</rt></ruby>WFPB这些应用能帮助你找出要吃什么、发现素食和素食友好的餐馆并轻松地将你的饮食偏好传达给他人来助你更好地走这条路。所有这些应用都是开源的可从 [F-Droid 仓库][5]下载。
### Daily Dozen
![Daily Dozen app][6]
[Daily Dozen][7] 提供了医学博士、美国法医学会院士FACLM Michael Greger 推荐的项目清单作为健康饮食和生活方式的一部分。Greger 博士建议食用由多种食物组成的基于植物的全食饮食,并坚持日常锻炼。该应用可以让你跟踪你吃的每种食物的份数,你喝了多少份水(或其他获准的饮料,如茶),以及你是否每天锻炼。每类食物都提供食物分量和属于该类别的食物清单。例如,十字花科蔬菜类包括白菜、花椰菜、抱子甘蓝等许多其他建议。
### Food Restrictions
![Food Restrictions app][8]
[Food Restrictions][9] 是一个简单的应用,它可以帮助你将你的饮食限制传达给他人,即使这些人不会说你的语言。用户可以输入七种不同类别的食物限制:鸡肉、牛肉、猪肉、鱼、奶酪、牛奶和辣椒。每种类别都有“我不吃”和“我过敏”选项。“不吃”选项会显示带有红色 X 的图标。“过敏” 选项显示 “X” 和小骷髅图标。可以使用文本而不是图标显示相同的信息,但文本仅提供英语和葡萄牙语。还有一个选项可以显示一条文字信息,说明用户是素食主义者或纯素食主义者,它比选择选项更简洁、更准确地总结了这些饮食限制。纯素食主义者的文本清楚地提到不吃鸡蛋和蜂蜜,这在选择选项中是没有的。但是,就像选择选项方式的文字版本一样,这些句子仅提供英语和葡萄牙语。
### OpenFoodFacts
![Open Food Facts app][10]
购买杂货时避免买入不必要的成分可能令人沮丧,但 [OpenFoodFacts][11] 可以帮助简化流程。该应用可让你扫描产品上的条形码,以获得有关产品成分和是否健康的报告。即使产品符合纯素产品的标准,产品仍然可能非常不健康。拥有成分列表和营养成分可让你在购物时做出明智的选择。此应用的唯一缺点是数据是用户贡献的,因此并非每个产品都有数据,但如果你想回馈项目,你可以贡献新数据。
### OpenVegeMap
![OpenVegeMap app][12]
使用 [OpenVegeMap][13] 查找你附近的纯素食或素食主义餐厅。此应用可以通过手机的当前位置或者输入地址来搜索。餐厅分类为仅限纯素食者、适合纯素食者,仅限素食主义者,适合素食者,非素食和未知。该应用使用来自 [OpenStreetMap][14] 的数据和用户提供的有关餐馆的信息,因此请务必仔细检查以确保所提供的信息是最新且准确的。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/apps-plant-based-diets
作者:[Joshua Allen Holm][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78
[2]: https://www.meatlessmonday.com/
[3]: https://www.amazon.com/dp/0385344740/
[4]: https://nutritionstudies.org/whole-food-plant-based-diet-guide/
[5]: https://f-droid.org/
[6]: https://opensource.com/sites/default/files/uploads/daily_dozen.png (Daily Dozen app)
[7]: https://f-droid.org/en/packages/org.nutritionfacts.dailydozen/
[8]: https://opensource.com/sites/default/files/uploads/food_restrictions.png (Food Restrictions app)
[9]: https://f-droid.org/en/packages/br.com.frs.foodrestrictions/
[10]: https://opensource.com/sites/default/files/uploads/openfoodfacts.png (Open Food Facts app)
[11]: https://f-droid.org/en/packages/openfoodfacts.github.scrachx.openfood/
[12]: https://opensource.com/sites/default/files/uploads/openvegmap.png (OpenVegeMap app)
[13]: https://f-droid.org/en/packages/pro.rudloff.openvegemap/
[14]: https://www.openstreetmap.org/

View File

@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: (cycoe)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10929-1.html)
[#]: subject: (Monitoring CPU and GPU Temperatures on Linux)
[#]: via: (https://itsfoss.com/monitor-cpu-gpu-temp-linux/)
[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
在 Linux 上监控 CPU 和 GPU 温度
======
> 本篇文章讨论了在 Linux 命令行中监控 CPU 和 GPU 温度的两种简单方式。
由于 [Steam][1](包括 [Steam Play][2],即 Proton和一些其他的发展GNU/Linux 正在成为越来越多计算机用户的日常游戏平台的选择。也有相当一部分用户会在 GNU/Linux 上处理像[视频编辑][3]或图形设计这样的资源消耗型计算任务Kdenlive 和 [Blender][4] 是这类应用程序中很好的例子)。
不管你是否是这些用户中的一员或其他用户,你也一定想知道你的电脑 CPU 和 GPU 能有多热(如果你想要超频的话更会如此)。如果是这样,那么继续读下去。我们会介绍两个非常简单的命令来监控 CPU 和 GPU 温度。
我的装置包括一台 [Slimbook Kymera][5] 和两台显示器(一台 TV 和一台 PC 监视器),使得我可以用一台来玩游戏,另一台来留意监控温度。另外,因为我使用 [Zorin OS][6],我会将关注点放在 Ubuntu 和 Ubuntu 的衍生发行版上。
为了监控 CPU 和 GPU 的行为,我们将利用实用的 `watch` 命令在每几秒钟之后动态地得到读数。
![][7]
### 在 Linux 中监控 CPU 温度
对于 CPU 温度,我们将结合使用 `watch``sensors` 命令。一篇关于[此工具的图形用户界面版本][8]的有趣文章已经在 It's FOSS 中介绍过了。然而,我们将在此处使用命令行版本:
```
watch -n 2 sensors
```
`watch` 保证了读数会在每 2 秒钟更新一次(当然,这个周期值能够根据你的需要去更改):
```
Every 2,0s: sensors
iwlwifi-virtual-0
Adapter: Virtual device
temp1: +39.0°C
acpitz-virtual-0
Adapter: Virtual device
temp1: +27.8°C (crit = +119.0°C)
temp2: +29.8°C (crit = +119.0°C)
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +37.0°C (high = +82.0°C, crit = +100.0°C)
Core 0: +35.0°C (high = +82.0°C, crit = +100.0°C)
Core 1: +35.0°C (high = +82.0°C, crit = +100.0°C)
Core 2: +33.0°C (high = +82.0°C, crit = +100.0°C)
Core 3: +36.0°C (high = +82.0°C, crit = +100.0°C)
Core 4: +37.0°C (high = +82.0°C, crit = +100.0°C)
Core 5: +35.0°C (high = +82.0°C, crit = +100.0°C)
```
除此之外,我们还能得到如下信息:
* 我们有 5 个核心正在被使用(并且当前的最高温度为 37.0℃)。
* 温度超过 82.0℃ 会被认为是过热。
* 超过 100.0℃ 的温度会被认为是超过临界值。
根据以上的温度值我们可以得出结论,我的电脑目前的工作负载非常小。
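顺带一提,如果只想盯着各个核心的温度,也可以把 `sensors` 的输出过滤一下(这只是一个可选的小技巧,并非原文内容):
```
# 每 2 秒刷新一次,只保留以 Core 开头的温度行
watch -n 2 "sensors | grep Core"
```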
### 在 Linux 中监控 GPU 温度
现在让我们来看看显卡。我从来没使用过 AMD 的显卡,因此我会将重点放在 Nvidia 的显卡上。我们需要做的第一件事是从 [Ubuntu 的附加驱动][10] 中下载合适的最新驱动。
在 UbuntuZorin 或 Linux Mint 也是相同的)中,进入“软件和更新 > 附加驱动”选项,选择最新的可用驱动。另外,你可以添加或启用显示卡的官方 ppa通过命令行或通过“软件和更新 > 其他软件”来实现)。安装驱动程序后,你将可以使用 “Nvidia X Server” 的 GUI 程序以及命令行工具 `nvidia-smi`Nvidia 系统管理界面)。因此我们将使用 `watch``nvidia-smi`
```
watch -n 2 nvidia-smi
```
与 CPU 的情况一样,我们会在每两秒得到一次更新的读数:
```
Every 2,0s: nvidia-smi
Fri Apr 19 20:45:30 2019
+-----------------------------------------------------------------------------+
| Nvidia-SMI 418.56 Driver Version: 418.56 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 106... Off | 00000000:01:00.0 On | N/A |
| 0% 54C P8 10W / 120W | 433MiB / 6077MiB | 4% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1557 G /usr/lib/xorg/Xorg 190MiB |
| 0 1820 G /usr/bin/gnome-shell 174MiB |
| 0 7820 G ...equest-channel-token=303407235874180773 65MiB |
+-----------------------------------------------------------------------------+
```
从这个表格中我们得到了关于显示卡的如下信息:
* 它正在使用版本号为 418.56 的开源驱动。
* 显示卡的当前温度为 54.0℃,并且风扇的使用量为 0%。
* 电量的消耗非常低:仅仅 10W。
* 总量为 6GB 的 vram视频随机存取存储器只使用了 433MB。
* vram 正在被 3 个进程使用,它们的 ID 分别为 1557、1820 和 7820。
大部分这些事实或数值都清晰地表明,我们没有在玩任何消耗系统资源的游戏或处理大负载的任务。当我们开始玩游戏、处理视频或其他类似任务时,这些值就会开始上升。
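另外,如果只关心个别数值(例如温度和风扇转速),`nvidia-smi` 也支持查询指定字段;下面的字段名取自我对该工具的理解,具体请以 `nvidia-smi --help-query-gpu` 的输出为准:
```
# 每 2 秒只输出 GPU 温度(摄氏度)和风扇占比
watch -n 2 "nvidia-smi --query-gpu=temperature.gpu,fan.speed --format=csv,noheader"
```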
#### 结论
即便我们有 GUI 工具,但我还是发现这两个命令对于实时监控硬件非常的顺手。
你将如何去使用它们呢?你可以通过阅读它们的 man 手册来学习更多关于这些工具的使用技巧。
你有其他偏爱的工具吗?在评论里分享给我们吧 ;)。
玩得开心!
--------------------------------------------------------------------------------
via: https://itsfoss.com/monitor-cpu-gpu-temp-linux/
作者:[Alejandro Egea-Abellán][a]
选题:[lujun9972][b]
译者:[cycoe](https://github.com/cycoe)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-steam-ubuntu-linux/
[2]: https://itsfoss.com/steam-play-proton/
[3]: https://itsfoss.com/best-video-editing-software-linux/
[4]: https://www.blender.org/
[5]: https://slimbook.es/
[6]: https://zorinos.com/
[7]: https://itsfoss.com/wp-content/uploads/2019/04/monitor-cpu-gpu-temperature-linux-800x450.png
[8]: https://itsfoss.com/check-laptop-cpu-temperature-ubuntu/
[9]: https://itsfoss.com/best-command-line-games-linux/
[10]: https://itsfoss.com/install-additional-drivers-ubuntu/
[11]: https://itsfoss.com/review-googler-linux/
[12]: https://itsfoss.com/wp-content/uploads/2019/04/EGEA-ABELLAN-Alejandro.jpg

View File

@ -1,34 +1,34 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10919-1.html)
[#]: subject: (Installing Budgie Desktop on Ubuntu [Quick Guide])
[#]: via: (https://itsfoss.com/install-budgie-ubuntu/)
[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
在 Ubuntu 上安装 Budgie 桌面 [快速指南]
在 Ubuntu 上安装 Budgie 桌面
======
_ **简介:在这一步步的教程中学习如何在 Ubuntu 上安装 Budgie 桌面。** _
> 在这个逐步的教程中学习如何在 Ubuntu 上安装 Budgie 桌面。
在所有[各种 Ubuntu 版本][1]中,[Ubuntu Budgie][2] 是最被低估的版本。它看起来很优雅,而且需要的资源也不多。
在所有[各种 Ubuntu 版本][1]中,[Ubuntu Budgie][2] 是最被低估的版本。它外观优雅,而且需要的资源也不多。
阅读这篇 [Ubuntu Budgie 的评论][3]或观看此视频,了解 Ubuntu Budgie 18.04 的外观
阅读这篇 《[Ubuntu Budgie 点评][3]》或观看下面的视频,了解 Ubuntu Budgie 18.04 的外观如何
- [Ubuntu 18.04 Budgie Desktop Tour [It's Elegant]](https://youtu.be/KXgreWOK33k)
如果你喜欢 [Budgie 桌面][5]但你正在使用其他版本的 Ubuntu例如默认的 GNOME 桌面的 Ubuntu,我有个好消息。你可以在当前的 Ubuntu 系统上安装 Budgie 并切换桌面环境。
如果你喜欢 [Budgie 桌面][5]但你正在使用其他版本的 Ubuntu例如默认 Ubuntu 带有 GNOME 桌面,我有个好消息。你可以在当前的 Ubuntu 系统上安装 Budgie 并切换桌面环境。
在这篇文章中,我将告诉你到底该怎么做。但首先,对那些不了解 Budgie 的人进行一点介绍。
Budgie 桌面环境主要由 [Solus Linux 团队开发][6]。它的设计注重优雅和现代使用。Budgie 适用于所有主流 Linux 发行版,供用户尝试体验这种新的桌面环境。Budgie 现在非常成熟,并提供了出色的桌面体验。
Budgie 桌面环境主要由 [Solus Linux 团队开发][6]。它的设计注重优雅和现代使用。Budgie 适用于所有主流 Linux 发行版,可以让用户在其上尝试体验这种新的桌面环境。Budgie 现在非常成熟,并提供了出色的桌面体验。
警告
在同一系统上安装多个桌面可能会导致冲突,你可能会遇到一些问题,如面板中缺少图标或同一程序的多个图标。
你也许不会遇到任何问题。是否要尝试不同桌面由你决定。
> 警告
>
> 在同一系统上安装多个桌面可能会导致冲突,你可能会遇到一些问题,如面板中缺少图标或同一程序的多个图标。
>
> 你也许不会遇到任何问题。是否要尝试不同桌面由你决定。
### 在 Ubuntu 上安装 Budgie
@ -55,23 +55,19 @@ sudo apt install ubuntu-budgie-desktop
![Budgie login screen][9]
你可以单击登录名旁边的 Budgie 图标获取登录选项。在那里,你可以在已安装的桌面环境 DE 之间进行选择。就我而言,我看到了 Budgie 和默认的 UbuntuGNOME桌面。
你可以单击登录名旁边的 Budgie 图标获取登录选项。在那里你可以在已安装的桌面环境DE之间进行选择。就我而言我看到了 Budgie 和默认的 UbuntuGNOME桌面。
![Select your DE][10]
因此,无论何时你想登录 GNOME都可以使用此菜单执行此操作。
### 如何删除 Budgie
如果你不喜欢 Budgie 或只是想回到常规的以前的 Ubuntu你可以如上节所述切换回常规桌面。
但是,如果你真的想要删除 Budgie 及其组件,你可以按照以下命令回到之前的状态。
_ **在使用这些命令之前先切换到其他桌面环境:** _
**在使用这些命令之前先切换到其他桌面环境:**
```
sudo apt remove ubuntu-budgie-desktop ubuntu-budgie* lightdm
@ -83,7 +79,7 @@ sudo apt install --reinstall gdm3
现在,你将回到 GNOME 或其他你有的桌面。
**你对Budgie有什么看法**
### 你对 Budgie 有什么看法?
Budgie 是[最佳 Linux 桌面环境][12]之一。希望这个简短的指南帮助你在 Ubuntu 上安装了很棒的 Budgie 桌面。
@ -96,7 +92,7 @@ via: https://itsfoss.com/install-budgie-ubuntu/
作者:[Atharva Lele][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,5 +1,6 @@
ddgr - 一个从终端搜索 DuckDuckGo 的命令行工具
ddgr一个从终端搜索 DuckDuckGo 的命令行工具
======
在 Linux 中Bash 技巧非常棒,它使 Linux 中的一切成为可能。
对于开发人员或系统管理员来说,它真的很管用,因为他们大部分时间都在使用终端。你知道他们为什么喜欢这种技巧吗?
@ -8,184 +9,184 @@ ddgr - 一个从终端搜索 DuckDuckGo 的命令行工具
### 什么是 ddgr
[ddgr][1] 是一个命令行实用程序,用于从终端搜索 DuckDuckGo。如果设置了 BROWSER 环境变量ddgr 可以在几个基于文本的浏览器中开箱即用。
[ddgr][1] 是一个命令行实用程序,用于从终端搜索 DuckDuckGo。如果设置了 `BROWSER` 环境变量ddgr 可以在几个基于文本的浏览器中开箱即用。
确保你的系统安装了任何基于文本的浏览器。你可能知道 [googler][2],它允许用户从 Linux 命令行进行 Google 搜索。
确保你的系统安装了任何一个基于文本的浏览器。你可能知道 [googler][2],它允许用户从 Linux 命令行进行 Google 搜索。
它在命令行用户中非常受欢迎,他们期望对隐私敏感的 DuckDuckGo 也有类似的实用程序,这就是 ddgr 出现的原因。
它在命令行用户中非常受欢迎,他们期望对隐私敏感的 DuckDuckGo 也有类似的实用程序,这就是 `ddgr` 出现的原因。
与 Web 界面不同,你可以指定每页要查看的搜索结果数。
**建议阅读:**
**(#)** [Googler 从 Linux 命令行搜索 Google][2]
**(#)** [Buku Linux 中一个强大的命令行书签管理器][3]
**(#)** [SoCLI 从终端搜索和浏览堆栈溢出的简单方法][4]
**(#)** [RTVReddit 终端查看器)- 一个简单的 Reddit 终端查看器][5]
- [Googler 从 Linux 命令行搜索 Google][2]
- [Buku Linux 中一个强大的命令行书签管理器][3]
- [SoCLI 从终端搜索和浏览 StackOverflow 的简单方法][4]
- [RTVReddit 终端查看器)- 一个简单的 Reddit 终端查看器][5]
### 什么是 DuckDuckGo
DDG 即 DuckDuckGo。DuckDuckGoDDG是一个真正保护用户搜索和隐私的互联网搜索引擎。
它们没有过滤用户的个性化搜索结果,对于给定的搜索词,它会向所有用户显示相同的搜索结果。
DDG 即 DuckDuckGo。DuckDuckGoDDG是一个真正保护用户搜索和隐私的互联网搜索引擎。它没有过滤用户的个性化搜索结果对于给定的搜索词它会向所有用户显示相同的搜索结果。
大多数用户更喜欢谷歌搜索引擎,但是如果你真的担心隐私,那么你可以放心地使用 DuckDuckGo。
### ddgr 特性
* 快速且干净(没有广告,多余的 URL 或杂物),自定义颜色
* 快速且干净(没有广告、多余的 URL 或杂物参数),自定义颜色
* 旨在以最小的空间提供最高的可读性
* 指定每页显示的搜索结果数
* 从浏览器中打开的 omniprompt URL 导航结果页面
* Bash、Zsh 和 Fish 的搜索和配置脚本
* DuckDuckGo Bang 支持(自动完
* 直接在浏览器中打开第一个结果(就像我感觉 Ducky
* 可以在 omniprompt 中导航结果,在浏览器中打开 URL
* 用于 Bash、Zsh 和 Fish 的搜索和选项补完脚本
* 支持 DuckDuckGo Bang带有自动完)
* 直接在浏览器中打开第一个结果(如同 “Im Feeling Ducky”
* 不间断搜索:无需退出即可在 omniprompt 中触发新搜索
* 关键字支持例如filetype:mime、site:somesite.com
* 按时间、指定区域禁用安全搜索
* HTTPS 代理支持,无跟踪,可选择禁用用户代理
* 按时间、指定区域搜索,禁用安全搜索
* 支持 HTTPS 代理,支持 Do Not Track可选择禁用用户代理字符串
* 支持自定义 URL 处理程序脚本或命令行实用程序
* 全面的文档man 页面有方便的使用示例
* 最小的依赖关系
### 需要条件
ddgr 需要 Python 3.4 或更高版本。因此,确保你的系统应具有 Python 3.4 或更高版本。
`ddgr` 需要 Python 3.4 或更高版本。因此,确保你的系统应具有 Python 3.4 或更高版本。
```
$ python3 --version
Python 3.6.3
```
### 如何在 Linux 中安装 ddgr
我们可以根据发行版使用以下命令轻松安装 ddgr。
我们可以根据发行版使用以下命令轻松安装 `ddgr`
对于 Fedora ,使用 [DNF 命令][6]来安装 `ddgr`
对于 **`Fedora`** ,使用 [DNF 命令][6]来安装 ddgr。
```
# dnf install ddgr
```
或者我们可以使用 [SNAP 命令][7]来安装 ddgr。
或者我们可以使用 [SNAP 命令][7]来安装 `ddgr`
```
# snap install ddgr
```
对于 **`LinuxMint/Ubuntu`**,使用 [APT-GET 命令][8] 或 [APT 命令][9]来安装 ddgr。
对于 LinuxMint/Ubuntu使用 [APT-GET 命令][8] 或 [APT 命令][9]来安装 `ddgr`
```
$ sudo add-apt-repository ppa:twodopeshaggy/jarun
$ sudo apt-get update
$ sudo apt-get install ddgr
```
对于基于 **`Arch Linux`** 的系统,使用 [Yaourt 命令][10]或 [Packer 命令][11]从 AUR 仓库安装 ddgr。
对于基于 Arch Linux 的系统,使用 [Yaourt 命令][10]或 [Packer 命令][11]从 AUR 仓库安装 `ddgr`
```
$ yaourt -S ddgr
or
$ packer -S ddgr
```
对于 **`Debian`**,使用 [DPKG 命令][12] 安装 ddgr。
对于 Debian使用 [DPKG 命令][12] 安装 `ddgr`
```
# wget https://github.com/jarun/ddgr/releases/download/v1.2/ddgr_1.2-1_debian9.amd64.deb
# dpkg -i ddgr_1.2-1_debian9.amd64.deb
```
对于 **`CentOS 7`**,使用 [YUM 命令][13]来安装 ddgr。
对于 CentOS 7使用 [YUM 命令][13]来安装 `ddgr`
```
# yum install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.el7.3.centos.x86_64.rpm
```
对于 **`opensuse`**,使用 [zypper 命令][14]来安装 ddgr。
对于 opensuse使用 [zypper 命令][14]来安装 `ddgr`
```
# zypper install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.opensuse42.3.x86_64.rpm
```
### 如何启动 ddgr
在终端上输入 `ddgr` 命令,不带任何选项来进行 DuckDuckGo 搜索。你将获得类似于下面的输出。
```
$ ddgr
```
![][16]
### 如何使用 ddgr 进行搜索
我们可以通过两种方式启动搜索。从omniprompt 或者直接从终端开始。你可以搜索任何你想要的短语。
我们可以通过两种方式启动搜索。从 omniprompt 或者直接从终端开始。你可以搜索任何你想要的短语。
直接从终端:
```
$ ddgr 2daygeek
```
![][17]
`omniprompt`
从 omniprompt
![][18]
### Omniprompt 快捷方式
输入 `?` 以获得 `omniprompt`,它将显示关键字列表和进一步使用 ddgr 的快捷方式。
输入 `?` 以获得 omniprompt它将显示关键字列表和进一步使用 `ddgr` 的快捷方式。
![][19]
### 如何移动下一页、上一页和第一页
它允许用户移动下一页、上一页或第一页。
* `n:` 移动到下一组搜索结果
* `p:` 移动到上一组搜索结果
* `f:` 跳转到第一页
* `n` 移动到下一组搜索结果
* `p` 移动到上一组搜索结果
* `f` 跳转到第一页
![][20]
### 如何启动新搜索
“**d**” 选项允许用户从 omniprompt 发起新的搜索。例如,我搜索了 `2daygeek 网站`,现在我将搜索 **Magesh Maruthamuthu** 这个新短语。
`d` 选项允许用户从 omniprompt 发起新的搜索。例如,我搜索了 “2daygeek website”现在我将搜索 “Magesh Maruthamuthu” 这个新短语。
从 omniprompt
`omniprompt`.
```
ddgr (? for help) d magesh maruthmuthu
```
![][21]
### 在搜索结果中显示完整的 URL
默认情况下,它仅显示文章标题,在搜索中添加 **x** 选项以在搜索结果中显示完整的文章网址。
默认情况下,它仅显示文章标题,在搜索中添加 `x` 选项以在搜索结果中显示完整的文章网址。
```
$ ddgr -n 5 -x 2daygeek
```
![][22]
### 限制搜索结果
默认情况下,搜索结果每页显示 10 个结果。如果你想为方便起见限制页面结果,可以使用 ddgr 带有 `--num`` -n` 参数。
默认情况下,搜索结果每页显示 10 个结果。如果你想为方便起见限制页面结果,可以使用 `ddgr` 带有 `--num`` -n` 参数。
```
$ ddgr -n 5 2daygeek
```
![][23]
### 网站特定搜索
要搜索特定网站的特定页面,使用以下格式。这将从网站获取给定关键字的结果。例如,我们在 2daygeek 网站搜索 **Package Manager**,查看结果。
要搜索特定网站的特定页面,使用以下格式。这将从网站获取给定关键字的结果。例如,我们在 2daygeek 网站下搜索 “Package Manager”查看结果。
```
$ ddgr -n 5 --site 2daygeek "package manager"
```
![][24]
@ -195,8 +196,8 @@ $ ddgr -n 5 --site 2daygeek "package manager"
via: https://www.2daygeek.com/ddgr-duckduckgo-search-from-the-command-line-in-linux/
作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,86 @@
Adobe Lightroom 的三个开源替代品
=======
> 摄影师们:在没有 Lightroom 套件的情况下,可以看看这些 RAW 图像处理器。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camera-photography-film.jpg?itok=oe2ixyu6)
如今智能手机的摄像功能已经完备到多数人认为可以代替传统摄影了。虽然这在傻瓜相机的市场中是个事实,但是在许多摄影爱好者和专业摄影师看来,一个高端单反相机所能带来的照片景深、清晰度以及真实质感是口袋中的智能手机无法与之相比的。
所有的这些功能在便利性上要付出一些很小的代价;就像传统的胶片相机中的反色负片,单反相机拍摄得到的 RAW 格式文件必须预先处理才能印刷或编辑;因此对于单反相机,照片的后期处理是无可替代的,并且首选应用就是 Adobe Lightroom。但是由于 Adobe Lightroom 的昂贵价格、基于订阅的定价模式以及专有许可证,使更多人开始关注其开源替代品。
Lightroom 有两大主要功能:处理 RAW 格式的图片文件以及数字资产管理系统DAM —— 通过标签、评星以及其他元数据信息来简单清晰地整理照片。
在这篇文章中我们将介绍三个开源的图片处理软件Darktable、LightZone 以及 RawTherapee。所有的软件都有 DAM 系统,但没有任何一个具有 Lightroom 基于机器学习的图像分类和标签功能。如果你想要知道更多关于开源的 DAM 系统的软件,可以看 Terry Hacock 的文章:“[开源项目的 DAM 管理][2]”,他分享了他在自己的 [Lunatics!][3] 电影项目研究过的开源多媒体软件。
### Darktable
![Darktable][4]
类似其他两个软件Darktable 可以处理 RAW 格式的图像并将它们转换成可用的文件格式 —— JPEG、PNG、TIFF、PPM、PFM 和 EXR它同时支持 Google 和 Facebook 的在线相册,上传至 Flikr通过邮件附件发送以及创建在线相册。
它有 61 个图像处理模块,可以调整图像的对比度、色调、明暗、色彩、噪点;添加水印;切割以及旋转;等等。如同另外两个软件一样,不论你做出多少次修改,这些修改都是“无损的” —— 你的初始 RAW 图像文件始终会被保存。
Darktable 可以从 400 多种相机型号中直接导入照片,以及有 JPEG、CR2、DNG、OpenEXR 和 PFM 等格式的支持。图像在一个数据库中显示,因此你可以轻易地过滤并查询这些元数据,包括了文字标签、评星以及颜色标签。软件同时支持 21 种语言,支持 Linux、MacOS、BSD、Solaris 11/GNOME 以及 WindowsWindows 版本是最新发布的Darktable 声明它比起其他版本可能还有一些不完备之处,有一些未实现的功能)。
Darktable 在开源许可证 [GPLv3][7] 下发布,你可以了解更多它的 [特性][8],查阅它的 [用户手册][9],或者直接去 Github 上看[源代码][10] 。
### LightZone
![LightZone's tool stack][11]
[LightZone][12] 和其他两个软件类似同样是无损的 RAW 格式图像处理工具:它是跨平台的,有 Windows、MacOS 和 Linux 版本,除 RAW 格式之外,它还支持 JPG 和 TIFF 格式的图像处理。接下来说说 LightZone 其他独特特性。
这个软件最初在 2005 年时,是以专有许可证发布的图像处理软件,后来在 BSD 许可证下开源。此外,在你下载这个软件之前,你必须注册一个免费账号,以便 LightZone 的开发团队可以跟踪软件的下载数量以及建立相关社区。(批准很快,而且是自动的,因此这不是一个很大的使用障碍。)
除此之外的一个特性是这个软件的图像处理通常是通过很多可组合的工具实现的,而不是叠加滤镜(就像大多数图像处理软件),这些工具组可以被重新编排以及移除,以及被保存并且复制用到另一些图像上。如果想要编辑图片的部分区域,你还可以通过矢量工具或者根据色彩和亮度来选择像素。
想要了解更多,见 LightZone 的[论坛][13] 或者查看 Github上的 [源代码][14]。
### RawTherapee
![RawTherapee][15]
[RawTherapee][16] 是另一个值得关注的开源([GPL][17])的 RAW 图像处理器。就像 Darktable 和 LightZone它是跨平台的支持 Windows、MacOS 和 Linux一切修改都在无损条件下进行因此不论你叠加多少滤镜做出多少改变你都可以回到你最初的 RAW 文件。
RawTherapee 采用的是一个面板式的界面,包括一个历史记录面板来跟踪你做出的修改,以方便随时回到先前的图像;一个快照面板可以让你同时处理一张照片的不同版本;一个可滚动的工具面板可以方便准确地选择工具。这些工具包括了一系列的调整曝光、色彩、细节、图像变换以及去马赛克功能。
这个软件可以从多数相机直接导入 RAW 文件,并且支持超过 25 种语言,得到了广泛使用。批量处理以及 [SSE][18] 优化这类功能也进一步提高了图像处理的速度以及对 CPU 性能的利用。
RawTherapee 还提供了很多其他 [功能][19];可以查看它的 [官方文档][20] 以及 [源代码][21] 了解更多细节。
你是否在摄影中使用另外的开源 RAW 图像处理工具?有任何建议和推荐都可以在评论中分享。
------
via: https://opensource.com/alternatives/adobe-lightroom
作者:[Opensource.com][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[scoutydren](https://github.com/scoutydren)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com
[1]: https://en.wikipedia.org/wiki/Raw_image_format
[2]: https://opensource.com/article/18/3/movie-open-source-software
[3]: http://lunatics.tv/
[4]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_darkroom1.jpg?itok=0fjk37tC "Darktable"
[5]: http://www.darktable.org/
[6]: https://www.darktable.org/about/faq/#faq-windows
[7]: https://github.com/darktable-org/darktable/blob/master/LICENSE
[8]: https://www.darktable.org/about/features/
[9]: https://www.darktable.org/resources/
[10]: https://github.com/darktable-org/darktable
[11]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_lightzone1tookstack.jpg?itok=1e3s85CZ
[12]: http://www.lightzoneproject.org/
[13]: http://www.lightzoneproject.org/Forum
[14]: https://github.com/ktgw0316/LightZone
[15]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_rawtherapee.jpg?itok=meiuLxPw "RawTherapee"
[16]: http://rawtherapee.com/
[17]: https://github.com/Beep6581/RawTherapee/blob/dev/LICENSE.txt
[18]: https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions
[19]: http://rawpedia.rawtherapee.com/Features
[20]: http://rawpedia.rawtherapee.com/Main_Page
[21]: https://github.com/Beep6581/RawTherapee

View File

@ -0,0 +1,172 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10918-1.html)
[#]: subject: (Aliases: To Protect and Serve)
[#]: via: (https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
命令别名:保护和服务
======
> Linux shell 允许你将命令彼此链接在一起,一次触发执行复杂的操作,并且可以对此创建别名作为快捷方式。
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/prairie-path_1920.jpg?itok=wRARsM7p)
让我们继续我们的别名系列。到目前为止,你可能已经阅读了我们的[关于别名的第一篇文章][1],并且应该非常清楚它们为什么是帮你省去很多麻烦的最简单方法。例如,你已经看到它们帮助我们减少了输入,下面让我们看看别名派上用场的其他几个案例。
### 别名即快捷方式
Linux shell 最美妙的事情之一是可以使用数以万计的选项和把命令连接在一起执行真正复杂的操作。好吧,也许这种美丽是在旁观者的眼中的,但是我们觉得这个功能很实用。
不利的一面是,你经常需要记得难以记忆或难以打字出来的命令组合。比如说硬盘上的空间非常宝贵,而你想要做一些清洁工作。你的第一步可能是寻找隐藏在你的家目录里的东西。你可以用来判断的一个标准是查找不再使用的内容。`ls` 可以帮助你:
```
ls -lct
```
上面的命令显示了每个文件和目录的详细信息(`-l`),并显示了每一项上次访问的时间(`-c`),然后它按从最近访问到最少访问的顺序排序这个列表(`-t`)。
这难以记住吗?你可能不会每天都使用 `-c``-t` 选项,所以也许是吧。无论如何,定义一个别名,如:
```
alias lt='ls -lct'
```
会更容易一些。
然后,你也可能希望列表首先显示最旧的文件:
```
alias lo='lt -F | tac'
```
![aliases][3]
*图 1使用 lt 和 lo 别名。*
这里有一些有趣的事情。首先,我们使用别名(`lt`)来创建另一个别名 —— 这是完全可以的。其次,我们将一个新参数传递给 `lt`(后者又通过 `lt` 别名的定义传递给了 `ls`)。
`-F` 选项会将特殊符号附加到项目的名称后,以便更好地区分常规文件(没有符号)和可执行文件(附加了 `*`)、目录文件(以 `/` 结尾),以及所有链接文件、符号链接文件(以 `@` 符号结尾)等等。`-F` 选项是当你回归到单色终端的日子里,没有其他方法可以轻松看到列表项之间的差异时用的。在这里使用它是因为当你将输出从 `lt` 传递到 `tac` 时,你会丢失 `ls` 的颜色。
第三件我们需要注意的事情是我们使用了管道。管道用于你将一个命令的输出传递给另外一个命令时。第二个命令可以使用这些输出作为它的输入。在包括 Bash 在内的许多 shell 里,你可以使用管道符(`|` 来做传递。
在这里,你将来自 `lt -F` 的输出导给 `tac`。`tac` 这个命令有点玩笑的意思,你或许听说过 `cat` 命令它名义上用于将文件彼此连接con`cat`),而在实践中,它被用于将一个文件的内容打印到终端。`tac` 做的事情一样,但是它是以逆序将接收到的内容输出出来。明白了吗?`cat` 和 `tac`,技术人有时候也挺有趣的。
`cat``tac` 都能输出通过管道传递过来的内容,在这里,也就是一个按时间顺序排序的文件列表。
那么,在有些离题之后,最终我们得到的就是这个列表将当前目录中的文件和目录以新鲜度的逆序列出(即老的在前)。
最后你需要注意的是,当在当前目录或任何目录运行 `lt` 时:
```
# 这可以工作:
lt
# 这也可以:
lt /some/other/directory
```
……而 `lo` 只能在当前目录奏效:
```
# 这可工作:
lo
# 而这不行:
lo /some/other/directory
```
这是因为 Bash 会展开别名的组分。当你键入:
```
lt /some/other/directory
```
Bash 实际上运行的是:
```
ls -lct /some/other/directory
```
这是一个有效的 Bash 命令。
而当你键入:
```
lo /some/other/directory
```
Bash 试图运行:
```
ls -lct -F | tac /some/other/directory
```
这不是一个有效的命令,主要是因为 `/some/other/directory` 是个目录,而 `cat``tac` 不能用于目录。
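顺便一提,如果你希望 `lo` 也能像 `lt` 一样接受目录参数,可以改用 shell 函数来实现(这是一种替代做法,并非原文内容):
```
# 函数会把参数放到 ls 的正确位置上,再把结果交给 tac
lo() { ls -lct -F "$@" | tac; }
# 现在这样也能工作了:
lo /some/other/directory
```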
### 更多的别名快捷方式
* `alias lll='ls -R'` 会打印出目录的内容,并深入到子目录里面打印子目录的内容,以及子目录的子目录,等等。这是一个查看一个目录下所有内容的方式。
* `alias mkdir='mkdir -pv'` 可以让你一次性创建多级嵌套的目录。按照 `mkdir` 的基本形式,要创建一个包含子目录的目录,你必须这样:
```
mkdir newdir
mkdir newdir/subdir
```
或这样:
```
mkdir -p newdir/subdir
```
而用这个别名你将只需要这样就行:
```
mkdir newdir/subdir
```
你的新 `mkdir` 也会告诉你创建子目录时都做了什么。
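下面是一个简单的演示(目录名是随意取的),别名中的 `-v` 会把创建的每一级目录都打印出来:
```
# 别名生效后,一条命令即可创建多级目录
mkdir newdir/subdir
# 输出类似于:
# mkdir: created directory 'newdir'
# mkdir: created directory 'newdir/subdir'
```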
### 别名也是一种保护
别名的另一个好处是它可以作为防止你意外地删除或覆写已有的文件的保护措施。你可能听说过这个 Linux 新用户的传言,当他们以 root 身份运行:
```
rm -rf /
```
整个系统就爆了。而决定输入如下命令的用户:
```
rm -rf /some/directory/ *
```
就很好地干掉了他们的家目录的全部内容。这里不小心键入的目录和 `*` 之间的那个空格有时候很容易就会被忽视掉。
这两种情况我们都可以通过 `alias rm='rm -i'` 别名来避免。`-i` 选项会使 `rm` 询问用户是否真的要做这个操作,在你对你的文件系统做出不可弥补的损失之前给你第二次机会。
对于 `cp` 也是一样,它能够覆盖一个文件而不会给你任何提示。创建一个类似 `alias cp='cp -i'` 来保持安全吧。
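要让这两个保护性别名长期生效,可以把它们写进启动文件(下面以 `~/.bashrc` 为例,文件名与提示内容只是示意):
```
# 追加到 ~/.bashrc 中
alias rm='rm -i'
alias cp='cp -i'
# 之后删除或覆盖文件前都会先询问,例如:
# rm: remove regular file 'notes.txt'?
```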
### 下一次
我们越来越深入到了脚本领域,下一次,我们将沿着这个方向,看看如何在命令行组合命令以给你真正的乐趣,并可靠地解决系统管理员每天面临的问题。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10377-1.html
[2]: https://www.linux.com/files/images/fig01png-0
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig01_0.png?itok=crqTm_va (aliases)
[4]: https://www.linux.com/licenses/category/used-permission

View File

@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10914-1.html)
[#]: subject: (Blockchain 2.0: Blockchain In Real Estate [Part 4])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/)
[#]: author: (ostechnix https://www.ostechnix.com/author/editor/)
区块链 2.0:房地产区块链(四)
======
![](https://www.ostechnix.com/wp-content/uploads/2019/03/Blockchain-In-Real-Estate-720x340.png)
### 区块链 2.0:“更”智能的房地产
在本系列的[上一篇文章][1]中我们探讨了区块链的特征,这些区块链将使机构能够将**传统银行**和**融资系统**转换和交织在一起。这部分将探讨**房地产区块链**。房地产业正在走向革命。它是人类已知的交易最活跃、最重要的资产类别之一。然而,由于充满了监管障碍和欺诈、欺骗的无数可能性,它也是最难参与交易的之一。利用适当的共识算法的区块链的分布式分类账本功能被吹捧为这个行业的前进方向,而这个行业传统上被认为其面对变革是保守的。
就其无数的业务而言房地产一直是一个非常保守的行业。这似乎也是理所当然的。2008 年金融危机或 20 世纪上半叶的大萧条等重大经济危机成功摧毁了该行业及其参与者。然而,与大多数具有经济价值的产品一样,房地产行业具有弹性,而这种弹性则源于其保守性。
全球房地产市场由价值 228 万亿 [^1] 美元的资产类别组成,出入不大。其他投资资产,如股票、债券和股票合计价值仅为 170 万亿美元。显然,在这样一个行业中实施的交易在很大程度上都是精心策划和执行的。很多时候,房地产也因许多欺诈事件而臭名昭著,并且随之而来的是毁灭性的损失。由于其运营非常保守,该行业也难以驾驭。它受到了法律的严格监管,创造了一个交织在一起的细微差别网络,这对于普通人来说太难以完全理解,使得大多数人无法进入和参与。如果你曾参与过这样的交易,那么你就会知道纸质文件的重要性和长期性。
从一个微不足道的开始,虽然是一个重要的例子,以显示当前的记录管理实践在房地产行业有多糟糕,考虑一下[产权保险业务][2] [^3]。产权保险用于对冲土地所有权和所有权记录不可接受且从而无法执行的可能性。诸如此类的保险产品也称为赔偿保险。在许多情况下,法律要求财产拥有产权保险,特别是在处理多年来多次易手的财产时。抵押贷款公司在支持房地产交易时也可能坚持同样的要求。事实上,这种产品自 19 世纪 50 年代就已存在,并且仅在美国每年至少有 1.5 万亿美元的商业价值这一事实证明了一开始的说法。在这种情况下,这些记录的维护方式必须进行改革,区块链提供了一个可持续解决方案。根据[美国土地产权协会][4],平均每个案例的欺诈平均约为 10 万美元,并且涉及交易的所有产权中有 25 的文件存在问题。区块链允许设置一个不可变的永久数据库,该数据库将跟踪资产本身,记录已经进入的每个交易或投资。这样的分类帐本系统将使包括一次性购房者在内的房地产行业的每个人的生活更加轻松,并使诸如产权保险等金融产品基本上无关紧要。将诸如房地产之类的实物资产转换为这样的数字资产是非常规的,并且目前仅在理论上存在。然而,这种变化迫在眉睫,而不是迟到 [^5]。
区块链在房地产中影响最大的领域如上所述,在维护透明和安全的产权管理系统方面。基于区块链的财产记录可以包含有关财产、其所在地、所有权历史以及相关的公共记录的[信息][6]。这将允许房地产交易快速完成,并且无需第三方监控和监督。房地产评估和税收计算等任务成为有形的、客观的参数问题,而不是主观测量和猜测,因为可靠的历史数据是可公开验证的。[UBITQUITY][7] 就是这样一个平台,为企业客户提供定制的基于区块链的解决方案。该平台允许客户跟踪所有房产细节、付款记录、抵押记录,甚至允许运行智能合约,自动处理税收和租赁。
这为我们带来了房地产区块链的第二大机遇和用例。由于该行业受到众多第三方的高度监管,除了参与交易的交易对手外,尽职调查和财务评估可能非常耗时。这些流程主要通过离线渠道进行,文书工作需要在最终评估报告出来之前进行数天。对于公司房地产交易尤其如此,这构成了顾问所收取的总计费时间的大部分。如果交易由抵押背书,则这些过程的重复是不可避免的。一旦与所涉及的人员和机构的数字身份相结合,就可以完全避免当前的低效率,并且可以在几秒钟内完成交易。租户、投资者、相关机构、顾问等可以单独验证数据并达成一致的共识,从而验证永久性的财产记录 [^8]。这提高了验证流程的准确性。房地产巨头 RE/MAX 最近宣布与服务提供商 XYO Network Partners 合作,[建立墨西哥房上市地产国家数据库][9]。他们希望有朝一日能够创建世界上最大的(截至目前)去中心化房地产登记中心之一。
然而,区块链可以带来的另一个重要且可以说是非常民主的变化是投资房地产。与其他投资资产类别不同,即使是小型家庭投资者也可能参与其中,房地产通常需要大量的手工付款才能参与。诸如 ATLANT 和 BitOfProperty 之类的公司将房产的账面价值代币化,并将其转换为加密货币的等价物。这些代币随后在交易所出售,类似于股票和股票的交易方式。[房地产后续产生的任何现金流都会根据其在财产中的“份额”记入贷方或借记给代币所有者][4]。
然而尽管如此区块链技术仍处于房地产领域的早期采用阶段目前的法规还没有明确定义它。诸如分布式应用程序、分布式匿名组织DAO、智能合约等概念在许多国家的法律领域是闻所未闻的。一旦所有利益相关者充分接受了区块链复杂性的良好教育就会彻底改革现有的法规和指导方针这是最务实的前进方式。 同样,这将是一个缓慢而渐进的变化,但是它是一个急需的变化。本系列的下一篇文章将介绍 “智能合约”,例如由 UBITQUITY 和 XYO 等公司实施的那些是如何在区块链中创建和执行的。
[^1]: HSBC, “Global Real Estate,” no. April, 2008
[^3]: D. B. Burke, Law of title insurance. Aspen Law & Business, 2000.
[^5]: M. Swan, Blockchain: Blueprint for a New Economy. O'Reilly Media, 2015.
[^8]: Deloitte, "Blockchain in commercial real estate: The future is here!"
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/
作者:[ostechnix][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10689-1.html
[2]: https://www.forbes.com/sites/jordanlulich/2018/06/21/what-is-title-insurance-and-why-its-important/#1472022b12bb
[4]: https://www.cbinsights.com/research/blockchain-real-estate-disruption/#financing
[6]: https://www2.deloitte.com/us/en/pages/financial-services/articles/blockchain-in-commercial-real-estate.html
[7]: https://www.ubitquity.io/
[9]: https://www.businesswire.com/news/home/20181012005068/en/XYO-Network-Partners-REMAX-M%C3%A9xico-Bring-Blockchain

View File

@ -1,22 +1,22 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10916-1.html)
[#]: subject: (How to manage your Linux environment)
[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all)
[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
如何管理你的 Linux 环境
如何管理你的 Linux 环境变量
======
### Linux 用户环境变量帮助你找到你需要的命令,获取很多完成的细节,而不需要知道系统如何配置的。 设置来自哪里和如何被修改它们是另一个课题。
> Linux 用户环境变量可以帮助你找到你需要的命令,无须了解系统如何配置的细节而完成大量工作。而这些设置来自哪里和如何被修改它们是另一个话题。
![IIP Photo Archive \(CC BY 2.0\)][1]
在 Linux 系统上的用户配置可以用多种方法简化你的使用。你可以运行命令,而不需要知道它们的位置。你可以重新使用先前运行的命令,而不用发愁系统是如何保持它们的踪迹。你可以查看你的电子邮件,查看手册页,并容易地回到你的 home 目录,而不管你在文件系统可能已经迷失方向。并且,当需要的时候,你可以调整你的账户设置,以便它向着你喜欢的方式来工作
在 Linux 系统上的用户账户配置以多种方法简化了系统的使用。你可以运行命令,而不需要知道它们的位置。你可以重新使用先前运行的命令,而不用发愁系统是如何追踪到它们的。你可以查看你的电子邮件,查看手册页,并容易地回到你的家目录,而不用管你在文件系统中身在何方。并且,当需要的时候,你可以调整你的账户设置,以便其更符合你喜欢的方式
Linux 环境设置来自一系列的文件 — 一些是系统范围(意味着它们影响所有用户账户),一些是配置处于你的 home 目录中文件中。系统范围设置在你登陆时生效,本地设置在以后生效,所以,你在你账户中作出的更改将覆盖系统范围设置。对于 bash 用户,这些文件包含这些系统文件:
Linux 环境设置来自一系列的文件:一些是系统范围(意味着它们影响所有用户账户),一些是处于你的家目录中的配置文件里。系统范围的设置在你登录时生效,而本地设置在其后生效,所以,你在你账户中作出的更改将覆盖系统范围设置。对于 bash 用户,这些文件包含这些系统文件:
```
/etc/environment
@ -24,22 +24,20 @@ Linux 环境设置来自一系列的文件 — 一些是系统范围(意味着
/etc/profile
```
和其中一些本地文件:
以及一些本地文件:
```
~/.bashrc
~/.profile -- not read if ~/.bash_profile or ~/.bash_login
~/.profile # 如果有 ~/.bash_profile 或 ~/.bash_login 就不会读此文件
~/.bash_profile
~/.bash_login
```
你可以修改本地存在的四个文件的任何一个,因为它们处于你的 home 目录,并且它们是属于你的。
**[ 两分钟 Linux 提示:[学习如何在2分钟视频教程中掌握很多 Linux 命令][2] ]**
你可以修改本地存在的四个文件的任何一个,因为它们处于你的家目录,并且它们是属于你的。
### 查看你的 Linux 环境设置
为查看你的环境设置,使用 **env** 命令。你的输出将可能与这相似:
为查看你的环境设置,使用 `env` 命令。你的输出将可能与这相似:
```
$ env
@ -84,9 +82,9 @@ LESSOPEN=| /usr/bin/lesspipe %s
_=/usr/bin/env
```
虽然你可能会得到大量的输出,第一个大部分用颜色显示上面的细节,颜色被用于命令行上来识别各种各样文件类型。当你看到一些东西,像 ***.tar=01;31:** ,这告诉你 tar 文件将以红色显示在文件列表中,然而 ***.jpg=01;35:** 告诉你 jpg 文件将以紫色显现出来。这些颜色本意是使它易于从一个文件列表中分辨出某些文件。你可以在[在 Linux 命令行中自定义你的颜色][3]处学习更多关于这些颜色的定义,和如何自定义它们
虽然你可能会看到大量的输出,上面显示的第一大部分用于在命令行上使用颜色标识各种文件类型。当你看到类似 `*.tar=01;31:` 这样的东西,这告诉你 `tar` 文件将以红色显示在文件列表中,然而 `*.jpg=01;35:` 告诉你 jpg 文件将以紫色显现出来。这些颜色旨在使它易于从一个文件列表中分辨出某些文件。你可以在[在 Linux 命令行中自定义你的颜色][3]处学习更多关于这些颜色的定义,和如何自定义它们
当你更喜欢一种不加装饰的显示时,一种简单关闭颜色方法是使用一个命令,例如这一个
当你更喜欢一种不加装饰的显示时,一种关闭颜色显示的简单方法是使用如下命令
```
$ ls -l --color=never
@ -98,14 +96,14 @@ $ ls -l --color=never
$ alias ll2='ls -l --color=never'
```
你也可以使用 **echo** 命令来单独地显现设置。在这个命令中,我们显示在历史缓存区中将被记忆命令的数量:
你也可以使用 `echo` 命令来单独地显现某个设置。在这个命令中,我们显示在历史缓存区中将被记忆命令的数量:
```
$ echo $HISTSIZE
1000
```
如果你已经移动,你在文件系统中的最后位置将被记忆。
如果你已经移动到某个位置,你在文件系统中的最后位置会被记在这里:
```
PWD=/home/shs
@ -114,31 +112,31 @@ OLDPWD=/tmp
### 作出更改
你可以使用一个像这样的命令更改环境设置,但是,如果你希望保持这个设置,在你的 ~/.bashrc 文件中添加一行代码,例如 "HISTSIZE=1234"
你可以使用一个像这样的命令更改环境设置,但是,如果你希望保持这个设置,在你的 `~/.bashrc` 文件中添加一行代码,例如 `HISTSIZE=1234`
```
$ export HISTSIZE=1234
```
### "export" 一个变量的本意是什么
### “export” 一个变量的本意是什么
导出一个变量使设置用于你的 shell 和可能的子shell。默认情况下用户定义的变量是本地的并不被导出到新的进程例如子 shell 和脚本。export 命令使变量可用于子进程的函数
导出一个环境变量使设置用于你的 shell 和可能的子 shell。默认情况下用户定义的变量是本地的并不被导出到新的进程例如子 shell 和脚本。`export` 命令使得环境变量可用在子进程中发挥功用
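可以用一个子 shell 直观地看到“导出”与否的差别(变量名是随意取的,仅作演示):
```
$ LOCAL_MSG="只在当前 shell 可见"
$ export SHARED_MSG="子进程也能看到"
$ bash -c 'echo "LOCAL_MSG=[$LOCAL_MSG] SHARED_MSG=[$SHARED_MSG]"'
LOCAL_MSG=[] SHARED_MSG=[子进程也能看到]
```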
### 添加和移除变量
你可以创建新的变量,并使它们在命令行和子 shell 上非常容易地可用。然而,这些变量将不存活于你的登出和再次回来,除非你也添加它们到 ~/.bashrc 或一个类似的文件
你可以很容易地在命令行和子 shell 上创建新的变量,并使它们可用。然而,当你登出并再次回来时这些变量将消失,除非你也将它们添加到 `~/.bashrc` 或一个类似的文件中
```
$ export MSG="Hello, World!"
```
如果你需要,你可以使用 **unset** 命令来消除一个变量:
如果你需要,你可以使用 `unset` 命令来消除一个变量:
```
$ unset MSG
```
如果变量被局部定义,你可以通过获得你的启动文件来简单地设置它回来。例如:
如果变量是局部定义的,你可以通过加载你的启动文件来简单地将其设置回来。例如:
```
$ echo $MSG
@ -153,18 +151,16 @@ Hello, World!
### 小结
用户账户是为创建一个有用的用户环境,而使用一组恰当的启动文件建立,但是,独立的用户和系统管理员都可以通过编辑他们的个人设置文件(对于用户)或很多来自设置起源的文件(对于系统管理员)来更改默认设置。
Join the Network World communities on 在 [Facebook][4] 和 [LinkedIn][5] 上加入网络世界社区来评论重要话题。
用户账户是用一组恰当的启动文件设立的,以创建一个有用的用户环境;个人用户和系统管理员都可以通过编辑各自的个人设置文件(对于用户)或许多设置所源自的系统文件(对于系统管理员)来更改默认设置。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all
via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (way-ww)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10923-1.html)
[#]: subject: (How To Install/Uninstall Listed Packages From A File In Linux?)
[#]: via: (https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-file-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
@ -10,25 +10,17 @@
如何在 Linux 上安装/卸载一个文件中列出的软件包?
======
在某些情况下,你可能想要将一个服务器上的软件包列表安装到另一个服务器上。
在某些情况下,你可能想要将一个服务器上的软件包列表安装到另一个服务器上。例如,你已经在服务器 A 上安装了 15 个软件包并且这些软件包也需要被安装到服务器 B、服务器 C 上等等。
例如你已经在服务器A 上安装了 15 个软件包并且这些软件包也需要被安装到服务器B服务器C 上等等。
我们可以手动去安装这些软件但是这将花费大量的时间。你可以手动安装一俩个服务器,但是试想如果你有大概十个服务器呢。在这种情况下你无法手动完成工作,那么怎样才能解决问题呢?
我们可以手动去安装这些软件但是这将花费大量的时间。
你可以手动安装一俩个服务器,但是试想如果你有大概十个服务器呢。
在这种情况下你无法手动完成工作,那么怎样才能解决问题呢?
不要担心我们可以帮你摆脱这样的情况和场景。
我们在这篇文章中增加了四种方法来克服困难。
不要担心我们可以帮你摆脱这样的情况和场景。我们在这篇文章中增加了四种方法来克服困难。
我希望这可以帮你解决问题。我已经在 CentOS 7 和 Ubuntu 18.04 上测试了这些命令。
我也希望这可以在其他发行版上工作。这仅仅需要使用该发行版的官方包管理器命令替代本文中的包管理器命令就行了。
如果想要 **[检查 Linux 系统上已安装的软件包列表][1]** 请点击链接。
如果想要 [检查 Linux 系统上已安装的软件包列表][1],请点击链接。
例如,如果你想要在基于 RHEL 的系统上创建软件包列表,请使用以下步骤。其他发行版也一样。
@ -53,11 +45,9 @@ apr-util-1.5.2-6.el7.x86_64
apr-1.4.8-3.el7_4.1.x86_64
```
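比如,在基于 RHEL 的系统上,可以用类似下面的命令把已安装的软件包列表保存到一个文件中(命令和文件路径仅作示意):

```
# rpm -qa > /tmp/pack1.txt
```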
### 方法一 : 如何在 Linux 上使用 cat 命令安装文件中列出的包?
### 方法一如何在 Linux 上使用 cat 命令安装文件中列出的包?
为实现这个目标,我将使用简单明了的第一种方法。
为此,创建一个文件并添加上你想要安装的包列表。
为实现这个目标,我将使用简单明了的第一种方法。为此,创建一个文件并添加上你想要安装的包列表。
出于测试的目的,我们将只添加以下的三个软件包名到文件中。
@ -69,7 +59,7 @@ mariadb-server
nano
```
只要简单的运行 **[apt 命令][2]** 就能在 Ubuntu/Debian 系统上一次性安装所有的软件包。
只要简单的运行 [apt 命令][2] 就能在 Ubuntu/Debian 系统上一次性安装所有的软件包。
```
# apt -y install $(cat /tmp/pack1.txt)
@ -138,20 +128,19 @@ Processing triggers for install-info (6.5.0.dfsg.1-2) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
```
使用 **[yum 命令][3]** 在基于 RHEL (如 CentosRHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。
使用 [yum 命令][3] 在基于 RHEL (如 Centos、RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。
```
# yum -y install $(cat /tmp/pack1.txt)
```
使用以命令在基于 RHEL (如 CentosRHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。
使用以下命令在基于 RHEL (如 Centos、RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。
```
# yum -y remove $(cat /tmp/pack1.txt)
```
使用以下 **[dnf 命令][4]** 在 Fedora 系统上安装文件中列出的软件包。
使用以下 [dnf 命令][4] 在 Fedora 系统上安装文件中列出的软件包。
```
# dnf -y install $(cat /tmp/pack1.txt)
@ -163,7 +152,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
# dnf -y remove $(cat /tmp/pack1.txt)
```
使用以下 **[zypper 命令][5]** 在 openSUSE 系统上安装文件中列出的软件包。
使用以下 [zypper 命令][5] 在 openSUSE 系统上安装文件中列出的软件包。
```
# zypper -y install $(cat /tmp/pack1.txt)
@ -175,7 +164,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
# zypper -y remove $(cat /tmp/pack1.txt)
```
使用以下 **[pacman 命令][6]** 在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。
使用以下 [pacman 命令][6] 在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。
```
# pacman -S $(cat /tmp/pack1.txt)
@ -188,36 +177,35 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
# pacman -Rs $(cat /tmp/pack1.txt)
```
### 方法二 : 如何使用 cat 和 xargs 命令在 Linux 中安装文件中列出的软件包。
### 方法二如何使用 cat 和 xargs 命令在 Linux 中安装文件中列出的软件包。
其实,我更喜欢使用这种方法,因为这是一种非常简单直接的方法。
使用以下 apt 命令在基于 Debian 的系统 (如 DebianUbuntu和Linux Mint) 上安装文件中列出的软件包。
使用以下 `apt` 命令在基于 Debian 的系统 (如 Debian、Ubuntu 和 Linux Mint) 上安装文件中列出的软件包。
```
# cat /tmp/pack1.txt | xargs apt -y install
```
使用以下 apt 命令 从基于 Debian 的系统 (如 DebianUbuntu和Linux Mint) 上卸载文件中列出的软件包。
使用以下 `apt` 命令 从基于 Debian 的系统 (如 Debian、Ubuntu 和 Linux Mint) 上卸载文件中列出的软件包。
```
# cat /tmp/pack1.txt | xargs apt -y remove
```
使用以下 yum 命令在基于 RHEL (如 CentosRHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。
使用以下 `yum` 命令在基于 RHEL (如 CentosRHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。
```
# cat /tmp/pack1.txt | xargs yum -y install
```
使用以命令从基于 RHEL (如 CentosRHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。
使用以下命令从基于 RHEL (如 Centos、RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。
```
# cat /tmp/pack1.txt | xargs yum -y remove
```
使用以下 dnf 命令在 Fedora 系统上安装文件中列出的软件包。
使用以下 `dnf` 命令在 Fedora 系统上安装文件中列出的软件包。
```
# cat /tmp/pack1.txt | xargs dnf -y install
@ -229,7 +217,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
# cat /tmp/pack1.txt | xargs dnf -y remove
```
使用以下 zypper 命令在 openSUSE 系统上安装文件中列出的软件包。
使用以下 `zypper` 命令在 openSUSE 系统上安装文件中列出的软件包。
```
@ -242,7 +230,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
# cat /tmp/pack1.txt | xargs zypper -y remove
```
使用以下 pacman 命令在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。
使用以下 `pacman` 命令在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。
```
# cat /tmp/pack1.txt | xargs pacman -S
@ -254,17 +242,17 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
# cat /tmp/pack1.txt | xargs pacman -Rs
```
### 方法三 : 如何使用 For Loop 在 Linux 上安装文件中列出的软件包?
我们也可以使用 For 循环命令来实现此目的。
### 方法三 : 如何使用 For 循环在 Linux 上安装文件中列出的软件包
安装批量包可以使用以下一条 For 循环的命令。
我们也可以使用 `for` 循环命令来实现此目的。
安装批量包可以使用以下一条 `for` 循环的命令。
```
# for pack in `cat /tmp/pack1.txt` ; do apt -y install $pack; done
```
要使用 shell 脚本安装批量包,请使用以下 For 循环。
要使用 shell 脚本安装批量包,请使用以下 `for` 循环。
```
# vi /opt/scripts/bulk-package-install.sh
@ -275,7 +263,7 @@ do apt -y remove $pack
done
```
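一个最小的脚本示例草稿(假设使用 apt 安装,软件包列表路径 /tmp/pack1.txt 仅作示意)大致如下:

```
#!/bin/bash
# 逐个读取列表中的软件包并安装
for pack in `cat /tmp/pack1.txt`
do apt -y install $pack
done
```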
为 bulk-package-install.sh 设置可执行权限。
`bulk-package-install.sh` 设置可执行权限。
```
# chmod +x bulk-package-install.sh
@ -287,17 +275,17 @@ done
# sh bulk-package-install.sh
```
### 方法四 : 如何使用 While 循环在 Linux 上安装文件中列出的软件包
### 方法四如何使用 While 循环在 Linux 上安装文件中列出的软件包
我们也可以使用 While 循环命令来实现目的。
我们也可以使用 `while` 循环命令来实现目的。
安装批量包可以使用以下一条 While 循环的命令。
安装批量包可以使用以下一条 `while` 循环的命令。
```
# file="/tmp/pack1.txt"; while read -r pack; do apt -y install $pack; done < "$file"
```
要使用 shell 脚本安装批量包,请使用以下 While 循环。
要使用 shell 脚本安装批量包,请使用以下 `while` 循环。
```
# vi /opt/scripts/bulk-package-install.sh
@ -309,7 +297,7 @@ do apt -y remove $pack
done < "$file"
```
为 bulk-package-install.sh 设置可执行权限。
`bulk-package-install.sh` 设置可执行权限。
```
# chmod +x bulk-package-install.sh
@ -328,13 +316,13 @@ via: https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-fi
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[way-ww](https://github.com/way-ww)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/check-installed-packages-in-rhel-centos-fedora-debian-ubuntu-opensuse-arch-linux/
[1]: https://linux.cn/article-10116-1.html
[2]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[3]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/

View File

@ -0,0 +1,109 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10922-1.html)
[#]: subject: (Zettlr Markdown Editor for Writers and Researchers)
[#]: via: (https://itsfoss.com/zettlr-markdown-editor/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Zettlr适合写作者和研究人员的 Markdown 编辑器
======
有很多[适用于 Linux 的 Markdown 编辑器][1],并且还在继续增加。问题是,像 [Boostnote][2] 一样,大多数是为编码人员设计的,可能不会受到非技术人员的欢迎。让我们来看一个旨在替代 Word 和昂贵的文字处理器、适用于非技术人员的 Markdown 编辑器Zettlr。
### Zettlr Markdown 编辑器
![Zettlr Light Mode][3]
我可能在网站上提到过一两次,我更喜欢用 [Markdown][4] 写下我的所有文档。它易于学习,不会让你受困于专有文档格式。我还在我的[适合作者的开源工具列表][5]中提到了 Markdown 编辑器。
我用过许多 Markdown 编辑器,但是我一直有兴趣尝试新的。最近,我遇到了 Zettlr一个开源 Markdown 编辑器。
[Zettlr][6] 是一位名叫 [Hendrik Erz][7] 的德国社会学家/政治理论家创建的。Hendrik 创建了 Zettlr因为他对目前的文字处理器感到不满意。他想要可以让他“专注于写作和阅读”的编辑器。
在发现 Markdown 之后,他在不同的操作系统上尝试了几个 Markdown 编辑器。但它们都没有他想要的东西。[根据 Hendrik 的说法][8],“但我不得不意识到,没有为高效组织大量文本而写的编辑器。大多数编辑器都是由编码人员编写的,因此能满足工程师和数学家的需求,却没有适合我这样的社会科学、历史或政治学学生的编辑器。”
所以他决定创造自己的。2017 年 11 月,他开始编写 Zettlr。
![Zettlr About][9]
#### Zettlr 功能
Zettlr 有许多简洁的功能,包括:
* 从 [Zotero 数据库][10]导入源并在文档中引用它们
  * 使用可选的行屏蔽,让你无打扰地专注于写作
  * 支持代码高亮
  * 使用标签对信息进行排序
  * 能够为该任务设定写作目标
  * 查看一段时间的写作统计
  * 番茄钟计时器
  * 浅色/深色主题
  * 使用 [reveal.js][11] 创建演示文稿
  * 快速预览文档
  * 可以在一个项目文件夹中搜索 Markdown 文档,并用热图展示文字搜索密度。
  * 将文件导出为 HTML、PDF、ODT、DOC、reStructuredText、LaTex、TXT、Emacs ORG、[TextBundle][12] 和 Textpack
  * 将自定义 CSS 添加到你的文档
当我写这篇文章时,一个对话框弹出来告诉我最近发布了 [1.3.0 beta][14]。此测试版将包括几个新的主题,以及一大堆修复,新功能和改进。
![Zettlr Night Mode][15]
#### 安装 Zettlr
目前,唯一可安装 Zettlr 的 Linux 存储库是 [AUR][16]。如果你的 Linux 发行版不是基于 Arch 的,你可以从网站上下载 macOS、Windows、Debian 和 Fedora 的[安装程序][17]。
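例如,在 Arch 系发行版上,可以借助一个 AUR 助手来安装(这里以 yay 为例,仅作示意;包名取自上面的 AUR 链接):

```
$ yay -S zettlr-bin
```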
#### 对 Zettlr 的最后一点想法
注意:为了测试 Zettlr我用它来写这篇文章。
Zettlr 有许多简洁的功能,是我希望我之前选用的编辑器 ghostwriter 也具备的,例如为文档设置字数目标。我也喜欢在不打开文档的情况下预览文档的功能。
![Zettlr Settings][18]
我也遇到了几个问题,但这些更多的是因为 Zettlr 与 ghostwriter 的工作方式略有不同。例如,当我尝试从网站复制引用或名称时,它会将内嵌样式粘贴到 Zettlr 中。幸运的是,它有一个“不带样式粘贴”的选项。有几次我在打字时有轻微的延迟。但那可能是因为它是一个 Electron 程序。
总的来说,我认为 Zettlr 是第一次使用 Markdown 用户的好选择。它有许多 Markdown 编辑器已有的功能,并为那些只使用过文字处理器的用户增加了一些功能。
正如 Hendrik 在 [Zettlr 网站][8]中所说的那样,“让自己摆脱文字处理器的束缚,看看你的写作过程如何通过身边的技术得到改善!”
如果你觉得 Zettlr 有用,请考虑支持 [Hendrik][19]。正如他在网站上所说,“它是免费的,因为我不相信激烈竞争、早逝的创业文化。我只是想帮忙。”
你有没有用过 Zettlr你最喜欢的 Markdown 编辑器是什么?请在下面的评论中告诉我们。
如果你觉得这篇文章有趣请在社交媒体Hacker News 或 [Reddit][21] 上分享它。
--------------------------------------------------------------------------------
via: https://itsfoss.com/zettlr-markdown-editor/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-markdown-editors-linux/
[2]: https://itsfoss.com/boostnote-linux-review/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/Zettlr-light-mode.png?fit=800%2C462&ssl=1
[4]: https://daringfireball.net/projects/markdown/
[5]: https://itsfoss.com/open-source-tools-writers/
[6]: https://www.zettlr.com/
[7]: https://github.com/nathanlesage
[8]: https://www.zettlr.com/about
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/zettlr-about.png?fit=800%2C528&ssl=1
[10]: https://www.zotero.org/
[11]: https://revealjs.com/#/
[12]: http://textbundle.org/
[13]: https://itsfoss.com/great-little-book-shelf-review/
[14]: https://github.com/Zettlr/Zettlr/releases/tag/v1.3.0-beta
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Zettlr-night-mode.png?fit=800%2C469&ssl=1
[16]: https://aur.archlinux.org/packages/zettlr-bin/
[17]: https://www.zettlr.com/download
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/zettlr-settings.png?fit=800%2C353&ssl=1
[19]: https://www.zettlr.com/supporters
[21]: http://reddit.com/r/linuxusersgroup

View File

@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: (chen-ni)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10934-1.html)
[#]: subject: (French IT giant Atos enters the edge-computing business)
[#]: via: (https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-the-edge-computing-business.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
法国 IT 巨头 Atos 进军边缘计算
======
> Atos 另辟蹊径,通过一种只有行李箱大小的设备 BullSequana Edge 进军边缘计算。
![iStock][1]
法国 IT 巨头 Atos 是最近才开展边缘计算业务的,他们的产品是一个叫做 BullSequana Edge 的小型设备。和竞争对手们的集装箱大小的设备不同(比如说 Vapor IO 和 Schneider Electronics 的产品Atos 的边缘设备完全可以被放进衣柜里。
Atos 表示,他们的这个设备使用人工智能应用来提供快速响应,适合工业 4.0、自动驾驶汽车、健康管理,以及零售业和机场的安保系统等领域。在这些领域,数据需要在边缘进行实时处理和分析。
[延伸阅读:[什么是边缘计算?][2] 以及 [边缘网络和物联网如何重新定义数据中心][3]]
BullSequana Edge 可以作为独立的基础设施单独采购,也可以和 Atos 的边缘软件捆绑采购并且这个软件还是非常出色的。Atos 表示 BullSequana Edge 主要支持三种使用场景:
* AI人工智能Atos 的边缘计算机视觉软件为监控摄像头提供先进的特征抽取和分析技术,包括人像、人脸、行为等特征。这些分析可以支持系统做出自动化响应。
* 大数据Atos 边缘数据分析系统通过预测性和规范性的解决方案,帮助机构优化商业模型。它使用数据湖的功能,确保数据的可信度和可用性。
* 容器Atos 边缘数据容器EDC是一种一体化容器解决方案。它可以作为一个去中心化的 IT 系统在边缘运行,并且可以在没有数据中心的环境下自动运行,而不需要现场操作。
由于体积小BullSequana Edge 并不具备很强的处理能力。它装载一个 16 核的 Intel Xeon 中央处理器,可以装备最多两枚英伟达 Tesla T4 图形处理器或者是 FPGA现场可编程门阵列。Atos 表示,这就足够让复杂的 AI 模型在边缘进行低延迟的运行了。
考虑到数据的敏感性BullSequana Edge 同时装备了一个入侵感应器,用来在遭遇物理入侵的时候禁用机器。
虽然大多数边缘设备都被安放在信号塔附近但是考虑到边缘系统可能被安放在任何地方BullSequana Edge 还支持通过无线电、全球移动通信系统GSM或者 Wi-Fi 来进行通信。
Atos 在美国也许不是一个家喻户晓的名字,但是在欧洲它可以和 IBM 相提并论,并且在过去的十年里已经收购了诸如 Bull SA、施乐 IT 外包以及西门子 IT 等 IT 巨头们。
关于边缘网络的延伸阅读:
* [边缘网络和物联网如何重新定义数据中心][3]
* [边缘计算的最佳实践][4]
* [边缘计算如何提升物联网安全][5]
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-the-edge-computing-business.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[chen-ni](https://github.com/chen-ni)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/01/huawei-18501-edge-gartner-100786331-large.jpg
[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,471 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10932-1.html)
[#]: subject: (20+ FFmpeg Commands For Beginners)
[#]: via: (https://www.ostechnix.com/20-ffmpeg-commands-beginners/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
给初学者的 20 多个 FFmpeg 命令示例
======
![FFmpeg Commands](https://img.linux.net.cn/data/attachment/album/201906/03/011553xu323dzu40pb03bx.jpg)
在这个指南中,我将用示例来阐明如何使用 FFmpeg 媒体框架来做各种各样的音频、视频转码和转换的操作。我已经为初学者汇集了最常用的 20 多个 FFmpeg 命令,我将不时地添加更多的示例来保持更新这个指南。请给这个指南加书签,以后回来检查更新。让我们开始吧,如果你还没有在你的 Linux 系统中安装 FFmpeg参考下面的指南。
* [在 Linux 中安装 FFmpeg][2]
### 针对初学者的 20 多个 FFmpeg 命令
FFmpeg 命令的典型语法是:
```
ffmpeg [全局选项] {[输入文件选项] -i 输入_url_地址} ...
{[输出文件选项] 输出_url_地址} ...
```
现在我们将查看一些重要的和有用的 FFmpeg 命令。
#### 1、获取音频/视频文件信息
为显示你的媒体文件细节,运行:
```
$ ffmpeg -i video.mp4
```
样本输出:
```
ffmpeg version n4.1.3 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 8.2.1 (GCC) 20181127
configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvdec --enable-nvenc --enable-omx --enable-shared --enable-version3
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.20.100
Duration: 00:00:28.79, start: 0.000000, bitrate: 454 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/smpte170m), 1920x1080 [SAR 1:1 DAR 16:9], 318 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
Metadata:
handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019.
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019.
At least one output file must be specified
```
如你在上面的输出中看到的FFmpeg 显示该媒体文件信息,以及 FFmpeg 细节,例如版本、配置细节、版权标记、构建参数和库选项等等。
如果你不想看 FFmpeg 标语和其它细节,而仅仅想看媒体文件信息,使用 `-hide_banner` 标志,像下面。
```
$ ffmpeg -i video.mp4 -hide_banner
```
样本输出:
![][3]
*使用 FFMpeg 查看音频、视频文件信息。*
看见了吗?现在,它仅显示媒体文件细节。
#### 2、转换视频文件到不同的格式
FFmpeg 是强有力的音频和视频转换器,因此,它能在不同格式之间转换媒体文件。举个例子,要转换 mp4 文件到 avi 文件,运行:
```
$ ffmpeg -i video.mp4 video.avi
```
类似地,你可以转换媒体文件到你选择的任何格式。
例如,为转换 YouTube flv 格式视频为 mpeg 格式,运行:
```
$ ffmpeg -i video.flv video.mpeg
```
如果你想维持你的源视频文件的质量,使用 `-qscale 0` 参数:
```
$ ffmpeg -i input.webm -qscale 0 output.mp4
```
为检查 FFmpeg 的支持格式的列表,运行:
```
$ ffmpeg -formats
```
#### 3、转换视频文件到音频文件
要转换一个视频文件到音频文件,只需具体指明输出格式,像 .mp3、.ogg 或其它任意音频格式。
下面的命令将把 input.mp4 视频文件转换为 output.mp3 音频文件。
```
$ ffmpeg -i input.mp4 -vn output.mp3
```
此外,你也可以对输出文件使用各种各样的音频转换编码选项,像下面演示。
```
$ ffmpeg -i input.mp4 -vn -ar 44100 -ac 2 -ab 320k -f mp3 output.mp3
```
在这里,
* `-vn` 表明我们已经在输出文件中禁用视频录制。
* `-ar` 设置输出文件的音频频率。通常使用的值是22050 Hz、44100 Hz、48000 Hz。
* `-ac` 设置音频通道的数目。
* `-ab` 表明音频比特率。
* `-f` 输出文件格式。在我们的实例中,它是 mp3 格式。
#### 4、更改视频文件的分辨率
如果你想设置一个视频文件为指定的分辨率,你可以使用下面的命令:
```
$ ffmpeg -i input.mp4 -filter:v scale=1280:720 -c:a copy output.mp4
```
或,
```
$ ffmpeg -i input.mp4 -s 1280x720 -c:a copy output.mp4
```
上面的命令将设置所给定视频文件的分辨率到 1280×720。
类似地,为转换上面的文件到 640×480 大小,运行:
```
$ ffmpeg -i input.mp4 -filter:v scale=640:480 -c:a copy output.mp4
```
或者,
```
$ ffmpeg -i input.mp4 -s 640x480 -c:a copy output.mp4
```
这个技巧将帮助你缩放你的视频文件到较小的显示设备上,例如平板电脑和手机。
#### 5、压缩视频文件
把媒体文件压缩得小一些来节省硬盘空间总是一个好主意。
下面的命令将压缩并减少输出文件的大小。
```
$ ffmpeg -i input.mp4 -vf scale=1280:-1 -c:v libx264 -preset veryslow -crf 24 output.mp4
```
请注意,如果你尝试减小视频文件的大小,你将损失视频质量。如果 24 显得太激进,你可以把 `-crf` 的值调低一些。
你也可以加上下面的选项,把音频转码为较低的比特率并处理为双声道立体声,从而进一步减小文件大小。
```
-ac 2 -c:a aac -strict -2 -b:a 128k
```
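把上面两部分合并成一条命令的完整示例(参数值仅作示意,可按需调整):

```
$ ffmpeg -i input.mp4 -vf scale=1280:-1 -c:v libx264 -preset veryslow -crf 24 -ac 2 -c:a aac -strict -2 -b:a 128k output.mp4
```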
#### 6、压缩音频文件
正像压缩视频文件一样,为节省一些磁盘空间,你也可以使用 `-ab` 标志压缩音频文件。
例如,你有一个 320 kbps 比特率的音频文件。你想通过更改比特率到任意较低的值来压缩它,像下面。
```
$ ffmpeg -i input.mp3 -ab 128k output.mp3
```
各种各样可用的音频比特率列表是:
1. 96kbps
2. 112kbps
3. 128kbps
4. 160kbps
5. 192kbps
6. 256kbps
7. 320kbps
#### 7、从一个视频文件移除音频流
如果你不想要一个视频文件中的音频,使用 `-an` 标志。
```
$ ffmpeg -i input.mp4 -an output.mp4
```
在这里,`-an` 表示没有音频录制。
上面的命令会撤销所有音频相关的标志,因为我们不要来自 input.mp4 的音频。
#### 8、从一个媒体文件移除视频流
类似地,如果你不想要视频流,你可以使用 `-vn` 标志从媒体文件中简单地移除它。`-vn` 代表没有视频录制。换句话说,这个命令转换所给定媒体文件为音频文件。
下面的命令将从所给定媒体文件中移除视频。
```
$ ffmpeg -i input.mp4 -vn output.mp3
```
你也可以使用 `-ab` 标志来指出输出文件的比特率,如下面的示例所示。
```
$ ffmpeg -i input.mp4 -vn -ab 320k output.mp3
```
#### 9、从视频中提取图像
FFmpeg 的另一个有用的特色是我们可以从一个视频文件中轻松地提取图像。如果你想从一个视频文件中创建一个相册,这可能是非常有用的。
为从一个视频文件中提取图像,使用下面的命令:
```
$ ffmpeg -i input.mp4 -r 1 -f image2 image-%2d.png
```
在这里,
* `-r` 设置帧速率。即,每秒提取为图像的帧数。默认值是 25。
* `-f` 表示输出格式,即,在我们的实例中是图像。
* `image-%2d.png` 表明我们如何想命名提取的图像。在这个实例中命名应该像这样image-01.png、image-02.png、image-03.png 等等开始。如果你使用 `%3d`,那么图像的命名像 image-001.png、image-002.png 等等开始。
#### 10、裁剪视频
FFMpeg 允许以我们选择的任何范围裁剪一个给定的媒体文件。
裁剪一个视频文件的语法如下给定:
```
ffmpeg -i input.mp4 -filter:v "crop=w:h:x:y" output.mp4
```
在这里,
* `input.mp4` 源视频文件。
* `-filter:v` 表示视频过滤器。
* `crop` 表示裁剪过滤器。
* `w` 我们想自源视频中裁剪的矩形的宽度。
* `h` 矩形的高度。
* `x` 我们想自源视频中裁剪的矩形的 x 坐标 。
* `y` 矩形的 y 坐标。
比如说你想要一个来自视频的位置 (200,150),且具有 640 像素宽度和 480 像素高度的视频,命令应该是:
```
$ ffmpeg -i input.mp4 -filter:v "crop=640:480:200:150" output.mp4
```
请注意,剪切视频将影响质量。除非必要,请勿剪切。
#### 11、转换一个视频的具体的部分
有时你可能想仅把视频文件的某个具体部分转换为不同的格式。以示例说明,下面的命令将把所给定的 input.mp4 视频文件的开始 10 秒转换为 .avi 格式。
```
$ ffmpeg -i input.mp4 -t 10 output.avi
```
在这里,我们以秒数指定时间。此外,以 `hh:mm:ss` 格式指定时间也是可以的。
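例如,下面的命令(时间值仅作示意)使用 `hh:mm:ss` 格式转换视频的前 30 秒:

```
$ ffmpeg -i input.mp4 -t 00:00:30 output.avi
```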
#### 12、设置视频的屏幕高宽比
你可以使用 `-aspect` 标志设置一个视频文件的屏幕高宽比,像下面。
```
$ ffmpeg -i input.mp4 -aspect 16:9 output.mp4
```
通常使用的高宽比是:
* 16:9
* 4:3
* 16:10
* 5:4
* 2.21:1
  * 2.35:1
  * 2.39:1
#### 13、添加海报图像到音频文件
你可以添加海报图像到你的文件,以便图像将在播放音频文件时显示。这对托管在视频托管主机或共享网站中的音频文件是有用的。
```
$ ffmpeg -loop 1 -i inputimage.jpg -i inputaudio.mp3 -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest output.mp4
```
#### 14、使用开始和停止时间剪下一段媒体文件
可以使用开始和停止时间来剪下一段视频为小段剪辑,我们可以使用下面的命令。
```
$ ffmpeg -i input.mp4 -ss 00:00:50 -codec copy -t 50 output.mp4
```
在这里,
* `-ss` 表示视频剪辑的开始时间。在我们的示例中,开始时间是第 50 秒。
* `-t` 表示总的持续时间。
当你想使用开始和结束时间从一个音频或视频文件剪切一部分时,它是非常有用的。
类似地,我们可以像下面剪下音频。
```
$ ffmpeg -i audio.mp3 -ss 00:01:54 -to 00:06:53 -c copy output.mp3
```
#### 15、切分视频文件为多个部分
一些网站将仅允许你上传具体指定大小的视频。在这样的情况下,你可以切分大的视频文件到多个较小的部分,像下面。
```
$ ffmpeg -i input.mp4 -t 00:00:30 -c copy part1.mp4 -ss 00:00:30 -codec copy part2.mp4
```
在这里,
* `-t 00:00:30` 表示从视频的开始到视频的第 30 秒创建一部分视频。
* `-ss 00:00:30` 为视频的下一部分显示开始时间戳。它意味着第 2 部分将从第 30 秒开始,并将持续到原始视频文件的结尾。
#### 16、接合或合并多个视频部分到一个
FFmpeg 也可以接合多个视频部分,并创建一个单个视频文件。
创建包含你想接合文件的准确的路径的 `join.txt`。所有的文件都应该是相同的格式(相同的编码格式)。所有文件的路径应该逐个列出,像下面。
```
file /home/sk/myvideos/part1.mp4
file /home/sk/myvideos/part2.mp4
file /home/sk/myvideos/part3.mp4
file /home/sk/myvideos/part4.mp4
```
现在,接合所有文件,使用命令:
```
$ ffmpeg -f concat -i join.txt -c copy output.mp4
```
如果你得到一些像下面的错误;
```
[concat @ 0x555fed174cc0] Unsafe file name '/path/to/mp4'
join.txt: Operation not permitted
```
添加 `-safe 0` :
```
$ ffmpeg -f concat -safe 0 -i join.txt -c copy output.mp4
```
上面的命令将接合 part1.mp4、part2.mp4、part3.mp4 和 part4.mp4 文件到一个称为 output.mp4 的单个文件中。
#### 17、添加字幕到一个视频文件
我们可以使用 FFmpeg 来添加字幕到视频文件。为你的视频下载正确的字幕,并如下所示添加它到你的视频。
```
$ ffmpeg -i input.mp4 -i subtitle.srt -map 0 -map 1 -c copy -c:v libx264 -crf 23 -preset veryfast output.mp4
```
#### 18、预览或测试视频或音频文件
你可能希望通过预览来验证或测试输出的文件是否已经被恰当地转码编码。为完成预览,你可以从你的终端播放它,用命令:
```
$ ffplay video.mp4
```
![][7]
类似地,你可以测试音频文件,像下面所示。
```
$ ffplay audio.mp3
```
![][8]
#### 19、增加/减少视频播放速度
FFmpeg 允许你调整视频播放速度。
为增加视频播放速度,运行:
```
$ ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4
```
该命令会使视频的播放速度加倍。
要降低视频的播放速度,你需要使用一个大于 1 的倍数,运行:
```
$ ffmpeg -i input.mp4 -vf "setpts=4.0*PTS" output.mp4
```
#### 20、创建动画的 GIF
出于各种目的,我们在几乎所有的社交和专业网络上使用 GIF 图像。使用 FFmpeg我们可以简单地和快速地创建动画的视频文件。下面的指南阐释了如何在类 Unix 系统中使用 FFmpeg 和 ImageMagick 创建一个动画的 GIF 文件。
* [在 Linux 中如何创建动画的 GIF][9]
#### 21、从 PDF 文件中创建视频
我长年累月的收集了很多 PDF 文件,大多数是 Linux 教程,保存在我的平板电脑中。有时我懒得从平板电脑中阅读它们。因此,我决定从 PDF 文件中创建一个视频,在一个大屏幕设备(像一台电视机或一台电脑)中观看它们。如果你想知道如何从一批 PDF 文件中制作一个电影,下面的指南将帮助你。
* [在 Linux 中如何从 PDF 文件中创建一个视频][10]
#### 22、获取帮助
在这个指南中,我已经覆盖大多数常常使用的 FFmpeg 命令。它有很多不同的选项来做各种各样的高级功能。要学习更多用法,请参考手册页。
```
$ man ffmpeg
```
这就是全部了。我希望这个指南将帮助你入门 FFmpeg。如果你发现这个指南有用请在你的社交和专业网络上分享它。更多好东西将要来。敬请期待
谢谢!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/20-ffmpeg-commands-beginners/
作者:[sk][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2017/05/FFmpeg-Commands-720x340.png
[2]: https://www.ostechnix.com/install-ffmpeg-linux/
[3]: http://www.ostechnix.com/wp-content/uploads/2017/05/sk@sk_001.png
[4]: https://ostechnix.tradepub.com/free/w_make141/prgm.cgi
[5]: https://ostechnix.tradepub.com/free/w_make75/prgm.cgi
[6]: https://ostechnix.tradepub.com/free/w_make235/prgm.cgi
[7]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_004.png
[8]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_005-3.png
[9]: https://www.ostechnix.com/create-animated-gif-ubuntu-16-04/
[10]: https://www.ostechnix.com/create-video-pdf-files-linux/

View File

@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10925-1.html)
[#]: subject: (Dockly Manage Docker Containers From Terminal)
[#]: via: (https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Dockly从终端管理 Docker 容器
======
![](https://img.linux.net.cn/data/attachment/album/201906/01/144422bfwx1e7fqx1ee11x.jpg)
几天前,我们发布了一篇指南,其中涵盖了[开始使用 Docker][2] 时需要了解的几乎所有细节。在该指南中,我们向你展示了如何详细创建和管理 Docker 容器。还有一些可用于管理 Docker 容器的非官方工具。如果你看过我们以前的文章,你可能会看到两个基于 Web 的工具,[Portainer][3] 和 [PiCluster][4]。它们都使得 Docker 管理任务在 Web 浏览器中变得更加容易和简单。今天,我遇到了另一个名为 Dockly 的 Docker 管理工具。
与上面的工具不同Dockly 是一个 TUI文本界面程序用于在类 Unix 系统中从终端管理 Docker 容器和服务。它是使用 NodeJS 编写的自由开源工具。在本简要指南中,我们将了解如何安装 Dockly 以及如何从命令行管理 Docker 容器。
### 安装 Dockly
确保已在 Linux 上安装了 NodeJS。如果尚未安装请参阅以下指南。
* [如何在 Linux 上安装 NodeJS][5]
安装 NodeJS 后,运行以下命令安装 Dockly
```
# npm install -g dockly
```
### 使用 Dockly 在终端管理 Docker 容器
使用 Dockly 管理 Docker 容器非常简单!你所要做的就是打开终端并运行以下命令:
```
# dockly
```
Dockly 将通过 unix 套接字自动连接到你的本机 docker 守护进程,并在终端中显示正在运行的容器列表,如下所示。
![][6]
*使用 Dockly 管理 Docker 容器*
正如你在上面的截图中看到的Dockly 在顶部显示了运行容器的以下信息:
* 容器 ID
* 容器名称,
* Docker 镜像,
* 命令,
* 运行中容器的状态,
* 状态。
在右上角,你将看到容器的 CPU 和内存利用率。使用向上/向下箭头键在容器之间移动。
在底部,有少量的键盘快捷键来执行各种 Docker 管理任务。以下是目前可用的键盘快捷键列表:
* `=` - 刷新 Dockly 界面,
* `/` - 搜索容器列表视图,
* `i` - 显示有关当前所选容器或服务的信息,
* `回车` - 显示当前容器或服务的日志,
* `v` - 在容器和服务视图之间切换,
* `l` - 在选定的容器上启动 `/bin/bash` 会话,
* `r` - 重启选定的容器,
* `s` - 停止选定的容器,
* `h` - 显示帮助窗口,
* `q` - 退出 Dockly。
#### 查看容器的信息
使用向上/向下箭头选择一个容器,然后按 `i` 以显示所选容器的信息。
![][7]
*查看容器的信息*
#### 重启容器
如果你想随时重启容器,只需选择它并按 `r` 即可重新启动。
![][8]
*重启 Docker 容器*
#### 停止/删除容器和镜像
如果不再需要容器,我们可以立即停止和/或删除一个或所有容器。为此,请按 `m` 打开菜单。
![][9]
*停止,删除 Docker 容器和镜像*
在这里,你可以执行以下操作。
* 停止所有 Docker 容器,
* 删除选定的容器,
* 删除所有容器,
* 删除所有 Docker 镜像等。
#### 显示 Dockly 帮助部分
如果你有任何疑问,只需按 `h` 即可打开帮助部分。
![][10]
*Dockly 帮助*
有关更多详细信息,请参考最后给出的官方 GitHub 页面。
就是这些了。希望这篇文章有用。如果你一直在使用 Docker 容器,请试试 Dockly看它是否有帮助。
建议阅读:
* [如何自动更新正在运行的 Docker 容器][11]
* [ctop一个 Linux 容器的命令行监控工具][12]
资源:
* [Dockly 的 GitHub 仓库][13]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/
作者:[sk][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Dockly-720x340.png
[2]: https://www.ostechnix.com/getting-started-with-docker/
[3]: https://www.ostechnix.com/portainer-an-easiest-way-to-manage-docker/
[4]: https://www.ostechnix.com/picluster-simple-web-based-docker-management-application/
[5]: https://www.ostechnix.com/install-node-js-linux/
[6]: http://www.ostechnix.com/wp-content/uploads/2019/05/Manage-Docker-Containers-Using-Dockly.png
[7]: http://www.ostechnix.com/wp-content/uploads/2019/05/View-containers-information.png
[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Restart-containers.png
[9]: http://www.ostechnix.com/wp-content/uploads/2019/05/stop-remove-containers-and-images.png
[10]: http://www.ostechnix.com/wp-content/uploads/2019/05/Dockly-Help.png
[11]: https://www.ostechnix.com/automatically-update-running-docker-containers/
[12]: https://www.ostechnix.com/ctop-commandline-monitoring-tool-linux-containers/
[13]: https://github.com/lirantal/dockly

View File

@ -1,66 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (French IT giant Atos enters the edge-computing business)
[#]: via: (https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-the-edge-computing-business.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
French IT giant Atos enters the edge-computing business
======
Atos takes a different approach to edge computing with a device called BullSequana Edge that's the size of a suitcase.
![iStock][1]
French IT giant Atos is the latest to jump into the edge computing business with a small device called BullSequana Edge. Unlike devices from its competitors that are the size of a shipping container, including those from Vapor IO and Schneider Electronics, Atos' edge device can sit in a closet.
Atos says the device uses artificial intelligence (AI) applications to offer fast response times that are needed in areas such as manufacturing 4.0, autonomous vehicles, healthcare and retail/airport security where data needs to be processed and analyzed at the edge in real time.
**[ Also see:[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3].]**
The BullSequana Edge can be purchased as standalone infrastructure or bundled with Atos edge software, and that software is pretty impressive. Atos says the BullSequana Edge supports three main categories of use cases:
* AI: Atos Edge Computer Vision software for surveillance cameras provide advanced extraction and analysis of features such as people, faces, emotions, and behaviors so that automatic actions can be carried out based on that analysis.
* Big data: Atos Edge Data Analytics enables organizations to improve their business models with predictive and prescriptive solutions. It utilizes data lake capabilities to make data trustworthy and useable.
* Containers: Atos Edge Data Container (EDC) is an all-in-one container solution that is ready to run at the edge and serves as a decentralized IT system that can run autonomously in non-data center environments with no need for local on-site operation.
Because of its small size, the BullSequana Edge doesnt pack a lot of processing power. It comes with a 16-core Intel Xeon CPU and can hold up to two Nvidia Tesla T4 GPUs or optional FPGAs. Atos says that is enough to handle the inference of complex AI models with low latency at the edge.
Because it handles sensitive data, BullSequana Edge also comes with an intrusion sensor that will disable the machine in case of physical attacks.
Most edge devices are placed near cell towers, but since the edge system can be placed anywhere, it can communicate via radio, Global System for Mobile Communications (GSM), or Wi-Fi.
Atos may not be a household name in the U.S., but its on par with IBM in Europe, having acquired IT giants Bull SA, Xerox IT Outsourcing, and Siemens IT all in this past decade.
**More about edge networking:**
* [How edge networking and IoT will reshape data centers][3]
* [Edge computing best practices][4]
* [How edge computing can help secure the IoT][5]
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-the-edge-computing-business.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/01/huawei-18501-edge-gartner-100786331-large.jpg
[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (ninifly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,438 +0,0 @@
Writing a Time Series Database from Scratch
============================================================
I work on monitoring. In particular on [Prometheus][2], a monitoring system that includes a custom time series database, and its integration with [Kubernetes][3].
In many ways Kubernetes represents all the things Prometheus was designed for. It makes continuous deployments, auto scaling, and other features of highly dynamic environments easily accessible. The query language and operational model, among many other conceptual decisions, make Prometheus particularly well-suited for such environments. Yet, if monitored workloads become significantly more dynamic, this also puts new strains on the monitoring system itself. With this in mind, rather than doubling back on problems Prometheus already solves well, we specifically aim to increase its performance in environments with highly dynamic, or transient services.
Prometheus's storage layer has historically shown outstanding performance, where a single server is able to ingest up to one million samples per second as several million time series, all while occupying a surprisingly small amount of disk space. While the current storage has served us well, I propose a newly designed storage subsystem that corrects for shortcomings of the existing solution and is equipped to handle the next order of scale.
> Note: I've no background in databases. What I say might be wrong and mislead. You can channel your criticism towards me (fabxc) in #prometheus on Freenode.
### Problems, Problems, Problem Space
First, a quick outline of what we are trying to accomplish and what key problems it raises. For each, we take a look at Prometheus' current approach, what it does well, and which problems we aim to address with the new design.
### Time series data
We have a system that collects data points over time.
```
identifier -> (t0, v0), (t1, v1), (t2, v2), (t3, v3), ....
```
Each data point is a tuple of a timestamp and a value. For the purpose of monitoring, the timestamp is an integer and the value any number. A 64 bit float turns out to be a good representation for counter as well as gauge values, so we go with that. A sequence of data points with strictly monotonically increasing timestamps is a series, which is addressed by an identifier. Our identifier is a metric name with a dictionary of  _label dimensions_ . Label dimensions partition the measurement space of a single metric. Each metric name plus a unique set of labels is its own  _time series_  that has a value stream associated with it.
This is a typical set of series identifiers that are part of metric counting requests:
```
requests_total{path="/status", method="GET", instance=”10.0.0.1:80”}
requests_total{path="/status", method="POST", instance=”10.0.0.3:80”}
requests_total{path="/", method="GET", instance=”10.0.0.2:80”}
```
Let's simplify this representation right away: A metric name can be treated as just another label dimension — `__name__` in our case. At the query level, it might be treated specially but that doesn't concern our way of storing it, as we will see later.
```
{__name__="requests_total", path="/status", method="GET", instance=”10.0.0.1:80”}
{__name__="requests_total", path="/status", method="POST", instance=”10.0.0.3:80”}
{__name__="requests_total", path="/", method="GET", instance=”10.0.0.2:80”}
```
When querying time series data, we want to do so by selecting series by their labels. In the simplest case `{__name__="requests_total"}` selects all series belonging to the `requests_total` metric. For all selected series, we retrieve data points within a specified time window.
In more complex queries, we may wish to select series satisfying several label selectors at once and also represent more complex conditions than equality. For example, negative (`method!="GET"`) or regular expression matching (`method=~"PUT|POST"`).
This largely defines the stored data and how it is recalled.
### Vertical and Horizontal
In a simplified view, all data points can be laid out on a two-dimensional plane. The  _horizontal_  dimension represents the time and the series identifier space spreads across the  _vertical_  dimension.
```
series
^
│ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="GET"}
│ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="POST"}
│ . . . . . . .
│ . . . . . . . . . . . . . . . . . . . ...
│ . . . . . . . . . . . . . . . . . . . . .
│ . . . . . . . . . . . . . . . . . . . . . {__name__="errors_total", method="POST"}
│ . . . . . . . . . . . . . . . . . {__name__="errors_total", method="GET"}
│ . . . . . . . . . . . . . .
│ . . . . . . . . . . . . . . . . . . . ...
│ . . . . . . . . . . . . . . . . . . . .
v
<-------------------- time --------------------->
```
Prometheus retrieves data points by periodically scraping the current values for a set of time series. The entity from which we retrieve such a batch is called a  _target_ . Thereby, the write pattern is completely vertical and highly concurrent as samples from each target are ingested independently.
To provide some measurement of scale: A single Prometheus instance collects data points from tens of thousands of  _targets_ , which expose hundreds to thousands of different time series each.
At the scale of collecting millions of data points per second, batching writes is a non-negotiable performance requirement. Writing single data points scattered across our disk would be painfully slow. Thus, we want to write larger chunks of data in sequence.
This is an unsurprising fact for spinning disks, as their head would have to physically move to different sections all the time. While SSDs are known for fast random writes, they actually can't modify individual bytes but only write in  _pages_  of 4KiB or more. This means writing a 16 byte sample is equivalent to writing a full 4KiB page. This behavior is part of what is known as [ _write amplification_ ][4], which as a bonus causes your SSD to wear out so it wouldn't just be slow, but literally destroy your hardware within a few days or weeks.
For more in-depth information on the problem, the ["Coding for SSDs" series][5] is an excellent resource. Let's just consider the main takeaway: sequential and batched writes are the ideal write pattern for spinning disks and SSDs alike. A simple rule to stick to.
The querying pattern is significantly more differentiated than the write pattern. We can query a single datapoint for a single series, a single datapoint for 10000 series, weeks of data points for a single series, weeks of data points for 10000 series, etc. So on our two-dimensional plane, queries are neither fully vertical nor horizontal, but a rectangular combination of the two.
[Recording rules][6] mitigate the problem for known queries but are not a general solution for ad-hoc queries, which still have to perform reasonably well.
We know that we want to write in batches, but the only batches we get are vertical sets of data points across series. When querying data points for a series over a time window, not only would it be hard to figure out where the individual points can be found, we'd also have to read from a lot of random places on disk. With possibly millions of touched samples per query, this is slow even on the fastest SSDs. Reads will also retrieve more data from our disk than the requested 16 byte sample. SSDs will load a full page, HDDs will at least read an entire sector. Either way, we are wasting precious read throughput.
So ideally, samples for the same series would be stored sequentially so we can just scan through them with as few reads as possible. On top, we only need to know where this sequence starts to access all data points.
There's obviously a strong tension between the ideal pattern for writing collected data to disk and the layout that would be significantly more efficient for serving queries. It is  _the_  fundamental problem our TSDB has to solve.
#### Current solution
Time to take a look at how Prometheus's current storage, let's call it "V2", addresses this problem.
We create one file per time series that contains all of its samples in sequential order. As appending single samples to all those files every few seconds is expensive, we batch up 1KiB chunks of samples for a series in memory and append those chunks to the individual files, once they are full. This approach solves a large part of the problem. Writes are now batched, samples are stored sequentially. It also enables incredibly efficient compression formats, based on the property that a given sample changes only very little with respect to the previous sample in the same series. Facebook's paper on their Gorilla TSDB describes a similar chunk-based approach and [introduces a compression format][7] that reduces 16 byte samples to an average of 1.37 bytes. The V2 storage uses various compression formats including a variation of Gorilla's.
```
┌──────────┬─────────┬─────────┬─────────┬─────────┐ series A
└──────────┴─────────┴─────────┴─────────┴─────────┘
┌──────────┬─────────┬─────────┬─────────┬─────────┐ series B
└──────────┴─────────┴─────────┴─────────┴─────────┘
. . .
┌──────────┬─────────┬─────────┬─────────┬─────────┬─────────┐ series XYZ
└──────────┴─────────┴─────────┴─────────┴─────────┴─────────┘
chunk 1 chunk 2 chunk 3 ...
```
While the chunk-based approach is great, keeping a separate file for each series is troubling the V2 storage for various reasons:
* We actually need a lot more files than the number of time series we are currently collecting data for. More on that in the section on "Series Churn". With several million files, sooner or later we may run out of [inodes][1] on our filesystem. This is a condition we can only recover from by reformatting our disks, which is as invasive and disruptive as it could be. We generally want to avoid formatting disks specifically to fit a single application.
* Even when chunked, several thousands of chunks per second are completed and ready to be persisted. This still requires thousands of individual disk writes every second. While it is alleviated by also batching up several completed chunks for a series, this in return increases the total memory footprint of data which is waiting to be persisted.
* It's infeasible to keep all files open for reads and writes. In particular because ~99% of data is never queried again after 24 hours. If it is queried though, we have to open up to thousands of files, find and read relevant data points into memory, and close them again. As this would result in high query latencies, data chunks are cached rather aggressively leading to problems outlined further in the section on "Resource Consumption".
* Eventually, old data has to be deleted and data needs to be removed from the front of millions of files. This means that deletions are actually write intensive operations. Additionally, cycling through millions of files and analyzing them makes this a process that often takes hours. By the time it completes, it might have to start over again. Oh yea, and deleting the old files will cause further write amplification for your SSD!
* Chunks that are currently accumulating are only held in memory. If the application crashes, data will be lost. To avoid this, the memory state is periodically checkpointed to disk, which may take significantly longer than the window of data loss we are willing to accept. Restoring the checkpoint may also take several minutes, causing painfully long restart cycles.
The key takeaway from the existing design is the concept of chunks, which we most certainly want to keep. The most recent chunks always being held in memory is also generally good. After all, the most recent data is queried the most by a large margin.
Having one file per time series is a concept we would like to find an alternative to.
### Series Churn
In the Prometheus context, we use the term  _series churn_  to describe that a set of time series becomes inactive, i.e. receives no more data points, and a new set of active series appears instead.
For example, all series exposed by a given microservice instance have a respective “instance” label attached that identifies its origin. If we perform a rolling update of our microservice and swap out every instance with a newer version, series churn occurs. In more dynamic environments those events may happen on an hourly basis. Cluster orchestration systems like Kubernetes allow continuous auto-scaling and frequent rolling updates of applications, potentially creating tens of thousands of new application instances, and with them completely new sets of time series, every day.
```
series
^
│ . . . . . .
│ . . . . . .
│ . . . . . .
│ . . . . . . .
│ . . . . . . .
│ . . . . . . .
│ . . . . . .
│ . . . . . .
│ . . . . .
│ . . . . .
│ . . . . .
v
<-------------------- time --------------------->
```
So even if the entire infrastructure roughly remains constant in size, over time there's a linear growth of time series in our database. While a Prometheus server will happily collect data for 10 million time series, query performance is significantly impacted if data has to be found among a billion series.
#### Current solution
The current V2 storage of Prometheus has an index based on LevelDB for all series that are currently stored. It allows querying series containing a given label pair, but lacks a scalable way to combine results from different label selections.
For example, selecting all series with label `__name__="requests_total"` works efficiently, but selecting all series with `instance="A" AND __name__="requests_total"` has scalability problems. We will later revisit what causes this and which tweaks are necessary to improve lookup latencies.
This problem is in fact what spawned the initial hunt for a better storage system. Prometheus needed an improved indexing approach for quickly searching hundreds of millions of time series.
### Resource consumption
Resource consumption is one of the consistent topics when trying to scale Prometheus (or anything, really). But it's not actually the absolute resource hunger that is troubling users. In fact, Prometheus manages an incredible throughput given its requirements. The problem is rather its relative unpredictability and instability in face of changes. By its architecture the V2 storage slowly builds up chunks of sample data, which causes the memory consumption to ramp up over time. As chunks get completed, they are written to disk and can be evicted from memory. Eventually, Prometheus's memory usage reaches a steady state. That is until the monitored environment changes —  _series churn_  increases the usage of memory, CPU, and disk IO every time we scale an application or do a rolling update.
If the change is ongoing, it will yet again reach a steady state eventually but it will be significantly higher than in a more static environment. Transition periods are often multiple hours long and it is hard to determine what the maximum resource usage will be.
The approach of having a single file per time series also makes it way too easy for a single query to knock out the Prometheus process. When querying data that is not cached in memory, the files for queried series are opened and the chunks containing relevant data points are read into memory. If the amount of data exceeds the memory available, Prometheus quits rather ungracefully by getting OOM-killed.
After the query is completed the loaded data can be released again but it is generally cached much longer to serve subsequent queries on the same data faster. The latter is a good thing obviously.
Lastly, we looked at write amplification in the context of SSDs and how Prometheus addresses it by batching up writes to mitigate it. Nonetheless, in several places it still causes write amplification by having too small batches and not aligning data precisely on page boundaries. For larger Prometheus servers, a reduced hardware lifetime was observed in the real world. Chances are that this is still rather normal for database applications with high write throughput, but we should keep an eye on whether we can mitigate it.
### Starting Over
By now we have a good idea of our problem domain, how the V2 storage solves it, and where its design has issues. We also saw some great concepts that we want to adapt more or less seamlessly. A fair amount of V2's problems can be addressed with improvements and partial redesigns, but to keep things fun (and after carefully evaluating my options, of course), I decided to take a stab at writing an entire time series database — from scratch, i.e. writing bytes to the file system.
The critical concerns of performance and resource usage are a direct consequence of the chosen storage format. We have to find the right set of algorithms and disk layout for our data to implement a well-performing storage layer.
This is where I take the shortcut and drive straight to the solution — skip the headache, failed ideas, endless sketching, tears, and despair.
### V3 — Macro Design
What's the macro layout of our storage? In short, everything that is revealed when running `tree` on our data directory. Just looking at that gives us a surprisingly good picture of what is going on.
```
$ tree ./data
./data
├── b-000001
│ ├── chunks
│ │ ├── 000001
│ │ ├── 000002
│ │ └── 000003
│ ├── index
│ └── meta.json
├── b-000004
│ ├── chunks
│ │ └── 000001
│ ├── index
│ └── meta.json
├── b-000005
│ ├── chunks
│ │ └── 000001
│ ├── index
│ └── meta.json
└── b-000006
├── meta.json
└── wal
├── 000001
├── 000002
└── 000003
```
At the top level, we have a sequence of numbered blocks, prefixed with `b-`. Each block obviously holds a file containing an index and a "chunk" directory holding more numbered files. The “chunks” directory contains nothing but raw chunks of data points for various series. Just as for V2, this makes reading series data over a time window very cheap and allows us to apply the same efficient compression algorithms. The concept has proven to work well and we stick with it. Obviously, there is no longer a single file per series but instead a handful of files holds chunks for many of them.
The existence of an “index” file should not be surprising. Let's just assume it contains a lot of black magic allowing us to find labels, their possible values, entire time series and the chunks holding their data points.
But why are there several directories containing the layout of index and chunk files? And why does the last one contain a "wal" directory instead? Understanding those two questions solves about 90% of our problems.
#### Many Little Databases
We partition our  _horizontal_  dimension, i.e. the time space, into non-overlapping blocks. Each block acts as a fully independent database containing all time series data for its time window. Hence, it has its own index and set of chunk files.
```
t0 t1 t2 t3 now
┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
│ │ │ │ │ │ │ │ ┌────────────┐
│ │ │ │ │ │ │ mutable │ <─── write ──── ┤ Prometheus │
│ │ │ │ │ │ │ │ └────────────┘
└───────────┘ └───────────┘ └───────────┘ └───────────┘ ^
└──────────────┴───────┬──────┴──────────────┘ │
│ query
│ │
merge ─────────────────────────────────────────────────┘
```
Every block of data is immutable. Of course, we must be able to add new series and samples to the most recent block as we collect new data. For this block, all new data is written to an in-memory database that provides the same lookup properties as our persistent blocks. The in-memory data structures can be updated efficiently. To prevent data loss, all incoming data is also written to a temporary  _write ahead log_ , which is the set of files in our “wal” directory, from which we can re-populate the in-memory database on restart.
All these files come with their own serialization format, which comes with all the things one would expect: lots of flags, offsets, varints, and CRC32 checksums. Good fun to come up with, rather boring to read about.
This layout allows us to fan out queries to all blocks relevant to the queried time range. The partial results from each block are merged back together to form the overall result.
This horizontal partitioning adds a few great capabilities:
* When querying a time range, we can easily ignore all data blocks outside of this range. It trivially addresses the problem of  _series churn_  by reducing the set of inspected data to begin with.
* When completing a block, we can persist the data from our in-memory database by sequentially writing just a handful of larger files. We avoid any write-amplification and serve SSDs and HDDs equally well.
* We keep the good property of V2 that recent chunks, which are queried most, are always hot in memory.
* Nicely enough, we are also no longer bound to the fixed 1KiB chunk size to better align data on disk. We can pick any size that makes the most sense for the individual data points and chosen compression format.
* Deleting old data becomes extremely cheap and instantaneous. We merely have to delete a single directory. Remember, in the old storage we had to analyze and re-write up to hundreds of millions of files, which could take hours to converge.
Each block also contains a `meta.json` file. It simply holds human-readable information about the block to easily understand the state of our storage and the data it contains.
##### mmap
Moving from millions of small files to a handful of larger files allows us to keep all files open with little overhead. This unblocks the usage of [`mmap(2)`][8], a system call that allows us to transparently back a virtual memory region by file contents. For simplicity, you might want to think of it like swap space, just that all our data is on disk already and no writes occur when swapping data out of memory.
This means we can treat all contents of our database as if they were in memory without occupying any physical RAM. Only if we access certain byte ranges in our database files, the operating system lazily loads pages from disk. This puts the operating system in charge of all memory management related to our persisted data. Generally, it is more qualified to make such decisions, as it has the full view on the entire machine and all its processes. Queried data can be rather aggressively cached in memory, yet under memory pressure the pages will be evicted. If the machine has unused memory, Prometheus will now happily cache the entire database, yet will immediately return it once another application needs it.
Therefore, queries can no longer easily OOM our process by querying more persisted data than fits into RAM. The memory cache size becomes fully adaptive and data is only loaded once the query actually needs it.
From my understanding, this is how a lot of databases work today and an ideal way to do it if the disk format allows — unless one is confident to outsmart the OS from within the process. We certainly get a lot of capabilities with little work from our side.
#### Compaction
The storage has to periodically "cut" a new block and write the previous one, which is now completed, onto disk. Only after the block was successfully persisted, the write ahead log files, which are used to restore in-memory blocks, are deleted.
We are interested in keeping each block reasonably short (about two hours for a typical setup) to avoid accumulating too much data in memory. When querying multiple blocks, we have to merge their results into an overall result. This merge procedure obviously comes with a cost and a week-long query should not have to merge 80+ partial results.
To achieve both, we introduce  _compaction_ . Compaction describes the process of taking one or more blocks of data and writing them into a, potentially larger, block. It can also modify existing data along the way, e.g. dropping deleted data, or restructuring our sample chunks for improved query performance.
```
t0 t1 t2 t3 t4 now
┌────────────┐ ┌──────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
│ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 mutable │ before
└────────────┘ └──────────┘ └───────────┘ └───────────┘ └───────────┘
┌─────────────────────────────────────────┐ ┌───────────┐ ┌───────────┐
│ 1 compacted │ │ 4 │ │ 5 mutable │ after (option A)
└─────────────────────────────────────────┘ └───────────┘ └───────────┘
┌──────────────────────────┐ ┌──────────────────────────┐ ┌───────────┐
│ 1 compacted │ │ 3 compacted │ │ 5 mutable │ after (option B)
└──────────────────────────┘ └──────────────────────────┘ └───────────┘
```
In this example we have the sequential blocks `[1, 2, 3, 4]`. Blocks 1, 2, and 3 can be compacted together and the new layout is `[1, 4]`. Alternatively, compact them in pairs of two into `[1, 3]`. All time series data still exist but now in fewer blocks overall. This significantly reduces the merging cost at query time as fewer partial query results have to be merged.
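As a rough sketch of the merge step only (the real implementation also rewrites the block index and drops deleted data), compacting blocks amounts to merging the time-sorted samples of each series into a new, larger block. The data layout below is purely illustrative:

```
import heapq

def compact(blocks):
    """Merge several blocks into one larger block. Each block is modeled as a
    dict mapping a series ID to a time-sorted list of (timestamp, value)
    samples; the merged samples stay sorted by timestamp."""
    series_ids = set().union(*(b.keys() for b in blocks))
    merged = {}
    for sid in series_ids:
        runs = [b[sid] for b in blocks if sid in b]
        merged[sid] = list(heapq.merge(*runs))
    return merged

# Option A from the diagram: blocks 1, 2, and 3 become one compacted block.
# compacted = compact([block_1, block_2, block_3])
```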
#### Retention
We saw that deleting old data was a slow process in the V2 storage and put a toll on CPU, memory, and disk alike. How can we drop old data in our block based design? Quite simply, by just deleting the directory of a block that has no data within our configured retention window. In the example below, block 1 can safely be deleted, whereas 2 has to stick around until it falls fully behind the boundary.
```
|
┌────────────┐ ┌────┼─────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
│ 1 │ │ 2 | │ │ 3 │ │ 4 │ │ 5 │ . . .
└────────────┘ └────┼─────┘ └───────────┘ └───────────┘ └───────────┘
|
|
retention boundary
```
The older data gets, the larger the blocks may become as we keep compacting previously compacted blocks. An upper limit has to be applied so blocks don't grow to span the entire database and thus diminish the original benefits of our design.
Conveniently, this also limits the total disk overhead of blocks that are partially inside and partially outside of the retention window, i.e. block 2 in the example above. When setting the maximum block size at 10% of the total retention window, our total overhead of keeping block 2 around is also bound by 10%.
Summed up, retention deletion goes from very expensive, to practically free.
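A hedged sketch of what that amounts to; the `meta.json` field names and millisecond timestamps are assumptions for illustration, not the storage's actual schema:

```
import json, shutil, time
from pathlib import Path

def apply_retention(data_dir, retention_hours):
    """Delete every block directory whose data lies entirely behind the
    retention boundary (assumed millisecond 'maxTime' field in meta.json)."""
    boundary_ms = (time.time() - retention_hours * 3600) * 1000
    for block in Path(data_dir).iterdir():
        meta_path = block / "meta.json"
        if not meta_path.is_file():
            continue
        meta = json.loads(meta_path.read_text())
        if meta["maxTime"] < boundary_ms:
            shutil.rmtree(block)  # retention deletion is just removing a directory
```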
> _If you've come this far and have some background in databases, you might be asking one thing by now: Is any of this new? — Not really; and probably for the better._
>
> _The pattern of batching data up in memory, tracked in a write ahead log, and periodically flushed to disk is ubiquitous today._
> _The benefits we have seen apply almost universally regardless of the data's domain specifics. Prominent open source examples following this approach are LevelDB, Cassandra, InfluxDB, or HBase. The key takeaway is to avoid reinventing an inferior wheel, research proven methods instead, and apply them with the right twist._
> _Running out of places to add your own magic dust later is an unlikely scenario._
### The Index
The initial motivation to investigate storage improvements was the problems brought by  _series churn_ . The block-based layout reduces the total number of series that have to be considered for serving a query. So assuming our index lookup was of complexity  _O(n^2)_ , we managed to reduce the  _n_  a fair amount and now have an improved complexity of  _O(n^2)_  — uhm, wait... damnit.
A quick flashback to "Algorithms 101" reminds us that this, in theory, did not buy us anything. If things were bad before, they are just as bad now. Theory can be depressing.
In practice, most of our queries will already be answered significantly faster. Yet, queries spanning the full time range remain slow even if they just need to find a handful of series. My original idea, dating back way before all this work was started, was a solution to exactly this problem: we need a more capable [ _inverted index_ ][9].
An inverted index provides a fast lookup of data items based on a subset of their contents. Simply put, I can look up all series that have a label `app="nginx"` without having to walk through every single series and check whether it contains that label.
For that, each series is assigned a unique ID by which it can be retrieved in constant time, i.e. O(1). In this case the ID is our  _forward index_ .
> Example: If the series with IDs 10, 29, and 9 contain the label `app="nginx"`, the inverted index for the label "nginx" is the simple list `[10, 29, 9]`, which can be used to quickly retrieve all series containing the label. Even if there were 20 billion further series, it would not affect the speed of this lookup.
In short, if  _n_  is our total number of series, and  _m_  is the result size for a given query, the complexity of our query using the index is now  _O(m)_ . Queries scaling with the amount of data they retrieve ( _m_ ) instead of the data body being searched ( _n_ ) is a great property as  _m_  is generally significantly smaller.
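A toy Python sketch of the two structures (the names and layout are illustrative, not the actual on-disk format):

```
# Forward index: series ID -> series descriptor (its label set).
series = {
    9:  {"__name__": "requests_total", "app": "nginx"},
    10: {"__name__": "requests_total", "app": "nginx"},
    29: {"__name__": "errors_total",   "app": "nginx"},
}

# Inverted index: label pair -> IDs of all series carrying that pair.
inverted = {}
for sid, labels in series.items():
    for pair in labels.items():
        inverted.setdefault(pair, []).append(sid)

# O(m) lookup: only the matching IDs are touched, never all n series.
print(inverted[("app", "nginx")])  # [9, 10, 29]
```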
For brevity, let's assume we can retrieve the inverted index list itself in constant time.
Actually, this is almost exactly the kind of inverted index V2 has and a minimum requirement to serve performant queries across millions of series. The keen observer will have noticed that, in the worst case, a label exists in all series and thus  _m_  is, again, in  _O(n)_ . This is expected and perfectly fine. If you query all data, it naturally takes longer. Things become problematic once we get involved with more complex queries.
#### Combining Labels
Labels associated with millions of series are common. Suppose a horizontally scaling "foo" microservice with hundreds of instances with thousands of series each. Every single series will have the label `app="foo"`. Of course, one generally won't query all series but will restrict the query by further labels, e.g. I want to know how many requests my service instances received and query `__name__="requests_total" AND app="foo"`.
To find all series satisfying both label selectors, we take the inverted index list for each and intersect them. The resulting set will typically be orders of magnitude smaller than each input list individually. As each input list has the worst case size O(n), the brute force solution of nested iteration over both lists has a runtime of O(n^2). The same cost applies for other set operations, such as the union (`app="foo" OR app="bar"`). When adding further label selectors to the query, the exponent increases for each to O(n^3), O(n^4), O(n^5), ... O(n^k). A lot of tricks can be played to minimize the effective runtime by changing the execution order. The more sophisticated the approach, the more knowledge about the shape of the data and the relationships between labels is needed. This introduces a lot of complexity, yet does not decrease our algorithmic worst case runtime.
This is essentially the approach in the V2 storage and luckily a seemingly slight modification is enough to gain significant improvements. What happens if we assume that the IDs in our inverted indices are sorted?
Suppose this example of lists for our initial query:
```
__name__="requests_total" -> [ 999, 1000, 1001, 2000000, 2000001, 2000002, 2000003 ]
app="foo" -> [ 1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002 ]
intersection => [ 1000, 1001 ]
```
The intersection is fairly small. We can find it by setting a cursor at the beginning of each list and always advancing the one at the smaller number. When both numbers are equal, we add the number to our result and advance both cursors. Overall, we scan both lists in this zig-zag pattern and thus have a total cost of  _O(2n) = O(n)_  as we only ever move forward in either list.
The procedure for more than two lists of different set operations works similarly. So the number  _k_  of set operations merely modifies the factor ( _O(k*n)_ ) instead of the exponent ( _O(n^k)_ ) of our worst-case lookup runtime. A great improvement.
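For illustration, a minimal Python version of the zig-zag intersection, using the example lists from above:

```
def intersect_postings(a, b):
    """Intersect two sorted ID lists in O(len(a) + len(b))."""
    i = j = 0
    result = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            result.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return result

requests_total = [999, 1000, 1001, 2000000, 2000001, 2000002, 2000003]
app_foo = [1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002]
print(intersect_postings(requests_total, app_foo))  # [1000, 1001]
```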
What I described here is a simplified version of the canonical search index used by practically any [full text search engine][10] out there. Every series descriptor is treated as a short "document", and every label (name + fixed value) as a "word" inside of it. We can ignore a lot of additional data typically encountered in search engine indices, such as word position and frequency data.
Seemingly endless research exists on approaches improving the practical runtime, often making some assumptions about the input data. Unsurprisingly, there are also plenty of techniques to compress inverted indices that come with their own benefits and drawbacks. As our "documents" are tiny and the "words" are hugely repetitive across all series, compression becomes almost irrelevant. For example, a real-world dataset of ~4.4 million series with about 12 labels each has less than 5,000 unique labels. For our initial storage version, we stick to the basic approach without compression, and just a few simple tweaks added to skip over large ranges of non-intersecting IDs.
While keeping the IDs sorted may sound simple, it is not always a trivial invariant to keep up. For instance, the V2 storage assigns hashes as IDs to new series and we cannot efficiently build up sorted inverted indices.
Another daunting task is modifying the indices on disk as data gets deleted or updated. Typically, the easiest approach is to simply recompute and rewrite them; the hard part is doing so while keeping the database queryable and consistent. The V3 storage does exactly this by having a separate immutable index per block that is only modified via rewrite on compaction. Only the indices for the mutable blocks, which are held entirely in memory, need to be updated.
### Benchmarking
I started initial development of the storage with a benchmark based on ~4.4 million series descriptors extracted from a real world data set and generated synthetic data points to feed into those series. This iteration just tested the stand-alone storage and was crucial to quickly identify performance bottlenecks and trigger deadlocks only experienced under highly concurrent load.
After the conceptual implementation was done, the benchmark could sustain a write throughput of 20 million data points per second on my Macbook Pro — all while a dozen Chrome tabs and Slack were running. So while this sounded all great it also indicated that there's no further point in pushing this benchmark (or running it in a less random environment for that matter). After all, it is synthetic and thus not worth much beyond a good first impression. Starting out about 20x above the initial design target, it was time to embed this into an actual Prometheus server, adding all the practical overhead and flakes only experienced in more realistic environments.
We actually had no reproducible benchmarking setup for Prometheus, in particular none that allowed A/B testing of different versions. Concerning in hindsight, but [now we have one][11]!
Our tool allows us to declaratively define a benchmarking scenario, which is then deployed to a Kubernetes cluster on AWS. While this is not the best environment for all-out benchmarking, it certainly reflects our user base better than dedicated bare metal servers with 64 cores and 128GB of memory.
We deploy two Prometheus 1.5.2 servers (V2 storage) and two Prometheus servers from the 2.0 development branch (V3 storage). Each Prometheus server runs on a dedicated machine with an SSD. A horizontally scaled application exposing typical microservice metrics is deployed to worker nodes. Additionally, the Kubernetes cluster itself and the nodes are being monitored. The whole setup is supervised by yet another Meta-Prometheus, monitoring each Prometheus server for health and performance.
To simulate series churn, the microservice is periodically scaled up and down to remove old pods and spawn new pods, exposing new series. Query load is simulated by a selection of "typical" queries, run against one server of each Prometheus version.
Overall the scaling and querying load as well as the sampling frequency significantly exceed today's production deployments of Prometheus. For instance, we swap out 60% of our microservice instances every 15 minutes to produce series churn. This would likely only happen 1-5 times a day in a modern infrastructure. This ensures that our V3 design is capable of handling the workloads of the years ahead. As a result, the performance differences between Prometheus 1.5.2 and 2.0 are larger than in a more moderate environment.
In total, we are collecting about 110,000 samples per second from 850 targets exposing half a million series at a time.
After leaving this setup running for a while, we can take a look at the numbers. We evaluate several metrics over the first 12 hours within which both versions reached a steady state.
> Be aware of the slightly truncated Y axis in screen shots from the Prometheus graph UI.
![Heap usage GB](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/heap_usage.png)
> _Heap memory usage in GB_
Memory usage is the most troubling resource for users today as it is relatively unpredictable and it may cause the process to crash.
Obviously, the queried servers are consuming more memory, which can largely be attributed to the overhead of the query engine, which will be subject to future optimizations. Overall, Prometheus 2.0's memory consumption is reduced by 3-4x. After about six hours, there is a clear spike in Prometheus 1.5, which aligns with our retention boundary at six hours. As deletions are quite costly, resource consumption ramps up. This will become visible throughout various other graphs below.
![CPU usage cores](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/cpu_usage.png)
> _CPU usage in cores/second_
A similar pattern shows for CPU usage, but the delta between queried and non-queried servers is more significant. Averaging at about 0.5 cores/sec while ingesting about 110,000 samples/second, our new storage becomes almost negligible compared to the cycles spent on query evaluation. In total the new storage needs 3-10 times fewer CPU resources.
![Disk writes](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_writes.png)
> _Disk writes in MB/second_
By far the most dramatic and unexpected improvement shows in the write utilization of our disk. It clearly shows why Prometheus 1.5 is prone to wear out SSDs. We see an initial ramp-up as soon as the first chunks are persisted into the series files and a second ramp-up once deletion starts rewriting them. Surprisingly, the queried and non-queried servers show very different utilization.
Prometheus 2.0, on the other hand, merely writes about a single megabyte per second to its write ahead log. Writes periodically spike when blocks are compacted to disk. Overall savings: a staggering 97-99%.
![Disk usage](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_usage.png)
> _Disk size in GB_
Closely related to disk writes is the total amount of occupied disk space. As we are using almost the same compression algorithm for samples, which is the bulk of our data, they should be about the same. In a more stable setup that would largely be true, but as we are dealing with high  _series churn_ , there's also the per-series overhead to consider.
As we can see, Prometheus 1.5 ramps up storage space a lot faster before both versions reach a steady state as the retention kicks in. Prometheus 2.0 seems to have a significantly lower overhead per individual series. We can nicely see how space is linearly filled up by the write ahead log and instantaneously drops as it gets compacted. The fact that the lines for both Prometheus 2.0 servers do not exactly match needs further investigation.
This all looks quite promising. The important piece left is query latency. The new index should have improved our lookup complexity. What has not substantially changed is processing of this data, e.g. in `rate()` functions or aggregations. Those aspects are part of the query engine.
![Query latency](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/query_latency.png)
> _99th percentile query latency in seconds_
Expectations are completely met by the data. In Prometheus 1.5 the query latency increases over time as more series are stored. It only levels off once retention starts and old series are deleted. In contrast, Prometheus 2.0 stays flat right from the beginning.
Some caution must be taken on how this data was collected. The queries fired against the servers were chosen by estimating a good mix of range and instant queries, doing heavier and more lightweight computations, and touching few or many series. It does not necessarily represent a real-world distribution of queries. It is also not representative of queries hitting cold data, and we can assume that all sample data is practically always hot in memory in either storage.
Nonetheless, we can say with good confidence, that the overall query performance became very resilient to series churn and improved by up to 4x in our straining benchmarking scenario. In a more static environment, we can assume query time to be mostly spent in the query engine itself and the improvement to be notably lower.
![Ingestion rate](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/ingestion_rate.png)
> _Ingested samples/second_
Lastly, a quick look into our ingestion rates of the different Prometheus servers. We can see that both servers with the V3 storage have the same ingestion rate. After a few hours it becomes unstable, which is caused by various nodes of the benchmarking cluster becoming unresponsive due to high load rather than the Prometheus instances. (The fact that both 2.0 lines exactly match is hopefully convincing enough.)
Both Prometheus 1.5.2 servers start suffering from significant drops in ingestion rate even though more CPU and memory resources are available. The high stress of series churn causes a larger amount of data to not be collected.
But what's the  _absolute maximum_  number of samples per second you could ingest now?
I don't know — and deliberately don't care.
There are a lot of factors that shape the data flowing into Prometheus and there is no single number capable of capturing quality. Maximum ingestion rate has historically been a metric leading to skewed benchmarks and neglect of more important aspects such as query performance and resilience to series churn. The rough assumption that resource usage increases linearly was confirmed by some basic testing. It is easy to extrapolate what could be possible.
Our benchmarking setup simulates a highly dynamic environment stressing Prometheus more than most real-world setups today. The results show we went way above our initial design goal, while running on non-optimal cloud servers. Ultimately, success will be determined by user feedback rather than benchmarking numbers.
> Note:  _At time of writing this, Prometheus 1.6 is in development, which will allow configuring the maximum memory usage more reliably and may notably reduce overall consumption in favor of slightly increased CPU utilization. I did not repeat the tests against this as the overall results still hold, especially when facing high series churn._
### Conclusion
Prometheus sets out to handle high cardinality of series and throughput of individual samples. It remains a challenging task, but the new storage seems to position us well for the hyper-scale, hyper-convergent, GIFEE infrastructure of the futu... well, it seems to work pretty well.
A [first alpha release of Prometheus 2.0][12] with the new V3 storage is available for testing. Expect crashes, deadlocks, and other bugs at this early stage.
The code for the storage itself can be found [in a separate project][13]. It's surprisingly agnostic to Prometheus itself and could be useful for a wider range of applications looking for an efficient local storage time series database.
> _There's a long list of people to thank for their contributions to this work. Here they go in no particular order:_
>
> _The groundlaying work by Bjoern Rabenstein and Julius Volz on the V2 storage engine and their feedback on V3 was fundamental to everything seen in this new generation._
>
> _Wilhelm Bierbaum's ongoing advice and insight contributed significantly to the new design. Brian Brazil's continous feedback ensured that we ended up with a semantically sound approach. Insightful discussions with Peter Bourgon validated the design and shaped this write-up._
>
> _Not to forget my entire team at CoreOS and the company itself for supporting and sponsoring this work. Thanks to everyone who listened to my ramblings about SSDs, floats, and serialization formats again and again._
--------------------------------------------------------------------------------
via: https://fabxc.org/blog/2017-04-10-writing-a-tsdb/
作者:[Fabian Reinartz ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/fabxc
[1]:https://en.wikipedia.org/wiki/Inode
[2]:https://prometheus.io/
[3]:https://kubernetes.io/
[4]:https://en.wikipedia.org/wiki/Write_amplification
[5]:http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/
[6]:https://prometheus.io/docs/practices/rules/
[7]:http://www.vldb.org/pvldb/vol8/p1816-teller.pdf
[8]:https://en.wikipedia.org/wiki/Mmap
[9]:https://en.wikipedia.org/wiki/Inverted_index
[10]:https://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices
[11]:https://github.com/prometheus/prombench
[12]:https://prometheus.io/blog/2017/04/10/promehteus-20-sneak-peak/
[13]:https://github.com/prometheus/tsdb


@ -1,146 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 projects for Raspberry Pi at home)
[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
5 projects for Raspberry Pi at home
======
![5 projects for Raspberry Pi at home][1]
The [Raspberry Pi][2] computer can be used in all kinds of settings and for a variety of purposes. It obviously has a place in education for helping students with learning programming and maker skills in the classroom and the hackspace, and it has plenty of industrial applications in the workplace and in factories. I'm going to introduce five projects you might want to build in your own home.
### Media center
One of the most common uses for Raspberry Pi in people's homes is behind the TV running media center software serving multimedia files. It's easy to set this up, and the Raspberry Pi provides plenty of GPU (Graphics Processing Unit) power to render HD TV shows and movies to your big screen TV. [Kodi][3] (formerly XBMC) on a Raspberry Pi is a great way to playback any media you have on a hard drive or network-attached storage. You can also install a plugin to play YouTube videos.
There are a few different options available, most prominently [OSMC][4] (Open Source Media Center) and [LibreELEC][5], both based on Kodi. They both perform well at playing media content, but OSMC has a more visually appealing user interface, while LibreELEC is much more lightweight. All you have to do is choose a distribution, download the image and install it on an SD card (or just use [NOOBS][6]), boot it up, and you're ready to go.
![LibreElec ][7]
LibreElec; Raspberry Pi Foundation, CC BY-SA
![OSMC][8]
OSMC.tv, Copyright, Used with permission
Before proceeding you'll need to decide [which Raspberry Pi model to use][9]. These distributions will work on any Pi (1, 2, 3, or Zero), and video playback will essentially be matched on each of these. Apart from the Pi 3 (and Zero W) having built-in Wi-Fi, the only noticeable difference is the reaction speed of the user interface, which will be much faster on a Pi 3. A Pi 2 will not be much slower, so that's fine if you don't need Wi-Fi, but the Pi 3 will noticeably outperform the Pi 1 and Zero when it comes to flicking through the menus.
### SSH gateway
If you want to be able to access computers and devices on your home network from outside over the internet, you have to open up ports on those devices to allow outside traffic. Opening ports to the internet is a security risk, meaning you're always at risk of attack, misuse, or any kind of unauthorized access. However, if you install a Raspberry Pi on your network and set up port forwarding to allow only SSH access to that Pi, you can use that as a secure gateway to hop onto other Pis and PCs on the network.
Most routers allow you to configure port-forwarding rules. You'll need to give your Pi a fixed internal IP address and set up port 22 on your router to map to port 22 on your Raspberry Pi. If your ISP provides you with a static IP address, you'll be able to SSH into it with this as the host address (for example, **ssh pi@123.45.56.78** ). If you have a domain name, you can configure a subdomain to point to this IP address, so you don't have to remember it (for example, **ssh [pi@home.mydomain.com][10]** ).
![][11]
However, if you're going to expose a Raspberry Pi to the internet, you should be very careful not to put your network at risk. There are a few simple procedures you can follow to make it sufficiently secure:
1\. Most people suggest you change your login password (which makes sense, seeing as the default password "raspberry" is well known), but this does not protect against brute-force attacks. You could change your password and add two-factor authentication (so you need your password _and_ a time-dependent passcode generated by your phone), which is more secure. However, I believe the best way to secure your Raspberry Pi from intruders is to [disable "password authentication"][12] in your SSH configuration, so you allow only SSH key access. This means that anyone trying to SSH in by guessing your password will never succeed. Only with your private SSH key can anyone gain access. Similarly, most people suggest changing the SSH port from the default 22 to something unexpected, but a simple [Nmap][13] scan of your IP address will reveal your true SSH port.
2\. Ideally, you would not run much in the way of other software on this Pi, so you don't end up accidentally exposing anything else. If you want to run other software, you might be better running it on another Pi on the network that is not exposed to the internet. Ensure that you keep your packages up to date by upgrading regularly, particularly the **openssh-server** package, so that any security vulnerabilities are patched.
3\. Install [sshblack][14] or [fail2ban][15] to blacklist any users who seem to be acting maliciously, such as attempting to brute force your SSH password.
Once you've secured your Raspberry Pi and put it online, you'll be able to log in to your network from anywhere in the world. Once you're on your Raspberry Pi, you can SSH into other devices on the network using their local IP address (for example, 192.168.1.31). If you have passwords on these devices, just use the password. If they're also SSH-key-only, you'll need to ensure your key is forwarded over SSH by using the **-A** flag: **ssh -A pi@123.45.67.89**.
### CCTV / pet camera
Another great home project is to set up a camera module to take photos or stream video, capture and save files, or stream them internally or to the internet. There are many reasons you might want to do this, but two common use cases are for a homemade security camera or to monitor a pet.
The [Raspberry Pi camera module][16] is a brilliant accessory. It provides full HD photo and video, lots of advanced configuration, and is [easy to program][17]. The [infrared camera][18] is ideal for this kind of use, and with an infrared LED (which the Pi can control) you can see in the dark!
If you want to take still images on a regular basis to keep an eye on things, you can just write a short [Python][19] script or use the command line tool [raspistill][20], and schedule it to recur in [Cron][21]. You might want to have it save them to [Dropbox][22] or another web service, upload them to a web server, or you can even create a [web app][23] to display them.
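For example, a minimal capture script along those lines might look like the following; the output path and resolution are placeholders, and the script assumes the target directory already exists:

```
from datetime import datetime
from picamera import PiCamera

# Capture one timestamped still image; schedule this script via Cron.
camera = PiCamera()
camera.resolution = (1920, 1080)
filename = datetime.now().strftime("/home/pi/captures/%Y-%m-%d_%H-%M-%S.jpg")
camera.capture(filename)
camera.close()
```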
If you want to stream video, internally or externally, that's really easy, too. A simple MJPEG (Motion JPEG) example is provided in the [picamera documentation][24] (under “web streaming”). Just download or copy that code into a file, run it and visit the Pi's IP address at port 8000, and you'll see your camera's output live.
A more advanced streaming project, [pistreaming][25], is available, which uses [JSMpeg][26] (a JavaScript video player) with the web server and a websocket for the camera stream running separately. This method is more performant and is just as easy to get running as the previous example, but there is more code involved and if set up to stream on the internet, requires you to open two ports.
Once you have web streaming set up, you can position the camera where you want it. I have one set up to keep an eye on my pet tortoise:
![Tortoise ][27]
Ben Nuttall, CC BY-SA
If you want to be able to control where the camera actually points, you can do so using servos. A neat solution is to use Pimoroni's [Pan-Tilt HAT][28], which allows you to move the camera easily in two dimensions. To integrate this with pistreaming, see the project's [pantilthat branch][29].
![Pan-tilt][30]
Pimoroni.com, Copyright, Used with permission
If you want to position your Pi outside, you'll need a waterproof enclosure and some way of getting power to the Pi. PoE (Power-over-Ethernet) cables can be a good way of achieving this.
### Home automation and IoT
It's 2017 and there are internet-connected devices everywhere, especially in the home. Our lightbulbs have Wi-Fi, our toasters are smarter than they used to be, and our tea kettles are at risk of attack from Russia. As long as you keep your devices secure, or don't connect them to the internet if they don't need to be, then you can make great use of IoT devices to automate tasks around the home.
There are plenty of services you can buy or subscribe to, like Nest Thermostat or Philips Hue lightbulbs, which allow you to control your heating or your lighting from your phone, respectively—whether you're inside or away from home. You can use a Raspberry Pi to boost the power of these kinds of devices by automating interactions with them according to a set of rules involving timing or even sensors. One thing you can't do with Philips Hue is have the lights come on when you enter the room, but with a Raspberry Pi and a motion sensor, you can use a Python API to turn on the lights. Similarly, you can configure your Nest to turn on the heating when you're at home, but what if you only want it to turn on if there are at least two people home? Write some Python code to check which phones are on the network and, if there are at least two, tell the Nest to turn on the heat.
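As a rough sketch of that phone-counting idea (the IP addresses are placeholders and the thermostat call is left as a stub, since it depends on your particular device's API):

```
import subprocess

PHONES = ["192.168.1.21", "192.168.1.22", "192.168.1.23"]  # placeholder IPs

def is_home(ip):
    # One quick ping; phones in deep sleep may need retries or an ARP check.
    return subprocess.call(
        ["ping", "-c", "1", "-W", "1", ip], stdout=subprocess.DEVNULL
    ) == 0

def turn_on_heating():
    print("At least two people are home; tell the thermostat to heat")  # stub

if sum(is_home(ip) for ip in PHONES) >= 2:
    turn_on_heating()
```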
You can do a great deal more without integrating with existing IoT devices and with only using simple components. A homemade burglar alarm, an automated chicken coop door opener, a night light, a music box, a timed heat lamp, an automated backup server, a print server, or whatever you can imagine.
### Tor proxy and blocking ads
Adafruit's [Onion Pi][31] is a [Tor][32] proxy that makes your web traffic anonymous, allowing you to use the internet free of snoopers and any kind of surveillance. Follow Adafruit's tutorial on setting up Onion Pi and you're on your way to a peaceful anonymous browsing experience.
![Onion-Pi][33]
Onion-pi from Adafruit, Copyright, Used with permission
![Pi-hole][34] You can install a Raspberry Pi on your network that intercepts all web traffic and filters out any advertising. Simply download the [Pi-hole][35] software onto the Pi, and all devices on your network will be ad-free (it even blocks in-app ads on your mobile devices).
There are plenty more uses for the Raspberry Pi at home. What do you use Raspberry Pi for at home? What do you want to use it for?
Let us know in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home)
[2]: https://www.raspberrypi.org/
[3]: https://kodi.tv/
[4]: https://osmc.tv/
[5]: https://libreelec.tv/
[6]: https://www.raspberrypi.org/downloads/noobs/
[7]: https://opensource.com/sites/default/files/libreelec_0.png (LibreElec )
[8]: https://opensource.com/sites/default/files/osmc.png (OSMC)
[9]: https://opensource.com/life/16/10/which-raspberry-pi-should-you-choose-your-project
[10]: mailto:pi@home.mydomain.com
[11]: https://opensource.com/sites/default/files/resize/screenshot_from_2017-04-07_15-13-01-700x380.png
[12]: http://stackoverflow.com/questions/20898384/ssh-disable-password-authentication
[13]: https://nmap.org/
[14]: http://www.pettingers.org/code/sshblack.html
[15]: https://www.fail2ban.org/wiki/index.php/Main_Page
[16]: https://www.raspberrypi.org/products/camera-module-v2/
[17]: https://opensource.com/life/15/6/raspberry-pi-camera-projects
[18]: https://www.raspberrypi.org/products/pi-noir-camera-v2/
[19]: http://picamera.readthedocs.io/
[20]: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md
[21]: https://www.raspberrypi.org/documentation/linux/usage/cron.md
[22]: https://github.com/RZRZR/plant-cam
[23]: https://github.com/bennuttall/bett-bot
[24]: http://picamera.readthedocs.io/en/release-1.13/recipes2.html#web-streaming
[25]: https://github.com/waveform80/pistreaming
[26]: http://jsmpeg.com/
[27]: https://opensource.com/sites/default/files/tortoise.jpg (Tortoise)
[28]: https://shop.pimoroni.com/products/pan-tilt-hat
[29]: https://github.com/waveform80/pistreaming/tree/pantilthat
[30]: https://opensource.com/sites/default/files/pan-tilt.gif (Pan-tilt)
[31]: https://learn.adafruit.com/onion-pi/overview
[32]: https://www.torproject.org/
[33]: https://opensource.com/sites/default/files/onion-pi.jpg (Onion-Pi)
[34]: https://opensource.com/sites/default/files/resize/pi-hole-250x250.png (Pi-hole)
[35]: https://pi-hole.net/



@ -1,3 +1,4 @@
Translating by robsean
Learning BASIC Like It's 1983
======
I was not yet alive in 1983. This is something that I occasionally regret. I am especially sorry that I did not experience the 8-bit computer era as it was happening, because I think the people that first encountered computers when they were relatively simple and constrained have a huge advantage over the rest of us.


@ -1,176 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Aliases: To Protect and Serve)
[#]: via: (https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
Aliases: To Protect and Serve
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/prairie-path_1920.jpg?itok=wRARsM7p)
Happy 2019! Here in the new year, we're continuing our series on aliases. By now, you've probably read our [first article on aliases][1], and it should be quite clear how they are the easiest way to save yourself a lot of trouble. You already saw, for example, that they helped with muscle-memory, but let's see several other cases in which aliases come in handy.
### Aliases as Shortcuts
One of the most beautiful things about Linux's shells is how you can use zillions of options and chain commands together to carry out really sophisticated operations in one fell swoop. All right, maybe beauty is in the eye of the beholder, but let's agree that this feature is practical.
The downside is that you often come up with recipes that are hard to remember or cumbersome to type. Say space on your hard disk is at a premium and you want to do some New Year's cleaning. Your first step may be to look for stuff to get rid of in your home directory. One criterion you could apply is to look for stuff you don't use anymore. `ls` can help with that:
```
ls -lct
```
The instruction above shows the details of each file and directory (`-l`) and also shows when each item's status was last changed (`-c`). It then orders the list from most recently changed to least recently changed (`-t`).
Is this hard to remember? You probably don't use the `-c` and `-t` options every day, so perhaps. In any case, defining an alias like
```
alias lt='ls -lct'
```
will make it easier.
Then again, you may want to have the list show the oldest files first:
```
alias lo='lt -F | tac'
```
![aliases][3]
Figure 1: The lt and lo aliases in action.
[Used with permission][4]
There are a few interesting things going on here. First, we are using an alias (`lt`) to create another alias -- which is perfectly okay. Second, we are passing a new parameter to `lt` (which, in turn, gets passed to `ls` through the definition of the `lt` alias).
The `-F` option appends special symbols to the names of items to better differentiate regular files (that get no symbol) from executable files (that get an `*`), files from directories (end in `/`), and all of the above from links, symbolic and otherwise (that end in an `@` symbol). The `-F` option is a throwback to the days when terminals were monochrome and there was no other way to easily see the difference between items. You use it here because, when you pipe the output from `lt` through to `tac`, you lose the colors from `ls`.
The third thing to pay attention to is the use of piping. Piping happens when you pass the output from an instruction to another instruction. The second instruction can then use that output as its own input. In many shells (including Bash), you pipe something using the pipe symbol (`|`).
In this case, you are piping the output from `lt -F` into `tac`. `tac`'s name is a bit of a joke. You may have heard of `cat`, the instruction that was nominally created to con _cat_ enate files together, but that in practice is used to print out the contents of a file to the terminal. `tac` does the same, but prints out the contents it receives in reverse order. Get it? `cat` and `tac`. Developers, you so funny!
The thing is both `cat` and `tac` can also print out stuff piped over from another instruction, in this case, a list of files ordered chronologically.
So... after that digression, what comes out of the other end is the list of files and directories of the current directory in inverse order of freshness.
The final thing you have to bear in mind is that, while `lt` will work with the current directory and any other directory...
```
# This will work:
lt
# And so will this:
lt /some/other/directory
```
... `lo` will only work with the current directory:
```
# This will work:
lo
# But this won't:
lo /some/other/directory
```
This is because Bash expands aliases into their components. When you type this:
```
lt /some/other/directory
```
Bash REALLY runs this:
```
ls -lct /some/other/directory
```
which is a valid Bash command.
However, if you type this:
```
lo /some/other/directory
```
Bash tries to run this:
```
ls -lct -F | tac /some/other/directory
```
which is not a valid instruction, mainly because _/some/other/directory_ is a directory, and `cat` and `tac` don't do directories.
### More Alias Shortcuts
* `alias lll='ls -R'` prints out the contents of a directory and then drills down and prints out the contents of its subdirectories and the subdirectories of the subdirectories, and so on and so forth. It is a way of seeing everything you have under a directory.
* `alias mkdir='mkdir -pv'` lets you make directories within directories all in one go. With the base form of `mkdir`, to make a new directory containing a subdirectory you have to do this:
```
mkdir newdir
mkdir newdir/subdir
```
Or this:
```
mkdir -p newdir/subdir
```
while with the alias you would only have to do this:
```
mkdir newdir/subdir
```
Your new `mkdir` will also tell you what it is doing while it is creating new directories.
### Aliases as Safeguards
The other thing aliases are good for is as safeguards against erasing or overwriting your files accidentally. At this stage you have probably heard the legendary story about the new Linux user who ran:
```
rm -rf /
```
as root, and nuked the whole system. Then there's the user who decided that:
```
rm -rf /some/directory/ *
```
was a good idea and erased the complete contents of their home directory. Notice how easy it is to overlook that space separating the directory path and the `*`.
Both things can be avoided with the `alias rm='rm -i'` alias. The `-i` option makes `rm` ask the user whether that is what they really want to do and gives you a second chance before wreaking havoc in your file system.
The same goes for `cp`, which can overwrite a file without telling you anything. Create an alias like `alias cp='cp -i'` and stay safe!
### Next Time
We are moving more and more into scripting territory. Next time, we'll take the next logical step and see how combining instructions on the command line gives you really interesting and sophisticated solutions to everyday admin problems.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve
[2]: https://www.linux.com/files/images/fig01png-0
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig01_0.png?itok=crqTm_va (aliases)
[4]: https://www.linux.com/licenses/category/used-permission


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,50 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0: Blockchain In Real Estate [Part 4])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/)
[#]: author: (EDITOR https://www.ostechnix.com/author/editor/)
Blockchain 2.0: Blockchain In Real Estate [Part 4]
======
![](https://www.ostechnix.com/wp-content/uploads/2019/03/Blockchain-In-Real-Estate-720x340.png)
### Blockchain 2.0: Smarter Real Estate
The [**previous article**][1] of this series explored the features of blockchain which will enable institutions to transform and interlace **traditional banking** and **financing systems** with it. This part will explore **Blockchain in real estate**. The real estate industry is ripe for a revolution. It's among the most actively traded and most significant asset classes known to man. However, filled with regulatory hurdles and numerous possibilities of fraud and deceit, it's also one of the toughest to participate in. The distributed ledger capabilities of the blockchain utilizing an appropriate consensus algorithm are touted as the way forward for the industry which is traditionally regarded as conservative in its attitude to change.
Real estate has always been a very conservative industry in terms of its myriad operations. Somewhat rightfully so as well. A major economic crisis such as the 2008 financial crisis or the great depression from the early half of the 20th century managed to destroy the industry and its participants. However, like most products of economic value, the real estate industry is resilient and this resilience is rooted in its conservative nature.
The global real estate market comprises an asset class worth **$228 trillion** [1], give or take. Other investment assets such as stocks, bonds, and shares combined are only worth **$170 trillion**. Obviously, any and all transactions implemented in such an industry are naturally carefully planned and meticulously executed, for the most part. For the most part, because real estate is also notorious for numerous instances of fraud and the devastating losses which ensue them. The industry, because of the very conservative nature of its operations, is also tough to navigate. It's heavily regulated, with complex laws creating an intertwined web of nuances that are just too difficult for an average person to understand fully. This makes entry and participation near impossible for most people. If you've ever been involved in one such deal, you'll know how heavy and long the paper trail was.
This hard reality is now set to change, albeit via a slow and gradual transformation. The very reasons the industry has stuck to its hardy tested roots all this while can finally give way to its modern-day counterpart. The backbone of the real estate industry has always been its paper records. Land deeds, titles, agreements, rental insurance, proofs, and declarations etc., are just the tip of the iceberg here. If you've noticed the pattern here, this should be obvious: the distributed ledger technology that is blockchain fits in perfectly with the needs here. Forget paper records, conventional database systems are also points of major failure. They can be modified by multiple participants, are not tamper-proof or un-hackable, and have a complicated set of ever-changing regulatory parameters, making auditing and verifying data a nightmare. The blockchain perfectly solves all of these issues and more.
Starting with a trivial albeit important example to show just how bad the current record management practices are in the real estate sector, consider the **Title Insurance business** [2], [3]. Title Insurance is used to hedge against the possibility of the land's titles and ownership records being inadmissible and hence unenforceable. An insurance product such as this is also referred to as an indemnity cover. It is required by law in many cases that properties have title insurance, especially when dealing with property that has changed hands multiple times over the years. Mortgage firms might insist on the same as well when they back real estate deals. The fact that a product of this kind has existed since the 1850s and that it does business worth at least **$1.5 trillion a year in the US alone** is a testament to the statement at the start. A revolution in how these records are maintained is imperative in this situation, and the blockchain provides a sustainable solution. Title fraud averages around $100k per case as per the **American Land Title Association**, and 25% of all titles involved in transactions have an issue regarding their documents[4]. The blockchain allows for setting up an immutable permanent database that will track the property itself, recording each and every transaction or investment that has gone into it. Such a ledger system will make life easier for everyone involved in the real estate industry, including one-time home buyers, and make financial products such as Title Insurance basically irrelevant. Converting a physical asset such as real estate to a digital asset like this is unconventional and exists only in theory at the moment. However, such a change is imminent sooner rather than later[5].
Among the areas in which blockchain will have the most impact within real estate is, as highlighted above, maintaining a transparent and secure title management system for properties. A blockchain-based record of the property can contain information about the property, its location, history of ownership, and any related public record of the same[6]. This will permit closing real estate deals fast and obviates the need for 3rd party monitoring and oversight. Tasks such as real estate appraisal and tax calculations become matters of tangible objective parameters rather than subjective measures and guesses because of reliable historical data which is publicly verifiable. **UBITQUITY** is one such platform that offers customized blockchain-based solutions to enterprise customers. The platform allows customers to keep track of all property details, payment records, and mortgage records, and even allows running smart contracts that'll take care of taxation and leasing automatically[7].
This brings us to the second biggest opportunity and use case of blockchains in real estate. Since the sector is highly regulated by numerous 3rd parties apart from the counterparties involved in the trade, due-diligence and financial evaluations can be significantly time-consuming. These processes are predominantly carried out using offline channels and paperwork needs to travel for days before a final evaluation report comes out. This is especially true for corporate real estate deals and forms a bulk of the total billable hours charged by consultants. In case the transaction is backed by a mortgage, duplication of these processes is unavoidable. Once combined with digital identities for the people and institutions involved along with the property, the current inefficiencies can be avoided altogether and transactions can take place in a matter of seconds. The tenants, investors, institutions involved, consultants etc., could individually validate the data and arrive at a critical consensus thereby validating the property records for perpetuity[8]. This increases the accuracy of verification manifold. Real estate giant **RE/MAX** has recently announced a partnership with service provider **XYO Network Partners** for building a national database of real estate listings in Mexico. They hope to one day create one of the largest (as of yet) decentralized real estate title registry in the world[9].
However, another significant and arguably very democratic change that the blockchain can bring about is with respect to investing in real estate. Unlike other investment asset classes where even small household investors can potentially participate, real estate often requires large down payments to participate. Companies such as **ATLANT** and **BitOfProperty** tokenize the book value of a property and convert it into equivalents of a cryptocurrency. These tokens are then put up for sale on their exchanges similar to how stocks and shares are traded. Any cash flow that the real estate property generates afterward is credited or debited to the token owners depending on their "share" in the property[4].
However, even with all of that said, blockchain technology is still in the very early stages of adoption in the real estate sector, and current regulations are not exactly well defined for it either[8]. Concepts such as distributed applications, distributed anonymous organizations, smart contracts, etc. are unheard of in the legal domain in many countries. A complete overhaul of existing regulations and guidelines once all the stakeholders are well educated on the intricacies of the blockchain is the most pragmatic way forward. Again, it'll be a slow and gradual change to go through, but a much-needed one nonetheless. The next article of the series will look at how **"Smart Contracts"**, such as those implemented by companies like UBITQUITY and XYO, are created and executed in the blockchain.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/
作者:[EDITOR][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/blockchain-2-0-redefining-financial-services/


@ -1,126 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to build a mobile particulate matter sensor with a Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/mobile-particulate-matter-sensor)
[#]: author: (Stephan Tetzel https://opensource.com/users/stephan)
How to build a mobile particulate matter sensor with a Raspberry Pi
======
Monitor your air quality with a Raspberry Pi, a cheap sensor, and an inexpensive display.
![Team communication, chat][1]
About a year ago, I wrote about [measuring air quality][2] using a Raspberry Pi and a cheap sensor. We've been using this project in our school and privately for a few years now. However, it has one disadvantage: It is not portable because it depends on a WLAN network or a wired network connection to work. You can't even access the sensor's measurements if the Raspberry Pi and the smartphone or computer are not on the same network.
To overcome this limitation, we added a small screen to the Raspberry Pi so we can read the values directly from the device. Here's how we set up and configured a screen for our mobile fine particulate matter sensor.
### Setting up the screen for the Raspberry Pi
There is a wide range of Raspberry Pi displays available from [Amazon][3], AliExpress, and other sources. They range from ePaper screens to LCDs with touch function. We chose an inexpensive [3.5″ LCD][4] with touch and a resolution of 320×480 pixels that can be plugged directly into the Raspberry Pi's GPIO pins. It's also nice that a 3.5″ display is about the same size as a Raspberry Pi.
The first time you turn on the screen and start the Raspberry Pi, the screen will remain white because the driver is missing. You have to install [the appropriate drivers][5] for the display first. Log in with SSH and execute the following commands:
```
$ rm -rf LCD-show
$ git clone https://github.com/goodtft/LCD-show.git
$ chmod -R 755 LCD-show
$ cd LCD-show/
```
Execute the appropriate command for your screen to install the drivers. For example, this is the command for our model MPI3501 screen:
```
$ sudo ./LCD35-show
```
This command installs the appropriate drivers and restarts the Raspberry Pi.
### Installing PIXEL desktop and setting up autostart
Here is what we want our project to do: If the Raspberry Pi boots up, we want to display a small website with our air quality measurements.
First, install the Raspberry Pi's [PIXEL desktop environment][6]:
```
$ sudo apt install raspberrypi-ui-mods
```
Then install the Chromium browser to display the website:
```
$ sudo apt install chromium-browser
```
Autologin is required for the measured values to be displayed directly after startup; otherwise, you will just see the login screen. However, autologin is not configured for the "pi" user by default. You can configure autologin with the **raspi-config** tool:
```
$ sudo raspi-config
```
In the menu, select: **3 Boot Options → B1 Desktop / CLI → B4 Desktop Autologin**.
There is a step missing to start Chromium with our website right after boot. Create the folder **/home/pi/.config/lxsession/LXDE-pi/** :
```
$ mkdir -p /home/pi/.config/lxsession/LXDE-pi/
```
Then create the **autostart** file in this folder:
```
$ nano /home/pi/.config/lxsession/LXDE-pi/autostart
```
and paste the following code:
```
#@unclutter
@xset s off
@xset -dpms
@xset s noblank
# Open Chromium in Full Screen Mode
@chromium-browser --incognito --kiosk http://localhost
```
If you want to hide the mouse pointer, you have to install the package **unclutter** and remove the comment character at the beginning of the **autostart** file:
```
$ sudo apt install unclutter
```
![Mobile particulate matter sensor][7]
I've made a few small changes to the code in the last year. So, if you set up the air quality project before, make sure to re-download the script and files for the AQI website using the instructions in the [original article][2].
By adding the touch screen, you now have a mobile particulate matter sensor! We use it at our school to check the quality of the air in the classrooms or to do comparative measurements. With this setup, you are no longer dependent on a network connection or WLAN. You can use the small measuring station everywhere—you can even use it with a power bank to be independent of the power grid.
* * *
_This article originally appeared on[Open School Solutions][8] and is republished with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/mobile-particulate-matter-sensor
作者:[Stephan Tetzel][a]
选题:[lujun9972][b]
译者:[tomjlw](https://github.com/tomjlw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/stephan
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
[2]: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi
[3]: https://www.amazon.com/gp/search/ref=as_li_qf_sp_sr_tl?ie=UTF8&tag=openschoolsol-20&keywords=lcd%20raspberry&index=aps&camp=1789&creative=9325&linkCode=ur2&linkId=51d6d7676e10d6c7db203c4a8b3b529a
[4]: https://amzn.to/2CcvgpC
[5]: https://github.com/goodtft/LCD-show
[6]: https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc
[7]: https://opensource.com/sites/default/files/uploads/mobile-aqi-sensor.jpg (Mobile particulate matter sensor)
[8]: https://openschoolsolutions.org/mobile-particulate-matter-sensor/

View File

@ -1,131 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (fuzheng1998 )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 open source mobile apps)
[#]: via: (https://opensource.com/article/19/4/mobile-apps)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen)
5 open source mobile apps
======
You can count on these apps to meet your needs for productivity,
communication, and entertainment.
![][1]
Like most people in the world, I'm rarely further than an arm's reach from my smartphone. My Android device provides a seemingly limitless number of communication, productivity, and entertainment services thanks to the open source mobile apps I've installed from Google Play and F-Droid.
Of the many open source apps on my phone, the following five are the ones I consistently turn to whether I want to listen to music; connect with friends, family, and colleagues; or get work done on the go.
### MPDroid
_An Android controller for the Music Player Daemon (MPD)_
![MPDroid][2]
MPD is a great way to get music from little music server computers out to the big black stereo boxes. It talks straight to ALSA and therefore to the Digital-to-Analog Converter ([DAC][3]) via the ALSA hardware interface, and it can be controlled over my network—but by what? Well, it turns out that MPDroid is a great MPD controller. It manages my music database, displays album art, handles playlists, and supports internet radio. And it's open source, so if something doesn't work…
MPDroid is available on [Google Play][4] and [F-Droid][5].
### RadioDroid
_An Android internet radio tuner that I use standalone and with Chromecast_
_![RadioDroid][6]_
RadioDroid is to internet radio as MPDroid is to managing my music database; essentially, RadioDroid is a frontend to [Internet-Radio.com][7]. Moreover, RadioDroid can be enjoyed by plugging headphones into the Android device, by connecting the Android device directly to the stereo via the headphone jack or USB, or by using its Chromecast capability with a compatible device. It's a fine way to check the weather in Finland, listen to the Spanish top 40, or hear the latest news from down under.
RadioDroid is available on [Google Play][8] and [F-Droid][9].
### Signal
_A secure messaging client for Android, iOS, and desktop_
_![Signal][10]_
If you like WhatsApp but are bothered by its [getting-closer-every-day][11] relationship to Facebook, Signal should be your next thing. The only problem with Signal is convincing your contacts they're better off replacing WhatsApp with Signal. But other than that, it has a similar interface; great voice and video calling; great encryption; decent anonymity; and it's supported by a foundation that doesn't plan to monetize your use of the software. What's not to like?
Signal is available for [Android][12], [iOS][13], and [desktop][14].
### ConnectBot
_Android SSH client_
_![ConnectBot][15]_
Sometimes I'm far away from my computer, but I need to log into the server to do something. [ConnectBot][16] is a great solution for moving SSH sessions onto my phone.
ConnectBot is available on [Google Play][17].
### Termux
_Android terminal emulator with many familiar utilities_
_![Termux][18]_
Have you ever needed to run an **awk** script on your phone? [Termux][19] is your solution. If you need to do terminal-type stuff, and you don't want to maintain an SSH connection to a remote computer the whole time, bring the files over to your phone with ConnectBot, quit the session, do your stuff in Termux, and send the results back with ConnectBot.
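For instance, here is a rough sketch of that workflow inside Termux; the **gawk** package name and the **sensor-data.csv** file are only illustrative assumptions:
```
$ pkg update
$ pkg install gawk
$ awk -F',' '{ sum += $3 } END { print "Total:", sum }' sensor-data.csv
```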
Termux is available on [Google Play][20] and [F-Droid][21].
* * *
What are your favorite open source mobile apps for work or fun? Please share them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/mobile-apps
作者:[Chris Hermansen (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78
[2]: https://opensource.com/sites/default/files/uploads/mpdroid.jpg (MPDroid)
[3]: https://opensource.com/article/17/4/fun-new-gadget
[4]: https://play.google.com/store/apps/details?id=com.namelessdev.mpdroid&hl=en_US
[5]: https://f-droid.org/en/packages/com.namelessdev.mpdroid/
[6]: https://opensource.com/sites/default/files/uploads/radiodroid.png (RadioDroid)
[7]: https://www.internet-radio.com/
[8]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2
[9]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/
[10]: https://opensource.com/sites/default/files/uploads/signal.png (Signal)
[11]: https://opensource.com/article/19/3/open-messenger-client
[12]: https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms
[13]: https://itunes.apple.com/us/app/signal-private-messenger/id874139669?mt=8
[14]: https://signal.org/download/
[15]: https://opensource.com/sites/default/files/uploads/connectbot.png (ConnectBot)
[16]: https://connectbot.org/
[17]: https://play.google.com/store/apps/details?id=org.connectbot
[18]: https://opensource.com/sites/default/files/uploads/termux.jpg (Termux)
[19]: https://termux.com/
[20]: https://play.google.com/store/apps/details?id=com.termux
[21]: https://f-droid.org/packages/com.termux/

View File

@ -1,135 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Be your own certificate authority)
[#]: via: (https://opensource.com/article/19/4/certificate-authority)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123)
Be your own certificate authority
======
Create a simple, internal CA for your microservice architecture or
integration testing.
![][1]
The Transport Layer Security ([TLS][2]) model, which is sometimes referred to by the older name SSL, is based on the concept of [certificate authorities][3] (CAs). These authorities are trusted by browsers and operating systems and, in turn, _sign_ servers' certificates to validate their ownership.
However, for an intranet, a microservice architecture, or integration testing, it is sometimes useful to have a _local CA_ : one that is trusted only internally and, in turn, signs local servers' certificates.
This especially makes sense for integration tests. Getting certificates can be a burden because the servers will be up for minutes. But having an "ignore certificate" option in the code could allow it to be activated in production, leading to a security catastrophe.
A CA certificate is not much different from a regular server certificate; what matters is that it is trusted by local code. For example, in the **requests** library, this can be done by setting the **REQUESTS_CA_BUNDLE** variable to a directory containing this certificate.
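For example, here is a minimal sketch of pointing a test run at the local CA; the path and the test-runner name are hypothetical, and **ca.crt** is the CA certificate file written later in this article:
```
$ export REQUESTS_CA_BUNDLE=/path/to/ca.crt
$ python integration_tests.py
```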
In the example of creating a certificate for integration tests, there is no need for a _long-lived_ certificate: if your integration tests take more than a day, you have already failed.
So, calculate **yesterday** and **tomorrow** as the validity interval:
```
>>> import datetime
>>> one_day = datetime.timedelta(days=1)
>>> today = datetime.datetime.today()
>>> yesterday = today - one_day
>>> tomorrow = today + one_day
```
Now you are ready to create a simple CA certificate. You need to generate a private key, create a public key, set up the "parameters" of the CA, and then self-sign the certificate: a CA certificate is _always_ self-signed. Finally, write out both the certificate file as well as the private key file.
```
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import hashes, serialization
from cryptography import x509
from cryptography.x509.oid import NameOID
private_key = rsa.generate_private_key(
public_exponent=65537,
key_size=2048,
backend=default_backend()
)
public_key = private_key.public_key()
builder = x509.CertificateBuilder()
builder = builder.subject_name(x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
]))
builder = builder.issuer_name(x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
]))
builder = builder.not_valid_before(yesterday)
builder = builder.not_valid_after(tomorrow)
builder = builder.serial_number(x509.random_serial_number())
builder = builder.public_key(public_key)
builder = builder.add_extension(
x509.BasicConstraints(ca=True, path_length=None),
critical=True)
certificate = builder.sign(
private_key=private_key, algorithm=hashes.SHA256(),
backend=default_backend()
)
private_bytes = private_key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.TraditionalOpenSSL,
encryption_algorithm=serialization.NoEncryption())
public_bytes = certificate.public_bytes(
encoding=serialization.Encoding.PEM)
with open("ca.pem", "wb") as fout:
fout.write(private_bytes + public_bytes)
with open("ca.crt", "wb") as fout:
fout.write(public_bytes)
```
In general, a real CA will expect a [certificate signing request][4] (CSR) to sign a certificate. However, when you are your own CA, you can make your own rules! Just go ahead and sign what you want.
Continuing with the integration test example, you can create the private keys and sign the corresponding public keys right then. Notice **COMMON_NAME** needs to be the "server name" in the **https** URL. If you've configured name lookup, the needed server will respond on **service.test.local**.
```
service_private_key = rsa.generate_private_key(
public_exponent=65537,
key_size=2048,
backend=default_backend()
)
service_public_key = service_private_key.public_key()
builder = x509.CertificateBuilder()
builder = builder.subject_name(x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, 'service.test.local')
]))
builder = builder.issuer_name(x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
]))
builder = builder.not_valid_before(yesterday)
builder = builder.not_valid_after(tomorrow)
builder = builder.serial_number(x509.random_serial_number())
builder = builder.public_key(service_public_key)
certificate = builder.sign(
private_key=private_key, algorithm=hashes.SHA256(),
backend=default_backend()
)
private_bytes = service_private_key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.TraditionalOpenSSL,
encryption_algorithm=serialization.NoEncryption())
public_bytes = certificate.public_bytes(
encoding=serialization.Encoding.PEM)
with open("service.pem", "wb") as fout:
fout.write(private_bytes + public_bytes)
```
Now the **service.pem** file has a private key and a certificate that is "valid": it has been signed by your local CA. The file is in a format that can be given to, say, Nginx, HAProxy, or most other HTTPS servers.
By applying this logic to testing scripts, it's easy to create servers that look like authentic HTTPS servers, as long as the client is configured to trust the right CA.
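As a quick sanity check, you can inspect the CA certificate and try an HTTPS request against a server that uses **service.pem**; this sketch assumes **service.test.local** resolves to that running server and that OpenSSL and curl are installed:
```
$ openssl x509 -in ca.crt -noout -subject -issuer -dates
$ curl --cacert ca.crt https://service.test.local/
```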
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/certificate-authority
作者:[Moshe Zadka (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez/users/elenajon123
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj
[2]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[3]: https://en.wikipedia.org/wiki/Certificate_authority
[4]: https://en.wikipedia.org/wiki/Certificate_signing_request

View File

@ -1,166 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (cycoe)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Monitoring CPU and GPU Temperatures on Linux)
[#]: via: (https://itsfoss.com/monitor-cpu-gpu-temp-linux/)
[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
Monitoring CPU and GPU Temperatures on Linux
======
_**Brief: This article discusses two simple ways of monitoring CPU and GPU temperatures from the Linux command line.**_
Because of **[Steam][1]** (including _[Steam Play][2]_ , aka _Proton_ ) and other developments, **GNU/Linux** is becoming the gaming platform of choice for more and more computer users every day. A good number of users are also turning to **GNU/Linux** for other resource-intensive computing tasks such as [video editing][3] or graphic design ( _Kdenlive_ and _[Blender][4]_ are good examples).
Whether you are one of those users or not, you are bound to have wondered how hot your computer's CPU and GPU can get (even more so if you do overclocking). If that is the case, keep reading. We will be looking at a couple of very simple commands to monitor CPU and GPU temps.
My setup includes a [Slimbook Kymera][5] and two displays (a TV set and a PC monitor) which allows me to use one for playing games and the other to keep an eye on the temperatures. Also, since I use [Zorin OS][6] I will be focusing on **Ubuntu** and **Ubuntu** derivatives.
To monitor the behaviour of both CPU and GPU we will be making use of the useful `watch` command to have dynamic readings every certain number of seconds.
![][7]
### Monitoring CPU Temperature in Linux
For CPU temps, we will combine `watch` with the `sensors` command. An interesting article about a [GUI version of this tool has already been covered on It's FOSS][8]. However, we will use the terminal version here:
```
watch -n 2 sensors
```
`watch` ensures that the readings are refreshed every 2 seconds (this value can, of course, be changed to whatever best fits your needs):
```
Every 2,0s: sensors
iwlwifi-virtual-0
Adapter: Virtual device
temp1: +39.0°C
acpitz-virtual-0
Adapter: Virtual device
temp1: +27.8°C (crit = +119.0°C)
temp2: +29.8°C (crit = +119.0°C)
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +37.0°C (high = +82.0°C, crit = +100.0°C)
Core 0: +35.0°C (high = +82.0°C, crit = +100.0°C)
Core 1: +35.0°C (high = +82.0°C, crit = +100.0°C)
Core 2: +33.0°C (high = +82.0°C, crit = +100.0°C)
Core 3: +36.0°C (high = +82.0°C, crit = +100.0°C)
Core 4: +37.0°C (high = +82.0°C, crit = +100.0°C)
Core 5: +35.0°C (high = +82.0°C, crit = +100.0°C)
```
Amongst other things, we get the following information:
* We have 5 cores in use at the moment (with the current highest temperature being 37.0ºC).
* Values higher than 82.0ºC are considered high.
* A value over 100.0ºC is deemed critical.
The values above lead us to the conclusion that the computer's workload is very light at the moment.
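If the full `sensors` output is too verbose, you can filter it down to just the core readings; the "Core" label below matches the coretemp output shown above:
```
watch -n 2 "sensors | grep Core"
```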
### Monitoring GPU Temperature in Linux
Let us turn to the graphics card now. I have never used an **AMD** dedicated graphics card, so I will be focusing on **Nvidia** ones. The first thing to do is download the appropriate, current driver through [additional drivers in Ubuntu][10].
On **Ubuntu** (and its forks such as **Zorin** or **Linux Mint** ), going to _Software & Updates_ > _Additional Drivers_ and selecting the most recent one normally suffices. Additionally, you can add/enable the official PPA for graphics cards (either through the command line or via _Software & Updates_ > _Other Software_ ). After installing the driver, you will have at your disposal the _Nvidia X Server_ GUI application along with the command line utility _nvidia-smi_ (Nvidia System Management Interface). So we will use `watch` and `nvidia-smi`:
```
watch -n 2 nvidia-smi
```
And — the same as for the CPU — we will get updated readings every two seconds:
```
Every 2,0s: nvidia-smi
Fri Apr 19 20:45:30 2019
+-----------------------------------------------------------------------------+
| Nvidia-SMI 418.56 Driver Version: 418.56 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 106... Off | 00000000:01:00.0 On | N/A |
| 0% 54C P8 10W / 120W | 433MiB / 6077MiB | 4% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1557 G /usr/lib/xorg/Xorg 190MiB |
| 0 1820 G /usr/bin/gnome-shell 174MiB |
| 0 7820 G ...equest-channel-token=303407235874180773 65MiB |
+-----------------------------------------------------------------------------+
```
The chart gives the following information about the graphics card:
  * it is using driver version 418.56.
  * the current temperature of the card is 54.0ºC, with the fan at 0% of its capacity.
  * the power consumption is very low: only 10W.
  * out of 6 GB of VRAM (video random access memory), it is only using 433 MB.
  * the used VRAM is being taken by three processes whose IDs are, respectively, 1557, 1820 and 7820.
Most of these facts/values clearly show that we are not playing any resource-consuming games or dealing with heavy workloads. Should we start playing a game, processing a video, or the like, the values would start to go up.
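If you only want the temperature (for example, to log it), `nvidia-smi` can print selected fields instead of the full table; this is a small sketch using its query options:
```
watch -n 2 "nvidia-smi --query-gpu=temperature.gpu,utilization.gpu --format=csv,noheader"
```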
#### Conclusion
Although there are GUI tools, I find these two commands very handy for checking on your hardware in real time.
What do you make of them? You can learn more about the utilities involved by reading their man pages.
Do you have other preferences? Share them with us in the comments. ;)
Halof!!! (Have a lot of fun!!!).
![avatar][12]
### Alejandro Egea-Abellán
Its FOSS Community Contributor
I developed a liking for electronics, linguistics, herpetology and computers (particularly GNU/Linux and FOSS). I am LPIC-2 certified and currently work as a technical consultant and Moodle administrator in the Department for Lifelong Learning at the Ministry of Education in Murcia, Spain. I am a firm believer in lifelong learning, the sharing of knowledge and computer-user freedom.
--------------------------------------------------------------------------------
via: https://itsfoss.com/monitor-cpu-gpu-temp-linux/
作者:[It's FOSS Community][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-steam-ubuntu-linux/
[2]: https://itsfoss.com/steam-play-proton/
[3]: https://itsfoss.com/best-video-editing-software-linux/
[4]: https://www.blender.org/
[5]: https://slimbook.es/
[6]: https://zorinos.com/
[7]: https://itsfoss.com/wp-content/uploads/2019/04/monitor-cpu-gpu-temperature-linux-800x450.png
[8]: https://itsfoss.com/check-laptop-cpu-temperature-ubuntu/
[9]: https://itsfoss.com/best-command-line-games-linux/
[10]: https://itsfoss.com/install-additional-drivers-ubuntu/
[11]: https://itsfoss.com/review-googler-linux/
[12]: https://itsfoss.com/wp-content/uploads/2019/04/EGEA-ABELLAN-Alejandro.jpg

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,499 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (zhang5788)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting Started With Docker)
[#]: via: (https://www.ostechnix.com/getting-started-with-docker/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Getting Started With Docker
======
![Getting Started With Docker][1]
In our previous tutorial, we have explained **[how to install Docker in Ubuntu][2]** , and how to [**install Docker in CentOS**][3]. Today, we will see the basic usage of Docker. This guide covers the Docker basics, such as how to create a new container, how to run the container, remove a container, how to build your own Docker image from the Container and so on. Let us get started! All steps given below are tested in Ubuntu 18.04 LTS server edition.
### Getting Started With Docker
Before exploring the Docker basics, don't confuse Docker images with Docker containers. As I already explained in the previous tutorial, a Docker image is the file that decides how a container should behave, and a Docker container is the running or stopped state of a Docker image.
##### 1\. Search Docker images
We can get images either from a registry, for example [**Docker Hub**][4], or create our own. For those wondering, Docker Hub is a cloud-hosted place where Docker users build, test, and save their Docker images.
Docker Hub has tens of thousands of Docker images. You can search for any Docker image with the **“docker search”** command.
For instance, to search for docker images based on Ubuntu, run:
```
$ sudo docker search ubuntu
```
**Sample output:**
![][5]
To search images based on CentOS, run:
```
$ sudo docker search centos
```
To search images for AWS, run:
```
$ sudo docker search aws
```
For wordpress:
```
$ sudo docker search wordpress
```
Docker Hub has almost every kind of image. Be it an operating system, an application, or anything else, you will find pre-built Docker images on Docker Hub. If something you're looking for is not available, you can build it yourself and make it available to the public or keep it private for your own use.
##### 2\. Download Docker image
To download Docker image for Ubuntu OS, run the following command from the Terminal:
```
$ sudo docker pull ubuntu
```
The above command will download the latest Ubuntu image from the **Docker hub**.
**Sample output:**
```
Using default tag: latest
latest: Pulling from library/ubuntu
6abc03819f3e: Pull complete
05731e63f211: Pull complete
0bd67c50d6be: Pull complete
Digest: sha256:f08638ec7ddc90065187e7eabdfac3c96e5ff0f6b2f1762cf31a4f49b53000a5
Status: Downloaded newer image for ubuntu:latest
```
![][6]
Download docker images
You can also download a specific version of Ubuntu image using command:
```
$ docker pull ubuntu:18.04
```
Docker allows us to download any images and start the container regardless of the host OS.
For example, to download CentOS image, run:
```
$ sudo docker pull centos
```
All downloaded Docker images will be saved in **/var/lib/docker/** directory.
To view the list of downloaded Docker images, run:
```
$ sudo docker images
```
**Sample output:**
```
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 7698f282e524 14 hours ago 69.9MB
centos latest 9f38484d220f 2 months ago 202MB
hello-world latest fce289e99eb9 4 months ago 1.84kB
```
As you see above, I have downloaded three Docker images **Ubuntu** , **CentOS** and **hello-world**.
Now, let us go ahead and see how to start or run the containers based on the downloaded images.
##### 3\. Run Docker Containers
We can start containers in two ways: either using the image's **TAG** or its **IMAGE ID**. The **TAG** refers to a particular snapshot of the image and the **IMAGE ID** is the corresponding unique identifier for that image.
As you see in the above results, **“latest”** is the TAG for all containers, **7698f282e524** is the IMAGE ID of the **Ubuntu** Docker image, **9f38484d220f** is the image ID of the CentOS Docker image and **fce289e99eb9** is the image ID of the **hello-world** Docker image.
Once you downloaded the Docker images of your choice, run the following command to start a Docker container by using its TAG.
```
$ sudo docker run -t -i ubuntu:latest /bin/bash
```
Here,
* **-t** : Assigns a new Terminal inside the Ubuntu container.
* **-i** : Allows us to make an interactive connection by grabbing the standard in (STDIN) of the container.
* **ubuntu:latest** : Ubuntu container with TAG “latest”.
* **/bin/bash** : BASH shell for the new container.
Or, you can start the container using IMAGE ID as shown below:
```
sudo docker run -t -i 7698f282e524 /bin/bash
```
Here,
* **7698f282e524** Image id
After starting the container, you'll land automatically in the container's shell (command prompt):
![][7]
Docker containers shell
To return to the host system's terminal (in my case, it is Ubuntu 18.04 LTS) without terminating the container (the guest OS), press **CTRL+P** followed by **CTRL+Q**. You'll now be safely back in your original host computer's terminal window. Please note that the container is still running in the background and we haven't terminated it yet.
To view the list of running containers, run the following command:
```
$ sudo docker ps
```
**Sample output:**
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
32fc32ad0d54 ubuntu:latest "/bin/bash" 7 minutes ago Up 7 minutes modest_jones
```
![][8]
List running containers
Here,
* **32fc32ad0d54** Container ID
* **ubuntu:latest** Docker image
Please note that **Container ID and Docker image ID are different**.
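Instead of keeping track of auto-generated container IDs, you can also give a container a name of your own when you start it and use that name in later commands; the name **ostechnix_test** below is just an illustrative choice:
```
$ sudo docker run -t -i --name ostechnix_test ubuntu:latest /bin/bash
$ sudo docker stop ostechnix_test
```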
To list all available (either running or stopped) containers:
```
$ sudo docker ps -a
```
To stop (power off) a container from the host's shell, run the following command:
```
$ sudo docker stop <container-id>
```
**Example:**
```
$ sudo docker stop 32fc32ad0d54
```
To log back into (attach to) the running container, just run:
```
$ sudo docker attach 32fc32ad0d54
```
As you already know, **32fc32ad0d54** is the container's ID.
To power off a container from inside its shell, type the following command:
```
# exit
```
You can verify the list of running containers with command:
```
$ sudo docker ps
```
##### 4\. Build your custom Docker images
Docker is not just for downloading and using existing containers. You can create your own custom Docker image as well.
To do so, start any one of the downloaded containers:
```
$ sudo docker run -t -i ubuntu:latest /bin/bash
```
Now, you will be in the container's shell.
Then, install any software or make whatever changes you want in the container.
For example, let us install the **Apache web server** in the container:
```
# apt update
# apt install apache2
```
Similarly, install and test any software of your choice in the Container.
Once you're all set and have installed the necessary software, return to the host system's shell. Do not stop or power off the container. To switch to the host system's shell without stopping the container, press CTRL+P followed by CTRL+Q.
From your host computer's shell, run the following command to find the container ID:
```
$ sudo docker ps
```
Finally, create a Docker image of the running Container using command:
```
$ sudo docker commit 3d24b3de0bfc ostechnix/ubuntu_apache
```
**Sample Output:**
```
sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962
```
Here,
  * **3d24b3de0bfc** Ubuntu container ID, which we found with the **docker ps** command above.
* **ostechnix** Name of the user who created the container.
* **ubuntu_apache** Name of the docker image created by user ostechnix.
Let us check whether the new Docker image is created or not with command:
```
$ sudo docker images
```
**Sample output:**
```
REPOSITORY TAG IMAGE ID CREATED SIZE
ostechnix/ubuntu_apache latest ce5aa74a48f1 About a minute ago 191MB
ubuntu latest 7698f282e524 15 hours ago 69.9MB
centos latest 9f38484d220f 2 months ago 202MB
hello-world latest fce289e99eb9 4 months ago 1.84kB
```
![][9]
List docker images
As you see in the above output, the new Docker image has been created in our localhost system from the running Container.
Now, you can create a new container from the newly created Docker image as usual, using the command:
```
$ sudo docker run -t -i ostechnix/ubuntu_apache /bin/bash
```
##### 5\. Removing Containers
Once you're done with all your R&D on Docker containers, you can delete them if you don't want them anymore.
To do so, first we have to stop (power off) the running containers.
Let us find out the running containers with command:
```
$ sudo docker ps
```
**Sample output:**
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d24b3de0bfc ubuntu:latest "/bin/bash" 28 minutes ago Up 28 minutes goofy_easley
```
Stop the running container by using its ID:
```
$ sudo docker stop 3d24b3de0bfc
```
Now, delete the container using command:
```
$ sudo docker rm 3d24b3de0bfc
```
Similarly, stop all containers and delete them if they are no longer required.
Deleting multiple containers one by one can be a tedious task. So, to delete all stopped containers in one go, just run:
```
$ sudo docker container prune
```
Type **“Y”** and hit ENTER key to delete the containers.
```
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
32fc32ad0d5445f2dfd0d46121251c7b5a2aea06bb22588fb2594ddbe46e6564
5ec614e0302061469ece212f0dba303c8fe99889389749e6220fe891997f38d0
Total reclaimed space: 5B
```
This command will work only with latest Docker versions.
##### 6\. Removing Docker images
Once you removed containers, you can delete the Docker images that you no longer need.
To find the list of the Downloaded Docker images:
```
$ sudo docker images
```
**Sample output:**
```
REPOSITORY TAG IMAGE ID CREATED SIZE
ostechnix/ubuntu_apache latest ce5aa74a48f1 5 minutes ago 191MB
ubuntu latest 7698f282e524 15 hours ago 69.9MB
centos latest 9f38484d220f 2 months ago 202MB
hello-world latest fce289e99eb9 4 months ago 1.84kB
```
As you see above, we now have four Docker images in our host system.
Let us delete them by using their IMAGE id:
```
$ sudo docker rmi ce5aa74a48f1
```
**Sample output:**
```
Untagged: ostechnix/ubuntu_apache:latest
Deleted: sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962
Deleted: sha256:d21c926f11a64b811dc75391bbe0191b50b8fe142419f7616b3cee70229f14cd
```
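You can also remove an image by its repository name and tag instead of the IMAGE ID, which some people find easier to remember:
```
$ sudo docker rmi ostechnix/ubuntu_apache:latest
```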
##### Troubleshooting
Docker won't let you delete Docker images that are used by any running or stopped containers.
For example, when I tried to delete a Docker image with ID **b72889fa879c** on one of my old Ubuntu servers, I got the following error:
```
Error response from daemon: conflict: unable to delete b72889fa879c (must be forced) - image is being used by stopped container dde4dd285377
```
This is because the Docker image that you want to delete is currently being used by another Container.
So, let us check the running Container using command:
```
$ sudo docker ps
```
**Sample output:**
![][10]
Oops! There is no running container.
Let us again check for all containers (Running and stopped) with command:
```
$ sudo docker ps -a
```
**Sample output:**
![][11]
As you can see, some stopped containers are still using one of the Docker images. So, let us delete all of the containers.
**Example:**
```
$ sudo docker rm 12e892156219
```
Similarly, remove all containers as shown above using their respective container IDs.
Once you have deleted all the containers, remove the Docker images.
**Example:**
```
$ sudo docker rmi b72889fa879c
```
That's it. Let us verify whether there are any other Docker images left on the host:
```
$ sudo docker images
```
For more details, refer to the official resource links given at the end of this guide or drop a comment in the comment section below.
And, that's all for now. Hope you got a basic idea about Docker usage.
More good stuffs to come. Stay tuned!
Cheers!!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/getting-started-with-docker/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2016/04/docker-basics-720x340.png
[2]: http://www.ostechnix.com/install-docker-ubuntu/
[3]: https://www.ostechnix.com/install-docker-centos/
[4]: https://hub.docker.com/
[5]: http://www.ostechnix.com/wp-content/uploads/2016/04/Search-Docker-images.png
[6]: http://www.ostechnix.com/wp-content/uploads/2016/04/Download-docker-images.png
[7]: http://www.ostechnix.com/wp-content/uploads/2016/04/Docker-containers-shell.png
[8]: http://www.ostechnix.com/wp-content/uploads/2016/04/List-running-containers.png
[9]: http://www.ostechnix.com/wp-content/uploads/2016/04/List-docker-images.png
[10]: http://www.ostechnix.com/wp-content/uploads/2016/04/sk@sk-_005-1-1.jpg
[11]: http://www.ostechnix.com/wp-content/uploads/2016/04/sk@sk-_006-1.jpg
[12]: https://ostechnix.tradepub.com/free/w_java39/prgm.cgi?a=1
[13]: https://ostechnix.tradepub.com/free/w_pacb32/prgm.cgi?a=1
[14]: https://ostechnix.tradepub.com/free/w_pacb31/prgm.cgi?a=1
[15]: https://ostechnix.tradepub.com/free/w_pacb29/prgm.cgi?a=1
[16]: https://ostechnix.tradepub.com/free/w_pacb28/prgm.cgi?a=1

View File

@ -0,0 +1,219 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Disable IPv6 on Ubuntu Linux)
[#]: via: (https://itsfoss.com/disable-ipv6-ubuntu-linux/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
How to Disable IPv6 on Ubuntu Linux
======
Are you looking for a way to **disable IPv6** connections on your Ubuntu machine? In this article, I'll show you exactly how to do it and why you would consider this option. I'll also show you how to **enable or re-enable IPv6** in case you change your mind.
### What is IPv6 and why would you want to disable IPv6 on Ubuntu?
**[Internet Protocol version 6 (IPv6)][1]** is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. It was developed in 1998 to replace the **IPv4** protocol.
**IPv6** aims to improve security and performance, while also making sure we don't run out of addresses. It assigns unique addresses globally to every device, storing them in **128 bits** , compared to just 32 bits used by IPv4.
![Disable IPv6 Ubuntu][2]
Although the goal is for IPv4 to be replaced by IPv6, there is still a long way to go. Less than **30%** of the sites on the Internet make IPv6 connectivity available to users (tracked by Google [here][3]). IPv6 can also cause [problems with some applications at times][4].
Since **VPNs** provide global services, the fact that IPv6 uses globally routed addresses (uniquely assigned) and that there (still) are ISPs that don't offer IPv6 support shifts this feature lower down their priority list. This way, they can focus on what matters most to VPN users: security.
Another possible reason you might want to disable IPv6 on your system is not wanting to expose yourself to various threats. Although IPv6 itself is safer than IPv4, the risks I am referring to are of another nature. If you aren't actively using IPv6 and its features, [having IPv6 enabled leaves you vulnerable to various attacks][5], offering the hacker another possible exploitable tool.
On the same note, configuring basic network rules is not enough. You have to pay the same level of attention to tweaking your IPv6 configuration as you do for IPv4. This can prove to be quite a hassle to do (and also to maintain). With IPv6 comes a suite of problems different from those of IPv4 (many of which can be referenced online, given the age of this protocol), giving your system another layer of complexity.
### Disabling IPv6 on Ubuntu [For Advanced Users Only]
In this section, I'll be covering how you can disable the IPv6 protocol on your Ubuntu machine. Open up a terminal ( **default:** CTRL+ALT+T) and let's get to it!
**Note:** _For most of the commands you are going to input in the terminal, you are going to need root privileges ( **sudo** )._
Warning!
If you are a regular desktop Linux user and prefer a stable working system, please avoid this tutorial. This is for advanced users who know what they are doing and why they are doing so.
#### 1\. Disable IPv6 using Sysctl
First of all, you can **check** if you have IPv6 enabled with:
```
ip a
```
You should see an IPv6 address if it is enabled (the name of your internet card might be different):
![IPv6 Address Ubuntu][7]
You have seen the sysctl command in the tutorial about [restarting the network in Ubuntu][8]. We are going to use it here as well. To **disable IPv6** you only have to input 3 commands:
```
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1
```
You can check if it worked using:
```
ip a
```
You should see no IPv6 entry:
![IPv6 Disabled Ubuntu][9]
However, this only **temporarily disables IPv6**. The next time your system boots, IPv6 will be enabled again.
One method to make this setting persist is to modify **/etc/sysctl.conf**. I'll be using vim to edit the file, but you can use any editor you like. Make sure you have **administrator rights** (use **sudo** ):
![Sysctl Configuration][10]
Add the following lines to the file:
```
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
```
For the settings to take effect use:
```
sudo sysctl -p
```
If IPv6 is still enabled after rebooting, you must create (with root privileges) the file **/etc/rc.local** and fill it with:
```
#!/bin/bash
# /etc/rc.local
sysctl -p /etc/sysctl.conf
/etc/init.d/procps restart
exit 0
```
Now use [chmod command][11] to make the file executable:
```
sudo chmod 755 /etc/rc.local
```
What this does is re-apply (at boot time) the kernel parameters from your sysctl configuration file.
#### 2\. Disable IPv6 using GRUB
An alternative method is to configure **GRUB** to pass kernel parameters at boot time. You'll have to edit **/etc/default/grub**. Once again, make sure you have administrator privileges:
![GRUB Configuration][13]
Now you need to modify **GRUB_CMDLINE_LINUX_DEFAULT** and **GRUB_CMDLINE_LINUX** to disable IPv6 on boot:
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ipv6.disable=1"
GRUB_CMDLINE_LINUX="ipv6.disable=1"
```
Save the file and run:
```
sudo update-grub
```
The settings should now persist on reboot.
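After rebooting, you can check that the kernel parameter was picked up and that no IPv6 addresses are assigned; if IPv6 is disabled, the second command should print nothing:
```
cat /proc/cmdline | grep ipv6.disable
ip a | grep inet6
```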
### Re-enabling IPv6 on Ubuntu
To re-enable IPv6, you'll have to undo the changes you made. To enable IPv6 until the next reboot, enter:
```
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0
```
Otherwise, if you modified **/etc/sysctl.conf** you can either remove the lines you added or change them to:
```
net.ipv6.conf.all.disable_ipv6=0
net.ipv6.conf.default.disable_ipv6=0
net.ipv6.conf.lo.disable_ipv6=0
```
You can optionally reload these values:
```
sudo sysctl -p
```
You should once again see an IPv6 address:
![IPv6 Reenabled in Ubuntu][14]
Optionally, you can remove **/etc/rc.local** :
```
sudo rm /etc/rc.local
```
If you modified the kernel parameters in **/etc/default/grub** , go ahead and delete the added options:
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
```
Now do:
```
sudo update-grub
```
**Wrapping Up**
In this guide I showed you the ways in which you can **disable IPv6** on Linux, and gave you an idea of what IPv6 is and why you might want to disable it.
Did you find this article useful? Do you disable IPv6 connectivity? Let us know in the comment section!
--------------------------------------------------------------------------------
via: https://itsfoss.com/disable-ipv6-ubuntu-linux/
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/IPv6
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/disable_ipv6_ubuntu.png?fit=800%2C450&ssl=1
[3]: https://www.google.com/intl/en/ipv6/statistics.html
[4]: https://whatismyipaddress.com/ipv6-issues
[5]: https://www.internetsociety.org/blog/2015/01/ipv6-security-myth-1-im-not-running-ipv6-so-i-dont-have-to-worry/
[6]: https://itsfoss.com/remove-drive-icons-from-unity-launcher-in-ubuntu/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu.png?fit=800%2C517&ssl=1
[8]: https://itsfoss.com/restart-network-ubuntu/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_disabled_ubuntu.png?fit=800%2C442&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/sysctl_configuration.jpg?fit=800%2C554&ssl=1
[11]: https://linuxhandbook.com/chmod-command/
[12]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/grub_configuration-1.jpg?fit=800%2C565&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu-1.png?fit=800%2C517&ssl=1

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Choosing the right model for maintaining and enhancing your IoT project)
[#]: via: (https://opensource.com/article/19/5/model-choose-embedded-iot-development)
[#]: author: (Drew Moseley https://opensource.com/users/drewmoseley)
Choosing the right model for maintaining and enhancing your IoT project
======
Learn more about these two models: Centralized Golden Master and
Distributed Build System
![][1]
In today's connected embedded device market, driven by the [Internet of things (IoT)][2], a large share of devices in development are based on Linux of one form or another. The prevalence of low-cost boards with ready-made Linux distributions is a key driver in this. Acquiring hardware, building your custom code, connecting the devices to other hardware peripherals and the internet as well as device management using commercial cloud providers has never been easier. A developer or development team can quickly prototype a new application and get the devices in the hands of potential users. This is a good thing and results in many interesting new applications, as well as many questionable ones.
When planning a system design for beyond the prototyping phase, things get a little more complex. In this post, we want to consider mechanisms for developing and maintaining your base [operating system (OS) image][3]. There are many tools to help with this but we won't be discussing individual tools; of interest here is the underlying model for maintaining and enhancing this image and how it will make your life better or worse.
There are two primary models for generating these images:
1. Centralized Golden Master
2. Distributed Build System
These categories mirror the driving models for [Source Code Management (SCM)][4] systems, and many of the arguments regarding centralized vs. distributed are applicable when discussing OS images.
### Centralized Golden Master
Hobbyist and maker projects primarily use the Centralized Golden Master method of creating and maintaining application images. This fact gives this model the benefit of speed and familiarity, allowing developers to quickly set up such a system and get it running. The speed comes from the fact that many device manufacturers provide canned images for their off-the-shelf hardware. For example, boards from such families as the [BeagleBone][5] and [Raspberry Pi][6] offer ready-to-use OS images and [flashing][7]. Relying on these images means having your system up and running in just a few mouse clicks. The familiarity is due to the fact that these images are generally based on a desktop distro many device developers have already used, such as [Debian][8]. Years of using Linux can then directly transfer to the embedded design, including the fact that the packaging utilities remain largely the same, and it is simple for designers to get the extra software packages they need.
There are a few downsides of such an approach. The first is that the [golden master image][9] is generally a choke point, resulting in lost developer productivity after the prototyping stage since everyone must wait for their turn to access the latest image and make their changes. In the SCM realm, this practice is equivalent to a centralized system with individual [file locking][10]. Only the developer with the lock can work on any given file.
![Development flow with the Centralized Golden Master model.][11]
The second downside with this approach is image reproducibility. This issue is usually managed by manually logging into the target systems, installing packages using the native package manager, configuring applications and dot files, and then modifying the system configuration files in place. Once this process is completed, the disk is imaged using the **dd** utility, or an equivalent, and then distributed.
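For illustration, capturing such a golden master image typically looks something like the sketch below; the device name is an assumption (an SD card on a typical ARM board) and will differ on your system:
```
$ sudo dd if=/dev/mmcblk0 of=golden-master.img bs=4M status=progress
```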
Again, this approach creates a minefield of potential issues. For example, network-based package feeds may cease to exist, and the base software provided by the vendor image may change. Scripting can help mitigate these issues. However, these scripts tend to be fragile and break when changes are made to configuration file formats or the vendor's base software packages.
The final issue that arises with this development model is reliance on third parties. If the hardware vendor's image changes don't work for your design, you may need to invest significant time to adapt. To make matters even more complicated, as mentioned before, the hardware vendors often based their images on an upstream project such as Debian or Ubuntu. This situation introduces even more third parties who can affect your design.
### Distributed Build System
This method of creating and maintaining an image for your application relies on the generation of target images separate from the target hardware. The developer workflow here is similar to standard software development using an SCM system; the image is fully buildable by tooling and each developer can work independently. Changes to the system are made via edits to metadata files (scripting, recipes, configuration files, etc) and then the tooling is rerun to generate an updated image. These metadata files are then managed using an SCM system. Individual developers can merge the latest changes into their working copies to produce their development images. In this case, no golden master image is needed and developers can avoid the associated bottleneck.
Release images are then produced by a build system using standard SCM techniques to pull changes from all the developers.
![Development flow with the Distributed Build System model.][12]
Working in this fashion allows the size of your development team to increase without reducing productivity of individual developers. All engineers can work independently of the others. Additionally, this build setup ensures that your builds can be reproduced. Using standard SCM workflows can ensure that, at any future time, you can regenerate a specific build allowing for long term maintenance, even if upstream providers are no longer available. Similar to working with distributed SCM tools however, there is additional policy that needs to be in place to enable reproducible, release candidate images. Individual developers have their own copies of the source and can build their own test images but for a proper release engineering effort, development teams will need to establish merging and branching standards and ensure that all changes targeted for release eventually get merged into a well-defined branch. Many upstream projects already have well-defined processes for this kind of release strategy (for instance, using *-stable and *-next branches).
The primary downside of this approach is the lack of familiarity. For example, adding a package to the image normally requires creating a recipe of some kind and then updating the definitions so that the package binaries are included in the image. This is very different from running apt while logged into a running system. The learning curve of these systems can be daunting but the results are more predictable and scalable and are likely a better choice when considering a design for a product that will be mass produced.
Dedicated build systems such as [OpenEmbedded][13] and [Buildroot][14] use this model as do distro packaging tools such as [debootstrap][15] and [multistrap][16]. Newer tools such as [Isar][17], [debos][18], and [ELBE][19] also use this basic model. Choices abound, and it is worth the investment to learn one or more of these packages for your designs. The long term maintainability and reproducibility of these systems will reduce risk in your design by allowing you to generate reproducible builds, track all the source code, and remove your dependency on third-party providers continued existence.
#### Conclusion
To be clear, the distributed model does suffer some of the same issues as mentioned for the Golden Master Model; especially the reliance on third parties. This is a consequence of using systems designed by others and cannot be completely avoided unless you choose a completely roll-your-own approach which comes with a significant cost in development and maintenance.
For prototyping and proof-of-concept level design, and a team of just a few developers, the Golden Master Model may well be the right choice given restrictions in time and budget that are present at this stage of development. For low volume, high touch designs, this may be an acceptable trade-off for production use.
For general production use, the benefits in terms of team size scalability, image reproducibility and developer productivity greatly outweigh the learning curve and overhead of systems implementing the distributed model. Support from board and chip vendors is also widely available in these systems reducing the upfront costs of developing with them. For your next product, I strongly recommend starting the design with a serious consideration of the model being used to generate the base OS image. If you choose to prototype with the golden master model with the intention of migrating to the distributed model, make sure to build sufficient time in your schedule for this effort; the estimates will vary widely depending on the specific tooling you choose as well as the scope of the requirements and the out-of-the-box availability of software packages your code relies on.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/model-choose-embedded-iot-development
作者:[Drew Moseley][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/drewmoseley
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe
[2]: https://en.wikipedia.org/wiki/Internet_of_things
[3]: https://en.wikipedia.org/wiki/System_image
[4]: https://en.wikipedia.org/wiki/Version_control
[5]: http://beagleboard.org/
[6]: https://www.raspberrypi.org/
[7]: https://en.wikipedia.org/wiki/Flash_memory
[8]: https://www.debian.org/
[9]: https://en.wikipedia.org/wiki/Software_release_life_cycle#RTM
[10]: https://en.wikipedia.org/wiki/File_locking
[11]: https://opensource.com/sites/default/files/uploads/cgm1_500.png (Development flow with the Centralized Golden Master model.)
[12]: https://opensource.com/sites/default/files/uploads/cgm2_500.png (Development flow with the Distributed Build System model.)
[13]: https://www.openembedded.org/
[14]: https://buildroot.org/
[15]: https://wiki.debian.org/Debootstrap
[16]: https://wiki.debian.org/Multistrap
[17]: https://github.com/ilbers/isar
[18]: https://github.com/go-debos/debos
[19]: https://elbe-rfs.org/

View File

@ -1,497 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (20+ FFmpeg Commands For Beginners)
[#]: via: (https://www.ostechnix.com/20-ffmpeg-commands-beginners/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
20+ FFmpeg Commands For Beginners
======
![FFmpeg Commands][1]
In this guide, I will be explaining how to use the FFmpeg multimedia framework to do various audio and video transcoding and conversion operations, with examples. I have compiled the 20+ most commonly and frequently used FFmpeg commands for beginners. I will keep updating this guide by adding more examples from time to time. Please bookmark this guide and come back in a while to check for updates. Let us get started, shall we? If you haven't installed FFmpeg on your Linux system yet, refer to the following guide.
* [**Install FFmpeg in Linux**][2]
### 20+ FFmpeg Commands For Beginners
The typical syntax of the FFmpeg command is:
```
ffmpeg [global_options] {[input_file_options] -i input_url} ...
{[output_file_options] output_url} ...
```
We are now going to see some important and useful FFmpeg commands.
##### **1\. Getting audio/video file information**
To display your media file details, run:
```
$ ffmpeg -i video.mp4
```
**Sample output:**
```
ffmpeg version n4.1.3 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 8.2.1 (GCC) 20181127
configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvdec --enable-nvenc --enable-omx --enable-shared --enable-version3
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.20.100
Duration: 00:00:28.79, start: 0.000000, bitrate: 454 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/smpte170m), 1920x1080 [SAR 1:1 DAR 16:9], 318 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
Metadata:
handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019.
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019.
At least one output file must be specified
```
As you can see in the above output, FFmpeg displays the media file information along with FFmpeg details such as the version, configuration, copyright notice, and build and library options.
If you don't want to see the FFmpeg banner and other details, but only the media file information, use the **-hide_banner** flag as shown below.
```
$ ffmpeg -i video.mp4 -hide_banner
```
**Sample output:**
![][3]
View audio, video file information using FFMpeg
See? Now, it displays only the media file details.
##### **2\. Converting video files to different formats**
FFmpeg is a powerful audio and video converter, so it's possible to convert media files between different formats. For example, to convert an **mp4 file to an avi file**, run:
```
$ ffmpeg -i video.mp4 video.avi
```
Similarly, you can convert media files to any format of your choice.
For example, to convert YouTube **flv** format videos to **mpeg** format, run:
```
$ ffmpeg -i video.flv video.mpeg
```
If you want to preserve the quality of your source video file, use the **-qscale 0** parameter:
```
$ ffmpeg -i input.webm -qscale 0 output.mp4
```
To check the list of formats supported by FFmpeg, run:
```
$ ffmpeg -formats
```
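The format list is quite long, so you may want to filter it. For example, a quick way to check whether a particular container is supported (matroska here is only an illustration) is to pipe the output through grep:
```
$ ffmpeg -formats | grep -i matroska
```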
##### **3\. Converting video files to audio files**
To convert a video file to an audio file, just specify the output format as .mp3, .ogg, or any other audio format.
The following command will convert the **input.mp4** video file to the **output.mp3** audio file.
```
$ ffmpeg -i input.mp4 -vn output.mp3
```
Also, you can apply various audio transcoding options to the output file, as shown below.
```
$ ffmpeg -i input.mp4 -vn -ar 44100 -ac 2 -ab 320k -f mp3 output.mp3
```
Here,
* **-vn** Indicates that we have disabled video recording in the output file.
* **-ar** Sets the audio sampling frequency of the output file. Commonly used values are 22050, 44100, and 48000 Hz.
* **-ac** Sets the number of audio channels.
* **-ab** Sets the audio bitrate (here, 320 kbps).
* **-f** Sets the output file format. In our case, it's mp3 format.
##### **4\. Change resolution of video files**
If you want to set a particular resolution for a video file, you can use the following command:
```
$ ffmpeg -i input.mp4 -filter:v scale=1280:720 -c:a copy output.mp4
```
Or,
```
$ ffmpeg -i input.mp4 -s 1280x720 -c:a copy output.mp4
```
The above command will set the resolution of the given video file to 1280×720.
Similarly, to convert the above file to 640×480 size, run:
```
$ ffmpeg -i input.mp4 -filter:v scale=640:480 -c:a copy output.mp4
```
Or,
```
$ ffmpeg -i input.mp4 -s 640x480 -c:a copy output.mp4
```
This trick will help you scale your video files for smaller display devices such as tablets and mobile phones.
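If you only know the target width and want FFmpeg to pick a matching height, you can pass **-2** for the height; this is a minimal sketch (the width of 1280 is just an example, and -2 keeps the height divisible by 2, which most encoders require):
```
$ ffmpeg -i input.mp4 -filter:v scale=1280:-2 -c:a copy output.mp4
```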
##### **5\. Compressing video files**
It is always a good idea to reduce the size of media files to save hard drive space.
The following command will compress and reduce the output file's size.
```
$ ffmpeg -i input.mp4 -vf scale=1280:-1 -c:v libx264 -preset veryslow -crf 24 output.mp4
```
Please note that you will lose some quality if you reduce the video file size. You can lower the **crf** value to **23** or lower if **24** is too aggressive.
You could also transcode the audio down a bit and make it stereo to reduce the size by including the following options.
```
-ac 2 -c:a aac -strict -2 -b:a 128k
```
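Putting the scaling, video encoding, and audio options together, the complete command might look like the following sketch (the file names and values are only placeholders):
```
$ ffmpeg -i input.mp4 -vf scale=1280:-1 -c:v libx264 -preset veryslow -crf 24 -ac 2 -c:a aac -strict -2 -b:a 128k output.mp4
```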
##### **6\. Compressing Audio files**
Just like compressing video files, you can also compress audio files using the **-ab** flag in order to save some disk space.
Let us say you have an audio file with a 320 kbps bitrate. You want to compress it by changing the bitrate to a lower value, as shown below.
```
$ ffmpeg -i input.mp3 -ab 128k output.mp3
```
The commonly available audio bitrates are listed below; an example using one of them follows the list.
1. 96kbps
2. 112kbps
3. 128kbps
4. 160kbps
5. 192kbps
6. 256kbps
7. 320kbps
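For instance, to pick the lowest value from the list above (a sketch; input.mp3 is just a placeholder name):
```
$ ffmpeg -i input.mp3 -ab 96k output.mp3
```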
##### **7\. Removing audio stream from a video file**
If you don't want audio in a video file, use the **-an** flag.
```
$ ffmpeg -i input.mp4 -an output.mp4
```
Here, **-an** indicates no audio recording.
The above command overrides all audio-related options, because we don't want audio from input.mp4.
##### **8\. Removing video stream from a media file**
Similarly, if you don't want the video stream, you can easily remove it from the media file using the **-vn** flag. **-vn** stands for no video recording. In other words, this command converts the given media file into an audio file.
The following command will remove the video from the given media file.
```
$ ffmpeg -i input.mp4 -vn output.mp3
```
You can also specify the output file's bitrate using the **-ab** flag, as shown in the following example.
```
$ ffmpeg -i input.mp4 -vn -ab 320k output.mp3
```
##### **9\. Extracting images from the video**
Another useful feature of FFmpeg is that we can easily extract images from a video file. This can be very useful if you want to create a photo album from a video file.
To extract images from a video file, use the following command:
```
$ ffmpeg -i input.mp4 -r 1 -f image2 image-%2d.png
```
Here,
* **-r** Sets the frame rate, i.e., the number of frames to be extracted into images per second. The default value is **25**. (A variation using the fps filter is shown after this list.)
* **-f** Indicates the output format, i.e., the image format in our case.
* **image-%2d.png** Indicates how we want to name the extracted images. In this case, the names will start as image-01.png, image-02.png, image-03.png, and so on. If you use %3d, the image names will start as image-001.png, image-002.png, and so on.
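If one frame per second is too many, the fps filter can be used instead of -r to grab frames less frequently. This sketch extracts roughly one frame every 10 seconds (the interval and file names are only examples):
```
$ ffmpeg -i input.mp4 -vf fps=1/10 -f image2 image-%3d.png
```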
##### **10\. Cropping videos**
FFmpeg allows you to crop a given media file to any dimensions of your choice.
The syntax to crop a video file is given below:
```
ffmpeg -i input.mp4 -filter:v "crop=w:h:x:y" output.mp4
```
Here,
* **input.mp4** source video file.
* **-filter:v** Indicates the video filter.
* **crop** Indicates crop filter.
* **w** **Width** of the rectangle that we want to crop from the source video.
* **h** Height of the rectangle.
* **x** **x coordinate** of the rectangle that we want to crop from the source video.
* **y** y coordinate of the rectangle.
Let us say you want to crop a region with a **width of 640 pixels** and a **height of 480 pixels**, starting from the **position (200,150)**; the command would be:
```
$ ffmpeg -i input.mp4 -filter:v "crop=640:480:200:150" output.mp4
```
Please note that cropping videos will affect the quality. Do not do this unless it is necessary.
##### **11\. Convert a specific portion of a video**
Sometimes, you might want to convert only a specific portion of a video file to a different format. For example, the following command will convert the **first 10 seconds** of the given input.mp4 file to avi format.
```
$ ffmpeg -i input.mp4 -t 10 output.avi
```
Here, we specified the time in seconds. It is also possible to specify the time in **hh:mm:ss** format.
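For example, the same conversion limited to the first 50 seconds, with the duration written in hh:mm:ss form (a sketch; the file names are placeholders):
```
$ ffmpeg -i input.mp4 -t 00:00:50 output.avi
```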
##### **12\. Set the aspect ratio of a video**
You can set the aspect ratio of a video file using the **-aspect** flag, as shown below.
```
$ ffmpeg -i input.mp4 -aspect 16:9 output.mp4
```
The commonly used aspect ratios are:
* 16:9
* 4:3
* 16:10
* 5:4
* 2.21:1
* 2.35:1
* 2.39:1
##### **13\. Adding poster image to audio files**
You can add a poster image to your audio files, so that the image will be displayed while the audio plays. This is useful for hosting audio files on video hosting or sharing websites.
```
$ ffmpeg -loop 1 -i inputimage.jpg -i inputaudio.mp3 -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest output.mp4
```
##### **14\. Trim a media file using start and stop times**
To trim a video down to a smaller clip using start and stop times, we can use the following command.
```
$ ffmpeg -i input.mp4 -ss 00:00:50 -codec copy -t 50 output.mp4
```
Here,
* **-ss** Indicates the starting time of the video clip. In our example, the starting time is the 50th second.
* **-t** Indicates the total duration of the clip.
This is very helpful when you want to cut a part from an audio or video file using starting and ending time.
Similarly, we can trim down the audio file like below.
```
$ ffmpeg -i audio.mp3 -ss 00:01:54 -to 00:06:53 -c copy output.mp3
```
##### **15\. Split video files into multiple parts**
Some websites only allow you to upload videos of a specific size. In such cases, you can split large video files into multiple smaller parts, as shown below.
```
$ ffmpeg -i input.mp4 -t 00:00:30 -c copy part1.mp4 -ss 00:00:30 -codec copy part2.mp4
```
Here, **-t 00:00:30** indicates a part that is created from the start of the video to the 30th second of video. **-ss 00:00:30** shows the starting time stamp for the next part of video. It means that the 2nd part will start from the 30th second and will continue up to the end of the original video file.
##### **16\. Joining or merging multiple video parts into one**
FFmpeg can also join multiple video parts and create a single video file.
Create a **join.txt** file that contains the exact paths of the files you want to join. All files should be in the same format (same codec). The path of each file should be listed on its own line, as shown below.
```
file /home/sk/myvideos/part1.mp4
file /home/sk/myvideos/part2.mp4
file /home/sk/myvideos/part3.mp4
file /home/sk/myvideos/part4.mp4
```
Now, join all files using command:
```
$ ffmpeg -f concat -i join.txt -c copy output.mp4
```
If you get an error like the one below:
```
[concat @ 0x555fed174cc0] Unsafe file name '/path/to/mp4'
join.txt: Operation not permitted
```
Add the **-safe 0** option:
```
$ ffmpeg -f concat -safe 0 -i join.txt -c copy output.mp4
```
The above command will join part1.mp4, part2.mp4, part3.mp4, and part4.mp4 files into a single file called “output.mp4”.
##### **17\. Add subtitles to a video file**
We can also add subtitles to a video file using FFmpeg. Download the correct subtitles for your video and add them to your video as shown below.
```
$ ffmpeg -i input.mp4 -i subtitle.srt -map 0 -map 1 -c copy -c:v libx264 -crf 23 -preset veryfast -c:s mov_text output.mp4
```
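If you would rather burn (hardcode) the subtitles permanently into the video frames, the subtitles filter can be used instead. This is only a sketch and assumes your FFmpeg build includes the subtitles filter (libass support):
```
$ ffmpeg -i input.mp4 -vf subtitles=subtitle.srt output.mp4
```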
##### **18\. Preview or test video or audio files**
You might want to preview the output file to verify or test whether it has been properly transcoded. To do so, you can play it from your terminal with the command:
```
$ ffplay video.mp4
```
[![][1]][7]
Similarly, you can test the audio files as shown below.
```
$ ffplay audio.mp3
```
[![][1]][8]
##### **19\. Increase/decrease video playback speed**
FFmpeg allows you to adjust the video playback speed.
To increase the video playback speed, run:
```
$ ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4
```
The command will double the speed of the video.
To slow down your video, you need to use a multiplier **greater than 1**. To decrease playback speed, run:
```
$ ffmpeg -i input.mp4 -vf "setpts=4.0*PTS" output.mp4
```
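The setpts filter changes only the video timestamps, so the audio can drift out of sync. If you also want the audio track to keep pace, the atempo audio filter can be applied alongside it; a minimal sketch that doubles both video and audio speed (in older FFmpeg releases atempo is limited to the 0.5–2.0 range per instance, so larger factors need to be chained):
```
$ ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" -af "atempo=2.0" output.mp4
```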
##### **20\. Create Animated GIF**
We use GIF images on almost all social and professional networks for various purposes. Using FFmpeg, we can easily and quickly create animated GIF files. The following guide explains how to create an animated GIF file using FFmpeg and ImageMagick on Unix-like systems; a quick FFmpeg-only sketch follows the link.
* [**How To Create Animated GIF In Linux**][9]
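For a quick taste without ImageMagick, FFmpeg alone can also produce a GIF. This is only a rough sketch; the frame rate, width, and file names are arbitrary:
```
$ ffmpeg -i input.mp4 -vf "fps=10,scale=480:-1" output.gif
```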
##### **21\. Create videos from PDF files**
I have collected many PDF files, mostly Linux tutorials, over the years and saved them on my tablet PC. Sometimes I feel too lazy to read them on the tablet, so I decided to create videos from the PDF files and watch them on a big-screen device like a TV or a computer. If you have ever wondered how to make a movie file from a collection of PDF files, the following guide will help; a rough sketch also follows the link.
* [**How To Create A Video From PDF Files In Linux**][10]
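One possible approach is to render the PDF pages to images first and then turn them into a slideshow with FFmpeg. This is only a sketch under some assumptions: it assumes the pdftoppm tool from poppler-utils is installed, and the file names, page duration, and resolution are merely examples.
```
# Render each page of tutorial.pdf to PNG images named page-1.png, page-2.png, ...
$ pdftoppm -png tutorial.pdf page

# Turn the images into a video, showing each page for 5 seconds
$ ffmpeg -framerate 1/5 -pattern_type glob -i 'page-*.png' \
    -c:v libx264 -vf "scale=1280:-2,format=yuv420p" slideshow.mp4
```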
##### **22\. Getting help**
In this guide, I have covered the most commonly used FFmpeg commands. FFmpeg has many more options for various advanced functions. To learn more about them, refer to the man page.
```
$ man ffmpeg
```
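Beyond the man page, FFmpeg's built-in help can also be queried per component; for example (the encoder name is just an illustration, assuming libx264 is compiled into your build):
```
$ ffmpeg -h full | less
$ ffmpeg -h encoder=libx264
```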
And, that's all for now. I hope this guide helps you get started with FFmpeg. If you find this guide useful, please share it on your social and professional networks. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/20-ffmpeg-commands-beginners/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2017/05/FFmpeg-Commands-720x340.png
[2]: https://www.ostechnix.com/install-ffmpeg-linux/
[3]: http://www.ostechnix.com/wp-content/uploads/2017/05/sk@sk_001.png
[4]: https://ostechnix.tradepub.com/free/w_make141/prgm.cgi
[5]: https://ostechnix.tradepub.com/free/w_make75/prgm.cgi
[6]: https://ostechnix.tradepub.com/free/w_make235/prgm.cgi
[7]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_004.png
[8]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_005-3.png
[9]: https://www.ostechnix.com/create-animated-gif-ubuntu-16-04/
[10]: https://www.ostechnix.com/create-video-pdf-files-linux/
