Merge pull request #1 from LCTT/master

Update from LCTT
heguangzhi 2019-10-10 11:23:10 +08:00 committed by GitHub
commit dd2180c14a
28 changed files with 3532 additions and 1148 deletions

View File

@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11436-1.html)
[#]: subject: (Buying a Linux-ready laptop)
[#]: via: (https://opensource.com/article/19/7/linux-laptop)
[#]: author: (Ricardo Berlasso https://opensource.com/users/rgb-es)
Buying a Linux-ready laptop
======
> Tuxedo makes it easy to buy an out-of-the-box "penguin-ready" laptop.
![](https://img.linux.net.cn/data/attachment/album/201910/08/133924vnmbklqh5jkshkmj.jpg)
Recently, I bought and started using the Tuxedo Book BC1507, a Linux laptop computer. Ten years ago, if someone had told me that ten years later I could buy top-quality, "penguin-ready" laptops from companies such as [System76][2], [Slimbook][3], and [Tuxedo][4], I probably would have laughed. Well, now I'm laughing, but with joy!
Going beyond designing computers for free/libre open source software (FLOSS), all three companies recently [announced][5] that they are trying to eliminate proprietary BIOS software by switching to [Coreboot][6].
### Buying it
Tuxedo Computers is a German company that builds Linux-ready laptops. In fact, if you want a different operating system, it costs more.
Buying the computer was incredibly easy. Tuxedo offers many payment methods: not only credit cards but also PayPal and even bank transfers. (LCTT translator's note: we could use Alipay and WeChat Pay... besides, international delivery also means paying shipping and customs fees.) Just fill out the bank transfer form on Tuxedo's web page, and the company will send you the bank information.
Tuxedo builds every computer on demand, and picking exactly what you want is as easy as selecting the basic model and exploring the drop-down menus to select different components. There is a lot of information on the page to guide you in the purchase.
If you pick a Linux distribution different from the recommended one, Tuxedo does a "net install", so have a network cable ready to finish the installation, or you can burn your preferred image onto a USB drive. I used a DVD with the openSUSE Leap 15.1 installer through an external DVD reader instead, but you get the idea.
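If you go the USB route, `dd` is a common way to write the image on Linux. A minimal sketch; the ISO filename is an example and `/dev/sdX` is a placeholder for your USB drive (check it with `lsblk` first, because `dd` overwrites whatever device you point it at):
```
$ lsblk        # identify the USB drive first; the device below is a placeholder
$ sudo dd if=openSUSE-Leap-15.1-DVD-x86_64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```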
The model I chose accepts up to two disks: one SSD and the other either an SSD or a conventional hard drive. As I was already over budget, I decided to pick a conventional 1TB disk and increase the RAM to 16GB. The processor is an 8th-generation i5 with four cores. I selected a backlit Spanish keyboard, a 1920×1080/96dpi screen, and an SD card reader. All in all, it's a great system.
If you're fine with the default English or German keyboard, you can even ask for a penguin icon on the Meta key! The Spanish keyboard I needed doesn't offer this option.
### Receiving and using it
The perfectly packaged computer arrived safely at my door just six working days after the payment was made. After unpacking the computer and unlocking the battery, I was ready to roll.
![Tuxedo Book BC1507][7]
*The new toy on top of my (physical) desktop.*
The computer's design is really nice and feels solid. Even though the chassis on this model is not aluminum (LCTT translator's note: they have better-looking models with aluminum chassis), it stays cool. The fan is really quiet, and the airflow goes to the back edge, not to the sides as in many other laptops. The battery provides several hours of autonomy. An option in the BIOS called FlexiCharger stops charging the battery after it reaches a certain percentage, so you don't need to remove the battery when you work plugged in for a long time.
The keyboard is really comfortable and very quiet. Even the touchpad buttons are quiet! Also, you can easily adjust the light intensity of the backlit keyboard.
Finally, it's easy to access every component in the laptop, so the computer can be updated or repaired without any problem. Tuxedo even sends a few spare screws!
### Conclusion
After a month of frequent use, I'm very happy with the system. It's exactly what I asked for, and everything works perfectly.
Because they are usually high-end systems, Linux-ready computers tend to be expensive. If you compare the price of a Tuxedo or Slimbook computer against a similarly specced machine from a better-known brand, the prices are not that different. If you want a powerful system to use with free software, don't hesitate to support these companies: what they offer is well worth the price.
Please let us know in the comments about your experience with Tuxedo and other "penguin-friendly" companies.
* * *
This article is based on "[My new 'penguin-ready' laptop: Tuxedo-Book-BC1507][8]", published on Ricardo's blog, [From Mind to Type][9].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/linux-laptop
Author: [Ricardo Berlasso][a]
Collector: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/rgb-es
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://system76.com/
[3]: https://slimbook.es/en/
[4]: https://www.tuxedocomputers.com/
[5]: https://www.tuxedocomputers.com/en/Infos/News/Tuxedo-Computers-stands-for-Free-Software-and-Security-.tuxedo
[6]: https://coreboot.org/
[7]: https://opensource.com/sites/default/files/uploads/tuxedo-600_0.jpg (Tuxedo Book BC1507)
[8]: https://frommindtotype.wordpress.com/2019/06/17/my-new-penguin-ready-laptop-tuxedo-book-bc1507/
[9]: https://frommindtotype.wordpress.com/

View File

@ -0,0 +1,110 @@
[#]: collector: (lujun9972)
[#]: translator: (amwps290)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11434-1.html)
[#]: subject: (Mirror your Android screen on your computer with Guiscrcpy)
[#]: via: (https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Mirror your Android screen on your computer with guiscrcpy
======
> Access your Android device from your PC with this open source application based on scrcpy.
![](https://img.linux.net.cn/data/attachment/album/201910/08/123143nlz718152v5nf5n8.png)
In the future, the information you need will be just one gesture away, all of it appearing in midair as a hologram you can interact with even while driving your car. That future hasn't arrived yet, though, and until it does, we're all stuck with information spread across laptops, phones, tablets, and smart refrigerators. Unfortunately, that means when we need information from a device, we generally have to look at that device.
While not quite holographic terminals or flying cars, [guiscrcpy][2] by developer [Srevin Saju][3] is an application that consolidates multiple screens in one place, giving you a little of that futuristic feeling.
Guiscrcpy is an open source (GNU GPLv3 licensed) project based on the award-winning open source engine [scrcpy][4]. With guiscrcpy, you can cast your Android phone's screen onto your computer so you can view it along with everything else on the phone. Guiscrcpy supports Linux, Windows, and macOS.
Unlike other scrcpy alternatives, guiscrcpy is not merely a simple copy of scrcpy. The project prioritizes collaboration with other open source projects, so guiscrcpy is an extension, or a user-interface layer, for scrcpy. Keeping the Python 3 GUI separate from scrcpy ensures that nothing interferes with the efficiency of the scrcpy backend. You can screencast at up to 1080p resolution, and thanks to its ultrafast rendering and very low CPU usage, it runs smoothly even on low-end computers.
Scrcpy is the cornerstone of the guiscrcpy project. It is a command-line application, so it has no user interface to handle your gestures. It also provides no Back or Home buttons, and it requires some familiarity with the [Linux terminal][5]. Guiscrcpy adds graphical panels to scrcpy, so any user can run it and cast and control their device without sending any information over the network. Guiscrcpy also provides compiled binaries for Windows and Linux users for convenience.
### Installing guiscrcpy
Before installing guiscrcpy, you need to install its dependencies, most notably scrcpy. Perhaps the easiest way to install scrcpy is with [snap][6], which is available for most major Linux distributions. If you have snap installed and enabled on your computer, you can install scrcpy in one easy step:
```
$ sudo snap install scrcpy
```
Once you have installed scrcpy, you can install the other dependencies. The [Simple DirectMedia Layer][7] (SDL 2.0) is a toolkit for displaying and controlling your device's screen, and the [Android Debug Bridge][8] (adb) command connects your Android phone to your computer.
On Fedora or CentOS:
```
$ sudo dnf install SDL2 android-tools
```
On Ubuntu or Debian:
```
$ sudo apt install SDL2 android-tools-adb
```
In another terminal, install the Python dependencies:
```
$ python3 -m pip install -r requirements.txt --user
```
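The `requirements.txt` step assumes you are working inside a local copy of the guiscrcpy source tree; a hedged sketch of getting it from the project repository (the launch command is an assumption based on the project being a Python package, not something stated in this article):
```
$ git clone https://github.com/srevinsaju/guiscrcpy.git   # repository from link [2]
$ cd guiscrcpy
$ python3 -m pip install -r requirements.txt --user       # the dependency step shown above
$ python3 -m guiscrcpy                                    # launch the GUI (assumed entry point)
```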
### Setting up your phone
For your phone to accept an adb connection, it must have Developer Mode enabled. To enable it, open "Settings", select "About phone", and find the "Build number" (it may be in the "Software information" panel). Believe it or not, tapping "Build number" seven times turns on Developer Mode. (LCTT translator's note: this obviously describes Google's stock Android; the way to enable developer options on other brands of Android phones may differ.)
![Enabling Developer Mode][9]
For more thorough instructions on connecting your phone, refer to the [Android developer documentation][10].
Once your phone is set up, plug it into your computer with a USB cable (or connect it wirelessly, making sure you have configured the wireless connection first).
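A quick sanity check with plain `adb` confirms the connection; the wireless steps and the IP address below are placeholders for your own setup:
```
$ adb devices                    # the phone should be listed once USB debugging is accepted
$ adb tcpip 5555                 # switch the device's adb daemon to TCP/IP mode
$ adb connect 192.168.1.50:5555  # placeholder IP; use your phone's Wi-Fi address
```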
### Using guiscrcpy
When you launch guiscrcpy, you see its main control window. Click the "Start scrcpy" button there. As long as Developer Mode is enabled and the phone is connected to your computer over USB or WiFi, guiscrcpy connects to your phone.
![Guiscrcpy main screen][11]
It also includes a configuration-writing system: you can write your configuration file to the `~/.config` directory to preserve your preferences between uses.
The bottom panel of guiscrcpy is a floating window that helps you perform basic control actions. It includes the Home, Back, and Power buttons, as well as some other buttons that are heavily used on Android phones. Note that this module does not interact with scrcpy's SDL; it works through adb directly, so it executes with no latency. In other words, this panel talks to your phone directly through adb rather than through scrcpy.
![guiscrcpy's bottom panel][12]
The project is under active development, and new features keep being added; the latest version has gesture support and a notification interface.
With guiscrcpy, you not only see your phone on your computer screen, you can also interact with it, either by clicking the SDL window just as you would tap your physical phone, or by using the buttons on the floating panel.
![guiscrcpy running on Fedora 30][13]
Guiscrcpy is a fun and useful application that provides features that really ought to be official on any modern device, especially a platform like Android. Try it yourself, and add some futuristic sensibility to your digital life today.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy
Author: [Seth Kenlon][a]
Collector: [lujun9972][b]
Translator: [amwps290](https://github.com/amwps290)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://github.com/srevinsaju/guiscrcpy
[3]: http://opensource.com/users/srevinsaju
[4]: https://github.com/Genymobile/scrcpy
[5]: https://www.redhat.com/sysadmin/navigating-filesystem-linux-terminal
[6]: https://snapcraft.io/
[7]: https://www.libsdl.org/
[8]: https://developer.android.com/studio/command-line/adb
[9]: https://opensource.com/sites/default/files/uploads/developer-mode.jpg (Enabling Developer Mode)
[10]: https://developer.android.com/studio/debug/dev-options
[11]: https://opensource.com/sites/default/files/uploads/guiscrcpy-main.png (Guiscrcpy main screen)
[12]: https://opensource.com/sites/default/files/uploads/guiscrcpy-bottompanel.png (guiscrcpy's bottom panel)
[13]: https://opensource.com/sites/default/files/uploads/guiscrcpy-screenshot.jpg (guiscrcpy running on Fedora 30)

View File

@ -1,41 +1,33 @@
[#]: collector: (lujun9972)
[#]: translator: (alim0x)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11440-1.html)
[#]: subject: (How to Execute Commands on Remote Linux System over SSH)
[#]: via: (https://www.2daygeek.com/execute-run-linux-commands-remote-system-over-ssh/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How to Execute Commands on Remote Linux System over SSH
======
We may sometimes need to run commands on a remote machine. If it's just once in a while, you can log in to the remote system and run them there. But doing this every time can get annoying, so is there a better way to avoid this hassle?
Yes, you can run these commands from your local system instead of logging in to the remote system. What's the benefit? It definitely saves you a good deal of time.
How does it work? SSH allows you to run a command on a remote machine without logging in to it.
**The general syntax is as follows:**
```
$ ssh [User_Name]@[Remote_Host_Name or IP] [Command or Script]
```
### 1) How to Run a Command on a Remote Linux System Over SSH
The following example allows users to run the **[df command][1]** via ssh on a remote Linux machine.
```
$ ssh daygeek@CentOS7.2daygeek.com df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 27G 4.4G 23G 17% /
@ -48,14 +40,14 @@ $ ssh daygeek@CentOS7.2daygeek.com df -h
tmpfs 184M 0 184M 0% /run/user/1000
```
### 2) How to Run Multiple Commands on a Remote Linux System Over SSH
The following example allows users to run multiple commands at once over ssh on the remote Linux system.
It runs the `uptime` command and `free` command on the remote Linux system simultaneously.
```
$ ssh daygeek@CentOS7.2daygeek.com "uptime && free -m"
23:05:10 up 10 min, 0 users, load average: 0.00, 0.03, 0.03
@ -65,15 +57,13 @@ $ ssh daygeek@CentOS7.2daygeek.com "uptime && free -m"
Swap: 3071 0 3071
```
### 3) How to Run a Command with sudo Privilege on a Remote Linux System Over SSH
The following example allows users to run the `fdisk` command with **[sudo privilege][2]** on a remote Linux system via ssh.
Normal users are not allowed to execute commands under the system binary (`/usr/sbin/`) directory; they need root privilege to run them.
So you need root privilege to run the **[fdisk command][3]** on a Linux system. The `which` command returns the full path of the executable of a given command.
```
$ which fdisk
@ -81,7 +71,7 @@ $ which fdisk
```
```
$ ssh -t daygeek@CentOS7.2daygeek.com "sudo fdisk -l"
[sudo] password for daygeek:
Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
@ -113,23 +103,23 @@ $ ssh -t daygeek@CentOS7.2daygeek.com "sudo fdisk -l"
Connection to centos7.2daygeek.com closed.
```
### 4) How to Run the Service Command with sudo Privilege on a Remote Linux System Over SSH
The following example allows users to run the service control command with sudo privilege on a remote Linux system via ssh.
```
$ ssh -t daygeek@CentOS7.2daygeek.com "sudo systemctl restart httpd"
[sudo] password for daygeek:
Connection to centos7.2daygeek.com closed.
```
### 5) How to Run a Command on a Remote Linux System Over SSH With a Non-Standard Port
The following example allows users to run the **[hostnamectl command][4]** via ssh on a remote Linux machine that uses a non-standard port.
```
$ ssh -p 2200 daygeek@CentOS7.2daygeek.com hostnamectl
Static hostname: Ubuntu18.2daygeek.com
Icon name: computer-vm
@ -142,12 +132,12 @@ $ ssh -p 2200 daygeek@CentOS7.2daygeek.com hostnamectl
Architecture: x86-64
```
### 6) How to Save Output from a Remote System to the Local System
The following example allows users to remotely execute the **[top command][5]** on a Linux system via ssh and save the output to the local system.
```
$ ssh daygeek@CentOS7.2daygeek.com "top -bc | head -n 35" > /tmp/top-output.txt
```
```
@ -180,17 +170,17 @@ cat /tmp/top-output.txt
20 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset]
```
Alternatively, you can use the following format to run multiple commands on a remote system:
```
$ ssh daygeek@CentOS7.2daygeek.com << EOF
hostnamectl
free -m
grep daygeek /etc/passwd
EOF
```
The output of the above command:
```
Pseudo-terminal will not be allocated because stdin is not a terminal.
@ -212,11 +202,11 @@ Pseudo-terminal will not be allocated because stdin is not a terminal.
daygeek:x:1000:1000:2daygeek:/home/daygeek:/bin/bash
```
### 7) How to Execute Local Bash Scripts on a Remote System
The following example allows users to run a local **[bash script][6]** `remote-test.sh` via ssh on a remote Linux machine.
Create a shell script and execute it.
```
$ vi /tmp/remote-test.sh
@ -231,10 +221,10 @@ $ vi /tmp/remote-test.sh
hostnamectl
```
The output of the above command:
```
$ ssh daygeek@CentOS7.2daygeek.com 'bash -s' < /tmp/remote-test.sh
01:17:09 up 22 min, 1 user, load average: 0.00, 0.02, 0.08
@ -266,7 +256,7 @@ $ ssh daygeek@CentOS7.2daygeek.com 'bash -s' < /tmp/remote-test.sh
Architecture: x86-64
```
Alternatively, a pipe can be used. If you think the output doesn't look good, make a few changes to make it more elegant:
```
$ vi /tmp/remote-test-1.sh
@ -290,10 +280,10 @@ $ vi /tmp/remote-test-1.sh
echo "------------------------------------------------------------------"
```
The output of the above script:
```
$ cat /tmp/remote-test-1.sh | ssh daygeek@CentOS7.2daygeek.com
Pseudo-terminal will not be allocated because stdin is not a terminal.
---------System Uptime--------------------------------------------
03:14:09 up 2:19, 1 user, load average: 0.00, 0.01, 0.05
@ -331,22 +321,22 @@ $ cat /tmp/remote-test-1.sh | ssh daygeek@CentOS7.2daygeek.com
Architecture: x86-64
```
### 8) How to Run Multiple Commands on Multiple Remote Systems Simultaneously
The following bash script allows users to run multiple commands on multiple remote systems simultaneously, using a simple `for` loop.
For this purpose, you can also try the **[PSSH command][7]**, **[ClusterShell command][8]**, or **[DSH command][9]**.
```
$ vi /tmp/multiple-host.sh
for host in CentOS7.2daygeek.com CentOS6.2daygeek.com
do
ssh daygeek@${host} "uname -a;uptime;date;w"
done
```
The output of the above script:
```
$ sh multiple-host.sh
@ -358,7 +348,7 @@ $ sh multiple-host.sh
Wed Sep 25 01:33:57 CDT 2019
01:33:57 up 39 min, 1 user, load average: 0.07, 0.06, 0.06
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
daygeek pts/0 192.168.1.6 01:08 23:25 0.06s 0.06s -bash
Linux CentOS6.2daygeek.com 2.6.32-754.el6.x86_64 #1 SMP Tue Jun 19 21:26:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
@ -368,21 +358,19 @@ $ sh multiple-host.sh
Tue Sep 24 23:33:58 MST 2019
23:33:58 up 39 min, 0 users, load average: 0.00, 0.00, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
```
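The loop above contacts the hosts one at a time. If you'd rather not install PSSH, ClusterShell, or DSH, a rough parallel variant backgrounds each `ssh` and waits for all of them; note that the hosts' outputs can interleave, so each one is redirected to its own file here:
```
$ vi /tmp/multiple-host-parallel.sh

for host in CentOS7.2daygeek.com CentOS6.2daygeek.com
do
    # run each remote command set in the background
    ssh daygeek@${host} "uname -a;uptime;date;w" > /tmp/${host}.out &
done
# block until every background ssh has finished
wait
```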
### 9) How to Add a Password Using the sshpass Command
If you find yourself having to enter your password every time, I advise you to pick one of the methods below, as per your requirement.
If you are going to perform this type of activity frequently, I advise you to set up **[password-less authentication][10]**, since it's the standard and permanent solution.
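Setting that up is a one-time pair of commands; a minimal sketch using the same account as the examples above:
```
$ ssh-keygen -t rsa                            # generate a key pair (accept the defaults)
$ ssh-copy-id daygeek@CentOS7.2daygeek.com     # append your public key to the remote authorized_keys
```
After this, the ssh commands in this article run without prompting for a password.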
If you only do these tasks a few times a month, I recommend the `sshpass` utility. Just provide the password as an argument to the `-p` option.
```
$ sshpass -p 'Your_Password_Here' ssh -p 2200 daygeek@CentOS7.2daygeek.com ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
@ -404,8 +392,8 @@ via: https://www.2daygeek.com/execute-run-linux-commands-remote-system-over-ssh/
Author: [Magesh Maruthamuthu][a]
Collector: [lujun9972][b]
Translator: [alim0x](https://github.com/alim0x)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/) (Linux China)

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11433-1.html)
[#]: subject: (You Can Now Use OneDrive in Linux Natively Thanks to Insync)
[#]: via: (https://itsfoss.com/use-onedrive-on-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
@ -10,33 +10,33 @@
You Can Now Use OneDrive Natively in Linux Thanks to Insync
======
[OneDrive][1] is a cloud storage service from Microsoft that offers every user 5 GB of free storage. It is integrated with Microsoft accounts, and if you use Windows, OneDrive comes preinstalled on it.
OneDrive is not available as a desktop application on Linux. You can access your stored files via the web, but you can't use the cloud storage from your file manager the way a native client allows.
The good news is that you can now use an unofficial tool that lets you use OneDrive on Ubuntu or other Linux distributions.
When [Insync][2] added Google Drive support on Linux, it became a very popular premium third-party sync tool for Linux. We have a detailed review of [Insync's Google Drive support][3].
The recently [released Insync 3][4] now supports OneDrive as well, so in this article we look at how to use OneDrive with Insync and what's new in it.
> Non-FOSS alert
> A few developers take it painfully when non-FOSS software is brought to Linux. As a portal focused on desktop Linux, we cover such software here even when it is not FOSS.
> Insync 3 is neither open source nor free to use. You only get a 15-day trial period to test it. If you like it, you can purchase it for a one-time fee of $29.99 per account.
> We are not being paid to promote it (in case you were wondering). We don't do that here.
### A native OneDrive experience on Linux with Insync
![][5]
Even though it is a paid tool, users who rely on OneDrive may want a seamless experience syncing OneDrive on their Linux system.
To get started, you have to download the appropriate package for your Linux distribution from the [official download page][6].
- [Download Insync][7]
You can also choose to add the repository and install it that way. You will find the instructions on Insync's [official website][7].
@ -46,7 +46,7 @@ > We are not being paid to promote it (in case you were wondering). We don't do
Also, note that every OneDrive or Google Drive account you add requires a separate license.
Now, after authorizing your OneDrive account, you have to select a base folder where everything will be synced, which is a new feature in Insync 3.
![Insync 3 Base Folder][9]
@ -62,13 +62,13 @@ > We are not being paid to promote it (in case you were wondering). We don't do
![Insync 3][12]
You can now start syncing files/folders with OneDrive across multiple platforms, including a Linux desktop with Insync. In addition to all the new features/changes above, you also get a faster/smoother experience on Insync.
Also, with Insync 3, you can view your sync progress:
![][13]
### Wrapping Up
Overall, Insync 3 is an impressive upgrade for users looking to sync OneDrive on their Linux system. If you don't want to pay, you can try other [free cloud services for Linux][14].
@ -81,7 +81,7 @@ via: https://itsfoss.com/use-onedrive-on-linux/
Author: [Ankush Das][a]
Collector: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/) (Linux China)

View File

@ -0,0 +1,251 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11438-1.html)
[#]: subject: (CentOS 8 Installation Guide with Screenshots)
[#]: via: (https://www.linuxtechi.com/centos-8-installation-guide-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
CentOS 8 Installation Guide with Screenshots
======
Following the release of RHEL 8, the CentOS community has released the long-awaited CentOS 8, in two editions:
* CentOS Stream: a rolling-release Linux distribution, for developers who need frequent updates
* CentOS: a stable operating system similar to RHEL 8, which sysadmins can use to deploy and configure services and applications
In this article, we walk through the CentOS 8 installation, illustrated with screenshots.
### New features in CentOS 8
* DNF is the default package manager, while yum remains available (see the sketch after this list)
* Network configuration is done with NetworkManager (`nmcli` and `nmtui`); the network scripts have been removed
* Containers are managed with Podman
* Two new package repositories were introduced: BaseOS and AppStream
* Cockpit is the default system administration tool
* Wayland is the default display server
* `iptables` is replaced by `nftables`
* Linux kernel 4.18
* PHP 7.2, Python 3.6, Ansible 2.8, VIM 8.0, and Squid 4 are provided
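A hedged sketch of what the DNF change looks like on a fresh system; the package name is an arbitrary example, not from the article:
```
$ sudo dnf repolist         # lists the new BaseOS and AppStream repositories
$ sudo dnf install vim      # installing a package; 'yum' still works as an alias
$ sudo dnf update           # updating the system
```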
### Minimum hardware requirements for CentOS 8
* 2 GB RAM
* 64-bit x86 CPU at 2 GHz or above
* 20 GB of hard disk space
### CentOS 8 installation steps with screenshots
#### Step 1: Download the CentOS 8 ISO file
Download the CentOS 8 ISO file from the official CentOS website: <https://www.centos.org/download/>.
#### Step 2: Create CentOS 8 boot media (USB or DVD)
After downloading the CentOS 8 ISO file, burn it to a USB drive or DVD to use as boot media.
Then reboot the system and set the BIOS to boot from the media created above.
#### Step 3: Choose "Install CentOS Linux 8.0"
When the system boots from the CentOS 8 media, you will see the following screen. Select "Install CentOS Linux 8.0" and press Enter.
![Choose-Install-CentOS8][2]
#### Step 4: Choose your preferred language
Select the language you want to use **during the installation** of CentOS 8, then continue.
![Select-Language-CentOS8-Installation][3]
#### Step 5: Prepare the CentOS 8 installation
In this step we configure the following:
* Keyboard layout
* Date and time
* Installation source
* Software selection
* Installation destination
* Kdump
![Installation-Summary-CentOS8][4]
As shown above, the installer has automatically supplied the "Keyboard", "Time & Date", "Installation Source", and "Software Selection" options.
If you need to change any of these settings, click the corresponding icon. For example, to change the system time and date, just click "Time & Date", select the correct time zone, and click "Done".
![TimeZone-CentOS8-Installation][5]
In the software selection options, choose the installation mode. For example, the "Server with GUI" option gives you a graphical interface after installation, while "Minimal Install" installs the least amount of extra software.
![Software-Selection-CentOS8-Installation][6]
Here we choose "Server with GUI" and click "Done".
Kdump is enabled by default. Although it is strongly recommended to leave it on, you can click its icon to disable it.
If you want to configure the network during installation, click the "Network & Host Name" option.
![Networking-During-CentOS8-Installation][7]
If your system is connected to a modem/router with DHCP enabled, the network interface picks up an IP address automatically when it is enabled. If you need a static IP, click "Configure" and specify the IP details. Besides this, we also set the hostname to "linuxtechi.com".
Once you are done with the network configuration, click "Done".
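If you prefer to do this after installation instead, the same settings can be applied with NetworkManager's `nmcli`; a sketch with an assumed connection name (`ens33`) and example addresses:
```
$ sudo nmcli connection modify ens33 ipv4.method manual \
      ipv4.addresses 192.168.1.100/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
$ sudo nmcli connection up ens33   # re-activate the connection with the new settings
```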
Finally, configure the "Installation Destination": specify which disk CentOS 8 should be installed on and how it should be partitioned.
![Installation-Destination-Custom-CentOS8][8]
Click "Done".
As shown above, I allocated 40 GB of disk space for CentOS 8. Two partitioning schemes are available: to let the installer partition automatically, choose "Automatic" under "Storage Configuration"; to partition manually, choose "Custom".
Here we choose "Custom" and create the following LVM-based partitions:
* `/boot`: 2 GB (ext4 file system)
* `/`: 12 GB (xfs file system)
* `/home`: 20 GB (xfs file system)
* `/tmp`: 5 GB (xfs file system)
* Swap: 1 GB
First create the `/boot` standard partition with a size of 2 GB, as shown below:
![boot-partition-CentOS8-Installation][9]
Click "Add mount point".
Then create the second partition, `/`, with a size of 12 GB. Click the plus sign, specify the mount point and partition size, and click "Add mount point".
![slash-root-partition-centos8-installation][10]
Then on the same page change the partition type of `/` from standard to LVM, and click "Update Settings".
![Change-Partition-Type-CentOS8][11]
As shown above, the installer has automatically created a volume group. If you want to change the volume group name, click the "Modify" option under the "Volume Group" tab.
Similarly, create the `/home` and `/tmp` partitions, with sizes of 20 GB and 5 GB respectively, and set their partition type to LVM.
![home-partition-CentOS8-Installation][12]
![tmp-partition-centos8-installation][13]
Finally, create the swap partition.
![Swap-Partition-CentOS8-Installation][14]
Click "Add mount point".
When all the partitioning is done, click "Done".
![Choose-Done-after-manual-partition-centos8][15]
On the next screen, click "Accept changes" to write these changes to disk.
![Accept-changes-CentOS8-Installation][16]
#### Step 6: Choose "Begin Installation"
After completing all the changes above, go back to the installation summary screen and click "Begin Installation" to start installing CentOS 8.
![Begin-Installation-CentOS8][17]
The screen below shows that the installation is in progress.
![Installation-progress-centos8][18]
To set the password for the root user, just click the "Root Password" option and enter a password, then click the "User Creation" option to create a local user.
![Root-Password-CentOS8-Installation][19]
Fill in the details of the newly created user.
![Local-User-Details-CentOS8][20]
When the installation is complete, the installer prompts you to reboot the system.
![CentOS8-Installation-Progress][21]
#### Step 7: Finish the installation and reboot
After the installation finishes, reboot the system by clicking the "Reboot" button.
![Installation-Completed-CentOS8][22]
Note: after the reboot, remember to disconnect the installation media and set the BIOS boot device back to the hard disk.
#### Step 8: Boot the newly installed CentOS 8 and accept the license
In the GRUB boot menu, select CentOS 8 to boot.
![Grub-Boot-CentOS8][23]
Accept the CentOS 8 license agreement and click "Done".
![Accept-License-CentOS8-Installation][24]
On the next screen, click "Finish Configuration".
![Finish-Configuration-CentOS8-Installation][25]
#### Step 9: Log in after finishing the configuration
After accepting the license and finishing the configuration, you arrive at the login screen.
![Login-screen-CentOS8][26]
Log in with the user created earlier and its password, follow the prompts, and you will see the following screens.
![CentOS8-Ready-Use-Screen][27]
Click "Start Using CentOS Linux".
![Desktop-Screen-CentOS8][28]
That concludes the installation; with this, CentOS 8 has been installed successfully.
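Once logged in, a quick sanity check in a terminal confirms the release and the layout created above; a minimal sketch:
```
$ cat /etc/redhat-release        # should report CentOS Linux release 8.0
$ hostnamectl                    # shows the hostname set during installation
$ df -h /boot / /home /tmp       # verify the mount points created earlier
$ sudo swapon --show             # verify the swap partition
```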
Feel free to send us your comments.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/centos-8-installation-guide-screenshots/
Author: [Pradeep Kumar][a]
Collector: [lujun9972][b]
Translator: [HankChow](https://github.com/HankChow)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Install-CentOS8.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Language-CentOS8-Installation.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Summary-CentOS8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/TimeZone-CentOS8-Installation.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Software-Selection-CentOS8-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Networking-During-CentOS8-Installation.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Destination-Custom-CentOS8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-CentOS8-Installation.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-centos8-installation.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Change-Partition-Type-CentOS8.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-CentOS8-Installation.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/tmp-partition-centos8-installation.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Swap-Partition-CentOS8-Installation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Done-after-manual-partition-centos8.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Accept-changes-CentOS8-Installation.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Begin-Installation-CentOS8.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-progress-centos8.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Root-Password-CentOS8-Installation.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Local-User-Details-CentOS8.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-Installation-Progress.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Completed-CentOS8.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Grub-Boot-CentOS8.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Accept-License-CentOS8-Installation.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Finish-Configuration-CentOS8-Installation.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-CentOS8.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-Ready-Use-Screen.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Desktop-Screen-CentOS8.jpg

View File

@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Kubernetes communication, SRE struggles, and more industry trends)
[#]: via: (https://opensource.com/article/19/10/kubernetes-sre-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Kubernetes communication, SRE struggles, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Review of pod-to-pod communications in Kubernetes][2]
> In this article, we dive into pod-to-pod communications by showing you ways in which pods within a Kubernetes network can communicate with one another.
>
> While Kubernetes is opinionated in how containers are deployed and operated, it is very non-prescriptive of how the network should be designed in which pods are to be run. Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies)
**The impact**: Networking is one of the most complicated parts of making computers work together to solve our problems. Kubernetes turns that complexity up to 11, and this article dials it back down to 10.75.
## [One SRE's struggle and success to improve Infrastructure as Code][3]
> Convergence is our goal because we expect our infrastructure to reach a desired state over time expressed in the code. Software idempotence means software can run as many times as it wants and unintended changes don't happen. As a result, we built an in-house service that runs as specified to apply configurations in source control. Traditionally, we've aimed for a masterless configuration design so our configuration agent looks for information on the host.
**The impact**: I've heard it said that the [human element][4] is the most important element of any digital transformation. While I don't know that the author would use that term to describe the outcome he was after, he does a great job of showing that it is not automation for automation's sake we want but rather automation that makes a meaningful impact on the lives of the people it supports.
## [Why GitHub is the gold standard for developer-focused companies][5]
> Now, with last year's purchase by Microsoft supporting them, it is clear that GitHub has a real opportunity to continue building out a robust ecosystem, with billion-dollar companies built upon what could turn into a powerful platform. Is GitHub the next ecosystem success story? In a word, yes. At my company, we bet on GitHub as a successful platform to build upon from the very start. We felt it was the place to build our solution if we wanted to streamline project management and keep software teams close to the code.
**The impact**: It is one of the great ironies of open source that the most popular tool for open source development is not itself open source. The only way this works is if that tool is so good that open source developers are willing to overlook that inconsistency.
## [KubeVirt joins Cloud Native Computing Foundation][6]
> This month the Cloud Native Computing Foundation (CNCF) formally adopted [KubeVirt][7] into the CNCF Sandbox. KubeVirt allows you to provision, manage and run virtual machines from and within Kubernetes. In joining the CNCF Sandbox, KubeVirt now has a more substantial platform to grow as well as educate the CNCF community on the use cases for placing virtual machines within Kubernetes. The CNCF onboards projects into the CNCF Sandbox when they warrant experimentation on neutral ground to promote and foster collaborative development.
**The impact**: The convergence of containers and virtual machines is clearly a direction vendors think is valuable. Moving this project to the CNCF gives a way to see whether this idea is going to be as popular with users and customers as vendors hope it will be.
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/kubernetes-sre-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://superuser.openstack.org/articles/review-of-pod-to-pod-communications-in-kubernetes/
[3]: https://thenewstack.io/one-sres-struggle-and-success-to-improve-infrastructure-as-code/
[4]: https://devops.com/the-secret-to-digital-transformation-is-human-connection/
[5]: https://thenextweb.com/podium/2019/10/02/why-github-is-the-gold-standard-for-developer-focused-companies/
[6]: https://blog.openshift.com/kubevirt-joins-cloud-native-computing-foundation/
[7]: https://kubevirt.io/

View File

@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloud Native Computing: The Hidden Force behind Swift App Development)
[#]: via: (https://opensourceforu.com/2019/10/cloud-native-computing-the-hidden-force-behind-swift-app-development/)
[#]: author: (Robert Shimp https://opensourceforu.com/author/robert-shimp/)
Cloud Native Computing: The Hidden Force behind Swift App Development
======
[![][1]][2]
_Cloud native computing can bolster the development of advanced applications powered by artificial intelligence, machine learning and the Internet of Things._
Modern enterprises are constantly adapting their business strategies and processes as they respond to evolving market conditions. This is especially true for enterprises serving fast-growing economies in the Asia Pacific, such as India and Australia. For these businesses, cloud computing is an invaluable means to accelerate change. From quickly deploying new applications to rapidly scaling infrastructure, enterprises are using cloud computing to create new value, build better solutions and expand business.
Now cloud providers are introducing new cloud native computing services that enable even more dynamic application development. This new technology will make cloud application developers more agile and efficient, even as it reduces deployment costs and increases cloud vendor independence.
Many enterprises are intrigued but also feel overwhelmed by the rapidly changing cloud native technology landscape and, hence, aren't sure how to proceed. While cloud native computing has demonstrated success among early adopters, harnessing this technology has posed a challenge for many mainstream businesses.
**Choosing the right cloud native open source projects**
There are several ways that an enterprise can bring cloud native computing on board. One option is to build its own cloud native environment using open source software. This comes at the price of carefully evaluating many different open source projects before choosing which software to use. Once the software is selected, the IT department will need to staff and train hard-to-find talent to provide in-house support. All in all, this can be an expensive and risky way to adopt new technology.
A second option is to contract with a software vendor to provide a complete cloud native solution. But this compromises the organisation's freedom to choose the best open source technologies in exchange for better vendor support, not to mention the added perils of a closed contract.
This dilemma can be resolved by using a technology provider that offers the best of both worlds — i.e., delivering standards-based off-the-shelf software from the open source projects designated by the Cloud Native Computing Foundation (CNCF), and also providing integration, testing and enterprise-class support for the entire software stack.
CNCF uses experts from across the industry to evaluate the maturity, quality and security of cloud native open source projects and give guidance on which ones are ready for enterprise use. Selected cloud native technologies cover the entire scope of containers, microservices, continuous integration, serverless functions, analytics and much more.
Once CNCF declares these cloud native open source projects as having graduated, they can confidently be incorporated into an enterprises cloud native strategy with the knowledge that these are high quality, mainstream technologies that will get industry-wide support.
**Finding that single vendor who offers multiple benefits**
But adopting CNCF's rich cloud native technology framework is only half the battle won. You also must choose a technology provider who will package these CNCF-endorsed technologies without proprietary extensions that lock you in, and provide the necessary integration, testing, support, documentation, training and more.
A well-designed software stack built based on CNCF guidelines and offered by a single vendor has many benefits. First, it reduces the risks associated with technology adoption. Second, it provides a single point of contact to rapidly get support when needed and resolve issues, which means faster time to market and higher customer satisfaction. Third, it helps make cloud native applications portable to any popular cloud. This flexibility can help enterprises improve their operating margins by reducing expenses and unlocking future revenue growth opportunities.
Cloud native computing is becoming an everyday part of mainstream cloud application development. It can also bolster the development of advanced applications powered by artificial intelligence (AI), machine learning (ML) and the Internet of Things (IoT), among others.
Leading users of cloud native technologies include R&D laboratories; high tech, manufacturing and logistics companies; critical service providers and many others.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/cloud-native-computing-the-hidden-force-behind-swift-app-development/
作者:[Robert Shimp][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/robert-shimp/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Cloud-Native_Cloud-Computing.jpg?resize=696%2C459&ssl=1 (Cloud Native_Cloud Computing)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Cloud-Native_Cloud-Computing.jpg?fit=900%2C593&ssl=1

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevOps is Eating the World)
[#]: via: (https://opensourceforu.com/2019/10/devops-is-eating-the-world/)
[#]: author: (Jens Eckels https://opensourceforu.com/author/jens-eckels/)
DevOps is Eating the World
======
[![][1]][2]
_Ten years ago, DevOps wasn't a thing. Now, if you're not adopting DevOps practices, you're in danger of being left behind the competition. Over the last decade, JFrog's Liquid Software vision has driven a commitment to helping companies around the world adopt, mature, and evolve their CI/CD pipelines. Why? DevOps powers the software that powers the world. Most companies today are turning into software companies, with more and more applications to build and update. They have to manage releases quickly and securely, with distributed development teams and a growing amount of data to manage._
**Our Mission**
JFrog is on a mission to enable continuous updates through liquid software, empowering developers to code high-quality applications that securely flow to end-users with zero downtime. We are the creators of [_Artifactory_][3], the heart of the end-to-end Universal DevOps platform for automating, managing, securing, distributing, and monitoring [_all types of technologies_][4]. As the leading universal, highly available enterprise DevOps solution, the [_JFrog platform_][5] empowers customers with trusted and expedited software releases from code to production. Trusted by more than 5,500 customers, the world's top brands, such as Amazon, Facebook, Google, Netflix, Uber, VMware, and Spotify, depend on JFrog to manage their binaries for their mission-critical applications.
**“Liquid Software”**
In its truest form, Liquid Software updates software continuously from code to the edge seamlessly, securely, and with no downtime. No versions. No big buttons. Just flowing updates that frictionlessly power all the devices and applications around you. Why? To an edge device or browser or end-user, versions don't really have to matter. What version of Facebook is on your phone? You don't care until it's time to update it and you get annoyed. What is the current version of the operating system on your laptop? You might know, but again, you don't really care as long as it's up to date. How about your version of Microsoft products? The version of your favorite website? You don't care. You want it to work, and the more transparently it works, the better. In fact, you'd prefer it most times if software would just update and you didn't even need to click a button. JFrog is powering that change.
**A fully Automated CI/CD Pipeline**
The idea of automating everything in the CI/CD pipeline is exciting and groundbreaking. Imagine a single platform where you could automate every step from code into production. It's not a pipe dream (or a dream for your pipeline). It's the Liquid Software vision: a world without versions. We're excited about it, and eager to share the possibilities with you.
**The Frog gives back!**
JFrog's roots are in the many **open source** communities that are mainstays today. In addition to the many community contributions through global standards organizations, JFrog is proud to give enterprise-grade tools away for open source committers, as well as provide free versions of products for specific package types. There are "developer-first" companies that like to talk about their target market. JFrog is a developer company built by and for developers. We're happy to support you.
**JFrog is all over the world!**
JFrog has nine-and-counting global offices, including one in India, where we have a rapidly-growing team with R&D and support functions. **And, we're hiring fast!** ([_see open positions_][6]). Join us and the Liquid Software revolution!
**We are a proud sponsor of Open Source India**
As the sponsor of the DevOps track, we want to be sure that you see and have access to all the cool tools and methods available. So, we have a couple of amazing experiences you can enjoy:
1. Stop by the booth where we will be demonstrating the latest versions of the JFrog Platform, enabling Liquid Software. We're excited to show you what's possible.
2. Join **Stephen Chin**, world-renowned speaker and night-hacker who will be giving a talk on the future of Liquid Software. Stephen spent many years at Sun and Oracle running teams of developer advocates.
He's a developer and advocate for your communities, and he's excited to join you.
**The bottom line:** JFrog is proud to be a developer company, serving your needs and the needs of the OSS communities around the globe with DevOps, DevSecOps, and pipeline automation solutions that are changing how the world does business. We're happy to help and eager to serve.
JFrog products are available as [_open-source_][7], [_on-premise_][8], and [_on the cloud_][9] on [_AWS_][10], [_Microsoft Azure_][11], and [_Google Cloud_][12]. JFrog is privately held with offices across North America, Europe, and Asia. **Learn more at** [_jfrog.com_][13].
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/devops-is-eating-the-world/
作者:[Jens Eckels][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/jens-eckels/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/07/DevOps-Statup-rising.jpg?resize=696%2C498&ssl=1 (DevOps Statup rising)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/07/DevOps-Statup-rising.jpg?fit=1460%2C1045&ssl=1
[3]: https://jfrog.com/artifactory/
[4]: https://jfrog.com/integration/
[5]: https://jfrog.com/enterprise-plus-platform/
[6]: https://join.jfrog.com/
[7]: https://jfrog.com/open-source/
[8]: https://jfrog.com/artifactory/free-trial/
[9]: https://jfrog.com/artifactory/free-trial/#saas
[10]: https://jfrog.com/artifactory/cloud-native-aws/
[11]: https://jfrog.com/artifactory/cloud-native-azure/
[12]: https://jfrog.com/artifactory/cloud-native-gcp/
[13]: https://jfrog.com/

View File

@ -0,0 +1,117 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fight for the planet: Building an open platform and open culture at Greenpeace)
[#]: via: (https://opensource.com/open-organization/19/10/open-platform-greenpeace)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)
Fight for the planet: Building an open platform and open culture at Greenpeace
======
Global problems require global solutions. Global solutions require open
organizations. Learn how Greenpeace is opening up to address climate
change.
![The Open Organization at Greenpeace][1]
Global problems require global solutions.
Few organizations know this better than Greenpeace. For nearly 50 years, the non-profit has been campaigning for a greener and more peaceful future.
But in 2015, Greenpeace found itself at a crossroads. To address the climate emergency, Greenpeace knew it needed to [shift its organizational culture][2].
The organization needed a bold, new way of _being_. It needed to invest in new priorities, including digital technology and data analysis, inclusive and agile organizational structures, leadership supportive of culture change, new engagement practices, digital systems thinking, and more. It needed to facilitate the collective power of activists to embody [distributed leadership][3] and help the organization drive change. It needed to become [more transparent, more adaptable, and more collaborative][4]—and imbue those same values into a platform that would help others do the same as they joined forces to save the world.
To address the ecological problems of the 21st century, Greenpeace needed to become a more open organization.
And I helped Greenpeace International do it. But—as with any open effort—I didn't work alone.
As an [Open Organization Ambassador][5], a [writer in the open organization community][6], and co-founder of [a cooperative][7] working to spread the culture, processes, and benefits of openness wherever it can, I connected Greenpeace with the combined resources, perspectives, and energy of communities working to make the world a better place.
Working with an organization in the midst of a massive cultural transition presented plenty of challenges for me—but my colleagues at Greenpeace and partners in the open organization community shared both their experience and their tried-and-true solutions for infusing open principles into complex organizations.
In this three-part series, I'll explain how [Greenpeace International][8] built its first fully free and open source project: [Planet 4][9], a global platform that connects activists and advocates in search of opportunities to impact the climate crisis.
The work itself is open. But, just as importantly, so is the spirit that guides it.
### From secretive to open
But I'm getting ahead of myself. Let's rewind to 2015.
Like so many others concerned with the climate emergency, I headed to [Greenpeace.org][10] to learn how I could join the legendary organization's fight for the planet. What greeted me was an outdated, complicated, and difficult-to-use website. I (somehow) found my way to the organization's jobs page, applied, and landed an interview.
As part of the interview process, I learned of an internal document circulating among Greenpeacers. In vivid and compelling terms, that document described [seven “shifts” Greenpeace would need to make][2] to its internal culture if it was to adapt to a changing social and political landscape. It was a new internal [storytelling][11] initiative aimed at helping Greenpeace both _imagine_ and _become_ the organization its staff wanted it to be.
As I read the document—especially the part that described a desired shift "from secretive to open source"—I knew I could help. Here was a chance to help [a traditionally "secretive" organization][12] embrace openness and galvanize others in fighting for our planet.
I was all in.
### Getting off the ground
Greenpeace needed to return to one of its founding values: _transparency._ Its founders were [open by default][13]. Like any organization, Greenpeace will always have secrets, from the locations activists plan to gather for protests to supporters' credit card numbers. But consensus was that Greenpeace had grown _too_ secretive. What good is being a global hub for activism if no one knows what you're doing—or how to help you?
Likewise, Greenpeace sought new methods of collaboration, both internally and with populations around the world. Throughout the 1970s, people-powered strategies helped the organization unleash new modes of successful activism. But today's world required even _more_. Becoming more open would mean accepting more unsolicited help, allowing more people to work toward shared goals in creative and ingenious ways, and extending greater trust and connection between the organization and its supporters.
Greenpeace needed a new approach. And that approach would be embodied in a new platform codenamed “Planet 4.”
### Enter Planet 4
Planet 4 would be a tool that drove people to action. We would harness modern technology to help people visualize their impact on the planet, then help them understand how they can drive change. We would publish calls to action and engage with our supporters in [a new and meaningful way][14].
Getting off the ground would require monumental effort—not just technical skill, but superior persuasive and educational ability to _explain_ the need for an open culture project (disguised as a software project) to people at Greenpeace. Before we wrote a single line of code, we needed to do some convincing.
Being radically open is a scary prospect, especially for a global organization that the press loves to scrutinize. Working transparently while others are watching means accepting a certain kind of vulnerability. Some people never leave their house unless they look perfect. Others go to work in shorts and sandals. Asking the question of "when to be open" is kind of like asking "when do we want to be perfectly polished and where do we get to hang out in our pajamas?"
Being as open as we can, pushing the boundaries of what it means to work openly, doesn't just impact our work. It impacts our _identity_. It's certainly part of mine, and it's part of what makes open source so successful—but I knew I'd need to work hard to help Greenpeace change its identity.
As I tried to figure out how we could garner support for a fully open project, the solution presented itself at a monthly meeting of the [Open Organization Ambassador community][5]. One day in June 2016, fellow ambassador Rebecca Fernandez began describing one of her recent projects at Red Hat: the [Open Decision Framework][15].
_Listen to Rebecca Fernandez explain the Open Decision Framework._
And as she presented it, I knew right away I would be able to remix that tool into a language that would help Greenpeace leaders understand the power of thinking and acting openly.
_Listen to Rebecca Fernandez explain Greenpeace's use of the Open Decision Framework._
It worked. And so began our journey with Planet 4.
We had a huge task in front of us. We initiated a discovery phase to research what stakeholders needed from the new engagement platform. We held [community calls][16]. We published blog posts. We designed a software concept to rattle the bones of non-profit technology. We talked openly about our work. And we spent the next two years helping our glimmer of an idea become a functioning prototype launched in more than 35 countries all over the globe.
We'd been successful. Yet the vision we'd developed—the vision of a platform built on modern technologies and capable of inspiring people to act on behalf of our planet—hadn't been fully realized. We needed help seeing the path from successful prototype to world-changing engagement platform.
So with the support of a few strategic Greenpeacers, I reached out to some colleagues at Red Hat the only way I knew how—the open source way, by starting a conversation. We've started a collaboration between our two organizations, one that will enable us all to build and learn together. This effort will help us design community-driven, engaging, open systems that spur change.
This is just the beginning of our story.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/10/open-platform-greenpeace
作者:[Laura Hilliger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/laurahilliger
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/images/open-org/open-org-greenpeace-article-blog-header-thumbnail.png?itok=M8Y0WQOT (The Open Organization at Greenpeace)
[2]: https://opensource.com/open-organization/16/1/greenpeace-makes-7-shifts-toward-open
[3]: https://opensource.com/open-organization/18/3/empowerment-and-leadership
[4]: https://opensource.com/open-organization/resources/open-org-definition
[5]: https://opensource.com/open-organization/resources/meet-ambassadors
[6]: https://opensource.com/users/laurahilliger
[7]: http://weareopen.coop
[8]: http://greenpeace.org/international
[9]: http://medium.com/planet4
[10]: https://www.greenpeace.org/
[11]: https://storytelling.greenpeace.org/
[12]: https://opensource.com/open-organization/15/10/using-open-source-fight-man
[13]: https://www.youtube.com/watch?v=O49U2M1uczQ
[14]: https://medium.com/planet4/greenpeaces-engagement-content-vision-fbd6bb66018a#.u0gmrzf0f
[15]: https://opensource.com/open-organization/resources/open-decision-framework
[16]: https://opensource.com/open-organization/16/1/community-calls-will-increase-participation-your-open-organization

View File

@ -1,80 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Buying a Linux-ready laptop)
[#]: via: (https://opensource.com/article/19/7/linux-laptop)
[#]: author: (Ricardo Berlasso https://opensource.com/users/rgb-es)
Buying a Linux-ready laptop
======
Tuxedo makes it easy to buy an out-of-the-box "penguin-ready" laptop.
![Penguin with green background][1]
Recently, I bought and started using a Tuxedo Book BC1507, a Linux laptop computer. Ten years ago, if someone had told me that, by the end of the decade, I could buy top-quality, "penguin-ready" laptops from companies such as [System76][2], [Slimbook][3], and [Tuxedo][4], I probably would have laughed. Well, now I'm laughing, but with joy!
Going beyond designing computers for free/libre open source software (FLOSS), all three companies recently [announced][5] they are trying to eliminate proprietary BIOS software by switching to [Coreboot][6].
### Buying it
Tuxedo Computers is a German company that builds Linux-ready laptops. In fact, if you want a different operating system, it costs more.
Buying the computer was incredibly easy. Tuxedo offers many payment methods: not only credit cards but also PayPal and even bank transfers. Just fill out the bank transfer form on Tuxedo's web page, and the company will send you its bank details.
Tuxedo builds every computer on demand, and picking exactly what you want is as easy as selecting the basic model and exploring the drop-down menus to select different components. There is a lot of information on the page to guide you in the purchase.
If you pick a different Linux distribution from the recommended one, Tuxedo does a "net install," so have a network cable ready to finish the installation, or you can burn your preferred image onto a USB key. I used a DVD with the openSUSE Leap 15.1 installer through an external DVD reader instead, but you get the idea.
The model I chose accepts up to two disks: one SSD and the other either an SSD or a conventional hard drive. As I was already over budget, I decided to pick a conventional 1TB disk and increase the RAM to 16GB. The processor is an 8th Generation i5 with four cores. I selected a back-lit Spanish keyboard, a 1920×1080/96dpi screen, and an SD card reader—all in all, a great system.
If you're fine with the default English or German keyboard, you can even ask for a penguin icon on the Meta key! I needed a Spanish keyboard, which doesn't offer this option.
### Receiving and using it
The perfectly packaged computer arrived safely at my door just six working days after the payment was registered. After unpacking the computer and unlocking the battery, I was ready to roll.
![Tuxedo Book BC1507][7]
The new toy on top of my (physical) desktop.
The computer's design is really nice and feels solid. Even though the chassis on this model is not aluminum, it stays cool. The fan is really quiet, and the airflow goes to the back edge, not to the sides, as in many other laptops. The battery provides several hours of use away from an electrical outlet. An option in the BIOS called FlexiCharger stops charging the battery after it reaches a certain percentage, so you don't need to remove the battery when you work for a long time while plugged in.
The keyboard is really comfortable and surprisingly quiet. Even the touchpad keys are quiet! Also, you can easily adjust the light intensity on the back-lit keyboard.
Finally, it's easy to access every component in the laptop so the computer can be updated or repaired without problems. Tuxedo even sends spare screws!
### Conclusion
After a month of heavy use, I'm really happy with the system. I got exactly what I asked for, and everything works perfectly.
Because they are usually high-end systems, Linux-included computers tend to be on the expensive side of the spectrum. If you compare the price of a Tuxedo or Slimbook computer with something with similar specifications from a more established brand, the prices are not that different. If you are after a powerful system to use with free software, don't hesitate to support these companies: What they offer is worth the price.
Let us know in the comments about your experience with Tuxedo and other "penguin-friendly" companies.
* * *
_This article is based on "[My new 'penguin ready' laptop: Tuxedo-Book-BC1507][8]," published on Ricardo's blog, [From Mind to Type][9]._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/linux-laptop
作者:[Ricardo Berlasso][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rgb-es
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://system76.com/
[3]: https://slimbook.es/en/
[4]: https://www.tuxedocomputers.com/
[5]: https://www.tuxedocomputers.com/en/Infos/News/Tuxedo-Computers-stands-for-Free-Software-and-Security-.tuxedo
[6]: https://coreboot.org/
[7]: https://opensource.com/sites/default/files/uploads/tuxedo-600_0.jpg (Tuxedo Book BC1507)
[8]: https://frommindtotype.wordpress.com/2019/06/17/my-new-penguin-ready-laptop-tuxedo-book-bc1507/
[9]: https://frommindtotype.wordpress.com/

View File

@ -1,188 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: Failure as experimentation)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
Mutation testing by example: Failure as experimentation
======
Develop the logic for an automated cat door that opens during daylight
hours and locks during the night, and follow along with the .NET
xUnit.net testing framework.
![Digital hand surrounding by objects, bike, light bulb, graphs][1]
In the [first article][2] in this series, I demonstrated how to use planned failure to ensure expected outcomes in your code. In this second article, I'll continue developing my example project—an automated cat door that opens during daylight hours and locks during the night.
As a reminder, you can follow along using the .NET xUnit.net testing framework by following the [instructions here][3].
### What about the daylight hours?
Recall that test-driven development (TDD) centers on a healthy amount of unit tests.
The first article implemented logic that fulfills the expectations of the **Given7pmReturnNighttime** unit test. But you're not done yet. Now you need to describe the expectations of what happens when the current time is greater than 7am. Here is the new unit test, called **Given7amReturnDaylight**:
```
       [Fact]
       public void Given7amReturnDaylight()
       {
           var expected = "Daylight";
           var actual = dayOrNightUtility.GetDayOrNight();
           Assert.Equal(expected, actual);
       }
```
The new unit test now fails (it is very desirable to fail as early as possible!):
```
Starting test execution, please wait...
[Xunit.net 00:00:01.23] unittest.UnitTest1.Given7amReturnDaylight [FAIL]
Failed unittest.UnitTest1.Given7amReturnDaylight
[...]
```
It was expecting to receive the string value "Daylight" but instead received the string value "Nighttime."
### Analyze the failed test case
Upon closer inspection, it seems that the code has trapped itself. It turns out that the implementation of the **GetDayOrNight** method is not testable!
Take a look at the core challenges we have gotten ourselves into:
1. **GetDayOrNight relies on hidden input.**
The value of **dayOrNight** is dependent upon the hidden input (it obtains the value for the time of day from the built-in system clock).
2. **GetDayOrNight contains non-deterministic behavior.**
The value of the time of day obtained from the system clock is non-deterministic. It depends on the point in time when you run the code, which we must consider unpredictable.
3. **Low quality of the GetDayOrNight API.**
This API is tightly coupled to the concrete data source (system **DateTime**).
4. **GetDayOrNight violates the single responsibility principle.**
You have implemented a method that consumes and processes information at the same time. It is good practice for a method to be responsible for performing a single duty.
5. **GetDayOrNight has more than one reason to change.**
It is possible to imagine a scenario where the internal source of time may change. Also, it is quite easy to imagine that the processing logic will change. These disparate reasons for changing must be isolated from each other.
6. **The API signature of GetDayOrNight is not sufficient when it comes to trying to understand its behavior.**
It is very desirable to be able to understand what type of behavior to expect from an API by simply looking at its signature.
7. **GetDayOrNight depends on global shared mutable state.**
Shared mutable state is to be avoided at all costs!
8. **The behavior of the GetDayOrNight method cannot be predicted even after reading the source code.**
That is a scary proposition. It should always be very clear from reading the source code what kind of behavior can be predicted once the system is operational.
### The principles behind what failed
Whenever you're faced with an engineering problem, it is advisable to use the time-tested strategy of _divide and conquer_. In this case, following the principle of _separation of concerns_ is the way to go.
> **separation of concerns** (**SoC**) is a design principle for separating a computer program into distinct sections, so that each section addresses a separate concern. A concern is a set of information that affects the code of a computer program. A concern can be as general as the details of the hardware the code is being optimized for, or as specific as the name of a class to instantiate. A program that embodies SoC well is called a modular program.
>
> ([source][4])
The **GetDayOrNight** method should be concerned only with deciding whether the date and time value means daylight or nighttime. It should not be concerned with finding the source of that value. That concern should be left to the calling client.
You must leave it to the calling client to take care of obtaining the current time. This approach aligns with another valuable engineering principle—_inversion of control_. Martin Fowler explores this concept in [detail, here][5].
> One important characteristic of a framework is that the methods defined by the user to tailor the framework will often be called from within the framework itself, rather than from the user's application code. The framework often plays the role of the main program in coordinating and sequencing application activity. This inversion of control gives frameworks the power to serve as extensible skeletons. The methods supplied by the user tailor the generic algorithms defined in the framework for a particular application.
>
> \-- [Ralph Johnson and Brian Foote][6]
### Refactoring the test case
So the code needs refactoring. Get rid of the dependency on the internal clock (the **DateTime** system utility):
```
DateTime time = new DateTime();
```
Delete the above line (which should be line 7 in your file). Refactor your code further by adding an input parameter **DateTime time** to the **GetDayOrNight** method.
Here's the refactored class **DayOrNightUtility.cs**:
```
using System;
namespace app {
   public class DayOrNightUtility {
       public string GetDayOrNight(DateTime time) {
           string dayOrNight = "Nighttime";
           if(time.Hour >= 7 && time.Hour < 19) {
               dayOrNight = "Daylight";
           }
           return dayOrNight;
       }
   }
}
```
Refactoring the code requires the unit tests to change. You need to prepare values for the **nightHour** and the **dayHour** and pass those values into the **GetDayOrNight** method. Here are the refactored unit tests:
```
using System;
using Xunit;
using app;
namespace unittest
{
   public class UnitTest1
   {
       DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
       DateTime nightHour = new DateTime(2019, 08, 03, 19, 00, 00);
       DateTime dayHour = new DateTime(2019, 08, 03, 07, 00, 00);
       [Fact]
       public void Given7pmReturnNighttime()
       {
           var expected = "Nighttime";
           var actual = dayOrNightUtility.GetDayOrNight(nightHour);
           Assert.Equal(expected, actual);
       }
       [Fact]
       public void Given7amReturnDaylight()
       {
           var expected = "Daylight";
           var actual = dayOrNightUtility.GetDayOrNight(dayHour);
           Assert.Equal(expected, actual);
       }
   }
}
```
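To make the inversion of control concrete, here is a minimal sketch of what a calling client could look like. The **Program** class below is an illustrative assumption, not part of this exercise's code; it simply shows the client owning the time-acquisition concern:

```
using System;

namespace app {
    // Hypothetical calling client (not part of the original exercise).
    class Program {
        static void Main(string[] args) {
            // The client is responsible for obtaining the current time...
            DateTime now = DateTime.Now;

            // ...while the utility is only concerned with classifying it.
            var dayOrNightUtility = new DayOrNightUtility();
            Console.WriteLine(dayOrNightUtility.GetDayOrNight(now));
        }
    }
}
```

Because **GetDayOrNight** no longer reaches for the system clock itself, both the unit tests and any production caller feed it a **DateTime** value explicitly, which is exactly what makes the method deterministic and testable.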
### Lessons learned
Before moving forward with this simple scenario, take a look back and review the lessons in this exercise.
It is easy to create a trap inadvertently by implementing code that is untestable. On the surface, such code may appear to be functioning correctly. However, following test-driven development (TDD) practice—describing the expectations first and only then prescribing the implementation—revealed serious problems in the code.
This shows that TDD is the ideal methodology for ensuring code does not get too messy. TDD points out problem areas, such as the absence of single responsibility and the presence of hidden inputs. Also, TDD assists in removing non-deterministic code and replacing it with fully testable code that behaves deterministically.
Finally, TDD helped deliver code that is easy to read and logic that's easy to follow.
In the next article in this series, I'll demonstrate how to use the logic created during this exercise to implement functioning code and how further testing can make it even better.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://opensource.com/article/19/9/mutation-testing-example-part-1-how-leverage-failure
[3]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[4]: https://en.wikipedia.org/wiki/Separation_of_concerns
[5]: https://martinfowler.com/bliki/InversionOfControl.html
[6]: http://www.laputan.org/drc/drc.html

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,428 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (9 essential GNU binutils tools)
[#]: via: (https://opensource.com/article/19/10/gnu-binutils)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
9 essential GNU binutils tools
======
Binary analysis is the most underestimated skill in the computer
industry.
![Tools for the sysadmin][1]
Imagine not having access to a software's source code but still being able to understand how the software is implemented, find vulnerabilities in it, and—better yet—fix the bugs. All of this in binary form. It sounds like having superpowers, doesn't it?
You, too, can possess such superpowers, and the GNU binary utilities (binutils) are a good starting point. The [GNU binutils][2] are a collection of binary tools that are installed by default on all Linux distributions.
Binary analysis is the most underestimated skill in the computer industry. It is mostly utilized by malware analysts, reverse engineers, and people working on low-level software.
This article explores some of the tools available through binutils. I am using RHEL, but these examples should run on any Linux distribution.
```
[~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
[~]#
[~]# uname -r
3.10.0-957.el7.x86_64
[~]#
```
Note that some packaging commands (like **rpm**) might not be available on Debian-based distributions, so use the equivalent **dpkg** command where applicable.
### Software development 101
In the open source world, many of us are focused on software in source form; when the software's source code is readily available, it is easy to simply get a copy of the source code, open your favorite editor, get a cup of coffee, and start exploring.
But the source code is not what is executed on the CPU; it is the binary or machine language instructions that are executed on the CPU. The binary or executable file is what you get when you compile the source code. People skilled in debugging often get their edge by understanding this difference.
### Compilation 101
Before digging into the binutils package itself, it's good to understand the basics of compilation.
Compilation is the process of converting a program from its source or text form in a certain programming language (C/C++) into machine code.
Machine code is the sequence of 1's and 0's that are understood by a CPU (or hardware in general) and therefore can be executed or run by the CPU. This machine code is saved to a file in a specific format that is often referred to as an executable file or a binary file. On Linux (and BSD, when using [Linux Binary Compatibility][3]), this is called [ELF][4] (Executable and Linkable Format).
The compilation process goes through a series of complicated steps before it presents an executable or binary file for a given source file. Consider this source program (C code) as an example. Open your favorite editor and type out this program:
```
#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}
```
#### Step 1: Preprocessing with cpp
The [C preprocessor (**cpp**)][5] is used to expand all macros and include the header files. In this example, the header file **stdio.h** will be included in the source code. **stdio.h** is a header file that contains information on a **printf** function that is used within the program. **cpp** runs on the source code, and the resulting instructions are saved in a file called **hello.i**. Open the file with a text editor to see its contents. The source code for printing **hello world** is at the bottom of the file.
```
[testdir]# cat hello.c
#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}
[testdir]#
[testdir]# cpp hello.c > hello.i
[testdir]#
[testdir]# ls -lrt
total 24
-rw-r--r--. 1 root root    76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
[testdir]#
```
#### Step 2: Compilation with gcc
This is the stage where preprocessed source code from Step 1 is converted to assembly language instructions without creating an object file. It uses the [GNU Compiler Collection (**gcc**)][6]. After running the **gcc** command with the **-S** option on the **hello.i** file, it creates a new file called **hello.s**. This file contains the assembly language instructions for the C program.
You can view the contents using any editor or the **cat** command.
```
[testdir]# gcc -Wall -S hello.i
[testdir]#
[testdir]# ls -l
total 28
-rw-r--r--. 1 root root    76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
-rw-r--r--. 1 root root   448 Sep 13 03:25 hello.s
[testdir]#
[testdir]# cat hello.s
        .file   "hello.c"
        .section        .rodata
.LC0:
        .string "Hello World"
        .text
        .globl  main
        .type   main, @function
main:
.LFB0:
        .cfi_startproc
        pushq   %rbp
        .cfi_def_cfa_offset 16
        .cfi_offset 6, -16
        movq    %rsp, %rbp
        .cfi_def_cfa_register 6
        movl    $.LC0, %edi
        call    puts
        movl    $0, %eax
        popq    %rbp
        .cfi_def_cfa 7, 8
        ret
        .cfi_endproc
.LFE0:
        .size   main, .-main
        .ident  "GCC: (GNU) 4.8.5 20150623 (Red Hat 4.8.5-36)"
        .section        .note.GNU-stack,"",@progbits
[testdir]#
```
#### Step 3: Assembling with as
The purpose of an assembler is to convert assembly language instructions into machine language code and generate an object file that has a **.o** extension. Use the GNU assembler **as** that is available by default on all Linux platforms.
```
[testdir]# as hello.s -o hello.o
[testdir]#
[testdir]# ls -l
total 32
-rw-r--r--. 1 root root    76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
-rw-r--r--. 1 root root  1496 Sep 13 03:39 hello.o
-rw-r--r--. 1 root root   448 Sep 13 03:25 hello.s
[testdir]#
```
You now have your first file in the ELF format; however, you cannot execute it yet. Later, you will see the difference between an **object file** and an **executable file**.
```
[testdir]# file hello.o
hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
```
#### Step 4: Linking with ld
This is the final stage of compilation, when the object files are linked to create an executable. An executable usually requires external functions that often come from system libraries (**libc**).
You can directly invoke the linker with the **ld** command; however, this command is somewhat complicated. Instead, you can use the **gcc** compiler with the **-v** (verbose) flag to understand how linking happens. (Using the **ld** command for linking is an exercise left for you to explore.)
```
[testdir]# gcc -v hello.o
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man [...] --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)
COMPILER_PATH=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:[...]:/usr/lib/gcc/x86_64-redhat-linux/
LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.5/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-mtune=generic' '-march=x86-64'
 /usr/libexec/gcc/x86_64-redhat-linux/4.8.5/collect2 --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu [...]/../../../../lib64/crtn.o
[testdir]#
```
After running this command, you should see an executable file named **a.out**:
```
[testdir]# ls -l
total 44
-rwxr-xr-x. 1 root root  8440 Sep 13 03:45 a.out
-rw-r--r--. 1 root root    76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
-rw-r--r--. 1 root root  1496 Sep 13 03:39 hello.o
-rw-r--r--. 1 root root   448 Sep 13 03:25 hello.s
```
Running the **file** command on **a.out** shows that it is indeed an ELF executable:
```
[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=48e4c11901d54d4bf1b6e3826baf18215e4255e5, not stripped
```
Run your executable file to see if it does as the source code instructs:
```
[testdir]# ./a.out
Hello World
```
It does! So much happens behind the scenes just to print **Hello World** on the screen. Imagine what happens in more complicated programs.
### Explore the binutils tools
This exercise provided a good background for utilizing the tools that are in the binutils package. My system has binutils version 2.27-34; you may have a different version depending on your Linux distribution.
```
[~]# rpm -qa | grep binutils
binutils-2.27-34.base.el7.x86_64
```
The following tools are available in the binutils packages:
```
[~]# rpm -ql binutils-2.27-34.base.el7.x86_64 | grep bin/
/usr/bin/addr2line
/usr/bin/ar
/usr/bin/as
/usr/bin/c++filt
/usr/bin/dwp
/usr/bin/elfedit
/usr/bin/gprof
/usr/bin/ld
/usr/bin/ld.bfd
/usr/bin/ld.gold
/usr/bin/nm
/usr/bin/objcopy
/usr/bin/objdump
/usr/bin/ranlib
/usr/bin/readelf
/usr/bin/size
/usr/bin/strings
/usr/bin/strip
```
The compilation exercise above already explored two of these tools: the **as** command was used as an assembler, and the **ld** command was used as a linker. Read on to learn about seven more of the GNU binutils tools listed above.
#### readelf: Displays information about ELF files
The exercise above mentioned the terms **object file** and **executable file**. Using the files from that exercise, enter **readelf** using the **-h** (header) option to dump the files' ELF header on your screen. Notice that the object file ending with the **.o** extension is shown as **Type: REL (Relocatable file)**:
```
[testdir]# readelf -h hello.o
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 [...]
  [...]
  Type:                              REL (Relocatable file)
  [...]
```
If you try to execute this file, you will get an error saying it cannot be executed. This simply means that it doesn't yet have the information that is required for it to be executed on the CPU.
Remember, you need to add the **x** or **executable bit** on the object file first using the **chmod** command or else you will get a **Permission denied** error.
```
[testdir]# ./hello.o
bash: ./hello.o: Permission denied
[testdir]# chmod +x ./hello.o
[testdir]#
[testdir]# ./hello.o
bash: ./hello.o: cannot execute binary file
```
If you try the same command on the **a.out** file, you see that its type is an **EXEC (Executable file)**.
```
[testdir]# readelf -h a.out
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  [...]
  Type:                              EXEC (Executable file)
```
As seen before, this file can directly be executed by the CPU:
```
[testdir]# ./a.out
Hello World
```
The **readelf** command gives a wealth of information about a binary. Here, it tells you that it is in ELF64-bit format, which means it can be executed only on a 64-bit CPU and won't work on a 32-bit CPU. It also tells you that it is meant to be executed on X86-64 (Intel/AMD) architecture. The entry point into the binary is at address 0x400430, which is just the address of the **main** function within the C source program.
Try the **readelf** command on the other system binaries you know, like **ls**. Note that your output (especially **Type:**) might differ on RHEL 8 or Fedora 30 systems and above due to position independent executable ([PIE][7]) changes made for security reasons.
```
[testdir]# readelf -h /bin/ls
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Type:                              EXEC (Executable file)
```
Learn which **system libraries** the **ls** command depends on by using the **ldd** command, as follows:
```
[testdir]# ldd /bin/ls
        linux-vdso.so.1 =>  (0x00007ffd7d746000)
        libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f060daca000)
        libcap.so.2 => /lib64/libcap.so.2 (0x00007f060d8c5000)
        libacl.so.1 => /lib64/libacl.so.1 (0x00007f060d6bc000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f060d2ef000)
        libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f060d08d000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f060ce89000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f060dcf1000)
        libattr.so.1 => /lib64/libattr.so.1 (0x00007f060cc84000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f060ca68000)
```
Run **readelf** on the **libc** library file to see what kind of file it is. As it points out, it is a **DYN (Shared object file)**, which means it can't be directly executed on its own; it must be used by an executable file that internally uses any functions made available by the library.
```
[testdir]# readelf -h /lib64/libc.so.6
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 03 00 00 00 00 00 00 00 00
  Class:                             ELF64
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - GNU
  ABI Version:                       0
  Type:                              DYN (Shared object file)
```
#### size: Lists section sizes and the total size
The **size** command works only on object and executable files, so if you try running it on a simple ASCII file, it will throw an error saying **File format not recognized**.
```
`[testdir]# echo "test" > file1 [testdir]# cat file1 test [testdir]# file file1 file1: ASCII text [testdir]# size file1 size: file1: File format not recognized`
```
Now, run **size** on the **object file** and the **executable file** from the exercise above. Notice that the executable file (**a.out**) contains considerably more than the object file (**hello.o**), based on the output of the **size** command:
```
[testdir]# size hello.o
   text    data     bss     dec     hex filename
     89       0       0      89      59 hello.o
[testdir]# size a.out
   text    data     bss     dec     hex filename
   1194     540       4    1738     6ca a.out
```
But what do the **text**, **data**, and **bss** sections mean?
The **text** section refers to the code section of the binary, which holds all the executable instructions. The **data** section is where all the initialized data lives, and **bss** is where all the uninitialized data is accounted for.
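As a rough illustration (a hypothetical snippet, not part of the exercise above), here is where typical C declarations end up:

```
#include <stdio.h>

int initialized = 42;   /* initialized global: counted in the data section */
int uninitialized;      /* uninitialized global: counted in the bss section */

int main(void)          /* compiled instructions: counted in the text section */
{
    printf("%d %d\n", initialized, uninitialized);
    return 0;
}
```

If you compile this and run **size** on the result, the **data** and **bss** numbers should grow as you add more initialized and uninitialized globals.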
Compare **size** with some of the other available system binaries.
For the **ls** command:
```
[testdir]# size /bin/ls
   text    data     bss     dec     hex filename
 103119    4768    3360  111247   1b28f /bin/ls
```
You can see that **gcc** and **gdb** are far bigger programs than **ls** just by looking at the output of the **size** command:
```
[testdir]# size /bin/gcc
   text    data     bss     dec     hex filename
 755549    8464   81856  845869   ce82d /bin/gcc
[testdir]# size /bin/gdb
   text    data     bss     dec     hex filename
6650433   90842  152280 6893555  692ff3 /bin/gdb
```
#### strings: Prints the strings of printable characters in files
It is often useful to add the **-d** flag to the **strings** command to show only the printable characters from the data section.
**hello.o** is an object file that contains instructions to print out the text **Hello World**. Hence, the only output from the **strings** command is **Hello World**.
```
[testdir]# strings -d hello.o
Hello World
```
Running **strings** on **a.out** (an executable), on the other hand, shows additional information that was included in the binary during the linking phase:
```
[testdir]# strings -d a.out
/lib64/ld-linux-x86-64.so.2
!^BU
libc.so.6
puts
__libc_start_main
__gmon_start__
GLIBC_2.2.5
UH-0
UH-0
=(
[]A\A]A^A_
Hello World
;*3$"
```
Recall that compilation is the process of converting source code instructions into machine code. Machine code consists of only 1's and 0's and is difficult for humans to read. Therefore, it helps to present machine code as assembly language instructions. What do assembly languages look like? Remember that assembly language is architecture-specific; since I am using Intel or x86-64 architecture, the instructions will be different if you're using ARM architecture to compile the same programs.
#### objdump: Displays information from object files
Another binutils tool that can dump the machine language instructions from the binary is called **objdump**.
Use the **-d** option, which disassembles all assembly instructions from the binary.
```
[testdir]# objdump -d hello.o

hello.o: file format elf64-x86-64

Disassembly of section .text:

0000000000000000 <main>:
   0:   55                      push   %rbp
   1:   48 89 e5                mov    %rsp,%rbp
   4:   bf 00 00 00 00          mov    $0x0,%edi
   9:   e8 00 00 00 00          callq  e <main+0xe>
   e:   b8 00 00 00 00          mov    $0x0,%eax
  13:   5d                      pop    %rbp
  14:   c3                      retq
```
This output seems intimidating at first, but take a moment to understand it before moving ahead. Recall that the **.text** section has all the machine code instructions. The assembly instructions can be seen in the fourth column (i.e., **push**, **mov**, **callq**, **pop**, **retq**). These instructions act on registers, which are memory locations built into the CPU. The registers in this example are **rbp**, **rsp**, **edi**, **eax**, etc., and each register has a special meaning.
Now run **objdump** on the executable file (**a.out**) and see what you get. The output of **objdump** on the executable can be large, so I've narrowed it down to the **main** function using the **grep** command:
```
[testdir]# objdump -d a.out | grep -A 9 main\>

000000000040051d <main>:
  40051d:       55                      push   %rbp
  40051e:       48 89 e5                mov    %rsp,%rbp
  400521:       bf d0 05 40 00          mov    $0x4005d0,%edi
  400526:       e8 d5 fe ff ff          callq  400400 <puts@plt>
  40052b:       b8 00 00 00 00          mov    $0x0,%eax
  400530:       5d                      pop    %rbp
  400531:       c3                      retq
```
Notice that the instructions are similar to the object file **hello.o**, but they have some additional information in them:
* The object file **hello.o** has the following instruction: `callq e`
* The executable **a.out** consists of the following instruction with an address and a function: `callq 400400 <puts@plt>`
The above assembly instruction is calling a **puts** function. Remember that you used a **printf** function in the source code. The compiler inserted a call to the **puts** library function to output **Hello World** to the screen.
Look at the instruction one line above the **puts** call:
* The object file **hello.o** has the instruction **mov**: `mov $0x0,%edi`
* The instruction **mov** for the executable **a.out** has an actual address (**$0x4005d0**) instead of **$0x0**: `mov $0x4005d0,%edi`
This instruction moves whatever is present at address **$0x4005d0** within the binary to the register named **edi**.
What else could be in the contents of that memory location? Yes, you guessed it right: it is nothing but the text **Hello World**. How can you be sure?
The **readelf** command enables you to dump any section of the binary file (**a.out**) onto the screen. The following asks it to dump the **.rodata**, which is read-only data, onto the screen:
```
[testdir]# readelf -x .rodata a.out

Hex dump of section '.rodata':
  0x004005c0 01000200 00000000 00000000 00000000 ....
  0x004005d0 48656c6c 6f20576f 726c6400          Hello World.
```
You can see the text **Hello World** on the right-hand side and its address in binary on the left-hand side. Does it match the address you saw in the **mov** instruction above? Yes, it does.
#### strip: Discards symbols from object files
This command is often used to reduce the size of the binary before shipping it to customers.
Remember that it hinders the process of debugging since vital information is removed from the binary; nonetheless, the binary executes flawlessly.
Run it on your **a.out** executable and notice what happens. First, ensure the binary is **not stripped** by running the following command:
```
[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, [......] not stripped
```
Also, keep track of the number of bytes originally in the binary before running the **strip** command:
```
[testdir]# du -b a.out
8440    a.out
```
Now run the **strip** command on your executable and ensure it worked using the **file** command:
```
[testdir]# strip a.out
[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, [......] stripped
```
After stripping the binary, its size went down to **6296** bytes from the previous **8440** bytes for this small program. With savings like that on a tiny program, it's no wonder large programs are often stripped.
```
[testdir]# du -b a.out
6296    a.out
```
#### addr2line: Converts addresses into file names and line numbers
The **addr2line** tool simply looks up addresses in the binary file and matches them up with lines in the C source code program. Pretty cool, isn't it?
Write another test program for this; only this time, make sure to compile it with the **-g** flag for **gcc**, which adds debugging information to the binary, including the line numbers (shown in the source code here):
```
[testdir]# cat -n atest.c
     1  #include <stdio.h>
     2
     3  int globalvar = 100;
     4
     5  int function1(void)
     6  {
     7      printf("Within function1\n");
     8      return 0;
     9  }
    10
    11  int function2(void)
    12  {
    13      printf("Within function2\n");
    14      return 0;
    15  }
    16
    17  int main(void)
    18  {
    19      function1();
    20      function2();
    21      printf("Within main\n");
    22      return 0;
    23  }
```
Compile with the **-g** flag and execute it. No surprises here:
```
[testdir]# gcc -g atest.c
[testdir]# ./a.out
Within function1
Within function2
Within main
```
Now use **objdump** to identify memory addresses where your functions begin. You can use the **grep** command to filter out specific lines that you want. The addresses for your functions are highlighted below:
```
[testdir]# objdump -d a.out | grep -A 2 -E 'main>:|function1>:|function2>:'
000000000040051d <function1>:
  40051d:       55                      push   %rbp
  40051e:       48 89 e5                mov    %rsp,%rbp
--
0000000000400532 <function2>:
  400532:       55                      push   %rbp
  400533:       48 89 e5                mov    %rsp,%rbp
--
0000000000400547 <main>:
  400547:       55                      push   %rbp
  400548:       48 89 e5                mov    %rsp,%rbp
```
Now use the **addr2line** tool to map these addresses from the binary to match those of the C source code:
```
[testdir]# addr2line -e a.out 40051d
/tmp/testdir/atest.c:6
[testdir]#
[testdir]# addr2line -e a.out 400532
/tmp/testdir/atest.c:12
[testdir]#
[testdir]# addr2line -e a.out 400547
/tmp/testdir/atest.c:18
```
It says that address **40051d** corresponds to line 6 in the source file **atest.c**, which is the line with the opening brace (**{**) of **function1**. Match the output for **function2** and **main** in the same way.
#### nm: Lists symbols from object files
Use the C program above to test the **nm** tool. Compile it quickly using **gcc** and execute it.
```
[testdir]# gcc atest.c
[testdir]# ./a.out
Within function1
Within function2
Within main
```
Now run **nm** and **grep** for information on your functions and variables:
```
[testdir]# nm a.out | grep -Ei 'function|main|globalvar'
000000000040051d T function1
0000000000400532 T function2
000000000060102c D globalvar
                 U __libc_start_main@@GLIBC_2.2.5
0000000000400547 T main
```
You can see that the functions are marked **T**, which stands for symbols in the **text** section, whereas variables are marked as **D**, which stands for symbols in the initialized **data** section.
Imagine how useful it is to run this command on binaries for which you do not have the source code. It lets you peek inside and understand which functions and variables are used. Unless, of course, the binaries have been stripped, in which case they contain no symbols, and the **nm** command isn't much help, as you can see here:
```
[testdir]# strip a.out
[testdir]# nm a.out | grep -Ei 'function|main|globalvar'
nm: a.out: no symbols
```
### Conclusion
The GNU binutils tools offer many options for anyone interested in analyzing binaries, and this has only been a glimpse of what they can do for you. Read the man pages for each tool to understand more about them and how to use them.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/gnu-binutils
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn (Tools for the sysadmin)
[2]: https://en.wikipedia.org/wiki/GNU_Binutils
[3]: https://www.freebsd.org/doc/handbook/linuxemu.html
[4]: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
[5]: https://en.wikipedia.org/wiki/C_preprocessor
[6]: https://gcc.gnu.org/onlinedocs/gcc/
[7]: https://en.wikipedia.org/wiki/Position-independent_code#Position-independent_executables

View File

@ -0,0 +1,192 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install and Configure VNC Server on Centos 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/install-configure-vnc-server-centos8-rhel8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Install and Configure VNC Server on CentOS 8 / RHEL 8
======
A **VNC** (Virtual Network Computing) server is a GUI-based desktop sharing platform that allows you to access remote desktop machines. On **CentOS 8** and **RHEL 8** systems, a VNC server is not installed by default and needs to be installed manually. In this article, we'll look at how to install a VNC server on CentOS 8 / RHEL 8 systems with a simple, step-by-step installation guide.
### Prerequisites to Install VNC Server on CentOS 8 / RHEL 8
To install a VNC server on your system, make sure you have the following requirements available:
* CentOS 8 / RHEL 8
* GNOME Desktop Environment
* Root access
* DNF / YUM Package repositories
### Step by Step Guide to Install VNC Server on CentOS 8 / RHEL 8
### Step 1) Install the GNOME desktop environment
Before installing the VNC server on your CentOS 8 / RHEL 8 machine, make sure you have a desktop environment (DE) installed. If the GNOME desktop is already installed, or you installed your server with the GUI option, you can skip this step.
In CentOS 8 / RHEL 8, GNOME is the default desktop environment. If you don't have it on your system, install it using the following command:
```
[root@linuxtechi ~]# dnf groupinstall "workstation"
Or
[root@linuxtechi ~]# dnf groupinstall "Server with GUI
```
Once the above packages are installed successfully, run the following command to enable graphical mode:
```
[root@linuxtechi ~]# systemctl set-default graphical
```
Now reboot the system so that you get the GNOME login screen.
```
[root@linuxtechi ~]# reboot
```
Once the system has rebooted successfully, uncomment the line "**WaylandEnable=false**" in the file "**/etc/gdm/custom.conf**" so that remote desktop session requests via VNC are handled by the Xorg session of the GNOME desktop instead of the Wayland display server.
**Note:** Wayland is the default display server in GNOME, and it is not configured to handle remote rendering APIs like X.org.
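If you prefer to make that edit from the shell, a one-liner like the one below should work; this is a convenience suggestion, assuming the line ships commented out exactly as "#WaylandEnable=false":

```
[root@linuxtechi ~]# sed -i 's/^#WaylandEnable=false/WaylandEnable=false/' /etc/gdm/custom.conf
```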
### Step 2) Install VNC Server (tigervnc-server)
Next, we'll install the VNC server. There are a lot of VNC servers available; for this installation, we'll use **TigerVNC Server**. It is one of the most popular VNC servers: a high-performance, platform-independent implementation that allows users to interact with remote machines easily.
Now install TigerVNC Server using the following command:
```
[root@linuxtechi ~]# dnf install tigervnc-server tigervnc-server-module -y
```
### Step 3) Set VNC Password for Local User
Let's assume we want the user pkumar to use VNC for remote desktop sessions. Switch to that user and set a VNC password using the vncpasswd command:
```
[root@linuxtechi ~]# su - pkumar
[root@linuxtechi ~]$ vncpasswd
Password:
Verify:
Would you like to enter a view-only password (y/n)? n
A view-only password is not used
[root@linuxtechi ~]$
[root@linuxtechi ~]$ exit
logout
[root@linuxtechi ~]#
```
### Step 4) Setup VNC Server Configuration File
The next step is to configure the VNC server configuration file. Create a file "**/etc/systemd/system/[root@linuxtechi][1]**" with the following content so that the tigervnc-server service is started for the local user "pkumar":
```
[root@linuxtechi ~]# vim /etc/systemd/system/root@linuxtechi
[Unit]
Description=Remote Desktop VNC Service
After=syslog.target network.target
[Service]
Type=forking
WorkingDirectory=/home/pkumar
User=pkumar
Group=pkumar
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/bin/vncserver -autokill %i
ExecStop=/usr/bin/vncserver -kill %i
[Install]
WantedBy=multi-user.target
```
Save and exit the file.
**Note:** Replace the user name in the above file to suit your setup.
By default, the VNC server listens on TCP port 5900+n, where n is the display number. If the display number is "1", then the VNC server will listen for requests on TCP port 5901.
### Step 5) Start VNC Service and allow port in firewall
I am using display number "1", so use the following commands to start and enable the VNC service on display number "1":
```
[root@linuxtechi ~]# systemctl daemon-reload
[root@linuxtechi ~]# systemctl start root@linuxtechi:1.service
[root@linuxtechi ~]# systemctl enable root@linuxtechi:1.service
Created symlink /etc/systemd/system/multi-user.target.wants/root@linuxtechi:1.service → /etc/systemd/system/root@linuxtechi
[root@linuxtechi ~]#
```
Use the **netstat** or **ss** command below to verify that the VNC server is listening for requests on TCP port 5901:
```
[root@linuxtechi ~]# netstat -tunlp | grep 5901
tcp 0 0 0.0.0.0:5901 0.0.0.0:* LISTEN 8169/Xvnc
tcp6 0 0 :::5901 :::* LISTEN 8169/Xvnc
[root@linuxtechi ~]# ss -tunlp | grep -i 5901
tcp LISTEN 0 5 0.0.0.0:5901 0.0.0.0:* users:(("Xvnc",pid=8169,fd=6))
tcp LISTEN 0 5 [::]:5901 [::]:* users:(("Xvnc",pid=8169,fd=7))
[root@linuxtechi ~]#
```
Use the systemctl command below to verify the status of the VNC server:
```
[root@linuxtechi ~]# systemctl status root@linuxtechi:1.service
```
![vncserver-status-centos8-rhel8][2]
The output of the above commands confirms that VNC started successfully on TCP port 5901. Use the following commands to allow the VNC server port "5901" through the OS firewall:
```
[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5901/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
```
### Step 6) Connect to Remote Desktop Session
Now we are all set to see if the remote desktop connection works. To access the remote desktop, start a VNC viewer from your Windows / Linux workstation, enter your **VNC server IP address** and **port number**, and hit Enter.
[![VNC-Viewer-Windows10][2]][3]
Next, it will ask for your VNC password. Enter the password that you created earlier for your local user and click OK to continue.
[![VNC-Viewer-Connect-CentOS8-RHEL8-VNC-Server][2]][4]
Now you can see the remote desktop:
[![VNC-Desktop-Screen-CentOS8][2]][5]
That's it. You've successfully installed a VNC server on CentOS 8 / RHEL 8.
**Conclusion**
We hope this step-by-step guide to installing a VNC server on CentOS 8 / RHEL 8 has given you all the information you need to easily set up a VNC server and access remote desktops. Please leave your comments and suggestions in the feedback section below. See you in the next article… Until then, a big THANK YOU and BYE for now!!!
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-configure-vnc-server-centos8-rhel8/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/cdn-cgi/l/email-protection
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Viewer-Windows10.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Viewer-Connect-CentOS8-RHEL8-VNC-Server.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Desktop-Screen-CentOS8.jpg

View File

@ -0,0 +1,54 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use internal packages to reduce your public API surface)
[#]: via: (https://dave.cheney.net/2019/10/06/use-internal-packages-to-reduce-your-public-api-surface)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
Use internal packages to reduce your public API surface
======
In the beginning, before the `go` tool, before Go 1.0, the Go distribution stored the standard library in a subdirectory called `pkg/` and the commands which built upon it in `cmd/`. This wasn't so much a deliberate taxonomy as a byproduct of the original `make`-based build system. In [September 2014][1], the Go distribution dropped the `pkg/` subdirectory, but by then this tribal knowledge had taken root in large Go projects, and it continues to this day.
I tend to view empty directories inside a Go project with suspicion. Often they are a hint that the module's author may be trying to create a taxonomy of packages rather than ensuring each package's name, and thus its enclosing directory, [uniquely describes its purpose][2]. While the symmetry with `cmd/` for `package main` commands is appealing, a directory that exists only to hold other packages is a potential design smell.
More importantly, the boilerplate of an empty `pkg/` directory distracts from the more useful idiom of an `internal/` directory. `internal/` is a special directory name recognised by the `go` tool which will prevent one package from being imported by another unless both share a common ancestor. Packages within an `internal/` directory are therefore said to be _internal packages_.
To create an internal package, place it within a directory named `internal/`. When the `go` command sees an import of a package with `internal/` in the import path, it verifies that the importing package is within the tree rooted at the _parent_ of the `internal/` directory.
For example, a package `/a/b/c/internal/d/e/f` can only be imported by code in the directory tree rooted at `/a/b/c`. It cannot be imported by code in `/a/b/g` or in any other repository.
If your project contains multiple packages, you may find you have some exported symbols which are intended to be used by other packages in your project but are not intended to be part of your project's public API. Although Go's only visibility modifiers are public (exported) and private (non-exported) symbols, internal packages provide a useful mechanism for controlling visibility to parts of your project which would otherwise be considered part of its public versioned API.
You can, of course, promote internal packages later if you want to commit to supporting that API; just move them up a directory level or two. The key is that this process is _opt-in_. As the author, internal packages give you control over which symbols end up in your project's public API, without forcing you to glob concepts together into unwieldy mega packages just to avoid exporting them.
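As a concrete sketch (the module path `example.com/project` and the package names here are invented for illustration), an internal package might look like this:

```
// File: internal/auth/auth.go
// Importable only by code in the tree rooted at example.com/project.
package auth

// Token is exported, but invisible to code outside this module.
type Token string
```

A sibling package inside the same tree can then import it as usual:

```
// File: cmd/server/main.go
package main

import (
	"fmt"

	"example.com/project/internal/auth" // allowed: shares the internal/ parent
)

func main() {
	fmt.Println(auth.Token("t_123"))
}
```

Code outside `example.com/project` that attempts the same import is rejected by the `go` tool at compile time.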
### Related posts:
1. [Stress test your Go packages][3]
2. [Practical public speaking for Nerds][4]
3. [Five suggestions for setting up a Go project][5]
4. [Automatically fetch your projects dependencies with gb][6]
--------------------------------------------------------------------------------
via: https://dave.cheney.net/2019/10/06/use-internal-packages-to-reduce-your-public-api-surface
作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://groups.google.com/forum/m/#!msg/golang-dev/c5AknZg3Kww/OFLmvGyfNR0J
[2]: https://dave.cheney.net/2019/01/08/avoid-package-names-like-base-util-or-common
[3]: https://dave.cheney.net/2013/06/19/stress-test-your-go-packages (Stress test your Go packages)
[4]: https://dave.cheney.net/2015/02/17/practical-public-speaking-for-nerds (Practical public speaking for Nerds)
[5]: https://dave.cheney.net/2014/12/01/five-suggestions-for-setting-up-a-go-project (Five suggestions for setting up a Go project)
[6]: https://dave.cheney.net/2016/06/26/automatically-fetch-your-projects-dependencies-with-gb (Automatically fetch your projects dependencies with gb)

View File

@ -0,0 +1,222 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Java tips for new developers)
[#]: via: (https://opensource.com/article/19/10/java-basics)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
7 Java tips for new developers
======
If you're just getting started with Java programming, here are seven
basics you need to know.
![Coffee and laptop][1]
Java is a versatile programming language used, in some way, in nearly every industry that touches a computer. Java's greatest power is that it runs in a Java Virtual Machine (JVM), a layer that translates Java code into bytecode compatible with your operating system. As long as a JVM exists for your operating system, whether that OS is on a server (or [serverless][2], for that matter), desktop, laptop, mobile device, or embedded device, a Java application can run on it.
This makes Java a popular language for both programmers and users. Programmers know that they only have to write one version of their software to end up with an application that runs on any platform, and users know that an application will run on their computer regardless of what operating system they use.
Many languages and frameworks are cross-platform, but none deliver the same level of abstraction. With Java, you target the JVM, not the OS. For programmers, that's the path of least resistance when faced with several programming challenges, but it's only useful if you know how to program Java. If you're just getting started with Java programming, here are seven basic tips you need to know.
But first, if you're not sure whether you have Java installed, you can find out in a terminal (such as [Bash][3] or [PowerShell][4]) by running:
```
$ java --version
openjdk 12.0.2 2019-07-16
OpenJDK Runtime Environment 19.3 (build 12.0.2+9)
OpenJDK 64-Bit Server VM 19.3 (build 12.0.2+9, mixed mode, sharing)
```
If you get an error or nothing in return, then you should install the [Java Development Kit][5] (JDK) to get started with Java development. Or install a Java Runtime Environment (JRE) if you just need to run Java applications.
### 1\. Java packages
In Java, related classes are grouped into a _package_. The basic Java libraries you get when you download the JDK are grouped into packages starting with **java** or **javax**. Packages serve a similar function as folders on your computer: they provide structure and definition for related elements (in programming terminology, a _namespace_). Additional packages can be obtained from independent coders, open source projects, and commercial vendors, just as libraries can be obtained for any programming language.
When you write a Java program, you should declare a package name at the top of your code. If you're just writing a simple application to get started with Java, your package name can be as simple as the name of your project. If you're using a Java integrated development environment (IDE), like [Eclipse][6], it generates a sane package name for you when you start a new project.
```
package helloworld;
/**
 * @author seth
 * An application written in Java.
 */
```
Otherwise, you can determine the name of your package by looking at its path in relation to the broad definition of your project. For instance, if you're writing a set of classes to assist in game development and the collection is called **jgamer**, then you might have several unique classes within it.
```
package jgamer.avatar;
/**
 * @author seth
 * An imaginary game library.
 */
```
The top level of your package is **jgamer**, and each package inside it is a descendant, such as **jgamer.avatar** and **jgamer.score** and so on. In your filesystem, the structure reflects this, with **jgamer** being the top directory containing the files **avatar.java** and **score.java**.
### 2\. Java imports
The most fun you'll ever have as a polyglot programmer is trying to keep track of whether you **include**, **import**, **use**, **require**, or **some other term** a library in whatever programming language you're writing in. Java, for the record, uses the **import** keyword when importing libraries needed for your code.
```
package helloworld;
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
/**
 * @author seth
 * A GUI hello world.
 */
```
Imports work based on an environment's Java path. If Java doesn't know where Java libraries are stored on a system, then an import cannot be successful. As long as a library is stored in a system's Java path, then an import can succeed, and a library can be used to build and run a Java application.
If a library is not expected to be in the Java path (because, for instance, you are writing the library yourself), then the library can be bundled with your application (license permitting) so that the import works as expected.
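For example, if a bundled library lives in a JAR file alongside your sources (the file name `lib/somelib.jar` below is hypothetical), you can put it on the Java path explicitly with the **-cp** (classpath) flag when compiling and running:

```
$ javac -d . -cp lib/somelib.jar Hello.java
$ java -cp .:lib/somelib.jar helloworld.Hello
```

On Linux, classpath entries are separated by a colon; on Windows, use a semicolon instead.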
### 3\. Java classes
A Java class is declared with the keywords **public class** along with a unique class name mirroring its file name. For example, in a file **Hello.java** in project **helloworld**:
```
package helloworld;
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
/**
 * @author seth
 * A GUI hello world.
 */
public class Hello {
        // this is an empty class
}
```
You can declare variables and functions inside a class. In Java, variables within a class are called _fields_.
### 4\. Java methods
Java methods are, essentially, functions within an object. They are declared as **public** (meaning they can be accessed by any other class) or **private** (limiting their use), and they declare the type of data they return, such as **void**, **int**, **float**, and so on.
```
    public void helloPrompt(ActionEvent event) {
        String salutation = "Hello %s";

        String helloMessage = "World";
        String message = String.format(salutation, helloMessage);
        JOptionPane.showMessageDialog(this, message);
    }

    private int someNumber(int x) {
        return x * 2;
    }
```
When calling a method from another class, it is referenced by its class and method name. For instance, **Hello.someNumber** refers to the **someNumber** method in the **Hello** class (although calling an instance method like this one requires an instance of **Hello** first).
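For instance, a minimal sketch of a call from inside the class (since **someNumber** above is declared **private**):

```
Hello hello = new Hello();
int doubled = hello.someNumber(21); // doubled is now 42
```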
### 5\. Static
The **static** keyword in Java makes a member in your code accessible independently of the object that contains it.
In object-oriented programming, you write code that serves as a template for "objects" that get spawned as the application runs. You don't code a specific window, for instance, but an _instance_ of a window based upon a window class in Java (and modified by your code). Since nothing you are coding "exists" until the application generates an instance of it, most methods and variables (and even nested classes) cannot be used until the object they depend upon has been created.
However, sometimes you need to access or use data in an object before it is created by the application (for example, an application can't generate a red ball without first knowing that the ball is meant to be red). For those cases, there's the **static** keyword.
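As a minimal sketch (the **Ball** class here is hypothetical), a **static** member is available without creating any instance:

```
public class Ball {
    // a static field belongs to the class itself, not to any instance
    public static final String DEFAULT_COLOR = "red";

    // a static method can likewise be called with no Ball object around
    public static String describeDefault() {
        return "Balls are " + DEFAULT_COLOR + " by default";
    }
}
```

Code elsewhere can read **Ball.DEFAULT_COLOR** or call **Ball.describeDefault()** before any **Ball** instance is ever constructed.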
### 6\. Try and catch
Java is excellent at catching errors, but it can only recover gracefully if you tell it what to do. The cascading hierarchy of attempting to perform an action in Java starts with **try**, falls back to **catch**, and ends with **finally**. Should the **try** clause fail, then **catch** is invoked, and in the end, there's always **finally** to perform some sensible action regardless of the results. Here's an example:
```
try {
        cmd = parser.parse(opt, args);

        if (cmd.hasOption("help")) {
                HelpFormatter helper = new HelpFormatter();
                helper.printHelp("Hello <options>", opt);
                System.exit(0);
        } else {
                if (cmd.hasOption("shell") || cmd.hasOption("s")) {
                        String target = cmd.getOptionValue("tgt");
                } // else
        } // fi
} catch (ParseException err) {
        System.out.println(err);
        System.exit(1);
} // catch
finally {
        new Hello().helloWorld(opt);
} // finally
```
It's a robust system that attempts to avoid irrecoverable errors or, at least, to provide you with the option to give useful feedback to the user. Use it often, and your users will thank you!
### 7\. Running a Java application
Java source files, usually ending in **.java**, can be run directly with the **java** command (launching a single source file this way has been supported since Java 11). If an application is complex, however, whether running a single file results in anything meaningful is another question.
To run a **.java** file directly:
```
$ java ./Hello.java
```
Usually, Java applications are distributed as Java Archives (JAR) files, ending in **.jar**. A JAR file contains a manifest file specifying the main class, some metadata about the project structure, and all the parts of your code required to run the application.
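For instance, a minimal manifest for the earlier example might contain nothing more than the following (the class name echoes the earlier examples):

```
Manifest-Version: 1.0
Main-Class: helloworld.Hello
```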
To run a JAR file, you may be able to double-click its icon (depending on how you have your OS set up), or you can launch it from a terminal:
```
$ java -jar ./Hello.jar
```
### Java for everyone
Java is a powerful language, and thanks to the [OpenJDK][12] project and other initiatives, it's an open specification that allows projects like [IcedTea][13], [Dalvik][14], and [Kotlin][15] to thrive. Learning Java is a great way to prepare to work in a wide variety of industries, and what's more, there are plenty of [great reasons to use it][16].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/java-basics
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: https://www.redhat.com/en/resources/building-microservices-eap-7-reference-architecture
[3]: https://www.gnu.org/software/bash/
[4]: https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell?view=powershell-6
[5]: http://openjdk.java.net/
[6]: http://www.eclipse.org/
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+actionevent
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+joptionpane
[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+parseexception
[12]: https://openjdk.java.net/
[13]: https://icedtea.classpath.org/wiki/Main_Page
[14]: https://source.android.com/devices/tech/dalvik/
[15]: https://kotlinlang.org/
[16]: https://opensource.com/article/19/9/why-i-use-java

View File

@ -0,0 +1,202 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introduction to open source observability on Kubernetes)
[#]: via: (https://opensource.com/article/19/10/open-source-observability-kubernetes)
[#]: author: (Yuri Grinshteyn https://opensource.com/users/yuri-grinshteyn)
Introduction to open source observability on Kubernetes
======
In the first article in this series, learn the signals, mechanisms, tools, and platforms you can use to observe services running on Kubernetes.
![Looking back with binoculars][1]
With the advent of DevOps, engineering teams are taking on more and more ownership of the reliability of their services. While some chafe at the increased operational burden, others welcome the opportunity to treat service reliability as a key feature, invest in the necessary capabilities to measure and improve reliability, and deliver the best possible customer experiences.
This change is measured explicitly in the [2019 Accelerate State of DevOps Report][2]. One of its most interesting conclusions (as written in the summary) is:
> "Delivering software quickly, **reliably** _[emphasis mine]_, and safely is at the heart of technology transformation and organizational performance. We see continued evidence that software speed, stability, and **availability** _[emphasis mine]_ contribute to organizational performance (including profitability, productivity, and customer satisfaction). Our highest performers are twice as likely to meet or exceed their organizational performance goals."
The full [report][3] says:
> "**Low performers use more proprietary software than high and elite performers**: The cost to maintain and support proprietary software can be prohibitive, prompting high and elite performers to use open source solutions. This is in line with results from previous reports. In fact, the 2018 Accelerate State of DevOps Report found that elite performers were 1.75 times more likely to make extensive use of open source components, libraries, and platforms."
This is a strong testament to the value of open source as a general accelerator of performance. Combining these two conclusions leads to the rather obvious thesis for this series:
> Reliability is a critical feature, observability is a necessary component of reliability, and open source tooling is at least _A_ right approach, if not _THE_ right approach.
This article, the first in a series, will introduce the types of signals engineers typically rely on and the mechanisms, tools, and platforms that you can use to instrument services running on Kubernetes to emit these signals, ingest and store them, and use and interpret them.
From there, the series will continue with hands-on tutorials, where I will walk through getting started with each of the tools and technologies. By the end, you should be well-equipped to start improving the observability of your own systems!
### What is observability?
While observability as a general [concept in control theory][4] has been around since at least 1960, its applicability to digital systems and services is rather new and in some ways an evolution of how these systems have been monitored for the last two decades. You are likely familiar with the necessity of monitoring services to ensure you know about issues before your users are impacted. You are also likely familiar with the idea of using metric data to better understand the health and state of a system, especially in the context of troubleshooting during an incident or debugging.
The key differentiation between monitoring and observability is that observability is an inherent property of a system or service, rather than something someone does to the system, which is what monitoring fundamentally is. [Cindy Sridharan][5], author of a free [e-book][6] on observability in distributed systems, does a great job of explaining the difference in an excellent [Medium article][7].
It is important to distinguish between these two terms because observability, as a property of the service you build, is your responsibility. As a service developer and owner, you have full control over the signals your system emits, how and where those signals are ingested and stored, and how they're utilized. This is in contrast to "monitoring," which may be done by others (and by you) to measure the availability and performance of your service and generate alerts to let you know that service reliability has degraded.
### Signals
Now that you understand the idea of observability as a property of a system that you control and that is explicitly manifested as the signals you instruct your system to emit, it's important to understand and describe the kinds of signals generally considered in this context.
#### What are metrics?
A metric is a fundamental type of signal that can be emitted by a service or the infrastructure it's running on. At its most basic, it is the combination of:
  1. Some identifier, hopefully descriptive, that indicates what the metric represents
  2. A series of data points, each of which contains two elements:
     a. The timestamp at which the data point was generated (or ingested)
     b. A numeric value representing the state of the thing you're measuring at that time
Time-series metrics have been and remain the key data structure used in monitoring and observability practice and are the primary way that the state and health of a system are represented over time. They are also the primary mechanism for alerting, but that practice and others (like incident management, on-call, and postmortems) are outside the scope here. For now, the focus is on how to instrument systems to emit metrics, how to store them, and how to use them for charts and dashboards to help you visualize the current and historical state of your system.
Metrics are used for two primary purposes: health and insight.
Understanding the health and state of your infrastructure, platform, and service is essential to keeping them available to users. Generally, these are emitted by the various components chosen to build services, and it's just a matter of setting up the right collection and storage infrastructure to be able to use them. Metrics from the simple (node CPU utilization) to the esoteric (garbage collection statistics) fall into this category.
Metrics are also essential to understanding what is happening in the system to avoid interruptions to your services. From this perspective, a service can emit custom telemetry that precisely describes specific aspects of how the service is functioning and performing. This will require you to instrument the code itself, usually by including specific libraries, and specify an export destination.
#### What are logs?
Unlike metrics that represent numeric values that change over time, logs represent discrete events. Log entries contain both the log payload—the message emitted by a component of the service or the code—and often metadata, such as the timestamp, label, tag, or other identifiers. Therefore, logs are by far the largest volume of data you need to store, and you should carefully consider your log ingestion and storage strategies as you look to take on increasing user traffic.
#### What are traces?
Distributed tracing is a relatively new addition to the observability toolkit and is specifically relevant to microservice architectures to allow you to understand latency and how various backend service calls contribute to it. Ted Young published an [excellent article on the concept][8] that includes its origins with Google's [Dapper paper][9] and subsequent evolution. This series will be specifically concerned with the various implementations available.
### Instrumentation
Once you identify the signals you want to emit, store, and analyze, you need to instruct your system to create the signals and build a mechanism to store and analyze them. Instrumentation refers to those parts of your code that are used to generate metrics, logs, and traces. In this series, we'll discuss open source instrumentation options and introduce the basics of their use through hands-on tutorials.
### Observability on Kubernetes
Kubernetes is the dominant platform today for deploying and maintaining containers. As it rose to the top of the industry's consciousness, so did new technologies to provide effective observability tooling around it. Here is a short list of these essential technologies; they will be covered in greater detail in future articles in this series.
#### Metrics
Once you select your preferred approach for instrumenting your service with metrics, the next decision is where to store those metrics and what set of services will support your effort to monitor your environment.
##### Prometheus
[Prometheus][10] is the best place to start when looking to monitor both your Kubernetes infrastructure and the services running in the cluster. It provides everything you'll need, including client instrumentation libraries, the [storage backend][11], a visualization UI, and an alerting framework. Running Prometheus also provides a wealth of infrastructure metrics right out of the box. It further provides [integrations][12] with third-party providers for storage, although the data exchange is not bi-directional in every case, so be sure to read the documentation if you want to store metric data in multiple locations.
Later in this series, I will walk through setting up Prometheus in a cluster for basic infrastructure monitoring and adding custom telemetry to an application using the Prometheus client libraries.
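In the meantime, here is a minimal sketch of what that client instrumentation looks like in Java (assumptions for illustration: the **simpleclient** and **simpleclient_httpserver** artifacts are on the classpath, and the metric name is made up):

```
import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;

public class InstrumentedService {
    // a counter only ever goes up; Prometheus scrapes its current value
    static final Counter requests = Counter.build()
            .name("myservice_requests_total")
            .help("Total requests handled.")
            .register();

    public static void main(String[] args) throws Exception {
        // expose all registered metrics at http://localhost:8080/metrics
        HTTPServer server = new HTTPServer(8080);
        while (true) {
            requests.inc(); // increment on each unit of work
            Thread.sleep(1000);
        }
    }
}
```

Prometheus then scrapes the **/metrics** endpoint on whatever schedule you configure.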
##### Graphite
[Graphite][13] grew out of an in-house development effort at Orbitz and is now positioned as an enterprise-ready monitoring tool. It provides metrics storage and retrieval mechanisms, but no instrumentation capabilities. Therefore, you will still need to implement Prometheus or OpenCensus instrumentation to collect metrics. Later in this series, I will walk through setting up Graphite and sending metrics to it.
##### InfluxDB
[InfluxDB][14] is another open source database purpose-built for storing and retrieving time-series metrics. Unlike Graphite, InfluxDB is supported by a company called InfluxData, which provides both the InfluxDB software and a cloud-hosted version called InfluxDB Cloud. Later in this series, I will walk through setting up InfluxDB in a cluster and sending metrics to it.
##### OpenTSDB
[OpenTSDB][15] is also an open source purpose-built time-series database. One of its advantages is the ability to use [HBase][16] as the storage layer, which allows integration with a cloud managed service like Google's Cloud Bigtable. Google has published a [reference guide][17] on setting up OpenTSDB to monitor your Kubernetes cluster (assuming it's running in Google Kubernetes Engine, or GKE). Since it's a great introduction, I recommend following Google's tutorial if you're interested in learning more about OpenTSDB.
##### OpenCensus
[OpenCensus][18] is the open source version of the [Census library][19] developed at Google. It provides both metric and tracing instrumentation capabilities and supports a number of backends to [export][20] the metrics to—including Prometheus! Note that OpenCensus does not monitor your infrastructure, and you will still need to determine the best approach if you choose to use OpenCensus for custom metric telemetry.
We'll revisit this library later in this series, and I will walk through creating metrics in a service and exporting them to a backend.
#### Logging for observability
If metrics provide "what" is happening, logging tells part of the story of "why." Here are some common options for consistently gathering and analyzing logs.
##### Collecting with fluentd
In the Kubernetes ecosystem, [fluentd][21] is the de facto open source standard for collecting logs emitted in the cluster and forwarding them to a specified backend. You can use config maps to modify fluentd's behavior, and later in the series, I'll walk through deploying it in a cluster and modifying the associated config map to parse unstructured logs and convert them to structured logs for better and easier analysis. In the meantime, you can read my post "[Customizing Kubernetes logging (Part 1)][22]" on how to do that on GKE.
##### Storing and analyzing with ELK
The most common storage mechanism for logs is provided by [Elastic][23] in the form of the "ELK" stack. As Elastic says:
> "'ELK' is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a serverside data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a 'stash' like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch."
Later in the series, I'll walk through setting up Elasticsearch, Kibana, and Logstash in a cluster to store and analyze logs being collected by fluentd.
#### Distributed traces and observability
When asking "why" in analyzing service issues, logs can only provide the information that applications are designed to share with it. The way to go even deeper is to gather traces. As the [OpenTracing initiative][24] says:
> "Distributed tracing, also called distributed request tracing, is a method used to profile and monitor applications, especially those built using a microservices architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance."
##### Istio
The [Istio][25] open source service mesh provides multiple benefits for microservice architectures, including traffic control, security, and observability capabilities. It does not combine multiple spans into a single trace to assemble a full picture of what happens when a user call traverses a distributed system, but it can nevertheless be useful as an easy first step toward distributed tracing. It also provides other observability benefits—it's the easiest way to get ["golden signal"][26] metrics for each service, and it also adds logging for each request, which can be very useful for calculating error rates. You can read my post on [using it with Google's Stackdriver][27]. I'll revisit it in this series and show how to install it in a cluster and configure it to export observability data to a backend.
##### OpenCensus
I described [OpenCensus][28] in the Metrics section above, and that's one of the main reasons for choosing it for distributed tracing: Using a single library for both metrics and traces is a great option to reduce your instrumentation work—with the caveat that you must be working in a language that supports both the traces and stats exporters. I'll come back to OpenCensus and show how to get started instrumenting code for distributed tracing. Note that OpenCensus provides only the instrumentation library, and you'll still need to use a storage and visualization layer like Zipkin, Jaeger, Stackdriver (on GCP), or X-Ray (on AWS).
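As a taste of that instrumentation, here is a minimal sketch using the OpenCensus Java API (assumptions for illustration: the **opencensus-api** artifact plus an exporter are configured, and the span name is made up):

```
import io.opencensus.common.Scope;
import io.opencensus.trace.Tracer;
import io.opencensus.trace.Tracing;

public class TracedService {
    private static final Tracer tracer = Tracing.getTracer();

    static void handleRequest() {
        // each scoped span becomes one unit of work in the trace
        try (Scope scope = tracer.spanBuilder("handleRequest").startScopedSpan()) {
            tracer.getCurrentSpan().addAnnotation("doing work");
            // ... call downstream services here; child spans nest automatically
        }
    }
}
```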
##### Zipkin
[Zipkin][29] is a full, distributed tracing solution that includes instrumentation, storage, and visualization. It's a tried and true set of tools that's been around for years and has a strong user and developer community. It can also be used as a backend for other instrumentation options like OpenCensus. In a future tutorial, I'll show how to set up the Zipkin server and instrument your code.
##### Jaeger
[Jaeger][30] is another open source tracing solution that includes all the components you'll need. It's a newer project that's being incubated at the Cloud Native Computing Foundation (CNCF). Whether you choose to use Zipkin or Jaeger may ultimately depend on your experience with them and their support for the language you're writing your service in. In this series, I'll walk through setting up Jaeger and instrumenting code for tracing.
### Visualizing observability data
The final piece of the toolkit for using metrics is the visualization layer. There are basically two options here: the "native" visualization that your persistence layers enable (e.g., the Prometheus UI or Flux with InfluxDB) or a purpose-built visualization tool.
[Grafana][31] is currently the de facto standard for open source visualization. I'll walk through setting it up and using it to visualize data from various backends later in this series.
### Looking ahead
Observability on Kubernetes has many parts and many options for each type of need. Metric, logging, and tracing instrumentation provide the bedrock of information needed to make decisions about services. Instrumenting, storing, and visualizing data are also essential. Future articles in this series will dive into all of these options with hands-on tutorials for each.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/open-source-observability-kubernetes
作者:[Yuri Grinshteyn][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/yuri-grinshteyn
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/look-binoculars-sight-see-review.png?itok=NOw2cm39 (Looking back with binoculars)
[2]: https://cloud.google.com/blog/products/devops-sre/the-2019-accelerate-state-of-devops-elite-performance-productivity-and-scaling
[3]: https://services.google.com/fh/files/misc/state-of-devops-2019.pdf
[4]: https://en.wikipedia.org/wiki/Observability
[5]: https://twitter.com/copyconstruct
[6]: https://t.co/0gOgZp88Jn?amp=1
[7]: https://medium.com/@copyconstruct/monitoring-and-observability-8417d1952e1c
[8]: https://opensource.com/article/18/5/distributed-tracing
[9]: https://research.google.com/pubs/pub36356.html
[10]: https://prometheus.io/
[11]: https://prometheus.io/docs/prometheus/latest/storage/
[12]: https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage
[13]: https://graphiteapp.org/
[14]: https://www.influxdata.com/get-influxdb/
[15]: http://opentsdb.net/
[16]: https://hbase.apache.org/
[17]: https://cloud.google.com/solutions/opentsdb-cloud-platform
[18]: https://opencensus.io/
[19]: https://opensource.googleblog.com/2018/03/how-google-uses-opencensus-internally.html
[20]: https://opencensus.io/exporters/#exporters
[21]: https://www.fluentd.org/
[22]: https://medium.com/google-cloud/customizing-kubernetes-logging-part-1-a1e5791dcda8
[23]: https://www.elastic.co/
[24]: https://opentracing.io/docs/overview/what-is-tracing
[25]: http://istio.io/
[26]: https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/
[27]: https://medium.com/google-cloud/istio-and-stackdriver-59d157282258
[28]: http://opencensus.io/
[29]: https://zipkin.io/
[30]: https://www.jaegertracing.io/
[31]: https://grafana.com/

View File

@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Understanding Joins in Hadoop)
[#]: via: (https://opensourceforu.com/2019/10/understanding-joins-in-hadoop/)
[#]: author: (Bhaskar Narayan Das https://opensourceforu.com/author/bhaskar-narayan/)
Understanding Joins in Hadoop
======
[![Hadoop big data career opportunities][1]][2]
_Those who have just begun the study of Hadoop might have come across different types of joins. This article briefly discusses normal joins, map side joins and reduce side joins. The differences between map side joins and reduce side joins, as well as their pros and cons, are also discussed._
Normally, the term join refers to combining the record-sets of two tables. Thus, when we run a query, the tables are joined and we get the data from both tables in joined format, as is the case with SQL joins. Joins are used heavily in Hadoop processing. They should be used when large data sets are encountered and there is no urgency to generate the outcome. In the case of Hadoop common joins, Hadoop distributes all the rows across all the nodes based on the join key. Once this is achieved, all the keys with the same value end up on the same node and then, finally, the join happens at the reducer. This scenario is perfect when both tables are huge, but when one table is small and the other is quite big, common joins become inefficient and take more time to distribute the rows.
While processing data using Hadoop, we generally do it over the map phase and the reduce phase. Thus there are mappers and reducers that do the job for the map phase and the reduce phase. We use MapReduce joins when we encounter data sets that are too large for data-sharing techniques.
**Map side joins**
Map side join is the term used when the record sets of two tables are joined within the mapper. In this case, the reduce phase is not involved. In a map side join, the record set of the smaller table is loaded into memory, ensuring a faster join operation. A map side join is convenient for small tables and not recommended for large tables. If you run frequent queries that join small tables, map side joins can reduce query computation time significantly.
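Here is a minimal sketch of the idea in Java, assuming a hypothetical small file **customers.txt** (id,name) has been shipped to each node (for example, via the distributed cache) and the mapper's input is order records (id,amount):

```
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// The small table is loaded into memory once in setup(), so each record
// can be joined inside the mapper with no reduce phase at all.
class MapJoinMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Map<String, String> customers = new HashMap<>();

    @Override
    protected void setup(Context ctx) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("customers.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] fields = line.split(",");
                customers.put(fields[0], fields[1]);
            }
        }
    }

    @Override
    protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
        String[] order = value.toString().split(",");
        String name = customers.get(order[0]); // in-memory lookup
        if (name != null)
            ctx.write(new Text(order[0]), new Text(name + "," + order[1]));
    }
}
```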
**Reduce side joins**
Reduce side joins happen at the reduce side of Hadoop processing. They are also known as repartitioned sort-merge joins or, simply, repartitioned joins, distributed joins or common joins. They are the most widely used joins. Reduce side joins are used when both tables are so big that they cannot fit into memory. The process flow of a reduce side join is as follows (a minimal Java sketch follows the list):
1. The input data is read by the mapper, which needs to be combined on the basis of the join key or common column.
2. Once the input data is processed by the mapper, it adds a tag to the processed input data in order to distinguish the input origin sources.
3. The mapper returns the intermediate key-value pair, where the key is also the join key.
4. For the reducer, a key and a list of values is generated once the sorting and shuffling phase is complete.
5. The reducer joins the values that are present in the generated list along with the key to produce the final outcome.
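Here is a minimal sketch of that flow, again assuming the hypothetical inputs **customers.txt** (id,name) and **orders.txt** (id,amount); in a real project each class would live in its own file and be wired into a job driver:

```
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Tags each record with its origin (step 2) and emits the join key (step 3).
class JoinMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split(",");
        String file = ((FileSplit) ctx.getInputSplit()).getPath().getName();
        String tag = file.startsWith("customers") ? "C" : "O";
        ctx.write(new Text(fields[0]), new Text(tag + ":" + fields[1]));
    }
}

// After sort and shuffle (step 4), joins the tagged values per key (step 5).
class JoinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context ctx)
            throws IOException, InterruptedException {
        List<String> customers = new ArrayList<>();
        List<String> orders = new ArrayList<>();
        for (Text v : values) {
            String s = v.toString();
            if (s.startsWith("C:")) customers.add(s.substring(2));
            else orders.add(s.substring(2));
        }
        for (String c : customers)   // emit every matching pair,
            for (String o : orders)  // as in an SQL inner join
                ctx.write(key, new Text(c + "," + o));
    }
}
```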
The join at the reduce side combines the output of the two mappers based on a common key. This scenario is closely analogous to SQL joins, where the data sets of two tables are joined based on a primary key. In this case, we have to decide which field to use as the join key.
There are a few terms associated with reduce side joins:
1\. _Data source:_ This is nothing but the input files.
2\. _Tag:_ This is basically used to distinguish each input data on the basis of its origin.
3\. _Group key:_ This refers to the common column that is used as a join key to combine the output of two mappers.
**Difference between map side joins and reduce side joins**
1. A map side join, as explained earlier, happens on the map side whereas a reduce side join happens on the reduce side.
2. A map side join happens in memory, whereas a reduce side join happens outside memory, at the reducer.
3. Map side joins are effective when one data set is big while the other is small, whereas reduce side joins work effectively for big size data sets.
4. Map side joins are expensive in terms of memory, whereas reduce side joins are cheap on memory but pay for it in sorting, shuffling, and network traffic.
Opt for map side joins when the table is small enough to fit in memory and you require the job to be completed in a short span of time. Use reduce side joins when dealing with large data sets that cannot fit into memory. Reduce side joins are easy to implement and take advantage of MapReduce's built-in sorting and shuffling. Besides this, there is no requirement to strictly follow any formatting rule for input in the case of reduce side joins, and they can also be performed on unstructured data sets.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/understanding-joins-in-hadoop/
作者:[Bhaskar Narayan Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/bhaskar-narayan/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/06/Hadoop-big-data.jpg?resize=696%2C441&ssl=1 (Hadoop big data career opportunities)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/06/Hadoop-big-data.jpg?fit=750%2C475&ssl=1

View File

@ -0,0 +1,273 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using the Java Persistence API)
[#]: via: (https://opensource.com/article/19/10/using-java-persistence-api)
[#]: author: (Stephon Brown https://opensource.com/users/stephb)
Using the Java Persistence API
======
Learn how to use the JPA by building an example app for a bike store.
![Coffee beans][1]
The Java Persistence API (JPA) is an important Java functionality for application developers to understand. It specifies how Java developers turn method calls on objects into operations for accessing, persisting, and managing data stored in NoSQL and relational databases.
This article examines the JPA in detail through a tutorial example of building a bicycle loaning service. This example will create a create, read, update, and delete (CRUD) layer for a larger application using the Spring Boot framework, the MongoDB database (which is [no longer open source][2]), and the Maven package manager. I also use NetBeans 11 as my IDE of choice.
This tutorial focuses on the open source angle of the Java Persistence API, rather than the tools, to show how it works. This is all about learning the pattern of programming applications, but it's still smart to understand the software. You can access the full code in my [GitHub repository][3].
### Java: More than 'beans'
Java is an object-oriented language that has gone through many changes since the Java Development Kit (JDK) was released in 1996. Understanding the language's various pathways and its virtual machine is a history lesson in itself; in brief, the language has forked in many directions, similar to the Linux kernel, since its release. There are standard editions that are free to the community, enterprise editions for business, and open source alternatives contributed to by multiple vendors. Major versions are released at six-month intervals; since there are often major differences in features, you may want to do some research before choosing a version.
All in all, Java is steeped in history. This tutorial focuses on [JDK 11][4], which is the open source implementation of Java 11, because it is one of the long-term-support versions that is still active.
* **Spring Boot:** Spring Boot is a module from the larger Spring framework developed by Pivotal. Spring is a very popular framework for working with Java. It allows for a variety of architectures and configurations. Spring also offers support for web applications and security. Spring Boot offers basic configurations for bootstrapping various types of Java projects quickly. This tutorial uses Spring Boot to quickly write a console application and test functionality against the database.
* **Maven:** Maven is a project/package manager developed by Apache. Maven allows for the management of packages and various dependencies within its **pom.xml** file. If you have used NPM, you may be familiar with how package managers function. Maven also manages build and reporting functionality.
* **Lombok:** Lombok is a library that allows the creation of object getters/setters through annotation within the object file. This is already present in languages like C#, and Lombok introduces this functionality into Java.
* **NetBeans:** NetBeans is a popular open source IDE that focuses specifically on Java development. Many of its tools provide an implementation for the latest Java SE and EE updates.
This group of tools will be used to create a simple application for a fictional bike store. It will implement functionality for inserting collections for "Customer" and "Bike" objects.
### Brewed to perfection
Navigate to the [Spring Initializr][5]. This website enables you to generate basic project needs for Spring Boot and the dependencies you will need for the project. Select the following options:
1. **Project:** Maven Project
2. **Language:** Java
3. **Spring Boot:** 2.1.8 (or the most stable release)
4. **Project Metadata:** Whatever your naming conventions are (e.g., **com.stephb**)
* You can keep Artifact as "Demo"
5. **Dependencies:** Add:
* Spring Data MongoDB
* Lombok
Click **Download** and open the new project in your chosen IDE (e.g., NetBeans).
#### Model outline
The models represent information collected about specific objects in the program that will be persisted in your database. Focus on two objects: **Customer** and **Bike**. First, create a **dto** folder within the **src** folder. Then, create the two Java class objects named **Customer.java** and **Bike.java**. They will be structured in the program as follows:
**Customer.java**
```
package com.stephb.JavaMongo.dto;

import lombok.Getter;
import lombok.Setter;
import org.springframework.data.annotation.Id;

/**
 *
 * @author stephon
 */
@Getter @Setter
public class Customer {

        private @Id String id;
        private String emailAddress;
        private String firstName;
        private String lastName;
        private String address;

}
```
**Bike.java**
```
package com.stephb.JavaMongo.dto;

import lombok.Getter;
import lombok.Setter;
import org.springframework.data.annotation.Id;

/**
 *
 * @author stephon
 */
@Getter @Setter
public class Bike {
        private @Id String id;
        private String modelNumber;
        private String color;
        private String description;

        @Override
        public String toString() {
                return "This bike model " + this.modelNumber + " is the color " + this.color + " and is " + description;
        }
}
```
As you can see, Lombok annotation is used within the object to generate the getters/setters for the properties/attributes. Properties can specifically receive the annotations if you do not want all of the attributes to have getters/setters within that class. These two classes will form the container carrying your data to wherever you want to display information.
#### Set up a database
I used a [Mongo Docker][7] container for testing. If you have MongoDB installed on your system, you do not have to run an instance in Docker. You can install MongoDB from its website by selecting your system information and following the installation instructions.
After installing, you can interact with your new MongoDB server through the command line, a GUI such as MongoDB Compass, or IDE drivers for connecting to data sources. Now you can define your data layer to pull, transform, and persist your data. To set your database access properties, navigate to the **application.properties** file in your application and provide the following:
```
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=BikeStore
```
#### Define the data access object/data access layer
The data access objects (DAO) in the data access layer (DAL) will define how you will interact with data in the database. The awesome thing about using a **spring-boot-starter** is that most of the work for querying the database is already done.
Start with the **Customer** DAO. Create an interface in a new **dao** folder within the **src** folder, then create another Java class named **CustomerRepository.java**. The class should look like:
```
package com.stephb.JavaMongo.dao;

import com.stephb.JavaMongo.dto.Customer;
import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;

/**
 *
 * @author stephon
 */
public interface CustomerRepository extends MongoRepository<Customer, String> {
        @Override
        public List<Customer> findAll();
        public List<Customer> findByFirstName(String firstName);
        public List<Customer> findByLastName(String lastName);
}
```
This class is an interface that extends or inherits from the **MongoRepository** class, parameterized with your DTO (**Customer.java**) and the type of its ID field (**String**). Because you have inherited from this class, you have access to many functions that allow persistence and querying of your object without having to implement or reference your own functions. For example, after you instantiate the **CustomerRepository** object, you can use the **save** function immediately. You can also override these functions if you need more extended functionality. I created a few custom queries to search my collection, given specific elements of my object; Spring Data derives their implementations from the method names (such as **findByFirstName**).
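As a quick sketch of what that inheritance buys you (assuming an injected **custRepo** instance like the one in the application class further below):

```
// neither of these calls needs a hand-written implementation
Customer customer = new Customer();
customer.setFirstName("Ada");
custRepo.save(customer);                                  // inherited from MongoRepository
List<Customer> matches = custRepo.findByFirstName("Ada"); // derived by Spring Data from the name
```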
The **Bike** object also has a repository for interacting with the database. Implement it very similarly to the **CustomerRepository**. It should look like:
```
package com.stephb.JavaMongo.dao;

import com.stephb.JavaMongo.dto.Bike;
import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;

/**
 *
 * @author stephon
 */
public interface BikeRepository extends MongoRepository<Bike, String> {
        public Bike findByModelNumber(String modelNumber);
        @Override
        public List<Bike> findAll();
        public List<Bike> findByColor(String color);
}
```
#### Run your program
Now that you have a way to structure your data and a way to pull, transform, and persist it, run your program!
Navigate to your **Application.java** file (it may have a different name, depending on what you named your application, but it should include "application"). Where the class is defined, include an **implements CommandLineRunner** afterward. This will allow you to implement a **run** method to create a command-line application. Override the **run** method provided by the **CommandLineRunner** interface and include the following to test the **BikeRepository**:
```
package com.stephb.JavaMongo;

import com.stephb.JavaMongo.dao.BikeRepository;
import com.stephb.JavaMongo.dao.CustomerRepository;
import com.stephb.JavaMongo.dto.Bike;
import java.util.Scanner;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;


@SpringBootApplication
public class JavaMongoApplication implements CommandLineRunner {
        @Autowired
        private BikeRepository bikeRepo;
        @Autowired
        private CustomerRepository custRepo;

        public static void main(String[] args) {
                SpringApplication.run(JavaMongoApplication.class, args);
        }

        @Override
        public void run(String... args) throws Exception {
                Scanner scan = new Scanner(System.in);
                String response = "";
                boolean running = true;
                while (running) {
                        System.out.println("What would you like to create? \n C: The Customer \n B: Bike? \n X:Close");
                        response = scan.nextLine();
                        if ("B".equals(response.toUpperCase())) {
                                String[] bikeInformation = new String[3];
                                System.out.println("Enter the information for the Bike");
                                System.out.println("Model Number");
                                bikeInformation[0] = scan.nextLine();
                                System.out.println("Color");
                                bikeInformation[1] = scan.nextLine();
                                System.out.println("Description");
                                bikeInformation[2] = scan.nextLine();

                                Bike bike = new Bike();
                                bike.setModelNumber(bikeInformation[0]);
                                bike.setColor(bikeInformation[1]);
                                bike.setDescription(bikeInformation[2]);

                                bike = bikeRepo.save(bike);
                                System.out.println(bike.toString());

                        } else if ("X".equals(response.toUpperCase())) {
                                System.out.println("Bye");
                                running = false;
                        } else {
                                System.out.println("Sorry nothing else works right now!");
                        }
                }

        }
}
```
The **@Autowired** annotation allows automatic dependency injection of the **BikeRepository** and **CustomerRepository** beans. You will use these classes to persist and gather data from the database.
There you have it! You have created a command-line application that connects to a database and is able to perform CRUD operations with minimal code on your part.
### Conclusion
Translating from programming language concepts like objects and classes into calls to store, retrieve, or change data in a database is essential to building an application. The Java Persistence API (JPA) is an important tool in the Java developer's toolkit to solve that challenge. What databases are you exploring in Java? Please share in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/using-java-persistence-api
作者:[Stephon Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/stephb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-beans.jpg?itok=3hkjX5We (Coffee beans)
[2]: https://www.techrepublic.com/article/mongodb-ceo-tells-hard-truths-about-commercial-open-source/
[3]: https://github.com/StephonBrown/SpringMongoJava
[4]: https://openjdk.java.net/projects/jdk/11/
[5]: https://start.spring.io/
[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[7]: https://hub.docker.com/_/mongo
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system

View File

@ -0,0 +1,201 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Best Password Managers For Linux Desktop)
[#]: via: (https://itsfoss.com/password-managers-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
5 Best Password Managers For Linux Desktop
======
_**A password manager is a useful tool for creating unique passwords and storing them securely so that you don't have to remember them. Check out the best password managers available for Linux desktop.**_
Passwords are everywhere. Websites, forums, web apps and what not: you need to create accounts and passwords for them. The trouble comes with the password. Keeping the same password for various accounts poses a security risk because [if one of the websites is compromised, hackers try the same email-password combination on other websites][1] as well.
But keeping unique passwords for all the new accounts means that you have to remember all of them, and that's not possible for normal humans. This is where password managers come to your help.
Password managing apps suggest/create strong passwords for you and store them in an encrypted database. You just need to remember the master password for the password manager.
Mainstream modern web browsers like Mozilla Firefox and Google Chrome have a built-in password manager. This helps, but you are restricted to using it in that browser only.
There are third-party, dedicated password managers, and some of them also provide native desktop applications for Linux. In this article, we list the best password managers available for Linux.
Before you see that, I would also advise going through the list of [free password generators for Linux][2] to generate strong, unique passwords for you.
### Password Managers for Linux
Possible non-FOSS alert!
We've given priority to the ones which are open source (with some proprietary options, don't hate me!) and also offer a standalone desktop app (GUI) for Linux. The proprietary options have been highlighted.
#### 1\. Bitwarden
![][3]
Key Highlights:
* Open Source
* Free for personal use (paid options available for upgrade)
* End-to-end encryption for Cloud servers
* Cross-platform
* Browser Extensions available
* Command-line tools
Bitwarden is one of the most impressive password managers for Linux. I'll be honest that I didn't know about this until now and I'm already making the switch from [LastPass][4]. I was able to easily import the data from LastPass without any issues and had no trouble whatsoever.
The premium version costs just $10/year which seems to be worth it (I've upgraded for my personal usage).
It is an open source solution so there's nothing shady about it. You can even host it on your own server and create a password solution for your organization.
In addition to that, you get all the necessary features like 2FA for login, import/export options for your credentials, fingerprint phrase (a unique key), password generator, and more.
You can upgrade your account as an organization account for free to be able to share your information with 2 users in total. However, if you want additional encrypted vault storage and the ability to share passwords with 5 users, premium upgrades are available starting from as low as $1 per month. I think it's definitely worth a shot!
[Bitwarden][5]
#### 2\. Buttercup
![][6]
Key Highlights:
* Open Source
* Free, with no premium options.
* Cross-platform
* Browser Extensions available
Yet another open-source password manager for Linux. Buttercup may not be a very popular solution, but if you are looking for a simpler alternative to store your credentials, this would be a good start.
Unlike some others, it has no cloud servers of its own to be skeptical about: it sticks to offline usage by default and supports connecting cloud sources that you control, like [Dropbox][7], [OwnCloud][8], [Nextcloud][9], and [WebDAV][10].
So, you can opt for a cloud source if you need to sync the data. You've got the choice.
[Buttercup][11]
#### 3\. KeePassXC
![][12]
Key Highlights:
* Open Source
* Simple password manager
* Cross-platform
* No mobile support
KeePassXC is a community fork of [KeePassX][13], which was originally a Linux port of [KeePass][14] on Windows.
In case you aren't aware, KeePassX hasn't been maintained for years, so KeePassXC is a good alternative if you are looking for a dead-simple password manager. KeePassXC may not be the prettiest or fanciest password manager, but it does the job.
It is secure and open source as well. I think that makes it worth a shot, don't you?
[KeePassXC][15]
#### 4\. Enpass (not open source)
![][16]
Key Highlights:
* Proprietary
* A lot of features including Wearable device support.
* Completely free for Linux (with premium features)
Enpass is a quite popular password manager across multiple platforms. Even though it's not an open source solution, a lot of people rely on it, so you can be sure that it works, at least.
It offers a great deal of features, and if you have a wearable device, it will support that too, which is rare.
It's great to see that Enpass actively maintains its package for Linux distros. Also, note that it works for 64-bit systems only. You can find the [official instructions for installation][17] on their website. It will require utilizing the terminal, but I followed the steps to test it out and it worked like a charm.
[Enpass][18]
#### 5\. myki (not open source)
![][19]
Key Highlights:
* Proprietary
* Avoids cloud servers for storing passwords
* Focuses on local peer-to-peer syncing
* Ability to replace passwords with Fingerprint IDs on mobile
This may not be a popular recommendation but I found it very interesting. It is a proprietary password manager which lets you avoid cloud servers and relies on peer-to-peer sync.
So, if you do not want to utilize any cloud servers to store your information, this is for you. It is also interesting to note that the app available for Android and iOS helps you replace passwords with your fingerprint ID. If you want convenience on your mobile phone along with the basic functionality on a desktop password manager, this looks like a good option.
However, if you are opting for a premium upgrade, the pricing plans are for you to judge; they are definitely not cheap.
Do try it out and let us know how it goes!
[myki][20]
### Some Other Password Managers Worth Pointing Out
Even without offering a standalone app for Linux, there are some password managers that may deserve a mention.
If you need to utilize browser-based (extensions) password managers, we would recommend trying out [LastPass][21], [Dashlane][22], and [1Password][23]. LastPass even offers a [Linux client (and a command-line tool)][24].
If you are looking for CLI password managers, you should check out [Pass][25].
[Password Safe][26] is also an option, but the Linux client is in beta. I wouldn't recommend relying on "beta" applications for storing passwords. [Universal Password Manager][27] exists but it's no longer maintained. You may have also heard about [Password Gorilla][28] but it isn't actively maintained.
**Wrapping Up**
Bitwarden seems to be my personal favorite for now. However, there are several options to choose from on Linux. You can either opt for something that offers a native app or just a browser extension; the choice is yours.
If we missed listing out a password manager worth trying out, let us know about it in the comments below. As always, we'll extend our list with your suggestion.
--------------------------------------------------------------------------------
via: https://itsfoss.com/password-managers-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://medium.com/@computerphonedude/one-of-my-old-passwords-was-hacked-on-6-different-sites-and-i-had-no-clue-heres-how-to-quickly-ced23edf3b62
[2]: https://itsfoss.com/password-generators-linux/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/bitward.png?ssl=1
[4]: https://www.lastpass.com/
[5]: https://bitwarden.com/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/buttercup.png?ssl=1
[7]: https://www.dropbox.com/
[8]: https://owncloud.com/
[9]: https://nextcloud.com/
[10]: https://en.wikipedia.org/wiki/WebDAV
[11]: https://buttercup.pw/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/KeePassXC.png?ssl=1
[13]: https://www.keepassx.org/
[14]: https://keepass.info/
[15]: https://keepassxc.org
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/enpass.png?ssl=1
[17]: https://www.enpass.io/support/kb/general/how-to-install-enpass-on-linux/
[18]: https://www.enpass.io/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/myki.png?ssl=1
[20]: https://myki.com/
[21]: https://lastpass.com/
[22]: https://www.dashlane.com/
[23]: https://1password.com/
[24]: https://lastpass.com/misc_download2.php
[25]: https://www.passwordstore.org/
[26]: https://pwsafe.org/
[27]: http://upm.sourceforge.net/
[28]: https://github.com/zdia/gorilla/wiki

View File

@ -0,0 +1,119 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bringing Some Order into a Collection of Photographs)
[#]: via: (https://opensourceforu.com/2019/10/bringing-some-order-into-a-collection-of-photographs/)
[#]: author: (Dr Anil Seth https://opensourceforu.com/author/anil-seth/)
Bringing Some Order into a Collection of Photographs
======
[![][1]][2]
_In this article, the author shares tips on managing photographs using different Internet resources and Python programming._
These days, it is very easy to let Google Photos or similar cloud based services manage your photos. You can keep clicking on the smartphone and the photos get saved. The tools for helping you find photos, especially based on the content, keep getting better. There is no cost to keeping all your photos as long as you are an amateur and not taking very high resolution images. And it is far easier to let the dozens of photos clicked by accident remain on the cloud, than to remove them!
Even if you are willing to delegate the task of managing photos to AI tools, there is still the challenge of what to do with the photos taken before the smartphone era. Broadly, the photos can be divided into two groups — those taken with digital cameras and the physical photo prints.
Each of the two categories will need to be handled and managed differently. First, consider the older physical photos.
**Managing physical photos in the digital era**
Photos can deteriorate over time. So, the sooner you digitise them, the better you will preserve your memories. Besides, it is far easier to share a memory digitally when the family members are scattered across the globe.
The first hard decision is related to the physical albums. Should you take photos out of albums for scanning and risk damaging the albums, or scan the album pages and then crop individual photos from the album pages? Scanning or imaging tools can help with the cropping of photos.
In this article, we assume that you are ready to deal with a collection of individual photos.
One of the great features of photo management software, both on the cloud and the desktop, is that they organise the photos by date. However, the only date associated with scanned photos is the date of scanning! It will be a while before the AI software will place the photos on a timeline by examining the age of the people in the photos. Currently, you will need to handle this aspect manually.
One would like to be able to store a date in the metadata of the image so every tool can use it.
Python has a number of packages to help you do this. A pretty easy one to use is pyexiv2. Here is a snippet of sample code to modify the date of an image:
```
import datetime
import pyexiv2

# EXIF keys are strings in pyexiv2
EXIF_DATE = 'Exif.Image.DateTime'
EXIF_ORIG_DATE = 'Exif.Photo.DateTimeOriginal'

def update_exif(filename, date):
    """Set both EXIF date tags of an image to the given datetime."""
    try:
        metadata = pyexiv2.ImageMetadata(filename)
        metadata.read()
        metadata[EXIF_DATE] = date
        metadata[EXIF_ORIG_DATE] = date
        metadata.write()
    except Exception:
        print('Error ' + filename)
```
Most photo management software seems to use either of the two dates, whichever is available. While you are setting the date, you might as well set both! There are various ways in which the date for a photo may be specified. You may find the following scheme convenient.
Sort the photos manually into directories, each with the name _yy-mm-dd_. If the date is not known, you might as well select an approximate date. If the month also is not known, set it to 01. Now, you can use the _os.walk_ function to iterate over the directories and files, and set the date for each file as just suggested above.
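In case it helps, here is a minimal sketch of that loop. It assumes the _yy-mm-dd_ directory naming described above and reuses the **update_exif** function defined earlier; the root directory and the exact date format are assumptions you would adapt:
```
import os
import datetime

# Walk the manually sorted directories; each directory name is
# assumed to follow the yy-mm-dd scheme, e.g. '98-07-01'.
for path, dirs, files in os.walk('.'):
    dirname = os.path.basename(path)
    try:
        date = datetime.datetime.strptime(dirname, '%y-%m-%d')
    except ValueError:
        continue  # skip directories that do not follow the scheme
    for f in files:
        update_exif(os.path.join(path, f), date)
```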
You may further divide the files into event-based sub-directories and use the sub-directory name, event_label, to label the photos, as follows:
```
LABEL = 'Xmp.xmp.Label'
metadata[LABEL] = pyexiv2.XmpTag(LABEL, event_label)
```
This is only for illustration purposes. You can decide on how you would like to organise the photos and use what seems most convenient for you.
**Digital photos**
Digital photos have different challenges. It is so easy to keep taking photos that you are likely to have a lot of them. Unless you have been careful, you are likely to find that you have used different tools for downloading photos from digital cameras and smartphones, so the file names and directory names are not consistent. A convenient option is to take the date and time of an image from the metadata and rename the files accordingly. Example code follows:
```
import os
import datetime
import pyexiv2

EXIF_DATE = 'Exif.Image.DateTime'
EXIF_ORIG_DATE = 'Exif.Photo.DateTimeOriginal'

def rename_file(p, f, fpref, ctr):
    fold, fext = f.rsplit('.', 1)   # separate the extension, e.g. jpg
    fname = fpref + '-%04i' % ctr   # add a serial number to ensure uniqueness
    fnew = '.'.join((fname, fext))
    os.rename('/'.join((p, f)), '/'.join((p, fnew)))

def process_files(path, files):
    ctr = 0
    for f in files:
        try:
            metadata = pyexiv2.ImageMetadata('/'.join((path, f)))
            metadata.read()
            if EXIF_ORIG_DATE in metadata.exif_keys:
                datestamp = metadata[EXIF_ORIG_DATE].human_value
            else:
                datestamp = metadata[EXIF_DATE].human_value
            # '2019:08:03 19:00:00' becomes '2019-08-03_19-00-00'
            datepref = '_'.join([x.replace(':', '-') for x in datestamp.split(' ')])
            rename_file(path, f, datepref, ctr)
            ctr += 1
        except Exception:
            print('Error in %s/%s' % (path, f))

for path, dirs, files in os.walk('.'):  # work with the current directory for convenience
    if len(files) > 0:
        process_files(path, files)
```
All the files now have consistent names. Since the photo management software provides a way to view the photos by time, it may be preferable to organise the files into directories with meaningful names. You can move photos into directories/albums that are meaningful, and the photo management software will then let you view the photos either by album or by date.
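As an illustration, a minimal sketch of such a move might look like the following; the album name and the date prefix are hypothetical and would be adapted to your own collection:
```
import os
import shutil

# Hypothetical example: gather all photos taken in August 2019
# into an album directory named '2019-08-summer-holiday'.
album = '2019-08-summer-holiday'
os.makedirs(album, exist_ok=True)

for f in os.listdir('.'):
    if f.startswith('2019-08-') and f.lower().endswith('.jpg'):
        shutil.move(f, os.path.join(album, f))
```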
**Reducing clutter and duplicates**
Over time, my collection came to include multiple copies of the same photos. In the old days, to share photos easily, I used to keep even low-resolution copies. Digikam has an excellent option for identifying similar photos; however, each photo needs to be handled individually. A very convenient tool for finding duplicate files and managing them programmatically is <http://www.jhnc.org/findimagedupes/>. The output of this program contains each set of duplicate files on a separate line.
You can use the Python Pillow and Matplotlib packages to display the images. Use the image size to select the one with the highest resolution among the duplicates, retain that one, and delete the rest.
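The following sketch outlines one way to automate that selection. It assumes the findimagedupes output has been saved to a file named dupes.txt, with each line holding one space-separated group of duplicate file names (names containing spaces would need extra handling):
```
import os
from PIL import Image  # the Pillow package

def pixel_area(name):
    # Use the pixel area of an image as a simple resolution measure.
    with Image.open(name) as img:
        return img.width * img.height

with open('dupes.txt') as dupes:
    for line in dupes:
        group = line.split()
        if len(group) < 2:
            continue
        keeper = max(group, key=pixel_area)  # highest resolution wins
        for name in group:
            if name != keeper:
                os.remove(name)  # delete the lower-resolution copies
```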
One thing is certain, though. After all the work is done, it is a pleasure to look at the photographs and relive all those old memories.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/bringing-some-order-into-a-collection-of-photographs/
作者:[Dr Anil Seth][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/anil-seth/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Gimp-6-Souping-up-photos.jpg?resize=696%2C492&ssl=1 (Gimp-6 Souping up photos)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Gimp-6-Souping-up-photos.jpg?fit=900%2C636&ssl=1

View File

@@ -0,0 +1,245 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to manage Go projects with GVM)
[#]: via: (https://opensource.com/article/19/10/introduction-gvm)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
How to manage Go projects with GVM
======
Manage Go environments, including installing multiple versions and
managing modules, with Go Version Manager.
![Woman programming][1]
Go Version Manager ([GVM][2]) is an open source tool for managing Go environments. It supports installing multiple versions of Go and managing modules per-project using GVM "pkgsets." Developed originally by [Josh Bussdieker][3], GVM (like its Ruby counterpart, RVM) allows you to create a development environment for each project or group of projects, segregating the different Go versions and package dependencies to allow for more flexibility and prevent versioning issues.
There are several options for managing Go packages, including Go 1.11 Modules, built right into Go. I find GVM to be simple and intuitive, and even if I didn't use it to manage packages, I'd still use it to manage Go versions.
### Installing GVM
Installing GVM is straightforward. The [GVM repository][4] installation documentation instructs you to download the installer script and pipe it to Bash:
```
bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
```
Despite the growing adoption of this kind of installation method, it's still good practice to take a look at what the installer is doing before you do it. In the case of GVM, the installer script:
1. Checks some dependencies
2. Clones the GVM repo
3. Uses shell scripts to:
* Install Go
* Manage the GOPATH environment
* Add a line to your bashrc, zshrc, or profile
If you want to double-check what it's doing, you can clone the repo and review the shell scripts, then run **./binscripts/gvm-installer** to set it up using the local scripts.
_Note:_ Since GVM can be used to download and compile new Go versions, there are some expected dependencies like Make, Git, and Curl. You can find the complete list for your distribution in [GVM's README][5].
### Installing and managing Go versions with GVM
Once GVM is installed, you can start using it to install and manage different versions of Go. The **gvm listall** command shows the available versions of Go that can be downloaded and compiled:
```
[chris@marvin]$ gvm listall
gvm gos (available)
   go1
   go1.0.1
   go1.0.2
   go1.0.3
<output truncated>
```
Installing a specific Go version is as easy as **gvm install <version>**, where **<version>** is one of the versions returned by the **gvm listall** command.
Say you're working on a project that uses Go version 1.12.8. You can install it with **gvm install go1.12.8**:
```
[chris@marvin]$ gvm install go1.12.8
Installing go1.12.8...
 * Compiling...
go1.12.8 successfully installed!
```
Enter **gvm list**, and you see Go version 1.12.8 is installed along with the system Go version (the version that came packaged with your OS's package manager):
```
[chris@marvin]$ gvm list
gvm gos (installed)
   go1.12.8
=> system
```
GVM is still using the system version of Go, denoted by the **=>** symbol next to it. You can switch your environment to use the newly installed go1.12.8 with the **gvm use** command:
```
[chris@marvin]$ gvm use go1.12.8
Now using version go1.12.8
[chris@marvin]$ go version
go version go1.12.8 linux/amd64
```
GVM makes it extremely simple to manage installed versions of Go, but it gets even better!
### Using GVM pkgset
Out of the box, Go has a brilliant—and frustrating—way of managing packages and modules. By default, if you **go get** a package, it is downloaded into the **src** and **pkg** directories in your **$GOPATH**; then it can be included in your Go program by using **import**. This makes it easy to get packages, especially for unprivileged users, without requiring **sudo** or root privileges (much like **pip install --user** in Python). The tradeoff, however, is the difficulty in managing different versions of the same packages across different projects.
There are a number of ways to try fixing or mitigating the issue, including the experimental Go Modules (preliminary support added in Go v1.11) and [go dep][6] (an "official experiment" and ongoing alternative to Go Modules). Before I discovered GVM, I would build and test Go projects in their own Docker containers to ensure segregation.
GVM elegantly accomplishes management and segregation of packages between projects by using "pkgsets" to append a new directory for projects to the default **$GOPATH** for the version of Go installed, much like **$PATH** works on Unix/Linux systems.
It is easiest to visualize how this works in action. First, install a new version of Go, v1.12.9:
```
[chris@marvin]$ echo $GOPATH
/home/chris/.gvm/pkgsets/go1.12.8/global
[chris@marvin]$ gvm install go1.12.9
Installing go1.12.9...
 * Compiling...
go1.12.9 successfully installed
[chris@marvin]$ gvm use go1.12.9
Now using version go1.12.9
```
When GVM is told to use a new version, it changes to a new **$GOPATH**, which corresponds to the default **global** pkgset for that version:
```
[chris@marvin]$ echo $GOPATH
/home/chris/.gvm/pkgsets/go1.12.9/global
[chris@marvin]$ gvm pkgset list
gvm go package sets (go1.12.9)
=>  global
```
Packages in the global pkgset are available to any project using this specific version of Go, although by default there are no extra packages installed.
Now, suppose you're starting a new project, and it needs a specific package. First, use GVM to create a new pkgset called **introToGvm**:
```
[chris@marvin]$ gvm pkgset create introToGvm
[chris@marvin]$ gvm pkgset use introToGvm
Now using version go1.12.9@introToGvm
[chris@marvin]$ gvm pkgset list
gvm go package sets (go1.12.9)
    global
=>  introToGvm
```
As mentioned above, a new directory for the pkgset is prepended to the **$GOPATH**:
```
[chris@marvin]$ echo $GOPATH
/home/chris/.gvm/pkgsets/go1.12.9/introToGvm:/home/chris/.gvm/pkgsets/go1.12.9/global
```
Change directories into the **introToGvm** path that was prepended and examine the directory structure—and take the opportunity to have some fun with **awk** and **bash** as you do:
```
[chris@marvin]$ cd $( awk -F':' '{print $1}' <<< $GOPATH )
[chris@marvin]$ pwd
/home/chris/.gvm/pkgsets/go1.12.9/introToGvm
[chris@marvin]$ ls
overlay  pkg  src
```
Notice that the new directory looks a lot like a normal **$GOPATH**. New Go packages can be downloaded using the same **go get** command you'd normally use with Go, and they are added to the pkgset.
As an example, use the following to get the **gorilla/mux** package, then examine the directory structure of the pkgset:
```
[chris@marvin]$ go get github.com/gorilla/mux
[chris@marvin introToGvm]$ tree
.
├── overlay
│   ├── bin
│   └── lib
│       └── pkgconfig
├── pkg
│   └── linux_amd64
│       └── github.com
│           └── gorilla
│               └── mux.a
└── src
    └── github.com
        └── gorilla
            └── mux
                ├── AUTHORS
                ├── bench_test.go
                ├── context.go
                ├── context_test.go
                ├── doc.go
                ├── example_authentication_middleware_test.go
                ├── example_cors_method_middleware_test.go
                ├── example_route_test.go
                ├── go.mod
                ├── LICENSE
                ├── middleware.go
                ├── middleware_test.go
                ├── mux.go
                ├── mux_test.go
                ├── old_test.go
                ├── README.md
                ├── regexp.go
                ├── route.go
                └── test_helpers.go
```
As you can see, **gorilla/mux** was added to the pkgset **$GOPATH** directory as expected and can now be used with projects that use this pkgset.
### GVM makes Go management a breeze
GVM is an intuitive and non-intrusive way to manage Go versions and packages. It can be used on its own or in combination with other Go module management techniques, while still making use of GVM's Go version management capabilities. Segregating projects by Go version and package dependency makes development easier and leads to fewer complications from managing version conflicts, and GVM makes this a breeze.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/introduction-gvm
作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: https://github.com/moovweb/gvm
[3]: https://github.com/jbussdieker
[4]: https://github.com/moovweb/gvm#installing
[5]: https://github.com/moovweb/gvm/blob/master/README.md
[6]: https://golang.github.io/dep/

View File

@@ -0,0 +1,192 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: Failure as experimentation)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew)
以变异测试为例:基于故障的试验
======
基于 .NET 的 xUnit.net 测试框架,开发一款自动猫门的逻辑,让门在白天开放,夜间锁定。
![Digital hand surrounding by objects, bike, light bulb, graphs][1]
在本系列的[第一篇文章][2]中,我演示了如何使用刻意设计的故障来确保代码达到预期结果。在第二篇文章中,我将继续开发这个示例项目:一款白天开放、夜间锁定的自动猫门。
在此提醒一下,您可以按照[此处的说明][3]使用 .NET 的 xUnit.net 测试框架。
### 关于白天时间
回想一下测试驱动开发TDD围绕着大量的单元测试。
第一篇文章中实现了满足 **Given7pmReturnNighttime** 单元测试期望的逻辑。但这还没有完,现在您需要描述当前时间晚于早上 7 点时期望发生的结果。这是新的单元测试,称为 **Given7amReturnDaylight**
```
[Fact]
public void Given7amReturnDaylight()
{
var expected = "Daylight";
var actual = dayOrNightUtility.GetDayOrNight();
Assert.Equal(expected, actual);
}
```
现在,新的单元测试失败了(越早失败越好!):
```
Starting test execution, please wait...
[Xunit.net 00:00:01.23] unittest.UnitTest1.Given7amReturnDaylight [FAIL]
Failed unittest.UnitTest1.Given7amReturnDaylight
[...]
```
期望接收到的字符串值是 “Daylight”但实际接收到的值是 “Nighttime”。
### 分析失败的测试用例
经过仔细检查,代码本身似乎已经出现问题。 事实证明,**GetDayOrNight** 方法的实现是不可测试的!
看看我们面临的核心挑战:
1. **GetDayOrNight 依赖隐藏输入。**
**dayOrNight** 的值取决于隐藏输入(它从内置系统时钟中获取一天中的时间值)。
2. **GetDayOrNight 包含非确定性行为。**
从系统时钟中获取到的时间值是不确定的,因为该时间取决于你运行代码的时间点,而我们认为这一点是不可预测的。
3. **GetDayOrNight API 的质量差。**
该 API 与具体的数据源(系统 **DateTime**)紧密耦合。
4. **GetDayOrNight 违反了单一责任原则。**
该方法的实现同时负责获取信息和处理信息。好的做法是一个方法只负责执行一项职责。
5. **GetDayOrNight 有多个更改原因。**
可以想象内部时间源可能会更改的情况。同样,也很容易想象处理逻辑会改变。这些不同的变化原因必须相互隔离。
6. **当我们尝试了解 GetDayOrNight 的行为时,会发现它的 API 签名提供的信息不足。**
最理想的情况是,只需查看 API 的签名,就能了解其预期的行为类型。
7. **GetDayOrNight 取决于全局共享的可变状态。**
要不惜一切代价避免共享的可变状态!
8. **即使在阅读源代码之后,也无法预测 GetDayOrNight 方法的行为。**
这是一个严重的问题。通过阅读源代码,应该始终能非常清楚地预测出系统运行时的行为。
### 失败背后的原则
每当您遇到工程问题时,建议使用久经考验的分而治之策略。 在这种情况下,遵循关注点分离的原则是一种可行的方法。
> **关注点分离**<ruby>separation of concerns<rt>SoC</rt></ruby>)是一种将计算机程序分为不同模块的设计原则,以便每个模块都解决一个单独的关注点。关注点是影响计算机程序代码的一组信息。关注点既可能宽泛到如代码所要针对优化的硬件细节,也可能具体到如要实例化的类的名称。完美体现 SoC 的程序称为模块化程序。
>
> ([source][4])
**GetDayOrNight** 方法应该只关心判断给定的日期和时间值表示的是白天还是夜晚,而不应该关心从哪里获取该值。获取当前时间的任务应该留给调用客户端。
这种方法符合另一个有价值的工程原则,即控制反转。Martin Fowler [在这里][5]详细探讨了这一概念。
> 框架的一个重要特征是:用户为定制框架而定义的方法,通常是由框架本身调用的,而不是由用户的应用程序代码调用的。框架通常在协调和排序应用程序活动中扮演主程序的角色。控制权的这种反转使框架有能力充当可扩展的骨架。用户提供的方法针对特定应用,对框架中定义的泛化算法进行定制。
>
> \-- [Ralph Johnson and Brian Foote][6]
### 重构测试用例
因此,代码需要重构。 摆脱对内部时钟的依赖(**DateTime** 系统实用程序):
```
DateTime time = new DateTime();
```
删除上述代码(在你的文件中应该是第 7 行)。再给 **GetDayOrNight** 方法添加一个 **DateTime** 类型的输入参数 **time**,进一步重构代码。
以下是重构后的 **DayOrNightUtility.cs** 类:
```
using System;
namespace app {
public class DayOrNightUtility {
public string GetDayOrNight(DateTime time) {
string dayOrNight = "Nighttime";
if(time.Hour >= 7 && time.Hour < 19) {
dayOrNight = "Daylight";
}
return dayOrNight;
}
}
}
```
重构代码需要更改单元测试:需要准备 **nightHour** 和 **dayHour** 的测试数据,并将这些值传入 **GetDayOrNight** 方法。以下是重构后的单元测试:
```
using System;
using Xunit;
using app;
namespace unittest
{
public class UnitTest1
{
DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
DateTime nightHour = new DateTime(2019, 08, 03, 19, 00, 00);
DateTime dayHour = new DateTime(2019, 08, 03, 07, 00, 00);
[Fact]
public void Given7pmReturnNighttime()
{
var expected = "Nighttime";
var actual = dayOrNightUtility.GetDayOrNight(nightHour);
Assert.Equal(expected, actual);
}
[Fact]
public void Given7amReturnDaylight()
{
var expected = "Daylight";
var actual = dayOrNightUtility.GetDayOrNight(dayHour);
Assert.Equal(expected, actual);
}
}
}
```
### 经验教训
在继续开发这种简单的场景之前,请先回顾复习一下本次练习中所学到的东西。
编写无法测试的代码很容易在不经意间给自己埋下陷阱。从表面上看这样的代码似乎可以正常工作。但是遵循测试驱动开发TDD的实践先描述期望结果、再执行测试就暴露了代码中的严重问题。
这表明 TDD 是确保代码不会太凌乱的理想方法。 TDD 指出了一些问题区域,例如缺乏单一责任和存在隐藏输入。 此外TDD 有助于删除不确定性代码,并用行为明确的完全可测试代码替换它。
最后TDD 帮助交付易于阅读、逻辑易于遵循的代码。
在本系列的下一篇文章中,我将演示如何使用在本练习中创建的逻辑来实现功能代码,以及如何进行进一步的测试使其变得更好。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://opensource.com/article/19/9/mutation-testing-example-part-1-how-leverage-failure
[3]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[4]: https://en.wikipedia.org/wiki/Separation_of_concerns
[5]: https://martinfowler.com/bliki/InversionOfControl.html
[6]: http://www.laputan.org/drc/drc.html

View File

@@ -1,111 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (amwps290)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mirror your Android screen on your computer with Guiscrcpy)
[#]: via: (https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/scottnesbitthttps://opensource.com/users/holmjahttps://opensource.com/users/holmjahttps://opensource.com/users/rajaram121)
使用 Guiscrcpy 将你的安卓手机的屏幕投射到你的电脑
======
使用这个基于 scrcpy 的开源应用从你的电脑上访问你的安卓设备
![Coding on a computer][1]
在未来,你所需的一切信息都将触手可及,并且全部以全息的形式出现在空中,即使你在驾驶汽车时也可以与之交互。不过,那是未来;在那一刻到来之前,我们所有人都只能把信息分散在笔记本电脑、手机、平板电脑和智能冰箱上。不幸的是,这意味着当我们需要某个设备上的信息时,通常必须查看那个设备。
虽然不像全息终端或飞行汽车那样酷炫,但 [srevin saju][3] 开发的 [guiscrcpy][2] 把多块屏幕整合到了一处,多少有几分未来的感觉。
Guiscrcpy 是一个基于屡获殊荣的开源引擎 [scrcpy][4] 的开源项目GNU GPLv3 许可证)。使用 Guiscrcpy你可以把安卓手机的屏幕投射到电脑上查看手机上的一切。Guiscrcpy 支持 Linux、Windows 和 MacOS。
与其他 scrcpy 的替代软件不同Guiscrcpy 并不仅仅是 scrcpy 的简单复制该项目优先考虑与其他开源项目的协作。因此Guiscrcpy 对 scrcpy 来说是一个扩展,或者说是一个用户界面层。将 Python 3 GUI 与 scrcpy 分开,可以确保没有任何东西干扰 scrcpy 后端的效率。你可以以 1080P 分辨率投射屏幕;由于其超快的渲染速度和超低的 CPU 占用,即使在低端电脑上也能运行得很流畅。
Scrcpy 是 Guiscrcpy 项目的基石。它是一个基于命令行的应用,没有用户界面来处理你的手势操作,也不提供返回按钮和主页按钮,而且需要你对 [Linux 终端][5]比较熟悉。Guiscrcpy 给 scrcpy 添加了图形面板因此任何用户都可以使用它而且不需要通过网络发送任何信息就可以投射和控制自己的设备。Guiscrcpy 同时也为 Windows 和 Linux 用户提供了编译好的二进制文件,以方便使用。
### 安装 Guiscrcpy
在你安装 Guiscrcpy 之前,需要先安装它的依赖包,尤其是 scrcpy。安装 scrcpy 最简单的方式可能是使用 [snap][6] 工具,大部分 Linux 发行版都可以安装和使用它。如果你的电脑上安装并启用了 snap那么只需下面一条命令就可以安装 scrcpy
```
$ sudo snap install scrcpy
```
当你安装完 scrcpy就可以安装其他的依赖包了。[Simple DirectMedia Layer][7]SDL2是一个用于显示和控制设备屏幕的工具包[Android Debug Bridge][8]ADB命令则用于将安卓手机连接到电脑。
在 Fedora 或者 CentOS 上:
```
$ sudo dnf install SDL2 android-tools
```
在 Ubuntu 或者 Debian 上:
```
$ sudo apt install SDL2 android-tools-adb
```
在另一个终端中,安装 Python 依赖项:
```
$ python3 -m pip install -r requirements.txt --user
```
### 设置你的手机
为了让你的手机能够接受 adb 连接,必须先开启手机的开发者选项。要打开开发者选项,打开**设置**,然后选择**关于手机**,找到**版本号**(它也可能位于**软件信息**面板中)。信不信由你,只要连续点击**版本号**七次,就可以打开开发者选项。
![Enabling Developer Mode][9]
更多更全面的连接手机的方式,请参考[安卓开发者文档][10]。
一旦你设置好了你的手机,将你的手机通过 USB 线插入到你的电脑中(或者通过无线的方式进行连接,确保你已经配置好了无线连接)。
### 使用 Guiscrcpy
当你启动 guiscrcpy 的时候,你会看到一个主控制窗口。点击窗口里的 **Start scrcpy** 按钮。只要你设置好了开发者模式,并且通过 USB 或者 WiFi 将手机连接到了电脑guiscrcpy 就会连接你的手机。
![Guiscrcpy main screen][11]
它还包含一个可写入的配置系统,会将你的配置文件写入 **~/.config** 目录,以便保存你的首选项供下次使用。
guiscrcpy 底部的面板是一个浮动窗口,可以帮助你执行一些基本的控制动作。它包括主页按钮、返回按钮、电源按钮以及一些其他按键,这些按键在安卓手机上都非常常用。值得注意的是,这个窗口并不是与 scrcpy 直接交互,因此它可以毫无延迟地执行。换句话说,这个操作窗口是直接通过 adb 与你的手机进行交互,而不是通过 scrcpy。
![guiscrcpy's bottom panel][12]
这个项目目前十分活跃,不断地有新的特性加入其中。最新版本的具有了手势操作和通知界面。
有了这个 guiscrcpy你不仅仅可以在你的电脑屏幕上看到你的手机你还可以通过点击窗口就像操作你的实体手机一样或者使用浮动窗口上的按钮与之进行交互。
![guiscrcpy running on Fedora 30][13]
Guiscrcpy 是一个有趣且实用的应用程序,提供的功能应该是任何现代设备(尤其是 Android 之类的平台)的正式功能。 自己尝试一下,为当今的数字生活增添一些未来主义的感觉。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[amwps290](https://github.com/amwps290)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/scottnesbitthttps://opensource.com/users/holmjahttps://opensource.com/users/holmjahttps://opensource.com/users/rajaram121
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://github.com/srevinsaju/guiscrcpy
[3]: http://opensource.com/users/srevinsaju
[4]: https://github.com/Genymobile/scrcpy
[5]: https://www.redhat.com/sysadmin/navigating-filesystem-linux-terminal
[6]: https://snapcraft.io/
[7]: https://www.libsdl.org/
[8]: https://developer.android.com/studio/command-line/adb
[9]: https://opensource.com/sites/default/files/uploads/developer-mode.jpg (Enabling Developer Mode)
[10]: https://developer.android.com/studio/debug/dev-options
[11]: https://opensource.com/sites/default/files/uploads/guiscrcpy-main.png (Guiscrcpy main screen)
[12]: https://opensource.com/sites/default/files/uploads/guiscrcpy-bottompanel.png (guiscrcpy's bottom panel)
[13]: https://opensource.com/sites/default/files/uploads/guiscrcpy-screenshot.jpg (guiscrcpy running on Fedora 30)

View File

@@ -1,257 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (CentOS 8 Installation Guide with Screenshots)
[#]: via: (https://www.linuxtechi.com/centos-8-installation-guide-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
CentOS 8 安装图解
======
继 RHEL 8 发布之后CentOS 社区也发布了让人期待已久的 CentOS 8并发布了两种模式
* CentOS stream滚动发布的 Linux 发行版,适用于需要频繁更新的开发者
* CentOS类似 RHEL 8 的稳定操作系统,系统管理员可以用其部署或配置服务和应用
在这篇文章中,我们会使用图解的方式演示 CentOS 8 的安装方法。
### CentOS 8 的新特性
* DNF 成为了默认的软件包管理器,同时 yum 仍然是可用的
* 使用 `nmcli``nmtui` 进行网络配置N
* 使用 Podman 进行容器管理
* 引入了两个新的包仓库BaseOS 和 AppStream
* 使用 Cockpit 作为默认的系统管理工具
* 默认使用 Wayland 提供图形界面
* `iptables` 将被 `nftables` 取代
* 使用 Linux 内核 4.18
* 提供 PHP 7.2、Python 3.6、Ansible 2.8、VIM 8.0 和 Squid 4
### CentOS 8 所需的最低硬件配置:
* 2 GB RAM
* 64 位 x86 架构、2 GHz 或以上的 CPU
* 20 GB 硬盘空间
### CentOS 8 安装图解
### 第一步:下载 CentOS 8 ISO 文件
在 CentOS 官方网站 <https://www.centos.org/download/> 下载 CentOS 8 ISO 文件。
### 第二步:创建 CentOS 8 启动介质USB 或 DVD
下载 CentOS 8 ISO 文件之后,将 ISO 文件烧录到 USB 移动硬盘或 DVD 光盘中,作为启动介质。
然后重启系统,在 BIOS 设置中启动上面烧录好的启动介质。
### 第三步:选择“安装 CentOS Linux 8.0”选项
搭载了 CentOS 8 ISO 文件的启动介质启动之后,就可以看到以下这个界面。选择“安装 CentOS Linux 8.0”Install CentOS Linux 8.0)选项并按回车。
[![Choose-Install-CentOS8][1]][2]
### 第四步:选择偏好语言
选择想要在 CentOS 8 安装过程中使用的语言,然后继续。
[![Select-Language-CentOS8-Installation][1]][3]
### 第五步:准备安装 CentOS 8
这一步我们会配置以下内容:
* 键盘布局
* 日期和时间
* 安装来源
* 软件选择
* 安装目标
* Kdump
[![Installation-Summary-CentOS8][1]][4]
如上图所示安装向导已经自动提供了键盘布局Keyboard、时间和日期Time &amp; Date、安装来源Installation Source和软件选择Software Selection的选项。
如果你需要修改以上的选项点击对应的图标就可以了。例如修改系统的时间和日期只需要点击“Time &amp; Date”选择正确的时区然后点击“完成”Done即可。
[![TimeZone-CentOS8-Installation][1]][5]
在软件选择选项中选择安装的模式。例如“包含图形界面”Server with GUI选项会在安装后的系统中提供图形界面而如果想安装尽可能少的额外软件可以选择“最小化安装”Minimal Install
[![Software-Selection-CentOS8-Installation][1]][6]
这里我们选择“包含图形界面”,点击完成。
Kdump 功能默认是开启的。尽管这是一个强烈建议开启的功能,但也可以点击对应的图标将其关闭。
如果想要在安装过程中对网络进行配置可以点击“网络与主机名”Network &amp; Host Name选项。
[![Networking-During-CentOS8-Installation][1]][7]
如果系统连接到启用了 DHCP 功能的调制解调器上,就会在启动网络接口的时候自动获取一个 IP 地址。如果需要配置静态 IP点击“配置”Configure并指定 IP 的相关信息。除此以外我们还将主机名设置为 linuxtechi.com。
完成网络配置后,点击完成。
最后我们要配置“安装目标”Installation Destination指定 CentOS 8 将要安装到哪一个硬盘,以及相关的分区方式。
[![Installation-Destination-Custom-CentOS8][1]][8]
点击完成。
如图所示,我为 CentOS 8 分配了 40 GB 的硬盘空间。有两种分区方案可供选择如果由安装向导进行自动分区可以选择“自动”Automatic选项如果想要自己手动进行分区可以选择“自定义”Custom选项。
在这里我们选择“自定义”选项,并按照以下的方式分区:
* /boot 2 GB (ext4 文件系统)
* / 12 GB (xfs 文件系统)
* /home 20 GB (xfs 文件系统)
* /tmp 5 GB (xfs 文件系统)
* Swap 1 GB (xfs 文件系统)
首先创建 `/boot` 标准分区,设置大小为 2GB如下图所示
[![boot-partition-CentOS8-Installation][1]][9]
点击“添加挂载点”Add mount point
再创建第二个分区 `/`,并设置大小为 12GB。点击加号指定挂载点和分区大小点击“添加挂载点”即可。
[![slash-root-partition-centos8-installation][1]][10]
然后在页面上将 `/` 分区的分区类型从标准更改为逻辑卷LVM并点击“更新设置”update settings
[![Change-Partition-Type-CentOS8][1]][11]
如上图所示安装向导已经自动创建了一个卷组volume group。如果想要更改卷组的名称只需要点击卷组标签页中的“修改”Modify选项。
同样地,创建 `/home` 分区和 `/tmp` 分区,分别将大小设置为 20GB 和 5GB并设置分区类型为逻辑卷。
[![home-partition-CentOS8-Installation][1]][12]
[![tmp-partition-centos8-installation][1]][13]
最后创建<ruby>交换分区<rt>swap partition</rt></ruby>
[![Swap-Partition-CentOS8-Installation][1]][14]
点击“添加挂载点”。
在完成所有分区设置后,点击“完成”。
[![Choose-Done-after-manual-partition-centos8][1]][15]
在下一个界面点击“应用更改”Accept changes以上做的更改就会写入到硬盘中。
[![Accept-changes-CentOS8-Installation][1]][16]
### 第六步:选择“开始安装”
完成上述的所有更改后回到先前的安装概览界面点击“开始安装”Begin Installation以开始安装 CentOS 8。
[![Begin-Installation-CentOS8][1]][17]
下面这个界面表示安装过程正在进行中。
[![Installation-progress-centos8][1]][18]
要设置 root 用户的口令,只需要点击 “root 口令”Root Password选项输入一个口令然后点击“创建用户”User Creation选项创建一个本地用户。
[![Root-Password-CentOS8-Installation][1]][19]
填写新创建的用户的详细信息。
[![Local-User-Details-CentOS8][1]][20]
在安装完成后,安装向导会提示重启系统。
[![CentOS8-Installation-Progress][1]][21]
### 第七步:完成安装并重启系统
安装完成后要重启系统。只需点击“重启”Reboot按钮。
[![Installation-Completed-CentOS8][1]][22]
注意:重启完成后,记得断开安装介质,并在 BIOS 中将启动介质设置为硬盘。
### 第八步:启动新安装的 CentOS 8 并接受许可证
在 grub 引导菜单中,选择 CentOS 8 进行启动。
[![Grub-Boot-CentOS8][1]][23]
同意 CentOS 8 的许可证,点击“完成”。
[![Accept-License-CentOS8-Installation][1]][24]
在下一个界面点击“完成配置”Finish Configuration
[![Finish-Configuration-CentOS8-Installation][1]][25]
### 第九步:配置完成后登录
同意 CentOS 8 的许可证以及完成配置之后,会来到登录界面。
[![Login-screen-CentOS8][1]][26]
使用刚才创建的用户以及对应的口令登录,按照提示进行操作,就可以看到以下界面。
[![CentOS8-Ready-Use-Screen][1]][27]
点击“开始使用 CentOS Linux”Start Using CentOS Linux
[![Desktop-Screen-CentOS8][1]][28]
以上就是 CentOS 8 的安装过程,至此我们已经完成了 CentOS 8 的安装。欢迎给我们发送评论。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/centos-8-installation-guide-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Install-CentOS8.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Language-CentOS8-Installation.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Summary-CentOS8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/TimeZone-CentOS8-Installation.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Software-Selection-CentOS8-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Networking-During-CentOS8-Installation.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Destination-Custom-CentOS8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-CentOS8-Installation.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-centos8-installation.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Change-Partition-Type-CentOS8.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-CentOS8-Installation.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/tmp-partition-centos8-installation.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Swap-Partition-CentOS8-Installation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Done-after-manual-partition-centos8.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Accept-changes-CentOS8-Installation.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Begin-Installation-CentOS8.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-progress-centos8.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Root-Password-CentOS8-Installation.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Local-User-Details-CentOS8.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-Installation-Progress.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Completed-CentOS8.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Grub-Boot-CentOS8.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Accept-License-CentOS8-Installation.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Finish-Configuration-CentOS8-Installation.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-CentOS8.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-Ready-Use-Screen.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Desktop-Screen-CentOS8.jpg

View File

@@ -0,0 +1,642 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (9 essential GNU binutils tools)
[#]: via: (https://opensource.com/article/19/10/gnu-binutils)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
GNU binutils 里的九种武器
======
> 二进制分析是计算机行业中最被低估的技能。
![Tools for the sysadmin][1]
想象一下,在无法访问某个软件的源代码的情况下,仍然能够理解该软件的实现方式,在其中找到漏洞,并且更厉害的是还能修复错误,而这一切都只凭二进制文件就能做到。这听起来就像是超能力,对吧?
你也可以拥有这样的超能力GNU 二进制实用程序binutils就是一个很好的起点。[GNU binutils][2] 是一个二进制工具集,默认情况下所有 Linux 发行版中都会安装这些工具。
二进制分析是计算机行业中最被低估的技能。它主要由恶意软件分析师、反向工程师和使用底层软件的人使用。
本文探讨了 binutils 可用的一些工具。我使用的是 RHEL但是这些示例应在任何 Linux 发行版上运行。
```
[~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
[~]#
[~]# uname -r
3.10.0-957.el7.x86_64
[~]#
```
请注意,某些打包命令(例如 `rpm`)在基于 Debian 的发行版中可能不可用,因此请在适用时使用等效的 `dpkg` 命令。
### 软件开发的基础知识
在开源世界中,我们很多人都专注于源代码形式的软件。当软件的源代码随时可用时,很容易获得源代码的副本,打开喜欢的编辑器,喝杯咖啡,然后开始探索。
但是源代码不是在 CPU 上执行的代码,在 CPU 上执行的是二进制或机器语言指令。二进制或可执行文件是编译源代码时获得的。熟练的调试人员通常会通过了解这种差异来获得优势。
### 编译的基础知识
在深入研究 binutils 软件包本身之前,最好先了解编译的基础知识。
编译是将程序从某种编程语言C/C++)的源代码或文本形式转换为机器代码的过程。
机器代码是 CPU或一般而言硬件可以理解的 1 和 0 的序列,因此可以由 CPU 执行或运行。该机器码以特定格式保存到文件,通常称为可执行文件或二进制文件。在 Linux和使用 [Linux 兼容二进制][3]的 BSD这称为 [ELF][4]<ruby>可执行和可链接格式<rt>Executable and Linkable Format</rt></ruby>)。
在呈现给定源文件的可执行文件或二进制文件之前编译过程将经历一系列复杂的步骤。以这个源程序C 代码)为例。打开你喜欢的编辑器,然后键入以下程序:
```
#include <stdio.h>
int main(void)
{
printf("Hello World\n");
return 0;
}
```
#### 步骤 1用 cpp 预处理
[C 预处理程序cpp][5]用于扩展所有宏并包括头文件。在此示例中,头文件 `stdio.h` 将被包含在源代码中。`stdio.h` 是一个头文件,其中包含有关程序内使用的 `printf` 函数的信息。对源代码运行 `cpp`,其结果指令保存在名为 `hello.i` 的文件中。可以使用文本编辑器打开该文件以查看其内容。打印 “hello world” 的源代码在该文件的底部。
```
[testdir]# cat hello.c
#include <stdio.h>
int main(void)
{
printf("Hello World\n");
return 0;
}
[testdir]#
[testdir]# cpp hello.c > hello.i
[testdir]#
[testdir]# ls -lrt
total 24
-rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
[testdir]#
```
#### 步骤 2用 gcc 编译
在此阶段,无需创建目标文件就将步骤 1 中的预处理源代码转换为汇编语言指令。这个阶段使用 [GNU 编译器集合gcc][6]。对 `hello.i` 文件运行带有 `-S` 选项的 `gcc` 命令后,它将创建一个名为 `hello.s` 的新文件。该文件包含 C 程序的汇编语言指令。
你可以使用任何编辑器或 `cat` 命令查看其内容。
```
[testdir]#
[testdir]# gcc -Wall -S hello.i
[testdir]#
[testdir]# ls -l
total 28
-rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
-rw-r--r--. 1 root root 448 Sep 13 03:25 hello.s
[testdir]#
[testdir]# cat hello.s
.file "hello.c"
.section .rodata
.LC0:
.string "Hello World"
.text
.globl main
.type main, @function
main:
.LFB0:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
movl $.LC0, %edi
call puts
movl $0, %eax
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE0:
.size main, .-main
.ident "GCC: (GNU) 4.8.5 20150623 (Red Hat 4.8.5-36)"
.section .note.GNU-stack,"",@progbits
[testdir]#
```
#### 步骤 3用 as 汇编
汇编程序的目的是将汇编语言指令转换为机器语言代码,并生成扩展名为 `.o` 的目标文件。此阶段使用默认情况下在所有 Linux 平台上都可用的 GNU 汇编器。
```
[testdir]# as hello.s -o hello.o
[testdir]#
[testdir]# ls -l
total 32
-rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
-rw-r--r--. 1 root root 1496 Sep 13 03:39 hello.o
-rw-r--r--. 1 root root 448 Sep 13 03:25 hello.s
[testdir]#
```
现在,你有了第一个 ELF 格式的文件;但是,还不能执行它。稍后,你将看到“目标文件”和“可执行文件”之间的区别。
```
[testdir]# file hello.o
hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
```
#### 步骤 4用 ld 链接
这是编译的最后阶段,将目标文件链接以创建可执行文件。可执行文件通常需要外部函数,这些外部函数通常来自系统库(`libc`)。
你可以使用 `ld` 命令直接调用链接器;但是,此命令有些复杂。相反,你可以使用带有 `-v` (详细)标志的 `gcc` 编译器,以了解链接是如何发生的。(使用 `ld` 命令进行链接作为一个练习,你可以自行探索。)
```
[testdir]# gcc -v hello.o
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man [...] --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)
COMPILER_PATH=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:[...]:/usr/lib/gcc/x86_64-redhat-linux/
LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.5/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-mtune=generic' '-march=x86-64'
/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/collect2 --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu [...]/../../../../lib64/crtn.o
[testdir]#
```
运行此命令后,你应该看到一个名为 `a.out` 的可执行文件:
```
[testdir]# ls -l
total 44
-rwxr-xr-x. 1 root root 8440 Sep 13 03:45 a.out
-rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
-rw-r--r--. 1 root root 1496 Sep 13 03:39 hello.o
-rw-r--r--. 1 root root 448 Sep 13 03:25 hello.s
```
`a.out` 运行 `file` 命令,结果表明它确实是 ELF 可执行文件:
```
[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=48e4c11901d54d4bf1b6e3826baf18215e4255e5, not stripped
```
运行该可执行文件,看看它是否如源代码所示工作:
```
[testdir]# ./a.out
Hello World
```
工作了!在幕后发生了很多事情它才在屏幕上打印了 “Hello World”。想象一下在更复杂的程序中会发生什么。
### 探索 binutils 工具
此练习为使用 binutils 软件包中的工具提供了良好的背景。我的系统带有 binutils 版本 2.27-34 你的 Linux 发行版上的版本可能有所不同。
```
[~]# rpm -qa | grep binutils
binutils-2.27-34.base.el7.x86_64
```
binutils 软件包中提供了以下工具:
```
[~]# rpm -ql binutils-2.27-34.base.el7.x86_64 | grep bin/
/usr/bin/addr2line
/usr/bin/ar
/usr/bin/as
/usr/bin/c++filt
/usr/bin/dwp
/usr/bin/elfedit
/usr/bin/gprof
/usr/bin/ld
/usr/bin/ld.bfd
/usr/bin/ld.gold
/usr/bin/nm
/usr/bin/objcopy
/usr/bin/objdump
/usr/bin/ranlib
/usr/bin/readelf
/usr/bin/size
/usr/bin/strings
/usr/bin/strip
```
上面的编译练习已经探索了其中的两个工具:用作汇编程序的 `as` 命令,用作链接程序的 `ld` 命令。 继续阅读以了解上述 GNU binutils 软件包工具中的其他七个。
#### readelf显示 ELF 文件信息
上面的练习提到了术语“目标文件”和“可执行文件”。使用该练习中的文件,运行带有 `-h`(文件头)选项的 `readelf` 命令,将文件的 ELF 文件头转储到屏幕上。请注意,以 `.o` 扩展名结尾的目标文件显示为 `Type: REL (Relocatable file)`(可重定位文件):
```
[testdir]# readelf -h hello.o
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 [...]
[...]
Type: REL (Relocatable file)
[...]
```
如果尝试执行此文件,将收到一条错误消息,指出无法执行。这仅表示它尚不具备在 CPU 上执行所需的信息。
请记住,你首先需要使用 `chmod` 命令给目标文件添加 `x`(可执行位),否则你会得到“权限被拒绝”的错误。
```
[testdir]# ./hello.o
bash: ./hello.o: Permission denied
[testdir]# chmod +x ./hello.o
[testdir]#
[testdir]# ./hello.o
bash: ./hello.o: cannot execute binary file
```
如果对 `a.out` 文件尝试相同的命令,则会看到其类型为 `EXEC (Executable file)`(可执行文件)。
```
[testdir]# readelf -h a.out
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
[...] Type: EXEC (Executable file)
```
如上所示,该文件可以直接由 CPU 执行:
```
[testdir]# ./a.out
Hello World
```
`readelf` 命令可提供有关二进制文件的大量信息。在这里,它会告诉你这是 ELF 64 位格式,这意味着它只能在 64 位 CPU 上执行,而不能在 32 位 CPU 上运行。它还告诉你它应在 X86-64Intel/AMD架构上执行。二进制文件的入口点是地址 `0x400430`,它就是 C 源程序中 `main` 函数的地址。
在你知道的其他系统二进制文件上尝试一下 `readelf` 命令,例如 `ls`。请注意,在 RHEL 8 或 Fedora 30 及更高版本的系统上,由于安全原因改用了位置无关可执行文件([PIE][7]),因此你的输出(尤其是 `Type:`)可能会有所不同。
```
[testdir]# readelf -h /bin/ls
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
```
使用 `ldd` 命令了解 `ls` 命令所依赖的系统库,如下所示:
```
[testdir]# ldd /bin/ls
linux-vdso.so.1 => (0x00007ffd7d746000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f060daca000)
libcap.so.2 => /lib64/libcap.so.2 (0x00007f060d8c5000)
libacl.so.1 => /lib64/libacl.so.1 (0x00007f060d6bc000)
libc.so.6 => /lib64/libc.so.6 (0x00007f060d2ef000)
libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f060d08d000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f060ce89000)
/lib64/ld-linux-x86-64.so.2 (0x00007f060dcf1000)
libattr.so.1 => /lib64/libattr.so.1 (0x00007f060cc84000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f060ca68000)
```
`libc` 库文件运行 `readelf` 以查看它是哪种文件。正如它指出的那样,它是一个 `DYN (Shared object file)`(共享对象文件),这意味着它不能直接直接执行;必须由内部使用了该库提供的任何函数的可执行文件使用它。
```
[testdir]# readelf -h /lib64/libc.so.6
ELF Header:
Magic: 7f 45 4c 46 02 01 01 03 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - GNU
ABI Version: 0
Type: DYN (Shared object file)
```
#### size列出节的大小和总大小
`size` 命令仅适用于目标文件和可执行文件,因此,如果尝试在简单的 ASCII 文件上运行它,则会抛出错误,提示“文件格式无法识别”。
```
[testdir]# echo "test" > file1
[testdir]# cat file1
test
[testdir]# file file1
file1: ASCII text
[testdir]# size file1
size: file1: File format not recognized
```
现在,对上面练习中的目标文件和可执行文件运行 `size` 命令。请注意,根据 `size` 命令的输出,可执行文件(`a.out`)的信息要比目标文件(`hello.o`)多得多:
```
[testdir]# size hello.o
text data bss dec hex filename
89 0 0 89 59 hello.o
[testdir]# size a.out
text data bss dec hex filename
1194 540 4 1738 6ca a.out
```
但是这里的 `text`、`data` 和 `bss` 节是什么意思?
`text` 节是指二进制文件的代码部分,其中包含所有可执行指令。`data` 节是所有初始化数据所在的位置,`bss` 节是所有未初始化数据的存储位置。
比较其他一些可用的系统二进制文件的 `size` 结果。
对于 `ls` 命令:
```
[testdir]# size /bin/ls
text data bss dec hex filename
103119 4768 3360 111247 1b28f /bin/ls
```
只需查看 `size` 命令的输出,你就可以看到 `gcc``gdb` 是比 `ls` 大得多的程序:
```
[testdir]# size /bin/gcc
text data bss dec hex filename
755549 8464 81856 845869 ce82d /bin/gcc
[testdir]# size /bin/gdb
text data bss dec hex filename
6650433 90842 152280 6893555 692ff3 /bin/gdb
```
#### strings打印文件中的可打印字符串
`strings` 命令中添加 `-d` 标志以仅显示 `data` 节中的可打印字符通常很有用。
`hello.o` 是一个目标文件,其中包含打印出 `Hello World` 文本的指令。因此,`strings` 命令的唯一输出是 `Hello World`
```
[testdir]# strings -d hello.o
Hello World
```
另一方面,在 `a.out`(可执行文件)上运行 `strings` 会显示在链接阶段二进制文件中包含的其他信息:
```
[testdir]# strings -d a.out
/lib64/ld-linux-x86-64.so.2
!^BU
libc.so.6
puts
__libc_start_main
__gmon_start__
GLIBC_2.2.5
UH-0
UH-0
=(
[]A\A]A^A_
Hello World
;*3$"
```
#### objdump显示目标文件信息
另一个可以从二进制文件中转储机器语言指令的 binutils 工具称为 `objdump`。使用 `-d` 选项,可从二进制文件中反汇编所有汇编指令。
回想一下,编译是将源代码指令转换为机器代码的过程。机器代码仅由 1 和 0 组成,人类难以阅读。因此,它有助于将机器代码表示为汇编语言指令。汇编语言是什么样的?请记住,汇编语言是特定于体系结构的;由于我使用的是 Intelx86-64架构因此如果你使用 ARM 架构编译相同的程序,指令将有所不同。
```
[testdir]# objdump -d hello.o
hello.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <main>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: bf 00 00 00 00 mov $0x0,%edi
9: e8 00 00 00 00 callq e
e: b8 00 00 00 00 mov $0x0,%eax
13: 5d pop %rbp
14: c3 retq
```
该输出乍一看似乎令人生畏,但请花一点时间来理解它,然后再继续。回想一下,`.text` 节包含所有的机器代码指令。汇编指令可以在第四列中看到(即 `push`、`mov`、`callq`、`pop`、`retq` 等)。这些指令作用于寄存器,寄存器是 CPU 内置的存储器位置。本示例中的寄存器是 `rbp`、`rsp`、`edi`、`eax` 等,并且每个寄存器都有特殊的含义。
现在对可执行文件(`a.out`)运行 `objdump` 并查看得到的内容。可执行文件上的 `objdump` 的输出可能很大,因此我使用 `grep` 命令将其缩小到 `main` 函数:
```
[testdir]# objdump -d a.out | grep -A 9 main\>
000000000040051d <main>:
40051d: 55 push %rbp
40051e: 48 89 e5 mov %rsp,%rbp
400521: bf d0 05 40 00 mov $0x4005d0,%edi
400526: e8 d5 fe ff ff callq 400400 <puts@plt>
40052b: b8 00 00 00 00 mov $0x0,%eax
400530: 5d pop %rbp
400531: c3 retq
```
请注意,这些指令与目标文件 `hello.o` 相似,但是其中包含一些其他信息:
* 目标文件 `hello.o` 具有以下指令:`callq e`
* 可执行文件 `a.out` 由以下指令组成,该指令带有一个地址和函数:`callq 400400 <puts@plt>`
  
上面的汇编指令正在调用 `puts` 函数。请记住,你在源代码中使用了 `printf` 函数。编译器插入了对 `puts` 库函数的调用,以将 `Hello World` 输出到屏幕。
查看 `puts` 调用上方一行的指令:
* 目标文件 `hello.o` 有个指令 `mov``mov $0x0,%edi`
* 可执行文件 `a.out``mov` 指令带有实际地址(`$0x4005d0`)而不是 `$0x0``mov $0x4005d0,%edi`
该指令将二进制文件中地址 `$0x4005d0` 处存在的内容移动到名为 `edi` 的寄存器中。
该存储位置的内容还能是别的什么吗?是的,你猜对了:它就是文本 `Hello World`。你是如何确定的呢?
`readelf` 命令可以将二进制文件(`a.out`)的任何节转储到屏幕上。以下命令要求它将 `.rodata` 节(即只读数据)转储到屏幕上:
```
[testdir]# readelf -x .rodata a.out
Hex dump of section '.rodata':
0x004005c0 01000200 00000000 00000000 00000000 ....
0x004005d0 48656c6c 6f20576f 726c6400 Hello World.
```
你可以在右侧看到文本 `Hello World`,在左侧可以看到其二进制格式的地址。它是否与你在上面的 `mov` 指令中看到的地址匹配?是的,确实匹配。
#### strip从目标文件中丢弃符号
该命令通常用于在将二进制文件交付给客户之前减小二进制文件的大小。
请记住,由于重要信息已从二进制文件中删除,因此它会阻碍调试过程。但是,这个二进制文件可以完美地执行。
`a.out` 可执行文件运行它,并注意会发生什么。首先,通过运行以下命令确保二进制文件没有被剥离(`not stripped`
```
[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, [......] not stripped
```
另外,在运行 `strip` 命令之前,请记下二进制文件中最初的字节数:
```
[testdir]# du -b a.out
8440 a.out
```
现在对该可执行文件运行 `strip` 命令,并使用 `file` 命令确保它可以正常工作:
```
[testdir]# strip a.out
[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, [......] stripped
```
剥离二进制文件后,此小程序的大小从之前的 `8440` 字节减小为 `6296` 字节。对于这样小的一个小程序都能有这么大的节省,难怪大型程序经常被剥离。
```
[testdir]# du -b a.out
6296 a.out
```
#### addr2line转换地址到文件名和行号
`addr2line` 工具只是在二进制文件中查找地址,并将其与 C 源代码程序中的行进行匹配。很酷,不是吗?
为此,编写另一个测试程序;只是这一次,确保使用 `gcc``-g` 标志进行编译,这会为二进制文件添加额外的调试信息,其中包含有助于调试的行号(来自源代码):
```
[testdir]# cat -n atest.c
1 #include <stdio.h>
2
3 int globalvar = 100;
4
5 int function1(void)
6 {
7 printf("Within function1\n");
8 return 0;
9 }
10
11 int function2(void)
12 {
13 printf("Within function2\n");
14 return 0;
15 }
16
17 int main(void)
18 {
19 function1();
20 function2();
21 printf("Within main\n");
22 return 0;
23 }
```
`-g` 标志编译并执行它。正如预期:
```
[testdir]# gcc -g atest.c
[testdir]# ./a.out
Within function1
Within function2
Within main
```
现在使用 `objdump` 来找出各函数开始的内存地址。你可以使用 `grep` 命令来过滤出所需的特定行。各函数的地址可以在下面的输出中看到:
```
[testdir]# objdump -d a.out | grep -A 2 -E 'main>:|function1>:|function2>:'
000000000040051d <function1>:
40051d: 55 push %rbp
40051e: 48 89 e5 mov %rsp,%rbp
--
0000000000400532 <function2>:
400532: 55 push %rbp
400533: 48 89 e5 mov %rsp,%rbp
--
0000000000400547 <main>:
400547: 55 push %rbp
400548: 48 89 e5 mov %rsp,%rbp
```
现在,使用 `addr2line` 工具将二进制文件中的这些地址映射到 C 源代码中对应的行:
```
[testdir]# addr2line -e a.out 40051d
/tmp/testdir/atest.c:6
[testdir]#
[testdir]# addr2line -e a.out 400532
/tmp/testdir/atest.c:12
[testdir]#
[testdir]# addr2line -e a.out 400547
/tmp/testdir/atest.c:18
```
它说,地址 `40051d` 对应于源文件 `atest.c` 中的第 `6` 行,也就是 `function1` 的起始大括号(`{`)所在的行。`function2``main` 的输出也同样匹配。
#### nm列出目标文件的符号
使用上面的 C 程序测试 `nm` 工具。使用 `gcc` 快速编译并执行它。
```
[testdir]# gcc atest.c
[testdir]# ./a.out
Within function1
Within function2
Within main
```
现在运行 `nm``grep` 获取有关函数和变量的信息:
```
[testdir]# nm a.out | grep -Ei 'function|main|globalvar'
000000000040051d T function1
0000000000400532 T function2
000000000060102c D globalvar
U __libc_start_main@@GLIBC_2.2.5
0000000000400547 T main
```
你可以看到函数被标记为 `T`,它表示 `text` 节中的符号,而变量标记为 `D`,表示初始化的 `data` 节中的符号。
想象一下,在没有源代码的二进制文件上运行此命令有多大用处?这使你可以窥视其内部,并了解其中使用了哪些函数和变量。当然,前提是二进制文件没有被剥离;如果已被剥离,它们将不包含任何符号,这时 `nm` 命令就不会很有用了,如你在此处看到的:
```
[testdir]# strip a.out
[testdir]# nm a.out | grep -Ei 'function|main|globalvar'
nm: a.out: no symbols
```
### 结论
GNU binutils 工具为有兴趣分析二进制文件的人提供了许多选项,这只是它们可以为你做的事情的一角。请阅读每种工具的手册页,以了解有关它们以及如何使用它们的更多信息。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/gnu-binutils
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn (Tools for the sysadmin)
[2]: https://en.wikipedia.org/wiki/GNU_Binutils
[3]: https://www.freebsd.org/doc/handbook/linuxemu.html
[4]: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
[5]: https://en.wikipedia.org/wiki/C_preprocessor
[6]: https://gcc.gnu.org/onlinedocs/gcc/
[7]: https://en.wikipedia.org/wiki/Position-independent_code#Position-independent_executables

View File

@@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IceWM A really cool desktop)
[#]: via: (https://fedoramagazine.org/icewm-a-really-cool-desktop/)
[#]: author: (tdawson https://fedoramagazine.org/author/tdawson/)
IceWM 一个非常酷的桌面
======
![][1]
IceWM 是一款非常轻量的桌面。它已经出现 20 多年了,它今天的目标仍然与当时相同:速度、简单性以及不妨碍用户。
为了提供一个轻量桌面,我曾经将 IceWM 加入到 Scientific Linux 中。当时它的 rpm 包只有 0.5 兆,运行时仅使用 5 兆内存。这些年来IceWM 有所增长rpm 包现在为 1 兆,运行时使用 10 兆内存。尽管在过去十年中它的大小翻了一番,但它仍然非常小。
这么小的体积,你能得到什么?确切地说,就是一个窗口管理器,没有其他东西。你有一个带有菜单或图标的工具栏来启动程序,它速度很快。最后,还有主题和选项。除了工具栏中的一些小部件,就只有这些了。
![][2]
### 安装
因为 IceWM 很小,你只需安装主软件包。
```
$ sudo dnf install icewm
```
如果要节省磁盘空间许多依赖项都是可选的。没有它们IceWM 也可以正常工作。
```
$ sudo dnf install icewm --setopt install_weak_deps=false
```
### 选项
IceWM 的默认设置就能让普通用户感到舒适,这是一件好事,因为各个选项都要通过手动编辑配置文件来修改。
我希望你不会因此放弃,因为它并没有听起来那么糟糕。它只有 8 个配置文件,大多数人只使用其中几个。主要的三个配置文件是 keys键绑定preferences总体首选项和 toolbar工具栏上显示的内容。默认配置文件位于 /usr/share/icewm/
要进行更改,请将默认配置复制到 icewm 家目录(~/.icewm编辑文件然后重新启动 IceWM。第一次做的时候可能会有点忐忑因为 “Restart Icewm” 位于 “Logout” 菜单项之下。但是当你重启 IceWM 时,你只会看到屏幕一闪,更改就生效了。任何打开的程序均不受影响,并保持原样。
### 主题
![IceWM in the NanoBlue theme][3]
如果安装 icewm-themes 包,那么会得到很多主题。与常规选项不同,你无需重启 IceWM 即可更改为新主题。通常我不会谈论主题,但是由于其他功能很少,因此我想提下。
### 工具栏
工具栏是在 IceWM 中添加了一些其他功能的地方。你可以看到在不同工作区之间切换它。工作区有时称为虚拟桌面。单击工作区进行移动。右键单击窗口任务栏条目,可以在工作区之间移动它。如果你喜欢工作区,它拥有你想要的所有功能。如果你不喜欢工作区,那么可以选择关闭它。
工具栏还有网络/内存/CPU 监控图。将鼠标悬停在图标上可获得详细信息。单击图标打开一个拥有完整监控功能的窗口。这些小图标曾经出现在每个窗口管理器上。但是,随着这些台式机的成熟,它们都将这些图标去除了。我很高兴 IceWM 留下了这个不错的功能。
### 总结
如果你想要轻量但功能强大的桌面IceWM 适合你。它有预设,因此新的 Linux 用户也可以立即使用它。它是灵活的,因此 Unix 用户可以根据自己的喜好进行调整。最重要的是IceWM 可以让你的程序不受阻碍地运行。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/icewm-a-really-cool-desktop/
作者:[tdawson][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/tdawson/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/icewm-1-816x346.png
[2]: https://fedoramagazine.org/wp-content/uploads/2019/09/icewm.2-1024x768.png
[3]: https://fedoramagazine.org/wp-content/uploads/2019/09/icewm.3-1024x771.png

View File

@@ -0,0 +1,223 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 steps to securing your Linux server)
[#]: via: (https://opensource.com/article/19/10/linux-server-security)
[#]: author: (Patrick H. Mullins https://opensource.com/users/pmullins)
安全强化你的 Linux 服务器的七个步骤
======
> 通过七个简单的步骤来加固你的 Linux 服务器。
![computer servers processing data][1]
这篇入门文章将向你介绍基本的 Linux 服务器安全知识。虽然主要针对 Debian/Ubuntu但是你可以将此处介绍的所有内容应用于其他 Linux 发行版。我也鼓励你研究这份材料,并在适用的情况下进行扩展。
### 1、更新你的服务器
保护服务器安全的第一件事是更新本地存储库,并通过应用最新的修补程序来升级操作系统和已安装的应用程序。
在 Ubuntu 和 Debian 上:
```
$ sudo apt update && sudo apt upgrade -y
```
在 Fedora、CentOS 或 RHEL
```
$ sudo dnf upgrade
```
### 2、创建一个新的特权用户
接下来,创建一个新的用户帐户。永远不要以 root 身份登录服务器,而是创建你自己的帐户(用户),赋予它 `sudo` 权限,然后使用它登录你的服务器。
首先创建一个新用户:
```
$ adduser <username>
```
通过将 `sudo` 组(`-G`)附加(`-a`)到用户的组成员身份里,从而授予新用户帐户 `sudo` 权限:
```
$ usermod -a -G sudo <username>
```
### 3、上传你的 SSH 密钥
你应该使用 SSH 密钥登录到新服务器。你可以使用 `ssh-copy-id` 命令将[预生成的 SSH 密钥][2]上传到你的新服务器:
```
$ ssh-copy-id <username>@ip_address
```
现在,你无需输入密码即可登录到新服务器。
### 4、安全强化 SSH
接下来,进行以下三个更改:
* 禁用 SSH 密码认证
* 限制 root 远程登录
* 限制对 IPv4 或 IPv6 的访问
使用你选择的文本编辑器打开 `/etc/ssh/sshd_config`,找到以下两行:
```
PasswordAuthentication yes
PermitRootLogin yes
```
改成这样:
```
PasswordAuthentication no
PermitRootLogin no
```
接下来,通过修改 `AddressFamily` 选项将 SSH 服务限制为 IPv4 或 IPv6。要将其更改为仅使用 IPv4对大多数人来说应该没问题请进行以下更改
```
AddressFamily inet
```
重新启动 SSH 服务以使更改生效。请注意,在重新启动 SSH 服务之前,最好先与服务器保持两个活动连接;万一重启出了问题,你还可以通过这些额外的连接来修复。
在 Ubuntu 上:
```
$ sudo service sshd restart
```
在 Fedora 或 CentOS 或任何使用 Systemd 的系统上:
```
$ sudo systemctl restart sshd
```
### 5、启用防火墙
现在你需要安装防火墙、启用防火墙并对其进行配置以便仅允许你指定的网络流量通过。Ubuntu 上的[简单防火墙][3]UFW是一个易用的 iptables 前端,可大大简化防火墙的配置过程。
你可以通过以下方式安装 UFW
```
$ sudo apt install ufw
```
默认情况下UFW 拒绝所有传入连接,并允许所有传出连接。这意味着服务器上的任何应用程序都可以访问互联网,但是任何尝试访问服务器的内容都无法连接。
首先,确保你可以通过启用对 SSH、HTTP 和 HTTPS 的访问来登录:
```
$ sudo ufw allow ssh
$ sudo ufw allow http
$ sudo ufw allow https
```
然后启用 UFW
```
$ sudo ufw enable
```
你可以通过以下方式查看允许和拒绝了哪些服务:
```
$ sudo ufw status
```
如果你想禁用 UFW可以通过键入以下命令来禁用
```
$ sudo ufw disable
```
你还可以(在 RHEL/CentOS 上)使用 [firewall-cmd][4],它已经安装并集成到某些发行版中。
### 6、安装 Fail2ban
[Fail2ban][5] 是一种用于检查服务器日志以查找重复或自动攻击的应用程序。如果找到任何攻击,它会更改防火墙以永久地或在指定的时间内阻止攻击者的 IP 地址。
你可以通过键入以下命令来安装 Fail2ban
```
$ sudo apt install fail2ban -y
```
然后复制随附的配置文件:
```
$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
```
重启 Fail2ban
```
$ sudo service fail2ban restart
```
这样就行了。该软件将不断检查日志文件以查找攻击。一段时间后,该应用程序将建立相当多的封禁的 IP 地址列表。你可以通过以下方法查询 SSH 服务的当前状态来查看此列表:
```
$ sudo fail2ban-client status ssh
```
### 7、移除无用的网络服务
几乎所有 Linux 服务器操作系统都默认启用了一些面向网络的服务。你可能希望保留其中大多数,然而,有一些你或许希望删除。你可以使用 `ss` 命令查看所有正在运行的网络服务LCTT 译注:应该是只保留少部分,而所有可以确认无关的、无用的服务都应该停用或删除。)
```
$ sudo ss -atpu
```
`ss` 的输出将取决于你的操作系统。这是一个可能的示例。它显示 SSH`sshd`)和 Ngnix`nginx`)服务正在侦听网络并准备连接:
```
tcp LISTEN 0 128 *:http *:* users:(("nginx",pid=22563,fd=7))
tcp LISTEN 0 128 *:ssh *:* users:(("sshd",pid=685,fd=3))
```
删除未使用的服务的方式因你的操作系统及其使用的程序包管理器而异。
要删除 Debian / Ubuntu 上未使用的服务:
```
$ sudo apt purge <service_name>
```
要在 Red Hat/CentOS 上删除未使用的服务:
```
$ sudo yum remove <service_name>
```
再次运行 `ss -atpu`,以确认这些未使用的服务没有安装和运行。
### 总结
本教程介绍了加固 Linux 服务器所需的最起码的措施。可以并且应该根据服务器的使用方式启用其他安全层。这些安全层可以包括诸如各个应用程序配置、入侵检测软件IDS以及启用访问控制例如双因素身份验证之类的东西。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/linux-server-security
作者:[Patrick H. Mullins][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pmullins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8 (computer servers processing data)
[2]: https://opensource.com/article/19/4/ssh-keys-seahorse
[3]: https://launchpad.net/ufw
[4]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[5]: https://www.fail2ban.org/wiki/index.php/Main_Page