Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-10-14 22:30:12 +08:00
commit 77ff56df23
21 changed files with 1872 additions and 391 deletions

View File

@ -1,17 +1,18 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12715-1.html)
[#]: subject: (Using Bash traps in your scripts)
[#]: via: (https://opensource.com/article/20/6/bash-trap)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
在脚本中使用 Bash 信号捕获
======
> 无论你的脚本是否成功运行,<ruby>信号捕获<rt>trap</rt></ruby>都能让它平稳结束。
![Hands programming][1]
![](https://img.linux.net.cn/data/attachment/album/202010/13/182135f2nktcrnryncisg8.jpg)
Shell 脚本的启动并不难被检测到,但 Shell 脚本的终止检测却并不容易,因为我们无法确定脚本会按照预期地正常结束,还是由于意外的错误导致失败。当脚本执行失败时,将正在处理的内容记录下来是非常有用的做法,但有时候这样做起来并不方便。而 [Bash][2] 中 `trap` 命令的存在正是为了解决这个问题,它可以捕获到脚本的终止信号,并以某种预设的方式作出应对。
@ -19,22 +20,21 @@ Shell 脚本的启动并不难被检测到,但 Shell 脚本的终止检测却
如果出现了一个错误,可能导致发生一连串错误。下面示例脚本中,首先在 `/tmp` 中创建一个临时目录,这样可以在临时目录中执行解包、文件处理等操作,然后再以另一种压缩格式进行打包:
```
#!/usr/bin/env bash
CWD=`pwd`
TMP=${TMP:-/tmp/tmpdir}
## create tmp dir
mkdir $TMP
mkdir "${TMP}"
## extract files to tmp
tar xf "${1}" --directory $TMP
tar xf "${1}" --directory "${TMP}"
## move to tmpdir and run commands
pushd $TMP
pushd "${TMP}"
for IMG in *.jpg; do
  mogrify -verbose -flip -flop $IMG
mogrify -verbose -flip -flop "${IMG}"
done
tar --create --file "${1%.*}".tar *.jpg
@ -42,22 +42,21 @@ tar --create --file "${1%.*}".tar *.jpg
popd
## bundle with bzip2
bzip2 --compress $TMP/"${1%.*}".tar \
      --stdout &gt; "${1%.*}".tbz
bzip2 --compress "${TMP}"/"${1%.*}".tar \
--stdout > "${1%.*}".tbz
## clean up
/usr/bin/rm -r /tmp/tmpdir
```
一般情况下,这个脚本都可以按照预期执行。但如果归档文件中的文件是 PNG 文件而不是期望的 JPEG 文件,脚本就会在中途失败,这时候另一个问题就出现了:最后一步删除临时目录的操作没有被正常执行。如果你手动把临时目录删掉,倒是不会造成什么影响,但是如果没有手动把临时目录删掉,在下一次执行这个脚本的时候,就会在一个残留着很多临时文件的临时目录里执行了
一般情况下,这个脚本都可以按照预期执行。但如果归档文件中的文件是 PNG 文件而不是期望的 JPEG 文件,脚本就会在中途失败,这时候另一个问题就出现了:最后一步删除临时目录的操作没有被正常执行。如果你手动把临时目录删掉,倒是不会造成什么影响,但是如果没有手动把临时目录删掉,在下一次执行这个脚本的时候,它必须处理一个现有的临时目录,里面充满了不可预知的剩余文件
其中一个解决方案是在脚本开头增加一个预防性删除逻辑用来处理这种情况。但这种做法显得有些暴力,而我们更应该从结构上解决这个问题。使用 `trap` 是一个优雅的方法。
### 使用 `trap` 捕获信号
### 使用 trap 捕获信号
我们可以通过 `trap` 捕捉程序运行时的信号。如果你使用过 `kill` 或者 `killall` 命令,那你就已经使用过名为 `SIGTERM` 的信号了。除此以外,还可以执行 `trap -l` 或 `trap --list` 命令列出其它更多的信号:
```
$ trap --list
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
@ -85,40 +84,38 @@ $ trap --list
例如,下面的这行语句可以捕获到在进程运行时用户按下 `Ctrl + C` 组合键发出的 `SIGINT` 信号:
```
`trap "{ echo 'Terminated with Ctrl+C'; }" SIGINT`
trap "{ echo 'Terminated with Ctrl+C'; }" SIGINT
```
因此,上文中脚本的缺陷可以通过使用 `trap` 捕获 `SIGINT`、`SIGTERM`、进程错误退出、进程正常退出等信号,并正确处理临时目录的方式来修复:
```
#!/usr/bin/env bash
CWD=`pwd`
TMP=${TMP:-/tmp/tmpdir}
trap \
 "{ /usr/bin/rm -r $TMP ; exit 255; }" \
 SIGINT SIGTERM ERR EXIT
"{ /usr/bin/rm -r "${TMP}" ; exit 255; }" \
SIGINT SIGTERM ERR EXIT
## create tmp dir
mkdir $TMP
tar xf "${1}" --directory $TMP
mkdir "${TMP}"
tar xf "${1}" --directory "${TMP}"
## move to tmp and run commands
pushd $TMP
pushd "${TMP}"
for IMG in *.jpg; do
  mogrify -verbose -flip -flop $IMG
mogrify -verbose -flip -flop "${IMG}"
done
tar --create --file "${1%.*}".tar *.jpgh
tar --create --file "${1%.*}".tar *.jpg
## move back to origin
popd
## zip tar
bzip2 --compress $TMP/"${1%.*}".tar \
      --stdout &gt; "${1%.*}".tbz
--stdout > "${1%.*}".tbz
```
对于更复杂的功能,还可以用 [Bash 函数][3]来简化 `trap` 语句。
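下面是一个简单的示意(其中的 `cleanup` 函数名和注释仅为示例,并非原文内容),演示如何把清理逻辑封装进一个函数,再交给 `trap`:
```
#!/usr/bin/env bash
TMP=${TMP:-/tmp/tmpdir}

cleanup() {
  ## 脚本退出前删除临时目录(如果存在)
  [ -d "${TMP}" ] && /usr/bin/rm -r "${TMP}"
}
trap cleanup SIGINT SIGTERM ERR EXIT

mkdir "${TMP}"
## ……其余处理逻辑……
```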
@ -134,7 +131,7 @@ via: https://opensource.com/article/20/6/bash-trap
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,122 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (KDE Plasma 5.20 is Here With Exciting Improvements)
[#]: via: (https://itsfoss.com/kde-plasma-5-20/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
KDE Plasma 5.20 is Here With Exciting Improvements
======
KDE Plasma 5.20 is finally here, and there is a lot to be excited about, including the new wallpaper **Shell** by Lucas Andrade.
It is worth noting that this is not an LTS release, unlike [KDE Plasma 5.18][1], and it will be maintained for the next 4 months or so. So, if you want the latest and greatest, you can surely go ahead and give it a try.
In this article, I shall mention the key highlights of KDE Plasma 5.20 from [my experience with it on KDE Neon][2] (Testing Edition).
![][3]
### Plasma 5.20 Features
If you like to see things in action, we made a feature overview video for you.
[Subscribe to our YouTube channel for more Linux videos][4]
#### Icon-only Taskbar
![][5]
You are probably already used to a taskbar that shows the title of each window along with its icon. However, that takes up a lot of space in the taskbar, which looks cluttered when you have multiple applications/windows open.
Not just that: if you launch several windows of the same application, it will group them together and let you cycle through them from a single icon on the taskbar.
So, with this update, you get an icon-only taskbar by default, which makes it look a lot cleaner and lets you see more in the taskbar at a glance.
#### Digital Clock Applet with Date
![][6]
If you've used any KDE-powered distro, you must have noticed that the digital clock applet (in the bottom-right corner) displays the time but not the date by default.
It's always a good choice to have the date as well as the time (at least I prefer that). So, with KDE Plasma 5.20, the applet shows both the time and the date.
#### Get Notified When Your System Almost Runs Out of Space
I know this is not a big addition, but a necessary one. Even if your home directory is on a different partition, you will be notified when you're about to run out of space.
#### Set the Charge Limit Below 100%
You are in for a treat if you are a laptop user. To help you preserve battery health, you can now set a charge limit below 100%. I couldn't show it to you because I use a desktop.
#### Workspace Improvements
Working with workspaces on the KDE desktop was already an impressive experience; now, with the latest update, several tweaks have been made to take the user experience up a notch.
To start with, the system tray has been overhauled with a grid-like layout replacing the list view.
The default shortcut for moving/resizing windows has been reassigned from Alt+drag to Meta+drag to avoid conflicts with other productivity apps that use Alt+drag keybindings themselves. You can also use keybindings like Meta + up/left/down arrow to corner-tile windows.
![][7]
It is also easier to list all your disks using the old “**Device Notifier**” applet, which has been renamed to “**Disks & Devices**”.
If that wasn't enough, you will also find improvements to [KRunner][8], the essential application launcher and search utility. It will now remember the search text history, and you can also have it centered on the screen instead of at the top.
#### System Settings Improvements
The look and feel of the System Settings is the same, but it is more useful now. You will notice a new “**Highlight changed settings**” option, which shows you the recently modified settings compared to their default values.
That way, you can spot any changes you made accidentally, or that someone else made.
![][9]
In addition to that, you also get to utilize S.M.A.R.T monitoring and disk failure notifications.
#### Wayland Support Improvements
If you prefer to use a Wayland session, you will be happy to know that it now supports [Klipper][10] and you can also middle-click to paste (on KDE apps only for the time being).
The much-needed screencasting support has also been added.
#### Other Improvements
Of course, you will notice some subtle visual improvements and adjustments to the look and feel. You may notice a smooth transition effect when changing the brightness. Similarly, when changing the brightness or volume, the on-screen display that pops up is now less obtrusive.
Options like controlling the scroll speed of mouse/touchpad have been added to give you finer controls.
You can find the detailed list of changes in its [official changelog][11], if you're curious.
### Wrapping Up
The changes are definitely impressive and should make the KDE experience better than ever before.
If you're running KDE Neon, you should get the update soon. But if you are on Kubuntu, you will have to try the 20.10 ISO to get your hands on Plasma 5.20.
What do you like the most among the list of changes? Have you tried it yet? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/kde-plasma-5-20/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/kde-plasma-5-18-release/
[2]: https://itsfoss.com/kde-neon-review/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/kde-plasma-5-20-feat.png?resize=800%2C394&ssl=1
[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/kde-plasma-5-20-taskbar.jpg?resize=472%2C290&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/kde-plasma-5-20-clock.jpg?resize=372%2C224&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/kde-plasma-5-20-notify.jpg?resize=800%2C692&ssl=1
[8]: https://docs.kde.org/trunk5/en/kde-workspace/plasma-desktop/krunner.html
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/plasma-disks-smart.png?resize=800%2C539&ssl=1
[10]: https://userbase.kde.org/Klipper
[11]: https://kde.org/announcements/plasma-5.20.0

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (beamrolling)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,119 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (tanslating)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (GNOME 3.38 is Here With Customizable App Grid, Performance Improvements and Tons of Other Changes)
[#]: via: (https://itsfoss.com/gnome-3-38-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
GNOME 3.38 is Here With Customizable App Grid, Performance Improvements and Tons of Other Changes
======
[GNOME 3.36][1] brought some much-needed improvements along with a major performance boost. Now, after 6 months, we're finally here with GNOME 3.38 and a big set of changes.
### GNOME 3.38 Key Features
Here are the main highlights of GNOME 3.38, codenamed “Orbis”:
[Subscribe to our YouTube channel for more Linux videos][2]
#### Customizable App Menu
The app grid or the app menu will now be customizable as part of a big change in GNOME 3.38.
Now, you can create folders by dragging application icons over each other, move icons to/from folders, and put them right back in the app grid. You can also just reposition the icons however you want in the app grid.
![][3]
Also, these changes are some of the basic building blocks for design changes planned for future updates, so it'll be exciting to see what we can expect.
#### Calendar Menu Updates
![][4]
The notification area is a lot cleaner with the recent GNOME updates but now with GNOME 3.38, you can finally access calendar events right below the calendar area to make things convenient and easy to access.
It's not a major visual overhaul, but there are a few improvements to it.
#### Parental Controls Improvement
You will observe a parental control service as a part of GNOME 3.38. It supports integration with various components of the desktop, the shell, the settings, and others to help you limit what a user can access.
#### The Restart Button
Some subtle improvements lead to massive changes, and this is exactly one of those. It's always annoying to click the “Power Off” / “Shut Down” button first and then hit the “Restart” button to reboot the system.
So, with GNOME 3.38, you will finally notice a “Restart” entry as a separate button, which will save you a click and give you peace of mind.
#### Screen Recording Improvements
[GNOME Shell's built-in screen recorder][5] is now a separate system service, which should make recording the screen a smoother experience.
Window screencasting also received several improvements, along with some bug fixes.
#### GNOME apps Updates
The GNOME Calculator has received a lot of bug fixes. In addition to that, you will also find some major changes to the [Epiphany GNOME browser][6].
GNOME Boxes now lets you pick the OS from a list of operating systems and GNOME Maps was updated with some UI changes as well.
Not just limited to these, you will also find subtle updates and fixes to GNOME control center, Contacts, Photos, Nautilus, and some other packages.
#### Performance & multi-monitor support improvements
There is a bunch of under-the-hood work to improve GNOME 3.38 across the board. For instance, there were some serious fixes to [Mutter][7], which now lets two monitors run at different refresh rates.
![][8]
Previously, if you had one monitor with a 60 Hz refresh rate and another with 144 Hz, the slower one would limit the faster one. But, with the improvements in GNOME 3.38, it handles multiple monitors without limiting any of them.
Also, some changes reported by [Phoronix][9] pointed out around 10% lower render time in some cases. So, that's definitely a great performance optimization.
#### Miscellaneous other changes
* Battery percentage indicator
* Restart option in the power menu
* New welcome tour
* Fingerprint login
* QR code scanning for sharing Wi-Fi hotspot
* Privacy and other improvements to GNOME Browser
* GNOME Maps is now responsive and changes its size based on the screen
* Revised icons
You can find a detailed list of changes in the official [changelog][10].
### Wrapping Up
GNOME 3.38 is indeed an impressive update that improves the GNOME experience. Even though performance was greatly improved with GNOME 3.36, further optimization is a very good thing for GNOME 3.38.
GNOME 3.38 will be available in Ubuntu 20.10 and [Fedora 33][11]. Arch and Manjaro users should be getting it soon.
I think there are plenty of changes in the right direction. What do you think?
--------------------------------------------------------------------------------
via: https://itsfoss.com/gnome-3-38-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-3-36-release/
[2]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/gnome-app-arranger.jpg?resize=799%2C450&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/gnome-3-38-calendar-menu.png?resize=800%2C721&ssl=1
[5]: https://itsfoss.com/gnome-screen-recorder/
[6]: https://en.wikipedia.org/wiki/GNOME_Web
[7]: https://en.wikipedia.org/wiki/Mutter_(software)
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/gnome-multi-monitor-refresh-rate.jpg?resize=800%2C369&ssl=1
[9]: https://www.phoronix.com/scan.php?page=news_item&px=GNOME-3.38-Last-Min-Mutter
[10]: https://help.gnome.org/misc/release-notes/3.38
[11]: https://itsfoss.com/fedora-33/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,112 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Integrate your calendar with Ansible to avoid schedule conflicts)
[#]: via: (https://opensource.com/article/20/10/calendar-ansible)
[#]: author: (Nicolas Leiva https://opensource.com/users/nicolas-leiva)
Integrate your calendar with Ansible to avoid schedule conflicts
======
Make sure your automation workflow's schedule doesn't conflict with
something else by integrating a calendar app into Ansible.
![Calendar close up snapshot][1]
Is "anytime" a good time to execute your automation workflow? The answer is probably no, for different reasons.
If you want to avoid simultaneous changes to minimize the impact on critical business processes and reduce the risk of unintended service disruptions, then no one else should be attempting to make changes at the same time your automation is running.
In some scenarios, there could be an ongoing scheduled maintenance window. Or maybe there is a big event coming up, a critical business time, or a holiday—or maybe you prefer not to make changes on a Friday night.
![Street scene with a large calendar and people walking][2]
([Curtis MacNewton][3], [CC BY-ND 2.0][4])
Whatever the reason, you want to signal this information to your automation platform and prevent the execution of periodic or ad-hoc tasks during specific time slots. In change management jargon, I am talking about specifying blackout windows when change activity should not occur.
### Calendar integration in Ansible
How can you accomplish this in [Ansible][5]? While it has no calendar function per se, Ansible's extensibility will allow it to integrate with any calendar application that has an API.
The goal is this: Before you execute any automation or change activity, you execute a `pre-task` that checks whether something is already scheduled in the calendar (now or soon enough) and confirms you are not in the middle of a blocked timeslot.
Imagine you have a fictitious module named `calendar`, and it can connect to a remote calendar, like Google Calendar, to determine if the time you specify has otherwise been marked as busy. You could write a playbook that looks like this:
```
- name: Check if timeslot is taken
  calendar:
    time: "{{ ansible_date_time.iso8601 }}"
  register: output
```
Ansible facts will give `ansible_date_time`, which is passed to the `calendar` module to verify the time availability so that it can register the response (`output`) to use in subsequent tasks.
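If you want to see what `ansible_date_time` contains on your own machine before wiring it into the module, a quick ad-hoc call to the `setup` module (shown here just as a sketch) prints it:
```
$ ansible localhost -m setup -a "filter=ansible_date_time"
```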
If your calendar looks like this:
![Google Calendar screenshot][6]
(Nicolas Leiva, [CC BY-SA 4.0][7])
Then the output of this task would highlight the fact this timeslot is taken (`busy: true`):
```
ok: [localhost] => {
   "output": {
       "busy": true,
       "changed": false,
       "failed": false,
       "msg": "The timeslot 2020-09-02T17:53:43Z is busy: true"
   }
}
```
### Prevent tasks from running
Next, [Ansible Conditionals][8] will help prevent the execution of any further tasks. As a simple example, you could use a `when` statement on the next task to enforce that it runs only when the field `busy` in the previous output is not `true`:
```
tasks:
  - shell: echo "Run this only when not busy!"
    when: not output.busy
```
### Conclusion
In a [previous article][9], I said Ansible is a framework to wire things together, interconnecting different building blocks to orchestrate an end-to-end automation workflow.
This article looked at how playbooks can integrate or talk to a calendar application to check availability. However, I am just scratching the surface! For example, your tasks could also block a timeslot in the calendar… the sky is the limit.
In my next article, I will dig into how the `calendar` module is built and how other programming languages can be used with Ansible. Stay tuned if you are a [Go][10] fan like me!
* * *
_This originally appeared on Medium as [Ansible and Google Calendar integration for change management][11] under a CC BY-SA 4.0 license and is republished with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/calendar-ansible
作者:[Nicolas Leiva][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/nicolas-leiva
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
[2]: https://opensource.com/sites/default/files/uploads/street-calendar.jpg (Street scene with a large calendar and people walking)
[3]: https://www.flickr.com/photos/7841127@N02/4217116202
[4]: https://creativecommons.org/licenses/by-nd/2.0/
[5]: https://docs.ansible.com/ansible/latest/index.html
[6]: https://opensource.com/sites/default/files/uploads/googlecalendarexample.png (Google Calendar screenshot)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html
[9]: https://medium.com/swlh/python-and-ansible-to-automate-a-network-security-workflow-28b9a44660c6
[10]: https://golang.org/
[11]: https://medium.com/swlh/ansible-and-google-calendar-integration-for-change-management-7c00553b3d5a

View File

@ -1,131 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (gxlct008)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install Deepin Desktop on Ubuntu 20.04 LTS)
[#]: via: (https://itsfoss.com/install-deepin-ubuntu/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
How to Install Deepin Desktop on Ubuntu 20.04 LTS
======
_**This tutorial shows you the proper steps to install the Deepin desktop environment on Ubuntu. Removal steps are also mentioned.**_
Deepin is undoubtedly a [beautiful Linux distribution][1]. The recently released [Deepin version 20][2] makes it even more beautiful.
Now, [Deepin Linux][3] is based on [Debian][4], and the default repository mirrors are too slow. If you would rather stay with Ubuntu, you have the Deepin variant of Ubuntu in the form of the [UbuntuDDE Linux distribution][5]. It is not one of the [official Ubuntu flavors][6] yet.
[Reinstalling a new distribution][7] is a bit of an annoyance, for you would lose your data and you'll have to reinstall your applications on the newly installed UbuntuDDE.
A simpler option is to install the Deepin desktop environment on your existing Ubuntu system. After all, you can easily install more than one [desktop environment][8] on one system.
Fret not, it is easy to do, and you can also revert the changes if you do not like it. Let me show you how to do that.
### Installing Deepin Desktop on Ubuntu 20.04
![][9]
The UbuntuDDE team has created a PPA for their distribution and you can use the same PPA to install Deepin desktop on Ubuntu 20.04. Keep in mind that this PPA is only available for Ubuntu 20.04. Please read about [using PPA in Ubuntu][10].
No Deepin version 20
The Deepin desktop you'll be installing using the PPA here is NOT the new Deepin desktop version 20 yet. It will probably be available after the Ubuntu 20.10 release, but we cannot promise anything.
Here are the steps that you need to follow:
**Step 1**: You need to first add the [official PPA by Ubuntu DDE Remix team][11] by typing this on the terminal:
```
sudo add-apt-repository ppa:ubuntudde-dev/stable
```
**Step 2**: Once you have added the repository, proceed with installing the Deepin desktop.
```
sudo apt install ubuntudde-dde
```
![][12]
Now, the installation will start and after a while, you will be asked to choose the display manager.
![][13]
You need to select “**lightdm**” if you want the Deepin-themed lock screen. If not, you can set it to “**gdm3**”.
In case you don't see this option, you can get it by typing the following command and then selecting your preferred display manager:
```
sudo dpkg-reconfigure lightdm
```
**Step 3:** Once done, you have to log out and log in again by choosing the “**Deepin**” session or just reboot the system.
![][14]
And, that is it. Enjoy the Deepin experience on your Ubuntu 20.04 LTS system in no time!
![][15]
### Removing Deepin desktop from Ubuntu 20.04
In case you don't like the experience, or it is buggy for some reason, you can remove it by following the steps below.
**Step 1:** If you've set “lightdm” as your display manager, you need to set the display manager back to “gdm3” before uninstalling Deepin. To do that, type in the following command:
```
sudo dpkg-reconfigure lightdm
```
![Select gdm3 on this screen][13]
And, select **gdm3** to proceed.
Once you're done with that, you can simply enter the following command to remove Deepin completely:
```
sudo apt remove startdde ubuntudde-dde
```
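If you added the PPA only for this experiment, you can optionally drop it as well; this extra step is my suggestion rather than part of the original instructions:
```
sudo add-apt-repository --remove ppa:ubuntudde-dev/stable
sudo apt update
```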
You can just reboot to get back to your original Ubuntu desktop. In case the icons become unresponsive, you just open the terminal (**CTRL + ALT + T**) and type in:
```
reboot
```
**Wrapping Up**
It is good to have different [choices of desktop environments][16]. If you really like Deepin desktop interface, this could be a way to experience Deepin on Ubuntu.
If you have questions or if you face any issues, please let me know in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-deepin-ubuntu/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/beautiful-linux-distributions/
[2]: https://itsfoss.com/deepin-20-review/
[3]: https://www.deepin.org/en/
[4]: https://www.debian.org/
[5]: https://itsfoss.com/ubuntudde/
[6]: https://itsfoss.com/which-ubuntu-install/
[7]: https://itsfoss.com/reinstall-ubuntu/
[8]: https://itsfoss.com/what-is-desktop-environment/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/ubuntu-20-with-deepin.jpg?resize=800%2C386&ssl=1
[10]: https://itsfoss.com/ppa-guide/
[11]: https://launchpad.net/~ubuntudde-dev/+archive/ubuntu/stable
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/deepin-desktop-install.png?resize=800%2C534&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/deepin-display-manager.jpg?resize=800%2C521&ssl=1
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/deepin-session-ubuntu.jpg?resize=800%2C414&ssl=1
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/ubuntu-20-with-deepin-1.png?resize=800%2C589&ssl=1
[16]: https://itsfoss.com/best-linux-desktop-environments/

View File

@ -0,0 +1,172 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Build a Kubernetes Minecraft server with Ansible's Helm modules)
[#]: via: (https://opensource.com/article/20/10/kubernetes-minecraft-ansible)
[#]: author: (Jeff Geerling https://opensource.com/users/geerlingguy)
Build a Kubernetes Minecraft server with Ansible's Helm modules
======
Deploy a Minecraft server into a Kubernetes cluster with Ansible's new
collections.
![Ship captain sailing the Kubernetes seas][1]
One of the best outcomes of Ansible's [move towards content collections][2] is that it spreads the thousands of modules in [Ansible][3]'s "core" repository into many more independent repositories. This means movement on issues and modules that had long been delayed (often due to the [sheer volume of issues and pull requests][4] in the repo) can progress more rapidly.
Obviously, not all modules will get the same love and appreciation as others—that's the way open source works: more popular things flourish, as others may languish a bit—but one bright example of the positive impact has been the [Kubernetes][5] collection's ability to incorporate some long-awaited [Helm][6] modules.
Thanks especially to the work of [LucasBoisserie][7], three new Helm modules were merged into the Kubernetes collection:
* helm
* helm_info
* helm_repository
Ansible has long had a [helm module][8], but it was fairly broken for a long time, only worked with older versions of Helm, and was slated for deprecation in Ansible 2.14. That version of the module will still work the same in the regular community distribution of Ansible, as it's now been moved to the [community.general][9] collection.
But if you want to use these new modules to automate your Helm deployments using the Kubernetes container orchestration system, you can do it with the [community.kubernetes][10] collection.
### What is Helm?
Helm says it is "the best way to find, share, and use software built for Kubernetes."
There are currently dozens of ways to deploy software into Kubernetes and [OpenShift][11] clusters (you can even do it using Ansible natively with the [k8s module][12]), but Helm is often the easiest onramp to Kubernetes deployments, especially when you're starting out on your Kubernetes journey.
The way Helm works is that people maintain "charts," which are templates describing "how to deploy application XYZ" into Kubernetes. Charts can have "values" that override the default settings for a deployment's chart.
There are thousands of [charts on Helm Hub][13] you can use to install popular software. If your software is not included, you can build and host your own Helm chart repositories.
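If you have never driven Helm from the command line, the manual workflow that the playbook will automate later in this article looks roughly like this; the repository URL, chart name, and value below are placeholders, not details taken from the article:
```
# Add a chart repository and refresh the local index
helm repo add example-repo https://charts.example.com
helm repo update

# Install a chart, overriding one of its default values
helm install my-release example-repo/some-chart --set someKey=someValue
```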
### What is Minecraft?
For a certain generation (or their parents), this question doesn't need an answer: [Minecraft][14] is the [best-selling video game of all time][15], and it appeals to an extremely wide audience because there are so many different ways you can play it.
I remember spending an hour here or there during my post-college years tending to a farm that I built in my little virtual Minecraft world. Minecraft can now run on almost any computing device with a screen, and networked play has become very popular. To support this, the Minecraft team maintains a [Minecraft server][16] application you can run to play networked games with your friends.
### Where does Ansible fit in?
I like to think of Ansible as the "glue" that holds automation together. I previously wrote about [how Ansible is useful in a cloud-native environment][17], so I won't rehash why I use Ansible to manage my Kubernetes infrastructure.
In this article, I'll show you how to write a short Ansible playbook to manage the setup of Helm's Minecraft chart in a cluster. In a real-world infrastructure, this playbook would be one small part of a set of plays that:
* Build or configure a Kubernetes cluster
* Deploy monitoring tools into the cluster
* Deploy applications into the cluster
Before you can write the playbook, you have to install Ansible's official [Kubernetes collection][10]. You can do this either by requiring it in a **requirements.yml** file (which could be used by Ansible Tower to install the collection automatically) or by manually installing it:
```
ansible-galaxy collection install community.kubernetes
```
Once you have the collection, it's time to write the playbook. To make it easy for you to view the code or download the file, I've posted my **[minecraft.yml][18]** playbook as a Gist on GitHub.
The playbook uses many of the Kubernetes collection's modules:
1. The `k8s` module creates a namespace, `minecraft`.
2. The `helm_repository` module adds the `itzg` Helm repository, which contains the Minecraft Helm chart.
3. The `helm` module deploys the chart and creates the Minecraft server instance.
4. The `k8s_info` module retrieves the NodePort where Minecraft is running so that you can connect to it from Minecraft.
The playbook assumes you have a running Kubernetes or OpenShift cluster and a kubeconfig file that points to that cluster already. If not, create a Minikube cluster on your workstation:
1. Make sure you have [Minikube][19] installed.
2. Run `minikube start`, and wait for the cluster to be created.
Make sure you have [Ansible][20] and [Helm][21] installed, then run the playbook:
```
ansible-playbook minecraft.yml
```
After a few minutes, the Minecraft server will generate a spawn area and be ready for connections! The playbook should provide the Minecraft NodePort at the end of its output (e.g., Minecraft NodePort: 32393).
Get the IP address of your Minikube cluster with `minikube ip`, add the NodePort to it (in my case, 192.168.64.19:32393), then open up Minecraft and connect to it:
1. Click **Multiplayer**.
2. Click **Direct Connection**.
3. Enter the server address (the Minikube IP and Minecraft NodePort).
4. Click **Join Server**.
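If the connection does not work right away, it is worth first confirming that the NodePort is reachable at all; a quick check (using the example port from above, and assuming `nc` is installed) could be:
```
nc -zv "$(minikube ip)" 32393
```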
And voila! You should be able to play around in the little virtual Minecraft world that's running on your very own Kubernetes cluster.
![Minecraft gameplay][22]
(Jeff Geerling, [CC BY-SA 4.0][23])
View the server logs with:
```
kubectl logs -f -n minecraft -l app=minecraft-minecraft
```
In the logs, you can see that I was successful in finding many ways to die inside my little Minecraft world!
![Minecraft server logs][24]
(Jeff Geerling, [CC BY-SA 4.0][23])
### Take a step beyond
There are dozens of ways to deploy applications like a Minecraft server into a Kubernetes cluster. Luckily for us, Ansible already supports most of those options through its Kubernetes collection! And if you want to take a step beyond simple deployments and chart updates, you can use Ansible to build a [Kubernetes operator][25] with the Operator SDK—in fact, someone already made a [community operator][26] built with Ansible that runs a Minecraft server!
I was inspired to write this after using Ansible to manage a seven-node Kubernetes cluster built with Raspberry Pis. You can learn more about that in the [Turing Pi Cluster][27] GitHub repository.
* * *
If you want to learn more about Ansible, make sure to register for [AnsibleFest][28], a virtual experience on October 13-14.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/kubernetes-minecraft-ansible
作者:[Jeff Geerling][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/geerlingguy
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://github.com/ansible-collections/overview
[3]: https://www.ansible.com/
[4]: https://emeraldreverie.org/2020/03/02/collections-the-backlog-view/
[5]: https://kubernetes.io/
[6]: https://helm.sh/
[7]: https://github.com/LucasBoisserie
[8]: https://docs.ansible.com/ansible/2.9/modules/helm_module.html
[9]: https://github.com/ansible-collections/community.general/blob/master/plugins/modules/cloud/misc/helm.py
[10]: https://github.com/ansible-collections/community.kubernetes
[11]: https://www.openshift.com/
[12]: https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html#ansible-collections-community-kubernetes-k8s-module
[13]: https://hub.helm.sh/
[14]: https://www.minecraft.net/
[15]: https://en.wikipedia.org/wiki/List_of_best-selling_video_games#List
[16]: https://www.minecraft.net/en-us/download/server/
[17]: https://www.ansible.com/blog/how-useful-is-ansible-in-a-cloud-native-kubernetes-environment
[18]: https://gist.github.com/geerlingguy/2f4b0c06b4b696c8983b82dda655adf3
[19]: https://kubernetes.io/docs/tasks/tools/install-minikube/
[20]: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html
[21]: https://helm.sh/docs/intro/install/
[22]: https://opensource.com/sites/default/files/uploads/minecraft.png (Minecraft gameplay)
[23]: https://creativecommons.org/licenses/by-sa/4.0/
[24]: https://opensource.com/sites/default/files/uploads/serverlogs.png (Minecraft server logs)
[25]: https://www.redhat.com/en/topics/containers/what-is-a-kubernetes-operator
[26]: https://github.com/fabianvf/game-server-operator
[27]: https://github.com/geerlingguy/turing-pi-cluster
[28]: https://www.ansible.com/ansiblefest

View File

@ -0,0 +1,186 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create an Ansible module for integrating your Google Calendar)
[#]: via: (https://opensource.com/article/20/10/ansible-module-go)
[#]: author: (Nicolas Leiva https://opensource.com/users/nicolas-leiva)
Create an Ansible module for integrating your Google Calendar
======
Learn how to write an Ansible module in Go to integrate Google Calendar
into your automation workflow.
![Business woman on laptop sitting in front of window][1]
In a [previous article][2], I explored how [Ansible][3] can integrate with Google Calendar for change management, but I didn't get into the details of the [Ansible module][4] that was built for this purpose. In this article, I will cover the nuts and bolts of it.
While most [Ansible modules][5] are written in [Python][6] (see [this example][7]), that's not the only option you have. You can use other programming languages if you prefer. And if you like [Go][8], this post is for you!
![Gopher illustration][9]
([Maria Letta's Free Gophers Pack][10], [Free Gophers License v1.0][11], modified with permission)
If you are new to Go, check out these [pointers to get started][12].
## Ansible and Go
There are at least four different ways that you can run a Go program from Ansible:
1. Install Go and run your Go code with the `go run` command from Ansible.
2. Cross-compile your Go code for different platforms before execution. Then call the proper binary from Ansible, based on the facts you collect from the host.
3. Run your Go code or compiled binary from a container with the `containers.podman` [collection][13]. Something along the lines of:
```
- name: Run Go container
  podman_container:
    name: go_test_container
    image: golang
    command: go version
    rm: true
    log_options: "path={{ log_file }}"
```
4. Create an [RPM][14] package of your Go code with something like [NFPM][15], and install it in the system of the target host. You can then call it with the Ansible modules [shell][16] or [command][17].
Running an RPM package or container is not Go-specific (cases 3 and 4), so I will focus on the first two options.
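For option 2, the cross-compilation step itself is plain Go tooling. A minimal sketch follows; the output file names are assumptions that simply mirror the naming used later in this article:
```
# Build the module binary for the platforms Ansible might target
GOOS=linux GOARCH=amd64 go build -o library/calendar_linux .
GOOS=darwin GOARCH=amd64 go build -o library/calendar_darwin .
GOOS=windows GOARCH=amd64 go build -o library/calendar_windows .
```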
## Google Calendar API
You will need to interact with the [Google Calendar API][18], which provides code examples for different programming languages. The one for Go will be the base for your Ansible module.
The tricky part is [enabling the Calendar API][19] and downloading the credentials you generate in the [Google API Console][20] (`Credentials` > `+ CREATE CREDENTIALS` > `OAuth client ID` > `Desktop App`).
![Downloading credentials from Google API Console][21]
(Nicolas Leiva, [CC BY-SA 4.0][22])
The arrow shows where to download your OAuth 2.0 client credentials (JSON file) once you create them in [API Credentials][23].
## Calling the module from Ansible
The `calendar` module takes the `time` to validate as an argument. The example below provides the current time. You can typically get this from [Ansible facts][24] (`ansible_date_time`). The JSON output of the module is registered in a variable named `output` to be used in a subsequent task:
```
- name: Check if timeslot is taken
  calendar:
    time: "{{ ansible_date_time.iso8601 }}"
  register: output
```
You might wonder where the `calendar` module code lives and how Ansible executes it. Please bear with me for a moment; I'll get to this after I cover other pieces of the puzzle.
## Employ the time logic
With the Calendar API nuances out of the way, you can proceed to interact with the API and build a Go function to capture the module logic. The `time` is taken from the input arguments—in the playbook above—as the initial time (`min`) of the time window to validate (I arbitrarily chose a one-hour duration):
```
func isItBusy(min string) (bool, error) {
        ...
        // max -> min.Add(1 * time.Hour)
        max, err := maxTime(min)
        // ...
        srv, err := calendar.New(client)
        // ...
        freebusyRequest := calendar.FreeBusyRequest{
                TimeMin: min,
                TimeMax: max,
                Items:   []*calendar.FreeBusyRequestItem{&cal},
        }
        // ...
        freebusyRequestResponse, err := freebusyRequestCall.Do()
        // ...
        if len(freebusyRequestResponse.Calendars[name].Busy) == 0 {
                return false, nil
        }
        return true, nil
}
```
It [sends a `FreeBusyRequest`][25] to Google Calendar with the time window's initial and finish time (`min` and `max`). It also creates a calendar [client][26] (`srv`) to authenticate the account correctly using the JSON file with the OAuth 2.0 client credentials. In return, you get a list of events during this time window.
If you get zero events, the function returns `busy=false`. However, if there is at least one event during this time window, it means `busy=true`. You can check out the [full code][27] in my GitHub repository.
## Process the input and create a response
Now, how does the Go code read the input arguments from Ansible and, in turn, generate a response that Ansible can process? The answer depends on whether you are running the [Go CLI][28] (command-line interface) or just executing a pre-compiled Go program binary (i.e., options 1 and 2 above).
Both options have their pros and cons. If you use the Go CLI, you can pass the arguments the way you prefer. However, to make it consistent with how it works for binaries you run from Ansible, both alternatives will read a JSON file in the examples presented here.
### Reading the arguments
As shown in the Go code snippet below, an input file is processed, and Ansible provides a path to it when it calls a binary.
The content of the file is unmarshaled into an instance (`moduleArgs`) of a previously defined struct (`ModuleArgs`). This is how you tell the Go code which data you expect to receive. This method gives you access to the `time` specified in the playbook via `moduleArgs.Time`, which is then passed to the time logic function (`isItBusy`) for processing:
```
// ModuleArgs are the module inputs
type ModuleArgs struct {
        Time string
}
func main() {
        ...
        argsFile := os.Args[1]
        text, err := ioutil.ReadFile(argsFile)
        ...
        var moduleArgs ModuleArgs
        err = json.Unmarshal(text, &moduleArgs)
        ...
        busy, err := isItBusy(moduleArgs.Time)
        ...
}
```
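Because the module is just a binary that reads a JSON file whose path is passed as its first argument, you can exercise it by hand without Ansible. A rough local test (the file path and binary name are assumptions) might look like:
```
# Write an arguments file like the one Ansible would generate
cat > /tmp/args.json <<'EOF'
{"time": "2020-09-02T17:53:43Z"}
EOF

# Run the compiled module against it and inspect the JSON it prints
./library/calendar_linux /tmp/args.json
```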
### Generating a response
The values to return are assigned to an instance of a `Response` object. The response Ansible expects includes a couple of Boolean flags (`Changed` and `Failed`). You can add any other field you need; in this case, a `Busy` boolean value is carried to signal the result of the time logic function.
The response is marshaled into a message that you print out and Ansible can read:
```
// Response are the values returned from the module
type Response struct {
        Msg     string `json:"msg"`
        Busy    bool   `json:"busy"`
        Changed bool   `json:"changed"`
        Failed  bool   `json:"failed"`
}
func returnResponse(r Response) {
  ...
        response, err = json.Marshal(r)
        ...
        fmt.Println(string(response))
        ...
}
```
You can check out the [full code][29] on GitHub.
## Execute a binary or Go code on the fly?
One of the cool things about Go is that you can cross-compile a Go program to run on different target operating systems and architectures. The binary files you compile can be executed in the target host without installing Go or any dependency.
This flexibility plays nicely with Ansible, which provides the target host details (`ansible_system` and `ansible_architecture`) via Ansible facts. In this example, the target architecture is fixed when compiling (`x86_64`), but binaries for macOS, Linux, and Windows are generated (via `make compile`). This produces the three files that are included in the [`library` folder][30] of the `go_role` role with the form of: `calendar_$system`:
```
 tree roles/go_role/
roles/go_role/
├── library
│   ├── calendar_darwin
│   ├── calendar_linux
│   ├── calendar_windows
│   └── go_run
└── tasks
    ├── Darwin.yml
    ├── Go.yml
    ├── Linux.yml
    ├── main.yml
    └── Win32NT.yml
```
The [`go_role` role][31] that packages the `calendar`

View File

@ -0,0 +1,55 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My top 7 keywords in Rust)
[#]: via: (https://opensource.com/article/20/10/keywords-rust)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
My top 7 keywords in Rust
======
Learn a handful of useful keywords from the Rust standard library.
![Rustacean t-shirt][1]
I've been using [Rust][2] for a few months now, writing rather more of it than I expected—though quite a lot of that has been thrown away as I've learned, improved what I'm writing, and taken some more complex tasks beyond what I originally intended.
I still love it and thought it might be good to talk about some of the important keywords that come up again and again in Rust. I'll provide my personal summary of what they do, why you need to think about how you use them, and anything else that's useful, particularly for people who are new to Rust or coming from another language (such as Java; see my article [_Why I'm enjoying learning Rust as a Java programmer_][3]).
Without further ado, let's get going. A good place for further information is always the official Rust documentation—you'll probably want to start with the [std library][4].
1. **const**  You get to declare constants with const, and you should. This isn't rocket science, but do declare with const, and if you're going to use constants across different modules, then do the right thing and create a `lib.rs` file (the Rust default) into which you can put all of them with a nicely named module. I've had clashes of const variable names (and values!) across different files in different modules simply because I was too lazy to do anything other than cut and paste across files when I could have saved myself lots of work simply by creating a shared module.
2. **let**  You don't _always_ need to declare a variable with a let statement, but your code will be clearer when you do. What's more, always add the type if you can. Rust will do its very best to infer what it should be, but it may not always be able to do so at compile time (in which case [Cargo][5], which drives the compiler, will tell you), or it may even not do what you expect. In the latter case, it's always simpler for Cargo to complain that the function you're assigning from (for instance) doesn't match the declaration than for Rust to try to help you do the wrong thing, only for you to have to spend ages debugging elsewhere.
3. **match**  match was new to me, and I love it. It's not dissimilar to "switch" in other languages but is used extensively in Rust. It makes for legible code, and Cargo will have a good go at warning you if you do something foolish (such as miss possible cases). My general rule of thumb, where I'm managing different options or doing branching, is to ask whether I can use match. If I can, I will.
4. **mut**  When declaring a variable, if it's going to change after its initialisation, then you need to declare it mutable. A common mistake is to declare something mutable when it _isn't_ changed—but the compiler will warn you about that. If you get a warning from Cargo that a mutable variable isn't changed when you think it _is_, then you may wish to check the scope of the variable and make sure that you're using the right version.
5. **return**  I actually very rarely use return, which is for returning a value from a function, because it's usually simpler and clearer to read if you just provide the value (or the function providing the return value) at the end of the function as the last line. Warning: you _will_ forget to omit the semicolon at the end of this line on many occasions; if you do, the compiler won't be happy.
6. **unsafe**  This does what it says on the tin: If you want to do things where Rust can't guarantee memory safety, then you're going to need to use this keyword. I have absolutely no intention of declaring any of my Rust code unsafe now or at any point in the future; one of the reasons Rust is so friendly is because it stops this sort of hackery. If you really need to do this, think again, think yet again, and then redesign. Unless you're a seriously low-level systems programmer, _avoid_ unsafe.
7. **use**  When you want to use an item, e.g., struct, variable, function, etc. from another crate, then you need to declare it at the beginning of the block where you'll be using it. Another common mistake is to do this but fail to add the crate (preferably with a minimum version number) to the `Cargo.toml` file.
This isn't the most complicated article I've ever written, I know, but it's the sort of article I would have appreciated when I was starting to learn Rust. I plan to create similar articles on key functions and other Rust must-knows: let me know if you have any requests!
* * *
_This article was originally published on [Alice, Eve, and Bob][6] and is reprinted with the author's permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/keywords-rust
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rustacean-tshirt.jpg?itok=u7LBmyaj (Rustacean t-shirt)
[2]: https://www.rust-lang.org/
[3]: https://opensource.com/article/20/5/rust-java
[4]: https://doc.rust-lang.org/std/
[5]: https://doc.rust-lang.org/cargo/
[6]: https://aliceevebob.com/2020/09/01/rust-my-top-7-keywords/

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Install MariaDB or MySQL on Linux)
[#]: via: (https://opensource.com/article/20/10/mariadb-mysql-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Install MariaDB or MySQL on Linux
======
Get started using an open source SQL database on your Linux system.
![Person standing in front of a giant computer screen with numbers, data][1]
Both [MariaDB][2] and [MySQL][3] are open source databases that use SQL and share the same original codebase. MariaDB is a drop-in replacement for MySQL, so much so that you use the same command (`mysql`) to interact with MySQL and MariaDB databases. This article, therefore, applies equally to MariaDB and MySQL.
### Install MariaDB
You can install MariaDB using your Linux distribution's package manager. On most distributions, MariaDB is split into a server package and a client package. The server package provides the database "engine," the part of MariaDB that runs (usually on a physical server) in the background, listening for data input or requests for data output. The client package provides the command `mysql`, which you can use to communicate with the server.
On RHEL, Fedora, CentOS, or similar:
```
$ sudo dnf install mariadb mariadb-server
```
On Debian, Ubuntu, Elementary, or similar:
```
$ sudo apt install mariadb-client mariadb-server
```
Other systems may package MariaDB differently, so you may need to search your software repository to learn how your distribution's maintainers provide it.
### Start MariaDB
Because MariaDB is designed to function, in part, as a database server, it can run on one computer and be administered from another. As long as you have access to the computer running it, you can use the `mysql` command to administer the database. I ran MariaDB on my local computer when writing this article, but it's just as likely that you'll interact with a MariaDB database hosted on a remote system.
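For example, to administer a MariaDB server running on another machine, you point the client at it with the `-h` option; the hostname below is just a placeholder:
```
$ mysql -u root -p -h db.example.com
```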
Before starting MariaDB, you must create an initial database. You should define the user you want MariaDB to use when initializing its file structure. By default, MariaDB uses the current user, but you probably want it to use a dedicated user account. Your package manager probably configured a system user and group for you. Use `grep` to find out whether there's a `mysql` group:
```
$ grep mysql /etc/group
mysql:x:27:
```
You can also look in `/etc/passwd` for a dedicated user, but usually, where there's a group, there's also a user. If there isn't a dedicated `mysql` user and group, look through `/etc/group` for an obvious alternative (such as `mariadb`). Failing that, read your distribution's documentation to learn how MariaDB runs.
Assuming your install uses `mysql`, initialize the database environment:
```
$ sudo mysql_install_db --user=mysql
Installing MariaDB/MySQL system tables in '/var/lib/mysql'...
OK
[...]
```
The result of this step reveals the next tasks you must perform to configure MariaDB:
```
PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
To do so, start the server, then issue the following commands:
'/usr/bin/mysqladmin' -u root password 'new-password'
'/usr/bin/mysqladmin' -u root -h $(hostname) password 'new-password'
Alternatively you can run:
'/usr/bin/mysql_secure_installation'
which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.
```
Start MariaDB using your distribution's init system:
```
$ sudo systemctl start mariadb
```
To enable the MariaDB server to start upon boot:
```
$ sudo systemctl enable --now mariadb
```
Now that you have a MariaDB server to communicate with, set a password for it:
```
mysqladmin -u root password 'myreallysecurepassphrase'
mysqladmin -u root -h $(hostname) password 'myreallysecurepassphrase'
```
Finally, if you intend to use this installation on a production server, run the `mysql_secure_installation` command before going live.
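That hardening step is interactive and takes no arguments, so running it is as simple as:
```
$ mysql_secure_installation
```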
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/mariadb-mysql-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://mariadb.org/
[3]: https://www.mysql.com/

View File

@ -0,0 +1,304 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My first day using Ansible)
[#]: via: (https://opensource.com/article/20/10/first-day-ansible)
[#]: author: (David Both https://opensource.com/users/dboth)
My first day using Ansible
======
A sysadmin shares information and advice about putting Ansible into
real-world use configuring computers on his network.
![People work on a computer server with devices][1]
Getting a new computer, whether physical or virtual, up and running is time-consuming and requires a good deal of work—whether it's your first time or the 50th. For many years, I have used a series of scripts and RPMs that I created to install the packages I need and to perform many bits of configuration for my favorite tools. This approach has worked well and simplified my work, as well as reduced the amount of time I spend typing commands.
I am always looking for better ways of doing things, and, for several years now, I have been hearing and reading about [Ansible][2], which is a powerful tool for automating system configuration and management. Ansible allows a sysadmin to define a specific state for each host in one or more playbooks and then performs whatever tasks are necessary to bring the host to that state. This includes installing or deleting various resources such as RPM or Apt packages, configuration and other files, users, groups, and much more.
I have delayed learning how to use it for a long time because—stuff. Until recently, when I ran into a problem that I thought Ansible could easily solve.
This article is not a complete how-to for getting started with Ansible; rather, it is intended to provide insight into some of the issues that I encountered and some information that I found only in some very obscure places. Much of the information I found in various online discussions and Q&A groups about Ansible was incorrect. Errors ranged from information that was really old with no indication of its date or provenance to information that was just wrong.
The information in this article is known to work—although there might be other ways of accomplishing the same things—and it works with Ansible 2.9.13 and [Python][3] 3.8.5.
### My problem
All of my best learning experiences start with a problem I need to solve, and this was no exception.
I have been working on a little project to modify the configuration files for the [Midnight Commander][4] (mc) file manager and pushing them out to various systems on my network for testing. Although I have a script to automate this, it still requires a bit of fussing with a command-line loop to provide the names of the systems to which I want to push the new code. The large number of changes I was making to the configuration files made it necessary for me to push the new ones frequently. But, just when I thought I had my new configuration just right, I would find a problem and need to do another push after making the fix.
This environment made it difficult to keep track of which systems had the new files and which did not. I also have a couple of hosts that need to be treated differently. And my little bit of knowledge about Ansible suggested that it would probably be able to do all or most of what I need.
### Getting started
I had read a number of good articles and books about Ansible, but never in an "I have to get this working NOW!" kind of situation. And now was—well, NOW!
In rereading these documents, I discovered that they mostly talk about how to install Ansible from GitHub using—wait for it—Ansible. That is cool, but I really just wanted to get started, so I installed it on my Fedora workstation using DNF and the version in the Fedora repository. Easy.
But then I started looking for the file locations and trying to determine which configuration files I needed to modify, where to keep my playbooks, what a playbook even looks like, and what it does. I had lots of (so far) unanswered questions running around in my head.
So, without further descriptions of my tribulations, here are the things I discovered and that got me going.
### Configuration
Ansible's configuration files are kept in `/etc/ansible`. Makes sense, right, since `/etc` is where system programs are supposed to keep their configuration files. The two files I needed to work with are `ansible.cfg` and `hosts`.
#### ansible.cfg
After getting started with some of the exercises I found in the documents and online, I began receiving warning messages about deprecating certain older Python files. So, I set `deprecation_warnings` to `false` in `ansible.cfg` and those angry red warning messages stopped:
```
deprecation_warnings = False
```
Those warnings are important, so I will revisit them later and figure out what I need to do. But for now, they no longer clutter the screen and obfuscate the errors I actually need to be concerned about.
#### The hosts file
Not the same as the `/etc/hosts` file, the `hosts` file is also known as the inventory file, and it lists the hosts on your network. This file allows grouping hosts together in related sets, such as servers, workstations, and pretty much any designation you need. This file contains its own help and plenty of examples, so I won't go into boring detail here. However, there are some things to know.
Hosts can be listed outside of any groups, but groups can be helpful in identifying hosts with one or more common characteristics. Groups use the INI format, so a server group looks like this:
```
[servers]
server1
server2
...etc.
```
A hostname must be present in this file for Ansible to work on it. Even though some subcommands allow you to specify a hostname, the command will fail unless the hostname is in the `hosts` file. A host can also be listed in multiple groups. So `server1` might also be a member of the `[webservers]` group in addition to the `[servers]` group, and a member of the `[ubuntu]` group to differentiate it from Fedora servers.
Ansible is smart. If the `all` argument is used for the hostname, Ansible scans the file and performs the defined tasks on all hosts listed in the file. Ansible will only attempt to work on each host once, no matter how many groups it appears in. This also means that there does not need to be a defined "all" group because Ansible can determine all hostnames in the file and create its own list of unique hostnames.
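As a quick illustration (this ad-hoc command is a common first test rather than something required by the text), you can exercise the `all` pattern with the built-in `ping` module to confirm that Ansible can reach every host in the inventory:
```
# ansible all -m ping
```
Each unique host should respond once with a `pong`, no matter how many groups it appears in.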
Another little thing to look out for is multiple entries for a single host. I use `CNAME` records in my DNS zone file to create aliased names that point to the [A records][5] for some of my hosts. That way, I can refer to a host as `host1` or `h1` or `myhost`. If you use multiple hostnames for the same host in the `hosts` file, Ansible will try to perform its tasks on all of those hostnames; it has no way of knowing that they refer to the same host. The good news is that this does not affect the overall result; it just takes a bit more time as Ansible works on the secondary hostnames and determines that all of the operations have already been performed.
### Ansible facts
Most of the materials I have read on Ansible talk about [Ansible facts][6], which "are data related to your remote systems, including operating systems, IP addresses, attached filesystems, and more." This information is available in other ways, such as `lshw`, `dmidecode`, the `/proc` filesystem, and more, but Ansible generates a JSON file containing this information. Each time Ansible runs, it generates this facts data. There is an amazing amount of information in this data stream, all of which are in `<"variable-name": "value">` pairs. All of these variables are available for use within an Ansible playbook. The best way to understand the huge amount of information available is to display it yourself:
```
# ansible -m setup <hostname> | less
```
See what I mean? Everything you ever wanted to know about your host hardware and Linux distribution is there and usable in a playbook. I have not yet gotten to the point where I need to use those variables, but I am certain I will in the next couple of days.
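If the full dump is more than you need, the `setup` module also takes a `filter` argument that narrows the output to matching variable names (a sketch; substitute a real hostname from your `hosts` file):
```
# ansible <hostname> -m setup -a "filter=ansible_distribution*"
```
This prints only the distribution-related facts, which is handy when you are hunting for the exact variable name to use in a playbook.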
### Modules
The `ansible` command above uses the `-m` option to specify the "setup" module. Ansible has many modules already built in, so you do not need to install anything extra to use them. There are also many downloadable modules that can be installed, but the built-ins do everything I have needed for my current projects so far.
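Modules can also be invoked ad hoc from the command line, without a playbook. For example (a hypothetical one-liner that assumes the `[servers]` group defined earlier), the built-in `dnf` module can ensure a package is installed across an entire group:
```
# ansible servers -m dnf -a "name=mc state=present"
```
The `-a` option passes the module's arguments as `key=value` pairs, the same keys you would use in a playbook task.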
### Playbooks
Playbooks can be located almost anywhere. Since I need to run my playbooks as root, I placed mine in `/root/ansible`. As long as this directory is the present working directory (PWD) when I run Ansible, it can find my playbook. Ansible also has a runtime option to specify a different playbook and location.
Playbooks can contain comments, although I have seen very few articles or books that mention this. As a sysadmin who believes in documenting everything, I find using comments can be very helpful. This is not so much about saying the same things in the comments as I do in the task name; rather, it is about identifying the purpose of groups of tasks and ensuring that I record my reasons for doing certain things in a certain way or order. This can help with debugging problems later when I may have forgotten my original thinking.
Playbooks are simply collections of tasks that define the desired state of a host. A hostname or inventory group is specified at the beginning of the playbook and defines the hosts on which Ansible will run the playbook.
Here is a sample of my playbook:
```
################################################################################
# This Ansible playbook updates Midnight commander configuration files.        #
################################################################################
- name: Update midnight commander configuration files
  hosts: all
 
  tasks:
  - name: ensure midnight commander is the latest version
    dnf:
      name: mc
      state: latest
  - name: create ~/.config/mc directory for root
    file:
      path: /root/.config/mc
      state: directory
      mode: 0755
      owner: root
      group: root
  - name: create ~/.config/mc directory for dboth
    file:
      path: /home/dboth/.config/mc
      state: directory
      mode: 0755
      owner: dboth
      group: dboth
  - name: copy latest personal skin
    copy:
      src: /root/ansible/UpdateMC/files/MidnightCommander/DavidsGoTar.ini
      dest: /usr/share/mc/skins/DavidsGoTar.ini
      mode: 0644
      owner: root
      group: root
  - name: copy latest mc ini file
    copy:
      src: /root/ansible/UpdateMC/files/MidnightCommander/ini
      dest: /root/.config/mc/ini
      mode: 0644
      owner: root
      group: root
  - name: copy latest mc panels.ini file
    copy:
      src: /root/ansible/UpdateMC/files/MidnightCommander/panels.ini
      dest: /root/.config/mc/panels.ini
      mode: 0644
      owner: root
      group: root
<SNIP>
```
The playbook starts with its own name and the hosts it will act on—in this case, all of the hosts listed in my `hosts` file. The `tasks` section lists the specific tasks required to bring the host into compliance with the desired state. This playbook starts with a task in which Ansible's built-in DNF updates Midnight Commander if it is not the most recent release. The next tasks ensure that the required directories are created if they do not exist, and the remainder of the tasks copy the files to the proper locations. These `file` and `copy` tasks can also set the ownership and file modes for the directories and files.
The details of my playbook are beyond the scope of this article, but I used a bit of a brute-force attack on the problem. There are other methods for determining which users need to have the files updated rather than using a task for each file for each user. My next objective is to simplify this playbook to use some of the more advanced techniques.
Running a playbook is easy; just use the `ansible-playbook` command. The .yml extension stands for YAML. I have seen several meanings for that; it originally stood for "Yet Another Markup Language," though the official expansion these days is the recursive "YAML Ain't Markup Language," which explains the claims that it is not one.
This command runs the playbook I created for updating my Midnight Commander files:
```
# ansible-playbook -f 10 UpdateMC.yml
```
The `-f` option specifies that Ansible should fork up to 10 threads in order to perform operations in parallel. This can greatly speed overall task completion, especially when working on multiple hosts.
### Output
The output from a running playbook lists each task and its result. An `ok` means the machine state managed by the task already matches the state defined in the task stanza. Because the defined state is already true, Ansible did not need to perform the actions in that stanza.
The response `changed` indicates that Ansible performed the task specified in the stanza in order to bring it to the desired state. In this case, the machine state defined in the stanza was not true, so the actions defined were performed to make it true. On a color-capable terminal, the `TASK` lines are shown in color. On my host with my amber-on-black terminal color configuration, the `TASK` lines are shown in amber, `changed` lines are in brown, and `ok` lines are shown in green. Error lines are displayed in red.
The following output is from the playbook I will eventually use to perform post-install configuration on new hosts:
```
PLAY [Post-installation updates, package installation, and configuration]
TASK [Gathering Facts]
ok: [testvm2]
TASK [Ensure we have connectivity]
ok: [testvm2]
TASK [Install all current updates]
changed: [testvm2]
TASK [Install a few command line tools]
changed: [testvm2]
TASK [copy latest personal Midnight Commander skin to /usr/share]
changed: [testvm2]
TASK [create ~/.config/mc directory for root]
changed: [testvm2]
TASK [Copy the most current Midnight Commander configuration files to /root/.config/mc]
changed: [testvm2] => (item=/root/ansible/PostInstallMain/files/MidnightCommander/DavidsGoTar.ini)
changed: [testvm2] => (item=/root/ansible/PostInstallMain/files/MidnightCommander/ini)
changed: [testvm2] => (item=/root/ansible/PostInstallMain/files/MidnightCommander/panels.ini)
TASK [create ~/.config/mc directory in /etc/skel]
changed: [testvm2]
<SNIP>
```
### The cow
If you have the [cowsay][7] program installed on your computer, you will notice that the `TASK` names appear in the cow's speech bubble:
```
 ____________________________________
< TASK [Ensure we have connectivity] >
 ------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```
If you do not have this fun feature and want it, install the cowsay package using your distribution's package manager. If you have this and don't want it, disable it by setting `nocows = 1` in the `/etc/ansible/ansible.cfg` file.
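The setting belongs in the `[defaults]` section of `ansible.cfg`, next to options like the `deprecation_warnings` line shown earlier (a minimal sketch of the relevant lines):
```
[defaults]
nocows = 1
```
Exporting `ANSIBLE_NOCOWS=1` in your shell should have the same effect if you prefer not to edit the file.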
I like the cow and think it is fun, but it reduces the amount of screen space that can be used to display messages. So I disabled it after it started getting in the way.
### Files
As with my Midnight Commander task, it is frequently necessary to install and maintain files of various types. There are as many "best practices" for creating a directory tree for storing files used in playbooks as there are sysadmins—or at least as many as the number of authors writing books and articles about Ansible.
I chose a simple structure that makes sense to me:
```
/root/ansible
└── UpdateMC
    ├── files
    │   └── MidnightCommander
    │       ├── DavidsGoTar.ini
    │       ├── ini
    │       └── panels.ini
    └── UpdateMC.yml
```
You should use whatever structure works for you. Just be aware that some other sysadmin will likely need to work with whatever you set up, so there should be some level of logic to it. When I was using RPM and Bash scripts to perform my post-install tasks, my file repository was a bit scattered and definitely not structured with any logic. As I work through creating playbooks for many of my administrative tasks, I will introduce a much more logical structure for managing my files.
### Multiple playbook runs
It is safe to run a playbook as many times as needed or desired. Each task will only be executed if the state does not match the one specified in the task stanza. This makes it easy to recover from errors encountered during previous playbook runs. The playbook stops running when it encounters an error.
While testing my first playbook, I made many mistakes and corrected them. Each additional run of the playbook—assuming my fix is a good one—skips the tasks whose state already matches the specified one and executes those that did not. When my fix works, the previously failed task completes successfully, and any tasks after that one in my playbook also execute—until it encounters another error.
This also makes testing easy. I can add new tasks and, when I run the playbook, only those new tasks are performed because they are the only ones that do not match the test host's desired state.
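Two `ansible-playbook` options are worth mentioning for this kind of iterative testing (standard flags, shown here as a sketch rather than part of my original workflow): `--limit` restricts the run to a subset of the inventory, and `--check` performs a dry run that reports what would change without changing anything:
```
# ansible-playbook --limit testvm2 --check UpdateMC.yml
```
Keep in mind that not every module fully supports check mode, so treat the dry-run output as a preview rather than a guarantee.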
### Some thoughts
Some tasks are not appropriate for Ansible because there are better methods for achieving a specific machine state. The use case that comes to mind is that of returning a VM to an initial state so that it can be used as many times as necessary to perform testing beginning with that known state. It is much easier to get the VM into the desired state and then to take a snapshot of the then-current machine state. Reverting to that snapshot is usually going to be easier and much faster than using Ansible to return the host to that desired state. This is something I do several times a day when researching articles or testing new code.
After completing my playbook for updating Midnight Commander, I started a new playbook that I will use to perform post-installation tasks on newly installed Fedora hosts. I have already made good progress, and the playbook is a bit more sophisticated and less brute-force than my first one.
On my very first day using Ansible, I created a playbook that solves a problem. I also started a second playbook that will solve the very big problem of post-install configuration. And I have learned a lot.
Although I really like using [Bash][8] scripts for many of my administrative tasks, I am already finding that Ansible can do everything I want—and in a way that can maintain the system in the state I want. After only a single day of use, I am an Ansible fan.
### Resources
The most complete and useful document I have found is the [User Guide][9] on the Ansible website. This document is intended as a reference and not a how-to or getting-started document.
Opensource.com has published many [articles about Ansible][10] over the years, and I have found most of them very helpful for my needs. The Enable Sysadmin website also has a lot of [Ansible articles][11] that I have found to be helpful. You can learn even more by checking out [AnsibleFest][12] happening this week (October 13-14, 2020). The event is completely virtual and free to register.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/first-day-ansible
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices)
[2]: https://www.ansible.com/
[3]: https://www.python.org/
[4]: https://midnight-commander.org/
[5]: https://en.wikipedia.org/wiki/List_of_DNS_record_types
[6]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_vars_facts.html#ansible-facts
[7]: https://en.wikipedia.org/wiki/Cowsay
[8]: https://opensource.com/downloads/bash-cheat-sheet
[9]: https://docs.ansible.com/ansible/latest/user_guide/index.html
[10]: https://opensource.com/tags/ansible
[11]: https://www.redhat.com/sysadmin/topics/ansible
[12]: https://www.ansible.com/ansiblefest

View File

@@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What measured boot and trusted boot means for Linux)
[#]: via: (https://opensource.com/article/20/10/measured-trusted-boot)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
What measured boot and trusted boot means for Linux
======
When a trusted boot process is performed, the process not only measures
each value but also performs a check against a known (and expected!)
good value at the same time.
![Brain on a computer screen][1]
Sometimes I'm looking around for a subject to write about, and realise that there's one that I assume that I've covered, but, on searching, discover that I haven't. One of those topics is measured boot and trusted boot—sometimes misleadingly referred to as "secure boot." There are specific procedures that use these terms with capital letters (e.g., Secure Boot), which I'm going to try to avoid discussing in this article. I'm more interested in the generic processes—and a major potential downfall—than in trying to go into the ins and outs of specifics. What follows is a (heavily edited) excerpt from my forthcoming book on trust in computing and the cloud for [Wiley][2].
In order to understand what measured boot and trusted boot aim to achieve, look at the Linux virtualisation stack: the components you run if you want to use virtual machines (VMs) on a Linux machine. This description is arguably over-simplified, but (as I noted above) I'm not interested in the specifics but in what I'm trying to achieve. I'll concentrate on the bottom four layers (at a rather simple level of abstraction): CPU/management engine; BIOS/EFI; firmware; and hypervisor, but I'll also consider a layer _just_ above the CPU/management engine, which interposes a Trusted Platform Module (TPM) and some instructions for how to perform one of the two processes (_measured boot_ and _trusted boot_). Once the system starts to boot, the TPM is triggered and starts its work. Alternative roots of trust, such as hardware security modules (HSMs), might also be used, but I will use TPMs, the most common example in this context, in my example.
In both cases (trusted boot and the measured boot), the basic flow starts with the TPM performing a measurement of the BIOS/EFI layer. This measurement involves checking the binary instructions to be carried out by this layer and creating a cryptographic hash of the binary image. The hash that's produced is then stored in one of several Platform Configuration Register (PCR) "slots" in the TPM. These can be thought of as pieces of memory that can be read later on - either by the TPM for its purposes or by entities external to the TPM - but that cannot be changed once they have been written. These pieces of memory are integrity protected from the time of their initially being written. This provides assurances that once a value is written to a PCR by the TPM, it can be considered constant for the lifetime of the system until power off or reboot.
After measuring the BIOS/EFI layer, the next layer (firmware) is measured. In this case, the resulting hash is combined with the previous hash (which was stored in the PCR slot) and then also stored in a PCR slot. The process continues until all the layers involved in the process have been measured and the hashes' results have been stored. There are (sometimes quite complex) processes to set up the original TPM values (I've skipped some of the more low-level steps in the process for simplicity) and to allow (hopefully authorised) changes to the layers for upgrading or security patching, for example. This "measured boot" process allows for entities to query the TPM after the process has completed and to check whether the values in the PCR slots correspond to the expected values, pre-calculated with "known good" versions of the various layers—that is, pre-checked versions whose provenance and integrity have already been established.
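As a concrete illustration (this is outside the scope of the excerpt, and it assumes a machine with a TPM 2.0 device and the `tpm2-tools` package installed), the accumulated PCR values can be inspected from a running Linux system:
```
$ sudo tpm2_pcrread sha256:0,1,2,3,4,5,6,7
```
The values printed are the running totals of the measurements described above.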
Various protocols exist to allow parties _external_ to the system to check the values (e.g., via a network connection) that the TPM attests to be correct: the process of receiving and checking such values from an external system is known as "remote attestation."
This process—measured boot—allows you to find out whether the underpinnings of your system—the lowest layers—are what you think they are. But what if they're not? Measured boot (unsurprisingly, given the name) measures but doesn't perform any other actions.
The alternative, "trusted boot," goes a step further. When a trusted boot process is performed, the process not only measures each value but also performs a check against a known (and expected!) good value at the same time. If the check fails, then the process will halt, and the booting of the system will fail. This may sound like a rather extreme approach to take on a system, but sometimes it is absolutely the right one. Where the system under consideration may have been compromised—which is one likely inference you can make from the failure of a trusted boot process—it is better for it to not be available at all than to be running based on flawed expectations.
This is all very well if I am the owner of the system being measured, have checked all of the various components being measured (and the measurements), and am happy that what's being booted is what I want.[1][3] But what if I'm using a system on the cloud, for instance, or any system owned and managed by someone else? In that case, I'm trusting the cloud provider (or owner/manager) with two things:
1. Doing all the measuring correctly and reporting correct results to me
2. Building something I should trust in the first place
This is the problem with the nomenclature "trusted boot" and, even worse, "secure boot." Both imply that an absolute, objective property of a system has been established—it is "trusted" or "secure"—when this is clearly not the case. Obviously, it would be unfair to expect the designers of such processes to name them after the failure states—"untrusted boot" or "insecure boot"—but, unless I can be very certain that I trust the owner of the system to do step two entirely correctly (and in my best interests as the user of the system, rather than theirs as the owner), then I can make no stronger assertions.
There is an enormous temptation to take a system that has gone through a trusted boot process and label it a "trusted system" when _the very best_ assertion you can make is that the particular layers measured in the measured and/or trusted boot process have been asserted to be those the process expects to be present. Such a process says nothing at all about the fitness of the layers to provide assurances of behaviour nor about the correctness (or fitness to provide assurances of behaviour) of any subsequent layers on top of those.
It's important to note that designers of TPMs are quite clear what is being asserted and that assertions about trust should be made carefully and sparingly. Unluckily, however, the complexities of systems, the general low level of understanding of trust, and the complexities of context and transitive trust make it very easy for systems designers and implementors to do the wrong thing and assume that any system that has successfully performed a trusted boot process can be considered "trusted." It is also extremely important to remember that TPMs, as hardware roots of trust, offer one of the best mechanisms available for establishing a chain of trust in systems that you may be designing or implementing, and I plan to write an article about them soon.
* * *
1. Although this turns out to be _much_ harder to do than you might expect!
* * *
_This article was originally published on [Alice, Eve, and Bob][4] and is reprinted with the author's permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/measured-trusted-boot
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_computer_solve_fix_tool.png?itok=okq8joti (Brain on a computer screen)
[2]: https://wiley.com/
[3]: tmp.HkXCfJwlpF#1
[4]: https://aliceevebob.com/2020/09/08/measured-and-trusted-boot/

View File

@@ -0,0 +1,192 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (2 Ways to Download Files From Linux Terminal)
[#]: via: (https://itsfoss.com/download-files-from-linux-terminal/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
2 Ways to Download Files From Linux Terminal
======
If you are stuck at the Linux terminal, say on a server, how do you download a file from the terminal?
There is no download command in Linux, but there are a couple of Linux commands for downloading files.
In this terminal trick, you'll learn two ways to download files using the command line in Linux.
I am using Ubuntu here but, apart from the installation, the rest of the commands are equally valid for all other Linux distributions.
### Download files from Linux terminal using wget command
![][1]
[wget][2] is perhaps the most used command line download manager for Linux and UNIX-like systems. You can download a single file, multiple files, entire directory or even an entire website using wget.
wget is non-interactive and can easily work in the background. This means you can easily use it in scripts or even build tools like [uGet download manager][3].
Let's see how to use wget to download files from the terminal.
#### Installing wget
Most Linux distributions come with wget preinstalled. It is also available in the repository of most distributions, and you can easily install it using your distribution's package manager.
On Ubuntu and Debian-based distributions, you can use the [apt package manager][4] command:
```
sudo apt install wget
```
#### Download a file or webpage using wget
You just need to provide the URL of the file or webpage. It will download the file with its original name in the directory you are in.
```
wget URL
```
![][5]
To download multiple files, you'll have to save their URLs in a text file and provide that text file as input to wget like this:
```
wget -i download_files.txt
```
#### Download files with a different name using wget
You'll notice that a webpage is almost always saved as index.html with wget. It is a good idea to provide a custom name for the downloaded file.
You can use the -O (uppercase O) option to provide the output filename while downloading.
```
wget -O filename URL
```
![][6]
#### Download a folder using wget
Suppose you are browsing an FTP server and you need to download an entire directory; you can use the recursive option:
```
wget -r ftp://server-address.com/directory
```
#### Download an entire website using wget
Yes, you can totally do that. You can mirror an entire website with wget. By downloading an entire website I mean the entire public facing website structure.
While you can use the mirror option -m directly, it is a good idea to add:
  * --convert-links: links are converted so that internal links point to the downloaded resources instead of the web
  * --page-requisites: downloads additional things like style sheets so that the pages look better offline
```
wget -m --convert-links --page-requisites website_address
```
![][7]
#### Bonus Tip: Resume incomplete downloads
If you aborted the download by pressing Ctrl+C for some reason, you can resume the previous download with the -c option:
```
wget -c URL
```
### Download files from Linux command line using curl
Like wget, [curl][8] is also one of the most popular commands to download files in the Linux terminal. There are so many ways to [use curl extensively][9], but I'll focus only on simple downloading here.
#### Installing curl
Though curl doesn't come preinstalled, it is available in the official repositories of most distributions. You can use your distribution's package manager to install it.
To [install curl on Ubuntu][10] and other Debian-based distributions, use the following command:
```
sudo apt install curl
```
#### Download files or webpage using curl
If you use curl without any option with a URL, it will read the file and print it on the terminal screen.
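For example (a minimal sketch), fetching a URL with no options streams the content to your terminal, which you can redirect to a file yourself:
```
curl URL > output.html
```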
To download a file using the curl command in the Linux terminal, you'll have to use the -O (uppercase O) option:
```
curl -O URL
```
![][11]
It is simple to download multiple files in Linux with curl; you just have to repeat the -O option before each URL:
```
curl -O URL1 -O URL2 -O URL3
```
Keep in mind that curl is not as simple as wget. While wget saves webpages as index.html, curl will complain that the remote file does not have a name when you try to save a webpage. You'll have to save it with a custom name, as described in the next section.
#### Download files with a different name
It could be confusing, but to provide a custom name for the downloaded file (instead of the original source name), you'll have to use the -o (lowercase o) option:
```
curl -o filename URL
```
![][12]
Sometimes curl won't download the file as you expect it to. You'll have to use the -L option (for location) to download it correctly. This is because some links redirect to another URL, and with the -L option, curl follows the final link.
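Putting it together, a redirect-following download looks like this (a sketch; combine -L with -O or -o as needed):
```
curl -L -O URL
```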
#### Pause and resume download with curl
Like wget, you can also resume a paused download using curl, with the -C - option:
```
curl -C - -O URL
```
**Conclusion**
As always, there are multiple ways to do the same thing in Linux. Downloading files from the terminal is no different.
wget and curl are just two of the most popular commands for downloading files in Linux. There are more such command-line tools. Terminal-based web browsers like [elinks][13], [w3m][14], etc. can also be used for downloading files on the command line.
Personally, for a simple download, I prefer using wget over curl. It is simpler and less confusing because you may have a difficult time figuring out why curl could not download a file in the expected format.
Your feedback and suggestions are welcome.
--------------------------------------------------------------------------------
via: https://itsfoss.com/download-files-from-linux-terminal/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/Download-Files-from-Linux-terminal.png?resize=800%2C450&ssl=1
[2]: https://www.gnu.org/software/wget/
[3]: https://itsfoss.com/install-latest-uget-ubuntu-linux-mint/
[4]: https://itsfoss.com/apt-command-guide/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/download-file-in-linux-terminal-using-wget.png?resize=795%2C418&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/download-file-in-linux-terminal-using-wget-2.png?resize=795%2C418&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/download-entire-website-using-wget.png?resize=795%2C418&ssl=1
[8]: https://curl.haxx.se/
[9]: https://linuxhandbook.com/curl-command-examples/
[10]: https://itsfoss.com/install-curl-ubuntu/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/download-files-in-linux-using-curl.png?resize=795%2C418&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/download-files-in-linux-using-curl-1.png?resize=795%2C418&ssl=1
[13]: http://elinks.or.cz/
[14]: http://w3m.sourceforge.net/

View File

@@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (LibreOffice Wants Apache to Drop the Ailing OpenOffice and Support LibreOffice Instead)
[#]: via: (https://itsfoss.com/libreoffice-letter-openoffice/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
LibreOffice Wants Apache to Drop the Ailing OpenOffice and Support LibreOffice Instead
======
It is a no-brainer that Apache OpenOffice is still a relevant recommendation when we think about [open source alternatives to Microsoft Office][1] for Linux users. However, for the past several years, the development of OpenOffice has been pretty much stale.
Of course, it is not a shocker, considering Abhishek wrote about the [possibility of Apache OpenOffice shutting down][2] back in 2016.
Now, in an [open letter from The Document Foundation][3], they appeal to Apache OpenOffice to recommend that users start using better alternatives like LibreOffice. In this article, I shall mention some highlights from the blog post by The Document Foundation and what it means for Apache OpenOffice.
![][4]
### Apache OpenOffice is History, LibreOffice is the Future?
Even though I didn't use OpenOffice back in the day, it is safe to say that it is definitely not a modern open-source alternative to Microsoft Office. Not anymore, at least.
Yes, Apache OpenOffice is still something important for legacy users and was a great alternative a few years back.
Here's the timeline of major releases for OpenOffice and LibreOffice:
![][5]
Now that there's no significant development taking place for OpenOffice, what's the future of Apache OpenOffice? A fairly active project with no major releases by the largest open source foundation?
It does not sound promising and that is exactly what The Document Foundation highlights in their open letter:
> OpenOffice(.org), the “father project” of LibreOffice, was a great office suite, and changed the world. It has a fascinating history, but **since 2014, Apache OpenOffice (its current home) hasn't had a single major release**. That's right: no significant new features or major updates have arrived in over six years. Very few minor releases have been made, and there have been issues with timely security updates too.
For an average user, if they don't know about [LibreOffice][6], I would definitely want them to know about it. But should the Apache Foundation suggest that OpenOffice users try LibreOffice to experience a better or more advanced office suite?
I don't know; maybe yes, or no?
> …many users don't know that LibreOffice exists. The OpenOffice brand is still so strong, even though the software hasn't had a significant release for over six years, and is barely being developed or supported
As mentioned in the open letter, The Document Foundation highlights the advantages and improvements of LibreOffice over OpenOffice and appeals to Apache OpenOffice to start recommending that its users try something better (i.e., LibreOffice):
> We appeal to Apache OpenOffice to do the right thing. Our goal should be to **get powerful, up-to-date and well-maintained productivity tools into the hands of as many people as possible**. Let's work together on that!
### What Should Apache OpenOffice Do?
If OpenOffice did the job, users would not need to make the effort to look for alternatives. So, is it a good idea to call out another project for its slow development and suggest that its users embrace newer tools instead?
In an argument, one might say it is only fair to promote your competition if you're done and have no interest in improving OpenOffice. And there's nothing wrong with that; the open-source community should always work together to ensure that new users get the best options available.
On the other side, one might say that The Document Foundation is frustrated that OpenOffice is still considered relevant in 2020, even without any significant improvements.
I won't judge, but I think these conflicting thoughts come to my mind when I take a look at the open letter.
### Do you think it is time to put OpenOffice to rest and rely on LibreOffice?
Even though LibreOffice seems to be a superior choice and definitely deserves the limelight, what do you think should be done? Should Apache discontinue OpenOffice and redirect users to LibreOffice?
Your opinion is welcome.
--------------------------------------------------------------------------------
via: https://itsfoss.com/libreoffice-letter-openoffice/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
[2]: https://itsfoss.com/openoffice-shutdown/
[3]: https://blog.documentfoundation.org/blog/2020/10/12/open-letter-to-apache-openoffice/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/libre-office-open-office.png?resize=800%2C450&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/libre-office-open-office-derivatives.jpg?resize=800%2C166&ssl=1
[6]: https://itsfoss.com/libreoffice-tips/

View File

@@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (MellowPlayer is a Desktop App for Various Streaming Music Services)
[#]: via: (https://itsfoss.com/mellow-player/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
MellowPlayer is a Desktop App for Various Streaming Music Services
======
_**Brief: MellowPlayer is a free and open-source desktop app that lets you integrate web-based music streaming services on Linux and Windows.**_
Undoubtedly, a lot of users prefer tuning in to streaming services to listen to their favorite music instead of purchasing individual music from stores or downloading them for a collection.
Of course, streaming services let you explore new music and help artists reach a wider audience easily. But with so many music streaming services available ([Soundcloud][1], [Spotify][2], [YouTube Music][3], [Amazon Music][4], etc.), it often becomes annoying to use them effectively on your computer.
You may [install Spotify on Linux][5], but there is no desktop app for Amazon Music. So you potentially cannot manage all your streaming services from a single portal.
What if a desktop app lets you integrate streaming services on both Windows and Linux for free? In this article, I will talk about such an app — [MellowPlayer][6].
### MellowPlayer: Open Source App to Integrate Various Streaming Music Services
![][7]
MellowPlayer is a free and open-source cross-platform desktop app that lets you integrate multiple streaming services and manage them all from one interface.
There are several supported streaming services that you can integrate. You also get a certain level of control to tweak your experience with each individual service. For instance, you can set it to automatically skip ads or mute them on YouTube.
The cross-platform support for both Windows and Linux is definitely a plus point.
Apart from the ability to manage the streaming services, it also integrates the player with your system tray to easily control the music. This means that you can use media keys on your keyboard to control the music player.
It is also worth noting that you can add a new service that is not officially supported by just creating a plugin for it yourself within the app. To let you know more about it, let me highlight all the key features below.
### Features of MellowPlayer
![][8]
  * Cross-platform (Windows & Linux)
  * Free & Open-Source
  * Plugin-based application to let you add a new service by creating a plugin
* Integrates the services as a native desktop app with the system tray
* Supports hot keys
* Notifications support
* Listening history
### Installing MellowPlayer on Linux
![][9]
MellowPlayer is available as a [Flatpak package][10]. I know it's disappointing for some, but it's just Flatpak for Linux and an executable file for Windows. In case you didn't know, follow our guide on [using Flatpak on Linux][11] to get started.
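If you already have Flatpak set up with the Flathub remote, installing it should be a one-liner (the application ID below is taken from the Flathub page linked in this article):
```
flatpak install flathub com.gitlab.ColinDuquesnoy.MellowPlayer
```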
[Download MellowPlayer][12]
### Wrapping Up
MellowPlayer is a handy desktop app for users who often dabble with multiple streaming services for music. Even though it worked fine in my test with SoundCloud, YouTube, and Spotify, I did notice that the app crashed when trying to resize the window, so just a heads-up on that. You can explore more about it on its [GitLab page][13].
There are two similar applications that allow you to play multiple streaming music services: [Nuvola][14] and [Nuclear Music Player][15]. You may want to check them out.
Have you tried MellowPlayer? Feel free to share your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/mellow-player/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://soundcloud.com
[2]: https://www.spotify.com
[3]: https://music.youtube.com
[4]: https://music.amazon.com/home
[5]: https://itsfoss.com/install-spotify-ubuntu-linux/
[6]: https://colinduquesnoy.gitlab.io/MellowPlayer/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/mellowplayer-screenshot.jpg?resize=800%2C439&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/mellowplayer.png?resize=800%2C442&ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/mellowplayer-system-integration.jpg?resize=800%2C438&ssl=1
[10]: https://flathub.org/apps/details/com.gitlab.ColinDuquesnoy.MellowPlayer
[11]: https://itsfoss.com/flatpak-guide/
[12]: https://colinduquesnoy.gitlab.io/MellowPlayer/#features
[13]: https://gitlab.com/ColinDuquesnoy/MellowPlayer
[14]: https://itsfoss.com/nuvola-music-player/
[15]: https://itsfoss.com/nuclear-music-player-linux/

View File

@@ -0,0 +1,119 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (GNOME 3.38 is Here With Customizable App Grid, Performance Improvements and Tons of Other Changes)
[#]: via: (https://itsfoss.com/gnome-3-38-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
GNOME 3.38 携可定制应用程序网格,性能改善和大量其它的更改而来
======
[GNOME 3.36][1] 带来了大量急需的改善,同时也带来了性能上的重大提升。现在,时隔六个月,我们终于迎来了带有一系列更改的 GNOME 3.38。
### GNOME 3.38 主要特色
这里是 GNOME 3.38 (代码名称Orbis) 的主要亮点:
[更多 Linux 视频,请订阅我们的 YouTube 频道][2]
#### 可定制应用程序菜单
作为 GNOME 3.38 重大更改中的一部分,应用程序网格或应用程序菜单现在是可以可定制的。
现在,你可以通过拖拽每个应用程序图标来创建文件夹,将它们移到/移出文件夹,并且可以在应用程序网格中重新设置回来。你也可以在应用程序网格中如你所想一样的重新定位图标。
![][3]
此外,这些变化是一些即将到来的未来设计更改更新的基本组成部分 — 因此,看到我们可以期待的东西会很令人兴奋。
#### 日历菜单更新
![][4]
随着最近一次的 GNOME 更新,通知区整洁了很多,但是现在随着 GNOME 3.38 的到来,你终于可以通过访问日历区正下方的日历事件来更方便地处理事情。
它不是一个主要的可见改造,但是它也有不少的改善。
#### 家长控制改善
你将会注意作为 GNOME 3.38 一部分的家长控制服务。它支持与桌面shell设置以及其它各种各样组件的集成来帮助你限制用户可以访问的内容。
#### 重新启动按钮
一些细微的改善导致了巨大的变化,重新启动按钮正是其中的一个变化。先单击 “关闭电源” / “关机” 按钮,再单击 “重新启动” 按钮的操作来重新启动系统总是让人很烦闷。
因此,随着 GNOME 3.38 的到来,你将最终会注意到一个作为单独按钮的 “重新启动” ,这将节省你的单击次数,平复你烦闷的心情。
#### 屏幕录制改善
[GNOME shell 的内置屏幕录制][5] 现在是一项独立的系统服务,这可能会使录制屏幕成为一种平滑流畅的体验。
另外,窗口截屏也有一些改善,并修复了一些错误。
#### GNOME 应用程序更新
GNOME 计算器也收到了很多的错误修复。除此之外,你也将发现 [epiphany GNOME 浏览器][6] 的一些重大改变。
GNOME Boxes 现在允许你从一个操作系统列表中选择将要运行的操作系统,GNOME 地图也有一些图形用户界面上的更改。
当然,不仅限于这些,你也将注意到 GNOME 控制中心、联系人、照片、Nautilus 以及其它一些软件包的细微更新和修复。
#### 性能和多显示器支持改善
GNOME 3.38 还有一大堆不易察觉的底层改善,全面提升了整体体验。例如,[Mutter][7] 有一些重要的修复,它现在允许两台显示器使用不同的刷新频率。
![][8]
先前,如果一台显示器的刷新频率为 60 Hz而另一台的刷新频率为 144 Hz ,那么刷新频率较慢的显示器将限制另外一台显示器的刷新频率。但是,随着在 GNOME 3.38 中的改善,它将能够处理多个显示器,而不会使显示器相互限制。
另外,[Phoronix][9] 报告的一些更改指出,在一些情况下,缩短大约 10% 的渲染时间。因此,巨大的性能优化是很确定的。
#### 各种各样的其它更改
* 电池百分比指示器
* 在电源菜单中的重新启动选项
  * 新的欢迎导览
* 指纹登录
* 二维码扫描共享 Wi-Fi 热点
* GNOME 浏览器的隐私和其它改善
* GNOME 地图现在反应敏捷并能根据屏幕大小改变其大小
* 重新修订的图标
你可以在它们的官方 [更改日志][10] 中找到一个详细的更改列表。
### 总结
GNOME 3.38 确实是一个令人赞叹的更新,改善了 GNOME 的用户体验。尽管 GNOME 3.36 带来了性能上的很大改善,但是针对 GNOME 3.38 的更多优化仍然是一件非常好的事。
GNOME 3.38 将在 Ubuntu 20.10 和 [Fedora 33][11] 中可用。Arch 和 Manjaro 用户应该很快就能获得。
我认为在正确的方向上有大量的更改。你觉得呢?
--------------------------------------------------------------------------------
via: https://itsfoss.com/gnome-3-38-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-3-36-release/
[2]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/gnome-app-arranger.jpg?resize=799%2C450&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/gnome-3-38-calendar-menu.png?resize=800%2C721&ssl=1
[5]: https://itsfoss.com/gnome-screen-recorder/
[6]: https://en.wikipedia.org/wiki/GNOME_Web
[7]: https://en.wikipedia.org/wiki/Mutter_(software)
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/gnome-multi-monitor-refresh-rate.jpg?resize=800%2C369&ssl=1
[9]: https://www.phoronix.com/scan.php?page=news_item&px=GNOME-3.38-Last-Min-Mutter
[10]: https://help.gnome.org/misc/release-notes/3.38
[11]: https://itsfoss.com/fedora-33/

View File

@@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Integrate your calendar with Ansible to avoid schedule conflicts)
[#]: via: (https://opensource.com/article/20/10/calendar-ansible)
[#]: author: (Nicolas Leiva https://opensource.com/users/nicolas-leiva)
将你的日历与 Ansible 集成,以避免与日程冲突
======
通过将一个日历应用集成到 Ansible 中来确保你的自动化工作流计划不会与其他东西冲突。
![Calendar close up snapshot][1]
“随时”是执行自动化工作流的好时机吗?答案可能是否定的,原因各不相同。
如果你希望避免同时进行更改,以最大限度地减少对关键业务流程的影响,并降低意外服务中断的风险,那么其他人不应该试图在你的自动化运行的同时进行更改。
在某些情况下,可能存在一个正在进行的计划维护窗口。 或者,可能有大型事件即将来临、一个关键的业务时间,或者假期,你或许不想在星期五晚上进行更改。
![Street scene with a large calendar and people walking][2]
([Curtis MacNewton][3], [CC BY-ND 2.0][4])
无论出于什么原因,你都希望将此信息发送到你的自动化平台,并防止在特定时间段内执行周期性或临时任务。用变更管理的行话来说,就是为不应该进行变更活动的时段指定封锁窗口。
### Ansible 中的日历集成
如何在 [Ansible][5] 中实现这个功能?虽然它本身没有日历功能,但 Ansible 的可扩展性将允许它与任何具有 API 的日历应用集成。
目标是这样的:在执行任何自动化或变更活动之前,你要执行一个 `pre-task` ,它会检查日历中是否已经安排了某些事情(目前或最近),并确认你没有在一个阻塞的时间段中。
想象一下,你有一个名为 `calendar` 的虚构模块,它可以连接到一个远程日历,比如 Google 日历,以确定你指定的时间是否已经以其他方式被标记为繁忙。你可以写一个类似这样的 playbook
```
- name: Check if timeslot is taken
  calendar:
    time: "{{ ansible_date_time.iso8601 }}"
  register: output
```
Ansible 实际会给出 `ansible_date_time`,将其传递给 `calendar` 模块,以验证时间的可用性,以便它可以注册响应 `output`),用于后续任务。
如果你的日历是这样的:
![Google Calendar screenshot][6]
(Nicolas Leiva, [CC BY-SA 4.0][7])
那么这个任务的输出就会高亮这个时间段被占用的事实 `busy: true`
```
ok: [localhost] => {
   "output": {
       "busy": true,
       "changed": false,
       "failed": false,
       "msg": "The timeslot 2020-09-02T17:53:43Z is busy: true"
   }
}
```
### 阻止任务运行
接下来,[Ansible Conditionals][8] 将帮助阻止所有之后任务的执行。一个简单的例子,你可以在下一个任务上使用 `when` 语句来强制它只有当上一个输出中的 `busy` 字段不是 `true` 时,它才会运行:
```
tasks:
  - shell: echo "Run this only when not busy!"
    when: not output.busy
```
### 总结
在[上一篇文章][9]中,我说过 Ansible 是一个将事物连接在一起的框架,将不同的构建相互连接,以协调端到端自动化工作流。
这篇文章探讨了 playbook 如何与日历应用集成以检查可用性。然而,我只是做了一些表面的工作!例如,你的任务还可以在日历中预订(封锁)一个时间段,这里的发挥空间很大。
在我的下一篇文章中,我将深入 `calendar` 模块是如何构建的,以及其他编程语言如何与 Ansible 一起使用。如果你和我一样是 [Go][10] 的粉丝,请继续关注!
* * *
_这篇文章最初发表在 Medium 上,名为 [Ansible and Google Calendar integration for change management][11],采用 CC BY-SA 4.0 许可经许可后转载。_
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/calendar-ansible
作者:[Nicolas Leiva][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/nicolas-leiva
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
[2]: https://opensource.com/sites/default/files/uploads/street-calendar.jpg (Street scene with a large calendar and people walking)
[3]: https://www.flickr.com/photos/7841127@N02/4217116202
[4]: https://creativecommons.org/licenses/by-nd/2.0/
[5]: https://docs.ansible.com/ansible/latest/index.html
[6]: https://opensource.com/sites/default/files/uploads/googlecalendarexample.png (Google Calendar screenshot)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html
[9]: https://medium.com/swlh/python-and-ansible-to-automate-a-network-security-workflow-28b9a44660c6
[10]: https://golang.org/
[11]: https://medium.com/swlh/ansible-and-google-calendar-integration-for-change-management-7c00553b3d5a

View File

@@ -0,0 +1,134 @@
[#]: collector: (lujun9972)
[#]: translator: (gxlct008)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install Deepin Desktop on Ubuntu 20.04 LTS)
[#]: via: (https://itsfoss.com/install-deepin-ubuntu/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
如何在 Ubuntu 20.04 LTS 上安装 Deepin 桌面
======
_**本教程向您展示在 Ubuntu 上安装 Deepin 桌面环境的正确步骤。还提到了移除步骤。**_
毫无疑问Deepin 是一个 [漂亮的 Linux 发行版][1]。最近发布的 [Deepin version 20][2] 让它更加美观了。
现在,[Deepin Linux][3] 是基于 [Debian][4] 的,默认的存储库镜像速度太慢。如果您更愿意使用 Ubuntu,可以选择 Deepin 风格的 Ubuntu 变体,即 [UbuntuDDE Linux 发行版][5]。它还不是 [官方的 Ubuntu 风格][6] 之一。
[重新安装新的发行版][7] 是一个麻烦,因为您会丢失数据,您将不得不在新安装的 UbuntuDDE 上重新安装您的应用程序。
一个更简单的选择是在现有的 Ubuntu 系统上安装 Deepin 桌面环境。毕竟,您可以轻松地在一个系统中安装多个 [桌面环境][8]。
不要烦恼,这很容易做到,如果您不喜欢,也可以恢复这些更改。让我来告诉你怎么做。
### 在 Ubuntu 20.04 上安装 Deepin 桌面
![][9]
UbuntuDDE 团队已为他们的发行版创建了一个 PPA您可以使用相同的 PPA 在 Ubuntu 20.04 上安装 Deepin 桌面。请记住,此 PPA 仅适用于 Ubuntu 20.04。请阅读有关 [在 Ubuntu 中使用 PPA][10]。
没有 Deepin 版本 20
您将在此处使用 PPA 安装的 Deepin 桌面还不是新的 Deepin 桌面版本 20。它可能会在 Ubuntu 20.10 发布后出现,但是我们不能保证任何事情。
以下是您需要遵循的步骤:
**步骤 1**:您需要首先在终端上输入以下内容,来添加 [Ubuntu DDE Remix 团队的官方 PPA][11]
```
sudo add-apt-repository ppa:ubuntudde-dev/stable
```
**步骤 2**:添加存储库以后,继而安装 Deepin 桌面。
```
sudo apt install ubuntudde-dde
```
![][12]
现在,安装将启动,一段时间后,将要求您选择<ruby>显示管理器<rt>display manager</rt></ruby>
![][13]
如果需要深度桌面主题的锁屏,则需要选择 “**lightdm**”。如果不需要,您可以将其设置为 “**gdm3**”。
如果您看不到此选项,可以通过键入以下命令来获得它,然后选择您首选的显示管理器:
```
sudo dpkg-reconfigure lightdm
```
**步骤 3** 完成后,您必须退出并通过选择 “**Deepin**” 会话再次登录,或者重新启动系统。
![][14]
就是这样。马上在您的 Ubuntu 20.04 LTS 系统上享受深度体验吧!
![][15]
### 从 Ubuntu 20.04 删除 Deepin 桌面
如果您不喜欢这种体验,或者由于某些原因它有 bug可以按照以下步骤将其删除。
**步骤 1** 如果您已将 “lightdm” 设置为显示管理器,则需要在卸载 Deepin 之前将显示管理器设置为 “gdm3”。为此请键入以下命令
```
sudo dpkg-reconfigure lightdm
```
![Select gdm3 on this screen][13]
然后,选择 **gdm3** 继续。
完成此操作后,您只需输入以下命令即可完全删除 Deepin
```
sudo apt remove startdde ubuntudde-dde
```
您只需重启即可回到原来的 Ubuntu 桌面。如果图标没有响应,只需打开终端(**CTRL + ALT + T**)并输入:
```
reboot
```
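另外,如果你还想把第一步中添加的 PPA 也一并移除(这只是一个补充性的示例,并非原文的步骤),可以执行:
```
sudo add-apt-repository --remove ppa:ubuntudde-dev/stable
```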
**总结**
有不同的 [桌面环境选择][16] 是件好事。如果您真的喜欢 Deepin 桌面界面,那么这可能是在 Ubuntu 上体验 Deepin 的一种方式。
如果您有任何疑问或遇到任何问题,请在评论中告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-deepin-ubuntu/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[gxlct008](https://github.com/gxlct008)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/beautiful-linux-distributions/
[2]: https://itsfoss.com/deepin-20-review/
[3]: https://www.deepin.org/en/
[4]: https://www.debian.org/
[5]: https://itsfoss.com/ubuntudde/
[6]: https://itsfoss.com/which-ubuntu-install/
[7]: https://itsfoss.com/reinstall-ubuntu/
[8]: https://itsfoss.com/what-is-desktop-environment/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/ubuntu-20-with-deepin.jpg?resize=800%2C386&ssl=1
[10]: https://itsfoss.com/ppa-guide/
[11]: https://launchpad.net/~ubuntudde-dev/+archive/ubuntu/stable
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/deepin-desktop-install.png?resize=800%2C534&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/deepin-display-manager.jpg?resize=800%2C521&ssl=1
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/deepin-session-ubuntu.jpg?resize=800%2C414&ssl=1
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/ubuntu-20-with-deepin-1.png?resize=800%2C589&ssl=1
[16]: https://itsfoss.com/best-linux-desktop-environments/

View File

@@ -0,0 +1,88 @@
[#]: collector: (wxy)
[#]: translator: (gxlct008)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Could Microsoft be en route to dumping Windows in favor of Linux?)
[#]: via: (https://www.techrepublic.com/article/could-microsoft-be-en-route-to-dumping-windows-in-favor-of-linux/)
[#]: author: (Jack Wallen https://www.techrepublic.com/meet-the-team/us/jack-wallen/)
微软能否放弃 Windows 转向 Linux
======
Jack Wallen 认为,Microsoft Linux 是 Microsoft 桌面操作系统的下一次演进。他解释了为什么这对 Microsoft、IT 专业人士、用户和 Linux 社区来说都是双赢。
![](https://tr1.cbsistatic.com/hub/i/r/2014/08/20/123daeb8-d6ce-4f0b-986e-225d55bf12e3/resize/770x/a693d56694587dbe5d025db7b8d79c48/linux-and-windows.jpg)
我尊敬的同事 Steven J. Vaughan-Nichols 在姊妹网站 ZDNet 上发表了一篇出色的文章,名为 [《基于 Linux 的 Windows 非常有意义》][1],他在文中讨论了 Eric S. Raymond 的观点即我们正接近桌面战争的最后阶段。Vaughan-Nichols 假设下一个合乎逻辑的步骤是在 Linux 内核上运行的 Windows 界面。
这是有道理的,尤其是考虑到微软在 [Windows Subsystem for Linux][2] 上的努力。然而,从我过去几年所目睹的一切来看,我认为可以得出一个对微软更有意义的结论。
请参阅:[Microsoft Build 2020 亮点][3] (TechRepublic Premium)。
## Microsoft Linux: 为什么它是最好的解决方案
一度,微软的最大摇钱树是软件——确切地说是 Windows 和 Microsoft Office。但是就像科技行业中的所有事物一样进化也在发生。拒绝进化的科技公司失败了。
微软明白这一点,并且它已经进化了。一个恰当的例子是:[Microsoft Azure][4]。微软的云计算服务,以及 [AWS][5] 和 [Google Cloud][6] 已经成为这个不断变化的行业的巨大推动力。Azure 已成为微软新世界的摇钱树——如此之多,以至于这家在桌面电脑市场上享有垄断地位的公司已经开始意识到,或许还有更好的方式来利用台式机。
这种优势很容易通过 Linux 来实现,但不是您可能想到的 Linux。Vaughan-Nichols 所建议的 Linux 对于微软来说可能是一个很好的垫脚石,但我相信该公司需要做出一个更大的飞跃。我说的是登月规模的飞跃——这将使所有参与者的生活变得更加轻松。
我说的是深入 Linux 领域。忘掉在 Linux 内核上运行的 [Windows 10][7] 接口的桌面版本吧,最后承认 Microsoft Linux 可能是当今世界的最佳解决方案。
微软发布一个完整的 Linux 发行版将对所有参与者来说意味着更少的挫败感。微软可以将其在 Windows 10 桌面系统上的开发工作转移到一个更稳定、更可靠、更灵活、更经考验的桌面系统上来。微软可以从任意数量的桌面系统中选择自己的官方风格:GNOME、KDE、Pantheon、Xfce、Mint、Cinnamon... 清单不胜枚举。微软可以按原样使用桌面,也可以为它们做出贡献,创造一些更符合用户习惯的东西。
## 开发:微软并没有摆脱困境
这并不意味着微软在开发方面将摆脱困境。微软还希望对 Wine 做出重大贡献,以确保其所有产品均可在兼容层上顺畅运行,并且默认集成到操作系统中,这样最终用户就不必为安装 Windows 应用程序做任何额外的工作。
## Windows 用户需要 Defender
微软开发团队也希望将 Windows Defender 移植到这个新的发行版中。等一等。什么?我真的是在暗示 MS Linux 需要 Windows Defender 吗? 是的,我确定。为什么?
最终用户仍然需要防范 <ruby>[网络钓鱼][8] 诈骗<rt>phishing scams</rt></ruby>、恶意 URL 和其他类型的攻击。普通的 Windows 用户可能没有意识到Linux 和安全使用实践的结合比 Windows 10 和 Windows Defender 要安全得多。所以,是的,将 Windows Defender 移植到 Microsoft Linux 将是保持用户基础舒适的一个很好的步骤。
这些用户将很快了解在台式计算机上工作的感觉,而不必处理 Windows 操作系统带来的日常困扰。更新更流畅、更值得信赖、更安全,桌面更有意义。
请参阅:[Linux 管理员需要了解的有关使用命令行工作的所有信息][9]TechRepublic Premium
## 微软、用户和 IT 专业人士的双赢
微软一直在尽其所能将用户从标准的基于客户端的软件迁移到云和其他托管解决方案,并且其软件摇钱树已经变成了以网络为中心和基于订阅的软件。所有这些 Linux 用户仍然可以使用 [Microsoft 365][10] 和它必须提供的任何其他 <ruby>[软件即服务SaaS][11]<rt>Software as a Service</rt></ruby> 解决方案——所有这些都来自于 Linux 操作系统的舒适性和安全性。
这对微软和消费者而言是双赢的,因为 Windows 并不是一个让人头疼的问题(通过漏洞搜索和对其专有解决方案进行安全补丁),消费者可以得到一个更可靠的解决方案而不会错过任何东西。
如果微软打对了牌,他们可以对 KDE 或几乎任何 Linux 桌面重新设置主题,使其与 Windows 10 界面没有太大区别。
正确地安排这一点消费者甚至可能都不知道其中的区别——“Windows 11” 将仅仅是 Microsoft 桌面操作系统的下一个演进版本。
说到胜利IT 专业人员将花费更少的时间来处理病毒、恶意软件和操作系统问题,而把更多的时间用于保持网络(以及为该网络供动力的服务器)的运行和安全上。
## 大卖场怎么办?
这是真正见真章的地方。为了让这个设想真正行得通,微软将不得不完全放弃 Windows,转而使用自己风格的 Linux。基于同样的思路,微软需要确保大卖场里的 PC 都预装了 Microsoft Linux 系统。不能有折中的余地——微软必须全力以赴,以确保这种过渡成功。
一旦大卖场开始销售安装了 Microsoft Linux 的 PC 和笔记本电脑,我预测这一举措对所有相关人员来说将会是一个巨大的成功。微软最终将被视为最终推出了一款值得消费者信赖的操作系统;消费者将拥有一个这样的桌面操作系统,它不会带来太多令人头疼的事情,而会带来真正的生产力和乐趣; Linux 社区最终将主导桌面。
## Microsoft Linux现在是时候了
你可能会认为这个想法很疯狂,但如果你真的仔细想想,微软 Windows 的演进就是朝着这个方向发展的。为什么不绕过这个时间线的中途部分,而直接跳到一个为所有参与者带来成功的终极游戏呢? 现在是 Microsoft Linux 的时候了。
via: https://www.techrepublic.com/article/could-microsoft-be-en-route-to-dumping-windows-in-favor-of-linux/
作者:[jack-wallen][a]
选题:[wxy][b]
译者:[gxlct008](https://github.com/gxlct008)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.techrepublic.com/meet-the-team/us/jack-wallen/
[b]: https://github.com/wxy
[1]: https://www.zdnet.com/article/linux-based-windows-makes-perfect-sense/
[2]: https://www.techrepublic.com/article/microsoft-older-windows-10-versions-now-get-to-run-windows-subsystem-for-linux-2/
[3]: https://www.techrepublic.com/resource-library/whitepapers/microsoft-build-2020-highlights/
[4]: https://www.techrepublic.com/article/microsoft-azure-the-smart-persons-guide/
[5]: https://www.techrepublic.com/article/amazon-web-services-the-smart-persons-guide/
[6]: https://www.techrepublic.com/article/google-cloud-platform-the-smart-persons-guide/
[7]: https://www.techrepublic.com/article/windows-10-the-smart-persons-guide/
[8]: https://www.techrepublic.com/article/phishing-and-spearphishing-a-cheat-sheet/
[9]: https://www.techrepublic.com/article/everything-a-linux-admin-needs-to-know-about-working-from-the-command-line/
[10]: https://www.techrepublic.com/article/microsoft-365-a-cheat-sheet/
[11]: https://www.techrepublic.com/article/software-as-a-service-saas-a-cheat-sheet/