Merge pull request #2 from LCTT/master

update doc
sanle 2021-02-10 11:46:25 +08:00 committed by GitHub
commit 5b78300935
15 changed files with 924 additions and 521 deletions

View File

@@ -1,18 +1,18 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13103-1.html)
[#]: subject: (The Zen of Python: Why timing is everything)
[#]: via: (https://opensource.com/article/19/12/zen-python-timeliness)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Python 之禅:为什么时间就是一切
Python 之禅:时机最重要
======
> 这是 Python 之禅特别系列的一部分,重点是第十五和第十六条原则:现在与永远
> 这是 Python 之禅特别系列的一部分,重点是第十五和第十六条原则:现在与将来
!["桌子上的时钟、笔和记事本"][1]
![](https://img.linux.net.cn/data/attachment/album/202102/09/231557dkuzz22ame4ja2jj.jpg)
Python 一直在不断发展。Python 社区对特性请求的渴求是无止境的,对现状也总是不满意的。随着 Python 越来越流行,这门语言的变化会影响到更多的人。

View File

@@ -1,78 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Where are all the IoT experts going to come from?)
[#]: via: (https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
Where are all the IoT experts going to come from?
======
The fast growth of the internet of things (IoT) is creating a need to train cross-functional experts who can combine traditional networking and infrastructure expertise with database and reporting skills.
![Kevin \(CC0\)][1]
If the internet of things (IoT) is going to fulfill its enormous promise, it's going to need legions of smart, skilled, _trained_ workers to make everything happen. And right now, it's not entirely clear where those people are going to come from.
That's why I was interested in trading emails with Keith Flynn, senior director of product management, R&D at asset-optimization software company [AspenTech][2], who says that when dealing with the slew of new technologies that fall under the IoT umbrella, you need people who can understand how to configure the technology and interpret the data. Flynn sees a growing need for existing educational institutions to house IoT-specific programs, as well as an opportunity for new IoT-focused private colleges offering a well-rounded curriculum.
“In the future,” Flynn told me, “IoT projects will differ tremendously from the general data management and automation projects of today. … The future requires a more holistic set of skills and cross-trading capabilities so that we're all speaking the same language.”
**[ Also read:  [20 hot jobs ambitious IT pros should shoot for][3] ]**
With the IoT growing 30% a year, Flynn added, rather than a few specific skills, “everything from traditional deployment skills, like networking and infrastructure, to database and reporting skills and, frankly, even basic data science, need to be understood together and used together.”
### Calling all IoT consultants
“The first big opportunity for IoT-educated people is in the consulting field,” Flynn predicted. “As consulting companies adapt or die to the industry trends … having IoT-trained people on staff will help position them for IoT projects and make a claim in the new line of business: IoT consulting.”
The problem is especially acute for startups and smaller companies. “The bigger the organization, the more likely they have a means to hire different people across different lines of skillsets,” Flynn said. “But for smaller organizations and smaller IoT projects, you need someone who can do both.”
Both? Or _everything?_ The IoT “requires a combination of all knowledge and skillsets,” Flynn said, noting that “many of the skills aren't new, they've just never been grouped together or taught together before.”
**[ [Looking to upgrade your career in tech? This comprehensive online course teaches you how.][4] ]**
### The IoT expert of the future
True IoT expertise starts with foundational instrumentation and electrical skills, Flynn said, which can help workers implement new wireless transmitters and boost technology for better battery life and power consumption.
“IT skills, like networking, IP addressing, subnet masks, cellular and satellite are also pivotal IoT needs,” Flynn said. He also sees a need for database management skills and cloud management and security expertise, “especially as things like [advanced process control] APC and sending sensor data directly to databases and data lakes become the norm.”
### Where will IoT experts come from?
Flynn said standardized formal education courses would be the best way to make sure that graduates or certificate holders have the right set of skills. He even laid out a sample curriculum: “Start in chronological order with the basics like [Electrical & Instrumentation] E&I and measurement. Then teach networking, and then database administration and cloud courses should follow that. This degree could even be looped into an existing engineering course, and it would probably take two years … to complete the IoT component.”
While corporate training could also play a role, “that's easier said than done,” Flynn warned. “Those trainings will need to be organization-specific efforts and pushes.”
Of course, there are already [plenty of online IoT training courses and certificate programs][5]. But, ultimately, the responsibility lies with the workers themselves.
“Upskilling is incredibly important in this world as tech continues to transform industries,” Flynn said. “If that upskilling push doesn't come from your employer, then online courses and certifications would be an excellent way to do that for yourself. We just need those courses to be created. ... I could even see organizations partnering with higher-education institutions that offer these courses to give their employees better access to it. Of course, the challenge with an IoT program is that it will need to constantly evolve to keep up with new advancements in tech.”
**[ For more on IoT, see [tips for securing IoT on your network][6], our list of [the most powerful internet of things companies][7] and learn about the [industrial internet of things][8]. | Get regularly scheduled insights by [signing up for Network World newsletters][9]. ]**
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/programmer_certification-skills_code_devops_glasses_student_by-kevin-unsplash-100764315-large.jpg
[2]: https://www.aspentech.com/
[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career
[5]: https://www.google.com/search?client=firefox-b-1-d&q=iot+training
[6]: https://www.networkworld.com/article/3254185/internet-of-things/tips-for-securing-iot-on-your-network.html#nww-fsb
[7]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
[8]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb
[9]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,90 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Understanding Linus's Law for open source security)
[#]: via: (https://opensource.com/article/21/2/open-source-security)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Understanding Linus's Law for open source security
======
Linus's Law is that given enough eyeballs, all bugs are shallow. How
does this apply to open source software security?
![Hand putting a Linux file folder into a drawer][1]
In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. This article discusses Linux's influence on the security of open source software.
An often-praised virtue of open source software is that its code can be reviewed (or "audited," as security professionals like to say) by anyone and everyone. However, if you actually ask many open source users when the last time they reviewed code was, you might get answers ranging from a blank stare to an embarrassed murmur. And besides, there are some really big open source applications out there, so it can be difficult to review every single line of code effectively.
Extrapolating from these slightly uncomfortable truths, you have to wonder: When nobody looks at the code, does it really matter whether it's open or not?
### Should you trust open source?
We tend to make a trite assumption in hobbyist computing that open source is "more secure" than anything else. We don't often talk about what that means, what the basis of comparison is ("more" secure than what?), or how the conclusion has even been reached. It's a dangerous statement to make because it implies that as long as you call something _open source_, it automatically and magically inherits enhanced security. That's not what open source is about, and in fact, it's what open source security is very much against.
You should never assume an application is secure unless you have personally audited and understood its code. Once you have done this, you can assign _ultimate trust_ to that application. Ultimate trust isn't a thing you do on a computer; it's something you do in your own mind: You trust software because you choose to believe that it is secure, at least until someone finds a way to exploit that software.
You're the only person who can place ultimate trust in that code, so every user who wants that luxury must audit the code for themselves. Taking someone else's word for it doesn't count!
So until you have audited and understood a codebase for yourself, the maximum trust level you can give to an application falls on a spectrum ranging from approximately _not trustworthy at all_ to _pretty trustworthy_. There's no cheat sheet for this. It's a personal choice you must make for yourself. If you've heard from people you strongly trust that an application is secure, then you might trust that software more than you trust something for which you've gotten no trusted recommendations.
Because you cannot audit proprietary (non-open source) code, you can never assign it _ultimate trust_.
### Linus's Law
The reality is, not everyone is a programmer, and not everyone who is a programmer has the time to dedicate to reviewing hundreds and hundreds of lines of code. So if you're not going to audit code yourself, then you must choose to trust (to some degree) the people who _do_ audit code.
So exactly who does audit code, anyway?
Linus's Law asserts that _given enough eyeballs, all bugs are shallow_, but we don't really know how many eyeballs are "enough." However, don't underestimate the number. Software is very often reviewed by more people than you might imagine. The original developer or developers obviously know the code that they've written. However, open source is often a group effort, so the longer code is open, the more software developers end up seeing it. A developer must review major portions of a project's code because they must learn a codebase to write new features for it.
Open source packagers also get involved with many projects in order to make them available to a Linux distribution. Sometimes an application can be packaged with almost no familiarity with the code, but often a packager gets familiar with a project's code, both because they don't want to sign off on software they don't trust and because they may have to make modifications to get it to compile correctly. Bug reporters and triagers also sometimes get familiar with a codebase as they try to solve anomalies ranging from quirks to major crashes. Of course, some bug reporters inadvertently reveal code vulnerabilities not by reviewing it themselves but by bringing attention to something that obviously doesn't work as intended. Sysadmins frequently get intimately familiar with the code of the important software their users rely upon. Finally, there are security researchers who dig into code exclusively to uncover potential exploits.
### Trust and transparency
Some people assume that because major software is composed of hundreds of thousands of lines of code, it's basically impossible to audit. Don't be fooled by how much code it takes to make an application run. You don't actually have to read millions of lines. Code is highly structured, and exploitable flaws are rarely just a single line hidden among the millions of lines; there are usually whole functions involved.
There are exceptions, of course. Sometimes a serious vulnerability is enabled with just one system call or by linking to one flawed library. Luckily, those kinds of errors are relatively easy to notice, thanks to the active role of security researchers and vulnerability databases.
Some people point to bug trackers, such as the [Common Vulnerabilities and Exposures (CVE)][2] website, and deduce that it's actually as plain as day that open source isn't secure. After all, hundreds of security risks are filed against lots of open source projects, out in the open for everyone to see. Don't let that fool you, though. Just because you don't get to see the flaws in closed software doesn't mean those flaws don't exist. In fact, we know that they do because exploits are filed against them, too. The difference is that _all_ exploits against open source applications are available for developers (and users) to see so those flaws can be mitigated. That's part of the system that boosts trust in open source, and it's wholly missing from proprietary software.
There may never be "enough" eyeballs on any code, but the stronger and more diverse the community around the code, the better chance there is to uncover and fix weaknesses.
### Trust and people
In open source, the probability that many developers, each working on the same project, have noticed something _not secure_ but have all remained equally silent about that flaw is considered to be low because humans rarely mutually agree to conspire in this way. We've seen how disjointed human behavior can be recently with COVID-19 mitigation:
* We've all identified a flaw (a virus).
* We know how to prevent it from spreading (stay home).
* Yet the virus continues to spread because one or more people deviate from the mitigation plan.
The same is true for bugs in software. If there's a flaw, someone noticing it will bring it to light (provided, of course, that someone sees it).
However, with proprietary software, there can be a high probability that many developers working on a project may notice something not secure but remain equally silent because the proprietary model relies on paychecks. If a developer speaks out against a flaw, then that developer may at best hurt the software's reputation, thereby decreasing sales, or at worst, may be fired from their job. Developers being paid to work on software in secret do not tend to talk about its flaws. If you've ever worked as a developer, you've probably signed an NDA, and you've been lectured on the importance of trade secrets, and so on. Proprietary software encourages, and more often enforces, silence even in the face of serious flaws.
### Trust and software
Don't trust software you haven't audited.
If you must trust software you haven't audited, then choose to trust code that's exposed to many developers who independently are likely to speak up about a vulnerability.
Open source isn't inherently more secure than proprietary software, but the systems in place to fix it are far better planned, implemented, and staffed.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/open-source-security
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://cve.mitre.org

View File

@@ -1,46 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to tell if implementing your Python code is a good idea)
[#]: via: (https://opensource.com/article/19/12/zen-python-implementation)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
How to tell if implementing your Python code is a good idea
======
This is part of a special series about the Zen of Python focusing on the
17th and 18th principles: hard vs. easy.
![Brick wall between two people, a developer and an operations manager][1]
A language does not exist in the abstract. Every single language feature has to be implemented in code. It is easy to promise some features, but the implementation can get hairy. Hairy implementation means more potential for bugs, and, even worse, a maintenance burden for the ages.
The [Zen of Python][2] has answers for this conundrum.
### If the implementation is hard to explain, it's a bad idea.
The most important thing about programming languages is predictability. Sometimes we explain the semantics of a certain construct in terms of abstract programming models, which do not correspond exactly to the implementation. However, the best of all explanations just _explains the implementation_.
If the implementation is hard to explain, then that avenue (simply explaining the implementation) is impossible.
### If the implementation is easy to explain, it may be a good idea.
Just because something is easy does not mean it is worthwhile. However, once it is explained, it is much easier to judge whether it is a good idea.
This is why the second half of this principle intentionally equivocates: nothing is certain to be a good idea, but an easy-to-explain implementation always allows people to have that discussion.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/zen-python-implementation
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops_confusion_wall_questions.png?itok=zLS7K2JG (Brick wall between two people, a developer and an operations manager)
[2]: https://www.python.org/dev/peps/pep-0020/

View File

@@ -1,128 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension)
[#]: via: (https://itsfoss.com/material-shell/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension
======
There is something about tiling windows that attracts many people. Perhaps it looks good, or perhaps it saves time if you are a fan of [keyboard shortcuts in Linux][1]. Or maybe it's the challenge of using something as uncommon as tiling windows.
![Tiling Windows in Linux | Image Source][2]
From i3 to [Sway][3], there are many tiling window managers available for the Linux desktop. However, configuring a tiling window manager itself involves a steep learning curve.
This is why projects like [Regolith desktop][4] exist, giving you a preconfigured tiling desktop so that you can get started with tiling windows with less effort.
Let me introduce you to a similar project named Material Shell that makes using the tiling feature even easier than [Regolith][5].
### Material Shell GNOME Extension: Convert GNOME desktop into a tiling window manager
[Material Shell][6] is a GNOME extension, and that's the best thing about it. This means that you don't have to log out and log in to another desktop environment or window manager. You can enable or disable it from within your current session.
I'll list the features of Material Shell, but it will be easier to see it in action:
[Subscribe to our YouTube channel for more Linux videos][7]
The project is called Material Shell because it follows the [Material Design][8] guideline and thus gives the applications an aesthetically pleasing interface. Here are its main features:
#### Intuitive interface
Material Shell adds a left panel for quick access. On this panel, you can find the system tray at the bottom and the search and workspaces on the top.
All new apps are added to the current workspace. You can create a new workspace and switch to it to organize your running apps into categories. This is the essential concept of workspaces anyway.
In Material Shell, every workspace can be visualized as a row with several apps rather than a box with several apps in it.
#### Tiling windows
In a workspace, you can see all your opened applications on the top all the time. By default, applications open to take up the entire screen, just as they do on the regular GNOME desktop. You can change the layout to split the screen in half, into multiple columns, or into a grid of apps using the layout changer in the top-right corner.
This video shows all the above features at a glance:
#### Persistent layout and workspaces
That's not it. Material Shell also remembers the workspaces and windows you open so that you don't have to reorganize your layout again. This is a good feature to have as it saves time if you are particular about which application goes where.
#### Hotkeys/Keyboard shortcuts
Like any tiling window manager, you can use keyboard shortcuts to navigate between applications and workspaces.
* `Super+W` Navigate to the upper workspace.
* `Super+S` Navigate to the lower workspace.
* `Super+A` Focus the window at the left of the current window.
* `Super+D` Focus the window at the right of the current window.
* `Super+1`, `Super+2` … `Super+0` Navigate to a specific workspace
* `Super+Q` Kill the currently focused window.
* `Super+[MouseDrag]` Move window around.
* `Super+Shift+A` Move the current window to the left.
* `Super+Shift+D` Move the current window to the right.
* `Super+Shift+W` Move the current window to the upper workspace.
* `Super+Shift+S` Move the current window to the lower workspace.
### Installing Material Shell
Warning!
Tiling windows can be confusing for many users. You should be familiar with GNOME extensions to use it. Avoid trying it if you are absolutely new to Linux or if you are easily panicked when anything changes in your system.
Material Shell is a GNOME extension. So, please [check your desktop environment][9] to make sure you are running _**GNOME 3.34 or a higher version**_.
Apart from that, I noticed that disabling Material Shell removes the top bar from Firefox and the Ubuntu dock. You can get the dock back by disabling and re-enabling the Ubuntu dock extension from the Extensions app in GNOME. I haven't tried it, but I guess these problems should also go away after a system reboot.
I hope you know [how to use GNOME extensions][10]. The easiest way is to just [open this link in the browser][11], install the GNOME extensions browser plugin, and then enable the Material Shell extension.
![][12]
If you don't like it, you can disable it from the same extension link you used earlier or use the GNOME Extensions app:
![][13]
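If you prefer the terminal, GNOME also ships a `gnome-extensions` command-line tool that can toggle extensions once they are installed. A minimal sketch follows; the UUID shown is an assumption, so check the output of `gnome-extensions list` for the exact ID on your system:

```
# Check that you are running GNOME 3.34 or newer
gnome-shell --version

# Find the extension's UUID after installing it from extensions.gnome.org
gnome-extensions list | grep -i material

# Enable or disable it (the UUID below is an assumption; use the one printed above)
gnome-extensions enable material-shell@papyelgringo
gnome-extensions disable material-shell@papyelgringo
```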
**To tile or not?**
I use multiple screens, and I found that Material Shell doesn't work well with multiple monitors. This is something the developer(s) can improve in the future.
Apart from that, it's really easy to get started with tiling windows thanks to Material Shell. If you try Material Shell and like it, show the project some appreciation by [giving it a star or sponsoring it on GitHub][14].
For some reason, tiling windows are getting popular. The recently released [Pop OS 20.04][15] also added tiling window features.
But as I mentioned previously, tiling layouts are not for everyone, and they can confuse many people.
How about you? Do you prefer tiling windows, or do you prefer the classic desktop layout?
--------------------------------------------------------------------------------
via: https://itsfoss.com/material-shell/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-shortcuts/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-ricing-example-800x450.jpg?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/sway-window-manager/
[4]: https://itsfoss.com/regolith-linux-desktop/
[5]: https://regolith-linux.org/
[6]: https://material-shell.com
[7]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[8]: https://material.io/
[9]: https://itsfoss.com/find-desktop-environment/
[10]: https://itsfoss.com/gnome-shell-extensions/
[11]: https://extensions.gnome.org/extension/3357/material-shell/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/install-material-shell.png?resize=800%2C307&ssl=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/material-shell-gnome-extension.png?resize=799%2C497&ssl=1
[14]: https://github.com/material-shell/material-shell
[15]: https://itsfoss.com/pop-os-20-04-review/

View File

@@ -1,180 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage containers with Podman Compose)
[#]: via: (https://fedoramagazine.org/manage-containers-with-podman-compose/)
[#]: author: (Mehdi Haghgoo https://fedoramagazine.org/author/powergame/)
Manage containers with Podman Compose
======
![][1]
Containers are awesome, allowing you to package your application along with its dependencies and run it anywhere. Starting with Docker in 2013, containers have been making the lives of software developers much easier.
One of the downsides of Docker is it has a central daemon that runs as the root user, and this has security implications. But this is where Podman comes in handy. Podman is a [daemonless container engine][2] for developing, managing, and running OCI Containers on your Linux system in root or rootless mode.
There are other articles on Fedora Magazine you can use to learn more about Podman. Two examples follow:
* [Using Pods with Podman on Fedora][3]
* [Podman with Capabilities on Fedora][4]
If you have worked with Docker, chances are you also know about Docker Compose, which is a tool for orchestrating several containers that might be interdependent. To learn more about Docker Compose see its [documentation][5].
### What is Podman Compose?
[Podman Compose][6] is a project whose goal is to be used as an alternative to Docker Compose without needing any changes to be made in the docker-compose.yaml file. Since Podman Compose works using pods, it's good to start with a refresher on what a pod is.
> A _Pod_ (as in a pod of whales or pea pod) is a group of one or more [containers][7], with shared storage/network resources, and a specification for how to run the containers.
>
> [Pods Kubernetes Documentation][8]
The basic idea behind Podman Compose is that it picks up the services defined inside the _docker-compose.yaml_ file and creates a container for each service. A major difference between Docker Compose and Podman Compose is that Podman Compose adds the containers to a single pod for the whole project, and all the containers share the same network. It even names the containers the same way Docker Compose does, and it sets up name resolution between them with the _--add-host_ flag when creating the containers, as you will see in the example.
### Installation
Complete install instructions for Podman Compose are found on its [project page][6], and there are several ways to do it. To install the latest development version, use the following command:
```
pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
```
Make sure you also have [Podman installed][9] since you'll need it as well. On Fedora, to install Podman use the following command:
```
sudo dnf install podman
```
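Before moving on, it is worth a quick check that both tools ended up on your PATH (output will vary by version):

```
# Confirm the Podman engine is available
podman --version

# Confirm podman-compose is reachable (pip may place it under ~/.local/bin)
command -v podman-compose
```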
### Example: launching a WordPress site with Podman Compose
Imagine your _docker-compose.yaml_ file is in a folder called _wpsite_. A typical _docker-compose.yaml_ (or _docker-compose.yml_) for a WordPress site looks like this:
```
version: "3.8"
services:
web:
image: wordpress
restart: always
volumes:
- wordpress:/var/www/html
ports:
- 8080:80
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: magazine
WORDPRESS_DB_NAME: magazine
WORDPRESS_DB_PASSWORD: 1maGazine!
WORDPRESS_TABLE_PREFIX: cz
WORDPRESS_DEBUG: 0
depends_on:
- db
networks:
- wpnet
db:
image: mariadb:10.5
restart: always
ports:
- 6603:3306
volumes:
- wpdbvol:/var/lib/mysql
environment:
MYSQL_DATABASE: magazine
MYSQL_USER: magazine
MYSQL_PASSWORD: 1maGazine!
MYSQL_ROOT_PASSWORD: 1maGazine!
networks:
- wpnet
volumes:
wordpress: {}
wpdbvol: {}
networks:
wpnet: {}
```
If you come from a Docker background, you know you can launch these services by running _docker-compose up_. Docker Compose will create two containers named _wpsite_web_1_ and _wpsite_db_1_ and attach them to a network called _wpsite_wpnet_.
Now, see what happens when you run _podman-compose up_ in the project directory. First, a pod is created named after the directory in which the command was issued. Next, it looks for any named volumes defined in the YAML file and creates the volumes if they do not exist. Then, one container is created per every service listed in the _services_ section of the YAML file and added to the pod.
Containers are named similarly to Docker Compose. For example, for your web service, a container named _wpsite_web_1_ is created. Podman Compose also adds localhost aliases for each named container, so containers can still resolve each other by name even though they are not on a bridge network as in Docker. It does this with the _--add-host_ option when creating the containers, for example, _--add-host web:localhost_.
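To make the pod, the container names, and the _--add-host_ aliases more concrete, here is a rough, hand-written approximation of what Podman Compose sets up for the example above. It is a sketch for illustration only, not the tool's exact invocation, and it maps the aliases to 127.0.0.1 because the containers share the pod's network namespace:

```
# Create the project pod and publish the ports from docker-compose.yaml on it
podman pod create --name wpsite -p 8080:80 -p 6603:3306

# One container per service, joined to the pod; named volumes are created on demand
podman create --pod wpsite --name wpsite_db_1 \
  --add-host web:127.0.0.1 --add-host db:127.0.0.1 \
  -e MYSQL_DATABASE=magazine -e MYSQL_USER=magazine \
  -e MYSQL_PASSWORD='1maGazine!' -e MYSQL_ROOT_PASSWORD='1maGazine!' \
  -v wpdbvol:/var/lib/mysql mariadb:10.5

podman create --pod wpsite --name wpsite_web_1 \
  --add-host web:127.0.0.1 --add-host db:127.0.0.1 \
  -e WORDPRESS_DB_HOST=db -e WORDPRESS_DB_USER=magazine \
  -e WORDPRESS_DB_NAME=magazine -e WORDPRESS_DB_PASSWORD='1maGazine!' \
  -v wordpress:/var/www/html wordpress

# Start everything in the pod
podman pod start wpsite
```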
Note that _docker-compose.yaml_ includes a port forwarding from host port 8080 to container port 80 for the web service. You should now be able to access your fresh WordPress instance from the browser using the address _<http://localhost:8080>_.
![WordPress Dashboard][10]
### Controlling the pod and containers
To see your running containers, use _podman ps_, which shows the web and database containers along with the infra container in your pod.
```
CONTAINER ID  IMAGE                               COMMAND               CREATED      STATUS          PORTS                                         NAMES
a364a8d7cec7  docker.io/library/wordpress:latest  apache2-foregroun...  2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_web_1
c447024aa104  docker.io/library/mariadb:10.5      mysqld                2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_db_1
12b1e3418e3e  k8s.gcr.io/pause:3.2
```
You can also verify that a pod has been created by Podman for this project, named after the folder in which you issued the command, by listing your pods with _podman pod list_:
```
POD ID        NAME             STATUS    CREATED      INFRA ID      # OF CONTAINERS
8a08a3a7773e  wpsite           Degraded  2 hours ago  12b1e3418e3e  3
```
To stop the containers, enter the following command in another command window:
```
podman-compose down
```
You can also do that by stopping and removing the pod. This essentially stops and removes all the containers and then the containing pod. So, the same thing can be achieved with these commands:
```
podman pod stop podname
podman pod rm podname
```
Note that this does not remove the volumes you defined in _docker-compose.yaml_. So, the state of your WordPress site is saved, and you can get it back by running this command:
```
podman-compose up
```
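A quick way to confirm that the named volumes, and therefore your WordPress state, survived the teardown is to list them:

```
# The wordpress and wpdbvol volumes should still appear after podman-compose down
podman volume ls
```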
In conclusion, if you're a Podman fan and do your container jobs with Podman, you can use Podman Compose to manage your containers in development and production.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/manage-containers-with-podman-compose/
作者:[Mehdi Haghgoo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/powergame/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/01/podman-compose-1-816x345.jpg
[2]: https://podman.io
[3]: https://fedoramagazine.org/podman-pods-fedora-containers/
[4]: https://fedoramagazine.org/podman-with-capabilities-on-fedora/
[5]: https://docs.docker.com/compose/
[6]: https://github.com/containers/podman-compose
[7]: https://kubernetes.io/docs/concepts/containers/
[8]: https://kubernetes.io/docs/concepts/workloads/pods/
[9]: https://podman.io/getting-started/installation
[10]: https://fedoramagazine.org/wp-content/uploads/2021/01/Screenshot-from-2021-01-08-06-27-29-1024x767.png

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@@ -0,0 +1,199 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My open source disaster recovery strategy for the home office)
[#]: via: (https://opensource.com/article/21/2/high-availability-home-office)
[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
My open source disaster recovery strategy for the home office
======
In the remote work era, it's more important than ever to have a disaster
recovery plan for your household infrastructure.
![Person using a laptop][1]
I've worked from home for years, and with the COVID-19 crisis, millions more have joined me. Teachers, accountants, librarians, stockbrokers… you name it, these workers now operate full or part time from their homes. Even after the coronavirus crisis ends, many will continue working at home, at least part time. But what happens when the home worker's computer fails? Whether the device is a smartphone, tablet, laptop, or desktop—and whether the problem is hardware or software—the result might be missed workdays and lots of frustration.
This article explores how to ensure high-availability home computing. Open source software is key. It offers device independence so that home workers can easily move between primary and backup devices. Most importantly, it gives users control of their environment, which is the surest route to high availability. This simple high-availability strategy, based on open source, is easy to modify for your needs.
### Different strategies for different situations
I need to emphasize one point upfront: different job functions require different solutions. Some at-home workers can use smartphones or tablets, while others rely on laptops, and still others require high-powered desktop workstations. Some can tolerate an outage of hours or even days, while others must be available without interruption. Some use company-supplied devices, and others must provide their own. Lastly, some home workers store their data in their company's cloud, while others self-manage their data.
Obviously, no single high-availability strategy fits everyone. My strategy probably isn't "the answer" for you, but I hope it prompts you to think about the challenges involved (if you haven't already) and presents some ideas to help you prepare before disaster strikes.
### Defining high availability
Whatever computing device a home worker uses, high availability (HA) involves five interoperable components:
* Device hardware
* System software
* Communications capability
* Applications
* Data
The HA plan must encompass all five components to succeed. Missing any component causes HA failure.
For example, last night, I worked on a cloud-based spreadsheet. If my communications link had failed and I couldn't access my cloud data, that would stop my work on the project… even if I had all the other HA components available in a backup computer.
Of course, there are exceptions. Say last night's spreadsheet was stored on my local computer. If that device failed, I could have kept working as long as I had a backup computer with my data on it, even if I lacked internet access.
To succeed as a high-availability home worker, you must first identify the components you require for your work. Once you've done that, develop a plan to continue working even if one or more components fails.
#### Duplicate replacement
One approach is to create a _duplicate replacement_. Having the exact same hardware, software, communications, apps, and data available on a backup device guarantees that you can work if your primary fails. This approach is simple, though it might cost more to keep a complete backup on hand.
To economize, you might share computers with your family or flatmates. A _shared backup_ is always more cost-effective than a _dedicated backup_, so long as you have top priority on the shared computer when you need it.
#### Functional replacement
The alternative to duplicate replacement is a _functional replacement_. You substitute a working equivalent for the failed component. Say I'm working from my home laptop and connecting through home WiFi. My internet connection fails. Perhaps I can tether my computer to my phone and use the cell network instead. I achieve HA by replacing one technology with an equivalent.
#### Know your requirements
Beyond the five HA components, be sure to identify any special requirements you have. For example, if mobility is important, you might need to replace a broken laptop with another laptop, not a desktop.
HA means identifying all the functions you need, then ensuring your HA plan covers them all.
### Timing, planning, and testing
You must also define your time frame for recovery. Must you be able to continue your work immediately after a failure? Or do you have the luxury of some downtime during which you can react?
The longer your allowable downtime, the more options you have. For example, if you could miss work for several days, you could simply trot a broken device into a repair shop. No need for a backup.
In this article, by "high availability," I mean getting back to work in very short order after a failure, perhaps less than one hour. This typically requires that you have access to a backup device that is immediately available and ready to go. While there might be occasions when you can recover your primary device in a matter of minutes—for example, by working around a failure or by quickly replacing a defective piece of hardware or software—a backup computer is normally part of the HA plan.
HA requires planning and preparation. "Winging it" doesn't suffice; ensure your backup plan works by testing it beforehand.
For example, say your data resides in the cloud. That data is accessible from anywhere, from any device. That sounds ideal. But what if you forget that there's a small but vital bit of data stored locally on your failed computer? If you can't access that essential data, your HA plan fails. A dry run surfaces problems like this.
### Smartphones as backup
Most of us in software engineering and support use laptops and desktops at home. Smartphones and tablets are useful adjuncts, but they aren't at the core of what we do.
The main reasons are screen size and keyboard. For software work, you can't achieve the same level of productivity with a small screen and touchscreen keypad as you can with a large monitor and physical keyboard.
If you normally use a laptop or desktop and opt for a smartphone or tablet as your backup, test it out beforehand to make sure it suffices. Here's an example of the kind of subtlety that might otherwise trip you up. Most videoconferencing platforms run on both smartphones and laptops or desktops, but their mobile apps can differ in small but important ways. And even when the platform does offer an equivalent experience (the way [Jitsi][2] does, for instance), it can be awkward to share charts, slide decks, and documents, to use a chat feature, and so on, just due to the difference in mobile form factors compared to a big computer screen and a variety of input options.
Smartphones make convenient backup devices because nearly everyone has one. But if you designate yours as your functional replacement, then try using it for work one day to verify that it meets your needs.
### Data accessibility
Data access is vital when your primary device fails. Even if you back up your work data, if a device fails, you also may need credentials for VPN or SSH access, specialized software, or forms of data that might not be stored along with your day-to-day documents and directories. You must ensure that when you design a backup scheme for yourself, you include all important data and store encryption keys and other access information securely.
The best way to keep your work data secure is to use your own service. Running [Nextcloud][3] or [Sparkleshare][4] is easy, and hosting is cheap. Both are automated: files you place in a specially designated directory are synchronized with your server. It's not exactly building your own cloud, but it's a great way to leverage the cloud for your own services. You can make the backup process seamless with tools like [Syncthing, Bacula][5], or [rdiff-backup][6].
Cloud storage enables you to access data from any device at any location, but cloud storage will work only if you have a live communications path to it after a failure event. And not all cloud storage meets the privacy and security specifications for all projects. If your workplace has a cloud backup solution, spend some time learning about the cloud vendor's services and find out what level of availability it promises. Check its track record in achieving it. And be sure to devise an alternate way to access your cloud if your primary communications link fails.
### Local backups
If you store your data on a local device, you'll be responsible for backing it up and recovering it. In that case, back up your data to an alternate device, and verify that you can restore it within your acceptable time frame. This is your _time-to-recovery_.
You'll also need to secure that data and meet any privacy requirements your employer specifies.
#### Acceptable loss
Consider how much data you can afford to lose in the event of an outage. For example, if you back up your data nightly, you could lose as much as one day's work (everything completed since the previous nightly backup). This is your _backup data timeliness_.
Open source offers many free applications for local data backup and recovery. Generally, the same applications used for remote backups can also apply to local backup plans, so take a look at the [Advanced Rsync][7] or the [Syncthing tutorial][8] articles here on Opensource.com.
Many prefer a data strategy that combines both cloud and local storage. Store your data locally, and then use the cloud as a backup (rather than working on the cloud). Or do it the other way around (although automating the cloud to push backups to you is more difficult than automating your local machine to push backups to the cloud). Storing your data in two separate locations gives your data _geographical redundancy_, which is useful should either site become unavailable.
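As a concrete, deliberately minimal example of that "local first, then a second location" pattern, two rsync invocations are enough; you might run them nightly from cron or a systemd timer. The paths and the backup host below are placeholders, not anything the article prescribes:

```
# Mirror the working data to a second local disk (placeholder paths)
rsync -a --delete ~/work/ /mnt/backupdisk/work/

# Push the same tree to a remote machine over SSH for geographical redundancy
# (backup@backuphost is a made-up example host)
rsync -a ~/work/ backup@backuphost:work/
```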
With a little forethought, you can devise a simple plan to access your data regardless of any outage.
### My high-availability strategy
As a practical example, I'll describe my own HA approach. My goals are a time to recovery of an hour or less and backup data timeliness within a day.
![High Availability Strategy][9]
(Howard Fosdick, [CC BY-SA 4.0][10])
#### Hardware
I use an Android smartphone for phone calls and audioconferences. I can access a backup phone from another family member if my primary fails.
Unfortunately, my phone's small size and touch keyboard mean I can't use it as my backup computer. Instead, I rely on a few generic desktop computers that have standard, interchangeable parts. You can easily maintain such hardware with this simple [free how-to guide][11]. You don't need any hardware experience.
Open source software makes my multibox strategy affordable. It runs so efficiently that even [10-year-old computers work fine][12] as backups for typical office work. Mine are dual-core desktops with 4GB of RAM and any disk that cleanly verifies. These are so inexpensive that you can often get them for free from recycling centers. (In my [charity work][13], I find that many people give them away as unsuitable for running current proprietary software, but they're actually in perfect working order given a flexible operating system like Linux.)
Another way to economize is to designate another family member's computer for your shared backups.
#### Systems software and apps
Running open source software on top of this generic hardware enables me to achieve several benefits. First, the flexibility of open source software enables me to address any possible software failure. For example, with simple operating system commands, I can copy, move, back up, and recover the operating system, applications, and data across partitions, disks, or computers. I don't have to worry about software constraints, vendor lock-in, proprietary backup file formats, licensing or activation restrictions, or extra fees.
Another open source benefit is that you control your operating system. If you don't have control over your own system, you could be subject to forced restarts, unexpected and unwanted updates, and forced upgrades. My relative has run into such problems more than once. Without his knowledge or consent, his computer suddenly launched a forced upgrade from Windows 7 to Windows 10, which cost him three days of lost income (and untold frustration). The lesson: Your vendor's agenda may not coincide with your own.
All operating systems have bugs. The difference is that open source software doesn't force you to eat them.
#### Data classification
I use very simple techniques to make my data highly available.
I can't use cloud services for my data due to privacy requirements. Instead, my data "master copy" resides on a USB-connected disk. I plug it into any of several computers. After every session, I back up any altered data on the computer I used.
Of course, this approach is only feasible if your backups run quickly. For most home workers, that's easy. All you have to do is segregate your data by size and how frequently you update it.
Isolate big files like photos, audio, and video into separate folders or partitions. Make sure you back up only the files that are new or modified, not older items that have already been backed up.
Much of my work involves office suites. These generate small files, so I isolate each project in its own folder. For example, I stored the two dozen files I used to write this article in a single subdirectory. Backing it up is as simple as copying that folder.
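As an illustration of how small that end-of-session backup can be, here is a minimal sketch. It assumes the USB master disk is mounted at /media/master and that the backup copy lives on the computer used that day under ~/backup; both paths are made up for the example:

```
# Make sure the local backup location exists
mkdir -p ~/backup/projects

# Copy only new or changed files from the project folder on the master disk
# (-u skips files that are already up to date)
cp -ru /media/master/projects/article-ha ~/backup/projects/

# rsync's archive mode does the same job and prints what changed with -v
rsync -av /media/master/projects/article-ha ~/backup/projects/
```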
Giving a little thought to data segregation and backing up only modified files ensures quick, easy backups for most home workers. My approach is simple; it works best if you only work on a couple of projects in a session. And I can tolerate losing up to a day's work. You can easily automate a more refined backup scheme for yourself.
For software development, I take an entirely different approach. I use software versioning, which transparently handles all software backup issues for me and coordinates with other developers. My HA planning in this area focuses just on ensuring I can access the online tool.
#### Communications
Like many home users, I communicate through both a cellphone network and the internet. If my internet goes down, I can use the cell network instead by tethering my laptop to my Android smartphone.
### Learning from failure
Using my strategy for 15 years, how have I fared? What failures have I experienced, and how did they turn out?
1. **Motherboard burnout:** One day, my computer wouldn't turn on. I simply moved my USB "master data" external disk to another computer and used that. I lost no data. After some investigation, I determined it was a motherboard failure, so I scrapped the computer and used it for parts.
2. **Drive failure:** An internal disk failed while I was working. I just moved my USB master disk to a backup computer. I lost 10 minutes of data updates. After work, I created a new boot disk by copying one from another computer—flexibility that only open source software offers. I used the affected computer the next day.
3. **Fatal software update:** An update caused a failure in an important login service. I shifted to a backup computer where I hadn't yet applied the fatal update. I lost no data. After work, I searched for help with this problem and had it solved in an hour.
4. **Monitor burnout:** My monitor fizzled out. I just swapped in a backup display and kept working. This took 10 minutes. After work, I determined that the problem was a burned-out capacitor, so I recycled the monitor.
5. **Power outage:** Now, here's a situation I didn't plan for! A tornado took down the electrical power in our entire town for two days. I learned that one should think through _all_ possible contingencies—including alternate work sites.
### Make your plan
If you work from home, you need to consider what will happen when your home computer fails. If not, you could experience frustrating workdays off while you scramble to fix the problem.
Open source software is the key. It runs so efficiently on older, cheaper computers that they become affordable backup machines. It offers device independence, and it ensures that you can design solutions that work best for you.
For most people, ensuring high availability is very simple. The trick is thinking about it in advance. Create a plan _and then test it_.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/high-availability-home-office
作者:[Howard Fosdick][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/howtech
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://jitsi.org/downloads/
[3]: https://opensource.com/article/20/7/nextcloud
[4]: https://opensource.com/article/19/4/file-sharing-git
[5]: https://opensource.com/article/19/3/backup-solutions
[6]: https://opensource.com/life/16/3/turn-your-old-raspberry-pi-automatic-backup-server
[7]: https://opensource.com/article/19/5/advanced-rsync
[8]: https://opensource.com/article/18/9/take-control-your-data-syncthing
[9]: https://opensource.com/sites/default/files/uploads/my_ha_strategy.png (High Availability Strategy)
[10]: https://creativecommons.org/licenses/by-sa/4.0/
[11]: http://www.rexxinfo.org/Quick_Guide/Quick_Guide_To_Fixing_Computer_Hardware
[12]: https://opensource.com/article/19/7/how-make-old-computer-useful-again
[13]: https://www.freegeekchicago.org/

View File

@@ -0,0 +1,12 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Try Deno as an alternative to Node.js)
[#]: via: (https://opensource.com/article/21/2/deno)
[#]: author: (Bryant Son https://opensource.com/users/brson)
Try Deno as an alternative to Node.js
======
Deno is a secure runtime for JavaScript and TypeScript.

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Add Fingerprint Login in Ubuntu and Other Linux Distributions)
[#]: via: (https://itsfoss.com/fingerprint-login-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
How to Add Fingerprint Login in Ubuntu and Other Linux Distributions
======
Many high-end laptops come with fingerprint readers these days. Windows and macOS have supported fingerprint login for some time. On desktop Linux, fingerprint login used to be the domain of geeky tweaks, but [GNOME][1] and [KDE][2] have started supporting it through system settings.
This means that on newer Linux distribution versions, you can easily use the fingerprint reader. I am going to enable fingerprint login in Ubuntu here, but you may use the same steps on other distributions running GNOME 3.38.
Prerequisite
This is obvious, of course. Your computer must have a fingerprint reader.
This method works for any Linux distribution running GNOME version 3.38 or higher. If you are not certain, you may [check which desktop environment version you are using][3].
KDE 5.21 also has a fingerprint manager. The screenshots will look different, of course.
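If you would like to confirm from a terminal that the reader is actually detected before opening Settings, the fprintd tools that GNOME uses under the hood can be queried directly. This is an optional check, and package names vary by distribution:

```
# See whether a fingerprint device shows up at all
# (not every reader will match this grep pattern)
lsusb | grep -i finger

# List the fingers already enrolled for your user
fprintd-list "$USER"

# Enroll a finger from the command line instead of the Settings app
fprintd-enroll
```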
### Adding fingerprint login in Ubuntu and other Linux distributions
Go to **Settings** and then click on **Users** in the left sidebar. You should see all the user accounts on your system here. You'll see several options, including **Fingerprint Login**.
Click on the Fingerprint Login option here.
![Enable fingerprint login in Ubuntu][4]
It will immediately ask you to scan a new fingerprint. When you click the + sign to add a fingerprint, it presents a few predefined options so that you can easily identify which finger or thumb it is.
You may, of course, scan your left thumb after clicking right index finger, though I don't see a good reason why you would want to do that.
![Adding fingerprint][5]
While adding the fingerprint, rotate your finger or thumb as directed.
![Rotate your finger][6]
Once the system registers the entire finger, it will give you a green signal that the fingerprint has been added.
![Fingerprint successfully added][7]
If you want to test it right away, lock the screen by pressing the Super+L keyboard shortcut in Ubuntu and then use your fingerprint to log in.
![Login With Fingerprint in Ubuntu][8]
#### Experience with fingerprint login on Ubuntu
Fingerprint login is what its name suggests: logging in with your fingerprint. That's it. You cannot use your finger when you are asked to authenticate for programs that need sudo access. It's not a replacement for your password.
One more thing: the [keyring in Ubuntu][9] also remains locked when you log in with your fingerprint.
Another annoyance comes from GNOME's GDM login screen. When you log in, you have to click on your account first to get to the password screen, and that is where you can use your finger. It would have been nicer not to be bothered with clicking the user account first.
I also noticed that fingerprint reading is not as smooth and quick as it is in Windows. It works, though.
If you are somewhat disappointed with the fingerprint login on Linux, you may disable it. Let me show you the steps in the next section.
### Disable fingerprint login
Disabling fingerprint login is pretty much the same as enabling it in the first place.
Go to **Settings → Users** and then click on the Fingerprint Login option. It will show a screen with options to add more fingerprints or delete the existing ones. You need to delete the existing fingerprints.
![Disable Fingerprint Login][10]
Fingerprint login does have some benefits, especially for lazy people like me. I don't have to type my password every time I unlock the screen, and I am happy with the limited usage.
Enabling sudo with a fingerprint should not be entirely impossible with [PAM][11]. I remember that when I [set up face unlock in Ubuntu][12], it could be used with sudo as well. Let's see if future versions add this feature.
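For the record, wiring the fingerprint into sudo via PAM is already possible on many setups; a hedged sketch for Ubuntu follows. It assumes the libpam-fprintd package and goes beyond what the Settings app configures, so treat it as an experiment rather than a supported feature:

```
# Install the PAM module that exposes fprintd to PAM (Ubuntu/Debian package name)
sudo apt install libpam-fprintd

# Enable the "Fingerprint authentication" profile for PAM-aware tools such as sudo
sudo pam-auth-update

# On distributions without pam-auth-update, the usual manual approach is to add
#   auth  sufficient  pam_fprintd.so
# near the top of /etc/pam.d/sudo
```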
Do you have a laptop with a fingerprint reader? Do you use it often, or is it just one of those things you don't care about?
--------------------------------------------------------------------------------
via: https://itsfoss.com/fingerprint-login-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.gnome.org/
[2]: https://kde.org/
[3]: https://itsfoss.com/find-desktop-environment/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/enable-fingerprint-ubuntu.png?resize=800%2C607&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/adding-fingerprint-login-ubuntu.png?resize=800%2C496&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/adding-fingerprint-ubuntu-linux.png?resize=800%2C603&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/fingerprint-added-ubuntu.png?resize=797%2C510&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/login-with-fingerprint-ubuntu.jpg?resize=800%2C320&ssl=1
[9]: https://itsfoss.com/ubuntu-keyring/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/disable-fingerprint-login.png?resize=798%2C524&ssl=1
[11]: https://tldp.org/HOWTO/User-Authentication-HOWTO/x115.html
[12]: https://itsfoss.com/face-unlock-ubuntu/

View File

@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: (scvoet)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Where are all the IoT experts going to come from?)
[#]: via: (https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
物联网专家都从何而来?
======
物联网 (IoT) 的快速发展催生了对跨职能专家进行培养的需求,这些专家可以将传统的网络和基础设施专业知识与数据库和报告技能相结合。
![Kevin \(CC0\)][1]
如果物联网 (IoT) 要兑现其宏伟的承诺,就需要大批聪明、熟练、**训练有素**的工作者来实现这一切。而现在,这些人将从何而来还不完全清楚。
这就是为什么我有兴趣同资产优化软件公司 [AspenTech][2] 的产品管理与研发高级总监基思·弗林 (Keith Flynn) 通邮件。他说,当面对大量属于物联网范畴的新技术时,你需要能够理解如何配置这些技术、如何解释数据的人。弗林认为,现有的教育机构越来越需要开设物联网专项课程,这同时也给专注于物联网、提供完善课程的新私立学院带来了机会。
弗林跟我说,“在未来,物联网项目将与如今常见的数据管理和自动化项目有着巨大的不同......未来需要更全面的技能组合和跨领域的能力,这样我们才能说同一种语言。”
**【参见: [有雄心壮志的 IT 专业人才应该争取的 20 个热门职位][3]】**
弗林补充说,随着物联网每年增长 30%,将不再依赖于几个特定的技能,“从传统的部署技能(如网络和基础设施)到数据库和报告技能,坦白说,甚至是基础数据科学,都将需要一起理解和使用。”
### 召集所有物联网顾问
弗林预测,“受过物联网教育的人的第一个大机会将出现在咨询领域。随着咨询公司适应行业趋势或被淘汰......拥有受过物联网培训的员工将有助于他们承接物联网项目,并在物联网咨询这一新业务线中占据一席之地。”
对初创企业和小型公司而言,这个问题尤为严重。“组织越大,他们越有可能雇到覆盖不同技能领域的人”,弗林说道,“但对于较小的组织和较小的物联网项目来说,你需要的是一个能身兼数职的人。”
两者兼而有之?还是**一应俱全?**物联网“需要将所有知识和技能组合在一起”,弗林说道,“并不是所有技能都是全新的,只是在此之前它们从来没有被归纳在一起或放在一起教授过。”
**【[想在技术领域提升自己的事业?这个全面的在线课程会教您该怎么做。][4]】**
### 未来的物联网专家
弗林表示,真正的物联网专业技术是从基础的仪器仪表和电气技能开始的,这能帮助工人发明新的无线发射器或改进技术,以延长电池寿命、降低功耗。
“网络、IP 寻址、子网掩码、蜂窝和卫星等 IT 技能也是物联网的关键需求”,弗林说。他还认为,物联网需要数据库管理技能、云管理和安全方面的专业知识,“特别是当高级过程控制 (APC) 将传感器数据直接发送到数据库和数据湖之类的事情成为常态时。”
### 物联网专家又从何而来?
弗林说,标准化的正规教育课程将是确保毕业生或证书持有者掌握一套正确技能的最佳途径。他甚至还列出了一个课程示例。“按先后顺序,从基础知识开始,比如电气与仪表(E&I)和测量;然后讲授网络知识,数据库管理和云计算课程则应在此之后开展。这个学位甚至可以循序渐进地融入现有的工程课程中,可能需要两年时间......来完成物联网部分的学业。”
虽然企业培训也能发挥作用,但实际上却是“说起来容易做起来难”,弗林警告说,“这类培训需要组织有针对性地去推动才行。”
当然,现在市面上已经有了[大量的在线物联网培训课程和证书课程][5]。但归根到底,这主要取决于从业者自身的主动性。
“在这个世界上,随着科技不断改变行业,提升技能是非常重要的”,弗林说,“如果这种提升技能的推动力并不是来源于你的雇主,那么在线课程和认证将会是提升你自己很好的一个方式。我们只需要创建这些课程......我甚至可以预见组织将与提供这些课程的高等教育机构合作,让他们的员工更好地开始。当然,物联网课程的挑战在于它需要不断发展以跟上科技的发展。”
**【有关物联网的更多信息,请参阅[在网络上确保物联网安全的提醒][6],我们的[最强大的物联网公司][7]列表,并了解[工业物联网][8]。 | 通过[注册网络世界新闻通讯][9]定期获取见解。】**
参与[脸书][10]和[领英][11]上的网络世界社区,对最重要的话题进行评论。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[Percy (@scvoet)](https://github.com/scvoet)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/programmer_certification-skills_code_devops_glasses_student_by-kevin-unsplash-100764315-large.jpg
[2]: https://www.aspentech.com/
[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career
[5]: https://www.google.com/search?client=firefox-b-1-d&q=iot+training
[6]: https://www.networkworld.com/article/3254185/internet-of-things/tips-for-securing-iot-on-your-network.html#nww-fsb
[7]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
[8]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb
[9]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,47 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to tell if implementing your Python code is a good idea)
[#]: via: (https://opensource.com/article/19/12/zen-python-implementation)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
如何判断实现你的 Python 代码是否是个好主意?
======
> 这是 Python 之禅特别系列的一部分,重点介绍第十七和十八条原则:困难和容易。
!["开发人员和运营经理两个人之间的砖墙"][1]
一门语言并不是抽象存在的。每一个语言功能都必须用代码来实现。承诺一些功能是很容易的,但实现起来就会很麻烦。复杂的实现意味着更多潜在的 bug甚至更糟糕的是会带来日复一日的维护负担。
对于这个难题,[Python 之禅][2] 中有答案。
### <ruby>如果一个实现难以解释,那就是个坏思路<rt>If the implementation is hard to explain, it's a bad idea</rt></ruby>
编程语言最重要的是可预测性。有时我们用抽象的编程模型来解释某个结构的语义,而这些模型与实现并不完全对应。然而,最好的解释只是*解释实现*。
如果一个实现很难解释,那就意味着这条路是行不通的。
### <ruby>如果一个实现易于解释,那它可能是一个好思路<rt>If the implementation is easy to explain, it may be a good idea</rt></ruby>
仅仅因为某事容易,并不意味着它值得。然而,一旦解释清楚,判断它是否是一个好思路就容易得多。
这也是为什么这个原则的后半部分故意含糊其辞的原因:没有什么可以肯定一定是好的,但总是可以讨论一下。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/zen-python-implementation
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops_confusion_wall_questions.png?itok=zLS7K2JG (Brick wall between two people, a developer and an operations manager)
[2]: https://www.python.org/dev/peps/pep-0020/

View File

@ -1,96 +1,90 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ansible Playbooks Quick Start Guide with Examples)
[#]: via: (https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
[#]: collector: "lujun9972"
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "Ansible Playbooks Quick Start Guide with Examples"
[#]: via: "https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/"
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
Ansible Playbooks Quick Start Guide with Examples
Ansible 剧本快速入门指南
======
We have already written two articles about Ansible, this is the third article.
我们已经写了两篇关于 Ansible 的文章,这是第三篇。
If you are new to Ansible, I advise you to read the two topics below, which will teach you the basics of Ansible and what it is.
如果你是 Ansible 新手,我建议你阅读下面这两篇文章,它会教你一些 Ansible 的基础以及它是什么。
* **Part-1: [How to Install and Configure Ansible on Linux][1]**
* **Part-2: [Ansible ad-hoc Command Quick Start Guide][2]**
* **第一篇: [如何在 Linux 上安装和配置 Ansible][1]**
* **第二篇: [Ansible ad-hoc 命令快速入门指南][2]**
如果你已经读过这两篇文章,那么在阅读本文时就会感到十分连贯。
### 什么是 Ansible 剧本?
If you have finished them, you will feel the continuity as you read this article.
剧本比临时命令模式强大得多,而且是一种完全不同的使用方式。
### What is the Ansible Playbook?
它使用了 **"/usr/bin/ansible-playbook"** 二进制文件,并且提供丰富的特性使得复杂的任务变得更容易。
Playbooks are much more powerful and completely different way than ad-hoc command mode.
如果你想经常运行一个任务,剧本是非常有用的。
It uses the **“/usr/bin/ansible-playbook”** binary. It provides rich features to make complex task easier.
此外,如果你想在一组服务器上同时执行多个任务,它也非常有用。
Playbooks are very useful if you want to run a task often.
剧本是用 YAML 语言编写的。YAML 的意思是“YAML 不是标记语言”(YAML Ain't Markup Language),它比 XML 或 JSON 等其它常见的数据格式更容易读写。
Also, this is useful if you want to perform multiple tasks at the same time on the group of server.
Playbooks are written in YAML language. YAML stands for Aint Markup Language, which is easier for humans to read and write than other common data formats such as XML or JSON.
The Ansible Playbook Flow Chart below will tell you its detailed structure.
下面这张 Ansible 剧本流程图将告诉你它的详细结构。
![][3]
### Understanding the Ansible Playbooks Terminology
### 理解 Ansible 剧本的术语
* **Control Node:** The machine where Ansible is installed. It is responsible for managing client nodes.
* **Managed Nodes:** List of hosts managed by the control node
* **Playbook:** A Playbook file contains a set of procedures used to automate a task.
* **Inventory:** The inventory file contains information about the servers you manage.
* **Task:** Each play has multiple tasks, tasks that are executed one by one against a given machine (it a host or multiple host or a group of host).
* **Module:** Modules are a unit of code that is used to gather information from the client node.
* **Role:** Roles are ways to automatically load some vars_files, tasks, and handlers based on known file structure.
* **Play:** Each playbook has multiple plays, and a play is the implementation of a particular automation from beginning to end.
* **Handlers:** This helps you reduce any service restart in a play. Lists of handler tasks are not really different from regular tasks, and changes are notified by notifiers. If the handler does not receive any notification, it will not work.
* **控制节点:** 安装了 Ansible 的机器,它负责管理客户端节点。
* **被控节点:** 由控制节点管理的主机列表。
* **剧本:** 一个剧本文件,包含一组用于自动化任务的过程。
* **主机清单:** 这个文件包含你所管理的服务器的信息。
* **任务:** 每个 play 都包含多个任务,这些任务会在指定的机器上依次执行(可以是一台主机、多台主机或一组主机)。
* **模块:** 模块是一个代码单元,用于从客户端节点收集信息。
* **角色:** 角色是根据已知的文件结构自动加载一些变量文件、任务和处理程序的方式。
* **Play:** 每个剧本含有多个 play一个 play 从头到尾完成一项特定的自动化工作。
* **Handlers:** 它可以帮助你减少 play 中不必要的服务重启。处理程序的任务列表与常规任务并没有什么不同,它们由其它任务通过通知来触发。如果处理程序没有收到任何通知,它就不会执行。
### 基本的剧本是怎样的?
下面是一个剧本的模板:
### How Does the Basic Playbook looks Like?
Heres how the basic playbook looks.
```
--- [YAML file should begin with a three dash]
- name: [Description about a script]
hosts: group [Add a host or host group]
become: true [It requires if you want to run a task as a root user]
tasks: [What action do you want to perform under task]
- name: [Enter the module options]
module: [Enter a module, which you want to perform]
module_options-1: value [Enter the module options]
```yaml
--- [YAML 文件应该以三个破折号开头]
- name: [脚本描述]
hosts: group [添加主机或主机组]
become: true [如果你想以 root 身份运行任务,则标记它]
tasks: [你想在任务下执行什么动作]
- name: [输入模块选项]
module: [输入要执行的模块]
module_options-1: value [输入模块选项]
module_options-2: value
.
module_options-N: value
```
### How to Understand Ansible Output
### 如何理解 Ansible 的输出
The Ansible Playbook output comes with 4 colors, see below for color definitions.
Ansible 剧本的输出有四种颜色,下面是具体含义:
* **Green:** **ok ** If that is correct, the associated task data already exists and configured as needed.
* **Yellow: changed ** Specific data has updated or modified according to the needs of the tasks.
* **Red: FAILED ** If there is any problem while doing a task, it returns a failure message, it may be anything and you need to fix it accordingly.
* **White:** It comes with multiple parameters
* **绿色ok** 代表成功,关联的任务数据已经存在,并且已经按需要完成了配置。
* **黄色changed** 指定的数据已经根据任务的需要更新或修改。
* **红色FAILED** 如果在执行任务时出现任何问题,它将返回一个失败消息,原因可能是任何问题,你需要相应地修复它。
* **白色:** 表示有多个参数。
为此,创建一个剧本目录,将它们都放在同一个地方。
To do so, create a playbook directory to keep them all in one place.
```
```bash
$ sudo mkdir /etc/ansible/playbooks
```
### Playbook-1: Ansible Playbook to Install Apache Web Server on RHEL Based Systems
### 剧本-1在 RHEL 系统上安装 Apache Web 服务器
This sample playbook allows you to install the Apache web server on a given target node.
这个示例剧本允许你在指定的目标机器上安装 Apache Web 服务器:
```
```bash
$ sudo nano /etc/ansible/playbooks/apache.yml
---
@ -108,17 +102,17 @@ $ sudo nano /etc/ansible/playbooks/apache.yml
state: started
```
```
```bash
$ ansible-playbook apache1.yml
```
![][3]
### How to Understand Playbook Execution in Ansible
### 如何理解 Ansible 中剧本的执行
To check the syntax error, run the following command. If it finds no error, it only shows the given file name. If it detects any error, you will get an error as follows, but the contents may differ based on your input file.
使用以下命令来查看语法错误。如果没有发现错误,它只显示剧本文件名。如果它检测到任何错误,你将得到一个如下所示的错误,但内容可能根据你的输入文件而有所不同。
```
```bash
$ ansible-playbook apache1.yml --syntax-check
ERROR! Syntax Error while loading YAML.
@ -143,11 +137,11 @@ Should be written as:
# ^--- all spaces here.
```
Alternatively, you can check your ansible-playbook content from online using the following url @ [YAML Lint][4]
或者,你可以使用这个 URL [YAML Lint][4] 在线检查 Ansible 剧本内容。
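另外,在正式执行之前,你还可以先预览一下剧本会涉及哪些任务和主机。下面以前面的 apache.yml 为例,给出一个简单的示例:

```bash
$ ansible-playbook apache.yml --list-tasks   # 列出剧本中将要执行的任务
$ ansible-playbook apache.yml --list-hosts   # 列出剧本将作用的主机
```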
Run the following command to perform a **“Dry Run”**. When you run a ansible-playbook with the **check”** option, it does not make any changes to the remote machine. Instead, it will tell you what changes they have made rather than create them.
执行以下命令进行**“演练”**dry run。当你运行带有 **`--check`** 选项的剧本时,它不会对远程机器进行任何修改,而是会告诉你它将会做哪些改变,但不会真正执行。
```
```bash
$ ansible-playbook apache.yml --check
PLAY [Install and Configure Apache Webserver] ********************************************************************
@ -169,9 +163,9 @@ node1.2g.lab : ok=3 changed=2 unreachable=0 failed=0 s
node2.2g.lab : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
If you want detailed information about your ansible playbook implementation, use the **“-vv”** verbose option. It shows what it really does to gather this information.
如果你想知道 Ansible 剧本执行的详细信息,可以使用 **`-vv`** 详细模式选项,它会展示剧本在收集这些信息时实际做了什么。
```
```bash
$ ansible-playbook apache.yml --check -vv
ansible-playbook 2.9.2
@ -212,11 +206,11 @@ node1.2g.lab : ok=3 changed=2 unreachable=0 failed=0 s
node2.2g.lab : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Playbook-2: Ansible Playbook to Install Apache Web Server on Ubuntu Based Systems
### 剧本-2在 Ubuntu 系统上安装 Apache Web 服务器
This sample playbook allows you to install the Apache web server on a given target node.
这个示例剧本允许你在指定的目标节点上安装 Apache Web 服务器。
```
```bash
$ sudo nano /etc/ansible/playbooks/apache-ubuntu.yml
---
@ -250,13 +244,13 @@ $ sudo nano /etc/ansible/playbooks/apache-ubuntu.yml
enabled: yes
```
### Playbook-3: Ansible Playbook to Install a List of Packages on Red Hat Based Systems
### 剧本-3在 Red Hat 系统上安装软件包列表
This sample playbook allows you to install a list of packages on a given target node.
这个示例剧本允许你在指定的目标节点上安装一系列软件包。
**Method-1:**
**方法-1:**
```
```bash
$ sudo nano /etc/ansible/playbooks/packages-redhat.yml
---
@ -273,9 +267,9 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat.yml
- htop
```
**Method-2:**
**方法-2:**
```
```bash
$ sudo nano /etc/ansible/playbooks/packages-redhat-1.yml
---
@ -292,9 +286,9 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat-1.yml
- htop
```
**Method-3: Using Array Variable**
**方法-3: 使用数组变量**
```
```bash
$ sudo nano /etc/ansible/playbooks/packages-redhat-2.yml
---
@ -309,11 +303,11 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat-2.yml
with_items: "{{ packages }}"
```
### Playbook-4: Ansible Playbook to Install Updates on Linux Systems
### 剧本-4在 Linux 系统上安装更新
This sample playbook allows you to install updates on your Linux systems, running Red Hat and Debian-based client nodes.
这个示例剧本允许你在基于 Red Hat 或 Debian 的 Linux 系统上安装更新。
```
```bash
$ sudo nano /etc/ansible/playbooks/security-update.yml
---
@ -336,7 +330,7 @@ via: https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,130 @@
[#]: collector: (lujun9972)
[#]: translator: (Chao-zhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension)
[#]: via: (https://itsfoss.com/material-shell/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
使用 Material Shell 扩展将你的 GNOME 桌面打造成平铺式风格
======
平铺式窗口有着一批忠实的追随者。也许是因为它很好看,也许是因为它能让惯用 [Linux 键盘快捷键][1] 的用户提高效率,又或者是因为驾驭与众不同的平铺式窗口本身就是一种挑战。
![Tiling Windows in Linux | Image Source][2]
从 i3 到 [Sway][3]Linux 桌面上有各种各样的平铺式窗口管理器,但配置平铺式窗口管理器往往要经历陡峭的学习曲线。
这就是为什么会有像 [Regolith desktop][4] 这样的项目存在,它给你提供一个预先配置好的平铺式桌面,让你不需要太多的准备就可以直接开始使用。
让我给你介绍一个相似的项目 —— Material Shell。它可以让你用上平铺式桌面而且比 [Regolith][5] 还要简单。
### Material Shell 扩展:将 GNOME 桌面转变成平铺式窗口管理器
[Material Shell][6] 是一个 GNOME 扩展,这正是它最大的优点:你不需要注销再登录其他桌面环境,只需要启用或禁用这个扩展,就可以自如地切换工作环境。
我会列出 Material Shell 的各种特性,但是也许视频更容易让你理解:
[订阅我们的 YouTube 频道,获取更多 Linux 视频][7]
这个项目之所以叫做 Material Shell是因为它遵循 [Material Design][8] 原则,因此这个扩展拥有美观的界面。这也是它最重要的特性之一。
#### 直观的界面
Material Shell 添加了一个可以快速访问的左侧面板。在这个面板上,底部是系统托盘,顶部是搜索和工作区。
所有新打开的应用都会添加到当前工作区中。你也可以创建新的工作区并切换过去,把正在运行的应用分类管理,这本来就是工作区的意义所在。
在 Material Shell 中,每个工作区显示为一行,其中可以容纳多个应用程序,而不是一个堆放着多个应用程序的方框。
#### 平铺式窗口
在工作区中,你可以看到所有打开的应用程序都在顶部。默认情况下,应用程序会像在 GNOME desktop 中那样铺满整个屏幕。你可以使用右上角的布局改变器来改变布局,将其分成两半、多列或多个应用网格。
上面提到的视频一目了然地展示了以上所有功能。
#### 固定布局和工作区
Material Shell 会记住你打开的工作区和窗口,这样你就不必重新组织你的布局。如果你对应用程序摆放的位置有要求,这个特性可以帮你节省时间。
#### 热键/快捷键
像其它平铺式窗口管理器一样,你可以使用键盘快捷键在应用程序和工作区之间切换。
* `Super+W` 切换到上个工作区;
* `Super+S` 切换到下个工作区;
* `Super+A` 切换到左边的窗口;
* `Super+D` 切换到右边的窗口;
* `Super+1`、`Super+2` … `Super+0` 切换到某个指定的工作区;
* `Super+Q` 关闭当前窗口;
* `Super+[MouseDrag]` 移动窗口;
* `Super+Shift+A` 将当前窗口左移;
* `Super+Shift+D` 将当前窗口右移;
* `Super+Shift+W` 将当前窗口上移;
* `Super+Shift+S` 将当前窗口下移。
### 安装 Material Shell
警告!
对于大多数用户来说,平铺式窗口可能会导致混乱。你最好先熟悉如何使用 GNOME 扩展。如果你是 Linux 新手或者你害怕你的系统发生翻天覆地的变化,你应当避免使用这个扩展。
Material Shell 是一个 GNOME 扩展。所以,请先[检查你的桌面环境][9],确保它是 _**GNOME 3.34 或者更高的版本**_。
我还想补充一点,平铺窗口可能会让许多用户感到困惑。
除此之外,我注意到在禁用 Material Shell 之后,它会导致 Firefox 和 Ubuntu dock 的顶栏消失。你可以在 GNOME 的扩展应用程序中禁用/启用 Ubuntu 的 dock 扩展来使其变回原来的样子。我想这些问题也应该在系统重启后消失,虽然我没试过。
我希望你知道[如何使用 GNOME 扩展][10]。最简单的办法就是[在浏览器中打开这个链接][11],安装 GNOME 扩展浏览器插件,然后启用 Material Shell 扩展。
![][12]
如果你不喜欢这个扩展,你也可以在同样的链接中禁用它。或者在 GNOME 扩展程序中禁用它。
![][13]
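如果你更习惯命令行GNOME 3.34 及以上版本自带的 `gnome-extensions` 工具也可以用来启用或禁用扩展。下面只是一个大致的示例,扩展的具体 UUID 以 `list` 命令实际列出的结果为准:

```
# 列出已安装的扩展,找到 Material Shell 对应的 UUID
gnome-extensions list | grep -i material

# 用上一步查到的 UUID 替换下面的占位符,禁用或重新启用该扩展
gnome-extensions disable <扩展UUID>
gnome-extensions enable <扩展UUID>
```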
**要不要使用平铺式?**
我使用多个电脑屏幕,我发现 Material Shell 不适用于多个屏幕的情况。这是开发者将来可以改进的地方。
除了这个毛病以外Material Shell 是个让你开始使用平铺式窗口的好东西。如果你尝试了 Material Shell 并且喜欢它,请通过[给它一个星或在 GitHub 上赞助它 ][14] 来鼓励这个项目。
由于种种原因,平铺式窗口越来越受欢迎。最近发布的 [Pop OS 20.04][15] 也增加了平铺式窗口的功能。
但正如我前面提到的,平铺布局并不适合所有人,它可能会让很多人感到困惑。
你呢?你是喜欢平铺窗口还是喜欢经典的桌面布局?
--------------------------------------------------------------------------------
via: https://itsfoss.com/material-shell/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[Chao-zhi](https://github.com/Chao-zhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-shortcuts/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-ricing-example-800x450.jpg?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/sway-window-manager/
[4]: https://itsfoss.com/regolith-linux-desktop/
[5]: https://regolith-linux.org/
[6]: https://material-shell.com
[7]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[8]: https://material.io/
[9]: https://itsfoss.com/find-desktop-environment/
[10]: https://itsfoss.com/gnome-shell-extensions/
[11]: https://extensions.gnome.org/extension/3357/material-shell/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/install-material-shell.png?resize=800%2C307&ssl=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/material-shell-gnome-extension.png?resize=799%2C497&ssl=1
[14]: https://github.com/material-shell/material-shell
[15]: https://itsfoss.com/pop-os-20-04-review/

View File

@ -0,0 +1,184 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage containers with Podman Compose)
[#]: via: (https://fedoramagazine.org/manage-containers-with-podman-compose/)
[#]: author: (Mehdi Haghgoo https://fedoramagazine.org/author/powergame/)
用 Podman Compose 管理容器
======
![][1]
容器很棒,让你可以将你的应用连同其依赖项一起打包,并在任何地方运行。从 2013 年的 Docker 开始,容器已经让软件开发者的生活变得更加轻松。
Docker 的一个缺点是它有一个中央守护进程,并且以 root 用户的身份运行,这会带来安全隐患。而这正是 Podman 的用武之地。Podman 是一个[无守护进程的容器引擎][2],可以在你的 Linux 系统上以 root 或非 root 模式开发、管理和运行 OCI 容器。
在 Fedora Magazine 上还有其他文章,你可以用来了解更多关于 Podman 的信息。下面有两个例子:
* [在 Fedora 上使用 Podman 的 Pod][3]
* [在 Fedora 上具有 Capabilities 的 Podman][4]
如果你使用过 Docker你很可能也知道 Docker Compose它是一个用于编排多个可能相互依赖的容器的工具。要了解更多关于 Docker Compose 的信息,请看它的[文档][5]。
### 什么是 Podman Compose
[Podman Compose][6] 项目的目标是成为 Docker Compose 的替代品,而不需要对 docker-compose.yaml 文件进行任何修改。由于 Podman Compose 是用 pod 来工作的,所以最好先看一下 pod 的最新定义。
> 一个_Pod_(如一群鲸鱼或豌豆荚)是由一个或多个[容器][7]组成的组,具有共享的存储/网络资源,以及如何运行容器的规范。
>
> [Pods - Kubernetes 文档][8]
Podman Compose 的基本思想是:它读取 _docker-compose.yaml_ 文件里面定义的服务,并为每个服务创建一个容器。Docker Compose 和 Podman Compose 的一个主要区别是Podman Compose 会把整个项目的容器添加到一个单一的 pod 中,而且所有的容器共享同一个网络。它甚至采用和 Docker Compose 一样的方式命名容器,并在创建容器时使用 _--add-host_ 标志,你会在后面的例子中看到这一点。
### 安装
Podman Compose 的完整安装说明可以在[项目页面][6]上找到,它有几种方法。要安装最新的开发版本,使用以下命令:
```
pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
```
确保你也安装了 [Podman][9],因为你同样需要它。在 Fedora 上,可以使用下面的命令来安装 Podman
```
sudo dnf install podman
```
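安装完成后,可以简单确认一下两者都已经可用(版本号视你的环境而定):

```
podman --version
podman-compose --version
```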
### 例子:用 Podman Compose 启动一个 WordPress 网站
想象一下,你的 _docker-compose.yaml_ 文件放在一个叫 _wpsite_ 的文件夹里。一个典型的 WordPress 网站的 _docker-compose.yaml_(或 _docker-compose.yml_文件是这样的
```
version: "3.8"
services:
web:
image: wordpress
restart: always
volumes:
- wordpress:/var/www/html
ports:
- 8080:80
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: magazine
WORDPRESS_DB_NAME: magazine
WORDPRESS_DB_PASSWORD: 1maGazine!
WORDPRESS_TABLE_PREFIX: cz
WORDPRESS_DEBUG: 0
depends_on:
- db
networks:
- wpnet
db:
image: mariadb:10.5
restart: always
ports:
- 6603:3306
volumes:
- wpdbvol:/var/lib/mysql
environment:
MYSQL_DATABASE: magazine
MYSQL_USER: magazine
MYSQL_PASSWORD: 1maGazine!
MYSQL_ROOT_PASSWORD: 1maGazine!
networks:
- wpnet
volumes:
wordpress: {}
wpdbvol: {}
networks:
wpnet: {}
```
如果你用过 Docker你就会知道可以运行 _docker-compose up_ 来启动这些服务。Docker Compose 会创建两个名为 _wpsite_web_1_ 和 _wpsite_db_1_ 的容器,并将它们连接到一个名为 _wpsite_wpnet_ 的网络。
现在,看看当你在项目目录下运行 _podman-compose up_ 时会发生什么。首先,它会创建一个 pod以你执行命令所在的目录命名。接下来它会查找 YAML 文件中定义的具名卷,如果卷不存在就创建它们。然后,在 YAML 文件的 _services_ 部分列出的每个服务都会对应创建一个容器,并添加到这个 pod 中。
容器的命名与 Docker Compose 类似。例如,它会为你的 web 服务创建一个名为 _wpsite_web_1_ 的容器。Podman Compose 还为每个命名的容器添加了 localhost 别名。这样,容器之间仍然可以通过名字互相解析,尽管它们并不像 Docker 那样处于一个桥接网络上。要做到这一点Podman Compose 在创建容器时使用了 _--add-host_ 选项,例如:_--add-host web:localhost_。
请注意,_docker-compose.yaml_ 包含了从主机的 8080 端口到容器的 80 端口的端口转发,用于 Web 服务。现在你应该可以通过浏览器访问新的 WordPress 实例,地址为 _<http://localhost:8080>_。
![WordPress Dashboard][10]
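如果你想在命令行里快速确认服务已经起来了,也可以用 curl 查看一下响应头假设系统中安装了 curl只要返回了 HTTP 响应头,就说明 Web 服务在正常监听:

```
# 检查 WordPress 是否已经在 8080 端口上响应
curl -I http://localhost:8080
```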
### 控制 pod 和容器
要查看正在运行的容器,使用 _podman ps_,它可以显示 web 和数据库容器以及 pod 中的 infra 容器。
```
CONTAINER ID  IMAGE                               COMMAND               CREATED      STATUS          PORTS                                         NAMES
a364a8d7cec7  docker.io/library/wordpress:latest  apache2-foregroun...  2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_web_1
c447024aa104  docker.io/library/mariadb:10.5      mysqld                2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_db_1
12b1e3418e3e  k8s.gcr.io/pause:3.2
```
你也可以(例如用 _podman pod ps_ 命令)验证 Podman 已经为这个项目创建了一个 pod它以你执行命令的文件夹命名。
```
POD ID        NAME             STATUS    CREATED      INFRA ID      # OF CONTAINERS
8a08a3a7773e  wpsite           Degraded  2 hours ago  12b1e3418e3e  3
```
要停止容器,在另一个命令窗口中输入以下命令:
```
podman-compose down
```
你也可以通过停止和删除 pod 来实现。这实质上是先停止并移除所有的容器,然后再删除这个 pod。所以同样的事情也可以通过这些命令来实现
```
podman pod stop podname
podman pod rm podname
```
请注意,这不会删除你在 _docker-compose.yaml_ 中定义的卷。所以,你的 WordPress 网站的状态被保留了下来,你可以通过运行下面这个命令来恢复它:
```
podman-compose up
```
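如果想确认卷确实被保留了下来,可以列出 Podman 管理的卷。Podman Compose 通常会以项目名作为卷名前缀,具体名称以实际输出为准:

```
# 列出卷,项目相关的卷名通常类似 wpsite_wordpress、wpsite_wpdbvol
podman volume ls
```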
总之,如果你是一个 Podman 粉丝,并且用 Podman 做容器工作,你可以使用 Podman Compose 来管理你的开发和生产中的容器。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/manage-containers-with-podman-compose/
作者:[Mehdi Haghgoo][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/powergame/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/01/podman-compose-1-816x345.jpg
[2]: https://podman.io
[3]: https://fedoramagazine.org/podman-pods-fedora-containers/
[4]: https://fedoramagazine.org/podman-with-capabilities-on-fedora/
[5]: https://docs.docker.com/compose/
[6]: https://github.com/containers/podman-compose
[7]: https://kubernetes.io/docs/concepts/containers/
[8]: https://kubernetes.io/docs/concepts/workloads/pods/
[9]: https://podman.io/getting-started/installation
[10]: https://fedoramagazine.org/wp-content/uploads/2021/01/Screenshot-from-2021-01-08-06-27-29-1024x767.png