Merge remote-tracking branch 'LCTT/master'

Xingyu Wang 2020-06-18 07:51:52 +08:00
commit f55af42628
3 changed files with 566 additions and 0 deletions


@@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 essential tools to set up your Python environment for success)
[#]: via: (https://opensource.com/article/20/6/python-tools)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
4 essential tools to set up your Python environment for success
======
This selection of tools will streamline your Python environment for
smooth and consistent development practices.
![Python programming language logo with question marks][1]
Python is a wonderful general-purpose programming language, often taught as a first programming language. Twenty years in, multiple books written, and it remains [my language of choice][2]. While the language is often said to be straightforward, configuring Python for development has not been described as such (as documented by [xkcd][3]).
![xkcd python illustration][4]
A complex Python environment: [xkcd][3]
There are many ways to use Python in your day-to-day life. I will explain how I use the Python ecosystem tools, and I will be honest about where I am still looking for alternatives.
### Use pyenv to manage Python versions
The best way I have found to get a Python version working on your machine is `pyenv`. This software will work on Linux, Mac OS X, and WSL2: the three "UNIX-like" environments that I usually care about.
Installing `pyenv` itself can be a little tricky at times. One way is to use the dedicated [pyenv installer][5], which uses a `curl | bash` method to bootstrap (see the instructions for more details).
If you're on a Mac (or another system where you run Homebrew), you can follow instructions on how to install and use pyenv [here][6].
Once you install and set up `pyenv` per the directions, you can use `pyenv global` to set a "default Python" version. In general, you will want to select your "favorite" version. This will usually be the latest stable, but other considerations can change that.
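As a quick sketch of that workflow (the version number here is only an example; install whichever release you want as your default):
```
$ pyenv install 3.8.3      # build and install that interpreter under ~/.pyenv
$ pyenv global 3.8.3       # make it the default "python" for your user
$ python --version
Python 3.8.3
```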
### Make virtual environments simpler with virtualenvwrapper
One advantage of using `pyenv` to install Python is that all subsequent Python interpreter installations you care about are owned by you instead of the operating system you use.
Though installing things inside Python itself is usually not the best option, there is one exception: in your "favorite" Python chosen above, install and configure `virtualenvwrapper`. This gives you the ability to create and switch to virtual environments at a moment's notice.
I walk through exactly how to install and use `virtualenvwrapper` [in this article][7].
Here is where I recommend a unique workflow. There is one virtual environment that you will want to create so that you can reuse it a lot: `runner`. In this environment, install your favorite `runner`; that is, software that you will regularly use to run other software. As of today, my preference is `tox`.
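As a sketch, creating that reusable environment might look like this (assuming `virtualenvwrapper` is already installed and sourced into your shell, as described in the linked article):
```
$ mkvirtualenv runner      # create the long-lived "runner" virtual environment
$ workon runner            # switch into it
$ pip install tox          # install your runner of choice into it
```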
### Use tox as a Python runner
[tox][8] is a great tool to automate your test runs of Python. In each Python environment, I create a `tox.ini` file. Whatever system I use for continuous integration will run it, and I can run the same locally with `virtualenvwrapper`'s workon syntax described in the article above:
```
$ workon runner
$ tox
```
The reason this workflow is important is that I test my code against multiple versions of Python and multiple versions of the library dependencies. That means there are going to be multiple environments in the tox runner. Some will try running against the latest dependencies. Some will try running against frozen dependencies (more on that next), and I might also generate those locally with `pip-compile`.
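For illustration only, a minimal `tox.ini` along those lines might define one environment that runs against the loose (latest) dependencies and one pinned to a frozen `requirements.txt`; the environment names and files here are hypothetical:
```
$ cat tox.ini
[tox]
envlist = py38-latest, py38-frozen

[testenv]
commands = pytest

[testenv:py38-latest]
deps = pytest

[testenv:py38-frozen]
deps =
    pytest
    -r requirements.txt
```
Running `tox` from the `runner` environment then creates each listed environment and runs the test command inside it.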
Side note: I am currently [looking at `nox`][9] as a replacement for `tox`. The reasons are beyond the scope of this article, but it's worth taking a look at.
### Use pip-compile for Python dependency management
Python is a dynamic programming language, which means it loads its dependencies on every execution of the code. Understanding exactly what version of each dependency is running could mean the difference between smoothly running code and an unexpected crash. That means we have to think about dependency management tooling.
For each new project, I include a `requirements.in` file that is (usually) only the following:
```
.
```
Yes, that's right. A single line with a single dot. I document "loose" dependencies, such as `Twisted>=17.5`, in the `setup.py` file. That is in contrast to exact dependencies like `Twisted==18.1`, which make it harder to upgrade to new versions of the library when you need a feature or a bug fix.
The `.` means "current directory," which uses the current directory's `setup.py` as the source for dependencies.
This means that using `pip-compile requirements.in > requirements.txt` will create a frozen dependencies file. You can use this dependencies file either in a virtual environment created by `virtualenvwrapper` or in `tox.ini`.
Sometimes it is useful to have `requirements-dev.txt`, generated from `requirements-dev.in` (contents: `.[dev]`) or `requirements-test.txt`, generated from `requirements-test.in` (contents: `.[test]`).
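Creating and compiling those extra files is just a couple more commands; this sketch assumes your `setup.py` actually declares `dev` and `test` extras:
```
$ echo '.[dev]'  > requirements-dev.in
$ echo '.[test]' > requirements-test.in
$ pip-compile requirements-dev.in       # writes requirements-dev.txt
$ pip-compile requirements-test.in      # writes requirements-test.txt
```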
I am looking to see if `pip-compile` should be replaced in this flow by [`dephell`][10]. The `dephell` tool has a bunch of interesting features, like the use of asynchronous HTTP requests to speed up dependency downloads.
### Conclusion
Python is as powerful as it is pleasing to the eye. In order to write that code, I lean on a particular toolchain that has worked well for me. The tools `pyenv`, `virtualenvwrapper`, `tox`, and `pip-compile` are all separate, each with its own role and no overlap, and together they deliver a powerful Python workflow.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/python-tools
Author: [Moshe Zadka][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_programming_question.png?itok=cOeJW-8r (Python programming language logo with question marks)
[2]: https://opensource.com/article/19/10/why-love-python
[3]: https://xkcd.com/1987/
[4]: https://opensource.com/sites/default/files/uploads/python_environment_xkcd_1.png (xkcd python illustration)
[5]: https://github.com/pyenv/pyenv-installer
[6]: https://opensource.com/article/20/4/pyenv
[7]: https://opensource.com/article/19/6/python-virtual-environments-mac
[8]: https://opensource.com/article/19/5/python-tox
[9]: https://nox.thea.codes/en/stable/
[10]: https://github.com/dephell/dephell


@@ -0,0 +1,298 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to handle dynamic and static libraries in Linux)
[#]: via: (https://opensource.com/article/20/6/linux-libraries)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
How to handle dynamic and static libraries in Linux
======
Knowing how Linux uses libraries, including the difference between
static and dynamic linking, can help you fix dependency problems.
![Hand putting a Linux file folder into a drawer][1]
Linux, in a way, is a series of static and dynamic libraries that depend on each other. For new users of Linux-based systems, the whole handling of libraries can be a mystery. But with experience, the massive amount of shared code built into the operating system can be an advantage when writing new applications.
To help you get acquainted with this topic, I prepared a small [application example][2] that shows the most common methods that work on common Linux distributions (these have not been tested on other systems). To follow along with this hands-on tutorial using the example application, open a command prompt and type:
```
$ git clone https://github.com/hANSIc99/library_sample
$ cd library_sample/
$ make
cc -c main.c -Wall -Werror
cc -c libmy_static_a.c -o libmy_static_a.o -Wall -Werror
cc -c libmy_static_b.c -o libmy_static_b.o -Wall -Werror
ar -rsv libmy_static.a libmy_static_a.o libmy_static_b.o
ar: creating libmy_static.a
a - libmy_static_a.o
a - libmy_static_b.o
cc -c -fPIC libmy_shared.c -o libmy_shared.o
cc -shared -o libmy_shared.so libmy_shared.o
$ make clean
rm *.o
```
After executing these commands, the following files should be added to the directory (run `ls` to see them):
```
my_app
libmy_static.a
libmy_shared.so
```
### About static linking
When your application links against a static library, the library's code becomes part of the resulting executable. This is performed only once at linking time, and these static libraries usually end with a `.a` extension.
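As a generic sketch (with hypothetical file names rather than the ones from the example repository), building and linking against a static library looks like this:
```
$ cc -c foo_a.c foo_b.c              # compile the library sources into object files
$ ar rcs libfoo.a foo_a.o foo_b.o    # bundle the object files into a static archive
$ cc -c app.c                        # compile the application
$ cc app.o -L. -lfoo -o app          # the needed library code is copied into "app" at link time
```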
A static library is an archive ([ar][3]) of object files. The object files are usually in the ELF format. ELF is short for [Executable and Linkable Format][4], which is compatible with many operating systems.
The output of the `file` command tells you that the static library `libmy_static.a` is the `ar` archive type:
```
$ file libmy_static.a
libmy_static.a: current ar archive
```
With `ar -t`, you can look into this archive; it shows two object files:
```
$ ar -t libmy_static.a
libmy_static_a.o
libmy_static_b.o
```
You can extract the archive's files with `ar -x <archive-file>`. The extracted files are object files in ELF format:
```
$ ar -x libmy_static.a
$ file libmy_static_a.o
libmy_static_a.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
```
### About dynamic linking
Dynamic linking means the use of shared libraries. Shared libraries usually end with `.so` (short for "shared object").
Shared libraries are the most common way to manage dependencies on Linux systems. These shared resources are loaded into memory before the application starts, and when several processes require the same library, it will be loaded only once on the system. This feature saves on memory usage by the application.
Another thing to note is that when a bug is fixed in a shared library, every application that references this library will profit from it. This also means that if the bug remains undetected, each referencing application will suffer from it (if the application uses the affected parts).
It can be very hard for beginners when an application requires a specific version of the library, but the linker only knows the location of an incompatible version. In this case, you must help the linker find the path to the correct version.
Although this is not an everyday issue, understanding dynamic linking will surely help you in fixing such problems.
Fortunately, the mechanics for this are quite straightforward.
To detect which libraries are required for an application to start, you can use `ldd`, which will print out the shared libraries used by a given file:
```
$ ldd my_app
        linux-vdso.so.1 (0x00007ffd1299c000)
        libmy_shared.so => not found
        libc.so.6 => /lib64/libc.so.6 (0x00007f56b869b000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f56b8881000)
```
Note that the library `libmy_shared.so` is part of the repository but is not found. This is because the dynamic linker, which is responsible for loading all dependencies into memory before executing the application, cannot find this library in the standard locations it searches.
Errors associated with linkers finding incompatible versions of common libraries (like `bzip2`, for example) can be quite confusing for a new user. One way around this is to add the repository folder to the environment variable `LD_LIBRARY_PATH` to tell the linker where to look for the correct version. In this case, the right version is in this folder, so you can export it:
```
$ LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH
$ export LD_LIBRARY_PATH
```
Now the dynamic linker knows where to find the library, and the application can be executed. You can rerun `ldd` to invoke the dynamic linker, which inspects the application's dependencies and loads them into memory. The memory address is shown after the object path:
```
$ ldd my_app
        linux-vdso.so.1 (0x00007ffd385f7000)
        libmy_shared.so => /home/stephan/library_sample/libmy_shared.so (0x00007f3fad401000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f3fad21d000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f3fad408000)
```
To find out which linker is invoked, you can use `file`:
```
$ file my_app
my_app: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=26c677b771122b4c99f0fd9ee001e6c743550fa6, for GNU/Linux 3.2.0, not stripped
```
The linker `/lib64/ld-linux-x86-64.so.2` is a symbolic link to `ld-2.31.so`, which is the default linker for my Linux distribution:
```
$ file /lib64/ld-linux-x86-64.so.2
/lib64/ld-linux-x86-64.so.2: symbolic link to ld-2.31.so
```
Looking back at the output of `ldd`, you can also see that (apart from `libmy_shared.so`) each dependency ends with a number (e.g., `/lib64/libc.so.6`). The usual naming scheme of shared objects is:
```
libXYZ.so.<MAJOR>.<MINOR>
```
On my system, `libc.so.6` is also a symbolic link to the shared object `libc-2.31.so` in the same folder:
```
$ file /lib64/libc.so.6
/lib64/libc.so.6: symbolic link to libc-2.31.so
```
If you are facing the issue that an application will not start because the loaded library has the wrong version, it is very likely that you can fix this issue by inspecting and rearranging the symbolic links or specifying the correct search path (see "The dynamic loader: ld.so" below).
For more information, look on the [`ldd` man page][5].
#### Dynamic loading
Dynamic loading means that a library (e.g., a `.so` file) is loaded during a program's runtime. This is done programmatically by the application itself, typically through the `dlopen()` interface.
Dynamic loading is used, for example, when an application works with plugins that can be loaded or exchanged at runtime.
See the [`dlopen` man page][6] for more information.
#### The dynamic loader: ld.so
On Linux, you are mostly dealing with shared objects, so there must be a mechanism that detects an application's dependencies and loads them into memory.
`ld.so` looks for shared objects in these places in the following order:
1. The relative or absolute path hardcoded in the application (set at build time with GCC's `-Wl,-rpath` linker option)
2. In the environment variable `LD_LIBRARY_PATH`
3. In the file `/etc/ld.so.cache`
Keep in mind that adding a library to the system's library archive `/usr/lib64` requires administrator privileges. You could copy `libmy_shared.so` manually to the library archive and make the application work without setting `LD_LIBRARY_PATH`:
```
unset LD_LIBRARY_PATH
sudo cp libmy_shared.so /usr/lib64/
```
When you run `ldd` now, you can see that the path to the library archive shows up:
```
$ ldd my_app
        linux-vdso.so.1 (0x00007ffe82fab000)
        libmy_shared.so => /lib64/libmy_shared.so (0x00007f0a963e0000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f0a96216000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f0a96401000)
```
### Customize the shared library at compile time
If you want your application to use your shared libraries, you can specify an absolute or relative path during compile time.
Modify the makefile (line 10) and recompile the program by invoking `make -B`. Then, the output of `ldd` shows `libmy_shared.so` listed with its absolute path.
Change this:
```
CFLAGS =-Wall -Werror -Wl,-rpath,$(shell pwd)
```
To this (be sure to edit the username):
```
CFLAGS =/home/stephan/library_sample/libmy_shared.so
```
Then recompile:
```
$ make
```
Confirm it is using the absolute path you set, which shows up in the output:
```
$ ldd my_app
        linux-vdso.so.1 (0x00007ffe143ed000)
        libmy_shared.so => /lib64/libmy_shared.so (0x00007fe50926d000)
        /home/stephan/library_sample/libmy_shared.so (0x00007fe509268000)
        libc.so.6 => /lib64/libc.so.6 (0x00007fe50909e000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fe50928e000)
```
This is a good example, but how would this work if you were making a library for others to use? New library locations can be registered by writing them to `/etc/ld.so.conf` or creating a `<library-name>.conf` file containing the location under `/etc/ld.so.conf.d/`. Afterward, `ldconfig` must be executed to rewrite the `ld.so.cache` file. This step is sometimes necessary after you install a program that brings some special shared libraries with it.
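For example, registering the sample library system-wide could look roughly like this (the name of the `.conf` file is arbitrary):
```
$ echo "/home/stephan/library_sample" | sudo tee /etc/ld.so.conf.d/my-library.conf
$ sudo ldconfig                       # rebuild /etc/ld.so.cache
$ ldconfig -p | grep libmy_shared     # confirm the library is now listed in the cache
```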
See the [`ld.so` man page][7] for more information.
### How to handle multiple architectures
Usually, there are different libraries for the 32-bit and 64-bit versions of applications. The following list shows their standard locations for different Linux distributions:
**Red Hat family**
* 32 bit: `/usr/lib`
* 64 bit: `/usr/lib64`
**Debian family**
* 32 bit: `/usr/lib/i386-linux-gnu`
* 64 bit: `/usr/lib/x86_64-linux-gnu`
**Arch Linux family**
* 32 bit: `/usr/lib32`
* 64 bit: `/usr/lib64`
[**FreeBSD**][8] (technically not a Linux distribution)
* 32 bit: `/usr/lib32`
* 64 bit: `/usr/lib`
Knowing where to look for these key libraries can make broken library links a problem of the past.
While it may be confusing at first, understanding dependency management in Linux libraries is a way to feel in control of the operating system. Run through these steps with other applications to become familiar with common libraries, and continue to learn how to fix any library challenges that could come up along your way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/linux-libraries
Author: [Stephan Avenwedde][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://github.com/hANSIc99/library_sample
[3]: https://en.wikipedia.org/wiki/Ar_%28Unix%29
[4]: https://linuxhint.com/understanding_elf_file_format/
[5]: https://www.man7.org/linux/man-pages/man1/ldd.1.html
[6]: https://www.man7.org/linux/man-pages/man3/dlopen.3.html
[7]: https://www.man7.org/linux/man-pages/man8/ld.so.8.html
[8]: https://opensource.com/article/20/5/furybsd-linux


@@ -0,0 +1,163 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Internet connection sharing with NetworkManager)
[#]: via: (https://fedoramagazine.org/internet-connection-sharing-networkmanager/)
[#]: author: (bengal https://fedoramagazine.org/author/bengal/)
Internet connection sharing with NetworkManager
======
![][1]
NetworkManager is the network configuration daemon used on Fedora and many other distributions. It provides a consistent way to configure network interfaces and other network-related aspects on a Linux machine. Among many other features, it provides an Internet connection sharing functionality that can be very useful in different situations.
For example, suppose you are in a place without Wi-Fi and want to share your laptop's mobile data connection with friends. Or maybe you have a laptop with broken Wi-Fi and want to connect it via Ethernet cable to another laptop; in this way, the first laptop becomes able to reach the Internet and can, for example, download new Wi-Fi drivers.
In cases like these it is useful to share Internet connectivity with other devices. On smartphones this feature is called “Tethering” and allows sharing a cellular connection via Wi-Fi, Bluetooth or a USB cable.
This article shows how the connection sharing mode offered by NetworkManager can be set up easily; in addition, it explains how to configure some more advanced features for power users.
### How connection sharing works
The basic idea behind connection sharing is that there is an _upstream_ interface with Internet access and a _downstream_ interface that needs connectivity. These interfaces can be of a different type—for example, Wi-Fi and Ethernet.
If the upstream interface is connected to a LAN, it is possible to configure our computer to act as a _bridge_; a bridge is the software version of an Ethernet switch. In this way, you "extend" the LAN to the downstream network. However, this solution doesn't always play well with all interface types; moreover, it works only if the upstream network uses private addresses.
A more general approach consists of assigning a private IPv4 subnet to the downstream network and turning on routing between the two interfaces. In this case, NAT (Network Address Translation) is also necessary. The purpose of NAT is to modify the source of packets coming from the downstream network so that they look as if they originate from your computer.
It would be inconvenient to manually configure all the devices in the downstream network. Therefore, you need a DHCP server to assign addresses automatically and configure hosts to route all traffic through your computer. In addition, in case the sharing happens through Wi-Fi, the wireless network adapter must be configured as an access point.
There are many tutorials out there explaining how to achieve this, with different degrees of difficulty. NetworkManager hides all this complexity and provides a _shared_ mode that makes this configuration quick and convenient.
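To give a rough idea of what NetworkManager automates for you, the manual equivalent would look something like the following sketch (the interface names are only examples, and the rules NetworkManager actually installs are more elaborate):
```
$ sudo ip addr add 10.42.0.1/24 dev enp1s0                              # address the downstream interface
$ sudo sysctl -w net.ipv4.ip_forward=1                                  # enable routing between interfaces
$ sudo iptables -t nat -A POSTROUTING -o wlp4s0 -j MASQUERADE           # NAT traffic leaving via the upstream interface
$ sudo dnsmasq --interface=enp1s0 --dhcp-range=10.42.0.10,10.42.0.100   # DHCP and DNS for the downstream network
```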
### Configuring connection sharing
The configuration paradigm of NetworkManager is based on the concept of connection (or connection profile). A connection is a group of settings to apply on a network interface.
This article shows how to create and modify such connections using _nmcli_, the NetworkManager command line utility, and the GTK connection editor. If you prefer, other tools are available such as _nmtui_ (a text-based user interface), GNOME control center or the KDE network applet.
A reasonable prerequisite to share Internet access is to have it available in the first place; this implies that there is already a NetworkManager connection active. If you are reading this, you probably already have a working Internet connection. If not, see [this article][2] for a more comprehensive introduction to NetworkManager.
The rest of this article assumes you already have a Wi-Fi connection profile configured and that connectivity must be shared over an Ethernet interface _enp1s0_.
To enable sharing, create a connection for interface enp1s0 and set the ipv4.method property to _shared_ instead of the usual _auto_:
```
$ nmcli connection add type ethernet ifname enp1s0 ipv4.method shared con-name local
```
The shared IPv4 method does multiple things:
* enables IP forwarding for the interface;
* adds firewall rules and enables masquerading;
* starts dnsmasq as a DHCP and DNS server.
NetworkManager connection profiles, unless configured otherwise, are activated automatically. The new connection you have added should already be active in the device status:
```
$ nmcli device
DEVICE TYPE STATE CONNECTION
enp1s0 ethernet connected local
wlp4s0 wifi connected home-wifi
```
If that is not the case, activate the profile manually with _nmcli connection up local_.
### Changing the shared IP range
Now look at how NetworkManager configured the downstream interface enp1s0:
```
$ ip -o addr show enp1s0
8: enp1s0 inet 10.42.0.1/24 brd 10.42.0.255 ...
```
10.42.0.1/24 is the default address set by NetworkManager for a device in shared mode. Addresses in this range are also distributed via DHCP to other computers. If the range conflicts with other private networks in your environment, change it by modifying the _ipv4.addresses_ property:
```
$ nmcli connection modify local ipv4.addresses 192.168.42.1/24
```
Remember to activate the connection profile again after any change so that the new values are applied:
```
$ nmcli connection up local
$ ip -o addr show enp1s0
8: enp1s0 inet 192.168.42.1/24 brd 192.168.42.255 ...
```
If you prefer using a graphical tool to edit connections, install the _nm-connection-editor_ package. Launch the program and open the connection to edit; then select the _Shared to other computers_ method in the _IPv4 Settings_ tab. Finally, if you want to use a specific IP subnet, click _Add_ and insert an address and a netmask.
![][3]
### Adding custom dnsmasq options
In case you want to further extend the dnsmasq configuration, you can add new configuration snippets in _/etc/NetworkManager/dnsmasq-shared.d/_. For example, the following configuration:
```
dhcp-option=option:ntp-server,192.168.42.1
dhcp-host=52:54:00:a4:65:c8,192.168.42.170
```
tells dnsmasq to advertise an NTP server via DHCP. In addition, it assigns a static IP address to the client with the given MAC address.
There are many other useful options in the dnsmasq manual page. However, remember that some of them may conflict with the rest of the configuration; so please use custom options only if you know what you are doing.
### Other useful tricks
If you want to set up sharing via Wi-Fi, you could create a connection in Access Point mode, manually configure the security, and then enable connection sharing. Actually, there is a quicker way, the hotspot mode:
```
$ nmcli device wifi hotspot [ifname $dev] [password $pw]
```
This does everything needed to create a functional access point with connection sharing. The interface and password options are optional; if they are not specified, _nmcli_ chooses the first Wi-Fi device available and generates a random password. Use the _nmcli device wifi show-password_ command to display information for the active hotspot; the output includes the password and a text-based QR code that you can scan with a phone:
![][4]
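For example, on a laptop whose Wi-Fi interface is _wlp4s0_ (the password below is just a placeholder):
```
$ nmcli device wifi hotspot ifname wlp4s0 password "s3cr3t-pass"
$ nmcli device wifi show-password
```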
### What about IPv6?
Until now, this article discussed sharing IPv4 connectivity. NetworkManager also supports sharing IPv6 connectivity through DHCPv6 prefix delegation. Using prefix delegation, a computer can request additional IPv6 prefixes from the DHCP server. Those publicly routable addresses are assigned to local networks via Router Advertisements. Again, NetworkManager makes all this easier through the shared IPv6 mode:
```
$ nmcli connection modify local ipv6.method shared
```
Note that IPv6 sharing requires support from the Internet Service Provider, which should give out prefix delegations through DHCP. If the ISP doesn't provide delegations, IPv6 sharing will not work; in such a case, NM will report in the journal that no prefixes are available:
```
policy: ipv6-pd: none of 0 prefixes of wlp1s0 can be shared on enp1s0
```
Also, note that the Wi-Fi hotspot command described above only enables IPv4 sharing; if you want to also use IPv6 sharing you must edit the connection manually.
### Conclusion
Remember, the next time you need to share your Internet connection, NetworkManager will make it easy for you.
If you have suggestions on how to improve this feature or any other feedback, please reach out to the NM community using the [mailing list][5], the [issue tracker][6] or joining the _#nm_ IRC channel on _freenode_.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/internet-connection-sharing-networkmanager/
Author: [bengal][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://fedoramagazine.org/author/bengal/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/networkmanager-connection_sharing-816x345.png
[2]: https://www.redhat.com/sysadmin/becoming-friends-networkmanager
[3]: https://fedoramagazine.org/wp-content/uploads/2020/06/nmce.png
[4]: https://fedoramagazine.org/wp-content/uploads/2020/06/hotspot-password.png
[5]: https://mail.gnome.org/mailman/listinfo/networkmanager-list
[6]: https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues