Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-11-27 21:59:01 +08:00
commit 2c174c8c99
8 changed files with 541 additions and 408 deletions


@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11616-1.html)
[#]: subject: (Google to Add Mainline Linux Kernel Support to Android)
[#]: via: (https://itsfoss.com/mainline-linux-kernel-android/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
谷歌为安卓添加主线 Linux 内核支持
======
当前的安卓生态系统被数百种不同版本的安卓所污染,每种版本都运行着 Linux 内核的不同变体。每个版本均针对不同的手机和不同的配置而设计。谷歌试图通过将主线 Linux 内核添加到安卓来解决该问题。
### 当前在安卓中是如何处理 Linux 内核的
在到达你的手机之前,你手机上的 Linux 内核经历了[三个主要步骤][1]。
首先,谷歌采用了 Linux 内核的 LTS(长期支持)版本并添加了所有的安卓专用代码。这将成为“安卓通用内核”。
然后,谷歌将此代码发送给制造手机所用片上系统(SoC)的公司。这通常是高通公司。
SoC 制造商添加了支持 CPU 和其他芯片的代码后,便会将该内核传递给实际的设备制造商,例如三星和摩托罗拉。然后,设备制造商添加代码以支持手机的其余部分,例如显示屏和摄像头。
每个步骤都需要一段时间才能完成,并且会导致该内核无法与其他任何设备一起使用。这也意味着内核会非常旧,通常是大约两年前的内核。例如,上个月交付的谷歌 Pixel 4 带有来自 2017 年 11 月的内核,而且它将永远不会得到更新。
谷歌承诺会为较旧的设备创建安全补丁,这意味着他们会一直盯着大量的旧代码。
### 将来
![][2]
去年,谷歌宣布[计划][3]解决此问题。今年,他们在 2019 Linux Plumbers Conference 上展示了他们取得的进展。
> “我们知道运行安卓需要什么,但不一定是在任何给定的硬件上。因此,我们的目标是从根本上找出所有这些,然后将其交给上游,并尝试尽可能接近主线。”
>
> Sandeep Patil[安卓内核团队负责人][1]
他们确实展示了一台运行着合适的 Linux 内核的小米 Poco F1。但是有些功能[似乎无法正常工作][4],例如电池电量百分比一直停留在 0%。
那么,谷歌计划如何使其工作呢?从他们的 [Treble 项目][5]计划中摘录。在 Treble 项目之前与设备和安卓本身交互的底层代码是一大堆代码。Treble 项目将两者分开,并使它们模块化,以便可以更快地交付安卓更新,并且在更新时,这些低级代码可以保持不变。
谷歌希望为内核带来同样的模块化。他们的[计划][1]“涉及稳定 Linux 的内核 ABI并为 Linux 内核和硬件供应商提供稳定的接口来进行写入。谷歌希望将 Linux 内核与其硬件支持脱钩。”
因此,这意味着谷歌将交付一个内核,而硬件驱动程序将作为内核模块加载。目前,这只是一个草案。仍然有很多技术问题有待解决。因此,这不会很快有结果。
### 来自开源的反对意见
开源社区不会对将专有代码放入内核的想法感到满意。[Linux 内核准则][6]指出,驱动程序必须具有 GPL 许可证才能包含在内核中。他们还指出,如果驱动程序的更改导致错误,应由导致该错误的人来解决。从长远来看,这意味着设备制造商的工作量将减少。
### 关于将主线内核包含到安卓中的最终想法
到目前为止,这只是一个草案。谷歌很有可能会开始这个项目,但在意识到需要多少工作量之后又将其放弃。看看谷歌[已经放弃][7]了多少个项目吧!
[Android Police][4] 指出,谷歌正在开发其 [Fuchsia 操作系统][8],这似乎是为了有朝一日取代安卓。
那么,问题是谷歌会尝试完成那些艰巨的任务,使安卓以主线 Linux 内核运行,还是完成他们统一的安卓替代产品的工作?只有时间可以回答。
你对此话题有何看法?请在下面的评论中告诉我们。
--------------------------------------------------------------------------------
via: https://itsfoss.com/mainline-linux-kernel-android/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://arstechnica.com/gadgets/2019/11/google-outlines-plans-for-mainline-linux-kernel-support-in-android/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/mainline_linux_kernel_android.png?ssl=1
[3]: https://lwn.net/Articles/771974/
[4]: https://www.androidpolice.com/2019/11/19/google-wants-android-to-use-regular-linux-kernel-potentially-improving-updates-and-security/
[5]: https://www.computerworld.com/article/3306443/what-is-project-treble-android-upgrade-fix-explained.html
[6]: https://www.kernel.org/doc/Documentation/process/stable-api-nonsense.rst
[7]: https://killedbygoogle.com/
[8]: https://itsfoss.com/fuchsia-os-what-you-need-to-know/


@ -1,84 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Google to Add Mainline Linux Kernel Support to Android)
[#]: via: (https://itsfoss.com/mainline-linux-kernel-android/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Google to Add Mainline Linux Kernel Support to Android
======
The current Android ecosystem is polluted with hundreds of different versions of Android, each running a different variant of the Linux kernel. Each version is designed for a different phone and its different configurations. Google has been working to fix the problem by adding the mainline Linux kernel to Android.
### How the Linux kernel is currently handled in Android
Before it reaches you, the Linux kernel on your cellphone goes through [three major steps][1].
First, Google takes the LTS (Long Term Support) version of the Linux kernel and adds all of the Android-specific code. This becomes the “Android Common kernel”.
Google then sends this code to the company that creates the System on a Chip (SoC) that runs your phone. This is usually Qualcomm.
Once the SoC maker finishes adding code to support the CPU and other chips, the kernel is passed on to the actual device maker, such as Samsung or Motorola. The device maker then adds code to support the rest of the phone, such as the display and camera.
Each of these steps takes a while to complete and results in a kernel that won't work with any other device. It also means that the kernel is very old, usually about two years old. For example, the Google Pixel 4, which shipped last month, has a kernel from November 2017, and it will never get updated.
Google has pledged to create security patches for older devices, which means they're stuck keeping an eye on a huge hodge-podge of old code.
### The Future
![][2]
Last year, Google announced [plans][3] to fix this mess. This year they revealed what progress they made at the 2019 Linux Plumbers Conference.
> “We know what it takes to run Android but not necessarily on any given hardware. So our goal is to basically find all of that out, then upstream it and try to be as close to mainline as possible.”
>
> Sandeep Patil, [Android Kernel Team Lead][1]
They did show off a Xiaomi Poco F1 running Android with a proper Linux kernel. However, some things [did not appear to be working][4], such as the battery percentage, which was stuck at 0%.
So, how does Google plan to make this work? By taking a page from their [Project Treble][5] playbook. Before Project Treble, the low-level code that interacted with the device and Android itself was one big mess of code. Project Treble separated the two and made them modular so that Android updates could be shipped quicker and the low-level code could remain unchanged between updates.
Google wants to bring the same modularity to the kernel. Their [plan][1] “involves stabilizing Linuxs in-kernel ABI and having a stable interface for the Linux kernel and hardware vendors to write to. Google wants to decouple the Linux kernel from its hardware support.”
So this means that Google would ship a kernel, and hardware drivers would be loaded as kernel modules. Currently, this is just a proposal. There are still quite a few technical problems that have to be solved, so this won't happen any time soon.
### Opposition from Open Source
The Open Source community will not be happy with the idea of putting proprietary code in the kernel. The [Linux kernel guidelines][6] state that drivers have to have a GPL license to be included in the kernel. They also point out that if a change in the driver causes an error, it will be resolved by the person who created the error. This means less work for device makers in the long run.
### Final Thoughts on including the mainline kernel in Android
So far, this is just a proposal. There is a good chance that Google will start working on the project only to abandon it once they realize how much work this will take. Just take a look at how many projects Google has [already abandoned][7].
[Android Police][4] made a good point by mentioning that Google is working on its [Fuchsia operating system][8], which seems to have the goal of replacing Android one day.
So, the question is which monumental task Google will try to complete: getting Android running on a mainline Linux kernel, or completing work on their unified Android replacement? Only time can answer that.
What are your thoughts on this topic? Please let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/mainline-linux-kernel-android/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://arstechnica.com/gadgets/2019/11/google-outlines-plans-for-mainline-linux-kernel-support-in-android/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/mainline_linux_kernel_android.png?ssl=1
[3]: https://lwn.net/Articles/771974/
[4]: https://www.androidpolice.com/2019/11/19/google-wants-android-to-use-regular-linux-kernel-potentially-improving-updates-and-security/
[5]: https://www.computerworld.com/article/3306443/what-is-project-treble-android-upgrade-fix-explained.html
[6]: https://www.kernel.org/doc/Documentation/process/stable-api-nonsense.rst
[7]: https://killedbygoogle.com/
[8]: https://itsfoss.com/fuchsia-os-what-you-need-to-know/
[9]: https://reddit.com/r/linuxusersgroup


@ -0,0 +1,138 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SASE: Redefining the network and security architecture)
[#]: via: (https://www.networkworld.com/article/3481519/sase-redefining-the-network-and-security-architecture.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
SASE: Redefining the network and security architecture
======
Adoption of SASE reduces complexity and overhead, improves security and boosts application performance.
In a cloud-centric world, users and devices require access to services everywhere. The focal point has changed. Now it is the identity of the user and device as opposed to the traditional model that focused solely on the data center. As a result, these environmental changes have created a new landscape that we need to protect and connect.
This new landscape is challenged by many common problems. The enterprises are loaded with complexity and overhead due to deployed appliances for different technology stacks. The legacy network and security designs increase latency. In addition, the world is encrypted; this dimension needs to be inspected carefully, without degrading the application performance.
These are some of the reasons driving the need for a cloud-delivered secure access service edge (SASE). SASE optimizes a tailored network fabric where it makes the most sense for the user, device and application: at geographically dispersed PoPs. To deliver an optimum network experience everywhere, you should avoid the unpredictability of the Internet core. In its requirements for SASE, Gartner recommends that this backbone not be based on AWS or Azure, since their PoP density is insufficient; a SASE service built solely on a hyperscaler is not enough.
There are clear benefits that can be achieved by redefining the network and security architecture. Yes, the adoption of SASE reduces complexity and overhead, improves security and increases application performance. But practically, what does that mean?
[Linda Musthaler had a great example in her conversation with Andrew Thomson, Director of IT at BioIVT,][2] a provider of biological materials and scientific services to research and development organizations, who adopted Cato Networks SASE platform nearly two years ago:
_“We positioned it as a platform for everything that we wanted to be able to do over the next three years with the business,” he told Linda, “The big goal, the business strategy, is growth and acquisition. We presented this as a platform, as a base service that we just had to have in place in order to leverage things like voice over IP, Office 365, Azure, cloud-based computing services, hosting servers in the cloud. Without a common core solid foundation, we wouldn't have been able to do any of those things reliably without adding staff to do monitoring or maintenance or administrative overhead.”_
It's that last line, “without adding staff to do monitoring or maintenance or administrative overhead,” that I found particularly striking. So, let's understand why SASE can be so impactful from an architectural perspective.
### Complexity and overhead
Traditional mechanisms are limited by the hardware capacity of the physical appliances located at the customer's site. Such mechanisms create a lag in the hardware refresh rates that are needed to add new functionality.
Hardware-based network and security solutions build the differentiator of the offering into the hardware itself. Primarily, with different hardware you can accelerate the services and add new features. Some features are available only on specific hardware, not the hardware you already have onsite. In that case, heavy lifting by the customer is required.
As the environment evolves, we should not depend on the new network and security features coming from the new generation of an appliance. Typically, this model is inefficient and complex. It creates high operational overhead and management complexity.
Device upgrades for new features require a lot of management. From past experience, to change out a line card would involve multiple teams. The line card might run out of ports or you may simply need additional features from a new generation. Largely, this would involve project planning, on-site engineers, design guides, hopefully, line card testing and hours of work. For critical sites to ensure a successful refresh, team members may need to be backed up. Therefore, there are many touches that need to be managed.
### SASE: Easing management
The cloud-based SASE enables the updates for new features and functionality without the need for new deployments of appliances (physical or virtual) and software versions on the customer side. This has an immediate effect on the ease of management.
Now the network and security deployment can occur without ever touching the enterprise network. This allows enterprises to adopt new capabilities quickly. Once the tight coupling between the features and the customer appliance is removed, this increases the agility and simplicity for the deployment of network and security services.
With a SASE platform, when we create an object, such as a policy in the networking domain, it is then available in other domains as well. So any policies assigned to users are tied to that user, regardless of the network location. This significantly removes the complexity of managing both network and security policies across multiple locations, users and types of devices. Best of all, it can be done from one platform.
Also, when we examine the security solution, many buy individual appliances that each focus on one job. To troubleshoot, you need to gather information, such as the logs from each device. This is what a SIEM is useful for, but it can only be used in some organizations because it's resource-heavy. For the ones that don't have ample resources, the process is backbreaking and there will be false positives.
In addition, SASE enables easier troubleshooting because all the data is in one common repository. You no longer have to normalize data from different appliances/solutions and then import it into a database for a common view.
### Consolidation of vendors and technology stacks
I recall an experience from a previous consultancy where we were planning the next year's security budget. The network was packed with numerous security solutions. All these point solutions are expensive, and there is never a fixed price. So how do you actually plan for this?
Some of the new solutions we were considering charged on usage models, and at that time we didn't know the quantity. SASE removes these types of headaches. By consolidating services into a single provider, there will be a reduction in the number of vendors and agents/clients on the end-user device.
Overall, there will be significant complexity savings from the consolidation of vendors and technology stacks. The complexity is pushed to the cloud, away from the on-premises enterprise network. The SASE fabric abstracts the complexity and reduces costs.
From a hardware point of view: for scale and additional capacity, the cloud-based SASE can add capacity to an existing PoP, known as vertical scaling. It can also add PoPs in new locations, known as horizontal scaling.
Additionally, the SASE-based cloud takes care of intensive processing. For example, since a large proportion of internet traffic is now encrypted, malware can use encryption to evade and hide from detection. Here, each of the PoPs can perform DPI on TLS-encrypted traffic.
Traditional firewalls are not capable of inspecting encrypted traffic. Performing DPI on TLS-encrypted traffic would require extra modules or a new appliance. A SASE solution ensures that the decryption and inspection are done at the PoP. Consequently, there is no performance-hit or the need for new appliances on the customer sites.
### Ways to improve performance
Network congestion that results in dropped and out-of-order packets is bad for applications. Latency-sensitive applications, such as collaboration, video, VoIP and web conferencing, are hit hardest by packet drops. Luckily, there are options to minimize latency and the effects of packet loss.
SD-WAN solutions have WAN optimization features that can be applied on an application-by-application or site-by-site basis. Along with WAN optimization features, there are protocol and application acceleration techniques that can be employed.
On top of the existing techniques to reduce the packet loss and latency, we can privatize the WAN as much as possible. You can control the adverse and varying effects that the last mile and middle mile have on the applications by privatizing with a global backbone consisting of a fabric of PoPs.
Once privatized, we can have more control over traffic paths, packet loss and latency. A private network fabric is a key benefit gained from SASE as it drives the application performance.
### SASE PoP optimizations
Each PoP in the SASE cloud-based solution optimizes where it makes the most sense, not just at the WAN edge. Within the backbone, we have global route optimizations to determine which path is the best at the current time and it can also be changed for all traffic or certain applications.
These routing algorithms factor in the performance metrics, such as latency, packet loss and jitter. These algorithms can help in selecting the optimal route for every network packet. The WAN backbone constantly analyzes and tries to improve the performance. This is unlike internet routing that favors cost over performance.
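As a sketch of the idea (the metric names, weights and path values below are illustrative assumptions, not any vendor's actual algorithm), a route selector can score each candidate path on its measured latency, loss and jitter and pick the lowest-scoring one:

```python
# Illustrative sketch only: the weights and sample paths are invented,
# not a real SASE provider's routing algorithm.
def path_score(latency_ms, loss_pct, jitter_ms,
               w_latency=1.0, w_loss=50.0, w_jitter=2.0):
    """Lower is better; packet loss is penalized most heavily."""
    return w_latency * latency_ms + w_loss * loss_pct + w_jitter * jitter_ms

def best_path(paths):
    """paths: dict of name -> (latency_ms, loss_pct, jitter_ms)."""
    return min(paths, key=lambda name: path_score(*paths[name]))

paths = {
    "internet": (120, 1.5, 30),  # cheap but lossy public transit
    "backbone": (90, 0.1, 5),    # private PoP-to-PoP fabric
}
# best_path(paths) -> "backbone"
```

Re-running the selection per application (or per packet, as the article suggests) with different weights is what lets one policy favor VoIP's jitter sensitivity and another favor bulk transfer throughput.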
As everything is privatized, we have all the information needed to use the largest packet size and to favor rate-based congestion algorithms over traditional loss-based ones. As a result, nothing has to be learned by probing for loss, and end-to-end throughput can be maintained.
As each PoP acts as a TCP proxy server, certain techniques are employed so that the TCP client and server think that they are closer. Therefore, a larger TCP window is set, allowing more data to be passed before waiting for an acknowledgment.
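The benefit of proxying can be seen with a back-of-the-envelope bandwidth-delay calculation (the window size and RTT figures here are illustrative, not measurements from any SASE product): window-limited TCP throughput is one window per round trip, so splitting a long path into two shorter proxied segments lets the same window drain twice as often.

```python
def max_throughput_mbps(window_bytes, rtt_ms):
    """Window-limited TCP throughput: one window sent per round trip."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# One end-to-end connection: 64 KiB window over a 200 ms RTT path.
direct = max_throughput_mbps(64 * 1024, 200)    # ~2.6 Mbps
# Terminated at a mid-path PoP proxy, each segment sees ~100 ms RTT.
proxied = max_throughput_mbps(64 * 1024, 100)   # ~5.2 Mbps
```

This is also why the larger TCP window mentioned above matters: throughput scales linearly with the window until the path's real capacity is reached.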
### Preferred egress points
We can also define preferred egress points to exit the cloud application traffic. These could be the points closest to the customer's application instance. The optimal global routing algorithms determine the best path to the customer's cloud application instance from anywhere in the world.
The PoPs can be collocated in data centers directly connected to the IXPs that connect to all the major infrastructure-as-a-service providers. This provides a good on-ramp to access services from Amazon AWS, Microsoft Azure and Google Cloud.
Therefore, you can keep the traffic on the private cloud for the majority of the time. Within a SASE design, the internet is used only to provide a short hop to the SASE fabric.
### Security: Identity-centric perimeter
SASE converges the networking and security pillars into a single platform. This brings multiple security solutions into one cloud service that enforces a unified policy across all the corporate locations, users and data.
SASE recommends that you employ zero-trust principles. The initial path to zero trust starts with basing network access on the identity of the user, device and application, not on the IP address or physical location of the device. And this is for good reason: an IP address carries no contextual information.
The identity of the user/device must reflect the business context as opposed to being associated with binary constructs that are completely disjointed from the upper layers. This binds the identity to the world of networking and is the best way forward for policy enforcement. This way, the dependency on IP or applications as identifiers is removed. Now, the policy can be applied consistently, regardless of where the user/device is located. At the same time, the identity of the user/device/service can be factored into the applied policy.
The SASE stack is dynamically applied based on the identity and context while serving zero trust at strategic points in the cloud. This is what enforces an identity-centric perimeter.
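As a toy illustration of identity-centric policy (the roles, postures and applications below are invented for this sketch; no real SASE product exposes this exact model), access decisions key on who and what is connecting, and the source IP never enters the lookup:

```python
# Toy sketch: policy keyed on (user role, device posture, application),
# never on a source IP. All names and rules are invented for illustration.
POLICY = {
    ("finance", "managed", "erp"): "allow",
    ("finance", "unmanaged", "erp"): "deny",
    ("engineer", "managed", "git"): "allow",
}

def decide(user_role, device_posture, app):
    # The same rule applies wherever the user connects from;
    # anything not explicitly allowed is denied (zero-trust default).
    return POLICY.get((user_role, device_posture, app), "deny")

# decide("finance", "managed", "erp") -> "allow", from any location
```

Because the key is identity plus context rather than an address, the policy follows the user across locations and devices, which is exactly the property the paragraph above describes.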
_You can learn more about SASE and how it relates to [SD-WAN architectures][4] in a recent course I've rolled out. The course shines the torch on various SD-WAN solutions from Silver Peak, VMware, Cisco and Cato._
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3481519/sase-redefining-the-network-and-security-architecture.html
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[2]: https://www.networkworld.com/article/3453030/sase-is-more-than-a-buzzword-for-bioivt.html
[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[4]: https://www.pluralsight.com/courses/sd-wan-architectures-big-picture
[5]: https://www.networkworld.com/contributor-network/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world


@ -1,180 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to document Python code with Sphinx)
[#]: via: (https://opensource.com/article/19/11/document-python-sphinx)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
How to document Python code with Sphinx
======
Documentation is best as part of the development process. Sphinx, along
with Tox, makes it easy to write and beautiful to look at.
![Python in a coffee cup.][1]
Python code can include documentation right inside its source code. The default way of doing so relies on **docstrings**, which are defined in a triple quote format. While the value of documentation is well... documented, it seems all too common to not document code sufficiently. Let's walk through a scenario on the power of great documentation.
After one too many whiteboard tech interviews that ask you to implement the Fibonacci sequence, you have had enough. You go home and write a reusable Fibonacci calculator in Python that uses floating-point tricks to get to O(1).
The code is pretty simple:
```
# fib.py
import math
_SQRT_5 = math.sqrt(5)
_PHI = (1 + _SQRT_5) / 2
def approx_fib(n):
    return round(_PHI**(n+1) / _SQRT_5)
```
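As a quick sanity check (the iterative helper below is mine, not part of the article's code), the closed-form approximation agrees with a straightforward iterative implementation; note that with this indexing, `approx_fib(0)` is 1:

```python
import math

_SQRT_5 = math.sqrt(5)
_PHI = (1 + _SQRT_5) / 2

def approx_fib(n):
    return round(_PHI**(n + 1) / _SQRT_5)

def fib_iter(n):
    """Iterative reference with the same indexing: fib_iter(0) == 1."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The rounding trick matches the exact sequence for small n.
assert all(approx_fib(n) == fib_iter(n) for n in range(30))
```

Floating-point error eventually breaks the rounding for large n, which is a nice caveat to document alongside the function.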
(That the Fibonacci sequence is a geometric sequence rounded to the nearest whole number is one of my favorite little-known math facts.)
Being a decent person, you make the code open source and put it on [PyPI][2]. The **setup.py** file is simple enough:
```
import setuptools
setuptools.setup(
    name='fib',
    version='2019.1.0',
    description='Fibonacci',
    py_modules=["fib"],
)
```
However, code without documentation is useless. So you add a docstring to the function. One of my favorite docstring styles is the ["Google" style][3]. It is light on markup, which is nice when it is inside the source code.
```
def approx_fib(n):
    """
    Approximate Fibonacci sequence
    Args:
        n (int): The place in Fibonacci sequence to approximate
    Returns:
        float: The approximate value in Fibonacci sequence
    """
    # ...
```
But the function's documentation is only half the battle. Prose documentation is important for contextualizing code usage. In this case, the context is annoying tech interviews. 
There is an option to add more documentation, and the Pythonic pattern is to use an **rst** file (short for [reStructuredText][4]) commonly added under a **docs/** directory. So the **docs/index.rst** file ends up looking like this:
```
Fibonacci
=========
Are you annoyed at tech interviewers asking you to implement
the Fibonacci sequence?
Do you want to have some fun with them?
A simple
:code:`pip install fib`
is all it takes to tell them to,
um,
fib off.
.. automodule:: fib
   :members:
```
And we're done, right? We have the text in a file. Someone should look at it.
### Making Python documentation beautiful
To make your documentation look beautiful, you can take advantage of [Sphinx][5], which is designed to make pretty Python documents. In particular, these three Sphinx extensions are helpful:
* **sphinx.ext.autodoc**: Grabs documentation from inside modules
* **sphinx.ext.napoleon**: Supports Google-style docstrings
* **sphinx.ext.viewcode**: Packages the ReStructured Text sources with the generated docs
In order to tell Sphinx what and how to generate, we configure a helper file at **docs/conf.py**:
```
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.napoleon',
    'sphinx.ext.viewcode',
]
# The name of the entry point, without the ".rst" extension.
# By convention this will be "index"
master_doc = "index"
# These values are all used in the generated documentation.
# Usually, the release and version are the same,
# but sometimes we want to have the release have an "rc" tag.
project = "Fib"
copyright = "2019, Moshe Zadka"
author = "Moshe Zadka"
version = release = "2019.1.0"
```
This file allows us to release our code with all the metadata we want and note our extensions (the comments above explain how). Finally, to document exactly how we want the documentation generated, use [Tox][6] to manage the virtual environment to make sure we generate the documentation smoothly:
```
[tox]
# By default, .tox is the directory.
# Putting it in a non-dot file allows opening the generated
# documentation from file managers or browser open dialogs
# that will sometimes hide dot files.
toxworkdir = {toxinidir}/build/tox
[testenv:docs]
# Running sphinx from inside the "docs" directory
# ensures it will not pick up any stray files that might
# get into a virtual environment under the top-level directory
# or other artifacts under build/
changedir = docs
# The only dependency is sphinx
# If we were using extensions packaged separately,
# we would specify them here.
# A better practice is to specify a specific version of sphinx.
deps =
    sphinx
# This is the sphinx command to generate HTML.
# In other circumstances, we might want to generate a PDF or an ebook
commands =
    sphinx-build -W -b html -d {envtmpdir}/doctrees . {envtmpdir}/html
# We use Python 3.7. Tox sometimes tries to autodetect it based on the name of
# the testenv, but "docs" does not give useful clues so we have to be explicit.
basepython = python3.7
```
Now, whenever you run Tox, it will generate beautiful documentation for your Python code.
### Documentation in Python is excellent
As a Python developer, the toolchain available to us is fantastic. We can start with **docstrings**, add **.rst** files, then add Sphinx and Tox to beautify the results for users. 
What do you appreciate about good documentation? Do you have other favorite tactics? Share them in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/document-python-sphinx
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_python.jpg?itok=G04cSvp_ (Python in a coffee cup.)
[2]: https://pypi.org/
[3]: http://google.github.io/styleguide/pyguide.html#381-docstrings
[4]: http://docutils.sourceforge.net/rst.html
[5]: http://www.sphinx-doc.org/en/master/
[6]: https://tox.readthedocs.io/en/latest/


@ -1,143 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bauh Manage Snaps, Flatpaks and AppImages from One Interface)
[#]: via: (https://itsfoss.com/bauh-package-manager/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Bauh Manage Snaps, Flatpaks and AppImages from One Interface
======
One of the biggest problems with universal packages like [Snap][1], [Flatpak][2] and [AppImage][3] is managing them. Most built-in package managers do not support all of these new formats.
Thankfully, I stumbled across an application that supports several universal package formats.
### Bauh: a Manager for Your Multi-Package Needs
Originally named fpakman, [bauh][4] is designed to handle Flatpak, Snap, [AppImage][5], and [AUR][6] packages. Creator [vinifmor][7] started the project in June 2019 with the [intention][8] of “giving a graphical interface to manage Flatpaks for Manjaro users.” Since then, he has expanded the application to add support for Debian-based systems.
![Bauh About][9]
When you first open bauh, it will scan your installed applications and check for updates. If there are any that need to be updated, they will be listed front and center. Once all the packages are updated, you will see a list of packages you have installed. You can deselect a package with updates to prevent it from being updated. You can also choose to install a previous version of the application.
![With Bauh you can manage various types of packages from one application][10]
You can also search for applications. Bauh has detailed information for both installed and searched packages. If you are not interested in one (or more) of the packaging types, you can deselect them in settings.
### Installing bauh on your Linux distribution
Lets see how to install bauh.
#### Arch-based distributions
If you have a recent install of [Manjaro][11], you should be all set. Bauh comes installed by default. If you have an older install of Manjaro (like I do) or a different Arch-based distro, you can install it from the [AUR][12] with an AUR helper, such as yay:
```
yay -S bauh
```
![Bauh Package Info][13]
#### Debian/Ubuntu based distributions
If you have a Debian- or Ubuntu-based Linux distribution, you can install bauh with pip. First, make sure to [install pip on Ubuntu][14].
```
sudo apt install python3-pip
```
And then use it to install bauh:
```
pip3 install bauh
```
However, the creator recommends installing it [manually][15] to avoid messing up your system's libraries.
To install bauh manually, you have to first download the [latest release][16]. Once you download it, you can [unzip using a graphical tool][17] or the [unzip command][18]. Next, open up the folder in your terminal. You will need to use the following steps to complete the install.
First, create a virtualenv in a folder called env:
```
python3 -m venv env
```
Now install the application code inside the env:
```
env/bin/pip install .
```
And launch the application:
```
env/bin/bauh
```
![Bauh Updating][19]
Once you finish installing bauh, you can [fine-tune][20] it by changing its environment settings and arguments.
### The road ahead for bauh
Bauh has grown quite a bit in a few short months, and its developer plans to keep it growing. The current [road map][21] includes:
* Support for other packaging technologies
* Separate modules for each packaging technology
* Memory and performance improvements
* User experience improvements
![Bauh Search][22]
### Final thoughts
When I tried out bauh, I ran into a couple of issues. When I opened it for the first time, it told me that Snap was not installed and that I would have to install it if I wanted to use Snaps. I knew that Snap was installed because running `snap list` in the terminal worked. After I restarted the system, Snap support worked.
The other issue I ran into was that one of my AUR packages failed to update. I was able to update the package with `yay` without any issue. There might be a problem with my install of Manjaro; I've had it going for 3 or 4 years.
Overall, bauh worked. It did what it says on the tin. I can't ask for more than that.
Have you ever used bauh? What is your favorite tool for managing different package formats, if you have one? Let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][23].
--------------------------------------------------------------------------------
via: https://itsfoss.com/bauh-package-manager/
Author: [John Paul][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/)
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://snapcraft.io/
[2]: https://flatpak.org/
[3]: https://appimage.org/
[4]: https://github.com/vinifmor/bauh
[5]: https://itsfoss.com/use-appimage-linux/
[6]: https://itsfoss.com/best-aur-helpers/
[7]: https://github.com/vinifmor
[8]: https://forum.manjaro.org/t/bauh-formerly-known-as-fpakman-a-gui-for-flatpak-and-snap-management/96180
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/bauh-about.jpg?ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/bauh.jpg?ssl=1
[11]: https://manjaro.org/
[12]: https://aur.archlinux.org/packages/bauh
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/11/bauh-package-info.jpg?ssl=1
[14]: https://itsfoss.com/install-pip-ubuntu/
[15]: https://github.com/vinifmor/bauh#manual-installation
[16]: https://github.com/vinifmor/bauh/releases
[17]: https://itsfoss.com/unzip-linux/
[18]: https://linuxhandbook.com/unzip-command/
[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/bauh-updating.jpg?ssl=1
[20]: https://github.com/vinifmor/bauh#general-settings
[21]: https://github.com/vinifmor/bauh#roadmap
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/11/bauh-search.png?resize=800%2C319&ssl=1
[23]: https://reddit.com/r/linuxusersgroup


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,181 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to document Python code with Sphinx)
[#]: via: (https://opensource.com/article/19/11/document-python-sphinx)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
How to document Python code with Sphinx
======
Documentation is best written as part of the development process. Sphinx, coupled with Tox, makes documentation easy to write and beautiful to look at.
![Python in a coffee cup.][1]
Python code can include documentation right inside its source code. The default way of doing so relies on **docstrings**, which are defined in a triple-quote format. While the value of documentation is significant, code without sufficient documentation is still all too common. Let's walk through a scenario to see the power of great documentation.
You have had enough of whiteboard technical interviews asking you to implement the Fibonacci sequence. So you go home and write a reusable Fibonacci calculator in Python, using a floating-point trick to achieve O(1) complexity.
The code is simple:
```
# fib.py
import math
_SQRT_5 = math.sqrt(5)
_PHI = (1 + _SQRT_5) / 2
def approx_fib(n):
return round(_PHI**(n+1) / _SQRT_5)
```
(The fact that the Fibonacci sequence is just a geometric sequence rounded to the nearest integer is one of my favorite little-known math facts.)
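To see the trick in action, here is a quick sketch (not part of the original article) that compares the closed-form approximation against a plain iterative Fibonacci:

```python
import math

_SQRT_5 = math.sqrt(5)
_PHI = (1 + _SQRT_5) / 2


def approx_fib(n):
    # Closed-form (Binet-style) approximation: O(1) per call.
    return round(_PHI ** (n + 1) / _SQRT_5)


def iter_fib(n):
    # Reference implementation: iterate the recurrence.
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a


print([approx_fib(n) for n in range(10)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
# The two agree for moderate n; floating-point error eventually
# breaks the approximation for very large indices.
print(all(approx_fib(n) == iter_fib(n) for n in range(40)))  # True
```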
Being a good person, you open source the code and put it on [PyPI][2]. The setup.py file is simple:
```
import setuptools
setuptools.setup(
name='fib',
version='2019.1.0',
description='Fibonacci',
py_modules=["fib"],
)
```
But code without documentation is useless. So you add a docstring to the function. One of my favorite docstring styles is the [“Google” style][3]. The markup is lightweight, which is nice when it lives inside source code.
```
def approx_fib(n):
"""
Approximate Fibonacci sequence
Args:
n (int): The place in Fibonacci sequence to approximate
Returns:
float: The approximate value in Fibonacci sequence
"""
# ...
```
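A nice side effect is that the docstring travels with the code at runtime. Here is a quick sketch (combining the two snippets above into one file) showing that the same text Sphinx's autodoc will render is already available to `help()` and `.__doc__`:

```python
import math

_SQRT_5 = math.sqrt(5)
_PHI = (1 + _SQRT_5) / 2


def approx_fib(n):
    """
    Approximate Fibonacci sequence

    Args:
        n (int): The place in Fibonacci sequence to approximate

    Returns:
        float: The approximate value in Fibonacci sequence
    """
    return round(_PHI ** (n + 1) / _SQRT_5)


# The docstring is attached to the function object itself: interactive
# users see it via help(approx_fib), and Sphinx's autodoc reads it too.
print(approx_fib.__doc__.strip().splitlines()[0])  # Approximate Fibonacci sequence
```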
But documenting the function is only half the battle. Prose documentation is important for putting code usage in context. In this case, the context is annoying technical interviews.
There is a way to add more documentation: the Pythonic pattern is usually to add **rst** files (short for [reStructuredText][4]) under **docs/**. So the **docs/index.rst** file ends up looking something like this:
```
Fibonacci
=========
Are you annoyed at tech interviewers asking you to implement
the Fibonacci sequence?
Do you want to have some fun with them?
A simple
:code:`pip install fib`
is all it takes to tell them to,
um,
fib off.
.. automodule:: fib
:members:
```
And we're done, right? We've put the text in a file. People should look at it.
### Making Python documentation prettier
To make your documentation look prettier, you can take advantage of [Sphinx][5], which is designed to produce beautiful Python documentation. These three Sphinx extensions are especially helpful:
* **sphinx.ext.autodoc**: Grabs documentation from inside modules
* **sphinx.ext.napoleon**: Supports Google-style docstrings
* **sphinx.ext.viewcode**: Packages the reStructuredText sources with the generated documentation
To tell Sphinx what to generate and how, we configure a helper file at **docs/conf.py**:
```
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.napoleon',
'sphinx.ext.viewcode',
]
# The name of the entry point, without the ".rst" extension.
# By convention this will be "index"
master_doc = "index"
# These values are all used in the generated documentation.
# Usually, the release and version are the same,
# but sometimes we want to have the release have an "rc" tag.
project = "Fib"
copyright = "2019, Moshe Zadka"
author = "Moshe Zadka"
version = release = "2019.1.0"
```
This file lets us publish the code with all the metadata we need, and it notes the extensions (the comments above explain how). Finally, to make sure we generate the documentation we want, we use [Tox][6] to manage a virtual environment and ensure the documentation builds smoothly:
```
[tox]
# By default, .tox is the directory.
# Putting it in a non-dot file allows opening the generated
# documentation from file managers or browser open dialogs
# that will sometimes hide dot files.
toxworkdir = {toxinidir}/build/tox
[testenv:docs]
# Running sphinx from inside the "docs" directory
# ensures it will not pick up any stray files that might
# get into a virtual environment under the top-level directory
# or other artifacts under build/
changedir = docs
# The only dependency is sphinx
# If we were using extensions packaged separately,
# we would specify them here.
# A better practice is to specify a specific version of sphinx.
deps =
sphinx
# This is the sphinx command to generate HTML.
# In other circumstances, we might want to generate a PDF or an ebook
commands =
sphinx-build -W -b html -d {envtmpdir}/doctrees . {envtmpdir}/html
# We use Python 3.7. Tox sometimes tries to autodetect it based on the name of
# the testenv, but "docs" does not give useful clues so we have to be explicit.
basepython = python3.7
```
Now, whenever you run Tox (for example, `tox -e docs` for the testenv above), it will generate beautiful documentation for your Python code.
### Writing documentation in Python is nice
As Python developers, the toolchain available to us is wonderful. We can start with **docstrings**, add **.rst** files, and then add Sphinx and Tox to beautify the results for our users.
What do you appreciate about good documentation? Do you have other favorite approaches? Share them in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/document-python-sphinx
Author: [Moshe Zadka][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/)
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_python.jpg?itok=G04cSvp_ (Python in a coffee cup.)
[2]: https://pypi.org/
[3]: http://google.github.io/styleguide/pyguide.html#381-docstrings
[4]: http://docutils.sourceforge.net/rst.html
[5]: http://www.sphinx-doc.org/en/master/
[6]: https://tox.readthedocs.io/en/latest/


@ -0,0 +1,139 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bauh Manage Snaps, Flatpaks and AppImages from One Interface)
[#]: via: (https://itsfoss.com/bauh-package-manager/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
bauh: Manage Snap, Flatpak and AppImage Packages from One Interface
======
One of the biggest problems with universal packages like [Snap][1], [Flatpak][2] and [AppImage][3] is managing them. Most built-in package managers do not support all of these new formats.
Thankfully, I stumbled across an application that supports several universal package formats.
### bauh: a Manager for Your Multi-Package Needs
Originally named fpakman, [bauh][4] (LCTT translator's note: we suggest the Chinese name “包豪” for this software) is designed to handle Flatpak, Snap, [AppImage][5] and [AUR][6] packages. Creator [vinifmor][7] started the project in June 2019 with the [intention][8] of “giving a graphical interface to manage Flatpaks for Manjaro users.” Since then, he has expanded the application to add support for Debian-based systems.
![Bauh About][9]
When you first open bauh, it scans your installed applications and checks for updates. Any that need updating are listed front and center. Once all the packages are updated, you see a list of your installed packages. You can deselect a package with pending updates to prevent it from being updated. You can also choose to install a previous version of an application.
![With Bauh you can manage various types of packages from one application][10]
You can also search for applications. bauh provides detailed information on both installed packages and search results. If you are not interested in one (or more) of the packaging types, you can deselect them in the settings.
### Installing bauh on your Linux distribution
Let's see how to install bauh.
#### Arch-based distributions
If you have a recent install of [Manjaro][11], you should be all set. bauh comes installed by default. If you have an older install of Manjaro (like I do) or a different Arch-based distro, you can install it from the [AUR][12]. Note that `pacman` itself cannot install AUR packages, so use an AUR helper such as `yay`:
```
yay -S bauh
```
![Bauh Package Info][13]
#### Debian/Ubuntu-based distributions
If you have a Debian- or Ubuntu-based Linux distribution, you can install bauh with `pip`. First, make sure to [install pip on Ubuntu][14]:
```
sudo apt install python3-pip
```
Then use it to install bauh:
```
pip3 install bauh
```
However, the creator recommends installing it [manually][15] to avoid messing up your system's libraries.
To install bauh manually, you first have to download the [latest release][16]. Once downloaded, you can [unzip it with a graphical tool][17] or the [unzip command][18]. Next, open the folder in your terminal and use the following steps to complete the install.
First, create a virtualenv in a folder named `env`:
```
python3 -m venv env
```
Now install the application code inside the env:
```
env/bin/pip install .
```
And launch the application:
```
env/bin/bauh
```
![Bauh Updating][19]
Once you finish installing bauh, you can [fine-tune][20] it by changing its environment settings and arguments.
### The road ahead for bauh
bauh has grown quite a bit in just a few months, and its developer plans to keep it growing. The current [road map][21] includes:
* Support for other packaging technologies
* A separate module for each packaging technology
* Memory and performance improvements
* User experience improvements
![Bauh Search][22]
### Final thoughts
When I tried out bauh, I ran into two issues. When I first opened it, it told me that Snap was not installed and that I would have to install it if I wanted to use Snap packages. I knew that Snap was installed because running `snap list` in the terminal worked. After I restarted the system, Snap support worked.
The other issue I ran into was that one of my AUR packages failed to update. I was able to update the package with `yay` without any issue. There might be a problem with my install of Manjaro; I've had it going for 3 or 4 years.
Overall, bauh worked. It did what it says on the tin. I can't ask for more than that.
Have you ever used bauh? If so, what is your favorite tool for managing different package formats? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/bauh-package-manager/
Author: [John Paul][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/)
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://snapcraft.io/
[2]: https://flatpak.org/
[3]: https://appimage.org/
[4]: https://github.com/vinifmor/bauh
[5]: https://itsfoss.com/use-appimage-linux/
[6]: https://itsfoss.com/best-aur-helpers/
[7]: https://github.com/vinifmor
[8]: https://forum.manjaro.org/t/bauh-formerly-known-as-fpakman-a-gui-for-flatpak-and-snap-management/96180
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/bauh-about.jpg?ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/bauh.jpg?ssl=1
[11]: https://manjaro.org/
[12]: https://aur.archlinux.org/packages/bauh
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/11/bauh-package-info.jpg?ssl=1
[14]: https://itsfoss.com/install-pip-ubuntu/
[15]: https://github.com/vinifmor/bauh#manual-installation
[16]: https://github.com/vinifmor/bauh/releases
[17]: https://itsfoss.com/unzip-linux/
[18]: https://linuxhandbook.com/unzip-command/
[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/bauh-updating.jpg?ssl=1
[20]: https://github.com/vinifmor/bauh#general-settings
[21]: https://github.com/vinifmor/bauh#roadmap
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/11/bauh-search.png?resize=800%2C319&ssl=1
[23]: https://reddit.com/r/linuxusersgroup