Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-10-31 07:22:05 +08:00
commit 810d1286bc
13 changed files with 1313 additions and 193 deletions


@@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11519-1.html)
[#]: subject: (Object-Oriented Programming and Essential State)
[#]: via: (https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Object-Oriented Programming and Essential State
======
![](https://img.linux.net.cn/data/attachment/album/201910/30/232452kvdivhgb9b2yi0ug.jpg)
Back in 2015, Brian Will wrote a provocative blog post: [Object-Oriented Programming: A Disaster Story][1]. He followed it up with a video called [Object-Oriented Programming is Bad][2], which goes into much more detail. I recommend taking some time to watch the video, but here's my quick summary:
> The Platonic ideal of OOP is a sea of decoupled objects that send stateless messages to one another. No one really makes software like this, and Brian points out that it doesn't even make sense: objects need to know which object to send messages to, which means they need to hold references to one another. Most of the video is about the pain that comes from trying to couple objects for control flow while pretending that they're decoupled by design.
Overall, his ideas resonate with my own experience of OOP: objects are fine, but what I've never been satisfied with is modeling a program's control flow with objects, and trying to make code "properly" object-oriented always seems to create unnecessary levels of complexity.
There's one thing I don't think he fully explains. He says outright that "encapsulation does not work", but follows it with the footnote "at the fine-grained level of code", and goes on to acknowledge that objects can sometimes work, and that encapsulation is okay at the level of libraries and files. But he doesn't explain exactly why it sometimes works and sometimes doesn't, or how and where to draw the line. Some might say that makes his "OOP is bad" claim flawed, but I think his point stands, and the line can be drawn between essential state and accidental state.
If you haven't heard the terms "essential" and "accidental" used this way before, you should read Fred Brooks' classic essay, [No Silver Bullet][3]. (He has written many other great essays about building software systems, by the way.) I've written before about [essential and accidental complexity][4]; here's a quick summary: software is complex. Partly that's because we want software to solve messy real-world problems, and we call that "essential complexity". "Accidental complexity" is all the other complexity that exists because we're trying to use silicon and metal to solve problems that have nothing to do with silicon and metal. For example, for most programs, code for managing memory, or for shuffling data between RAM and disk, or for parsing text formats, is "accidental complexity".
Suppose you're building a chat application that supports multiple channels. Messages can arrive in any channel at any time. Some channels are particularly interesting, and the user wants to be notified when new messages come in. Other channels are muted: the messages are stored, but the user isn't interrupted. You need to keep track of the user's preferred setting for each channel.
One way to do it is with a map (a.k.a. a hash table, dictionary, or associative array) between the channels and the channel settings. Note that a map is the kind of abstract data type (ADT) that Brian Will said can work as an object.
If we had a debugger and looked at the map object in memory, what would we see? We'd find the channel IDs and the channel settings data, of course (or at least pointers to them). But we'd also find other data. If the map is implemented with a red-black tree, we'd see tree node objects with red/black labels and pointers to other nodes. The channel-related data is the essential state, and the tree nodes are accidental state. Notice something, though: the map effectively encapsulates its accidental state; you could replace it with another map implemented with an AVL tree instead, and your chat app would still work. On the other hand, the map does not encapsulate the essential state (merely accessing the data through `get()` and `set()` methods is not encapsulation). In fact, the map is as agnostic as possible about the essential state: you could use basically the same map data structure to store other mappings that have nothing to do with channels or notifications.
And that's why the map ADT is so successful: it encapsulates accidental state and decouples it from essential state. If you think about it, the problems Brian described with encapsulation are problems with trying to encapsulate essential state, and the benefits he acknowledged are the benefits of encapsulating accidental state.
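To make the distinction concrete, here's a toy sketch in Python (my own illustration, not from the article): the chat code touches only the essential state through the map's interface, so the accidental state behind that interface can be swapped freely.

```
# Essential state: which channels the user wants notifications for.
# The built-in dict is one map implementation (a hash table); a red-black
# or AVL tree exposing the same get/set interface would work just as well.
channel_settings = {}

def set_notify(channel_id, notify):
    channel_settings[channel_id] = notify

def should_notify(channel_id):
    # Channels with no stored preference default to muted.
    return channel_settings.get(channel_id, False)

set_notify("general", True)
set_notify("random", False)
print(should_notify("general"))   # True
print(should_notify("offtopic"))  # False
```

Nothing in the dict knows or cares what a "channel" is; that agnosticism is exactly what lets the accidental state stay hidden.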
It's pretty hard to make an entire software system fit this ideal, but scaled up, I think it looks something like this:
* No global, mutable state
* Accidental state encapsulated (in objects or modules or whatever)
* Stateless accidental complexity encapsulated in separate functions, decoupled from the data
* Inputs and outputs made explicit using techniques like dependency injection
* Components fully owned and controlled from easily identifiable locations
Some of this goes against instincts I had a long time ago. For example, if you have a function that makes a database query, the interface looks simpler if the database connection handling is hidden inside the function, and the only parameters are the query parameters. But when you build a software system out of functions like this, coordinating the database usage actually becomes more complex. Not only are components doing things their own way, they're also trying to hide what they're doing as "implementation details". The fact that a database query requires a database connection was never an implementation detail. If something can't be hidden, it's saner to make it explicit.
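As a sketch of that contrast (my own illustration, not from the article), compare a query function that hides its connection with one that takes it explicitly:

```
import sqlite3

# Hidden dependency: the connection is treated as an "implementation
# detail", so every call sets up its own private in-memory database.
def count_users_hidden():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    return conn.execute("SELECT count(*) FROM users").fetchone()[0]

# Explicit dependency: the caller owns the connection, so database
# usage can be coordinated in one easily identifiable place.
def count_users(conn):
    return conn.execute("SELECT count(*) FROM users").fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(count_users_hidden())  # always 0: it only ever sees its own empty table
print(count_users(conn))     # 1
```

The "simpler" hidden version can never participate in the rest of the system's view of the database, which is exactly the coordination problem described above.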
I'm wary of treating OOP and functional programming as two opposing poles, but I think it's interesting to look at FP as the opposite extreme of OOP: OOP tries to encapsulate things, including essential complexity that can't be encapsulated, while pure FP tends to make things explicit, including some accidental complexity. Most of the time that's fine, but sometimes (such as when [building self-referential data structures in a purely functional language][5]) the design is driven more by functional purity than by simplicity (which is why [Haskell includes some "escape hatches"][6]). I've previously written about a middle ground of [so-called "weak purity"][7].
Brian found that encapsulation works at larger scales for a couple of reasons. One is that larger components are more likely to contain accidental state, simply because of their size. Another is that what's "accidental" depends on the problem you're solving. From the chat app user's point of view, "accidental complexity" is anything unrelated to things like messages, channels, and users. But as you break the problem down into subproblems, more things become "essential". For example, the mapping between channel names and channel IDs is accidental complexity when solving the "build a chat app" problem, but it's essential complexity when solving the "implement the `getChannelIdByName()` function" subproblem. So encapsulation tends to be less useful for subcomponents than for parent components.
By the way, at the end of the video, Brian Will wonders if any language supports anonymous functions that *cannot* access the scope they're in. [D][8] does. Anonymous lambdas in D are normally closures, but you can also declare anonymous stateless functions if that's what you want:
```
import std.stdio;

void main()
{
    int x = 41;

    // Value from immediately executed lambda
    auto v1 = () {
        return x + 1;
    }();
    writeln(v1);

    // Same thing
    auto v2 = delegate() {
        return x + 1;
    }();
    writeln(v2);

    // Plain functions aren't closures
    auto v3 = function() {
        // Can't access x
        // Can't access any mutable global state either if also marked pure
        return 42;
    }();
    writeln(v3);
}
```
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html
Author: [Simon Arneaud][a]
Collector: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab
[2]: https://www.youtube.com/watch?v=QM1iUe6IofM
[3]: http://www.cs.nott.ac.uk/~pszcah/G51ISS/Documents/NoSilverBullet.html
[4]: https://theartofmachinery.com/2017/06/25/compression_complexity_and_software.html
[5]: https://wiki.haskell.org/Tying_the_Knot
[6]: https://en.wikibooks.org/wiki/Haskell/Mutable_objects#The_ST_monad
[7]: https://theartofmachinery.com/2016/03/28/dirtying_pure_functions_can_be_useful.html
[8]: https://dlang.org


@@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora 31 is officially here!)
[#]: via: (https://fedoramagazine.org/announcing-fedora-31/)
[#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/)
Fedora 31 is officially here!
======
![][1]
It's here! We're proud to announce the release of Fedora 31. Thanks to the hard work of thousands of Fedora community members and contributors, we're celebrating yet another on-time release. This is getting to be a habit!
If you just want to get to the bits without delay, go to <https://getfedora.org/> right now. For details, read on!
### Toolbox
If you haven't used the [Fedora Toolbox][2], this is a great time to try it out. This is a simple tool for launching and managing personal workspace containers, so you can do development or experiment in an isolated experience. It's as simple as running “toolbox enter” from the command line.
This containerized workflow is vital for users of the ostree-based Fedora variants like CoreOS, IoT, and Silverblue, but is also extremely useful on any workstation or even server system. Look for many more enhancements to this tool and the user experience around it in the next few months — your feedback is very welcome.
### All of Fedora's Flavors
Fedora Editions are targeted outputs geared toward specific “showcase” uses.
Fedora Workstation focuses on the desktop, and in particular on software developers who want a “just works” Linux operating system experience. This release features GNOME 3.34, which brings significant performance enhancements that will be especially noticeable on lower-powered hardware.
Fedora Server brings the latest in cutting-edge open source server software to systems administrators in an easy-to-deploy fashion.
And, in preview state, we have Fedora CoreOS, a category-defining operating system made for the modern container world, and [Fedora IoT][3] for “edge computing” use cases. (Stay tuned for a planned contest to find a shiny name for the IoT edition!)
Of course, we produce more than just the editions. [Fedora Spins][4] and [Labs][5] target a variety of audiences and use cases, including [Fedora Astronomy][6], which brings a complete open source toolchain to both amateur and professional astronomers, and desktop environments like [KDE Plasma][7] and [Xfce][8].
And, don't forget our alternate architectures, [ARM AArch64, Power, and S390x][9]. Of particular note, we have improved support for the Rockchip system-on-a-chip devices including the Rock960, RockPro64, and Rock64, plus initial support for “[panfrost][10]”, an open source 3D accelerated graphics driver for newer Arm Mali “midgard” GPUs.
If you're using an older 32-bit only i686 system, though, it's time to find an alternative — [we bid farewell to 32-bit Intel architecture as a base system][11] this release.
### General improvements
No matter what variant of Fedora you use, you're getting the latest the open source world has to offer. Following our “[First][12]” foundation, we're enabling CgroupsV2 (if you're using Docker, [make sure to check this out][13]). Glibc 2.30 and NodeJS 12 are among the many updated packages in Fedora 31. And, we've switched the “python” command to be Python 3 — remember, Python 2 is end-of-life at the [end of this year][14].
We're excited for you to try out the new release! Go to <https://getfedora.org/> and download it now. Or if you're already running a Fedora operating system, follow the easy [upgrade instructions][15].
### In the unlikely event of a problem….
If you run into a problem, check out the [Fedora 31 Common Bugs][16] page, and if you have questions, visit our [Ask Fedora][17] user-support platform.
### Thank you everyone
Thanks to the thousands of people who contributed to the Fedora Project in this release cycle, and especially to those of you who worked extra hard to make this another on-time release. And if you're in Portland for [USENIX LISA][18] this week, stop by the expo floor and visit me at the Red Hat, Fedora, and CentOS booth.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/announcing-fedora-31/
Author: [Matthew Miller][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://fedoramagazine.org/author/mattdm/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/fedora31-816x345.jpg
[2]: https://docs.fedoraproject.org/en-US/fedora-silverblue/toolbox/
[3]: https://iot.fedoraproject.org/
[4]: https://spins.fedoraproject.org/
[5]: https://labs.fedoraproject.org/
[6]: https://labs.fedoraproject.org/en/astronomy/
[7]: https://spins.fedoraproject.org/en/kde/
[8]: https://spins.fedoraproject.org/en/xfce/
[9]: https://alt.fedoraproject.org/alt/
[10]: https://panfrost.freedesktop.org/
[11]: https://fedoramagazine.org/in-fedora-31-32-bit-i686-is-86ed/
[12]: https://docs.fedoraproject.org/en-US/project/#_first
[13]: https://fedoraproject.org/wiki/Common_F31_bugs#Docker_package_no_longer_available_and_will_not_run_by_default_.28due_to_switch_to_cgroups_v2.29
[14]: https://pythonclock.org/
[15]: https://docs.fedoraproject.org/en-US/quick-docs/upgrading/
[16]: https://fedoraproject.org/wiki/Common_F31_bugs
[17]: http://ask.fedoraproject.org
[18]: https://www.usenix.org/conference/lisa19


@@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 reasons why I love Python)
[#]: via: (https://opensource.com/article/19/10/why-love-python)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
5 reasons why I love Python
======
These are a few of my favorite things about Python.
![Snake charmer cartoon with a yellow snake and a blue snake][1]
I have been using Python since it was a little-known language in 1998. It was a time when [Perl was quite popular][2] in the open source world, but I believed in Python from the moment I found it. My parents like to remind me that I used to say things like, "Python is going to be a big deal" and "I'll be able to find a job using it one day." It took a while, but my predictions came true.
There is so much to love about the language. Here are my top 5 reasons why I continue to love Python so much (in reverse order, to build anticipation).
### 5\. Python reads like executable pseudocode
Pseudocode is the concept of writing out programming logic without it following the exact syntax and grammar of a specific language. I have stopped writing much pseudocode since becoming a Python programmer because its actual design meets my needs.
Python can be easy to read even if you don't know the language well, and that is very much by design. It is reasonably famous for requiring whitespace in order for code to run. Whitespace is necessary for any language: it allows us to see each of the words in this sentence as distinct. Most languages have suggestions or "best practices" around whitespace usage, but Python takes a bold step by requiring standardization. For me, that makes it incredibly straightforward to read through code and see exactly what it's doing.
For example, here is an implementation of the classic [bubble sort algorithm][3].
```
def bubble_sort(things):
    needs_pass = True
    while needs_pass:
        needs_pass = False
        for idx in range(1, len(things)):
            if things[idx - 1] > things[idx]:
                things[idx - 1], things[idx] = things[idx], things[idx - 1]
                needs_pass = True
```
Now let's compare that with [this implementation][4] in Java.
```
public static int[] bubblesort(int[] numbers) {
    boolean swapped = true;
    for(int i = numbers.length - 1; i > 0 && swapped; i--) {
        swapped = false;
        for (int j = 0; j < i; j++) {
            if (numbers[j] > numbers[j+1]) {
                int temp = numbers[j];
                numbers[j] = numbers[j+1];
                numbers[j+1] = temp;
                swapped = true;
            }
        }
    }
    return numbers;
}
```
I appreciate that Python requires indentation to indicate nesting of blocks. While our Java example also uses indentation quite nicely, it is not required. The curly brackets are what determine the beginning and end of the block, not the spacing. Since Python uses whitespace as syntax, there is no need for beginning **{** and end **}** notation throughout the other code. 
Python also avoids the need for semicolons, a form of [syntactic sugar][5] needed to make other languages human-readable. Python is much easier on my eyes, and it feels so close to pseudocode that it sometimes surprises me what is runnable!
### 4\. Python has powerful primitives
In programming language design, a primitive is the simplest available element. The fact that Python is easy to read does _not_ mean it is not a powerful language, and that stems from its use of primitives. My favorite example of what makes Python both easy to use and advanced is its concept of **generators**. 
Imagine you have a simple binary tree structure with `value`, `left`, and `right`. You want to easily iterate over it in order. You usually are looking for "small" elements, in order to exit as soon as the right value is found. That sounds simple so far. However, there are many kinds of algorithms to make a decision on the element.
Other languages would have you write a **visitor**, where you invert control by putting your "is this the right element?" in a function and call it via function pointers. You _can_ do this in Python. But you don't have to.
```
def in_order(tree):
    if tree is None:
        return
    yield from in_order(tree.left)
    yield tree.value
    yield from in_order(tree.right)
```
This _generator function_ will return an iterator that, if used in a **for** loop, will only execute as much as needed but no more. That's powerful.
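To see that laziness in action, here's a small sketch (the `Tree` class and the sample values are my own, not from the article): searching for the first in-order value greater than 2 stops traversing as soon as a match is found.

```
from dataclasses import dataclass

@dataclass
class Tree:
    value: int
    left: "Tree" = None
    right: "Tree" = None

def in_order(tree):
    if tree is None:
        return
    yield from in_order(tree.left)
    yield tree.value
    yield from in_order(tree.right)

# A small search tree whose in-order traversal is 1..7
root = Tree(4, Tree(2, Tree(1), Tree(3)), Tree(6, Tree(5), Tree(7)))

# next() pulls values from the iterator only until the condition holds,
# rather than traversing the whole tree up front.
first_big = next(v for v in in_order(root) if v > 2)
print(first_big)  # 3
```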
### 3\. The Python standard library
Python has a great standard library with many hidden gems I did not know about until I took the time to [walk through the list of all available][6] functions, constants, types, and much more. One of my personal favorites is the `itertools` module, which is listed under the functional programming modules (yes, [Python supports functional programming][7]!).
It is great for playing jokes on your tech interviewer, for example with this nifty little solution to the classic [FizzBuzz interview question][8]:
```
import itertools
import operator

fizz = itertools.cycle(itertools.chain(['Fizz'], itertools.repeat('', 2)))
buzz = itertools.cycle(itertools.chain(['Buzz'], itertools.repeat('', 4)))
fizz_buzz = map(operator.add, fizz, buzz)
numbers = itertools.islice(itertools.count(), 100)
combo = zip(fizz_buzz, numbers)
for fzbz, n in combo:
    print(fzbz or n)
```
A quick web search will show that this is not the most straightforward way to solve FizzBuzz, but it sure is fun!
Beyond jokes, the `itertools`, `heapq`, and `functools` modules are a trove of treasures that come by default in your Python implementation.
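For a taste of what's in those modules (these toy examples are mine, not the article's):

```
import functools
import heapq
import itertools

# heapq.nsmallest: the k smallest items without fully sorting the input
print(heapq.nsmallest(3, [9, 1, 8, 2, 7, 3]))  # [1, 2, 3]

# functools.reduce: fold a sequence down to a single value (5! here)
print(functools.reduce(lambda a, b: a * b, range(1, 6)))  # 120

# itertools.groupby: run-length encode a string of adjacent repeats
print([(ch, len(list(g))) for ch, g in itertools.groupby("aaabbc")])
# [('a', 3), ('b', 2), ('c', 1)]
```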
### 2\. The Python ecosystem is massive
For everything that is not in the standard library, there is an enormous ecosystem to support the new Pythonista, from exciting packages to text editor plugins specifically for the language. With around 200,000 projects hosted on PyPI (at the time of writing) and growing, there is something for everyone: [data science][9], [async frameworks][10], [web frameworks][11], or just tools to make [remote automation][12] easier.
### 1\. The Python community is special
The Python community is amazing. It was one of the first to adopt a code of conduct, first for the [Python Software Foundation][13] and then for [PyCon][14]. There is a real commitment to diversity and inclusion: blog posts and conference talks on this theme are frequent, thoughtful, and well-read by Python community members.
While the community is global, there is a lot of great activity in the local community as well. Local Python meet-ups are a great place to meet wonderful people who are smart, experienced, and eager to help. A lot of meet-ups will explicitly have time set aside for experienced people to help newcomers who want to learn a new concept or to get past an issue with their code. My local community took the time to support me as I began my Python journey, and I am privileged to continue to give back to new developers.
Whether you can attend a local community meet-up or you spend time with the [online Python community][15] across IRC, Slack, and Twitter, I am sure you will meet lovely people who want to help you succeed as a developer. 
### Wrapping it up
There is so much to love about Python, and now you know my favorite part is definitely the people.
I have found kind, thoughtful Pythonistas in the community throughout the world, and the amount of community investment provided to those in need is incredibly encouraging. In addition to those I've met, the simple, clean, and powerful Python language gives any developer more than enough to master on their journey toward a career in software development or as a hobbyist enjoying playing around with a fun language. If you are interested in learning your first or a new language, consider Python and let me know how I can help.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/why-love-python
Author: [Moshe Zadka][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Snake charmer cartoon with a yellow snake and a blue snake)
[2]: https://opensource.com/article/19/8/command-line-heroes-perl
[3]: https://en.wikipedia.org/wiki/Bubble_sort
[4]: https://en.wikibooks.org/wiki/Algorithm_Implementation/Sorting/Bubble_sort#Java
[5]: https://en.wikipedia.org/wiki/Syntactic_sugar
[6]: https://docs.python.org/3/library/
[7]: https://opensource.com/article/19/10/python-programming-paradigms
[8]: https://en.wikipedia.org/wiki/Fizz_buzz
[9]: https://pypi.org/project/pandas/
[10]: https://pypi.org/project/Twisted/
[11]: https://pypi.org/project/Django/
[12]: https://pypi.org/project/paramiko/
[13]: https://www.python.org/psf/conduct/
[14]: https://us.pycon.org/2019/about/code-of-conduct/
[15]: https://www.python.org/community/


@@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The best (and worst) ways to influence your open community)
[#]: via: (https://opensource.com/open-organization/19/10/how-to-influence-open-community)
[#]: author: (ldimaggi https://opensource.com/users/ldimaggi)
The best (and worst) ways to influence your open community
======
The trick to effectively influencing your community's decisions?
Empathy, confidence, and patience.
![Media ladder][1]
After you've established a positive reputation in an open community—hopefully, as [we discussed in our previous article][2], by being an active member in and contributing productively to that community—you'll have built up a healthy "bank balance" of credibility you can use to influence the _direction_ of that community.
What does this mean in concrete terms? It means you can contribute to the decisions the community makes.
In this article, we'll explain how best to do this—and how best _not_ to do it.
### Understanding influence
To some, the term "influence" denotes a heavy-handed approach to imposing your will over others. That _is_ one way to exercise influence. But "influencing" others over whom you have clear political or economic power and seeing them obey your commands isn't too difficult.
In an organization structured such that a single leader makes decisions and simply "passes down" those decisions to followers, influence isn't _earned_; it's simply _enforced_. Decisions in this sense are mandates. Those decisions don't encourage differing views. If someone questions a decision (or raises a contrarian view) he or she will have a difficult time promoting that view, because people's employment or membership in the organization depends on following the will of the leader. Unfortunately, many hierarchical organizations around the world run this way.
When it comes to influencing people who can actually exercise free will (and most people in an open organization can, to some degree), patience is both necessary and useful. Sometimes the only way to make quick progress is to go slowly and persistently.
### Balancing empathy and confidence
Apart from patience and persistence, what else will you need to display in order to influence others in an open organization? We think these factors are important:
#### Expressing empathy
It's easy to become frustrated when you encounter a situation where you simply cannot get people to change their minds and see things your way. As human beings, we all have beliefs and opinions. And all too frequently, we base these on incorrect information or biases. A key element to success at influencing others in an open organization is understanding not only others' opinions but also the causes behind them.
In this context, empathy and listening skills are more important than your ability to command (and more effective, too). For example, if you propose a change in direction for a project, and other people object, think: Are they objecting because they are carrying emotional "baggage" from a previous project that encountered problems in a similar situation? They may not be able to see your point of view unless they can be freed from carrying around that baggage.
#### Having confidence (in yourself and others)
In this context, to be successful in influencing others, you must have reached your own conclusions through a rigorous vetting process. In other words, you must have gotten past the point of conducting internal debates with yourself. You won't influence others to think or do something you yourself don't believe in.
Don't misunderstand us: This is not a matter of having blind faith in yourself. Indeed, some of the most dangerous people around do not know their own limits. For example, we all have a general understanding of dentistry, but we're not going to work on our own teeth (or anyone else's, for that matter)! The confidence you have in your opinion must be based on your ability to defend that position to both others and yourself, based on facts and evidence. You also have to have confidence in your audience. You have to have faith that when presented with facts and evidence, they have the ability to internalize that argument, understand, and eventually accept that information.
### Moving forward
So far we've focused almost exclusively on the _positive_ situations in which you'd want to apply your influence (i.e., to "bring people around" to your side of an issue). Unfortunately, you'll also encounter _negative_ situations where team members are in disagreement, or one or more team members are simply saying "no" to all your attempts to find common ground.
Remember, in an open organization, great ideas can come from anyone, not just someone in a leadership position, and those ideas must always be reviewed to ensure they provide value.
What can you do if you hit this type of brick wall? How can you move forward?
The answer might be by applying patient, persistent, and empathetic escalation, along with some flexibility. For example:
* **Search for the root causes of disagreement:** Are the problems that you face technical in nature, or are they interpersonal? Technical issues can be difficult to resolve, but interpersonal problems can be much _more_ difficult, as they involve human needs and emotions (we humans love to hold grudges). Does the person with whom you're dealing feel a loss of control over the project, or are they feeling marginalized? With distributed teams (which often require us to communicate through online tools), hard feelings can grow undetected until they explode into the open. How will you spot and resolve these? You may need to invest time and effort reaching out to team members privately, on a one-to-one basis. Based on time zones, this may require some late nights or early mornings. But it can be very effective, as some people will be reluctant to discuss disagreements in group meetings or online chats.
* **Seek common ground:** A blanket refusal to compromise on a topic can sometimes mask areas of potential agreement. Can you sub-divide the topic you're discussing into smaller pieces, then look for areas of possible agreement or common ground? Building upon smaller agreements can have a multiplier effect, which can lead to better cooperation and ultimately agreement on larger topics. Think of this approach as emulating a sailboat facing a headwind. The only way to make forward progress is to "tack"—that is, to move forward at an angle when a straight ahead path is not possible. 
* **Enlist allies:** Open teams and communities can feel like families. At some point in everyone's family, feuds break out, and you can only resolve them through a third party. On your team or in your community, if you're locked in a polarizing disagreement with a team member, reach out to other members of the team to provide support for your conclusions.
And if all that fails, then try turning to these "last resorts":
* **Last Resort #1:** If empathetic approaches fail, then it's time to escalate. Start by staging an intervention, where the full team meets to convince a team member to adopt a team decision. It's not "do what I'm tellin' ya"; it's "do what we all are asking you to do and here's why."
* **Last Resort #2:** If all else fails—if you've tried _everything else_ on this list and the team is mostly in agreement, yet you cannot get the last few holdouts to agree—then it's time to move on without them. Hopefully, this will be a rare occurrence.
### Conclusions
In a traditional, top-down organization, a person's degree of influence springs from that person's position, title, and the economic power the position commands. In sharp contrast, many open organizations are meritocracies in which the amount of influence a person possesses is directly related to the value of the contributions that one makes to the community. In open source communities, for example, influence is _earned_ over time through contributions—and through patience and persistence—much like a virtual currency. Making slow, patient, and persistent progress can sometimes be more effective than trying to make _quick_ progress.
Remember, in an open organization, great ideas can come from anyone, not just someone in a leadership position, and those ideas must always be reviewed to ensure they provide value. Influence in an open community—like happiness in life—must always be earned. And, once earned, it must be applied with patience and sensitivity to other people's views (and the reasons behind them), and with confidence in both your own judgement and others' abilities to accept occasionally unpleasant, but still critical, facts.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/10/how-to-influence-open-community
Author: [ldimaggi][a]
Collector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/ldimaggi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_meritladder.png?itok=eWIDxnh2 (Media ladder)
[2]: https://opensource.com/open-organization/19/10/gaining-influence-open-community


@@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for October 2019)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-october-2019/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
4 cool new projects to try in COPR for October 2019
======
![][1]
[COPR][2] is a collection of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
This article presents a few new and interesting projects in COPR. If you're new to using COPR, see the [COPR User Documentation][3] for how to get started.
### Nu
[Nu][4], or Nushell, is a shell inspired by PowerShell and modern CLI tools. Using a structured-data-based approach, Nu makes it easy to work with commands that output data, piping it through other commands. The results are then displayed in tables that can be sorted or filtered easily and may serve as inputs for further commands. Finally, Nu provides several built-in commands, multiple shells, and support for plugins.
#### Installation instructions
The [repo][5] currently provides Nu for Fedora 30, 31 and Rawhide. To install Nu, use these commands:
```
sudo dnf copr enable atim/nushell
sudo dnf install nushell
```
### NoteKit
[NoteKit][6] is a program for note-taking. It supports Markdown for formatting notes, and the ability to create hand-drawn notes using a mouse. In NoteKit, notes are sorted and organized in a tree structure.
#### Installation instructions
The [repo][7] currently provides NoteKit for Fedora 29, 30, 31 and Rawhide. To install NoteKit, use these commands:
```
sudo dnf copr enable lyessaadi/notekit
sudo dnf install notekit
```
### Crow Translate
[Crow Translate][8] is a program for translating. It can translate text as well as speak both the input and result, and offers a command-line interface as well. For translation, Crow Translate uses the Google, Yandex, or Bing translate APIs.
#### Installation instructions
The [repo][9] currently provides Crow Translate for Fedora 30, 31 and Rawhide, and for EPEL 8. To install Crow Translate, use these commands:
```
sudo dnf copr enable faezebax/crow-translate
sudo dnf install crow-translate
```
### dnsmeter
[dnsmeter][10] is a command-line tool for testing performance of a nameserver and its infrastructure. For this, it sends DNS queries and counts the replies, measuring various statistics. Among other features, dnsmeter can use different load steps, use payload from PCAP files and spoof sender addresses.
#### Installation instructions
The repo currently provides dnsmeter for Fedora 29, 30, 31 and Rawhide, and EPEL 7. To install dnsmeter, use these commands:
```
sudo dnf copr enable @dnsoarc/dnsmeter
sudo dnf install dnsmeter
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-october-2019/
Author: [Dominik Turecek][a]
Topic selection: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://github.com/nushell/nushell
[5]: https://copr.fedorainfracloud.org/coprs/atim/nushell/
[6]: https://github.com/blackhole89/notekit
[7]: https://copr.fedorainfracloud.org/coprs/lyessaadi/notekit/
[8]: https://github.com/crow-translate/crow-translate
[9]: https://copr.fedorainfracloud.org/coprs/faezebax/crow-translate/
[10]: https://github.com/DNS-OARC/dnsmeter

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SQLite is really easy to compile)
[#]: via: (https://jvns.ca/blog/2019/10/28/sqlite-is-really-easy-to-compile/)
[#]: author: (Julia Evans https://jvns.ca/)
SQLite is really easy to compile
======
In the last week Ive been working on another SQL website (<https://sql-steps.wizardzines.com/>, a list of SQL examples). Im running all the queries on that site with sqlite, and I wanted to use window functions in one of the examples ([this one][1]).
But Im using the version of sqlite from Ubuntu 18.04, and that version is too old and doesnt support window functions. So I needed to upgrade sqlite!
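For the curious, a window function such as `lag()` lets each row look at a neighboring row in the same result set. Heres a minimal sketch of the kind of query involved (the data is made up; it only runs on SQLite 3.25 or newer, which is exactly why the upgrade was needed):

```python
import sqlite3

# Hypothetical example data; lag() needs the underlying SQLite to be >= 3.25.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE temps (day INTEGER, temp INTEGER);
    INSERT INTO temps VALUES (1, 3), (2, 5), (3, 4);
""")

# For each day, show that day's temperature next to the previous day's.
rows = conn.execute("""
    SELECT day, temp, lag(temp) OVER (ORDER BY day) AS prev
    FROM temps
""").fetchall()
print(rows)  # [(1, 3, None), (2, 5, 3), (3, 4, 5)]
```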
This turned out to be surprisingly annoying (as usual), but in a pretty interesting way! I was reminded of some things about how executables and shared libraries work, and it had a very satisfying conclusion. So I wanted to write it up here.
(spoiler: the summary is that <https://www.sqlite.org/howtocompile.html> explains how to compile SQLite and it takes like 5 seconds to do and its 20x easier than my usual experiences compiling software from source)
### attempt 1: download a SQLite binary from their website
The [SQLite download page][2] has a link to a Linux binary for the SQLite command line tool. I downloaded it, it worked on my laptop, and I thought I was done.
But then I tried to run it on a build server I was using (Netlify), and I got this extremely strange error message: “File not found”. I straced it, and sure enough `execve` was returning the error code ENOENT, which means “File not found”. This was kind of maddening because the file was DEFINITELY there and it had the correct permissions and everything.
I googled this problem (by searching “execve enoent”) and found [this Stack Overflow answer][3], which pointed out that to run a binary, you dont just need the binary to exist! You also need its **loader** to exist. (The path to the loader is inside the binary.)
To see the path for the loader you can use `ldd`, like this:
```
$ ldd sqlite3
linux-gate.so.1 (0xf7f9d000)
libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xf7f70000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xf7e6e000)
libz.so.1 => /lib/i386-linux-gnu/libz.so.1 (0xf7e4f000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf7c73000)
/lib/ld-linux.so.2
```
So `/lib/ld-linux.so.2` is the loader, and that file doesnt exist on the build server, probably because that Xenial installation didnt have support for 32-bit binaries (?), and I needed to try something different.
### attempt 2: install the Debian sqlite3 package
Okay, I thought, maybe I can install the [sqlite package from debian testing][4]. Trying to install a package from a different Debian version that Im not using is literally never a good idea, but for some reason I decided to try it anyway.
Doing this completely unsurprisingly broke the sqlite installation on my computer (which also broke git), but I managed to recover from that with a bunch of `sudo dpkg --purge --force-all libsqlite3-0` and made everything that depended on sqlite work again.
### attempt 3: extract the Debian sqlite3 package
I also briefly tried to just extract the sqlite3 binary from the Debian sqlite package and run it. Unsurprisingly, this also didnt work, but in a more understandable way: I had an older version of libreadline (.so.7) and it wanted .so.8.
```
$ ./usr/bin/sqlite3
./usr/bin/sqlite3: error while loading shared libraries: libreadline.so.8: cannot open shared object file: No such file or directory
```
### attempt 4: compile it from source
The whole reason I spent all this time trying to download sqlite binaries is that I assumed it would be annoying or time consuming to compile sqlite from source. But obviously downloading random sqlite binaries was not working for me at all, so I finally decided to try to compile it myself.
Here are the directions: [How to compile SQLite][5]. And theyre the EASIEST THING IN THE UNIVERSE. Often compiling things feels like this:
* run `./configure`
* realize im missing a dependency
* run `./configure` again
* run `make`
* the compiler fails because actually i have the wrong version of some dependency
* go do something else and try to find a binary
Compiling SQLite works like this:
* download an [amalgamation tarball from the download page][2]
* run `gcc shell.c sqlite3.c -lpthread -ldl`
* thats it!!!
All the code is in one file (`sqlite3.c`), and there are no weird dependencies! Its amazing.
For my specific use case I didnt actually need threading support or readline support or anything, so I used the instructions on the compile page to create a very simple binary that only used libc and no other shared libraries.
```
$ ldd sqlite3
linux-vdso.so.1 (0x00007ffe8e7e9000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fbea4988000)
/lib64/ld-linux-x86-64.so.2 (0x00007fbea4d79000)
```
### this is nice because it makes it easy to experiment with sqlite
I think its cool that SQLites build process is so simple because in the past Ive had fun [editing sqlites source code][6] to understand how its btree implementation works.
This isnt really super surprising given what I know about SQLite (its made to work really well in restricted / embedded contexts, so it makes sense that it would be possible to compile it in a really simple/minimal way). But it is super nice!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/10/28/sqlite-is-really-easy-to-compile/
Author: [Julia Evans][a]
Topic selection: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://sql-steps.wizardzines.com/lag.html
[2]: https://www.sqlite.org/download.html
[3]: https://stackoverflow.com/questions/5234088/execve-file-not-found-when-stracing-the-very-same-file
[4]: https://packages.debian.org/bullseye/amd64/sqlite3/download
[5]: https://www.sqlite.org/howtocompile.html
[6]: https://jvns.ca/blog/2014/10/02/how-does-sqlite-work-part-2-btrees/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Demystifying namespaces and containers in Linux)
[#]: via: (https://opensource.com/article/19/10/namespaces-and-containers-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Demystifying namespaces and containers in Linux
======
Peek behind the curtains to understand the backend of Linux container
technology.
![cubes coming together to create a larger cube][1]
Containers have taken the world by storm. Whether you think of Kubernetes, Docker, CoreOS, Silverblue, or Flatpak when you hear the term, it's clear that modern applications are running in containers for convenience, security, and scalability.
Containers can be confusing to understand, though. What does it mean to run in a container? How can processes in a container interact with the rest of the computer they're running on? Open source dislikes mystery, so this article explains the backend of container technology, just as [my article on Flatpak][2] explained a common frontend.
### Namespaces
Namespaces are common in the programming world. If you dwell in the highly technical places of the computer world, then you have probably seen code like this:
```
using namespace std;
```
Or you may have seen this in XML:
```
<book xmlns="http://docbook.org/ns/docbook" xml:lang="en">
```
These kinds of phrases provide context for commands used later in a source code file. The only reason C++ knows, for instance, what programmers mean when they type **cout** is because C++ knows the **cout** namespace is a meaningful word.
If that's too technical for you to picture, you may be surprised to learn that we all use namespaces every day in real life, too. We don't call them namespaces, but we use the concept all the time. For instance, the phrase "I'm a fan of the Enterprise" has one meaning in an IT company that serves large businesses (which are commonly called "enterprises"), but it may have a different meaning at a science fiction convention. The question "what engine is it running?" has one meaning in a garage and a different meaning in web development. We don't always declare a namespace in casual conversation because we're human, and our brains can adapt quickly to determine context, but for computers, the namespace must be declared explicitly.
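Python works the same way, with modules acting as namespaces; the same name can mean different things depending on which namespace you ask (a tiny illustration, in the spirit of the "engine" example above):

```python
import math
import cmath

# "sqrt" exists in two namespaces and means something slightly
# different in each: one is real-valued, the other is complex.
print(math.sqrt(4))    # 2.0
print(cmath.sqrt(-1))  # 1j
```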
For containers, a namespace is what defines the boundaries of a process' "awareness" of what else is running around it.
### lsns
You may not realize it, but your Linux machine quietly maintains different namespaces specific to given processes. By using a recent version of the **util-linux** package, you can list existing namespaces on your machine:
```
$ lsns
        NS TYPE   NPROCS   PID USER    COMMAND
4026531835 cgroup     85  1571 seth /usr/lib/systemd/systemd --user
4026531836 pid        85  1571 seth /usr/lib/systemd/systemd --user
4026531837 user       80  1571 seth /usr/lib/systemd/systemd --user
4026532601 user        1  6266 seth /usr/lib64/firefox/firefox [...]
4026532928 net         1  7164 seth /usr/lib64/firefox/firefox [...]
[...]
```
If your version of **util-linux** doesn't provide the **lsns** command, you can see namespace entries in **/proc**:
```
$ ls /proc/*/ns
1571
6266
7164
[...]
$ ls /proc/6266/ns
ipc net pid user uts [...]
```
Each process running on your Linux machine is enumerated with a process ID (PID). Each PID is assigned a namespace. PIDs in the same namespace can have access to one another because they are programmed to operate within a given namespace. PIDs in different namespaces are unable to interact with one another by default because they are running in a different context, or _namespace_. This is why a process running in a "container" under one namespace cannot access information outside its container or information running inside a different container.
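You can watch this from the inside: every namespace a process belongs to shows up as a symlink under **/proc/self/ns**, and the bracketed inode number in the link target uniquely identifies each namespace. A short sketch (Linux only):

```python
import os

# Each entry in /proc/self/ns is a symlink whose target names the
# namespace type and its inode, e.g. "pid:[4026531836]". Two processes
# whose links point at the same target share that namespace.
for entry in sorted(os.listdir("/proc/self/ns")):
    target = os.readlink(f"/proc/self/ns/{entry}")
    print(f"{entry:10} -> {target}")
```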
### Creating a new namespace
A usual feature of software dealing with containers is automatic namespace management. A human administrator starting up a new containerized application or environment doesn't have to use **lsns** to check which namespaces exist and then create a new one manually; the software using PID namespaces does that automatically with the help of the Linux kernel. However, you can mimic the process manually to gain a better understanding of what's happening behind the scenes.
First, you need to identify a process that is _not_ running on your computer. For this example, I'll use the Z shell ([Zsh][3]) because I'm running the Bash shell on my machine. If you're running Zsh on your computer, then use **Bash** or **tcsh** or some other shell that you're not currently running. The goal is to find something that you can prove is not running. You can prove something is not running with the **pidof** command, which queries your system to discover the PID of any application you name:
```
$ pidof zsh
$ sudo pidof zsh
```
As long as no PID is returned, the application you have queried is not running.
#### Unshare
The **unshare** command runs a program in a namespace _unshared_ from its parent process. There are many kinds of namespaces available, so read the **unshare** man page for all options available.
To create a new namespace for your test command:
```
$ sudo unshare --fork --pid --mount-proc zsh
%
```
Because Zsh is an interactive shell, it conveniently brings you into its namespace upon launch. Not all processes do that, because some processes run in the background, leaving you at a prompt in its native namespace. As long as you remain in the Zsh session, you can see that you have left the usual namespace by looking at the PID of your new forked process:
```
% pidof zsh
pid 1
```
If you know anything about Linux process IDs, then you know that PID 1 is always reserved, mostly by nature of the boot process, for the initialization application (systemd on most distributions outside of Slackware, Devuan, and maybe some customized installations of Arch). It's next to impossible for Zsh, or any application that isn't a boot initialization application, to be PID 1 (because without an init system, a computer wouldn't know how to boot up). Yet, as far as your shell knows in this demonstration, Zsh occupies the PID 1 slot.
Despite what your shell is now telling you, PID 1 on your system has _not_ been replaced. Open a second terminal or terminal tab on your computer and look at PID 1:
```
$ ps 1
init
```
And then find the PID of Zsh:
```
$ pidof zsh
7723
```
As you can see, your "host" system sees the big picture and understands that Zsh is actually running as some high-numbered PID (it probably won't be 7723 on your computer, except by coincidence). Zsh sees itself as PID 1 only because its scope is confined to (or _contained_ within) its namespace. Once you have forked a process into its own namespace, its child processes are numbered starting from 1, but only within that namespace.
Namespaces, along with other technologies like **cgroups** and more, form the foundation of containerization. Understanding that namespaces exist within the context of the wider namespace of a host environment (in this demonstration, that's your computer, but in the real world the host is typically a server or a hybrid cloud) can help you understand how and why containerized applications act the way they do. For instance, a container running a WordPress blog doesn't "know" it's not running in a container; it knows that it has access to a kernel and some RAM and whatever configuration files you've provided it, but it probably can't access your home directory or any directory you haven't specifically given it permission to access. Furthermore, a runaway process within that blog software can't affect any other process on your system, because as far as it knows, the PID "tree" only goes back to 1, and 1 is the container it's running in.
Containers are a powerful Linux feature, and they're getting more popular every day. Now that you understand how they work, try exploring container technology such as Kubernetes, Silverblue, or Flatpak, and see what you can do with containerized apps. Containers are Linux, so start them up, inspect them carefully, and learn as you go.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/namespaces-and-containers-linux
Author: [Seth Kenlon][a]
Topic selection: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://opensource.com/article/19/10/how-build-flatpak-packaging
[3]: https://opensource.com/article/19/9/getting-started-zsh

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Upgrading Fedora 30 to Fedora 31)
[#]: via: (https://fedoramagazine.org/upgrading-fedora-30-to-fedora-31/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
Upgrading Fedora 30 to Fedora 31
======
![][1]
Fedora 31 [is available now][2]. Youll likely want to upgrade your system to get the latest features available in Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 30 to Fedora 31.
### Upgrading Fedora 30 Workstation to Fedora 31
Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the **GNOME Software** app. Or you can choose Software from GNOME Shell.
Choose the _Updates_ tab in GNOME Software and you should see a screen informing you that Fedora 31 is Now Available.
If you dont see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available.
Choose _Download_ to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.
### Using the command line
If youve upgraded from past Fedora releases, you are likely familiar with the _dnf upgrade_ plugin. This method is the recommended and supported way to upgrade from Fedora 30 to Fedora 31. Using this plugin will make your upgrade to Fedora 31 simple and easy.
#### 1. Update software and back up your system
Before you start the upgrade process, make sure you have the latest software for Fedora 30. This is particularly important if you have modular software installed; the latest versions of dnf and GNOME Software include improvements to the upgrade process for some modular streams. To update your software, use _GNOME Software_ or enter the following command in a terminal:
```
sudo dnf upgrade --refresh
```
Additionally, make sure you back up your system before proceeding. For help with taking a backup, see [the backup series][3] on the Fedora Magazine.
#### 2. Install the DNF plugin
Next, open a terminal and type the following command to install the plugin:
```
sudo dnf install dnf-plugin-system-upgrade
```
#### 3. Start the update with DNF
Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:
```
sudo dnf system-upgrade download --releasever=31
```
This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the `--allowerasing` flag to the above command. This will allow DNF to remove packages that may be blocking your system upgrade.
#### 4. Reboot and upgrade
Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:
```
sudo dnf system-upgrade reboot
```
Your system will restart after this. Many releases ago, the _fedup_ tool would create a new option on the kernel selection / boot screen. With the _dnf-plugin-system-upgrade_ package, your system reboots into the current kernel installed for Fedora 30; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.
Now might be a good time for a coffee break! Once it finishes, your system will restart and youll be able to log in to your newly upgraded Fedora 31 system.
![][4]
### Resolving upgrade problems
On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the [DNF system upgrade quick docs][5] for more information on troubleshooting.
If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/upgrading-fedora-30-to-fedora-31/
Author: [Ben Cotton][a]
Topic selection: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/f30-f31-816x345.jpg
[2]: https://fedoramagazine.org/announcing-fedora-31/
[3]: https://fedoramagazine.org/taking-smart-backups-duplicity/
[4]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png
[5]: https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-upgrade/#Resolving_post-upgrade_issues

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What you probably didnt know about sudo)
[#]: via: (https://opensource.com/article/19/10/know-about-sudo)
[#]: author: (Peter Czanik https://opensource.com/users/czanik)
What you probably didnt know about sudo
======
Think you know everything about sudo? Think again.
![Command line prompt][1]
Everybody knows **sudo**, right? This tool is installed by default on most Linux systems and is available for most BSD and commercial Unix variants. Still, after talking to hundreds of **sudo** users, the most common answer I received was that **sudo** is a tool to complicate life.
There is a root user and there is the **su** command, so why have yet another tool? For many, **sudo** was just a prefix for administrative commands. Only a handful mentioned that when you have multiple administrators for the same system, you can use **sudo** logs to see who did what.
So, what is **sudo**? According to the [**sudo** website][2]:
> _"Sudo allows a system administrator to delegate authority by giving certain users the ability to run some commands as root or another user while providing an audit trail of the commands and their arguments."_
By default, **sudo** comes with a simple configuration, a single rule allowing a user or a group of users to do practically anything (more on the configuration file later in this article):
```
%wheel ALL=(ALL) ALL
```
In this example, the parameters mean the following:
* The first parameter defines the members of the group.
* The second parameter defines the host(s) the group members can run commands on.
* The third parameter defines the usernames under which the command can be executed.
* The last parameter defines the applications that can be run.
So, in this example, the members of the **wheel** group can run all applications as all users on all hosts. Even this really permissive rule is useful because it results in logs of who did what on your machine.
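To make the four fields concrete, here is a hedged sketch of a much narrower rule (the username and service name are made up):

```
# Hypothetical: the "deploy" user may restart one service, as root,
# on any host, and nothing else.
deploy ALL=(root) /usr/bin/systemctl restart myapp.service
```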
### Aliases
Of course, once it is not just you and your best friend administering a shared box, you will start to fine-tune permissions. You can replace the items in the above configuration with lists: a list of users, a list of commands, and so on. Most likely, you will copy and paste some of these lists around in your configuration.
This situation is where aliases can come in handy. Maintaining the same list in multiple places is error-prone. You define an alias once and then you can use it many times. Therefore, when you lose trust in one of your administrators, you can remove them from the alias and you are done. With multiple lists instead of aliases, it is easy to forget to remove the user from one of the lists with elevated privileges.
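A sketch of what aliases look like in a **sudoers** file (all names here are hypothetical):

```
# Define the lists once...
User_Alias  ADMINS   = alice, bob
Cmnd_Alias  SERVICES = /usr/bin/systemctl restart httpd, /usr/bin/journalctl

# ...and reuse them. Removing "bob" from ADMINS revokes everything at once.
ADMINS ALL=(root) SERVICES
```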
### Enable features for a certain group of users
The **sudo** command comes with a huge set of defaults. Still, there are situations when you want to override some of these. This is when you use the **Defaults** statement in the configuration. Usually, these defaults are enforced on every user, but you can narrow the setting down to a subset of users based on host, username, and so on. Here is an example that my generation of sysadmins loves to hear about: insults. These are just some funny messages for when someone mistypes a password:
```
czanik@linux-mewy:~> sudo ls
[sudo] password for root:
Hold it up to the light --- not a brain in sight!
[sudo] password for root:
My pet ferret can type better than you!
[sudo] password for root:
sudo: 3 incorrect password attempts
czanik@linux-mewy:~>
```
Because not everyone is a fan of sysadmin humor, these insults are disabled by default. The following example shows how to enable this setting only for your seasoned sysadmins, who are members of the **wheel** group:
```
Defaults !insults
Defaults:%wheel insults
```
I do not have enough fingers to count how many people thanked me for bringing these messages back.
### Digest verification
There are, of course, more serious features in **sudo** as well. One of them is digest verification. You can include the digest of applications in your configuration: 
```
peter ALL = sha224:11925141bb22866afdf257ce7790bd6275feda80b3b241c108b79c88 /usr/bin/passwd
```
In this case, **sudo** checks and compares the digest of the application to the one stored in the configuration before running the application. If they do not match, **sudo** refuses to run the application. While it is difficult to maintain this information in your configuration—there are no automated tools for this purpose—these digests can provide you with an additional layer of protection.
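While there is no official tooling, generating the digest itself is straightforward: sudo's `sha224` digest type is the plain SHA-224 hex of the binary, which you can compute with the standard library (the path below is only an example):

```python
import hashlib

def sudo_digest(path: str) -> str:
    """Return the SHA-224 hex digest of a file, in the form a
    sudoers digest specification expects."""
    with open(path, "rb") as f:
        return hashlib.sha224(f.read()).hexdigest()

# Example against a binary that exists on most Linux systems:
print(sudo_digest("/bin/sh"))
```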
### Session recording
Session recording is also a lesser-known feature of **sudo**. After my demo, many people leave my talk with plans to implement it on their infrastructure. Why? Because with session recording, you see not just the command name, but also everything that happened in the terminal. You can see what your admins are doing even if they have shell access and logs only show that **bash** is started.
There is one limitation, currently. Records are stored locally, so with enough permissions, users can delete their traces. Stay tuned for upcoming features.
### Plugins
Starting with version 1.8, **sudo** changed to a modular, plugin-based architecture. With most features implemented as plugins, you can easily replace or extend the functionality of **sudo** by writing your own. There are both open source and commercial plugins already available for **sudo**.
In my talk, I demonstrated the **sudo_pair** plugin, which is available [on GitHub][3]. This plugin is developed in Rust, meaning that it is not so easy to compile, and it is even more difficult to distribute the results. On the other hand, the plugin provides interesting functionality, requiring a second admin to approve (or deny) running commands through **sudo**. Not just that, but sessions can be followed on-screen and terminated if there is suspicious activity.
In a demo I did during a recent talk at the All Things Open conference, I had the infamous:
```
czanik@linux-mewy:~> sudo  rm -fr /
```
command displayed on the screen. Everybody was holding their breath to see whether my laptop got destroyed, but it survived.
### Logs
As I already mentioned at the beginning, logging and alerting is an important part of **sudo**. If you do not check your **sudo** logs regularly, there is not much worth in using **sudo**. This tool alerts by email on events specified in the configuration and logs all events to **syslog**. Debug logs can be turned on and used to debug rules or report bugs.
### Alerts
Email alerts are kind of old-fashioned now, but if you use **syslog-ng** for collecting your log messages, your **sudo** log messages are automatically parsed. You can easily create custom alerts and send those to a wide variety of destinations, including Slack, Telegram, Splunk, or Elasticsearch. You can learn more about this feature from [my blog on syslog-ng.com][4].
### Configuration
We talked a lot about **sudo** features and even saw a few lines of configuration. Now, lets take a closer look at how **sudo** is configured. The configuration itself is available in **/etc/sudoers**, which is a simple text file. Still, it is not recommended to edit this file directly. Instead, use **visudo**, as this tool also does syntax checking. If you do not like **vi**, you can change which editor to use by pointing the **EDITOR** environment variable at your preferred option.
Before you start editing the **sudo** configuration, make sure that you know the root password. (Yes, even on Ubuntu, where root does not have a password by default.) While **visudo** checks the syntax, it is easy to create a syntactically correct configuration that locks you out of your system.
When you have a root password at hand in case of an emergency, you can start editing your configuration. When it comes to the **sudoers** file, there is one important thing to remember: This file is read from top to bottom, and the last setting wins. What this fact means for you is that you should start with generic settings and place exceptions at the end, otherwise exceptions are overridden by the generic settings.
You can find a simple **sudoers** file below, based on the one in CentOS, and add a few lines we discussed previously:
```
Defaults !visiblepw
Defaults always_set_home
Defaults match_group_by_gid
Defaults always_query_group_plugin
Defaults env_reset
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS"
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
root ALL=(ALL) ALL
%wheel ALL=(ALL) ALL
Defaults:%wheel insults
Defaults !insults
Defaults log_output
```
This file starts by changing a number of defaults. Then come the usual default rules: The **root** user and members of the **wheel** group have full permissions over the machine. Next, we enable insults for the **wheel** group, but disable them for everyone else. The last line enables session recording.
The above configuration is syntactically correct, but can you spot the logical error? Yes, there is one: Insults are disabled for everyone since the last, generic setting overrides the previous, more specific setting. Once you switch the two lines, the setup works as expected: Members of the **wheel** group receive funny messages, but the rest of the users do not receive them.
### Configuration management
Once you have to maintain the **sudoers** file on multiple machines, you will most likely want to manage your configuration centrally. There are two major open source possibilities here. Both have their advantages and drawbacks.
You can use one of the configuration management applications that you also use to configure the rest of your infrastructure. Red Hat Ansible, Puppet, and Chef all have modules to configure **sudo**. The problem with this approach is that updating configurations is far from real-time. Also, users can still edit the **sudoers** file locally and change settings.
The **sudo** tool can also store its configuration in LDAP. In this case, configuration changes are real-time and users cannot mess with the **sudoers** file. On the other hand, this method also has limitations. For example, you cannot use aliases or use **sudo** when the LDAP server is unavailable.
### New features
There is a new version of **sudo** right around the corner. Version 1.9 will include many interesting new features. Here are the most important planned features:
* A recording service to collect session recordings centrally, which offers many advantages compared to local storage:
* It is more convenient to search in one place.
* Recordings are available even if the sender machine is down.
* Recordings cannot be deleted by someone who wants to delete their tracks.
  * The **audit** plugin does not add new features to **sudoers**; instead, it provides an API through which plugins can easily access any kind of **sudo** logs, making it possible to create custom logs from **sudo** events.
* The **approval** plugin enables session approvals without using third-party plugins.
* And my personal favorite: Python support for plugins, which enables you to easily extend **sudo** using Python code instead of coding natively in C.
### Conclusion
I hope this article showed you that **sudo** is a lot more than just a simple prefix. There are tons of possibilities to fine-tune permissions on your system. Not only can you fine-tune permissions, you can also improve security by checking digests. Session recordings let you see what is happening on your systems. You can also extend the functionality of **sudo** using plugins, either by using something already available or by writing your own. Finally, given the list of upcoming features, you can see that even though **sudo** is decades old, it is a living project that is constantly evolving.
If you want to learn more about **sudo**, here are a few resources:
* [The **sudo** website][5]
* [The **sudo** blog][6]
* [Follow us on Twitter][7]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/know-about-sudo
Author: [Peter Czanik][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/czanik
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://www.sudo.ws
[3]: https://github.com/square/sudo_pair/
[4]: https://www.syslog-ng.com/community/b/blog/posts/alerting-on-sudo-events-using-syslog-ng
[5]: https://www.sudo.ws/
[6]: https://blog.sudo.ws/
[7]: https://twitter.com/sudoproject


@ -0,0 +1,218 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Find Out Top Memory Consuming Processes in Linux)
[#]: via: (https://www.2daygeek.com/linux-find-top-memory-consuming-processes/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How to Find Out Top Memory Consuming Processes in Linux
======
You may have noticed many times that your system is consuming too much memory.
If that's the case, what is the best way to identify the processes that consume too much memory on a Linux machine?
You may already have run one of the commands below to check it.
If not, what other commands have you tried?
Please share them in the comments section, as they may help other users.
Top memory consuming processes can be easily identified using the **[top command][1]** and the **[ps command][2]**.
I checked both commands simultaneously, and both gave the same result.
So I suggest you use whichever of the two commands you prefer.
### 1) How to Find Top Memory Consuming Process in Linux Using the ps Command
The ps command is used to report a snapshot of the current processes. The name ps stands for process status.
It is a standard Linux application that looks up information about running processes on a Linux system.
It lists the currently running processes together with their process ID (PID), process owner name, process priority (PR), the absolute path of the running command, and so on.
The ps command format below provides more information about the processes with the highest memory consumption.
```
# ps aux --sort -rss | head
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mysql 1064 3.2 5.4 886076 209988 ? Ssl Oct25 62:40 /usr/sbin/mysqld
varnish 23396 0.0 2.9 286492 115616 ? SLl Oct25 0:42 /usr/sbin/varnishd -P /var/run/varnish.pid -f /etc/varnish/default.vcl -a :82 -T 127.0.0.1:6082 -S /etc/varnish/secret -s malloc,256M
named 1105 0.0 2.7 311712 108204 ? Ssl Oct25 0:16 /usr/sbin/named -u named -c /etc/named.conf
nobody 23377 0.2 2.3 153096 89432 ? S Oct25 4:35 nginx: worker process
nobody 23376 0.1 2.1 147096 83316 ? S Oct25 2:18 nginx: worker process
root 23375 0.0 1.7 131028 66764 ? Ss Oct25 0:01 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nobody 23378 0.0 1.6 130988 64592 ? S Oct25 0:00 nginx: cache manager process
root 1135 0.0 0.9 86708 37572 ? S 05:37 0:20 cwpsrv: worker process
root 1133 0.0 0.9 86708 37544 ? S 05:37 0:05 cwpsrv: worker process
```
Use the ps command format below to include only specific fields about memory consumption in the output.
```
# ps -eo pid,ppid,%mem,%cpu,cmd --sort=-%mem | head
PID PPID %MEM %CPU CMD
1064 1 5.4 3.2 /usr/sbin/mysqld
23396 23386 2.9 0.0 /usr/sbin/varnishd -P /var/run/varnish.pid -f /etc/varnish/default.vcl -a :82 -T 127.0.0.1:6082 -S /etc/varnish/secret -s malloc,256M
1105 1 2.7 0.0 /usr/sbin/named -u named -c /etc/named.conf
23377 23375 2.3 0.2 nginx: worker process
23376 23375 2.1 0.1 nginx: worker process
3625 977 1.9 0.0 /usr/local/bin/php-cgi /home/daygeekc/public_html/index.php
23375 1 1.7 0.0 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
23378 23375 1.6 0.0 nginx: cache manager process
1135 3034 0.9 0.0 cwpsrv: worker process
```
If you want to see only the command name instead of the absolute path of the command, use the ps command format below.
```
# ps -eo pid,ppid,%mem,%cpu,comm --sort=-%mem | head
PID PPID %MEM %CPU COMMAND
1064 1 5.4 3.2 mysqld
23396 23386 2.9 0.0 cache-main
1105 1 2.7 0.0 named
23377 23375 2.3 0.2 nginx
23376 23375 2.1 0.1 nginx
23375 1 1.7 0.0 nginx
23378 23375 1.6 0.0 nginx
1135 3034 0.9 0.0 cwpsrv
1133 3034 0.9 0.0 cwpsrv
```
### 2) How to Find Out Top Memory Consuming Process in Linux Using the top Command
The Linux top command is the best-known command that everyone uses to monitor Linux system performance.
It displays a real-time view of the running system processes in an interactive interface.
But if you want to find the top memory consuming processes, **[use the top command in batch mode][3]** instead.
You should properly **[understand the top command output][4]** to fix performance issues on a system.
```
# top -c -b -o +%MEM | head -n 20 | tail -15
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1064 mysql 20 0 886076 209740 8388 S 0.0 5.4 62:41.20 /usr/sbin/mysqld
23396 varnish 20 0 286492 115616 83572 S 0.0 3.0 0:42.24 /usr/sbin/varnishd -P /var/run/varnish.pid -f /etc/varnish/default.vcl -a :82 -T 127.0.0.1:6082 -S /etc/varnish/secret -s malloc,256M
1105 named 20 0 311712 108204 2424 S 0.0 2.8 0:16.41 /usr/sbin/named -u named -c /etc/named.conf
23377 nobody 20 0 153240 89432 2432 S 0.0 2.3 4:35.74 nginx: worker process
23376 nobody 20 0 147096 83316 2416 S 0.0 2.1 2:18.09 nginx: worker process
23375 root 20 0 131028 66764 1616 S 0.0 1.7 0:01.07 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
23378 nobody 20 0 130988 64592 592 S 0.0 1.7 0:00.51 nginx: cache manager process
1135 root 20 0 86708 37572 2252 S 0.0 1.0 0:20.18 cwpsrv: worker process
1133 root 20 0 86708 37544 2212 S 0.0 1.0 0:05.94 cwpsrv: worker process
3034 root 20 0 86704 36740 1452 S 0.0 0.9 0:00.09 cwpsrv: master process /usr/local/cwpsrv/bin/cwpsrv
1067 nobody 20 0 1356200 31588 2352 S 0.0 0.8 0:56.06 /usr/local/apache/bin/httpd -k start
977 nobody 20 0 1356088 31268 2372 S 0.0 0.8 0:30.44 /usr/local/apache/bin/httpd -k start
968 nobody 20 0 1356216 30544 2348 S 0.0 0.8 0:19.95 /usr/local/apache/bin/httpd -k start
```
If you only want to see the command name instead of the absolute path of the command, use the top command format below.
```
# top -b -o +%MEM | head -n 20 | tail -15
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1064 mysql 20 0 886076 210340 8388 S 6.7 5.4 62:40.93 mysqld
23396 varnish 20 0 286492 115616 83572 S 0.0 3.0 0:42.24 cache-main
1105 named 20 0 311712 108204 2424 S 0.0 2.8 0:16.41 named
23377 nobody 20 0 153240 89432 2432 S 13.3 2.3 4:35.74 nginx
23376 nobody 20 0 147096 83316 2416 S 0.0 2.1 2:18.09 nginx
23375 root 20 0 131028 66764 1616 S 0.0 1.7 0:01.07 nginx
23378 nobody 20 0 130988 64592 592 S 0.0 1.7 0:00.51 nginx
1135 root 20 0 86708 37572 2252 S 0.0 1.0 0:20.18 cwpsrv
1133 root 20 0 86708 37544 2212 S 0.0 1.0 0:05.94 cwpsrv
3034 root 20 0 86704 36740 1452 S 0.0 0.9 0:00.09 cwpsrv
1067 nobody 20 0 1356200 31588 2352 S 0.0 0.8 0:56.04 httpd
977 nobody 20 0 1356088 31268 2372 S 0.0 0.8 0:30.44 httpd
968 nobody 20 0 1356216 30544 2348 S 0.0 0.8 0:19.95 httpd
```
### 3) Bonus Tip: How to Find Out Top Memory Consuming Processes in Linux Using the ps_mem Command
The **[ps_mem utility][5]** displays the core memory used per program (not per process).
This utility lets you check how much memory each program uses.
It calculates the amount of private and shared memory for a program and reports the total memory used in the most appropriate way.
It uses the following logic to calculate RAM usage: Total RAM = sum(private RAM for the program's processes) + sum(shared RAM for the program's processes).
```
# ps_mem
Private + Shared = RAM used Program
128.0 KiB + 27.5 KiB = 155.5 KiB agetty
228.0 KiB + 47.0 KiB = 275.0 KiB atd
284.0 KiB + 53.0 KiB = 337.0 KiB irqbalance
380.0 KiB + 81.5 KiB = 461.5 KiB dovecot
364.0 KiB + 121.5 KiB = 485.5 KiB log
520.0 KiB + 65.5 KiB = 585.5 KiB auditd
556.0 KiB + 60.5 KiB = 616.5 KiB systemd-udevd
732.0 KiB + 48.0 KiB = 780.0 KiB crond
296.0 KiB + 524.0 KiB = 820.0 KiB avahi-daemon (2)
772.0 KiB + 51.5 KiB = 823.5 KiB systemd-logind
940.0 KiB + 162.5 KiB = 1.1 MiB dbus-daemon
1.1 MiB + 99.0 KiB = 1.2 MiB pure-ftpd
1.2 MiB + 100.5 KiB = 1.3 MiB master
1.3 MiB + 198.5 KiB = 1.5 MiB pickup
1.3 MiB + 198.5 KiB = 1.5 MiB bounce
1.3 MiB + 198.5 KiB = 1.5 MiB pipe
1.3 MiB + 207.5 KiB = 1.5 MiB qmgr
1.4 MiB + 198.5 KiB = 1.6 MiB cleanup
1.3 MiB + 299.5 KiB = 1.6 MiB trivial-rewrite
1.5 MiB + 145.0 KiB = 1.6 MiB config
1.4 MiB + 291.5 KiB = 1.6 MiB tlsmgr
1.4 MiB + 308.5 KiB = 1.7 MiB local
1.4 MiB + 323.0 KiB = 1.8 MiB anvil (2)
1.3 MiB + 559.0 KiB = 1.9 MiB systemd-journald
1.8 MiB + 240.5 KiB = 2.1 MiB proxymap
1.9 MiB + 322.5 KiB = 2.2 MiB auth
2.4 MiB + 88.5 KiB = 2.5 MiB systemd
2.8 MiB + 458.5 KiB = 3.2 MiB smtpd
2.9 MiB + 892.0 KiB = 3.8 MiB bash (2)
3.3 MiB + 555.5 KiB = 3.8 MiB NetworkManager
4.1 MiB + 233.5 KiB = 4.3 MiB varnishd
4.0 MiB + 662.0 KiB = 4.7 MiB dhclient (2)
4.3 MiB + 623.5 KiB = 4.9 MiB rsyslogd
3.6 MiB + 1.8 MiB = 5.5 MiB sshd (3)
5.6 MiB + 431.0 KiB = 6.0 MiB polkitd
13.0 MiB + 546.5 KiB = 13.6 MiB tuned
22.5 MiB + 76.0 KiB = 22.6 MiB lfd - sleeping
30.0 MiB + 6.2 MiB = 36.2 MiB php-fpm (6)
5.7 MiB + 33.5 MiB = 39.2 MiB cwpsrv (3)
20.1 MiB + 25.3 MiB = 45.4 MiB httpd (5)
104.7 MiB + 156.0 KiB = 104.9 MiB named
112.2 MiB + 479.5 KiB = 112.7 MiB cache-main
69.4 MiB + 58.6 MiB = 128.0 MiB nginx (4)
203.4 MiB + 309.5 KiB = 203.7 MiB mysqld
---------------------------------
775.8 MiB
=================================
```
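If ps_mem isn't installed, the same idea can be roughly sketched with standard shell tools. This is a simplified approximation (it sums PSS rather than reproducing ps_mem's exact private + shared accounting), and it assumes a Linux kernel with /proc/&lt;pid&gt;/smaps_rollup (4.14 or later); without root it only sees your own processes.

```shell
# Rough approximation of the ps_mem idea: sum PSS (proportional set
# size) per program name. PSS divides shared memory evenly among the
# processes sharing it, so summing it per program gives a fair total.
# Limitation: program names containing spaces are truncated to their
# first word by awk's field splitting.
for pid in /proc/[0-9]*; do
    name=$(cat "$pid/comm" 2>/dev/null) || continue
    pss=$(awk '/^Pss:/ {print $2}' "$pid/smaps_rollup" 2>/dev/null)
    [ -n "$pss" ] && echo "$pss $name"
done |
awk '{sum[$2] += $1}
     END {for (p in sum) printf "%10.1f MiB  %s\n", sum[p] / 1024, p}' |
sort -n
```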
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/linux-find-top-memory-consuming-processes/
Author: [Magesh Maruthamuthu][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
[2]: https://www.2daygeek.com/linux-ps-command-find-running-process-monitoring/
[3]: https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/
[4]: https://www.2daygeek.com/understanding-linux-top-command-output-usage/
[5]: https://www.2daygeek.com/ps_mem-report-core-memory-usage-accurately-in-linux/


@ -1,99 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Object-Oriented Programming and Essential State)
[#]: via: (https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Object-Oriented Programming and Essential State
======
Back in 2015, Brian Will wrote a provocative blog post: [Object-Oriented Programming: A Disaster Story][1]. He followed it up with a video called [Object-Oriented Programming is Bad][2], which goes into much more detail. I recommend taking the time to watch the video, but here's my one-paragraph summary:
The Platonic ideal of OOP is a sea of decoupled objects that send stateless messages to one another. No one really makes software like this, and Brian points out that it doesn't even make sense: objects need to know which objects to send messages to, and that means they need references to each other. Most of the video is about the pain that comes from trying to couple objects for control flow while pretending they're decoupled by design.
Overall, his ideas resonate with my own experience of OOP: objects are fine, but what I've never been satisfied with is *object-oriented* modeling of program control flow, and trying to make code "properly" object-oriented always seems to create unnecessary complexity.
There's one thing I don't think he fully explains. He says outright that "encapsulation does not work", but follows that with the footnote "at fine-grained levels of code", and goes on to acknowledge that objects can sometimes work, and that encapsulation is fine at the level of libraries and files. But he doesn't explain exactly why it sometimes works and sometimes doesn't, or how and where to draw the line. Some might say this makes his "OOP is bad" claim flawed, but I think his point stands, and that the line can be drawn between essential state and accidental state.
If you've never heard the terms "essential" and "accidental" used this way before, you should read Fred Brooks' classic essay [No Silver Bullet][3]. (He has written many other great essays about building software systems, by the way.) I've written [about essential and accidental complexity][4] before, but here's a quick summary: software is complex. Partly that's because we want software to solve messy real-world problems, and we call that "essential complexity". "Accidental complexity" is all the other complexity that exists because we happen to be trying to solve problems that have nothing to do with silicon and metal using silicon and metal. For example, for most programs, the code for managing memory, or for shuffling data between memory and disk, or for parsing text formats, is "accidental complexity".
Suppose you're building a chat application that supports multiple channels. Messages can arrive in any channel at any time. Some channels are especially interesting, and the user wants to be notified when new messages come in. Other channels are muted: messages are stored, but the user isn't disturbed. You need to keep track of the user's preferred setting for each channel.
One way to do it is to use a map (a.k.a. hash table, dictionary, or associative array) between channels and channel settings. Note that a map is the kind of abstract data type (ADT) that Brian Will said can work as an object.
If we had a debugger and looked inside the map object in memory, what would we see? We'd find the channel IDs and channel settings data, of course (or at least pointers to them). But we'd also find other data. If the map is implemented with a red-black tree, we'd see tree node objects with red/black labels and pointers to other nodes. The channel-related data is essential state, and the tree nodes are accidental state. Notice something, though: the map effectively encapsulates its accidental state; you could replace it with another map implemented with an AVL tree, and your chat app would still work. On the other hand, the map doesn't encapsulate the essential state (merely providing access to the data through `get()` and `set()` methods isn't encapsulation). In fact, the map is as agnostic as possible about the essential state; you could use basically the same map data structure to store other mappings that have nothing to do with channels or notifications.
And that's why the map ADT is so successful: it encapsulates accidental state and decouples it from essential state. If you think about it, the encapsulation problems Brian described are the problems of trying to encapsulate essential state, while the benefits others describe are the benefits of encapsulating accidental state.
It's pretty hard to make an entire software system meet this ideal, but extrapolating, I think it looks something like this:
  * No global, mutable state
  * Accidental state encapsulated (in objects or modules or whatever)
  * Stateless accidental complexity enclosed in free functions, decoupled from data
  * Inputs and outputs made explicit using tricks like dependency injection
  * Components fully owned and controlled from easily identifiable places
Some of this goes against instincts I had a long time ago. For example, if you have a function that makes a database query, the interface looks simpler if the database connection handling is hidden inside the function, and the only arguments are the query parameters. However, when you build a software system out of functions like this, coordinating the database usage actually becomes more complex. Not only is each component doing things its own way, each component is also trying to hide what it's doing as an "implementation detail". The fact that a database query needs a database connection was never an implementation detail. If something can't be hidden, it's saner to make it explicit.
I'm wary of treating object-oriented programming and functional programming as two poles, but I think it's interesting to look at FP as the opposite extreme of OOP: OOP tries to encapsulate things, including essential complexity that can't be encapsulated, while pure FP tends to make things explicit, including some accidental complexity. Most of the time that's fine, but sometimes (such as when [building self-referential data structures in a purely functional language][5]) the design serves functional purity more than it serves simplicity (which is why [Haskell includes some escape hatches][6]). I've previously written about [the middle ground of so-called "weak purity"][7].
Brian found that encapsulation works at larger scales for a couple of reasons. One is that, simply because of their size, larger components are more likely to contain accidental state. Another is that what counts as "accidental" depends on the problem you're solving. From the chat app user's point of view, "accidental complexity" is anything not related to messages, channels, users, and so on. But as you break the problem down into subproblems, more things become essential. For example, the mapping between channel names and channel IDs is arguably accidental complexity when solving the "build a chat app" problem, but it's essential complexity when solving the "implement the `getChannelIdByName()` function" subproblem. So encapsulation tends to work less well for subcomponents than for parent components.
As an aside, at the end of the video, Brian Will wonders whether any language supports anonymous functions that *cannot* access the scope they're in. [D][8] does. Anonymous lambdas in D are normally closures, but you can also declare anonymous stateless functions if that's what you want:
```
import std.stdio;
void main()
{
    int x = 41;

    // Value from immediately executed lambda
    auto v1 = () {
        return x + 1;
    }();
    writeln(v1);

    // Same thing
    auto v2 = delegate() {
        return x + 1;
    }();
    writeln(v2);

    // Plain functions aren't closures
    auto v3 = function() {
        // Can't access x
        // Can't access any mutable global state either if also marked pure
        return 42;
    }();
    writeln(v3);
}
```
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html
Author: [Simon Arneaud][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab
[2]: https://www.youtube.com/watch?v=QM1iUe6IofM
[3]: http://www.cs.nott.ac.uk/~pszcah/G51ISS/Documents/NoSilverBullet.html
[4]: https://theartofmachinery.com/2017/06/25/compression_complexity_and_software.html
[5]: https://wiki.haskell.org/Tying_the_Knot
[6]: https://en.wikibooks.org/wiki/Haskell/Mutable_objects#The_ST_monad
[7]: https://theartofmachinery.com/2016/03/28/dirtying_pure_functions_can_be_useful.html
[8]: https://dlang.org


@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for October 2019)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-october-2019/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
4 cool new projects to try in COPR for October 2019
======
![][1]
COPR is a [collection][2] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging, or it may not meet other Fedora standards despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
This article presents a few new and interesting projects in COPR. If you're new to using COPR, see the [COPR User Documentation][3] for how to get started.
### Nu
[Nu][4], also known as Nushell, is a shell inspired by PowerShell and modern CLI tools. Using a structured-data-based approach, Nu makes it easy to work with the output of commands and to pipe it to other commands. The results are then displayed in a table that can be easily sorted or filtered and that can serve as input for further commands. Finally, Nu provides several built-in commands, multiple shells, and support for plugins.
#### Installation instructions
The [repo][5] currently provides Nu for Fedora 30, 31, and Rawhide. To install Nu, use these commands:
```
sudo dnf copr enable atim/nushell
sudo dnf install nushell
```
### NoteKit
[NoteKit][6] is a program for note-taking. It supports Markdown for formatting notes, as well as the ability to create hand-drawn notes using a mouse. In NoteKit, notes are sorted and organized in a tree structure.
#### Installation instructions
The [repo][7] currently provides NoteKit for Fedora 29, 30, 31, and Rawhide. To install NoteKit, use these commands:
```
sudo dnf copr enable lyessaadi/notekit
sudo dnf install notekit
```
### Crow Translate
[Crow Translate][8] is a program for translating. It can translate text as well as speak both the input and the result, and it also offers a command-line interface. For translation, Crow Translate uses the Google, Yandex, or Bing translate APIs.
#### Installation instructions
The [repo][9] currently provides Crow Translate for Fedora 30, 31, and Rawhide, and for Epel 8. To install Crow Translate, use these commands:
```
sudo dnf copr enable faezebax/crow-translate
sudo dnf install crow-translate
```
### dnsmeter
[dnsmeter][10] is a command-line tool for testing the performance of a name server and its infrastructure. For this, it sends DNS queries and counts the replies, measuring various statistics. Among other features, dnsmeter can use different load steps, take payloads from PCAP files, and spoof sender addresses.
#### Installation instructions
The repo currently provides dnsmeter for Fedora 29, 30, 31, and Rawhide, and for Epel 7. To install dnsmeter, use these commands:
```
sudo dnf copr enable @dnsoarc/dnsmeter
sudo dnf install dnsmeter
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-october-2019/
Author: [Dominik Turecek][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://github.com/nushell/nushell
[5]: https://copr.fedorainfracloud.org/coprs/atim/nushell/
[6]: https://github.com/blackhole89/notekit
[7]: https://copr.fedorainfracloud.org/coprs/lyessaadi/notekit/
[8]: https://github.com/crow-translate/crow-translate
[9]: https://copr.fedorainfracloud.org/coprs/faezebax/crow-translate/
[10]: https://github.com/DNS-OARC/dnsmeter