From 1be0a70721d78362eba47f34eeab0dc35b5ec129 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Fri, 27 Apr 2018 23:51:36 +0800 Subject: [PATCH 001/102] Delete 20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md --- ...ode in the same VM with no code changes.md | 123 ------------------ 1 file changed, 123 deletions(-) delete mode 100644 sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md diff --git a/sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md b/sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md deleted file mode 100644 index 7ec1044acd..0000000000 --- a/sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md +++ /dev/null @@ -1,123 +0,0 @@ -Translating by MjSeven - - -Could we run Python 2 and Python 3 code in the same VM with no code changes? -====== - -Theoretically, yes. Zed Shaw famously jested that if this is impossible then Python 3 must not be Turing-complete. But in practice, this is unrealistic and I'll share why by giving you a few examples. - -### What does it mean to be a dict? - -Let’s imagine a Python 6 VM. It can read `module3.py` which was written for Python 3.6 but in this module it can import `module2.py` which was written for Python 2.7 and successfully use it with no issues. 
This is obviously toy code but let’s say that `module2.py` includes functions like: -``` - - -def update_config_from_dict(config_dict): - items = config_dict.items() - while items: - k, v = items.pop() - memcache.set(k, v) - -def config_to_dict(): - result = {} - for k, v in memcache.getall(): - result[k] = v - return result - -def update_in_place(config_dict): - for k, v in config_dict.items(): - new_value = memcache.get(k) - if new_value is None: - del config_dict[k] - elif new_value != v: - config_dict[k] = v - -``` - -Now, when we want to use those functions from `module3`, we are faced with a problem: the dict type in Python 3.6 is different from the dict type in Python 2.7. In Python 2, dicts were unordered and their `.keys()`, `.values()`, `.items()` methods returned proper lists. That meant calling `.items()` created a copy of the state in the dictionary. In Python 3 those methods return dynamic views on the current state of the dictionary. - -This means if `module3` called `module2.update_config_from_dict(some_dictionary)`, it would fail to run because the value returned by `dict.items()` in Python 3 isn’t a list and doesn’t have a `.pop()` method. The reverse is also true. If `module3` called `module2.config_to_dict()`, it would presumably return a Python 2 dictionary. Now calling `.items()` is suddenly returning a list so this code would not work correctly (which works fine with Python 3 dictionaries): -``` - - -def main(cmdline_options): - d = module2.config_to_dict() - items = d.items() - for k, v in items: - print(f'Config from memcache: {k}={v}') - for k, v in cmdline_options: - d[k] = v - for k, v in items: - print(f'Config with cmdline overrides: {k}={v}') - -``` - -Finally, using `module2.update_in_place()` would fail because the value of `.items()` in Python 3 now doesn’t allow the dictionary to change during iteration. - -There’s more issues with this dictionary situation. 
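Before moving on, the failure modes just described are easy to verify on Python 3 alone. A minimal sketch — the config data here is made up for illustration, with plain dict operations standing in for the memcache calls:

```python
d = {"host": "localhost", "port": "11211"}

# In Python 3, .items() returns a live view, not a list copy.
items = d.items()
has_pop = hasattr(items, "pop")   # False: views have no .pop() method

# Views track later changes to the dictionary...
d["timeout"] = "30"
view_len = len(items)             # 3, not 2

# ...and refuse to let the dict change size mid-iteration.
raised = False
try:
    for k, _ in d.items():
        del d[k]
except RuntimeError:              # "dictionary changed size during iteration"
    raised = True
```

On Python 2 the same three experiments come out the other way: `.items()` hands back an independent list with a working `.pop()`, and deleting keys while walking that list is perfectly legal — which is exactly why `module2` and `module3` can never agree on what a dict is.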
Should a Python 2 dictionary respond `True` to `isinstance(d, dict)` on Python 3? If it did, it’d be a lie. If it didn’t, code would break anyway. - -### Python should magically know types and translate! - -Why can’t our Python 6 VM recognize that in Python 3 code we mean something else when calling `some_dict.keys()` than in Python 2 code? Well, Python doesn’t know what the author of the code thought `some_dict` should be when she was writing that code. There is nothing in the code that signifies whether it’s a dictionary at all. Type annotations weren’t there in Python 2 and, since they’re optional, even in Python 3 most code doesn’t use them yet. - -At runtime, when you call `some_dict.keys()`, Python simply looks up an attribute on the object that happens to hide under the `some_dict` name and tries to run `__call__()` on that attribute. There’s some technicalities with method binding, descriptors, slots, etc. but this is the gist of it. We call this behavior “duck typing”. - -Because of duck typing, the Python 6 VM would not be able to make compile-time decisions to translate calls and attribute lookups correctly. - -### OK, so let’s make this decision at runtime instead - -The Python 6 VM could implement this by tagging every attribute lookup with information “call comes from py2” or “call comes from py3” and make the object on the other side dispatch the right attribute. That would slow things down a lot and use more memory, too. It would require us to keep both versions of the given type in memory with a proxy used by user code. We would need to sync the state of those objects behind the user’s back, doubling the work. After all, the memory representation of the new dictionary is different than in Python 2. - -If your head spun thinking about the problems with dictionaries, think about all the issues with Unicode strings in Python 3 and the do-it-all byte strings in Python 2. - -### Is everything lost? Can’t Python 3 run old code ever? 
- -Everything is not lost. Projects get ported to Python 3 every day. The recommended way to port Python 2 code to work on both versions of Python is to run [Python-Modernize][1] on your code. It will catch code that wouldn’t work on Python 3 and translate it to use the [six][2] library instead so it runs on both Python 2 and Python 3 after. It’s an adaptation of `2to3` which was producing Python 3-only code. `Modernize` is preferred since it provides a more incremental migration route. All this is outlined very well in the [Porting Python 2 Code to Python 3][3] document in the Python documentation. - -But wait, didn’t you say a Python 6 VM couldn’t do this automatically? Right. `Modernize` looks at your code and tries to guess what’s going to be safe. It will make some changes that are unnecessary and miss others that are necessary. Famously, it won’t help you with strings. That transformation is not trivial if your code didn’t keep the boundaries between “binary data from outside” and “text data within the process”. - -So, migrating big projects cannot be done automatically and involves humans running tests, finding problems and fixing them. Does it work? Yes, I helped [moving a million lines of code to Python 3][4] and the switch caused no incidents. This particular move regained 1/3 of memory on our servers and made the code run 12% faster. That was on Python 3.5. But Python 3.6 got quite a bit faster and depending on your workload you could maybe even achieve [a 4X speedup][5] . - -### Dear Zed - -Hi, man. I follow your story for over 10 years now. I’ve been watching when you were upset you were getting no credit for Mongrel even though the Rails ecosystem pretty much ran all on it. I’ve been there when you reimagined it and started the Mongrel 2 project. I was following your surprising move to use Fossil for it. I’ve seen you abruptly depart from the Ruby community with your “Rails is a Ghetto” post. 
I was thrilled when you started working on “Learn Python The Hard Way” and have been recommending it ever since. I met you in 2013 at [DjangoCon Europe][6] and we talked quite a bit about painting, singing and burnout. [This photo of you][7] is one of my first posts on Instagram. - -You almost pulled another “Is a Ghetto” move with your [“The Case Against Python 3”][8] post. I think your heart is in the right place but that post caused a lot of confusion, including many people seriously thinking you believe Python 3 is not Turing-complete. I spent quite a few hours convincing people that you said so in jest. But given your very valuable contribution of “Learn Python The Hard Way”, I think it was worth doing that. Especially that you did update your book for Python 3. Thank you for doing that work. If there really are people in our community that called for blacklisting you and your book on the grounds of your post alone, call them out. It’s a lose-lose situation and it’s wrong. - -For the record, no core Python dev thinks that the Python 2 -> Python 3 transition was smooth and well planned, [including Guido van Rossum][9]. Seriously, watch that video. Hindsight is 20/20 of course. In this sense we are in fact aggressively agreeing with each other. If we went to do it all over again, it would look differently. But at this point, [on January 1st 2020 Python 2 will reach End Of Life][10]. Most third-party libraries already support Python 3 and even started releasing Python 3-only versions (see [Django][11] or the [Python 3 Statement of the scientific projects][12]). - -We are also aggressively agreeing on another thing. Just like you with Mongrel, Python core devs are volunteers who aren’t compensated for their work. Most of us invested a lot of time and effort in this project, and so [we are naturally sensitive][13] to dismissive and aggressive comments against their contribution. 
Especially if the message is both attacking the current state of affairs and calling for more free labor. - -I hoped that by 2018 you’d let your 2016 post go. There were a bunch of good rebuttals. [I especially like eevee’s][14]. It specifically addresses the “run Python 2 alongside Python 3” scenario as not realistic, just like running Ruby 1.8 and Ruby 2.x in the same VM, or like running Lua 5.1 alongside 5.3. You can’t even run C binaries compiled against libc.so.5 with libc.so.6. What I find most surprising though is that you claimed that Python core is “purposefully” creating broken tools like 2to3, created by Guido in whose best interest it is for everybody to migrate as smoothly and quickly as possible. I’m glad that you backed out of that claim in your post later but you have to realize you antagonized people who read the original version. Accusations of deliberate harm better be backed by strong evidence. - -But it seems like you still do that. [Just today][15] you said that Python core “ignores” attempts to solve the API problem, specifically `six`. As I wrote above, `six` is covered by the official porting guide in the Python documentation. More importantly, `six` was written by Benjamin Peterson, the release manager of Python 2.7. A lot of people learned to program thanks to you and you have a large following online. People will read a tweet like this and they will believe it at face value. This is harmful. - -I have a suggestion. Let’s put this “Python 3 was poorly managed” dispute behind us. Python 2 is dying, we are slow to kill it and the process was ugly and bloody but it’s a one way street. Arguing about it is not actionable anymore. Instead, let’s focus on what we can do now to make Python 3.8 better than any other Python release. Maybe you prefer the role of an outsider looking in, but you would be much more impactful as a member of this community. Saying “we” instead of “they”. 
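As an aside for readers who haven't met `six`: its shims are mostly small version dispatchers. A simplified, hypothetical sketch of the idea — this is not six's actual code, and the real library covers many more cases:

```python
import sys

PY2 = sys.version_info[0] == 2

def iteritems(d):
    """One name that iterates a dict's items on either interpreter.

    A toy stand-in for the kind of helper six ships; not its actual code.
    """
    return d.iteritems() if PY2 else iter(d.items())

config = {"host": "localhost", "port": "11211"}
pairs = sorted(iteritems(config))
```

Code written against `iteritems(config)` instead of `config.iteritems()` runs unchanged on both interpreters — and mechanizing rewrites like that is most of what `Modernize` does.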
- --------------------------------------------------------------------------------- - -via: http://lukasz.langa.pl/13/could-we-run-python-2-and-python-3-code-same-vm/ - -作者:[Łukasz Langa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://lukasz.langa.pl -[1]:https://python-modernize.readthedocs.io/ -[2]:http://pypi.python.org/pypi/six -[3]:https://docs.python.org/3/howto/pyporting.html -[4]:https://www.youtube.com/watch?v=66XoCk79kjM -[5]:https://twitter.com/llanga/status/963834977745022976 -[6]:https://www.instagram.com/p/ZVC9CwH7G1/ -[7]:https://www.instagram.com/p/ZXtdtUn7Gk/ -[8]:https://learnpythonthehardway.org/book/nopython3.html -[9]:https://www.youtube.com/watch?v=Oiw23yfqQy8 -[10]:https://mail.python.org/pipermail/python-dev/2018-March/152348.html -[11]:https://pypi.python.org/pypi/Django/2.0.3 -[12]:http://python3statement.org/ -[13]:https://www.youtube.com/watch?v=-Nk-8fSJM6I -[14]:https://eev.ee/blog/2016/11/23/a-rebuttal-for-python-3/ -[15]:https://twitter.com/zedshaw/status/977909970795745281 From 6d1ea63e3d81fd5df8fd9c48b324c1e2b4f510e4 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Fri, 27 Apr 2018 23:52:04 +0800 Subject: [PATCH 002/102] =?UTF-8?q?Create=20Could=20we=20run=20Python=202?= =?UTF-8?q?=20and=20Python=203=20code=20in=20the=20same=20V=E2=80=A6=20=20?= =?UTF-8?q?=E2=80=A6M=20with=20no=20code=20changes.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...in the same V… …M with no code changes.md | 120 ++++++++++++++++++ 1 file changed, 120 insertions(+) create mode 100644 translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md diff --git a/translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code 
changes.md b/translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md new file mode 100644 index 0000000000..5ffe1a6fc6 --- /dev/null +++ b/translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md @@ -0,0 +1,120 @@ +我们可以在同一个虚拟机中运行 Python 2 和 Python 3 代码而不需要更改代码吗? +===== + +从理论上来说,可以。Zed Shaw 说过一句著名的话,如果不行,那么 Python 3 一定不是图灵完备的。但在实践中,这是不现实的,我将通过给你们举几个例子来说明原因。 + +### 对于字典(dict)来说,这意味着什么? + +让我们来想象一台拥有 Python 6 的虚拟机,它可以读取 Python 3.6 编写的 `module3.py`。但是在这个模块中,它可以导入 Python 2.7 编写的 `module2.py`,并成功使用它,没有问题。这显然是实验代码,但假设 `module2.py` 包含以下的功能: +``` + + +def update_config_from_dict(config_dict): + items = config_dict.items() + while items: + k, v = items.pop() + memcache.set(k, v) + +def config_to_dict(): + result = {} + for k, v in memcache.getall(): + result[k] = v + return result + +def update_in_place(config_dict): + for k, v in config_dict.items(): + new_value = memcache.get(k) + if new_value is None: + del config_dict[k] + elif new_value != v: + config_dict[k] = v + +``` + +现在,当我们想从 `module3` 中调用这些函数时,我们遇到了一个问题:Python 3.6 中的字典类型与 Python 2.7 中的字典类型不同。在 Python 2 中,dicts 是无序的,它们的 `.keys()`, `.values()`, `.items()` 方法返回了正确的序列,这意味着调用 `.items()` 会在字典中创建状态的副本。在 Python 3 中,这些方法返回字典当前状态的动态视图。 + +这意味着如果 `module3` 调用 `module2.update_config_from_dict(some_dictionary)`,它将无法运行,因为 Python 3 中 `dict.items()` 返回的值不是一个列表,并且没有 `.pop()` 方法。反过来也是如此。如果 `module3` 调用 `module2.config_to_dict()`,它可能会返回一个 Python 2 的字典。现在调用 `.items()` 突然返回一个列表,所以这段代码无法正常工作(这对 Python 3 字典来说工作正常): +``` + +def main(cmdline_options): + d = module2.config_to_dict() + items = d.items() + for k, v in items: + print(f'Config from memcache: {k}={v}') + for k, v in cmdline_options: + d[k] = v + for k, v in items: + print(f'Config with cmdline overrides: {k}={v}') + +``` + +最后,使用 `module2.update_in_place()` 会失败,因为 Python 3 中 `.items()` 的值现在不允许在迭代过程中改变。 + +对于字典来说,还有很多问题。Python 2 的字典在 Python 3 上使用 `isinstance(d, dict)` 应该返回 `True` 
吗?如果是的话,这将是一个谎言。如果没有,代码将无法继续。 + +### Python 应该神奇地知道类型并会自动转换! + +为什么拥有 Python 6 的虚拟机无法识别 Python 3 的代码,在 Python 2 中调用 `some_dict.keys()` 时,我们还有别的意思吗?好吧,Python 不知道代码的作者在编写代码时,她所认为的 `some_dict` 应该是什么。代码中没有任何内容表明它是否是一个字典。在 Python 2 中没有类型注释,因为它们是可选的,即使在 Python 3 中,大多数代码也不会使用它们。 + +在运行时,当你调用 `some_dict.keys()` 的时候,Python 只是简单地在对象上查找一个属性,该属性恰好隐藏在 `some_dict` 名下,并试图在该属性上运行 `__call__()`。这里有一些关于方法绑定,描述符,slots 等技术问题,但这是它的核心。我们称这种行为为“鸭子类型”。 + +由于鸭子类型,Python 6 的虚拟机将无法做出编译时决定,以正确转换调用和属性查找。 + +### 好的,让我们在运行时做出这个决定 + +Python 6 的虚拟机可以通过标记每个属性查找来实现这一点,信息是“来自 py2 的调用”或“来自 py3 的调用”,并使对象发送正确的属性。这会让事情变得很慢,并且使用更多的内存。这将要求我们使用用户代码使用的代理将两种版本的给定类型保留在内存中。我们需要将这些对象的状态同步到用户背后,使工作加倍。毕竟,新字典的内存表示与 Python 2 不同。 + +如果你在思考字典问题,考虑 Python 3 中的 Unicode 字符串和 Python 2 中的字节(byte)字符串的所有问题。 + +### 一切都会丢失吗?Python 3 不能运行旧代码? + +一切都不会丢失。每天都会有项目移植到 Python 3。将 Python 2 代码移植到两个版本的 Python 上推荐方法是在代码上运行 [Python-Modernize][1]。它会捕获那些在 Python 3 上不起作用的代码,并使用 [six][2] 库将其替换,以便它在 Python 2 和 Python 3 上运行。这是对 `2to3` 的改编,它正在生成 Python 3-only 代码。`Modernize` 是首选,因为它提供了更多的增量迁移路线。所有的这些在 Python 文档中的 [Porting Python 2 Code to Python 3][3]文档中都有很好的概述。 + +但是,等一等,你不是说 Python 6 的虚拟机不能自动执行此操作吗?对。`Modernize` 查看你的代码,并试图猜测哪些是安全的。它会做出一些不必要的改变,还会错过其他必要的改变。但是,它不会帮助你处理字符串。如果你的代码没有保留“来自外部的二进制数据”和“流程中的文本数据”之间的界限,那么这种转换并非微不足道。 + +因此,迁移大项目不能自动完成,并且涉及人类进行测试,发现问题并修复它们。它工作吗?是的,我曾帮助[将一百万行代码迁移到 Python 3][4],并且交换没有造成事故。这一举措重获了我们服务器内存的 1/3,并使代码运行速度提高了 12%。那是在 Python 3.5 上,但是 Python 3.6 的速度要快得多,根据你的工作量,你甚至可以达到 [4 倍加速][5]。 + +### 亲爱的 Zed + +hi,伙计,我关注你已经超过 10 年了。我一直在观察,当你感到沮丧的时候,你对 Mongrel 没有任何信任,尽管 Rails 生态系统几乎全部都在上面运行。当你重新设计它并开始 Mongrel 2 项目时,我一直在观察。我一直在关注你使用 Fossil 这一令人惊讶的举动。随着你发布 “Rails 是一个贫民窟”的帖子,我看到你突然离开了 Ruby 社区。当你开始编写 “Learn Python The Hard Way” 并且开始推荐它时,我感到非常兴奋。2013 年我在 [DjangoCon Europe][6] 见过你,我们谈了很多关于绘画,唱歌和倦怠的内容。关于[这张你的照片][7]是我在 Instagram 上的第一篇文章。 + +你几乎把另一个“贫民区”的行动与 [“反对 Python 3” 案例][8] 文章拉到一起。我认为你本意是好的,但是这篇文章引起了很多混乱,包括许多人认为你认为 Python 3 不是图灵完整的。我花了好几个小时让人们相信,你是在开玩笑。但是,鉴于你对 Python 学习方式的重大贡献,我认为这是值得的。特别是你为 Python 3 
更新了你的书。感谢你做这件事。如果我们社区中真的有人要求因你的帖子为由将你和你的书列入黑名单,把他们请出去。这是一个双输的局面,这是错误的。 + +在记录中,没有一个核心 Python 开发人员认为 Python 2 到 Python 3 的转换过程会顺利而且计划得当,[包括 Guido van Rossum][9]。真的,可以看那个视频,这有点事后诸葛亮的意思了。从这个意义上说,我们实际上是积极地相互认同的。如果我们再做一次,它会看起来不一样。但在这一点上,[在 2020 年 1 月 1 日,Python 2 将会到达终结][10]。大多数第三方库已经支持 Python 3,甚至开始发布只支持 Python 3 版本(参见[Django][11]或 [科学项目关于 Python 3 的声明][12])。 + +我们也积极地就另一件事达成一致。就像你于 Mongrel 一样,Python 核心开发人员是志愿者,他们的工作没有得到报酬。我们大多数人在这个项目上投入了大量的时间和精力,因此[我们自然而然敏感][13]于那些对他们的贡献不屑一顾和激烈的评论。特别是如果这个信息既攻击目前的事态,又要求更多的自由劳动。 + +我希望到 2018 年你会让忘记 2016 发布的帖子,有一堆好的反驳。[我特别喜欢 eevee][14](译注:eevee 是一个为 Blender 设计的渲染器)。它特别针对“一起运行 Python 2 和 Python 3 ”的场景,这是不现实的,就像在同一个虚拟机中运行 Ruby 1.8 和 Ruby 2.x 一样,或者像 Lua 5.3 和 Lua 5.1 同时运行一样。你甚至不能用 libc.so.6 运行针对 libc.so.5 编译的 C 二进制文件。然而,我发现最令人惊讶的是,你声称 Python 核心开发者是“有目的地”创造诸如 2to3 之类的破坏工具,这些由 Guido 创建,其最大利益就是让每个人尽可能顺利,快速地迁移。我很高兴你在之后的帖子中放弃了这个说法,但是你必须意识到你会激怒那些阅读原始版本的人。对蓄意伤害的指控最好有强有力的证据支持。 + +但看起来你仍然会这样做。[就在今天][15]你说 Python 核心开发者“忽略”尝试解决 API 的问题,特别是 `six`。正如我上面写的那样,Python 文档中的官方移植指南涵盖了 “six”。更重要的是,`six` 是由 Python 2.7 的发布管理者 Benjamin Peterson 编写。很多人学会了编程,这要归功于你,而且由于你在网上有大量的粉丝,人们会阅读这样的推文,他们会相信它的价值,这是有害的。 + +我有一个建议,让我们把 “Python 3 管理不善”的争议搁置起来。Python 2 正在死亡,这个过程会很慢,并且它是丑陋而血腥的,但它是一条单行道。争论那些没有用。相反,让我们专注于我们现在可以做什么来使 Python 3.8 比其他任何 Python 版本更好。也许你更喜欢看外面的角色,但作为这个社区的成员,你会更有影响力。请说“我们”而不是“他们”。 + + +-------------------------------------------------------------------------------- + +via: http://lukasz.langa.pl/13/could-we-run-python-2-and-python-3-code-same-vm/ + +作者:[Łukasz Langa][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) +选题:[lujun9972](https://github.com/lujun9972) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://lukasz.langa.pl +[1]:https://python-modernize.readthedocs.io/ +[2]:http://pypi.python.org/pypi/six +[3]:https://docs.python.org/3/howto/pyporting.html +[4]:https://www.youtube.com/watch?v=66XoCk79kjM +[5]:https://twitter.com/llanga/status/963834977745022976 
+[6]:https://www.instagram.com/p/ZVC9CwH7G1/ +[7]:https://www.instagram.com/p/ZXtdtUn7Gk/ +[8]:https://learnpythonthehardway.org/book/nopython3.html +[9]:https://www.youtube.com/watch?v=Oiw23yfqQy8 +[10]:https://mail.python.org/pipermail/python-dev/2018-March/152348.html +[11]:https://pypi.python.org/pypi/Django/2.0.3 +[12]:http://python3statement.org/ +[13]:https://www.youtube.com/watch?v=-Nk-8fSJM6I +[14]:https://eev.ee/blog/2016/11/23/a-rebuttal-for-python-3/ +[15]:https://twitter.com/zedshaw/status/977909970795745281 From 417c321c47373882aad0aaa7e067677a24d3138b Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Fri, 27 Apr 2018 23:52:46 +0800 Subject: [PATCH 003/102] =?UTF-8?q?Rename=20Could=20we=20run=20Python=202?= =?UTF-8?q?=20and=20Python=203=20code=20in=20the=20same=20V=E2=80=A6=20=20?= =?UTF-8?q?=E2=80=A6M=20with=20no=20code=20changes.md=20to=2020180325=20Co?= =?UTF-8?q?uld=20we=20run=20Python=202=20and=20Python=203=20code=20in=20th?= =?UTF-8?q?e=20same=20VM=20with=20no=20code=20changes.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Python 2 and Python 3 code in the same VM with no code changes.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename translated/tech/{Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md => 20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md} (100%) diff --git a/translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md b/translated/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md similarity index 100% rename from translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md rename to translated/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md From 4b36848a4423ad6548b039a8c31d08a5206f7de3 Mon Sep 17 
00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 28 Apr 2018 06:55:04 +0800 Subject: [PATCH 004/102] PRF:20180127 Your instant Kubernetes cluster.md @qhwdw --- ...0180127 Your instant Kubernetes cluster.md | 37 ++++++------------- 1 file changed, 12 insertions(+), 25 deletions(-) diff --git a/translated/tech/20180127 Your instant Kubernetes cluster.md b/translated/tech/20180127 Your instant Kubernetes cluster.md index ac06ba730f..a2ccd874fe 100644 --- a/translated/tech/20180127 Your instant Kubernetes cluster.md +++ b/translated/tech/20180127 Your instant Kubernetes cluster.md @@ -1,19 +1,15 @@ “开箱即用” 的 Kubernetes 集群 ============================================================ - -这是我以前的 [10 分钟内配置 Kubernetes][10] 教程的精简版和更新版。我删除了一些我认为可以去掉的内容,所以,这个指南仍然是可理解的。当你想在云上创建一个集群或者尽可能快地构建基础设施时,你可能会用到它。 +这是我以前的 [10 分钟内配置 Kubernetes][10] 教程的精简版和更新版。我删除了一些我认为可以去掉的内容,所以,这个指南仍然是通顺的。当你想在云上创建一个集群或者尽可能快地构建基础设施时,你可能会用到它。 ### 1.0 挑选一个主机 我们在本指南中将使用 Ubuntu 16.04,这样你就可以直接拷贝/粘贴所有的指令。下面是我用本指南测试过的几种环境。根据你运行的主机,你可以从中挑选一个。 * [DigitalOcean][1] - 开发者云 - * [Civo][2] - UK 开发者云 - * [Packet][3] - 裸机云 - * 2x Dell Intel i7 服务器 —— 它在我家中 > Civo 是一个相对较新的开发者云,我比较喜欢的一点是,它开机时间只有 25 秒,我就在英国,因此,它的延迟很低。 @@ -25,14 +21,12 @@ 下面是一些其他的指导原则: * 最好选至少有 2 GB 内存的双核主机 - * 在准备主机的时候,如果你可以自定义用户名,那么就不要使用 root。例如,Civo 通常让你在 `ubuntu`、`civo` 或者 `root` 中选一个。 -现在,在每台机器上都运行以下的步骤。它将需要 5-10 钟时间。如果你觉得太慢了,你可以使用我的脚本 [kept in a Gist][11]: +现在,在每台机器上都运行以下的步骤。它将需要 5-10 钟时间。如果你觉得太慢了,你可以使用我[放在 Gist][11] 的脚本 : ``` $ curl -sL https://gist.githubusercontent.com/alexellis/e8bbec45c75ea38da5547746c0ca4b0c/raw/23fc4cd13910eac646b13c4f8812bab3eeebab4c/configure.sh | sh - ``` ### 1.2 登入和安装 Docker @@ -42,7 +36,6 @@ $ curl -sL https://gist.githubusercontent.com/alexellis/e8bbec45c75ea38da5547746 ``` $ sudo apt-get update \ && sudo apt-get install -qy docker.io - ``` ### 1.3 禁用 swap 文件 @@ -69,7 +62,6 @@ $ sudo apt-get update \ kubelet \ kubeadm \ kubernetes-cni - ``` ### 1.5 创建集群 @@ -80,7 +72,6 @@ $ sudo apt-get update \ ``` $ sudo kubeadm init - ``` 
如果你错过一个步骤或者有问题,`kubeadm` 将会及时告诉你。 @@ -90,41 +81,37 @@ $ sudo kubeadm init ``` mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config -sudo chown $(id -u):$(id -g) $HOME/.kube/config - +sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` -确保你一定要记下如下的加入 token 命令。 +确保你一定要记下如下的加入 `token` 的命令。 ``` $ sudo kubeadm join --token c30633.d178035db2b4bb9a 10.0.0.5:6443 --discovery-token-ca-cert-hash sha256: - ``` ### 2.0 安装网络 -Kubernetes 可用于任何网络供应商的产品或服务,但是,默认情况下什么也没有,因此,我们使用来自  [Weaveworks][12] 的 Weave Net,它是 Kebernetes 社区中非常流行的选择之一。它倾向于不需要额外配置的 “开箱即用”。 +许多网络提供商提供了 Kubernetes 支持,但是,默认情况下 Kubernetes 都没有包括。这里我们使用来自  [Weaveworks][12] 的 Weave Net,它是 Kebernetes 社区中非常流行的选择之一。它近乎不需要额外配置的 “开箱即用”。 ``` $ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" - ``` -如果在你的主机上启用了私有网络,那么,你可能需要去修改 Weavenet 使用的私有子网络,以便于为 Pods(容器)分配 IP 地址。下面是命令示例: +如果在你的主机上启用了私有网络,那么,你可能需要去修改 Weavenet 使用的私有子网络,以便于为 Pod(容器)分配 IP 地址。下面是命令示例: ``` $ curl -SL "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=172.16.6.64/27" \ | kubectl apply -f - - ``` -> Weave 也有很酷的称为 Weave Cloud 的可视化工具。它是免费的,你可以在它上面看到你的 Pods 之间的路径流量。[这里有一个使用 OpenFaaS 项目的示例][6]。 +> Weave 也有很酷的称为 Weave Cloud 的可视化工具。它是免费的,你可以在它上面看到你的 Pod 之间的路径流量。[这里有一个使用 OpenFaaS 项目的示例][6]。 ### 2.2 在集群中加入工作节点 现在,你可以切换到你的每一台工作节点,然后使用 1.5 节中的 `kubeadm join` 命令。运行完成后,登出那个工作节点。 -### 3.0 收益 +### 3.0 收获 到此为止 —— 我们全部配置完成了。你现在有一个正在运行着的集群,你可以在它上面部署应用程序。如果你需要设置仪表板 UI,你可以去参考 [Kubernetes 文档][13]。 @@ -134,21 +121,21 @@ NAME STATUS ROLES AGE VERSION openfaas1 Ready master 20m v1.9.2 openfaas2 Ready 19m v1.9.2 openfaas3 Ready 19m v1.9.2 - ``` 如果你想看到我一步一步创建集群并且展示 `kubectl` 如何工作的视频,你可以看下面我的视频,你可以订阅它。 +![](https://youtu.be/6xJwQgDnMFE) -想在你的 Mac 电脑上,使用 Minikube 或者 Docker 的 Mac Edge 版本,安装一个 “开箱即用” 的 Kubernetes 集群,[阅读在这里的我的评估和第一印象][14]。 +你也可以在你的 Mac 电脑上,使用 Minikube 或者 Docker 的 Mac Edge 版本,安装一个 “开箱即用” 的 Kubernetes 集群。[阅读在这里的我的评估和第一印象][14]。 
-------------------------------------------------------------------------------- via: https://blog.alexellis.io/your-instant-kubernetes-cluster/ -作者:[Alex Ellis ][a] +作者:[Alex Ellis][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e29b97959c3f7e451d1e0defac33d5020144a85d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 28 Apr 2018 06:56:34 +0800 Subject: [PATCH 005/102] PUB:20180127 Your instant Kubernetes cluster.md @qhwdw https://linux.cn/article-9584-1.html --- .../20180127 Your instant Kubernetes cluster.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180127 Your instant Kubernetes cluster.md (100%) diff --git a/translated/tech/20180127 Your instant Kubernetes cluster.md b/published/20180127 Your instant Kubernetes cluster.md similarity index 100% rename from translated/tech/20180127 Your instant Kubernetes cluster.md rename to published/20180127 Your instant Kubernetes cluster.md From 103ba8045660e85d4e282d815dc102e873a43e9e Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 28 Apr 2018 07:09:58 +0800 Subject: [PATCH 006/102] PRF:20180405 How to find files in Linux.md @geekpi --- .../20180405 How to find files in Linux.md | 31 ++++++++++--------- 1 file changed, 16 insertions(+), 15 deletions(-) diff --git a/translated/tech/20180405 How to find files in Linux.md b/translated/tech/20180405 How to find files in Linux.md index 2566575d72..5af7e476fe 100644 --- a/translated/tech/20180405 How to find files in Linux.md +++ b/translated/tech/20180405 How to find files in Linux.md @@ -1,14 +1,17 @@ 如何在 Linux 中查找文件 ====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0) -如果你是 Windows 或 OSX 的非高级用户,那么可能使用 GUI 
来查找文件。你也可能发现界面限制,令人沮丧,或者两者兼而有之,并学会组织文件并记住它们的确切顺序。你也可以在 Linux 中做到这一点 - 但你不必这样做。 +> 使用简单的命令在 Linux 下基于类型、内容等快速查找文件。 -Linux 的好处之一是它提供了多种方式来处理。你可以打开任何文件管理器或按下 `ctrl` +`f`,你也可以使用程序手动打开文件,或者你可以开始输入字母,它会过滤当前目录列表。 +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0) + +如果你是 Windows 或 OSX 的非资深用户,那么可能使用 GUI 来查找文件。你也可能发现界面受限,令人沮丧,或者两者兼而有之,并学会了组织文件并记住它们的确切顺序。你也可以在 Linux 中做到这一点 —— 但你不必这样做。 + +Linux 的好处之一是它提供了多种方式来处理。你可以打开任何文件管理器或按下 `Ctrl+F`,你也可以使用程序手动打开文件,或者你可以开始输入字母,它会过滤当前目录列表。 ![Screenshot of how to find files in Linux with Ctrl+F][2] -使用 Ctrl+F 在 Linux 中查找文件的截图 +*使用 Ctrl+F 在 Linux 中查找文件的截图* 但是如果你不知道你的文件在哪里,又不想搜索整个磁盘呢?对于这个以及其他各种情况,Linux 都很合适。 @@ -16,7 +19,7 @@ Linux 的好处之一是它提供了多种方式来处理。你可以打开任 如果你习惯随心所欲地放文件,Linux 文件系统看起来会让人望而生畏。对我而言,最难习惯的一件事是找到程序在哪里。 -例如,`which bash` 通常会返回 `/bin/bash`,但是如果你下载了一个程序并且它没有出现在你的菜单中,`which` 命令可以是一个很好的工具。 +例如,`which bash` 通常会返回 `/bin/bash`,但是如果你下载了一个程序并且它没有出现在你的菜单中,那么 `which` 命令就是一个很好的工具。 一个类似的工具是 `locate` 命令,我发现它对于查找配置文件很有用。我不喜欢输入程序名称,因为像 `locate php` 这样的简单程序通常会提供很多需要进一步过滤的结果。 @@ -25,31 +28,29 @@ Linux 的好处之一是它提供了多种方式来处理。你可以打开任 * `man which` * `man locate` +### find +`find` 工具提供了更先进的功能。以下是我安装在许多服务器上的脚本示例,我用于确保特定模式的文件(也称为 glob)仅存在五天,并且所有早于此的文件都将被删除。 (自上次修改以来,分数用于保留最多 240 分钟的偏差) -### Find - -`find` 工具提供了更先进的功能。以下是我安装在许多服务器上的脚本示例,我用于确保特定模式的文件(也称为 glob)仅存在五天,并且所有早于该文件的文件都将被删除。 (自上次修改以来,十进制用于计算最多 240 分钟的差异) ``` find ./backup/core-files*.tar.gz -mtime +4.9 -exec rm {} \; - ``` `find` 工具有许多高级用法,但最常见的是对结果执行命令,而不用链式地按照类型、创建日期、修改日期过滤文件。 -find 的另一个有趣用处是找到所有有可执行权限的文件。这有助于确保没有人在你昂贵的服务器上安装比特币挖矿程序或僵尸网络。 +`find` 的另一个有趣用处是找到所有有可执行权限的文件。这有助于确保没有人在你昂贵的服务器上安装比特币挖矿程序或僵尸网络。 + ``` find / -perm /+x - ``` 有关 `find` 的更多信息,请使用 `man find` 参考 `man` 页面。 -### Grep +### grep 想通过内容中查找文件? 
Linux 已经实现了。你可以使用许多 Linux 工具来高效搜索符合模式的文件,但是 `grep` 是我经常使用的工具。 -假设你有一个程序发布代码引用和堆栈跟踪的错误消息。你要在日志中找到这些。 grep 不总是最好的方法,但如果文件是一个给定的值,我经常使用 `grep -R`。 +假设你有一个程序发布代码引用和堆栈跟踪的错误消息。你要在日志中找到这些。 `grep` 不总是最好的方法,但如果文件是一个给定的值,我经常使用 `grep -R`。 越来越多的 IDE 正在实现查找功能,但是如果你正在访问远程系统或出于任何原因没有 GUI,或者如果你想在当前目录递归查找,请使用:`grep -R {searchterm} ` 或在支持 `egrep` 别名的系统上,只需将 `-e` 标志添加到命令 `egrep -r {regex-pattern}`。 @@ -62,9 +63,9 @@ find / -perm /+x via: https://opensource.com/article/18/4/how-find-files-linux 作者:[Lewis Cowles][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) 选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9717d40425f85780385979ac07efb9f82d667272 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 28 Apr 2018 07:10:39 +0800 Subject: [PATCH 007/102] PUB:20180405 How to find files in Linux.md @geekpi --- .../tech => published}/20180405 How to find files in Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180405 How to find files in Linux.md (100%) diff --git a/translated/tech/20180405 How to find files in Linux.md b/published/20180405 How to find files in Linux.md similarity index 100% rename from translated/tech/20180405 How to find files in Linux.md rename to published/20180405 How to find files in Linux.md From 069fb07755cbfaed8f95a00f685510a3d3240ea2 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 28 Apr 2018 08:37:54 +0800 Subject: [PATCH 008/102] translated --- ...NG GO PROJECTS WITH DOCKER ON GITLAB CI.md | 157 ------------------ ...NG GO PROJECTS WITH DOCKER ON GITLAB CI.md | 155 +++++++++++++++++ 2 files changed, 155 insertions(+), 157 deletions(-) delete mode 100644 sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md create mode 100644 translated/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB 
CI.md diff --git a/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md b/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md deleted file mode 100644 index da8a95e5b2..0000000000 --- a/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md +++ /dev/null @@ -1,157 +0,0 @@ -translating----geekpi - -BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI -=============================================== - -### Intro - -This post is a summary of my research on building Go projects in a Docker container on CI (Gitlab, specifically). I found solving private dependencies quite hard (coming from a Node/.NET background) so that is the main reason I wrote this up. Please feel free to reach out if there are any issues or a submit pull request on the Docker image. - -### Dep - -As dep is the best option for managing Go dependencies right now, the build will need to run `dep ensure` before building. - -Note: I personally do not commit my `vendor/` folder into source control, if you do, I’m not sure if this step can be skipped or not. - -The best way to do this with Docker builds is to use `dep ensure -vendor-only`. [See here][1]. - -### Docker Build Image - -I first tried to use `golang:1.10` but this image doesn’t have: - -* curl - -* git - -* make - -* dep - -* golint - -I have created my own Docker image for builds ([github][2] / [dockerhub][3]) which I will keep up to date - but I offer no guarantees so you should probably create and manage your own. - -### Internal Dependencies - -We’re quite capable of building any project that has publicly accessible dependencies so far. But what about if your project depends on another private gitlab repository? - -Running `dep ensure` locally should work with your git setup, but once on CI this doesn’t apply and builds will fail. 
- -### Gitlab Permissions Model - -This was [added in Gitlab 8.12][4] and the most useful feature we care about is the `CI_JOB_TOKEN` environment variable made available during builds. - -This basically means we can clone [dependent repositories][5] like so - -``` -git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/myuser/mydependentrepo - -``` - -However we do want to make this a bit more user friendly as dep will not magically add credentials when trying to pull code. - -We will add this line to the `before_script` section of the `.gitlab-ci.yml`. - -``` -before_script: - - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc - -``` - -Using the `.netrc` file allows you to specify which credentials to use for which server. This method allows you to avoid entering a username and password every time you pull (or push) from Git. The password is stored in plaintext so you shouldn’t do this on your own machine. This is actually for `cURL` which Git uses behind the scenes. [Read more here][6]. - -Project Files -============================================================ - -### Makefile - -While this is optional, I have found it makes things easier. - -Configuring these steps below means in the CI script (and locally) we can run `make lint`, `make build` etc without repeating steps each time. - -``` -GOFILES = $(shell find . -name '*.go' -not -path './vendor/*') -GOPACKAGES = $(shell go list ./... | grep -v /vendor/) - -default: build - -workdir: - mkdir -p workdir - -build: workdir/scraper - -workdir/scraper: $(GOFILES) - GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o workdir/scraper . - -test: test-all - -test-all: - @go test -v $(GOPACKAGES) - -lint: lint-all - -lint-all: - @golint -set_exit_status $(GOPACKAGES) - -``` - -### .gitlab-ci.yml - -This is where the Gitlab CI magic happens. You may want to swap out the image for your own. 
- -``` -image: sjdweb/go-docker-build:1.10 - -stages: - - test - - build - -before_script: - - cd $GOPATH/src - - mkdir -p gitlab.com/$CI_PROJECT_NAMESPACE - - cd gitlab.com/$CI_PROJECT_NAMESPACE - - ln -s $CI_PROJECT_DIR - - cd $CI_PROJECT_NAME - - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc - - dep ensure -vendor-only - -lint_code: - stage: test - script: - - make lint - -unit_tests: - stage: test - script: - - make test - -build: - stage: build - script: - - make - -``` - -### What This Is Missing - -I would usually be building a Docker image with my binary and pushing that to the Gitlab Container Registry. - -You can see I’m building the binary and exiting, you would at least want to store that binary somewhere (such as a build artifact). - --------------------------------------------------------------------------------- - -via: https://seandrumm.co.uk/blog/building-go-projects-with-docker-on-gitlab-ci/ - -作者:[ SEAN DRUMM][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://seandrumm.co.uk/ -[1]:https://github.com/golang/dep/blob/master/docs/FAQ.md#how-do-i-use-dep-with-docker -[2]:https://github.com/sjdweb/go-docker-build/blob/master/Dockerfile -[3]:https://hub.docker.com/r/sjdweb/go-docker-build/ -[4]:https://docs.gitlab.com/ce/user/project/new_ci_build_permissions_model.html -[5]:https://docs.gitlab.com/ce/user/project/new_ci_build_permissions_model.html#dependent-repositories -[6]:https://github.com/bagder/everything-curl/blob/master/usingcurl-netrc.md diff --git a/translated/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md b/translated/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md new file mode 100644 index 0000000000..765dd14f33 --- /dev/null +++ b/translated/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md @@ -0,0 +1,155 @@ +在 GITLAB 
CI 中使用 DOCKER 构建 GO 项目 +=============================================== + +### 介绍 + +这篇文章是我在 CI 的 Docker 容器中构建 Go 项目的研究总结(特别是在 Gitlab 中)。我发现很难解决私有依赖问题(来自 Node/.NET 背景),因此这是我写这篇文章的主要原因。如果 Docker 镜像上存在任何问题或提交请求,请随时与我们联系。 + +### Dep + +由于 dep 是现在管理 Go 依赖关系的最佳选择,因此在构建前之前运行 `dep ensure`。 + +注意:我个人不会将我的 `vendor/`  文件夹提交到源码控制,如果你这样做,我不确定这个步骤是否可以跳过。 + +使用 Docker 构建的最好方法是使用 `dep ensure -vendor-only`。 [见这里][1]。 + +### Docker 构建镜像 + +我第一次尝试使用  `golang:1.10`,但这个镜像没有: + +* curl + +* git + +* make + +* dep + +* golint + +我已经为我将不断更新的构建创建好了镜像([github][2] / [dockerhub][3]) - 但我不提供任何保证,因此你应该创建并管理自己的 Dockerhub。 + +### 内部依赖关系 + +我们完全有能力创建一个有公共依赖关系的项目。但是如果你的项目依赖于另一个私人 gitlab 仓库呢? + +在本地运行 `dep ensure` 应该可以和你的 git 设置一起工作,但是一旦在 CI 上不适用,构建就会失败。 + +### Gitlab 权限模型 + +这是在[ Gitlab 8.12 中添加的][4],我们关心的最有用的功能是在构建期提供的 `CI_JOB_TOKEN` 环境变量。 + +这基本上意味着我们可以像这样克隆[依赖仓库][5] + +``` +git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/myuser/mydependentrepo + +``` + +然而,我们希望使这更友好一点,因为 dep 在试图拉取代码时不会奇迹般地添加凭据。 + +我们将把这一行添加到 `.gitlab-ci.yml` 的 `before_script` 部分。 + +``` +before_script: + - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc + +``` + +使用 `.netrc` 文件可以指定哪个凭证用于哪个服务器。这种方法可以避免每次从 Git 中拉取(或推送)时输入用户名和密码。密码以明文形式存储,因此你不应在自己的计算机上执行此操作。这实际用于 Git 在背后使用  `cURL`。 [在这里阅读更多][6]。 + +项目文件 +============================================================ + +### Makefile + +虽然这是可选的,但我发现它使事情变得更容易。 + +配置这些步骤意味着在 CI 脚本(和本地)中,我们可以运行 `make lint`、`make build` 等,而无需每次重复步骤。 + +``` +GOFILES = $(shell find . -name '*.go' -not -path './vendor/*') +GOPACKAGES = $(shell go list ./... | grep -v /vendor/) + +default: build + +workdir: + mkdir -p workdir + +build: workdir/scraper + +workdir/scraper: $(GOFILES) + GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o workdir/scraper . 
+ +test: test-all + +test-all: + @go test -v $(GOPACKAGES) + +lint: lint-all + +lint-all: + @golint -set_exit_status $(GOPACKAGES) + +``` + +### .gitlab-ci.yml + +这是 Gitlab CI 魔术发生的地方。你可能想使用自己的镜像。 + +``` +image: sjdweb/go-docker-build:1.10 + +stages: + - test + - build + +before_script: + - cd $GOPATH/src + - mkdir -p gitlab.com/$CI_PROJECT_NAMESPACE + - cd gitlab.com/$CI_PROJECT_NAMESPACE + - ln -s $CI_PROJECT_DIR + - cd $CI_PROJECT_NAME + - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc + - dep ensure -vendor-only + +lint_code: + stage: test + script: + - make lint + +unit_tests: + stage: test + script: + - make test + +build: + stage: build + script: + - make + +``` + +### 缺少了什么 + +我通常会用我的二进制文件构建 Docker 镜像,并将其推送到 Gitlab 容器注册器中。 + +你可以看到我正在构建二进制文件并退出,你至少需要将该二进制文件(例如生成文件)存储在某处。 + +-------------------------------------------------------------------------------- + +via: https://seandrumm.co.uk/blog/building-go-projects-with-docker-on-gitlab-ci/ + +作者:[ SEAN DRUMM][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://seandrumm.co.uk/ +[1]:https://github.com/golang/dep/blob/master/docs/FAQ.md#how-do-i-use-dep-with-docker +[2]:https://github.com/sjdweb/go-docker-build/blob/master/Dockerfile +[3]:https://hub.docker.com/r/sjdweb/go-docker-build/ +[4]:https://docs.gitlab.com/ce/user/project/new_ci_build_permissions_model.html +[5]:https://docs.gitlab.com/ce/user/project/new_ci_build_permissions_model.html#dependent-repositories +[6]:https://github.com/bagder/everything-curl/blob/master/usingcurl-netrc.md From b22adaf3f9d7769f640a329e35024ea4c541413b Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 28 Apr 2018 08:40:44 +0800 Subject: [PATCH 009/102] translating --- ...3 Easily Install Android Studio in Ubuntu And Linux Mint.md | 3 +++ 1 file changed, 3 insertions(+) diff --git 
a/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md b/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md index d5e33425b2..7fb5b4facc 100644 --- a/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md +++ b/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md @@ -1,3 +1,6 @@ +translating---geekpi + + Easily Install Android Studio in Ubuntu And Linux Mint ====== [Android Studio][1], Google’s own IDE for Android development, is a nice alternative to Eclipse with ADT plugin. Android Studio can be installed from its source code but in this quick post, we shall see **how to install Android Studio in Ubuntu 18.04, 16.04 and corresponding Linux Mint variants**. From 35e8c157014bed4ee52ae3328a2519482e4bcc33 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 28 Apr 2018 12:53:25 +0800 Subject: [PATCH 010/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Enhance=20your=20?= =?UTF-8?q?Python=20with=20an=20interactive=20shell?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...e your Python with an interactive shell.md | 93 +++++++++++++++++++ 1 file changed, 93 insertions(+) create mode 100644 sources/tech/20180425 Enhance your Python with an interactive shell.md diff --git a/sources/tech/20180425 Enhance your Python with an interactive shell.md b/sources/tech/20180425 Enhance your Python with an interactive shell.md new file mode 100644 index 0000000000..19826d142b --- /dev/null +++ b/sources/tech/20180425 Enhance your Python with an interactive shell.md @@ -0,0 +1,93 @@ +Enhance your Python with an interactive shell +====== +![](https://fedoramagazine.org/wp-content/uploads/2018/03/python-shells-816x345.jpg) +The Python programming language has become one of the most popular languages used in IT. One reason for this success is it can be used to solve a variety of problems. 
From web development to data science, machine learning to task automation, the Python ecosystem is rich in popular frameworks and libraries. This article presents some useful Python shells available in the Fedora packages collection to make development easier. + +### Python Shell + +The Python Shell lets you use the interpreter in an interactive mode. It’s very useful to test code or try a new library. In Fedora you can invoke the default shell by typing python3 in a terminal session. Some more advanced and enhanced shells are available in Fedora, though. + +### IPython + +IPython provides many useful enhancements to the Python shell. Examples include tab completion, object introspection, system shell access and command history retrieval. Many of these features are also used by the [Jupyter Notebook][1], since it uses IPython underneath. + +#### Install and run IPython +``` +dnf install ipython3 +ipython3 + +``` + +Using tab completion prompts you with possible choices. This feature comes in handy when you use an unfamiliar library. + +![][2] + +If you need more information, use the documentation by typing the ? command. For more details, you can use the ?? command. + +![][3] + +Another cool feature is the ability to execute a system shell command using the ! character. The result of the command can then be referenced in the IPython shell. + +![][4] + +A comprehensive list of IPython features is available in the [official documentation][5]. + +### bpython + +bpython doesn’t do as much as IPython, but nonetheless it provides a useful set of features in a simple and lightweight package. Among other features, bpython provides: + + * In-line syntax highlighting + * Autocomplete with suggestions as you type + * Expected parameter list + * Ability to send or save code to a pastebin service or file + + + +#### Install and run bpython +``` +dnf install bpython3 +bpython3 + +``` + +As you type, bpython offers you choices to autocomplete your code.
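
Under the hood, both IPython and bpython surface metadata that Python itself already exposes. As an illustrative sketch (not from the original article), the parameter list and docstring these shells display can be read with the standard inspect module:

```
import inspect

def greet(name, punctuation="!"):
    """Return a friendly greeting."""
    return f"Hello, {name}{punctuation}"

# bpython's parameter hints and IPython's `greet?` both draw on metadata
# that every Python object already carries:
print(f"greet{inspect.signature(greet)}")  # prints: greet(name, punctuation='!')
print(greet.__doc__)                       # prints: Return a friendly greeting.
```

The shells simply present this information automatically as you type, instead of making you ask for it.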
+ +![][6] + +When you call a function or method, the expected parameters and the docstring are automatically displayed. + +![][7] + +Another neat feature is the ability to open the current bpython session in an external editor (Vim by default) using the function key F7. This is very useful when testing more complex programs. + +For more details about configuration and features, consult the bpython [documentation][8]. + +### Conclusion + +Using an enhanced Python shell is a good way to increase productivity. It gives you enhanced features to write a quick prototype or try out a new library. Are you using an enhanced Python shell? Feel free to mention it in the comment section below. + +Photo by [David Clode][9] on [Unsplash][10] + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/enhance-python-interactive-shell/ + +作者:[Clément Verna][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fedoramagazine.org/author/cverna/ +[1]:https://ipython.org/notebook.html +[2]:https://fedoramagazine.org/wp-content/uploads/2018/03/ipython-tabcompletion.png +[3]:https://fedoramagazine.org/wp-content/uploads/2018/03/ipyhton_doc1.png +[4]:https://fedoramagazine.org/wp-content/uploads/2018/03/ipython_shell.png +[5]:https://ipython.readthedocs.io/en/stable/overview.html#main-features-of-the-interactive-shell +[6]:https://fedoramagazine.org/wp-content/uploads/2018/03/bpython1.png +[7]:https://fedoramagazine.org/wp-content/uploads/2018/03/bpython2.png +[8]:https://docs.bpython-interpreter.org/ +[9]:https://unsplash.com/photos/d0CasEMHDQs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[10]:https://unsplash.com/search/photos/python?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText From 
6e420c8152e4d38953af914e03f19f47e9e5b8ff Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 28 Apr 2018 12:55:30 +0800 Subject: [PATCH 011/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Get=20started=20w?= =?UTF-8?q?ith=20Pidgin:=20An=20open=20source=20replacement=20for=20Skype?= =?UTF-8?q?=20for=20Business?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...urce replacement for Skype for Business.md | 63 +++++++++++++++++++ 1 file changed, 63 insertions(+) create mode 100644 sources/talk/20180426 Get started with Pidgin- An open source replacement for Skype for Business.md diff --git a/sources/talk/20180426 Get started with Pidgin- An open source replacement for Skype for Business.md b/sources/talk/20180426 Get started with Pidgin- An open source replacement for Skype for Business.md new file mode 100644 index 0000000000..9521ac92e2 --- /dev/null +++ b/sources/talk/20180426 Get started with Pidgin- An open source replacement for Skype for Business.md @@ -0,0 +1,63 @@ +Get started with Pidgin: An open source replacement for Skype for Business +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/meeting-team-listen-communicate.png?itok=KEBP6vZ_) +Technology is at an interesting crossroads, where Linux rules the server landscape but Microsoft rules the enterprise desktop. Office 365, Skype for Business, Microsoft Teams, OneDrive, Outlook... the list goes on of Microsoft software and services that dominate the enterprise workspace. + +What if you could replace that proprietary software with free and open source applications and make them work with an Office 365 backend you have no choice but to use? Buckle up, because that is exactly what we are going to do with Pidgin, an open source replacement for Skype. + +### Installing Pidgin and SIPE + +Microsoft's Office Communicator became Microsoft Lync which became what we know today as Skype for Business. 
There are [paid software options][1] for Linux that provide feature parity with Skype for Business, but [Pidgin][2] is a fully free and open source option licensed under the GNU GPL. + +Pidgin can be found in just about every Linux distro's repository, so getting your hands on it should not be a problem. The only Skype feature that won't work with Pidgin is screen sharing, and file sharing can be a bit hit or miss, but there are ways to work around it. + +You also need a [SIPE][3] plugin, as it's part of the secret sauce to make Pidgin work as a Skype for Business replacement. Please note that the `sipe` library has different names in different distros. For example, the library's name on System76's Pop_OS! is `pidgin-sipe` while in the Solus 3 repo it is simply `sipe`. + +With the prerequisites out of the way, you can begin configuring Pidgin. + +### Configuring Pidgin + +When firing up Pidgin for the first time, click on **Add** to add a new account. In the Basic tab (shown in the screenshot below), select **Office Communicator** in the **Protocol** drop-down, then type your **business email address** in the **Username** field. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pidgin_basic_account_screen_final.png?itok=1zoSbZjy) + +Next, click on the Advanced tab. In the **Server[:Port]** field enter **sipdir.online.lync.com:443** and in **User Agent** enter **UCCAPI/16.0.6965.5308 OC/16.0.6965.2117**. + +Your Advanced tab should now look like this: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pidgin_advanced_account_screen.png?itok=Z6loRfGi) + +You shouldn't need to make any changes to the Proxy tab or the Voice and Video tab. Just to be certain, make sure **Proxy type** is set to **Use Global Proxy Settings** and in the Voice and Video tab, the **Use silence suppression** checkbox is **unchecked**.
+ +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pidgin_account_proxy_screen.png?itok=iDgszWy0) + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pidgin_voiceandvideo_screen.png?itok=klkbt5hr) + +After you've completed those configurations, click **Add,** and you'll be prompted for your email account's password. + +### Adding contacts + +To add contacts to your buddy list, click on **Manage Accounts** in the **Buddy Window**. Hover over your account and select **Contact Search** to look up your colleagues. If you run into any problems when searching by first and last name, try searching with your colleague's full email address, and you should always get the right person. + +You are now up and running with a Skype for Business replacement that gives you about 98% of the functionality you need to banish the proprietary option from your desktop. + +Ray Shimko will be speaking about [Linux in a Microsoft World][4] at [LinuxFest NW][5] April 28-29. See program highlights or register to attend. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/4/pidgin-open-source-replacement-skype-business + +作者:[Ray Shimko][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/shickmo +[1]:https://tel.red/linux.php +[2]:https://pidgin.im/ +[3]:http://sipe.sourceforge.net/ +[4]:https://www.linuxfestnorthwest.org/conferences/lfnw18/program/proposals/32 +[5]:https://www.linuxfestnorthwest.org/conferences/lfnw18 From c74f1e3f09b8d59d1346e55143b75486534576de Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 28 Apr 2018 12:59:41 +0800 Subject: [PATCH 012/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Things=20to=20do?= =?UTF-8?q?=20After=20Installing=20Ubuntu=2018.04?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ngs to do After Installing Ubuntu 18.04.md | 294 ++++++++++++++++++ 1 file changed, 294 insertions(+) create mode 100644 sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md diff --git a/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md b/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md new file mode 100644 index 0000000000..4e69d04837 --- /dev/null +++ b/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md @@ -0,0 +1,294 @@ +Things to do After Installing Ubuntu 18.04 +====== +**Brief: This list of things to do after installing Ubuntu 18.04 helps you get started with Bionic Beaver for a smoother desktop experience.** + +[Ubuntu][1] 18.04 Bionic Beaver releases today. You are perhaps already aware of the [new features in Ubuntu 18.04 LTS][2] release. 
If not, here’s the video review of Ubuntu 18.04 LTS: + +[Subscribe to YouTube Channel for more Ubuntu Videos][3] + +If you opted to install Ubuntu 18.04, I have listed out a few recommended steps that you can follow to get started with it. + +### Things to do after installing Ubuntu 18.04 Bionic Beaver + +![Things to do after installing Ubuntu 18.04][4] + +I should mention that the list of things to do after installing Ubuntu 18.04 depends a lot on you and your interests and needs. If you are a programmer, you’ll focus on installing programming tools. If you are a graphic designer, you’ll focus on installing graphics tools. + +Still, there are a few things that should be applicable to most Ubuntu users. This list is composed of those things plus a few of my favorites. + +Also, this list is for the default [GNOME desktop][5]. If you are using some other flavor like [Kubuntu][6], Lubuntu etc., then the GNOME-specific stuff won’t be applicable to your system. + +You don’t have to follow each and every point on the list blindly. You should see if the recommended action suits your requirements or not. + +With that said, let’s get started with this list of things to do after installing Ubuntu 18.04. + +#### 1\. Update the system + +This is the first thing you should do after installing Ubuntu. Update the system without fail. It may sound strange because you just installed a fresh OS but still, you must check for updates. + +In my experience, if you don’t update the system right after installing Ubuntu, you might face issues while trying to install a new program. + +To update Ubuntu 18.04, press Super Key (Windows Key) to launch the Activity Overview and look for Software Updater. Run it to check for updates. + +![Software Updater in Ubuntu 17.10][7] + +**Alternatively**, you can use these famous commands in the terminal (use Ctrl+Alt+T): +``` +sudo apt update && sudo apt upgrade + +``` + +#### 2\.
Enable additional repositories for more software + +[Ubuntu has several repositories][8] from which it provides software for your system. These repositories are: + + * Main – Free and open-source software supported by the Ubuntu team + * Universe – Free and open-source software maintained by the community + * Restricted – Proprietary drivers for devices. + * Multiverse – Software restricted by copyright or legal issues. + * Canonical Partners – Software packaged by Ubuntu for their partners + + + +Enabling all these repositories will give you access to more software and proprietary drivers. + +Go to Activity Overview by pressing Super Key (Windows key), and search for Software & Updates: + +![Software and Updates in Ubuntu 17.10][9] + +Under the Ubuntu Software tab, make sure the Main, Universe, Restricted and Multiverse repositories are all checked. + +![Setting repositories in Ubuntu 18.04][10] + +Now move to the **Other Software** tab and check the **Canonical Partners** option. + +![Enable Canonical Partners repository in Ubuntu 17.10][11] + +You’ll have to enter your password in order to update the software sources. Once it completes, you’ll find more applications to install in the Software Center. + +#### 3\. Install media codecs + +In order to play media files like MP3, MPEG4, AVI, etc., you’ll need to install media codecs. Ubuntu has them in its repositories but doesn’t install them by default because of copyright issues in various countries. + +As an individual, you can install these media codecs easily using the Ubuntu Restricted Extras package. Click on the link below to install it from the Software Center. + +[Install Ubuntu Restricted Extras][12] + +Or alternatively, use the command below to install it: +``` +sudo apt install ubuntu-restricted-extras + +``` + +#### 4\. Install software from the Software Center + +Now that you have set up the repositories and installed the codecs, it is time to get software.
If you are absolutely new to Ubuntu, please follow this [guide to installing software in Ubuntu][13]. + +There are several ways to install software. The most convenient way is to use the Software Center that has thousands of software available in various categories. You can install them in a few clicks from the software center. + +![Software Center in Ubuntu 17.10 ][14] + +It depends on you what kind of software you would like to install. I’ll suggest some of my favorites here. + + * **VLC** – media player for videos + * **GIMP** – Photoshop alternative for Linux + * **Pinta** – Paint alternative in Linux + * **Calibre** – eBook management tool + * **Chromium** – Open Source web browser + * **Kazam** – Screen recorder tool + * [**Gdebi**][15] – Lightweight package installer for .deb packages + * **Spotify** – For streaming music + * **Skype** – For video messaging + * **Kdenlive** – [Video editor for Linux][16] + * **Atom** – [Code editor][17] for programming + + + +You may also refer to this list of [must-have Linux applications][18] for more software recommendations. + +#### 5\. Install software from the Web + +Though Ubuntu has thousands of applications in the software center, you may not find some of your favorite applications despite the fact that they support Linux. + +Many software vendors provide ready to install .deb packages. You can download these .deb files from their website and install it by double-clicking on it. + +[Google Chrome][19] is one such software that you can download from the web and install it. + +#### 6\. Opt out of data collection in Ubuntu 18.04 (optional) + +Ubuntu 18.04 collects some harmless statistics about your system hardware and your system installation preference. It also collects crash reports. + +You’ll be given the option to not send this data to Ubuntu servers when you log in to Ubuntu 18.04 for the first time. 
+ +![Opt out of data collection in Ubuntu 18.04][20] + +If you miss it that time, you can disable it by going to System Settings -> Privacy and then set the Problem Reporting to Manual. + +![Privacy settings in Ubuntu 18.04][21] + +#### 7\. Customize the GNOME desktop (Dock, themes, extensions and more) + +The GNOME desktop looks good in Ubuntu 18.04 but doesn’t mean you cannot change it. + +You can do a few visual changes from the System Settings. You can change the wallpaper of the desktop and the lock screen, you can change the position of the dock (launcher on the left side), change power settings, Bluetooth etc. In short, you can find many settings that you can change as per your need. + +![Ubuntu 17.10 System Settings][22] + +Changing themes and icons are the major way to change the looks of your system. I advise going through the list of [best GNOME themes][23] and [icons for Ubuntu][24]. Once you have found the theme and icon of your choice, you can use them with GNOME Tweaks tool. + +You can install GNOME Tweaks via the Software Center or you can use the command below to install it: +``` +sudo apt install gnome-tweak-tool + +``` + +Once it is installed, you can easily [install new themes and icons][25]. + +![Change theme is one of the must to do things after installing Ubuntu 17.10][26] + +You should also have a look at [use GNOME extensions][27] to further enhance the looks and capabilities of your system. I made this video about using GNOME extensions in 17.10 and you can follow the same for Ubuntu 18.04. + +If you are wondering which extension to use, do take a look at this list of [best GNOME extensions][28]. + +I also recommend reading this article on [GNOME customization in Ubuntu][29] so that you can know the GNOME desktop in detail. + +#### 8\. Prolong your battery and prevent overheating + +Let’s move on to [prevent overheating in Linux laptops][30]. 
TLP is a wonderful tool that controls CPU temperature and extends your laptops’ battery life in the long run. + +Make sure that you haven’t installed any other power saving application such as [Laptop Mode Tools][31]. You can install it using the command below in a terminal: +``` +sudo apt install tlp tlp-rdw + +``` + +Once installed, run the command below to start it: +``` +sudo tlp start + +``` + +#### 9\. Save your eyes with Nightlight + +Nightlight is my favorite feature in GNOME desktop. Keeping [your eyes safe at night][32] from the computer screen is very important. Reducing blue light helps reducing eye strain at night. + +![flux effect][33] + +GNOME provides a built-in Night Light option, which you can activate in the System Settings. + +Just go to System Settings-> Devices-> Displays and turn on the Night Light option. + +![Enabling night light is a must to do in Ubuntu 17.10][34] + +#### 9\. Disable automatic suspend for laptops + +Ubuntu 18.04 comes with a new automatic suspend feature for laptops. If the system is running on battery and is inactive for 20 minutes, it will go in suspend mode. + +I understand that the intention is to save battery life but it is an inconvenience as well. You can’t keep the power plugged in all the time because it’s not good for the battery life. And you may need the system to be running even when you are not using it. + +Thankfully, you can change this behavior. Go to System Settings -> Power. Under Suspend & Power Button section, either turn off the Automatic Suspend option or extend its time period. + +![Disable automatic suspend in Ubuntu 18.04][35] + +You can also change the screen dimming behavior in here. + +#### 10\. System cleaning + +I have written in detail about [how to clean up your Ubuntu system][36]. I recommend reading that article to know various ways to keep your system free of junk. 
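
Before you remove anything, it is a good idea to see where the disk space is actually going. Here is a quick sketch using standard tools (the paths are just common examples, not a fixed recipe):

```
# Check where disk space is going before cleaning anything
df -h /                             # overall usage of the root filesystem
du -sh ~/.cache 2>/dev/null || true # a common junk spot in your home directory
```

If you prefer an interactive view, the ncdu package in the Ubuntu repositories does the same job nicely.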
+ +Normally, you can use this little command to free up space from your system: +``` +sudo apt autoremove + +``` + +It’s a good idea to run this command every once a while. If you don’t like the command line, you can use a GUI tool like [Stacer][37] or [Bleach Bit][38]. + +#### 11\. Going back to Unity or Vanilla GNOME (not recommended) + +If you have been using Unity or GNOME in the past, you may not like the new customized GNOME desktop in Ubuntu 18.04. Ubuntu has customized GNOME so that it resembles Unity but at the end of the day, it is neither completely Unity nor completely GNOME. + +So if you are a hardcore Unity or GNOMEfan, you may want to use your favorite desktop in its ‘real’ form. I wouldn’t recommend but if you insist here are some tutorials for you: + +#### 12\. Can’t log in to Ubuntu 18.04 after incorrect password? Here’s a workaround + +I noticed a [little bug in Ubuntu 18.04][39] while trying to change the desktop session to Ubuntu Community theme. It seems if you try to change the sessions at the login screen, it rejects your password first and at the second attempt, the login gets stuck. You can wait for 5-10 minutes to get it back or force power it off. + +The workaround here is that after it displays the incorrect password message, click Cancel, then click your name, then enter your password again. + +#### 13\. Experience the Community theme (optional) + +Ubuntu 18.04 was supposed to have a dashing new theme developed by the community. The theme could not be completed so it could not become the default look of Bionic Beaver release. I am guessing that it will be the default theme in Ubuntu 18.10. + +![Ubuntu 18.04 Communitheme][40] + +You can try out the aesthetic theme even today. [Installing Ubuntu Community Theme][41] is very easy. Just look for it in the software center, install it, restart your system and then at the login choose the Communitheme session. + +#### 14\. 
Get Windows 10 in Virtual Box (if you need it) + +In a situation where you must use Windows for some reasons, you can [install Windows in virtual box inside Linux][42]. It will run as a regular Ubuntu application. + +It’s not the best way but it still gives you an option. You can also [use WINE to run Windows software on Linux][43]. In both cases, I suggest trying the alternative native Linux application first before jumping to virtual machine or WINE. + +#### What do you do after installing Ubuntu? + +Those were my suggestions for getting started with Ubuntu. There are many more tutorials that you can find under [Ubuntu 18.04][44] tag. You may go through them as well to see if there is something useful for you. + +Enough from myside. Your turn now. What are the items on your list of **things to do after installing Ubuntu 18.04**? The comment section is all yours. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:https://www.ubuntu.com/ +[2]:https://itsfoss.com/ubuntu-18-04-release-features/ +[3]:https://www.youtube.com/c/itsfoss?sub_confirmation=1 +[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/things-to-after-installing-ubuntu-18-04-featured-800x450.jpeg +[5]:https://www.gnome.org/ +[6]:https://kubuntu.org/ +[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-update-ubuntu-17-10.jpg +[8]:https://help.ubuntu.com/community/Repositories/Ubuntu +[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-updates-ubuntu-17-10.jpg 
+[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/repositories-ubuntu-18.png +[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-repository-ubuntu-17-10.jpeg +[12]:apt://ubuntu-restricted-extras +[13]:https://itsfoss.com/remove-install-software-ubuntu/ +[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/Ubuntu-software-center-17-10-800x551.jpeg +[15]:https://itsfoss.com/gdebi-default-ubuntu-software-center/ +[16]:https://itsfoss.com/best-video-editing-software-linux/ +[17]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ +[18]:https://itsfoss.com/essential-linux-applications/ +[19]:https://www.google.com/chrome/ +[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/opt-out-of-data-collection-ubuntu-18-800x492.jpeg +[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/privacy-ubuntu-18-04-800x417.png +[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/System-Settings-Ubuntu-17-10-800x573.jpeg +[23]:https://itsfoss.com/best-gtk-themes/ +[24]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/ +[25]:https://itsfoss.com/install-themes-ubuntu/ +[26]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/GNOME-Tweak-Tool-Ubuntu-17-10.jpeg +[27]:https://itsfoss.com/gnome-shell-extensions/ +[28]:https://itsfoss.com/best-gnome-extensions/ +[29]:https://itsfoss.com/gnome-tricks-ubuntu/ +[30]:https://itsfoss.com/reduce-overheating-laptops-linux/ +[31]:https://wiki.archlinux.org/index.php/Laptop_Mode_Tools +[32]:https://itsfoss.com/night-shift-flux-ubuntu-linux/ +[33]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/03/flux-eyes-strain.jpg +[34]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/Enable-Night-Light-Feature-Ubuntu-17-10-800x396.jpeg 
+[35]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/disable-automatic-suspend-ubuntu-18-800x586.jpeg +[36]:https://itsfoss.com/free-up-space-ubuntu-linux/ +[37]:https://itsfoss.com/optimize-ubuntu-stacer/ +[38]:https://itsfoss.com/bleachbit-2-release/ +[39]:https://gitlab.gnome.org/GNOME/gnome-shell/issues/227 +[40]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/ubunt-18-theme.jpeg +[41]:https://itsfoss.com/ubuntu-community-theme/ +[42]:https://itsfoss.com/install-windows-10-virtualbox-linux/ +[43]:https://itsfoss.com/use-windows-applications-linux/ +[44]:https://itsfoss.com/tag/ubuntu-18-04/ From ebd2fdfc580884f4b802e39406b83772d2ba3318 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 28 Apr 2018 13:02:09 +0800 Subject: [PATCH 013/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Using=20machine?= =?UTF-8?q?=20learning=20to=20color=20cartoons?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...sing machine learning to color cartoons.md | 125 ++++++++++++++++++ 1 file changed, 125 insertions(+) create mode 100644 sources/tech/20180425 Using machine learning to color cartoons.md diff --git a/sources/tech/20180425 Using machine learning to color cartoons.md b/sources/tech/20180425 Using machine learning to color cartoons.md new file mode 100644 index 0000000000..32efd931e6 --- /dev/null +++ b/sources/tech/20180425 Using machine learning to color cartoons.md @@ -0,0 +1,125 @@ +Using machine learning to color cartoons +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/art-yearbook-paint-draw-create-creative.png?itok=t9fOdlyJ) +A big problem with supervised machine learning is the need for huge amounts of labeled data. It's a big problem especially if you don't have the labeled data—and even in a world awash with big data, most of us don't. 
+
+Although a few companies have access to enormous quantities of certain kinds of labeled data, for most organizations and many applications, creating sufficient quantities of the right kind of labeled data is cost prohibitive or impossible. Sometimes the domain is one in which there just isn't much data (for example, when diagnosing a rare disease or determining whether a signature matches a few known exemplars). Other times the volume of data needed multiplied by the cost of human labeling by [Amazon Turkers][1] or summer interns is just too high. Paying to label every frame of a movie-length video adds up fast, even at a penny a frame.
+
+### The big problem of big data requirements
+
+The specific problem our group set out to solve was: Can we train a model to automate applying a simple color scheme to a black and white character without hand-drawing hundreds or thousands of examples as training data?
+
+In this experiment (which we called DragonPaint), we confronted the problem of deep learning's enormous labeled-data requirements using:
+
+  * A rule-based strategy for extreme augmentation of small datasets
+  * A borrowed TensorFlow image-to-image translation model, Pix2Pix, to automate cartoon coloring with very limited training data
+
+I had seen [Pix2Pix][2], a machine learning image-to-image translation model described in a paper ("Image-to-Image Translation with Conditional Adversarial Networks," by Isola, et al.), that colorizes landscapes after training on AB pairs where A is the grayscale version of landscape B. My problem seemed similar. The only problem was training data.
+
+I needed the training data to be very limited because I didn't want to draw and color a lifetime supply of cartoon characters just to train the model. The tens of thousands (or hundreds of thousands) of examples often required by deep-learning models were out of the question.
+
+Based on Pix2Pix's examples, we would need at least 400 to 1,000 sketch/colored pairs. 
How many was I willing to draw? Maybe 30. I drew a few dozen cartoon flowers and dragons and asked whether I could somehow turn this into a training set.
+
+### The 80% solution: color by component
+
+
+![Characters colored by component rules][4]
+
+Characters colored by component rules
+
+When faced with a shortage of training data, the first question to ask is whether there is a good non-machine-learning-based approach to our problem. If there's not a complete solution, is there a partial solution, and would a partial solution do us any good? Do we even need machine learning to color flowers and dragons? Or can we specify geometric rules for coloring?
+
+
+![How to color by components][6]
+
+How to color by components
+
+There is a non-machine-learning approach to solving my problem. I could tell a kid how I want my drawings colored: Make the flower's center orange and the petals yellow. Make the dragon's body orange and the spikes yellow.
+
+At first, that doesn't seem helpful because our computer doesn't know what a center or a petal or a body or a spike is. But it turns out we can define the flower or dragon parts in terms of connected components and get a geometric solution for coloring about 80% of our drawings. Although 80% isn't enough, we can bootstrap from that partial-rule-based solution to 100% using strategic rule-breaking transformations, augmentations, and machine learning.
+
+Connected components are what get colored when you use Windows Paint (or a similar application). For example, when coloring a binary black-and-white image, if you click on a white pixel, the white pixels that can be reached without crossing over black are colored the new color. In a "rule-conforming" cartoon dragon or flower sketch, the biggest white component is the background. The next biggest is the body (plus the arms and legs) or the flower's center. The rest are spikes or petals, except for the dragon's eye, which can be distinguished by its distance from the background. 
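The flood-fill behavior described above is simple to sketch in code. Here is a toy Python version on a binary grid (my illustration of the idea, not DragonPaint's actual implementation), using breadth-first search over 4-connected neighbors:

```python
from collections import deque

def flood_fill(grid, start, new_color):
    """Recolor the connected component containing `start`, stopping at any
    cell whose value differs from the start cell's (4-connectivity)."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    old = grid[r0][c0]
    if old == new_color:
        return grid
    grid[r0][c0] = new_color
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == old:
                grid[nr][nc] = new_color
                queue.append((nr, nc))
    return grid

# 1 = black outline, 0 = white; "clicking" inside the enclosed region at (1, 1)
sketch = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
flood_fill(sketch, (1, 1), 2)  # the enclosed white component becomes color 2
```

Label every white component this way (in practice one might use a library routine such as `scipy.ndimage.label`), sort the components by size, and the 80% rule falls out: the biggest is the background, the next biggest is the body or flower center, and so on.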
+ +### Using strategic rule breaking and Pix2Pix to get to 100% + +Some of my sketches aren't rule-conforming. A sloppily drawn line might leave a gap. A back limb will get colored like a spike. A small, centered daisy will switch a petal and the center's coloring rules. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint4.png?itok=MOiaVxMS) + +For the 20% we couldn't color with the geometric rules, we needed something else. We turned to Pix2Pix, which requires a minimum training set of 400 to 1,000 sketch/colored pairs (i.e., the smallest training sets in the [Pix2Pix paper][7]) including rule-breaking pairs. + +So, for each rule-breaking example, we finished the coloring by hand (e.g., back limbs) or took a few rule-abiding sketch/colored pairs and broke the rule. We erased a bit of a line in A or we transformed a fat, centered flower pair A and B with the same function (f) to create a new pair f(A) and f(B)—a small, centered flower. That got us to a training set. + +### Extreme augmentations with gaussian filters and homeomorphisms + +It's common in computer vision to augment an image training set with geometric transformations, such as rotation, translation, and zoom. + +But what if we need to turn sunflowers into daisies or make a dragon's nose bulbous or pointy? + +Or what if we just need an enormous increase in data volume without overfitting? Here we need a dataset 10 to 30 times larger than what we started with. + +![Sunflower turned into a daisy with r -> r cubed][9] + +Sunflower turned into a daisy with r -> r cubed + +![Gaussian filter augmentations][11] + +Gaussian filter augmentations + +Certain homeomorphisms of the unit disk make good daisies (e.g., r -> r cubed) and Gaussian filters change a dragon's nose. 
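As a rough sketch of the r -> r cubed trick (my reconstruction of the idea, not the authors' code), the warp is just a radial coordinate map; applying it to an image means sampling each output pixel's color from the mapped location inside the unit disk:

```python
import math

def unit_disk_warp(x, y, exponent=3):
    """Radial homeomorphism of the unit disk: r -> r**exponent, angle unchanged.
    For exponent > 1 and r < 1, points are pulled toward the center (a wide
    sunflower face becomes a small daisy center); the boundary stays fixed."""
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)
    scale = r ** (exponent - 1)  # equals (r**exponent) / r
    return (x * scale, y * scale)

print(unit_disk_warp(0.5, 0.0))  # a point halfway out moves inward, to radius 0.125
print(unit_disk_warp(1.0, 0.0))  # the rim stays put: (1.0, 0.0)
```

Because the map is a homeomorphism it is invertible (use exponent 1/3 to go back), so it deforms a sketch/colored pair consistently without tearing the drawing, which is exactly what an augmentation needs.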
Both were extremely useful for creating augmentations for our dataset and produced the augmentation volume we needed, but they also started to change the style of the drawings in ways that an [affine transformation][12] could not.
+
+This inspired questions beyond how to automate a simple coloring scheme: What defines an artist's style, either to an outside viewer or the artist? When does an artist adopt as their own a drawing they could not have made without the algorithm? When does the subject matter become unrecognizable? What's the difference between a tool, an assistant, and a collaborator?
+
+### How far can we go?
+
+How little can we draw for input and how much variation and complexity can we create while staying within a subject and style recognizable as the artist's? What would we need to do to make an infinite parade of giraffes or dragons or flowers? And if we had one, what could we do with it?
+
+Those are questions we'll continue to explore in future work.
+
+But for now, the rules, augmentations, and Pix2Pix model worked. We can color flowers really well, and the dragons aren't bad.
+
+
+![Results: flowers colored by model trained on flowers][14]
+
+Results: Flowers colored by model trained on flowers
+
+
+![Results: dragons colored by model trained on dragons][16]
+
+Results: Dragons colored by model trained on dragons
+
+To learn more, attend Gretchen Greene's talk, [DragonPaint – bootstrapping small data to color cartoons][17], at [PyCon Cleveland 2018][18].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/dragonpaint-bootstrapping
+
+作者:[K. 
Gretchen Greene][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/kggreene +[1]:https://www.mturk.com/ +[2]:https://phillipi.github.io/pix2pix/ +[3]:/file/393246 +[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint2.png?itok=qw_q72A5 (Characters colored by component rules) +[5]:/file/393251 +[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint3.png?itok=JK3TPcvp (How to color by components) +[7]:https://arxiv.org/abs/1611.07004 +[8]:/file/393261 +[9]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint5.png?itok=GvipU8l8 (Sunflower turned into a daisy with r -> r cubed) +[10]:/file/393266 +[11]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint6.png?itok=r14I2Fyz (Gaussian filter augmentations) +[12]:https://en.wikipedia.org/wiki/Affine_transformation +[13]:/file/393271 +[14]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint7.png?itok=xKWvyi_T (Results: flowers colored by model trained on flowers) +[15]:/file/393276 +[16]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint8.png?itok=fSM5ovBT (Results: dragons trained on model trained on dragons) +[17]:https://us.pycon.org/2018/schedule/presentation/113/ +[18]:https://us.pycon.org/2018/ From 8c581ea40fd7cf6197a45999c526f6b9583786d4 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 28 Apr 2018 13:03:27 +0800 Subject: [PATCH 014/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Configuring=20loc?= =?UTF-8?q?al=20storage=20in=20Linux=20with=20Stratis?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit --- ...ing local storage in Linux with Stratis.md | 79 +++++++++++++++++++ 1 file changed, 79 insertions(+) create mode 100644 sources/tech/20180425 Configuring local storage in Linux with Stratis.md diff --git a/sources/tech/20180425 Configuring local storage in Linux with Stratis.md b/sources/tech/20180425 Configuring local storage in Linux with Stratis.md new file mode 100644 index 0000000000..fc9471c88f --- /dev/null +++ b/sources/tech/20180425 Configuring local storage in Linux with Stratis.md @@ -0,0 +1,79 @@ +Configuring local storage in Linux with Stratis +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl) +Configuring local storage is something desktop Linux users do very infrequently—maybe only once, during installation. Linux storage tech moves slowly, and many storage tools used 20 years ago are still used regularly today. But some things have improved since then. Why aren't people taking advantage of these new capabilities? + +This article is about Stratis, a new project that aims to bring storage advances to all Linux users, from the simple laptop single SSD to a hundred-disk array. Linux has the capabilities, but its lack of an easy-to-use solution has hindered widespread adoption. Stratis's goal is to make Linux's advanced storage features accessible. + +### Simple, reliable access to advanced storage features + +Stratis aims to make three things easier: initial configuration of storage; making later changes; and using advanced storage features like snapshots, thin provisioning, and even tiering. + +### Stratis: a volume-managing filesystem + +Stratis is a volume-managing filesystem (VMF) like [ZFS][1] and [Btrfs][2] . It starts with the central idea of a storage "pool," an idea common to VMFs and also standalone volume managers such as [LVM][3] . 
This pool is created from one or more local disks (or partitions), and volumes are created from the pool. Their exact layout is not specified by the user, unlike traditional disk partitioning using [fdisk][4] or [GParted][5].
+
+VMFs take it a step further and integrate the filesystem layer. The user no longer picks a filesystem to put on the volume. The filesystem and volume are merged into a single thing—a conceptual tree of files (which ZFS calls a dataset, Btrfs a subvolume, and Stratis a filesystem) whose data resides in the pool but that has no size limit except for the pool's total size.
+
+Another way of looking at this: Just as a filesystem abstracts the actual location of storage blocks that make up a single file within the filesystem, a VMF abstracts the actual storage blocks of a filesystem within the pool.
+
+The pool enables other useful features. Some of these, like filesystem snapshots, occur naturally from the typical implementation of a VMF, where multiple filesystems can share physical data blocks within the pool. Others, like redundancy, tiering, and integrity, make sense because the pool is a central place to manage these features for all the filesystems on the system.
+
+The result is that a VMF is simpler to set up and manage and easier to enable for advanced storage features than independent volume manager and filesystem layers.
+
+### What makes Stratis different from ZFS or Btrfs?
+
+Stratis is a new project, which gives it the benefit of learning from previous projects. What Stratis learned from ZFS, Btrfs, and LVM will be covered in depth in [Part 2][6], but to summarize, the differences in Stratis come from seeing what worked and what didn't work for others, from changes in how people use and automate computers, and changes in the underlying hardware.
+
+First, Stratis focuses on being easy and safe to use. This is important for the individual user, who may go for long stretches of time between interactions with Stratis. 
If these interactions are unfriendly, especially if there's a possibility of losing data, most people will stick with the basics instead of using new features.
+
+Second, APIs and DevOps-style automation are much more important today than they were even a few years ago. Stratis supports automation by providing a first-class API, so people and software tools alike can use Stratis directly.
+
+Third, SSDs have greatly expanded in capacity as well as market share. Earlier filesystems went to great lengths to optimize for rotational media's slow access times, but flash-based media makes these efforts less important. Even if a pool's data is too big to use SSDs economically for the entire pool, an SSD caching tier is still an option and can give excellent results. Assuming good performance because of SSDs lets Stratis focus its pool design on flexibility and reliability.
+
+Finally, Stratis has a very different implementation model from ZFS and Btrfs (I'll discuss this further in [Part 2][6]). This means some things are easier for Stratis, while other things are harder. It also increases Stratis's pace of development.
+
+### Learn more
+
+To learn more about Stratis, check out [Part 2][6] of this series. You'll also find a detailed [design document][7] on the [Stratis website][8].
+
+### Get involved
+
+To develop, test, or offer feedback on Stratis, subscribe to our [mailing list][9].
+
+Development is on [GitHub][10] for both the [daemon][11] (written in [Rust][12]) and the [command-line tool][13] (written in [Python][14]).
+
+Join us on the [Freenode][15] IRC network on channel #stratis-storage.
+
+Andy Grover will be speaking at LinuxFest Northwest this year. See [program highlights][16] or [register to attend][17]. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/4/stratis-easy-use-local-storage-management-linux + +作者:[Andy Grover][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/agrover +[1]:https://en.wikipedia.org/wiki/ZFS +[2]:https://en.wikipedia.org/wiki/Btrfs +[3]:https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux) +[4]:https://en.wikipedia.org/wiki/Fdisk +[5]:https://gparted.org/ +[6]:https://opensource.com/article/18/4/stratis-lessons-learned +[7]:https://stratis-storage.github.io/StratisSoftwareDesign.pdf +[8]:https://stratis-storage.github.io/ +[9]:https://lists.fedoraproject.org/admin/lists/stratis-devel.lists.fedorahosted.org/ +[10]:https://github.com/stratis-storage/ +[11]:https://github.com/stratis-storage/stratisd +[12]:https://www.rust-lang.org/ +[13]:https://github.com/stratis-storage/stratis-cli +[14]:https://www.python.org/ +[15]:https://freenode.net/ +[16]:https://www.linuxfestnorthwest.org/conferences/lfnw18 +[17]:https://www.linuxfestnorthwest.org/conferences/lfnw18/register/new From 81abc03da5a09ddcaaf4ae85b17692956e32bc79 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 28 Apr 2018 13:04:47 +0800 Subject: [PATCH 015/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20Stratis=20?= =?UTF-8?q?learned=20from=20ZFS,=20Btrfs,=20and=20Linux=20Volume=20Manager?= =?UTF-8?q?=20|=20Opensource.com?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...d Linux Volume Manager - Opensource.com.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md diff --git a/sources/tech/20180426 What Stratis learned from ZFS, 
Btrfs, and Linux Volume Manager - Opensource.com.md b/sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md
new file mode 100644
index 0000000000..a32792e532
--- /dev/null
+++ b/sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md
@@ -0,0 +1,66 @@
+What Stratis learned from ZFS, Btrfs, and Linux Volume Manager
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows-building-containers.png?itok=0XvZLZ8k)
+
+As discussed in [Part 1][1] of this series, Stratis is a volume-managing filesystem (VMF) with functionality similar to [ZFS][2] and [Btrfs][3]. In designing Stratis, we studied the choices that developers of existing solutions made.
+
+### Why not adopt an existing solution?
+
+The reasons vary. First, let's consider [ZFS][2]. Originally developed by Sun Microsystems for Solaris (now owned by Oracle), ZFS has been ported to Linux. However, its [CDDL][4]-licensed code cannot be merged into the [GPL][5]-licensed Linux source tree. Whether CDDL and GPLv2 are truly incompatible is a subject for debate, but the uncertainty is enough to make enterprise Linux vendors unwilling to adopt and support it.
+
+[Btrfs][3] is also well-established and has no licensing issues. For years it was the "Chosen One" for many users, but it just hasn't yet gotten to where it needs to be in terms of stability and features.
+
+So, fuelled by a desire to improve the status quo and frustration with existing options, Stratis was conceived.
+
+### How Stratis is different
+
+One thing that ZFS and Btrfs have clearly shown is that writing a VMF as an in-kernel filesystem takes a tremendous amount of work and time to work out the bugs and stabilize. It's essential to get right when it comes to precious data. 
Starting from scratch and taking the same approach with Stratis would probably take a decade as well, which was not acceptable. + +Instead, Stratis chose to use some of the Linux kernel's other existing capabilities: The [device mapper][6] subsystem, which is most notably used by LVM to provide RAID, thin-provisioning, and other features on top of block devices; and the well-tested and high-performance [XFS][7] filesystem. Stratis builds its pool using layers of existing technology, with the goal of managing them to appear as a seamless whole to the user. + +### What Stratis learned from ZFS + +For many users, ZFS set the expectations for what a next-generation filesystem should be. Reading comments online from people talking about ZFS helped set Stratis's initial development goals. ZFS's design also implicitly highlighted things to avoid. For example, ZFS requires an "import" step when attaching a pool created on another system. There are a few reasons for this, and each reason was likely an issue that Stratis had to solve, either by taking the same approach or a different one. + +One thing we didn't like about ZFS was that it has some restrictions on adding new hard drives or replacing existing drives with bigger ones, especially if the pool is configured for redundancy. Of course, there is a reason for this, but we thought it was an area we could improve. + +Finally, using ZFS's tools at the command line, once learned, is a good experience. We wanted to have that same feeling with Stratis's command-line tool, and we also liked the tool's tendency to use positional parameters and limit the amount of typing required for each command. + +### What Stratis learned from Btrfs + +One thing we liked about Btrfs was the single command-line tool, with positional subcommands. Btrfs also treats redundancy (Btrfs profiles) as a property of the pool, which seems easier to understand than ZFS's approach and allows drives to be added and even removed. 
+ +Finally, looking at the features that both ZFS and Btrfs offer, such as snapshot implementations and send/receive support, helped determine which features Stratis should include. + +### What Stratis learned from LVM + +From the early design stages of Stratis, we studied LVM extensively. LVM is currently the most significant user of the Linux device mapper (DM) subsystem—in fact, DM is maintained by the LVM core team. We examined it both from the possibility of actually using LVM as a layer of Stratis and an example of using DM, which Stratis could do directly with LVM as a peer. We looked at LVM's on-disk metadata format (along with ZFS's and XFS's) for inspiration in defining Stratis's on-disk metadata format. + +Among the listed projects, LVM shares the most in common with Stratis internally, because they both use DM. However, from a usage standpoint, LVM is much more transparent about its inner workings. This gives expert users a great deal of control and options for precisely configuring the volume group (pool) layout in a way that Stratis doesn't. + +### A diversity of solutions + +One great thing about working on free software and open source is that there are no irreplaceable components. Every part—even the kernel—is open for view, modification, and even replacement if the current software isn't meeting users' needs. A new project doesn't need to end an existing one if there is enough support for both to be sustained in parallel. + +Stratis is an attempt to better meet some users' needs for local storage management—those looking for a no-hassle, easy-to-use, powerful solution. This means making design choices that might not be right for all users. Alternatives make tough choices possible since users have other options. All users ultimately benefit from their ability to use whichever tool works best for them. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/4/stratis-lessons-learned + +作者:[Andy Grover][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/agrover +[1]:https://opensource.com/article/18/4/stratis-easy-use-local-storage-management-linux +[2]:https://en.wikipedia.org/wiki/ZFS +[3]:https://en.wikipedia.org/wiki/Btrfs +[4]:https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License +[5]:https://en.wikipedia.org/wiki/GNU_General_Public_License +[6]:https://en.wikipedia.org/wiki/Device_mapper +[7]:https://en.wikipedia.org/wiki/XFS From c4d90eb198cb1ca63cbdf355321d056910cfa7b9 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 28 Apr 2018 13:05:40 +0800 Subject: [PATCH 016/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20use=20?= =?UTF-8?q?FIND=20in=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20180427 How to use FIND in Linux.md | 90 +++++++++++++++++++ 1 file changed, 90 insertions(+) create mode 100644 sources/tech/20180427 How to use FIND in Linux.md diff --git a/sources/tech/20180427 How to use FIND in Linux.md b/sources/tech/20180427 How to use FIND in Linux.md new file mode 100644 index 0000000000..829514dd3c --- /dev/null +++ b/sources/tech/20180427 How to use FIND in Linux.md @@ -0,0 +1,90 @@ +How to use FIND in Linux +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux31x_cc.png?itok=Pvim4U-B) + +In [a recent Opensource.com article][1], Lewis Cowles introduced the `find` command. + +`find` is one of the more powerful and flexible command-line programs in the daily toolbox, so it's worth spending a little more time on it. 
+
+At a minimum, `find` takes a path to find things. For example:
+```
+find /
+
+```
+
+will find (and print) every file on the system. And since everything is a file, you will get a lot of output to sort through. This probably doesn't help you find what you're looking for. You can change the path argument to narrow things down a bit, but it's still not really any more helpful than using the `ls` command. So you need to think about what you're trying to locate.
+
+Perhaps you want to find all the JPEG files in your home directory. The `-name` argument allows you to restrict your results to files that match the given pattern.
+```
+find ~ -name '*jpg'
+
+```
+
+But wait! What if some of them have an uppercase extension? `-iname` is like `-name`, but it is case-insensitive.
+```
+find ~ -iname '*jpg'
+
+```
+
+Great! But the 8.3 name scheme is so 1985. Some of the pictures might have a .jpeg extension. Fortunately, we can combine patterns with an "or," represented by `-o`.
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \)
+
+```
+
+We're getting closer. But what if you have some directories that end in jpg? (Why you named a directory `bucketofjpg` instead of `pictures` is beyond me.) We can modify our command with the `-type` argument to look only for files.
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f
+
+```
+
+Or maybe you'd like to find those oddly named directories so you can rename them later:
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d
+
+```
+
+It turns out you've been taking a lot of pictures lately, so let's narrow this down to files that have changed in the last week.
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7
+
+```
+
+You can do time filters based on file status change time (`ctime`), modification time (`mtime`), or access time (`atime`). These are in days, so if you want finer-grained control, you can express it in minutes instead (`cmin`, `mmin`, and `amin`, respectively). 
Unless you know exactly the time you want, you'll probably prefix the number with `+` (more than) or `-` (less than).
+
+But maybe you don't care about your pictures. Maybe you're running out of disk space, so you want to find all the gigantic (let's define that as "greater than 1 gigabyte") files in the `log` directory:
+```
+find /var/log -size +1G
+
+```
+
+Or maybe you want to find all the files owned by bcotton in `/data`:
+```
+find /data -user bcotton
+
+```
+
+You can also look for files based on permissions. Perhaps you want to find all the world-readable files in your home directory to make sure you're not oversharing.
+```
+find ~ -perm -o=r
+
+```
+
+This post only scratches the surface of what `find` can do. Combining tests with Boolean logic can give you incredible flexibility to find exactly the files you're looking for. And with arguments like `-exec` or `-delete`, you can have `find` take action on what it... finds. Have any favorite `find` expressions? Share them in the comments! 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/4/how-use-find-linux + +作者:[Ben Cotton][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/bcotton +[1]:https://opensource.com/article/18/4/how-find-files-linux From b69cf7b047904ca5b2af531a0fed3bb3d63d3cd0 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 28 Apr 2018 13:06:51 +0800 Subject: [PATCH 017/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Things=20You=20Sh?= =?UTF-8?q?ould=20Know=20About=20Ubuntu=2018.04?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ings You Should Know About Ubuntu 18.04.md | 154 ++++++++++++++++++ 1 file changed, 154 insertions(+) create mode 100644 sources/tech/20180424 Things You Should Know About Ubuntu 18.04.md diff --git a/sources/tech/20180424 Things You Should Know About Ubuntu 18.04.md b/sources/tech/20180424 Things You Should Know About Ubuntu 18.04.md new file mode 100644 index 0000000000..5061c1f6e9 --- /dev/null +++ b/sources/tech/20180424 Things You Should Know About Ubuntu 18.04.md @@ -0,0 +1,154 @@ +Things You Should Know About Ubuntu 18.04 +====== +[Ubuntu 18.04 release][1] is just around the corner. I can see lots of questions from Ubuntu users in various Facebook groups and forums. I also organized Q&A sessions on Facebook and Instagram to know what Ubuntu users are wondering about Ubuntu 18.04. + +I have tried to answer those frequently asked questions about Ubuntu 18.04 here. I hope it helps clear your doubts if you had any. And if you still have questions, feel free to ask in the comment section below. 
+ +### What to expect in Ubuntu 18.04 + +![Ubuntu 18.04 Frequently Asked Questions][2] + +Just for clarification, some of the answers here are influenced by my personal opinion. If you are an experienced/aware Ubuntu user, some of the questions may sound silly to you. If that’s case, just ignore those questions. + +#### Can I install Unity on Ubuntu 18.04? + +Yes, you can. + +Canonical knows that there are people who simply loved Unity. This is why it has made Unity 7 available in the Universe repository. This is a community maintained edition and Ubuntu doesn’t develop it directly. + +I advise using the default GNOME first and if you really cannot tolerate it, then go on [installing Unity on Ubuntu 18.04][3]. + +#### What GNOME version does it have? + +At the time of its release, Ubuntu 18.04 has GNOME 3.28. + +#### Can I install vanilla GNOME on it? + +Yes, you can. + +Existing GNOME users might not like the Unity resembling, customized GNOME desktop in Ubuntu 18.04. There are some packages available in Ubuntu’s main and universe repositories that allows you to [install vanilla GNOME on Ubuntu 18.04][4]. + +#### Has the memory leak in GNOME fixed? + +Yes. The [infamous memory leak in GNOME 3.28][5] has been fixed and [Ubuntu is already testing the fix][6]. + +Just to clarify, the memory leak was not caused by Ubuntu. It was/is impacting all Linux distributions that use GNOME 3.28. A new patch was released under GNOME 3.28.1 to fix this memory leak. + +#### How long will Ubuntu 18.04 be supported? + +It is a long-term support (LTS) release and like any LTS release, it will be supported for five years. Which means that Ubuntu 18.04 will get security and maintenance updates until April 2023. This is also true for all participating flavors except Ubuntu Studio. + +#### When will Ubuntu 18.04 be released? + +Ubuntu 18.04 LTS has been released on 26th April. 
All the participating flavors like Kubuntu, Lubuntu, Xubuntu, Budgie, MATE etc will have their 18.04 release available on the same day. + +It seems [Ubuntu Studio will not have 18.04 as LTS release][7]. + +#### Is it possible to upgrade to Ubuntu 18.04 from 16.04/17.10? Can I upgrade from Ubuntu 16.04 with Unity to Ubuntu 18.04 with GNOME? + +Yes, absolutely. Once Ubuntu 18.04 LTS is released, you can easily upgrade to the new version. + +If you are using Ubuntu 17.10, make sure that in Software & Updates -> Updates, the ‘Notify me of a new Ubuntu version’ is set to ‘For any new version’. + +![Get notified for a new version in Ubuntu][8] + +If you are using Ubuntu 16.04, make sure that in Software & Updates -> Updates, the ‘Notify me of a new Ubuntu version’ is set to ‘For long-term support versions’. + +![Ubuntu 18.04 upgrade from Ubuntu 16.04][9] + +You should get system notification about the availability of the new versions. After that, upgrading to Ubuntu 18.04 is a matter of clicks. + +Even if Ubuntu 16.04 was Unity, you can still [upgrade to Ubuntu 18.04][10] GNOME. + +#### What does upgrading to Ubuntu 18.04 mean? Will I lose my data? + +If you are using Ubuntu 17.10 or Ubuntu 16.04, sooner or later, Ubuntu will notify you that Ubuntu 18.04 is available. If you have a good internet connection that can download 1.5 Gb of data, you can upgrade to Ubuntu 18.04 in a few clicks and in under 30 minutes. + +You don’t need to create a new USB and do a fresh install. Once the upgrade procedure finishes, you’ll have the new Ubuntu version available. + +Normally, your data, documents etc are safe in the upgrade procedure. However, keeping a backup of your important documents is always a good idea. + +#### When will I get to upgrade to Ubuntu 18.04? + +If you are using Ubuntu 17.10 and have correct update settings in place (as mentioned in the previous section), you should be notified for upgrading to Ubuntu 18.04 within a few days of Ubuntu 18.04 release. 
Since Ubuntu servers encounter heavy load on the release day, not everyone gets the upgrade the same day. + +For Ubuntu 16.04 users, it may take some weeks before they are officially notified of the availability of Ubuntu 18.04. Usually, this will happen after the first point release Ubuntu 18.04.1. This point release fixes the newly discovered bugs in 18.04. + +#### If I upgrade to Ubuntu 18.04 can I downgrade to 17.10 or 16.04? + +No, you cannot. While upgrading to the newer version is easy, there is no option to downgrade. If you want to go back to Ubuntu 16.04, you’ll have to do a fresh install. + +#### Can I use Ubuntu 18.04 on 32-bit systems? + +Yes and no. + +If you are already using the 32-bit version of Ubuntu 16.04 or 17.10, you may still get to upgrade to Ubuntu 18.04. However, you won’t find Ubuntu 18.04 bit ISO in 32-bit format anymore. In other words, you cannot do a fresh install of the 32-bit version of Ubuntu 18.04 GNOME. + +The good news here is that other official flavors like Ubuntu MATE, Lubuntu etc still have the 32-bit ISO of their new versions. + +In any case, if you have a 32-bit system, chances are that your system is weak on hardware. You’ll be better off using lightweight [Ubuntu MATE][11] or [Lubuntu][12] on such system. + +#### Where can I download Ubuntu 18.04? + +Once 18.04 is released, you can get the ISO image of Ubuntu 18.04 from its website. You have both direct download and torrent options. Other official flavors will be available on their official websites. + +#### Should I do a fresh install of Ubuntu 18.04 or upgrade to it from 16.04/17.10? + +If you have a choice, make a backup of your data and do a fresh install of Ubuntu 18.04. + +Upgrading to 18.04 from an existing version is a convenient option. However, in my opinion, it still keeps some traces/packages of the older version. A fresh install is always cleaner. + +For a fresh install, should I install Ubuntu 16.04 or Ubuntu 18.04? 
+ +If you are going to install Ubuntu on a system, go for Ubuntu 18.04 instead of 16.04. + +Both of them are long-term support release and will be supported for a long time. Ubuntu 16.04 will get maintenance and security updates until 2021 and 18.04 until 2023. + +However, I would suggest that you use Ubuntu 18.04. Any LTS release gets [hardware updates for a limited time][13] (two and a half years I think). After that, it only gets maintenance updates. If you have newer hardware, you’ll get better support in 18.04. + +Also, many application developers will start focusing on Ubuntu 18.04 soon. Newly created PPAs might only support 18.04 in a few months. Using 18.04 has its advantages over 16.04. + +#### Will it be easier to install printer-scanner drivers instead of using the CLI? + +I am not an expert when it comes to printers so my opinion is based on my limited knowledge in this field. Most of the new printers support [IPP protocol][14] and thus they should be well supported in Ubuntu 18.04. I cannot say the same about older printers. + +#### Does Ubuntu 18.04 have better support for Realtek and other WiFi adapters? + +No specific information on this part. + +#### What are the system requirements for Ubuntu 18.04? + +For the default GNOME version, you should have [4 GB of RAM for a comfortable use][15]. A processor released in last 8 years will work as well. Anything older than that should use a [lightweight Linux distribution][16] such as [Lubuntu][12]. + +#### Any other questions about Ubuntu 18.04? + +If you have any other doubts regarding Ubuntu 18.04, please feel free to leave a comment below. If you think some other information should be added to the list, please let me know. 
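For readers who prefer the terminal over the Software & Updates window, the notification behaviour discussed above is controlled by the `Prompt` line in `/etc/update-manager/release-upgrades` (`lts` versus `normal`). The sketch below reads a sample copy of that file rather than the real one, so it is safe to run on any machine; the file format is assumed to match a stock Ubuntu install:
```
# Sample copy of /etc/update-manager/release-upgrades.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[DEFAULT]
Prompt=lts
EOF

# 'lts'    -> notified only for long-term support releases (16.04 default)
# 'normal' -> notified for any new release (17.10 default)
sed -n 's/^Prompt=//p' "$cfg"   # prints: lts

rm "$cfg"
```
On an actual Ubuntu system, you would read `/etc/update-manager/release-upgrades` itself, and `do-release-upgrade -c` should then report whether a new release is available without starting the upgrade.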
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/ubuntu-18-04-faq/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:https://itsfoss.com/ubuntu-18-04-release-features/ +[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/ubuntu-18-04-faq-800x450.png +[3]:https://itsfoss.com/use-unity-ubuntu-17-10/ +[4]:https://itsfoss.com/vanilla-gnome-ubuntu/ +[5]:https://feaneron.com/2018/04/20/the-infamous-gnome-shell-memory-leak/ +[6]:https://community.ubuntu.com/t/help-test-memory-leak-fixes-in-18-04-lts/5251 +[7]:https://www.omgubuntu.co.uk/2018/04/ubuntu-studio-plans-to-reboot +[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/03/upgrade-ubuntu-2.jpeg +[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/ubuntu-18-04-upgrade-settings-800x379.png +[10]:https://itsfoss.com/upgrade-ubuntu-version/ +[11]:https://ubuntu-mate.org/ +[12]:https://lubuntu.net/ +[13]:https://www.ubuntu.com/info/release-end-of-life +[14]:https://www.pwg.org/ipp/everywhere.html +[15]:https://help.ubuntu.com/community/Installation/SystemRequirements +[16]:https://itsfoss.com/lightweight-linux-beginners/ From 202f39e6ac68dfb250139696493c172239b722bb Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 28 Apr 2018 13:08:00 +0800 Subject: [PATCH 018/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Check?= =?UTF-8?q?=20System=20Hardware=20Manufacturer,=20Model=20And=20Serial=20N?= =?UTF-8?q?umber=20In=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...turer, Model And Serial Number In Linux.md | 155 ++++++++++++++++++ 1 file changed, 155 insertions(+) create mode 100644 sources/tech/20180426 How 
To Check System Hardware Manufacturer, Model And Serial Number In Linux.md diff --git a/sources/tech/20180426 How To Check System Hardware Manufacturer, Model And Serial Number In Linux.md b/sources/tech/20180426 How To Check System Hardware Manufacturer, Model And Serial Number In Linux.md new file mode 100644 index 0000000000..da97e87fc6 --- /dev/null +++ b/sources/tech/20180426 How To Check System Hardware Manufacturer, Model And Serial Number In Linux.md @@ -0,0 +1,155 @@ +How To Check System Hardware Manufacturer, Model And Serial Number In Linux +====== +Getting system hardware information is not a problem for Linux GUI and Windows users, but CLI users often have trouble getting these details. + +Most of us don't even know the best command for the job. There are many utilities available in Linux for getting system hardware information such as the + +system hardware manufacturer, model, and serial number. + +We describe several possible ways to get these details; choose the method that suits you best. + +It is important to know this information, because you will need it when you raise a case with your hardware vendor for any kind of hardware issue. + +This can be achieved in six ways; let me show you how. + +### Method-1 : Using Dmidecode Command + +Dmidecode is a tool which reads a computer's DMI (Desktop Management Interface; some say SMBIOS, for System Management BIOS) table contents and displays system hardware information in a human-readable format. + +This table contains a description of the system's hardware components, as well as other useful information such as the serial number, manufacturer information, release date, and BIOS revision. + +The DMI table doesn't only describe what the system is currently made of; it can also report the possible evolution (such as the fastest supported CPU or the maximal amount of memory supported). 
+ +This will help you to analyze your hardware capability like whether it’s support latest application version or not? +``` +# dmidecode -t system + +# dmidecode 2.12 +# SMBIOS entry point at 0x7e7bf000 +SMBIOS 2.7 present. + +Handle 0x0024, DMI type 1, 27 bytes +System Information + Manufacturer: IBM + Product Name: System x2530 M4: -[1214AC1]- + Version: 0B + Serial Number: MK2RL11 + UUID: 762A99BF-6916-450F-80A6-B2E9E78FC9A1 + Wake-up Type: Power Switch + SKU Number: Not Specified + Family: System X + +Handle 0x004B, DMI type 12, 5 bytes +System Configuration Options + Option 1: JP20 pin1-2: TPM PP Disable, pin2-3: TPM PP Enable + +Handle 0x004D, DMI type 32, 20 bytes +System Boot Information + Status: No errors detected + +``` + +**Suggested Read :** [Dmidecode – Easy Way To Get Linux System Hardware Information][1] + +### Method-2 : Using inxi Command + +inxi is a nifty tool to check hardware information on Linux and offers wide range of option to get all the hardware information on Linux system that i never found in any other utility which are available in Linux. It was forked from the ancient and mindbendingly perverse yet ingenius infobash, by locsmif. + +inxi is a script that quickly shows system hardware, CPU, drivers, Xorg, Desktop, Kernel, GCC version(s), Processes, RAM usage, and a wide variety of other useful information, also used for forum technical support & debugging tool. 
+``` +# inxi -M +Machine: Device: server System: IBM product: N/A v: 0B serial: MK2RL11 + Mobo: IBM model: 00Y8494 serial: 37M17D UEFI: IBM v: -[VVE134MUS-1.50]- date: 08/30/2013 + +``` + +**Suggested Read :** [inxi – A Great Tool to Check Hardware Information on Linux][2] + +### Method-3 : Using lshw Command + +lshw (stands for Hardware Lister) is a small nifty tool that generates detailed reports about various hardware components on the machine such as memory configuration, firmware version, mainboard configuration, CPU version and speed, cache configuration, usb, network card, graphics cards, multimedia, printers, bus speed, etc. + +It’s generating hardware information by reading varies files under /proc directory and DMI table. + +lshw must be run as super user to detect the maximum amount of information or it will only report partial information. Special option is available in lshw called class which will shows specific given hardware information in detailed manner. +``` +# lshw -C system +enal-dbo01t + description: Blade + product: System x2530 M4: -[1214AC1]- + vendor: IBM + version: 0B + serial: MK2RL11 + width: 64 bits + capabilities: smbios-2.7 dmi-2.7 vsyscall32 + configuration: boot=normal chassis=enclosure family=System X uuid=762A99BF-6916-450F-80A6-B2E9E78FC9A1 + +``` + +**Suggested Read :** [LSHW (Hardware Lister) – A Nifty Tool To Get A Hardware Information On Linux][3] + +### Method-4 : Using /sys file system + +The kernel expose some DMI information in the /sys virtual filesystem. So we can easily get the machine type by running grep command with following format. +``` +# grep "" /sys/class/dmi/id/[pbs]* + +``` + +Alternatively we can print only specific details by using cat command. 
+``` +# cat /sys/class/dmi/id/board_vendor +IBM + +# cat /sys/class/dmi/id/product_name +System x2530 M4: -[1214AC1]- + +# cat /sys/class/dmi/id/product_serial +MK2RL11 + +# cat /sys/class/dmi/id/bios_version +-[VVE134MUS-1.50]- + +``` + +### Method-5 : Using dmesg Command + +The dmesg command is used to write the kernel messages (boot-time messages) in Linux before syslogd or klogd start. It obtains its data by reading the kernel ring buffer. dmesg can be very useful when troubleshooting or just trying to obtain information about the hardware on a system. +``` +# dmesg | grep -i DMI +DMI: System x2530 M4: -[1214AC1]-/00Y8494, BIOS -[VVE134MUS-1.50]- 08/30/2013 + +``` + +### Method-6 : Using hwinfo Command + +hwinfo stands for hardware information tool is another great utility that used to probe for the hardware present in the system and display detailed information about varies hardware components in human readable format. + +It reports information about CPU, RAM, keyboard, mouse, graphics card, sound, storage, network interface, disk, partition, bios, and bridge, etc,., This tool could display more detailed information among others like lshw, dmidecode, inxi, etc,. + +hwinfo uses libhd library libhd.so to gather hardware information on the system. This tool especially designed for openSUSE system, later other distributions are included the tool into their official repository. 
+``` +# hwinfo | egrep "system.hardware.vendor|system.hardware.product" + system.hardware.vendor = 'IBM' + system.hardware.product = 'System x2530 M4: -[1214AC1]-' + +``` + +**Suggested Read :** [hwinfo (Hardware Info) – A Nifty Tool To Detect System Hardware Information On Linux][4] + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-check-system-hardware-manufacturer-model-and-serial-number-in-linux/ + +作者:[VINOTH KUMAR][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.2daygeek.com/author/vinoth/ +[1]:https://www.2daygeek.com/dmidecode-get-print-display-check-linux-system-hardware-information/ +[2]:https://www.2daygeek.com/inxi-system-hardware-information-on-linux/ +[3]:https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/ +[4]:https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/ From 6898c90a4e6c02f0df54dd862fdbc37fdb9912a9 Mon Sep 17 00:00:00 2001 From: Auk7F7 <34982730+Auk7F7@users.noreply.github.com> Date: Sat, 28 Apr 2018 21:12:02 +0800 Subject: [PATCH 019/102] Update 20180219 Learn to code with Thonny - a Python IDE for beginners.md translating by Auk7F7 --- ...19 Learn to code with Thonny - a Python IDE for beginners.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md b/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md index 4ee603aa6d..078f9576e8 100644 --- a/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md +++ b/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md @@ -1,3 +1,5 @@ +translating by Auk7F7 + Learn to code with Thonny — a Python IDE for beginners ====== 
From 69fc6c34a95d159d52f400ca2265392511b8b111 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 28 Apr 2018 22:50:14 +0800 Subject: [PATCH 020/102] PRF:20180215 Check Linux Distribution Name and Version.md @HankChow --- ...eck Linux Distribution Name and Version.md | 85 +++++++++---------- 1 file changed, 41 insertions(+), 44 deletions(-) diff --git a/translated/tech/20180215 Check Linux Distribution Name and Version.md b/translated/tech/20180215 Check Linux Distribution Name and Version.md index 9a3b99eda5..a4b8d45c56 100644 --- a/translated/tech/20180215 Check Linux Distribution Name and Version.md +++ b/translated/tech/20180215 Check Linux Distribution Name and Version.md @@ -1,29 +1,28 @@ -查看 Linux 发行版名称和版本号的8种方法 +查看 Linux 发行版名称和版本号的 8 种方法 ====== + 如果你加入了一家新公司,要为开发团队安装所需的软件并重启服务,这个时候首先要弄清楚它们运行在什么发行版以及哪个版本的系统上,你才能正确完成后续的工作。作为系统管理员,充分了解系统信息是首要的任务。 查看 Linux 发行版名称和版本号有很多种方法。你可能会问,为什么要去了解这些基本信息呢? -因为对于诸如 RHEL、Debian、openSUSE、Arch Linux 这几种主流发行版来说,它们各自拥有不同的包管理器来管理系统上的软件包,如果不知道所使用的是哪一个发行版的系统,在包安装的时候就会无从下手,而且由于大多数发行版都是用 systemd 命令而不是 SysVinit 脚本,在重启服务的时候也难以执行正确的命令。 +因为对于诸如 RHEL、Debian、openSUSE、Arch Linux 这几种主流发行版来说,它们各自拥有不同的包管理器来管理系统上的软件包,如果不知道所使用的是哪一个发行版的系统,在软件包安装的时候就会无从下手,而且由于大多数发行版都是用 systemd 命令而不是 SysVinit 脚本,在重启服务的时候也难以执行正确的命令。 下面来看看可以使用那些基本命令来查看 Linux 发行版名称和版本号。 ### 方法总览 - * lsb_release command - * /etc/*-release file - * uname command - * /proc/version file - * dmesg Command - * YUM or DNF Command - * RPM command - * APT-GET command + * `lsb_release` 命令 + * `/etc/*-release` 文件 + * `uname` 命令 + * `/proc/version` 文件 + * `dmesg` 命令 + * YUM 或 DNF 命令 + * RPM 命令 + * APT-GET 命令 +### 方法 1: lsb_release 命令 - -### 方法1: lsb_release 命令 - -LSB(Linux Standard Base,Linux 标准库)能够打印发行版的具体信息,包括发行版名称、版本号、代号等。 +LSB(Linux 标准库Linux Standard Base)能够打印发行版的具体信息,包括发行版名称、版本号、代号等。 ``` # lsb_release -a @@ -32,12 +31,11 @@ Distributor ID: Ubuntu Description: Ubuntu 16.04.3 LTS Release: 16.04 Codename: xenial - ``` -### 方法2: /etc/arch-release /etc/os-release File +### 方法 2: /etc/*-release 
文件 -版本文件通常被视为操作系统的标识。在 `/etc` 目录下放置了很多记录着发行版各种信息的文件,每个发行版都各自有一套这样记录着相关信息的文件。下面是一组在 Ubuntu/Debian 系统上显示出来的文件内容。 +release 文件通常被视为操作系统的标识。在 `/etc` 目录下放置了很多记录着发行版各种信息的文件,每个发行版都各自有一套这样记录着相关信息的文件。下面是一组在 Ubuntu/Debian 系统上显示出来的文件内容。 ``` # cat /etc/issue @@ -67,10 +65,10 @@ UBUNTU_CODENAME=xenial # cat /etc/debian_version 9.3 - ``` 下面这一组是在 RHEL/CentOS/Fedora 系统上显示出来的文件内容。其中 `/etc/redhat-release` 和 `/etc/system-release` 文件是指向 `/etc/[发行版名称]-release` 文件的一个连接。 + ``` # cat /etc/centos-release CentOS release 6.9 (Final) @@ -100,34 +98,34 @@ Fedora release 27 (Twenty Seven) # cat /etc/system-release Fedora release 27 (Twenty Seven) - ``` -### 方法3: uname 命令 +### 方法 3: uname 命令 -uname(unix name) 是一个打印系统信息的工具,包括内核名称、版本号、系统详细信息以及所运行的操作系统等等。 +uname(unix name 的意思) 是一个打印系统信息的工具,包括内核名称、版本号、系统详细信息以及所运行的操作系统等等。 + +- **建议阅读:** [6种查看系统 Linux 内核的方法][1] -**建议阅读:** [6种查看系统 Linux 内核的方法][1] ``` # uname -a Linux localhost.localdomain 4.12.14-300.fc26.x86_64 #1 SMP Wed Sep 20 16:28:07 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux - ``` 以上运行结果说明使用的操作系统版本是 Fedora 26。 -### 方法4: /proc/version File +### 方法 4: /proc/version 文件 这个文件记录了 Linux 内核的版本、用于编译内核的 gcc 的版本、内核编译的时间,以及内核编译者的用户名。 + ``` # cat /proc/version Linux version 4.12.14-300.fc26.x86_64 ([email protected]) (gcc version 7.2.1 20170915 (Red Hat 7.2.1-2) (GCC) ) #1 SMP Wed Sep 20 16:28:07 UTC 2017 - ``` -### Method-5: dmesg 命令 +### 方法 5: dmesg 命令 + +dmesg(展示信息display message驱动程序信息driver message)是大多数类 Unix 操作系统上的一个命令,用于打印内核的消息缓冲区的信息。 -dmesg(display message/driver message,展示信息/驱动程序信息)是大多数类 Unix 操作系统上的一个命令,用于打印内核上消息缓冲区的信息。 ``` # dmesg | grep "Linux" [ 0.000000] Linux version 4.12.14-300.fc26.x86_64 ([email protected]) (gcc version 7.2.1 20170915 (Red Hat 7.2.1-2) (GCC) ) #1 SMP Wed Sep 20 16:28:07 UTC 2017 @@ -139,14 +137,14 @@ dmesg(display message/driver message,展示信息/驱动程序信息)是 [ 0.688949] usb usb2: Manufacturer: Linux 4.12.14-300.fc26.x86_64 ohci_hcd [ 2.564554] SELinux: Disabled at runtime. 
[ 2.564584] SELinux: Unregistering netfilter hooks - ``` -### Method-6: Yum/Dnf 命令 +### 方法 6: Yum/Dnf 命令 -Yum(Yellowdog Updater Modified)是 Linux 操作系统上的一个包管理工具,而 `yum` 命令则是一些基于 RedHat 的 Linux 发行版上用于安装、更新、查找、删除软件包的命令。 +Yum(Yellowdog 更新器修改版Yellowdog Updater Modified)是 Linux 操作系统上的一个包管理工具,而 `yum` 命令被用于一些基于 RedHat 的 Linux 发行版上安装、更新、查找、删除软件包。 + +- **建议阅读:** [在 RHEL/CentOS 系统上使用 yum 命令管理软件包][2] -**建议阅读:** [在 RHEL/CentOS 系统上使用 yum 命令管理软件包][2] ``` # yum info nano Loaded plugins: fastestmirror, ovl @@ -165,10 +163,10 @@ Summary : A small text editor URL : http://www.nano-editor.org License : GPLv3+ Description : GNU nano is a small and friendly text editor. - ``` 下面的 `yum repolist` 命令执行后显示了 yum 的基础源仓库、额外源仓库、更新源仓库都来自 CentOS 7 仓库。 + ``` # yum repolist Loaded plugins: fastestmirror, ovl @@ -181,12 +179,12 @@ base/7/x86_64 CentOS-7 - Base 9591 extras/7/x86_64 CentOS-7 - Extras 388 updates/7/x86_64 CentOS-7 - Updates 1929 repolist: 11908 - ``` 使用 `dnf` 命令也同样可以查看发行版名称和版本号。 -**建议阅读:** [在 Fedora 系统上使用 DNF(YUM 的一个分支)命令管理软件包][3] +- **建议阅读:** [在 Fedora 系统上使用 DNF(YUM 的一个分支)命令管理软件包][3] + ``` # dnf info nano Last metadata expiration check: 0:01:25 ago on Thu Feb 15 01:59:31 2018. @@ -203,25 +201,25 @@ Summary : A small text editor URL : https://www.nano-editor.org License : GPLv3+ Description : GNU nano is a small and friendly text editor. 
- ``` -### Method-7: RPM 命令 +### 方法 7: RPM 命令 -RPM(RedHat Package Manager, RedHat 包管理器)是在 CentOS、Oracle Linux、Fedora 这些基于 RedHat 的操作系统上的一个强大的命令行包管理工具,同样也可以帮助我们查看系统的版本信息。 +RPM(红帽包管理器RedHat Package Manager)是在 CentOS、Oracle Linux、Fedora 这些基于 RedHat 的操作系统上的一个强大的命令行包管理工具,同样也可以帮助我们查看系统的版本信息。 + +- **建议阅读:** [在基于 RHEL 的系统上使用 RPM 命令管理软件包][4] -**建议阅读:** [在基于 RHEL 的系统上使用 RPM 命令管理软件包][4] ``` # rpm -q nano nano-2.8.7-1.fc27.x86_64 - ``` -### Method-8: APT-GET 命令 +### 方法 8: APT-GET 命令 -Apt-Get(Advanced Packaging Tool)是一个强大的命令行工具,可以自动下载安装新软件包、更新已有的软件包、更新软件包列表索引,甚至更新整个 Debian 系统。 +Apt-Get(高级打包工具Advanced Packaging Tool)是一个强大的命令行工具,可以自动下载安装新软件包、更新已有的软件包、更新软件包列表索引,甚至更新整个 Debian 系统。 + +- **建议阅读:** [在基于 Debian 的系统上使用 Apt-Get 和 Apt-Cache 命令管理软件包][5] -**建议阅读:** [在基于 Debian 的系统上使用 Apt-Get 和 Apt-Cache 命令管理软件包][5] ``` # apt-cache policy nano nano: @@ -233,7 +231,6 @@ nano: 100 /var/lib/dpkg/status 2.5.3-2 500 500 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 Packages - ``` -------------------------------------------------------------------------------- @@ -242,7 +239,7 @@ via: https://www.2daygeek.com/check-find-linux-distribution-name-and-version/ 作者:[Magesh Maruthamuthu][a] 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 2c211ea44e99016b4925c650fe7980549d876b24 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sat, 28 Apr 2018 22:51:26 +0800 Subject: [PATCH 021/102] PUB:20180215 Check Linux Distribution Name and Version.md @HankChow https://linux.cn/article-9586-1.html --- .../20180215 Check Linux Distribution Name and Version.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180215 Check Linux Distribution Name and Version.md (100%) diff --git a/translated/tech/20180215 Check Linux Distribution Name and Version.md b/published/20180215 Check Linux Distribution Name 
and Version.md similarity index 100% rename from translated/tech/20180215 Check Linux Distribution Name and Version.md rename to published/20180215 Check Linux Distribution Name and Version.md From efc5de22da12ab71ba8dc30a3e2fd3ec246a9429 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Sun, 29 Apr 2018 00:00:20 +0800 Subject: [PATCH 022/102] Update 20180122 A Simple Command-line Snippet Manager.md --- sources/tech/20180122 A Simple Command-line Snippet Manager.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20180122 A Simple Command-line Snippet Manager.md b/sources/tech/20180122 A Simple Command-line Snippet Manager.md index 1c8ef14fb6..6f2b83f6d0 100644 --- a/sources/tech/20180122 A Simple Command-line Snippet Manager.md +++ b/sources/tech/20180122 A Simple Command-line Snippet Manager.md @@ -1,3 +1,6 @@ +Translating by MjSeven + + A Simple Command-line Snippet Manager ====== From e8f828ac0206e588e0e909e827f3d4f22474a20b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 09:36:45 +0800 Subject: [PATCH 023/102] PRF:20180104 How does gdb call functions.md @ucasFL --- .../20180104 How does gdb call functions.md | 61 ++++++------------- 1 file changed, 20 insertions(+), 41 deletions(-) diff --git a/translated/tech/20180104 How does gdb call functions.md b/translated/tech/20180104 How does gdb call functions.md index 575563ad3d..28c26ba615 100644 --- a/translated/tech/20180104 How does gdb call functions.md +++ b/translated/tech/20180104 How does gdb call functions.md @@ -1,13 +1,13 @@ gdb 如何调用函数? 
============================================================ -(之前的 gdb 系列文章:[gdb 如何工作(2016)][4] 和[通过 gdb 你能够做的三件事(2014)][5]) +(之前的 gdb 系列文章:[gdb 如何工作(2016)][4] 和[三步上手 gdb(2014)][5]) -在这个周,我发现,我可以从 gdb 上调用 C 函数。这看起来很酷,因为在过去我认为 gdb 最多只是一个只读调试工具。 +在这周,我发现我可以从 gdb 上调用 C 函数。这看起来很酷,因为在过去我认为 gdb 最多只是一个只读调试工具。 我对 gdb 能够调用函数感到很吃惊。正如往常所做的那样,我在 [Twitter][6] 上询问这是如何工作的。我得到了大量的有用答案。我最喜欢的答案是 [Evan Klitzke 的示例 C 代码][7],它展示了 gdb 如何调用函数。代码能够运行,这很令人激动! -我相信(通过一些跟踪和实验)那个示例 C 代码和 gdb 实际上如何调用函数不同。因此,在这篇文章中,我将会阐述 gdb 是如何调用函数的,以及我是如何知道的。 +我(通过一些跟踪和实验)认为那个示例 C 代码和 gdb 实际上如何调用函数不同。因此,在这篇文章中,我将会阐述 gdb 是如何调用函数的,以及我是如何知道的。 关于 gdb 如何调用函数,还有许多我不知道的事情,并且,在这儿我写的内容有可能是错误的。 @@ -15,17 +15,14 @@ gdb 如何调用函数? 在开始讲解这是如何工作之前,我先快速的谈论一下我是如何发现这件令人惊讶的事情的。 -所以,你已经在运行一个 C 程序(目标程序)。你可以运行程序中的一个函数,只需要像下面这样做: +假如,你已经在运行一个 C 程序(目标程序)。你可以运行程序中的一个函数,只需要像下面这样做: * 暂停程序(因为它已经在运行中) - * 找到你想调用的函数的地址(使用符号表) - * 使程序(目标程序)跳转到那个地址 - * 当函数返回时,恢复之前的指令指针和寄存器 -通过符号表来找到想要调用的函数的地址非常容易。下面是一段非常简单但能够工作的代码,我在 Linux 上使用这段代码作为例子来讲解如何找到地址。这段代码使用 [elf crate][8]。如果我想找到 PID 为 2345 的进程中的 foo 函数的地址,那么我可以运行 `elf_symbol_value("/proc/2345/exe", "foo")`。 +通过符号表来找到想要调用的函数的地址非常容易。下面是一段非常简单但能够工作的代码,我在 Linux 上使用这段代码作为例子来讲解如何找到地址。这段代码使用 [elf crate][8]。如果我想找到 PID 为 2345 的进程中的 `foo` 函数的地址,那么我可以运行 `elf_symbol_value("/proc/2345/exe", "foo")`。 ``` fn elf_symbol_value(file_name: &str, symbol_name: &str) -> Result> { @@ -42,7 +39,6 @@ fn elf_symbol_value(file_name: &str, symbol_name: &str) -> Result Result @@ -66,7 +62,6 @@ int foo() { int main() { sleep(1000); } - ``` 接下来,编译并运行它: @@ -74,7 +69,6 @@ int main() { ``` $ gcc -o test test.c $ ./test - ``` 最后,我们使用 gdb 来跟踪 `test` 这一程序: @@ -84,54 +78,42 @@ $ sudo gdb -p $(pgrep -f test) (gdb) p foo() $1 = 3 (gdb) quit - ``` 我运行 `p foo()` 然后它运行了这个函数!这非常有趣。 -### 为什么这是有用的? +### 这有什么用? 
下面是一些可能的用途: -* 它使得你可以把 gdb 当成一个 C 应答式程序,这很有趣,我想对开发也会有用 - +* 它使得你可以把 gdb 当成一个 C 应答式程序(REPL),这很有趣,我想对开发也会有用 * 在 gdb 中进行调试的时候展示/浏览复杂数据结构的功能函数(感谢 [@invalidop][1]) - * [在进程运行时设置一个任意的名字空间][2](我的同事 [nelhage][3] 对此非常惊讶) - * 可能还有许多我所不知道的用途 ### 它是如何工作的 -当我在 Twitter 上询问从 gdb 中调用函数是如何工作的时,我得到了大量有用的回答。许多答案是”你从符号表中得到了函数的地址“,但这并不是完整的答案。 +当我在 Twitter 上询问从 gdb 中调用函数是如何工作的时,我得到了大量有用的回答。许多答案是“你从符号表中得到了函数的地址”,但这并不是完整的答案。 -有个人告诉了我两篇关于 gdb 如何工作的系列文章:[和本地人一起调试-第一部分][9],[和本地人一起调试-第二部分][10]。第一部分讲述了 gdb 是如何调用函数的(指出了 gdb 实际上完成这件事并不简单,但是我将会尽力)。 +有个人告诉了我两篇关于 gdb 如何工作的系列文章:[原生调试:第一部分][9],[原生调试:第二部分][10]。第一部分讲述了 gdb 是如何调用函数的(指出了 gdb 实际上完成这件事并不简单,但是我将会尽力)。 步骤列举如下: 1. 停止进程 - 2. 创建一个新的栈框(远离真实栈) - 3. 保存所有寄存器 - 4. 设置你想要调用的函数的寄存器参数 - -5. 设置栈指针指向新的栈框 - +5. 设置栈指针指向新的栈框stack frame 6. 在内存中某个位置放置一条陷阱指令 - 7. 为陷阱指令设置返回地址 - 8. 设置指令寄存器的值为你想要调用的函数地址 - 9. 再次运行进程! (LCTT 译注:如果将这个调用的函数看成一个单独的线程,gdb 实际上所做的事情就是一个简单的线程上下文切换) 我不知道 gdb 是如何完成这些所有事情的,但是今天晚上,我学到了这些所有事情中的其中几件。 -**创建一个栈框** +#### 创建一个栈框 如果你想要运行一个 C 函数,那么你需要一个栈来存储变量。你肯定不想继续使用当前的栈。准确来说,在 gdb 调用函数之前(通过设置函数指针并跳转),它需要设置栈指针到某个地方。 @@ -154,14 +136,13 @@ Breakpoint 1 at 0x40052a Breakpoint 1, 0x000000000040052a in foo () (gdb) p $rsp $8 = (void *) 0x7ffea3d0bc00 - ``` -这看起来符合”gdb 在当前栈的栈顶构造了一个新的栈框“这一理论。因为栈指针(`$rsp`)从 `0x7ffea3d0bca8` 变成了 `0x7ffea3d0bc00` - 栈指针从高地址往低地址长。所以 `0x7ffea3d0bca8` 在 `0x7ffea3d0bc00` 的后面。真是有趣! +这看起来符合“gdb 在当前栈的栈顶构造了一个新的栈框”这一理论。因为栈指针(`$rsp`)从 `0x7ffea3d0bca8` 变成了 `0x7ffea3d0bc00` —— 栈指针从高地址往低地址长。所以 `0x7ffea3d0bca8` 在 `0x7ffea3d0bc00` 的后面。真是有趣! 所以,看起来 gdb 只是在当前栈所在位置创建了一个新的栈框。这令我很惊讶! -**改变指令指针** +#### 改变指令指针 让我们来看一看 gdb 是如何改变指令指针的! 
@@ -181,7 +162,7 @@ $3 = (void (*)()) 0x40052a 我盯着输出看了很久,但仍然不理解它是如何改变指令指针的,但这并不影响什么。 -**如何设置断点** +#### 如何设置断点 上面我写到 `break foo` 。我跟踪 gdb 运行程序的过程,但是没有任何发现。 @@ -202,10 +183,9 @@ $3 = (void (*)()) 0x40052a // 将 0x400528 处的指令更改为之前的样子 25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003cce589]) = 0 25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003b8e589) = 0 - ``` -**在某处放置一条陷阱指令** +#### 在某处放置一条陷阱指令 当 gdb 运行一个函数的时候,它也会在某个地方放置一条陷阱指令。这是其中一条。它基本上是用 `cc` 来替换一条指令(`int3`)。 @@ -213,7 +193,6 @@ $3 = (void (*)()) 0x40052a 5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0 5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0 5908 ptrace(PTRACE_POKEDATA, 5810, 0x7f6fa7c0b260, 0x48f389fd894853cc) = 0 - ``` `0x7f6fa7c0b260` 是什么?我查看了进程的内存映射,发现它位于 `/lib/x86_64-linux-gnu/libc-2.23.so` 中的某个位置。这很奇怪,为什么 gdb 将陷阱指令放在 libc 中? @@ -226,7 +205,7 @@ $3 = (void (*)()) 0x40052a 我将要在这儿停止了(现在已经凌晨 1 点),但是我知道的多一些了! -看起来”gdb 如何调用函数“这一问题的答案并不简单。我发现这很有趣并且努力找出其中一些答案,希望你也能够找到。 +看起来“gdb 如何调用函数”这一问题的答案并不简单。我发现这很有趣并且努力找出其中一些答案,希望你也能够找到。 我依旧有很多未回答的问题,关于 gdb 是如何完成这些所有事的,但是可以了。我不需要真的知道关于 gdb 是如何工作的所有细节,但是我很开心,我有了一些进一步的理解。 @@ -236,7 +215,7 @@ via: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/ 作者:[Julia Evans][a] 译者:[ucasFL](https://github.com/ucasFL) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -244,8 +223,8 @@ via: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/ [1]:https://twitter.com/invalidop/status/949161146526781440 [2]:https://github.com/baloo/setns/blob/master/setns.c [3]:https://github.com/nelhage -[4]:https://jvns.ca/blog/2016/08/10/how-does-gdb-work/ -[5]:https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/ +[4]:https://linux.cn/article-9491-1.html +[5]:https://linux.cn/article-9276-1.html [6]:https://twitter.com/b0rk/status/948060808243765248 
[7]:https://github.com/eklitzke/ptrace-call-userspace/blob/master/call_fprintf.c [8]:https://cole14.github.io/rust-elf From bfd7480b278957c7cb67253fbb0cd1ee88805d0c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 09:38:43 +0800 Subject: [PATCH 024/102] PUB:20180104 How does gdb call functions.md @ucasFL https://linux.cn/article-9588-1.html --- .../tech => published}/20180104 How does gdb call functions.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180104 How does gdb call functions.md (100%) diff --git a/translated/tech/20180104 How does gdb call functions.md b/published/20180104 How does gdb call functions.md similarity index 100% rename from translated/tech/20180104 How does gdb call functions.md rename to published/20180104 How does gdb call functions.md From c88a3fade0bab7d46ce90e30439e9d23d8248727 Mon Sep 17 00:00:00 2001 From: FelixYFZ <33593534+FelixYFZ@users.noreply.github.com> Date: Sun, 29 Apr 2018 09:58:13 +0800 Subject: [PATCH 025/102] Update 20171116 10 easy steps from proprietary to open source.md Translating by FelixYFZ --- .../20171116 10 easy steps from proprietary to open source.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20171116 10 easy steps from proprietary to open source.md b/sources/tech/20171116 10 easy steps from proprietary to open source.md index 4e085872a5..a8bd86314d 100644 --- a/sources/tech/20171116 10 easy steps from proprietary to open source.md +++ b/sources/tech/20171116 10 easy steps from proprietary to open source.md @@ -1,4 +1,4 @@ -10 easy steps from proprietary to open source +Translating by FelixYFZ 10 easy steps from proprietary to open source ====== "But surely open source software is less secure, because everybody can see it, and they can just recompile it and replace it with bad stuff they've written." 
Hands up: who's heard this?1 From 41a45e580a6595796de9e8a4c87b7ec11e414583 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 10:10:14 +0800 Subject: [PATCH 026/102] PRF:20180226 5 keys to building open hardware.md @kennethXia --- ...180226 5 keys to building open hardware.md | 49 +++++++++---------- 1 file changed, 24 insertions(+), 25 deletions(-) diff --git a/translated/talk/20180226 5 keys to building open hardware.md b/translated/talk/20180226 5 keys to building open hardware.md index c470820f5c..e1ea822451 100644 --- a/translated/talk/20180226 5 keys to building open hardware.md +++ b/translated/talk/20180226 5 keys to building open hardware.md @@ -1,54 +1,53 @@ -构建开源硬件的5个关键点 +构建开源硬件的 5 个关键点 ====== + +> 最大化你的项目影响。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openhardwaretools.png?itok=DC1RC_1f) -科学社区正在加速拥抱自由和开源硬件([FOSH][1]). 研究员正忙于[改进他们自己的装备][2]并创造数以百计基于分布式数字制造模型的设备来推动他们的研究。 -热衷于 FOSH 的主要原因还是钱: 有研究表明,和专用设备相比,FOSH 可以[节省90%到99%的花费][3]。基于[开源硬件商业模式][4]的科学 FOSH 的商业化已经推动其快速地发展为一个新的工程领域,并为此定期[举行年会][5]。 +科学社区正在加速拥抱自由及开源硬件(Free and Open Source Hardware,[FOSH][1])。 研究员正忙于[改进他们自己的装备][2]并创造数以百计的基于分布式数字制造模型的设备来推动他们的研究。 +热衷于 FOSH 的主要原因还是钱: 有研究表明,和专用设备相比,FOSH 可以[节省 90% 到 99% 的花费][3]。基于[开源硬件商业模式][4]的科学 FOSH 的商业化已经推动其快速地发展为一个新的工程领域,并为此定期[举行 GOSH 年会][5]。 -特别的是,不止一本,而是关于这个主题的[两本学术期刊]:[Journal of Open Hardware] (由Ubiquity出版,一个新的自由访问出版商,同时出版了[Journal of Open Research Software][8])以及[HardwareX][9](由Elsevier出版的一种[自由访问期刊][10],它是世界上最大的学术出版商之一)。 +特别的是,不止一本,而是关于这个主题的[两本学术期刊]:[Journal of Open Hardware] (由 Ubiquity 出版,一个新的自由访问出版商,同时出版了 [Journal of Open Research Software][8] )以及 [HardwareX][9](由 Elsevier 出版的一种[自由访问期刊][10],它是世界上最大的学术出版商之一)。 由于学术社区的支持,科学 FOSH 的开发者在获取制作乐趣并推进科学快速发展的同时获得学术声望。 ### 科学 FOSH 的5个步骤 -协恩 (Shane Oberloier)和我在名为Designes的自由问工程期刊上共同发表了一篇关于设计 FOSH 科学设备原则的[文章][11]。我们以滑动烘干机为例,制造成本低于20美元,仅是专用设备价格的三百分之一。[科学][1]和[医疗][12]设备往往比较复杂,开发 FOSH 替代品将带来巨大的回报。 +Shane Oberloier 和我在名为 Designs 的自由访问工程期刊上共同发表了一篇关于设计 FOSH 
科学设备原则的[文章][11]。我们以滑动式烘干机为例,制造成本低于 20 美元,仅是专用设备价格的三百分之一。[科学][1]和[医疗][12]设备往往比较复杂,开发 FOSH 替代品将带来巨大的回报。 -我总结了5个步骤(包括6条设计原则),它们在协恩和我发表的文章里有详细阐述。这些设计原则也推广到非科学设备,而且制作越复杂的设计越能带来更大的潜在收益。 +我总结了 5 个步骤(包括 6 条设计原则),它们在 Shane Oberloier 和我发表的文章里有详细阐述。这些设计原则也可以推广到非科学设备,而且制作越复杂的设计越能带来更大的潜在收益。 如果你对科学项目的开源硬件设计感兴趣,这些步骤将使你的项目的影响最大化。 - 1. 评估类似现有工具的功能,你的 FOSH 设计目标应该针对实际效果而不是现有的设计(译者注:作者的意思应该是不要被现有设计缚住手脚)。必要的时候需进行概念证明。 - - 2. 使用下列设计原则: - - * 在设备生产中,仅适用自由和开源的软件工具链(比如,开源的CAD工具,例如[OpenSCAD][13], [FreeCAD][14], or [Blender][15])和开源硬件。 +1. 评估类似现有工具的功能,你的 FOSH 设计目标应该针对实际效果而不是现有的设计(LCTT 译注:作者的意思应该是不要被现有设计缚住手脚)。必要的时候需进行概念证明。 +2. 使用下列设计原则: + * 在设备生产中,仅使用自由和开源的软件工具链(比如,开源的 CAD 工具,例如 [OpenSCAD][13]、 [FreeCAD][14] 或 [Blender][15])和开源硬件。 * 尝试减少部件的数量和类型并降低工具的复杂度 * 减少材料的数量和制造成本。 - * 尽量使用方便易得的工具(比如 [RepRap 3D 打印机][16])进行部件的分布式或数字化生产。 + * 尽量使用能够分发的部件或使用方便易得的工具(比如 [RepRap 3D 打印机][16])进行部件的数字化生产。 * 对部件进行[参数化设计][17],这使他人可以对你的设计进行个性化改动。相较于特例化设计,参数化设计会更有用。在未来的项目中,使用者可以通过修改核心参数来继续利用它们。 ?* 所有不能使用开源硬件进行分布制造的零件,必须选择现货产品以方便采购。 - 3. 验证功能设计 - - ?4. 提供关于设计、生产、装配、校准和操作的详尽文档。包括原始设计文件而不仅仅是设计输出。开源硬件协会对于开源设计的发布和文档化有额外的[指南][18],总结如下: - - * 以通用的形式分享设计文件 - * 提供详尽的材料清单,包括价格和采购信息 - * 如果包含软件,确保代码对大众来说清晰易懂 + * 所有不能使用现有的开源硬件以分布式的方式轻松且经济地制造的零件,必须选择现货产品以方便采购。 +3. 验证功能设计。 +4. 提供关于设计、生产、装配、校准和操作的详尽设备文档。包括原始设计文件而不仅仅是用于生产的。开源硬件协会(Open Source Hardware Association)对于开源设计的发布和文档化有额外的[指南][18],总结如下: + * 以通用的形式分享设计文件。 + * 提供详尽的材料清单,包括价格和采购信息。 + * 如果涉及软件,确保代码对大众来说清晰易懂。 * 作为生产时的参考,必须提供足够的照片,以确保没有任何被遮挡的部分。 * 在描述方法的章节,整个制作过程必须被细化成简单步骤以便复制此设计。 - * 在线上分享并指定许可证。这为用户提供了合理使用设计的信息。 - - ?5. 主动分享!为了使 FOSH 发扬光大,设计必须被广泛、频繁和有效地分享以提升他们的存在感。所有的文档应该在自由访问文献中发表,并与适当的社区共享。[开源科学框架][19]是一个值得考虑的优雅的通用存储库,它由开源科学中心主办,该中心设置为接受任何类型的文件并处理大型数据集。 + * 在线上分享并指定许可证。这为用户提供了合理使用该设计的信息。 +5. 
主动分享!为了使 FOSH 发扬光大,设计必须被广泛、频繁和有效地分享以提升它们的存在感。所有的文档应该在自由访问文献中发表,并与适当的社区共享。[开源科学框架][19](Open Science Framework)是一个值得考虑的优雅的通用存储库,它由开源科学中心(Center for Open Science)主办,该中心设置为接受任何类型的文件并处理大型数据集。 这篇文章得到了 [Fulbright Finland][20] 的支持,该公司赞助了芬兰 Fulbright-Aalto 大学的特聘校席 Joshua Pearce 在开源科学硬件方面的研究工作。 + -------------------------------------------------------------------------------- via: https://opensource.com/article/18/2/5-steps-creating-successful-open-hardware 作者:[Joshua Pearce][a] 译者:[kennethXia](https://github.com/kennethXia) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e8aea11be2d1d9fd665c25d0fbef202621d2f708 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 10:10:34 +0800 Subject: [PATCH 027/102] PUB:20180226 5 keys to building open hardware.md @kennethXia https://linux.cn/article-9589-1.html --- .../20180226 5 keys to building open hardware.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/talk => published}/20180226 5 keys to building open hardware.md (100%) diff --git a/translated/talk/20180226 5 keys to building open hardware.md b/published/20180226 5 keys to building open hardware.md similarity index 100% rename from translated/talk/20180226 5 keys to building open hardware.md rename to published/20180226 5 keys to building open hardware.md From 4ede653e8dd3b6af2197dd9fa4fcb500e88555fe Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Sun, 29 Apr 2018 13:54:55 +0800 Subject: [PATCH 028/102] Delete 20180122 A Simple Command-line Snippet Manager.md --- ...2 A Simple Command-line Snippet Manager.md | 322 ------------------ 1 file changed, 322 deletions(-) delete mode 100644 sources/tech/20180122 A Simple Command-line Snippet Manager.md diff --git a/sources/tech/20180122 A Simple Command-line Snippet Manager.md b/sources/tech/20180122 A Simple Command-line Snippet Manager.md
deleted file mode 100644 index 6f2b83f6d0..0000000000 --- a/sources/tech/20180122 A Simple Command-line Snippet Manager.md +++ /dev/null @@ -1,322 +0,0 @@ -Translating by MjSeven - - -A Simple Command-line Snippet Manager -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/01/pet-6-720x340.png) - -We can't remember all the commands, right? Yes. Except the frequently used commands, it is nearly impossible to remember some long commands that we rarely use. That's why we need to some external tools to help us to find the commands when we need them. In the past, we have reviewed two useful utilities named [**" Bashpast"**][1] and [**" Keep"**][2]. Using Bashpast, we can easily bookmark the Linux commands for easier repeated invocation. And, the Keep utility can be used to keep the some important and lengthy commands in your Terminal, so you can use them on demand. Today, we are going to see yet another tool in the series to help you remembering commands. Say hello to **" Pet"**, a simple command-line snippet manager written in **Go** language. - -Using Pet, you can; - - * Register/add your important, long and complex command snippets. - * Search the saved command snippets interactively. - * Run snippets directly without having to type over and over. - * Edit the saved command snippets easily. - * Sync the snippets via Gist. - * Use variables in snippets. - * And more yet to come. - - - -#### Installing Pet CLI Snippet Manager - -Since it is written in Go language, make sure you have installed Go in your system. - -After Go language, grab the latest binaries from [**the releases page**][3]. 
-``` -wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_amd64.zip -``` - -For 32 bit: -``` -wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_386.zip -``` - -Extract the downloaded archive: -``` -unzip pet_0.2.4_linux_amd64.zip -``` - -32 bit: -``` -unzip pet_0.2.4_linux_386.zip -``` - -Copy the pet binary file to your PATH (i.e **/usr/local/bin** or the like). -``` -sudo cp pet /usr/local/bin/ -``` - -Finally, make it executable: -``` -sudo chmod +x /usr/local/bin/pet -``` - -If you're using Arch based systems, then you can install it from AUR using any AUR helper tools. - -Using [**Pacaur**][4]: -``` -pacaur -S pet-git -``` - -Using [**Packer**][5]: -``` -packer -S pet-git -``` - -Using [**Yaourt**][6]: -``` -yaourt -S pet-git -``` - -Using [**Yay** :][7] -``` -yay -S pet-git -``` - -Also, you need to install **[fzf][8]** or [**peco**][9] tools to enable interactive search. Refer the official GitHub links to know how to install these tools. - -#### Usage - -Run 'pet' without any arguments to view the list of available commands and general options. -``` -$ pet -pet - Simple command-line snippet manager. - -Usage: - pet [command] - -Available Commands: - configure Edit config file - edit Edit snippet file - exec Run the selected commands - help Help about any command - list Show all snippets - new Create a new snippet - search Search snippets - sync Sync snippets - version Print the version number - -Flags: - --config string config file (default is $HOME/.config/pet/config.toml) - --debug debug mode - -h, --help help for pet - -Use "pet [command] --help" for more information about a command. -``` - -To view the help section of a specific command, run: -``` -$ pet [command] --help -``` - -**Configure Pet** - -It just works fine with default values. 
However, you can change the default directory to save snippets, choose the selector (fzf or peco) to use, the default text editor to edit snippets, add GIST id details etc. - -To configure Pet, run: -``` -$ pet configure -``` - -This command will open the default configuration in the default text editor (for example **vim** in my case). Change/edit the values as per your requirements. -``` -[General] - snippetfile = "/home/sk/.config/pet/snippet.toml" - editor = "vim" - column = 40 - selectcmd = "fzf" - -[Gist] - file_name = "pet-snippet.toml" - access_token = "" - gist_id = "" - public = false -~ -``` - -**Creating Snippets** - -To create a new snippet, run: -``` -$ pet new -``` - -Add the command and the description and hit ENTER to save it. -``` -Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9' -Description> Remove numbers from output. -``` - -[![][10]][11] - -This is a simple command to remove all numbers from the echo command output. You can easily remember it. But, if you rarely use it, you may forgot it completely after few days. Of course we can search the history using "CTRL+r", but "Pet" is much easier. Also, Pet can help you to add any number of entries. - -Another cool feature is we can easily add the previous command. To do so, add the following lines in your **.bashrc** or **.zshrc** file. -``` -function prev() { - PREV=$(fc -lrn | head -n 1) - sh -c "pet new `printf %q "$PREV"`" -} -``` - -Do the following command to take effect the saved changes. -``` -source .bashrc -``` - -Or, -``` -source .zshrc -``` - -Now, run any command, for example: -``` -$ cat Documents/ostechnix.txt | tr '|' '\n' | sort | tr '\n' '|' | sed "s/.$/\\n/g" -``` - -To add the above command, you don't have to use "pet new" command. just do: -``` -$ prev -``` - -Add the description to the command snippet and hit ENTER to save. 
- -[![][10]][12] - -**List snippets** - -To view the saved snippets, run: -``` -$ pet list -``` - -[![][10]][13] - -**Edit Snippets** - -If you want to edit the description or the command of a snippet, run: -``` -$ pet edit -``` - -This will open all saved snippets in your default text editor. You can edit or change the snippets as you wish. -``` -[[snippets]] - description = "Remove numbers from output." - command = "echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'" - output = "" - -[[snippets]] - description = "Alphabetically sort one line of text" - command = "\t prev" - output = "" -``` - -**Use Tags in snippets** - -To use tags to a snippet, use **-t** flag like below. -``` -$ pet new -t -Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9 -Description> Remove numbers from output. -Tag> tr command examples - -``` - -**Execute Snippets** - -To execute a saved snippet, run: -``` -$ pet exec -``` - -Choose the snippet you want to run from the list and hit ENTER to run it. - -[![][10]][14] - -Remember you need to install fzf or peco to use this feature. - -**Search Snippets** - -If you have plenty of saved snippets, you can easily search them using a string or key word like below. -``` -$ pet search -``` - -Enter the search term or keyword to narrow down the search results. - -[![][10]][15] - -**Sync Snippets** - -First, you need to obtain the access token. Go to this link and create access token (only need "gist" scope). - -Configure Pet using command: -``` -$ pet configure -``` - -Set that token to **access_token** in **[Gist]** field. - -After setting, you can upload snippets to Gist like below. -``` -$ pet sync -u -Gist ID: 2dfeeeg5f17e1170bf0c5612fb31a869 -Upload success - -``` - -You can also download snippets on another PC. To do so, edit configuration file and set **Gist ID** to **gist_id** in **[Gist]**. 
- -Then, download the snippets using command: -``` -$ pet sync -Download success - -``` - -For more details, refer the help section: -``` -pet -h -``` - -Or, -``` -pet [command] -h -``` - -And, that's all. Hope this helps. As you can see, Pet usage is fairly simple and easy to use! If you're having hard time remembering lengthy commands, Pet utility can definitely be useful. - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/pet-simple-command-line-snippet-manager/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/bookmark-linux-commands-easier-repeated-invocation/ -[2]:https://www.ostechnix.com/save-commands-terminal-use-demand/ -[3]:https://github.com/knqyf263/pet/releases -[4]:https://www.ostechnix.com/install-pacaur-arch-linux/ -[5]:https://www.ostechnix.com/install-packer-arch-linux-2/ -[6]:https://www.ostechnix.com/install-yaourt-arch-linux/ -[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[8]:https://github.com/junegunn/fzf -[9]:https://github.com/peco/peco -[10]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-1.png () -[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-2.png () -[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-3.png () -[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-4.png () -[15]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-5.png () From 943affc3d212746b49044c8acc42833a41995ef1 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Sun, 29 Apr 2018 13:55:26 +0800 Subject: [PATCH 029/102] Create 20180122 A Simple Command-line Snippet Manager.md --- ...2 A 
Simple Command-line Snippet Manager.md | 318 ++++++++++++++++++ 1 file changed, 318 insertions(+) create mode 100644 translated/tech/20180122 A Simple Command-line Snippet Manager.md diff --git a/translated/tech/20180122 A Simple Command-line Snippet Manager.md b/translated/tech/20180122 A Simple Command-line Snippet Manager.md new file mode 100644 index 0000000000..f9e827024e --- /dev/null +++ b/translated/tech/20180122 A Simple Command-line Snippet Manager.md @@ -0,0 +1,318 @@ +一个简单的命令行片段管理器 +===== + +![](https://www.ostechnix.com/wp-content/uploads/2018/01/pet-6-720x340.png) +我们不可能记住所有的命令,对吧?是的。除了经常使用的命令之外,我们几乎不可能记住一些很少使用的长命令。这就是为什么需要一些外部工具来帮助我们在需要时找到命令。在过去,我们已经介绍过两个有用的工具,名为 "Bashpast" 和 "Keep"。使用 Bashpast,我们可以轻松地为 Linux 命令添加书签,以便更轻松地重复调用。而且,Keep 实用程序可以用来在终端中保留一些重要且冗长的命令,以便你可以按需使用它们。今天,我们将看到该系列中的另一个工具,以帮助你记住命令。现在向 "Pet" 打个招呼,这是一个用 Go 语言编写的简单的命令行代码片段管理器。 + +使用 Pet,你可以: + + * 注册/添加你重要的,冗长和复杂的命令片段。 + * 以交互方式来搜索保存的命令片段。 + * 直接运行代码片段而无须一遍又一遍地输入。 + * 轻松编辑保存的代码片段。 + * 通过 Gist 同步片段。 + * 在片段中使用变量 + * 还有很多特性即将来临。 + + +#### 安装 Pet 命令行代码片段管理器 + +由于它是用 Go 语言编写的,所以确保你在系统中已经安装了 Go。 + +安装 Go 后,从 [**Pet 发布页面**][3] 获取最新的二进制文件。 +``` +wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_amd64.zip +``` + +对于 32 位计算机: +``` +wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_386.zip +``` + +解压下载的文件: +``` +unzip pet_0.2.4_linux_amd64.zip +``` + +对于 32 位: +``` +unzip pet_0.2.4_linux_386.zip +``` + +将 pet 二进制文件复制到 PATH(即 **/usr/local/bin** 之类的)。 +``` +sudo cp pet /usr/local/bin/ +``` + +最后,让它可以执行: +``` +sudo chmod +x /usr/local/bin/pet +``` + +如果你使用的是基于 Arch 的系统,那么你可以使用任何 AUR 帮助工具从 AUR 安装它。 + +使用 [**Pacaur**][4]: +``` +pacaur -S pet-git +``` + +使用 [**Packer**][5]: +``` +packer -S pet-git +``` + +使用 [**Yaourt**][6]: +``` +yaourt -S pet-git +``` + +使用 [**Yay** :][7] +``` +yay -S pet-git +``` + +此外,你需要安装 **[fzf][8]** 或 [**peco**][9] 工具以启用交互式搜索。请参阅官方 GitHub 链接了解如何安装这些工具。 + +#### 用法 + +运行没有任何参数的 'pet' 来查看可用命令和常规选项的列表。 +``` +$ pet +pet - Simple 
command-line snippet manager. + +Usage: + pet [command] + +Available Commands: + configure Edit config file + edit Edit snippet file + exec Run the selected commands + help Help about any command + list Show all snippets + new Create a new snippet + search Search snippets + sync Sync snippets + version Print the version number + +Flags: + --config string config file (default is $HOME/.config/pet/config.toml) + --debug debug mode + -h, --help help for pet + +Use "pet [command] --help" for more information about a command. +``` + +要查看特定命令的帮助部分,运行: +``` +$ pet [command] --help +``` + +**配置 Pet** + +使用默认值即可正常工作。不过,你也可以更改保存片段的默认目录、选择要使用的选择器 (fzf 或 peco)、用于编辑片段的默认文本编辑器,以及添加 GIST id 详细信息等。 + + +要配置 Pet,运行: +``` +$ pet configure +``` + +该命令将在默认的文本编辑器中打开默认配置(例如我这里是 **vim**),根据你的要求更改或编辑特定值。 +``` +[General] + snippetfile = "/home/sk/.config/pet/snippet.toml" + editor = "vim" + column = 40 + selectcmd = "fzf" + +[Gist] + file_name = "pet-snippet.toml" + access_token = "" + gist_id = "" + public = false +~ +``` + +**创建片段** + +为了创建一个新的片段,运行: +``` +$ pet new +``` + +添加命令和描述,然后按下 ENTER 键保存它。 +``` +Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9' +Description> Remove numbers from output. 
+``` + +[![][10]][11] + +这是一个简单的命令,用于从 echo 命令输出中删除所有数字。你可以很轻松地记住它。但是,如果你很少使用它,几天后你可能会完全忘记它。当然,我们可以使用 "CTRL+r" 搜索历史记录,但 "Pet" 会更容易。另外,Pet 可以帮助你添加任意数量的条目。 + +另一个很酷的功能是我们可以轻松添加以前的命令。为此,在你的 **.bashrc** 或 **.zshrc** 文件中添加以下行。 +``` +function prev() { + PREV=$(fc -lrn | head -n 1) + sh -c "pet new `printf %q "$PREV"`" +} +``` + +执行以下命令来使保存的更改生效。 +``` +source .bashrc +``` + +或者 +``` +source .zshrc +``` + +现在,运行任何命令,例如: +``` +$ cat Documents/ostechnix.txt | tr '|' '\n' | sort | tr '\n' '|' | sed "s/.$/\\n/g" +``` + +要添加上述命令,你不必使用 "pet new" 命令。只需要: +``` +$ prev +``` + +将说明添加到命令代码片段中,然后按下 ENTER 键保存。 + +![][12] + +**片段列表** + +要查看保存的片段,运行: +``` +$ pet list +``` + +![][13] + +**编辑片段** + +如果你想编辑描述或代码片段的命令,运行: +``` +$ pet edit +``` + +这将在你的默认文本编辑器中打开所有保存的代码片段,你可以根据需要编辑或更改片段。 +``` +[[snippets]] + description = "Remove numbers from output." + command = "echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'" + output = "" + +[[snippets]] + description = "Alphabetically sort one line of text" + command = "\t prev" + output = "" +``` + +**在片段中使用标签** + +要为片段添加标签,请像下面这样使用 **-t** 标志。 +``` +$ pet new -t +Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9 +Description> Remove numbers from output. 
+Tag> tr command examples + +``` + +**执行片段** + +要执行一个保存的片段,运行: +``` +$ pet exec +``` + +从列表中选择你要运行的代码段,然后按 ENTER 键来运行它: + +![][14] + +记住你需要安装 fzf 或 peco 才能使用此功能。 + +**寻找片段** + +如果你保存了很多片段,你可以像下面这样使用字符串或关键词轻松搜索它们。 +``` +$ pet search +``` + +输入搜索字词或关键字以缩小搜索结果范围。 + +![][15] + +**同步片段** + +首先,你需要获取访问令牌。转到此链接 并创建访问令牌(只需要 "gist" 范围)。 + +使用以下命令来配置 Pet: +``` +$ pet configure +``` + +将标记设置到 **[Gist]** 字段中的 **access_token**。 + +设置完成后,你可以像下面一样将片段上传到 Gist。 +``` +$ pet sync -u +Gist ID: 2dfeeeg5f17e1170bf0c5612fb31a869 +Upload success + +``` + +你也可以在其他 PC 上下载片段。为此,编辑配置文件并在 **[Gist]** 中将 **Gist ID** 设置为 **gist_id**。 + +之后,使用以下命令下载片段: +``` +$ pet sync +Download success + +``` + +要获取更多细节,请参阅帮助部分: +``` +pet -h +``` + +或者 +``` +pet [command] -h +``` + +这就是全部了。希望这可以帮助到你。正如你所看到的,Pet 相当简单易用!如果你很难记住冗长的命令,Pet 实用程序肯定会有用。 + +干杯! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/pet-simple-command-line-snippet-manager/ + +作者:[SK][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/bookmark-linux-commands-easier-repeated-invocation/ +[2]:https://www.ostechnix.com/save-commands-terminal-use-demand/ +[3]:https://github.com/knqyf263/pet/releases +[4]:https://www.ostechnix.com/install-pacaur-arch-linux/ +[5]:https://www.ostechnix.com/install-packer-arch-linux-2/ +[6]:https://www.ostechnix.com/install-yaourt-arch-linux/ +[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[8]:https://github.com/junegunn/fzf +[9]:https://github.com/peco/peco +[10]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-1.png +[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-2.png 
+[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-3.png +[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-4.png +[15]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-5.png From 8974a852995a3853287a1a50d8aa676063a3ec53 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 17:42:35 +0800 Subject: [PATCH 030/102] PRF:20180321 The Command line Personal Assistant For Your Linux System.md @amwps290 --- ...ersonal Assistant For Your Linux System.md | 110 +++++------------- 1 file changed, 32 insertions(+), 78 deletions(-) diff --git a/translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md b/translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md index a96ed69bc4..c483ed94be 100644 --- a/translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md +++ b/translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md @@ -1,37 +1,37 @@ -# 您的 Linux 系统命令行个人助理 +Yoda:您的 Linux 系统命令行个人助理 +=========== ![](https://www.ostechnix.com/wp-content/uploads/2018/03/Yoda-720x340.png) -不久前,我们写了一个名为 [**“Betty”**][1] 的命令行虚拟助手。今天,我偶然发现了一个类似的实用程序,叫做 **“Yoda”**。Yoda 是一个命令行个人助理,可以帮助您在 Linux 中完成一些琐碎的任务。它是用 Python 编写的一个免费的开源应用程序。在本指南中,我们将了解如何在 GNU/Linux 中安装和使用 Yoda。 +不久前,我们介绍了一个名为 [“Betty”][1] 的命令行虚拟助手。今天,我偶然发现了一个类似的实用程序,叫做 “Yoda”。Yoda 是一个命令行个人助理,可以帮助您在 Linux 中完成一些琐碎的任务。它是用 Python 编写的一个自由开源应用程序。在本指南中,我们将了解如何在 GNU/Linux 中安装和使用 Yoda。 ### 安装 Yoda,命令行私人助理。 -Yoda 需要 **Python 2** 和 PIP 。如果在您的 Linux 中没有安装 PIP,请参考下面的指南来安装它。只要确保已经安装了 **python2-pip** 。Yoda 可能不支持 Python 3。 +Yoda 需要 Python 2 和 PIP 。如果在您的 Linux 中没有安装 PIP,请参考下面的指南来安装它。只要确保已经安装了 python2-pip 。Yoda 可能不支持 Python 3。 -**注意**:我建议你在虚拟环境下试用 Yoda。 不仅仅是 Yoda,总是在虚拟环境中尝试任何 Python 应用程序,让它们不会干扰全局安装的软件包。 您可以按照上文链接中标题为“创建虚拟环境”一节中所述设置虚拟环境。 +- [如何使用 pip 管理 Python 包](https://www.ostechnix.com/manage-python-packages-using-pip/) -在您的系统上安装了 pip 之后,使用下面的命令克隆 Yoda 库。 +注意:我建议你在 Python 虚拟环境下试用 Yoda。 不仅仅是 
Yoda,应该总在虚拟环境中尝试任何 Python 应用程序,让它们不会干扰全局安装的软件包。 您可以按照上文链接中标题为“创建虚拟环境”一节中所述设置虚拟环境。 + +在您的系统上安装了 `pip` 之后,使用下面的命令克隆 Yoda 库。 ``` $ git clone https://github.com/yoda-pa/yoda - ``` -上面的命令将在当前工作目录中创建一个名为 “yoda” 的目录,并在其中克隆所有内容。转到 Yoda 目录: +上面的命令将在当前工作目录中创建一个名为 `yoda` 的目录,并在其中克隆所有内容。转到 `yoda` 目录: ``` $ cd yoda/ - ``` -运行以下命令安装Yoda应用程序。 +运行以下命令安装 Yoda 应用程序。 ``` $ pip install . - ``` -请注意最后的点(.)。 现在,所有必需的软件包将被下载并安装。 +请注意最后的点(`.`)。 现在,所有必需的软件包将被下载并安装。 ### 配置 Yoda @@ -41,7 +41,6 @@ $ pip install . ``` $ yoda setup new - ``` 填写下列的问题: @@ -59,7 +58,6 @@ Where shall your config be stored? (Default: ~/.yoda/) A configuration file already exists. Are you sure you want to overwrite it? (y/n) y - ``` 你的密码在加密后保存在配置文件中,所以不用担心。 @@ -68,25 +66,22 @@ y ``` $ yoda setup check - ``` 你会看到如下的输出。 ``` Name: Senthil Kumar -Email: [email protected] +Email: sk@senthilkumar.com Github username: sk - ``` -默认情况下,您的信息存储在 **~/.yoda** 目录中。 +默认情况下,您的信息存储在 `~/.yoda` 目录中。 要删除现有配置,请执行以下操作: ``` $ yoda setup delete - ``` ### 用法 @@ -95,7 +90,6 @@ Yoda 包含一个简单的聊天机器人。您可以使用下面的聊天命令 ``` $ yoda chat who are you - ``` 样例输出: @@ -107,14 +101,13 @@ I'm a virtual agent $ yoda chat how are you Yoda speaks: I'm doing very well. Thanks! 
- ``` -以下是我们可以用 Yoda 做的事情: +以下是我们可以用 Yoda 做的事情: -**测试网络速度** +#### 测试网络速度 -让我们问一下 Yoda 关于互联网速度的问题。运行: +让我们问一下 Yoda 关于互联网速度的问题。运行: ``` $ yoda speedtest @@ -122,18 +115,16 @@ Speed test results: Ping: 108.45 ms Download: 0.75 Mb/s Upload: 1.95 Mb/s - ``` -**缩短并展开网址** +#### 缩短和展开网址 -Yoda 还有助于缩短任何网址。 +Yoda 还有助于缩短任何网址: ``` $ yoda url shorten https://www.ostechnix.com/ Here's your shortened URL: https://goo.gl/hVW6U0 - ``` 要展开缩短的网址: @@ -142,11 +133,9 @@ https://goo.gl/hVW6U0 $ yoda url expand https://goo.gl/hVW6U0 Here's your original URL: https://www.ostechnix.com/ - ``` - -**阅读黑客新闻** +#### 阅读 Hacker News 我是 Hacker News 网站的常客。 如果你像我一样,你可以使用 Yoda 从下面的 Hacker News 网站阅读新闻。 @@ -159,12 +148,11 @@ Description-- I came up with this idea "a Yelp for developers" when talking with url-- https://news.ycombinator.com/item?id=16636071 Continue? [press-"y"] - ``` -Yoda 将一次显示一个项目。 要阅读下一条新闻,只需输入 “y” 并按下 ENTER。 +Yoda 将一次显示一个项目。 要阅读下一条新闻,只需输入 `y` 并按下回车。 -**管理个人日记** +#### 管理个人日记 我们也可以保留个人日记以记录重要事件。 @@ -174,7 +162,6 @@ Yoda 将一次显示一个项目。 要阅读下一条新闻,只需输入 “y $ yoda diary nn Input your entry for note: Today I learned about Yoda - ``` 要创建新笔记,请再次运行上述命令。 @@ -188,7 +175,6 @@ Today's notes: Time | Note --------|----- 16:41:41| Today I learned about Yoda - ``` 不仅仅是笔记,Yoda 还可以帮助你创建任务。 @@ -199,7 +185,6 @@ Today's notes: $ yoda diary nt Input your entry for task: Write an article about Yoda and publish it on OSTechNix - ``` 要查看任务列表,请运行: @@ -217,10 +202,9 @@ Summary: ---------------- Incomplete tasks: 1 Completed tasks: 0 - ``` -正如你在上面看到的,我有一个未完成的任务。 要将其标记为已完成,请运行以下命令并输入已完成的任务序列号并按下 ENTER 键: +正如你在上面看到的,我有一个未完成的任务。 要将其标记为已完成,请运行以下命令并输入已完成的任务序列号并按下回车键: ``` $ yoda diary ct @@ -231,7 +215,6 @@ Number | Time | Task 1 | 16:44:03: Write an article about Yoda and publish it on OSTechNix Enter the task number that you would like to set as completed 1 - ``` 您可以随时使用命令分析当前月份的任务: @@ -241,18 +224,16 @@ $ yoda diary analyze Percentage of incomplete task : 0 Percentage of complete task : 100 Frequency of adding task 
(Task/Day) : 3 - ``` 有时候,你可能想要记录一个关于你爱的或者敬佩的人的个人资料。 -**记录关于爱人的笔记** +#### 记录关于爱人的笔记 首先,您需要设置配置来存储朋友的详细信息。 请运行: ``` $ yoda love setup - ``` 输入你的朋友的详细信息: @@ -264,7 +245,6 @@ Enter sex(M/F): M Where do they live? Rameswaram - ``` 要查看此人的详细信息,请运行: @@ -272,7 +252,6 @@ Rameswaram ``` $ yoda love status {'place': 'Rameswaram', 'name': 'Abdul Kalam', 'sex': 'M'} - ``` 要添加你的爱人的生日: @@ -281,7 +260,6 @@ $ yoda love status $ yoda love addbirth Enter birthday 15-10-1931 - ``` 查看生日: @@ -289,7 +267,6 @@ Enter birthday ``` $ yoda love showbirth Birthday is 15-10-1931 - ``` 你甚至可以添加关于该人的笔记: @@ -297,7 +274,6 @@ Birthday is 15-10-1931 ``` $ yoda love note Avul Pakir Jainulabdeen Abdul Kalam better known as A. P. J. Abdul Kalam, was the 11th President of India from 2002 to 2007. - ``` 您可以使用命令查看笔记: @@ -306,7 +282,6 @@ Avul Pakir Jainulabdeen Abdul Kalam better known as A. P. J. Abdul Kalam, was th $ yoda love notes Notes: 1: Avul Pakir Jainulabdeen Abdul Kalam better known as A. P. J. Abdul Kalam, was the 11th President of India from 2002 to 2007. - ``` 你也可以写下这个人喜欢的东西: @@ -317,7 +292,6 @@ Add things they like Physics, Aerospace Want to add more things they like? 
[y/n] n - ``` 要查看他们喜欢的东西,请运行: @@ -326,12 +300,9 @@ n $ yoda love likes Likes: 1: Physics, Aerospace - ``` -**** - -**跟踪资金费用** +#### 跟踪资金费用 您不需要单独的工具来维护您的财务支出。 Yoda 会替您处理好。 @@ -339,7 +310,6 @@ Likes: ``` $ yoda money setup - ``` 输入您的货币代码和初始金额: @@ -360,7 +330,6 @@ Enter initial amount: ``` $ yoda money status {'initial_money': 10000, 'currency_code': 'INR'} - ``` 让我们假设你买了一本价值 250 卢比的书。 要添加此费用,请运行: @@ -369,7 +338,6 @@ $ yoda money status $ yoda money exp Spend 250 INR on books output: - ``` 要查看花费,请运行: @@ -377,44 +345,35 @@ output: ``` $ yoda money exps 2018-03-21 17:12:31 INR 250 books - ``` -**** +#### 创建想法列表 -**创建想法列表** - -创建一个新的想法: +创建一个新的想法: ``` $ yoda ideas add --task --inside - ``` -列出想法: +列出想法: ``` $ yoda ideas show - ``` -从任务中移除一个想法: +从任务中移除一个想法: ``` $ yoda ideas remove --task --inside - ``` -要完全删除这个想法,请运行: +要完全删除这个想法,请运行: ``` $ yoda ideas remove --project - ``` -**** - -**学习英语词汇** +#### 学习英语词汇 Yoda 帮助你学习随机英语单词并追踪你的学习进度。 @@ -422,36 +381,31 @@ Yoda 帮助你学习随机英语单词并追踪你的学习进度。 ``` $ yoda vocabulary word - ``` -它会随机显示一个单词。 按 ENTER 键显示单词的含义。 再一次,Yoda 问你是否已经知道这个词的意思。 如果您已经知道,请输入“是”。 如果您不知道,请输入“否”。 这可以帮助你跟踪你的进度。 使用以下命令来了解您的进度。 +它会随机显示一个单词。 按回车键显示单词的含义。 再一次,Yoda 问你是否已经知道这个词的意思。 如果您已经知道,请输入“是”。 如果您不知道,请输入“否”。 这可以帮助你跟踪你的进度。 使用以下命令来了解您的进度。 ``` $ yoda vocabulary accuracy - ``` 此外,Yoda 可以帮助您做其他一些事情,比如找到单词的定义和创建插卡以轻松学习任何内容。 有关更多详细信息和可用选项列表,请参阅帮助部分。 ``` $ yoda --help - ``` 更多好的东西来了。请继续关注! 干杯! 
- - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/yoda-the-command-line-personal-assistant-for-your-linux-system/ 作者:[SK][a] 译者:[amwps290](https://github.com/amwps290) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a1e5bdc28bd23cb00f5ae3d510a39e04702325bd Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 17:43:05 +0800 Subject: [PATCH 031/102] PUB:20180321 The Command line Personal Assistant For Your Linux System.md @amwps290 https://linux.cn/article-9590-1.html --- ...1 The Command line Personal Assistant For Your Linux System.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180321 The Command line Personal Assistant For Your Linux System.md (100%) diff --git a/translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md b/published/20180321 The Command line Personal Assistant For Your Linux System.md similarity index 100% rename from translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md rename to published/20180321 The Command line Personal Assistant For Your Linux System.md From 0a32be70ec7f4b03de26ca4d183dd1f7d7a116ea Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 18:29:39 +0800 Subject: [PATCH 032/102] PRF:20180208 Become a Hollywood movie hacker with these three command line tools.md @wyxplus --- ...ker with these three command line tools.md | 38 +++++++++---------- 1 file changed, 17 insertions(+), 21 deletions(-) diff --git a/translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md b/translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md index 1e1c8aa032..a131f84969 100644 --- a/translated/tech/20180208 Become a Hollywood movie hacker with these three 
command line tools.md +++ b/translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md @@ -1,53 +1,49 @@ -用这三个命令行工具成为好莱坞电影中的黑客 +假装很忙的三个命令行工具 ====== +> 有时候你很忙。而有时候你只是需要看起来很忙,就像电影中的黑客一样。有一些开源工具就是干这个的。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah) -如果在你成长过程中有看过谍战片、动作片或犯罪片,那么你就会清楚地了解黑客的电脑屏幕是什么样子。就像是在《黑客帝国》电影中,[代码雨][1] 一样的十六进制数字流,又或是一排排快速移动的代码 。 +如果在你在消磨时光时看过谍战片、动作片或犯罪片,那么你就会清晰地在脑海中勾勒出黑客的电脑屏幕的样子。就像是在《黑客帝国》电影中,[代码雨][1] 一样的十六进制数字流,又或是一排排快速移动的代码。 -也许电影中出现一幅世界地图,其中布满了闪烁的光点除和一些快速更新的字符。 而且是3D旋转的几何图像,为什么不可能出现在现实中呢? 如果可能的话,那么就会出现数量多得不可思议的显示屏,以及不符合人体工学的电脑椅或其他配件。 在《剑鱼行动》电影中黑客就使用了七个显示屏。 +也许电影中出现一幅世界地图,其中布满了闪烁的光点和一些快速更新的图表。不可或缺的,也可能有 3D 旋转的几何形状。甚至,这一切都会显示在一些完全不符合人类习惯的数量荒谬的显示屏上。 在《剑鱼行动》电影中黑客就使用了七个显示屏。 -当然,我们这些从事计算机行业的人一下子就明白这完全是胡说八道。虽然在我们中,许多人都有双显示器(或更多),但一个闪烁的数据仪表盘通常和专注工作是相互矛盾的。编写代码、项目管理和系统管理与日常工作不同。我们遇到的大多数情况,为了解决问题,都需要大量的思考,与客户沟通所得到一些研究和组织的资料,然后才是少许的 [敲代码][7]。 +当然,我们这些从事计算机行业的人一下子就明白这完全是胡说八道。虽然在我们中,许多人都有双显示器(或更多),但一个闪烁的数据仪表盘、刷新的数据通常和专注工作是相互矛盾的。编写代码、项目管理和系统管理与日常工作不同。我们遇到的大多数情况,为了解决问题,都需要大量的思考,与客户沟通所得到一些研究和组织的资料,然后才是少许的 [敲代码][7]。 然而,这与我们想追求电影中的效果并不矛盾,也许,我们只是想要看起来“忙于工作”而已。 -**注:当然,我仅仅是在此胡诌。**如果实际上您公司是根据您繁忙程度来评估您的工作时,无论您是蓝领还是白领,都需要亟待解决这样的工作文化。假装工作很忙是一种有毒的文化,对公司和员工都有害无益。 - -这就是说,让我们找些乐子,用一些老式的、毫无意义的数据和代码片段填充我们的屏幕。(当然,数据或许有意义,而不是没有上下文。)当然有许多有趣的图形界面,如 [hackertyper.net][8] 或是 [GEEKtyper.com][9] 网站(译者注:是在线模拟黑客网站),为什么不使用Linux终端程序呢?对于更老派的外观,可以考虑使用 [酷炫复古终端][10],这听起来确实如此:一个酷炫的复古终端程序。我将在下面的屏幕截图中使用酷炫复古终端,因为它看起来的确很酷。 +**注:当然,我仅仅是在此胡诌。**如果您公司实际上是根据您繁忙程度来评估您的工作时,无论您是蓝领还是白领,都需要亟待解决这样的工作文化。假装工作很忙是一种有毒的文化,对公司和员工都有害无益。 +这就是说,让我们找些乐子,用一些老式的、毫无意义的数据和代码片段填充我们的屏幕。(当然,数据或许有意义,但不是在这种没有上下文的环境中。)当然有一些用于此用途的有趣的图形界面程序,如 [hackertyper.net][8] 或是 [GEEKtyper.com][9] 网站(LCTT 译注:是在线假装黑客操作的网站),为什么不使用标准的 Linux 终端程序呢?对于更老派的外观,可以考虑使用 [酷炫复古终端][10],这听起来确实如此:一个酷炫的复古终端程序。我将在下面的屏幕截图中使用酷炫复古终端,因为它看起来的确很酷。 ### Genact 
-我们来看下第一个工具——Genact。Genact的原理很简单,就是慢慢地循环播放您选择的一个序列,让您的代码在您外出休息时“编译”。由您来决定播放顺序,但是其中默认包含数字货币挖矿模拟器、PHP管理依赖关系工具、内核编译器、下载器、内存转储等工具。其中我最喜欢的是其中类似《模拟城市》加载显示。所以只要没有人仔细检查,你可以花一整个下午等待您的电脑完成进度条。 +我们来看下第一个工具——Genact。Genact 的原理很简单,就是慢慢地无尽循环播放您选择的一个序列,让您的代码在您外出休息时“编译”。由您来决定播放顺序,但是其中默认包含数字货币挖矿模拟器、Composer PHP 依赖关系管理工具、内核编译器、下载器、内存转储等工具。其中我最喜欢的是其中类似《模拟城市》加载显示。所以只要没有人仔细检查,你可以花一整个下午等待您的电脑完成进度条。 -Genact[发行版][11]支持Linux、OS X和Windows。并且用Rust编写。[源代码][12] 在GitHub上开源(遵循[MIT许可证][13]) +Genact [发布了][11] 支持 Linux、OS X 和 Windows 的版本。并且其 Rust [源代码][12] 在 GitHub 上开源(遵循 [MIT 许可证][13])。 ![](https://opensource.com/sites/default/files/uploads/genact.gif) ### Hollywood +Hollywood 采取更直接的方法。它本质上是在终端中创建一个随机的数量和配置的分屏,并启动那些看起来很繁忙的应用程序,如 htop、目录树、源代码文件等,并每隔几秒将其切换。它被组织成一个 shell 脚本,所以可以非常容易地根据需求进行修改。 -Hollywood采取更直接的方法。它本质上是在终端中创建一个随机数字和配置分屏,并启动跑个不停的应用程序,如htop,目录树,源代码文件等,并每隔几秒将其切换。它被放在一起作为一个shell脚本,所以可以非常容易地根据需求进行修改。 - - -Hollywood的 [源代码][14] 在GitHub上开源(遵循[Apache 2.0许可证][15])。 +Hollywood的 [源代码][14] 在 GitHub 上开源(遵循 [Apache 2.0 许可证][15])。 ![](https://opensource.com/sites/default/files/uploads/hollywood.gif) ### Blessed-contrib -Blessed-contrib是我个人最喜欢的应用,实际上并不是为了表演而专门设计的应用。相反地,它是一个基于Node.js的后台终端构建库的演示文件。与其他两个不同,实际上我已经在工作中使用Blessed-contrib的库,而不是用于假装忙于工作。因为它是一个相当有用的库,并且可以使用一组在命令行显示信息的小部件。与此同时填充虚拟数据也很容易,所以可以很容易实现模拟《战争游戏》的想法。 - - -Blessed-contrib的[源代码][16]在GitHub上(遵循[MIT许可证][17])。 +Blessed-contrib 是我个人最喜欢的应用,实际上并不是为了这种表演而专门设计的应用。相反地,它是一个基于 Node.js 的终端仪表盘的构建库的演示文件。与其他两个不同,实际上我已经在工作中使用 Blessed-contrib 的库,而不是用于假装忙于工作。因为它是一个相当有用的库,并且可以使用一组在命令行显示信息的小部件。与此同时填充虚拟数据也很容易,所以可以很容易实现你在计算机上模拟《战争游戏》的想法。 +Blessed-contrib 的[源代码][16]在 GitHub 上(遵循 [MIT 许可证][17])。 ![](https://opensource.com/sites/default/files/uploads/blessed.gif) -当然,尽管这些工具很容易使用,但也有很多其他的方式使你的屏幕丰富。在你看到电影中最常用的工具之一就是Nmap,一个开源的网络安全扫描工具。实际上,它被广泛用作展示好莱坞电影中,黑客电脑屏幕上的工具。因此Nmap的开发者创建了一个 [页面][18],列出了它出现在其中的一些电影,从《黑客帝国2:重装上阵》到《谍影重重3》、《龙纹身的女孩》,甚至《虎胆龙威4》。 - -当然,您可以创建自己的组合,使用终端多路复用器(如屏幕或tmux)启动您希望的任何数据分散应用程序。 
+当然,尽管这些工具很容易使用,但也有很多其他的方式使你的屏幕丰富。在你看到电影中最常用的工具之一就是Nmap,这是一个开源的网络安全扫描工具。实际上,它被广泛用作展示好莱坞电影中,黑客电脑屏幕上的工具。因此 Nmap 的开发者创建了一个 [页面][18],列出了它出现在其中的一些电影,从《黑客帝国 2:重装上阵》到《谍影重重3》、《龙纹身的女孩》,甚至《虎胆龙威 4》。 +当然,您可以创建自己的组合,使用终端多路复用器(如 `screen` 或 `tmux`)启动您希望使用的任何数据切分程序。 那么,您是如何使用您的屏幕的呢? @@ -57,7 +53,7 @@ via: https://opensource.com/article/18/2/command-line-tools-productivity 作者:[Jason Baker][a] 译者:[wyxplus](https://github.com/wyxplus) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 3d197e23f42e0245fe394ed8f3bfb2b718db177d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 18:30:27 +0800 Subject: [PATCH 033/102] PUB:20180208 Become a Hollywood movie hacker with these three command line tools.md @wyxplus https://linux.cn/article-9591-1.html --- ... Hollywood movie hacker with these three command line tools.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180208 Become a Hollywood movie hacker with these three command line tools.md (100%) diff --git a/translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md b/published/20180208 Become a Hollywood movie hacker with these three command line tools.md similarity index 100% rename from translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md rename to published/20180208 Become a Hollywood movie hacker with these three command line tools.md From c2ab15f41e39a33f3f0f8eda90e29b4299697713 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 21:32:06 +0800 Subject: [PATCH 034/102] PRF:20180118 Securing the Linux filesystem with Tripwire.md @geekpi --- ...ring the Linux filesystem with Tripwire.md | 47 ++++++++----------- 1 file changed, 19 insertions(+), 28 deletions(-) diff --git a/translated/tech/20180118 Securing the Linux filesystem with Tripwire.md 
b/translated/tech/20180118 Securing the Linux filesystem with Tripwire.md index 8d2f6db004..f82584a772 100644 --- a/translated/tech/20180118 Securing the Linux filesystem with Tripwire.md +++ b/translated/tech/20180118 Securing the Linux filesystem with Tripwire.md @@ -1,54 +1,50 @@ 使用 Tripwire 保护 Linux 文件系统 ====== +> 如果恶意软件或其情况改变了你的文件系统,Linux 完整性检查工具会提示你。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/file_system.jpg?itok=pzCrX1Kc) -尽管 Linux 被认为是最安全的操作系统(在 Windows 和 MacOS 之前),但它仍然容易受到 rootkit 和其他恶意软件的影响。因此,Linux 用户需要知道如何保护他们的服务器或个人电脑免遭破坏,他们需要采取的第一步就是保护文件系统。 +尽管 Linux 被认为是最安全的操作系统(排在 Windows 和 MacOS 之前),但它仍然容易受到 rootkit 和其他恶意软件的影响。因此,Linux 用户需要知道如何保护他们的服务器或个人电脑免遭破坏,他们需要采取的第一步就是保护文件系统。 -在本文中,我们将看看 [Tripwire][1],这是保护 Linux 文件系统的绝佳工具。Tripwire 是一个完整性检查工具,使系统管理员、安全工程师和其他人能够检测系统文件的变更。虽然它不是唯一的选择([AIDE][2] 和 [Samhain][3] 提供类似功能),但 Tripwire 可以说是 Linux 系统文件中最常用的完整性检查程序,并在 GPLv2 许可证下开源。 +在本文中,我们将看看 [Tripwire][1],这是保护 Linux 文件系统的绝佳工具。Tripwire 是一个完整性检查工具,使得系统管理员、安全工程师和其他人能够检测系统文件的变更。虽然它不是唯一的选择([AIDE][2] 和 [Samhain][3] 提供类似功能),但 Tripwire 可以说是 Linux 系统文件中最常用的完整性检查程序,并在 GPLv2 许可证下开源。 ### Tripwire 如何工作 了解 Tripwire 如何运行对了解 Tripwire 在安装后会做什么有所帮助。Tripwire 主要由两个部分组成:策略和数据库。策略列出了完整性检查器应该生成快照的所有文件和目录,还创建了用于识别对目录和文件更改违规的规则。数据库由 Tripwire 生成的快照组成。 -Tripwire 还有一个配置文件,它指定数据库、策略文件和 Tripwire 可执行文件的位置。它还提供两个加密密钥 - 站点密钥和本地密钥 - 以保护重要文件免遭篡改。站点密钥保护策略和配置文件,而本地密钥保护数据库和生成的报告。 +Tripwire 还有一个配置文件,它指定数据库、策略文件和 Tripwire 可执行文件的位置。它还提供两个加密密钥 —— 站点密钥和本地密钥 —— 以保护重要文件免遭篡改。站点密钥保护策略和配置文件,而本地密钥保护数据库和生成的报告。 -Tripwire 定期将目录和文件与数据库中的快照进行比较并报告所有的更改。 +Tripwire 会定期将目录和文件与数据库中的快照进行比较并报告所有的更改。 ### 安装 Tripwire 要 Tripwire,我们需要先下载并安装它。Tripwire 适用于几乎所有的 Linux 发行版。你可以从 [Sourceforge][4] 下载一个开源版本,并如下根据你的 Linux 版本进行安装。 Debian 和 Ubuntu 用户可以使用 `apt-get` 直接从仓库安装 Tripwire。非 root 用户应该输入 `sudo` 命令通过 `apt-get` 安装 Tripwire。 + ``` - - sudo apt-get update - sudo  apt-get install tripwire   ``` -CentOS 和其他基于 rpm 的发行版使用类似的过程。为了最佳实践,请在安装新软件包(如 Tripwire)之前更新仓库。命令 `yum install epel-release` 
意思是我们想要安装额外的存储库。 (`epel` 代表 Extra Packages for Enterprise Linux。) +CentOS 和其他基于 RPM 的发行版使用类似的过程。为了最佳实践,请在安装新软件包(如 Tripwire)之前更新仓库。命令 `yum install epel-release` 意思是我们想要安装额外的存储库。 (`epel` 代表 Extra Packages for Enterprise Linux。) + ``` - - yum update - yum install epel-release - yum install tripwire   ``` 此命令会在安装中运行让 Tripwire 有效运行所需的配置。另外,它会在安装过程中询问你是否使用密码。你可以两个选择都选择 “Yes”。 -另外,如果需要构建配置文件,请选择 “Yes”。选择并确认站点密钥和本地密钥的密码。(建议使用复杂的密码,例如 `Il0ve0pens0urce`。) +另外,如果需要构建配置文件,请选择 “Yes”。选择并确认站点密钥和本地密钥的密码。(建议使用复杂的密码,例如 `Il0ve0pens0urce` 这样的。) ### 建立并初始化 Tripwire 数据库 接下来,按照以下步骤初始化 Tripwire 数据库: + ``` - - tripwire --init ``` @@ -57,39 +53,34 @@ tripwire --init ### 使用 Tripwire 进行基本的完整性检查 你可以使用以下命令让 Tripwire 检查你的文件或目录是否已被修改。Tripwire 将文件和目录与数据库中的初始快照进行比较的能力依赖于你在活动策略中创建的规则。 + ``` - - tripwire  --check   ``` 你还可以将 `-check` 命令限制为特定的文件或目录,如下所示: + ``` - - tripwire   --check   /usr/tmp   ``` 另外,如果你需要使用 Tripwire 的 `-check` 命令的更多帮助,该命令能够查阅 Tripwire 的手册: + ``` - - tripwire  --check  --help   ``` ### 使用 Tripwire 生成报告 -要轻松生成每日系统完整性报告,请使用以下命令创建一个 “crontab”: +要轻松生成每日系统完整性报告,请使用以下命令创建一个 crontab 任务: + ``` - - crontab -e ``` -之后,你可以编辑此文件(使用你选择的文本编辑器)来引入由 cron 运行的任务。例如,你可以使用以下命令设置一个 cron 作业,在每天的 5:40 将 Tripwire 的报告发送到你的邮箱: +之后,你可以编辑此文件(使用你选择的文本编辑器)来引入由 cron 运行的任务。例如,你可以使用以下命令设置一个 cron 任务,在每天的 5:40 将 Tripwire 的报告发送到你的邮箱: + ``` - - 40 5  *  *  *  usr/sbin/tripwire   --check ``` @@ -101,7 +92,7 @@ via: https://opensource.com/article/18/1/securing-linux-filesystem-tripwire 作者:[Michael Kwaku Aboagye][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 14cfa4f7618638a5573e11ba408512c3934a4677 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 21:32:23 +0800 Subject: [PATCH 035/102] PUB:20180118 Securing the Linux filesystem with Tripwire.md @geekpi --- .../20180118 Securing the Linux filesystem with Tripwire.md | 0 1 file changed, 0 
insertions(+), 0 deletions(-) rename {translated/tech => published}/20180118 Securing the Linux filesystem with Tripwire.md (100%) diff --git a/translated/tech/20180118 Securing the Linux filesystem with Tripwire.md b/published/20180118 Securing the Linux filesystem with Tripwire.md similarity index 100% rename from translated/tech/20180118 Securing the Linux filesystem with Tripwire.md rename to published/20180118 Securing the Linux filesystem with Tripwire.md From d320cdfaf1574a2fe5b5c9de3ea26643924858d2 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 23:01:20 +0800 Subject: [PATCH 036/102] PRF:20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md @qhwdw --- ...ARCH APP USING DOCKER AND ELASTICSEARCH.md | 216 +++++++----------- 1 file changed, 83 insertions(+), 133 deletions(-) diff --git a/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md b/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md index 3210c7d284..053f70bff3 100644 --- a/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md +++ b/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md @@ -1,59 +1,55 @@ -使用 DOCKER 和 ELASTICSEARCH 构建一个全文搜索应用程序 +使用 Docker 和 Elasticsearch 构建一个全文搜索应用程序 ============================================================ - _如何在超过 500 万篇文章的 Wikipedia 上找到与你研究相关的文章?_ +![](https://blog-images.patricktriest.com/uploads/library.jpg) - _如何在超过 20 亿用户的 Facebook 中找到你的朋友(并且还拼错了名字)?_ +_如何在超过 500 万篇文章的 Wikipedia 上找到与你研究相关的文章?_ - _谷歌如何在整个因特网上搜索你的模糊的、充满拼写错误的查询?_ +_如何在超过 20 亿用户的 Facebook 中找到你的朋友(并且还拼错了名字)?_ -在本教程中,我们将带你探索如何配置我们自己的全文探索应用程序(与上述问题中的系统相比,它的复杂度要小很多)。我们的示例应用程序将提供一个 UI 和 API 去从 100 部经典文学(比如,_Peter Pan_ ,  _Frankenstein_ , 和  _Treasure Island_ )中搜索完整的文本。 +_谷歌如何在整个因特网上搜索你的模糊的、充满拼写错误的查询?_ -你可以在这里([https://search.patricktriest.com][6])预览教程中应用程序的完整版本。 
+在本教程中,我们将带你探索如何配置我们自己的全文搜索应用程序(与上述问题中的系统相比,它的复杂度要小很多)。我们的示例应用程序将提供一个 UI 和 API 去从 100 部经典文学(比如,《彼得·潘》 、  《弗兰肯斯坦》 和  《金银岛》)中搜索完整的文本。 + +你可以在这里([https://search.patricktriest.com][6])预览该教程应用的完整版本。 ![preview webapp](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_4_0.png) -这个应用程序的源代码是 100% 公开的,可以在 GitHub 仓库上找到它们 —— [https://github.com/triestpa/guttenberg-search][7] +这个应用程序的源代码是 100% 开源的,可以在 GitHub 仓库上找到它们 —— [https://github.com/triestpa/guttenberg-search][7] 。 -在应用程序中添加一个快速灵活的全文搜索可能是个挑战。大多数的主流数据库,比如,[PostgreSQL][8] 和 [MongoDB][9],在它们的查询和索引结构中都提供一个有限的、基础的、文本搜索的功能。为实现高质量的全文搜索,通常的最佳选择是单独数据存储。[Elasticsearch][10] 是一个开源数据存储的领导者,它专门为执行灵活而快速的全文搜索进行了优化。 +在应用程序中添加一个快速灵活的全文搜索可能是个挑战。大多数的主流数据库,比如,[PostgreSQL][8] 和 [MongoDB][9],由于受其查询和索引结构的限制只能提供一个非常基础的文本搜索功能。为实现高质量的全文搜索,通常的最佳选择是单独的数据存储。[Elasticsearch][10] 是一个开源数据存储的领导者,它专门为执行灵活而快速的全文搜索进行了优化。 -我们将使用 [Docker][11] 去配置我们自己的项目环境和依赖。Docker 是一个容器化引擎,它被 [Uber][12]、[Spotify][13]、[ADP][14]、以及 [Paypal][15] 使用。构建容器化应用的一个主要优势是,项目的设置在 Windows、macOS、以及 Linux 上都是相同的 —— 这使我写这个教程快速又简单。如果你还没有使用过 Docker,不用担心,我们接下来将经历完整的项目配置。 +我们将使用 [Docker][11] 去配置我们自己的项目环境和依赖。Docker 是一个容器化引擎,它被 [Uber][12]、[Spotify][13]、[ADP][14] 以及 [Paypal][15] 使用。构建容器化应用的一个主要优势是,项目的设置在 Windows、macOS、以及 Linux 上都是相同的 —— 这使我写这个教程快速又简单。如果你还没有使用过 Docker,不用担心,我们接下来将经历完整的项目配置。 -我也会使用 [Node.js][16] (使用 [Koa][17] 框架)、和 [Vue.js][18],用它们分别去构建我们自己的搜索 API 和前端 Web 应用程序。 +我也会使用 [Node.js][16] (使用 [Koa][17] 框架)和 [Vue.js][18],用它们分别去构建我们自己的搜索 API 和前端 Web 应用程序。 -### 1 - ELASTICSEARCH 是什么? +### 1 - Elasticsearch 是什么? -全文搜索在现代应用程序中是一个有大量需求的特性。搜索也可能是最难的一项特性 —— 许多流行的网站的搜索功能都不合格,要么返回结果太慢,要么找不到精确的结果。通常,这种情况是被底层的数据库所局限:大多数标准的关系型数据库在基本的 `CONTAINS` 或 `LIKE` SQL 查询上有局限性,它仅提供大多数基本的字符串匹配功能。 +全文搜索在现代应用程序中是一个有大量需求的特性。搜索也可能是最难的一项特性 —— 许多流行的网站的搜索功能都不合格,要么返回结果太慢,要么找不到精确的结果。通常,这种情况是被底层的数据库所局限:大多数标准的关系型数据库局限于基本的 `CONTAINS` 或 `LIKE` SQL 查询上,它仅提供最基本的字符串匹配功能。 我们的搜索应用程序将具备: 1. **快速** - 搜索结果将快速返回,为用户提供一个良好的体验。 - -2. **灵活** - 我们希望能够去修改搜索如何执行,这是为了便于在不同的数据库和用户场景下进行优化。 - -3. 
**容错** - 如果搜索内容有拼写错误,我们将仍然会返回相关的结果,而这个结果可能正是用户希望去搜索的结果。 - -4. **全文** - 我们不想限制我们的搜索只能与指定的关键字或者标签相匹配 —— 我们希望它可以搜索在我们的数据存储中的任何东西(包括大的文本域)。 +2. **灵活** - 我们希望能够去修改搜索如何执行的方式,这是为了便于在不同的数据库和用户场景下进行优化。 +3. **容错** - 如果所搜索的内容有拼写错误,我们将仍然会返回相关的结果,而这个结果可能正是用户希望去搜索的结果。 +4. **全文** - 我们不想限制我们的搜索只能与指定的关键字或者标签相匹配 —— 我们希望它可以搜索在我们的数据存储中的任何东西(包括大的文本字段)。 ![Elastic Search Logo](https://storage.googleapis.com/cdn.patricktriest.com/blog/images/posts/elastic-library/Elasticsearch-Logo.png) -为了构建一个功能强大的搜索功能,通常最理想的方法是使用一个为全文搜索任务优化过的用户数据存储。在这里我们使用 [Elasticsearch][19],Elasticsearch 是一个开源的内存中的数据存储,它是用 Java 写的,最初是在 [Apache Lucene][20] 库上构建的。 +为了构建一个功能强大的搜索功能,通常最理想的方法是使用一个为全文搜索任务优化过的数据存储。在这里我们使用 [Elasticsearch][19],Elasticsearch 是一个开源的内存中的数据存储,它是用 Java 写的,最初是在 [Apache Lucene][20] 库上构建的。 这里有一些来自 [Elastic 官方网站][21] 上的 Elasticsearch 真实使用案例。 * Wikipedia 使用 Elasticsearch 去提供带高亮搜索片断的全文搜索功能,并且提供按类型搜索和 “did-you-mean” 建议。 - -* Guardian 使用 Elasticsearch 把社交网络数据和访客日志相结合,为编辑去提供大家对新文章的实时的反馈。 - +* Guardian 使用 Elasticsearch 把社交网络数据和访客日志相结合,为编辑去提供新文章的公众意见的实时反馈。 * Stack Overflow 将全文搜索和地理查询相结合,并使用 “类似” 的方法去找到相关的查询和回答。 - * GitHub 使用 Elasticsearch 对 1300 亿行代码进行查询。 ### 与 “普通的” 数据库相比,Elasticsearch 有什么不一样的地方? 
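在往下读之前,不妨先用几行命令直观感受一下本节要介绍的“反转索引”大概长什么样。这只是一个用 awk 写的示意性玩具实现,文档数据是假设的,与 Elasticsearch 的内部实现无关:

```shell
# 三篇“文档”,行首数字是文档编号(假设性示例数据)
docs='0 Football is fun
1 I like football and chess
2 Chess is fun too'

# 用 awk 构建极简反转索引:词 -> 出现该词的文档编号列表
lookup() {
  echo "$docs" | awk -v word="$1" '{
    for (i = 2; i <= NF; i++)
      index_of[tolower($i)] = index_of[tolower($i)] " " $1
  } END {
    sub(/^ /, "", index_of[word])
    print index_of[word]
  }'
}

echo "football 出现在文档: $(lookup football)"
echo "chess 出现在文档: $(lookup chess)"
```

真实的反转索引还会记录词频、位置等信息,但“按词直达文档、无需逐篇扫描”这一核心思路与此一致。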
-Elasticsearch 之所以能够提供快速灵活的全文搜索,秘密在于它使用 _反转索引_ 。 +Elasticsearch 之所以能够提供快速灵活的全文搜索,秘密在于它使用反转索引inverted index 。 -“索引” 是数据库中的一种数据结构,它能够以超快的速度进行数据查询和检索操作。数据库通过存储与表中行相关联的字段来生成索引。在一种可搜索的数据结构(一般是 [B树][22])中排序索引,在优化过的查询中,数据库能够达到接近线速的时间(比如,“使用 ID=5 查找行)。 +“索引” 是数据库中的一种数据结构,它能够以超快的速度进行数据查询和检索操作。数据库通过存储与表中行相关联的字段来生成索引。在一种可搜索的数据结构(一般是 [B 树][22])中排序索引,在优化过的查询中,数据库能够达到接近线性的时间(比如,“使用 ID=5 查找行”)。 ![Relational Index](https://cdn.patricktriest.com/blog/images/posts/elastic-library/db_index.png) @@ -63,41 +59,38 @@ Elasticsearch 之所以能够提供快速灵活的全文搜索,秘密在于它 ![Inverted Index](https://cdn.patricktriest.com/blog/images/posts/elastic-library/invertedIndex.jpg) -这种反转索引数据结构可以使我们非常快地查询到,所有出现 ”football" 的文档。通过使用大量优化过的内存中的反转索引,Elasticsearch 可以让我们在存储的数据上,执行一些非常强大的和自定义的全文搜索。 +这种反转索引数据结构可以使我们非常快地查询到,所有出现 “football” 的文档。通过使用大量优化过的内存中的反转索引,Elasticsearch 可以让我们在存储的数据上,执行一些非常强大的和自定义的全文搜索。 ### 2 - 项目设置 -### 2.0 - Docker +#### 2.0 - Docker 我们在这个项目上使用 [Docker][23] 管理环境和依赖。Docker 是个容器引擎,它允许应用程序运行在一个独立的环境中,不会受到来自主机操作系统和本地开发环境的影响。现在,许多公司将它们的大规模 Web 应用程序主要运行在容器架构上。这样将提升灵活性和容器化应用程序组件的可组构性。 ![Docker Logo](https://storage.googleapis.com/cdn.patricktriest.com/blog/images/posts/elastic-library/docker.png) -对我们来说,使用 Docker 的优势是,它对本教程非常友好,它的本地环境设置量最小,并且跨 Windows、macOS、和 Linux 系统的一致性很好。我们只需要在 Docker 配置文件中定义这些依赖关系,而不是按安装说明分别去安装 Node.js、Elasticsearch、和 Nginx,然后,就可以使用这个配置文件在任何其它地方运行我们的应用程序。而且,因为每个应用程序组件都运行在它自己的独立容器中,它们受本地机器上的其它 “垃圾” 干扰的可能性非常小,因此,在调试问题时,像 "But it works on my machine!" 
这类的问题将非常少。 +对我来说,使用 Docker 的优势是,它对本教程的作者非常方便,它的本地环境设置量最小,并且跨 Windows、macOS 和 Linux 系统的一致性很好。我们只需要在 Docker 配置文件中定义这些依赖关系,而不是按安装说明分别去安装 Node.js、Elasticsearch 和 Nginx,然后,就可以使用这个配置文件在任何其它地方运行我们的应用程序。而且,因为每个应用程序组件都运行在它自己的独立容器中,它们受本地机器上的其它 “垃圾” 干扰的可能性非常小,因此,在调试问题时,像“它在我这里可以工作!”这类的问题将非常少。 -### 2.1 - 安装 Docker & Docker-Compose +#### 2.1 - 安装 Docker & Docker-Compose 这个项目只依赖 [Docker][24] 和 [docker-compose][25],docker-compose 是 Docker 官方支持的一个工具,它用来将定义的多个容器配置 _组装_  成单一的应用程序栈。 -安装 Docker - [https://docs.docker.com/engine/installation/][26] -安装 Docker Compose - [https://docs.docker.com/compose/install/][27] +- 安装 Docker - [https://docs.docker.com/engine/installation/][26] +- 安装 Docker Compose - [https://docs.docker.com/compose/install/][27] -### 2.2 - 设置项目主目录 +#### 2.2 - 设置项目主目录 为项目创建一个主目录(名为 `guttenberg_search`)。我们的项目将工作在主目录的以下两个子目录中。 * `/public` - 保存前端 Vue.js Web 应用程序。 - * `/server` - 服务器端 Node.js 源代码。 -### 2.3 - 添加 Docker-Compose 配置 +#### 2.3 - 添加 Docker-Compose 配置 接下来,我们将创建一个 `docker-compose.yml` 文件来定义我们的应用程序栈中的每个容器。 1. `gs-api` - 后端应用程序逻辑使用的 Node.js 容器 - 2. `gs-frontend` - 前端 Web 应用程序使用的 Ngnix 容器。 - 3. `gs-search` - 保存和搜索数据的 Elasticsearch 容器。 ``` @@ -140,12 +133,11 @@ services: volumes: # Define seperate volume for Elasticsearch data esdata: - ``` -这个文件定义了我们全部的应用程序栈 —— 不需要在你的本地系统上安装 Elasticsearch、Node、和 Nginx。每个容器都将端口转发到宿主机系统(`localhost`)上,以便于我们在宿主机上去访问和调试 Node API、Elasticsearch instance、和前端 Web 应用程序。 +这个文件定义了我们全部的应用程序栈 —— 不需要在你的本地系统上安装 Elasticsearch、Node 和 Nginx。每个容器都将端口转发到宿主机系统(`localhost`)上,以便于我们在宿主机上去访问和调试 Node API、Elasticsearch 实例和前端 Web 应用程序。 -### 2.4 - 添加 Dockerfile +#### 2.4 - 添加 Dockerfile 对于 Nginx 和 Elasticsearch,我们使用了官方预构建的镜像,而 Node.js 应用程序需要我们自己去构建。 @@ -169,7 +161,6 @@ COPY . . 
# Start app CMD [ "npm", "start" ] - ``` 这个 Docker 配置扩展了官方的 Node.js 镜像、拷贝我们的应用程序源代码、以及在容器内安装 NPM 依赖。 @@ -181,12 +172,11 @@ node_modules/ npm-debug.log books/ public/ - ``` -> 请注意:我们之所以不拷贝 `node_modules` 目录到我们的容器中 —— 是因为我们要在容器中运行 `npm install` 来构建这个进程。从宿主机系统拷贝 `node_modules` 可能会引起错误,因为一些包需要在某些操作系统上专门构建。比如说,在 macOS 上安装 `bcrypt` 包,然后尝试将这个模块直接拷贝到一个 Ubuntu 容器上将不能工作,因为 `bcyrpt` 需要为每个操作系统构建一个特定的二进制文件。 +> 请注意:我们之所以不拷贝 `node_modules` 目录到我们的容器中 —— 是因为我们要在容器构建过程里面运行 `npm install`。从宿主机系统拷贝 `node_modules` 到容器里面可能会引起错误,因为一些包需要为某些操作系统专门构建。比如说,在 macOS 上安装 `bcrypt` 包,然后尝试将这个模块直接拷贝到一个 Ubuntu 容器上将不能工作,因为 `bcyrpt` 需要为每个操作系统构建一个特定的二进制文件。 -### 2.5 - 添加基本文件 +#### 2.5 - 添加基本文件 为了测试我们的配置,我们需要添加一些占位符文件到应用程序目录中。 @@ -194,7 +184,6 @@ public/ ``` Hello World From The Frontend Container - ``` 接下来,在 `server/app.js` 中添加 Node.js 占位符文件。 @@ -213,10 +202,9 @@ app.listen(port, err => { if (err) console.error(err) console.log(`App Listening on Port ${port}`) }) - ``` -最后,添加我们的 `package.json` 节点应用配置。 +最后,添加我们的 `package.json`  Node 应用配置。 ``` { @@ -244,14 +232,13 @@ app.listen(port, err => { "koa-router": "7.2.1" } } - ``` 这个文件定义了应用程序启动命令和 Node.js 包依赖。 -> 注意:不要运行 `npm install` —— 当它构建时,这个依赖将在容器内安装。 +> 注意:不要运行 `npm install` —— 当它构建时,依赖会在容器内安装。 -### 2.6 - 测试它的输出 +#### 2.6 - 测试它的输出 现在一切新绪,我们来测试应用程序的每个组件的输出。从应用程序的主目录运行 `docker-compose build`,它将构建我们的 Node.js 应用程序容器。 @@ -261,13 +248,13 @@ app.listen(port, err => { ![docker compose output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_2.png) -> 这一步可能需要几分钟时间,因为 Docker 要为每个容器去下载基础镜像,接着再去运行,启动应用程序非常快,因为所需要的镜像已经下载完成了。 +> 这一步可能需要几分钟时间,因为 Docker 要为每个容器去下载基础镜像。以后再次运行,启动应用程序会非常快,因为所需要的镜像已经下载完成了。 -在你的浏览器中尝试访问 `localhost:8080` —— 你将看到简单的 “Hello World" Web 页面。 +在你的浏览器中尝试访问 `localhost:8080` —— 你将看到简单的 “Hello World” Web 页面。 ![frontend sample output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_0.png) -访问 `localhost:3000` 去验证我们的 Node 服务器,它将返回 "Hello World" 信息。 +访问 `localhost:3000` 去验证我们的 Node 服务器,它将返回 “Hello World” 信息。 
![backend sample output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_1.png) @@ -289,16 +276,15 @@ app.listen(port, err => { }, "tagline" : "You Know, for Search" } - ``` -如果三个 URLs 都显示成功,祝贺你!整个容器栈已经正常运行了,接下来我们进入最有趣的部分。 +如果三个 URL 都显示成功,祝贺你!整个容器栈已经正常运行了,接下来我们进入最有趣的部分。 -### 3 - 连接到 ELASTICSEARCH +### 3 - 连接到 Elasticsearch 我们要做的第一件事情是,让我们的应用程序连接到我们本地的 Elasticsearch 实例上。 -### 3.0 - 添加 ES 连接模块 +#### 3.0 - 添加 ES 连接模块 在新文件 `server/connection.js` 中添加如下的 Elasticsearch 初始化代码。 @@ -328,7 +314,6 @@ async function checkConnection () { } checkConnection() - ``` 现在,我们重新构建我们的 Node 应用程序,我们将使用 `docker-compose build` 来做一些改变。接下来,运行 `docker-compose up -d` 去启动应用程序栈,它将以守护进程的方式在后台运行。 @@ -351,12 +336,11 @@ checkConnection() number_of_in_flight_fetch: 0, task_max_waiting_in_queue_millis: 0, active_shards_percent_as_number: 50 } - ``` 继续之前,我们先删除最下面的 `checkConnection()` 调用,因为,我们最终的应用程序将调用外部的连接模块。 -### 3.1 - 添加函数去重置索引 +#### 3.1 - 添加函数去重置索引 在 `server/connection.js` 中的 `checkConnection` 下面添加如下的函数,以便于重置 Elasticsearch 索引。 @@ -370,12 +354,11 @@ async function resetIndex (index) { await client.indices.create({ index }) await putBookMapping() } - ``` -### 3.2 - 添加图书模式 +#### 3.2 - 添加图书模式 -接下来,我们将为图书的数据模式添加一个 "mapping"。在 `server/connection.js` 中的 `resetIndex` 函数下面添加如下的函数。 +接下来,我们将为图书的数据模式添加一个 “映射”。在 `server/connection.js` 中的 `resetIndex` 函数下面添加如下的函数。 ``` /** Add book section schema mapping to ES */ @@ -389,12 +372,11 @@ async function putBookMapping () { return client.indices.putMapping({ index, type, body: { properties: schema } }) } - ``` -这是为 `book` 索引定义了一个 mapping。一个 Elasticsearch 的 `index` 大概类似于 SQL 的 `table` 或者 MongoDB 的  `collection`。我们通过添加 mapping 来为存储的文档指定每个字段和它的数据类型。Elasticsearch 是无模式的,因此,从技术角度来看,我们是不需要添加 mapping 的,但是,这样做,我们可以更好地控制如何处理数据。 +这是为 `book` 索引定义了一个映射。Elasticsearch 中的 `index` 大概类似于 SQL 的 `table` 或者 MongoDB 的  `collection`。我们通过添加映射来为存储的文档指定每个字段和它的数据类型。Elasticsearch 是无模式的,因此,从技术角度来看,我们是不需要添加映射的,但是,这样做,我们可以更好地控制如何处理数据。 -比如,我们给 "title" 和 ”author" 字段分配 `keyword` 
类型,给 “text" 字段分配 `text` 类型。之所以这样做的原因是,搜索引擎可以区别处理这些字符串字段 —— 在搜索的时候,搜索引擎将在 `text` 字段中搜索可能的匹配项,而对于 `keyword` 类型字段,将对它们进行全文匹配。这看上去差别很小,但是它们对在不同的搜索上的速度和行为的影响非常大。 +比如,我们给 `title` 和 `author` 字段分配 `keyword` 类型,给 `text` 字段分配 `text` 类型。之所以这样做的原因是,搜索引擎可以区别处理这些字符串字段 —— 在搜索的时候,搜索引擎将在 `text` 字段中搜索可能的匹配项,而对于 `keyword` 类型字段,将对它们进行全文匹配。这看上去差别很小,但是它们对在不同的搜索上的速度和行为的影响非常大。 在文件的底部,导出对外发布的属性和函数,这样我们的应用程序中的其它模块就可以访问它们了。 @@ -402,31 +384,29 @@ async function putBookMapping () { module.exports = { client, index, type, checkConnection, resetIndex } - ``` ### 4 - 加载原始数据 -我们将使用来自 [Gutenberg 项目][28] 的数据 ——  它致力于为公共提供免费的线上电子书。在这个项目中,我们将使用 100 本经典图书来充实我们的图书馆,包括_《The Adventures of Sherlock Holmes》_、_《Treasure Island》_、_《The Count of Monte Cristo》_、_《Around the World in 80 Days》_、_《Romeo and Juliet》_ 、和_《The Odyssey》_。 +我们将使用来自 [古登堡项目][28] 的数据 ——  它致力于为公共提供免费的线上电子书。在这个项目中,我们将使用 100 本经典图书来充实我们的图书馆,包括《福尔摩斯探案集》、《金银岛》、《基督山复仇记》、《环游世界八十天》、《罗密欧与朱丽叶》 和《奥德赛》。 ![Book Covers](https://storage.googleapis.com/cdn.patricktriest.com/blog/images/posts/elastic-library/books.jpg) -### 4.1 - 下载图书文件 +#### 4.1 - 下载图书文件 我将这 100 本书打包成一个文件,你可以从这里下载它 —— [https://cdn.patricktriest.com/data/books.zip][29] 将这个文件解压到你的项目的 `books/` 目录中。 -你可以使用以下的命令来完成(需要在命令行下使用 [wget][30] 和 ["The Unarchiver"][31])。 +你可以使用以下的命令来完成(需要在命令行下使用 [wget][30] 和 [The Unarchiver][31])。 ``` wget https://cdn.patricktriest.com/data/books.zip unar books.zip - ``` -### 4.2 - 预览一本书 +#### 4.2 - 预览一本书 尝试打开其中的一本书的文件,假设打开的是 `219-0.txt`。你将注意到它开头是一个公开访问的协议,接下来是一些标识这本书的书名、作者、发行日期、语言和字符编码的行。 @@ -441,7 +421,6 @@ Last Updated: September 7, 2016 Language: English Character set encoding: UTF-8 - ``` 在 `*** START OF THIS PROJECT GUTENBERG EBOOK HEART OF DARKNESS ***` 这些行后面,是这本书的正式内容。 @@ -450,7 +429,7 @@ Character set encoding: UTF-8 下一步,我们将使用程序从文件头部来解析书的元数据,提取 `*** START OF` 和 `***END OF` 之间的内容。 -### 4.3 - 读取数据目录 +#### 4.3 - 读取数据目录 我们将写一个脚本来读取每本书的内容,并将这些数据添加到 Elasticsearch。我们将定义一个新的 Javascript 文件 `server/load_data.js` 来执行这些操作。 @@ -486,7 +465,6 @@ async function 
readAndInsertBooks () { } readAndInsertBooks() - ``` 我们将使用一个快捷命令来重构我们的 Node.js 应用程序,并更新运行的容器。 @@ -501,7 +479,7 @@ readAndInsertBooks() ![docker exec output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_1_1.png) -### 4.4 - 读取数据文件 +#### 4.4 - 读取数据文件 接下来,我们读取元数据和每本书的内容。 @@ -536,32 +514,26 @@ function parseBookFile (filePath) { console.log(`Parsed ${paragraphs.length} Paragraphs\n`) return { title, author, paragraphs } } - ``` 这个函数执行几个重要的任务。 1. 从文件系统中读取书的文本。 - 2. 使用正则表达式(关于正则表达式,请参阅 [这篇文章][1] )解析书名和作者。 - -3. 通过匹配 ”Guttenberg 项目“ 头部和尾部,识别书的正文内容。 - +3. 通过匹配 “古登堡项目” 的头部和尾部,识别书的正文内容。 4. 提取书的内容文本。 - 5. 分割每个段落到它的数组中。 - 6. 清理文本并删除空白行。 -它的返回值,我们将构建一个对象,这个对象包含书名、作者、以及书中各段落的数据。 +它的返回值,我们将构建一个对象,这个对象包含书名、作者、以及书中各段落的数组。 -再次运行 `docker-compose up -d --build` 和 `docker exec gs-api "node" "server/load_data.js"`,你将看到如下的输出,在输出的末尾有三个额外的行。 +再次运行 `docker-compose up -d --build` 和 `docker exec gs-api "node" "server/load_data.js"`,你将看到输出同之前一样,在输出的末尾有三个额外的行。 ![docker exec output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_2_0.png) 成功!我们的脚本从文本文件中成功解析出了书名和作者。脚本再次以错误结束,因为到现在为止,我们还没有定义辅助函数。 -### 4.5 - 在 ES 中索引数据文件 +#### 4.5 - 在 ES 中索引数据文件 最后一步,我们将批量上传每个段落的数组到 Elasticsearch 索引中。 @@ -596,12 +568,11 @@ async function insertBookData (title, author, paragraphs) { await esConnection.client.bulk({ body: bulkOps }) console.log(`Indexed Paragraphs ${paragraphs.length - (bulkOps.length / 2)} - ${paragraphs.length}\n\n\n`) } - ``` -这个函数将使用书名、作者、和附加元数据的段落位置来索引书中的每个段落。我们通过批量操作来插入段落,它比逐个段落插入要快的多。 +这个函数将使用书名、作者和附加元数据的段落位置来索引书中的每个段落。我们通过批量操作来插入段落,它比逐个段落插入要快的多。 -> 我们分批索引段落,而不是一次性插入全部,是为运行这个应用程序的、内存稍有点小(1.7 GB)的服务器  `search.patricktriest.com` 上做的一个重要优化。如果你的机器内存还行(4 GB 以上),你或许不用分批上传。 +> 我们分批索引段落,而不是一次性插入全部,是为运行这个应用程序的内存稍有点小(1.7 GB)的服务器  `search.patricktriest.com` 上做的一个重要优化。如果你的机器内存还行(4 GB 以上),你或许不用分批上传。 运行 `docker-compose up -d --build` 和 `docker exec gs-api "node" "server/load_data.js"` 一次或多次 —— 现在你将看到前面解析的 100 本书的完整输出,并插入到了 Elasticsearch。这可能需要几分钟时间,甚至更长。 @@ 
-611,13 +582,13 @@ async function insertBookData (title, author, paragraphs) { 现在,Elasticsearch 中已经有了 100 本书了(大约有 230000 个段落),现在我们尝试搜索查询。 -### 5.0 - 简单的 HTTP 查询 +#### 5.0 - 简单的 HTTP 查询 首先,我们使用 Elasticsearch 的 HTTP API 对它进行直接查询。 在你的浏览器上访问这个 URL - `http://localhost:9200/library/_search?q=text:Java&pretty` -在这里,我们将执行一个极简的全文搜索,在我们的图书馆的书中查找 ”Java" 这个词。 +在这里,我们将执行一个极简的全文搜索,在我们的图书馆的书中查找 “Java” 这个词。 你将看到类似于下面的一个 JSON 格式的响应。 @@ -663,12 +634,11 @@ async function insertBookData (title, author, paragraphs) { ] } } - ``` 用 Elasticseach 的 HTTP 接口可以测试我们插入的数据是否成功,但是如果直接将这个 API 暴露给 Web 应用程序将有极大的风险。这个 API 将会暴露管理功能(比如直接添加和删除文档),最理想的情况是完全不要对外暴露它。而是写一个简单的 Node.js API 去接收来自客户端的请求,然后(在我们的本地网络中)生成一个正确的查询发送给 Elasticsearch。 -### 5.1 - 查询脚本 +#### 5.1 - 查询脚本 我们现在尝试从我们写的 Node.js 脚本中查询 Elasticsearch。 @@ -694,7 +664,6 @@ module.exports = { return client.search({ index, type, body }) } } - ``` 我们的搜索模块定义一个简单的 `search` 函数,它将使用输入的词 `match` 查询。 @@ -702,13 +671,9 @@ module.exports = { 这是查询的字段分解 - * `from` - 允许我们分页查询结果。默认每个查询返回 10 个结果,因此,指定 `from: 10` 将允许我们取回 10-20 的结果。 - * `query` - 这里我们指定要查询的词。 - -* `operator` - 我们可以修改搜索行为;在本案例中,我们使用 "and" 操作去对查询中包含所有 tokens(要查询的词)的结果来确定优先顺序。 - +* `operator` - 我们可以修改搜索行为;在本案例中,我们使用 `and` 操作去对查询中包含所有字元(要查询的词)的结果来确定优先顺序。 * `fuzziness` - 对拼写错误的容错调整,`auto` 的默认为 `fuzziness: 2`。模糊值越高,结果越需要更多校正。比如,`fuzziness: 1` 将允许以 `Patricc` 为关键字的查询中返回与 `Patrick` 匹配的结果。 - * `highlights` - 为结果返回一个额外的字段,这个字段包含 HTML,以显示精确的文本字集和查询中匹配的关键词。 你可以去浏览 [Elastic Full-Text Query DSL][32],学习如何随意调整这些参数,以进一步自定义搜索查询。 @@ -717,7 +682,7 @@ module.exports = { 为了能够从前端应用程序中访问我们的搜索功能,我们来写一个快速的 HTTP API。 -### 6.0 - API 服务器 +#### 6.0 - API 服务器 用以下的内容替换现有的 `server/app.js` 文件。 @@ -761,7 +726,6 @@ app if (err) throw err console.log(`App Listening on Port ${port}`) }) - ``` 这些代码将为 [Koa.js][33] Node API 服务器导入服务器依赖,设置简单的日志,以及错误处理。 @@ -782,10 +746,9 @@ router.get('/search', async (ctx, next) => { ctx.body = await search.queryTerm(term, offset) } ) - ``` -使用 `docker-compose up -d --build` 
重启动应用程序。之后在你的浏览器中尝试调用这个搜索端点。比如,`http://localhost:3000/search?term=java` 这个请求将搜索整个图书馆中提到 “Jave" 的内容。 +使用 `docker-compose up -d --build` 重启动应用程序。之后在你的浏览器中尝试调用这个搜索端点。比如,`http://localhost:3000/search?term=java` 这个请求将搜索整个图书馆中提到 “Java” 的内容。 结果与前面直接调用 Elasticsearch HTTP 界面的结果非常类似。 @@ -835,7 +798,6 @@ router.get('/search', async (ctx, next) => { ] } } - ``` ### 6.2 - 输入校验 @@ -864,7 +826,6 @@ router.get('/search', ctx.body = await search.queryTerm(term, offset) } ) - ``` 现在,重启服务器,如果你使用一个没有搜索关键字的请求(`http://localhost:3000/search`),你将返回一个带相关消息的 HTTP 400 错误,比如像 `Invalid URL Query - child "term" fails because ["term" is required]`。 @@ -875,7 +836,7 @@ router.get('/search', 现在我们的 `/search` 端点已经就绪,我们来连接到一个简单的 Web 应用程序来测试这个 API。 -### 7.0 - Vue.js 应用程序 +#### 7.0 - Vue.js 应用程序 我们将使用 Vue.js 去协调我们的前端。 @@ -934,14 +895,13 @@ const vm = new Vue ({ } } }) - ``` -这个应用程序非常简单 —— 我们只定义了一些共享的数据属性,以及添加了检索和分页搜索结果的方法。为防止每按键一次都调用 API,搜索输入有一个 100 毫秒的除颤功能。 +这个应用程序非常简单 —— 我们只定义了一些共享的数据属性,以及添加了检索和分页搜索结果的方法。为防止每次按键一次都调用 API,搜索输入有一个 100 毫秒的除颤功能。 解释 Vue.js 是如何工作的已经超出了本教程的范围,如果你使用过 Angular 或者 React,其实一些也不可怕。如果你完全不熟悉 Vue,想快速了解它的功能,我建议你从官方的快速指南入手 —— [https://vuejs.org/v2/guide/][36] -### 7.1 - HTML +#### 7.1 - HTML 使用以下的内容替换 `/public/index.html` 文件中的占位符,以便于加载我们的 Vue.js 应用程序和设计一个基本的搜索界面。 @@ -1004,10 +964,9 @@ const vm = new Vue ({ - ``` -### 7.2 - CSS +#### 7.2 - CSS 添加一个新文件 `/public/styles.css`,使用一些自定义的 UI 样式。 @@ -1098,10 +1057,9 @@ body { font-family: 'EB Garamond', serif; } justify-content: space-around; background: white; } - ``` -### 7.3 - 尝试输出 +#### 7.3 - 尝试输出 在你的浏览器中打开 `localhost:8080`,你将看到一个简单的带结果分页功能的搜索界面。在顶部的搜索框中尝试输入不同的关键字来查看它们的搜索情况。 @@ -1113,7 +1071,7 @@ body { font-family: 'EB Garamond', serif; } ### 8 - 分页预览 -如果点击每个搜索结果,然后查看到来自书中的内容,那将是非常棒的体验。 +如果能点击每个搜索结果,然后查看到来自书中的内容,那将是非常棒的体验。 ### 8.0 - 添加 Elasticsearch 查询 @@ -1137,12 +1095,11 @@ getParagraphs (bookTitle, startLocation, endLocation) { return client.search({ index, type, body }) } - ``` 这个新函数将返回给定的书的开始位置和结束位置之间的一个排序后的段落数组。 -### 8.1 - 添加 API 端点 
+#### 8.1 - 添加 API 端点 现在,我们将这个函数链接到 API 端点。 @@ -1170,10 +1127,9 @@ router.get('/paragraphs', ctx.body = await search.getParagraphs(bookTitle, start, end) } ) - ``` -### 8.2 - 添加 UI 功能 +#### 8.2 - 添加 UI 功能 现在,我们的新端点已经就绪,我们为应用程序添加一些从书中查询和显示全部页面的前端功能。 @@ -1217,10 +1173,9 @@ router.get('/paragraphs', document.body.style.overflow = 'auto' this.selectedParagraph = null } - ``` -这五个函数了提供了通过页码从书中下载和分页(每次十个段落)的逻辑。 +这五个函数提供了通过页码从书中下载和分页(每次十个段落)的逻辑。 现在,我们需要添加一个 UI 去显示书的页面。在 `/public/index.html` 的 `` 注释下面添加如下的内容。 @@ -1258,7 +1213,6 @@ router.get('/paragraphs', - ``` 再次重启应用程序服务器(`docker-compose up -d --build`),然后打开 `localhost:8080`。当你再次点击搜索结果时,你将能看到关键字附近的段落。如果你感兴趣,你现在甚至可以看这本书的剩余部分。 @@ -1269,42 +1223,38 @@ router.get('/paragraphs', 你可以去比较你的本地结果与托管在这里的完整示例 —— [https://search.patricktriest.com/][37] -### 9 - ELASTICSEARCH 的缺点 +### 9 - Elasticsearch 的缺点 -### 9.0 - 耗费资源 +#### 9.0 - 耗费资源 -Elasticsearch 是计算密集型的。[官方建议][38] 运行 ES 的机器最好有 64 GB 的内存,强烈反对在低于 8 GB 内存的机器上运行它。Elasticsearch 是一个 _内存中_ 数据库,这样使它的查询速度非常快,但这也非常占用系统内存。在生产系统中使用时,[他们强烈建议在一个集群中运行多个 Elasticsearch 节点][39],以实现高可用、自动分区、和一个节点失败时的数据冗余。 +Elasticsearch 是计算密集型的。[官方建议][38] 运行 ES 的机器最好有 64 GB 的内存,强烈反对在低于 8 GB 内存的机器上运行它。Elasticsearch 是一个 _内存中_ 数据库,这样使它的查询速度非常快,但这也非常占用系统内存。在生产系统中使用时,[他们强烈建议在一个集群中运行多个 Elasticsearch 节点][39],以实现高可用、自动分区和一个节点失败时的数据冗余。 我们的这个教程中的应用程序运行在一个 $15/月 的 GCP 计算实例中( [search.patricktriest.com][40]),它只有 1.7 GB 的内存,它勉强能运行这个 Elasticsearch 节点;有时候在进行初始的数据加载过程中,整个机器就 ”假死机“ 了。在我的经验中,Elasticsearch 比传统的那些数据库,比如,PostgreSQL 和 MongoDB 耗费的资源要多很多,这样会使托管主机的成本增加很多。 ### 9.1 - 与数据库的同步 -在大多数应用程序,将数据全部保存在 Elasticsearch 并不是个好的选择。最好是使用 ES 作为应用程序的主要事务数据库,但是一般不推荐这样做,因为在 Elasticsearch 中缺少 ACID,如果在处理数据的时候发生伸缩行为,它将丢失写操作。在许多案例中,ES 服务器更多是一个特定的角色,比如做应用程序中的一个文本搜索功能。这种特定的用途,要求它从主数据库中复制数据到 Elasticsearch 实例中。 +对于大多数应用程序,将数据全部保存在 Elasticsearch 并不是个好的选择。可以使用 ES 作为应用程序的主要事务数据库,但是一般不推荐这样做,因为在 Elasticsearch 中缺少 ACID,如果大量读取数据的时候,它能导致写操作丢失。在许多案例中,ES 服务器更多是一个特定的角色,比如做应用程序中的一个文本搜索功能。这种特定的用途,要求它从主数据库中复制数据到 Elasticsearch 实例中。 -比如,假设我们将用户信息保存在一个 PostgreSQL 
表中,但是用 Elasticsearch 去驱动我们的用户搜索功能。如果一个用户,比如,"Albert",决定将他的名字改成 "Al",我们将需要把这个变化同时反映到我们主要的 PostgreSQL 数据库和辅助的 Elasticsearch 集群中。 +比如,假设我们将用户信息保存在一个 PostgreSQL 表中,但是用 Elasticsearch 去提供我们的用户搜索功能。如果一个用户,比如,“Albert”,决定将他的名字改成 “Al”,我们将需要把这个变化同时反映到我们主要的 PostgreSQL 数据库和辅助的 Elasticsearch 集群中。 正确地集成它们可能比较棘手,最好的答案将取决于你现有的应用程序栈。这有多种开源方案可选,从 [用一个进程去关注 MongoDB 操作日志][41] 并自动同步检测到的变化到 ES,到使用一个 [PostgresSQL 插件][42] 去创建一个定制的、基于 PSQL 的索引来与 Elasticsearch 进行自动沟通。 -如果没有有效的预构建选项可用,你可能需要在你的服务器代码中增加一些钩子,这样可以基于数据库的变化来手动更新 Elasticsearch 索引。最后一招,我认为是一个最后的选择,因为,使用定制的业务逻辑去保持 ES 的同步可能很复杂,这将会给应用程序引入很多的 bugs。 +如果没有有效的预构建选项可用,你可能需要在你的服务器代码中增加一些钩子,这样可以基于数据库的变化来手动更新 Elasticsearch 索引。最后一招,我认为是一个最后的选择,因为,使用定制的业务逻辑去保持 ES 的同步可能很复杂,这将会给应用程序引入很多的 bug。 让 Elasticsearch 与一个主数据库同步,将使它的架构更加复杂,其复杂性已经超越了 ES 的相关缺点,但是当在你的应用程序中考虑添加一个专用的搜索引擎的利弊得失时,这个问题是值的好好考虑的。 ### 总结 -在很多现在流行的应用程序中,全文搜索是一个非常重要的功能 —— 而且是很难实现的一个功能。对于在你的应用程序中添加一个快速而又可定制的文本搜索,Elasticsearch 是一个非常好的选择,但是,在这里也有一个替代者。[Apache Solr][43] 是一个类似的开源搜索平台,它是基于 Apache Lucene 构建的,它与 Elasticsearch 的核心库是相同的。[Algolia][44] 是一个搜索即服务的 Web 平台,它已经很快流行了起来,并且它对新手非常友好,很易于上手(但是作为折衷,它的可定制性较小,并且使用成本较高)。 +在很多现在流行的应用程序中,全文搜索是一个非常重要的功能 —— 而且是很难实现的一个功能。对于在你的应用程序中添加一个快速而又可定制的文本搜索,Elasticsearch 是一个非常好的选择,但是,在这里也有一个替代者。[Apache Solr][43] 是一个类似的开源搜索平台,它是基于 Apache Lucene 构建的,与 Elasticsearch 的核心库是相同的。[Algolia][44] 是一个搜索即服务的 Web 平台,它已经很快流行了起来,并且它对新手非常友好,很易于上手(但是作为折衷,它的可定制性较小,并且使用成本较高)。 -“搜索” 特性并不是 Elasticsearch 唯一功能。ES 也是日志存储和分析的常用工具,在一个 ELK(Elasticsearch、Logstash、Kibana)栈配置中通常会使用它。灵活的全文搜索功能使得 Elasticsearch 在数据量非常大的科学任务中用处很大 —— 比如,在一个数据集中正确的/标准化的条目拼写,或者为了类似的词组搜索一个文本数据集。 +“搜索” 特性并不是 Elasticsearch 唯一功能。ES 也是日志存储和分析的常用工具,在一个 ELK(Elasticsearch、Logstash、Kibana)架构配置中通常会使用它。灵活的全文搜索功能使得 Elasticsearch 在数据量非常大的科学任务中用处很大 —— 比如,在一个数据集中正确的/标准化的条目拼写,或者为了类似的词组搜索一个文本数据集。 对于你自己的项目,这里有一些创意。 * 添加更多你喜欢的书到教程的应用程序中,然后创建你自己的私人图书馆搜索引擎。 - * 利用来自 [Google Scholar][2] 的论文索引,创建一个学术抄袭检测引擎。 - * 通过将字典中的每个词索引到 Elasticsearch,创建一个拼写检查应用程序。 - * 通过将 [Common Crawl Corpus][3] 加载到 Elasticsearch 中,构建你自己的与谷歌竞争的因特网搜索引擎(注意,它可能会超过 50 
亿个页面,这是一个成本极高的数据集)。 - * 在 journalism 上使用 Elasticsearch:在最近的大规模泄露的文档中搜索特定的名字和关键词,比如, [Panama Papers][4] 和 [Paradise Papers][5]。 本教程中应用程序的源代码是 100% 公开的,你可以在 GitHub 仓库上找到它们 —— [https://github.com/triestpa/guttenberg-search][45] @@ -1324,7 +1274,7 @@ via: https://blog.patricktriest.com/text-search-docker-elasticsearch/ 作者:[Patrick Triest][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 7ea87f7b1877ce41ace45d0496922696f6e00333 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 29 Apr 2018 23:01:47 +0800 Subject: [PATCH 037/102] PUB:20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md @qhwdw --- ...LDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md (100%) diff --git a/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md b/published/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md similarity index 100% rename from translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md rename to published/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md From 74b51c6832b5b34c7ab5d7cb890b33683fcd6301 Mon Sep 17 00:00:00 2001 From: kennethXia <37970750+kennethXia@users.noreply.github.com> Date: Mon, 30 Apr 2018 12:57:31 +0800 Subject: [PATCH 038/102] translated by kennethXia --- ...ts You Safely Create Bootable USB Drive.md | 176 ------------------ ...ts You Safely Create Bootable USB Drive.md | 170 +++++++++++++++++ 2 files changed, 170 insertions(+), 176 deletions(-) delete mode 100644 sources/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md create mode 100644 translated/tech/20180410 
Bootiso Lets You Safely Create Bootable USB Drive.md diff --git a/sources/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md b/sources/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md deleted file mode 100644 index 46f90e5f11..0000000000 --- a/sources/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md +++ /dev/null @@ -1,176 +0,0 @@ -Translating by kennethXia - -Bootiso Lets You Safely Create Bootable USB Drive -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/04/USB-drive-720x340.png) -Howdy newbies! Some of you may often use **dd command** to do various things, like creating a USB drive or cloning disk partitions. Please be mindful that dd command is one of the dangerous and destructive command. If you’re Linux beginner, mostly avoid using dd command to do stuffs. If you don’t know what you are doing, you may wipe your hard drive in minutes. The dd command literally just takes bytes from **if** and writes them to **of**. It doesn’t care what it’s overwriting, it doesn’t care if there’s a partition table in the way, or a boot sector, or a home folder, or anything important. It will simply do what it is told to do. Instead, use some user-friendly apps like [**Etcher**][1]. So you will know which device you’re going to format before actually start creating bootable USB devices. - -Today, I stumbled upon yet another utility named **“Bootiso”** , which is also used to safely create bootable USB drive. It is actually a BASH script, and is brilliant! It has some extra features that helps us to safely create bootable USB devices. If you want to be sure you’re targeting a USB device (and not internal drive), or if you want autodetection of a USB device, you can use bootiso. Here is the significant advantages of using this script: - - * If there is only one USB drive, Bootiso will automatically select it. - * If there are more than one USB drives present, it lets you to choose one of them from the list. 
- * Just in case you mistakenly choose one of Internal hard drive, it will exit without doing anything. - * It checks the selected ISO has the correct mime-type. If it has wrong mime-type, it will exit. - * It asserts that the selected item is not a partition and exit if it doesn’t. - * It will prompt the user confirmation before erasing and partitioning the USB drive. - * Lists available USB drives. - * Installs syslinux bootloader (optional). - * Free and Open Source. - - - -### Safely Create Bootable USB Drive Using Bootiso - -Installing Bootiso is very easy. Download the latest version using command: -``` -$ curl -L https://rawgit.com/jsamr/bootiso/latest/bootiso -O - -``` - -Move the downloaded file to your **$PATH** , for example /usr/local/bin/. -``` -$ sudo cp bootiso /usr/local/bin/ - -``` - -Finally, make it executable: -``` -$ sudo chmod +x /usr/local/bin/bootiso - -``` - -Done! Now, it is time to create bootable USB drives. First, let us see how many USB drives are present using command: -``` -$ bootiso -l - -``` - -Sample output: -``` -Listing USB drives available in your system: -NAME HOTPLUG SIZE STATE TYPE -sdb 1 7.5G running disk - -``` - -As you can see, I have only one USB drive. Let us go ahead and create the USB bootable from an ISO file using command: -``` -$ bootiso bionic-desktop-amd64.iso - -``` - -This command will prompt you to enter the sudo password. Type the password and hit ENTER key to install the missing dependencies (if there are any) and then create USB bootable device. - -Sample output: -``` -[...] -Listing USB drives available in your system: -NAME HOTPLUG SIZE STATE TYPE -sdb 1 7.5G running disk -Autoselecting `sdb' (only USB device candidate) -The selected device `/dev/sdb' is connected through USB. -Created ISO mount point at `/tmp/iso.c5m' -`bootiso' is about to wipe out the content of device `/dev/sdb'. -Are you sure you want to proceed? (y/n)>y -Erasing contents of /dev/sdb... -Creating FAT32 partition on `/dev/sdb1'... 
-Created USB device mount point at `/tmp/usb.QgV' -Copying files from ISO to USB device with `rsync' -Synchronizing writes on device `/dev/sdb' -`bootiso' took 303 seconds to write ISO to USB device with `rsync' method. -ISO succesfully unmounted. -USB device succesfully unmounted. -USB device succesfully ejected. -You can safely remove it ! - -``` - -If the your ISO file has the wrong mime-type, you will see the following error message: -``` -Provided file `bionic-desktop-amd64.iso' doesn't seem to be an iso file (wrong mime type: `application/octet-stream'). -Exiting bootiso... - -``` - -You can, however, skip the mime-type check using **–no-mime-check** option like below. -``` -$ bootiso --no-mime-check bionic-desktop-amd64.iso - -``` - -Like I already mentioned, Bootiso will automatically choose the USB drive if there is only one USB drive present in your system. So, we don’t need to mention the usb disk path. If you have more than one devices connected, you can explicitly specify the USB device using **-d** flag like below. -``` -$ bootiso -d /dev/sdb bionic-desktop-amd64.iso - -``` - -Replace “/dev/sdb” with your own path. - -If you don’t specify **-d** flag when using more than one USB devices, Bootiso will prompt you to select from available USB drives. - -Bootiso will ask the user confirmation before erasing and partitioning the USB devices. To auto-confirm this, use **-y** or **–assume-yes** flag. -``` -$ bootiso -y bionic-desktop-amd64.iso - -``` - -You can also enable autoselecting USB devices in conjunction with **-y** option as shown below. -``` -$ bootiso -y -a bionic-desktop-amd64.iso - -``` - -Or, -``` -$ bootiso --assume-yes --autoselect bionic-desktop-amd64.iso - -``` - -Please remember it will work only if you have only one connected USB drive. - -By default, Bootiso will create a **FAT 32** partition and then mount and copy the ISO contents using **“rsync”** program to your USB drive. You can also use “dd” instead of “rsync” if you want. 
-``` -$ bootiso --dd -d /dev/sdb bionic-desktop-amd64.iso - -``` - -If you want to increase the odds your USB will be bootable, use **“-b”** or **“–bootloader”** like below. -``` -$ bootiso -b bionic-desktop-amd64.iso - -``` - -The above command will install a bootloader with **syslinux** (safe mode). Please note that it doesn’t work if you use “–dd” option. - -After creating the bootable device, Bootiso will automatically eject the USB drive. If you don’t want it to automatically eject it, use **-J** or **–no-eject** flag. -``` -$ bootiso -J bionic-desktop-amd64.iso - -``` - -Now, the USb device will remain connected. You can unmount it at anytime using “umount” command. - -To display help section, run: -``` -$ bootiso -h - -``` - -And, that’s all for now. Hope this script helps. More good stuffs to come. Stay tuned! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/bootiso-lets-you-safely-create-bootable-usb-drive/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/etcher-beauitiful-app-create-bootable-sd-cards-usb-drives/ diff --git a/translated/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md b/translated/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md new file mode 100644 index 0000000000..b0d6f088f5 --- /dev/null +++ b/translated/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md @@ -0,0 +1,170 @@ +Bootiso 让你安全地创建 USB 启动设备 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/04/USB-drive-720x340.png) +你好,新兵!你们有些人经常使用 **dd 命令**做各种各样的事,比如创建 USB 启动盘或者克隆硬盘分区。不过请牢记,dd 是一个危险且有毁灭性的命令。如果你是个 Linux 的新手,最好避免使用 dd 命令。如果你不知道你在做什么,你可能会在几分钟里把硬盘擦掉。从原理上说,dd 只是从 **“if”** 读取然后写到 **“of”** 
上。它才不管往哪里写呢。它根本不关心那里是否有分区表、引导区、家目录或是其他重要的东西。你叫它做什么它就做什么。可以使用像 [**Etcher**][1] 这样的用户友好的应用来代替它。这样你就可以在创建 USB 引导设备之前知道你将要格式化的是哪块盘。 + +今天,我发现了另一个可以安全创建 USB 引导设备的工具 **Bootiso** 。它实际上是一个 BASH 脚本,但真的很智能!它有很多额外的功能来帮我们安全创建 USB 引导盘。如果你想确定你的目标是 USB 设备(而不是内部驱动器),或者如果你想检测 USB 设备,你可以使用 Bootiso。下面是使用此脚本的显著优点: + + * 如果只有一个 USB 驱动器,Bootiso 会自动选择它。 + * 如果有一个以上的 USB 驱动器存在,它可以让你从列表中选择其中一个。 + * 万一你错误地选择一个内部硬盘驱动器,它将退出而不做任何事情。 + * 它检查选定的 ISO 是否具有正确的 MIME 类型。如果 MIME 类型不正确,它将退出。 + * 它判定所选的项目不是分区,如果判定失败则退出。 + * 它将在擦除和分区 USB 驱动器之前提示用户确认。 + * 列出可用的 USB 驱动器。 + * 安装 syslinux 引导系统 (可选)。 + * 自由且开源。 + +### 使用 Bootiso 安全地创建 USB 驱动器 + +安装 Bootiso 非常简单。用这个命令下载最新版本: +``` +$ curl -L https://rawgit.com/jsamr/bootiso/latest/bootiso -O + +``` + +把下载的文件加到 **$PATH** 目录中,比如 /usr/local/bin/. +``` +$ sudo cp bootiso /usr/local/bin/ + +``` + +最后,添加运行权限: +``` +$ sudo chmod +x /usr/local/bin/bootiso + +``` + +搞定!现在就可以创建 USB 引导设备了。首先,让我们用命令看看现在有哪些 USB 驱动器: +``` +$ bootiso -l + +``` + +输出: +``` +Listing USB drives available in your system: +NAME HOTPLUG SIZE STATE TYPE +sdb 1 7.5G running disk + +``` + +如你所见,我只有一个 USB 驱动器。让我们继续通过命令用 ISO 文件创建 USB 启动盘: +``` +$ bootiso bionic-desktop-amd64.iso + +``` + +这个命令会提示你输入 SUDO 密码。输入密码并回车来安装缺失的组件(如果有的话),然后创建 USB 启动盘。 + +输出: +``` +[...] +Listing USB drives available in your system: +NAME HOTPLUG SIZE STATE TYPE +sdb 1 7.5G running disk +Autoselecting `sdb' (only USB device candidate) +The selected device `/dev/sdb' is connected through USB. +Created ISO mount point at `/tmp/iso.c5m' +`bootiso' is about to wipe out the content of device `/dev/sdb'. +Are you sure you want to proceed? (y/n)>y +Erasing contents of /dev/sdb... +Creating FAT32 partition on `/dev/sdb1'... +Created USB device mount point at `/tmp/usb.QgV' +Copying files from ISO to USB device with `rsync' +Synchronizing writes on device `/dev/sdb' +`bootiso' took 303 seconds to write ISO to USB device with `rsync' method. +ISO succesfully unmounted. +USB device succesfully unmounted. +USB device succesfully ejected. 
+You can safely remove it !
+
+```
+
+如果你的 ISO 文件 MIME 类型不对,你会得到下列错误信息:
+```
+Provided file `bionic-desktop-amd64.iso' doesn't seem to be an iso file (wrong mime type: `application/octet-stream').
+Exiting bootiso...
+
+```
+
+当然,你也能像下面那样使用 **--no-mime-check** 选项来跳过 MIME 类型检查。
+```
+$ bootiso --no-mime-check bionic-desktop-amd64.iso
+
+```
+
+就像我前面提到的,如果系统里只有一个 USB 设备,Bootiso 将自动选中它,所以我们不需要告诉它 USB 设备路径。如果你连接了多个设备,你可以像下面这样使用 **-d** 来指明 USB 设备。
+```
+$ bootiso -d /dev/sdb bionic-desktop-amd64.iso
+
+```
+
+请用你自己的设备路径替换 “/dev/sdb”。
+
+在多个设备的情况下,如果你没有使用 **-d** 来指明要使用的设备,Bootiso 会提示你从可用的 USB 设备中选择。
+
+Bootiso 在擦除和改写 USB 盘分区前会要求用户确认。使用 **-y** 或 **--assume-yes** 选项可以跳过这一步。
+```
+$ bootiso -y bionic-desktop-amd64.iso
+
+```
+
+您也可以把自动选择 USB 设备与 **-y** 选项连用,如下所示。
+```
+$ bootiso -y -a bionic-desktop-amd64.iso
+
+```
+
+或者,
+```
+$ bootiso --assume-yes --autoselect bionic-desktop-amd64.iso
+
+```
+
+请记住,这只有在仅连接了一个 USB 驱动器时才会起作用。
+
+Bootiso 默认会创建一个 **FAT 32** 分区,挂载后用 **“rsync”** 程序把 ISO 的内容拷贝到 USB 盘里。如果你愿意,也可以使用 “dd” 代替 “rsync”。
+```
+$ bootiso --dd -d /dev/sdb bionic-desktop-amd64.iso
+
+```
+
+如果你想增加 USB 引导的成功概率,请使用 **“-b”** 或 **“--bootloader”** 选项。
+```
+$ bootiso -b bionic-desktop-amd64.iso
+
+```
+
+上面这条命令会安装 **syslinux** 引导程序(安全模式)。注意,如果使用了 “--dd” 选项,它将不起作用。
+
+在创建引导设备后,Bootiso 会自动弹出 USB 设备。如果不想自动弹出,请使用 **-J** 或 **--no-eject** 选项。
+```
+$ bootiso -J bionic-desktop-amd64.iso
+
+```
+
+这样,USB 设备就会保持连接状态。你可以随时使用 “umount” 命令卸载它。
+
+需要完整帮助信息,请输入:
+```
+$ bootiso -h
+```
+
+好,今天就到这里。希望这个脚本对你有帮助。好货不断,不要走开哦!
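上文提到的 Bootiso 安全策略可以概括为:只有当目标确定是唯一的 USB 设备、且待写入文件的 MIME 类型正确时才继续,否则直接退出。下面用几行 shell 来示意这一思路。注意,这只是一个假设性的演示草图,并非 bootiso 的真实实现:`pick_usb`、`check_mime` 这两个函数名和示例设备列表都是虚构的,设备列表的格式模仿 `lsblk -d -o NAME,TRAN` 的输出。

```shell
#!/bin/sh
# 示意“先确认目标是唯一的 USB 设备、再检查文件类型”的简化草图,
# 并非 bootiso 的真实实现;函数名与示例数据均为虚构。

# 从“设备名 传输类型”列表中挑出唯一的 USB 候选设备;
# 候选数不为 1(没有或有多个)时返回非零,模仿 bootiso 的自动选择与安全退出逻辑。
pick_usb() {
    candidates=$(printf '%s\n' "$1" | awk '$2 == "usb" { print $1 }')
    count=$(printf '%s\n' "$candidates" | grep -c '^..*$')
    [ "$count" -eq 1 ] || return 1
    printf '%s\n' "$candidates"
}

# 检查 MIME 类型是否为 ISO 镜像(真实场景中可用 `file --mime-type -b 文件` 取得该字符串)
check_mime() {
    [ "$1" = "application/x-iso9660-image" ]
}

# 示例:一块 SATA 内部硬盘加一块 USB 盘
devices='sda sata
sdb usb'

if usb=$(pick_usb "$devices") && check_mime "application/x-iso9660-image"; then
    echo "目标设备:/dev/$usb"    # 输出:目标设备:/dev/sdb
else
    echo "未找到唯一的 USB 设备,或文件类型不符,安全退出。" >&2
fi
```

这也正是为什么当系统里接有多块 USB 盘而未用 `-d` 指定、或者误把内部硬盘当成目标时,Bootiso 会提示选择或直接退出而不做任何事情。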
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/bootiso-lets-you-safely-create-bootable-usb-drive/ + +作者:[SK][a] +译者:[kennethXia](https://github.com/kennethXia) +校对:[校对者ID](https://github.com/校对者ID) +选题:[lujun9972](https://github.com/lujun9972) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/etcher-beauitiful-app-create-bootable-sd-cards-usb-drives/ From 23bf508bb735c8bdd6f9c8f73ea36ddfc666fe7c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 30 Apr 2018 20:08:15 +0800 Subject: [PATCH 039/102] PRF:20171102 Testing IPv6 Networking in KVM- Part 1.md @qhwdw --- ... Testing IPv6 Networking in KVM- Part 1.md | 46 ++++++++++--------- 1 file changed, 24 insertions(+), 22 deletions(-) diff --git a/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md b/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md index 45a696511f..ca05272493 100644 --- a/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md +++ b/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md @@ -1,53 +1,55 @@ 在 KVM 中测试 IPv6 网络(第 1 部分) ====== +> 在这个两篇的系列当中,我们将学习关于 IPv6 私有地址的知识,以及如何在 KVM 中配置测试网络。 + ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ipv6-networking.png?itok=swQPV8Ey) 要理解 IPv6 地址是如何工作的,没有比亲自动手去实践更好的方法了,在 KVM 中配置一个小的测试实验室非常容易 —— 也很有趣。这个系列的文章共有两个部分,我们将学习关于 IPv6 私有地址的知识,以及如何在 KVM 中配置测试网络。 ### QEMU/KVM/虚拟机管理器 -我们先来了解什么是 KVM。在这里,我将使用 KVM 来表示 QEMU、KVM、以及虚拟机管理器的一个组合,虚拟机管理器在 Linux 发行版中一般内置了。简单解释就是,QEMU 模拟硬件,而 KVM 是一个内核模块,它在你的 CPU 上创建一个 “访客领地”,并去管理它们对内存和 CPU 的访问。虚拟机管理器是一个涵盖虚拟化和管理程序的图形工具。 +我们先来了解什么是 KVM。在这里,我将使用 KVM 来表示 QEMU、KVM、以及虚拟机管理器的一个组合,虚拟机管理器在 Linux 发行版中一般都内置了。简单解释就是,QEMU 模拟硬件,而 KVM 是一个内核模块,它在你的 CPU 上创建一个 “访客领地”,并去管理它们对内存和 CPU 的访问。虚拟机管理器是一个涵盖虚拟化和管理程序的图形工具。 -但是你不能被图形界面下 “点击” 操作的方式 "缠住" ,因为,它们也有命令行工具可以使用 —— 比如 virsh 和 
virt-install。 +但是你不能被图形界面下 “点击” 操作的方式 “缠住” ,因为,它们也有命令行工具可以使用 —— 比如 `virsh` 和 `virt-install`。 如果你在使用 KVM 方面没有什么经验,你可以从 [在 KVM 中创建虚拟机:第 1 部分][1] 和 [在 KVM 中创建虚拟机:第 2 部分 - 网络][2] 开始学起。 ### IPv6 唯一本地地址 -在 KVM 中配置 IPv6 网络与配置 IPv4 网络很类似。它们的主要不同在于这些怪异的长地址。[上一次][3],我们讨论了 IPv6 地址的不同类型。其中有一个 IPv6 单播地址类,fc00::/7(详细情况请查阅 [RFC 4193][4]),它类似于 IPv4 中的私有地址 —— 10.0.0.0/8、172.16.0.0/12、和 192.168.0.0/16。 +在 KVM 中配置 IPv6 网络与配置 IPv4 网络很类似。它们的主要不同在于这些怪异的长地址。[上一次][3],我们讨论了 IPv6 地址的不同类型。其中有一个 IPv6 单播地址类,`fc00::/7`(详细情况请查阅 [RFC 4193][4]),它类似于 IPv4 中的私有地址 —— `10.0.0.0/8`、`172.16.0.0/12`、和 `192.168.0.0/16`。 下图解释了这个唯一本地地址空间的结构。前 48 位定义了前缀和全局 ID,随后的 16 位是子网,剩余的 64 位是接口 ID: -``` -| 7 bits |1| 40 bits | 16 bits | 64 bits | -+--------|-+------------|-----------|----------------------------+ -| Prefix |L| Global ID | Subnet ID | Interface ID | -+--------|-+------------|-----------|----------------------------+ +``` +| 7 bits |1| 40 bits | 16 bits | 64 bits | ++--------+-+------------+-----------+----------------------------+ +| Prefix |L| Global ID | Subnet ID | Interface ID | ++--------+-+------------+-----------+----------------------------+ ``` 下面是另外一种表示方法,它可能更有助于你理解这些地址是如何管理的: -``` -| Prefix | Global ID | Subnet ID | Interface ID | -+--------|--------------|-------------|----------------------+ -| fd | 00:0000:0000 | 0000 | 0000:0000:0000:0000 | -+--------|--------------|-------------|----------------------+ +``` +| Prefix | Global ID | Subnet ID | Interface ID | ++--------+--------------+-------------+----------------------+ +| fd | 00:0000:0000 | 0000 | 0000:0000:0000:0000 | ++--------+--------------+-------------+----------------------+ ``` -fc00::/7 共分成两个 /8 地址块,fc00::/8 和 fd00::/8。fc00::/8 是为以后使用保留的。因此,唯一本地地址通常都是以 fd 开头的,而剩余部分是由你使用的。L 位,也就是第八位,它总是设置为 1,这样它可以表示为 fd00::/8。设置为 0 时,它就表示为 fc00::/8。你可以使用 `subnetcalc` 来看到这些东西: +`fc00::/7` 共分成两个 `/8` 地址块,`fc00::/8` 和 `fd00::/8`。`fc00::/8` 是为以后使用保留的。因此,唯一本地地址通常都是以 `fd` 开头的,而剩余部分是由你使用的。`L` 位,也就是第八位,它总是设置为 `1`,这样它可以表示为 `fd00::/8`。设置为 `0` 时,它就表示为 
`fc00::/8`。你可以使用 `subnetcalc` 来看到这些东西: + ``` $ subnetcalc fd00::/8 -n -Address = fd00:: - fd00 = 11111101 00000000 +Address = fd00:: + fd00 = 11111101 00000000 $ subnetcalc fc00::/8 -n -Address = fc00:: - fc00 = 11111100 00000000 - +Address = fc00:: + fc00 = 11111100 00000000 ``` -RFC 4193 要求地址必须随机产生。你可以用你选择的任何方法来造出个地址,只要它们以 `fd` 打头就可以,因为 IPv6 范围非常大,它不会因为地址耗尽而无法使用。当然,最佳实践还是按 RFCs 的要求来做。地址不能按顺序分配或者使用众所周知的数字。RFC 4193 包含一个构建伪随机地址生成器的算法,或者你可以在线找到任何生成器产生的数字。 +RFC 4193 要求地址必须随机产生。你可以用你选择的任何方法来造出个地址,只要它们以 `fd` 打头就可以,因为 IPv6 范围非常大,它不会因为地址耗尽而无法使用。当然,最佳实践还是按 RFC 的要求来做。地址不能按顺序分配或者使用众所周知的数字。RFC 4193 包含一个构建伪随机地址生成器的算法,或者你可以找到各种在线生成器。 唯一本地地址不像全局单播地址(它由你的因特网服务提供商分配)那样进行中心化管理,即使如此,发生地址冲突的可能性也是非常低的。当你需要去合并一些本地网络或者想去在不相关的私有网络之间路由时,这是一个非常好的优势。 @@ -61,7 +63,7 @@ RFC4193 建议,不要混用全局单播地址的 AAAA 和 PTR 记录,因为 下周我们将讲解如何在 KVM 中配置这些 IPv6 的地址,并现场测试它们。 -通过来自 Linux 基金会和 edX 的免费在线课程 ["Linux 入门" ][6] 学习更多的 Linux 知识。 +通过来自 Linux 基金会和 edX 的免费在线课程 [“Linux 入门”][6] 学习更多的 Linux 知识。 -------------------------------------------------------------------------------- @@ -69,7 +71,7 @@ via: https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking- 作者:[Carla Schroder][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1c43fedcdd752d64aa6a35f68fb92d38696b09f6 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 30 Apr 2018 20:09:43 +0800 Subject: [PATCH 040/102] PUB:20171102 Testing IPv6 Networking in KVM- Part 1.md @qhwdw https://linux.cn/article-9594-1.html --- .../20171102 Testing IPv6 Networking in KVM- Part 1.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171102 Testing IPv6 Networking in KVM- Part 1.md (100%) diff --git a/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md b/published/20171102 Testing IPv6 Networking in KVM- Part 1.md similarity index 100% rename from 
translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md rename to published/20171102 Testing IPv6 Networking in KVM- Part 1.md From 85379d8436aabe4a90f3ba571a6f614c03f2c707 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 30 Apr 2018 20:21:36 +0800 Subject: [PATCH 041/102] PRF:20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md @geekpi --- ... To Mobile Devices By Scanning QR Codes.md | 51 +++++++++---------- 1 file changed, 25 insertions(+), 26 deletions(-) diff --git a/translated/tech/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md b/translated/tech/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md index bb869bb39e..6d12ac858f 100644 --- a/translated/tech/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md +++ b/translated/tech/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md @@ -2,89 +2,90 @@ ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/03/qr-filetransfer-720x340.png) -将文件从计算机传输到智能手机并不是什么大问题。你可以使用 USB 线将手机挂载到系统上,然后从文件管理器传输文件。此外,某些第三方应用程序(例如 [**KDE Connect**][1] 和 [**AirDroid**] [2])可帮助你轻松管理和传输系统中的文件至 Android 设备。今天,我偶然发现了一个名为 **“Qr-filetransfer”** 的超酷工具。它允许你通过扫描二维码通过 WiFi 将文件从计算机传输到移动设备而无须离开终端。是的,你没有看错! qr-filetransfer 是一个使用 Go 语言编写的免费的开源命令行工具。在这个简短的教程中,我们将学习如何使用 qr-transfer 将文件从 Linux 传输到任何移动设备。 + +将文件从计算机传输到智能手机并不是什么大问题。你可以使用 USB 线将手机挂载到系统上,然后从文件管理器传输文件。此外,某些第三方应用程序(例如 [KDE Connect][1] 和 [AirDroid] [2])可帮助你轻松管理和传输系统中的文件至 Android 设备。今天,我偶然发现了一个名为 “Qr-filetransfer” 的超酷工具。它允许你通过扫描二维码通过 WiFi 将文件从计算机传输到移动设备而无须离开终端。是的,你没有看错! 
Qr-filetransfer 是一个使用 Go 语言编写的自由开源命令行工具。在这个简短的教程中,我们将学习如何使用 Qr-filetransfer 将文件从 Linux 传输到任何移动设备。 ### 安装 Qr-filetransfer 首先,在你的系统上安装 Go 语言。 在 Arch Linux 及其衍生版上: + ``` $ sudo pacman -S go - ``` 在基于 RPM 的系统(如 RHEL、CentOS、Fedora)上运行: + ``` $ sudo yum install golang - ``` 或者: + ``` $ sudo dnf install golang - ``` 在基于 DEB 的系统上,例如 Debian、Ubuntu、Linux Mint,你可以使用命令安装它: + ``` $ sudo apt-get install golang - ``` 在 SUSE/openSUSE 上: + ``` $ sudo zypper install golang - ``` -安装 Go 语言后,运行以下命令下载 qr-filetransfer 应用。 +安装 Go 语言后,运行以下命令下载 Qr-filetransfer 应用。 + ``` $ go get github.com/claudiodangelis/qr-filetransfer - ``` -上述命令将在当前工作目录下的一个名为 **“go”** 的目录中下载 qr-filetrnasfer GitHub 仓库的内容。 +上述命令将在当前工作目录下的一个名为 `go` 的目录中下载 Qr-filetransfer GitHub 仓库的内容。 + +将 Qr-filetransfer 的二进制文件复制到 PATH 中,例如 `/usr/local/bin/`。 -将 qt-filetransfer 的二进制文件复制到 PATH 中,例如 /usr/local/bin/。 ``` $ sudo cp go/bin/qr-filetransfer /usr/local/bin/ - ``` 最后,如下使其可执行: + ``` $ sudo chmod +x /usr/local/bin/qr-filetransfer - ``` ### 通过扫描二维码将文件从计算机传输到移动设备 确保你的智能手机已连接到与计算机相同的 WiFi 网络。 -然后,使用要传输的文件的完整路径启动 qt-filetransfer。 +然后,使用要传输的文件的完整路径启动 `qt-filetransfer`。 比如,我要传输一个 mp3 文件。 + ``` $ qr-filetransfer Chill\ Study\ Beats.mp3 - ``` -首次启动时,qr-filetransfer 会要求你选择使用的网络接口,如下所示。 +首次启动时,`qr-filetransfer` 会要求你选择使用的网络接口,如下所示。 + ``` Choose the network interface to use (type the number): [0] enp5s0 [1] wlp9s0 - ``` -我打算使用 **wlp9s0** 接口传输文件,因此我输入 “1”。qr-filetransfer 会记住这个选择,除非你通过 **-force** 参数或删除程序存储在当前用户的家目录中的 **.qr-filetransfer.json** 文件,否则永远不会再提示你。 +我打算使用 wlp9s0 接口传输文件,因此我输入 “1”。`qr-filetransfer` 会记住这个选择,除非你通过 `-force` 参数或删除程序存储在当前用户的家目录中的 `.qr-filetransfer.json` 文件,否则永远不会再提示你。 然后,你将看到二维码,如下图所示。 ![][4] -打开二维码应用(如果它尚未安装,请从 Play 商店安装任何一个二维码读取程序)并扫描终端中显示的二维码。 +打开二维码应用(如果尚未安装,请从 Play 商店安装任何一个二维码读取程序)并扫描终端中显示的二维码。 读取二维码后,系统会询问你是要复制链接还是打开链接。你可以复制链接并手动将其粘贴到移动网络浏览器上,或者选择“打开链接”以在移动浏览器中自动打开它。 @@ -94,34 +95,32 @@ Choose the network interface to use (type the number): ![][6] -如果文件太大,请压缩文件,然后传输它 +如果文件太大,请压缩文件,然后传输它: + ``` $ qr-filetransfer -zip 
/path/to/file.txt - ``` 要传输整个目录,请运行: + ``` $ qr-filetransfer /path/to/directory - ``` 请注意,目录在传输之前会被压缩。 -qr-filetransfer 只能将系统中的内容传输到移动设备,反之不能。这个项目非常新,所以会有 bug。如果你遇到了任何 bug,请在本指南最后给出的 GitHub 页面上报告。 +`qr-filetransfer` 只能将系统中的内容传输到移动设备,反之不能。这个项目非常新,所以会有 bug。如果你遇到了任何 bug,请在本指南最后给出的 GitHub 页面上报告。 干杯! - - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/transfer-files-from-computer-to-mobile-devices-by-scanning-qr-codes/ 作者:[SK][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) 选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8ebdf84c070521c51f032383bb77de5ddf336e0b Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 30 Apr 2018 20:22:02 +0800 Subject: [PATCH 042/102] PUB:20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md @geekpi --- ... 
Files From Computer To Mobile Devices By Scanning QR Codes.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md (100%) diff --git a/translated/tech/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md b/published/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md similarity index 100% rename from translated/tech/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md rename to published/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md From 00b0b86934ef95f5bb610ac5c17cc29abd326cbf Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Mon, 30 Apr 2018 22:19:41 +0800 Subject: [PATCH 043/102] Update 20180126 How To Manage NodeJS Packages Using Npm.md --- .../tech/20180126 How To Manage NodeJS Packages Using Npm.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20180126 How To Manage NodeJS Packages Using Npm.md b/sources/tech/20180126 How To Manage NodeJS Packages Using Npm.md index ac27816a7b..e21522944b 100644 --- a/sources/tech/20180126 How To Manage NodeJS Packages Using Npm.md +++ b/sources/tech/20180126 How To Manage NodeJS Packages Using Npm.md @@ -1,3 +1,6 @@ +Translating by MjSeven + + How To Manage NodeJS Packages Using Npm ====== From ba29496f23be05956f6cf518d9c7aaea07f7b26d Mon Sep 17 00:00:00 2001 From: kennethXia <37970750+kennethXia@users.noreply.github.com> Date: Tue, 1 May 2018 07:37:37 +0800 Subject: [PATCH 044/102] translating by kennethXia --- sources/tech/20180412 Getting started with Jenkins Pipelines.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180412 Getting started with Jenkins Pipelines.md b/sources/tech/20180412 Getting started with Jenkins Pipelines.md index 1cda5ee0c2..8fa216396c 100644 --- a/sources/tech/20180412 Getting started with 
Jenkins Pipelines.md +++ b/sources/tech/20180412 Getting started with Jenkins Pipelines.md @@ -1,3 +1,5 @@ +translating by kennethXia + Getting started with Jenkins Pipelines ====== From 4d2f00d54a73329ad29fb05fd3f3b6acdcbbe4c5 Mon Sep 17 00:00:00 2001 From: FelixYFZ <33593534+FelixYFZ@users.noreply.github.com> Date: Tue, 1 May 2018 09:19:43 +0800 Subject: [PATCH 045/102] Update 20180201 IT automation- How to make the case.md --- sources/talk/20180201 IT automation- How to make the case.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20180201 IT automation- How to make the case.md b/sources/talk/20180201 IT automation- How to make the case.md index 4b70fbff49..19f096051e 100644 --- a/sources/talk/20180201 IT automation- How to make the case.md +++ b/sources/talk/20180201 IT automation- How to make the case.md @@ -1,4 +1,4 @@ -IT automation: How to make the case +Translating by FelxiYFZ IT automation: How to make the case ====== At the start of any significant project or change initiative, IT leaders face a proverbial fork in the road. From ccc847416b03f9ae27e8cc3a001c3867f14f05e7 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 1 May 2018 22:56:13 +0800 Subject: [PATCH 046/102] =?UTF-8?q?=E5=BD=92=E6=A1=A3201804?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20090211 Page Cache the Affair Between Memory and Files.md | 0 ...10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md | 0 .../20150703 Let-s Build A Simple Interpreter. Part 2..md | 0 .../20150812 Let-s Build A Simple Interpreter. 
Part 3..md | 0 .../20170101 How to resolve mount.nfs- Stale file handle error.md | 0 ...d Folders From Accidental Deletion Or Modification In Linux.md | 0 ...8 Ansible Tutorial- Intorduction to simple Ansible commands.md | 0 ...arning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md | 0 .../20170927 Best Linux Distros for the Enterprise.md | 0 .../20170927 Linux directory structure- -lib explained.md | 0 published/{ => 201804}/20170928 Process Monitoring.md | 0 .../20171002 Linux Gunzip Command Explained with Examples.md | 0 .../20171010 How to test internet speed in Linux terminal.md | 0 .../20171014 Proxy Models in Container Environments.md | 0 published/{ => 201804}/20171016 5 SSH alias examples in Linux.md | 0 ...8 Install a Centralized Log Server with Rsyslog in Debian 9.md | 0 .../{ => 201804}/20171027 Scary Linux commands for Halloween.md | 0 .../20171102 Testing IPv6 Networking in KVM- Part 1.md | 0 ...109 How to record statistics about a Linux machine-s uptime.md | 0 .../20171113 My Adventure Migrating Back To Windows.md | 0 .../{ => 201804}/20171113 The big break in computer languages.md | 0 .../20171123 Why microservices are a security issue.md | 0 .../20171205 7 rules for avoiding documentation pitfalls.md | 0 .../20171205 What DevOps teams really need from a CIO.md | 0 .../20171207 How to use KVM cloud images on Ubuntu Linux.md | 0 .../20171212 Oh My Fish- Make Your Shell Beautiful.md | 0 .../20171219 How to generate webpages using CGI scripts.md | 0 .../20171219 How to set GNOME to display a custom slideshow.md | 0 .../20171228 Testing Ansible Playbooks With Vagrant.md | 0 ...80102 The Uniq Command Tutorial With Examples For Beginners.md | 0 published/{ => 201804}/20180104 How does gdb call functions.md | 0 ...0109 Linux size Command Tutorial for Beginners (6 Examples).md | 0 .../20180110 8 simple ways to promote team communication.md | 0 ...0 Why isn-t open source hot among computer science students.md | 0 ...111 AI and machine learning bias 
has dangerous implications.md | 0 .../{ => 201804}/20180116 Monitor your Kubernetes Cluster.md | 0 .../20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md | 0 .../20180118 Securing the Linux filesystem with Tripwire.md | 0 published/{ => 201804}/20180122 How to Create a Docker Image.md | 0 .../{ => 201804}/20180123 Migrating to Linux- The Command Line.md | 0 ...LDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md | 0 .../{ => 201804}/20180127 Your instant Kubernetes cluster.md | 0 ... to add network bridge with nmcli (NetworkManager) on Linux.md | 0 .../20180129 A look inside Facebooks open source program.md | 0 published/{ => 201804}/20180129 How to Use DockerHub.md | 0 ...AWFFull web server log analysis application on ubuntu 17.10.md | 0 ...180130 Linux ln Command Tutorial for Beginners (5 Examples).md | 0 ...k Look at the Arch Based Indie Linux Distribution- MagpieOS.md | 0 ...0180206 How to start an open source program in your company.md | 0 published/{ => 201804}/20180206 Manage printers and printing.md | 0 .../{ => 201804}/20180206 Programming in Color with ncurses.md | 0 .../20180207 Python Global Keyword (With Examples).md | 0 ... Hollywood movie hacker with these three command line tools.md | 0 ...20180213 How to clone, modify, add, and delete files in Git.md | 0 published/{ => 201804}/20180214 11 awesome vi tips and tricks.md | 0 ... 
Code Integrity with PGP - Part 1- Basic Concepts and Tools.md | 0 .../20180215 Check Linux Distribution Name and Version.md | 0 published/{ => 201804}/20180215 What is a Linux -oops.md | 0 .../20180220 How to Get Started Using WSL in Windows 10.md | 0 published/{ => 201804}/20180221 Getting started with SQL.md | 0 ...ode Integrity with PGP - Part 2- Generating Your Master Key.md | 0 .../20180221 cTop - A CLI Tool For Container Monitoring.md | 0 .../20180222 How to configure an Apache web server.md | 0 .../{ => 201804}/20180226 5 keys to building open hardware.md | 0 .../{ => 201804}/20180226 How to Use WSL Like a Linux Pro.md | 0 published/{ => 201804}/20180301 How to add fonts to Fedora.md | 0 ...20180302 10 Quick Tips About sudo command for Linux systems.md | 0 ...pen Source Desktop YouTube Player For Privacy-minded People.md | 0 .../20180308 Dive into BPF a list of reading material.md | 0 .../{ => 201804}/20180312 Continuous integration in Fedora.md | 0 .../{ => 201804}/20180313 Running DOS on the Raspberry Pi.md | 0 ...80313 The Type Command Tutorial With Examples For Beginners.md | 0 published/{ => 201804}/20180314 Playing with water.md | 0 ...20180318 How To Manage Disk Partitions Using Parted Command.md | 0 ...1 The Command line Personal Assistant For Your Linux System.md | 0 ...ow To Find If A CPU Supports Virtualization Technology (VT).md | 0 ...0180323 How to tell when moving to blockchain is a bad idea.md | 0 ... 
Files From Computer To Mobile Devices By Scanning QR Codes.md | 0 .../{ => 201804}/20180326 Working with calendars on Linux.md | 0 .../20180328 Do continuous deployment with Github and Python.md | 0 ...0328 How To Create-Extend Swap Partition In Linux Using LVM.md | 0 .../20180329 Protect Your Websites with Let-s Encrypt.md | 0 published/{ => 201804}/20180402 Advanced SSH Cheat Sheet.md | 0 published/{ => 201804}/20180405 How to find files in Linux.md | 0 ...0180405 The fc Command Tutorial With Examples For Beginners.md | 0 .../{ => 201804}/20180407 12 Git tips for Git-s 12th birthday.md | 0 .../20180411 Awesome GNOME extensions for developers.md | 0 .../20180412 3 password managers for the Linux command line.md | 0 88 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => 201804}/20090211 Page Cache the Affair Between Memory and Files.md (100%) rename published/{ => 201804}/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md (100%) rename published/{ => 201804}/20150703 Let-s Build A Simple Interpreter. Part 2..md (100%) rename published/{ => 201804}/20150812 Let-s Build A Simple Interpreter. 
Part 3..md (100%) rename published/{ => 201804}/20170101 How to resolve mount.nfs- Stale file handle error.md (100%) rename published/{ => 201804}/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md (100%) rename published/{ => 201804}/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md (100%) rename published/{ => 201804}/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md (100%) rename published/{ => 201804}/20170927 Best Linux Distros for the Enterprise.md (100%) rename published/{ => 201804}/20170927 Linux directory structure- -lib explained.md (100%) rename published/{ => 201804}/20170928 Process Monitoring.md (100%) rename published/{ => 201804}/20171002 Linux Gunzip Command Explained with Examples.md (100%) rename published/{ => 201804}/20171010 How to test internet speed in Linux terminal.md (100%) rename published/{ => 201804}/20171014 Proxy Models in Container Environments.md (100%) rename published/{ => 201804}/20171016 5 SSH alias examples in Linux.md (100%) rename published/{ => 201804}/20171018 Install a Centralized Log Server with Rsyslog in Debian 9.md (100%) rename published/{ => 201804}/20171027 Scary Linux commands for Halloween.md (100%) rename published/{ => 201804}/20171102 Testing IPv6 Networking in KVM- Part 1.md (100%) rename published/{ => 201804}/20171109 How to record statistics about a Linux machine-s uptime.md (100%) rename published/{ => 201804}/20171113 My Adventure Migrating Back To Windows.md (100%) rename published/{ => 201804}/20171113 The big break in computer languages.md (100%) rename published/{ => 201804}/20171123 Why microservices are a security issue.md (100%) rename published/{ => 201804}/20171205 7 rules for avoiding documentation pitfalls.md (100%) rename published/{ => 201804}/20171205 What DevOps teams really need from a CIO.md (100%) rename published/{ => 201804}/20171207 How to use KVM cloud images on Ubuntu Linux.md (100%) rename 
published/{ => 201804}/20171212 Oh My Fish- Make Your Shell Beautiful.md (100%) rename published/{ => 201804}/20171219 How to generate webpages using CGI scripts.md (100%) rename published/{ => 201804}/20171219 How to set GNOME to display a custom slideshow.md (100%) rename published/{ => 201804}/20171228 Testing Ansible Playbooks With Vagrant.md (100%) rename published/{ => 201804}/20180102 The Uniq Command Tutorial With Examples For Beginners.md (100%) rename published/{ => 201804}/20180104 How does gdb call functions.md (100%) rename published/{ => 201804}/20180109 Linux size Command Tutorial for Beginners (6 Examples).md (100%) rename published/{ => 201804}/20180110 8 simple ways to promote team communication.md (100%) rename published/{ => 201804}/20180110 Why isn-t open source hot among computer science students.md (100%) rename published/{ => 201804}/20180111 AI and machine learning bias has dangerous implications.md (100%) rename published/{ => 201804}/20180116 Monitor your Kubernetes Cluster.md (100%) rename published/{ => 201804}/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md (100%) rename published/{ => 201804}/20180118 Securing the Linux filesystem with Tripwire.md (100%) rename published/{ => 201804}/20180122 How to Create a Docker Image.md (100%) rename published/{ => 201804}/20180123 Migrating to Linux- The Command Line.md (100%) rename published/{ => 201804}/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md (100%) rename published/{ => 201804}/20180127 Your instant Kubernetes cluster.md (100%) rename published/{ => 201804}/20180128 How to add network bridge with nmcli (NetworkManager) on Linux.md (100%) rename published/{ => 201804}/20180129 A look inside Facebooks open source program.md (100%) rename published/{ => 201804}/20180129 How to Use DockerHub.md (100%) rename published/{ => 201804}/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md (100%) rename published/{ => 201804}/20180130 
Linux ln Command Tutorial for Beginners (5 Examples).md (100%) rename published/{ => 201804}/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md (100%) rename published/{ => 201804}/20180206 How to start an open source program in your company.md (100%) rename published/{ => 201804}/20180206 Manage printers and printing.md (100%) rename published/{ => 201804}/20180206 Programming in Color with ncurses.md (100%) rename published/{ => 201804}/20180207 Python Global Keyword (With Examples).md (100%) rename published/{ => 201804}/20180208 Become a Hollywood movie hacker with these three command line tools.md (100%) rename published/{ => 201804}/20180213 How to clone, modify, add, and delete files in Git.md (100%) rename published/{ => 201804}/20180214 11 awesome vi tips and tricks.md (100%) rename published/{ => 201804}/20180214 Protecting Code Integrity with PGP - Part 1- Basic Concepts and Tools.md (100%) rename published/{ => 201804}/20180215 Check Linux Distribution Name and Version.md (100%) rename published/{ => 201804}/20180215 What is a Linux -oops.md (100%) rename published/{ => 201804}/20180220 How to Get Started Using WSL in Windows 10.md (100%) rename published/{ => 201804}/20180221 Getting started with SQL.md (100%) rename published/{ => 201804}/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md (100%) rename published/{ => 201804}/20180221 cTop - A CLI Tool For Container Monitoring.md (100%) rename published/{ => 201804}/20180222 How to configure an Apache web server.md (100%) rename published/{ => 201804}/20180226 5 keys to building open hardware.md (100%) rename published/{ => 201804}/20180226 How to Use WSL Like a Linux Pro.md (100%) rename published/{ => 201804}/20180301 How to add fonts to Fedora.md (100%) rename published/{ => 201804}/20180302 10 Quick Tips About sudo command for Linux systems.md (100%) rename published/{ => 201804}/20180307 An Open Source Desktop YouTube Player For 
Privacy-minded People.md (100%) rename published/{ => 201804}/20180308 Dive into BPF a list of reading material.md (100%) rename published/{ => 201804}/20180312 Continuous integration in Fedora.md (100%) rename published/{ => 201804}/20180313 Running DOS on the Raspberry Pi.md (100%) rename published/{ => 201804}/20180313 The Type Command Tutorial With Examples For Beginners.md (100%) rename published/{ => 201804}/20180314 Playing with water.md (100%) rename published/{ => 201804}/20180318 How To Manage Disk Partitions Using Parted Command.md (100%) rename published/{ => 201804}/20180321 The Command line Personal Assistant For Your Linux System.md (100%) rename published/{ => 201804}/20180322 How To Find If A CPU Supports Virtualization Technology (VT).md (100%) rename published/{ => 201804}/20180323 How to tell when moving to blockchain is a bad idea.md (100%) rename published/{ => 201804}/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md (100%) rename published/{ => 201804}/20180326 Working with calendars on Linux.md (100%) rename published/{ => 201804}/20180328 Do continuous deployment with Github and Python.md (100%) rename published/{ => 201804}/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md (100%) rename published/{ => 201804}/20180329 Protect Your Websites with Let-s Encrypt.md (100%) rename published/{ => 201804}/20180402 Advanced SSH Cheat Sheet.md (100%) rename published/{ => 201804}/20180405 How to find files in Linux.md (100%) rename published/{ => 201804}/20180405 The fc Command Tutorial With Examples For Beginners.md (100%) rename published/{ => 201804}/20180407 12 Git tips for Git-s 12th birthday.md (100%) rename published/{ => 201804}/20180411 Awesome GNOME extensions for developers.md (100%) rename published/{ => 201804}/20180412 3 password managers for the Linux command line.md (100%) diff --git a/published/20090211 Page Cache the Affair Between Memory and Files.md b/published/201804/20090211 Page 
Cache the Affair Between Memory and Files.md similarity index 100% rename from published/20090211 Page Cache the Affair Between Memory and Files.md rename to published/201804/20090211 Page Cache the Affair Between Memory and Files.md diff --git a/published/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md b/published/201804/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md similarity index 100% rename from published/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md rename to published/201804/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md diff --git a/published/20150703 Let-s Build A Simple Interpreter. Part 2..md b/published/201804/20150703 Let-s Build A Simple Interpreter. Part 2..md similarity index 100% rename from published/20150703 Let-s Build A Simple Interpreter. Part 2..md rename to published/201804/20150703 Let-s Build A Simple Interpreter. Part 2..md diff --git a/published/20150812 Let-s Build A Simple Interpreter. Part 3..md b/published/201804/20150812 Let-s Build A Simple Interpreter. Part 3..md similarity index 100% rename from published/20150812 Let-s Build A Simple Interpreter. Part 3..md rename to published/201804/20150812 Let-s Build A Simple Interpreter. 
Part 3..md diff --git a/published/20170101 How to resolve mount.nfs- Stale file handle error.md b/published/201804/20170101 How to resolve mount.nfs- Stale file handle error.md similarity index 100% rename from published/20170101 How to resolve mount.nfs- Stale file handle error.md rename to published/201804/20170101 How to resolve mount.nfs- Stale file handle error.md diff --git a/published/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md b/published/201804/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md similarity index 100% rename from published/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md rename to published/201804/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md diff --git a/published/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md b/published/201804/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md similarity index 100% rename from published/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md rename to published/201804/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md diff --git a/published/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md b/published/201804/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md similarity index 100% rename from published/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md rename to published/201804/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md diff --git a/published/20170927 Best Linux Distros for the Enterprise.md b/published/201804/20170927 Best Linux Distros for the Enterprise.md similarity index 100% rename from published/20170927 Best Linux Distros for the Enterprise.md rename to published/201804/20170927 Best Linux Distros for the Enterprise.md diff --git 
a/published/20170927 Linux directory structure- -lib explained.md b/published/201804/20170927 Linux directory structure- -lib explained.md similarity index 100% rename from published/20170927 Linux directory structure- -lib explained.md rename to published/201804/20170927 Linux directory structure- -lib explained.md diff --git a/published/20170928 Process Monitoring.md b/published/201804/20170928 Process Monitoring.md similarity index 100% rename from published/20170928 Process Monitoring.md rename to published/201804/20170928 Process Monitoring.md diff --git a/published/20171002 Linux Gunzip Command Explained with Examples.md b/published/201804/20171002 Linux Gunzip Command Explained with Examples.md similarity index 100% rename from published/20171002 Linux Gunzip Command Explained with Examples.md rename to published/201804/20171002 Linux Gunzip Command Explained with Examples.md diff --git a/published/20171010 How to test internet speed in Linux terminal.md b/published/201804/20171010 How to test internet speed in Linux terminal.md similarity index 100% rename from published/20171010 How to test internet speed in Linux terminal.md rename to published/201804/20171010 How to test internet speed in Linux terminal.md diff --git a/published/20171014 Proxy Models in Container Environments.md b/published/201804/20171014 Proxy Models in Container Environments.md similarity index 100% rename from published/20171014 Proxy Models in Container Environments.md rename to published/201804/20171014 Proxy Models in Container Environments.md diff --git a/published/20171016 5 SSH alias examples in Linux.md b/published/201804/20171016 5 SSH alias examples in Linux.md similarity index 100% rename from published/20171016 5 SSH alias examples in Linux.md rename to published/201804/20171016 5 SSH alias examples in Linux.md diff --git a/published/20171018 Install a Centralized Log Server with Rsyslog in Debian 9.md b/published/201804/20171018 Install a Centralized Log Server with 
Rsyslog in Debian 9.md similarity index 100% rename from published/20171018 Install a Centralized Log Server with Rsyslog in Debian 9.md rename to published/201804/20171018 Install a Centralized Log Server with Rsyslog in Debian 9.md diff --git a/published/20171027 Scary Linux commands for Halloween.md b/published/201804/20171027 Scary Linux commands for Halloween.md similarity index 100% rename from published/20171027 Scary Linux commands for Halloween.md rename to published/201804/20171027 Scary Linux commands for Halloween.md diff --git a/published/20171102 Testing IPv6 Networking in KVM- Part 1.md b/published/201804/20171102 Testing IPv6 Networking in KVM- Part 1.md similarity index 100% rename from published/20171102 Testing IPv6 Networking in KVM- Part 1.md rename to published/201804/20171102 Testing IPv6 Networking in KVM- Part 1.md diff --git a/published/20171109 How to record statistics about a Linux machine-s uptime.md b/published/201804/20171109 How to record statistics about a Linux machine-s uptime.md similarity index 100% rename from published/20171109 How to record statistics about a Linux machine-s uptime.md rename to published/201804/20171109 How to record statistics about a Linux machine-s uptime.md diff --git a/published/20171113 My Adventure Migrating Back To Windows.md b/published/201804/20171113 My Adventure Migrating Back To Windows.md similarity index 100% rename from published/20171113 My Adventure Migrating Back To Windows.md rename to published/201804/20171113 My Adventure Migrating Back To Windows.md diff --git a/published/20171113 The big break in computer languages.md b/published/201804/20171113 The big break in computer languages.md similarity index 100% rename from published/20171113 The big break in computer languages.md rename to published/201804/20171113 The big break in computer languages.md diff --git a/published/20171123 Why microservices are a security issue.md b/published/201804/20171123 Why microservices are a security 
issue.md similarity index 100% rename from published/20171123 Why microservices are a security issue.md rename to published/201804/20171123 Why microservices are a security issue.md diff --git a/published/20171205 7 rules for avoiding documentation pitfalls.md b/published/201804/20171205 7 rules for avoiding documentation pitfalls.md similarity index 100% rename from published/20171205 7 rules for avoiding documentation pitfalls.md rename to published/201804/20171205 7 rules for avoiding documentation pitfalls.md diff --git a/published/20171205 What DevOps teams really need from a CIO.md b/published/201804/20171205 What DevOps teams really need from a CIO.md similarity index 100% rename from published/20171205 What DevOps teams really need from a CIO.md rename to published/201804/20171205 What DevOps teams really need from a CIO.md diff --git a/published/20171207 How to use KVM cloud images on Ubuntu Linux.md b/published/201804/20171207 How to use KVM cloud images on Ubuntu Linux.md similarity index 100% rename from published/20171207 How to use KVM cloud images on Ubuntu Linux.md rename to published/201804/20171207 How to use KVM cloud images on Ubuntu Linux.md diff --git a/published/20171212 Oh My Fish- Make Your Shell Beautiful.md b/published/201804/20171212 Oh My Fish- Make Your Shell Beautiful.md similarity index 100% rename from published/20171212 Oh My Fish- Make Your Shell Beautiful.md rename to published/201804/20171212 Oh My Fish- Make Your Shell Beautiful.md diff --git a/published/20171219 How to generate webpages using CGI scripts.md b/published/201804/20171219 How to generate webpages using CGI scripts.md similarity index 100% rename from published/20171219 How to generate webpages using CGI scripts.md rename to published/201804/20171219 How to generate webpages using CGI scripts.md diff --git a/published/20171219 How to set GNOME to display a custom slideshow.md b/published/201804/20171219 How to set GNOME to display a custom slideshow.md similarity 
index 100% rename from published/20171219 How to set GNOME to display a custom slideshow.md rename to published/201804/20171219 How to set GNOME to display a custom slideshow.md diff --git a/published/20171228 Testing Ansible Playbooks With Vagrant.md b/published/201804/20171228 Testing Ansible Playbooks With Vagrant.md similarity index 100% rename from published/20171228 Testing Ansible Playbooks With Vagrant.md rename to published/201804/20171228 Testing Ansible Playbooks With Vagrant.md diff --git a/published/20180102 The Uniq Command Tutorial With Examples For Beginners.md b/published/201804/20180102 The Uniq Command Tutorial With Examples For Beginners.md similarity index 100% rename from published/20180102 The Uniq Command Tutorial With Examples For Beginners.md rename to published/201804/20180102 The Uniq Command Tutorial With Examples For Beginners.md diff --git a/published/20180104 How does gdb call functions.md b/published/201804/20180104 How does gdb call functions.md similarity index 100% rename from published/20180104 How does gdb call functions.md rename to published/201804/20180104 How does gdb call functions.md diff --git a/published/20180109 Linux size Command Tutorial for Beginners (6 Examples).md b/published/201804/20180109 Linux size Command Tutorial for Beginners (6 Examples).md similarity index 100% rename from published/20180109 Linux size Command Tutorial for Beginners (6 Examples).md rename to published/201804/20180109 Linux size Command Tutorial for Beginners (6 Examples).md diff --git a/published/20180110 8 simple ways to promote team communication.md b/published/201804/20180110 8 simple ways to promote team communication.md similarity index 100% rename from published/20180110 8 simple ways to promote team communication.md rename to published/201804/20180110 8 simple ways to promote team communication.md diff --git a/published/20180110 Why isn-t open source hot among computer science students.md b/published/201804/20180110 Why isn-t open 
source hot among computer science students.md similarity index 100% rename from published/20180110 Why isn-t open source hot among computer science students.md rename to published/201804/20180110 Why isn-t open source hot among computer science students.md diff --git a/published/20180111 AI and machine learning bias has dangerous implications.md b/published/201804/20180111 AI and machine learning bias has dangerous implications.md similarity index 100% rename from published/20180111 AI and machine learning bias has dangerous implications.md rename to published/201804/20180111 AI and machine learning bias has dangerous implications.md diff --git a/published/20180116 Monitor your Kubernetes Cluster.md b/published/201804/20180116 Monitor your Kubernetes Cluster.md similarity index 100% rename from published/20180116 Monitor your Kubernetes Cluster.md rename to published/201804/20180116 Monitor your Kubernetes Cluster.md diff --git a/published/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md b/published/201804/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md similarity index 100% rename from published/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md rename to published/201804/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md diff --git a/published/20180118 Securing the Linux filesystem with Tripwire.md b/published/201804/20180118 Securing the Linux filesystem with Tripwire.md similarity index 100% rename from published/20180118 Securing the Linux filesystem with Tripwire.md rename to published/201804/20180118 Securing the Linux filesystem with Tripwire.md diff --git a/published/20180122 How to Create a Docker Image.md b/published/201804/20180122 How to Create a Docker Image.md similarity index 100% rename from published/20180122 How to Create a Docker Image.md rename to published/201804/20180122 How to Create a Docker Image.md diff --git a/published/20180123 Migrating to Linux- The Command Line.md b/published/201804/20180123 Migrating to Linux- The 
Command Line.md similarity index 100% rename from published/20180123 Migrating to Linux- The Command Line.md rename to published/201804/20180123 Migrating to Linux- The Command Line.md diff --git a/published/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md b/published/201804/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md similarity index 100% rename from published/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md rename to published/201804/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md diff --git a/published/20180127 Your instant Kubernetes cluster.md b/published/201804/20180127 Your instant Kubernetes cluster.md similarity index 100% rename from published/20180127 Your instant Kubernetes cluster.md rename to published/201804/20180127 Your instant Kubernetes cluster.md diff --git a/published/20180128 How to add network bridge with nmcli (NetworkManager) on Linux.md b/published/201804/20180128 How to add network bridge with nmcli (NetworkManager) on Linux.md similarity index 100% rename from published/20180128 How to add network bridge with nmcli (NetworkManager) on Linux.md rename to published/201804/20180128 How to add network bridge with nmcli (NetworkManager) on Linux.md diff --git a/published/20180129 A look inside Facebooks open source program.md b/published/201804/20180129 A look inside Facebooks open source program.md similarity index 100% rename from published/20180129 A look inside Facebooks open source program.md rename to published/201804/20180129 A look inside Facebooks open source program.md diff --git a/published/20180129 How to Use DockerHub.md b/published/201804/20180129 How to Use DockerHub.md similarity index 100% rename from published/20180129 How to Use DockerHub.md rename to published/201804/20180129 How to Use DockerHub.md diff --git a/published/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md 
b/published/201804/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md similarity index 100% rename from published/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md rename to published/201804/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md diff --git a/published/20180130 Linux ln Command Tutorial for Beginners (5 Examples).md b/published/201804/20180130 Linux ln Command Tutorial for Beginners (5 Examples).md similarity index 100% rename from published/20180130 Linux ln Command Tutorial for Beginners (5 Examples).md rename to published/201804/20180130 Linux ln Command Tutorial for Beginners (5 Examples).md diff --git a/published/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md b/published/201804/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md similarity index 100% rename from published/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md rename to published/201804/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md diff --git a/published/20180206 How to start an open source program in your company.md b/published/201804/20180206 How to start an open source program in your company.md similarity index 100% rename from published/20180206 How to start an open source program in your company.md rename to published/201804/20180206 How to start an open source program in your company.md diff --git a/published/20180206 Manage printers and printing.md b/published/201804/20180206 Manage printers and printing.md similarity index 100% rename from published/20180206 Manage printers and printing.md rename to published/201804/20180206 Manage printers and printing.md diff --git a/published/20180206 Programming in Color with ncurses.md b/published/201804/20180206 Programming in Color with ncurses.md similarity index 100% rename from published/20180206 Programming in Color with ncurses.md rename to 
published/201804/20180206 Programming in Color with ncurses.md diff --git a/published/20180207 Python Global Keyword (With Examples).md b/published/201804/20180207 Python Global Keyword (With Examples).md similarity index 100% rename from published/20180207 Python Global Keyword (With Examples).md rename to published/201804/20180207 Python Global Keyword (With Examples).md diff --git a/published/20180208 Become a Hollywood movie hacker with these three command line tools.md b/published/201804/20180208 Become a Hollywood movie hacker with these three command line tools.md similarity index 100% rename from published/20180208 Become a Hollywood movie hacker with these three command line tools.md rename to published/201804/20180208 Become a Hollywood movie hacker with these three command line tools.md diff --git a/published/20180213 How to clone, modify, add, and delete files in Git.md b/published/201804/20180213 How to clone, modify, add, and delete files in Git.md similarity index 100% rename from published/20180213 How to clone, modify, add, and delete files in Git.md rename to published/201804/20180213 How to clone, modify, add, and delete files in Git.md diff --git a/published/20180214 11 awesome vi tips and tricks.md b/published/201804/20180214 11 awesome vi tips and tricks.md similarity index 100% rename from published/20180214 11 awesome vi tips and tricks.md rename to published/201804/20180214 11 awesome vi tips and tricks.md diff --git a/published/20180214 Protecting Code Integrity with PGP - Part 1- Basic Concepts and Tools.md b/published/201804/20180214 Protecting Code Integrity with PGP - Part 1- Basic Concepts and Tools.md similarity index 100% rename from published/20180214 Protecting Code Integrity with PGP - Part 1- Basic Concepts and Tools.md rename to published/201804/20180214 Protecting Code Integrity with PGP - Part 1- Basic Concepts and Tools.md diff --git a/published/20180215 Check Linux Distribution Name and Version.md 
b/published/201804/20180215 Check Linux Distribution Name and Version.md similarity index 100% rename from published/20180215 Check Linux Distribution Name and Version.md rename to published/201804/20180215 Check Linux Distribution Name and Version.md diff --git a/published/20180215 What is a Linux -oops.md b/published/201804/20180215 What is a Linux -oops.md similarity index 100% rename from published/20180215 What is a Linux -oops.md rename to published/201804/20180215 What is a Linux -oops.md diff --git a/published/20180220 How to Get Started Using WSL in Windows 10.md b/published/201804/20180220 How to Get Started Using WSL in Windows 10.md similarity index 100% rename from published/20180220 How to Get Started Using WSL in Windows 10.md rename to published/201804/20180220 How to Get Started Using WSL in Windows 10.md diff --git a/published/20180221 Getting started with SQL.md b/published/201804/20180221 Getting started with SQL.md similarity index 100% rename from published/20180221 Getting started with SQL.md rename to published/201804/20180221 Getting started with SQL.md diff --git a/published/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md b/published/201804/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md similarity index 100% rename from published/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md rename to published/201804/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md diff --git a/published/20180221 cTop - A CLI Tool For Container Monitoring.md b/published/201804/20180221 cTop - A CLI Tool For Container Monitoring.md similarity index 100% rename from published/20180221 cTop - A CLI Tool For Container Monitoring.md rename to published/201804/20180221 cTop - A CLI Tool For Container Monitoring.md diff --git a/published/20180222 How to configure an Apache web server.md b/published/201804/20180222 How to configure an 
Apache web server.md similarity index 100% rename from published/20180222 How to configure an Apache web server.md rename to published/201804/20180222 How to configure an Apache web server.md diff --git a/published/20180226 5 keys to building open hardware.md b/published/201804/20180226 5 keys to building open hardware.md similarity index 100% rename from published/20180226 5 keys to building open hardware.md rename to published/201804/20180226 5 keys to building open hardware.md diff --git a/published/20180226 How to Use WSL Like a Linux Pro.md b/published/201804/20180226 How to Use WSL Like a Linux Pro.md similarity index 100% rename from published/20180226 How to Use WSL Like a Linux Pro.md rename to published/201804/20180226 How to Use WSL Like a Linux Pro.md diff --git a/published/20180301 How to add fonts to Fedora.md b/published/201804/20180301 How to add fonts to Fedora.md similarity index 100% rename from published/20180301 How to add fonts to Fedora.md rename to published/201804/20180301 How to add fonts to Fedora.md diff --git a/published/20180302 10 Quick Tips About sudo command for Linux systems.md b/published/201804/20180302 10 Quick Tips About sudo command for Linux systems.md similarity index 100% rename from published/20180302 10 Quick Tips About sudo command for Linux systems.md rename to published/201804/20180302 10 Quick Tips About sudo command for Linux systems.md diff --git a/published/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md b/published/201804/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md similarity index 100% rename from published/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md rename to published/201804/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md diff --git a/published/20180308 Dive into BPF a list of reading material.md b/published/201804/20180308 Dive into BPF a list of reading material.md similarity index 100% 
rename from published/20180308 Dive into BPF a list of reading material.md rename to published/201804/20180308 Dive into BPF a list of reading material.md diff --git a/published/20180312 Continuous integration in Fedora.md b/published/201804/20180312 Continuous integration in Fedora.md similarity index 100% rename from published/20180312 Continuous integration in Fedora.md rename to published/201804/20180312 Continuous integration in Fedora.md diff --git a/published/20180313 Running DOS on the Raspberry Pi.md b/published/201804/20180313 Running DOS on the Raspberry Pi.md similarity index 100% rename from published/20180313 Running DOS on the Raspberry Pi.md rename to published/201804/20180313 Running DOS on the Raspberry Pi.md diff --git a/published/20180313 The Type Command Tutorial With Examples For Beginners.md b/published/201804/20180313 The Type Command Tutorial With Examples For Beginners.md similarity index 100% rename from published/20180313 The Type Command Tutorial With Examples For Beginners.md rename to published/201804/20180313 The Type Command Tutorial With Examples For Beginners.md diff --git a/published/20180314 Playing with water.md b/published/201804/20180314 Playing with water.md similarity index 100% rename from published/20180314 Playing with water.md rename to published/201804/20180314 Playing with water.md diff --git a/published/20180318 How To Manage Disk Partitions Using Parted Command.md b/published/201804/20180318 How To Manage Disk Partitions Using Parted Command.md similarity index 100% rename from published/20180318 How To Manage Disk Partitions Using Parted Command.md rename to published/201804/20180318 How To Manage Disk Partitions Using Parted Command.md diff --git a/published/20180321 The Command line Personal Assistant For Your Linux System.md b/published/201804/20180321 The Command line Personal Assistant For Your Linux System.md similarity index 100% rename from published/20180321 The Command line Personal Assistant For Your 
Linux System.md rename to published/201804/20180321 The Command line Personal Assistant For Your Linux System.md diff --git a/published/20180322 How To Find If A CPU Supports Virtualization Technology (VT).md b/published/201804/20180322 How To Find If A CPU Supports Virtualization Technology (VT).md similarity index 100% rename from published/20180322 How To Find If A CPU Supports Virtualization Technology (VT).md rename to published/201804/20180322 How To Find If A CPU Supports Virtualization Technology (VT).md diff --git a/published/20180323 How to tell when moving to blockchain is a bad idea.md b/published/201804/20180323 How to tell when moving to blockchain is a bad idea.md similarity index 100% rename from published/20180323 How to tell when moving to blockchain is a bad idea.md rename to published/201804/20180323 How to tell when moving to blockchain is a bad idea.md diff --git a/published/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md b/published/201804/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md similarity index 100% rename from published/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md rename to published/201804/20180323 Transfer Files From Computer To Mobile Devices By Scanning QR Codes.md diff --git a/published/20180326 Working with calendars on Linux.md b/published/201804/20180326 Working with calendars on Linux.md similarity index 100% rename from published/20180326 Working with calendars on Linux.md rename to published/201804/20180326 Working with calendars on Linux.md diff --git a/published/20180328 Do continuous deployment with Github and Python.md b/published/201804/20180328 Do continuous deployment with Github and Python.md similarity index 100% rename from published/20180328 Do continuous deployment with Github and Python.md rename to published/201804/20180328 Do continuous deployment with Github and Python.md diff --git a/published/20180328 How 
To Create-Extend Swap Partition In Linux Using LVM.md b/published/201804/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md similarity index 100% rename from published/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md rename to published/201804/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md diff --git a/published/20180329 Protect Your Websites with Let-s Encrypt.md b/published/201804/20180329 Protect Your Websites with Let-s Encrypt.md similarity index 100% rename from published/20180329 Protect Your Websites with Let-s Encrypt.md rename to published/201804/20180329 Protect Your Websites with Let-s Encrypt.md diff --git a/published/20180402 Advanced SSH Cheat Sheet.md b/published/201804/20180402 Advanced SSH Cheat Sheet.md similarity index 100% rename from published/20180402 Advanced SSH Cheat Sheet.md rename to published/201804/20180402 Advanced SSH Cheat Sheet.md diff --git a/published/20180405 How to find files in Linux.md b/published/201804/20180405 How to find files in Linux.md similarity index 100% rename from published/20180405 How to find files in Linux.md rename to published/201804/20180405 How to find files in Linux.md diff --git a/published/20180405 The fc Command Tutorial With Examples For Beginners.md b/published/201804/20180405 The fc Command Tutorial With Examples For Beginners.md similarity index 100% rename from published/20180405 The fc Command Tutorial With Examples For Beginners.md rename to published/201804/20180405 The fc Command Tutorial With Examples For Beginners.md diff --git a/published/20180407 12 Git tips for Git-s 12th birthday.md b/published/201804/20180407 12 Git tips for Git-s 12th birthday.md similarity index 100% rename from published/20180407 12 Git tips for Git-s 12th birthday.md rename to published/201804/20180407 12 Git tips for Git-s 12th birthday.md diff --git a/published/20180411 Awesome GNOME extensions for developers.md b/published/201804/20180411 Awesome GNOME 
extensions for developers.md similarity index 100% rename from published/20180411 Awesome GNOME extensions for developers.md rename to published/201804/20180411 Awesome GNOME extensions for developers.md diff --git a/published/20180412 3 password managers for the Linux command line.md b/published/201804/20180412 3 password managers for the Linux command line.md similarity index 100% rename from published/20180412 3 password managers for the Linux command line.md rename to published/201804/20180412 3 password managers for the Linux command line.md From 0162c4e8eb2008dfe918274a284758d81f6c4b5b Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Tue, 1 May 2018 23:11:33 +0800 Subject: [PATCH 047/102] Delete 20180126 How To Manage NodeJS Packages Using Npm.md --- ...How To Manage NodeJS Packages Using Npm.md | 375 ------------------ 1 file changed, 375 deletions(-) delete mode 100644 sources/tech/20180126 How To Manage NodeJS Packages Using Npm.md diff --git a/sources/tech/20180126 How To Manage NodeJS Packages Using Npm.md b/sources/tech/20180126 How To Manage NodeJS Packages Using Npm.md deleted file mode 100644 index e21522944b..0000000000 --- a/sources/tech/20180126 How To Manage NodeJS Packages Using Npm.md +++ /dev/null @@ -1,375 +0,0 @@ -Translating by MjSeven - - -How To Manage NodeJS Packages Using Npm -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/01/npm-720x340.png) - -A while ago, we have published a guide to [**manage Python packages using PIP**][1]. Today, we are going to discuss how to manage NodeJS packages using Npm. NPM is the largest software registry that contains over 600,000 packages. Everyday, developers across the world shares and downloads packages through npm. In this guide, I will explain the the basics of working with npm, such as installing packages(locally and globally), installing certain version of a package, updating, removing and managing NodeJS packages and so on. 
- -### Manage NodeJS Packages Using Npm - -##### Installing NPM - -Since npm is written in NodeJS, we need to install NodeJS in order to use npm. To install NodeJS on different Linux distributions, refer to the following link. - -Once installed, ensure that NodeJS and NPM have been properly installed. There are a couple of ways to do this. - -To check where node has been installed: -``` -$ which node -/home/sk/.nvm/versions/node/v9.4.0/bin/node -``` - -Check its version: -``` -$ node -v -v9.4.0 -``` - -Log in to a Node REPL session: -``` -$ node -> .help -.break Sometimes you get stuck, this gets you out -.clear Alias for .break -.editor Enter editor mode -.exit Exit the repl -.help Print this help message -.load Load JS from a file into the REPL session -.save Save all evaluated commands in this REPL session to a file -> .exit -``` - -Check where npm is installed: -``` -$ which npm -/home/sk/.nvm/versions/node/v9.4.0/bin/npm -``` - -And its version: -``` -$ npm -v -5.6.0 -``` - -Great! Node and NPM have been installed and are working! As you may have noticed, I have installed NodeJS and NPM in my $HOME directory to avoid permission issues while installing modules globally. This is the method recommended by the NodeJS team. - -Well, let us go ahead and see how to manage NodeJS modules (or packages) using npm. - -##### Installing NodeJS modules - -NodeJS modules can be installed either locally or globally (system-wide). Now I am going to show how to install a package locally. - -**Install packages locally** - -To manage packages locally, we normally use a **package.json** file. - -First, let us create our project directory. -``` -$ mkdir demo -``` -``` -$ cd demo -``` - -Create a package.json file inside your project's directory. To do so, run: -``` -$ npm init -``` - -Enter the details of your package such as name, version, author, GitHub page, etc., or just hit the ENTER key to accept the default values and type **YES** to confirm.
-``` -This utility will walk you through creating a package.json file. -It only covers the most common items, and tries to guess sensible defaults. - -See `npm help json` for definitive documentation on these fields -and exactly what they do. - -Use `npm install <pkg>` afterwards to install a package and -save it as a dependency in the package.json file. - -Press ^C at any time to quit. -package name: (demo) -version: (1.0.0) -description: demo nodejs app -entry point: (index.js) -test command: -git repository: -keywords: -author: -license: (ISC) -About to write to /home/sk/demo/package.json: - -{ - "name": "demo", - "version": "1.0.0", - "description": "demo nodejs app", - "main": "index.js", - "scripts": { - "test": "echo \"Error: no test specified\" && exit 1" - }, - "author": "", - "license": "ISC" -} - -Is this ok? (yes) yes -``` - -The above command initializes your project and creates the package.json file. - -You can also do this non-interactively using the command: -``` -npm init --y -``` - -This will quickly create a package.json file with default values, without any user interaction. - -Now let us install a package named [**commander**][2]. -``` -$ npm install commander -``` - -Sample output: -``` -npm notice created a lockfile as package-lock.json. You should commit this file. -npm WARN demo@1.0.0 No repository field. - -+ commander@2.13.0 -added 1 package in 2.519s -``` - -This will create a directory named **node_modules** (if it doesn't already exist) in the project's root directory and download the package into it. - -Let us check the package.json file. -``` -$ cat package.json -{ - "name": "demo", - "version": "1.0.0", - "description": "demo nodejs app", - "main": "index.js", - "scripts": { - "test": "echo \"Error: no test specified\" && exit 1" - }, - "author": "", - "license": "ISC", - "dependencies": { - "commander": "^2.13.0" - } -} -``` - -You will see that the dependencies have been added.
The caret ( **^** ) at the front of the version number indicates that when installing, npm will pull the highest version of the package it can find within the same major version. -``` -$ ls node_modules/ -commander -``` - -The advantage of the package.json file is that if you have it in your project's directory, you can just type "npm install", and npm will look at the dependencies listed in the file and download all of them. You can even share it with other developers or push it to your GitHub repository, so when they type "npm install", they will get all the same packages that you have. - -You may also have noticed another JSON file named **package-lock.json**. This file ensures that the dependencies remain the same on all systems the project is installed on. - -To use the installed package in your program, create a file **index.js** (or any name of your choice) in the project's directory with the actual code, and then run it using the command: -``` -$ node index.js -``` - -**Install packages globally** - -If you want to use a package as a command line tool, then it is better to install it globally. This way, it works no matter which directory is your current directory. -``` -$ npm install async -g -+ async@2.6.0 -added 2 packages in 4.695s -``` - -Or, -``` -$ npm install async --global -``` - -To install a specific version of a package, we do: -``` -$ npm install async@2.6.0 --global -``` - -##### Updating NodeJS modules - -To update the local packages, go to the project's directory where the package.json is located and run: -``` -$ npm update -``` - -Then, run the following command to ensure all packages were updated. -``` -$ npm outdated -``` - -If there is no update, it returns nothing. - -To find out which global packages need to be updated, run: -``` -$ npm outdated -g --depth=0 -``` - -If there is no output, then all packages are up to date.
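The caret range recorded in package.json also governs what `npm update` may install: npm picks the highest published version that still satisfies the range. The sketch below illustrates that rule for plain `^x.y.z` ranges (with x ≥ 1). It is an illustration only, not npm's actual resolver — npm uses the full `semver` package, which additionally handles pre-release tags, `^0.x` ranges, and more; the function names here are invented for the example.

```javascript
// Rough sketch: does `version` satisfy a caret range such as "^2.13.0"?
// (Illustrative only; assumes plain x.y.z versions with x >= 1.)
function satisfiesCaret(range, version) {
  const base = range.slice(1).split('.').map(Number); // "^2.13.0" -> [2, 13, 0]
  const cand = version.split('.').map(Number);
  if (cand[0] !== base[0]) return false;              // major version must match
  for (let i = 1; i < 3; i++) {                       // and candidate must be >= base
    if (cand[i] !== base[i]) return cand[i] > base[i];
  }
  return true;                                        // an identical version satisfies the range
}

// Pick the highest available version allowed by the range.
function highestMatch(range, available) {
  return available
    .filter(v => satisfiesCaret(range, v))
    .sort((a, b) => {
      const pa = a.split('.').map(Number);
      const pb = b.split('.').map(Number);
      return (pa[0] - pb[0]) || (pa[1] - pb[1]) || (pa[2] - pb[2]);
    })
    .pop();
}

console.log(highestMatch('^2.13.0', ['2.12.2', '2.13.0', '2.14.1', '3.0.0'])); // → 2.14.1
```

So with `"commander": "^2.13.0"` recorded as a dependency, an update could move to 2.14.1 but never to 3.0.0.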
- -To update a single global package, run: -``` -$ npm update -g <package-name> -``` - -To update all global packages, run: -``` -$ npm update -g -``` - -##### Listing NodeJS modules - -To list the local packages, go to the project's directory and run: -``` -$ npm list -demo@1.0.0 /home/sk/demo -└── commander@2.13.0 -``` - -As you can see, I have installed the "commander" package in local mode. - -To list global packages, run this command from any location: -``` -$ npm list -g -``` - -Sample output: -``` -/home/sk/.nvm/versions/node/v9.4.0/lib -├─┬ async@2.6.0 -│ └── lodash@4.17.4 -└─┬ npm@5.6.0 - ├── abbrev@1.1.1 - ├── ansi-regex@3.0.0 - ├── ansicolors@0.3.2 - ├── ansistyles@0.1.3 - ├── aproba@1.2.0 - ├── archy@1.0.0 -[...] -``` - -This command will list all modules and their dependencies. - -To list only the top-level modules, use the --depth=0 option: -``` -$ npm list -g --depth=0 -/home/sk/.nvm/versions/node/v9.4.0/lib -├── async@2.6.0 -└── npm@5.6.0 -``` - -##### Searching NodeJS modules - -To search for a module, use the "npm search" command: -``` -npm search <search-term> -``` - -Example: -``` -$ npm search request -``` - -This command will display all modules that contain the search string "request". - -##### Removing NodeJS modules - -To remove a local package, go to the project's directory and run the following command to remove the package from your **node_modules** directory: -``` -$ npm uninstall <package-name> -``` - -To remove it from the dependencies in the **package.json** file, use the **--save** flag like below: -``` -$ npm uninstall <package-name> --save - -``` - -To remove a globally installed package, run: -``` -$ npm uninstall -g <package-name> -``` - -##### Cleaning NPM cache - -By default, NPM keeps a copy of an installed package in a cache folder named **.npm** in your $HOME directory, so the next time you install it, it doesn't have to be downloaded again. - -To view the cached modules: -``` -$ ls ~/.npm -``` - -The cache folder gets flooded with old packages over time.
It is better to clean the cache from time to time. - -As of npm@5, the npm cache self-heals from corruption issues and data extracted from the cache is guaranteed to be valid. If you want to make sure everything is consistent, run: -``` -$ npm cache verify -``` - -To clear the entire cache, run: -``` -$ npm cache clean --force -``` - -##### Viewing NPM configuration - -To view the npm configuration, type: -``` -$ npm config list -``` - -Or, -``` -$ npm config ls -``` - -Sample output: -``` -; cli configs -metrics-registry = "https://registry.npmjs.org/" -scope = "" -user-agent = "npm/5.6.0 node/v9.4.0 linux x64" - -; node bin location = /home/sk/.nvm/versions/node/v9.4.0/bin/node -; cwd = /home/sk -; HOME = /home/sk -; "npm config ls -l" to show all defaults. -``` - -To display the current global prefix (where global packages are installed): -``` -$ npm config get prefix -/home/sk/.nvm/versions/node/v9.4.0 -``` - -And, that's all for now. What we have just covered here is just the basics. NPM is a vast topic. For more details, head over to the [**NPM Getting Started**][3] guide. - -Hope this was useful. More good stuff to come. Stay tuned! - -Cheers!
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/manage-nodejs-packages-using-npm/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/manage-python-packages-using-pip/ -[2]:https://www.npmjs.com/package/commander -[3]:https://docs.npmjs.com/getting-started/ From 38717e4d49bb37a11e854f0234884a565192f9a4 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Tue, 1 May 2018 23:14:53 +0800 Subject: [PATCH 048/102] Create 20180126 How To Manage NodeJS Packages Using Npm.md --- ...How To Manage NodeJS Packages Using Npm.md | 368 ++++++++++++++++++ 1 file changed, 368 insertions(+) create mode 100644 translated/tech/20180126 How To Manage NodeJS Packages Using Npm.md diff --git a/translated/tech/20180126 How To Manage NodeJS Packages Using Npm.md b/translated/tech/20180126 How To Manage NodeJS Packages Using Npm.md new file mode 100644 index 0000000000..e443576852 --- /dev/null +++ b/translated/tech/20180126 How To Manage NodeJS Packages Using Npm.md @@ -0,0 +1,368 @@ +如何使用 Npm 管理 NodeJS 包 +===== + +![](https://www.ostechnix.com/wp-content/uploads/2018/01/npm-720x340.png) + +前一段时间,我们发布了一个[**使用 PIP 管理 Python 包**][3]的指南。今天,我们将讨论如何使用 Npm 管理 NodeJS 包。NPM 是最大的软件注册中心,包含 600,000 多个包。每天,世界各地的开发人员通过 npm 共享和下载软件包。在本指南中,我将解释使用 npm 基础知识,例如安装包(本地和全局)、安装特定版本的包、更新、删除和管理 NodeJS 包等等。 + +### 使用 Npm 管理 NodeJS 包 + +##### 安装 NPM + +用于 npm 是用 NodeJS 编写的,我们需要安装 NodeJS 才能使用 npm。要在不同的 Linux 发行版上安装 NodeJS,请参考下面的链接。 + +检查 node 安装的位置: +``` +$ which node +/home/sk/.nvm/versions/node/v9.4.0/bin/node +``` + +检查它的版本: +``` +$ node -v +v9.4.0 +``` + +进入 Node 交互式解释器: +``` +$ node +> .help +.break Sometimes you get stuck, this gets you out +.clear Alias for .break +.editor Enter editor mode +.exit 
Exit the repl +.help Print this help message +.load Load JS from a file into the REPL session +.save Save all evaluated commands in this REPL session to a file +> .exit +``` + +检查 npm 安装的位置: +``` +$ which npm +/home/sk/.nvm/versions/node/v9.4.0/bin/npm +``` + +还有版本: +``` +$ npm -v +5.6.0 +``` + +棒极了!Node 和 NPM 已安装并能工作!正如你可能已经注意到,我已经在我的 $HOME 目录中安装了 NodeJS 和 NPM,这样是为了避免在全局模块时出现权限问题。这是 NodeJS 团队推荐的方法。 + +那么,让我们继续看看如何使用 npm 管理 NodeJS 模块(或包)。 + +##### 安装 NodeJS 模块 + +NodeJS 模块可以安装在本地或全局(系统范围)。现在我将演示如何在本地安装包。 + +**在本地安装包** + +为了在本地管理包,我们通常使用 **package.json** 文件来管理。 + +首先,让我们创建我们的项目目录。 +``` +$ mkdir demo +``` +``` +$ cd demo +``` + +在项目目录中创建一个 package.json 文件。为此,运行: +``` +$ npm init +``` + +输入你的包的详细信息,例如名称,版本,作者,github 页面等等,或者按下 ENTER 键接受默认值并键入 **YES** 确认。 +``` +This utility will walk you through creating a package.json file. +It only covers the most common items, and tries to guess sensible defaults. + +See `npm help json` for definitive documentation on these fields +and exactly what they do. + +Use `npm install ` afterwards to install a package and +save it as a dependency in the package.json file. + +Press ^C at any time to quit. +package name: (demo) +version: (1.0.0) +description: demo nodejs app +entry point: (index.js) +test command: +git repository: +keywords: +author: +license: (ISC) +About to write to /home/sk/demo/package.json: + +{ + "name": "demo", + "version": "1.0.0", + "description": "demo nodejs app", + "main": "index.js", + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1" + }, + "author": "", + "license": "ISC" +} + +Is this ok? (yes) yes +``` + +上面的命令初始化你的项目并创建了 package.json 文件。 + +你也可以使用命令以非交互式方式执行此操作: +``` +npm init --y +``` + +现在让我们安装名为 [**commander**][2] 的包。 +``` +$ npm install commander +``` + +示例输出: +``` +npm notice created a lockfile as package-lock.json. You should commit this file. +npm WARN demo@1.0.0 No repository field. 
+ ++ commander@2.13.0 +added 1 package in 2.519s +``` + +这将在项目的根目录中创建一个名为 **" node_modules"** 的目录(如果它不存在的话),并在其中下载包。 + +让我们检查 pachage.json 文件。 +``` +$ cat package.json +{ + "name": "demo", + "version": "1.0.0", + "description": "demo nodejs app", + "main": "index.js", + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1" + }, + "author": "", + "license": "ISC", + **"dependencies": {** +**"commander": "^2.13.0"** + } +} +``` + +你会看到添加了依赖文件,版本号前面的插入符号 ( **^** ) 表示在安装时,npm 将取出它可以找到的最高版本的包。 +``` +$ ls node_modules/ +commander +``` + +package.json 文件的优点是,如果你的项目目录中有 package.json 文件,只需键入 "npm install",那么 npm 将查看文件中列出的依赖关系并下载它们。你甚至可以与其他开发人员共享它或将其推送到你的 GitHub 仓库。因此,当他们键入 “npm install” 时,他们将获得你拥有的所有相同的包。 + +你也可能会注意到另一个名为 **package-lock.json** 的文件,该文件确保在项目安装的所有系统上都保持相同的依赖关系。 + +要在你的程序中使用已安装的包,使用实际代码在项目目录中创建一个 **index.js**(或者其他任何名称)文件,然后使用以下命令运行它: +``` +$ node index.js +``` + +**在全局安装包** + +如果你想使用一个包作为命令行工具,那么最好在全局安装它。这样,无论你的当前目录是哪个目录,它都能正常工作。 +``` +$ npm install async -g ++ async@2.6.0 +added 2 packages in 4.695s +``` + +或者 +``` +$ npm install async --global +``` + +要安装特定版本的包,我们可以: +``` +$ npm install async@2.6.0 --global +``` + +##### 更新 NodeJS 模块 + +要更新本地包,转到 package.json 所在的项目目录并运行: +``` +$ npm update +``` + +然后,运行以下命令确保所有包都更新了。 +``` +$ npm outdated +``` + +如果没有需要更新的,那么它返回空。 + +要找出哪一个全局包需要更新,运行: +``` +$ npm outdated -g --depth=0 +``` + +如果没有输出,意味着所有包都已更新。 + +更新单个全局包,运行: +``` +$ npm update -g +``` + +更新所有的全局包,运行: +``` +$ npm update -g +``` + +##### 列出 NodeJS 模块 + +列出本地包,转到项目目录并运行: +``` +$ npm list +demo@1.0.0 /home/sk/demo +└── commander@2.13.0 +``` + +如你所见,我在本地安装了 "commander" 这个包。 + +要列出全局包,从任何位置都可以运行以下命令: +``` +$ npm list -g +``` + +示例输出: +``` +/home/sk/.nvm/versions/node/v9.4.0/lib +├─┬ async@2.6.0 +│ └── lodash@4.17.4 +└─┬ npm@5.6.0 + ├── abbrev@1.1.1 + ├── ansi-regex@3.0.0 + ├── ansicolors@0.3.2 + ├── ansistyles@0.1.3 + ├── aproba@1.2.0 + ├── archy@1.0.0 +[...] 
+``` + +该命令将列出所有模块及其依赖关系。 + +要仅仅列出顶级模块,使用 --depth=0 选项: +``` +$ npm list -g --depth=0 +/home/sk/.nvm/versions/node/v9.4.0/lib +├── async@2.6.0 +└── npm@5.6.0 +``` + +##### 寻找 NodeJS 模块 + +要搜索一个模块,使用 "npm search" 命令: +``` +npm search <search-term> +``` + +例如: +``` +$ npm search request +``` + +该命令将显示包含搜索字符串 "request" 的所有模块。 + +##### 移除 NodeJS 模块 + +要删除本地包,转到项目目录并运行以下命令,这会从 **node_modules** 目录中删除包: +``` +$ npm uninstall <package-name> +``` + +要将它从 **package.json** 文件的依赖关系中删除,使用如下所示的 **--save** 标志: +``` +$ npm uninstall <package-name> --save + +``` + +要删除已安装的全局包,运行: +``` +$ npm uninstall -g <package-name> +``` + +##### 清除 NPM 缓存 + +默认情况下,NPM 在安装包时,会将其副本保存在 $HOME 目录中名为 .npm 的缓存文件夹中。所以,你可以在下次安装时不必再次下载。 + +查看缓存模块: +``` +$ ls ~/.npm +``` + +随着时间的推移,缓存文件夹会充斥着大量旧的包。所以不时清理缓存会好一些。 + +从 npm@5 开始,npm 缓存可以从损坏问题中自行修复,并且保证从缓存中提取的数据有效。如果你想确保一切都一致,运行: +``` +$ npm cache verify +``` + +要清除整个缓存,运行: +``` +$ npm cache clean --force +``` + +##### 查看 NPM 配置 + +要查看 NPM 配置,键入: +``` +$ npm config list +``` + +或者 +``` +$ npm config ls +``` + +示例输出: +``` +; cli configs +metrics-registry = "https://registry.npmjs.org/" +scope = "" +user-agent = "npm/5.6.0 node/v9.4.0 linux x64" + +; node bin location = /home/sk/.nvm/versions/node/v9.4.0/bin/node +; cwd = /home/sk +; HOME = /home/sk +; "npm config ls -l" to show all defaults. +``` + +要显示当前的全局安装位置(prefix): +``` +$ npm config get prefix +/home/sk/.nvm/versions/node/v9.4.0 +``` + +好吧,这就是全部了。我们刚才介绍的只是基础知识,NPM 是一个很广泛的话题。有关更多详细信息,请参阅 [**NPM Getting Started**][3] 指南。 + +希望这对你有帮助。更多好东西即将来临,敬请关注! + +干杯!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/manage-nodejs-packages-using-npm/ + +作者:[SK][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/manage-python-packages-using-pip/ +[2]:https://www.npmjs.com/package/commander +[3]:https://docs.npmjs.com/getting-started/ From 6da3dcf4818bfed6b734f8673cd2d6c6b00f527b Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 2 May 2018 08:46:40 +0800 Subject: [PATCH 049/102] translated --- ... How to reset a root password on Fedora.md | 93 ------------------- ... How to reset a root password on Fedora.md | 89 ++++++++++++++++++ 2 files changed, 89 insertions(+), 93 deletions(-) delete mode 100644 sources/tech/20180423 How to reset a root password on Fedora.md create mode 100644 translated/tech/20180423 How to reset a root password on Fedora.md diff --git a/sources/tech/20180423 How to reset a root password on Fedora.md b/sources/tech/20180423 How to reset a root password on Fedora.md deleted file mode 100644 index e3b0234b77..0000000000 --- a/sources/tech/20180423 How to reset a root password on Fedora.md +++ /dev/null @@ -1,93 +0,0 @@ -translating---geekpi - -How to reset a root password on Fedora -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/04/resetrootpassword-816x345.jpg) -A system administrator can easily reset a password for a user that has forgotten their password. But what happens if the system administrator forgets the root password? This guide will show you how to reset a lost or forgotten root password. Note that to reset the root password, you need to have physical access to the machine in order to reboot and to access GRUB settings. Additionally, if the system is encrypted, you will also need to know the LUKS passphrase. 
- -### Edit the GRUB settings - -First you need to interrupt the boot process, so you'll need to turn on the system, or restart it if it's already powered on. The first step is tricky because the GRUB menu tends to flash by very quickly on the screen. - -Press **E** on your keyboard when you see the GRUB menu: - -![][1] - -After pressing 'e' the following screen is shown: - -![][2] - -Use your arrow keys to move to the **linux16** line. - -![][3] - -Using your **del** key or **backspace** key, remove **rhgb quiet** and replace it with the following. -``` -rd.break enforcing=0 - -``` - -![][4] - -After editing the line, press **Ctrl-x** to start the system. If the system is encrypted, you will be prompted for the LUKS passphrase here. - -**Note:** Setting enforcing=0 avoids performing a complete system SELinux relabeling. Once the system is rebooted, restore the correct SELinux context for the /etc/shadow file. (This is explained a little further on in this process.) - -### Mounting the filesystem - -The system will now be in emergency mode. Remount the hard drive with read-write access: -``` -# mount -o remount,rw /sysroot - -``` - -### Password Change - -Run chroot to access the system. -``` -# chroot /sysroot - -``` - -You can now change the root password. -``` -# passwd - -``` - -Type the new root password twice when prompted. If you are successful, you should see a message that **all authentication tokens updated successfully.** - -Type **exit** twice to reboot the system. - -Log in as root and restore the SELinux label on the /etc/shadow file. -``` -# restorecon -v /etc/shadow - -``` - -Turn SELinux back to enforcing mode.
-``` -# setenforce 1 - -``` - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/reset-root-password-fedora/ - -作者:[Curt Warfield][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://fedoramagazine.org/author/rcurtiswarfield/ -[1]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub.png -[2]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub2.png -[3]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub3.png -[4]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub4.png diff --git a/translated/tech/20180423 How to reset a root password on Fedora.md b/translated/tech/20180423 How to reset a root password on Fedora.md new file mode 100644 index 0000000000..8d42ef9de0 --- /dev/null +++ b/translated/tech/20180423 How to reset a root password on Fedora.md @@ -0,0 +1,89 @@ +如何重置 Fedora 上的 root 密码 +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/04/resetrootpassword-816x345.jpg) +系统管理员可以轻松地为忘记密码的用户重置密码。但是如果系统管理员忘记 root 密码会发生什么?本指南将告诉你如何重置遗失或忘记的 root 密码。请注意,要重置 root 密码,你需要能够接触到本机以重新启动并访问 GRUB 设置。此外,如果系统已加密,你还需要知道 LUKS 密码。 + +### 编辑 GRUB 设置 + +首先你需要中断启动过程。所以你需要打开系统,如果已经打开就重新启动。第一步很棘手,因为 grub 菜单往往会在屏幕上快速闪过。 + +当你看到 GRUB 菜单时,请按键盘上的 **E** 键: + +![][1] + +按下 ‘e’ 后显示以下屏幕: + +![][2] + +使用箭头键移动到 **linux16** 这行。 + +![][3] + +使用您的**删除**键或**后退**键,删除 **rhgb quiet** 并替换为以下内容。 +``` +rd.break enforcing=0 + +``` + +![][4] + +编辑好后,按下 **Ctrl-x** 启动系统。如果系统已加密,则系统会提示你输入 LUKS 密码。 + +**注意:** 设置 enforcing=0,避免执行完整的系统 SELinux 重新标记。系统重启后,为 /etc/shadow 恢复正确的 SELinux 上下文。(这个会进一步解释) + +### 挂载文件系统 + +系统现在将处于紧急模式。以读写权限重新挂载硬盘: +``` +# mount –o remount,rw /sysroot + +``` + +### 更改密码 + +运行 chroot 访问系统。 +``` +# chroot /sysroot + +``` + +你现在可以更改 root 密码。 +``` +# passwd + +``` + +出现提示时输入新的 root 密码两次。如果成功,你应该看到一条 **all 
authentication tokens updated successfully** 消息。 + +输入 **exit** 两次重新启动系统。 + +以 root 身份登录并将 SELinux 标签恢复到 /etc/shadow 中。 +``` +# restorecon -v /etc/shadow + +``` + +将 SELinux 变回 enforce 模式。 +``` +# setenforce 1 + +``` + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/reset-root-password-fedora/ + +作者:[Curt Warfield][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fedoramagazine.org/author/rcurtiswarfield/ +[1]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub.png +[2]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub2.png +[3]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub3.png +[4]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub4.png From b502222c3fde7ed30aacf81cff44845572b4b86b Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 2 May 2018 08:49:17 +0800 Subject: [PATCH 050/102] translated --- sources/tech/20180427 How to use FIND in Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180427 How to use FIND in Linux.md b/sources/tech/20180427 How to use FIND in Linux.md index 829514dd3c..42da1784de 100644 --- a/sources/tech/20180427 How to use FIND in Linux.md +++ b/sources/tech/20180427 How to use FIND in Linux.md @@ -1,3 +1,5 @@ +translating---geekpi + How to use FIND in Linux ====== From c1b2e86d595ea39705835732c286144348b2c818 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Wed, 2 May 2018 09:40:31 +0800 Subject: [PATCH 051/102] Delete 20180317 How To Edit Multiple Files Using Vim Editor.md --- ...To Edit Multiple Files Using Vim Editor.md | 349 ------------------ 1 file changed, 349 deletions(-) delete mode 100644 sources/tech/20180317 How To Edit Multiple Files Using Vim Editor.md diff --git 
a/sources/tech/20180317 How To Edit Multiple Files Using Vim Editor.md b/sources/tech/20180317 How To Edit Multiple Files Using Vim Editor.md deleted file mode 100644 index 6cebe726d1..0000000000 --- a/sources/tech/20180317 How To Edit Multiple Files Using Vim Editor.md +++ /dev/null @@ -1,349 +0,0 @@ -Translating by jessie-pang - -How To Edit Multiple Files Using Vim Editor -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/03/Edit-Multiple-Files-Using-Vim-Editor-720x340.png) -Sometimes, you will find yourself in a situation where you want to make changes in multiple files or you might want to copy the contents of one file to another. If you’re on GUI mode, you could simply open the files in any graphical text editor, like gedit, and use CTRL+C and CTRL+V to copy/paste the contents. In CLI mode, you can’t use such editors. No worries! Where there is vim editor, there is a way! In this tutorial, we are going to learn to edit multiple files at the same time using Vim editor. Trust me, this is very interesting read. - -### Installing Vim - -Vim editor is available in the official repositories of most Linux distributions. So you can install it using the default package manager. For example, on Arch Linux and its variants you can install it using command: -``` -$ sudo pacman -S vim - -``` - -On Debian, Ubuntu: -``` -$ sudo apt-get install vim - -``` - -On RHEL, CentOS: -``` -$ sudo yum install vim - -``` - -On Fedora: -``` -$ sudo dnf install vim - -``` - -On openSUSE: -``` -$ sudo zypper install vim - -``` - -### Edit multiple files at a time using Vim editor in Linux - -Let us now get down to the business. We can do this in two methods. - -#### Method 1 - -I have two files namely **file1.txt** and **file2.txt** , with a bunch of random words. Let us have a look at them. -``` -$ cat file1.txt -ostechnix -open source -technology -linux -unix - -$ cat file2.txt -line1 -line2 -line3 -line4 -line5 - -``` - -Now, let us edit these two files at a time. 
To do so, run: -``` -$ vim file1.txt file2.txt - -``` - -Vim will display the contents of the files in an order. The first file’s contents will be shown first and then second file and so on. - -![][2] - -**Switch between files** - -To move to the next file, type: -``` -:n - -``` - -![][3] - -To go back to previous file, type: -``` -:N - -``` - -Vim won’t allow you to move to the next file if there are any unsaved changes. To save the changes in the current file, type: -``` -ZZ - -``` - -Please note that it is double capital letters ZZ (SHIFT+zz). - -To abandon the changes and move to the previous file, type: -``` -:N! - -``` - -To view the files which are being currently edited, type: -``` -:buffers - -``` - -![][4] - -You will see the list of loaded files at the bottom. - -![][5] - -To switch to the next file, type **:buffer** followed by the buffer number. For example, to switch to the first file, type: -``` -:buffer 1 - -``` - -![][6] - -**Opening additional files for editing** - -We are currently editing two files namely file1.txt, file2.txt. I want to open another file named **file3.txt** for editing. -What will you do? It’s easy! Just type **:e** followed by the file name like below. -``` -:e file3.txt - -``` - -![][7] - -Now you can edit file3.txt. - -To view how many files are being edited currently, type: -``` -:buffers - -``` - -![][8] - -Please note that you can not switch between opened files with **:e** using either **:n** or **:N**. To switch to another file, type **:buffer** followed by the file buffer number. - -**Copying contents of one file into another** - -You know how to open and edit multiple files at the same time. Sometimes, you might want to copy the contents of one file into another. It is possible too. Switch to a file of your choice. For example, let us say you want to copy the contents of file1.txt into file2.txt. 
- -To do so, first switch to file1.txt: -``` -:buffer 1 - -``` - -Place the move cursor in-front of a line that wants to copy and type **yy** to yank(copy) the line. Then. move to file2.txt: -``` -:buffer 2 - -``` - -Place the mouse cursor where you want to paste the copied lines from file1.txt and type **p**. For example, you want to paste the copied line between line2 and line3. To do so, put the mouse cursor before line and type **p**. - -Sample output: -``` -line1 -line2 -ostechnix -line3 -line4 -line5 - -``` - -![][9] - -To save the changes made in the current file, type: -``` -ZZ - -``` - -Again, please note that this is double capital ZZ (SHIFT+z). - -To save the changes in all files and exit vim editor. type: -``` -:wq - -``` - -Similarly, you can copy any line from any file to other files. - -**Copying entire file contents into another** - -We know how to copy a single line. What about the entire file contents? That’s also possible. Let us say, you want to copy the entire contents of file1.txt into file2.txt. - -To do so, open the file2.txt first: -``` -$ vim file2.txt - -``` - -If the files are already loaded, you can switch to file2.txt by typing: -``` -:buffer 2 - -``` - -Move the cursor to the place where you wanted to copy the contents of file1.txt. I want to copy the contents of file1.txt after line5 in file2.txt, so I moved the cursor to line 5. Then, type the following command and hit ENTER key: -``` -:r file1.txt - -``` - -![][10] - -Here, **r** means **read**. - -Now you will see the contents of file1.txt is pasted after line5 in file2.txt. -``` -line1 -line2 -line3 -line4 -line5 -ostechnix -open source -technology -linux -unix - -``` - -![][11] - -To save the changes in the current file, type: -``` -ZZ - -``` - -To save all changes in all loaded files and exit vim editor, type: -``` -:wq - -``` - -#### Method 2 - -The another method to open multiple files at once is by using either **-o** or **-O** flags. 
- -To open multiple files in horizontal windows, run: -``` -$ vim -o file1.txt file2.txt - -``` - -![][12] - -To switch between windows, press **CTRL-w w** (i.e Press **CTRL+w** and again press **w** ). Or, you the following shortcuts to move between windows. - - * **CTRL-w k** – top window - * **CTRL-w j** – bottom window - - - -To open multiple files in vertical windows, run: -``` -$ vim -O file1.txt file2.txt file3.txt - -``` - -![][13] - -To switch between windows, press **CTRL-w w** (i.e Press **CTRL+w** and again press **w** ). Or, use the following shortcuts to move between windows. - - * **CTRL-w l** – left window - * **CTRL-w h** – right window - - - -Everything else is same as described in method 1. - -For example, to list currently loaded files, run: -``` -:buffers - -``` - -To switch between files: -``` -:buffer 1 - -``` - -To open an additional file, type: -``` -:e file3.txt - -``` - -To copy entire contents of a file into another: -``` -:r file1.txt - -``` - -The only difference in method 2 is once you saved the changes in the current file using **ZZ** , the file will automatically close itself. Also, you need to close the files one by one by typing **:wq**. But, had you followed the method 1, when typing **:wq** all changes will be saved in all files and all files will be closed at once. - -For more details, refer man pages. -``` -$ man vim - -``` - -**Suggested read:** - -You know now how to edit multiples files using vim editor in Linux. As you can see, editing multiple files is not that difficult. Vim editor has more powerful features. We will write more about Vim editor in the days to come. - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-edit-multiple-files-using-vim-editor/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-1-1.png -[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-2.png -[4]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-5.png -[5]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-6.png -[6]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-7.png -[7]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-8.png -[8]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-10-1.png -[9]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-11.png -[10]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-12.png -[11]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-13.png -[12]:http://www.ostechnix.com/wp-content/uploads/2018/03/Edit-multiple-files-16.png -[13]:http://www.ostechnix.com/wp-content/uploads/2018/03/Edit-multiple-files-17.png From fcce42cd874a9b822c37278238573a28650b8463 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Wed, 2 May 2018 09:41:10 +0800 Subject: [PATCH 052/102] 20180317 How To Edit Multiple Files Using Vim Editor.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 --- ...To Edit Multiple Files Using Vim Editor.md | 351 ++++++++++++++++++ 1 file changed, 351 insertions(+) create mode 100644 translated/tech/20180317 How To 
Edit Multiple Files Using Vim Editor.md diff --git a/translated/tech/20180317 How To Edit Multiple Files Using Vim Editor.md b/translated/tech/20180317 How To Edit Multiple Files Using Vim Editor.md new file mode 100644 index 0000000000..acb042fcbb --- /dev/null +++ b/translated/tech/20180317 How To Edit Multiple Files Using Vim Editor.md @@ -0,0 +1,351 @@ +如何使用 Vim 编辑器编辑多个文件 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/03/Edit-Multiple-Files-Using-Vim-Editor-720x340.png) + + +有时候,您可能需要修改多个文件,或要将一个文件的内容复制到另一个文件中。在图形用户界面中,您可以在任何图形文本编辑器(如 gedit)中打开文件,并使用 CTRL + C 和 CTRL + V 复制和粘贴内容。在命令行模式下,您不能使用这种编辑器。不过别担心,只要有 vim 编辑器就有办法。在本教程中,我们将学习使用 Vim 编辑器同时编辑多个文件。相信我,很有意思哒。 + +### 安装 Vim + +Vim 编辑器可在大多数 Linux 发行版的官方软件仓库中找到,所以您可以用默认的软件包管理器来安装它。例如,在 Arch Linux 及其变体上,您可以使用如下命令: +``` +$ sudo pacman -S vim + +``` + +在 Debian 和 Ubuntu 上: +``` +$ sudo apt-get install vim + +``` + +在 RHEL 和 CentOS 上: +``` +$ sudo yum install vim + +``` + +在 Fedora 上: +``` +$ sudo dnf install vim + +``` + +在 openSUSE 上: +``` +$ sudo zypper install vim + +``` + +### 使用 Linux 的 Vim 编辑器同时编辑多个文件 + +现在让我们谈谈正事,我们可以用两种方法做到这一点。 + +#### 方法一 + +有两个文件,即 **file1.txt** 和 **file2.txt**,带有一堆随机单词: +``` +$ cat file1.txt +ostechnix +open source +technology +linux +unix + +$ cat file2.txt +line1 +line2 +line3 +line4 +line5 + +``` + +现在,让我们同时编辑这两个文件。请运行: +``` +$ vim file1.txt file2.txt + +``` + +Vim 将按顺序显示文件的内容。首先显示第一个文件的内容,然后显示第二个文件,依此类推。 + +![][2] + +**在文件中切换** + +要移至下一个文件,请键入: +``` +:n + +``` + +![][3] + +要返回到前一个文件,请键入: +``` +:N + +``` + +如果有任何未保存的更改,Vim 将不允许您移动到下一个文件。要保存当前文件中的更改,请键入: +``` +ZZ + +``` + +请注意,是两个大写字母 ZZ(SHIFT + zz)。 + +要放弃更改并移至上一个文件,请键入: +``` +:N! 
+ +``` + +要查看当前正在编辑的文件,请键入: +``` +:buffers + +``` + +![][4] + +您将在底部看到加载文件的列表。 + +![][5] + +要切换到下一个文件,请输入 **:buffer**,后跟缓冲区编号。例如,要切换到第一个文件,请键入: +``` +:buffer 1 + +``` + +![][6] + +**打开其他文件进行编辑** + +目前我们正在编辑两个文件,即 file1.txt 和 file2.txt。我想打开另一个名为 **file3.txt** 的文件进行编辑。 + +您会怎么做?这很容易。只需键入 **:e**,然后输入如下所示的文件名即可: +``` +:e file3.txt + +``` + +![][7] + +现在您可以编辑 file3.txt 了。 + +要查看当前正在编辑的文件数量,请键入: +``` +:buffers + +``` + +![][8] + +请注意,对于使用 **:e** 打开的文件,您无法使用 **:n** 或 **:N** 进行切换。要切换到另一个文件,请输入 **:buffer**,然后输入文件缓冲区编号。 + +**将一个文件的内容复制到另一个文件中** + +您已经知道了如何同时打开和编辑多个文件。有时,您可能想要将一个文件的内容复制到另一个文件中。这也是可以做到的。切换到您选择的文件,例如,假设您想将 file1.txt 的内容复制到 file2.txt 中: + +首先,请切换到 file1.txt: +``` +:buffer 1 + +``` + +将光标移动到想要复制的行上,并键入 **yy** 以抽出(复制)该行。然后,移至 file2.txt: +``` +:buffer 2 + +``` + +将光标移至要从 file1.txt 粘贴复制行的位置,然后键入 **p**。例如,您想要将复制的行粘贴到 line2 和 line3 之间,请将光标置于 line2 所在行并键入 **p**。 + +输出示例: +``` +line1 +line2 +ostechnix +line3 +line4 +line5 + +``` + +![][9] + +要保存当前文件中所做的更改,请键入: +``` +ZZ + +``` + +再次提醒,是两个大写字母 ZZ(SHIFT + zz)。 + +要保存所有文件的更改并退出 vim 编辑器,请键入: +``` +:wq + +``` + +同样,您可以将任何文件的任何行复制到其他文件中。 + +**将整个文件内容复制到另一个文件中** + +我们知道如何复制一行,那么整个文件的内容呢?也是可以的。比如说,您要将 file1.txt 的全部内容复制到 file2.txt 中。 + +先打开 file2.txt: +``` +$ vim file2.txt + +``` + +如果文件已经加载,您可以通过输入以下命令切换到 file2.txt: +``` +:buffer 2 + +``` + +将光标移动到您想要粘贴 file1.txt 内容的位置。我想在 file2.txt 的第 5 行之后粘贴 file1.txt 的内容,所以我将光标移动到第 5 行。然后,键入以下命令并按回车键: +``` +:r file1.txt + +``` + +![][10] + +这里,**r** 代表 **read**。 + +现在您会看到 file1.txt 的内容被粘贴在 file2.txt 的第 5 行之后。 +``` +line1 +line2 +line3 +line4 +line5 +ostechnix +open source +technology +linux +unix + +``` + +![][11] + +要保存当前文件中的更改,请键入: +``` +ZZ + +``` + +要保存所有文件的所有更改并退出 vim 编辑器,请输入: +``` +:wq + +``` + +#### 方法二 + +另一种同时打开多个文件的方法是使用 **-o** 或 **-O** 标志。 + +要在水平窗口中打开多个文件,请运行: +``` +$ vim -o file1.txt file2.txt + +``` + +![][12] + +要在窗口之间切换,请按 **CTRL-w w**(即按 **CTRL + w** 并再次按 **w**)。或者,您可以使用以下快捷方式在窗口之间移动: + + * **CTRL-w k** – 上面的窗口 + * **CTRL-w j** – 下面的窗口 + + + +要在垂直窗口中打开多个文件,请运行: +``` +$ vim -O
file1.txt file2.txt file3.txt + +``` + +![][13] + +要在窗口之间切换,请按 **CTRL-w w**(即按 **CTRL + w** 并再次按 **w**)。或者,使用以下快捷方式在窗口之间移动: + + * **CTRL-w l** – 右面的窗口 + * **CTRL-w h** – 左面的窗口 + + + +其他的一切都与方法一的描述相同。 + +例如,要列出当前加载的文件,请运行: +``` +:buffers + +``` + +在文件之间切换: +``` +:buffer 1 + +``` + +要打开其他文件,请键入: +``` +:e file3.txt + +``` + +将文件的全部内容复制到另一个文件中: +``` +:r file1.txt + +``` + +方法二的唯一区别是,只要您使用 **ZZ** 保存对当前文件的更改,文件将自动关闭。此外,您需要依次键入 **:wq** 来关闭文件。但是,如果您按照方法一进行操作,输入 **:wq** 时,所有更改将保存在所有文件中,并且所有文件将立即关闭。 + +有关更多详细信息,请参阅手册页。 +``` +$ man vim + +``` + + +**建议阅读:** + +您现在掌握了如何在 Linux 中使用 vim 编辑器编辑多个文件。正如您所见,编辑多个文件并不难。Vim 编辑器还有更强大的功能。我们接下来会提供更多关于 Vim 编辑器的内容。 + +再见! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-edit-multiple-files-using-vim-editor/ + +作者:[SK][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-1-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-2.png +[4]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-5.png +[5]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-6.png +[6]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-7.png +[7]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-8.png +[8]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-10-1.png +[9]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-11.png +[10]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-12.png +[11]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-13.png
+[12]:http://www.ostechnix.com/wp-content/uploads/2018/03/Edit-multiple-files-16.png +[13]:http://www.ostechnix.com/wp-content/uploads/2018/03/Edit-multiple-files-17.png \ No newline at end of file From 6daa7f012f1f8883ed972c12009a33b32c34584c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 2 May 2018 12:27:15 +0800 Subject: [PATCH 053/102] PRF:20180308 How to set up a print server on a Raspberry Pi.md @qhwdw --- ...set up a print server on a Raspberry Pi.md | 42 +++++++++---------- 1 file changed, 20 insertions(+), 22 deletions(-) diff --git a/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md b/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md index 0b53281d7f..65b9fd5a5a 100644 --- a/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md +++ b/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md @@ -1,60 +1,58 @@ 如何将树莓派配置为打印服务器 ====== +> 用树莓派和 CUPS 打印服务器将你的打印机变成网络打印机。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2) -我喜欢在家做一些小项目,因此,今年我选择使用一个 [树莓派 3 Model B][1],这是一个像我这样的业余爱好者非常适合的东西。使用树莓派 3 Model B 的无线功能,我可以不使用线缆将树莓派连接到我的家庭网络中。这样可以很容易地将树莓派用到各种它所需要的地方。 +我喜欢在家做一些小项目,因此,今年我买了一个 [树莓派 3 Model B][1],这是一个非常适合像我这样的业余爱好者的东西。使用树莓派 3 Model B 的内置无线功能,我可以不使用线缆就将树莓派连接到我的家庭网络中。这样可以很容易地将树莓派用到各种所需要的地方。 -在家里,我和我的妻子都使用笔记本电脑,但是我们只有一台打印机:一台使用的并不频繁的 HP 彩色激光打印机。因为我们的打印机并不内置无线网卡,因此,它不能直接连接到无线网络中,一般情况下,使用我的笔记本电脑时,我并不连接打印机,因为,我做的大多数工作并不需要打印。虽然这种安排在大多数时间都没有问题,但是,有时候,我的妻子想在不 “麻烦” 我的情况下,自己去打印一些东西。 - -### 基本设置 +在家里,我和我的妻子都使用笔记本电脑,但是我们只有一台打印机:一台使用的并不频繁的 HP 彩色激光打印机。因为我们的打印机并不内置无线网卡,因此,它不能直接连接到无线网络中,我们一般把打印机连接到我的笔记本电脑上,因为通常是我在打印。虽然这种安排在大多数时间都没有问题,但是,有时候,我的妻子想在不 “麻烦” 我的情况下,自己去打印一些东西。 我觉得我们需要一个将打印机连接到无线网络的解决方案,以便于我们都能够随时随地打印。我本想买一个无线打印服务器将我的 USB 打印机连接到家里的无线网络上。后来,我决定使用我的树莓派,将它设置为打印服务器,这样就可以让家里的每个人都可以随时来打印。 -设置树莓派是非常简单的事。我下载了 [Raspbian][2] 镜像,并将它写入到我的 microSD 卡中。然后,使用它引导连接了一个 HDMI 显示器、一个 USB 键盘和一个 USB 鼠标的树莓派。之后,我们开始对它进行设置! 
+### 基本设置 + +设置树莓派是非常简单的事。我下载了 [Raspbian][2] 镜像,并将它写入到我的 microSD 卡中。然后,使用它来引导一个连接了 HDMI 显示器、 USB 键盘和 USB 鼠标的树莓派。之后,我们开始对它进行设置! 这个树莓派系统自动引导到一个图形桌面,然后我做了一些基本设置:设置键盘语言、连接无线网络、设置普通用户帐户(`pi`)的密码、设置管理员用户(`root`)的密码。 -我并不打算将树莓派运行在桌面环境下。我一般是通过我的普通的 Linux 计算机远程来使用它。因此,我使用树莓派的图形化管理工具,去设置将树莓派引导到控制台模式,而且不以 `pi` 用户自动登入。 +我并不打算将树莓派运行在桌面环境下。我一般是通过我的普通的 Linux 计算机远程来使用它。因此,我使用树莓派的图形化管理工具,去设置将树莓派引导到控制台模式,但不以 `pi` 用户自动登入。 重新启动树莓派之后,我需要做一些其它的系统方面的小调整,以便于我在家用网络中使用树莓派做为 “服务器”。我设置它的 DHCP 客户端为使用静态 IP 地址;默认情况下,DHCP 客户端可能任选一个可用的网络地址,这样我会不知道应该用哪个地址连接到树莓派。我的家用网络使用一个私有的 A 类地址,因此,我的路由器的 IP 地址是 `10.0.0.1`,并且我的全部可用地 IP 地址是 `10.0.0.x`。在我的案例中,低位的 IP 地址是安全的,因此,我通过在 `/etc/dhcpcd.conf` 中添加如下的行,设置它的无线网络使用 `10.0.0.11` 这个静态地址。 + ``` interface wlan0 - static ip_address=10.0.0.11/24 - static routers=10.0.0.1 - static domain_name_servers=8.8.8.8 8.8.4.4 - ``` 在我再次重启之前,我需要去确认安全 shell 守护程序(SSHD)已经正常运行(你可以在 “偏好” 中设置哪些服务在引导时启动它)。这样我就可以使用 SSH 从普通的 Linux 系统上基于网络连接到树莓派中。 ### 打印设置 -现在,我的树莓派已经在网络上正常工作了,我通过 SSH 从我的 Linux 电脑上远程连接它,接着做剩余的设置。在继续设置之前,确保你的打印机已经连接到树莓派上。 +现在,我的树莓派已经连到网络上了,我通过 SSH 从我的 Linux 电脑上远程连接它,接着做剩余的设置。在继续设置之前,确保你的打印机已经连接到树莓派上。 + +设置打印机很容易。现代的打印服务器被称为 CUPS,意即“通用 Unix 打印系统”。任何最新的 Unix 系统都可以通过 CUPS 打印服务器来打印。为了在树莓派上设置 CUPS 打印服务器。你需要通过几个命令去安装 CUPS 软件,并使用新的配置来重启打印服务器,这样就可以允许其它系统来打印了。 -设置打印机很容易。现在的打印服务器都称为 CUPS,它是标准的通用 Unix 打印系统。任何最新的 Unix 系统都可以通过 CUPS 打印服务器来打印。为了在树莓派上设置 CUPS 打印服务器。你需要通过几个命令去安装 CUPS 软件,并使用新的配置来重启打印服务器,这样就可以允许其它系统来打印了。 ``` $ sudo apt-get install cups - $ sudo cupsctl --remote-any - $ sudo /etc/init.d/cups restart - ``` -在 CUPS 中设置打印机也是非常简单的,你可以通过一个 Web 界面来完成。CUPS 监听端口是 631,因此你可以在浏览器中收藏这个地址: +在 CUPS 中设置打印机也是非常简单的,你可以通过一个 Web 界面来完成。CUPS 监听端口是 631,因此你用常用的浏览器来访问这个地址: + ``` https://10.0.0.11:631/ - ``` -你的 Web 浏览器可能会弹出警告,因为它不认可这个 Web 浏览器的 https 证书;选择 ”接受它“,然后以管理员用户登入系统,你将看到如下的标准的 CUPS 面板: +你的 Web 浏览器可能会弹出警告,因为它不认可这个 Web 浏览器的 https 证书;选择 “接受它”,然后以管理员用户登入系统,你将看到如下的标准的 CUPS 面板: + 
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-1-home.png?itok=t9OFJgSX) -这时候,导航到管理标签,选择 “Add Printer"。 +这时候,导航到管理标签,选择 “Add Printer”。 ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-2-administration.png?itok=MlEINoYC) @@ -64,9 +62,9 @@ https://10.0.0.11:631/ ### 客户端设置 -从 Linux 中设置一台网络打印机非常简单。我的桌面环境是 GNOME,你可以从 GNOME 的设置应用程序中添加网络打印机。只需要导航到设备和打印机,然后解锁这个面板。点击 “Add" 按钮去添加打印机。 +从 Linux 中设置一台网络打印机非常简单。我的桌面环境是 GNOME,你可以从 GNOME 的“设置”应用程序中添加网络打印机。只需要导航到“设备和打印机”,然后解锁这个面板。点击 “添加” 按钮去添加打印机。 -在我的系统中,GNOME 设置为 ”自动发现网络打印机并添加它“。如果你的系统不是这样,你需要通过树莓派的 IP 地址,手动去添加打印机。 +在我的系统中,GNOME 的“设置”应用程序会自动发现网络打印机并添加它。如果你的系统不是这样,你需要通过树莓派的 IP 地址,手动去添加打印机。 ![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gnome-settings-printers.png?itok=NOQLTaLs) @@ -78,7 +76,7 @@ via: https://opensource.com/article/18/3/print-server-raspberry-pi 作者:[Jim Hall][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f07b16b3fa22f43803266da5da17755f8444245d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 2 May 2018 12:27:41 +0800 Subject: [PATCH 054/102] PUB:20180308 How to set up a print server on a Raspberry Pi.md @qhwdw https://linux.cn/article-9596-1.html --- .../20180308 How to set up a print server on a Raspberry Pi.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180308 How to set up a print server on a Raspberry Pi.md (100%) diff --git a/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md b/published/20180308 How to set up a print server on a Raspberry Pi.md similarity index 100% rename from translated/tech/20180308 How to set up a print server on a Raspberry Pi.md rename to published/20180308 How to set up a print server on a Raspberry 
Pi.md From eec7d59cb48f4cd725d3c48a36f1a72ab35e09b5 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 2 May 2018 12:38:07 +0800 Subject: [PATCH 055/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Configuring=20loc?= =?UTF-8?q?al=20storage=20in=20Linux=20with=20Stratis?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20180425 Configuring local storage in Linux with Stratis.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180425 Configuring local storage in Linux with Stratis.md b/sources/tech/20180425 Configuring local storage in Linux with Stratis.md index fc9471c88f..f76b7f6fca 100644 --- a/sources/tech/20180425 Configuring local storage in Linux with Stratis.md +++ b/sources/tech/20180425 Configuring local storage in Linux with Stratis.md @@ -2,6 +2,7 @@ Configuring local storage in Linux with Stratis ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl) + Configuring local storage is something desktop Linux users do very infrequently—maybe only once, during installation. Linux storage tech moves slowly, and many storage tools used 20 years ago are still used regularly today. But some things have improved since then. Why aren't people taking advantage of these new capabilities? This article is about Stratis, a new project that aims to bring storage advances to all Linux users, from the simple laptop single SSD to a hundred-disk array. Linux has the capabilities, but its lack of an easy-to-use solution has hindered widespread adoption. Stratis's goal is to make Linux's advanced storage features accessible. 
From 6e8f4721fc939d3bf75d81fdd503238fb24481cf Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 2 May 2018 12:39:17 +0800 Subject: [PATCH 056/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20Stratis=20?= =?UTF-8?q?learned=20from=20ZFS,=20Btrfs,=20and=20Linux=20Volume=20Manager?= =?UTF-8?q?=20|=20Opensource.com?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md | 1 - 1 file changed, 1 deletion(-) diff --git a/sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md b/sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md index a32792e532..bf658cab42 100644 --- a/sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md +++ b/sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md @@ -1,6 +1,5 @@ What Stratis learned from ZFS, Btrfs, and Linux Volume Manager | Opensource.com ====== - ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows-building-containers.png?itok=0XvZLZ8k) As discussed in [Part 1][1] of this series, Stratis is a volume-managing filesystem (VMF) with functionality similar to [ZFS][2] and [Btrfs][3]. In designing Stratis, we studied the choices that developers of existing solutions made. From b82f481a832ff3e772200b07bcf410199cc49ccf Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 2 May 2018 12:40:54 +0800 Subject: [PATCH 057/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20PCGen:=20An=20eas?= =?UTF-8?q?y=20way=20to=20generate=20RPG=20characters?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
An easy way to generate RPG characters.md | 136 ++++++++++++++++++ 1 file changed, 136 insertions(+) create mode 100644 sources/tech/20180430 PCGen- An easy way to generate RPG characters.md diff --git a/sources/tech/20180430 PCGen- An easy way to generate RPG characters.md b/sources/tech/20180430 PCGen- An easy way to generate RPG characters.md new file mode 100644 index 0000000000..bd1fd6516d --- /dev/null +++ b/sources/tech/20180430 PCGen- An easy way to generate RPG characters.md @@ -0,0 +1,136 @@ +PCGen: An easy way to generate RPG characters +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_gaming.png?itok=poe7HXJ7) +Do you remember the first time you built a role-playing game (RPG) character? It was exciting and full of possibility, and your imagination ran wild. If you're an avid gamer, it was probably a major milestone for you. + +But do you also remember struggling to decipher an empty character sheet and what you were supposed to write down in each box? Remember poring over the core rulebook, cross-referencing one table with a class write-up, the spellbook with your chosen school of magic, and skills to your race? + +Whether you thought it was fun or perplexing—or both, if you play RPGs, the process of building and tracking a character is probably as natural to you now as using a computer. + +That's an appropriate analogy, because, as we all know, character sheets have been computerized. It's a sensible match; computers are great for tracking information that changes frequently. They certainly handle it a lot better than scratches on paper worn thin by repeated erasing and scribbling and more erasing. + +Sure, you could build custom spreadsheets in [Libre Office][1], but then again you could also try [PCGen][2], a Java-based application that makes character creation and maintenance sublimely simple without taking the fun out of either. 
While it doesn't have a mobile version, there is a [PCGen viewer][3] for Android so you can access your build whenever you need to. + +### Downloading and installing + +PCGen is a Java application, so it runs on anything that has Java installed. This isn't quite the same thing as Java in your web browser; PCGen is a downloadable application that runs locally on your computer. You likely already have Java installed; if not, download and install it from your distribution's repository. If you're not sure whether you have it installed, you can [download PCGen][4] first, try to run it, and install Java if it fails to run. + +Since PCGen is a Java application, you don't have to install it after you download it (because you've already got the Java runtime installed). The application should just run if you double-click the `pcgen.jar` file, but if your computer hasn't been told what to do with a Java app yet, you may need to tell it explicitly to run in Java. You usually do this by right-clicking and specifying what application to open the file in. The application you want, of course, is Java or, if you're asked to input the application launch command manually, `java -jar`. + +Linux and BSD users can customize this experience: + + 1. Download PCGen to a directory, such as `/opt` or `~/bin`. + 2. Unzip the archive with `unzip pcgen-x.yy.zz-full.zip`. + 3. Download a suitable icon (e.g., `wget https://openclipart.org/image/300px/svg_to_png/285672/d20-blank.png -O pcgen/d20.png`. + 4. Create a file called `pcgen.desktop` in your `~/.local/share/applications` directory. Open it in a text editor and type the following, adjusting as appropriate: + + +``` +[Desktop Entry] Version=1.0 Type=Application Name=PCGen Exec="/home/your-username/bin/pcgen/pcgen.sh" Encoding=UTF-8 Icon=/home/your-username/bin/pcgen/d20.png + +``` + +Now you can launch PCGen from your system's application menu as you would any other applications. 
+ +### Player agency + +Many hours of my childhood were spent poring over my friends' D&D Player Handbooks, rolling up characters that I'd never play (thanks to the infamous "satanic panic," I wasn't allowed to play the game). What I learned from this is creating a character for an RPG is a kind of mini-game itself. There are rules to follow, important choices to make, and ultimately a needed narrative to make it all come together. + +A new player might think it's a good idea to allow an application to do a build for them, but most experienced players probably agree that the best way to learn is by doing. And besides, letting something build your character would rob you of the mini-game that is character building. If an application is nothing more than a pre-gen factory, one of the most important parts of being a player is removed from the game, and nobody wants that. + +On the other hand, nobody wants the character building process to discourage new players. + +PCGen manages to strike a perfect balance between guiding you through a character build and staying out of your way as you tinker. Primarily it does this by using an unobtrusive alert system that keeps you updated about any required tasks left in your character build. It's helpful without grabbing the steering wheel out of your hands to take over completely. + + +![PCGen to-do list][6] + +No annoying Clippy, but plenty of helpful hints + +### Getting started + +PCGen essentially has two modes: the character build and the character sheet. When you launch it, PCGen first asks you to choose the game system you're building for. + +![System selection][8] + +Selecting your game system + +Included systems are OGL (Open Game License) systems, including D&D 5e (3 and 3.5 editions), Pathfinder, and Fantasy Craft. 
Better still, PCGen comes preloaded with all manner of add-on material, so not only can you design characters from advanced and third-party modules, the dungeon master (DM) can even create stats for monsters and villains. + +Once you've double-clicked the system you're using, you're presented with a helpful screen letting you either load an existing build you have saved or start building a new one. Each new character gets its own tab in PCGen, so if you want to build a whole party or if a DM wants to track a whole hoard of monsters, it's easy to load up a cast of characters and have them at the ready. + +Starting from the top left, your character build starts with the basics: choosing a name, gender, and alignment. PCGen includes a random-name generator with lots of knobs and switches to adjust for etymology (real and fantasy), race, and gender. + +### Rolling for abilities + +When it's time to roll for ability scores, PCGen has lots of options. It defaults to manual entry. You roll physical dice and enter the numbers. + +Alternately, you can let PCGen roll for you, and you can set the rolling style. You can have PCGen roll 4d6 and drop the lowest, roll 3d6, or roll 2d6 and add 6. + +You can also choose to use a point-purchasing mode with a budget of anything between 15 and 25. This method might appeal to players coming from video games, many of which use this method to allocate attributes. + +### Classes and levels + +Once you pick a class and add your first level, your attributes are locked in and you get a budget for all remaining class- and level-dependent aspects of your character. What exactly those are, of course, depends on what system you're playing, but PCGen keeps you updated on any remaining required tasks as you go. + +There are a lot of tabs in PCGen, and it can sometimes seem just as overwhelming as staring at a physical 300-page Player's Handbook, but as long as you follow the tips, PCGen can keep you on the straight and narrow. 
+ +### Character sheets + +As if building your character wasn't enough fun, the most satisfying step is yet to come: seeing all your choices laid out in a proper character-sheet format. + +The final tab of PCGen is the Character Sheet tab, and it formats your character's attributes into a layout of your choice. The default is a standard, generic layout. It's easily printable and easy to understand. + +There are several variations and addendums available, too. For spellcasters, there's a spellbook sheet that lists available spells, and there are several alternate layouts, some optimized for screen and others for print. + +If you're using PCGen as you play, you can add temporary effects to your character sheet to let PCGen adjust your attributes accordingly. + +If you export your character and import it into the PCGen Importer on your Android phone or tablet, you can use your digital character sheet to track spells and current health and make temporary adjustments to armour class and other attribute modifiers. + +![Exported character sheet on Android][10] + +Exported character sheet on Android + +### PCGen's advantages + +The market is full of digital character-sheet trackers, but PCGen stands out for several reasons. First, it's cross-platform. That may or may not mean much to you, because we tend to standardize our workflow to devices that play nice with one another. In my gaming groups, though, we have Linux users and Windows users, and we have people who want to work off a mobile device and others who prefer a proper computer. Choosing an application that everyone can use and become comfortable with makes the process of updating stats more efficient. + +Second, because PCGen is open source, all your character data remains in an open and parsable format. As an XML fan, I find it invaluable to get my character data as an XML dump, and it's doubly useful to me as a DM as I prepare for an upcoming adventure and need custom monster stat blocks to insert in my notes. 
+ +On a related note, knowing that PCGen will always be available regardless of a player's financial circumstance is also nice. When I changed jobs a year ago, I was lucky to go from one job to the next without interruption in income. In one of my gaming groups, however, two members have been made redundant recently and a third is a university student without much disposable income. The fact that we don't have to worry about monthly membership fees or whether we can all afford to invest in software that is, at the end of the day, a minor convenience over pen and paper gives us confidence in our choice of using digital tools. + +PCGen's open source license also lends itself to rapid development and expansion and ensured maintenance. The reason there's a mobile edition is that the code and data are open. Who knows what's next? + +While PCGen's default datasets revolve, naturally, around OGL content (because the OGL is open and allows content to be freely redistributed), since the application is also open, you can add whatever data you want. It's not necessarily trivial, but games like Open Legend, Dungeon Delvers, and other openly licensed games are ripe for porting to PCGen. + +### Pen and paper + +The pen-and-paper tradition remains important. PCGen strikes a healthy balance between the desire to make stats accounting more convenient by leveraging the latest technology while maintaining the joy of manually updating characters. + +Whether you're an old-school gamer who banishes digital devices from your table or a progressive gamer who embraces technology, it's fair to say most of us have encountered a few times when a game has come to a halt because of a phone or tablet. The fact is, everyone has a mobile device now (even me, even if it's only because my job pays for it), so they will make their way onto your game table. 
I have found that encouraging game-relevant information to be on screens has helped focus players on the game; I'd rather my players stare at their character sheets and game reference documents than surf social media sites on their devices. + +PCGen, in my experience, is the most true-to-form digital character sheet available. It allows for user control, it offers useful guidance as needed, and it's as close to pen-and-paper convenience as possible. Take a look at it, open gamers! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/4/pcgen-rpg-character-generator + +作者:[Seth Kenlon][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/seth +[1]:http://libreoffice.org +[2]:http://pcgen.org +[3]:https://play.google.com/store/apps/details?id=com.dysfunctional.apps.pcgencharactersheet +[4]:http://pcgen.org/download/ +[5]:/file/394491 +[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pcgen_tip.jpg?itok=GXOz_OJ_ (PCGen to-do list) +[7]:/file/394486 +[8]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pcgen_sys.jpg?itok=Zn0_9hkQ (System selection) +[9]:/file/394481 +[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pcgen_screen.jpg?itok=4V6AZPud (Exported character sheet on Android) From facde3bb19975119d38c4bc41f0a8f90fc88cb98 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 2 May 2018 12:43:50 +0800 Subject: [PATCH 058/102] PRF:20180401 9 Useful touch command examples in Linux.md @MjSeven --- ... 
Useful touch command examples in Linux.md | 168 +++++++++--------- 1 file changed, 82 insertions(+), 86 deletions(-) diff --git a/translated/tech/20180401 9 Useful touch command examples in Linux.md b/translated/tech/20180401 9 Useful touch command examples in Linux.md index 34aab2d024..d6feddf595 100644 --- a/translated/tech/20180401 9 Useful touch command examples in Linux.md +++ b/translated/tech/20180401 9 Useful touch command examples in Linux.md @@ -1,77 +1,74 @@ 在 Linux 下 9 个有用的 touch 命令示例 ===== -touch 命令用于创建空文件,并且更改 Unix 和 Linux 系统上现有文件时间戳。这里更改时间戳意味着更新文件和目录的访问以及修改时间。 +`touch` 命令用于创建空文件,也可以更改 Unix 和 Linux 系统上现有文件时间戳。这里所说的更改时间戳意味着更新文件和目录的访问以及修改时间。 -[![touch-command-examples-linux][1]![touch-command-examples-linux][2]][2] +![touch-command-examples-linux][2] -让我们来看看 touch 命令的语法和选项: +让我们来看看 `touch` 命令的语法和选项: -**语法**: # touch {选项} {文件} +**语法**: -touch 命令中使用的选项: +``` +# touch {选项} {文件} +``` -![touch-command-options][1] +`touch` 命令中使用的选项: ![touch-command-options][3] -在这篇文章中,我们将介绍 Linux 中 9 个有用的 touch 命令示例。 +在这篇文章中,我们将介绍 Linux 中 9 个有用的 `touch` 命令示例。 -### 示例:1 使用 touch 创建一个空文件 +### 示例:1 使用 touch 创建一个空文件 + +要在 Linux 系统上使用 `touch` 命令创建空文件,键入 `touch`,然后输入文件名。如下所示: -要在 Linux 系统上使用 touch 命令创建空文件,键入 touch,然后输入文件名。如下所示: ``` [root@linuxtechi ~]# touch devops.txt [root@linuxtechi ~]# ls -l devops.txt -rw-r--r--. 
1 root root 0 Mar 29 22:39 devops.txt -[root@linuxtechi ~]# - ``` -### 示例:2 使用 touch 创建批量空文件 +### 示例:2 使用 touch 创建批量空文件 + +可能会出现一些情况,我们必须为某些测试创建大量空文件,这可以使用 `touch` 命令轻松实现: -可能会出现一些情况,我们必须为某些测试创建大量空文件,这可以使用 touch 命令轻松实现: ``` [root@linuxtechi ~]# touch sysadm-{1..20}.txt - ``` -在上面的例子中,我们创建了 20 个名为 sysadm-1.txt 到 sysadm-20.txt 的空文件,你可以根据需要更改名称和数字。 +在上面的例子中,我们创建了 20 个名为 `sysadm-1.txt` 到 `sysadm-20.txt` 的空文件,你可以根据需要更改名称和数字。 -### 示例:3 改变/更新文件和目录的访问时间 +### 示例:3 改变/更新文件和目录的访问时间 + +假设我们想要改变名为 `devops.txt` 文件的访问时间,在 `touch` 命令中使用 `-a` 选项,然后输入文件名。如下所示: -假设我们想要改变名为 **devops.txt** 文件的访问时间,在 touch 命令中使用 **-a** 选项,然后输入文件名。如下所示: ``` [root@linuxtechi ~]# touch -a devops.txt -[root@linuxtechi ~]# - ``` 现在使用 `stat` 命令验证文件的访问时间是否已更新: ``` [root@linuxtechi ~]# stat devops.txt -  File: ‘devops.txt’ -  Size: 0               Blocks: 0          IO Block: 4096   regular empty file -Device: fd00h/64768d    Inode: 67324178    Links: 1 -Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root) + File: 'devops.txt' + Size: 0 Blocks: 0 IO Block: 4096 regular empty file +Device: fd00h/64768d Inode: 67324178 Links: 1 +Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Context: unconfined_u:object_r:admin_home_t:s0 Access: 2018-03-29 23:03:10.902000000 -0400 Modify: 2018-03-29 22:39:29.365000000 -0400 Change: 2018-03-29 23:03:10.902000000 -0400 - Birth: - -[root@linuxtechi ~]# - + Birth: - ``` -**改变目录的访问时间** +**改变目录的访问时间:** + +假设我们在 `/mnt` 目录下有一个 `nfsshare` 文件夹,让我们用下面的命令改变这个文件夹的访问时间: -假设我们在 /mnt 目录下有一个 ‘nfsshare’ 文件夹,让我们用下面的命令改变这个文件夹的访问时间: ``` [root@linuxtechi ~]# touch -m /mnt/nfsshare/ -[root@linuxtechi ~]# - [root@linuxtechi ~]# stat /mnt/nfsshare/ -  File: ‘/mnt/nfsshare/’ +  File: '/mnt/nfsshare/'   Size: 6               Blocks: 0          IO Block: 4096   directory Device: fd00h/64768d    Inode: 2258        Links: 2 Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root) @@ -80,37 +77,34 @@ Access: 2018-03-29 23:34:38.095000000 -0400 Modify: 2018-03-03 
10:42:45.194000000 -0500 Change: 2018-03-29 23:34:38.095000000 -0400  Birth: - -[root@linuxtechi ~]# - ``` -### 示例:4 更改访问时间而不用创建新文件 +### 示例:4 更改访问时间而不用创建新文件 + +在某些情况下,如果文件存在,我们希望更改文件的访问时间,并避免创建文件。在 touch 命令中使用 `-c` 选项即可,如果文件存在,那么我们可以改变文件的访问时间,如果不存在,我们也可不会创建它。 -在某些情况下,如果文件存在,我们希望更改文件的访问时间,并避免创建文件。在 touch 命令中使用 **-c** 选项即可,如果文件存在,那么我们可以改变文件的访问时间,如果不存在,我们也可不会创建它。 ``` [root@linuxtechi ~]# touch -c sysadm-20.txt [root@linuxtechi ~]# touch -c winadm-20.txt [root@linuxtechi ~]# ls -l winadm-20.txt ls: cannot access winadm-20.txt: No such file or directory -[root@linuxtechi ~]# - ``` -### 示例:5 更改文件和目录的修改时间 +### 示例:5 更改文件和目录的修改时间 -在 touch 命令中使用 **-m** 选项,我们可以更改文件和目录的修改时间。 +在 `touch` 命令中使用 `-m` 选项,我们可以更改文件和目录的修改时间。 + +让我们更改名为 `devops.txt` 文件的更改时间: -让我们更改名为 “devops.txt” 文件的更改时间: ``` [root@linuxtechi ~]# touch -m devops.txt -[root@linuxtechi ~]# - ``` -现在使用 stat 命令来验证修改时间是否改变: +现在使用 `stat` 命令来验证修改时间是否改变: + ``` [root@linuxtechi ~]# stat devops.txt -  File: ‘devops.txt’ +  File: 'devops.txt'   Size: 0               Blocks: 0          IO Block: 4096   regular empty file Device: fd00h/64768d    Inode: 67324178    Links: 1 Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root) @@ -119,21 +113,19 @@ Access: 2018-03-29 23:03:10.902000000 -0400 Modify: 2018-03-29 23:59:49.106000000 -0400 Change: 2018-03-29 23:59:49.106000000 -0400  Birth: - -[root@linuxtechi ~]# - ``` 同样的,我们可以改变一个目录的修改时间: + ``` [root@linuxtechi ~]# touch -m /mnt/nfsshare/ -[root@linuxtechi ~]# - ``` -使用 stat 交叉验证访问和修改时间: +使用 `stat` 交叉验证访问和修改时间: + ``` [root@linuxtechi ~]# stat devops.txt -  File: ‘devops.txt’ +  File: 'devops.txt'   Size: 0               Blocks: 0          IO Block: 4096   regular empty file Device: fd00h/64768d    Inode: 67324178    Links: 1 Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root) @@ -142,47 +134,47 @@ Access: 2018-03-30 00:06:20.145000000 -0400 Modify: 2018-03-30 00:06:20.145000000 -0400 Change: 2018-03-30 00:06:20.145000000 -0400  Birth: - 
-[root@linuxtechi ~]# - ``` -### 示例:7 将访问和修改时间设置为特定的日期和时间 +### 示例:7 将访问和修改时间设置为特定的日期和时间 -每当我们使用 touch 命令更改文件和目录的访问和修改时间时,它将当前时间设置为该文件或目录的访问和修改时间。 +每当我们使用 `touch` 命令更改文件和目录的访问和修改时间时,它将当前时间设置为该文件或目录的访问和修改时间。 -假设我们想要将特定的日期和时间设置为文件的访问和修改时间,这可以使用 touch 命令中的 ‘-c’ 和 ‘-t’ 选项来实现。 +假设我们想要将特定的日期和时间设置为文件的访问和修改时间,这可以使用 `touch` 命令中的 `-c` 和 `-t` 选项来实现。 -日期和时间可以使用以下格式指定:{CCYY}MMDDhhmm.ss +日期和时间可以使用以下格式指定: + +``` +{CCYY}MMDDhhmm.ss +``` 其中: - * CC – 年份的前两位数字 - * YY – 年份的后两位数字 - * MM – 月份 (01-12) - * DD – 天 (01-31) - * hh – 小时 (00-23) - * mm – 分钟 (00-59) + * `CC` – 年份的前两位数字 + * `YY` – 年份的后两位数字 + * `MM` – 月份 (01-12) + * `DD` – 天 (01-31) + * `hh` – 小时 (00-23) + * `mm` – 分钟 (00-59) + +让我们将 `devops.txt` 文件的访问和修改时间设置为未来的一个时间(2025 年 10 月 19 日 18 时 20 分)。 -让我们将 devops.txt file 文件的访问和修改时间设置为未来的一个时间( 2025 年, 10 月, 19 日, 18 时 20 分)。 ``` [root@linuxtechi ~]# touch -c -t 202510191820 devops.txt - ``` -使用 stat 命令查看更新访问和修改时间: - -![stat-command-output-linux][1] +使用 `stat` 命令查看更新访问和修改时间: ![stat-command-output-linux][4] -根据日期字符串设置访问和修改时间,在 touch 命令中使用 ‘-d’ 选项,然后指定日期字符串,后面跟文件名。如下所示: +根据日期字符串设置访问和修改时间,在 `touch` 命令中使用 `-d` 选项,然后指定日期字符串,后面跟文件名。如下所示: + ``` [root@linuxtechi ~]# touch -c -d "2010-02-07 20:15:12.000000000 +0530" sysadm-29.txt -[root@linuxtechi ~]# - ``` -使用 stat 命令验证文件的状态: +使用 `stat` 命令验证文件的状态: + ``` [root@linuxtechi ~]# stat sysadm-20.txt   File: ‘sysadm-20.txt’ @@ -194,39 +186,43 @@ Access: 2010-02-07 20:15:12.000000000 +0530 Modify: 2010-02-07 20:15:12.000000000 +0530 Change: 2018-03-30 10:23:31.584000000 +0530  Birth: - -[root@linuxtechi ~]# - ``` -**注意:**在上述命令中,如果我们不指定 ‘-c’,那么 touch 命令将创建一个新文件以防系统中存在该文件,并将时间戳设置为命令中给出的。 +**注意:**在上述命令中,如果我们不指定 `-c`,如果系统中不存在该文件那么 `touch` 命令将创建一个新文件,并将时间戳设置为命令中给出的。 -### 示例:8 使用参考文件设置时间戳(-r) +### 示例:8 使用参考文件设置时间戳(-r) -在 touch 命令中,我们可以使用参考文件来设置文件或目录的时间戳。假设我想在 “devops.txt” 文件上设置与文件 “sysadm-20.txt” 文件相同的时间戳,touch 命令中使用 ‘-r’ 选项可以轻松实现。 +在 `touch` 命令中,我们可以使用参考文件来设置文件或目录的时间戳。假设我想在 `devops.txt` 文件上设置与文件 `sysadm-20.txt` 文件相同的时间戳,`touch` 命令中使用 `-r` 选项可以轻松实现。 + 
+**语法:** + +``` +# touch -r {参考文件} 真正文件 +``` -**语法:**# touch -r {参考文件} 真正文件 ``` [root@linuxtechi ~]# touch -r sysadm-20.txt devops.txt -[root@linuxtechi ~]# - ``` -### 示例:9 在符号链接文件上更改访问和修改时间 +### 示例:9 在符号链接文件上更改访问和修改时间 -默认情况下,每当我们尝试使用 touch 命令更改符号链接文件的时间戳时,它只会更改原始文件的时间戳。如果你想更改符号链接文件的时间戳,则可以使用 touch 命令中的 ‘-h’ 选项来实现。 +默认情况下,每当我们尝试使用 `touch` 命令更改符号链接文件的时间戳时,它只会更改原始文件的时间戳。如果你想更改符号链接文件的时间戳,则可以使用 `touch` 命令中的 `-h` 选项来实现。 + +**语法:** + +``` +# touch -h {符号链接文件} +``` -**语法:** # touch -h {符号链接文件} ``` [root@linuxtechi opt]# ls -l /root/linuxgeeks.txt lrwxrwxrwx. 1 root root 15 Mar 30 10:56 /root/linuxgeeks.txt -> linuxadmins.txt [root@linuxtechi ~]# touch -t 203010191820 -h linuxgeeks.txt [root@linuxtechi ~]# ls -l linuxgeeks.txt lrwxrwxrwx. 1 root root 15 Oct 19  2030 linuxgeeks.txt -> linuxadmins.txt -[root@linuxtechi ~]# - ``` -这就是本教程的全部了。我希望这些例子能帮助你理解 touch 命令。请分享你的宝贵意见和评论。 +这就是本教程的全部了。我希望这些例子能帮助你理解 `touch` 命令。请分享你的宝贵意见和评论。 -------------------------------------------------------------------------------- @@ -234,7 +230,7 @@ via: https://www.linuxtechi.com/9-useful-touch-command-examples-linux/ 作者:[Pradeep Kumar][a] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From df38343f888280609ca1c968337417f766e726d8 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 2 May 2018 12:44:12 +0800 Subject: [PATCH 059/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Reset=20a=20lost?= =?UTF-8?q?=20root=20password=20in=20under=205=20minutes?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...a lost root password in under 5 minutes.md | 80 +++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100644 sources/tech/20180430 Reset a lost root password in under 5 minutes.md diff --git a/sources/tech/20180430 Reset a lost root password in under 5 minutes.md b/sources/tech/20180430 Reset a lost root 
password in under 5 minutes.md
new file mode 100644
index 0000000000..06e18a2d6d
--- /dev/null
+++ b/sources/tech/20180430 Reset a lost root password in under 5 minutes.md
@@ -0,0 +1,80 @@
+Reset a lost root password in under 5 minutes
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum)
+
+A system administrator can easily reset passwords for users who have forgotten theirs. But what happens if the system administrator forgets the root password, or leaves the company? This guide will show you how to reset a lost or forgotten root password on a Red Hat-compatible system, including Fedora and CentOS, in less than 5 minutes.
+
+Please note, if the entire system hard disk has been encrypted with LUKS, you will need to provide the LUKS password when prompted. Also, this procedure applies to systems running systemd, which has been the default init system since Fedora 15, CentOS 7, and Red Hat Enterprise Linux 7.0.
+
+First, you need to interrupt the boot process: turn the system on, or restart it if it's already powered on. This first step is tricky because the GRUB menu tends to flash by very quickly on the screen. You may need to try this a few times until you are able to do it.
+
+Press **e** on your keyboard when you see this screen:
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub0.png?itok=cz9nk5BT)
+
+If you've done this correctly, you should see a screen similar to this one:
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub1.png?itok=3ZY5uiGq)
+
+Use your arrow keys to move to the `linux16` line:
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub2_0.png?itok=8epRyqOl)
+
+Using your **Del** key or your **Backspace** key, remove `rhgb quiet` and replace it with the following:
+
+`rd.break enforcing=0`
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub3.png?itok=JDdMXnUb)
+
+Setting `enforcing=0` will allow you to avoid performing a complete SELinux relabeling of the system. Once the system is rebooted, you'll only have to restore the correct SELinux context for the `/etc/shadow` file. I'll show you how to do this too.
+
+Press **Ctrl-x** to start.
+
+**The system will now be in emergency mode.**
+
+Remount the hard drive with read-write access:
+```
+# mount -o remount,rw /sysroot
+
+```
+
+Run `chroot` to access the system:
+```
+# chroot /sysroot
+
+```
+
+You can now change the root password:
+```
+# passwd
+
+```
+
+Type the new root password twice when prompted. If you are successful, you should see a message that reads "**all authentication tokens updated successfully**."
+
+Type **exit** twice to reboot the system.
+
+Log in as root and restore the SELinux label to the `/etc/shadow` file.
+``` +# restorecon -v /etc/shadow + +``` + +Turn SELinux back to enforcing mode: +``` +# setenforce 1 + +``` +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/4/reset-lost-root-password + +作者:[Curt Warfield][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/rcurtiswarfield From 2da76b34c9e8fe533e05767f8002fbb62b77cebd Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 2 May 2018 12:44:32 +0800 Subject: [PATCH 060/102] PUB:20180401 9 Useful touch command examples in Linux.md @MjSeven https://linux.cn/article-9597-1.html --- .../20180401 9 Useful touch command examples in Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180401 9 Useful touch command examples in Linux.md (100%) diff --git a/translated/tech/20180401 9 Useful touch command examples in Linux.md b/published/20180401 9 Useful touch command examples in Linux.md similarity index 100% rename from translated/tech/20180401 9 Useful touch command examples in Linux.md rename to published/20180401 9 Useful touch command examples in Linux.md From 221d82bcc95c37621fdc94cb0028770c45574452 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 2 May 2018 12:48:32 +0800 Subject: [PATCH 061/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=203=20practical=20P?= =?UTF-8?q?ython=20tools:=20magic=20methods,=20iterators=20and=20generator?= =?UTF-8?q?s,=20and=20method=20magic?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rators and generators, and method magic.md | 633 ++++++++++++++++++ 1 file changed, 633 insertions(+) create mode 100644 sources/tech/20180430 3 practical Python tools- magic methods, iterators and generators, and method magic.md diff --git 
a/sources/tech/20180430 3 practical Python tools- magic methods, iterators and generators, and method magic.md b/sources/tech/20180430 3 practical Python tools- magic methods, iterators and generators, and method magic.md
new file mode 100644
index 0000000000..2cb0e5a948
--- /dev/null
+++ b/sources/tech/20180430 3 practical Python tools- magic methods, iterators and generators, and method magic.md
@@ -0,0 +1,633 @@
+3 practical Python tools: magic methods, iterators and generators, and method magic
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/serving-bowl-forks-dinner.png?itok=a3YqPwr5)
+
+Python offers a unique set of tools and language features that help make your code more elegant, readable, and intuitive. By selecting the right tool for the right problem, your code will be easier to maintain. In this article, we'll examine three of those tools: magic methods, iterators and generators, and method magic.
+
+### Magic methods
+
+Magic methods can be considered the plumbing of Python. They're the methods that are called "under the hood" for certain built-in methods, symbols, and operations. A common magic method you may be familiar with is `__init__()`, which is called when we want to initialize a new instance of a class.
+
+You may have seen other common magic methods, like `__str__` and `__repr__`. There is a whole world of magic methods, and by implementing a few of them, we can greatly modify the behavior of an object or even make it behave like a built-in datatype, such as a number, list, or dictionary.
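+
+As a minimal warm-up illustration, here is a class that customizes how its instances print by implementing `__repr__` (when `__str__` is missing, `str()` and `print()` fall back to `__repr__`):
+
```python
class Point:
    def __init__(self, x, y):      # runs on Point(3, 4)
        self.x = x
        self.y = y

    def __repr__(self):            # runs on repr(p), and in the REPL
        return f"Point({self.x}, {self.y})"

p = Point(3, 4)
print(p)   # → Point(3, 4)
```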
+ +Let's take this `Money` class for example: +``` +class Money: + + + +    currency_rates = { + +        '$': 1, + +        '€': 0.88, + +    } + + + +    def __init__(self, symbol, amount): + +        self.symbol = symbol + +        self.amount = amount + + + +    def __repr__(self): + +        return '%s%.2f' % (self.symbol, self.amount) + + + +    def convert(self, other): + +        """ Convert other amount to our currency """ + +        new_amount = ( + +            other.amount / self.currency_rates[other.symbol] + +            * self.currency_rates[self.symbol]) + + + +        return Money(self.symbol, new_amount) + +``` + +The class defines a currency rate for a given symbol and exchange rate, specifies an initializer (also known as a constructor), and implements `__repr__`, so when we print out the class, we see a nice representation such as `$2.00` for an instance `Money('$', 2.00)` with the currency symbol and amount. Most importantly, it defines a method that allows you to convert between different currencies with different exchange rates. + +Using a Python shell, let's say we've defined the costs for two food items in different currencies, like so: +``` +>>> soda_cost = Money('$', 5.25) + +>>> soda_cost + +    $5.25 + + + +>>> pizza_cost = Money('€', 7.99) + +>>> pizza_cost + +    €7.99 + +``` + +We could use magic methods to help instances of this class interact with each other. Let's say we wanted to be able to add two instances of this class together, even if they were in different currencies. To make that a reality, we could implement the `__add__` magic method on our `Money` class: +``` +class Money: + + + +    # ... previously defined methods ... 
+ + + +    def __add__(self, other): + +        """ Add 2 Money instances using '+' """ + +        new_amount = self.amount + self.convert(other).amount + +        return Money(self.symbol, new_amount) + +``` + +Now we can use this class in a very intuitive way: +``` +>>> soda_cost = Money('$', 5.25) + + + +>>> pizza_cost = Money('€', 7.99) + + + +>>> soda_cost + pizza_cost + +    $14.33 + + + +>>> pizza_cost + soda_cost + +    €12.61 + +``` + +When we add two instances together, we get a result in the first defined currency. All the conversion is done seamlessly under the hood. If we wanted to, we could also implement `__sub__` for subtraction, `__mul__` for multiplication, and many more. Read about [emulating numeric types][1], or read this [guide to magic methods][2] for others. + +We learned that `__add__` maps to the built-in operator `+`. Other magic methods can map to symbols like `[]`. For example, to access an item by index or key (in the case of a dictionary), use the `__getitem__` method: +``` +>>> d = {'one': 1, 'two': 2} + + + +>>> d['two'] + +2 + +>>> d.__getitem__('two') + +2 + +``` + +Some magic methods even map to built-in functions, such as `__len__()`, which maps to `len()`. +``` +class Alphabet: + +    letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' + + + +    def __len__(self): + +        return len(self.letters) + + + + + +>>> my_alphabet = Alphabet() + +>>> len(my_alphabet) + +    26 + +``` + +### Custom iterators + +Custom iterators are an incredibly powerful but unfortunately confusing topic to new and seasoned Pythonistas alike. + +Many built-in types, such as lists, sets, and dictionaries, already implement the protocol that allows them to be iterated over under the hood. This allows us to easily loop over them. +``` +>>> for food in ['Pizza', 'Fries']: + +         print(food + '. Yum!') + + + +Pizza. Yum! + +Fries. Yum! + +``` + +How can we iterate over our own custom classes? First, let's clear up some terminology. 
+ + * To be iterable, a class needs to implement `__iter__()` + * The `__iter__()` method needs to return an iterator + * To be an iterator, a class needs to implement `__next__()` (or `next()` [in Python 2][3]), which must raise a `StopIteration` exception when there are no more items to iterate over. + + + +Whew! It sounds complicated, but once you remember these fundamental concepts, you'll be able to iterate in your sleep. + +When might we want to use a custom iterator? Let's imagine a scenario where we have a `Server` instance running different services such as `http` and `ssh` on different ports. Some of these services have an `active` state while others are `inactive`. +``` +class Server: + + + +    services = [ + +        {'active': False, 'protocol': 'ftp', 'port': 21}, + +        {'active': True, 'protocol': 'ssh', 'port': 22}, + +        {'active': True, 'protocol': 'http', 'port': 80}, + +    ] + +``` + +When we loop over our `Server` instance, we only want to loop over `active` services. Let's create a new class, an `IterableServer`: +``` +class IterableServer: + + + +    def __init__(self): + +        self.current_pos = 0 + + + +    def __next__(self): + +        pass  # TODO: Implement and remember to raise StopIteration + +``` + +First, we initialize our current position to `0`. Then, we define a `__next__()` method, which will return the next item. We'll also ensure that we raise `StopIteration` when there are no more items to return. So far so good! Now, let's implement this `__next__()` method. +``` +class IterableServer: + + + +    def __init__(self): + +        self.current_pos = 0.  
# we initialize our current position to zero + + + +    def __iter__(self):  # we can return self here, because __next__ is implemented + +        return self + + + +    def __next__(self): + +        while self.current_pos < len(self.services): + +            service = self.services[self.current_pos] + +            self.current_pos += 1 + +            if service['active']: + +                return service['protocol'], service['port'] + +        raise StopIteration + + + +    next = __next__  # optional python2 compatibility + +``` + +We keep looping over the services in our list while our current position is less than the length of the services but only returning if the service is active. Once we run out of services to iterate over, we raise a `StopIteration` exception. + +Because we implement a `__next__()` method that raises `StopIteration` when it is exhausted, we can return `self` from `__iter__()` because the `IterableServer` class adheres to the `iterable` protocol. + +Now we can loop over an instance of `IterableServer`, which will allow us to look at each active service, like so: +``` +>>> for protocol, port in IterableServer(): + +        print('service %s is running on port %d' % (protocol, port)) + + + +service ssh is running on port 22 + +service http is running on port 21 + +``` + +That's pretty great, but we can do better! In an instance like this, where our iterator doesn't need to maintain a lot of state, we can simplify our code and use a [generator][4] instead. +``` +class Server: + + + +    services = [ + +        {'active': False, 'protocol': 'ftp', 'port': 21}, + +        {'active': True, 'protocol': 'ssh', 'port': 22}, + +        {'active': True, 'protocol': 'http', 'port': 21}, + +    ] + + + +    def __iter__(self): + +        for service in self.services: + +            if service['active']: + +                yield service['protocol'], service['port'] + +``` + +What exactly is the `yield` keyword? 
Yield is used when defining a generator function. It's sort of like a `return`. While a `return` exits the function after returning the value, `yield` suspends execution until the next time it's called. This allows your generator function to maintain state until it resumes. Check out [yield's documentation][5] to learn more. With a generator, we don't have to manually maintain state by remembering our position. A generator knows only two things: what it needs to do right now and what it needs to do to calculate the next item. Once we reach a point of execution where `yield` isn't called again, we know to stop iterating. + +This works because of some built-in Python magic. In the [Python documentation for `__iter__()`][6] we can see that if `__iter__()` is implemented as a generator, it will automatically return an iterator object that supplies the `__iter__()` and `__next__()` methods. Read this great article for a deeper dive of [iterators, iterables, and generators][7]. + +### Method magic + +Due to its unique aspects, Python provides some interesting method magic as part of the language. + +One example of this is aliasing functions. Since functions are just objects, we can assign them to multiple variables. For example: +``` +>>> def foo(): + +       return 'foo' + + + +>>> foo() + +'foo' + + + +>>> bar = foo + + + +>>> bar() + +'foo' + +``` + +We'll see later on how this can be useful. + +Python provides a handy built-in, [called `getattr()`][8], that takes the `object, name, default` parameters and returns the attribute `name` on `object`. This programmatically allows us to access instance variables and methods. For example: +``` +>>> class Dog: + +        sound = 'Bark' + +        def speak(self): + +            print(self.sound + '!', self.sound + '!') + + + +>>> fido = Dog() + + + +>>> fido.sound + +'Bark' + +>>> getattr(fido, 'sound') + +'Bark' + + + +>>> fido.speak + +> + +>>> getattr(fido, 'speak') + +> + + + + + +>>> fido.speak() + +Bark! Bark! 
+ +>>> speak_method = getattr(fido, 'speak') + +>>> speak_method() + +Bark! Bark! + +``` + +Cool trick, but how could we practically use `getattr`? Let's look at an example that allows us to write a tiny command-line tool to dynamically process commands. +``` +class Operations: + +    def say_hi(self, name): + +        print('Hello,', name) + + + +    def say_bye(self, name): + +        print ('Goodbye,', name) + + + +    def default(self, arg): + +        print ('This operation is not supported.') + + + +if __name__ == '__main__': + +    operations = Operations() + + + +    # let's assume we do error handling + +    command, argument = input('> ').split() + +    func_to_call = getattr(operations, command, operations.default) + +    func_to_call(argument) + +``` + +The output of our script is: +``` +$ python getattr.py + + + +> say_hi Nina + +Hello, Nina + + + +> blah blah + +This operation is not supported. + +``` + +Next, we'll look at `partial`. For example, **`functool.partial(func, *args, **kwargs)`** allows you to return a new [partial object][9] that behaves like `func` called with `args` and `kwargs`. If more `args` are passed in, they're appended to `args`. If more `kwargs` are passed in, they extend and override `kwargs`. Let's see it in action with a brief example: +``` +>>> from functools import partial + +>>> basetwo = partial(int, base=2) + +>>> basetwo + + + + + +>>> basetwo('10010') + +18 + + + +# This is the same as + +>>> int('10010', base=2) + +``` + +Let's see how this method magic ties together in some sample code from a library I enjoy using [called][10]`agithub`, which is a (poorly named) REST API client with transparent syntax that allows you to rapidly prototype any REST API (not just GitHub) with minimal configuration. I find this project interesting because it's incredibly powerful yet only about 400 lines of Python. You can add support for any REST API in about 30 lines of configuration code. 
`agithub` knows everything it needs to about protocol (`REST`, `HTTP`, `TCP`), but it assumes nothing about the upstream API. Let's dive into the implementation. + +Here's a simplified version of how we'd define an endpoint URL for the GitHub API and any other relevant connection properties. View the [full code][11] instead. +``` +class GitHub(API): + + + +    def __init__(self, token=None, *args, **kwargs): + +        props = ConnectionProperties(api_url = kwargs.pop('api_url', 'api.github.com')) + +        self.setClient(Client(*args, **kwargs)) + +        self.setConnectionProperties(props) + +``` + +Then, once your [access token][12] is configured, you can start using the [GitHub API][13]. +``` +>>> gh = GitHub('token') + +>>> status, data = gh.user.repos.get(visibility='public', sort='created') + +>>> # ^ Maps to GET /user/repos + +>>> data + +... ['tweeter', 'snipey', '...'] + +``` + +Note that it's up to you to spell things correctly. There's no validation of the URL. If the URL doesn't exist or anything else goes wrong, the error thrown by the API will be returned. So, how does this all work? Let's figure it out. First, we'll check out a simplified example of the [`API` class][14]: +``` +class API: + + + +    # ... other methods ... + + + +    def __getattr__(self, key): + +        return IncompleteRequest(self.client).__getattr__(key) + +    __getitem__ = __getattr__ + +``` + +Each call on the `API` class ferries the call to the [`IncompleteRequest` class][15] for the specified `key`. +``` +class IncompleteRequest: + + + +    # ... other methods ... + + + +    def __getattr__(self, key): + +        if key in self.client.http_methods: + +            htmlMethod = getattr(self.client, key) + +            return partial(htmlMethod, url=self.url) + +        else: + +            self.url += '/' + str(key) + +            return self + +    __getitem__ = __getattr__ + + + + + +class Client: + +    http_methods = ('get')  # ... and post, put, patch, etc. 
+ + + +    def get(self, url, headers={}, **params): + +        return self.request('GET', url, None, headers) + +``` + +If the last call is not an HTTP method (like 'get', 'post', etc.), it returns an `IncompleteRequest` with an appended path. Otherwise, it gets the right function for the specified HTTP method from the [`Client` class][16] and returns a `partial` . + +What happens if we give a non-existent path? +``` +>>> status, data = this.path.doesnt.exist.get() + +>>> status + +... 404 + +``` + +And because `__getitem__` is aliased to `__getattr__`: +``` +>>> owner, repo = 'nnja', 'tweeter' + +>>> status, data = gh.repos[owner][repo].pulls.get() + +>>> # ^ Maps to GET /repos/nnja/tweeter/pulls + +>>> data + +.... # {....} + +``` + +Now that's some serious method magic! + +### Learn more + +Python provides plenty of tools that allow you to make your code more elegant and easier to read and understand. The challenge is finding the right tool for the job, but I hope this article added some new ones to your toolbox. And, if you'd like to take this a step further, you can read about decorators, context managers, context generators, and `NamedTuple`s on my blog [nnja.io][17]. As you become a better Python developer, I encourage you to get out there and read some source code for well-architected projects. [Requests][18] and [Flask][19] are two great codebases to start with. 
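If you want to experiment with the chained-attribute trick from the `agithub` walkthrough above, here is a minimal, self-contained sketch of the same pattern. To be clear, this is not agithub's actual code: the class and function names (`PathBuilder`, `fake_request`) are invented for illustration, and `fake_request` simply stands in for a real HTTP call so the example runs anywhere.

```python
from functools import partial

class PathBuilder:
    """Builds a URL path from chained attribute/item access."""
    HTTP_METHODS = ('get', 'post', 'put', 'delete')

    def __init__(self, requester, url=''):
        self.requester = requester
        self.url = url

    def __getattr__(self, key):
        if key in self.HTTP_METHODS:
            # Bind the accumulated URL to the requester, just like
            # agithub returns a partial for the chosen HTTP method.
            return partial(self.requester, key.upper(), self.url)
        # Otherwise, keep growing the URL path.
        return PathBuilder(self.requester, self.url + '/' + str(key))

    __getitem__ = __getattr__

def fake_request(method, url, **params):
    # Stand-in for a real HTTP call; returns what it would send.
    return method, url, params

api = PathBuilder(fake_request)
print(api.repos['nnja']['tweeter'].pulls.get(state='open'))
# ('GET', '/repos/nnja/tweeter/pulls', {'state': 'open'})
```

Unlike the real library, this sketch returns a fresh `PathBuilder` for each path segment instead of mutating `self.url`, which avoids surprises when you reuse a partially built path.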
+ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/4/elegant-solutions-everyday-python-problems + +作者:[Nina Zakharenko][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/nnja +[1]:https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types +[2]:https://rszalski.github.io/magicmethods/ +[3]:https://docs.python.org/2/library/stdtypes.html#iterator.next +[4]:https://docs.python.org/3/library/stdtypes.html#generator-types +[5]:https://docs.python.org/3/reference/expressions.html#yieldexpr +[6]:https://docs.python.org/3/reference/datamodel.html#object.__iter__ +[7]:http://nvie.com/posts/iterators-vs-generators/ +[8]:https://docs.python.org/3/library/functions.html#getattr +[9]:https://docs.python.org/3/library/functools.html#functools.partial +[10]:https://github.com/mozilla/agithub +[11]:https://github.com/mozilla/agithub/blob/master/agithub/GitHub.py +[12]:https://github.com/settings/tokens +[13]:https://developer.github.com/v3/repos/#list-your-repositories +[14]:https://github.com/mozilla/agithub/blob/dbf7014e2504333c58a39153aa11bbbdd080f6ac/agithub/base.py#L30-L58 +[15]:https://github.com/mozilla/agithub/blob/dbf7014e2504333c58a39153aa11bbbdd080f6ac/agithub/base.py#L60-L100 +[16]:https://github.com/mozilla/agithub/blob/dbf7014e2504333c58a39153aa11bbbdd080f6ac/agithub/base.py#L102-L231 +[17]:http://nnja.io +[18]:https://github.com/requests/requests +[19]:https://github.com/pallets/flask +[20]:https://us.pycon.org/2018/schedule/presentation/164/ +[21]:https://us.pycon.org/2018/ From f07b87e68f601015a530ebabd7ff1202b809d7c7 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 2 May 2018 12:51:28 +0800 Subject: [PATCH 062/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20Beginners=20G?= 
 =?UTF-8?q?uide=20To=20Flatpak?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 .../20180428 A Beginners Guide To Flatpak.md | 311 ++++++++++++++++++
 1 file changed, 311 insertions(+)
 create mode 100644 sources/tech/20180428 A Beginners Guide To Flatpak.md

diff --git a/sources/tech/20180428 A Beginners Guide To Flatpak.md b/sources/tech/20180428 A Beginners Guide To Flatpak.md
new file mode 100644
index 0000000000..db1dfa8181
--- /dev/null
+++ b/sources/tech/20180428 A Beginners Guide To Flatpak.md
@@ -0,0 +1,311 @@
+A Beginners Guide To Flatpak
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2016/06/flatpak-720x340.jpg)
+
+A while ago, we wrote about [**Ubuntu’s Snaps**][1]. Snaps were introduced by Canonical for the Ubuntu operating system and were later adopted by other Linux distributions such as Arch, Gentoo, and Fedora. A snap is a single binary package bundled with all required libraries and dependencies, and you can install it on any Linux distribution, regardless of its version and architecture. Similar to Snaps, there is another tool called **Flatpak**. As you may already know, packaging applications for different Linux distributions is a time-consuming and difficult process, because each application needs a different set of libraries and dependencies on each distribution. Flatpak, a new framework for desktop applications, greatly reduces this burden: you can build a single Flatpak app and install it on various operating systems. How cool is that?
+
+Also, users don’t have to worry about libraries and dependencies; everything is bundled within the app itself. Most importantly, Flatpak apps are sandboxed and isolated from the rest of the host operating system and from other applications. Another notable feature is that we can install multiple versions of the same application on the same system at the same time.
For example, you can install VLC player versions 2.1, 2.2, and 2.3 side by side, so developers can test different versions of the same application.
+
+In this tutorial, we will see how to install Flatpak in GNU/Linux.
+
+### Install Flatpak
+
+Flatpak is available for many popular Linux distributions such as Arch Linux, Debian, Fedora, Gentoo, Red Hat, Linux Mint, openSUSE, Solus, Mageia, and Ubuntu.
+
+To install Flatpak on Arch Linux, run:
+```
+$ sudo pacman -S flatpak
+
+```
+
+Flatpak is available in the default repositories of Debian Stretch and newer. To install it, run:
+```
+$ sudo apt install flatpak
+
+```
+
+On Fedora, Flatpak is installed by default. All you have to do is enable Flathub as described in the next section.
+
+Just in case it is not installed for any reason, run:
+```
+$ sudo dnf install flatpak
+
+```
+
+On RHEL 7, run:
+```
+$ sudo yum install flatpak
+
+```
+
+On Linux Mint 18.3, flatpak is installed by default, so no setup is required.
+
+On openSUSE Tumbleweed, Flatpak can also be installed using Zypper:
+```
+$ sudo zypper install flatpak
+
+```
+
+On Ubuntu, add the following repository and install Flatpak as shown below.
+```
+$ sudo add-apt-repository ppa:alexlarsson/flatpak
+
+$ sudo apt update
+
+$ sudo apt install flatpak
+
+```
+
+The Flatpak plugin for the Software app makes it possible to install apps without needing the command line. To install this plugin, run:
+```
+$ sudo apt install gnome-software-plugin-flatpak
+
+```
+
+For other Linux distributions, refer to the official installation [**link**][2].
+
+### Getting Started With Flatpak
+
+Many popular applications, such as GIMP, Kdenlive, Steam, Spotify, and Visual Studio Code, are available as flatpaks.
+
+Let us now see the basic usage of the flatpak command.
+
+First of all, we need to add remote repositories.
+
+#### Adding Remote Repositories
+
+**Enable Flathub Repository:**
+
+**Flathub** is a central repository where all flatpak applications are made available to users. To enable it, just run:
+```
+$ sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
+
+```
+
+Flathub is enough to install most popular apps. If you want to try some GNOME apps, add the GNOME repository.
+
+**Enable GNOME Repository:**
+
+The GNOME repository contains all GNOME core applications. The GNOME flatpak repository itself is available in two versions, **stable** and **nightly**.
+
+To add the GNOME stable repository, run the following commands:
+```
+$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg
+
+$ sudo flatpak remote-add --gpg-import=gnome-sdk.gpg --if-not-exists gnome-apps https://sdk.gnome.org/repo-apps/
+
+```
+
+Applications in this repository require the **3.20 version of the org.gnome.Platform runtime**.
+
+To install the stable runtimes, run:
+```
+$ sudo flatpak remote-add --gpg-import=gnome-sdk.gpg gnome https://sdk.gnome.org/repo/
+
+```
+
+To add the GNOME nightly apps repository, run:
+```
+$ wget https://sdk.gnome.org/nightly/keys/nightly.gpg
+
+$ sudo flatpak remote-add --gpg-import=nightly.gpg --if-not-exists gnome-nightly-apps https://sdk.gnome.org/nightly/repo-apps/
+
+```
+
+Applications in this repository require the **nightly version of the org.gnome.Platform runtime**.
+
+To install the nightly runtimes, run:
+```
+$ sudo flatpak remote-add --gpg-import=nightly.gpg gnome-nightly https://sdk.gnome.org/nightly/repo/
+
+```
+
+#### Listing Remotes
+
+To list all configured remote repositories, run:
+```
+$ flatpak remotes
+Name Options
+flathub system
+gnome system
+gnome-apps system
+gnome-nightly system
+gnome-nightly-apps system
+
+```
+
+As you can see, the above command lists the remotes that you have added to your system. It also shows whether each remote was added per-user or system-wide.
+
+#### Removing Remotes
+
+To remove a remote, for example flathub, simply do:
+```
+$ sudo flatpak remote-delete flathub
+
+```
+
+Here, **flathub** is the remote name.
+
+#### Installing Flatpak Applications
+
+In this section, we will see how to install flatpak apps.
+
+To install an application, simply do:
+```
+$ sudo flatpak install flathub com.spotify.Client
+
+```
+
+All the apps in the GNOME stable repository use the version name “stable”.
+
+To install a stable GNOME application, for example **Evince**, run:
+```
+$ sudo flatpak install gnome-apps org.gnome.Evince stable
+
+```
+
+All the apps in the GNOME nightly repository use the version name “master”.
+
+For example, to install gedit, run:
+```
+$ sudo flatpak install gnome-nightly-apps org.gnome.gedit master
+
+```
+
+If you don’t want to install apps system-wide, you can also install flatpak apps per-user, like below:
+```
+$ flatpak install --user flathub com.spotify.Client
+
+```
+
+All installed apps will be stored in the **$HOME/.var/app/** location.
+```
+$ ls $HOME/.var/app/
+com.spotify.Client
+
+```
+
+#### Running Flatpak Applications
+
+You can launch an installed application at any time from the application launcher. From the command line, you can run it, for example Spotify, using the command:
+```
+$ flatpak run com.spotify.Client
+
+```
+
+#### Listing Applications
+
+To view the installed applications and runtimes, run:
+```
+$ flatpak list
+
+```
+
+To view only the applications, not the runtimes, use this command instead.
+```
+$ flatpak list --app
+
+```
+
+You can also view the list of available applications and runtimes from all remotes using the command:
+```
+$ flatpak remote-ls
+
+```
+
+To list only applications, not runtimes, run:
+```
+$ flatpak remote-ls --app
+
+```
+
+To list applications and runtimes from a specific repository, for example **gnome-apps**, run:
+```
+$ flatpak remote-ls gnome-apps
+
+```
+
+To list only the applications from a remote repository, run:
+```
+$ flatpak remote-ls flathub --app
+
+```
+
+#### Updating Applications
+
+To update all your flatpak applications, run:
+```
+$ flatpak update
+
+```
+
+To update a specific application, we do:
+```
+$ flatpak update com.spotify.Client
+
+```
+
+#### Getting Details Of Applications
+
+To display the details of an installed application, run:
+```
+$ flatpak info io.github.mmstick.FontFinder
+
+```
+
+Sample output:
+```
+Ref: app/io.github.mmstick.FontFinder/x86_64/stable
+ID: io.github.mmstick.FontFinder
+Arch: x86_64
+Branch: stable
+Origin: flathub
+Date: 2018-04-11 15:10:31 +0000
+Subject: Workaround appstream issues (391ef7f5)
+Commit: 07164e84148c9fc8b0a2a263c8a468a5355b89061b43e32d95008fc5dc4988f4
+Parent: dbff9150fce9fdfbc53d27e82965010805f16491ec7aa1aa76bf24ec1882d683
+Location: /var/lib/flatpak/app/io.github.mmstick.FontFinder/x86_64/stable/07164e84148c9fc8b0a2a263c8a468a5355b89061b43e32d95008fc5dc4988f4
+Installed size: 2.5 MB
+Runtime: org.gnome.Platform/x86_64/3.28
+
+```
+
+#### Removing Applications
+
+To remove a flatpak application, run:
+```
+$ sudo flatpak uninstall com.spotify.Client
+
+```
+
+For more details, refer to the flatpak help output:
+```
+$ flatpak --help
+
+```
+
+And, that’s all for now. We hope you now have a basic idea of Flatpak.
+
+If you find this guide useful, please share it on your social and professional networks and support OSTechNix.
+
+More good stuff to come. Stay tuned!
+
+Cheers!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:http://www.ostechnix.com/introduction-ubuntus-snap-packages/ +[2]:https://flatpak.org/setup/ From 137ec530c627a5cad79993704330bd5b87f1c158 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 2 May 2018 12:59:44 +0800 Subject: [PATCH 063/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Testing=20IPv6=20?= =?UTF-8?q?Networking=20in=20KVM:=20Part=202?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Testing IPv6 Networking in KVM- Part 2.md | 119 ++++++++++++++++++ 1 file changed, 119 insertions(+) create mode 100644 sources/tech/20171109 Testing IPv6 Networking in KVM- Part 2.md diff --git a/sources/tech/20171109 Testing IPv6 Networking in KVM- Part 2.md b/sources/tech/20171109 Testing IPv6 Networking in KVM- Part 2.md new file mode 100644 index 0000000000..b9787effe0 --- /dev/null +++ b/sources/tech/20171109 Testing IPv6 Networking in KVM- Part 2.md @@ -0,0 +1,119 @@ +Testing IPv6 Networking in KVM: Part 2 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_4.png?itok=yZBHylwd) +When last we met, in [Testing IPv6 Networking in KVM: Part 1][1], we learned about IPv6 private addressing. Today, we're going to use KVM to create networks for testing IPv6 to our heart's content. + +Should you desire a refresh in using KVM, see [Creating Virtual Machines in KVM: Part 1][2] and [Creating Virtual Machines in KVM: Part 2 - Networking][3]. + +### Creating Networks in KVM + +You need at least two virtual machines in KVM. 
Of course, you may create as many as you like. My little setup has Fedora, Ubuntu, and openSUSE. To create a new IPv6 network, open Edit > Connection Details > Virtual Networks in the main Virtual Machine Manager window. Click on the button with the green cross on the bottom left to create a new network (Figure 1). + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kvm-fig-1_0.png?itok=ruqjPXxd) +Figure 1: Create a network. + +Give your new network a name, then click the Forward button. You may opt to not create an IPv4 network if you wish. When you create a new IPv4 network the Virtual Machine Manager will not let you create a duplicate network, or one with an invalid address. On my host Ubuntu system a valid address is highlighted in green, and an invalid address is highlighted in a tasteful rosy hue. On my openSUSE machine there are no colored highlights. Enable DHCP or not, and create a static route or not, then move on to the next window. + +Check "Enable IPv6 network address space definition" and enter your private address range. You may use any IPv6 address class you wish, being careful, of course, to not allow your experiments to leak out of your network. We shall use the nice IPv6 unique local addresses (ULA), and use the online address generator at [Simple DNS Plus][4] to create our network address. Copy the "Combined/CID" address into the Network field (Figure 2). + + +![network address][6] + +Figure 2: Copy the "Combined/CID" address into the Network field. + +[Used with permission][7] + +Virtual Machine Manager thinks my address is not valid, as evidenced by the rose highlight. Can it be right? Let us use ipv6calc to check: +``` +$ ipv6calc -qi fd7d:844d:3e17:f3ae::/64 +Address type: unicast, unique-local-unicast, iid, iid-local +Registry for address: reserved(RFC4193#3.1) +Address type has SLA: f3ae +Interface identifier: 0000:0000:0000:0000 +Interface identifier is probably manual set + +``` + +ipv6calc thinks it's fine. 
Just for fun, change one of the numbers to something invalid, like the letter g, and try it again. (Asking "What if...?" and trial and error is the awesomest way to learn.)
+
+Let us carry on and enable DHCPv6 (Figure 3). You can accept the default values, or set your own.
+
+![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/kvm-fig-3.png?itok=F-oAAtN9)
+
+We shall skip creating a default route definition and move on to the next screen, where we shall enable "Isolated Virtual Network" and "Enable IPv6 internal routing/networking".
+
+### VM Network Selection
+
+Now you can configure your virtual machines to use your new network. Open your VMs, and then click the "i" button at the top left to open its "Show virtual hardware details" screen. In the "Add Hardware" column click on the NIC button to open the network selector, and select your nice new IPv6 network. Click Apply, and then reboot. (Or use your favorite method for restarting networking, or renewing your DHCP lease.)
+
+### Testing
+
+What does ifconfig tell us?
+```
+$ ifconfig
+ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
+        inet 192.168.30.207  netmask 255.255.255.0
+          broadcast 192.168.30.255
+        inet6 fd7d:844d:3e17:f3ae::6314
+          prefixlen 128  scopeid 0x0<global>
+        inet6 fe80::4821:5ecb:e4b4:d5fc
+          prefixlen 64  scopeid 0x20<link>
+
+```
+
+And there is our nice new ULA, fd7d:844d:3e17:f3ae::6314, and the auto-generated link-local address that is always present.
Let's have some ping fun, pinging another VM on the network: +``` +vm1 ~$ ping6 -c2 fd7d:844d:3e17:f3ae::2c9f +PING fd7d:844d:3e17:f3ae::2c9f(fd7d:844d:3e17:f3ae::2c9f) 56 data bytes +64 bytes from fd7d:844d:3e17:f3ae::2c9f: icmp_seq=1 ttl=64 time=0.635 ms +64 bytes from fd7d:844d:3e17:f3ae::2c9f: icmp_seq=2 ttl=64 time=0.365 ms + +vm2 ~$ ping6 -c2 fd7d:844d:3e17:f3ae:a:b:c:6314 +PING fd7d:844d:3e17:f3ae:a:b:c:6314(fd7d:844d:3e17:f3ae:a:b:c:6314) 56 data bytes +64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=1 ttl=64 time=0.744 ms +64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=2 ttl=64 time=0.364 ms + +``` + +When you're struggling to understand subnetting, this gives you a fast, easy way to try different addresses and see whether they work. You can assign multiple IP addresses to a single interface and then ping them to see what happens. In a ULA, the interface, or host, portion of the IP address is the last four quads, so you can do anything to those and still be in the same subnet, which in this example is f3ae. 
This example changes only the interface ID on one of my VMs, to show how you really can do whatever you want with those four quads: +``` +vm1 ~$ sudo /sbin/ip -6 addr add fd7d:844d:3e17:f3ae:a:b:c:6314 dev ens3 + +vm2 ~$ ping6 -c2 fd7d:844d:3e17:f3ae:a:b:c:6314 +PING fd7d:844d:3e17:f3ae:a:b:c:6314(fd7d:844d:3e17:f3ae:a:b:c:6314) 56 data bytes +64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=1 ttl=64 time=0.744 ms +64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=2 ttl=64 time=0.364 ms + +``` + +Now try it with a different subnet, which in this example is f4ae instead of f3ae: +``` +$ ping6 -c2 fd7d:844d:3e17:f4ae:a:b:c:6314 +PING fd7d:844d:3e17:f4ae:a:b:c:6314(fd7d:844d:3e17:f4ae:a:b:c:6314) 56 data bytes +From fd7d:844d:3e17:f3ae::1 icmp_seq=1 Destination unreachable: No route +From fd7d:844d:3e17:f3ae::1 icmp_seq=2 Destination unreachable: No route + +``` + +This is also a great time to practice routing, which we will do in a future installment along with setting up auto-addressing without DHCP. 
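You can run the same subnet sanity checks without touching a VM by using Python's standard `ipaddress` module. This quick sketch reuses the addresses from the examples above and shows why the f4ae address falls outside our f3ae /64 network:

```python
import ipaddress

# Our ULA network from the examples above.
net = ipaddress.ip_network('fd7d:844d:3e17:f3ae::/64')

addr_same = ipaddress.ip_address('fd7d:844d:3e17:f3ae:a:b:c:6314')
addr_other = ipaddress.ip_address('fd7d:844d:3e17:f4ae:a:b:c:6314')

print(net.is_private)    # True -- ULAs live in the reserved fc00::/7 block
print(addr_same in net)  # True -- same /64; any interface ID works
print(addr_other in net) # False -- f4ae is a different subnet, hence "No route"
```

It will also refuse to parse an invalid address (like one containing the letter g) with a `ValueError`, which makes it handy for the same "What if...?" experiments.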
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-2 + +作者:[CARLA SCHRODER][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-1 +[2]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-1 +[3]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-2-networking +[4]:http://simpledns.com/private-ipv6.aspx +[5]:/files/images/kvm-fig-2png +[6]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/kvm-fig-2.png?itok=gncdPGj- (network address) +[7]:https://www.linux.com/licenses/category/used-permission From a9010018d0001286b4c148a0faf0519963d59239 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 2 May 2018 13:12:05 +0800 Subject: [PATCH 064/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Easily=20Search?= =?UTF-8?q?=20And=20Install=20Google=20Web=20Fonts=20In=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...h And Install Google Web Fonts In Linux.md | 87 +++++++++++++++++++ 1 file changed, 87 insertions(+) create mode 100644 sources/tech/20180430 Easily Search And Install Google Web Fonts In Linux.md diff --git a/sources/tech/20180430 Easily Search And Install Google Web Fonts In Linux.md b/sources/tech/20180430 Easily Search And Install Google Web Fonts In Linux.md new file mode 100644 index 0000000000..cbed6ac244 --- /dev/null +++ b/sources/tech/20180430 Easily Search And Install Google Web Fonts In Linux.md @@ -0,0 +1,87 @@ +Easily Search And Install Google Web Fonts In Linux +====== + 
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/04/Font-Finder-720x340.png)
+**Font Finder** is the Rust implementation of good old [**Typecatcher**][1], which is used to easily search for and install Google web fonts from [**Google’s font archive**][2]. It helps you install hundreds of free and open source fonts on your Linux desktop. In case you’re looking for beautiful fonts for your web projects, apps, and whatever else, Font Finder can easily get them for you. It is a free, open source GTK3 application written in the Rust programming language. Unlike Typecatcher, which is written in Python, Font Finder can filter fonts by category, has zero Python runtime dependencies, and offers much better performance and resource consumption.
+
+In this brief tutorial, we are going to see how to install and use Font Finder in Linux.
+
+### Install Font Finder
+
+Since Font Finder is written in the Rust programming language, you need to install Rust on your system first.
+
+After installing Rust, run the following command to install Font Finder:
+```
+$ cargo install fontfinder
+
+```
+
+Font Finder is also available as a [**flatpak app**][3]. First, install Flatpak on your system.
+
+Then, install Font Finder using the command:
+```
+$ flatpak install flathub io.github.mmstick.FontFinder
+
+```
+
+### Search And Install Google Web Fonts In Linux Using Font Finder
+
+You can launch Font Finder either from the application launcher or by running the following command:
+```
+$ flatpak run io.github.mmstick.FontFinder
+
+```
+
+This is how the Font Finder default interface looks.
+
+![][5]
+
+As you can see, the Font Finder user interface is very simple. All Google web fonts are listed in the left pane, and a preview of the selected font is shown in the right pane. You can type any words in the preview box to see how they will look in the selected font.
There is also a search box at the top left which allows you to quickly search for a font of your choice.
+
+By default, Font Finder displays all types of fonts. You can, however, filter fonts by category using the category drop-down box above the search box.
+
+![][6]
+
+To install a font, just choose it and click the **Install** button at the top.
+
+![][7]
+
+You can test the newly installed fonts in any text-processing application.
+
+![][8]
+
+Similarly, to remove a font, just choose it from the Font Finder dashboard and click the **Uninstall** button. It’s that simple!
+
+The Settings button (the gear button) in the top left corner provides the option to switch to a dark preview.
+
+![][9]
+
+As you can see, Font Finder is very simple and does the job exactly as advertised on its home page. If you’re looking for an application to install Google web fonts, Font Finder is one such application.
+
+And, that’s all for now. Hope this helps. More good stuff to come. Stay tuned!
+
+Cheers!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/font-finder-easily-search-and-install-google-web-fonts-in-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/install-google-web-fonts-ubuntu/ +[2]:https://fonts.google.com/ +[3]:https://flathub.org/apps/details/io.github.mmstick.FontFinder +[4]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[5]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-1.png +[6]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-2.png +[7]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-3.png +[8]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-5.png +[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-4.png From d9b082aa44c4bcda9ad2f069c9a97ccc38eb240c Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 2 May 2018 15:01:28 +0800 Subject: [PATCH 065/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=203=20Python=20temp?= =?UTF-8?q?late=20libraries=20compared?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...27 3 Python template libraries compared.md | 135 ++++++++++++++++++ 1 file changed, 135 insertions(+) create mode 100644 sources/tech/20180427 3 Python template libraries compared.md diff --git a/sources/tech/20180427 3 Python template libraries compared.md b/sources/tech/20180427 3 Python template libraries compared.md new file mode 100644 index 0000000000..18a434150b --- /dev/null +++ b/sources/tech/20180427 3 Python template libraries compared.md @@ -0,0 +1,135 @@ +3 Python template libraries compared +====== + 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/library-libraries-search.png?itok=xH8xSU_G) +In my day job, I spend a lot of time wrangling data from various sources into human-readable information. While a lot of the time this just takes the form of a spreadsheet or some type of chart or other data visualization, there are other times when it makes sense to present the data instead in a written format. + +But a pet peeve of mine is copying and pasting. If you’re moving data from its source to a standardized template, you shouldn’t be copying and pasting either. It’s error-prone, and honestly, it’s not a good use of your time. + +So for any piece of information I send out regularly which follows a common pattern, I tend to find some way to automate at least a chunk of it. Maybe that involves creating a few formulas in a spreadsheet, a quick shell script, or some other solution to autofill a template with information pulled from an outside source. + +But lately, I’ve been exploring Python templating to do much of the work of creating reports and graphs from other datasets. + +Python templating engines are hugely powerful. My use case of simplifying report creation only scratches the surface of what they can be put to work for. Many developers are making use of these tools to build full-fledged web applications and content management systems. But you don’t have to have a grand vision of a complicated web app to make use of Python templating tools. + +### Why templating? + +Each templating tool is a little different, and you should read the documentation to understand the exact usage. But let’s create a hypothetical example. Let’s say I’d like to create a short page listing all of the Python topics I've written about recently. 
Something like this:
+```
+<html>
+
+  <head>
+
+    <title>My Python articles</title>
+
+  </head>
+
+  <body>
+
+    <p>These are some of the things I have written about Python:</p>
+
+    <ul>
+
+      <li>Python GUIs</li>
+
+      <li>Python IDEs</li>
+
+      <li>Python web scrapers</li>
+
+    </ul>
+
+  </body>
+
+</html>
+
+```
+
+Simple enough to maintain when it’s just these three items. But what happens when I want to add a fourth, or fifth, or sixty-seventh? Rather than hand-coding this page, could I generate it from a CSV or other data file containing a list of all of my pages? Could I easily create duplicates of this for every topic I've written on? Could I programmatically change the text or title or heading on each one of those pages? That's where a templating engine can come into play.
+
+There are many different options to choose from, and today I'll share with you three, in no particular order: [Mako][6], [Jinja2][7], and [Genshi][8].
+
+### Mako
+
+[Mako][6] is a Python templating tool released under the MIT license that is designed for fast performance (not unlike Jinja2). Mako has been used by Reddit to power their web pages, as well as being the default templating language for web frameworks like Pyramid and Pylons. It's also fairly simple and straightforward to use; you can design templates with just a couple of lines of code. Supporting both Python 2.x and 3.x, it's a powerful and feature-rich tool with [good documentation][9], which I consider a must. Features include filters, inheritance, callable blocks, and a built-in caching system, which could be important for large or complex web projects.
+
+### Jinja2
+
+Jinja2 is another speedy and full-featured option, available for both Python 2.x and 3.x under a BSD license. Jinja2 has a lot of overlap with Mako from a feature perspective, so for a newcomer, your choice between the two may come down to which formatting style you prefer.
Jinja2 also compiles your templates to bytecode and has features like HTML escaping, sandboxing of portions of templates, and template inheritance. Its users include Mozilla, SourceForge, NPR, Instagram, and others, and it also features [strong documentation][10]. Unlike Mako, which uses inline Python for the logic inside your templates, Jinja2 uses its own syntax.
+
+### Genshi
+
+[Genshi][8] is the third option I'll mention. It's really an XML tool with a strong templating component, so if the data you are working with is already in XML format, or you need to work with formatting beyond a web page, Genshi might be a good solution for you. HTML is basically a type of XML (well, not precisely, but that's beyond the scope of this article and a bit pedantic), so formatting them is quite similar. Since a lot of the data I commonly work with is in one flavor of XML or another, I appreciate working with a tool I can use for multiple things.
+
+The release version currently supports only Python 2.x. Although Python 3 support exists in trunk, I would caution you that it does not appear to be receiving active development. Genshi is made available under a BSD license.
+
+### Example
+
+So in our hypothetical example above, rather than update the HTML file every time I write about a new topic, I can update it programmatically.
I can create a template, which might look like this: +``` +<html> + +  <head> + +    <title>My Python articles</title> + +  </head> + +  <body> + +    <p>These are some of the things I have written about Python:</p> + +    <ul> + +      %for topic in topics: + +      <li>${topic}</li> + +      %endfor + +    </ul> + +  </body> + +</html> + +``` + +And then I can iterate across each topic with my templating library, in this case, Mako, like this: +``` +from mako.template import Template + + + +mytemplate = Template(filename='template.txt') + +print(mytemplate.render(topics=("Python GUIs","Python IDEs","Python web scrapers"))) + +``` + +Of course, in a real-world usage, rather than listing the contents manually in a variable, I would likely pull them from an outside data source, like a database or an API. + +These are not the only Python templating engines out there. If you’re starting down the path of creating a new project which will make heavy use of templates, you’ll want to consider more than just these three. Check out this much more comprehensive list on the [Python wiki][11] for more projects that are worth considering. 
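That point about outside data sources can be sketched with nothing but the standard library — here a hypothetical `topics.csv` is simulated in memory with `io.StringIO`, and the per-topic loop a templating engine would normally run is written out by hand:

```python
import csv
import io

# Stand-in for an external topics.csv file (hypothetical data); in real
# use this would be open("topics.csv"), a database query, or an API call.
csv_file = io.StringIO("Python GUIs\nPython IDEs\nPython web scrapers\n")
topics = [row[0] for row in csv.reader(csv_file)]

# The loop that Mako, Jinja2, or Genshi would otherwise handle for us.
items = "\n".join(f"      <li>{topic}</li>" for topic in topics)

page = f"""<html>
  <head>
    <title>My Python articles</title>
  </head>
  <body>
    <p>These are some of the things I have written about Python:</p>
    <ul>
{items}
    </ul>
  </body>
</html>"""

print(page)
```

Adding a sixty-seventh topic then means appending one line to the data file, not editing the page.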
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/resources/python/template-libraries + +作者:[Jason Baker][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jason-baker +[1]:https://opensource.com/resources/python?intcmp=7016000000127cYAAQ +[2]:https://opensource.com/resources/python/ides?intcmp=7016000000127cYAAQ +[3]:https://opensource.com/resources/python/gui-frameworks?intcmp=7016000000127cYAAQ +[4]:https://opensource.com/tags/python?intcmp=7016000000127cYAAQ +[5]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ +[6]:http://www.makotemplates.org/ +[7]:http://jinja.pocoo.org/ +[8]:https://genshi.edgewall.org/ +[9]:http://docs.makotemplates.org/en/latest/ +[10]:http://jinja.pocoo.org/docs/2.10/ +[11]:https://wiki.python.org/moin/Templating From b8b7a66b1a319ae1617279d715cb908bb0e41e3f Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 2 May 2018 15:04:30 +0800 Subject: [PATCH 066/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Compil?= =?UTF-8?q?e=20a=20Linux=20Kernel?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20180427 How to Compile a Linux Kernel.md | 144 ++++++++++++++++++ 1 file changed, 144 insertions(+) create mode 100644 sources/tech/20180427 How to Compile a Linux Kernel.md diff --git a/sources/tech/20180427 How to Compile a Linux Kernel.md b/sources/tech/20180427 How to Compile a Linux Kernel.md new file mode 100644 index 0000000000..cc71b7fbae --- /dev/null +++ b/sources/tech/20180427 How to Compile a Linux Kernel.md @@ -0,0 +1,144 @@ +How to Compile a Linux Kernel +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/chester-alvarez-644-unsplash.jpg?itok=aFxG9kUZ) + +Once upon a time the idea of 
upgrading the Linux kernel sent fear through the hearts of many a user. Back then, the process of upgrading the kernel involved a lot of steps and even more time. Now, installing a new kernel can be easily handled with package managers like apt. With the addition of certain repositories, you can even easily install experimental or specific kernels (such as real-time kernels for audio production) without breaking a sweat. + +Considering how easy it is to upgrade your kernel, why would you bother compiling one yourself? Here are a few possible reasons: + + * You simply want to know how it’s done. + + * You need to enable or disable specific options in a kernel that simply aren’t available via the standard options. + + * You want to enable hardware support that might not be found in the standard kernel. + + * You’re using a distribution that requires you to compile the kernel. + + * You’re a student and this is an assignment. + + + + +Regardless of why, knowing how to compile a Linux kernel is very useful and can even be seen as a rite of passage. When I first compiled a new Linux kernel (a long, long time ago) and managed to boot from said kernel, I felt a certain thrill coursing through my system (which was quickly crushed the next time I attempted and failed). +With that said, let’s walk through the process of compiling a Linux kernel. I’ll be demonstrating on Ubuntu 16.04 Server. After running through a standard sudo apt upgrade, the installed kernel is 4.4.0-121. I want to upgrade to kernel 4.17. Let’s take care of that. + +A word of warning: I highly recommend you practice this procedure on a virtual machine. By working with a VM, you can always create a snapshot and back out of any problems with ease. DO NOT upgrade the kernel this way on a production machine… not until you know what you’re doing. + +### Downloading the kernel + +The first thing to do is download the kernel source file. 
This can be done by finding the URL of the kernel you want to download (from [Kernel.org][1]). Once you have the URL, download the source file with the following command (I’ll demonstrate with kernel 4.17 RC2): +``` +wget https://git.kernel.org/torvalds/t/linux-4.17-rc2.tar.gz + +``` + +While that file is downloading, there are a few bits to take care of. + +### Installing requirements + +In order to compile the kernel, we’ll need to first install a few requirements. This can be done with a single command: +``` +sudo apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison + +``` + +Do note: You will need at least 12GB of free space on your local drive to get through the kernel compilation process. So make sure you have enough space. + +### Extracting the source + +From within the directory housing our newly downloaded kernel, extract the kernel source with the command: +``` +tar xvzf linux-4.17-rc2.tar.gz + +``` + +Change into the newly created directory with the command cd linux-4.17-rc2. + +### Configuring the kernel + +Before we actually compile the kernel, we must first configure which modules to include. There is actually a really easy way to do this. With a single command, you can copy the current kernel’s config file and then use the tried and true menuconfig command to make any necessary changes. To do this, issue the command: +``` +cp /boot/config-$(uname -r) .config + +``` + +Now that you have a configuration file, issue the command make menuconfig. This command will open up a configuration tool (Figure 1) that allows you to go through every module available and enable or disable what you need or don’t need. + + +![menuconfig][3] + +Figure 1: The make menuconfig in action. + +[Used with permission][4] + +It is quite possible you might disable a critical portion of the kernel, so step through menuconfig with care. If you’re not sure about an option, leave it alone. 
Or, better yet, stick with the configuration we just copied from the running kernel (as we know it works). Once you’ve gone through the entire list (it’s quite long), you’re ready to compile! + +### Compiling and installing + +Now it’s time to actually compile the kernel. The first step is to compile using the make command. So issue make and then answer the necessary questions (Figure 2). The questions asked will be determined by what kernel you’re upgrading from and what kernel you’re upgrading to. Trust me when I say there’s a ton of questions to answer, so give yourself plenty of time here. + + +![make][6] + +Figure 2: Answering the questions for the make command. + +[Used with permission][4] + +After answering the litany of questions, you can then install the modules you’ve enabled with the command: +``` +make modules_install + +``` + +Once again, this command will take some time, so either sit back and watch the output, or go do something else (as it will not require your input). Chances are, you’ll want to undertake another task (unless you really enjoy watching output fly by in a terminal). + +Now we install the kernel with the command: +``` +sudo make install + +``` + +Again, another command that’s going to take a significant amount of time. In fact, the make install command will take even longer than the make modules_install command. Go have lunch, configure a router, install Linux on a few servers, or take a nap. + +### Enable the kernel for boot + +Once the make install command completes, it’s time to enable the kernel for boot. To do this, issue the command: +``` +sudo update-initramfs -c -k 4.17-rc2 + +``` + +Of course, you would substitute the kernel number above for the kernel you’ve compiled. When that command completes, update grub with the command: +``` +sudo update-grub + +``` + +You should now be able to restart your system and select the newly installed kernel. + +### Congratulations! + +You’ve compiled a Linux kernel! 
It’s a process that may take some time; but, in the end, you’ll have a custom kernel for your Linux distribution, as well as an important skill that many Linux admins tend to overlook. + +Learn more about Linux through the free ["Introduction to Linux" ][7] course from The Linux Foundation and edX. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/4/how-compile-linux-kernel-0 + +作者:[Jack Wallen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/jlwallen +[1]:https://www.kernel.org/ +[2]:/files/images/kernelcompile1jpg +[3]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kernel_compile_1.jpg?itok=ZNybYgEt (menuconfig) +[4]:/licenses/category/used-permission +[5]:/files/images/kernelcompile2jpg +[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kernel_compile_2.jpg?itok=TYfV02wC (make) +[7]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From 15754fbc9f4c239b85ff076d7b151b237fda5b09 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 2 May 2018 21:36:37 +0800 Subject: [PATCH 067/102] PRF:20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md @geekpi --- ... 
Transferred Files Over SSH Using Rsync.md | 48 +++++++++---------- 1 file changed, 23 insertions(+), 25 deletions(-) diff --git a/translated/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md b/translated/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md index 3ae2d7761c..7f3f10a6e3 100644 --- a/translated/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md +++ b/translated/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md @@ -1,17 +1,17 @@ -如何使用 Rsync 通过 SSH 恢复部分传输的文件 +如何使用 rsync 通过 SSH 恢复部分传输的文件 ====== ![](https://www.ostechnix.com/wp-content/uploads/2016/02/Resume-Partially-Transferred-Files-Over-SSH-Using-Rsync.png) -由于诸如电源故障、网络故障或用户干预等各种原因,使用 SCP 命令通过 SSH 复制的大型文件可能会中断,取消或损坏。有一天,我将 Ubuntu 16.04 ISO 文件复制到我的远程系统。不幸的是断电了,网络连接立即丢失。结果么?复制过程终止!这只是一个简单的例子。Ubuntu ISO 并不是那么大,一旦电源恢复,我就可以重新启动复制过程。但在生产环境中,当你在传输大型文件时,你可能并不希望这样做。 +由于诸如电源故障、网络故障或用户干预等各种原因,使用 `scp` 命令通过 SSH 复制的大型文件可能会中断、取消或损坏。有一天,我将 Ubuntu 16.04 ISO 文件复制到我的远程系统。不幸的是断电了,网络连接立即断了。结果么?复制过程终止!这只是一个简单的例子。Ubuntu ISO 并不是那么大,一旦电源恢复,我就可以重新启动复制过程。但在生产环境中,当你在传输大型文件时,你可能并不希望这样做。 -而且,你不能总是使用 **scp** 命令恢复被中止的进度。因为,如果你这样做,它只会覆盖现有的文件。这时你会怎么做?别担心!这是 **Rsync** 派上用场的地方!Rsync 可以帮助你恢复中断的复制或下载过程。对于那些好奇的人,Rsync 是一个快速、多功能的文件复制程序,可用于复制和传输远程和本地系统中的文件或文件夹。 +而且,你不能继续使用 `scp` 命令恢复被中止的进度。因为,如果你这样做,它只会覆盖现有的文件。这时你会怎么做?别担心!这是 `rsync` 派上用场的地方!`rsync` 可以帮助你恢复中断的复制或下载过程。对于那些好奇的人,`rsync` 是一个快速、多功能的文件复制程序,可用于复制和传输远程和本地系统中的文件或文件夹。 -它提供了大量控制其行为的每个方面的选项,并允许非常灵活地指定要复制的一组文件。它以增量传输算法而闻名,它通过仅发送源文件和目标中现有文件之间的差异来减少通过网络发送的数据量。 Rsync 广泛用于备份和镜像,以及日常使用中改进的复制命令。 +它提供了大量控制其各种行为的选项,并允许非常灵活地指定要复制的一组文件。它以增量传输算法而闻名,它通过仅发送源文件和目标中现有文件之间的差异来减少通过网络发送的数据量。 `rsync` 广泛用于备份和镜像,以及日常使用中改进的复制命令。 -就像 SCP 一样,rsync 也会通过 SSH 复制文件。如果你想通过 SSH 下载或传输大文件和文件夹,我建议您使用 rsync。请注意,**应该在两边都安装 rsync**(远程和本地系统)来恢复部分传输的文件。 +就像 `scp` 一样,`rsync` 也会通过 SSH 复制文件。如果你想通过 SSH 下载或传输大文件和文件夹,我建议您使用 `rsync`。请注意,应该在两边(远程和本地系统)都安装 `rsync` 来恢复部分传输的文件。 -### 使用 Rsync 恢复部分传输的文件 +### 使用 rsync 
恢复部分传输的文件 好吧,让我给你看一个例子。我将使用命令将 Ubuntu 16.04 ISO 从本地系统复制到远程系统: @@ -21,33 +21,32 @@ $ scp Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43. 这里, - * **sk**是我的远程系统的用户名 - * **192.168.43.2** 是远程机器的 IP 地址。 + * `sk`是我的远程系统的用户名 + * `192.168.43.2` 是远程机器的 IP 地址。 +现在,我按下 `CTRL+C` 结束它。 - -现在,我按下 **CTRL+c** 结束它。 - -**示例输出:** +示例输出: ``` sk@192.168.43.2's password: ubuntu-16.04-desktop-amd64.iso 26% 372MB 26.2MB/s 00:39 ETA^c ``` -[![][1]][2] +![][2] 正如你在上面的输出中看到的,当它达到 26% 时,我终止了复制过程。 如果我重新运行上面的命令,它只会覆盖现有的文件。换句话说,复制过程不会在我断开的地方恢复。 -为了恢复复制过程,我们可以使用 **rsync** 命令,如下所示。 +为了恢复复制过程,我们可以使用 `rsync` 命令,如下所示。 ``` $ rsync -P -rsh=ssh Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/ ``` -**示例输出:** +示例输出: + ``` sk@192.168.1.103's password: sending incremental file list @@ -55,14 +54,15 @@ ubuntu-16.04-desktop-amd64.iso                    380.56M 26% 41.05MB/s 0:00:25 ``` -[![][1]][4] +![][4] + +看见了吗?现在,复制过程在我们之前断开的地方恢复了。你也可以像下面那样使用 `-partial` 而不是 `-P` 参数。 -看见了吗?现在,复制过程在我们之前断开的地方恢复了。你也可以像下面那样使用 “-partial” 而不是 “-P” 参数。 ``` $ rsync --partial -rsh=ssh Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/ ``` -这里,参数 “-partial” 或 “-P” 告诉 rsync 命令保留部分下载的文件并恢复进度。 +这里,参数 `-partial` 或 `-P` 告诉 `rsync` 命令保留部分下载的文件并恢复进度。 或者,我们也可以使用以下命令通过 SSH 恢复部分传输的文件。 @@ -76,26 +76,24 @@ $ rsync -avP Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192. rsync -av --partial Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/ ``` -就是这样了。你现在知道如何使用 rsync 命令恢复取消、中断和部分下载的文件。正如你所看到的,它也不是那么难。如果两个系统都安装了 rsync,我们可以轻松地通过上面描述的那样恢复复制进度。 +就是这样了。你现在知道如何使用 `rsync` 命令恢复取消、中断和部分下载的文件。正如你所看到的,它也不是那么难。如果两个系统都安装了 `rsync`,我们可以轻松地通过上面描述的那样恢复复制的进度。 -如果你觉得本教程有帮助,请在你的社交、专业网络上分享,并支持 OSTechNix。还有更多的好东西。敬请关注! +如果你觉得本教程有帮助,请在你的社交、专业网络上分享,并支持我们。还有更多的好东西。敬请关注! 干杯! 
- - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/how-to-resume-partially-downloaded-or-transferred-files-using-rsync/ 作者:[SK][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com/author/sk/ [1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2016/02/scp.png () +[2]:http://www.ostechnix.com/wp-content/uploads/2016/02/scp.png [3]:/cdn-cgi/l/email-protection -[4]:http://www.ostechnix.com/wp-content/uploads/2016/02/rsync.png () +[4]:http://www.ostechnix.com/wp-content/uploads/2016/02/rsync.png From f238012f8d82a756af913bcefa1252480fef1a6c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 2 May 2018 21:36:53 +0800 Subject: [PATCH 068/102] PUB:20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md @geekpi --- ... 
To Resume Partially Transferred Files Over SSH Using Rsync.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md (100%) diff --git a/translated/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md b/published/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md similarity index 100% rename from translated/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md rename to published/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md From a4dec2d9ee0deafc3935c92554a0c07156f45562 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 2 May 2018 21:53:37 +0800 Subject: [PATCH 069/102] PRF:20180122 A Simple Command-line Snippet Manager.md @MjSeven --- ...2 A Simple Command-line Snippet Manager.md | 117 +++++++++++------- 1 file changed, 72 insertions(+), 45 deletions(-) diff --git a/translated/tech/20180122 A Simple Command-line Snippet Manager.md b/translated/tech/20180122 A Simple Command-line Snippet Manager.md index f9e827024e..dd18a8cb10 100644 --- a/translated/tech/20180122 A Simple Command-line Snippet Manager.md +++ b/translated/tech/20180122 A Simple Command-line Snippet Manager.md @@ -1,12 +1,13 @@ -一个简单的命令行片段管理器 +Pet:一个简单的命令行片段管理器 ===== ![](https://www.ostechnix.com/wp-content/uploads/2018/01/pet-6-720x340.png) -我们不可能记住所有的命令,对吧?是的。除了经常使用的命令之外,我们几乎不可能记住一些很少使用的长命令。这就是为什么需要一些外部工具来帮助我们在需要时找到命令。在过去,我们已经审查了两个有用的工具,名为 "Bashpast" 和 "Keep"。使用 Bashpast,我们可以轻松地为 Linux 命令添加书签,以便更轻松地重复调用。而且,Keep 实用程序可以用来在终端中保留一些重要且冗长的命令,以便你可以按需使用它们。今天,我们将看到该系列中的另一个工具,以帮助你记住命令。现在向 "Pet" 打个招呼,这是一个用 Go 语言编写的简单的命令行代码管理器。 + +我们不可能记住所有的命令,对吧?是的。除了经常使用的命令之外,我们几乎不可能记住一些很少使用的长命令。这就是为什么需要一些外部工具来帮助我们在需要时找到命令。在过去,我们已经点评了两个有用的工具,名为 “Bashpast” 和 “Keep”。使用 Bashpast,我们可以轻松地为 Linux 命令添加书签,以便更轻松地重复调用。而 Keep 实用程序可以用来在终端中保留一些重要且冗长的命令,以便你可以随时使用它们。今天,我们将看到该系列中的另一个工具,以帮助你记住命令。现在让我们认识一下 “Pet”,这是一个用 Go 
语言编写的简单的命令行代码管理器。 使用 Pet,你可以: - * 注册/添加你重要的,冗长和复杂的命令片段。 + * 注册/添加你重要的、冗长和复杂的命令片段。 * 以交互方式来搜索保存的命令片段。 * 直接运行代码片段而无须一遍又一遍地输入。 * 轻松编辑保存的代码片段。 @@ -14,68 +15,78 @@ * 在片段中使用变量 * 还有很多特性即将来临。 - -#### 安装 Pet 命令行接口代码管理器 +### 安装 Pet 命令行接口代码管理器 由于它是用 Go 语言编写的,所以确保你在系统中已经安装了 Go。 安装 Go 后,从 [**Pet 发布页面**][3] 获取最新的二进制文件。 + ``` wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_amd64.zip ``` 对于 32 位计算机: + ``` wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_386.zip ``` 解压下载的文件: + ``` unzip pet_0.2.4_linux_amd64.zip ``` 对于 32 位: + ``` unzip pet_0.2.4_linux_386.zip ``` -将 pet 二进制文件复制到 PATH(即 **/usr/local/bin** 之类的)。 +将 `pet` 二进制文件复制到 PATH(即 `/usr/local/bin` 之类的)。 + ``` sudo cp pet /usr/local/bin/ ``` 最后,让它可以执行: + ``` sudo chmod +x /usr/local/bin/pet ``` 如果你使用的是基于 Arch 的系统,那么你可以使用任何 AUR 帮助工具从 AUR 安装它。 -使用 [**Pacaur**][4]: +使用 [Pacaur][4]: + ``` pacaur -S pet-git ``` -使用 [**Packer**][5]: +使用 [Packer][5]: + ``` packer -S pet-git ``` -使用 [**Yaourt**][6]: +使用 [Yaourt][6]: + ``` yaourt -S pet-git ``` -使用 [**Yay** :][7] +使用 [Yay][7]: + ``` yay -S pet-git ``` -此外,你需要安装 **[fzf][8]** 或 [**peco**][9] 工具已启用交互式搜索。请参阅官方 GitHub 链接了解如何安装这些工具。 +此外,你需要安装 [fzf][8] 或 [peco][9] 工具以启用交互式搜索。请参阅官方 GitHub 链接了解如何安装这些工具。 -#### 用法 +### 用法 + +运行没有任何参数的 `pet` 来查看可用命令和常规选项的列表。 -运行没有任何参数的 'pet' 来查看可用命令和常规选项的列表。 ``` $ pet pet - Simple command-line snippet manager. @@ -103,21 +114,23 @@ Use "pet [command] --help" for more information about a command. 
``` 要查看特定命令的帮助部分,运行: + ``` $ pet [command] --help ``` -**配置 Pet** - -它只适用于默认值。但是,你可以更改默认目录来保存片段,选择要使用的选择器 (fzf 或 peco),默认文本编辑器编辑片段,添加 GIST id 详细信息等。 +#### 配置 Pet +默认配置其实工作的挺好。但是,你可以更改保存片段的默认目录,选择要使用的选择器(fzf 或 peco),编辑片段的默认文本编辑器,添加 GIST id 详细信息等。 要配置 Pet,运行: + ``` $ pet configure ``` -该命令将在默认的文本编辑器中打开默认配置(例如我是 **vim**),根据你的要求更改或编辑特定值。 +该命令将在默认的文本编辑器中打开默认配置(例如我是 vim),根据你的要求更改或编辑特定值。 + ``` [General] snippetfile = "/home/sk/.config/pet/snippet.toml" @@ -133,24 +146,27 @@ $ pet configure ~ ``` -**创建片段** +#### 创建片段 为了创建一个新的片段,运行: + ``` $ pet new ``` -添加命令和描述,然后按下 ENTER 键保存它。 +添加命令和描述,然后按下回车键保存它。 + ``` Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9' Description> Remove numbers from output. ``` -[![][10]][11] +![][11] -这是一个简单的命令,用于从 echo 命令输出中删除所有数字。你可以很轻松地记住它。但是,如果你很少使用它,几天后你可能会完全忘记它。当然,我们可以使用 "CTRL+r" 搜索历史记录,但 "Pet" 会更容易。另外,Pet 可以帮助你添加任意数量的条目。 +这是一个简单的命令,用于从 `echo` 命令输出中删除所有数字。你可以很轻松地记住它。但是,如果你很少使用它,几天后你可能会完全忘记它。当然,我们可以使用 `CTRL+R` 搜索历史记录,但 Pet 会更容易。另外,Pet 可以帮助你添加任意数量的条目。 + +另一个很酷的功能是我们可以轻松添加以前的命令。为此,在你的 `.bashrc` 或 `.zshrc` 文件中添加以下行。 -另一个很酷的功能是我们可以轻松添加以前的命令。为此,在你的 **.bashrc** 或 **.zshrc** 文件中添加以下行。 ``` function prev() { PREV=$(fc -lrn | head -n 1) @@ -159,46 +175,53 @@ function prev() { ``` 执行以下命令来使保存的更改生效。 + ``` source .bashrc ``` -或者 +或者: + ``` source .zshrc ``` 现在,运行任何命令,例如: + ``` $ cat Documents/ostechnix.txt | tr '|' '\n' | sort | tr '\n' '|' | sed "s/.$/\\n/g" ``` -要添加上述命令,你不必使用 "pet new" 命令。只需要: +要添加上述命令,你不必使用 `pet new` 命令。只需要: + ``` $ prev ``` -将说明添加到命令代码片段中,然后按下 ENTER 键保存。 +将说明添加到该命令代码片段中,然后按下回车键保存。 ![][12] -**片段列表** +#### 片段列表 要查看保存的片段,运行: + ``` $ pet list ``` ![][13] -**编辑片段** +#### 编辑片段 + +如果你想编辑代码片段的描述或命令,运行: -如果你想编辑描述或代码片段的命令,运行: ``` $ pet edit ``` 这将在你的默认文本编辑器中打开所有保存的代码片段,你可以根据需要编辑或更改片段。 + ``` [[snippets]] description = "Remove numbers from output." 
@@ -211,33 +234,35 @@ $ pet edit output = "" ``` -**在片段中使用标签** +#### 在片段中使用标签 + +要将标签用于判断,使用下面的 `-t` 标志。 -要将标签用于判断,使用下面的 **-t** 标志。 ``` $ pet new -t Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9 Description> Remove numbers from output. Tag> tr command examples - ``` -**执行片段** +#### 执行片段 要执行一个保存的片段,运行: + ``` $ pet exec ``` -从列表中选择你要运行的代码段,然后按 ENTER 键来运行它: +从列表中选择你要运行的代码段,然后按回车键来运行它: ![][14] 记住你需要安装 fzf 或 peco 才能使用此功能。 -**寻找片段** +#### 寻找片段 如果你有很多要保存的片段,你可以使用字符串或关键词如 below.qjz 轻松搜索它们。 + ``` $ pet search ``` @@ -246,40 +271,43 @@ $ pet search ![][15] -**同步片段** +#### 同步片段 -首先,你需要获取访问令牌。转到此链接 并创建访问令牌(只需要 "gist" 范围)。 +首先,你需要获取访问令牌。转到此链接 并创建访问令牌(只需要 “gist” 范围)。 使用以下命令来配置 Pet: + ``` $ pet configure ``` -将标记设置到 **[Gist]** 字段中的 **access_token**。 +将令牌设置到 `[Gist]` 字段中的 `access_token`。 设置完成后,你可以像下面一样将片段上传到 Gist。 + ``` $ pet sync -u Gist ID: 2dfeeeg5f17e1170bf0c5612fb31a869 Upload success - ``` -你也可以在其他 PC 上下载片段。为此,编辑配置文件并在 **[Gist]** 中将 **Gist ID** 设置为 **gist_id**。 +你也可以在其他 PC 上下载片段。为此,编辑配置文件并在 `[Gist]` 中将 `gist_id` 设置为 GIST id。 + +之后,使用以下命令下载片段: -之后,下载片段使用以下命令: ``` $ pet sync Download success - ``` -获取更多细节,参阅 help 选项: +获取更多细节,参阅帮助选项: + ``` pet -h ``` -或者 +或者: + ``` pet [command] -h ``` @@ -289,14 +317,13 @@ pet [command] -h 干杯! 
- -------------------------------------------------------------------------------- via: https://www.ostechnix.com/pet-simple-command-line-snippet-manager/ 作者:[SK][a] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 29715fe9962dd4cd8e7802297e684f2c138e93d0 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 2 May 2018 21:54:02 +0800 Subject: [PATCH 070/102] PUB:20180122 A Simple Command-line Snippet Manager.md @MjSeven https://linux.cn/article-9600-1.html --- .../20180122 A Simple Command-line Snippet Manager.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180122 A Simple Command-line Snippet Manager.md (100%) diff --git a/translated/tech/20180122 A Simple Command-line Snippet Manager.md b/published/20180122 A Simple Command-line Snippet Manager.md similarity index 100% rename from translated/tech/20180122 A Simple Command-line Snippet Manager.md rename to published/20180122 A Simple Command-line Snippet Manager.md From cd662c55d38d62f40c98954b3579c26d78be5966 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 2 May 2018 22:18:04 +0800 Subject: [PATCH 071/102] PRF:20180403 10 fundamental commands for new Linux users.md @MjSeven --- ...undamental commands for new Linux users.md | 109 ++++++++++-------- 1 file changed, 59 insertions(+), 50 deletions(-) diff --git a/translated/tech/20180403 10 fundamental commands for new Linux users.md b/translated/tech/20180403 10 fundamental commands for new Linux users.md index 671ef52d49..a423d7671e 100644 --- a/translated/tech/20180403 10 fundamental commands for new Linux users.md +++ b/translated/tech/20180403 10 fundamental commands for new Linux users.md @@ -1,7 +1,10 @@ -对于 Linux 新手来说 10 个基础的命令 +每个 Linux 新手都应该知道的 10 个命令 ===== +> 通过这 10 个基础命令开始掌握 Linux 命令行。 + 
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah) + 你可能认为你是 Linux 新手,但实际上并不是。全球互联网用户有 [3.74 亿][1],他们都以某种方式使用 Linux,因为 Linux 服务器占据了互联网的 90%。大多数现代路由器运行 Linux 或 Unix,[TOP500 超级计算机][2] 也依赖于 Linux。如果你拥有一台 Android 智能手机,那么你的操作系统就是由 Linux 内核构建的。 换句话说,Linux 无处不在。 @@ -10,118 +13,124 @@ 下面是你需要知道的基本的 Linux 命令。每一个都很简单,也很容易记住。换句话说,你不必成为比尔盖茨就能理解它们。 -### 1\. ls +### 1、 ls -你可能会想:“这是什么东西?”不,那不是一个印刷错误 - 我真的打算输入一个小写的 l。`ls`,或者 “list,” 是你需要知道的使用 Linux CLI 的第一个命令。这个 list 命令在 Linux 终端中运行,以显示在相应文件系统下归档的所有主要目录。例如,这个命令: +你可能会想:“这是(is)什么东西?”不,那不是一个印刷错误 —— 我真的打算输入一个小写的 l。`ls`,或者说 “list”, 是你需要知道的使用 Linux CLI 的第一个命令。这个 list 命令在 Linux 终端中运行,以显示在存放在相应文件系统下的所有主要目录。例如,这个命令: -`ls /applications` +``` +ls /applications +``` -显示存储在 applications 文件夹下的每个文件夹,你将使用它来查看文件、文件夹和目录。 +显示存储在 `applications` 文件夹下的每个文件夹,你将使用它来查看文件、文件夹和目录。 显示所有隐藏的文件都可以使用命令 `ls -a`。 -### 2\. cd +### 2、 cd -这个命令是你用来跳转(或“更改”)到一个目录的。它指导你如何从一个文件夹导航到另一个文件夹。假设你位于 Downloads 文件夹中,但你想到名为 Gym Playlist 的文件夹中,简单地输入 `cd Gym Playlist` 将不起作用,(译注:这应该是 Gym 目录下的 Playlist 文件夹)因为 shell 不会识别它,并会报告你正在查找的文件夹不存在。要跳转到那个文件夹,你需要包含一个反斜杠。改命令如下所示: +这个命令是你用来跳转(或“更改”)到一个目录的。它指导你如何从一个文件夹导航到另一个文件夹。假设你位于 `Downloads` 文件夹中,但你想到名为 `Gym Playlist` 的文件夹中,简单地输入 `cd Gym Playlist` 将不起作用,因为 shell 不会识别它,并会报告你正在查找的文件夹不存在(LCTT 译注:这是因为目录名中有空格)。要跳转到那个文件夹,你需要包含一个反斜杠。改命令如下所示: -`cd Gym\ Playlist` +``` +cd Gym\ Playlist +``` -要从当前文件夹返回到上一个文件夹,你可以输入 `cd ..` 后跟着文件夹名称(译注:返回上一层目录不应该是 cd .. ?)。把这两个点想象成一个后退按钮。 +要从当前文件夹返回到上一个文件夹,你可以在该文件夹输入 `cd ..`。把这两个点想象成一个后退按钮。 -### 3\. 
mv +### 3、 mv 该命令将文件从一个文件夹转移到另一个文件夹;`mv` 代表“移动”。你可以使用这个简单的命令,就像你把一个文件拖到 PC 上的一个文件夹一样。 -例如,如果我想创建一个名为 `testfile` 的文件来演示所有基本的 Linux 命令,并且我想将它移动到我的 Documents 文件夹中,我将输入这个命令: +例如,如果我想创建一个名为 `testfile` 的文件来演示所有基本的 Linux 命令,并且我想将它移动到我的 `Documents` 文件夹中,我将输入这个命令: -`mv /home/sam/testfile /home/sam/Documents/` +``` +mv /home/sam/testfile /home/sam/Documents/ +``` 命令的第一部分(`mv`)说我想移动一个文件,第二部分(`home/sam/testfile`)表示我想移动的文件,第三部分(`/home/sam/Documents/`)表示我希望传输文件的位置。 -### 4\. 快捷键 +### 4、 快捷键 好吧,这不止一个命令,但我忍不住把它们都包括进来。为什么?因为它们能节省时间并避免经历头痛。 -`CTRL+K` 从光标处剪切文本直至本行结束 - -`CTRL+Y` 粘贴文本 - -`CTRL+E` 将光标移到本行的末尾 - -`CTRL+A` 将光标移动到本行的开头 - -`ALT+F` 跳转到下一个空格处 - -`ALT+B` 回到之前的空格处 - -`ALT+Backspace` 删除前一个词 - -`CTRL+W` 将光标前一个词剪贴 - -`Shift+Insert` 将文本粘贴到终端中 - -`Ctrl+D` 注销 +- `CTRL+K` 从光标处剪切文本直至本行结束 +- `CTRL+Y` 粘贴文本 +- `CTRL+E` 将光标移到本行的末尾 +- `CTRL+A` 将光标移动到本行的开头 +- `ALT+F` 跳转到下一个空格处 +- `ALT+B` 回到前一个空格处 +- `ALT+Backspace` 删除前一个词 +- `CTRL+W` 剪切光标前一个词 +- `Shift+Insert` 将文本粘贴到终端中 +- `Ctrl+D` 注销 这些命令在许多方面都能派上用场。例如,假设你在命令行文本中拼错了一个单词: -`sudo apt-get intall programname` +``` +sudo apt-get intall programname +``` -你可能注意到 "insatll" 拼写错了,因此该命令无法工作。但是快捷键可以让你分容易回去修复它。如果我的光标在这一行的末尾,我可以按下两次 `ALT+B` 来将光标移动到下面用 `^` 符号标记的地方: +你可能注意到 `install` 拼写错了,因此该命令无法工作。但是快捷键可以让你很容易回去修复它。如果我的光标在这一行的末尾,我可以按下两次 `ALT+B` 来将光标移动到下面用 `^` 符号标记的地方: -`sudo apt-get^intall programname` +``` +sudo apt-get^intall programname +``` 现在,我们可以快速地添加字母 `s` 来修复 `install`,十分简单! -### 5\. mkdir +### 5、 mkdir 这是你用来在 Linux 环境下创建目录或文件夹的命令。例如,如果你像我一样喜欢 DIY,你可以输入 `mkdir DIY` 为你的 DIY 项目创建一个目录。 -### 6\. at +### 6、 at 如果你想在特定时间运行 Linux 命令,你可以将 `at` 添加到语句中。语法是 `at` 后面跟着你希望命令运行的日期和时间,然后命令提示符变为 `at>`,这样你就可以输入在上面指定的时间运行的命令。 例如: -`at 4:08 PM Sat` -`at> cowsay 'hello'` -`at> CTRL+D` +``` +at 4:08 PM Sat +at> cowsay 'hello' +at> CTRL+D +``` -这将会在周六下午 4:08 运行 cowsay 程序。 +这将会在周六下午 4:08 运行 `cowsay` 程序。 -### 7\. rmdir +### 7、 rmdir 这个命令允许你通过 Linux CLI 删除一个目录。例如: -`rmdir testdirectory` +``` +rmdir testdirectory +``` 请记住,这个命令不会删除里面有文件的目录。这只在删除空目录时才起作用。 -### 8\. 
rm +### 8、 rm 如果你想删除文件,`rm` 命令就是你想要的。它可以删除文件和目录。要删除一个文件,键入 `rm testfile`,或者删除一个目录和里面的文件,键入 `rm -r`。 -### 9\. touch +### 9、 touch -`touch` 命令,也就是所谓的 "make file 命令",允许你使用 Linux CLI 创建新的、空的文件。很像 `mkdir` 创建目录,`touch` 会创建文件。例如,`touch testfile` 将会创建一个名为 testfile 的空文件。 +`touch` 命令,也就是所谓的 “make file 的命令”,允许你使用 Linux CLI 创建新的、空的文件。很像 `mkdir` 创建目录,`touch` 会创建文件。例如,`touch testfile` 将会创建一个名为 testfile 的空文件。 -### 10\. locate +### 10、 locate 这个命令是你在 Linux 系统中用来查找文件的命令。就像在 Windows 中搜索一样,如果你忘了存储文件的位置或它的名字,这是非常有用的。 例如,如果你有一个关于区块链用例的文档,但是你忘了标题,你可以输入 `locate -blockchain` 或者通过用星号分隔单词来查找 "blockchain use cases",或者星号(`*`)。例如: -`locate -i*blockchain*use*cases*` +``` +locate -i*blockchain*use*cases* +``` 还有很多其他有用的 Linux CLI 命令,比如 `pkill` 命令,如果你开始关机但是你意识到你并不想这么做,那么这条命令很棒。但是这里描述的 10 个简单而有用的命令是你开始使用 Linux 命令行所需的基本知识。 - -------------------------------------------------------------------------------- via: https://opensource.com/article/18/4/10-commands-new-linux-users 作者:[Sam Bocetta][a] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e1c0dc1df7f6960d13571cb8c3fd0cc6dbb15c4e Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 2 May 2018 22:18:29 +0800 Subject: [PATCH 072/102] PUB:20180403 10 fundamental commands for new Linux users.md @MjSeven https://linux.cn/article-9601-1.html --- .../20180403 10 fundamental commands for new Linux users.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180403 10 fundamental commands for new Linux users.md (100%) diff --git a/translated/tech/20180403 10 fundamental commands for new Linux users.md b/published/20180403 10 fundamental commands for new Linux users.md similarity index 100% rename from translated/tech/20180403 10 fundamental commands for new Linux users.md rename to published/20180403 10 fundamental commands for new Linux users.md From 
d38958e23069c54163bcd1423eb8b0effa3f443c Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 3 May 2018 08:55:05 +0800 Subject: [PATCH 073/102] translated --- ...0420 A Perl module for better debugging.md | 82 ------------------- ...0420 A Perl module for better debugging.md | 80 ++++++++++++++++++ 2 files changed, 80 insertions(+), 82 deletions(-) delete mode 100644 sources/tech/20180420 A Perl module for better debugging.md create mode 100644 translated/tech/20180420 A Perl module for better debugging.md diff --git a/sources/tech/20180420 A Perl module for better debugging.md b/sources/tech/20180420 A Perl module for better debugging.md deleted file mode 100644 index ca9f8bd8fd..0000000000 --- a/sources/tech/20180420 A Perl module for better debugging.md +++ /dev/null @@ -1,82 +0,0 @@ -translating---geekpi - -A Perl module for better debugging -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs) -It's occasionally useful to have a block of Perl code that you use only for debugging or development tweaking. That's fine, but having blocks like this can be expensive to performance, particularly if the decision whether to execute it is made at runtime. - -[Curtis "Ovid" Poe][1] recently wrote a module that can help with this problem: [Keyword::DEVELOPMENT][2]. The module utilizes Keyword::Simple and the pluggable keyword architecture introduced in Perl 5.012 to create a new keyword: DEVELOPMENT. It uses the value of the PERL_KEYWORD_DEVELOPMENT environment variable to determine whether or not a block of code is to be executed. - -Using it couldn't be easier: -``` -use Keyword::DEVELOPMENT; - -        - -sub doing_my_big_loop { - -    my $self = shift; - -    DEVELOPMENT { - -        # insert expensive debugging code here! - -    } - -}Keyworddoing_my_big_loopDEVELOPMENT - -``` - -At compile time, the code inside the DEVELOPMENT block is optimized away and simply doesn't exist. 
- -Do you see the advantage here? Set up the PERL_KEYWORD_DEVELOPMENT environment variable to be true on your sandbox and false on your production environment, and valuable debugging tools can be committed to your code repo, always there when you need them. - -You could also use this module, in the absence of a more evolved configuration management system, to handle variations in settings between production and development or test environments: -``` -sub connect_to_my_database { - -        - -    my $dsn = "dbi:mysql:productiondb"; - -    my $user = "db_user"; - -    my $pass = "db_pass"; - -    - -    DEVELOPMENT { - -        # Override some of that config information - -        $dsn = "dbi:mysql:developmentdb"; - -    } - -    - -    my $db_handle = DBI->connect($dsn, $user, $pass); - -}connect_to_my_databaseDEVELOPMENTDBI - -``` - -Later enhancement to this snippet would have you reading in configuration information from somewhere else, perhaps from a YAML or INI file, but I hope you see the utility here. - -I looked at the source code for Keyword::DEVELOPMENT and spent about a half hour wondering, "Gosh, why didn't I think of that?" Once Keyword::Simple is installed, the module that Curtis has given us is surprisingly simple. It's an elegant solution to something I've needed in my own coding practice for a long time. 
- -------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/perl-module-debugging-code - -作者:[Ruth Holloway][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/druthb -[1]:https://metacpan.org/author/OVID -[2]:https://metacpan.org/pod/release/OVID/Keyword-DEVELOPMENT-0.04/lib/Keyword/DEVELOPMENT.pm diff --git a/translated/tech/20180420 A Perl module for better debugging.md b/translated/tech/20180420 A Perl module for better debugging.md new file mode 100644 index 0000000000..c2f599f8f5 --- /dev/null +++ b/translated/tech/20180420 A Perl module for better debugging.md @@ -0,0 +1,80 @@ +一个更好的调试 Perl 模块 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs) +只有在调试或开发调整时才使用的 Perl 代码块有时会很有用。这很好,但是这样的代码块可能会对性能产生很大的影响,尤其是在运行时才决定是否执行它的时候。 + +[Curtis “Ovid” Poe][1] 编写了一个可以帮助解决这个问题的模块:[Keyword::DEVELOPMENT][2]。该模块利用 Keyword::Simple 和 Perl 5.012 中引入的可插入关键字架构来创建新的关键字:DEVELOPMENT。它使用 PERL_KEYWORD_DEVELOPMENT 环境变量的值来确定是否要执行一段代码。 + +使用它再简单不过了: +``` +use Keyword::DEVELOPMENT; + +        + +sub doing_my_big_loop { + +    my $self = shift; + +    DEVELOPMENT { + +        # insert expensive debugging code here! 
+ +    } + +} + +``` + +在编译时,DEVELOPMENT 块内的代码会被直接优化掉,根本就不存在。 + +你看到好处了么?在沙盒中将 PERL_KEYWORD_DEVELOPMENT 环境变量设置为 true,在生产环境设为 false,这样就可以将有价值的调试工具提交到你的代码库中,在你需要的时候随时可用。 + +在缺乏高级配置管理的系统中,你也可以使用此模块来处理生产和开发或测试环境之间的设置差异: +``` +sub connect_to_my_database { + +        + +    my $dsn = "dbi:mysql:productiondb"; + +    my $user = "db_user"; + +    my $pass = "db_pass"; + +    + +    DEVELOPMENT { + +        # Override some of that config information + +        $dsn = "dbi:mysql:developmentdb"; + +    } + +    + +    my $db_handle = DBI->connect($dsn, $user, $pass); + +} + +``` + +稍后对此代码片段的增强是从其他地方,比如 YAML 或 INI 文件中读取配置信息,但我希望你能从中看到它的用处。 + +我查看了 Keyword::DEVELOPMENT 的源码,花了大约半小时,心想:“天哪,我为什么没有想到这个?”安装 Keyword::Simple 后,Curtis 给我们的模块就非常简单了。这是我长期以来在自己的编码实践中需要的一个优雅解决方案。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/4/perl-module-debugging-code + +作者:[Ruth Holloway][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/druthb +[1]:https://metacpan.org/author/OVID +[2]:https://metacpan.org/pod/release/OVID/Keyword-DEVELOPMENT-0.04/lib/Keyword/DEVELOPMENT.pm From 69ad7fdea4ab1fd9dfc8cb42cb516d419cd78adf Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 3 May 2018 08:58:45 +0800 Subject: [PATCH 074/102] translating --- .../20180425 Enhance your Python with an interactive shell.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180425 Enhance your Python with an interactive shell.md b/sources/tech/20180425 Enhance your Python with an interactive shell.md index 19826d142b..4d1f6a9a9d 100644 --- a/sources/tech/20180425 Enhance your Python with an interactive shell.md +++ b/sources/tech/20180425 Enhance your Python with 
an interactive shell.md @@ -1,3 +1,5 @@ +translating---geekpi + Enhance your Python with an interactive shell ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/03/python-shells-816x345.jpg) From f500b3328b9b23bcac5bce994707139dea8814e1 Mon Sep 17 00:00:00 2001 From: songshunqiang Date: Thu, 3 May 2018 10:35:23 +0800 Subject: [PATCH 075/102] submit tech/20180417 How to do math on the Linux command line.md --- ...ow to do math on the Linux command line.md | 349 ------------------ ...ow to do math on the Linux command line.md | 344 +++++++++++++++++ 2 files changed, 344 insertions(+), 349 deletions(-) delete mode 100644 sources/tech/20180417 How to do math on the Linux command line.md create mode 100644 translated/tech/20180417 How to do math on the Linux command line.md diff --git a/sources/tech/20180417 How to do math on the Linux command line.md b/sources/tech/20180417 How to do math on the Linux command line.md deleted file mode 100644 index a6e37cf19e..0000000000 --- a/sources/tech/20180417 How to do math on the Linux command line.md +++ /dev/null @@ -1,349 +0,0 @@ -pinewall translating - -How to do math on the Linux command line -====== - -![](https://images.techhive.com/images/article/2014/12/math_blackboard-100534564-large.jpg) -Can you do math on the Linux command line? You sure can! In fact, there are quite a few commands that can make the process easy and some you might even find interesting. Let's look at some very useful commands and syntax for command line math. - -### expr - -First and probably the most obvious and commonly used command for performing mathematical calculations on the command line is the **expr** (expression) command. It can manage addition, subtraction, division, and multiplication. It can also be used to compare numbers. 
Here are some examples: - -#### Incrementing a variable -``` -$ count=0 -$ count=`expr $count + 1` -$ echo $count -1 - -``` - -#### Performing a simple calculations -``` -$ expr 11 + 123 -134 -$ expr 134 / 11 -12 -$ expr 134 - 11 -123 -$ expr 11 * 123 -expr: syntax error <== oops! -$ expr 11 \* 123 -1353 -$ expr 20 % 3 -2 - -``` - -Notice that you have to use a \ character in front of * to avoid the syntax error. The % operator is for modulo calculations. - -Here's a slightly more complex example: -``` -participants=11 -total=156 -share=`expr $total / $participants` -remaining=`expr $total - $participants \* $share` -echo $share -14 -echo $remaining -2 - -``` - -If we have 11 participants in some event and 156 prizes to distribute, each participant's fair share of the take is 14, leaving 2 in the pot. - -#### Making comparisons - -Now let's look at the logic for comparisons. These statements may look a little odd at first. They are not setting values, but only comparing the numbers. What **expr** is doing in the examples below is determining whether the statements are true. If the result is 1, the statement is true; otherwise, it's false. -``` -$ expr 11 = 11 -1 -$ expr 11 = 12 -0 - -``` - -Read them as "Does 11 equal 11?" and "Does 11 equal 12?" and you'll get used to how this works. Of course, no one would be asking if 11 equals 11 on the command line, but they might ask if $age equals 11. -``` -$ age=11 -$ expr $age = 11 -1 - -``` - -If you put the numbers in quotes, you'd actually be doing a string comparison rather than a numeric one. -``` -$ expr "11" = "11" -1 -$ expr "eleven" = "11" -0 - -``` - -In the following examples, we're asking whether 10 is greater than 5 and, then, whether it's greater than 99. -``` -$ expr 10 \> 5 -1 -$ expr 10 \> 99 -0 - -``` - -Of course, having true comparisons resulting in 1 and false resulting in 0 goes against what we generally expect on Linux systems. 
The example below shows that using **expr** in this kind of context doesn't work because **if** works with the opposite orientation (0=true). -``` -#!/bin/bash - -echo -n "Cost to us> " -read cost -echo -n "Price we're asking> " -read price - -if [ `expr $price \> $cost` ]; then - echo "We make money" -else - echo "Don't sell it" -fi - -``` - -Now, let's run this script: -``` -$ ./checkPrice -Cost to us> 11.50 -Price we're asking> 6 -We make money - -``` - -That sure isn't going to help with sales! With a small change, this would work as we'd expect: -``` -#!/bin/bash - -echo -n "Cost to us> " -read cost -echo -n "Price we're asking> " -read price - -if [ `expr $price \> $cost` == 1 ]; then - echo "We make money" -else - echo "Don't sell it" -fi - -``` - -### factor - -The **factor** command works just like you'd probably expect. You feed it a number, and it tells you what its factors are. -``` -$ factor 111 -111: 3 37 -$ factor 134 -134: 2 67 -$ factor 17894 -17894: 2 23 389 -$ factor 1987 -1987: 1987 - -``` - -NOTE: The factor command didn't get very far on factoring that last value because 1987 is a **prime number**. - -### jot - -The **jot** command allows you to create a list of numbers. Provide it with the number of values you want to see and the number that you want to start with. -``` -$ jot 8 10 -10 -11 -12 -13 -14 -15 -16 -17 - -``` - -You can also use **jot** like this. Here we're asking it to decrease the numbers by telling it we want to stop when we get to 2: -``` -$ jot 8 10 2 -10 -9 -8 -7 -5 -4 -3 -2 - -``` - -The **jot** command can be useful if you want to iterate through a series of numbers to create a list for some other purpose. -``` -$ for i in `jot 7 17`; do echo April $i; done -April 17 -April 18 -April 19 -April 20 -April 21 -April 22 -April 23 - -``` - -### bc - -The **bc** command is probably one of the best tools for doing calculations on the command line. 
Enter the calculation that you want performed, and pipe it to the command like this: -``` -$ echo "123.4+5/6-(7.89*1.234)" | bc -113.664 - -``` - -Notice that **bc** doesn't shy away from precision and that the string you need to enter is fairly straightforward. It can also make comparisons, handle Booleans, and calculate square roots, sines, cosines, tangents, etc. -``` -$ echo "sqrt(256)" | bc -16 -$ echo "s(90)" | bc -l -.89399666360055789051 - -``` - -In fact, **bc** can even calculate pi. You decide how many decimal points you want to see: -``` -$ echo "scale=5; 4*a(1)" | bc -l -3.14156 -$ echo "scale=10; 4*a(1)" | bc -l -3.1415926532 -$ echo "scale=20; 4*a(1)" | bc -l -3.14159265358979323844 -$ echo "scale=40; 4*a(1)" | bc -l -3.1415926535897932384626433832795028841968 - -``` - -And **bc** isn't just for receiving data through pipes and sending answers back. You can also start it interactively and enter the calculations you want it to perform. Setting the scale (as shown below) determines how many decimal places you'll see. -``` -$ bc -bc 1.06.95 -Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc. -This is free software with ABSOLUTELY NO WARRANTY. -For details type `warranty'. -scale=2 -3/4 -.75 -2/3 -.66 -quit - -``` - -Using **bc** , you can also convert numbers between different bases. The **obase** setting determines the output base. -``` -$ bc -bc 1.06.95 -Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc. -This is free software with ABSOLUTELY NO WARRANTY. -For details type `warranty'. -obase=16 -16 <=== entered -10 <=== response -256 <=== entered -100 <=== response -quit - -``` - -One of the easiest ways to convert between hex and decimal is to use **bc** like this: -``` -$ echo "ibase=16; F2" | bc -242 -$ echo "obase=16; 242" | bc -F2 - -``` - -In the first example above, we're converting from hex to decimal by setting the input base (ibase) to hex (base 16). 
In the second, we're doing the reverse by setting the outbut base (obase) to hex. - -### Easy bash math - -With sets of double-parentheses, we can do some easy math in bash. In the examples below, we create a variable and give it a value and then perform addition, decrement the result, and then square the remaining value. -``` -$ ((e=11)) -$ (( e = e + 7 )) -$ echo $e -18 - -$ ((e--)) -$ echo $e -17 - -$ ((e=e**2)) -$ echo $e -289 - -``` - -The arithmetic operators allow you to: -``` -+ - Add and subtract -++ -- Increment and decrement -* / % Multiply, divide, find remainder -^ Get exponent - -``` - -You can also use both logical and boolean operators: -``` -$ ((x=11)); ((y=7)) -$ if (( x > y )); then -> echo "x > y" -> fi -x > y - -$ ((x=11)); ((y=7)); ((z=3)) -$ if (( x > y )) >> (( y > z )); then -> echo "letters roll downhill" -> fi -letters roll downhill - -``` - -or if you prefer ... -``` -$ if [ x > y ] << [ y > z ]; then echo "letters roll downhill"; fi -letters roll downhill - -``` - -Now let's raise 2 to the 3rd power: -``` -$ echo "2 ^ 3" -2 ^ 3 -$ echo "2 ^ 3" | bc -8 - -``` - -### Wrap-up - -There are sure a lot of different ways to work with numbers and perform calculations on the command line on Linux systems. I hope you picked up a new trick or two by reading this post. - -Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind. 
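As a quick recap, several of the tools above can be combined in one short script; the prize-pool numbers are the ones used earlier, reused purely for illustration:

```shell
# Split a prize pool with expr, then use bc for the fractional part
# that expr's integer math throws away.
total=156
participants=11

share=$(expr $total / $participants)          # integer division: 14
percent=$(echo "scale=2; 100 * $share / $total" | bc)
echo "each share: $share ($percent% of the pool)"
echo "dividing $total among $participants leaves $(expr $total % $participants)"
```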
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3268964/linux/how-to-do-math-on-the-linux-command-line.html - -作者:[Sandra Henry-Stocker][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ -[1]:https://www.facebook.com/NetworkWorld/ -[2]:https://www.linkedin.com/company/network-world diff --git a/translated/tech/20180417 How to do math on the Linux command line.md b/translated/tech/20180417 How to do math on the Linux command line.md new file mode 100644 index 0000000000..eb4567454c --- /dev/null +++ b/translated/tech/20180417 How to do math on the Linux command line.md @@ -0,0 +1,344 @@ +Linux 命令行下的数学运算 +====== + +![](https://images.techhive.com/images/article/2014/12/math_blackboard-100534564-large.jpg) +可以在 Linux 命令行下做数学运算吗?当然可以!事实上,有不少命令可以轻松完成这些操作,其中一些甚至让你大吃一惊。让我们来学习这些有用的数学运算命令或命令语法吧。 + +### expr + +首先,对于在命令行使用命令进行数学运算,可能最容易想到、最常用的命令就是 **expr** (expression)。它可以完成四则运算,也可以用于比较大小。下面是几个例子: + +#### 变量递增 +``` +$ count=0 +$ count=`expr $count + 1` +$ echo $count +1 + +``` + +#### 完成简单运算 +``` +$ expr 11 + 123 +134 +$ expr 134 / 11 +12 +$ expr 134 - 11 +123 +$ expr 11 * 123 +expr: syntax error <== oops! 
+$ expr 11 \* 123 +1353 +$ expr 20 % 3 +2 + +``` +注意,你需要在 * 运算符之前增加 \ 符号,避免语法错误。% 运算符用于取余运算。 + +下面是一个稍微复杂的例子: +``` +participants=11 +total=156 +share=`expr $total / $participants` +remaining=`expr $total - $participants \* $share` +echo $share +14 +echo $remaining +2 + +``` + +假设某个活动中有 11 位参与者,需要颁发的奖项总数为 156,那么平均每个参与者获得 14 项奖项,额外剩余 2 个奖项。 + +#### 比较大小 + +下面让我们看一下比较大小的操作。从第一印象来看,语句看似有些怪异;这里并不是设置数值,而是进行数字大小比较。在本例中 **expr** 判断表达式是否为真:如果结果是 1,那么表达式为真;反之,表达式为假。 +``` +$ expr 11 = 11 +1 +$ expr 11 = 12 +0 + +``` +请读作"11 是否等于 11?"及"11 是否等于 12?",你很快就会习惯这种写法。当然,我们不会在命令行上执行上述比较,可能的比较是 $age 是否等于 11。 +``` +$ age=11 +$ expr $age = 11 +1 + +``` +如果将数字放到引号中间,那么你将进行字符串比较,而不是数值比较。 +``` +$ expr "11" = "11" +1 +$ expr "eleven" = "11" +0 + +``` + +在本例中,我们判断 10 是否大于 5,以及是否 大于 99。 +``` +$ expr 10 \> 5 +1 +$ expr 10 \> 99 +0 + +``` + +的确,返回 1 和 0 分别代表比较的结果为真和假,我们一般预期在 Linux 上得到这个结果。在下面的例子中,按照上述逻辑使用 **expr** 并不正确,因为 **if** 的工作原理刚好相反,即 0 代表真。 +``` +#!/bin/bash + +echo -n "Cost to us> " +read cost +echo -n "Price we're asking> " +read price + +if [ `expr $price \> $cost` ]; then + echo "We make money" +else + echo "Don't sell it" +fi + +``` + +下面,我们运行这个脚本: +``` +$ ./checkPrice +Cost to us> 11.50 +Price we're asking> 6 +We make money + +``` + +这显然与我们预期不符!我们稍微修改一下,以便使其按我们预期工作: +``` +#!/bin/bash + +echo -n "Cost to us> " +read cost +echo -n "Price we're asking> " +read price + +if [ `expr $price \> $cost` == 1 ]; then + echo "We make money" +else + echo "Don't sell it" +fi + +``` + +### factor + +**factor** 命令的功能基本与你预期相符。你给出一个数字,该命令会给出对应数字的因子。 +``` +$ factor 111 +111: 3 37 +$ factor 134 +134: 2 67 +$ factor 17894 +17894: 2 23 389 +$ factor 1987 +1987: 1987 + +``` + +注:factor 命令对于最后一个数字没有返回很多,这是因为 1987 是一个 **质数**。 + +### jot + +**jot** 命令可以创建一系列数字。给定数字总数及起始数字即可。 +``` +$ jot 8 10 +10 +11 +12 +13 +14 +15 +16 +17 + +``` + +你也可以用如下方式使用 **jot**,这里我们要求递减至数字 2。 +``` +$ jot 8 10 2 +10 +9 +8 +7 +5 +4 +3 +2 + +``` + +**jot** 可以帮你构造一系列数字组成的列表,该列表可以用于其它任务。 +``` +$ for i in `jot 7 17`; do echo April $i; done 
+April 17 +April 18 +April 19 +April 20 +April 21 +April 22 +April 23 + +``` + +### bc + +**bc** 基本上是命令行数学运算最佳工具之一。输入你想执行的运算,使用管道发送至该命令即可: +``` +$ echo "123.4+5/6-(7.89*1.234)" | bc +113.664 + +``` + +可见 **bc** 并没有忽略精度,而且输入的字符串也相当直截了当。它还可以进行大小比较、处理布尔值、计算平方根、正弦、余弦和正切等。 +``` +$ echo "sqrt(256)" | bc +16 +$ echo "s(90)" | bc -l +.89399666360055789051 + +``` + +事实上,**bc** 甚至可以计算 pi。你需要指定需要的精度。 +``` +$ echo "scale=5; 4*a(1)" | bc -l +3.14156 +$ echo "scale=10; 4*a(1)" | bc -l +3.1415926532 +$ echo "scale=20; 4*a(1)" | bc -l +3.14159265358979323844 +$ echo "scale=40; 4*a(1)" | bc -l +3.1415926535897932384626433832795028841968 + +``` + +除了通过管道接收数据并返回结果,**bc**还可以交互式运行,输入你想执行的运算即可。本例中提到的 scale 设置可以指定有效数字的个数。 +``` +$ bc +bc 1.06.95 +Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc. +This is free software with ABSOLUTELY NO WARRANTY. +For details type `warranty'. +scale=2 +3/4 +.75 +2/3 +.66 +quit + +``` + +你还可以使用 **bc** 完成数字进制转换。**obase** 用于设置输出的数字进制。 +``` +$ bc +bc 1.06.95 +Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc. +This is free software with ABSOLUTELY NO WARRANTY. +For details type `warranty'. 
+obase=16 +16 <=== entered +10 <=== response +256 <=== entered +100 <=== response +quit + +``` + +按如下方式使用 **bc** 也是完成十六进制与十进制转换的最简单方式之一: +``` +$ echo "ibase=16; F2" | bc +242 +$ echo "obase=16; 242" | bc +F2 + +``` + +在上面第一个例子中,我们将输入进制 (ibase) 设置为十六进制 (hex),完成十六进制到为十进制的转换。在第二个例子中,我们执行相反的操作,即将输出进制 (obase) 设置为十六进制。 + +### 简单的 bash 数学运算 + +通过使用双括号,我们可以在 bash 中完成简单的数学运算。在下面的例子中,我们创建一个变量,为变量赋值,然后依次执行加法、自减和平方。 +``` +$ ((e=11)) +$ (( e = e + 7 )) +$ echo $e +18 + +$ ((e--)) +$ echo $e +17 + +$ ((e=e**2)) +$ echo $e +289 + +``` + +允许使用的运算符包括: +``` ++ - 加法及减法 +++ -- 自增与自减 +* / % 乘法,除法及求余数 +^ 指数运算 + +``` + +你还可以使用逻辑运算符和布尔运算符: +``` +$ ((x=11)); ((y=7)) +$ if (( x > y )); then +> echo "x > y" +> fi +x > y + +$ ((x=11)); ((y=7)); ((z=3)) +$ if (( x > y )) >> (( y > z )); then +> echo "letters roll downhill" +> fi +letters roll downhill + +``` + +或者如下方式: +``` +$ if [ x > y ] << [ y > z ]; then echo "letters roll downhill"; fi +letters roll downhill + +``` + +下面计算 2 的 3 次幂: +``` +$ echo "2 ^ 3" +2 ^ 3 +$ echo "2 ^ 3" | bc +8 + +``` + +### 总结 + +在 Linux 系统中,有很多不同的命令行工具可以完成数字运算。希望你在读完本文之后,能掌握一两个新工具。 + +使用 [Facebook][1] 或 [LinkedIn][2] 加入 Network World 社区,点评你最喜爱的主题。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3268964/linux/how-to-do-math-on-the-linux-command-line.html + +作者:[Sandra Henry-Stocker][a] +译者:[pinewall](https://github.com/pinewall) +校对:[校对者ID](https://github.com/校对者ID) +选题:[lujun9972](https://github.com/lujun9972) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[1]:https://www.facebook.com/NetworkWorld/ +[2]:https://www.linkedin.com/company/network-world From 2de00fd94e9ee9ca00fef3134bec77377cbb4ebd Mon Sep 17 00:00:00 2001 From: songshunqiang Date: Thu, 3 May 2018 10:46:23 +0800 Subject: [PATCH 076/102] add translation tag --- ...418 Getting started with Anaconda Python for 
data science.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180418 Getting started with Anaconda Python for data science.md b/sources/tech/20180418 Getting started with Anaconda Python for data science.md index 43cd89f17c..fa55edc325 100644 --- a/sources/tech/20180418 Getting started with Anaconda Python for data science.md +++ b/sources/tech/20180418 Getting started with Anaconda Python for data science.md @@ -1,3 +1,5 @@ +pinewall translating + Getting started with Anaconda Python for data science ====== From 569b73eca26077f0fc8f6123add888e6208c8363 Mon Sep 17 00:00:00 2001 From: songshunqiang Date: Thu, 3 May 2018 10:53:43 +0800 Subject: [PATCH 077/102] add translation tag --- ...d application deployment with sample Face Recognition App.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180315 Kubernetes distributed application deployment with sample Face Recognition App.md b/sources/tech/20180315 Kubernetes distributed application deployment with sample Face Recognition App.md index 6971261590..83ef4b1835 100644 --- a/sources/tech/20180315 Kubernetes distributed application deployment with sample Face Recognition App.md +++ b/sources/tech/20180315 Kubernetes distributed application deployment with sample Face Recognition App.md @@ -1,3 +1,5 @@ +pinewall translating + Kubernetes distributed application deployment with sample Face Recognition App ============================================================ From 8cd6c038f748c4a3f9030fac70a03c3890fb82f5 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 3 May 2018 15:13:54 +0800 Subject: [PATCH 078/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=209=20ways=20to=20i?= =?UTF-8?q?mprove=20collaboration=20between=20developers=20and=20designers?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ration between developers and designers.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 sources/talk/20180502 9 ways to 
improve collaboration between developers and designers.md diff --git a/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md b/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md new file mode 100644 index 0000000000..293841714d --- /dev/null +++ b/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md @@ -0,0 +1,66 @@ +9 ways to improve collaboration between developers and designers +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab1.png?itok=ULQdGjlV) + +This article was co-written with [Jason Porter][1]. + +Design is a crucial element in any software project. Sooner or later, the developers' reasons for writing all this code will be communicated to the designers, human beings who aren't as familiar with its inner workings as the development team. + +Stereotypes exist on both sides of the divide; engineers often expect designers to be flaky and irrational, while designers often expect engineers to be inflexible and demanding. The truth is considerably more nuanced and, at the end of the day, the fates of designers and developers are forever intertwined. + +Here are nine things that can improve collaboration between the two. + +### 1\. First, knock down the wall. Seriously. + +There are loads of memes about the "wall of confusion" in just about every industry. No matter what else you do, the first step toward tearing down this wall is getting both sides to agree it needs to be gone. Once everyone agrees the existing processes aren't functioning optimally, you can pick and choose from the rest of these ideas to begin fixing the problems. + +### 2\. Learn to empathize. + +Before rolling up any sleeves to build better communication, take a break. This is a great junction point for team building. 
A time to recognize that we're all people, we all have strengths and weaknesses, and most importantly, we're all on the same team. Discussions around workflows and productivity can become feisty, so it's crucial to build a foundation of trust and cooperation before diving on in. + +### 3\. Recognize differences. + +Designers and developers attack the same problem from different angles. Given a similar problem, designers will seek the solution with the biggest impact while developers will seek the solution with the least amount of waste. These two viewpoints do not have to be mutually exclusive. There is plenty of room for negotiation and compromise, and somewhere in the middle is where the end user receives the best experience possible. + +### 4\. Embrace similarities. + +This is all about workflow. CI/CD, scrum, agile, etc., are all basically saying the same thing: Ideate, iterate, investigate, and repeat. Iteration and reiteration are common denominators for both kinds of work. So instead of running a design cycle followed by a development cycle, it makes much more sense to run them concurrently and in tandem. Syncing cycles allows teams to communicate, collaborate, and influence each other every step of the way. + +### 5\. Manage expectations. + +All conflict can be distilled down to one simple idea: incompatible expectations. Therefore, an easy way to prevent systemic breakdowns is to manage expectations by ensuring that teams are thinking before talking and talking before doing. Setting expectations often evolves organically through everyday conversation. Forcing them to happen by having meetings can be counterproductive. + +### 6\. Meet early and meet often. + +Meeting once at the beginning of work and once at the end simply isn't enough. This doesn't mean you need daily or even weekly meetings. Setting a cadence for meetings can also be counterproductive. Let them happen whenever they're necessary. 
Great things can happen with impromptu meetings—even at the watercooler! If your team is distributed or has even one remote employee, video conferencing, text chat, or phone calls are all excellent ways to meet. It's important that everyone on the team has multiple ways to communicate with each other. + +### 7\. Build your own lexicon. + +Designers and developers sometimes have different terms for similar ideas. One person's card is another person's tile is a third person's box. Ultimately, the fit and accuracy of a term aren't as important as everyone's agreement to use the same term consistently. + +### 8\. Make everyone a communication steward. + +Everyone in the group is responsible for maintaining effective communication, regardless of how or when it happens. Each person should strive to say what they mean and mean what they say. + +### 9\. Give a darn. + +It only takes one member of a team to sabotage progress. Go all in. If every individual doesn't care about the product or the goal, there will be problems with motivation to make changes or continue the process. + +This article is based on [Designers and developers: Finding common ground for effective collaboration][2], a talk the authors will be giving at [Red Hat Summit 2018][3], which will be held May 8-10 in San Francisco. [Register by May 7][3] to save US$ 500 off of registration. Use discount code **OPEN18** on the payment page to apply the discount. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/5/9-ways-improve-collaboration-developers-designers + +作者:[Jason Brock][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jkbrock +[1]:https://opensource.com/users/lightguardjp +[2]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154267 +[3]:https://www.redhat.com/en/summit/2018 From 57010e6484827f7ec4be692265fb571db30ec6a2 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 3 May 2018 15:16:32 +0800 Subject: [PATCH 079/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Use=20?= =?UTF-8?q?Vim=20Editor=20To=20Input=20Text=20Anywhere?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...o Use Vim Editor To Input Text Anywhere.md | 100 ++++++++++++++++++ 1 file changed, 100 insertions(+) create mode 100644 sources/tech/20180501 How To Use Vim Editor To Input Text Anywhere.md diff --git a/sources/tech/20180501 How To Use Vim Editor To Input Text Anywhere.md b/sources/tech/20180501 How To Use Vim Editor To Input Text Anywhere.md new file mode 100644 index 0000000000..368d1b1dbc --- /dev/null +++ b/sources/tech/20180501 How To Use Vim Editor To Input Text Anywhere.md @@ -0,0 +1,100 @@ +How To Use Vim Editor To Input Text Anywhere +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-720x340.png) + +Howdy Vim users! Today, I have some good news for all of you. Say hello to **Vim-anywhere**, a simple script that allows you to use the Vim editor to input text anywhere in your Linux box. That means you can simply invoke your favorite Vim editor, type whatever you want and paste the text on any application or on a website. 
The text will be available in your clipboard until you restart your system. This utility is absolutely useful for those who love to use the Vim keybindings in non-Vim environments. + +### Install Vim-anywhere in Linux + +The Vim-anywhere utility will work on any GNOME-based (or derivative) Linux distribution. Also, make sure you have installed the following prerequisites. + + * Curl + * Git + * gVim + * xclip + + + +For instance, you can install those utilities in Ubuntu as shown below. +``` +$ sudo apt install curl git vim-gnome xclip + +``` + +Then, run the following command to install Vim-anywhere: +``` +$ curl -fsSL https://raw.github.com/cknadler/vim-anywhere/master/install | bash + +``` + +Vim-anywhere has been installed. Now let us see how to use it. + +### Use Vim Editor To Input Text Anywhere + +Let us say you need to create a word document. But you’re much more comfortable using Vim editor than LibreOffice writer. No problem, this is where Vim-anywhere comes in handy. It automates the entire process. It simply invokes the Vim editor, so you can write whatever you want in it and paste it in the .doc file. + +Let me show you an example. Open LibreOffice writer or any graphical text editor of your choice. Then, open Vim-anywhere. To do so, simply press **CTRL+ALT+V**. It will open the gVim editor. Press “i” to switch to insert mode and input the text. Once done, save and close it by typing **:wq**. + +![][2] + +The text will be available in the clipboard until you restart the system. After you close the editor, your previous application is refocused. Just press **CTRL+P** to paste the text in it. + +![][3] + +It’s just an example. You can even use Vim-anywhere to write something on an annoying web form or any other applications. Once Vim-anywhere is invoked, it will open a buffer. Close it and its contents are automatically copied to your clipboard and your previous application is refocused. 
+ +The vim-anywhere utility will create a temporary file in **/tmp/vim-anywhere** when invoked. These temporary files stick around until you restart your system, giving you a temporary history. +``` +$ ls /tmp/vim-anywhere + +``` + +You can re-open your most recent file using this command: +``` +$ vim $( ls /tmp/vim-anywhere | sort -r | head -n 1 ) + +``` + +**Update Vim-anywhere** + +Run the following command to update Vim-anywhere: +``` +$ ~/.vim-anywhere/update + +``` + +**Change keyboard shortcut** + +The default keybinding to invoke Vim-anywhere is CTRL+ALT+V. You can change it to any custom keybinding using the gconf tool. +``` +$ gconftool -t str --set /desktop/gnome/keybindings/vim-anywhere/binding + +``` + +**Uninstall Vim-anywhere** + +Some of you might think that opening the Vim editor each time to input text and pasting the text back into another application might be pointless and completely unnecessary. + +If you don’t find this utility useful, simply uninstall it using this command: +``` +$ ~/.vim-anywhere/uninstall + +``` + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-use-vim-editor-to-input-text-anywhere/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[2]:http://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-1-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-2.png From 5f5fc7fa8d24df558876a0ba65b795b3bacb7f4c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 3 May 2018 21:27:59 +0800 Subject: [PATCH 080/102] PRF:20180312 How To Quickly Monitor Multiple Hosts In Linux.md @MjSeven --- ...Quickly Monitor Multiple Hosts In Linux.md | 36 +++++++++---------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git 
a/translated/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md b/translated/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md index 1f6f085feb..860f70553a 100644 --- a/translated/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md +++ b/translated/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md @@ -2,66 +2,67 @@ ===== ![](https://www.ostechnix.com/wp-content/uploads/2018/03/rwho-1-720x340.png) -有很多监控工具可用来监控本地和远程 Linux 系统,一个很好的例子是 [**Cockpit**][1]。但是,这些工具的安装和使用比较复杂,至少对于新手管理员来说是这样。新手管理员可能需要花一些时间来弄清楚如何配置这些工具来监视系统。如果你想要以快速且粗略地在局域网中一次监控多台主机,你可能需要查看一下 **“rwho”** 工具。只要安装 rwho 实用程序,它将立即快速地监控本地和远程系统。你什么都不用配置!你所要做的就是在要监视的系统上安装 “rwho” 工具。 -请不要将 rwho 视为功能丰富且完整的监控工具。这只是一个简单的工具,它只监视远程系统的**正常运行时间**,**加载**和**登录用户**。使用 “rwho” 使用程序,我们可以发现谁在哪台计算机上登录,一个被监视的计算机的列表,有正常运行时间(自上次重新启动以来的时间),有多少用户登录了,以及在过去的 1、5、15 分钟的平均负载。不多不少!而且,它只监视同一子网中的系统。因此,它非常适合小型和家庭办公网络。 +有很多监控工具可用来监控本地和远程 Linux 系统,一个很好的例子是 [Cockpit][1]。但是,这些工具的安装和使用比较复杂,至少对于新手管理员来说是这样。新手管理员可能需要花一些时间来弄清楚如何配置这些工具来监视系统。如果你想要以快速且粗略地在局域网中一次监控多台主机,你可能需要了解一下 “rwho” 工具。只要安装了 rwho 实用程序,它将立即快速地监控本地和远程系统。你什么都不用配置!你所要做的就是在要监视的系统上安装 “rwho” 工具。 + +请不要将 rwho 视为功能丰富且完整的监控工具。这只是一个简单的工具,它只监视远程系统的“正常运行时间”(`uptime`),“负载”(`load`)和**登录的用户**。使用 “rwho” 使用程序,我们可以发现谁在哪台计算机上登录;一个被监视的计算机的列表,列出了正常运行时间(自上次重新启动以来的时间);有多少用户登录了;以及在过去的 1、5、15 分钟的平均负载。不多不少!而且,它只监视同一子网中的系统。因此,它非常适合小型和家庭办公网络。 ### 在 Linux 中监控多台主机 -让我来解释一下 rwho 是如何工作的。每个在网络上使用 rwho 的系统都将广播关于它自己的信息,其他计算机可以使用 rwhod-daemon 来访问这些信息。因此,网络上的每台计算机都必须安装 rwho。此外,为了分发或访问其他主机的信息,必须允许 rwho 端口(例如端口 513/UDP)通过防火墙/路由器。 +让我来解释一下 `rwho` 是如何工作的。每个在网络上使用 `rwho` 的系统都将广播关于它自己的信息,其他计算机可以使用 `rwhod` 守护进程来访问这些信息。因此,网络上的每台计算机都必须安装 `rwho`。此外,为了分发或访问其他主机的信息,必须允许 `rwho` 端口(例如端口 `513/UDP`)通过防火墙/路由器。 好的,让我们来安装它。 -我在 Ubuntu 16.04 LTS 服务器上进行了测试,rwho 在默认仓库中可用,所以,我们可以使用像下面这样的 APT 软件包管理器来安装它。 +我在 Ubuntu 16.04 LTS 服务器上进行了测试,`rwho` 在默认仓库中可用,所以,我们可以使用像下面这样的 APT 软件包管理器来安装它。 + ``` $ sudo apt-get install rwho - ``` -在基于 RPM 的系统如 CentOS, Fedora, RHEL上,使用以下命令来安装它: +在基于 RPM 的系统如 CentOS、 Fedora、 
RHEL 上,使用以下命令来安装它: + ``` $ sudo yum install rwho - ``` -如果你在防火墙/路由器之后,确保你已经允许使用 rwhod 513 端口。另外,使用命令验证 rwhod-daemon 是否正在运行: +如果你在防火墙/路由器之后,确保你已经允许使用 rwhod 513 端口。另外,使用命令验证 `rwhod` 守护进程是否正在运行: $ sudo systemctl status rwhod -如果它尚未启动,运行以下命令启用并启动 rwhod 服务: +如果它尚未启动,运行以下命令启用并启动 `rwhod` 服务: + ``` $ sudo systemctl enable rwhod $ sudo systemctl start rwhod - ``` 现在是时候来监视系统了。运行以下命令以发现谁在哪台计算机上登录: + ``` $ rwho ostechni ostechnix:pts/5 Mar 12 17:41 root server:pts/0 Mar 12 17:42 - ``` -正如你所看到的,目前我的局域网中有两个系统。本地系统用户是 **ostechnix** (Ubuntu 16.04 LTS),远程系统的用户是 **root** (CentOS 7)。可能你已经猜到了,rwho 与 “who” 命令相似,但它会监视远程系统。 +正如你所看到的,目前我的局域网中有两个系统。本地系统用户是 `ostechnix` (Ubuntu 16.04 LTS),远程系统的用户是 `root` (CentOS 7)。可能你已经猜到了,`rwho` 与 `who` 命令相似,但它会监视远程系统。 而且,我们可以使用以下命令找到网络上所有正在运行的系统的正常运行时间: + ``` $ ruptime ostechnix up 2:17, 1 user, load 0.09, 0.03, 0.01 server up 1:54, 1 user, load 0.00, 0.01, 0.05 - ``` -这里,ruptime(类似于 “uptime” 命令)显示了我的 Ubuntu(本地) and CentOS(远程)系统的总运行时间。明白了吗?棒极了!以下是我的 Ubuntu 16.04 LTS 系统的示例屏幕截图: +这里,`ruptime`(类似于 `uptime` 命令)显示了我的 Ubuntu(本地) 和 CentOS(远程)系统的总运行时间。明白了吗?棒极了!以下是我的 Ubuntu 16.04 LTS 系统的示例屏幕截图: ![][3] 你可以在以下位置找到有关局域网中所有其他机器的信息: + ``` $ ls /var/spool/rwho/ whod.ostechnix whod.server - ``` 它很小,但却非常有用,可以发现谁在哪台计算机上登录,以及正常运行时间和系统负载详情。 @@ -71,23 +72,22 @@ whod.ostechnix whod.server 请注意,这种方法有一个严重的漏洞。由于有关每台计算机的信息都通过网络进行广播,因此该子网中的每个人都可能获得此信息。通常情况下可以,但另一方面,当有关网络的信息分发给非授权用户时,这可能是不必要的副作用。因此,强烈建议在受信任和受保护的局域网中使用它。 更多的信息,查找 man 手册页。 + ``` $ man rwho - ``` 好了,这就是全部了。更多好东西要来了,敬请期待! 干杯! 
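顺带一提(以下示例并非原文内容,只是一个假设性的示意):`ruptime` 的输出是规整的文本,因此可以直接用 `awk` 做简单的筛选,比如找出 1 分钟平均负载超过某个阈值的主机:

```shell
# 示意:从 ruptime 风格的输出中筛选 1 分钟平均负载高于 0.05 的主机
# (这里用文章中的示例输出代替真实的 ruptime 调用)
ruptime_output='ostechnix up 2:17, 1 user, load 0.09, 0.03, 0.01
server up 1:54, 1 user, load 0.00, 0.01, 0.05'

# 第 7 个字段是 1 分钟平均负载(去掉尾部逗号后转为数字比较)
echo "$ruptime_output" | awk '{ gsub(",", "", $7); if ($7 + 0 > 0.05) print $1 }'
```

实际使用时,把 `$ruptime_output` 换成 `$(ruptime)` 即可。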
- -------------------------------------------------------------------------------- via: https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/ 作者:[SK][a] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d8ed9d63aa101e04fabcaf336c8114211b2fb5e7 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 3 May 2018 21:28:22 +0800 Subject: [PATCH 081/102] PUB:20180312 How To Quickly Monitor Multiple Hosts In Linux.md @MjSeven https://linux.cn/article-9602-1.html --- .../20180312 How To Quickly Monitor Multiple Hosts In Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180312 How To Quickly Monitor Multiple Hosts In Linux.md (100%) diff --git a/translated/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md b/published/20180312 How To Quickly Monitor Multiple Hosts In Linux.md similarity index 100% rename from translated/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md rename to published/20180312 How To Quickly Monitor Multiple Hosts In Linux.md From b2937bde4b762cef825582708aad3241854e0ecb Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 4 May 2018 07:33:39 +0800 Subject: [PATCH 082/102] PRF:20180227 How to block local spoofed addresses using the Linux firewall.md @leemeans --- ...ofed addresses using the Linux firewall.md | 57 ++++++++----------- 1 file changed, 23 insertions(+), 34 deletions(-) diff --git a/translated/tech/20180227 How to block local spoofed addresses using the Linux firewall.md b/translated/tech/20180227 How to block local spoofed addresses using the Linux firewall.md index 6c54c36164..be0e5f85ec 100644 --- a/translated/tech/20180227 How to block local spoofed addresses using the Linux firewall.md +++ b/translated/tech/20180227 How to block local spoofed addresses using the Linux firewall.md 
@@ -1,71 +1,60 @@ -如何使用Linux防火墙隔离局域网受欺骗地址 +如何使用 Linux 防火墙隔离本地欺骗地址 ====== +> 如何使用 iptables 防火墙保护你的网络免遭黑客攻击。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA) +即便是被入侵检测和隔离系统所保护的远程网络,黑客们也在寻找各种精巧的方法入侵。IDS/IPS 不能停止或者减少那些想要接管你的网络控制权的黑客攻击。不恰当的配置允许攻击者绕过所有部署的安全措施。 -即便是被入侵检测和隔离系统保护的远程网络,黑客们也在寻找精致的方法入侵。IDS/IPS是不能停止或者减少那些想要接管你的网络的黑客的攻击的。不恰当的配置允许攻击者绕过所有部署的安全措施。 +在这篇文章中,我将会解释安全工程师或者系统管理员该怎样避免这些攻击。 -在这篇文章中,我将会解释安全工程师或者系统管理员怎样可以避免这些攻击。 +几乎所有的 Linux 发行版都带着一个内建的防火墙来保护运行在 Linux 主机上的进程和应用程序。大多数防火墙都按照 IDS/IPS 解决方案设计,这样的设计的主要目的是检测和避免恶意包获取网络的进入权。 -几乎所有的Linux发行版都带着一个内建的防火墙来保护运行在Linux宿主机上的进程和应用程序。大多数都按照IDS/IPS解决方案设计,这样的设计的主要目的是检测和避免恶意包获取网络的进入权。 +Linux 防火墙通常有两种接口:iptables 和 ipchains 程序(LCTT 译注:在支持 systemd 的系统上,采用的是更新的接口 firewalld)。大多数人将这些接口称作 iptables 防火墙或者 ipchains 防火墙。这两个接口都被设计成包过滤器。iptables 是有状态防火墙,其基于先前的包做出决定。ipchains 不会基于先前的包做出决定,它被设计为无状态防火墙。 -Linux防火墙通常带有两个接口:iptable和ipchain程序。大多数人将这些接口称作iptables防火墙或者ipchains防火墙。这两个接口都被设计成包过滤器。iptables是有状态防火墙,基于先前的包做出决定。 +在这篇文章中,我们将会专注于内核 2.4 之后出现的 iptables 防火墙。 -在这篇文章中,我们将会专注于内核2.4之后出现的iptables防火墙。 +有了 iptables 防火墙,你可以创建策略或者有序的规则集,规则集可以告诉内核该如何对待特定的数据包。在内核中的是Netfilter 框架。Netfilter 既是框架也是 iptables 防火墙的项目名称。作为一个框架,Netfilter 允许 iptables 勾连被设计来操作数据包的功能。概括地说,iptables 依靠 Netfilter 框架构筑诸如过滤数据包数据的功能。 -有了iptables防火墙,你可以创建策略或者有序的规则集,规则集可以告诉内核如何对待特定的数据包。在内核中的是Netfilter框架。Netfilter既是框架也是iptables防火墙的工程名。作为一个框架,Netfilter允许iptables勾取被设计来操作数据包的函数。概括地说,iptables依靠Netfilter框架构筑诸如过滤数据包数据的功能。 +每个 iptables 规则都被应用到一个表中的链上。一个 iptables 链就是一个比较包中相似特征的规则集合。而表(例如 `nat` 或者 `mangle`)则描述不同的功能目录。例如, `mangle` 表用于修改包数据。因此,特定的修改包数据的规则被应用到这里;而过滤规则被应用到 `filter` 表,因为 `filter` 表过滤包数据。 -每个iptables规则都被应用到一个含表的链中。一个iptables链就是一个比较包中相似字符的规则的集合。而表(例如nat或者mangle)则描述不同的功能目录。例如,一个mangle表转化包数据。因此,特定的改变包数据的规则被应用到这里,而过滤规则被应用到filter表,因为filter表过滤包数据。 +iptables 规则有一个匹配集,以及一个诸如 `Drop` 或者 `Deny` 的目标,这可以告诉 iptables 对一个包做什么以符合规则。因此,没有目标和匹配集,iptables 就不能有效地处理包。如果一个包匹配了一条规则,目标会指向一个将要采取的特定措施。另一方面,为了让 iptables 
处理,每个数据包必须匹配才能被处理。 -iptables规则有一系列匹配,伴随着一个诸如`Drop`或者`Deny`的目标,这可以告诉iptables对一个包做什么符合规则。因此,没有一个目标和一系列匹配,iptables就不能有效地处理包。如果一个包匹配一条规则,一个目标简单地指向一个将要采取的特定措施。另一方面,为了让iptables处理,匹配必须被每个包满足吗。 - - -现在我们已经知道iptables防火墙如何工作,开始着眼于如何使用iptables防火墙检测并拒绝或丢弃被欺骗的地址吧。 +现在我们已经知道 iptables 防火墙如何工作,让我们着眼于如何使用 iptables 防火墙检测并拒绝或丢弃欺骗地址吧。 ### 打开源地址验证 -作为一个安全工程师,在处理远程主机被欺骗地址的时候,我采取的第一步是在内核打开源地址验证。 +作为一个安全工程师,在处理远程的欺骗地址的时候,我采取的第一步是在内核打开源地址验证。 -源地址验证是一种内核层级的特性,这种特性丢弃那些伪装成来自你的网络的包。这种特性使用反向路径过滤器方法来检查收到的包的源地址是否可以通过包到达的接口可以到达。(译注:到达的包的源地址应该可以从它到达的网络接口反向到达,只需反转源地址和目的地址就可以达到这样的效果) +源地址验证是一种内核层级的特性,这种特性丢弃那些伪装成来自你的网络的包。这种特性使用反向路径过滤器方法来检查收到的包的源地址是否可以通过包到达的接口可以到达。(LCTT 译注:到达的包的源地址应该可以从它到达的网络接口反向到达,只需反转源地址和目的地址就可以达到这样的效果) 利用下面简单的脚本可以打开源地址验证而不用手工操作: + ``` #!/bin/sh - #作者: Michael K Aboagye - #程序目标: 打开反向路径过滤 - #日期: 7/02/18 - #在屏幕上显示 “enabling source address verification” - echo -n "Enabling source address verification…" - #将值0覆盖为1来打开源地址验证 - echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter - echo "completed" - ``` -先前的脚本在执行的时候只显示了`Enabling source address verification`这条信息而没有添加新行。默认的反向路径过滤的值是0,0表示没有源验证。因此,第二行简单地将默认值0覆盖为1。1表示内核将会通过确认反向路径来验证源(地址)。 +上面的脚本在执行的时候只显示了 `Enabling source address verification` 这条信息而不会换行。默认的反向路径过滤的值是 `0`,`0` 表示没有源验证。因此,第二行简单地将默认值 `0` 覆盖为 `1`。`1` 表示内核将会通过确认反向路径来验证源地址。 -最后,你可以使用下面的命令通过选择`DROP`或者`REJECT`目标中的一个来丢弃或者拒绝来自远端主机的被欺骗地址。但是,处于安全原因的考虑,我建议使用`DROP`目标。 +最后,你可以使用下面的命令通过选择 `DROP` 或者 `REJECT` 目标之一来丢弃或者拒绝来自远端主机的欺骗地址。但是,处于安全原因的考虑,我建议使用 `DROP` 目标。 -像下面这样,用你自己的IP地址代替“IP-address” 占位符。另外,你必须选择使用`REJECT`或者`DROP`中的一个,这两个目标不能同时使用。 -``` - iptables -A INPUT -i internal_interface -s IP_address -j REJECT / DROP - - - - iptables -A INPUT -i internal_interface -s 192.168.0.0/16 -j REJECT/ DROP +像下面这样,用你自己的 IP 地址代替 `IP-address` 占位符。另外,你必须选择使用 `REJECT` 或者 `DROP` 中的一个,这两个目标不能同时使用。 +``` +iptables -A INPUT -i internal_interface -s IP_address -j REJECT / DROP +iptables -A INPUT -i internal_interface -s 192.168.0.0/16 -j REJECT / DROP ``` -这篇文章只提供了如何使用iptables防火墙来避免远端欺骗攻击的基础(知识)。 +这篇文章只提供了如何使用 iptables 
防火墙来避免远端欺骗攻击的基础知识。 -------------------------------------------------------------------------------- @@ -73,7 +62,7 @@ via: https://opensource.com/article/18/2/block-local-spoofed-addresses-using-lin 作者:[Michael Kwaku Aboagye][a] 译者:[leemeans](https://github.com/leemeans) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 2a0213b2e4b892dc77b75b46284074fa32ab25aa Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 4 May 2018 07:34:12 +0800 Subject: [PATCH 083/102] PUB:20180227 How to block local spoofed addresses using the Linux firewall.md @leemeans https://linux.cn/article-9603-1.html --- ...w to block local spoofed addresses using the Linux firewall.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180227 How to block local spoofed addresses using the Linux firewall.md (100%) diff --git a/translated/tech/20180227 How to block local spoofed addresses using the Linux firewall.md b/published/20180227 How to block local spoofed addresses using the Linux firewall.md similarity index 100% rename from translated/tech/20180227 How to block local spoofed addresses using the Linux firewall.md rename to published/20180227 How to block local spoofed addresses using the Linux firewall.md From 9bb734767048a9923e728db5aeb80ab3afe97b9b Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 4 May 2018 08:55:34 +0800 Subject: [PATCH 084/102] translated --- ...w to start developing on Java in Fedora.md | 124 ------------------ ...w to start developing on Java in Fedora.md | 122 +++++++++++++++++ 2 files changed, 122 insertions(+), 124 deletions(-) delete mode 100644 sources/tech/20180420 How to start developing on Java in Fedora.md create mode 100644 translated/tech/20180420 How to start developing on Java in Fedora.md diff --git a/sources/tech/20180420 How to start developing on Java in Fedora.md b/sources/tech/20180420 How 
to start developing on Java in Fedora.md deleted file mode 100644 index cc192c915c..0000000000 --- a/sources/tech/20180420 How to start developing on Java in Fedora.md +++ /dev/null @@ -1,124 +0,0 @@ -translating----geekpi - -How to start developing on Java in Fedora -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/04/java-getting-started-816x345.jpg) -Java is one of the most popular programming languages in the world. It is widely-used to develop IOT appliances, Android apps, web, and enterprise applications. This article will provide a quick guide to install and configure your workstation using [OpenJDK][1]. - -### Installing the compiler and tools - -Installing the compiler, or Java Development Kit (JDK), is easy to do in Fedora. At the time of this article, versions 8 and 9 are available. Simply open a terminal and enter: -``` -sudo dnf install java-1.8.0-openjdk-devel - -``` - -This will install the JDK for version 8. For version 9, enter: -``` -sudo dnf install java-9-openjdk-devel - -``` - -For the developer who requires additional tools and libraries such as Ant and Maven, the **Java Development** group is available. To install the suite, enter: -``` -sudo dnf group install "Java Development" - -``` - -To verify the compiler is installed, run: -``` -javac -version - -``` - -The output shows the compiler version and looks like this: -``` -javac 1.8.0_162 - -``` - -### Compiling applications - -You can use any basic text editor such as nano, vim, or gedit to write applications. This example provides a simple “Hello Fedora” program. - -Open your favorite text editor and enter the following: -``` -public class HelloFedora { - - -      public static void main (String[] args) { -              System.out.println("Hello Fedora!"); -      } -} - -``` - -Save the file as HelloFedora.java. 
In the terminal change to the directory containing the file and do: -``` -javac HelloFedora.java - -``` - -The compiler will complain if it runs into any syntax errors. Otherwise it will simply display the shell prompt beneath. - -You should now have a file called HelloFedora, which is the compiled program. Run it with the following command: -``` -java HelloFedora - -``` - -And the output will display: -``` -Hello Fedora! - -``` - -### Installing an Integrated Development Environment (IDE) - -Some programs may be more complex and an IDE can make things flow smoothly. There are quite a few IDEs available for Java programmers including: - -+ Geany, a basic IDE that loads quickly, and provides built-in templates -+ Anjuta -+ GNOME Builder, which has been covered in the article Builder – a new IDE specifically for GNOME app developers - -However, one of the most popular open-source IDE’s, mainly written in Java, is [Eclipse][2]. Eclipse is available in the official repositories. To install it, run this command: -``` -sudo dnf install eclipse-jdt - -``` - -When the installation is complete, a shortcut for Eclipse appears in the desktop menu. - -For more information on how to use Eclipse, consult the [User Guide][3] available on their website. - -### Browser plugin - -If you’re developing web applets and need a plugin for your browser, [IcedTea-Web][4] is available. Like OpenJDK, it is open source and easy to install in Fedora. Run this command: -``` -sudo dnf install icedtea-web - -``` - -As of Firefox 52, the web plugin no longer works. For details visit the Mozilla support site at [https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct][5]. - -Congratulations, your Java development environment is ready to use. 
- - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/start-developing-java-fedora/ - -作者:[Shaun Assam][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://fedoramagazine.org/author/sassam/ -[1]:http://openjdk.java.net/ -[2]:https://www.eclipse.org/ -[3]:http://help.eclipse.org/oxygen/nav/0 -[4]:https://icedtea.classpath.org/wiki/IcedTea-Web -[5]:https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct diff --git a/translated/tech/20180420 How to start developing on Java in Fedora.md b/translated/tech/20180420 How to start developing on Java in Fedora.md new file mode 100644 index 0000000000..b606a74a5d --- /dev/null +++ b/translated/tech/20180420 How to start developing on Java in Fedora.md @@ -0,0 +1,122 @@ +如何在 Fedora 上开始 Java 开发 +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/04/java-getting-started-816x345.jpg) +Java 是世界上最流行的编程语言之一。它广泛用于开发物联网设备、Android 程序,Web 和企业应用。本文将提供使用 [OpenJDK][1] 安装和配置工作站的指南。 + +### 安装编译器和工具 + +在 Fedora 中安装编译器或 Java Development Kit(JDK)很容易。在写这篇文章时,可以用 v8 和 v9。只需打开一个终端并输入: +``` +sudo dnf install java-1.8.0-openjdk-devel + +``` + +这安装 JDK v8。对于 v9,请输入: +``` +sudo dnf install java-9-openjdk-devel + +``` + +对于需要其他工具和库(如 Ant 和 Maven)的开发人员,可以使用 **Java Development** 组。要安装套件,请输入: +``` +sudo dnf group install "Java Development" + +``` + +要验证编译器是否已安装,请运行: +``` +javac -version + +``` + +输出显示编译器版本,如下所示: +``` +javac 1.8.0_162 + +``` + +### 编译程序 + +你可以使用任何基本的文本编辑器(如 nano、vim 或 gedit)编写程序。这个例子提供了一个简单的 “Hello Fedora” 程序。 + +打开你最喜欢的文本编辑器并输入以下内容: +``` +public class HelloFedora { + + +      public static void main (String[] args) { +              System.out.println("Hello Fedora!"); +      } +} + +``` + +将文件保存为 HelloFedora.java。在终端切换到包含该文件的目录并执行以下操作: +``` +javac 
HelloFedora.java + +``` + +如果编译器遇到任何语法错误,它会发出错误。否则,它只会在下面显示 shell 提示符。 + +你现在应该有一个名为 HelloFedora 的文件,它是编译好的程序。使用以下命令运行它: +``` +java HelloFedora + +``` + +输出将显示: +``` +Hello Fedora! + +``` + +### 安装集成开发环境(IDE) + +有些程序可能更复杂,IDE 可以帮助顺利进行。Java 程序员有很多可用的 IDE,其中包括: + ++ Geany,一个加载快速的基本 IDE,并提供内置模板 ++ Anjuta ++ GNOME Builder,已经在 Builder 的文章中介绍过 - 这是一个专门面向 GNOME 程序开发人员的新 IDE + +然而,主要用 Java 编写的最流行的开源 IDE 之一是 [Eclipse][2]。 Eclipse 在官方仓库中有。要安装它,请运行以下命令: +``` +sudo dnf install eclipse-jdt + +``` + +安装完成后,Eclipse 的快捷方式会出现在桌面菜单中。 + +有关如何使用 Eclipse 的更多信息,请参阅其网站上的[用户指南][3]。 + +### 浏览器插件 + +如果你正在开发 Web 小程序并需要一个用于浏览器的插件,则可以使用 [IcedTea-Web][4]。像 OpenJDK 一样,它是开源的并易于在 Fedora 中安装。运行这个命令: +``` +sudo dnf install icedtea-web + +``` + +从 Firefox 52 开始,Web 插件不再有效。有关详细信息,请访问 Mozilla 支持网站 [https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct][5]。 + +恭喜,你的 Java 开发环境已准备完毕。 + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/start-developing-java-fedora/ + +作者:[Shaun Assam][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fedoramagazine.org/author/sassam/ +[1]:http://openjdk.java.net/ +[2]:https://www.eclipse.org/ +[3]:http://help.eclipse.org/oxygen/nav/0 +[4]:https://icedtea.classpath.org/wiki/IcedTea-Web +[5]:https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct From 7927df72827e34e6b62edbfac34cbe804ef2d8e5 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 4 May 2018 08:59:27 +0800 Subject: [PATCH 085/102] translating --- .../20180430 Reset a lost root password in under 5 minutes.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180430 Reset a lost root password in under 5 minutes.md b/sources/tech/20180430 Reset a lost root password in under 5 minutes.md index 
06e18a2d6d..4521901e58 100644 --- a/sources/tech/20180430 Reset a lost root password in under 5 minutes.md +++ b/sources/tech/20180430 Reset a lost root password in under 5 minutes.md @@ -1,3 +1,5 @@ +translating---geekpi + Reset a lost root password in under 5 minutes ====== From cb16da8c12f125c1cc2035ea024a3854702c645b Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 4 May 2018 09:36:46 +0800 Subject: [PATCH 086/102] =?UTF-8?q?20180504-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...fficial Introduction to the Go Compiler.md | 116 ++++++++++++++++++ 1 file changed, 116 insertions(+) create mode 100644 sources/tech/20180427 An Official Introduction to the Go Compiler.md diff --git a/sources/tech/20180427 An Official Introduction to the Go Compiler.md b/sources/tech/20180427 An Official Introduction to the Go Compiler.md new file mode 100644 index 0000000000..c8369c7c8c --- /dev/null +++ b/sources/tech/20180427 An Official Introduction to the Go Compiler.md @@ -0,0 +1,116 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +## Introduction to the Go compiler + +`cmd/compile` contains the main packages that form the Go compiler. The compiler +may be logically split in four phases, which we will briefly describe alongside +the list of packages that contain their code. + +You may sometimes hear the terms "front-end" and "back-end" when referring to +the compiler. Roughly speaking, these translate to the first two and last two +phases we are going to list here. A third term, "middle-end", often refers to +much of the work that happens in the second phase. + +Note that the `go/*` family of packages, such as `go/parser` and `go/types`, +have no relation to the compiler. 
Since the compiler was initially written in C, +the `go/*` packages were developed to enable writing tools working with Go code, +such as `gofmt` and `vet`. + +It should be clarified that the name "gc" stands for "Go compiler", and has +little to do with uppercase GC, which stands for garbage collection. + +### 1. Parsing + +* `cmd/compile/internal/syntax` (lexer, parser, syntax tree) + +In the first phase of compilation, source code is tokenized (lexical analysis), +parsed (syntactic analyses), and a syntax tree is constructed for each source +file. + +Each syntax tree is an exact representation of the respective source file, with +nodes corresponding to the various elements of the source such as expressions, +declarations, and statements. The syntax tree also includes position information +which is used for error reporting and the creation of debugging information. + +### 2. Type-checking and AST transformations + +* `cmd/compile/internal/gc` (create compiler AST, type checking, AST transformations) + +The gc package includes an AST definition carried over from when it was written +in C. All of its code is written in terms of it, so the first thing that the gc +package must do is convert the syntax package's syntax tree to the compiler's +AST representation. This extra step may be refactored away in the future. + +The AST is then type-checked. The first steps are name resolution and type +inference, which determine which object belongs to which identifier, and what +type each expression has. Type-checking includes certain extra checks, such as +"declared and not used" as well as determining whether or not a function +terminates. + +Certain transformations are also done on the AST. Some nodes are refined based +on type information, such as string additions being split from the arithmetic +addition node type. Some other examples are dead code elimination, function call +inlining, and escape analysis. + +### 3. 
Generic SSA + +* `cmd/compile/internal/gc` (converting to SSA) +* `cmd/compile/internal/ssa` (SSA passes and rules) + + +In this phase, the AST is converted into Static Single Assignment (SSA) form, a +lower-level intermediate representation with specific properties that make it +easier to implement optimizations and to eventually generate machine code from +it. + +During this conversion, function intrinsics are applied. These are special +functions that the compiler has been taught to replace with heavily optimized +code on a case-by-case basis. + +Certain nodes are also lowered into simpler components during the AST to SSA +conversion, so that the rest of the compiler can work with them. For instance, +the copy builtin is replaced by memory moves, and range loops are rewritten into +for loops. Some of these currently happen before the conversion to SSA due to +historical reasons, but the long-term plan is to move all of them here. + +Then, a series of machine-independent passes and rules are applied. These do not +concern any single computer architecture, and thus run on all `GOARCH` variants. + +Some examples of these generic passes include dead code elimination, removal of +unneeded nil checks, and removal of unused branches. The generic rewrite rules +mainly concern expressions, such as replacing some expressions with constant +values, and optimizing multiplications and float operations. + +### 4. Generating machine code + +* `cmd/compile/internal/ssa` (SSA lowering and arch-specific passes) +* `cmd/internal/obj` (machine code generation) + +The machine-dependent phase of the compiler begins with the "lower" pass, which +rewrites generic values into their machine-specific variants. For example, on +amd64 memory operands are possible, so many load-store operations may be combined. + +Note that the lower pass runs all machine-specific rewrite rules, and thus it +currently applies lots of optimizations too. 
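As a concrete illustration of the "function intrinsics" applied in the SSA phase (this example is an addition, not part of the README): calls into `math/bits` are a commonly cited case — on common `GOARCH` targets the compiler replaces them with one or a few machine instructions instead of ordinary function calls. From Go source they look like regular calls:

```go
package main

import (
	"fmt"
	"math/bits"
)

// popcount looks like a normal function call, but on common architectures
// the compiler intrinsifies bits.OnesCount64 into dedicated instructions
// (e.g. POPCNT on amd64) during the SSA conversion described above.
func popcount(x uint64) int {
	return bits.OnesCount64(x)
}

func main() {
	fmt.Println(popcount(0b1011)) // 0b1011 has 3 set bits
}
```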
+ +Once the SSA has been "lowered" and is more specific to the target architecture, +the final code optimization passes are run. This includes yet another dead code +elimination pass, moving values closer to their uses, the removal of local +variables that are never read from, and register allocation. + +Other important pieces of work done as part of this step include stack frame +layout, which assigns stack offsets to local variables, and pointer liveness +analysis, which computes which on-stack pointers are live at each GC safe point. + +At the end of the SSA generation phase, Go functions have been transformed into +a series of obj.Prog instructions. These are passed to the assembler +(`cmd/internal/obj`), which turns them into machine code and writes out the +final object file. The object file will also contain reflect data, export data, +and debugging information. + +### Further reading + +To dig deeper into how the SSA package works, including its passes and rules, +head to `cmd/compile/internal/ssa/README.md`. From 91c919939bd2f5445bc1429823bdb8f84b2e809f Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 4 May 2018 09:38:00 +0800 Subject: [PATCH 087/102] =?UTF-8?q?20180504-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... An Official Introduction to the Go Compiler.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/sources/tech/20180427 An Official Introduction to the Go Compiler.md b/sources/tech/20180427 An Official Introduction to the Go Compiler.md index c8369c7c8c..65c35fee64 100644 --- a/sources/tech/20180427 An Official Introduction to the Go Compiler.md +++ b/sources/tech/20180427 An Official Introduction to the Go Compiler.md @@ -114,3 +114,17 @@ and debugging information. To dig deeper into how the SSA package works, including its passes and rules, head to `cmd/compile/internal/ssa/README.md`. 
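To get a hands-on feel for what the syntax trees of phase 1 contain, you can use the separate `go/*` packages mentioned in the introduction (which, as noted there, are not the compiler's own implementation — but the trees they build are conceptually similar). A small sketch:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// funcNames parses a Go source file and returns the names of its top-level
// function declarations, by walking the resulting syntax tree.
func funcNames(src string) []string {
	fset := token.NewFileSet() // tracks the position information used for error reporting
	f, err := parser.ParseFile(fset, "example.go", src, 0)
	if err != nil {
		return nil
	}
	var names []string
	ast.Inspect(f, func(n ast.Node) bool {
		if fn, ok := n.(*ast.FuncDecl); ok {
			names = append(names, fn.Name.Name)
		}
		return true
	})
	return names
}

func main() {
	src := "package p\n\nfunc add(a, b int) int { return a + b }\n"
	fmt.Println(funcNames(src)) // [add]
}
```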
+ + + +-------------------------------------------------------------------------------- + +via: https://github.com/golang/go/blob/master/src/cmd/compile/README.md + +作者:[mvdan ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://github.com/mvdan From 2a9b0a190292a758754616ad792ddfa22d227316 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 4 May 2018 09:40:32 +0800 Subject: [PATCH 088/102] =?UTF-8?q?20180504-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...426 Continuous Profiling of Go programs.md | 106 ++++++++++++++++++ 1 file changed, 106 insertions(+) create mode 100644 sources/tech/20180426 Continuous Profiling of Go programs.md diff --git a/sources/tech/20180426 Continuous Profiling of Go programs.md b/sources/tech/20180426 Continuous Profiling of Go programs.md new file mode 100644 index 0000000000..ee91150b53 --- /dev/null +++ b/sources/tech/20180426 Continuous Profiling of Go programs.md @@ -0,0 +1,106 @@ +Continuous Profiling of Go programs +============================================================ + +One of the most interesting parts of Google is our fleet-wide continuous profiling service. We can see who is accountable for CPU and memory usage, we can continuously monitor our production services for contention and blocking profiles, and we can generate analysis and reports and easily can tell what are some of the highly impactful optimization projects we can work on. + +I briefly worked on [Stackdriver Profiler][2], our new product that is filling the gap of cloud-wide profiling service for Cloud users. Note that you DON’T need to run your code on Google Cloud Platform in order to use it. Actually, I use it at development time on a daily basis now. It also supports Java and Node.js. 
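As background, it may help to see the standard-library baseline the article contrasts with: the `net/http/pprof` handlers, which expose on-demand (rather than continuous) profiles over HTTP. A minimal self-contained sketch (port 6060 here is just a conventional choice):

```go
package main

import (
	"fmt"
	"net/http"
	_ "net/http/pprof" // side-effect import: registers /debug/pprof/ on the default mux
	"time"
)

// pprofStatus starts an HTTP server exposing the pprof endpoints on addr and
// returns the status code of the profile index page (0 if it never comes up).
func pprofStatus(addr string) int {
	go func() { _ = http.ListenAndServe(addr, nil) }()
	// Poll until the listener is up (demo only; a real service wouldn't do this).
	for i := 0; i < 50; i++ {
		resp, err := http.Get("http://" + addr + "/debug/pprof/")
		if err == nil {
			resp.Body.Close()
			return resp.StatusCode
		}
		time.Sleep(50 * time.Millisecond)
	}
	return 0
}

func main() {
	fmt.Println("pprof index status:", pprofStatus("localhost:6060"))
}
```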
+
+#### Profiling in production
+
+pprof is safe to use in production. We target an additional 5% overhead for CPU and heap allocation profiling. Collection happens for 10 seconds out of every minute from a single instance. If you have multiple replicas of a Kubernetes pod, we make sure we do amortized collection. For example, if you have 10 replicas of a pod, the overhead will be 0.5%. This makes it possible for users to keep profiling always on.
+
+We currently support CPU, heap, mutex and thread profiles for Go programs.
+
+#### Why?
+
+Before explaining how you can use the profiler in production, it would be helpful to explain why you would ever want to profile in production. Some very common cases are:
+
+* Debug performance problems only visible in production.
+
+* Understand the CPU usage to reduce billing.
+
+* Understand where contention accumulates, and optimize.
+
+* Understand the impact of new releases, e.g. seeing the difference between canary and production.
+
+* Enrich your distributed traces by [correlating][1] them with profiling samples to understand the root cause of latency.
+
+#### Enabling
+
+Stackdriver Profiler doesn't work with the _net/http/pprof_ handlers and requires you to install and configure a one-line agent in your program.
+
+```
+go get cloud.google.com/go/profiler
+```
+
+And in your main function, start the profiler:
+
+```
+if err := profiler.Start(profiler.Config{
+	Service:        "indexing-service",
+	ServiceVersion: "1.0",
+	ProjectID:      "bamboo-project-606", // optional on GCP
+}); err != nil {
+	log.Fatalf("Cannot start the profiler: %v", err)
+}
+```
+
+Once your program is running, the profiler package will report profiles for 10 seconds out of every minute.
+
+#### Visualization
+
+As soon as profiles are reported to the backend, you will start seeing a flamegraph at [https://console.cloud.google.com/profiler][4].
You can filter by tags and change the time span, as well as break down by service name and version. The data is kept for up to 30 days.
+
+![](https://cdn-images-1.medium.com/max/900/1*JdCm1WwmTgExzee5-ZWfNw.gif)
+
+You can choose one of the available profiles; break down by service, zone and version. You can move around in the flame and filter by tags.
+
+#### Reading the flame
+
+Flame graph visualization is explained by [Brendan Gregg][5] very comprehensively. Stackdriver Profiler adds a little bit of its own flavor.
+
+![](https://cdn-images-1.medium.com/max/900/1*QqzFJlV9v7U1s1reYsaXog.png)
+
+We will examine a CPU profile, but everything here also applies to the other profiles.
+
+1. The top-most x-axis represents the entire program. Each box on the flame represents a frame on the call path. The width of the box is proportional to the CPU time spent executing that function.
+
+2. Boxes are sorted from left to right, left being the most expensive call path.
+
+3. Frames from the same package have the same color. All runtime functions are represented with green in this case.
+
+4. You can click on any box to expand the execution tree further.
+
+![](https://cdn-images-1.medium.com/max/900/1*1jCm6f-Fl2mpkRe3-57mTg.png)
+
+You can hover over any box to see detailed information for any frame.
+
+#### Filtering
+
+You can show, hide and highlight by symbol name. These filters are extremely useful if you specifically want to understand the cost of a particular call or package.
+
+![](https://cdn-images-1.medium.com/max/900/1*ka9fA-AAuKggAuIBq_uhGQ.png)
+
+1. Choose your filter. You can combine multiple filters. In this case, we are highlighting runtime.memmove.
+
+2. The flame is going to filter the frames with the filter and visualize the filtered boxes. In this case, it is highlighting all runtime.memmove boxes.
+ +-------------------------------------------------------------------------------- + +via: https://medium.com/google-cloud/continuous-profiling-of-go-programs-96d4416af77b + +作者:[JBD ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://medium.com/@rakyll?source=post_header_lockup +[1]:https://rakyll.org/profiler-labels/ +[2]:https://cloud.google.com/profiler/ +[3]:http://cloud.google.com/go/profiler +[4]:https://console.cloud.google.com/profiler +[5]:http://www.brendangregg.com/flamegraphs.html From 925433ef6c78a75aeddf4be00462ee079ac59312 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 4 May 2018 09:52:42 +0800 Subject: [PATCH 089/102] =?UTF-8?q?20180504-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...cessing with Go using Kafka and MongoDB.md | 273 ++++++++++++++++++ 1 file changed, 273 insertions(+) create mode 100644 sources/tech/20180429 Asynchronous Processing with Go using Kafka and MongoDB.md diff --git a/sources/tech/20180429 Asynchronous Processing with Go using Kafka and MongoDB.md b/sources/tech/20180429 Asynchronous Processing with Go using Kafka and MongoDB.md new file mode 100644 index 0000000000..ba57e40be3 --- /dev/null +++ b/sources/tech/20180429 Asynchronous Processing with Go using Kafka and MongoDB.md @@ -0,0 +1,273 @@ +Asynchronous Processing with Go using Kafka and MongoDB +============================================================ + +In my previous blog post ["My First Go Microservice using MongoDB and Docker Multi-Stage Builds"][9], I created a Go microservice sample which exposes a REST http endpoint and saves the data received from an HTTP POST to a MongoDB database. + +In this example, I decoupled the saving of data to MongoDB and created another microservice to handle this. 
I also added Kafka to serve as the messaging layer so each microservice can work on its own concerns asynchronously.
+
+> In case you have time to watch, I recorded a walkthrough of this blog post in the [video below][1] :)
+
+Here is the high-level architecture of this simple asynchronous processing example with two microservices.
+
+![rest-kafka-mongo-microservice-draw-io](https://www.melvinvivas.com/content/images/2018/04/rest-kafka-mongo-microservice-draw-io.jpg)
+
+Microservice 1 - a REST microservice which receives data from an HTTP POST call. After receiving the request, it retrieves the data from the HTTP request and saves it to Kafka. After saving, it responds to the caller with the same data sent via the POST.
+
+Microservice 2 - a microservice which subscribes to the Kafka topic where Microservice 1 saves the data. Once a message is consumed by the microservice, it then saves the data to MongoDB.
+
+Before you proceed, we need a few things to be able to run these microservices:
+
+1. [Download Kafka][2] - I used version kafka_2.11-1.1.0
+
+2. Install [librdkafka][3] - unfortunately, this library must be present on the target system
+
+3. Install the [Kafka Go Client by Confluent][4]
+
+4. Run MongoDB. You can check my [previous blog post][5] about this, where I used a MongoDB docker image.
+
+Let's get rolling!
+
+Start Kafka first. You need Zookeeper running before you start the Kafka server. Here's how:
+
+```
+$ cd //kafka_2.11-1.1.0
+$ bin/zookeeper-server-start.sh config/zookeeper.properties
+
+```
+
+Then run Kafka - I am using port 9092 to connect to Kafka. If you need to change the port, just configure it in config/server.properties. If you are a beginner like me, I suggest just using the default ports for now.
+
+```
+$ bin/kafka-server-start.sh config/server.properties
+
+```
+
+After running Kafka, we need MongoDB. To make it simple, just use this docker-compose.yml. 
+
+```
+version: '3'
+services:
+  mongodb:
+    image: mongo
+    ports:
+      - "27017:27017"
+    volumes:
+      - "mongodata:/data/db"
+    networks:
+      - network1
+
+volumes:
+  mongodata:
+
+networks:
+  network1:
+
+```
+
+Run the MongoDB docker container using Docker Compose:
+
+```
+docker-compose up
+
+```
+
+Here is the relevant code of Microservice 1. I just modified my previous example to save to Kafka rather than MongoDB.
+
+[rest-to-kafka/rest-kafka-sample.go][10]
+
+```
+func jobsPostHandler(w http.ResponseWriter, r *http.Request) {
+
+    //Retrieve body from http request
+    b, err := ioutil.ReadAll(r.Body)
+    defer r.Body.Close()
+    if err != nil {
+        panic(err)
+    }
+
+    //Save data into Job struct
+    var _job Job
+    err = json.Unmarshal(b, &_job)
+    if err != nil {
+        http.Error(w, err.Error(), 500)
+        return
+    }
+
+    saveJobToKafka(_job)
+
+    //Convert job struct into json
+    jsonString, err := json.Marshal(_job)
+    if err != nil {
+        http.Error(w, err.Error(), 500)
+        return
+    }
+
+    //Set content-type http header
+    w.Header().Set("content-type", "application/json")
+
+    //Send back data as response
+    w.Write(jsonString)
+
+}
+
+func saveJobToKafka(job Job) {
+
+    fmt.Println("save to kafka")
+
+    jsonString, err := json.Marshal(job)
+    if err != nil {
+        panic(err)
+    }
+
+    jobString := string(jsonString)
+    fmt.Print(jobString)
+
+    p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
+    if err != nil {
+        panic(err)
+    }
+
+    // Produce messages to topic (asynchronously)
+    topic := "jobs-topic1"
+    for _, word := range []string{string(jobString)} {
+        p.Produce(&kafka.Message{
+            TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
+            Value:          []byte(word),
+        }, nil)
+    }
+}
+
+```
+
+Here is the code of Microservice 2. What matters in this code is the consumption from Kafka; I already discussed the saving part in my previous blog post. Here are the important parts of the code which consume the data from Kafka. 
+
+[kafka-to-mongo/kafka-mongo-sample.go][11]
+
+```
+
+func main() {
+
+    //Create MongoDB session
+    session := initialiseMongo()
+    mongoStore.session = session
+
+    receiveFromKafka()
+
+}
+
+func receiveFromKafka() {
+
+    fmt.Println("Start receiving from Kafka")
+    c, err := kafka.NewConsumer(&kafka.ConfigMap{
+        "bootstrap.servers": "localhost:9092",
+        "group.id":          "group-id-1",
+        "auto.offset.reset": "earliest",
+    })
+
+    if err != nil {
+        panic(err)
+    }
+
+    c.SubscribeTopics([]string{"jobs-topic1"}, nil)
+
+    for {
+        msg, err := c.ReadMessage(-1)
+
+        if err == nil {
+            fmt.Printf("Received from Kafka %s: %s\n", msg.TopicPartition, string(msg.Value))
+            job := string(msg.Value)
+            saveJobToMongo(job)
+        } else {
+            fmt.Printf("Consumer error: %v (%v)\n", err, msg)
+            break
+        }
+    }
+
+    c.Close()
+
+}
+
+func saveJobToMongo(jobString string) {
+
+    fmt.Println("Save to MongoDB")
+    col := mongoStore.session.DB(database).C(collection)
+
+    //Save data into Job struct
+    var _job Job
+    b := []byte(jobString)
+    err := json.Unmarshal(b, &_job)
+    if err != nil {
+        panic(err)
+    }
+
+    //Insert job into MongoDB
+    errMongo := col.Insert(_job)
+    if errMongo != nil {
+        panic(errMongo)
+    }
+
+    fmt.Printf("Saved to MongoDB : %s", jobString)
+
+}
+
+```
+
+Let's get down to the demo. Run Microservice 1, and make sure Kafka is running.
+
+```
+$ go run rest-kafka-sample.go
+
+```
+
+I used Postman to send data to Microservice 1.
+
+![Screenshot-2018-04-29-22.20.33](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.20.33.png)
+
+Here is the log you will see in Microservice 1. Once you see this, it means data has been received from Postman and saved to Kafka.
+
+![Screenshot-2018-04-29-22.22.00](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.22.00.png)
+
+Since we are not running Microservice 2 yet, the data saved by Microservice 1 will just stay in Kafka. Let's consume it and save it to MongoDB by running Microservice 2. 
+
+```
+$ go run kafka-mongo-sample.go
+
+```
+
+Now you'll see that Microservice 2 consumes the data and saves it to MongoDB:
+
+![Screenshot-2018-04-29-22.24.15](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.24.15.png)
+
+Check if the data is saved in MongoDB. If it is there, we're good!
+
+![Screenshot-2018-04-29-22.26.39](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.26.39.png)
+
+The complete source code can be found here:
+[https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice][12]
+
+Shameless plug! If you like this blog post, please follow me on Twitter [@donvito][6]. I tweet about Docker, Kubernetes, GoLang, Cloud, DevOps, Agile and Startups. Would love to connect on [GitHub][7] and [LinkedIn][8].
+
+[VIDEO](https://youtu.be/xa0Yia1jdu8)
+
+Enjoy!
+
+--------------------------------------------------------------------------------
+
+via: https://www.melvinvivas.com/developing-microservices-using-kafka-and-mongodb/
+
+作者:[Melvin Vivas ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.melvinvivas.com/author/melvin/
+[1]:https://www.melvinvivas.com/developing-microservices-using-kafka-and-mongodb/#video1
+[2]:https://kafka.apache.org/downloads
+[3]:https://github.com/confluentinc/confluent-kafka-go
+[4]:https://github.com/confluentinc/confluent-kafka-go
+[5]:https://www.melvinvivas.com/my-first-go-microservice/
+[6]:https://twitter.com/donvito
+[7]:https://github.com/donvito
+[8]:https://www.linkedin.com/in/melvinvivas/
+[9]:https://www.melvinvivas.com/my-first-go-microservice/
+[10]:https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice/rest-to-kafka
+[11]:https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice/kafka-to-mongo 
+[12]:https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice

From a85a94d4be86b1ff8570da2127875f392abe4516 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Fri, 4 May 2018 10:04:16 +0800
Subject: [PATCH 090/102] =?UTF-8?q?20180504-5=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ... Common Concurrent Programming Mistakes.md | 429 ++++++++++++++++++
 1 file changed, 429 insertions(+)
 create mode 100644 sources/tech/20180415 Some Common Concurrent Programming Mistakes.md

diff --git a/sources/tech/20180415 Some Common Concurrent Programming Mistakes.md b/sources/tech/20180415 Some Common Concurrent Programming Mistakes.md
new file mode 100644
index 0000000000..3bec5ee229
--- /dev/null
+++ b/sources/tech/20180415 Some Common Concurrent Programming Mistakes.md
@@ -0,0 +1,429 @@
+Some Common Concurrent Programming Mistakes
+============================================================
+
+Go is a language supporting built-in concurrent programming. By using the `go` keyword to create goroutines (lightweight threads) and by [using][8] [channels][9] and [other concurrency][10] [synchronization techniques][11] provided in Go, concurrent programming becomes easy, flexible and enjoyable.
+
+On the other hand, Go doesn't prevent Go programmers from making concurrent programming mistakes caused by either carelessness or lack of experience. The remainder of this article will show some common mistakes in Go concurrent programming, to help Go programmers avoid making them.
+
+### No Synchronizations When Synchronizations Are Needed
+
+Code lines may not be [executed in their appearance order][2].
+
+There are two mistakes in the following program.
+
+* First, the read of `b` in the main goroutine and the write of `b` in the new goroutine might cause data races.
+
+* Second, the condition `b == true` can't ensure that `a != nil` in the main goroutine. 
Compilers and CPUs may make optimizations by [reordering instructions][1] in the new goroutine, so the assignment of `b` may happen before the assignment of `a` at run time, which means slice `a` may still be `nil` when the elements of `a` are modified in the main goroutine.
+
+```
+package main
+
+import (
+    "time"
+    "runtime"
+)
+
+func main() {
+    var a []int // nil
+    var b bool  // false
+
+    // a new goroutine
+    go func() {
+        a = make([]int, 3)
+        b = true // write b
+    }()
+
+    for !b { // read b
+        time.Sleep(time.Second)
+        runtime.Gosched()
+    }
+    a[0], a[1], a[2] = 0, 1, 2 // might panic
+}
+
+```
+
+The above program may run well on one computer, but may panic on another one. Or it may run well _N_ times, but panic the _(N+1)_-th time.
+
+We should use channels or the synchronization techniques provided in the `sync` standard package to ensure the memory orders. For example,
+
+```
+package main
+
+func main() {
+    var a []int = nil
+    c := make(chan struct{})
+
+    // a new goroutine
+    go func() {
+        a = make([]int, 3)
+        c <- struct{}{}
+    }()
+
+    <-c
+    a[0], a[1], a[2] = 0, 1, 2
+}
+
+```
+
+### Use `time.Sleep` Calls To Do Synchronizations
+
+Let's view a simple example.
+
+```
+package main
+
+import (
+    "fmt"
+    "time"
+)
+
+func main() {
+    var x = 123
+
+    go func() {
+        x = 789 // write x
+    }()
+
+    time.Sleep(time.Second)
+    fmt.Println(x) // read x
+}
+
+```
+
+We expect the program to print `789`. If we run it, it really prints `789`, almost always. But is it a program with good synchronization? No! The reason is that the Go runtime doesn't guarantee the write of `x` happens before the read of `x`. Under certain conditions, such as when most CPU resources are consumed by other programs running on the same OS, the write of `x` might happen after the read of `x`. This is why we should never use `time.Sleep` calls to do synchronizations in formal projects.
+
+Let's view another example. 
+
+```
+package main
+
+import (
+    "fmt"
+    "time"
+)
+
+var x = 0
+
+func main() {
+    var num = 123
+    var p = &num
+
+    c := make(chan int)
+
+    go func() {
+        c <- *p + x
+    }()
+
+    time.Sleep(time.Second)
+    num = 789
+    fmt.Println(<-c)
+}
+
+```
+
+What do you expect the program will output? `123`, or `789`? In fact, the output is compiler dependent. For the standard Go compiler 1.10, it is very possible the program will output `123`. But in theory, it might output `789`, or another random number.
+
+Now, let's change `c <- *p + x` to `c <- *p` and run the program again. You will find the output becomes `789` (for the standard Go compiler 1.10). Again, the output is compiler dependent.
+
+Yes, there are data races in the above program. The expression `*p` might be evaluated before, after, or while the assignment `num = 789` is processed. The `time.Sleep` call can't guarantee the evaluation of `*p` happens before the assignment is processed.
+
+For this specific example, we should store the value to be sent in a temporary variable before creating the new goroutine, and send the temporary value instead in the new goroutine to remove the data races.
+
+```
+...
+    tmp := *p + x
+    go func() {
+        c <- tmp
+    }()
+...
+
+```
+
+### Leave Goroutines Hanging
+
+Hanging goroutines are goroutines that stay in blocking state forever. There are many ways to leave goroutines hanging. For example,
+
+* a goroutine tries to receive a value from a nil channel or from a channel to which no other goroutines will ever send values.
+
+* a goroutine tries to send a value to a nil channel or to a channel from which no other goroutines will ever receive values.
+
+* a goroutine is deadlocked by itself.
+
+* a group of goroutines are deadlocked by each other.
+
+* a goroutine is blocked when executing a `select` code block without a `default` branch, and all the channel operations following the `case` keywords in the `select` code block keep blocking forever. 
+
+Except when we deliberately leave the main goroutine of a program hanging to prevent the program from exiting, most other hanging goroutine cases are unexpected. It is hard for the Go runtime to judge whether a goroutine in blocking state is hanging or just blocked temporarily. So the Go runtime will never release the resources consumed by a hanging goroutine.
+
+In the [first-response-wins][12] channel use case, if the capacity of the channel which is used as a future is not large enough, some slower response goroutines will hang when trying to send a result to the future channel. For example, if the following function is called, there will be 4 goroutines staying in blocking state forever.
+
+```
+func request() int {
+    c := make(chan int)
+    for i := 0; i < 5; i++ {
+        i := i
+        go func() {
+            c <- i // 4 goroutines will hang here.
+        }()
+    }
+    return <-c
+}
+
+```
+
+To avoid the four goroutines hanging, the capacity of channel `c` must be at least `4`.
+
+In [the second way to implement the first-response-wins][13] channel use case, if the channel which is used as a future is an unbuffered channel, it is possible that the channel receiver will never get a response and will hang. For example, if the following function is called in a goroutine, the goroutine might hang. The reason is, if the five try-send operations all happen before the receive operation `<-c` is ready, then all five try-send operations will fail to send values, so the caller goroutine will never receive a value.
+
+```
+func request() int {
+    c := make(chan int)
+    for i := 0; i < 5; i++ {
+        i := i
+        go func() {
+            select {
+            case c <- i:
+            default:
+            }
+        }()
+    }
+    return <-c
+}
+
+```
+
+Changing the channel `c` to a buffered channel will guarantee that at least one of the five try-send operations succeeds, so the caller goroutine will never hang in the above function. 
+
+### Copy Values Of The Types In The `sync` Standard Package
+
+In practice, values of the types in the `sync` standard package shouldn't be copied. We should only copy pointers to such values.
+
+The following is a bad concurrent programming example. In this example, when the `Counter.Value` method is called, a `Counter` receiver value will be copied. As a field of the receiver value, the respective `Mutex` field of the `Counter` receiver value will also be copied. The copy is not synchronized, so the copied `Mutex` value might be corrupt. Even if it is not corrupt, what it protects is access to the copied `Counter` receiver value, which is generally meaningless.
+
+```
+import "sync"
+
+type Counter struct {
+    sync.Mutex
+    n int64
+}
+
+// This method is okay.
+func (c *Counter) Increase(d int64) (r int64) {
+    c.Lock()
+    c.n += d
+    r = c.n
+    c.Unlock()
+    return
+}
+
+// The method is bad. When it is called, a Counter
+// receiver value will be copied.
+func (c Counter) Value() (r int64) {
+    c.Lock()
+    r = c.n
+    c.Unlock()
+    return
+}
+
+```
+
+We should change the receiver type of the `Value` method to the pointer type `*Counter` to avoid copying `Mutex` values.
+
+The `go vet` command provided in the official Go SDK will report potential bad value copies.
+
+### Call Methods Of `sync.WaitGroup` At Wrong Places
+
+Each `sync.WaitGroup` value maintains a counter internally. The initial value of the counter is zero. If the counter of a `WaitGroup` value is zero, a call to the `Wait` method of the `WaitGroup` value will not block; otherwise, the call blocks until the counter value becomes zero.
+
+To make the use of a `WaitGroup` value meaningful, when the counter of a `WaitGroup` value is zero, a call to the `Add` method of the `WaitGroup` value must happen before the corresponding call to the `Wait` method of the `WaitGroup` value. 
+
+For example, in the following program, the `Add` method is called at an improper place, which means the final printed number is not always `100`. In fact, the final printed number of the program may be an arbitrary number in the range `[0, 100)`. The reason is that none of the `Add` method calls are guaranteed to happen before the `Wait` method call.
+
+```
+package main
+
+import (
+    "fmt"
+    "sync"
+    "sync/atomic"
+)
+
+func main() {
+    var wg sync.WaitGroup
+    var x int32 = 0
+    for i := 0; i < 100; i++ {
+        go func() {
+            wg.Add(1)
+            atomic.AddInt32(&x, 1)
+            wg.Done()
+        }()
+    }
+
+    fmt.Println("To wait ...")
+    wg.Wait()
+    fmt.Println(atomic.LoadInt32(&x))
+}
+
+```
+
+To make the program behave as expected, we should move the `Add` method calls out of the new goroutines created in the `for` loop, as shown in the following code.
+
+```
+...
+    for i := 0; i < 100; i++ {
+        wg.Add(1)
+        go func() {
+            atomic.AddInt32(&x, 1)
+            wg.Done()
+        }()
+    }
+...
+
+```
+
+### Use Channels As Futures Improperly
+
+From the article [channel use cases][14], we know that some functions will return [channels as futures][15]. Assume `fa` and `fb` are two such functions; then the following call uses its future arguments improperly.
+
+```
+doSomethingWithFutureArguments(<-fa(), <-fb())
+
+```
+
+In the above code line, the two channel receive operations are processed sequentially, instead of concurrently. We should modify it as follows to process them concurrently.
+
+```
+ca, cb := fa(), fb()
+doSomethingWithFutureArguments(<-ca, <-cb)
+
+```
+
+### Close Channels Not From The Last Active Sender Goroutine
+
+A common mistake made by Go programmers is closing a channel while some other goroutines may still potentially send values to it later. When such a potential send (to the closed channel) really happens, a panic will occur.
+
+This mistake has been made in some famous Go projects, such as [this bug][3] and [this bug][4] in the kubernetes project. 
+
+Please read [this article][5] for explanations on how to safely and gracefully close channels.
+
+### Do 64-bit Atomic Operations On Values Which Are Not Guaranteed To Be 64-bit Aligned
+
+Up to now (Go 1.10), for the standard Go compiler, the address of the value involved in a 64-bit atomic operation is required to be 64-bit aligned. Failure to ensure this may make the current goroutine panic. For the standard Go compiler, such failure can only happen on 32-bit architectures. Please read [memory layouts][6] to learn how to guarantee that the addresses of 64-bit words are 64-bit aligned on 32-bit OSes.
+
+### Not Paying Attention To How Many Resources Are Consumed By Calls To The `time.After` Function
+
+The `After` function in the `time` standard package returns [a channel for delay notification][7]. The function is convenient, but each of its calls creates a new value of the `time.Timer` type. The newly created `Timer` value will stay alive for the duration specified by the argument passed to the `After` function. If the function is called many times within that duration, there will be many `Timer` values alive, consuming a lot of memory and computation.
+
+For example, if the following `longRunning` function is called and there are millions of messages coming in one minute, then there will be millions of `Timer` values alive in a certain period, even if most of these `Timer` values have already become useless.
+
+```
+import (
+    "fmt"
+    "time"
+)
+
+// The function will return if a message arrival interval
+// is larger than one minute.
+func longRunning(messages <-chan string) {
+    for {
+        select {
+        case <-time.After(time.Minute):
+            return
+        case msg := <-messages:
+            fmt.Println(msg)
+        }
+    }
+}
+
+```
+
+To avoid too many `Timer` values being created in the above code, we should use a single `Timer` value to do the same job. 
+
+```
+func longRunning(messages <-chan string) {
+    timer := time.NewTimer(time.Minute)
+    defer timer.Stop()
+
+    for {
+        select {
+        case <-timer.C:
+            return
+        case msg := <-messages:
+            fmt.Println(msg)
+            if !timer.Stop() {
+                <-timer.C
+            }
+        }
+
+        // The above "if" block can also be put here.
+
+        timer.Reset(time.Minute)
+    }
+}
+
+```
+
+### Use `time.Timer` Values Incorrectly
+
+An idiomatic use example of a `time.Timer` value has been shown in the last section. One detail which should be noted is that the `Reset` method should always be invoked on stopped or expired `time.Timer` values.
+
+At the end of the first `case` branch of the `select` block, the `time.Timer` value has expired, so we don't need to stop it. But we must stop the timer in the second branch. If the `if` code block in the second branch is missing, it is possible that a send (by the Go runtime) to the channel `timer.C` races with the `Reset` method call, and it is possible that the `longRunning` function returns earlier than expected, because the `Reset` method will only reset the internal timer to zero; it will not clear (drain) the value which has already been sent to the `timer.C` channel.
+
+For example, the following program is very likely to exit in about one second, instead of ten seconds. More importantly, the program is not data race free.
+
+```
+package main
+
+import (
+    "fmt"
+    "time"
+)
+
+func main() {
+    start := time.Now()
+    timer := time.NewTimer(time.Second/2)
+    select {
+    case <-timer.C:
+    default:
+        time.Sleep(time.Second) // go here
+    }
+    timer.Reset(time.Second * 10)
+    <-timer.C
+    fmt.Println(time.Since(start)) // 1.000188181s
+}
+
+```
+
+A `time.Timer` value can be left in non-stopped status when it is not used any more, but it is recommended to stop it in the end.
+
+It is bug prone and not recommended to use a `time.Timer` value concurrently in multiple goroutines.
+
+We should not rely on the return value of a `Reset` method call. 
The return result of the `Reset` method exists just for compatibility purpose. + +-------------------------------------------------------------------------------- + +via: https://go101.org/article/concurrent-common-mistakes.html + +作者:[go101.org ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:go101.org +[1]:https://go101.org/article/memory-model.html +[2]:https://go101.org/article/memory-model.html +[3]:https://github.com/kubernetes/kubernetes/pull/45291/files?diff=split +[4]:https://github.com/kubernetes/kubernetes/pull/39479/files?diff=split +[5]:https://go101.org/article/channel-closing.html +[6]:https://go101.org/article/memory-layout.html +[7]:https://go101.org/article/channel-use-cases.html#timer +[8]:https://go101.org/article/channel-use-cases.html +[9]:https://go101.org/article/channel.html +[10]:https://go101.org/article/concurrent-atomic-operation.html +[11]:https://go101.org/article/concurrent-synchronization-more.html +[12]:https://go101.org/article/channel-use-cases.html#first-response-wins +[13]:https://go101.org/article/channel-use-cases.html#first-response-wins-2 +[14]:https://go101.org/article/channel-use-cases.html +[15]:https://go101.org/article/channel-use-cases.html#future-promise From 17c836b003176ccf43ea584138bafcae8ba6fa39 Mon Sep 17 00:00:00 2001 From: songshunqiang Date: Fri, 4 May 2018 11:08:11 +0800 Subject: [PATCH 091/102] complete content and links --- .../tech/20180416 Running Jenkins builds in containers.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/sources/tech/20180416 Running Jenkins builds in containers.md b/sources/tech/20180416 Running Jenkins builds in containers.md index 1b03413c3a..f820ddaf56 100644 --- a/sources/tech/20180416 Running Jenkins builds in containers.md +++ b/sources/tech/20180416 Running Jenkins builds in containers.md @@ -56,6 +56,9 @@ Note: This solution is not 
related to OpenShift's [Source-to-Image (S2I)][15] bu There are several good blogs and documentation about Jenkins builds on OpenShift. The following are good to start with: + * [OpenShift Jenkins][29] image documentation and [source][30] + * [CI/CD with OpenShift][31] webcast + * [External Jenkins Integration][32] playbook Take a look at them to understand the overall solution. In this article, we'll look at the different issues that come up while applying those practices. ### Build my application @@ -954,3 +957,7 @@ via: https://opensource.com/article/18/4/running-jenkins-builds-containers [26]:http://maven.apache.org/surefire/maven-surefire-plugin/examples/fork-options-and-parallel-execution.html [27]:http://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#argLine [28]:https://itnext.io/running-jenkins-builds-in-containers-458e90ff2a7b +[29]:https://docs.openshift.com/container-platform/3.7/using_images/other_images/jenkins.html +[30]:https://github.com/openshift/jenkins +[31]:https://blog.openshift.com/cicd-with-openshift/ +[32]:http://v1.uncontained.io/playbooks/continuous_delivery/external-jenkins-integration.html From c9594735468aec2c681ec01fdf5f63634b14a7b2 Mon Sep 17 00:00:00 2001 From: songshunqiang Date: Fri, 4 May 2018 11:10:29 +0800 Subject: [PATCH 092/102] complete content and links --- sources/tech/20180416 Running Jenkins builds in containers.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180416 Running Jenkins builds in containers.md b/sources/tech/20180416 Running Jenkins builds in containers.md index f820ddaf56..5eb00ecc8e 100644 --- a/sources/tech/20180416 Running Jenkins builds in containers.md +++ b/sources/tech/20180416 Running Jenkins builds in containers.md @@ -59,6 +59,7 @@ There are several good blogs and documentation about Jenkins builds on OpenShift * [OpenShift Jenkins][29] image documentation and [source][30] * [CI/CD with OpenShift][31] webcast * [External Jenkins Integration][32] playbook + Take a 
look at them to understand the overall solution. In this article, we'll look at the different issues that come up while applying those practices. ### Build my application From 1d8a0cca50e7ac76490269e2e90db56c27934123 Mon Sep 17 00:00:00 2001 From: songshunqiang Date: Fri, 4 May 2018 12:26:14 +0800 Subject: [PATCH 093/102] submit tech/20180416 Running Jenkins builds in containers.md --- ...16 Running Jenkins builds in containers.md | 239 ++++++++---------- 1 file changed, 108 insertions(+), 131 deletions(-) rename {sources => translated}/tech/20180416 Running Jenkins builds in containers.md (50%) diff --git a/sources/tech/20180416 Running Jenkins builds in containers.md b/translated/tech/20180416 Running Jenkins builds in containers.md similarity index 50% rename from sources/tech/20180416 Running Jenkins builds in containers.md rename to translated/tech/20180416 Running Jenkins builds in containers.md index 5eb00ecc8e..f967436aa8 100644 --- a/sources/tech/20180416 Running Jenkins builds in containers.md +++ b/translated/tech/20180416 Running Jenkins builds in containers.md @@ -1,94 +1,76 @@ -pinewall translating - -Running Jenkins builds in containers +在容器中运行 Jenkins 构建 ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf) -Running applications in containers has become a well-accepted practice in the enterprise sector, as [Docker][1] with [Kubernetes][2] (K8s) now provides a scalable, manageable application platform. The container-based approach also suits the [microservices architecture][3] that's gained significant momentum in the past few years. +由于 [Docker][1] 和 [Kubernetes][2] (K8s) 目前提供了可扩展、可管理的应用平台,将应用运行在容器中的实践已经被企业广泛接受。近些年势头很猛的[微服务架构][3]也很适合用容器实现。 -One of the most important advantages of a container application platform is the ability to dynamically bring up isolated containers with resource limits. 
Let's check out how this can change the way we run our continuous integration/continuous development (CI/CD) tasks. +容器应用平台可以动态启动指定资源配额、互相隔离的容器,这是其最主要的优势之一。让我们看看这会对我们运行持续集成/持续部署 (continuous integration/continuous development, CI/CD) 任务的方式产生怎样的改变。 -Building and packaging an application requires an environment that can download the source code, access dependencies, and have the build tools installed. Running unit and component tests as part of the build may use local ports or require third-party applications (e.g., databases, message brokers, etc.) to be running. In the end, we usually have multiple, pre-configured build servers with each running a certain type of job. For tests, we maintain dedicated instances of third-party apps (or try to run them embedded) and avoid running jobs in parallel that could mess up each other's outcome. The pre-configuration for such a CI/CD environment can be a hassle, and the required number of servers for different jobs can significantly change over time as teams shift between versions and development platforms. +构建并打包应用需要一定的环境,要求能够下载源代码、使用相关依赖及已经安装构建工具。作为构建的一部分,运行单元及组件测试可能会用到本地端口或需要运行第三方应用 (例如数据库及消息中间件等)。另外,我们一般定制化多台构建服务器,每台执行一种指定类型的构建任务。为方便测试,我们维护一些实例专门用于运行第三方应用 (或者试图在构建服务器上启动这些第三方应用),避免并行运行构建任务防止结果互相干扰。为 CI/CD 环境定制化构建服务器是一项繁琐的工作,而且随着开发团队使用的开发平台或其版本变更,所需构建服务器的数量也会变更。 -Once we have access to a container platform (onsite or in the cloud), it makes sense to move the resource-intensive CI/CD task executions into dynamically created containers. In this scenario, build environments can be independently started and configured for each job execution. Tests during the build have free reign to use available resources in this isolated box, while we can also bring up a third-party application in a side container that exists only for this job's lifecycle. 
+一旦我们有了容器管理平台 (自建或在云端),将资源密集型的 CI/CD 任务在动态生成的容器中执行是比较合理的。在这种方案中,每个构建任务运行在独立启动并配置的构建环境中。构建过程中,构建任务的测试环节可以任意使用隔离环境中的可用资源;此外,我们也可以在辅助容器中启动一个第三方应用,只在构建任务生命周期中为测试提供服务。 -It sounds nice… Let's see how it works in real life. +听上去不错,让我们在现实环境中实践一下。 -Note: This article is based on a real-world solution for a project running on a [Red Hat OpenShift][4] v3.7 cluster. OpenShift is the enterprise-ready version of Kubernetes, so these practices work on a K8s cluster as well. To try, download the [Red Hat CDK][5] and run the `jenkins-ephemeral` or `jenkins-persistent` [templates][6] that create preconfigured Jenkins masters on OpenShift. +注:本文基于现实中已有的解决方案,即在 [Red Hat OpenShift][4] v3.7 集群上运行项目。OpenShift 是企业就绪版本的 Kubernetes,故这些实践也适用于 K8s 集群。如果愿意尝试,可以下载 [Red Hat CDK][5],运行 `jenkins-ephemeral` 或 `jenkins-persistent` [模板][6]在 OpenShift 上创建定制化好的 Jenkins 管理节点。 -### Solution overview +### 解决方案概述 -The solution to executing CI/CD tasks (builds, tests, etc.) in containers on OpenShift is based on [Jenkins distributed builds][7], which means: +在 OpenShift 容器中执行 CI/CD 任务 (构建和测试等) 的方案基于[分布式 Jenkins 构建][7],具体如下: + * 我们需要一个 Jenkins 主节点;可以运行在集群中,也可以是外部提供 + * 支持 Jenkins 特性和插件,故已有项目仍可使用 + * 可以用 Jenkins GUI 配置、运行任务或查看任务输出 + * 如果你愿意编码,也可以使用 [Jenkins Pipeline][8] - * We need a Jenkins master; it may run inside the cluster but also works with an external master - * Jenkins features/plugins are available as usual, so existing projects can be used - * The Jenkins GUI is available to configure, run, and browse job output - * if you prefer code, [Jenkins Pipeline][8] is also available - - - -From a technical point of view, the dynamic containers to run jobs are Jenkins agent nodes. When a build kicks off, first a new node starts and "reports for duty" to the Jenkins master via JNLP (port 5000). The build is queued until the agent node comes up and picks up the build. 
The build output is sent back to the master—just like with regular Jenkins agent servers—but the agent container is shut down once the build is done.
+从技术角度来看,运行任务的动态容器是 Jenkins 代理节点。当构建启动时,首先是一个新节点启动,通过 Jenkins 主节点的 JNLP (5000 端口) 告知就绪状态。在代理节点启动并提取构建任务之前,构建任务处于排队状态。就像通常 Jenkins 代理服务器那样,构建输出会送达主节点;不同的是,构建完成后代理节点容器会自动关闭。

![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/1_running_jenkinsincontainers.png?itok=fR4ntnn8)

-Different kinds of builds (e.g., Java, NodeJS, Python, etc.) need different agent nodes. This is nothing new—labels could previously be used to restrict which agent nodes should run a build. To define the config for these Jenkins agent containers started for each job, we will need to set the following:
+不同类型的构建任务 (例如 Java, NodeJS, Python 等) 对应不同的代理节点。这并不新奇,之前也是使用标签来限制哪些代理节点可以运行指定的构建任务。启动用于构建任务的 Jenkins 代理节点容器需要配置参数,具体如下:

+  * 用于启动容器的 Docker 镜像
+  * 资源限制
+  * 环境变量
+  * 挂载卷

-  * The Docker image to boot up
-  * Resource limits
-  * Environment variables
-  * Volumes mounted

+这里用到的关键组件是 [Jenkins Kubernetes 插件][9]。该插件 (通过使用一个服务账号) 与 K8s 集群交互,可以启动和关闭代理节点。在插件的配置管理中,多种代理节点类型表现为多种 Kubernetes pod 模板,它们通过项目标签对应。

+这些[代理节点镜像][10]以开箱即用的方式提供 (也有 [CentOS7][11] 系统的版本):

+  * [jenkins-slave-base-rhel7][12]:基础镜像,启动与 Jenkins 主节点连接的代理节点;其中 Java 堆大小根据容器内存设置
+  * [jenkins-slave-maven-rhel7][13]:用于 Maven 和 Gradle 构建的镜像 (从基础镜像扩展)
+  * [jenkins-slave-nodejs-rhel7][14]:包含 NodeJS4 工具的镜像 (从基础镜像扩展)

+注意:本解决方案与 OpenShift 中的 [Source-to-Image (S2I)][15] 构建不同,虽然后者也可以用于某些特定的 CI/CD 任务。

-The core component here is the [Jenkins Kubernetes plugin][9]. This plugin interacts with the K8s cluster (by using a ServiceAccount) and starts/stops the agent nodes. Multiple agent types can be defined as Kubernetes pod templates under the plugin's configuration (refer to them by label in projects).
+### 入门学习资料 -These [agent images][10] are provided out of the box (also on [CentOS7][11]): +有很多不错的博客和文档介绍了如何在 OpenShift 上执行 Jenkins 构建。不妨从下面这些开始: + * [OpenShift Jenkins][29] 镜像文档及 [源代码][30] + * 网络直播[基于 OpenShift 的 CI/CD][31] + * [外部 Jenkins 集成][32] playbook - * [jenkins-slave-base-rhel7][12]: Base image starting the agent that connects to Jenkins master; the Java heap is set according to container memory - * [jenkins-slave-maven-rhel7][13]: Image for Maven and Gradle builds (extends base) - * [jenkins-slave-nodejs-rhel7][14]: Image with NodeJS4 tools (extends base) +阅读这些博客和文档有助于完整的理解本解决方案。在本文中,我们主要关注具体实践中遇到的各类问题。 +### 构建我的应用 +作为[示例项目][16],我们选取了包含如下构建步骤的 Java 项目: -Note: This solution is not related to OpenShift's [Source-to-Image (S2I)][15] build, which can also be used for certain CI/CD tasks. + * **代码源:** 从一个Git代码库中获取项目代码 + * **使用 Maven 编译:** 依赖可从内部仓库获取,(不妨使用 Apache Nexus) 从外部 Maven 仓库镜像 + * **发布 artifact:** 将编译好的 JAR 上传至内部仓库 -### Background learning material +在 CI/CD 过程中,我们需要与 Git 和 Nexus 交互,故 Jenkins 任务需要能够访问这些系统。这涉及参数配置和已存储凭证,可以在下列位置进行存放及管理: + * **在 Jenkins 中:** 我们可以在 Jenkins 中添加凭证,通过 Git 插件获取项目代码 (使用容器不会改变操作) + * **在 OpenShift 中:** 使用 ConfigMap 和 Secret 对象,以文件或环境变量的形式附加到 Jenkins 代理容器中 + * **在高度定制化的 Docker 容器中:** 镜像是定制化的,已包含完成特定类型构建的全部特性;从一个代理镜像进行扩展即可得到。 -There are several good blogs and documentation about Jenkins builds on OpenShift. The following are good to start with: +你可以按自己的喜好选择一种实现方式,甚至你最终可能混用多种实现方式。下面我们采用第二种实现方式,即首选在 OpenShift 中管理参数配置。使用 Kubernetes 插件配置来定制化 Maven 代理容器,包括设置环境变量和映射文件等。 - * [OpenShift Jenkins][29] image documentation and [source][30] - * [CI/CD with OpenShift][31] webcast - * [External Jenkins Integration][32] playbook +注意:对于 Kubernetes 插件 v1.0 版,由于 [bug][17],在 UI 界面增加环境变量并不生效。可以升级插件,或 (作为变通方案) 直接修改 `config.xml` 文件并重启 Jenkins。 -Take a look at them to understand the overall solution. In this article, we'll look at the different issues that come up while applying those practices. 
+### 从 Git 获取源代码 -### Build my application - -For our [example][16], let's assume a Java project with the following build steps: - - * **Source:** Pull project source from a Git repository - * **Build with Maven:** Dependencies come from an internal repository (let's use Apache Nexus) mirroring external Maven repos - * **Deploy artifact:** The built JAR is uploaded to the repository - - - -During the CI/CD process, we need to interact with Git and Nexus, so the Jenkins jobs have be able to access those systems. This requires configuration and stored credentials that can be managed at different places: - - * **In Jenkins:** We can add credentials to Jenkins that the Git plugin can use and add files to the project (using containers doesn't change anything). - * **In OpenShift:** Use ConfigMap and secret objects that are added to the Jenkins agent containers as files or environment variables. - * **In a fully customized Docker image:** These are pre-configured with everything to run a type of job; just extend one of the agent images. - - - -Which approach you use is a question of taste, and your final solution may be a mix. Below we'll look at the second option, where the configuration is managed primarily in OpenShift. Customize the Maven agent container via the Kubernetes plugin configuration by setting environment variables and mounting files. - -Note: Adding environment variables through the UI doesn't work with Kubernetes plugin v1.0 due to a [bug][17]. Either update the plugin or (as a workaround) edit `config.xml` directly and restart Jenkins. - -### Pull source from Git - -Pulling a public Git is trivial. For a private Git repo, authentication is required and the client also needs to trust the server for a secure connection. A Git pull can typically be done via two protocols: - - * HTTPS: Authentication is with username/password. The server's SSL certificate must be trusted by the job, which is only tricky if it's signed by a custom CA. 
+从公共 Git 仓库获取源代码很容易。但对于私有 Git 仓库,不仅需要认证操作,客户端还需要信任服务器以便建立安全连接。一般而言,通过两种协议获取源代码: + * HTTPS:验证通过用户名/密码完成。Git 服务器的 SSL 证书必须被代理节点信任,这仅在证书被自建 CA 签名时才需要特别关注。 ``` @@ -96,7 +78,7 @@ git clone https://git.mycompany.com:443/myapplication.git ``` - * SSH: Authentication is with a private key. The server is trusted when its public key's fingerprint is found in the `known_hosts` file. + * SSH:验证通过私钥完成。如果服务器的公钥指纹出现在 `known_hosts` 文件中,那么该服务器是被信任的。 ``` @@ -104,24 +86,23 @@ git clone ssh://git@git.mycompany.com:22/myapplication.git ``` -Downloading the source through HTTP with username/password is OK when it's done manually; for automated builds, SSH is better. +对于手动操作,使用用户名/密码通过 HTTP 方式下载源代码是可行的;但对于自动构建而言,SSH 是更佳的选择。 -#### Git with SSH +#### 通过 SSH 方式使用 Git -For a SSH download, we need to ensure that the SSH connection works between the agent container and the Git's SSH port. First, we need a private-public key pair. To generate one, run: +要通过 SSH 方式下载源代码,我们需要保证代理容器与 Git 的 SSH 端口之间可以建立 SSH 连接。首先,我们需要创建一个私钥-公钥对。使用如下命令生成: ``` ssh keygen -t rsa -b 2048 -f my-git-ssh -N '' ``` -It generates a private key in `my-git-ssh` (empty passphrase) and the matching public key in `my-git-ssh.pub`. Add the public key to the user on the Git server (preferably a ServiceAccount); web UIs usually support upload. To make the SSH connection work, we need two files on the agent container: +命令生成的私钥位于 `my-git-ssh` 文件中 (无密码口令),对应的公钥位于 `my-git-ssh.pub` 文件中。将公钥添加至 Git 服务器的对应用户下 (推荐使用服务账号);网页界面一般支持公钥上传。为建立 SSH 连接,我们还需要在代理容器上配置两个文件: - * The private key at `~/.ssh/id_rsa` - * The server's public key in `~/.ssh/known_hosts`. To get this, try `ssh git.mycompany.com` and accept the fingerprint; this will create a new line in the `known_hosts` file. Use that. 
+ * 私钥文件位于 `~/.ssh/id_rsa` + * 服务器的公钥位于 `~/.ssh/known_hosts`。要实现这一点,运行 `ssh git.mycompany.com` 并接受服务器指纹,系统会在 `~/.ssh/known_hosts` 文件中增加一行。这样需求得到了满足。 - -Store the private key as `id_rsa` and server's public key as `known_hosts` in an OpenShift secret (or config map). +将 `id_rsa` 对应的私钥和 `known_hosts` 对应的公钥保存到一个 OpenShift secret (或 config map) 对象中。 ``` apiVersion: v1 @@ -147,7 +128,7 @@ stringData: ``` -Then configure this as a volume in the Kubernetes plugin for the Maven pod at mount point `/home/jenkins/.ssh/`. Each item in the secret will be a file matching the key name under the mount directory. We can use the UI (`Manage Jenkins / Configure / Cloud / Kubernetes`), or edit Jenkins config `/var/lib/jenkins/config.xml`: +在 Kubernetes 插件中将 secret 对象配置为卷,挂载到 `/home/jenkins/.ssh/`,供 Maven pod 使用。secret 中的每个对象对应挂载目录的一个文件,文件名与 key 名称相符。我们可以使用 UI (管理 Jenkins / 配置 / 云 / Kubernetes),也可以直接编辑 Jenkins 配置文件 `/var/lib/jenkins/config.xml`: ``` @@ -169,9 +150,9 @@ Then configure this as a volume in the Kubernetes plugin for the Maven pod at mo ``` -Pulling a Git source through SSH should work in the jobs running on this agent now. +此时,在代理节点上运行的任务应该可以通过 SSH 方式从 Git 代码库获取源代码。 -Note: It's also possible to customize the SSH connection in `~/.ssh/config`, for example, if we don't want to bother with `known_hosts` or the private key is mounted to a different location: +注:我们也可以在 `~/.ssh/config` 文件中自定义 SSH 连接。例如,如果你不想处理 `known_hosts` 或 私钥位于其它挂载目录中: ``` Host git.mycompany.com @@ -181,11 +162,11 @@ Host git.mycompany.com ``` -#### Git with HTTP +#### 通过 HTTP 方式使用 Git -If you prefer an HTTP download, add the username/password to a [Git-credential-store][18] file somewhere: +如果你选择使用 HTTP 方式下载,在指定的 [Git-credential-store][18] 文件中添加用户名/密码: - * E.g. 
`/home/jenkins/.config/git-secret/credentials` from an OpenShift secret, one site per line:
+ * 例如,在一个 OpenShift secret 对象中增加 `/home/jenkins/.config/git-secret/credentials` 文件,其中每个站点对应文件中的一行:

```
@@ -195,7 +176,7 @@
https://user:pass@github.com

```

-  * Enable it in [git-config][19] expected at `/home/jenkins/.config/git/config`:
+ * 在 [git-config][19] 配置中启用该文件,其中配置文件默认路径为 `/home/jenkins/.config/git/config`:

```
@@ -204,11 +185,10 @@ https://user:pass@github.com
  helper = store --file=/home/jenkins/.config/git-secret/credentials

```

+如果 Git 服务使用了自有 CA 签名的证书,为代理容器设置环境变量 `GIT_SSL_NO_VERIFY=true` 是最便捷的方式。更恰当的解决方案包括如下两步:

-If the Git service has a certificate signed by a custom certificate authority (CA), the quickest hack is to set the `GIT_SSL_NO_VERIFY=true` environment variable (EnvVar) for the agent. The proper solution needs two things:

-  * Add the custom CA's public certificate to the agent container from a config map to a path (e.g. `/usr/ca/myTrustedCA.pem`).
-  * Tell Git the path to this cert in an EnvVar `GIT_SSL_CAINFO=/usr/ca/myTrustedCA.pem` or in the `git-config` file mentioned above:
+ * 利用 config map 将自有 CA 的公钥映射到一个路径下的文件中 (例如 `/usr/ca/myTrustedCA.pem`)。
+ * 通过环境变量 `GIT_SSL_CAINFO=/usr/ca/myTrustedCA.pem` 或上面提到的 `git-config` 文件的方式,将证书路径告知 Git。

```
@@ -218,26 +198,25 @@ If the Git service has a certificate signed by a custom certificate authority (C

```

-Note: In OpenShift v3.7 (and earlier), the config map and secret mount points [must not overlap][20], so we can't map to `/home/jenkins` and `/home/jenkins/dir` at the same time. This is why we didn't use the well-known file locations above. A fix is expected in OpenShift v3.9.
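As a minimal local sketch of the credential-store pair above (paths and the `user:pass` value are placeholders; in the real setup both files arrive via secret/config map mounts):

```shell
#!/bin/sh
# Recreate the credential store + git-config pair locally to see how they fit.
CFG_DIR="$(mktemp -d)"

# Credential store: one "https://user:pass@host" line per site
printf 'https://user:pass@git.mycompany.com\n' > "$CFG_DIR/credentials"
chmod 600 "$CFG_DIR/credentials"

# git-config pointing the store helper at that file
cat > "$CFG_DIR/gitconfig" <<EOF
[credential]
	helper = store --file=$CFG_DIR/credentials
EOF

# Git resolves the helper value from this config file
git config --file "$CFG_DIR/gitconfig" --get credential.helper
```

The same lookup happens inside the agent container when the two mounted files sit at the well-known Git locations.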
+注:在 OpenShift v3.7 及早期版本中,config map 及 secret 的挂载点之间[不能相互覆盖][20],故我们不能同时映射 `/home/jenkins` 和 `/home/jenkins/dir`。因此,上面的代码中并没有使用常见的文件路径。预计 OpenShift v3.9 版本会修复这个问题。

### Maven

-To make a Maven build work, there are usually two things to do:
+要完成 Maven 构建,一般需要完成如下两步:

-  * A corporate Maven repository (e.g., Apache Nexus) should be set up to act as a proxy for external repos. Use this as a mirror.
-  * This internal repository may have an HTTPS endpoint with a certificate signed by a custom CA.
+ * 建立一个企业内部 Maven 库 (例如 Apache Nexus),充当外部库的代理。将其当作镜像使用。
+ * 这个内部库可能提供 HTTPS 服务,其中使用自建 CA 签名的证书。

+对于容器中运行构建的实践而言,使用内部 Maven 库是非常关键的,因为容器启动后并没有本地库或缓存,这导致每次构建时 Maven 都下载全部的 Jar 文件。在本地网络使用内部代理库下载明显快于从因特网下载。

-Having an internal Maven repository is practically essential if builds run in containers because they start with an empty local repository (cache), so Maven downloads all the JARs every time. Downloading from an internal proxy repo on the local network is obviously quicker than downloading from the Internet.

-The [Maven Jenkins agent][13] image supports an environment variable that can be used to set the URL for this proxy. Set the following in the Kubernetes plugin container template:
+[Maven Jenkins 代理][13]镜像允许配置环境变量,指定代理的 URL。在 Kubernetes 插件的容器模板中设置如下:

```
MAVEN_MIRROR_URL=https://nexus.mycompany.com/repository/maven-public

```

-The build artifacts (JARs) should also be archived in a repository, which may or may not be the same as the one acting as a mirror for dependencies above. Maven `deploy` requires the repo URL in the `pom.xml` under [Distribution management][21] (this has nothing to do with the agent image):
+构建好的 artifacts (JARs) 也应该保存到库中,可以是上面提到的用于提供依赖的镜像库,也可以是其它库。Maven 完成 `deploy` 操作需要在 `pom.xml` 的[分发管理][21]下配置库 URL,这与 Jenkins 代理镜像无关。

```
@@ -263,9 +242,9 @@ The build artifacts (JARs) should also be archived in a repository, which may or

```

-Uploading the artifact may require authentication.
In this case, username/password must be set in the `settings.xml` under the server ID matching the one in `pom.xml`. We need to mount a whole `settings.xml` with the URL, username, and password on the Maven Jenkins agent container from an OpenShift secret. We can also use environment variables as below: +上传 artifact 可能涉及认证。在这种情况下,需要在 `settings.xml` 中配置用户名/密码,其中 server ID 要与 `pom.xml` 文件中的 server ID 对应。我们可以使用 OpenShift secret 将包含 URL、用户名和密码的完整 `settings.xml` 映射到 Maven Jenkins 代理容器中。另外,也可以使用环境变量。具体如下: - * Add environment variables from a secret to the container: + * 利用 secret 为容器添加环境变量: ``` @@ -275,7 +254,7 @@ MAVEN_SERVER_PASSWORD=admin123 ``` - * Mount `settings.xml` from a config map to `/home/jenkins/.m2/settings.xml`: + * 利用 config map 将 `settings.xml` 挂载至 `/home/jenkins/.m2/settings.xml`: ``` @@ -313,9 +292,9 @@ MAVEN_SERVER_PASSWORD=admin123 ``` -Disable interactive mode (use batch mode) to skip the download log by using `-B` for Maven commands or by adding `false` to `settings.xml`. +禁用交互模式 (即使用批处理模式) 可以忽略下载日志,一种方式是在 Maven 命令中增加 `-B` 参数,另一种方式是在 `settings.xml` 配置文件中增加 `false` 配置。 -If the Maven repository's HTTPS endpoint uses a certificate signed by a custom CA, we need to create a Java KeyStore using the [keytool][22] containing the CA certificate as trusted. This KeyStore should be uploaded as a config map in OpenShift. Use the `oc` command to create a config map from files: +如果 Maven 库的 HTTPS 服务使用自建 CA 签名的证书,我们需要使用 [keytool][22] 工具创建一个将 CA 公钥添加至信任列表的 Java KeyStore。在 OpenShift 中使用 config map 将这个 Keystore 上传。使用 `oc` 命令基于文件创建一个 config map: ```  oc create configmap maven-settings --from-file=settings.xml=settings.xml --from- @@ -323,9 +302,9 @@ file=myTruststore.jks=myTruststore.jks ``` -Mount the config map somewhere on the Jenkins agent. In this example we use `/home/jenkins/.m2`, but only because we have `settings.xml` in the same config map. The KeyStore can go under any path. 
+将这个 config map 挂载至 Jenkins 代理容器。在本例中我们使用 `/home/jenkins/.m2` 目录,但这仅仅是因为配置文件 `settings.xml` 也对应这个 config map,KeyStore 可以放置在任意路径下。 -Then make the Maven Java process use this file as a trust store by setting Java parameters in the `MAVEN_OPTS` environment variable for the container: +接着在容器环境变量 `MAVEN_OPTS` 中设置 Java 参数,以便让 Maven 对应的 Java 进程使用该文件: ``` MAVEN_OPTS= @@ -335,30 +314,28 @@ MAVEN_OPTS= ``` -### Memory usage +### 内存使用量 -This is probably the most important part—if we don't set max memory correctly, we'll run into intermittent build failures after everything seems to work. +这可能是最重要的一部分设置,如果没有正确的设置最大内存,我们会遇到间歇性构建失败,虽然每个组件都似乎工作正常。 -Running Java in a container can cause high memory usage errors if we don't set the heap in the Java command line. The JVM [sees the total memory of the host machine][23] instead of the container's memory limit and sets the [default max heap][24] accordingly. This is typically much more than the container's memory limit, and OpenShift simply kills the container when a Java process allocates more memory for the heap. +如果没有在 Java 命令行中设置堆大小,在容器中运行 Java 可能导致高内存使用量的报错。JVM [可以利用全部的主机内存][23],因而使用[默认的堆大小限制][24]。这通常会超过容器的内存资源总额,故当 Java 进程为堆分配过多内存时,OpenShift 会直接杀掉容器。 -Although the `jenkins``-slave-base` image has a built-in [script to set max heap ][25]to half the container memory (this can be modified via EnvVar `CONTAINER_HEAP_PERCENT=0.50`), it only applies to the Jenkins agent Java process. In a Maven build, we have important additional Java processes running: +虽然 `jenkins` `-slave-base` 镜像包含一个内建[脚本设置堆最大为][25]容器内存的一半 (可以通过环境变量 `CONTAINER_HEAP_PERCENT=0.50` 修改),但这只适用于 Jenkins 代理节点中的 Java 进程。在 Maven 构建中,还有其它重要的 Java 进程运行: - * The `mvn` command itself is a Java tool. - * The [Maven Surefire-plugin][26] executes the unit tests in a forked JVM by default. 
+ * `mvn` 命令本身就是一个 Java 工具。
+ * [Maven Surefire 插件][26] 按默认参数派生的 JVM 用于运行单元测试。

+总结一下,容器中同时运行着三个重要的 Java 进程,预估内存使用量以避免 pod 被误杀是很重要的。每个进程都有不同的方式设置 JVM 参数:

-At the end of the day, we'll have three Java processes running at the same time in the container, and it's important to estimate their memory usage to avoid unexpectedly killed pods. Each process has a different way to set JVM options:

-  * Jenkins agent heap is calculated as mentioned above, but we definitely shouldn't let the agent have such a big heap. Memory is needed for the other two JVMs. Setting `JAVA_OPTS` works for the Jenkins agent.
-  * The `mvn` tool is called by the Jenkins job. Set `MAVEN_OPTS` to customize this Java process.
-  * The JVM spawned by the Maven `surefire` plugin for the unit tests can be customized by the [argLine][27] Maven property. It can be set in the `pom.xml`, in a profile in `settings.xml` or simply by adding `-DargLine=…` to the `mvn` command in `MAVEN_OPTS`.
+ * 我们在上面提到了 Jenkins 代理容器堆最大值的计算方法,但我们显然不应该让代理容器使用如此大的堆,毕竟还有两个 JVM 需要使用内存。对于 Jenkins 代理容器,可以设置 `JAVA_OPTS`。
+ * `mvn` 工具被 Jenkins 任务调用。设置 `MAVEN_OPTS` 可以用于自定义这类 Java 进程。
+ * Maven `surefire` 插件派生的用于单元测试的 JVM 可以通过 Maven [argLine][27] 属性自定义。可以在 `pom.xml` 或 `settings.xml` 的某个配置文件中设置,也可以直接在 `MAVEN_OPTS` 中为 `mvn` 命令增加 `-DargLine=…`。

-Here is an example of how to set these environment variables for the Maven agent container:
+下面例子给出 Maven 代理容器环境变量设置方法:

```
-       JAVA_OPTS=-Xms64m -Xmx64m
+JAVA_OPTS=-Xms64m -Xmx64m

MAVEN_OPTS=-Xms128m -Xmx128m -DargLine=${env.SUREFIRE_OPTS}

@@ -366,17 +343,17 @@
SUREFIRE_OPTS=-Xms256m -Xmx256m

```

-These numbers worked in our tests with a 1024Mi agent container memory limit, building and running unit tests for a SpringBoot app. These are relatively low numbers; a bigger heap size and a higher limit may be needed for complex Maven projects and unit tests.
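The three `-Xmx` values above can be sanity-checked against the container limit before the first OOM kill. The 2x non-heap overhead factor in this sketch is a rough assumption rather than a JVM guarantee — the note that follows reports these three processes actually using more than 900Mi:

```shell
#!/bin/sh
# Rough budget check: heap totals vs. the 1024Mi container limit.
# The 2x overhead factor is an assumption -- verify with
# `ps -e -o pid,user,rss,comm,args` inside the container.
LIMIT_MI=1024
AGENT_HEAP=64       # JAVA_OPTS -Xmx (Jenkins agent)
MVN_HEAP=128        # MAVEN_OPTS -Xmx (mvn itself)
SUREFIRE_HEAP=256   # argLine -Xmx (forked test JVM)

TOTAL_HEAP=$((AGENT_HEAP + MVN_HEAP + SUREFIRE_HEAP))
ESTIMATE=$((TOTAL_HEAP * 2))    # heap + metaspace + off-heap, roughly

echo "total heap:          ${TOTAL_HEAP}Mi"
echo "estimated footprint: ${ESTIMATE}Mi of ${LIMIT_MI}Mi"
if [ "$ESTIMATE" -ge "$LIMIT_MI" ]; then
  echo "WARNING: budget too tight, raise the limit or shrink the heaps"
fi
```

With the article's numbers the estimate lands just under the limit, which matches the observed ~900Mi usage.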
+我们的测试环境是具有 1024Mi 内存限额的代理容器,使用上述参数可以正常构建一个 SpringBoot 应用并进行单元测试。测试环境使用的资源相对较小,对于复杂的 Maven 项目和对应的单元测试,我们需要更大的堆大小及更大的容器内存限额。

-Note: The actual memory usage of a Java8 process is something like `HeapSize + MetaSpace + OffHeapMemory`, and this can be significantly more than the max heap size set. With the settings above, the three Java processes took more than 900Mi memory in our case. See RSS memory for processes within the container: `ps -e -o pid,user,rss,comm,args`
+注:Java8 进程的实际内存使用量包括 `堆大小 + 元数据 + 堆外内存`,因此内存使用量会明显高于设置的最大堆大小。在我们上面的测试环境中,三个 Java 进程使用了超过 900Mi 的内存。可以在容器内查看进程的 RSS 内存使用情况,命令如下:`ps -e -o pid,user,rss,comm,args`。

-The Jenkins agent images have both JDK 64 bit and 32 bit installed. For `mvn` and `surefire`, the 64-bit JVM is used by default. To lower memory usage, it makes sense to force 32-bit JVM as long as `-Xmx` is less than 1.5 GB:
+Jenkins 代理镜像同时安装了 JDK 64 位和 32 位版本。对于 `mvn` 和 `surefire`,默认使用 64 位版本 JVM。为减低内存使用量,只要 `-Xmx` 不超过 1.5 GB,强制使用 32 位 JVM 都是有意义的。

```
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.i386

```

-Note that it's also possible to set Java arguments in the `JAVA_TOOL_OPTIONS` EnvVar, which is picked up by any JVM started. The parameters in `JAVA_OPTS` and `MAVEN_OPTS` overwrite the ones in `JAVA_TOOL_OPTIONS`, so we can achieve the same heap configuration for our Java processes as above without using `argLine`:
+注意到我们可以在 `JAVA_TOOL_OPTIONS` 环境变量中设置 Java 参数,每个 JVM 启动时都会读取该参数。`JAVA_OPTS` 和 `MAVEN_OPTS` 中的参数会覆盖 `JAVA_TOOL_OPTIONS` 中的对应值,故我们可以不使用 `argLine`,实现对 Java 进程同样的堆配置:

```
JAVA_OPTS=-Xms64m -Xmx64m

MAVEN_OPTS=-Xms128m -Xmx128m

@@ -386,11 +363,11 @@
JAVA_TOOL_OPTIONS=-Xms256m -Xmx256m

```

-It's still a bit confusing, as all JVMs log `Picked up JAVA_TOOL_OPTIONS:`
+但缺点是每个 JVM 的日志中都会显示 `Picked up JAVA_TOOL_OPTIONS:`,这可能让人感到迷惑。

-### Jenkins Pipeline
+### Jenkins 流水线

-Following the settings above, we should have everything prepared to run a successful build.
We can pull the code, download the dependencies, run the unit tests, and upload the artifact to our repository. Let's create a Jenkins Pipeline project that does this: +完成上述配置,我们应该已经可以完成一次成功的构建。我们可以获取源代码,下载依赖,运行单元测试并将 artifact 上传到我们的库中。我们可以通过创建一个 Jenkins 流水线项目来完成上述操作。 ``` pipeline { @@ -442,15 +419,15 @@ pipeline { ``` -For a real project, of course, the CI/CD pipeline should do more than just the Maven build; it could deploy to a development environment, run integration tests, promote to higher environments, etc. The learning articles linked above show examples of how to do those things. +当然,对应真实项目,CI/CD 流水线不仅仅完成 Maven 构建,还可以部署到开发环境,运行集成测试,提升至更接近于生产的环境等。上面给出的学习资料中有执行这些操作的案例。 -### Multiple containers +### 多容器 -One pod can be running multiple containers with each having their own resource limits. They share the same network interface, so we can reach started services on `localhost`, but we need to think about port collisions. Environment variables are set separately, but the volumes mounted are the same for all containers configured in one Kubernetes pod template. +一个 pod 可以运行多个容器,每个容器有单独的资源限制。这些容器共享网络接口,故我们可以从 `localhost` 访问已启动的服务,但我们需要考虑端口冲突的问题。在一个 Kubernetes pod 模板中,每个容器的环境变量是单独设置的,但挂载的卷是统一的。 -Bringing up multiple containers is useful when an external service is required for unit tests and an embedded solution doesn't work (e.g., database, message broker, etc.). In this case, this second container also starts and stops with the Jenkins agent. 
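Because the side container starts in parallel with the agent, tests can race its startup. A small retry helper at the top of the test stage avoids flaky failures — the probe command below is an assumption; substitute whatever health check your side service actually exposes:

```shell
#!/bin/sh
# Retry a probe command until it succeeds or the attempts run out.
wait_for() {
  attempts=$1; shift
  n=0
  while [ "$n" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    n=$((n + 1))
    sleep 1
  done
  return 1
}

# Real usage might look like (hypothetical endpoint):
#   wait_for 30 curl -sf http://localhost:8000/status
wait_for 3 true && echo "side service is up"
```

The same pattern works for databases (`pg_isready`, `mysqladmin ping`) or message brokers, since they all listen on `localhost` inside the pod.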
+当一个外部服务需要单元测试且嵌入式方案无法工作 (例如,数据库、消息中间件等) 时,可以启动多个容器。在这种情况下,第二个容器会随着 Jenkins 代理容器启停。 -See the Jenkins `config.xml` snippet where we start an `httpbin` service on the side for our Maven build: +查看 Jenkins `config.xml` 片段,其中我们启动了一个辅助的 `httpbin` 服务用于 Maven 构建: ``` @@ -508,9 +485,9 @@ See the Jenkins `config.xml` snippet where we start an `httpbin` service on the ``` -### Summary +### 总结 -For a summary, see the created OpenShift resources and the Kubernetes plugin configuration from Jenkins `config.xml` with the configuration described above. +作为总结,我们查看上面已描述配置的 Jenkins `config.xml` 对应创建的 OpenShift 资源以及 Kubernetes 插件的配置。 ``` apiVersion: v1 @@ -608,7 +585,7 @@ items: ``` -One additional config map was created from files: +基于文件创建另一个 config map: ```  oc create configmap maven-settings --from-file=settings.xml=settings.xml @@ -616,7 +593,7 @@ One additional config map was created from files: ``` -Kubernetes plugin configuration: +Kubernetes 插件配置如下: ``` @@ -914,16 +891,16 @@ MIIC6jCC... ``` -Happy builds! +尝试愉快的构建吧! -This was originally published on [ITNext][28] and is reprinted with permission. 
+原文发表于 [ITNext][28],已获得转载授权。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/4/running-jenkins-builds-containers

作者:[Balazs Szeti][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)

From 176d6bb3eb8eefb88b667458cac40c6e85137711 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 4 May 2018 13:02:54 +0800
Subject: [PATCH 094/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20zzupdate=20-=20Si?=
 =?UTF-8?q?ngle=20Command=20To=20Upgrade=20Ubuntu?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...date - Single Command To Upgrade Ubuntu.md | 367 ++++++++++++++++++
 1 file changed, 367 insertions(+)
 create mode 100644 sources/tech/20180502 zzupdate - Single Command To Upgrade Ubuntu.md

diff --git a/sources/tech/20180502 zzupdate - Single Command To Upgrade Ubuntu.md b/sources/tech/20180502 zzupdate - Single Command To Upgrade Ubuntu.md
new file mode 100644
index 0000000000..b3c1e2f480
--- /dev/null
+++ b/sources/tech/20180502 zzupdate - Single Command To Upgrade Ubuntu.md
@@ -0,0 +1,367 @@
+zzupdate - Single Command To Upgrade Ubuntu
+======
+Ubuntu 18.04 is already out and has received good feedback from many in the community, as it is the most exciting Ubuntu release in years.
+
+By default, Ubuntu and its derivatives can be upgraded from one version to another using standard commands, which is the official and recommended way to bring the system to the latest version.
+
+### Ubuntu 18.04 Features/Highlights
+
+This release contains a vast number of improvements and features; I have picked only the major ones. Navigate to the [Ubuntu 18.04 official][1] release page if you want more detailed release information.
+
+ * It ships with Linux kernel 4.15, which delivers new features inherited from upstream.
+ * It features the latest GNOME 3.28.
+ * It offers a minimal install option similar to RHEL, allowing users to install a basic desktop environment with a web browser and core system utilities.
+ * For new installs, a swap file will be used by default instead of a swap partition.
+ * You can enable Livepatch to install kernel updates without rebooting.
+ * Laptops will automatically suspend after 20 minutes of inactivity while on battery power.
+ * 32-bit installer images are no longer provided for Ubuntu Desktop.
+
+
+
+**Note:**
+1) Don't forget to take a backup of your important/valuable data. If something goes wrong, we can reinstall from scratch and restore the data.
+2) The upgrade will take time depending on your Internet connection and the applications you have installed.
+
+### What Is zzupdate?
+
+We can upgrade an Ubuntu PC/server from one version to another with just a single command using the [zzupdate][2] utility. It's a free and open-source utility, and it doesn't require any scripting knowledge because it's a purely config-file-driven script.
+
+The utility ships with two shell scripts that make it work as expected. The provided setup.sh auto-installs/updates the code and makes the script available as a new, simple shell command (zzupdate). The zzupdate.sh script then performs the actual upgrade from one version to the next available one.
+
+### How To Install zzupdate?
+
+To install zzupdate, just execute the following command.
+```
+$ curl -s https://raw.githubusercontent.com/TurboLabIt/zzupdate/master/setup.sh | sudo sh
+.
+.
+Installing...
+-------------
+Cloning into 'zzupdate'...
+remote: Counting objects: 57, done.
+remote: Total 57 (delta 0), reused 0 (delta 0), pack-reused 57
+Unpacking objects: 100% (57/57), done.
+Checking connectivity... done.
+Already up-to-date.
+
+Setup completed!
+----------------
+See https://github.com/TurboLabIt/zzupdate for the quickstart guide.
+```
+
+To upgrade the Ubuntu system from one version to another, you don't need to run multiple commands or initiate the reboot manually. Just fire the zzupdate command below, sit back, and it will take care of the rest.
+
+Note: When you are upgrading a remote system, I would advise using one of the utilities below, because they will help you reconnect the session in case of any disconnection.
+
+**Suggested Read:** [How To Keep A Process/Command Running After Disconnecting SSH Session][3]
+
+### How To Configure zzupdate [optional]
+
+By default, zzupdate works out of the box and nothing needs to be configured. Configuration is optional; if you want to change something, you can. To do so, copy the provided sample configuration file `zzupdate.default.conf` to your own `zzupdate.conf` and set your preferences.
+```
+$ sudo cp /usr/local/turbolab.it/zzupdate/zzupdate.default.conf /etc/turbolab.it/zzupdate.conf
+
+```
+
+Open the file; the default values are shown below.
+```
+$ sudo nano /etc/turbolab.it/zzupdate.conf
+
+REBOOT=1
+REBOOT_TIMEOUT=15
+VERSION_UPGRADE=1
+VERSION_UPGRADE_SILENT=0
+COMPOSER_UPGRADE=1
+SWITCH_PROMPT_TO_NORMAL=0
+
+```
+
+ * **`REBOOT=1 :`** The system will automatically reboot once the upgrade is done.
+ * **`REBOOT_TIMEOUT=15 :`** Default timeout value for the reboot.
+ * **`VERSION_UPGRADE=1 :`** Performs a version upgrade from one release to the next.
+ * **`VERSION_UPGRADE_SILENT=0 :`** Disables unattended (silent) version upgrades.
+ * **`COMPOSER_UPGRADE=1 :`** Automatically upgrades composer.
+ * **`SWITCH_PROMPT_TO_NORMAL=0 :`** If it's "0", it looks for the same kind of version upgrade: if you are running an LTS version, it will look for an LTS upgrade, not a normal release upgrade. If it's "1", it looks for the latest release whether you are running an LTS or a normal release.
+
+
+
+I'm currently running Ubuntu 17.10; here are the details.
+```
+$ cat /etc/*-release
+DISTRIB_ID=Ubuntu
+DISTRIB_RELEASE=17.10
+DISTRIB_CODENAME=artful
+DISTRIB_DESCRIPTION="Ubuntu 17.10"
+NAME="Ubuntu"
+VERSION="17.10 (Artful Aardvark)"
+ID=ubuntu
+ID_LIKE=debian
+PRETTY_NAME="Ubuntu 17.10"
+VERSION_ID="17.10"
+HOME_URL="https://www.ubuntu.com/"
+SUPPORT_URL="https://help.ubuntu.com/"
+BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
+PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
+VERSION_CODENAME=artful
+UBUNTU_CODENAME=artful
+
+```
+
+To upgrade Ubuntu to the latest release, just execute the command below.
+```
+$ sudo zzupdate
+
+O===========================================================O
+ zzupdate - Wed May 2 17:31:16 IST 2018
+O===========================================================O
+
+Self-update and update of other zzScript
+----------------------------------------
+.
+.
+0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
+
+Updating...
+----------
+Already up-to-date.
+
+Setup completed!
+----------------
+See https://github.com/TurboLabIt/zzupdate for the quickstart guide.
+ +Channel switching is disabled: using pre-existing setting +--------------------------------------------------------- + +Cleanup local cache +------------------- + +Update available packages informations +-------------------------------------- +Hit:1 https://download.docker.com/linux/ubuntu artful InRelease +Ign:2 http://dl.google.com/linux/chrome/deb stable InRelease +Hit:3 http://security.ubuntu.com/ubuntu artful-security InRelease +Hit:4 http://in.archive.ubuntu.com/ubuntu artful InRelease +Hit:5 http://dl.google.com/linux/chrome/deb stable Release +Hit:6 http://in.archive.ubuntu.com/ubuntu artful-updates InRelease +Hit:7 http://in.archive.ubuntu.com/ubuntu artful-backports InRelease +Hit:9 http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful InRelease +Hit:10 http://ppa.launchpad.net/papirus/papirus/ubuntu artful InRelease +Hit:11 http://ppa.launchpad.net/twodopeshaggy/jarun/ubuntu artful InRelease +. +. +UPGRADE PACKAGES +---------------- +Reading package lists... +Building dependency tree... +Reading state information... +Calculating upgrade... +The following packages were automatically installed and are no longer required: +. +. +Interactively upgrade to a new release, if any +---------------------------------------------- + +Reading cache + +Checking package manager +Reading package lists... Done +Building dependency tree +Reading state information... 
Done +Ign http://dl.google.com/linux/chrome/deb stable InRelease +Hit https://download.docker.com/linux/ubuntu artful InRelease +Hit http://security.ubuntu.com/ubuntu artful-security InRelease +Hit http://dl.google.com/linux/chrome/deb stable Release +Hit http://in.archive.ubuntu.com/ubuntu artful InRelease +Hit http://in.archive.ubuntu.com/ubuntu artful-updates InRelease +Hit http://in.archive.ubuntu.com/ubuntu artful-backports InRelease +Hit http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful InRelease +Hit http://ppa.launchpad.net/papirus/papirus/ubuntu artful InRelease +Hit http://ppa.launchpad.net/twodopeshaggy/jarun/ubuntu artful InRelease +Fetched 0 B in 6s (0 B/s) +Reading package lists... Done +Building dependency tree +Reading state information... Done + +``` + +Next, the upgrader disables any `Third Party` repositories; hit `Enter` to move forward with the upgrade. +``` +Updating repository information + +Third party sources disabled + +Some third party entries in your sources.list were disabled. You can +re-enable them after the upgrade with the 'software-properties' tool +or your package manager. + +To continue please press [ENTER] +. +. +Get:35 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe i386 Packages [2,180 B] +Get:36 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [1,644 B] +Fetched 38.2 MB in 6s (1,276 kB/s) + +Checking package manager +Reading package lists... Done +Building dependency tree +Reading state information... Done + +Calculating the changes + +Calculating the changes + +``` + +The `Ubuntu 18.04 LTS` packages will now start downloading. It will take a while depending on your Internet connection speed, so it's time to have a cup of coffee. +``` +Do you want to start the upgrade? + + +63 installed packages are no longer supported by Canonical. You can +still get support from the community. + +4 packages are going to be removed. 175 new packages are going to be +installed. 1307 packages are going to be upgraded.
+ +You have to download a total of 999 M. This download will take about +12 minutes with your connection. + +Installing the upgrade can take several hours. Once the download has +finished, the process cannot be canceled. + +Continue [yN] Details [d]y +Fetching +Get:1 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 base-files amd64 10.1ubuntu2 [58.2 kB] +Get:2 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 debianutils amd64 4.8.4 [85.7 kB] +Get:3 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 bash amd64 4.4.18-2ubuntu1 [614 kB] +Get:4 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 locales all 2.27-3ubuntu1 [3,612 kB] +. +. +Get:1477 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 liblouisutdml-bin amd64 2.7.0-1 [9,588 B] +Get:1478 http://in.archive.ubuntu.com/ubuntu bionic/universe amd64 libtbb2 amd64 2017~U7-8 [110 kB] +Get:1479 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 libyajl2 amd64 2.1.0-2build1 [20.0 kB] +Get:1480 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 usb-modeswitch amd64 2.5.2+repack0-2ubuntu1 [53.6 kB] +Get:1481 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 usb-modeswitch-data all 20170806-2 [30.7 kB] +Get:1482 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 xbrlapi amd64 5.5-4ubuntu2 [61.8 kB] +Fetched 999 MB in 6s (721 kB/s) + +``` + +A few services need to be restarted while the new packages are being installed. Hit the `Yes` button, and the required services will be restarted automatically. +``` +Upgrading +Inhibiting until Ctrl+C is pressed... +Preconfiguring packages ... +Preconfiguring packages ... +Preconfiguring packages ... +Preconfiguring packages ... +(Reading database ... 441375 files and directories currently installed.) +Preparing to unpack .../base-files_10.1ubuntu2_amd64.deb ... +Warning: Stopping motd-news.service, but it can still be activated by: + motd-news.timer +Unpacking base-files (10.1ubuntu2) over (9.6ubuntu102) ... +Setting up base-files (10.1ubuntu2) ...
+Installing new version of config file /etc/debian_version ... +Installing new version of config file /etc/issue ... +Installing new version of config file /etc/issue.net ... +Installing new version of config file /etc/lsb-release ... +motd-news.service is a disabled or a static unit, not starting it. +(Reading database ... 441376 files and directories currently installed.) +. +. +Progress: [ 80%] + +Progress: [ 85%] + +Progress: [ 90%] + +Progress: [ 95%] + +``` + +It’s time to remove obsolete packages (ones that are no longer needed by the system). Hit `y` to remove them. +``` +Searching for obsolete software + ing package lists... 97% + ding package lists... 98% +Reading package lists... Done +Building dependency tree +Reading state information... Done +Reading state information... 23% +Reading state information... 47% +Reading state information... 71% +Reading state information... 94% +Reading state information... Done + +Remove obsolete packages? + + +88 packages are going to be removed. + +Continue [yN] Details [d]y +. +. +. +done +Removing perlmagick (8:6.9.7.4+dfsg-16ubuntu6) ... +Removing snapd-login-service (1.23-0ubuntu1) ... +Processing triggers for libc-bin (2.27-3ubuntu1) ... +Processing triggers for man-db (2.8.3-2) ... +Processing triggers for dbus (1.12.2-1ubuntu1) ... +Fetched 0 B in 0s (0 B/s) + +``` + +The upgrade has completed successfully, and the system needs to be restarted. Hit `y` to restart it. +``` +System upgrade is complete. + +Restart required + +To finish the upgrade, a restart is required. +If you select 'y' the system will be restarted. + +Continue [yN]y + +``` + +**`Note:`** At times, it will ask you to confirm a configuration file replacement before the installation can move forward. + +See the upgraded system details.
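If you only need the release string in a script, `/etc/os-release` uses plain `VAR="value"` shell syntax and can be sourced directly. This is a quick sketch, assuming a systemd-era distribution where that file is present:

```shell
# /etc/os-release is valid shell syntax, so it can be sourced as-is.
. /etc/os-release
# PRETTY_NAME holds the human-readable release string.
echo "$PRETTY_NAME"
```

The full `/etc/*-release` output below shows the same details at length.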
+``` +$ cat /etc/*-release +DISTRIB_ID=Ubuntu +DISTRIB_RELEASE=18.04 +DISTRIB_CODENAME=bionic +DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS" +NAME="Ubuntu" +VERSION="18.04 LTS (Bionic Beaver)" +ID=ubuntu +ID_LIKE=debian +PRETTY_NAME="Ubuntu 18.04 LTS" +VERSION_ID="18.04" +HOME_URL="https://www.ubuntu.com/" +SUPPORT_URL="https://help.ubuntu.com/" +BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" +PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" +VERSION_CODENAME=bionic +UBUNTU_CODENAME=bionic + +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/zzupdate-single-command-to-upgrade-ubuntu-18-04/ + +作者:[PRAKASH SUBRAMANIAN][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.2daygeek.com/author/prakash/ +[1]:https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes +[2]:https://github.com/TurboLabIt/zzupdate +[3]:https://www.2daygeek.com/how-to-keep-a-process-command-running-after-disconnecting-ssh-session/ From c6d31480ab47d876ea57720f40c32ff4af3a102d Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 4 May 2018 13:06:15 +0800 Subject: [PATCH 095/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Cloud=20Commander?= =?UTF-8?q?=20=E2=80=93=20A=20Web=20File=20Manager=20With=20Console=20And?= =?UTF-8?q?=20Editor?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...eb File Manager With Console And Editor.md | 143 ++++++++++++++++++ 1 file changed, 143 insertions(+) create mode 100644 sources/tech/20160503 Cloud Commander - A Web File Manager With Console And Editor.md diff --git a/sources/tech/20160503 Cloud Commander - A Web File Manager With Console And Editor.md b/sources/tech/20160503 Cloud Commander - A Web File Manager With Console And Editor.md new file mode 
100644 index 0000000000..0aae74d680 --- /dev/null +++ b/sources/tech/20160503 Cloud Commander - A Web File Manager With Console And Editor.md @@ -0,0 +1,143 @@ +Cloud Commander – A Web File Manager With Console And Editor +====== + +![](https://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-A-Web-File-Manager-With-Console-And-Editor-720x340.png) + +**Cloud Commander** is a web-based file manager application that allows you to view, access, and manage the files and folders of your system from any computer, mobile phone, or tablet PC via a web browser. It has two simple and classic panels, and it automatically adjusts its size to fit your device’s display. It also has two built-in editors, namely **Dword** and **Edward**, with syntax-highlighting support, and a **Console** that gives access to your system’s command line, so you can edit your files on the go. The Cloud Commander server is a cross-platform application that runs on Linux, Windows, and Mac OS X, and the client will run in any web browser. It is written using **JavaScript/Node.js** and is licensed under the **MIT** license. + +In this brief tutorial, let us see how to install Cloud Commander on an Ubuntu 18.04 LTS server. + +### Prerequisites + +As I mentioned earlier, Cloud Commander is written using Node.js. So, in order to install Cloud Commander, we need to install Node.js first. To do so, refer to the following guide. + +### Install Cloud Commander + +After installing Node.js, run the following command to install Cloud Commander: +``` +$ npm i cloudcmd -g + +``` + +Congratulations! Cloud Commander has been installed. Let us go ahead and see the basic usage of Cloud Commander. + +### Getting started with Cloud Commander + +Run the following command to start Cloud Commander: +``` +$ cloudcmd + +``` + +**Sample output:** +``` +url: http://localhost:8000 + +``` + +Now, open your web browser and navigate to the URL shown in the output above.
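Note that the server runs in the foreground and stops when you close the terminal. A quick way to keep it running after logout is `nohup`; this is just a sketch using standard tools, the log and PID file paths are arbitrary examples, and a proper service manager would be the more robust choice:

```shell
# Start cloudcmd detached from the terminal so it survives logout,
# keeping a log for troubleshooting and the PID for stopping it later.
# (Assumes the cloudcmd binary installed above is on your PATH.)
nohup cloudcmd > "$HOME/cloudcmd.log" 2>&1 &
echo $! > "$HOME/cloudcmd.pid"
```

You can stop the server later with `kill "$(cat ~/cloudcmd.pid)"`.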
+ +From now on, you can create, delete, view, and manage files and folders right in the web browser, from a local system, a remote system, a mobile phone, a tablet, and so on. + +![][2] + +As you can see in the above screenshot, Cloud Commander has two panels, ten hotkeys (F1 to F10), and a Console. + +Each hotkey does a unique job. + + * F1 – Help + * F2 – Rename file/folder + * F3 – View files and folders + * F4 – Edit files + * F5 – Copy files/folders + * F6 – Move files/folders + * F7 – Create new directory + * F8 – Delete file/folder + * F9 – Open Menu + * F10 – Open config + + + +#### Cloud Commander console + +Click on the Console icon. This will open your system’s default shell. + +![][3] + +From this console you can do all sorts of administration tasks, such as installing packages, removing packages, updating your system, and so on. You can even shut down or reboot the system. Therefore, Cloud Commander is not just a file manager; it also has the functionality of a remote administration tool. + +#### Creating files/folders + +To create a new file or folder, right-click on any empty place and go to **New -> File or Directory**. + +![][4] + +#### View files + +You can view pictures and play audio and video files. + +![][5] + +#### Upload files + +Another cool feature is that we can easily upload a file to the Cloud Commander system from any system or device. + +To upload a file, right-click on any empty space in the Cloud Commander panel, and click on the **Upload** option. + +![][6] + +Select the files you want to upload. + +Also, you can upload files from cloud services like Google drive, Dropbox, Amazon cloud drive, Facebook, Twitter, Gmail, GitHub, Picasa, Instagram, and many more. + +To upload files from the cloud, right-click on any empty space in the panel and select **Upload from Cloud**. + +![][7] + +Select any web service of your choice, for example Google drive. Click the **Connect to Google drive** button. + +![][8] + +In the next step, authenticate your Google drive with Cloud Commander.
Finally, select the files from your Google drive and click **Upload**. + +![][9] + +#### Update Cloud Commander + +To update Cloud Commander to the latest available version, run the following command: +``` +$ npm update cloudcmd -g + +``` + +#### Conclusion + +In my testing, Cloud Commander worked like a charm; I didn’t face a single issue while trying it on my Ubuntu server. Also, Cloud Commander is not just a web-based file manager: it also acts as a remote administration tool that can perform most Linux administration tasks. You can create files and folders, and rename, delete, edit, and view them. Also, you can install, update, upgrade, and remove any package just the way you do from a local terminal. And, of course, you can even shut down or restart the system from the Cloud Commander console itself. What more do you need? Give it a try; you will find it useful. + +That’s all for now. I will be here soon with another interesting article. Until then, stay tuned with OSTechNix. + +Cheers!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/cloud-commander-a-web-file-manager-with-console-and-editor/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-Google-Chrome_006-4.jpg +[3]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-Google-Chrome_007-2.jpg +[4]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-commander-file-folder-1.png +[5]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-home-sk-Google-Chrome_008-1.jpg +[6]:http://www.ostechnix.com/wp-content/uploads/2016/05/cloud-commander-upload-2.png +[7]:http://www.ostechnix.com/wp-content/uploads/2016/05/upload-from-cloud-1.png +[8]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-home-sk-Google-Chrome_009-2.jpg +[9]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-home-sk-Google-Chrome_010-1.jpg From 3fd7e7690759f577d2cc8c2c2cfc98f6081bd6f4 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 4 May 2018 13:08:36 +0800 Subject: [PATCH 096/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20build?= =?UTF-8?q?=20container=20images=20with=20Buildah?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
to build container images with Buildah.md | 135 ++++++++++++++++++ 1 file changed, 135 insertions(+) create mode 100644 sources/tech/20180503 How to build container images with Buildah.md diff --git a/sources/tech/20180503 How to build container images with Buildah.md b/sources/tech/20180503 How to build container images with Buildah.md new file mode 100644 index 0000000000..c18d14f322 --- /dev/null +++ b/sources/tech/20180503 How to build container images with Buildah.md @@ -0,0 +1,135 @@ +How to build container images with Buildah +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/04/buildah-816x345.png) + +Project Atomic, through its efforts on the Open Container Initiative (OCI), has created a great tool called [Buildah][1]. Buildah helps with creating, building, and updating container images, supporting Docker-formatted images as well as OCI-compliant images. + +Buildah handles building container images without the need to have a full container runtime or daemon installed. This particularly shines when setting up a continuous integration and continuous delivery pipeline for building containers. + +Buildah makes the container’s filesystem directly available to the build host. This means the build tooling is available on the host and not needed in the container image, keeping the build faster and the image smaller and safer. There are Buildah packages for CentOS, Fedora, and Debian. + +### Installing Buildah + +Since Fedora 26, Buildah can be installed using dnf. +``` +$ sudo dnf install buildah -y + +``` + +The currently installed version of Buildah (0.16 at the time of writing) can be displayed by the following command. +``` +$ buildah --version + +``` + +### Basic commands + +The first step needed to build a container image is to get a base image; this is done by the FROM statement in a Dockerfile. Buildah handles this in a similar way. +``` +$ sudo buildah from fedora + +``` + +This command pulls the Fedora base image and stores it on the host.
It is possible to inspect the images available on the host by running the following. +``` +$ sudo buildah images +IMAGE ID IMAGE NAME CREATED AT SIZE +9110ae7f579f docker.io/library/fedora:latest Mar 7, 2018 20:51 234.7 MB + +``` + +After pulling the base image, a running container instance of this image is available; this is a “working-container”. + +The following command displays the running containers. +``` +$ sudo buildah containers +CONTAINER ID BUILDER IMAGE ID IMAGE NAME +CONTAINER NAME +6112db586ab9 * 9110ae7f579f docker.io/library/fedora:latest fedora-working-container + +``` + +Buildah also provides a very useful command to stop and remove all the containers that are currently running. +``` +$ sudo buildah rm --all + +``` + +The full list of commands is available using the `--help` option. +``` +$ buildah --help + +``` + +### Building an Apache web server container image + +Let’s see how to use Buildah to install an Apache web server on a Fedora base image, then copy a custom index.html to be served by the server. + +First, let’s create the custom index.html. +``` +$ echo "Hello Fedora Magazine !!!" > index.html + +``` + +Then install the httpd package inside the running container. +``` +$ sudo buildah from fedora +$ sudo buildah run fedora-working-container dnf install httpd -y + +``` + +Let’s copy index.html to /var/www/html/. +``` +$ sudo buildah copy fedora-working-container index.html /var/www/html/index.html + +``` + +Then configure the container entrypoint to start httpd. +``` +$ sudo buildah config --entrypoint "/usr/sbin/httpd -DFOREGROUND" fedora-working-container + +``` + +Now, to make the “working-container” available as an image, the commit command saves the container to an image. +``` +$ sudo buildah commit fedora-working-container hello-fedora-magazine + +``` + +The hello-fedora-magazine image is now available, and can be pushed to a registry to be used.
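A push would look something like the following (a sketch only: the registry host here is a hypothetical placeholder, and your registry may require logging in first):
```
$ sudo buildah push hello-fedora-magazine docker://registry.example.com/hello-fedora-magazine:latest

```

The listing below confirms that the committed image is stored locally alongside the base image.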
+``` +$ sudo buildah images +IMAGE ID IMAGE NAME CREATED +AT SIZE +9110ae7f579f docker.io/library/fedora:latest +Mar 7, 2018 22:51 234.7 MB +49bd5ec5be71 docker.io/library/hello-fedora-magazine:latest +Apr 27, 2018 11:01 427.7 MB + +``` + +It is also possible to use Buildah to test this image by running the following steps. +``` +$ sudo buildah from --name=hello-magazine docker.io/library/hello-fedora-magazine + +$ sudo buildah run hello-magazine + +``` + +Accessing the running web server will display “Hello Fedora Magazine !!!“ + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/daemon-less-container-management-buildah/ + +作者:[Ashutosh Sudhakar Bhakare][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fedoramagazine.org/author/ashutoshbhakare/ +[1]:https://github.com/projectatomic/buildah From 1689cbe6463204ac85c83c7f5c1913224f3d35f4 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 4 May 2018 13:10:31 +0800 Subject: [PATCH 097/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Writing=20Systemd?= =?UTF-8?q?=20Services=20for=20Fun=20and=20Profit?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ing Systemd Services for Fun and Profit.md | 108 ++++++++++++++++++ 1 file changed, 108 insertions(+) create mode 100644 sources/tech/20180502 Writing Systemd Services for Fun and Profit.md diff --git a/sources/tech/20180502 Writing Systemd Services for Fun and Profit.md b/sources/tech/20180502 Writing Systemd Services for Fun and Profit.md new file mode 100644 index 0000000000..042c46aedd --- /dev/null +++ b/sources/tech/20180502 Writing Systemd Services for Fun and Profit.md @@ -0,0 +1,108 @@ +Writing Systemd Services for Fun and Profit +====== + 
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minetest.png?itok=Houi9zf9) + +Let's say you want to run a games server, a server that runs [Minetest][1], a very cool and open source mining and crafting sandbox game. You want to set it up for your school or friends and have it running on a server in your living room. Because, you know, if that’s good enough for the kernel mailing list admins, then it's good enough for you. + +However, you soon realize it is a chore to remember to run the server every time you switch your computer on and a nuisance to power down safely when you want to switch off. + +First, you have to run the server as a daemon: +``` +minetest --server & + +``` + +Take note of the PID (you'll need it later). + +Then you have to tell your friends the server is up by emailing or messaging them. After that you can start playing. + +Suddenly it is 3 am. Time to call it a day! But you can't just switch off your machine and go to bed. First, you have to tell the other players the server is coming down, locate the bit of paper where you wrote the PID we were talking about earlier, and kill the Minetest server gracefully... +``` +kill -2 <PID> + +``` + +...because just pulling the plug is a great way to end up with corrupted files. Then and only then can you power down your computer. + +There must be a way to make this easier. + +### Systemd Services to the Rescue + +Let's start off by making a systemd service you can run (manually) as a regular user and build up from there. + +Services you can run without admin privileges live in _~/.config/systemd/user/_ , so start by creating that directory: +``` +cd +mkdir -p ~/.config/systemd/user/ + +``` + +There are several types of systemd _units_ (the formal name of systemd scripts), such as _timers_ , _paths_ , and so on; but what you want is a service.
Create a file in _~/.config/systemd/user/_ called _minetest.service_ , open it with your text editor, and type the following into it: +``` +# minetest.service + +[Unit] +Description= Minetest server +Documentation= https://wiki.minetest.net/Main_Page + +[Service] +Type= simple +ExecStart= /usr/games/minetest --server + +``` + +Notice how units have different sections: The `[Unit]` section is mainly informative. It contains information for users describing what the unit is and where you can read more about it. + +The meat of your script is in the `[Service]` section. Here you start by stating what kind of service it is using the `Type` directive. [There are several types][2] of service. If, for example, the process you run first sets up an environment and then calls in another process (which is the main process) and then exits, you would use the `forking` type; if you needed to block the execution of other units until the process in your unit finished, you would use `oneshot`; and so on. + +None of the above is the case for the Minetest server, however. You want to start the server, make it go to the background, and move on. This is what the `simple` type does. + +Next up is the `ExecStart` directive. This directive tells systemd what program to run. In this case, you are going to run `minetest` as a headless server. You can add options to your executables as shown above, but you can't chain a bunch of Bash commands together. A line like: +``` +ExecStart= lsmod | grep nvidia > videodrive.txt + +``` + +would not work. If you need to chain Bash commands, it is best to wrap them in a script and execute that. + +Also notice that systemd requires that you give the full path to the program. So, even if you have to run something as simple as _ls_ you will have to use `ExecStart= /bin/ls`. + +There is also an `ExecStop` directive that you can use to customize how your service should be terminated.
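As a quick preview, an `ExecStop` for the Minetest unit could send the same signal 2 you used by hand, through systemd's `$MAINPID` variable. This is a sketch; it assumes, as above, that the server shuts down cleanly on SIGINT:
```
[Service]
ExecStop= /bin/kill -2 $MAINPID

```

Note the full path to _kill_ , for the same reason as with `ExecStart`.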
We'll be talking about this directive more in part two, but for now you must know that, if you don't specify an `ExecStop`, systemd will take it on itself to finish the process as gracefully as possible. + +There is a full list of directives in the _systemd.directives_ man page or, if you prefer, [you can check them out on the web][3] and click through to see what each does. + +Although only 6 lines long, your _minetest.service_ is already a fully functional systemd unit. You can run it by executing +``` +systemctl --user start minetest + +``` + +And stop it with +``` +systemctl --user stop minetest + +``` + +The `--user` option tells systemd to look for the service in your own directories and to execute the service with your user's privileges. + +That wraps up this part of our server management story. In part two, we’ll go beyond starting and stopping and look at how to send emails to players, alerting them of the server’s availability. Stay tuned. + +Learn more about Linux through the free ["Introduction to Linux"][4] course from The Linux Foundation and edX.
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit + +作者:[Paul Brown][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/bro66 +[1]:https://www.minetest.net/ +[2]:http://man7.org/linux/man-pages/man5/systemd.service.5.html +[3]:http://man7.org/linux/man-pages/man7/systemd.directives.7.html +[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From 8eb63c62c9562c1cf9c5204bba6a36a3af4c84bc Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 4 May 2018 14:30:05 +0800 Subject: [PATCH 098/102] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Customizing=20you?= =?UTF-8?q?r=20text=20colors=20on=20the=20Linux=20command=20line?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r text colors on the Linux command line.md | 163 ++++++++++++++++++ 1 file changed, 163 insertions(+) create mode 100644 sources/tech/20180502 Customizing your text colors on the Linux command line.md diff --git a/sources/tech/20180502 Customizing your text colors on the Linux command line.md b/sources/tech/20180502 Customizing your text colors on the Linux command line.md new file mode 100644 index 0000000000..381df0e53a --- /dev/null +++ b/sources/tech/20180502 Customizing your text colors on the Linux command line.md @@ -0,0 +1,163 @@ +Customizing your text colors on the Linux command line +====== + +![](https://images.idgesg.net/images/article/2018/05/numbers-100756457-large.jpg) +If you spend much time on the Linux command line (and you probably wouldn't be reading this if you didn't), you've undoubtedly noticed that the ls command displays your files in a number of different 
colors. You've probably also come to recognize some of the distinctions — directories appearing in one color, executable files in another, etc. + +How that all happens and what options are available for you to change the color assignments might not be so obvious. + +One way to get a big dose of data showing how these colors are assigned is to run the **dircolors** command. It will show you something like this: +``` +$ dircolors +LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do +=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg +=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01 +;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01 +;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=0 +1;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31 +:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*. +xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.t +bz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.j +ar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.a +lz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.r +z=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*. 
+mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35: +*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35: +*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;3 +5:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01; +35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01 +;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01 +;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01 +;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;3 +5:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;3 +5:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;3 +6:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00; +36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00; +36:*.spx=00;36:*.xspf=00;36:'; +export LS_COLORS + +``` + +If you're good at parsing, you probably noticed that there's a pattern to this listing. Break it on the colons, and you'll see something like this: +``` +$ dircolors | tr ":" "\n" | head -10 +LS_COLORS='rs=0 +di=01;34 +ln=01;36 +mh=00 +pi=40;33 +so=01;35 +do=01;35 +bd=40;33;01 +cd=40;33;01 +or=40;31;01 + +``` + +OK, so we have a pattern here — a series of definitions that have one to three numeric components. Let's home in on one definition. +``` +pi=40;33 + +``` + +The first question someone is likely to ask is "What is pi?" We're working with colors and file types here, so this clearly isn't the intriguing number that starts with 3.14. No, this "pi" stands for "pipe" — a particular type of file on Linux systems that makes it possible to send data from one program to another. So, let's set one up. +``` +$ mknod /tmp/mypipe p +$ ls -l /tmp/mypipe +prw-rw-r-- 1 shs shs 0 May 1 14:00 /tmp/mypipe + +``` + +When we look at our pipe and a couple of other files in a terminal window, the color differences are quite obvious.
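If you want to verify the file's type without relying on color at all, the shell's `test -p` does the job. Here's a small sketch of the same steps, using a throwaway directory instead of /tmp:

```shell
# Create a named pipe in a scratch directory and verify its type.
dir=$(mktemp -d)
mknod "$dir/mypipe" p
ls -l "$dir/mypipe"        # the mode field starts with 'p' for a pipe
test -p "$dir/mypipe" && echo "it's a pipe"
```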
+ +![font colors][1] Sandra Henry-Stocker + +The "40" in the definition of pi (shown above) makes the file show up in the terminal (or PuTTY) window with a black background. The 33 makes the font color orange. Pipes are special files, and this special handling makes them stand out in a directory listing. + +The **bd** and **cd** definitions are identical to each other (40;33;01) and have an extra setting. The settings cause block (bd) and character (cd) devices to be displayed with a black background, an orange font, and one other effect — the characters will be in bold. + +The following list shows the color and font assignments that are made by **file type** : +``` +setting file type +======= ========= +rs=0 reset to no color +di=01;34 directory +ln=01;36 link +mh=00 multi-hard link +pi=40;33 pipe +so=01;35 socket +do=01;35 door +bd=40;33;01 block device +cd=40;33;01 character device +or=40;31;01 orphan +mi=00 missing? +su=37;41 setuid +sg=30;43 setgid +ca=30;41 file with capability +tw=30;42 directory with sticky bit and world writable +ow=34;42 directory that is world writable +st=37;44 directory with sticky bit +ex=01;32 executable + +``` + +You may have noticed that in our **dircolors** command output, most of our definitions started with asterisks (e.g., *.wav=00;36). These define display attributes by **file extension** rather than file type. Here's a sampling: +``` +$ dircolors | tr ":" "\n" | tail -10 +*.mpc=00;36 +*.ogg=00;36 +*.ra=00;36 +*.wav=00;36 +*.oga=00;36 +*.opus=00;36 +*.spx=00;36 +*.xspf=00;36 +'; +export LS_COLORS + +``` + +These settings (all 00;36 in the listing above) would have these file names displaying in cyan. The available colors are shown below. + +![all colors][2] Sandra Henry-Stocker + +### How to change your settings + +The colors and font changes described require that you use an alias for ls that turns on the color feature.
This is usually the default on Linux systems and will look like this: +``` +alias ls='ls --color=auto' + +``` + +If you wanted to turn off font colors, you could run the **unalias ls** command and your file listings would then show in only the default font color. + +You can alter your text colors by modifying your $LS_COLORS settings and exporting the modified setting: +``` +$ export LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;... + +``` + +NOTE: The command above is truncated. + +If you want your modified text colors to be permanent, you would need to add your modified LS_COLORS definition to one of your startup files (e.g., .bashrc). + +### More on command line text + +You can find additional information on text colors in this [November 2016][3] post on NetworkWorld. + + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3269587/linux/customizing-your-text-colors-on-the-linux-command-line.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[1]:https://images.idgesg.net/images/article/2018/05/font-colors-100756483-large.jpg +[2]:https://images.techhive.com/images/article/2016/11/all-colors-100691990-large.jpg +[3]:https://www.networkworld.com/article/3138909/linux/coloring-your-world-with-ls-colors.html From de27d19d845040553975ed03dcf312bd2a64a7e5 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 4 May 2018 20:07:21 +0800 Subject: [PATCH 099/102] PRF:20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md @geekpi --- ...You To Find Non-free Software In Debian.md | 42 ++++++++----------- 1 file changed, 18 insertions(+), 24 deletions(-) diff --git a/translated/tech/20180414 The Vrms Program 
Helps You To Find Non-free Software In Debian.md b/translated/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md index 7c42daae2c..5bbeabb478 100644 --- a/translated/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md +++ b/translated/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md @@ -1,44 +1,41 @@ -Vrms 助你在 Debian 中查找非自由软件 +vrms 助你在 Debian 中查找非自由软件 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/04/vrms-1-720x340.png) -有一天,我在阅读一篇有趣的指南,它解释了[**在数字海洋中的自由和开源软件之间的区别**][1]。在此之前,我认为两者都差不多。但是,我错了。它们之间有一些显著差异。在阅读那篇文章时,我想知道如何在 Linux 中找到非自由软件,因此有了这篇文章。 + +有一天,我在 Digital ocean 上读到一篇有趣的指南,它解释了[自由和开源软件之间的区别][1]。在此之前,我认为两者都差不多。但是,我错了。它们之间有一些显著差异。在阅读那篇文章时,我想知道如何在 Linux 中找到非自由软件,因此有了这篇文章。 ### 向 “Virtual Richard M. Stallman” 问好,这是一个在 Debian 中查找非自由软件的 Perl 脚本 -**Virtual Richard M. Stallman** ,简称 **vrms**,是一个用 Perl 编写的程序,它在你基于 Debian 的系统上分析已安装软件的列表,并报告所有来自非自由和 contrib 树的已安装软件包。对于那些疑惑的人,免费软件应该符合以下[**四项基本自由**][2]。 +**Virtual Richard M. Stallman** ,简称 **vrms**,是一个用 Perl 编写的程序,它在你基于 Debian 的系统上分析已安装软件的列表,并报告所有来自非自由和 contrib 树的已安装软件包。对于那些不太清楚区别的人,自由软件应该符合以下[**四项基本自由**][2]。 * **自由 0** – 不管任何目的,随意运行程序的自由。 - * **自由 1** – 自由研究程序如何工作,并根据你的需求进行调整。访问源代码是一个先决条件。 - * **自由 2** – 自由重新分发拷贝,这样你可以帮助别人。 - * **自由 3** – 自由改进程序,并向公众发布改进,以便整个社区获益。访问源代码是一个先决条件。 + * **自由 1** – 研究程序如何工作的自由,并根据你的需求进行调整。访问源代码是一个先决条件。 + * **自由 2** – 重新分发副本的自由,这样你可以帮助别人。 + * **自由 3** – 改进程序,并向公众发布改进的自由,以便整个社区获益。访问源代码是一个先决条件。 - - -任何不满足上述四个条件的软件都不被视为自由软件。简而言之,**自由软件意味着用户可以自由运行、拷贝、分发、研究、修改和改进软件。** +任何不满足上述四个条件的软件都不被视为自由软件。简而言之,**自由软件意味着用户有运行、复制、分发、研究、修改和改进软件的自由。** 现在让我们来看看安装的软件是自由的还是非自由的,好么? 
-Vrms 包存在于 Debian 及其衍生版(如 Ubuntu)的默认仓库中。因此,你可以使用 apt 包管理器安装它,使用下面的命令。 +vrms 包存在于 Debian 及其衍生版(如 Ubuntu)的默认仓库中。因此,你可以使用 `apt` 包管理器安装它,使用下面的命令。 + ``` $ sudo apt-get install vrms - ``` 安装完成后,运行以下命令,在基于 debian 的系统中查找非自由软件。 + ``` $ vrms - ``` 在我的 Ubuntu 16.04 LTS 桌面版上输出的示例。 + ``` - Non-free packages installed on ostechnix - + Non-free packages installed on ostechnix unrar Unarchiver for .rar files (non-free version) - 1 non-free packages, 0.0% of 2103 installed packages. - ``` ![][4] @@ -46,33 +43,30 @@ unrar Unarchiver for .rar files (non-free version) 如你在上面的截图中看到的那样,我的 Ubuntu 中安装了一个非自由软件包。 如果你的系统中没有任何非自由软件包,则应该看到以下输出。 + ``` No non-free or contrib packages installed on ostechnix! rms would be proud. - ``` -Vrms 不仅可以在 Debian 上找到非自由软件包,还可以在 Ubuntu、Linux Mint 和其他基于 deb 的系统中找到非自由软件包。 +vrms 不仅可以在 Debian 上找到非自由软件包,还可以在 Ubuntu、Linux Mint 和其他基于 deb 的系统中找到非自由软件包。 **限制** -Vrms 虽然有一些限制。就像我已经提到的那样,它列出了安装的非自由和 contrib 部分的软件包。但是,某些发行版并未遵循确保专有软件仅在 vrm 识别为“非自由”的仓库中存在,并且它们不努力维护分离。在这种情况下,Vrms 将不会识别非自由软件,并且始终会报告你的系统上安装了非自由软件。如果你使用的是像 Debian 和 Ubuntu 这样的发行版,遵循将专有软件保留在非自由仓库的策略,Vrms 一定会帮助你找到非自由软件包。 +vrms 虽然有一些限制。就像我已经提到的那样,它列出了安装的非自由和 contrib 部分的软件包。但是,某些发行版并未遵循确保专有软件仅在 vrms 识别为“非自由”的仓库中存在,并且它们不努力维护这种分离。在这种情况下,vrms 将不能识别非自由软件,并且始终会报告你的系统上安装了非自由软件。如果你使用的是像 Debian 和 Ubuntu 这样的发行版,遵循将专有软件保留在非自由仓库的策略,vrms 一定会帮助你找到非自由软件包。 就是这些。希望它是有用的。还有更好的东西。敬请关注! -祝世上所有的泰米尔人在泰米尔新年快乐! - 干杯! 
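
作为补充,下面是一个假设性的小例子(并非 vrms 的实际实现方式,仅用于说明其判断思路):vrms 大致是根据软件包所属的仓库区域(Section)来判断其是否自由的。这里用内置示例数据模拟 `dpkg-query` 输出的 “Section 包名” 列表,再用 `grep` 过滤出来自 non-free 或 contrib 区域的软件包:

```shell
# 假设性示例:用内置示例数据模拟 dpkg-query 的 "Section 包名" 输出
sample='editors vim
non-free/utils unrar
contrib/misc flashplugin-installer
admin sudo'

# 过滤出来自 non-free 或 contrib 区域的软件包
echo "$sample" | grep -E '^(non-free|contrib)'

# 在真实的 Debian/Ubuntu 系统上,可以把示例数据换成实际输出:
# dpkg-query -W -f='${Section} ${Package}\n' | grep -E '^(non-free|contrib)'
```

注意这只是一个粗略的近似,vrms 本身的检测逻辑更完善,也能处理上文提到的各种边界情况。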
- -------------------------------------------------------------------------------- via: https://www.ostechnix.com/the-vrms-program-helps-you-to-find-non-free-software-in-debian/ 作者:[SK][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) 选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e5674ddc7c453405c7a1409c0cc91df125751d52 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 4 May 2018 20:07:53 +0800 Subject: [PATCH 100/102] PUB:20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md @geekpi --- ... Vrms Program Helps You To Find Non-free Software In Debian.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md (100%) diff --git a/translated/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md b/published/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md similarity index 100% rename from translated/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md rename to published/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md From b21dc1c897b44bd764599731daccde45adf5b27c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 4 May 2018 20:29:52 +0800 Subject: [PATCH 101/102] PRF:20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md @geekpi --- ...th PGP - Part 3- Generating PGP Subkeys.md | 64 ++++++++++--------- 1 file changed, 33 insertions(+), 31 deletions(-) diff --git a/translated/tech/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md b/translated/tech/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md index cd9c2c237d..35086cd037 100644 --- 
a/translated/tech/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md +++ b/translated/tech/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md @@ -1,43 +1,43 @@ -使用 PGP 保护代码完整性 - 第 3 部分:生成 PGP 子密钥 +使用 PGP 保护代码完整性(三):生成 PGP 子密钥 ====== + +> 在第三篇文章中,我们将解释如何生成用于日常工作的 PGP 子密钥。 + ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary.jpg?itok=h62HujOC) -在本系列教程中,我们提供了使用 PGP 的实用指南。在此之前,我们介绍了[基本工具和概念][1],并介绍了如何[生成并保护您的主 PGP 密钥][2]。在第三篇文章中,我们将解释如何生成 PGP 子密钥,以及它们在日常工作中使用。 +在本系列教程中,我们提供了使用 PGP 的实用指南。在此之前,我们介绍了[基本工具和概念][1],并介绍了如何[生成并保护您的主 PGP 密钥][2]。在第三篇文章中,我们将解释如何生成用于日常工作的 PGP 子密钥。 ### 清单 1. 生成 2048 位加密子密钥(必要) -   2. 生成 2048 位签名子密钥(必要) - -  3. 生成一个 2048 位验证子密钥(可选) - +  3. 生成一个 2048 位验证子密钥(推荐)   4. 将你的公钥上传到 PGP 密钥服务器(必要) -   5. 设置一个刷新的定时任务(必要) +### 注意事项 +现在我们已经创建了主密钥,让我们创建用于日常工作的密钥。我们创建 2048 位的密钥是因为很多专用硬件(我们稍后会讨论这个)不能处理更长的密钥,但同样也是出于实用的原因。如果我们发现自己处于一个 2048 位 RSA 密钥也不够好的世界,那将是由于计算或数学有了基本突破,因此更长的 4096 位密钥不会产生太大的差别。 -#### 注意事项 - -现在我们已经创建了主密钥,让我们创建用于日常工作的密钥。我们创建了 2048 位密钥,因为很多专用硬件(我们稍后会讨论这个)不能处理更长的密钥,但同样也是出于实用的原因。如果我们发现自己处于一个 2048 位 RSA 密钥也不够好的世界,那将是由于计算或数学的基本突破,因此更长的 4096 位密钥不会产生太大的差别。 - -##### 创建子密钥 +### 创建子密钥 要创建子密钥,请运行: + ``` $ gpg --quick-add-key [fpr] rsa2048 encr $ gpg --quick-add-key [fpr] rsa2048 sign - ``` -你也可以创建验证密钥,这能让你使用你的 PGP 密钥来使用 ssh: +用你密钥的完整指纹替换 `[fpr]`。 + +你也可以创建验证密钥,这能让你将你的 PGP 密钥用于 ssh: + ``` $ gpg --quick-add-key [fpr] rsa2048 auth - ``` -你可以使用 gpg --list-key [fpr] 来查看你的密钥信息: +你可以使用 `gpg --list-key [fpr]` 来查看你的密钥信息: + ``` pub rsa4096 2017-12-06 [C] [expires: 2019-12-06] 111122223333444455556666AAAABBBBCCCCDDDD @@ -45,55 +45,57 @@ uid [ultimate] Alice Engineer uid [ultimate] Alice Engineer sub rsa2048 2017-12-06 [E] sub rsa2048 2017-12-06 [S] - ``` -##### 上传你的公钥到密钥服务器 +### 上传你的公钥到密钥服务器 你的密钥创建已完成,因此现在需要你将其上传到一个公共密钥服务器,使其他人能更容易找到密钥。 (如果你不打算实际使用你创建的密钥,请跳过这一步,因为这只会在密钥服务器上留下垃圾数据。) + ``` $ gpg --send-key [fpr] - ``` 如果此命令不成功,你可以尝试指定一台密钥服务器以及端口,这很有可能成功: + ``` $ gpg --keyserver 
hkp://pgp.mit.edu:80 --send-key [fpr] - ``` 大多数密钥服务器彼此进行通信,因此你的密钥信息最终将与所有其他密钥信息同步。 -**关于隐私的注意事项:**密钥服务器是完全公开的,因此在设计上会泄露有关你的潜在敏感信息,例如你的全名、昵称以及个人或工作邮箱地址。如果你签名了其他人的钥匙或某人签名你的钥匙,那么密钥服务器还会成为你的社交网络的泄密者。一旦这些个人信息发送给密钥服务器,就不可能编辑或删除。即使你撤销签名或身份,它也不会将你的密钥记录删除,它只会将其标记为已撤消 - 这甚至会显得更突出。 +**关于隐私的注意事项:**密钥服务器是完全公开的,因此在设计上会泄露有关你的潜在敏感信息,例如你的全名、昵称以及个人或工作邮箱地址。如果你签名了其他人的钥匙或某人签名了你的钥匙,那么密钥服务器还会成为你的社交网络的泄密者。一旦这些个人信息发送给密钥服务器,就不可能被编辑或删除。即使你撤销签名或身份,它也不会将你的密钥记录删除,它只会将其标记为已撤消 —— 这甚至会显得更显眼。 也就是说,如果你参与公共项目的软件开发,以上所有信息都是公开记录,因此通过密钥服务器另外让这些信息可见,不会导致隐私的净损失。 -###### 上传你的公钥到 GitHub +### 上传你的公钥到 GitHub 如果你在开发中使用 GitHub(谁不是呢?),则应按照他们提供的说明上传密钥: +- [添加 PGP 密钥到你的 GitHub 账户](https://help.github.com/articles/adding-a-new-gpg-key-to-your-github-account/) + 要生成适合粘贴的公钥输出,只需运行: + ``` $ gpg --export --armor [fpr] - ``` -##### 设置一个刷新定时任务 +### 设置一个刷新定时任务 + +你需要定期刷新你的钥匙环,以获取其他人公钥的最新更改。你可以设置一个定时任务来做到这一点: -你需要定期刷新你的 keyring,以获取其他人公钥的最新更改。你可以设置一个定时任务来做到这一点: ``` $ crontab -e - ``` 在新行中添加以下内容: + ``` @daily /usr/bin/gpg2 --refresh >/dev/null 2>&1 - ``` -**注意:**检查你的 gpg 或 gpg2 命令的完整路径,如果你的 gpg 是旧式的 GnuPG v.1,请使用 gpg2。 +**注意:**检查你的 `gpg` 或 `gpg2` 命令的完整路径,如果你的 `gpg` 是旧式的 GnuPG v.1,请使用 gpg2。 +通过 Linux 基金会和 edX 的免费“[Introduction to Linux](https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux)” 课程了解关于 Linux 的更多信息。 -------------------------------------------------------------------------------- @@ -101,10 +103,10 @@ via: https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-p 作者:[Konstantin Ryabitsev][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.linux.com/users/mricon -[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools 
-[2]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key +[1]:https://linux.cn/article-9524-1.html +[2]:https://linux.cn/article-9529-1.html \ No newline at end of file From 356fb3e8c11951fc2efbf0e51d813eb71e923e2d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 4 May 2018 20:30:26 +0800 Subject: [PATCH 102/102] PUB:20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md @geekpi --- ...ng Code Integrity with PGP - Part 3- Generating PGP Subkeys.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md (100%) diff --git a/translated/tech/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md b/published/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md similarity index 100% rename from translated/tech/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md rename to published/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md