Merge remote-tracking branch 'LCTT/master'

Xingyu.Wang 2018-06-12 10:04:39 +08:00
commit ef37bcadc7
13 changed files with 1088 additions and 190 deletions


@ -1,6 +1,8 @@
Configuring Local Storage in Linux with Stratis
======
> With its focus on ease of use, Stratis provides desktop users with a powerful set of advanced storage features.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl)
Desktop Linux users configure their local storage rarely, or only when installing the system. Linux storage technology has evolved slowly, so many storage tools from 20 years ago are still widely used today. But storage technology has improved considerably since then, so why not take advantage of the new features?
@ -9,29 +11,29 @@
### Simple and reliable use of advanced storage features
Stratis wants to make three things easier: initial configuration of storage; making changes later; and using advanced storage features, including snapshots, thin provisioning, and even tiering.
### Stratis: a volume-managing filesystem
Stratis is a volume-managing filesystem (VMF), like [ZFS][1] and [Btrfs][2]. It starts with the central idea of a storage “pool”, an idea shared by other VMFs and by standalone volume managers such as [LVM][3]. A pool is created from one or more disks (or partitions), and volumes are created from the pool. Unlike traditional disk partitioning with [fdisk][4] or [GParted][5], the user does not need to specify how the volumes are laid out within the pool.
A VMF goes one step further by integrating the filesystem layer. The user no longer needs to pick a filesystem to put on the volume, because the filesystem and the volume have been merged into a single concept: a conceptual file tree (which ZFS calls a dataset, Btrfs a subvolume, and Stratis a filesystem) whose data lives in the pool and whose size is limited only by the pool's total capacity.
Another way to look at it: just as a filesystem abstracts away the actual location of the storage blocks that make up a single file, a VMF abstracts away the actual location of the storage blocks that make up a single filesystem within the pool.
The pool enables other useful features. Some of them, such as filesystem snapshots, follow naturally from the typical VMF implementation, since multiple filesystems in the pool can share physical data blocks. Others, such as redundancy, tiering, and integrity, also make sense here, because the pool is the natural place in the operating system to manage these features for all of its filesystems.
The result is that a VMF is simpler to set up and manage than a separate volume manager plus filesystem layer, and it makes advanced storage features easier to enable.
### How is Stratis different from ZFS or Btrfs?
As a new project, Stratis can learn from existing ones; we will cover in depth which designs Stratis borrows from ZFS, Btrfs, and LVM in [Part 2][6]. In short, the differences come from observing which features were actually used, from changes in how individuals use and automate computers, and from changes in the underlying hardware.
First, Stratis emphasizes ease of use and safety. This matters for individual users, who may interact with Stratis only at long intervals. If the interaction is unfriendly, especially if there is any chance of losing data, most people will give up on the new features and stick with a more basic filesystem.
Second, APIs and DevOps-style automation matter far more now than they did a few years ago. Stratis provides a first-class API that supports automation, so people can use Stratis directly from automation tools.
Third, SSDs have grown significantly in both capacity and market share. Much of the code in early filesystems went into optimizing around the slow access times of rotating media, but those optimizations matter far less for flash-based media. Even when a pool is too large for SSDs to be practical, an SSD caching tier can still give a solid performance boost. Given the good performance of SSDs, Stratis focuses its pool design on flexibility and reliability.
Finally, Stratis has a distinctly different implementation model from ZFS and Btrfs (I will analyze this further in [Part 2][6]). This means some features are harder for Stratis to implement while others are easier, and it also speeds up Stratis's development.
@ -56,7 +58,7 @@ via: https://opensource.com/article/18/4/stratis-easy-use-local-storage-manageme
Author: [Andy Grover][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [pinewall](https://github.com/pinewall)
Proofread by: [校对者ID](https://github.com/校对者ID)
Proofread by: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)


@ -1,4 +1,4 @@
Ohcount: The Source Code Line Counter And Analyzer
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/ohcount-720x340.png)
@ -9,26 +9,25 @@
### The Ohcount Code Line Counter
#### Installation
Ohcount is available in the default repositories of Debian, Ubuntu, and their derivatives, so you can install it using the APT package manager as shown below.
```
$ sudo apt-get install ohcount
```
#### Usage
Using Ohcount is very simple. All you have to do is go to the directory whose code you want to analyze and run the program.
For example, I will analyze the source code of the [coursera-dl][2] program.
```
$ cd coursera-dl-master/
$ ohcount
```
Here is the line count summary of coursera-dl:
@ -38,6 +37,7 @@ $ ohcount
As you can see, the source code of coursera-dl contains 141 files in total. The first column shows the names of the programming languages found in the source. The second column shows the number of files for each language. The third column shows the total number of lines for each language. The fourth and fifth columns show how many lines are comments and their percentage. The sixth column shows the number of blank lines. The last (seventh) column shows the number of actual code lines for each language, plus the grand total for coursera-dl.
Alternatively, you can simply pass the full path, as shown below.
```
$ ohcount coursera-dl-master/
@ -47,52 +47,52 @@ $ ohcount coursera-dl-master/
If you don't want to type the full directory path every time, just cd into the directory and then run ohcount to analyze the code inside it.
To count the lines of code in each individual file, use the `-i` flag.
```
$ ohcount -i
```
Sample output:
![][5]
Ohcount can also display annotated source code when you use the `-a` flag.
```
$ ohcount -a
```
![][6]
As you can see, the contents of all source files in the directory are displayed, and each line is prefixed with the language name and a semantic categorization (code, comment, or blank), separated by tabs.
Sometimes, you just want to know which licenses are used in the source code. To do that, use the `-l` flag.
```
$ ohcount -l
lgpl3, coursera_dl.py
gpl coursera_dl.py
```
Another available option is `-re`, which prints raw entity information to the screen (mainly for debugging).
```
$ ohcount -re
```
To find all source code files recursively under a given path, use the `-d` flag.
```
$ ohcount -d
```
The above command displays all the source code files in the current working directory, with each filename prefixed by the language name, separated by a tab.
For more details and supported options, run:
```
$ ohcount --help
```
Ohcount is very useful for anyone who wants to analyze code written by themselves or by other developers and check the number of lines, the languages used, the license details of the code, and so on.
@ -101,8 +101,6 @@ $ ohcount --help
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer/
@ -110,7 +108,7 @@ via: https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer
Author: [SK][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [校对者ID](https://github.com/校对者ID)
Proofread by: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)


@ -0,0 +1,111 @@
My Lisp Experiences and the Development of GNU Emacs
======
> (Transcript of Richard Stallman's Speech, 28 Oct 2002, at the International Lisp Conference).
Since none of my usual speeches have anything to do with Lisp, none of them were appropriate for today. So I'm going to have to wing it. Since I've done enough things in my career connected with Lisp I should be able to say something interesting.
My first experience with Lisp was when I read the Lisp 1.5 manual in high school. That's when I had my mind blown by the idea that there could be a computer language like that. The first time I had a chance to do anything with Lisp was when I was a freshman at Harvard and I wrote a Lisp interpreter for the PDP-11. It was a very small machine — it had something like 8k of memory — and I managed to write the interpreter in a thousand instructions. This gave me some room for a little bit of data. That was before I got to see what real software was like, that did real system jobs.
I began doing work on a real Lisp implementation with JonL White once I started working at MIT. I got hired at the Artificial Intelligence Lab not by JonL, but by Russ Noftsker, which was most ironic considering what was to come — he must have really regretted that day.
During the 1970s, before my life became politicized by horrible events, I was just going along making one extension after another for various programs, and most of them did not have anything to do with Lisp. But, along the way, I wrote a text editor, Emacs. The interesting idea about Emacs was that it had a programming language, and the user's editing commands would be written in that interpreted programming language, so that you could load new commands into your editor while you were editing. You could edit the programs you were using and then go on editing with them. So, we had a system that was useful for things other than programming, and yet you could program it while you were using it. I don't know if it was the first one of those, but it certainly was the first editor like that.
This spirit of building up gigantic, complicated programs to use in your own editing, and then exchanging them with other people, fueled the spirit of free-wheeling cooperation that we had at the AI Lab then. The idea was that you could give a copy of any program you had to someone who wanted a copy of it. We shared programs to whomever wanted to use them, they were human knowledge. So even though there was no organized political thought relating the way we shared software to the design of Emacs, I'm convinced that there was a connection between them, an unconscious connection perhaps. I think that it's the nature of the way we lived at the AI Lab that led to Emacs and made it what it was.
The original Emacs did not have Lisp in it. The lower level language, the non-interpreted language — was PDP-10 Assembler. The interpreter we wrote in that actually wasn't written for Emacs, it was written for TECO. It was our text editor, and was an extremely ugly programming language, as ugly as could possibly be. The reason was that it wasn't designed to be a programming language, it was designed to be an editor and command language. There were commands like 5l, meaning move five lines, or i and then a string and then an ESC to insert that string. You would type a string that was a series of commands, which was called a command string. You would end it with ESC ESC, and it would get executed.
Well, people wanted to extend this language with programming facilities, so they added some. For instance, one of the first was a looping construct, which was < >. You would put those around things and it would loop. There were other cryptic commands that could be used to conditionally exit the loop. To make Emacs, we (1) added facilities to have subroutines with names. Before that, it was sort of like Basic, and the subroutines could only have single letters as their names. That was hard to program big programs with, so we added code so they could have longer names. Actually, there were some rather sophisticated facilities; I think that Lisp got its unwind-protect facility from TECO.
We started putting in rather sophisticated facilities, all with the ugliest syntax you could ever think of, and it worked — people were able to write large programs in it anyway. The obvious lesson was that a language like TECO, which wasn't designed to be a programming language, was the wrong way to go. The language that you build your extensions on shouldn't be thought of as a programming language in afterthought; it should be designed as a programming language. In fact, we discovered that the best programming language for that purpose was Lisp.
It was Bernie Greenberg, who discovered that it was (2). He wrote a version of Emacs in Multics MacLisp, and he wrote his commands in MacLisp in a straightforward fashion. The editor itself was written entirely in Lisp. Multics Emacs proved to be a great success — programming new editing commands was so convenient that even the secretaries in his office started learning how to use it. They used a manual someone had written which showed how to extend Emacs, but didn't say it was programming. So the secretaries, who believed they couldn't do programming, weren't scared off. They read the manual, discovered they could do useful things and they learned to program.
So Bernie saw that an application — a program that does something useful for you — which has Lisp inside it and which you could extend by rewriting the Lisp programs, is actually a very good way for people to learn programming. It gives them a chance to write small programs that are useful for them, which in most arenas you can't possibly do. They can get encouragement for their own practical use — at the stage where it's the hardest — where they don't believe they can program, until they get to the point where they are programmers.
At that point, people began to wonder how they could get something like this on a platform where they didn't have full service Lisp implementation. Multics MacLisp had a compiler as well as an interpreter — it was a full-fledged Lisp system — but people wanted to implement something like that on other systems where they had not already written a Lisp compiler. Well, if you didn't have the Lisp compiler you couldn't write the whole editor in Lisp — it would be too slow, especially redisplay, if it had to run interpreted Lisp. So we developed a hybrid technique. The idea was to write a Lisp interpreter and the lower level parts of the editor together, so that parts of the editor were built-in Lisp facilities. Those would be whatever parts we felt we had to optimize. This was a technique that we had already consciously practiced in the original Emacs, because there were certain fairly high level features which we re-implemented in machine language, making them into TECO primitives. For instance, there was a TECO primitive to fill a paragraph (actually, to do most of the work of filling a paragraph, because some of the less time-consuming parts of the job would be done at the higher level by a TECO program). You could do the whole job by writing a TECO program, but that was too slow, so we optimized it by putting part of it in machine language. We used the same idea here (in the hybrid technique), that most of the editor would be written in Lisp, but certain parts of it that had to run particularly fast would be written at a lower level.
Therefore, when I wrote my second implementation of Emacs, I followed the same kind of design. The low level language was not machine language anymore, it was C. C was a good, efficient language for portable programs to run in a Unix-like operating system. There was a Lisp interpreter, but I implemented facilities for special purpose editing jobs directly in C — manipulating editor buffers, inserting leading text, reading and writing files, redisplaying the buffer on the screen, managing editor windows.
Now, this was not the first Emacs that was written in C and ran on Unix. The first was written by James Gosling, and was referred to as GosMacs. A strange thing happened with him. In the beginning, he seemed to be influenced by the same spirit of sharing and cooperation of the original Emacs. I first released the original Emacs to people at MIT. Someone wanted to port it to run on Twenex — it originally only ran on the Incompatible Timesharing System we used at MIT. They ported it to Twenex, which meant that there were a few hundred installations around the world that could potentially use it. We started distributing it to them, with the rule that “you had to send back all of your improvements” so we could all benefit. No one ever tried to enforce that, but as far as I know people did cooperate.
Gosling did, at first, seem to participate in this spirit. He wrote in a manual that he called the program Emacs hoping that others in the community would improve it until it was worthy of that name. That's the right approach to take towards a community — to ask them to join in and make the program better. But after that he seemed to change the spirit, and sold it to a company.
At that time I was working on the GNU system (a free software Unix-like operating system that many people erroneously call “Linux”). There was no free software Emacs editor that ran on Unix. I did, however, have a friend who had participated in developing Gosling's Emacs. Gosling had given him, by email, permission to distribute his own version. He proposed to me that I use that version. Then I discovered that Gosling's Emacs did not have a real Lisp. It had a programming language that was known as mocklisp, which looks syntactically like Lisp, but didn't have the data structures of Lisp. So programs were not data, and vital elements of Lisp were missing. Its data structures were strings, numbers and a few other specialized things.
I concluded I couldn't use it and had to replace it all, the first step of which was to write an actual Lisp interpreter. I gradually adapted every part of the editor based on real Lisp data structures, rather than ad hoc data structures, making the data structures of the internals of the editor exposable and manipulable by the user's Lisp programs.
The one exception was redisplay. For a long time, redisplay was sort of an alternate world. The editor would enter the world of redisplay and things would go on with very special data structures that were not safe for garbage collection, not safe for interruption, and you couldn't run any Lisp programs during that. We've changed that since — it's now possible to run Lisp code during redisplay. It's quite a convenient thing.
This second Emacs program was free software in the modern sense of the term — it was part of an explicit political campaign to make software free. The essence of this campaign was that everybody should be free to do the things we did in the old days at MIT, working together on software and working with whomever wanted to work with us. That is the basis for the free software movement — the experience I had, the life that I've lived at the MIT AI lab — to be working on human knowledge, and not be standing in the way of anybody's further using and further disseminating human knowledge.
At the time, you could make a computer that was about the same price range as other computers that weren't meant for Lisp, except that it would run Lisp much faster than they would, and with full type checking in every operation as well. Ordinary computers typically forced you to choose between execution speed and good typechecking. So yes, you could have a Lisp compiler and run your programs fast, but when they tried to take `car` of a number, it got nonsensical results and eventually crashed at some point.
The Lisp machine was able to execute instructions about as fast as those other machines, but each instruction — a car instruction would do data typechecking — so when you tried to get the car of a number in a compiled program, it would give you an immediate error. We built the machine and had a Lisp operating system for it. It was written almost entirely in Lisp, the only exceptions being parts written in the microcode. People became interested in manufacturing them, which meant they should start a company.
There were two different ideas about what this company should be like. Greenblatt wanted to start what he called a “hacker” company. This meant it would be a company run by hackers and would operate in a way conducive to hackers. Another goal was to maintain the AI Lab culture (3). Unfortunately, Greenblatt didn't have any business experience, so other people in the Lisp machine group said they doubted whether he could succeed. They thought that his plan to avoid outside investment wouldn't work.
Why did he want to avoid outside investment? Because when a company has outside investors, they take control and they don't let you have any scruples. And eventually, if you have any scruples, they also replace you as the manager.
So Greenblatt had the idea that he would find a customer who would pay in advance to buy the parts. They would build machines and deliver them; with profits from those parts, they would then be able to buy parts for a few more machines, sell those and then buy parts for a larger number of machines, and so on. The other people in the group thought that this couldn't possibly work.
Greenblatt then recruited Russell Noftsker, the man who had hired me, who had subsequently left the AI Lab and created a successful company. Russell was believed to have an aptitude for business. He demonstrated this aptitude for business by saying to the other people in the group, “Let's ditch Greenblatt, forget his ideas, and we'll make another company.” Stabbing in the back, clearly a real businessman. Those people decided they would form a company called Symbolics. They would get outside investment, not have scruples, and do everything possible to win.
But Greenblatt didn't give up. He and the few people loyal to him decided to start Lisp Machines Inc. anyway and go ahead with their plans. And what do you know, they succeeded! They got the first customer and were paid in advance. They built machines and sold them, and built more machines and more machines. They actually succeeded even though they didn't have the help of most of the people in the group. Symbolics also got off to a successful start, so you had two competing Lisp machine companies. When Symbolics saw that LMI was not going to fall flat on its face, they started looking for ways to destroy it.
Thus, the abandonment of our lab was followed by “war” in our lab. The abandonment happened when Symbolics hired away all the hackers, except me and the few who worked at LMI part-time. Then they invoked a rule and eliminated people who worked part-time for MIT, so they had to leave entirely, which left only me. The AI lab was now helpless. And MIT had made a very foolish arrangement with these two companies. It was a three-way contract where both companies licensed the use of Lisp machine system sources. These companies were required to let MIT use their changes. But it didn't say in the contract that MIT was entitled to put them into the MIT Lisp machine systems that both companies had licensed. Nobody had envisioned that the AI lab's hacker group would be wiped out, but it was.
So Symbolics came up with a plan (4). They said to the lab, “We will continue making our changes to the system available for you to use, but you can't put it into the MIT Lisp machine system. Instead, we'll give you access to Symbolics' Lisp machine system, and you can run it, but that's all you can do.”
This, in effect, meant that they demanded that we had to choose a side, and use either the MIT version of the system or the Symbolics version. Whichever choice we made determined which system our improvements went to. If we worked on and improved the Symbolics version, we would be supporting Symbolics alone. If we used and improved the MIT version of the system, we would be doing work available to both companies, but Symbolics saw that we would be supporting LMI because we would be helping them continue to exist. So we were not allowed to be neutral anymore.
Up until that point, I hadn't taken the side of either company, although it made me miserable to see what had happened to our community and the software. But now, Symbolics had forced the issue. So, in an effort to help keep Lisp Machines Inc. going (5) — I began duplicating all of the improvements Symbolics had made to the Lisp machine system. I wrote the equivalent improvements again myself (i.e., the code was my own).
After a while (6), I came to the conclusion that it would be best if I didn't even look at their code. When they made a beta announcement that gave the release notes, I would see what the features were and then implement them. By the time they had a real release, I did too.
In this way, for two years, I prevented them from wiping out Lisp Machines Incorporated, and the two companies went on. But, I didn't want to spend years and years punishing someone, just thwarting an evil deed. I figured they had been punished pretty thoroughly because they were stuck with competition that was not leaving or going to disappear (7). Meanwhile, it was time to start building a new community to replace the one that their actions and others had wiped out.
The Lisp community in the 70s was not limited to the MIT AI Lab, and the hackers were not all at MIT. The war that Symbolics started was what wiped out MIT, but there were other events going on then. There were people giving up on cooperation, and together this wiped out the community and there wasn't much left.
Once I stopped punishing Symbolics, I had to figure out what to do next. I had to make a free operating system, that was clear — the only way that people could work together and share was with a free operating system.
At first, I thought of making a Lisp-based system, but I realized that wouldn't be a good idea technically. To have something like the Lisp machine system, you needed special purpose microcode. That's what made it possible to run programs as fast as other computers would run their programs and still get the benefit of typechecking. Without that, you would be reduced to something like the Lisp compilers for other machines. The programs would be faster, but unstable. Now that's okay if you're running one program on a timesharing system — if one program crashes, that's not a disaster, that's something your program occasionally does. But that didn't make it good for writing the operating system in, so I rejected the idea of making a system like the Lisp machine.
I decided instead to make a Unix-like operating system that would have Lisp implementations to run as user programs. The kernel wouldn't be written in Lisp, but we'd have Lisp. So the development of that operating system, the GNU operating system, is what led me to write the GNU Emacs. In doing this, I aimed to make the absolute minimal possible Lisp implementation. The size of the programs was a tremendous concern.
There were people in those days, in 1985, who had one-megabyte machines without virtual memory. They wanted to be able to use GNU Emacs. This meant I had to keep the program as small as possible.
For instance, at the time the only looping construct was while, which was extremely simple. There was no way to break out of the while statement, you just had to do a catch and a throw, or test a variable that ran the loop. That shows how far I was pushing to keep things small. We didn't have caar and cadr and so on; “squeeze out everything possible” was the spirit of GNU Emacs, the spirit of Emacs Lisp, from the beginning.
Obviously, machines are bigger now, and we don't do it that way any more. We put in caar and cadr and so on, and we might put in another looping construct one of these days. We're willing to extend it some now, but we don't want to extend it to the level of common Lisp. I implemented Common Lisp once on the Lisp machine, and I'm not all that happy with it. One thing I don't like terribly much is keyword arguments (8). They don't seem quite Lispy to me; I'll do it sometimes but I minimize the times when I do that.
That was not the end of the GNU projects involved with Lisp. Later on around 1995, we were looking into starting a graphical desktop project. It was clear that for the programs on the desktop, we wanted a programming language to write a lot of it in to make it easily extensible, like the editor. The question was what it should be.
At the time, TCL was being pushed heavily for this purpose. I had a very low opinion of TCL, basically because it wasn't Lisp. It looks a tiny bit like Lisp, but semantically it isn't, and it's not as clean. Then someone showed me an ad where Sun was trying to hire somebody to work on TCL to make it the “de-facto standard extension language” of the world. And I thought, “We've got to stop that from happening.” So we started to make Scheme the standard extensibility language for GNU. Not Common Lisp, because it was too large. The idea was that we would have a Scheme interpreter designed to be linked into applications in the same way TCL was linked into applications. We would then recommend that as the preferred extensibility package for all GNU programs.
There's an interesting benefit you can get from using such a powerful language as a version of Lisp as your primary extensibility language. You can implement other languages by translating them into your primary language. If your primary language is TCL, you can't very easily implement Lisp by translating it into TCL. But if your primary language is Lisp, it's not that hard to implement other things by translating them. Our idea was that if each extensible application supported Scheme, you could write an implementation of TCL or Python or Perl in Scheme that translates that program into Scheme. Then you could load that into any application and customize it in your favorite language and it would work with other customizations as well.
As long as the extensibility languages are weak, the users have to use only the language you provided them. Which means that people who love any given language have to compete for the choice of the developers of applications — saying “Please, application developer, put my language into your application, not his language.” Then the users get no choices at all — whichever application they're using comes with one language and they're stuck with [that language]. But when you have a powerful language that can implement others by translating into it, then you give the user a choice of language and we don't have to have a language war anymore. That's what we're hoping Guile, our scheme interpreter, will do. We had a person working last summer finishing up a translator from Python to Scheme. I don't know if it's entirely finished yet, but for anyone interested in this project, please get in touch. So that's the plan we have for the future.
I haven't been speaking about free software, but let me briefly tell you a little bit about what that means. Free software does not refer to price; it doesn't mean that you get it for free. (You may have paid for a copy, or gotten a copy gratis.) It means that you have freedom as a user. The crucial thing is that you are free to run the program, free to study what it does, free to change it to suit your needs, free to redistribute the copies of others and free to publish improved, extended versions. This is what free software means. If you are using a non-free program, you have lost crucial freedom, so don't ever do that.
The purpose of the GNU project is to make it easier for people to reject freedom-trampling, user-dominating, non-free software by providing free software to replace it. For those who don't have the moral courage to reject the non-free software, when that means some practical inconvenience, what we try to do is give a free alternative so that you can move to freedom with less of a mess and less of a sacrifice in practical terms. The less sacrifice the better. We want to make it easier for you to live in freedom, to cooperate.
This is a matter of the freedom to cooperate. We're used to thinking of freedom and cooperation with society as if they are opposites. But here they're on the same side. With free software you are free to cooperate with other people as well as free to help yourself. With non-free software, somebody is dominating you and keeping people divided. You're not allowed to share with them, you're not free to cooperate or help society, anymore than you're free to help yourself. Divided and helpless is the state of users using non-free software.
We've produced a tremendous range of free software. We've done what people said we could never do; we have two operating systems of free software. We have many applications and we obviously have a lot farther to go. So we need your help. I would like to ask you to volunteer for the GNU project; help us develop free software for more jobs. Take a look at [http://www.gnu.org/help][1] to find suggestions for how to help. If you want to order things, there's a link to that from the home page. If you want to read about philosophical issues, look in /philosophy. If you're looking for free software to use, look in /directory, which lists about 1900 packages now (which is a fraction of all the free software out there). Please write more and contribute to us. My book of essays, “Free Software and Free Society”, is on sale and can be purchased at [www.gnu.org][2]. Happy hacking!
--------------------------------------------------------------------------------
via: https://www.gnu.org/gnu/rms-lisp.html
Author: [Richard Stallman][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.gnu.org
[1]:https://www.gnu.org/help/
[2]:http://www.gnu.org/


@ -1,5 +1,6 @@
Intel and AMD Reveal New Processor Designs
======
[translating by softpaopao](#)
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/whiskey-lake.jpg?itok=b1yuW71L)


@ -1,135 +0,0 @@
translating---geekpi
A CLI Game To Learn Vim Commands
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/PacVim-720x340.png)
Howdy, Vim users! Today, I stumbled upon a cool utility to sharpen your Vim usage skills. Vim is a great editor to write and edit code. However, some of you (including me) are still struggling with the steep learning curve. Not anymore! Meet **PacVim**, a CLI game that helps you to learn Vim commands. PacVim is inspired by the classic game [**PacMan**][1] and it gives you plenty of practice with Vim commands in a fun and interesting way. Simply put, PacVim is a fun, free way to learn the Vim commands in depth. Please do not confuse PacMan with [**pacman**][2] (the Arch Linux package manager). PacMan is a classic, popular arcade game released in the 1980s.
In this brief guide, we will see how to install and use PacVim in Linux.
### Install PacVim
First, install the **Ncurses** library and **development tools** as described in the following links.
Please note that this game may not compile and install properly without gcc version 4.8.X or higher. I tested PacVim on Ubuntu 18.04 LTS and it worked perfectly.
Once Ncurses and gcc are installed, run the following commands to install PacVim.
```
$ git clone https://github.com/jmoon018/PacVim.git
$ cd PacVim
$ sudo make install
```
## Learn Vim Commands Using PacVim
### Start PacVim game
To play this game, just run:
```
$ pacvim [LEVEL_NUMBER] [MODE]
```
For example, the following command starts the game at the 5th level in normal mode.
```
$ pacvim 5 n
```
Here, **“5”** represents the level and **“n”** represents the mode. There are two modes:
* **n** normal mode.
* **h** hard mode.
The default mode is **h**, which is hard.
To start from the beginning (level 0), just run:
```
$ pacvim
```
Here is the sample output from my Ubuntu 18.04 LTS system.
![][4]
To begin the game, just press **ENTER**.
![][5]
Now start playing the game. Read the next chapter to know how to play.
To quit, press **ESC** or **q**.
The following command starts the game at the 5th level in hard mode.
```
$ pacvim 5 h
```
Or,
```
$ pacvim 5
```
### How to play PacVim?
The usage of PacVim is very similar to PacMan.
You must run over all the characters on the screen while avoiding the ghosts (the red color characters).
PacVim has two special obstacles:
1. You cannot move into the walls (yellow color). You must use vim motions to jump over them.
2. If you step on a tilde character (cyan `~`), you lose!
You are given three lives. You gain a life each time you beat level 0, 3, 6, 9, etc. There are 10 levels in total, starting from 0 to 9. After beating the 9th level, the game is reset to the 0th level, but the ghosts move faster.
**Winning conditions**
Use vim commands to move the cursor over the letters and highlight them. After all letters are highlighted, you win and proceed to the next level.
**Losing conditions**
If you touch a ghost (indicated by a **red G**) or a **tilde** character, you lose a life. If you have less than 0 lives, you will lose the entire game.
Here is the list of implemented commands:

| Key | What it does |
| --- | --- |
| q | quit the game |
| h | move left |
| j | move down |
| k | move up |
| l | move right |
| w | move forward to next word beginning |
| W | move forward to next WORD beginning |
| e | move forward to next word ending |
| E | move forward to next WORD ending |
| b | move backward to next word beginning |
| B | move backward to next WORD beginning |
| $ | move to the end of the line |
| 0 | move to the beginning of the line |
| gg/1G | move to the beginning of the first line |
| numberG | move to the beginning of the line given by number |
| G | move to the beginning of the last line |
| ^ | move to the first word at the current line |
| & | 1337 cheatz (beat current level) |
After playing a couple of levels, you may notice a slight improvement in your Vim usage. Keep playing this game once in a while until you master Vim.
And, that's all for now. Hope this was useful. Playing PacVim is fun and interesting, and it keeps you occupied. At the same time, you should be able to thoroughly learn enough Vim commands. Give it a try, you won't be disappointed.
More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/pacvim-a-cli-game-to-learn-vim-commands/
Author: [SK][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.ostechnix.com/author/sk/
[1]:https://en.wikipedia.org/wiki/Pac-Man
[2]:https://www.ostechnix.com/getting-started-pacman/
[4]:http://www.ostechnix.com/wp-content/uploads/2018/05/pacvim-1.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/05/pacvim-2.png


@ -1,3 +1,5 @@
translating---geekpi
How To Disable Built-in Webcam In Linux
======


@ -0,0 +1,152 @@
Find If A Package Is Available For Your Linux Distribution
======
![](https://www.ostechnix.com/wp-content/uploads/2018/06/Whohas-720x340.png)
Sometimes, you might wonder how to find out whether a package is available for your Linux distribution. Or you may simply want to know what version of a package is available for your distribution. If so, well, it's your lucky day. I know a tool that can get you such information. Meet **“Whohas”**, a command line tool that allows querying several package lists at once. Currently, it supports Arch, Debian, Fedora, Gentoo, Mandriva, openSUSE, Slackware, Source Mage, Ubuntu, FreeBSD, NetBSD, OpenBSD, Fink, MacPorts and Cygwin. Using this little tool, package maintainers can easily find ebuilds, pkgbuilds and similar package definitions from other distributions. Whohas is free, open source and written in the Perl programming language.
### Find If A Package Is Available For Your Linux Distribution
**Installing Whohas**
Whohas is available in the default repositories of Debian, Ubuntu, and Linux Mint. If you're using any one of these DEB-based systems, you can install it using the command:
```
$ sudo apt-get install whohas
```
For Arch-based systems, it is available in [**AUR**][1]. You can use any AUR helper programs to install it.
Using [**Packer**][2]:
```
$ packer -S whohas
```
Using [**Trizen**][3]:
```
$ trizen -S whohas
```
Using [**Yay**][4]:
```
$ yay -S whohas
```
Using [**Yaourt**][5]:
```
$ yaourt -S whohas
```
On other Linux distributions, download the Whohas source from [**here**][6] and compile and install it manually.
**Usage**
The main objective of the Whohas tool is to let you know:
* Which distribution provides packages on which the user depends.
* What version of a given package is in use in each distribution, or in each release of a distribution.
Let us find which distributions contain a specific package, for example **vim**. To do so, run:
```
$ whohas vim
```
This command will show all distributions that contain the vim package, along with the available version of the package, its size, the repository, and the download URL.
![][8]
You can even sort the results in alphabetical order by distribution by piping the output to the “sort” command, like below.
```
$ whohas vim | sort
```
Please note that the above commands will display all packages whose names start with **vim**, for example vim-spell, vimcommander, vimpager, etc. You can narrow the search down to the exact package by using the grep command with a space before, after, or on both sides of your package name, like below.
```
$ whohas vim | sort | grep " vim"
$ whohas vim | sort | grep "vim "
$ whohas vim | sort | grep " vim "
```
The space before the package name will display all packages whose names begin with the search term. The space after the package name will display all packages whose names end with the search term. The space on both sides will display only the exact match.
Alternatively, you could simply use the “strict” option, like below.
```
$ whohas --strict vim
```
Sometimes, you may want to know if a package is available for a specific distribution only. For example, to find out whether the vim package is available in Arch Linux, run:
```
$ whohas vim | grep "^Arch"
```
The distribution names are abbreviated as “archlinux”, “cygwin”, “debian”, “fedora”, “fink”, “freebsd”, “gentoo”, “mandriva”, “macports”, “netbsd”, “openbsd”, “opensuse”, “slackware”, “sourcemage”, and “ubuntu”.
You can also get the same results by using the **-d** option, like below.
```
$ whohas -d archlinux vim
```
This command will search for vim packages in the Arch Linux distribution only.
To search multiple distributions, for example Arch Linux and Ubuntu, use the following command instead.
```
$ whohas -d archlinux,ubuntu vim
```
You can even find out which distributions have the “whohas” package itself.
```
$ whohas whohas
```
For more details, refer to the man page.
```
$ man whohas
```
Any package manager can easily find the available package versions in its own repositories. Whohas, however, helps you compare the available versions of a package across different distributions and see which of them carry it at all. Give it a try, you won't be disappointed.
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/find-if-a-package-is-available-for-your-linux-distribution/
Author: [SK][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.ostechnix.com/author/sk/
[1]:https://aur.archlinux.org/packages/whohas/
[2]:https://www.ostechnix.com/install-packer-arch-linux-2/
[3]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
[4]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[5]:https://www.ostechnix.com/install-yaourt-arch-linux/
[6]:http://www.philippwesche.org/200811/whohas/intro.html
[8]:http://www.ostechnix.com/wp-content/uploads/2018/06/whohas-1.png


@ -0,0 +1,77 @@
GitLab's Ultimate & Gold Plans Are Now Free For Open-Source Projects
======
A lot has happened in the open-source community recently. First, [Microsoft acquired GitHub][1], and people started looking for [GitHub alternatives][2] without taking even a second to think about it, while Linus Torvalds released the [Linux Kernel 4.17][3]. Well, if you've been following us, I assume that you know all that.
But today, GitLab made a smart move by making some of its higher-tier plans free for educational institutions and open-source projects. There couldn't be a better time to offer something like this, when a lot of developers are interested in migrating their open-source projects to GitLab.
### GitLab's premium plans are now free for open source projects and educational institutions
![GitLab Logo][4]
In a [blog post][5] today, GitLab announced that its **Ultimate** and **Gold** plans are now free for educational institutions and open-source projects. While we already know why GitLab made this move (darn perfect timing!), they did explain their motive for making it free:
> We make GitLab free for education because we want students to use our most advanced features. Many universities already run GitLab. If the students use the advanced features of GitLab Ultimate and Gold they will take their experiences with these advanced features to their workplaces.
>
> We would love to have more open source projects use GitLab. Public projects on GitLab.com already have all the features of GitLab Ultimate. And projects like [Gnome][6] and [Debian][7] already run their own server with the open source version of GitLab. With today's announcement, open source projects that are comfortable running on proprietary software can use all the features GitLab has to offer while allowing us to have a sustainable business model by charging non-open-source organizations.
### What are these free plans offered by GitLab?
![GitLab Pricing][8]
GitLab has two categories of offerings. One is the software that you could host on your own cloud hosting service like [Digital Ocean][9]. The other is providing GitLab software as a service where the hosting is managed by GitLab itself and you get an account on GitLab.com.
![GitLab Pricing for hosted service][10]
Gold is the highest offering in the hosted category while Ultimate is the highest offering in the self-hosted category.
You can get more details about their features on the GitLab pricing page. Do note that support is not included in this offer. You have to purchase it separately.
### You have to meet certain criteria to qualify for this offer
GitLab also mentioned who the offer is valid for. Here's what they wrote in their blog post:
> 1. **Educational institutions:** any institution whose purposes directly relate to learning, teaching, and/or training by a qualified educational institution, faculty, or student. Educational purposes do not include commercial, professional, or any other for-profit purposes.
>
> 2. **Open source projects:** any project that uses a [standard open source license][11] and is non-commercial. It should not have paid support or paid contributors.
>
>
Although the free plan does not include support, you can still pay an additional fee of 4.95 USD per user per month, which is a very fair price when you are in dire need of an expert to help resolve an issue.
GitLab also added a note for the students:
> To reduce the administrative burden for GitLab, only educational institutions can apply on behalf of their students. If you're a student and your educational institution does not apply, you can use public projects on GitLab.com with all functionality, use private projects with the free functionality, or pay yourself.
### Wrapping Up
Now that GitLab is stepping up its game, what do you think about it?
Do you have a project hosted on [GitHub][12]? Will you be switching over? Or did you, luckily, happen to use GitLab from the start?
Let us know your thoughts in the comments section below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/gitlab-free-open-source/
Author: [Ankush Das][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://itsfoss.com/author/ankush/
[1]:https://itsfoss.com/microsoft-github/
[2]:https://itsfoss.com/github-alternatives/
[3]:https://itsfoss.com/linux-kernel-4-17/
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/GitLab-logo-800x450.png
[5]:https://about.gitlab.com/2018/06/05/gitlab-ultimate-and-gold-free-for-education-and-open-source/
[6]:https://www.gnome.org/news/2018/05/gnome-moves-to-gitlab-2/
[7]:https://salsa.debian.org/public
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/gitlab-pricing.jpeg
[9]:https://m.do.co/c/d58840562553
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/gitlab-hosted-service-800x273.jpeg
[11]:https://itsfoss.com/open-source-licenses-explained/
[12]:https://github.com/


@ -0,0 +1,309 @@
Using MQTT to send and receive data for your next project
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP)
Last November we bought an electric car, and it raised an interesting question: When should we charge it? I was concerned about having the lowest emissions for the electricity used to charge the car, so this is a specific question: What is the rate of CO2 emissions per kWh at any given time, and when during the day is it at its lowest?
### Finding the data
I live in New York State. About 80% of our electricity comes from in-state generation, mostly through natural gas, hydro dams (much of it from Niagara Falls), nuclear, and a bit of wind, solar, and other fossil fuels. The entire system is managed by the [New York Independent System Operator][1] (NYISO), a not-for-profit entity that was set up to balance the needs of power generators, consumers, and regulatory bodies to keep the lights on in New York.
Although there is no official public API, as part of its mission, NYISO makes [a lot of open data][2] available for public consumption. This includes reporting on what fuels are being consumed to generate power, at five-minute intervals, throughout the state. These are published as CSV files on a public archive and updated throughout the day. If you know the number of megawatts coming from different kinds of fuels, you can make a reasonable approximation of how much CO2 is being emitted at any given time.
We should always be kind when building tools to collect and process open data to avoid overloading those systems. Instead of sending everyone to their archive service to download the files all the time, we can do better. We can create a low-overhead event stream that people can subscribe to and get updates as they happen. We can do that with [MQTT][3]. The target for my project ([ny-power.org][4]) was inclusion in the [Home Assistant][5] project, an open source home automation platform that has hundreds of thousands of users. If all of these users were hitting this CSV server all the time, NYISO might need to restrict access to it.
### What is MQTT?
MQTT is a publish/subscribe (pubsub) wire protocol designed with small devices in mind. Pubsub systems work like a message bus. You send a message to a topic, and any software with a subscription for that topic gets a copy of your message. As a sender, you never really know who is listening; you just provide your information to a set of topics and listen for any other topics you might care about. It's like walking into a party and listening for interesting conversations to join.
This can make for extremely efficient applications. Clients subscribe to a narrow selection of topics and only receive the information they are looking for. This saves both processing time and network bandwidth.
As an open standard, MQTT has many open source implementations of both clients and servers. There are client libraries for every language you could imagine, even a library you can embed in Arduino for making sensor networks. There are many servers to choose from. My go-to is the [Mosquitto][6] server from Eclipse, as it's small, written in C, and can handle tens of thousands of subscribers without breaking a sweat.
### Why I like MQTT
Over the past two decades, we've come up with tried and true models for software applications to ask questions of services. Do I have more email? What is the current weather? Should I buy this thing now? This pattern of "ask/receive" works well much of the time; however, in a world awash with data, there are other patterns we need. The MQTT pubsub model is powerful where lots of data is published inbound to the system. Clients can subscribe to narrow slices of data and receive updates instantly when that data comes in.
MQTT also has additional interesting features, such as "last-will-and-testament" messages, which make it possible to distinguish between silence because there is no relevant data and silence because your data collectors have crashed. MQTT also has retained messages, which provide the last message on a topic to clients when they first connect. This is extremely useful for topics that update slowly.
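To make these two features concrete, here is a minimal sketch using the Python `paho-mqtt` client; the broker host and the status topic name are illustrative assumptions, not part of the article's actual setup.

```python
import paho.mqtt.client as mqtt

client = mqtt.Client()

# Last will and testament: if this process dies without disconnecting
# cleanly, the broker publishes "offline" on our behalf, so subscribers
# can tell "no new data" apart from "the data collector crashed".
client.will_set("ny-power/status/data-pump", payload="offline", retain=True)

client.connect("mqtt.example.org", 1883)
client.loop_start()

# Retained message: a subscriber that connects later immediately gets
# the last value published on this topic instead of waiting for the
# next update.
client.publish("ny-power/status/data-pump", payload="online", retain=True)
```

Retained messages pair naturally with slowly updating topics such as a five-minute fuel mix.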
In my work with the Home Assistant project, I've found this message bus model works extremely well for heterogeneous systems. If you dive into the Internet of Things space, you'll quickly run into MQTT everywhere.
### Our first MQTT stream
One of NYISO's CSV files is the real-time fuel mix. Every five minutes, it's updated with the fuel sources and power generated (in megawatts) during that time period.
The CSV file looks something like this:
| Time Stamp | Time Zone | Fuel Category | Gen MW |
| --- | --- | --- | --- |
| 05/09/2018 00:05:00 | EDT | Dual Fuel | 1400 |
| 05/09/2018 00:05:00 | EDT | Natural Gas | 2144 |
| 05/09/2018 00:05:00 | EDT | Nuclear | 4114 |
| 05/09/2018 00:05:00 | EDT | Other Fossil Fuels | 4 |
| 05/09/2018 00:05:00 | EDT | Other Renewables | 226 |
| 05/09/2018 00:05:00 | EDT | Wind | 1 |
| 05/09/2018 00:05:00 | EDT | Hydro | 3229 |
| 05/09/2018 00:10:00 | EDT | Dual Fuel | 1307 |
| 05/09/2018 00:10:00 | EDT | Natural Gas | 2092 |
| 05/09/2018 00:10:00 | EDT | Nuclear | 4115 |
| 05/09/2018 00:10:00 | EDT | Other Fossil Fuels | 4 |
| 05/09/2018 00:10:00 | EDT | Other Renewables | 224 |
| 05/09/2018 00:10:00 | EDT | Wind | 40 |
| 05/09/2018 00:10:00 | EDT | Hydro | 3166 |
The only odd thing in the table is the dual-fuel category. Most natural gas plants in New York can also burn other fossil fuel to generate power. During cold snaps in the winter, the natural gas supply gets constrained, and its use for home heating is prioritized over power generation. This happens at a low enough frequency that we can consider dual fuel to be natural gas (for our calculations).
The file is updated throughout the day. I created a simple data pump that polls for the file every minute and looks for updates. It publishes any new entries out to the MQTT server into a set of topics that largely mirror this CSV file. The payload is turned into a JSON object that is easy to parse from nearly any programming language.
```
ny-power/upstream/fuel-mix/Hydro {"units": "MW", "value": 3229, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Dual Fuel {"units": "MW", "value": 1400, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Natural Gas {"units": "MW", "value": 2144, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Other Fossil Fuels {"units": "MW", "value": 4, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Wind {"units": "MW", "value": 41, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Other Renewables {"units": "MW", "value": 226, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Nuclear {"units": "MW", "value": 4114, "ts": "05/09/2018 00:05:00"}
```
This direct reflection is a good first step in turning open data into open events. We'll be converting this into a CO2 intensity, but other applications might want these raw feeds to do other calculations with them.
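As a rough sketch (not the actual ny-power code), a polling data pump like the one described above could look something like this in Python with `requests` and `paho-mqtt`; the CSV URL, column handling, and duplicate tracking are simplified assumptions.

```python
import csv
import io
import json
import time

import paho.mqtt.client as mqtt
import requests

# Example NYISO real-time fuel mix file; check NYISO's archive for the
# current URL layout.
CSV_URL = "http://mis.nyiso.com/public/csv/rtfuelmix/20180509rtfuelmix.csv"

client = mqtt.Client()
client.connect("mqtt.example.org", 1883)
client.loop_start()

seen = set()  # (timestamp, fuel) pairs already published

while True:
    reader = csv.DictReader(io.StringIO(requests.get(CSV_URL).text))
    for row in reader:
        key = (row["Time Stamp"], row["Fuel Category"])
        if key in seen:
            continue
        seen.add(key)
        payload = {"units": "MW",
                   "value": float(row["Gen MW"]),
                   "ts": row["Time Stamp"]}
        # one topic per fuel type, mirroring the CSV structure
        client.publish("ny-power/upstream/fuel-mix/" + row["Fuel Category"],
                       json.dumps(payload))
    time.sleep(60)  # poll for updates once a minute
```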
### MQTT topics
Topics and topic structures are one of MQTT's major design points. Unlike more "enterprisey" message buses, in MQTT topics are not preregistered. A sender can create topics on the fly, the only limit being that they are less than 220 characters. The `/` character is special; it's used to create topic hierarchies. As we'll soon see, you can subscribe to slices of data in these hierarchies.
Out of the box with Mosquitto, every client can publish to any topic. While it's great for prototyping, before going to production you'll want to add an access control list (ACL) to restrict writing to authorized applications. For example, my app's tree is accessible to everyone in read-only format, but only clients with specific credentials can publish to it.
There is no automatic schema around topics nor a way to discover all the possible topics that clients will publish to. You'll have to encode that understanding directly into any application that consumes the MQTT bus.
So how should you design your topics? The best practice is to start with an application-specific root name, in our case, `ny-power`. After that, build a hierarchy as deep as you need for efficient subscription. The `upstream` tree will contain data that comes directly from an upstream source without any processing. Our `fuel-mix` category is a specific type of data. We may add others later.
### Subscribing to topics
Subscriptions in MQTT are simple string matches. For processing efficiency, only two wildcards are allowed:
* `#` matches everything recursively to the end
* `+` matches only until the next `/` character
It's easiest to explain this with some examples:
```
ny-power/#                   - match everything published by the ny-power app
ny-power/upstream/#          - match all raw data
ny-power/upstream/fuel-mix/+ - match all fuel types
ny-power/+/+/Hydro           - match everything about Hydro power that's
                               nested 2 deep (even if it's not in the upstream tree)
```
A wide subscription like `ny-power/#` is common for low-volume applications. Just get everything over the network and handle it in your own application. This works poorly for high-volume applications, as most of the network bandwidth will be wasted as you drop most of the messages on the floor.
To stay performant at higher volumes, applications will use clever topic slices like `ny-power/+/+/Hydro` to get exactly the cross-section of data they need.
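As a concrete sketch, a paho-mqtt subscriber that grabs just that Hydro cross-section looks roughly like this (the broker hostname is the public one used later in this article):
```
import paho.mqtt.client as mqtt


def on_connect(client, userdata, flags, rc):
    # Subscribing in on_connect means the subscription survives reconnects.
    client.subscribe("ny-power/+/+/Hydro")


def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt.ny-power.org", 1883)
client.loop_forever()  # block and process messages as they arrive
```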
### Adding our next layer of data
From this point forward, everything in the application will work off existing MQTT streams. The first additional layer of data is computing the power's CO2 intensity.
Using the 2016 [U.S. Energy Information Administration][7] numbers for total emissions and total power by fuel type in New York, we can come up with an [average emissions rate][8] per megawatt hour of power.
This is encapsulated in a dedicated microservice. This has a subscription on `ny-power/upstream/fuel-mix/+`, which matches all upstream fuel-mix entries from the data pump. It then performs the calculation and publishes out to a new topic tree:
```
ny-power/computed/co2 {"units": "g / kWh", "value": 152.9486, "ts": "05/09/2018 00:05:00"}
```
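A stripped-down version of that microservice follows the same subscribe, compute, publish pattern. In this sketch the per-fuel emissions factors are placeholders rather than the EIA-derived numbers the real service uses:
```
import json

import paho.mqtt.client as mqtt

# Placeholder g CO2 per kWh by fuel type; the real service derives these
# from the 2016 EIA emissions and generation data referenced above.
CO2_G_PER_KWH = {
    "Dual Fuel": 450.0,  # treated as natural gas
    "Natural Gas": 400.0,
    "Other Fossil Fuels": 900.0,
    "Nuclear": 0.0,
    "Hydro": 0.0,
    "Wind": 0.0,
    "Other Renewables": 0.0,
}

latest = {}  # most recent reading per fuel type


def on_connect(client, userdata, flags, rc):
    client.subscribe("ny-power/upstream/fuel-mix/+")


def on_message(client, userdata, msg):
    fuel = msg.topic.rsplit("/", 1)[-1]
    latest[fuel] = data = json.loads(msg.payload)
    total_mw = sum(d["value"] for d in latest.values())
    if total_mw <= 0:
        return
    # Generation-weighted average intensity across every fuel seen so far.
    grams = sum(d["value"] * CO2_G_PER_KWH.get(f, 0.0) for f, d in latest.items())
    client.publish("ny-power/computed/co2", json.dumps({
        "units": "g / kWh",
        "value": round(grams / total_mw, 4),
        "ts": data["ts"],
    }))


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt.ny-power.org", 1883)
client.loop_forever()
```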
In turn, there is another process that subscribes to this topic tree and archives that data into an [InfluxDB][9] instance. It then publishes a 24-hour time series to `ny-power/archive/co2/24h`, which makes it easy to graph the recent changes.
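Here is a sketch of that archiver using the influxdb Python client; the database name, measurement, and 24-hour query are assumptions about how the real service is put together:
```
import json
from datetime import datetime

import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient

influx = InfluxDBClient(host="localhost", database="ny_power")  # assumed location


def on_connect(client, userdata, flags, rc):
    client.subscribe("ny-power/computed/co2")


def on_message(client, userdata, msg):
    data = json.loads(msg.payload)
    # NYISO timestamps are local time; proper timezone handling is elided here.
    ts = datetime.strptime(data["ts"], "%m/%d/%Y %H:%M:%S")
    influx.write_points([{
        "measurement": "co2_intensity",
        "time": ts.isoformat() + "Z",
        "fields": {"value": float(data["value"])},
    }])
    # Republish the last 24 hours as one retained message for easy graphing.
    points = list(influx.query(
        "SELECT value FROM co2_intensity WHERE time > now() - 24h").get_points())
    client.publish("ny-power/archive/co2/24h", json.dumps({
        "ts": [p["time"] for p in points],
        "values": [p["value"] for p in points],
    }), retain=True)


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt.ny-power.org", 1883)
client.loop_forever()
```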
This layer model works well, as the logic for each of these programs can be distinct from each other. In a more complicated system, they may not even be in the same programming language. We don't care, because the interchange format is MQTT messages, with well-known topics and JSON payloads.
### Consuming from the command line
To get a feel for MQTT in action, it's useful to just attach it to a bus and see the messages flow. The `mosquitto_sub` program included in the `mosquitto-clients` package is a simple way to do that.
After you've installed it, you need to provide a server hostname and the topic you'd like to listen to. The `-v` flag is important if you want to see the topics being posted to. Without that, you'll see only the payloads.
```
mosquitto_sub -h mqtt.ny-power.org -t ny-power/# -v
```
Whenever I'm writing or debugging an MQTT application, I always have a terminal with `mosquitto_sub` running.
### Accessing MQTT directly from the web
We now have an application providing an open event stream. We can connect to it with our microservices and, with some command-line tooling, it's on the internet for all to see. But the web is still king, so it's important to get it directly into a user's browser.
The MQTT folks thought about this one. The protocol specification is designed to work over three transport protocols: [TCP][10], [UDP][11], and [WebSockets][12]. WebSockets are supported by all major browsers as a way to retain persistent connections for real-time applications.
The Eclipse project has a JavaScript implementation of MQTT called [Paho][13], which can be included in your application. The pattern is to connect to the host, set up some subscriptions, and then react to messages as they are received.
```
// ny-power web console application
var client = new Paho.MQTT.Client(mqttHost, Number("80"), "client-" + Math.random());
// set callback handlers
client.onMessageArrived = onMessageArrived;
// connect the client
client.reconnect = true;
client.connect({onSuccess: onConnect});
// called when the client connects
function onConnect() {
    // Once a connection has been made, make a subscription and send a message.
    console.log("onConnect");
    client.subscribe("ny-power/computed/co2");
    client.subscribe("ny-power/archive/co2/24h");
    client.subscribe("ny-power/upstream/fuel-mix/#");
}
// called when a message arrives
function onMessageArrived(message) {
    console.log("onMessageArrived:"+message.destinationName + message.payloadString);
    if (message.destinationName == "ny-power/computed/co2") {
        var data = JSON.parse(message.payloadString);
        $("#co2-per-kwh").html(Math.round(data.value));
        $("#co2-units").html(data.units);
        $("#co2-updated").html(data.ts);
    }
    if (message.destinationName.startsWith("ny-power/upstream/fuel-mix")) {
        fuel_mix_graph(message);
    }
    if (message.destinationName == "ny-power/archive/co2/24h") {
        var data = JSON.parse(message.payloadString);
        var plot = [
            {
                x: data.ts,
                y: data.values,
                type: 'scatter'
            }
        ];
        var layout = {
            yaxis: {
                title: "g CO2 / kWh",
            }
        };
        Plotly.newPlot('co2_graph', plot, layout);
    }
}
```
This application subscribes to a number of topics because we're going to display a few different kinds of data. The `ny-power/computed/co2` topic provides us with a topline number for the current intensity. Whenever a message arrives on that topic, we replace the related contents on the site.
![NY ISO Grid CO2 Intensity][15]
NY ISO Grid CO2 Intensity graph from [ny-power.org][4].
The `ny-power/archive/co2/24h` topic provides a time series that can be loaded into a [Plotly][16] line graph. And `ny-power/upstream/fuel-mix` provides the data needed to provide a nice bar graph of the current fuel mix.
![Fuel mix on NYISO grid][18]
Fuel mix on NYISO grid, [ny-power.org][4].
This is a dynamic website that is not polling the server. It is attached to the MQTT bus and listening on its open WebSocket. The webpage is a pub/sub client just like the data pump and the archiver. This one just happens to be executing in your browser instead of a microservice in the cloud.
You can see the page in action at <http://ny-power.org>. That includes both the graphics and a real-time MQTT console to see the messages as they come in.
### Diving deeper
The entire ny-power.org application is [available as open source on GitHub][19]. You can also check out [this architecture overview][20] to see how it was built as a set of Kubernetes microservices deployed with [Helm][21]. You can see another interesting MQTT application example with [this code pattern][22] using MQTT and OpenWhisk to translate text messages in real time.
MQTT is used extensively in the Internet of Things space, and many more examples of MQTT use can be found at the [Home Assistant][23] project.
And if you want to dive deep into the protocol, [mqtt.org][3] has all the details for this open standard.
To learn more, attend Sean Dague's talk, [Adding MQTT to your toolkit][24], at [OSCON][25], which will be held July 16-19 in Portland, Oregon.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/mqtt
作者:[Sean Dague][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/sdague
[1]:http://www.nyiso.com/public/index.jsp
[2]:http://www.nyiso.com/public/markets_operations/market_data/reports_info/index.jsp
[3]:http://mqtt.org/
[4]:http://ny-power.org/#
[5]:https://www.home-assistant.io
[6]:https://mosquitto.org/
[7]:https://www.eia.gov/
[8]:https://github.com/IBM/ny-power/blob/master/src/nypower/calc.py#L1-L60
[9]:https://www.influxdata.com/
[10]:https://en.wikipedia.org/wiki/Transmission_Control_Protocol
[11]:https://en.wikipedia.org/wiki/User_Datagram_Protocol
[12]:https://en.wikipedia.org/wiki/WebSocket
[13]:https://www.eclipse.org/paho/
[14]:/file/400041
[15]:https://opensource.com/sites/default/files/uploads/mqtt_nyiso-co2intensity.png (NY ISO Grid CO2 Intensity)
[16]:https://plot.ly/
[17]:/file/400046
[18]:https://opensource.com/sites/default/files/uploads/mqtt_nyiso_fuel-mix.png (Fuel mix on NYISO grid)
[19]:https://github.com/IBM/ny-power
[20]:https://developer.ibm.com/code/patterns/use-mqtt-stream-real-time-data/
[21]:https://helm.sh/
[22]:https://developer.ibm.com/code/patterns/deploy-serverless-multilingual-conference-room/
[23]:https://www.home-assistant.io/
[24]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/speaker/77317
[25]:https://conferences.oreilly.com/oscon/oscon-or

View File

@ -0,0 +1,167 @@
How to Install and Use Flatpak on Linux
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/flatpak.jpg?itok=l3i0CHok)
The landscape of applications is quickly changing. Many platforms are migrating to containerized applications… and with good cause. An application wrapped in a bundled container is easier to install, includes all the necessary dependencies, doesn't directly affect the hosting platform libraries, automatically updates (in some cases), and (in most cases) is more secure than a standard application. Another benefit of these containerized applications is that they are universal (i.e., such an application would install on Ubuntu Linux or Fedora Linux, without having to convert a .deb package to an .rpm).
As of now, there are two main universal package systems: [Snap][1] and Flatpak. Both function in a similar fashion, but one is found by default on Ubuntu-based systems (Snap) and one on Fedora-based systems (Flatpak). It should come as no surprise that both can be installed on either type of system. So if you want to run Snaps on Fedora, you can. If you want to run Flatpak on Ubuntu, you can.
I will walk you through the process of installing and using Flatpak on [Ubuntu 18.04][2]. If your platform of choice is Fedora (or a Fedora derivative), you can skip the installation process.
### Installation
The first thing to do is install Flatpak. The process is simple. Open up a terminal window and follow these steps:
1. Add the necessary repository with the command `sudo add-apt-repository ppa:alexlarsson/flatpak`.
2. Update apt with the command `sudo apt update`.
3. Install Flatpak with the command `sudo apt install flatpak`.
4. Install Flatpak support for GNOME Software with the command `sudo apt install gnome-software-plugin-flatpak`.
5. Reboot your system.
### Usage
I'll first show you how to install a Flatpak package from the command line, and then via the GUI. Let's say you want to install the Spotify desktop client via Flatpak. To do this, you must first instruct Flatpak to retrieve the necessary app. The Spotify Flatpak (along with others) is hosted on [Flathub][3]. The first thing we're going to do is add the Flathub remote repository with the following command:
```
sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
Now you can install any Flatpak app found on Flathub. For example, to install [Spotify][4], the command would be:
```
sudo flatpak install flathub com.spotify.Client
```
To find out the exact command for each install, you only have to visit the app's page on Flathub and the installation command is listed beneath the description.
Running a Flatpak-installed app is a bit different than a standard app (at least from the command line). Head back to the terminal window and issue the command:
```
flatpak run com.spotify.Client
```
Of course, after you've restarted your machine (following the GNOME Software support installation), those apps should appear in your desktop menu, making it unnecessary to start them from the command line.
To uninstall a Flatpak from the command line, you would go back to the terminal and issue the command:
```
sudo flatpak uninstall NAME
```
where NAME is the name of the app to remove. In our Spotify case, that would be:
```
sudo flatpak uninstall com.spotify.Client
```
Now we want to update our Flatpak apps. To do this, first list all of your installed Flatpak apps by issuing the command:
```
flatpak list
```
Now that we have our list of apps (Figure 1), we can update with the command `sudo flatpak update NAME` (where NAME is the name of our app to update).
![Flatpak apps][6]
Figure 1: Our list of updated Flatpak apps.
[Used with permission][7]
So if we want to update GIMP, we'd issue the command:
```
sudo flatpak update org.gimp.GIMP
```
If there are any updates to be applied, they'll be taken care of. If there are no updates to be applied, nothing will be reported.
### Installing from GNOME Software
Let's make this even easier. Since we installed GNOME Software support for Flatpak, we don't actually have to bother with the command line. Don't be mistaken: unlike Snap support, you won't actually find Flatpak apps listed within GNOME Software (even though we've installed Software support). Instead, you'll find support through the web browser.
Let me show you. Point your browser to [Flathub][3].
![Installing a Flatpak app][9]
Figure 2: Installing a Flatpak app from the Firefox browser.
[Used with permission][7]
Let's say you want to install Slack via Flatpak. Go to the [Slack Flathub][10] page and then click on the INSTALL button. Since we installed GNOME Software support, the standard browser dialog window will appear with an included option to open the file via Software Install (Figure 2).
This action will then open GNOME Software (or, in the case of Ubuntu, Ubuntu Software), where you can click the Install button (Figure 3) to complete the process.
![ready to go][12]
Figure 3: The installation process ready to go.
[Used with permission][7]
Once the installation completes, you can then either click the Launch button, or close GNOME Software and launch the application from the desktop menu (in the case of GNOME, the Dash).
After you've installed a Flatpak app via GNOME Software, it can also be removed the same way (so there's still no need to go through the command line).
### What about KDE?
If you prefer using the KDE desktop environment, you're in luck. If you issue the command `sudo apt install plasma-discover-flatpak-backend`, it'll install Flatpak support for the KDE app store, Discover. Once you've added Flatpak support, you then need to add a repository. Open Discover and then click on Settings. In the Settings window, you'll now see a Flatpak listing (Figure 4).
![Flatpak][14]
Figure 4: Flatpak is now available in Discover.
[Used with permission][7]
Click on the Flatpak drop-down and then click Add Flathub. Click on the Applications tab (in the left navigation) and you can then search for (and install) any applications found on Flathub (Figure 5).
![Slack ][16]
Figure 5: Slack can now be installed, from Flathub, via Discover.
[Used with permission][7]
### Easy Flatpak management
And that's the gist of using Flatpak. These universal packages can be used on most Linux distributions and can even be managed via the GUI on some desktop environments. I highly recommend you give Flatpak a try. With the combination of standard installation, Flatpak, and Snaps, you'll find software management on Linux has become incredibly easy.
Learn more about Linux through the free ["Introduction to Linux" ][17]course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/6/how-install-and-use-flatpak-linux
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/learn/intro-to-linux/2018/5/get-started-snap-packages-linux
[2]:http://releases.ubuntu.com/18.04/
[3]:https://flathub.org/
[4]:https://flathub.org/apps/details/com.spotify.Client
[5]:/files/images/flatpak1jpg
[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/flatpak_1.jpg?itok=DlJ8zFYg (Flatpak apps)
[7]:/licenses/category/used-permission
[8]:/files/images/flatpak2jpg
[9]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/flatpak_2.jpg?itok=fz1fTAco (Installing a Flatpak app)
[10]:https://flathub.org/apps/details/com.slack.Slack
[11]:/files/images/flatpak3jpg
[12]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/flatpak_3.jpg?itok=wlV8FdgJ (ready to go)
[13]:/files/images/flatpak4jpg
[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/flatpak_4.jpg?itok=dBKbVV8Z (Flatpak)
[15]:/files/images/flatpak5jpg
[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/flatpak_5.jpg?itok=IKeEgkxD (Slack )
[17]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,76 @@
4 tips for getting an older relative online with Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q)
According to a study by the [Pew Research Center][1], some members of older generations have a hard time learning computers because they were born at the wrong time to learn about computers in school or the workplace. It's a purely demographic phenomenon that tends to mostly affect older people. However, I firmly believe that these people can stay connected and can learn about the benefits of modern technology. The free software community is uniquely placed in ideology, values, and distribution to fill that need. We're a community dedicated to honest product development, longevity, and tools that do what you need and none of what you don't. Those ideologies used to define our world, but it's only in the computer era that they've been openly challenged.
So, I started a GNU/Linux tech support and system builder company that focuses on enabling the elderly and promoting open source adoption. We're sharing our teaching methods and techniques to help others create a more connected society so everyone can take full advantage of our wonderfully connected world.
### 4 tips for getting your family online with GNU/Linux
Whether you're trying to help your mom, dad, grandma, grandpa, or older neighbor or friend, the following tips will help you get them comfortable working with GNU/Linux.
#### 1\. Choose a Linux distro
One of the first and biggest questions you'll face is helping your family member decide which Linux distribution to use. Distributions vary wildly in their user-friendliness, ease of use, stability, customization, extensibility, and so on. You may have an idea of what to use, but here are things to consider before you choose:
* Do I know how to fix it if it breaks?
* How hard is it to break without root privileges?
* Is it going to fit their needs?
* Does it receive regular security updates?
I would shy away from a rolling-release distribution such as Arch, openSUSE Tumbleweed, or Gentoo, which can change and break without warning if you aren't careful. You'll probably have fewer headaches selecting a distribution such as Debian Stable, Fedora Workstation, or openSUSE Leap. In our business, we use [Ubuntu LTS][2]. Ultimately the decision is up to you. You know your skills and toolbelt better than anyone else, and it's you who will be keeping it up to date and secure.
#### 2\. Keep their hands on the controls
Learning how to use a computer is exactly like learning a language. It's a strange, inhuman form of interaction we usually learn while we're young and growing up. But there must be a lot of repetition to form the right habits and understanding. The easiest way to form those habits is with guided usage with the learner's hands on the controls the whole time. Older learners need to recognize it's not a jet plane or a tank, where pressing the wrong button is deadly. It's just a computer.
In our company, we want our customers to be completely self-sufficient. We want them to know how to stay safe online and really use their computer to its full extent. As a result, our teaching style looks a little different from what you'd see in a regular, large corporation's customer care or tech support department.
We can sum up our teaching policy in this short Python script:
```
def support(onsite, broken):
    if broken == False:
        print("Never take away the mouse or keyboard.")
    elif broken == True:
        print("Fix it in the command line quickly.")
    else:
        print("You shouldn't end up here, but it's correct syntax.")
```
#### 3\. Take notes
Have your learner take notes while you're teaching them about the computer. Taking notes has been proven to be one of the most effective memory-retention tricks for gaining new skills. It also serves another purpose: It gives the learner a resource to turn to when you aren't there and allows them to take a break from listening and focus on truly understanding.
#### 4\. Have patience
I think a lack of patience is the second-biggest factor (right behind demographics) that has prevented older people from learning to use a computer. The next time your loved one asks for help with their computer, ask yourself: "Do I not want to help because they can't learn? Or because I don't have the time to help them?" The second excuse seems to be the one I hear the most. Make sure you plan enough time to be patient with them. There's nothing more permanent than a temporary solution (such as doing everything for them).
### Wrapping up
If you combine these techniques to form habits, leave them with self-created teaching resources, and add a healthy portion of patience, you'll get your family members up and running with Linux in no time. The wonders of being online and knowing how to use a computer shouldn't be restricted to those lucky enough to grow up at a time where the computer is second nature. That's not to say it won't be difficult at times, but it's absolutely worth it.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/tips-family-online-Linux
作者:[Brian Whetten][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/classywhetten
[1]:http://www.pewinternet.org/2014/04/03/older-adults-and-technology-use/
[2]:https://www.ubuntu.com/download/desktop

View File

@ -1,25 +1,29 @@
How to enable repository using subscription-manager in RHEL
如何在 RHEL 中使用 subscription-manager 启用存储库
======
Learn how to enable repository using subscription-manager in RHEL. Article also includes steps to register system with Red Hat, attach subscription and errors along with resolutions.
了解如何在 RHEL 中使用 subscription-manager 启用存储库。本文还包括在 Red Hat 注册系统、附加订阅以及解决错误的步骤。
![Enable repository using subscription-manager][1]
In this article we will walk you through step by step process to enable Red Hat repository in RHEL fresh installed server.
图中文字:使用 subscription-manager 启用存储库
Repository can be enabled using `subscription-manager` command like below
在本文中,我们将逐步介绍如何在新安装的 RHEL 服务器上启用 Red Hat 存储库。
可以利用 `subscription-manager` 命令启用存储库,如下所示
```
root@kerneltalks # subscription-manager repos --enable rhel-6-server-rhv-4-agent-beta-debug-rpms
Error: 'rhel-6-server-rhv-4-agent-beta-debug-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.
```
You will see above error when your subscription is not in place. Lets go through step by step procedure to enable repositories via `subscription-manager`
当您的订阅不到位时,您会看到上述错误。让我们一步一步地通过 `subscription-manager` 来启用存储库。
##### Step 1 : Register your system with Red Hat
##### 步骤 1 : 在 Red Hat 注册您的系统
We are considering you have freshly installed system and its not yet registered with Red Hat. If you have registered system already then you can ignore this step.
You can check if your system is registered with Red Hat for subscription using below command
我们假定您的系统是新安装的,且尚未在 Red Hat 注册。如果您已经注册了系统,那么可以忽略此步骤。
您可以使用以下命令检查您的系统是否已在 Red Hat 注册以获取订阅:
```
# subscription-manager version
@ -30,7 +34,8 @@ subscription-manager: 1.18.10-1.el6
python-rhsm: 1.18.6-1.el6
```
Here, in first line of output you can see system is not registered. So, lets start with registering system. You need to use `subscription-manager` command with `register` switch. You need to use your Red Hat account credentials here.
在输出的第一行中,您可以看到系统未注册。那么,让我们开始注册系统。您需要使用带 `register` 选项的 `subscription-manager` 命令。这一步需要使用您的 Red Hat 帐户凭证。
```
root@kerneltalks # subscription-manager register
@ -40,7 +45,7 @@ Password:
Network error, unable to connect to server. Please see /var/log/rhsm/rhsm.log for more information.
```
If you are getting above error then your server is not able to reach RedHat. Check internet connection & if you are able to [resolve site name][2]s. Sometimes even if you are able to ping subscription server, you will see this error. This might be because of you have proxy server in your environment. In such case, you need to add its details in file `/etc/rhsm/rhsm.conf`. Below proxy details should be populated :
如果您遇到上述错误,那么您的服务器无法连接到 RedHat。请检查网络连接以及是否能够[解析站点名称][2]。有时即使您能够 ping 通订阅服务器,也会看到这个错误。这可能是因为您的环境中有代理服务器。在这种情况下,您需要将代理的详细信息添加到 `/etc/rhsm/rhsm.conf` 文件中。需要填写以下代理信息:
```
# an http proxy server to use
@ -57,7 +62,7 @@ If you are getting above error then your server is not able to reach RedHat. Che
```
Once you are done, recheck if `subscription-manager` taken up new proxy details by using below command
完成之后,使用下面的命令重新检查 `subscription-manager` 是否读取了新的代理信息:
```
root@kerneltalks # subscription-manager config
@ -96,7 +101,7 @@ root@kerneltalks # subscription-manager config
[] - Default value in use
```
Now, try registering your system again.
现在,请尝试重新注册您的系统。
```
root@kerneltalks # subscription-manager register
@ -106,7 +111,7 @@ Password:
You must first accept Red Hat's Terms and conditions. Please visit https://www.redhat.com/wapps/tnc/termsack?event[]=signIn . You may have to log out of and back into the Customer Portal in order to see the terms.
```
You will see above error if you are adding server to your Red Hat account for the first time. Go to the [URL ][3]and accept the terms. Come back to terminal and try again.
如果您是第一次将服务器添加到 Red Hat 帐户,您将看到上述错误。转到该 [URL][3] 并接受条款。回到终端,然后再试一次。
```
oot@kerneltalks # subscription-manager register
@ -116,7 +121,7 @@ Password:
The system has been registered with ID: xxxxb2-xxxx-xxxx-xxxx-xx8e199xxx
```
Bingo! System is registered with Red Hat now. You can again verify it with `version` switch.
Bingo!系统现在已在 Red Hat 上注册。你可以再次用 `version` 开关来验证它。
```
root@kerneltalks # subscription-manager version
@ -127,18 +132,18 @@ subscription-manager: 1.18.10-1.el6
python-rhsm: 1.18.6-1.el6
```
##### Step 2 : Attach subscription to your server
##### 步骤 2 : 将订阅添加到您的服务器
First try to list repositories. You wont be able to list any since we havent attached any subscription to our server yet.
首先尝试列出存储库。您将无法列出任何内容,因为我们尚未在我们的服务器中添加任何订阅。
```
root@kerneltalks # subscription-manager repos --list
This system has no repositories available through subscriptions.
```
As you can see `subscription-manager` couldnt found any repositories, you need to attach subscriptions to your server. Once subscription is attached, `subscription-manager` will be able to list repositories under it.
正如您所看到的,`subscription-manager` 找不到任何存储库,您需要为您的服务器附加订阅。一旦附加了订阅,`subscription-manager` 就能列出其下的存储库。
To attach subscription, first check all available subscriptions for your server with below command
要添加订阅,请先使用以下命令检查服务器的所有可用订阅
```
root@kerneltalks # subscription-manager list --available
@ -170,15 +175,16 @@ Ends: 12/01/2018
System Type: Virtual
```
You will get list of such subscriptions available for your server. You need to read through what it provides and note down `Pool ID` of subscriptions which are useful/required for you.
您将获得可用于您的服务器的此类订阅的列表。您需要阅读它提供的内容并记下对您有用或需要的订阅的 `Pool ID`
Now, attach subscriptions to your server by using pool ID.
现在,使用 pool ID 将订阅附加到您的服务器。
```
# subscription-manager attach --pool=8a85f98c6011059f0160110a2ae6000f
Successfully attached a subscription for: Red Hat Enterprise Linux for Virtual Datacenters, Standard
```
If you are not sure which one to pick, you can simple attach subscriptions automatically which are best suited for your server with below command
如果您不确定选择哪一个,则可以使用下面的命令自动地添加最适合您的服务器的订阅
```
root@kerneltalks # subscription-manager attach --auto
@ -187,18 +193,18 @@ Product Name: Red Hat Enterprise Linux Server
Status: Subscribed
```
Move on to final step to enable repository.
接下来是最后一步启用存储库。
##### Step 3 : Enable repository
##### 步骤 3 : 启用存储库
Now you will be enable repository which is available under your attached subscription.
现在您将能够启用存储库,该存储库在您的附加订阅下可用。
```
root@kerneltalks # subscription-manager repos --enable rhel-6-server-rhv-4-agent-beta-debug-rpms
Repository 'rhel-6-server-rhv-4-agent-beta-debug-rpms' is enabled for this system.
```
Thats it. You are done. You can [list repositories with yum command][4] and confirm.
到这里,您已经完成了。您可以[用 yum 命令列出存储库][4]并确认
--------------------------------------------------------------------------------

View File

@ -0,0 +1,132 @@
一个学习 vim 命令的命令行游戏
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/PacVim-720x340.png)
你好Vim 用户!今天我偶然发现了一个很酷的程序,可以提高 Vim 的使用技巧。Vim 是编写和编辑代码的绝佳编辑器。然而,你们中的一些人(包括我)仍在陡峭的学习曲线中挣扎。不用再挣扎了!来认识一下 **PacVim**,一款可帮助你学习 Vim 命令的命令行游戏。PacVim 的灵感来源于经典游戏 [**PacMan**][1],它以一种好玩有趣的方式为你提供了大量的 Vim 命令练习。简而言之PacVim 是一种有趣而免费的方式来深入了解 Vim 命令。请不要将 PacMan 与 [**pacman**][2]Arch Linux 包管理器)混淆。PacMan 是 20 世纪 80 年代发布的经典流行街机游戏。
在本简要指南中,我们将看到如何在 Linux 中安装和使用 PacVim。
### 安装 PacVim
首先按如下链接安装 **Ncurses** 库和**开发工具**。
请注意,如果没有 gcc 4.8.X 或更高版本,这款游戏可能无法正确编译和安装。我在 Ubuntu 18.04 LTS 上测试了 PacVim它运行得很完美。
安装 Ncurses 和 gcc 后,运行以下命令来安装 PacVim。
```
$ git clone https://github.com/jmoon018/PacVim.git
$ cd PacVim
$ sudo make install
```
## 使用 PacVim 学习 Vim 命令
### 启动 PacVim 游戏
要玩这个游戏,只需运行:
```
$ pacvim [LEVEL_NUMBER] [MODE]
```
例如,以下命令以普通模式启动游戏第 5 关。
```
$ pacvim 5 n
```
这里,**“5”** 表示等级,**“n”**表示模式。有两种模式:
* **n** 普通模式。
* **h** 困难模式。
默认模式是 h这很难
要从头开始0 级),请运行:
```
$ pacvim
```
以下是我 Ubuntu 18.04 LTS 的示例输出。
![][4]
要开始游戏,只需按下 **ENTER**。
![][5]
现在开始游戏。阅读下一章了解如何玩。
要退出,请按下 **ESC****q**
以下命令以困难模式启动游戏第 5 关。
```
$ pacvim 5 h
```
或者,
```
$ pacvim 5
```
### 如何玩 PacVim
PacVim 的使用与 PacMan 非常相似。
你必须跑过屏幕上所有的字符,同时避免鬼魂(红色字符)。
PacVim 有两个特殊的障碍:
1. 你不能移动到墙壁中(黄色)。你必须使用 vim 动作来跳过它们。
2. 如果你踩着波浪字符(青色的 `~`),你就输了!
你有三条生命。每次打赢 0、3、6、9 关时你都会获得生命。总共有 10 关,从 0 到 9打赢第 9 关后,游戏重置为第 0 关,但是鬼魂速度变快。
**获胜条件**
使用 vim 命令将光标移动到字母上并高亮显示它们。所有字母都高亮显示后,你就会获胜并进入下一关。
**失败条件**
如果你碰到鬼魂(用**红色 G** 表示)或者**波浪字符**,你就会失去一条命。如果剩余生命数少于 0你将会输掉整个游戏。
这是实现的命令列表:
* `q`:退出游戏
* `h`:向左移动
* `j`:向下移动
* `k`:向上移动
* `l`:向右移动
* `w`:向前移动到下一个 word 开始
* `W`:向前移动到下一个 WORD 开始
* `e`:向前移动到下一个 word 结束
* `E`:向前移动到下一个 WORD 结束
* `b`:向后移动到下一个 word 开始
* `B`:向后移动到下一个 WORD 开始
* `$`:移动到行的末尾
* `0`:移动到行的开始
* `gg`/`1G`:移动到第一行的开始
* 数字加 `G`:移动到由数字给出的行的开始
* `G`:移到最后一行的开头
* `^`:移到当前行的第一个 word
* `1337`:作弊码(打赢当前关)
玩过几关之后,你可能会注意到 vim 的使用有改善。一段时间后继续玩这个游戏,直到你掌握 Vim 的使用。
今天就是这些。希望这篇文章有用。PacVim 好玩又有趣并且让你有事做。同时,你应该能够彻底学习足够的 Vim 命令。试试看,你不会感到失望。
还有更多的好东西。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/pacvim-a-cli-game-to-learn-vim-commands/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://en.wikipedia.org/wiki/Pac-Man
[2]:https://www.ostechnix.com/getting-started-pacman/
[4]:http://www.ostechnix.com/wp-content/uploads/2018/05/pacvim-1.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/05/pacvim-2.png