mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-01-25 23:11:02 +08:00

Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
commit 4ddecabaa5
@@ -1,57 +1,57 @@

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12247-1.html)
[#]: subject: (Speed up administration of Kubernetes clusters with k9s)
[#]: via: (https://opensource.com/article/20/5/kubernetes-administration)
[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb)

k9s: You Read That Right, a Tool to Speed Up k8s Cluster Administration
======

> Check out this cool terminal UI for Kubernetes administration.

![](https://img.linux.net.cn/data/attachment/album/202005/25/104742pqjmiroc44honcs5.jpg)

Usually, my articles about Kubernetes administration are all about the `kubectl` commands used to manage a cluster. Recently, however, someone introduced me to the [k9s][2] project, which lets me quickly view and resolve day-to-day problems in Kubernetes. It has greatly improved my workflow, and I'll show you how to get started with it in this tutorial.

It can be installed on Mac, Windows, and Linux; instructions for each operating system can be found [here][2]. Be sure to complete the installation first so you can follow along with this tutorial.

I will use Linux and Minikube, a lightweight way to run Kubernetes on a personal computer. Install it by following [this tutorial][3] or using [the documentation][4].

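Once Minikube is installed, bringing a local cluster online is typically a single command (a quick sketch; the exact flags depend on your environment and chosen driver):

```
$ minikube start
```
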
### Setting up the k9s configuration file

Once the `k9s` application is installed, the help command is always a good starting point.

```
$ k9s help
```

As you can see from the help output, there is a lot we can configure in `k9s`. The only step we need to take is to write a configuration file, and the `info` command tells us where the application looks for it.

```
$ k9s info
 ____  __.________        
|    |/ _/   __   \______ 
|      <  \____    /  ___/
|    |  \    /    /\___ \ 
|____|__ \  /____//____  >
        \/              \/

Configuration:   /Users/jess/.k9s/config.yml
Logs:            /var/folders/5l/c1y1gcw97szdywgf9rk1100m0000gn/T/k9s-jess.log
Screen Dumps:    /var/folders/5l/c1y1gcw97szdywgf9rk1100m0000gn/T/k9s-screens-jess
```

To add a configuration file, create the configuration directory if it doesn't already exist, then add the file.

```
$ mkdir -p ~/.k9s/
$ touch ~/.k9s/config.yml
```

For this introduction, we'll use the default `config.yml` recommended in the `k9s` repository. The maintainers note that this format may change, so we can check [the latest version here][5].

```
k9s:
@@ -73,7 +73,7 @@ k9s:
  namespace:
    active: ""
    favorites:
    - all
    - kube-system
    - default
  view:
@@ -87,7 +87,7 @@ k9s:
    warn: 70
```

We set up `k9s` to look for a local minikube configuration, so I'm going to make sure minikube is online and ready to use.

```
$ minikube status
...
@@ -105,26 +105,26 @@ kubeconfig: Configured

$ k9s
```

Once it launches, the `k9s` text-based user interface pops up. With no namespace flag specified, it shows you the Pods in the default namespace.

![K9s screenshot][6]

If you run in an environment with a lot of Pods, the default view can be overwhelming. Alternatively, we can focus on a given namespace. Quit the application and run `k9s -n <namespace>`, where `<namespace>` is an existing namespace. In the picture below, I ran `k9s -n minecraft`, and it shows my broken Pod:

![K9s screenshot][7]

So, once you have `k9s`, there are a lot of things you can get done faster.

Navigation in `k9s` happens through shortcut keys; we can always use the arrow keys and the Enter key to select a listed item. There are quite a few other universal keystrokes for navigating to different views:

  * `0`: Show all Pods in all namespaces
![K9s screenshot][8]
  * `d`: Describe the selected Pod
![K9s screenshot][9]
  * `l`: Show logs for the selected Pod
![Using k9s to show Kubernetes pod logs][10]

You may notice that `k9s` is set up to use [Vim command keys][11], including moving up and down with `J` and `K`. Emacs users, retreat :)

### Quickly view different Kubernetes resources

@@ -141,11 +141,11 @@ $ k9s

  * `:cj`: Jump to the cronjob view to see which jobs are scheduled in the cluster.
![K9s screenshot][17]

The tool you will use most in this application is the keyboard; to page up or down on any view, use the arrow keys. If you need to quit, remember the Vim keybindings: type `:q` and press Enter to leave.

### An example of troubleshooting Kubernetes with k9s

How does `k9s` help when things go wrong? As an example, I let several Pods die from misconfiguration. Below you can see my poor "hello" deployment dying. Once we highlight it, we can press `d` to run a `describe` command and see what is causing the failure.

![K9s screenshot][18]

@@ -155,15 +155,15 @@ $ k9s

![K9s screenshot][20]

Unfortunately, the logs don't offer anything helpful either (probably because the deployment was never configured correctly), and the Pod never comes up.

I then backed out with `esc` and checked whether deleting the Pod would solve the problem. To do that, I highlighted the Pod and pressed `ctrl-d`. Thankfully, `k9s` prompts the user before deleting.

![K9s screenshot][21]

Although I did delete the Pod, the deployment resource still exists, so a new Pod comes right back up. For whatever reason (we don't know yet), it keeps restarting and dying.

At this point, I would keep checking the logs, describing the resources, and even using the `e` shortcut to edit the running Pod to troubleshoot the behavior. In this particular case, the Pod failed because it was not configured to run in this environment. So let's delete the deployment to stop the crash-and-restart loop.

We can get to the deployments by typing `:deploy` and hitting Enter. From there, we highlight the deployment and press `ctrl-d` to delete it.

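If you would rather double-check from a plain shell, the equivalent `kubectl` commands would look roughly like this (a sketch only; it assumes the deployment is named `hello` and lives in the `minecraft` namespace shown earlier):

```
$ kubectl get deployments -n minecraft
$ kubectl delete deployment hello -n minecraft
```
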
@@ -171,13 +171,13 @@ $ k9s

![K9s screenshot][23]

The problem deployment is gone! It only took a few keystrokes to clean up this failed deployment.

### k9s is extremely customizable

This application has a lot of customization options, right down to the UI's color scheme. Here are a few editable options you might be interested in:

  * Adjust where you keep the `config.yml` file (so you can store it in [version control][24]).
  * Add [custom aliases][25] in an `alias.yml` file (a sketch follows this list).
  * Create [custom hotkeys][26] in a `hotkey.yml` file.
  * Explore the existing [plugins][27] or write your own.

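For a rough idea of what the alias file looks like, here is a minimal sketch of an `alias.yml` that maps short names to Kubernetes resources. The entries here are hypothetical, and the exact format may have changed, so check the [custom aliases][25] documentation for the current syntax:

```
# ~/.k9s/alias.yml (hypothetical example)
alias:
  pp: v1/pods
  dep: apps/v1/deployments
```

With something like this in place, typing `:pp` would jump to the Pod view just as `:deploy` jumps to deployments.
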
@@ -186,7 +186,7 @@ $ k9s

### Simplify your life with k9s

I tend to manage my team's systems in a very hands-on way, more as a mental exercise than anything else. When I first heard about `k9s`, I thought, "This is just lazy Kubernetes," so I dismissed it and went back to manual intervention everywhere. I actually started using it daily while working through my backlog, and I was amazed at how much faster it was than using `kubectl` alone. Now, I'm a convert.

It's important to know your tools and to master doing things "the hard way." It's also important to remember that, as far as administration goes, what matters is working smarter, not harder. Using `k9s` is how I live up to that goal. I guess we can call it lazy Kubernetes administration, and that's fine.

@@ -197,7 +197,7 @@ via: https://opensource.com/article/20/5/kubernetes-administration

Author: [Jessica Cherry][a]
Curated by: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
145 sources/talk/20190331 Codecademy vs. The BBC Micro.md Normal file
@@ -0,0 +1,145 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Codecademy vs. The BBC Micro)
[#]: via: (https://twobithistory.org/2019/03/31/bbc-micro.html)
[#]: author: (Two-Bit History https://twobithistory.org)

Codecademy vs. The BBC Micro
======

In the late 1970s, the computer, which for decades had been a mysterious, hulking machine that only did the bidding of corporate overlords, suddenly became something the average person could buy and take home. An enthusiastic minority saw how great this was and rushed to get a computer of their own. For many more people, the arrival of the microcomputer triggered helpless anxiety about the future. An ad from a magazine at the time promised that a home computer would “give your child an unfair advantage in school.” It showed a boy in a smart blazer and tie eagerly raising his hand to answer a question, while behind him his dim-witted classmates look on sullenly. The ad and others like it implied that the world was changing quickly and, if you did not immediately learn how to use one of these intimidating new devices, you and your family would be left behind.

In the UK, this anxiety metastasized into concern at the highest levels of government about the competitiveness of the nation. The 1970s had been, on the whole, an underwhelming decade for Great Britain. Both inflation and unemployment had been high. Meanwhile, a series of strikes put London through blackout after blackout. A government report from 1979 fretted that a failure to keep up with trends in computing technology would “add another factor to our poor industrial performance.”[1][1] The country already seemed to be behind in the computing arena—all the great computer companies were American, while integrated circuits were being assembled in Japan and Taiwan.

In an audacious move, the BBC, a public service broadcaster funded by the government, decided that it would solve Britain’s national competitiveness problems by helping Britons everywhere overcome their aversion to computers. It launched the _Computer Literacy Project_, a multi-pronged educational effort that involved several TV series, a few books, a network of support groups, and a specially built microcomputer known as the BBC Micro. The project was so successful that, by 1983, an editor for BYTE Magazine wrote, “compared to the US, proportionally more of Britain’s population is interested in microcomputers.”[2][2] The editor marveled that there were more people at the Fifth Personal Computer World Show in the UK than had been to that year’s West Coast Computer Faire. Over a sixth of Great Britain watched an episode in the first series produced for the _Computer Literacy Project_ and 1.5 million BBC Micros were ultimately sold.[3][3]

[An archive][4] containing every TV series produced and all the materials published for the _Computer Literacy Project_ was put on the web last year. I’ve had a huge amount of fun watching the TV series and trying to imagine what it would have been like to learn about computing in the early 1980s. But what’s turned out to be more interesting is how computing was _taught_. Today, we still worry about technology leaving people behind. Wealthy tech entrepreneurs and governments spend lots of money trying to teach kids “to code.” We have websites like Codecademy that make use of new technologies to teach coding interactively. One would assume that this approach is more effective than a goofy ’80s TV series. But is it?

### The Computer Literacy Project

The microcomputer revolution began in 1975 with the release of [the Altair 8800][5]. Only two years later, the Apple II, TRS-80, and Commodore PET had all been released. Sales of the new computers exploded. In 1978, the BBC explored the dramatic societal changes these new machines were sure to bring in a documentary called “Now the Chips Are Down.”

The documentary was alarming. Within the first five minutes, the narrator explains that microelectronics will “totally revolutionize our way of life.” As eerie synthesizer music plays, and green pulses of electricity dance around a magnified microprocessor on screen, the narrator argues that the new chips are why “Japan is abandoning its ship building, and why our children will grow up without jobs to go to.” The documentary goes on to explore how robots are being used to automate car assembly and how the European watch industry has lost out to digital watch manufacturers in the United States. It castigates the British government for not doing more to prepare the country for a future of mass unemployment.

The documentary was supposedly shown to the British Cabinet.[4][6] Several government agencies, including the Department of Industry and the Manpower Services Commission, became interested in trying to raise awareness about computers among the British public. The Manpower Services Commission provided funds for a team from the BBC’s education division to travel to Japan, the United States, and other countries on a fact-finding trip. This research team produced a report that cataloged the ways in which microelectronics would indeed mean major changes for industrial manufacturing, labor relations, and office work. In late 1979, it was decided that the BBC should make a ten-part TV series that would help regular Britons “learn how to use and control computers and not feel dominated by them.”[5][7] The project eventually became a multimedia endeavor similar to the _Adult Literacy Project_, an earlier BBC undertaking involving both a TV series and supplemental courses that helped two million people improve their reading.

The producers behind the _Computer Literacy Project_ were keen for the TV series to feature “hands-on” examples that viewers could try on their own if they had a microcomputer at home. These examples would have to be in BASIC, since that was the language (really the entire shell) used on almost all microcomputers. But the producers faced a thorny problem: Microcomputer manufacturers all had their own dialects of BASIC, so no matter which dialect they picked, they would inevitably alienate some large fraction of their audience. The only real solution was to create a new BASIC—BBC BASIC—and a microcomputer to go along with it. Members of the British public would be able to buy the new microcomputer and follow along without worrying about differences in software or hardware.

The TV producers and presenters at the BBC were not capable of building a microcomputer on their own. So they put together a specification for the computer they had in mind and invited British microcomputer companies to propose a new machine that met the requirements. The specification called for a relatively powerful computer because the BBC producers felt that the machine should be able to run real, useful applications. Technical consultants for the _Computer Literacy Project_ also suggested that, if it had to be a BASIC dialect that was going to be taught to the entire nation, then it had better be a good one. (They may not have phrased it exactly that way, but I bet that’s what they were thinking.) BBC BASIC would make up for some of BASIC’s usual shortcomings by allowing for recursion and local variables.[6][8]

The BBC eventually decided that a Cambridge-based company called Acorn Computers would make the BBC Micro. In choosing Acorn, the BBC passed over a proposal from Clive Sinclair, who ran a company called Sinclair Research. Sinclair Research had brought mass-market microcomputing to the UK in 1980 with the Sinclair ZX80. Sinclair’s new computer, the ZX81, was cheap but not powerful enough for the BBC’s purposes. Acorn’s new prototype computer, known internally as the Proton, would be more expensive but more powerful and expandable. The BBC was impressed. The Proton was never marketed or sold as the Proton because it was instead released in December 1981 as the BBC Micro, also affectionately called “The Beeb.” You could get a 16k version for £235 and a 32k version for £335.

In 1980, Acorn was an underdog in the British computing industry. But the BBC Micro helped establish the company’s legacy. Today, the world’s most popular microprocessor instruction set is the ARM architecture. “ARM” now stands for “Advanced RISC Machine,” but originally it stood for “Acorn RISC Machine.” ARM Holdings, the company behind the architecture, was spun out from Acorn in 1990.

![Picture of the BBC Micro.][9]

_A bad picture of a BBC Micro, taken by me at the Computer History Museum in Mountain View, California._

### The Computer Programme

A dozen different TV series were eventually produced as part of the _Computer Literacy Project_, but the first of them was a ten-part series known as _The Computer Programme_. The series was broadcast over ten weeks at the beginning of 1982. A million people watched each week-night broadcast of the show; a quarter million watched the reruns on Sunday and Monday afternoon.

The show was hosted by two presenters, Chris Serle and Ian McNaught-Davis. Serle plays the neophyte while McNaught-Davis, who had professional experience programming mainframe computers, plays the expert. This was an inspired setup. It made for [awkward transitions][10]—Serle often goes directly from a conversation with McNaught-Davis to a bit of walk-and-talk narration delivered to the camera, and you can’t help but wonder whether McNaught-Davis is still standing there out of frame or what. But it meant that Serle could voice the concerns that the audience would surely have. He can look intimidated by a screenful of BASIC and can ask questions like, “What do all these dollar signs mean?” At several points during the show, Serle and McNaught-Davis sit down in front of a computer and essentially pair program, with McNaught-Davis providing hints here and there while Serle tries to figure it out. It would have been much less relatable if the show had been presented by a single, all-knowing narrator.

The show also made an effort to demonstrate the many practical applications of computing in the lives of regular people. By the early 1980s, the home computer had already begun to be associated with young boys and video games. The producers behind _The Computer Programme_ sought to avoid interviewing “impressively competent youngsters,” as that was likely “to increase the anxieties of older viewers,” a demographic that the show was trying to attract to computing.[7][11] In the first episode of the series, Gill Nevill, the show’s “on location” reporter, interviews a woman who has bought a Commodore PET to help manage her sweet shop. The woman (her name is Phyllis) looks to be 60-something years old, yet she has no trouble using the computer to do her accounting and has even started using her PET to do computer work for other businesses, which sounds like the beginning of a promising freelance career. Phyllis says that she wouldn’t mind if the computer work grew to replace her sweet shop business since she enjoys the computer work more. This interview could instead have been an interview with a teenager about how he had modified _Breakout_ to be faster and more challenging. But that would have been encouraging to almost nobody. On the other hand, if Phyllis, of all people, can use a computer, then surely you can too.

While the show features lots of BASIC programming, what it really wants to teach its audience is how computing works in general. The show explains these general principles with analogies. In the second episode, there is an extended discussion of the Jacquard loom, which accomplishes two things. First, it illustrates that computers are not based only on magical technology invented yesterday—some of the foundational principles of computing go back two hundred years and are about as simple as the idea that you can punch holes in card to control a weaving machine. Second, the interlacing of warp and weft threads is used to demonstrate how a binary choice (does the weft thread go above or below the warp thread?) is enough, when repeated over and over, to produce enormous variation. This segues, of course, into a discussion of how information can be stored using binary digits.

Later in the show there is a section about a steam organ that plays music encoded in a long, segmented roll of punched card. This time the analogy is used to explain subroutines in BASIC. Serle and McNaught-Davis lay out the whole roll of punched card on the floor in the studio, then point out the segments where it looks like a refrain is being repeated. McNaught-Davis explains that a subroutine is what you would get if you cut out those repeated segments of card and somehow added an instruction to go back to the original segment that played the refrain for the first time. This is a brilliant explanation and probably one that stuck around in people’s minds for a long time afterward.

I’ve picked out only a few examples, but I think in general the show excels at demystifying computers by explaining the principles that computers rely on to function. The show could instead have focused on teaching BASIC, but it did not. This, it turns out, was very much a conscious choice. In a retrospective written in 1983, John Radcliffe, the executive producer of the _Computer Literacy Project_, wrote the following:

> If computers were going to be as important as we believed, some genuine understanding of this new subject would be important for everyone, almost as important perhaps as the capacity to read and write. Early ideas, both here and in America, had concentrated on programming as the main route to computer literacy. However, as our thinking progressed, although we recognized the value of “hands-on” experience on personal micros, we began to place less emphasis on programming and more on wider understanding, on relating micros to larger machines, encouraging people to gain experience with a range of applications programs and high-level languages, and relating these to experience in the real world of industry and commerce…. Our belief was that once people had grasped these principles, at their simplest, they would be able to move further forward into the subject.

Later, Radcliffe writes, in a similar vein:

> There had been much debate about the main explanatory thrust of the series. One school of thought had argued that it was particularly important for the programmes to give advice on the practical details of learning to use a micro. But we had concluded that if the series was to have any sustained educational value, it had to be a way into the real world of computing, through an explanation of computing principles. This would need to be achieved by a combination of studio demonstration on micros, explanation of principles by analogy, and illustration on film of real-life examples of practical applications. Not only micros, but mini computers and mainframes would be shown.

I love this, particularly the part about mini-computers and mainframes. The producers behind _The Computer Programme_ aimed to help Britons get situated: Where had computing been, and where was it going? What can computers do now, and what might they do in the future? Learning some BASIC was part of answering those questions, but knowing BASIC alone was not seen as enough to make someone computer literate.

### Computer Literacy Today

If you google “learn to code,” the first result you see is a link to Codecademy’s website. If there is a modern equivalent to the _Computer Literacy Project_, something with the same reach and similar aims, then it is Codecademy.

“Learn to code” is Codecademy’s tagline. I don’t think I’m the first person to point this out—in fact, I probably read this somewhere and I’m now ripping it off—but there’s something revealing about using the word “code” instead of “program.” It suggests that the important thing you are learning is how to decode the code, how to look at a screen’s worth of Python and not have your eyes glaze over. I can understand why to the average person this seems like the main hurdle to becoming a professional programmer. Professional programmers spend all day looking at computer monitors covered in gobbledygook, so, if I want to become a professional programmer, I better make sure I can decipher the gobbledygook. But dealing with syntax is not the most challenging part of being a programmer, and it quickly becomes almost irrelevant in the face of much bigger obstacles. Also, armed only with knowledge of a programming language’s syntax, you may be able to _read_ code but you won’t be able to _write_ code to solve a novel problem.

I recently went through Codecademy’s “Code Foundations” course, which is the course that the site recommends you take if you are interested in programming (as opposed to web development or data science) and have never done any programming before. There are a few lessons in there about the history of computer science, but they are perfunctory and poorly researched. (Thank heavens for [this noble internet vigilante][12], who pointed out a particularly egregious error.) The main focus of the course is teaching you about the common structural elements of programming languages: variables, functions, control flow, loops. In other words, the course focuses on what you would need to know to start seeing patterns in the gobbledygook.

To be fair to Codecademy, they offer other courses that look meatier. But even courses such as their “Computer Science Path” course focus almost exclusively on programming and concepts that can be represented in programs. One might argue that this is the whole point—Codecademy’s main feature is that it gives you little interactive programming lessons with automated feedback. There also just isn’t enough room to cover more because there is only so much you can stuff into somebody’s brain in a little automated lesson. But the producers at the BBC tasked with kicking off the _Computer Literacy Project_ also had this problem; they recognized that they were limited by their medium and that “the amount of learning that would take place as a result of the television programmes themselves would be limited.”[8][13] With similar constraints on the volume of information they could convey, they chose to emphasize general principles over learning BASIC. Couldn’t Codecademy replace a lesson or two with an interactive visualization of a Jacquard loom weaving together warp and weft threads?

I’m banging the drum for “general principles” loudly now, so let me just explain what I think they are and why they are important. There’s a book by J. Clark Scott about computers called _But How Do It Know?_ The title comes from the anecdote that opens the book. A salesman is explaining to a group of people that a thermos can keep hot food hot and cold food cold. A member of the audience, astounded by this new invention, asks, “But how do it know?” The joke of course is that the thermos is not perceiving the temperature of the food and then making a decision—the thermos is just constructed so that cold food inevitably stays cold and hot food inevitably stays hot. People anthropomorphize computers in the same way, believing that computers are digital brains that somehow “choose” to do one thing or another based on the code they are fed. But learning a few things about how computers work, even at a rudimentary level, takes the homunculus out of the machine. That’s why the Jacquard loom is such a good go-to illustration. It may at first seem like an incredible device. It reads punch cards and somehow “knows” to weave the right pattern! The reality is mundane: Each row of holes corresponds to a thread, and where there is a hole in that row the corresponding thread gets lifted. Understanding this may not help you do anything new with computers, but it will give you the confidence that you are not dealing with something magical. We should impart this sense of confidence to beginners as soon as we can.

Alas, it’s possible that the real problem is that nobody wants to learn about the Jacquard loom. Judging by how Codecademy emphasizes the professional applications of what it teaches, many people probably start using Codecademy because they believe it will help them “level up” their careers. They believe, not unreasonably, that the primary challenge will be understanding the gobbledygook, so they want to “learn to code.” And they want to do it as quickly as possible, in the hour or two they have each night between dinner and collapsing into bed. Codecademy, which after all is a business, gives these people what they are looking for—not some roundabout explanation involving a machine invented in the 18th century.

The _Computer Literacy Project_, on the other hand, is what a bunch of producers and civil servants at the BBC thought would be the best way to educate the nation about computing. I admit that it is a bit elitist to suggest we should laud this group of people for teaching the masses what they were incapable of seeking out on their own. But I can’t help but think they got it right. Lots of people first learned about computing using a BBC Micro, and many of these people went on to become successful software developers or game designers. [As I’ve written before][14], I suspect learning about computing at a time when computers were relatively simple was a huge advantage. But perhaps another advantage these people had is shows like _The Computer Programme_, which strove to teach not just programming but also how and why computers can run programs at all. After watching _The Computer Programme_, you may not understand all the gobbledygook on a computer screen, but you don’t really need to because you know that, whatever the “code” looks like, the computer is always doing the same basic thing. After a course or two on Codecademy, you understand some flavors of gobbledygook, but to you a computer is just a magical machine that somehow turns gobbledygook into running software. That isn’t computer literacy.

_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][15] on Twitter or subscribe to the [RSS feed][16] to make sure you know when a new post is out._

_Previously on TwoBitHistory…_

> FINALLY some new damn content, amirite?
>
> Wanted to write an article about how Simula bought us object-oriented programming. It did that, but early Simula also flirted with a different vision for how OOP would work. Wrote about that instead!<https://t.co/AYIWRRceI6>
>
> — TwoBitHistory (@TwoBitHistory) [February 1, 2019][17]

1. Robert Albury and David Allen, Microelectronics, report (1979). [↩︎][18]
2. Gregg Williams, “Microcomputing, British Style”, Byte Magazine, 40, January 1983, accessed on March 31, 2019, <https://archive.org/stream/byte-magazine-1983-01/1983_01_BYTE_08-01_Looking_Ahead#page/n41/mode/2up>. [↩︎][19]
3. John Radcliffe, “Toward Computer Literacy,” Computer Literacy Project Archive, 42, accessed March 31, 2019, [https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/media/Towards Computer Literacy.pdf][20]. [↩︎][21]
4. David Allen, “About the Computer Literacy Project,” Computer Literacy Project Archive, accessed March 31, 2019, <https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/history>. [↩︎][22]
5. ibid. [↩︎][23]
6. Williams, 51. [↩︎][24]
7. Radcliffe, 11. [↩︎][25]
8. Radcliffe, 5. [↩︎][26]

--------------------------------------------------------------------------------

via: https://twobithistory.org/2019/03/31/bbc-micro.html

Author: [Two-Bit History][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: tmp.05mfBL4kP8#fn:1
[2]: tmp.05mfBL4kP8#fn:2
[3]: tmp.05mfBL4kP8#fn:3
[4]: https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/
[5]: https://twobithistory.org/2018/07/22/dawn-of-the-microcomputer.html
[6]: tmp.05mfBL4kP8#fn:4
[7]: tmp.05mfBL4kP8#fn:5
[8]: tmp.05mfBL4kP8#fn:6
[9]: https://twobithistory.org/images/beeb.jpg
[10]: https://twitter.com/TwoBitHistory/status/1112372000742404098
[11]: tmp.05mfBL4kP8#fn:7
[12]: https://twitter.com/TwoBitHistory/status/1111305774939234304
[13]: tmp.05mfBL4kP8#fn:8
[14]: https://twobithistory.org/2018/09/02/learning-basic.html
[15]: https://twitter.com/TwoBitHistory
[16]: https://twobithistory.org/feed.xml
[17]: https://twitter.com/TwoBitHistory/status/1091148050221944832?ref_src=twsrc%5Etfw
[18]: tmp.05mfBL4kP8#fnref:1
[19]: tmp.05mfBL4kP8#fnref:2
[20]: https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/media/Towards%20Computer%20Literacy.pdf
[21]: tmp.05mfBL4kP8#fnref:3
[22]: tmp.05mfBL4kP8#fnref:4
[23]: tmp.05mfBL4kP8#fnref:5
[24]: tmp.05mfBL4kP8#fnref:6
[25]: tmp.05mfBL4kP8#fnref:7
[26]: tmp.05mfBL4kP8#fnref:8
98 sources/talk/20200421 What is a High Traffic Website.md Normal file
@@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is a High Traffic Website?)
[#]: via: (https://theartofmachinery.com/2020/04/21/what_is_high_traffic.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)

What is a High Traffic Website?
======

Terms like “high traffic” are hazardous when designing online services because salespeople, business analysts and engineers all have different perspectives about what they mean. If we’re talking about, say, a high-stakes online poker room, then “high traffic” for the business side will be very low compared to what it is for the technical side. However, all these people will be in a meeting room together making decisions, using the same words to mean different things. It’s obvious how that can lead to bad (and sometimes expensive) choices.

A lot of my day job is talking to business stakeholders and figuring out the technical solutions they need, so this is a problem I have to deal with. So I’ve got my own purely technical way to think about traffic levels for online services.

### Scalability vs performance

First, let’s be clear about two concepts that come up a lot in online service design.

For online services, performance is all about how well (usually how fast) the system can handle a single request or unit of work. Scalability is about the volume or size of work that can be handled. For online services, scalability is usually about the number of user requests that can be handled within a timeframe, while for batch jobs we typically care about the size of the dataset we can process. Sometimes we want the system capacity to grow and shrink based on demand, but sometimes we don’t care as long as we can handle the full range of workloads we expect.

Scalability and performance often get confused because they commonly work together. For example, suppose you have some online service using a slow algorithm and you improve performance by replacing the algorithm with another that does the same job with less work. Primarily that’s a performance gain, but as long as the new algorithm doesn’t use more memory or something, you’ll be able to handle more requests in the same time.

There’s a counterexample I’ve discussed before: [internal caches versus external cache servers][1]. Using an external cache server like Redis means your app has to make network calls to do cache lookups, so it has performance overhead. On the other hand, if your app is replicated across multiple machines, a shared, external cache is more effective than a per-app, in-memory cache. The external cache is more scalable.

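To make the contrast concrete, here is a schematic sketch in Python of the two lookup paths (my illustration, not code from any particular site; it assumes a Redis server on localhost and the third-party `redis` package):

```python
import redis

# Per-process cache: the fastest possible lookup, but every replica of the
# app has to warm up its own private copy.
local_cache = {}  # str -> bytes

# Shared cache: one network round trip per lookup, but a value cached by any
# replica is immediately visible to all the others.
shared_cache = redis.Redis(host="localhost")

def lookup(key):
    if key in local_cache:         # in-memory hit: no network overhead
        return local_cache[key]
    value = shared_cache.get(key)  # network hop: slower per request...
    if value is not None:
        local_cache[key] = value   # ...but shared, hence more scalable
    return value
```

The in-process dictionary wins on per-request performance; the external server wins on scalability once the app is replicated.
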
### Response time

When designing systems, it’s helpful to start by thinking about latency or response time requirements, even if we have to make some up and revise them later. Adding more RAM, more caches, more machines or more disk can solve a lot of problems, but latency problems tend to be fundamental to the system design. For example, suppose you’re designing an online game, and you want latencies of under 100ms for all users. Straight away, the speed of light limit means you can’t have one central server supporting a global game, regardless of whatever algorithms or hardware you throw at the problem.

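A quick back-of-the-envelope calculation (my own sketch, with rough assumed numbers) shows why:

```python
# Rough bound on round-trip time to the far side of the planet.
SPEED_IN_FIBER_KM_S = 200_000   # light in optical fibre: about 2/3 of c
ANTIPODAL_KM = 20_000           # roughly half of Earth's circumference

one_way_s = ANTIPODAL_KM / SPEED_IN_FIBER_KM_S
round_trip_ms = 2 * one_way_s * 1000
print(f"Best-case round trip: {round_trip_ms:.0f} ms")  # -> 200 ms
```

That's around 200ms before the server has done any work at all, which already blows a 100ms budget, no matter what hardware you buy.
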
There’s another reason it’s useful to focus on server response time in practice. If you have a simple, single-function website, such as a mortgage calculator, then the response time can be estimated based on technical things like the hardware specs and code quality. But that’s not how typical online services are built. The online service industry tends to emphasise adding more and more features for less development cost. That means [webpages tend to expand in complexity][2] using the easiest code possible, and only get optimised when they become too slow and users churn. Even the typical mortgage calculator site will end up bloated with advertising and tracking functionality. So the response time of a website in my day job depends mostly on the client’s budget and priorities, not on technical factors, regardless of whether it’s an ecommerce site or a cutting-edge data application.

### Looking at a single worker

Okay, so now imagine a simple web app that gets about one request an hour that takes about 5s to process (ignoring static assets because they’re not the bottleneck). That app has an obvious performance problem because many users will give up before 5s. But there’s no scaling problem because the server will practically never hit capacity limits and drop requests. Even if traffic rises, the performance problem is the bottleneck that takes priority over any hypothetical scalability problems.

That’s a simple insight that we can take further. Let’s say we target 100ms per request, and our simple web app processes requests one at a time serially (i.e., no scaling). With 86,400 seconds in a day, a naïve calculation says we can handle 86,400 / 0.1 = 864,000 requests per day before we have scaling problems.

Of course, the real world isn’t that simple.

First, there will be slower and faster requests, and [requests that arrive at random won’t balance themselves nicely][3]. They’ll come in clusters that fill up queues, and the backlog will cause large spikes in response time. (There’s a handy rule that says [if you want to keep response time under control, you should target about 80% usage of theoretical total capacity][4].)

Then there’s diurnal variation. Some local websites get nearly all of their traffic during business hours, or about a third of the day. Even a very global website can easily have 2-3 times more traffic at peak than at trough because populations aren’t distributed evenly around the world (a lot of internet users live in East Asia and North/South America, but not in the huge Pacific ocean). The actual ratio depends on many factors that are hard to predict.

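Putting those corrections together, here's a small sketch of the whole back-of-the-envelope model (the utilisation and peak ratios are the rough figures mentioned above, not measurements):

```python
SECONDS_PER_DAY = 86_400
response_time_s = 0.1     # target time per request
utilisation = 0.8         # ~80% of theoretical capacity keeps queues sane
peak_to_mean = 3          # assumed peak-hour traffic vs. the daily average

naive = SECONDS_PER_DAY / response_time_s      # 864,000 requests/day
with_headroom = naive * utilisation            # leave room for queueing
with_peaks = with_headroom / peak_to_mean      # survive the daily peak

print(f"naive ceiling:       {naive:,.0f} requests/day")
print(f"with 80% rule:       {with_headroom:,.0f} requests/day")
print(f"with diurnal peaks:  {with_peaks:,.0f} requests/day")
```

Even this crude model puts a single serial worker's comfortable ceiling in the low hundreds of thousands of requests a day, which is why the level boundaries below sit where they do.
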
But even if we can’t easily get exact capacity estimates, this simple model is the justification for splitting websites into three traffic levels.

### The three traffic levels

The first level is for sites that get well under 100k dynamic requests a day. Most websites are at this level, and a lot will stay that way while being totally useful and successful. They can also pose complex technical challenges, both for performance and functionality, and a lot of my work is on sites this size. But they don’t have true scalability problems (as opposed to problems that can be solved purely by improving performance).

If a website gets bigger, it can get into the “growing pains” level, which is roughly around 100k-1M dynamic requests a day. This is where scalability problems start to appear. Often the site is at least a bit scalable (thanks to, e.g., async or multithreaded programming), but Web developers scaling a site through this level keep discovering new surprise pain points. Things that a smaller site can get away with start turning into real problems in this level. Some of the biggest challenges are actually sociotechnical, with the team that builds and manages the site needing to learn to think about it in new ways.

The next level is after leaving the 1M dynamic requests a day boundary behind. I think of sites at this level as being high traffic. Let me stress that that’s a technical line, not a value judgment or ego statement. The world’s biggest websites are orders of magnitude bigger than that, while most of the world’s useful websites are smaller. But the line matters because you simply can’t run a site at that scale without treating it like a high traffic site. You can get away with it at low traffic levels, you can fumble through it at the growing pains level, but at high traffic levels you just have to work differently. Coincidentally, it’s around this traffic level where it makes more sense to talk about requests per second than requests per day.

By the way, don’t focus too much on the exact traffic levels above. They’re very rough and honestly I picked them because they’re convenient round numbers that happen to be reasonable for typical websites. The real values depend on the target response time and all the other factors, of course. What I’m trying to explain is 1) that these levels exist, 2) why they exist and 3) what to expect if you’re trying to grow an online service.

### Going to more levels

What happens with sites that get even bigger? Once the problems at one set of bottlenecks are fixed, the site should just scale until it hits a new set of bottlenecks, either because the application has changed significantly, or just because of a very large increase in traffic. For example, once the application servers are scalable, the next scaling bottleneck could be database reads, followed by database writes. However, the basic ideas are the same, just applied to a different part of the system.

Working on a very high-traffic site can be a lot less “exciting” than working on a plain high-traffic site, simply because most major problems need to be solved to get to very high traffic levels in the first place.

### Scaling when you don’t have scaling problems

Some developers try to make online services scalable long before they have any scalability problems on the horizon, usually by adding exotic databases or broker servers or something. In particular, startup founders are often especially concerned that their technical assets might not scale to meet their business ambitions. It’s understandable, but it’s a dangerous trap for a couple of reasons.

One is that Paul Graham’s classic [Do Things That Don’t Scale][5] essay applies to your technology stack, too. You can’t beat bigger companies with scale, but your competitive advantage is that you can choose to _not_ solve the scalability problems that bigger companies are forced to with every step they take. That’s what makes smaller companies agile, and a startup that worries too much about scalability is just a big enterprise without the big to back it up.

The other problem is that premature scalability solutions can easily backfire. If you don’t have real scalability problems to test your solutions against, it’s hard to be sure you’re correctly solving a real problem. In fact, rapidly growing services tend to change requirements rapidly, too, so the risk of a scalability “solution” turning into technical debt is high. If you keep trying to add scalability to a part of the system that’s already scalable enough, chances are the next scaling bottleneck will appear somewhere else, anyway.

Architectures that err on the side of too simple are easier to scale in the long run than architectures that are too complex.

To be more concrete, I personally can’t think of a low-traffic online service I’ve worked on that couldn’t have been implemented cleanly enough using a simple, monolithic web app (in whatever popular language) in front of [a boring relational database][6], maybe with a search index like Xapian or Elasticsearch. [Many major websites aren’t much different from that.][7] It’s not the only valid architecture, but it’s a tried-and-tested one.

Having said all that, sometimes low-traffic sites need things that are usually sold as scalability solutions. For example, replicating an app behind a load balancer can help you deploy whenever you want without downtime. One fintech service I worked on split credit card code into its own service, making PCI DSS compliance simpler. In all these cases there’s a clear problem other than scalability that’s being solved, and that’s what avoids overengineering.

I often wish I had a systematic way to just figure out all the technical requirements for an online service in my head. But the real world is complicated and messy, and sometimes the only practical way to be sure is to experiment. However, every piece of software starts with ideas, and this is how I think about scalability for online service ideas during the early design phase.

--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2020/04/21/what_is_high_traffic.html

Author: [Simon Arneaud][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://theartofmachinery.com/2016/07/30/server_caching_architectures.html
[2]: https://mobiforge.com/research-analysis/the-web-is-doom
[3]: https://theartofmachinery.com/2020/01/27/systems_programming_probability.html
[4]: https://www.johndcook.com/blog/2009/01/30/server-utilization-joel-on-queuing/
[5]: http://www.paulgraham.com/ds.html
[6]: https://theartofmachinery.com/2017/10/28/rdbs_considered_useful.html
[7]: https://nickcraver.com/blog/2016/02/17/stack-overflow-the-architecture-2016-edition/
@@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ethernet Alliance study finds Power over Ethernet issues)
[#]: via: (https://www.networkworld.com/article/3543258/ethernet-alliance-study-finds-power-over-ethernet-issues.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Ethernet Alliance study finds Power over Ethernet issues
======

Reliability problems, connection issues, and failures to deliver power are traced to interoperability issues with Power over Ethernet.

Martyn Williams/IDGNS

Four out of five users experience challenges with power over Ethernet (PoE) deployments, according to a new survey of nearly 800 Ethernet designers, manufacturers, resellers, system integrators, network operators and others.

Conducted by the Ethernet Alliance in January, the study found a number of key PoE insights, including:

  * Four out of five users experienced issues, including support, reliability, or connection challenges.
  * The top three PoE installations are cameras and phones, as well as computing and storage devices.
  * Of customers planning to implement PoE, 63% said they would need 30W, 47% would need between 30W and 60W, and 27% would need greater than 60W.

**[ Now see [7 free network tools you must have][1]. ]**

“With the global market projected to grow to $2 billion by 2025, PoE remains a wellspring of lucrative opportunities for designers, systems integrators, and solutions providers,” David Tremblay, chair of the alliance's PoE Subcommittee and system architect for Aruba, said in a statement. “Despite this good news, there are significant challenges that could threaten PoE’s growing adoption.”

According to the survey, those chief PoE challenges include vendor support, unreliable power or operation, long repair times, and first-time connection issues.

The Alliance reported that while 78% of respondents experienced difficulties with PoE deployments, 72% expect improvement with products certified through the Ethernet Alliance’s [PoE Certification Program][2]. The study found 84% said they expect certified PoE devices would be more likely to work the first time, and 85% expect those devices to be more reliable.

“Lacking a registered trademark, the use of the term ‘PoE’ is not formally regulated, allowing any vendor to freely describe products and solutions as PoE-enabled. Additionally, terminologies such as ‘PoE+’, as well as non-standard PoE implementations are causing confusion with device interoperability among technicians, designers, and end users,” the Alliance stated.

[Experts say][3] the single greatest challenge for PoE is assuring interoperability. Multivendor interoperability is Ethernet’s hallmark, and it's an important consideration for consumers who want to know the gear will just work, while industry players need a way to find new partnership opportunities with companies offering certified equipment, the Dell’Oro Group said.

“With the diversity of application come interoperability problems, which dictate the need for testing and certification,” said Sameh Boujelbene, Senior Research Director for Ethernet Switch market research at Dell’Oro.

Certified Ethernet Alliance products range from component-level Ethernet evaluation boards to power-sourcing equipment, enterprise switches, and adapters.

Ultimately, the Ethernet Alliance’s Power over Ethernet (PoE) Certification Program is where customers should look to enable faster PoE installations with zero interoperability issues, Tremblay said.

A number of key Ethernet vendors, including Cisco, Hewlett Packard Enterprise, Huawei, Analog Devices, Texas Instruments, Microsemi and others, are part of the certification program.

Details about certified products are available via the [program’s public registry][4].

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3543258/ethernet-alliance-study-finds-power-over-ethernet-issues.html

Author: [Michael Cooney][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html
[2]: https://nam05.safelinks.protection.outlook.com/?url=https%3A%2F%2Fbit.ly%2FEA_PoECertification&data=02%7C01%7C%7Ca67fc55074c748affa8408d7f74fc98e%7C3aedb78fc8e04326888a74230d1978ff%7C0%7C0%7C637249794373724526&sdata=2QDd6oO2Yu6EiCj6m7i%2BtbVJD1wHxmNwtDr2oMe9TgQ%3D&reserved=0
[3]: https://www.networkworld.com/article/2328615/the-power-over-ethernet.html
[4]: https://ea-poe-cert.iol.unh.edu/?utm_source=General%20Distribution&utm_medium=Press%20Release&utm_campaign=October%20PoE%20Certification%20Press%20Release
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
63 sources/talk/20200514 How IoT will rescue aviation.md Normal file
@@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How IoT will rescue aviation)
[#]: via: (https://www.networkworld.com/article/3543318/how-iot-will-rescue-aviation.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

How IoT will rescue aviation
======

European airplane maker Airbus is one company exploring virus-spotting IoT sensors in an attempt to keep COVID-19-infected passengers off planes.

[Stéphan Valentin][1] [(CC0)][2]

A biotech company that develops sensors to detect explosives and other chemicals on planes and in airports is teaming up with Airbus to create a sensor that could detect passengers who are positive for COVID-19.

California-based Koniku and Airbus, which have been working since 2017 on contactless equipment that sniffs out chemicals, are trying to adapt that technology to sniff out pathogens, says Osh Agabi, founder and CEO of Koniku, [in a blog post][3].

[[Get regularly scheduled insights by signing up for Network World newsletters.]][4]

They hope to identify odors in breath or sweat that are chemical markers indicating the presence of COVID-19 infection. "Most infections and diseases cause slight changes to the composition of our breath and sweat, which then produce distinct odors," Agabi writes. "If we can detect those odors, we can detect the presence of those infections."

The companies hope to identify markers specific to the novel coronavirus and to build an IoT sensor equipped with genetically engineered odorant receptors that can detect them. "Those receptors screen molecules in the air and produce a signal when they come into contact with the molecular compounds of the hazard or threat that they have been programmed to detect," he writes.

He says that passengers would be screened by walking through an enclosed corridor where the sensors are deployed. "By programming the DNA of the cells that make up these receptors to react to the compounds that appear in infected people’s breath or sweat, we believe we will be able to quickly and reliably screen for COVID-19 and determine whether a person is infected," he writes.

Other types of contactless detectors are already in use, including elevated-skin-temperature (EST) cameras.

Italy's main airport, Leonardo da Vinci, acquired three thermal-imaging helmets with the intent to use them to spot persons with fevers. The airport already had fixed thermal scanners and has ordered more. Passengers detected with potentially high temperatures are made to take a further medical exam, [according to regional publication Fiumicino Online][5].

KC Wearable, the Shenzhen, China, company that makes the helmets, says they can be worn by staff and used at a distance from passengers.

FLIR Systems, which makes thermal cameras, says there has been increased demand for their use in EST screening, as the company reported this month in its [financial results][6].

"Although these thermal cameras cannot detect or diagnose any type of medical condition, the cameras do serve as an effective tool to identify elevated skin temperatures," it says.

"Many companies are looking to install this technology in their facilities in anticipation of lifting the shelter-in-place orders," FLIR CEO Jim Cannon [said in an earnings call][7] this month. General Motors is one of them, [according to Reuters][8].

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3543318/how-iot-will-rescue-aviation.html

Author: [Patrick Nelson][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/s7NGQU2Nt8k
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.linkedin.com/pulse/what-happens-when-airports-open-back-up-osh-agabi/?src=aff-lilpar&veh=aff_src.aff-lilpar_c.partners_pkw.10078_plc.Skimbit%20Ltd._pcrid.449670_learning&trk=aff_src.aff-lilpar_c.partners_pkw.10078_plc.Skimbit%20Ltd._pcrid.449670_learning&clickid=WNmzMlyalxyOUI7wUx0Mo34HUkiwwpy%3APQ3X1Y0&irgwc=1
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.fiumicino-online.it/articoli/cronaca-2/fase-2-all-aeroporto-di-fiumicino-lo-smart-helmet-per-controllare-la-febbre-a-distanza
[6]: https://flir.gcs-web.com/news-releases/news-release-details/flir-systems-announces-first-quarter-2020-financial-results
[7]: https://www.fool.com/earnings/call-transcripts/2020/05/06/flir-systems-inc-flir-q1-2020-earnings-call-transc.aspx
[8]: https://uk.reuters.com/article/us-flir-systems-gm/general-motors-taps-flir-systems-for-fever-check-cameras-at-factories-idUKKBN22J02B
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,157 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is IoT? The internet of things explained)
[#]: via: (https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html)
[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)

What is IoT? The internet of things explained
======

The internet of things (IoT) is a network of connected smart devices providing rich data, but it can also be a security nightmare.

Thinkstock

The internet of things (IoT) is a catch-all term for the growing number of electronics that aren't traditional computing devices, but are connected to the internet to send data, receive instructions or both.

There's an incredibly broad range of things that fall under that umbrella: internet-connected "smart" versions of traditional appliances like refrigerators and light bulbs; gadgets that could only exist in an internet-enabled world, like Alexa-style digital assistants; and internet-enabled sensors that are transforming factories, healthcare, transportation, distribution centers and farms.

### What is the internet of things?

The IoT brings the power of the internet, data processing and analytics to the real world of physical objects. For consumers, this means interacting with the global information network without the intermediary of a keyboard and screen; many of their everyday objects and appliances can take instructions from that network with minimal human intervention.

**[ [More IoT coverage of Network World][1] ]**

In enterprise settings, IoT can bring the same efficiencies to physical manufacturing and distribution that the internet has long delivered for knowledge work. Millions if not billions of embedded internet-enabled sensors worldwide are providing an incredibly rich set of data that companies can use to monitor the safety of their operations, track assets and reduce manual processes. Researchers can also use the IoT to gather data about people's preferences and behavior, though that can have serious implications for privacy and security.

### How big is it?

In a word: enormous. [Priceonomics breaks it down][2]: There are more than 50 billion IoT devices as of 2020, and those devices will generate 4.4 zettabytes of data this year. (A zettabyte is a trillion gigabytes.) By comparison, in 2013 IoT devices generated a mere 100 billion gigabytes. The amount of money to be made in the IoT market is similarly staggering; estimates of the market's value in 2025 range from $1.6 trillion to $14.4 trillion.

### History of IoT

A world of omnipresent connected devices and sensors is one of the oldest tropes of science fiction. IoT lore has dubbed a [vending machine at Carnegie Mellon][3] that was connected to ARPANET in 1970 as the first internet of things device, and many technologies have been touted as enabling "smart" IoT-style characteristics to give them a futuristic sheen. But the term "internet of things" was coined in 1999 by British technologist [Kevin Ashton][4].

At first, the technology lagged behind the vision. Every internet-connected thing needed a processor and a means to communicate with other things, preferably wirelessly, and those factors imposed costs and power requirements that made widespread IoT rollouts impractical, at least until Moore's Law caught up in the mid-'00s.

One important milestone was [widespread adoption of RFID tags][5], cheap minimalist transponders that can be stuck on any object to connect it to the larger internet world. Omnipresent Wi-Fi and 4G made it possible for designers to simply assume wireless connectivity anywhere. And the rollout of IPv6 means that connecting billions of gadgets to the internet won't exhaust the store of IP addresses, which was a real concern. (Related story: [Can IoT networking drive adoption of IPv6?][6])

### How does the IoT work?

The basic elements of the IoT are devices that gather data. Broadly speaking, they are internet-connected devices, so they each have an IP address. They range in complexity from autonomous vehicles that haul products around factory floors to simple sensors that monitor the temperature in buildings. They also include personal devices like fitness trackers that monitor the number of steps individuals take each day. To make that data useful it needs to be collected, processed, filtered and analyzed, each of which can be handled in a variety of ways.

Collecting the data is done by transmitting it from the devices to a gathering point. Moving the data can be done wirelessly using a range of technologies or over wired networks. The data can be sent over the internet to a data center or a cloud that has storage and compute power, or the transfer can be staged, with intermediary devices aggregating the data before sending it along.

Processing the data can take place in data centers or the cloud, but sometimes that's not an option. In the case of critical devices such as shutoffs in industrial settings, the delay of sending data from the device to a remote data center is too great. The round-trip time for sending data, processing it, analyzing it and returning instructions (close that valve before the pipes burst) can take too long. In such cases edge computing can come into play, where a smart edge device can aggregate data, analyze it and fashion responses if necessary, all within relatively close physical distance, thereby reducing delay. Edge devices also have upstream connectivity for sending data to be further processed and stored.
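
To make that edge pattern concrete, here's a small sketch in Go (mine, not the article's; the sensor name and the 300 psi threshold are invented for illustration). The edge device acts locally the moment a reading crosses the threshold and only sends an aggregate summary upstream:

```
package main

import (
	"fmt"
	"time"
)

// reading is a hypothetical sensor sample for this sketch.
type reading struct {
	sensorID string
	psi      float64
}

const maxPSI = 300 // assumed safety threshold, invented for illustration

func main() {
	samples := make(chan reading)

	// Simulated sensor feed; a real edge device would poll hardware here.
	go func() {
		for i := 0; i < 5; i++ {
			samples <- reading{"pipe-7", 250 + float64(i)*20}
			time.Sleep(10 * time.Millisecond)
		}
		close(samples)
	}()

	var sum float64
	var n int
	for r := range samples {
		if r.psi > maxPSI {
			// Act locally and immediately; no round trip to a data center.
			fmt.Printf("closing valve near %s (%.0f psi)\n", r.sensorID, r.psi)
			continue
		}
		sum += r.psi
		n++
	}
	// Only a compact summary travels upstream for storage and analysis.
	fmt.Printf("upstream summary: %d normal samples, avg %.1f psi\n", n, sum/float64(n))
}
```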

[][7] Network World / IDG

How the internet of things works.

### **Examples of IoT devices**

Essentially, anything that's capable of gathering some information about the physical world and sending it back home can participate in the IoT ecosystem. Smart home appliances, RFID tags, and industrial sensors are a few examples. These sensors can monitor a range of factors including temperature and pressure in industrial systems, the status of critical parts in machinery, patient vital signs, and use of water and electricity, among many, many other possibilities.

Entire factory robots can be considered IoT devices, as can autonomous vehicles that move products around industrial settings and warehouses.

Other examples include fitness wearables and home security systems. There are also more generic devices, like the [Raspberry Pi][8] or [Arduino][9], that let you build your own IoT endpoints. Even though you might think of your smartphone as a pocket-sized computer, it may well also be beaming data about your location and behavior to back-end services in very IoT-like ways.

#### **Device management**

In order to work together, all those devices need to be authenticated, provisioned, configured, and monitored, as well as patched and updated as necessary. Too often, all this happens within the context of a single vendor's proprietary systems – or it doesn't happen at all, which is even more risky. But the industry is starting to transition to a [standards-based device management model][10], which allows IoT devices to interoperate and will ensure that devices aren't orphaned.

#### **IoT communication standards and protocols**

When IoT gadgets talk to other devices, they can use a wide variety of communications standards and protocols, many tailored to devices with limited processing capabilities or little electrical power. Some of these you've definitely heard of — some devices use Wi-Fi or Bluetooth, for instance — but many more are specialized for the world of IoT. ZigBee, for instance, is a wireless protocol for low-power, short-distance communication, while message queuing telemetry transport (MQTT) is a publish/subscribe messaging protocol for devices connected by unreliable or delay-prone networks. (See Network World's glossary of [IoT standards and protocols][11].)
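
To give a flavor of MQTT's publish/subscribe model, here's a minimal Go sketch using the Eclipse Paho client library; the broker address, client ID, topic, and payload are all invented for illustration, and error handling is pared down:

```
package main

import (
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Connect to a (hypothetical) broker.
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://broker.example.com:1883").
		SetClientID("demo-sensor")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}

	// Subscribe: the broker pushes matching messages to this callback.
	client.Subscribe("sensors/temperature", 0, func(_ mqtt.Client, m mqtt.Message) {
		fmt.Printf("got %s on %s\n", m.Payload(), m.Topic())
	})

	// Publish a reading; QoS 0 is fire-and-forget, a fit for constrained devices.
	client.Publish("sensors/temperature", 0, false, "21.5").Wait()

	time.Sleep(time.Second) // give the callback a moment before exiting
	client.Disconnect(250)
}
```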

The increased speeds and bandwidth of the coming 5G standard for cellular networks will also benefit IoT, though that usage will [lag behind ordinary cell phones][12].

### IoT, edge computing and the cloud

[][13] Network World / IDG

How edge computing enables IoT.

For many IoT systems, there's a lot of data coming in fast and furious, which has given rise to a new technology category, [edge computing][14], consisting of appliances placed relatively close to IoT devices, fielding the flow of data from them. These machines process that data and send only relevant material back to a more centralized system for analysis. For instance, imagine a network of dozens of IoT security cameras. Instead of bombarding the building's security operations center (SoC) with simultaneous live streams, edge-computing systems can analyze the incoming video and only alert the SoC when one of the cameras detects movement.

And where does that data go once it's been processed? Well, it might go to your centralized data center, but more often than not it will end up in the cloud.

The elastic nature of cloud computing is great for IoT scenarios where data might come in intermittently or asynchronously. And many of the big cloud heavy hitters — including [Google][15], [Microsoft][16], and [Amazon][17] — have IoT offerings.

### IoT platforms

The cloud giants are trying to sell more than just a place to stash the data your sensors have collected. They're offering full IoT platforms, which bundle together much of the functionality to coordinate the elements that make up IoT systems. In essence, an IoT platform serves as middleware that connects the IoT devices and edge gateways with the applications you use to deal with the IoT data. That said, every platform vendor seems to have a slightly different definition of what an IoT platform is, the better to [distance themselves from the competition][18].

### IoT and data

As mentioned, there are zettabytes of data being collected by all those IoT devices, funneled through edge gateways, and sent to a platform for processing. In many scenarios, this data is the reason IoT has been deployed in the first place. By collecting information from sensors in the real world, organizations can make nimble decisions in real time.

Oracle, for instance, [imagines a scenario][19] where people at a theme park are encouraged to download an app that offers information about the park. At the same time, the app sends GPS pings back to the park's management to help predict wait times in lines. With that information, the park can take action in the short term (by adding more staff to increase the capacity of some attractions, for instance) and the long term (by learning which rides are the most and least popular at the park).

These decisions can be made without human intervention. For example, data gathered from pressure sensors in a chemical-factory pipeline could be analyzed by software in an edge device that spots the threat of a pipeline rupture, and that information can trigger a signal to shut valves to avert a spill.

### IoT and big data analytics

The theme park example is easy to get your head around, but it is small potatoes compared to many real-world IoT data-harvesting operations. Many big data operations use information harvested from IoT devices, correlated with other data points, to get insight into human behavior. _Software Advice_ gives [a few examples][20], including a service from Birst that matches coffee-brewing information collected from internet-connected coffeemakers with social media posts to see if customers are talking about coffee brands online.

Another dramatic example came recently when X-Mode released a map based on tracking location data of people who partied at spring break in Ft. Lauderdale in March of 2020, even as the coronavirus pandemic was gaining speed in the United States, showing [where all those people ended up across the country][21]. The map was shocking not only because it showed the potential spread of the virus, but also because it illustrated just how closely IoT devices can track us. (For more on IoT and analytics, click [here][22].)

### IoT data and AI

The volume of data IoT devices can gather is far larger than any human can deal with in a useful way, and certainly not in real time. We've already seen that edge computing devices are needed just to make sense of the raw data coming in from the IoT endpoints. There's also the need to detect and deal with data that might [be just plain wrong][23].

Many IoT providers are offering machine learning and artificial intelligence capabilities to make sense of the collected data. IBM's Jeopardy!-winning Watson platform, for instance, can be [trained on IoT data sets][24] to produce useful results in the field of predictive maintenance — analyzing data from drones to distinguish between trivial damage to a bridge and cracks that need attention, for instance. Meanwhile, Arm is working on [low-power chips][25] that can provide AI capabilities on the IoT endpoints themselves.

### IoT and business

Business uses for IoT include keeping track of customers, inventory, and the status of important components. [IoT for All][26] flags four industries that have been transformed by IoT:

  * **Oil and gas**: Isolated drilling sites can be better monitored with IoT sensors than by human intervention
  * **Agriculture**: Granular data about crops growing in fields derived from IoT sensors can be used to increase yields
  * **HVAC**: Climate control systems across the country can be monitored by manufacturers
  * **Brick-and-mortar retail**: Customers can be microtargeted with offers on their phones as they linger in certain parts of a store

More generally, enterprises are looking for IoT solutions that can help in [four areas][27]: energy use, asset tracking, security, and the customer experience.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html

作者:[Josh Fruhlinger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Josh-Fruhlinger/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/category/internet-of-things/
[2]: https://priceonomics.com/the-iot-data-explosion-how-big-is-the-iot-data/
[3]: https://www.machinedesign.com/automation-iiot/article/21836968/iot-started-with-a-vending-machine
[4]: https://www.visioncritical.com/blog/kevin-ashton-internet-of-things
[5]: https://www.networkworld.com/article/2319384/rfid-readers-route-tag-traffic.html
[6]: https://www.networkworld.com/article/3338106/can-iot-networking-drive-adoption-of-ipv6.html
[7]: https://images.idgesg.net/images/article/2020/05/nw_how_iot_works_diagram-100840757-orig.jpg
[8]: https://www.networkworld.com/article/3176091/10-killer-raspberry-pi-projects-collection-1.html
[9]: https://www.networkworld.com/article/3075360/arduino-targets-the-internet-of-things-with-primo-board.html
[10]: https://www.networkworld.com/article/3258812/the-future-of-iot-device-management.html
[11]: https://www.networkworld.com/article/3235124/internet-of-things-definitions-a-handy-guide-to-essential-iot-terms.html
[12]: https://www.networkworld.com/article/3291778/what-s-so-special-about-5g-and-iot.html
[13]: https://images.idgesg.net/images/article/2017/09/nw_how_edge_computing_works_diagram_1400x1717-100736111-orig.jpg
[14]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[15]: https://cloud.google.com/solutions/iot
[16]: https://azure.microsoft.com/en-us/overview/iot/
[17]: https://aws.amazon.com/iot/
[18]: https://www.networkworld.com/article/3336166/why-are-iot-platforms-so-darn-confusing.html
[19]: https://blogs.oracle.com/bigdata/how-big-data-powers-the-internet-of-things
[20]: https://www.softwareadvice.com/resources/iot-data-analytics-use-cases/
[21]: https://www.cnn.com/2020/04/04/tech/location-tracking-florida-coronavirus/index.html
[22]: https://www.networkworld.com/article/3311919/iot-analytics-guide-what-to-expect-from-internet-of-things-data.html
[23]: https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk-of-having-bad-iot-data.html
[24]: https://www.networkworld.com/article/3449243/watson-iot-chief-ai-can-broaden-iot-services.html
[25]: https://www.networkworld.com/article/3532094/ai-everywhere-iot-chips-coming-from-arm.html
[26]: https://www.iotforall.com/4-unlikely-industries-iot-changing/
[27]: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html
@ -0,0 +1,133 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Research log: gene signatures and connectivity map)
[#]: via: (https://www.jtolio.com/2015/11/research-log-gene-signatures-and-connectivity-map)
[#]: author: (jtolio.com https://www.jtolio.com/)

Research log: gene signatures and connectivity map
======

Happy Thanksgiving everyone!

### Context

This is the third post in my continuing series on my attempts at research. Previously we talked about:

  * [what I'm doing, cell states, and microarrays][1]
  * and then [more about microarrays and R][2].

By the end of last week we had discussed how to get a table of normalized gene expression intensities that looks like this:

```
ENSG00000280099_at     0.15484421
ENSG00000280109_at     0.16881395
ENSG00000280178_at    -0.19621641
ENSG00000280316_at     0.08622216
ENSG00000280401_at     0.15966256
ENSG00000281205_at    -0.02085352
...
```

The reason for doing this is to figure out which genes are related, and perhaps more importantly, what a cell is even doing.

_Summary:_ new post; also, I'm bringing back the short section summaries.

### Cell lines

The first thing to do when trying to figure out what cells are doing is to choose a cell. There are all sorts of cells: healthy brain cells, cancerous blood cells, bruised skin cells, etc.

For any experiment, you'll need a control to eliminate noise and apply statistical tests for validity. If you don't use a control, the effect you're seeing may not even exist, and so for any experiment with cells, you will need a control cell.

Cells often divide, which means that a cell, once chosen, will duplicate itself for you in the presence of the appropriate resources. Not all cells divide ad nauseam, which provides some challenges, but many cells under study luckily do.

So, a _cell line_ is simply a set of cells that have all replicated from a specific chosen initial cell. Any set of cells from a cell line will be as identical as possible (unless you screwed up! geez). They will be the same type of cell with the same traits and behaviors, at least as much as possible.

_Summary:_ a cell line is a large amount of cells that are as close to being the same as possible.

### Perturbagens

There are many things that might affect what a cell is doing. Drugs, agitation, temperature, disease, cancer, gene splicing, small molecules (maybe you give a cell more iron or calcium or something), hormones, light, Jello, ennui, etc. Given any particular cell line, giving a cell from that cell line one of these _perturbagens_, or perturbing the cell in a specific way, when compared to a control will say what that cell does differently in the face of that perturbagen.

If you'd like to find out what exactly a certain type of cell does when you give it lemon-lime soda, then you choose the right cell line, leave out some control cells, and give the rest of the cells soda.

Then, you measure gene expression intensities for both the control cells and the perturbed cells. The _differential expression_ of genes between the perturbed cells and the control cells is likely due to the introduction of the lemon-lime soda.

Genes that end up getting expressed _more_ in the presence of the soda are considered _up-regulated_, whereas genes that end up getting expressed _less_ are considered _down-regulated_. The degree to which a gene is up- or down-regulated constitutes how much of an effect the soda may have had on that gene.
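
As a toy illustration of that arithmetic (my sketch, not the post's; the gene names and numbers are made up), differential expression here is just each gene's perturbed intensity minus its control intensity, with the sign telling you up- versus down-regulation:

```
package main

import "fmt"

func main() {
	// Hypothetical normalized expression intensities per gene.
	control := map[string]float64{"SHH": 0.10, "GENE_A": 0.30, "GENE_B": -0.05}
	perturbed := map[string]float64{"SHH": 0.90, "GENE_A": 0.28, "GENE_B": -0.60}

	for gene, c := range control {
		diff := perturbed[gene] - c // differential expression
		switch {
		case diff > 0:
			fmt.Printf("%s: up-regulated by %.2f\n", gene, diff)
		case diff < 0:
			fmt.Printf("%s: down-regulated by %.2f\n", gene, -diff)
		default:
			fmt.Printf("%s: unchanged\n", gene)
		}
	}
}
```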

Of course, all of this has such a significant amount of experimental noise that you could find pretty much anything. You'll need to replicate your experiment independently a few times before you publish that lemon-lime soda causes increased expression of the [Sonic hedgehog gene][3].

_Summary:_ a perturbagen is something you introduce/do to a cell to change its behavior, such as drugs or throwing it at a wall or something. The wall perturbagen.

### Gene signature

For a given change or perturbagen to a cell, we now have enough to compute lists of up-regulated and down-regulated genes and the magnitude of the change in expression for each gene.

This gene expression pattern for some subset of important genes (perhaps those most changed in expression) is called a _gene signature_, and gene signatures are very useful. By comparing signatures, you can:

  * identify or compare cell states
  * find sets of positively or negatively correlated genes
  * find similar disease signatures
  * find similar drug signatures
  * find drug signatures that might counteract opposite disease signatures.

(That last bullet point is essentially where I'm headed with my research.)

_Summary:_ a gene signature is a short summary of the most important gene expression differences a perturbagen causes in a cell.

### Drugs!

The pharmaceutical industry is constantly on the lookout for new breakthrough drugs that might represent huge windfalls in cash, and drugs don't always work as planned. Many drugs spend years in research and development, only to ultimately show poor efficacy or adoption. Sometimes drugs even become known [much more for their side effects than their originally intended therapy][4].

The practical upshot is that there are countless FDA-approved drugs, representing decades of work, that are simply underused or even unused entirely. These drugs have already cleared many challenging regulatory hurdles, but are, quite literally, cures looking for a disease.

If even just one of these drugs can be given a new lease on life for some yet-to-be-cured disease, then perhaps we can give some people new leases on life!

_Summary:_ instead of developing new drugs, there's already lots of drugs that aren't being used. Maybe we can find matching diseases!

### The Connectivity Map project

The [Broad Institute's Connectivity Map project][5] isn't particularly new anymore, but it represents a groundbreaking and promising idea - we can dump a bunch of signatures into a database and construct all sorts of new hypotheses we might not even have thought to check before.

To prove out the usefulness of this idea, the Connectivity Map (or cmap) project chose 5 different cell lines (all cancer cells, which are easy to get to replicate!) and a library of FDA-approved drugs, and then gave some cells these drugs.

They then constructed a database of all of the signatures they computed for each possible perturbagen they measured. Finally, they constructed a web interface where a user can upload a gene signature and get back a list of all of the signatures they collected, ordered from most to least similar. You can totally go sign up and [try it out][5].
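
A back-of-the-envelope version of that ranking step might look like the following Go sketch (mine, not cmap's; their actual scoring is far more sophisticated, and all the names and numbers here are invented):

```
package main

import (
	"fmt"
	"sort"
)

// score is a crude similarity: the dot product of two signatures over their
// shared genes. Correlated signatures score high, anti-correlated ones negative.
func score(query, stored map[string]float64) float64 {
	var s float64
	for gene, q := range query {
		s += q * stored[gene] // genes missing from stored contribute zero
	}
	return s
}

func main() {
	query := map[string]float64{"SHH": 0.8, "GENE_A": -0.3}
	db := map[string]map[string]float64{
		"drug-1": {"SHH": 0.7, "GENE_A": -0.2},
		"drug-2": {"SHH": -0.9, "GENE_A": 0.4},
	}

	names := make([]string, 0, len(db))
	for name := range db {
		names = append(names, name)
	}
	// Order results from most to least similar to the query.
	sort.Slice(names, func(i, j int) bool {
		return score(query, db[names[i]]) > score(query, db[names[j]])
	})
	for _, name := range names {
		fmt.Printf("%s: %.2f\n", name, score(query, db[name]))
	}
}
```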

This simple tool is surprisingly powerful. It allows you to find drugs similar to a drug you know, but it also allows you to find drugs that might counteract a disease you've created a signature for.

Ultimately, the project led to [a number of successful applications][6]. So useful was it that the Broad Institute has doubled down and created the much larger and more comprehensive [LINCS Project][7], which targets an order of magnitude more cell lines (77) and more perturbagens (42,532, compared to cmap's 6,100). You can sign up and use that one too!

_Summary_: building a system that supports querying signature connections has already proved to be super useful.

### Whew

Alright, I wrote most of this on a plane yesterday, but since I should now be spending time with family I'm going to cut it short here.

Stay tuned for next week!

--------------------------------------------------------------------------------

via: https://www.jtolio.com/2015/11/research-log-gene-signatures-and-connectivity-map

作者:[jtolio.com][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.jtolio.com/
[b]: https://github.com/lujun9972
[1]: https://www.jtolio.com/writing/2015/11/research-log-cell-states-and-microarrays/
[2]: https://www.jtolio.com/writing/2015/11/research-log-r-and-more-microarrays/
[3]: https://en.wikipedia.org/wiki/Sonic_hedgehog
[4]: https://en.wikipedia.org/wiki/Sildenafil#History
[5]: https://www.broadinstitute.org/cmap/
[6]: https://www.broadinstitute.org/cmap/publications.jsp
[7]: http://www.lincscloud.org/
@ -0,0 +1,443 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Go channels are bad and you should feel bad)
[#]: via: (https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-should-feel-bad)
[#]: author: (jtolio.com https://www.jtolio.com/)

Go channels are bad and you should feel bad
======

_Update: If you're coming to this blog post from a compendium titled "Go is not good," I want to make it clear that I am ashamed to be on such a list. Go is absolutely the least worst programming language I've ever used. At the time I wrote this, I wanted to curb a trend I was seeing, namely, overuse of one of the more warty parts of Go. I still think channels could be much better, but overall, Go is wonderful. It's like if your favorite toolbox had [this][1] in it; the tool can have uses (even if it could have had more uses), and it can still be your favorite toolbox!_

_Update 2: I would be remiss if I didn't point out this excellent survey of real issues: [Understanding Real-World Concurrency Bugs In Go][2]. A significant finding of this survey is that… Go channels cause lots of bugs._

I've been using Google's [Go programming language][3] on and off since mid-to-late 2010, and I've had legitimate product code written in Go for [Space Monkey][4] since January 2012 (before Go 1.0!). My initial experience with Go was back when I was researching Hoare's [Communicating Sequential Processes][5] model of concurrency and the [π-calculus][6] under [Matt Might][7]'s [UCombinator research group][8] as part of my ([now redirected][9]) PhD work to better enable multicore development. Go was announced right then (how serendipitous!) and I immediately started kicking tires.

It quickly became a core part of Space Monkey development. Our production systems at Space Monkey currently account for over 425k lines of pure Go (_not_ counting all of our vendored libraries, which would make it just shy of 1.5 million lines), so not the most Go you'll ever see, but for a relatively young language we're heavy users. We've [written about our Go usage][10] before. We've open-sourced some fairly heavily used libraries; many people seem to be fans of our [OpenSSL bindings][11] (which are faster than [crypto/tls][12], but please keep openssl itself up-to-date!), our [error handling library][13], [logging library][14], and [metric collection library/zipkin client][15]. We use Go, we love Go, we think it's the least bad programming language for our needs we've used so far.

Although I don't think I can talk myself out of mentioning my widely avoided [goroutine-local-storage library][16] here either (which, even though it's a hack you shouldn't use, is a beautiful hack), hopefully my other experience will suffice as valid credentials that I kind of know what I'm talking about before I explain my deliberately inflammatory post title.

![][17]

### Wait, what?

If you ask the proverbial programmer on the street what's so special about Go, she'll most likely tell you that Go is most known for channels and goroutines. Go's theoretical underpinnings are heavily based in Hoare's CSP model, which is itself incredibly fascinating and interesting, and I firmly believe has much more to yield than we've appropriated so far.

CSP (and the π-calculus) both use communication as the core synchronization primitive, so it makes sense Go would have channels. Rob Pike has been fascinated with CSP (with good reason) for a [considerable][18] [while][19] [now][20].

But from a pragmatic perspective (which Go prides itself on), Go got channels wrong. Channels as implemented are pretty much a solid anti-pattern in my book at this point. Why? Dear reader, let me count the ways.

#### You probably won't end up using just channels.

Hoare's Communicating Sequential Processes is a computational model where essentially the only synchronization primitive is sending or receiving on a channel. As soon as you use a mutex, semaphore, or condition variable, bam, you're no longer in pure CSP land. Go programmers often tout this model and philosophy through the chanting of the [cached thought][21] "[share memory by communicating][22]."

So let's try and write a small program using just CSP in Go! Let's make a high score receiver. All we will do is keep track of the largest high score value we've seen. That's it.

First, we'll make a `Game` struct.

```
type Game struct {
	bestScore int
	scores    chan int
}
```

`bestScore` isn't going to be protected by a mutex! That's fine, because we'll simply have one goroutine manage its state and receive new scores over a channel.

```
func (g *Game) run() {
	for score := range g.scores {
		if g.bestScore < score {
			g.bestScore = score
		}
	}
}
```

Okay, now we'll make a helpful constructor to start a game.

```
func NewGame() (g *Game) {
	g = &Game{
		bestScore: 0,
		scores:    make(chan int),
	}
	go g.run()
	return g
}
```

Next, let's assume someone has given us a `Player` that can return scores. It might also return an error, cause hey, maybe the incoming TCP stream can die or something, or the player quits.

```
type Player interface {
	NextScore() (score int, err error)
}
```

To handle the player, we'll assume all errors are fatal and pass received scores down the channel.

```
func (g *Game) HandlePlayer(p Player) error {
	for {
		score, err := p.NextScore()
		if err != nil {
			return err
		}
		g.scores <- score
	}
}
```

Yay! Okay, we have a `Game` type that can keep track of the highest score a `Player` receives in a thread-safe way.

You wrap up your development and you're on your way to having customers. You make this game server public and you're incredibly successful! Lots of games are being created with your game server.

Soon, you discover people sometimes leave your game. Lots of games no longer have any players playing, but nothing stopped the game loop. You are getting overwhelmed by dead `(*Game).run` goroutines.

**Challenge:** fix the goroutine leak above without mutexes or panics. For real, scroll up to the above code and come up with a plan for fixing this problem using just channels.

I'll wait.

For what it's worth, it totally can be done with channels only, but observe the simplicity of the following solution which doesn't even have this problem:

```
type Game struct {
	mtx       sync.Mutex
	bestScore int
}

func NewGame() *Game {
	return &Game{}
}

func (g *Game) HandlePlayer(p Player) error {
	for {
		score, err := p.NextScore()
		if err != nil {
			return err
		}
		g.mtx.Lock()
		if g.bestScore < score {
			g.bestScore = score
		}
		g.mtx.Unlock()
	}
}
```

Which one would you rather work on? Don't be deceived into thinking that the channel solution somehow makes this more readable and understandable in more complex cases. Teardown is very hard. This sort of teardown is a piece of cake with a mutex, but one of the hardest things to work out with Go-specific channels only. Also, if anyone replies that channels sending channels are easier to reason about here, it will cause me an immediate head-to-desk motion.

Importantly, this particular case might actually be _easily_ solved _with channels_ with some runtime assistance Go doesn't provide! Unfortunately, as it stands, there are simply a surprising number of problems that are solved better with traditional synchronization primitives than with Go's version of CSP. We'll talk about what Go could have done to make this case easier later.

**Exercise:** Still skeptical? Try making both solutions above (channel-only vs. mutex-only) stop asking for scores from `Players` once `bestScore` is 100 or greater. Go ahead and open your text editor. This is a small, toy problem.
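
(Here's roughly what the mutex side of that exercise looks like, as a sketch of mine; the `ErrDone` sentinel is invented, and it assumes the mutex-based `Game` above plus an `errors` import. The channel side is left to you, which is rather the point:)

```
var ErrDone = errors.New("best score reached")

func (g *Game) HandlePlayer(p Player) error {
	for {
		score, err := p.NextScore()
		if err != nil {
			return err
		}
		g.mtx.Lock()
		if g.bestScore < score {
			g.bestScore = score
		}
		done := g.bestScore >= 100
		g.mtx.Unlock()
		if done {
			return ErrDone
		}
	}
}
```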

The summary here is that you will be using traditional synchronization primitives in addition to channels if you want to do anything real.

#### Channels are slower than implementing it yourself

One of the things I assumed about Go being so heavily based in CSP theory was that there should be some pretty killer scheduler optimizations the runtime can make with channels. Perhaps channels aren't always the most straightforward primitive, but surely they're efficient and fast, right?

![][23]

As [Dustin Hiatt][24] points out on [Tyler Treat's post about Go][25],

> Behind the scenes, channels are using locks to serialize access and provide threadsafety. So by using channels to synchronize access to memory, you are, in fact, using locks; locks wrapped in a threadsafe queue. So how do Go's fancy locks compare to just using mutexes from their standard library `sync` package? The following numbers were obtained by using Go's builtin benchmarking functionality to serially call Put on a single set of their respective types.

```
> BenchmarkSimpleSet-8          3000000    391 ns/op
> BenchmarkSimpleChannelSet-8   1000000   1699 ns/op
>
```

It's a similar story with unbuffered channels, or even the same test under contention instead of run serially.

Perhaps the Go scheduler will improve, but in the meantime, good old mutexes and condition variables are very good, efficient, and fast. If you want performance, you use the tried and true methods.

#### Channels don't compose well with other concurrency primitives

Alright, so hopefully I have convinced you that you'll at least be interacting with primitives besides channels sometimes. The standard library certainly seems to prefer traditional synchronization primitives over channels.

Well, guess what: it's actually somewhat challenging to use channels alongside mutexes and condition variables correctly!

One of the interesting things about channels that makes a lot of sense coming from CSP is that channel sends are synchronous. A channel send and channel receive are intended to be synchronization barriers, and the send and receive should happen at the same virtual time. That's wonderful if you're in well-executed CSP-land.

![][26]

Pragmatically, Go channels also come in a buffered variety. You can allocate a fixed amount of space to account for possible buffering so that sends and receives are disparate events, but the buffer size is capped. Go doesn't provide a way to have arbitrarily sized buffers - you have to allocate the buffer size in advance. _This is fine_, I've seen people argue on the mailing list, _because memory is bounded anyway._

Wat.

This is a bad answer. There are all sorts of reasons to use an arbitrarily buffered channel. If we knew everything up front, why even have `malloc`?

Not having arbitrarily buffered channels means that a naive send on _any_ channel could block at any time. You want to send on a channel and update some other bookkeeping under a mutex? Careful! Your channel send might block!

```
// ...
s.mtx.Lock()
// ...
s.ch <- val // might block!
s.mtx.Unlock()
// ...
```

This is a recipe for dining philosopher dinner fights. If you take a lock, you should quickly update state and release it, and not do anything blocking under the lock if possible.

There is a way to do a non-blocking send on a channel in Go, but it's not the default behavior. Assume we have a channel `ch := make(chan int)` and we want to send the value `1` on it without blocking. Here is the minimum amount of typing you have to do to send without blocking:

```
select {
case ch <- 1: // it sent
default: // it didn't
}
```

This isn't what naturally leaps to mind for beginning Go programmers.

The summary is that because many operations on channels block, it takes careful reasoning about philosophers and their dining to successfully use channel operations alongside and under mutex protection without causing deadlocks.

#### Callbacks are strictly more powerful and don't require unnecessary goroutines.

![][27]

Whenever an API uses a channel, or whenever I point out that a channel makes something hard, someone invariably points out that I should just spin up a goroutine to read off the channel and make whatever translation or fix I need as it reads off the channel.

Um, no. What if my code is in a hotpath? There are very few instances that require a channel, and if your API could have been designed with mutexes, semaphores, and callbacks and no additional goroutines (because all event edges are triggered by API events), then using a channel forces me to add another stack of memory allocation to my resource usage. Goroutines are much lighter weight than threads, yes, but lighter weight doesn't mean the lightest weight possible.

As I've formerly [argued in the comments on an article about using channels][28] (lol, the internet), your API can _always_ be more general, _always_ more flexible, and take drastically less resources if you use callbacks instead of channels. "Always" is a scary word, but I mean it here. There's proof-level stuff going on.

If someone provides a callback-based API to you and you need a channel, you can provide a callback that sends on a channel with little overhead and full flexibility.

If, on the other hand, someone provides a channel-based API to you and you need a callback, you have to spin up a goroutine to read off the channel _and_ you have to hope that no one tries to send more on the channel when you're done reading, lest you cause blocked goroutine leaks.

For a super simple real-world example, check out the [context interface][29] (which incidentally is an incredibly useful package and what you should be using instead of [goroutine-local storage][16]):

```
type Context interface {
	...
	// Done returns a channel that closes when this work unit should be canceled.
	Done() <-chan struct{}

	// Err returns a non-nil error when the Done channel is closed
	Err() error
	...
}
```

Imagine all you want to do is log the corresponding error when the `Done()` channel fires. What do you have to do? If you don't have a good place where you're already selecting on a channel, you have to spin up a goroutine to deal with it:

```
go func() {
	<-ctx.Done()
	logger.Errorf("canceled: %v", ctx.Err())
}()
```

What if `ctx` gets garbage collected without closing the channel `Done()` returned? Whoops! Just leaked a goroutine!

Now imagine we changed `Done`'s signature:

```
// Done calls cb when this work unit should be canceled.
Done(cb func())
```

First off, logging is so easy now. Check it out: `ctx.Done(func() { log.Errorf("canceled: %v", ctx.Err()) })`. But let's say you really do need some select behavior. You can just call it like this:

```
ch := make(chan struct{})
ctx.Done(func() { close(ch) })
```

Voila! No expressiveness lost by using a callback instead. `ch` works like the channel `Done()` used to return, and in the logging case we didn't need to spin up a whole new stack. I got to keep my stack traces (if our log package is inclined to use them); I got to avoid another stack allocation and another goroutine to give to the scheduler.

Next time you use a channel, ask yourself if there are some goroutines you could eliminate if you used mutexes and condition variables instead. If the answer is yes, your code will be more efficient if you change it. And if you're trying to use channels just to be able to use the `range` keyword over a collection, I'm going to have to ask you to put your keyboard away or just go back to writing Python books.

![more like Zooey De-channel, amirite][30]

#### The channel API is inconsistent and just cray-cray

Closing or sending on a closed channel panics! Why? If you want to close a channel, you need to either synchronize its closed state externally (with mutexes and so forth that don't compose well!) so that other writers don't write to or close a closed channel, or just charge forward and close or write to closed channels and expect you'll have to recover any raised panics.

This is such bizarre behavior. Almost every other operation in Go has a way to avoid a panic (type assertions have the `, ok =` pattern, for example), but with channels you just get to deal with it.

Okay, so when a send will fail, channels panic. I guess that makes some kind of sense. But unlike almost everything else with nil values, sending to a nil channel won't panic. Instead, it will block forever! That's pretty counter-intuitive. That might be useful behavior, just like having a can-opener attached to your weed-whacker might be useful (and found in Skymall), but it's certainly unexpected. Unlike interacting with nil maps (which do implicit pointer dereferences), nil interfaces (implicit pointer dereferences), unchecked type assertions, and all sorts of other things, nil channels exhibit actual channel behavior, as if a brand new channel was just instantiated for this operation.

Receives are slightly nicer. What happens when you receive on a closed channel? Well, that works - you get a zero value. Okay, that makes sense I guess. Bonus! Receives allow you to do a `, ok =`-style check if the channel was open when you received your value. Thank heavens we get `, ok =` here.
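
(That receive-side pattern, for reference, is the following; this fragment is mine, not the post's:)

```
v, ok := <-ch // ok is false once the channel is closed and drained
```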

But what happens if you receive from a nil channel? _Also blocks forever!_ Yay! Don't try to use the fact that your channel is nil to keep track of whether you closed it!

### What are channels good for?

Of course channels are good for some things (they are a generic container after all), and there are certain things you can only do with them (`select`).

#### They are another special-cased generic datastructure

Go programmers are so used to arguments about generics that I can feel the PTSD coming on just by bringing up the word. I'm not here to talk about it, so wipe the sweat off your brow and let's keep moving.

Whatever your opinion of generics is, Go's maps, slices, and channels are data structures that support generic element types, because they've been special-cased into the language.

In a language that doesn't allow you to write your own generic containers, _anything_ that allows you to better manage collections of things is valuable. Here, channels are a thread-safe datastructure that supports arbitrary value types.

So that's useful! That can save some boilerplate, I suppose.

I'm having trouble counting this as a win for channels.

#### Select

The main thing you can do with channels is the `select` statement. Here you can wait on a fixed number of inputs for events. It's kind of like epoll, but you have to know upfront how many sockets you're going to be waiting on.

This is a truly useful language feature. Channels would be a complete wash if not for `select`. But holy smokes, let me tell you about the first time you decide you might need to select on multiple things but you don't know how many, and you have to use `reflect.Select`.
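
(To show why that moment stings, here's a small sketch of mine using the standard library's `reflect.Select` to wait on a set of channels whose size is only known at runtime; it assumes `fmt` and `reflect` are imported:)

```
chans := []chan int{make(chan int), make(chan int)} // length known only at runtime
go func() { chans[1] <- 42 }()

cases := make([]reflect.SelectCase, len(chans))
for i, ch := range chans {
	cases[i] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(ch)}
}
chosen, value, ok := reflect.Select(cases)
fmt.Println(chosen, value.Int(), ok) // which case fired, what arrived, still open?
```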
|
||||
|
||||
### How could channels be better?
|
||||
|
||||
It’s really tough to say what the most tactical thing the Go language team could do for Go 2.0 is (the Go 1.0 compatibility guarantee is good but hand-tying), but that won’t stop me from making some suggestions.
|
||||
|
||||
#### Select on condition variables!
|
||||
|
||||
We could just obviate the need for channels! This is where I propose we get rid of some sacred cows, but let me ask you this, how great would it be if you could select on any custom synchronization primitive? (A: So great.) If we had that, we wouldn’t need channels at all.
|
||||
|
||||
#### GC could help us?
|
||||
|
||||
In the very first example, we could easily solve the high score server cleanup with channels if we were able to use directionally-typed channel garbage collection to help us clean up.
|
||||
|
||||
![][31]
|
||||
|
||||
As you know, Go has directionally-typed channels. You can have a channel type that only supports reading (`<-chan`) and a channel type that only supports writing (`chan<-`). Great!
|
||||
|
||||
Go also has garbage collection. It’s clear that certain kinds of book keeping are just too onerous and we shouldn’t make the programmer deal with them. We clean up unused memory! Garbage collection is useful and neat.
|
||||
|
||||
So why not help clean up unused or deadlocked channel reads? Instead of having `make(chan Whatever)` return one bidirectional channel, have it return two single-direction channels (`chanReader, chanWriter := make(chan Type)`).
|
||||
|
||||
Let’s reconsider the original example:
|
||||
|
||||
```
|
||||
type Game struct {
|
||||
bestScore int
|
||||
scores chan<- int
|
||||
}
|
||||
|
||||
func run(bestScore *int, scores <-chan int) {
|
||||
// we don't keep a reference to a *Game directly because then we'd be holding
|
||||
// onto the send side of the channel.
|
||||
for score := range scores {
|
||||
if *bestScore < score {
|
||||
*bestScore = score
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func NewGame() (g *Game) {
|
||||
// this make(chan) return style is a proposal!
|
||||
scoreReader, scoreWriter := make(chan int)
|
||||
g = &Game{
|
||||
bestScore: 0,
|
||||
scores: scoreWriter,
|
||||
}
|
||||
go run(&g.bestScore, scoreReader)
|
||||
return g
|
||||
}
|
||||
|
||||
func (g *Game) HandlePlayer(p Player) error {
|
||||
for {
|
||||
score, err := p.NextScore()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
g.scores <- score
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If garbage collection closed a channel when we could prove no more values are ever coming down it, this solution is completely fixed. Yes yes, the comment in `run` is indicative of the existence of a rather large gun aimed at your foot, but at least the problem is easily solveable now, whereas it really wasn’t before. Furthermore, a smart compiler could probably make appropriate proofs to reduce the damage from said foot-gun.
|
||||
|
||||
#### Other smaller issues
|
||||
|
||||
* **Dup channels?** \- If we could use an equivalent of the `dup` syscall on channels, then we could also solve the multiple producer problem quite easily. Each producer could close their own `dup`-ed channel without ruining the other producers.
|
||||
* **Fix the channel API!** \- Close isn’t idempotent? Send on closed channel panics with no way to avoid it? Ugh!
|
||||
* **Arbitrarily buffered channels** \- If we could make buffered channels with no fixed buffer size limit, then we could make channels that don’t block.
|
||||
|
||||
|
||||
|
||||
### What do we tell people about Go then?
|
||||
|
||||
If you haven’t yet, please go take a look at my current favorite programming post: [What Color is Your Function][32]. Without being about Go specifically, this blog post much more eloquently than I could lays out exactly why goroutines are Go’s best feature (and incidentally one of the ways Go is better than Rust for some applications).
|
||||
|
||||
If you’re still writing code in a programming language that forces keywords like `yield` on you to get high performance, concurrency, or an event-driven model, you are living in the past, whether or not you or anyone else knows it. Go is so far one of the best entrants I’ve seen of languages that implement an M:N threading model that’s not 1:1, and dang that’s powerful.
|
||||
|
||||
So, tell folks about goroutines.
|
||||
|
||||
If I had to pick one other leading feature of Go, it’s interfaces. Statically-typed [duck typing][33] makes extending and working with your own or someone else’s project so fun and amazing it’s probably worth me writing an entirely different set of words about it some other time.
|
||||
|
||||
### So…
|
||||
|
||||
I keep seeing people charge in to Go, eager to use channels to their full potential. Here’s my advice to you.
|
||||
|
||||
**JUST STAHP IT**
|
||||
|
||||
When you’re writing APIs and interfaces, as bad as the advice “never” can be, I’m pretty sure there’s never a time where channels are better, and every Go API I’ve used that used channels I’ve ended up having to fight. I’ve never thought “oh good, there’s a channel here;” it’s always instead been some variant of _**WHAT FRESH HELL IS THIS?**_
|
||||
|
||||
So, _please, please use channels where appropriate and only where appropriate._
|
||||
|
||||
In all of my Go code I work with, I can count on one hand the number of times channels were really the best choice. Sometimes they are. That’s great! Use them then. But otherwise just stop.
|
||||
|
||||
![][34]
|
||||
|
||||
_Special thanks for the valuable feedback provided by my proof readers Jeff Wendling, [Andrew Harding][35], [George Shank][36], and [Tyler Treat][37]._
|
||||
|
||||
If you want to work on Go with us at Space Monkey, please [hit me up][38]!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-should-feel-bad
|
||||
|
||||
作者:[jtolio.com][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||

[a]: https://www.jtolio.com/
[b]: https://github.com/lujun9972
[1]: https://blog.codinghorror.com/content/images/uploads/2012/06/6a0120a85dcdae970b017742d249d5970d-800wi.jpg
[2]: https://songlh.github.io/paper/go-study.pdf
[3]: https://golang.org/
[4]: http://www.spacemonkey.com/
[5]: https://en.wikipedia.org/wiki/Communicating_sequential_processes
[6]: https://en.wikipedia.org/wiki/%CE%A0-calculus
[7]: http://matt.might.net
[8]: http://www.ucombinator.org/
[9]: https://www.jtolio.com/writing/2015/11/research-log-cell-states-and-microarrays/
[10]: https://www.jtolio.com/writing/2014/04/go-space-monkey/
[11]: https://godoc.org/github.com/spacemonkeygo/openssl
[12]: https://golang.org/pkg/crypto/tls/
[13]: https://godoc.org/github.com/spacemonkeygo/errors
[14]: https://godoc.org/github.com/spacemonkeygo/spacelog
[15]: https://godoc.org/gopkg.in/spacemonkeygo/monitor.v1
[16]: https://github.com/jtolds/gls
[17]: https://www.jtolio.com/images/wat/darth-helmet.jpg
[18]: https://en.wikipedia.org/wiki/Newsqueak
[19]: https://en.wikipedia.org/wiki/Alef_%28programming_language%29
[20]: https://en.wikipedia.org/wiki/Limbo_%28programming_language%29
[21]: https://lesswrong.com/lw/k5/cached_thoughts/
[22]: https://blog.golang.org/share-memory-by-communicating
[23]: https://www.jtolio.com/images/wat/jon-stewart.jpg
[24]: https://twitter.com/HiattDustin
[25]: http://bravenewgeek.com/go-is-unapologetically-flawed-heres-why-we-use-it/
[26]: https://www.jtolio.com/images/wat/obama.jpg
[27]: https://www.jtolio.com/images/wat/yael-grobglas.jpg
[28]: http://www.informit.com/articles/article.aspx?p=2359758#comment-2061767464
[29]: https://godoc.org/golang.org/x/net/context
[30]: https://www.jtolio.com/images/wat/zooey-deschanel.jpg
[31]: https://www.jtolio.com/images/wat/joel-mchale.jpg
[32]: http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
[33]: https://en.wikipedia.org/wiki/Duck_typing
[34]: https://www.jtolio.com/images/wat/michael-cera.jpg
[35]: https://github.com/azdagron
[36]: https://twitter.com/taterbase
[37]: http://bravenewgeek.com
[38]: https://www.jtolio.com/contact/

@ -0,0 +1,658 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Writing Advanced Web Applications with Go)
[#]: via: (https://www.jtolio.com/2017/01/writing-advanced-web-applications-with-go)
[#]: author: (jtolio.com https://www.jtolio.com/)

Writing Advanced Web Applications with Go
======
Web development in many programming environments often requires subscribing to some full framework ethos. With [Ruby][1], it’s usually [Rails][2] but could be [Sinatra][3] or something else. With [Python][4], it’s often [Django][5] or [Flask][6]. With [Go][7], it’s…

If you spend some time in Go communities like the [Go mailing list][8] or the [Go subreddit][9], you’ll find Go newcomers frequently wondering what web framework is best to use. [There][10] [are][11] [quite][12] [a][13] [few][14] [Go][15] [frameworks][16] ([and][17] [then][18] [some][19]), so which one is best seems like a reasonable question. Without fail, though, the strong recommendation of the Go community is to [avoid web frameworks entirely][20] and just stick with the standard library as long as possible. Here’s [an example from the Go mailing list][21] and here’s [one from the subreddit][22].

It’s not bad advice! The Go standard library is very rich and flexible, much more so than many other languages, and designing a web application in Go with just the standard library is definitely a good choice.

Even when these Go frameworks call themselves minimalistic, they can’t seem to help themselves avoid using a different request handler interface than the default standard library [http.Handler][23], and I think this is the biggest source of angst about why frameworks should be avoided. If everyone standardizes on [http.Handler][23], then dang, all sorts of things would be interoperable!

Before Go 1.7, it made some sense to give in and use a different interface for handling HTTP requests. But now that [http.Request][24] has the [Context][25] and [WithContext][26] methods, there truly isn’t a good reason any longer.
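
For reference, the two `net/http` methods in question look like this in the standard library (Go 1.7 and later):

```
// From net/http:
func (r *Request) Context() context.Context
func (r *Request) WithContext(ctx context.Context) *Request
```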

I’ve done a fair share of web development in Go and I’m here to share with you both some standard library development patterns I’ve learned and some code I’ve found myself frequently needing. The code I’m sharing is not for use instead of the standard library, but to augment it.

Overall, if this blog post feels like it’s predominantly plugging various little standalone libraries from my [Webhelp non-framework][27], that’s because it is. It’s okay, they’re little standalone libraries. Only use the ones you want!

If you’re new to Go web development, I suggest reading the Go documentation’s [Writing Web Applications][28] article first.

### Middleware

A frequent design pattern for server-side web development is the concept of _middleware_, where some portion of the request handler wraps some other portion of the request handler and does some preprocessing or routing or something. This is a big component of how [Express][29] is organized on [Node][30], and how Express middleware and [Negroni][17] middleware works is almost line-for-line identical in design.

Good use cases for middleware are things such as:

  * making sure a user is logged in, redirecting if not,
  * making sure the request came over HTTPS,
  * making sure a session is set up and loaded from a session database,
  * making sure we logged information before and after the request was handled,
  * making sure the request was routed to the right handler,
  * and so on.

Composing your web app as essentially a chain of middleware handlers is a very powerful and flexible approach. It allows you to avoid a lot of [cross-cutting concerns][31] and have your code factored in very elegant and easy-to-maintain ways. By wrapping a set of handlers with middleware that ensures a user is logged in prior to actually attempting to handle the request, the individual handlers no longer need mistake-prone copy-and-pasted code to ensure the same thing.

So, middleware is good. However, if Negroni or other frameworks are any indication, you’d think the standard library’s `http.Handler` isn’t up to the challenge. Negroni adds its own `negroni.Handler` just for the sake of making middleware easier. There’s no reason for this.

Here is a full middleware implementation for ensuring a user is logged in, assuming a `GetUser(*http.Request)` function but otherwise just using the standard library:

```
func RequireUser(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        user, err := GetUser(req)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        if user == nil {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        h.ServeHTTP(w, req)
    })
}
```

Here’s how it’s used (just wrap another handler!):

```
func main() {
    http.ListenAndServe(":8080", RequireUser(http.HandlerFunc(myHandler)))
}
```

Express, Negroni, and other frameworks expect this kind of signature for a middleware-supporting handler:

```
type Handler interface {
    // don't do this!
    ServeHTTP(rw http.ResponseWriter, req *http.Request, next http.HandlerFunc)
}
```

There’s really no reason for adding the `next` argument - it reduces cross-library compatibility. So I say, don’t use `negroni.Handler` (or similar). Just use `http.Handler`!
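
Because everything stays a plain `http.Handler`, middleware composes with nothing but function application. Here’s a minimal sketch (my own illustration, not a webhelp API; the package name is arbitrary) of a helper that chains several middlewares:

```
package webapp

import "net/http"

// Middleware is any function that decorates an http.Handler.
type Middleware func(http.Handler) http.Handler

// Chain wraps h so that the first middleware listed becomes the outermost.
func Chain(h http.Handler, mws ...Middleware) http.Handler {
    for i := len(mws) - 1; i >= 0; i-- {
        h = mws[i](h)
    }
    return h
}
```

With that, something like `Chain(myHandler, logRequests, RequireUser)` reads top-down, and every piece is still a standard `http.Handler` (here `logRequests` is a hypothetical second middleware).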

### Composability

Hopefully I’ve sold you on middleware as a good design philosophy.

Probably the most commonly-used type of middleware is request routing, or muxing (seems like we should call this demuxing but what do I know). Some frameworks are almost solely focused on request routing. [gorilla/mux][32] seems more popular than any other part of the [Gorilla][33] library. I think the reason for this is that even though the Go standard library is completely full featured and has a good [ServeMux][34] implementation, it doesn’t make the right thing the default.

So! Let’s talk about request routing and consider the following problem. You, web developer extraordinaire, want to serve some HTML from your web server at `/hello/` but also want to serve some static assets from `/static/`. Let’s take a quick stab.

```
package main

import (
    "net/http"
)

func hello(w http.ResponseWriter, req *http.Request) {
    w.Write([]byte("hello, world!"))
}

func main() {
    mux := http.NewServeMux()
    mux.Handle("/hello/", http.HandlerFunc(hello))
    mux.Handle("/static/", http.FileServer(http.Dir("./static-assets")))
    http.ListenAndServe(":8080", mux)
}
```

If you visit `http://localhost:8080/hello/`, you’ll be rewarded with a friendly “hello, world!” message.

If you visit `http://localhost:8080/static/` on the other hand (assuming you have a folder of static assets in `./static-assets`), you’ll be surprised and frustrated. This code tries to find the source content for the request `/static/my-file` at `./static-assets/static/my-file`! There’s an extra `/static` in there!

Okay, so this is why `http.StripPrefix` exists. Let’s fix it.

```
mux.Handle("/static/", http.StripPrefix("/static",
    http.FileServer(http.Dir("./static-assets"))))
```

`mux.Handle` combined with `http.StripPrefix` is such a common pattern that I think it should be the default. Whenever a request router processes a certain amount of URL elements, it should strip them off the request so the wrapped `http.Handler` doesn’t need to know its absolute URL and only needs to be concerned with its relative one.

In [Russ Cox][35]’s recent [TiddlyWeb backend][36], I would argue that every time `strings.TrimPrefix` is needed to remove the full URL from the handler’s incoming path arguments, it is an unnecessary cross-cutting concern, unfortunately imposed by `http.ServeMux`. (An example is [line 201 in tiddly.go][37].)

I’d much rather have the default `mux` behavior work more like a directory of registered elements that by default strips off the ancestor directory before handing the request to the next middleware handler. It’s much more composable. To this end, I’ve written a simple muxer that works in this fashion called [whmux.Dir][38]. It is essentially `http.ServeMux` and `http.StripPrefix` combined. Here’s the previous example reworked to use it:

```
package main

import (
    "net/http"

    "gopkg.in/webhelp.v1/whmux"
)

func hello(w http.ResponseWriter, req *http.Request) {
    w.Write([]byte("hello, world!"))
}

func main() {
    mux := whmux.Dir{
        "hello":  http.HandlerFunc(hello),
        "static": http.FileServer(http.Dir("./static-assets")),
    }
    http.ListenAndServe(":8080", mux)
}
```

There are other useful mux implementations inside the [whmux][39] package that demultiplex on various aspects of the request path, request method, request host, or pull arguments out of the request and place them into the context, such as a [whmux.IntArg][40] or [whmux.StringArg][41]. This brings us to [contexts][42].

### Contexts

Request contexts are a recent addition to the Go 1.7 standard library, but the idea of [contexts has been around since mid-2014][43]. As of Go 1.7, they were added to the standard library ([“context”][42]), but are available for older Go releases in the original location ([“golang.org/x/net/context”][44]).

First, here’s the definition of the `context.Context` type that `(*http.Request).Context()` returns:

```
type Context interface {
    Done() <-chan struct{}
    Err() error
    Deadline() (deadline time.Time, ok bool)

    Value(key interface{}) interface{}
}
```

Talking about `Done()`, `Err()`, and `Deadline()` is enough material for an entirely different blog post, so I’m going to ignore them at least for now and focus on `Value(interface{})`.

As a motivating problem, let’s say that the `GetUser(*http.Request)` method we assumed earlier is expensive, and we only want to call it once per request. We certainly don’t want to call it once to check that a user is logged in, and then again when we actually need the `*User` value. With `(*http.Request).WithContext` and `context.WithValue`, we can pass the `*User` down to the next middleware precomputed!

Here’s the new middleware:

```
type userKey int

func RequireUser(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        user, err := GetUser(req)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        if user == nil {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        ctx := req.Context()
        ctx = context.WithValue(ctx, userKey(0), user)
        h.ServeHTTP(w, req.WithContext(ctx))
    })
}
```

Now, handlers that are protected by this `RequireUser` handler can load the previously computed `*User` value like this:

```
if user, ok := req.Context().Value(userKey(0)).(*User); ok {
    // there's a valid user!
}
```

Contexts allow us to pass optional values to handlers down the chain in a way that is relatively type-safe and flexible. None of the above context logic requires anything outside of the standard library.
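
A common refinement (my own sketch, not part of the webhelp libraries) is to hide the key and the type assertion behind a small accessor, so callers never touch `userKey` directly; it assumes the `userKey` type and `*User` value from above, plus an import of `context`:

```
// ContextUser returns the *User previously stashed by RequireUser,
// or nil if the request never passed through that middleware.
func ContextUser(ctx context.Context) *User {
    user, _ := ctx.Value(userKey(0)).(*User)
    return user
}
```

Handlers then read simply: `if user := ContextUser(req.Context()); user != nil { ... }`.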

#### Aside about context keys

There was a curious piece of code in the above example. At the top, we defined a `type userKey int`, and then always used it as `userKey(0)`.

One of the possible problems with contexts is that the `Value()` interface lends itself to a global namespace where you can stomp on other context users and use conflicting key names. Above, we used `type userKey` because it’s an unexported type in your package. It will never compare equal (without a cast) to any other type, including `int`, in Go. This gives us a way to namespace keys to your package, even though the `Value()` method is still a sort of global namespace.

Because the need for this is so common, the `webhelp` package defines a [GenSym()][45] helper that will create a brand new, never-before-seen, unique value for use as a context key.

If we used [GenSym()][45], then `type userKey int` would become `var userKey = webhelp.GenSym()` and `userKey(0)` would simply become `userKey`.
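
Concretely, the earlier middleware shrinks to a sketch like this (using only the substitutions just described):

```
var userKey = webhelp.GenSym() // one unique, unexported key per package

// inside the middleware:
ctx := context.WithValue(req.Context(), userKey, user)
h.ServeHTTP(w, req.WithContext(ctx))

// inside a protected handler:
if user, ok := req.Context().Value(userKey).(*User); ok {
    // there's a valid user!
}
```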

#### Back to whmux.StringArg

Armed with this new context behavior, we can now present a `whmux.StringArg` example:

```
package main

import (
    "fmt"
    "net/http"

    "gopkg.in/webhelp.v1/whmux"
)

var (
    pageName = whmux.NewStringArg()
)

func page(w http.ResponseWriter, req *http.Request) {
    name := pageName.Get(req.Context())

    fmt.Fprintf(w, "Welcome to %s", name)
}

func main() {
    // pageName.Shift pulls the next /-delimited string out of the request's
    // URL.Path and puts it into the context instead.
    pageHandler := pageName.Shift(http.HandlerFunc(page))

    http.ListenAndServe(":8080", whmux.Dir{
        "wiki": pageHandler,
    })
}
```

### Pre-Go-1.7 support

Contexts let you do some pretty cool things. But let’s say you’re stuck with something before Go 1.7 (for instance, App Engine is currently Go 1.6).

That’s okay! I’ve backported all of the neat new context features to Go 1.6 and earlier in a forwards compatible way!

With the [whcompat][46] package, `req.Context()` becomes `whcompat.Context(req)`, and `req.WithContext(ctx)` becomes `whcompat.WithContext(req, ctx)`. The `whcompat` versions work with all releases of Go. Yay!
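
Applying those two substitutions to the body of the `RequireUser` middleware gives a version that should work on any Go release (a sketch built directly from the replacements just described):

```
ctx := whcompat.Context(req)
ctx = context.WithValue(ctx, userKey, user)
h.ServeHTTP(w, whcompat.WithContext(req, ctx))
```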

There’s a bit of unpleasantness behind the scenes to make this happen. Specifically, for pre-1.7 builds, a global map indexed by `req.URL` is kept, and a finalizer is installed on `req` to clean up. So don’t change what `req.URL` points to and this will work fine. In practice it’s not a problem.

`whcompat` adds additional backwards-compatibility helpers. In Go 1.7 and on, the context’s `Done()` channel is closed (and `Err()` is set) whenever the request is done processing. If you want this behavior in Go 1.6 and earlier, just use the [whcompat.DoneNotify][47] middleware.

In Go 1.8 and on, the context’s `Done()` channel is closed when the client goes away, even if the request hasn’t completed. If you want this behavior in Go 1.7 and earlier, just use the [whcompat.CloseNotify][48] middleware, though beware that it costs an extra goroutine.

### Error handling

How you handle errors can be another cross-cutting concern, but with good application of context and middleware, it too can be beautifully cleaned up so that the responsibilities lie in the correct place.

Problem statement: your `RequireUser` middleware needs to handle an authentication error differently between your HTML endpoints and your JSON API endpoints. You want to use `RequireUser` for both types of endpoints, but with your HTML endpoints you want to return a user-friendly error page, and with your JSON API endpoints you want to return an appropriate JSON error state.

In my opinion, the right thing to do is to have contextual error handlers, and luckily, we have a context for contextual information!

First, we need an error handler interface.

```
type ErrHandler interface {
    HandleError(w http.ResponseWriter, req *http.Request, err error)
}
```

Next, let’s make a middleware that registers the error handler in the context:

```
var errHandler = webhelp.GenSym() // see the aside about context keys

func HandleErrWith(eh ErrHandler, h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        ctx := context.WithValue(whcompat.Context(req), errHandler, eh)
        h.ServeHTTP(w, whcompat.WithContext(req, ctx))
    })
}
```

Last, let’s make a function that will use the registered error handler for errors:

```
func HandleErr(w http.ResponseWriter, req *http.Request, err error) {
    if handler, ok := whcompat.Context(req).Value(errHandler).(ErrHandler); ok {
        handler.HandleError(w, req, err)
        return
    }
    log.Printf("error: %v", err)
    http.Error(w, "internal server error", http.StatusInternalServerError)
}
```

Now, as long as everything uses `HandleErr` to handle errors, our JSON API can handle errors with JSON responses, and our HTML endpoints can handle errors with HTML responses.

Of course, the [wherr][49] package implements this all for you, and the [whjson][49] package even implements a friendly JSON API error handler.

Here’s how you might use it:

```
var userKey = webhelp.GenSym()

func RequireUser(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        user, err := GetUser(req)
        if err != nil {
            wherr.Handle(w, req, wherr.InternalServerError.New("failed to get user"))
            return
        }
        if user == nil {
            wherr.Handle(w, req, wherr.Unauthorized.New("no user found"))
            return
        }
        ctx := req.Context()
        ctx = context.WithValue(ctx, userKey, user)
        h.ServeHTTP(w, req.WithContext(ctx))
    })
}

func userpage(w http.ResponseWriter, req *http.Request) {
    user := req.Context().Value(userKey).(*User)
    w.Header().Set("Content-Type", "text/html")
    userpageTmpl.Execute(w, user)
}

func username(w http.ResponseWriter, req *http.Request) {
    user := req.Context().Value(userKey).(*User)
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(map[string]interface{}{"user": user})
}

func main() {
    http.ListenAndServe(":8080", whmux.Dir{
        "api": wherr.HandleWith(whjson.ErrHandler,
            RequireUser(whmux.Dir{
                "username": http.HandlerFunc(username),
            })),
        "user": RequireUser(http.HandlerFunc(userpage)),
    })
}
```

#### Aside about the spacemonkeygo/errors package

The default [wherr.Handle][50] implementation understands all of the [error classes defined in the wherr top level package][51].

These error classes are implemented using the [spacemonkeygo/errors][52] library and the [spacemonkeygo/errors/errhttp][53] extensions. You don’t have to use this library or these errors, but the benefit is that your error instances can be extended to include HTTP status code messages and information, which once again provides for a nice elimination of cross-cutting concerns in your error handling logic.

See the [spacemonkeygo/errors][52] package for more details.

_**Update 2018-04-19:** After a few years of use, my friend condensed some lessons we learned and the best parts of `spacemonkeygo/errors` into a new, more concise, better library, over at [github.com/zeebo/errs][54]. Consider using that instead!_

### Sessions

Go’s standard library has great support for cookies, but cookies by themselves aren’t usually what a developer thinks of when she thinks about sessions. Cookies are unencrypted, unauthenticated, and readable by the user, and perhaps you don’t want that with your session data.

Further, sessions can be stored in cookies, but could also be stored in a database to provide features like session revocation and querying. There are lots of potential details about the implementation of sessions.

Request handlers, however, probably don’t care too much about the implementation details of the session. Request handlers usually just want a bucket of keys and values they can store safely and securely.

The [whsess][55] package implements middleware for registering an arbitrary session store (a default cookie-based session store is provided), and implements helpers for retrieving and saving new values into the session.

The default cookie-based session store implements encryption and authentication via the excellent [nacl/secretbox][56] package.

Usage is like this:

```
func handler(w http.ResponseWriter, req *http.Request) {
    ctx := whcompat.Context(req)
    sess, err := whsess.Load(ctx, "namespace")
    if err != nil {
        wherr.Handle(w, req, err)
        return
    }
    if loggedIn, _ := sess.Values["logged_in"].(bool); loggedIn {
        views, _ := sess.Values["views"].(int64)
        sess.Values["views"] = views + 1
        sess.Save(w)
    }
}

func main() {
    // secret is the symmetric key for the cookie store (e.g. some random bytes).
    http.ListenAndServe(":8080", whsess.HandlerWithStore(
        whsess.NewCookieStore(secret), http.HandlerFunc(handler)))
}
```

### Logging

The Go standard library by default doesn’t log incoming requests, outgoing responses, or even just what port the HTTP server is listening on.

The [whlog][57] package implements all three. The [whlog.LogRequests][58] middleware will log requests as they start. The [whlog.LogResponses][59] middleware will log requests as they end, along with status code and timing information. [whlog.ListenAndServe][60] will log the address the server ultimately listens on (if you specify “:0” as your address, a port will be randomly chosen, and [whlog.ListenAndServe][60] will log it).

[whlog.LogResponses][59] deserves special mention for how it does what it does. It uses the [whmon][61] package to instrument the outgoing `http.ResponseWriter` to keep track of response information.
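
The general shape of that instrumentation trick is worth knowing even outside of whmon. Here’s a minimal sketch of wrapping an `http.ResponseWriter` to record the status code (my own simplification, not whmon’s actual implementation; assumes imports of `net/http`, `log`, and `time`):

```
type statusRecorder struct {
    http.ResponseWriter
    status int
}

func (r *statusRecorder) WriteHeader(code int) {
    r.status = code
    r.ResponseWriter.WriteHeader(code)
}

func logStatus(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
        start := time.Now()
        h.ServeHTTP(rec, req)
        log.Printf("%s %s -> %d (%s)", req.Method, req.URL.Path, rec.status, time.Since(start))
    })
}
```

(A production version also needs to forward optional interfaces the wrapped writer may implement, like `http.Flusher`, which is one reason to reach for whmon instead.)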

Usage is like this:

```
func main() {
    whlog.ListenAndServe(":8080", whlog.LogResponses(whlog.Default, handler))
}
```

#### App engine logging

App engine logging is unconventional crazytown. The standard library logger doesn’t work by default on App Engine, because App Engine logs _require_ the request context. This is unfortunate for libraries that don’t necessarily run on App Engine all the time, as their logging information doesn’t make it to the App Engine request-specific logger.

Unbelievably, this is fixable with [whgls][62], which uses my terrible, terrible (but recently improved) [Goroutine-local storage library][63] to store the request context on the current stack, register a new log output, and fix logging so standard library logging works with App Engine again.

### Template handling

Go’s standard library [html/template][64] package is excellent, but you’ll be unsurprised to find there are a few tasks I do with it so commonly that I’ve written additional support code.

The [whtmpl][65] package really does two things. First, it provides a number of useful helper methods for use within templates, and second, it takes some friction out of managing a large number of templates.

When writing templates, one thing you can do is call out to other registered templates for small values. A good example might be some sort of list element. You can have a template that renders the list element, and then your template that renders your list can use the list element template in turn.

Use of another template within a template might look like this:

```
<ul>
  {{ range .List }}
    {{ template "list_element" . }}
  {{ end }}
</ul>
```

You’re now rendering the `list_element` template with the list element from `.List`. But what if you want to also pass the current user `.User`? Unfortunately, you can only pass one argument from one template to another. If you have two arguments you want to pass to another template, with the standard library, you’re out of luck.

The [whtmpl][65] package adds three helper functions to aid you here, `makepair`, `makemap`, and `makeslice` (more docs under the [whtmpl.Collection][66] type). `makepair` is the simplest. It takes two arguments and constructs a [whtmpl.Pair][67]. Fixing our example above would look like this now:

```
<ul>
  {{ $user := .User }}
  {{ range .List }}
    {{ template "list_element" (makepair . $user) }}
  {{ end }}
</ul>
```
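
On the receiving side, `list_element` then unpacks the two values again. Assuming the pair exposes its two halves as `.First` and `.Second` (check the [whtmpl.Pair][67] docs for the actual field names), it might look like this, where `.Title` and `.Name` stand in for whatever your element and user types provide:

```
<li>{{ .First.Title }} (viewed by {{ .Second.Name }})</li>
```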

The second thing [whtmpl][65] does is make defining lots of templates easy, by optionally automatically naming templates after the name of the file the template is defined in.

For example, say you have three files.

Here’s `pkg.go`:

```
package views

import "gopkg.in/webhelp.v1/whtmpl"

var Templates = whtmpl.NewCollection()
```

Here’s `landing.go`:

```
package views

var _ = Templates.MustParse(`{{ template "header" . }}

   <h1>Landing!</h1>`)
```

And here’s `header.go`:

```
package views

var _ = Templates.MustParse(`<title>My website!</title>`)
```

Now, you can import your new `views` package and render the `landing` template this easily:

```
func handler(w http.ResponseWriter, req *http.Request) {
    views.Templates.Render(w, req, "landing", map[string]interface{}{})
}
```

### User authentication

I’ve written two Webhelp-style authentication libraries that I end up using frequently.

The first is an OAuth2 library, [whoauth2][68]. I’ve written up [an example application that authenticates with Google, Facebook, and Github][69].

The second, [whgoth][70], is a wrapper around [markbates/goth][71]. My portion isn’t quite complete yet (some fixes are still necessary for optional App Engine support), but it will support more non-OAuth2 authentication sources (like Twitter) when it is done.

### Route listing

Surprise! If you’ve used [webhelp][27] based handlers and middleware for your whole app, you automatically get route listing for free, via the [whroute][72] package.

My web serving code’s `main` method often has a form like this:

```
switch flag.Arg(0) {
case "serve":
    panic(whlog.ListenAndServe(*listenAddr, routes))
case "routes":
    whroute.PrintRoutes(os.Stdout, routes)
default:
    fmt.Printf("Usage: %s <serve|routes>\n", os.Args[0])
}
```

Here’s some example output:

```
GET   /auth/_cb/
GET   /auth/login/
GET   /auth/logout/
GET   /
GET   /account/apikeys/
POST  /account/apikeys/
GET   /project/<int>/
GET   /project/<int>/control/<int>/
POST  /project/<int>/control/<int>/sample/
GET   /project/<int>/control/
  Redirect: f(req)
POST  /project/<int>/control/
POST  /project/<int>/control_named/<string>/sample/
GET   /project/<int>/control_named/
  Redirect: f(req)
GET   /project/<int>/sample/<int>/
GET   /project/<int>/sample/<int>/similar[/<*>]
GET   /project/<int>/sample/
  Redirect: f(req)
POST  /project/<int>/search/
GET   /project/
  Redirect: /
POST  /project/
```

### Other little things

[webhelp][27] has a number of other subpackages:

  * [whparse][73] assists in parsing optional request arguments.
  * [whredir][74] provides some handlers and helper methods for doing redirects in various cases.
  * [whcache][75] creates request-specific mutable storage for caching various computations and database loaded data. Mutability helps helper functions that aren’t used as middleware share data.
  * [whfatal][76] uses panics to simplify early request handling termination. Probably avoid this package unless you want to anger other Go developers.

### Summary

Designing your web project as a collection of composable middlewares goes quite a long way to simplify your code design, eliminate cross-cutting concerns, and create a more flexible development environment. Use my [webhelp][27] package if it helps you.

Or don’t! Whatever! It’s still a free country last I checked.

#### Update

Peter Kieltyka points me to his [Chi framework][77], which actually does seem to do the right things with respect to middleware, handlers, and contexts - certainly much more so than all the other frameworks I’ve seen. So, shoutout to Peter and the team at Pressly!

--------------------------------------------------------------------------------

via: https://www.jtolio.com/2017/01/writing-advanced-web-applications-with-go

作者:[jtolio.com][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.jtolio.com/
[b]: https://github.com/lujun9972
[1]: https://www.ruby-lang.org/
[2]: http://rubyonrails.org/
[3]: http://www.sinatrarb.com/
[4]: https://www.python.org/
[5]: https://www.djangoproject.com/
[6]: http://flask.pocoo.org/
[7]: https://golang.org/
[8]: https://groups.google.com/d/forum/golang-nuts
[9]: https://www.reddit.com/r/golang/
[10]: https://revel.github.io/
[11]: https://gin-gonic.github.io/gin/
[12]: http://iris-go.com/
[13]: https://beego.me/
[14]: https://go-macaron.com/
[15]: https://github.com/go-martini/martini
[16]: https://github.com/gocraft/web
[17]: https://github.com/urfave/negroni
[18]: https://godoc.org/goji.io
[19]: https://echo.labstack.com/
[20]: https://medium.com/code-zen/why-i-don-t-use-go-web-frameworks-1087e1facfa4
[21]: https://groups.google.com/forum/#!topic/golang-nuts/R_lqsTTBh6I
[22]: https://www.reddit.com/r/golang/comments/1yh6gm/new_to_go_trying_to_select_web_framework/
[23]: https://golang.org/pkg/net/http/#Handler
[24]: https://golang.org/pkg/net/http/#Request
[25]: https://golang.org/pkg/net/http/#Request.Context
[26]: https://golang.org/pkg/net/http/#Request.WithContext
[27]: https://godoc.org/gopkg.in/webhelp.v1
[28]: https://golang.org/doc/articles/wiki/
[29]: https://expressjs.com/
[30]: https://nodejs.org/en/
[31]: https://en.wikipedia.org/wiki/Cross-cutting_concern
[32]: https://github.com/gorilla/mux
[33]: https://github.com/gorilla/
[34]: https://golang.org/pkg/net/http/#ServeMux
[35]: https://swtch.com/~rsc/
[36]: https://github.com/rsc/tiddly
[37]: https://github.com/rsc/tiddly/blob/8f9145ac183e374eb95d90a73be4d5f38534ec47/tiddly.go#L201
[38]: https://godoc.org/gopkg.in/webhelp.v1/whmux#Dir
[39]: https://godoc.org/gopkg.in/webhelp.v1/whmux
[40]: https://godoc.org/gopkg.in/webhelp.v1/whmux#IntArg
[41]: https://godoc.org/gopkg.in/webhelp.v1/whmux#StringArg
[42]: https://golang.org/pkg/context/
[43]: https://blog.golang.org/context
[44]: https://godoc.org/golang.org/x/net/context
[45]: https://godoc.org/gopkg.in/webhelp.v1#GenSym
[46]: https://godoc.org/gopkg.in/webhelp.v1/whcompat
[47]: https://godoc.org/gopkg.in/webhelp.v1/whcompat#DoneNotify
[48]: https://godoc.org/gopkg.in/webhelp.v1/whcompat#CloseNotify
[49]: https://godoc.org/gopkg.in/webhelp.v1/wherr
[50]: https://godoc.org/gopkg.in/webhelp.v1/wherr#Handle
[51]: https://godoc.org/gopkg.in/webhelp.v1/wherr#pkg-variables
[52]: https://godoc.org/github.com/spacemonkeygo/errors
[53]: https://godoc.org/github.com/spacemonkeygo/errors/errhttp
[54]: https://github.com/zeebo/errs
[55]: https://godoc.org/gopkg.in/webhelp.v1/whsess
[56]: https://godoc.org/golang.org/x/crypto/nacl/secretbox
[57]: https://godoc.org/gopkg.in/webhelp.v1/whlog
[58]: https://godoc.org/gopkg.in/webhelp.v1/whlog#LogRequests
[59]: https://godoc.org/gopkg.in/webhelp.v1/whlog#LogResponses
[60]: https://godoc.org/gopkg.in/webhelp.v1/whlog#ListenAndServe
[61]: https://godoc.org/gopkg.in/webhelp.v1/whmon
[62]: https://godoc.org/gopkg.in/webhelp.v1/whgls
[63]: https://godoc.org/github.com/jtolds/gls
[64]: https://golang.org/pkg/html/template/
[65]: https://godoc.org/gopkg.in/webhelp.v1/whtmpl
[66]: https://godoc.org/gopkg.in/webhelp.v1/whtmpl#Collection
[67]: https://godoc.org/gopkg.in/webhelp.v1/whtmpl#Pair
[68]: https://godoc.org/gopkg.in/go-webhelp/whoauth2.v1
[69]: https://github.com/go-webhelp/whoauth2/blob/v1/examples/group/main.go
[70]: https://godoc.org/gopkg.in/go-webhelp/whgoth.v1
[71]: https://github.com/markbates/goth
[72]: https://godoc.org/gopkg.in/webhelp.v1/whroute
[73]: https://godoc.org/gopkg.in/webhelp.v1/whparse
[74]: https://godoc.org/gopkg.in/webhelp.v1/whredir
[75]: https://godoc.org/gopkg.in/webhelp.v1/whcache
[76]: https://godoc.org/gopkg.in/webhelp.v1/whfatal
[77]: https://github.com/pressly/chi

414 sources/tech/20200223 The Zen of Go.md Normal file
@ -0,0 +1,414 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Zen of Go)
[#]: via: (https://dave.cheney.net/2020/02/23/the-zen-of-go)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

The Zen of Go
======

_This article was derived from my [GopherCon Israel 2020][1] presentation. It’s also quite long. If you’d prefer a shorter version, head over to [the-zen-of-go.netlify.com][2]_.

_A recording of the presentation is available on [YouTube][3]._

* * *

### How should I write good code?

Something that I’ve been thinking about a lot recently, when reflecting on the body of my own work, is a common subtitle, _how should I write good code?_ Given nobody actively seeks to write _bad_ code, this leads to the question: _how do you know when you’ve written good Go code?_

If there’s a continuum between good and bad, how do we know what the good parts are? What are its properties, its attributes, its hallmarks, its patterns, and its idioms?

### Idiomatic Go

![][4]

Which brings me to idiomatic Go. To say that something is idiomatic is to say that it follows the style of the time. If something is not idiomatic, it is not following the prevailing style. It is unfashionable.

More importantly, to say to someone that their code is not idiomatic does not explain _why_ it’s not idiomatic. Why is this? Like all truths, the answer is found in the dictionary.

> idiom (noun): a group of words established by usage as having a meaning not deducible from those of the individual words.

Idioms are hallmarks of shared values. Idiomatic Go is not something you learn from a book, it’s something that you acquire by being part of a community.

![][5]

My concern with the mantra of idiomatic Go is, in many ways, it can be exclusionary. It’s saying “you can’t sit with us.” After all, isn’t that what we mean when we critique someone’s work as non-idiomatic? They didn’t do it right. It doesn’t look right. It doesn’t follow the style of the time.

I offer that idiomatic Go is not a suitable mechanism for teaching how to write good Go code, because it is defined, fundamentally, by telling someone they did it wrong. Wouldn’t it be better if the advice we gave didn’t alienate the author right at the point they were most willing to accept it?

### Proverbs

Stepping away from problematic idioms, what other cultural artefacts do Gophers have? Perhaps we can turn to Rob Pike’s wonderful [Go Proverbs][6]. Are these suitable teaching tools? Will these tell newcomers how to write good Go code?

In general, I don’t think so. This is not to dismiss Pike’s work, it is just that the Go Proverbs, like Segoe Kensaku’s original, are observations, not statements of value. Again, the dictionary comes to the rescue:

> proverb (noun): a short, well-known pithy saying, stating a general truth or piece of advice.

The goal of the Go Proverbs is to reveal a deeper truth about the design of the language, but how useful is advice like _the empty interface says nothing_ to a novice from a language that doesn’t have structural typing?

It’s important to recognise that, in a growing community, at any time the people learning Go far outnumber those who claim to have mastered the language. Thus proverbs are perhaps not the best teaching tool in this scenario.

### Engineering Values

Dan Luu found [an old presentation][7] by Mark Lucovsky about the engineering culture of the Windows team around the Windows NT-Windows 2000 timeframe. The reason I mention it is Lucovsky’s description of a culture as a common way of evaluating designs and making tradeoffs.

![][8]

There are many ways of discussing culture, but with respect to an engineering culture Lucovsky’s description is apt. The central idea is _values guide decisions in an unknown design space_. The values of the NT team were: portability, reliability, security, and extensibility. Engineering values are, crudely translated, the way things are done around here.

### Go’s values

What are the explicit values of Go? What are the core beliefs or philosophy that define the way a Go programmer interprets the world? How are they promulgated? How are they taught? How are they enforced? How do they change over time?

How will you, as a newly minted Go programmer, inculcate the engineering values of Go? Or, how will you, a seasoned Go professional, promulgate your values to future generations? And just so we’re clear, this process of knowledge transfer is not optional. Without new blood and new ideas, our community becomes myopic and withers.

#### The values of other languages

To set the scene for what I’m getting at, we can look to other languages and see examples of their engineering values.

For example, C++ (and by extension Rust) believe that a programmer _should not have to pay for a feature they do not use_. If a program does not use some computationally expensive feature of the language, then it shouldn’t be forced to shoulder the cost of that feature. This value extends from the language to its standard library, and is used as a yardstick for judging the design of all code written in C++.

In Java, and Ruby, and Smalltalk, the core value that _everything is an object_ drives the design of programs around message passing, information hiding, and polymorphism. Designs that shoehorn a procedural style, or even a functional style, into these languages are considered to be wrong–or, as Gophers would say, non-idiomatic.

Turning to our own community, what are the engineering values that bind Go programmers? Discourse in our community is often fractious, so deriving a set of values from first principles would be a formidable challenge. Consensus is critical, but exponentially more difficult as the number of contributors to the discussion increases. But what if someone had done the hard work for us?

### The Zen of ~~Python~~ Go

Several decades ago Tim Peters sat down and penned _[PEP-20][9]_, the Zen of Python. Peters attempted to document the engineering values that he saw Guido van Rossum apply in his role as BDFL for Python.

For the remainder of this article, I’m going to look towards the Zen of Python and ask, is there anything that can inform the engineering values of Go programmers?

### A good package starts with a good name

Let’s start with something spicy,

> “Namespaces are one honking great idea–let’s do more of those!”

The Zen of Python, Item 19

This is pretty unequivocal, Python programmers should use namespaces. Lots of them.

In Go parlance a namespace is a package. I doubt there is any question that grouping things into packages is good for design and potentially reuse. But there might be some confusion, especially if you’re coming with a decade of experience in another language, about the right way to do this.

In Go each package should have a purpose, and the best way to know a package’s purpose is by its name—a noun. A package’s name describes what it provides. So, to reinterpret Peters’ words, every Go package should have a single purpose.

This is not a new idea, [I’ve been saying this a while][10], but why should you do this rather than the approach where packages are used for fine-grained taxonomy? Why? Because of change.

> “Design is the art of arranging code to work today, and be changeable forever.”

Sandi Metz

Change is the name of the game we’re in. What we do as programmers is manage change. When we do that well we call it design, or architecture. When we do it badly we call it technical debt, or legacy code.

If you are writing a program that works perfectly, one time, for one fixed set of inputs, then nobody cares if the code is good or bad, because ultimately the output of the program is all the business cares about.

But this is _never_ true. Software has bugs, requirements change, inputs change, and very few programs are written solely to be executed once, thus your program _will_ change over time. Maybe it’s you who’ll be tasked with this, more likely it will be someone else, but someone has to change that code. Someone has to maintain that code.

So, how can we make it easy for programs to change? Interfaces everywhere? Make everything mockable? Pernicious dependency injection? Well, maybe, for some classes of programs those techniques will be useful, but not many. However, for the majority of programs, designing something to be flexible up front is over-engineering.

What if, instead, we take the position that rather than enhancing components, we replace them. Then the best way to know when something needs to be replaced is when it doesn’t do what it says on the tin.

A good package starts with choosing a good name. Think of your package’s name as an elevator pitch, using just one word, to describe what it provides. When the name no longer matches the requirement, find a replacement.

### Simplicity matters

> “Simple is better than complex.”

The Zen of Python, Item 3

PEP-20 says simple is better than complex, and I couldn’t agree more. A couple of years ago I made this tweet;

> Most programming languages start out aiming to be simple, but end up just settling for being powerful.
>
> — Dave Cheney (@davecheney) [December 2, 2014][11]

My observation, at least at the time, was that I couldn’t think of a language introduced in my lifetime that didn’t purport to be simple. Each new language offered as a justification, and an enticement, their inherent simplicity. But as I researched, I found that simplicity was not a core value of many of the languages considered Go’s contemporaries. [1][12] Maybe this is just a cheap shot, but could it be that either these languages aren’t simple, or they don’t _think_ of themselves as being simple. They don’t consider simplicity to be a core value.

Call me old fashioned, but when did being simple fall out of style? Why does the commercial software development industry continually, gleefully, forget this fundamental truth?

> “There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”

C. A. R. Hoare, The Emperor’s Old Clothes, 1980 Turing Award Lecture

Simple does not mean easy, we know that. Often it is more work to make something simple to use, than easy to build.

> “Simplicity is prerequisite for reliability.”

Edsger W. Dijkstra, EWD498, 18 June 1975

Why should we strive for simplicity? Why is it important that Go programs be simple? Simple doesn’t mean crude, it means readable and maintainable. Simple doesn’t mean unsophisticated, it means reliable, relatable, and understandable.

> “Controlling complexity is the essence of computer programming.”

Brian W. Kernighan, _Software Tools_ (1976)

Whether Python abides by its mantra of simplicity is a matter for debate, but Go holds simplicity as a core value. I think that we can all agree that when it comes to Go, simple code is preferable to clever code.

### Avoid package level state

> “Explicit is better than implicit.”

_The Zen of Python, Item 2_

This is a place where I think Peters was more aspirational than factual. Many things in Python are not explicit; decorators, dunder methods, and so on. Without doubt they are powerful, and there’s a reason those features exist. Each feature is something someone cared enough about to do the work to implement it, especially the complicated ones. But heavy use of those features makes it harder for the reader to predict the cost of an operation.

The good news is we have a choice, as Go programmers, to choose to make our code explicit. Explicit could mean many things, perhaps you may be thinking explicit is just a nice way of saying bureaucratic and long winded, but that’s a superficial interpretation. It’s a misnomer to focus only on the syntax on the page, to fret about line lengths and DRYing up expressions. The more valuable places to be explicit, in my opinion, are to do with coupling and with state.

Coupling is a measure of the amount one thing depends on another. If two things are tightly coupled, they move together. An action that affects one is directly reflected in another. Imagine a train, each carriage joined–ironically the correct word is coupled–together; where the engine goes, the carriages follow.

Another way to describe coupling is the word cohesion. Cohesion measures how well two things naturally belong together. We talk about a cohesive argument, or a cohesive team; all their parts fit together as if they were designed that way.

Why does coupling matter? Because just like trains, when you need to change a piece of code, all the code that is tightly coupled to it must change. A prime example: someone releases a new version of their API and now your code doesn’t compile.

APIs are an unavoidable source of coupling, but there are more insidious forms of coupling. Clearly everyone knows that if an API’s signature changes, the data passing into and out of that call changes. It’s right there in the signature of the function; I take values of these types and return values of other types. But what if the API passed data another way? What if every time you called this API the result was based on the previous time you called that API, even though you didn’t change your parameters?

This is state, and management of state is _the_ problem in computer science.

```
package counter

var count int

func Increment(n int) int {
    count += n
    return count
}
```

Suppose we have this simple `counter` package. You can call `Increment` to increment the counter, and you can even get the value back if you `Increment` with a value of zero.

Suppose you had to test this code, how would you reset the counter after each test? Suppose you wanted to run those tests in parallel, could you do it? Now suppose that you wanted to count more than one thing per program, could you do it?

No, of course not. Clearly the answer is to encapsulate the `count` variable in a type.

```
package counter

type Counter struct {
    count int
}

func (c *Counter) Increment(n int) int {
    c.count += n
    return c.count
}
```

Now imagine that this problem isn’t restricted to just counters, but your application’s main business logic. Can you test it in isolation? Can you test it in parallel? Can you use more than one instance at a time? If the answer to those questions is _no_, the reason is package level state.

Avoid package level state. Reduce coupling and spooky action at a distance by providing the dependencies a type needs as fields on that type rather than using package variables.
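
To make the payoff concrete, here’s a minimal sketch of the tests that become possible once the state lives in a type (the test names are my own, assuming the `Counter` type above):

```
package counter

import "testing"

func TestCounterIncrement(t *testing.T) {
    t.Parallel() // safe: each test owns its own Counter

    var c Counter
    if got := c.Increment(1); got != 1 {
        t.Fatalf("Increment(1) = %d, want 1", got)
    }
}

func TestCounterStartsAtZero(t *testing.T) {
    t.Parallel()

    var c Counter
    if got := c.Increment(0); got != 0 {
        t.Fatalf("Increment(0) = %d, want 0", got)
    }
}
```

No reset step is needed anywhere: every test simply constructs a fresh instance.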

### Plan for failure, not success

> “Errors should never pass silently.”

_The Zen of Python, Item 10_

It’s been said that languages which favour exception handling follow the Samurai principle; _return victorious or not at all_. In exception based languages functions only return valid results. If they don’t succeed then control flow takes an entirely different path.

Unchecked exceptions are clearly an unsafe model to program in. How can you possibly write code that is robust in the presence of errors when you don’t know which statements could throw an exception? Java tried to make exceptions safer by introducing the notion of a checked exception which, to the best of my knowledge, has not been repeated in another mainstream language. There are plenty of languages which use exceptions but they all, with the singular exception of Java, do so in the unchecked variety.

Obviously Go chose a different path. Go programmers believe that robust programs are composed from pieces that handle the failure cases _before_ they handle the happy path. In the space that Go was designed for (server programs, multi-threaded programs, programs that handle input over the network), dealing with unexpected data, timeouts, connection failures and corrupted data must be front and centre of the programmer’s mind if they are to produce robust programs.

> “I think that error handling should be explicit, this should be a core value of the language.”

Peter Bourgon, [GoTime #91][13]

I want to echo Peter’s assertion, as it was the impetus for this article. I think so much of the success of Go is due to the explicit way errors are handled. Go programmers think about the failure case first. We solve the “what if…” case first. This leads to programs where failures are handled at the point of writing, rather than the point they occur in production.

The verbosity of

```
if err != nil {
    return err
}
```

is outweighed by the value of deliberately handling each failure condition at the point at which it occurs. Key to this is the cultural value of handling each and every error explicitly.
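
Handling the failure at the point of writing often means doing slightly more than `return err`. A small sketch of the pattern using only the standard library’s error wrapping (`Config` and `parseConfig` are stand-ins, and `%w` needs Go 1.13 or later):

```
func loadConfig(path string) (*Config, error) {
    f, err := os.Open(path)
    if err != nil {
        // annotate the failure with what we were trying to do
        return nil, fmt.Errorf("load config %q: %w", path, err)
    }
    defer f.Close()

    cfg, err := parseConfig(f)
    if err != nil {
        return nil, fmt.Errorf("parse config %q: %w", path, err)
    }
    return cfg, nil
}
```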

### Return early rather than nesting deeply

> “Flat is better than nested.”

The Zen of Python, Item 5

This is sage advice coming from a language where indentation is the primary form of control flow. How can we interpret this advice in terms of Go? `gofmt` controls the overall whitespace of a Go program, so there’s nothing to do there.

I wrote earlier about package names, and there is probably some advice here about avoiding a complicated package hierarchy. In my experience, the more a programmer tries to subdivide and taxonomise their Go codebase, the more they risk hitting the dead end that is package import loops.

I think the best application of item 5’s advice is the control flow _within_ a function. Simply put, avoid control flow that requires deep indentation.

> “Line of sight is a straight line along which an observer has unobstructed vision.”

Mat Ryer, [Code: Align the happy path to the left edge][14]

Mat Ryer describes this idea as line of sight coding. Line of sight coding means things like:

  * Using guard clauses to return early if a precondition is not met.
  * Placing the successful return statement at the end of the function rather than inside a conditional block.
  * Reducing the overall indentation level of the function by extracting functions and methods.

Key to this advice is that the thing that you care about, the thing that the function does, is never in danger of sliding out of sight to the right of your screen. This style has a bonus side effect: you’ll avoid pointless arguments about line lengths on your team.

Every time you indent you add another precondition to the programmer’s stack, consuming one of their 7 ± 2 short term memory slots. Rather than nesting deeply, keep the successful path of the function close to the left hand side of your screen.
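
As a quick sketch of the difference, compare a nested version of a function with its guard-clause rewrite (the function and names are illustrative):

```
package greet

import (
    "errors"
    "fmt"
    "io"
    "unicode/utf8"
)

// Nested: the interesting work drifts to the right of the screen.
func writeGreetingNested(w io.Writer, name string) error {
    if name != "" {
        if utf8.ValidString(name) {
            _, err := fmt.Fprintf(w, "hello, %s", name)
            return err
        }
        return errors.New("name is not valid UTF-8")
    }
    return errors.New("name is empty")
}

// Guard clauses: preconditions first, happy path flush left.
func writeGreeting(w io.Writer, name string) error {
    if name == "" {
        return errors.New("name is empty")
    }
    if !utf8.ValidString(name) {
        return errors.New("name is not valid UTF-8")
    }
    _, err := fmt.Fprintf(w, "hello, %s", name)
    return err
}
```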
|
||||
|
||||
### If you think it’s slow, prove it with a benchmark
|
||||
|
||||
> “In the face of ambiguity, refuse the temptation to guess.”
|
||||
|
||||
The Zen of Python, Item 12
|
||||
|
||||
Programming is based on mathematics and logic, two concepts which rarely involve the element of chance. But there are many things we, as programmers, guess about every day. What does this variable do? What does this parameter do? What happens if I pass `nil` here? What happens if I call `Register` twice? There’s actually a lot of guesswork in modern programming, especially when it comes to using libraries you didn’t write.
|
||||
|
||||
> “APIs should be easy to use and hard to misuse.”
|
||||
|
||||
Josh Bloch
|
||||
|
||||
One of the best ways I know to help a programmer avoid having to guess is to, when building an API, [focus on the default use case][15]. Make it as easy as you can for the caller to do the most common thing. However, I’ve written and talked a lot about API design in the past, so instead my interpretation of item 12 is: _don’t guess about performance_.
|
||||
|
||||
Despite how you may feel about Knuth’s advice, one of the drivers of Go’s success is its efficient execution. You can write efficient programs in Go and thus people _will_ choose Go because of this. There are a lot of misconceptions about performance, so my request is, when you’re looking to performance tune your code or you’re facing some dogmatic advice like “defer is slow”, “CGO is expensive”, or “always use atomics, not mutexes”, don’t guess.
|
||||
|
||||
Don’t complicate your code because of outdated dogma, and, if you think something is slow, first prove it with a benchmark. Go has excellent benchmarking and profiling tools that come in the distribution for free. Use them to find your bottlenecks.
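As an illustration, this is the shape of such a proof: a minimal benchmark sketch using only the tools in the distribution (the two strategies compared are placeholders, not a recommendation). Save it as a `_test.go` file and run `go test -bench=. -benchmem`:

```
package concat

import (
	"strings"
	"testing"
)

var words = []string{"the", "zen", "of", "go"}

// BenchmarkPlusEquals measures naive string concatenation.
func BenchmarkPlusEquals(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var s string
		for _, w := range words {
			s += w
		}
		_ = s
	}
}

// BenchmarkBuilder measures the same work using strings.Builder.
func BenchmarkBuilder(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var sb strings.Builder
		for _, w := range words {
			sb.WriteString(w)
		}
		_ = sb.String()
	}
}
```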
|
||||
|
||||
### Before you launch a goroutine, know when it will stop
|
||||
|
||||
At this point I think I’ve mined the valuable points from PEP-20, and possibly stretched its reinterpretation beyond the point of good taste. I think that’s fine, because although this was a useful rhetorical device, ultimately we are talking about two different languages.
|
||||
|
||||
> “You type g o, a space, and then a function call. Three keystrokes, you can’t make it much shorter than that. Three keystrokes and you’ve just started a sub process.”
|
||||
|
||||
Rob Pike, [Simplicity is Complicated][16], dotGo 2015
|
||||
|
||||
The next two suggestions I’ll dedicate to goroutines. Goroutines are the signature feature of the language, our answer for first class concurrency. They are so easy to use, just put the word `go` in front of the statement and you’ve launched that function asynchronously. It’s so simple: no threads, no stack sizes, no thread pool executors, no IDs, no tracking completion status.
|
||||
|
||||
Goroutines are cheap. Because of the runtime’s ability to multiplex goroutines onto a small pool of threads (which you don’t have to manage), hundreds of thousands, even millions, of goroutines are easily accommodated. This opens up designs that would not be practical under competing concurrency models like threads or evented callbacks.
|
||||
|
||||
But as cheap as goroutines are, they’re not free. At a minimum there’s a few kilobytes for their stack, which, when you’re getting up into the 10^6 goroutines, does start to add up. This is not to say you shouldn’t use millions of goroutines if that is what the design calls for, but when you do, it’s critical that you keep track of them, because 10^6 of anything can consume a non-trivial amount of resources in aggregate.
|
||||
|
||||
Goroutines are the key to resource ownership in Go. To be useful a goroutine has to do something, and that means it almost always holds a reference to, or ownership of, a resource: a lock, a network connection, a buffer with data, the sending end of a channel. While that goroutine is alive, the lock is held, the network connection remains open, the buffer is retained, and the receivers of the channel will continue to wait for more data.
|
||||
|
||||
The simplest way to free those resources is to tie them to the lifetime of the goroutine–when the goroutine exits, the resource has been freed. So while it’s near trivial to start a goroutine, before you write those three letters, g o and a space, make sure you have an answer to these questions:
|
||||
|
||||
* **Under what condition will a goroutine stop?** Go doesn’t have a way to tell a goroutine to exit. There is no stop or kill function, for good reason. If we cannot command a goroutine to stop, we must instead ask it, politely. Almost always this comes down to a channel operation. Range loops over a channel exit when the channel is closed. A channel will become selectable if it is closed. The signal from one goroutine to another is best expressed as a closed channel.
|
||||
* **What is required for that condition to arise?** If channels are both the vehicle to communicate between goroutines and the mechanism for them to signal completion, the next question for the programmer becomes: who will close the channel, and when will that happen?
|
||||
* **What signal will you use to know the goroutine has stopped?** When you signal a goroutine to stop, that stopping will happen at some time in the future relative to the goroutine’s frame of reference. It might happen quickly in terms of human perception, but computers execute billions of instructions every second, and from the point of view of each goroutine, their execution of instructions is unsynchronised. The solution is often to use a channel to signal back, or a `sync.WaitGroup` where a fan-in approach is needed (a minimal sketch follows this list).
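A minimal sketch that answers all three questions (the names are illustrative): the worker stops when `jobs` is closed (the condition), `main` is the one who closes it (what causes the condition to arise), and `wg.Wait` is the signal back that the goroutine has actually exited:

```
package main

import (
	"fmt"
	"sync"
)

func main() {
	jobs := make(chan int)
	var wg sync.WaitGroup

	wg.Add(1)
	go func() {
		defer wg.Done() // the signal back: this goroutine has stopped
		for j := range jobs { // range exits when jobs is closed
			fmt.Println("processed", j)
		}
	}()

	for i := 1; i <= 3; i++ {
		jobs <- i
	}
	close(jobs) // the event that makes the stop condition arise
	wg.Wait()   // blocks until the worker has really exited
	fmt.Println("worker stopped")
}
```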
|
||||
|
||||
|
||||
|
||||
### Leave concurrency to the caller
|
||||
|
||||
It is likely that in any serious Go program you write there will be concurrency involved. This raises a problem: much of the library code we write follows a one-goroutine-per-connection, or worker pool, pattern. How will you manage the lifetime of those goroutines?
|
||||
|
||||
`net/http` is a prime example. Shutting down the server owning the listening socket is relatively straightforward, but what about the goroutines spawned from that accepting socket? `net/http` does provide a context object inside the request object which can be used to signal–to code that is listening–that the request should be canceled, thereby terminating the goroutine; however, it is less clear how to know when all of these things have been done. It’s one thing to call `context.Cancel`, it’s another to know that the cancellation has completed.[2][17]
|
||||
|
||||
The point I want to make about `net/http` is that it’s a counterexample to good practice. Because each connection is handled by a goroutine spawned inside the `net/http.Server` type, the program, living outside the `net/http` package, has no ability to control the goroutines spawned for the accepting socket.
|
||||
|
||||
This is an area of design that is still evolving, with efforts like go-kit’s `run.Group` and the Go team’s [`ErrGroup`][18], which provide a framework to execute, cancel, and wait on functions run asynchronously.
|
||||
|
||||
The bigger design maxim here is for library writers, or anyone writing code that could be run asynchronously: leave the responsibility of starting the goroutine to your caller. Let the caller choose how they want to start, track, and wait on your function’s execution.
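For instance, here is a hedged sketch of the maxim using the [`ErrGroup`][18] package mentioned above (this assumes the `golang.org/x/sync` module is available; the URLs are placeholders): `fetch` is written synchronously, and the caller decides how to start, track, and wait:

```
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/sync/errgroup"
)

// fetch does no concurrency of its own; it simply blocks until done.
func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	var g errgroup.Group
	for _, url := range []string{"https://example.com/a", "https://example.com/b"} {
		url := url // capture a per-iteration copy (pre-Go 1.22 loop semantics)
		g.Go(func() error { return fetch(url) })
	}
	// The caller, not fetch, owns the starting, the waiting, and the first error.
	fmt.Println("err:", g.Wait())
}
```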
|
||||
|
||||
### Write tests to lock in the behaviour of your package’s API
|
||||
|
||||
Perhaps you were hoping to read an article from me where I didn’t rant about testing. Sadly, today is not that day.
|
||||
|
||||
Your tests are the contract about what your software does and does not do. Unit tests at the package level should lock in the behaviour of the package’s API. They describe, in code, what the package promises to do. If there is a unit test for each input permutation, you have defined the contract for what the code will do _in code_, not documentation.
|
||||
|
||||
This is a contract you can assert as simply as typing `go test`. At any stage, you can _know_ with a high degree of confidence, that the behaviour people relied on before your change continues to function after your change.
|
||||
|
||||
Tests lock in API behaviour. Any change that adds, modifies, or removes a public API must include changes to its tests.
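A minimal sketch of what this looks like (the function and cases are hypothetical): if someone later changes what `Split` promises, `go test` fails before the change ships:

```
package split

import (
	"reflect"
	"strings"
	"testing"
)

// Split is a stand-in for a public API whose contract we want to lock in.
func Split(s, sep string) []string { return strings.Split(s, sep) }

// TestSplit records, in code, what Split promises to do.
func TestSplit(t *testing.T) {
	got := Split("a/b/c", "/")
	want := []string{"a", "b", "c"}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("Split(%q, %q) = %v, want %v", "a/b/c", "/", got, want)
	}
}
```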
|
||||
|
||||
### Moderation is a virtue
|
||||
|
||||
Go is a simple language, with only 25 keywords. In some ways this makes the features that are built into the language stand out. Equally, these are the features the language sells itself on: lightweight concurrency and structural typing.
|
||||
|
||||
I think all of us have experienced the confusion that comes from trying to use all of Go’s features at once. Who was so excited to use channels that they used them as much as they could, as often as they could? Personally, I found the result was hard to test, fragile, and ultimately overcomplicated. Am I alone?
|
||||
|
||||
I had the same experience with goroutines. Attempting to break the work into tiny units, I created a hard-to-manage herd of goroutines and ultimately missed the observation that most of my goroutines were always blocked, waiting for their predecessor; the code was ultimately sequential, and I had added a lot of complexity for little real-world benefit. Who has experienced something like this?
|
||||
|
||||
I had the same experience with embedding. Initially I mistook it for inheritance. Then later I recreated the fragile base class problem by composing complicated types, which already had several responsibilities, into more complicated mega types.
|
||||
|
||||
This is potentially the least actionable piece of advice, but one I think is important enough to mention. The advice is always the same: all things in moderation, and Go’s features are no exception. If you can, don’t reach for a goroutine, a channel, an embedded struct, an anonymous function, yet another package, or an interface for everything; prefer the simpler approach to the clever approach.
|
||||
|
||||
### Maintainability counts
|
||||
|
||||
I want to close with one final item from PEP-20,
|
||||
|
||||
> “Readability Counts.”
|
||||
|
||||
The Zen of Python, Item 7
|
||||
|
||||
So much has been said about the importance of readability, not just in Go, but in all programming languages. People like me who stand on stages advocating for Go use words like simplicity, readability, clarity, and productivity, but ultimately they are all synonyms for one word–_maintainability_.
|
||||
|
||||
The real goal is to write maintainable code. Code that can live on after the original author. Code that can exist not just as a point in time investment, but as a foundation for future value. It’s not that readability doesn’t matter, maintainability matters _more_.
|
||||
|
||||
Go is not a language that optimises for clever one-liners. Go is not a language which optimises for the least number of lines in a program. We’re not optimising for the size of the source code on disk, nor for how long it takes to type the program into an editor. Rather, we want to optimise our code to be clear to the reader, because it’s the reader who’s going to have to maintain this code.
|
||||
|
||||
If you’re writing a program for yourself, maybe it only has to run once, or you’re the only person who’ll ever see it, then do whatever works for you. But if this is a piece of software that more than one person will contribute to, or that will be used by people over a long enough time that requirements, features, or the environment it runs in may change, then your goal must be for your program to be maintainable. If software cannot be maintained, then it will be rewritten; and that could be the last time your company invests in Go.
|
||||
|
||||
Can the thing you worked hard to build be maintained after you’re gone? What can you do today to make it easier for someone to maintain your code tomorrow?
|
||||
|
||||
##### [the-zen-of-go.netlify.com][2]
|
||||
|
||||
1. This part of the talk had several screenshots of the landing pages for the websites for [Ruby][19], [Swift][20], [Elm][21], [Go][22], [NodeJS][23], [Python][24], [Rust][25], highlighting how the language described itself.[][26]
|
||||
2. I tend to pick on `net/http` a lot, and this is not because it is bad, in fact it is the opposite, it is the most successful, oldest, most used API in the Go codebase. And because of that its design, evolution, and shortcoming have been thoroughly picked over. Think of this as flattery, not criticism.[][27]
|
||||
|
||||
|
||||
|
||||
#### Related posts:
|
||||
|
||||
1. [Never start a goroutine without knowing how it will stop][28]
|
||||
2. [Simplicity Debt][29]
|
||||
3. [Curious Channels][30]
|
||||
4. [Let’s talk about logging][31]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://dave.cheney.net/2020/02/23/the-zen-of-go
|
||||
|
||||
作者:[Dave Cheney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://dave.cheney.net/author/davecheney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.gophercon.org.il
|
||||
[2]: https://the-zen-of-go.netlify.com
|
||||
[3]: https://www.youtube.com/watch?v=yd_rtwYaXps
|
||||
[4]: https://dave.cheney.net/wp-content/uploads/2020/02/1011226.jpg
|
||||
[5]: https://dave.cheney.net/wp-content/uploads/2020/02/mean-girls-you-cant-sit-with-us-main.jpg
|
||||
[6]: http://go-proverbs.github.io
|
||||
[7]: https://danluu.com/microsoft-culture/
|
||||
[8]: https://dave.cheney.net/wp-content/uploads/2020/02/Lucovsky.001.jpeg
|
||||
[9]: https://www.python.org/dev/peps/pep-0020/
|
||||
[10]: https://dave.cheney.net/2019/01/08/avoid-package-names-like-base-util-or-common
|
||||
[11]: https://twitter.com/davecheney/status/539576755254611968?ref_src=twsrc%5Etfw
|
||||
[12]: tmp.iUoDiQyXMU#easy-footnote-bottom-1-3936 (This part of the talk had several screenshots of the landing pages for the websites for <a href="https://www.ruby-lang.org/en/">Ruby</a>, <a href="https://swift.org">Swift</a>, <a href="https://elm-lang.org">Elm</a>, <a href="https://golang.org">Go</a>, <a href="https://nodejs.org/en/">NodeJS</a>, <a href="https://www.python.org">Python</a>, <a href="https://www.rust-lang.org">Rust</a>, highlighting how the language described itself.)
|
||||
[13]: https://changelog.com/gotime/91
|
||||
[14]: https://medium.com/@matryer/line-of-sight-in-code-186dd7cdea88
|
||||
[15]: http://sweng.the-davies.net/Home/rustys-api-design-manifesto
|
||||
[16]: https://www.youtube.com/watch?v=rFejpH_tAHM
|
||||
[17]: tmp.iUoDiQyXMU#easy-footnote-bottom-2-3936 (I tend to pick on <code>net/http</code> a lot, and this is not because it is bad, in fact it is the opposite, it is the most successful, oldest, most used API in the Go codebase. And because of that its design, evolution, and shortcoming have been thoroughly picked over. Think of this as flattery, not criticism.)
|
||||
[18]: https://godoc.org/golang.org/x/sync/errgroup
|
||||
[19]: https://www.ruby-lang.org/en/
|
||||
[20]: https://swift.org
|
||||
[21]: https://elm-lang.org
|
||||
[22]: https://golang.org
|
||||
[23]: https://nodejs.org/en/
|
||||
[24]: https://www.python.org
|
||||
[25]: https://www.rust-lang.org
|
||||
[26]: tmp.iUoDiQyXMU#easy-footnote-1-3936
|
||||
[27]: tmp.iUoDiQyXMU#easy-footnote-2-3936
|
||||
[28]: https://dave.cheney.net/2016/12/22/never-start-a-goroutine-without-knowing-how-it-will-stop (Never start a goroutine without knowing how it will stop)
|
||||
[29]: https://dave.cheney.net/2017/06/15/simplicity-debt (Simplicity Debt)
|
||||
[30]: https://dave.cheney.net/2013/04/30/curious-channels (Curious Channels)
|
||||
[31]: https://dave.cheney.net/2015/11/05/lets-talk-about-logging (Let’s talk about logging)
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,106 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Audacious 4.0 Released With Qt 5: Here’s How to Install it on Ubuntu)
|
||||
[#]: via: (https://itsfoss.com/audacious-4-release/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Audacious 4.0 Released With Qt 5: Here’s How to Install it on Ubuntu
|
||||
======
|
||||
|
||||
[Audacious][1] is an open-source audio player available for multiple platforms, including Linux. Almost 2 years after its last major release, Audacious 4.0 has arrived with some big changes.
|
||||
|
||||
The latest release, Audacious 4.0, comes with a [Qt 5][2] UI by default. You can still build the old GTK2 UI from source; however, new features will be added to the Qt UI only.
|
||||
|
||||
Let’s take a look at what has changed and how to install the latest Audacious on your Linux system.
|
||||
|
||||
### Audacious 4.0 Key Changes & Features
|
||||
|
||||
![Audacious 4 Release][3]
|
||||
|
||||
Of course, the major change would be the use of Qt 5 UI as the default. In addition to that, there are a lot of improvements and feature additions mentioned in their [official announcement post][4], here they are:
|
||||
|
||||
* Clicking on playlist column headers sorts the playlist
|
||||
* Dragging playlist column headers changes the column order
|
||||
* Application-wide settings for volume and time step sizes
|
||||
* New option to hide playlist tabs
|
||||
* Sorting playlist by path now sorts folders after files
|
||||
* Implemented additional MPRIS calls for compatibility with KDE 5.16+
|
||||
* New OpenMPT-based tracker module plugin
|
||||
* New VU Meter visualization plugin
|
||||
* Added option to use a SOCKS network proxy
|
||||
* The Song Change plugin now works on Windows
|
||||
* New “Next Album” and “Previous Album” commands
|
||||
* The tag editor in Qt UI can now edit multiple files at once
|
||||
* Implemented equalizer presets window for Qt UI
|
||||
* Lyrics plugin gained the ability to save and load lyrics locally
|
||||
* Blur Scope and Spectrum Analyzer visualizations ported to Qt
|
||||
* MIDI plugin SoundFont selection ported to Qt
|
||||
* JACK output plugin gained some new options
|
||||
* Added option to endlessly loop PSF files
|
||||
|
||||
|
||||
|
||||
If you didn’t know about it previously, you can easily get it installed and use the equalizer coupled with [LADSPA][5] effects to tweak your music experience.
|
||||
|
||||
![Audacious Winamp Classic Interface][6]
|
||||
|
||||
### How to Install Audacious 4.0 on Ubuntu
|
||||
|
||||
It is worth noting that the [unofficial PPA][7] is made available by [UbuntuHandbook][8]. You can simply follow the instructions below to install it on Ubuntu 16.04, 18.04, 19.10, and 20.04.
|
||||
|
||||
1\. First, you have to add the PPA to your system by typing in the following command in the terminal:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:ubuntuhandbook1/apps
|
||||
```
|
||||
|
||||
2\. Next, you need to update/refresh the package information from the repositories/sources you have and proceed to install the app. Here’s how to do that:
|
||||
|
||||
```
|
||||
sudo apt update
|
||||
sudo apt install audacious audacious-plugins
|
||||
```
|
||||
|
||||
That’s it. You don’t have to do anything else. Later, if you want to [remove the PPA and the software][9], just type in the following commands in order:
|
||||
|
||||
```
|
||||
sudo add-apt-repository --remove ppa:ubuntuhandbook1/apps
|
||||
sudo apt remove --autoremove audacious audacious-plugins
|
||||
```
|
||||
|
||||
You can also check out their GitHub page for more information on the source and potentially install it on other Linux distros as well, if that’s what you’re looking for.
|
||||
|
||||
[Audacious Source Code][10]
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
The new features and the switch to the Qt 5 UI should improve the user experience and the functionality of the audio player. If you’re a fan of the classic Winamp interface, it works just fine as well, but it is missing a few features, as mentioned in their announcement post.
|
||||
|
||||
You can try it out and let me know your thoughts in the comments below!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/audacious-4-release/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://audacious-media-player.org
|
||||
[2]: https://doc.qt.io/qt-5/qt5-intro.html
|
||||
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/audacious-4-release.jpg?ssl=1
|
||||
[4]: https://audacious-media-player.org/news/45-audacious-4-0-released
|
||||
[5]: https://www.ladspa.org/
|
||||
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/audacious-winamp.jpg?ssl=1
|
||||
[7]: https://itsfoss.com/ppa-guide/
|
||||
[8]: http://ubuntuhandbook.org/index.php/2020/03/audacious-4-0-released-qt5-ui/
|
||||
[9]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
|
||||
[10]: https://github.com/audacious-media-player/audacious
|
@ -0,0 +1,232 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to examine processes running on Linux)
|
||||
[#]: via: (https://www.networkworld.com/article/3543232/how-to-examine-processes-running-on-linux.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
How to examine processes running on Linux
|
||||
======
|
||||
|
||||
Thinkstock
|
||||
|
||||
There are quite a number of ways to look at running processes on Linux systems – to see what’s running, the resources that processes are using, how the system is affected by the load and how memory is being used. Each command gives you a different view, and the range of details is considerable. In this post, we’ll run through a series of commands that can help you view process details in a number of different ways.
|
||||
|
||||
### ps
|
||||
|
||||
While the **ps** command is the most obvious command for examining processes, the arguments that you use when running **ps** will make a big difference in how much information will be provided. With no arguments, **ps** will only show processes associated with your current login session. Add a **-u** and you'll see extended details.
|
||||
|
||||
Here is a comparison:
|
||||
|
||||
```
|
||||
nemo$ ps
|
||||
PID TTY TIME CMD
|
||||
45867 pts/1 00:00:00 bash
|
||||
46140 pts/1 00:00:00 ps
|
||||
nemo$ ps -u
|
||||
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
|
||||
nemo 45867 0.0 0.0 11232 5636 pts/1 Ss 19:04 0:00 -bash
|
||||
nemo 46141 0.0 0.0 11700 3648 pts/1 R+ 19:16 0:00 ps -u
|
||||
```
|
||||
|
||||
Using **ps -ef** will display details on all of the processes running on the system, but **ps -eF** will add some additional details.
|
||||
|
||||
```
|
||||
$ ps -ef | head -2
|
||||
UID PID PPID C STIME TTY TIME CMD
|
||||
root 1 0 0 May10 ? 00:00:06 /sbin/init splash
|
||||
$ ps -eF | head -2
|
||||
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
|
||||
root 1 0 0 42108 12524 0 May10 ? 00:00:06 /sbin/init splash
|
||||
```
|
||||
|
||||
Both commands show who is running the process, the process and parent process IDs, process start time, accumulated run time and the task being run. The additional fields shown when you use **F** instead of **f** include:
|
||||
|
||||
* SZ: the process **size** in physical pages for the core image of the process
|
||||
* RSS: the **resident set size** which shows how much memory is allocated to those parts of the process in RAM. It does not include memory that is swapped out, but does include memory from shared libraries as long as the pages from those libraries are currently in memory. It also includes stack and heap memory.
|
||||
* PSR: the **processor** the process is using
|
||||
|
||||
|
||||
|
||||
##### ps -fU
|
||||
|
||||
You can list processes for a particular user with a command like "ps -ef | grep USERNAME", but with the **ps -fU** command, you’re going to see considerably more data. This is because details of processes that are being run on the user's behalf are also included. In fact, nearly all of the processes shown have been kicked off by the system simply to support this user’s online session. Nemo has only just logged in and is not yet running any commands or scripts.
|
||||
|
||||
```
|
||||
$ ps -fU nemo
|
||||
UID PID PPID C STIME TTY TIME CMD
|
||||
nemo 45726 1 0 19:04 ? 00:00:00 /lib/systemd/systemd --user
|
||||
nemo 45732 45726 0 19:04 ? 00:00:00 (sd-pam)
|
||||
nemo 45738 45726 0 19:04 ? 00:00:00 /usr/bin/pulseaudio --daemon
|
||||
nemo 45740 45726 0 19:04 ? 00:00:00 /usr/libexec/tracker-miner-f
|
||||
nemo 45754 45726 0 19:04 ? 00:00:00 /usr/bin/dbus-daemon --sessi
|
||||
nemo 45829 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfsd
|
||||
nemo 45856 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfsd-fuse /run
|
||||
nemo 45862 45706 0 19:04 ? 00:00:00 sshd: nemo@pts/1
|
||||
nemo 45864 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfs-udisks2-vo
|
||||
nemo 45867 45862 0 19:04 pts/1 00:00:00 -bash
|
||||
nemo 45878 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfs-afc-volume
|
||||
nemo 45883 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfs-goa-volume
|
||||
nemo 45887 45726 0 19:04 ? 00:00:00 /usr/libexec/goa-daemon
|
||||
nemo 45895 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfs-mtp-volume
|
||||
nemo 45896 45726 0 19:04 ? 00:00:00 /usr/libexec/goa-identity-se
|
||||
nemo 45903 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfs-gphoto2-vo
|
||||
nemo 45946 45726 0 19:04 ? 00:00:00 /usr/libexec/gvfsd-metadata
|
||||
```
|
||||
|
||||
Note that the only process with an assigned TTY is Nemo's shell and that the parent of all of the other processes is **systemd**.
|
||||
|
||||
You can supply a comma-separated list of usernames instead of a single name. Just be prepared to be looking at quite a bit more data.
|
||||
|
||||
#### top and htop
|
||||
|
||||
The **top** and **htop** commands will help when you want to get an idea of which processes are using the most resources and allow you to reorder your view depending on what criteria you want to use to rank the processes (e.g., highest CPU or memory use).
|
||||
|
||||
```
|
||||
top - 11:51:27 up 1 day, 21:40, 1 user, load average: 0.08, 0.02, 0.01
|
||||
Tasks: 211 total, 1 running, 210 sleeping, 0 stopped, 0 zombie
|
||||
%Cpu(s): 5.0 us, 0.5 sy, 0.0 ni, 94.3 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
|
||||
MiB Mem : 5944.4 total, 3527.4 free, 565.1 used, 1851.9 buff/cache
|
||||
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 5084.3 avail Mem
|
||||
|
||||
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
|
||||
999 root 20 0 394660 14380 10912 S 8.0 0.2 0:46.54 udisksd
|
||||
65224 shs 20 0 314268 9824 8084 S 1.7 0.2 0:00.34 gvfs-ud+
|
||||
2034 gdm 20 0 314264 9820 7992 S 1.3 0.2 0:06.25 gvfs-ud+
|
||||
67909 root 20 0 0 0 0 I 0.3 0.0 0:00.09 kworker+
|
||||
1 root 20 0 168432 12532 8564 S 0.0 0.2 0:09.93 systemd
|
||||
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
|
||||
```
|
||||
|
||||
Use **shift+m** to sort by memory use and **shift+p** to go back to sorting by CPU usage (the default).
|
||||
|
||||
#### /proc
|
||||
|
||||
A tremendous amount of information is available on running processes in the **/proc** directory. In fact, if you haven't visited **/proc** quite a few times, you might be astounded by the amount of detail available. Just keep in mind that **/proc** is a very different kind of file system. As an interface to kernel data, it provides a live view of the process details currently in use by the system.
|
||||
|
||||
Some of the more useful **/proc** files for viewing include **cmdline**, **environ**, **fd**, **limits** and **status**. The following views provide some samples of what you might see.
|
||||
|
||||
The **status** file shows the process that is running (bash), its status, the user and group ID for the person running bash, a full list of the groups the user is a member of, and the process ID and parent process ID.
|
||||
|
||||
```
|
||||
$ head -11 /proc/65333/status
|
||||
Name: bash
|
||||
Umask: 0002
|
||||
State: S (sleeping)
|
||||
Tgid: 65333
|
||||
Ngid: 0
|
||||
Pid: 65333
|
||||
PPid: 65320
|
||||
TracerPid: 0
|
||||
Uid: 1000 1000 1000 1000
|
||||
Gid: 1000 1000 1000 1000
|
||||
FDSize: 256
|
||||
Groups: 4 11 24 27 30 46 118 128 500 1000
|
||||
...
|
||||
```
|
||||
|
||||
The **cmdline** file shows the command line used to start the process.
|
||||
|
||||
```
|
||||
$ cat /proc/65333/cmdline
|
||||
-bash
|
||||
```
|
||||
|
||||
The **environ** file shows the environment variables that are in effect. Note that entries in this file are separated by NUL bytes rather than newlines, which is why the output below runs together.
|
||||
|
||||
```
|
||||
$ cat environ
|
||||
USER=shsLOGNAME=shsHOME=/home/shsPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/gamesSHELL=/bin/bashTERM=xtermXDG_SESSION_ID=626XDG_RUNTIME_DIR=/run/user/1000DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/busXDG_SESSION_TYPE=ttyXDG_SESSION_CLASS=userMOTD_SHOWN=pamLANG=en_US.UTF-8SSH_CLIENT=192.168.0.19 9385 22SSH_CONNECTION=192.168.0.19 9385 192.168.0.11 22SSH_TTY=/dev/pts/0$
|
||||
```
|
||||
|
||||
The **fd** entry is a directory of symbolic links, one for each of the process's open file descriptors. Note how they reflect the pseudo-tty that is being used (pts/0).
|
||||
|
||||
```
|
||||
$ ls -l /proc/65333/fd
|
||||
total 0
|
||||
lrwx------ 1 shs shs 64 May 12 09:45 0 -> /dev/pts/0
|
||||
lrwx------ 1 shs shs 64 May 12 09:45 1 -> /dev/pts/0
|
||||
lrwx------ 1 shs shs 64 May 12 09:45 2 -> /dev/pts/0
|
||||
lrwx------ 1 shs shs 64 May 12 09:56 255 -> /dev/pts/0
|
||||
$ who
|
||||
shs pts/0 2020-05-12 09:45 (192.168.0.19)
|
||||
```
|
||||
|
||||
The **limits** file contains information about the limits imposed on the process.
|
||||
|
||||
```
|
||||
$ cat limits
|
||||
Limit Soft Limit Hard Limit Units
|
||||
Max cpu time unlimited unlimited seconds
|
||||
Max file size unlimited unlimited bytes
|
||||
Max data size unlimited unlimited bytes
|
||||
Max stack size 8388608 unlimited bytes
|
||||
Max core file size 0 unlimited bytes
|
||||
Max resident set unlimited unlimited bytes
|
||||
Max processes 23554 23554 processes
|
||||
Max open files 1024 1048576 files
|
||||
Max locked memory 67108864 67108864 bytes
|
||||
Max address space unlimited unlimited bytes
|
||||
Max file locks unlimited unlimited locks
|
||||
Max pending signals 23554 23554 signals
|
||||
Max msgqueue size 819200 819200 bytes
|
||||
Max nice priority 0 0
|
||||
Max realtime priority 0 0
|
||||
Max realtime timeout unlimited unlimited us
|
||||
```
|
||||
|
||||
#### pmap
|
||||
|
||||
The **pmap** command takes you in an entirely different direction when it comes to memory use. It provides a detailed map of a process’s memory usage. To make sense of this, you need to keep in mind that processes do not run entirely on their own. Instead, they make use of a wide range of system resources. The truncated **pmap** output below shows a portion of the memory map for a single user’s bash login along with some memory usage totals at the bottom.
|
||||
|
||||
```
|
||||
$ pmap -x 43120
|
||||
43120: -bash
|
||||
Address Kbytes RSS Dirty Mode Mapping
|
||||
000055887655b000 180 180 0 r---- bash
|
||||
0000558876588000 708 708 0 r-x-- bash
|
||||
0000558876639000 220 148 0 r---- bash
|
||||
0000558876670000 16 16 16 r---- bash
|
||||
0000558876674000 36 36 36 rw--- bash
|
||||
000055887667d000 40 28 28 rw--- [ anon ]
|
||||
0000558876b96000 1328 1312 1312 rw--- [ anon ]
|
||||
00007f0bd9a7e000 28 28 0 r---- libpthread-2.31.so
|
||||
00007f0bd9a85000 68 68 0 r-x-- libpthread-2.31.so
|
||||
00007f0bd9a96000 20 0 0 r---- libpthread-2.31.so
|
||||
00007f0bd9a9b000 4 4 4 r---- libpthread-2.31.so
|
||||
00007f0bd9a9c000 4 4 4 rw--- libpthread-2.31.so
|
||||
00007f0bd9a9d000 16 4 4 rw--- [ anon ]
|
||||
00007f0bd9aa1000 20 20 0 r---- libnss_systemd.so.2
|
||||
00007f0bd9aa6000 148 148 0 r-x-- libnss_systemd.so.2
|
||||
...
|
||||
ffffffffff600000 4 0 0 --x-- [ anon ]
|
||||
---------------- ------- ------- -------
|
||||
total kB 11368 5664 1656
|
||||
|
||||
Kbytes: size of map in kilobytes
|
||||
RSS: resident set size in kilobytes
|
||||
Dirty: dirty pages (both shared and private) in kilobytes
|
||||
```
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3543232/how-to-examine-processes-running-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.facebook.com/NetworkWorld/
|
||||
[2]: https://www.linkedin.com/company/network-world
|
214
sources/tech/20200516 Fatih-s question.md
Normal file
@ -0,0 +1,214 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Fatih’s question)
|
||||
[#]: via: (https://dave.cheney.net/2020/05/16/fatihs-question)
|
||||
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
|
||||
|
||||
Fatih’s question
|
||||
======
|
||||
|
||||
A few days ago Fatih posted [this question][1] on twitter.
|
||||
|
||||
I’m going to attempt to give my answer, however to do that I need to apply some simplifications as my previous attempts to answer it involved a lot of phrases like _a pointer to a pointer_, and other unhelpful waffling. Hopefully my simplified answer can be useful in building a mental framework to answer Fatih’s original question.
|
||||
|
||||
### Restating the question
|
||||
|
||||
Fatih’s original tweet showed [four different variations][2] of `json.Unmarshal`. I’m going to focus on the last two, which I’ll rewrite a little:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
type Result struct {
|
||||
Foo string `json:"foo"`
|
||||
}
|
||||
|
||||
func main() {
|
||||
content := []byte(`{"foo": "bar"}`)
|
||||
var result1, result2 *Result
|
||||
|
||||
err := json.Unmarshal(content, &result1)
|
||||
fmt.Println(result1, err) // &{bar} <nil>
|
||||
|
||||
err = json.Unmarshal(content, result2)
|
||||
fmt.Println(result2, err) // <nil> json: Unmarshal(nil *main.Result)
|
||||
}
|
||||
```
|
||||
|
||||
Restated in words, `result1` and `result2` are the same type; `*Result`. Decoding into `result1` works as expected, whereas decoding into `result2` causes the `json` package to complain that the value passed to `Unmarshal` is `nil`. However, both values were declared without an initialiser so both would have taken on the type’s zero value, `nil`.
|
||||
|
||||
Eagle-eyed readers will have spotted that the reason for the difference is that the first invocation is passed `&result1`, while the second is passed `result2`, but this explanation is unsatisfactory because the documentation for `json.Unmarshal` states:
|
||||
|
||||
> Unmarshal parses the JSON-encoded data and stores the result in the value pointed to by v. **If v is nil or not a pointer**, Unmarshal returns an InvalidUnmarshalError.
|
||||
|
||||
Which is confusing because `result1` and `result2` _are_ pointers. Furthermore, without initialisation, both _are_ `nil`. Now, the documentation is correct (as you’d expect from a package that has been hammered on for a decade), but explaining _why_ takes a little more investigation.
|
||||
|
||||
### Functions receive a copy of their arguments
|
||||
|
||||
Every assignment in Go is a copy, this includes function arguments and return values.
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
)
|
||||
|
||||
func increment(v int) {
|
||||
v++
|
||||
}
|
||||
|
||||
func main() {
|
||||
v := 1
|
||||
increment(v)
|
||||
fmt.Println(v) // 1
|
||||
}
|
||||
```
|
||||
|
||||
In this example, `increment` is operating on a _copy_ of `main`‘s `v`. This is because the `v` declared in `main` and `increment`‘s `v` parameter have different addresses in memory. Thus changes to `increment`‘s `v` cannot affect the contents of `main`‘s `v`.
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
)
|
||||
|
||||
func increment(v *int) {
|
||||
*v++
|
||||
}
|
||||
|
||||
func main() {
|
||||
v := 1
|
||||
increment(&v)
|
||||
fmt.Println(v) // 2
|
||||
}
|
||||
```
|
||||
|
||||
If we wanted to write `increment` in a way that it could affect the contents of its caller we would need to pass a reference, a pointer, to `main.v`.[1][3] This example demonstrates why `json.Unmarshal` needs a pointer to the value to decode JSON into.
|
||||
|
||||
### Pointers to pointers
|
||||
|
||||
Returning to the original question, both `result1` and `result2` are declared as `*Result`, that is, pointers to a `Result` value. We established that you have to pass the address of the caller’s value to `json.Unmarshal`, otherwise it won’t be able to alter the contents of the caller’s value. Why, then, must we pass the address of `result1`, a `**Result`, a pointer to a pointer to a `Result`, for the operation to succeed?
|
||||
|
||||
To explain this another detour is required. Consider this code:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
type Result struct {
|
||||
Foo *string `json:"foo"`
|
||||
}
|
||||
|
||||
func main() {
|
||||
content := []byte(`{"foo": "bar"}`)
|
||||
var result1 *Result
|
||||
|
||||
err := json.Unmarshal(content, &result1)
|
||||
fmt.Printf("%#v %v", result1, err) // &main.Result{Foo:(*string)(0xc0000102f0)} <nil>
|
||||
}
|
||||
```
|
||||
|
||||
In this example `Result` contains a pointer typed field, `Foo *string`. During JSON decoding `Unmarshal` allocated a new `string` value, stored the value `bar` in it, then placed the address of the string in `Result.Foo`. This behaviour is quite handy, as it frees the caller from having to initialise `Result.Foo` and makes it easier to detect when a field was not initialised because the JSON did not contain a value. Beyond the convenience this offers for simple examples, it would be prohibitively difficult for the caller to properly initialise all the reference type fields in a structure before decoding unknown JSON without first inspecting the incoming JSON, which itself may be problematic if the input is coming from an `io.Reader` without the ability to rewind the input.
|
||||
|
||||
> To unmarshal JSON into a pointer, Unmarshal first handles the case of the JSON being the JSON literal null. In that case, Unmarshal sets the pointer to nil. Otherwise, Unmarshal unmarshals the JSON into the value pointed at by the pointer. **If the pointer is nil, Unmarshal allocates a new value for it to point to**.
|
||||
|
||||
`json.Unmarshal`‘s handling of pointer fields is clearly documented, and works as you would expect, allocating a new value whenever there is a need to decode into a pointer shaped field. It is this behaviour that gives us a hint as to what is happening in the original example.
|
||||
|
||||
We’ve seen that when `json.Unmarshal` encounters a field which points to `nil` it will allocate a new value of the correct type and assign its address to the field before proceeding. Not only is this behaviour applied recursively–for example, in the case of a complex structure which contains pointers to other structures–but it also applies to the _value passed to `Unmarshal`._
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
func main() {
|
||||
content := []byte(`1`)
|
||||
var result *int
|
||||
|
||||
err := json.Unmarshal(content, &result)
|
||||
fmt.Println(*result, err) // 1 <nil>
|
||||
}
|
||||
```
|
||||
|
||||
In this example `result` is not a struct, but a simple `*int` which, lacking an initialiser, defaults to `nil`. After JSON decoding, `result` now points to an `int` with the value `1`.
|
||||
|
||||
### Putting the pieces together
|
||||
|
||||
Now I think I’m ready to take a shot at answering Fatih’s question.
|
||||
|
||||
`json.Unmarshal` requires the address of the variable you want to decode into, otherwise it would decode into a temporary copy which would be discarded on return. Normally this is done by declaring a value, then passing its address, or explicitly initialising the value:
|
||||
|
||||
```
|
||||
var result1 Result
|
||||
err := json.Unmarshal(content, &result1) // this is fine
|
||||
|
||||
var result2 = new(Result)
|
||||
err = json.Unmarshal(content, result2) // and this
|
||||
|
||||
var result3 = &Result{}
|
||||
err = json.Unmarshal(content, result3) // this is also fine
|
||||
```
|
||||
|
||||
In all three cases the address that the `*Result` points to is not `nil`; it points to initialised memory that `json.Unmarshal` decodes into.
|
||||
|
||||
Now consider what happens when `json.Unmarshal` encounters this:
|
||||
|
||||
```
|
||||
var result4 *Result
|
||||
err = json.Unmarshal(content, result4) // err json: Unmarshal(nil *main.Result)
|
||||
```
|
||||
|
||||
`result2`, `result3`, and the expression `&result1` point to a `Result`. However `result4`, even though it has the same type as the previous three, does not point to initialised memory; it points to `nil`. Thus, according to the examples we saw previously, before `json.Unmarshal` can decode into it, the memory `result4` points to must be initialised.
|
||||
|
||||
However, because each function receives a copy of its arguments, the caller’s `result4` variable and the copy inside `json.Unmarshal` are unique. `json.Unmarshal` can allocate a new `Result` value and decode into it, but it cannot alter `result4` to point to this new value because it was not provided with a reference to `result4`, only a copy of its contents.
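To close the loop, here is a sketch of the fix, mirroring the `result1` case from the beginning: pass `&result4`, a `**Result`, so `Unmarshal` receives a reference to the caller’s pointer and can point it at the `Result` value it allocates:

```
package main

import (
	"encoding/json"
	"fmt"
)

type Result struct {
	Foo string `json:"foo"`
}

func main() {
	content := []byte(`{"foo": "bar"}`)
	var result4 *Result

	// Passing &result4 gives Unmarshal a reference to the caller's
	// pointer, so it can allocate a Result and make result4 point to it.
	err := json.Unmarshal(content, &result4)
	fmt.Println(result4, err) // &{bar} <nil>
}
```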
|
||||
|
||||
1. This does not violate the _everything is a copy_ rule, a copy of a pointer to `main.v` still points to `main.v`.[][4]
|
||||
|
||||
|
||||
|
||||
#### Related posts:
|
||||
|
||||
1. [Should methods be declared on T or *T][5]
|
||||
2. [Ice cream makers and data races][6]
|
||||
3. [Understand Go pointers in less than 800 words or your money back][7]
|
||||
4. [Slices from the ground up][8]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://dave.cheney.net/2020/05/16/fatihs-question
|
||||
|
||||
作者:[Dave Cheney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://dave.cheney.net/author/davecheney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://twitter.com/fatih/status/1260683136842608640
|
||||
[2]: https://play.golang.org/p/g2yUIYrV67F
|
||||
[3]: tmp.dRxkHxYRQS#easy-footnote-bottom-1-4153 (This does not violate the <em>everything is a copy</em> rule, a copy of a pointer to <code>main.v</code> still points to <code>main.v</code>.)
|
||||
[4]: tmp.dRxkHxYRQS#easy-footnote-1-4153
|
||||
[5]: https://dave.cheney.net/2016/03/19/should-methods-be-declared-on-t-or-t (Should methods be declared on T or *T)
|
||||
[6]: https://dave.cheney.net/2014/06/27/ice-cream-makers-and-data-races (Ice cream makers and data races)
|
||||
[7]: https://dave.cheney.net/2017/04/26/understand-go-pointers-in-less-than-800-words-or-your-money-back (Understand Go pointers in less than 800 words or your money back)
|
||||
[8]: https://dave.cheney.net/2018/07/12/slices-from-the-ground-up (Slices from the ground up)
|
@ -0,0 +1,153 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to use Windows Subsystem for Linux to open Linux on Windows 10 machines)
|
||||
[#]: via: (https://www.networkworld.com/article/3543845/how-to-use-windows-subsystem-for-linux-to-open-linux-on-windows-10-machines.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
How to use Windows Subsystem for Linux to open Linux on Windows 10 machines
|
||||
======
|
||||
Opening a Linux terminal on a Windows 10 desktop can help you practice your Linux skills and explore Windows from an entirely different point of view. In this post, we look at Ubuntu 18.04 running through Windows Subsystem for Linux (WSL).
|
||||
[Nicolas Solerieu modified by IDG Comm. / Linux][1] [(CC0)][2]
|
||||
|
||||
Believe it or not, it's possible to open a Linux terminal on a Windows 10 system and you might be surprised how much Linux functionality you’ll be able to get by doing so.
|
||||
|
||||
You can run Linux commands, traipse around the provided Linux file system and even take a novel look at Windows files. The experience isn’t altogether different than opening a terminal window on a Linux desktop, with a few interesting exceptions.
|
||||
|
||||
[[Get regularly scheduled insights by signing up for Network World newsletters.]][3]
|
||||
|
||||
What is needed to make this happen is something called the Windows Subsystem for Linux (WSL) and a Windows 10 x86 PC.
|
||||
|
||||
### Linux versions for WSL
|
||||
|
||||
There are a number of options for running Linux on top of Windows. The Linux OS choices include:
|
||||
|
||||
* [Ubuntu 16.04 LTS][4]
|
||||
* [Ubuntu 18.04 LTS][5]
|
||||
* [openSUSE Leap 15.1][6]
|
||||
* [SUSE Linux Enterprise Server 12 SP5][7]
|
||||
* [SUSE Linux Enterprise Server 15 SP1][8]
|
||||
* [Kali Linux][9]
|
||||
* [Debian GNU/Linux][10]
|
||||
* [Fedora Remix for WSL][11]
|
||||
* [Pengwin][12]
|
||||
* [Pengwin Enterprise][13]
|
||||
* [Alpine WSL][14]
|
||||
|
||||
|
||||
|
||||
Ubuntu 18.04 LTS is just one option and, in this post, we’ll take a look at how the terminal runs on Windows using this particular distribution and how much it feels like working on a Linux system directly.
|
||||
|
||||
If you want to look into the process of putting an Ubuntu distribution on your Windows system, you can start with this page:
|
||||
|
||||
<https://ubuntu.com/tutorials/tutorial-ubuntu-on-windows#1-overview>
|
||||
|
||||
As part of the initial setup of installing your Linux on Windows terminal, you’ll be asked to create your user account. Once you do that and open the terminal, you can start to explore. One of the most noticeable differences between your Linux-on-Windows terminal and a terminal window on a Linux system is that examining processes isn’t going to show you much. After all, Windows will be providing the bulk of the required OS support. You’re likely to see something like this:
|
||||
|
||||
```
|
||||
myacct@hostname:~$ ps -ef
|
||||
UID PID PPID C STIME TTY TIME CMD
|
||||
root 1 0 0 12:45 ? 00:00:00 /init
|
||||
root 7 1 0 12:45 tty1 00:00:00 /init
|
||||
shs 8 7 0 12:45 tty1 00:00:00 -bash
|
||||
shs 166 8 0 13:32 tty1 00:00:00 ps -ef
|
||||
```
|
||||
|
||||
Yes, that's it.
|
||||
|
||||
If you’re anything like me, one of your next moves might be to get a handle on the available commands. If you just count the files in the **/bin** and **/usr/bin** directories, you should see that there are a lot of commands:
|
||||
|
||||
```
|
||||
myacct@hostname:~$ ls /bin | wc -l
|
||||
171
|
||||
myacct@hostname:~$ ls /usr/bin | wc -l
|
||||
707
|
||||
```
|
||||
|
||||
You can list available commands with commands like these (output truncated for this post):
|
||||
|
||||
```
|
||||
myacct@hostname:~$ ls /bin | head -25 | column
|
||||
bash btrfs-map-logical bunzip2 bzegrep bzip2recover
|
||||
btrfs btrfs-select-super busybox bzexe bzless
|
||||
btrfs-debug-tree btrfs-zero-log bzcat bzfgrep bzmore
|
||||
btrfs-find-root btrfsck bzcmp bzgrep cat
|
||||
btrfs-image btrfstune bzdiff bzip2 chacl
|
||||
|
||||
myacct@hostname:~$ ls /usr/bin | head -25 | column
|
||||
NF aa-exec apport-cli apt apt-extracttempl*
|
||||
VGAuthService acpi_listen apport-collect apt-add-repository apt-ftparchive
|
||||
X11 add-apt-repository apport-unpack apt-cache apt-get
|
||||
[ addpart appres apt-cdrom apt-key
|
||||
aa-enabled apport-bug apropos apt-config apt-mark
|
||||
```
|
||||
|
||||
You can update the system with **apt** commands (sudo apt update, sudo apt upgrade). You can even use Linux commands to move to the Windows disk partitions as you like. Notice the last three entries in the output below. These represent several drives on the system.
|
||||
|
||||
```
|
||||
myacct@hostname:~$ df -k
|
||||
Filesystem 1K-blocks Used Available Use% Mounted on
|
||||
rootfs 973067784 326920584 646147200 34% /
|
||||
none 973067784 326920584 646147200 34% /dev
|
||||
none 973067784 326920584 646147200 34% /run
|
||||
none 973067784 326920584 646147200 34% /run/lock
|
||||
none 973067784 326920584 646147200 34% /run/shm
|
||||
none 973067784 326920584 646147200 34% /run/user
|
||||
cgroup 973067784 326920584 646147200 34% /sys/fs/cgroup
|
||||
C:\ 973067784 326920584 646147200 34% /mnt/c <== C drive
|
||||
I:\ 976760000 231268208 745491792 24% /mnt/I <== external drive
|
||||
L:\ 409599996 159240 409440756 1% /mnt/l <== USB thumb drive
|
||||
```
|
||||
|
||||
If you’re interested in moving out of the Linux space and into the Windows portion of the file system within your **WSL** session, you can do that easily. Replace “myname” with your Windows account name and a **cd /mnt/c/Users/_myname_/Desktop** will take you to your Windows desktop. From there, don’t be surprised if in listing your files you see **WRL####.tmp** files that don’t seem to exist when you look at your desktop and don’t show up if you look at your files by opening a command prompt. These appear to be temporary files used by Windows for document management. You might also see files listed that look like **‘~$nux notes.docx’** – perhaps ghosts of files that were once located on your desktop. You won’t see those files when you look at your desktop on Windows – even using a **cmd** window.
|
||||
|
||||
Note that you’ll also see Windows directories such as **‘Program Files’** listed in single quotes in your Linux terminal, as you would any file with blanks in its name. You can even start a Windows executable from your Linux terminal. For example:
|
||||
|
||||
```
|
||||
myacct@hostname: $ cd /mnt/c/WINDOWS/System32/WindowsPowerShell/v1.0
|
||||
myacct@hostname: $ powershell.exe
|
||||
```
|
||||
|
||||
If you do this, type **exit** when you want to end the **powershell** session.
|
||||
|
||||
Linux commands all seem to work as expected, though I don’t get any output when I run the **who** command.
|
||||
|
||||
Windows **.txt** files will display with **cat** commands, but the last line in a file will likely be displayed on the same line as the following shell prompt. This is because these files won’t end with a linefeed as Linux text files do.
|
||||
|
||||
You can create other accounts and switch user to them (e.g., **su - nemo**) if you like, but you cannot log into them directly.
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3543845/how-to-use-windows-subsystem-for-linux-to-open-linux-on-windows-10-machines.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://unsplash.com/photos/4gRNmhGzYZE
|
||||
[2]: https://creativecommons.org/publicdomain/zero/1.0/
|
||||
[3]: https://www.networkworld.com/newsletters/signup.html
|
||||
[4]: https://www.microsoft.com/store/apps/9pjn388hp8c9
|
||||
[5]: https://www.microsoft.com/store/apps/9N9TNGVNDL3Q
|
||||
[6]: https://www.microsoft.com/store/apps/9NJFZK00FGKV
|
||||
[7]: https://www.microsoft.com/store/apps/9MZ3D1TRP8T1
|
||||
[8]: https://www.microsoft.com/store/apps/9PN498VPMF3Z
|
||||
[9]: https://www.microsoft.com/store/apps/9PKR34TNCV07
|
||||
[10]: https://www.microsoft.com/store/apps/9MSVKQC78PK6
|
||||
[11]: https://www.microsoft.com/store/apps/9n6gdm4k2hnc
|
||||
[12]: https://www.microsoft.com/store/apps/9NV1GV1PXZ6P
|
||||
[13]: https://www.microsoft.com/store/apps/9N8LP0X93VCP
|
||||
[14]: https://www.microsoft.com/store/apps/9p804crf0395
|
||||
[15]: https://www.facebook.com/NetworkWorld/
|
||||
[16]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,198 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to configure your router using VTY shell)
|
||||
[#]: via: (https://opensource.com/article/20/5/vty-shell)
|
||||
[#]: author: (M Umer https://opensource.com/users/noisybotnet)
|
||||
|
||||
How to configure your router using VTY shell
|
||||
======
|
||||
Free Range Routing gives you options for implementing multiple protocols. This guide will get you started.
|
||||
![Multi-colored and directional network computer cables][1]
|
||||
|
||||
Recently, I wrote an article explaining how we can implement Open Shortest Path First (OSPF) using the [Quagga][2] routing suite. There are multiple software suites that can be used instead of Quagga to implement different routing protocols. One such option is Free Range Routing (FRR).
|
||||
|
||||
### FRR
|
||||
|
||||
[FRR][3] is a routing software suite, derived from Quagga and distributed under the GNU GPLv2 license. Like Quagga, it provides implementations of all major routing protocols, such as OSPF, Routing Information Protocol (RIP), Border Gateway Protocol (BGP), and Intermediate System-to-Intermediate System (IS-IS), for Unix-like platforms.
|
||||
|
||||
Several companies, such as Big Switch Networks, Cumulus, Open Source Routing, and 6wind, who were behind the development of Quagga, created FRR to improve on Quagga's well-established foundations.
|
||||
|
||||
#### Architecture
|
||||
|
||||
FRR is a suite of daemons that work together to build the routing table. Each major protocol is implemented in its own daemon, and these daemons talk to the core and protocol-independent daemon Zebra, which provides kernel routing table updates, interface lookups, and redistribution of routes between different routing protocols. Each protocol-specific daemon is responsible for running the relevant protocol and building the routing table based on the information exchanged.
|
||||
|
||||
![FRR architecture][4]
|
||||
|
||||
### VTY shell
|
||||
|
||||
[VTYSH][5] is an integrated shell for the FRR routing engine. It amalgamates all the CLI commands defined in each of the daemons and presents them to the user in a single shell. It provides a Cisco-like modal CLI, and many of the commands are similar to Cisco IOS commands. There are different modes to the CLI, and certain commands are only available within a specific mode.
|
||||
|
||||
### Setup
|
||||
|
||||
In this tutorial, we'll be implementing the Routing Information Protocol (RIP) to configure dynamic routing using FRR. We can do this in two ways: either by editing the protocol daemon configuration file in an editor or by using the VTY shell. We'll be using the VTY shell in this example. Our setup includes two CentOS 7.7 hosts, named Alpha and Beta. Both hosts have two network interfaces and share access to the 192.168.122.0/24 network. We'll be advertising routes for the 10.12.11.0/24 and 10.10.10.0/24 networks.
|
||||
|
||||
**For Host Alpha:**
|
||||
|
||||
* eth0 IP: 192.168.122.100/24
|
||||
* Gateway: 192.168.122.1
|
||||
* eth1 IP: 10.10.10.12/24
|
||||
|
||||
|
||||
|
||||
**For Host Beta:**
|
||||
|
||||
* eth0 IP: 192.168.122.50/24
|
||||
* Gateway: 192.168.122.1
|
||||
* eth1 IP: 10.12.11.12/24
|
||||
|
||||
|
||||
|
||||
#### Installation of package
|
||||
|
||||
First, we need to install the FRR package on both hosts; this can be done by following the instructions in the [official FRR documentation][6].
|
||||
|
||||
#### Enable IP forwarding
|
||||
|
||||
For routing, we need to enable IP forwarding on both hosts, since that will be performed by the Linux kernel.
|
||||
|
||||
|
||||
```
|
||||
sysctl -w net.ipv4.conf.all.forwarding=1
|
||||
|
||||
sysctl -w net.ipv6.conf.all.forwarding=1
|
||||
sysctl -p
|
||||
```
|
||||
|
||||
#### Enabling the RIPD daemon
|
||||
|
||||
Once installed, all the configuration files will be stored in the **/etc/frr** directory. The daemons must be explicitly enabled by editing the **/etc/frr/daemons** file. This file determines which daemons are activated when the FRR service is started. To enable a particular daemon, simply change the corresponding "no" to "yes." A subsequent service restart should start the daemon.
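As a minimal sketch (the exact set of daemons listed in the file varies by FRR version), enabling `ripd` looks something like this:

```
# /etc/frr/daemons (excerpt) -- flip the daemon you need from "no" to "yes"
zebra=yes
ripd=yes
ospfd=no
bgpd=no

# restart the service so the change takes effect
systemctl restart frr
```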
|
||||
|
||||
![FRR daemon restart][7]
|
||||
|
||||
#### Firewall configuration
|
||||
|
||||
Since the RIP protocol uses UDP as its transport protocol and is assigned port 520, we need to allow this port in the `firewalld` configuration.
|
||||
|
||||
|
||||
```
|
||||
firewall-cmd --add-port=520/udp --permanent
|
||||
|
||||
firewall-cmd --reload
|
||||
```
|
||||
|
||||
We can now start the FRR service using:
|
||||
|
||||
|
||||
```
|
||||
systemctl start frr
|
||||
```
|
||||
|
||||
#### Configuration using VTY
|
||||
|
||||
Now, we need to configure RIP using the VTY shell.
|
||||
|
||||
On Host Alpha:
|
||||
|
||||
|
||||
```
|
||||
[root@alpha ~]# vtysh
|
||||
|
||||
Hello, this is FRRouting (version 7.2RPKI).
|
||||
Copyright 1996-2005 Kunihiro Ishiguro, et al.
|
||||
|
||||
alpha# configure terminal
|
||||
alpha(config)# router rip
|
||||
alpha(config-router)# network 192.168.122.0/24
|
||||
alpha(config-router)# network 10.10.10.0/24
|
||||
alpha(config-router)# route 10.10.10.5/24
|
||||
alpha(config-router)# do write
|
||||
Note: this version of vtysh never writes vtysh.conf
|
||||
Building Configuration...
|
||||
Configuration saved to /etc/frr/ripd.conf
|
||||
Configuration saved to /etc/frr/staticd.conf
|
||||
alpha(config-router)# do write memory
|
||||
Note: this version of vtysh never writes vtysh.conf
|
||||
Building Configuration...
|
||||
Configuration saved to /etc/frr/ripd.conf
|
||||
Configuration saved to /etc/frr/staticd.conf
|
||||
alpha(config-router)# exit
|
||||
```
|
||||
|
||||
Similarly, on Host Beta:
|
||||
|
||||
|
||||
```
|
||||
[root@beta ~]# vtysh
|
||||
|
||||
Hello, this is FRRouting (version 7.2RPKI).
|
||||
Copyright 1996-2005 Kunihiro Ishiguro, et al.
|
||||
|
||||
beta# configure terminal
|
||||
beta(config)# router rip
|
||||
beta(config-router)# network 192.168.122.0/24
|
||||
beta(config-router)# network 10.12.11.0/24
|
||||
beta(config-router)# do write
|
||||
Note: this version of vtysh never writes vtysh.conf
|
||||
Building Configuration...
|
||||
Configuration saved to /etc/frr/zebra.conf
|
||||
Configuration saved to /etc/frr/ripd.conf
|
||||
Configuration saved to /etc/frr/staticd.conf
|
||||
beta(config-router)# do write memory
|
||||
Note: this version of vtysh never writes vtysh.conf
|
||||
Building Configuration...
|
||||
Configuration saved to /etc/frr/zebra.conf
|
||||
Configuration saved to /etc/frr/ripd.conf
|
||||
Configuration saved to /etc/frr/staticd.conf
|
||||
beta(config-router)# exit
|
||||
```
|
||||
|
||||
Once done, check the routes on both hosts as follows:
|
||||
|
||||
|
||||
```
|
||||
[root@alpha ~]# ip route show
|
||||
default via 192.168.122.1 dev eth0 proto static metric 100
|
||||
10.10.10.0/24 dev eth1 proto kernel scope link src 10.10.10.12 metric 101
|
||||
10.12.11.0/24 via 192.168.122.50 dev eth0 proto 189 metric 20
|
||||
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.100 metric 100
|
||||
```
|
||||
|
||||
We can see that the routing table on Alpha contains an entry for 10.12.11.0/24 via 192.168.122.50, which was learned via RIP. Similarly, on Beta, the table contains an entry for the 10.10.10.0/24 network via 192.168.122.100.
|
||||
|
||||
|
||||
```
|
||||
[root@beta ~]# ip route show
|
||||
default via 192.168.122.1 dev eth0 proto static metric 100
|
||||
10.10.10.0/24 via 192.168.122.100 dev eth0 proto 189 metric 20
|
||||
10.12.11.0/24 dev eth1 proto kernel scope link src 10.12.11.12 metric 101
|
||||
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.50 metric 100
|
||||
```
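You can also ask `ripd` itself what routes it has learned without leaving the routing engine; for example (assuming the daemon is running):

```
[root@alpha ~]# vtysh -c "show ip rip"
```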
|
||||
|
||||
### Conclusion
|
||||
|
||||
As you can see, the setup and configuration are relatively simple. To add complexity, we can add more network interfaces to the router to provide routing for more networks. The configurations can be made by editing the configuration files in an editor, but using the VTY shell provides us with a frontend to all the FRR daemons in a single, combined session.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/5/vty-shell
|
||||
|
||||
作者:[M Umer][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/noisybotnet
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/connections_wires_sysadmin_cable.png?itok=d5WqHmnJ (Multi-colored and directional network computer cables)
|
||||
[2]: https://opensource.com/article/20/4/quagga-linux
|
||||
[3]: https://en.wikipedia.org/wiki/FRRouting
|
||||
[4]: https://opensource.com/sites/default/files/uploads/frr_architecture.png (FRR architecture)
|
||||
[5]: http://docs.frrouting.org/projects/dev-guide/en/latest/vtysh.html
|
||||
[6]: http://docs.frrouting.org/projects/dev-guide/en/latest/building-frr-for-centos7.html
|
||||
[7]: https://opensource.com/sites/default/files/uploads/frr_daemon_restart.png (FRR daemon restart)
|
@ -0,0 +1,92 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Easy DNS configuration with PowerDNS for nameservers)
|
||||
[#]: via: (https://opensource.com/article/20/5/powerdns)
|
||||
[#]: author: (Jonathan Garrido https://opensource.com/users/jgarrido)
|
||||
|
||||
Easy DNS configuration with PowerDNS for nameservers
|
||||
======
|
||||
Use PDNS to provide a stable and reliable Domain Name System (DNS)
|
||||
server for your project.
|
||||
![Computer laptop in space][1]
|
||||
|
||||
A few months ago, we got a requirement to provide a stable and reliable Domain Name System ([DNS][2]) server for a new project. The project dealt with auto-deployment using containers, where each new environment would generate a unique, random URL. After a lot of research on possible solutions, we decided to give [PowerDNS][3] (PDNS) a try.
|
||||
|
||||
At the outset, we discovered that PowerDNS is supported in all major Linux distros, is available under the GPL license, and keeps its repositories up to date. We also found neat and well-organized [documentation][4] on the official site and tons of how-to's around the web from people who really like and use the product. After reading a few pages and learning some basic commands, PDNS was installed, up, and running, and our journey began.
|
||||
|
||||
### Database-driven
|
||||
|
||||
PowerDNS keeps its records in a SQL database. This was new for us, and not having to use flat files to keep records was a welcome change. We picked MariaDB as our power tool of choice, and since there is plenty of guidance about the proper settings for installing the nameserver, we could set up and harden our database flawlessly.
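As an illustrative sketch (the host, database name, and credentials below are placeholders, not values from our setup), pointing PDNS at a MariaDB database takes only a few lines in `pdns.conf` using the gmysql backend:

```
launch=gmysql
gmysql-host=127.0.0.1
gmysql-dbname=pdns
gmysql-user=pdns
gmysql-password=a-strong-password
```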
|
||||
|
||||
### Easy configuration
|
||||
|
||||
The second thing that engaged us was all the features PDNS has in its config file. This file, pdns.conf, has a lot of options that you can enable or disable just by adding or removing the # sign. This was truly amazing because it gave us the chance to integrate this new service into our current infrastructure with only the values that we want, no more, no less, just the features that we need. A quick example:
|
||||
|
||||
Who can access your webserver?
|
||||
|
||||
|
||||
```
|
||||
webserver-allow-from=172.10.0.1,172.10.1.2
|
||||
```
|
||||
|
||||
Can I forward requests based on a domain? Sure!
|
||||
|
||||
|
||||
```
|
||||
forward-zones=mylocal.io=127.0.0.1:5300
|
||||
forward-zones+=example.com=172.10.0.5:53
|
||||
forward-zones+=lucky.tech=172.10.1.5:53
|
||||
```
|
||||
|
||||
### API included
|
||||
|
||||
We also activated the API through this same config file, and this is when we started to see PDNS's "power," as it solved the first request from our development team: an API service. This feature gave us the ability to send requests that simply and cleanly create, modify, or remove records in our DNS server.
|
||||
|
||||
This API has some basic security parameters, so in just a few steps, you can control who has the right to interact with the nameserver, based on a combination of an IP address and a pre-shared key for authentication. Here's what the configuration for this looks like:
|
||||
|
||||
|
||||
```
|
||||
api=yes
|
||||
api-key=lkjdsfpoiernf
|
||||
webserver-allow-from=172.10.7.13,172.10.7.5
|
||||
```
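With those settings in place, a quick way to exercise the API is a plain HTTP request that presents the key; for example (assuming the built-in web server is listening on its default port, 8081):

```
curl -H 'X-API-Key: lkjdsfpoiernf' http://127.0.0.1:8081/api/v1/servers/localhost/zones
```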
|
||||
|
||||
### Logging
|
||||
|
||||
PDNS does an extraordinary job when it comes to logging. You can monitor your server and see how the machine is doing by using the log files and a simple built-in web server. Using a browser, you can see different types of statistics from the machine, like CPU usage and the DNS queries received. This is very valuable—for example, we were able to detect a few "not-so-healthy" PCs that were sending DNS requests to our server looking for sites related to malicious traffic. After digging into the logs, we could see where the traffic was coming from and clean up those PCs.
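The statistics web server is enabled in the same config file; a minimal sketch (the addresses are placeholders) looks like this:

```
webserver=yes
webserver-address=127.0.0.1
webserver-port=8081
webserver-allow-from=172.10.7.13,172.10.7.5
```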
|
||||
|
||||
### Other features
|
||||
|
||||
This is only a glimpse of all the things you can do with PowerDNS; there is much more to it. It is a complete nameserver with a lot of features and functionalities that make it worth giving it a try.
|
||||
|
||||
At this moment, we are not implementing [DNSSEC][5], but it appears that it can be put into production quickly with just one click. Also, PowerDNS has a nice approach when it comes to separating the recursor service from the nameserver. I read that it also supports [DNS RPZ][6] (Response Policy Zones), and there are also some very nice and well-designed frontends available that let you manage your server using a simple web browser, like the one in the image below.
|
||||
|
||||
![PowerDNS frontend][7]
|
||||
|
||||
([PowerDNS documentation][4], MIT License)
|
||||
|
||||
Believe it or not, you can boost your knowledge about DNS and IT ops a lot just by spending a few hours "playing" with PDNS.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/5/powerdns
|
||||
|
||||
作者:[Jonathan Garrido][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jgarrido
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
|
||||
[2]: https://en.wikipedia.org/wiki/Domain_Name_System
|
||||
[3]: https://www.powerdns.com/opensource.html
|
||||
[4]: https://doc.powerdns.com/
|
||||
[5]: https://en.wikipedia.org/wiki/Domain_Name_System_Security_Extensions
|
||||
[6]: https://dnsrpz.info/
|
||||
[7]: https://opensource.com/sites/default/files/uploads/pdns.jpg (PowerDNS frontend)
|
141
sources/tech/20200521 Glico (Weighted Rock Paper Scissors).md
Normal file
@ -0,0 +1,141 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Glico (Weighted Rock Paper Scissors))
|
||||
[#]: via: (https://theartofmachinery.com/2020/05/21/glico_weighted_rock_paper_scissors.html)
|
||||
[#]: author: (Simon Arneaud https://theartofmachinery.com)
|
||||
|
||||
Glico (Weighted Rock Paper Scissors)
|
||||
======
|
||||
|
||||
This still isn’t the blog post I said I was going to write about now, but I figured some game theory would make a good post at the moment, especially when a lot of people I know are working at home with kids who need entertaining. Here’s some stuff about a traditional Japanese kids’ game called Glico, a form of weighted Rock Paper Scissors (RPS).
|
||||
|
||||
### Glico
|
||||
|
||||
I’ll assume you’re familiar with regular RPS. It’s pretty obvious how to play well: the three plays, “rock”, “paper” and “scissors”, are equally strong, so the only trick is to play them unpredictably enough.
|
||||
|
||||
But what happens if the three plays have different values? Weighted RPS, under the name “Glico”, has been a well-known Japanese children’s game since at least before WWII, but let me explain an English adaptation. Traditionally it’s played starting at the bottom of a flight of stairs, and the aim is to get to the top first. Players can climb up steps by winning rounds of RPS. The trick is that the number of steps depends on the winning hand in each round. A player who wins with “rock” gets to climb up four steps by spelling out R-O-C-K, and similarly “paper” is worth five steps and “scissors” worth eight. This simple twist to the game creates whole new layers of offence and defence as players struggle to win with “scissors” as much as possible, without being too predictable and vulnerable.
|
||||
|
||||
(The rules for the Japanese version vary by region, but usually “rock” is worth 3 steps, while “paper” and “scissors” are worth 6. The mnemonic is that “rock”, “paper” and “scissors” are referred to as グー, パー and チョキ respectively, and the words spelled out when playing are グリコ (“Glico”, a food/confectionery brand), パイナップル (pineapple) and チョコレート (chocolate).)
|
||||
|
||||
Just a few notes before getting into the maths: The game works best with two players, but in the traditional rules for three or more players, each round is handled by having multiple rematches. Each time there’s a clear winning hand (e.g., two players with “paper” beating one with “rock”) the losers are eliminated until there’s one winner. That can take a long time, so cycling systematically between pairs of players might be faster for several players. (I’ll assume two players from now on.) Also, older kids sometimes add an extra challenge by requiring an exact landing at the top of the stairs to win. For example, if you’re five steps from the top, only “paper” will win; “scissors” will overshoot by three steps, and you’ll end up three steps back down from the top. Be warned: that makes gameplay a lot harder.
|
||||
|
||||
### Calculating the optimal strategy
|
||||
|
||||
Simple strategies like “just play rock” are what game theorists call “pure strategies”. By design, no pure strategy in RPS is better than all others, and an adaptive opponent can quickly learn to exploit any pure strategy (e.g., by always playing “paper” against someone who always plays “rock”). Any decent player will play RPS with something like a “mixed strategy” (selecting from the pure strategies at random, maybe with different probabilities). Game theory tells us that finite, two-player, zero-sum games always have optimal mixed strategies — i.e., a mixed strategy that’s as good or better than any other, even against an adaptive opponent. You might do better by exploiting a weak opponent, but you can’t do better against a perfect one. In plain RPS, the statistically unbeatable strategy is to play each hand with equal probability \(\frac{1}{3}\).
|
||||
|
||||
Glico is made up of multiple rounds of weighted RPS. A truly optimal player won’t just use one set of probabilities \(p_r\), \(p_p\) and \(p_s\) for playing “rock”, “paper” and “scissors” each round. The optimal probabilities will vary depending on the position of both players on the stairs. For example, a player who is four steps from winning is either happy with any winning hand, or only wants “rock”, depending on the rules, and (theoretically) both players should recognise that and adapt their probabilities accordingly. However, it’s more practical to play with an optimal greedy strategy — i.e., assuming everyone is just trying to get the maximum step value each round.
|
||||
|
||||
I’ll calculate an optimal greedy strategy for weighted RPS in two ways. One way is longer but uses nothing but high school algebra and logical thinking, while the other way uses the power of linear programming.
|
||||
|
||||
#### The longer way
|
||||
|
||||
The greedy version of Glico has no end goal; the players are just trying to win points. It helps with solving the game if we make it zero sum — any time you win \(N\) points, your opponent loses \(N\) points, and vice versa. That just scales and shifts the score per round, so it doesn’t change the optimal strategy. Why do it? We know that optimal players can’t get any advantage over each other because the game is symmetric. If the game is zero sum, that means that no strategy can have an expected value of more than 0 points. That lets us write some equations. For example, playing “rock” might win you 4 points against “scissors”, or lose you 5 against “paper”. Against an optimal opponent, we can say
|
||||
|
||||
\[ 4p_s - 5p_p \leq 0 \]
|
||||
|
||||
Is the inequality necessary? When would a pure strategy have a negative value against a non-adaptive but optimal player? Imagine if we added a fourth pure strategy, “bomb”, that simply gave 1000 points to the opponent. Obviously no optimal player would ever play “bomb”, so \(p_b = 0\). Playing “bomb” against an optimal player would have expected value -1000. We can say that some pure strategies are just _bad_: they have suboptimal value against an optimal opponent, and an optimal player will never play them. Other pure strategies have optimal value against an optimal opponent, and they’re reasonable to include in an optimal strategy.
|
||||
|
||||
Bad pure strategies aren’t always as obvious as “bomb”, but we can argue that none of the pure strategies in RPS are bad. “Rock” is the only way to beat “scissors”, and “paper” is the only way to beat “rock”, and “scissors” is the only way to beat “paper”. At least one must be in the optimal strategy, so we can expect them all to be. So let’s make that \(\leq\) into \(=\), and add the equations for playing “paper” and “scissors”, plus the fact that these are probabilities that add up to 1:
|
||||
|
||||
\[\begin{aligned} 4p_s - 5p_p &= 0 \\ 5p_r - 8p_s &= 0 \\ 8p_p - 4p_r &= 0 \\ p_r + p_p + p_s &= 1 \end{aligned}\]
|
||||
|
||||
That’s a system of linear equations that can be solved algorithmically using Gaussian elimination — either by hand or by using any good numerical algorithms software. I won’t go into the details, but here’s the solution:
|
||||
|
||||
\[\begin{aligned} p_r &= 0.4706 \\ p_p &= 0.2353 \\ p_s &= 0.2941 \end{aligned}\]
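If you’d rather not grind through the elimination by hand, any numerical tool will confirm the result; here’s a quick cross-check in Julia (the same language used for the LP solution below), where the exact fractions come out to 8/17, 4/17 and 5/17:

```
# Cross-check of the hand-derived solution using Julia's backslash solver.
# Rows: the three zero-value conditions plus the normalisation constraint;
# columns: pr, pp, ps. The system is consistent, so the least-squares
# answer returned by \ is exact.
A = [ 0 -5  4     # 4ps - 5pp = 0
      5  0 -8     # 5pr - 8ps = 0
     -4  8  0     # 8pp - 4pr = 0
      1  1  1 ]   # pr + pp + ps = 1
b = [0, 0, 0, 1]
println(A \ b)    # ≈ [0.4706, 0.2353, 0.2941]
```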
|
||||
|
||||
Even though it’s worth the least, an optimal player will play “rock” almost half the time to counterattack “scissors”. The rest of the time is split between “paper” and “scissors”, with a slight bias towards “scissors”.
|
||||
|
||||
#### The powerful way
|
||||
|
||||
The previous solution needed special-case analysis: it exploited the symmetry of the game, and made some guesses about how good/bad the pure strategies are. What about games that are more complex, or maybe not even symmetric (say, because one player has a handicap)? There’s a more general solution using what’s called linear programming (which dates to when “programming” just meant “scheduling” or “planning”).
|
||||
|
||||
By the way, linear programming (LP) has a funny place in computer science. There are some industries and academic circles where LP and generalisations like mixed integer programming are super hot. Then there are computer science textbooks that never even mention them, so there are industries where the whole topic is pretty much unheard of. It might be because it wasn’t even known for a long time if LP problems can be solved in polynomial time (they can), so LP doesn’t have the same theoretical elegance as, say, shortest path finding, even if it has a lot of practical use.
|
||||
|
||||
Anyway, solving weighted RPS with LP is pretty straightforward. We just need to describe the game using a bunch of linear inequalities in multiple variables, and express strategic value as a linear function that can be optimised. That’s very similar to what was done before, but this time we won’t try to guess at the values of any strategies. We’ll just assume we’re choosing values \(p_r\), \(p_p\) and \(p_s\) to play against an opponent who scores an average \(v\) against us each round. The opponent is smart enough to choose a strategy that’s at least as good as any pure strategy, so we can say
|
||||
|
||||
\[\begin{aligned} 4p_s - 5p_p &\leq v \\ 5p_r - 8p_s &\leq v \\ 8p_p - 4p_r &\leq v \end{aligned}\]
|
||||
|
||||
The opponent can only play some combination of “rock”, “paper” and “scissors”, so \(v\) can’t be strictly greater than all of them — at least one of the inequalities above must be tight. To model the gameplay fully, the only other constraints we need are the rules of probability:
|
||||
|
||||
\[\begin{aligned} p_r + p_p + p_s &= 1 \\ p_r &\geq 0 \\ p_p &\geq 0 \\ p_s &\geq 0 \end{aligned}\]
|
||||
|
||||
Now we’ve modelled the problem, we just need to express what needs to be optimised. That’s actually dead simple: we just want to minimise \(v\), the average score the opponent can win from us. An LP solver can find a set of values for all variables that minimises \(v\) within the constraints, and we can read off the optimal strategy directly.
|
||||
|
||||
I’ve tried a few tools, and the [Julia][1] library [JuMP][2] has my current favourite FOSS API for throwaway optimisation problems. Here’s some code:
|
||||
|
||||
```
|
||||
# You might need Pkg.add("JuMP"); Pkg.add("GLPK")
|
||||
using JuMP
|
||||
using GLPK
|
||||
|
||||
game = Model(GLPK.Optimizer)
|
||||
|
||||
@variable(game, 0 <= pr <= 1)
|
||||
@variable(game, 0 <= pp <= 1)
|
||||
@variable(game, 0 <= ps <= 1)
|
||||
@variable(game, v)
|
||||
|
||||
@constraint(game, ptotal, pr + pp + ps == 1)
|
||||
@constraint(game, rock, 4*ps - 5*pp <= v)
|
||||
@constraint(game, paper, 5*pr - 8*ps <= v)
|
||||
@constraint(game, scissors, 8*pp - 4*pr <= v)
|
||||
|
||||
@objective(game, Min, v)
|
||||
|
||||
println(game)
|
||||
optimize!(game)
|
||||
|
||||
println("Opponent's value: ", value(v))
|
||||
println("Rock: ", value(pr))
|
||||
println("Paper: ", value(pp))
|
||||
println("Scissors: ", value(ps))
|
||||
```
|
||||
|
||||
Here’s the output:
|
||||
|
||||
```
|
||||
Min v
|
||||
Subject to
|
||||
ptotal : pr + pp + ps = 1.0
|
||||
rock : 4 ps - 5 pp - v ≤ 0.0
|
||||
paper : 5 pr - 8 ps - v ≤ 0.0
|
||||
scissors : 8 pp - 4 pr - v ≤ 0.0
|
||||
pr ≥ 0.0
|
||||
pp ≥ 0.0
|
||||
ps ≥ 0.0
|
||||
pr ≤ 1.0
|
||||
pp ≤ 1.0
|
||||
ps ≤ 1.0
|
||||
|
||||
Opponent's value: 0.0
|
||||
Rock: 0.47058823529411764
|
||||
Paper: 0.23529411764705882
|
||||
Scissors: 0.29411764705882354
|
||||
```
|
||||
|
||||
As argued in the previous solution, the best value the opponent can get against the optimal player is 0.
|
||||
|
||||
### What does optimality mean?
|
||||
|
||||
The optimal solution was calculated assuming an all-powerful opponent. It guarantees that even the best weighted RPS player can’t get an advantage over you, but it turns out you can’t get an advantage over a terrible player, either, if you insist on playing this “optimal” strategy. That’s because weighted RPS has no bad plays, in the sense that “bomb” is bad. _Any_ strategy played against the above “optimal” strategy will have expected value of 0, so it’s really a defensive, or “safe” strategy. To play truly optimally and win against a bad player, you’ll have to adapt your strategy. For example, if your opponent plays “scissors” too often and “paper” not enough, you should adapt by playing “rock” more often. Of course, your opponent might just be pretending to be weak, and could start taking advantage of your deviation from the safe strategy.
|
||||
|
||||
Games don’t always work out like that. For example, in theory, you could derive an optimal safe strategy for more complex games like poker. Such a strategy would tend to win against normal humans because even the best normal humans make bad poker plays. On the other hand, a “shark” at the table might be able to win against the “fish” faster by exploiting their weaknesses more aggressively. If you’re thinking of using LP to directly calculate a strategy for Texas Hold’em, though, sorry, but you’ll hit a combinatorial explosion of pure strategies as you account for all the cases like “if I’m dealt AJ on the big blind and I call a four-blind raise from the button preflop and then the flop is a rainbow 3K9…”. Only heavily simplified toy poker games are solvable with the general approach.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://theartofmachinery.com/2020/05/21/glico_weighted_rock_paper_scissors.html
|
||||
|
||||
作者:[Simon Arneaud][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://theartofmachinery.com
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://julialang.org/
|
||||
[2]: https://github.com/JuliaOpt/JuMP.jl
|
@ -0,0 +1,186 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Use the internet from the command line with curl)
|
||||
[#]: via: (https://opensource.com/article/20/5/curl-cheat-sheet)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
Use the internet from the command line with curl
|
||||
======
|
||||
Download our new curl cheat sheet. Curl is a fast and efficient way to
|
||||
pull the information you need from the internet without using a
|
||||
graphical interface.
|
||||
![Cheat Sheet cover image][1]
|
||||
|
||||
Curl is commonly considered a non-interactive web browser. That means it's able to pull information from the internet and display it in your terminal or save it to a file. This is literally what web browsers, such as Firefox or Chromium, do, except they _render_ the information by default, while curl downloads and displays raw information. In reality, the curl command does much more and has the ability to transfer data to or from a server using one of many supported protocols, including HTTP, FTP, SFTP, IMAP, POP3, LDAP, SMB, SMTP, and many more. It's a useful tool for the average terminal user, a vital convenience for the sysadmin, and a quality assurance tool for microservices and cloud developers.
|
||||
|
||||
Curl is designed to work without user interaction, so unlike Firefox, you must think about your interaction with online data from start to finish. For instance, if you want to view a web page in Firefox, you launch a Firefox window. After Firefox is open, you type the website you want to visit into the URL field or a search engine. Then you navigate to the site and click on the page you want to see.
|
||||
|
||||
The same concepts apply to curl, except you do it all at once: you launch curl at the same time you feed it the internet location you want and tell it whether you want the data to be saved in your terminal or to a file. The complexity increases when you have to interact with a site that requires authentication or with an API, but once you learn the **curl** command syntax, it becomes second nature. To help you get the hang of it, we collected the pertinent syntax information in a handy [cheat sheet][2].
|
||||
|
||||
### Download a file with curl
|
||||
|
||||
You can download a file with the **curl** command by providing a specific URL. If you provide a URL that defaults to **index.html**, then the index page is downloaded, and the file you downloaded is displayed on your terminal screen. You can pipe the output to **less**, **tail**, or any other command:
|
||||
|
||||
|
||||
```
|
||||
$ curl "<http://example.com>" | tail -n 4
|
||||
<h1>Example Domain</h1>
|
||||
<p>This domain is for use in illustrative examples in documents. You may use this domain in literature without prior coordination or asking for permission.</p>
|
||||
<p><a href="[https://www.iana.org/domains/example"\>More][3] information...</a></p>
|
||||
</div></body></html>
|
||||
```
|
||||
|
||||
Because some URLs contain special characters that your shell normally interprets, it's safest to surround your URL in quotation marks.
|
||||
|
||||
Some files don't translate well to being displayed in a terminal. You can use the **--remote-name** option to cause the file to be saved according to what it's called on the server:
|
||||
|
||||
|
||||
```
|
||||
$ curl --remote-name "https://example.com/linux-distro.iso"
|
||||
$ ls
|
||||
linux-distro.iso
|
||||
```
|
||||
|
||||
Alternatively, you can use the **--output** option to name your download whatever you want:
|
||||
|
||||
|
||||
```
|
||||
`curl "http://example.com/foo.html" --output bar.html`
|
||||
```
|
||||
|
||||
### List contents of a remote directory with curl
|
||||
|
||||
Because curl is non-interactive, it's difficult to browse a page for downloadable elements. Provided that the remote server you're connecting to allows it, you can use **curl** to list the contents of a directory:
|
||||
|
||||
|
||||
```
|
||||
$ curl --list-only "https://example.com/foo/"
|
||||
```
|
||||
|
||||
### Continue a partial download
|
||||
|
||||
If you're downloading a very large file, you might find that you have to interrupt the download. Curl is intelligent enough to determine where you left off and continue the download. That means the next time you're downloading a 4GB Linux distribution ISO and something goes wrong, you never have to go back to the start. The syntax for **--continue-at** is a little unusual: if you know the byte count where your download was interrupted, you can provide it; otherwise, you can use a lone dash (**-**) to tell curl to detect it automatically:
|
||||
|
||||
|
||||
```
|
||||
$ curl --remote-name --continue-at - "https://example.com/linux-distro.iso"
|
||||
```
|
||||
|
||||
### Download a sequence of files
|
||||
|
||||
If you need to download several files—rather than just one big file—curl can help with that. Assuming you know the location and file-name pattern of the files you want to download, you can use curl's sequencing notation: the start and end points of a range of integers, in brackets. For the output filename, use **#1** to indicate the first variable:
|
||||
|
||||
|
||||
```
|
||||
`$ curl "https://example.com/file_[1-4].webp" --output "file_#1.webp"`
|
||||
```
|
||||
|
||||
If you need to use another variable to represent another sequence, denote each variable in the order it appears in the command. For example, in this command, **#1** refers to the directories **images_000** through **images_009**, while **#2** refers to the files **file_1.webp** through **file_4.webp**:
|
||||
|
||||
|
||||
```
|
||||
$ curl "<https://example.com/images\_00\[0-9\]/file\_\[1-4\].webp>" \
|
||||
\--output "file_#1-#2.webp"
|
||||
```
|
||||
|
||||
### Download all PNG files from a site
|
||||
|
||||
You can do some rudimentary web scraping to find what you want to download, too, using only **curl** and **grep**. For instance, say you need to download all images associated with a web page you're archiving. First, download the page referencing the images. Pipe the page to grep with a search for the image type you're targeting (PNG in this example). Finally, create a **while** loop to construct a download URL and to save the files to your computer:
|
||||
|
||||
|
||||
```
|
||||
$ curl <https://example.com> |\
|
||||
grep --only-matching 'src="[^"]*.[png]"' |\
|
||||
cut -d\" -f2 |\
|
||||
while read i; do \
|
||||
curl <https://example.com/"${i}>" -o "${i##*/}"; \
|
||||
done
|
||||
```
|
||||
|
||||
This is just an example, but it demonstrates how flexible curl can be when combined with a Unix pipe and some clever, but basic, parsing.
|
||||
|
||||
### Fetch HTTP headers
|
||||
|
||||
Protocols used for data exchange have a lot of metadata embedded in the packets that computers send to communicate. HTTP headers are components of the initial portion of data. It can be helpful to view these headers (especially the response code) when troubleshooting your connection to a site:
|
||||
|
||||
|
||||
```
|
||||
$ curl --head "https://example.com"
|
||||
HTTP/2 200
|
||||
accept-ranges: bytes
|
||||
age: 485487
|
||||
cache-control: max-age=604800
|
||||
content-type: text/html; charset=UTF-8
|
||||
date: Sun, 26 Apr 2020 09:02:09 GMT
|
||||
etag: "3147526947"
|
||||
expires: Sun, 03 May 2020 09:02:09 GMT
|
||||
last-modified: Thu, 17 Oct 2019 07:18:26 GMT
|
||||
server: ECS (sjc/4E76)
|
||||
x-cache: HIT
|
||||
content-length: 1256
|
||||
```
|
||||
|
||||
### Fail quickly
|
||||
|
||||
A 200 response is the usual HTTP indicator of success, so it's what you usually expect when you contact a server. The famous 404 response indicates that a page can't be found, and 500 means there was a server error.
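If all you want is the numeric response code for a quick check, curl can print it by itself with the **--write-out** format option (the URL here is just an example):

```
$ curl --silent --output /dev/null --write-out '%{http_code}\n' "http://example.com"
```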
|
||||
|
||||
To see what errors are happening during negotiation, add the **--show-error** flag:
|
||||
|
||||
|
||||
```
|
||||
$ curl --head --show-error "http://opensource.ga"
|
||||
```
|
||||
|
||||
These can be difficult for you to fix unless you have access to the server you're contacting, but curl generally tries its best to resolve the location you point it to. Sometimes when testing things over a network, seemingly endless retries just waste time, so you can force curl to exit upon failure quickly with the **--fail-early** option:
|
||||
|
||||
|
||||
```
|
||||
$ curl --fail-early "http://opensource.ga"
|
||||
```
|
||||
|
||||
### Redirect query as specified by a 3xx response
|
||||
|
||||
The 300 series of responses, however, are more flexible. Specifically, the 301 response means that a URL has been moved permanently to a different location. It's a common way for a website admin to relocate content while leaving a "trail" so people visiting the old location can still find it. Curl doesn't follow a 301 redirect by default, but you can make it continue on to a 301 destination by using the **--location** option:
|
||||
|
||||
|
||||
```
|
||||
$ curl "<https://iana.org>" | grep title
|
||||
<title>301 Moved Permanently</title>
|
||||
$ curl --location "https://iana.org"
|
||||
<title>Internet Assigned Numbers Authority</title>
|
||||
```
|
||||
|
||||
### Expand a shortened URL
|
||||
|
||||
The **--location** option is useful when you want to look at shortened URLs before visiting them. Shortened URLs can be useful for social networks with character limits (of course, this may not be an issue if you use a [modern and open source social network][4]) or for print media in which users can't just copy and paste a long URL. However, they can also be a little dangerous because their destination is, by nature, concealed. By combining the **--head** option to view just the HTTP headers and the **--location** option to unravel the final destination of a URL, you can peek into a shortened URL without loading the full resource:
|
||||
|
||||
|
||||
```
|
||||
$ curl --head --location \
|
||||
"<https://bit.ly/2yDyS4T>"
|
||||
```
|
||||
|
||||
### [Download our curl cheat sheet][2]
|
||||
|
||||
Once you practice thinking about the process of exploring the web as a single command, curl becomes a fast and efficient way to pull the information you need from the internet without bothering with a graphical interface. To help you build it into your usual workflow, we've created a [curl cheat sheet][2] with common curl uses and syntax, including an overview of using it to query an API.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/5/curl-cheat-sheet
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_cheat_sheet.png?itok=lYkNKieP (Cheat Sheet cover image)
|
||||
[2]: https://opensource.com/downloads/curl-command-cheat-sheet
|
||||
|
||||
[4]: https://opensource.com/article/17/4/guide-to-mastodon
|
@ -0,0 +1,493 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (A beginner's guide to web scraping with Python)
|
||||
[#]: via: (https://opensource.com/article/20/5/web-scraping-python)
|
||||
[#]: author: (Julia Piaskowski https://opensource.com/users/julia-piaskowski)
|
||||
|
||||
A beginner's guide to web scraping with Python
|
||||
======
|
||||
Get some hands-on experience with essential Python tools to scrape
|
||||
complete HTML sites.
|
||||
![HTML code][1]
|
||||
|
||||
There are plenty of great books to help you learn Python, but who actually reads these A to Z? (Spoiler: not me).
|
||||
|
||||
Many people find instructional books useful, but I do not typically learn by reading a book front to back. I learn by doing a project, struggling, figuring some things out, and then reading another book. So, throw away your book (for now), and let's learn some Python.
|
||||
|
||||
What follows is a guide to my first scraping project in Python. It is very low on assumed knowledge in Python and HTML. This is intended to illustrate how to access web page content with the Python library [requests][2] and parse the content using [BeautifulSoup4][3], as well as JSON and [pandas][4]. I will briefly introduce [Selenium][5], but I will not delve deeply into how to use that library—that topic deserves its own tutorial. Ultimately, I hope to show you some tricks and tips to make web scraping less overwhelming.
|
||||
|
||||
### Installing our dependencies
|
||||
|
||||
All the resources from this guide are available at my [GitHub repo][6]. If you need help installing Python 3, check out the tutorials for [Linux][7], [Windows][8], and [Mac][9].
|
||||
|
||||
|
||||
```
|
||||
$ python3 -m venv venv
|
||||
$ source venv/bin/activate
|
||||
$ pip install requests bs4 pandas
|
||||
```
|
||||
|
||||
If you like using JupyterLab, you can run all the code using this [notebook][10]. There are a lot of ways to [install JupyterLab][11], and this is one of them:
|
||||
|
||||
|
||||
```
|
||||
# from the same virtual environment as above, run:
|
||||
$ pip install jupyterlab
|
||||
```
|
||||
|
||||
### Setting a goal for our web scraping project
|
||||
|
||||
Now we have our dependencies installed, but what does it take to scrape a webpage?
|
||||
|
||||
Let's take a step back and be sure to clarify our goal. Here is my list of requirements for a successful web scraping project.
|
||||
|
||||
* We are gathering information that is worth the effort it takes to build a working web scraper.
|
||||
* We are downloading information that can be legally and ethically gathered by a web scraper.
|
||||
* We have some knowledge of how to find the target information in HTML code.
|
||||
* We have the right tools: in this case, it's the libraries **BeautifulSoup** and **requests**.
|
||||
* We know (or are willing to learn) how to parse JSON objects.
|
||||
* We have enough data skills to use **pandas**.
|
||||
|
||||
|
||||
|
||||
A comment on HTML: While HTML is the beast that runs the Internet, what we mostly need to understand is how tags work. A tag is a collection of information sandwiched between angle-bracket enclosed labels. For example, here is a pretend tag, called "pro-tip":
|
||||
|
||||
|
||||
```
|
||||
<pro-tip> All you need to know about html is how tags work </pro-tip>
|
||||
```
|
||||
|
||||
We can access the information in there ("All you need to know…") by calling its tag "pro-tip." How to find and access a tag will be addressed further in this tutorial. For more of a look at HTML basics, check out [this article][12].
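As a tiny, illustrative sketch (using the BeautifulSoup library we just installed; the pretend tag is not real HTML), accessing that text programmatically could look like this:

```
from bs4 import BeautifulSoup

html = "<pro-tip> All you need to know about html is how tags work </pro-tip>"
soup = BeautifulSoup(html, 'html.parser')
print(soup.find('pro-tip').string)  # the text between the opening and closing tags
```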
|
||||
|
||||
### What to look for in a web scraping project
|
||||
|
||||
Some goals for gathering data are more suited for web scraping than others. My guidelines for what qualifies as a good project are as follows.
|
||||
|
||||
* There is no public API available for the data. It would be much easier to capture structured data through an API, and it would help clarify both the legality and ethics of gathering the data.
* There needs to be a sizable amount of structured data with a regular, repeatable format to justify the effort. Web scraping can be a pain: BeautifulSoup (bs4) makes it easier, but there is no avoiding the individual idiosyncrasies of websites that will require customization.
* Identical formatting of the data is not required, but it does make things easier. The more "edge cases" (departures from the norm) present, the more complicated the scraping will be.
|
||||
|
||||
Disclaimer: I have zero legal training; the following is not intended to be formal legal advice.
|
||||
|
||||
On the note of legality, accessing vast troves of information can be intoxicating, but just because it's possible doesn't mean it should be done.
|
||||
|
||||
There is, thankfully, public information that can guide our morals and our web scrapers. Most websites have a [robots.txt][13] file associated with the site, indicating which scraping activities are permitted and which are not. It's largely there for interacting with search engines (the ultimate web scrapers). However, much of the information on websites is considered public information. As such, some consider the robots.txt file as a set of recommendations rather than a legally binding document. The robots.txt file does not address topics such as ethical gathering and usage of the data.
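Python's standard library can read a robots.txt file for you; here is a short sketch (the target URL matches the site we scrape below, and the printed answer depends on what that file actually allows):

```
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://locations.familydollar.com/robots.txt")
rp.read()
print(rp.can_fetch("*", "https://locations.familydollar.com/id/"))  # True if allowed
```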
|
||||
|
||||
Questions I ask myself before beginning a scraping project:
|
||||
|
||||
* Am I scraping copyrighted material?
|
||||
* Will my scraping activity compromise individual privacy?
|
||||
* Am I making a large number of requests that may overload or damage a server?
|
||||
* Is it possible the scraping will expose intellectual property I do not own?
|
||||
* Are there terms of service governing use of the website, and am I following those?
|
||||
* Will my scraping activities diminish the value of the original data? (for example, do I plan to repackage the data as-is and perhaps siphon off website traffic from the original source)?
|
||||
|
||||
|
||||
|
||||
When I scrape a site, I make sure I can answer "no" to all of those questions.
|
||||
|
||||
For a deeper look at the legal concerns, see the 2018 publications [Legality and Ethics of Web Scraping by Krotov and Silva][14] and [Twenty Years of Web Scraping and the Computer Fraud and Abuse Act by Sellars][15].
|
||||
|
||||
### Now it's time to scrape!
|
||||
|
||||
After assessing the above, I came up with a project. My goal was to extract addresses for all Family Dollar stores in Idaho. These stores have an outsized presence in rural areas, so I wanted to understand how many there are in a rather rural state.
|
||||
|
||||
The starting point is the [location page for Family Dollar][16].
|
||||
|
||||
![Family Dollar Idaho locations page][17]
|
||||
|
||||
To begin, let's load up our prerequisites in our Python virtual environment. The code from here is meant to be added to a Python file (_scraper.py_ if you're looking for a name) or be run in a cell in JupyterLab.
|
||||
|
||||
|
||||
```
|
||||
import requests # for making standard html requests
|
||||
from bs4 import BeautifulSoup # magical tool for parsing html data
|
||||
import json # for parsing data
|
||||
from pandas import DataFrame as df # premier library for data organization
|
||||
```
|
||||
|
||||
Next, we request data from our target URL.
|
||||
|
||||
|
||||
```
|
||||
page = requests.get("<https://locations.familydollar.com/id/>")
|
||||
soup = BeautifulSoup(page.text, 'html.parser')
|
||||
```
|
||||
|
||||
BeautifulSoup will take HTML or XML content and transform it into a complex tree of objects. Here are several common object types that we will use.
|
||||
|
||||
* **BeautifulSoup**—the parsed content
|
||||
* **Tag**—a standard HTML tag, the main type of bs4 element you will encounter
|
||||
* **NavigableString**—a string of text within a tag
|
||||
* **Comment**—a special type of NavigableString
|
||||
|
||||
|
||||
|
||||
There is more to consider when we look at **requests.get()** output. I've only used **page.text** (a property, not a method) to translate the requested page into something readable, but there are other output types:
|
||||
|
||||
* **page.text** for text (most common)
* **page.content** for byte-by-byte output
* **page.json()** for JSON objects
* **page.raw** for the raw socket response (no thank you)
|
||||
|
||||
|
||||
|
||||
I have only worked on English-only sites using the Latin alphabet. The default encoding settings in **requests** have worked fine for that. However, there is a rich internet world beyond English-only sites. To ensure that **requests** correctly parses the content, you can set the encoding for the text:
|
||||
|
||||
|
||||
```
|
||||
page = requests.get(URL)
|
||||
page.encoding = 'ISO-8859-1'
|
||||
soup = BeautifulSoup(page.text, 'html.parser')
|
||||
```
|
||||
|
||||
Taking a closer look at BeautifulSoup tags, we see:
|
||||
|
||||
* The bs4 element **tag** is capturing an HTML tag
|
||||
* It has both a name and attributes that can be accessed like a dictionary: **tag['someAttribute']**
|
||||
* If a tag has multiple attributes with the same name, only the first instance is accessed.
|
||||
* A tag's children are accessed via **tag.contents**.
|
||||
* All tag descendants can be accessed with **tag.descendants**.
|
||||
* You can always search the parsed content for a string with **re.compile("your_string")** instead of navigating the HTML tree.
|
||||
|
||||
|
||||
|
||||
### Determine how to extract relevant content
|
||||
|
||||
Warning: this process can be frustrating.
|
||||
|
||||
Extraction during web scraping can be a daunting process filled with missteps. I think the best way to approach this is to start with one representative example and then scale up (this principle is true for any programming task). Viewing the page's HTML source code is essential. There are a number of ways to do this.
|
||||
|
||||
You can view the entire source code of a page using Python in your terminal (not recommended). Run this code at your own risk:
|
||||
|
||||
|
||||
```
|
||||
print(soup.prettify())
|
||||
```
|
||||
|
||||
While printing out the entire source code for a page might work for a toy example shown in some tutorials, most modern websites have a massive amount of content on any one of their pages. Even the 404 page is likely to be filled with code for headers, footers, and so on.
|
||||
|
||||
It is usually easiest to browse the source code via **View Page Source** in your favorite browser (right-click, then select "view page source"). That is the most reliable way to find your target content (I will explain why in a moment).
|
||||
|
||||
![Family Dollar page source code][18]
|
||||
|
||||
|
||||
|
||||
In this instance, I need to find my target content—an address, city, state, and zip code—in this vast HTML ocean. Often, a simple search of the page source (**ctrl + F**) will yield the section where my target location is located. Once I can actually see an example of my target content (the address for at least one store), I look for an attribute or tag that sets this content apart from the rest.
|
||||
|
||||
It would appear that first, I need to collect web addresses for different cities in Idaho with Family Dollar stores and visit those websites to get the address information. These web addresses all appear to be enclosed in a **href** tag. Great! I will try searching for that using the **find_all** command:
|
||||
|
||||
|
||||
```
|
||||
dollar_tree_list = soup.find_all('href')
|
||||
dollar_tree_list
|
||||
```
|
||||
|
||||
Searching for **href** did not yield anything, darn. This might have failed because **href** is nested inside the class **itemlist**. For the next attempt, search on **itemlist**. Because "class" is a reserved word in Python, **class_** is used instead. The bs4 function **soup.find_all()** turned out to be the Swiss army knife of bs4 functions.
|
||||
|
||||
|
||||
```
|
||||
dollar_tree_list = soup.find_all(class_ = 'itemlist')
|
||||
for i in dollar_tree_list[:2]:
|
||||
print(i)
|
||||
```
|
||||
|
||||
Anecdotally, I found that searching for a specific class was often a successful approach. We can learn more about the object by finding out its type and length.
|
||||
|
||||
|
||||
```
|
||||
type(dollar_tree_list)
|
||||
len(dollar_tree_list)
|
||||
```
|
||||
|
||||
The content from this BeautifulSoup "ResultSet" can be extracted using **.contents**. This is also a good time to create a single representative example.
|
||||
|
||||
|
||||
```
|
||||
example = dollar_tree_list[2] # a representative example
|
||||
example_content = example.contents
|
||||
print(example_content)
|
||||
```
|
||||
|
||||
Use **.attrs** to find what attributes are present in the contents of this object. Note: **.contents** usually returns a list of exactly one item, so the first step is to index that item using the bracket notation.
|
||||
|
||||
|
||||
```
|
||||
example_content = example.contents[0]
|
||||
example_content.attrs
|
||||
```
|
||||
|
||||
Now that I can see that **href** is an attribute, that can be extracted like a dictionary item:
|
||||
|
||||
|
||||
```
|
||||
example_href = example_content['href']
|
||||
print(example_href)
|
||||
```
|
||||
|
||||
### Putting together our web scraper
|
||||
|
||||
All that exploration has given us a path forward. Here's the cleaned-up version of the logic we figured out above.
|
||||
|
||||
|
||||
```
|
||||
city_hrefs = [] # initialise empty list
|
||||
|
||||
for i in dollar_tree_list:
|
||||
cont = i.contents[0]
|
||||
href = cont['href']
|
||||
city_hrefs.append(href)
|
||||
|
||||
# check to be sure all went well
|
||||
for i in city_hrefs[:2]:
|
||||
print(i)
|
||||
```
|
||||
|
||||
The output is a list of URLs of Family Dollar stores in Idaho to scrape.
|
||||
|
||||
That said, I still don't have address information! Now, each city URL needs to be scraped to get this information. So we restart the process, using a single, representative example.
|
||||
|
||||
|
||||
```
|
||||
page2 = requests.get(city_hrefs[2]) # again establish a representative example
|
||||
soup2 = BeautifulSoup(page2.text, 'html.parser')
|
||||
```
|
||||
|
||||
![Family Dollar map and code][19]
|
||||
|
||||
The address information is nested within **type="application/ld+json"**. After doing a lot of geolocation scraping, I've come to recognize this as a common structure for storing address information. Fortunately, **soup.find_all()** also enables searching on **type**.
|
||||
|
||||
|
||||
```
|
||||
arco = soup2.find_all(type="application/ld+json")
|
||||
print(arco[1])
|
||||
```
|
||||
|
||||
The address information is in the second list member! Finally!
|
||||
|
||||
I extracted the contents (from the second list item) using **.contents** (this is a good default action after filtering the soup). Again, since the output of contents is a list of one, I indexed that list item:
|
||||
|
||||
|
||||
```
|
||||
arco_contents = arco[1].contents[0]
|
||||
arco_contents
|
||||
```
|
||||
|
||||
Wow, looking good. The format presented here is consistent with the JSON format (also, the type did have "**json**" in its name). A JSON object can act like a dictionary with nested dictionaries inside. It's actually a nice format to work with once you become familiar with it (and it's certainly much easier to program than a long series of RegEx commands). Although this structurally looks like a JSON object, it is still a bs4 object and needs a formal programmatic conversion to JSON to be accessed as a JSON object:
|
||||
|
||||
|
||||
```
|
||||
arco_json = json.loads(arco_contents)
|
||||
|
||||
|
||||
|
||||
type(arco_json)
|
||||
print(arco_json)
|
||||
```
|
||||
|
||||
In that content is a key called **address** that has the desired address information in the smaller nested dictionary. This can be retrieved thusly:
|
||||
|
||||
|
||||
```
|
||||
arco_address = arco_json['address']
|
||||
arco_address
|
||||
```
|
||||
|
||||
Okay, we're serious this time. Now I can iterate over the list of city URLs in Idaho:
|
||||
|
||||
|
||||
```
|
||||
locs_dict = [] # initialise empty list
|
||||
|
||||
for link in city_hrefs:
|
||||
locpage = requests.get(link) # request page info
|
||||
locsoup = BeautifulSoup(locpage.text, 'html.parser')
|
||||
# parse the page's content
|
||||
locinfo = locsoup.find_all(type="application/ld+json")
|
||||
# extract specific element
|
||||
loccont = locinfo[1].contents[0]
|
||||
# get contents from the bs4 element set
|
||||
locjson = json.loads(loccont) # convert to json
|
||||
locaddr = locjson['address'] # get address
|
||||
locs_dict.append(locaddr) # add address to list
|
||||
```
|
||||
|
||||
### Cleaning our web scraping results with pandas
|
||||
|
||||
We have loads of data in a dictionary, but we have some additional crud that will make reusing our data more complex than it needs to be. To do some final data organization steps, we convert to a pandas data frame, drop the unneeded columns **@type** and **addressCountry**, and check the top five rows to ensure that everything looks alright.
|
||||
|
||||
|
||||
```
|
||||
locs_df = df.from_records(locs_dict)
|
||||
locs_df.drop(['@type', 'addressCountry'], axis = 1, inplace = True)
|
||||
locs_df.head(n = 5)
|
||||
```
|
||||
|
||||
Make sure to save results!!
|
||||
|
||||
|
||||
```
|
||||
locs_df.to_csv("family_dollar_ID_locations.csv", sep = ",", index = False)
|
||||
```
|
||||
|
||||
We did it! There is a comma-separated list of all the Idaho Family Dollar stores. What a wild ride.
|
||||
|
||||
### A few words on Selenium and data scraping
|
||||
|
||||
[Selenium][5] is a common utility for automatic interaction with a webpage. To explain why it's essential to use at times, let's go through an example using Walgreens' website. **Inspect Element** provides the code for what is displayed in a browser:
|
||||
|
||||
![Walgreens location page and code][20]
|
||||
|
||||
|
||||
|
||||
While **View Page Source** provides the code for what **requests** will obtain:
|
||||
|
||||
![Walgreens source code][21]
|
||||
|
||||
When these two don't agree, there are plugins or JavaScript modifying the source code—so the page should be accessed after it has loaded in a browser. **requests** cannot do that, but **Selenium** can.
|
||||
|
||||
Selenium requires a web driver to retrieve the content. It actually opens a web browser, and this page content is collected. Selenium is powerful—it can interact with loaded content in many ways (read the documentation). After getting data with **Selenium**, continue to use **BeautifulSoup** as before:
|
||||
|
||||
|
||||
```
|
||||
url = "[https://www.walgreens.com/storelistings/storesbycity.jsp?requestType=locator\&state=ID][22]"
|
||||
driver = webdriver.Firefox(executable_path = 'mypath/geckodriver.exe')
|
||||
driver.get(url)
|
||||
soup_ID = BeautifulSoup(driver.page_source, 'html.parser')
|
||||
store_link_soup = soup_ID.find_all(class_ = 'col-xl-4 col-lg-4 col-md-4')
|
||||
```
|
||||
|
||||
I didn't need Selenium in the case of Family Dollar, but I do keep it on hand for those times when rendered content differs from source code.
### Wrapping up

In conclusion, when using web scraping to accomplish a meaningful task:

  * Be patient
  * Consult the manuals (these are very helpful)

If you are curious about the answer:

![Family Dollar locations map][23]

There are many, many Family Dollar stores in America.

The complete source code is:
```
import requests
from bs4 import BeautifulSoup
import json
from pandas import DataFrame as df

page = requests.get("https://www.familydollar.com/locations/")
soup = BeautifulSoup(page.text, 'html.parser')

# find all state links
state_list = soup.find_all(class_ = 'itemlist')

state_links = []

for i in state_list:
    cont = i.contents[0]
    attr = cont.attrs
    hrefs = attr['href']
    state_links.append(hrefs)

# find all city links
city_links = []

for link in state_links:
    page = requests.get(link)
    soup = BeautifulSoup(page.text, 'html.parser')
    familydollar_list = soup.find_all(class_ = 'itemlist')
    for store in familydollar_list:
        cont = store.contents[0]
        attr = cont.attrs
        city_hrefs = attr['href']
        city_links.append(city_hrefs)

# to get individual store links
store_links = []

for link in city_links:
    locpage = requests.get(link)
    locsoup = BeautifulSoup(locpage.text, 'html.parser')
    locinfo = locsoup.find_all(type="application/ld+json")
    for i in locinfo:
        loccont = i.contents[0]
        locjson = json.loads(loccont)
        try:
            store_url = locjson['url']
            store_links.append(store_url)
        except KeyError:
            pass

# get address and geolocation information
stores = []

for store in store_links:
    storepage = requests.get(store)
    storesoup = BeautifulSoup(storepage.text, 'html.parser')
    storeinfo = storesoup.find_all(type="application/ld+json")
    for i in storeinfo:
        storecont = i.contents[0]
        storejson = json.loads(storecont)
        try:
            store_addr = storejson['address']
            store_addr.update(storejson['geo'])
            stores.append(store_addr)
        except KeyError:
            pass

# final data parsing
stores_df = df.from_records(stores)
stores_df.drop(['@type', 'addressCountry'], axis = 1, inplace = True)
stores_df['Store'] = "Family Dollar"

stores_df.to_csv("family_dollar_locations.csv", sep = ",", index = False)
```
--

_Author's note: This article is an adaptation of a [talk I gave at PyCascades][24] in Portland, Oregon on February 9, 2020._

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/5/web-scraping-python

作者:[Julia Piaskowski][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/julia-piaskowski
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_html_code.png?itok=VjUmGsnl (HTML code)
[2]: https://requests.readthedocs.io/en/master/
[3]: https://beautiful-soup-4.readthedocs.io/en/latest/
[4]: https://pandas.pydata.org/
[5]: https://www.selenium.dev/
[6]: https://github.com/jpiaskowski/pycas2020_web_scraping
[7]: https://opensource.com/article/20/4/install-python-linux
[8]: https://opensource.com/article/19/8/how-install-python-windows
[9]: https://opensource.com/article/19/5/python-3-default-mac
[10]: https://github.com/jpiaskowski/pycas2020_web_scraping/blob/master/example/Familydollar_location_scrape-all-states.ipynb
[11]: https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html
[12]: https://opensource.com/article/20/4/build-websites
[13]: https://www.contentkingapp.com/academy/robotstxt/
[14]: https://www.researchgate.net/publication/324907302_Legality_and_Ethics_of_Web_Scraping
[15]: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3221625
[16]: https://locations.familydollar.com/id/
[17]: https://opensource.com/sites/default/files/uploads/familydollar1.png (Family Dollar Idaho locations page)
[18]: https://opensource.com/sites/default/files/uploads/familydollar2.png (Family Dollar page source code)
[19]: https://opensource.com/sites/default/files/uploads/familydollar3.png (Family Dollar map and code)
[20]: https://opensource.com/sites/default/files/uploads/walgreens1.png (Walgreens location page and code)
[21]: https://opensource.com/sites/default/files/uploads/walgreens2.png (Walgreens source code)
[22]: https://www.walgreens.com/storelistings/storesbycity.jsp?requestType=locator&state=ID
[23]: https://opensource.com/sites/default/files/uploads/family_dollar_locations.png (Family Dollar locations map)
[24]: https://2020.pycascades.com/talks/adventures-in-babysitting-webscraping-for-python-and-html-novices/
452
sources/tech/20200522 Fast data modeling with JavaScript.md
Normal file
@ -0,0 +1,452 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fast data modeling with JavaScript)
[#]: via: (https://opensource.com/article/20/5/data-modeling-javascript)
[#]: author: (Szymon https://opensource.com/users/schodevio)

Fast data modeling with JavaScript
======
This tutorial showcases a method to model data in just a few minutes.
![Analytics: Charts and Graphs][1]

As a backend developer at [Railwaymen][2], a software house in Kraków, Poland, some of my tasks rely on models that manipulate and customize data retrieved from a database. When I wanted to improve my skills in frontend frameworks, I [chose Vue][3], and I thought it would be good to have a similar way to model data in a store. I started with some libraries that I found through [NPM][4], but they offered many more features than I needed.

So I decided to build my own solution, and I was very surprised that the base took less than 15 lines of code and is very flexible. I implemented this solution in an open source application I developed called [Evally][5], a web app that helps businesses keep track of their employees' performance reviews and professional development. It reminds managers or HR representatives about employees' upcoming evaluations and gathers all of the data needed to assess their performance in the fairest way.
### Model and list

The only things you need to do are to create a class and use the defaultsDeep function from the [Lodash][6] JavaScript library:

```
_.defaultsDeep(object, [sources])
```

Arguments:

  * `object (Object)`: The destination object
  * `[sources] (...Object)`: The source objects

Returns:

  * `(Object)`: Returns object

The [Lodash docs][7] describe this helper function as follows:

> "Assigns recursively own and inherited enumerable string keyed properties of source objects to the destination object for all destination properties that resolve to undefined. Source objects are applied from left to right. Once a property is set, additional values of the same property are ignored."

For example:

```
_.defaultsDeep({ 'a': { 'b': 2 } }, { 'a': { 'b': 1, 'c': 3 } })
// => { 'a': { 'b': 2, 'c': 3 } }
```

That's all! To try it out, create a file called **base.js** and import the defaultsDeep function from the Lodash package:
```
// base.js
import defaultsDeep from "lodash/defaultsDeep";
```

Next, create and export the Model class, where the constructor will use the Lodash helper function to assign values to all passed attributes and initialize the attributes that were not received with default values:

```
// base.js
// ...

export class Model {
  constructor(attributes = {}) {
    defaultsDeep(this, attributes, this.defaults);
  }
}
```

Now, create your first real model, Employee, with attributes for firstName, lastName, position, and hiredAt, where "position" defines "Programmer" as the default value:

```
// employee.js
import { Model } from "./base.js";

export class Employee extends Model {
  get defaults() {
    return {
      firstName: "",
      lastName: "",
      position: "Programmer",
      hiredAt: ""
    };
  }
}
```

Next, begin creating employees:
```
// app.js
import { Employee } from "./employee.js";

const programmer = new Employee({
  firstName: "Will",
  lastName: "Smith"
});

// => Employee {
//   firstName: "Will",
//   lastName: "Smith",
//   position: "Programmer",
//   hiredAt: "",
//   constructor: Object
// }

const techLeader = new Employee({
  firstName: "Charles",
  lastName: "Bartowski",
  position: "Tech Leader"
});

// => Employee {
//   firstName: "Charles",
//   lastName: "Bartowski",
//   position: "Tech Leader",
//   hiredAt: "",
//   constructor: Object
// }
```

You have two employees, and the first one's position is assigned from the defaults. Here's how multiple employees can be defined:
```
// base.js

// ...

export class List {
  constructor(items = []) {
    this.models = items.map(item => new this.model(item));
  }
}
```

```
// employee.js
import { Model, List } from "./base.js";

// …

export class EmployeesList extends List {
  get model() {
    return Employee;
  }
}
```

The List class constructor maps an array of received items into an array of the desired models. The only requirement is to provide a correct model class name:

```
// app.js
import { Employee, EmployeesList } from "./employee.js";

// …

const employees = new EmployeesList([
  {
    firstName: "Will",
    lastName: "Smith"
  },
  {
    firstName: "Charles",
    lastName: "Bartowski",
    position: "Tech Leader"
  }
]);

// => EmployeesList {models: Array[2], constructor: Object}
//   models: Array[2]
//     0: Employee
//       firstName: "Will"
//       lastName: "Smith"
//       position: "Programmer"
//       hiredAt: ""
//       <constructor>: "Employee"
//     1: Employee
//       firstName: "Charles"
//       lastName: "Bartowski"
//       position: "Tech Leader"
//       hiredAt: ""
//       <constructor>: "Employee"
//   <constructor>: "EmployeesList"
```
### Ways to use this approach

This simple solution allows you to keep your data structure in one place and avoid code repetition. The [DRY][8] principle rocks! You can also customize your models as needed, such as in the following examples.

#### Custom getters

Do you need one attribute to be dependent on the others? No problem; you can do this by improving your Employee model:

```
// employee.js
import { Model } from "./base.js";

export class Employee extends Model {
  get defaults() {
    return {
      firstName: "",
      lastName: "",
      position: "Programmer",
      hiredAt: ""
    };
  }

  get fullName() {
    return [this.firstName, this.lastName].join(' ')
  }
}
```

```
// app.js
import { Employee, EmployeesList } from "./employee.js";

// …

console.log(techLeader.fullName);
// => Charles Bartowski
```

Now you don't have to repeat the code to do something as simple as displaying the employee's full name.
#### Date formatting

The model is a good place to define other formats for given attributes. The best examples are dates:

```
// employee.js
import { Model } from "./base.js";
import moment from 'moment';

export class Employee extends Model {
  get defaults() {
    return {
      firstName: "",
      lastName: "",
      position: "Programmer",
      hiredAt: ""
    };
  }

  get formattedHiredDate() {
    if (!this.hiredAt) return "---";

    return moment(this.hiredAt).format('MMMM DD, YYYY');
  }
}
```

```
// app.js
import { Employee, EmployeesList } from "./employee.js";

// …

techLeader.hiredAt = "2020-05-01";

console.log(techLeader.formattedHiredDate);
// => May 01, 2020
```

Another case related to dates (which I discovered while developing the Evally app) is the ability to work with different date formats. Here's an example that uses a datepicker:
  1. All employees fetched from the database have the hiredAt date in the format YEAR-MONTH-DAY, e.g., 2020-05-01
  2. You need to display the hiredAt date in a more friendly format, MONTH DAY, YEAR, e.g., May 01, 2020
  3. A datepicker uses the format DAY-MONTH-YEAR, e.g., 01-05-2020

Resolve this issue with:
```
// employee.js
import { Model } from "./base.js";
import moment from 'moment';

export class Employee extends Model {

  // …

  get formattedHiredDate() {
    if (!this.hiredAt) return "---";

    return moment(this.hiredAt).format('MMMM DD, YYYY');
  }

  get hiredDate() {
    return (
      this.hiredAt
        ? moment(this.hiredAt).format('DD-MM-YYYY')
        : ''
    );
  }

  set hiredDate(date) {
    const mDate = moment(date, 'DD-MM-YYYY');

    this.hiredAt = (
      mDate.isValid()
        ? mDate.format('YYYY-MM-DD')
        : ''
    );
  }
}
```

This adds getter and setter functions to handle the datepicker's functionality.
```
// Get date from server
techLeader.hiredAt = '2020-05-01';
console.log(techLeader.formattedHiredDate);
// => May 01, 2020

// Datepicker gets the date
console.log(techLeader.hiredDate);
// => 01-05-2020

// Datepicker sets a new date
techLeader.hiredDate = '15-06-2020';

// Display the new date
console.log(techLeader.formattedHiredDate);
// => June 15, 2020
```

This makes it very simple to manage multiple date formats.

#### Storage for model-related information

Another use for a model class is storing general information related to the model, like paths for routing:
```
// employee.js
import { Model } from "./base.js";
import moment from 'moment';

export class Employee extends Model {

  // …

  static get routes() {
    return {
      employeesPath: '/api/v1/employees',
      employeePath: id => `/api/v1/employees/${id}`
    }
  }
}
```

```
// Path for POST requests
console.log(Employee.routes.employeesPath)

// Path for GET requests
console.log(Employee.routes.employeePath(1))
```
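
With the routes stored on the model, wiring them to an actual request is straightforward. Here's a minimal sketch of my own (not from the original article), assuming the endpoint returns a JSON array of employee attributes:

```
// app.js (hypothetical usage sketch)
import { Employee, EmployeesList } from "./employee.js";

fetch(Employee.routes.employeesPath)
  .then(response => response.json())
  .then(data => {
    // Wrap the raw attribute objects in model instances
    const employees = new EmployeesList(data);
    console.log(employees.models.length);
  });
```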
### Customize the list of models

Don't forget about the List class, which you can customize as needed:

```
// employee.js
import { Model, List } from "./base.js";

// …

export class EmployeesList extends List {
  get model() {
    return Employee;
  }

  findByFirstName(val) {
    return this.models.find(item => item.firstName === val);
  }

  filterByPosition(val) {
    return this.models.filter(item => item.position === val);
  }
}
```

```
console.log(employees.findByFirstName('Will'))
// => Employee {
//   firstName: "Will",
//   lastName: "Smith",
//   position: "Programmer",
//   hiredAt: "",
//   constructor: Object
// }

console.log(employees.filterByPosition('Tech Leader'))
// => [Employee]
//   0: Employee
//     firstName: "Charles"
//     lastName: "Bartowski"
//     position: "Tech Leader"
//     hiredAt: ""
//     <constructor>: "Employee"
```

### Summary

This simple structure for data modeling in JavaScript should save you some development time. You can add new functions whenever you need them to keep your code cleaner and easier to maintain. All of this code is available in my [CodeSandbox][9], so try it out and let me know how it goes by leaving a comment below.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/5/data-modeling-javascript

作者:[Szymon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/schodevio
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/analytics-graphs-charts.png?itok=sersoqbV (Analytics: Charts and Graphs)
[2]: https://railwaymen.org/
[3]: https://blog.railwaymen.org/vue-vs-react-which-one-is-better-for-your-app-similarities-differences
[4]: https://www.npmjs.com/
[5]: https://github.com/railwaymen/evally
[6]: https://lodash.com/
[7]: https://lodash.com/docs/4.17.15
[8]: https://en.wikipedia.org/wiki/Don%27t_repeat_yourself
[9]: https://codesandbox.io/s/02jsdatamodels-1mhtb
@ -0,0 +1,310 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Turn your Raspberry Pi homelab into a network filesystem)
[#]: via: (https://opensource.com/article/20/5/nfs-raspberry-pi)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)

Turn your Raspberry Pi homelab into a network filesystem
======
Add shared filesystems to your homelab with an NFS server.
![Blue folders flying in the clouds above a city skyline][1]

A shared filesystem is a great way to add versatility and functionality to a homelab. Having a centralized filesystem shared to the clients in the lab makes organizing data, doing backups, and sharing data considerably easier. This is especially useful for web applications load-balanced across multiple servers and for persistent volumes used by [Kubernetes][2], as it allows pods to be spun up with persistent data on any number of nodes.

Whether your homelab is made up of ordinary computers, surplus enterprise servers, or Raspberry Pis or other single-board computers (SBCs), a shared filesystem is a useful asset, and a network filesystem (NFS) server is a great way to create one.

I have written before about [setting up a "private cloud at home"][3], a homelab made up of Raspberry Pis or other SBCs and maybe some other consumer hardware or a desktop PC. An NFS server is an ideal way of sharing data between these components. Since most SBCs' operating systems (OSes) run off an SD card, there are some challenges. SD cards suffer from increased failures, especially when used as the OS disk for a computer, and they are not made to be constantly read from and written to. What you really need is a real hard drive: they are generally cheaper per gigabyte than SD cards, especially for larger disks, and they are less likely to sustain failures. Raspberry Pi 4s now come with USB 3.0 ports, and USB 3.0 hard drives are ubiquitous and affordable. It's a perfect match. For this project, I will use a 2TB USB 3.0 external hard drive plugged into a Raspberry Pi 4 running an NFS server.

![Raspberry Pi with a USB hard disk][4]

(Chris Collins, [CC BY-SA 4.0][5])
### Install the NFS server software

I am running Fedora Server on a Raspberry Pi, but this project can be done with other distributions as well. To run an NFS server on Fedora, you need the **nfs-utils** package, and luckily it is already installed (at least in Fedora 31). You also need the **rpcbind** package if you are planning to run NFSv3 services, but it is not strictly required for NFSv4.

If these packages are not already on your system, install them with the **dnf** command:

```
# Install nfs-utils and rpcbind
$ sudo dnf install nfs-utils rpcbind
```

Raspbian is another popular OS used with Raspberry Pis, and the setup is almost exactly the same. The package names differ, but that is about the only major difference. To install an NFS server on a system running Raspbian, you need the following packages:

  * **nfs-common:** These files are common to NFS servers and clients
  * **nfs-kernel-server:** The main NFS server software package

Raspbian uses **apt-get** for package management (not **dnf**, as Fedora does), so use that to install the packages:

```
# For a Raspbian system, use apt-get to install the NFS packages
$ sudo apt-get install nfs-common nfs-kernel-server
```
### Prepare a USB hard drive as storage

As I mentioned above, a USB hard drive is a good choice for providing storage for Raspberry Pis or other SBCs, especially because the SD card used for the OS disk image is not ideal. For your private cloud at home, you can use cheap USB 3.0 hard drives for large-scale storage. Plug the disk in and use **fdisk** to find out the device ID assigned to it, so you can work with it.

```
# Find your disk using fdisk
# Unrelated disk content omitted
$ sudo fdisk -l

Disk /dev/sda: 1.84 TiB, 2000398933504 bytes, 3907029167 sectors
Disk model: BUP Slim BK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe3345ae9

Device     Boot Start        End    Sectors  Size Id Type
/dev/sda1        2048 3907028991 3907026944  1.8T 83 Linux
```

For clarity, in the example output above, I omitted all the disks except the one I'm interested in. You can see the USB disk I want to use was assigned the device **/dev/sda**, and you can see some information about the model (**Disk model: BUP Slim BK**), which helps me identify the correct disk. The disk already has a partition, and its size confirms it is the disk I am looking for.

_Note:_ Make sure to identify the correct disk and partition for your device. It may be different from the example above.

Each partition created on a drive gets a special universally unique identifier (UUID). The computer uses the UUID to make sure it is mounting the correct partition to the correct location using the **/etc/fstab** config file. You can retrieve the UUID of the partition using the **blkid** command:

```
# Get the block device attributes for the partition
# Make sure to use the partition that applies in your case. It may differ.
$ sudo blkid /dev/sda1

/dev/sda1: LABEL="backup" UUID="bd44867c-447c-4f85-8dbf-dc6b9bc65c91" TYPE="xfs" PARTUUID="e3345ae9-01"
```

In this case, the UUID of **/dev/sda1** is **bd44867c-447c-4f85-8dbf-dc6b9bc65c91**. Yours will be different, so make a note of it.
### Configure the Raspberry Pi to mount this disk on startup, then mount it

Now that you have identified the disk and partition you want to use, you need to tell the computer how to mount it, to do so whenever it boots up, and to go ahead and mount it now. Because this is a USB disk and might be unplugged, you will also configure the Raspberry Pi not to wait on boot if the disk is not plugged in or is otherwise unavailable.

In Linux, this is done by adding the partition to the **/etc/fstab** configuration file, including where you want it to be mounted and some arguments to tell the computer how to treat it. This example will mount the partition to **/srv/nfs**, so start by creating that path:

```
# Create the mountpoint for the disk partition
$ sudo mkdir -p /srv/nfs
```

Next, modify the **/etc/fstab** file using the following syntax format:

```
<disk id> <mountpoint> <filesystem type> <options> <fs_freq> <fs_passno>
```

Use the UUID you identified earlier for the disk ID. As I mentioned in the prior step, the mountpoint is **/srv/nfs**. For the filesystem type, it is usually best to select the actual filesystem, but since this will be a USB disk, use **auto**.

For the options values, use **nosuid,nodev,nofail** (the example entry below also adds **noatime**, which disables access-time updates and so cuts down on unnecessary writes to the disk).
#### An aside about man pages:

That said, there are a _lot_ of possible options, and the manual (man) pages are the best way to see what they are. Investigating the man page for fstab is a good place to start:

```
# Open the man page for fstab
$ man fstab
```

This opens the manual/documentation associated with the fstab command. In the man page, each of the options is broken down to show what it does and the common selections. For example, **The fourth field (fs_mntopts)** gives some basic information about the options that work in that field and directs you to **man (8) mount** for a more in-depth description of the mount options. That makes sense, as the **/etc/fstab** file, in essence, tells the computer how to automate mounting disks, in the same way you would manually use the mount command.

You can get more information about the options you will use from mount's man page. The numeral 8, in parentheses, indicates the man page section. In this case, section 8 is for _System Administration tools and Daemons_.

Helpfully, you can get a list of the standard sections from the man page for **man**.

Back to mounting the disk, take a look at **man (8) mount**:
```
# Open section 8 of the man pages for mount
$ man 8 mount
```

In this man page, you can examine what the options listed above do:

  * **nosuid:** Do not honor the suid/guid bit. Do not allow any files that might be on the USB disk to be executed as root. This is a good security practice.
  * **nodev:** Do not interpret characters or block special devices on the file system; i.e., do not honor any device nodes that might be on the USB disk. Another good security practice.
  * **nofail:** Do not log any errors if the device does not exist. This is a USB disk and might not be plugged in, so it will be ignored if that is the case.

Returning to the line you are adding to the **/etc/fstab** file, there are two final options: **fs_freq** and **fs_passno**. Their values are related to somewhat legacy options, and _most_ modern systems just use a **0** for both, especially for filesystems on USB disks. The fs_freq value relates to the dump command and making dumps of the filesystem. The fs_passno value defines which filesystems to **fsck** on boot and their order. If it's set, usually the root partition would be **1** and any other filesystems would be **2**. Set the value to **0** to skip using **fsck** on this partition.

In your preferred editor, open the **/etc/fstab** file and add the entry for the partition on the USB disk, replacing the values here with those gathered in the previous steps.

```
# With sudo, or as root, add the partition info to the /etc/fstab file
UUID="bd44867c-447c-4f85-8dbf-dc6b9bc65c91"    /srv/nfs    auto    nosuid,nodev,nofail,noatime    0    0
```
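
With the entry in place, you can mount everything listed in **/etc/fstab** right away, without rebooting. This quick check is my addition, but the commands are standard:

```
# Mount all filesystems listed in /etc/fstab
$ sudo mount -a

# Verify the partition is mounted at /srv/nfs
$ df -h /srv/nfs
```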
### Enable and start the NFS server

With the packages installed and the partition added to your **/etc/fstab** file, you can now go ahead and start the NFS server. On a Fedora system, you need to enable and start two services: **rpcbind** and **nfs-server**. Use the **systemctl** command to accomplish this:

```
# Start NFS server and rpcbind
$ sudo systemctl enable rpcbind.service
$ sudo systemctl enable nfs-server.service
$ sudo systemctl start rpcbind.service
$ sudo systemctl start nfs-server.service
```

On Raspbian or other Debian-based distributions, you just need to enable and start the **nfs-kernel-server** service using the **systemctl** command the same way as above.

#### RPCBind

The rpcbind utility is used to map remote procedure call (RPC) services to the ports on which they listen. According to the rpcbind man page:

> "When an RPC service is started, it tells rpcbind the address at which it is listening, and the RPC program numbers it is prepared to serve. When a client wishes to make an RPC call to a given program number, it first contacts rpcbind on the server machine to determine the address where RPC requests should be sent."

In the case of an NFS server, rpcbind maps the protocol number for NFS to the port on which the NFS server is listening. However, NFSv4 does not require the use of rpcbind. If you use _only_ NFSv4 (by removing versions two and three from the configuration), rpcbind is not required. I've included it here for backward compatibility with NFSv3.
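
If you do want an NFSv4-only server, recent Fedora releases control the served protocol versions in **/etc/nfs.conf**. The following is a rough sketch of mine, not from the original article; the exact keys are documented in **man nfs.conf**, so verify them on your release:

```
# /etc/nfs.conf (excerpt): serve NFSv4 only by disabling v3
[nfsd]
vers3=n
vers4=y
```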
### Export the mounted filesystem

The NFS server decides which filesystems are shared with (exported to) which remote clients based on another configuration file, **/etc/exports**. This file is just a map of host internet protocol (IP) addresses (or subnets) to the filesystems to be shared and some options (read-only or read-write, root squash, etc.). The format of the file is:

```
<directory> <host or hosts>(options)
```

In this example, you will export the partition mounted to **/srv/nfs**. This is the "directory" piece.

The second part, the host or hosts, includes the hosts you want to export this partition to. These can be specified as a single host with a fully qualified domain name or hostname, the IP address of the host, a number of hosts using wildcard characters to match domains (e.g., *.example.org), IP networks (e.g., classless inter-domain routing, or CIDR, notation), or netgroups.

The third piece includes options to apply to the export:

  * **ro/rw:** Export the filesystem as read only or read write
  * **wdelay:** Delay writes to the disk if another write is imminent, to improve performance (this is _probably_ not as useful with a solid-state USB disk, if that is what you are using)
  * **root_squash:** Prevent any root users on the client from having root access on the host, and set the root UID to **nfsnobody** as a security precaution
Test exporting the partition you have mounted at **/srv/nfs** to a single client, for example, a laptop. Identify your client's IP address (my laptop's is **192.168.2.64**, but yours will likely be different). You could share it to a large subnet, but for testing, limit it to the single IP address. The CIDR notation for just this IP is **192.168.2.64/32**; a **/32** subnet is just a single IP.

Using your preferred editor, edit the **/etc/exports** file with your directory, host CIDR, and the **rw** and **root_squash** options:

```
# Edit your /etc/exports file like so, substituting the information from your systems
/srv/nfs 192.168.2.64/32(rw,root_squash)
```

_Note:_ If you copied the **/etc/exports** file from another location or otherwise overwrote the original with a copy, you may need to restore the SELinux context for the file. You can do this with the **restorecon** command:

```
# Restore the SELinux context of the /etc/exports file
$ sudo restorecon /etc/exports
```

Once this is done, restart the NFS server to pick up the changes to the **/etc/exports** file:

```
# Restart the nfs server
$ sudo systemctl restart nfs-server.service
```
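
To confirm the server picked up the new export, you can list the active exports and their options. This verification step is my addition, but **exportfs** ships with the NFS utilities:

```
# List the currently exported filesystems and their options
$ sudo exportfs -v
```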
### Open the firewall for the NFS service

Some systems, by default, do not run a [firewall service][6]. Raspbian, for example, defaults to open iptables rules, with ports opened by different services immediately available from outside the machine. Fedora Server, by contrast, runs the firewalld service by default, so you must open the port for the NFS server (and rpcbind, if you will be using NFSv3). You can do this with the **firewall-cmd** command.

Check the zones used by firewalld and get the default zone. For Fedora Server, this will be the FedoraServer zone:

```
# List the zones
# Output omitted for brevity
$ sudo firewall-cmd --list-all-zones

# Retrieve just the default zone info
# Make a note of the default zone
$ sudo firewall-cmd --get-default-zone

# Permanently add the nfs service to the list of allowed ports
$ sudo firewall-cmd --add-service=nfs --permanent

# For NFSv3, we need to add a few more services: nfs3, mountd, rpc-bind
$ sudo firewall-cmd --add-service={nfs3,mountd,rpc-bind} --permanent

# Check the services for the zone, substituting the default zone in use by your system
$ sudo firewall-cmd --list-services --zone=FedoraServer

# If all looks good, reload firewalld
$ sudo firewall-cmd --reload
```

And with that, you have successfully configured the NFS server with your mounted USB disk partition and exported it to your test system for sharing. Now you can test mounting it on the system you added to the exports list.
### Test the NFS exports

First, from the NFS server, create a file to read in the **/srv/nfs** directory:

```
# Create a test file to share
$ echo "Can you see this?" >> /srv/nfs/nfs_test
```

Now, on the client system you added to the exports list, first make sure the NFS client packages are installed. On Fedora systems, this is the **nfs-utils** package, and it can be installed with **dnf**. Raspbian systems have the **libnfs-utils** package that can be installed with **apt-get**.

Install the NFS client packages:

```
# Install the nfs-utils package with dnf
$ sudo dnf install nfs-utils
```

Once the client package is installed, you can test out the NFS export. Again on the client, use the mount command with the IP of the NFS server and the path to the export, and mount it to a location on the client, which for this test is the **/mnt** directory. In this example, my NFS server's IP is **192.168.2.109**, but yours will likely be different:

```
# Mount the export from the NFS server to the client host
# Make sure to substitute the information for your own hosts
$ sudo mount 192.168.2.109:/srv/nfs /mnt

# See if the nfs_test file is visible:
$ cat /mnt/nfs_test
Can you see this?
```

Success! You now have a working NFS server for your homelab, ready to share files with multiple hosts, allow multi-read/write access, and provide centralized storage and backups for your data. There are many options for shared storage for homelabs, but NFS is venerable, efficient, and a great option to add to your "private cloud at home" homelab. Future articles in this series will expand on how to automatically mount NFS shares on clients and how to use NFS as a storage class for Kubernetes Persistent Volumes.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/5/nfs-raspberry-pi

作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO (Blue folders flying in the clouds above a city skyline)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://opensource.com/article/20/5/disk-image-raspberry-pi
[4]: https://opensource.com/sites/default/files/uploads/raspberrypi_with_hard-disk.jpg (Raspberry Pi with a USB hard disk)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/article/18/9/linux-iptables-firewalld
@ -0,0 +1,124 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Diamond interface composition in Go 1.14)
[#]: via: (https://dave.cheney.net/2020/05/24/diamond-interface-composition-in-go-1-14)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

Diamond interface composition in Go 1.14
======

Per the [overlapping interfaces proposal][1], Go 1.14 now permits embedding of interfaces with overlapping method sets. This is a brief post explaining what this change means.

Let's start with the definition of three key interfaces from the `io` package: `io.Reader`, `io.Writer`, and `io.Closer`:
```
package io

type Reader interface {
    Read([]byte) (int, error)
}

type Writer interface {
    Write([]byte) (int, error)
}

type Closer interface {
    Close() error
}
```

Just as embedding a type inside a struct allows the embedded type's fields and methods to be accessed as if they were declared on the embedding type[1][2], the same is true for interfaces. Thus there is no difference between explicitly declaring

```
type ReadCloser interface {
    Read([]byte) (int, error)
    Close() error
}
```

and using embedding to compose the interface

```
type ReadCloser interface {
    Reader
    Closer
}
```

You can even mix and match

```
type WriteCloser interface {
    Write([]byte) (int, error)
    Closer
}
```

However, prior to Go 1.14, if you continued to compose interface declarations in this manner, you would likely find that something like this,

```
type ReadWriteCloser interface {
    ReadCloser
    WriteCloser
}
```

would fail to compile

```
% go build interfaces.go
command-line-arguments
./interfaces.go:27:2: duplicate method Close
```

Fortunately, with Go 1.14 this is no longer a limitation, thus solving problems that typically occur with diamond-shaped embedding graphs.
However, there is a catch that I ran into when attempting to demonstrate this feature to the local user group: this feature is only enabled when the Go compiler uses the 1.14 (or later) spec.

As near as I can make out, the rules for which version of the Go spec is used during compilation appear to be:

  1. If your source code is stored inside `GOPATH` (or you have _disabled_ modules with `GO111MODULE=off`), then the version of the Go spec used to compile with matches the version of the compiler you are using. Said another way, if you have Go 1.13 installed, your Go version is 1.13. If you have Go 1.14 installed, your version is 1.14. No surprises here.
  2. If your source code is stored outside `GOPATH` (or you have forced modules on with `GO111MODULE=on`), then the `go` tool will take the Go version from the `go.mod` file.
  3. If there is no Go version listed in `go.mod`, then the version of the spec will be the version of Go installed. This is identical to point 1.
  4. If you are in module mode, either by being outside `GOPATH` or with `GO111MODULE=on`, but there is no `go.mod` file in the current, or any parent, directory, then the version of the Go spec used to compile your code defaults to Go 1.13.

The last point caught me out.
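
In practice, the fix is to make sure the `go.mod` file names Go 1.14 or later. A minimal sketch, using a hypothetical module path of my own for illustration:

```
// go.mod (hypothetical module path, just for illustration)
module example.com/interfaces

go 1.14
```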
  1. It is said that embedding promotes the type's fields and methods.[][3]

### Related posts:

  1. [Struct composition with Go][4]
  2. [term: low level serial with a high level interface][5]
  3. [Accidental method value][6]
  4. [How does the go build command work ?][7]
--------------------------------------------------------------------------------

via: https://dave.cheney.net/2020/05/24/diamond-interface-composition-in-go-1-14

作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://github.com/golang/proposal/blob/master/design/6977-overlapping-interfaces.md
[2]: tmp.nUQHg5BP9T#easy-footnote-bottom-1-4179 (It is said that embedding promotes the type’s fields and methods.)
[3]: tmp.nUQHg5BP9T#easy-footnote-1-4179
[4]: https://dave.cheney.net/2015/05/22/struct-composition-with-go (Struct composition with Go)
[5]: https://dave.cheney.net/2014/05/08/term-low-level-serial-with-a-high-level-interface (term: low level serial with a high level interface)
[6]: https://dave.cheney.net/2014/05/19/accidental-method-value (Accidental method value)
[7]: https://dave.cheney.net/2013/10/15/how-does-the-go-build-command-work (How does the go build command work ?)
@ -0,0 +1,113 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (CopyQ Clipboard Manager for Keeping a Track of Clipboard History)
[#]: via: (https://itsfoss.com/copyq-clipboard-manager/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

CopyQ Clipboard Manager for Keeping a Track of Clipboard History
======

How do you copy-paste text? Let me guess. You either use the right-click menu to copy-paste or use Ctrl+C to copy text and Ctrl+V to paste it. Text copied this way is saved to the 'clipboard'. The [clipboard][1] is a special location in the memory of your system that stores cut or copied text (and, in some cases, images).

But have you ever been in a situation where you copied a text, then copied another text, and then realized you needed the text you copied earlier? Trust me, it happens a lot.

Instead of wondering how to find the previous text to copy it again, you can use a clipboard manager.

A clipboard manager is a handy little tool that keeps a history of the text you have copied. If you need to use earlier copied text, you can use the clipboard manager to copy it again.

![Clipboard][2]

There are several clipboard managers available for Linux. In this article, I'll cover one such tool that goes by the name CopyQ.
### CopyQ Clipboard Manager

[CopyQ][3] is a nifty clipboard manager with plenty of features to manage your system's clipboard. It is open source software, available for free for major Linux distributions.

Like any other clipboard manager, CopyQ monitors the system clipboard and saves its content. It can save both text and images from the clipboard.

CopyQ sits in the system tray, and you can easily access it from there. From the system tray, just click on the text that you want. It will automatically copy this text, and you will notice that the copied text moves to the top of the saved clips.

![][4]

In the system tray, it shows only the five most recent clips. You can open the main window using the "Show/hide main window" option in the system tray. CopyQ saves up to 200 clips. You may edit the clipboard items here.

![][5]

You may also set a keyboard shortcut to bring up the clipboard with a key combination. This option is available in Preferences->Shortcuts.

![][6]

If you decide to use it, I advise enabling autostart so that CopyQ runs automatically when you start your system. By default, it saves 200 items in the history, and that's a lot in my opinion. You may want to change that as well.

![][7]

CopyQ is an advanced clipboard manager with plenty of additional features. You can search for text in the saved clipboard items. You can sort, create, edit, or change the order of the clipboard items.

You can ignore the clipboard copied from some windows or containing some text. You can also temporarily disable clipboard saving. CopyQ also supports a [Vim][8]-like editor and shortcuts for Vim fans.
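
On top of the GUI, CopyQ can also be driven from the command line. The exact subcommands can vary by version, so treat this as a sketch and check `copyq --help` on your system:

```
# Add a new item to the clipboard history
copyq add "some text to keep"

# Print the most recent item (row 0)
copyq read 0

# Show the CopyQ main window
copyq show
```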
There are many more features that you may explore on your own. For me, the most notable feature is that it gives me easy access to older copied text, and I am happy with that.

### Installing CopyQ on Linux

CopyQ is available for Linux, Windows and macOS. You can get the executable file for Windows and macOS [from its website][3].

For Linux, CopyQ is available in the repositories of all major Linux distributions. This means that you can find it in your software center or install it using your distribution's package manager.

Ubuntu users may find it in the software center if the [universe repository is enabled][9].

![CopyQ in Ubuntu Software Center][10]

Alternatively, you can use the apt command to install it:

```
sudo apt install copyq
```

Ubuntu users also have the option to [use the official PPA][11] and always get the latest stable CopyQ version. For example, at the time of writing this article, the CopyQ version in Ubuntu 20.04 is 3.10 while the [PPA has the newer version][12] 3.11. It's your choice really.

```
sudo add-apt-repository ppa:hluk/copyq
sudo apt update
sudo apt install copyq
```

You may also want to know [how to remove a PPA][13] later.

### Do you use a clipboard manager?

I find it surprising that many people are not even aware of an essential utility like a clipboard manager. For me, it's one of the [essential productivity tools on Linux][14].

As I mentioned at the beginning of the article, there are several clipboard managers available for Linux. CopyQ is one such tool. Do you use or know of some other similar clipboard tool? Why not let us know in the comments?

If you started using CopyQ after reading this article, do share your experience with it. What did you like, and what didn't you like? The comment section is all yours.
--------------------------------------------------------------------------------

via: https://itsfoss.com/copyq-clipboard-manager/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.computerhope.com/jargon/c/clipboar.htm
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/clipboard.png?ssl=1
[3]: https://hluk.github.io/CopyQ/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/copyq-system-tray.png?ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/copyq-main-window.png?ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/copyq-shortcuts.png?ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/copyq-auto-start.png?ssl=1
[8]: https://itsfoss.com/vim-8-release-install/
[9]: https://itsfoss.com/ubuntu-repositories/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/copyq-software-center.png?resize=800%2C474&ssl=1
[11]: https://itsfoss.com/ppa-guide/
[12]: https://launchpad.net/~hluk/+archive/ubuntu/copyq
[13]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
[14]: https://itsfoss.com/productivity-tips-ubuntu/
@ -0,0 +1,112 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Foliate: A Modern eBook Reader App for Linux)
[#]: via: (https://itsfoss.com/foliate-ebook-viewer/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Foliate: A Modern eBook Reader App for Linux
======

_**Brief: Foliate is a simple and elegant open source eBook viewer that provides a Kindle-like reading experience on the Linux desktop.**_

### Foliate provides a modern reading experience on the Linux desktop

![][1]

While we already have a list of the [best eBook readers for Linux][2], I recently came across another eBook viewer for Linux. It is called [Foliate][3].

Foliate is a modern GTK eBook viewer that offers quite a lot of essential features. If you own an Amazon Kindle or some other eBook reader, you probably miss that kind of reading experience on the desktop.

Foliate addresses those complaints. Foliate shows an estimate of the remaining reading time and pages in the book. You can add bookmarks, highlight text, and add notes. You can export this data or sync it easily.

![Foliate Ebook Viewer Features][4]

You can also look up words using Wiktionary and Wikipedia. You can switch between two-page view and scroll view. It also has several themes to suit your reading preference.

![][5]

And the best thing is that it is being actively maintained and developed.
### Features of Foliate

![][6]

Let's take a look at all the features Foliate offers:

  * Supports .epub, .mobi, .azw, and .azw3 files. It DOES NOT support PDF files.
  * Lets you read the eBook in a two-page view mode and offers a scroll view mode as well.
  * Ability to customize font, line-spacing, margins, and brightness
  * Default themes include light, sepia, dark, Solarized dark/light, Gruvbox light/dark, Grey, Nord, and invert mode.
  * You can also add custom themes to tweak the appearance of the eBook viewer
  * Reading progress slider with chapter marks
  * Bookmarks and annotations support
  * Ability to find text in the book
  * Ability to zoom in and zoom out
  * Enable/disable sidebar for navigation
  * Quick dictionary lookup using [Wiktionary][7] and [Wikipedia][8]
  * Translation of text using Google Translate
  * Touchpad gestures: use a two-finger swipe to turn the page
  * Text-to-speech support with [eSpeak NG][9] and [Festival][10]

**Recommended Read:**

![][11]

#### [What Amazon Kindle? Here's an Open Source eBook Reader][12]

Open Book is an open source eBook reader that you can tweak to your liking. Free from proprietary stuff, Open Book is a dream come true for open source enthusiasts.
### Installing Foliate on Linux

For Ubuntu and Debian-based Linux distributions, you can download the .deb file from its [GitHub releases section][13]. [Installing applications from a deb file][14] is as easy as double-clicking on it.

For other Linux distributions like Fedora, Arch, SUSE, etc., Foliate is available as a [Flatpak][15] and [Snap][16] package. If you don't know how to use them, you may follow our guides on [using flatpak][17] and using [snap packages][18] in Linux to get started (a couple of likely install commands are sketched below).
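
For reference, the usual install commands look like this; I'm taking the Flatpak application ID and the snap name from the Flathub and Snapcraft pages linked above, so double-check them there:

```
# Install from Flathub (Flatpak)
flatpak install flathub com.github.johnfactotum.Foliate

# Or install the snap package
sudo snap install foliate
```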
You can explore its [GitHub page][19] to build it from source if you need to.

[Download Foliate App][20]

**Wrapping Up**

I tried it on **Pop!_OS 19.10** using the latest **.deb** file available on GitHub, and it worked well. I liked its features, though I don't read a lot on my desktop.

Have you tried Foliate yet? Feel free to share your experience with it.
--------------------------------------------------------------------------------

via: https://itsfoss.com/foliate-ebook-viewer/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/foliate-app.jpg?ssl=1
[2]: https://itsfoss.com/best-ebook-readers-linux/
[3]: https://johnfactotum.github.io/foliate/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/foliate-ebook-viewer-features.jpg?ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/foliate-screenshot.jpg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/foliate-options.jpg?ssl=1
[7]: https://en.wiktionary.org/wiki/Wiktionary:Main_Page
[8]: https://en.wikipedia.org/wiki/Main_Page
[9]: https://github.com/espeak-ng/espeak-ng
[10]: http://www.cstr.ed.ac.uk/projects/festival/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/open-book-under-development-feature.jpeg?fit=800%2C450&ssl=1
[12]: https://itsfoss.com/open-book/
[13]: https://github.com/johnfactotum/foliate/releases
[14]: https://itsfoss.com/install-deb-files-ubuntu/
[15]: https://flathub.org/apps/details/com.github.johnfactotum.Foliate
[16]: https://snapcraft.io/foliate
[17]: https://itsfoss.com/flatpak-guide/
[18]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[19]: https://github.com/johnfactotum/foliate
[20]: tmp.6FO70BtAuy
@ -0,0 +1,99 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Compress PDF in Linux [GUI & Terminal])
[#]: via: (https://itsfoss.com/compress-pdf-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

How to Compress PDF in Linux [GUI & Terminal]
======

_**Brief: Learn how to reduce the size of a PDF file in Linux. Both command line and GUI methods have been discussed.**_

I was filling in an application form and it asked me to upload the necessary documents in PDF format. Not a big issue. I gathered all the [scanned images and combined them in one PDF using the gscan2pdf tool][1].

The problem came when I tried to upload this PDF file. The upload failed because it exceeded the maximum file size limit. This meant that I needed to somehow reduce the size of the PDF file.

Now, you may use an online PDF compressing website, but I don’t trust them. Uploading a file with important documents to an unknown server is not a good idea. You can never be sure that they don’t keep a copy of your uploaded PDF document.

This is the reason why I prefer compressing PDF files on my system rather than uploading them to some random server.

In this quick tutorial, I’ll show you how to reduce the size of PDF files in Linux. I’ll show both command line and GUI methods.

### Method 1: Reduce PDF file size in Linux command line

![][2]

You can use the [Ghostscript][3] command line tool for compressing a PDF file. Most Linux distributions include the open source version of Ghostscript already. However, you can still try to install it just to make sure.

On Debian/Ubuntu based distributions, use the following command to install Ghostscript:

```
sudo apt install ghostscript
```

Now that you have made sure that Ghostscript is installed, you can use the following command to reduce the size of your PDF file:

```
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/prepress -dNOPAUSE -dQUIET -dBATCH -sOutputFile=compressed_PDF_file.pdf input_PDF_file.pdf
```

In the above command, you should add the correct paths of the input and output PDF files.

The command looks scary and confusing. I advise copying and pasting most of it. What you need to know is the -dPDFSETTINGS parameter. This is what determines the compression level and thus the quality of your compressed PDF file.

-dPDFSETTINGS | Description
---|---
/prepress (default) | Higher quality output (300 dpi) but bigger size
/ebook | Medium quality output (150 dpi) with moderate output file size
/screen | Lower quality output (72 dpi) but smallest possible output file size

Do keep in mind that some PDF files may not be compressed a lot or at all. Applying compression on some PDF files may even produce a file bigger than the original. There is not much you can do in such cases. If you want to compare the presets on a given file, see the sketch below.
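A small shell loop can generate one output per quality preset so you can compare them side by side. This is just a convenience wrapper around the same Ghostscript command shown above; `input.pdf` is a placeholder for your own file.

```
# Generate one compressed copy per quality preset for comparison.
# Replace input.pdf with your own file.
for preset in prepress ebook screen; do
    gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 \
       -dPDFSETTINGS="/$preset" -dNOPAUSE -dQUIET -dBATCH \
       -sOutputFile="compressed_${preset}.pdf" input.pdf
done

# Compare the resulting file sizes
ls -lh compressed_*.pdf input.pdf
```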
### Method 2: Compress PDF files in Linux using a GUI tool

I understand that not everyone is comfortable with command line tools. The [PDF editors in Linux][4] don’t help much with compression. This is why we at It’s FOSS worked on creating a GUI version of the Ghostscript command that you saw above.

[Panos][5] from the It’s FOSS team [worked on creating a Python-Qt based GUI wrapper for Ghostscript][6]. The tool gives you a simple UI where you can select your input file, select a compression level and click on the compress button to compress the PDF file.

![][7]

The compressed PDF file is saved in the same folder as the original PDF file. Your original PDF file remains untouched. The compressed file is renamed by appending -compressed to the original file name.

If you are not satisfied with the compression, you can choose another compression level and compress the file again.

You may find the source code of the PDF Compressor on our GitHub repository. To let you easily use the tool, we have packaged it in AppImage format. Please [refer to this guide to know how to use AppImage][8]; in short, it usually looks like the sketch below.
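For the impatient, running an AppImage generally boils down to marking it executable and launching it. The file name below is the one from the release link later in this article; adjust it if you download a newer version.

```
# Make the downloaded AppImage executable, then run it
chmod +x compress-pdf-v0.1-x86_64.AppImage
./compress-pdf-v0.1-x86_64.AppImage
```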
[Download PDF Compressor (AppImage)][9]

Please keep in mind that the tool is in the early stages of development. You may experience some issues. If you do, please let us know in the comments or, even better, [file a bug here][10].

We’ll try to add more packages (Snap, Deb, PPAs etc.) in future releases. If you have experience with development and packaging, please feel free to give us a hand.

Would you like the It’s FOSS team to work on creating more such small desktop tools in the future? Your feedback and suggestions are welcome.

--------------------------------------------------------------------------------

via: https://itsfoss.com/compress-pdf-linux/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/convert-multiple-images-pdf-ubuntu-1304/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/compress-pdf-linux.jpg?ssl=1
[3]: https://www.ghostscript.com/
[4]: https://itsfoss.com/pdf-editors-linux/
[5]: https://github.com/libreazer
[6]: https://github.com/itsfoss/compress-pdf
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/compress-PDF.jpg?fit=800%2C448&ssl=1
[8]: https://itsfoss.com/use-appimage-linux/
[9]: https://github.com/itsfoss/compress-pdf/releases/download/0.1/compress-pdf-v0.1-x86_64.AppImage
[10]: https://github.com/itsfoss/compress-pdf/issues
@ -0,0 +1,97 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Make a GIF in GIMP [Simple Tutorial])
[#]: via: (https://itsfoss.com/make-gif-in-gimp/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)

How to Make a GIF in GIMP [Simple Tutorial]
======

Making a GIF can be fun and many users would like to know how to make one. You can create a GIF very easily with [GIMP][1], the powerful open-source image editing software.

In this GIMP tutorial, I’ll show you how to create a simple GIF in GIMP.

### Making a GIF in GIMP

![][2]

Using GIMP as an animation tool requires you to think of every layer as an animation frame. In this tutorial, I will create a simple web banner based on the It’s FOSS logo. I will use 2 images as my layers, but feel free to add more when you make your own.

The method that I use here is called “the **combine** method”, in which each new frame is added on top of the previous frame. My idea is to make a “flashing” web banner, to draw attention to something important.

I presume that you have [already installed GIMP in Ubuntu][3] or whichever operating system you are using. Let’s start making the GIF.

#### Step 1

From the File menu, click on **Open as Layers** and select all the images you want to include in the GIF. Then click **Open**.

![][4]

You can order your images in the layers tab. The GIF sequence will start with your bottom layer and run through each layer, bottom to top.

![Change the order of layers][5]

From the main menu select **Filters**, then **Animation** and finally click **Optimise (for GIF)**.

![][6]

What does “Optimise” do?

Optimise examines each layer and reuses information from previous frames if it hasn’t changed in the following frame. It only stores new values for pixels that change, and purges any duplicate parts.

If a frame is exactly the same as the previous one, it removes that frame completely, and the previous frame just stays on the screen for longer.

To view the GIF, from the main menu click on **Filters**, then **Animation** and **Playback**.

![][7]

Press the **Playback** button to start the GIF. To save the GIF, go to the main menu, select **File** and click on **Export As**.

![][8]

Name your GIF and choose the folder you want to save it in. In “**Select File Type**”, choose GIF Image.

![Save As Gif][9]

When prompted, select ‘As Animation’ and ‘Loop Forever’, set the desired delay value and, for it to take effect, click on “Use delay entered above for all frames”.

The most important option is to set the frame disposal action to “**Cumulative layers (combine)**” to get the “**flashing**” effect for our banner. Click Export as the final step.

![Gif Export Options][10]

**Your GIF is ready!**

![][11]
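As an aside, and not part of the GIMP workflow above: if you ever need to assemble frames into a GIF from the command line, ImageMagick can do the equivalent combine-and-loop job. The frame file names here are placeholders.

```
# Assemble frames into a looping GIF with ImageMagick.
# -delay is in 1/100ths of a second; -loop 0 means loop forever.
convert -delay 50 -loop 0 frame-1.png frame-2.png banner.gif
```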
This was an easy-to-follow, simple example, although GIMP has much greater depth in animation creation and requires a good amount of study and practice to master.

If you are interested in more GIMP tutorials, you may read how to outline text in GIMP. Stay tuned at It’s FOSS for more such useful tips in the future. [Subscribing to the weekly newsletter][12] is the best way to stay updated. Enjoy!

--------------------------------------------------------------------------------

via: https://itsfoss.com/make-gif-in-gimp/

作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://www.gimp.org/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/create-gif-in-gimp.jpg?ssl=1
[3]: https://itsfoss.com/gimp-2-10-release/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/1.-open-as-layers.jpeg?ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/layers-order.jpg?ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/2.-optimize-for-gif-1.png?fit=800%2C647&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/3.-playback.png?fit=800%2C692&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/4.-export-as.png?ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/5.-save-as-gif.png?fit=800%2C677&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/6.-export-options-1.png?ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/its-foss-logo.gif?fit=800%2C417&ssl=1
[12]: https://itsfoss.com/subscribe-to-newsletter/
@ -0,0 +1,100 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Mint 20: Release Date, Features and Everything Important Associated With it)
[#]: via: (https://itsfoss.com/linux-mint-20/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Linux Mint 20: Release Date, Features and Everything Important Associated With it
======

_**A continually updated article that lists all the new features in the upcoming Linux Mint 20 release.**_

The [Ubuntu 20.04 LTS release][1] is just around the corner. This is good news for Linux Mint users as well. A new Ubuntu LTS release means that a new major Linux Mint release will follow soon.

Why? Because Linux Mint is based on the long term support (LTS) release of Ubuntu. The Mint 18 series was based on Ubuntu 16.04 LTS, Mint 19 was based on Ubuntu 18.04 LTS, and Linux Mint 20 will be based on Ubuntu 20.04 LTS.

Unlike Ubuntu, [Linux Mint][2] doesn’t have a set release schedule. Keeping past trends in mind, I can make an intelligent guess that Linux Mint 20 should be releasing in June this year.

### New Features Coming in Linux Mint 20 “Ulyana”

![][3]

Let’s take a look at some of the main proposed new features and changes in Linux Mint 20, code-named Ulyana.

#### Performance improvements to the Nemo file manager

One of the planned performance improvements in the Nemo file manager is the way it handles thumbnails. You might not have realized it, but thumbnail generation takes considerable system resources (and disk space as well). Try opening a folder with a few thousand images and you’ll notice that CPU consumption goes up.

In Linux Mint 20, the aim is to prioritize content and navigation and to delay thumbnails as much as possible. This means that the content of folders shows up with generic icons before the thumbnails are rendered. It won’t be pleasing to the eyes, but you’ll notice the improvement in performance.

#### Two refreshed color variants

By default, Linux Mint has a green/mint accent. There are a few more color accents available. Linux Mint 20 refreshes the pink and aqua colors in its collection.

Here’s the new Aqua accent color:

![Linux Mint Aqua][4]

And the new Pink accent color:

![Linux Mint Pink][5]

#### Sharing files across the network becomes simple with this new tool

Linux Mint 20 will feature a [new GUI tool][6] for easily sharing files on the local network without any additional configuration.

![New tool for sharing files across the network][7]

#### Better desktop integration for Electron apps

[Electron][8] is an open source framework that allows building cross-platform desktop applications using web technologies. Some people call it the lazy approach, because the application runs on top of the Chromium web browser. However, this allows developers to easily make their applications available for Linux (and macOS). [Slack on Linux][9] is one of many such examples.

Linux Mint 20 will have better support for Electron applications, with improved integration of the system tray and desktop notifications.

#### Fractional scaling with improved multi-monitor support

![][10]

A proposed change is to include fractional scaling in Linux Mint 20, and that too with multi-monitor support. If you have a combination of HiDPI and non-HiDPI monitors, you should be able to select a different resolution, refresh rate and fractional scaling for each of them.

#### No more 32-bit

Though Ubuntu 18.04 dropped the 32-bit ISO two years ago, the Linux Mint 19 series kept providing a 32-bit ISO to download and install.

That changes in Linux Mint 20. There is no 32-bit version of Linux Mint 20 anymore. This is because 32-bit support has completely disappeared from Ubuntu 20.04.

#### What else?

A lot of visual changes are coming with the release of the Cinnamon 4.6 desktop.

There should also be a few ‘under the hood’ changes coming from Ubuntu 20.04, such as Linux Kernel 5.4, removal of Python 2 support and inclusion of [WireGuard VPN][11].

I’ll update this article with more features as the development progresses. Stay tuned!

--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-mint-20/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-20-04-release-features/
[2]: https://www.linuxmint.com/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/Linux-Mint-20.png?ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/mint-20-aqua.jpg?ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/mint-20-pink-1.jpg?ssl=1
[6]: https://blog.linuxmint.com/?p=3863
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/mint-20-warpinator-1.png?ssl=1
[8]: https://www.electronjs.org/
[9]: https://itsfoss.com/slack-use-linux/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/monitor_display_Linux_mint_20.png?ssl=1
[11]: https://itsfoss.com/wireguard/
@ -0,0 +1,103 @@

[#]: collector: (lujun9972)
[#]: translator: (lnrCoder)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Now You Can Run Linux Apps in Windows (Thanks to WSL))
[#]: via: (https://itsfoss.com/run-linux-apps-windows-wsl/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Now You Can Run Linux Apps in Windows (Thanks to WSL)
======

Microsoft’s recent “[Build 2020][1]” developer conference involved some interesting announcements. I’m not sure if it’s something to be excited about or skeptical about — but Microsoft, you have our attention now more than ever.

And, among all the announcements, the ability to run GUI apps on WSL (Windows Subsystem for Linux) gained the spotlight.

Not to forget the [fiasco with Xamarin.Forms rebranding as MAUI][2], which conflicts with an existing open-source project ([Maui Project][3]) by Uri Herrera of [Nitrux Linux][4].

In case you didn’t know, WSL is an environment that lets you have a console-only Linux experience from within Windows 10. It is also one of the [best ways to run Linux commands in Windows][5].

The announcement through a blog post ([DirectX ❤ Linux][6]) may have been PR bait, as [Liam Dawe thinks][7], but it’s still something worth talking about.

### Support for Linux GUI Apps On WSL

![][8]

Recently, Microsoft announced a bunch of new features coming to WSL (a.k.a. WSL 2) during the online developer conference.

The introduction of the [Windows Package Manager][9], [Windows Terminal 1.0][10] and a couple of others were some of its highlights.

But, the support for GPU hardware acceleration in **Windows Subsystem for Linux 2** was something significant. Note that these improvements target WSL 2, not the original WSL; below is a quick sketch of how to check which version your distribution runs on.
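If you want to verify this on your own machine, the `wsl.exe` command in PowerShell or Command Prompt reports the version per distribution and can convert one in place. The distribution name `Ubuntu` below is just an example; use whatever `wsl --list` shows on your system.

```
# List installed distributions along with the WSL version each one uses
wsl --list --verbose

# Convert an existing distribution (here assumed to be named "Ubuntu") to WSL 2
wsl --set-version Ubuntu 2

# Make WSL 2 the default for distributions installed from now on
wsl --set-default-version 2
```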
So, does this mean that you can run Linux apps on Windows using WSL? Looks like it…

Microsoft plans to make it happen using a brand-new Linux kernel driver, **dxgkrnl**. To give you a technical brief, I’d quote the description from their announcement here:

![Linux Kernel Driver Wsl][11]

> Dxgkrnl is a brand-new kernel driver for Linux that exposes the **/dev/dxg** device to user mode Linux. **/dev/dxg** exposes a set of IOCTL that closely mimic the native WDDM D3DKMT kernel service layer on Windows. Dxgkrnl inside the Linux kernel connects over the VM Bus to its big brother on the Windows host and uses this VM bus connection to communicate with the physical GPU.

I’m no expert here, but it means that **Linux applications on WSL will have the same access to the GPU as native Windows applications do**.

The support for GUI apps will be coming later this fall (not with the May 2020 update) — so we’ll have to see when that happens.

Microsoft is specifically targeting developers who want the comfort of using their Linux IDE on Windows. Google is also targeting the same user base by [bringing GUI Linux apps to Chromebooks][12].

Well, that’s good news for users who want to stick with Windows. But, is it really?

### Microsoft Loves Linux — Do They Really?

![Microsoft Loves Linux][13]

It is definitely a good thing that they are embracing Linux and its benefits through their efforts to incorporate a Linux environment in Windows.

But, how is it really going to help **desktop Linux users**? I don’t see any real-world benefits from it as of now.

You’re free to have a different opinion here. But, I think there’s no real value for the desktop users of Linux in the development of WSL. At least, none so far.

It was interesting to notice that someone on the [Linux Unplugged podcast][14] highlighted Microsoft’s move as something along the lines of EEE (Embrace, extend, and extinguish), which they’re known for.

Maybe, who knows? Of course, the effort they’ve put in to pull this off is worth appreciating — but it’s exciting and mystifying at the same time.

### Does this mean Windows users will no longer switch to Linux?

The reason why Microsoft is embracing Linux on its platform is that they know what it’s capable of and why developers (or users) prefer using it.

But, with the updates to WSL 2, I tend to agree with what Abhishek thinks, if this continues:

> Eventually, desktop Linux will be confined to become a desktop application under Windows…

Well, of course, the native experience is still superior for the time being. And, it’ll be rare to see existing Linux desktop users use Windows instead. But, that’s still something to worry about.

What do you think about all this? I’m not ruling out the advantages of WSL for users forced to use Windows — but do you think Microsoft’s progress with WSL is going to be something hostile in nature or something that will help Linux in the long run?

Let me know your thoughts in the comments!

--------------------------------------------------------------------------------

via: https://itsfoss.com/run-linux-apps-windows-wsl/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[lnrCoder](https://github.com/lnrCoder)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://news.microsoft.com/build2020/
[2]: https://itsfoss.com/microsoft-maui-kde-row/
[3]: https://mauikit.org/
[4]: https://itsfoss.com/nitrux-linux/
[5]: https://itsfoss.com/run-linux-commands-in-windows/
[6]: https://devblogs.microsoft.com/directx/directx-heart-linux/
[7]: https://www.gamingonlinux.com/2020/05/microsoft-build-directx-and-linux-plus-more
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/Linux-GUI-app-Windows-WSL.png?ssl=1
[9]: https://devblogs.microsoft.com/commandline/windows-package-manager-preview/
[10]: https://devblogs.microsoft.com/commandline/windows-terminal-1-0/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/linux-kernel-driver-wsl.png?ssl=1
[12]: https://itsfoss.com/linux-apps-chromebook/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/microsoft-loves-linux.jpg?ssl=1
[14]: https://linuxunplugged.com/354
@ -0,0 +1,177 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu Budgie 20.04 Review: Smooth, Polished & Plenty of Changes)
[#]: via: (https://itsfoss.com/ubuntu-budgie-20-04-review/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

Ubuntu Budgie 20.04 Review: Smooth, Polished & Plenty of Changes
======

As we promised our readers, we’ll be reviewing all major flavors of the [Ubuntu 20.04 LTS release][1]. In that continuation, here’s our take on Ubuntu Budgie.

![Ubuntu Budgie Desktop][2]

[Ubuntu Budgie][3], as the name implies, is an [official flavor of Ubuntu][4] using the [Budgie desktop environment][5]. This flavor is a newer member of the Ubuntu family. Ubuntu Budgie’s first release was 16.04 and it was accepted as an official flavor with the 17.04 release.

Their [goal][6] is to “combine the simplicity and elegance of the Budgie interface to produce a traditional desktop orientated distro with a modern paradigm”.

### Ubuntu Budgie 20.04 Review: What has changed and what has not!

There have been a surprising number of updates and improvements to [Ubuntu Budgie since the 18.04 LTS release][7]:

* New stylish menu applet
* Budgie-based network manager applet as default
* New Window Shuffler allows you to tile applications from the keyboard
* New tool to quickly switch desktop layout
* 4k resolution support
* GNOME Firmware and Drawing are new default applications
* Backport packages have now been rebuilt for 20.04
* Firefox is the default browser
* Catfish file and text search is now the default
* budgie-nemo integration
* System Tray applet removed due to bugs
* Event alert sounds are disabled by default
* Fix for keyboard shortcuts mysteriously going missing
* Better lock screen styling
* Files (Nautilus) has been replaced with Files (Nemo) due to community demand
* The Plank dock has now been switched to the bottom of the screen, is transparent and has the bounce animation by default
* The Quick Notes and Hot Corners applets have been ported from Python to Vala to improve speed
* Celluloid replaces MPV
* GNOME dependencies have been updated

![][8]

Ubuntu Budgie now ships with the most recent release of the Budgie desktop environment (10.5.1). Improvements include:

* New Raven section in Budgie Desktop Settings
* Raven notification grouping and the ability to turn off notifications
* Icon Task List has been revamped
* Ability to set the number of virtual desktops

Ubuntu Budgie comes with a whole slew of Budgie applets and mini-apps. They can be installed through Ubuntu Budgie Welcome.

![Ubuntu Budgie Welcome][9]

* WeatherShow – shows the forecast for the next five days and updates every 3 hours
* Wallstreet – a wallpaper utility that allows you to cycle through a folder of images
* Visual-space – a compact workspace switcher
* Dropby – this applet allows you to quickly manage USB thumb drives from the panel
* Kangaroo – quickly browse folders from the panel
* Trash applet – manage your trash can
* Fuzzyclock – shows time in a fuzzy way
* Workspace stopwatch – allows you to keep track of the time spent in each workspace

For a complete list of changes and updates, visit the [changelog][10].

#### System Requirements

Ubuntu Budgie 20.04 has updated the [system requirements][11]:

* 4 GB or more of RAM
* 64-bit capable Intel and AMD processors
* UEFI PCs booting in CSM mode
* Modern Intel-based Apple Macs

As you can see, Budgie is not really a lightweight option here.

#### Included Apps

![][12]

The following useful applications are included in Ubuntu Budgie by default:

* AisleRiot Solitaire
* Geary
* Catfish search tool
* Cheese webcam tool
* GNOME Drawing
* GNOME 2048
* GNOME Mahjongg
* GNOME Mines
* GNOME Sudoku
* Gthumb
* LibreOffice
* Maps
* Rhythmbox
* Tilix
* Ubuntu Budgie Welcome
* Evince document viewer
* Plank
* Celluloid

![Ubuntu Budgie Ram Usage][13]

### Installation

Initially, I was unable to get Ubuntu Budgie to boot into the live environment so that I could install it. It turned out that Ubuntu Budgie was trying to boot via EFI. I contacted the [Ubuntu Budgie forum][14] and was able to get a solution.

Once the purple splash screen appeared, I had to hit ESC and select legacy. After that, it booted as normal and installed without issue. I have only run into this issue with Ubuntu Budgie. I downloaded and tried the Ubuntu MATE 20.04 ISO, but didn’t have a similar issue.

### Experience with Ubuntu Budgie 20.04

![][15]

Other than the minor installation issue, my time with Ubuntu Budgie was very pleasant. The Budgie desktop has come a long way since [Ikey][16] first created it, and it has become a very mature option. The goal of Ubuntu Budgie is to “produce a traditional desktop orientated distro”. It does that in spades. All the changes that they have made continually add more polish to their product.

Overall, Ubuntu Budgie is a very nice looking distro. From the default theme to the wallpaper options, you can tell that a lot of effort was put into making the visual experience very appealing.

One thing to keep in mind is that Ubuntu Budgie is not intended for low spec systems. I’m running it on my Dell Latitude D630. Without any applications open, it used about 700 MB of RAM.

One part of Ubuntu Budgie that I enjoyed more than I should have was the inclusion of the [Tilix terminal emulator][17]. Tilix allows you to add terminal windows to the right or below. It has a whole host of features and I just loved using it. I’m planning to install it on my other Linux systems; on Debian/Ubuntu based ones, something like the sketch below should do.
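To be clear, this install command is my assumption of the usual route rather than anything from the Ubuntu Budgie docs; Tilix has been available in the standard Ubuntu and Debian repositories for a while.

```
# Install the Tilix terminal emulator from the standard repositories
sudo apt install tilix
```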
### Final Thoughts on Ubuntu Budgie 20.04

Ubuntu Budgie is a welcome addition to the litany of official flavors. Budgie feels very smooth and polished. It gets out of your way and lets you get work done.

If you are tired of your current desktop environment and want to take a look at something new, check it out. If you’re happy with your current setup, check out Ubuntu Budgie’s live DVD. You just might like it.

![Ubuntu Budgie About][18]

Have you already tried Ubuntu Budgie 20.04? How’s your experience with it? If not, which Ubuntu 20.04 flavor are you using right now?

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-budgie-20-04-review/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/download-ubuntu-20-04/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-busgie-desktop.png?resize=800%2C500&ssl=1
[3]: https://ubuntubudgie.org/
[4]: https://itsfoss.com/which-ubuntu-install/
[5]: https://en.wikipedia.org/wiki/Budgie_(desktop_environment)
[6]: https://ubuntubudgie.org/about-us/
[7]: https://itsfoss.com/ubuntu-budgie-18-review/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-budgie-desktop-settings.jpeg?ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-budgie-welcome.png?resize=800%2C472&ssl=1
[10]: https://ubuntubudgie.org/2020/04/21/ubuntu-budgie-20-04lts-release-notes-for-18-04-upgraders/
[11]: https://ubuntubudgie.org/downloads/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-budgie-applications.jpeg?ssl=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-budgie-ram-usage.png?resize=800%2C600&ssl=1
[14]: https://discourse.ubuntubudgie.org/t/cant-get-ub-to-boot/3397
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-budgie-20-04.jpg?ssl=1
[16]: https://itsfoss.com/ikey-doherty-serpent-interview/
[17]: https://gnunn1.github.io/tilix-web/
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-budgie-about.png?resize=800%2C648&ssl=1
@ -0,0 +1,173 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu MATE 20.04 LTS Review: Better Than Ever)
[#]: via: (https://itsfoss.com/ubuntu-mate-20-04-review/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Ubuntu MATE 20.04 LTS Review: Better Than Ever
======

Ubuntu MATE 20.04 LTS is undoubtedly one of the most popular [official flavors of Ubuntu][1].

It’s not just me; the [Ubuntu 20.04 survey results][2] also pointed out the same. Popular or not, it is indeed an impressive Linux distribution, especially for older hardware. As a matter of fact, it is also one of the [best lightweight Linux distros][3] available out there.

So, I thought of trying it out for a while in a virtual machine setting to give you an overview of what you can expect from it, and whether it’s worth trying out.

### What’s New In Ubuntu MATE 20.04 LTS?

[Subscribe to our YouTube channel for more Linux videos][4]

The primary highlight of Ubuntu MATE 20.04 LTS is the addition of MATE Desktop 1.24.

You can expect all the new features of MATE Desktop 1.24 to come packed in with Ubuntu MATE 20.04. In addition to that, there have been many significant changes, improvements, and additions.

Here’s an overview of what has changed in Ubuntu MATE 20.04:

* Addition of MATE Desktop 1.24
* Numerous visual improvements
* Dozens of bugs fixed
* Based on the [Linux Kernel 5.4][5] series
* Addition of experimental [ZFS][6] support
* Addition of GameMode from [Feral Interactive][7]
* Several package updates

Now, to give you a better idea of Ubuntu MATE 20.04, I’ll share some more details.

### User Experience Improvements

![][8]

Considering that more users are leaning towards Linux on the desktop, the user experience plays a vital role.

If it’s something easy to use and pleasant to look at, that makes all the difference as the first impression.

With Ubuntu MATE 20.04 LTS, I wasn’t disappointed either. Personally, I’m a fan of the latest [GNOME 3.36][9]. I like it on my [Pop OS 20.04][10], but with the presence of [MATE 1.24][11], Ubuntu MATE was also a good experience.

You will see some significant changes to the window manager, including the addition of **invisible resize borders**, **icons rendering in HiDPI**, **rework of the ALT+TAB workspace switcher pop-ups**, and a couple of other changes that come as part of the latest MATE 1.24 desktop.

![][12]

Also, **MATE Tweak** has got some sweet improvements, where you get to preserve user preferences even if you change the layout of the desktop. The new **MATE Welcome screen** also informs the user about the ability to change the desktop layout, so they don’t have to fiddle around to find out about it.

Among other things, one of my favorite additions is the **minimized app preview feature**.

For instance, if you have an app minimized but want to get a preview of it before switching to it, now you can do that by simply hovering your mouse over the taskbar, as shown in the image below.

![][13]

Now, I must mention that it does not work as expected for every application. So, I’d still say **this feature is buggy and needs improvements**.

### App Additions or Upgrades

![][14]

With MATE 20.04, you will notice a new **Firmware updater**, which is a GTK frontend for [fwupd][15]. You can manage your firmware updates easily using it.

This release also **replaces Thunderbird with the Evolution** email client. While [Thunderbird][16] is quite a popular desktop email client, [Evolution][17] integrates better with the MATE desktop and proves to be more useful.

![][18]

Considering that we have MATE 1.24 on board, you will also find a **new time and date manager app**. Not just that, if you need a magnifier, [Magnus][19] comes baked in with Ubuntu MATE 20.04.

![][20]

Ubuntu MATE 20.04 also includes upgrades to numerous packages/apps that come pre-installed.

![][21]

While these are small additions, they help in a big way to make the distro more useful.

### Linux Kernel 5.4

Ubuntu MATE 20.04 ships with the last major stable kernel release of 2019, i.e. [Linux Kernel 5.4][5].

With this, you will be getting native [exFAT support][22] and improved hardware support as well. Not to mention, the support for the [WireGuard][23] VPN is also a nice thing to have.

So, you will be noticing numerous benefits of Linux Kernel 5.4, including the kernel lockdown feature. In case you’re curious, you can read our coverage of [Linux Kernel 5.4][5] to get more details on it.

### Adding GameMode by Feral Interactive

Feral Interactive, popularly known for bringing games to the Linux platform, came up with a useful command-line tool, [GameMode][7].

You won’t get a GUI, but using the command line you can apply temporary system optimizations before launching a game.

This may not make a big difference on every system, but it’s good to have more resources available for gaming, and GameMode ensures that you get the necessary optimizations. A usage sketch follows below.
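As a usage sketch (not specific to Ubuntu MATE’s documentation): GameMode ships a wrapper called `gamemoderun` that requests the optimizations for the duration of whatever command it launches. The game binary name here is a placeholder.

```
# Launch a game with GameMode optimizations active (./my-game is a placeholder)
gamemoderun ./my-game

# In Steam, the same effect is commonly achieved via a game's launch options:
# gamemoderun %command%
```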
### Experimental ZFS Install Option

You get support for ZFS as your root file system. It is worth noting that it is an experimental feature and should not be used if you’re not sure what you’re doing.

To get a better idea of ZFS, I recommend reading one of our articles, [What is ZFS][6], by [John Paul][24].

### Performance & Other Improvements

Ubuntu MATE is perfectly tailored as a lightweight distro and is also fit for modern desktops.

![][25]

In this case, I didn’t run any specific benchmark tools, but for an average user, I didn’t find any performance issues in my virtual machine setting. If it helps, I tested this on a host system with an i5-7400 processor and a GTX 1050 graphics card, coupled with 16 Gigs of RAM. And, 7 GB of RAM + 768 MB of graphics memory + 2 cores of my processor were allocated to the virtual machine.

![][26]

When you test it out yourself, feel free to let me know how it was for you.

Overall, along with all the major improvements, there are subtle changes/fixes/improvements here and there that make Ubuntu MATE 20.04 LTS a good upgrade.

### Should You Upgrade?

If you are running Ubuntu MATE 19.10, you should proceed to upgrade immediately, as support for it ends in **June 2020**.

For Ubuntu MATE 18.04 users (**supported until April 2021**), it depends on what works for you. If you need the features of the latest release, you should choose to upgrade immediately.

But, if you don’t necessarily need the new stuff, you can look around the [list of existing bugs][27] and join the [Ubuntu MATE community][28] to learn more about the issues revolving around the latest release.

Once you have done the research needed, you can proceed to upgrade your system to Ubuntu MATE 20.04 LTS, which will be **supported until April 2023**.

_**Have you tried the latest Ubuntu MATE 20.04 yet? What do you think about it? Let me know your thoughts in the comments.**_

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-mate-20-04-review/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/which-ubuntu-install/
[2]: https://ubuntu.com/blog/ubuntu-20-04-survey-results
[3]: https://itsfoss.com/lightweight-linux-beginners/
[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[5]: https://itsfoss.com/linux-kernel-5-4/
[6]: https://itsfoss.com/what-is-zfs/
[7]: https://github.com/FeralInteractive/gamemode
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-20-04.jpg?ssl=1
[9]: https://itsfoss.com/gnome-3-36-release/
[10]: https://itsfoss.com/pop-os-20-04-review/
[11]: https://mate-desktop.org/blog/2020-02-10-mate-1-24-released/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-desktop-layout.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-minimized-app.png?ssl=1
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-20-04-firmware.png?ssl=1
[15]: https://fwupd.org
[16]: https://www.thunderbird.net/en-US/
[17]: https://wiki.gnome.org/Apps/Evolution
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-evolution.png?ssl=1
[19]: https://kryogenix.org/code/magnus/
[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-magnus.jpg?ssl=1
[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-apps.png?ssl=1
[22]: https://cloudblogs.microsoft.com/opensource/2019/08/28/exfat-linux-kernel/
[23]: https://wiki.ubuntu.com/WireGuard
[24]: https://itsfoss.com/author/john/
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-system-reosource.jpg?ssl=1
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-focal-neofetch.png?ssl=1
[27]: https://bugs.launchpad.net/ubuntu-mate
[28]: https://ubuntu-mate.community/
@ -0,0 +1,172 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using ‘apt search’ and ‘apt show’ Commands to Search and Find Details of Packages in Ubuntu)
[#]: via: (https://itsfoss.com/apt-search-command/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Using ‘apt search’ and ‘apt show’ Commands to Search and Find Details of Packages in Ubuntu
======

_**This is a detailed beginners guide to the apt search command. Using the apt search and apt show commands, you can get details of the available versions, dependencies, repositories and other important information about packages in Ubuntu.**_

Have you ever wondered if a certain package is available to install via the [apt package manager][1]?

Have you wondered whether the packages offered by the [Ubuntu repositories][2] are the latest ones or not?

The apt package manager in [Ubuntu][3] and many other distributions provides two handy [apt command options][4] for this purpose.

The apt search command looks for the provided string in the name and description of the packages:

```
apt search package_name
```

The apt show command provides detailed information on a package:

```
apt show package_name
```

The commands don’t require you to [be root in Ubuntu][5]. Here’s an example of these commands:

![][6]

### Why would you want to use the apt search or apt show command?

Let’s say you want to [install the Gambas programming language in Ubuntu][7]. You are happy with your knowledge of the apt command, so you decide to use the command line for installing the application.

You open a terminal and use the apt command to install gambas, but it results in an [unable to locate package error][8]:

```
sudo apt install gambas
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package gambas
```

Why did Ubuntu not find the gambas package? Because there is no such package called gambas. Instead, it is available as gambas3. This is a situation where you could take advantage of the apt search command.

Let’s move to the apt show command. This command provides detailed information about a package: its repository, dependencies and a lot more.

Knowing what version of a package is available from the official repository can help you decide whether you should install it from some other source.

Quick recall

The apt package manager works on a local database/cache of available packages from various repositories. This database contains information about the available package versions, dependencies etc. It doesn’t contain the packages themselves. The packages are downloaded from the remote repositories.

When you run the sudo apt update command, this cache is created/updated in the /var/lib/apt/lists/ directory. The apt search and apt show commands utilize this cache; the short sketch below shows how to refresh it and peek inside.
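If you want to see this cache with your own eyes, refreshing it and listing the directory is enough; the exact file names will vary with your configured repositories.

```
# Refresh the local package cache from the configured repositories
sudo apt update

# Peek at the cache files that apt search/show will read
ls /var/lib/apt/lists/ | head
```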
The term ‘package’ is used for an application, program or piece of software.

### Search for available packages using the apt search command

![][9]

Let me continue with the gambas example. Say, you search for:

```
apt search gambas
```

It will give you a huge list of packages that have “gambas” in their name or description. This output list is in alphabetical order.

Now, you’ll of course have to make some intelligent prediction about the package you want. In this example, the first result says “Complete visual development environment for Gambas”. This gives you a good hint that this is the main package you are looking for.

![][10]

Why so many packages associated with gambas? Because a number of these gambas packages are probably dependencies that will be installed automatically if you install the gambas3 package. If you use the ‘apt show gambas3’ command, it will show all the dependencies that will be installed with the gambas3 package.

Some of these listed packages could be libraries that a developer may need in some special cases while developing her/his software.

#### Use apt search for package names only

By default, the apt search command looks for the searched term in both the name of the package and its description.

You may narrow down the search by instructing the apt command to search for package names only:

```
apt search --names-only search_term
```

If you are following this as a tutorial, give it a try. Check the output for the search term ‘transitional’ with and without the --names-only option and you’ll see how the output changes:

```
apt search transitional
apt search --names-only transitional
```

**Bonus Tip**: You can use the ‘apt list --installed’ command to [look for installed packages in Ubuntu][11]. A small example follows below.
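For instance, combining it with grep quickly answers “did I already install anything gambas-related?”; the search term is, of course, just an example.

```
# List installed packages and filter for a term (here: gambas)
apt list --installed | grep -i gambas
```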
### Get detailed information on a package using the apt show command

The output of the apt search command gives a brief introduction to the packages. If you want more details, use the apt show command:

```
apt show exact_package_name
```

The apt show command works on the exact package name and gives you a lot more information on the package. You get:

* Version information
* Repository information
* Origin and maintainer of the package
* Where to file a bug
* Download and installation size
* Dependencies
* Detailed description of the package
* And a lot more

Here’s an example:

![][12]

You need to give the exact package name, otherwise apt show won’t work. The good thing is that tab completion works with the apt show command.

As you can see in the previous image, you get plenty of information that you may find helpful.

The apt show command also works on installed packages. In that case, you can see which source the package was installed from. Was it a PPA, some third-party repository, universe or the main repository itself?

Personally, I use apt show a lot. It helps me know whether the package version provided by Ubuntu is the latest or not. Pretty handy tool!

### Conclusion

If you read my detailed [guide on the difference between apt and apt-get commands][13], you would know that this ‘apt search’ command works similarly to ‘apt-cache search’. There is no such command as “apt-get search”.

The purpose of creating the apt command is to give you one tool with only enough options to manage the packages on your Debian/Ubuntu system. The apt-get, apt-cache and other apt tools still exist, and they can be used in scripting for more complex scenarios.

I hope you found this introduction to the **apt search** and **apt show** commands useful. I welcome your questions and suggestions on this topic.

If you liked it, please share it on the various Linux forums and communities you frequent. That helps us a lot. Thank you.

--------------------------------------------------------------------------------

via: https://itsfoss.com/apt-search-command/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://wiki.debian.org/Apt
[2]: https://itsfoss.com/ubuntu-repositories/
[3]: https://ubuntu.com/
[4]: https://itsfoss.com/apt-command-guide/
[5]: https://itsfoss.com/root-user-ubuntu/
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/apt-search-apt-show-example-800x493.png?resize=800%2C493&ssl=1
[7]: https://itsfoss.com/install-gambas-ubuntu/
[8]: https://itsfoss.com/unable-to-locate-package-error-ubuntu/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/apt-search-command.png?ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/apt-search-command-example.png?fit=800%2C297&ssl=1
[11]: https://itsfoss.com/list-installed-packages-ubuntu/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/apt-show-command-example-800x474.png?resize=800%2C474&ssl=1
[13]: https://itsfoss.com/apt-vs-apt-get-difference/
@ -0,0 +1,115 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What to do When You See “Repository does not have a release file” Error in Ubuntu)
|
||||
[#]: via: (https://itsfoss.com/repository-does-not-have-release-file-error-ubuntu/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
What to do When You See “Repository does not have a release file” Error in Ubuntu
|
||||
======
|
||||
|
||||
One of the [several ways of installing software in Ubuntu][1] is by using PPA or adding third-party repositories. A few magical lines give you easy access to a software or its newer version that is not available by default in [Ubuntu][2].
|
||||
|
||||
All thing looks well and good until you get habitual of adding additional third-party repositories and one day, you see an error like this while [updating Ubuntu][3]:
|
||||
|
||||
**E: The repository ‘<http://ppa.launchpad.net/numix/ppa/ubuntu> focal Release’ does not have a Release file.
|
||||
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
|
||||
N: See apt-secure(8) manpage for repository creation and user configuration details.**
|
||||
|
||||
In this tutorial for Ubuntu beginners, I’ll explain what does this error mean, why do you see it and what can you do to handle this error?
|
||||
|
||||
### Understanding “Repository does not have a release file” error
|
||||
|
||||
![][4]
|
||||
|
||||
Let’s go step by step here. The error message is:
|
||||
|
||||
**E: The repository ‘<http://ppa.launchpad.net/numix/ppa/ubuntu> focal release’ does not have a release file**
|
||||
|
||||
The important part of this error message is “focal release”.
|
||||
|
||||
You probably already know that [each Ubuntu release has a codename][5]. For Ubuntu 20.04, the codename is Focal Fossa. The “focal” in the error message indicates Focal Fossa which is Ubuntu 20.04.
|
||||
|
||||
The error is basically telling you that though you have added a third-party repository to your system’s sources list, this new repository is not available for your current Ubuntu version.
|
||||
|
||||
_**Why so? Because probably you are using a new version of Ubuntu and the developer has not made the software available for this new version.**_
|
||||
|
||||
At this point, I highly recommend reading my detailed guides on [PPA][6] and [Ubuntu repositories][7]. These two articles will give you a better, in-depth knowledge of the topic. Trust me, you won’t be disappointed.
|
||||
|
||||
### How to know if the PPA/third party is available for your Ubuntu version [Optional]
|
||||
|
||||
First you should [check your Ubuntu version and its codename][8] using ‘lsb_release -a’ command:
|
||||
|
||||
```
|
||||
[email protected]:~$ lsb_release -a
|
||||
No LSB modules are available.
|
||||
Distributor ID: Ubuntu
|
||||
Description: Ubuntu 20.04 LTS
|
||||
Release: 20.04
|
||||
Codename: focal
|
||||
```
|
||||
|
||||
As you can see, the codename it shows is focal. Now the next thing you can do is to go to the website of the software in question.
|
||||
|
||||
This could be the tricky part but you can figure it out with some patience and effort.
|
||||
|
||||
In the example here, the error complained about **<http://ppa.launchpad.net/numix/ppa/ubuntu>**. It is a PPA repository and you may easily find its webpage. How, you may ask.
|
||||
|
||||
Use Google or a [Google alternative search engine][9] like Duck Duck Go and search for “ppa numix”. This should give you the first result from [launchpad.net][10] which is the website used for hosting PPA related code.
|
||||
|
||||
On the webpage of the PPA, you can go to the “Overview of published packages” and filter it by the codename of your Ubuntu version:
|
||||
|
||||
![][11]
|
||||
|
||||
For non-PPA third-party repository, you’ll have to check of the official website of the software and see if the repository is available for your Ubuntu version or not.
|
||||
|
||||
### What to do if the repository is not available for your Ubuntu version
|
||||
|
||||
In case when the repository in question is not available for your Ubuntu version, here’s what you can do:
|
||||
|
||||
* Delete the troublesome repository from your list of repository so that you don’t see the error every time you run the update.
|
||||
* Get the software from another source (if it is possible).
|
||||
|
||||
|
||||
|
||||
To delete the troublesome repository, start the Software & Updates tool:

![][12]

Go to the Other Software tab and look for the repository in question. Highlight it and then click the Remove button to delete it from your system.

![Remove Ppa][13]

This will [delete the PPA][14] or the repository in question.
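
If you prefer the command line over the GUI, the same cleanup can be done with `add-apt-repository --remove`. A sketch, again using this article’s example PPA; replace it with the one from your own error message:

```
# Remove the example PPA from the command line instead of the GUI
sudo add-apt-repository --remove ppa:numix/ppa
```
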
The next step is to get the software from some other source, and that’s totally subjective. In some cases, you can still download the DEB file from the PPA website and use the software (I have explained the steps in the [PPA guide][6]). Alternatively, you can check the project’s website to see if a Snap/Flatpak or Python version of the software is available, as shown in the sketch below.

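On the Snap route, you can search the store straight from the terminal. A sketch; `numix` is again only this article’s example name:

```
# Search the Snap store for a package by name
snap find numix
```
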
--------------------------------------------------------------------------------

via: https://itsfoss.com/repository-does-not-have-release-file-error-ubuntu/

Author: [Abhishek Prakash][a]

Selected by: [lujun9972][b]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-remove-software-manjaro/
[2]: https://ubuntu.com/
[3]: https://itsfoss.com/update-ubuntu/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/Repository-does-not-have-a-release-file.png?ssl=1
[5]: https://itsfoss.com/linux-code-names/
[6]: https://itsfoss.com/ppa-guide/
[7]: https://itsfoss.com/ubuntu-repositories/
[8]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[9]: https://itsfoss.com/privacy-search-engines/
[10]: https://launchpad.net/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/check-repo-version.png?ssl=1
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/software-updates-settings-ubuntu-20-04.jpg?ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/remove-ppa.jpg?ssl=1
[14]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/

@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Audacious 4.0 Released With Qt 5: Here’s How to Install it on Ubuntu)
[#]: via: (https://itsfoss.com/audacious-4-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Audacious 4.0 Released With Qt 5: Here’s How to Install it on Ubuntu
======

[Audacious][1] is an open source audio player available for multiple platforms, including Linux. Almost two years after its last major release, Audacious 4.0 has arrived with some big changes.

The latest release, Audacious 4.0, ships with a [Qt 5][2] UI by default. You can still use the old GTK2 UI by building from source, but new features will only be added to the Qt UI.

Let’s take a look at what has changed and how to install the latest Audacious on your Linux system.

### Audacious 4.0 Key Changes and Features

![Audacious 4 Release][3]

Of course, the major change is the switch to the Qt 5 UI by default. In addition to that, their [official announcement][4] mentions quite a few improvements and feature additions:

* Clicking on playlist column headers sorts the playlist
* Dragging playlist column headers changes the column order
* Application-wide settings for volume and time step sizes
* New option to hide playlist tabs
* Sorting the playlist by path now sorts folders after files
* Implemented additional MPRIS calls for compatibility with KDE 5.16+
* New OpenMPT-based tracker module plugin
* New VU Meter visualization plugin
* Added an option to use a SOCKS network proxy
* The Song Change plugin now works on Windows
* New “Next Album” and “Previous Album” commands
* The tag editor in the Qt UI can now edit multiple files at once
* Implemented an equalizer presets window for the Qt UI
* The lyrics plugin gained the ability to save and load lyrics locally
* The Blur Scope and Spectrum Analyzer visualizations have been ported to Qt
* The MIDI plugin’s SoundFont selection has been ported to Qt
* The JACK output plugin gained some new options
* Added an option to endlessly loop PSF files

If you didn’t know about it already, you can easily install it and use the equalizer along with [LADSPA][5] effects to tweak your listening experience.

![Audacious Winamp Classic Interface][6]

### How to Install Audacious 4.0 on Ubuntu

It is worth noting that an [unofficial PPA][7] is made available by [UbuntuHandbook][8]. You can simply follow the instructions below to install it on Ubuntu 16.04, 18.04, 19.10, and 20.04.

1\. First, you have to add the PPA to your system by typing the following command in a terminal:

```
sudo add-apt-repository ppa:ubuntuhandbook1/apps
```

2\. Next, you need to update/refresh the package information from the repositories/sources, and then proceed to install the application. Here’s how:

```
sudo apt update
sudo apt install audacious audacious-plugins
```
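
Once the installation finishes, you can optionally verify which version was installed and which repository it came from. A quick, hedged check; the exact output will vary by system:

```
# Confirm the installed version and its source repository
audacious --version
apt policy audacious
```
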
That’s it. You don’t have to do anything else. In any case, if you ever want to [remove the PPA and the software][9], just type the following commands in order:

```
sudo add-apt-repository --remove ppa:ubuntuhandbook1/apps
sudo apt remove --autoremove audacious audacious-plugins
```

You can also check out more information about the source code on its GitHub page and install it on other Linux distributions as needed.

[Audacious Source Code][10]

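If you want to inspect or build it yourself, grabbing the source is one `git clone` away (a sketch; build steps vary by distribution and are documented in the repository):

```
# Fetch the Audacious source code from GitHub
git clone https://github.com/audacious-media-player/audacious.git
```
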
### Wrapping Up

The new features and the switch to the Qt 5 UI should be a good thing for improving both the user experience and the functionality of the audio player. If you are a fan of the classic Winamp interface, it works just fine as well, though it is missing some of the features mentioned in the announcement.

Feel free to give it a try and let me know your thoughts in the comments below!

--------------------------------------------------------------------------------

via: https://itsfoss.com/audacious-4-release/

Author: [Ankush Das][a]

Selected by: [lujun9972][b]

Translator: [geekpi](https://github.com/geekpi)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://audacious-media-player.org
[2]: https://doc.qt.io/qt-5/qt5-intro.html
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/audacious-4-release.jpg?ssl=1
[4]: https://audacious-media-player.org/news/45-audacious-4-0-released
[5]: https://www.ladspa.org/
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/audacious-winamp.jpg?ssl=1
[7]: https://itsfoss.com/ppa-guide/
[8]: http://ubuntuhandbook.org/index.php/2020/03/audacious-4-0-released-qt5-ui/
[9]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
[10]: https://github.com/audacious-media-player/audacious