Mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-03-12 01:40:10 +08:00

Commit 7befa43acc: Merge remote-tracking branch 'LCTT/master'

published/20191008 How to manage Go projects with GVM.md (new file, 232 lines)

@@ -0,0 +1,232 @@

[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11447-1.html)
[#]: subject: (How to manage Go projects with GVM)
[#]: via: (https://opensource.com/article/19/10/introduction-gvm)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)

How to manage Go projects with GVM
======

> Manage multiple versions of the Go language environment and their modules with the Go Version Manager.

![Woman programming][1]

The Go Version Manager ([GVM][2]) is an open source tool for managing Go language environments. GVM "pkgsets" support installing multiple versions of Go and managing modules per project. Originally developed by [Josh Bussdieker][3], GVM (like its Ruby counterpart, RVM) lets you create a development environment for each project or group of projects, isolating the different Go versions and package dependencies to allow for more flexibility and to prevent versioning problems.

There are several ways to manage Go packages, including the Go Modules built into Go since version 1.11. I find GVM simple and intuitive, and even if I didn't use it to manage packages, I'd still use it to manage different versions of Go.

### Installing GVM

Installing GVM is straightforward. The installation documentation in the [GVM repository][4] instructs you to download the installer script and pipe it to Bash to install:

```
bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
```

Despite the growing adoption of this kind of installation method, it is still a good idea to take a look at what an installer is doing before you run it. In the case of GVM, the installer script:

1. Checks some relevant dependencies
2. Clones the GVM repository
3. Uses shell scripts to:
   * Install the Go language
   * Manage the `GOPATH` environment variable
   * Add a line to your `bashrc`, `zshrc`, or profile

If you want to double-check what it is doing, you can clone the repository, review the shell scripts, and then run the local `./binscripts/gvm-installer` script to set things up.
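
A minimal sketch of that manual route, using the repository URL from the installer command above:

```
# Clone the GVM repository and read the scripts before running anything
git clone https://github.com/moovweb/gvm.git
cd gvm
less binscripts/gvm-installer

# Once you are satisfied with what it does, run the local installer
./binscripts/gvm-installer
```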

`Note:` Because GVM is used to download and compile new Go versions, there are some expected dependencies, such as Make, Git, and Curl. You can find the complete list for your distribution in [GVM's README][5].
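
On a Debian or Ubuntu system, pre-installing the dependencies named in the note might look like the following; this is an illustrative sketch only, and the authoritative, distribution-specific package list is in the README:

```
# Install the dependencies mentioned in the note above (Debian/Ubuntu example);
# package names vary by distribution -- see GVM's README for the full list
sudo apt-get install curl git make
```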

### Installing and managing Go versions with GVM

Once GVM is installed, you can use it to install and manage different versions of Go. The `gvm listall` command shows the available versions of Go that can be downloaded and compiled:

```
[chris@marvin ]$ gvm listall

gvm gos (available)

go1
go1.0.1
go1.0.2
go1.0.3

<output truncated>
```

Installing a specific Go version is as simple as `gvm install <version>`, where `<version>` is one of the versions returned by the `gvm listall` command.

Say you are working on a project that uses Go version 1.12.8. You can install it with `gvm install go1.12.8`:

```
[chris@marvin]$ gvm install go1.12.8
Installing go1.12.8...
* Compiling...
go1.12.8 successfully installed!
```

Type `gvm list`, and you will see Go version 1.12.8 alongside the system Go version (the version packaged by your operating system's package manager):

```
[chris@marvin]$ gvm list

gvm gos (installed)

go1.12.8
=> system
```

GVM is still using the system version of Go, as indicated by the `=>` symbol next to it. You can switch your environment to use the newly installed go1.12.8 with the `gvm use` command:

```
[chris@marvin]$ gvm use go1.12.8
Now using version go1.12.8

[chris@marvin]$ go version
go version go1.12.8 linux/amd64
```

GVM makes it extremely simple to manage installed versions of Go, but it doesn't stop there!
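
If you want a chosen version to persist in new shells rather than reverting to the system Go, GVM's README documents a `--default` flag for `gvm use`; treat this sketch as an assumption to verify against your installed copy:

```
# Make go1.12.8 the default version for new shells
# (--default is described in GVM's README; verify with `gvm use --help`)
gvm use go1.12.8 --default
```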

### Using GVM pkgsets

Out of the box, Go has a wonderful, and frustrating, way of managing packages and modules. By default, if you `go get` a package, it is downloaded into the `src` and `pkg` directories under your `$GOPATH`, and it can then be included in your Go program by using `import`. This makes getting packages easy, especially for unprivileged users, without requiring `sudo` or root privileges (much like `pip install --user` in Python). However, it is very difficult to manage different versions of the same package across different projects.

There are a number of ways to try to fix or mitigate the problem, including the experimental Go Modules (preliminary support was added in Go 1.11) and [Go dep][6] (an "official experiment" on the way to Go Modules that is still iterating). Before I discovered GVM, I would build and test a Go project in its own Docker container to guarantee separation.

GVM accomplishes the management and isolation of packages between projects nicely by using "pkgsets" to append a new directory for a project onto the default `$GOPATH` of the installed Go version, much like `$PATH` works on Unix/Linux systems.

To picture how this works, start by installing a new version, Go 1.12.9:

```
[chris@marvin]$ echo $GOPATH
/home/chris/.gvm/pkgsets/go1.12.8/global

[chris@marvin]$ gvm install go1.12.9
Installing go1.12.9...
* Compiling...
go1.12.9 successfully installed

[chris@marvin]$ gvm use go1.12.9
Now using version go1.12.9
```

When GVM is told to use the new version, it changes to a new `$GOPATH`, to which the default `global` pkgset for that version applies:

```
[chris@marvin]$ echo $GOPATH
/home/chris/.gvm/pkgsets/go1.12.9/global

[chris@marvin]$ gvm pkgset list

gvm go package sets (go1.12.9)

=> global
```

Although no extra packages are installed by default, packages in the global pkgset are available to any project using that particular version of Go.

Now, suppose you are starting a new project that needs a particular package. First, use GVM to create a new pkgset named `introToGvm`:

```
[chris@marvin]$ gvm pkgset create introToGvm

[chris@marvin]$ gvm pkgset use introToGvm
Now using version go1.12.9@introToGvm

[chris@marvin]$ gvm pkgset list

gvm go package sets (go1.12.9)

global
=> introToGvm
```

As described above, a new directory for the pkgset is prepended to the `$GOPATH`:

```
[chris@marvin]$ echo $GOPATH
/home/chris/.gvm/pkgsets/go1.12.9/introToGvm:/home/chris/.gvm/pkgsets/go1.12.9/global
```

Change directory into the prepended `introToGvm` path and examine the directory structure; here, that's accomplished with `awk` and `bash`:

```
[chris@marvin]$ cd $( awk -F':' '{print $1}' <<< $GOPATH )
[chris@marvin]$ pwd
/home/chris/.gvm/pkgsets/go1.12.9/introToGvm

[chris@marvin]$ ls
overlay pkg src
```

Note that the new directory looks a lot like a normal `$GOPATH`. New Go packages are downloaded with the same `go get` command and used as usual, and they are added to the pkgset.

For example, use the following command to get the `gorilla/mux` package, then examine the pkgset's directory structure:

```
[chris@marvin]$ go get github.com/gorilla/mux
[chris@marvin introToGvm ]$ tree
.
├── overlay
│   ├── bin
│   └── lib
│       └── pkgconfig
├── pkg
│   └── linux_amd64
│       └── github.com
│           └── gorilla
│               └── mux.a
└── src
    └── github.com
        └── gorilla
            └── mux
                ├── AUTHORS
                ├── bench_test.go
                ├── context.go
                ├── context_test.go
                ├── doc.go
                ├── example_authentication_middleware_test.go
                ├── example_cors_method_middleware_test.go
                ├── example_route_test.go
                ├── go.mod
                ├── LICENSE
                ├── middleware.go
                ├── middleware_test.go
                ├── mux.go
                ├── mux_test.go
                ├── old_test.go
                ├── README.md
                ├── regexp.go
                ├── route.go
                └── test_helpers.go
```

As you can see, `gorilla/mux` was added to the pkgset's `$GOPATH` directory as expected, and it is now available to any project that uses this pkgset.
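
Switching between pkgsets later uses the same `gvm pkgset use` command shown above; for example, to return to the version's default set and then jump back into the project:

```
# Return to the default pkgset for the active Go version
gvm pkgset use global

# Re-enter the project-specific pkgset when needed
gvm pkgset use introToGvm
```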

### GVM makes managing Go a breeze

GVM is an intuitive and non-intrusive way to manage Go versions and packages. It can be used on its own or in combination with other Go module-management techniques, taking advantage of GVM's Go version-management capabilities. Separating projects by Go version and by package dependencies makes development easier and reduces the complexity of managing version conflicts, and GVM makes this a breeze.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/introduction-gvm

Author: [Chris Collins][a]
Topic selection: [lujun9972][b]
Translator: [heguangzhi](https://github.com/heguangzhi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated and edited by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: https://github.com/moovweb/gvm
[3]: https://github.com/jbussdieker
[4]: https://github.com/moovweb/gvm#installing
[5]: https://github.com/moovweb/gvm/blob/master/README.md
[6]: https://golang.github.io/dep/

@@ -0,0 +1,112 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why to choose Rust as your next programming language)
[#]: via: (https://opensource.com/article/19/10/choose-rust-programming-language)
[#]: author: (Ryan Levick https://opensource.com/users/ryanlevick)

Why to choose Rust as your next programming language
======

Selecting a programming language can be complicated, but some enterprises are finding that switching to Rust is a relatively easy decision.

![Programming books on a shelf][1]

Choosing a programming language for a project is often a complicated decision, particularly when it involves switching from one language to another. For many programmers, it is not only a technical exercise but also a deeply emotional one. The lack of known or measurable criteria for picking a language often means the choice digresses into a series of emotional appeals.

I've been involved in many discussions about choosing a programming language, and they usually conclude in one of two ways: either the decision is made using measurable, yet unimportant criteria while ignoring relevant, yet hard-to-measure criteria; or it is made using anecdotes and emotional appeals.

There has been one language selection process that I've been a part of that has gone—at least so far—rather smoothly: the growing [consideration inside Microsoft][2] for using [Rust][3].

This article will explore several issues related to choosing a programming language in general and Rust in particular. They are: What are the criteria usually used for selecting a programming language, especially in large businesses, and why does this process rarely end successfully? Why has the consideration of Rust in Microsoft gone smoothly so far, and are there some general best practices that can be gleaned from it?

### Criteria for choosing a language

There are many criteria for deciding whether to switch to a new programming language. In general, the criteria that are most easily measured are the ones that are most often talked about, even if they are less important than other, more difficult-to-measure criteria.

#### Technical criteria

The first group of criteria are the technical considerations; they are often the first that come to mind because they are the easiest to measure.

Interestingly, the technical costs (e.g., build system integration, monitoring, tooling, support libraries, and more) are often easier to measure than the technical benefits. This is especially detrimental to the adoption of new programming languages, as the downsides of adoption are often the clearest part of the picture.

While some technical benefits (like performance) can be measured relatively easily, others are much harder to measure. For example, what are the relative merits of a dynamic typing system (like in Python) compared to a relatively verbose and feature-poor one (like Java), and how does this change when the comparison is against more strongly typed systems like Scala or Haskell? Many people have strong gut feelings that such technical differences should be taken very seriously in language considerations, but there are no good ways to measure them.

A side effect of the discrepancy in measurement ease is that the easiest-to-measure items are often given the most weight in the decision-making process, even if that would not be the case with perfect information. This not only throws off the cost/benefit analysis but also the process of assigning importance to different costs and benefits.

#### Organizational criteria

Organizational criteria, which are the second consideration, include:

  * How easy will it be to hire developers in this language?
  * How easy is it to enforce programming standards?
  * How quickly, on average, will developers be able to deliver software?

Costs and benefits of organizational criteria are hard to measure. People usually have vague, "gut feeling" answers to them, which create strong opinions on the matter. Unfortunately, however, it's often very difficult to measure these criteria. For example, it might be obvious to most that TypeScript allows programmers to deliver functioning, relatively bug-free software to customers more quickly than C does, but where is the data to back this up?

Moreover, it's often extremely difficult to assign importance weights to these criteria. It's easy to see that Go enforces standardized coding practices more easily than Scala (due to the wide use of gofmt), but it is extremely difficult to measure the concrete benefits to a company from standardizing codebases.

These criteria are still extremely important but, because of the difficulty in measuring them, they are often either ignored or reasoned about through anecdotes.

#### Emotional criteria

Third are the emotional criteria, which tend to be overlooked if not outright dismissed.

Software programming has traditionally tried to emulate more true "engineering" practices, where technical considerations are generally the most important. Some would argue that programming languages are "just tools" and should be measured only against technical criteria. Others would argue that programming languages assist the programmer in some of the more artistic aspects of the job. These criteria are extremely difficult to measure in any meaningful way.

In general, this comes down to how happy (and thus productive) programmers feel using the language. Such considerations can have a real impact on programmers, but how this translates into benefits for an entire team is next to impossible to measure.

Because of the difficulty of quantifying these criteria, they are often ignored. But does this mean that emotional considerations of programming languages have no significant impact on programmers or programming organizations?

#### Unknown criteria

Finally, there's a set of criteria that are often overlooked because a new programming language is usually judged by the criteria set by the language currently in use. New languages may have capabilities that have no equivalent in other languages, so many people will not be familiar with them. Having no exposure to those capabilities may mean the evaluator unknowingly ignores or downplays them.

These criteria can be technical (e.g., the merits of Kotlin data classes over Java constructs), organizational (e.g., how helpful Elm error messages are for teaching those new to the language), or emotional (e.g., the way Ruby makes the programmer feel when writing it).

Because these aspects are hard to measure, and someone completely unfamiliar with them has no existing framework for judging them based on experience, intuition, or anecdote, they are often undervalued versus more well-understood criteria—if not outright ignored.

### Why Rust?

This brings us back to the growing excitement for Rust in Microsoft. I believe the discussions around Rust adoption have gone relatively smoothly so far because Rust offers an extremely clear and compelling advantage—not only over the language it seeks to replace (C++)—but also over any other language practically available to industry: great performance, a high level of control, and being memory safe.

Microsoft's decision to investigate Rust (and other languages) began due to the fact that roughly [70% of Common Vulnerabilities and Exposures][4] (CVEs) in Microsoft products were related to memory safety issues in C and C++. When it was discovered that most of the affected codebases could not be effectively rewritten in C# because of performance concerns, the search began. Rust was viewed as the only possible candidate to replace C++. It was similar enough that not everything had to be reworked, but it has a differentiator that makes it measurably better than the current alternative: being able to eliminate nearly 70% of Microsoft's most serious security vulnerabilities.

There are other reasons beyond memory safety, performance, and control that make Rust appealing (e.g., strong type safety guarantees, being an extremely loved language, etc.), but as expected, they were hard to talk about because they were hard to measure. In general, most people involved in the selection process were more interested in verifying that these other aspects of the language weren't perceivably worse than C++ but, because measuring these aspects was so difficult, they weren't considered active reasons to adopt the language.

However, the Microsoft teams that had already adopted Rust, like for the [IoT Edge Security Daemon][5], touted other aspects of the language (particularly "correctness" due to the advanced type system) as the reasons they were most keen on investing more in the language. These teams couldn't provide reliable measurements for these criteria, but they had clearly developed an intuition that this aspect of the language was extremely important.

With Rust at Microsoft, the main criterion being judged happened to be an easily measurable one. But what happens when an organization's most important issues are hard to measure? These issues are no less important just because they are currently difficult to measure.

### What now?

Having clearly measurable criteria is important when adopting a new programming language, but this does not mean that hard-to-measure criteria aren't real and shouldn't be taken seriously. We simply lack the tools to evaluate new languages holistically.

There has been some research into this question, but it has not yet produced anything that has been widely adopted by industry. While the case for Rust was relatively clear inside Microsoft, this doesn't mean new languages should be adopted only where there is one clear, technical reason to do so. We should become better at evaluating more aspects of programming languages beyond just the traditional ones (such as performance).

The path to Rust adoption is just beginning at Microsoft, and having just one reason to justify investment in Rust is definitely not ideal. While we're beginning to form collective, anecdotal evidence to justify Rust adoption further, there is definitely a need to quantify this understanding better and be able to talk about it in more objective terms.

We're still not quite sure how to do this, but stay tuned for more as we go down this path.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/choose-rust-programming-language

Author: [Ryan Levick][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and edited by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/ryanlevick
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_programming_languages.jpg?itok=KJcdnXM2 (Programming books on a shelf)
[2]: https://msrc-blog.microsoft.com/tag/rust
[3]: https://www.rust-lang.org/
[4]: https://github.com/microsoft/MSRC-Security-Research/blob/master/presentations/2019_02_BlueHatIL/2019_01%20-%20BlueHatIL%20-%20Trends%2C%20challenge%2C%20and%20shifts%20in%20software%20vulnerability%20mitigation.pdf
[5]: https://msrc-blog.microsoft.com/2019/09/30/building-the-azure-iot-edge-security-daemon-in-rust/

@@ -0,0 +1,111 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Climate challenges call for open solutions)
[#]: via: (https://opensource.com/open-organization/19/10/global-energy-climate-challenges)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland)

Climate challenges call for open solutions
======

We can meet global energy and climate change challenges with an open mindset.

![Globe up in the clouds][1]

Global climate change affects us all. It is, at its heart, an energy issue—a problem too large and too complex for any single person, company, university, research institute, science laboratory, nuclear trade association, or government to address alone. It will require a truly global, cooperative effort, one aimed at continued innovation across a range of technologies: renewables, batteries, carbon capture, nuclear power development, and more.

Throughout the past year, I've been part of an initiative working on nuclear power decommissioning in Japan. As part of that work—which includes several meetings every month on this issue, as well as my own independent research on the subject—I've learned more about the ways various communities can play a role in understanding and impacting energy needs and climate discussions.

In this article, I'll offer one example that illustrates how this is the case—that of "[Generation IV][2]" nuclear power plant development. This example demonstrates how [open organization principles][3] can influence future discussions about our global energy and climate change challenges. We must address these challenges with an open mindset.

### Community purposes and frameworks

Members of a community must [believe in a common purpose][4]. That sense of common purpose is not only what unites an open project but also what helps an open, distributed group maintain its focus and measure its success. Clear, public, and mutually agreed-upon statements of purpose are a basic feature of open organizations.

So an open approach to global energy and climate change challenges should do the same. For example, when researching Generation IV nuclear power plant development, I've learned of a basic framework for task force goals:

  1. There should be a desire to reduce current carbon dioxide (CO2) emissions and greenhouse gases.
  2. There should be a desire to reduce nuclear waste.
  3. There should be a desire to provide stable, low-cost electricity without increasing CO2 emissions globally, particularly in rural areas and developing countries, where most future CO2 emissions will come from.
  4. There should be a desire to improve safety in nuclear power energy production. This should include developing a nuclear fuel that cannot be converted to weapons, reducing the chance of nuclear weapon confrontation or terrorist attacks.
  5. There should be a desire to reduce global air, water, and land pollution.

A successful open approach to these issues must begin by uniting a community around a common set of goals like these.

### Building community: inclusivity and collaboration

Once a community has clarified its motivations, desires, and goals, how does it attract people who _share_ those desires?

One method is by developing associations and having global conferences. For example, the [Generation IV International Forum (GIF)][5] was formed to address some of the desires I listed above. Members represent countries like Argentina, Brazil, Canada, China, EU, France, Japan, S. Korea, South Africa, Switzerland, UK, USA, Australia, and Russia. They have symposia to allow countries to exchange information, build communities, and expand inclusivity. In 2018, the group held its fourth symposium in Paris.

But in-person meetings aren't the only way to build community. Universities are working to build distributed, global communities focused on energy and climate challenges. MIT, for instance, is doing this with its own [energy initiative][6], which includes the "[Center for Advanced Nuclear Energy Systems][7]." The center's website facilitates discussions between like-minded advocates for energy solutions—a beautiful example of collaboration in action. Likewise, [Abilene Christian University][8] features a department devoted to future nuclear power. That department collaborates with nuclear development institutes and works to inspire the next generation of nuclear scientists, which they hope will lead to:

  1. raising people out of poverty worldwide through inexpensive, clean, safe and available energy,
  2. developing systems that provide clean water supply, and
  3. curing cancer.

Those are goals worth collaborating on.

### Community and passionate, purposeful participation

As we know from studying open organizations, _the more specific a community's goals, the more successful it will likely be._ This is especially true when working with _passionate_ communities, as keeping those communities focused ensures they're channeling their energy in appropriate, helpful directions.

Global attempts to solve energy and climate problems should consider this. Once again in the case of Generation IV nuclear power, there is growing interest in one type of nuclear power plant concept, the [molten-salt reactor][9] (MSR), which uses thorium in its nuclear fuel. Proponents of MSR hope to create a safer type of fuel, so they've started their own association, [Thorium Energy World][10], to advocate their cause. Its conference centers on the use of thorium in the fuel of these types of nuclear power plants; experts meet there to discuss their concepts and progress on MSR technology.

But it's also true that communities are much more likely to invest in the ideas that _they_ specify—not necessarily those "handed down" from leadership. Whenever possible, communities focused on energy and climate change challenges should take their cues from members.

Recall the Generation IV International Forum (GIF), which I mentioned above. That organization ran into a problem: too many competing concepts for next-generation nuclear power solutions. Rather than simply select one and demand that all members support it, the GIF created general categories and let participants select the concepts they favored from each. This resulted in a list of six concepts for future nuclear power plant development—one of which was MSR technology.

Narrowing the community's focus to a smaller set of options should help that community have more detailed and productive technical discussions. But on top of that, letting the community itself select the parameters of its discussions should greatly increase its chances of success.

### Community and transparency

Once a community has formed, questions of transparency and collaboration often arise. How well will members interact, communicate, and work with each other?

I've seen these issues firsthand while working with overseas distributors of the products I want them to sell for me. Why should they buy, stock, promote, advertise, and exhibit the products if at any time I could just cut them out and start selling to their competitors?

Taking an open approach to building communities often involves making the communities' rules, responsibilities, and norms _explicit_ and _transparent_. To solve my own problem with distributors, for instance, I entered into distributor agreements with them. These detailed both my responsibilities and theirs. With that clear agreement in hand, we could actively and collaboratively promote the product.

The Generation IV International Forum (GIF) faced a similar challenge with its member countries, specifically with regard to intellectual property. Each country knew it would be creating significant (and likely very valuable) intellectual property as part of its work exploring the six types of nuclear power. To ensure that knowledge sharing occurred effectively and amicably between the members, the group established guidelines for exchanging knowledge and research findings. It also granted a steering committee the authority to dismiss potential members who weren't operating according to the same standards of transparency and collaboration (lest they become a burden on the growing community).

They formed three types of agreements: "[Framework Agreements][11]" (in both French and English), System Arrangements (for each of the six systems I mentioned), and Memoranda of Understanding (MOU). With those agreements, the members could be more transparent, be more collaborative, and form more productive communities.

### Growing demand—for energy and openness

Increasing demand for electrical power in developing countries will impact global energy needs and climate change. The need for electricity and clean water for both health and agriculture will continue to grow. And the way we _address_ both those needs and that growth will determine how we meet next-generation energy and climate challenges. Adopting technologies like Generation IV nuclear power (and MSR) could help—but doing so will require a global, community-driven effort. An approach based on open organization principles will help us solve climate problems faster.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/10/global-energy-climate-challenges

Author: [Ron McFarland][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and edited by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-globe.png?itok=_drXt4Tn (Globe up in the clouds)
[2]: https://en.wikipedia.org/wiki/Generation_IV_reactor
[3]: https://opensource.com/open-organization/resources/open-org-definition
[4]: https://opensource.com/open-organization/17/9/rediscovering-your-why
[5]: https://www.gen-4.org/gif/jcms/c_74878/generation-iv-international-forum-gif-symposium
[6]: http://energy.mit.edu/
[7]: https://canes.mit.edu/
[8]: https://www.youtube.com/watch?v=3pa35s6HKa8
[9]: https://en.wikipedia.org/wiki/Molten_salt_reactor
[10]: http://www.thoriumenergyworld.com/
[11]: http://www.gen-4.org/gif/upload/docs/application/pdf/2014-01/framework_agreement.pdf

@@ -0,0 +1,73 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Reimagining-the-Internet project gets funding)
[#]: via: (https://www.networkworld.com/article/3444765/reimagining-the-internet-project-gets-funding.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Reimagining-the-Internet project gets funding
======

A National Science Foundation-financed internet-replacement testbed project lays out its initial plans.

The [Internet of Things][1] and 5G could be among the beneficiaries of an upcoming $20 million U.S. government cash injection designed to come up with new architectures to replace the existing public internet.

FABRIC, as the National Science Foundation-funded umbrella project is called, aims to come up with a proving ground to explore new ways to move, keep and compute data in shared infrastructure such as the public internet. The project "will allow scientists to explore what a new internet could look like at scale," says the lead institution, the University of North Carolina at Chapel Hill, [in a media release][2]. And it "will help determine the internet architecture of the future."

Bottlenecks, security and overall performance are infrastructure areas that the group is looking to improve on. The "Internet is showing its age and limitations," Ilya Baldin, director of Network Research and Infrastructure at the Renaissance Computing Institute at UNC-Chapel Hill, is quoted as saying in the release. "Especially when it comes to processing large amounts of data." RENCI is involved in developing and deploying research technologies.

"Today's internet was not designed for the massive datasets, machine-learning tools, advanced sensors and [Internet of Things devices][5]," FABRIC says, echoing others who, too, are envisioning a new internet:

[I wrote, in July,][6] for example, about a team of network engineers known as NOIA, who also want to revolutionize global public internet traffic. That group wants to co-join a new software-defined public internet with a bandwidth- and routing-trading system based on blockchain. Others, such as the companies [FileStorm and YottaChain, are working on distributed blockchain-like storage for Internet][7] adoption.

Another group, led by researchers at the University of Magdeburg, [whom I've also written about][8], wants to completely restructure the internet. That university, which has received German government funding, says adapting IoT to existing networks won't work. Centralized security that causes choke points is just one trouble spot that needs fixing, it thinks. "The internet, as we know it, is based on network architectures of the 70s and 80s, when it was designed for completely different applications," those researchers say.

FABRIC, the UNC project, which has begun to address ideas for the architecture it thinks will work best, says it will function using "storage, computational and network hardware nodes," joined by 100Gbps and Terabit optical links. "Interconnected deeply programmable core nodes [will be] deployed across the country," [it proposes in its media release][9]. Much like the original internet, in fact, universities, labs and [supercomputers][10] will be connected, this time in order for today's massive datasets to be experimented with.

"All major aspects of the FABRIC infrastructure will be programmable," it says. It will be "an everywhere programmable nationwide instrument comprised of novel extensible network elements." Machine learning and distributed network systems control will be included.

The project asserts that it's the programmability that will let it customize the platform to experiment with specific aspects of the public internet: cybersecurity is one, it says; distributed architectures could be another.

"If computer scientists were to start over today, knowing what they now know, the Internet might be designed in a different way," Baldin says.

Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3444765/reimagining-the-internet-project-gets-funding.html

Author: [Patrick Nelson][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and edited by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.zdnet.com/article/what-is-the-internet-of-things-everything-you-need-to-know-about-the-iot-right-now/
[2]: https://uncnews.unc.edu/2019/09/17/unc-chapel-hill-to-lead-20-million-project-to-test-a-reimagined-internet/
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.networkworld.com/article/3331676/iot-devices-proliferate-from-smart-bulbs-to-industrial-vibration-sensors.html
[6]: https://www.networkworld.com/article/3409783/public-internet-should-be-all-software-defined.html
[7]: https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html
[8]: https://www.networkworld.com/article/3407852/smarter-iot-concepts-reveal-creaking-networks.html
[9]: https://fabric-testbed.net/news/fabric-award
[10]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

@@ -0,0 +1,108 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SD-WAN: What is it and why you’ll use it one day)
[#]: via: (https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

SD-WAN: What is it and why you’ll use it one day
======

Software-defined wide-area networking, a software approach to managing wide-area networks, offers ease of deployment, central manageability and reduced costs, and can improve connectivity to branch offices and the cloud.

There have been significant changes in wide-area networks over the past few years, none more important than software-defined WAN, or SD-WAN, which is changing how network pros think about optimizing the use of connectivity that is as varied as Multiprotocol Label Switching ([MPLS][1]), frame relay and even DSL.

### What is SD-WAN?

As the name states, software-defined wide-area networks use software to control the connectivity, management and services between [data centers][2] and remote branches or cloud instances. Like its bigger technology brother, software-defined networking, SD-WAN decouples the control plane from the data plane.

An SD-WAN deployment can include existing routers and switches or virtualized customer premises equipment (vCPE), all running some version of software that handles policy, security, networking functions and other management tools, depending on vendor and customer configuration.

One of SD-WAN's chief features is the ability to manage multiple connections from MPLS to broadband to LTE. Another important piece is the ability to segment, partition and secure the traffic traversing the WAN.

SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process.

As a recent [Gartner][5] report said, SD-WAN and vCPE are key technologies to help enterprises transform their networks from "fragile to agile."

"We believe that emerging SD-WAN solutions and vCPE platforms will best address enterprise requirements for the next five years, as they provide the best mix of performance, price and flexibility compared to alternative hardware-centric approaches," Gartner stated. "Specifically, we predict that by 2023, more than 90% of WAN edge infrastructure refresh initiatives will be based on vCPE or SD-WAN appliances versus traditional routers (up from less than 40% today)."

With all of its advanced features making it an attractive choice for customers, the market has also attracted a number of choices, with more than 60 vendors – including [Cisco][6], VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa – competing in the SD-WAN market, many with very specialized offerings, Gartner says. [IDC says][7] that SD-WAN technology will grow at a 30.8% compound annual growth rate from 2018 to 2023 to reach $5.25 billion.

From its VNI study, Cisco says that globally, SD-WAN traffic was 9 percent of business IP WAN traffic in 2017 and will be 29 percent of business IP WAN traffic by 2022. In addition, SD-WAN traffic will grow five-fold from 2017 to 2022, a compound annual growth rate of 37 percent.

"SD-WAN continues to be one of the fastest-growing segments of the network infrastructure market, driven by a variety of factors. First, traditional enterprise WANs are increasingly not meeting the needs of today's modern digital businesses, especially as it relates to supporting SaaS apps and multi- and hybrid-cloud usage. Second, enterprises are interested in easier management of multiple connection types across their WAN to improve application performance and end-user experience," said Rohit Mehra, IDC vice president, Network Infrastructure. "Combined with the rapid embrace of SD-WAN by leading communications service providers globally, these trends continue to drive deployments of SD-WAN, providing enterprises with dynamic management of hybrid WAN connections and the ability to guarantee high levels of quality of service on a per-application basis."

### How does SD-WAN help network security?

One of the bigger areas SD-WAN impacts is network security.

The tipping point for a lot of customers was the advent of applications like the cloud-based Office 365 and Amazon Web Services (AWS) applications that require secure remote access, said [Neil Anderson, practice director of network solutions at World Wide Technology][8], a technology service provider. "SD-WAN lets customers set up secure regional zones or whatever the customer needs and lets them securely direct that traffic to where it needs to go based on internal security policies. SD-WAN is about architecting and incorporating security for apps like AWS and Office 365 into your connectivity fabric. It's a big motivator to move toward SD-WAN."

"With SD-WAN, mission-critical traffic and assets can be partitioned and protected against vulnerabilities in other parts of the enterprise. This use case appears to be especially popular in verticals such as retail, healthcare, and financial," [IDC wrote][9]. "SD-WAN can also protect application traffic from threats within the enterprise and from outside by leveraging a full stack of security solutions included in SD-WAN such as [next-gen firewalls][10], IPS, URL filtering, malware protection, and cloud security."

### What does SD-WAN mean for MPLS?

One of the hotter SD-WAN debates is what the software technology will do to the use of MPLS, the packet-forwarding technology that uses labels in order to make data-forwarding decisions. The most common use cases are branch offices, campus networks, metro Ethernet services and enterprises that need quality of service (QoS) for real-time applications.

For the most part, networking vendors believe MPLS will be around for a long time and that SD-WAN won't totally eliminate the need for it. The major knocks against MPLS are how traditionally expensive the service is and how complicated it is to set up.

A recent report from [Avant Communications][11], a cloud services provider that specializes in SD-WAN, found that 83% of enterprises that use or are familiar with MPLS plan to increase their MPLS network infrastructure this year, and 40% say they will "significantly increase" their use of it.

How that shakes out remains unknown, but it seems both technologies will have a role in enterprises for the near future.

"For us, MPLS is just another option. We have never said that it's SD-WAN versus MPLS, that MPLS is going to get killed off or it needs to get killed off," said [Sanjay Uppal][12], vice president and general manager of VMware's VeloCloud Business Unit.

Uppal said that, with MPLS, VMware at least is not finding that customers are turning off their MPLS in droves. "They are capping it in several instances. They are continuing to buy some more. Maybe not as much as they probably had in the past, but it's really opening up applications to use more [of the underlying network responsible for delivery of packets]. All kinds of underlay are being purchased. MPLS is being purchased, more of broadband, direct internet access," he said.

Gartner says its clients hope to fund their WAN expansion/update by replacing or augmenting expensive MPLS connections with internet-based VPNs, often from alternate providers. However, the suitability of internet connections varies widely by geography, and mixing connections from multiple service providers increases complexity. SD-WAN has dramatically simplified this approach for a number of reasons, Gartner says, including:

  * Due to the simpler operational environment and the ability to use multiple circuits from multiple carriers, enterprises can abstract the transport layer from the logical layer and be less dependent on their service providers.
  * This decoupling of layers is enabling new MSPs to emerge to take advantage of the above for customers that still want to outsource their WANs.
  * Traditional service providers are responding with Network Function Virtualization ([NFV][13])-based offerings that combine and orchestrate services (SD-WAN, security, WAN optimization) from multiple popular vendors. NFV enables virtualized network functions, including routing, mobility and security.

There are other reasons customers will use MPLS in the SD-WAN world, experts said. "There is a concern about how customers will back up systems when there are outages," Anderson said. "MPLS and other technologies have a role there."

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html

Author: [Michael Cooney][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and edited by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/2297171/network-security-mpls-explained.html
[2]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.gartner.com/doc/reprints?id=1-5MNRUAO&ct=181019&st=sb&mkt_tok=eyJpIjoiTnpZNFlXTXpabU0xTVdNeSIsInQiOiJzSmZGdzFWZldRN0s0TUxWMVBKOFUxdnJVMCtEUk13Z3Y5VCs1Z1wvcUY5ZHQ1XC9uZG1WY1Uxbm5TOFFMZzcxQ3pybmhMSHo5RFdPVEVCVUZrbnJnODlGVklOZGtlT0pFQ1A1aFNaQ3N1ODk5Y1FaN0JqTDJiM0U5cnZpTVBMTnliIn0%3D
[6]: https://www.networkworld.com/article/3322937/what-will-be-hot-for-cisco-in-2019.html
[7]: https://www.idc.com/getdoc.jsp?containerId=prUS45380319
[8]: https://www.wwt.com/profile/neil-anderson
[9]: https://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise-networks/intelligent-wan/idc-tangible-benefits.pdf
[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[11]: http://www.networkworld.com/cms/article/Avant%20Communications,%20a%20cloud%20services%20provider%20that%20specializes%20in%20SD-WAN,%20recently%20issued%20a%20report%20entitled%20State%20of%20Disruption%20that%20found%20that%2083%25%20of%20enterprises%20that%20use%20or%20are%20familiar%20with%20MPLS%20plan%20to%20increase%20their%20MPLS%20network%20infrastructure%20this%20year,%20and%2040%25%20say%20they%20will%20
[12]: https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html
[13]: https://www.networkworld.com/article/3253118/what-is-nfv-and-what-are-its-benefits.html

@@ -0,0 +1,65 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The biggest risk to uptime? Your staff)
[#]: via: (https://www.networkworld.com/article/3444762/the-biggest-risk-to-uptime-your-staff.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

The biggest risk to uptime? Your staff
======

Human error is the chief cause of downtime, a new study finds. Imagine that.

There was an old joke: "To err is human, but to really foul up you need a computer." Now it seems the reverse is true. The reliability of [data center][1] equipment is vastly improved, but the humans running them have not kept up, and that's a threat to uptime.

The Uptime Institute has surveyed thousands of IT professionals throughout the year on outages and said the vast majority of data center failures are caused by human error, from 70 percent to 75 percent.

And some of them are severe. It found that more than 30 percent of IT service and data center operators experienced downtime they called a "severe degradation of service" over the last year, with 10 percent of the 2019 respondents reporting that their most recent incident cost more than $1 million.

In Uptime's April 2019 survey, 60 percent of respondents believed that their most recent significant downtime incident could have been prevented with better management/processes or configuration. For outages that cost more than $1 million, this figure jumped to 74 percent.

However, the end fault is not necessarily with the staff, Uptime argues, but with the management that has failed them.

"Perhaps there is simply a limit to what can be achieved in an industry that still relies heavily on people to perform many of the most basic and critical tasks and thus is subject to human error, which can never be completely eliminated," wrote Kevin Heslin, chief editor of the Uptime Institute Journal, in a [blog post][4].

"However, a quick survey of the issues suggests that management failure — not human error — is the main reason that outages persist. By under-investing in training, failing to enforce policies, allowing procedures to grow outdated, and underestimating the importance of qualified staff, management sets the stage for a cascade of circumstances that leads to downtime," Heslin went on to say.

Uptime noted that the complexity of a company's infrastructure, especially its distributed nature, can increase the risk that simple errors will cascade into a service outage, and it said companies need to be aware of the greater risk involved with greater complexity.

On the staffing side, it cautioned against expanding critical IT capacity faster than the company can attract and apply the resources to manage that infrastructure, and it advised being aware of any staffing and skills shortages before they start to impair mission-critical operations.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3444762/the-biggest-risk-to-uptime-your-staff.html

Author: [Andy Patrizio][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and edited by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://journal.uptimeinstitute.com/how-to-avoid-outages-try-harder/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (heguangzhi)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )
@@ -55,7 +55,7 @@ via: https://opensourceforu.com/2019/10/understanding-joins-in-hadoop/
 
 作者:[Bhaskar Narayan Das][a]
 选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
+译者:[heguangzhi](https://github.com/heguangzhi)
 校对:[校对者ID](https://github.com/校对者ID)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,100 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Command line quick tips: Locate and process files with find and xargs)
|
||||
[#]: via: (https://fedoramagazine.org/command-line-quick-tips-locate-and-process-files-with-find-and-xargs/)
|
||||
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
|
||||
|
||||
Command line quick tips: Locate and process files with find and xargs
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
**find** is one of the more powerful and flexible command-line programs in the daily toolbox. It does what the name suggests: it finds files and directories that match the conditions you specify. And with arguments like **-exec** or **-delete**, you can have find take action on what it… finds.
|
||||
|
||||
In this installment of the [Command Line Quick Tips][2] series, you’ll get an introduction to the **find** command and learn how to use it to process files with built-in commands or the **xargs** command.
|
||||
|
||||
### Finding files
|
||||
|
||||
At a minimum, **find** takes a path to find things in. For example, this command will find (and print) every file on the system:
|
||||
|
||||
```
|
||||
find /
|
||||
```
|
||||
|
||||
And since everything is a file, you will get a lot of output to sort through. This probably doesn’t help you locate what you’re looking for. You can change the path argument to narrow things down a bit, but it’s still not really any more helpful than using the **ls** command. So you need to think about what you’re trying to locate.
|
||||
|
||||
Perhaps you want to find all the JPEG files in your home directory. The **-name** argument allows you to restrict your results to files that match the given pattern.
|
||||
|
||||
```
|
||||
find ~ -name '*jpg'
|
||||
```
|
||||
|
||||
But wait! What if some of them have an uppercase extension? **-iname** is like **-name**, but it is case-insensitive:
|
||||
|
||||
```
|
||||
find ~ -iname '*jpg'
|
||||
```
|
||||
|
||||
Great! But the 8.3 name scheme is so 1985. Some of the pictures might have a .jpeg extension. Fortunately, we can combine patterns with an “or,” represented by **-o**. The parentheses are escaped so that the shell doesn’t try to interpret them instead of the **find** command.
|
||||
|
||||
```
|
||||
find ~ \( -iname '*jpeg' -o -iname '*jpg' \)
|
||||
```
|
||||
|
||||
We’re getting closer. But what if you have some directories that end in jpg? (Why you named a directory **bucketofjpg** instead of **pictures** is beyond me.) We can modify our command with the **-type** argument to look only for files:
|
||||
|
||||
```
|
||||
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f
|
||||
```
|
||||
|
||||
Or maybe you’d like to find those oddly named directories so you can rename them later:
|
||||
|
||||
```
|
||||
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d
|
||||
```
|
||||
|
||||
It turns out you’ve been taking a lot of pictures lately, so narrow this down to files that have changed in the last week with **-mtime** (modification time). The **-7** means all files modified in 7 days or fewer.
|
||||
|
||||
```
|
||||
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7
|
||||
```
|
||||
|
||||
### Taking action with xargs
|
||||
|
||||
The **xargs** command takes arguments from the standard input stream and executes a command based on them. Sticking with the example in the previous section, let’s say you want to copy all of the JPEG files in your home directory that have been modified in the last week to a thumb drive that you’ll attach to a digital photo display. Assume you already have the thumb drive mounted as _/media/photo_display_.
|
||||
|
||||
```
|
||||
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 -print0 | xargs -0 cp -t /media/photo_display
|
||||
```
|
||||
|
||||
The **find** command is slightly modified from the previous version. The **-print0** option makes a subtle change to how the output is written: instead of separating results with a newline, it adds a null character. The **-0** (zero) option to **xargs** adjusts the parsing to expect this. This is important because otherwise actions on file names that contain spaces, quotes, or other special characters may not work as expected. You should use these options whenever you're taking action on files.
|
||||
|
||||
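
To see why this matters, here is a quick sketch you can try (the file name is hypothetical):

```
touch "$HOME/vacation photo.jpg"
find ~ -name '*.jpg' | xargs ls -l              # the space splits the name into two bogus arguments
find ~ -name '*.jpg' -print0 | xargs -0 ls -l   # the null-delimited version handles it correctly
```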
The **-t** argument to **cp** is important because **cp** normally expects the destination to come last. You can do this without **xargs** by using **find**'s **-exec** action, but running one **cp** per file with **-exec ... \;** is slower, especially with a large number of files; the **xargs** method runs as a single invocation of **cp**.
|
||||
|
||||
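
For comparison, here is a sketch of the same copy using **find**'s **-exec** action. The **+** terminator batches file names much like **xargs** does, while ending with **\;** instead would run one **cp** per file:

```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 -exec cp -t /media/photo_display '{}' +
```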
### Find out more
|
||||
|
||||
This post only scratches the surface of what **find** can do. **find** supports testing based on permissions, ownership, access time, and much more. It can even compare the files in the search path to other files. Combining tests with Boolean logic can give you incredible flexibility to find exactly the files you're looking for. With built-in commands or piping to **xargs**, you can quickly process a large set of files.
|
||||
|
||||
_Portions of this article were previously published on [Opensource.com][3]._ _Photo by _[_Warren Wong_][4]_ on [Unsplash][5]_.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/command-line-quick-tips-locate-and-process-files-with-find-and-xargs/
|
||||
|
||||
作者:[Ben Cotton][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/bcotton/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg
|
||||
[2]: https://fedoramagazine.org/?s=command+line+quick+tips
|
||||
[3]: https://opensource.com/article/18/4/how-use-find-linux
|
||||
[4]: https://unsplash.com/@wflwong?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[5]: https://unsplash.com/s/photos/search?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -0,0 +1,124 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Start developing in the cloud with Eclipse Che IDE)
|
||||
[#]: via: (https://opensource.com/article/19/10/cloud-ide-che)
|
||||
[#]: author: (Bryant Son https://opensource.com/users/brson)
|
||||
|
||||
Start developing in the cloud with Eclipse Che IDE
|
||||
======
|
||||
Eclipse Che offers Java developers an Eclipse IDE in a container-based
|
||||
cloud environment.
|
||||
![Tools in a cloud][1]
|
||||
|
||||
In the many, many technical interviews I've gone through in my professional career, I've noticed that I'm rarely asked questions that have definitive answers. Most of the time, I'm asked open-ended questions that do not have an absolutely correct answer but evaluate my prior experiences and how well I can explain things.
|
||||
|
||||
One interesting open-ended question that I've been asked several times is:
|
||||
|
||||
> "As you start your first day on a project, what five tools do you install first and why?"
|
||||
|
||||
There is no single definitively correct answer to this question. But as a programmer who codes, I know the must-have tools that I cannot live without. And as a Java developer, I always include an integrated development environment (IDE)—and my two favorites are Eclipse IDE and IntelliJ IDEA.
|
||||
|
||||
### My Java story
|
||||
|
||||
When I was a student at the University of Texas at Austin, most of my computer science courses were taught in Java. And as an enterprise developer working for different companies, I have mostly worked with Java to build various enterprise-level applications. So, I know Java, and most of the time I've developed with Eclipse. I have also used the Spring Tools Suite (STS), which is a variation of the Eclipse IDE that is installed with Spring Framework plugins, and IntelliJ, which is not exactly open source, since I prefer its paid edition, but some Java developers favor it due to its faster performance and other fancy features.
|
||||
|
||||
Regardless of which IDE you use, installing your own developer IDE presents one common, big problem: _"It works on my computer, and I don't know why it doesn't work on your computer."_
|
||||
|
||||
[![xkcd comic][2]][3]
|
||||
|
||||
Because a developer tool like Eclipse can be highly dependent on the runtime environment, library configuration, and operating system, the task of creating a unified sharing environment for everyone can be quite a challenge.
|
||||
|
||||
But there is a perfect solution to this. We are living in the age of cloud computing, and Eclipse Che provides an open source solution to running an Eclipse-based IDE in a container-based cloud environment.
|
||||
|
||||
### From local development to a cloud environment
|
||||
|
||||
I want the benefits of a cloud-based development environment with the familiarity of my local system. That's a difficult balance to find.
|
||||
|
||||
When I first heard about Eclipse Che, it looked like the cloud-based development environment I'd been looking for, but I got busy with technology I needed to learn and didn't follow up with it. Then a new project came up that required a remote environment, and I had the perfect excuse to use Che. Although I couldn't fully switch to the cloud-based IDE for my daily work, I saw it as a chance to get more familiar with it.
|
||||
|
||||
![Eclipse Che interface][4]
|
||||
|
||||
Eclipse Che IDE has a lot of excellent [features][5], but what I like most is that it is an open source framework that offers exactly what I want to achieve:
|
||||
|
||||
1. Scalable workspaces leveraging the power of cloud
|
||||
2. Extensible and customizable plugins for different runtimes
|
||||
3. A seamless onboarding experience to enable smooth collaboration between members
|
||||
|
||||
|
||||
|
||||
### Getting started with Eclipse Che
|
||||
|
||||
Eclipse Che can be installed on any container-based environment. I run both [Code Ready Workspace 1.2][6] and [Eclipse Che 7][7] on [OpenShift][8], but I've also tried it on top of [Minikube][9] and [Minishift][10].
|
||||
|
||||
![Eclipse Che on OpenShift][11]
|
||||
|
||||
Read the requirement guides to ensure your runtime is compatible with Che:
|
||||
|
||||
* [Che on Kubernetes][12]
|
||||
* [Che on OpenShift-compatible OSS environments like OKD][13]
|
||||
|
||||
|
||||
|
||||
For instance, you can quickly install Eclipse Che if you launch OKD locally through Minishift, but make sure to have at least 5GB RAM to have a smooth experience.
|
||||
|
||||
There are various ways to install Eclipse Che; I recommend leveraging the Che command-line interface, [chectl][14]. Although it is still in an incubator stage, it is my preferred way because it gives multiple configuration and management options. You can also run the installation as [an Operator][15], which you can [read more about][16]. I decided to go with chectl since I did not want to take on both concepts at the same time. Che's quick-start provides [installation steps for many scenarios][17].
|
||||
|
||||
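
As a rough sketch of the chectl route on a local Minishift cluster, something like the following should work; treat the exact flags as assumptions to verify against the chectl and Minishift documentation for your versions:

```
# start a local OKD cluster with enough memory for Che
minishift start --memory=5GB

# deploy Eclipse Che onto it with the Che CLI
chectl server:start --platform=minishift
```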
### Why cloud works best for me
|
||||
|
||||
Although the local installation of Eclipse Che works, I found the most painless way is to install it on one of the common public cloud vendors.
|
||||
|
||||
I like to collaborate with others in my IDE; working collaboratively is essential if you want your application to be something more than a hobby project. And when you are working at a company, there will be enterprise considerations around the application lifecycle of develop, test, and deploy for your application.
|
||||
|
||||
Eclipse Che's multi-user capability means each person owns an isolated workspace that does not interfere with others' workspaces, yet team members can still collaborate on application development by working in the same cluster. And if you are considering moving to Eclipse Che for something more than a hobby or testing, the cloud environment's multi-user features will enable a faster development cycle. This includes [resource management][18] to ensure resources are allocated to each environment, as well as security considerations like [authentication and authorization][19] (or specific needs like [OpenID][20]) that are important to maintaining the environment.
|
||||
|
||||
Therefore, moving Eclipse Che to the cloud early will be a good choice if your development experience is like mine. By moving to the cloud, you can take advantage of cloud-based scalability and resource flexibility while on the road.
|
||||
|
||||
### Use Che and give back
|
||||
|
||||
I really enjoy this new development configuration that enables me to regularly code in the cloud. Open source enables me to do so in an easy way, so it's important for me to consider how to give back. All of Che's components are open source under the Eclipse Public License 2.0 and available on GitHub at the following links:
|
||||
|
||||
* [Eclipse Che GitHub][21]
|
||||
* [Eclipse Che Operator][15]
|
||||
* [chectl (Eclipse Che CLI)][14]
|
||||
|
||||
|
||||
|
||||
Consider using Che and giving back—either as a user by filing bug reports or as a developer to help enhance the project.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/cloud-ide-che
|
||||
|
||||
作者:[Bryant Son][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/brson
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/1_xkcd.jpg (xkcd comic)
|
||||
[3]: https://xkcd.com/1316
|
||||
[4]: https://opensource.com/sites/default/files/uploads/0_banner.jpg (Eclipse Che interface)
|
||||
[5]: https://www.eclipse.org/che/features
|
||||
[6]: https://developers.redhat.com/products/codeready-workspaces/overview
|
||||
[7]: https://che.eclipse.org/eclipse-che-7-is-now-available-40ae07120b38
|
||||
[8]: https://www.openshift.com/
|
||||
[9]: https://kubernetes.io/docs/tutorials/hello-minikube/
|
||||
[10]: https://www.okd.io/minishift/
|
||||
[11]: https://opensource.com/sites/default/files/uploads/2_openshiftresources.jpg (Eclipse Che on OpenShift)
|
||||
[12]: https://www.eclipse.org/che/docs/che-6/kubernetes-single-user.html
|
||||
[13]: https://www.eclipse.org/che/docs/che-6/openshift-single-user.html
|
||||
[14]: https://github.com/che-incubator/chectl
|
||||
[15]: https://github.com/eclipse/che-operator
|
||||
[16]: https://opensource.com/article/19/6/kubernetes-potential-run-anything
|
||||
[17]: https://www.eclipse.org/che/docs/che-7/che-quick-starts.html#running-che-locally_che-quick-starts
|
||||
[18]: https://www.eclipse.org/che/docs/che-6/resource-management.html
|
||||
[19]: https://www.eclipse.org/che/docs/che-6/user-management.html
|
||||
[20]: https://www.eclipse.org/che/docs/che-6/authentication.html
|
||||
[21]: https://github.com/eclipse/che
|
@ -0,0 +1,301 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Achieve high-scale application monitoring with Prometheus)
|
||||
[#]: via: (https://opensource.com/article/19/10/application-monitoring-prometheus)
|
||||
[#]: author: (Paul Brebner https://opensource.com/users/paul-brebner)
|
||||
|
||||
Achieve high-scale application monitoring with Prometheus
|
||||
======
|
||||
Prometheus' prowess as a monitoring system and its ability to achieve
|
||||
high-scalability make it a strong choice for monitoring applications and
|
||||
servers.
|
||||
![Tall building with windows][1]
|
||||
|
||||
[Prometheus][2] is an increasingly popular—for good reason—open source tool that provides monitoring and alerting for applications and servers. Prometheus' great strength is in monitoring server-side metrics, which it stores as [time-series data][3]. While Prometheus doesn't lend itself to application performance management, active control, or user experience monitoring (although a GitHub extension does make user browser metrics available to Prometheus), its prowess as a monitoring system and ability to achieve high-scalability through a [federation of servers][4] make Prometheus a strong choice for a wide variety of use cases.
|
||||
|
||||
In this article, we'll take a closer look at Prometheus' architecture and functionality and then examine a detailed instance of the tool in action.
|
||||
|
||||
### Prometheus architecture and components
|
||||
|
||||
Prometheus consists of the Prometheus server (handling service discovery, metrics retrieval and storage, and time-series data analysis through the PromQL query language), a data model for metrics, a graphing GUI, and native support for [Grafana][5]. There is also an optional alert manager that allows users to define alerts via the query language and an optional push gateway for short-term application monitoring. These components are situated as shown in the following diagram.
|
||||
|
||||
![Prometheus architecture][6]
|
||||
|
||||
Prometheus can automatically capture standard metrics by using agents to execute general-purpose code in the application environment. It can also capture custom metrics through instrumentation, placing custom code within the source code of the monitored application. Prometheus officially supports [client libraries][7] for Go, Python, Ruby, and Java/Scala and also enables users to write their own libraries. Additionally, many unofficial libraries for other languages are available.
|
||||
|
||||
Developers can also utilize third-party [exporters][8] to automatically activate instrumentation for many popular software solutions they might be using. For example, users of JVM-based applications like open source [Apache Kafka][9] and [Apache Cassandra][10] can easily collect metrics by leveraging the existing [JMX exporter][11]. In other cases, an exporter won't be needed because the application will [expose metrics][12] that are already in the Prometheus format. Those on Cassandra might also find Instaclustr's freely available [Cassandra Exporter for Prometheus][13] to be helpful, as it integrates Cassandra metrics from a self-managed cluster into Prometheus application monitoring.
|
||||
|
||||
Also important: Developers can leverage an available [node exporter][14] to monitor kernel metrics and host hardware. Prometheus offers a [Java client][15] as well, with a number of features that can be registered either piecemeal or at once through a single **DefaultExports.initialize();** command—including memory pools, garbage collection, JMX, classloading, and thread counts.
|
||||
|
||||
### Prometheus data modeling and metrics
|
||||
|
||||
Prometheus provides four metric types:
|
||||
|
||||
* **Counter:** Counts incrementing values; a restart can return these values to zero
|
||||
* **Gauge:** Tracks metrics that can go up and down
|
||||
* **Histogram:** Observes data according to specified response sizes or durations and counts the sums of observed values along with counts in configurable buckets
|
||||
* **Summary:** Counts observed data similar to a histogram and offers configurable quantiles that are calculated over a sliding time window
|
||||
|
||||
|
||||
|
||||
Prometheus time-series data metrics each include a string name, which follows a naming convention to include the name of the monitored data subject, the logical type, and the units of measure used. Each metric includes streams of 64-bit float values that are timestamped down to the millisecond, and a set of key:value pairs labeling the dimensions it measures. Prometheus automatically adds **Job** and **Instance** labels to each metric to keep track of the configured job name of the data target and the **<host>:<port>** piece of the scraped target URL, respectively.
|
||||
|
||||
### Prometheus example: the Anomalia Machina anomaly detection experiment
|
||||
|
||||
Before moving into the example, download and begin using open source Prometheus by following this [getting started][16] guide.
|
||||
|
||||
To demonstrate how to put Prometheus into action and perform application monitoring at a high scale, let's take a look at a recent [experimental Anomalia Machina project][17] we completed at Instaclustr. This project—just a test case, not a commercially available solution—leverages Kafka and Cassandra in an application deployed by Kubernetes, which performs anomaly detection on streaming data. (Such detection is critical to use cases including IoT applications and digital ad fraud, among other areas.) The experimental application relies heavily on Prometheus to collect application metrics across distributed instances and make them readily available to view.
|
||||
|
||||
This diagram displays the experiment's architecture:
|
||||
|
||||
![Anomalia Machina Architecture][18]
|
||||
|
||||
Our goals in utilizing Prometheus included monitoring the application's more generic metrics, such as throughput, as well as the response times delivered by the Kafka load generator (the Kafka producer), the Kafka consumer, and the Cassandra client tasked with detecting any anomalies in the data. Prometheus monitors the system's hardware metrics as well, such as the CPU for each AWS EC2 instance running the application. The project also counts on Prometheus to monitor application-specific metrics such as the total number of rows each Cassandra read returns and, crucially, the number of anomalies it detects. All of this monitoring is centralized for simplicity.
|
||||
|
||||
In practice, this means forming a test pipeline with producer, consumer, and detector methods, as well as the following three metrics:
|
||||
|
||||
* A counter metric, called **prometheusTest_requests_total**, increments each time that each pipeline stage executes without incident, while a **stage** label allows for tracking the successful execution of each stage, and a **total** label tracks the total pipeline count.
|
||||
* Another counter metric, called **prometheusTest_anomalies_total**, counts any detected anomalies.
|
||||
* Finally, a gauge metric called **prometheusTest_duration_seconds** tracks the seconds of duration for each stage (again using a **stage** label and a **total** label).
|
||||
|
||||
|
||||
|
||||
The code behind these measurements increments counter metrics using the **inc()** method and sets the time value of the gauge metric with the **setToTime()** method. This is demonstrated in the following annotated example code:
|
||||
|
||||
|
||||
```
|
||||
import java.io.IOException;
|
||||
import io.prometheus.client.Counter;
|
||||
import io.prometheus.client.Gauge;
|
||||
import io.prometheus.client.exporter.HTTPServer;
|
||||
import io.prometheus.client.hotspot.DefaultExports;
|
||||
|
||||
// https://github.com/prometheus/client_java
|
||||
// Demo of how we plan to use Prometheus Java client to instrument Anomalia Machina.
|
||||
// Note that the Anomalia Machina application will have Kafka Producer and Kafka consumer and rest of pipeline running in multiple separate processes/instances.
|
||||
// So metrics from each will have different host/port combinations.
|
||||
public class PrometheusBlog {
|
||||
static String appName = "prometheusTest";
|
||||
// counters can only increase in value (until process restart)
|
||||
// Execution count. Use a single Counter for all stages of the pipeline, stages are distinguished by labels
|
||||
static final Counter pipelineCounter = Counter.build()
|
||||
.name(appName + "_requests_total").help("Count of executions of pipeline stages")
|
||||
.labelNames("stage")
|
||||
.register();
|
||||
// in theory could also use pipelineCounter to count anomalies found using another label
|
||||
// but less potential for confusion having another counter. Doesn't need a label
|
||||
static final Counter anomalyCounter = Counter.build()
|
||||
.name(appName + "_anomalies_total").help("Count of anomalies detected")
|
||||
.register();
|
||||
// A Gauge can go up and down, and is used to measure current value of some variable.
|
||||
// pipelineGauge will measure duration in seconds of each stage using labels.
|
||||
static final Gauge pipelineGauge = Gauge.build()
|
||||
.name(appName + "_duration_seconds").help("Gauge of stage durations in seconds")
|
||||
.labelNames("stage")
|
||||
.register();
|
||||
|
||||
public static void main(String[] args) {
|
||||
// Allow default JVM metrics to be exported
|
||||
DefaultExports.initialize();
|
||||
|
||||
// Metrics are pulled by Prometheus, create an HTTP server as the endpoint
|
||||
// Note if there are multiple processes running on the same server need to change port number.
|
||||
// And add all IPs and port numbers to the Prometheus configuration file.
|
||||
HTTPServer server = null;
|
||||
try {
|
||||
server = new HTTPServer(1234);
|
||||
} catch (IOException e) {
|
||||
e.printStackTrace();
|
||||
}
|
||||
// now run 1000 executions of the complete pipeline with random time delays and increasing rate
|
||||
int max = 1000;
|
||||
for (int i=0; i < max; i++)
|
||||
{
|
||||
// total time for complete pipeline, and increment anomalyCounter
|
||||
pipelineGauge.labels("total").setToTime(() -> {
|
||||
producer();
|
||||
consumer();
|
||||
if (detector())
|
||||
anomalyCounter.inc();
|
||||
});
|
||||
// total pipeline count
|
||||
pipelineCounter.labels("total").inc();
|
||||
System.out.println("i=" + i);
|
||||
|
||||
// increase the rate of execution
|
||||
try {
|
||||
Thread.sleep(max-i);
|
||||
} catch (InterruptedException e) {
|
||||
e.printStackTrace();
|
||||
}
|
||||
}
|
||||
if (server != null) server.stop(); // avoid a NullPointerException if the HTTP server failed to start
|
||||
}
|
||||
// the 3 stages of the pipeline, for each we increase the stage counter and set the Gauge duration time
|
||||
public static void producer() {
|
||||
class Local {};
|
||||
String name = Local.class.getEnclosingMethod().getName();
|
||||
pipelineGauge.labels(name).setToTime(() -> {
|
||||
try {
|
||||
Thread.sleep(1 + (long)(Math.random()*20));
|
||||
} catch (InterruptedException e) {
|
||||
e.printStackTrace();
|
||||
}
|
||||
});
|
||||
pipelineCounter.labels(name).inc();
|
||||
}
|
||||
public static void consumer() {
|
||||
class Local {};
|
||||
String name = Local.class.getEnclosingMethod().getName();
|
||||
pipelineGauge.labels(name).setToTime(() -> {
|
||||
try {
|
||||
Thread.sleep(1 + (long)(Math.random()*10));
|
||||
} catch (InterruptedException e) {
|
||||
e.printStackTrace();
|
||||
}
|
||||
});
|
||||
pipelineCounter.labels(name).inc();
|
||||
}
|
||||
// detector returns true if anomaly detected else false
|
||||
public static boolean detector() {
|
||||
class Local {};
|
||||
String name = Local.class.getEnclosingMethod().getName();
|
||||
pipelineGauge.labels(name).setToTime(() -> {
|
||||
try {
|
||||
Thread.sleep(1 + (long)(Math.random()*200));
|
||||
} catch (InterruptedException e) {
|
||||
e.printStackTrace();
|
||||
}
|
||||
});
|
||||
pipelineCounter.labels(name).inc();
|
||||
return (Math.random() > 0.95);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Prometheus collects metrics by polling ("scraping") instrumented code (unlike some other monitoring solutions that receive metrics via push methods). The code example above creates a required HTTP server on port 1234 so that Prometheus can scrape metrics as needed.
|
||||
|
||||
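
Once the example application is running, you can confirm that metrics are being exposed by fetching the endpoint yourself; the output below is illustrative (your counts will differ):

```
$ curl -s http://localhost:1234/metrics | grep anomalies
# HELP prometheusTest_anomalies_total Count of anomalies detected
# TYPE prometheusTest_anomalies_total counter
prometheusTest_anomalies_total 42.0
```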
The following sample code addresses Maven dependencies:
|
||||
|
||||
|
||||
```
|
||||
<!-- The client -->
|
||||
<dependency>
|
||||
<groupId>io.prometheus</groupId>
|
||||
<artifactId>simpleclient</artifactId>
|
||||
<version>LATEST</version>
|
||||
</dependency>
|
||||
<!-- Hotspot JVM metrics-->
|
||||
<dependency>
|
||||
<groupId>io.prometheus</groupId>
|
||||
<artifactId>simpleclient_hotspot</artifactId>
|
||||
<version>LATEST</version>
|
||||
</dependency>
|
||||
<!-- Exposition HTTPServer-->
|
||||
<dependency>
|
||||
<groupId>io.prometheus</groupId>
|
||||
<artifactId>simpleclient_httpserver</artifactId>
|
||||
<version>LATEST</version>
|
||||
</dependency>
|
||||
<!-- Pushgateway exposition-->
|
||||
<dependency>
|
||||
<groupId>io.prometheus</groupId>
|
||||
<artifactId>simpleclient_pushgateway</artifactId>
|
||||
<version>LATEST</version>
|
||||
</dependency>
|
||||
```
|
||||
|
||||
The code example below tells Prometheus where it should look to scrape metrics. This code can simply be added to the configuration file (default: prometheus.yml) for basic deployments and tests.
|
||||
|
||||
|
||||
```
|
||||
global:
|
||||
scrape_interval: 15s # By default, scrape targets every 15 seconds.
|
||||
|
||||
# scrape_configs has jobs and targets to scrape for each.
|
||||
scrape_configs:
|
||||
# job 1 is for testing prometheus instrumentation from multiple application processes.
|
||||
# The job name is added as a label job=<job_name> to any timeseries scraped from this config.
|
||||
- job_name: 'testprometheus'
|
||||
|
||||
# Override the global default and scrape targets from this job every 5 seconds.
|
||||
scrape_interval: 5s
|
||||
|
||||
# this is where to put multiple targets, e.g. for Kafka load generators and detectors
|
||||
static_configs:
|
||||
- targets: ['localhost:1234', 'localhost:1235']
|
||||
|
||||
# job 2 provides operating system metrics (e.g. CPU, memory etc).
|
||||
- job_name: 'node'
|
||||
|
||||
# Override the global default and scrape targets from this job every 5 seconds.
|
||||
scrape_interval: 5s
|
||||
|
||||
static_configs:
|
||||
- targets: ['localhost:9100']
|
||||
```
|
||||
|
||||
Note the job named "node" that uses port 9100 in this configuration file; this job offers node metrics and requires running the [Prometheus node exporter][14] on the same server where the application is running. Polling for metrics should be done with care: doing it too often can overload applications, while doing it too infrequently can result in lag. Where application metrics can't be polled, Prometheus also offers a [push gateway][19].
|
||||
|
||||
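
For the "node" job to have something to scrape, the node exporter must be listening on port 9100. A minimal sketch, assuming you have already downloaded the **node_exporter** binary from the Prometheus site:

```
# serve host metrics on :9100 (the default) and spot-check the output
./node_exporter --web.listen-address=":9100" &
curl -s http://localhost:9100/metrics | head
```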
### Viewing Prometheus metrics and results
|
||||
|
||||
Our experiment initially used [expressions][20], and later [Grafana][5], to visualize data and overcome Prometheus' lack of default dashboards. Using the Prometheus interface (or [http://localhost:9090/metrics][21]), select metrics by name and then enter them in the expression box for execution. (Note that it's common to experience error messages at this stage, so don't be discouraged if you encounter a few issues.) With correctly functioning expressions, results will be available for display in tables or graphs as appropriate.
|
||||
|
||||
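
If you prefer the command line to the web interface, the same metrics can be queried over Prometheus' HTTP API; for example, using the counter from the code above:

```
curl -s 'http://localhost:9090/api/v1/query?query=prometheusTest_requests_total'
```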
Using the **[irate][22]** or **[rate][23]** function on a counter metric will produce a useful rate graph:
|
||||
|
||||
![Rate graph][24]
|
||||
|
||||
Here is a similar graph of a gauge metric:
|
||||
|
||||
![Gauge graph][25]
|
||||
|
||||
Grafana provides much more robust graphing capabilities and built-in Prometheus support with graphs able to display multiple metrics:
|
||||
|
||||
![Grafana graph][26]
|
||||
|
||||
To enable Grafana, install it, navigate to <http://localhost:3000/>, create a Prometheus data source, and add a Prometheus graph using an expression. A note here: An empty graph often points to a time range issue, which can usually be solved by using the "Last 5 minutes" setting.
|
||||
|
||||
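
One quick way to get a local Grafana instance for this step is the official container image (one option among several; a distribution package or tarball works just as well):

```
docker run -d -p 3000:3000 --name=grafana grafana/grafana
```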
Creating this experimental application offered an excellent opportunity to build our knowledge of what Prometheus is capable of and resulted in a high-scale experimental production application that can monitor 19 billion real-time data events for anomalies each day. By following this guide and our example, hopefully, more developers can successfully put Prometheus into practice.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/application-monitoring-prometheus
|
||||
|
||||
作者:[Paul Brebner][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/paul-brebner
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows)
|
||||
[2]: https://prometheus.io/
|
||||
[3]: https://prometheus.io/docs/concepts/data_model
|
||||
[4]: https://prometheus.io/docs/prometheus/latest/federation
|
||||
[5]: https://grafana.com/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/prometheus_architecture.png (Prometheus architecture)
|
||||
[7]: https://prometheus.io/docs/instrumenting/clientlibs/
|
||||
[8]: https://prometheus.io/docs/instrumenting/exporters/
|
||||
[9]: https://kafka.apache.org/
|
||||
[10]: http://cassandra.apache.org/
|
||||
[11]: https://github.com/prometheus/jmx_exporter
|
||||
[12]: https://prometheus.io/docs/instrumenting/exporters/#software-exposing-prometheus-metrics
|
||||
[13]: https://github.com/instaclustr/cassandra-exporter
|
||||
[14]: https://prometheus.io/docs/guides/node-exporter/
|
||||
[15]: https://github.com/prometheus/client_java
|
||||
[16]: https://prometheus.io/docs/prometheus/latest/getting_started/
|
||||
[17]: https://github.com/instaclustr/AnomaliaMachina
|
||||
[18]: https://opensource.com/sites/default/files/uploads/anomalia_machina_architecture.png (Anomalia Machina Architecture)
|
||||
[19]: https://prometheus.io/docs/instrumenting/pushing/
|
||||
[20]: https://prometheus.io/docs/prometheus/latest/querying/basics/
|
||||
[21]: http://localhost:9090/metrics
|
||||
[22]: https://prometheus.io/docs/prometheus/latest/querying/functions/#irate
|
||||
[23]: https://prometheus.io/docs/prometheus/latest/querying/functions/#rate
|
||||
[24]: https://opensource.com/sites/default/files/uploads/rate_graph.png (Rate graph)
|
||||
[25]: https://opensource.com/sites/default/files/uploads/gauge_graph.png (Gauge graph)
|
||||
[26]: https://opensource.com/sites/default/files/uploads/grafana_graph.png (Grafana graph)
|
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (DevSecOps pipelines and tools: What you need to know)
|
||||
[#]: via: (https://opensource.com/article/19/10/devsecops-pipeline-and-tools)
|
||||
[#]: author: (Sagar Nangare https://opensource.com/users/sagarnangare)
|
||||
|
||||
DevSecOps pipelines and tools: What you need to know
|
||||
======
|
||||
DevSecOps evolves DevOps to ensure security remains an essential part of
|
||||
the process.
|
||||
![An intersection of pipes.][1]
|
||||
|
||||
DevOps is well-understood in the IT world by now, but it's not flawless. Imagine you have implemented all of the DevOps engineering practices in modern application delivery for a project. You've reached the end of the development pipeline—but a penetration testing team (internal or external) has detected a security flaw and come up with a report. Now you have to re-initiate all of your processes and ask developers to fix the flaw.
|
||||
|
||||
This is not terribly tedious in a DevOps-based software development lifecycle (SDLC) system—but it does consume time and affects the delivery schedule. If security were integrated from the start of the SDLC, you might have tracked down the glitch and eliminated it on the go. But pushing security to the end of the development pipeline, as in the above scenario, leads to a longer development lifecycle.
|
||||
|
||||
This is the reason for introducing DevSecOps, which consolidates the overall software delivery cycle in an automated way.
|
||||
|
||||
In modern DevOps methodologies, where containers are widely used by organizations to host applications, we see greater use of [Kubernetes][2] and [Istio][3]. However, these tools have their own vulnerabilities. For example, the Cloud Native Computing Foundation (CNCF) recently completed a [Kubernetes security audit][4] that identified several issues. All tools used in the DevOps pipeline need to undergo security checks while running in the pipeline, and DevSecOps pushes admins to monitor the tools' repositories for upgrades and patches.
|
||||
|
||||
### What is DevSecOps?
|
||||
|
||||
Like DevOps, DevSecOps is a mindset or a culture that developers and IT operations teams follow while developing and deploying software applications. It integrates active and automated security audits and penetration testing into agile application development.
|
||||
|
||||
To utilize [DevSecOps][5], you need to:
|
||||
|
||||
* Introduce the concept of security right from the start of the SDLC to minimize vulnerabilities in software code.
|
||||
* Ensure everyone (including developers and IT operations teams) shares responsibility for following security practices in their tasks.
|
||||
* Integrate security controls, tools, and processes at the start of the DevOps workflow. These will enable automated security checks at each stage of software delivery.
|
||||
|
||||
|
||||
|
||||
DevOps has always been about including security—as well as quality assurance (QA), database administration, and everyone else—in the dev and release process. However, DevSecOps is an evolution of that process to ensure security is never forgotten as an essential part of the process.
|
||||
|
||||
### Understanding the DevSecOps pipeline
|
||||
|
||||
There are different stages in a typical DevOps pipeline; a typical SDLC process includes phases like Plan, Code, Build, Test, Release, and Deploy. In DevSecOps, specific security checks are applied in each phase.
|
||||
|
||||
* **Plan:** Execute security analysis and create a test plan to determine scenarios for where, how, and when testing will be done.
|
||||
* **Code:** Deploy linting tools and Git controls to keep passwords and API keys out of the code base (see the pre-commit sketch after this list).
|
||||
* **Build:** While building code for execution, incorporate static application security testing (SAST) tools to track down flaws in code before deploying to production. These tools are specific to programming languages.
|
||||
* **Test:** Use dynamic application security testing (DAST) tools to test your application while in runtime. These tools can detect errors associated with user authentication, authorization, SQL injection, and API-related endpoints.
|
||||
* **Release:** Just before releasing the application, employ security analysis tools to perform thorough penetration testing and vulnerability scanning.
|
||||
* **Deploy:** After completing the above tests in runtime, send a secure build to production for final deployment.
|
||||
|
||||
|
||||
|
||||
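
As an example of the Git controls mentioned in the Code phase, here is a hypothetical pre-commit hook; a minimal sketch, not a substitute for a dedicated secret scanner:

```
#!/bin/sh
# .git/hooks/pre-commit: reject commits whose staged changes look like credentials.
# The pattern is illustrative, not exhaustive.
if git diff --cached -U0 | grep -E -i '(api[_-]?key|secret|password)[[:space:]]*[:=]'; then
    echo "Possible secret detected in staged changes; commit aborted." >&2
    exit 1
fi
exit 0
```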
### DevSecOps tools
|
||||
|
||||
Tools are available for every phase of the SDLC. Some are commercial products, but most are open source. In my next article, I will talk more about the tools to use in different stages of the pipeline.
|
||||
|
||||
DevSecOps will play a more crucial role as we continue to see an increase in the complexity of enterprise security threats built on modern IT infrastructure. However, the DevSecOps pipeline will need to improve over time, rather than relying on implementing all security changes at once; incremental improvement reduces the chance of backtracking or failed application delivery.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/devsecops-pipeline-and-tools
|
||||
|
||||
作者:[Sagar Nangare][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/sagarnangare
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
|
||||
[2]: https://opensource.com/resources/what-is-kubernetes
|
||||
[3]: https://opensource.com/article/18/9/what-istio
|
||||
[4]: https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/
|
||||
[5]: https://resources.whitesourcesoftware.com/blog-whitesource/devsecops
|
@ -0,0 +1,254 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Viewing files and processes as trees on Linux)
|
||||
[#]: via: (https://www.networkworld.com/article/3444589/viewing-files-and-processes-as-trees-on-linux.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
Viewing files and processes as trees on Linux
|
||||
======
|
||||
A look at three Linux commands - ps, pstree and tree - for viewing files and processes in a tree-like format.
|
||||
[Melissa McMasters][1] [(CC BY 2.0)][2]
|
||||
|
||||
[Linux][3] provides several handy commands for viewing both files and processes in a branching, tree-like format that makes it easy to view how they are related. In this post, we'll look at the **ps**, **pstree** and **tree** commands along with some options they provide to help focus your view on what you want to see.
|
||||
|
||||
### ps
|
||||
|
||||
The **ps** command that we all use to list processes has some interesting options that many of us never take advantage of. While the commonly used **ps -ef** provides a complete listing of running processes, the **ps -ejH** command adds a nice effect. It indents related processes to make the relationship between these processes visually more clear – as in this excerpt:
|
||||
|
||||
```
|
||||
$ ps -ejH
|
||||
PID PGID SID TTY TIME CMD
|
||||
...
|
||||
1396 1396 1396 ? 00:00:00 sshd
|
||||
28281 28281 28281 ? 00:00:00 sshd
|
||||
28409 28281 28281 ? 00:00:00 sshd
|
||||
28410 28410 28410 pts/0 00:00:00 bash
|
||||
30968 30968 28410 pts/0 00:00:00 ps
|
||||
```
|
||||
|
||||
As you can see, the ps process is run within bash, and bash within an ssh session.
|
||||
|
||||
The **-exjf** option string provides a similar view, but with some additional details and symbols to highlight the hierarchical nature of the processes:
|
||||
|
||||
```
|
||||
$ ps -exjf
|
||||
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
|
||||
...
|
||||
1 1396 1396 1396 ? -1 Ss 0 0:00 /usr/sbin/sshd -D
|
||||
1396 28281 28281 28281 ? -1 Ss 0 0:00 \_ sshd: shs [priv]
|
||||
28281 28409 28281 28281 ? -1 S 1000 0:00 \_ sshd: shs@pts/0
|
||||
28409 28410 28410 28410 pts/0 31028 Ss 1000 0:00 \_ -bash
|
||||
28410 31028 31028 28410 pts/0 31028 R+ 1000 0:00 \_ ps axjf
|
||||
```
|
||||
|
||||
The options used in these commands represent:
|
||||
|
||||
```
|
||||
-e select all processes
|
||||
-j use the jobs format
|
||||
-f provide a full format listing
|
||||
-H show the process hierarchy (i.e., the "forest format")
|
||||
-x lift the "must be associated with a tty" restriction
|
||||
```
|
||||
|
||||
There's also a **--forest** option that provides a similar view.
|
||||
|
||||
```
|
||||
$ ps -ef --forest
|
||||
UID PID PPID C STIME TTY TIME CMD
|
||||
...
|
||||
root 1396 1 0 Oct08 ? 00:00:00 /usr/sbin/sshd -D
|
||||
root 28281 1396 0 12:55 ? 00:00:00 \_ sshd: shs [priv]
|
||||
shs 28409 28281 0 12:56 ? 00:00:00 \_ sshd: shs@pts/0
|
||||
shs 28410 28409 0 12:56 pts/0 00:00:00 \_ -bash
|
||||
shs 32351 28410 0 14:39 pts/0 00:00:00 \_ ps -ef --forest
|
||||
```
|
||||
|
||||
Note that these examples are only a sampling of how these commands can be used. You can select whichever options that give you the view of processes that works best for you.
|
||||
|
||||
### pstree
|
||||
|
||||
A similar view of processes is available using the **pstree** command. Even though **pstree** offers many options, the command provides a very useful display on its own. Notice that many parent-child process relationships are displayed on single lines rather than on subsequent lines.
|
||||
|
||||
```
|
||||
$ pstree
|
||||
...
|
||||
├─sshd───sshd───sshd───bash───pstree
|
||||
├─systemd─┬─(sd-pam)
|
||||
│ ├─at-spi-bus-laun─┬─dbus-daemon
|
||||
│ │ └─3*[{at-spi-bus-laun}]
|
||||
│ ├─at-spi2-registr───2*[{at-spi2-registr}]
|
||||
│ ├─dbus-daemon
|
||||
│ ├─ibus-portal───2*[{ibus-portal}]
|
||||
│ ├─pulseaudio───2*[{pulseaudio}]
|
||||
│ └─xdg-permission-───2*[{xdg-permission-}]
|
||||
```
|
||||
|
||||
With the **-n** option, **pstree** displays the processes in numerical (process ID) order:
|
||||
|
||||
```
|
||||
$ pstree -n
|
||||
systemd─┬─systemd-journal
|
||||
├─systemd-udevd
|
||||
├─systemd-timesyn───{systemd-timesyn}
|
||||
├─systemd-resolve
|
||||
├─systemd-logind
|
||||
├─dbus-daemon
|
||||
├─atopacctd
|
||||
├─irqbalance───{irqbalance}
|
||||
├─accounts-daemon───2*[{accounts-daemon}]
|
||||
├─acpid
|
||||
├─rsyslogd───3*[{rsyslogd}]
|
||||
├─freshclam
|
||||
├─udisksd───4*[{udisksd}]
|
||||
├─networkd-dispat
|
||||
├─ModemManager───2*[{ModemManager}]
|
||||
├─snapd───10*[{snapd}]
|
||||
├─avahi-daemon───avahi-daemon
|
||||
├─NetworkManager───2*[{NetworkManager}]
|
||||
├─wpa_supplicant
|
||||
├─cron
|
||||
├─atd
|
||||
├─polkitd───2*[{polkitd}]
|
||||
├─colord───2*[{colord}]
|
||||
├─unattended-upgr───{unattended-upgr}
|
||||
├─sshd───sshd───sshd───bash───pstree
|
||||
```
|
||||
|
||||
Some options to consider when using **pstree** include **-a** (include command line arguments) and **-g** (include process groups).
|
||||
|
||||
Here are some quick (truncated) examples.
|
||||
|
||||
Output from **pstree -a**
|
||||
|
||||
```
|
||||
└─wpa_supplicant -u -s -O /run/wpa_supplicant
|
||||
```
|
||||
|
||||
Output from **pstree -g**:
|
||||
|
||||
```
|
||||
├─sshd(1396)───sshd(28281)───sshd(28281)───bash(28410)───pstree(1115)
|
||||
```
|
||||
|
||||
### tree
|
||||
|
||||
While the **tree** command sounds like it would be very similar to **pstree**, it's a command for looking at files rather than processes. It provides a nice tree-like view of directories and files.
|
||||
|
||||
If you use the **tree** command to look at **/proc**, your display would begin similar to this one:
|
||||
|
||||
```
|
||||
$ tree /proc
|
||||
/proc
|
||||
├── 1
|
||||
│ ├── attr
|
||||
│ │ ├── apparmor
|
||||
│ │ │ ├── current
|
||||
│ │ │ ├── exec
|
||||
│ │ │ └── prev
|
||||
│ │ ├── current
|
||||
│ │ ├── display
|
||||
│ │ ├── exec
|
||||
│ │ ├── fscreate
|
||||
│ │ ├── keycreate
|
||||
│ │ ├── prev
|
||||
│ │ ├── smack
|
||||
│ │ │ └── current
|
||||
│ │ └── sockcreate
|
||||
│ ├── autogroup
|
||||
│ ├── auxv
|
||||
│ ├── cgroup
|
||||
│ ├── clear_refs
|
||||
│ ├── cmdline
|
||||
...
|
||||
```
|
||||
|
||||
You will see a lot more detail if you run a command like this as root (**sudo tree /proc**) since much of the contents of **/proc** is inaccessible to regular users.
|
||||
|
||||
The **tree -d** command will limit your display to directories.
|
||||
|
||||
```
|
||||
$ tree -d /proc
|
||||
/proc
|
||||
├── 1
|
||||
│ ├── attr
|
||||
│ │ ├── apparmor
|
||||
│ │ └── smack
|
||||
│ ├── fd [error opening dir]
|
||||
│ ├── fdinfo [error opening dir]
|
||||
│ ├── map_files [error opening dir]
|
||||
│ ├── net
|
||||
│ │ ├── dev_snmp6
|
||||
│ │ ├── netfilter
|
||||
│ │ └── stat
|
||||
│ ├── ns [error opening dir]
|
||||
│ └── task
|
||||
│ └── 1
|
||||
│ ├── attr
|
||||
│ │ ├── apparmor
|
||||
│ │ └── smack
|
||||
...
|
||||
```
|
||||
|
||||
With the **-f** option, **tree** will show full pathnames.
|
||||
|
||||
```
|
||||
$ tree -f /proc
|
||||
/proc
|
||||
├── /proc/1
|
||||
│ ├── /proc/1/attr
|
||||
│ │ ├── /proc/1/attr/apparmor
|
||||
│ │ │ ├── /proc/1/attr/apparmor/current
|
||||
│ │ │ ├── /proc/1/attr/apparmor/exec
|
||||
│ │ │ └── /proc/1/attr/apparmor/prev
|
||||
│ │ ├── /proc/1/attr/current
|
||||
│ │ ├── /proc/1/attr/display
|
||||
│ │ ├── /proc/1/attr/exec
|
||||
│ │ ├── /proc/1/attr/fscreate
|
||||
│ │ ├── /proc/1/attr/keycreate
|
||||
│ │ ├── /proc/1/attr/prev
|
||||
│ │ ├── /proc/1/attr/smack
|
||||
│ │ │ └── /proc/1/attr/smack/current
|
||||
│ │ └── /proc/1/attr/sockcreate
|
||||
...
|
||||
```
|
||||
|
||||
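
One more option worth knowing for a tree as deep as **/proc**: **-L** limits how many levels **tree** descends.

```
$ tree -L 1 -d /proc    # show only the top level of directories
```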
Hierarchical displays can often make the relationship between processes and files easier to understand. While the number of options available is rather broad, you'll probably find some views that help you see just what you're looking for.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3444589/viewing-files-and-processes-as-trees-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.flickr.com/photos/cricketsblog/46967168105/in/photolist-2eyk1Lr-KQsMHg-JbWG41-FWu8FU-6daUYv-cxH2Aq-DV2CNk-25eF8V1-GEEwLx-S9a29U-GpiYf2-Yi5dnF-YLPMV3-23ThoAZ-dTyphv-DVXTMY-ERmSjL-6z86DE-QVnnyv-7PLo9u-58CYnd-dYmbPX-63nVid-p7Ea54-238LQaD-Qb6CkZ-QoRhQX-suMNcq-22JeozK-BwMvBg-26AQHz1-PhQT4J-AGyhXA-2fhixB3-qngdKE-UiptQQ-ZzpiHa-pH4g9e-28CoU2s-81gNxg-qnoewg-2cmYaRk-d3FRuo-4fJrSL-23NqveR-LLEYMU-FZixFK-5aBDGU-PBQbWq-dJoaKi
|
||||
[2]: https://creativecommons.org/licenses/by/2.0/legalcode
|
||||
[3]: https://www.networkworld.com/article/3215226/what-is-linux-uses-featres-products-operating-systems.html
|
||||
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
|
||||
[5]: https://www.networkworld.com/slideshow/153439/linux-best-desktop-distros-for-newbies.html#tk.nww-infsb
|
||||
[6]: https://www.facebook.com/NetworkWorld/
|
||||
[7]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,131 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Unzip a Zip File in Linux [Beginner’s Tutorial])
|
||||
[#]: via: (https://itsfoss.com/unzip-linux/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
How to Unzip a Zip File in Linux [Beginner’s Tutorial]
|
||||
======
|
||||
|
||||
_**Brief: This quick tip shows you how to unzip a file in Ubuntu and other Linux distributions. Both terminal and GUI methods have been discussed.**_
|
||||
|
||||
[Zip][1] is one of the most common and most popular ways to create compressed archive files. It is also one of the older archive file formats, created back in 1989. Since it is so widely used, you'll regularly come across zip files.
|
||||
|
||||
In an earlier tutorial, I showed [how to zip a folder in Linux][2]. In this quick tutorial for beginners, I'll show you how to unzip files in Linux.
|
||||
|
||||
**Prerequisite: Verify if you have unzip installed**
|
||||
|
||||
In order to unzip a zip archive file, you must have the unzip package installed on your system. Most modern Linux distributions come with unzip support, but there is no harm in verifying it to avoid bad surprises later.
|
||||
|
||||
In [Ubuntu][3] and [Debian][4] based distributions, you can use the command below to install unzip. If it’s already installed, you’ll be notified about it.
|
||||
|
||||
```
|
||||
sudo apt install unzip
|
||||
```
|
||||
|
||||
Once you have made sure that your system has unzip support, it’s time to unzip a zip file in Linux.
|
||||
|
||||
You can use both command line and GUI for this purpose and I’ll show you both methods.
|
||||
|
||||
* [Unzip files in Linux terminal][5]
|
||||
* [Unzip files in Ubuntu via GUI][6]
|
||||
|
||||
|
||||
|
||||
### Unzip file in Linux command line
|
||||
|
||||
Using the unzip command in Linux is absolutely simple. In the directory where you have the zip file, use this command:
|
||||
|
||||
```
|
||||
unzip zipped_file.zip
|
||||
```
|
||||
|
||||
You can also provide the path to the zip file instead of going to the directory first. You'll see the extracted files in the output:
|
||||
|
||||
```
|
||||
unzip metallic-container.zip
|
||||
Archive: metallic-container.zip
|
||||
  inflating: 625993-PNZP34-678.jpg
|
||||
  inflating: License free.txt
|
||||
  inflating: License premium.txt
|
||||
```
|
||||
|
||||
There is a slight problem with the above command. It will extract all the contents of the zip file in the current directory. That’s not a pretty thing to do because you’ll have a handful of files leaving the current directory unorganized.
|
||||
|
||||
#### Unzip to directory
|
||||
|
||||
A good practice is to unzip to a directory in the Linux command line. This way, all the extracted files are stored in the directory you specify. If the directory doesn't exist, unzip will create it.
|
||||
|
||||
```
|
||||
unzip zipped_file.zip -d unzipped_directory
|
||||
```
|
||||
|
||||
Now all the contents of zipped_file.zip will be extracted to unzipped_directory.
|
||||
|
||||
Since we are discussing good practices, another tip you can use is to have a look at the content of the zip file without actually extracting it.
|
||||
|
||||
#### See the content of zip file without unzipping it
|
||||
|
||||
You can check the content of the zip file without even extracting it with the option -l.
|
||||
|
||||
```
|
||||
unzip -l zipped_file.zip
|
||||
```
|
||||
|
||||
Here’s a sample output:
|
||||
|
||||
```
|
||||
unzip -l metallic-container.zip
|
||||
Archive: metallic-container.zip
|
||||
Length Date Time Name
|
||||
--------- ---------- ----- ----
|
||||
6576010 2019-03-07 10:30 625993-PNZP34-678.jpg
|
||||
1462 2019-03-07 13:39 License free.txt
|
||||
1116 2019-03-07 13:39 License premium.txt
|
||||
--------- -------
|
||||
6578588 3 files
|
||||
```
|
||||
|
||||
There are many other uses of the unzip command in Linux, but I guess you now have enough knowledge to unzip files in Linux.
|
||||
|
||||
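
For example, two standard unzip flags that come in handy when you extract the same archive again:

```
unzip -o zipped_file.zip -d unzipped_directory    # overwrite existing files without prompting
unzip -n zipped_file.zip -d unzipped_directory    # never overwrite existing files
```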
### Unzip files in Linux using GUI
|
||||
|
||||
You don’t always have to go to the terminal if you are using desktop Linux. Let’s see how to unzip in Ubuntu Linux graphically. I am using [GNOME desktop][7] here with Ubuntu 18.04 but the process is pretty much the same in other desktop Linux distributions.
|
||||
|
||||
Open the file manager and go to the folder where your zip file is stored. Right click the file and you'll see the option "Extract Here". Select it.
|
||||
|
||||
![Unzip File in Ubuntu][8]
|
||||
|
||||
Unlike the unzip command, the 'Extract Here' option creates a folder with the same name as the zip file, and all the contents of the zip file are extracted into this newly created folder. I am glad that this is the default behavior instead of extracting everything into the current directory.
|
||||
|
||||
There is also an 'Extract to' option, with which you can specify the folder where you want to extract the files.
|
||||
|
||||
That’s it. Now you know how to unzip a file in Linux. Perhaps you might also be interested in learning about [using 7zip in Linux][9].
|
||||
|
||||
If you have questions or suggestions, do let me know in the comment section.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/unzip-linux/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Zip_(file_format)
|
||||
[2]: https://itsfoss.com/linux-zip-folder/
|
||||
[3]: https://ubuntu.com/
|
||||
[4]: https://www.debian.org/
|
||||
[5]: #unzip-file-in-linux-command-line
|
||||
[6]: #unzip-files-in-linux-using-gui
|
||||
[7]: https://gnome.org/
|
||||
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/unzip-files-ubuntu.jpg?ssl=1
|
||||
[9]: https://itsfoss.com/use-7zip-ubuntu-linux/
|
@ -1,232 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (heguangzhi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to manage Go projects with GVM)
|
||||
[#]: via: (https://opensource.com/article/19/10/introduction-gvm)
|
||||
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
|
||||
|
||||
如何用GVM管理Go项目
|
||||
======
|
||||
|
||||
管理Go语言环境,包括安装多个版本和使用Go版本管理器管理模块。
|
||||
![正在编程的女人][1]
|
||||
|
||||
Go语言版本管理器([GVM][2])是管理Go语言环境的开源工具。GVM “pkgsets” 支持安装多个版本的Go并管理每个项目的模块。最初由[乔什·布斯迪克][3]开发,GVM(像它的对手 Ruby RVM一样)允许你为每个项目或项目组创建一个开发环境,分离不同的Go版本和包依赖关系,以允许更大的灵活性和防止不同版本造成的问题。
|
||||
|
||||
有几种管理Go包的方式,包括内置于Go中的Go1.11模块。我发现GVM简单直观,即使我不用它来管理包,我还是会用它来管理Go不同的版本的。
|
||||
|
||||
### 安装 GVM
|
||||
|
||||
安装GVM很简单。[GVM 存储库][4]安装文档指示你下载安装程序脚本并将其传送到Bash来安装:
|
||||
|
||||
```
|
||||
bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
|
||||
```
|

尽管越来越多的人采用这种安装方法,但在安装之前先看看安装程序在做什么仍然是个好主意。以 GVM 为例,该安装程序脚本会:

1. 检查一些相关依赖
2. 克隆 GVM 存储库
3. 使用 shell 脚本:
   * 安装 Go 语言
   * 管理 `GOPATH` 环境变量
   * 向 `bashrc`、`zshrc` 或相应的配置文件中添加一行内容

如果你想确认它在做什么,可以克隆该存储库并查看其中的 shell 脚本,然后运行本地的 `./binscripts/gvm-installer` 脚本来完成设置,如下面的示例所示。
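
下面是一个可行的操作示意(仓库地址来自上文的链接;用 `less` 查看脚本只是举例,任何编辑器都可以):

```
git clone https://github.com/moovweb/gvm.git
cd gvm
less binscripts/gvm-installer    # 先检查脚本会做什么
./binscripts/gvm-installer       # 确认无误后再运行
```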

_注意:_ 因为 GVM 可以用来下载和编译新的 Go 版本,所以它需要一些预期的依赖,如 Make、Git 和 Curl。你可以在 [GVM 的自述文件][5]中找到针对各个发行版的完整依赖列表。
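
以 Debian/Ubuntu 为例,提前安装这些依赖大致如下(具体软件包名请以 GVM 自述文件为准):

```
sudo apt-get install curl git mercurial make binutils bison gcc build-essential
```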

### 使用 GVM 安装和管理 Go 版本

一旦安装了 GVM,你就可以用它来安装和管理不同版本的 Go。`gvm listall` 命令会显示可供下载和编译的 Go 版本:

```
[chris@marvin ]$ gvm listall

gvm gos (available)

go1
go1.0.1
go1.0.2
go1.0.3

<输出截断>
```

安装特定的 Go 版本就像运行 `gvm install <version>` 一样简单,其中 `<version>` 是 `gvm listall` 命令返回的版本之一。

假设你正在进行一个使用 Go 1.12.8 版本的项目,你可以使用 `gvm install go1.12.8` 安装这个版本:

```
[chris@marvin]$ gvm install go1.12.8
Installing go1.12.8...
* Compiling...
go1.12.8 successfully installed!
```

输入 `gvm list`,你会看到 Go 1.12.8 与系统的 Go 版本(即用操作系统的软件包管理器安装的版本)并存:

```
[chris@marvin]$ gvm list

gvm gos (installed)

go1.12.8
=> system
```

GVM 仍在使用系统版本的 Go,这由 `=>` 符号指示。你可以使用 `gvm use` 命令切换环境,以使用新安装的 go1.12.8:

```
[chris@marvin]$ gvm use go1.12.8
Now using version go1.12.8

[chris@marvin]$ go version
go version go1.12.8 linux/amd64
```
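
顺带一提,`gvm use` 默认只对当前 shell 会话生效;如果希望之后新开的会话也默认使用该版本,可以加上 `--default` 选项(示例如下,输出可能略有不同):

```
[chris@marvin]$ gvm use go1.12.8 --default
Now using version go1.12.8
```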

GVM 使管理已安装的 Go 版本变得极其简单,但它的作用还不止于此!

### 使用 GVM pkgset

开箱即用的 Go,其管理包和模块的方式既出色又令人沮丧。默认情况下,如果你用 `go get` 获取一个包,它会被下载到 `$GOPATH` 目录中的 `src` 和 `pkg` 目录下,然后就可以用 `import` 将其包含到你的 Go 程序中。这使得获取软件包变得很容易,特别是对于非特权用户,不需要 `sudo` 或 root 权限(很像 Python 中的 `pip install --user`)。然而,在不同的项目中管理同一个包的不同版本却非常困难。

有许多方法试图修复或缓解这个问题,包括实验性的 Go Modules(Go 1.11 版中增加了初步支持)和 [Go dep][6](Go Modules 的“官方实验”,且仍在持续迭代)。在我发现 GVM 之前,我会在每个 Go 项目自己的 Docker 容器中构建和测试它,以确保隔离。

GVM 通过使用 “pkgsets” 把项目的新目录附加到所安装 Go 版本的默认 `$GOPATH` 上(就像 `$PATH` 在 Unix/Linux 系统上的工作方式一样),很好地实现了项目之间包的管理和隔离。

可以通过一个例子来想象它是如何工作的。首先,安装一个新版本 go1.12.9:

```
[chris@marvin]$ echo $GOPATH
/home/chris/.gvm/pkgsets/go1.12.8/global

[chris@marvin]$ gvm install go1.12.9
Installing go1.12.9...
* Compiling...
go1.12.9 successfully installed

[chris@marvin]$ gvm use go1.12.9
Now using version go1.12.9
```

当 GVM 被告知使用新版本时,它会切换到一个新的 `$GOPATH`,并把该版本默认的 `global` pkgset 应用于其上:

```
[chris@marvin]$ echo $GOPATH
/home/chris/.gvm/pkgsets/go1.12.9/global

[chris@marvin]$ gvm pkgset list

gvm go package sets (go1.12.9)

=> global
```

尽管默认情况下没有安装额外的包,但是 `global` pkgset 中的包对于使用该特定版本 Go 的任何项目都是可用的。
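
例如,你可以把各个项目都会用到的通用工具装进 global pkgset(下面的包名仅作示意):

```
[chris@marvin]$ gvm pkgset use global
Now using version go1.12.9@global

[chris@marvin]$ go get golang.org/x/tools/cmd/goimports
```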

现在,假设你正在启动一个新项目,它需要一个特定的包。首先,使用 GVM 创建一个新的 pkgset,名为 `introToGvm`:

```
[chris@marvin]$ gvm pkgset create introToGvm

[chris@marvin]$ gvm pkgset use introToGvm
Now using version go1.12.9@introToGvm

[chris@marvin]$ gvm pkgset list

gvm go package sets (go1.12.9)

global
=> introToGvm
```

如上所述,这个 pkgset 的新目录被添加到了 `$GOPATH` 的前面:

```
[chris@marvin]$ echo $GOPATH
/home/chris/.gvm/pkgsets/go1.12.9/introToGvm:/home/chris/.gvm/pkgsets/go1.12.9/global
```

切换到上面被添加到 `$GOPATH` 前面的 `introToGvm` 目录并检查其目录结构,这里可以借助 `awk` 和 `bash` 来完成:

```
[chris@marvin]$ cd $( awk -F':' '{print $1}' <<< $GOPATH )
[chris@marvin]$ pwd
/home/chris/.gvm/pkgsets/go1.12.9/introToGvm

[chris@marvin]$ ls
overlay pkg src
```

请注意,这个新目录看起来很像普通的 `$GOPATH`。新的 Go 包可以照常使用 `go get` 命令下载和使用,并且会被添加到这个 pkgset 中。

例如,使用以下命令获取 `gorilla/mux` 包,然后检查 pkgset 的目录结构:

```
[chris@marvin]$ go get github.com/gorilla/mux
[chris@marvin introToGvm ]$ tree
.
├── overlay
│   ├── bin
│   └── lib
│       └── pkgconfig
├── pkg
│   └── linux_amd64
│       └── github.com
│           └── gorilla
│               └── mux.a
└── src
    └── github.com
        └── gorilla
            └── mux
                ├── AUTHORS
                ├── bench_test.go
                ├── context.go
                ├── context_test.go
                ├── doc.go
                ├── example_authentication_middleware_test.go
                ├── example_cors_method_middleware_test.go
                ├── example_route_test.go
                ├── go.mod
                ├── LICENSE
                ├── middleware.go
                ├── middleware_test.go
                ├── mux.go
                ├── mux_test.go
                ├── old_test.go
                ├── README.md
                ├── regexp.go
                ├── route.go
                └── test_helpers.go
```

如你所见,`gorilla/mux` 已按预期添加到该 pkgset 的 `$GOPATH` 目录中,现在可供使用此 pkgset 的项目使用了。
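
例如,下面这个最小的示例程序(假设保存为该 pkgset 的 `src` 目录下的 `introToGvm/main.go`,路径和文件名仅为示意)就可以直接导入并使用刚刚获取的包:

```
package main

import (
	"log"
	"net/http"

	// 该包会从当前 pkgset 的 $GOPATH 中解析出来
	"github.com/gorilla/mux"
)

func main() {
	// 创建一个 gorilla/mux 路由器并注册一个处理函数
	r := mux.NewRouter()
	r.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte("Hello from the introToGvm pkgset!\n"))
	})
	// 在 8080 端口启动 HTTP 服务
	log.Fatal(http.ListenAndServe(":8080", r))
}
```

之后在该目录下运行 `go run main.go`,访问 `localhost:8080` 就能看到输出。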

### GVM 让 Go 的管理变得轻而易举

GVM 是一种直观且非侵入式的管理 Go 版本和包的方式。它可以单独使用,也可以与其他 Go 模块管理技术结合使用,同时利用 GVM 的 Go 版本管理能力。按 Go 版本和包依赖来分离项目使得开发更加容易,也降低了管理版本冲突的复杂性,而 GVM 让这一切变得轻而易举。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/introduction-gvm

作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: https://github.com/moovweb/gvm
[3]: https://github.com/jbussdieker
[4]: https://github.com/moovweb/gvm#installing
[5]: https://github.com/moovweb/gvm/blob/master/README.md
[6]: https://golang.github.io/dep/