Mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-03-03 01:10:13 +08:00)

Merge remote-tracking branch 'LCTT/master'
commit 0e3c09aab2
@ -1,16 +1,18 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12350-1.html)
[#]: subject: (How to dump the GOSSAFUNC graph for a method)
[#]: via: (https://dave.cheney.net/2020/06/19/how-to-dump-the-gossafunc-graph-for-a-method)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

How to dump the GOSSAFUNC graph for a method in Go
======

The SSA backend of the Go compiler contains a facility that can produce HTML debugging output of the compilation phases. This post covers how to print the SSA output for functions *and* methods.



Let's start with a sample program that contains a function, a value method, and a pointer method:
@ -47,7 +49,7 @@ func main() {
}
```
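
The hunk above shows only the tail of that sample program; the full listing is elided in this diff. As an illustration only, a minimal sketch with the same shape might look like the following, assuming a `Numbers` slice type with a pointer method `Add` and a value method `Average` as referenced later in the post:

```
package main

import "fmt"

// Numbers is a placeholder type for this sketch; the definition used in the
// original sample program is not shown in the diff above.
type Numbers []float64

// Add is a pointer method.
func (n *Numbers) Add(x float64) {
	*n = append(*n, x)
}

// Average is a value method.
func (n Numbers) Average() float64 {
	sum := 0.0
	for _, x := range n {
		sum += x
	}
	return sum / float64(len(n))
}

func main() {
	var n Numbers
	n.Add(1)
	n.Add(2)
	n.Add(3)
	fmt.Println(n.Average())
}
```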

The SSA debugging output is controlled by the `GOSSAFUNC` environment variable. This variable holds the name of the function to dump. Note that this is *not* the fully qualified name of the function; for `func main` above, the name of the function is `main`, *not* `main.main`.

```
% env GOSSAFUNC=main go build
@ -57,11 +59,11 @@ t
dumped SSA to ./ssa.html
```

In this example, `GOSSAFUNC=main` matches both `main.main` and a function named `runtime.main`.[^1] This is a little unfortunate, but in practice it is probably not a big deal: if you are performance tuning your code, it will not be in a giant spaghetti blob inside `func main`.

It is more likely that your code is in a *method*, and you have probably arrived at this post looking for how to dump the SSA output of a method.

To print the SSA debug output for the pointer method `func (n *Numbers) Add`, the equivalent function name is `(*Numbers).Add`:[^2]

```
% env "GOSSAFUNC=(*Numbers).Add" go build
@ -69,7 +71,7 @@ t
dumped SSA to ./ssa.html
```

To print the SSA debug output for the value method `func (n Numbers) Average`, the equivalent function name is `(*Numbers).Average`, *even though it is a value method*:

```
% env "GOSSAFUNC=(*Numbers).Average" go build
@ -77,9 +79,8 @@ t
dumped SSA to ./ssa.html
```

[^1]: If you did not build Go from source, the path to the `runtime` package may be read-only and you may receive an error message. Please do not use `sudo` to fix this.
[^2]: Note the shell quoting.

--------------------------------------------------------------------------------
@ -89,11 +90,10 @@ via: https://dave.cheney.net/2020/06/19/how-to-dump-the-gossafunc-graph-for-a-me

Author: [Dave Cheney][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
@ -0,0 +1,79 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Wi-Fi 6E: When it’s coming and what it’s good for)
|
||||
[#]: via: (https://www.networkworld.com/article/3563832/wi-fi-6e-when-its-coming-and-what-its-good-for.html)
|
||||
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
|
||||
|
||||
Wi-Fi 6E: When it’s coming and what it’s good for
|
||||
======
|
||||
New wireless spectrum recently dedicated to Wi-Fi allows for more channels and higher-density deployments, but gear to support it won’t be widely deployed until 2021 or later, according to an Extreme Networks exec.
|
||||
Thinkstock
|
||||
|
||||
This spring [the FCC opened up a new swath of unlicensed wireless spectrum][1] in the 6GHz band that’s intended for use with Wi-Fi and can provide lower latency and faster data rates. The new spectrum also has a shorter range and supports more channels than bands that were already dedicated to Wi-Fi, making it suitable for deployment in high-density areas like stadiums.
|
||||
|
||||
To further understand what Wi-Fi 6E is and how it differs from Wi-Fi 6, I recently talked with Perry Correll, director of product management for networking solutions vendor Extreme Networks.
|
||||
|
||||
**Learn more about 5G and WiFi 6**
|
||||
|
||||
* [What is 5G? How is it better than 4G?][2]
|
||||
* [How to determine if WiFi 6 is right for you][3]
|
||||
* [What is MU-MIMO? Why do you need it in your wireless routers?][4]
|
||||
* [When to use 5G, when to use WiFi 6][5]
|
||||
* [How enterprises can prep for 5G networks][6]
|
||||
|
||||
|
||||
|
||||
**Kerravala:** **Wi-Fi 6 seems to be getting a lot of hype but not Wi-Fi 6E. Why?**
|
||||
|
||||
**Correll:** There’s so much confusion around all the 666 numbers, it’ll scare you to death. You’ve got Wi-Fi 6, Wi-Fi 6E – and Wi-Fi 6 still has additional enhancements coming after that, with multi-user multiple input, multiple output (multi-user MIMO) functionalities. Then there’s the 6GHz spectrum, but that’s not where Wi-Fi 6 gets its name from: It’s the sixth generation of Wi-Fi. On top of all that, we are just getting a handle on 5G and they’re already talking about 6G – seriously, look it up – it’s going to get even more confusing.
|
||||
|
||||
**Kerravala:** **Why do we need Wi-Fi 6E versus regular Wi-Fi 6?**
|
||||
|
||||
**Correll:** The last time we got a boost in UNII-2 and UNII-2 Extended was 15 years ago and smartphones hadn’t even taken off yet. Now being able to get 1.2GHz is enormous. With Wi-Fi 6E, we’re not doubling the amount of Wi-Fi space, we're actually quadrupling the amount of usable space. That’s three, four, or five times more spectrum, depending on where you are in the world. Plus you don't have to worry about DFS [dynamic frequency selection], especially indoors.
|
||||
|
||||
Wi-Fi 6E is not going to be faster than Wi-Fi 6 and it’s not adding enhanced technology features. The neat thing is operating the 6GHz will require Wi-Fi 6 or above clients. So, we’re not going to have any slow clients and we’re not going to have a lot of noise. We’re going to gain performance in a cleaner environment with faster devices.
|
||||
|
||||
**Kerravala:** **Can you also run wider channels?**
|
||||
|
||||
**Correll:** Exactly, that's the cool thing about it. If you’re in a typical enterprise environment, 20 and 40MHz is pretty much all you need. In high-density environments like stadiums, trying to do 80 or 160MHz just became tough. Wider channels are really going to help things like [virtual reality], which can take advantage of those channels that are eating up the rest of the spectrum. That’s probably the biggest use case.
|
||||
|
||||
Three or four years down the road, if you want to do digital signage or screen edge at stadiums, then you can use 80- or 160MHz channels without getting impacted by anything else. There’s already talk of Wi-Fi 7 and it’s going to have 320MHz-wide channels.
|
||||
|
||||
**Kerravala:** **Will this be primarily an augmentation to most Wi-Fi strategies?**
|
||||
|
||||
**Correll:** It's definitely going to be at the edges in the short term. The first products are probably going to launch at the end of this year, and they’re going to be consumer-grade. For the enterprise, 6GHz-capable products will start showing up next year. Not before 2022 will you actually start seeing any density – so, not any time soon. For smartphone companies, Wi-Fi is not a big deal and they’d rather focus on other features.
|
||||
|
||||
Still, it’s a huge opportunity. The nicest thing about the 6GHz versus CBRS [Citizens Broadband Radio Service] or 5G is [that many] would rather stick with Wi-Fi than having to move to a different architecture. These are the users that are going to drive the manufacturers of the widgets to IoT devices or robots or whatever requires the 6GHz. It's a clean spectrum and might be cheaper than the regular Wi-Fi 6. There are also some power-saving benefits there, too.
|
||||
|
||||
**Kerravala:** **There’s talk of 5G replacing Wi-Fi 6. But what’s the practicality of that?**
|
||||
|
||||
**Correll:** Realistically, you can’t put a SIM in every device. But one of the big issues that come up is data ownership because the carrier is going to own your data, not you. If you want to use your data for any kind of business analytics, will the carrier release the data back to you at a certain price? That’s a frightening thought.
|
||||
|
||||
There are just too many reasons why Wi-Fi is not going away. When Wi-Fi 6 and 5G-capable devices come out, what will happen to all the other laptops, tablets, and IoT devices that only have Wi-Fi? There will either be Wi-Fi-only or Wi-Fi and 5G devices, but 5G is not going to replace Wi-Fi altogether. If you look at the 5G radio network backbone, Wi-Fi is a component. It's one big happy family. The technologies are designed to coexist.
|
||||
|
||||
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3563832/wi-fi-6e-when-its-coming-and-what-its-good-for.html
|
||||
|
||||
作者:[Zeus Kerravala][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3540288/how-wi-fi-6e-boosts-wireless-spectrum-five-fold.html
|
||||
[2]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
|
||||
[3]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html
|
||||
[4]: https://www.networkworld.com/article/3250268/what-is-mu-mimo-and-why-you-need-it-in-your-wireless-routers.html
|
||||
[5]: https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html
|
||||
[6]: https://www.networkworld.com/article/3306720/mobile-wireless/how-enterprises-can-prep-for-5g.html
|
||||
[7]: https://www.facebook.com/NetworkWorld/
|
||||
[8]: https://www.linkedin.com/company/network-world
|
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,281 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (nophDog)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Make the switch from Mac to Linux easier with Homebrew)
|
||||
[#]: via: (https://opensource.com/article/20/6/homebrew-linux)
|
||||
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
|
||||
|
||||
Make the switch from Mac to Linux easier with Homebrew
|
||||
======
|
||||
Whether you want to ease your migration from Mac to Linux or just don't
|
||||
like the standard Linux package managers, give Homebrew a try.
|
||||
![Digital images of a computer desktop][1]
|
||||
|
||||
The [Homebrew][2] project began its life as an unofficial Linux-style package manager for the Mac. Its users quickly fell in love with its friendly interface and helpful prompts, and—in what may seem like a strange twist of fate—it got ported to Linux.
|
||||
|
||||
At first, there were two separate projects for macOS and Linux (Homebrew and Linuxbrew), but now Homebrew's core manages both operating systems. Because I've been on a journey to [migrate from Mac to Linux][3], I have been looking at how my favorite open source applications for macOS perform on Linux, and I've been happy to find that Homebrew's support for Linux truly shines.
|
||||
|
||||
### Why Homebrew on Linux?
|
||||
|
||||
A reasonable first response to Homebrew from long-time Linux users is: "Why not just use…" where the next word is a package manager for their preferred version of Linux. Debian-based systems already have `apt`, Fedora-based systems have `dnf` and `yum`, and projects like Flatpak and AppImage work to span the gap by running smoothly on both. I have spent a decent amount of time using all these technologies, and I have to say each one is powerful in its own right.
|
||||
|
||||
So why do I [stick with Homebrew][4]? First off, it's incredibly familiar to me. I'm already learning a lot as I transition to more open source alternatives for my past proprietary tools, and keeping something familiar—like Homebrew—helps me focus on learning one thing at a time instead of being overwhelmed by all the differences between operating systems.
|
||||
|
||||
Also, I have yet to see a package manager that is as kind to the user as Homebrew. Commands are well organized, as the default Help output shows:
|
||||
|
||||
|
||||
```
|
||||
$ brew -h
|
||||
Example usage:
|
||||
brew search [TEXT|/REGEX/]
|
||||
brew info [FORMULA...]
|
||||
brew install FORMULA...
|
||||
brew update
|
||||
brew upgrade [FORMULA...]
|
||||
brew uninstall FORMULA...
|
||||
brew list [FORMULA...]
|
||||
|
||||
Troubleshooting:
|
||||
brew config
|
||||
brew doctor
|
||||
brew install --verbose --debug FORMULA
|
||||
|
||||
Contributing:
|
||||
brew create [URL [--no-fetch]]
|
||||
brew edit [FORMULA...]
|
||||
|
||||
Further help:
|
||||
brew commands
|
||||
brew help [COMMAND]
|
||||
man brew
|
||||
<https://docs.brew.sh>
|
||||
```
|
||||
|
||||
This short output might be mistaken as a limitation, but a quick look inside any of the subcommands reveals a wealth of functionality. The list above is just 23 lines long, but the `install` subcommand has a whopping 79 lines of information available for the advanced user:
|
||||
|
||||
|
||||
```
|
||||
$ brew --help | wc -l
|
||||
23
|
||||
$ brew install --help | wc -l
|
||||
79
|
||||
```
|
||||
|
||||
It has options for ignoring or installing dependencies, choosing to build from source and with what compiler, and using exact upstream Git commits versus the official "bottled" version of the application. Suffice it to say, Homebrew is for experts and novices alike.
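
For instance, a few of those options look like this (a sketch only; flag names can change between Homebrew releases, so check `brew install --help` on your system):

```
$ brew install --ignore-dependencies tldr    # skip installing declared dependencies
$ brew install --build-from-source tldr      # compile locally instead of pouring a bottle
$ brew install --HEAD tldr                   # build from the latest upstream Git commit
```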
|
||||
|
||||
### Get started with Homebrew on Linux
|
||||
|
||||
If you want to give Homebrew a try, there is a great one-liner script to install it on Mac or Linux:
|
||||
|
||||
|
||||
```
|
||||
$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
|
||||
```
|
||||
|
||||
This command executes the Homebrew installer script immediately. If you are more cautious, you can `curl` the file, then run it manually after a review:
|
||||
|
||||
|
||||
```
|
||||
$ curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh --output homebrew_installer.sh
|
||||
$ more homebrew_installer.sh # review the script until you feel comfortable
|
||||
$ bash homebrew_installer.sh
|
||||
```
|
||||
|
||||
The Linux instructions include configurations for dotfiles, particularly `~/.profile` on Debian systems and `~/.bash_profile` on Fedora:
|
||||
|
||||
|
||||
```
|
||||
$ test -d /home/linuxbrew/.linuxbrew && eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
|
||||
$ test -r ~/.bash_profile && echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.bash_profile
|
||||
$ echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile
|
||||
```
|
||||
|
||||
To confirm the installation, the Homebrew team provides an empty `hello` formula for testing:
|
||||
|
||||
|
||||
```
|
||||
$ brew install hello
|
||||
==> Downloading https://linuxbrew.bintray.com/bottles/hello-2.10.x86_64_linux.bottle.tar.gz
|
||||
######################################################################## 100.0%
|
||||
==> Pouring hello-2.10.x86_64_linux.bottle.tar.gz
|
||||
🍺 /home/linuxbrew/.linuxbrew/Cellar/hello/2.10: 52 files, 595.6KB
|
||||
```
|
||||
|
||||
It looks like my installation is working without any issues, so I'll explore a little more.
|
||||
|
||||
### Brew for command-line utilities
|
||||
|
||||
Homebrew boasts of being an application that "installs the stuff you need that [Linux] didn't" by default.
|
||||
|
||||
You use the `brew` command to install any of the command-line utilities packaged up in Homebrew. These package definitions are called "formulae," and they are compiled and shared through "bottles." There is a host of other beer-oriented terminology in the Homebrew universe, but the package manager's main takeaway is to make software easily accessible.
|
||||
|
||||
What kind of software? Think about the things that come in handy for nerds like me (and, since you're reading this, probably you, too). For example, the handy `tree` command that shows directory structures or `pyenv`, which I use to [manage multiple versions of Python on a Mac][5].
|
||||
|
||||
You can see all formulae available using the `search` command, and adding the `wc` command shows how many are available:
|
||||
|
||||
|
||||
```
|
||||
# -l counts the number of lines
|
||||
$ brew search | wc -l
|
||||
5087
|
||||
```
|
||||
|
||||
There are over 5,000 formulae to date, which is an incredible amount of software. The caveat is that not every formula will run on Linux. There is a section in the output of `brew search --help` that shows flags to filter software by the operating system it runs on. It launches each operating system's repository list to a browser. I'm running Fedora, so I'll give it a try with:
|
||||
|
||||
|
||||
```
|
||||
$ brew search --fedora tree
|
||||
```
|
||||
|
||||
The browser loads `https://apps.fedoraproject.org/packages/s/tree`, which shows the options available for Fedora. There are other ways to browse, as well. Formulae are codified and centralized into the core repositories that are split out by operating system (Mac in [Homebrew Core][6] and [Linux Core][7] for Linux bits). They are also available through the Homebrew API and [listed on the website][8].
|
||||
|
||||
Even with all these options, I still find most of my new tools through recommendations from other users. Here are some of my favorites, if you're looking for inspiration:
|
||||
|
||||
* `pyenv`, `rbenv`, and `nodenv` to manage Python, Ruby, and Node.js versions (respectively)
|
||||
* `imagemagick` for scriptable image edits
|
||||
* `pandoc` for scriptable document conversions (I often switch from .docx to .md or .html)
|
||||
* `hub` for a [better Git experience][9] for GitHub users
|
||||
* `tldr` for examples of how to use a command-line utility
|
||||
|
||||
|
||||
|
||||
To explore Homebrew, take a look at [tldr pages][10], which is a user-friendly alternative to scrolling through an application's man pages. Confirm it's available by running `search`:
|
||||
|
||||
|
||||
```
|
||||
$ brew search tldr
|
||||
==> Formulae
|
||||
tldr ✔
|
||||
```
|
||||
|
||||
Success! The checkmark lets you know it is available. Now you can install it:
|
||||
|
||||
|
||||
```
|
||||
$ brew install tldr
|
||||
==> Downloading https://linuxbrew.bintray.com/bottles/tldr-1.3.0_2.x86_64_linux.bottle.1.tar.gz
|
||||
######################################################################## 100.0%
|
||||
==> Pouring tldr-1.3.0_2.x86_64_linux.bottle.1.tar.gz
|
||||
🍺 /home/linuxbrew/.linuxbrew/Cellar/tldr/1.3.0_2: 6 files, 63.2KB
|
||||
```
|
||||
|
||||
Homebrew serves up prebuilt binaries, so you don't have to build from source code on your local machine. That saves a lot of time and CPU fan noise. Another thing I appreciate about Homebrew is that you can benefit from this feature without understanding exactly what it means. If you prefer to build it yourself, use the `-s` or `--build-from-source` flag with `brew install` to compile the formula from source (even if a bottle exists).
|
||||
|
||||
Similarly, the complexity under the hood can be interesting. Running `info` on `tldr` shows how dependency management happens, where the source code of the formula sits on disk, and even the public analytics are available:
|
||||
|
||||
|
||||
```
|
||||
$ brew info tldr
|
||||
tldr: stable 1.3.0 (bottled), HEAD
|
||||
Simplified and community-driven man pages
|
||||
<https://tldr.sh/>
|
||||
Conflicts with:
|
||||
tealdeer (because both install `tldr` binaries)
|
||||
/home/linuxbrew/.linuxbrew/Cellar/tldr/1.3.0_2 (6 files, 63.2KB) *
|
||||
Poured from bottle on 2020-06-08 at 15:56:15
|
||||
From: <https://github.com/Homebrew/linuxbrew-core/blob/master/Formula/tldr.rb>
|
||||
==> Dependencies
|
||||
Build: pkg-config ✔
|
||||
Required: libzip ✔, curl ✔
|
||||
==> Options
|
||||
--HEAD
|
||||
Install HEAD version
|
||||
==> Analytics
|
||||
install: 197 (30 days), 647 (90 days), 1,546 (365 days)
|
||||
install-on-request: 197 (30 days), 646 (90 days), 1,546 (365 days)
|
||||
build-error: 0 (30 days)
|
||||
```
|
||||
|
||||
### One limitation from Mac to Linux
|
||||
|
||||
On macOS, the Homebrew `cask` subcommand offers users a way to install and manage entire applications using the same great command-line utility. Unfortunately, `cask` does not yet work on any Linux distributions. I found this out while trying to install an open source tool:
|
||||
|
||||
|
||||
```
|
||||
$ brew cask install tusk
|
||||
Error: Installing casks is supported only on macOS
|
||||
```
|
||||
|
||||
I asked about it [on the forum][11] and got some quick feedback from other users. In short, the options are to:
|
||||
|
||||
* Fork the project, build the feature, and show others that it's worthwhile
|
||||
* Write a formula for the application and build from source
|
||||
* Create a third-party repository for the application
|
||||
|
||||
|
||||
|
||||
The last one is the most interesting to me. Homebrew manages third-party repositories by [creating and maintaining "taps"][12] (another beer-influenced term). Taps are worth exploring as you get more familiar with the system and want to add to the ecosystem.
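
As a rough sketch of how that works (the tap and formula names below are placeholders, not a real repository):

```
$ brew tap someuser/sometap          # by convention, clones github.com/someuser/homebrew-sometap
$ brew install someuser/sometap/someformula
```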
|
||||
|
||||
### Backing up Homebrew installs
|
||||
|
||||
One of my favorite Homebrew features is how you can back up your installation just like any other [dotfile in version control][13]. For this process, Homebrew offers a `bundle` subcommand that holds a `dump` subcommand that generates a Brewfile. This file is a reusable list of all your currently installed tools. To generate a Brewfile from your installation, go into whichever folder you want to use and run:
|
||||
|
||||
|
||||
```
|
||||
$ cd ~/Development/dotfiles # This is my dotfile folder
|
||||
$ brew bundle dump
|
||||
$ ls Brewfile
|
||||
Brewfile
|
||||
```
|
||||
|
||||
When I change machines and want to set up the same applications on it, I go to the folder with the Brewfile and reinstall them with:
|
||||
|
||||
|
||||
```
|
||||
$ ls Brewfile
|
||||
Brewfile
|
||||
$ brew bundle
|
||||
```
|
||||
|
||||
It will install all the listed formulae on my new machine.
|
||||
|
||||
#### Brewfile management across Mac and Linux
|
||||
|
||||
The Brewfile is a great way to backup your existing installation, but what if something on Mac doesn't run on Linux or vice versa? What I have found is that Homebrew will gracefully ignore the lines that don't work on a given operating system, whether Mac or Linux. As it comes across incompatible requests (like asking brew to install casks on Linux), it skips them and continues on its way:
|
||||
|
||||
|
||||
```
|
||||
$ brew bundle --file=Brewfile.example
|
||||
|
||||
Skipping cask licecap (on Linux)
|
||||
Skipping cask macdown (on Linux)
|
||||
Installing fish
|
||||
Homebrew Bundle complete! 1 Brewfile dependency now installed.
|
||||
```
|
||||
|
||||
To keep my configuration as simple as possible, I use the same Brewfile across both operating systems and haven't run into an issue since it installs the OS-specific version each time I run it.
|
||||
|
||||
### Homebrew for package management
|
||||
|
||||
Homebrew has been my go-to manager for command-line utilities, and its familiarity makes my Linux experience that much more enjoyable. Homebrew keeps me organized and up to date, and I continue to appreciate its balance between ease of use and depth of functionality. I prefer to keep package management details to the minimal amount of information a user needs to know, and most people will benefit from that. If you're already comfortable with Linux package managers, Homebrew may come off as simple, but looking a little deeper reveals its advanced options that go far beyond what's in this article.
|
||||
|
||||
There are a lot of package management options for Linux users. If you are coming from the world of macOS, Homebrew will feel like home.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/6/homebrew-linux
|
||||
|
||||
作者:[Matthew Broberg][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mbbroberg
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop)
|
||||
[2]: https://brew.sh/
|
||||
[3]: https://opensource.com/article/19/10/why-switch-mac-linux
|
||||
[4]: https://opensource.com/article/20/6/homebrew-mac
|
||||
[5]: https://opensource.com/article/20/4/pyenv
|
||||
[6]: https://github.com/Homebrew/homebrew-core
|
||||
[7]: https://github.com/Homebrew/linuxbrew-core
|
||||
[8]: https://formulae.brew.sh/formula/
|
||||
[9]: https://opensource.com/article/20/3/github-hub
|
||||
[10]: https://github.com/tldr-pages/tldr
|
||||
[11]: https://discourse.brew.sh/t/add-linux-support-to-existing-cask/5766
|
||||
[12]: https://docs.brew.sh/How-to-Create-and-Maintain-a-Tap
|
||||
[13]: https://opensource.com/article/19/3/move-your-dotfiles-version-control
|
@ -0,0 +1,450 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Why you should use Node.js for data science)
|
||||
[#]: via: (https://opensource.com/article/20/6/data-science-nodejs)
|
||||
[#]: author: (Cristiano L. Fontana https://opensource.com/users/cristianofontana)
|
||||
|
||||
Why you should use Node.js for data science
|
||||
======
|
||||
Node.js and other JavaScript libraries are excellent choices for data
|
||||
science. Here's why.
|
||||
![Computer screen with files or windows open][1]
|
||||
|
||||
[JavaScript][2] (also known as JS) is the [lingua franca][3] of the web, as it is supported by all the major web browsers—the other languages that run in browsers are [transpiled][4] (or translated) to JavaScript. Sometimes JS [can be confusing][5], but I find it pleasant to use because I try to stick to the [good parts][6]. JavaScript was created to run in a browser, but it can also be used in other contexts, such as an [embedded language][7] or for [server-side applications][8].
|
||||
|
||||
In this tutorial, I will explain how to write a program that will run in Node.js, which is a runtime environment that can execute JavaScript applications. What I like the most about Node.js is its [event-driven architecture][9] for [asynchronous programming][10]. With this approach, functions (aka callbacks) can be attached to certain events; when the attached event occurs, the callback executes. This way, the developer does not have to write a main loop because the runtime takes care of that.
|
||||
|
||||
JavaScript also has new [async functions][11] that use a different syntax, but I think they hide the event-driven architecture too well to use them in a how-to article. So, in this tutorial, I will use the traditional callbacks approach, even though it is not necessary for this case.
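
For comparison only (this syntax is not used in the rest of the tutorial, and `data.csv` is just a placeholder file name), the same kind of file read written with an async function might look like this, assuming a Node.js version that provides the `fs` promises API:

```
const fs = require('fs').promises;

async function readData() {
    // await suspends this function until the promise resolves,
    // without blocking the rest of the program.
    const contents = await fs.readFile('data.csv', 'utf8');
    console.log(contents.length);
}

readData();
```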
|
||||
|
||||
### Understanding the program task
|
||||
|
||||
The program task in this tutorial is to:
|
||||
|
||||
* Read some data from a [CSV file][12] that contains the [Anscombe's quartet][13] dataset
|
||||
* Interpolate the data with a straight line (i.e., _f(x) = m·x + q_)
|
||||
* Plot the result to an image file
|
||||
|
||||
|
||||
|
||||
For more details about this task, you can read the previous articles in this series, which do the same task in [Python and GNU Octave][14] and [C and C++][15]. The full source code for all the examples is available in my [polyglot_fit repository][16] on GitLab.
|
||||
|
||||
### Installing
|
||||
|
||||
Before you can run this example, you must install Node.js and its package manager [npm][17]. To install them on [Fedora][18], run:
|
||||
|
||||
|
||||
```
|
||||
$ sudo dnf install nodejs npm
|
||||
```
|
||||
|
||||
On Ubuntu:
|
||||
|
||||
|
||||
```
|
||||
$ sudo apt install nodejs npm
|
||||
```
|
||||
|
||||
Next, use `npm` to install the required packages. Packages are installed in a local [`node_modules` subdirectory][19], so Node.js can search for packages in that folder. The required packages are:
|
||||
|
||||
* [CSV Parse][20] for parsing the CSV file
|
||||
* [Simple Statistics][21] for calculating the data correlation factor
|
||||
* [Regression-js][22] for determining the fitting line
|
||||
* [D3-Node][23] for server-side plotting
|
||||
|
||||
|
||||
|
||||
Run npm to install the packages:
|
||||
|
||||
|
||||
```
|
||||
$ npm install csv-parser simple-statistics regression d3-node
|
||||
```
|
||||
|
||||
### Commenting code
|
||||
|
||||
Just like in C, in JavaScript, you can insert [comments][24] by putting `//` before your comment, and the interpreter will discard the rest of the line. Another option: JavaScript will discard anything between `/*` and `*/`:
|
||||
|
||||
|
||||
```
|
||||
// This is a comment ignored by the interpreter.
|
||||
/* Also this is ignored */
|
||||
```
|
||||
|
||||
### Loading modules
|
||||
|
||||
You can load modules with the [`require()` function][25]. The function returns an object that contains a module's functions:
|
||||
|
||||
|
||||
```
|
||||
const EventEmitter = require('events');
|
||||
const fs = require('fs');
|
||||
const csv = require('csv-parser');
|
||||
const regression = require('regression');
|
||||
const ss = require('simple-statistics');
|
||||
const D3Node = require('d3-node');
|
||||
```
|
||||
|
||||
Some of these modules are part of the Node.js standard library, so you do not need to install them with npm.
|
||||
|
||||
### Defining variables
|
||||
|
||||
Variables do not have to be declared before they are used, but if they are used without a declaration, they will be defined as global variables. Generally, global variables are considered bad practice, as they could lead to [bugs][26] if they're used carelessly. To declare a variable, you can use the [var][27], [let][28], and [const][29] statements. Variables can contain any kind of data (even functions!). You can create some objects by applying the [`new` operator][30] to a constructor function:
|
||||
|
||||
|
||||
```
|
||||
const inputFileName = "anscombe.csv";
|
||||
const delimiter = "\t";
|
||||
const skipHeader = 3;
|
||||
const columnX = String(0);
|
||||
const columnY = String(1);
|
||||
|
||||
const d3n = new D3Node();
|
||||
const d3 = d3n.d3;
|
||||
|
||||
var data = [];
|
||||
```
|
||||
|
||||
Data read from the CSV file is stored in the `data` array. Arrays are dynamic, so you do not have to decide their size beforehand.
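
For instance, an array declared empty simply grows as elements are pushed onto it:

```
var points = [];

points.push({'x': 1, 'y': 2});
points.push({'x': 3, 'y': 4});

console.log(points.length);  // 2
```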
|
||||
|
||||
### Defining functions
|
||||
|
||||
There are several ways to define functions in JavaScript. For example, the [function declaration][31] allows you to directly define a function:
|
||||
|
||||
|
||||
```
|
||||
function triplify(x) {
|
||||
return 3 * x;
|
||||
}
|
||||
|
||||
// The function call is:
|
||||
triplify(3);
|
||||
```
|
||||
|
||||
You can also declare a function with an [expression][32] and store it in a variable:
|
||||
|
||||
|
||||
```
|
||||
var triplify = function (x) {
|
||||
return 3 * x;
|
||||
}
|
||||
|
||||
// The function call is still:
|
||||
triplify(3);
|
||||
```
|
||||
|
||||
Finally, you can use the [arrow function expression][33], a syntactically short version of a function expression, but it has [some limitations][33]. It is generally used for concise functions that do simple calculations on its arguments:
|
||||
|
||||
|
||||
```
|
||||
var triplify = (x) => 3 * x;
|
||||
|
||||
// The function call is still:
|
||||
triplify(3);
|
||||
```
|
||||
|
||||
### Printing output
|
||||
|
||||
In order to print on the terminal, you can use the built-in [`console` object][34] in the Node.js standard library. The [`log()` method][35] prints on the terminal (adding a newline at the end of the string):
|
||||
|
||||
|
||||
```
|
||||
`console.log("#### Anscombe's first set with JavaScript in Node.js ####");`
|
||||
```
|
||||
|
||||
The `console` object is a more powerful facility than just printing output; for instance, it can also print [warnings][36] and [errors][37]. If you want to print the value of a variable, you can convert it to a string and use `console.log()`:
|
||||
|
||||
|
||||
```
|
||||
`console.log("Slope: " + slope.toString());`
|
||||
```
|
||||
|
||||
### Reading data
|
||||
|
||||
Input/output in Node.js uses a [very interesting approach][38]; you can choose either a synchronous or an asynchronous approach. The former uses blocking function calls, and the latter uses non-blocking function calls. In a blocking function, the program stops there and waits until the function finishes its task, whereas non-blocking functions do not stop the execution but continue their task somehow and somewhere else.
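
As a small illustration of the difference, using the standard `fs` module (the file name here is an arbitrary placeholder):

```
const fs = require('fs');

// Blocking: execution waits here until the whole file has been read.
const wholeFile = fs.readFileSync('data.csv', 'utf8');
console.log('synchronous read finished');

// Non-blocking: the callback runs later, when the read completes.
fs.readFile('data.csv', 'utf8', (err, contents) => {
    if (err) throw err;
    console.log('asynchronous read finished');
});

console.log('this line runs before the asynchronous read finishes');
```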
|
||||
|
||||
You have a couple of options here: you could periodically check whether the function ended, or the function could notify you when it ends. This tutorial uses the second approach: it employs [an `EventEmitter`][39] that generates an [event][40] associated with a callback function. The callback executes when the event is triggered.
|
||||
|
||||
First, generate the `EventEmitter`:
|
||||
|
||||
|
||||
```
|
||||
const myEmitter = new EventEmitter();
|
||||
```
|
||||
|
||||
Then associate the file-reading's end with an event called `myEmitter`. Although you do not need to follow this path for this simple example—you could use a simple blocking call—it is a very powerful approach that can be very useful in other situations. Before doing that, add another piece to this section for using the CSV Parse library to do the data reading. This library provides [several approaches][41] you can choose from, but this example uses the [stream API][42] with a [pipe][43]. The library needs some configuration, which is defined in an object:
|
||||
|
||||
|
||||
```
|
||||
const csvOptions = {'separator': delimiter,
|
||||
'skipLines': skipHeader,
|
||||
'headers': false};
|
||||
```
|
||||
|
||||
Since you've defined the options, you can read the file:
|
||||
|
||||
|
||||
```
|
||||
fs.createReadStream(inputFileName)
|
||||
.pipe(csv(csvOptions))
|
||||
.on('data', (datum) => data.push({'x': Number(datum[columnX]), 'y': Number(datum[columnY])}))
|
||||
.on('end', () => myEmitter.emit('reading-end'));
|
||||
```
|
||||
|
||||
I'll walk through each line of this short, dense code snippet:
|
||||
|
||||
* `fs.createReadStream(inputFileName)` opens a [stream of data][44] that is read from the file. A stream gradually reads a file in chunks.
|
||||
* `.pipe(csv(csvOptions))` forwards the stream to the CSV Parse library that handles the difficult task of reading the file and parsing it.
|
||||
* `.on('data', (datum) => data.push({'x': Number(datum[columnX]), 'y': Number(datum[columnY])}))` is rather dense, so I will break it out:
|
||||
* `(datum) => ...` defines a function to which each row of the CSV file will be passed.
|
||||
* `data.push(...` adds the newly read data to the `data` array.
|
||||
* `{'x': ..., 'y': ...}` constructs a new data point with `x` and `y` members.
|
||||
* `Number(datum[columnX])` converts the element in `columnX` to a number.
|
||||
* `.on('end', () => myEmitter.emit('reading-end'));` uses the emitter you created to notify you when the file-reading finishes.
|
||||
|
||||
|
||||
|
||||
When the emitter emits the `reading-end` event, you know that the file was completely parsed and its contents are in the `data` array.
|
||||
|
||||
### Fitting data
|
||||
|
||||
Now that you filled the `data` array, you can analyze the data in it. The function that carries out the analysis is associated with the `reading-end` event of the emitter you defined, so you can be sure that the data is ready. The emitter associates a callback function to that event and executes the function when the event is triggered.
|
||||
|
||||
|
||||
```
|
||||
myEmitter.on('reading-end', function () {
|
||||
const fit_data = data.map((datum) => [datum.x, datum.y]);
|
||||
|
||||
const result = regression.linear(fit_data);
|
||||
const slope = result.equation[0];
|
||||
const intercept = result.equation[1];
|
||||
|
||||
console.log("Slope: " + slope.toString());
|
||||
console.log("Intercept: " + intercept.toString());
|
||||
|
||||
const x = data.map((datum) => datum.x);
|
||||
const y = data.map((datum) => datum.y);
|
||||
|
||||
const r_value = ss.sampleCorrelation(x, y);
|
||||
|
||||
console.log("Correlation coefficient: " + r_value.toString());
|
||||
|
||||
myEmitter.emit('analysis-end', data, slope, intercept);
|
||||
});
|
||||
```
|
||||
|
||||
The statistics libraries expect data in different formats, so employ the [`map()` method][45] of the `data` array. `map()` creates a new array from an existing one and applies a function to each array element. The arrow functions are very practical in this context due to their conciseness. When the analysis finishes, you can trigger a new event to continue in a new callback. You could also directly plot the data in this function, but I opted to continue in a new one because the analysis could be a very lengthy process. By emitting the `analysis-end` event, you also pass the relevant data from this function to the next callback.
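
As a quick illustration of what `map()` does when paired with an arrow function:

```
const squares = [1, 2, 3, 4].map((x) => x * x);

console.log(squares);  // [ 1, 4, 9, 16 ]
```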
|
||||
|
||||
### Plotting
|
||||
|
||||
[D3.js][46] is a [very powerful][47] library for plotting data. The learning curve is rather steep, probably because it is a [misunderstood library][48], but it is the best open source option I've found for server-side plotting. My favorite D3.js feature is probably that it works on SVG images. D3.js was designed to run in a web browser, so it assumes it has a web page to handle. Working server-side is a very different environment, and you need a [virtual web page][49] to work on. Luckily, [D3-Node][50] makes this process very simple.
|
||||
|
||||
Begin by defining some useful measurements that will be required later:
|
||||
|
||||
|
||||
```
|
||||
const figDPI = 100;
|
||||
const figWidth = 7 * figDPI;
|
||||
const figHeight = figWidth / 16 * 9;
|
||||
const margins = {top: 20, right: 20, bottom: 50, left: 50};
|
||||
|
||||
let plotWidth = figWidth - margins.left - margins.right;
|
||||
let plotHeight = figHeight - margins.top - margins.bottom;
|
||||
|
||||
let minX = d3.min(data, (datum) => datum.x);
|
||||
let maxX = d3.max(data, (datum) => datum.x);
|
||||
let minY = d3.min(data, (datum) => datum.y);
|
||||
let maxY = d3.max(data, (datum) => datum.y);
|
||||
```
|
||||
|
||||
You have to convert between the data coordinates and the plot (image) coordinates. You can use scales for this conversion: the scale's domain is the data space where you pick the data points, and the scale's range is the image space where you put the points:
|
||||
|
||||
|
||||
```
|
||||
let scaleX = d3.scaleLinear()
|
||||
.range([0, plotWidth])
|
||||
.domain([minX - 1, maxX + 1]);
|
||||
let scaleY = d3.scaleLinear()
|
||||
.range([plotHeight, 0])
|
||||
.domain([minY - 1, maxY + 1]);
|
||||
|
||||
const axisX = d3.axisBottom(scaleX).ticks(10);
|
||||
const axisY = d3.axisLeft(scaleY).ticks(10);
|
||||
```
|
||||
|
||||
Note that the `y` scale has an inverted range because in the SVG standard, the `y` scale's origin is at the top. After defining the scales, start drawing the plot on a newly created SVG image:
|
||||
|
||||
|
||||
```
|
||||
let svg = d3n.createSVG(figWidth, figHeight)
|
||||
|
||||
svg.attr('background-color', 'white');
|
||||
|
||||
svg.append("rect")
|
||||
.attr("width", figWidth)
|
||||
.attr("height", figHeight)
|
||||
.attr("fill", 'white');
|
||||
```
|
||||
|
||||
First, draw the interpolating line by appending a `line` element to the SVG image:
|
||||
|
||||
|
||||
```
|
||||
svg.append("g")
|
||||
.attr('transform', `translate(${margins.left}, ${margins.top})`)
|
||||
.append("line")
|
||||
.attr("x1", scaleX(minX - 1))
|
||||
.attr("y1", scaleY((minX - 1) * slope + intercept))
|
||||
.attr("x2", scaleX(maxX + 1))
|
||||
.attr("y2", scaleY((maxX + 1) * slope + intercept))
|
||||
.attr("stroke", "#1f77b4");
|
||||
```
|
||||
|
||||
Then add a `circle` for each data point to the right location. D3.js's key point is that it associates data with SVG elements. Thus, you use the `data()` method to associate the data points to the circles you create. The [`enter()` method][51] tells the library what to do with the newly associated data:
|
||||
|
||||
|
||||
```
|
||||
svg.append("g")
|
||||
.attr('transform', `translate(${margins.left}, ${margins.top})`)
|
||||
.selectAll("circle")
|
||||
.data(data)
|
||||
.enter()
|
||||
.append("circle")
|
||||
.classed("circle", true)
|
||||
.attr("cx", (d) => scaleX(d.x))
|
||||
.attr("cy", (d) => scaleY(d.y))
|
||||
.attr("r", 3)
|
||||
.attr("fill", "#ff7f0e");
|
||||
```
|
||||
|
||||
The last elements you draw are the axes and their labels; this is so you can be sure they overlap the plot lines and circles:
|
||||
|
||||
|
||||
```
|
||||
svg.append("g")
|
||||
.attr('transform', `translate(${margins.left}, ${margins.top + plotHeight})`)
|
||||
.call(axisX);
|
||||
|
||||
svg.append("g")
|
||||
.append("text")
|
||||
.attr("transform", `translate(${margins.left + 0.5 * plotWidth}, ${margins.top + plotHeight + 0.7 * margins.bottom})`)
|
||||
.style("text-anchor", "middle")
|
||||
.text("X");
|
||||
|
||||
svg.append("g")
|
||||
.attr('transform', `translate(${margins.left}, ${margins.top})`)
|
||||
.call(axisY);
|
||||
|
||||
svg.append("g")
|
||||
.attr("transform", `translate(${0.5 * margins.left}, ${margins.top + 0.5 * plotHeight})`)
|
||||
.append("text")
|
||||
.attr("transform", "rotate(-90)")
|
||||
.style("text-anchor", "middle")
|
||||
.text("Y");
|
||||
```
|
||||
|
||||
Finally, save the plot to an SVG file. I opted for a synchronous write of the file, so I could show this [second approach][52]:
|
||||
|
||||
|
||||
```
|
||||
fs.writeFileSync("fit_node.svg", d3n.svgString());
|
||||
```
|
||||
|
||||
### Results
|
||||
|
||||
Running the script is as simple as:
|
||||
|
||||
|
||||
```
|
||||
$ node fitting_node.js
|
||||
```
|
||||
|
||||
And the command-line output is:
|
||||
|
||||
|
||||
```
|
||||
#### Anscombe's first set with JavaScript in Node.js ####
|
||||
Slope: 0.5
|
||||
Intercept: 3
|
||||
Correlation coefficient: 0.8164205163448399
|
||||
```
|
||||
|
||||
Here is the image I generated with D3.js and Node.js:
|
||||
|
||||
![Plot and fit of the dataset obtained with Node.js][53]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][54])
|
||||
|
||||
### Conclusion
|
||||
|
||||
JavaScript is a core technology of today, and it is well suited for data exploration with the right libraries. With this introduction to event-driven architecture and an example of how server-side plotting looks in practice, we can start to consider Node.js and D3.js as alternatives to the common programming languages associated with data science.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/6/data-science-nodejs
|
||||
|
||||
作者:[Cristiano L. Fontana][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/cristianofontana
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
|
||||
[2]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/About_JavaScript
|
||||
[3]: https://en.wikipedia.org/wiki/Lingua_franca
|
||||
[4]: https://en.wikipedia.org/wiki/Source-to-source_compiler
|
||||
[5]: https://www.destroyallsoftware.com/talks/wat
|
||||
[6]: https://www.youtube.com/watch?v=_DKkVvOt6dk
|
||||
[7]: https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey
|
||||
[8]: https://nodejs.org/en/
|
||||
[9]: https://en.wikipedia.org/wiki/Event-driven_architecture
|
||||
[10]: https://en.wikipedia.org/wiki/Asynchrony_%28computer_programming%29
|
||||
[11]: https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Asynchronous/Async_await
|
||||
[12]: https://en.wikipedia.org/wiki/Comma-separated_values
|
||||
[13]: https://en.wikipedia.org/wiki/Anscombe%27s_quartet
|
||||
[14]: https://opensource.com/article/20/2/python-gnu-octave-data-science
|
||||
[15]: https://opensource.com/article/20/2/c-data-science
|
||||
[16]: https://gitlab.com/cristiano.fontana/polyglot_fit
|
||||
[17]: https://www.npmjs.com/
|
||||
[18]: https://getfedora.org/
|
||||
[19]: https://docs.npmjs.com/configuring-npm/folders.html
|
||||
[20]: https://csv.js.org/parse/
|
||||
[21]: https://simplestatistics.org/
|
||||
[22]: http://tom-alexander.github.io/regression-js/
|
||||
[23]: https://bradoyler.com/projects/d3-node/
|
||||
[24]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Lexical_grammar#Comments
|
||||
[25]: https://nodejs.org/en/knowledge/getting-started/what-is-require/
|
||||
[26]: https://gist.github.com/hallettj/64478
|
||||
[27]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/var
|
||||
[28]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/let
|
||||
[29]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/const
|
||||
[30]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/new
|
||||
[31]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/function
|
||||
[32]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/function
|
||||
[33]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions
|
||||
[34]: https://nodejs.org/api/console.html
|
||||
[35]: https://nodejs.org/api/console.html#console_console_log_data_args
|
||||
[36]: https://nodejs.org/api/console.html#console_console_warn_data_args
|
||||
[37]: https://nodejs.org/api/console.html#console_console_error_data_args
|
||||
[38]: https://nodejs.org/en/docs/guides/blocking-vs-non-blocking/
|
||||
[39]: https://nodejs.org/api/events.html#events_class_eventemitter
|
||||
[40]: https://nodejs.org/api/events.html#events_events
|
||||
[41]: https://csv.js.org/parse/api/
|
||||
[42]: https://csv.js.org/parse/api/stream/
|
||||
[43]: https://csv.js.org/parse/recipies/stream_pipe/
|
||||
[44]: https://nodejs.org/api/stream.html#stream_stream
|
||||
[45]: https://developer.mozilla.org/it/docs/Web/JavaScript/Reference/Global_Objects/Array/map
|
||||
[46]: https://d3js.org/
|
||||
[47]: https://observablehq.com/@d3/gallery
|
||||
[48]: https://medium.com/dailyjs/the-trouble-with-d3-4a84f7de011f
|
||||
[49]: https://github.com/jsdom/jsdom
|
||||
[50]: https://github.com/d3-node/d3-node
|
||||
[51]: https://www.d3indepth.com/enterexit/
|
||||
[52]: https://nodejs.org/api/fs.html#fs_fs_writefilesync_file_data_options
|
||||
[53]: https://opensource.com/sites/default/files/uploads/fit_node.jpg (Plot and fit of the dataset obtained with Node.js)
|
||||
[54]: https://creativecommons.org/licenses/by-sa/4.0/
|
211
sources/tech/20200625 How to stress test your Linux system.md
Normal file
@ -0,0 +1,211 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to stress test your Linux system)
|
||||
[#]: via: (https://www.networkworld.com/article/3563334/how-to-stress-test-your-linux-system.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
How to stress test your Linux system
|
||||
======
|
||||
Stressing your Linux servers can be a good idea if you'd like to see how well they function when they're loaded down. In this post, we'll look at some tools that can help you add stress and gauge the results.
|
||||
DigitalSoul / Getty Images / Linux
|
||||
|
||||
Why would you ever want to stress your Linux system? Because sometimes you might want to know how a system will behave when it’s under a lot of pressure due to a large number of running processes, heavy network traffic, excessive memory use and so on. This kind of testing can help to ensure that a system is ready to "go public".
|
||||
|
||||
If you need to predict how long applications might take to respond and what, if any, processes might fail or run slowly under a heavy load, doing the stress testing up front can be a very good idea.
|
||||
|
||||
[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]
|
||||
|
||||
Fortunately for those who need to be able to predict how a Linux system will react under stress, there are some helpful techniques you can employ and tools that you can use to make the process easier. In this post, we examine a few options.
|
||||
|
||||
### Do-it-yourself loops
|
||||
|
||||
This first technique involves running some loops on the command line and watching how they affect the system. This technique burdens the CPUs by greatly increasing the load. The results can easily be seen using the **uptime** or similar commands.
|
||||
|
||||
In the command below, we kick off four endless loops. You can increase the number of loops by adding digits or using a **bash** expression like **{1..6}** in place of "1 2 3 4".
|
||||
|
||||
```
|
||||
for i in 1 2 3 4; do while : ; do : ; done & done
|
||||
```
|
||||
|
||||
Typed on the command line, this command will start four endless loops in the background.
|
||||
|
||||
```
|
||||
$ for i in 1 2 3 4; do while : ; do : ; done & done
|
||||
[1] 205012
|
||||
[2] 205013
|
||||
[3] 205014
|
||||
[4] 205015
|
||||
```
|
||||
|
||||
In this case, jobs 1-4 were kicked off. Both the job numbers and process IDs are displayed.
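
The brace-expansion form mentioned above works the same way; this variant kicks off six loops instead of four:

```
$ for i in {1..6}; do while : ; do : ; done & done
```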
|
||||
|
||||
To observe the effect on load averages, use a command like the one shown below. In this case, the **uptime** command is run every 30 seconds:
|
||||
|
||||
```
|
||||
$ while true; do uptime; sleep 30; done
|
||||
```
|
||||
|
||||
If you intend to run tests like this periodically, you can put the loop command into a script:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
|
||||
while true
|
||||
do
|
||||
uptime
|
||||
sleep 30
|
||||
done
|
||||
```
|
||||
|
||||
In the output, you can see how the load averages increase and then start going down again once the loops have been ended.
|
||||
|
||||
```
|
||||
11:25:34 up 5 days, 17:27, 2 users, load average: 0.15, 0.14, 0.08
|
||||
11:26:04 up 5 days, 17:27, 2 users, load average: 0.09, 0.12, 0.08
|
||||
11:26:34 up 5 days, 17:28, 2 users, load average: 1.42, 0.43, 0.18
|
||||
11:27:04 up 5 days, 17:28, 2 users, load average: 2.50, 0.79, 0.31
|
||||
11:27:34 up 5 days, 17:29, 2 users, load average: 3.09, 1.10, 0.43
|
||||
11:28:04 up 5 days, 17:29, 2 users, load average: 3.45, 1.38, 0.54
|
||||
11:28:34 up 5 days, 17:30, 2 users, load average: 3.67, 1.63, 0.66
|
||||
11:29:04 up 5 days, 17:30, 2 users, load average: 3.80, 1.86, 0.76
|
||||
11:29:34 up 5 days, 17:31, 2 users, load average: 3.88, 2.06, 0.87
|
||||
11:30:04 up 5 days, 17:31, 2 users, load average: 3.93, 2.25, 0.97
|
||||
11:30:34 up 5 days, 17:32, 2 users, load average: 3.64, 2.35, 1.04 <== loops
|
||||
11:31:04 up 5 days, 17:32, 2 users, load average: 2.20, 2.13, 1.01 stopped
|
||||
11:31:34 up 5 days, 17:33, 2 users, load average: 1.40, 1.94, 0.98
|
||||
```
|
||||
|
||||
Because the loads shown represent averages over 1, 5 and 15 minutes, the values will take a while to go back to what is likely normal for the system.
|
||||
|
||||
To stop the loops, issue a **kill** command like this one below – assuming the job numbers are 1-4 as was shown earlier in this post. If you’re unsure, use the jobs command to verify the job IDs.
|
||||
|
||||
```
|
||||
$ kill %1 %2 %3 %4
|
||||
```
|
||||
|
||||
### Specialized tools for adding stress
|
||||
|
||||
Another way to create system stress involves using a tool that was specifically built to stress the system for you. One of these is called “stress” and can stress the system in a number of ways. The **stress** tool is a workload generator that provides CPU, memory and disk I/O stress tests.
|
||||
|
||||
With the **--cpu** option, the **stress** command uses a square-root function to force the CPUs to work hard. The higher the number of CPUs specified, the faster the loads will ramp up.
|
||||
|
||||
A second **watch-it** script (**watch-it-2**) can be used to gauge the effect on system memory usage. Note that it uses the **free** command to see the effect of the stressing.
|
||||
|
||||
```
|
||||
$ cat watch-it-2
|
||||
#!/bin/bash
|
||||
|
||||
while true
|
||||
do
|
||||
free
|
||||
sleep 30
|
||||
done
|
||||
```
|
||||
|
||||
Kicking off and observing the stress:
|
||||
|
||||
```
|
||||
$ stress --cpu 2
|
||||
|
||||
$ ./watch-it
|
||||
13:09:14 up 5 days, 19:10, 2 users, load average: 0.00, 0.00, 0.00
|
||||
13:09:44 up 5 days, 19:11, 2 users, load average: 0.68, 0.16, 0.05
|
||||
13:10:14 up 5 days, 19:11, 2 users, load average: 1.20, 0.34, 0.12
|
||||
13:10:44 up 5 days, 19:12, 2 users, load average: 1.52, 0.50, 0.18
|
||||
13:11:14 up 5 days, 19:12, 2 users, load average: 1.71, 0.64, 0.24
|
||||
13:11:44 up 5 days, 19:13, 2 users, load average: 1.83, 0.77, 0.30
|
||||
```
|
||||
|
||||
The more CPUs specified on the command line, the faster the load will ramp up.
|
||||
|
||||
```
|
||||
$ stress --cpu 4
|
||||
$ ./watch-it
|
||||
13:47:49 up 5 days, 19:49, 2 users, load average: 0.00, 0.00, 0.00
|
||||
13:48:19 up 5 days, 19:49, 2 users, load average: 1.58, 0.38, 0.13
|
||||
13:48:49 up 5 days, 19:50, 2 users, load average: 2.61, 0.75, 0.26
|
||||
13:49:19 up 5 days, 19:50, 2 users, load average: 3.16, 1.06, 0.38
|
||||
13:49:49 up 5 days, 19:51, 2 users, load average: 3.49, 1.34, 0.50
|
||||
13:50:19 up 5 days, 19:51, 2 users, load average: 3.69, 1.60, 0.61
|
||||
```
|
||||
|
||||
The **stress** command can also stress the system by adding I/O and memory load with its **--io** (input/output) and **--vm** (memory) options.
|
||||
|
||||
In this next example, this command for adding memory stress is run, and then the **watch-it-2** script is started:
|
||||
|
||||
```
|
||||
$ stress --vm 2
|
||||
|
||||
$ watch-it-2
|
||||
total used free shared buff/cache available
|
||||
Mem: 6087064 662160 2519164 8868 2905740 5117548
|
||||
Swap: 2097148 0 2097148
|
||||
total used free shared buff/cache available
|
||||
Mem: 6087064 803464 2377832 8864 2905768 4976248
|
||||
Swap: 2097148 0 2097148
|
||||
total used free shared buff/cache available
|
||||
Mem: 6087064 968512 2212772 8864 2905780 4811200
|
||||
Swap: 2097148 0 2097148
|
||||
```
|
||||
|
||||
Another option for **stress** is to use the **--io** option to add input/output activity to the system. In this case, you would use a command like this:
|
||||
|
||||
```
|
||||
$ stress --io 4
|
||||
```
|
||||
|
||||
You could then observe the stressed IO using **iotop**. Note that **iotop** requires root privilege.
|
||||
|
||||
###### before
|
||||
|
||||
```
|
||||
$ sudo iotop -o
|
||||
Total DISK READ: 0.00 B/s | Total DISK WRITE: 19.36 K/s
|
||||
Current DISK READ: 0.00 B/s | Current DISK WRITE: 27.10 K/s
|
||||
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
|
||||
269308 be/4 root 0.00 B/s 0.00 B/s 0.00 % 1.24 % [kworker~fficient]
|
||||
283 be/3 root 0.00 B/s 19.36 K/s 0.00 % 0.26 % [jbd2/sda1-8]
|
||||
```
|
||||
|
||||
###### after
|
||||
|
||||
```
|
||||
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
|
||||
Current DISK READ: 0.00 B/s | Current DISK WRITE: 0.00 B/s
|
||||
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
|
||||
270983 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 51.45 % stress --io 4
|
||||
270984 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 51.36 % stress --io 4
|
||||
270985 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 50.95 % stress --io 4
|
||||
270982 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 50.80 % stress --io 4
|
||||
269308 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.09 % [kworker~fficient]
|
||||
```
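
**iotop** normally runs as an interactive, full-screen display. If you would rather capture the effect of the stress in a file for later review, its batch mode can print a fixed number of samples and exit; a sketch (the output filename is just an example):

```
$ sudo iotop -b -o -n 3 > io-under-stress.txt   # -b batch mode, -o active I/O only, -n 3 samples
```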

**Stress** is just one of a number of tools for adding stress to a system. Another, newer tool, **stress-ng**, will be covered in a future post.

### Wrap-Up

Various tools for stress-testing a system will help you anticipate how systems will respond in real-world situations in which they are subjected to increased traffic and computing demands.

While what we've shown in this post are ways to create and measure various types of stress, the ultimate benefit is how that stress helps in determining how well your system or application stands up to it.

Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3563334/how-to-stress-test-your-linux-system.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

@ -0,0 +1,156 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Make Bash history more useful with these tips)
[#]: via: (https://opensource.com/article/20/6/bash-history-control)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Make Bash history more useful with these tips
======
Tell Bash what you want it to remember—or even rewrite history by deleting entries you don't want or need.
![A person programming][1]

A Linux terminal running [Bash][2] has a built-in history that you can use to track what you've been doing lately. To view a history of your Bash session, use the built-in command `history`:

```
$ echo "foo"
foo
$ echo "bar"
bar
$ history
  1  echo "foo"
  2  echo "bar"
  3  history
```

The `history` command isn't an executable file on your filesystem, like most commands, but a function of Bash. You can verify this by using the `type` command:

```
$ type history
history is a shell builtin
```

### History control

The upper limit of lines in your shell history is defined by the `HISTSIZE` variable. You can set this variable in your `.bashrc` file. The following sets your history to 3,000 lines, after which the oldest line is removed to make room for the newest command, placed at the bottom of the list:

```
export HISTSIZE=3000
```
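
`HISTSIZE` governs the list kept in memory for the current session; Bash also has a related `HISTFILESIZE` variable that caps how many lines survive in the history file (`~/.bash_history` by default) between sessions. A minimal sketch for a `.bashrc`, with the numbers chosen arbitrarily:

```
export HISTSIZE=3000        # lines kept in memory for the current session
export HISTFILESIZE=10000   # lines kept in ~/.bash_history across sessions
```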

There are other history-related variables, too. The `HISTCONTROL` variable controls what history is stored. You can force Bash to exclude commands that start with a space by placing this in your `.bashrc` file:

```
export HISTCONTROL=$HISTCONTROL:ignorespace
```

Now, if you type a command starting with a space, that command won't get recorded in history:

```
$ echo "hello"
$  mysql -u bogus -h badpassword123 mydatabase
$ echo "world"
$ history
  1  echo "hello"
  2  echo "world"
  3  history
```

You can avoid duplicate entries, too:

```
export HISTCONTROL=$HISTCONTROL:ignoredups
```

Now, if you type the same command several times in a row, only one entry appears in history:

```
$ ls
$ ls
$ ls
$ history
  1  ls
  2  history
```

If you want both of these behaviors, use `ignoreboth`:

```
export HISTCONTROL=$HISTCONTROL:ignoreboth
```
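
Beyond `HISTCONTROL`, Bash also reads a `HISTIGNORE` variable: a colon-separated list of patterns that should never be written to history. This is handy for noisy commands you run constantly; a sketch, with the pattern list purely illustrative:

```
export HISTIGNORE="ls:ls -l:history:clear"   # never record these exact commands
```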

### Remove a command from history

Sometimes you make a mistake and type something sensitive into your shell, or maybe you just want to clean up your history so that it more accurately represents the steps you took to get something working correctly. If you want to delete a command from your history in Bash, use the `-d` option with the line number of the item you want to remove:

```
$ echo "foo"
foo
$ echo "bar"
bar
$ history | tail
  535  echo "foo"
  536  echo "bar"
  537  history | tail
$ history -d 536
$ history | tail
  535  echo "foo"
  536  history | tail
  537  history -d 536
  538  history | tail
```
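
If you're running a reasonably recent Bash (5.0 or later, if memory serves), `history -d` also accepts a negative offset that counts back from the end of the list, which saves you from looking up the absolute line number first:

```
$ history -d -2   # delete the second-to-last entry in the history list
```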

To stop adding entries to history as you go, you can place a space before the command, as long as you have `ignorespace` in your `HISTCONTROL` environment variable:

```
$ history | tail
  535  echo "foo"
  536  echo "bar"
$ history -d 536
$ history | tail
  535  echo "foo"
```

You can clear your entire session history with the `-c` option:

```
$ history -c
$ history
$
```
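
Note that `-c` clears only the in-memory list for the current session; the history file on disk is left alone until Bash next writes to it. If you also want to empty the file right away, you can follow the clear with a write, as in this sketch:

```
$ history -c    # clear the in-memory history list
$ history -w    # write the (now empty) list out to ~/.bash_history
```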

### History lessons

Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong.

Use your history sessions as needed, and exercise your power over history wisely. Happy history hacking!

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/6/bash-history-control

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb (A person programming)
[2]: https://opensource.com/resources/what-bash

@ -0,0 +1,276 @@
[#]: collector: (lujun9972)
[#]: translator: (nophDog)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Make the switch from Mac to Linux easier with Homebrew)
[#]: via: (https://opensource.com/article/20/6/homebrew-linux)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)

Homebrew 让你从 Mac 切换到 Linux 更轻松
======
不管你是想要更舒服地从 Mac 搬到 Linux,还是不满意常规的 Linux 包管理器,都可以试试 Homebrew。
![Digital images of a computer desktop][1]

[Homebrew][2] 项目最初是为了给 Mac 用户提供一个非官方的类 Linux 包管理器。用户很快就爱上了它友好的界面以及帮助性的提示,而且 —— 真是爱恨纠缠 —— 它已经被移植到 Linux 系统。

一开始,macOS 和 Linux 有两个分开的项目(Homebrew 与 Linuxbrew),但是目前在两个操作系统上,都是由 Homebrew 核心管理。由于我正 [从 Mac 切换到 Linux][3],所以一直在研究我在 macOS 最常用的开源软件在 Linux 表现如何,最终,我很高兴地发现 Homebrew 也支持 Linux,太赞了!

### 为什么要在 Linux 使用 Homebrew 呢?

长期的 Linux 用户对 Homebrew 的第一反应是:“为什么不用......呢”,省略号代表他们喜欢的 Linux 包管理器。基于 Debian 的系统早就有了 `apt`,基于 Fedora 的系统则有 `dnf` 和 `yum`,并且像 Flatpak 跟 AppImage 这样的项目,在两种系统上都能流畅运行。我花了不少时间尝试这些技术,不得不说它们都很强大。

那我为什么还要 [坚持使用 Homebrew][4] 呢?首先,我对它非常熟悉。在为我过去使用的专有软件寻找替代品的过程中,我已经学会了许多使用方法,而且越来越熟练 —— 就像我使用 Homebrew 一样 —— 让我专注于一次学习一件事情,而不会被不同系统间的差异搞垮。

此外,我没有看到哪一个包管理器像 Homebrew 一样,对用户如此友好。命令井井有条,默认的帮助输出显示如下:

```
$ brew -h
Example usage:
  brew search [TEXT|/REGEX/]
  brew info [FORMULA...]
  brew install FORMULA...
  brew update
  brew upgrade [FORMULA...]
  brew uninstall FORMULA...
  brew list [FORMULA...]

Troubleshooting:
  brew config
  brew doctor
  brew install --verbose --debug FORMULA

Contributing:
  brew create [URL [--no-fetch]]
  brew edit [FORMULA...]

Further help:
  brew commands
  brew help [COMMAND]
  man brew
  https://docs.brew.sh
```

过于简短的输出可能会被误解成它的缺陷,但是你仔细看看每一个子命令,都有很丰富的功能。虽然上面的列表只有短短 23 行,但对进阶用户来说,光是子命令 `install` 就包含整整 79 行信息量:

```
$ brew --help | wc -l
23
$ brew install --help | wc -l
79
```

忽略或者安装依赖,用什么编译器从源码编译,使用上游 Git 提交或者官方 “bottled” 版应用,这些你都可以选择。总而言之,Homebrew 既适合新手,也同样能满足老鸟。

### 开始在 Linux 使用 Homebrew

如果你想要试着使用 Homebrew,可以用这个一行脚本在 Mac 或者 Linux 上进行安装:

```
$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
```

这条命令会立即开始安装 Homebrew。如果你比较谨慎,可以先使用 `curl` 将脚本下载到本地,检查完毕之后再运行:

```
$ curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh --output homebrew_installer.sh
$ more homebrew_installer.sh # review the script until you feel comfortable
$ bash homebrew_installer.sh
```

对 Linux 的安装指导包括如何配置点文件,尤其是 Debian 系统的 `~/.profile` 和 Fedora 系统的 `~/.bash_profile`:

```
$ test -d /home/linuxbrew/.linuxbrew && eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
$ test -r ~/.bash_profile && echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.bash_profile
$ echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile
```
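
配置好 shell 环境之后,可以新开一个终端,确认 `brew` 已经在 PATH 中,并让 Homebrew 自检一下环境(下面只是一个演示性的检查,具体输出会因系统而异):

```
$ which brew
/home/linuxbrew/.linuxbrew/bin/brew
$ brew doctor
```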

为了确认已经安装好,Homebrew 团队提供一个空的 `hello` formula 供测试:

```
$ brew install hello
==> Downloading https://linuxbrew.bintray.com/bottles/hello-2.10.x86_64_linux.bottle.tar.gz
######################################################################## 100.0%
==> Pouring hello-2.10.x86_64_linux.bottle.tar.gz
🍺 /home/linuxbrew/.linuxbrew/Cellar/hello/2.10: 52 files, 595.6KB
```

看起来安装没有任何问题,那我再继续深入探索一下。

### 用 Brew 安装命令行工具

Homebrew 宣称自己默认只 “安装你需要而 [Linux] 没有自带的东西”。

在 Homebrew 中,你可以用 `brew` 命令安装任何命令行软件。这些包的定义文件叫做 “formulae”,它们经过编译后,通过 “bottles” 共享。在 Homebrew 的世界里,还有许多与啤酒相关的术语,但这个包管理器的主要目的是让软件便于使用。

什么样的软件呢?对我这样的技术玩家(既然你已经在读这篇文章,估计你也是)来说最方便的东西。例如,便利的 `tree` 命令,可以展示目录结构,或者 `pyenv`,我用来 [在 Mac 管理不同版本 Python][5]。

你可以用 `search` 命令查看所有可以安装的 formulae,再通过管道传给 `wc` 命令看看一共有多少:

```
# -l counts the number of lines
$ brew search | wc -l
5087
```

迄今为止,一共有 5000 多个 formulae,这囊括了很多软件。需要注意的是:并非所有 formula 都能在 Linux 运行。在 `brew search --help` 输出中有一部分提到可以通过操作系统标识来筛选软件。它会在浏览器打开用于每一个操作系统的软件仓库。我运行的是 Fedora,所以我打算试一试:

```
$ brew search --fedora tree
```

浏览器打开了网址 `https://apps.fedoraproject.org/packages/s/tree`,向我展示了所有 Fedora 的可用选项。你也可以通过其它方法进行浏览。Formulae 集中整理到由操作系统划分的核心仓库当中(Mac 在 [Homebrew Core][6],Linux 在 [Linux Core][7])。同样也可以通过 Homebrew API [在网页显示][8]。

尽管已经装好了这些软件,我还是会通过其它用户的推荐不断发现新工具。我列出一些我最喜欢的工具,你可以从中找点灵感:

* `pyenv`、`rbenv` 和 `nodenv`,分别用来管理 Python、Ruby 和 Node.js 的版本
* `imagemagick` 用于脚本化编辑图片
* `pandoc` 用于脚本化转换文档格式(我通常将 .docx 文件转成 .md 或者 .html)
* `hub` 为 GitHub 用户提供 [更好的 Git 体验][9]
* `tldr` 展示命令行工具的用法示例

想要深入了解 Homebrew,可以去 [tldr 页面][10] 看看,比起应用的 man 页面,它要友好得多。使用 `search` 命令确认你可以安装它:

```
$ brew search tldr
==> Formulae
tldr ✔
```

太好了!勾号说明你可以安装。那么继续吧:

```
$ brew install tldr
==> Downloading https://linuxbrew.bintray.com/bottles/tldr-1.3.0_2.x86_64_linux.bottle.1.tar.gz
######################################################################## 100.0%
==> Pouring tldr-1.3.0_2.x86_64_linux.bottle.1.tar.gz
🍺 /home/linuxbrew/.linuxbrew/Cellar/tldr/1.3.0_2: 6 files, 63.2KB
```

Homebrew 提供编译好的二进制文件,所以你不必在本地机器上编译源码。这能节省很多时间,也不用听 CPU 风扇的噪声。我同样欣赏 Homebrew 的另外一点:即使你不完全理解每一个选项的含义,也不会影响正常使用。若你想自己编译,可以在 `brew install` 命令后面加上 `-s` 或者 `--build-from-source` 标识,这样就能从源码编译 formula(即便已经有一个 bottle 存在)。

同样,软件背后的复杂度也很有意思。使用 `info` 可以查看 `tldr` 的依赖关系、formula 源代码在磁盘上的位置,甚至还能查看公开的统计数据:

```
$ brew info tldr
tldr: stable 1.3.0 (bottled), HEAD
Simplified and community-driven man pages
https://tldr.sh/
Conflicts with:
  tealdeer (because both install `tldr` binaries)
/home/linuxbrew/.linuxbrew/Cellar/tldr/1.3.0_2 (6 files, 63.2KB) *
  Poured from bottle on 2020-06-08 at 15:56:15
From: https://github.com/Homebrew/linuxbrew-core/blob/master/Formula/tldr.rb
==> Dependencies
Build: pkg-config ✔
Required: libzip ✔, curl ✔
==> Options
--HEAD
  Install HEAD version
==> Analytics
install: 197 (30 days), 647 (90 days), 1,546 (365 days)
install-on-request: 197 (30 days), 646 (90 days), 1,546 (365 days)
build-error: 0 (30 days)
```
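
如果只关心依赖关系本身,也可以直接用 `brew deps` 子命令把某个 formula 的依赖单独列出来(这是一个补充示例,具体输出会随版本不同而变化):

```
$ brew deps tldr          # 列出 tldr 的依赖
$ brew deps --tree tldr   # 以树状结构展示依赖关系
```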

### 从 Mac 到 Linux 的一点不足

在 macOS 上,Homebrew 的 `cask` 子命令可以让用户完全使用命令行安装、管理图形界面软件。不幸的是,`cask` 目前还不能在 Linux 发行版上使用。我在尝试安装一个开源工具时发现了这一点:

```
$ brew cask install tusk
Error: Installing casks is supported only on macOS
```

我在 [论坛][11] 提问,迅速得到其他用户的一些反馈。总结一下,方案如下:

* 复刻该项目,自己实现这个特性,然后让大家知道这件事值得做
* 给该软件写一个 formula,从源代码编译
* 为该软件创建一个第三方仓库

最后一个方案对我来说很有意思。Homebrew 通过 [创建并维护 “taps”][12](又一个与啤酒相关的术语)来管理第三方仓库。当你在这个生态中越用越深入时,很值得花点时间研究一下 tap。

### 备份 Homebrew 的安装记录

我最中意的 Homebrew 特性之一:你可以像用 [版本控制工具备份点文件][13] 一样,备份你的安装记录。为此,Homebrew 提供了 `bundle` 子命令,它的 `dump` 子命令可以生成一份 Brewfile。这个文件记录了你目前安装的所有工具,可以反复使用。进入你想存放它的目录,然后运行命令,它会根据你所安装的软件生成 Brewfile:

```
$ cd ~/Development/dotfiles # This is my dotfile folder
$ brew bundle dump
$ ls Brewfile
Brewfile
```
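
Brewfile 本身就是一个纯文本文件,内容完全取决于你装了什么。下面是一个假设的示例,仅用来说明它的格式(`tap`、`brew`、`cask` 各占一行):

```
tap "homebrew/core"
brew "tldr"
brew "tree"
cask "macdown"   # cask 条目只会在 macOS 上生效
```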

当我换了一台机器,想要安装一样的软件时,只需进入含有 Brewfile 的文件夹,然后重新安装:

```
$ ls Brewfile
Brewfile
$ brew bundle
```

它会在我的新机器上安装所有列出的 formulae。

#### 在 Mac 和 Linux 上同时管理 Brewfile

Brewfile 非常适合备份你目前的安装记录,但是如果某些在 Mac 上运行的软件无法在 Linux 上运行呢?或者刚好相反?我发现不管是 Mac 还是 Linux,如果软件无法在当前操作系统运行,Homebrew 会优雅地忽略那一行。如果它遇到不兼容的请求(比如在 Linux 上用 brew 安装 cask),它会选择跳过,继续安装过程:

```
$ brew bundle --file=Brewfile.example

Skipping cask licecap (on Linux)
Skipping cask macdown (on Linux)
Installing fish
Homebrew Bundle complete! 1 Brewfile dependency now installed.
```

为了尽量保持配置文件的简洁,我在不同操作系统上使用同一份 Brewfile;由于它只会安装与当前操作系统相关的软件,所以我一直没有遇到任何问题。

### 使用 Homebrew 管理软件包

Homebrew 已经成了我必备的命令行工具,由于我很熟悉它,所以在 Linux 上的体验也充满乐趣。Homebrew 让我的工具井井有条,并且时刻保持更新,我愈发欣赏它在实用性与功能上找到的平衡点。我也喜欢它只向用户展示必要的包管理信息,大多数人都能从中受益。如果你对目前的 Linux 包管理器很满意,Homebrew 可能会让你觉得很基础,但我鼓励你深入研究,发掘本文没有提到的高级用法。

对 Linux 用户来说,可选的包管理器有很多。如果你来自 macOS,Homebrew 会让你宾至如归。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/6/homebrew-linux

作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[nophDog](https://github.com/nophDog)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop)
[2]: https://brew.sh/
[3]: https://opensource.com/article/19/10/why-switch-mac-linux
[4]: https://opensource.com/article/20/6/homebrew-mac
[5]: https://opensource.com/article/20/4/pyenv
[6]: https://github.com/Homebrew/homebrew-core
[7]: https://github.com/Homebrew/linuxbrew-core
[8]: https://formulae.brew.sh/formula/
[9]: https://opensource.com/article/20/3/github-hub
[10]: https://github.com/tldr-pages/tldr
[11]: https://discourse.brew.sh/t/add-linux-support-to-existing-cask/5766
[12]: https://docs.brew.sh/How-to-Create-and-Maintain-a-Tap
[13]: https://opensource.com/article/19/3/move-your-dotfiles-version-control