Merge remote-tracking branch 'LCTT/master'

Xingyu Wang 2020-07-19 09:02:38 +08:00
commit 8c0aa53ff1
13 changed files with 1183 additions and 191 deletions

View File

@ -1,18 +1,18 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12427-1.html)
[#]: subject: (How to crop images in GIMP [Quick Tip])
[#]: via: (https://itsfoss.com/crop-images-gimp/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
如何使用 GIMP 裁剪图像(快速技巧)
GIMP 教程:如何使用 GIMP 裁剪图像
======
你可能有很多原因要在 [GIMP][1] 中裁剪图像。例如,你可能希望删除无用的边框或信息来改善图像,或者你可能希望最终图像的焦点实在特定细节上。
你可能想在 [GIMP][1] 中裁剪图像的原因有很多。例如,你可能希望删除无用的边框或信息来改善图像,或者你可能希望最终图像的焦点是在一个特定细节上。
在本教程中,我将演示如何在 GIMP 中快速裁剪图像而又不影响精度。让我们来看看。
在本教程中,我将演示如何在 GIMP 中快速裁剪图像而又不影响精度。让我们一起来看看。
### 如何在 GIMP 中裁剪图像
@ -20,43 +20,37 @@
#### 方法 1
裁剪只是一种将图像修整比原始图像更小区域的操作。裁剪图像的过程很简单。
裁剪只是一种将图像修整为比原始图像更小的区域的操作。裁剪图像的过程很简单。
你可以通过“工具”面板访问“裁剪工具”,如下所示:
![Use Crop Tool for cropping images in GIMP][3]
你还可以通过菜单访问裁剪工具:
你还可以通过菜单访问裁剪工具:<ruby>工具 → 变形工具 → 裁剪<rt>Tools → Transform Tools → Crop</rt></ruby>”。
**Tools → Transform Tools → Crop**
激活该工具后,你会注意到画布上的鼠标光标会发生变化,以表示正在使用“裁剪工具”。
激活该工具后,你会注意到画布上的鼠标光标将变化以指示正在使用“裁剪工具”。
现在,你可以在图像画布上的任意位置单击鼠标左键,并将鼠标拖到某个位置以创建裁剪边界。此时你不必担心精度,因为你可以在实际裁剪之前修改最终选择。
现在,你可以在图像画布上的任意位置单击鼠标左键,并将鼠标拖到某个位置以创建裁剪边界。此时你不必担心精度,因为你可以在实际裁剪之前修改最终选区。
![Crop Selection][4]
此时,将鼠标光标悬停在所选内容的四个角上会更改鼠标光标并高亮显示该区域。现在,你可以微调裁剪的选区。你可以单击并拖动任何一侧或角落来移动部分选区。
此时,将鼠标光标悬停在所选内容的四个角上会更改鼠标光标并高亮显示该区域。现在,你可以微调裁剪的选区。你可以单击并拖动任何边或角来移动部分选区。
选定完区域后,你只需按键盘上的“**回车**”键即可进行裁剪。
选定完区域后,你只需按键盘上的回车键即可进行裁剪。
如果你想重新开始或者不裁剪,你可以按键盘上的 “**Esc**” 键。
如果你想重新开始或者不裁剪,你可以按键盘上的 `Esc` 键。
#### 方法 2
裁剪图像的另一种方法是使用“矩形选择工具”进行选择。
**Tools → Selection Tools → Rectangle Select**
裁剪图像的另一种方法是使用“矩形选择工具”进行选择:“<ruby>工具 → 选择工具 → 选择矩形<rt>Tools → Selection Tools → Rectangle Select</rt></ruby>”。
![][5]
然后,你可以使用与“裁剪工具”相同的方式高亮选区,并调整选区。选择好后,可以通过以下方式裁剪图像来适应选区。
**Image → Crop to Selection**
然后,你可以使用与“裁剪工具”相同的方式高亮选区,并调整选区。选择好后,可以通过以下方式裁剪图像来适应选区:“<ruby>图像 → 裁剪为选区<rt>Image → Crop to Selection</rt></ruby>”。
![][6]
#### 总结
### 总结
对于 GIMP 用户而言,精确裁剪图像可以视为一项基本功能。你可以选择更适合自己需求的方法,并探索其潜力。
@ -69,7 +63,7 @@ via: https://itsfoss.com/crop-images-gimp/
作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,66 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Project OWL: IoT trying to hold connectivity together in disasters)
[#]: via: (https://www.networkworld.com/article/3564980/project-owl-iot-trying-to-hold-connectivity-together-in-disasters.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Project OWL: IoT trying to hold connectivity together in disasters
======
IoT devices configured in a wireless mesh network can be quickly deployed to provide basic connections when natural disasters knock out traditional communications links.
[AK Badwolf][1] [(CC BY 2.0)][2]
An open source project centered on mesh networking, [IoT][3] and LoRa connectivity could help emergency responders and victims stay in contact in the wake of natural disasters, said the head of Project OWL at the recent Open Source Summit.
Project OWL's target is the disruption in communications that often follows a natural disaster. Widespread outages, in both cellular and wired networks, frequently impede the flow of information about emergency services, supplies, and a host of other critical concerns that have to be addressed in the wake of a major storm or other catastrophe.
It does this with an army of “ducks”: small wireless modules that are cheap, simple to deploy, and don't require the support of existing infrastructure. Some ducks are solar-powered, others have long-lasting batteries. A duck is equipped with a LoRa radio for communication with other ducks on the network, as well as with Wi-Fi, and perhaps Bluetooth or GPS for additional functionality.
The idea is that, when networks are down, users can use their smartphones or laptops to make a [Wi-Fi][8] connection to a duck, which can relay small pieces of information to the rest of the network. Information propagates back along the network until it reaches a “Papaduck,” which is equipped with a satellite connection to the OWL data management system in the cloud. (OWL stands for “organization, whereabouts, and logistics.”) From the cloud, the information can be visualized on a smartphone or web app, or even plugged into existing systems via an API.
The secret sauce is in the ClusterDuck Protocol, the open source firmware that keeps information flowing even when some modules on the network aren't functional. It's designed to work on a wide range of cheap and easily accessed computing hardware (Raspberry Pis and the like) in order to make it easy to set up a ClusterDuck network quickly.
The project was prompted, according to founder Bryan Knouse, by the devastating hurricanes of 2017 and 2018, and the huge difficulties faced by affected communities in responding to them without adequate communications.
“A few of our founding members had been through these disasters, and we asked, ‘what do we do about this?’” he said.
The project has a cohort of students and professors at the University of Puerto Rico in Mayaguez, and most of the testing of the system happened there. Knouse said there are currently 17 solar-powered ducks nesting on rooftops and trees around campus, with plans to add more.
“This relationship created an open-source community on the ground, these students and profs are helping us develop this,” he said.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3564980/project-owl-iot-trying-to-hold-connectivity-together-in-disasters.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/spiderkat/8487309555/in/photolist-dVZFrn-dDctnA-8WuLez-6RBSHn-bQa5F8-syyFcV-rvxKJT-5bSAh-2Xey4-3D4xww-4t1ZYv-dMgY7k-mHeMk1-xsPw6B-EiD3UR-k1rNkD-atorAv-f58MG9-g2QCe-Zr1wAC-ewx5Px-6vrwz7-8CCPSd-hAC5HZ-aHJC1B-9ovTST-Wqj4Sk-fiJjWG-28ATb9y-6tHHiR-8VZrmy-8iUVNB-DzSQV5-j6gpDL-2c2C5Re-kmbqae-Th4XGx-g325LW-cC1cp-26aa3aC-X7ruJo-jDkSKD-57695d-8Dz2hm-fPsDJr-gxcdoV-iSVsHR-dWWbct-ejvCrM-8ofaVz
[2]: https://creativecommons.org/licenses/by/2.0/legalcode
[3]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[4]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html
[5]: https://www.networkworld.com/article/3250268/what-is-mu-mimo-and-why-you-need-it-in-your-wireless-routers.html
[6]: https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html
[7]: https://www.networkworld.com/article/3306720/mobile-wireless/how-enterprises-can-prep-for-5g.html
[8]: https://www.networkworld.com/article/3560993/what-is-wi-fi-and-why-is-it-so-important.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,151 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source accounting software developed by accountants)
[#]: via: (https://opensource.com/article/20/7/godbledger)
[#]: author: (Sean Darcy https://opensource.com/users/sean-darcy)
Open source accounting software developed by accountants
======
GoDBledger is open source accounting software that is both intuitive and
productive.
![Person using a laptop][1]
Over the last six months, I have been working on [GoDBLedger][2], an open source accounting system that I feel addresses some of the issues that plague current accounting software solutions. Even in my first year as a graduate accountant, the software frustrated me because I have seen what good software can be like and how much it can improve your productivity.
A great example of "good software" is a software developer's [Editor/IDE][3]. Developers love their editors, and there is a huge range that is highly customizable and allows for a seamless coding experience. One of the major influencing factors for the existence of great software in this area is that developers are themselves the end users—they scratch their own itches and have immediate feedback on their designs.
The relationship between developers and their editor is fascinating because the editor's job is to facilitate the efficient transfer of the developer's ideas into the codebase.
As an accountant who loves programming in my spare time, I can see parallels between the accounting industry and software development. You can imagine the general ledger (a giant list of financial transactions) as a "codebase" that accountants work with, and it is the accountant's job to navigate and edit the general ledger before compiling it into various reports that an end user consumes—financial statements and tax returns, for example. This is similar to, for instance, the way the Red Hat Enterprise Linux codebase is maintained by developers, and then released officially as RHEL and CentOS to the world.
Unfortunately, when comparing the software between the two industries, I have found that navigating and editing the general ledger is not seamless, at least not when compared to my experiences when programming. Additionally, the compilation is slow and involves a lot of human labor to achieve; it can ultimately be as difficult as editing the general ledger itself.
So, given the parallels between accountants' and programmers' processes, why hasn't the accounting industry developed software to make editing the general ledger as efficient as an IDE editing a codebase? And why is compiling a set of financial statements from the general ledger not automated as is pushing code to production?
There are two influencing factors that I have noticed.
The first is the profit motive that drives both accountants and accounting software developers. Monetization can lead to inefficiencies in software because the most efficient end user is not always the most profitable. A profit-maximizing entity can extract more value from their users by taking actions to get control of the user's data, create walled gardens, and make it difficult for the user to change their software.
Accounting software has always existed in the realm of paid software. Because it plays such an important role in a business, it has always been easy to monetize. This is great for software developers who wish to make a profit, but unfortunately, it means that open source principles have had minimal influence on the shape and design of the software.
The second influencing factor is a lack of understanding of the general ledger as a data structure. The general ledger, in simple terms, is just a way to format a database of financial transactions. This data structure was designed in the 1600s, and it worked quite well within a physical book. At that point in time, working with the general ledger meant changing written text in the book as necessary, and accountants were professionals at maintaining this database. However, when relational databases were created, the structure never really got standardized and digitalized. Software packages implemented the general ledger within their own proprietary database structures, and accountants lost their ability to directly edit the database. Instead of managing the general ledger directly, the software only allowed for accountants to have restricted access.
### The result
Before TCP/IP got standardized, companies like AOL were creating their own proprietary environments and locking users into their walled gardens. Fortunately for the internet, they fell away to a free standard communication protocol. Unfortunately, the accounting industry did not do this, and we got stuck with "America Online."
Imagine that the software industry were ruled by a few big IDE conglomerates that decided to maximize their profits rather than maximizing developer efficiency. Your editor no longer saves code as text; instead, it is saved in a proprietary data format. You cannot copy a codebase easily into another IDE, and if you do change editors, it's probably safe to assume that the codebase saved in the previous system is lost.
This is the state of the accounting software available today.
So what can accounting software developers learn from the open source community?
Open source means you can stand on the shoulders of giants and use codebases of other projects to influence and directly assist in the growth of your own project. If you are focusing on accountants' efficiency to edit the general ledger, you can review other software and copy the parts that work and ignore the parts that don't. In my case, I have been fortunate to leverage the great codebases of other accounting software such as [GNU Cash][4] and [Ledger-CLI][5], but there is another interesting area that has a lot to contribute—the open source servers that manage cryptocurrency nodes.
Accounting has struggled with the transition from physical ledgers to computers, but thankfully the cryptocurrency community has already developed a lot of open source software for maintaining a database of transactions.
The general ledger is a data structure to record financial transactions—just like a blockchain.
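To make that idea concrete, here is a minimal sketch of the core invariant of a double-entry general ledger: every journal entry's lines must net to zero. This is illustrative Python only, not GoDBLedger's actual schema or API; it mirrors the balanced entries in the `ledger_cli` examples later in this article:

```python
# Illustrative toy ledger, not GoDBLedger's actual data model.
from collections import defaultdict

def post(ledger, entry):
    """Post a journal entry, given as a list of (account, amount) lines."""
    if sum(amount for _, amount in entry) != 0:
        raise ValueError("unbalanced entry: lines must net to zero")
    for account, amount in entry:
        ledger[account] += amount

ledger = defaultdict(int)
# The same economics as the example entry below: revenue earned, cash received.
post(ledger, [("Revenue:Sales", -1000), ("Asset:Cash", 1000)])
print(dict(ledger))  # {'Revenue:Sales': -1000, 'Asset:Cash': 1000}
```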
### What this means
My accounting system has been heavily influenced by [Geth (Go Ethereum)][6] and [Prysm][7] from Prysmatic Labs, as there are a lot of talented developers working on these projects. They have provided the base for a server that manages financial transactions. Combining this with a database schema that was heavily influenced by GNU Cash means the heavy work behind designing an accounting system has already been done.
The result is [GoDBLedger][2], with source code available on [github.com/darcys22/godbledger][8].
It is also written in Golang, both to leverage the good code that already exists in this area and because Golang tends to be well suited to servers of this nature.
I've talked a lot about how accountants should be able to work with the general ledger as seamlessly as a programmer can work on their codebase. I truly look forward to the day when I can create and edit journal entries with the same efficiency that I navigate and edit code using my text editor of choice. GoDBLedger is my first step toward achieving this. The next steps will be toward developing this "IDE" that communicates with GoDBLedger and its underlying database. Fortunately for me, there are already a lot of good open source database projects that I'll be able to leverage for this.
### How to use GoDBLedger
GoDBLedger is usable today! You can fire up a server that maintains a database for your financial transactions, and it will look and feel familiar to anyone who has run a cryptocurrency node before.
The end goal is for it to act as the central server for receiving financial transactions and storing them in the database. It lives in the background, always running so that other systems can communicate financial data to it as needed.
![GoDBledger operational flow chart][9]
Right now, if you are comfortable with a command line and scripts that communicate to systems using RPC, then you can play with GoDBLedger and experience double-entry bookkeeping on a level lower than all other software (and only slightly above directly manipulating the SQL database).
For instance, with GoDBLedger running, you can add entries in the interactive mode of ledger-cli:
```
$ ~/godbledger/ledger_cli journal
Journal Entry Wizard
--------------------
Enter the date (yyyy-mm-dd): 2019-06-30
Enter the Journal Descripion: Get Money Get Paid!
Line item #1
Enter the line Descripion: Income is good yo
Enter the Account: Revenue:Sales
Enter the Amount: -1000
Would you like to enter more line items? (n to stop):
Line item #2
Enter the line Descripion: Cash is better
Enter the Account: Asset:Cash
Enter the Amount: 1000
Would you like to enter more line items? (n to stop): n
&{Get Money Get Paid!
 2019-06-30 00:00:00 +0000 UTC [{Revenue:Sales
 Income is good yo
 -1000/1} {Asset:Cash
 Cash is better
 1000/1}] stuff}
```
However, the same entry can be made using JSON:
```
$ ~/godbledger/ledger_cli jsonjournal '{"Payee":"Darcy Financial","Date":"2019-06-30T00:00:00Z","AccountChanges":[{"Name":"Asset:Cash","Description":"Cash is better","Currency":"USD","Balance":"100"},{"Name":"Revenue:Sales","Description":"Income is good yo","Currency":"USD","Balance":"-100"}],"Signature":"anythingHereCurrently"}'
```
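Because the entry is plain JSON, any program that can build a string and invoke a command can write to the ledger. Here is a hypothetical Python sketch that constructs the same entry and hands it to `ledger_cli`; it assumes the GoDBLedger server is running and that `ledger_cli` was built as shown above:

```python
# Hypothetical sketch: construct the JSON entry shown above and pass it
# to ledger_cli. Assumes a running GoDBLedger server.
import json
import os
import subprocess

entry = {
    "Payee": "Darcy Financial",
    "Date": "2019-06-30T00:00:00Z",
    "AccountChanges": [
        {"Name": "Asset:Cash", "Description": "Cash is better",
         "Currency": "USD", "Balance": "100"},
        {"Name": "Revenue:Sales", "Description": "Income is good yo",
         "Currency": "USD", "Balance": "-100"},
    ],
    "Signature": "anythingHereCurrently",
}

ledger_cli = os.path.expanduser("~/godbledger/ledger_cli")
subprocess.run([ledger_cli, "jsonjournal", json.dumps(entry)], check=True)
```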
You can view a report with the `reporter` command:
```
$ ~/godbledger/reporter trialbalance
   ACCOUNT    | BALANCE AT 20 FEBRUARY 2020
--------------+----------------------------
Asset:Cash    |                        1000
              |
Revenue:Sales |                       -1000
              |
```
Read the [quickstart on the GitHub wiki][10] for more information. I've also developed a few [example scripts using Python][11] to show how you can send transactions from your own software. Additionally, I have made an [example "trading bot"][12] that saves every trade it does to GoDBLedger.
If you're interested in a programmatic way to keep track of your accounts, try GoDBLedger.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/godbledger
作者:[Sean Darcy][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sean-darcy
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://godbledger.com/
[3]: https://www.redhat.com/en/topics/middleware/what-is-ide
[4]: https://www.gnucash.org/
[5]: https://www.ledger-cli.org/
[6]: https://github.com/ethereum/go-ethereum
[7]: https://github.com/prysmaticlabs/prysm
[8]: https://github.com/darcys22/godbledger
[9]: https://opensource.com/sites/default/files/uploads/godbledger_flow.png (GoDBledger operational flow chart)
[10]: https://github.com/darcys22/godbledger/wiki/Quickstart
[11]: https://github.com/darcys22/godbledger-pythonclient
[12]: https://github.com/darcys22/Trading-Simulator

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,99 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (JonnieWayy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Book Review: A Byte of Vim)
[#]: via: (https://itsfoss.com/book-review-a-byte-of-vim/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Book Review: A Byte of Vim
======
[Vim][1] is a tool that is both simple and very powerful. Most new users will be intimidated by it because it doesn't work like regular graphical text editors. The unusual keyboard shortcuts make people wonder about [how to save and exit Vim][2]. But once you master Vim, there is nothing like it.
There are numerous [Vim resources available online][3]. We have covered some Vim tricks on It's FOSS as well. Apart from online resources, plenty of books have been dedicated to this editor. Today, we will look at one such book that is designed to make Vim easy for most users to understand. The book we will be discussing is [A Byte of Vim][4] by [Swaroop C H][5].
The author [Swaroop C H][6] has worked in computing for over a decade. He previously worked at Yahoo and Adobe. Out of college, he made money by selling Linux CDs. He started a number of businesses, including an iPod charger named ion. He is currently an engineering manager for the AI team at [Helpshift][7].
### A Byte of Vim
![][8]
Like all good books, A Byte of Vim starts by talking about what Vim is: “a computer program used for writing any kind of text”. He goes on to say, “What makes Vim special is that it is one of those few software which is both simple and powerful.”
Before diving into telling how to use Vim, Swaroop tells the reader how to install Vim for Windows, Mac, Linux, and BSD. Once the installation is complete, he runs you through how to launch Vim and how to create your first file.
Next, Swaroop discusses the different modes of Vim and how to navigate around your document using Vim's keyboard shortcuts. This is followed by the basics of editing a document with Vim, including the Vim version of cut/copy/paste and undo/redo.
Once the editing basics are covered, Swaroop talks about using Vim to edit multiple parts of a single document. You can also use multiple tabs and windows to edit multiple documents at the same time.
The book also covers extending the functionality of Vim through scripting and installing plugins. There are two ways to use scripts in Vim: use Vim's built-in scripting language, or use a programming language like Python or Perl to access Vim's internals. There are five types of Vim plugins that can be written or downloaded: vimrc, global plugins, filetype plugins, syntax highlighting plugins, and compiler plugins.
In a separate section, Swaroop C H covers the features of Vim that make it good for programming. These features include syntax highlighting, smart indentation, support for shell commands, omnicompletion, and the ability to be used as an IDE.
#### Getting the A Byte of Vim book and contributing to it
A Byte of Vim is licensed under [Creative Commons 4.0][10]. You can read an online version of the book for free on [the author's website][4]. You can also download a [PDF][11], [Epub][12], or [Mobi][13] version for free.
[Get A Byte of Vim for FREE][4]
If you prefer reading a [hard copy][14], you have that option, as well.
Please note that the _**original version of A Byte of Vim was written in 2008**_ and converted to PDF. Unfortunately, Swaroop C H lost the original source files, and he is working to convert the book to [Markdown][15]. If you would like to help, please visit the [book's GitHub page][16].
#### Conclusion
When I first stared into the angry maw that is Vim, I did not have a clue what to do. I wish that I had known about A Byte of Vim then. This book is a good resource for anyone learning about Linux, especially if you are getting into the command line.
Have you read [A Byte of Vim][4] by Swaroop C H? If yes, how do you find it? If not, what is your favorite book on an open source topic? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/book-review-a-byte-of-vim/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://www.vim.org/
[2]: https://itsfoss.com/how-to-exit-vim/
[3]: https://linuxhandbook.com/basic-vim-commands/
[4]: https://vim.swaroopch.com/
[5]: https://swaroopch.com/
[6]: https://swaroopch.com/about/
[7]: https://www.helpshift.com/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/Byte-of-vim-book.png?resize=800%2C450&ssl=1
[9]: https://itsfoss.com/4mlinux-review/
[10]: https://creativecommons.org/licenses/by/4.0/
[11]: https://www.gitbook.com/download/pdf/book/swaroopch/byte-of-vim
[12]: https://www.gitbook.com/download/epub/book/swaroopch/byte-of-vim
[13]: https://www.gitbook.com/download/mobi/book/swaroopch/byte-of-vim
[14]: https://swaroopch.com/buybook/
[15]: https://itsfoss.com/best-markdown-editors-linux/
[16]: https://github.com/swaroopch/byte-of-vim#status-incomplete
[17]: https://i2.wp.com/images-na.ssl-images-amazon.com/images/I/41itW8furUL._SL160_.jpg?ssl=1
[18]: https://www.amazon.com/Mastering-Vim-Quickly-WTF-time/dp/1983325740?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=1983325740 (Mastering Vim Quickly: From WTF to OMG in no time)
[19]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime)
[20]: https://www.amazon.com/Mastering-Vim-Quickly-WTF-time/dp/1983325740?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=1983325740 (Buy on Amazon)
[21]: https://itsfoss.com/iridium-browser-review/
[22]: http://reddit.com/r/linuxusersgroup

View File

@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A brief history of the Content Management System)
[#]: via: (https://opensource.com/article/20/7/history-content-management-system)
[#]: author: (Pierre Burgy https://opensource.com/users/pierreburgy)
A brief history of the Content Management System
======
The CMS has gone from static pages to JAMstack, and its history is at the heart of open source and the evolution of the web.
![Text editor on a browser, in blue][1]
The content management system (CMS) is a prolific software category that covers all types of applications for the creation and modification of digital content. So it should come as no huge surprise that the history of the CMS traces back to the first website in history, created by [Tim Berners-Lee][2] in 1990 and modeled on an internet-based hypertext system, HTML, which represented just text and links.
![timeline of CMS market evolution][3]
The humble beginnings of the world wide web lay in static sites that served content without the need for a back-end database. They consumed very little computing resources, so they loaded quickly—because there were no database queries, no templates to render, and no client-server requests to process. There was also little in the way of web traffic, given that few people were regular "web surfers," especially compared to today.
And, of course, it was all open source software that facilitated this interoperability. Indeed, open source has always played an important role in the evolution of CMS.
### Rise of the CMS
Fast-forward to the mid-nineties: as the popularity of the world wide web grew, websites needed more frequent updates—a change from the web's origins hosting brochure-type static content. This led to the introduction of a plethora of CMS products, including FileNet, StoryBuilder from Vignette, Documentum, and many others. These were all proprietary, closed source products, which was not unusual for that time period.
However, in the early 2000s, open source CMS alternatives emerged, including WordPress, Drupal, and Joomla. WordPress included an extensible plugin architecture and provided templates that could be used to build websites without requiring users to have knowledge of HTML and CSS. The WordPress CMS software installs on a web server and is typically paired with a MySQL or MariaDB database (both open source, of course). The big shift to WordPress was, in part, accelerated by the fact that the CMS is open source.
Even today, about one-third of websites are built using these first-generation content management systems. These traditional CMS are monolithic systems that include the back-end user interface, plugins, front-end templates, Cascading Style Sheets (CSS), a web server, and a database. With every user request for a website page, a server first queries a database, then combines the result with data from the page's markup and plugins to generate an HTML document in the browser.
### Trend to LAMPstack
The emergence of the open source CMS was consistent with infrastructure built on the LAMP (Linux, Apache, MySQL, and PHP/Perl/Python) stack. This new structure represented the start of monolithic web development that enabled the creation of dynamic websites that use database queries to deliver unique content for different end users. At this point, the previous model of static sites sitting on a server—where individual files (HTML, CSS, JavaScript) consisting of text and links are delivered the same way to all end users—really started to disappear.
### Mobile web changes everything
As we move deeper and deeper into the first decade of the 2000s, early mobile devices like Palm and Blackberry provide access to web content, then the introduction of smartphones and tablets around 2010 brings more and more users to the web via mobile devices. In 2016, the scales tip and [web access from mobile devices and tablets exceeds desktops][4] worldwide.
The monolithic CMS wasn't suited to serving content to these different types of access devices, which necessitated different versions of websites—usually stripped-down versions of the website for mobile users. The emergence of new Web-ready device types—like smartwatches, gaming consoles, and voice assistants like [Alexa][5]—only exacerbated this problem, and the need for omnichannel content delivery became clear.
### The emergence of headless CMS and JAMstack
A headless CMS decouples the backend—which stores all the content, databases, and files—from the frontend. Typically, a headless CMS uses APIs so that content from databases (SQL and NoSQL) and files can be accessed for display on websites, smartphones, and even Internet of Things (IoT) devices. Additionally, a headless CMS is front-end framework-agnostic, making it compatible with a variety of static site generators and front-end frameworks (e.g., Gatsby.js, Next.js, Nuxt.js, Angular, React, and Vue.js), which gives developers the freedom to choose their favorite tools.
A headless CMS is particularly suitable for the JAM (JavaScript, APIs, and Markup) stack web development architecture, which is emerging as a popular solution because it delivers better web performance and SEO rankings, as well as strong security. JAMstack does not depend on a web server and serves static files immediately when a request is made. There is no need to query the database, as the files are already compiled and served to the browser.
The shift to headless CMS is driven by a new wave of players, either with a SaaS approach such as Contentful, or self-hosted open source alternatives such as [Strapi][6]. Headless is also disrupting the e-commerce industry, with new software editors such as Commerce Layer and [Saleor][7] (also open source) offering solutions to manage multiple SKUs, prices, and inventory data in a true omnichannel fashion.
### Conclusion
Throughout the evolution of the content management system, which has been driven by how information on the internet is consumed, open source software has progressed along the same trend lines, with new technologies emerging to solve arising requirements. Indeed, there seems to be an interdependency between CMS, the world wide web, and open source. The need to manage the growing volumes of content isn't going away anytime soon. There is every reason to expect even more widespread adoption of open source software in the years ahead.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/history-content-management-system
作者:[Pierre Burgy][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pierreburgy
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_blue_text_editor_web.png?itok=lcf-m6N7 (Text editor on a browser, in blue)
[2]: https://www.w3.org/People/Berners-Lee/#:~:text=A%20graduate%20of%20Oxford%20University,refined%20as%20Web%20technology%20spread.
[3]: https://opensource.com/sites/default/files/uploads/timeline.market.png (timeline of CMS market evolution)
[4]: https://techcrunch.com/2016/11/01/mobile-internet-use-passes-desktop-for-the-first-time-study-finds/
[5]: https://opensource.com/article/20/6/open-source-voice-assistant
[6]: https://strapi.io/
[7]: https://saleor.io/

View File

@ -0,0 +1,427 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Debug Linux using ProcDump)
[#]: via: (https://opensource.com/article/20/7/procdump-linux)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
Debug Linux using ProcDump
======
Check out Microsoft's open source tool for getting process information.
![Dump truck rounding a turn in the road][1]
Microsoft's growing appreciation for Linux and open source is no secret. The company has steadily increased its contributions to open source in the last several years, including porting some of its software and tools to Linux. In late 2018, Microsoft [announced][2] it was porting some of its [Sysinternals][3] tools to Linux as open source, and [ProcDump for Linux][4] was the first such release.
If you have worked on Windows in debugging or troubleshooting, you have probably heard of Sysinternals. It is a "Swiss Army knife" toolset that helps system administrators, developers, and IT security professionals monitor and troubleshoot Windows environments.
One of Sysinternals' most popular tools is [ProcDump][5]. As its name suggests, it is used for dumping the memory of a running process into a core file on disk. This core file can then be analyzed using a debugger to understand the process' state when the dump was taken. Having used Sysinternals previously, I was curious to try out the Linux port of ProcDump.
### Get started with ProcDump for Linux
To try ProcDump for Linux, you need to download the tool and compile it. (I am using Red Hat Enterprise Linux, though these instructions should work the same on other Linux distros):
```
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.2 (Ootpa)
$
$ uname -r
4.18.0-193.el8.x86_64
$
```
First, clone the ProcDump for Linux repository:
```
$ git clone https://github.com/microsoft/ProcDump-for-Linux.git
Cloning into 'ProcDump-for-Linux'...
remote: Enumerating objects: 40, done.
remote: Counting objects: 100% (40/40), done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 414 (delta 14), reused 14 (delta 6), pack-reused 374
Receiving objects: 100% (414/414), 335.28 KiB | 265.00 KiB/s, done.
Resolving deltas: 100% (232/232), done.
$
$ cd ProcDump-for-Linux/
$
$ ls
azure-pipelines.yml  CONTRIBUTING.md  docs     INSTALL.md  Makefile    procdump.gif  src
CODE_OF_CONDUCT.md   dist             include  LICENSE     procdump.1  README.md     tests
$
```
Next, build the program using `make`. It prints out the exact [GCC][6] command lines needed to compile the source files:
```
$ make
rm -rf obj
rm -rf bin
rm -rf /root/ProcDump-for-Linux/pkgbuild
gcc -c -g -o obj/Logging.o src/Logging.c -Wall -I ./include -pthread -std=gnu99
gcc -c -g -o obj/Events.o src/Events.c -Wall -I ./include -pthread -std=gnu99
gcc -c -g -o obj/ProcDumpConfiguration.o src/ProcDumpConfiguration.c -Wall -I ./include -pthread -std=gnu99
gcc -c -g -o obj/Handle.o src/Handle.c -Wall -I ./include -pthread -std=gnu99
gcc -c -g -o obj/Process.o src/Process.c -Wall -I ./include -pthread -std=gnu99
gcc -c -g -o obj/Procdump.o src/Procdump.c -Wall -I ./include -pthread -std=gnu99
gcc -c -g -o obj/TriggerThreadProcs.o src/TriggerThreadProcs.c -Wall -I ./include -pthread -std=gnu99
gcc -c -g -o obj/CoreDumpWriter.o src/CoreDumpWriter.c -Wall -I ./include -pthread -std=gnu99
gcc -o bin/procdump obj/Logging.o obj/Events.o obj/ProcDumpConfiguration.o obj/Handle.o obj/Process.o obj/Procdump.o obj/TriggerThreadProcs.o obj/CoreDumpWriter.o -Wall -I ./include -pthread -std=gnu99
gcc -c -g -o obj/ProcDumpTestApplication.o tests/integration/ProcDumpTestApplication.c -Wall -I ./include -pthread -std=gnu99
gcc -o bin/ProcDumpTestApplication obj/ProcDumpTestApplication.o -Wall -I ./include -pthread -std=gnu99
$
```
The compilation creates two new directories. First is an `obj/` directory, which holds the object files created during compilation. The second (and more important) directory is `bin/`, which is where the compiled `procdump` program is stored. It also compiles another test binary called `ProcDumpTestApplication`:
```
$ ls obj/
CoreDumpWriter.o  Handle.o   ProcDumpConfiguration.o  ProcDumpTestApplication.o  TriggerThreadProcs.o
Events.o          Logging.o  Procdump.o               Process.o
$
$
$ ls bin/
procdump  ProcDumpTestApplication
$
$ file bin/procdump
bin/procdump: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=6e8827db64835ea0d1f0941ac3ecff9ee8c06e6b, with debug_info, not stripped
$
$ file bin/ProcDumpTestApplication
bin/ProcDumpTestApplication: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=c8fd86f53c07df142e52518815b2573d1c690e4e, with debug_info, not stripped
$
```
With this setup, every time you run the `procdump` utility, you must move into the `bin/` folder. To make it available from anywhere on the system, run `make install`. This copies the binary into `/usr/bin`, which is part of your shell's `$PATH`:
```
$ which procdump
/usr/bin/which: no procdump in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
$
$ make install
mkdir -p //usr/bin
cp bin/procdump //usr/bin
mkdir -p //usr/share/man/man1
cp procdump.1 //usr/share/man/man1
$
$ which procdump
/usr/bin/procdump
$
```
With installation, ProcDump provides a man page, which you can access with `man procdump`:
```
$ man procdump
$
```
### Run ProcDump
To dump a process' memory, you need to provide its process ID (PID) to ProcDump. You can use any of the running programs or daemons on your machine. For this example, I will use a tiny C program that loops forever. Compile the program and run it (to exit the program, hit **Ctrl**+**C**, or if it's running in the background, use the `kill` command with the PID):
```
$ cat progxyz.c
#include <stdio.h>
#include <unistd.h>  /* needed for sleep() */
int main() {
        for (;;)
        {
                printf(".");
                sleep(1);
        }
        return 0;
}
$
$ gcc progxyz.c -o progxyz
$
$ ./progxyz &
[1] 350498
$
```
Once the program is running, you can find its PID using either `pgrep` or `ps`. Make note of the PID:
```
$ pgrep progxyz
350498
$
$ ps -ef | grep progxyz
root      350498  345445  0 03:29 pts/1    00:00:00 ./progxyz
root      350508  347350  0 03:29 pts/0    00:00:00 grep --color=auto progxyz
$
```
While the test process is running, invoke `procdump` and provide the PID. The output states the name of the process and the PID, reports that a `Core dump` was generated, and shows its file name:
```
$ procdump -p 350498
ProcDump v1.1.1 - Sysinternals process dump utility
Copyright (C) 2020 Microsoft Corporation. All rights reserved. Licensed under the MIT license.
Mark Russinovich, Mario Hewardt, John Salem, Javid Habibi
Monitors a process and writes a dump file when the process exceeds the
specified criteria.
Process:                progxyz (350498)
CPU Threshold:          n/a
Commit Threshold:       n/a
Polling interval (ms):  1000
Threshold (s):  10
Number of Dumps:        1
Press Ctrl-C to end monitoring without terminating the process.
[03:30:00 - INFO]: Timed:
[03:30:01 - INFO]: Core dump 0 generated: progxyz_time_2020-06-24_03:30:00.350498
$
```
List the contents of the current directory, and you should see the new core file. The file name matches the one shown by the `procdump` command, and the date, time, and PID are appended to it:
```
$ ls -l progxyz_time_2020-06-24_03\:30\:00.350498
-rw-r--r--. 1 root root 356848 Jun 24 03:30 progxyz_time_2020-06-24_03:30:00.350498
$
$ file progxyz_time_2020-06-24_03\:30\:00.350498
progxyz_time_2020-06-24_03:30:00.350498: ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style, from './progxyz', real uid: 0, effective uid: 0, real gid: 0, effective gid: 0, execfn: './progxyz', platform: 'x86_64'
$
```
### Analyze the core file with the GNU Project Debugger
To see if you can read the core file, invoke the [GNU Project Debugger][7] (`gdb`). Remember to provide the test binary's path so you can see all the function names on the stack. Here, `bt` (backtrace) shows that the `sleep()` function was being executed when the dump was taken:
```
$ gdb -q ./progxyz ./progxyz_time_2020-06-24_03\:30\:00.350498
Reading symbols from ./progxyz...(no debugging symbols found)...done.
[New LWP 350498]
Core was generated by `./progxyz'.
#0  0x00007fb6947e9208 in nanosleep () from /lib64/libc.so.6
Missing separate debuginfos, use: yum debuginfo-install glibc-2.28-101.el8.x86_64
(gdb) bt
#0  0x00007fb6947e9208 in nanosleep () from /lib64/libc.so.6
#1  0x00007fb6947e913e in sleep () from /lib64/libc.so.6
#2  0x00000000004005f3 in main ()
(gdb)
```
### What about gcore?
Linux users will be quick to point out that Linux already has a command called `gcore`, which ships with most Linux distros and does the exact same thing as ProcDump. This is a valid argument. If you have never used it, try the following to dump a process' core with `gcore`. Run the test program again, then run `gcore`, and provide the PID as an argument:
```
$ ./progxyz &
[1] 350664
$
$
$ pgrep progxyz
350664
$
$
$ gcore 350664
0x00007fefd3be2208 in nanosleep () from /lib64/libc.so.6
Saved corefile core.350664
[Inferior 1 (process 350664) detached]
$
```
`gcore` prints a message saying it has saved the core to a specific file. Check the current directory to find this core file, and use `gdb` again to load it:
```
$
$ ls -l  core.350664
-rw-r--r--. 1 root root 356848 Jun 24 03:34 core.350664
$
$
$ file core.350664
core.350664: ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style, from './progxyz', real uid: 0, effective uid: 0, real gid: 0, effective gid: 0, execfn: './progxyz', platform: 'x86_64'
$
$ gdb -q ./progxyz ./core.350664
Reading symbols from ./progxyz...(no debugging symbols found)...done.
[New LWP 350664]
Core was generated by `./progxyz'.
#0  0x00007fefd3be2208 in nanosleep () from /lib64/libc.so.6
Missing separate debuginfos, use: yum debuginfo-install glibc-2.28-101.el8.x86_64
(gdb) bt
#0  0x00007fefd3be2208 in nanosleep () from /lib64/libc.so.6
#1  0x00007fefd3be213e in sleep () from /lib64/libc.so.6
#2  0x00000000004005f3 in main ()
(gdb) q
$
```
For `gcore` to work, you need to make sure the following settings are in place. First, ensure the `ulimit` is set for core files; if it is set to `0`, core files won't be generated. Second, ensure that `/proc/sys/kernel/core_pattern` has the proper settings to specify the core pattern; you can check it with `cat /proc/sys/kernel/core_pattern` (the default pattern varies by distribution):
```
$ ulimit -c
unlimited
$
```
### Should you use ProcDump or gcore?
There are several cases where you might prefer using ProcDump instead of gcore, and ProcDump has a few built-in features that might be useful in general.
#### Waiting for a test binary to execute
Whether you use ProcDump or gcore, the test process must be executed and in a running state so that you can provide a PID to generate a core file. But ProcDump has a feature that waits until a specific binary runs; once it finds a running process that matches the given name, it generates a core file for that test binary. It can be enabled using the `-w` argument and the program's name instead of a PID. This feature can be useful in instances where the test program exits quickly.
Here's how it works. In this example, there is no process named `progxyz` running:
```
$ pgrep progxyz
$
```
Invoke `procdump` with the `-w` argument to keep it waiting:
```
$ procdump -w progxyz
ProcDump v1.1.1 - Sysinternals process dump utility
Copyright (C) 2020 Microsoft Corporation. All rights reserved. Licensed under the MIT license.
Mark Russinovich, Mario Hewardt, John Salem, Javid Habibi
Monitors a process and writes a dump file when the process exceeds the
specified criteria.
Process:                progxyz (pending)
CPU Threshold:          n/a
Commit Threshold:       n/a
Polling interval (ms):  1000
Threshold (s):  10
Number of Dumps:        1
Press Ctrl-C to end monitoring without terminating the process.
[03:39:23 - INFO]: Waiting for process 'progxyz' to launch...
```
Then, from another terminal, invoke the test binary `progxyz`: 
```
$ ./progxyz &
[1] 350951
$
```
ProcDump immediately detects that the binary is running and dumps the core file for this binary:
```
[03:39:23 - INFO]: Waiting for process 'progxyz' to launch...
[03:43:22 - INFO]: Found process with PID 350951
[03:43:22 - INFO]: Timed:
[03:43:23 - INFO]: Core dump 0 generated: progxyz_time_2020-06-24_03:43:22.350951
$
$ ls -l progxyz_time_2020-06-24_03\:43\:22.350951
-rw-r--r--. 1 root root 356848 Jun 24 03:43 progxyz_time_2020-06-24_03:43:22.350951
$
$ file progxyz_time_2020-06-24_03\:43\:22.350951
progxyz_time_2020-06-24_03:43:22.350951: ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style, from './progxyz', real uid: 0, effective uid: 0, real gid: 0, effective gid: 0, execfn: './progxyz', platform: 'x86_64'
$
```
#### Multiple core dumps
Another important ProcDump feature is that you can specify how many core files to generate by using the command-line argument `-n <count>`. The default time gap between the core dumps is 10 seconds, but you can modify this using the `-s <sec>` argument. This example uses ProcDump to take three core dumps of the test binary:
```
$ ./progxyz &
[1] 351014
$
$ procdump -n 3 -p 351014
ProcDump v1.1.1 - Sysinternals process dump utility
Copyright (C) 2020 Microsoft Corporation. All rights reserved. Licensed under the MIT license.
Mark Russinovich, Mario Hewardt, John Salem, Javid Habibi
Monitors a process and writes a dump file when the process exceeds the
specified criteria.
Process:                progxyz (351014)
CPU Threshold:          n/a
Commit Threshold:       n/a
Polling interval (ms):  1000
Threshold (s):  10
Number of Dumps:        3
Press Ctrl-C to end monitoring without terminating the process.
[03:45:20 - INFO]: Timed:
[03:45:21 - INFO]: Core dump 0 generated: progxyz_time_2020-06-24_03:45:20.351014
[03:45:31 - INFO]: Timed:
[03:45:32 - INFO]: Core dump 1 generated: progxyz_time_2020-06-24_03:45:31.351014
[03:45:42 - INFO]: Timed:
[03:45:44 - INFO]: Core dump 2 generated: progxyz_time_2020-06-24_03:45:42.351014
$
$ ls -l progxyz_time_2020-06-24_03\:45\:*
-rw-r--r--. 1 root root 356848 Jun 24 03:45 progxyz_time_2020-06-24_03:45:20.351014
-rw-r--r--. 1 root root 356848 Jun 24 03:45 progxyz_time_2020-06-24_03:45:31.351014
-rw-r--r--. 1 root root 356848 Jun 24 03:45 progxyz_time_2020-06-24_03:45:42.351014
$
```
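The run above used the default 10-second gap between dumps. A hypothetical variant that shortens the gap to five seconds with the `-s` argument described above:

```
procdump -n 3 -s 5 -p 351014
```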
#### Core dump based on CPU and memory usage
ProcDump also enables you to trigger a core dump when a test binary or process reaches a certain CPU or memory threshold. ProcDump's man page shows the command-line arguments to use when invoking ProcDump:
```
-C          Trigger core dump generation when CPU exceeds or equals specified value (0 to 100 * nCPU)
-c          Trigger core dump generation when CPU is less than specified value (0 to 100 * nCPU)
-M          Trigger core dump generation when memory commit exceeds or equals specified value (MB)
-m          Trigger core dump generation when memory commit is less than specified value (MB)
-T          Trigger when thread count exceeds or equals specified value.
-F          Trigger when filedescriptor count exceeds or equals specified value.
-I          Polling frequency in milliseconds (default is 1000)
```
For example, you can ask ProcDump to dump the core when the given PID's CPU usage exceeds 70%:
```
procdump -C 70 -n 3 -p 351014
```
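The same pattern works for the other triggers listed above. For example, here is a hypothetical invocation (based on the `-M` flag from the man page excerpt) that takes two dumps if the process's memory commit reaches 100MB:

```
procdump -M 100 -n 2 -p 351014
```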
### Conclusion
ProcDump is an interesting addition to the long list of Windows programs being ported to Linux. Not only does it provide additional tooling options to Linux users, but it can also make Windows users feel more at home when working on Linux.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/procdump-linux
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dumptruck_car_vehicle_storage_container_road.jpg?itok=TWK0CbX_ (Dump truck rounding a turn in the road)
[2]: https://www.zdnet.com/article/microsoft-working-on-porting-sysinternals-to-linux/
[3]: https://docs.microsoft.com/en-us/sysinternals/
[4]: https://github.com/Microsoft/ProcDump-for-Linux
[5]: https://docs.microsoft.com/en-us/sysinternals/downloads/procdump
[6]: https://gcc.gnu.org/
[7]: https://www.gnu.org/software/gdb/

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to configure an SSH proxy server with Squid)
[#]: via: (https://fedoramagazine.org/configure-ssh-proxy-server/)
[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
How to configure an SSH proxy server with Squid
======
![][1]
Sometimes you can't connect to an SSH server from your current location. Other times, you may want to add an extra layer of security to your SSH connection. In these cases, connecting to another SSH server via a proxy server is one way to get through.
[Squid][2] is a full-featured proxy server application that provides caching and proxy services. It's normally used to help improve response times and reduce network bandwidth by reusing and caching previously requested web pages during browsing.
However, for this setup, you'll configure Squid as an SSH proxy server, since it's a robust, trusted proxy server that is easy to configure.
### Installation and configuration
Install the squid package using [sudo][3]:
```
$ sudo dnf install squid -y
```
The squid configuration file is quite extensive but there are only a few things we need to configure. Squid uses access control lists to manage connections.
Edit the _/etc/squid/squid.conf_ file to make sure you have the two lines explained below.
First, specify your local IP network. The default configuration file already has a list of the most common ones, but you will need to add yours if it's not there. For example, if your local IP network range is 192.168.1.X, this is how the line would look:
```
acl localnet src 192.168.1.0/24
```
Next, add the SSH port as a safe port by adding the following line:
```
acl Safe_ports port 22
```
Save that file. Now enable and restart the squid proxy service:
```
$ sudo systemctl enable squid
$ sudo systemctl restart squid
```
By default, the squid proxy listens on port 3128. Configure firewalld to allow this:
```
$ sudo firewall-cmd --add-service=squid --permanent
$ sudo firewall-cmd --reload
```
### Testing the ssh proxy connection
To connect to a server via SSH through a proxy server, we'll use netcat.
Install _nmap-ncat_ if it's not already installed:
```
$ sudo dnf install nmap-ncat -y
```
Here is an example of a standard ssh connection:
```
$ ssh user@example.com
```
Here is how you would connect to that same server using the squid proxy server as a gateway.
This example assumes the squid proxy server's IP address is 192.168.1.63. You can also use the host-name or the FQDN of the squid proxy server:
```
$ ssh user@example.com -o "ProxyCommand nc --proxy 192.168.1.63:3128 %h %p"
```
Here are the meanings of the options:
  * _ProxyCommand_: Tells ssh a proxy command is going to be used.
  * _nc_: The command used to establish the connection to the proxy server. This is the netcat command.
  * _%h_: The placeholder for the proxy server's host-name or IP address.
  * _%p_: The placeholder for the proxy server's port number.
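If you connect through this proxy often, you can persist the same command in your SSH client configuration instead of typing it each time. Here is a sketch for _~/.ssh/config_, assuming the OpenSSH client and the same example proxy address (adjust for your own hosts):

```
Host example.com
    ProxyCommand nc --proxy 192.168.1.63:3128 %h %p
```

With that in place, a plain _ssh user@example.com_ will route through the squid proxy automatically.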
There are many ways to configure an SSH proxy server, but this is a simple way to get started.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/configure-ssh-proxy-server/
作者:[Curt Warfield][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/rcurtiswarfield/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/07/squid_ssh_proxy-816x345.png
[2]: http://www.squid-cache.org/
[3]: https://fedoramagazine.org/howto-use-sudo/

View File

@ -0,0 +1,123 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tricks with Pseudorandom Number Generators)
[#]: via: (https://theartofmachinery.com/2020/07/18/prng_tricks.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Tricks with Pseudorandom Number Generators
======
Pseudorandom number generators (PRNGs) are often treated like a compromise: their output isn't as good as that of real random number generators, but they're cheap and easy to use on computer hardware. But a special feature of PRNGs is that they're _reproducible_ sources of random-looking data:
```
import std.random;
import std.stdio;
void main()
{
// Seed a PRNG and generate 10 pseudo-random numbers
auto rng = Random(42);
foreach (_; 0..10) write(uniform(0, 10, rng), ' ');
writeln();
// Reset the PRNG, and the same sequence is generated again
rng = Random(42);
foreach (_; 0..10) write(uniform(0, 10, rng), ' ');
writeln();
// Output:
// 2 7 6 4 6 5 0 4 0 3
// 2 7 6 4 6 5 0 4 0 3
}
```
This simple fact enables a few neat tricks.
A couple of famous examples come from the gaming industry. The classic example is the space trading game Elite, which was originally written for 8-bit BBC Micros in the early '80s. It was a totally revolutionary game, but just one thing that amazed fans was its complex universe of thousands of star systems. That was something you just didn't normally get in games written for machines with kilobytes of RAM in total. The trick was to generate the universe with a PRNG seeded with a small value. There was no need to store the universe in memory because the game could regenerate each star system on demand, repeatedly and deterministically.
PRNGs are now widely exploited for recording games for replays. You don't need to record every frame of the game world if you can just record the PRNG seed and all the player actions. (Like most things in software, [actually implementing that can be surprisingly challenging][1].)
### Random mappings
In machine learning, you often need a mapping from things to high-dimensional random unit vectors (random vectors of length 1). Let's get more specific and say you're processing documents for topic/sentiment analysis or similarity. In this case you'll generate a random vector for each word in the dictionary. Then you can create a vector for each document by adding up the vectors for each word in it (with some kind of weighting scheme, in practice). Similar documents will end up with similar vectors, and you can use linear algebra tricks to uncover deeper patterns (read about [latent semantic analysis][2] if you're interested).
An obvious way to get a mapping between words and random vectors is to just initially generate a vector for each word, and create a hash table for looking them up later. Another way is to generate the random vectors on demand using a PRNG seeded by a hash of the word. Here's a toy example:
```
/+ dub.sdl:
    name "prngvecdemo"
    dependency "mir-random" version="~>2.2.14"
+/

// Demo of mapping words to random vectors with PRNGs
// Run me with "dub prngvecdemo.d"

import std.algorithm;
import std.stdio;

// Using the Mir numerical library https://www.libmir.org/
import mir.random.engine.xoshiro;
import mir.random.ndvariable;

enum kNumDims = 512;
alias RNG = Xoroshiro128Plus;

// D's built-in hash happens to be MurmurHash, but we just need it to be suitable for seeding the PRNG
static assert("".hashOf.sizeof == 8);

void main()
{
    auto makeUnitVector = sphereVar!float();

    auto doc = "a lot of words";

    float[kNumDims] doc_vec, word_vec;
    doc_vec[] = 0.0;

    foreach (word; doc.splitter)  // Not bothering with whitening or stop word filtering for this demo
    {
        // Create a PRNG seeded with the hash of the word
        auto rng = RNG(word.hashOf);
        // Generate a unit vector for the word using the PRNG
        // We'll get the same vector every time we see the same word
        makeUnitVector(rng, word_vec);
        // Add it to the document vector (no weighting for simplicity)
        doc_vec[] += word_vec[];
    }

    writeln(doc_vec);
}
```
This kind of trick isnt the answer to everything, but it has some uses. Obviously, it can be useful if youre working with more data than you have RAM (though you might still cache some of the generated data). Another use case is processing a large dataset with parallel workers. In the document example, you can get workers to “agree” on what the vector for each word should be, without data synchronisation, and without needing to do an initial pass over the data to build a dictionary of words. Ive used this trick with experimental code, just because I was too lazy to add an extra stage to the data pipeline. In some applications, recomputing data on the fly can even be faster than fetching it from a very large lookup table.
### An ode to Xorshift
You might have noticed I used `Xoroshiro128Plus`, a variant of the Xorshift PRNG. The Mersenne Twister is a de facto standard PRNG in some computing fields, but Im a bit of a fan of the Xorshift family. The basic Xorshift engines are fast and pretty good, and there are variants that are still fast and have excellent output quality. But the big advantage compared to the Mersenne Twister is the state size. The Mersenne Twister uses a pool of 2496 bytes of state, whereas most of the Xorshift PRNGs can fit into one or two machine `int`s.
The small state size has a couple of advantages for this kind of “on demand” PRNG usage: One is that thoroughly initialising a big state from a small seed takes work (some people “warm up” a Mersenne Twister by throwing away several of the initial outputs, just to be sure). The second is that the small size of the PRNGs makes them cheap enough to use in places you wouldnt think of using a Mersenne Twister.
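To make the size contrast concrete, heres a bare-bones sketch of Marsaglias xorshift32 (not the exact variant Mir ships, but the same family): the entire generator state is a single `uint`.
```
// Marsaglia's xorshift32: the whole PRNG is one machine word of state.
// The seed (initial state) must be nonzero.
struct Xorshift32
{
    uint state;

    uint next()
    {
        // (13, 17, 5) is one of Marsaglia's full-period shift triplets
        state ^= state << 13;
        state ^= state >> 17;
        state ^= state << 5;
        return state;
    }
}
```
Seeding it is trivial — any nonzero word will do — which is exactly what makes it cheap enough to embed one per object, as the next section exploits.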
### Random data structures made reliable
Some data structures and algorithms use randomisation. An example is a treap, which is a binary search tree that uses a randomised heap for balancing. Treaps are much less popular than AVL trees or red-black trees, but theyre easier to implement correctly because you end up with fewer edge cases. Theyre also good enough for most use cases. That makes them a good choice for application-specific “augmented” BSTs. But for argument purposes, its just a real example of a data structure that happens to use randomness as an implementation detail.
Randomisation comes with a major drawback: its a pain when testing and debugging. Test failures arent reproducible for debugging if real randomness is used. If you have any experience with testing, youll have seen this and youll know its a good idea to use a PRNG instead.
Using a global PRNG mostly works, but it couples the treaps through one shared PRNG. That accidental coupling can lead to test flakes if youre running several tests at once, unless youre careful to use one PRNG per thread and reset it for every test. Even then you can get Heisenbugs in your non-test code.
What about dependency injection? Making every treap method require a reference to a PRNG works, but it leaks the implementation detail throughout your code. You could make the treap take a reference to a PRNG in its constructor, but that implies adding an extra pointer to the data structure. If youre going to do that, why not just make every treap embed its own 32b or 64b Xorshift PRNG? Embedding the PRNG into the treap makes it deterministic and reproducible in a way thats encapsulated and decoupled from everything else.
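Heres a minimal sketch of that last idea, reusing the `Xorshift32` above (illustrative only — no deletion or duplicate handling — and the seed constant is an arbitrary nonzero choice):
```
// Sketch: a treap that embeds its own tiny PRNG for priorities.
// Every instance is deterministic and decoupled from any global RNG.
struct Treap
{
    static struct Node
    {
        int key;
        uint priority;
        Node* left, right;
    }

    Node* root;
    Xorshift32 rng = Xorshift32(0x9E3779B9);  // one word of per-treap state

    void insert(int key) { root = insert(root, key); }

    private Node* insert(Node* node, int key)
    {
        if (node is null)
            return new Node(key, rng.next());  // random heap priority
        if (key < node.key)
        {
            node.left = insert(node.left, key);
            // Restore the max-heap property on priorities
            if (node.left.priority > node.priority)
                node = rotateRight(node);
        }
        else
        {
            node.right = insert(node.right, key);
            if (node.right.priority > node.priority)
                node = rotateLeft(node);
        }
        return node;
    }

    private static Node* rotateRight(Node* y)
    {
        auto x = y.left;
        y.left = x.right;
        x.right = y;
        return x;
    }

    private static Node* rotateLeft(Node* x)
    {
        auto y = x.right;
        x.right = y.left;
        y.left = x;
        return y;
    }
}
```
Two treaps fed the same insertion sequence now end up with identical shapes, so a failing test replays exactly, with no shared state between instances.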
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2020/07/18/prng_tricks.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://technology.riotgames.com/news/determinism-league-legends-introduction
[2]: https://en.wikipedia.org/wiki/Latent_semantic_analysis

View File

@@ -0,0 +1,110 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 19.10 Reaches End of Life. Upgrade to Ubuntu 20.04 As Soon As Possible!)
[#]: via: (https://itsfoss.com/ubuntu-19-10-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 19.10 Reaches End of Life. Upgrade to Ubuntu 20.04 As Soon As Possible!
======
_**Ubuntu 19.10 Eoan Ermine**_ _**has reached end of life. That means it wont get any security or maintenance updates. Continuing to use Ubuntu 19.10 would be risky, as your system may be vulnerable in the future for lack of security updates. You should upgrade to Ubuntu 20.04.**_
[Ubuntu 19.10 was released in October 2019][1] bringing some new features that prepared a base for [Ubuntu 20.04][2].
As a non-LTS release, it had a lifespan of nine months. It has completed its life cycle and as of 17th July 2020, it wont be getting any updates.
### End of life for Ubuntu 19.10
![][3]
I have [explained the Ubuntu release cycle and end of life][4] in detail earlier. Ill reiterate what it means for you and your system if you continue using Ubuntu 19.10 beyond this point.
Software usually has [a predefined life cycle][5], and once a software version reaches its end of life, it stops getting updates and support.
Beyond the end of life, Ubuntu 19.10 wont get system updates, security updates or application updates from Ubuntu anymore.
If you continue using it, your system may fall victim to cyberattacks, as hackers tend to exploit vulnerable systems.
Later, you might not be able to install new software using the apt command, as Ubuntu will archive the repository for 19.10.
### What to do if you are using Ubuntu 19.10?
First, [check which version of Ubuntu you are using][6]. This can be done quickly by entering this command in the terminal:
```
lsb_release -a
```
```
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 19.10
Release: 19.10
Codename: Eoan
```
If you see Ubuntu 19.10, you should do either of these two things:
* If you have a fast, consistent internet connection, upgrade to Ubuntu 20.04 from within 19.10. Your personal files and most software remain untouched.
* If you have a slow or inconsistent internet connection, you should do a [fresh installation of Ubuntu 20.04][7]. Your files and everything else on the disk will be erased, so you should make a backup of your important data on an external disk.
#### How to upgrade to Ubuntu 20.04 from 19.10 (if you have a good internet connection)
I have discussed the Ubuntu version upgrade in detail previously. Ill quickly mention the steps here as well.
First, make sure that your system is set to notify you of a new version in Software &amp; Updates.
Go to Software &amp; Updates:
![][8]
Go to the Updates tab and set “Notify me of a new Ubuntu version” to “For any new version”:
![][9]
Now, install any pending updates.
Now, run the Update Manager tool again. You should be given the option to upgrade to Ubuntu 20.04. Hit the upgrade button and follow the instructions.
The upgrade downloads around 1.2 GB of packages. This is why you need a good and consistent internet connection.
![][10]
Upgrading this way keeps your home directory as it is. Having a backup on an external disk is still suggested, though.
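If you prefer the terminal, the same upgrade can usually be triggered with Ubuntus `do-release-upgrade` tool, which comes with the `update-manager-core` package (a sketch of the typical flow; the graphical method above achieves the same thing):
```
sudo apt update && sudo apt upgrade
sudo do-release-upgrade
```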
### Are you still using Ubuntu 19.10?
If you are still using Ubuntu 19.10, you must prepare for an upgrade or a fresh installation. Do not ignore it.
If you dont like frequent version upgrades like this, you should stick with the LTS versions that are supported for five years. The current LTS version is Ubuntu 20.04, which youll be upgrading to anyway.
Were/are you using Ubuntu 19.10? Have you already upgraded to Ubuntu 20.04? Let me know if you face any issues or if you have any questions.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-19-10-end-of-life/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-19-10-released/
[2]: https://itsfoss.com/download-ubuntu-20-04/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/07/ubuntu-19-10-end-of-life.jpg?ssl=1
[4]: https://itsfoss.com/end-of-life-ubuntu/
[5]: https://en.wikipedia.org/wiki/Systems_development_life_cycle
[6]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[7]: https://itsfoss.com/install-ubuntu/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/03/upgrade-ubuntu-1.jpeg?ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/11/software-update-any-new-version.jpeg?resize=800%2C378&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/software-updater-focal.jpg?ssl=1

View File

@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Project OWL: IoT trying to hold connectivity together in disasters)
[#]: via: (https://www.networkworld.com/article/3564980/project-owl-iot-trying-to-hold-connectivity-together-in-disasters.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
OWL 项目:物联网正尝试在灾难中让一切保持联络
======
当自然灾害破坏了传统的通信链路时,以<ruby>网状网络<rt>mesh network</rt></ruby>方式配置的物联网设备可以迅速部署,以提供基本的通信连接。
[AK Badwolf][1] [(CC BY 2.0)][2]
OWL 项目负责人在最近的开源峰会上说,一个以网状网络、物联网和 LoRa 连接为中心的开源项目,可以帮助急救人员和受灾人员在自然灾害之后保持联系。
OWL 项目面向的应用场景,是自然灾害之后频繁发生的通信中断。无论是蜂窝网络还是有线网络,大范围的中断都会频繁阻碍急救服务、物资供应,以及在暴风雨或其他重大灾难后解决关键问题所必需的信息流动。
该项目通过一大群“<ruby>鸭子<rt>duck</rt></ruby>”(便宜、易于部署且不需要现有基础设施支持的小型无线模块)来实现这个目标。一些“鸭子”采用太阳能供电,其他一些则使用长效电池。每只“鸭子”配备一个 LoRa 无线电,用于在网络中与其他“鸭子”通信,同时还配备有 Wi-Fi,并且可能配备蓝牙和 GPS 来实现其他功能。
这个想法是这样的:当网络瘫痪时,用户可以使用智能手机或笔记本电脑与“鸭子”建立 Wi-Fi 连接,“鸭子”再将小块信息传递到网络的其他部分。信息在网络中向后端传递,直到到达“papaduck”,“papaduck”装备有可以连接到云上 OWL 数据管理系统的卫星系统。(OWL 代表“<ruby>组织<rt>organization</rt></ruby>、<ruby>位置<rt>whereabouts</rt></ruby>、<ruby>物流<rt>logistics</rt></ruby>”。)信息可以通过云在智能手机或网页上进行可视化,甚至可以通过 API 接入到现有的系统中。
秘诀在于 ClusterDuck 协议,这是一个开源固件,即使在网络中一些模块无法正常工作时,它也能保持信息流通。它被设计为运行在大量便宜且容易获得的计算硬件(例如类似树莓派的硬件)上,以便更容易、更快捷地建立 ClusterDuck 网络。
创始人 Bryan Knouse 表示,之所以创建这个项目,是因为在 2017 年和 2018 年的毁灭性飓风过后,救援工作在与受灾社区进行有效通信方面面临着巨大的困难。
“我们的一些创始成员经历过这些灾难,于是我们就问:‘我们能做些什么?’”他说道。
该项目在马亚圭斯有一批来自波多黎各大学的学生和教授,大多数系统测试都是在那里进行的。Knouse 说,校园里目前有 17 只太阳能“鸭子”,分布在屋顶和树上,而且还计划增加更多。
他说:“这种关系实际上造就了一个开源社区,这些学生和教授正在帮助我们开发这个项目。”
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3564980/project-owl-iot-trying-to-hold-connectivity-together-in-disasters.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[Yufei-Yan](https://github.com/Yufei-Yan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/spiderkat/8487309555/in/photolist-dVZFrn-dDctnA-8WuLez-6RBSHn-bQa5F8-syyFcV-rvxKJT-5bSAh-2Xey4-3D4xww-4t1ZYv-dMgY7k-mHeMk1-xsPw6B-EiD3UR-k1rNkD-atorAv-f58MG9-g2QCe-Zr1wAC-ewx5Px-6vrwz7-8CCPSd-hAC5HZ-aHJC1B-9ovTST-Wqj4Sk-fiJjWG-28ATb9y-6tHHiR-8VZrmy-8iUVNB-DzSQV5-j6gpDL-2c2C5Re-kmbqae-Th4XGx-g325LW-cC1cp-26aa3aC-X7ruJo-jDkSKD-57695d-8Dz2hm-fPsDJr-gxcdoV-iSVsHR-dWWbct-ejvCrM-8ofaVz
[2]: https://creativecommons.org/licenses/by/2.0/legalcode
[3]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[4]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html
[5]: https://www.networkworld.com/article/3250268/what-is-mu-mimo-and-why-you-need-it-in-your-wireless-routers.html
[6]: https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html
[7]: https://www.networkworld.com/article/3306720/mobile-wireless/how-enterprises-can-prep-for-5g.html
[8]: https://www.networkworld.com/article/3560993/what-is-wi-fi-and-why-is-it-so-important.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: (JonnieWayy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Book Review: A Byte of Vim)
[#]: via: (https://itsfoss.com/book-review-a-byte-of-vim/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
书评A Byte of Vim
======
[Vim][1] 是一个简单而又强大的文本编辑工具。大多数新用户都会被它吓倒,因为它不像常规的图形化文本编辑器那样“工作”。Vim “不寻常”的键盘快捷键让人很好奇[如何保存并退出 Vim][2]。但一旦你掌握了 Vim,就不会再有这样的问题了。
网上有大量的 [Vim 资源][3]。我们也在 It's FOSS 上介绍过一些 Vim 技巧。除了线上资源,也有很多书致力于介绍这个编辑器。今天我们要介绍的,是一本旨在让大多数用户都能轻松理解 Vim 的书:[Swaroop C H][5] 的[《A Byte of Vim》][4]。
本书作者 [Swaroop C H][6] 已经在计算机领域工作了十余年。他曾在 Yahoo 和 Adobe 工作过。大学毕业后,他通过售卖 Linux CD 赚钱。他曾多次创业,其中包括一款名为 ion 的 iPod 充电器。他目前是 [Helpshift][7] AI 团队的工程经理。
### A Byte of Vim
![][8]
和所有好书一样,《A Byte of Vim》从谈论什么是 Vim 开始:“一个用于写各类文本的电脑程序。”他继续说道:“Vim 之所以与众不同,是因为它是为数不多的既简单又强大的软件之一。”
在深入讲解如何使用 Vim 之前,Swaroop 先告诉读者如何在 Windows、Mac、Linux 和 BSD 上安装 Vim。安装完成后,他会进一步指导读者启动 Vim,以及创建第一个文件。
接着,Swaroop 讨论了 Vim 的不同模式,以及如何通过 Vim 的键盘快捷键在文档中浏览。然后是使用 Vim 编辑文档的基础知识,包括 Vim 版本的剪切/复制/粘贴以及撤销/重做。
在涵盖了编辑基础知识后,Swaroop 讨论了如何使用 Vim 编辑单个文档的多个部分。读者也可以使用多个标签页和窗口来同时编辑多个文档。
本书还涵盖了通过编写脚本和安装插件来扩展 Vim 的功能。在 Vim 中使用脚本有两种方法:一种是使用 Vim 的内置脚本语言,另一种是使用 Python 或 Perl 等编程语言来访问 Vim 的内部。可以编写或下载五种类型的 Vim 插件:vimrc、全局插件、文件类型插件、语法高亮插件和编译器插件。
在单独的章节中,Swaroop C H 介绍了使 Vim 更适合编程的特性。这些特性包括语法高亮、智能缩进、对 Shell 命令的支持、全能补全,以及可以把 Vim 当作 IDE 使用的功能。
#### 获取《A Byte of Vim》一书并为之贡献
《A Byte of Vim》以 [Creative Commons 4.0][10] 许可证发布。读者可以在[作者的主页][4]上免费阅读其在线版本,也可以免费下载其 [PDF][11]、[Epub][12] 或 [Mobi][13] 版本。
[免费获取《A Byte of Vim》][4]
如果您更喜欢阅读纸质书,也可以购买[纸质版本][14]。
请注意,**《A Byte of Vim》的原始版本写于 2008 年**,并被转换为 PDF。不幸的是,Swaroop C H 丢失了原始源文件,他正在努力将该书转换为 [Markdown][15] 格式。如果您想提供帮助,请访问[该书的 GitHub 页面][16]。
#### 结语
当我初次被 Vim 弄得抓狂时,我不知道该怎么办。我真希望那时候就知道有《A Byte of Vim》这本书。对于任何学习 Linux 的人来说,这本书都是不错的资源,特别是在你开始学习命令行的时候。
您读过 Swaroop C H 的[《A Byte of Vim》][4]吗?如果读过,您觉得它怎么样?如果没有,那么您最喜欢的开源主题书籍是哪一本?请在下方评论区告诉我们。
--------------------------------------------------------------------------------
via: https://itsfoss.com/book-review-a-byte-of-vim/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[JonnieWayy](https://github.com/JonnieWayy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://www.vim.org/
[2]: https://itsfoss.com/how-to-exit-vim/
[3]: https://linuxhandbook.com/basic-vim-commands/
[4]: https://vim.swaroopch.com/
[5]: https://swaroopch.com/
[6]: https://swaroopch.com/about/
[7]: https://www.helpshift.com/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/Byte-of-vim-book.png?resize=800%2C450&ssl=1
[9]: https://itsfoss.com/4mlinux-review/
[10]: https://creativecommons.org/licenses/by/4.0/
[11]: https://www.gitbook.com/download/pdf/book/swaroopch/byte-of-vim
[12]: https://www.gitbook.com/download/epub/book/swaroopch/byte-of-vim
[13]: https://www.gitbook.com/download/mobi/book/swaroopch/byte-of-vim
[14]: https://swaroopch.com/buybook/
[15]: https://itsfoss.com/best-markdown-editors-linux/
[16]: https://github.com/swaroopch/byte-of-vim#status-incomplete
[17]: https://i2.wp.com/images-na.ssl-images-amazon.com/images/I/41itW8furUL._SL160_.jpg?ssl=1
[18]: https://www.amazon.com/Mastering-Vim-Quickly-WTF-time/dp/1983325740?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=1983325740 (Mastering Vim Quickly: From WTF to OMG in no time)
[19]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime)
[20]: https://www.amazon.com/Mastering-Vim-Quickly-WTF-time/dp/1983325740?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=1983325740 (Buy on Amazon)
[21]: https://itsfoss.com/iridium-browser-review/
[22]: http://reddit.com/r/linuxusersgroup