From fc929cf5ac5a9a43b74e17eadb6280c9fb272bd6 Mon Sep 17 00:00:00 2001
From: chenjt
Date: Thu, 16 Jul 2015 18:36:51 +0800
Subject: [PATCH 001/697] tmp commit

---
 ...st Using Docker Machine in a VirtualBox.md | 114 ++++++++++++++++++
 1 file changed, 114 insertions(+)
 create mode 100644 translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md

diff --git a/translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md b/translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md
new file mode 100644
index 0000000000..64c044b100
--- /dev/null
+++ b/translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md
@@ -0,0 +1,114 @@
+[bazz2]
+在 VirtualBox 中使用 Docker Machine 管理主机
+================================================================================
+大家好,今天我们学习在 VirtualBox 中使用 Docker Machine 来创建和管理 Docker 主机。Docker Machine 是一个应用,用于在我们的电脑上、在云端、在数据中心创建 Docker 主机,然后用户可以使用 Docker 客户端来配置一些东西。这个 API 为本地主机、或数据中心的虚拟机、或云端的实例提供 Docker 服务。Docker Machine 支持 Windows、OSX 和 Linux,并且是以一个独立的二进制文件包形式安装的。使用(与现有 Docker 工具)相同的接口,我们就可以充分利用已经提供 Docker 基础框架的生态系统。只要一个命令,用户就能快速部署 Docker 容器。
+
+Here are some easy and simple steps that help us deploy docker containers using Docker Machine.
+
+### 1. Installing Docker Machine ###
+
+Docker Machine works well on every Linux operating system. First of all, we'll need to download the latest version of Docker Machine from the [Github site][1]. Here, we'll use curl to download the latest version of Docker Machine, i.e. 0.2.0.
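Before picking one of the two downloads below, it helps to confirm which build matches your machine. The small helper sketched here (the `pick_build` function and its name are our own illustration, not part of Docker Machine) maps the kernel architecture reported by `uname -m` to the release binary names used in this article:

```shell
# Hypothetical helper: map `uname -m` output to the Docker Machine 0.2.0
# release binary names referenced in this article.
pick_build() {
  case "$1" in
    x86_64)              echo "docker-machine_linux-amd64" ;;
    i386|i486|i586|i686) echo "docker-machine_linux-i386" ;;
    *)                   echo "unsupported" ;;
  esac
}

# Decide based on the current machine's architecture:
pick_build "$(uname -m)"
```

On a 64-bit x86 box this points you at the amd64 binary from the first curl command below; on 32-bit x86 it points you at the i386 build.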
+
+**For 64 Bit Operating System**
+
+    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
+
+**For 32 Bit Operating System**
+
+    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine
+
+After downloading the latest release of Docker Machine, we'll make the file named **docker-machine** under **/usr/local/bin/** executable using the command below.
+
+    # chmod +x /usr/local/bin/docker-machine
+
+After doing the above, we'll want to ensure that docker-machine was installed successfully. To check, we can run docker-machine -v, which will print the version of docker-machine installed on our system.
+
+    # docker-machine -v
+
+![Installing Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png)
+
+To enable Docker commands on our machines, make sure to install the Docker client as well by running the commands below.
+
+    # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker
+    # chmod +x /usr/local/bin/docker
+
+### 2. Creating a VirtualBox VM ###
+
+After we have successfully installed Docker Machine on our Linux machine, we can create a virtual machine using VirtualBox. To get started, we run the docker-machine create command with the --driver flag set to virtualbox, since we want to deploy Docker inside a VM running on VirtualBox; the final argument is the name of the machine, here "linux". This command will download the [boot2docker][2] ISO, a lightweight Linux distribution based on Tiny Core Linux with the Docker daemon installed, and will then create and start a VirtualBox VM with Docker running inside it.
+
+To do so, we'll run the following command in a terminal or shell on our box.
+
+    # docker-machine create --driver virtualbox linux
+
+![Creating Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-docker-machine.png)
+
+Now, to check whether we have successfully created a VirtualBox VM running Docker, we'll run the **docker-machine ls** command as shown below.
+
+    # docker-machine ls
+
+![Docker Machine List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-list.png)
+
+If the host is active, we can see * under the ACTIVE column in the output as shown above.
+
+### 3. Setting Environment Variables ###
+
+Now, we'll need to point the Docker client at the new machine. We can do that by running docker-machine env followed by the machine name — **linux**, as above.
+
+    # eval "$(docker-machine env linux)"
+    # docker ps
+
+This will set the environment variables that the Docker client reads to find the host and its TLS settings. Note that we'll need to do this every time we reboot our machine or start a new tab. We can see what variables will be set by running the following command.
+
+    # docker-machine env linux
+
+    export DOCKER_TLS_VERIFY=1
+    export DOCKER_CERT_PATH=/Users//.docker/machine/machines/dev
+    export DOCKER_HOST=tcp://192.168.99.100:2376
+
+### 4. Running Docker Containers ###
+
+Finally, after configuring the environment variables and the virtual machine, we can run Docker containers on the host running inside the virtual machine. To give it a test, we'll run a busybox container with the **docker run busybox** command, passing **echo hello world** so that we can see the container's output.
+
+    # docker run busybox echo hello world
+
+![Running Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/05/running-docker-container.png)
+
+### 5. Getting Docker Host's IP ###
+
+We can get the IP address of the running Docker host using the **docker-machine ip** command. Any exposed ports are available on the Docker host's IP address.
+
+    # docker-machine ip
+
+![Docker IP Address](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-ip-address.png)
+
+### 6. Managing the Hosts ###
+
+Now we can manage as many local VMs running Docker as we desire by running the docker-machine create command again, as described in the steps above.
+
+When we are finished working with a running Docker host, we can simply run the **docker-machine stop** command to stop all active hosts, and run **docker-machine start** to start them again.
+
+    # docker-machine stop
+    # docker-machine start
+
+You can also specify a host to stop or start using the host name as an argument.
+
+    $ docker-machine stop linux
+    $ docker-machine start linux
+
+### Conclusion ###
+
+Finally, we have successfully created and managed a Docker host inside a VirtualBox VM using Docker Machine. Docker Machine makes it fast and easy to create, deploy and manage Docker hosts on different platforms; here we ran our Docker host on VirtualBox. This virtualbox driver provisions Docker on the local machine inside a VirtualBox virtual machine. Docker Machine ships with drivers for provisioning Docker locally with VirtualBox as well as remotely on Digital Ocean instances, and more drivers are in the works for AWS, Azure, VMware, and other infrastructure. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you !
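To recap, the whole flow from the sections above — create the machine, point the client at it, run a test container, and print the host IP — can be sketched as a single script. This is only an illustrative sketch: it assumes the docker-machine and docker binaries from step 1 are already on the PATH, and it reuses the machine name `linux` from the article.

```shell
# Sketch of the end-to-end workflow from this article (assumes the
# docker-machine and docker binaries from step 1 are installed).
MACHINE="linux"

provision_and_test() {
  docker-machine create --driver virtualbox "$MACHINE" &&
  eval "$(docker-machine env "$MACHINE")" &&   # point the client at the VM
  docker run busybox echo hello world &&       # smoke-test the Docker daemon
  echo "Docker host IP: $(docker-machine ip "$MACHINE")"
}

# Only attempt provisioning when docker-machine is actually installed:
if command -v docker-machine >/dev/null 2>&1; then
  provision_and_test || echo "provisioning failed; check your VirtualBox setup"
else
  echo "docker-machine not found; install it first (see step 1)"
fi
```

Running the script on a box prepared as in step 1 walks through steps 2–5 in one go.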
Enjoy :-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/host-virtualbox-docker-machine/ + +作者:[Arun Pyasi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:https://github.com/docker/machine/releases +[2]:https://github.com/boot2docker/boot2docker From e0681399fd312fc45dc5c908bb1c037aee1ecbd3 Mon Sep 17 00:00:00 2001 From: alim0x Date: Sat, 25 Jul 2015 20:27:50 +0800 Subject: [PATCH 002/697] [Part One]18 - The history of Android --- .../18 - The history of Android.md | 28 +++++++++---------- 1 file changed, 13 insertions(+), 15 deletions(-) diff --git a/sources/talk/The history of Android/18 - The history of Android.md b/sources/talk/The history of Android/18 - The history of Android.md index 8b303033bc..3af4359680 100644 --- a/sources/talk/The history of Android/18 - The history of Android.md +++ b/sources/talk/The history of Android/18 - The history of Android.md @@ -1,24 +1,22 @@ -alim0x translating - -The history of Android +安卓编年史 ================================================================================ -![Yet another Android Market redesign dips its toe into the "cards" interface that would become a Google staple.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/play-store.png) -Yet another Android Market redesign dips its toe into the "cards" interface that would become a Google staple. -Photo by Ron Amadeo +![安卓市场的新设计试水“卡片式”界面,这将成为谷歌的主要风格。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/play-store.png) +安卓市场的新设计试水“卡片式”界面,这将成为谷歌的主要风格。 +Ron Amadeo 供图 -The Android Market released its fourth new design in Android's two-and-a-half years on the market. This new design was hugely important as it came really close to Google's "cards" interface. 
By displaying Apps or other content in little blocks, Google could seamlessly transition its app design between screens of various sizes with minimal effort. Content could be displayed just like photos in a gallery app—feed the layout renderer a big list of content blocks, enable screen wrapping, and you were done. Bigger screens saw more blocks of content, and smaller screens only saw a few at a time. With the content display out of the way, Google added a "Categories" fragment to the right side and a big featured app carousel at the top.
+安卓推向市场已经有两年半时间了,安卓市场放出了它的第四版设计。这个新设计十分重要,因为它已经很接近谷歌的“卡片式”界面了。通过在小方块中显示应用或其他内容,谷歌可以使其设计在不同尺寸屏幕下无缝过渡而不受影响。内容可以像一个相册应用里的照片一样显示——给布局渲染填充一个内容块列表,加上屏幕包装,就完成了。更大的屏幕一次可以看到更多的内容块,小点的屏幕一次看到的内容就少。安排好内容的显示方式后,谷歌还在右边新增了一个“分类”板块,顶部还有个巨大的热门应用滚动显示。

-While the design was ready for an easily configurable interface, the functionality was not. The original shipping version of the market was locked to a landscape orientation and was Honeycomb-exclusive.
+虽然设计上为更容易配置的界面做好了准备,但功能上还没有。最初发布的市场版本锁定为横屏模式,而且还是蜂巢独占的。

-![The app page and "My Apps" interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/device-2014-02-12-190002.png)
-The app page and "My Apps" interface.
-Photo by Ron Amadeo
+![应用详情页和“我的应用”界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/device-2014-02-12-190002.png)
+应用详情页和“我的应用”界面。
+Ron Amadeo 供图

-This new market sold not only apps, but brought Books and Movies rentals into the fold as well. Google was selling books since 2010; it was only ever through a Website. The new market unified all of Google's content sales in a single location and brought it one step closer to taking on Apple's iTunes juggernaut, though selling all of these items under the "Android Market" was a bit of a branding snafu, as much of the content didn't require Android to use.
+新的市场不仅出售应用,还加入了书籍和电影租借。谷歌从2010年开始出售图书;之前只通过网站出售。新的市场将谷歌所有的内容销售聚合到了一处,进一步向苹果 iTunes 的主宰展开较量。虽然在“安卓市场”出售这些东西有点品牌混乱,因为大部分内容都不依赖于安卓才能使用。 -![The browser did its best to look like Chrome, and Contacts used a two-pane interface.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/browsercontactst.png) -The browser did its best to look like Chrome, and Contacts used a two-pane interface. -Photo by Ron Amadeo +![浏览器看起来非常像 Chrome,联系人使用了双面板界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/browsercontactst.png) +浏览器看起来非常像 Chrome,联系人使用了双面板界面。 +Ron Amadeo 供图 The new Browser added an honest-to-goodness tabs strip at the top of the interface. While this browser wasn't Chrome, it aped a lot of Chrome's design and features. Besides the pioneering tabs-on-top interface, it added Incognito tabs, which kept no history or autocomplete records. There was also an option to have a Chrome-style new tab page consisting of thumbnails of your most-viewed webpages. From 1bc3516185f27417f9a750a8cd6b756010c72929 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 2 Aug 2015 13:58:34 +0800 Subject: [PATCH 003/697] Update 20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md [Translating] 20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md --- ...0150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md index 4b49e3acca..99b2b3acc1 100644 --- a/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md +++ b/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md @@ -1,3 +1,5 @@ ++Translating by Ezio + How to run Ubuntu Snappy Core on Raspberry Pi 2 ================================================================================ The Internet of Things (IoT) is upon us. 
In a couple of years some of us might ask ourselves how we ever survived without it, just like we question our past without cellphones today. Canonical is a contender in this fast growing, but still wide open market. The company wants to claim their stakes in IoT just as they already did for the cloud. At the end of January, the company launched a small operating system that goes by the name of [Ubuntu Snappy Core][1] which is based on Ubuntu Core. @@ -86,4 +88,4 @@ via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html [1]:http://www.ubuntu.com/things [2]:http://www.raspberrypi.org/downloads/ [3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html -[4]:https://developer.ubuntu.com/en/snappy/ \ No newline at end of file +[4]:https://developer.ubuntu.com/en/snappy/ From 70694119830a191fbcd888e946c9b7ecb3d0619b Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:13:22 +0800 Subject: [PATCH 004/697] Delete Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md --- ... RAID, Concepts of RAID and RAID Levels.md | 144 ------------------ 1 file changed, 144 deletions(-) delete mode 100644 sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md diff --git a/sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md b/sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md deleted file mode 100644 index 0f393fd7c4..0000000000 --- a/sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md +++ /dev/null @@ -1,144 +0,0 @@ -struggling 翻译中 -Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 -================================================================================ -RAID is a Redundant Array of Inexpensive disks, but nowadays it is called Redundant Array of Independent drives. 
Earlier it is used to be very costly to buy even a smaller size of disk, but nowadays we can buy a large size of disk with the same amount like before. Raid is just a collection of disks in a pool to become a logical volume. - -![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) - -Understanding RAID Setups in Linux - -Raid contains groups or sets or Arrays. A combine of drivers make a group of disks to form a RAID Array or RAID set. It can be a minimum of 2 number of disk connected to a raid controller and make a logical volume or more drives can be in a group. Only one Raid level can be applied in a group of disks. Raid are used when we need excellent performance. According to our selected raid level, performance will differ. Saving our data by fault tolerance & high availability. - -This series will be titled Preparation for the setting up RAID ‘s through Parts 1-9 and covers the following topics. - -- Part 1: Introduction to RAID, Concepts of RAID and RAID Levels -- Part 2: How to setup RAID0 (Stripe) in Linux -- Part 3: How to setup RAID1 (Mirror) in Linux -- Part 4: How to setup RAID5 (Striping with Distributed Parity) in Linux -- Part 5: How to setup RAID6 (Striping with Double Distributed Parity) in Linux -- Part 6: Setting Up RAID 10 or 1+0 (Nested) in Linux -- Part 7: Growing an Existing RAID Array and Removing Failed Disks in Raid -- Part 8: Recovering (Rebuilding) failed drives in RAID -- Part 9: Managing RAID in Linux - -This is the Part 1 of a 9-tutorial series, here we will cover the introduction of RAID, Concepts of RAID and RAID Levels that are required for the setting up RAID in Linux. - -### Software RAID and Hardware RAID ### - -Software RAID have low performance, because of consuming resource from hosts. Raid software need to load for read data from software raid volumes. Before loading raid software, OS need to get boot to load the raid software. No need of Physical hardware in software raids. Zero cost investment. 
- -Hardware RAID have high performance. They are dedicated RAID Controller which is Physically built using PCI express cards. It won’t use the host resource. They have NVRAM for cache to read and write. Stores cache while rebuild even if there is power-failure, it will store the cache using battery power backups. Very costly investments needed for a large scale. - -Hardware RAID Card will look like below: - -![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) - -Hardware RAID - -#### Featured Concepts of RAID #### - -- Parity method in raid regenerate the lost content from parity saved information’s. RAID 5, RAID 6 Based on Parity. -- Stripe is sharing data randomly to multiple disk. This won’t have full data in a single disk. If we use 3 disks half of our data will be in each disks. -- Mirroring is used in RAID 1 and RAID 10. Mirroring is making a copy of same data. In RAID 1 it will save the same content to the other disk too. -- Hot spare is just a spare drive in our server which can automatically replace the failed drives. If any one of the drive failed in our array this hot spare drive will be used and rebuild automatically. -- Chunks are just a size of data which can be minimum from 4KB and more. By defining chunk size we can increase the I/O performance. - -RAID’s are in various Levels. Here we will see only the RAID Levels which is used mostly in real environment. - -- RAID0 = Striping -- RAID1 = Mirroring -- RAID5 = Single Disk Distributed Parity -- RAID6 = Double Disk Distributed Parity -- RAID10 = Combine of Mirror & Stripe. (Nested RAID) - -RAID are managed using mdadm package in most of the Linux distributions. Let us get a Brief look into each RAID Levels. - -#### RAID 0 (or) Striping #### - -Striping have a excellent performance. In Raid 0 (Striping) the data will be written to disk using shared method. Half of the content will be in one disk and another half will be written to other disk. 
- -Let us assume we have 2 Disk drives, for example, if we write data “TECMINT” to logical volume it will be saved as ‘T‘ will be saved in first disk and ‘E‘ will be saved in Second disk and ‘C‘ will be saved in First disk and again ‘M‘ will be saved in Second disk and it continues in round-robin process. - -In this situation if any one of the drive fails we will loose our data, because with half of data from one of the disk can’t use to rebuilt the raid. But while comparing to Write Speed and performance RAID 0 is Excellent. We need at least minimum 2 disks to create a RAID 0 (Striping). If you need your valuable data don’t use this RAID LEVEL. - -- High Performance. -- There is Zero Capacity Loss in RAID 0 -- Zero Fault Tolerance. -- Write and Reading will be good performance. - -#### RAID 1 (or) Mirroring #### - -Mirroring have a good performance. Mirroring can make a copy of same data what we have. Assuming we have two numbers of 2TB Hard drives, total there we have 4TB, but in mirroring while the drives are behind the RAID Controller to form a Logical drive Only we can see the 2TB of logical drive. - -While we save any data, it will write to both 2TB Drives. Minimum two drives are needed to create a RAID 1 or Mirror. If a disk failure occurred we can reproduce the raid set by replacing a new disk. If any one of the disk fails in RAID 1, we can get the data from other one as there was a copy of same content in the other disk. So there is zero data loss. - -- Good Performance. -- Here Half of the Space will be lost in total capacity. -- Full Fault Tolerance. -- Rebuilt will be faster. -- Writing Performance will be slow. -- Reading will be good. -- Can be used for operating systems and database for small scale. - -#### RAID 5 (or) Distributed Parity #### - -RAID 5 is mostly used in enterprise levels. RAID 5 work by distributed parity method. Parity info will be used to rebuild the data. It rebuilds from the information left on the remaining good drives. 
This will protect our data from drive failure. - -Assume we have 4 drives, if one drive fails and while we replace the failed drive we can rebuild the replaced drive from parity informations. Parity information’s are Stored in all 4 drives, if we have 4 numbers of 1TB hard-drive. The parity information will be stored in 256GB in each drivers and other 768GB in each drives will be defined for Users. RAID 5 can be survive from a single Drive failure, If drives fails more than 1 will cause loss of data’s. - -- Excellent Performance -- Reading will be extremely very good in speed. -- Writing will be Average, slow if we won’t use a Hardware RAID Controller. -- Rebuild from Parity information from all drives. -- Full Fault Tolerance. -- 1 Disk Space will be under Parity. -- Can be used in file servers, web servers, very important backups. - -#### RAID 6 Two Parity Distributed Disk #### - -RAID 6 is same as RAID 5 with two parity distributed system. Mostly used in a large number of arrays. We need minimum 4 Drives, even if there 2 Drive fails we can rebuild the data while replacing new drives. - -Very slower than RAID 5, because it writes data to all 4 drivers at same time. Will be average in speed while we using a Hardware RAID Controller. If we have 6 numbers of 1TB hard-drives 4 drives will be used for data and 2 drives will be used for Parity. - -- Poor Performance. -- Read Performance will be good. -- Write Performance will be Poor if we not using a Hardware RAID Controller. -- Rebuild from 2 Parity Drives. -- Full Fault tolerance. -- 2 Disks space will be under Parity. -- Can be Used in Large Arrays. -- Can be use in backup purpose, video streaming, used in large scale. - -#### RAID 10 (or) Mirror & Stripe #### - -RAID 10 can be called as 1+0 or 0+1. This will do both works of Mirror & Striping. Mirror will be first and stripe will be the second in RAID 10. Stripe will be the first and mirror will be the second in RAID 01. RAID 10 is better comparing to 01. 
- -Assume, we have 4 Number of drives. While I’m writing some data to my logical volume it will be saved under All 4 drives using mirror and stripe methods. - -If I’m writing a data “TECMINT” in RAID 10 it will save the data as follow. First “T” will write to both disks and second “E” will write to both disk, this step will be used for all data write. It will make a copy of every data to other disk too. - -Same time it will use the RAID 0 method and write data as follow “T” will write to first disk and “E” will write to second disk. Again “C” will write to first Disk and “M” to second disk. - -- Good read and write performance. -- Here Half of the Space will be lost in total capacity. -- Fault Tolerance. -- Fast rebuild from copying data. -- Can be used in Database storage for high performance and availability. - -### Conclusion ### - -In this article we have seen what is RAID and which levels are mostly used in RAID in real environment. Hope you have learned the write-up about RAID. For RAID setup one must know about the basic Knowledge about RAID. The above content will fulfil basic understanding about RAID. - -In the next upcoming articles I’m going to cover how to setup and create a RAID using Various Levels, Growing a RAID Group (Array) and Troubleshooting with failed Drives and much more. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/understanding-raid-setup-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ \ No newline at end of file From 4e630c4b50c5c68b42fb2706cad309db1d8320b8 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:13:38 +0800 Subject: [PATCH 005/697] Delete Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md --- ...wo Devices' Using 'mdadm' Tool in Linux.md | 219 ------------------ 1 file changed, 219 deletions(-) delete mode 100644 sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md diff --git a/sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md b/sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md deleted file mode 100644 index 8057e4828e..0000000000 --- a/sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md +++ /dev/null @@ -1,219 +0,0 @@ -struggling 翻译中 -Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 -================================================================================ -RAID is Redundant Array of Inexpensive disks, used for high availability and reliability in large scale environments, where data need to be protected than normal use. Raid is just a collection of disks in a pool to become a logical volume and contains an array. A combine drivers makes an array or called as set of (group). 
- -RAID can be created, if there are minimum 2 number of disk connected to a raid controller and make a logical volume or more drives can be added in an array according to defined RAID Levels. Software Raid are available without using Physical hardware those are called as software raid. Software Raid will be named as Poor man raid. - -![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) - -Setup RAID0 in Linux - -Main concept of using RAID is to save data from Single point of failure, means if we using a single disk to store the data and if it’s failed, then there is no chance of getting our data back, to stop the data loss we need a fault tolerance method. So, that we can use some collection of disk to form a RAID set. - -#### What is Stripe in RAID 0? #### - -Stripe is striping data across multiple disk at the same time by dividing the contents. Assume we have two disks and if we save content to logical volume it will be saved under both two physical disks by dividing the content. For better performance RAID 0 will be used, but we can’t get the data if one of the drive fails. So, it isn’t a good practice to use RAID 0. The only solution is to install operating system with RAID0 applied logical volumes to safe your important files. - -- RAID 0 has High Performance. -- Zero Capacity Loss in RAID 0. No Space will be wasted. -- Zero Fault Tolerance ( Can’t get back the data if any one of disk fails). -- Write and Reading will be Excellent. - -#### Requirements #### - -Minimum number of disks are allowed to create RAID 0 is 2, but you can add more disk but the order should be twice as 2, 4, 6, 8. If you have a Physical RAID card with enough ports, you can add more disks. - -Here we are not using a Hardware raid, this setup depends only on Software RAID. If we have a physical hardware raid card we can access it from it’s utility UI. 
Some motherboard by default in-build with RAID feature, there UI can be accessed using Ctrl+I keys. - -If you’re new to RAID setups, please read our earlier article, where we’ve covered some basic introduction of about RAID. - -- [Introduction to RAID and RAID Concepts][1] - -**My Server Setup** - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.225 - Two Disks : 20 GB each - -This article is Part 2 of a 9-tutorial RAID series, here in this part, we are going to see how we can create and setup Software RAID0 or striping in Linux systems or servers using two 20GB disks named sdb and sdc. - -### Step 1: Updating System and Installing mdadm for Managing RAID ### - -1. Before setting up RAID0 in Linux, let’s do a system update and then install ‘mdadm‘ package. The mdadm is a small program, which will allow us to configure and manage RAID devices in Linux. - - # yum clean all && yum update - # yum install mdadm -y - -![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) - -Install mdadm Tool - -### Step 2: Verify Attached Two 20GB Drives ### - -2. Before creating RAID 0, make sure to verify that the attached two hard drives are detected or not, using the following command. - - # ls -l /dev | grep sd - -![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) - -Check Hard Drives - -3. Once the new hard drives detected, it’s time to check whether the attached drives are already using any existing raid with the help of following ‘mdadm’ command. - - # mdadm --examine /dev/sd[b-c] - -![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) - -Check RAID Devices - -In the above output, we come to know that none of the RAID have been applied to these two sdb and sdc drives. - -### Step 3: Creating Partitions for RAID ### - -4. Now create sdb and sdc partitions for raid, with the help of following fdisk command. 
Here, I will show how to create partition on sdb drive. - - # fdisk /dev/sdb - -Follow below instructions for creating partitions. - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. -- Next select the partition number as 1. -- Give the default value by just pressing two times Enter key. -- Next press ‘P‘ to print the defined partition. - -![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) - -Create Partitions - -Follow below instructions for creating Linux raid auto on partitions. - -- Press ‘L‘ to list all available types. -- Type ‘t‘to choose the partitions. -- Choose ‘fd‘ for Linux raid auto and press Enter to apply. -- Then again use ‘P‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) - -Create RAID Partitions in Linux - -**Note**: Please follow same above instructions to create partition on sdc drive now. - -5. After creating partitions, verify both the drivers are correctly defined for RAID using following command. - - # mdadm --examine /dev/sd[b-c] - # mdadm --examine /dev/sd[b-c]1 - -![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) - -Verify RAID Partitions - -### Step 4: Creating RAID md Devices ### - -6. Now create md device (i.e. /dev/md0) and apply raid level using below command. - - # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 - # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 - -- -C – create -- -l – level -- -n – No of raid-devices - -7. Once md device has been created, now verify the status of RAID Level, Devices and Array used, with the help of following series of commands as shown. 
- - # cat /proc/mdstat - -![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) - -Verify RAID Level - - # mdadm -E /dev/sd[b-c]1 - -![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) - -Verify RAID Device - - # mdadm --detail /dev/md0 - -![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) - -Verify RAID Array - -### Step 5: Assiging RAID Devices to Filesystem ### - -8. Create a ext4 filesystem for a RAID device /dev/md0 and mount it under /dev/raid0. - - # mkfs.ext4 /dev/md0 - -![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) - -Create ext4 Filesystem - -9. Once ext4 filesystem has been created for Raid device, now create a mount point directory (i.e. /mnt/raid0) and mount the device /dev/md0 under it. - - # mkdir /mnt/raid0 - # mount /dev/md0 /mnt/raid0/ - -10. Next, verify that the device /dev/md0 is mounted under /mnt/raid0 directory using df command. - - # df -h - -11. Next, create a file called ‘tecmint.txt‘ under the mount point /mnt/raid0, add some content to the created file and view the content of a file and directory. - - # touch /mnt/raid0/tecmint.txt - # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt - # cat /mnt/raid0/tecmint.txt - # ls -l /mnt/raid0/ - -![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) - -Verify Mount Device - -12. Once you’ve verified mount points, it’s time to create an fstab entry in /etc/fstab file. - - # vim /etc/fstab - -Add the following entry as described. May vary according to your mount location and filesystem you using. - - /dev/md0 /mnt/raid0 ext4 deaults 0 0 - -![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) - -Add Device to Fstab - -13. Run mount ‘-a‘ to check if there is any error in fstab entry. 
- - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) - -Check Errors in Fstab - -### Step 6: Saving RAID Configurations ### - -14. Finally, save the RAID configuration to a file to keep the configuration for future use. Again, we use the ‘mdadm’ command with the ‘-s‘ (scan) and ‘-v‘ (verbose) options as shown. - - # mdadm -E -s -v >> /etc/mdadm.conf - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - # cat /etc/mdadm.conf - -![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) - -Save RAID Configurations - -That’s it. We have seen here how to configure RAID0 (striping) by using two hard disks. In the next article, we will see how to set up RAID5. - -------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid0-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ \ No newline at end of file From 5a2d8d351fa17cb222c3ab9429902a75352d9804 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:15:47 +0800 Subject: [PATCH 006/697] =?UTF-8?q?Create=20Introduction=20to=20RAID,=20Co?= =?UTF-8?q?ncepts=20of=20RAID=20and=20RAID=20Levels=20=E2=80=93=20Part=201?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...
Concepts of RAID and RAID Levels – Part 1 | 146 ++++++++++++++++++ 1 file changed, 146 insertions(+) create mode 100644 translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 diff --git a/translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 b/translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 new file mode 100644 index 0000000000..8ca0ecbd7e --- /dev/null +++ b/translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 @@ -0,0 +1,146 @@ + +RAID的级别和概念的介绍 - 第1部分 +================================================================================ +RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 + + +![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) + +在 Linux 中理解 RAID 的设置 + +RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 + +这个系列被命名为RAID的构建共包含9个部分包括以下主题。 + +- 第1部分:RAID的级别和概念的介绍 +- 第2部分:在Linux中如何设置 RAID0(条带化) +- 第3部分:在Linux中如何设置 RAID1(镜像化) +- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) +- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) +- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) +- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 +- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 +- 第9部分:在 Linux 中管理 RAID + +这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 + + +### 软件RAID和硬件RAID ### + +软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 + +硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 + +硬件 RAID 卡如下所示: + +![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) + +硬件RAID + +#### 精选的 RAID 概念 #### + +- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 +- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 +- 镜像被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 +- 
在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 +- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 + +RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 + +- RAID0 = 条带化 +- RAID1 = 镜像 +- RAID5 = 单个磁盘分布式奇偶校验 +- RAID6 = 双盘分布式奇偶校验 +- RAID10 = 镜像 + 条带。(嵌套RAID) + +RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 + +#### RAID 0(或)条带化 #### + +条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 + +假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 + +在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 + +- 高性能。 +- 在 RAID0 上零容量损失。 +- 零容错。 +- 写和读有很高的性能。 + +#### RAID1(或)镜像化 #### + +镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两组2TB的硬盘驱动器,我们总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成一个逻辑驱动器,我们只能看到逻辑驱动器有2TB。 + +当我们保存数据时,它将同时写入2TB驱动器中。创建 RAID 1 (镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以恢复 RAID 通过更换一个新的磁盘。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为其他的磁盘中也有相同的数据。所以是零数据丢失。 + +- 良好的性能。 +- 空间的一半将在总容量丢失。 +- 完全容错。 +- 重建会更快。 +- 写性能将是缓慢的。 +- 读将会很好。 +- 被操作系统和数据库使用的规模很小。 + +#### RAID 5(或)分布式奇偶校验 #### + +RAID 5 多用于企业的水平。 RAID 5 的工作通过分布式奇偶校验的方法。奇偶校验信息将被用于重建数据。它需要留下的正常驱动器上的信息去重建。驱动器故障时,这会保护我们的数据。 + +假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 + +- 性能卓越 +- 读速度将非常好。 +- 如果我们不使用硬件 RAID 控制器,写速度是缓慢的。 +- 从所有驱动器的奇偶校验信息中重建。 +- 完全容错。 +- 1个磁盘空间将用于奇偶校验。 +- 可以被用在文件服务器,Web服务器,非常重要的备份中。 + +#### RAID 6 两个分布式奇偶校验磁盘 #### + +RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 + +它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 + +- 性能不佳。 +- 读的性能很好。 +- 如果我们不使用硬件 RAID 控制器写的性能会很差。 +- 从2奇偶校验驱动器上重建。 +- 完全容错。 +- 2个磁盘空间将用于奇偶校验。 +- 可用于大型阵列。 +- 在备份和视频流中大规模使用。 + +#### RAID 10(或)镜像+条带 #### + +RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 
上首先做条带,然后做镜像。RAID 10 比 01 好。 + +假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 + +如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 + +同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 + +- 良好的读写性能。 +- 空间的一半将在总容量丢失。 +- 容错。 +- 从备份数据中快速重建。 +- 它的高性能和高可用性常被用于数据库的存储中。 + +### 结论 ### + +在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 的哪个级别。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 + +在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/understanding-raid-setup-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ From f1ab428b04c8e3d225b0e5bfdde1c28f541282e3 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:16:13 +0800 Subject: [PATCH 007/697] =?UTF-8?q?Delete=20Introduction=20to=20RAID,=20Co?= =?UTF-8?q?ncepts=20of=20RAID=20and=20RAID=20Levels=20=E2=80=93=20Part=201?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
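The striping behaviour described in the Part 1 translation above (the "TECMINT" example, where alternating characters land on alternating disks) can be sketched with a small bash illustration. This is only a toy model of RAID 0 chunk distribution, not a real block-device operation; the variable names `disk1`/`disk2` are invented for the example:

```bash
#!/usr/bin/env bash
# Toy model of RAID 0 striping: distribute the characters of "TECMINT"
# round-robin across two simulated "disks", as the article describes.
data="TECMINT"
disk1="" disk2=""
for ((i = 0; i < ${#data}; i++)); do
    if (( i % 2 == 0 )); then
        disk1+="${data:i:1}"    # even positions go to the first disk
    else
        disk2+="${data:i:1}"    # odd positions go to the second disk
    fi
done
echo "disk1=$disk1"   # disk1=TCIT
echo "disk2=$disk2"   # disk2=EMN
# Losing either "disk" loses half of the data: RAID 0 has no redundancy.
```

Reassembling the original string requires both halves, which is exactly why a single failed member destroys a RAID 0 array.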
Concepts of RAID and RAID Levels – Part 1 | 146 ------------------ 1 file changed, 146 deletions(-) delete mode 100644 translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 diff --git a/translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 b/translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 deleted file mode 100644 index 8ca0ecbd7e..0000000000 --- a/translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 +++ /dev/null @@ -1,146 +0,0 @@ - -RAID的级别和概念的介绍 - 第1部分 -================================================================================ -RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 - - -![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) - -在 Linux 中理解 RAID 的设置 - -RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 - -这个系列被命名为RAID的构建共包含9个部分包括以下主题。 - -- 第1部分:RAID的级别和概念的介绍 -- 第2部分:在Linux中如何设置 RAID0(条带化) -- 第3部分:在Linux中如何设置 RAID1(镜像化) -- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) -- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) -- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) -- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 -- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 -- 第9部分:在 Linux 中管理 RAID - -这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 - - -### 软件RAID和硬件RAID ### - -软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 - -硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 - -硬件 RAID 卡如下所示: - -![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) - -硬件RAID - -#### 精选的 RAID 概念 #### - -- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 -- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 -- 镜像被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 -- 
在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 -- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 - -RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 - -- RAID0 = 条带化 -- RAID1 = 镜像 -- RAID5 = 单个磁盘分布式奇偶校验 -- RAID6 = 双盘分布式奇偶校验 -- RAID10 = 镜像 + 条带。(嵌套RAID) - -RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 - -#### RAID 0(或)条带化 #### - -条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 - -假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 - -在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 - -- 高性能。 -- 在 RAID0 上零容量损失。 -- 零容错。 -- 写和读有很高的性能。 - -#### RAID1(或)镜像化 #### - -镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两组2TB的硬盘驱动器,我们总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成一个逻辑驱动器,我们只能看到逻辑驱动器有2TB。 - -当我们保存数据时,它将同时写入2TB驱动器中。创建 RAID 1 (镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以恢复 RAID 通过更换一个新的磁盘。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为其他的磁盘中也有相同的数据。所以是零数据丢失。 - -- 良好的性能。 -- 空间的一半将在总容量丢失。 -- 完全容错。 -- 重建会更快。 -- 写性能将是缓慢的。 -- 读将会很好。 -- 被操作系统和数据库使用的规模很小。 - -#### RAID 5(或)分布式奇偶校验 #### - -RAID 5 多用于企业的水平。 RAID 5 的工作通过分布式奇偶校验的方法。奇偶校验信息将被用于重建数据。它需要留下的正常驱动器上的信息去重建。驱动器故障时,这会保护我们的数据。 - -假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 - -- 性能卓越 -- 读速度将非常好。 -- 如果我们不使用硬件 RAID 控制器,写速度是缓慢的。 -- 从所有驱动器的奇偶校验信息中重建。 -- 完全容错。 -- 1个磁盘空间将用于奇偶校验。 -- 可以被用在文件服务器,Web服务器,非常重要的备份中。 - -#### RAID 6 两个分布式奇偶校验磁盘 #### - -RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 - -它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 - -- 性能不佳。 -- 读的性能很好。 -- 如果我们不使用硬件 RAID 控制器写的性能会很差。 -- 从2奇偶校验驱动器上重建。 -- 完全容错。 -- 2个磁盘空间将用于奇偶校验。 -- 可用于大型阵列。 -- 在备份和视频流中大规模使用。 - -#### RAID 10(或)镜像+条带 #### - -RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 
上首先做条带,然后做镜像。RAID 10 比 01 好。 - -假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 - -如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 - -同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 - -- 良好的读写性能。 -- 空间的一半将在总容量丢失。 -- 容错。 -- 从备份数据中快速重建。 -- 它的高性能和高可用性常被用于数据库的存储中。 - -### 结论 ### - -在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 的哪个级别。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 - -在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/understanding-raid-setup-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ From 9e4df4271683151f4f04b3cc44bc04b1f16cf1e0 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:16:43 +0800 Subject: [PATCH 008/697] =?UTF-8?q?Create=20Introduction=20to=20RAID,=20Co?= =?UTF-8?q?ncepts=20of=20RAID=20and=20RAID=20Levels=20=E2=80=93=20Part=201?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
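The capacity rules quoted for each level in the article above (no loss for RAID 0, half the raw space for RAID 1/10, one disk's worth of parity for RAID 5, two for RAID 6) can be condensed into a small bash helper. The function name `usable_tb` and the whole-TB integer model are assumptions made for this illustration only:

```bash
#!/usr/bin/env bash
# usable_tb LEVEL NUM_DISKS DISK_TB
# Prints the usable capacity (in whole TB) of an array of equal-sized
# disks, following the rules of thumb given in the article.
usable_tb() {
    local level=$1 n=$2 size=$3
    case $level in
        0)  echo $(( n * size )) ;;        # RAID 0: all space is usable
        1)  echo $(( size )) ;;            # RAID 1: two-way mirror keeps one disk
        5)  echo $(( (n - 1) * size )) ;;  # RAID 5: one disk's worth of parity
        6)  echo $(( (n - 2) * size )) ;;  # RAID 6: two disks' worth of parity
        10) echo $(( n * size / 2 )) ;;    # RAID 10: mirroring halves raw space
        *)  return 1 ;;
    esac
}
usable_tb 5 4 1    # 4 x 1 TB in RAID 5  -> 3
usable_tb 6 6 1    # 6 x 1 TB in RAID 6  -> 4
usable_tb 10 4 1   # 4 x 1 TB in RAID 10 -> 2
```

These match the article's worked figures, e.g. six 1 TB drives in RAID 6 leave four drives' worth of usable space.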
Concepts of RAID and RAID Levels – Part 1 | 218 ++++++++++++++++++ 1 file changed, 218 insertions(+) create mode 100644 translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 diff --git a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 new file mode 100644 index 0000000000..9feba99609 --- /dev/null +++ b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 @@ -0,0 +1,218 @@ +在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 ‘两个设备’ 上 - 第2部分 +================================================================================ +RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 + +创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 + +![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) + +在 Linux 中创建 RAID0 + +使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 + +#### 在 RAID 0 中条带是什么 #### + +条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 + +- RAID 0 性能较高。 +- 在 RAID 0 上,空间零浪费。 +- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 +- 写和读性能得以提高。 + +#### 要求 #### + +创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 + +在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 + +如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 + +- [Introduction to RAID and RAID Concepts][1] + +**我的服务器设置** + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.225 + Two Disks : 20 GB each + +这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 + +### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### + +1.在 Linux 上设置 
RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 + + # yum clean all && yum update + # yum install mdadm -y + +![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) + +安装 mdadm 工具 + +### 第2步:检测并连接两个 20GB 的硬盘 ### + +2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 + + # ls -l /dev | grep sd + +![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) + +检查硬盘 + +3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 + + # mdadm --examine /dev/sd[b-c] + +![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) + +检查 RAID 设备 + +从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 + +### 第3步:创建 RAID 分区 ### + +4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 + + # fdisk /dev/sdb + +请按照以下说明创建分区。 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 ‘P’ 来打印创建好的分区。 + +![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) + +创建分区 + +请按照以下说明将分区创建为 Linux 的 RAID 类型。 + +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 去修改分区。 +- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) + +在 Linux 上创建 RAID 分区 + +**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 + +5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 + + # mdadm --examine /dev/sd[b-c] + # mdadm --examine /dev/sd[b-c]1 + +![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) + +验证 RAID 分区 + +### 第4步:创建 RAID md 设备 ### + +6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 + + # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 + # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 + +- -C – create +- -l – level +- -n – No of raid-devices + +7.一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 + + # cat /proc/mdstat + +![Verify RAID 
Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) + +查看 RAID 级别 + + # mdadm -E /dev/sd[b-c]1 + +![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) + +查看 RAID 设备 + + # mdadm --detail /dev/md0 + +![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) + +查看 RAID 阵列 + +### 第5步:挂载 RAID 设备到文件系统 ### + +8.将 RAID 设备 /dev/md0 创建为 ext4 文件系统并挂载到 /mnt/raid0 下。 + + # mkfs.ext4 /dev/md0 + +![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) + +创建 ext4 文件系统 + +9. ext4 文件系统为 RAID 设备创建好后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 + + # mkdir /mnt/raid0 + # mount /dev/md0 /mnt/raid0/ + +10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 + + # df -h + +11.接下来,创建一个名为 ‘tecmint.txt’ 的文件挂载到 /mnt/raid0 下,为创建的文件添加一些内容,并查看文件和目录的内容。 + + # touch /mnt/raid0/tecmint.txt + # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt + # cat /mnt/raid0/tecmint.txt + # ls -l /mnt/raid0/ + +![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) + +验证挂载的设备 + +12.一旦你验证挂载点后,同时将它添加到 /etc/fstab 文件中。 + + # vim /etc/fstab + +添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 + + /dev/md0 /mnt/raid0 ext4 deaults 0 0 + +![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) + +添加设备到 fstab 文件中 + +13.使用 mount ‘-a‘ 来检查 fstab 的条目是否有误。 + + # mount -av + +![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) + +检查 fstab 文件是否有误 + +### 第6步:保存 RAID 配置 ### + +14.最后,保存 RAID 配置到一个文件中,以供将来使用。同样,我们使用 ‘mdadm’ 命令带有 ‘-s‘ (scan) 和 ‘-v‘ (verbose) 选项,如图所示。 + + # mdadm -E -s -v >> /etc/mdadm.conf + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + # cat /etc/mdadm.conf + +![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) + +保存 RAID 配置 + +就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID0 
级别。在接下来的文章中,我们将看到如何设置 RAID5。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid0-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 0b475953acb42284d0e6595285b29c20745b9124 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:17:39 +0800 Subject: [PATCH 009/697] =?UTF-8?q?Update=20Introduction=20to=20RAID,=20Co?= =?UTF-8?q?ncepts=20of=20RAID=20and=20RAID=20Levels=20=E2=80=93=20Part=201?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Concepts of RAID and RAID Levels – Part 1 | 250 +++++++----------- 1 file changed, 89 insertions(+), 161 deletions(-) diff --git a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 index 9feba99609..8ca0ecbd7e 100644 --- a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 +++ b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 @@ -1,212 +1,141 @@ -在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 ‘两个设备’ 上 - 第2部分 + +RAID的级别和概念的介绍 - 第1部分 ================================================================================ -RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 +RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 -创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 -![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) +![RAID in 
Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) -在 Linux 中创建 RAID0 +在 Linux 中理解 RAID 的设置 -使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 +RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 -#### 在 RAID 0 中条带是什么 #### +这个系列被命名为RAID的构建共包含9个部分包括以下主题。 -条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 +- 第1部分:RAID的级别和概念的介绍 +- 第2部分:在Linux中如何设置 RAID0(条带化) +- 第3部分:在Linux中如何设置 RAID1(镜像化) +- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) +- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) +- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) +- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 +- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 +- 第9部分:在 Linux 中管理 RAID -- RAID 0 性能较高。 -- 在 RAID 0 上,空间零浪费。 -- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 -- 写和读性能得以提高。 +这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 -#### 要求 #### -创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 +### 软件RAID和硬件RAID ### -在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 +软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 -如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 +硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 -- [Introduction to RAID and RAID Concepts][1] +硬件 RAID 卡如下所示: -**我的服务器设置** +![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.225 - Two Disks : 20 GB each +硬件RAID -这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 +#### 精选的 RAID 概念 #### -### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 
### +- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 +- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 +- 镜像被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 +- 在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 +- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 -1.在 Linux 上设置 RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 +RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 - # yum clean all && yum update - # yum install mdadm -y +- RAID0 = 条带化 +- RAID1 = 镜像 +- RAID5 = 单个磁盘分布式奇偶校验 +- RAID6 = 双盘分布式奇偶校验 +- RAID10 = 镜像 + 条带。(嵌套RAID) -![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) +RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 -安装 mdadm 工具 +#### RAID 0(或)条带化 #### -### 第2步:检测并连接两个 20GB 的硬盘 ### +条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 -2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 +假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 - # ls -l /dev | grep sd +在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 -![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) +- 高性能。 +- 在 RAID0 上零容量损失。 +- 零容错。 +- 写和读有很高的性能。 -检查硬盘 +#### RAID1(或)镜像化 #### -3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 +镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两组2TB的硬盘驱动器,我们总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成一个逻辑驱动器,我们只能看到逻辑驱动器有2TB。 - # mdadm --examine /dev/sd[b-c] +当我们保存数据时,它将同时写入2TB驱动器中。创建 RAID 1 (镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以恢复 RAID 通过更换一个新的磁盘。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为其他的磁盘中也有相同的数据。所以是零数据丢失。 -![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) +- 良好的性能。 +- 空间的一半将在总容量丢失。 +- 完全容错。 +- 重建会更快。 +- 写性能将是缓慢的。 +- 读将会很好。 +- 被操作系统和数据库使用的规模很小。 -检查 RAID 设备 +#### RAID 5(或)分布式奇偶校验 #### -从上面的输出我们可以看到,没有任何 RAID 
使用 sdb 和 sdc 这两个驱动器。 +RAID 5 多用于企业的水平。 RAID 5 的工作通过分布式奇偶校验的方法。奇偶校验信息将被用于重建数据。它需要留下的正常驱动器上的信息去重建。驱动器故障时,这会保护我们的数据。 -### 第3步:创建 RAID 分区 ### +假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 -4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 +- 性能卓越 +- 读速度将非常好。 +- 如果我们不使用硬件 RAID 控制器,写速度是缓慢的。 +- 从所有驱动器的奇偶校验信息中重建。 +- 完全容错。 +- 1个磁盘空间将用于奇偶校验。 +- 可以被用在文件服务器,Web服务器,非常重要的备份中。 - # fdisk /dev/sdb +#### RAID 6 两个分布式奇偶校验磁盘 #### -请按照以下说明创建分区。 +RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 只需按两次回车键选择默认值即可。 -- 然后,按 ‘P’ 来打印创建好的分区。 +它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 -![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) +- 性能不佳。 +- 读的性能很好。 +- 如果我们不使用硬件 RAID 控制器写的性能会很差。 +- 从2奇偶校验驱动器上重建。 +- 完全容错。 +- 2个磁盘空间将用于奇偶校验。 +- 可用于大型阵列。 +- 在备份和视频流中大规模使用。 -创建分区 +#### RAID 10(或)镜像+条带 #### -请按照以下说明将分区创建为 Linux 的 RAID 类型。 +RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 去修改分区。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 +假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 -![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) +如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 -在 Linux 上创建 RAID 分区 +同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 -**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 +- 良好的读写性能。 +- 空间的一半将在总容量丢失。 +- 容错。 +- 从备份数据中快速重建。 +- 它的高性能和高可用性常被用于数据库的存储中。 -5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 +### 结论 ### - # mdadm --examine /dev/sd[b-c] - # mdadm --examine /dev/sd[b-c]1 +在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 
的哪个级别。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 -![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) - -验证 RAID 分区 - -### 第4步:创建 RAID md 设备 ### - -6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 - - # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 - # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 - -- -C – create -- -l – level -- -n – No of raid-devices - -7.一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 - - # cat /proc/mdstat - -![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) - -查看 RAID 级别 - - # mdadm -E /dev/sd[b-c]1 - -![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) - -查看 RAID 设备 - - # mdadm --detail /dev/md0 - -![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) - -查看 RAID 阵列 - -### 第5步:挂载 RAID 设备到文件系统 ### - -8.将 RAID 设备 /dev/md0 创建为 ext4 文件系统并挂载到 /mnt/raid0 下。 - - # mkfs.ext4 /dev/md0 - -![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) - -创建 ext4 文件系统 - -9. ext4 文件系统为 RAID 设备创建好后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 - - # mkdir /mnt/raid0 - # mount /dev/md0 /mnt/raid0/ - -10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 - - # df -h - -11.接下来,创建一个名为 ‘tecmint.txt’ 的文件挂载到 /mnt/raid0 下,为创建的文件添加一些内容,并查看文件和目录的内容。 - - # touch /mnt/raid0/tecmint.txt - # echo "Hi everyone how you doing ?" 
> /mnt/raid0/tecmint.txt - # cat /mnt/raid0/tecmint.txt - # ls -l /mnt/raid0/ - -![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) - -验证挂载的设备 - -12.一旦你验证挂载点后,同时将它添加到 /etc/fstab 文件中。 - - # vim /etc/fstab - -添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 - - /dev/md0 /mnt/raid0 ext4 deaults 0 0 - -![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) - -添加设备到 fstab 文件中 - -13.使用 mount ‘-a‘ 来检查 fstab 的条目是否有误。 - - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) - -检查 fstab 文件是否有误 - -### 第6步:保存 RAID 配置 ### - -14.最后,保存 RAID 配置到一个文件中,以供将来使用。同样,我们使用 ‘mdadm’ 命令带有 ‘-s‘ (scan) 和 ‘-v‘ (verbose) 选项,如图所示。 - - # mdadm -E -s -v >> /etc/mdadm.conf - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - # cat /etc/mdadm.conf - -![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) - -保存 RAID 配置 - -就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID0 级别。在接下来的文章中,我们将看到如何设置 RAID5。 +在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 -------------------------------------------------------------------------------- -via: http://www.tecmint.com/create-raid0-in-linux/ +via: http://www.tecmint.com/understanding-raid-setup-in-linux/ 作者:[Babin Lonston][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) @@ -215,4 +144,3 @@ via: http://www.tecmint.com/create-raid0-in-linux/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 958d434bed7380f898c012761a351780b273d9bd Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:18:07 +0800 Subject: [PATCH 010/697] =?UTF-8?q?Create=20Creating=20Software=20RAID0=20?= =?UTF-8?q?(Stripe)=20on=20=E2=80=98Two=20Devices=E2=80=99=20Using=20?= 
=?UTF-8?q?=E2=80=98mdadm=E2=80=99=20Tool=20in=20Linux=20=E2=80=93=20Part?= =?UTF-8?q?=202?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...evices’ Using ‘mdadm’ Tool in Linux – Part 2 | 218 ++++++++++++++++++ 1 file changed, 218 insertions(+) create mode 100644 translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 diff --git a/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 b/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 new file mode 100644 index 0000000000..9feba99609 --- /dev/null +++ b/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 @@ -0,0 +1,218 @@ +在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 ‘两个设备’ 上 - 第2部分 +================================================================================ +RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 + +创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 + +![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) + +在 Linux 中创建 RAID0 + +使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 + +#### 在 RAID 0 中条带是什么 #### + +条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 + +- RAID 0 性能较高。 +- 在 RAID 0 上,空间零浪费。 +- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 +- 写和读性能得以提高。 + +#### 要求 #### + +创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 + +在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 + +如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 + +- [Introduction to RAID 
and RAID Concepts][1] + +**我的服务器设置** + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.225 + Two Disks : 20 GB each + +这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 + +### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### + +1.在 Linux 上设置 RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 + + # yum clean all && yum update + # yum install mdadm -y + +![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) + +安装 mdadm 工具 + +### 第2步:检测并连接两个 20GB 的硬盘 ### + +2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 + + # ls -l /dev | grep sd + +![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) + +检查硬盘 + +3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 + + # mdadm --examine /dev/sd[b-c] + +![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) + +检查 RAID 设备 + +从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 + +### 第3步:创建 RAID 分区 ### + +4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 + + # fdisk /dev/sdb + +请按照以下说明创建分区。 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 ‘P’ 来打印创建好的分区。 + +![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) + +创建分区 + +请按照以下说明将分区创建为 Linux 的 RAID 类型。 + +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 去修改分区。 +- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) + +在 Linux 上创建 RAID 分区 + +**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 + +5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 + + # mdadm --examine /dev/sd[b-c] + # mdadm --examine /dev/sd[b-c]1 + +![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) + +验证 RAID 分区 + +### 第4步:创建 RAID md 设备 ### + +6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 
RAID 合适的级别。 + + # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 + # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 + +- -C – 创建(create)阵列 +- -l – RAID 级别(level) +- -n – RAID 设备的数目(no. of raid-devices) + +7.md 设备创建完成后,使用如下命令可以查看 RAID 的级别、设备和阵列的使用状态。 + + # cat /proc/mdstat + +![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) + +查看 RAID 级别 + + # mdadm -E /dev/sd[b-c]1 + +![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) + +查看 RAID 设备 + + # mdadm --detail /dev/md0 + +![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) + +查看 RAID 阵列 + +### 第5步:挂载 RAID 设备到文件系统 ### + +8.在 RAID 设备 /dev/md0 上创建 ext4 文件系统,并把它挂载到 /mnt/raid0 下。 + + # mkfs.ext4 /dev/md0 + +![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) + +创建 ext4 文件系统 + +9.为 RAID 设备创建好 ext4 文件系统后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载到该目录下。 + + # mkdir /mnt/raid0 + # mount /dev/md0 /mnt/raid0/ + +10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 + + # df -h + +11.接下来,在挂载点 /mnt/raid0 下创建一个名为 ‘tecmint.txt’ 的文件,为该文件添加一些内容,然后查看文件和目录的内容。 + + # touch /mnt/raid0/tecmint.txt + # echo "Hi everyone how you doing ?" 
> /mnt/raid0/tecmint.txt + # cat /mnt/raid0/tecmint.txt + # ls -l /mnt/raid0/ + +![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) + +验证挂载的设备 + +12.验证挂载点无误后,将它添加到 /etc/fstab 文件中,以便开机时自动挂载。 + + # vim /etc/fstab + +添加以下条目,根据你的挂载位置和所用文件系统的不同,自行做修改。 + + /dev/md0 /mnt/raid0 ext4 defaults 0 0 + +![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) + +添加设备到 fstab 文件中 + +13.使用 mount 的 ‘-a’ 选项来检查 fstab 的条目是否有误。 + + # mount -av + +![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) + +检查 fstab 文件是否有误 + +### 第6步:保存 RAID 配置 ### + +14.最后,保存 RAID 配置到一个文件中,以供将来使用。为此,我们使用带有 ‘-s’ (scan) 和 ‘-v’ (verbose) 选项的 ‘mdadm’ 命令,如图所示。 + + # mdadm -E -s -v >> /etc/mdadm.conf + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + # cat /etc/mdadm.conf + +![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) + +保存 RAID 配置 + +至此,我们已经了解了如何使用两块硬盘配置条带化的 RAID0 阵列。在接下来的文章中,我们将看到如何设置 RAID5。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid0-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 6a03d892dc65eba50a5390d41dc88241690b109b Mon Sep 17 00:00:00 2001 From: XLCYun Date: Sun, 2 Aug 2015 19:39:53 +0800 Subject: [PATCH 011/697] Delete 20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md --- ...t Right & Wrong - Page 1 - Introduction.md | 55 ------------------- 1 file changed, 55 deletions(-) delete mode 100644 sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - 
Page 1 - Introduction.md diff --git a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md b/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md deleted file mode 100644 index 43735170c3..0000000000 --- a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md +++ /dev/null @@ -1,55 +0,0 @@ -Translating by XLCYun. -A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 1 - Introduction -================================================================================ -*Author's Note: If by some miracle you managed to click this article without reading the title then I want to re-iterate something... This is an editorial. These are my opinions. They are not representative of Phoronix, or Michael, these are my own thoughts.* - -Additionally, yes... This is quite possibly a flame-bait article. I hope the community is better than that, because I do want to start a discussion and give feedback to both the KDE and Gnome communities. For that reason when I point out, what I see as, a flaw I will try to be specific and direct so that any discussion can be equally specific and direct. For the record: The alternative title for this article was "Death By A Thousand [Paper Cuts][1]". - -Now, with that out of the way... Onto the article. - -![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920) - -When I sent the [Fedora 22 KDE Review][2] off to Michael I did it with a bit of a bad taste in my mouth. It wasn't because I didn't like KDE, or hadn't been enjoying Fedora, far from it. In fact, I started to transition my T450s over to Arch Linux but quickly decided against that, as I enjoyed the level of convenience that Fedora brings to me for many things. 
- -The reason I had a bad taste in my mouth was because the Fedora developers put a lot of time and effort into their "Workstation" product and I wasn't seeing any of it. I wasn't using Fedora the way the main developers had intended it to be used and therefore wasn't getting the "Fedora Experience." It felt like someone reviewing Ubuntu by using Kubuntu, using a Hackintosh to review OS X, or reviewing Gentoo by using Sabayon. A lot of readers in the forums bash on Michael for reviewing distributions in their default configurations-- myself included. While I still do believe that reviews should be done under 'real-world' configurations, I do see the value in reviewing something in the condition it was given to you-- for better or worse. - -It was with that attitude in mind that I decided to take a dip in the Gnome pool. - -I do, however, need to add one more disclaimer... I am looking at KDE and Gnome as they are packaged in Fedora. OpenSUSE, Kubuntu, Arch, etc, might all have different implementations of each desktop that will change whether my specific 'pain points' are relevant to your distribution. Furthermore, despite the title, this is going to be a VERY KDE heavy article. I called the article what I did because it was actually USING Gnome that made me realize how many "paper cuts" KDE actually has. - -### Login Screen ### - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920) - -I normally don't mind Distributions shipping distro-specific themes, because most of them make the desktop look nicer. I finally found my exception. - -First impression's count for a lot, right? Well, GDM definitely gets this one right. The login screen is incredibly clean with consistent design language through every single part of it. The use of common-language icons instead of text boxes helps in that regard. 
- -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920) - -That is not to say that the Fedora 22 KDE login screen-- now SDDM rather than KDM-- looks 'bad' per say but its definitely more jarring. - -Where's the fault? The top bar. Look at the Gnome screenshot-- you select a user and you get a tiny little gear simple for selecting what session you want to log into. The design is clean, it gets out of your way, you could honestly miss it completely if you weren't paying attention. Now look at the blue KDE screenshot, the bar doesn't look it was even rendered using the same widgets, and its entire placement feels like an after thought of "Well shit, we need to throw this option somewhere..." - -The same can be said for the Reboot and Shutdown options in the top right. Why not just a power button that creates a drop down menu that has a drop down for Reboot, Shutdown, Suspend? Having the buttons be different colors than the background certainly makes them stick out and be noticeable... but I don't think in a good way. Again, they feel like an after thought. - -GDM is also far more useful from a practical standpoint, look again along the top row. The time is listed, there's a volume control so that if you are trying to be quiet you can mute all sounds before you even login, there's an accessibility button for things like high contrast, zooming, test to speech, etc, all available via simple toggle buttons. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920) - -Swap it to upstream's Breeze theme and... suddenly most of my complaints are fixed. Common-language icons, everything is in the center of the screen, but the less important stuff is off to the sides. This creates a nice harmony between the top and bottom of the screen since they are equally empty. 
You still have a text box for the session switcher, but I can forgive that since the power buttons are now common language icons. Current time is available which is a nice touch, as is a battery life indicator. Sure gnome still has a few nice additions, such as the volume applet and the accessibility buttons, but Breeze is a step up from Fedora's KDE theme. - -Go to Windows (pre-Windows 8 & 10...) or OS X and you will see similar things – very clean, get-out-of-your-way lock screens and login screens that are devoid of text boxes or other widgets that distract the eye. It's a design that works and that is non-distracting. Fedora... Ship Breeze by default. VDG got the design of the Breeze theme right. Don't mess it up. - --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1 - -作者:Eric Griffith -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts -[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1 From b2d7ab2ba05d746334150505746e8c43bbe4f3a8 Mon Sep 17 00:00:00 2001 From: XLCYun Date: Sun, 2 Aug 2015 19:40:51 +0800 Subject: [PATCH 012/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit XLCYun 翻译完成 --- ...t Right & Wrong - Page 1 - Introduction.md | 55 +++++++++++++++++++ 1 file changed, 55 insertions(+) create mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - 
Introduction.md new file mode 100644 index 0000000000..de47f0864e --- /dev/null +++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md @@ -0,0 +1,55 @@ +将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第一节 - 简介 +================================================================================ +*作者声明: 如果你是因为某种神迹而在没看标题的情况下点开了这篇文章,那么我想再重申一些东西……这是一篇评论文章。文中的观点都是我自己的,不代表Phoronix和Michael的观点,它们完全是我自己的想法。* + +另外,没错……这可能是一篇引战的文章。我希望社区成员们更沉稳一些,因为我确实想在KDE和Gnome社区发起讨论、获得反馈。因此当我想指出——我所看到的——一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明:本文另一可选标题为“被[剪纸][1]千刀万剐”(原文剪纸一词为papercuts, 指易修复而烦人的漏洞,译者注)。 + +现在,重申完毕……文章开始。 + +![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920) + +当我把[《评价Fedora 22 KDE》][2]一文发给Michael时,感觉很不是滋味。不是因为我不喜欢KDE,或者不享受Fedora,远非如此。事实上,我刚开始想把我的T450s的系统换为Arch Linux,但马上又决定放弃了,因为我很享受Fedora在很多方面所带来的便捷性。 + +我感觉很不是滋味的原因是Fedora的开发者花费了大量的时间和精力在他们的“工作站”产品上,但是我却一点也没看到。在使用Fedora时,我采用的并非那些主要开发者希望用户采用的那种使用方式,因此我也就体验不到所谓的“Fedora体验”。它感觉就像一个人评价Ubuntu时用的却是Kubuntu,评价OS X时用的却是Hackintosh,或者评价Gentoo时用的却是Sabayon。Michael论坛上的很多读者都批评他在评价发行版时使用的是默认配置——我也是其中之一。但是我还是认为评价应该在“真实”配置下完成,当然我也明白按拿到手的原样去评价某些东西也的确有其价值——无论是好是坏。 + +正是在怀着这种态度的情况下,我决定到Gnome这个水坑里来泡泡澡。 + +但是,我还要在此多加一个声明……我在这里所看到的KDE和Gnome都是打包在Fedora中的。OpenSUSE, Kubuntu, Arch等发行版的各个桌面可能有不同的实现方法,使得我这里所说的具体的“痛处”跟你所用的发行版有所不同。还有,虽然用了这个标题,但这篇文章的内容将会非常偏重于KDE。之所以给文章起这个标题,是因为我在使用了Gnome之后,才知道KDE的“剪纸”到底有多多。 + +### 登录界面 ### + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920) + +我一般情况下都不会介意发行版搭载它们自己定制的主题,因为一般情况下这会让桌面看起来更好看。可我今天可算是找到了一个例外。 + +第一印象很重要,对吧?那么,GDM(Gnome Display Manager:Gnome显示管理器,译者注,下同。)绝对干得漂亮。它的登录界面看起来极度简洁,每一部分都应用了一致的设计风格。使用通用图标而不是输入框为它的简洁加了分。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920) + +这并不是说Fedora 22 KDE——现在已经是SDDM而不是KDM了——的登录界面不好看,但是看起来绝对没有它这样和谐。 + 
+问题到底出在哪?顶部栏。看看Gnome的截图——你选择一个用户,然后用一个很小的齿轮简单地选择想登入哪个会话。设计很简洁,它不挡着你的道儿,实话讲,如果你没注意的话可能完全会看不到它。现在看看那蓝色( blue,有忧郁之意,一语双关,译者注)的KDE截图,顶部栏看起来甚至不像是用同一个工具渲染出来的,它的整个位置安排就像是某人事后才想到:“哎哟妈呀,我们需要把这个选项扔在某个地方……”才决定下来的。 + +对于右上角的重启和关机选项也一样。为什么不单单用一个电源按钮,点击后下拉出一个包含重启、关机、挂起选项的菜单?按钮的颜色跟背景色不同肯定会让它更加突兀和显眼……但我可不觉得这样子有多好。同样,它们看起来就像是事后才补上去的。 + +从实用角度来看,GDM还要实用得多,再看看它顶部的一栏。时间被列了出来,还有一个音量控制按钮,如果你想保持周围安静,你甚至可以在登录前设置静音,还有一个辅助功能按钮,用来开启高对比度、缩放、文字转语音等功能,所有这些功能只需一个简单的开关按钮就能使用。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920) + +切换到上游的Breeze主题……突然间,我抱怨的大部分问题都得到了解决。通用图标,所有东西都放在了屏幕中央,但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白,在中间也就酝酿出了一种美好的和谐。还是有一个输入框来切换会话,但既然电源按钮被做成了通用图标,那么这点还算可以原谅。当然Gnome还是有一些很好的额外功能,例如音量小程序和辅助功能按钮,但Breeze总归比Fedora的KDE主题进了一步。 + +到Windows(Windows 8和10之前)或者OS X中去,你会看到类似的东西——非常简洁的,“不挡你道”的锁屏与登录界面,它们都没有输入框或者其它分散视觉的小工具。这是一种有效的不分散人注意力的设计。Fedora……请默认搭载Breeze吧。VDG在Breeze主题设计上干得不错,可别糟蹋了它。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts +[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1 +[3]:https://launchpad.net/hundredpapercuts From e01d81f5945506d1b0098cdc625a8ab565442e2e Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 2 Aug 2015 22:06:53 +0800 Subject: [PATCH 013/697] PUB:20150722 How To Manage StartUp Applications In Ubuntu @FSSlc --- ...22 How To Manage StartUp Applications In Ubuntu.md | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) rename {translated/tech => published}/20150722 How To Manage StartUp Applications In Ubuntu.md (92%) diff --git a/translated/tech/20150722 How To Manage StartUp Applications In Ubuntu.md b/published/20150722 How 
To Manage StartUp Applications In Ubuntu.md similarity index 92% rename from translated/tech/20150722 How To Manage StartUp Applications In Ubuntu.md rename to published/20150722 How To Manage StartUp Applications In Ubuntu.md index 745a84860a..3494e90a61 100644 --- a/translated/tech/20150722 How To Manage StartUp Applications In Ubuntu.md +++ b/published/20150722 How To Manage StartUp Applications In Ubuntu.md @@ -6,17 +6,17 @@ 每当你开机进入一个操作系统,一系列的应用将会自动启动。这些应用被称为‘开机启动应用’ 或‘开机启动程序’。随着时间的推移,当你在系统中安装了足够多的应用时,你将发现有太多的‘开机启动应用’在开机时自动地启动了,它们吃掉了很多的系统资源,并将你的系统拖慢。这可能会让你感觉卡顿,我想这种情况并不是你想要的。 -让 Ubuntu 变得更快的方法之一是对这些开机启动应用进行控制。 Ubuntu 为你提供了一个 GUI 工具来让你发现这些开机启动应用,然后完全禁止或延迟它们的启动,这样就可以不让每个应用在开机时同时运行。 +让 Ubuntu 变得更快的方法之一是对这些开机启动应用进行控制。 Ubuntu 为你提供了一个 GUI 工具来让你找到这些开机启动应用,然后完全禁止或延迟它们的启动,这样就可以不让每个应用在开机时同时运行。 在这篇文章中,我们将看到 **在 Ubuntu 中,如何控制开机启动应用,如何让一个应用在开机时启动以及如何发现隐藏的开机启动应用。**这里提供的指导对所有的 Ubuntu 版本均适用,例如 Ubuntu 12.04, Ubuntu 14.04 和 Ubuntu 15.04。 ### 在 Ubuntu 中管理开机启动应用 ### -默认情况下, Ubuntu 提供了一个`开机启动应用工具`来供你使用,你不必再进行安装。只需到 Unity 面板中就可以查找到该工具。 +默认情况下, Ubuntu 提供了一个`Startup Applications`工具来供你使用,你不必再进行安装。只需到 Unity 面板中就可以查找到该工具。 ![ubuntu 中的开机启动应用工具](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_applications_Ubuntu.jpeg) -点击它来启动。下面是我的`开机启动应用`的样子: +点击它来启动。下面是我的`Startup Applications`的样子: ![在 Ubuntu 中查看开机启动程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Screenshot-from-2015-07-18-122550.png) @@ -84,7 +84,7 @@ 就这样,你将在下一次开机时看到这个程序会自动运行。这就是在 Ubuntu 中你能做的关于开机启动应用的所有事情。 -到现在为止,我们已经讨论在开机时可见的应用,但仍有更多的服务,守护进程和程序并不在`开机启动应用工具`中可见。下一节中,我们将看到如何在 Ubuntu 中查看这些隐藏的开机启动程序。 +到现在为止,我们已经讨论在开机时可见到的应用,但仍有更多的服务,守护进程和程序并不在`开机启动应用工具`中可见。下一节中,我们将看到如何在 Ubuntu 中查看这些隐藏的开机启动程序。 ### 在 Ubuntu 中查看隐藏的开机启动程序 ### @@ -97,13 +97,14 @@ ![在 Ubuntu 中查看隐藏的开机启动程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Hidden_startup_program_Ubuntu.jpg) 你可以像先前我们讨论的那样管理这些开机启动应用。我希望这篇教程可以帮助你在 Ubuntu 中控制开机启动程序。任何的问题或建议总是欢迎的。 + 
-------------------------------------------------------------------------------- via: http://itsfoss.com/manage-startup-applications-ubuntu/ 作者:[Abhishek][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 7f1ac1f1041cd8f0db2d5bee74dbb9b1151a5ccf Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 2 Aug 2015 22:33:58 +0800 Subject: [PATCH 014/697] PUB:20150625 How to Provision Swarm Clusters using Docker Machine @DongShuaike --- ...ion Swarm Clusters using Docker Machine.md | 41 ++++++++++--------- 1 file changed, 21 insertions(+), 20 deletions(-) rename {translated/tech => published}/20150625 How to Provision Swarm Clusters using Docker Machine.md (54%) diff --git a/translated/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md b/published/20150625 How to Provision Swarm Clusters using Docker Machine.md similarity index 54% rename from translated/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md rename to published/20150625 How to Provision Swarm Clusters using Docker Machine.md index 940c68b55d..a36284e6de 100644 --- a/translated/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md +++ b/published/20150625 How to Provision Swarm Clusters using Docker Machine.md @@ -1,11 +1,14 @@ 如何使用Docker Machine部署Swarm集群 ================================================================================ -大家好,今天我们来研究一下如何使用Docker Machine部署Swarm集群。Docker Machine提供了独立的Docker API,所以任何与Docker守护进程进行交互的工具都可以使用Swarm来(透明地)扩增到多台主机上。Docker Machine可以用来在个人电脑、云端以及的数据中心里创建Docker主机。它为创建服务器,安装Docker以及根据用户设定配置Docker客户端提供了便捷化的解决方案。我们可以使用任何驱动来部署swarm集群,并且swarm集群将由于使用了TLS加密具有极好的安全性。 + +大家好,今天我们来研究一下如何使用Docker Machine部署Swarm集群。Docker Machine提供了标准的Docker API 支持,所以任何可以与Docker守护进程进行交互的工具都可以使用Swarm来(透明地)扩增到多台主机上。Docker 
Machine可以用来在个人电脑、云端以及的数据中心里创建Docker主机。它为创建服务器,安装Docker以及根据用户设定来配置Docker客户端提供了便捷化的解决方案。我们可以使用任何驱动来部署swarm集群,并且swarm集群将由于使用了TLS加密具有极好的安全性。 下面是我提供的简便方法。 + ### 1. 安装Docker Machine ### -Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Github上下载最新版本的Docker Machine。我们使用curl命令来下载最先版本Docker Machine ie 0.2.0。 +Docker Machine 在各种Linux系统上都支持的很好。首先,我们需要从Github上下载最新版本的Docker Machine。我们使用curl命令来下载最先版本Docker Machine ie 0.2.0。 + 64位操作系统: # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine @@ -18,7 +21,7 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git # chmod +x /usr/local/bin/docker-machine -在做完上面的事情以后,我们必须确保docker-machine已经安装好。怎么检查呢?运行docker-machine -v指令,指令将会给出我们系统上所安装的docker-machine版本。 +在做完上面的事情以后,我们要确保docker-machine已经安装正确。怎么检查呢?运行`docker-machine -v`指令,该指令将会给出我们系统上所安装的docker-machine版本。 # docker-machine -v @@ -31,14 +34,15 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git ### 2. 创建Machine ### -在将Docker Machine安装到我们的设备上之后,我们需要使用Docker Machine创建一个machine。在这片文章中,我们会将其部署在Digital Ocean Platform上。所以我们将使用“digitalocean”作为它的Driver API,然后将docker swarm运行在其中。这个Droplet会被设置为Swarm主节点,我们还要创建另外一个Droplet,并将其设定为Swarm节点代理。 +在将Docker Machine安装到我们的设备上之后,我们需要使用Docker Machine创建一个machine。在这篇文章中,我们会将其部署在Digital Ocean Platform上。所以我们将使用“digitalocean”作为它的Driver API,然后将docker swarm运行在其中。这个Droplet会被设置为Swarm主控节点,我们还要创建另外一个Droplet,并将其设定为Swarm节点代理。 + 创建machine的命令如下: # docker-machine create --driver digitalocean --digitalocean-access-token linux-dev -**Note**: 假设我们要创建一个名为“linux-dev”的machine。是用户在Digital Ocean Cloud Platform的Digital Ocean控制面板中生成的密钥。为了获取这个密钥,我们需要登录我们的Digital Ocean控制面板,然后点击API选项,之后点击Generate New Token,起个名字,然后在Read和Write两个选项上打钩。之后我们将得到一个很长的十六进制密钥,这个就是了。用其替换上面那条命令中的API-Token字段。 +**备注**: 假设我们要创建一个名为“linux-dev”的machine。是用户在Digital Ocean Cloud Platform的Digital Ocean控制面板中生成的密钥。为了获取这个密钥,我们需要登录我们的Digital Ocean控制面板,然后点击API选项,之后点击Generate New Token,起个名字,然后在Read和Write两个选项上打钩。之后我们将得到一个很长的十六进制密钥,这个就是了。用其替换上面那条命令中的API-Token字段。 -现在,运行下面的指令,将Machine 
configuration装载进shell。 +现在,运行下面的指令,将Machine 的配置变量加载进shell里。 # eval "$(docker-machine env linux-dev)" @@ -48,7 +52,7 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git # docker-machine active linux-dev -现在,我们检查是否它(指machine)被标记为了 ACTIVE "*"。 +现在,我们检查它(指machine)是否被标记为了 ACTIVE "*"。 # docker-machine ls @@ -56,22 +60,21 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git ### 3. 运行Swarm Docker镜像 ### -现在,在我们创建完成了machine之后。我们需要将swarm docker镜像部署上去。这个machine将会运行这个docker镜像并且控制Swarm主节点和从节点。使用下面的指令运行镜像: +现在,在我们创建完成了machine之后。我们需要将swarm docker镜像部署上去。这个machine将会运行这个docker镜像,并且控制Swarm主控节点和从节点。使用下面的指令运行镜像: # docker run swarm create ![Docker Machine Swarm Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-create.png) -If you are trying to run swarm docker image using **32 bit Operating System** in the computer where Docker Machine is running, we'll need to SSH into the Droplet. 如果你想要在**32位操作系统**上运行swarm docker镜像。你需要SSH登录到Droplet当中。 # docker-machine ssh #docker run swarm create #exit -### 4. 创建Swarm主节点 ### +### 4. 创建Swarm主控节点 ### -在我们的swarm image已经运行在machine当中之后,我们将要创建一个Swarm主节点。使用下面的语句,添加一个主节点。(这里的感觉怪怪的,好像少翻译了很多东西,是我把Master翻译为主节点的原因吗?) +在我们的swarm image已经运行在machine当中之后,我们将要创建一个Swarm主控节点。使用下面的语句,添加一个主控节点。 # docker-machine create \ -d digitalocean \ @@ -83,9 +86,9 @@ If you are trying to run swarm docker image using **32 bit Operating System** in ![Docker Machine Swarm Master Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-master-create.png) -### 5. 创建Swarm结点群 ### +### 5. 
创建Swarm从节点 ### -现在,我们将要创建一个swarm结点,此结点将与Swarm主节点相连接。下面的指令将创建一个新的名为swarm-node的droplet,其与Swarm主节点相连。到此,我们就拥有了一个两节点的swarm集群了。 +现在,我们将要创建一个swarm从节点,此节点将与Swarm主控节点相连接。下面的指令将创建一个新的名为swarm-node的droplet,其与Swarm主控节点相连。到此,我们就拥有了一个两节点的swarm集群了。 # docker-machine create \ -d digitalocean \ @@ -96,21 +99,19 @@ If you are trying to run swarm docker image using **32 bit Operating System** in ![Docker Machine Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-nodes.png) -### 6. Connecting to the Swarm Master ### -### 6. 与Swarm主节点连接 ### +### 6. 与Swarm主控节点连接 ### -现在,我们连接Swarm主节点以便我们可以依照需求和配置文件在节点间部署Docker容器。运行下列命令将Swarm主节点的Machine配置文件加载到环境当中。 +现在,我们连接Swarm主控节点以便我们可以依照需求和配置文件在节点间部署Docker容器。运行下列命令将Swarm主控节点的Machine配置文件加载到环境当中。 # eval "$(docker-machine env --swarm swarm-master)" -然后,我们就可以跨结点地运行我们所需的容器了。在这里,我们还要检查一下是否一切正常。所以,运行**docker info**命令来检查Swarm集群的信息。 +然后,我们就可以跨节点地运行我们所需的容器了。在这里,我们还要检查一下是否一切正常。所以,运行**docker info**命令来检查Swarm集群的信息。 # docker info -### Conclusion ### ### 总结 ### -我们可以用Docker Machine轻而易举地创建Swarm集群。这种方法有非常高的效率,因为它极大地减少了系统管理员和用户的时间消耗。在这篇文章中,我们以Digital Ocean作为驱动,通过创建一个主节点和一个从节点成功地部署了集群。其他类似的应用还有VirtualBox,Google Cloud Computing,Amazon Web Service,Microsoft Azure等等。这些连接都是通过TLS进行加密的,具有很高的安全性。如果你有任何的疑问,建议,反馈,欢迎在下面的评论框中注明以便我们可以更好地提高文章的质量! +我们可以用Docker Machine轻而易举地创建Swarm集群。这种方法有非常高的效率,因为它极大地减少了系统管理员和用户的时间消耗。在这篇文章中,我们以Digital Ocean作为驱动,通过创建一个主控节点和一个从节点成功地部署了集群。其他类似的驱动还有VirtualBox,Google Cloud Computing,Amazon Web Service,Microsoft Azure等等。这些连接都是通过TLS进行加密的,具有很高的安全性。如果你有任何的疑问,建议,反馈,欢迎在下面的评论框中注明以便我们可以更好地提高文章的质量! 
-------------------------------------------------------------------------------- @@ -118,7 +119,7 @@ via: http://linoxide.com/linux-how-to/provision-swarm-clusters-using-docker-mach 作者:[Arun Pyasi][a] 译者:[DongShuaike](https://github.com/DongShuaike) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 9b61686c047dd14af43d13b1e593789c388b6aaf Mon Sep 17 00:00:00 2001 From: XLCYun Date: Mon, 3 Aug 2015 08:25:33 +0800 Subject: [PATCH 015/697] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=8E=9F=E6=96=87=20?= =?UTF-8?q?=20XLCYun?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ht & Wrong - Page 2 - The GNOME Desktop.md | 32 ------------------- 1 file changed, 32 deletions(-) delete mode 100644 sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md diff --git a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md b/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md deleted file mode 100644 index 1bf684313b..0000000000 --- a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md +++ /dev/null @@ -1,32 +0,0 @@ -Translating by XLCYun. -A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 2 - The GNOME Desktop -================================================================================ -### The Desktop ### - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920) - -I spent the first five days of my week logging into Gnome manually-- not turning on automatic login. 
On night of the fifth day I got annoyed with having to login by hand and so I went into the User Manager and turned on automatic login. The next time I logged in I got a prompt: "Your keychain was not unlocked. Please enter your password to unlock your keychain." That was when I realized something... Gnome had been automatically unlocking my keychain—my wallet in KDE speak-- every time I logged in via GDM. It was only when I bypassed GDM's login that Gnome had to step in and make me do it manually. - -Now, I am under the personal belief that if you enable automatic login then your key chain should be unlocked automatically as well-- otherwise what's the point? Either way you still have to type in your password and at least if you hit the GDM Login screen you have a chance to change your session if you want to. - -But, regardless of that, it was at that moment that I realized it was such a simple thing that made the desktop feel so much more like it was working WITH ME. When I log into KDE via SDDM? Before the splash screen is even finished loading there is a window popping up over top the splash animation-- thereby disrupting the splash screen-- prompting me to unlock my KDE wallet or GPG keyring. - -If a wallet doesn't exist already you get prompted to create a wallet-- why couldn't one have been created for me at user creation?-- and then get asked to pick between two encryption methods, where one is even implied as insecure (Blowfish), why are you letting me pick something that's insecure for my security? Author's Note: If you install the actual KDE spin and don't just install KDE after-the-fact then a wallet is created for you at user creation. Unfortunately it's not unlocked for you automatically, and it seems to use the older Blowfish method rather than the new, and more secure, GPG method. 
- -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920) - -If you DO pick the secure one (GPG) then it tries to load an Gpg key... which I hope you had one created already because if you don't you get yelled at. How do you create one? Well, it doesn't offer to make one for you... nor It doesn't tell you... and if you do manage TO figure out that you are supposed to use KGpg to create the key then you get taken through several menus and prompts that are nothing but confusing to new users. Why are you asking me where the GPG binary is located? How on earth am I supposed to know? Can't you just use the most recent one if there's more than one? And if there IS only one then, I ask again, why are you prompting me? - -Why are you asking me what key size and encryption algorithm to use? You select 2048 and RSA/RSA by default, so why not just use those? If you want to have those options available then throw them under the "Expert mode" button that is right there. This isn't just about having configuration options available, its about needless things that get thrown in the user's face by default. This is going to be a theme for the rest of the article... KDE needs better sane defaults. Configuration is great, I love the configuration I get with KDE, but it needs to learn when to and when not to prompt. It also needs to learn that "Well its configurable" is no excuse for bad defaults. Defaults are what users see initially, bad defaults will lose users. - -Let's move on from the key chain issue though, because I think I made my point. 
- --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2 - -作者:Eric Griffith -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 03c44a7869162c8f82d29bf73501ef7984f124bd Mon Sep 17 00:00:00 2001 From: XLCYun Date: Mon, 3 Aug 2015 08:27:06 +0800 Subject: [PATCH 016/697] =?UTF-8?q?XLCYun=20=20Gnome=E7=AC=AC=E4=BA=8C?= =?UTF-8?q?=E8=8A=82=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit translated by XLCYun --- ...ht & Wrong - Page 2 - The GNOME Desktop.md | 31 +++++++++++++++++++ 1 file changed, 31 insertions(+) create mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md new file mode 100644 index 0000000000..5ce4dcd8d5 --- /dev/null +++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md @@ -0,0 +1,31 @@ +将GNOME作为我的Linux桌面的一周:他们做对的与做错的 - 第二节 - GNOME桌面 +================================================================================ +### 桌面 ### + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920) + +在我这一周的前五天中,我都是直接手动登录进Gnome的——没有打开自动登录功能。在第五天的晚上,每一次都要手动登录让我觉得很厌烦,所以我就到用户管理器中打开了自动登录功能。下一次我登录的时候收到了一个提示:“你的密钥链(keychain)未解锁,请输入你的密码解锁”。在这时我才意识到了什么……每一次我通过GDM登录时——我的KDB钱包提示——Gnome以前一直都在自动解锁我的密钥链!当我绕开GDM的登录程序时,Gnome才不得不介入让我手动解锁。 + 
+现在,鄙人的陋见是如果你打开了自动登录功能,那么你的密钥链也应当自动解锁——否则,这功能还有何用?无论如何,你还是要输入你的密码,况且在GDM登录界面你还能有机会选择要登录的会话。 + +但是,这点且不提,也就是在那一刻,我意识到要让这桌面感觉是它在**和我**一起工作是多么简单的一件事。当我通过SDDM登录KDE时?甚至连启动界面都还没加载完成,就有一个窗口弹出来遮挡了启动动画——因此启动动画也就被破坏了——提示我解锁我的KDE钱包或GPG钥匙环。 + +如果当前不存在钱包,你就会收到一个创建钱包的提醒——就不能在创建用户的时候同时为我创建一个吗?——接着它又让你在两种加密模式中选择一种,甚至还暗示我们其中一种是不安全的(Blowfish),既然是为了安全,为什么还要我选择一个不安全的东西?作者声明:如果你安装了真正的KDE spin版本而不是仅仅安装了KDE的事后版本,那么在创建用户时,它就会为你创建一个钱包。但很不幸的是,它不会帮你解锁,并且它似乎还使用了更老的Blowfish加密模式,而不是更新而且更安全的GPG模式。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920) + +如果你选择了那个安全的加密模式(GPG),那么它会尝试加载GPG密钥……我希望你已经创建过一个了,因为如果你没有,那么你可又要被批一顿了。怎么样才能创建一个?额……它不帮你创建一个……也不告诉你怎么创建……假如你真的搞明白了你应该使用KGpg来创建一个密钥,接着在你就会遇到一层层的菜单和一个个的提示,而这些菜单和提示只能让新手感到困惑。为什么你要问我GPG的二进制文件在哪?天知道在哪!如果不止一个,你就不能为我选择一个最新的吗?如果只有一个,我再问一次,为什么你还要问我? + +为什么你要问我要使用多大的密钥大小和加密算法?你既然默认选择了2048和RSA/RSA,为什么不直接使用?如果你想让这些选项能够被改变,那就把它们扔在下面的"Expert mode(专家模式)"按钮里去。这不仅仅关于使配置可被用户改变,而是关于默认地把多余的东西扔在了用户面前。这种问题将会成为剩下文章的主题之一……KDE需要更好更理智的默认配置。配置是好的,我很喜欢在使用KDE时的配置,但它还需要知道什么时候应该,什么时候不应该去提示用户。而且它还需要知道“嗯,它是可配置的”不能做为默认配置做得不好的借口。用户最先接触到的就是默认配置,不好的默认配置注定要失去用户。 + +让我们抛开密钥链的问题,因为我想我已经表达出了我的想法。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 1665f293d1a20b351480b49a9a2a9be67ca16af9 Mon Sep 17 00:00:00 2001 From: joeren Date: Mon, 3 Aug 2015 09:58:33 +0800 Subject: [PATCH 017/697] =?UTF-8?q?github=202.0=E6=B5=8B=E8=AF=95=E6=8E=A8?= =?UTF-8?q?=E9=80=81=E6=B5=8B=E8=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- github 2.0测试.txt | 0 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 github 2.0测试.txt diff --git a/github 2.0测试.txt b/github 2.0测试.txt new file mode 
100644 index 0000000000..e69de29bb2 From d90f37fdb59f8130a83c3894ef2c247870979aef Mon Sep 17 00:00:00 2001 From: joeren Date: Mon, 3 Aug 2015 10:11:49 +0800 Subject: [PATCH 018/697] =?UTF-8?q?=E6=B5=8B=E8=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- github 2.0测试.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/github 2.0测试.txt b/github 2.0测试.txt index e69de29bb2..9d07aa0df5 100644 --- a/github 2.0测试.txt +++ b/github 2.0测试.txt @@ -0,0 +1 @@ +111 \ No newline at end of file From f7b881a22862a051f1f2529900989bce239dcf51 Mon Sep 17 00:00:00 2001 From: joeren Date: Mon, 3 Aug 2015 10:15:21 +0800 Subject: [PATCH 019/697] Test --- github 2.0测试.txt | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/github 2.0测试.txt b/github 2.0测试.txt index 9d07aa0df5..7787faa3c1 100644 --- a/github 2.0测试.txt +++ b/github 2.0测试.txt @@ -1 +1,2 @@ -111 \ No newline at end of file +111 +222 \ No newline at end of file From 60689c5fc61a277731adf66298f33409f95dcde2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 3 Aug 2015 10:57:25 +0800 Subject: [PATCH 020/697] Delete 20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 删除原文 oska874 --- ...un Ubuntu Snappy Core on Raspberry Pi 2.md | 91 ------------------- 1 file changed, 91 deletions(-) delete mode 100644 sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md diff --git a/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md deleted file mode 100644 index 99b2b3acc1..0000000000 --- a/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md +++ /dev/null @@ -1,91 +0,0 @@ -+Translating by Ezio - -How to run Ubuntu Snappy Core on Raspberry Pi 2 -================================================================================ -The Internet of Things 
(IoT) is upon us. In a couple of years some of us might ask ourselves how we ever survived without it, just like we question our past without cellphones today. Canonical is a contender in this fast-growing but still wide-open market. The company wants to claim its stake in IoT just as it already did for the cloud. At the end of January, the company launched a small operating system that goes by the name of [Ubuntu Snappy Core][1] which is based on Ubuntu Core. - -Snappy, the new component in the mix, is a package format derived from DEB and a frontend for updating the system that borrows the idea of atomic upgrades used in CoreOS, Red Hat's Atomic, and elsewhere. As soon as the Raspberry Pi 2 hit the market, Canonical released Snappy Core for that platform. The first edition of the Raspberry Pi was not able to run Ubuntu because Ubuntu's ARM images use the ARMv7 architecture, while the first Raspberry Pis were based on ARMv6. That has changed now, and Canonical, by releasing an RPi 2 image of Snappy Core, took the opportunity to make clear that Snappy was meant for the cloud and especially for IoT. - -Snappy also runs on other platforms like Amazon EC2, Microsoft's Azure, and Google's Compute Engine, and can also be virtualized with KVM, VirtualBox, or Vagrant. Canonical has embraced big players like Microsoft, Google, Docker, and OpenStack and, at the same time, also included small projects from the maker scene as partners. Besides startups like Ninja Sphere and Erle Robotics, there are board manufacturers like Odroid, Banana Pro, Udoo, PCDuino, and Parallella, as well as Allwinner. Snappy Core will also run in routers soon to help improve the poor upgrade policies that vendors maintain. - -In this post, let's see how we can test Ubuntu Snappy Core on Raspberry Pi 2. - -The image for Snappy Core for the RPi 2 can be downloaded from the [Raspberry Pi website][2]. 
Unpacked from the archive, the resulting image should be [written to an SD card][3] of at least 8 GB. Even though the OS is small, atomic upgrades and the rollback function eat up quite a bit of space. After booting up your Raspberry Pi 2 with Snappy Core, you can log into the system with the default username and password being 'ubuntu'. - -![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg) - -sudo is already configured and ready for use. For security reasons you should change the username with: - - $ sudo usermod -l <new-username> <old-username> - -Alternatively, you can add a new user with the command `adduser`. - -Due to the lack of a hardware clock on the RPi, which the Snappy Core image does not take into account, the image has a small bug that will throw a lot of errors when processing commands. It is easy to fix. - -To find out if the bug affects you, use the command: - - $ date - -If the output is "Thu Jan 1 01:56:44 UTC 1970", you can fix it with: - - $ sudo date --set="Sun Apr 04 17:43:26 UTC 2015" - -adapted to your actual time. - -![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg) - -Now you might want to check if there are any updates available. Note that the usual commands: - - $ sudo apt-get update && sudo apt-get dist-upgrade - -will not get you very far though, as Snappy uses its own simplified package management system which is based on dpkg. This makes sense, as Snappy will run on a lot of embedded appliances, and you want things to be as simple as possible. - -Let's dive into the engine room for a minute to understand how things work with Snappy. The SD card you run Snappy on has three partitions besides the boot partition. Two of those house a duplicated file system. Both of those parallel file systems are permanently mounted as "read only", and only one is active at any given time. The third partition holds a partially writable file system and the user's persistent data. 
With a fresh system, the partition labeled 'system-a' holds one complete file system, called a core, leaving the parallel partition still empty. - -![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg) - -If we run the following command now: - - $ sudo snappy update - -the system will install the update as a complete core, similar to an image, on 'system-b'. You will be asked to reboot your device afterwards to activate the new core. - -After the reboot, run the following command to check if your system is up to date and which core is active. - - $ sudo snappy versions -a - -After rolling out the update and rebooting, you should see that the core that is now active has changed. - -As we have not installed any apps yet, the following command: - - $ sudo snappy update ubuntu-core - -would have been sufficient, and is the way if you want to upgrade just the underlying OS. Should something go wrong, you can rollback by: - - $ sudo snappy rollback ubuntu-core - -which will take you back to the system's state before the update. - -![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg) - -Speaking of apps, they are what makes Snappy useful. There are not that many at this point, but the IRC channel #snappy on Freenode is humming along nicely and with a lot of people involved, the Snappy App Store gets new apps added on a regular basis. You can visit the shop by pointing your browser to http://:4200, and you can install apps right from the shop and then launch them with http://webdm.local in your browser. Building apps yourself for Snappy is not all that hard, and [well documented][4]. You can also port DEB packages into the snappy format quite easily. 
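Coming back to the update workflow for a moment: the output of `snappy versions -a` can also be filtered in a script to find the active core. Here is a minimal sketch working from a captured sample; the column layout shown is an assumption for illustration and may differ between Snappy releases, so adjust the awk field numbers to what your system actually prints:

```shell
# Sample output shaped like `sudo snappy versions -a`; the exact column
# layout is an assumption here, not guaranteed by the tool.
versions='part         version     installed  active
ubuntu-core  2015-04-23  system-a   *
ubuntu-core  2015-06-10  system-b'
# Print the partition column of the row marked active ("*")
printf '%s\n' "$versions" | awk 'NR > 1 && $4 == "*" {print $3}'
# -> system-a
```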
- -![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg) - -Ubuntu Snappy Core, due to the limited number of available apps, is not overly useful in a productive way at this point in time, although it invites us to dive into the new Snappy package format and play with atomic upgrades the Canonical way. Since it is easy to set up, this seems like a good opportunity to learn something new. - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html - -作者:[Ferdinand Thommes][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/ferdinand -[1]:http://www.ubuntu.com/things -[2]:http://www.raspberrypi.org/downloads/ -[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html -[4]:https://developer.ubuntu.com/en/snappy/ From 0b73d40c0e585226303bc4dcff5ed5a172c87ee8 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 3 Aug 2015 11:01:10 +0800 Subject: [PATCH 021/697] Create 20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 添加翻译过的文章 oska874 --- ...un Ubuntu Snappy Core on Raspberry Pi 2.md | 89 +++++++++++++++++++ 1 file changed, 89 insertions(+) create mode 100644 translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md diff --git a/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md new file mode 100644 index 0000000000..c4475f39a2 --- /dev/null +++ b/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md @@ -0,0 +1,89 @@ +如何在树莓派2 代运行ubuntu Snappy Core +================================================================================ +物联网(Internet of Things, IoT) 
时代即将来临。很快,过不了几年,我们就会问自己当初是怎么在没有物联网的情况下生存的,就像我们现在怀疑过去没有手机的年代。在物联网这个快速发展但仍然开放的市场中,Canonical 是一个有力的竞争者。这家公司宣称自己把赌注压到了IoT 上,就像他们已经在“云”上做过的一样。在今年一月底,Canonical 发布了一个基于Ubuntu Core 的小型操作系统,名字叫做 [Ubuntu Snappy Core][1] 。 + +Snappy 是一种用来替代deb 的新的打包格式,是一个用来更新系统的前端,从CoreOS、红帽和其他地方借鉴了原子更新这个想法。树莓派2 代刚一上市,Canonical 就发布了用于树莓派的Snappy Core 版本。第一代树莓派因为是基于ARMv6 ,而Ubuntu 的ARM 镜像是基于ARMv7 ,所以不能运行 Ubuntu 。不过这种状况现在改变了,Canonical 通过发布用于RPI2 的镜像,抓住机会表明了Snappy 就是一个用于云计算,特别是IoT 的系统。 + +Snappy 同样可以运行在其它像Amazon EC2, Microsoft Azure, Google's Compute Engine 这样的云端上,也可以虚拟化在KVM、VirtualBox 和Vagrant 上。Canonical 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,像Ninja Sphere、Erle Robotics,还有一些开发板生产商比如Odroid、Banana Pro, Udoo, PCDuino 和Parallella 、全志。Snappy Core 也希望很快能运行在路由器上,来帮助改进路由器生产商目前很少更新固件的策略。 + +接下来,让我们看看怎么样在树莓派2 上运行Snappy。 + +用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是原子升级和回滚功能会蚕食不小的空间。使用Snappy 启动树莓派2 后你就可以使用默认用户名和密码(都是ubuntu)登录系统。 + +![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg) + +sudo 已经配置好了可以直接用,安全起见,你应该使用以下命令来修改你的用户名: + + $ sudo usermod -l <新用户名> <旧用户名> + +或者也可以使用`adduser` 为你添加一个新用户。 + +因为RPI缺少硬件时钟,而Snappy 不知道这一点,所以系统会有一个小bug:处理命令时会报很多错。不过这个很容易解决: + +使用这个命令来确认这个bug 是否影响到你: + + $ date + +如果输出是 "Thu Jan 1 01:56:44 UTC 1970", 你可以这样做来改正: + + $ sudo date --set="Sun Apr 04 17:43:26 UTC 2015" + +改成你的实际时间。 + +![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg) + +现在你可能打算检查一下,看看有没有可用的更新。注意通常使用的命令: + + $ sudo apt-get update && sudo apt-get dist-upgrade + +现在将不会让你通过,因为Snappy 会使用它自己精简过的、基于dpkg 的包管理系统。之所以这样做,是因为Snappy 会运行在很多嵌入式设备上,而你也希望一切都尽可能简单。 + +让我们来看看最关键的部分,理解一下Snappy 是如何工作的。运行Snappy 的SD 卡上除了boot 分区外还有3个分区。其中的两个构成了一个重复的文件系统。这两个平行文件系统被固定挂载为只读模式,并且任何时刻只有一个是激活的。第三个分区是一个部分可写的文件系统,用来让用户存储数据。在全新安装的系统中,标记为'system-a' 的分区保存着一个完整的文件系统(被称作核心),而另一个平行分区则是空的。 + +![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg) + +如果我们运行以下命令: + + $ sudo snappy update + +系统将会在'system-b' 上作为一个整体进行更新,这有点像是更新一个镜像文件。接下来你将会被告知要重启系统来激活新核心。 + 
+重启之后,运行下面的命令可以检查你的系统是否已经更新到最新版本,以及当前被激活的是哪个核心 + + $ sudo snappy versions -a + +经过更新-重启的操作,你应该可以看到被激活的核心已经被改变了。 + +因为到目前为止我们还没有安装任何软件,下面的命令: + + $ sudo snappy update ubuntu-core + +将会生效,而且如果你打算仅仅更新底层的OS,这也是一个办法。如果出了问题,你可以使用下面的命令回滚: + + $ sudo snappy rollback ubuntu-core + +这将会把系统状态回滚到更新之前。 + +![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg) + +再来说说那些让Snappy 有用的软件。这里不会过多讲述如何构建软件、向Snappy 应用商店添加软件的基础知识,但是你可以通过Freenode 上的IRC 频道#snappy 了解更多信息,那个上面有很多人参与。你可以通过浏览器访问http://:4200 来浏览应用商店,然后从商店安装软件,再在浏览器里访问http://webdm.local 来启动程序。如何构建用于Snappy 的软件并不难,而且也有了现成的[参考文档][4] 。你也可以很容易地把DEB 安装包移植成Snappy 格式。 + +![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg) + +尽管Ubuntu Snappy Core 吸引我们去研究新型的Snappy 安装包格式和Canonical 式的原子更新操作,但是因为有限的可用应用,它现在在生产环境里还不是很有用。但是既然搭建一个Snappy 环境如此简单,这看起来是一个学点新东西的好机会。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html + +作者:[Ferdinand Thommes][a] -译者:[译者ID](https://github.com/oska874) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/ferdinand +[1]:http://www.ubuntu.com/things +[2]:http://www.raspberrypi.org/downloads/ +[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html +[4]:https://developer.ubuntu.com/en/snappy/ From 9e78da3fe288fdc6346cc4cefff359b1f56d8915 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 3 Aug 2015 16:00:21 +0800 Subject: [PATCH 022/697] =?UTF-8?q?20150803-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ds for profiling your Unix file systems.md | 64 +++ sources/tech/20150803 Linux Logging Basics.md | 90 ++++ sources/tech/20150803 Managing Linux Logs.md | 418 ++++++++++++++++++ ...0150803 Troubleshooting with Linux Logs.md | 116 +++++ 4 files changed, 688 insertions(+) create mode 100644 
sources/tech/20150803 Handy commands for profiling your Unix file systems.md create mode 100644 sources/tech/20150803 Linux Logging Basics.md create mode 100644 sources/tech/20150803 Managing Linux Logs.md create mode 100644 sources/tech/20150803 Troubleshooting with Linux Logs.md diff --git a/sources/tech/20150803 Handy commands for profiling your Unix file systems.md b/sources/tech/20150803 Handy commands for profiling your Unix file systems.md new file mode 100644 index 0000000000..ae5951b0d7 --- /dev/null +++ b/sources/tech/20150803 Handy commands for profiling your Unix file systems.md @@ -0,0 +1,64 @@ +Handy commands for profiling your Unix file systems +================================================================================ +![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) +Credit: Sandra H-S + +One of the problems that seems to plague nearly all file systems -- Unix and others -- is the continuous buildup of files. Almost no one takes the time to clean out files that they no longer use, and file systems, as a result, become so cluttered with material of little or questionable value that keeping them running well, adequately backed up, and easy to manage is a constant challenge. + +One way that I have seen to help encourage the owners of all that data detritus to address the problem is to create a summary report or "profile" of a file collection that reports on such things as the number of files; the oldest, newest, and largest of those files; and a count of who owns those files. If someone realizes that a collection of half a million files contains no files newer than five years old, they might go ahead and remove them -- or, at least, archive and compress them. The basic problem is that huge collections of files are overwhelming and most people are afraid that they might accidentally delete something important. 
Having a way to characterize a file collection can help demonstrate the nature of the content and encourage those digital packrats to clean out their nests. + +When I prepare a file system summary report on Unix, a handful of Unix commands easily provide some very useful statistics. To count the files in a directory, you can use a find command like this. + + $ find . -type f | wc -l + 187534 + +Finding the oldest and newest files is a bit more complicated, but still quite easy. In the commands below, we're using the find command again to find files, displaying the data with a year-month-day format that makes it possible to sort by file age, and then displaying the top -- thus the oldest -- file in that list. + +In the second command, we do the same, but print the last line -- thus the newest -- file. + + $ find -type f -printf '%T+ %p\n' | sort | head -n 1 + 2006-02-03+02:40:33 ./skel/.xemacs/init.el + $ find -type f -printf '%T+ %p\n' | sort | tail -n 1 + 2015-07-19+14:20:16 ./.bash_history + +The %T (file date and time) and %p (file name with path) parameters with the printf command allow this to work. + +If we're looking at home directories, we're undoubtedly going to find that history files are the newest files and that isn't likely to be a very interesting bit of information. You can omit those files by "un-grepping" them or you can omit all files that start with dots as shown below. + + $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1 + 2015-07-19+13:02:12 ./isPrime + +Finding the largest file involves using the %s (size) parameter and we include the file name (%f) since that's what we want the report to show. + + $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1 + 20183040 project.org.tar + +To summarize file ownership, use the %u (owner) + + $ find -type f -printf '%u \n' | grep -v "\./\." 
| sort | uniq -c + 180034 shs + 7500 jdoe + +If your file system also records the last access date, it can be very useful to show that files haven't been accessed in, say, more than two years. This would give your reviewers an important insight into the value of those files. The last access parameter (%a) could be used like this: + + $ find -type f -printf '%a+ %p\n' | sort | head -n 1 + Fri Dec 15 03:00:30 2006+ ./statreport + +Of course, if the most recently accessed file is also in the deep dark past, that's likely to get even more of a reaction. + + $ find -type f -printf '%a+ %p\n' | sort | tail -n 1 + Wed Nov 26 03:00:27 2007+ ./my-notes + +Getting a sense of what's in a file system or large directory by creating a summary report showing the file date ranges, the largest files, the file owners, and the oldest and new access times can help to demonstrate how current and how important a file collection is and help its owners decide if it's time to clean up. + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html + +作者:[Sandra Henry-Stocker][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ \ No newline at end of file diff --git a/sources/tech/20150803 Linux Logging Basics.md b/sources/tech/20150803 Linux Logging Basics.md new file mode 100644 index 0000000000..d20f68f140 --- /dev/null +++ b/sources/tech/20150803 Linux Logging Basics.md @@ -0,0 +1,90 @@ +Linux Logging Basics +================================================================================ +First we’ll describe the basics of what Linux logs are, where to find them, and how they get created. If you already know this stuff, feel free to skip to the next section. 
+ +### Linux System Logs ### + +Many valuable log files are automatically created for you by Linux. You can find them in your /var/log directory. Here is what this directory looks like on a typical Ubuntu system: + +![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-system-log-terminal.png) + +Some of the most important Linux system logs include: + +- /var/log/syslog or /var/log/messages stores all global system activity data, including startup messages. Debian-based systems like Ubuntu store this in /var/log/syslog. RedHat-based systems like RHEL or CentOS store this in /var/log/messages. +- /var/log/auth.log or /var/log/secure stores logs from the Pluggable Authentication Module (pam) including successful logins, failed login attempts, and authentication methods. Ubuntu and Debian store authentication messages in /var/log/auth.log. RedHat and CentOS store this data in /var/log/secure. +- /var/log/kern stores kernel error and warning data, which is particularly helpful for troubleshooting custom kernels. +- /var/log/cron stores information about cron jobs. Use this data to verify that your cron jobs are running successfully. + +Digital Ocean has a thorough [tutorial][1] on these files and how rsyslog creates them on common distributions like RedHat and CentOS. + +Applications also write log files in this directory. For example, popular servers like Apache, Nginx, MySQL, and more can write log files here. Some of these log files are written by the application itself. Others are created through syslog (see below). + +### What’s Syslog? ### + +How do Linux system log files get created? The answer is through the syslog daemon, which listens for log messages on the syslog socket /dev/log and then writes them to the appropriate log file. + +The word “syslog” is an overloaded term and is often used in short to refer to one of these: + +1. **Syslog daemon** — a program to receive, process, and send syslog messages. 
It can [send syslog remotely][2] to a centralized server or write it to a local file. Common examples include rsyslogd and syslog-ng. In this usage, people will often say “sending to syslog.” +1. **Syslog protocol** — a transport protocol specifying how logs can be sent over a network and a data format definition for syslog messages (below). It’s officially defined in [RFC-5424][3]. The standard ports are 514 for plaintext logs and 6514 for encrypted logs. In this usage, people will often say “sending over syslog.” +1. **Syslog messages** — log messages or events in the syslog format, which includes a header with several standard fields. In this usage, people will often say “sending syslog.” + +Syslog messages or events include a header with several standard fields, making analysis and routing easier. They include the timestamp, the name of the application, the classification or location in the system where the message originates, and the priority of the issue. + +Here is an example log message with the syslog header included. It’s from the sshd daemon, which controls remote logins to the system. This message describes a failed login attempt: + + <34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2 + +### Syslog Format and Fields ### + +Each syslog message includes a header with fields. Fields are structured data that makes it easier to analyze and route the events. Here is the format we used to generate the above syslog example. You can match each value to a specific field name. + + <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%n + +Below, you’ll find descriptions of some of the most commonly used syslog fields when searching or troubleshooting issues. 
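To make the mapping concrete, the example message above can be pulled apart with standard shell tools. This is only a sketch that works for a well-formed, space-separated header like the sample, not a general RFC 5424 parser:

```shell
msg='<34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2'
# The priority is the number inside the angle brackets; after the
# protocol version ("1"), the next whitespace-separated fields are the
# timestamp, hostname, and app-name, matching the format string above.
pri=$(printf '%s' "$msg" | sed 's/^<\([0-9]*\)>.*/\1/')
timestamp=$(printf '%s' "$msg" | awk '{print $2}')
host=$(printf '%s' "$msg" | awk '{print $3}')
app=$(printf '%s' "$msg" | awk '{print $4}')
echo "pri=$pri time=$timestamp host=$host app=$app"
# -> pri=34 time=2003-10-11T22:14:15.003Z host=server1.com app=sshd
```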
+ +#### Timestamp #### + +The [timestamp][4] field (2003-10-11T22:14:15.003Z in the example) indicates the time and date that the message was generated on the system sending the message. That time can be different from when another system receives the message. The example timestamp breaks down like this: + +- **2003-10-11** is the year, month, and day. +- **T** is a required element of the TIMESTAMP field, separating the date and the time. +- **22:14:15.003** is the 24-hour format of the time, including the number of milliseconds (**003**) into the next second. +- **Z** is an optional element, indicating UTC time. Instead of Z, the example could have included an offset, such as -08:00, which indicates that the time is offset from UTC by 8 hours, PST. + +#### Hostname #### + +The [hostname][5] field (server1.com in the example above) indicates the name of the host or system that sent the message. + +#### App-Name #### + +The [app-name][6] field (sshd:auth in the example) indicates the name of the application that sent the message. + +#### Priority #### + +The priority field or [pri][7] for short (<34> in the example above) tells you how urgent or severe the event is. It’s a combination of two numerical fields: the facility and the severity. The severity ranges from the number 7 for debug events all the way to 0 which is an emergency. The facility describes which process created the event. It ranges from 0 for kernel messages to 23 for local application use. + +Pri can be output in two ways. The first is as a single number prival which is calculated as the facility field value multiplied by 8, then the result is added to the severity field value: (facility)(8) + (severity). The second is pri-text which will output in the string format “facility.severity.” The latter format can often be easier to read and search but takes up more storage space. 
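As a worked example, the <34> from the sample message decodes by running that calculation in reverse: dividing the prival by 8 recovers the facility, and the remainder is the severity (in RFC 5424's tables, facility 4 is security/authorization and severity 2 is critical):

```shell
prival=34
facility=$((prival / 8))   # 34 / 8 = 4  (security/authorization messages)
severity=$((prival % 8))   # 34 % 8 = 2  (critical)
echo "facility.severity = $facility.$severity"
# -> facility.severity = 4.2
```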
+ +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:https://www.digitalocean.com/community/tutorials/how-to-view-and-configure-linux-logs-on-ubuntu-and-centos +[2]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.y2e9tdfk1cdb +[3]:https://tools.ietf.org/html/rfc5424 +[4]:https://tools.ietf.org/html/rfc5424#section-6.2.3 +[5]:https://tools.ietf.org/html/rfc5424#section-6.2.4 +[6]:https://tools.ietf.org/html/rfc5424#section-6.2.5 +[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1 \ No newline at end of file diff --git a/sources/tech/20150803 Managing Linux Logs.md b/sources/tech/20150803 Managing Linux Logs.md new file mode 100644 index 0000000000..d68adddf52 --- /dev/null +++ b/sources/tech/20150803 Managing Linux Logs.md @@ -0,0 +1,418 @@ +Managing Linux Logs +================================================================================ +A key best practice for logging is to centralize or aggregate your logs in one place, especially if you have multiple servers or tiers in your architecture. We’ll tell you why this is a good idea and give tips on how to do it easily. + +### Benefits of Centralizing Logs ### + +It can be cumbersome to look at individual log files if you have many servers. Modern web sites and services often include multiple tiers of servers, distributed with load balancers, and more. It’d take a long time to hunt down the right file, and even longer to correlate problems across servers. 
There’s nothing more frustrating than finding the information you are looking for hasn’t been captured, or the log file that could have held the answer has just been lost after a restart. + +Centralizing your logs makes them faster to search, which can help you solve production issues faster. You don’t have to guess which server had the issue because all the logs are in one place. Additionally, you can use more powerful tools to analyze them, including log management solutions. These solutions can [transform plain text logs][1] into fields that can be easily searched and analyzed. + +Centralizing your logs also makes them easier to manage: + +- They are safer from accidental or intentional loss when they are backed up and archived in a separate location. If your servers go down or are unresponsive, you can use the centralized logs to debug the problem. +- You don’t have to worry about ssh or inefficient grep commands requiring more resources on troubled systems. +- You don’t have to worry about full disks, which can crash your servers. +- You can keep your production servers secure without giving your entire team access just to look at logs. It’s much safer to give your team access to logs from the central location. + +With centralized log management, you still must deal with the risk of being unable to transfer logs to the centralized location due to poor network connectivity or of using up a lot of network bandwidth. We’ll discuss how to intelligently address these issues in the sections below. + +### Popular Tools for Centralizing Logs ### + +The most common way of centralizing logs on Linux is by using syslog daemons or agents. The syslog daemon supports local log collection, then transports logs through the syslog protocol to a central server. There are several popular daemons that you can use to centralize your log files: + +- [rsyslog][2] is a light-weight daemon installed on most common Linux distributions. 
+- [syslog-ng][3] is the second most popular syslog daemon for Linux. +- [logstash][4] is a heavier-weight agent that can do more advanced processing and parsing. +- [fluentd][5] is another agent with advanced processing capabilities. + +Rsyslog is the most popular daemon for centralizing your log data because it’s installed by default in most common distributions of Linux. You don’t need to download it or install it, and it’s lightweight so it won’t take up much of your system resources. + +If you need more advanced filtering or custom parsing capabilities, Logstash is the next most popular choice if you don’t mind the extra system footprint. + +### Configure Rsyslog.conf ### + +Since rsyslog is the most widely used syslog daemon, we’ll show how to configure it to centralize logs. The global configuration file is located at /etc/rsyslog.conf. It loads modules, sets global directives, and has an include for application-specific files located in the directory /etc/rsyslog.d/. This directory contains /etc/rsyslog.d/50-default.conf which instructs rsyslog to write the system logs to file. You can read more about the configuration files in the [rsyslog documentation][6]. + +The configuration language for rsyslog is [RainerScript][7]. You set up specific inputs for logs as well as actions to output them to another destination. Rsyslog already configures standard defaults for syslog input, so you usually just need to add an output to your log server. Here is an example configuration for rsyslog to output logs to an external server. In this example, **BEBOP** is the hostname of the server, so you should replace it with your own server name. + + action(type="omfwd" protocol="tcp" target="BEBOP" port="514") + +You could send your logs to a log server with ample storage to keep a copy for search, backup, and analysis. If you’re storing logs in the file system, then you should set up [log rotation][8] to keep your disk from getting full. 
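If you do keep that copy on the log server's file system, a logrotate policy keeps its growth bounded. Here is a minimal sketch of an /etc/logrotate.d entry; the /var/log/remote path is an assumption about where your rsyslog rules write the forwarded logs, so match it to your own output configuration:

```
/var/log/remote/*.log {
    weekly
    rotate 8           # keep eight weeks of archives
    compress
    delaycompress      # leave the newest archive uncompressed for easy tailing
    missingok
    notifempty
}
```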
+ +Alternatively, you can send these logs to a log management solution. If your solution is installed locally you can send it to your local host and port as specified in your system documentation. If you use a cloud-based provider, you will send them to the hostname and port specified by your provider. + +### Log Directories ### + +You can centralize all the files in a directory or matching a wildcard pattern. The nxlog and syslog-ng daemons support both directories and wildcards (*). + +Common versions of rsyslog can’t monitor directories directly. As a workaround, you can setup a cron job to monitor the directory for new files, then configure rsyslog to send those files to a destination, such as your log management system. As an example, the log management vendor Loggly has an open source version of a [script to monitor directories][9]. + +### Which Protocol: UDP, TCP, or RELP? ### + +There are three main protocols that you can choose from when you transmit data over the Internet. The most common is UDP for your local network and TCP for the Internet. If you cannot lose logs, then use the more advanced RELP protocol. + +[UDP][10] sends a datagram packet, which is a single packet of information. It’s an outbound-only protocol, so it doesn’t send you an acknowledgement of receipt (ACK). It makes only one attempt to send the packet. UDP can be used to smartly degrade or drop logs when the network gets congested. It’s most commonly used on reliable networks like localhost. + +[TCP][11] sends streaming information in multiple packets and returns an ACK. TCP makes multiple attempts to send the packet, but is limited by the size of the [TCP buffer][12]. This is the most common protocol for sending logs over the Internet. + +[RELP][13] is the most reliable of these three protocols but was created for rsyslog and has less industry adoption. It acknowledges receipt of data in the application layer and will resend if there is an error. 
Make sure your destination also supports this protocol. + +### Reliably Send with Disk Assisted Queues ### + +If rsyslog encounters a problem when storing logs, such as an unavailable network connection, it can queue the logs until the connection is restored. The queued logs are stored in memory by default. However, memory is limited and if the problem persists, the logs can exceed memory capacity. + +**Warning: You can lose data if you store logs only in memory.** + +Rsyslog can queue your logs to disk when memory is full. [Disk-assisted queues][14] make transport of logs more reliable. Here is an example of how to configure rsyslog with a disk-assisted queue: + + $WorkDirectory /var/spool/rsyslog # where to place spool files + $ActionQueueFileName fwdRule1 # unique name prefix for spool files + $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible) + $ActionQueueSaveOnShutdown on # save messages to disk on shutdown + $ActionQueueType LinkedList # run asynchronously + $ActionResumeRetryCount -1 # infinite retries if host is down + +### Encrypt Logs Using TLS ### + +When the security and privacy of your data is a concern, you should consider encrypting your logs. Sniffers and middlemen could read your log data if you transmit it over the Internet in clear text. You should encrypt your logs if they contain private information, sensitive identification data, or government-regulated data. The rsyslog daemon can encrypt your logs using the TLS protocol and keep your data safer. + +To set up TLS encryption, you need to do the following tasks: + +1. Generate a [certificate authority][15] (CA). There are sample certificates in /contrib/gnutls, which are good only for testing, but you need to create your own for production. If you’re using a log management service, it will have one ready for you. +1. Generate a [digital certificate][16] for your server to enable SSL operation, or use one from your log management service provider. +1. 
Configure your rsyslog daemon to send TLS-encrypted data to your log management system. + +Here’s an example rsyslog configuration with TLS encryption. Replace CERT and DOMAIN_NAME with your own server settings. + + $DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt + $ActionSendStreamDriver gtls + $ActionSendStreamDriverMode 1 + $ActionSendStreamDriverAuthMode x509/name + $ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com + +### Best Practices for Application Logging ### + +In addition to the logs that Linux creates by default, it’s also a good idea to centralize logs from important applications. Almost all Linux-based server-class applications write their status information in separate, dedicated log files. This includes database products like PostgreSQL or MySQL, web servers like Nginx or Apache, firewalls, print and file sharing services, directory and DNS servers, and so on. + +The first thing administrators do after installing an application is to configure it. Linux applications typically have a .conf file somewhere within the /etc directory. It can be somewhere else too, but that’s the first place where people look for configuration files. + +Depending on how complex or large the application is, the number of settable parameters can range from a few to hundreds. As mentioned before, most applications write their status to some sort of log file, and the configuration file is where the log settings are defined, among other things. + +If you’re not sure where it is, you can use the locate command to find it: + + [root@localhost ~]# locate postgresql.conf + /usr/pgsql-9.4/share/postgresql.conf.sample + /var/lib/pgsql/9.4/data/postgresql.conf + +#### Set a Standard Location for Log Files #### + +Linux systems typically save their log files under the /var/log directory. This works fine, but check whether the application saves its logs under a specific subdirectory of /var/log. If it does, great; if not, you may want to create a dedicated directory for the app under /var/log. Why? 
That’s because other applications save their log files under /var/log too, and if your app saves more than one log file – perhaps once every day or after each service restart – it may be a bit difficult to trawl through a large directory to find the file you want. + +If you have more than one instance of the application running in your network, this approach also comes in handy. Think about a situation where you may have a dozen web servers running in your network. When troubleshooting any one of the boxes, you would know exactly where to go. + +#### Use A Standard Filename #### + +Use a standard filename for the latest logs from your application. This makes it easy because you can monitor and tail a single file. Many applications add some sort of date and time stamp to the file name. This makes it much more difficult to find the latest file and to set up file monitoring by rsyslog. A better approach is to add timestamps to older log files using logrotate. This makes them easier to archive and search historically. + +#### Append the Log File #### + +Is the log file going to be overwritten after each application restart? If so, we recommend turning that off. After each restart the app should append to the log file. That way, you can always go back to the last log line before the restart. + +#### Appending vs. Rotation of Log File #### + +Even if the application writes a new log file after each restart, how is it writing to the current log? Is it appending to one single, massive file? Linux systems are not known for frequent reboots or crashes: applications can run for very long periods without even blinking, but that can also make the log file very large. If you are trying to analyze the root cause of a connection failure that happened last week, you could easily be searching through tens of thousands of lines. + +We recommend you configure the application to rotate its log file once every day, say at midnight. + +Why? For a start, each file stays a manageable size. 
It’s much easier to find a file name with a specific date and time pattern than to search through one file for that date’s entries. Files are also much smaller: you won’t think vi has frozen when you open a log file. Secondly, if you are sending the log file over the wire to a different location – perhaps a nightly backup job copying to a centralized log server – it doesn’t chew up your network’s bandwidth. Third and finally, it helps with your log retention. If you want to cull old log entries, it’s easier to delete files older than a particular date than to have an application parsing one single large file. + +#### Retention of Log File #### + +How long do you keep a log file? That definitely comes down to business requirements. You could be asked to keep one week’s worth of logging information, or it may be a regulatory requirement to keep ten years’ worth of data. Whatever it is, logs need to leave the server at one time or another. + +In our opinion, unless otherwise required, keep at least a month’s worth of log files online, plus copy them to a secondary location like a logging server. Anything older than that can be offloaded to separate media. For example, if you are on AWS, your older logs can be copied to Glacier. + +#### Separate Disk Location for Log Files #### + +Linux best practice usually suggests mounting the /var directory on a separate file system. This is because of the high number of I/Os associated with this directory. We would recommend mounting the /var/log directory on a separate disk system. This can avoid I/O contention with the main application’s data. Also, if the number of log files becomes too large or a single log file becomes too big, it doesn’t fill up the entire disk. + +#### Log Entries #### + +What information should be captured in each log entry? + +That depends on what you want to use the log for. Do you want to use it only for troubleshooting purposes, or do you want to capture everything that’s happening? 
Is it a legal requirement to capture what each user is running or viewing? + +If you are using logs for troubleshooting purposes, save only errors, warnings, or fatal messages. There’s no reason to capture debug messages, for example. The app may log debug messages by default, or another administrator might have turned this on for another troubleshooting exercise, but you need to turn this off because it can definitely fill up the space quickly. At a minimum, capture the date, time, client application name, source IP or client host name, action performed, and the message itself. + +#### A Practical Example for PostgreSQL #### + +As an example, let’s look at the main configuration file for a vanilla PostgreSQL 9.4 installation. It’s called postgresql.conf and, unlike other config files on Linux systems, it’s not saved under the /etc directory. In the code snippet below, we can see it’s in the /var/lib/pgsql directory of our CentOS 7 server: + + [root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf + ... + #------------------------------------------------------------------------------ + # ERROR REPORTING AND LOGGING + #------------------------------------------------------------------------------ + # - Where to Log - + log_destination = 'stderr' + # Valid values are combinations of + # stderr, csvlog, syslog, and eventlog, + # depending on platform. csvlog + # requires logging_collector to be on. + # This is used when logging to stderr: + logging_collector = on + # Enable capturing of stderr and csvlog + # into log files. Required to be on for + # csvlogs. + # (change requires restart) + # These are only used if logging_collector is on: + log_directory = 'pg_log' + # directory where log files are written, + # can be absolute or relative to PGDATA + log_filename = 'postgresql-%a.log' # log file name pattern, + # can include strftime() escapes + # log_file_mode = 0600 
+ # creation mode for log files, + # begin with 0 to use octal notation + log_truncate_on_rotation = on # If on, an existing log file with the + # same name as the new log file will be + # truncated rather than appended to. + # But such truncation only occurs on + # time-driven rotation, not on restarts + # or size-driven rotation. Default is + # off, meaning append to existing files + # in all cases. + log_rotation_age = 1d + # Automatic rotation of logfiles will happen after that time. 0 disables. + log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables. + # These are relevant when logging to syslog: + #syslog_facility = 'LOCAL0' + #syslog_ident = 'postgres' + # This is only relevant when logging to eventlog (win32): + #event_source = 'PostgreSQL' + # - When to Log - + #client_min_messages = notice # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # log + # notice + # warning + # error + #log_min_messages = warning # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic + #log_min_error_statement = error # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic (effectively off) + #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements + # and their durations, > 0 logs only + # statements running at least this number + # of milliseconds + # - What to Log + #debug_print_parse = off + #debug_print_rewritten = off + #debug_print_plan = off + #debug_pretty_print = on + #log_checkpoints = off + #log_connections = off + #log_disconnections = off + #log_duration = off + #log_error_verbosity = default + # terse, default, or verbose messages + #log_hostname = off + log_line_prefix = '< %m >' # special values: + # %a = application name + # %u = 
user name + # %d = database name + # %r = remote host and port + # %h = remote host + # %p = process ID + # %t = timestamp without milliseconds + # %m = timestamp with milliseconds + # %i = command tag + # %e = SQL state + # %c = session ID + # %l = session line number + # %s = session start timestamp + # %v = virtual transaction ID + # %x = transaction ID (0 if none) + # %q = stop here in non-session + # processes + # %% = '%' + # e.g. '<%u%%%d> ' + #log_lock_waits = off # log lock waits >= deadlock_timeout + #log_statement = 'none' # none, ddl, mod, all + #log_temp_files = -1 # log temporary files equal or larger + # than the specified size in kilobytes; + # -1 disables, 0 logs all temp files + log_timezone = 'Australia/ACT' + +Although most parameters are commented out, they assume default values. We can see the log file directory is pg_log (log_directory parameter), the file names should start with postgresql (log_filename parameter), the files are rotated once every day (log_rotation_age parameter), and the log entries start with a timestamp (log_line_prefix parameter). Of particular interest is the log_line_prefix parameter: there is a whole gamut of information you can include there. + +Looking under the /var/lib/pgsql/9.4/data/pg_log directory shows us these files: + + [root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log + total 20 + -rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log + -rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log + -rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log + -rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log + -rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log + +So the log files only have the name of the weekday stamped in the file name. We can change it. How? Configure the log_filename parameter in postgresql.conf. 
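For example, a date-based pattern produces one file per calendar day instead of one per weekday. PostgreSQL expands strftime()-style escapes in log_filename, so you can preview what a candidate pattern would produce with the date command (a sketch; the pattern below is an illustrative choice, not a recommendation):

```shell
# In postgresql.conf you would set:
#   log_filename = 'postgresql-%Y-%m-%d.log'
# Preview the file name this pattern produces for today:
date +postgresql-%Y-%m-%d.log
```

With a pattern like this, culling files older than a given date becomes a simple glob or find expression.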
+ +Looking inside one log file shows its entries start with date time only: + + [root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log + ... + < 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request + < 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions + < 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down + < 2015-02-27 01:21:27.036 EST >LOG: shutting down + < 2015-02-27 01:21:27.211 EST >LOG: database system is shut down + +### Centralizing Application Logs ### + +#### Log File Monitoring with Imfile #### + +Traditionally, the most common way for applications to log their data is with files. Files are easy to search on a single machine but don’t scale well with more servers. You can set up log file monitoring and send the events to a centralized server when new logs are appended to the bottom. Create a new configuration file in /etc/rsyslog.d/ then add a file input like this: + + $ModLoad imfile + $InputFilePollInterval 10 + $PrivDropToGroup adm + +---------- + + # Input for FILE1 + $InputFileName /FILE1 + $InputFileTag APPNAME1 + $InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled + $InputFileSeverity info + $InputFilePersistStateInterval 20000 + $InputRunFileMonitor + +Replace FILE1 and APPNAME1 with your own file and application names. Rsyslog will send it to the outputs you have configured. + +#### Local Socket Logs with Imuxsock #### + +A socket is similar to a UNIX file handle except that the socket is read into memory by your syslog daemon and then sent to the destination. No file needs to be written. As an example, the logger command sends its logs to this UNIX socket. + +This approach makes efficient use of system resources if your server is constrained by disk I/O or you have no need for local file logs. The disadvantage of this approach is that the socket has a limited queue size. If your syslog daemon goes down or can’t keep up, then you could lose log data. 
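You can check whether the conventional syslog socket exists on your machine before relying on it. This is a small sketch; /dev/log is the standard path on most Linux distributions, but containers and minimal systems may not provide it.

```shell
# A UNIX domain socket appears as a special file; test -S checks for that.
if [ -S /dev/log ]; then
    echo "syslog socket present at /dev/log"
else
    echo "no syslog socket at /dev/log"
fi
```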
+ +The rsyslog daemon will read from the /dev/log socket by default, but you can explicitly enable it with the [imuxsock input module][17] using the following directive: + + $ModLoad imuxsock + +#### UDP Logs with Imudp #### + +Some applications output log data in UDP format, which is the standard syslog protocol when transferring log files over a network or your localhost. Your syslog daemon receives these logs and can process them or transmit them in a different format. Alternatively, you can send the logs to your log server or to a log management solution. + +Use the following directives to configure rsyslog to accept syslog data over UDP on the standard port 514: + + $ModLoad imudp + +---------- + + $UDPServerRun 514 + +### Manage Logs with Logrotate ### + +Log rotation is a process that archives log files automatically when they reach a specified age. Without intervention, log files keep growing, using up disk space. Eventually they will crash your machine. + +The logrotate utility can truncate your logs as they age, freeing up space. Your new log file retains the filename. Your old log file is renamed with a number appended to the end of it. Each time the logrotate utility runs, a new file is created and the existing file is renamed in sequence. You determine the threshold at which old files are deleted or archived. + +When logrotate copies a file, the new file has a new inode, which can interfere with rsyslog’s ability to monitor the new file. You can alleviate this issue by adding the copytruncate parameter to your logrotate cron job. This parameter copies existing log file contents to a new file and truncates these contents from the existing file. The inode never changes because the log file itself remains the same; its contents are in a new file. + +The logrotate utility uses the main configuration file at /etc/logrotate.conf and application-specific settings in the directory /etc/logrotate.d/. DigitalOcean has a detailed [tutorial on logrotate][18]. 
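As a sketch of what such a policy could look like, an application-specific file in /etc/logrotate.d/ might contain the following. The /var/log/myapp path and the retention values are illustrative assumptions, not fixed requirements.

```
# Hypothetical /etc/logrotate.d/myapp: rotate daily, keep about a month,
# compress older rotations, and keep the inode stable for rsyslog monitoring.
/var/log/myapp/*.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```

Here copytruncate trades a small risk of losing lines written during the copy for keeping the inode stable, which matters if rsyslog is monitoring the file.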
+ +### Manage Configuration on Many Servers ### + +When you have just a few servers, you can manually configure logging on them. Once you have a few dozen or more servers, you can take advantage of tools that make this easier and more scalable. At a basic level, all of these copy your rsyslog configuration to each server, and then restart rsyslog so the changes take effect. + +#### Pssh #### + +This tool lets you run an ssh command on several servers in parallel. Use a pssh deployment for only a small number of servers. If one of your servers fails, then you have to ssh into the failed server and do the deployment manually. If you have several failed servers, then the manual deployment on them can take a long time. + +#### Puppet/Chef #### + +Puppet and Chef are two different tools that can automatically configure all of the servers in your network to a specified standard. Their reporting tools let you know about failures and can resync periodically. Both Puppet and Chef have enthusiastic supporters. If you aren’t sure which one is more suitable for your deployment configuration management, you might appreciate [InfoWorld’s comparison of the two tools][19]. + +Some vendors also offer modules or recipes for configuring rsyslog. Here is an example from Loggly’s Puppet module. It offers a class for rsyslog to which you can add an identifying token: + + node 'my_server_node.example.net' { + # Send syslog events to Loggly + class { 'loggly::rsyslog': + customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000', + } + } + +#### Docker #### + +Docker uses containers to run applications independent of the underlying server. Everything runs from inside a container, which you can think of as a unit of functionality. ZDNet has an in-depth article about [using Docker][20] in your data center. + +There are several ways to log from Docker containers including linking to a logging container, logging to a shared volume, or adding a syslog agent directly inside the container. 
One of the most popular logging containers is called [logspout][21]. + +#### Vendor Scripts or Agents #### + +Most log management solutions offer scripts or agents to make sending data from one or more servers relatively easy. Heavyweight agents can use up extra system resources. Some vendors like Loggly offer configuration scripts to make using existing syslog daemons easier. Here is an example [script][22] from Loggly which can run on any number of servers. + +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl +[2]:http://www.rsyslog.com/ +[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system +[4]:http://logstash.net/ +[5]:http://www.fluentd.org/ +[6]:http://www.rsyslog.com/doc/rsyslog_conf.html +[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html +[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87 +[9]:https://www.loggly.com/docs/file-monitoring/ +[10]:http://www.networksorcery.com/enp/protocol/udp.htm +[11]:http://www.networksorcery.com/enp/protocol/tcp.htm +[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html +[13]:http://www.rsyslog.com/doc/relp.html +[14]:http://www.rsyslog.com/doc/queues.html +[15]:http://www.rsyslog.com/doc/tls_cert_ca.html +[16]:http://www.rsyslog.com/doc/tls_cert_machine.html 
+[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html +[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10 +[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html +[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/ +[21]:https://github.com/progrium/logspout +[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ \ No newline at end of file diff --git a/sources/tech/20150803 Troubleshooting with Linux Logs.md b/sources/tech/20150803 Troubleshooting with Linux Logs.md new file mode 100644 index 0000000000..8f595427a9 --- /dev/null +++ b/sources/tech/20150803 Troubleshooting with Linux Logs.md @@ -0,0 +1,116 @@ +Troubleshooting with Linux Logs +================================================================================ +Troubleshooting is the main reason people create logs. Often you’ll want to diagnose why a problem happened with your Linux system or application. An error message or a sequence of events can give you clues to the root cause, indicate how to reproduce the issue, and point out ways to fix it. Here are a few use cases for things you might want to troubleshoot in your logs. + +### Cause of Login Failures ### + +If you want to check if your system is secure, you can check your authentication logs for failed login attempts and unfamiliar successes. Authentication failures occur when someone passes incorrect or otherwise invalid login credentials, often to ssh for remote access or su for local access to another user’s permissions. These are logged by the [pluggable authentication module][1], or pam for short. Look in your logs for strings like Failed password and user unknown. Successful authentication records include strings like Accepted password and session opened. 
+ +Failure Examples: + + pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2 + Failed password for invalid user hoover from 10.0.2.2 port 4791 ssh2 + pam_unix(sshd:auth): check pass; user unknown + PAM service(sshd) ignoring max retries; 6 > 3 + +Success Examples: + + Accepted password for hoover from 10.0.2.2 port 4792 ssh2 + pam_unix(sshd:session): session opened for user hoover by (uid=0) + pam_unix(sshd:session): session closed for user hoover + +You can use grep to find which user accounts have the most failed logins. These are the accounts that potential attackers are trying and failing to access. This example is for an Ubuntu system. + + $ grep "invalid user" /var/log/auth.log | cut -d ' ' -f 10 | sort | uniq -c | sort -nr + 23 oracle + 18 postgres + 17 nagios + 10 zabbix + 6 test + +You’ll need to write a different command for each application and message because there is no standard format. Log management systems that automatically parse logs will effectively normalize them and help you extract key fields like username. + +Log management systems can extract the usernames from your Linux logs using automated parsing. This lets you see an overview of the users and filter on them with a single click. In this example, we can see that the root user logged in over 2,700 times because we are filtering the logs to show login attempts only for the root user. + +![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png) + +Log management systems also let you view graphs over time to spot unusual trends. If someone had one or two failed logins within a few minutes, it might be that a real user forgot his or her password. However, if there are hundreds of failed logins or they are all different usernames, it’s more likely that someone is trying to attack the system. Here you can see that on March 12, someone tried to log in as test and nagios several hundred times. 
This is clearly not a legitimate use of the system. + +![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png) + +### Cause of Reboots ### + +Sometimes a server can stop due to a system crash or reboot. How do you know when it happened and who did it? + +#### Shutdown Command #### + +If someone ran the shutdown command manually, you can see it in the auth log file. Here you can see that someone remotely logged in from the IP 50.0.134.125 as the user ubuntu and then shut the system down. + + Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh + Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0) + Mar 19 18:37:09 ip-172-31-11-231 sudo: ubuntu : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/sbin/shutdown -r now + +#### Kernel Initializing #### + +If you want to see when the server restarted regardless of the reason (including crashes), you can search the logs from kernel initialization. You’d search for messages from the kernel facility containing Initializing cpu. + + Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset + Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu + Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Linux version 3.8.0-44-generic (buildd@tipua) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 (Ubuntu 3.8.0-44.66~precise1-generic 3.8.13.25) + +### Detect Memory Problems ### + +There are lots of reasons a server might crash, but one common cause is running out of memory. + +When your system is low on memory, processes are killed, typically in the order of which ones will release the most resources. The error occurs when your system is using all of its memory and a new or existing process attempts to access additional memory. 
Look in your log files for strings like Out of Memory or for kernel warnings like to kill. These strings indicate that your system intentionally killed the process or application rather than allowing the process to crash. + +Examples: + + [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child + [29923450.995084] select 5230 (docker), adj 0, size 708, to kill + +You can find these logs using a tool like grep. This example is for Ubuntu: + + $ grep "Out of memory" /var/log/syslog + [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child + +Keep in mind that grep itself uses memory, so you might cause an out of memory error just by running grep. This is another reason it’s a fabulous idea to centralize your logs! + +### Log Cron Job Errors ### + +The cron daemon is a scheduler that runs processes at specified dates and times. If the process fails to run or fails to finish, then a cron error appears in your log files. You can find these files in /var/log/cron, /var/log/messages, and /var/log/syslog depending on your distribution. There are many reasons a cron job can fail. Usually the problems lie with the process rather than the cron daemon itself. + +By default, cron jobs send their output through email using Postfix. Here is a log showing that an email was sent. Unfortunately, you cannot see the contents of the message here. + + Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from= + Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110> + Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=, size=607, nrcpt=1 (queue active) + Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp) + +You should consider logging the cron standard output to help debug problems. 
Here is how you can redirect your cron standard output to syslog using the logger command. Replace the echo command with your own script and helloCron with whatever you want to set the appName to. + + */5 * * * * echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron + +This creates the following log entries: + + Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron) + Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World! + +Each cron job will log differently based on the specific type of job and how it outputs data. Hopefully there are clues to the root cause of problems within the logs, or you can add additional logging as needed. + +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-logs/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:http://linux.die.net/man/8/pam.d \ No newline at end of file From c06d768d03a95906e23bb18e5f4db16df178c668 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 3 Aug 2015 19:52:52 +0800 Subject: [PATCH 023/697] Update 20150803 Handy commands for profiling your Unix file systems.md --- ...0803 Handy commands for profiling your Unix file systems.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150803 Handy commands for profiling your Unix file systems.md b/sources/tech/20150803 Handy commands for profiling your Unix file systems.md index ae5951b0d7..359aba14c9 100644 --- a/sources/tech/20150803 Handy commands for profiling your Unix file systems.md +++ 
b/sources/tech/20150803 Handy commands for profiling your Unix file systems.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Handy commands for profiling your Unix file systems ================================================================================ ![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) @@ -61,4 +62,4 @@ via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.ht 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ \ No newline at end of file +[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ From 6660ef7b90c207bfa032b6db21ae448434ecfa47 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 3 Aug 2015 19:55:12 +0800 Subject: [PATCH 024/697] Update 20150803 Troubleshooting with Linux Logs.md --- sources/tech/20150803 Troubleshooting with Linux Logs.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150803 Troubleshooting with Linux Logs.md b/sources/tech/20150803 Troubleshooting with Linux Logs.md index 8f595427a9..9ee0820a9c 100644 --- a/sources/tech/20150803 Troubleshooting with Linux Logs.md +++ b/sources/tech/20150803 Troubleshooting with Linux Logs.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Troubleshooting with Linux Logs ================================================================================ Troubleshooting is the main reason people create logs. Often you’ll want to diagnose why a problem happened with your Linux system or application. An error message or a sequence of events can give you clues to the root cause, indicate how to reproduce the issue, and point out ways to fix it. Here are a few use cases for things you might want to troubleshoot in your logs. 
@@ -113,4 +114,4 @@ via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-log [a1]:https://www.linkedin.com/in/jasonskowronski [a2]:https://www.linkedin.com/in/amyecheverri [a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 -[1]:http://linux.die.net/man/8/pam.d \ No newline at end of file +[1]:http://linux.die.net/man/8/pam.d From 59b2f6c25fc31bc791983d869f02b3f2bf97d78a Mon Sep 17 00:00:00 2001 From: ictlyh Date: Mon, 3 Aug 2015 20:21:50 +0800 Subject: [PATCH 025/697] [Translated] tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md --- ...erver behind NAT via reverse SSH tunnel.md | 131 ------------------ ...erver behind NAT via reverse SSH tunnel.md | 131 ++++++++++++++++++ ...Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md} | 0 ...oncepts of RAID and RAID Levels – Part 1.md} | 0 4 files changed, 131 insertions(+), 131 deletions(-) delete mode 100644 sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md create mode 100644 translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md rename translated/tech/{Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 => Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md} (100%) rename translated/tech/{Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 => Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md} (100%) diff --git a/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md b/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md deleted file mode 100644 index 4239073013..0000000000 --- a/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md +++ /dev/null @@ -1,131 +0,0 @@ -ictlyh Translating -How to access a Linux server behind NAT via reverse SSH tunnel 
-================================================================================ -You are running a Linux server at home, which is behind a NAT router or restrictive firewall. Now you want to SSH to the home server while you are away from home. How would you set that up? SSH port forwarding will certainly be an option. However, port forwarding can become tricky if you are dealing with multiple nested NAT environment. Besides, it can be interfered with under various ISP-specific conditions, such as restrictive ISP firewalls which block forwarded ports, or carrier-grade NAT which shares IPv4 addresses among users. - -### What is Reverse SSH Tunneling? ### - -One alternative to SSH port forwarding is **reverse SSH tunneling**. The concept of reverse SSH tunneling is simple. For this, you will need another host (so-called "relay host") outside your restrictive home network, which you can connect to via SSH from where you are. You could set up a relay host using a [VPS instance][1] with a public IP address. What you do then is to set up a persistent SSH tunnel from the server in your home network to the public relay host. With that, you can connect "back" to the home server from the relay host (which is why it's called a "reverse" tunnel). As long as the relay host is reachable to you, you can connect to your home server wherever you are, or however restrictive your NAT or firewall is in your home network. - -![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg) - -### Set up a Reverse SSH Tunnel on Linux ### - -Let's see how we can create and use a reverse SSH tunnel. We assume the following. We will be setting up a reverse SSH tunnel from homeserver to relayserver, so that we can SSH to homeserver via relayserver from another computer called clientcomputer. The public IP address of **relayserver** is 1.1.1.1. - -On homeserver, open an SSH connection to relayserver as follows. 
- - homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1 - -Here the port 10022 is any arbitrary port number you can choose. Just make sure that this port is not used by other programs on relayserver. - -The "-R 10022:localhost:22" option defines a reverse tunnel. It forwards traffic on port 10022 of relayserver to port 22 of homeserver. - -With "-fN" option, SSH will go right into the background once you successfully authenticate with an SSH server. This option is useful when you do not want to execute any command on a remote SSH server, and just want to forward ports, like in our case. - -After running the above command, you will be right back to the command prompt of homeserver. - -Log in to relayserver, and verify that 127.0.0.1:10022 is bound to sshd. If so, that means a reverse tunnel is set up correctly. - - relayserver~$ sudo netstat -nap | grep 10022 - ----------- - - tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd - -Now from any other computer (e.g., clientcomputer), log in to relayserver. Then access homeserver as follows. - - relayserver~$ ssh -p 10022 homeserver_user@localhost - -One thing to take note is that the SSH login/password you type for localhost should be for homeserver, not for relayserver, since you are logging in to homeserver via the tunnel's local endpoint. So do not type login/password for relayserver. After successful login, you will be on homeserver. - -### Connect Directly to a NATed Server via a Reverse SSH Tunnel ### - -While the above method allows you to reach **homeserver** behind NAT, you need to log in twice: first to **relayserver**, and then to **homeserver**. This is because the end point of an SSH tunnel on relayserver is binding to loopback address (127.0.0.1). - -But in fact, there is a way to reach NATed homeserver directly with a single login to relayserver. For this, you will need to let sshd on relayserver forward a port not only from loopback address, but also from an external host. 
This is achieved by specifying **GatewayPorts** option in sshd running on relayserver. - -Open /etc/ssh/sshd_conf of **relayserver** and add the following line. - - relayserver~$ vi /etc/ssh/sshd_conf - ----------- - - GatewayPorts clientspecified - -Restart sshd. - -Debian-based system: - - relayserver~$ sudo /etc/init.d/ssh restart - -Red Hat-based system: - - relayserver~$ sudo systemctl restart sshd - -Now let's initiate a reverse SSH tunnel from homeserver as follows. -homeserver~$ ssh -fN -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1 - -Log in to relayserver and confirm with netstat command that a reverse SSH tunnel is established successfully. - - relayserver~$ sudo netstat -nap | grep 10022 - ----------- - - tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev - -Unlike a previous case, the end point of a tunnel is now at 1.1.1.1:10022 (relayserver's public IP address), not 127.0.0.1:10022. This means that the end point of the tunnel is reachable from an external host. - -Now from any other computer (e.g., clientcomputer), type the following command to gain access to NATed homeserver. - - clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1 - -In the above command, while 1.1.1.1 is the public IP address of relayserver, homeserver_user must be the user account associated with homeserver. This is because the real host you are logging in to is homeserver, not relayserver. The latter simply relays your SSH traffic to homeserver. - -### Set up a Persistent Reverse SSH Tunnel on Linux ### - -Now that you understand how to create a reverse SSH tunnel, let's make the tunnel "persistent", so that the tunnel is up and running all the time (regardless of temporary network congestion, SSH timeout, relay host rebooting, etc.). After all, if the tunnel is not always up, you won't be able to connect to your home server reliably. - -For a persistent tunnel, I am going to use a tool called autossh. 
As the name implies, this program allows you to automatically restart an SSH session should it breaks for any reason. So it is useful to keep a reverse SSH tunnel active. - -As the first step, let's set up [passwordless SSH login][2] from homeserver to relayserver. That way, autossh can restart a broken reverse SSH tunnel without user's involvement. - -Next, [install autossh][3] on homeserver where a tunnel is initiated. - -From homeserver, run autossh with the following arguments to create a persistent SSH tunnel destined to relayserver. - - homeserver~$ autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1 - -The "-M 10900" option specifies a monitoring port on relayserver which will be used to exchange test data to monitor an SSH session. This port should not be used by any program on relayserver. - -The "-fN" option is passed to ssh command, which will let the SSH tunnel run in the background. - -The "-o XXXX" options tell ssh to: - -- Use key authentication, not password authentication. -- Automatically accept (unknown) SSH host keys. -- Exchange keep-alive messages every 60 seconds. -- Send up to 3 keep-alive messages without receiving any response back. - -The rest of reverse SSH tunneling related options remain the same as before. - -If you want an SSH tunnel to be automatically up upon boot, you can add the above autossh command in /etc/rc.local. - -### Conclusion ### - -In this post, I talked about how you can use a reverse SSH tunnel to access a Linux server behind a restrictive firewall or NAT gateway from outside world. While I demonstrated its use case for a home network, you must be careful when applying it for corporate networks. 
Such a tunnel can be considered as a breach of a corporate policy, as it circumvents corporate firewalls and can expose corporate networks to outside attacks. There is a great chance it can be misused or abused. So always remember its implication before setting it up. - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:http://xmodulo.com/go/digitalocean -[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html -[3]:http://ask.xmodulo.com/install-autossh-linux.html diff --git a/translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md b/translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md new file mode 100644 index 0000000000..5f9828e912 --- /dev/null +++ b/translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md @@ -0,0 +1,131 @@ +如何通过反向 SSH 隧道访问 NAT 后面的 Linux 服务器 +================================================================================ +你在家里运行着一台 Linux 服务器,访问它需要先经过 NAT 路由器或者限制性防火墙。现在你想不在家的时候用 SSH 登录到这台服务器。你如何才能做到呢?SSH 端口转发当然是一种选择。但是,如果你需要处理多个嵌套的 NAT 环境,端口转发可能会变得非常棘手。另外,在多种 ISP 特定条件下可能会受到干扰,例如阻塞转发端口的限制性 ISP 防火墙、或者在用户间共享 IPv4 地址的运营商级 NAT。 + +### 什么是反向 SSH 隧道? 
### + +SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧道的概念非常简单。对于此,在限制性家庭网络之外你需要另一台主机(所谓的“中继主机”),你能从当前所在地通过 SSH 登录。你可以用有公共 IP 地址的 [VPS 实例][1] 配置一个中继主机。然后要做的就是从你家庭网络服务器中建立一个到公共中继主机的永久 SSH 隧道。有了这个隧道,你就可以从中继主机中连接“回”家庭服务器(这就是为什么称之为 “反向” 隧道)。不管你在哪里、你家庭网络中的 NAT 或 防火墙限制多么严重,只要你可以访问中继主机,你就可以连接到家庭服务器。 + +![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg) + +### 在 Linux 上设置反向 SSH 隧道 ### + +让我们来看看怎样创建和使用反向 SSH 隧道。我们有如下假设。我们会设置一个从家庭服务器到中继服务器的反向 SSH 隧道,然后我们可以通过中继服务器从客户端计算机 SSH 登录到家庭服务器。**中继服务器** 的公共 IP 地址是 1.1.1.1。 + +在家庭主机上,按照以下方式打开一个到中继服务器的 SSH 连接。 + + homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1 + +这里端口 10022 是任何你可以使用的端口数字。只需要确保中继服务器上不会有其它程序使用这个端口。 + +“-R 10022:localhost:22” 选项定义了一个反向隧道。它转发中继服务器 10022 端口的流量到家庭服务器的 22 号端口。 + +用 “-fN” 选项,当你用一个 SSH 服务器成功通过验证时 SSH 会进入后台运行。当你不想在远程 SSH 服务器执行任何命令、就像我们的例子中只想转发端口的时候非常有用。 + +运行上面的命令之后,你就会回到家庭主机的命令行提示框中。 + +登录到中继服务器,确认 127.0.0.1:10022 绑定到了 sshd。如果是的话就表示已经正确设置了反向隧道。 + + relayserver~$ sudo netstat -nap | grep 10022 + +---------- + + tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd + +现在就可以从任何其它计算机(客户端计算机)登录到中继服务器,然后按照下面的方法访问家庭服务器。 + + relayserver~$ ssh -p 10022 homeserver_user@localhost + +需要注意的一点是你在本地输入的 SSH 登录/密码应该是家庭服务器的,而不是中继服务器的,因为你是通过隧道的本地端点登录到家庭服务器。因此不要输入中继服务器的登录/密码。成功登陆后,你就在家庭服务器上了。 + +### 通过反向 SSH 隧道直接连接到网络地址变换后的服务器 ### + +上面的方法允许你访问 NAT 后面的 **家庭服务器**,但你需要登录两次:首先登录到 **中继服务器**,然后再登录到**家庭服务器**。这是因为中继服务器上 SSH 隧道的端点绑定到了回环地址(127.0.0.1)。 + +事实上,有一种方法可以只需要登录到中继服务器就能直接访问网络地址变换之后的家庭服务器。要做到这点,你需要让中继服务器上的 sshd 不仅转发回环地址上的端口,还要转发外部主机的端口。这通过指定中继服务器上运行的 sshd 的 **网关端口** 实现。 + +打开**中继服务器**的 /etc/ssh/sshd_conf 并添加下面的行。 + + relayserver~$ vi /etc/ssh/sshd_conf + +---------- + + GatewayPorts clientspecified + +重启 sshd。 + +基于 Debian 的系统: + + relayserver~$ sudo /etc/init.d/ssh restart + +基于红帽的系统: + + relayserver~$ sudo systemctl restart sshd + +现在在家庭服务器中按照下面方式初始化一个反向 SSH 隧道。 + + homeserver~$ ssh -fN -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1 + +登录到中继服务器然后用 netstat 命令确认成功建立的一个反向 SSH 隧道。 + + relayserver~$ 
sudo netstat -nap | grep 10022 + +---------- + + tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev + +不像之前的情况,现在隧道的端点是 1.1.1.1:10022(中继服务器的公共 IP 地址),而不是 127.0.0.1:10022。这就意味着从外部主机可以访问隧道的端点。 + +现在在任何其它计算机(客户端计算机),输入以下命令访问网络地址变换之后的家庭服务器。 + + clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1 + +在上面的命令中,1.1.1.1 是中继服务器的公共 IP 地址,家庭服务器用户必须是和家庭服务器相关联的用户账户。这是因为你真正登录到的主机是家庭服务器,而不是中继服务器。后者只是中继你的 SSH 流量到家庭服务器。 + +### 在 Linux 上设置一个永久反向 SSH 隧道 ### + +现在你已经明白了怎样创建一个反向 SSH 隧道,然后把隧道设置为 “永久”,这样隧道启动后就会一直运行(不管临时的网络拥塞、SSH 超时、中继主机重启,等等)。毕竟,如果隧道不是一直有效,你不可能可靠的登录到你的家庭服务器。 + +对于永久隧道,我打算使用一个叫 autossh 的工具。正如名字暗示的,这个程序允许你不管任何理由自动重启 SSH 会话。因此对于保存一个反向 SSH 隧道有效非常有用。 + +第一步,我们要设置从家庭服务器到中继服务器的[无密码 SSH 登录][2]。这样的话,autossh 可以不需要用户干预就能重启一个损坏的反向 SSH 隧道。 + +下一步,在初始化隧道的家庭服务器上[安装 autossh][3]。 + +在家庭服务器上,用下面的参数运行 autossh 来创建一个连接到中继服务器的永久 SSH 隧道。 + + homeserver~$ autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1 + +“-M 10900” 选项指定中继服务器上的监视端口,用于交换监视 SSH 会话的测试数据。中继服务器上的其它程序不能使用这个端口。 + +“-fN” 选项传递给 ssh 命令,让 SSH 隧道在后台运行。 + +“-o XXXX” 选项让 ssh: + +- 使用密钥验证,而不是密码验证。 +- 自动接受(未知)SSH 主机密钥。 +- 每 60 秒交换 keep-alive 消息。 +- 没有收到任何响应时最多发送 3 条 keep-alive 消息。 + +其余 SSH 隧道相关的选项和之前介绍的一样。 + +如果你想系统启动时自动运行 SSH 隧道,你可以将上面的 autossh 命令添加到 /etc/rc.local。 + +### 总结 ### + +在这篇博文中,我介绍了你如何能从外部中通过反向 SSH 隧道访问限制性防火墙或 NAT 网关之后的 Linux 服务器。尽管我介绍了家庭网络中的一个使用事例,在企业网络中使用时你尤其要小心。这样的一个隧道可能被视为违反公司政策,因为它绕过了企业的防火墙并把企业网络暴露给外部攻击。这很可能被误用或者滥用。因此在使用之前一定要记住它的作用。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html + +作者:[Dan Nanni][a] +译者:[ictlyh](https://github.com/ictlyh) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni 
+[1]:http://xmodulo.com/go/digitalocean +[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html +[3]:http://ask.xmodulo.com/install-autossh-linux.html diff --git a/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 b/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md similarity index 100% rename from translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 rename to translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md diff --git a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md similarity index 100% rename from translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 rename to translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md From f1e0bd44ae78a3ffd31607226d2d44e846a15e5a Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Mon, 3 Aug 2015 21:31:58 +0800 Subject: [PATCH 026/697] [Translated]20150128 7 communities driving open source development.md --- ...unities driving open source development.md | 56 +++++++++---------- 1 file changed, 27 insertions(+), 29 deletions(-) diff --git a/sources/talk/20150128 7 communities driving open source development.md b/sources/talk/20150128 7 communities driving open source development.md index c3b6df31d2..2074ad9e23 100644 --- a/sources/talk/20150128 7 communities driving open source development.md +++ b/sources/talk/20150128 7 communities driving open source development.md @@ -1,79 +1,77 @@ -FSSlc Translating - -7 communities driving open source development +7 个驱动开源发展的社区 ================================================================================ -Not so long ago, the open source model was the rebellious kid on the block, viewed with suspicion by 
established industry players. Today, open initiatives and foundations are flourishing with long lists of vendor committers who see the model as a key to innovation. +不久前,开源模式还被成熟的工业厂商以怀疑的态度认作是叛逆小孩的玩物。如今,开放的倡议和基金会在一长列的供应商提供者的支持下正蓬勃发展,而他们将开源模式视作创新的关键。 ![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg) -### Open Development of Tech Drives Innovation ### +### 技术的开放发展驱动着创新 ### -Over the past two decades, open development of technology has come to be seen as a key to driving innovation. Even companies that once saw open source as a threat have come around — Microsoft, for example, is now active in a number of open source initiatives. To date, most open development has focused on software. But even that is changing as communities have begun to coalesce around open hardware initiatives. Here are seven organizations that are successfully promoting and developing open technologies, both hardware and software. +在过去的 20 几年间,技术的开放发展已被视作驱动创新的关键因素。即使那些以前将开源视作威胁的公司也开始接受这个观点 — 例如微软,如今它在一系列的开源倡议中表现活跃。到目前为止,大多数的开放发展都集中在软件方面,但甚至这个也正在改变,因为社区已经开始向开源硬件倡议方面聚拢。这里有 7 个成功地在硬件和软件方面同时促进和发展开源技术的组织。 -### OpenPOWER Foundation ### +### OpenPOWER 基金会 ### ![](http://images.techhive.com/images/article/2015/01/1_openpower-100539100-orig.jpg) -The [OpenPOWER Foundation][2] was founded by IBM, Google, Mellanox, Tyan and NVIDIA in 2013 to drive open collaboration hardware development in the same spirit as the open source software development which has found fertile ground in the past two decades. +[OpenPOWER 基金会][2] 由 IBM, Google, Mellanox, Tyan 和 NVIDIA 于 2013 年共同创建,在与开源软件发展相同的精神下,旨在驱动开放协作硬件的发展,在过去的 20 几年间,开源软件发展已经找到了肥沃的土壤。 -IBM seeded the foundation by opening up its Power-based hardware and software technologies, offering licenses to use Power IP in independent hardware products. More than 70 members now work together to create custom open servers, components and software for Linux-based data centers. 
+IBM 通过开放其基于 Power 架构的硬件和软件技术,向使用 Power IP 的独立硬件产品提供许可证等方式为基金会的建立播下种子。如今超过 70 个成员共同协作来为基于 Linux 的数据中心提供自定义的开放服务器,组件和硬件。 -In April, OpenPOWER unveiled a technology roadmap based on new POWER8 process-based servers capable of analyzing data 50 times faster than the latest x86-based systems. In July, IBM and Google released a firmware stack. October saw the availability of NVIDIA GPU accelerated POWER8 systems and the first OpenPOWER reference server from Tyan. +今年四月,在比最新基于 x86 系统快 50 倍的数据分析能力的新的 POWER8 处理器的服务器的基础上, OpenPOWER 推出了一个技术路线图。七月, IBM 和 Google 发布了一个固件堆栈。十月见证了 NVIDIA GPU 带来加速 POWER8 系统的能力和来自 Tyan 的第一个 OpenPOWER 参考服务器。 -### The Linux Foundation ### +### Linux 基金会 ### ![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg) -Founded in 2000, [The Linux Foundation][2] is now the host for the largest open source, collaborative development effort in history, with more than 180 corporate members and many individual and student members. It sponsors the work of key Linux developers and promotes, protects and advances the Linux operating system and collaborative software development. +于 2000 年建立的 [Linux 基金会][2] 如今成为掌控着历史上最大的开源协同发展成果,它有着超过 180 个合作成员和许多独立成员及学生成员。它赞助核心 Linux 开发者的工作并促进、保护和推进 Linux 操作系统和协作软件的开发。 -Some of its most successful collaborative projects include Code Aurora Forum (a consortium of companies with projects serving the mobile wireless industry), MeeGo (a project to build a Linux kernel-based operating system for mobile devices and IVI) and the Open Virtualization Alliance (which fosters the adoption of free and open source software virtualization solutions). 
+它最为成功的协作项目包括 Code Aurora Forum (一个拥有为移动无线产业服务的企业财团),MeeGo (一个为移动设备和 IVI (注:指的是车载消息娱乐设备,为 In-Vehicle Infotainment 的简称) 构建一个基于 Linux 内核的操作系统的项目) 和 Open Virtualization Alliance (开放虚拟化联盟,它促进自由和开源软件虚拟化解决方案的采用)。 -### Open Virtualization Alliance ### +### 开放虚拟化联盟 ### ![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg) -The [Open Virtualization Alliance (OVA)][3] exists to foster the adoption of free and open source software virtualization solutions like Kernel-based Virtual Machine (KVM) through use cases and support for the development of interoperable common interfaces and APIs. KVM turns the Linux kernel into a hypervisor. +[开放虚拟化联盟(OVA)][3] 的存在目的为:通过提供使用案例和对具有互操作性的通用接口和 API 的发展提供支持,来促进自由、开源软件的虚拟化解决方案例如 KVM 的采用。KVM 将 Linux 内核转变为一个虚拟机管理程序。 -Today, KVM is the most commonly used hypervisor with OpenStack. +如今, KVM 已成为和 OpenStack 共同使用的最为常见的虚拟机管理程序。 -### The OpenStack Foundation ### +### OpenStack 基金会 ### ![](http://images.techhive.com/images/article/2015/01/4_the-openstack-foundation-100539096-orig.jpg) -Originally launched as an Infrastructure-as-a-Service (IaaS) product by NASA and Rackspace hosting in 2010, the [OpenStack Foundation][4] has become the home for one of the biggest open source projects around. It boasts more than 200 member companies, including AT&T, AMD, Avaya, Canonical, Cisco, Dell and HP. +原本作为一个 IaaS(基础设施即服务) 产品由 NASA 和 Rackspace 于 2010 年启动,[OpenStack 基金会][4] 已成为最大的开源项目聚居地之一。它拥有超过 200 家公司成员,其中包括 AT&T, AMD, Avaya, Canonical, Cisco, Dell 和 HP。 -Organized around a six-month release cycle, the foundation's OpenStack projects are developed to control pools of processing, storage and networking resources through a data center — all managed or provisioned through a Web-based dashboard, command-line tools or a RESTful API. 
So far, the collaborative development supported by the foundation has resulted in the creation of OpenStack components including OpenStack Compute (a cloud computing fabric controller that is the main part of an IaaS system), OpenStack Networking (a system for managing networks and IP addresses) and OpenStack Object Storage (a scalable redundant storage system). +大约以 6 个月为一个发行周期,基金会的 OpenStack 项目被发展用来通过一个基于 Web 的仪表盘,命令行工具或一个 RESTful 风格的 API 来控制或调配流经一个数据中心的处理存储池和网络资源。至今为止,基金会支持的协作发展已经孕育出了一系列 OpenStack 组件,其中包括 OpenStack Compute(一个云计算网络控制器,它是一个 IaaS 系统的主要部分),OpenStack Networking(一个用以管理网络和 IP 地址的系统) 和 OpenStack Object Storage(一个可扩展的冗余存储系统)。 ### OpenDaylight ### ![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg) -Another collaborative project to come out of the Linux Foundation, [OpenDaylight][5] is a joint initiative of industry vendors, like Dell, HP, Oracle and Avaya founded in April 2013. Its mandate is the creation of a community-led, open, industry-supported framework consisting of code and blueprints for Software-Defined Networking (SDN). The idea is to provide a fully functional SDN platform that can be deployed directly, without requiring other components, though vendors can offer add-ons and enhancements. +作为来自 Linux 基金会的另一个协作项目, [OpenDaylight][5] 是一个由诸如 Dell, HP, Oracle 和 Avaya 等行业厂商于 2013 年 4 月建立的联合倡议。它的任务是建立一个由社区主导,开放,有工业支持的针对 Software-Defined Networking (SDN) 的包含代码和蓝图的框架。其思路是提供一个可直接部署的全功能 SDN 平台,而不需要其他组件,供应商可提供附件组件和增强组件。 -### Apache Software Foundation ### +### Apache 软件基金会 ### ![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg) -The [Apache Software Foundation (ASF)][7] is home to nearly 150 top level projects ranging from open source enterprise automation software to a whole ecosystem of distributed computing projects related to Apache Hadoop. 
These projects deliver enterprise-grade, freely available software products, while the Apache License is intended to make it easy for users, whether commercial or individual, to deploy Apache products. +[Apache 软件基金会 (ASF)][7] 是将近 150 个顶级项目的聚居地,这些项目涵盖从开源企业级自动化软件到与 Apache Hadoop 相关的分布式计算的整个生态系统。这些项目分发企业级、可免费获取的软件产品,而 Apache 协议则是为了让无论是商业用户还是个人用户更方便地部署 Apache 的产品。 -ASF was incorporated in 1999 as a membership-based, not-for-profit corporation with meritocracy at its heart — to become a member you must first be actively contributing to one or more of the foundation's collaborative projects. +ASF 于 1999 年作为一个会员制,非盈利公司注册,其核心为精英 — 要成为它的成员,你必须首先在基金会的一个或多个协作项目中做出积极贡献。 -### Open Compute Project ### +### 开放计算项目 ### ![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg) -An outgrowth of Facebook's redesign of its Oregon data center, the [Open Compute Project (OCP)][7] aims to develop open hardware solutions for data centers. The OCP is an initiative made up of cheap, vanity-free servers, modular I/O storage for Open Rack (a rack standard designed for data centers to integrate the rack into the data center infrastructure) and a relatively "green" data center design. +作为 Facebook 重新设计其 Oregon 数据中心的副产物, [开放计算项目][7] 旨在发展针对数据中心的开放硬件解决方案。 OCP 是一个由廉价、无浪费的服务器,针对 Open Rack(为数据中心设计的机架标准,来让机架集成到数据中心的基础设施中) 的模块化 I/O 存储和一个相对 "绿色" 的数据中心设计方案等构成。 -OCP board members include representatives from Facebook, Intel, Goldman Sachs, Rackspace and Microsoft. +OCP 董事会成员包括来自 Facebook,Intel,Goldman Sachs,Rackspace 和 Microsoft 的代表。 -OCP recently announced two options for licensing: an Apache 2.0-like license that allows for derivative works and a more prescriptive license that encourages changes to be rolled back into the original software. 
+OCP 最近宣布了许可证的两个选择: 一个类似 Apache 2.0 的允许衍生工作的许可证和一个更规范的鼓励回滚到原有软件的更改的许可证。 -------------------------------------------------------------------------------- via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities-driving-open-source-development.html 作者:[Thor Olavsrud][a] -译者:[译者ID](https://github.com/译者ID) +译者:[FSSlc](https://github.com/FSSlc) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 @@ -85,4 +83,4 @@ via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities [4]:http://www.openstack.org/foundation/ [5]:http://www.opendaylight.org/ [6]:http://www.apache.org/ -[7]:http://www.opencompute.org/ +[7]:http://www.opencompute.org/ \ No newline at end of file From 87c50a0cd95d1f25360ef2b85cf47a997aa333e0 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 3 Aug 2015 22:27:09 +0800 Subject: [PATCH 027/697] =?UTF-8?q?=E8=A1=A5=E5=AE=8C=20PR?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @FSSlc --- .../20150128 7 communities driving open source development.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/talk/20150128 7 communities driving open source development.md (100%) diff --git a/sources/talk/20150128 7 communities driving open source development.md b/translated/talk/20150128 7 communities driving open source development.md similarity index 100% rename from sources/talk/20150128 7 communities driving open source development.md rename to translated/talk/20150128 7 communities driving open source development.md From 2ee44522e991b6d9f8be3aab4e341a3c473772dc Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 3 Aug 2015 22:36:14 +0800 Subject: [PATCH 028/697] =?UTF-8?q?=E8=B6=85=E6=9C=9F=E5=9B=9E=E6=94=B6?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @wi-cuckoo @KevinSJ --- .../tech/20150717 How to monitor NGINX with Datadog - Part 
3.md | 1 - sources/tech/20150717 How to monitor NGINX- Part 1.md | 1 - 2 files changed, 2 deletions(-) diff --git a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md index 40787cdd96..949fd3d949 100644 --- a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md +++ b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md @@ -1,4 +1,3 @@ -translating wi-cuckoo How to monitor NGINX with Datadog - Part 3 ================================================================================ ![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png) diff --git a/sources/tech/20150717 How to monitor NGINX- Part 1.md b/sources/tech/20150717 How to monitor NGINX- Part 1.md index 97ab822fca..1ae6858792 100644 --- a/sources/tech/20150717 How to monitor NGINX- Part 1.md +++ b/sources/tech/20150717 How to monitor NGINX- Part 1.md @@ -1,4 +1,3 @@ -KevinSJ Translating How to monitor NGINX - Part 1 ================================================================================ ![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) From 42083f4166af08cd49ac5749ca8f770663545f95 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 3 Aug 2015 22:42:03 +0800 Subject: [PATCH 029/697] Update 20150717 How to monitor NGINX with Datadog - Part 3.md --- .../20150717 How to monitor NGINX with Datadog - Part 3.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md index 949fd3d949..727c552ed0 100644 --- a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md +++ b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md @@ -1,3 +1,4 @@ +translation by strugglingyouth How to monitor NGINX with Datadog - Part 3 
================================================================================ ![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png) @@ -147,4 +148,4 @@ via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ [16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics [17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up [18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md -[19]:https://github.com/DataDog/the-monitor/issues \ No newline at end of file +[19]:https://github.com/DataDog/the-monitor/issues From c52f369407f617927b6cc65e9a07937e957acd50 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 3 Aug 2015 22:44:22 +0800 Subject: [PATCH 030/697] Update 20150717 How to monitor NGINX- Part 1.md --- sources/tech/20150717 How to monitor NGINX- Part 1.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150717 How to monitor NGINX- Part 1.md b/sources/tech/20150717 How to monitor NGINX- Part 1.md index 1ae6858792..690ab192ba 100644 --- a/sources/tech/20150717 How to monitor NGINX- Part 1.md +++ b/sources/tech/20150717 How to monitor NGINX- Part 1.md @@ -1,3 +1,4 @@ +translation by strugglingyouth How to monitor NGINX - Part 1 ================================================================================ ![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) From afcff7a42fd2eb7d06bc5cdf51ca624424c8f75a Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 3 Aug 2015 23:14:11 +0800 Subject: [PATCH 031/697] PUB:20150717 Howto Configure FTP Server with Proftpd on Fedora 22 @zpl1025 --- ...re FTP Server with Proftpd on Fedora 22.md | 20 ++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) rename {translated/tech => published}/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md (83%) diff --git a/translated/tech/20150717 Howto Configure FTP Server with Proftpd on Fedora 
22.md b/published/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md similarity index 83% rename from translated/tech/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md rename to published/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md index 0ccfe69b8f..d812c1b0ac 100644 --- a/translated/tech/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md +++ b/published/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md @@ -1,11 +1,13 @@ 如何在 Fedora 22 上配置 Proftpd 服务器 ================================================================================ -在本文中,我们将了解如何在运行 Fedora 22 的电脑或服务器上使用 Proftpd 架设 FTP 服务器。[ProFTPD][1] 是一款免费的基于 GPL 授权开源的 FTP 服务器软件,是 Linux 上的主流 FTP 服务器。它的主要设计目标是具备许多高级功能以及能为用户提供丰富的配置选项可以轻松实现定制。它的许多配置选项在其他一些 FTP 服务器软件里仍然没有集成。最初它是被开发作为 wu-ftpd 服务器的一个更安全更容易配置的替代。FTP 服务器是这样一个软件,用户可以通过 FTP 客户端从安装了它的远端服务器上传或下载文件和目录。下面是一些 ProFTPD 服务器的主要功能,更详细的资料可以访问 [http://www.proftpd.org/features.html][2]。 +在本文中,我们将了解如何在运行 Fedora 22 的电脑或服务器上使用 Proftpd 架设 FTP 服务器。[ProFTPD][1] 是一款基于 GPL 授权的自由开源 FTP 服务器软件,是 Linux 上的主流 FTP 服务器。它的主要设计目标是提供许多高级功能以及给用户提供丰富的配置选项以轻松实现定制。它具备许多在其他一些 FTP 服务器软件里仍然没有的配置选项。最初它是被开发作为 wu-ftpd 服务器的一个更安全更容易配置的替代。 -- 每个目录都包含 ".ftpaccess" 文件用于访问控制,类似 Apache 的 ".htaccess" +FTP 服务器是这样一个软件,用户可以通过 FTP 客户端从安装了它的远端服务器上传或下载文件和目录。下面是一些 ProFTPD 服务器的主要功能,更详细的资料可以访问 [http://www.proftpd.org/features.html][2]。 + +- 每个目录都可以包含 ".ftpaccess" 文件用于访问控制,类似 Apache 的 ".htaccess" - 支持多个虚拟 FTP 服务器以及多用户登录和匿名 FTP 服务。 - 可以作为独立进程启动服务或者通过 inetd/xinetd 启动 -- 它的文件/目录属性、属主和权限采用类 UNIX 方式。 +- 它的文件/目录属性、属主和权限是基于 UNIX 方式的。 - 它可以独立运行,保护系统避免 root 访问可能带来的损坏。 - 模块化的设计让它可以轻松扩展其他模块,比如 LDAP 服务器,SSL/TLS 加密,RADIUS 支持,等等。 - ProFTPD 服务器还支持 IPv6. @@ -38,7 +40,7 @@ ### 3. 
添加 FTP 用户 ### -在设定好了基本的配置文件后,我们很自然地希望为指定目录添加 FTP 用户。目前用来登录的用户是 FTP 服务自动生成的,可以用来登录到 FTP 服务器。但是,在这篇教程里,我们将创建一个以 ftp 服务器上指定目录为主目录的新用户。 +在设定好了基本的配置文件后,我们很自然地希望添加一个以特定目录为根目录的 FTP 用户。目前登录的用户自动就可以使用 FTP 服务,可以用来登录到 FTP 服务器。但是,在这篇教程里,我们将创建一个以 ftp 服务器上指定目录为主目录的新用户。 下面,我们将建立一个名字是 ftpgroup 的新用户组。 @@ -57,7 +59,7 @@ Retype new password: passwd: all authentication tokens updated successfully. -现在,我们将通过下面命令为这个 ftp 用户设定主目录的读写权限。 +现在,我们将通过下面命令为这个 ftp 用户设定主目录的读写权限(LCTT 译注:这是SELinux 相关设置,如果未启用 SELinux,可以不用)。 $ sudo setsebool -P allow_ftpd_full_access=1 $ sudo setsebool -P ftp_home_dir=1 @@ -129,7 +131,7 @@ 如果 **打开了 TLS/SSL 加密**,执行下面的命令。 - $sudo firewall-cmd --add-port=1024-65534/tcp + $ sudo firewall-cmd --add-port=1024-65534/tcp $ sudo firewall-cmd --add-port=1024-65534/tcp --permanent 如果 **没有打开 TLS/SSL 加密**,执行下面的命令。 @@ -158,7 +160,7 @@ ### 7. 登录到 FTP 服务器 ### -现在,如果都是按照本教程设置好的,我们一定可以连接到 ftp 服务器并使用以上设置的信息登录上去。在这里,我们将配置一下 FTP 客户端 filezilla,使用 **服务器的 IP 或 URL **作为主机名,协议选择 **FTP**,用户名填入 **arunftp**,密码是在上面第 3 步中设定的密码。如果你按照第 4 步中的方式打开了 TLS 支持,还需要在加密类型中选择 **显式要求基于 TLS 的 FTP**,如果没有打开,也不想使用 TLS 加密,那么加密类型选择 **简单 FTP**。 +现在,如果都是按照本教程设置好的,我们一定可以连接到 ftp 服务器并使用以上设置的信息登录上去。在这里,我们将配置一下 FTP 客户端 filezilla,使用 **服务器的 IP 或名称 **作为主机名,协议选择 **FTP**,用户名填入 **arunftp**,密码是在上面第 3 步中设定的密码。如果你按照第 4 步中的方式打开了 TLS 支持,还需要在加密类型中选择 **要求显式的基于 TLS 的 FTP**,如果没有打开,也不想使用 TLS 加密,那么加密类型选择 **简单 FTP**。 ![FTP 登录细节](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-login-details.png) @@ -170,7 +172,7 @@ ### 总结 ### -最后,我们成功地在 Fedora 22 机器上安装并配置好了 Proftpd FTP 服务器。Proftpd 是一个超级强大,能高度配置和扩展的 FTP 守护软件。上面的教程展示了如何配置一个采用 TLS 加密的安全 FTP 服务器。强烈建议设置 FTP 服务器支持 TLS 加密,因为它允许使用 SSL 凭证加密数据传输和登录。本文中,我们也没有配置 FTP 的匿名访问,因为一般受保护的 FTP 系统不建议这样做。 FTP 访问让人们的上传和下载变得非常简单也更高效。我们还可以改变用户端口增加安全性。好吧,如果你有任何疑问,建议,反馈,请在下面评论区留言,这样我们就能够改善并更新文章内容。谢谢!玩的开心 :-) +最后,我们成功地在 Fedora 22 机器上安装并配置好了 Proftpd FTP 服务器。Proftpd 是一个超级强大,能高度定制和扩展的 FTP 守护软件。上面的教程展示了如何配置一个采用 TLS 加密的安全 FTP 服务器。强烈建议设置 FTP 服务器支持 TLS 加密,因为它允许使用 SSL 凭证加密数据传输和登录。本文中,我们也没有配置 FTP 的匿名访问,因为一般受保护的 FTP 
系统不建议这样做。 FTP 访问让人们的上传和下载变得非常简单也更高效。我们还可以改变用户端口增加安全性。好吧,如果你有任何疑问,建议,反馈,请在下面评论区留言,这样我们就能够改善并更新文章内容。谢谢!玩的开心 :-) -------------------------------------------------------------------------------- @@ -178,7 +180,7 @@ via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/ 作者:[Arun Pyasi][a] 译者:[zpl1025](https://github.com/zpl1025) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 1908ba60a53e99fcfc69f82dba743935707b41d3 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 3 Aug 2015 23:46:03 +0800 Subject: [PATCH 032/697] PUB:20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall @wwy-hust --- ...Experience on Linux 'iptables' Firewall.md | 205 ++++++++++++++++++ ...Experience on Linux 'iptables' Firewall.md | 205 ------------------ 2 files changed, 205 insertions(+), 205 deletions(-) create mode 100644 published/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md delete mode 100644 translated/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md diff --git a/published/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md b/published/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md new file mode 100644 index 0000000000..9d8d582dfb --- /dev/null +++ b/published/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md @@ -0,0 +1,205 @@ +关于Linux防火墙'iptables'的面试问答 +================================================================================ +Nishita Agarwal是Tecmint的用户,她将分享关于她刚刚经历的一家公司(印度的一家私人公司Pune)的面试经验。在面试中她被问及许多不同的问题,但她是iptables方面的专家,因此她想分享这些关于iptables的问题和相应的答案给那些以后可能会进行相关面试的人。 + +![Linux防火墙Iptables面试问题](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg) + +所有的问题和相应的答案都基于Nishita 
Agarwal的记忆并经过了重写。
+
+> “嗨,朋友!我叫**Nishita Agarwal**。我已经取得了理学学士学位,我的专业集中在UNIX和它的变种(BSD,Linux)。它们一直深深地吸引着我。我在存储方面有1年多的经验。我正在寻求职业上的变化,并即将入职一家位于印度浦那(Pune)的公司。”
+
+下面是我在面试中被问到的问题的集合。我已经把我记忆中有关iptables的问题和它们的答案记录了下来。希望这会对您未来的面试有所帮助。
+
+### 1. 你听说过Linux下面的iptables和Firewalld么?知不知道它们是什么,是用来干什么的? ###
+
+**答案** : iptables和Firewalld我都知道,并且我已经使用iptables好一段时间了。iptables主要由C语言写成,并且以GNU GPL许可证发布。它是从系统管理员的角度写的,最新的稳定版是iptables 1.4.21。iptables通常被用作类UNIX系统中的防火墙,更准确地说,可以称为iptables/netfilter。管理员通过终端/GUI工具与iptables打交道,来添加和定义防火墙规则到预定义的表中。Netfilter是内核中的一个模块,它执行包过滤的任务。
+
+Firewalld是RHEL/CentOS 7(也许还有其他发行版,但我不太清楚)中最新的过滤规则的实现。它已经取代了iptables接口,并与netfilter相连接。
+
+### 2. 你用过一些iptables的GUI或命令行工具么? ###
+
+**答案** : 虽然我既用过GUI工具,比如与[Webmin][1]结合的Shorewall,也直接通过终端访问过iptables,但我必须承认通过Linux终端直接访问iptables能给予用户更高级的灵活性、以及对其背后工作更好的理解的能力。GUI适合初级管理员,而终端适合有经验的管理员。
+
+### 3. 那么iptables和firewalld的基本区别是什么呢? ###
+
+**答案** : iptables和firewalld都有着同样的目的(包过滤),但它们使用不同的方式。iptables与firewalld不同,在每次发生更改时都刷新整个规则集。通常iptables配置文件位于‘/etc/sysconfig/iptables‘,而firewalld的配置文件位于‘/etc/firewalld/‘。firewalld的配置文件是一组XML文件。以XML为基础进行配置的firewalld比iptables的配置更加容易,但是两者都可以完成同样的任务。例如,firewalld可以在自己的命令行界面以及基于XML的配置文件下使用iptables。
+
+### 4. 如果有机会的话,你会在你所有的服务器上用firewalld替换iptables么? ###
+
+**答案** : 我对iptables很熟悉,它也工作得很好。如果没有任何需求需要firewalld的动态特性,那么没有理由把所有的配置都从iptables迁移到firewalld。通常情况下,目前为止,我还没有看到iptables造成什么麻烦。IT技术的通用准则也说道“为什么要修一件没有坏的东西呢?”。上面是我自己的想法,但如果组织愿意用firewalld替换iptables的话,我不介意。
+
+### 5. 
你看上去对iptables很有信心,巧的是,我们的服务器也在使用iptables。 ###
+
+iptables使用的表有哪些?请简要地描述iptables使用的表以及它们所支持的链。
+
+**答案** : 谢谢您的赞赏。至于您问的问题,iptables使用的表有四个,它们是:
+
+- Nat 表
+- Mangle 表
+- Filter 表
+- Raw 表
+
+Nat表 : Nat表主要用于网络地址转换。根据表中的每一条规则修改网络包的IP地址。流中的包仅遍历一遍Nat表。例如,如果一个通过某个接口的包被修饰(修改了IP地址),该流中其余的包将不再遍历这个表。通常不建议在这个表中进行过滤,由NAT表支持的链称为PREROUTING 链,POSTROUTING 链和OUTPUT 链。
+
+Mangle表 : 正如它的名字一样,这个表用于校正网络包。它用来对特殊的包进行修改。它能够修改不同包的头部和内容。Mangle表不能用于地址伪装。支持的链包括PREROUTING 链,OUTPUT 链,Forward 链,Input 链和POSTROUTING 链。
+
+Filter表 : Filter表是iptables中使用的默认表,它用来过滤网络包。如果没有定义任何规则,Filter表则被当作默认的表,并且基于它来过滤。支持的链有INPUT 链,OUTPUT 链,FORWARD 链。
+
+Raw表 : Raw表用于配置那些需要豁免连接跟踪的数据包。它支持PREROUTING 链和OUTPUT 链。
+
+### 6. 简要谈谈什么是iptables中的目标值(能被指定为目标),它们有什么用 ###
+
+**答案** : 下面是在iptables中可以指定为目标的值:
+
+- ACCEPT : 接受包
+- QUEUE : 将包传递到用户空间 (应用程序和驱动所在的地方)
+- DROP : 丢弃包
+- RETURN : 停止执行当前链中的后续规则,并将控制权交回调用它的链
+
+### 7. 让我们来谈谈iptables技术方面的东西,我的意思是说实际使用方面 ###
+
+你怎么检测在CentOS中安装iptables时需要的iptables的rpm?
+
+**答案** : iptables已经被默认安装在CentOS中,我们不需要单独安装它。但可以这样检测rpm:
+
+    # rpm -qa iptables
+
+    iptables-1.4.21-13.el7.x86_64
+
+如果您需要安装它,您可以用yum来安装。
+
+    # yum install iptables-services
+
+### 8. 怎样检测并且确保iptables服务正在运行? ###
+
+**答案** : 您可以在终端中运行下面的命令来检测iptables的状态。
+
+    # service iptables status [On CentOS 6/5]
+    # systemctl status iptables [On CentOS 7]
+
+如果iptables没有在运行,可以使用下面的语句
+
+    ---------------- 在CentOS 6/5下 ----------------
+    # chkconfig --level 35 iptables on
+    # service iptables start
+
+    ---------------- 在CentOS 7下 ----------------
+    # systemctl enable iptables
+    # systemctl start iptables
+
+我们还可以检测iptables的模块是否被加载:
+
+    # lsmod | grep ip_tables
+
+### 9. 你怎么检查iptables中当前定义的规则呢? 
### + +**答案** : 当前的规则可以简单的用下面的命令查看: + + # iptables -L + +示例输出 + + Chain INPUT (policy ACCEPT) + target prot opt source destination + ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED + ACCEPT icmp -- anywhere anywhere + ACCEPT all -- anywhere anywhere + ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + + Chain FORWARD (policy ACCEPT) + target prot opt source destination + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + + Chain OUTPUT (policy ACCEPT) + target prot opt source destination + +### 10. 你怎样刷新所有的iptables规则或者特定的链呢? ### + +**答案** : 您可以使用下面的命令来刷新一个特定的链。 + + # iptables --flush OUTPUT + +要刷新所有的规则,可以用: + + # iptables --flush + +### 11. 请在iptables中添加一条规则,接受所有从一个信任的IP地址(例如,192.168.0.7)过来的包。 ### + +**答案** : 上面的场景可以通过运行下面的命令来完成。 + + # iptables -A INPUT -s 192.168.0.7 -j ACCEPT + +我们还可以在源IP中使用标准的斜线和子网掩码: + + # iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT + # iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT + +### 12. 怎样在iptables中添加规则以ACCEPT,REJECT,DENY和DROP ssh的服务? ### + +**答案** : 但愿ssh运行在22端口,那也是ssh的默认端口,我们可以在iptables中添加规则来ACCEPT ssh的tcp包(在22号端口上)。 + + # iptables -A INPUT -s -p tcp --dport 22 -j ACCEPT + +REJECT ssh服务(22号端口)的tcp包。 + + # iptables -A INPUT -s -p tcp --dport 22 -j REJECT + +DENY ssh服务(22号端口)的tcp包。 + + + # iptables -A INPUT -s -p tcp --dport 22 -j DENY + +DROP ssh服务(22号端口)的tcp包。 + + + # iptables -A INPUT -s -p tcp --dport 22 -j DROP + +### 13. 让我给你另一个场景,假如有一台电脑的本地IP地址是192.168.0.6。你需要封锁在21、22、23和80号端口上的连接,你会怎么做? 
### + +**答案** : 这时,我所需要的就是在iptables中使用‘multiport‘选项,并将要封锁的端口号跟在它后面。上面的场景可以用下面的一条语句搞定: + + # iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dport 22,23,80,8080 -j DROP + +可以用下面的语句查看写入的规则。 + + # iptables -L + + Chain INPUT (policy ACCEPT) + target prot opt source destination + ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED + ACCEPT icmp -- anywhere anywhere + ACCEPT all -- anywhere anywhere + ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + DROP tcp -- 192.168.0.6 anywhere multiport dports ssh,telnet,http,webcache + + Chain FORWARD (policy ACCEPT) + target prot opt source destination + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + + Chain OUTPUT (policy ACCEPT) + target prot opt source destination + +**面试官** : 好了,我问的就是这些。你是一个很有价值的雇员,我们不会错过你的。我将会向HR推荐你的名字。如果你有什么问题,请问我。 + +作为一个候选人我不愿不断的问将来要做的项目的事以及公司里其他的事,这样会打断愉快的对话。更不用说HR轮会不会比较难,总之,我获得了机会。 + +同时我要感谢Avishek和Ravi(我的朋友)花时间帮我整理我的面试。 + +朋友!如果您有过类似的面试,并且愿意与数百万Tecmint读者一起分享您的面试经历,请将您的问题和答案发送到admin@tecmint.com。 + +谢谢!保持联系。如果我能更好的回答我上面的问题的话,请记得告诉我。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/ + +作者:[Avishek Kumar][a] +译者:[wwy-hust](https://github.com/wwy-hust) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/ diff --git a/translated/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md b/translated/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md deleted file mode 100644 index 1d476d0f18..0000000000 --- a/translated/tech/20150604 Nishita Agarwal Shares Her Interview Experience on 
Linux 'iptables' Firewall.md +++ /dev/null @@ -1,205 +0,0 @@ -Nishita Agarwal分享它关于Linux防火墙'iptables'的面试经验 -================================================================================ -Nishita Agarwal是Tecmint的用户,她将分享关于她刚刚经历的一家公司(私人公司Pune,印度)的面试经验。在面试中她被问及许多不同的问题,但她是iptables方面的专家,因此她想分享这些关于iptables的问题和相应的答案给那些以后可能会进行相关面试的人。 - -![Linux防火墙Iptables面试问题](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg) - -所有的问题和相应的答案都基于Nishita Agarwal的记忆并经过了重写。 - -> “嗨,朋友!我叫**Nishita Agarwal**。我已经取得了理学学士学位,我的专业集中在UNIX和它的变种(BSD,Linux)。它们一直深深的吸引着我。我在存储方面有1年多的经验。我正在寻求职业上的变化,并将供职于印度的Pune公司。” - -下面是我在面试中被问到的问题的集合。我已经把我记忆中有关iptables的问题和它们的答案记录了下来。希望这会对您未来的面试有所帮助。 - -### 1. 你听说过Linux下面的iptables和Firewalld么?知不知道它们是什么,是用来干什么的? ### - -> **答案** : iptables和Firewalld我都知道,并且我已经使用iptables好一段时间了。iptables主要由C语言写成,并且以GNU GPL许可证发布。它是从系统管理员的角度写的,最新的稳定版是iptables 1.4.21。iptables通常被认为是类UNIX系统中的防火墙,更准确的说,可以称为iptables/netfilter。管理员通过终端/GUI工具与iptables打交道,来添加和定义防火墙规则到预定义的表中。Netfilter是内核中的一个模块,它执行过滤的任务。 -> -> Firewalld是RHEL/CentOS 7(也许还有其他发行版,但我不太清楚)中最新的过滤规则的实现。它已经取代了iptables接口,并与netfilter相连接。 - -### 2. 你用过一些iptables的GUI或命令行工具么? ### - -> **答案** : 虽然我既用过GUI工具,比如与[Webmin][1]结合的Shorewall;以及直接通过终端访问iptables。但我必须承认通过Linux终端直接访问iptables能给予用户更高级的灵活性、以及对其背后工作更好的理解的能力。GUI适合初级管理员而终端适合有经验的管理员。 - -### 3. 那么iptables和firewalld的基本区别是什么呢? ### - -> **答案** : iptables和firewalld都有着同样的目的(包过滤),但它们使用不同的方式。iptables与firewalld不同,在每次发生更改时都刷新整个规则集。通常iptables配置文件位于‘/etc/sysconfig/iptables‘,而firewalld的配置文件位于‘/etc/firewalld/‘。firewalld的配置文件是一组XML文件。以XML为基础进行配置的firewalld比iptables的配置更加容易,但是两者都可以完成同样的任务。例如,firewalld可以在自己的命令行界面以及基于XML的配置文件下使用iptables。 - -### 4. 如果有机会的话,你会在你所有的服务器上用firewalld替换iptables么? ### - -> **答案** : 我对iptables很熟悉,它也工作的很好。如果没有任何需求需要firewalld的动态特性,那么没有理由把所有的配置都从iptables移动到firewalld。通常情况下,目前为止,我还没有看到iptables造成什么麻烦。IT技术的通用准则也说道“为什么要修一件没有坏的东西呢?”。上面是我自己的想法,但如果组织愿意用firewalld替换iptables的话,我不介意。 - -### 5. 
你看上去对iptables很有信心,巧的是,我们的服务器也在使用iptables。 ### - -iptables使用的表有哪些?请简要的描述iptables使用的表以及它们所支持的链。 - -> **答案** : 谢谢您的赞赏。至于您问的问题,iptables使用的表有四个,它们是: -> -> Nat 表 -> Mangle 表 -> Filter 表 -> Raw 表 -> -> Nat表 : Nat表主要用于网络地址转换。根据表中的每一条规则修改网络包的IP地址。流中的包仅遍历一遍Nat表。例如,如果一个通过某个接口的包被修饰(修改了IP地址),该流中其余的包将不再遍历这个表。通常不建议在这个表中进行过滤,由NAT表支持的链称为PREROUTING Chain,POSTROUTING Chain和OUTPUT Chain。 -> -> Mangle表 : 正如它的名字一样,这个表用于校正网络包。它用来对特殊的包进行修改。它能够修改不同包的头部和内容。Mangle表不能用于地址伪装。支持的链包括PREROUTING Chain,OUTPUT Chain,Forward Chain,InputChain和POSTROUTING Chain。 -> -> Filter表 : Filter表是iptables中使用的默认表,它用来过滤网络包。如果没有定义任何规则,Filter表则被当作默认的表,并且基于它来过滤。支持的链有INPUT Chain,OUTPUT Chain,FORWARD Chain。 -> -> Raw表 : Raw表在我们想要配置之前被豁免的包时被使用。它支持PREROUTING Chain 和OUTPUT Chain。 - -### 6. 简要谈谈什么是iptables中的目标值(能被指定为目标),他们有什么用 ### - -> **答案** : 下面是在iptables中可以指定为目标的值: -> -> ACCEPT : 接受包 -> QUEUE : 将包传递到用户空间 (应用程序和驱动所在的地方) -> DROP : 丢弃包 -> RETURN : 将控制权交回调用的链并且为当前链中的包停止执行下一调规则 - -### 7. 让我们来谈谈iptables技术方面的东西,我的意思是说实际使用方面 ### - -你怎么检测在CentOS中安装iptables时需要的iptables的rpm? - -> **答案** : iptables已经被默认安装在CentOS中,我们不需要单独安装它。但可以这样检测rpm: -> -> # rpm -qa iptables -> -> iptables-1.4.21-13.el7.x86_64 -> -> 如果您需要安装它,您可以用yum来安装。 -> -> # yum install iptables-services - -### 8. 怎样检测并且确保iptables服务正在运行? ### - -> **答案** : 您可以在终端中运行下面的命令来检测iptables的状态。 -> -> # service status iptables [On CentOS 6/5] -> # systemctl status iptables [On CentOS 7] -> -> 如果iptables没有在运行,可以使用下面的语句 -> -> ---------------- 在CentOS 6/5下 ---------------- -> # chkconfig --level 35 iptables on -> # service iptables start -> -> ---------------- 在CentOS 7下 ---------------- -> # systemctl enable iptables -> # systemctl start iptables -> -> 我们还可以检测iptables的模块是否被加载: -> -> # lsmod | grep ip_tables - -### 9. 你怎么检查iptables中当前定义的规则呢? 
### - -> **答案** : 当前的规则可以简单的用下面的命令查看: -> -> # iptables -L -> -> 示例输出 -> -> Chain INPUT (policy ACCEPT) -> target prot opt source destination -> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED -> ACCEPT icmp -- anywhere anywhere -> ACCEPT all -- anywhere anywhere -> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh -> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited -> -> Chain FORWARD (policy ACCEPT) -> target prot opt source destination -> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited -> -> Chain OUTPUT (policy ACCEPT) -> target prot opt source destination - -### 10. 你怎样刷新所有的iptables规则或者特定的链呢? ### - -> **答案** : 您可以使用下面的命令来刷新一个特定的链。 -> -> # iptables --flush OUTPUT -> -> 要刷新所有的规则,可以用: -> -> # iptables --flush - -### 11. 请在iptables中添加一条规则,接受所有从一个信任的IP地址(例如,192.168.0.7)过来的包。 ### - -> **答案** : 上面的场景可以通过运行下面的命令来完成。 -> -> # iptables -A INPUT -s 192.168.0.7 -j ACCEPT -> -> 我们还可以在源IP中使用标准的斜线和子网掩码: -> -> # iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT -> # iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT - -### 12. 怎样在iptables中添加规则以ACCEPT,REJECT,DENY和DROP ssh的服务? ### - -> **答案** : 但愿ssh运行在22端口,那也是ssh的默认端口,我们可以在iptables中添加规则来ACCEPT ssh的tcp包(在22号端口上)。 -> -> # iptables -A INPUT -s -p tcp --dport 22 -j ACCEPT -> -> REJECT ssh服务(22号端口)的tcp包。 -> -> # iptables -A INPUT -s -p tcp --dport 22 -j REJECT -> -> DENY ssh服务(22号端口)的tcp包。 -> -> -> # iptables -A INPUT -s -p tcp --dport 22 -j DENY -> -> DROP ssh服务(22号端口)的tcp包。 -> -> -> # iptables -A INPUT -s -p tcp --dport 22 -j DROP - -### 13. 让我给你另一个场景,假如有一台电脑的本地IP地址是192.168.0.6。你需要封锁在21、22、23和80号端口上的连接,你会怎么做? 
### - -> **答案** : 这时,我所需要的就是在iptables中使用‘multiport‘选项,并将要封锁的端口号跟在它后面。上面的场景可以用下面的一条语句搞定: -> -> # iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dport 22,23,80,8080 -j DROP -> -> 可以用下面的语句查看写入的规则。 -> -> # iptables -L -> -> Chain INPUT (policy ACCEPT) -> target prot opt source destination -> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED -> ACCEPT icmp -- anywhere anywhere -> ACCEPT all -- anywhere anywhere -> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh -> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited -> DROP tcp -- 192.168.0.6 anywhere multiport dports ssh,telnet,http,webcache -> -> Chain FORWARD (policy ACCEPT) -> target prot opt source destination -> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited -> -> Chain OUTPUT (policy ACCEPT) -> target prot opt source destination - -**面试官** : 好了,我问的就是这些。你是一个很有价值的雇员,我们不会错过你的。我将会向HR推荐你的名字。如果你有什么问题,请问我。 - -作为一个候选人我不愿不断的问将来要做的项目的事以及公司里其他的事,这样会打断愉快的对话。更不用说HR轮会不会比较难,总之,我获得了机会。 - -同时我要感谢Avishek和Ravi(我的朋友)花时间帮我整理我的面试。 - -朋友!如果您有过类似的面试,并且愿意与数百万Tecmint读者一起分享您的面试经历,请将您的问题和答案发送到admin@tecmint.com。 - -谢谢!保持联系。如果我能更好的回答我上面的问题的话,请记得告诉我。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/ - -作者:[Avishek Kumar][a] -译者:[wwy-hust](https://github.com/wwy-hust) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/ From f250353b717561555612c15b4fe71910487aec56 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 4 Aug 2015 00:08:13 +0800 Subject: [PATCH 033/697] PUB:20150730 Compare PDF Files on Ubuntu @GOLinux --- .../20150730 Compare PDF Files on Ubuntu.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) rename {translated/tech 
=> published}/20150730 Compare PDF Files on Ubuntu.md (81%) diff --git a/translated/tech/20150730 Compare PDF Files on Ubuntu.md b/published/20150730 Compare PDF Files on Ubuntu.md similarity index 81% rename from translated/tech/20150730 Compare PDF Files on Ubuntu.md rename to published/20150730 Compare PDF Files on Ubuntu.md index 3215caf23f..57b933765f 100644 --- a/translated/tech/20150730 Compare PDF Files on Ubuntu.md +++ b/published/20150730 Compare PDF Files on Ubuntu.md @@ -1,15 +1,15 @@ -Ubuntu上比较PDF文件 +如何在 Ubuntu 上比较 PDF 文件 ================================================================================ 如果你想要对PDF文件进行比较,你可以使用下面工具之一。 ### Comparepdf ### -comparepdf是一个命令行应用,用于将两个PDF文件进行对比。默认对比模式文本模式,该模式会对各对相关页面进行文字对比。只要一检测到差异,该程序就会终止,并显示一条信息(除非设置了-v0)和一个指示性的返回码。 +comparepdf是一个命令行应用,用于将两个PDF文件进行对比。默认对比模式是文本模式,该模式会对各对相关页面进行文字对比。只要一检测到差异,该程序就会终止,并显示一条信息(除非设置了-v0)和一个指示性的返回码。 -用于文本模式对比的选项有 -ct 或 --compare=text(默认),用于视觉对比(这对图标或其它图像发生改变时很有用)的选项有 -ca 或 --compare=appearance。而 -v=1 或 --verbose=1 选项则用于报告差异(或者对匹配文件不作任何回应):使用 -v=0 选项取消报告,或者 -v=2 来同时报告不同的和匹配的文件。 +用于文本模式对比的选项有 -ct 或 --compare=text(默认),用于视觉对比(这对图标或其它图像发生改变时很有用)的选项有 -ca 或 --compare=appearance。而 -v=1 或 --verbose=1 选项则用于报告差异(或者对匹配文件不作任何回应);使用 -v=0 选项取消报告,或者 -v=2 来同时报告不同的和匹配的文件。 -### 安装comparepdf到Ubuntu ### +#### 安装comparepdf到Ubuntu #### 打开终端,然后运行以下命令 @@ -19,17 +19,17 @@ comparepdf是一个命令行应用,用于将两个PDF文件进行对比。默 comparepdf [OPTIONS] file1.pdf file2.pdf -**Diffpdf** +###Diffpdf### DiffPDF是一个图形化应用程序,用于对两个PDF文件进行对比。默认情况下,它只会对比两个相关页面的文字,但是也支持对图形化页面进行对比(例如,如果图表被修改过,或者段落被重新格式化过)。它也可以对特定的页面或者页面范围进行对比。例如,如果同一个PDF文件有两个版本,其中一个有页面1-12,而另一个则有页面1-13,因为这里添加了一个额外的页面4,它们可以通过指定两个页面范围来进行对比,第一个是1-12,而1-3,5-13则可以作为第二个页面范围。这将使得DiffPDF成对地对比这些页面(1,1),(2,2),(3,3),(4,5),(5,6),以此类推,直到(12,13)。 -### 安装 diffpdf 到 ubuntu ### +#### 安装 diffpdf 到 ubuntu #### 打开终端,然后运行以下命令 sudo apt-get install diffpdf -### 截图 ### +#### 截图 #### ![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/14.png) @@ -41,7 +41,7 @@ via: 
http://www.ubuntugeek.com/compare-pdf-files-on-ubuntu.html 作者:[ruchi][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 07af5c5aa9a77aeb84e0faa8874660c511116f95 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 4 Aug 2015 06:51:47 +0800 Subject: [PATCH 034/697] =?UTF-8?q?=E7=A7=BB=E5=8A=A8=E8=AF=91=E6=96=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20150128 7 communities driving open source development.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/talk/20150128 7 communities driving open source development.md (100%) diff --git a/sources/talk/20150128 7 communities driving open source development.md b/translated/talk/20150128 7 communities driving open source development.md similarity index 100% rename from sources/talk/20150128 7 communities driving open source development.md rename to translated/talk/20150128 7 communities driving open source development.md From 065cdbb77acac0170f8604631294c1790fd55bb5 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 4 Aug 2015 07:04:09 +0800 Subject: [PATCH 035/697] Update 20150803 Linux Logging Basics.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- sources/tech/20150803 Linux Logging Basics.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150803 Linux Logging Basics.md b/sources/tech/20150803 Linux Logging Basics.md index d20f68f140..6c3c3693a4 100644 --- a/sources/tech/20150803 Linux Logging Basics.md +++ b/sources/tech/20150803 Linux Logging Basics.md @@ -1,3 +1,5 @@ +FSSlc translating + Linux Logging Basics ================================================================================ First we’ll describe the basics of what Linux logs are, where to find them, and how 
they get created. If you already know this stuff, feel free to skip to the next section. @@ -87,4 +89,4 @@ via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/ [4]:https://tools.ietf.org/html/rfc5424#section-6.2.3 [5]:https://tools.ietf.org/html/rfc5424#section-6.2.4 [6]:https://tools.ietf.org/html/rfc5424#section-6.2.5 -[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1 \ No newline at end of file +[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1 From 0a2bc302010ada1e2f78027cc3965b655e62aa56 Mon Sep 17 00:00:00 2001 From: joeren Date: Tue, 4 Aug 2015 07:54:44 +0800 Subject: [PATCH 036/697] Change --- github 2.0测试.txt | 2 -- 1 file changed, 2 deletions(-) delete mode 100644 github 2.0测试.txt diff --git a/github 2.0测试.txt b/github 2.0测试.txt deleted file mode 100644 index 7787faa3c1..0000000000 --- a/github 2.0测试.txt +++ /dev/null @@ -1,2 +0,0 @@ -111 -222 \ No newline at end of file From 6811374932148a3ce7bfe446c8670a63ace04b3e Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 09:08:16 +0800 Subject: [PATCH 037/697] =?UTF-8?q?Create=20Setting=20up=20RAID=201=20(Mir?= =?UTF-8?q?roring)=20using=20=E2=80=98Two=20Disks=E2=80=99=20in=20Linux=20?= =?UTF-8?q?=E2=80=93=20Part=203?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...oring) using ‘Two Disks’ in Linux – Part 3 | 217 ++++++++++++++++++ 1 file changed, 217 insertions(+) create mode 100644 translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 diff --git a/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 b/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 new file mode 100644 index 0000000000..948e530ed8 --- /dev/null +++ b/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 @@ -0,0 +1,217 @@ +在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分 
+================================================================================ +RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。 + + +![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) + +在 Linux 中设置 RAID1 + +创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。 + +### RAID 1 的特点 ### + +-镜像具有良好的性能。 + +-磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 + +-在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 + +-读取数据会比写入性能更好。 + +#### 要求 #### + + +创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 + +这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。 + +需要阅读: [Basic Concepts of RAID in Linux][1] + +#### 在我的服务器安装 #### + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.226 + Hostname : rd1.tecmintlocal.com + Disk 1 [20GB] : /dev/sdb + Disk 2 [20GB] : /dev/sdc + +本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。 + +### 第1步:安装所需要的并且检查磁盘 ### + +1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 + + # yum install mdadm [on RedHat systems] + # apt-get install mdadm [on Debain systems] + +2. 一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 + + # mdadm -E /dev/sd[b-c] + +![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) + +检查 RAID 的磁盘 + + +正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 + +### 第2步:为 RAID 创建分区 ### + +3. 
正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 + + # fdisk /dev/sdb + +按照下面的说明 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 按两次回车键默认将整个容量分配给它。 +- 然后,按 ‘P’ 来打印创建好的分区。 +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 修改分区类型。 +- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) + +创建磁盘分区 + +在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 + + # fdisk /dev/sdc + +![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) + +创建第二个分区 + +4. 一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 + + # mdadm -E /dev/sd[b-c] + +![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) + +验证分区变化 + +![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) + +检查 RAID 类型 + +**注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 + +### 步骤3:创建 RAID1 设备 ### + +5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 + + # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 + # cat /proc/mdstat + +![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) + +创建RAID设备 + +6. 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 + + # mdadm -E /dev/sd[b-c]1 + # mdadm --detail /dev/md0 + +![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) + +检查 RAID 设备类型 + +![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) + +检查 RAID 设备阵列 + +从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 + +### 第4步:在 RAID 设备上创建文件系统 ### + +7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . + + # mkfs.ext4 /dev/md0 + +![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) + +创建 RAID 设备文件系统 + +8. 
接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证挂载点上的数据。
+
+    # mkdir /mnt/raid1
+    # mount /dev/md0 /mnt/raid1/
+    # touch /mnt/raid1/tecmint.txt
+    # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt
+
+![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png)
+
+挂载 RAID 设备
+
+9. 为了在系统重新启动时自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。
+
+    /dev/md0 /mnt/raid1 ext4 defaults 0 0
+
+![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png)
+
+自动挂载 Raid 设备
+
+10. 运行“mount -av”,检查 fstab 中的条目是否有错误。
+
+    # mount -av
+
+![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png)
+
+检查 fstab 中的错误
+
+11. 接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。
+
+    # mdadm --detail --scan --verbose >> /etc/mdadm.conf
+
+![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png)
+
+保存 Raid 的配置
+
+上述配置文件在系统重启时会被读取,并据此加载 RAID 设备。
+
+### 第5步:在磁盘故障后检查数据 ###
+
+12. 我们的主要目的是,即使在任何磁盘故障或死机时也必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。
+
+    # mdadm --detail /dev/md0
+
+![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png)
+
+验证 Raid 设备
+
+在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的,并且 Active Devices 是 2。现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。
+
+    # ls -l /dev | grep sd
+    # mdadm --detail /dev/md0
+
+![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png)
+
+测试 RAID 设备
+
+现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。
+
+    # cd /mnt/raid1/
+    # cat tecmint.txt
+
+![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png)
+
+验证 RAID 数据
+
+可以看到,我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置 RAID 5(条带化与分布式奇偶校验)。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-raid1-in-linux/
+
+作者:[Babin Lonston][a]
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 2ec2917fdba92c31201a7bdde94ac2e8befb41ed Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 09:09:17 +0800 Subject: [PATCH 038/697] =?UTF-8?q?Create=20Creating=20RAID=205=20(Stripin?= =?UTF-8?q?g=20with=20Distributed=20Parity)=20in=20Linux=20=E2=80=93=20Par?= =?UTF-8?q?t=204?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...with Distributed Parity) in Linux – Part 4 | 285 ++++++++++++++++++ 1 file changed, 285 insertions(+) create mode 100644 translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 diff --git a/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 b/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 new file mode 100644 index 0000000000..7de5199a08 --- /dev/null +++ b/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 @@ -0,0 +1,285 @@ + +在 Linux 中创建 RAID 5(条带化与分布式奇偶校验) - 第4部分 +================================================================================ +在 RAID 5 中,条带化的数据跨多个磁盘保存,并使用分布式奇偶校验。分布式奇偶校验的条带化意味着它会把奇偶校验信息和条带中的数据分布在多个磁盘上,从而提供了很好的数据冗余。 + +![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg) + +在 Linux 中配置 RAID 5 + +此 RAID 级别至少需要三个或更多磁盘。RAID 5 通常用于大规模生产环境,以较高的成本换取更好的数据冗余和性能。 + +#### 什么是奇偶校验? #### + +奇偶校验是数据存储中最简单的一种错误检测方法。奇偶校验信息存储在每个磁盘中,比如说,我们有4个磁盘,其中一个磁盘的空间被划分出来,存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,我们可以在更换故障磁盘后,从奇偶校验信息重建得到原来的数据。 + +#### RAID 5 的优点和缺点 #### + +- 提供更好的性能。 +- 支持冗余和容错。 +- 支持热备份。 +- 会损失一个磁盘的容量,用于存储奇偶校验信息。 +- 单个磁盘发生故障后不会丢失数据。我们可以更换故障硬盘后从奇偶校验信息中重建数据。 +- 事务处理和读操作会更快。 +- 由于需要计算奇偶校验,写操作会比较慢。 +- 重建需要很长的时间。 + +#### 要求 #### + +创建 RAID 5 最少需要3个磁盘,你也可以添加更多的磁盘,前提是你要有多端口的专用硬件 RAID 控制器。在这里,我们使用“mdadm”包来创建软件 RAID。 + +mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下 RAID 没有可用的配置文件,我们在创建和配置 RAID 后必须将配置文件保存在一个单独的文件中,例如:mdadm.conf。 + +在进一步学习之前,我建议你通过下面的文章去了解 Linux 中 RAID 的基础知识。 + +- [Basic Concepts of RAID in Linux – Part 1][1] +- [Creating RAID 0 (Stripe) in Linux – Part 2][2] +- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3] + +#### 我的服务器设置 #### + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.227 + Hostname : rd5.tecmintlocal.com + Disk 1 [20GB] : /dev/sdb + Disk 2 [20GB] : /dev/sdc + Disk 3 [20GB] : /dev/sdd + +这篇文章是 RAID 系列教程的第4部分(共9篇),在这里我们将在 Linux 系统或服务器上,使用三个20GB 的磁盘(名为 /dev/sdb、/dev/sdc 和 /dev/sdd)建立一个软件 RAID 5(分布式奇偶校验)。 + +### 第1步:安装 mdadm 并检验磁盘 ### + +1. 正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID 设置,但同样的做法也适用于其他 Linux 发行版。 + + # lsb_release -a + # ifconfig | grep inet + +![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png) + +CentOS 6.5 摘要 + +2. 如果你是按照我们的 RAID 系列来配置的,我们假设你已经安装了“mdadm”包;如果没有,根据你的 Linux 发行版使用下面的命令安装。 + + # yum install mdadm [on RedHat systems] + # apt-get install mdadm [on Debain systems] + +3. “mdadm”包安装后,先使用‘fdisk‘命令列出我们在系统上增加的三个20GB的硬盘。 + + # fdisk -l | grep sd + +![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png) + +安装 mdadm 工具 + +4. 现在检查这三个磁盘上是否已存在 RAID 块,使用下面的命令来检查。 + + # mdadm -E /dev/sd[b-d] + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd + +![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png) + +检查 Raid 磁盘 + +**注意**: 上面的图片说明,没有检测到任何超级块。所以,这三个磁盘中没有定义 RAID。让我们现在开始创建一个吧!
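前面介绍奇偶校验时提到,任何一个磁盘出现故障,都可以从奇偶校验信息重建得到原来的数据。在动手创建阵列之前,可以用一段最简化的按位异或(XOR)运算直观地理解这个原理。下面是一个概念性的 shell 演示(仅为示意,数值是假设的例子,并非 mdadm 的实际实现细节):

```shell
# 概念演示:RAID 5 的奇偶校验可以理解为数据块的按位异或(XOR)
d1=170                    # 磁盘1 上的数据块,二进制 10101010
d2=204                    # 磁盘2 上的数据块,二进制 11001100
parity=$(( d1 ^ d2 ))     # 奇偶校验块 = d1 XOR d2,轮流存放在各个磁盘上
echo "奇偶校验块: $parity"

# 假设存放 d1 的磁盘发生故障,用奇偶校验块和剩余数据块即可重建:
rebuilt=$(( parity ^ d2 ))
echo "重建出的数据块: $rebuilt"   # 与原来的 d1 相同,即 170
```

无论丢失哪一个数据块,都可以由其余数据块与奇偶校验块异或重建出来,这正是 RAID 5 能够容忍单个磁盘故障而不丢失数据的原因。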
+ +### 第2步:为磁盘创建 RAID 分区 ### + +5. 首先,在创建 RAID 前我们要为磁盘分区(/dev/sdb, /dev/sdc 和 /dev/sdd),在进行下一步之前,先使用‘fdisk’命令进行分区。 + + # fdisk /dev/sdb + # fdisk /dev/sdc + # fdisk /dev/sdd + +#### 创建 /dev/sdb 分区 #### + +请按照下面的说明在 /dev/sdb 硬盘上创建分区。 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。选择主分区是因为还没有定义过分区。 +- 接下来选择分区号为1。默认就是1. +- 这里是选择柱面大小,我们没必要选择指定的大小,因为我们需要为 RAID 使用整个分区,所以只需按两次 Enter 键默认将整个容量分配给它。 +- 然后,按 ‘P’ 来打印创建好的分区。 +- 改变分区类型,按 ‘L’可以列出所有可用的类型。 +- 按 ‘t’ 修改分区类型。 +- 这里使用‘fd’设置为 RAID 的类型。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png) + +创建 sdb 分区 + +**注意**: 我们仍要按照上面的步骤来创建 sdc 和 sdd 的分区。 + +#### 创建 /dev/sdc 分区 #### + +现在,通过下面的截图给出创建 sdc 和 sdd 磁盘分区的方法,或者你可以按照上面的步骤。 + + # fdisk /dev/sdc + +![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png) + +创建 sdc 分区 + +#### 创建 /dev/sdd 分区 #### + + # fdisk /dev/sdd + +![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png) + +创建 sdd 分区 + +6. 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。 + + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd + + or + + # mdadm -E /dev/sd[b-c] + +![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png) + +检查磁盘变化 + +**注意**: 在上面的图片中,磁盘的类型是 fd。 + +7.现在在新创建的分区检查 RAID 块。如果没有检测到超级块,我们就能够继续下一步,创建一个新的 RAID 5 的设置在这些磁盘中。 + +![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png) + +在分区中检查 Raid + +### 第3步:创建 md 设备 md0 ### + +8. 现在创建一个 RAID 设备“md0”(即 /dev/md0)使用所有新创建的分区(sdb1, sdc1 and sdd1) ,使用以下命令。 + + # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 + + or + + # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1 + +9. 
创建 RAID 设备后,检查并确认 RAID,包括设备和从 mdstat 中输出的 RAID 级别。 + + # cat /proc/mdstat + +![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png) + +验证 Raid 设备 + +如果你想监视当前的创建过程,你可以使用‘watch‘命令,使用 watch ‘cat /proc/mdstat‘,它会在屏幕上显示且每隔1秒刷新一次。 + + # watch -n1 cat /proc/mdstat + +![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png) + +监控 Raid 5 过程 + +![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png) + +Raid 5 过程概要 + +10. 创建 RAID 后,使用以下命令验证 RAID 设备 + + # mdadm -E /dev/sd[b-d]1 + +![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png) + +验证 Raid 级别 + +**注意**: 因为它显示三个磁盘的信息,上述命令的输出会有点长。 + +11. 接下来,验证 RAID 阵列的假设,这包含正在运行 RAID 的设备,并开始重新同步。 + + # mdadm --detail /dev/md0 + +![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png) + +验证 Raid 阵列 + +### 第4步:为 md0 创建文件系统### + +12. 在挂载前为“md0”设备创建 ext4 文件系统。 + + # mkfs.ext4 /dev/md0 + +![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png) + +创建 md0 文件系统 + +13.现在,在‘/mnt‘下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下并检查下挂载点的文件,你会看到 lost+found 目录。 + + # mkdir /mnt/raid5 + # mount /dev/md0 /mnt/raid5/ + # ls -l /mnt/raid5/ + +14. 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。 + + # touch /mnt/raid5/raid5_tecmint_{1..5} + # ls -l /mnt/raid5/ + # echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1 + # cat /mnt/raid5/raid5_tecmint_1 + # cat /proc/mdstat + +![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png) + +挂载 Raid 设备 + +15. 我们需要在 fstab 中添加条目,否则系统重启后将不会显示我们的挂载点。然后编辑 fstab 文件添加条目,在文件尾追加以下行,如下图所示。挂载点会根据你环境的不同而不同。 + + # vim /etc/fstab + + /dev/md0 /mnt/raid5 ext4 defaults 0 0 + +![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png) + +自动挂载 Raid 5 + +16. 
接下来,运行‘mount -av‘命令检查 fstab 条目中是否有错误。 + + # mount -av + +![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png) + +检查 Fstab 错误 + +### 第5步:保存 Raid 5 的配置 ### + +17. 在前面章节已经说过,默认情况下 RAID 没有配置文件,我们必须手动保存。如果不做这一步,RAID 设备名将不会保持为 md0,系统重启后它会变成其它的编号。 + +所以,我们必须在系统重新启动之前保存配置。这样系统重新启动时配置会被加载到内核中,RAID 也将随之加载。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + +![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png) + +保存 Raid 5 配置 + +注意:保存配置可以确保 md0 设备的 RAID 级别在重启后保持不变。 + +### 第6步:添加备用磁盘 ### + +18. 备用磁盘有什么用?它非常有用:如果阵列中的任何一个磁盘发生故障,备用磁盘会自动加入阵列并启动重建过程,从其它磁盘上同步数据,从而保证冗余。 + +更多关于添加备用磁盘和检查 RAID 5 容错的指令,请阅读下面文章中的第6步和第7步。 + +- [Add Spare Drive to Raid 5 Setup][4] + +### 结论 ### + +在这篇文章中,我们已经看到了如何使用三个磁盘配置一个 RAID 5。在接下来的文章中,我们将看到如何进行故障排除,以及当 RAID 5 中的一个磁盘损坏后如何恢复数据。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid-5-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[2]:http://www.tecmint.com/create-raid0-in-linux/ +[3]:http://www.tecmint.com/create-raid1-in-linux/ +[4]:http://www.tecmint.com/create-raid-6-in-linux/ From 593eb1799e96e40d4a8d88141bf4de850382f253 Mon Sep 17 00:00:00 2001 From: XLCYun Date: Tue, 4 Aug 2015 09:30:10 +0800 Subject: [PATCH 039/697] Update 20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md --- ...ktop--What They Get Right & Wrong - Page 1 - Introduction.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - 
Introduction.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md index de47f0864e..39f29af147 100644 --- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md +++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md @@ -36,7 +36,7 @@ ![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920) -切换到上流的Breeve主题……突然间,我抱怨的大部分问题都被完善了。通用图标,所有东西都放在了屏幕中央,但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白,在中间也就酝酿出了一种美好的和谐。还是有一个输入框来切换会话,但既然电源按钮被做成了通用图标,那么这点还算可以原谅。当然gnome还是有一些很好的附加物,例如音量小程序和可访问按钮,但Breeze总归是Fedora的KDE主题的一个进步。 +切换到upstream的Breeve主题……突然间,我抱怨的大部分问题都被完善了。通用图标,所有东西都放在了屏幕中央,但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白,在中间也就酝酿出了一种美好的和谐。还是有一个输入框来切换会话,但既然电源按钮被做成了通用图标,那么这点还算可以原谅。当然gnome还是有一些很好的附加物,例如音量小程序和可访问按钮,但Breeze总归是Fedora的KDE主题的一个进步。 到Windows(Windows 8和10之前)或者OS X中去,你会看到类似的东西——非常简洁的,“不挡你道”的锁屏与登录界面,它们都没有输入框或者其它分散视觉的小工具。这是一种有效的不分散人注意力的设计。Fedora……默认装有Breeze。VDG在Breeze主题设计上干得不错。可别糟蹋了它。 From bba6ac1d9e04d9cdf1a2d35a77a549e83873350d Mon Sep 17 00:00:00 2001 From: XLCYun Date: Tue, 4 Aug 2015 09:34:03 +0800 Subject: [PATCH 040/697] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=8E=9F=E6=96=87=20?= =?UTF-8?q?=20XLCYun?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...t & Wrong - Page 3 - GNOME Applications.md | 62 ------------------- 1 file changed, 62 deletions(-) delete mode 100644 sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md diff --git a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md b/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md deleted file mode 100644 index c70978dc9b..0000000000 --- 
a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md +++ /dev/null @@ -1,62 +0,0 @@ -Translating by XLCYun. -A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 3 - GNOME Applications -================================================================================ -### Applications ### - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920) - -This is the one area where things are basically a wash. Each environment has a few applications that are really nice, and a few that are not so great. Once again though, Gnome gets the little things right in a way that KDE completely misses. None of KDE's applications are bad or broken, that's not what I'm saying. They function. But that's about it. To use an analogy: they passed the test, but they sure didn't get any where close to 100% on it. - -Gnome on left, KDE on right. Dragon performs perfectly fine, it has clearly marked buttons for playing a file, URL, or a disc, just as you can do under Gnome Videos... but Gnome takes it one extra little step further in the name of convenience and user friendliness: they show all the videos detected under your system by default, without you having to do anything. KDE has Baloo-- just as they had Nepomuk before that-- why not use them? They've got a list video files that are freely accessible... but don't make use of the feature. - -Moving on... Music Players. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920) - -Both of these applications, Rhythmbox on the left and Amarok on the right were opened up and then a screenshot was immediately taken, nothing was clicked, or altered. See the difference? Rhythmbox looks like a music player. 
It's direct, there's obvious ways to sort the results, it knows what is trying to be and what it's job is: to play music. - -Amarok feels like one of the tech demos, or library demos where someone puts every option and extension they possible can all inside one application in order to show them off-- it's never something that gets shipped as production, it's just there to show off bits and pieces. And that's exactly what Amarok feels like: its someone trying to show off every single possible cool thing they shove into a media player without ever stopping to think "Wait, what were trying to write again? An app to play music?" - -Just look at the default layout. What is front and center for the user? A visualizer and Wikipedia integration-- the largest and most prominent column on the page. What's the second largest? Playlist list. Third largest, aka smallest? The actual music listing. How on earth are these sane defaults for a core application? - -Software Managers! Something that has seen a lot of push in recent years and will likely only see a bigger push in the months to come. Unfortunately, it's another area where KDE was so close... and then fell on its face right at the finish line. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920) - -Gnome Software is probably my new favorite software center, minus one gripe which I will get to in a bit. Muon, I wanted to like you. I really did. But you are a design nightmare. When the VDG was drawing up plans for you (mockup below), you looked pretty slick. Good use of white space, clean design, nice category listing, your whole not-being-split-into-two-applications. 
- -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920) - -Then someone got around to coding you and doing your actual UI, and I can only guess they were drunk while they did it. - -Let's look at Gnome Software. What's smack dab in the middle? The application, its screenshots, its description, etc. What's smack dab in the middle of Muon? Gigantic waste of white space. Gnome Software also includes the lovely convenience feature of putting a "Launch" button right there in case you already have an application installed. Convenience and ease of use are important, people. Honestly, JUST having things in Muon be centered aligned would probably make things look better already. - -What's along the top edge of Gnome Software, like a tab listing? All Software, Installed, Updates. Clean language, direct, to the point. Muon? Well, we have "Discover", which works okay as far as language goes, and then we have Installed, and then nothing. Where's updates? - -Well.. the developers decided to split updates off into its own application, thus requiring you to open two applications to handle your software-- one to install it, and one to update it-- going against every Software Center paradigm that has ever existed since the Synaptic graphical package manager. - -I'm not going to show it in a screenshot just because I don't want to have to clean up my system afterwards, but if you go into Muon and start installing something the way it shows that is by adding a little tab to the bottom of your screen with the application's name. That tab doesn't go away when the application is done installing either, so if you're installing a lot of applications at a single time then you'll just slowly accumulate tabs along the bottom that you then have to go through and clean up manually, because if you don't then they grow off the screen and you have to swipe through them all to get to the most recent ones. Think: opening 50 tabs in Firefox. 
Major annoyance, major inconvenience. - -I did say I would bash on Gnome a bit, and I meant it. Muon does get one thing very right that Gnome Software doesn't. Under the settings bar Muon has an option for "Show Technical Packages" aka: compilers, software libraries, non-graphical applications, applications without AppData, etc. Gnome doesn't. If you want to install any of those you have to drop down to the terminal. I think that's wrong. I certainly understand wanting to push AppData but I think they pushed it too soon. What made me realize Gnome didn't have this setting was when I went to install PowerTop and couldn't get Gnome to display it-- no AppData, no "Show Technical Packages" setting. - -Doubly unfortunate is the fact that you can't "just use apper" if you're under KDE since... - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920) - -Apper's support for installing local packages has been broken for since Fedora 19 or so, almost two years. I love the attention to detail and quality. 
- --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3 - -作者:Eric Griffith -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From f7e118d42898004ec5ce6ce2c842c94822a2f148 Mon Sep 17 00:00:00 2001 From: XLCYun Date: Tue, 4 Aug 2015 09:39:34 +0800 Subject: [PATCH 041/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...t & Wrong - Page 3 - GNOME Applications.md | 61 +++++++++++++++++++ 1 file changed, 61 insertions(+) create mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md new file mode 100644 index 0000000000..42539badcc --- /dev/null +++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md @@ -0,0 +1,61 @@ +将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第三节 - GNOME应用 +================================================================================ +### 应用 ### + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920) + +这是一个基本上一潭死水的地方。每一个桌面环境都有一些非常好的和不怎么样的应用。再次强调,Gnome把那些KDE完全错失的小细节给做对了。我不是想说KDE中有哪些应用不好。他们都能工作。但仅此而已。也就是说:它们合格了,但确实还没有达到甚至接近100分。 + +Gnome的在左边,KDE的在右边。Dragon运行得很好,清晰的标出了播放文件、URL或和光盘的按钮,正如你在Gnome Videos中能做到的一样……但是在便利的文件名和用户的友好度方面,Gnome多走了一小步。它默认显示了在你的电脑上检测到的所有影像文件,不需要你做任何事情。KDE有Baloo——正如之前有Nepomuk——为什么不使用它们?它们能列出可读取的影像文件……但却没被使用。 + +下一步……音乐播放器 + 
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920) + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920) + +这两个应用,左边的Rhythmbox和右边的Amarok,都是打开后没有做任何修改直接截屏的。看到差别了吗?Rhythmbox看起来像个音乐播放器,直接了当,排序文件的方法也很清晰,它知道它应该是什么样的,它的工作是什么:就是播放音乐。 + +Amarok感觉就像是某个人为了展示而把所有的扩展和选项都尽可能地塞进一个应用程序中去而做出来的一个技术演示产品(tech demos),或者一个库演示产品(library demos)——而这些是不应该做为产品装进去的,它只应该展示一些零碎的东西。而Amarok给人的感觉却是这样的:好像是某个人想把每一个感觉可能很酷的东西都塞进一个媒体播放器里,甚至都不停下来想“我想写啥来着?一个播放音乐的应用?” + +看看默认布局就行了。前面和中心都呈现了什么?一个可视化工具和维基集成(wikipedia integration)——占了整个页面最大和最显眼的区域。第二大的呢?播放列表。第三大,同时也是最小的呢?真正的音乐列表。这种默认设置对于一个核心应用来说,怎么可能称得上理智? + +软件管理器!它在最近几年当中有很大的进步,而且接下来的几个月中,很可能只能看到它更大的进步。不幸的是,这是另一个地方KDE做得差一点点就能……但还是在终点线前摔了脸。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920) + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920) + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920) + +Gnome软件中心可能是我最新的最爱,先放下牢骚等下再发。Muon, 我想爱上你,真的。但你就是个设计上的梦魇。当VDG给你画设计草稿时(模型在下面),你看起来真漂亮。白色空间用得很好,设计简洁,类别列表也很好,你的整个“不要分开做成两个应用程序”的设计都很不错。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920) + +接着就有人为你写代码,实现真正的UI,但是,我猜这些家伙当时一定是喝醉了。 + +我们来看看Gnome软件中心。正中间是什么?软件,软件截图和软件描述等等。Muon的正中心是什么?白白浪费的大块白色空间。Gnome软件中心还有一个贴心便利特点,那就是放了一个“运行“的按钮在那儿,以防你已经安装了这个软件。便利性和易用性很重要啊,大哥。说实话,仅仅让Muon把东西都居中对齐了可能看起来的效果都要好得多。 + +Gnome软件中心沿着顶部的东西是什么,像个标签列表?所有软件,已安装软件,软件升级。语言简洁,直接,直指要点。Muon,好吧,我们有个”发现“,这个语言表达上还算差强人意,然后我们又有一个”已安装软件“,然后,就没有然后了。软件升级哪去了? + +好吧……开发者决定把升级独立分开成一个应用程序,这样你就得打开两个应用程序才能管理你的软件——一个用来安装,一个用来升级——自从有了新得立图形软件包管理器以来,首次有这种破天荒的设计,与任何已存的软件中心的设计范例相违背。 + +我不想贴上截图给你们看,因为我不想等下还得清理我的电脑,如果你进入Muon安装了什么,那么它就会在屏幕下方根据安装的应用名创建一个标签,所以如果你一次性安装很多软件的话,那么下面的标签数量就会慢慢的增长,然后你就不得不手动检查清除它们,因为如果你不这样做,当标签增长到超过屏幕显示时,你就不得不一个个找过去来才能找到最近正在安装的软件。想想:在火狐浏览器打开50个标签。太烦人,太不方便! 
+ +我说过我会给Gnome一点打击,我是认真的。Muon有一点做得比Gnome软件中心做得好。在Muon的设置栏下面有个“显示技术包”,即:编辑器,软件库,非图形应用程序,无AppData的应用等等(AppData,软件包中的一个特殊文件,用于专门存储软件的信息,译注)。Gnome则没有。如果你想安装其中任何一项你必须跑到终端操作。我想这是他们做得不对的一点。我完全理解他们推行AppData的心情,但我想他们太急了(推行所有软件包带有AppData,是Gnome软件中心的目标之一,译注)。我是在想安装PowerTop,而Gnome不显示这个软件时我才发现这点的——没有AppData,没有“显示技术包“设置。 + +更不幸的事实是你不能“用Apper就行了”,自从…… + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920) + +Apper对安装本地软件包的支持大约在Fedora 19时就中止了,几乎两年了。我喜欢那种对细节与质量的关注。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 6c78506ad82f4ca2fa473cb2e7e609813b5637f6 Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 4 Aug 2015 09:48:34 +0800 Subject: [PATCH 042/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 修改一些笔误 --- ...o run Ubuntu Snappy Core on Raspberry Pi 2.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md index c4475f39a2..f5e6fe60b2 100644 --- a/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md +++ b/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md @@ -2,13 +2,13 @@ ================================================================================ 物联网(Internet of Things, IoT) 时代即将来临。很快,过不了几年,我们就会问自己当初是怎么在没有物联网的情况下生存的,就像我们现在怀疑过去没有手机的年代。Canonical 就是一个物联网快速发展却还是开放市场下的竞争者。这家公司宣称自己把赌注压到了IoT 上,就像他们已经在“云”上做过的一样。。在今年一月底,Canonical 启动了一个基于Ubuntu Core 的小型操作系统,名字叫做 [Ubuntu Snappy Core][1] 。 -Snappy 
是一种用来替代deb 的新的打包格式,是一个用来更新系统的前端,从CoreOS、红帽子和其他地方借鉴了原子更新这个想法。很快树莓派2 代投入市场,Canonical 就发布了用于树莓派的Snappy Core 版本。第一代树莓派因为是基于ARMv6 ,而Ubuntu 的ARM 镜像是基于ARMv7 ,所以不能运行ubuntu 。不过这种状况现在改变了,Canonical 通过发布用于RPI2 的镜像,抓住机会澄清了Snappy 就是一个用于云计算,特别是IoT 的系统。 +Snappy 是一种用来替代deb 的新的打包格式,是一个用来更新系统的前端,从CoreOS、红帽子和其他系统借鉴了**原子更新**这个想法。树莓派2 代投入市场,Canonical 很快就发布了用于树莓派的Snappy Core 版本。而第一代树莓派因为是基于ARMv6 ,Ubuntu 的ARM 镜像是基于ARMv7 ,所以不能运行ubuntu 。不过这种状况现在改变了,Canonical 通过发布用于RPI2 的镜像,抓住机会证明了Snappy 就是一个用于云计算,特别是用于物联网的系统。 -Snappy 同样可以运行在其它像Amazon EC2, Microsofts Azure, Google's Compute Engine 这样的云端上,也可以虚拟化在KVM、Virtuabox 和vagrant 上。Canonical 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,像Ninja Sphere、Erle Robotics,还有一些开发板生产商比如Odroid、Banana Pro, Udoo, PCDuino 和Parallella 、全志。Snappy Core 也希望很快能运行在路由器上,来帮助改进路由器生产商目前很少更新固件的策略。 +Snappy 同样可以运行在其它像Amazon EC2, Microsofts Azure, Google的 Compute Engine 这样的云端上,也可以虚拟化在KVM、Virtuabox 和vagrant 上。Canonical Ubuntu 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,比如Ninja Sphere、Erle Robotics,还有一些开发板生产商,比如Odroid、Banana Pro, Udoo, PCDuino 和Parallella 、全志,Snappy 也提供了支持。Snappy Core 同时也希望尽快运行到路由器上来帮助改进路由器生产商目前很少更新固件的策略。 接下来,让我们看看怎么样在树莓派2 上运行Snappy。 -用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是院子升级和回滚功能会蚕食不小的空间。使用Snappy 启动树莓派2 后你就可以使用默认用户名和密码(都是ubuntu)登录系统。 +用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是原子升级和回滚功能会占用不小的空间。使用Snappy 启动树莓派2 后你就可以使用默认用户名和密码(都是ubuntu)登录系统。 ![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg) @@ -18,7 +18,7 @@ sudo 已经配置好了可以直接用,安全起见,你应该使用以下命 或者也可以使用`adduser` 为你添加一个新用户。 -因为RPI缺少硬件始终,而Snappy 不知道这一点,所以系统会有一个小bug:处理命令时会报很多错。不过这个很容易解决: +因为RPI缺少硬件时钟,而Snappy 并不知道这一点,所以系统会有一个小bug:处理某些命令时会报很多错。不过这个很容易解决: 使用这个命令来确认这个bug 是否影响: @@ -36,7 +36,7 @@ sudo 已经配置好了可以直接用,安全起见,你应该使用以下命 $ sudo apt-get update && sudo apt-get distupgrade -现在将不会让你通过,因为Snappy 会使用它自己精简过的、基于dpkg 的包管理系统。这是做是应为Snappy 会运行很多嵌入式程序,而你也会想着所有事情尽可能的简化。 +不过这时系统不会让你通过,因为Snappy 
使用它自己精简过的、基于dpkg 的包管理系统。这么做的原因是Snappy 会运行很多嵌入式程序,而同时你也会想着所有事情尽可能的简化。 让我们来看看最关键的部分,理解一下程序是如何与Snappy 工作的。运行Snappy 的SD 卡上除了boot 分区外还有3个分区。其中的两个构成了一个重复的文件系统。这两个平行文件系统被固定挂载为只读模式,并且任何时刻只有一个是激活的。第三个分区是一个部分可写的文件系统,用来让用户存储数据。通过更新系统,标记为'system-a' 的分区会保持一个完整的文件系统,被称作核心,而另一个平行文件系统仍然会是空的。 @@ -52,13 +52,13 @@ sudo 已经配置好了可以直接用,安全起见,你应该使用以下命 $ sudo snappy versions -a -经过更新-重启的操作,你应该可以看到被激活的核心已经被改变了。 +经过更新-重启两步操作,你应该可以看到被激活的核心已经被改变了。 因为到目前为止我们还没有安装任何软件,下面的命令: $ sudo snappy update ubuntu-core -将会生效,而且如果你打算仅仅更新特定的OS,这也是一个办法。如果出了问题,你可以使用下面的命令回滚: +将会生效,而且如果你打算仅仅更新特定的OS 版本,这也是一个办法。如果出了问题,你可以使用下面的命令回滚: $ sudo snappy rollback ubuntu-core @@ -77,7 +77,7 @@ sudo 已经配置好了可以直接用,安全起见,你应该使用以下命 via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html 作者:[Ferdinand Thommes][a] -译者:[译者ID](https://github.com/oska874) +译者:[Ezio](https://github.com/oska874) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 5e44c4c18b98bbb459008ca388adc4f54a5a59e3 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:29:24 +0800 Subject: [PATCH 043/697] Create Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md --- ... 
(Mirroring) using 'Two Disks' in Linux.md | 217 ++++++++++++++++++ 1 file changed, 217 insertions(+) create mode 100644 translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md diff --git a/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md new file mode 100644 index 0000000000..948e530ed8 --- /dev/null +++ b/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md @@ -0,0 +1,217 @@ +在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分 +================================================================================ +RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。 + + +![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) + +在 Linux 中设置 RAID1 + +创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。 + +### RAID 1 的特点 ### + +-镜像具有良好的性能。 + +-磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 + +-在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 + +-读取数据会比写入性能更好。 + +#### 要求 #### + + +创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 + +这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。 + +需要阅读: [Basic Concepts of RAID in Linux][1] + +#### 在我的服务器安装 #### + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.226 + Hostname : rd1.tecmintlocal.com + Disk 1 [20GB] : /dev/sdb + Disk 2 [20GB] : /dev/sdc + +本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。 + +### 第1步:安装所需要的并且检查磁盘 ### + +1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 + + # yum install mdadm [on RedHat systems] + # apt-get install mdadm [on Debain systems] + +2. 
一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 + + # mdadm -E /dev/sd[b-c] + +![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) + +检查 RAID 的磁盘 + + +正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 + +### 第2步:为 RAID 创建分区 ### + +3. 正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 + + # fdisk /dev/sdb + +按照下面的说明 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 按两次回车键默认将整个容量分配给它。 +- 然后,按 ‘P’ 来打印创建好的分区。 +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 修改分区类型。 +- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) + +创建磁盘分区 + +在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 + + # fdisk /dev/sdc + +![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) + +创建第二个分区 + +4. 一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 + + # mdadm -E /dev/sd[b-c] + +![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) + +验证分区变化 + +![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) + +检查 RAID 类型 + +**注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 + +### 步骤3:创建 RAID1 设备 ### + +5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 + + # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 + # cat /proc/mdstat + +![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) + +创建RAID设备 + +6. 
接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 + + # mdadm -E /dev/sd[b-c]1 + # mdadm --detail /dev/md0 + +![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) + +检查 RAID 设备类型 + +![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) + +检查 RAID 设备阵列 + +从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 + +### 第4步:在 RAID 设备上创建文件系统 ### + +7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . + + # mkfs.ext4 /dev/md0 + +![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) + +创建 RAID 设备文件系统 + +8. 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 + + # mkdir /mnt/raid1 + # mount /dev/md0 /mnt/raid1/ + # touch /mnt/raid1/tecmint.txt + # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt + +![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) + +挂载 RAID 设备 + +9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。 + + /dev/md0 /mnt/raid1 ext4 defaults 0 0 + +![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) + +自动挂载 Raid 设备 + +10. 运行“mount -a”,检查 fstab 中的条目是否有错误 + # mount -av + +![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) + +检查 fstab 中的错误 + +11. 
接下来,使用下面的命令保存 RAID 的配置到文件“mdadm.conf”中。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + +![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) + +保存 Raid 的配置 + +系统重启时会读取上述配置文件并加载 RAID 设备。 + +### 第5步:在磁盘故障后检查数据 ### + +12. 我们的主要目的是,即使在磁盘发生故障或宕机时,也必须保证数据可用。让我们来看看,当任何一个磁盘不可用时会发生什么。 + + # mdadm --detail /dev/md0 + +![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) + +验证 Raid 设备 + +在上面的图片中,我们可以看到在 RAID 中有 2 个设备是可用的,并且 Active Devices 是 2。现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 + + # ls -l /dev | grep sd + # mdadm --detail /dev/md0 + +![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) + +测试 RAID 设备 + +现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。 + + # cd /mnt/raid1/ + # cat tecmint.txt + +![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) + +验证 RAID 数据 + +可以看到,我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置 RAID 5(条带化与分布式奇偶校验)。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid1-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 21d4f77e67c1780b8f0a67defbe4dc8487cd512a Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:31:36 +0800 Subject: [PATCH 044/697] Create Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md --- ...iping with Distributed Parity) in Linux.md | 285 ++++++++++++++++++ 1 file changed, 285 insertions(+) create mode 100644 translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md
diff --git a/translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md b/translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md
new file mode 100644
index 0000000000..7de5199a08
--- /dev/null
+++ b/translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md
@@ -0,0 +1,285 @@
+
+在 Linux 中创建 RAID 5(条带化与分布式奇偶校验) - 第4部分
+================================================================================
+在 RAID 5 中,数据以条带化的方式存储在多个磁盘上,并带有分布式奇偶校验。分布式奇偶校验意味着奇偶校验信息和条带数据被分布在多个磁盘上,这样可以提供很好的数据冗余。
+
+![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg)
+
+在 Linux 中配置 RAID 5
+
+此 RAID 级别至少需要三个磁盘。RAID 5 通常用于大规模的生产环境中,以更高的成本换取更好的数据冗余和性能。
+
+#### 什么是奇偶校验? ####
+
+奇偶校验是数据存储中最简单的一种错误检测方法。比如说,我们有4个磁盘,其中一个磁盘的空间被划分出来,用于存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,在更换故障磁盘后,我们可以从奇偶校验信息中重建出原来的数据。
+
+#### RAID 5 的优点和缺点 ####
+
+- 提供更好的性能。
+- 支持冗余和容错。
+- 支持热备份。
+- 会损失一个磁盘的容量,用于存储奇偶校验信息。
+- 单个磁盘发生故障后不会丢失数据;更换故障磁盘后,可以从奇偶校验信息中重建数据。
+- 事务处理的读操作会更快。
+- 由于需要计算奇偶校验,写操作会比较慢。
+- 重建需要很长的时间。
+
+#### 要求 ####
+
+创建 RAID 5 最少需要3个磁盘;你也可以添加更多的磁盘,前提是你要有多端口的专用硬件 RAID 控制器。在这里,我们使用“mdadm”包来创建软件 RAID。
+
+mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的软件包。默认情况下 RAID 没有可用的配置文件,在创建和配置 RAID 之后,我们必须将配置保存在一个单独的文件中,例如 mdadm.conf。
+
+在进一步学习之前,我建议你通过下面的文章去了解 Linux 中 RAID 的基础知识。
+
+- [Basic Concepts of RAID in Linux – Part 1][1]
+- [Creating RAID 0 (Stripe) in Linux – Part 2][2]
+- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3]
+
+#### 我的服务器设置 ####
+
+    Operating System : CentOS 6.5 Final
+    IP Address : 192.168.0.227
+    Hostname : rd5.tecmintlocal.com
+    Disk 1 [20GB] : /dev/sdb
+    Disk 2 [20GB] : /dev/sdc
+    Disk 3 [20GB] : /dev/sdd
+
+这篇文章是 RAID 系列9篇教程的第4部分。在这里,我们将在 Linux 系统(服务器)上,使用三块 20GB 的磁盘(/dev/sdb、/dev/sdc 和 /dev/sdd)建立一个软件 RAID 5(分布式奇偶校验)。
+
+### 第1步:安装 mdadm 并检验磁盘 ###
+
+1. 正如我们前面所说,本文使用 CentOS 6.5 Final 版本来创建 RAID,但同样的做法也适用于其他 Linux 发行版。
+
+    # lsb_release -a
+    # ifconfig | grep inet
+
+![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png)
+
+CentOS 6.5 摘要
+
+2. 如果你是按照我们的 RAID 系列文章配置的,我们假设你已经安装了“mdadm”包;如果没有,请根据你的 Linux 发行版使用下面的命令安装。
+
+    # yum install mdadm     [on RedHat systems]
+    # apt-get install mdadm     [on Debian systems]
+
+3. “mdadm”包安装后,先使用 ‘fdisk’ 命令列出我们在系统上新增的三个 20GB 硬盘。
+
+    # fdisk -l | grep sd
+
+![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png)
+
+安装 mdadm 工具
+
+4. 现在使用下面的命令,检查这三个磁盘上是否已经存在 RAID 块。
+
+    # mdadm -E /dev/sd[b-d]
+    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd
+
+![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png)
+
+检查 Raid 磁盘
+
+**注意**: 上面的图片说明没有检测到任何超级块,也就是说这三个磁盘上还没有定义过 RAID。让我们现在就开始创建一个吧!
+
+### 第2步:为磁盘创建 RAID 分区 ###
+
+5. 首先,在创建 RAID 前我们要为磁盘(/dev/sdb、/dev/sdc 和 /dev/sdd)分区。在进行下一步之前,先使用 ‘fdisk’ 命令进行分区。
+
+    # fdisk /dev/sdb
+    # fdisk /dev/sdc
+    # fdisk /dev/sdd
+
+#### 创建 /dev/sdb 分区 ####
+
+请按照下面的说明在 /dev/sdb 硬盘上创建分区。
+
+- 按 ‘n’ 创建新的分区。
+- 然后按 ‘p’ 选择主分区,因为这里还没有定义过任何分区。
+- 接下来选择分区号为1,默认就是1。
+- 接着会询问柱面大小;我们不必指定具体大小,因为要将整个分区都用于 RAID,所以只需按两次 Enter 键,默认将整个容量分配给它。
+- 然后按 ‘p’ 打印创建好的分区。
+- 按 ‘t’ 修改分区类型;按 ‘L’ 可以列出所有可用的类型。
+- 这里输入 ‘fd’,将分区设置为 Linux RAID 类型。
+- 再次按 ‘p’ 查看我们所做的更改。
+- 按 ‘w’ 保存更改。
+
+![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png)
+
+创建 sdb 分区
+
+**注意**: 我们仍要按照上面的步骤来创建 sdc 和 sdd 的分区。
+
+#### 创建 /dev/sdc 分区 ####
+
+现在,按照下面的截图(或者上面的步骤)创建 sdc 和 sdd 磁盘的分区。
+
+    # fdisk /dev/sdc
+
+![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png)
+
+创建 sdc 分区
+
+#### 创建 /dev/sdd 分区 ####
+
+    # fdisk /dev/sdd
+
+![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png)
+
+创建 sdd 分区
+
+6. 
创建分区后,检查这三个磁盘 sdb、sdc、sdd 上的变化。
+
+    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd
+
+    或
+
+    # mdadm -E /dev/sd[b-d]
+
+![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png)
+
+检查磁盘变化
+
+**注意**: 在上面的图片中,磁盘的类型是 fd。
+
+7. 现在在新创建的分区上检查 RAID 块。如果没有检测到超级块,我们就可以继续下一步,在这些磁盘上创建一个新的 RAID 5。
+
+![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png)
+
+在分区中检查 Raid
+
+### 第3步:创建 md 设备 md0 ###
+
+8. 现在使用以下命令,以所有新创建的分区(sdb1、sdc1 和 sdd1)创建一个 RAID 设备“md0”(即 /dev/md0)。
+
+    # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
+
+    或
+
+    # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1
+
+9. 创建 RAID 设备后,通过 mdstat 的输出检查并确认其中的设备和 RAID 级别。
+
+    # cat /proc/mdstat
+
+![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png)
+
+验证 Raid 设备
+
+如果你想监视当前的创建过程,可以使用 ‘watch’ 命令,即 watch -n1 'cat /proc/mdstat',它会在屏幕上每隔1秒刷新一次显示。
+
+    # watch -n1 cat /proc/mdstat
+
+![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png)
+
+监控 Raid 5 过程
+
+![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png)
+
+Raid 5 过程概要
+
+10. 创建 RAID 后,使用以下命令验证 RAID 设备。
+
+    # mdadm -E /dev/sd[b-d]1
+
+![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png)
+
+验证 Raid 级别
+
+**注意**: 因为它要显示三个磁盘的信息,上述命令的输出会有点长。
+
+11. 接下来,查看当前参与 RAID 阵列的设备;可以看到阵列已经开始重新同步。
+
+    # mdadm --detail /dev/md0
+
+![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png)
+
+验证 Raid 阵列
+
+### 第4步:为 md0 创建文件系统 ###
+
+12. 在挂载前,为“md0”设备创建 ext4 文件系统。
+
+    # mkfs.ext4 /dev/md0
+
+![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png)
+
+创建 md0 文件系统
+
+13. 现在,在 /mnt 下创建目录 raid5,然后将文件系统挂载到 /mnt/raid5/ 下,并检查挂载点下的文件,你会看到 lost+found 目录。
+
+    # mkdir /mnt/raid5
+    # mount /dev/md0 /mnt/raid5/
+    # ls -l /mnt/raid5/
+
+14. 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中写入一些内容,然后进行验证。
+
+    # touch /mnt/raid5/raid5_tecmint_{1..5}
+    # ls -l /mnt/raid5/
+    # echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1
+    # cat /mnt/raid5/raid5_tecmint_1
+    # cat /proc/mdstat
+
+![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png)
+
+挂载 Raid 设备
+
+15. 我们需要在 fstab 中添加条目,否则系统重启后挂载点不会被自动挂载。编辑 /etc/fstab,在文件末尾追加以下行,如下图所示(挂载点请根据你的环境修改)。
+
+    # vim /etc/fstab
+
+    /dev/md0 /mnt/raid5 ext4 defaults 0 0
+
+![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png)
+
+自动挂载 Raid 5
+
+16. 接下来,运行 ‘mount -av’ 命令检查 fstab 条目中是否有错误。
+
+    # mount -av
+
+![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png)
+
+检查 Fstab 错误
+
+### 第5步:保存 Raid 5 的配置 ###
+
+17. 前面已经说过,默认情况下 RAID 没有配置文件,必须手动保存。如果不做这一步,系统重启后 RAID 设备可能就不再是 md0,而会变成其他编号。
+
+所以,我们必须在系统重启之前保存配置。这样,系统重启时配置会被加载到内核中,RAID 也会随之加载。
+
+    # mdadm --detail --scan --verbose >> /etc/mdadm.conf
+
+![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png)
+
+保存 Raid 5 配置
+
+注意:保存配置可以保证 md0 设备的 RAID 级别在系统重启后保持不变。
+
+### 第6步:添加备用磁盘 ###
+
+18. 备用磁盘有什么用?它非常有用:如果我们配置了一个备用磁盘,那么当阵列中的任何一个磁盘发生故障后,它会被自动加入阵列并启动重建进程,从其他磁盘上同步数据,这样就保证了冗余。
+
+更多关于添加备用磁盘和检查 RAID 5 容错能力的操作,请阅读下面文章中的第6步和第7步。
+
+- [Add Spare Drive to Raid 5 Setup][4]
+
+### 结论 ###
+
+在这篇文章中,我们已经看到了如何使用三个磁盘配置一个 RAID 5。在接下来的文章中,我们将看到如何进行故障排除,以及当 RAID 5 中的一个磁盘损坏后如何恢复数据。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-raid-5-in-linux/
+
+作者:[Babin Lonston][a]
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/babinlonston/
+[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
+[2]:http://www.tecmint.com/create-raid0-in-linux/
+[3]:http://www.tecmint.com/create-raid1-in-linux/
+[4]:http://www.tecmint.com/create-raid-6-in-linux/

From c16ed6b8063def3f227e5e3f783a1b1520569574 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Tue, 4 Aug 2015 10:32:00 +0800
Subject: =?UTF-8?q?Delete=20Setting=20up=20RAID=201=20(Mir?=
 =?UTF-8?q?roring)=20using=20=E2=80=98Two=20Disks=E2=80=99=20in=20Linux=20?=
 =?UTF-8?q?=E2=80=93=20Part=203?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...oring) using ‘Two Disks’ in Linux – Part 3 | 217 ------------------
 1 file changed, 217 deletions(-)
 delete mode 100644 translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3

diff --git a/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in
Linux – Part 3 b/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 deleted file mode 100644 index 948e530ed8..0000000000 --- a/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 +++ /dev/null @@ -1,217 +0,0 @@ -在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分 -================================================================================ -RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。 - - -![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) - -在 Linux 中设置 RAID1 - -创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。 - -### RAID 1 的特点 ### - --镜像具有良好的性能。 - --磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 - --在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 - --读取数据会比写入性能更好。 - -#### 要求 #### - - -创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 - -这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。 - -需要阅读: [Basic Concepts of RAID in Linux][1] - -#### 在我的服务器安装 #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.226 - Hostname : rd1.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - -本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。 - -### 第1步:安装所需要的并且检查磁盘 ### - -1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -2. 一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 - - # mdadm -E /dev/sd[b-c] - -![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) - -检查 RAID 的磁盘 - - -正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 - -### 第2步:为 RAID 创建分区 ### - -3. 
正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 - - # fdisk /dev/sdb - -按照下面的说明 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 按两次回车键默认将整个容量分配给它。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 修改分区类型。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) - -创建磁盘分区 - -在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 - - # fdisk /dev/sdc - -![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) - -创建第二个分区 - -4. 一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 - - # mdadm -E /dev/sd[b-c] - -![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) - -验证分区变化 - -![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) - -检查 RAID 类型 - -**注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 - -### 步骤3:创建 RAID1 设备 ### - -5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 - - # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 - # cat /proc/mdstat - -![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) - -创建RAID设备 - -6. 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 - - # mdadm -E /dev/sd[b-c]1 - # mdadm --detail /dev/md0 - -![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) - -检查 RAID 设备类型 - -![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) - -检查 RAID 设备阵列 - -从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 - -### 第4步:在 RAID 设备上创建文件系统 ### - -7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . - - # mkfs.ext4 /dev/md0 - -![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) - -创建 RAID 设备文件系统 - -8. 
接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 - - # mkdir /mnt/raid1 - # mount /dev/md0 /mnt/raid1/ - # touch /mnt/raid1/tecmint.txt - # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt - -![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) - -挂载 RAID 设备 - -9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。 - - /dev/md0 /mnt/raid1 ext4 defaults 0 0 - -![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) - -自动挂载 Raid 设备 - -10. 运行“mount -a”,检查 fstab 中的条目是否有错误 - # mount -av - -![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) - -检查 fstab 中的错误 - -11. 接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。 - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) - -保存 Raid 的配置 - -上述配置文件在系统重启时会读取并加载 RAID 设备。 - -### 第5步:在磁盘故障后检查数据 ### - -12.我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 - - # mdadm --detail /dev/md0 - -![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) - -验证 Raid 设备 - -在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的并且 Active Devices 是2.现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 - - # ls -l /dev | grep sd - # mdadm --detail /dev/md0 - -![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) - -测试 RAID 设备 - -现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。 - - # cd /mnt/raid1/ - # cat tecmint.txt - -![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) - -验证 RAID 数据 - -你有没有看到我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid1-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) 
-校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From e7e25c9cb5858924af23c3ba900b69568250285e Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:32:12 +0800 Subject: [PATCH 046/697] =?UTF-8?q?Delete=20Introduction=20to=20RAID,=20Co?= =?UTF-8?q?ncepts=20of=20RAID=20and=20RAID=20Levels=20=E2=80=93=20Part=201?= =?UTF-8?q?.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ncepts of RAID and RAID Levels – Part 1.md | 146 ------------------ 1 file changed, 146 deletions(-) delete mode 100644 translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md diff --git a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md deleted file mode 100644 index 8ca0ecbd7e..0000000000 --- a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md +++ /dev/null @@ -1,146 +0,0 @@ - -RAID的级别和概念的介绍 - 第1部分 -================================================================================ -RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 - - -![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) - -在 Linux 中理解 RAID 的设置 - -RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 - -这个系列被命名为RAID的构建共包含9个部分包括以下主题。 - -- 第1部分:RAID的级别和概念的介绍 -- 第2部分:在Linux中如何设置 RAID0(条带化) -- 第3部分:在Linux中如何设置 RAID1(镜像化) -- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) -- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) -- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) -- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 -- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 -- 
第9部分:在 Linux 中管理 RAID - -这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 - - -### 软件RAID和硬件RAID ### - -软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 - -硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 - -硬件 RAID 卡如下所示: - -![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) - -硬件RAID - -#### 精选的 RAID 概念 #### - -- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 -- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 -- 镜像被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 -- 在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 -- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 - -RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 - -- RAID0 = 条带化 -- RAID1 = 镜像 -- RAID5 = 单个磁盘分布式奇偶校验 -- RAID6 = 双盘分布式奇偶校验 -- RAID10 = 镜像 + 条带。(嵌套RAID) - -RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 - -#### RAID 0(或)条带化 #### - -条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 - -假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 - -在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 - -- 高性能。 -- 在 RAID0 上零容量损失。 -- 零容错。 -- 写和读有很高的性能。 - -#### RAID1(或)镜像化 #### - -镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两组2TB的硬盘驱动器,我们总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成一个逻辑驱动器,我们只能看到逻辑驱动器有2TB。 - -当我们保存数据时,它将同时写入2TB驱动器中。创建 RAID 1 (镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以恢复 RAID 通过更换一个新的磁盘。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为其他的磁盘中也有相同的数据。所以是零数据丢失。 - -- 良好的性能。 -- 空间的一半将在总容量丢失。 -- 完全容错。 -- 重建会更快。 -- 写性能将是缓慢的。 -- 读将会很好。 -- 被操作系统和数据库使用的规模很小。 - -#### RAID 5(或)分布式奇偶校验 #### - -RAID 5 多用于企业的水平。 RAID 5 的工作通过分布式奇偶校验的方法。奇偶校验信息将被用于重建数据。它需要留下的正常驱动器上的信息去重建。驱动器故障时,这会保护我们的数据。 - -假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 
1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 - -- 性能卓越 -- 读速度将非常好。 -- 如果我们不使用硬件 RAID 控制器,写速度是缓慢的。 -- 从所有驱动器的奇偶校验信息中重建。 -- 完全容错。 -- 1个磁盘空间将用于奇偶校验。 -- 可以被用在文件服务器,Web服务器,非常重要的备份中。 - -#### RAID 6 两个分布式奇偶校验磁盘 #### - -RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 - -它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 - -- 性能不佳。 -- 读的性能很好。 -- 如果我们不使用硬件 RAID 控制器写的性能会很差。 -- 从2奇偶校验驱动器上重建。 -- 完全容错。 -- 2个磁盘空间将用于奇偶校验。 -- 可用于大型阵列。 -- 在备份和视频流中大规模使用。 - -#### RAID 10(或)镜像+条带 #### - -RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 - -假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 - -如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 - -同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 - -- 良好的读写性能。 -- 空间的一半将在总容量丢失。 -- 容错。 -- 从备份数据中快速重建。 -- 它的高性能和高可用性常被用于数据库的存储中。 - -### 结论 ### - -在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 的哪个级别。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 - -在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/understanding-raid-setup-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ From be241caf54d4564570a4897b5f68aa2e5e2a9842 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:32:27 +0800 Subject: [PATCH 047/697] =?UTF-8?q?Delete=20Creating=20RAID=205=20(Stripin?= =?UTF-8?q?g=20with=20Distributed=20Parity)=20in=20Linux=20=E2=80=93=20Par?= =?UTF-8?q?t=204?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit --- ...with Distributed Parity) in Linux – Part 4 | 285 ------------------ 1 file changed, 285 deletions(-) delete mode 100644 translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 diff --git a/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 b/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 deleted file mode 100644 index 7de5199a08..0000000000 --- a/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 +++ /dev/null @@ -1,285 +0,0 @@ - -在 Linux 中创建 RAID 5(条带化与分布式奇偶校验) - 第4部分 -================================================================================ -在 RAID 5 中,条带化数据跨多个驱磁盘使用分布式奇偶校验。分布式奇偶校验的条带化意味着它将奇偶校验信息和条带中的数据分布在多个磁盘上,它将有很好的数据冗余。 - -![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg) - -在 Linux 中配置 RAID 5 - -对于此 RAID 级别它至少应该有三个或更多个磁盘。RAID 5 通常被用于大规模生产环境中花费更多的成本来提供更好的数据冗余性能。 - -#### 什么是奇偶校验? 
#### - -奇偶校验是在数据存储中检测错误最简单的一个方法。奇偶校验信息存储在每个磁盘中,比如说,我们有4个磁盘,其中一个磁盘空间被分割去存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,我们可以通过更换故障磁盘后,从奇偶校验信息重建得到原来的数据。 - -#### RAID 5 的优点和缺点 #### - -- 提供更好的性能 -- 支持冗余和容错。 -- 支持热备份。 -- 将失去一个磁盘的容量存储奇偶校验信息。 -- 单个磁盘发生故障后不会丢失数据。我们可以更换故障硬盘后从奇偶校验信息中重建数据。 -- 事务处理读操作会更快。 -- 由于奇偶校验占用资源,写操作将是缓慢的。 -- 重建需要很长的时间。 - -#### 要求 #### -创建 RAID 5 最少需要3个磁盘,你也可以添加更多的磁盘,前提是你要有多端口的专用硬件 RAID 控制器。在这里,我们使用“mdadm”包来创建软件 RAID。 - -mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下 RAID 没有可用的配置文件,我们在创建和配置 RAID 后必须将配置文件保存在一个单独的文件中,例如:mdadm.conf。 - -在进一步学习之前,我建议你通过下面的文章去了解 Linux 中 RAID 的基础知识。 - -- [Basic Concepts of RAID in Linux – Part 1][1] -- [Creating RAID 0 (Stripe) in Linux – Part 2][2] -- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3] - -#### 我的服务器设置 #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.227 - Hostname : rd5.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd - -这篇文章是 RAID 系列9教程的第4部分,在这里我们要建立一个软件 RAID 5(分布式奇偶校验)使用三个20GB(名为/dev/sdb, /dev/sdc 和 /dev/sdd)的磁盘在 Linux 系统或服务器中上。 - -### 第1步:安装 mdadm 并检验磁盘 ### - -1.正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID 设置,但同样的做法也适用于其他 Linux 发行版。 - - # lsb_release -a - # ifconfig | grep inet - -![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png) - -CentOS 6.5 摘要 - -2. 如果你按照我们的 RAID 系列去配置的,我们假设你已经安装了“mdadm”包,如果没有,根据你的 Linux 发行版使用下面的命令安装。 - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -3. “mdadm”包安装后,先使用‘fdisk‘命令列出我们在系统上增加的三个20GB的硬盘。 - - # fdisk -l | grep sd - -![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png) - -安装 mdadm 工具 - -4. 现在该检查这三个磁盘是否存在 RAID 块,使用下面的命令来检查。 - - # mdadm -E /dev/sd[b-d] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd - -![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png) - -检查 Raid 磁盘 - -**注意**: 上面的图片说明,没有检测到任何超级块。所以,这三个磁盘中没有定义 RAID。让我们现在开始创建一个吧! 
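上面“什么是奇偶校验”一节提到,RAID 5 可以在更换故障磁盘后从奇偶校验信息中重建数据,其原理就是按位异或(XOR):校验块等于同一条带中各数据块的异或结果,因此任何一块数据丢失后,都可以用其余数据块和校验块异或还原。下面用几个随意构造的字节值做一个原理示意(这只是演示,并非 mdadm 的实际实现):

```shell
# 三个“数据块”,这里用三个字节值来示意(构造的示例数据)
d1=170 d2=85 d3=240

# RAID 5 存放在校验块中的内容:各数据块的按位异或
parity=$(( d1 ^ d2 ^ d3 ))

# 假设存放 d2 的磁盘损坏:用幸存的数据块和校验块把它重建出来
rebuilt=$(( d1 ^ d3 ^ parity ))

echo "parity=$parity rebuilt=$rebuilt"   # rebuilt 应等于 d2,即 85
```

这也解释了为什么 RAID 5 只能容忍坏一块盘:同一条带丢失两块后,异或方程就解不出来了,这正是 RAID 6 引入第二份校验信息的原因。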
- -### 第2步:为磁盘创建 RAID 分区 ### - -5. 首先,在创建 RAID 前我们要为磁盘分区(/dev/sdb, /dev/sdc 和 /dev/sdd),在进行下一步之前,先使用‘fdisk’命令进行分区。 - - # fdisk /dev/sdb - # fdisk /dev/sdc - # fdisk /dev/sdd - -#### 创建 /dev/sdb 分区 #### - -请按照下面的说明在 /dev/sdb 硬盘上创建分区。 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。选择主分区是因为还没有定义过分区。 -- 接下来选择分区号为1。默认就是1. -- 这里是选择柱面大小,我们没必要选择指定的大小,因为我们需要为 RAID 使用整个分区,所以只需按两次 Enter 键默认将整个容量分配给它。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 改变分区类型,按 ‘L’可以列出所有可用的类型。 -- 按 ‘t’ 修改分区类型。 -- 这里使用‘fd’设置为 RAID 的类型。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png) - -创建 sdb 分区 - -**注意**: 我们仍要按照上面的步骤来创建 sdc 和 sdd 的分区。 - -#### 创建 /dev/sdc 分区 #### - -现在,通过下面的截图给出创建 sdc 和 sdd 磁盘分区的方法,或者你可以按照上面的步骤。 - - # fdisk /dev/sdc - -![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png) - -创建 sdc 分区 - -#### 创建 /dev/sdd 分区 #### - - # fdisk /dev/sdd - -![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png) - -创建 sdd 分区 - -6. 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。 - - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd - - or - - # mdadm -E /dev/sd[b-c] - -![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png) - -检查磁盘变化 - -**注意**: 在上面的图片中,磁盘的类型是 fd。 - -7.现在在新创建的分区检查 RAID 块。如果没有检测到超级块,我们就能够继续下一步,创建一个新的 RAID 5 的设置在这些磁盘中。 - -![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png) - -在分区中检查 Raid - -### 第3步:创建 md 设备 md0 ### - -8. 现在创建一个 RAID 设备“md0”(即 /dev/md0)使用所有新创建的分区(sdb1, sdc1 and sdd1) ,使用以下命令。 - - # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 - - or - - # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1 - -9. 
创建 RAID 设备后,检查并确认 RAID,包括设备和从 mdstat 中输出的 RAID 级别。 - - # cat /proc/mdstat - -![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png) - -验证 Raid 设备 - -如果你想监视当前的创建过程,你可以使用‘watch‘命令,使用 watch ‘cat /proc/mdstat‘,它会在屏幕上显示且每隔1秒刷新一次。 - - # watch -n1 cat /proc/mdstat - -![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png) - -监控 Raid 5 过程 - -![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png) - -Raid 5 过程概要 - -10. 创建 RAID 后,使用以下命令验证 RAID 设备 - - # mdadm -E /dev/sd[b-d]1 - -![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png) - -验证 Raid 级别 - -**注意**: 因为它显示三个磁盘的信息,上述命令的输出会有点长。 - -11. 接下来,验证 RAID 阵列的假设,这包含正在运行 RAID 的设备,并开始重新同步。 - - # mdadm --detail /dev/md0 - -![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png) - -验证 Raid 阵列 - -### 第4步:为 md0 创建文件系统### - -12. 在挂载前为“md0”设备创建 ext4 文件系统。 - - # mkfs.ext4 /dev/md0 - -![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png) - -创建 md0 文件系统 - -13.现在,在‘/mnt‘下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下并检查下挂载点的文件,你会看到 lost+found 目录。 - - # mkdir /mnt/raid5 - # mount /dev/md0 /mnt/raid5/ - # ls -l /mnt/raid5/ - -14. 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。 - - # touch /mnt/raid5/raid5_tecmint_{1..5} - # ls -l /mnt/raid5/ - # echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1 - # cat /mnt/raid5/raid5_tecmint_1 - # cat /proc/mdstat - -![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png) - -挂载 Raid 设备 - -15. 我们需要在 fstab 中添加条目,否则系统重启后将不会显示我们的挂载点。然后编辑 fstab 文件添加条目,在文件尾追加以下行,如下图所示。挂载点会根据你环境的不同而不同。 - - # vim /etc/fstab - - /dev/md0 /mnt/raid5 ext4 defaults 0 0 - -![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png) - -自动挂载 Raid 5 - -16. 
接下来,运行‘mount -av‘命令检查 fstab 条目中是否有错误。 - - # mount -av - -![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png) - -检查 Fstab 错误 - -### 第5步:保存 Raid 5 的配置 ### - -17. 在前面章节已经说过,默认情况下 RAID 没有配置文件。我们必须手动保存。如果此步不跟 RAID 设备将不会存在 md0,它将会跟一些其他数子。 - -所以,我们必须要在系统重新启动之前保存配置。如果配置保存它在系统重新启动时会被加载到内核中然后 RAID 也将被加载。 - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png) - -保存 Raid 5 配置 - -注意:保存配置将保持 RAID 级别的稳定性在 md0 设备中。 - -### 第6步:添加备用磁盘 ### - -18.备用磁盘有什么用?它是非常有用的,如果我们有一个备用磁盘,当我们阵列中的任何一个磁盘发生故障后,这个备用磁盘会主动添加并重建进程,并从其他磁盘上同步数据,所以我们可以在这里看到冗余。 - -更多关于添加备用磁盘和检查 RAID 5 容错的指令,请阅读下面文章中的第6步和第7步。 - -- [Add Spare Drive to Raid 5 Setup][4] - -### 结论 ### - -在这篇文章中,我们已经看到了如何使用三个磁盘配置一个 RAID 5 。在接下来的文章中,我们将看到如何故障排除并且当 RAID 5 中的一个磁盘损坏后如何恢复。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid-5-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ -[3]:http://www.tecmint.com/create-raid1-in-linux/ -[4]:http://www.tecmint.com/create-raid-6-in-linux/ From b1fd032e97a42a7256d951cbc670bdce03cc08cb Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:32:40 +0800 Subject: [PATCH 048/697] Delete Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md --- ... 
(Mirroring) using 'Two Disks' in Linux.md | 217 ------------------ 1 file changed, 217 deletions(-) delete mode 100644 translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md diff --git a/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md deleted file mode 100644 index 948e530ed8..0000000000 --- a/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md +++ /dev/null @@ -1,217 +0,0 @@ -在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分 -================================================================================ -RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。 - - -![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) - -在 Linux 中设置 RAID1 - -创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。 - -### RAID 1 的特点 ### - --镜像具有良好的性能。 - --磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 - --在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 - --读取数据会比写入性能更好。 - -#### 要求 #### - - -创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 - -这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。 - -需要阅读: [Basic Concepts of RAID in Linux][1] - -#### 在我的服务器安装 #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.226 - Hostname : rd1.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - -本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。 - -### 第1步:安装所需要的并且检查磁盘 ### - -1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -2. 
一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 - - # mdadm -E /dev/sd[b-c] - -![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) - -检查 RAID 的磁盘 - - -正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 - -### 第2步:为 RAID 创建分区 ### - -3. 正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 - - # fdisk /dev/sdb - -按照下面的说明 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 按两次回车键默认将整个容量分配给它。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 修改分区类型。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) - -创建磁盘分区 - -在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 - - # fdisk /dev/sdc - -![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) - -创建第二个分区 - -4. 一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 - - # mdadm -E /dev/sd[b-c] - -![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) - -验证分区变化 - -![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) - -检查 RAID 类型 - -**注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 - -### 步骤3:创建 RAID1 设备 ### - -5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 - - # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 - # cat /proc/mdstat - -![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) - -创建RAID设备 - -6. 
接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 - - # mdadm -E /dev/sd[b-c]1 - # mdadm --detail /dev/md0 - -![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) - -检查 RAID 设备类型 - -![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) - -检查 RAID 设备阵列 - -从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 - -### 第4步:在 RAID 设备上创建文件系统 ### - -7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . - - # mkfs.ext4 /dev/md0 - -![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) - -创建 RAID 设备文件系统 - -8. 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 - - # mkdir /mnt/raid1 - # mount /dev/md0 /mnt/raid1/ - # touch /mnt/raid1/tecmint.txt - # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt - -![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) - -挂载 RAID 设备 - -9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。 - - /dev/md0 /mnt/raid1 ext4 defaults 0 0 - -![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) - -自动挂载 Raid 设备 - -10. 运行“mount -a”,检查 fstab 中的条目是否有错误 - # mount -av - -![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) - -检查 fstab 中的错误 - -11. 
接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。 - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) - -保存 Raid 的配置 - -上述配置文件在系统重启时会读取并加载 RAID 设备。 - -### 第5步:在磁盘故障后检查数据 ### - -12.我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 - - # mdadm --detail /dev/md0 - -![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) - -验证 Raid 设备 - -在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的并且 Active Devices 是2.现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 - - # ls -l /dev | grep sd - # mdadm --detail /dev/md0 - -![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) - -测试 RAID 设备 - -现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。 - - # cd /mnt/raid1/ - # cat tecmint.txt - -![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) - -验证 RAID 数据 - -你有没有看到我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid1-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 10ee418d6422f54036002872fd4a328bdf691557 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:32:59 +0800 Subject: [PATCH 049/697] =?UTF-8?q?Delete=20Creating=20Software=20RAID0=20?= =?UTF-8?q?(Stripe)=20on=20=E2=80=98Two=20Devices=E2=80=99=20Using=20?= =?UTF-8?q?=E2=80=98mdadm=E2=80=99=20Tool=20in=20Linux=20=E2=80=93=20Part?= =?UTF-8?q?=202.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit --- ...ces’ Using ‘mdadm’ Tool in Linux – Part 2.md | 218 ------------------ 1 file changed, 218 deletions(-) delete mode 100644 translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md diff --git a/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md b/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md deleted file mode 100644 index 9feba99609..0000000000 --- a/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md +++ /dev/null @@ -1,218 +0,0 @@ -在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 ‘两个设备’ 上 - 第2部分 -================================================================================ -RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 - -创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 - -![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) - -在 Linux 中创建 RAID0 - -使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 - -#### 在 RAID 0 中条带是什么 #### - -条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 - -- RAID 0 性能较高。 -- 在 RAID 0 上,空间零浪费。 -- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 -- 写和读性能得以提高。 - -#### 要求 #### - -创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 - -在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 - -如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 - -- [Introduction to RAID and RAID Concepts][1] - -**我的服务器设置** - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.225 - Two Disks : 20 GB each - 
-这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 - -### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### - -1.在 Linux 上设置 RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 - - # yum clean all && yum update - # yum install mdadm -y - -![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) - -安装 mdadm 工具 - -### 第2步:检测并连接两个 20GB 的硬盘 ### - -2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 - - # ls -l /dev | grep sd - -![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) - -检查硬盘 - -3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 - - # mdadm --examine /dev/sd[b-c] - -![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) - -检查 RAID 设备 - -从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 - -### 第3步:创建 RAID 分区 ### - -4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 - - # fdisk /dev/sdb - -请按照以下说明创建分区。 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 只需按两次回车键选择默认值即可。 -- 然后,按 ‘P’ 来打印创建好的分区。 - -![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) - -创建分区 - -请按照以下说明将分区创建为 Linux 的 RAID 类型。 - -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 去修改分区。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) - -在 Linux 上创建 RAID 分区 - -**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 - -5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 - - # mdadm --examine /dev/sd[b-c] - # mdadm --examine /dev/sd[b-c]1 - -![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) - -验证 RAID 分区 - -### 第4步:创建 RAID md 设备 ### - -6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 - - # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 - # mdadm --create /dev/md0 --level=stripe --raid-devices=2 
/dev/sd[b-c]1 - -- -C – create -- -l – level -- -n – No of raid-devices - -7.一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 - - # cat /proc/mdstat - -![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) - -查看 RAID 级别 - - # mdadm -E /dev/sd[b-c]1 - -![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) - -查看 RAID 设备 - - # mdadm --detail /dev/md0 - -![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) - -查看 RAID 阵列 - -### 第5步:挂载 RAID 设备到文件系统 ### - -8.将 RAID 设备 /dev/md0 创建为 ext4 文件系统并挂载到 /mnt/raid0 下。 - - # mkfs.ext4 /dev/md0 - -![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) - -创建 ext4 文件系统 - -9. ext4 文件系统为 RAID 设备创建好后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 - - # mkdir /mnt/raid0 - # mount /dev/md0 /mnt/raid0/ - -10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 - - # df -h - -11.接下来,创建一个名为 ‘tecmint.txt’ 的文件挂载到 /mnt/raid0 下,为创建的文件添加一些内容,并查看文件和目录的内容。 - - # touch /mnt/raid0/tecmint.txt - # echo "Hi everyone how you doing ?" 
> /mnt/raid0/tecmint.txt - # cat /mnt/raid0/tecmint.txt - # ls -l /mnt/raid0/ - -![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) - -验证挂载的设备 - -12.一旦你验证挂载点后,同时将它添加到 /etc/fstab 文件中。 - - # vim /etc/fstab - -添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 - - /dev/md0 /mnt/raid0 ext4 defaults 0 0 - -![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) - -添加设备到 fstab 文件中 - -13.使用 mount ‘-a‘ 来检查 fstab 的条目是否有误。 - - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) - -检查 fstab 文件是否有误 - -### 第6步:保存 RAID 配置 ### - -14.最后,保存 RAID 配置到一个文件中,以供将来使用。同样,我们使用 ‘mdadm’ 命令带有 ‘-s‘ (scan) 和 ‘-v‘ (verbose) 选项,如图所示。 - - # mdadm -E -s -v >> /etc/mdadm.conf - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - # cat /etc/mdadm.conf - -![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) - -保存 RAID 配置 - -就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID0 级别。在接下来的文章中,我们将看到如何设置 RAID5。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid0-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From f3d587ede36d68eb11002fb77c4eb66db7af746d Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:33:36 +0800 Subject: [PATCH 050/697] Create Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md --- ...
(Mirroring) using 'Two Disks' in Linux.md | 217 ++++++++++++++++++ 1 file changed, 217 insertions(+) create mode 100644 translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md diff --git a/translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md new file mode 100644 index 0000000000..948e530ed8 --- /dev/null +++ b/translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md From 03f40b38babfc323a30e846016f4743afa8ca4d8 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:48:30 +0800 Subject: [PATCH 051/697] =?UTF-8?q?Create=20Part=202=20-=20Creating=20Soft?= =?UTF-8?q?ware=20RAID0=20(Stripe)=20on=20=E2=80=98Two=20Devices=E2=80=99?= =?UTF-8?q?=20Using=20=E2=80=98mdadm=E2=80=99=20Tool=20in=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Two
Devices’ Using ‘mdadm’ Tool in Linux.md | 218 ++++++++++++++++++ 1 file changed, 218 insertions(+) create mode 100644 translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md diff --git a/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md b/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md new file mode 100644 index 0000000000..9feba99609 --- /dev/null +++ b/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md From 81f06d85c3773c901edb69c80c182fd225a8f217 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:52:16 +0800 Subject: [PATCH 052/697] Create Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md --- ...
RAID, Concepts of RAID and RAID Levels.md | 146 ++++++++++++++++++ 1 file changed, 146 insertions(+) create mode 100644 translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md diff --git a/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md b/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md new file mode 100644 index 0000000000..8ca0ecbd7e --- /dev/null +++ b/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md @@ -0,0 +1,146 @@ + +RAID的级别和概念的介绍 - 第1部分 +================================================================================ +RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 + + +![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) + +在 Linux 中理解 RAID 的设置 + +RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 + +这个系列被命名为RAID的构建共包含9个部分包括以下主题。 + +- 第1部分:RAID的级别和概念的介绍 +- 第2部分:在Linux中如何设置 RAID0(条带化) +- 第3部分:在Linux中如何设置 RAID1(镜像化) +- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) +- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) +- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) +- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 +- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 +- 第9部分:在 Linux 中管理 RAID + +这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 + + +### 软件RAID和硬件RAID ### + +软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 + +硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 + +硬件 RAID 卡如下所示: + +![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) + +硬件RAID + +#### 精选的 RAID 概念 #### + +- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 +- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 +- 镜像被用于 RAID 1 和 
RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 +- 在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 +- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 + +RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 + +- RAID0 = 条带化 +- RAID1 = 镜像 +- RAID5 = 单个磁盘分布式奇偶校验 +- RAID6 = 双盘分布式奇偶校验 +- RAID10 = 镜像 + 条带。(嵌套RAID) + +RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 + +#### RAID 0(或)条带化 #### + +条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 + +假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 + +在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 + +- 高性能。 +- 在 RAID0 上零容量损失。 +- 零容错。 +- 写和读有很高的性能。 + +#### RAID1(或)镜像化 #### + +镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两组2TB的硬盘驱动器,我们总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成一个逻辑驱动器,我们只能看到逻辑驱动器有2TB。 + +当我们保存数据时,它将同时写入2TB驱动器中。创建 RAID 1 (镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以恢复 RAID 通过更换一个新的磁盘。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为其他的磁盘中也有相同的数据。所以是零数据丢失。 + +- 良好的性能。 +- 空间的一半将在总容量丢失。 +- 完全容错。 +- 重建会更快。 +- 写性能将是缓慢的。 +- 读将会很好。 +- 被操作系统和数据库使用的规模很小。 + +#### RAID 5(或)分布式奇偶校验 #### + +RAID 5 多用于企业的水平。 RAID 5 的工作通过分布式奇偶校验的方法。奇偶校验信息将被用于重建数据。它需要留下的正常驱动器上的信息去重建。驱动器故障时,这会保护我们的数据。 + +假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 + +- 性能卓越 +- 读速度将非常好。 +- 如果我们不使用硬件 RAID 控制器,写速度是缓慢的。 +- 从所有驱动器的奇偶校验信息中重建。 +- 完全容错。 +- 1个磁盘空间将用于奇偶校验。 +- 可以被用在文件服务器,Web服务器,非常重要的备份中。 + +#### RAID 6 两个分布式奇偶校验磁盘 #### + +RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 + +它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 + +- 性能不佳。 +- 读的性能很好。 +- 如果我们不使用硬件 RAID 控制器写的性能会很差。 +- 从2奇偶校验驱动器上重建。 +- 完全容错。 +- 2个磁盘空间将用于奇偶校验。 +- 可用于大型阵列。 +- 在备份和视频流中大规模使用。 + +#### RAID 10(或)镜像+条带 #### + +RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 
RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 + +假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 + +如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 + +同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 + +- 良好的读写性能。 +- 空间的一半将在总容量丢失。 +- 容错。 +- 从备份数据中快速重建。 +- 它的高性能和高可用性常被用于数据库的存储中。 + +### 结论 ### + +在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 的哪个级别。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 + +在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/understanding-raid-setup-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ From 58e17d89a5653abf7d8f4d7315e4f19a684099c0 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Tue, 4 Aug 2015 19:12:16 +0800 Subject: [PATCH 053/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完毕 --- ...y Using 'Explain Shell' Script in Linux.md | 121 ------------------ 1 file changed, 121 deletions(-) delete mode 100644 sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md diff --git a/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md deleted file mode 100644 index ab7572cd7a..0000000000 --- a/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md +++ /dev/null @@ -1,121 +0,0 @@ - -Translating by dingdongnigetou - -Understanding Shell Commands Easily Using “Explain Shell” Script in Linux 
-================================================================================ -While working on Linux platform all of us need help on shell commands, at some point of time. Although inbuilt help like man pages, whatis command is helpful, but man pages output are too lengthy and until and unless one has some experience with Linux, it is very difficult to get any help from massive man pages. The output of whatis command is rarely more than one line which is not sufficient for newbies. - -![Explain Shell Commands in Linux Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Explain-Shell-Commands-in-Linux-Shell.jpeg) - -Explain Shell Commands in Linux Shell - -There are third-party application like ‘cheat‘, which we have covered here “[Commandline Cheat Sheet for Linux Users][1]. Although Cheat is an exceptionally good application which shows help on shell command even when computer is not connected to Internet, it shows help on predefined commands only. - -There is a small piece of code written by Jackson which is able to explain shell commands within the bash shell very effectively and guess what the best part is you don’t need to install any third party package. He named the file containing this piece of code as `'explain.sh'`. - -#### Features of Explain Utility #### - -- Easy Code Embedding. -- No third-party utility needed to be installed. -- Output just enough information in course of explanation. -- Requires internet connection to work. -- Pure command-line utility. -- Able to explain most of the shell commands in bash shell. -- No root Account involvement required. - -**Prerequisite** - -The only requirement is `'curl'` package. In most of the today’s latest Linux distributions, curl package comes pre-installed, if not you can install it using package manager as shown below. 
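A quick way to check first whether curl is already present, before reaching for a package manager, is sketched here (the `install_hint` helper name and its messages are illustrative, not part of the original article):

```shell
# Hedged sketch: report how curl could be installed on this system.
# Only prints a hint; it does not install anything itself.
install_hint() {
  if command -v curl >/dev/null 2>&1; then
    echo "curl is already installed"
  elif command -v apt-get >/dev/null 2>&1; then
    echo "run: apt-get install curl"
  elif command -v yum >/dev/null 2>&1; then
    echo "run: yum install curl"
  else
    echo "install curl with your distribution's package manager"
  fi
}
install_hint
```

Each branch only prints a hint; the actual install commands, as in the article, still need root privileges.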
- - # apt-get install curl [On Debian systems] - # yum install curl [On CentOS systems] - -### Installation of explain.sh Utility in Linux ### - -We have to insert the below piece of code as it is in the `~/.bashrc` file. The code should be inserted for each user and each `.bashrc` file. It is suggested to insert the code to the user’s .bashrc file only and not in the .bashrc of root user. - -Notice the first line of code that starts with hash `(#)` is optional and added just to differentiate rest of the codes of .bashrc. - -# explain.sh marks the beginning of the codes, we are inserting in .bashrc file at the bottom of this file. - - # explain.sh begins - explain () { - if [ "$#" -eq 0 ]; then - while read -p "Command: " cmd; do - curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$cmd" - done - echo "Bye!" - elif [ "$#" -eq 1 ]; then - curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$1" - else - echo "Usage" - echo "explain interactive mode." - echo "explain 'cmd -o | ...' one quoted command to explain it." - fi - } - -### Working of explain.sh Utility ### - -After inserting the code and saving it, you must logout of the current session and login back to make the changes taken into effect. Every thing is taken care of by the ‘curl’ command which transfer the input command and flag that need explanation to the mankier server and then print just necessary information to the Linux command-line. Not to mention to use this utility you must be connected to internet always. - -Let’s test few examples of command which I don’t know the meaning with explain.sh script. - -**1. I forgot what ‘du -h‘ does. All I need to do is:** - - $ explain 'du -h' - -![Get Help on du Command](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-du-Command.png) - -Get Help on du Command - -**2. 
If you forgot what ‘tar -zxvf‘ does, you may simply do:** - - $ explain 'tar -zxvf' - -![Tar Command Help](http://www.tecmint.com/wp-content/uploads/2015/07/Tar-Command-Help.png) - -Tar Command Help - -**3. One of my friend often confuse the use of ‘whatis‘ and ‘whereis‘ command, so I advised him.** - -Go to Interactive Mode by simply typing explain command on the terminal. - - $ explain - -and then type the commands one after another to see what they do in one window, as: - - Command: whatis - Command: whereis - -![Whatis Whereis Commands Help](http://www.tecmint.com/wp-content/uploads/2015/07/Whatis-Whereis-Commands-Help.png) - -Whatis Whereis Commands Help - -To exit interactive mode he just need to do Ctrl + c. - -**4. You can ask to explain more than one command chained by pipeline.** - - $ explain 'ls -l | grep -i Desktop' - -![Get Help on Multiple Commands](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-Multiple-Commands.png) - -Get Help on Multiple Commands - -Similarly you can ask your shell to explain any shell command. All you need is a working Internet connection. The output is generated based upon the explanation needed from the server and hence the output result is not customizable. - -For me this utility is really helpful and it has been honored being added to my .bashrc. Let me know what is your thought on this project? How it can useful for you? Is the explanation satisfactory? - -Provide us with your valuable feedback in the comments below. Like and share us and help us get spread. 
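For reference, everything the explain() function shown above does over the network is a single GET request to the mankier.com endpoint hard-coded in the script. Below is a minimal offline sketch of how that request URL is put together; the `build_explain_url` helper is illustrative only (not part of explain.sh), and the real script reads the width from `tput cols` instead of taking it as an argument:

```shell
# Illustrative helper (not part of explain.sh): assemble the mankier.com
# explain API URL that the script queries. The width defaults to 80 so
# the function also works where no terminal is attached.
build_explain_url () {
    cols="${1:-80}"
    printf 'https://www.mankier.com/api/explain/?cols=%s\n' "$cols"
}

build_explain_url 120
```

The command to be explained is then appended by `curl -G --data-urlencode "q=$cmd"`, where `-G` turns the data into a GET query string and `--data-urlencode` takes care of encoding spaces and pipe characters.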
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/explain-shell-commands-in-the-linux-shell/ - -作者:[Avishek Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/cheat-command-line-cheat-sheet-for-linux-users/ From b8127cd50d3c05e938f45795e9f37e617112cc99 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Tue, 4 Aug 2015 19:17:36 +0800 Subject: [PATCH 054/697] =?UTF-8?q?Create=20=E3=80=90=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=E5=AE=8C=E6=AF=95=E3=80=9120150728=20Understanding=20Shell=20C?= =?UTF-8?q?ommands=20Easily=20Using=20'Explain=20Shell'=20Script=20in=20Li?= =?UTF-8?q?nux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Easily Using 'Explain Shell' Script in Linux.md | 118 ++++++++++++++++++ 1 file changed, 118 insertions(+) create mode 100644 translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md diff --git a/translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md new file mode 100644 index 0000000000..b8f993676c --- /dev/null +++ b/translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md @@ -0,0 +1,118 @@ +在Linux中利用"Explain Shell"脚本更容易地理解Shell命令 +================================================================================ +在某些时刻, 当我们在Linux平台上工作时我们所有人都需要shell命令的帮助信息。 尽管内置的帮助像man pages、whatis命令是有帮助的, 但man pages的输出非常冗长, 除非是个有linux经验的人,不然从大量的man pages中获取帮助信息是非常困难的,而whatis命令的输出很少超过一行, 这对初学者来说是不够的。 + +![Explain Shell Commands in Linux 
Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Explain-Shell-Commands-in-Linux-Shell.jpeg)
+
+在Linux Shell中解释Shell命令
+
+有一些第三方应用程序, 像我们在[Commandline Cheat Sheet for Linux Users][1]提及过的'cheat'命令。Cheat是个杰出的应用程序,即使计算机没有联网也能提供shell命令的帮助, 但是它仅限于预先定义好的命令。
+
+Jackson写了一小段代码,它能非常有效地在bash shell里面解释shell命令,可能最美之处就是你不需要安装第三方包了。他把包含这段代码的文件命名为“explain.sh”。
+
+#### Explain工具的特性 ####
+
+- 易嵌入代码。
+- 不需要安装第三方工具。
+- 在解释过程中输出恰到好处的信息。
+- 需要网络连接才能工作。
+- 纯命令行工具。
+- 可以解释bash shell里面的大部分shell命令。
+- 无需root账户参与。
+
+**先决条件**
+
+唯一的条件就是'curl'包了。 在如今大多数Linux发行版里面已经预安装了curl包, 如果没有你可以按照下面的命令来安装。
+
+    # apt-get install curl [On Debian systems]
+    # yum install curl [On CentOS systems]
+
+### 在Linux上安装explain.sh工具 ###
+
+我们要将下面这段代码插入'~/.bashrc'文件(LCTT注: 若没有该文件可以自己新建一个)中。我们必须为每个用户以及对应的'.bashrc'文件插入这段代码,笔者建议你不要加在root用户下。
+
+我们注意到.bashrc文件的第一行代码以(#)开始, 这个是可选的并且只是为了区分余下的代码。
+
+# explain.sh 标记代码的开始, 我们将代码插入.bashrc文件的底部。
+
+    # explain.sh begins
+    explain () {
+    if [ "$#" -eq 0 ]; then
+        while read -p "Command: " cmd; do
+            curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$cmd"
+        done
+        echo "Bye!"
+    elif [ "$#" -eq 1 ]; then
+        curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$1"
+    else
+        echo "Usage"
+        echo "explain interactive mode."
+        echo "explain 'cmd -o | ...' one quoted command to explain it."
+    fi
+    }
+
+### explain.sh工具的使用 ###
+
+在插入代码并保存之后,你必须退出当前的会话然后重新登录来使改变生效(LCTT注:你也可以直接使用命令“source ~/.bashrc”来让改变生效)。每件事情都是交由‘curl’命令处理, 它负责将需要解释的命令以及命令选项传送给mankier服务,然后将必要的信息打印到Linux命令行。不必说的就是使用这个工具你总是需要连接网络。
+
+让我们用explain.sh脚本测试几个笔者不懂的命令例子。
+
+**1.我忘了‘du -h’是干嘛用的, 我只需要这样做:**
+
+    $ explain 'du -h'
+
+![Get Help on du Command](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-du-Command.png)
+
+获得du命令的帮助
+
+**2.如果你忘了'tar -zxvf'的作用,你可以简单地如此做:**
+
+    $ explain 'tar -zxvf'
+
+![Tar Command Help](http://www.tecmint.com/wp-content/uploads/2015/07/Tar-Command-Help.png)
+
+Tar命令帮助
+
+**3.我的一个朋友经常对'whatis'以及'whereis'命令的使用感到困惑,所以我建议他:**
+
+在终端简单地敲下explain命令进入交互模式。
+
+    $ explain
+
+然后一个接着一个地输入命令,就能在一个窗口看到它们各自的作用:
+
+    Command: whatis
+    Command: whereis
+
+![Whatis Whereis Commands Help](http://www.tecmint.com/wp-content/uploads/2015/07/Whatis-Whereis-Commands-Help.png)
+
+Whatis/Whereis命令的帮助
+
+你只需要使用“Ctrl+c”就能退出交互模式。
+
+**4. 你可以通过管道来请求解释更多的命令。**
+
+    $ explain 'ls -l | grep -i Desktop'
+
+![Get Help on Multiple Commands](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-Multiple-Commands.png)
+
+获取多条命令的帮助
+
+同样地,你可以请求你的shell来解释任何shell命令。 前提是你需要一个可用的网络。输出的信息是基于解释的需要从服务器中生成的,因此输出的结果是不可定制的。
+
+对于我来说这个工具真的很有用并且它已经荣幸地添加在我的.bashrc文件中。你对这个项目有什么想法?它对你有用么?它的解释令你满意吗?请让我知道吧!
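上面的 explain() 函数是按参数个数("$#")来分发的:0 个参数进入交互模式,1 个参数解释单条命令,其余情况则打印用法。这也解释了为什么管道命令必须整体用引号包起来。下面是一个不需要联网的示意性小例子(其中的 count_args 函数只为演示而设,并非 explain.sh 的一部分):

```shell
# 示意用的辅助函数(并非 explain.sh 的一部分):打印收到的参数个数。
count_args () {
    echo "$#"
}

count_args du -h                      # 不加引号:2 个参数,explain 会进入打印用法的分支
count_args 'ls -l | grep -i Desktop'  # 加引号:只有 1 个参数,才会被当作一条命令去解释
```

可见只有加了引号,整条管道命令才会作为单个参数传给 explain。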
+ +请在下面评论为我们提供宝贵意见,喜欢并分享我们以及帮助我们得到传播。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/explain-shell-commands-in-the-linux-shell/ + +作者:[Avishek Kumar][a] +译者:[dingdongnigetou](https://github.com/dingdongnigetou) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/cheat-command-line-cheat-sheet-for-linux-users/ From a61412acbc3535975deb7581f54844a83d1f1e99 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Tue, 4 Aug 2015 19:18:09 +0800 Subject: [PATCH 055/697] =?UTF-8?q?Rename=20=E3=80=90=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=E5=AE=8C=E6=AF=95=E3=80=9120150728=20Understanding=20Shell=20C?= =?UTF-8?q?ommands=20Easily=20Using=20'Explain=20Shell'=20Script=20in=20Li?= =?UTF-8?q?nux.md=20to=2020150728=20Understanding=20Shell=20Commands=20Eas?= =?UTF-8?q?ily=20Using=20'Explain=20Shell'=20Script=20in=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ding Shell Commands Easily Using 'Explain Shell' Script in Linux.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename translated/tech/{【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md => 20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md} (100%) diff --git a/translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/translated/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md similarity index 100% rename from translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md rename to translated/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md From c823978d30497a34d2e73d8ace42140ddc4567c1 Mon Sep 17 
00:00:00 2001 From: wxy Date: Tue, 4 Aug 2015 22:32:09 +0800 Subject: [PATCH 056/697] PUB:20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu @geekpi --- ...ment and How Do You Enable It in Ubuntu.md | 21 +++++++++---------- 1 file changed, 10 insertions(+), 11 deletions(-) rename {translated/tech => published}/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md (67%) diff --git a/translated/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md b/published/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md similarity index 67% rename from translated/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md rename to published/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md index 05f07b74e6..86569b0128 100644 --- a/translated/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md +++ b/published/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md @@ -1,26 +1,25 @@ -什么是逻辑分区管理工具,它怎么在Ubuntu启用? +什么是逻辑分区管理 LVM ,如何在Ubuntu中使用? ================================================================================ -> 逻辑分区管理(LVM)是每一个主流Linux发行版都含有的磁盘管理选项。无论你是否需要设置存储池或者只需要动态创建分区,LVM就是你正在寻找的。 + +> 逻辑分区管理(LVM)是每一个主流Linux发行版都含有的磁盘管理选项。无论是你需要设置存储池,还是只想动态创建分区,那么LVM就是你正在寻找的。 ### 什么是 LVM? 
###
-逻辑分区管理是一个存在于磁盘/分区和操作系统之间的一个抽象层。在传统的磁盘管理中,你的操作系统寻找有哪些磁盘可用(/dev/sda、/dev/sdb等等)接着这些磁盘有哪些可用的分区(如/dev/sda1、/dev/sda2等等)。
+逻辑分区管理是一个存在于磁盘/分区和操作系统之间的一个抽象层。在传统的磁盘管理中,你的操作系统寻找有哪些磁盘可用(/dev/sda、/dev/sdb等等),并且这些磁盘有哪些可用的分区(如/dev/sda1、/dev/sda2等等)。
 
-在LVM下,磁盘和分区可以抽象成一个设备中含有多个磁盘和分区。你的操作系统将不会知道这些区别,因为LVM只会给操作系统展示你设置的卷组(磁盘)和逻辑卷(分区)
+在LVM下,磁盘和分区可以抽象成一个含有多个磁盘和分区的设备。你的操作系统将不会知道这些区别,因为LVM只会给操作系统展示你设置的卷组(磁盘)和逻辑卷(分区)
 
-,因此可以很容易地动态调整和创建新的磁盘和分区。除此之外,LVM带来你的文件系统不具备的功能。比如,ext3不支持实时快照,但是如果你正在使用LVM你可以不卸载磁盘的情况下做一个逻辑卷的快照。
+因为卷组和逻辑卷并不物理地对应到磁盘,因此可以很容易地动态调整和创建新的磁盘和分区。除此之外,LVM带来了你的文件系统所不具备的功能。比如,ext3不支持实时快照,但是如果你正在使用LVM你可以不卸载磁盘的情况下做一个逻辑卷的快照。
 
 ### 你什么时候该使用LVM? ###
 
-在使用LVM之前首先得考虑的一件事是你要用你的磁盘和分区完成什么。一些发行版如Fedora已经默认安装了LVM。
+在使用LVM之前首先得考虑的一件事是你要用你的磁盘和分区来做什么。注意,一些发行版如Fedora已经默认安装了LVM。
 
-如果你使用的是一台只有一块磁盘的Ubuntu笔记本电脑,并且你不需要像实时快照这样的扩展功能,那么你或许不需要LVM。如果I想要轻松地扩展或者想要将多块磁盘组成一个存储池,那么LVM或许正式你郑寻找的。
+如果你使用的是一台只有一块磁盘的Ubuntu笔记本电脑,并且你不需要像实时快照这样的扩展功能,那么你或许不需要LVM。如果你想要轻松地扩展或者想要将多块磁盘组成一个存储池,那么LVM或许正是你所寻找的。
 
 ### 在Ubuntu中设置LVM ###
 
-使用LVM首先要了解的一件事是没有简单的方法将已经存在传统的分区转换成逻辑分区。可以将它移到一个使用LVM的新分区下,但是这并不会在本篇中提到;反之我们将全新安装一台Ubuntu 10.10来设置LVM
-
-![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/ubuntu-10-banner.png)
+使用LVM首先要了解的一件事是,没有一个简单的方法可以将已有的传统分区转换成逻辑卷。可以将数据移到一个使用LVM的新分区下,但是这并不会在本篇中提到;在这里,我们将全新安装一台Ubuntu 10.10来设置LVM。(LCTT 译注:本文针对的是较老的版本,新的版本已经不需如此麻烦了)
 
 要使用LVM安装Ubuntu你需要使用另外的安装CD。从下面的链接中下载并烧录到CD中或者[使用unetbootin创建一个USB盘][1]。
 
@@ -64,7 +63,7 @@ via: http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-
 
 作者:[How-To Geek][a]
 译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
 
From 7c8fd52aa7704ddee0fa04a71fee6b5b06647180 Mon Sep 17 00:00:00 2001
From: wxy
Date: Tue, 4 Aug 2015 23:11:02 +0800
Subject: [PATCH 057/697] PUB:20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu

@ictlyh

---
 ...M (Logical Volume Management) in Ubuntu.md | 52
+++++++++---------- 1 file changed, 26 insertions(+), 26 deletions(-) rename {translated/tech => published}/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md (79%) diff --git a/translated/tech/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md b/published/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md similarity index 79% rename from translated/tech/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md rename to published/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md index c3a84f5fcf..76a2c8d224 100644 --- a/translated/tech/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md +++ b/published/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md @@ -1,20 +1,20 @@ -如何在 Ubuntu 中管理和使用 LVM(Logical Volume Management,逻辑卷管理) +如何在 Ubuntu 中管理和使用 逻辑卷管理 LVM ================================================================================ ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/652x202xbanner-1.png.pagespeed.ic.VGSxDeVS9P.png) 在我们之前的文章中,我们介绍了[什么是 LVM 以及能用 LVM 做什么][1],今天我们会给你介绍一些 LVM 的主要管理工具,使得你在设置和扩展安装时更游刃有余。 -正如之前所述,LVM 是介于你的操作系统和物理硬盘驱动器之间的抽象层。这意味着你的物理硬盘驱动器和分区不再依赖于他们所在的硬盘驱动和分区。而是,你的操作系统所见的硬盘驱动和分区可以是由任意数目的独立硬盘驱动汇集而成或是一个软件磁盘阵列。 +正如之前所述,LVM 是介于你的操作系统和物理硬盘驱动器之间的抽象层。这意味着你的物理硬盘驱动器和分区不再依赖于他们所在的硬盘驱动和分区。而是你的操作系统所见的硬盘驱动和分区可以是由任意数目的独立硬盘汇集而成的或是一个软件磁盘阵列。 要管理 LVM,这里有很多可用的 GUI 工具,但要真正理解 LVM 配置发生的事情,最好要知道一些命令行工具。这当你在一个服务器或不提供 GUI 工具的发行版上管理 LVM 时尤为有用。 LVM 的大部分命令和彼此都非常相似。每个可用的命令都由以下其中之一开头: -- Physical Volume = pv -- Volume Group = vg -- Logical Volume = lv +- Physical Volume (物理卷) = pv +- Volume Group (卷组)= vg +- Logical Volume (逻辑卷)= lv -物理卷命令用于在卷组中添加或删除硬盘驱动。卷组命令用于为你的逻辑卷操作更改显示的物理分区抽象集。逻辑卷命令会以分区形式显示卷组使得你的操作系统能使用指定的空间。 +物理卷命令用于在卷组中添加或删除硬盘驱动。卷组命令用于为你的逻辑卷操作更改显示的物理分区抽象集。逻辑卷命令会以分区形式显示卷组,使得你的操作系统能使用指定的空间。 ### 可下载的 LVM 备忘单 ### @@ -26,7 +26,7 @@ LVM 的大部分命令和彼此都非常相似。每个可用的命令都由以 ### 如何查看当前 LVM 信息 ### -你首先需要做的事情是检查你的 
LVM 设置。s 和 display 命令和物理卷(pv)、卷组(vg)以及逻辑卷(lv)一起使用,是一个找出当前设置好的开始点。 +你首先需要做的事情是检查你的 LVM 设置。s 和 display 命令可以和物理卷(pv)、卷组(vg)以及逻辑卷(lv)一起使用,是一个找出当前设置的好起点。 display 命令会格式化输出信息,因此比 s 命令更易于理解。对每个命令你会看到名称和 pv/vg 的路径,它还会给出空闲和已使用空间的信息。 @@ -40,17 +40,17 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对 #### 创建物理卷 #### -我们会从一个完全新的没有任何分区和信息的硬盘驱动开始。首先找出你将要使用的磁盘。(/dev/sda, sdb, 等) +我们会从一个全新的没有任何分区和信息的硬盘开始。首先找出你将要使用的磁盘。(/dev/sda, sdb, 等) > 注意:记住所有的命令都要以 root 身份运行或者在命令前面添加 'sudo' 。 fdisk -l -如果之前你的硬盘驱动从没有格式化或分区,在 fdisk 的输出中你很可能看到类似下面的信息。这完全正常,因为我们会在下面的步骤中创建需要的分区。 +如果之前你的硬盘从未格式化或分区过,在 fdisk 的输出中你很可能看到类似下面的信息。这完全正常,因为我们会在下面的步骤中创建需要的分区。 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/fdisk.png.pagespeed.ce.AmAEsxm-7Q.png) -我们的新磁盘位置是 /dev/sdb,让我们用 fdisk 命令在驱动上创建一个新的分区。 +我们的新磁盘位置是 /dev/sdb,让我们用 fdisk 命令在磁盘上创建一个新的分区。 这里有大量能创建新分区的 GUI 工具,包括 [Gparted][2],但由于我们已经打开了终端,我们将使用 fdisk 命令创建需要的分区。 @@ -62,9 +62,9 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/627x145xfdisk00.png.pagespeed.ic.I7S8bjoXQG.png) -以指定的顺序输入命令创建一个使用新硬盘驱动 100% 空间的主分区并为 LVM 做好了准备。如果你需要更改分区的大小或相应多个分区,我建议使用 GParted 或自己了解关于 fdisk 命令的使用。 +以指定的顺序输入命令创建一个使用新硬盘 100% 空间的主分区并为 LVM 做好了准备。如果你需要更改分区的大小或想要多个分区,我建议使用 GParted 或自己了解一下关于 fdisk 命令的使用。 -**警告:下面的步骤会格式化你的硬盘驱动。确保在进行下面步骤之前你的硬盘驱动中没有任何信息。** +**警告:下面的步骤会格式化你的硬盘驱动。确保在进行下面步骤之前你的硬盘驱动中没有任何有用的信息。** - n = 创建新分区 - p = 创建主分区 @@ -79,9 +79,9 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对 - t = 更改分区类型 - 8e = 更改为 LVM 分区类型 -核实并将信息写入硬盘驱动器。 +核实并将信息写入硬盘。 -- p = 查看分区设置使得写入更改到磁盘之前可以回看 +- p = 查看分区设置使得在写入更改到磁盘之前可以回看 - w = 写入更改到磁盘 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/560x339xfdisk03.png.pagespeed.ic.FC8foICZsb.png) @@ -102,7 +102,7 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/vgcreate.png.pagespeed.ce.fVLzSmPZou.png) -Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称,但建议标签以 vg 开头,以便后面你使用它时能意识到这是一个卷组。 +vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称,但建议标签以 vg 开头,以便后面你使用它时能意识到这是一个卷组。 #### 创建逻辑卷 #### @@ -112,7 +112,7 @@ Vgpool 
是新创建的卷组的名称。你可以使用任何你喜欢的名称 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/lvcreate.png.pagespeed.ce.vupLB-LJEW.png) --L 命令指定逻辑卷的大小,在该情况中是 3 GB,-n 命令指定卷的名称。 指定 vgpool 所以 lvcreate 命令知道从什么卷获取空间。 +-L 命令指定逻辑卷的大小,在该情况中是 3 GB,-n 命令指定卷的名称。 指定 vgpool 以便 lvcreate 命令知道从什么卷获取空间。 #### 格式化并挂载逻辑卷 #### @@ -131,7 +131,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称 #### 重新设置逻辑卷大小 #### -逻辑卷的一个好处是你能使你的共享物理变大或变小而不需要移动所有东西到一个更大的硬盘驱动。另外,你可以添加新的硬盘驱动并同时扩展你的卷组。或者如果你有一个不使用的硬盘驱动,你可以从卷组中移除它使得逻辑卷变小。 +逻辑卷的一个好处是你能使你的存储物理地变大或变小,而不需要移动所有东西到一个更大的硬盘。另外,你可以添加新的硬盘并同时扩展你的卷组。或者如果你有一个不使用的硬盘,你可以从卷组中移除它使得逻辑卷变小。 这里有三个用于使物理卷、卷组和逻辑卷变大或变小的基础工具。 @@ -147,9 +147,9 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称 按照上面创建新分区并更改分区类型为 LVM(8e) 的步骤安装一个新硬盘驱动。然后用 pvcreate 命令创建一个 LVM 能识别的物理卷。 -#### 添加新硬盘驱动到卷组 #### +#### 添加新硬盘到卷组 #### -要添加新的硬盘驱动到一个卷组,你只需要知道你的新分区,在我们的例子中是 /dev/sdc1,以及想要添加到的卷组的名称。 +要添加新的硬盘到一个卷组,你只需要知道你的新分区,在我们的例子中是 /dev/sdc1,以及想要添加到的卷组的名称。 这会添加新物理卷到已存在的卷组中。 @@ -189,7 +189,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称 1. 调整文件系统大小 (调整之前确保已经移动文件到硬盘驱动安全的地方) 1. 减小逻辑卷 (除了 + 可以扩展大小,你也可以用 - 压缩大小) -1. 用 vgreduce 从卷组中移除硬盘驱动 +1. 
用 vgreduce 从卷组中移除硬盘 #### 备份逻辑卷 #### @@ -197,7 +197,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/652x202xbanner-2.png.pagespeed.ic.VtOUuqYX1W.png) -LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该照片可以用于在不同的硬盘驱动上进行备份。生成一个备份的时候,任何需要添加到逻辑卷的新信息会如往常一样写入磁盘,但会跟踪更改使得原始快照永远不会损毁。 +LVM 获取快照的时候,会有一张和逻辑卷完全相同的“照片”,该“照片”可以用于在不同的硬盘上进行备份。生成一个备份的时候,任何需要添加到逻辑卷的新信息会如往常一样写入磁盘,但会跟踪更改使得原始快照永远不会损毁。 要创建一个快照,我们需要创建拥有足够空闲空间的逻辑卷,用于保存我们备份的时候会写入该逻辑卷的任何新信息。如果驱动并不是经常写入,你可以使用很小的一个存储空间。备份完成的时候我们只需要移除临时逻辑卷,原始逻辑卷会和往常一样。 @@ -209,7 +209,7 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/597x68xlvcreate-snapshot.png.pagespeed.ic.Rw2ivtcpPg.png) -这里我们创建了一个只有 512MB 的逻辑卷,因为驱动实际上并不会使用。512MB 的空间会保存备份时产生的任何新数据。 +这里我们创建了一个只有 512MB 的逻辑卷,因为该硬盘实际上并不会使用。512MB 的空间会保存备份时产生的任何新数据。 #### 挂载新快照 #### @@ -222,7 +222,7 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该 #### 复制快照和删除逻辑卷 #### -你剩下需要做的是从 /mnt/lvstuffbackup/ 中复制所有文件到一个外部的硬盘驱动或者打包所有文件到一个文件。 +你剩下需要做的是从 /mnt/lvstuffbackup/ 中复制所有文件到一个外部的硬盘或者打包所有文件到一个文件。 **注意:tar -c 会创建一个归档文件,-f 要指出归档文件的名称和路径。要获取 tar 命令的帮助信息,可以在终端中输入 man tar。** @@ -230,7 +230,7 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/627x67xsnapshot-backup.png.pagespeed.ic.tw-2AK_lfZ.png) -记住备份发生的时候写到 lvstuff 的所有文件都会在我们之前创建的临时逻辑卷中被跟踪。确保备份的时候你有足够的空闲空间。 +记住备份时候写到 lvstuff 的所有文件都会在我们之前创建的临时逻辑卷中被跟踪。确保备份的时候你有足够的空闲空间。 备份完成后,卸载卷并移除临时快照。 @@ -259,10 +259,10 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该 via: http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ 译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 -[1]:http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-how-do-you-enable-it-in-ubuntu/ +[1]:https://linux.cn/article-5953-1.html 
[2]:http://www.howtogeek.com/howto/17001/how-to-format-a-usb-drive-in-ubuntu-using-gparted/ [3]:http://www.howtogeek.com/howto/33552/htg-explains-which-linux-file-system-should-you-choose/ \ No newline at end of file From 2d90d07755d4b8bcc6786007e1324fdb48f49f22 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 5 Aug 2015 00:06:26 +0800 Subject: [PATCH 058/697] PUB:20150128 7 communities driving open source development @FSSlc --- ...unities driving open source development.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) rename {translated/talk => published}/20150128 7 communities driving open source development.md (58%) diff --git a/translated/talk/20150128 7 communities driving open source development.md b/published/20150128 7 communities driving open source development.md similarity index 58% rename from translated/talk/20150128 7 communities driving open source development.md rename to published/20150128 7 communities driving open source development.md index 2074ad9e23..1f4aac1a09 100644 --- a/translated/talk/20150128 7 communities driving open source development.md +++ b/published/20150128 7 communities driving open source development.md @@ -1,12 +1,12 @@ 7 个驱动开源发展的社区 ================================================================================ -不久前,开源模式还被成熟的工业厂商以怀疑的态度认作是叛逆小孩的玩物。如今,开放的倡议和基金会在一长列的供应商提供者的支持下正蓬勃发展,而他们将开源模式视作创新的关键。 +不久前,开源模式还被成熟的工业级厂商以怀疑的态度认作是叛逆小孩的玩物。如今,开源的促进会和基金会在一长列的供应商提供者的支持下正蓬勃发展,而他们将开源模式视作创新的关键。 ![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg) ### 技术的开放发展驱动着创新 ### -在过去的 20 几年间,技术的开放发展已被视作驱动创新的关键因素。即使那些以前将开源视作威胁的公司也开始接受这个观点 — 例如微软,如今它在一系列的开源倡议中表现活跃。到目前为止,大多数的开放发展都集中在软件方面,但甚至这个也正在改变,因为社区已经开始向开源硬件倡议方面聚拢。这里有 7 个成功地在硬件和软件方面同时促进和发展开源技术的组织。 +在过去的 20 几年间,技术的开源推进已被视作驱动创新的关键因素。即使那些以前将开源视作威胁的公司也开始接受这个观点 — 例如微软,如今它在一系列的开源的促进会中表现活跃。到目前为止,大多数的开源推进都集中在软件方面,但甚至这个也正在改变,因为社区已经开始向开源硬件倡议方面聚拢。这里介绍 7 个成功地在硬件和软件方面同时促进和发展开源技术的组织。 ### OpenPOWER 基金会 ### @@ -16,21 +16,21 @@ IBM 通过开放其基于 Power 
架构的硬件和软件技术,向使用 Power IP 的独立硬件产品提供许可证等方式为基金会的建立播下种子。如今超过 70 个成员共同协作来为基于 Linux 的数据中心提供自定义的开放服务器,组件和硬件。 -今年四月,在比最新基于 x86 系统快 50 倍的数据分析能力的新的 POWER8 处理器的服务器的基础上, OpenPOWER 推出了一个技术路线图。七月, IBM 和 Google 发布了一个固件堆栈。十月见证了 NVIDIA GPU 带来加速 POWER8 系统的能力和来自 Tyan 的第一个 OpenPOWER 参考服务器。 +去年四月,在比最新基于 x86 系统快 50 倍的数据分析能力的新的 POWER8 处理器的服务器的基础上, OpenPOWER 推出了一个技术路线图。七月, IBM 和 Google 发布了一个固件堆栈。去年十月见证了 NVIDIA GPU 带来加速 POWER8 系统的能力和来自 Tyan 的第一个 OpenPOWER 参考服务器。 ### Linux 基金会 ### ![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg) -于 2000 年建立的 [Linux 基金会][2] 如今成为掌控着历史上最大的开源协同发展成果,它有着超过 180 个合作成员和许多独立成员及学生成员。它赞助核心 Linux 开发者的工作并促进、保护和推进 Linux 操作系统和协作软件的开发。 +于 2000 年建立的 [Linux 基金会][2] 如今成为掌控着历史上最大的开源协同开发成果,它有着超过 180 个合作成员和许多独立成员及学生成员。它赞助 Linux 核心开发者的工作并促进、保护和推进 Linux 操作系统,并协调软件的协作开发。 -它最为成功的协作项目包括 Code Aurora Forum (一个拥有为移动无线产业服务的企业财团),MeeGo (一个为移动设备和 IVI (注:指的是车载消息娱乐设备,为 In-Vehicle Infotainment 的简称) 构建一个基于 Linux 内核的操作系统的项目) 和 Open Virtualization Alliance (开放虚拟化联盟,它促进自由和开源软件虚拟化解决方案的采用)。 +它最为成功的协作项目包括 Code Aurora Forum (一个拥有为移动无线产业服务的企业财团),MeeGo (一个为移动设备和 IVI [注:指的是车载消息娱乐设备,为 In-Vehicle Infotainment 的简称] 构建一个基于 Linux 内核的操作系统的项目) 和 Open Virtualization Alliance (开放虚拟化联盟,它促进自由和开源软件虚拟化解决方案的采用)。 ### 开放虚拟化联盟 ### ![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg) -[开放虚拟化联盟(OVA)][3] 的存在目的为:通过提供使用案例和对具有互操作性的通用接口和 API 的发展提供支持,来促进自由、开源软件的虚拟化解决方案例如 KVM 的采用。KVM 将 Linux 内核转变为一个虚拟机管理程序。 +[开放虚拟化联盟(OVA)][3] 的存在目的为:通过提供使用案例和对具有互操作性的通用接口和 API 的发展提供支持,来促进自由、开源软件的虚拟化解决方案,例如 KVM 的采用。KVM 将 Linux 内核转变为一个虚拟机管理程序。 如今, KVM 已成为和 OpenStack 共同使用的最为常见的虚拟机管理程序。 @@ -40,31 +40,31 @@ IBM 通过开放其基于 Power 架构的硬件和软件技术,向使用 Power 原本作为一个 IaaS(基础设施即服务) 产品由 NASA 和 Rackspace 于 2010 年启动,[OpenStack 基金会][4] 已成为最大的开源项目聚居地之一。它拥有超过 200 家公司成员,其中包括 AT&T, AMD, Avaya, Canonical, Cisco, Dell 和 HP。 -大约以 6 个月为一个发行周期,基金会的 OpenStack 项目被发展用来通过一个基于 Web 的仪表盘,命令行工具或一个 RESTful 风格的 API 来控制或调配流经一个数据中心的处理存储池和网络资源。至今为止,基金会支持的协作发展已经孕育出了一系列 OpenStack 组件,其中包括 OpenStack 
Compute(一个云计算网络控制器,它是一个 IaaS 系统的主要部分),OpenStack Networking(一个用以管理网络和 IP 地址的系统) 和 OpenStack Object Storage(一个可扩展的冗余存储系统)。 +大约以 6 个月为一个发行周期,基金会的 OpenStack 项目开发用于通过一个基于 Web 的仪表盘,命令行工具或一个 RESTful 风格的 API 来控制或调配流经一个数据中心的处理存储池和网络资源。至今为止,基金会支持的协同开发已经孕育出了一系列 OpenStack 组件,其中包括 OpenStack Compute(一个云计算网络控制器,它是一个 IaaS 系统的主要部分),OpenStack Networking(一个用以管理网络和 IP 地址的系统) 和 OpenStack Object Storage(一个可扩展的冗余存储系统)。 ### OpenDaylight ### ![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg) -作为来自 Linux 基金会的另一个协作项目, [OpenDaylight][5] 是一个由诸如 Dell, HP, Oracle 和 Avaya 等行业厂商于 2013 年 4 月建立的联合倡议。它的任务是建立一个由社区主导,开放,有工业支持的针对 Software-Defined Networking (SDN) 的包含代码和蓝图的框架。其思路是提供一个可直接部署的全功能 SDN 平台,而不需要其他组件,供应商可提供附件组件和增强组件。 +作为来自 Linux 基金会的另一个协作项目, [OpenDaylight][5] 是一个由诸如 Dell, HP, Oracle 和 Avaya 等行业厂商于 2013 年 4 月建立的联合倡议。它的任务是建立一个由社区主导、开源、有工业支持的针对软件定义网络( SDN: Software-Defined Networking)的包含代码和蓝图的框架。其思路是提供一个可直接部署的全功能 SDN 平台,而不需要其他组件,供应商可提供附件组件和增强组件。 ### Apache 软件基金会 ### ![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg) -[Apache 软件基金会 (ASF)][7] 是将近 150 个顶级项目的聚居地,这些项目涵盖从开源企业级自动化软件到与 Apache Hadoop 相关的分布式计算的整个生态系统。这些项目分发企业级、可免费获取的软件产品,而 Apache 协议则是为了让无论是商业用户还是个人用户更方便地部署 Apache 的产品。 +[Apache 软件基金会 (ASF)][7] 是将近 150 个顶级项目的聚居地,这些项目涵盖从开源的企业级自动化软件到与 Apache Hadoop 相关的分布式计算的整个生态系统。这些项目分发企业级、可免费获取的软件产品,而 Apache 协议则是为了让无论是商业用户还是个人用户更方便地部署 Apache 的产品。 -ASF 于 1999 年作为一个会员制,非盈利公司注册,其核心为精英 — 要成为它的成员,你必须首先在基金会的一个或多个协作项目中做出积极贡献。 +ASF 是 1999 年成立的一个会员制,非盈利公司,以精英为其核心 — 要成为它的成员,你必须首先在基金会的一个或多个协作项目中做出积极贡献。 ### 开放计算项目 ### ![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg) -作为 Facebook 重新设计其 Oregon 数据中心的副产物, [开放计算项目][7] 旨在发展针对数据中心的开放硬件解决方案。 OCP 是一个由廉价、无浪费的服务器,针对 Open Rack(为数据中心设计的机架标准,来让机架集成到数据中心的基础设施中) 的模块化 I/O 存储和一个相对 "绿色" 的数据中心设计方案等构成。 +作为 Facebook 重新设计其 Oregon 数据中心的副产物, [开放计算项目][7] 旨在发展针对数据中心的开源硬件解决方案。 OCP 是一个由廉价无浪费的服务器、针对 Open Rack(为数据中心设计的机架标准,来让机架集成到数据中心的基础设施中) 的模块化 I/O 存储和一个相对 "绿色" 
的数据中心设计方案等构成。 OCP 董事会成员包括来自 Facebook,Intel,Goldman Sachs,Rackspace 和 Microsoft 的代表。 -OCP 最近宣布了许可证的两个选择: 一个类似 Apache 2.0 的允许衍生工作的许可证和一个更规范的鼓励回滚到原有软件的更改的许可证。 +OCP 最近宣布了有两种可选的许可证: 一个类似 Apache 2.0 的允许衍生工作的许可证,和一个更规范的鼓励将更改回馈到原有软件的许可证。 -------------------------------------------------------------------------------- @@ -72,7 +72,7 @@ via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities 作者:[Thor Olavsrud][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 77b1de1f10b9e5d07426da4163dd6b57197078ee Mon Sep 17 00:00:00 2001 From: joeren Date: Wed, 5 Aug 2015 07:42:36 +0800 Subject: [PATCH 059/697] Update 20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md --- ...LVM on Ubuntu for Easy Partition Resizing and Snapshots.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md b/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md index 883c5e3203..2cb09193b6 100644 --- a/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md +++ b/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md @@ -1,4 +1,4 @@ - +Translating by GOLinux! 
How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots ================================================================================ ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035707bbd74.png.pagespeed.ic.9_yebxUF1C.png) @@ -65,4 +65,4 @@ via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition [3]:http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/ [4]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ [5]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ -[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ \ No newline at end of file +[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ From d8b9908e0f63f78c4156bedc13c45698bd21f205 Mon Sep 17 00:00:00 2001 From: XLCYun Date: Wed, 5 Aug 2015 08:29:58 +0800 Subject: [PATCH 060/697] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=8E=9F=E6=96=87=20?= =?UTF-8?q?=20XLCYun?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Right & Wrong - Page 4 - GNOME Settings.md | 52 ------------------- 1 file changed, 52 deletions(-) delete mode 100644 sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md diff --git a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md b/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md deleted file mode 100644 index bf233ce5d3..0000000000 --- a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md +++ /dev/null @@ -1,52 +0,0 @@ -Translating by XLCYun. 
-A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 4 - GNOME Settings -================================================================================ -### Settings ### - -There are a few specific KDE Control modules that I am going to pick at, mostly because they are so laughable horrible compared to their gnome counter-part that its honestly pathetic. - -First one up? Printers. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920) - -Gnome is on the left, KDE is on the right. You know what the difference is between the printer applet on the left, and the one on the right? When I opened up Gnome Control Center and hit "Printers" the applet popped up and nothing happened. When I opened up KDE System Settings and hit "Printers" I got a password prompt. Before I was even allowed to LOOK at the printers I had to give up ROOT'S password. - -Let me just re-iterate that. In this, the days of PolicyKit and Logind, I am still being asked for Root's password for what should be a sudo operation. I didn't even SETUP root's password when I installed the system. I had to drop down to Konsole and run 'sudo passwd root' so that I could GIVE root a password so that I could go back into System Setting's printer applet and then give up root's password to even LOOK at what printers were available. Once I did that I got prompted for root's password AGAIN when I hit "Add Printer" then I got prompted for root's password AGAIN after I went through and selected a printer and driver. Three times I got asked for ROOT'S password just to add a printer to the system. - -When I added a printer under Gnome I didn't get prompted for my SUDO password until I hit "Unlock" in the printer applet. I got asked once, then I never got asked again. KDE, I am begging you... Adopt Gnome's "Unlock" methodology. Do not prompt for a password until you really need one. 
Furthermore, whatever library is out there that allows for KDE applications to bypass PolicyKit / Logind (if its available) and prompt directly for root... Bin that code. If this was a multi-user system I either have to give up root's password, or be there every second of every day in order to put it in any time a user might have to update, change, or add a new printer. Both options are completely unacceptable. - -One more thing... - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920) - -Question to the forums: What looks cleaner to you? I had this realization when I was writing this article: Gnome's applet makes it very clear where any additional printers are going to go, they set aside a column on the left to list them. Before I added a second printer to KDE, and it suddenly grew a left side column, I had this nightmare-image in my head of the applet just shoving another icon into the screen and them being listed out like preview images in a folder of pictures. I was pleasantly surprised to see that I was wrong but the fact that the applet just 'grew' another column that didn't exist before and drastically altered its presentation is not really 'good' either. It's a design that's confusing, shocking, and non-intuitive. - -Enough about printers though... Next KDE System Setting that is up for my public stoning? Multimedia, Aka Phonon. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920) - -As always, Gnome's on the left, KDE is on the right. Let's just run through the Gnome setting first... The eyes go left to right, top to bottom, right? So let's do the same. First up: volume control slider. The blue hint against the empty bar with 100% clearly marked removes all confusion about which way is "volume up." 
Immediately after the slider is an easy On/Off toggle that functions a mute on/off. Points to Gnome for remembering what the volume was set to BEFORE I muted sound, and returning to that same level AFTER I press volume-up to un-mute. Kmixer, you amnesiac piece of crap, I wish I could say as much about you. - -Moving on! Tabbed options for Output, Input and Applications? With per application volume controls within easy reach? Gnome I love you more and more with every passing second. Balance options, sound profiles, and a clearly marked "Test Speakers" option. - -I'm not sure how this could have been implemented in a cleaner, more concise way. Yes, it's just a Gnome-ized Pavucontrol but I think that's the point. Pavucontrol got it mostly right to begin with, the Sound applet in Gnome Control Center just refines it slightly to make it even closer to perfect. - -Phonon, you're up. And let me start by saying: What the fsck am I looking at? -I- get that I am looking at the priority list for the audio devices on the system, but the way it is presented is a bit of a nightmare. Also where are the things the user probably cares about? A priority list is a great thing to have, it SHOULD be available, but it's something the user messes with once or twice and then never touches again. It's not important, or common, enough to warrant being front and center. Where's the volume slider? Where's per application controls? The things that users will be using more frequently? Well.. those are under Kmix, a separate program, with its own settings and configuration... not under the System Settings... which kind of makes System Settings a bit of a misnomer. And in that same vein, Let's hop over to network settings. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920) - -Presented above is the Gnome Network Settings. KDE's isn't included because of the reason I'm about to hit on. 
If you go to KDE's System Settings and hit any of the three options under the "Network" section you get tons of options: Bluetooth settings, default username and password for Samba shares (Seriously, "Connectivity" only has 2 options: Username and password for SMB shares. How the fsck does THAT deserve the all-inclusive title "Connectivity"?), controls for Browser Identification (which only work for Konqueror... a dead project), proxy settings, etc... Where are my wifi settings? They aren't there. Where are they? Well, they are in the network applet's private settings... not under Network Settings... - -KDE, you're killing me. You have "System Settings": USE IT! - --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4 - -作者:Eric Griffith -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 67e1d8db380b6fdbd959f30e473853725945904c Mon Sep 17 00:00:00 2001 From: XLCYun Date: Wed, 5 Aug 2015 08:36:39 +0800 Subject: [PATCH 061/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?=E7=AC=AC=E5=9B=9B=E8=8A=82?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Right & Wrong - Page 4 - GNOME Settings.md | 54 +++++++++++++++++++ 1 file changed, 54 insertions(+) create mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What
They Get Right & Wrong - Page 4 - GNOME Settings.md @@ -0,0 +1,54 @@ +将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第四节 - GNOME设置 +================================================================================ +### 设置(Settings) ### + +在这里我要挑一挑几个特定KDE控制模块的毛病,大部分原因是因为相比它们的对手GNOME来说,糟糕得太可笑,实话说,真是悲哀。 + +第一个接招的?打印机。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920) + +GNOME在左,KDE在右。你知道左边跟右边的打印程序有什么区别吗?当我在GNOME控制中心打开“打印机”时,程序窗口弹出来了,之后什么也没发生。而当我在KDE系统设置打开“打印机”时,我收到了一条密码提示。甚至我都没能看一眼打印机呢,我就必须先交出ROOT密码。 + +让我再重复一遍。在如今这个已经有了PolicyKit和Logind的时代,对一个本该用sudo完成的操作,我依然被要求提供ROOT密码。我安装系统的时候甚至都没设置root密码。所以我必须跑到Konsole去,运行'sudo passwd root'命令,这样我才能给root设一个密码,这样我才能回到系统设置中的打印程序,然后交出root密码,然后仅仅是看一看哪些打印机可用。完成了这些工作后,当我点击“添加打印机”时,我再次收到请求ROOT密码的提示,当我解决了它后再选择一个打印机和驱动时,我再次收到请求ROOT密码的提示。仅仅是为了添加一个打印机到系统我就收到三次密码请求。 + +而在GNOME下添加打印机,在点击打印机程序中的“解锁”之前,我没有收到任何请求SUDO密码的提示。整个过程我只被请求过一次,仅此而已。KDE,求你了……采用GNOME的“解锁”模式吧。不到一定需要的时候不要发出提示。还有,不管是哪个库,只要它允许KDE应用程序绕过PolicyKit/Logind(如果有的话)并直接请求ROOT权限……那就把那些代码扔掉吧。如果这是个多用户系统,那我要么必须交出ROOT密码,要么我必须时时刻刻呆着,以免有一个用户需要升级、更改或添加一个新的打印机。而这两种情况都是完全无法接受的。 + +还有一件事…… + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920) + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920) + +给论坛的问题:哪个看起来更简洁?我在写这篇文章时意识到:Gnome的打印机程序把额外打印机会出现的位置交代得非常清楚,它们在左边留出了一个竖直栏来列出这些打印机。而在我往KDE添加第二台打印机、它突然长出一个左边栏之前,我脑海中已经有了一个恐怖的画面:它会像图片文件夹显示预览图一样,直接把另外一个图标插到界面里去。我很高兴也很惊讶地看到我错了。但是,它直接“长出”另外一个从未存在的竖直栏、彻底改变了界面布局,这样也称不上“好”。终究还是一种令人困惑、奇怪而又不直观的设计。 + +打印机说得够多了……下一个接受我公开石刑的KDE系统设置是?多媒体,即Phonon。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920) + +一如既往,GNOME在左边,KDE在右边。让我们先看看GNOME的系统设置……视线从左到右,从上到下,对吧?来吧,就这样做。首先:音量控制滑条。滑条中的蓝色条与空白条百分百清晰地消除了哪边是“音量增加”的困惑。在音量控制条后马上就是一个On/Off开关,用来开关静音功能。Gnome的再次得分在于静音后能记住当前设置的音量,而在点击音量增加按钮取消静音后能回到原来设置的音量中来。Kmixer,你个健忘的垃圾,我真希望我对你也能说出同样的话。 + + 
+继续!输入、输出和应用程序的标签选项?每一个应用程序的音量随时可控?Gnome,每过一秒,我就更爱你一分。均衡选项、声音配置,以及清晰标出的“测试扬声器”选项。 + + + +我不清楚它还能否以一种更干净、更简洁的设计实现。是的,它只是一个Gnome化的Pavucontrol,但我想这就是重点所在。Pavucontrol在这方面几乎完全做对了,Gnome控制中心中的“声音”程序只是把它稍加打磨,使它向完美更进了一步。 + +Phonon,该你上了。但开始前我想说:我TM看到的是什么?我知道我看到的是系统中音频设备的优先级列表,但是它呈现的方式有点太坑。还有,那些用户可能关心的东西哪去了?拥有一个优先级列表当然很好,它也应该存在,但问题是优先级列表属于那种用户乱搞一两次之后就不会再碰的东西。它既不够重要,也不够常用,配不上这个居中的位置。音量控制滑块呢?对每个应用程序的音量控制功能呢?那些用户使用最频繁的东西呢?好吧,它们在Kmix里,一个独立的程序,拥有它自己的设置和配置……而不是在系统设置下……这样真的让“系统设置”这个词变得有点名不副实。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920) + +上面展示的是Gnome的网络设置。KDE的没有展示,原因就是我接下来要吐槽的内容了。如果你进入KDE的系统设置里,然后点击“网络”区域中三个选项中的任何一个,你会得到一大堆的选项:蓝牙设置,Samba分享的默认用户名和密码(说真的,“连通性(Connectivity)”下面只有两个选项:SMB的用户名和密码。TMD怎么就配得上“连通性”这么大的词?),浏览器身份验证控制(只有Konqueror能用……一个已经死掉的项目),代理设置,等等……我的wifi设置哪去了?它们没在这。哪去了?好吧,它们在网络程序自己的设置里面……而不是在网络设置里…… + +KDE,你这是要杀了我啊,你有“系统设置”当凶器,拿着它动手吧! + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4 + +作者:Eric Griffith -译者:[XLCYun](https://github.com/XLCYun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From e9411c42a3eeead7b498a8950f1625bcbb326c5b Mon Sep 17 00:00:00 2001 From: joeren Date: Wed, 5 Aug 2015 09:04:29 +0800 Subject: [PATCH 062/697] [Translated]201150318How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md --- ...r Easy Partition Resizing and Snapshots.md | 68 ------------------- ...r Easy Partition Resizing and Snapshots.md | 67 ++++++++++++++++++ 2 files changed, 67 insertions(+), 68 deletions(-) delete mode 100644 sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md create mode 100644 translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md diff --git a/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition
Resizing and Snapshots.md b/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md deleted file mode 100644 index 2cb09193b6..0000000000 --- a/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md +++ /dev/null @@ -1,68 +0,0 @@ -Translating by GOLinux! -How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots -================================================================================ -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035707bbd74.png.pagespeed.ic.9_yebxUF1C.png) - -Ubuntu’s installer offers an easy “Use LVM” checkbox. The description says it enables Logical Volume Management so you can take snapshots and more easily resize your hard disk partitions — here’s how to do that. - -LVM is a technology that’s similar to [RAID arrays][1] or [Storage Spaces on Windows][2] in some ways. While this technology is particularly useful on servers, it can be used on desktop PCs, too. - -### Should You Use LVM With Your New Ubuntu Installation? ### - -The first question is whether you even want to use LVM with your Ubuntu installation. Ubuntu makes this easy to enable with a quick click, but this option isn’t enabled by default. As the installer says, this allows you to resize partitions, create snapshots, merge multiple disks into a single logical volume, and so on — all while the system is running. Unlike with typical partitions, you don’t have to shut down your system, boot from a live CD or USB drive, and [resize your partitions while they aren’t in use][3]. - -To be perfectly honest, the average Ubuntu desktop user probably won’t realize whether they’re using LVM or not. But, if you want to do more advanced things later, LVM can help. LVM is potentially more complex, which could cause problems if you need to recover your data later — especially if you’re not that experienced with it. 
There shouldn’t be a noticeable performance penalty here — LVM is implemented right down in the Linux kernel. - -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035cbada6ae.png.pagespeed.ic.cnqyiKfCvi.png) - -### Logical Volume Management Explained ### - -We’ve previously [explained what LVM is][4]. In a nutshell, it provides a layer of abstraction between your physical disks and the partitions presented to your operating system. For example, your computer might have two hard drives inside it, each 1 TB in size. You’d have to have at least two partitions on these disks, and each of these partitions would be 1 TB in size. - -LVM provides a layer of abstraction over this. Instead of the traditional partition on a disk, LVM would treat the disks as two separate “physical volumes” after you initialize them. You could then create “logical volumes” based on these physical volumes. For example, you could combine those two 1 TB disks into a single 2 TB partition. Your operating system would just see a 2 TB volume, and LVM would deal with everything in the background. A group of physical volumes and logical volumes is known as a “volume group.” A typical system will just have a single volume group. - -This layer of abstraction makes it possible to easily resize partitions, combine multiple disks into a single volume, and even take “snapshots” of a partition’s file system while it’s running, all without unmounting it. - -Note that merging multiple disks into a single volume can be a bad idea if you’re not creating backups. It’s like with RAID 0 — if you combine two 1 TB volumes into a single 2 TB volume, you could lose important data on the volume if just one of your hard disks fails. Backups are crucial if you go this route. 
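At the terminal, the two-disk example above corresponds to just a few lvm2 commands. The following is only an illustrative sketch — the device names (/dev/sdb, /dev/sdc) and the "data" volume names are placeholder assumptions, the commands require root and destroy whatever is on those disks, and spanning two disks carries the RAID 0-style risk just described:

```shell
# Turn each raw disk into an LVM "physical volume"
sudo pvcreate /dev/sdb /dev/sdc

# Pool both physical volumes into a single "volume group"
sudo vgcreate data-vg /dev/sdb /dev/sdc

# Carve one "logical volume" out of all the free space in the group
sudo lvcreate -n data-lv -l 100%FREE data-vg

# Format and mount it like any ordinary block device
sudo mkfs.ext4 /dev/data-vg/data-lv
sudo mount /dev/data-vg/data-lv /mnt/data
```

The operating system then sees one 2 TB ext4 volume; LVM quietly maps it onto the two underlying disks.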
- -### Graphical Utilities for Managing Your LVM Volumes ### - -Traditionally, [LVM volumes are managed with Linux terminal commands][5]. These will work for you on Ubuntu, but there’s an easier, graphical method anyone can take advantage of. If you’re a Linux user used to using GParted or a similar partition manager, don’t bother — GParted doesn’t have support for LVM disks. - -Instead, you can use the Disks utility included along with Ubuntu for this. This utility is also known as GNOME Disk Utility, or Palimpsest. Launch it by clicking the icon on the dash, searching for Disks, and pressing Enter. Unlike GParted, the Disks utility will display your LVM partitions under “Other Devices,” so you can format them and adjust other options if you need to. This utility will work from a live CD or USB drive, too. - -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550361b3772f7.png.pagespeed.ic.nZWwLJUywR.png) - -Unfortunately, the Disks utility doesn’t include support for taking advantage of LVM’s most powerful features. There are no options for managing your volume groups, extending partitions, or taking snapshots. You could do that from the terminal, but you don’t have to. Instead, you can open the Ubuntu Software Center, search for LVM, and install the Logical Volume Management tool. You could also just run the **sudo apt-get install system-config-lvm** command in a terminal window. After it’s installed, you can open the Logical Volume Management utility from the dash. - -This graphical configuration tool was made by Red Hat. It’s a bit dated, but it’s the only graphical way to do this stuff without resorting to terminal commands. - -Let’s say you wanted to add a new physical volume to your volume group. You’d open the tool, select the new disk under Uninitialized Entries, and click the “Initialize Entry” button. 
You’d then find the new physical volume under Unallocated Volumes, and you could use the “Add to existing Volume Group” button to add it to the “ubuntu-vg” volume group Ubuntu created during the installation process. - -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363106789c.png.pagespeed.ic.drVInt3Weq.png) - -The volume group view shows you a visual overview of your physical volumes and logical volumes. Here, we have two physical partitions across two separate hard drives. We have a swap partition and a root partition, just as Ubuntu sets up its partitioning scheme by default. Because we’ve added a second physical partition from another drive, there’s now a good chunk of unused space. - -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363f631c19.png.pagespeed.ic.54E_Owcq8y.png) - -To expand a logical partition into the physical space, you could select it under Logical View, click Edit Properties, and modify the size to grow the partition. You could also shrink it from here. - -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55036893712d3.png.pagespeed.ic.ce7y_Mt0uF.png) - -The other options in system-config-lvm allow you to set up snapshots and mirroring. You probably won’t need these features on a typical desktop, but they’re available graphically here. Remember, you can also [do all of this with terminal commands][6]. 
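For readers who do want the terminal equivalents, the snapshot and resize features mentioned above map roughly onto the lvm2 commands below. This is a hedged sketch: it assumes Ubuntu's default "ubuntu-vg" volume group, a logical volume named "root", an ext4 filesystem, and enough free space in the volume group — names on your system may differ.

```shell
# Take a 5 GB copy-on-write snapshot of the running root volume
sudo lvcreate --snapshot --size 5G --name root-snap /dev/ubuntu-vg/root

# Grow the root logical volume by 10 GB, then grow the ext4 filesystem into it
sudo lvextend --size +10G /dev/ubuntu-vg/root
sudo resize2fs /dev/ubuntu-vg/root

# Remove the snapshot once you are happy with the result
sudo lvremove /dev/ubuntu-vg/root-snap
```

(lvextend's -r flag can combine the extend and filesystem-resize steps into one command.)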
- --------------------------------------------------------------------------------- - -via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition-resizing-and-snapshots/ - -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:http://www.howtogeek.com/162676/how-to-use-multiple-disks-intelligently-an-introduction-to-raid/ -[2]:http://www.howtogeek.com/109380/how-to-use-windows-8s-storage-spaces-to-mirror-combine-drives/ -[3]:http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/ -[4]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ -[5]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ -[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ diff --git a/translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md b/translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md new file mode 100644 index 0000000000..2e66e27f31 --- /dev/null +++ b/translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md @@ -0,0 +1,67 @@ +Ubuntu上使用LVM轻松调整分区并制作快照 +================================================================================ +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035707bbd74.png.pagespeed.ic.9_yebxUF1C.png) + +Ubuntu的安装器提供了一个轻松“使用LVM”的复选框。说明中说,它启用了逻辑卷管理,因此你可以制作快照,并更容易地调整硬盘分区大小——这里将为大家讲述如何完成这些操作。 + +LVM是一种技术,某种程度上和[RAID阵列][1]或[Windows上的存储空间][2]类似。虽然该技术在服务器上更为有用,但是它也可以在桌面端PC上使用。 + +### 你应该在新安装Ubuntu时使用LVM吗? 
### + +第一个问题是,你是否想要在安装Ubuntu时使用LVM?如果是,那么Ubuntu让这一切变得很简单,只需要轻点鼠标就可以完成,但是该选项默认是不启用的。正如安装器所说的,它允许你调整分区、创建快照、将多个磁盘合并到一个逻辑卷等等——所有这一切都可以在系统运行时完成。不同于传统分区,你不需要关掉你的系统,从Live CD或USB驱动器启动,再[在分区不使用时调整它们的大小][3]。 + +完全坦率地说,普通Ubuntu桌面用户可能不会意识到他们是否正在使用LVM。但是,如果你想要在今后做一些更高深的事情,那么LVM就会有所帮助了。LVM可能更复杂,如果你今后需要恢复数据——尤其是在你经验不足时——它可能会带来麻烦。这里不会有明显的性能损失——LVM是直接在Linux内核中实现的。 + +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035cbada6ae.png.pagespeed.ic.cnqyiKfCvi.png) + +### 逻辑卷管理说明 ### + +前面,我们已经[说明了何谓LVM][4]。概括来讲,它在你的物理磁盘和呈现在你系统中的分区之间提供了一个抽象层。例如,你的计算机可能装有两个硬盘驱动器,它们的大小都是 1 TB。你必须得在这些磁盘上至少分两个区,每个区大小 1 TB。 + +LVM就在这些分区上提供了一个抽象层。不同于磁盘上的传统分区,LVM将在你对这些磁盘初始化后,将它们当作两个独立的“物理卷”来对待。然后,你就可以基于这些物理卷创建“逻辑卷”。例如,你可以将这两个 1 TB 的磁盘组合成一个 2 TB 的分区,你的系统将只看到一个 2 TB 的卷,而LVM将会在后台处理这一切。一组物理卷以及一组逻辑卷被称之为“卷组”,一个典型的系统只会有一个卷组。 + +该抽象层使得调整分区、将多个磁盘组合成单个卷、甚至为一个运行着的分区的文件系统创建“快照”变得十分简单,而完成所有这一切都无需先卸载分区。 + +注意,如果你没有创建备份,那么将多个磁盘合并成一个卷将会是个糟糕的想法。它就像RAID 0——如果你将两个 1 TB 的卷组合成一个 2 TB 的卷,只要其中一个硬盘出现故障,你就会丢失该卷上的重要数据。所以,如果你要走这条路,那么备份就极其重要。 + +### 管理LVM卷的图形化工具 ### + +通常,[LVM通过Linux终端命令来管理][5]。这在Ubuntu上也行得通,但是有个更简单的图形化方法可供大家采用。如果你是一个习惯使用GParted或者类似分区管理器的Linux用户,那就别费心了——GParted根本不支持LVM磁盘。 + +然而,你可以使用Ubuntu附带的磁盘工具。该工具也被称之为GNOME磁盘工具,或者叫Palimpsest。点击停靠盘上的图标来开启它吧,搜索“磁盘”然后敲击回车。不像GParted,该磁盘工具将会在“其它设备”下显示LVM分区,因此你可以根据需要格式化这些分区,也可以调整其它选项。该工具在Live CD或USB驱动器下也可以使用。 + +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550361b3772f7.png.pagespeed.ic.nZWwLJUywR.png) + +不幸的是,该磁盘工具不支持LVM的大多数强大特性,没有管理卷组、扩展分区,或者创建快照等选项。对于这些操作,你可以通过终端来实现,但是你没有那个必要。相反,你可以打开Ubuntu软件中心,搜索关键字LVM,然后安装逻辑卷管理工具,你也可以在终端窗口中运行**sudo apt-get install system-config-lvm**命令来安装它。安装完之后,你就可以从停靠盘上打开逻辑卷管理工具了。 + +这个图形化配置工具是由红帽公司开发的,虽然有点陈旧了,但却是唯一的图形化方式,你可以通过它来完成上述操作,将那些终端命令抛诸脑后了。 + +比如说,你想要添加一个新的物理卷到卷组中。你可以打开该工具,选择未初始化条目下的新磁盘,然后点击“初始化条目”按钮。然后,你就可以在未分配卷下找到新的物理卷了,你可以使用“添加到现存卷组”按钮来将它添加到“ubuntu-vg”卷组,这是Ubuntu在安装过程中创建的卷组。 + +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363106789c.png.pagespeed.ic.drVInt3Weq.png) + 
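上面“初始化条目”和“添加到现存卷组”这两步图形操作,在终端下大致对应下面几条 lvm2 命令。以下只是一个示意(假设新磁盘的设备名是 /dev/sdb,“ubuntu-vg”是 Ubuntu 安装时默认创建的卷组名;命令需要 root 权限,并会清除该磁盘上的已有数据):

```shell
# 将新磁盘初始化为 LVM 物理卷(对应图形界面中的“初始化条目”)
sudo pvcreate /dev/sdb

# 把该物理卷并入现有的 ubuntu-vg 卷组(对应“添加到现存卷组”)
sudo vgextend ubuntu-vg /dev/sdb

# 查看卷组中新增的可用空间
sudo vgdisplay ubuntu-vg
```

执行 vgextend 之后,新磁盘的空间就会成为卷组中的未分配空间,可用于扩展现有的逻辑卷。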
+卷组视图会列出你所有物理卷和逻辑卷的总览。这里,我们有两个横跨两个独立硬盘驱动器的物理分区,我们有一个交换分区和一个根分区,就像Ubuntu默认设置的分区图表。由于我们从另一个驱动器添加了第二个物理分区,现在那里有大量未使用空间。 + +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363f631c19.png.pagespeed.ic.54E_Owcq8y.png) + +要扩展逻辑分区到物理空间,你可以在逻辑视图下选择它,点击编辑属性,然后修改大小来扩大分区。你也可以在这里缩减分区。 + +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55036893712d3.png.pagespeed.ic.ce7y_Mt0uF.png) + +system-config-lvm的其它选项允许你设置快照和镜像。对于传统桌面而言,你或许不需要这些特性,但是在这里也可以通过图形化处理。记住,你也可以[使用终端命令完成这一切][6]。 + +-------------------------------------------------------------------------------- + +via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition-resizing-and-snapshots/ + +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[1]:http://www.howtogeek.com/162676/how-to-use-multiple-disks-intelligently-an-introduction-to-raid/ +[2]:http://www.howtogeek.com/109380/how-to-use-windows-8s-storage-spaces-to-mirror-combine-drives/ +[3]:http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/ +[4]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ +[5]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ +[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ From c9ee20d2d32beefb6ae69c12d2ae696ce352213b Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 5 Aug 2015 13:38:18 +0800 Subject: [PATCH 063/697] Delete 20150717 How to monitor NGINX- Part 1.md --- .../20150717 How to monitor NGINX- Part 1.md | 409 ------------------ 1 file changed, 409 deletions(-) delete mode 100644 sources/tech/20150717 How to monitor NGINX- Part 1.md diff --git a/sources/tech/20150717 How to monitor NGINX- Part 1.md b/sources/tech/20150717 How to monitor NGINX- Part 1.md deleted file mode 
100644 index 690ab192ba..0000000000 --- a/sources/tech/20150717 How to monitor NGINX- Part 1.md +++ /dev/null @@ -1,409 +0,0 @@ -translation by strugglingyouth -How to monitor NGINX - Part 1 -================================================================================ -![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) - -### What is NGINX? ### - -[NGINX][1] (pronounced “engine X”) is a popular HTTP server and reverse proxy server. As an HTTP server, NGINX serves static content very efficiently and reliably, using relatively little memory. As a [reverse proxy][2], it can be used as a single, controlled point of access for multiple back-end servers or for additional applications such as caching and load balancing. NGINX is available as a free, open-source product or in a more full-featured, commercially distributed version called NGINX Plus. - -NGINX can also be used as a mail proxy and a generic TCP proxy, but this article does not directly address NGINX monitoring for these use cases. - -### Key NGINX metrics ### - -By monitoring NGINX you can catch two categories of issues: resource issues within NGINX itself, and also problems developing elsewhere in your web infrastructure. Some of the metrics most NGINX users will benefit from monitoring include **requests per second**, which provides a high-level view of combined end-user activity; **server error rate**, which indicates how often your servers are failing to process seemingly valid requests; and **request processing time**, which describes how long your servers are taking to process client requests (and which can point to slowdowns or other problems in your environment). 
- -More generally, there are at least three key categories of metrics to watch: - -- Basic activity metrics -- Error metrics -- Performance metrics - -Below we’ll break down a few of the most important NGINX metrics in each category, as well as metrics for a fairly common use case that deserves special mention: using NGINX Plus for reverse proxying. We will also describe how you can monitor all of these metrics with your graphing or monitoring tools of choice. - -This article references metric terminology [introduced in our Monitoring 101 series][3], which provides a framework for metric collection and alerting. - -#### Basic activity metrics #### - -Whatever your NGINX use case, you will no doubt want to monitor how many client requests your servers are receiving and how those requests are being processed. - -NGINX Plus can report basic activity metrics exactly like open-source NGINX, but it also provides a secondary module that reports metrics slightly differently. We discuss open-source NGINX first, then the additional reporting capabilities provided by NGINX Plus. - -**NGINX** - -The diagram below shows the lifecycle of a client connection and how the open-source version of NGINX collects metrics during a connection. - -![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png) - -Accepts, handled, and requests are ever-increasing counters. Active, waiting, reading, and writing grow and shrink with request volume. - -注:表格 - ---- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Metric type |
|------|-------------|-------------|
| accepts | Count of client connections attempted by NGINX | Resource: Utilization |
| handled | Count of successful client connections | Resource: Utilization |
| active | Currently active client connections | Resource: Utilization |
| dropped (calculated) | Count of dropped connections (accepts – handled) | Work: Errors* |
| requests | Count of client requests | Work: Throughput |

*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
- -The **accepts** counter is incremented when an NGINX worker picks up a request for a connection from the OS, whereas **handled** is incremented when the worker actually gets a connection for the request (by establishing a new connection or reusing an open one). These two counts are usually the same—any divergence indicates that connections are being **dropped**, often because a resource limit, such as NGINX’s [worker_connections][4] limit, has been reached. - -Once NGINX successfully handles a connection, the connection moves to an **active** state, where it remains as client requests are processed: - -Active state - -- **Waiting**: An active connection may also be in a Waiting substate if there is no active request at the moment. New connections can bypass this state and move directly to Reading, most commonly when using “accept filter” or “deferred accept”, in which case NGINX does not receive notice of work until it has enough data to begin working on the response. Connections will also be in the Waiting state after sending a response if the connection is set to keep-alive. -- **Reading**: When a request is received, the connection moves out of the waiting state, and the request itself is counted as Reading. In this state NGINX is reading a client request header. Request headers are lightweight, so this is usually a fast operation. -- **Writing**: After the request is read, it is counted as Writing, and remains in that state until a response is returned to the client. That means that the request is Writing while NGINX is waiting for results from upstream systems (systems “behind” NGINX), and while NGINX is operating on the response. Requests will often spend the majority of their time in the Writing state. - -Often a connection will only support one request at a time. In this case, the number of Active connections == Waiting connections + Reading requests + Writing requests. 
However, the newer SPDY and HTTP/2 protocols allow multiple concurrent requests/responses to be multiplexed over a connection, so Active may be less than the sum of Waiting, Reading, and Writing. (As of this writing, NGINX does not support HTTP/2, but expects to add support during 2015.) - -**NGINX Plus** - -As mentioned above, all of open-source NGINX’s metrics are available within NGINX Plus, but Plus can also report additional metrics. The section covers the metrics that are only available from NGINX Plus. - -![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png) - -Accepted, dropped, and total are ever-increasing counters. Active, idle, and current track the current number of connections or requests in each of those states, so they grow and shrink with request volume. - -注:表格 - ---- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Metric type |
|------|-------------|-------------|
| accepted | Count of client connections attempted by NGINX | Resource: Utilization |
| dropped | Count of dropped connections | Work: Errors* |
| active | Currently active client connections | Resource: Utilization |
| idle | Client connections with zero current requests | Resource: Utilization |
| total | Count of client requests | Work: Throughput |

*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
- -The **accepted** counter is incremented when an NGINX Plus worker picks up a request for a connection from the OS. If the worker fails to get a connection for the request (by establishing a new connection or reusing an open one), then the connection is dropped and **dropped** is incremented. Ordinarily connections are dropped because a resource limit, such as NGINX Plus’s [worker_connections][4] limit, has been reached. - -**Active** and **idle** are the same as “active” and “waiting” states in open-source NGINX as described [above][5], with one key exception: in open-source NGINX, “waiting” falls under the “active” umbrella, whereas in NGINX Plus “idle” connections are excluded from the “active” count. **Current** is the same as the combined “reading + writing” states in open-source NGINX. - -**Total** is a cumulative count of client requests. Note that a single client connection can involve multiple requests, so this number may be significantly larger than the cumulative number of connections. In fact, (total / accepted) yields the average number of requests per connection. - -**Metric differences between Open-Source and Plus** - -注:表格 - --- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| NGINX (open-source) | NGINX Plus |
|---------------------|------------|
| accepts | accepted |
| dropped must be calculated | dropped is reported directly |
| reading + writing | current |
| waiting | idle |
| active (includes “waiting” states) | active (excludes “idle” states) |
| requests | total |
- -**Metric to alert on: Dropped connections** - -The number of connections that have been dropped is equal to the difference between accepts and handled (NGINX) or is exposed directly as a standard metric (NGINX Plus). Under normal circumstances, dropped connections should be zero. If your rate of dropped connections per unit time starts to rise, look for possible resource saturation. - -![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png) - -**Metric to alert on: Requests per second** - -Sampling your request data (**requests** in open-source, or **total** in Plus) with a fixed time interval provides you with the number of requests you’re receiving per unit of time—often minutes or seconds. Monitoring this metric can alert you to spikes in incoming web traffic, whether legitimate or nefarious, or sudden drops, which are usually indicative of problems. A drastic change in requests per second can alert you to problems brewing somewhere in your environment, even if it cannot tell you exactly where those problems lie. Note that all requests are counted the same, regardless of their URLs. - -![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png) - -**Collecting activity metrics** - -Open-source NGINX exposes these basic server metrics on a simple status page. Because the status information is displayed in a standardized form, virtually any graphing or monitoring tool can be configured to parse the relevant data for analysis, visualization, or alerting. NGINX Plus provides a JSON feed with much richer data. Read the companion post on [NGINX metrics collection][6] for instructions on enabling metrics collection. - -#### Error metrics #### - -注:表格 - ----- - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Metric type | Availability |
|------|-------------|-------------|--------------|
| 4xx codes | Count of client errors | Work: Errors | NGINX logs, NGINX Plus |
| 5xx codes | Count of server errors | Work: Errors | NGINX logs, NGINX Plus |
- -NGINX error metrics tell you how often your servers are returning errors instead of producing useful work. Client errors are represented by 4xx status codes, server errors with 5xx status codes. - -**Metric to alert on: Server error rate** - -Your server error rate is equal to the number of 5xx errors divided by the total number of [status codes][7] (1xx, 2xx, 3xx, 4xx, 5xx), per unit of time (often one to five minutes). If your error rate starts to climb over time, investigation may be in order. If it spikes suddenly, urgent action may be required, as clients are likely to report errors to the end user. - -![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png) - -A note on client errors: while it is tempting to monitor 4xx, there is limited information you can derive from that metric since it measures client behavior without offering any insight into particular URLs. In other words, a change in 4xx could be noise, e.g. web scanners blindly looking for vulnerabilities. - -**Collecting error metrics** - -Although open-source NGINX does not make error rates immediately available for monitoring, there are at least two ways to capture that information: - -- Use the expanded status module available with commercially supported NGINX Plus -- Configure NGINX’s log module to write response codes in access logs - -Read the companion post on NGINX metrics collection for detailed instructions on both approaches. - -#### Performance metrics #### - -注:表格 - ----- - - - - - - - - - - - - - - - - -
| Name | Description | Metric type | Availability |
|------|-------------|-------------|--------------|
| request time | Time to process each request, in seconds | Work: Performance | NGINX logs |
- -**Metric to alert on: Request processing time** - -The request time metric logged by NGINX records the processing time for each request, from the reading of the first client bytes to fulfilling the request. Long response times can point to problems upstream. - -**Collecting processing time metrics** - -NGINX and NGINX Plus users can capture data on processing time by adding the $request_time variable to the access log format. More details on configuring logs for monitoring are available in our companion post on [NGINX metrics collection][8]. - -#### Reverse proxy metrics #### - -注:表格 - ----- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Metric type | Availability |
|------|-------------|-------------|--------------|
| Active connections by upstream server | Currently active client connections | Resource: Utilization | NGINX Plus |
| 5xx codes by upstream server | Server errors | Work: Errors | NGINX Plus |
| Available servers per upstream group | Servers passing health checks | Resource: Availability | NGINX Plus |
- -One of the most common ways to use NGINX is as a [reverse proxy][9]. The commercially supported NGINX Plus exposes a large number of metrics about backend (or “upstream”) servers, which are relevant to a reverse proxy setup. This section highlights a few of the key upstream metrics that are available to users of NGINX Plus. - -NGINX Plus segments its upstream metrics first by group, and then by individual server. So if, for example, your reverse proxy is distributing requests to five upstream web servers, you can see at a glance whether any of those individual servers is overburdened, and also whether you have enough healthy servers in the upstream group to ensure good response times. - -**Activity metrics** - -The number of **active connections per upstream server** can help you verify that your reverse proxy is properly distributing work across your server group. If you are using NGINX as a load balancer, significant deviations in the number of connections handled by any one server can indicate that the server is struggling to process requests in a timely manner or that the load-balancing method (e.g., [round-robin or IP hashing][10]) you have configured is not optimal for your traffic patterns - -**Error metrics** - -Recall from the error metric section above that 5xx (server error) codes are a valuable metric to monitor, particularly as a share of total response codes. NGINX Plus allows you to easily extract the number of **5xx codes per upstream server**, as well as the total number of responses, to determine that particular server’s error rate. - -**Availability metrics** - -For another view of the health of your web servers, NGINX also makes it simple to monitor the health of your upstream groups via the total number of **servers currently available within each group**. In a large reverse proxy setup, you may not care very much about the current state of any one server, just as long as your pool of available servers is capable of handling the load. 
But monitoring the total number of servers that are up within each upstream group can provide a very high-level view of the aggregate health of your web servers. - -**Collecting upstream metrics** - -NGINX Plus upstream metrics are exposed on the internal NGINX Plus monitoring dashboard, and are also available via a JSON interface that can serve up metrics into virtually any external monitoring platform. See examples in our companion post on [collecting NGINX metrics][11]. - -### Conclusion ### - -In this post we’ve touched on some of the most useful metrics you can monitor to keep tabs on your NGINX servers. If you are just getting started with NGINX, monitoring most or all of the metrics in the list below will provide good visibility into the health and activity levels of your web infrastructure: - -- [Dropped connections][12] -- [Requests per second][13] -- [Server error rate][14] -- [Request processing time][15] - -Eventually you will recognize additional, more specialized metrics that are particularly relevant to your own infrastructure and use cases. Of course, what you monitor will depend on the tools you have and the metrics available to you. See the companion post for [step-by-step instructions on metric collection][16], whether you use NGINX or NGINX Plus. - -At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][17], and get started right away with [a free trial of Datadog][18]. - -### Acknowledgments ### - -Many thanks to the NGINX team for reviewing this article prior to publication and providing important feedback and clarifications. - ----------- - -Source Markdown for this post is available [on GitHub][19]. Questions, corrections, additions, etc.? Please [let us know][20]. 
- --------------------------------------------------------------------------------- - -via: https://www.datadoghq.com/blog/how-to-monitor-nginx/ - -作者:K Young -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:http://nginx.org/en/ -[2]:http://nginx.com/resources/glossary/reverse-proxy-server/ -[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/ -[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections -[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state -[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html -[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[9]:https://en.wikipedia.org/wiki/Reverse_proxy -[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/ -[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections -[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second -[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate -[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time -[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ -[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up -[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md -[20]:https://github.com/DataDog/the-monitor/issues From 2b38bda7026c505229b4de281ac9ef5823b7f6d6 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 5 Aug 2015 13:41:46 +0800 Subject: [PATCH 064/697] Create 20150717 How to monitor NGINX- Part 1.md --- .../20150717 How to monitor NGINX- Part 1.md | 416 ++++++++++++++++++ 1 file changed, 416 insertions(+) create mode 
100644 translated/tech/20150717 How to monitor NGINX- Part 1.md diff --git a/translated/tech/20150717 How to monitor NGINX- Part 1.md b/translated/tech/20150717 How to monitor NGINX- Part 1.md new file mode 100644 index 0000000000..86e72c0324 --- /dev/null +++ b/translated/tech/20150717 How to monitor NGINX- Part 1.md @@ -0,0 +1,416 @@
+如何监控 NGINX - 第1部分
+================================================================================
+![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png)
+
+### NGINX 是什么? ###
+
+[NGINX][1](发音为 “engine X”)是一种流行的 HTTP 和反向代理服务器。作为 HTTP 服务器,NGINX 能以很少的内存高效可靠地提供静态内容。作为[反向代理][2],它可以作为多个后端应用的单一入口,在把请求转发给后端服务器的同时提供缓存和负载均衡。NGINX 是免费的开源产品,功能更全面的商业版本则叫 NGINX Plus。
+
+NGINX 也可以用作邮件代理和通用的 TCP 代理,但本文不涉及对这些用例的监控。
+
+### NGINX 主要指标 ###
+
+通过监控 NGINX 可以捕捉到两类问题:NGINX 本身的资源问题,以及 Web 基础设施中其他地方出现的问题。大多数 NGINX 用户都能从以下指标的监控中受益,包括**每秒请求数(requests per second)**,它提供了所有用户活动的高级视图;**服务器错误率(server error rate)**,它表明服务器处理看似有效的请求失败的频率;还有**请求处理时间(request processing time)**,它说明服务器处理客户端请求花费的时长(时间过长可能说明性能下降,或环境中存在其他问题)。
+
+更一般地,至少有三类主要指标需要监控:
+
+- 基本活动指标
+- 错误指标
+- 性能指标
+
+下面我们将分析每一类中最重要的 NGINX 指标,也会涉及一个相当常见但值得单独说明的场景:使用 NGINX Plus 作反向代理。我们还将介绍如何使用图形工具或其他监控工具来收集所有这些指标。
+
+本文引用的指标术语来自我们的[监控 101 系列][3],该系列介绍了指标收集与告警的框架。
+
+#### 基本活动指标 ####
+
+无论你在怎样的场景中使用 NGINX,毫无疑问你都要监控服务器接收了多少客户端请求,以及这些请求是如何被处理的。
+
+NGINX Plus 可以像开源版 NGINX 一样报告基本的活动指标,但它还通过一个稍有不同的模块提供了补充指标。我们先讨论开源版 NGINX,再说明 NGINX Plus 提供的额外指标。
+
+**NGINX**
+
+下图显示了开源版 NGINX 中一个客户端连接的生命周期,以及在连接过程中各项指标的采集位置。
+
+![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png)
+
+accepts、handled、requests 是持续递增的计数器;active、waiting、reading、writing 则随请求量增减。
+
+注:表格
+| Name | Description | Metric type |
+|------|-------------|-------------|
+| accepts | Count of client connections attempted by NGINX | Resource: Utilization |
+| handled | Count of successful client connections | Resource: Utilization |
+| active | Currently active client connections | Resource: Utilization |
+| dropped (calculated) | Count of dropped connections (accepts – handled) | Work: Errors\* |
+| requests | Count of client requests | Work: Throughput |
+
+\*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
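上表中的 dropped 并不是开源版 NGINX 直接给出的指标,需要用 accepts 减去 handled 计算得到。下面是一段示意性的 Python 草稿,演示如何解析 stub_status 状态页的典型输出并计算 dropped;它假设状态页的格式与 NGINX 官方文档中的示例一致,仅作演示之用:

```python
# 解析开源版 NGINX stub_status 状态页的典型输出,
# 并计算需要自行推算的 dropped 指标(accepts - handled)。
# 注意:这只是示意性草稿,假设状态页格式与官方示例一致。

def parse_stub_status(text):
    lines = text.strip().splitlines()
    metrics = {}
    # 第 1 行:"Active connections: N"
    metrics["active"] = int(lines[0].split(":")[1])
    # 第 3 行:三个数字依次为 accepts、handled、requests
    accepts, handled, requests = (int(n) for n in lines[2].split())
    metrics.update(accepts=accepts, handled=handled, requests=requests)
    # dropped 需要自行计算:accepts 与 handled 之差
    metrics["dropped"] = accepts - handled
    return metrics

sample = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106"""

print(parse_stub_status(sample)["dropped"])  # 正常情况下应为 0
```

正常运行的服务器上 accepts 与 handled 相等,dropped 为 0;一旦该值开始增长,就说明有连接被丢弃。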
+
+当 NGINX 进程接受操作系统递交的连接时,**accepts** 计数器递增;当 NGINX 实际为请求获得连接(建立新连接或复用空闲连接)时,**handled** 计数器递增。这两个计数器的值通常相同,二者之差即表明有连接被**丢弃(dropped)**,这往往是因为达到了资源限制,例如 NGINX 的 [worker_connections][4] 限制。
+
+NGINX 成功处理一个连接后,该连接进入 **active** 状态,并在处理客户端请求期间保持该状态:
+
+Active 状态
+
+- **Waiting**:如果此刻没有活动请求,活动连接会处于 Waiting 子状态。新连接可以绕过这个状态直接进入 Reading,最常见于使用 “accept filter” 或 “deferred accept” 的情况,此时 NGINX 只有在收到足够的数据后才会得到连接通知并开始响应。如果连接启用了 keep-alive,连接在发送完响应后会回到 Waiting 状态。
+
+- **Reading**:当收到请求时,连接离开 Waiting 状态,请求本身计为 Reading。在这个状态下,NGINX 正在读取客户端请求首部。请求首部通常比较小,因此这一般是一个很快的操作。
+
+- **Writing**:请求被读取之后计为 Writing,并保持该状态直到响应返回给客户端。这意味着当 NGINX 在等待上游服务器(位于 NGINX “背后”的系统)的结果时,以及 NGINX 向客户端发送响应时,该请求都处于 Writing 状态。请求往往会在 Writing 状态花费大量时间。
+
+通常,一个连接在同一时刻只承载一个请求。此时,Active 连接数 == Waiting 连接数 + Reading 请求数 + Writing 请求数。然而,较新的 SPDY 和 HTTP/2 协议允许在一个连接上复用多个并发的请求/响应对,因此 Active 可能小于 Waiting、Reading、Writing 之和。(在撰写本文时,NGINX 尚不支持 HTTP/2,但预计 2015 年内会支持。)
+
+**NGINX Plus**
+
+正如上面提到的,开源版 NGINX 的所有指标在 NGINX Plus 中同样可用,此外它还提供一些额外指标。本节仅说明 NGINX Plus 独有的指标。
+
+![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png)
+
+accepted、dropped、total 是持续递增的计数器;active、idle 和 current 则跟踪各状态下当前的连接或请求数量,随请求量增减。
+
+注:表格
+| Name | Description | Metric type |
+|------|-------------|-------------|
+| accepted | Count of client connections attempted by NGINX | Resource: Utilization |
+| dropped | Count of dropped connections | Work: Errors\* |
+| active | Currently active client connections | Resource: Utilization |
+| idle | Client connections with zero current requests | Resource: Utilization |
+| total | Count of client requests | Work: Throughput |
+
+\*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
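正文提到 (total / accepted) 就是每个连接的平均请求数。下面是一段示意性的 Python 草稿,从一份 NGINX Plus 风格的 JSON 状态数据中计算这个比值;示例数据是为演示而构造的,实际 JSON 的结构和字段路径随 NGINX Plus 版本而不同,请以你所用版本的接口为准:

```python
import json

# 根据 NGINX Plus 风格的 JSON 状态数据计算每连接平均请求数
# (total / accepted)。示例数据为演示而构造;实际 JSON 的结构
# 和字段路径随 NGINX Plus 版本而不同。

payload = json.loads("""{
  "connections": {"accepted": 52000, "dropped": 0, "active": 31, "idle": 12},
  "requests": {"total": 156000, "current": 19}
}""")

accepted = payload["connections"]["accepted"]
total = payload["requests"]["total"]
print(round(total / accepted, 2))  # 每个连接平均 3.0 个请求
```

这个比值明显高于 1 是正常的(keep-alive 连接会复用),突然跌向 1 则可能意味着客户端没有复用连接。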
+
+当 NGINX Plus 进程接受操作系统递交的连接时,**accepted** 计数器递增。如果工作进程未能为请求获得连接(无论是建立新连接还是复用空闲连接),则该连接被丢弃,**dropped** 计数递增。通常连接被丢弃是因为达到了资源限制,例如 NGINX Plus 的 [worker_connections][4] 限制。
+
+**Active** 和 **idle** 与[如上所述][5]的开源版 NGINX 中的 “active” 和 “waiting” 状态相同,但有一点区别:开源版中 “waiting” 状态计入 “active”,而 NGINX Plus 中 “idle” 连接被排除在 “active” 计数之外。**Current** 与开源版的 “reading + writing” 状态之和相同。
+
+**Total** 是客户端请求的累积计数。请注意,单个客户端连接可以包含多个请求,所以这个数字可能明显大于连接的累计数。事实上,(total / accepted) 就是每个连接的平均请求数。
+
+**开源版和 Plus 版指标的对照**
+
+注:表格
+| NGINX (open-source) | NGINX Plus |
+|---------------------|------------|
+| accepts | accepted |
+| dropped must be calculated | dropped is reported directly |
+| reading + writing | current |
+| waiting | idle |
+| active (includes “waiting” states) | active (excludes “idle” states) |
+| requests | total |
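按上表的对应关系,可以把开源版和 Plus 版的指标统一成一组规范名称,便于在同一个监控系统中处理两种来源的数据。下面是一个思路示意(函数名和字段名均为演示而设,并非任何现成工具的 API):

```python
# 按上表的对应关系,把开源版指标重命名为 Plus 版的规范名称,
# 并补算开源版缺失的派生指标。仅为思路示意。

OPEN_SOURCE_TO_CANONICAL = {
    "accepts": "accepted",
    "requests": "total",
    "waiting": "idle",
}

def normalize(metrics, flavor):
    """flavor 为 'open-source' 或 'plus',返回规范化后的指标字典。"""
    if flavor == "plus":
        return dict(metrics)
    out = {}
    for name, value in metrics.items():
        out[OPEN_SOURCE_TO_CANONICAL.get(name, name)] = value
    # 开源版的 dropped 必须自行计算;current 为 reading + writing;
    # 开源版 active 含 waiting,需要扣除才与 Plus 版 active 对齐
    out["dropped"] = metrics["accepts"] - metrics["handled"]
    out["current"] = metrics["reading"] + metrics["writing"]
    out["active"] = metrics["active"] - metrics["waiting"]
    return out

oss = {"accepts": 100, "handled": 99, "requests": 310,
       "active": 30, "waiting": 20, "reading": 2, "writing": 8}
print(normalize(oss, "open-source")["dropped"])   # 1
print(normalize(oss, "open-source")["active"])    # 10
```

这样,后续的告警规则只需针对规范名称编写一次,就能同时覆盖两个版本。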
+
+**需要告警的指标:被丢弃的连接数**
+
+被丢弃的连接数等于 accepts 与 handled 之差(开源版 NGINX),或者直接作为标准指标给出(NGINX Plus)。正常情况下,被丢弃的连接数应该为零。如果每秒丢弃连接的速率开始上升,就要检查是否有某种资源达到了饱和。
+
+![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png)
+
+**需要告警的指标:每秒请求数**
+
+以固定时间间隔对请求数据(开源版的 **requests**,或 Plus 版的 **total**)采样,就能得到单位时间(通常是每分钟或每秒)接收的请求数。监测这个指标可以提醒你注意进入的 Web 流量高峰,无论是合法的还是恶意的,也可以发现流量的突然下降,这通常意味着出了问题。每秒请求数的急剧变化可以提醒你环境中某处出了问题,即便它不能告诉你问题的确切位置。请注意,所有请求都被同等计数,无论其 URL 是什么。
+
+![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png)
+
+**收集活动指标**
+
+开源版 NGINX 提供了一个简单的状态页面来显示基本的服务器指标。该状态信息以标准格式显示,实际上任何图形或监控工具都可以配置为解析这些数据,用于分析、可视化或告警。NGINX Plus 则提供了一个 JSON 接口,可以输出更多的数据。阅读[NGINX 指标收集][6]一文来了解如何启用指标收集。
+
+#### 错误指标 ####
+
+注:表格
+| Name | Description | Metric type | Availability |
+|------|-------------|-------------|--------------|
+| 4xx codes | Count of client errors | Work: Errors | NGINX logs, NGINX Plus |
+| 5xx codes | Count of server errors | Work: Errors | NGINX logs, NGINX Plus |
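服务器错误率的计算方式很简单:单位时间窗口内 5xx 响应数除以全部状态码响应总数。下面是一段示意性的 Python 草稿;其中 0.05 的告警阈值只是演示用的假设值,实际阈值应按你自己的服务水平目标来定:

```python
# 按正文的定义计算服务器错误率:单位时间窗口内 5xx 响应数
# 除以全部状态码(1xx–5xx)的响应总数。
# 阈值 0.05 仅为演示用的假设值。

def server_error_rate(status_counts):
    """status_counts: {状态码类别: 数量},如 {"2xx": 980, "5xx": 20}"""
    total = sum(status_counts.values())
    if total == 0:
        return 0.0
    return status_counts.get("5xx", 0) / total

window = {"2xx": 940, "3xx": 30, "4xx": 10, "5xx": 20}
rate = server_error_rate(window)
print(round(rate, 3))  # 0.02
print(rate > 0.05)     # 是否超过示意阈值:False
```

对一到五分钟的滚动窗口反复计算该值,即可得到可用于告警的错误率时间序列。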
+
+NGINX 的错误指标告诉你服务器返回错误的频率,这非常有用。客户端错误以 4xx 状态码表示,服务器错误以 5xx 状态码表示。
+
+**需要告警的指标:服务器错误率**
+
+服务器错误率等于单位时间(通常是一到五分钟)内 5xx 状态码的数量除以[状态码][7](1xx、2xx、3xx、4xx、5xx)的总数。如果错误率随时间推移开始攀升,就应调查可能的原因;如果错误率突然飙升,则可能需要采取紧急行动,因为客户端很可能正在收到错误信息。
+
+![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png)
+
+关于客户端错误的说明:虽然监控 4xx 也有用处,但从该指标中能获取的信息有限,因为它只反映客户端的行为,而不针对任何特定的 URL。换句话说,4xx 的波动可能只是噪音,例如来自寻找漏洞的网络扫描器。
+
+**收集错误指标**
+
+虽然开源版 NGINX 没有直接提供可供监控的错误率,但至少有两种方法可以获取这一信息:
+
+- 使用商业支持的 NGINX Plus 提供的扩展状态模块
+- 配置 NGINX 的日志模块,把响应码写入访问日志
+
+关于这两种方法的详细说明,请阅读配套的 NGINX 指标收集文章。
+
+#### 性能指标 ####
+
+注:表格
+| Name | Description | Metric type | Availability |
+|------|-------------|-------------|--------------|
+| request time | Time to process each request, in seconds | Work: Performance | NGINX logs |
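把 $request_time 写入访问日志后,就可以离线统计处理时间。下面是一段示意性的 Python 草稿,假设你已在 log_format 末尾追加了 $request_time 字段(示例日志行为虚构数据,字段位置取决于你自己的 log_format 配置):

```python
# 从访问日志中提取 $request_time 并计算平均值和最大值。
# 假设 $request_time 被配置为每行最后一个字段;
# 示例日志行为虚构数据,字段位置取决于你自己的 log_format。

def request_times(log_lines):
    # 取每行最后一个以空白分隔的字段,解析为秒数
    return [float(line.rsplit(None, 1)[-1]) for line in log_lines]

logs = [
    '203.0.113.7 - - [17/Jul/2015:10:05:01 +0000] "GET / HTTP/1.1" 200 612 0.042',
    '203.0.113.9 - - [17/Jul/2015:10:05:02 +0000] "GET /api HTTP/1.1" 200 1024 0.310',
    '203.0.113.7 - - [17/Jul/2015:10:05:03 +0000] "POST /up HTTP/1.1" 201 98 1.205',
]

times = request_times(logs)
print(round(sum(times) / len(times), 3))  # 平均 0.519 秒
print(max(times))                          # 最慢请求 1.205 秒
```

实际生产中通常还会看分位数(如 p95、p99),因为平均值容易被少量慢请求掩盖或放大。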
+
+**需要告警的指标:请求处理时间**
+
+请求时间指标记录了 NGINX 处理每个请求的时间:从读取客户端的第一个请求字节到完成请求。较长的响应时间可能表明上游某处出了问题。
+
+**收集处理时间指标**
+
+NGINX 和 NGINX Plus 用户都可以通过把 $request_time 变量添加到访问日志格式中来捕捉处理时间数据。关于配置日志监控的更多细节,请参阅[NGINX 指标收集][8]。
+
+#### 反向代理指标 ####
+
+注:表格
+| Name | Description | Metric type | Availability |
+|------|-------------|-------------|--------------|
+| Active connections by upstream server | Currently active client connections | Resource: Utilization | NGINX Plus |
+| 5xx codes by upstream server | Server errors | Work: Errors | NGINX Plus |
+| Available servers per upstream group | Servers passing health checks | Resource: Availability | NGINX Plus |
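上表中的几个上游(upstream)指标可以结合使用:统计每组中健康的服务器数量,同时计算每台服务器的 5xx 错误率。下面是一段示意性的 Python 草稿;示例数据结构是为演示而构造的,实际的 NGINX Plus JSON 字段随版本而不同:

```python
# 根据 NGINX Plus 风格的上游(upstream)状态数据,统计组内
# 健康("up")的服务器数量,并计算每台服务器的 5xx 错误率。
# 示例数据结构为演示而构造,实际 JSON 字段随版本而不同。

upstream_group = [
    {"server": "10.0.0.1:80", "state": "up",   "responses": {"total": 1000, "5xx": 2}},
    {"server": "10.0.0.2:80", "state": "up",   "responses": {"total": 990,  "5xx": 99}},
    {"server": "10.0.0.3:80", "state": "down", "responses": {"total": 0,    "5xx": 0}},
]

# 组内当前可用(通过健康检查)的服务器数
available = sum(1 for s in upstream_group if s["state"] == "up")
print(available)  # 2

# 每台上游服务器的 5xx 错误率
for s in upstream_group:
    total = s["responses"]["total"]
    rate = s["responses"]["5xx"] / total if total else 0.0
    print(s["server"], round(rate, 2))
```

这样既能看出整组是否还有足够的可用服务器,也能发现像 10.0.0.2 这样错误率明显偏高的单台服务器。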
+
+[反向代理][9]是 NGINX 最常见的使用方式之一。商业支持的 NGINX Plus 提供了大量关于后端(即“上游(upstream)”)服务器的指标,这正是反向代理场景所需要的。本节重点介绍几个 NGINX Plus 用户可用的关键上游指标。
+
+NGINX Plus 的上游指标首先按组(group)划分,然后按单个服务器划分。因此,例如,如果你的反向代理把请求分发到五台上游 Web 服务器上,你一眼就能看出是否有某台服务器负担过重,也能看出上游组中健康的服务器是否足够多,以确保良好的响应时间。
+
+**活动指标**
+
+**每台上游服务器的活动连接数(active connections per upstream server)**可以帮助你确认反向代理是否正确地把工作分配到了各台服务器上。如果你把 NGINX 用作负载均衡器,任何一台服务器处理的连接数出现明显偏差,都可能表明该服务器难以及时处理请求,或者你配置的负载均衡方法(例如[轮询或 IP 哈希][10])不适合你的流量模式。
+
+**错误指标**
+
+回想一下前面错误指标一节所说的,5xx(服务器错误)状态码是值得监控的指标,尤其是其占总响应数的比例。NGINX Plus 可以让你轻松地提取**每台上游服务器的 5xx 状态码数量(5xx codes per upstream server)**以及响应总数,从而确定特定服务器的错误率。
+
+**可用性指标**
+
+对于 Web 服务器的健康状况,NGINX 还提供了另一个视角:通过**每个组中当前可用服务器的总数(servers currently available within each group)**,可以很方便地监控上游组的健康状况。在一个大型反向代理环境中,你可能并不太关心某一台服务器的当前状态,只要可用服务器池能够承担当前负载就行。但监控每个上游组中正常运行的服务器总数,可以为 Web 服务器的整体健康状况提供一个高层视图。
+
+**收集上游指标**
+
+NGINX Plus 的上游指标显示在 NGINX Plus 内部的监控仪表盘上,也可以通过一个 JSON 接口输出到几乎任何外部监控平台。示例请参阅我们的配套文章[收集 NGINX 指标][11]。
+
+### 结论 ###
+
+在这篇文章中,我们介绍了一些有用的指标,你可以通过监控它们来掌握 NGINX 服务器的状况。如果你刚开始使用 NGINX,监控下面列表中的大部分或全部指标,就能很好地了解你的 Web 基础设施的健康和活跃程度:
+
+- [Dropped connections][12]
+- [Requests per second][13]
+- [Server error rate][14]
+- [Request processing time][15]
+
+最终,你会发现更多与你自己的基础设施和使用场景特别相关的、更专门的指标。当然,监控什么取决于你拥有的工具和可用的指标。无论你使用 NGINX 还是 NGINX Plus,都请参阅配套文章来获得[指标收集的分步说明][16]。
+
+在 Datadog,我们已经构建了 NGINX 和 NGINX Plus 的集成,这样你只需极少的设置就可以开始收集和监控所有 Web 服务器的指标。[在本文中][17]了解如何用 Datadog 监控 NGINX,并立即开始[免费试用 Datadog][18]。
+
+### 致谢 ###
+
+非常感谢 NGINX 团队在本文发表之前进行审阅,并提供了重要的反馈和说明。
+
+----------
+
+本文的 Markdown 源文件可以[在 GitHub 上][19]获得。有问题、更正、补充等?请[告诉我们][20]。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.datadoghq.com/blog/how-to-monitor-nginx/
+
+作者:K Young
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://nginx.org/en/
+[2]:http://nginx.com/resources/glossary/reverse-proxy-server/ +[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/ +[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections +[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state +[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html +[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[9]:https://en.wikipedia.org/wiki/Reverse_proxy +[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/ +[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections +[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second +[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate +[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time +[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ +[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up +[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md +[20]:https://github.com/DataDog/the-monitor/issues From d883f1ef7607ced1c97a49c36994a3c51934d636 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 6 Aug 2015 11:27:53 +0800 Subject: [PATCH 065/697] =?UTF-8?q?20150806-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20150806 5 heroes of the Linux world.md | 99 +++++++++++++++++++ 1 file changed, 99 insertions(+) create mode 100644 sources/talk/20150806 5 heroes of the Linux world.md diff --git a/sources/talk/20150806 5 heroes of the Linux world.md b/sources/talk/20150806 5 heroes of the Linux world.md new file mode 100644 index 0000000000..ae35d674a1 --- /dev/null +++ b/sources/talk/20150806 5 heroes of the Linux 
world.md @@ -0,0 +1,99 @@
+5 heroes of the Linux world
+================================================================================
+Who are these people, seen and unseen, whose work affects all of us every day?
+
+![Image courtesy Christopher Michel/Flickr](http://core0.staticworld.net/images/article/2015/07/penguin-100599348-orig.jpg)
+Image courtesy [Christopher Michel/Flickr][1]
+
+### High-flying penguins ###
+
+Linux and open source are driven by passionate people who write best-of-breed software and then release the code to the public so anyone can use it, without any strings attached. (Well, there is one string attached, and that’s the licence.)
+
+Who are these people? These heroes of the Linux world, whose work affects all of us every day. Allow me to introduce you.
+
+![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/swap-klaus-100599357-orig.jpg)
+Image courtesy Swapnil Bhartiya
+
+### Klaus Knopper ###
+
+Klaus Knopper, an Austrian developer who lives in Germany, is the founder of Knoppix and Adriane Linux, the latter of which he developed for his blind wife.
+
+Knoppix holds a very special place in the hearts of those Linux users who started using Linux before Ubuntu came along. What makes Knoppix so special is that it popularized the concept of the live CD. Unlike Windows or Mac OS X, you could run the entire operating system from the CD without installing anything on the system. It allowed new users to test Linux on their systems without formatting the hard drive. The live feature of Linux alone contributed heavily to its popularity.
+
+![Image courtesy Fórum Internacional Software Live/Flickr](http://images.techhive.com/images/article/2015/07/lennart-100599356-orig.jpg)
+Image courtesy [Fórum Internacional Software Live/Flickr][2]
+
+### Lennart Poettering ###
+
+Lennart Poettering is yet another genius from Germany. He has written so many core components of a Linux (as well as BSD) system that it’s hard to keep track.
Most of his work is towards the successors of aging or broken components of Linux systems.
+
+Poettering wrote the modern init system systemd, which shook the Linux world and created a [rift in the Debian community][3].
+
+While Linus Torvalds has no problems with systemd, and praises it, he is not a huge fan of the way systemd developers (including co-author Kay Sievers) respond to bug reports and criticism. At one point Linus said on the LKML (Linux Kernel Mailing List) that he would [never work with Sievers][4].
+
+Lennart is also the author of PulseAudio, the sound server on Linux, and Avahi, a zero-configuration networking (zeroconf) implementation.
+
+![Image courtesy Meego Com/Flickr](http://images.techhive.com/images/article/2015/07/jim-zemlin-100599362-orig.jpg)
+Image courtesy [Meego Com/Flickr][5]
+
+### Jim Zemlin ###
+
+Jim Zemlin isn't a developer, but as founder of The Linux Foundation he is certainly one of the most important figures of the Linux world.
+
+In 2007, The Linux Foundation was formed as a result of a merger between two open source bodies: the Free Standards Group and the Open Source Development Labs. Zemlin was the executive director of the Free Standards Group. Post-merger, Zemlin became the executive director of The Linux Foundation and has held that position since.
+
+Under his leadership, The Linux Foundation has become a central figure in the modern IT world and plays a very critical role in the Linux ecosystem. In order to ensure that key developers like Torvalds and Kroah-Hartman can focus on Linux, the foundation sponsors them as fellows.
+
+Zemlin also made the foundation a bridge between companies so they can collaborate on Linux while at the same time competing in the market. The foundation also organizes many conferences around the world and [offers many courses for Linux developers][6].
+
+People may think of Zemlin as Linus Torvalds' boss, but he refers to himself as "Linus Torvalds' janitor."
+
+![Image courtesy Coscup/Flickr](http://images.techhive.com/images/article/2015/07/greg-kh-100599350-orig.jpg)
+Image courtesy [Coscup/Flickr][7]
+
+### Greg Kroah-Hartman ###
+
+Greg Kroah-Hartman is known as the second-in-command of the Linux kernel. The ‘gentle giant’ is the maintainer of the stable branch of the kernel and of the staging subsystem, USB, driver core, debugfs, kref, kobject, and the [sysfs][8] kernel subsystems, along with many other components of a Linux system.
+
+He is also credited for device drivers for Linux. One of his jobs is to travel around the globe, meet hardware makers and persuade them to make their drivers available for Linux. The next time you plug some random USB device into your system and it works out of the box, thank Kroah-Hartman. (Don't thank the distro. Some distros try to take credit for the work Kroah-Hartman or the Linux kernel did.)
+
+Kroah-Hartman previously worked for Novell and then joined the Linux Foundation as a fellow, alongside Linus Torvalds.
+
+Kroah-Hartman is the total opposite of Linus and never rants (at least publicly). One ripple occurred when he stated that [Canonical doesn’t contribute much to the Linux kernel][9].
+
+On a personal level, Kroah-Hartman is extremely helpful to new developers and users and is easily accessible.
+
+![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/linus-swapnil-100599349-orig.jpg)
+Image courtesy Swapnil Bhartiya
+
+### Linus Torvalds ###
+
+No collection of Linux heroes would be complete without Linus Torvalds. He is the author of the Linux kernel, the most used open source technology on the planet and beyond. His software powers everything from space stations to supercomputers, military drones to mobile devices and tiny smartwatches. Linus remains the authority on the Linux kernel and makes the final decision on which patches to merge to the kernel.
+
+Linux isn't Torvalds' only contribution to open source.
When he got fed-up with the existing software revision control systems, which his kernel heavily relied on, he wrote his own, called Git. Git enjoys the same reputation as Linux; it is the most used version control system in the world. + +Torvalds is also a passionate scuba diver and when he found no decent dive logs for Linux, he wrote his own and called it SubSurface. + +Torvalds is [well known for his rants][10] and once admitted that his ego is as big as a small planet. But he is also known for admitting his mistakes if he realizes he was wrong. + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2955001/linux/5-heros-of-the-linux-world.html + +作者:[Swapnil Bhartiya][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Swapnil-Bhartiya/ +[1]:https://flic.kr/p/siJ25M +[2]:https://flic.kr/p/uTzj54 +[3]:http://www.itwire.com/business-it-news/open-source/66153-systemd-fallout-two-debian-technical-panel-members-resign +[4]:http://www.linuxveda.com/2014/04/04/linus-torvalds-systemd-kay-sievers/ +[5]:https://flic.kr/p/9Lnhpu +[6]:http://www.itworld.com/article/2951968/linux/linux-foundation-offers-cheaper-courses-and-certifications-for-india.html +[7]:https://flic.kr/p/hBv8Pp +[8]:https://en.wikipedia.org/wiki/Sysfs +[9]:https://www.youtube.com/watch?v=CyHAeGBFS8k +[10]:http://www.itworld.com/article/2873200/operating-systems/11-technologies-that-tick-off-linus-torvalds.html \ No newline at end of file From 308319dffbe0f9e27d61c482b71b9ba46823cadd Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 6 Aug 2015 11:38:08 +0800 Subject: [PATCH 066/697] =?UTF-8?q?20150806-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...lation Guide for Puppet on Ubuntu 15.04.md | 429 
++++++++++++++++++ 1 file changed, 429 insertions(+) create mode 100644 sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md diff --git a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md new file mode 100644 index 0000000000..ea8fcd6e2e --- /dev/null +++ b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md @@ -0,0 +1,429 @@ +Installation Guide for Puppet on Ubuntu 15.04 +================================================================================ +Hi everyone, today in this article we'll learn how to install puppet to manage your server infrastructure running ubuntu 15.04. Puppet is an open source software configuration management tool which is developed and maintained by Puppet Labs that allows us to automate the provisioning, configuration and management of a server infrastructure. Whether we're managing just a few servers or thousands of physical and virtual machines to orchestration and reporting, puppet automates tasks that system administrators often do manually which frees up time and mental space so sysadmins can work on improving other aspects of your overall setup. It ensures consistency, reliability and stability of the automated jobs processed. It facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available in two solutions configuration management and data center automation. They are **puppet open source and puppet enterprise**. Puppet open source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. 
Whereas the puppet enterprise edition is a proven commercial solution for diverse enterprise IT environments, which gives us all the benefits of open source puppet, plus puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes.
+
+In this tutorial, we will cover how to install open source puppet in an agent and master setup running the ubuntu 15.04 linux distribution. Here, the puppet master is the server from which all the configurations will be controlled and managed, and all our remaining servers will be puppet agent nodes, which are configured according to the configuration of the puppet master server. Here are some easy steps to install and configure puppet to manage our server infrastructure running Ubuntu 15.04.
+
+### 1. Setting up Hosts ###
+
+In this tutorial, we'll use two machines, one as puppet master server and another as puppet node agent, both running ubuntu 15.04 "Vivid Vervet". Here is the infrastructure of the servers that we're gonna use for this tutorial.
+
+puppet master server with IP 45.55.88.6 and hostname : puppetmaster
+puppet node agent with IP 45.55.86.39 and hostname : puppetnode
+
+Now we'll add the entry of the machines to /etc/hosts on both machines, node agent and master server.
+
+    # nano /etc/hosts
+
+    45.55.88.6 puppetmaster.example.com puppetmaster
+    45.55.86.39 puppetnode.example.com puppetnode
+
+Please note that the Puppet Master server must be reachable on port 8140. So, we'll need to open port 8140 on it.
+
+### 2. Updating Time with NTP ###
+
+Puppet nodes need to maintain accurate system time to avoid problems when the master issues agent certificates. Certificates can appear to be expired if there is a time difference, so the time of both the master and the node agent must be synced with each other. To sync the time, we'll update the time with NTP.
To do so, here's the command below that we need to run on both master and node agent. + + # ntpdate pool.ntp.org + + 17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec + +Now, we'll update our local repository index and install ntp as follows. + + # apt-get update && sudo apt-get -y install ntp ; service ntp restart + +### 3. Puppet Master Package Installation ### + +There are many ways to install open source puppet. In this tutorial, we'll download and install a debian binary package named as **puppetlabs-release** packaged by the Puppet Labs which will add the source of the **puppetmaster-passenger** package. The puppetmaster-passenger includes the puppet master with apache web server. So, we'll now download the Puppet Labs package. + + # cd /tmp/ + # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb + + --2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb + Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d + Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected. + HTTP request sent, awaiting response... 200 OK + Length: 7384 (7.2K) [application/x-debian-package] + Saving to: ‘puppetlabs-release-trusty.deb’ + + puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s + + 2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384] + +After the download has been completed, we'll wanna install the package. + + # dpkg -i puppetlabs-release-trusty.deb + + Selecting previously unselected package puppetlabs-release. + (Reading database ... 85899 files and directories currently installed.) + Preparing to unpack puppetlabs-release-trusty.deb ... + Unpacking puppetlabs-release (1.0-11) ... + Setting up puppetlabs-release (1.0-11) ... + +Then, we'll update the local respository index with the server using apt package manager. 
+ + # apt-get update + +Then, we'll install the puppetmaster-passenger package by running the below command. + + # apt-get install puppetmaster-passenger + +**Note**: While installing we may get an error **Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')** but we no need to worry, we'll just simply ignore this as it says that the templatedir is deprecated so, we'll simply disbale that setting in the configuration. :) + +To check whether puppetmaster has been installed successfully in our Master server not not, we'll gonna try to check its version. + + # puppet --version + + 3.8.1 + +We have successfully installed puppet master package in our puppet master box. As we are using passenger with apache, the puppet master process is controlled by apache server, that means it runs when apache is running. + +Before continuing, we'll need to stop the Puppet master by stopping the apache2 service. + + # systemctl stop apache2 + +### 4. Master version lock with Apt ### + +As We have puppet version as 3.8.1, we need to lock the puppet version update as this will mess up the configurations while updating the puppet. So, we'll use apt's locking feature for that. To do so, we'll need to create a new file **/etc/apt/preferences.d/00-puppet.pref** using our favorite text editor. + + # nano /etc/apt/preferences.d/00-puppet.pref + +Then, we'll gonna add the entries in the newly created file as: + + # /etc/apt/preferences.d/00-puppet.pref + Package: puppet puppet-common puppetmaster-passenger + Pin: version 3.8* + Pin-Priority: 501 + +Now, it will not update the puppet while running updates in the system. + +### 5. Configuring Puppet Config ### + +Puppet master acts as a certificate authority and must generate its own certificates which is used to sign agent certificate requests. 
First of all, we'll need to remove any existing SSL certificates that were created during the installation of the package. The default location of puppet's SSL certificates is /var/lib/puppet/ssl. So, we'll remove the entire ssl directory using the rm command.
+
+    # rm -rf /var/lib/puppet/ssl
+
+Then, we'll configure the certificate. While creating the puppet master's certificate, we need to include every DNS name at which agent nodes can contact the master. So, we'll edit the master's puppet.conf using our favorite text editor.
+
+    # nano /etc/puppet/puppet.conf
+
+The file looks as shown below.
+
+    [main]
+    logdir=/var/log/puppet
+    vardir=/var/lib/puppet
+    ssldir=/var/lib/puppet/ssl
+    rundir=/var/run/puppet
+    factpath=$vardir/lib/facter
+    templatedir=$confdir/templates
+
+    [master]
+    # These are needed when the puppetmaster is run by passenger
+    # and can safely be removed if webrick is used.
+    ssl_client_header = SSL_CLIENT_S_DN
+    ssl_client_verify_header = SSL_CLIENT_VERIFY
+
+Here, we'll need to comment out the templatedir line to disable the setting, as it has already been deprecated. After that, we'll add the following lines at the end of the file under [main].
+
+    server = puppetmaster
+    environment = production
+    runinterval = 1h
+    strict_variables = true
+    certname = puppetmaster
+    dns_alt_names = puppetmaster, puppetmaster.example.com
+
+This configuration file has many options which might be useful in order to set up our own configuration. A full description of the file is available at Puppet Labs [Main Config File (puppet.conf)][1].
+
+After editing the file, we'll wanna save that and exit.
+
+Now, we'll generate a new CA certificate by running the following command.
+
+    # puppet master --verbose --no-daemonize
+
+    Info: Creating a new SSL key for ca
+    Info: Creating a new SSL certificate request for ca
+    Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78
+    ...
+
+    Notice: puppetmaster has a waiting certificate request
+    Notice: Signed certificate request for puppetmaster
+    Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem'
+    Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem'
+    Notice: Starting Puppet master version 3.8.1
+    ^CNotice: Caught INT; storing stop
+    Notice: Processing stop
+
+Now, the certificate is being generated. Once we see **Notice: Starting Puppet master version 3.8.1**, the certificate setup is complete. Then we'll press CTRL-C to return to the shell.
+
+If we want to look at the information of the certificate that was just created, we can get the list by running the following command.
+
+    # puppet cert list --all
+
+
+    "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
+
+### 6. Creating a Puppet Manifest ###
+
+The default location of the main manifest is /etc/puppet/manifests/site.pp. The main manifest file contains the configuration definitions that are executed on the puppet agent nodes. Now, we'll create the manifest file by running the following command.
+
+    # nano /etc/puppet/manifests/site.pp
+
+Then, we'll add the following lines of configuration in the file that we just opened. 
+
+    # execute 'apt-get update'
+    exec { 'apt-update':                    # exec resource named 'apt-update'
+      command => '/usr/bin/apt-get update'  # command this resource will run
+    }
+
+    # install apache2 package
+    package { 'apache2':
+      require => Exec['apt-update'],        # require 'apt-update' before installing
+      ensure => installed,
+    }
+
+    # ensure apache2 service is running
+    service { 'apache2':
+      ensure => running,
+    }
+
+The above lines of configuration are responsible for installing the apache web server on the agent nodes.
+
+### 7. Starting Master Service ###
+
+We are now ready to start the puppet master. We can start it by running the apache2 service.
+
+    # systemctl start apache2
+
+Here, our puppet master is running, but it isn't managing any agent nodes yet. Now, we'll add the puppet node agents to the master.
+
+**Note**: If you get an error **Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.** then there must be some problem with the apache server, so we can see what exactly happened by running **apachectl start** as root or under sudo. While performing this tutorial, we hit a misconfiguration of the certificates in the **/etc/apache2/sites-enabled/puppetmaster.conf** file. We replaced **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem** with **SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** and commented out the **SSLCertificateKeyFile** line. Then we reran the above command to start the apache server.
+
+### 8. Puppet Agent Package Installation ###
+
+Now, as we have our puppet master ready, it needs agents to manage, so we'll need to install the puppet agent on the nodes. We'll need to install the puppet agent on every node in our infrastructure that we want the puppet master to manage, and make sure that our agent nodes have been added to DNS. Now, we'll install the latest puppet agent on our agent node, i.e. 
puppetnode.example.com.
+
+We'll run the following command to download the Puppet Labs package on our puppet agent nodes.
+
+    # cd /tmp/
+    # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
+
+    --2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
+    Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
+    Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
+    HTTP request sent, awaiting response... 200 OK
+    Length: 7384 (7.2K) [application/x-debian-package]
+    Saving to: ‘puppetlabs-release-trusty.deb’
+
+    puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s
+
+    2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384]
+
+Then, as we're running ubuntu 15.04, we'll use the debian package manager to install it.
+
+    # dpkg -i puppetlabs-release-trusty.deb
+
+Now, we'll update the repository index using apt-get.
+
+    # apt-get update
+
+Finally, we'll install the puppet agent directly from the remote repository.
+
+    # apt-get install puppet
+
+The puppet agent is disabled by default, so we'll need to enable it. To do so we'll edit the /etc/default/puppet file using a text editor.
+
+    # nano /etc/default/puppet
+
+Then, we'll change the value of **START** to "yes" as shown below.
+
+    START=yes
+
+Then, we'll save and exit the file.
+
+### 9. Agent Version Lock with Apt ###
+
+As we have puppet version 3.8.1, we need to lock the package at that version on the agent as well, just as we did on the master, because an unplanned upgrade would break our configuration. So, we'll use apt's pinning feature again. To do so, we'll create the file /etc/apt/preferences.d/00-puppet.pref using our favorite text editor. 
+
+    # nano /etc/apt/preferences.d/00-puppet.pref
+
+Then, we'll add the following entries to the newly created file:
+
+    # /etc/apt/preferences.d/00-puppet.pref
+    Package: puppet puppet-common
+    Pin: version 3.8*
+    Pin-Priority: 501
+
+Now apt will not upgrade Puppet while running updates on the system.
+
+### 10. Configuring Puppet Node Agent ###
+
+Next, we must make a few configuration changes before running the agent. To do so, we'll edit the agent's puppet.conf.
+
+    # nano /etc/puppet/puppet.conf
+
+It will look exactly like the Puppet master's initial configuration file.
+
+This time we'll also comment out the **templatedir** line. Then we'll delete the [master] section and all of the lines below it.
+
+If the puppet master is reachable at the short hostname "puppet-master", the agent should be able to connect to the master with that name; if not, we'll need to use its fully qualified domain name, i.e. puppetmaster.example.com.
+
+    [agent]
+    server = puppetmaster.example.com
+    certname = puppetnode.example.com
+
+After adding this, the file will look like this.
+
+    [main]
+    logdir=/var/log/puppet
+    vardir=/var/lib/puppet
+    ssldir=/var/lib/puppet/ssl
+    rundir=/var/run/puppet
+    factpath=$vardir/lib/facter
+    #templatedir=$confdir/templates
+
+    [agent]
+    server = puppetmaster.example.com
+    certname = puppetnode.example.com
+
+Once done with that, we'll save and exit.
+
+Next, we'll start the puppet agent on our Ubuntu 15.04 nodes. To start it, we'll run the following command.
+
+    # systemctl start puppet
+
+If everything is configured properly, we should not see any output from the above command. When the agent runs for the first time, it generates an SSL certificate and sends a signing request to the puppet master; once the master signs the agent's certificate, the master will be able to communicate with the agent node. 
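Under the hood this handshake is ordinary X.509 certificate signing: the agent creates a private key and a certificate signing request (CSR), and the master's built-in CA signs that request. If you want to see the same dance with plain openssl, here is a self-contained sketch; the file names and the scratch directory are purely illustrative, and these are not puppet's actual commands or its real storage layout.

```shell
# Scratch directory so we don't touch puppet's real ssldir (/var/lib/puppet/ssl).
workdir=$(mktemp -d)

# Agent side: generate a private key and a certificate signing request (CSR).
openssl genrsa -out "$workdir/agent.key" 2048 2>/dev/null
openssl req -new -key "$workdir/agent.key" \
    -subj "/CN=puppetnode.example.com" -out "$workdir/agent.csr"

# Master side: the CA key and a self-signed CA certificate (the master's CA role).
openssl genrsa -out "$workdir/ca.key" 2048 2>/dev/null
openssl req -new -x509 -key "$workdir/ca.key" \
    -subj "/CN=puppetmaster" -days 1 -out "$workdir/ca.crt"

# Signing the CSR corresponds to what 'puppet cert sign <hostname>' does for us.
openssl x509 -req -in "$workdir/agent.csr" -CA "$workdir/ca.crt" \
    -CAkey "$workdir/ca.key" -CAcreateserial -days 1 \
    -out "$workdir/agent.crt" 2>/dev/null

# The agent's certificate now validates against the master's CA.
openssl verify -CAfile "$workdir/ca.crt" "$workdir/agent.crt"
```

Conceptually, the `openssl x509 -req` step is the part puppet performs when we sign the request on the master in the next section, with the real keys and certificates kept under /var/lib/puppet/ssl.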
+
+**Note**: If you are adding your first node, it is recommended that you attempt to sign the certificate on the puppet master before adding your other agents. Once you have verified that everything works properly, you can then go back and add the remaining agent nodes.
+
+### 11. Signing certificate Requests on Master ###
+
+When the puppet agent runs for the first time, it generates an SSL certificate and sends a signing request to the master server. Before the master is able to communicate with and control the agent node, it must sign that specific agent node's certificate.
+
+To get the list of certificate requests, we'll run the following command on the puppet master server.
+
+    # puppet cert list
+
+    "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2
+
+As we just set up our first agent node, we will see one request. It will look something like the above, with the agent node's domain name as the hostname.
+
+Note that there is no + in front of it, which indicates that it has not been signed yet.
+
+Now, we'll sign the certificate request. To do so, we simply run **puppet cert sign** with the **hostname**, as shown below.
+
+    # puppet cert sign puppetnode.example.com
+
+    Notice: Signed certificate request for puppetnode.example.com
+    Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem'
+
+The Puppet master can now communicate with and control the node that the signed certificate belongs to.
+
+If we want to sign all of the current requests, we can use the --all option as shown below.
+
+    # puppet cert sign --all
+
+### Removing a Puppet Certificate ###
+
+Sometimes we may want to remove a host, or rebuild a host and then add it back. In these cases, we will want to revoke the host's certificate from the puppet master. 
To do this, we will want to use the clean action as follows.
+
+    # puppet cert clean hostname
+
+    Notice: Revoked certificate with serial 5
+    Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem'
+    Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem'
+
+If we want to view all of the requests, signed and unsigned, we can run the following command:
+
+    # puppet cert list --all
+
+
+    "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
+
+### 12. Deploying a Puppet Manifest ###
+
+After we have configured and completed the puppet manifest, we'll want to deploy it to the agent nodes. To apply and load the main manifest, we can simply run the following command on the agent node.
+
+    # puppet agent --test
+
+    Info: Retrieving pluginfacts
+    Info: Retrieving plugin
+    Info: Caching catalog for puppetnode.example.com
+    Info: Applying configuration version '1434563858'
+    Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully
+    Notice: Finished catalog run in 10.53 seconds
+
+This immediately shows us how the main manifest affects a single server.
+
+If we want to run a puppet manifest that is not related to the main manifest, we can simply use puppet apply followed by the manifest file path. It only applies the manifest to the node that we run apply on.
+
+    # puppet apply /etc/puppet/manifests/test.pp
+
+### 13. Configuring Manifest for a Specific Node ###
+
+If we want to deploy a manifest only to a specific node, we'll need to configure the manifest as follows.
+
+We'll need to edit the manifest on the master server using a text editor.
+
+    # nano /etc/puppet/manifests/site.pp
+
+Now, we'll add the following lines there. 
+
+    node 'puppetnode', 'puppetnode1' {
+      # execute 'apt-get update'
+      exec { 'apt-update':                    # exec resource named 'apt-update'
+        command => '/usr/bin/apt-get update'  # command this resource will run
+      }
+
+      # install apache2 package
+      package { 'apache2':
+        require => Exec['apt-update'],        # require 'apt-update' before installing
+        ensure => installed,
+      }
+
+      # ensure apache2 service is running
+      service { 'apache2':
+        ensure => running,
+      }
+    }
+
+Here, the above configuration will install and deploy the apache web server only on the two specified nodes with the short names puppetnode and puppetnode1. We can add more nodes to this list if we want the manifest deployed to them specifically.
+
+### 14. Configuring Manifest with a Module ###
+
+Modules are useful for grouping tasks together; there are many of them available in the Puppet community, and anyone can contribute more.
+
+On the puppet master, we'll install the **puppetlabs-apache** module using the puppet module command.
+
+    # puppet module install puppetlabs-apache
+
+**Warning**: Please do not use this module on an existing apache setup, or it will purge the apache configurations that are not managed by puppet.
+
+Now we'll edit the main manifest, i.e. **site.pp**, using a text editor.
+
+    # nano /etc/puppet/manifests/site.pp
+
+Now we'll add the following lines to install apache on puppet-node.
+
+    node 'puppet-node' {
+      class { 'apache': }             # use apache module
+      apache::vhost { 'example.com':  # define vhost resource
+        port    => '80',
+        docroot => '/var/www/html'
+      }
+    }
+
+Then we'll save and exit. Then, we'll rerun the manifest to deploy the configuration to the agents in our infrastructure.
+
+### Conclusion ###
+
+Finally, we have successfully installed puppet to manage our server infrastructure running the Ubuntu 15.04 "Vivid Vervet" linux operating system. 
We learned how puppet works, how to configure a manifest, how to communicate with the nodes, and how to deploy a manifest to the agent nodes with signed SSL certificates. Controlling, managing and configuring repeated tasks across any number of nodes is very easy with puppet, an open source configuration management tool. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/
+
+作者:[Arun Pyasi][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arunp/
+[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
\ No newline at end of file

From 41b90d9bdfb996de4549c2bafa82a921844a8ed8 Mon Sep 17 00:00:00 2001
From: XLCYun
Date: Thu, 6 Aug 2015 12:29:08 +0800
Subject: [PATCH 067/697] Update 20150716 A Week With GNOME As My Linux
 Desktop--What They Get Right & Wrong - Page 1 - Introduction.md

---
 ...ktop--What They Get Right & Wrong - Page 1 - Introduction.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md
index 39f29af147..582708f5a4 100644
--- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md
+++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md
@@ -2,7 +2,7 @@
 ================================================================================
 *作者声明: 
如果你是因为某种神迹而在没看标题的情况下点开了这篇文章,那么我想再重申一些东西...这是一篇评论文章。文中的观点都是我自己的,不代表Phoronix和Michael的观点。它们完全是我自己的想法。 -另外,没错……这可能是一篇引战的文章。我希望社团成员们更沉稳一些,因为我确实想在KDE和Gnome的社团上发起讨论,反馈。因此当我想指出——我所看到的——一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明:本文另一可选标题为“被[剪纸][1]千刀万剐”(原文剪纸一词为papercuts, 指易修复而烦人的漏洞,译者注)。 +另外,没错……这可能是一篇引战的文章。我希望社团成员们更沉稳一些,因为我确实想在KDE和Gnome的社团上发起讨论,反馈。因此当我想指出——我所看到的——一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明:本文另一可选标题为“被[细纸片][1]千刀万剐”(原文含paper cuts一词,指易修复但烦人的缺陷,译者注)。 现在,重申完毕……文章开始。 From 41b90d9bdfb996de4549c2bafa82a921844a8ed8 Mon Sep 17 00:00:00 2001 From: XLCYun Date: Thu, 6 Aug 2015 12:30:28 +0800 Subject: [PATCH 068/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?=E7=AC=AC=E4=BA=94=E8=8A=82=20XLCYun?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Get Right & Wrong - Page 5 - Conclusion.md | 39 +++++++++++++++++++ 1 file changed, 39 insertions(+) create mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md new file mode 100644 index 0000000000..02ee7425fc --- /dev/null +++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md @@ -0,0 +1,39 @@ +将GNOME作为我的Linux桌面的一周:他们做对的与做错的 - 第五节 - 总结 +================================================================================ +### 用户体验和最后想法 ### + +当Gnome 2.x和KDE 4.x要正面交锋时……我相当开心的跳到其中。我爱的东西它们有,恨的东西也有,但总的来说它们使用起来还算是一种乐趣。然后Gnome 3.x来了,带着一场Gnome Shell的戏剧。那时我就放弃了Gnome,我尽我所能的避开它。当时它对用户是不友好的,而且不直观,它打破了原有的设计典范,只为平板的统治世界做准备……而根据平板下跌的销量来看,这样的未来不可能实现。 + +Gnome 3后续发面了八个版本后,奇迹发生了。Gnome变得对对用户友好了。变得直观了。它完美吗?当然不了。我还是很讨厌它想推动的那种设计范例,我讨厌它总想把工作流(work flow)强加给我,但是在时间和耐心的作用下,这两都能被接受。只要你能够回头去看看Gnome 
Shell那外星人一样的界面,然后开始跟Gnome的其它部分(特别是控制中心)互动,你就能发现Gnome绝对做对了:细节。对细节的关注! + +人们能适应新的界面设计范例,能适应新的工作流——iPhone和iPad都证明了这一点——但真正一直让他们操心的是“纸片的割伤”(paper cuts,此处指易于修复但烦人的缺陷,译注)。 + +它带出了KDE和Gnome之间最重要的一个区别。Gnome感觉像一个产品。像一种非凡的体验。你用它的时候,觉得它是完整的,你要的东西都在你的指尖。它让人感觉就像是一个拥有windows或者OS X那样桌面体验的Linux桌面版:你要的都在里面,而且它是被同一个目标一致的团队中的同一个人写出来的。天,即使是一个应用程序发出的sudo请求都感觉是Gnome下的一个特意设计的部分,就像在Windows下的一样。而在KDE它就像是任何应用程序都能创建的那种随机外观的弹窗。它不像是以系统的一部分这样的正式身份停下来说“嘿,有个东西要请求管理员权限!你要给它吗?”。 + +KDE让人体验不到有凝聚力的体验。KDE像是在没有方向地打转,感觉没有完整的体验。它就像是一堆东西往不同的的方向移动,只不过恰好它们都有一个共同享有的工具包。如果开发者对此很开心,那么好吧,他们开心就好,但是如果他们想提供最好体验的话,那么就需要多关注那些小地方了。用户体验跟直观应当做为每一个应用程序的设计中心,应当有一个视野,知道KDE要提供什么——并且——知道它看起来应该是什么样的。 + +是不是有什么原因阻止我在KDE下使用Gnome磁盘管理? Rhythmbox? Evolution? 没有。没有。没有。但是这样说又错过了关键。Gnome和KDE都称它们为“桌面环境”。那么它们就应该是完整的环境,这意味着他们的各个部件应该汇集并紧密结合在一起,意味着你使用它们环境下的工具,因为它们说“您在一个完整的桌面中需要的任何东西,我们都支持。”说真的?只有Gnome看起来能符合完整的要求。KDE在“汇集在一起”这一方面感觉就像个半成品,更不用说提供“完整体验”中你所需要的东西。Gnome磁盘管理没有相应的对手——kpartionmanage要求ROOT权限。KDE不运行“首次用户注册”的过程(原文:No 'First Time User' run through.可能是指系统安装过程中KDE没有创建新用户的过程,译注) ,现在也不过是在Kubuntu下引入了一个用户管理器。老天,Gnome甚至提供了地图,笔记,日历和时钟应用。这些应用都是百分百要紧的吗?不,当然不了。但是正是这些应用帮助Gnome推动“Gnome是一种完整丰富的体验”的想法。 + +我吐槽的KDE问题并非不可能解决,决对不是这样的!但是它需要人去关心它。它需要开发者为他们的作品感到自豪,而不仅仅是为它们实现的功能而感到自豪——组织的价值可大了去了。别夺走用户设置选项的能力——GNOME 3.x就是因为缺乏配置选项的能力而为我所诟病,但别把“好吧,你想怎么设置就怎么设置,”作为借口而不提供任何理智的默认设置。默认设置是用户将看到的东西,它们是用户从打开软件的第一刻开始进行评判的关键。给用户留个好印象吧。 + +我知道KDE开发者们知道设计很重要,这也是为什么Visual Design Group(视觉设计团体)存在的原因,但是感觉好像他们没有让VDG充分发挥。所以KDE里存在组织上的缺陷。不是KDE没办法完整,不是它没办法汇集整合在一起然后解决衰败问题,只是开发者们没做到。他们瞄准了靶心……但是偏了。 + +还有,在任何人说这句话之前……千万别说“补丁很受欢迎啊"。因为当我开心的为个人提交补丁时,只要开发者坚持以他们喜欢的却不直观的方式干事,更多这样的烦事就会不断发生。这不关Muon有没有中心对齐。也不关Amarok的界面太丑。也不关每次我敲下快捷键后,弹出的音量和亮度调节窗口占用了我一大块的屏幕“房地产”(说真的,有人会去缩小这些东西)。 + +这跟心态的冷漠有关,跟开发者们在为他们的应用设计UI时根本就不多加思考有关。KDE团队做的东西都工作得很好。Amarok能播放音乐。Dragon能播放视频。Kwin或Qt和kdelibs似乎比Mutter/gtk更有力更效率(仅根本我的电池电量消耗计算。非科学性测试)。这些都很好,很重要……但是它们呈现的方式也很重要。甚至可以说,呈现方式是最重要的,因为它是用户看到的和与之交互的东西。 + 
+KDE应用开发者们……让VDG参与进来吧。让VDG审查并核准每一个”核心“应用,让一个VDG的UI/UX专家来设计应用的使用模式和使用流程,以此保证其直观性。真见鬼,不管你们在开发的是啥应用,仅仅把它的模型发到VDG论坛寻求反馈甚至都可能都能得到一些非常好的指点跟反馈。你有这么好的资源在这,现在赶紧用吧。 + +我不想说得好像我一点都不懂感恩。我爱KDE,我爱那些志愿者们为了给Linux用户一个可视化的桌面而付出的工作与努力,也爱可供选择的Gnome。正是因为我关心我才写这篇文章。因为我想看到更好的KDE,我想看到它走得比以前更加遥远。而这样做需要每个人继续努力,并且需要人们不再躲避批评。它需要人们对系统互动及系统崩溃的地方都保持诚实。如果我们不能直言批评,如果我们不说”这真垃圾!”,那么情况永远不会变好。 + +这周后我会继续使用Gnome吗?可能不,不。Gnome还在试着强迫我接受其工作流,而我不想追随,也不想遵循,因为我在使用它的时候感觉变得不够高效,因为它并不遵循我的思维模式。可是对于我的朋友们,当他们问我“我该用哪种桌面环境?”我可能会推荐Gnome,特别是那些不大懂技术,只要求“能工作”就行的朋友。根据目前KDE的形势来看,这可能是我能说出的最狠毒的评估了。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 196acfe054a235f90dcbd8e99174c08d4c1eede9 Mon Sep 17 00:00:00 2001 From: XLCYun Date: Thu, 6 Aug 2015 13:12:55 +0800 Subject: [PATCH 069/697] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=8E=9F=E6=96=87=20?= =?UTF-8?q?=20XLCYun?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Get Right & Wrong - Page 5 - Conclusion.md | 40 ------------------- 1 file changed, 40 deletions(-) delete mode 100644 sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md diff --git a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md b/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md deleted file mode 100644 index cf9028229d..0000000000 --- a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md +++ /dev/null @@ -1,40 +0,0 @@ -Translating by XLCYun. 
-A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 5 - Conclusion -================================================================================ -### User Experience and Closing Thoughts ### - -When Gnome 2.x and KDE 4.x were going head to head.. I jumped between the two quite happily. Some things I loved, some things I hated, but over all they were both a pleasure to use. Then Gnome 3.x came around and all of the drama with Gnome Shell. I swore off Gnome and avoided it every chance I could. It wasn't user friendly, it was non-intuitive, it broke an establish paradigm in preparation for tablet's taking over the world... A future that, judging from the dropping sales of tablets, will never come. - -Eight releases of Gnome 3 later and the unimaginable happened. Gnome got user friendly. Gnome got intuitive. Is it perfect? Of course not. I still hate the paradigm it tries to push, I hate how it tries to force a work flow onto me, but both of those things can be gotten used to with time and patience. Once you have managed to look past Gnome Shell's alien appearance and you start interacting with it and the other parts of Gnome (Control Center especially) you see what Gnome has definitely gotten right: the little things. The attention to detail. - -People can adapt to new paradigms, people can adapt to new work flows-- the iPhone and iPad proved that-- but what will always bother them are the paper cuts. - -Which brings up an important distinction between KDE and Gnome. Gnome feels like a product. It feels like a singular experience. When you use it, it feels like it is complete and that everything you need is at your fingertips. It feel's like THE Linux desktop in the same way that Windows or OS X have THE desktop experience: what you need is there and it was all written by the same guys working on the same team towards the same goal. 
Hell, even an application prompting for sudo access feels like an intentional part of the desktop under Gnome, much the way that it is under Windows. In KDE it's just some random-looking window popup that any application could have created. It doesn't feel like a part of the system stopping and going "Hey! Something has requested administrative rights! Do you want to let it go through?" in an official capacity. - -KDE doesn't feel like cohesive experience. KDE doesn't feel like it has a direction its moving in, it doesn't feel like a full experience. KDE feels like its a bunch of pieces that are moving in a bunch of different directions, that just happen to have a shared toolkit beneath them. If that's what the developers are happy with, then fine, good for them, but if the developers still have the hope of offering the best experience possible then the little stuff needs to matter. The user experience and being intuitive needs to be at the forefront of every single application, there needs to be a vision of what KDE wants to offer -and- how it should look. - -Is there anything stopping me from using Gnome Disks under KDE? Rhythmbox? Evolution? Nope. Nope. Nope. But that misses the point. Gnome and KDE both market themselves as "Desktop Environments." They are supposed to be full -environments-, that means they all the pieces come and fit together, that you use that environment's tools because they are saying "We support everything you need to have a full desktop." Honestly? Only Gnome seems to fit the bill of being complete. KDE feel's half-finished when it comes to "coming together" part, let alone offering everything you need for a "full experience". There's no counterpart to Gnome Disks-- kpartitionmanager prompts for root. No "First Time User" run through, it just now got a user manager in Kubuntu. Hell, Gnome even provides a Maps, Notes, Calendar and Clock application. Do all of these applications matter 100%? No, of course not. 
But the fact that Gnome has them helps to push the idea that Gnome is a full and complete experience. - -My complaints about KDE are not impossible to fix, not by a long shot. But it requires people to care. It requires developers to take pride in their work beyond just function-- form counts for a whole hell of a lot. Don't take away the user's ability to configure things-- the lack of configuration is one of my biggest gripes with GNOME 3.x, but don't use "Well you can configure it however you want," as an excuse for not providing sane defaults. The defaults are what users are going to see, they are what the users are going to judge from the first moment they open your application. Make it a good impression. - -I know the KDE developers know design matters, that is WHY the Visual Design Group exists, but it feels like they aren't using the VDG to their fullest. And therein lies KDE's hamartia. It's not that KDE can't be complete, it's not that it can't come together and fix the downfalls, it just that they haven't. They aimed for the bulls eye... but they missed. - -And before anyone says it... Don't say "Patches are welcome." Because while I can happily submit patches for the individual annoyances more will just keep coming as developers keep on their marry way of doing things in non-intuitive ways. This isn't about Muon not being center-aligned. This isn't about Amarok having an ugly UI. This isn't about the volume and brightness pop-up notifiers taking up a large chunk of my screen real-estate every time I hit my hotkeys (seriously, someone shrink those things). - -This is about a mentality of apathy, this is about developers apparently not thinking things through when they make the UI for their applications. Everything the KDE Community does works fine. Amarok plays music. Dragon Player plays videos. Kwin / Qt & kdelibs is seemingly more power efficient than Mutter / gtk (according to my battery life times. Non-scientific testing). 
Those things are all well and good, and important.. but the presentation matters to. Arguably, the presentation matters the most because that is what user's see and interact with. - -To KDE application developers... Get the VDG involved. Make every single 'core' application get its design vetted and approved by the VDG, have a UI/UX expert from the VDG go through the usage patterns and usage flow of your application to make sure its intuitive. Hell, even just posting a mock up to the VDG forums and asking for feedback would probably get you some nice pointers and feedback for whatever application you're working on. You have this great resource there, now actually use them. - -I am not trying to sound ungrateful. I love KDE, I love the work and effort that volunteers put into giving Linux users a viable desktop, and an alternative to Gnome. And it is because I care that I write this article. Because I want to see KDE excel, I want to see it go further and farther than it has before. But doing that requires work on everyone's part, and it requires that people don't hold back criticism. It requires that people are honest about their interaction with the system and where it falls apart. If we can't give direct criticism, if we can't say "This sucks!" then it will never get better. - -Will I still use Gnome after this week? Probably not, no. Gnome still trying to force a work flow on me that I don't want to follow or abide by, I feel less productive when I'm using it because it doesn't follow my paradigm. For my friends though, when they ask me "What desktop environment should I use?" I'm probably going to recommend Gnome, especially if they are less technical users who want things to "just work." And that is probably the most damning assessment I could make in regards to the current state of KDE. 
- --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5 - -作者:Eric Griffith -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 26ad573be26ca3f90eb97bea4fd591febb0c0c58 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 6 Aug 2015 13:14:12 +0800 Subject: [PATCH 070/697] =?UTF-8?q?20150806-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ware Developer is a Great Career Choice.md | 50 +++++++++++++++++++ 1 file changed, 50 insertions(+) create mode 100644 sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md diff --git a/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md new file mode 100644 index 0000000000..0302c0b006 --- /dev/null +++ b/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md @@ -0,0 +1,50 @@ +5 Reasons Why Software Developer is a Great Career Choice +================================================================================ +This week I will give a presentation at a local high school on what it is like to work as a programmer. I am volunteering (through the organization [Transfer][1]) to come to schools and talk about what I work with. This school will have a technology theme day this week, and would like to hear what working in the technology sector is like. Since I develop software, that’s what I will talk about. One section will be on why I think a career in software development is great. The main reasons are: + +### 5 Reasons ### + +**1 Creative**. If you ask people to name creative jobs, chances are they will say things like writer, musician or painter. 
But few people know that software development is also very creative. It is almost by definition creative, since you create new functionality that didn’t exist before. The solutions can be expressed in many ways, both structurally and in the details. Often there are trade-offs to make (for example speed versus memory consumption). And of course the solution has to be correct. All this requires creativity. + +**2 Collaborative**. Another myth is that programmers sit alone at their computers and code all day. But software development is in fact almost always a team effort. You discuss programming problems and solutions with your colleagues, and discuss requirements and other issues with product managers, testers and customers. It is also telling that pair-programming (two developers programming together on one computer) is a popular practice. + +**3 In demand**. More and more in the world is using software, or as Marc Andreessen put it: “[Software is Eating the World][2]“. Even as there are more programmers (in Stockholm, programmer is now the [most common occupation][3]), demand is still outpacing supply. Software companies report that one of their greatest challenges is [finding good developers][4]. I regularly get contacted by recruiters trying to get me to change jobs. I don’t know of many other professions where employers compete for you like that. + +**4 Pays well**. Developing software can create a lot of value. There is no marginal cost to selling one extra copy of software you have already developed. This combined with the high demand for developers means that pay is quite good. There are of course occupations where you make more money, but compared to the general population, I think software developers are paid quite well. + +**5 Future proof**. Many jobs disappear, often because they can be replaced by computers and software. But all those new programs still need to be developed and maintained, so the outlook for programmers is quite good. 
+ +### But… ### + +**What about outsourcing?** Won’t all software development be outsourced to countries where the salaries are much lower? This is an example of an idea that is better in theory than in practice (much like the [waterfall development methodology][5]). Software development is a discovery activity as much as a design activity. It benefits greatly from intense collaboration. Furthermore, especially when the main product is software, the knowledge gained when developing it is a competitive advantage. The easier that knowledge is shared within the whole company, the better it is. + +Another way to look at it is this. Outsourcing of software development has existed for quite a while now. Yet there is still high demand for local developers. So companies see benefits of hiring local developers that outweigh the higher costs. + +### How to Win ### + +There are many reasons why I think developing software is enjoyable (see also [Why I Love Coding][6]). But it is not for everybody. Fortunately it is quite easy to try out programming. There are innumerable resources on the web for learning to program. For example, both [Coursera][7] and [Udacity][8] have introductory courses. If you have never programmed, try one of the free courses or tutorials to get a feel for it. + +Finding something you really enjoy doing for a living has at least two benefits. First, since you do it every day, work will be much more fun than if you simply do something to make money. Second, if you really like it, you have a much better chance of getting good at it. I like the Venn diagram below (by [@eskimon][9]) on what constitutes a great job. Since programming pays relatively well, I think that if you like it, you have a good chance of ending up in the center of the diagram! 
+ +![](https://henrikwarne1.files.wordpress.com/2014/12/career-planning.png) + +-------------------------------------------------------------------------------- + +via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/ + +作者:[Henrik Warne][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://henrikwarne.com/ +[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23 +[2]:http://online.wsj.com/articles/SB10001424053111903480904576512250915629460 +[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/ +[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov +[5]:http://en.wikipedia.org/wiki/Waterfall_model +[6]:http://henrikwarne.com/2012/06/02/why-i-love-coding/ +[7]:https://www.coursera.org/ +[8]:https://www.udacity.com/ +[9]:https://eskimon.wordpress.com/about/ \ No newline at end of file From bb276ee6be7206faefab7041f6c150560987ca6b Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 6 Aug 2015 13:17:29 +0800 Subject: [PATCH 071/697] =?UTF-8?q?=E6=B7=BB=E5=8A=A0=E8=AF=91=E8=80=85?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
5 Reasons Why Software Developer is a Great Career Choice.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md index 0302c0b006..d24aa83983 100644 --- a/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md +++ b/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md @@ -1,3 +1,4 @@ +Translating by MousyCoder 5 Reasons Why Software Developer is a Great Career Choice ================================================================================ This week I will give a presentation at a local high school on what it is like to work as a programmer. I am volunteering (through the organization [Transfer][1]) to come to schools and talk about what I work with. This school will have a technology theme day this week, and would like to hear what working in the technology sector is like. Since I develop software, that’s what I will talk about. One section will be on why I think a career in software development is great. 
The main reasons are: From 16f5b9676503cfc9ed1702e9ebf8e98bf89c32cc Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 6 Aug 2015 15:03:05 +0800 Subject: [PATCH 072/697] PUB:20150727 Easy Backup Restore and Migrate Containers in Docker @GOLinux --- ...estore and Migrate Containers in Docker.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) rename {translated/tech => published}/20150727 Easy Backup Restore and Migrate Containers in Docker.md (61%) diff --git a/translated/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md b/published/20150727 Easy Backup Restore and Migrate Containers in Docker.md similarity index 61% rename from translated/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md rename to published/20150727 Easy Backup Restore and Migrate Containers in Docker.md index 420430cca8..7d2d5f26d8 100644 --- a/translated/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md +++ b/published/20150727 Easy Backup Restore and Migrate Containers in Docker.md @@ -1,32 +1,32 @@ 无忧之道:Docker中容器的备份、恢复和迁移 ================================================================================ -今天,我们将学习如何快速地对docker容器进行快捷备份、恢复和迁移。[Docker][1]是一个开源平台,用于自动化部署应用,以通过快捷的途径在称之为容器的轻量级软件层下打包、发布和运行这些应用。它使得应用平台独立,因为它扮演了Linux上一个额外的操作系统级虚拟化的自动化抽象层。它通过其组件cgroups和命名空间利用Linux内核的资源分离特性,达到避免虚拟机开销的目的。它使得用于部署和扩展web应用、数据库和后端服务的大规模构建块无需依赖于特定的堆栈或供应者。 +今天,我们将学习如何快速地对docker容器进行快捷备份、恢复和迁移。[Docker][1]是一个开源平台,用于自动化部署应用,以通过快捷的途径在称之为容器的轻量级软件层下打包、发布和运行这些应用。它使得应用平台独立,因为它扮演了Linux上一个额外的操作系统级虚拟化的自动化抽象层。它通过其组件cgroups和命名空间利用Linux内核的资源分离特性,达到避免虚拟机开销的目的。它使得用于部署和扩展web应用、数据库和后端服务的大规模构建组件无需依赖于特定的堆栈或供应者。 -所谓的容器,就是那些创建自Docker镜像的软件层,它包含了独立的Linux文件系统和开箱即用的应用程序。如果我们有一个在盒子中运行着的Docker容器,并且想要备份这些容器以便今后使用,或者想要迁移这些容器,那么,本教程将帮助你掌握在Linux操作系统中备份、恢复和迁移Docker容器。 +所谓的容器,就是那些创建自Docker镜像的软件层,它包含了独立的Linux文件系统和开箱即用的应用程序。如果我们有一个在机器中运行着的Docker容器,并且想要备份这些容器以便今后使用,或者想要迁移这些容器,那么,本教程将帮助你掌握在Linux操作系统中备份、恢复和迁移Docker容器的方法。 我们怎样才能在Linux中备份、恢复和迁移Docker容器呢?这里为您提供了一些便捷的步骤。 ### 1. 
备份容器 ### -首先,为了备份Docker中的容器,我们会想看看我们想要备份的容器列表。要达成该目的,我们需要在我们运行这Docker引擎,并已创建了容器的Linux机器中运行 docker ps 命令。 +首先,为了备份Docker中的容器,我们会想看看我们想要备份的容器列表。要达成该目的,我们需要在我们运行着Docker引擎,并已创建了容器的Linux机器中运行 docker ps 命令。 # docker ps ![Docker Containers List](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-containers-list.png) -在此之后,我们要选择我们想要备份的容器,然后我们会去创建该容器的快照。我们可以使用 docker commit 命令来创建快照。 +在此之后,我们要选择我们想要备份的容器,然后去创建该容器的快照。我们可以使用 docker commit 命令来创建快照。 # docker commit -p 30b8f18f20b4 container-backup ![Docker Commit](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-commit.png) -该命令会生成一个作为Docker镜像的容器快照,我们可以通过运行 docker images 命令来查看Docker镜像,如下。 +该命令会生成一个作为Docker镜像的容器快照,我们可以通过运行 `docker images` 命令来查看Docker镜像,如下。 # docker images ![Docker Images](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-images.png) -正如我们所看见的,上面做的快照已经作为Docker镜像保存了。现在,为了备份该快照,我们有两个选择,一个是我们可以登陆进Docker注册中心,并推送该镜像;另一个是我们可以将Docker镜像打包成tarball备份,以供今后使用。 +正如我们所看见的,上面做的快照已经作为Docker镜像保存了。现在,为了备份该快照,我们有两个选择,一个是我们可以登录进Docker注册中心,并推送该镜像;另一个是我们可以将Docker镜像打包成tar包备份,以供今后使用。 如果我们想要在[Docker注册中心][2]上传或备份镜像,我们只需要运行 docker login 命令来登录进Docker注册中心,然后推送所需的镜像即可。 @@ -39,23 +39,23 @@ ![Docker Push](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-push.png) -如果我们不想备份到docker注册中心,而是想要将此镜像保存在本地机器中,以供日后使用,那么我们可以将其作为tarball备份。要完成该操作,我们需要运行以下 docker save 命令。 +如果我们不想备份到docker注册中心,而是想要将此镜像保存在本地机器中,以供日后使用,那么我们可以将其作为tar包备份。要完成该操作,我们需要运行以下 `docker save` 命令。 # docker save -o ~/container-backup.tar container-backup ![taking tarball backup](http://blog.linoxide.com/wp-content/uploads/2015/07/taking-tarball-backup.png) -要验证tarball时候已经生成,我们只需要在保存tarball的目录中运行 ls 命令。 +要验证tar包是否已经生成,我们只需要在保存tar包的目录中运行 ls 命令即可。 ### 2. 
恢复容器 ### -接下来,在我们成功备份了我们的Docker容器后,我们现在来恢复这些被快照成Docker镜像的容器。如果我们已经在注册中心推送了这些Docker镜像,那么我们仅仅需要把那个Docker镜像拖回并直接运行即可。 +接下来,在我们成功备份了我们的Docker容器后,我们现在来恢复这些制作了Docker镜像快照的容器。如果我们已经在注册中心推送了这些Docker镜像,那么我们仅仅需要把那个Docker镜像拖回并直接运行即可。 # docker pull arunpyasi/container-backup:test ![Docker Pull](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-pull.png) -但是,如果我们将这些Docker镜像作为tarball文件备份到了本地,那么我们只要使用 docker load 命令,后面加上tarball的备份路径,就可以加载该Docker镜像了。 +但是,如果我们将这些Docker镜像作为tar包文件备份到了本地,那么我们只要使用 docker load 命令,后面加上tar包的备份路径,就可以加载该Docker镜像了。 # docker load -i ~/container-backup.tar @@ -63,7 +63,7 @@ # docker images -在镜像被加载后,我们将从加载的镜像去运行Docker容器。 +在镜像被加载后,我们将用加载的镜像去运行Docker容器。 # docker run -d -p 80:80 container-backup @@ -71,11 +71,11 @@ ### 3. 迁移Docker容器 ### -迁移容器同时涉及到了上面两个操作,备份和恢复。我们可以将任何一个Docker容器从一台机器迁移到另一台机器。在迁移过程中,首先我们将容器的备份作为快照Docker镜像。然后,该Docker镜像或者是被推送到了Docker注册中心,或者被作为tarball文件保存到了本地。如果我们将镜像推送到了Docker注册中心,我们简单地从任何我们想要的机器上使用 docker run 命令来恢复并运行该容器。但是,如果我们将镜像打包成tarball备份到了本地,我们只需要拷贝或移动该镜像到我们想要的机器上,加载该镜像并运行需要的容器即可。 +迁移容器同时涉及到了上面两个操作,备份和恢复。我们可以将任何一个Docker容器从一台机器迁移到另一台机器。在迁移过程中,首先我们将把容器备份为Docker镜像快照。然后,该Docker镜像或者是被推送到了Docker注册中心,或者被作为tar包文件保存到了本地。如果我们将镜像推送到了Docker注册中心,我们简单地从任何我们想要的机器上使用 docker run 命令来恢复并运行该容器。但是,如果我们将镜像打包成tar包备份到了本地,我们只需要拷贝或移动该镜像到我们想要的机器上,加载该镜像并运行需要的容器即可。 ### 尾声 ### -最后,我们已经学习了如何快速地备份、恢复和迁移Docker容器,本教程适用于各个成功运行Docker的操作系统平台。真的,Docker是一个相当简单易用,然而功能却十分强大的工具。它的命令相当易记,这些命令都非常短,带有许多简单而强大的标记和参数。上面的方法让我们备份容器时很是安逸,使得我们可以在日后很轻松地恢复它们。这会帮助我们恢复我们的容器和镜像,即便主机系统崩溃,甚至意外地被清除。如果你还有很多问题、建议、反馈,请在下面的评论框中写出来吧,可以帮助我们改进或更新我们的内容。谢谢大家!享受吧 :-) +最后,我们已经学习了如何快速地备份、恢复和迁移Docker容器,本教程适用于各个可以成功运行Docker的操作系统平台。真的,Docker是一个相当简单易用,然而功能却十分强大的工具。它的命令相当易记,这些命令都非常短,带有许多简单而强大的标记和参数。上面的方法让我们备份容器时很是安逸,使得我们可以在日后很轻松地恢复它们。这会帮助我们恢复我们的容器和镜像,即便主机系统崩溃,甚至意外地被清除。如果你还有很多问题、建议、反馈,请在下面的评论框中写出来吧,可以帮助我们改进或更新我们的内容。谢谢大家!享受吧 :-) -------------------------------------------------------------------------------- @@ -83,7 +83,7 @@ via: http://linoxide.com/linux-how-to/backup-restore-migrate-containers-docker/ 作者:[Arun 
Pyasi][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From f6f0cfde12d224b51ddcb416a18d01626351c94d Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 6 Aug 2015 15:06:17 +0800 Subject: [PATCH 073/697] =?UTF-8?q?20150806-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...minism and increasing diversity in tech.md | 81 +++++++++++++++++++ 1 file changed, 81 insertions(+) create mode 100644 sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md diff --git a/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md b/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md new file mode 100644 index 0000000000..36f5642c10 --- /dev/null +++ b/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md @@ -0,0 +1,81 @@ +Torvalds 2.0: Patricia Torvalds on computing, college, feminism, and increasing diversity in tech +================================================================================ +![Image by : Photo by Becky Svartström. Modified by Opensource.com. CC BY-SA 4.0](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-lead-patriciatorvalds.png) +Image by : Photo by Becky Svartström. Modified by Opensource.com. [CC BY-SA 4.0][1] + +Patricia Torvalds isn't the Torvalds name that pops up in Linux and open source circles. Yet. 
+ +![](http://opensource.com/sites/default/files/images/life-uploads/ptorvalds.png) + +At 18, Patricia is a feminist with a growing list of tech achievements, open source industry experience, and her sights set on diving into her freshman year of college at Duke University's Pratt School of Engineering. She works for [Puppet Labs][2] in Portland, Oregon, as an intern, but soon she'll head to Durham, North Carolina, to start the fall semester of college. + +In this exclusive interview, Patricia explains what got her interested in computer science and engineering (spoiler alert: it wasn't her father), what her high school did "right" with teaching tech, the important role feminism plays in her life, and her thoughts on the lack of diversity in technology. + +![](http://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png) + +### What made you interested in studying computer science and engineering? ### + +My interest in tech really grew throughout high school. I wanted to go into biology for a while, until around my sophomore year. I had a web design internship at the Portland VA after my sophomore year. And I took an engineering class called Exploratory Ventures, which sent an ROV into the Pacific ocean late in my sophomore year, but the turning point was probably when I was named a regional winner and national runner up for the [NCWIT Aspirations in Computing][3] award halfway through my junior year. + +The award made me feel validated in my interest, of course, but I think the most important part of it was getting to join a Facebook group for all the award winners. The girls who have won the award are absolutely incredible and so supportive of each other. I was definitely interested in computer science before I won the award, because of my work in XV and at the VA, but having these girls to talk to solidified my interest and has kept it really strong. 
Teaching XV—more on that later—my junior and senior year, also, made engineering and computer science really fun for me. + +### What do you plan to study? And do you already know what you want to do after college? ### + +I hope to major in either Mechanical or Electrical and Computer Engineering as well as Computer Science, and minor in Women's Studies. After college, I hope to work for a company that supports or creates technology for social good, or start my own company. + +### My daughter had one high school programming class—Visual Basic. She was the only girl in her class, and she ended up getting harassed and having a miserable experience. What was your experience like? ### + +My high school began offering computer science classes my senior year, and I took Visual Basic as well! The class wasn't bad, but I was definitely one of three or four girls in the class of 20 or so students. Other computing classes seemed to have similar gender breakdowns. However, my high school was extremely small and the teacher was supportive of inclusivity in tech, so there was no harassment that I noticed. Hopefully the classes become more diverse in future years. + +### What did your schools do right technology-wise? And how could they have been better? ### + +My high school gave us consistent access to computers, and teachers occasionally assigned technology-based assignments in unrelated classes—we had to create a website for a social studies class a few times—which I think is great because it exposes everyone to tech. The robotics club was also pretty active and well-funded, but fairly small; I was not a member. One very strong component of the school's technology/engineering program is actually a student-taught engineering class called Exploratory Ventures, which is a hands-on class that tackles a new engineering or computer science problem every year. 
I taught it for two years with a classmate of mine, and have had students come up to me and tell me they're interested in pursuing engineering or computer science as a result of the class. + +However, my high school was not particularly focused on deliberately including young women in these programs, and it isn't very racially diverse. The computing-based classes and clubs were, by a vast majority, filled with white male students. This could definitely be improved on. + +### Growing up, how did you use technology at home? ### + +Honestly, when I was younger I used my computer time (my dad created a tracker, which logged us off after an hour of Internet use) to play Neopets or similar games. I guess I could have tried to mess with the tracker or played on the computer without Internet use, but I just didn't. I sometimes did little science projects with my dad, and I remember once printing "Hello world" in the terminal with him a thousand times, but mostly I just played online games with my sisters and didn't get my start in computing until high school. + +### You were active in the Feminism Club at your high school. What did you learn from that experience? What feminist issues are most important to you now? ### + +My friend and I co-founded Feminism Club at our high school late in our sophomore year. We did receive lots of resistance to the club at first, and while that never entirely went away, by the time we graduated feminist ideals were absolutely a part of the school's culture. The feminist work we did at my high school was generally on a more immediate scale and focused on issues like the dress code. + +Personally, I'm very focused on intersectional feminism, which is feminism as it applies to other aspects of oppression like racism and classism. The Facebook page [Guerrilla Feminism][4] is a great example of an intersectional feminism and has done so much to educate me. I currently run the Portland branch. 
+ +Feminism is also important to me in terms of diversity in tech, although as an upper-class white woman with strong connections in the tech world, the problems here affect me much less than they do other people. The same goes for my involvement in intersectional feminism. Publications like [Model View Culture][5] are very inspiring to me, and I admire Shanley Kane so much for what she does. + +### What advice would you give parents who want to teach their children how to program? ### + +Honestly, nobody ever pushed me into computer science or engineering. Like I said, for a long time I wanted to be a geneticist. I got a summer internship doing web design for the VA the summer after my sophomore year and totally changed my mind. So I don't know if I can fully answer that question. + +I do think genuine interest is important, though. If my dad had sat me down in front of the computer and told me to configure a webserver when I was 12, I don't think I'd be interested in computer science. Instead, my parents gave me a lot of free rein to do what I wanted, which was mostly coding terrible little HTML sites for my Neopets. Neither of my younger sisters is interested in engineering or computer science, and my parents don't care. I'm really lucky my parents have given me and my sisters the encouragement and resources to explore our interests. + +Still, I grew up saying my future career would be "like my dad's"—even when I didn't know what he did. He has a pretty cool job. Also, one time when I was in middle school, I told him that and he got a little choked up and said I wouldn't think that in high school. So I guess that motivated me a bit. + +### What suggestions do you have for leaders in open source communities to help them attract and maintain a more diverse mix of contributors? ### + +I'm actually not active in particular open source communities. 
I feel much more comfortable discussing computing with other women; I'm a member of the [NCWIT Aspirations in Computing][6] network and it's been one of the most important aspects of my continued interest in technology, as well as the Facebook group [Ladies Storm Hackathons][7]. + +I think this applies well to attracting and maintaining a talented and diverse mix of contributors: Safe spaces are important. I have seen the misogynistic and racist comments made in some open source communities, and subsequent dismissals when people point out the issues. I think that in maintaining a professional community there have to be strong standards on what constitutes harassment or inappropriate conduct. Of course, people can—and will—have a variety of opinions on what they should be able to express in open source communities, or any community. However, if community leaders actually want to attract and maintain diverse talent, they need to create a safe space and hold community members to high standards. + +I also think that some community leaders just don't value diversity. It's really easy to argue that tech is a meritocracy, and the reason there are so few marginalized people in tech is just that they aren't interested, and that the problem comes from earlier on in the pipeline. They argue that if someone is good enough at their job, their gender or race or sexual orientation doesn't matter. That's the easy argument. But I was raised not to make excuses for mistakes. And I think the lack of diversity is a mistake, and that we should be taking responsibility for it and actively trying to make it better. 
+ +-------------------------------------------------------------------------------- + +via: http://opensource.com/life/15/8/patricia-torvalds-interview + +作者:[Rikki Endsley][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://opensource.com/users/rikki-endsley +[1]:https://creativecommons.org/licenses/by-sa/4.0/ +[2]:https://puppetlabs.com/ +[3]:https://www.aspirations.org/ +[4]:https://www.facebook.com/guerrillafeminism +[5]:https://modelviewculture.com/ +[6]:https://www.aspirations.org/ +[7]:https://www.facebook.com/groups/LadiesStormHackathons/ \ No newline at end of file From 60edce17ba50fa21a0caec3f4350b3b79c249e6b Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 6 Aug 2015 16:05:07 +0800 Subject: [PATCH 074/697] =?UTF-8?q?20150806-5=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...witch for debugging and troubleshooting.md | 69 ++++++++++++++++++ ...or--No module named wxversion' on Linux.md | 49 +++++++++++++ ...th Answers--How to install git on Linux.md | 72 +++++++++++++++++++ 3 files changed, 190 insertions(+) create mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md create mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md create mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md new file mode 100644 index 0000000000..2b4e16bcaf --- /dev/null +++ b/sources/tech/20150806 Linux FAQs with Answers--How to 
enable logging in Open vSwitch for debugging and troubleshooting.md @@ -0,0 +1,69 @@ +Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting +================================================================================ +> **Question:** I am trying to troubleshoot my Open vSwitch deployment. For that I would like to inspect its debug messages generated by its built-in logging mechanism. How can I enable logging in Open vSwitch, and change its logging level (e.g., to INFO/DEBUG level) to check more detailed debug information? + +Open vSwitch (OVS) is the most popular open-source implementation of a virtual switch on the Linux platform. As today's data centers increasingly rely on the software-defined network (SDN) architecture, OVS is rapidly being adopted as the de facto standard network element in data center SDN deployments. + +Open vSwitch has a built-in logging mechanism called VLOG. The VLOG facility allows one to enable and customize logging within various components of the switch. The logging information generated by VLOG can be sent to a combination of console, syslog and a separate log file for inspection. You can configure OVS logging dynamically at run-time with a command-line tool called `ovs-appctl`. + +![](https://farm1.staticflickr.com/499/19300367114_cd8aac2fb2_c.jpg) + +Here is how to enable logging and customize logging levels in Open vSwitch with `ovs-appctl`. + +The syntax of `ovs-appctl` to customize VLOG is as follows. 
+ + $ sudo ovs-appctl vlog/set module[:facility[:level]] + +- **Module**: name of any valid component in OVS (e.g., netdev, ofproto, dpif, vswitchd, and many others) +- **Facility**: destination of logging information (must be: console, syslog or file) +- **Level**: verbosity of logging (must be: emer, err, warn, info, or dbg) + +In OVS source code, module name is defined in each source file in the form of: + + VLOG_DEFINE_THIS_MODULE(); + +For example, in lib/netdev.c, you will see: + + VLOG_DEFINE_THIS_MODULE(netdev); + +which indicates that lib/netdev.c is part of netdev module. Any logging messages generated in lib/netdev.c will belong to netdev module. + +In OVS source code, there are multiple severity levels used to define several different kinds of logging messages: VLOG_INFO() for informational, VLOG_WARN() for warning, VLOG_ERR() for error, VLOG_DBG() for debugging, VLOG_EMERG for emergency. Logging level and facility determine which logging messages are sent where. + +To see a full list of available modules, facilities, and their respective logging levels, run the following commands. This command must be invoked after you have started OVS. + + $ sudo ovs-appctl vlog/list + +![](https://farm1.staticflickr.com/465/19734939478_7eb5d44635_c.jpg) + +The output shows the debug levels of each module for three different facilities (console, syslog, file). By default, all modules have their logging level set to INFO. + +Given any one OVS module, you can selectively change the debug level of any particular facility. For example, if you want to see more detailed debug messages of dpif module at the console screen, run the following command. + + $ sudo ovs-appctl vlog/set dpif:console:dbg + +You will see that dpif module's console facility has changed its logging level to DBG. The logging level of two other facilities, syslog and file, remains unchanged. 
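+ +Since `vlog/set` takes a free-form module[:facility[:level]] string, a mistyped facility or level name is only rejected by the daemon at run time. As a convenience, such specs can be sanity-checked in the shell first. The wrapper below is a hypothetical sketch (not part of Open vSwitch); it accepts only the facility and level names listed above before calling `ovs-appctl`:

```shell
# Hypothetical helper (not part of OVS): validate the facility and level
# fields of a module:facility:level spec locally, so that typos fail fast
# instead of being rejected by the daemon.
vlog_set() {
    spec=$1
    case $spec in
        *:*) ;;                                   # facility/level given: validate below
        *)   sudo ovs-appctl vlog/set "$spec"; return ;;   # module only
    esac
    facility=$(printf '%s' "$spec" | cut -d: -f2)
    level=$(printf '%s' "$spec" | cut -d: -f3)
    case $facility in
        console|syslog|file|ANY) ;;
        *) echo "vlog_set: unknown facility '$facility'" >&2; return 1 ;;
    esac
    case $level in
        emer|err|warn|info|dbg|ANY|'') ;;
        *) echo "vlog_set: unknown level '$level'" >&2; return 1 ;;
    esac
    sudo ovs-appctl vlog/set "$spec"
}
```

With this in place, a call such as `vlog_set dpif:console:debug` fails immediately with an error message (the level name used by the article's list is `dbg`), before anything is sent to the daemon.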
+ +![](https://farm1.staticflickr.com/333/19896760146_5d851311ae_c.jpg) + +If you want to change the logging level for all modules, you can specify "ANY" as the module name. For example, the following command will change the console logging level of every module to DBG. + + $ sudo ovs-appctl vlog/set ANY:console:dbg + +![](https://farm1.staticflickr.com/351/19734939828_8c7f59e404_c.jpg) + +Also, if you want to change the logging level of all three facilities at once, you can specify "ANY" as the facility name. For example, the following command will change the logging level of all facilities for every module to DBG. + + $ sudo ovs-appctl vlog/set ANY:ANY:dbg + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/enable-logging-open-vswitch.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md new file mode 100644 index 0000000000..11d814d8f4 --- /dev/null +++ b/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md @@ -0,0 +1,49 @@ +Linux FAQs with Answers--How to fix “ImportError: No module named wxversion” on Linux +================================================================================ +> **Question:** I was trying to run a Python application on [insert your Linux distro], but I got an error "ImportError: No module named wxversion." How can I solve this error in the Python program? + + Looking for python... 
2.7.9 - Traceback (most recent call last): + File "/home/dev/playonlinux/python/check_python.py", line 1, in <module> + import os, wxversion + ImportError: No module named wxversion + failed tests + +This error indicates that your Python application is GUI-based, relying on a missing Python module called wxPython. [wxPython][1] is a Python extension module for the wxWidgets GUI library, popularly used by C++ programmers to design GUI applications. The wxPython extension allows Python developers to easily design and integrate a GUI within any Python application. + +To solve this import error, you need to install wxPython on your Linux system, as described below. + +### Install wxPython on Debian, Ubuntu or Linux Mint ### + + $ sudo apt-get install python-wxgtk2.8 + +### Install wxPython on Fedora ### + + $ sudo yum install wxPython + +### Install wxPython on CentOS/RHEL ### + +wxPython is available in the EPEL repository of CentOS/RHEL, not in the base repositories. Thus, first [enable EPEL repository][2] on your system, and then use the yum command. 
+ + $ sudo yum install wxPython + +### Install wxPython on Arch Linux ### + + $ sudo pacman -S wxpython + +### Install wxPython on Gentoo ### + + $ emerge wxPython + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:http://wxpython.org/ +[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html \ No newline at end of file diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md new file mode 100644 index 0000000000..c5c34f3a72 --- /dev/null +++ b/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md @@ -0,0 +1,72 @@ +Linux FAQs with Answers--How to install git on Linux +================================================================================ +> **Question:** I am trying to clone a project from a public Git repository, but I am getting a "git: command not found" error. How can I install git on [insert your Linux distro]? + +Git is a popular open-source version control system (VCS) originally developed for the Linux environment. Contrary to other VCS tools like CVS or SVN, Git's revision control is considered "distributed" in the sense that your local Git working directory can function as a fully-working repository with complete history and version-tracking capabilities. In this model, each collaborator commits to his or her local repository (as opposed to always committing to a central repository), and optionally pushes to a centralized repository if need be. 
This brings in scalability and redundancy to the revision control system, which is a must in any kind of large-scale collaboration. + +![](https://farm1.staticflickr.com/341/19433194168_c79d4570aa_b.jpg) + +### Install Git with a Package Manager ### + +Git is shipped with all major Linux distributions. Thus the easiest way to install Git is by using your Linux distro's package manager. + +**Debian, Ubuntu, or Linux Mint** + + $ sudo apt-get install git + +**Fedora, CentOS or RHEL** + + $ sudo yum install git + +**Arch Linux** + + $ sudo pacman -S git + +**OpenSUSE** + + $ sudo zypper install git + +**Gentoo** + + $ emerge --ask --verbose dev-vcs/git + +### Install Git from the Source ### + +If for whatever reason you want to build Git from the source, you can follow the instructions below. + +**Install Dependencies** + +Before building Git, first install its dependencies. + +**Debian, Ubuntu or Linux Mint** + + $ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x + +**Fedora, CentOS or RHEL** + + $ sudo yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel asciidoc xmlto docbook2x + +#### Compile Git from the Source #### + +Download the latest release of Git from [https://github.com/git/git/releases][1]. Then build and install Git under /usr as follows. + +Note that if you want to install it under a different directory (e.g., /opt), replace "--prefix=/usr" in the configure command with something else. 
+ + $ cd git-x.x.x + $ make configure + $ ./configure --prefix=/usr + $ make all doc info + $ sudo make install install-doc install-html install-info + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/install-git-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:https://github.com/git/git/releases \ No newline at end of file From c709912c2c77d4f618418743de6b1eae007c76f6 Mon Sep 17 00:00:00 2001 From: KS Date: Thu, 6 Aug 2015 16:11:10 +0800 Subject: [PATCH 075/697] Update 20150803 Managing Linux Logs.md --- sources/tech/20150803 Managing Linux Logs.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150803 Managing Linux Logs.md b/sources/tech/20150803 Managing Linux Logs.md index d68adddf52..e317a63253 100644 --- a/sources/tech/20150803 Managing Linux Logs.md +++ b/sources/tech/20150803 Managing Linux Logs.md @@ -1,3 +1,4 @@ +wyangsun translating Managing Linux Logs ================================================================================ A key best practice for logging is to centralize or aggregate your logs in one place, especially if you have multiple servers or tiers in your architecture. We’ll tell you why this is a good idea and give tips on how to do it easily. 
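A minimal sketch of what that centralization can look like on the sending side, assuming the servers run rsyslog; the collector hostname logs.example.com and port 514 are placeholders, not real infrastructure:

```shell
# One rsyslog forwarding rule ships every log message from this host to a
# central collector. "@@" selects TCP; a single "@" would select UDP.
rule='*.* @@logs.example.com:514'
printf '%s\n' "$rule"
# In a real setup this line would be saved under /etc/rsyslog.d/ and the
# rsyslog service restarted; the exact restart command varies by distro,
# so it is left out here.
```

On each server, one such line covers the client side of aggregation; the collector still has to be configured separately to listen on that port.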
@@ -415,4 +416,4 @@ via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/ [19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html [20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/ [21]:https://github.com/progrium/logspout -[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ \ No newline at end of file +[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ From 5a078ed4770c650a2fdc78eab7798b37ae4a8df4 Mon Sep 17 00:00:00 2001 From: mousycoder Date: Thu, 6 Aug 2015 18:54:15 +0800 Subject: [PATCH 076/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ware Developer is a Great Career Choice.md | 51 ----------- ...ware Developer is a Great Career Choice.md | 91 +++++++++++++++++++ 2 files changed, 91 insertions(+), 51 deletions(-) delete mode 100644 sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md create mode 100644 translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md diff --git a/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md deleted file mode 100644 index d24aa83983..0000000000 --- a/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md +++ /dev/null @@ -1,51 +0,0 @@ -Translating by MousyCoder -5 Reasons Why Software Developer is a Great Career Choice -================================================================================ -This week I will give a presentation at a local high school on what it is like to work as a programmer. I am volunteering (through the organization [Transfer][1]) to come to schools and talk about what I work with. 
This school will have a technology theme day this week, and would like to hear what working in the technology sector is like. Since I develop software, that’s what I will talk about. One section will be on why I think a career in software development is great. The main reasons are: - -### 5 Reasons ### - -**1 Creative**. If you ask people to name creative jobs, chances are they will say things like writer, musician or painter. But few people know that software development is also very creative. It is almost by definition creative, since you create new functionality that didn’t exist before. The solutions can be expressed in many ways, both structurally and in the details. Often there are trade-offs to make (for example speed versus memory consumption). And of course the solution has to be correct. All this requires creativity. - -**2 Collaborative**. Another myth is that programmers sit alone at their computers and code all day. But software development is in fact almost always a team effort. You discuss programming problems and solutions with your colleagues, and discuss requirements and other issues with product managers, testers and customers. It is also telling that pair-programming (two developers programming together on one computer) is a popular practice. - -**3 In demand**. More and more in the world is using software, or as Marc Andreessen put it: “[Software is Eating the World][2]“. Even as there are more programmers (in Stockholm, programmer is now the [most common occupation][3]), demand is still outpacing supply. Software companies report that one of their greatest challenges is [finding good developers][4]. I regularly get contacted by recruiters trying to get me to change jobs. I don’t know of many other professions where employers compete for you like that. - -**4 Pays well**. Developing software can create a lot of value. There is no marginal cost to selling one extra copy of software you have already developed. 
This combined with the high demand for developers means that pay is quite good. There are of course occupations where you make more money, but compared to the general population, I think software developers are paid quite well. - -**5 Future proof**. Many jobs disappear, often because they can be replaced by computers and software. But all those new programs still need to be developed and maintained, so the outlook for programmers is quite good. - -### But… ### - -**What about outsourcing?** Won’t all software development be outsourced to countries where the salaries are much lower? This is an example of an idea that is better in theory than in practice (much like the [waterfall development methodology][5]). Software development is a discovery activity as much as a design activity. It benefits greatly from intense collaboration. Furthermore, especially when the main product is software, the knowledge gained when developing it is a competitive advantage. The easier that knowledge is shared within the whole company, the better it is. - -Another way to look at it is this. Outsourcing of software development has existed for quite a while now. Yet there is still high demand for local developers. So companies see benefits of hiring local developers that outweigh the higher costs. - -### How to Win ### - -There are many reasons why I think developing software is enjoyable (see also [Why I Love Coding][6]). But it is not for everybody. Fortunately it is quite easy to try programming out. There are innumerable resources on the web for learning to program. For example, both [Coursera][7] and [Udacity][8] have introductory courses. If you have never programmed, try one of the free courses or tutorials to get a feel for it. - -Finding something you really enjoy to do for a living has at least two benefits. First, since you do it every day, work will be much more fun than if you simply do something to make money. 
Second, if you really like it, you have a much better chance of getting good at it. I like the Venn diagram below (by [@eskimon][9]) on what constitutes a great job. Since programming pays relatively well, I think that if you like it, you have a good chance of ending up in the center of the diagram! - -![](https://henrikwarne1.files.wordpress.com/2014/12/career-planning.png) - --------------------------------------------------------------------------------- - -via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/ - -作者:[Henrik Warne][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://henrikwarne.com/ -[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23 -[2]:http://online.wsj.com/articles/SB10001424053111903480904576512250915629460 -[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/ -[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov -[5]:http://en.wikipedia.org/wiki/Waterfall_model -[6]:http://henrikwarne.com/2012/06/02/why-i-love-coding/ -[7]:https://www.coursera.org/ -[8]:https://www.udacity.com/ -[9]:https://eskimon.wordpress.com/about/ \ No newline at end of file diff --git a/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md new file mode 100644 index 0000000000..a592cb595e --- /dev/null +++ b/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md @@ -0,0 +1,91 @@ +选择软件开发攻城师的5个原因 +================================================================================ +这个星期我将给本地一所高中做一次有关于程序猿是怎样工作的演讲,我是 [Transfer][1] 组织推荐来到这所学校,谈论我的工作。这个学校本周将有一个技术主题日,并且他们很想听听科技行业是怎样工作的。因为我是从事软件开发的,这也是我将和学生们讲的内容。我为什么觉得软件开发是一个很酷的职业将是演讲的其中一部分。主要原因如下: + +### 5个原因 ### + +**1 创造性** + 
+![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/34042817.jpg) + +如果你问别人创造性的工作有哪些,别人通常会说像作家,音乐家或者画家那样的(工作)。但是极少有人知道软件开发也是一项非常具有创造性的工作。因为你创造了一个以前没有的新功能,这样的功能基本上可以被定义为非常具有创造性。这种解决方案可以在整体和细节上以很多形式来展现。我们经常会遇到一些需要做权衡的场景(比如说运行速度与内存消耗的权衡)。当然前提是这种解决方案必须是正确的。其实这些所有的行为都是需要强大的创造性的。 + +**2 协作性** + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/94579377.jpg) + +另外一个表象是程序猿们独自坐在他们的电脑前,然后撸一天的代码。但是软件开发事实上通常总是一个团队努力的结果。你会经常和你的同事讨论编程问题以及解决方案,并且和产品经理,测试人员,客户讨论需求以及其他问题。 +经常有人说,结对编程(2个开发人员一起在一个电脑上编程)是一种流行的最佳实践。 + + +**3 高需性** + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/26662164.jpg) + +世界上越来越多的人在用软件,正如 [Marc Andreessen](https://en.wikipedia.org/wiki/Marc_Andreessen) 所说 " [软件正在吞噬世界][2] "。虽然程序猿现在的数量非常巨大(在斯德哥尔摩,程序猿现在是 [最普遍的职业][3] ),但是,需求量一直处于供不应求的局面。据软件公司报告,他们最大的挑战之一就是 [找到优秀的程序猿][4] 。我也经常接到那些想让我跳槽的招聘人员打来的电话。我知道至少除软件行业之外的其他行业的雇主不会那么拼(的去招聘)。 + +**4 高酬性** + + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/50538928.jpg) + +软件开发可以带来不菲的收入。卖一份你已经开发好的软件的额外副本是没有 [边际成本][5] 的。这个事实与对程序猿的高需求意味着收入相当可观。当然还有许多更捞金的职业,但是相比一般人群,我认为软件开发者确实“日进斗金”(知足吧!骚年~~)。 + +**5 前瞻性** + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/89799239.jpg) + +有许多工作岗位消失,往往是由于它们可以被计算机和软件代替。但是所有这些新的程序依然需要开发和维护,因此,程序猿的前景还是相当好的。 + + +### 但是...### + +**外包又是怎么一回事呢?** + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/41615753.jpg) + +难道所有外包到其他地区的软件开发的薪水都很低吗?这是一个理想丰满,现实骨感的例子(有点像 [瀑布开发模型][6] )。软件开发基本上跟设计的工作一样,是一个探索发现的工作。它受益于强有力的合作。更进一步说,特别当你的主打产品是软件的时候,你所掌握的开发知识是绝对的优势。知识在整个公司中分享的越容易,那么公司的发展也将越来越好。 + + +换一种方式去看待这个问题。软件外包已经存在了相当一段时间了。但是对本土程序猿的需求量依旧非常高。因为许多软件公司看到了雇佣本土程序猿的带来的收益要远远超过了相对较高的成本(其实还是赚了)。 + +### 如何成为人生大赢家 ### + + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/44219908.jpg) + +虽然我有许多我认为软件开发是一件非常有趣的事情的理由 (详情见: [为什么我热爱编程][7] )。但是这些理由,并不适用于所有人。幸运的是,尝试编程是一件非常容易的事情。在互联网上有数不尽的学习编程的资源。例如,[Coursera][8] 和 [Udacity][9] 都拥有很好的入门课程。如果你从来没有撸过码,可以尝试其中一个免费的课程,找找感觉。 + +寻找一个既热爱又能谋生的事情至少有2个好处。首先,由于你天天去做,工作将比你简单的只为谋生要有趣的多。其次,如果你真的非常喜欢,你将更好的擅长它。我非常喜欢下面一副关于伟大工作组成的韦恩图(作者 [@eskimon)][10] 。因为编码的薪水确实相当不错,我认为如果你真的喜欢它,你将有一个很好的机会,成为人生的大赢家! 
+ +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/17571624.jpg) + +-------------------------------------------------------------------------------- + +via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/ + +作者:[Henrik Warne][a] +译者:[mousycoder](https://github.com/mousycoder) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + + +[a]:http://henrikwarne.com/ +[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23 +[2]:http://www.wsj.com/articles/SB10001424053111903480904576512250915629460 +[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/ +[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov +[5]:https://en.wikipedia.org/wiki/Marginal_cost +[6]:https://en.wikipedia.org/wiki/Waterfall_model +[7]:http://henrikwarne.com/2012/06/02/why-i-love-coding/ +[8]:https://www.coursera.org/ +[9]:https://www.udacity.com/ +[10]:https://eskimon.wordpress.com/about/ + + + + + + + From a37ff025ebb8b740d71d06ed1e6178a5ab4ccc3c Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 7 Aug 2015 00:06:17 +0800 Subject: [PATCH 077/697] PUB:20150717 How to monitor NGINX- Part 1 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @strugglingyouth 好长,翻译辛苦了。其中有些字句翻译不对,比如 upstreaming 是指上游(服务器),并不专指负载均衡环境。 --- .../20150717 How to monitor NGINX- Part 1.md | 231 ++++++++++ .../20150717 How to monitor NGINX- Part 1.md | 416 ------------------ 2 files changed, 231 insertions(+), 416 deletions(-) create mode 100644 published/20150717 How to monitor NGINX- Part 1.md delete mode 100644 translated/tech/20150717 How to monitor NGINX- Part 1.md diff --git a/published/20150717 How to monitor NGINX- Part 1.md b/published/20150717 How to monitor NGINX- Part 1.md new file mode 100644 index 0000000000..908aa7448e --- /dev/null +++ b/published/20150717 How to monitor NGINX- Part 
1.md @@ -0,0 +1,231 @@ +如何监控 NGINX(第一篇) +================================================================================ +![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) + +### NGINX 是什么? ### + +[NGINX][1] (发音为 “engine X”) 是一种流行的 HTTP 和反向代理服务器。作为一个 HTTP 服务器,NGINX 可以使用较少的内存非常高效可靠地提供静态内容。作为[反向代理][2],它可以用作多个后端服务器或类似缓存和负载平衡这样的其它应用的单一访问控制点。NGINX 是一个自由开源的产品,并有一个具备更全的功能的叫做 NGINX Plus 的商业版。 + +NGINX 也可以用作邮件代理和通用的 TCP 代理,但本文并不直接讨论 NGINX 的那些用例的监控。 + +### NGINX 主要指标 ### + +通过监控 NGINX 可以 捕获到两类问题:NGINX 本身的资源问题,和出现在你的基础网络设施的其它问题。大多数 NGINX 用户会用到以下指标的监控,包括**每秒请求数**,它提供了一个由所有最终用户活动组成的上层视图;**服务器错误率** ,这表明你的服务器已经多长没有处理看似有效的请求;还有**请求处理时间**,这说明你的服务器处理客户端请求的总共时长(并且可以看出性能降低或当前环境的其他问题)。 + +更一般地,至少有三个主要的指标类别来监视: + +- 基本活动指标 +- 错误指标 +- 性能指标 + +下面我们将分析在每个类别中最重要的 NGINX 指标,以及用一个相当普遍但是值得特别提到的案例来说明:使用 NGINX Plus 作反向代理。我们还将介绍如何使用图形工具或可选择的监控工具来监控所有的指标。 + +本文引用指标术语[来自我们的“监控 101 系列”][3],,它提供了一个指标收集和警告框架。 + +#### 基本活跃指标 #### + +无论你在怎样的情况下使用 NGINX,毫无疑问你要监视服务器接收多少客户端请求和如何处理这些请求。 + +NGINX Plus 上像开源 NGINX 一样可以报告基本活跃指标,但它也提供了略有不同的辅助模块。我们首先讨论开源的 NGINX,再来说明 NGINX Plus 提供的其他指标的功能。 + +**NGINX** + +下图显示了一个客户端连接的过程,以及开源版本的 NGINX 如何在连接过程中收集指标。 + +![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png) + +Accepts(接受)、Handled(已处理)、Requests(请求)是一直在增加的计数器。Active(活跃)、Waiting(等待)、Reading(读)、Writing(写)随着请求量而增减。 + +| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| +|-----------|-----------------|-------------------------------------------------------------------------------------------------------------------------| +| Accepts | NGINX 所接受的客户端连接数 | 资源: 功能 | +| Handled | 成功的客户端连接数 | 资源: 功能 | +| Active | 当前活跃的客户端连接数| 资源: 功能 | +| Dropped(已丢弃,计算得出)| 丢弃的连接数(接受 - 已处理)| 工作:错误*| +| Requests | 客户端请求数 | 工作:吞吐量 | + + +_*严格的来说,丢弃的连接是 [一个资源饱和指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics),但是因为饱和会导致 NGINX 停止服务(而不是延后该请求),所以,“已丢弃”视作 
[一个工作指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics) 比较合适。_ + +NGINX worker 进程接受 OS 的连接请求时 **Accepts** 计数器增加,而**Handled** 是当实际的请求得到连接时(通过建立一个新的连接或重新使用一个空闲的)。这两个计数器的值通常都是相同的,如果它们有差别则表明连接被**Dropped**,往往这是由于资源限制,比如已经达到 NGINX 的[worker_connections][4]的限制。 + +一旦 NGINX 成功处理一个连接时,连接会移动到**Active**状态,在这里对客户端请求进行处理: + +Active状态 + +- **Waiting**: 活跃的连接也可以处于 Waiting 子状态,如果有在此刻没有活跃请求的话。新连接可以绕过这个状态并直接变为到 Reading 状态,最常见的是在使用“accept filter(接受过滤器)” 和 “deferred accept(延迟接受)”时,在这种情况下,NGINX 不会接收 worker 进程的通知,直到它具有足够的数据才开始响应。如果连接设置为 keep-alive ,那么它在发送响应后将处于等待状态。 + +- **Reading**: 当接收到请求时,连接离开 Waiting 状态,并且该请求本身使 Reading 状态计数增加。在这种状态下 NGINX 会读取客户端请求首部。请求首部是比较小的,因此这通常是一个快速的操作。 + +- **Writing**: 请求被读取之后,其使 Writing 状态计数增加,并保持在该状态,直到响应返回给客户端。这意味着,该请求在 Writing 状态时, 一方面 NGINX 等待来自上游系统的结果(系统放在 NGINX “后面”),另外一方面,NGINX 也在同时响应。请求往往会在 Writing 状态花费大量的时间。 + +通常,一个连接在同一时间只接受一个请求。在这种情况下,Active 连接的数目 == Waiting 的连接 + Reading 请求 + Writing 。然而,较新的 SPDY 和 HTTP/2 协议允许多个并发请求/响应复用一个连接,所以 Active 可小于 Waiting 的连接、 Reading 请求、Writing 请求的总和。 (在撰写本文时,NGINX 不支持 HTTP/2,但预计到2015年期间将会支持。) + +**NGINX Plus** + +正如上面提到的,所有开源 NGINX 的指标在 NGINX Plus 中是可用的,但另外也提供其他的指标。本节仅说明了 NGINX Plus 可用的指标。 + + +![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png) + +Accepted (已接受)、Dropped,总数是不断增加的计数器。Active、 Idle(空闲)和处于 Current(当前)处理阶段的各种状态下的连接或请​​求的当前数量随着请求量而增减。 + +| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| +|-----------|-----------------|-------------------------------------------------------------------------------------------------------------------------| +| Accepted | NGINX 所接受的客户端连接数 | 资源: 功能 | +| Dropped |丢弃的连接数(接受 - 已处理)| 工作:错误*| +| Active | 当前活跃的客户端连接数| 资源: 功能 | +| Idle | 没有当前请求的客户端连接| 资源: 功能 | +| Total(全部) | 客户端请求数 | 工作:吞吐量 | + +_*严格的来说,丢弃的连接是 [一个资源饱和指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics),但是因为饱和会导致 NGINX 停止服务(而不是延后该请求),所以,“已丢弃”视作 
[一个工作指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics) 比较合适。_ + +当 NGINX Plus worker 进程接受 OS 的连接请求时 **Accepted** 计数器递增。如果 worker 进程为请求建立连接失败(通过建立一个新的连接或重新使用一个空闲),则该连接被丢弃, **Dropped** 计数增加。通常连接被丢弃是因为资源限制,如 NGINX Plus 的[worker_connections][4]的限制已经达到。 + +**Active** 和 **Idle** 和[如上所述][5]的开源 NGINX 的“active” 和 “waiting”状态是相同的,但是有一点关键的不同:在开源 NGINX 上,“waiting”状态包括在“active”中,而在 NGINX Plus 上“idle”的连接被排除在“active” 计数外。**Current** 和开源 NGINX 是一样的也是由“reading + writing” 状态组成。 + +**Total** 为客户端请求的累积计数。请注意,单个客户端连接可涉及多个请求,所以这个数字可能会比连接的累计次数明显大。事实上,(total / accepted)是每个连接的平均请求数量。 + +**开源 和 Plus 之间指标的不同** + +|NGINX (开源) |NGINX Plus| +|-----------------------|----------------| +| accepts | accepted | +| dropped 通过计算得来| dropped 直接得到 | +| reading + writing| current| +| waiting| idle| +| active (包括 “waiting”状态) | active (排除 “idle” 状态)| +| requests| total| + +**提醒指标: 丢弃连接** + +被丢弃的连接数目等于 Accepts 和 Handled 之差(NGINX 中),或是可直接得到标准指标(NGINX Plus 中)。在正常情况下,丢弃连接数应该是零。如果在每个单位时间内丢弃连接的速度开始上升,那么应该看看是否资源饱和了。 + +![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png) + +**提醒指标: 每秒请求数** + +按固定时间间隔采样你的请求数据(开源 NGINX 的**requests**或者 NGINX Plus 中**total**) 会提供给你单位时间内(通常是分钟或秒)所接受的请求数量。监测这个指标可以查看进入的 Web 流量尖峰,无论是合法的还是恶意的,或者突然的下降,这通常都代表着出现了问题。每秒请求数若发生急剧变化可以提醒你的环境出现问题了,即使它不能告诉你确切问题的位置所在。请注意,所有的请求都同样计数,无论 URL 是什么。 + +![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png) + +**收集活跃指标** + +开源的 NGINX 提供了一个简单状态页面来显示基本的服务器指标。该状态信息以标准格式显示,实际上任何图形或监控工具可以被配置去解析这些相关数据,以用于分析、可视化、或提醒。NGINX Plus 提供一个 JSON 接口来供给更多的数据。阅读相关文章“[NGINX 指标收集][6]”来启用指标收集的功能。 + +#### 错误指标 #### + +| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 | +|-----------|-----------------|--------------------------------------------------------------------------------------------------------|----------------| +| 4xx 代码 | 客户端错误计数 | 工作:错误 | NGINX 日志, NGINX Plus| +| 5xx 代码| 服务器端错误计数 | 工作:错误 
| NGINX 日志, NGINX Plus| + +NGINX 错误指标告诉你服务器是否经常返回错误而不是正常工作。客户端错误返回4XX状态码,服务器端错误返回5XX状态码。 + +**提醒指标: 服务器错误率** + +服务器错误率等于在单位时间(通常为一到五分钟)内5xx错误状态代码的总数除以[状态码][7](1XX,2XX,3XX,4XX,5XX)的总数。如果你的错误率随着时间的推移开始攀升,调查可能的原因。如果突然增加,可能需要采取紧急行动,因为客户端可能收到错误信息。 + +![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png) + +关于客户端错误的注意事项:虽然监控4XX是很有用的,但从该指标中你仅可以捕捉有限的信息,因为它只是衡量客户的行为而不捕捉任何特殊的 URL。换句话说,4xx出现的变化可能是一个信号,例如网络扫描器正在寻找你的网站漏洞时。 + +**收集错误度量** + +虽然开源 NGINX 不能马上得到用于监测的错误率,但至少有两种方法可以得到: + +- 使用商业支持的 NGINX Plus 提供的扩展状态模块 +- 配置 NGINX 的日志模块将响应码写入访问日志 + +关于这两种方法,请阅读相关文章“[NGINX 指标收集][6]”。 + +#### 性能指标 #### + +| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 | +|-----------|-----------------|--------------------------------------------------------------------------------------------------------|----------------| +| request time (请求处理时间)| 处理每个请求的时间,单位为秒 | 工作:性能 | NGINX 日志| + +**提醒指标: 请求处理时间** + +请求处理时间指标记录了 NGINX 处理每个请求的时间,从读到客户端的第一个请求字节到完成请求。较长的响应时间说明问题在上游。 + +**收集处理时间指标** + +NGINX 和 NGINX Plus 用户可以通过添加 $request_time 变量到访问日志格式中来捕​​捉处理时间数据。关于配置日志监控的更多细节在[NGINX指标收集][6]。 + +#### 反向代理指标 #### + +| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 | +|-----------|-----------------|--------------------------------------------------------------------------------------------------------|----------------| +| 上游服务器的活跃链接 | 当前活跃的客户端连接 | 资源:功能 | NGINX Plus | +| 上游服务器的 5xx 错误代码| 服务器错误 | 工作:错误 | NGINX Plus | +| 每个上游组的可用服务器 | 服务器传递健康检查 | 资源:可用性| NGINX Plus + +[反向代理][9]是 NGINX 最常见的使用方法之一。商业支持的 NGINX Plus 显示了大量有关后端(或“上游 upstream”)的服务器指标,这些与反向代理设置相关的。本节重点介绍了几个 NGINX Plus 用户可用的关键上游指标。 + +NGINX Plus 首先将它的上游指标按组分开,然后是针对单个服务器的。因此,例如,你的反向代理将请求分配到五个上游的 Web 服务器上,你可以一眼看出是否有单个服务器压力过大,也可以看出上游组中服务器的健康状况,以确保良好的响应时间。 + +**活跃指标** + +**每上游服务器的活跃连接**的数量可以帮助你确认反向代理是否正确的分配工作到你的整个服务器组上。如果你正在使用 NGINX 作为负载均衡器,任何一台服务器处理的连接数的明显偏差都可能表明服务器正在努力消化请求,或者是你配置使用的负载均衡的方法(例如[round-robin 或 IP hashing][10])不是最适合你流量模式的。 + 
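下面是一个从 NGINX Plus 的 JSON 状态接口提取每台上游服务器活跃连接数的粗略示例。注意:/status 端点和字段名只是基于一份示例响应的假设,请以你所用版本的官方文档为准;这里用一段预置的 JSON 字符串代替真实的 curl 调用:

```shell
# Hypothetical NGINX Plus status payload; in a live setup it would be
# fetched with something like: json=$(curl -s http://localhost:8080/status)
json='{"upstreams":{"backend":{"peers":[{"server":"10.0.0.1:80","active":4},{"server":"10.0.0.2:80","active":17}]}}}'

# Pull out "server"/"active" pairs without depending on jq.
summary=$(printf '%s\n' "$json" \
  | grep -o '"server":"[^"]*","active":[0-9]*' \
  | sed 's/"server":"\([^"]*\)","active":\([0-9]*\)/\1 active=\2/')
printf '%s\n' "$summary"
```

如上文所说,若某一台服务器的连接数明显偏离其余服务器,在这样的输出中一眼即可看出。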
+**错误指标** + +错误指标,上面所说的高于5XX(服务器错误)状态码,是监控指标中有价值的一个,尤其是响应码部分。 NGINX Plus 允许你轻松地提取**每个上游服务器的 5xx 错误代码**的数量,以及响应的总数量,以此来确定某个特定服务器的错误率。 + +**可用性指标** + +对于 web 服务器的运行状况,还有另一种角度,NGINX 可以通过**每个组中当前可用服务器的总量**很方便监控你的上游组的健康。在一个大的反向代理上,你可能不会非常关心其中一个服务器的当前状态,就像你只要有可用的服务器组能够处理当前的负载就行了。但监视上游组内的所有工作的服务器总量可为判断 Web 服务器的健康状况提供一个更高层面的视角。 + +**收集上游指标** + +NGINX Plus 上游指标显示在内部 NGINX Plus 的监控仪表盘上,并且也可通过一个JSON 接口来服务于各种外部监控平台。在我们的相关文章“[NGINX指标收集][6]”中有个例子。 + +### 结论 ### + +在这篇文章中,我们已经谈到了一些有用的指标,你可以使用表格来监控 NGINX 服务器。如果你是刚开始使用 NGINX,监控下面提供的大部分或全部指标,可以让你很好的了解你的网络基础设施的健康和活跃程度: + +- [已丢弃的连接][12] +- [每秒请求数][13] +- [服务器错误率][14] +- [请求处理数据][15] + +最终,你会学到更多,更专业的衡量指标,尤其是关于你自己基础设施和使用情况的。当然,监控哪一项指标将取决于你可用的工具。参见相关的文章来[逐步指导你的指标收集][6],不管你使用 NGINX 还是 NGINX Plus。 + +在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,这样你就可以以最少的设置来收集和监控所有 Web 服务器的指标。 [在本文中][17]了解如何用 NGINX Datadog来监控,并开始[免费试用 Datadog][18]吧。 + +### 诚谢 ### + +在文章发表之前非常感谢 NGINX 团队审阅这篇,并提供重要的反馈和说明。 + + +-------------------------------------------------------------------------------- + +via: https://www.datadoghq.com/blog/how-to-monitor-nginx/ + +作者:K Young +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://nginx.org/en/ +[2]:http://nginx.com/resources/glossary/reverse-proxy-server/ +[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/ +[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections +[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state +[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html +[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[9]:https://en.wikipedia.org/wiki/Reverse_proxy +[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/ +[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ 
+[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections +[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second +[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate +[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time +[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ +[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up +[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md +[20]:https://github.com/DataDog/the-monitor/issues diff --git a/translated/tech/20150717 How to monitor NGINX- Part 1.md b/translated/tech/20150717 How to monitor NGINX- Part 1.md deleted file mode 100644 index 86e72c0324..0000000000 --- a/translated/tech/20150717 How to monitor NGINX- Part 1.md +++ /dev/null @@ -1,416 +0,0 @@ -如何监控 NGINX - 第1部分 -================================================================================ -![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) - -### NGINX 是什么? 
### - -[NGINX][1] (发音为 “engine X”) 是一种流行的 HTTP 和反向代理服务器。作为一个 HTTP 服务器,NGINX 提供静态内容非常高效可靠,使用较少的内存。作为[反向代理][2],它可以用作一个单一的控制器来为其他应用代理至后端的多个服务器上,如高速缓存和负载平衡。NGINX 是作为一个免费,开源的产品并有更全的功能,商业版的叫 NGINX Plus。 - -NGINX 也可以用作邮件代理和通用的 TCP 代理,但本文并不直接说明对 NGINX 的这些用例做监控。 - -### NGINX 主要指标 ### - -通过监控 NGINX 可以捕捉两类问题:NGINX 本身的资源问题,也有很多问题会出现在你的基础网络设施处。大多数 NGINX 用户受益于以下指标的监控,包括**requests per second**,它提供了一个所有用户活动的高级视图;**server error rate** ,这表明你的服务器已经多长没有处理看似有效的请求;还有**request processing time**,这说明你的服务器处理客户端请求的总共时长(并且可以看出性能降低时或当前环境的其他问题)。 - -更一般地,至少有三个主要的指标类别来监视: - -- 基本活动指标 -- 错误指标 -- 性能指标 - -下面我们将分析在每个类别中最重要的 NGINX 指标,以及用一个相当普遍的案例来说明,值得特别说明的是:使用 NGINX Plus 作反向代理。我们还将介绍如何使用图形工具或可选择的监控工具来监控所有的指标。 - -本文引用指标术语[介绍我们的监控在 101 系列][3],,它提供了指标收集和警告框架。 - -#### 基本活动指标 #### - -无论你在怎样的情况下使用 NGINX,毫无疑问你要监视服务器接收多少客户端请求和如何处理这些请求。 - -NGINX Plus 上像开源 NGINX 一样可以报告基本活动指标,但它也提供了略有不同的辅助模块。我们首先讨论开源的 NGINX,再来说明 NGINX Plus 提供的其他指标的功能。 - -**NGINX** - -下图显示了一个客户端连接,以及如何在连接过程中收集指标的活动周期在开源 NGINX 版本上。 - -![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png) - -接受,处理,增加请求的计数器。主动,等待,读,写增加和减少请求量。 - -注:表格 - ---- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Metric type |
|------|-------------|-------------|
| accepts | Count of client connections attempted by NGINX | Resource: Utilization |
| handled | Count of successful client connections | Resource: Utilization |
| active | Currently active client connections | Resource: Utilization |
| dropped (calculated) | Count of dropped connections (accepts – handled) | Work: Errors* |
| requests | Count of client requests | Work: Throughput |
*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
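作为上表中 dropped 一项计算方式的粗略演示,下面的脚本解析 stub_status 格式的输出并计算 accepts 与 handled 之差;示例文本是预置的,注释中的 /nginx_status 地址也只是占位:

```shell
# Sample stub_status output; in a live setup it would come from something
# like: status=$(curl -s http://localhost/nginx_status)
status='Active connections: 2
server accepts handled requests
 16630948 16630948 31070465
Reading: 0 Writing: 1 Waiting: 1'

# Line 3 carries the accepts / handled / requests counters.
set -- $(printf '%s\n' "$status" | sed -n '3p')
accepts=$1 handled=$2 requests=$3
echo "dropped=$((accepts - handled))"
```

正常情况下这里应该输出 dropped=0;该值持续大于零,就是正文所说的资源饱和信号。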
- -NGINX 进程接受 OS 的连接请求时**accepts** 计数器增加,而**handled** 是当实际的请求得到连接时(通过建立一个新的连接或重新使用一个空闲的)。这两个计数器的值通常都是相同的,表明连接正在被**dropped**,往往由于资源限制,如 NGINX 的[worker_connections][4]的限制已经达到。 - -一旦 NGINX 成功处理一个连接时,连接会移动到**active**状态,然后保持为客户端请求进行处理: - -Active 状态 - -- **Waiting**: 活动的连接也可以是一个 Waiting 子状态,如果有在此刻没有活动请求。新连接绕过这个状态并直接移动到读,最常见的是使用“accept filter” 和 “deferred accept”,在这种情况下,NGINX 不会接收进程的通知,直到它具有足够的数据来开始响应工作。如果连接设置为 keep-alive ,连接在发送响应后将处于等待状态。 - -- **Reading**: 当接收到请求时,连接移出等待状态,并且该请求本身也被视为 Reading。在这种状态下NGINX 正在读取客户端请求首部。请求首部是比较少的,因此这通常是一个快速的操作。 - -- **Writing**: 请求被读取之后,将其计为 Writing,并保持在该状态,直到响应返回给客户端。这意味着,该请求在 Writing 时, NGINX 同时等待来自负载均衡服务器的结果(系统“背后”的 NGINX),NGINX 也同时响应。请求往往会花费大量的时间在 Writing 状态。 - -通常,一个连接在同一时间只接受一个请求。在这种情况下,Active 连接的数目 == Waiting 连接 + Reading 请求 + Writing 请求。然而,较新的 SPDY 和 HTTP/2 协议允许多个并发请求/响应对被复用的连接,所以 Active 可小于 Waiting,Reading,Writing 的总和。 (在撰写本文时,NGINX 不支持 HTTP/2,但预计到2015年期间将会支持。) - -**NGINX Plus** - -正如上面提到的,所有开源 NGINX 的指标在 NGINX Plus 中是可用的,但另外也提供其他的指标。本节仅说明了 NGINX Plus 可用的指标。 - - -![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png) - -接受,中断,总数是不断增加的。活动,空闲和已建立连接的,当前状态下每一个连接或请​​求的数量是随着请求量增加和收缩的。 - -注:表格 - ---- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Metric type |
|------|-------------|-------------|
| accepted | Count of client connections attempted by NGINX | Resource: Utilization |
| dropped | Count of dropped connections | Work: Errors* |
| active | Currently active client connections | Resource: Utilization |
| idle | Client connections with zero current requests | Resource: Utilization |
| total | Count of client requests | Work: Throughput |
*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
- -当 NGINX Plus 进程接受 OS 的连接请求时 **accepted** 计数器递增。如果进程请求连接失败(通过建立一个新的连接或重新使用一个空闲),则该连接断开 **dropped** 计数增加。通常连接被中断是因为资源限制,如 NGINX Plus 的[worker_connections][4]的限制已经达到。 - -**Active** 和 **idle** 和开源 NGINX 的“active” 和 “waiting”状态是相同的,[如上所述][5],有一个不同的地方:在开源 NGINX 上,“waiting”状态包括在“active”中,而在 NGINX Plus 上“idle”的连接被排除在“active” 计数外。**Current** 和开源 NGINX 是一样的也是由“reading + writing” 状态组成。 - - -**Total** 为客户端请求的累积计数。请注意,单个客户端连接可涉及多个请求,所以这个数字可能会比连接的累计次数明显大。事实上,(total / accepted)是每个连接请求的平均数量。 - -**开源 和 Plus 之间指标的不同** - -注:表格 - --- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| NGINX (open-source) | NGINX Plus |
|---------------------|------------|
| accepts | accepted |
| dropped must be calculated | dropped is reported directly |
| reading + writing | current |
| waiting | idle |
| active (includes “waiting” states) | active (excludes “idle” states) |
| requests | total |
- -**提醒指标: 中断连接** - -被中断的连接数目等于接受和处理之差(NGINX),或被公开直接作为指标的标准(NGINX加)。在正常情况下,中断连接数应该是零。如果每秒中中断连接的速度开始上升,寻找资源可能用尽的地方。 - -![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png) - -**提醒指标: 每秒请求数** - -提供你(开源中的**requests**或者 Plus 中**total**)固定时间间隔每秒或每分钟请求的平均数据。监测这个指标可以查看 Web 的输入流量的最大值,无论是合法的还是恶意的,有可能会突然下降,通常可以看出问题。每秒的请求若发生急剧变化可以提醒你出问题了,即使它不能告诉你确切问题的位置所在。请注意,所有的请求都算作是相同的,无论哪个 URLs。 - -![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png) - -**收集活动指标** - -开源的 NGINX 提供了一个简单状态页面来显示基本的服务器指标。该状态信息以标准格式被显示,实际上任何图形或监控工具可以被配置去解析相关的数据为分析,可视化,或提醒而用。NGINX Plus 提供一个 JSON 接口来显示更多的数据。阅读[NGINX 指标收集][6]后来启用指标收集的功能。 - -#### 错误指标 #### - -注:表格 - ----- - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Metric type | Availability |
|------|-------------|-------------|--------------|
| 4xx codes | Count of client errors | Work: Errors | NGINX logs, NGINX Plus |
| 5xx codes | Count of server errors | Work: Errors | NGINX logs, NGINX Plus |
- -NGINX 错误指标告诉你服务器经常返回哪些错误,这也是有用的。客户端错误返回4XX状态码,服务器端错误返回5XX状态码。 - -**提醒指标: 服务器错误率** - -服务器错误率等于5xx错误状态代码的总数除以[状态码][7](1XX,2XX,3XX,4XX,5XX)的总数,每单位时间(通常为一到五分钟)的数目。如果你的错误率随着时间的推移开始攀升,调查可能的原因。如果突然增加,可能需要采取紧急行动,因为客户端可能收到错误信息。 - -![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png) - -客户端收到错误时的注意事项:虽然监控4XX是很有用的,但从该指标中你仅可以捕捉有限的信息,因为它只是衡量客户的行为而不捕捉任何特殊的 URLs。换句话说,在4xx出现时只是相当于一点噪音,例如寻找漏洞的网络扫描仪。 - -**收集错误度量** - -虽然开源 NGINX 不会监测错误率,但至少有两种方法可以捕获其信息: - -- 使用商业支持的 NGINX Plus 提供的可扩展状态模块 -- 配置 NGINX 的日志模块将响应码写入访问日志 - -阅读关于 NGINX 指标收集的后两个方法的详细说明。 - -#### 性能指标 #### - -注:表格 - ----- - - - - - - - - - - - - - - - - -
| Name | Description | Metric type | Availability |
|------|-------------|-------------|--------------|
| request time | Time to process each request, in seconds | Work: Performance | NGINX logs |
- -**提醒指标: 请求处理时间** - -请求时间指标记录 NGINX 处理每个请求的时间,从第一个客户端的请求字节读出到完成请求。较长的响应时间可以将问题指向负载均衡服务器。 - -**收集处理时间指标** - -NGINX 和 NGINX Plus 用户可以通过添加 $request_time 变量到访问日志格式中来捕​​捉处理时间数据。关于配置日志监控的更多细节在[NGINX指标收集][8]。 - -#### 反向代理指标 #### - -注:表格 - ----- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameDescriptionMetric typeAvailability
Active connections by upstream serverCurrently active client connectionsResource: UtilizationNGINX Plus
5xx codes by upstream serverServer errorsWork: ErrorsNGINX Plus
Available servers per upstream groupServers passing health checksResource: AvailabilityNGINX Plus
- -[反向代理][9]是 NGINX 最常见的使用方法之一。商业支持的 NGINX Plus 显示了大量有关后端(或“负载均衡”)的服务器指标,这是反向代理设置的。本节重点介绍了几个关键的负载均衡服务器的指标为 NGINX Plus 用户。 - -NGINX Plus 的负载均衡服务器指标首先是组的,然后是单个服务器的。因此,例如,你的反向代理将请求分配到五个 Web 负载均衡服务器上,你可以一眼看出是否有单个服务器压力过大,也可以看出负载均衡服务器组的健康状况,以确保良好的响应时间。 - -**活动指标** - -**active connections per upstream server**的数量可以帮助你确认反向代理是否正确的分配工作到负载均衡服务器上。如果你正在使用 NGINX 作为负载均衡器,任何一台服务器处理的连接数有显著的偏差都可能表明服务器正在努力处理请求或你配置处理请求的负载均衡的方法(例如[round-robin or IP hashing][10])不是最适合你流量模式的。 - -**错误指标** - -错误指标,上面所说的高于5XX(服务器错误)状态码,是监控指标中有价值的一个,尤其是响应码部分。 NGINX Plus 允许你轻松地提取每个负载均衡服务器 **5xx codes per upstream server**的数量,以及响应的总数量,以此来确定该特定服务器的错误率。 - - -**可用性指标** - -对于 web 服务器的运行状况,另一种观点认为,NGINX 也可以很方便监控你的负载均衡服务器组的健康通过**servers currently available within each group**的总量​​。在一个大的反向代理上,你可能不会非常关心其中一个服务器的当前状态,就像你只要可用的服务器组能够处理当前的负载就行了。但监视负载均衡服务器组内的所有服务器可以提供一个高水平的图像来判断 Web 服务器的健康状况。 - -**收集负载均衡服务器的指标** - -NGINX Plus 负载均衡服务器的指标显示在内部 NGINX Plus 的监控仪表盘上,并且也可通过一个JSON 接口来服务于所有外部的监控平台。在这儿看一个例子[收集 NGINX 指标][11]。 - -### 结论 ### - -在这篇文章中,我们已经谈到了一些有用的指标,你可以使用表格来监控 NGINX 服务器。如果你是刚开始使用 NGINX,下面提供了良好的网络基础设施的健康和活动的可视化工具来监控大部分或所有的指标: - -- [Dropped connections][12] -- [Requests per second][13] -- [Server error rate][14] -- [Request processing time][15] - -最终,你会学到更多,更专业的衡量指标,尤其是关于你自己基础设施和使用情况的。当然,监控哪一项指标将取决于你可用的工具。参见[一步一步来说明指标收集][16],不管你使用 NGINX 还是 NGINX Plus。 - - - -在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,这样你就可以以最小的设置来收集和监控所有 Web 服务器的指标。了解如何用 NGINX Datadog来监控 [在本文中][17],并开始使用 [免费的 Datadog][18]。 - -### Acknowledgments ### - -在文章发表之前非常感谢 NGINX 团队审阅这篇,并提供重要的反馈和说明。 - ----------- - -文章来源在这儿 [on GitHub][19]。问题,更正,补充等?请[告诉我们][20]。 - - --------------------------------------------------------------------------------- - -via: https://www.datadoghq.com/blog/how-to-monitor-nginx/ - -作者:K Young -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:http://nginx.org/en/ 
-[2]:http://nginx.com/resources/glossary/reverse-proxy-server/ -[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/ -[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections -[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state -[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html -[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[9]:https://en.wikipedia.org/wiki/Reverse_proxy -[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/ -[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections -[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second -[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate -[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time -[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ -[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up -[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md -[20]:https://github.com/DataDog/the-monitor/issues From 67aff80abfcf06fefd44d67bb4a263d90f743d38 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 7 Aug 2015 00:25:48 +0800 Subject: [PATCH 078/697] Delete 20150803 Handy commands for profiling your Unix file systems.md --- ...ds for profiling your Unix file systems.md | 65 ------------------- 1 file changed, 65 deletions(-) delete mode 100644 sources/tech/20150803 Handy commands for profiling your Unix file systems.md diff --git a/sources/tech/20150803 Handy commands for profiling your Unix file systems.md b/sources/tech/20150803 Handy commands for profiling your Unix file systems.md deleted file mode 100644 index 359aba14c9..0000000000 --- a/sources/tech/20150803 
Handy commands for profiling your Unix file systems.md +++ /dev/null @@ -1,65 +0,0 @@ -translation by strugglingyouth -Handy commands for profiling your Unix file systems -================================================================================ -![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) -Credit: Sandra H-S - -One of the problems that seems to plague nearly all file systems -- Unix and others -- is the continuous buildup of files. Almost no one takes the time to clean out files that they no longer use and file systems, as a result, become so cluttered with material of little or questionable value that keeping them them running well, adequately backed up, and easy to manage is a constant challenge. - -One way that I have seen to help encourage the owners of all that data detritus to address the problem is to create a summary report or "profile" of a file collection that reports on such things as the number of files; the oldest, newest, and largest of those files; and a count of who owns those files. If someone realizes that a collection of half a million files contains none less than five years old, they might go ahead and remove them -- or, at least, archive and compress them. The basic problem is that huge collections of files are overwhelming and most people are afraid that they might accidentally delete something important. Having a way to characterize a file collection can help demonstrate the nature of the content and encourage those digital packrats to clean out their nests. - -When I prepare a file system summary report on Unix, a handful of Unix commands easily provide some very useful statistics. To count the files in a directory, you can use a find command like this. - - $ find . -type f | wc -l - 187534 - -Finding the oldest and newest files is a bit more complicated, but still quite easy. 
In the commands below, we're using the find command again to find files, displaying the data with a year-month-day format that makes it possible to sort by file age, and then displaying the top -- thus the oldest -- file in that list. - -In the second command, we do the same, but print the last line -- thus the newest -- file. - - $ find -type f -printf '%T+ %p\n' | sort | head -n 1 - 2006-02-03+02:40:33 ./skel/.xemacs/init.el - $ find -type f -printf '%T+ %p\n' | sort | tail -n 1 - 2015-07-19+14:20:16 ./.bash_history - -The %T (file date and time) and %p (file name with path) parameters with the printf command allow this to work. - -If we're looking at home directories, we're undoubtedly going to find that history files are the newest files and that isn't likely to be a very interesting bit of information. You can omit those files by "un-grepping" them or you can omit all files that start with dots as shown below. - - $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1 - 2015-07-19+13:02:12 ./isPrime - -Finding the largest file involves using the %s (size) parameter and we include the file name (%f) since that's what we want the report to show. - - $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1 - 20183040 project.org.tar - -To summarize file ownership, use the %u (owner) - - $ find -type f -printf '%u \n' | grep -v "\./\." | sort | uniq -c - 180034 shs - 7500 jdoe - -If your file system also records the last access date, it can be very useful to show that files haven't been accessed in, say, more than two years. This would give your reviewers an important insight into the value of those files. The last access parameter (%a) could be used like this: - - $ find -type f -printf '%a+ %p\n' | sort | head -n 1 - Fri Dec 15 03:00:30 2006+ ./statreport - -Of course, if the most recently accessed file is also in the deep dark past, that's likely to get even more of a reaction. 
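Taken together, the find invocations discussed in this article can be wrapped into one small profiling helper. Below is a hedged, self-contained sketch that exercises them against a throwaway directory created with mktemp -- the scratch tree, the file names, and the reliance on GNU find's -printf are illustrative assumptions, not details taken from the article's systems:

```shell
# Illustrative sketch only: profile a scratch tree with the same find
# invocations discussed in the article (GNU find assumed, for -printf).
profile_count()   { find "$1" -type f | wc -l; }
profile_oldest()  { find "$1" -type f -printf '%T+ %p\n' | sort | head -n 1; }
profile_newest()  { find "$1" -type f -printf '%T+ %p\n' | sort | tail -n 1; }
profile_largest() { find "$1" -type f -printf '%s %f\n' | sort -n | tail -1; }
profile_owners()  { find "$1" -type f -printf '%u\n' | sort | uniq -c; }

demo=$(mktemp -d)                 # hypothetical sample collection
printf 'abc' > "$demo/notes.txt"
printf 'x'   > "$demo/tiny.dat"
echo "files:   $(profile_count "$demo")"
echo "newest:  $(profile_newest "$demo")"
echo "largest: $(profile_largest "$demo")"
profile_owners "$demo"
rm -rf "$demo"
```

On a real system you would point these functions at the directory under review rather than a scratch tree, and fold their output into the kind of summary report the article describes.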
- - $ find -type f -printf '%a+ %p\n' | sort | tail -n 1 - Wed Nov 26 03:00:27 2007+ ./my-notes - -Getting a sense of what's in a file system or large directory by creating a summary report showing the file date ranges, the largest files, the file owners, and the oldest and new access times can help to demonstrate how current and how important a file collection is and help its owners decide if it's time to clean up. - -------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html - -作者:[Sandra Henry-Stocker][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ From 63ad6aab7dce0df6174c05b1dd83a62405af46f3 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 7 Aug 2015 00:27:30 +0800 Subject: [PATCH 079/697] Create 20150803 Handy commands for profiling your Unix file systems.md --- ...ds for profiling your Unix file systems.md | 66 +++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 translated/tech/20150803 Handy commands for profiling your Unix file systems.md diff --git a/translated/tech/20150803 Handy commands for profiling your Unix file systems.md b/translated/tech/20150803 Handy commands for profiling your Unix file systems.md new file mode 100644 index 0000000000..13efdcf0a1 --- /dev/null +++ b/translated/tech/20150803 Handy commands for profiling your Unix file systems.md @@ -0,0 +1,66 @@ + +分析 Unix 文件系统的实用命令 +================================================================================ +![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) +Credit: Sandra H-S + +有一个问题几乎困扰着所有的文件系统 -- 包括 Unix 和其他的 -- 
那就是文件的不断积累。几乎没有人愿意花时间清理掉自己不再使用的文件,结果,文件系统中堆满了大量价值很低或价值存疑的内容,要让它们保持良好运行、得到充分备份并且易于管理,就成了一项持久的挑战。 + +我见过的一种能促使这些数据垃圾的拥有者着手解决问题的方法,是为文件集合创建一份总结报告或者说"档案(profile)",报告文件的总数,其中最老、最新、最大的文件,并统计这些文件分别属于谁。如果有人发现一个包含五十万个文件的集合里没有一个文件的历史短于五年,他们也许就会动手删除这些文件 -- 或者,至少先归档、压缩。根本的问题在于,庞大的文件集合让人无从下手,大多数人害怕一不小心删掉重要的东西。有一种能概括文件集合特征的办法,有助于展示其中内容的性质,并促使那些数字仓鼠们清理自己的窝。 + + +当我准备 Unix 文件系统的总结报告时,几条 Unix 命令就能轻松提供一些非常有用的统计信息。要计算目录中的文件数,你可以使用这样一个 find 命令。 + + $ find . -type f | wc -l + 187534 + +查找最老和最新的文件要稍微复杂一些,但仍然相当容易。在下面的第一条命令中,我们再次使用 find 查找文件,并以"年-月-日"的格式显示文件时间,这样就可以按文件年龄排序,然后显示列表的第一行 -- 也就是最老的文件。 + +在第二条命令中,我们做同样的事,但打印的是最后一行 -- 也就是最新的文件。 + + $ find -type f -printf '%T+ %p\n' | sort | head -n 1 + 2006-02-03+02:40:33 ./skel/.xemacs/init.el + $ find -type f -printf '%T+ %p\n' | sort | tail -n 1 + 2015-07-19+14:20:16 ./.bash_history + +printf 的 %T(文件日期和时间)和 %p(带路径的文件名)参数使这些命令得以工作。 + +如果我们查找的是家目录,无疑会发现 history 文件是最新的,而这算不上很有趣的信息。你可以用 grep -v 将这些文件过滤掉,也可以像下面这样忽略所有以点开头的文件。 + + $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1 + 2015-07-19+13:02:12 ./isPrime + +查找最大的文件要用 %s(大小)参数,我们同时包含文件名(%f),因为这正是我们想在报告中显示的内容。 + + $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1 + 20183040 project.org.tar + +要汇总文件的拥有者,使用 %u(拥有者)参数。 + + $ find -type f -printf '%u \n' | grep -v "\./\." 
| sort | uniq -c + 180034 shs + 7500 jdoe + +如果文件系统还记录了最后访问日期,那么展示哪些文件已经很久(比方说,超过两年)没有被访问过会非常有用,这能让审阅者对这些文件的价值有一个重要的认识。最后访问时间参数(%a)可以这样使用: + + $ find -type f -printf '%a+ %p\n' | sort | head -n 1 + Fri Dec 15 03:00:30 2006+ ./statreport + +当然,如果连最近一次被访问的文件都已经是陈年旧物,那多半会引起更大的反应。 + + $ find -type f -printf '%a+ %p\n' | sort | tail -n 1 + Wed Nov 26 03:00:27 2007+ ./my-notes + +通过创建一份总结报告,展示文件的日期范围、最大的文件、文件拥有者以及最早和最新的访问时间,可以帮助我们了解一个文件系统或大目录中有些什么,说明这批文件有多新、有多重要,并帮助其拥有者判断是不是该清理了。 + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html + +作者:[Sandra Henry-Stocker][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ From e33f5e9a1b751567fb5498697363c30c48dff32e Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 7 Aug 2015 00:35:21 +0800 Subject: [PATCH 080/697] PUB:20150806 5 Reasons Why Software Developer is a Great Career Choice MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @mousycoder 明早9:30 发布:https://linux.cn/article-5971-1.html 原文现在取消了大部分配图(配图有问题),所以我也取消了。 翻译的不错,我基本没怎么改动。 --- ...ware Developer is a Great Career Choice.md | 75 +++++++++++++++ ...ware Developer is a Great Career Choice.md | 91 ------------------- 2 files changed, 75 insertions(+), 91 deletions(-) create mode 100644 published/20150806 5 Reasons Why Software Developer is a Great Career Choice.md delete mode 100644 translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md diff --git a/published/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/published/20150806 5 Reasons Why Software 
Developer is a Great Career Choice.md @@ -0,0 +1,75 @@ +选择成为软件开发工程师的5个原因 +================================================================================ + +![](http://henrikwarne1.files.wordpress.com/2011/09/cropped-desk1.jpg) + +这个星期我将给本地一所高中做一次有关于程序猿是怎样工作的演讲。我是志愿(由 [Transfer][1] 组织的)来到这所学校谈论我的工作的。这个学校本周将有一个技术主题日,并且他们很想听听科技行业是怎样工作的。因为我是从事软件开发的,这也是我将和学生们讲的内容。演讲的其中一部分是我为什么觉得软件开发是一个很酷的职业。主要原因如下: + +### 5个原因 ### + +**1、创造性** + +如果你问别人创造性的工作有哪些,别人通常会说像作家,音乐家或者画家那样的(工作)。但是极少有人知道软件开发也是一项非常具有创造性的工作。它是最符合创造性定义的了,因为你创造了一个以前没有的新功能。这种解决方案可以在整体和细节上以很多形式来展现。我们经常会遇到一些需要做权衡的场景(比如说运行速度与内存消耗的权衡)。当然前提是这种解决方案必须是正确的。这些所有的行为都是需要强大的创造性的。 + +**2、协作性** + +另外一个表象是程序猿们独自坐在他们的电脑前,然后撸一天的代码。但是软件开发事实上通常总是一个团队努力的结果。你会经常和你的同事讨论编程问题以及解决方案,并且和产品经理、测试人员、客户讨论需求以及其他问题。 +经常有人说,结对编程(2个开发人员一起在一个电脑上编程)是一种流行的最佳实践。 + +**3、高需性** + +世界上越来越多的人在用软件,正如 [Marc Andreessen](https://en.wikipedia.org/wiki/Marc_Andreessen) 所说 " [软件正在吞噬世界][2] "。虽然程序猿现在的数量非常巨大(在斯德哥尔摩,程序猿现在是 [最普遍的职业][3] ),但是,需求量一直处于供不应求的局面。据软件公司说,他们最大的挑战之一就是 [找到优秀的程序猿][4] 。我也经常接到那些想让我跳槽的招聘人员打来的电话。我知道至少除软件行业之外的其他行业的雇主不会那么拼(的去招聘)。 + +**4、高酬性** + +软件开发可以带来不菲的收入。卖一份你已经开发好的软件的额外副本是没有 [边际成本][5] 的。这个事实与对程序猿的高需求意味着收入相当可观。当然还有许多更捞金的职业,但是相比一般人群,我认为软件开发者确实“日进斗金”(知足吧!骚年~~)。 + +**5、前瞻性** + +有许多工作岗位消失,往往是由于它们可以被计算机和软件代替。但是所有这些新的程序依然需要开发和维护,因此,程序猿的前景还是相当好的。 + +### 但是...### + +**外包又是怎么一回事呢?** + +难道所有外包到其他国家的软件开发的薪水都很低吗?这是一个理想丰满,现实骨感的例子(有点像 [瀑布开发模型][6] )。软件开发基本上跟设计的工作一样,是一个探索发现的工作。它受益于强有力的合作。更进一步说,特别当你的主打产品是软件的时候,你所掌握的开发知识是绝对的优势。知识在整个公司中分享的越容易,那么公司的发展也将越来越好。 + +换一种方式去看待这个问题。软件外包已经存在了相当一段时间了。但是对本土程序猿的需求量依旧非常高。因为许多软件公司看到了雇佣本土程序猿的带来的收益要远远超过了相对较高的成本(其实还是赚了)。 + +### 如何成为人生大赢家 ### + +虽然我有许多我认为软件开发是一件非常有趣的事情的理由 (详情见: [为什么我热爱编程][7] )。但是这些理由,并不适用于所有人。幸运的是,尝试编程是一件非常容易的事情。在互联网上有数不尽的学习编程的资源。例如,[Coursera][8] 和 [Udacity][9] 都拥有很好的入门课程。如果你从来没有撸过码,可以尝试其中一个免费的课程,找找感觉。 + +寻找一个既热爱又能谋生的事情至少有2个好处。首先,由于你天天去做,工作将比你简单的只为谋生要有趣的多。其次,如果你真的非常喜欢,你将更好的擅长它。我非常喜欢下面一副关于伟大工作组成的韦恩图(作者 [@eskimon)][10]) 。因为编码的薪水确实相当不错,我认为如果你真的喜欢它,你将有一个很好的机会,成为人生的大赢家! 
+ +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/17571624.jpg) + +-------------------------------------------------------------------------------- + +via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/ + +作者:[Henrik Warne][a] +译者:[mousycoder](https://github.com/mousycoder) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + + +[a]:http://henrikwarne.com/ +[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23 +[2]:http://www.wsj.com/articles/SB10001424053111903480904576512250915629460 +[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/ +[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov +[5]:https://en.wikipedia.org/wiki/Marginal_cost +[6]:https://en.wikipedia.org/wiki/Waterfall_model +[7]:http://henrikwarne.com/2012/06/02/why-i-love-coding/ +[8]:https://www.coursera.org/ +[9]:https://www.udacity.com/ +[10]:https://eskimon.wordpress.com/about/ + + + + + + + diff --git a/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md deleted file mode 100644 index a592cb595e..0000000000 --- a/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md +++ /dev/null @@ -1,91 +0,0 @@ -选择软件开发攻城师的5个原因 -================================================================================ -这个星期我将给本地一所高中做一次有关于程序猿是怎样工作的演讲,我是 [Transfer][1] 组织推荐来到这所学校,谈论我的工作。这个学校本周将有一个技术主题日,并且他们很想听听科技行业是怎样工作的。因为我是从事软件开发的,这也是我将和学生们讲的内容。我为什么觉得软件开发是一个很酷的职业将是演讲的其中一部分。主要原因如下: - -### 5个原因 ### - -**1 创造性** - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/34042817.jpg) - 
-如果你问别人创造性的工作有哪些,别人通常会说像作家,音乐家或者画家那样的(工作)。但是极少有人知道软件开发也是一项非常具有创造性的工作。因为你创造了一个以前没有的新功能,这样的功能基本上可以被定义为非常具有创造性。这种解决方案可以在整体和细节上以很多形式来展现。我们经常会遇到一些需要做权衡的场景(比如说运行速度与内存消耗的权衡)。当然前提是这种解决方案必须是正确的。其实这些所有的行为都是需要强大的创造性的。 - -**2 协作性** - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/94579377.jpg) - -另外一个表象是程序猿们独自坐在他们的电脑前,然后撸一天的代码。但是软件开发事实上通常总是一个团队努力的结果。你会经常和你的同事讨论编程问题以及解决方案,并且和产品经理,测试人员,客户讨论需求以及其他问题。 -经常有人说,结对编程(2个开发人员一起在一个电脑上编程)是一种流行的最佳实践。 - - -**3 高需性** - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/26662164.jpg) - -世界上越来越多的人在用软件,正如 [Marc Andreessen](https://en.wikipedia.org/wiki/Marc_Andreessen) 所说 " [软件正在吞噬世界][2] "。虽然程序猿现在的数量非常巨大(在斯德哥尔摩,程序猿现在是 [最普遍的职业][3] ),但是,需求量一直处于供不应求的局面。据软件公司报告,他们最大的挑战之一就是 [找到优秀的程序猿][4] 。我也经常接到那些想让我跳槽的招聘人员打来的电话。我知道至少除软件行业之外的其他行业的雇主不会那么拼(的去招聘)。 - -**4 高酬性** - - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/50538928.jpg) - -软件开发可以带来不菲的收入。卖一份你已经开发好的软件的额外副本是没有 [边际成本][5] 的。这个事实与对程序猿的高需求意味着收入相当可观。当然还有许多更捞金的职业,但是相比一般人群,我认为软件开发者确实“日进斗金”(知足吧!骚年~~)。 - -**5 前瞻性** - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/89799239.jpg) - -有许多工作岗位消失,往往是由于它们可以被计算机和软件代替。但是所有这些新的程序依然需要开发和维护,因此,程序猿的前景还是相当好的。 - - -### 但是...### - -**外包又是怎么一回事呢?** - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/41615753.jpg) - -难道所有外包到其他地区的软件开发的薪水都很低吗?这是一个理想丰满,现实骨感的例子(有点像 [瀑布开发模型][6] )。软件开发基本上跟设计的工作一样,是一个探索发现的工作。它受益于强有力的合作。更进一步说,特别当你的主打产品是软件的时候,你所掌握的开发知识是绝对的优势。知识在整个公司中分享的越容易,那么公司的发展也将越来越好。 - - -换一种方式去看待这个问题。软件外包已经存在了相当一段时间了。但是对本土程序猿的需求量依旧非常高。因为许多软件公司看到了雇佣本土程序猿的带来的收益要远远超过了相对较高的成本(其实还是赚了)。 - -### 如何成为人生大赢家 ### - - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/44219908.jpg) - -虽然我有许多我认为软件开发是一件非常有趣的事情的理由 (详情见: [为什么我热爱编程][7] )。但是这些理由,并不适用于所有人。幸运的是,尝试编程是一件非常容易的事情。在互联网上有数不尽的学习编程的资源。例如,[Coursera][8] 和 [Udacity][9] 都拥有很好的入门课程。如果你从来没有撸过码,可以尝试其中一个免费的课程,找找感觉。 - -寻找一个既热爱又能谋生的事情至少有2个好处。首先,由于你天天去做,工作将比你简单的只为谋生要有趣的多。其次,如果你真的非常喜欢,你将更好的擅长它。我非常喜欢下面一副关于伟大工作组成的韦恩图(作者 [@eskimon)][10] 。因为编码的薪水确实相当不错,我认为如果你真的喜欢它,你将有一个很好的机会,成为人生的大赢家! 
- -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/17571624.jpg) - --------------------------------------------------------------------------------- - -via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/ - -作者:[Henrik Warne][a] -译者:[mousycoder](https://github.com/mousycoder) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - - -[a]:http://henrikwarne.com/ -[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23 -[2]:http://www.wsj.com/articles/SB10001424053111903480904576512250915629460 -[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/ -[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov -[5]:https://en.wikipedia.org/wiki/Marginal_cost -[6]:https://en.wikipedia.org/wiki/Waterfall_model -[7]:http://henrikwarne.com/2012/06/02/why-i-love-coding/ -[8]:https://www.coursera.org/ -[9]:https://www.udacity.com/ -[10]:https://eskimon.wordpress.com/about/ - - - - - - - From bc21018f00e1834be2d8c597855f247441788256 Mon Sep 17 00:00:00 2001 From: Ping Date: Fri, 7 Aug 2015 08:20:37 +0800 Subject: [PATCH 081/697] Translating --- ...06 Linux FAQs with Answers--How to install git on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md index c5c34f3a72..c9610a2dfe 100644 --- a/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md +++ b/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md @@ -1,3 +1,5 @@ +Translating by Ping + Linux FAQs with Answers--How to install git on Linux ================================================================================ > **Question:** I am trying to clone a project from a public Git repository, but I am getting 
"git: command not found" error. How can I install git on [insert your Linux distro]? @@ -69,4 +71,4 @@ via: http://ask.xmodulo.com/install-git-linux.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://ask.xmodulo.com/author/nanni -[1]:https://github.com/git/git/releases \ No newline at end of file +[1]:https://github.com/git/git/releases From 6d313b52541e7bfd243825ad0a9d06a78d7c998b Mon Sep 17 00:00:00 2001 From: joeren Date: Fri, 7 Aug 2015 08:54:23 +0800 Subject: [PATCH 082/697] Update 20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md --- ...to fix 'ImportError--No module named wxversion' on Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md index 11d814d8f4..66af8413fd 100644 --- a/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md +++ b/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md @@ -1,3 +1,4 @@ +Translating by GOLinux! Linux FAQs with Answers--How to fix “ImportError: No module named wxversion” on Linux ================================================================================ > **Question:** I was trying to run a Python application on [insert your Linux distro], but I got an error "ImportError: No module named wxversion." How can I solve this error in the Python program? 
@@ -46,4 +47,4 @@ via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html [a]:http://ask.xmodulo.com/author/nanni [1]:http://wxpython.org/ -[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html \ No newline at end of file +[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html From f26be7487f95cd3f5098ab147e42da5e74d599e0 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Fri, 7 Aug 2015 08:59:37 +0800 Subject: [PATCH 083/697] =?UTF-8?q?=E3=80=90Translating=20by=20dingdongnig?= =?UTF-8?q?etou=E3=80=9120150730=20Howto=20Configure=20Nginx=20as=20Rrever?= =?UTF-8?q?se=20Proxy=20or=20Load=20Balancer=20with=20Weave=20and=20Docker?= =?UTF-8?q?.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Rreverse Proxy or Load Balancer with Weave and Docker.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md index 82c592d3b4..f217db9c70 100644 --- a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md +++ b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md @@ -1,3 +1,6 @@ + +Translating by dingdongnigetou + Howto Configure Nginx as Rreverse Proxy / Load Balancer with Weave and Docker ================================================================================ Hi everyone today we'll learnHowto configure Nginx as Rreverse Proxy / Load balancer with Weave and Docker Weave creates a virtual network that connects Docker containers with each other, deploys across multiple hosts and enables their automatic discovery. It allows us to focus on developing our application, rather than our infrastructure. 
It provides such an awesome environment that the applications uses the network as if its containers were all plugged into the same network without need to configure ports, mappings, link, etc. The services of the application containers on the network can be easily accessible to the external world with no matter where its running. Here, in this tutorial we'll be using weave to quickly and easily deploy nginx web server as a load balancer for a simple php application running in docker containers on multiple nodes in Amazon Web Services. Here, we will be introduced to WeaveDNS, which provides a simple way for containers to find each other using hostname with no changes in codes and tells other containers to connect to those names. @@ -123,4 +126,4 @@ via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/arunp/ -[1]:http://console.aws.amazon.com/ \ No newline at end of file +[1]:http://console.aws.amazon.com/ From dfa1d5bf8796e70c8ee522a1bb67e6d68efd97a0 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Fri, 7 Aug 2015 09:09:47 +0800 Subject: [PATCH 084/697] [Translated]20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md --- ...or--No module named wxversion' on Linux.md | 50 ------------------- ...or--No module named wxversion' on Linux.md | 49 ++++++++++++++++++ 2 files changed, 49 insertions(+), 50 deletions(-) delete mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md create mode 100644 translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named 
wxversion' on Linux.md deleted file mode 100644 index 66af8413fd..0000000000 --- a/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md +++ /dev/null @@ -1,50 +0,0 @@ -Translating by GOLinux! -Linux FAQs with Answers--How to fix “ImportError: No module named wxversion” on Linux -================================================================================ -> **Question:** I was trying to run a Python application on [insert your Linux distro], but I got an error "ImportError: No module named wxversion." How can I solve this error in the Python program? - - Looking for python... 2.7.9 - Traceback (most recent call last): - File "/home/dev/playonlinux/python/check_python.py", line 1, in - import os, wxversion - ImportError: No module named wxversion - failed tests - -This error indicates that your Python application is GUI-based, relying on a missing Python module called wxPython. [wxPython][1] is a Python extension module for the wxWidgets GUI library, popularly used by C++ programmers to design GUI applications. The wxPython extension allows Python developers to easily design and integrate GUI within any Python application. - -To solve this import error, you need to install wxPython on your Linux, as described below. - -### Install wxPython on Debian, Ubuntu or Linux Mint ### - - $ sudo apt-get install python-wxgtk2.8 - -### Install wxPython on Fedora ### - - $ sudo yum install wxPython - -### Install wxPython on CentOS/RHEL ### - -wxPython is available on the EPEL repository of CentOS/RHEL, not on base repositories. Thus, first [enable EPEL repository][2] on your system, and then use yum command. 
- - $ sudo yum install wxPython - -### Install wxPython on Arch Linux ### - - $ sudo pacman -S wxpython - -### Install wxPython on Gentoo ### - - $ emerge wxPython - -------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni -[1]:http://wxpython.org/ -[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md b/translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md new file mode 100644 index 0000000000..2a937daeff --- /dev/null +++ b/translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md @@ -0,0 +1,49 @@ +Linux有问必答——如何修复Linux上的“ImportError: No module named wxversion”错误 +================================================================================ + +> **问题** 我试着在[你的Linux发行版]上运行一个Python应用,但是我得到了这个错误"ImportError: No module named wxversion."。我怎样才能解决Python程序中的这个错误呢? + + Looking for python... 2.7.9 - Traceback (most recent call last): + File "/home/dev/playonlinux/python/check_python.py", line 1, in + import os, wxversion + ImportError: No module named wxversion + failed tests + +该错误表明,你的Python应用是基于GUI的,依赖于一个名为wxPython的缺失模块。[wxPython][1]是一个用于wxWidgets GUI库的Python扩展模块,普遍被C++程序员用来设计GUI应用。该wxPython扩展允许Python开发者在任何Python应用中方便地设计和整合GUI。 +要解决这个导入错误,你需要按照下面的介绍在你的Linux系统上安装wxPython。 
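在动手安装之前,可以先用下面这个小脚本确认缺失的到底是不是该模块(这只是一个示意性的写法:这里假设系统上装有 python3,实际检查时应换成你的应用所使用的解释器;模块名取自上面的报错信息):

```shell
# 示意脚本:检测某个 Python 模块能否被导入(解释器名为假设,请按实际情况调整)
check_module() {
    if python3 -c "import $1" 2>/dev/null; then
        echo "$1: 可用"
    else
        echo "$1: 缺失"
    fi
}
check_module wxversion
```

安装完成后再运行一次该检查,输出"可用"即说明导入错误已经解决。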
+ +### 安装wxPython到Debian,Ubuntu或Linux Mint ### + + $ sudo apt-get install python-wxgtk2.8 + +### 安装wxPython到Fedora ### + + $ sudo yum install wxPython + +### 安装wxPython到CentOS/RHEL ### + +wxPython可以在CentOS/RHEL的EPEL仓库中获取到,而基本仓库中则没有。因此,首先要在你的系统中[启用EPEL仓库][2],然后使用yum命令来安装。 + + $ sudo yum install wxPython + +### 安装wxPython到Arch Linux ### + + $ sudo pacman -S wxpython + +### 安装wxPython到Gentoo ### + + $ emerge wxPython + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html + +作者:[Dan Nanni][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:http://wxpython.org/ +[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html From 09b98f265e23ed5ec0c759f2e16c749288a47728 Mon Sep 17 00:00:00 2001 From: joeren Date: Fri, 7 Aug 2015 09:11:45 +0800 Subject: [PATCH 085/697] Update 20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md --- ...ogging in Open vSwitch for debugging and troubleshooting.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md index 2b4e16bcaf..dcf811a003 100644 --- a/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md +++ b/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md @@ -1,3 +1,4 @@ +Translating by GOlinu! 
Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting ================================================================================ > **Question:** I am trying to troubleshoot my Open vSwitch deployment. For that I would like to inspect its debug messages generated by its built-in logging mechanism. How can I enable logging in Open vSwitch, and change its logging level (e.g., to INFO/DEBUG level) to check more detailed debug information? @@ -66,4 +67,4 @@ via: http://ask.xmodulo.com/enable-logging-open-vswitch.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file +[a]:http://ask.xmodulo.com/author/nanni From 6323f7946f874af701db19742a8c6931b59aeb11 Mon Sep 17 00:00:00 2001 From: Ping Date: Fri, 7 Aug 2015 09:39:48 +0800 Subject: [PATCH 086/697] Complete 20150806 Linux FAQs with Answers--How to install git on Linux.md --- ...th Answers--How to install git on Linux.md | 74 ------------------- ...th Answers--How to install git on Linux.md | 73 ++++++++++++++++++ 2 files changed, 73 insertions(+), 74 deletions(-) delete mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md create mode 100644 translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md deleted file mode 100644 index c9610a2dfe..0000000000 --- a/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md +++ /dev/null @@ -1,74 +0,0 @@ -Translating by Ping - -Linux FAQs with Answers--How to install git on Linux -================================================================================ -> **Question:** I am trying to clone a project from a public Git repository, but I am getting "git: command not found" 
error. How can I install git on [insert your Linux distro]? - -Git is a popular open-source version control system (VCS) originally developed for Linux environment. Contrary to other VCS tools like CVS or SVN, Git's revision control is considered "distributed" in a sense that your local Git working directory can function as a fully-working repository with complete history and version-tracking capabilities. In this model, each collaborator commits to his or her local repository (as opposed to always committing to a central repository), and optionally push to a centralized repository if need be. This brings in scalability and redundancy to the revision control system, which is a must in any kind of large-scale collaboration. - -![](https://farm1.staticflickr.com/341/19433194168_c79d4570aa_b.jpg) - -### Install Git with a Package Manager ### - -Git is shipped with all major Linux distributions. Thus the easiest way to install Git is by using your Linux distro's package manager. - -**Debian, Ubuntu, or Linux Mint** - - $ sudo apt-get install git - -**Fedora, CentOS or RHEL** - - $ sudo yum install git - -**Arch Linux** - - $ sudo pacman -S git - -**OpenSUSE** - - $ sudo zypper install git - -**Gentoo** - - $ emerge --ask --verbose dev-vcs/git - -### Install Git from the Source ### - -If for whatever reason you want to built Git from the source, you can follow the instructions below. - -**Install Dependencies** - -Before building Git, first install dependencies. - -**Debian, Ubuntu or Linux** - - $ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x - -**Fedora, CentOS or RHEL** - - $ sudo yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel asciidoc xmlto docbook2x - -#### Compile Git from the Source #### - -Download the latest release of Git from [https://github.com/git/git/releases][1]. Then build and install Git under /usr as follows. 
- -Note that if you want to install it under a different directory (e.g., /opt), replace "--prefix=/usr" in configure command with something else. - - $ cd git-x.x.x - $ make configure - $ ./configure --prefix=/usr - $ make all doc info - $ sudo make install install-doc install-html install-info - --------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/install-git-linux.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni -[1]:https://github.com/git/git/releases diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md b/translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md new file mode 100644 index 0000000000..e6d3f59c71 --- /dev/null +++ b/translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md @@ -0,0 +1,73 @@ +Linux问答 -- 如何在Linux上安装Git +================================================================================ + +> **问题:** 我尝试从一个Git公共仓库克隆项目,但出现了这样的错误提示:“git: command not found”。 请问我该如何安装Git? [注明一下是哪个Linux发行版]? 
+Git是一个流行的并且开源的版本控制系统(VCS),最初是为Linux环境开发的。跟CVS或者SVN这些版本控制系统不同的是,Git的版本控制被认为是“分布式的”,某种意义上,git的本地工作目录可以作为一个功能完善的仓库来使用,它具备完整的历史记录和版本追踪能力。在这种工作模型之下,各个协作者将内容提交到他们的本地仓库中(而不是总是直接提交到核心仓库),如果有必要,再有选择性地推送到核心仓库。这就为Git这个版本管理系统带来了大型协作系统所必需的可扩展能力和冗余能力。
+
+![](https://farm1.staticflickr.com/341/19433194168_c79d4570aa_b.jpg)
+
+### 使用包管理器安装Git ###
+
+Git已经被所有的主力Linux发行版所支持。所以安装它最简单的方法就是使用各个Linux发行版的包管理器。
+
+**Debian, Ubuntu, 或 Linux Mint**
+
+    $ sudo apt-get install git
+
+**Fedora, CentOS 或 RHEL**
+
+    $ sudo yum install git
+
+**Arch Linux**
+
+    $ sudo pacman -S git
+
+**OpenSUSE**
+
+    $ sudo zypper install git
+
+**Gentoo**
+
+    $ emerge --ask --verbose dev-vcs/git
+
+### 从源码安装Git ###
+
+如果由于某些原因,你希望从源码安装Git,请按照如下介绍操作。
+
+**安装依赖包**
+
+在构建Git之前,先安装它的依赖包。
+
+**Debian, Ubuntu 或 Linux Mint**
+
+    $ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x
+
+**Fedora, CentOS 或 RHEL**
+
+    $ sudo yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel asciidoc xmlto docbook2x
+
+#### 从源码编译Git ####
+
+从 [https://github.com/git/git/releases][1] 下载最新版本的Git。然后在/usr下构建和安装。
+
+注意,如果你打算安装到其他目录下(例如:/opt),那就把配置命令中的"--prefix=/usr"替换成其他路径。
+
+    $ cd git-x.x.x
+    $ make configure
+    $ ./configure --prefix=/usr
+    $ make all doc info
+    $ sudo make install install-doc install-html install-info
+
+--------------------------------------------------------------------------------
+
+via: http://ask.xmodulo.com/install-git-linux.html
+
+作者:[Dan Nanni][a]
+译者:[mr-ping](https://github.com/mr-ping)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://ask.xmodulo.com/author/nanni
+[1]:https://github.com/git/git/releases

From b2706ff03ec3218931128a03bca7fd05b28817e0 Mon Sep 17 00:00:00 2001
From: ZTinoZ
Date: Fri, 7 Aug 2015 09:52:37 +0800
Subject: [PATCH 087/697] Translating by ZTinoZ

---
 sources/talk/20150806 5 heroes of the Linux world.md | 3 ++-
 1
file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150806 5 heroes of the Linux world.md b/sources/talk/20150806 5 heroes of the Linux world.md index ae35d674a1..5b5485198c 100644 --- a/sources/talk/20150806 5 heroes of the Linux world.md +++ b/sources/talk/20150806 5 heroes of the Linux world.md @@ -1,3 +1,4 @@ +Translating by ZTinoZ 5 heroes of the Linux world ================================================================================ Who are these people, seen and unseen, whose work affects all of us every day? @@ -96,4 +97,4 @@ via: http://www.itworld.com/article/2955001/linux/5-heros-of-the-linux-world.htm [7]:https://flic.kr/p/hBv8Pp [8]:https://en.wikipedia.org/wiki/Sysfs [9]:https://www.youtube.com/watch?v=CyHAeGBFS8k -[10]:http://www.itworld.com/article/2873200/operating-systems/11-technologies-that-tick-off-linus-torvalds.html \ No newline at end of file +[10]:http://www.itworld.com/article/2873200/operating-systems/11-technologies-that-tick-off-linus-torvalds.html From cf1e4ef93781185b77462724d1c8529ad89a7d60 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Fri, 7 Aug 2015 09:55:36 +0800 Subject: [PATCH 088/697] [Translated]20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md --- ...witch for debugging and troubleshooting.md | 70 ------------------- ...witch for debugging and troubleshooting.md | 69 ++++++++++++++++++ 2 files changed, 69 insertions(+), 70 deletions(-) delete mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md create mode 100644 translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for 
debugging and troubleshooting.md deleted file mode 100644 index dcf811a003..0000000000 --- a/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md +++ /dev/null @@ -1,70 +0,0 @@ -Translating by GOlinu! -Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting -================================================================================ -> **Question:** I am trying to troubleshoot my Open vSwitch deployment. For that I would like to inspect its debug messages generated by its built-in logging mechanism. How can I enable logging in Open vSwitch, and change its logging level (e.g., to INFO/DEBUG level) to check more detailed debug information? - -Open vSwitch (OVS) is the most popular open-source implementation of virtual switch on the Linux platform. As the today's data centers increasingly rely on the software-defined network (SDN) architecture, OVS is fastly adopted as the de-facto standard network element in data center's SDN deployments. - -Open vSwitch has a built-in logging mechanism called VLOG. The VLOG facility allows one to enable and customize logging within various components of the switch. The logging information generated by VLOG can be sent to a combination of console, syslog and a separate log file for inspection. You can configure OVS logging dynamically at run-time with a command-line tool called `ovs-appctl`. - -![](https://farm1.staticflickr.com/499/19300367114_cd8aac2fb2_c.jpg) - -Here is how to enable logging and customize logging levels in Open vSwitch with `ovs-appctl`. - -The syntax of `ovs-appctl` to customize VLOG is as follows. 
- - $ sudo ovs-appctl vlog/set module[:facility[:level]] - -- **Module**: name of any valid component in OVS (e.g., netdev, ofproto, dpif, vswitchd, and many others) -- **Facility**: destination of logging information (must be: console, syslog or file) -- **Level**: verbosity of logging (must be: emer, err, warn, info, or dbg) - -In OVS source code, module name is defined in each source file in the form of: - - VLOG_DEFINE_THIS_MODULE(); - -For example, in lib/netdev.c, you will see: - - VLOG_DEFINE_THIS_MODULE(netdev); - -which indicates that lib/netdev.c is part of netdev module. Any logging messages generated in lib/netdev.c will belong to netdev module. - -In OVS source code, there are multiple severity levels used to define several different kinds of logging messages: VLOG_INFO() for informational, VLOG_WARN() for warning, VLOG_ERR() for error, VLOG_DBG() for debugging, VLOG_EMERG for emergency. Logging level and facility determine which logging messages are sent where. - -To see a full list of available modules, facilities, and their respective logging levels, run the following commands. This command must be invoked after you have started OVS. - - $ sudo ovs-appctl vlog/list - -![](https://farm1.staticflickr.com/465/19734939478_7eb5d44635_c.jpg) - -The output shows the debug levels of each module for three different facilities (console, syslog, file). By default, all modules have their logging level set to INFO. - -Given any one OVS module, you can selectively change the debug level of any particular facility. For example, if you want to see more detailed debug messages of dpif module at the console screen, run the following command. - - $ sudo ovs-appctl vlog/set dpif:console:dbg - -You will see that dpif module's console facility has changed its logging level to DBG. The logging level of two other facilities, syslog and file, remains unchanged. 
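A note on the `vlog/set` triplet above: the facility and level names come from small fixed sets, and `ovs-appctl` only reports a typo at run time. The shell sketch below is our own illustration, not part of OVS (the `vlog_set` helper name is invented); it validates the facility and level first and, to keep the example runnable, prints the `ovs-appctl` command it would issue instead of executing it:

```shell
#!/bin/sh
# Hypothetical guard around `ovs-appctl vlog/set`.
# Validates the facility and level against the fixed sets described above,
# then prints (rather than executes) the resulting command.
vlog_set() {
    module=$1; facility=$2; level=$3
    case $facility in
        console|syslog|file|ANY) ;;
        *) echo "invalid facility: $facility" >&2; return 1 ;;
    esac
    case $level in
        emer|err|warn|info|dbg) ;;
        *) echo "invalid level: $level" >&2; return 1 ;;
    esac
    echo "sudo ovs-appctl vlog/set ${module}:${facility}:${level}"
}

vlog_set dpif console dbg   # prints: sudo ovs-appctl vlog/set dpif:console:dbg
vlog_set ANY ANY dbg        # prints: sudo ovs-appctl vlog/set ANY:ANY:dbg
vlog_set dpif terminal dbg || echo "rejected: terminal is not a facility"
```

To use it for real, replace the final `echo` inside the function with the actual `sudo ovs-appctl` invocation.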
- -![](https://farm1.staticflickr.com/333/19896760146_5d851311ae_c.jpg) - -If you want to change the logging level for all modules, you can specify "ANY" as the module name. For example, the following command will change the console logging level of every module to DBG. - - $ sudo ovs-appctl vlog/set ANY:console:dbg - -![](https://farm1.staticflickr.com/351/19734939828_8c7f59e404_c.jpg) - -Also, if you want to change the logging level of all three facilities at once, you can specify "ANY" as the facility name. For example, the following command will change the logging level of all facilities for every module to DBG. - - $ sudo ovs-appctl vlog/set ANY:ANY:dbg - --------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/enable-logging-open-vswitch.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md new file mode 100644 index 0000000000..542cf31cb3 --- /dev/null +++ b/translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md @@ -0,0 +1,69 @@ +Linux有问必答——如何启用Open vSwitch的日志功能以便调试和排障 +================================================================================ +> **问题** 我试着为我的Open vSwitch部署排障,鉴于此,我想要检查它的由内建日志机制生成的调试信息。我怎样才能启用Open vSwitch的日志功能,并且修改它的日志等级(如,修改成INFO/DEBUG级别)以便于检查更多详细的调试信息呢? 
+ +Open vSwitch(OVS)是Linux平台上用于虚拟切换的最流行的开源部署。由于当今的数据中心日益依赖于软件定义的网络(SDN)架构,OVS被作为数据中心的SDN部署中实际上的标准网络元素而快速采用。 + +Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允许你在各种切换组件中启用并自定义日志,由VLOG生成的日志信息可以被发送到一个控制台,syslog以及一个独立日志文件组合,以供检查。你可以通过一个名为`ovs-appctl`的命令行工具在运行时动态配置OVS日志。 + +![](https://farm1.staticflickr.com/499/19300367114_cd8aac2fb2_c.jpg) + +这里为你演示如何使用`ovs-appctl`启用Open vSwitch中的日志功能,并进行自定义。 + +下面是`ovs-appctl`自定义VLOG的语法。 + + $ sudo ovs-appctl vlog/set module[:facility[:level]] + +- **Module**:OVS中的任何合法组件的名称(如netdev,ofproto,dpif,vswitchd,以及其它大量组件) +- **Facility**:日志信息的目的地(必须是:console,syslog,或者file) +- **Level**:日志的详细程度(必须是:emer,err,warn,info,或者dbg) + +在OVS源代码中,模块名称在源文件中是以以下格式定义的: + + VLOG_DEFINE_THIS_MODULE(); + +例如,在lib/netdev.c中,你可以看到: + + VLOG_DEFINE_THIS_MODULE(netdev); + +这个表明,lib/netdev.c是netdev模块的一部分,任何在lib/netdev.c中生成的日志信息将属于netdev模块。 + +在OVS源代码中,有多个严重度等级用于定义几个不同类型的日志信息:VLOG_INFO()用于报告,VLOG_WARN()用于警告,VLOG_ERR()用于错误提示,VLOG_DBG()用于调试信息,VLOG_EMERG用于紧急情况。日志等级和工具确定哪个日志信息发送到哪里。 + +要查看可用模块、工具和各自日志级别的完整列表,请运行以下命令。该命令必须在你启动OVS后调用。 + + $ sudo ovs-appctl vlog/list + +![](https://farm1.staticflickr.com/465/19734939478_7eb5d44635_c.jpg) + +输出结果显示了用于三个工具(console,syslog,file)的各个模块的调试级别。默认情况下,所有模块的日志等级都被设置为INFO。 + +指定任何一个OVS模块,你可以选择性地修改任何特定工具的调试级别。例如,如果你想要在控制台屏幕中查看dpif更为详细的调试信息,可以运行以下命令。 + + $ sudo ovs-appctl vlog/set dpif:console:dbg + +你将看到dpif模块的console工具已经将其日志等级修改为DBG,而其它两个工具syslog和file的日志级别仍然没有改变。 + +![](https://farm1.staticflickr.com/333/19896760146_5d851311ae_c.jpg) + +如果你想要修改所有模块的日志等级,你可以指定“ANY”作为模块名。例如,下面命令将修改每个模块的console的日志级别为DBG。 + + $ sudo ovs-appctl vlog/set ANY:console:dbg + +![](https://farm1.staticflickr.com/351/19734939828_8c7f59e404_c.jpg) + +同时,如果你想要一次性修改所有三个工具的日志级别,你可以指定“ANY”作为工具名。例如,下面的命令将修改每个模块的所有工具的日志级别为DBG。 + + $ sudo ovs-appctl vlog/set ANY:ANY:dbg + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/enable-logging-open-vswitch.html + +作者:[Dan Nanni][a] +译者:[GOLinux](https://github.com/GOLinux) 
+校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni From f3cb1163f4d917f9e9db845461f0fbecca88b84f Mon Sep 17 00:00:00 2001 From: ZTinoZ Date: Fri, 7 Aug 2015 10:37:40 +0800 Subject: [PATCH 089/697] Translating by ZTinoZ --- sources/talk/20150806 5 heroes of the Linux world.md | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/sources/talk/20150806 5 heroes of the Linux world.md b/sources/talk/20150806 5 heroes of the Linux world.md index 5b5485198c..bf1d75d562 100644 --- a/sources/talk/20150806 5 heroes of the Linux world.md +++ b/sources/talk/20150806 5 heroes of the Linux world.md @@ -1,16 +1,15 @@ -Translating by ZTinoZ -5 heroes of the Linux world +Linux世界的五个大神 ================================================================================ -Who are these people, seen and unseen, whose work affects all of us every day? +这些人是谁?见或者没见过?谁在每天影响着我们? ![Image courtesy Christopher Michel/Flickr](http://core0.staticworld.net/images/article/2015/07/penguin-100599348-orig.jpg) Image courtesy [Christopher Michel/Flickr][1] -### High-flying penguins ### +### 野心勃勃的企鹅 ### -Linux and open source is driven by passionate people who write best-of-breed software and then release the code to the public so anyone can use it, without any strings attached. (Well, there is one string attached and that’s licence.) +Linux和开源世界一直在被那些热情洋溢的人们驱动着,他们开发出最好的软件并将代码向公众开放,所以每个人都能无条件地看到。(对了,有那么一个条件,那就是许可证。) -Who are these people? These heroes of the Linux world, whose work affects all of us every day. Allow me to introduce you. 
+那么,这些人是谁?这些Linux世界里的大神们,谁在每天影响着我们?让我来给你一一揭晓。 ![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/swap-klaus-100599357-orig.jpg) Image courtesy Swapnil Bhartiya From 14a989ea41ead6ccb5bf70206ca4e96fed54596f Mon Sep 17 00:00:00 2001 From: Ping Date: Fri, 7 Aug 2015 14:38:03 +0800 Subject: [PATCH 090/697] Marking out Translating article --- .../tech/20150518 How to set up a Replica Set on MongoDB.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150518 How to set up a Replica Set on MongoDB.md b/sources/tech/20150518 How to set up a Replica Set on MongoDB.md index 07e16dafc1..83a7da8769 100644 --- a/sources/tech/20150518 How to set up a Replica Set on MongoDB.md +++ b/sources/tech/20150518 How to set up a Replica Set on MongoDB.md @@ -1,3 +1,4 @@ +Translating by Ping How to set up a Replica Set on MongoDB ================================================================================ MongoDB has become the most famous NoSQL database on the market. MongoDB is document-oriented, and its scheme-free design makes it a really attractive solution for all kinds of web applications. One of the features that I like the most is Replica Set, where multiple copies of the same data set are maintained by a group of mongod nodes for redundancy and high availability. 
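For context on the Replica Set feature mentioned in the article being marked above: a replica set is declared with a single configuration document listing its member mongod nodes. The fragment below is our own minimal three-member sketch for the mongo shell; the set name `rs0` and the hostnames are placeholder assumptions, not taken from the article:

```js
// Hypothetical three-node replica set; `_id` must match each
// mongod's --replSet setting. Run once via rs.initiate() from a
// mongo shell connected to any member.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "node1.example.com:27017" },
    { _id: 1, host: "node2.example.com:27017" },
    { _id: 2, host: "node3.example.com:27017" }
  ]
})
```

After initiation the members elect a primary and the remaining nodes replicate its oplog, which is what provides the redundancy and high availability described above.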
@@ -179,4 +180,4 @@ via: http://xmodulo.com/setup-replica-set-mongodb.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://xmodulo.com/author/valerio -[1]:http://docs.mongodb.org/ecosystem/drivers/ \ No newline at end of file +[1]:http://docs.mongodb.org/ecosystem/drivers/ From ee9f6a02e8e29bd00c790a8a2098d2654a0269fa Mon Sep 17 00:00:00 2001 From: ZTinoZ Date: Fri, 7 Aug 2015 15:15:02 +0800 Subject: [PATCH 091/697] Translating by ZTinoZ --- sources/talk/20150806 5 heroes of the Linux world.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/sources/talk/20150806 5 heroes of the Linux world.md b/sources/talk/20150806 5 heroes of the Linux world.md index bf1d75d562..abc42df7f9 100644 --- a/sources/talk/20150806 5 heroes of the Linux world.md +++ b/sources/talk/20150806 5 heroes of the Linux world.md @@ -7,7 +7,7 @@ Image courtesy [Christopher Michel/Flickr][1] ### 野心勃勃的企鹅 ### -Linux和开源世界一直在被那些热情洋溢的人们驱动着,他们开发出最好的软件并将代码向公众开放,所以每个人都能无条件地看到。(对了,有那么一个条件,那就是许可证。) +Linux和开源世界一直在被那些热情洋溢的人们推动着,他们开发出最好的软件并将代码向公众开放,所以每个人都能无条件地看到。(对了,有那么一个条件,那就是许可证。) 那么,这些人是谁?这些Linux世界里的大神们,谁在每天影响着我们?让我来给你一一揭晓。 @@ -16,9 +16,9 @@ Image courtesy Swapnil Bhartiya ### Klaus Knopper ### -Klaus Knopper, an Austrian developer who lives in Germany, is the founder of Knoppix and Adriana Linux, which he developed for his blind wife. +Klaus Knopper,一个生活在德国的奥地利开发者,他是Knoppix和Adriana Linux的创始人,为了他失明的妻子开发程序。 -Knoppix holds a very special place in heart of those Linux users who started using Linux before Ubuntu came along. What makes Knoppix so special is that it popularized the concept of Live CD. Unlike Windows or Mac OS X, you could run the entire operating system from the CD without installing anything on the system. It allowed new users to test Linux on their systems without formatting the hard drive. The live feature of Linux alone contributed heavily to its popularity. 
+Knoppix在那些Linux用户心里有着特殊的地位,他们在使用Ubuntu之前都会尝试Knoppix,而Knoppix让人称道的就是它让Live CD的概念普及开来。不像Windows或Mac OS X,你可以通过CD运行整个操作系统而不用在系统上安装任何东西,它允许新用户在他们的机子上快速试用Linux而不用去格式化硬盘。Linux的这种Live特性为它的普及做出了巨大贡献。

![Image courtesy Fórum Internacional Software Live/Flickr](http://images.techhive.com/images/article/2015/07/lennart-100599356-orig.jpg)
Image courtesy [Fórum Internacional Software Live/Flickr][2]

From 0fe382d37872929df93bbdc84daa135d253e9d8b Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Fri, 7 Aug 2015 15:25:11 +0800
Subject: [PATCH 092/697] =?UTF-8?q?20150807-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...riables on a Linux and Unix-like System.md | 98 +++++++++++++++++++
 1 file changed, 98 insertions(+)
 create mode 100644 sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md

diff --git a/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md b/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md
new file mode 100644
index 0000000000..b2fa80ff0a
--- /dev/null
+++ b/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md
@@ -0,0 +1,98 @@
+How To: Temporarily Clear Bash Environment Variables on a Linux and Unix-like System
+================================================================================
+I'm a bash shell user. I would like to temporarily clear bash shell environment variables. I do not want to delete or unset an exported environment variable. How do I run a program in a temporary environment in bash or ksh shell?
+
+You can use the env command to set and print the environment on Linux or Unix-like systems. The env command executes a utility after modifying the environment as specified on the command line.
+
+### How do I display my current environment? ###
+
+Open the terminal application and type any one of the following commands:
+
+    printenv
+
+OR
+
+    env
+
+Sample outputs:
+
+![Fig.01: Unix/Linux: List All Environment Variables Command](http://s0.cyberciti.org/uploads/faq/2015/08/env-unix-linux-command-output.jpg)
+Fig.01: Unix/Linux: List All Environment Variables Command
+
+### Counting your environment variables ###
+
+Type the following command:
+
+    env | wc -l
+    printenv | wc -l
+
+Sample outputs:
+
+    20
+
+### Run a program in a clean environment in bash/ksh/zsh ###
+
+The syntax is as follows:
+
+    env -i your-program-name-here arg1 arg2 ...
+
+For example, run the wget program without using http_proxy and/or all other variables, i.e. temporarily clear all bash/ksh/zsh environment variables and run the wget program:
+
+    env -i /usr/local/bin/wget www.cyberciti.biz
+    env -i wget www.cyberciti.biz
+
+This is very useful when you want to run a command ignoring any environment variables you have set. I use this command many times every day to ignore the http_proxy and other environment variables I have set.
+
+#### Example: With the http_proxy ####
+
+    $ wget www.cyberciti.biz
+    --2015-08-03 23:20:23-- http://www.cyberciti.biz/
+    Connecting to 10.12.249.194:3128... connected.
+    Proxy request sent, awaiting response... 200 OK
+    Length: unspecified [text/html]
+    Saving to: 'index.html'
+    index.html [ <=> ] 36.17K 87.0KB/s in 0.4s
+    2015-08-03 23:20:24 (87.0 KB/s) - 'index.html' saved [37041]
+
+#### Example: Ignore the http_proxy ####
+
+    $ env -i /usr/local/bin/wget www.cyberciti.biz
+    --2015-08-03 23:25:17-- http://www.cyberciti.biz/
+    Resolving www.cyberciti.biz... 74.86.144.194
+    Connecting to www.cyberciti.biz|74.86.144.194|:80... connected.
+    HTTP request sent, awaiting response... 200 OK
+    Length: unspecified [text/html]
+    Saving to: 'index.html.1'
+    index.html.1 [ <=> ] 36.17K 115KB/s in 0.3s
+    2015-08-03 23:25:18 (115 KB/s) - 'index.html.1' saved [37041]
+
+The option -i causes the env command to completely ignore the environment it inherits. However, it does not prevent your command (such as wget or curl) from setting new variables. Also, note down the side effect of running a bash/ksh shell:
+
+    env -i env | wc -l ## empty ##
+    # Now run bash ##
+    env -i bash
+    ## New environment set by bash program ##
+    env | wc -l
+
+#### Example: Set an environmental variable ####
+
+The syntax is:
+
+    env var=value /path/to/command arg1 arg2 ...
+    ## OR ##
+    var=value /path/to/command arg1 arg2 ...
+
+For example set http_proxy:
+
+    env http_proxy="http://USER:PASSWORD@server1.cyberciti.biz:3128/" \
+    /usr/local/bin/wget www.cyberciti.biz
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/linux-unix-temporarily-clearing-environment-variables-command/
+
+作者:Vivek Gite
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file

From b9f635745b5dfc4ac1151540c4d74e4a842ddedd Mon Sep 17 00:00:00 2001
From: ictlyh
Date: Fri, 7 Aug 2015 17:47:09 +0800
Subject: [PATCH 093/697] [Translating] sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md

---
 ...Bash Environment Variables on a Linux and Unix-like System.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and
Unix-like System.md +++ b/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md @@ -1,3 +1,4 @@ +Translating by ictlyh How To: Temporarily Clear Bash Environment Variables on a Linux and Unix-like System ================================================================================ I'm a bash shell user. I would like to temporarily clear bash shell environment variables. I do not want to delete or unset an exported environment variable. How do I run a program in a temporary environment in bash or ksh shell? From cff6b78e5da1392612b87b6975d9cf38b40ce89d Mon Sep 17 00:00:00 2001 From: martin qi Date: Fri, 7 Aug 2015 19:39:10 +0800 Subject: [PATCH 094/697] Update 20150716 Interview--Larry Wall.md --- sources/talk/20150716 Interview--Larry Wall.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150716 Interview--Larry Wall.md b/sources/talk/20150716 Interview--Larry Wall.md index 5d0b40d2ed..1362281517 100644 --- a/sources/talk/20150716 Interview--Larry Wall.md +++ b/sources/talk/20150716 Interview--Larry Wall.md @@ -1,3 +1,5 @@ +martin + Interview: Larry Wall ================================================================================ > Perl 6 has been 15 years in the making, and is now due to be released at the end of this year. We speak to its creator to find out what’s going on. 
@@ -122,4 +124,4 @@ via: http://www.linuxvoice.com/interview-larry-wall/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.linuxvoice.com/author/mike/ \ No newline at end of file +[a]:http://www.linuxvoice.com/author/mike/ From a78cf0b53f3c126737ea4b77a4e7af8f552524f3 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 7 Aug 2015 22:05:46 +0800 Subject: [PATCH 095/697] PUB:20150728 How To Fix--There is no command installed for 7-zip archive files @GOLinux --- ...There is no command installed for 7-zip archive files.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) rename {translated/tech => published}/20150728 How To Fix--There is no command installed for 7-zip archive files.md (96%) diff --git a/translated/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md b/published/20150728 How To Fix--There is no command installed for 7-zip archive files.md similarity index 96% rename from translated/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md rename to published/20150728 How To Fix--There is no command installed for 7-zip archive files.md index 61237467ca..34a7af3190 100644 --- a/translated/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md +++ b/published/20150728 How To Fix--There is no command installed for 7-zip archive files.md @@ -5,10 +5,12 @@ 我试着在Ubuntu中安装Emerald图标主题,而这个主题被打包成了.7z归档包。和以往一样,我试着通过在GUI中右击并选择“提取到这里”来将它解压缩。但是Ubuntu 15.04却并没有解压文件,取而代之的,却是丢给了我一个下面这样的错误信息: > Could not open this file +> > 无法打开该文件 > > There is no command installed for 7-zip archive files. Do you want to search for a command to open this file? -> 没有安装用于7-zip归档文件的命令。你是否想要搜索命令来打开该文件? +> +> 没有安装用于7-zip归档文件的命令。你是否想要搜索用于来打开该文件的命令? 
错误信息看上去是这样的: @@ -42,7 +44,7 @@ via: http://itsfoss.com/fix-there-is-no-command-installed-for-7-zip-archive-file 作者:[Abhishek][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From fb82c13465c5d60e15a2ddadd8c7f17ac307ea99 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 7 Aug 2015 22:47:51 +0800 Subject: [PATCH 096/697] PUB:20150803 Handy commands for profiling your Unix file systems @strugglingyouth --- ...ds for profiling your Unix file systems.md | 65 ++++++++++++++++++ ...ds for profiling your Unix file systems.md | 66 ------------------- 2 files changed, 65 insertions(+), 66 deletions(-) create mode 100644 published/20150803 Handy commands for profiling your Unix file systems.md delete mode 100644 translated/tech/20150803 Handy commands for profiling your Unix file systems.md diff --git a/published/20150803 Handy commands for profiling your Unix file systems.md b/published/20150803 Handy commands for profiling your Unix file systems.md new file mode 100644 index 0000000000..1bfc6ac4bd --- /dev/null +++ b/published/20150803 Handy commands for profiling your Unix file systems.md @@ -0,0 +1,65 @@ +使用 Find 命令来帮你找到那些需要清理的文件 +================================================================================ +![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) + +*Credit: Sandra H-S* + +有一个问题几乎困扰着所有的文件系统 -- 包括 Unix 和其他的 -- 那就是文件的不断积累。几乎没有人愿意花时间清理掉他们不再使用的文件和整理文件系统,结果,文件变得很混乱,很难找到有用的东西,要使它们运行良好、维护备份、易于管理,这将是一种持久的挑战。 + +我见过的一种解决问题的方法是建议使用者将所有的数据碎屑创建一个文件集合的总结报告或"概况",来报告诸如所有的文件数量;最老的,最新的,最大的文件;并统计谁拥有这些文件等数据。如果有人看到五年前的一个包含五十万个文件的文件夹,他们可能会去删除哪些文件 -- 或者,至少会归档和压缩。主要问题是太大的文件夹会使人担心误删一些重要的东西。如果有一个描述文件夹的方法能帮助显示文件的性质,那么你就可以去清理它了。 + +当我准备做 Unix 文件系统的总结报告时,几个有用的 Unix 命令能提供一些非常有用的统计信息。要计算目录中的文件数,你可以使用这样一个 find 命令。 + + $ find . 
-type f | wc -l + 187534 + +虽然查找最老的和最新的文件是比较复杂,但还是相当方便的。在下面的命令,我们使用 find 命令再次查找文件,以文件时间排序并按年-月-日的格式显示,在列表顶部的显然是最老的。 + +在第二个命令,我们做同样的,但打印的是最后一行,这是最新的。 + + $ find -type f -printf '%T+ %p\n' | sort | head -n 1 + 2006-02-03+02:40:33 ./skel/.xemacs/init.el + $ find -type f -printf '%T+ %p\n' | sort | tail -n 1 + 2015-07-19+14:20:16 ./.bash_history + +printf 命令输出 %T(文件日期和时间)和 %P(带路径的文件名)参数。 + +如果我们在查找家目录时,无疑会发现,history 文件(如 .bash_history)是最新的,这并没有什么用。你可以通过 "un-grepping" 来忽略这些文件,也可以忽略以.开头的文件,如下图所示的。 + + $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1 + 2015-07-19+13:02:12 ./isPrime + +寻找最大的文件使用 %s(大小)参数,包括文件名(%f),因为这就是我们想要在报告中显示的。 + + $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1 + 20183040 project.org.tar + +统计文件的所有者,使用%u(所有者) + + $ find -type f -printf '%u \n' | grep -v "\./\." | sort | uniq -c + 180034 shs + 7500 jdoe + +如果文件系统能记录上次的访问日期,也将是非常有用的,可以用来看该文件有没有被访问过,比方说,两年之内没访问过。这将使你能明确分辨这些文件的价值。这个最后访问(%a)参数这样使用: + + $ find -type f -printf '%a+ %p\n' | sort | head -n 1 + Fri Dec 15 03:00:30 2006+ ./statreport + +当然,如果大多数最近​​访问的文件也是在很久之前的,这看起来你需要处理更多文件了。 + + $ find -type f -printf '%a+ %p\n' | sort | tail -n 1 + Wed Nov 26 03:00:27 2007+ ./my-notes + +要想层次分明,可以为一个文件系统或大目录创建一个总结报告,显示这些文件的日期范围、最大的文件、文件所有者们、最老的文件和最新访问时间,可以帮助文件拥有者判断当前有哪些文件夹是重要的哪些该清理了。 + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html + +作者:[Sandra Henry-Stocker][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ diff --git a/translated/tech/20150803 Handy commands for profiling your Unix file systems.md b/translated/tech/20150803 Handy commands for profiling your Unix file systems.md deleted file mode 100644 index 13efdcf0a1..0000000000 --- 
a/translated/tech/20150803 Handy commands for profiling your Unix file systems.md +++ /dev/null @@ -1,66 +0,0 @@ - -很实用的命令来分析你的 Unix 文件系统 -================================================================================ -![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) -Credit: Sandra H-S - -其中一个问题几乎困扰着所有的文件系统 -- 包括 Unix 和其他的 -- 那就是文件的不断积累。几乎没有人愿意花时间清理掉他们不再使用的文件和文件系统,结果,文件变得很混乱,很难找到有用的东西使它们运行良好,能够得到备份,并且易于管理,这将是一种持久的挑战。 - -我见过的一种解决问题的方法是鼓励使用者将所有的数据碎屑创建成一个总结报告或"profile"这样一个文件集合来报告所有的文件数量;最老的,最新的,最大的文件;并统计谁拥有这些文件。如果有人看到一个包含五十万个文件的文件夹并且时间不小于五年,他们可能会去删除哪些文件 -- 或者,至少归档和压缩。主要问题是太大的文件夹会使人产生压制性害怕误删一些重要的东西。有一个描述文件夹的方法能帮助显示文件的性质并期待你去清理它。 - - -当我准备做 Unix 文件系统的总结报告时,几个有用的 Unix 命令能提供一些非常有用的统计信息。要计算目录中的文件数,你可以使用这样一个 find 命令。 - - $ find . -type f | wc -l - 187534 - -查找最老的和最新的文件是比较复杂,但还是相当方便的。在下面的命令,我们使用 find 命令再次查找文件,以文件时间排序并按年-月-日的格式显示在顶部 -- 因此最老的 -- 的文件在列表中。 - -在第二个命令,我们做同样的,但打印的是最后一行 -- 这是最新的 -- 文件 - - $ find -type f -printf '%T+ %p\n' | sort | head -n 1 - 2006-02-03+02:40:33 ./skel/.xemacs/init.el - $ find -type f -printf '%T+ %p\n' | sort | tail -n 1 - 2015-07-19+14:20:16 ./.bash_history - -printf 命令输出 %T(文件日期和时间)和 %P(带路径的文件名)参数。 - -如果我们在查找家目录时,无疑会发现,history 文件是最新的,这不像是一个很有趣的信息。你可以通过 "un-grepping" 来忽略这些文件,也可以忽略以.开头的文件,如下图所示的。 - - $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1 - 2015-07-19+13:02:12 ./isPrime - -寻找最大的文件使用 %s(大小)参数,包括文件名(%f),因为这就是我们想要在报告中显示的。 - - $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1 - 20183040 project.org.tar - -打印文件的所有着者,使用%u(所有者) - - $ find -type f -printf '%u \n' | grep -v "\./\." 
| sort | uniq -c - 180034 shs - 7500 jdoe - -如果文件系统能记录上次的访问日期,也将是非常有用的来看该文件有没有被访问,比方说,两年之内。这将使你能明确分辨这些文件的价值。最后一个访问参数(%a)这样使用: - - $ find -type f -printf '%a+ %p\n' | sort | head -n 1 - Fri Dec 15 03:00:30 2006+ ./statreport - -当然,如果最近​​访问的文件也是在很久之前的,这将使你有更多的处理时间。 - - $ find -type f -printf '%a+ %p\n' | sort | tail -n 1 - Wed Nov 26 03:00:27 2007+ ./my-notes - -一个文件系统要层次分明,为大目录创建一个总结报告,显示该文件的日期范围,最大的文件,文件所有者,最老的和访问时间都可以帮助文件拥有者判断当前有哪些文件夹是重要的哪些该清理了。 - --------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html - -作者:[Sandra Henry-Stocker][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ From 96a056137fdee0854671418ce7aae8be952ba66d Mon Sep 17 00:00:00 2001 From: wi-cuckoo Date: Fri, 7 Aug 2015 23:52:45 +0800 Subject: [PATCH 097/697] translating wi-cuckoo --- ... Interview Experience on RedHat Linux Package Management.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md b/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md index 7915907e6a..6243a8c0de 100644 --- a/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md +++ b/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md @@ -1,3 +1,4 @@ +translating wi-cuckoo Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management ================================================================================ **Shilpa Nair has just graduated in the year 2015. 
She went to apply for Trainee position in a National News Television located in Noida, Delhi. When she was in the last year of graduation and searching for help on her assignments she came across Tecmint. Since then she has been visiting Tecmint regularly.** @@ -345,4 +346,4 @@ via: http://www.tecmint.com/linux-rpm-package-management-interview-questions/ [a]:http://www.tecmint.com/author/avishek/ [1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ \ No newline at end of file +[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ From 7b64f7af56f5b9513420c849bfbba8c91c12e926 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 7 Aug 2015 23:53:07 +0800 Subject: [PATCH 098/697] PUB:20150504 How to access a Linux server behind NAT via reverse SSH tunnel @ictlyh --- ...erver behind NAT via reverse SSH tunnel.md | 34 +++++++++---------- 1 file changed, 17 insertions(+), 17 deletions(-) rename {translated/tech => published}/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md (58%) diff --git a/translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md b/published/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md similarity index 58% rename from translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md rename to published/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md index 5f9828e912..c6dddd3639 100644 --- a/translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md +++ b/published/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md @@ -1,18 +1,18 @@ 如何通过反向 SSH 隧道访问 NAT 后面的 Linux 服务器 ================================================================================ -你在家里运行着一台 Linux 服务器,访问它需要先经过 NAT 路由器或者限制性防火墙。现在你想不在家的时候用 SSH 
登录到这台服务器。你如何才能做到呢?SSH 端口转发当然是一种选择。但是,如果你需要处理多个嵌套的 NAT 环境,端口转发可能会变得非常棘手。另外,在多种 ISP 特定条件下可能会受到干扰,例如阻塞转发端口的限制性 ISP 防火墙、或者在用户间共享 IPv4 地址的运营商级 NAT。 +你在家里运行着一台 Linux 服务器,它放在一个 NAT 路由器或者限制性防火墙后面。现在你想在外出时用 SSH 登录到这台服务器。你如何才能做到呢?SSH 端口转发当然是一种选择。但是,如果你需要处理多级嵌套的 NAT 环境,端口转发可能会变得非常棘手。另外,在多种 ISP 特定条件下可能会受到干扰,例如阻塞转发端口的限制性 ISP 防火墙、或者在用户间共享 IPv4 地址的运营商级 NAT。 ### 什么是反向 SSH 隧道? ### -SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧道的概念非常简单。对于此,在限制性家庭网络之外你需要另一台主机(所谓的“中继主机”),你能从当前所在地通过 SSH 登录。你可以用有公共 IP 地址的 [VPS 实例][1] 配置一个中继主机。然后要做的就是从你家庭网络服务器中建立一个到公共中继主机的永久 SSH 隧道。有了这个隧道,你就可以从中继主机中连接“回”家庭服务器(这就是为什么称之为 “反向” 隧道)。不管你在哪里、你家庭网络中的 NAT 或 防火墙限制多么严重,只要你可以访问中继主机,你就可以连接到家庭服务器。 +SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧道的概念非常简单。使用这种方案,在你的受限的家庭网络之外你需要另一台主机(所谓的“中继主机”),你能从当前所在地通过 SSH 登录到它。你可以用有公网 IP 地址的 [VPS 实例][1] 配置一个中继主机。然后要做的就是从你的家庭网络服务器中建立一个到公网中继主机的永久 SSH 隧道。有了这个隧道,你就可以从中继主机中连接“回”家庭服务器(这就是为什么称之为 “反向” 隧道)。不管你在哪里、你的家庭网络中的 NAT 或 防火墙限制多么严格,只要你可以访问中继主机,你就可以连接到家庭服务器。 ![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg) ### 在 Linux 上设置反向 SSH 隧道 ### -让我们来看看怎样创建和使用反向 SSH 隧道。我们有如下假设。我们会设置一个从家庭服务器到中继服务器的反向 SSH 隧道,然后我们可以通过中继服务器从客户端计算机 SSH 登录到家庭服务器。**中继服务器** 的公共 IP 地址是 1.1.1.1。 +让我们来看看怎样创建和使用反向 SSH 隧道。我们做如下假设:我们会设置一个从家庭服务器(homeserver)到中继服务器(relayserver)的反向 SSH 隧道,然后我们可以通过中继服务器从客户端计算机(clientcomputer) SSH 登录到家庭服务器。本例中的**中继服务器** 的公网 IP 地址是 1.1.1.1。 -在家庭主机上,按照以下方式打开一个到中继服务器的 SSH 连接。 +在家庭服务器上,按照以下方式打开一个到中继服务器的 SSH 连接。 homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1 @@ -20,11 +20,11 @@ SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧 “-R 10022:localhost:22” 选项定义了一个反向隧道。它转发中继服务器 10022 端口的流量到家庭服务器的 22 号端口。 -用 “-fN” 选项,当你用一个 SSH 服务器成功通过验证时 SSH 会进入后台运行。当你不想在远程 SSH 服务器执行任何命令、就像我们的例子中只想转发端口的时候非常有用。 +用 “-fN” 选项,当你成功通过 SSH 服务器验证时 SSH 会进入后台运行。当你不想在远程 SSH 服务器执行任何命令,就像我们的例子中只想转发端口的时候非常有用。 运行上面的命令之后,你就会回到家庭主机的命令行提示框中。 -登录到中继服务器,确认 127.0.0.1:10022 绑定到了 sshd。如果是的话就表示已经正确设置了反向隧道。 +登录到中继服务器,确认其 127.0.0.1:10022 绑定到了 sshd。如果是的话就表示已经正确设置了反向隧道。 relayserver~$ sudo netstat -nap | grep 10022 @@ -36,13 +36,13 @@ SSH 
端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧 relayserver~$ ssh -p 10022 homeserver_user@localhost -需要注意的一点是你在本地输入的 SSH 登录/密码应该是家庭服务器的,而不是中继服务器的,因为你是通过隧道的本地端点登录到家庭服务器。因此不要输入中继服务器的登录/密码。成功登陆后,你就在家庭服务器上了。 +需要注意的一点是你在上面为localhost输入的 SSH 登录/密码应该是家庭服务器的,而不是中继服务器的,因为你是通过隧道的本地端点登录到家庭服务器,因此不要错误输入中继服务器的登录/密码。成功登录后,你就在家庭服务器上了。 ### 通过反向 SSH 隧道直接连接到网络地址变换后的服务器 ### 上面的方法允许你访问 NAT 后面的 **家庭服务器**,但你需要登录两次:首先登录到 **中继服务器**,然后再登录到**家庭服务器**。这是因为中继服务器上 SSH 隧道的端点绑定到了回环地址(127.0.0.1)。 -事实上,有一种方法可以只需要登录到中继服务器就能直接访问网络地址变换之后的家庭服务器。要做到这点,你需要让中继服务器上的 sshd 不仅转发回环地址上的端口,还要转发外部主机的端口。这通过指定中继服务器上运行的 sshd 的 **网关端口** 实现。 +事实上,有一种方法可以只需要登录到中继服务器就能直接访问NAT之后的家庭服务器。要做到这点,你需要让中继服务器上的 sshd 不仅转发回环地址上的端口,还要转发外部主机的端口。这通过指定中继服务器上运行的 sshd 的 **GatewayPorts** 实现。 打开**中继服务器**的 /etc/ssh/sshd_conf 并添加下面的行。 @@ -74,23 +74,23 @@ SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧 tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev -不像之前的情况,现在隧道的端点是 1.1.1.1:10022(中继服务器的公共 IP 地址),而不是 127.0.0.1:10022。这就意味着从外部主机可以访问隧道的端点。 +不像之前的情况,现在隧道的端点是 1.1.1.1:10022(中继服务器的公网 IP 地址),而不是 127.0.0.1:10022。这就意味着从外部主机可以访问隧道的另一端。 现在在任何其它计算机(客户端计算机),输入以下命令访问网络地址变换之后的家庭服务器。 clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1 -在上面的命令中,1.1.1.1 是中继服务器的公共 IP 地址,家庭服务器用户必须是和家庭服务器相关联的用户账户。这是因为你真正登录到的主机是家庭服务器,而不是中继服务器。后者只是中继你的 SSH 流量到家庭服务器。 +在上面的命令中,1.1.1.1 是中继服务器的公共 IP 地址,homeserver_user必须是家庭服务器上的用户账户。这是因为你真正登录到的主机是家庭服务器,而不是中继服务器。后者只是中继你的 SSH 流量到家庭服务器。 ### 在 Linux 上设置一个永久反向 SSH 隧道 ### -现在你已经明白了怎样创建一个反向 SSH 隧道,然后把隧道设置为 “永久”,这样隧道启动后就会一直运行(不管临时的网络拥塞、SSH 超时、中继主机重启,等等)。毕竟,如果隧道不是一直有效,你不可能可靠的登录到你的家庭服务器。 +现在你已经明白了怎样创建一个反向 SSH 隧道,然后把隧道设置为 “永久”,这样隧道启动后就会一直运行(不管临时的网络拥塞、SSH 超时、中继主机重启,等等)。毕竟,如果隧道不是一直有效,你就不能可靠的登录到你的家庭服务器。 -对于永久隧道,我打算使用一个叫 autossh 的工具。正如名字暗示的,这个程序允许你不管任何理由自动重启 SSH 会话。因此对于保存一个反向 SSH 隧道有效非常有用。 +对于永久隧道,我打算使用一个叫 autossh 的工具。正如名字暗示的,这个程序可以让你的 SSH 会话无论因为什么原因中断都会自动重连。因此对于保持一个反向 SSH 隧道非常有用。 第一步,我们要设置从家庭服务器到中继服务器的[无密码 SSH 登录][2]。这样的话,autossh 可以不需要用户干预就能重启一个损坏的反向 SSH 隧道。 -下一步,在初始化隧道的家庭服务器上[安装 autossh][3]。 +下一步,在建立隧道的家庭服务器上[安装 autossh][3]。 在家庭服务器上,用下面的参数运行 autossh 
来创建一个连接到中继服务器的永久 SSH 隧道。 @@ -113,7 +113,7 @@ SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧 ### 总结 ### -在这篇博文中,我介绍了你如何能从外部中通过反向 SSH 隧道访问限制性防火墙或 NAT 网关之后的 Linux 服务器。尽管我介绍了家庭网络中的一个使用事例,在企业网络中使用时你尤其要小心。这样的一个隧道可能被视为违反公司政策,因为它绕过了企业的防火墙并把企业网络暴露给外部攻击。这很可能被误用或者滥用。因此在使用之前一定要记住它的作用。 +在这篇博文中,我介绍了你如何能从外部通过反向 SSH 隧道访问限制性防火墙或 NAT 网关之后的 Linux 服务器。这里我介绍了家庭网络中的一个使用事例,但在企业网络中使用时你尤其要小心。这样的一个隧道可能被视为违反公司政策,因为它绕过了企业的防火墙并把企业网络暴露给外部攻击。这很可能被误用或者滥用。因此在使用之前一定要记住它的作用。 -------------------------------------------------------------------------------- @@ -121,11 +121,11 @@ via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html 作者:[Dan Nanni][a] 译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://xmodulo.com/author/nanni [1]:http://xmodulo.com/go/digitalocean -[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html -[3]:http://ask.xmodulo.com/install-autossh-linux.html +[2]:https://linux.cn/article-5444-1.html +[3]:https://linux.cn/article-5459-1.html From 569768fdea3706313f31ce6787da102f597d9e44 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Sat, 8 Aug 2015 03:26:54 +0800 Subject: [PATCH 099/697] =?UTF-8?q?=E6=9B=B4=E6=96=B0=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...o Perform File and Directory Management.md | 194 +++++++++--------- 1 file changed, 98 insertions(+), 96 deletions(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md b/sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md index abf9910994..f46fd93321 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory 
Management.md @@ -1,59 +1,61 @@ -[translating by xiqingongzi] -RHCSA Series: How to Perform File and Directory Management – Part 2 +RHCSA 系列: 如何执行文件并进行文件管理 – Part 2 ================================================================================ -In this article, RHCSA Part 2: File and directory management, we will review some essential skills that are required in the day-to-day tasks of a system administrator. + +在本篇(RHCSA 第二篇:文件和目录管理)中,我们江回顾一些系统管理员日常任务需要的技能 ![RHCSA: Perform File and Directory Management – Part 2](http://www.tecmint.com/wp-content/uploads/2015/03/RHCSA-Part2.png) -RHCSA: Perform File and Directory Management – Part 2 -### Create, Delete, Copy, and Move Files and Directories ### +RHCSA : 运行文件以及进行文件夹管理 - 第二章 +### 创建,删除,复制和移动文件及目录 ### -File and directory management is a critical competence that every system administrator should possess. This includes the ability to create / delete text files from scratch (the core of each program’s configuration) and directories (where you will organize files and other directories), and to find out the type of existing files. +文件和目录管理是每一个系统管理员都应该掌握的必要的技能.它包括了从头开始的创建、删除文本文件(每个程序的核心配置)以及目录(你用来组织文件和其他目录),以及识别存在的文件的类型 -The [touch command][1] can be used not only to create empty files, but also to update the access and modification times of existing files. + [touch 命令][1] 不仅仅能用来创建空文件,还能用来更新已存在的文件的权限和时间表 ![touch command example](http://www.tecmint.com/wp-content/uploads/2015/03/touch-command-example.png) -touch command example +touch 命令示例 -You can use `file [filename]` to determine a file’s type (this will come in handy before launching your preferred text editor to edit it). +你可以使用 `file [filename]`来判断一个文件的类型 (在你用文本编辑器编辑之前,判断类型将会更方便编辑). ![file command example](http://www.tecmint.com/wp-content/uploads/2015/03/file-command-example.png) -file command example +file 命令示例 -and `rm [filename]` to delete it. 
+使用`rm [filename]` 可以删除文件 ![Linux rm command examples](http://www.tecmint.com/wp-content/uploads/2015/03/rm-command-examples.png) -rm command example +rm 命令示例 + +对于目录,你可以使用`mkdir [directory]`在已经存在的路径中创建目录,或者使用 `mkdir -p [/full/path/to/directory].`带全路径创建文件夹 -As for directories, you can create directories inside existing paths with `mkdir [directory]` or create a full path with `mkdir -p [/full/path/to/directory].` ![mkdir command example](http://www.tecmint.com/wp-content/uploads/2015/03/mkdir-command-example.png) -mkdir command example +mkdir 命令示例 -When it comes to removing directories, you need to make sure that they’re empty before issuing the `rmdir [directory]` command, or use the more powerful (handle with care!) `rm -rf [directory]`. This last option will force remove recursively the `[directory]` and all its contents – so use it at your own risk. +当你想要去删除目录时,在你使用`rmdir [directory]` 前,你需要先确保目录是空的,或者使用更加强力的命令(小心使用它)`rm -rf [directory]`.后者会强制删除`[directory]`以及他的内容.所以使用这个命令存在一定的风险 -### Input and Output Redirection and Pipelining ### +### 输入输出重定向以及管道 ### -The command line environment provides two very useful features that allows to redirect the input and output of commands from and to files, and to send the output of a command to another, called redirection and pipelining, respectively. +命令行环境提供了两个非常有用的功能:允许命令重定向的输入和输出到文件和发送到另一个文件,分别称为重定向和管道 To understand those two important concepts, we must first understand the three most important types of I/O (Input and Output) streams (or sequences) of characters, which are in fact special files, in the *nix sense of the word. +为了理解这两个重要概念,我们首先需要理解通常情况下三个重要的输入输出流的形式 -- Standard input (aka stdin) is by default attached to the keyboard. In other words, the keyboard is the standard input device to enter commands to the command line. -- Standard output (aka stdout) is by default attached to the screen, the device that “receives” the output of commands and display them on the screen. 
-- Standard error (aka stderr), is where the status messages of a command is sent to by default, which is also the screen. +- 标准输入 (aka stdin) 是指默认使用键盘链接. 换句话说,键盘是输入命令到命令行的标准输入设备。 +- 标准输出 (aka stdout) 是指默认展示再屏幕上, 显示器接受输出命令,并且展示在屏幕上。 +- 标准错误 (aka stderr), 是指命令的状态默认输出, 同时也会展示在屏幕上 In the following example, the output of `ls /var` is sent to stdout (the screen), as well as the result of ls /tecmint. But in the latter case, it is stderr that is shown. +在下面的例子中,`ls /var`的结果被发送到stdout(屏幕展示),就像ls /tecmint 的结果。但在后一种情况下,它是标准错误输出。 ![Linux input output redirect](http://www.tecmint.com/wp-content/uploads/2015/03/Linux-input-output-redirect.png) +输入和输出命令实例 -Input and Output Example - -To more easily identify these special files, they are each assigned a file descriptor, an abstract representation that is used to access them. The essential thing to understand is that these files, just like others, can be redirected. What this means is that you can capture the output from a file or script and send it as input to another file, command, or script. This will allow you to store on disk, for example, the output of commands for later processing or analysis. +为了更容易识别这些特殊文件,每个文件都被分配有一个文件描述符(用于控制他们的抽象标识)。主要要理解的是,这些文件就像其他人一样,可以被重定向。这就意味着你可以从一个文件或脚本中捕获输出,并将它传送到另一个文件、命令或脚本中。你就可以在在磁盘上存储命令的输出结果,用于稍后的分析 To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operators are available. @@ -63,102 +65,102 @@ To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operato -Redirection Operator -Effect +转向操作 +效果 > -Redirects standard output to a file containing standard output. If the destination file exists, it will be overwritten. +标准输出到一个文件。如果目标文件存在,内容就会被重写 >> -Appends standard output to a file. +添加标准输出到文件尾部 2> -Redirects standard error to a file containing standard output. If the destination file exists, it will be overwritten. +标准错误输出到一个文件。如果目标文件存在,内容就会被重写 2>> -Appends standard error to the existing file. +添加标准错误输出到文件尾部. 
&> -Redirects both standard output and standard error to a file; if the specified file exists, it will be overwritten. +标准错误和标准输出都到一个文件。如果目标文件存在,内容就会被重写 < -Uses the specified file as standard input. +使用特定的文件做标准输出 <> -The specified file is used for both standard input and standard output. +使用特定的文件做标准输出和标准错误 -As opposed to redirection, pipelining is performed by adding a vertical bar `(|)` after a command and before another one. -Remember: +相比与重定向,管道是通过在命令后添加一个竖杠`(|)`再添加另一个命令 . -- Redirection is used to send the output of a command to a file, or to send a file as input to a command. -- Pipelining is used to send the output of a command to another command as input. +记得: -#### Examples Of Redirection and Pipelining #### +- 重定向是用来定向命令的输出到一个文件,或定向一个文件作为输入到一个命令。 +- 管道是用来将命令的输出转发到另一个命令作为输入。 -**Example 1: Redirecting the output of a command to a file** +#### 重定向和管道的使用实例 #### -There will be times when you will need to iterate over a list of files. To do that, you can first save that list to a file and then read that file line by line. While it is true that you can iterate over the output of ls directly, this example serves to illustrate redirection. +** 例1:将一个命令的输出到文件 ** + +有些时候,你需要遍历一个文件列表。要做到这样,你可以先将该列表保存到文件中,然后再按行读取该文件。虽然你可以遍历直接ls的输出,不过这个例子是用来说明重定向。 # ls -1 /var/mail > mail.txt ![Redirect output of command tot a file](http://www.tecmint.com/wp-content/uploads/2015/03/Redirect-output-to-a-file.png) -Redirect output of command tot a file +将一个命令的输出到文件 -**Example 2: Redirecting both stdout and stderr to /dev/null** +** 例2:重定向stdout和stderr到/dev/null ** -In case we want to prevent both stdout and stderr to be displayed on the screen, we can redirect both file descriptors to `/dev/null`. Note how the output changes when the redirection is implemented for the same command. 
+如果不想让标准输出和标准错误展示在屏幕上,我们可以把文件描述符重定向到 `/dev/null` 请注意在执行这个命令时该如何更改输出 # ls /var /tecmint # ls /var/ /tecmint &> /dev/null ![Redirecting stdout and stderr ouput to /dev/null](http://www.tecmint.com/wp-content/uploads/2015/03/Redirecting-stdout-stderr-ouput.png) -Redirecting stdout and stderr ouput to /dev/null +重定向stdout和stderr到/dev/null -#### Example 3: Using a file as input to a command #### +#### 例3:使用一个文件作为命令的输入 #### -While the classic syntax of the [cat command][2] is as follows. +当官方的[cat 命令][2]的语法如下时 # cat [file(s)] -You can also send a file as input, using the correct redirection operator. +您还可以使用正确的重定向操作符传送一个文件作为输入。 # cat < mail.txt ![Linux cat command examples](http://www.tecmint.com/wp-content/uploads/2015/03/cat-command-examples.png) -cat command example +cat 命令实例 -#### Example 4: Sending the output of a command as input to another #### +#### 例4:发送一个命令的输出作为另一个命令的输入 #### -If you have a large directory or process listing and want to be able to locate a certain file or process at a glance, you will want to pipeline the listing to grep. +如果你有一个较大的目录或进程列表,并且想快速定位,你或许需要将列表通过管道传送给grep -Note that we use to pipelines in the following example. The first one looks for the required keyword, while the second one will eliminate the actual `grep command` from the results. This example lists all the processes associated with the apache user. +接下来我们使用管道在下面的命令中,第一个是查找所需的关键词,第二个是除去产生的 `grep command`.这个例子列举了所有与apache用户有关的进程 # ps -ef | grep apache | grep -v grep ![Send output of command as input to another](http://www.tecmint.com/wp-content/uploads/2015/03/Send-output-of-command-as-input-to-another1.png) -Send output of command as input to another +发送一个命令的输出作为另一个命令的输入 -### Archiving, Compressing, Unpacking, and Uncompressing Files ### +### 归档,压缩,解包,解压文件 ### -If you need to transport, backup, or send via email a group of files, you will use an archiving (or grouping) tool such as [tar][3], typically used with a compression utility like gzip, bzip2, or xz. 
- -Your choice of a compression tool will be likely defined by the compression speed and rate of each one. Of these three compression tools, gzip is the oldest and provides the least compression, bzip2 provides improved compression, and xz is the newest and provides the best compression. Typically, files compressed with these utilities have .gz, .bz2, or .xz extensions, respectively. +如果你需要传输,备份,或者通过邮件发送一组文件,你可以使用一个存档(或文件夹)如 [tar][3]工具,通常使用gzip,bzip2,或XZ压缩工具. +您选择的压缩工具每一个都有自己的定义的压缩速度和速率的。这三种压缩工具,gzip是最古老和提供最小压缩的工具,bzip2提供经过改进的压缩,以及XZ提供最信和最好的压缩。通常情况下,这些文件都是被压缩的如.gz .bz2或.xz 注:表格 @@ -166,44 +168,44 @@ Your choice of a compression tool will be likely defined by the compression spee - - - + + + - + - + - + - + - + - + - +
CommandAbbreviationDescription命令缩写描述
–create cCreates a tar archive创建一个tar归档
–concatenate AAppends tar files to an archive向归档中添加tar文件
–append rAppends non-tar files to an archive向归档中添加非tar文件
–update uAppends files that are newer than those in an archive添加比归档中的文件更新的文件
–diff or –compare dCompares an archive to files on disk将归档与磁盘上的文件进行对比
–list tLists the contents of a tarball列出 tar 压缩包中的内容
–extract or –get xExtracts files from an archive从归档中解压文件
@@ -215,51 +217,51 @@ Your choice of a compression tool will be likely defined by the compression spee -Operation modifier -Abbreviation -Description +操作参数 +缩写 +描述 directory dir C -Changes to directory dir before performing operations +在执行操作前更改目录 same-permissions and same-owner p -Preserves permissions and ownership information, respectively. +分别保留权限和所有者信息 –verbose v -Lists all files as they are read or extracted; if combined with –list, it also displays file sizes, ownership, and timestamps +列举所有文件用于读取或提取,这里包含列表,并显示文件的大小、所有权和时间戳 exclude file -Excludes file from the archive. In this case, file can be an actual file or a pattern. +排除存档文件。在这种情况下,文件可以是一个实际的文件或目录。 gzip or gunzip z -Compresses an archive through gzip +使用gzip压缩文件 –bzip2 j -Compresses an archive through bzip2 +使用bzip2压缩文件 –xz J -Compresses an archive through xz +使用xz压缩文件 -#### Example 5: Creating a tarball and then compressing it using the three compression utilities #### +#### 例5:创建一个文件,然后使用三种压缩工具压缩#### -You may want to compare the effectiveness of each tool before deciding to use one or another. Note that while compressing small files, or a few files, the results may not show much differences, but may give you a glimpse of what they have to offer. 
+在决定使用其中某个工具之前,您可能想比较一下每个工具的压缩效率。请注意,压缩小文件或少量文件时,结果可能不会有太大差异,但足以让你初步了解它们各自的效果 # tar cf ApacheLogs-$(date +%Y%m%d).tar /var/log/httpd/* # Create an ordinary tarball # tar czf ApacheLogs-$(date +%Y%m%d).tar.gz /var/log/httpd/* # Create a tarball and compress with gzip @@ -268,51 +270,51 @@ You may want to compare the effectiveness of each tool before deciding to use on ![Linux tar command examples](http://www.tecmint.com/wp-content/uploads/2015/03/tar-command-examples.png) -tar command examples +tar 命令实例 -#### Example 6: Preserving original permissions and ownership while archiving and when #### +#### 例6:归档时同时保存原始权限和所有权 #### -If you are creating backups from users’ home directories, you will want to store the individual files with the original permissions and ownership instead of changing them to that of the user account or daemon performing the backup. The following example preserves these attributes while taking the backup of the contents in the `/var/log/httpd` directory: +如果你在备份用户的主目录,你会希望按原始的权限和所有权保存其中的文件,而不是把它们改成执行备份的用户账户或守护进程的权限和所有权。下面的命令可以在归档时保留这些文件属性 # tar cJf ApacheLogs-$(date +%Y%m%d).tar.xz /var/log/httpd/* --same-permissions --same-owner -### Create Hard and Soft Links ### +### 创建软链接和硬链接 ### -In Linux, there are two types of links to files: hard links and soft (aka symbolic) links. Since a hard link represents another name for an existing file and is identified by the same inode, it then points to the actual data, as opposed to symbolic links, which point to filenames instead. +在Linux中,有两种类型的文件链接:硬链接和软(也称符号)链接。硬链接是现有文件的另一个名字,使用同一个 inode,因此指向实际的数据;而符号链接指向的是文件名,而不是实际的数据。 -In addition, hard links do not occupy space on disk, while symbolic links do take a small amount of space to store the text of the link itself. The downside of hard links is that they can only be used to reference files within the filesystem where they are located because inodes are unique inside a filesystem. 
Symbolic links save the day, in that they point to another file or directory by name rather than by inode, and therefore can cross filesystem boundaries. +此外,硬链接不占用额外的磁盘空间,而符号链接则需要少量空间来存储链接本身的文本。硬链接的缺点是只能引用其所在文件系统内的文件,因为 inode 只在同一个文件系统内才是唯一的;而符号链接没有这个限制,它通过名字而非 inode 指向目标,所以可以跨越文件系统。 -The basic syntax to create links is similar in both cases: +两种链接的创建语法是相似的: - # ln TARGET LINK_NAME # Hard link named LINK_NAME to file named TARGET - # ln -s TARGET LINK_NAME # Soft link named LINK_NAME to file named TARGET + # ln TARGET LINK_NAME # 创建一个名为 LINK_NAME、指向 TARGET 的硬链接 + # ln -s TARGET LINK_NAME # 创建一个名为 LINK_NAME、指向 TARGET 的软链接 -#### Example 7: Creating hard and soft links #### +#### 例7:创建硬链接和软链接 #### -There is no better way to visualize the relation between a file and a hard or symbolic link that point to it, than to create those links. In the following screenshot you will see that the file and the hard link that points to it share the same inode and both are identified by the same disk usage of 466 bytes. +要形象地说明一个文件和指向它的硬链接或符号链接之间的关系,最好的方式就是实际创建这些链接。在下面的截图中你会看到,文件和指向它的硬链接共享同一个 inode,且两者显示的磁盘占用同样都是 466 字节。 -On the other hand, creating a hard link results in an extra disk usage of 5 bytes. Not that you’re going to run out of storage capacity, but this example is enough to illustrate the difference between a hard link and a soft link. +另一方面,创建一个硬链接只额外占用 5 个字节的磁盘空间。这并不是说你会因此耗尽存储容量,而是说这个例子足以说明硬链接和软链接之间的区别。 ![Difference between a hard link and a soft link](http://www.tecmint.com/wp-content/uploads/2015/03/hard-soft-link.png) -Difference between a hard link and a soft link +软链接和硬链接之间的不同 -A typical usage of symbolic links is to reference a versioned file in a Linux system. Suppose there are several programs that need access to file fooX.Y, which is subject to frequent version updates (think of a library, for example). Instead of updating every single reference to fooX.Y every time there’s a version update, it is wiser, safer, and faster, to have programs look to a symbolic link named just foo, which in turn points to the actual fooX.Y. 
+符号链接的典型用法是引用 Linux 系统中带版本号的文件。假设有几个程序需要访问文件 fooX.Y,而它的版本会经常更新(比如一个库文件)。相比每次版本更新时逐一修改对 fooX.Y 的所有引用,更明智、安全、快速的做法是让程序访问一个名为 foo 的符号链接,再由它指向实际的 fooX.Y。 -Thus, when X and Y change, you only need to edit the symbolic link foo with a new destination name instead of tracking every usage of the destination file and updating it. +这样的话,当 X 和 Y 发生变化后,你只需把符号链接 foo 指向新的目标文件,而不用逐个追踪并更新对目标文件的每处使用。 -### Summary ### +### 总结 ### -In this article we have reviewed some essential file and directory management skills that must be a part of every system administrator’s tool-set. Make sure to review other parts of this series as well in order to integrate these topics with the content covered in this tutorial. +在这篇文章中,我们回顾了一些基本的文件和目录管理技能,它们是每个系统管理员工具集的一部分。请确保也阅读本系列的其他部分,把那些主题与本教程涵盖的内容结合起来。 -Feel free to let us know if you have any questions or comments. We are always more than glad to hear from our readers. +如果你有任何问题或意见,请随时告诉我们。我们总是很乐于听到读者的反馈。 -------------------------------------------------------------------------------- via: http://www.tecmint.com/file-and-directory-management-in-linux/ 作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) +译者:[xiqingongzi](https://github.com/xiqingongzi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 483fe49938411ce8d731a9e6893d56246d1a363a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Sat, 8 Aug 2015 03:27:35 +0800 Subject: [PATCH 100/697] =?UTF-8?q?=E9=A2=86RHCSA=E7=AC=AC=E4=B8=89?= =?UTF-8?q?=E7=AF=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...eries--Part 03--How to Manage Users and Groups in RHEL 7.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md b/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md index be78c87e3a..0b85744c6c 100644 --- 
a/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md @@ -1,3 +1,4 @@ +[translated by xiqingongzi] RHCSA Series: How to Manage Users and Groups in RHEL 7 – Part 3 ================================================================================ Managing a RHEL 7 server, as it is the case with any other Linux server, will require that you know how to add, edit, suspend, or delete user accounts, and grant users the necessary permissions to files, directories, and other system resources to perform their assigned tasks. @@ -245,4 +246,4 @@ via: http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ [2]:http://www.tecmint.com/usermod-command-examples/ [3]:http://www.tecmint.com/ls-interview-questions/ [4]:http://www.tecmint.com/file-and-directory-management-in-linux/ -[5]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ \ No newline at end of file +[5]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ From 6a1641ce6e2328d1ad15908f34c9ecb80e7c19f3 Mon Sep 17 00:00:00 2001 From: xiqingongzi Date: Sat, 8 Aug 2015 03:38:00 +0800 Subject: [PATCH 101/697] Move --- ...t 01--Reviewing Essential Commands and System Documentation.md | 0 ...ries--Part 02--How to Perform File and Directory Management.md | 0 2 files changed, 0 insertions(+), 0 deletions(-) rename {sources/tech/RHCSA Series => translated/tech/RHCSA}/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md (100%) rename {sources/tech/RHCSA Series => translated/tech/RHCSA}/RHCSA Series--Part 02--How to Perform File and Directory Management.md (100%) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md b/translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md similarity index 
100% rename from sources/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md rename to translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md b/translated/tech/RHCSA/RHCSA Series--Part 02--How to Perform File and Directory Management.md similarity index 100% rename from sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md rename to translated/tech/RHCSA/RHCSA Series--Part 02--How to Perform File and Directory Management.md From 339bbf0b5c6ba7649972b300e40535316fabbb64 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 8 Aug 2015 18:06:08 +0800 Subject: [PATCH 102/697] [Translated] tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md --- ...riables on a Linux and Unix-like System.md | 47 +++++++++---------- 1 file changed, 23 insertions(+), 24 deletions(-) rename {sources => translated}/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md (50%) diff --git a/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md b/translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md similarity index 50% rename from sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md rename to translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md index 715ecc2084..202c4e304a 100644 --- a/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md +++ b/translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md @@ -1,50 +1,49 @@ 
-Translating by ictlyh -How To: Temporarily Clear Bash Environment Variables on a Linux and Unix-like System +如何在 Linux 和类 Unix 系统上临时清空 Bash 环境变量 ================================================================================ -I'm a bash shell user. I would like to temporarily clear bash shell environment variables. I do not want to delete or unset an exported environment variable. How do I run a program in a temporary environment in bash or ksh shell? +我是个 bash shell 用户。我想临时清空 bash shell 环境变量。但我不想删除或者 unset 一个 export 环境变量。我怎样才能在 bash 或 ksh shell 的临时环境中运行程序呢? -You can use the env command to set and print environment on a Linux or Unix-like systems. The env command executes utility after modifying the environment as specified on the command line. +你可以在 Linux 或类 Unix 系统中使用 env 命令设置并打印环境。env 命令将环境修改为命令行指定的那样之后再执行程序。 -### How do I display my current environment? ### +### 如何显示当前环境? ### -Open the terminal application and type any one of the following command: +打开终端应用程序并输入下面的其中一个命令: printenv -OR +或 env -Sample outputs: +输出样例: -![Fig.01: Unix/Linux: List All Environment Variables Command](http://s0.cyberciti.org/uploads/faq/2015/08/env-unix-linux-command-output.jpg) -Fig.01: Unix/Linux: List All Environment Variables Command +![Fig.01: Unix/Linux: 列出所有环境变量](http://s0.cyberciti.org/uploads/faq/2015/08/env-unix-linux-command-output.jpg) +Fig.01: Unix/Linux: 列出所有环境变量 -### Counting your environment variables ### +### 统计环境变量数目 ### -Type the following command: +输入下面的命令: env | wc -l printenv | wc -l -Sample outputs: +输出样例: 20 -### Run a program in a clean environment in bash/ksh/zsh ### +### 在 bash/ksh/zsh 干净环境中运行程序 ### -The syntax is as follows: +语法如下所示: env -i your-program-name-here arg1 arg2 ... -For example, run the wget program without using http_proxy and/or all other variables i.e. 
temporarily clear all bash/ksh/zsh environment variables and run the wget program: +例如,不使用 http_proxy 和/或任何其它变量运行 wget 程序。临时清除所有 bash/ksh/zsh 环境变量并运行 wget 程序: env -i /usr/local/bin/wget www.cyberciti.biz env -i wget www.cyberciti.biz -This is very useful when you want to run a command ignoring any environment variables you have set. I use this command many times everyday to ignore the http_proxy and other environment variable I have set. +这当你想忽视任何已经设置的环境变量来运行命令时非常有用。我每天都会多次使用这个命令,以便忽视 http_proxy 和其它我设置的环境变量。 -#### Example: With the http_proxy #### +#### 例子:使用 http_proxy #### $ wget www.cyberciti.biz --2015-08-03 23:20:23-- http://www.cyberciti.biz/ @@ -55,7 +54,7 @@ This is very useful when you want to run a command ignoring any environment vari index.html [ <=> ] 36.17K 87.0KB/s in 0.4s 2015-08-03 23:20:24 (87.0 KB/s) - 'index.html' saved [37041] -#### Example: Ignore the http_proxy #### +#### 例子:忽视 http_proxy #### $ env -i /usr/local/bin/wget www.cyberciti.biz --2015-08-03 23:25:17-- http://www.cyberciti.biz/ @@ -67,7 +66,7 @@ This is very useful when you want to run a command ignoring any environment vari index.html.1 [ <=> ] 36.17K 115KB/s in 0.3s 2015-08-03 23:25:18 (115 KB/s) - 'index.html.1' saved [37041] -The option -i causes env command to completely ignore the environment it inherits. However, it does not prevent your command (such as wget or curl) setting new variables. Also, note down the side effect of running bash/ksh shell: +-i 选项使 env 命令完全忽视它继承的环境。但是,它并不阻止你的命令(例如 wget 或 curl)设置新的变量。同时,也要注意运行 bash/ksh shell 的副作用: env -i env | wc -l ## empty ## # Now run bash ## @@ -75,15 +74,15 @@ The option -i causes env command to completely ignore the environment it inherit ## New enviroment set by bash program ## env | wc -l -#### Example: Set an environmental variable #### +#### 例子:设置一个环境变量 #### -The syntax is: +语法如下: env var=value /path/to/command arg1 arg2 ... ## OR ## var=value /path/to/command arg1 arg2 ... 
-For example set http_proxy: +例如设置 http_proxy: env http_proxy="http://USER:PASSWORD@server1.cyberciti.biz:3128/" \ /usr/local/bin/wget www.cyberciti.biz @@ -93,7 +92,7 @@ For example set http_proxy: via: http://www.cyberciti.biz/faq/linux-unix-temporarily-clearing-environment-variables-command/ 作者:Vivek Gite -译者:[译者ID](https://github.com/译者ID) +译者:[ictlyh](https://github.com/ictlyh) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file From a2890da52e074dbd6b53ca274db3ce87202569fa Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 8 Aug 2015 22:53:24 +0800 Subject: [PATCH 103/697] PUB:20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux @dingdongnigetou --- ...y Using 'Explain Shell' Script in Linux.md | 36 +++++++++---------- 1 file changed, 18 insertions(+), 18 deletions(-) rename {translated/tech => published}/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md (68%) diff --git a/translated/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/published/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md similarity index 68% rename from translated/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md rename to published/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md index b8f993676c..d31df55711 100644 --- a/translated/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md +++ b/published/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md @@ -1,16 +1,16 @@ -在Linux中利用"Explain Shell"脚本更容易地理解Shell命令 +轻松使用“Explain Shell”脚本来理解 Shell 命令 ================================================================================ -在某些时刻, 当我们在Linux平台上工作时我们所有人都需要shell命令的帮助信息。 尽管内置的帮助像man pages、whatis命令是有帮助的, 
但man pages的输出非常冗长, 除非是个有linux经验的人,不然从大量的man pages中获取帮助信息是非常困难的,而whatis命令的输出很少超过一行, 这对初学者来说是不够的。 +我们在Linux上工作时,每个人都会遇到需要查找shell命令的帮助信息的时候。 尽管内置的帮助像man pages、whatis命令有所助益, 但man pages的输出非常冗长, 除非是个有linux经验的人,不然从大量的man pages中获取帮助信息是非常困难的,而whatis命令的输出很少超过一行, 这对初学者来说是不够的。 ![Explain Shell Commands in Linux Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Explain-Shell-Commands-in-Linux-Shell.jpeg) -在Linux Shell中解释Shell命令 +*在Linux Shell中解释Shell命令* -有一些第三方应用程序, 像我们在[Commandline Cheat Sheet for Linux Users][1]提及过的'cheat'命令。Cheat是个杰出的应用程序,即使计算机没有联网也能提供shell命令的帮助, 但是它仅限于预先定义好的命令。 +有一些第三方应用程序, 像我们在[Linux 用户的命令行速查表][1]提及过的'cheat'命令。cheat是个优秀的应用程序,即使计算机没有联网也能提供shell命令的帮助, 但是它仅限于预先定义好的命令。 -Jackson写了一小段代码,它能非常有效地在bash shell里面解释shell命令,可能最美之处就是你不需要安装第三方包了。他把包含这段代码的的文件命名为”explain.sh“。 +Jackson写了一小段代码,它能非常有效地在bash shell里面解释shell命令,可能最美之处就是你不需要安装第三方包了。他把包含这段代码的的文件命名为“explain.sh”。 -#### Explain工具的特性 #### +#### explain.sh工具的特性 #### - 易嵌入代码。 - 不需要安装第三方工具。 @@ -18,22 +18,22 @@ Jackson写了一小段代码,它能非常有效地在bash shell里面解释she - 需要网络连接才能工作。 - 纯命令行工具。 - 可以解释bash shell里面的大部分shell命令。 -- 无需root账户参与。 +- 无需使用root账户。 **先决条件** -唯一的条件就是'curl'包了。 在如今大多数Linux发行版里面已经预安装了culr包, 如果没有你可以按照下面的命令来安装。 +唯一的条件就是'curl'包了。 在如今大多数Linux发行版里面已经预安装了curl包, 如果没有你可以按照下面的命令来安装。 # apt-get install curl [On Debian systems] # yum install curl [On CentOS systems] ### 在Linux上安装explain.sh工具 ### -我们要将下面这段代码插入'~/.bashrc'文件(LCTT注: 若没有该文件可以自己新建一个)中。我们必须为每个用户以及对应的'.bashrc'文件插入这段代码,笔者建议你不要加在root用户下。 +我们要将下面这段代码插入'~/.bashrc'文件(LCTT译注: 若没有该文件可以自己新建一个)中。我们要为每个用户以及对应的'.bashrc'文件插入这段代码,但是建议你不要加在root用户下。 我们注意到.bashrc文件的第一行代码以(#)开始, 这个是可选的并且只是为了区分余下的代码。 -# explain.sh 标记代码的开始, 我们将代码插入.bashrc文件的底部。 +\# explain.sh 标记代码的开始, 我们将代码插入.bashrc文件的底部。 # explain.sh begins explain () { @@ -53,7 +53,7 @@ Jackson写了一小段代码,它能非常有效地在bash shell里面解释she ### explain.sh工具的使用 ### -在插入代码并保存之后,你必须退出当前的会话然后重新登录来使改变生效(LCTT注:你也可以直接使用命令“source~/.bashrc”来让改变生效)。每件事情都是交由‘curl’命令处理, 它负责将需要解释的命令以及命令选项传送给mankier服务,然后将必要的信息打印到Linux命令行。不必说的就是使用这个工具你总是需要连接网络。 
+在插入代码并保存之后,你必须退出当前的会话然后重新登录来使改变生效(LCTT译注:你也可以直接使用命令`source ~/.bashrc` 来让改变生效)。每件事情都是交由‘curl’命令处理, 它负责将需要解释的命令以及命令选项传送给mankier服务,然后将必要的信息打印到Linux命令行。不必说的就是使用这个工具你总是需要连接网络。 让我们用explain.sh脚本测试几个笔者不懂的命令例子。 @@ -63,7 +63,7 @@ Jackson写了一小段代码,它能非常有效地在bash shell里面解释she ![Get Help on du Command](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-du-Command.png) -获得du命令的帮助 +*获得du命令的帮助* **2.如果你忘了'tar -zxvf'的作用,你可以简单地如此做:** @@ -71,7 +71,7 @@ ![Tar Command Help](http://www.tecmint.com/wp-content/uploads/2015/07/Tar-Command-Help.png) -Tar命令帮助 +*Tar命令帮助* **3.我的一个朋友经常对'whatis'以及'whereis'命令的使用感到困惑,所以我建议他:** @@ -86,7 +86,7 @@ ![Whatis Whereis Commands Help](http://www.tecmint.com/wp-content/uploads/2015/07/Whatis-Whereis-Commands-Help.png) -Whatis/Whereis命令的帮助 +*Whatis/Whereis命令的帮助* 你只需要使用“Ctrl+c”就能退出交互模式。 @@ -96,11 +96,11 @@ ![Get Help on Multiple Commands](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-Multiple-Commands.png) -获取多条命令的帮助 +*获取多条命令的帮助* -同样地,你可以请求你的shell来解释任何shell命令。 前提是你需要一个可用的网络。输出的信息是基于解释的需要从服务器中生成的,因此输出的结果是不可定制的。 +同样地,你可以请求你的shell来解释任何shell命令。 前提是你需要一个可用的网络。输出的信息是基于需要解释的命令,从服务器中生成的,因此输出的结果是不可定制的。 -对于我来说这个工具真的很有用并且它已经荣幸地添加在我的.bashrc文件中。你对这个项目有什么想法?它对你有用么?它的解释令你满意吗?请让我知道吧!
请在下面评论为我们提供宝贵意见,喜欢并分享我们以及帮助我们得到传播。 @@ -110,7 +110,7 @@ via: http://www.tecmint.com/explain-shell-commands-in-the-linux-shell/ 作者:[Avishek Kumar][a] 译者:[dingdongnigetou](https://github.com/dingdongnigetou) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 8a5094fa7787dc36c9761d7128a0bd15b1cb8a64 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 8 Aug 2015 22:54:02 +0800 Subject: [PATCH 104/697] Update 20150728 Process of the Linux kernel building.md --- sources/tech/20150728 Process of the Linux kernel building.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index cb7ec19b45..1c03ebbe72 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -1,3 +1,5 @@ +Translating by Ezio + Process of the Linux kernel building ================================================================================ Introduction @@ -671,4 +673,4 @@ via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled. 
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From be302cdf123a3ab2492aee561432876902949655 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 8 Aug 2015 23:50:31 +0800 Subject: [PATCH 105/697] PUB:20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System @ictlyh --- ...riables on a Linux and Unix-like System.md | 32 +++++++++---------- 1 file changed, 16 insertions(+), 16 deletions(-) rename {translated/tech => published}/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md (71%) diff --git a/translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md b/published/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md similarity index 71% rename from translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md rename to published/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md index 202c4e304a..2157cdc4e6 100644 --- a/translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md +++ b/published/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md @@ -1,8 +1,8 @@ -如何在 Linux 和类 Unix 系统上临时清空 Bash 环境变量 +如何在 Linux 上运行命令前临时清空 Bash 环境变量 ================================================================================ -我是个 bash shell 用户。我想临时清空 bash shell 环境变量。但我不想删除或者 unset 一个 export 环境变量。我怎样才能在 bash 或 ksh shell 的临时环境中运行程序呢? +我是个 bash shell 用户。我想临时清空 bash shell 环境变量。但我不想删除或者 unset 一个输出的环境变量。我怎样才能在 bash 或 ksh shell 的临时环境中运行程序呢? 
-你可以在 Linux 或类 Unix 系统中使用 env 命令设置并打印环境。env 命令将环境修改为命令行指定的那样之后再执行程序。 +你可以在 Linux 或类 Unix 系统中使用 env 命令设置并打印环境。env 命令可以按命令行指定的变量来修改环境,之后再执行程序。 ### 如何显示当前环境? ### @@ -17,29 +17,30 @@ 输出样例: ![Fig.01: Unix/Linux: 列出所有环境变量](http://s0.cyberciti.org/uploads/faq/2015/08/env-unix-linux-command-output.jpg) -Fig.01: Unix/Linux: 列出所有环境变量 + +*Fig.01: Unix/Linux: 列出所有环境变量* ### 统计环境变量数目 ### 输入下面的命令: env | wc -l - printenv | wc -l + printenv | wc -l # 或者 输出样例: 20 -### 在 bash/ksh/zsh 干净环境中运行程序 ### +### 在干净的 bash/ksh/zsh 环境中运行程序 ### 语法如下所示: env -i your-program-name-here arg1 arg2 ... -例如,不使用 http_proxy 和/或任何其它变量运行 wget 程序。临时清除所有 bash/ksh/zsh 环境变量并运行 wget 程序: +例如,要在不使用 http_proxy 和/或任何其它环境变量的情况下运行 wget 程序。临时清除所有 bash/ksh/zsh 环境变量并运行 wget 程序: env -i /usr/local/bin/wget www.cyberciti.biz - env -i wget www.cyberciti.biz + env -i wget www.cyberciti.biz # 或者 这当你想忽视任何已经设置的环境变量来运行命令时非常有用。我每天都会多次使用这个命令,以便忽视 http_proxy 和其它我设置的环境变量。 @@ -66,12 +67,12 @@ Fig.01: Unix/Linux: 列出所有环境变量 index.html.1 [ <=> ] 36.17K 115KB/s in 0.3s 2015-08-03 23:25:18 (115 KB/s) - 'index.html.1' saved [37041] --i 选项使 env 命令完全忽视它继承的环境。但是,它并不阻止你的命令(例如 wget 或 curl)设置新的变量。同时,也要注意运行 bash/ksh shell 的副作用: +-i 选项使 env 命令完全忽视它继承的环境。但是,它并不会阻止你的命令(例如 wget 或 curl)设置新的变量。同时,也要注意运行 bash/ksh shell 的副作用: - env -i env | wc -l ## empty ## - # Now run bash ## + env -i env | wc -l ## 空的 ## + # 现在运行 bash ## env -i bash - ## New enviroment set by bash program ## + ## bash 设置了新的环境变量 ## env | wc -l #### 例子:设置一个环境变量 #### @@ -79,13 +80,12 @@ Fig.01: Unix/Linux: 列出所有环境变量 语法如下: env var=value /path/to/command arg1 arg2 ... - ## OR ## + ## 或 ## var=value /path/to/command arg1 arg2 ... 
例如设置 http_proxy: - env http_proxy="http://USER:PASSWORD@server1.cyberciti.biz:3128/" \ - /usr/local/bin/wget www.cyberciti.biz + env http_proxy="http://USER:PASSWORD@server1.cyberciti.biz:3128/" /usr/local/bin/wget www.cyberciti.biz -------------------------------------------------------------------------------- @@ -93,6 +93,6 @@ via: http://www.cyberciti.biz/faq/linux-unix-temporarily-clearing-environment-va 作者:Vivek Gite 译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file From 1f7c172f463723aa85ea3605c88de9e89835f78b Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 9 Aug 2015 10:51:47 +0800 Subject: [PATCH 106/697] translating --- ...0 How to Setup iTOP (IT Operational Portal) on CentOS 7.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md b/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md index 38477bb662..8b598999e1 100644 --- a/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md +++ b/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md @@ -1,3 +1,5 @@ +tranlsating---geekpi + How to Setup iTOP (IT Operational Portal) on CentOS 7 ================================================================================ iTOP is a simple, Open source web based IT Service Management tool. It has all of ITIL functionality that includes with Service desk, Configuration Management, Incident Management, Problem Management, Change Management and Service Management. iTop relays on Apache/IIS, MySQL and PHP, so it can run on any operating system supporting these applications. Since iTop is a web based application you don’t need to deploy any client software on each user’s PC. 
A simple web browser is enough to perform day to day operations of an IT environment with iTOP. @@ -171,4 +173,4 @@ via: http://linoxide.com/tools/setup-itop-centos-7/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/kashifs/ -[1]:http://www.combodo.com/spip.php?page=rubrique&id_rubrique=8 \ No newline at end of file +[1]:http://www.combodo.com/spip.php?page=rubrique&id_rubrique=8 From 3e3f2fab11865dd4cdbb700a62507cd8d14d2f4d Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 9 Aug 2015 16:34:00 +0800 Subject: [PATCH 107/697] translated --- ...TOP (IT Operational Portal) on CentOS 7.md | 104 +++++++++--------- 1 file changed, 51 insertions(+), 53 deletions(-) diff --git a/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md b/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md index 8b598999e1..dd20493d77 100644 --- a/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md +++ b/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md @@ -1,30 +1,28 @@ -tranlsating---geekpi - -How to Setup iTOP (IT Operational Portal) on CentOS 7 +如何在CentOS上安装iTOP(IT操作门户) ================================================================================ -iTOP is a simple, Open source web based IT Service Management tool. It has all of ITIL functionality that includes with Service desk, Configuration Management, Incident Management, Problem Management, Change Management and Service Management. iTop relays on Apache/IIS, MySQL and PHP, so it can run on any operating system supporting these applications. Since iTop is a web based application you don’t need to deploy any client software on each user’s PC. A simple web browser is enough to perform day to day operations of an IT environment with iTOP. 
+iTOP简单来说是一个基于网络的开源IT服务管理工具。它拥有全部的ITIL功能,包括服务台、配置管理、事件管理、问题管理、变更管理和服务管理。iTOP依赖于Apache/IIS、MySQL和PHP,因此它可以运行在任何支持这些软件的操作系统中。因为iTOP是一个网络程序,你不必在用户的PC端部署任何客户端程序,一个简单的浏览器就足以完成日常的IT环境操作。 -To install and configure iTOP we will be using CentOS 7 as base operating with basic LAMP Stack environment installed on it that will cover its almost all prerequisites. +我们要在一台装有满足基本需求的LAMP环境的CentOS 7上安装和配置iTOP。 -### Downloading iTOP ### +### 下载 iTOP ### -iTop download package is present on SourceForge, we can get its link from their official website [link][1]. +iTOP的下载包现在在SourceForge上,我们可以从这里获取它的官方[链接][1]。 ![itop download](http://blog.linoxide.com/wp-content/uploads/2015/07/1-itop-download.png) -We will the download link from here and get this zipped file on server with wget command as below. +我们从这个链接用wget命令获取压缩文件。 [root@centos-007 ~]# wget http://downloads.sourceforge.net/project/itop/itop/2.1.0/iTop-2.1.0-2127.zip -### iTop Extensions and Web Setup ### +### iTop扩展和Web安装 ### -By using unzip command we will extract the downloaded packages in the document root directory of our apache web server in a new directory with name itop. +使用unzip命令将下载的安装包解压到apache根目录下新建的itop目录中。 [root@centos-7 ~]# ls iTop-2.1.0-2127.zip [root@centos-7 ~]# unzip iTop-2.1.0-2127.zip -d /var/www/html/itop/ -List the folder to view installation packages in it. +列出安装包中的内容。 [root@centos-7 ~]# ls -lh /var/www/html/itop/ total 68K -rw-r--r--. 1 root root 23K Dec 17 2014 README drwxr-xr-x. 19 root root 4.0K Jul 14 13:10 web -Here is all the extensions that we can install. +这些是我们可以安装的扩展。 [root@centos-7 2.x]# ls authent-external itop-backup itop-config-mgmt itop-problem-mgmt itop-service-mgmt-provider itop-welcome-itil
installation.xml itop-change-mgmt-itil itop-incident-mgmt-itil itop-request-mgmt-itil itop-tickets itop-attachments itop-config itop-knownerror-mgmt itop-service-mgmt itop-virtualization-mgmt -Now from the extracted web directory, moving through different data models we will migrate the required extensions from the datamodels into the web extensions directory of web document root directory with copy command. +在解压的目录下,我们用复制命令将需要的扩展从datamodels目录迁移到web文档根目录下的扩展目录中。 [root@centos-7 2.x]# pwd /var/www/html/itop/web/datamodels/2.x [root@centos-7 2.x]# cp -r itop-request-mgmt itop-service-mgmt itop-service-mgmt itop-config itop-change-mgmt /var/www/html/itop/web/extensions/ -### Installing iTop Web Interface ### +### 安装 iTop web界面 ### -Most of our server side settings and configurations are done.Finally we need to complete its web interface installation process to finalize the setup. +大多数服务端设置和配置已经完成了。最后我们通过web界面的安装过程来完成整个安装。 -Open your favorite web browser and access the WordPress web directory in your web browser using your server IP or FQDN like. +打开浏览器,使用服务器的IP地址或者FQDN来访问iTop的web目录。 http://servers_ip_address/itop/web/ -You will be redirected towards the web installation process for iTop. Let’s configure it as per your requirements like we did here in this tutorial. +你会被重定向到iTOP的web安装页面。让我们按照要求配置,就像在这篇教程中做的那样。 -#### Prerequisites Validation #### +#### 先决要求验证 #### -At the stage you will be prompted for welcome screen with prerequisites validation ok. If you get some warning then you have to make resolve it by installing its prerequisites. +这一步你就会看到验证完成的欢迎界面。如果你看到了一些警告信息,你需要先安装这些先决软件来解决这些问题。 ![mcrypt missing](http://blog.linoxide.com/wp-content/uploads/2015/07/2-itop-web-install.png) -At this stage one optional package named php mcrypt will be missing. Download the following rpm package then try to install php mcrypt package. +这一步会提示一个叫php-mcrypt的可选包缺失了。下载下面的rpm包,接着尝试安装php-mcrypt包。 [root@centos-7 ~]#yum localinstall php-mcrypt-5.3.3-1.el6.x86_64.rpm libmcrypt-2.5.8-9.el6.x86_64.rpm.
-After successful installation of php-mcrypt library we need to restart apache web service, then reload the web page and this time its prerequisites validation should be OK. +成功安装完php-mcrypt后,我们需要重启apache服务,接着刷新页面,这时先决要求的验证应该就通过了。 -#### Install or Upgrade iTop #### +#### 安装或者升级 iTop #### -Here we will choose the fresh installation as we have not installed iTop previously on our server. +因为我们的服务器上没有安装过iTOP,所以这里选择全新安装。 ![Install New iTop](http://blog.linoxide.com/wp-content/uploads/2015/07/3.png) -#### iTop License Agreement #### +#### iTop 许可协议 #### -Chose the option to accept the terms of the licenses of all the components of iTop and click "NEXT". +勾选同意iTOP所有组件的许可协议并点击“NEXT”。 ![License Agreement](http://blog.linoxide.com/wp-content/uploads/2015/07/4.png) -#### Database Configuration #### +#### 数据库配置 #### -Here we the do Configuration of the database connection by giving our database servers credentials and then choose from the option to create new database as shown. +现在我们输入数据库凭据来配置数据库连接,接着如下图所示选择创建新数据库。 ![DB Connection](http://blog.linoxide.com/wp-content/uploads/2015/07/5.png) -#### Administrator Account #### +#### 管理员账户 #### -In this step we will configure an Admin account by filling out its login details as. +这一步中我们通过填写登录信息来配置管理员账户。 ![Admin Account](http://blog.linoxide.com/wp-content/uploads/2015/07/6.png) -#### Miscellaneous Parameters #### +#### 杂项参数 #### -Let's choose the additional parameters whether you want to install with demo contents or with fresh database and proceed forward. +让我们在额外参数中选择是要安装演示内容还是使用全新的数据库,然后继续下一步。 ![Misc Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/7.png) -### iTop Configurations Management ### +### iTop 配置管理 ### -The options below allow you to configure the type of elements that are to be managed inside iTop like all the base objects that are mandatory in the iTop CMDB, Manage Data Center devices, storage device and virtualization.
+下面的选项允许你配置在iTOP中要管理的元素类型,像iTOP CMDB中必备的基础对象、数据中心设备、存储设备和虚拟化等。 ![Conf Management](http://blog.linoxide.com/wp-content/uploads/2015/07/8.png) -#### Service Management #### +#### 服务管理 #### -Select from the choices that best describes the relationships between the services and the IT infrastructure in your IT environment. So we are choosing Service Management for Service Providers here. +选择一个最能描述你的IT环境中服务与IT基础设施之间关系的选项。因此我们这里选择面向服务提供商的服务管理。 ![Service Management](http://blog.linoxide.com/wp-content/uploads/2015/07/9.png) -#### iTop Tickets Management #### +#### iTop Tickets 管理 #### -From the different available options we will Select the ITIL Compliant Tickets Management option to have different types of ticket for managing user requests and incidents. +我们从不同的可用选项中选择符合ITIL的Tickets管理选项,用不同类型的ticket来管理用户请求和事件。 ![Ticket Management](http://blog.linoxide.com/wp-content/uploads/2015/07/10.png) -#### Change Management Options #### +#### 变更管理选项 #### -Select the type of tickets you want to use in order to manage changes to the IT infrastructure from the available options. We are going to choose ITIL change management option here. +从可用选项中选择用来管理IT基础设施变更的ticket类型。我们这里选择ITIL变更管理选项。 ![ITIL Change](http://blog.linoxide.com/wp-content/uploads/2015/07/11.png) -#### iTop Extensions #### +#### iTop 扩展 #### -In this section we can select the additional extensions to install or we can unchecked the ones that you want to skip. +这一节我们可以选择要安装的额外扩展,或者取消勾选想要跳过的扩展。 ![iTop Extensions](http://blog.linoxide.com/wp-content/uploads/2015/07/13.png) -### Ready to Start Web Installation ### +### 准备开始web安装 ### -Now we are ready to start installing the components that we choose in previous steps. We can also drop down these installation parameters to view our configuration from the drop down. +现在我们准备开始安装先前选择的组件。我们也可以下拉这些安装参数来浏览我们的配置。 -Once you are confirmed with the installation parameters click on the install button.
+确认安装参数后点击安装按钮。 ![Installation Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/16.png) -Let's wait for the progress bar to complete the installation process. It might takes few minutes to complete its installation process. +让我们等待进度条完成安装过程。这也许会花费几分钟时间。 ![iTop Installation Process](http://blog.linoxide.com/wp-content/uploads/2015/07/17.png) -### iTop Installation Done ### +### iTop安装完成 ### -Our iTop installation setup is complete, just need to do a simple manual operation as shown and then click to enter iTop. +我们的iTOP安装已经完成了,只要如图所示做一个简单的手动操作就可以进入到iTOP。 ![iTop Done](http://blog.linoxide.com/wp-content/uploads/2015/07/18.png) -### Welcome to iTop (IT Operational Portal) ### +### 欢迎来到iTop (IT操作门户) ### ![itop welcome note](http://blog.linoxide.com/wp-content/uploads/2015/07/20.png) -### iTop Dashboard ### +### iTop 面板 ### -You can manage configuration of everything from here Servers, computers, Contacts, Locations, Contracts, Network devices…. You can create your own. Just the fact, that the installed CMDB module is great which is an essential part of every bigger IT. +你可以在这里配置任何东西:服务器、计算机、通讯录、位置、合同、网络设备等等,也可以创建你自己的配置项。事实上,刚安装好的CMDB模块非常出色,它是每个较大的IT环境必不可少的一部分。 ![iTop Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/07/19.png) -### Conclusion ### +### 总结 ### -ITOP is one of the best Open Source Service Desk solutions. We have successfully installed and configured it on our CentOS 7 cloud host. So, the most powerful aspect of iTop is the ease with which it can be customized via its “extensions”. Feel free to comment if you face any trouble during its setup.
+iTOP是最棒的开源服务台(Service Desk)解决方案之一。我们已经在CentOS 7云主机上成功地安装和配置了它。iTOP最强大的地方是它可以很容易地通过“扩展”来自定义。如果你在安装中遇到任何问题,欢迎评论。 -------------------------------------------------------------------------------- via: http://linoxide.com/tools/setup-itop-centos-7/ 作者:[Kashif Siddique][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From b88de33b50ce231372d1ce08194d09a7d28989d7 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sun, 9 Aug 2015 19:30:54 +0800 Subject: [PATCH 108/697] [Translated]20150803 Linux Logging Basics.md --- sources/tech/20150803 Linux Logging Basics.md | 92 ------------------- .../tech/20150803 Linux Logging Basics.md | 90 ++++++++++++++++++ 2 files changed, 90 insertions(+), 92 deletions(-) delete mode 100644 sources/tech/20150803 Linux Logging Basics.md create mode 100644 translated/tech/20150803 Linux Logging Basics.md diff --git a/sources/tech/20150803 Linux Logging Basics.md b/sources/tech/20150803 Linux Logging Basics.md deleted file mode 100644 index 6c3c3693a4..0000000000 --- a/sources/tech/20150803 Linux Logging Basics.md +++ /dev/null @@ -1,92 +0,0 @@ -FSSlc translating - -Linux Logging Basics -================================================================================ -First we’ll describe the basics of what Linux logs are, where to find them, and how they get created. If you already know this stuff, feel free to skip to the next section. - -### Linux System Logs ### - -Many valuable log files are automatically created for you by Linux. You can find them in your /var/log directory.
Here is what this directory looks like on a typical Ubuntu system: - -![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-system-log-terminal.png) - -Some of the most important Linux system logs include: - -- /var/log/syslog or /var/log/messages stores all global system activity data, including startup messages. Debian-based systems like Ubuntu store this in /var/log/syslog. RedHat-based systems like RHEL or CentOS store this in /var/log/messages. -- /var/log/auth.log or /var/log/secure stores logs from the Pluggable Authentication Module (pam) including successful logins, failed login attempts, and authentication methods. Ubuntu and Debian store authentication messages in /var/log/auth.log. RedHat and CentOS store this data in /var/log/secure. -- /var/log/kern stores kernel error and warning data, which is particularly helpful for troubleshooting custom kernels. -- /var/log/cron stores information about cron jobs. Use this data to verify that your cron jobs are running successfully. - -Digital Ocean has a thorough [tutorial][1] on these files and how rsyslog creates them on common distributions like RedHat and CentOS. - -Applications also write log files in this directory. For example, popular servers like Apache, Nginx, MySQL, and more can write log files here. Some of these log files are written by the application itself. Others are created through syslog (see below). - -### What’s Syslog? ### - -How do Linux system log files get created? The answer is through the syslog daemon, which listens for log messages on the syslog socket /dev/log and then writes them to the appropriate log file. - -The word “syslog” is an overloaded term and is often used in short to refer to one of these: - -1. **Syslog daemon** — a program to receive, process, and send syslog messages. It can [send syslog remotely][2] to a centralized server or write it to a local file. Common examples include rsyslogd and syslog-ng. 
In this usage, people will often say “sending to syslog.” -1. **Syslog protocol** — a transport protocol specifying how logs can be sent over a network and a data format definition for syslog messages (below). It’s officially defined in [RFC-5424][3]. The standard ports are 514 for plaintext logs and 6514 for encrypted logs. In this usage, people will often say “sending over syslog.” -1. **Syslog messages** — log messages or events in the syslog format, which includes a header with several standard fields. In this usage, people will often say “sending syslog.” - -Syslog messages or events include a header with several standard fields, making analysis and routing easier. They include the timestamp, the name of the application, the classification or location in the system where the message originates, and the priority of the issue. - -Here is an example log message with the syslog header included. It’s from the sshd daemon, which controls remote logins to the system. This message describes a failed login attempt: - - <34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2 - -### Syslog Format and Fields ### - -Each syslog message includes a header with fields. Fields are structured data that makes it easier to analyze and route the events. Here is the format we used to generate the above syslog example. You can match each value to a specific field name. - - <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%n - -Below, you’ll find descriptions of some of the most commonly used syslog fields when searching or troubleshooting issues. - -#### Timestamp #### - -The [timestamp][4] field (2003-10-11T22:14:15.003Z in the example) indicates the time and date that the message was generated on the system sending the message. That time can be different from when another system receives the message. 
The example timestamp breaks down like this: - -- **2003-10-11** is the year, month, and day. -- **T** is a required element of the TIMESTAMP field, separating the date and the time. -- **22:14:15.003** is the 24-hour format of the time, including the number of milliseconds (**003**) into the next second. -- **Z** is an optional element, indicating UTC time. Instead of Z, the example could have included an offset, such as -08:00, which indicates that the time is offset from UTC by 8 hours, PST. - -#### Hostname #### - -The [hostname][5] field (server1.com in the example above) indicates the name of the host or system that sent the message. - -#### App-Name #### - -The [app-name][6] field (sshd:auth in the example) indicates the name of the application that sent the message. - -#### Priority #### - -The priority field or [pri][7] for short (<34> in the example above) tells you how urgent or severe the event is. It’s a combination of two numerical fields: the facility and the severity. The severity ranges from the number 7 for debug events all the way to 0 which is an emergency. The facility describes which process created the event. It ranges from 0 for kernel messages to 23 for local application use. - -Pri can be output in two ways. The first is as a single number prival which is calculated as the facility field value multiplied by 8, then the result is added to the severity field value: (facility)(8) + (severity). The second is pri-text which will output in the string format “facility.severity.” The latter format can often be easier to read and search but takes up more storage space. 
- --------------------------------------------------------------------------------- - -via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/ - -作者:[Jason Skowronski][a1] -作者:[Amy Echeverri][a2] -作者:[Sadequl Hussain][a3] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a1]:https://www.linkedin.com/in/jasonskowronski -[a2]:https://www.linkedin.com/in/amyecheverri -[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 -[1]:https://www.digitalocean.com/community/tutorials/how-to-view-and-configure-linux-logs-on-ubuntu-and-centos -[2]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.y2e9tdfk1cdb -[3]:https://tools.ietf.org/html/rfc5424 -[4]:https://tools.ietf.org/html/rfc5424#section-6.2.3 -[5]:https://tools.ietf.org/html/rfc5424#section-6.2.4 -[6]:https://tools.ietf.org/html/rfc5424#section-6.2.5 -[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1 diff --git a/translated/tech/20150803 Linux Logging Basics.md b/translated/tech/20150803 Linux Logging Basics.md new file mode 100644 index 0000000000..00acdf183e --- /dev/null +++ b/translated/tech/20150803 Linux Logging Basics.md @@ -0,0 +1,90 @@ +Linux 日志基础 +================================================================================ +首先,我们将描述有关 Linux 日志是什么,到哪儿去找它们以及它们是如何创建的基础知识。如果你已经知道这些,请随意跳至下一节。 + +### Linux 系统日志 ### + +许多有价值的日志文件都是由 Linux 自动地为你创建的。你可以在 `/var/log` 目录中找到它们。下面是在一个典型的 Ubuntu 系统中这个目录的样子: + +![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-system-log-terminal.png) + +一些最为重要的 Linux 系统日志包括: + +- `/var/log/syslog` 或 `/var/log/messages` 存储所有的全局系统活动数据,包括开机信息。基于 Debian 的系统如 Ubuntu 在 `/var/log/syslog` 目录中存储它们,而基于 RedHat 的系统如 RHEL 或 CentOS 则在 `/var/log/messages` 中存储它们。 +- `/var/log/auth.log` 或 `/var/log/secure` 存储来自可插拔认证模块(PAM)的日志,包括成功的登录,失败的登录尝试和认证方式。Ubuntu 和 Debian 在 `/var/log/auth.log` 
中存储认证信息,而 RedHat 和 CentOS 则在 `/var/log/secure` 中存储该信息。 +- `/var/log/kern` 存储内核错误和警告数据,这对于排除与自定义内核相关的故障尤为实用。 +- `/var/log/cron` 存储有关 cron 作业的信息。使用这个数据来确保你的 cron 作业正成功地运行着。 + +Digital Ocean 有一个完整的关于这些文件及 rsyslog 如何在常见的发行版本如 RedHat 和 CentOS 中创建它们的 [教程][1] 。 + +应用程序也会在这个目录中写入日志文件。例如像 Apache,Nginx,MySQL 等常见的服务器程序可以在这个目录中写入日志文件。其中一些日志文件由应用程序自己创建,其他的则通过 syslog (具体见下文)来创建。 + +### 什么是 Syslog? ### + +Linux 系统日志文件是如何创建的呢?答案是通过 syslog 守护程序,它在 syslog +套接字 `/dev/log` 上监听日志信息,然后将它们写入适当的日志文件中。 + +单词“syslog” 是一个重载的条目,并经常被用来简称如下的几个名称之一: + +1. **Syslog 守护进程** — 一个用来接收,处理和发送 syslog 信息的程序。它可以[远程发送 syslog][2] 到一个集中式的服务器或写入一个本地文件。常见的例子包括 rsyslogd 和 syslog-ng。在这种使用方式中,人们常说 "发送到 syslog." +1. **Syslog 协议** — 一个指定日志如何通过网络来传送的传输协议和一个针对 syslog 信息(具体见下文) 的数据格式的定义。它在 [RFC-5424][3] 中被正式定义。对于文本日志,标准的端口是 514,对于加密日志,端口是 6514。在这种使用方式中,人们常说"通过 syslog 传送." +1. **Syslog 信息** — syslog 格式的日志信息或事件,它包括一个带有几个标准域的文件头。在这种使用方式中,人们常说"发送 syslog." + +Syslog 信息或事件包括一个带有几个标准域的 header ,使得分析和路由更方便。它们包括时间戳,应用程序的名称,在系统中信息来源的分类或位置,以及事件的优先级。 + +下面展示的是一个包含 syslog header 的日志信息,它来自于 sshd 守护进程,它控制着到该系统的远程登录,这个信息描述的是一次失败的登录尝试: + + <34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2 + +### Syslog 格式和域 ### + +每条 syslog 信息包含一个带有域的 header,这些域是结构化的数据,使得分析和路由事件更加容易。下面是我们使用的用来产生上面的 syslog 例子的格式,你可以将每个值匹配到一个特定的域的名称上。 + + <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%n + +下面,你将看到一些在查找或排错时最常使用的 syslog 域: + +#### 时间戳 #### + +[时间戳][4] (上面的例子为 2003-10-11T22:14:15.003Z) 暗示了在系统中发送该信息的时间和日期。这个时间在另一系统上接收该信息时可能会有所不同。上面例子中的时间戳可以分解为: + +- **2003-10-11** 年,月,日. +- **T** 为时间戳的必需元素,它将日期和时间分离开. +- **22:14:15.003** 是 24 小时制的时间,包括进入下一秒的毫秒数(**003**). +- **Z** 是一个可选元素,指的是 UTC 时间,除了 Z,这个例子还可以包括一个偏移量,例如 -08:00,这意味着时间从 UTC 偏移 8 小时,即 PST 时间. + +#### 主机名 #### + +[主机名][5] 域(在上面的例子中对应 server1.com) 指的是主机的名称或发送信息的系统. + +#### 应用名 #### + +[应用名][6] 域(在上面的例子中对应 sshd:auth) 指的是发送信息的程序的名称. 
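为了更直观地把上面的格式模板和各个域对应起来,下面给出一段补充的 shell 示例(本文新增的示意性片段,并非原文内容;它只针对本文这条示例信息做了简化的按空格拆分,不是完整的 RFC 5424 解析器):

```shell
# 示例信息取自上文,按 header 的域顺序以空格拆分
msg='<34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure'

set -- $msg          # 依次得到 <pri>version、时间戳、主机名、应用名……
pri_ver=$1
timestamp=$2
host=$3
app=$4

pri=${pri_ver%%>*}       # 去掉 '>' 及其后的内容 -> '<34'
pri=${pri#<}             # 去掉开头的 '<'       -> '34'
version=${pri_ver##*>}   # '>' 之后的部分        -> '1'(协议版本)

echo "pri=$pri version=$version"
echo "timestamp=$timestamp hostname=$host app-name=$app"
```

按这种拆分方式,这条信息的主机名域是 server1.com,应用名域是 sshd。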
+ +#### 优先级 #### + +优先级域或缩写为 [pri][7] (在上面的例子中对应 <34>) 告诉我们这个事件有多紧急或多严峻。它由两个数字域组成:设备域和紧急性域。紧急性域从代表 debug 类事件的数字 7 一直到代表紧急事件的数字 0 。设备域描述了哪个进程创建了该事件。它从代表内核信息的数字 0 到代表本地应用使用的 23 。 + +Pri 有两种输出方式。第一种是以一个单独的数字表示,可以这样计算:先用设备域的值乘以 8,再加上紧急性域的值:(设备域)(8) + (紧急性域)。第二种是 pri 文本,将以“设备域.紧急性域” 的字符串格式输出。后一种格式更方便阅读和搜索,但占据更多的存储空间。 +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:https://www.digitalocean.com/community/tutorials/how-to-view-and-configure-linux-logs-on-ubuntu-and-centos +[2]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.y2e9tdfk1cdb +[3]:https://tools.ietf.org/html/rfc5424 +[4]:https://tools.ietf.org/html/rfc5424#section-6.2.3 +[5]:https://tools.ietf.org/html/rfc5424#section-6.2.4 +[6]:https://tools.ietf.org/html/rfc5424#section-6.2.5 +[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1 \ No newline at end of file From 093951ac6cd1e1d65a99bf76c5224a8d146c52a2 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 9 Aug 2015 23:53:54 +0800 Subject: [PATCH 109/697] PUB:20150730 Must-Know Linux Commands For New Users @GOLinux --- ... 
Must-Know Linux Commands For New Users.md | 43 ++++++++++--------- 1 file changed, 22 insertions(+), 21 deletions(-) rename {translated/tech => published}/20150730 Must-Know Linux Commands For New Users.md (72%) diff --git a/translated/tech/20150730 Must-Know Linux Commands For New Users.md b/published/20150730 Must-Know Linux Commands For New Users.md similarity index 72% rename from translated/tech/20150730 Must-Know Linux Commands For New Users.md rename to published/20150730 Must-Know Linux Commands For New Users.md index 230cecf736..657d7372bb 100644 --- a/translated/tech/20150730 Must-Know Linux Commands For New Users.md +++ b/published/20150730 Must-Know Linux Commands For New Users.md @@ -1,11 +1,12 @@ 新手应知应会的Linux命令 ================================================================================ ![Manage system updates via the command line with dnf on Fedora.](http://www.linux.com/images/stories/41373/fedora-cli.png) -在Fedora上通过命令行使用dnf来管理系统更新 -基于Linux的系统的优点之一,就是你可以通过终端中使用命令该ing来管理整个系统。使用命令行的优势在于,你可以使用相同的知识和技能来管理随便哪个Linux发行版。 +*在Fedora上通过命令行使用dnf来管理系统更新* -对于各个发行版以及桌面环境(DE)而言,要一致地使用图形化用户界面(GUI)却几乎是不可能的,因为它们都提供了各自的用户界面。要明确的是,有那么些情况,你需要在不同的发行版上使用不同的命令来部署某些特定的任务,但是,或多或少它们的概念和意图却仍然是一致的。 +基于Linux的系统最美妙的一点,就是你可以在终端中使用命令行来管理整个系统。使用命令行的优势在于,你可以使用相同的知识和技能来管理随便哪个Linux发行版。 + +对于各个发行版以及桌面环境(DE)而言,要一致地使用图形化用户界面(GUI)却几乎是不可能的,因为它们都提供了各自的用户界面。要明确的是,有些情况下在不同的发行版上需要使用不同的命令来执行某些特定的任务,但是,基本来说它们的思路和目的是一致的。 在本文中,我们打算讨论Linux用户应当掌握的一些基本命令。我将给大家演示怎样使用命令行来更新系统、管理软件、操作文件以及切换到root,这些操作将在三个主要发行版上进行:Ubuntu(也包括其定制版和衍生版,还有Debian),openSUSE,以及Fedora。 @@ -15,7 +16,7 @@ Linux是基于安全设计的,但事实上是,任何软件都有缺陷,会导致安全漏洞。所以,保持你的系统更新到最新是十分重要的。这么想吧:运行过时的操作系统,就像是你坐在全副武装的坦克里头,而门却没有锁。武器会保护你吗?任何人都可以进入开放的大门,对你造成伤害。同样,在你的系统中也有没有打补丁的漏洞,这些漏洞会危害到你的系统。开源社区,不像专利世界,在漏洞补丁方面反应是相当快的,所以,如果你保持系统最新,你也获得了安全保证。 -留意新闻站点,了解安全漏洞。如果发现了一个漏洞,请阅读之,然后在补丁出来的第一时间更新。不管怎样,在生产机器上,你每星期必须至少运行一次更新命令。如果你运行这一台复杂的服务器,那么就要额外当心了。仔细阅读变更日志,以确保更新不会搞坏你的自定义服务。 
+留意新闻站点,了解安全漏洞。如果发现了一个漏洞,了解它,然后在补丁出来的第一时间更新。不管怎样,在生产环境上,你每星期必须至少运行一次更新命令。如果你运行着一台复杂的服务器,那么就要额外当心了。仔细阅读变更日志,以确保更新不会搞坏你的自定义服务。 **Ubuntu**:牢记一点:你在升级系统或安装不管什么软件之前,都必须要刷新仓库(也就是repos)。在Ubuntu上,你可以使用下面的命令来更新系统,第一个命令用于刷新仓库: @@ -29,7 +30,7 @@ Linux是基于安全设计的,但事实上是,任何软件都有缺陷,会 sudo apt-get dist-upgrade -**openSUSE**:如果你是在openSUSE上,你可以使用以下命令来更新系统(照例,第一个命令的意思是更新仓库) +**openSUSE**:如果你是在openSUSE上,你可以使用以下命令来更新系统(照例,第一个命令的意思是更新仓库): sudo zypper refresh sudo zypper up @@ -42,7 +43,7 @@ Linux是基于安全设计的,但事实上是,任何软件都有缺陷,会 ### 软件安装与移除 ### 你只可以安装那些你系统上启用的仓库中可用的包,各个发行版默认都附带有并启用了一些官方或者第三方仓库。 -**Ubuntu**: To install any package on Ubuntu, first update the repo and then use this syntax: + **Ubuntu**:要在Ubuntu上安装包,首先更新仓库,然后使用下面的语句: sudo apt-get install [package_name] @@ -75,9 +76,9 @@ Linux是基于安全设计的,但事实上是,任何软件都有缺陷,会 ### 如何管理第三方软件? ### -在一个庞大的开发者社区中,这些开发者们为用户提供了许多的软件。不同的发行版有不同的机制来使用这些第三方软件,将它们提供给用户。同时也取决于开发者怎样将这些软件提供给用户,有些开发者会提供二进制包,而另外一些开发者则将软件发布到仓库中。 +在一个庞大的开发者社区中,这些开发者们为用户提供了许多的软件。不同的发行版有不同的机制来将这些第三方软件提供给用户。当然,同时也取决于开发者怎样将这些软件提供给用户,有些开发者会提供二进制包,而另外一些开发者则将软件发布到仓库中。 -Ubuntu严重依赖于PPA(个人包归档),但是,不幸的是,它却没有提供一个内建工具来帮助用于搜索这些PPA仓库。在安装软件前,你将需要通过Google搜索PPA,然后手工添加该仓库。下面就是添加PPA到系统的方法: +Ubuntu很多地方都用到PPA(个人包归档),但是,不幸的是,它却没有提供一个内建工具来帮助用于搜索这些PPA仓库。在安装软件前,你将需要通过Google搜索PPA,然后手工添加该仓库。下面就是添加PPA到系统的方法: sudo add-apt-repository ppa: @@ -85,7 +86,7 @@ Ubuntu严重依赖于PPA(个人包归档),但是,不幸的是,它却 sudo add-apt-repository ppa:libreoffice/ppa -它会要你按下回车键来导入秘钥。完成后,使用'update'命令来刷新仓库,然后安装该包。 +它会要你按下回车键来导入密钥。完成后,使用'update'命令来刷新仓库,然后安装该包。 openSUSE拥有一个针对第三方应用的优雅的解决方案。你可以访问software.opensuse.org,一键点击搜索并安装相应包,它会自动将对应的仓库添加到你的系统中。如果你想要手工添加仓库,可以使用该命令: @@ -97,13 +98,13 @@ openSUSE拥有一个针对第三方应用的优雅的解决方案。你可以访 sudo zypper refresh sudo zypper install libreoffice -Fedora用户只需要添加RPMFusion(free和non-free仓库一起),该仓库包含了大量的应用。如果你需要添加仓库,命令如下: +Fedora用户只需要添加RPMFusion(包括自由软件和非自由软件仓库),该仓库包含了大量的应用。如果你需要添加该仓库,命令如下: -dnf config-manager --add-repo http://www.example.com/example.repo + dnf config-manager --add-repo http://www.example.com/example.repo ### 一些基本命令 ### 
-我已经写了一些关于使用CLI来管理你系统上的文件的[文章][1],下面介绍一些基本米ing令,这些命令在所有发行版上都经常会用到。 +我已经写了一些关于使用CLI来管理你系统上的文件的[文章][1],下面介绍一些基本命令,这些命令在所有发行版上都经常会用到。 拷贝文件或目录到一个新的位置: cp path_of_files/* path_of_the_directory_where_you_want_to_copy/ 将一个文件从某个位置移动到另一个位置(尾斜杠是说在该目录中): - mv path_of_file_1 path_of_the_directory_where_you_want_to_move/ +将一个文件从某个位置移动到另一个位置(尾斜杠是说放在该目录中): + mv path_of_file_1 path_of_the_directory_where_you_want_to_move/ 将所有文件从一个位置移动到另一个位置: - mv path_of_directory_where_files_are/* path_of_the_directory_where_you_want_to_move/ + mv path_of_directory_where_files_are/* path_of_the_directory_where_you_want_to_move/ 删除一个文件: @@ -135,11 +136,11 @@ dnf config-manager --add-repo http://www.example.com/example.repo ### 创建新目录 ### -要创建一个新目录,首先输入你要创建的目录的位置。比如说,你想要在你的Documents目录中创建一个名为'foundation'的文件夹。让我们使用 cd (即change directory,改变目录)命令来改变目录: +要创建一个新目录,首先进入到你要创建该目录的位置。比如说,你想要在你的Documents目录中创建一个名为'foundation'的文件夹。让我们使用 cd (即change directory,改变目录)命令来改变目录: cd /home/swapnil/Documents -(替换'swapnil'为你系统中的用户) +(替换'swapnil'为你系统中的用户名) 然后,使用 mkdir 命令来创建该目录: mkdir /home/swapnil/Documents/foundation -如果你想要创建父-子目录,那是指目录中的目录,那么可以使用 -p 选项。它会在指定路径中创建所有目录: +如果你想要连父目录一起创建,那么可以使用 -p 选项。它会在指定路径中创建所有目录: mkdir -p /home/swapnil/Documents/linux/foundation ### 成为root ### -你或许需要成为root,或者具有sudo权力的用户,来实施一些管理任务,如管理软件包或者对根目录或其下的文件进行一些修改。其中一个例子就是编辑'fstab'文件,该文件记录了挂载的硬件驱动器。它在'etc'目录中,而该目录又在根目录中,你只能作为超级用户来修改该文件。在大多数的发行版中,你可以通过'切换用户'来成为root。比如说,在openSUSE上,我想要成为root,因为我要在根目录中工作,你可以使用下面的命令之一: +你或许需要成为root,或者具有sudo权力的用户,来实施一些管理任务,如管理软件包或者对根目录或其下的文件进行一些修改。其中一个例子就是编辑'fstab'文件,该文件记录了挂载的硬盘驱动器。它在'etc'目录中,而该目录又在根目录中,你只能作为超级用户来修改该文件。在大多数的发行版中,你可以通过'su'来成为root。比如说,在openSUSE上,我想要成为root,因为我要在根目录中工作,你可以使用下面的命令之一: sudo su - 
该命令会要求输入密码,然后你就具有root特权了。记住一点:千万不要以root用户来运行系统,除非你知道你正在做什么。另外重要的一点需要注意的是,你以root什么对目录或文件进行修改后,会将它们的拥有关系从该用户或特定的服务改变为root。你必须恢复这些文件的拥有关系,否则该服务或用户就不能访问或写入到那些文件。要改变用户,命令如下: - sudo chown -R user:user /path_of_file_or_directory + sudo chown -R 用户:组 文件或目录名 当你将其它发行版上的分区挂载到系统中时,你可能经常需要该操作。当你试着访问这些分区上的文件时,你可能会碰到权限拒绝错误,你只需要改变这些分区的拥有关系就可以访问它们了。需要额外当心的是,不要改变根目录的权限或者拥有关系。 @@ -177,7 +178,7 @@ via: http://www.linux.com/learn/tutorials/842251-must-know-linux-commands-for-ne 作者:[Swapnil Bhartiya][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From de0f6fe24b3e4197727221dfedc0fa8fc4e1a42f Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 10 Aug 2015 00:50:06 +0800 Subject: [PATCH 110/697] PUB:20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem @FSSlc --- ...nd to Quickly Navigate Linux Filesystem.md | 98 ++++++++++--------- 1 file changed, 50 insertions(+), 48 deletions(-) rename {translated/tech => published}/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md (62%) diff --git a/translated/tech/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md b/published/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md similarity index 62% rename from translated/tech/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md rename to published/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md index 546a4b0baf..f749039d5d 100644 --- a/translated/tech/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md +++ b/published/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md @@ -1,53 +1,54 @@ -Autojump – 一个高级的‘cd’命令用以快速浏览 Linux 文件系统 +Autojump:一个可以在 Linux 文件系统快速导航的高级 cd 命令 
================================================================================ -对于那些主要通过控制台或终端使用 Linux 命令行来工作的 Linux 用户来说,他们真切地感受到了 Linux 的强大。 然而在 Linux 的分层文件系统中进行浏览有时或许是一件头疼的事,尤其是对于那些新手来说。 + +对于那些主要通过控制台或终端使用 Linux 命令行来工作的 Linux 用户来说,他们真切地感受到了 Linux 的强大。 然而在 Linux 的分层文件系统中进行导航有时或许是一件头疼的事,尤其是对于那些新手来说。 现在,有一个用 Python 写的名为 `autojump` 的 Linux 命令行实用程序,它是 Linux ‘[cd][1]’命令的高级版本。 ![Autojump 命令](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Command.jpg) -Autojump – 浏览 Linux 文件系统的最快方式 +*Autojump – Linux 文件系统导航的最快方式* 这个应用原本由 Joël Schaerer 编写,现在由 +William Ting 维护。 -Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行更轻松的目录浏览。与传统的 `cd` 命令相比,autojump 能够更加快速地浏览至目的目录。 +Autojump 应用可以从用户那里学习并帮助用户在 Linux 命令行中进行更轻松的目录导航。与传统的 `cd` 命令相比,autojump 能够更加快速地导航至目的目录。 #### autojump 的特色 #### -- 免费且开源的应用,在 GPL V3 协议下发布。 -- 自主学习的应用,从用户的浏览习惯中学习。 -- 更快速地浏览。不必包含子目录的名称。 -- 对于大多数的标准 Linux 发行版本,能够在软件仓库中下载得到,它们包括 Debian (testing/unstable), Ubuntu, Mint, Arch, Gentoo, Slackware, CentOS, RedHat and Fedora。 +- 自由开源的应用,在 GPL V3 协议下发布。 +- 自主学习的应用,从用户的导航习惯中学习。 +- 更快速地导航。不必包含子目录的名称。 +- 对于大多数的标准 Linux 发行版本,能够在软件仓库中下载得到,它们包括 Debian (testing/unstable), Ubuntu, Mint, Arch, Gentoo, Slackware, CentOS, RedHat 和 Fedora。 - 也能在其他平台中使用,例如 OS X(使用 Homebrew) 和 Windows (通过 Clink 来实现) -- 使用 autojump 你可以跳至任何特定的目录或一个子目录。你还可以打开文件管理器来到达某个目录,并查看你在某个目录中所待时间的统计数据。 +- 使用 autojump 你可以跳至任何特定的目录或一个子目录。你还可以用文件管理器打开某个目录,并查看你在某个目录中所待时间的统计数据。 #### 前提 #### - 版本号不低于 2.6 的 Python -### 第 1 步: 做一次全局系统升级 ### +### 第 1 步: 做一次完整的系统升级 ### -1. 
以 **root** 用户的身份,做一次系统更新或升级,以此保证你安装有最新版本的 Python。 +1、 以 **root** 用户的身份,做一次系统更新或升级,以此保证你安装有最新版本的 Python。 - # apt-get update && apt-get upgrade && apt-get dist-upgrade [APT based systems] - # yum update && yum upgrade [YUM based systems] - # dnf update && dnf upgrade [DNF based systems] + # apt-get update && apt-get upgrade && apt-get dist-upgrade [基于 APT 的系统] + # yum update && yum upgrade [基于 YUM 的系统] + # dnf update && dnf upgrade [基于 DNF 的系统] **注** : 这里特别提醒,在基于 YUM 或 DNF 的系统中,更新和升级执行相同的行动,大多数时间里它们是通用的,这点与基于 APT 的系统不同。 ### 第 2 步: 下载和安装 Autojump ### -2. 正如前面所言,在大多数的 Linux 发行版本的软件仓库中, autojump 都可获取到。通过包管理器你就可以安装它。但若你想从源代码开始来安装它,你需要克隆源代码并执行 python 脚本,如下面所示: +2、 正如前面所言,在大多数的 Linux 发行版本的软件仓库中, autojump 都可获取到。通过包管理器你就可以安装它。但若你想从源代码开始来安装它,你需要克隆源代码并执行 python 脚本,如下面所示: #### 从源代码安装 #### 若没有安装 git,请安装它。我们需要使用它来克隆 git 仓库。 - # apt-get install git [APT based systems] - # yum install git [YUM based systems] - # dnf install git [DNF based systems] + # apt-get install git [基于 APT 的系统] + # yum install git [基于 YUM 的系统] + # dnf install git [基于 DNF 的系统] -一旦安装完 git,以常规用户身份登录,然后像下面那样来克隆 autojump: +一旦安装完 git,以普通用户身份登录,然后像下面那样来克隆 autojump: $ git clone git://github.com/joelthelion/autojump.git @@ -55,29 +56,29 @@ Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行 $ cd autojump -下载,赋予脚本文件可执行权限,并以 root 用户身份来运行安装脚本。 +下载,赋予安装脚本文件可执行权限,并以 root 用户身份来运行安装脚本。 # chmod 755 install.py # ./install.py #### 从软件仓库中安装 #### -3. 假如你不想麻烦,你可以以 **root** 用户身份从软件仓库中直接安装它: +3、 假如你不想麻烦,你可以以 **root** 用户身份从软件仓库中直接安装它: 在 Debian, Ubuntu, Mint 及类似系统中安装 autojump : - # apt-get install autojump (注: 这里原文为 autojumo, 应该为 autojump) + # apt-get install autojump 为了在 Fedora, CentOS, RedHat 及类似系统中安装 autojump, 你需要启用 [EPEL 软件仓库][2]。 # yum install epel-release # yum install autojump - OR + 或 # dnf install autojump ### 第 3 步: 安装后的配置 ### -4. 
在 Debian 及其衍生系统 (Ubuntu, Mint,…) 中, 激活 autojump 应用是非常重要的。 +4、 在 Debian 及其衍生系统 (Ubuntu, Mint,…) 中, 激活 autojump 应用是非常重要的。 为了暂时激活 autojump 应用,即直到你关闭当前会话或打开一个新的会话之前让 autojump 均有效,你需要以常规用户身份运行下面的命令: @@ -89,7 +90,7 @@ Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行 ### 第 4 步: Autojump 的预测试和使用 ### -5. 如先前所言, autojump 将只跳到先前 `cd` 命令到过的目录。所以在我们开始测试之前,我们要使用 `cd` 切换到一些目录中去,并创建一些目录。下面是我所执行的命令。 +5、 如先前所言, autojump 将只跳到先前 `cd` 命令到过的目录。所以在我们开始测试之前,我们要使用 `cd` 切换到一些目录中去,并创建一些目录。下面是我所执行的命令。 $ cd $ cd @@ -120,45 +121,45 @@ Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行 现在,我们已经切换到过上面所列的目录,并为了测试创建了一些目录,一切准备就绪,让我们开始吧。 -**需要记住的一点** : `j` 是 autojump 的一个包装,你可以使用 j 来代替 autojump, 相反亦可。 +**需要记住的一点** : `j` 是 autojump 的一个封装,你可以使用 j 来代替 autojump, 相反亦可。 -6. 使用 -v 选项查看安装的 autojump 的版本。 +6、 使用 -v 选项查看安装的 autojump 的版本。 $ j -v - or + 或 $ autojump -v ![查看 Autojump 的版本](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Autojump-Version.png) -查看 Autojump 的版本 +*查看 Autojump 的版本* -7. 跳到先前到过的目录 ‘/var/www‘。 +7、 跳到先前到过的目录 ‘/var/www‘。 $ j www ![跳到目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-To-Directory.png) -跳到目录 +*跳到目录* -8. 跳到先前到过的子目录‘/home/avi/autojump-test/b‘ 而不键入子目录的全名。 +8、 跳到先前到过的子目录‘/home/avi/autojump-test/b‘ 而不键入子目录的全名。 $ jc b ![跳到子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Child-Directory.png) -跳到子目录 +*跳到子目录* -9. 
使用下面的命令,你就可以从命令行打开一个文件管理器,例如 GNOME Nautilus ,而不是跳到一个目录。 +9、 使用下面的命令,你就可以从命令行打开一个文件管理器,例如 GNOME Nautilus ,而不是跳到一个目录。 $ jo www -![跳到目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Direcotory.png) +![打开目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Direcotory.png) -跳到目录 +*打开目录* ![在文件管理器中打开目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Directory-in-File-Browser.png) -在文件管理器中打开目录 +*在文件管理器中打开目录* 你也可以在一个文件管理器中打开一个子目录。 @@ -166,19 +167,19 @@ Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行 ![打开子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory1.png) -打开子目录 +*打开子目录* ![在文件管理器中打开子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory-in-File-Browser1.png) -在文件管理器中打开子目录 +*在文件管理器中打开子目录* -10. 查看每个文件夹的关键权重和在所有目录权重中的总关键权重的相关统计数据。文件夹的关键权重代表在这个文件夹中所花的总时间。 目录权重是列表中目录的数目。(注: 在这一句中,我觉得原文中的 if 应该为 is) +10、 查看每个文件夹的权重和全部文件夹计算得出的总权重的统计数据。文件夹的权重代表在这个文件夹中所花的总时间。 文件夹权重是该列表中目录的数字。(LCTT 译注: 在这一句中,我觉得原文中的 if 应该为 is) $ j --stat -![查看目录统计数据](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Statistics.png) +![查看文件夹统计数据](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Statistics.png) -查看目录统计数据 +*查看文件夹统计数据* **提醒** : autojump 存储其运行日志和错误日志的地方是文件夹 `~/.local/share/autojump/`。千万不要重写这些文件,否则你将失去你所有的统计状态结果。 @@ -186,15 +187,15 @@ Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行 ![Autojump 的日志](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Logs.png) -Autojump 的日志 +*Autojump 的日志* -11. 
假如需要,你只需运行下面的命令就可以查看帮助 : +11、 假如需要,你只需运行下面的命令就可以查看帮助 : $ j --help ![Autojump 的帮助和选项](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-help-options.png) -Autojump 的帮助和选项 +*Autojump 的帮助和选项* ### 功能需求和已知的冲突 ### @@ -204,18 +205,19 @@ Autojump 的帮助和选项 ### 结论: ### -假如你是一个命令行用户, autojump 是你必备的实用程序。它可以简化许多事情。它是一个在命令行中浏览 Linux 目录的绝佳的程序。请自行尝试它,并在下面的评论框中让我知晓你宝贵的反馈。保持联系,保持分享。喜爱并分享,帮助我们更好地传播。 +假如你是一个命令行用户, autojump 是你必备的实用程序。它可以简化许多事情。它是一个在命令行中导航 Linux 目录的绝佳的程序。请自行尝试它,并在下面的评论框中让我知晓你宝贵的反馈。保持联系,保持分享。喜爱并分享,帮助我们更好地传播。 + -------------------------------------------------------------------------------- via: http://www.tecmint.com/autojump-a-quickest-way-to-navigate-linux-filesystem/ 作者:[Avishek Kumar][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/avishek/ [1]:http://www.tecmint.com/cd-command-in-linux/ -[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[2]:https://linux.cn/article-2324-1.html [3]:http://www.tecmint.com/manage-linux-filenames-with-special-characters/ \ No newline at end of file From 99ef2dab243d3e400189462378026b7cad784472 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Mon, 10 Aug 2015 09:23:08 +0800 Subject: [PATCH 111/697] Update 20150209 Install OpenQRM Cloud Computing Platform In Debian.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...0209 Install OpenQRM Cloud Computing Platform In Debian.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md b/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md index 127f10affc..2c6a990b83 100644 --- a/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md +++ b/sources/tech/20150209 Install OpenQRM Cloud 
Computing Platform In Debian.md @@ -1,3 +1,5 @@ +FSSlc translating + Install OpenQRM Cloud Computing Platform In Debian ================================================================================ ### Introduction ### @@ -146,4 +148,4 @@ via: http://www.unixmen.com/install-openqrm-cloud-computing-platform-debian/ [a]:http://www.unixmen.com/author/sk/ [1]:http://www.openqrm-enterprise.com/products/edition-comparison.html [2]:http://sourceforge.net/projects/openqrm/files/?source=navbar -[3]:http://www.openqrm-enterprise.com/fileadmin/Documents/Whitepaper/openQRM-Enterprise-Administrator-Guide-5.2.pdf \ No newline at end of file +[3]:http://www.openqrm-enterprise.com/fileadmin/Documents/Whitepaper/openQRM-Enterprise-Administrator-Guide-5.2.pdf From abc7c38b3accd6953e9a40d2b43d3b362a22307b Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 10 Aug 2015 10:56:03 +0800 Subject: [PATCH 112/697] PUB:20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04 @geekpi --- ...ver or client) on Ubuntu 14.04 or 15.04.md | 120 +++++++----------- 1 file changed, 48 insertions(+), 72 deletions(-) rename {translated/tech => published}/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md (68%) diff --git a/translated/tech/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md b/published/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md similarity index 68% rename from translated/tech/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md rename to published/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md index 38574e6fa7..2fdaa71872 100644 --- a/translated/tech/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md +++ b/published/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md @@ -1,6 +1,6 @@ -如何在Ubuntu 14.04/15.04上配置Chef(服务端/客户端) +如何在 Ubuntu 上安装配置管理系统 Chef (大厨) 
================================================================================ -Chef是对于信息技术专业人员的一款配置管理和自动化工具,它可以配置和管理你的设备无论它在本地还是在云上。它可以用于加速应用部署并协调多个系统管理员和开发人员的工作,涉及到成百甚至上千的服务器和程序来支持大量的客户群。chef最有用的是让设备变成代码。一旦你掌握了Chef,你可以获得一流的网络IT支持来自动化管理你的云端设备或者终端用户。 +Chef是面对IT专业人员的一款配置管理和自动化工具,它可以配置和管理你的基础设施,无论它在本地还是在云上。它可以用于加速应用部署并协调多个系统管理员和开发人员的工作,这涉及到可支持大量的客户群的成百上千的服务器和程序。chef最有用的是让基础设施变成代码。一旦你掌握了Chef,你可以获得一流的网络IT支持来自动化管理你的云端基础设施或者终端用户。 下面是我们将要在本篇中要设置和配置Chef的主要组件。 @@ -10,34 +10,13 @@ Chef是对于信息技术专业人员的一款配置管理和自动化工具, 我们将在下面的基础环境下设置Chef配置管理系统。 -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - -
管理和配置工具:Chef
基础操作系统:Ubuntu 14.04.1 LTS (x86_64)
Chef Server:Version 12.1.0
Chef Manage:Version 1.17.0
Chef Development Kit:Version 0.6.2
内存和CPU:4 GB,2.0+2.0 GHz
+|管理和配置工具:Chef|| +|-------------------------------|---| +|基础操作系统|Ubuntu 14.04.1 LTS (x86_64)| +|Chef Server|Version 12.1.0| +|Chef Manage|Version 1.17.0| +|Chef Development Kit|Version 0.6.2| +|内存和CPU|4 GB  , 2.0+2.0 GHz| ### Chef服务端的安装和配置 ### @@ -45,15 +24,15 @@ Chef服务端是核心组件,它存储配置以及其他和工作站交互的 我使用下面的命令来下载和安装它。 -**1) 下载Chef服务端** +####1) 下载Chef服务端 root@ubuntu-14-chef:/tmp# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/chef-server-core_12.1.0-1_amd64.deb -**2) 安装Chef服务端** +####2) 安装Chef服务端 root@ubuntu-14-chef:/tmp# dpkg -i chef-server-core_12.1.0-1_amd64.deb -**3) 重新配置Chef服务端** +####3) 重新配置Chef服务端 现在运行下面的命令来启动所有的chef服务端服务,这步也许会花费一些时间,因为它有许多不同一起工作的服务组成来创建一个正常运作的系统。 @@ -64,35 +43,35 @@ chef服务端启动命令'chef-server-ctl reconfigure'需要运行两次,这 Chef Client finished, 342/350 resources updated in 113.71139964 seconds opscode Reconfigured! -**4) 重启系统 ** +####4) 重启系统 安装完成后重启系统使系统能最好的工作,不然我们或许会在创建用户的时候看到下面的SSL连接错误。 ERROR: Errno::ECONNRESET: Connection reset by peer - SSL_connect -**5) 创建心的管理员** +####5) 创建新的管理员 -运行下面的命令来创建一个新的用它自己的配置的管理员账户。创建过程中,用户的RSA私钥会自动生成并需要被保存到一个安全的地方。--file选项会保存RSA私钥到指定的路径下。 +运行下面的命令来创建一个新的管理员账户及其配置。创建过程中,用户的RSA私钥会自动生成,它需要保存到一个安全的地方。--file选项会保存RSA私钥到指定的路径下。 root@ubuntu-14-chef:/tmp# chef-server-ctl user-create kashi kashi kashi kashif.fareedi@gmail.com kashi123 --filename /root/kashi.pem ### Chef服务端的管理设置 ### -Chef Manage是一个针对企业Chef用户的管理控制台,它启用了可视化的web用户界面并可以管理节点、数据包、规则、环境、配置和基于角色的访问控制(RBAC) +Chef Manage是一个针对企业Chef用户的管理控制台,它提供了可视化的web用户界面,可以管理节点、数据包、规则、环境、Cookbook 和基于角色的访问控制(RBAC) -**1) 下载Chef Manage** +####1) 下载Chef Manage -从官网复制链接病下载chef manage的安装包。 +从官网复制链接并下载chef manage的安装包。 root@ubuntu-14-chef:~# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/opscode-manage_1.17.0-1_amd64.deb -**2) 安装Chef Manage** +####2) 安装Chef Manage 使用下面的命令在root的家目录下安装它。 root@ubuntu-14-chef:~# chef-server-ctl install opscode-manage --path /root -**3) 重启Chef Manage和服务端** +####3) 重启Chef Manage和服务端 安装完成后我们需要运行下面的命令来重启chef manage和服务端。 @@ -101,28 
+80,27 @@ Chef Manage是一个针对企业Chef用户的管理控制台,它启用了可 ### Chef Manage网页控制台 ### -我们可以使用localhost访问网页控制台以及fqdn,并用已经创建的管理员登录 +我们可以使用localhost或它的全称域名来访问网页控制台,并用已经创建的管理员登录 ![chef amanage](http://blog.linoxide.com/wp-content/uploads/2015/07/5-chef-web.png) -**1) Chef Manage创建新的组织 ** +####1) Chef Manage创建新的组织 -你或许被要求创建新的组织或者接受其他阻止的邀请。如下所示,使用缩写和全名来创建一个新的组织。 +你或许被要求创建新的组织,或者也可以接受其他组织的邀请。如下所示,使用缩写和全名来创建一个新的组织。 ![Create Org](http://blog.linoxide.com/wp-content/uploads/2015/07/7-create-org.png) -**2) 用命令行创建心的组织 ** +####2) 用命令行创建新的组织 -We can also create new Organization from the command line by executing the following command. 我们同样也可以运行下面的命令来创建新的组织。 root@ubuntu-14-chef:~# chef-server-ctl org-create linux Linoxide Linux Org. --association_user kashi --filename linux.pem ### 设置工作站 ### -我们已经完成安装chef服务端,现在我们可以开始创建任何recipes、cookbooks、属性和其他任何的我们想要对Chef的修改。 +我们已经完成安装chef服务端,现在我们可以开始创建任何recipes([基础配置元素](https://docs.chef.io/recipes.html))、cookbooks([基础配置集](https://docs.chef.io/cookbooks.html))、attributes([节点属性](https://docs.chef.io/attributes.html))和其他任何的我们想要对Chef做的修改。 -**1) 在Chef服务端上创建新的用户和组织 ** +####1) 在Chef服务端上创建新的用户和组织 为了设置工作站,我们用命令行创建一个新的用户和组织。 @@ -130,25 +108,23 @@ We can also create new Organization from the command line by executing the follo root@ubuntu-14-chef:~# chef-server-ctl org-create blogs Linoxide Blogs Inc. --association_user bloger --filename blogs.pem -**2) 下载工作站入门套件 ** +####2) 下载工作站入门套件 -Now Download and Save starter-kit from the chef manage web console on a workstation and use it to work with Chef server. 
-在工作站的网页控制台中下面并保存入门套件用于与服务端协同工作 +在工作站的网页控制台中下载保存入门套件,它用于与服务端协同工作 ![Starter Kit](http://blog.linoxide.com/wp-content/uploads/2015/07/8-download-kit.png) -**3) 点击"Proceed"下载套件 ** +####3) 下载套件后,点击"Proceed" ![starter kit](http://blog.linoxide.com/wp-content/uploads/2015/07/9-download-kit.png) -### 对于工作站的Chef开发套件设置 ### +### 用于工作站的Chef开发套件设置 ### -Chef开发套件是一款包含所有开发chef所需工具的软件包。它捆绑了由Chef开发的带Chef客户端的工具。 +Chef开发套件是一款包含开发chef所需的所有工具的软件包。它捆绑了由Chef开发的带Chef客户端的工具。 -**1) 下载 Chef DK** +####1) 下载 Chef DK -We can Download chef development kit from its official web link and choose the required operating system to get its chef development tool kit. -我们可以从它的官网链接中下载开发包,并选择操作系统来得到chef开发包。 +我们可以从它的官网链接中下载开发包,并选择操作系统来下载chef开发包。 ![Chef DK](http://blog.linoxide.com/wp-content/uploads/2015/07/10-CDK.png) @@ -156,13 +132,13 @@ We can Download chef development kit from its official web link and choose the r root@ubuntu-15-WKS:~# wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.6.2-1_amd64.deb -**1) Chef开发套件安装** +####2) Chef开发套件安装 使用dpkg命令安装开发套件 root@ubuntu-15-WKS:~# dpkg -i chefdk_0.6.2-1_amd64.deb -**3) Chef DK 验证** +####3) Chef DK 验证 使用下面的命令验证客户端是否已经正确安装。 @@ -195,7 +171,7 @@ We can Download chef development kit from its official web link and choose the r Verification of component 'chefspec' succeeded. Verification of component 'package installation' succeeded. 
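作为补充,如果想在脚本里自动确认上面 `chef verify` 输出中的各个组件是否全部通过,可以统计成功的行数。下面是一个示意性的小例子(其中的输出行摘自上文终端输出,并非完整的组件列表):

```shell
# 把上文 chef verify 输出中的几行保存为变量(示意数据)
output="Verification of component 'berkshelf' succeeded.
Verification of component 'chefspec' succeeded.
Verification of component 'package installation' succeeded."

# 统计检查的组件总数与成功的数目
total=$(printf '%s\n' "$output" | grep -c 'Verification of component')
ok=$(printf '%s\n' "$output" | grep -c 'succeeded')

if [ "$ok" -eq "$total" ]; then
    echo "all $total components verified"
fi
```

实际使用时可以把 `output` 换成 `chef verify` 的真实输出再做同样的比较。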
-**连接Chef服务端** +####4) 连接Chef服务端 我们将创建 ~/.chef并从chef服务端复制两个用户和组织的pem文件到chef的文件到这个目录下。 @@ -209,7 +185,7 @@ We can Download chef development kit from its official web link and choose the r kashi.pem 100% 1678 1.6KB/s 00:00 linux.pem 100% 1678 1.6KB/s 00:00 -** 编辑配置来管理chef环境 ** +####5) 编辑配置来管理chef环境 现在使用下面的内容创建"~/.chef/knife.rb"。 @@ -231,13 +207,13 @@ We can Download chef development kit from its official web link and choose the r root@ubuntu-15-WKS:/# mkdir cookbooks -**测试Knife配置** +####6) 测试Knife配置 运行“knife user list”和“knife client list”来验证knife是否在工作。 root@ubuntu-15-WKS:/.chef# knife user list -第一次运行的时候可能会得到下面的错误,这是因为工作站上还没有chef服务端的SSL证书。 +第一次运行的时候可能会看到下面的错误,这是因为工作站上还没有chef服务端的SSL证书。 ERROR: SSL Validation failure connecting to host: 172.25.10.173 - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed ERROR: Could not establish a secure connection to the server. @@ -245,24 +221,24 @@ We can Download chef development kit from its official web link and choose the r If your Chef Server uses a self-signed certificate, you can use `knife ssl fetch` to make knife trust the server's certificates. -要从上面的命令中恢复,运行下面的命令来获取ssl整数并重新运行knife user和client list,这时候应该就可以了。 +要从上面的命令中恢复,运行下面的命令来获取ssl证书,并重新运行knife user和client list,这时候应该就可以了。 root@ubuntu-15-WKS:/.chef# knife ssl fetch WARNING: Certificates from 172.25.10.173 will be fetched and placed in your trusted_cert directory (/.chef/trusted_certs). 
- knife没有办法验证这些是有效的证书。你应该在下载时候验证这些证书的真实性。 +knife没有办法验证这些是有效的证书。你应该在下载时候验证这些证书的真实性。 - 在/.chef/trusted_certs/ubuntu-14-chef_test_com.crt下面添加ubuntu-14-chef.test.com的证书。 +在/.chef/trusted_certs/ubuntu-14-chef_test_com.crt下面添加ubuntu-14-chef.test.com的证书。 在上面的命令取得ssl证书后,接着运行下面的命令。 root@ubuntu-15-WKS:/.chef#knife client list kashi-linux -### 与chef服务端交互的新的节点 ### +### 配置与chef服务端交互的新节点 ### -节点是执行所有设备自动化的chef客户端。因此是时侯添加新的服务端到我们的chef环境下,在配置完chef-server和knife工作站后配置新的节点与chef-server交互。 +节点是执行所有基础设施自动化的chef客户端。因此,在配置完chef-server和knife工作站后,通过配置新的与chef-server交互的节点,来添加新的服务端到我们的chef环境下。 我们使用下面的命令来添加新的节点与chef服务端工作。 @@ -291,16 +267,16 @@ We can Download chef development kit from its official web link and choose the r 172.25.10.170 to file /tmp/install.sh.26024/metadata.txt 172.25.10.170 trying wget... -之后我们可以在knife节点列表下看到新创建的节点,也会新节点列表下创建新的客户端。 +之后我们可以在knife节点列表下看到新创建的节点,它也会在新节点创建新的客户端。 root@ubuntu-15-WKS:~# knife node list mydns -相似地我们只要提供ssh证书通过上面的knife命令来创建多个节点到chef设备上。 +相似地我们只要提供ssh证书通过上面的knife命令,就可以在chef设施上创建多个节点。 ### 总结 ### -本篇我们学习了chef管理工具并通过安装和配置设置浏览了它的组件。我希望你在学习安装和配置Chef服务端以及它的工作站和客户端节点中获得乐趣。 +本篇我们学习了chef管理工具并通过安装和配置设置基本了解了它的组件。我希望你在学习安装和配置Chef服务端以及它的工作站和客户端节点中获得乐趣。 -------------------------------------------------------------------------------- @@ -308,7 +284,7 @@ via: http://linoxide.com/ubuntu-how-to/install-configure-chef-ubuntu-14-04-15-04 作者:[Kashif Siddique][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From eb0ff99810690c7d3078c6a85ddc7731ef544e3e Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 10 Aug 2015 10:57:27 +0800 Subject: [PATCH 113/697] =?UTF-8?q?=E6=B8=85=E9=99=A4=E9=94=99=E6=94=BE?= =?UTF-8?q?=E7=9A=84=E6=96=87=E4=BB=B6?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @xiqingongzi --- ...ntial Commands and System Documentation.md | 320 ------------------ 1 file 
changed, 320 deletions(-)
delete mode 100644 translated/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md

diff --git a/translated/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md b/translated/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md
deleted file mode 100644
index 93c2787c7e..0000000000
--- a/translated/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md
+++ /dev/null
@@ -1,320 +0,0 @@
-[translating by xiqingongzi]
-
-RHCSA系列: 复习基础命令及系统文档 – 第一部分
-================================================================================
-RHCSA(红帽认证系统管理员)是由 RedHat 公司举办的认证考试。RedHat 公司面向商业用户提供开源操作系统和软件,除此之外,还为这些企业和机构提供支持、训练以及咨询服务。
-
-![RHCSA Exam Guide](http://www.tecmint.com/wp-content/uploads/2015/02/RHCSA-Series-by-Tecmint.png)
-
-RHCSA 考试准备指南
-
-RHCSA 考试(考试编号 EX200)通过后可以获取由 Red Hat 公司颁发的证书。RHCSA 考试是 RHCT(红帽认证技师)的升级版,而且 RHCSA 必须在更新的 Red Hat Enterprise Linux(红帽企业版 Linux)环境下完成。RHCT 和 RHCSA 的主要区别是:RHCT 基于 RHEL5,而 RHCSA 基于 RHEL6 或 7;这两个认证的等级也有所不同。
-
-红帽认证管理员最基本的要求,是能够在红帽企业版 Linux 环境下执行如下系统管理任务:
-
-- 理解并会使用命令管理文件、目录、命令行环境以及系统/软件包的文档
-- 使用不同的启动等级启动系统,识别和控制进程,启动或停止虚拟机
-- 使用分区和逻辑卷管理本地存储
-- 创建并且配置本地文件系统和网络文件系统,设置它们的属性(权限、加密、访问控制列表)
-- 部署、配置并且控制系统,包括安装、升级和卸载软件
-- 管理系统用户和组,以及使用集中式的 LDAP 目录进行认证
-- 确保系统安全,包括基础的防火墙规则和 SELinux 配置
-
-关于你所在国家的考试注册和费用,请参考 [RHCSA Certification page][1](RHCSA 认证页面)。
-
-在这个共 15 章的 RHCSA(红帽认证管理员)备考系列中,我们将基于红帽企业版 Linux 7 讲解如下内容:
-
-- Part 1: 回顾必会的命令和系统文档
-- Part 2: 在RHEL7如何展示文件和管理目录
-- Part 3: 在RHEL7中如何管理用户和组
-- Part 4: 使用nano和vim管理命令/ 使用grep和正则表达式分析文本
-- Part 5: RHEL7的进程管理:启动、关机,以及介于二者之间的一切
-- Part 6: 使用 'Parted'和'SSM'来管理和加密系统存储
-- Part 7: 使用ACLs(访问控制列表)并挂载 Samba /NFS 文件分享
-- Part 8: 加固SSH,设置主机名并开启网络服务
-- Part 9: 安装、配置和加固一个Web,FTP服务器
-- Part 10: Yum 包管理方式,使用Cron进行自动任务管理以及监控系统日志
-- Part 11: 使用FirewallD和Iptables设置防火墙,控制网络流量
-- Part 12: 使用Kickstart 自动安装RHEL 7
-- Part 13: RHEL7:什么是SeLinux?它的原理是什么?
-- Part 14: 在RHEL7 中使用基于LDAP的权限控制
-- Part 15: RHEL7的虚拟化:KVM 和虚拟机管理
-
-在第一章中,我们讲解如何在终端或 Shell 窗口中以正确的语法输入和运行命令,并讲解如何查找、查看和使用系统文档。
-
-![RHCSA: Reviewing Essential Linux Commands – Part 1](http://www.tecmint.com/wp-content/uploads/2015/02/Reviewing-Essential-Linux-Commands.png)
-
-RHCSA:回顾必会的Linux命令 - 第一部分
-
-#### 前提: ####
-
-至少你要熟悉如下命令:
-
-- [cd command][2] (改变目录)
-- [ls command][3] (列举文件)
-- [cp command][4] (复制文件)
-- [mv command][5] (移动或重命名文件)
-- [touch command][6] (创建一个新的文件或更新已存在文件的时间戳)
-- rm command (删除文件)
-- mkdir command (创建目录)
-
-在这篇文章中,你会了解到这些命令更多的正确用法和特殊用法。
-
-虽然这并不是严格的要求,但为了便于实践本文讨论的常用 Linux 命令和方法,你最好安装一套 RHEL7,亲手尝试文中提到的命令,这会让你学习起来更省力。
-
-- [红帽企业版Linux(RHEL)7 安装指南][7]
-
-### 使用Shell进行交互 ###
-
-如果我们以文本模式登录 Linux,在默认的 shell 中就无法使用鼠标;另一方面,如果我们使用图形界面登录,则会通过启动一个终端来打开 shell。无论哪种方式,我们都会看到命令提示符,可以在其后输入并执行命令(当按下 Enter 时,命令就会被执行)。
-
-一条命令由两个部分组成:
-
-- 命令本身
-- 参数
-
-某些参数称为选项(通常以一个连字符开头),它们会改变命令作用于其他参数的行为。
-
-type 命令可以帮助我们识别某个特定的命令是 shell 内建的,还是由单独的软件包提供的。这一区别决定了到哪里查找该命令的更多信息:shell 内建命令要查看 shell 的手册页,而其他命令则要查看它自己的手册页。
-
-![Check Shell built in Commands](http://www.tecmint.com/wp-content/uploads/2015/02/Check-shell-built-in-Commands.png)
-
-检查Shell的内建命令
-
-在上面的例子中,cd 和 type 是 shell 内建的命令,top 和 less 是由其他的二进制文件提供的(在这种情况下,type 将返回命令的位置)。
-
-其他的内建命令:
-
-- [echo command][8]: 展示字符串
-- [pwd command][9]: 输出当前的工作目录
-
-![More Built in Shell Commands](http://www.tecmint.com/wp-content/uploads/2015/02/More-Built-in-Shell-Commands.png)
-
-更多内建命令
-
-**exec 命令**
-
-运行我们指定的外部程序。请注意,通常只需输入想要运行的程序名即可,而 exec 命令的特殊之处在于:它会用该程序替换当前 shell,而不是创建新的进程,这一点可以用随后的命令来验证。
-
- # ps -ef | grep [shell 进程的PID]
-
-当用 exec 启动的新进程结束时,Shell 会话也会随之结束。运行 exec top,然后按下 q 键退出 top,你会注意到 shell 会话也结束了,如下面的屏幕录像展示的那样:
-
-注:youtube视频
-
-**export 命令**
-
-把变量导出到之后执行的命令的环境中。
-
-**history 命令**
-
-展示之前执行过的历史命令。输入感叹号加上命令编号可以再次执行这个命令。如果我们需要编辑历史列表中的命令,可以按下 Ctrl + r 并输入与该命令相关的第一个字符。
-当看到命令自动补全出来后,我们可以根据当前的需要来编辑它:
-
-注:youtube视频
-
-命令列表会保存在一个叫 .bash_history 的文件里。history 命令是一个非常有用的、可以减少输入次数的工具,特别是在进行命令行编辑的时候。默认情况下,bash 保留最后输入的 500 个命令,不过可以通过修改 HISTSIZE 环境变量来增加:
-
-![Linux history Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-history-Command.png)
-
-Linux history 命令
-
-但上述变化在下一次启动时不会被保留。为了让 HISTSIZE 变量的变化持久生效,我们需要手工编辑配置文件:
-
- # 设置history请看 HISTSIZE 和 HISTFILESIZE 在 bash(1)的文档
- HISTSIZE=1000
-
-**重要**: 我们的更改不会生效,除非我们重新启动 shell 会话
-
-**alias 命令**
-
-不带参数或使用 -p 参数运行时,会以“名称=值”的标准形式输出别名列表。当提供了参数时,就会按照给定的名称和值定义一个别名。
-
-使用 alias,我们可以创建我们自己的命令,或者用需要的参数修改现有的命令。举个例子,假设我们想把 ls 别名为 ls --color=auto,这样就可以用不同颜色输出文件、目录、链接:
-
- # alias ls='ls --color=auto'
-
-![Linux alias Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-alias-Command.png)
-
-Linux 别名命令
-
-**Note**: 你可以给你的新命令起任何名字,并且用单引号把任意多条命令括起来作为它的内容,但此时你要用分号把这些命令分隔开。
-
- # alias myNewCommand='cd /usr/bin; ls; cd; clear'
-
-**exit 命令**
-
-exit 和 logout 命令都可以退出 shell。exit 可以退出任何 shell,而 logout 只能注销登录 shell(即以文本模式登录时自动启动的那种 shell)。
-
-如果我们对某个程序有疑问,可以查看它的手册页,使用 man 命令调出它。此外,还有一些重要的文件(inittab、fstab、hosts 等等)、库函数、shell、设备及其他功能也有手册页。
-
-#### 举例: ####
-
-- man uname (输出系统信息,如内核名称、处理器、操作系统类型、架构等)
-- man inittab (init 守护进程的设置)
-
-另外一个重要的信息来源是 info 命令,info 命令常常被用来读取 info 文档。这些文档往往比手册页提供更多的信息。通过 info 加上关键词来查看某个命令的信息:
-
- # info ls
- # info cut
-
-另外,/usr/share/doc 文件夹包含了大量的子目录,里面可以找到大量的文档,以文本文件或其他友好的格式保存。
-确保使用这三种方法去查找命令的信息,并重点关注每个命令文档中介绍的详细语法。
-
-**使用expand命令把tabs转换为空格**
-
-有时候文本文档中包含了 tab,但是有些程序无法很好地处理 tab;或者我们只是简单地希望将 tab 转换成空格。这就是 expand(由 GNU coreutils 提供)工具存在的原因。
-
-举个例子,给定一个文件 NumberList.txt,让我们使用 expand 处理它,将 tab 转换为单个空格,并输出到标准输出:
-
- # expand --tabs=1 NumbersList.txt
-
-![Linux expand Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-expand-Command.png)
-
-Linux expand 命令
-
-unexpand 命令可以实现相反的功能(将空格转为 tab)。
-
-**使用head输出文件首行及使用tail输出文件尾行**
-
-通常情况下,head 命令后跟着文件名时,将会输出该文件的前十行,我们可以通过 -n 参数来自定义具体的行数。
-
- # head -n3 /etc/passwd
- # tail -n3 /etc/passwd
-
-![Linux head and tail Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-head-and-tail-Command.png)
-
-Linux 的 head 和 tail 命令
-
-tail 最有意思的一个特性,是能够持续显示追加到文件末尾的新内容(tail -f my.log):文件每新增一行就会立刻显示一行,就像我们在实时观察它一样。这在监控一个持续增长的日志文件时非常有用。
-
-更多: [Manage Files Effectively using head and tail Commands][10]
-
-**使用paste合并文本文件**
-
-paste 命令一行一行地合并多个文件,默认以 tab 分隔来自各个文件的内容,也可以通过 -d 自定义其他分隔符(下面的例子使用等号作为分隔符):
-
- # paste -d= file1 file2
-
-![Merge Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Merge-Files-in-Linux-with-paste-command.png)
-
-在 Linux 中合并文件
-
-**使用split命令将文件分块**
-
-split 命令常用于把一个文件切割成两个或多个以我们自定义的前缀命名的文件。切割的依据可以是大小、区块数或行数,生成的文件会带有数字或字母的后缀。在下面的例子中,我们将切割 bash.pdf,每个分块 50KB(-b 50KB),并使用数字后缀命名(-d):
-
- # split -b 50KB -d bash.pdf bash_
-
-![Split Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Split-Files-in-Linux-with-split-command.png)
-
-在Linux下划分文件
-
-你可以使用如下命令来合并这些文件,重新生成源文件:
-
- # cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf
-
-**使用tr命令改变字符**
-
-tr 命令多用于逐个转换(替换)字符,也可以按字符范围转换。和之前一样,下面的实例将使用同样的文件 file2,我们将实现:
-
-- 把小写字母 o 变成大写
-- 把所有的小写字母都变成大写字母
-
- # cat file2 | tr o O
- # cat file2 | tr [a-z] [A-Z]
-
-![Translate Characters in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Translate-characters-in-Linux-with-tr-command.png)
-
-在Linux中替换文字
-
-**使用uniq和sort检查或删除重复的文字**
-
-uniq 命令可以帮我们找出或删除文件中重复的行,默认写出到标准输出。需要注意的是,uniq 只能检测出相邻的相同的行,所以 uniq 往往和 sort 一起使用(sort 一般用于对文本文件的内容进行排序)。
-
-默认情况下,sort 以第一个字段(以空格分隔)作为排序关键字。想要指定其他字段作为关键字,我们需要使用 -k 参数。请注意下面的例子中 sort 和 uniq 是如何配合输出我们想要的结果的:
-
- # cat file3
- # sort file3 | uniq
- # sort -k2 file3 | uniq
- # sort -k3 file3 | uniq
-
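下面是一个可以直接运行的小演示(其中的三列示例数据是虚构的,并非原文用到的 file3),用来体会 sort 与 uniq 的配合,以及 -k 按字段排序:

```shell
# 构造一份虚构的示例数据(三列:姓名 部门 工号),模拟上文的 file3
cat > /tmp/demo_file3.txt <<'EOF'
tom it 102
ann hr 101
tom it 102
bob hr 103
EOF

# uniq 只能去掉相邻的重复行,因此先用 sort 让相同的行相邻
sort /tmp/demo_file3.txt | uniq

# -k2 表示从第 2 个字段(部门)开始作为排序关键字
sort -k2 /tmp/demo_file3.txt | uniq
```

可以看到,如果去掉 sort 这一步,两条相同的 tom 记录并不相邻,uniq 就无法把它们合并成一条。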
-![删除文件中重复的行](http://www.tecmint.com/wp-content/uploads/2015/02/Remove-Duplicate-Lines-in-file.png)
-
-删除文件中重复的行
-
-**从文件中提取文本的命令**
-
-cut 命令可以基于字节数(-b)、字符数(-c)或字段(-f)从标准输入或文件中提取部分文本,并把结果输出到标准输出。
-
-当我们按字段切割时,默认的分隔符是 tab,不过你可以通过 -d 参数来自定义分隔符。
-
- # cut -d: -f1,3 /etc/passwd # 这个例子提取了第一个和第三个字段的文本
- # cut -d: -f2-4 /etc/passwd # 这个例子提取了第二个到第四个字段的文本
-
-![从文件中提取文本](http://www.tecmint.com/wp-content/uploads/2015/02/Extract-Text-from-a-file.png)
-
-从文件中提取文本
-
-注意,上面两个命令输出的结果十分简洁。
-
-**使用fmt命令重新格式化文件**
-
-fmt 用于“清理”内容杂乱或充满空白、缩进的文件。重新格式化后,默认每行不超过 75 个字符宽;你可以通过 -w(width,宽度)参数改变这个设定,把行宽设置为指定的数值。
-
-举个例子,来看看用 fmt 把 /etc/passwd 重新格式化为 100 个字符宽时会发生什么。同样,输出变得更加简洁:
-
- # fmt -w100 /etc/passwd
-
-![File Reformatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Reformatting-in-Linux-with-fmt-command.png)
-
-Linux文件重新格式化
-
-**使用pr命令格式化打印内容**
-
-pr 可以对一个或多个文件进行分页、分栏,以便于打印。换句话说,用 pr 格式化一个文件,可以让它打印出来时更美观。举个例子,下面这个命令:
-
- # ls -a /etc | pr -n --columns=3 -h "Files in /etc"
-
-以友好的排版方式(3 列)输出 /etc 下的文件,并自定义了页眉(通过 -h 选项实现)和行号(-n)。
-
-![File Formatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Formatting-in-Linux-with-pr-command.png)
-
-Linux的文件格式
-
-### 总结 ###
-
-在这篇文章中,我们讨论了如何在 Shell 或终端中以正确的语法输入和执行命令,并解释了如何查找、查看和使用系统文档。正如你所看到的,这并不难,而它正是你成为 RHCSA 道路上的第一大步。
-
-如果你想补充一些其他的、你在日常工作中经常使用并能有效帮你完成任务的基础命令,并乐于分享它们,请在下方留言。也欢迎提出问题。我们期待您的回复。
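作为对上文 cut 用法的一个可运行的补充示例(示例数据是仿照 /etc/passwd 的格式虚构的,不会读取或改动真实系统文件):

```shell
# 虚构两行 passwd 风格的记录(字段以 : 分隔)
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'alice:x:1000:1000:Alice:/home/alice:/bin/bash' > /tmp/demo_passwd.txt

# -d 指定分隔符,-f 选择字段:取第 1 和第 7 个字段(用户名和 shell)
# 输出:root:/bin/bash 与 alice:/bin/bash
cut -d: -f1,7 /tmp/demo_passwd.txt

# 取第 3 到第 4 个字段(UID 和 GID)
cut -d: -f3-4 /tmp/demo_passwd.txt
```

注意 cut 在输出中仍然使用 -d 指定的分隔符来连接被选中的各个字段。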
- - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:https://www.redhat.com/en/services/certification/rhcsa -[2]:http://www.tecmint.com/cd-command-in-linux/ -[3]:http://www.tecmint.com/ls-command-interview-questions/ -[4]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/ -[5]:http://www.tecmint.com/rename-multiple-files-in-linux/ -[6]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/ -[7]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/ -[8]:http://www.tecmint.com/echo-command-in-linux/ -[9]:http://www.tecmint.com/pwd-command-examples/ -[10]:http://www.tecmint.com/view-contents-of-file-in-linux/ From 0e4b77320242cc007f4cc6ee17a72b6f6dc824de Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 10 Aug 2015 12:38:55 +0800 Subject: [PATCH 114/697] =?UTF-8?q?20150810-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20150810 For Linux, Supercomputers R Us.md | 59 +++++++++++++++++++ 1 file changed, 59 insertions(+) create mode 100644 sources/talk/20150810 For Linux, Supercomputers R Us.md diff --git a/sources/talk/20150810 For Linux, Supercomputers R Us.md b/sources/talk/20150810 For Linux, Supercomputers R Us.md new file mode 100644 index 0000000000..7bc48125f0 --- /dev/null +++ b/sources/talk/20150810 For Linux, Supercomputers R Us.md @@ -0,0 +1,59 @@ +For Linux, Supercomputers R Us +================================================================================ +![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia 
Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) +Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons + +> Almost all supercomputers run Linux, including the ones built from Raspberry Pi boards and PlayStation 3 game consoles + +Supercomputers are serious things, called on to do serious computing. They tend to be engaged in serious pursuits like atomic bomb simulations, climate modeling and high-level physics. Naturally, they cost serious money. At the very top of the latest [Top500][1] supercomputer ranking is the Tianhe-2 supercomputer at China’s National University of Defense Technology. It cost about $390 million to build. + +But then there’s the supercomputer that Joshua Kiepert, a doctoral student at Boise State’s Electrical and Computer Engineering department, [created with Raspberry Pi computers][2].It cost less than $2,000. + +No, I’m not making that up. It’s an honest-to-goodness supercomputer made from overclocked 1-GHz [Model B Raspberry Pi][3] ARM11 processors with Videocore IV GPUs. Each one comes with 512MB of RAM, a pair of USB ports and a 10/100 BaseT Ethernet port. + +And what do the Tianhe-2 and the Boise State supercomputer have in common? They both run Linux. As do [486 out of the world’s fastest 500 supercomputers][4]. It’s part of a domination of the category that began over 20 years ago. And now it’s trickling down to built-on-the-cheap supercomputers. Because Kiepert’s machine isn’t the only budget number cruncher out there. + +Gaurav Khanna, an associate professor of physics at the University of Massachusetts Dartmouth, created a [supercomputer with something shy of 200 PlayStation 3 video game consoles][5]. + +The PlayStations are powered by a 3.2-GHz PowerPC-based Power Processing Element. Each comes with 512MB of RAM. You can still buy one, although Sony will be phasing them out by year’s end, for just over $200. 
Khanna started with only 16 PlayStation 3s for his first supercomputer, so you too could put a supercomputer on your credit card for less than four grand. + +These machines may be built from toys, but they’re not playthings. Khanna has done serious astrophysics on his rig. A white-hat hacking group used a similar [PlayStation 3 supercomputer in 2008 to crack the SSL MD5 hashing algorithm][6] in 2008. + +Two years later, the Air Force Research Laboratory [Condor Cluster was using 1,760 Sony PlayStation 3 processors][7] and 168 general-purpose graphical processing units. This bargain-basement supercomputer runs at about 500TFLOPs, or 500 trillion floating point operations per second. + +Other cheap options for home supercomputers include specialist parallel-processing boards such as the [$99 credit-card-sized Parallella board][8], and high-end graphics boards such as [Nvidia’s Titan Z][9] and [AMD’s FirePro W9100][10]. Those high-end boards, coveted by gamers with visions of a dream machine or even a chance at winning the first-place prize of over $100,000 in the [Intel Extreme Masters World Championship League of][11] [Legends][12], cost considerably more, retailing for about $3,000. On the other hand, a single one can deliver over 2.5TFLOPS all by itself, and for scientists and researchers, they offer an affordable way to get a supercomputer they can call their own. + +As for the Linux connection, that all started in 1994 at the Goddard Space Flight Center with the first [Beowulf supercomputer][13]. + +By our standards, there wasn’t much that was super about the first Beowulf. But in its day, the first homemade supercomputer, with its 16 Intel 486DX processors and 10Mbps Ethernet for the bus, was great. [Beowulf, designed by NASA contractors Don Becker and Thomas Sterling][14], was the first “maker” supercomputer. Its “compute components,” 486DX PCs, cost only a few thousand dollars. 
While its speed was only in single-digit gigaflops, [Beowulf][15] showed you could build supercomputers from commercial off-the-shelf (COTS) hardware and Linux. + +I wish I’d had a part in its creation, but I’d already left Goddard by 1994 for a career as a full-time technology journalist. Darn it! + +But even from this side of my reporter’s notebook, I can still appreciate how COTS and open-source software changed supercomputing forever. I hope you can too. Because, whether it’s a cluster of Raspberry Pis or a monster with over 3 million Intel Ivy Bridge and Xeon Phi chips, almost all of today’s supercomputers trace their ancestry to Beowulf. + +-------------------------------------------------------------------------------- + +via: + +作者:[Steven J. Vaughan-Nichols][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ +[1]:http://www.top500.org/ +[2]:http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/ +[3]:https://www.raspberrypi.org/products/model-b/ +[4]:http://www.zdnet.com/article/linux-still-rules-supercomputing/ +[5]:http://www.nytimes.com/2014/12/23/science/an-economical-way-to-save-progress.html?smid=fb-nytimes&smtyp=cur&bicmp=AD&bicmlukp=WT.mc_id&bicmst=1409232722000&bicmet=1419773522000&_r=4 +[6]:http://www.computerworld.com/article/2529932/cybercrime-hacking/researchers-hack-verisign-s-ssl-scheme-for-securing-web-sites.html +[7]:http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html +[8]:http://www.zdnet.com/article/parallella-the-99-linux-supercomputer/ +[9]:http://blogs.nvidia.com/blog/2014/03/25/titan-z/ +[10]:http://www.amd.com/en-us/press-releases/Pages/amd-flagship-professional-2014apr7.aspx +[11]:http://en.intelextrememasters.com/news/check-out-the-intel-extreme-masters-katowice-prize-money-distribution/ 
+[12]:http://www.google.com/url?q=http%3A%2F%2Fen.intelextrememasters.com%2Fnews%2Fcheck-out-the-intel-extreme-masters-katowice-prize-money-distribution%2F&sa=D&sntz=1&usg=AFQjCNE6yoAGGz-Hpi2tPF4gdhuPBEckhQ +[13]:http://www.beowulf.org/overview/history.html +[14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html +[15]:http://www.beowulf.org/ \ No newline at end of file From e58cee17af9223454ed0cead2fa9c52f8027eaaf Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 10 Aug 2015 14:31:12 +0800 Subject: [PATCH 115/697] PUB:20150717 How to collect NGINX metrics - Part 2 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @strugglingyouth 第二篇也发了~~等第三篇哦 --- ...7 How to collect NGINX metrics - Part 2.md | 178 +++++++++++++ ...7 How to collect NGINX metrics - Part 2.md | 237 ------------------ 2 files changed, 178 insertions(+), 237 deletions(-) create mode 100644 published/20150717 How to collect NGINX metrics - Part 2.md delete mode 100644 translated/tech/20150717 How to collect NGINX metrics - Part 2.md diff --git a/published/20150717 How to collect NGINX metrics - Part 2.md b/published/20150717 How to collect NGINX metrics - Part 2.md new file mode 100644 index 0000000000..f1acf82a35 --- /dev/null +++ b/published/20150717 How to collect NGINX metrics - Part 2.md @@ -0,0 +1,178 @@ + +如何收集 NGINX 指标(第二篇) +================================================================================ +![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png) + +### 如何获取你所需要的 NGINX 指标 ### + +如何获取需要的指标取决于你正在使用的 NGINX 版本以及你希望看到哪些指标。(参见 [如何监控 NGINX(第一篇)][1] 来深入了解NGINX指标。)自由开源的 NGINX 和商业版的 NGINX Plus 都有可以报告指标度量的状态模块,NGINX 也可以在其日志中配置输出特定指标: + +**指标可用性** + +| 指标 | [NGINX (开源)](https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source) | [NGINX Plus](https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus) | [NGINX 日志](https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#logs)| +|-----|------|-------|-----| 
+|accepts(接受) / accepted(已接受)|x|x| | +|handled(已处理)|x|x| | +|dropped(已丢弃)|x|x| | +|active(活跃)|x|x| | +|requests (请求数)/ total(全部请求数)|x|x| | +|4xx 代码||x|x| +|5xx 代码||x|x| +|request time(请求处理时间)|||x| + +#### 指标收集:NGINX(开源版) #### + +开源版的 NGINX 会在一个简单的状态页面上显示几个与服务器状态有关的基本指标,它们由你启用的 HTTP [stub status module][2] 所提供。要检查该模块是否已启用,运行以下命令: + + nginx -V 2>&1 | grep -o with-http_stub_status_module + +如果你看到终端输出了 **http_stub_status_module**,说明该状态模块已启用。 + +如果该命令没有输出,你需要启用该状态模块。你可以在[从源代码构建 NGINX ][3]时使用 `--with-http_stub_status_module` 配置参数: + + ./configure \ + … \ + --with-http_stub_status_module + make + sudo make install + +在验证该模块已经启用或你自己启用它后,你还需要修改 NGINX 配置文件,来给状态页面设置一个本地可访问的 URL(例如: /nginx_status): + + server { + location /nginx_status { + stub_status on; + + access_log off; + allow 127.0.0.1; + deny all; + } + } + +注:nginx 配置中的 server 块通常并不放在主配置文件中(例如:/etc/nginx/nginx.conf),而是放在主配置会加载的辅助配置文件中。要找到主配置文件,首先运行以下命令: + + nginx -t + +打开列出的主配置文件,在以 http 块结尾的附近查找以 include 开头的行,如: + + include /etc/nginx/conf.d/*.conf; + +在其中一个包含的配置文件中,你应该会找到主 **server** 块,你可以如上所示配置 NGINX 的指标输出。更改任何配置后,通过执行以下命令重新加载配置文件: + + nginx -s reload + +现在,你可以浏览状态页看到你的指标: + + Active connections: 24 + server accepts handled requests + 1156958 1156958 4491319 + Reading: 0 Writing: 18 Waiting : 6 + +请注意,如果你希望从远程计算机访问该状态页面,则需要将远程计算机的 IP 地址添加到你的状态配置文件的白名单中,在上面的配置文件中的白名单仅有 127.0.0.1。 + +NGINX 的状态页面是一种快速查看指标状况的简单方法,但当连续监测时,你需要按照标准间隔自动记录该数据。监控工具箱 [Nagios][4] 或者 [Datadog][5],以及收集统计信息的服务 [collectD][6] 已经可以解析 NGINX 的状态信息了。 + +#### 指标收集: NGINX Plus #### + +商业版的 NGINX Plus 通过它的 ngx_http_status_module 提供了比开源版 NGINX [更多的指标][7]。NGINX Plus 以字节流的方式提供这些额外的指标,提供了关于上游系统和高速缓存的信息。NGINX Plus 也会报告所有的 HTTP 状态码类型(1XX,2XX,3XX,4XX,5XX)的计数。一个 NGINX Plus 状态报告例子[可在此查看][8]: + +![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png) + +注:NGINX Plus 在状态仪表盘中的“Active”连接的定义和开源 NGINX 通过 stub_status_module 收集的“Active”连接指标略有不同。在 NGINX Plus 指标中,“Active”连接不包括Waiting状态的连接(即“Idle”连接)。 + +NGINX Plus 也可以输出 
[JSON 格式的指标][9],可以用于集成到其他监控系统。在 NGINX Plus 中,你可以看到 [给定的上游服务器组][10]的指标和健康状况,或者简单地从上游服务器的[单个服务器][11]得到响应代码的计数: + + {"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055} + +要启动 NGINX Plus 指标仪表盘,你可以在 NGINX 配置文件的 http 块内添加状态 server 块。 (参见上一节,为收集开源版 NGINX 指标而如何查找相关的配置文件的说明。)例如,要设置一个状态仪表盘 (http://your.ip.address:8080/status.html)和一个 JSON 接口(http://your.ip.address:8080/status),可以添加以下 server 块来设定: + + server { + listen 8080; + root /usr/share/nginx/html; + + location /status { + status; + } + + location = /status.html { + } + } + +当你重新加载 NGINX 配置后,状态页就可以用了: + + nginx -s reload + +关于如何配置扩展状态模块,官方 NGINX Plus 文档有 [详细介绍][13] 。 + +#### 指标收集:NGINX 日志 #### + +NGINX 的 [日志模块][14] 会把可自定义的访问日志写到你配置的指定位置。你可以通过[添加或移除变量][15]来自定义日志的格式和包含的数据。要存储详细的日志,最简单的方法是添加下面一行在你配置文件的 server 块中(参见上上节,为收集开源版 NGINX 指标而如何查找相关的配置文件的说明。): + + access_log logs/host.access.log combined; + +更改 NGINX 配置文件后,执行如下命令重新加载配置文件: + + nginx -s reload + +默认包含的 “combined” 的日志格式,会包括[一系列关键的数据][17],如实际的 HTTP 请求和相应的响应代码。在下面的示例日志中,NGINX 记录了请求 /index.html 时的 200(成功)状态码和访问不存在的请求文件 /fail 的 404(未找到)错误。 + + 127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari 537.36" + + 127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" + +你可以通过在 NGINX 配置文件中的 http 块添加一个新的日志格式来记录请求处理时间: + + log_format nginx '$remote_addr - $remote_user [$time_local] ' + '"$request" $status $body_bytes_sent $request_time ' + '"$http_referer" "$http_user_agent"'; + +并修改配置文件中 **server** 块的 access_log 行: + + access_log logs/host.access.log nginx; + +重新加载配置文件后(运行 `nginx -s reload`),你的访问日志将包括响应时间,如下所示。单位为秒,精度到毫秒。在这个例子中,服务器接收到一个对 /big.pdf 的请求时,发送 33973115 字节后返回 206(成功)状态码。处理请求用时 0.202 秒(202毫秒): + + 127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 
33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" + +你可以使用各种工具和服务来解析和分析 NGINX 日志。例如,[rsyslog][18] 可以监视你的日志,并将其传递给多个日志分析服务;你也可以使用自由开源工具,比如 [logstash][19] 来收集和分析日志;或者你可以使用一个统一日志记录层,如 [Fluentd][20] 来收集和解析你的 NGINX 日志。 + +### 结论 ### + +监视 NGINX 的哪一项指标将取决于你可用的工具,以及监控指标所提供的信息是否满足你们的需要。举例来说,错误率的收集是否足够重要到需要你们购买 NGINX Plus ,还是架设一个可以捕获和分析日志的系统就够了? + +在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,这样你就可以以最小的设置来收集和监控所有 Web 服务器的指标。[在本文中][21]了解如何用 NGINX Datadog 来监控 ,并开始 [Datadog 的免费试用][22]吧。 + + +-------------------------------------------------------------------------------- + +via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ + +作者:K Young +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ +[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html +[3]:http://wiki.nginx.org/InstallOptions +[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx +[5]:http://docs.datadoghq.com/integrations/nginx/ +[6]:https://collectd.org/wiki/index.php/Plugin:nginx +[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data +[8]:http://demo.nginx.com/status.html +[9]:http://demo.nginx.com/status +[10]:http://demo.nginx.com/status/upstreams/demoupstreams +[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses +[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source +[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example +[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html +[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format +[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source +[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format 
+[18]:http://www.rsyslog.com/
+[19]:https://www.elastic.co/products/logstash
+[20]:http://www.fluentd.org/
+[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
+[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up
+[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md
+[24]:https://github.com/DataDog/the-monitor/issues

diff --git a/translated/tech/20150717 How to collect NGINX metrics - Part 2.md b/translated/tech/20150717 How to collect NGINX metrics - Part 2.md
deleted file mode 100644
index 848042bf2c..0000000000
--- a/translated/tech/20150717 How to collect NGINX metrics - Part 2.md
+++ /dev/null
@@ -1,237 +0,0 @@
-
-如何收集NGINX指标 - 第2部分
-================================================================================
-![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png)
-
-### 如何获取你所需要的NGINX指标 ###
-
-如何获取需要的指标取决于你正在使用的 NGINX 版本。(参见 [the companion article][1] 将深入探索NGINX指标。)免费,开源版的 NGINX 和商业版的 NGINX 都有指标度量的状态模块,NGINX 也可以在其日志中配置指标模块:
-
-注:表格
-
-| Metric | NGINX (open-source) | NGINX Plus | NGINX logs |
-|-----|-----|-----|-----|
-| accepts / accepted | x | x | |
-| handled | x | x | |
-| dropped | x | x | |
-| active | x | x | |
-| requests / total | x | x | |
-| 4xx codes | | x | x |
-| 5xx codes | | x | x |
-| request time | | | x |
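上表中的 accepts、handled 等指标之间存在简单的数量关系,例如被丢弃的连接数 dropped = accepts − handled。下面用 awk 从一份 stub_status 样本输出(数字取自上文的示例)中提取这几个值。这只是一个演示脚本,并非 NGINX 自带的工具:

```shell
# 上文示例中的 stub_status 输出样本
status='Active connections: 24
server accepts handled requests
 1156958 1156958 4491319
Reading: 0 Writing: 18 Waiting: 6'

# 第 3 行依次是 accepts、handled、requests;dropped = accepts - handled
echo "$status" | awk 'NR==3 {
    printf "accepts=%s handled=%s requests=%s dropped=%d\n", $1, $2, $3, $1 - $2
}'
```

在这份样本里 accepts 与 handled 相等,因此 dropped 为 0,说明没有连接因为资源不足而被丢弃。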
- -#### 指标收集:NGINX(开源版) #### - -开源版的 NGINX 会显示几个与服务器状态有关的指标在状态页面上,只要你启用了 HTTP [stub status module][2] 。要检查模块是否被加载,运行以下命令: - - nginx -V 2>&1 | grep -o with-http_stub_status_module - -如果你看到 http_stub_status_module 被输出在终端,说明状态模块已启用。 - -如果该命令没有输出,你需要启用状态模块。你可以使用 --with-http_stub_status_module 参数去配置 [building NGINX from source][3]: - - ./configure \ - … \ - --with-http_stub_status_module - make - sudo make install - -验证模块已经启用或你自己启用它后,你还需要修改 NGINX 配置文件为状态页面设置本地访问的 URL(例如,/ nginx_status): - - server { - location /nginx_status { - stub_status on; - - access_log off; - allow 127.0.0.1; - deny all; - } - } - -注:nginx 配置中的 server 块通常并不在主配置文件中(例如,/etc/nginx/nginx.conf),但主配置中会加载补充的配置文件。要找到主配置文件,首先运行以下命令: - - nginx -t - -打开主配置文件,在以 http 模块结尾的附近查找以 include 开头的行包,如: - - include /etc/nginx/conf.d/*.conf; - -在所包含的配置文件中,你应该会找到主服务器模块,你可以如上所示修改 NGINX 的指标报告。更改任何配置后,通过执行以下命令重新加载配置文件: - - nginx -s reload - -现在,你可以查看指标的状态页: - - Active connections: 24 - server accepts handled requests - 1156958 1156958 4491319 - Reading: 0 Writing: 18 Waiting : 6 - -请注意,如果你正试图从远程计算机访问状态页面,则需要将远程计算机的 IP 地址添加到你的状态配置文件的白名单中,在上面的配置文件中 127.0.0.1 仅在白名单中。 - -nginx 的状态页面是一中查看指标快速又简单的方法,但当连续监测时,你需要每隔一段时间自动记录该数据。然后通过监控工具箱 [Nagios][4] 或者 [Datadog][5],以及收集统计信息的服务 [collectD][6] 来分析已保存的 NGINX 状态信息。 - -#### 指标收集: NGINX Plus #### - -商业版的 NGINX Plus 通过 ngx_http_status_module 提供的可用指标比开源版 NGINX 更多 [many more metrics][7] 。NGINX Plus 附加了更多的字节流指标,以及负载均衡系统和高速缓存的信息。NGINX Plus 还报告所有的 HTTP 状态码类型(1XX,2XX,3XX,4XX,5XX)的计数。一个简单的 NGINX Plus 状态报告 [here][8]。 - -![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png) - -*注: NGINX Plus 在状态仪表盘"Active”连接定义的收集指标的状态模块和开源 NGINX 的略有不同。在 NGINX Plus 指标中,活动连接不包括等待状态(又叫空闲连接)连接。* - -NGINX Plus 也集成了其他监控系统的报告 [JSON格式指标][9] 。用 NGINX Plus 时,你可以看到 [负载均衡服务器组的][10]指标和健康状况,或着再向下能取得的仅是响应代码计数[从单个服务器][11]在负载均衡服务器中: - {"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055} - -启动 NGINX Plus 指标仪表盘,你可以在 NGINX 配置文件的 http 块内添加状态 server 块。 
([参见上一页][12]查找相关的配置文件,收集开源 NGINX 版指标的说明。)例如,设立以下一个状态仪表盘在http://your.ip.address:8080/status.html 和一个 JSON 接口 http://your.ip.address:8080/status,可以添加以下 server block 来设定: - - server { - listen 8080; - root /usr/share/nginx/html; - - location /status { - status; - } - - location = /status.html { - } - } - -一旦你重新加载 NGINX 配置,状态页就会被加载: - - nginx -s reload - -关于如何配置扩展状态模块,官方 NGINX Plus 文档有 [详细介绍][13] 。 - -#### 指标收集:NGINX日志 #### - -NGINX 的 [日志模块][14] 写到配置可以自定义访问日志到指定文件。你可以自定义日志的格式和时间通过 [添加或移除变量][15]。捕获日志的详细信息,最简单的方法是添加下面一行在你配置文件的server 块中(参见[此节][16] 通过加载配置文件的信息来收集开源 NGINX 的指标): - - access_log logs/host.access.log combined; - -更改 NGINX 配置文件后,必须要重新加载配置文件: - - nginx -s reload - -“combined” 的日志格式,只包含默认参数,包括[一些关键数据][17],如实际的 HTTP 请求和相应的响应代码。在下面的示例日志中,NGINX 记录了200(成功)状态码当请求 /index.html 时和404(未找到)错误不存在的请求文件 /fail。 - - 127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari 537.36" - - 127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" - -你可以记录请求处理的时间通过添加一个新的日志格式在 NGINX 配置文件中的 http 块: - - log_format nginx '$remote_addr - $remote_user [$time_local] ' - '"$request" $status $body_bytes_sent $request_time ' - '"$http_referer" "$http_user_agent"'; - -通过修改配置文件中 server 块的 access_log 行: - - access_log logs/host.access.log nginx; - -重新加载配置文件(运行 nginx -s reload)后,你的访问日志将包括响应时间,如下图所示。单位为秒,毫秒。在这种情况下,服务器接收 /big.pdf 的请求时,发送33973115字节后返回206(成功)状态码。处理请求用时0.202秒(202毫秒): - - 127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" - -你可以使用各种工具和服务来收集和分析 NGINX 日志。例如,[rsyslog][18] 
可以监视你的日志,并将其传递给多个日志分析服务;你也可以使用免费的开源工具,如[logstash][19]来收集和分析日志;或者你可以使用一个统一日志记录层,如[Fluentd][20]来收集和分析你的 NGINX 日志。 - -### 结论 ### - -监视 NGINX 的哪一项指标将取决于你提供的工具,以及是否由给定指标证明监控指标的开销。例如,通过收集和分析日志来定位问题是非常重要的在 NGINX Plus 或者 运行的系统中。 - -在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,这样你就可以以最小的设置来收集和监控所有 Web 服务器的指标。了解如何用 NGINX Datadog来监控 [在本文中][21],并开始使用 [免费的Datadog][22]。 - ----------- - -原文在这 [on GitHub][23]。问题,更正,补充等?请[让我们知道][24]。 - --------------------------------------------------------------------------------- - -via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ - -作者:K Young -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ -[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html -[3]:http://wiki.nginx.org/InstallOptions -[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx -[5]:http://docs.datadoghq.com/integrations/nginx/ -[6]:https://collectd.org/wiki/index.php/Plugin:nginx -[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data -[8]:http://demo.nginx.com/status.html -[9]:http://demo.nginx.com/status -[10]:http://demo.nginx.com/status/upstreams/demoupstreams -[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses -[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source -[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example -[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html -[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format -[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source -[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format -[18]:http://www.rsyslog.com/ -[19]:https://www.elastic.co/products/logstash -[20]:http://www.fluentd.org/ 
-[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ -[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up -[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md -[24]:https://github.com/DataDog/the-monitor/issues From 843f0e99478af5d7c47fd3e774c79aacfff4732f Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Mon, 10 Aug 2015 15:01:28 +0800 Subject: [PATCH 116/697] [Translated]20150209 Install OpenQRM Cloud Computing Platform In Debian.md --- ...nQRM Cloud Computing Platform In Debian.md | 151 ------------------ ...nQRM Cloud Computing Platform In Debian.md | 148 +++++++++++++++++ 2 files changed, 148 insertions(+), 151 deletions(-) delete mode 100644 sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md create mode 100644 translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md diff --git a/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md b/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md deleted file mode 100644 index 2c6a990b83..0000000000 --- a/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md +++ /dev/null @@ -1,151 +0,0 @@ -FSSlc translating - -Install OpenQRM Cloud Computing Platform In Debian -================================================================================ -### Introduction ### - -**openQRM** is a web-based open source Cloud computing and datacenter management platform that integrates flexibly with existing components in enterprise data centers. - -It supports the following virtualization technologies: - -- KVM, -- XEN, -- Citrix XenServer, -- VMWare ESX, -- LXC, -- OpenVZ. - -The Hybrid Cloud Connector in openQRM supports a range of private or public cloud providers to extend your infrastructure on demand via **Amazon AWS**, **Eucalyptus** or **OpenStack**. 
It, also, automates provisioning, virtualization, storage and configuration management, and it takes care of high-availability. A self-service cloud portal with integrated billing system enables end-users to request new servers and application stacks on-demand. - -openQRM is available in two different flavours such as: - -- Enterprise Edition -- Community Edition - -You can view the difference between both editions [here][1]. - -### Features ### - -- Private/Hybrid Cloud Computing Platform; -- Manages physical and virtualized server systems; -- Integrates with all major open and commercial storage technologies; -- Cross-platform: Linux, Windows, OpenSolaris, and *BSD; -- Supports KVM, XEN, Citrix XenServer, VMWare ESX(i), lxc, OpenVZ and VirtualBox; -- Support for Hybrid Cloud setups using additional Amazon AWS, Eucalyptus, Ubuntu UEC cloud resources; -- Supports P2V, P2P, V2P, V2V Migrations and High-Availability; -- Integrates with the best Open Source management tools – like puppet, nagios/Icinga or collectd; -- Over 50 plugins for extended features and integration with your infrastructure; -- Self-Service Portal for end-users; -- Integrated billing system. - -### Installation ### - -Here, we will install openQRM in Ubuntu 14.04 LTS. Your server must atleast meet the following requirements. - -- 1 GB RAM; -- 100 GB Hdd; -- Optional: Virtualization enabled (VT for Intel CPUs or AMD-V for AMD CPUs) in Bios. - -First, install make package to compile openQRM source package. - - sudo apt-get update - sudo apt-get upgrade - sudo apt-get install make - -Then, run the following commands one by one to install openQRM. - -Download the latest available version [from here][2]. - - wget http://sourceforge.net/projects/openqrm/files/openQRM-Community-5.1/openqrm-community-5.1.tgz - - tar -xvzf openqrm-community-5.1.tgz - - cd openqrm-community-5.1/src/ - - sudo make - - sudo make install - - sudo make start - -During installation, you’ll be asked to update the php.ini file. 
- -![~-openqrm-community-5.1-src_001](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_001.png) - -Enter mysql root user password. - -![~-openqrm-community-5.1-src_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_002.png) - -Re-enter password: - -![~-openqrm-community-5.1-src_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_003.png) - -Select the mail server configuration type. - -![~-openqrm-community-5.1-src_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_004.png) - -If you’re not sure, select Local only. In our case, I go with **Local only** option. - -![~-openqrm-community-5.1-src_005](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_005.png) - -Enter your system mail name, and finally enter the Nagios administration password. - -![~-openqrm-community-5.1-src_007](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_007.png) - -The above commands will take long time depending upon your Internet connection to download all packages required to run openQRM. Be patient. - -Finally, you’ll get the openQRM configuration URL along with username and password. - -![~_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@debian-_002.png) - -### Configuration ### - -After installing openQRM, open up your web browser and navigate to the URL: **http://ip-address/openqrm**. - -For example, in my case http://192.168.1.100/openqrm. - -The default username and password is: **openqrm/openqrm**. - -![Mozilla Firefox_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/Mozilla-Firefox_003.png) - -Select a network card to use for the openQRM management network. 
- -![openQRM Server - Mozilla Firefox_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_004.png) - -Select a database type. In our case, I selected mysql. - -![openQRM Server - Mozilla Firefox_006](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_006.png) - -Now, configure the database connection and initialize openQRM. Here, I use **openQRM** as database name, and user as **root** and debian as password for the database. Be mindful that you should enter the mysql root user password that you have created while installing openQRM. - -![openQRM Server - Mozilla Firefox_012](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_012.png) - -Congratulations!! openQRM has been installed and configured. - -![openQRM Server - Mozilla Firefox_013](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_013.png) - -### Update openQRM ### - -To update openQRM at any time run the following command: - - cd openqrm/src/ - make update - -What we have done so far is just installed and configured openQRM in our Ubuntu server. For creating, running Virtual Machines, managing Storage, integrating additional systems and running your own private Cloud, I suggest you to read the [openQRM Administrator Guide][3]. - -That’s all now. Cheers! Happy weekend!! 
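
The `make update` step above assumes you run it from inside the unpacked source tree. A small, hypothetical wrapper can make that explicit and fail loudly otherwise; the default path below is an assumption based on the tarball name used during installation:

```shell
# Sketch only: guard openQRM's "make update" so it refuses to run
# when the expected source tree is missing (default path is an assumption).
openqrm_update() {
    src="${1:-$HOME/openqrm-community-5.1/src}"
    if [ ! -f "$src/Makefile" ]; then
        echo "openQRM source tree not found at: $src" >&2
        return 1
    fi
    ( cd "$src" && sudo make update )
}
```

Calling `openqrm_update /path/to/openqrm-community-5.1/src` keeps the update step safe to run from cron or a configuration-management tool.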
- --------------------------------------------------------------------------------- - -via: http://www.unixmen.com/install-openqrm-cloud-computing-platform-debian/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.unixmen.com/author/sk/ -[1]:http://www.openqrm-enterprise.com/products/edition-comparison.html -[2]:http://sourceforge.net/projects/openqrm/files/?source=navbar -[3]:http://www.openqrm-enterprise.com/fileadmin/Documents/Whitepaper/openQRM-Enterprise-Administrator-Guide-5.2.pdf diff --git a/translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md b/translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md new file mode 100644 index 0000000000..2eacc933b9 --- /dev/null +++ b/translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md @@ -0,0 +1,148 @@ +在 Debian 中安装 OpenQRM 云计算平台 +================================================================================ +### 简介 ### + +**openQRM**是一个基于 Web 的开源云计算和数据中心管理平台,可灵活地与企业数据中心的现存组件集成。 + +它支持下列虚拟技术: + +- KVM, +- XEN, +- Citrix XenServer, +- VMWare ESX, +- LXC, +- OpenVZ. + +openQRM 中的杂交云连接器通过 **Amazon AWS**, **Eucalyptus** 或 **OpenStack** 来支持一系列的私有或公有云提供商,以此来按需扩展你的基础设施。它也自动地进行资源调配、 虚拟化、 存储和配置管理,且关注高可用性。集成计费系统的自助服务云门户可使终端用户按需请求新的服务器和应用堆栈。 + +openQRM 有两种不同风格的版本可获取: + +- 企业版 +- 社区版 + +你可以在[这里][1] 查看这两个版本间的区别。 + +### 特点 ### + +- 私有/杂交的云计算平台; +- 可管理物理或虚拟的服务器系统; +- 可与所有主流的开源或商业的存储技术集成; +- 跨平台: Linux, Windows, OpenSolaris, and BSD; +- 支持 KVM, XEN, Citrix XenServer, VMWare ESX(i), lxc, OpenVZ 和 VirtualBox; +- 支持使用额外的 Amazon AWS, Eucalyptus, Ubuntu UEC 等云资源来进行杂交云设置; +- 支持 P2V, P2P, V2P, V2V 迁移和高可用性; +- 集成最好的开源管理工具 – 如 puppet, nagios/Icinga 或 collectd; +- 有超过 50 个插件来支持扩展功能并与你的基础设施集成; +- 针对终端用户的自助门户; +- 集成计费系统. 
+
+### 安装 ###
+
+在这里我们将在 Debian 7.5 上安装 openQRM。你的服务器必须至少满足以下要求:
+
+- 1 GB RAM;
+- 100 GB Hdd(硬盘驱动器);
+- 可选: Bios 支持虚拟化(Intel CPU 的 VT 或 AMD CPU 的 AMD-V)。
+
+首先,安装 `make` 软件包来编译 openQRM 源码包:
+
+    sudo apt-get update
+    sudo apt-get upgrade
+    sudo apt-get install make
+
+然后,逐次运行下面的命令来安装 openQRM。
+
+从[这里][2] 下载最新的可用版本:
+
+    wget http://sourceforge.net/projects/openqrm/files/openQRM-Community-5.1/openqrm-community-5.1.tgz
+
+    tar -xvzf openqrm-community-5.1.tgz
+
+    cd openqrm-community-5.1/src/
+
+    sudo make
+
+    sudo make install
+
+    sudo make start
+
+安装期间,你将被询问去更新文件 `php.ini`。
+
+![~-openqrm-community-5.1-src_001](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_001.png)
+
+输入 mysql root 用户密码。
+
+![~-openqrm-community-5.1-src_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_002.png)
+
+再次输入密码:
+
+![~-openqrm-community-5.1-src_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_003.png)
+
+选择邮件服务器配置类型。
+
+![~-openqrm-community-5.1-src_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_004.png)
+
+假如你不确定该如何选择,可选择 `Local only`。在我们的这个示例中,我选择了 **Local only** 选项。
+
+![~-openqrm-community-5.1-src_005](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_005.png)
+
+输入你的系统邮件名称,最后输入 Nagios 管理员密码。
+
+![~-openqrm-community-5.1-src_007](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_007.png)
+
+根据你的网络连接状态,上面的命令可能将花费很长的时间来下载所有运行 openQRM 所需的软件包,请耐心等待。
+
+最后你将得到 openQRM 配置 URL 地址以及相关的用户名和密码。
+
+![~_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@debian-_002.png)
+
+### 配置 ###
+
+在安装完 openQRM 后,打开你的 Web 浏览器并转到 URL: **http://ip-address/openqrm**
+
+例如,在我的示例中为 http://192.168.1.100/openqrm 。
+
+默认的用户名和密码是: **openqrm/openqrm** 。
+
+![Mozilla 
Firefox_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/Mozilla-Firefox_003.png)
+
+选择一个网卡来给 openQRM 管理网络使用。
+
+![openQRM Server - Mozilla Firefox_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_004.png)
+
+选择一个数据库类型,在我们的示例中,我选择了 mysql。
+
+![openQRM Server - Mozilla Firefox_006](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_006.png)
+
+现在,配置数据库连接并初始化 openQRM。在这里,我使用 **openQRM** 作为数据库名称,**root** 作为用户名,并将 debian 作为数据库的密码。请注意,你应该输入先前在安装 openQRM 时创建的 mysql root 用户密码。
+
+![openQRM Server - Mozilla Firefox_012](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_012.png)
+
+祝贺你!openQRM 已经安装并配置好了。
+
+![openQRM Server - Mozilla Firefox_013](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_013.png)
+
+### 更新 openQRM ###
+
+在任何时候可以使用下面的命令来更新 openQRM:
+
+    cd openqrm/src/
+    make update
+
+到现在为止,我们做的只是在我们的 Debian 服务器中安装和配置 openQRM。至于创建、运行虚拟机,管理存储,集成额外的系统以及运行你自己的私有云等内容,我建议你阅读 [openQRM 管理员指南][3]。
+
+就是这些了,欢呼吧!周末快乐!
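
上文要求服务器至少有 1 GB 内存、100 GB 硬盘。下面是一个仅作示意的安装前预检脚本草稿(阈值取自本文的最低要求,`/proc/meminfo` 为 Linux 特有),可在动手安装前先运行一次:

```shell
# 仅为示意:按本文给出的最低内存要求(1 GB)做一次安装前预检。
check_min_ram() {
    min_kb=$((1024 * 1024))                               # 1 GB,单位 kB
    ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    if [ "$ram_kb" -ge "$min_kb" ]; then
        echo "RAM OK: ${ram_kb} kB"
    else
        echo "RAM LOW: ${ram_kb} kB"
        return 1
    fi
}

check_min_ram || echo "本机内存低于本文的最低要求"
```

硬盘空间(100 GB)也可以用 `df` 按同样的思路检查。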
+-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/install-openqrm-cloud-computing-platform-debian/ + +作者:[SK][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/sk/ +[1]:http://www.openqrm-enterprise.com/products/edition-comparison.html +[2]:http://sourceforge.net/projects/openqrm/files/?source=navbar +[3]:http://www.openqrm-enterprise.com/fileadmin/Documents/Whitepaper/openQRM-Enterprise-Administrator-Guide-5.2.pdf \ No newline at end of file From 1664191a8aa7b3137e1f39eb0756e4e60b2e35b1 Mon Sep 17 00:00:00 2001 From: xiaoyu33 <1136299502@qq.com> Date: Mon, 10 Aug 2015 16:50:55 +0800 Subject: [PATCH 117/697] Update 20150810 For Linux, Supercomputers R Us.md add "Translating by xiaoyu33" --- sources/talk/20150810 For Linux, Supercomputers R Us.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150810 For Linux, Supercomputers R Us.md b/sources/talk/20150810 For Linux, Supercomputers R Us.md index 7bc48125f0..8f7302cca1 100644 --- a/sources/talk/20150810 For Linux, Supercomputers R Us.md +++ b/sources/talk/20150810 For Linux, Supercomputers R Us.md @@ -1,3 +1,4 @@ +Translating by xiaoyu33 For Linux, Supercomputers R Us ================================================================================ ![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) @@ -56,4 +57,4 @@ via: [12]:http://www.google.com/url?q=http%3A%2F%2Fen.intelextrememasters.com%2Fnews%2Fcheck-out-the-intel-extreme-masters-katowice-prize-money-distribution%2F&sa=D&sntz=1&usg=AFQjCNE6yoAGGz-Hpi2tPF4gdhuPBEckhQ [13]:http://www.beowulf.org/overview/history.html [14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html -[15]:http://www.beowulf.org/ \ No 
newline at end of file +[15]:http://www.beowulf.org/ From 219731a082612b418fd451207170d39264c3f67e Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 11 Aug 2015 07:34:03 +0800 Subject: [PATCH 118/697] Update RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...th Nano and Vim or Analyzing text with grep and regexps.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md b/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md index 1529fecf2e..f3de8528fc 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Editing Text Files with Nano and Vim / Analyzing text with grep and regexps – Part 4 ================================================================================ Every system administrator has to deal with text files as part of his daily responsibilities. That includes editing existing files (most likely configuration files), or creating new ones. It has been said that if you want to start a holy war in the Linux world, you can ask sysadmins what their favorite text editor is and why. We are not going to do that in this article, but will present a few tips that will be helpful to use two of the most widely used text editors in RHEL 7: nano (due to its simplicity and easiness of use, specially to new users), and vi/m (due to its several features that convert it into more than a simple editor). 
I am sure that you can find many more reasons to use one or the other, or perhaps some other editor such as emacs or pico. It’s entirely up to you. @@ -251,4 +253,4 @@ via: http://www.tecmint.com/rhcsa-exam-how-to-use-nano-vi-editors/ [2]:http://www.tecmint.com/file-and-directory-management-in-linux/ [3]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ [4]:http://www.nano-editor.org/ -[5]:http://www.vim.org/ \ No newline at end of file +[5]:http://www.vim.org/ From d0f66e61773cc0d995e0f3b64495f965d742e416 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 11 Aug 2015 09:31:21 +0800 Subject: [PATCH 119/697] Rename sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md to translated/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md --- ...50730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md (100%) diff --git a/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md b/translated/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md similarity index 100% rename from sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md rename to translated/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md From 83c752ac30a52faf8a0d2b772c2d00180e648f2a Mon Sep 17 00:00:00 2001 From: Ping Date: Mon, 10 Aug 2015 17:27:59 +0800 Subject: [PATCH 120/697] Complete 20150518 How to set up a Replica Set on MongoDB.md --- ... How to set up a Replica Set on MongoDB.md | 183 ------------------ ... 
How to set up a Replica Set on MongoDB.md | 183 ++++++++++++++++++ 2 files changed, 183 insertions(+), 183 deletions(-) delete mode 100644 sources/tech/20150518 How to set up a Replica Set on MongoDB.md create mode 100644 translated/tech/20150518 How to set up a Replica Set on MongoDB.md diff --git a/sources/tech/20150518 How to set up a Replica Set on MongoDB.md b/sources/tech/20150518 How to set up a Replica Set on MongoDB.md deleted file mode 100644 index 83a7da8769..0000000000 --- a/sources/tech/20150518 How to set up a Replica Set on MongoDB.md +++ /dev/null @@ -1,183 +0,0 @@ -Translating by Ping -How to set up a Replica Set on MongoDB -================================================================================ -MongoDB has become the most famous NoSQL database on the market. MongoDB is document-oriented, and its scheme-free design makes it a really attractive solution for all kinds of web applications. One of the features that I like the most is Replica Set, where multiple copies of the same data set are maintained by a group of mongod nodes for redundancy and high availability. - -This tutorial describes how to configure a Replica Set on MonoDB. - -The most common configuration for a Replica Set involves one primary and multiple secondary nodes. The replication will then be initiated from the primary toward the secondaries. Replica Sets can not only provide database protection against unexpected hardware failure and service downtime, but also improve read throughput of database clients as they can be configured to read from different nodes. - -### Set up the Environment ### - -In this tutorial, we are going to set up a Replica Set with one primary and two secondary nodes. - -![](https://farm8.staticflickr.com/7667/17801038505_529a5224a1.jpg) - -In order to implement this lab, we will use three virtual machines (VMs) running on VirtualBox. I am going to install Ubuntu 14.04 on the VMs, and install official packages for Mongodb. 
- -I am going to set up a necessary environment on one VM instance, and then clone it to the other two VM instances. Thus pick one VM named master, and perform the following installations. - -First, we need to add the MongoDB key for apt: - - $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 - -Then we need to add the official MongoDB repository to our source.list: - - $ sudo su - # echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list - -Let's update repositories and install MongoDB. - - $ sudo apt-get update - $ sudo apt-get install -y mongodb-org - -Now let's make some changes in /etc/mongodb.conf. - - auth = true - dbpath=/var/lib/mongodb - logpath=/var/log/mongodb/mongod.log - logappend=true - keyFile=/var/lib/mongodb/keyFile - replSet=myReplica - -The first line is to make sure that we are going to have authentication on our database. keyFile is to set up a keyfile that is going to be used by MongoDB to replicate between nodes. replSet sets up the name of our replica set. - -Now we are going to create our keyfile, so that it can be in all our instances. - - $ echo -n "MyRandomStringForReplicaSet" | md5sum > keyFile - -This will create keyfile that contains a MD5 string, but it has some noise that we need to clean up before using it in MongoDB. Use the following command to clean it up: - - $ echo -n "MyReplicaSetKey" | md5sum|grep -o "[0-9a-z]\+" > keyFile - -What grep command does is to print MD5 string with no spaces or other characters that we don't want. - -Now we are going to make the keyfile ready for use: - - $ sudo cp keyFile /var/lib/mongodb - $ sudo chown mongodb:nogroup keyFile - $ sudo chmod 400 keyFile - -Now we have our Ubuntu VM ready to be cloned. Power it off, and clone it to the other VMs. - -![](https://farm9.staticflickr.com/8729/17800903865_9876a9cc9c.jpg) - -I name the cloned VMs secondary1 and secondary2. 
Make sure to reinitialize the MAC address of cloned VMs and clone full disks. - -![](https://farm6.staticflickr.com/5333/17613392900_6de45c9450.jpg) - -All three VM instances should be on the same network to communicate with each other. For this, we are going to attach all three VMs to "Internet Network". - -It is recommended that each VM instances be assigned a static IP address, as opposed to DHCP IP address, so that the VMs will not lose connectivity among themselves when a DHCP server assigns different IP addresses to them. - -Let's edit /etc/networks/interfaces of each VM as follows. - -On primary: - - auto eth1 - iface eth1 inet static - address 192.168.50.2 - netmask 255.255.255.0 - -On secondary1: - - auto eth1 - iface eth1 inet static - address 192.168.50.3 - netmask 255.255.255.0 - -On secondary2: - - auto eth1 - iface eth1 inet static - address 192.168.50.4 - netmask 255.255.255.0 - -Another file that needs to be set up is /etc/hosts, because we don't have DNS. We need to set the hostnames in /etc/hosts. - -On primary: - - 127.0.0.1 localhost primary - 192.168.50.2 primary - 192.168.50.3 secondary1 - 192.168.50.4 secondary2 - -On secondary1: - - 127.0.0.1 localhost secondary1 - 192.168.50.2 primary - 192.168.50.3 secondary1 - 192.168.50.4 secondary2 - -On secondary2: - - 127.0.0.1 localhost secondary2 - 192.168.50.2 primary - 192.168.50.3 secondary1 - 192.168.50.4 secondary2 - -Check connectivity among themselves by using ping command: - - $ ping primary - $ ping secondary1 - $ ping secondary2 - -### Set up a Replica Set ### - -After verifying connectivity among VMs, we can go ahead and create the admin user so that we can start working on the Replica Set. - -On primary node, open /etc/mongodb.conf, and comment out two lines that start with auth and replSet: - - dbpath=/var/lib/mongodb - logpath=/var/log/mongodb/mongod.log - logappend=true - #auth = true - keyFile=/var/lib/mongodb/keyFile - #replSet=myReplica - -Restart mongod daemon. 
- - $ sudo service mongod restart - -Create an admin user after conencting to MongoDB: - - > use admin - > db.createUser({ - user:"admin", - pwd:" - }) - $ sudo service mongod restart - -Connect to MongoDB and use these commands to add secondary1 and secondary2 to our Replicat Set. - - > use admin - > db.auth("admin","myreallyhardpassword") - > rs.initiate() - > rs.add ("secondary1:27017") - > rs.add("secondary2:27017") - -Now that we have our Replica Set, we can start working on our project. Consult the [official driver documentation][1] to see how to connect to a Replica Set. In case you want to query from shell, you have to connect to primary instance to insert or query the database. Secondary nodes will not let you do that. If you attempt to access the database on a secondary node, you will get this error message: - - myReplica:SECONDARY> - myReplica:SECONDARY> show databases - 2015-05-10T03:09:24.131+0000 E QUERY Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" } - at Error () - at Mongo.getDBs (src/mongo/shell/mongo.js:47:15) - at shellHelper.show (src/mongo/shell/utils.js:630:33) - at shellHelper (src/mongo/shell/utils.js:524:36) - at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47 - -I hope you find this tutorial useful. You can use Vagrant to automate your local environments and help you code faster. 
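
The keyfile-cleanup pipeline above is easy to sanity-check before copying the file into place. This sketch (assuming GNU coreutils' `md5sum`) verifies that the cleaned key is a single 32-character lowercase-hex string, with `md5sum`'s trailing ` -` marker and whitespace stripped:

```shell
# Reproduce the cleanup pipeline from the tutorial and print what it
# yields: the MD5 hex digest only, with whitespace and the "-" removed.
key=$(echo -n "MyReplicaSetKey" | md5sum | grep -o "[0-9a-z]\+")
printf 'key=%s length=%s\n' "$key" "${#key}"
```

If the printed length is anything other than 32, the grep pattern did not strip the noise as intended.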
- --------------------------------------------------------------------------------- - -via: http://xmodulo.com/setup-replica-set-mongodb.html - -作者:[Christopher Valerio][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/valerio -[1]:http://docs.mongodb.org/ecosystem/drivers/ diff --git a/translated/tech/20150518 How to set up a Replica Set on MongoDB.md b/translated/tech/20150518 How to set up a Replica Set on MongoDB.md new file mode 100644 index 0000000000..44b8535b82 --- /dev/null +++ b/translated/tech/20150518 How to set up a Replica Set on MongoDB.md @@ -0,0 +1,183 @@ +如何配置MongoDB副本集(Replica Set) +================================================================================ +MongoDB已经成为市面上最知名的NoSQL数据库。MongoDB是面向文档的,它的无模式设计使得它在各种各样的WEB应用当中广受欢迎。最让我喜欢的特性之一是它的副本集,副本集将同一数据的多份拷贝放在一组mongod节点上,从而实现数据的冗余以及高可用性。 + +这篇教程将向你介绍如何配置一个MongoDB副本集。 + +副本集的最常见配置涉及到一个主节点以及多个副节点。这之后启动的复制行为会从这个主节点到其他副节点。副本集不止可以针对意外的硬件故障和停机事件对数据库提供保护,同时也因为提供了更多的结点从而提高了数据库客户端数据读取的吞吐量。 + +### 配置环境 ### + +这个教程里,我们会配置一个包括一个主节点以及两个副节点的副本集。 + +![](https://farm8.staticflickr.com/7667/17801038505_529a5224a1.jpg) + +为了达到这个目的,我们使用了3个运行在VirtualBox上的虚拟机。我会在这些虚拟机上安装Ubuntu 14.04,并且安装MongoDB官方包。 + +我会在一个虚拟机实例上配置好需要的环境,然后将它克隆到其他的虚拟机实例上。因此,选择一个名为master的虚拟机,执行以下安装过程。 + +首先,我们需要在apt中增加一个MongoDB密钥: + + $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 + +然后,将官方的MongoDB仓库添加到source.list中: + + $ sudo su + # echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list + +接下来更新apt仓库并且安装MongoDB。 + + $ sudo apt-get update + $ sudo apt-get install -y mongodb-org + +现在对/etc/mongodb.conf做一些更改 + + auth = true + dbpath=/var/lib/mongodb + logpath=/var/log/mongodb/mongod.log + logappend=true + keyFile=/var/lib/mongodb/keyFile + replSet=myReplica + 
+第一行的作用是确认我们的数据库需要验证才可以使用。keyFile用来配置用于MongoDB节点间复制行为的密钥文件。replSet用来为副本集设置一个名称。
+
+接下来我们创建一个用于所有实例的密钥文件。
+
+    $ echo -n "MyRandomStringForReplicaSet" | md5sum > keyFile
+
+这将会创建一个含有MD5字符串的密钥文件,但是由于其中包含了一些噪音,我们需要对其清理后才能正式在MongoDB中使用。
+
+    $ echo -n "MyReplicaSetKey" | md5sum|grep -o "[0-9a-z]\+" > keyFile
+
+grep命令的作用是把空格等我们不想要的内容过滤掉,只打印出MD5字符串。
+
+现在我们对密钥文件进行一些操作,让它真正可用。
+
+    $ sudo cp keyFile /var/lib/mongodb
+    $ sudo chown mongodb:nogroup keyFile
+    $ sudo chmod 400 keyFile
+
+接下来,关闭此虚拟机。将其Ubuntu系统克隆到其他虚拟机上。
+
+![](https://farm9.staticflickr.com/8729/17800903865_9876a9cc9c.jpg)
+
+这是克隆后的副节点1和副节点2。确认你已经将它们的MAC地址重新初始化,并且克隆整个硬盘。
+
+![](https://farm6.staticflickr.com/5333/17613392900_6de45c9450.jpg)
+
+请注意,三个虚拟机实例需要在同一个网络中以便相互通讯。因此,我们要把这三个虚拟机都连接到同一个网络上。
+
+这里推荐给每个虚拟机设置一个静态IP地址,而不是使用DHCP。这样它们就不至于在DHCP分配IP地址给它们的时候失去连接。
+
+像下面这样编辑每个虚拟机的/etc/network/interfaces文件。
+
+在主节点上:
+
+    auto eth1
+    iface eth1 inet static
+    address 192.168.50.2
+    netmask 255.255.255.0
+
+在副节点1上:
+
+    auto eth1
+    iface eth1 inet static
+    address 192.168.50.3
+    netmask 255.255.255.0
+
+在副节点2上:
+
+    auto eth1
+    iface eth1 inet static
+    address 192.168.50.4
+    netmask 255.255.255.0
+
+由于我们没有DNS服务,所以需要设置一下/etc/hosts这个文件,手工将主机名称放到此文件中。
+
+在主节点上:
+
+    127.0.0.1 localhost primary
+    192.168.50.2 primary
+    192.168.50.3 secondary1
+    192.168.50.4 secondary2
+
+在副节点1上:
+
+    127.0.0.1 localhost secondary1
+    192.168.50.2 primary
+    192.168.50.3 secondary1
+    192.168.50.4 secondary2
+
+在副节点2上:
+
+    127.0.0.1 localhost secondary2
+    192.168.50.2 primary
+    192.168.50.3 secondary1
+    192.168.50.4 secondary2
+
+使用ping命令检查各个节点之间的连接。
+
+    $ ping primary
+    $ ping secondary1
+    $ ping secondary2
+
+### 配置副本集 ###
+
+验证各个节点可以正常连通后,我们就可以新建一个管理员用户,用于之后的副本集操作。
+
+在主节点上,打开/etc/mongodb.conf文件,将auth和replSet两项注释掉。
+
+    dbpath=/var/lib/mongodb
+    logpath=/var/log/mongodb/mongod.log
+    logappend=true
+    #auth = true
+    keyFile=/var/lib/mongodb/keyFile
+    #replSet=myReplica
+
+重启mongod进程。
+
+    $ sudo service mongod restart
+
+连接MongoDB后,新建管理员用户。
+
+    > 
use admin
+    > db.createUser({
+    user:"admin",
+    pwd:"myreallyhardpassword",
+    roles:[ { role:"root", db:"admin" } ]
+    })
+    $ sudo service mongod restart
+
+连接到MongoDB,用以下命令将secondary1和secondary2节点添加到我们的副本集中。
+
+    > use admin
+    > db.auth("admin","myreallyhardpassword")
+    > rs.initiate()
+    > rs.add("secondary1:27017")
+    > rs.add("secondary2:27017")
+
+
+现在副本集到手了,可以开始我们的项目了。参照[官方驱动文档][1]来了解如何连接到副本集。如果你想要用Shell来请求数据,那么你需要连接到主节点上来插入或者请求数据,副节点不行。如果你执意要尝试用副节点操作,那么以下错误信息就蹦出来招呼你了。
+
+    myReplica:SECONDARY>
+    myReplica:SECONDARY> show databases
+    2015-05-10T03:09:24.131+0000 E QUERY    Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
+    at Error ()
+    at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
+    at shellHelper.show (src/mongo/shell/utils.js:630:33)
+    at shellHelper (src/mongo/shell/utils.js:524:36)
+    at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
+
+希望这篇教程能对你有所帮助。你可以使用Vagrant来自动完成你的本地环境配置,并且加速你的代码。
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/setup-replica-set-mongodb.html
+
+作者:[Christopher Valerio][a]
+译者:[mr-ping](https://github.com/mr-ping)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/valerio
+[1]:http://docs.mongodb.org/ecosystem/drivers/

From 5adf534ccbbdb8fc2db27e242835ed78f8ba9220 Mon Sep 17 00:00:00 2001
From: ZTinoZ
Date: Tue, 11 Aug 2015 10:13:13 +0800
Subject: [PATCH 121/697] Delete translated article

---
 .../20150806 5 heroes of the Linux world.md   | 99 ------------------
 1 file changed, 99 deletions(-)
 delete mode 100644 sources/talk/20150806 5 heroes of the Linux world.md

diff --git a/sources/talk/20150806 5 heroes of the Linux world.md b/sources/talk/20150806 5 heroes of the Linux world.md
deleted file mode 100644
index abc42df7f9..0000000000
--- a/sources/talk/20150806 5 heroes of the Linux world.md
+++ /dev/null
@@ -1,99 +0,0 @@
-Linux世界的五个大神 
-================================================================================ -这些人是谁?见或者没见过?谁在每天影响着我们? - -![Image courtesy Christopher Michel/Flickr](http://core0.staticworld.net/images/article/2015/07/penguin-100599348-orig.jpg) -Image courtesy [Christopher Michel/Flickr][1] - -### 野心勃勃的企鹅 ### - -Linux和开源世界一直在被那些热情洋溢的人们推动着,他们开发出最好的软件并将代码向公众开放,所以每个人都能无条件地看到。(对了,有那么一个条件,那就是许可证。) - -那么,这些人是谁?这些Linux世界里的大神们,谁在每天影响着我们?让我来给你一一揭晓。 - -![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/swap-klaus-100599357-orig.jpg) -Image courtesy Swapnil Bhartiya - -### Klaus Knopper ### - -Klaus Knopper,一个生活在德国的奥地利开发者,他是Knoppix和Adriana Linux的创始人,为了他失明的妻子开发程序。 - -Knoppix在那些Linux用户心里有着特殊的地位,他们在使用Ubuntu之前都会尝试Knoppix,而Knoppix让人称道的就是它让Live CD的概念普及开来。不像Windows或Mac OS X,你可以通过CD运行整个操作系统而不用再系统上安装任何东西,它允许新用户在他们的机子上快速试用Linux而不用去格式化硬盘。Linux这种实时的特性为它的普及做出了巨大贡献。 - -![Image courtesy Fórum Internacional Software Live/Flickr](http://images.techhive.com/images/article/2015/07/lennart-100599356-orig.jpg) -Image courtesy [Fórum Internacional Software Live/Flickr][2] - -### Lennart Pottering ### - -Lennart Pottering is yet another genius from Germany. He has written so many core components of a Linux (as well as BSD) system that it’s hard to keep track. Most of his work is towards the successors of aging or broken components of the Linux systems. - -Pottering wrote the modern init system systemd, which shook the Linux world and created a [rift in the Debian community][3]. - -While Linus Torvalds has no problems with systemd, and praises it, he is not a huge fan of the way systemd developers (including the co-author Kay Sievers,) respond to bug reports and criticism. At one point Linus said on the LKML (Linux Kernel Mailing List) that he would [never work with Sievers][4]. - -Lennart is also the author of Pulseaudio, sound server on Linux and Avahi, zero-configuration networking (zeroconf) implementation. 
- -![Image courtesy Meego Com/Flickr](http://images.techhive.com/images/article/2015/07/jim-zemlin-100599362-orig.jpg) -Image courtesy [Meego Com/Flickr][5] - -### Jim Zemlin ### - -Jim Zemlin isn't a developer, but as founder of The Linux Foundation he is certainly one of the most important figures of the Linux world. - -In 2007, The Linux Foundation was formed as a result of merger between two open source bodies: the Free Standards Group and the Open Source Development Labs. Zemlin was the executive director of the Free Standards Group. Post-merger Zemlin became the executive director of The Linux Foundation and has held that position since. - -Under his leadership, The Linux Foundation has become the central figure in the modern IT world and plays a very critical role for the Linux ecosystem. In order to ensure that key developers like Torvalds and Kroah-Hartman can focus on Linux, the foundation sponsors them as fellows. - -Zemlin also made the foundation a bridge between companies so they can collaborate on Linux while at the same time competing in the market. The foundation also organizes many conferences around the world and [offers many courses for Linux developers][6]. - -People may think of Zemlin as Linus Torvalds' boss, but he refers to himself as "Linus Torvalds' janitor." - -![Image courtesy Coscup/Flickr](http://images.techhive.com/images/article/2015/07/greg-kh-100599350-orig.jpg) -Image courtesy [Coscup/Flickr][7] - -### Greg Kroah-Hartman ### - -Greg Kroah-Hartman is known as second-in-command of the Linux kernel. The ‘gentle giant’ is the maintainer of the stable branch of the kernel and of staging subsystem, USB, driver core, debugfs, kref, kobject, and the [sysfs][8] kernel subsystems along with many other components of a Linux system. - -He is also credited for device drivers for Linux. One of his jobs is to travel around the globe, meet hardware makers and persuade them to make their drivers available for Linux. 
The next time you plug some random USB device to your system and it works out of the box, thank Kroah-Hartman. (Don't thank the distro. Some distros try to take credit for the work Kroah-Hartman or the Linux kernel did.) - -Kroah-Hartman previously worked for Novell and then joined the Linux Foundation as a fellow, alongside Linus Torvalds. - -Kroah-Hartman is the total opposite of Linus and never rants (at least publicly). One time there was some ripple was when he stated that [Canonical doesn’t contribute much to the Linux kernel][9]. - -On a personal level, Kroah-Hartman is extremely helpful to new developers and users and is easily accessible. - -![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/linus-swapnil-100599349-orig.jpg) -Image courtesy Swapnil Bhartiya - -### Linus Torvalds ### - -No collection of Linux heroes would be complete without Linus Torvalds. He is the author of the Linux kernel, the most used open source technology on the planet and beyond. His software powers everything from space stations to supercomputers, military drones to mobile devices and tiny smartwatches. Linus remains the authority on the Linux kernel and makes the final decision on which patches to merge to the kernel. - -Linux isn't Torvalds' only contribution open source. When he got fed-up with the existing software revision control systems, which his kernel heavily relied on, he wrote his own, called Git. Git enjoys the same reputation as Linux; it is the most used version control system in the world. - -Torvalds is also a passionate scuba diver and when he found no decent dive logs for Linux, he wrote his own and called it SubSurface. - -Torvalds is [well known for his rants][10] and once admitted that his ego is as big as a small planet. But he is also known for admitting his mistakes if he realizes he was wrong. 
- --------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2955001/linux/5-heros-of-the-linux-world.html - -作者:[Swapnil Bhartiya][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Swapnil-Bhartiya/ -[1]:https://flic.kr/p/siJ25M -[2]:https://flic.kr/p/uTzj54 -[3]:http://www.itwire.com/business-it-news/open-source/66153-systemd-fallout-two-debian-technical-panel-members-resign -[4]:http://www.linuxveda.com/2014/04/04/linus-torvalds-systemd-kay-sievers/ -[5]:https://flic.kr/p/9Lnhpu -[6]:http://www.itworld.com/article/2951968/linux/linux-foundation-offers-cheaper-courses-and-certifications-for-india.html -[7]:https://flic.kr/p/hBv8Pp -[8]:https://en.wikipedia.org/wiki/Sysfs -[9]:https://www.youtube.com/watch?v=CyHAeGBFS8k -[10]:http://www.itworld.com/article/2873200/operating-systems/11-technologies-that-tick-off-linus-torvalds.html From 6aa59c9a58eb024b7cda8c6a1d0d8cc9afd66b07 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 11 Aug 2015 10:17:19 +0800 Subject: [PATCH 122/697] Delete 20150803 Troubleshooting with Linux Logs.md --- ...0150803 Troubleshooting with Linux Logs.md | 117 ------------------ 1 file changed, 117 deletions(-) delete mode 100644 sources/tech/20150803 Troubleshooting with Linux Logs.md diff --git a/sources/tech/20150803 Troubleshooting with Linux Logs.md b/sources/tech/20150803 Troubleshooting with Linux Logs.md deleted file mode 100644 index 9ee0820a9c..0000000000 --- a/sources/tech/20150803 Troubleshooting with Linux Logs.md +++ /dev/null @@ -1,117 +0,0 @@ -translation by strugglingyouth -Troubleshooting with Linux Logs -================================================================================ -Troubleshooting is the main reason people create logs. 
Often you’ll want to diagnose why a problem happened with your Linux system or application. An error message or a sequence of events can give you clues to the root cause, indicate how to reproduce the issue, and point out ways to fix it. Here are a few use cases for things you might want to troubleshoot in your logs. - -### Cause of Login Failures ### - -If you want to check if your system is secure, you can check your authentication logs for failed login attempts and unfamiliar successes. Authentication failures occur when someone passes incorrect or otherwise invalid login credentials, often to ssh for remote access or su for local access to another user’s permissions. These are logged by the [pluggable authentication module][1], or pam for short. Look in your logs for strings like Failed password and user unknown. Successful authentication records include strings like Accepted password and session opened. - -Failure Examples: - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2 - Failed password for invalid user hoover from 10.0.2.2 port 4791 ssh2 - pam_unix(sshd:auth): check pass; user unknown - PAM service(sshd) ignoring max retries; 6 > 3 - -Success Examples: - - Accepted password for hoover from 10.0.2.2 port 4792 ssh2 - pam_unix(sshd:session): session opened for user hoover by (uid=0) - pam_unix(sshd:session): session closed for user hoover - -You can use grep to find which users accounts have the most failed logins. These are the accounts that potential attackers are trying and failing to access. This example is for an Ubuntu system. - - $ grep "invalid user" /var/log/auth.log | cut -d ' ' -f 10 | sort | uniq -c | sort -nr - 23 oracle - 18 postgres - 17 nagios - 10 zabbix - 6 test - -You’ll need to write a different command for each application and message because there is no standard format. 
Log management systems that automatically parse logs will effectively normalize them and help you extract key fields like username. - -Log management systems can extract the usernames from your Linux logs using automated parsing. This lets you see an overview of the users and filter on them with a single click. In this example, we can see that the root user logged in over 2,700 times because we are filtering the logs to show login attempts only for the root user. - -![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png) - -Log management systems also let you view graphs over time to spot unusual trends. If someone had one or two failed logins within a few minutes, it might be that a real user forgot his or her password. However, if there are hundreds of failed logins or they are all different usernames, it’s more likely that someone is trying to attack the system. Here you can see that on March 12, someone tried to login as test and nagios several hundred times. This is clearly not a legitimate use of the system. - -![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png) - -### Cause of Reboots ### - -Sometimes a server can stop due to a system crash or reboot. How do you know when it happened and who did it? - -#### Shutdown Command #### - -If someone ran the shutdown command manually, you can see it in the auth log file. Here you can see that someone remotely logged in from the IP 50.0.134.125 as the user ubuntu and then shut the system down. 
- - Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh - Mar 19 18:36:41 ip-172-31-11-231 23437]:sshd[ pam_unix(sshd:session): session opened for user ubuntu by (uid=0) - Mar 19 18:37:09 ip-172-31-11-231 sudo: ubuntu : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/sbin/shutdown -r now - -#### Kernel Initializing #### - -If you want to see when the server restarted regardless of reason (including crashes) you can search logs from the kernel initializing. You’d search for the facility kernel messages and Initializing cpu. - - Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset - Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu - Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Linux version 3.8.0-44-generic (buildd@tipua) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 (Ubuntu 3.8.0-44.66~precise1-generic 3.8.13.25) - -### Detect Memory Problems ### - -There are lots of reasons a server might crash, but one common cause is running out of memory. - -When your system is low on memory, processes are killed, typically in the order of which ones will release the most resources. The error occurs when your system is using all of its memory and a new or existing process attempts to access additional memory. Look in your log files for strings like Out of Memory or for kernel warnings like to kill. These strings indicate that your system intentionally killed the process or application rather than allowing the process to crash. - -Examples: - - [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child - [29923450.995084] select 5230 (docker), adj 0, size 708, to kill - -You can find these logs using a tool like grep. 
This example is for Ubuntu: - - $ grep “Out of memory” /var/log/syslog - [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child - -Keep in mind that grep itself uses memory, so you might cause an out of memory error just by running grep. This is another reason it’s a fabulous idea to centralize your logs! - -### Log Cron Job Errors ### - -The cron daemon is a scheduler that runs processes at specified dates and times. If the process fails to run or fails to finish, then a cron error appears in your log files. You can find these files in /var/log/cron, /var/log/messages, and /var/log/syslog depending on your distribution. There are many reasons a cron job can fail. Usually the problems lie with the process rather than the cron daemon itself. - -By default, cron jobs output through email using Postfix. Here is a log showing that an email was sent. Unfortunately, you cannot see the contents of the message here. - - Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from= - Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110> - Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=, size=607, nrcpt=1 (queue active) - Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp) - -You should consider logging the cron standard output to help debug problems. Here is how you can redirect your cron standard output to syslog using the logger command. Replace the echo command with your own script and helloCron with whatever you want to set the appName to. - - */5 * * * * echo ‘Hello World’ 2>&1 | /usr/bin/logger -t helloCron - -Which creates the log entries: - - Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 
2>&1 | /usr/bin/logger -t helloCron)
-    Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World!
-
-Each cron job will log differently based on the specific type of job and how it outputs data. Hopefully there are clues to the root cause of problems within the logs, or you can add additional logging as needed.
-
---------------------------------------------------------------------------------
-
-via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-logs/
-
-作者:[Jason Skowronski][a1]
-作者:[Amy Echeverri][a2]
-作者:[Sadequl Hussain][a3]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a1]:https://www.linkedin.com/in/jasonskowronski
-[a2]:https://www.linkedin.com/in/amyecheverri
-[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
-[1]:http://linux.die.net/man/8/pam.d

From 55f5e577c8684ad955e5e189172343a6e316af6d Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Tue, 11 Aug 2015 10:18:16 +0800
Subject: [PATCH 123/697] Create 20150803 Troubleshooting with Linux Logs.md

---
 ...0150803 Troubleshooting with Linux Logs.md | 117 ++++++++++++++++++
 1 file changed, 117 insertions(+)
 create mode 100644 translated/tech/20150803 Troubleshooting with Linux Logs.md

diff --git a/translated/tech/20150803 Troubleshooting with Linux Logs.md b/translated/tech/20150803 Troubleshooting with Linux Logs.md
new file mode 100644
index 0000000000..5950a69d98
--- /dev/null
+++ b/translated/tech/20150803 Troubleshooting with Linux Logs.md
@@ -0,0 +1,117 @@
+在 Linux 中使用日志来排错
+================================================================================
+人们创建日志的主要原因是排错。你通常会想诊断 Linux 系统或应用程序中为什么会发生某个问题。错误信息或一系列事件可以为你提供找到根本原因的线索,说明问题是如何发生的,并指出如何解决它。下面是几个你可能想用日志来排查的问题的例子。
+
+### 登录失败原因 ###
+
+如果你想检查系统是否安全,可以在认证日志中检查失败的登录尝试以及陌生的成功登录。认证失败发生在有人使用错误或无效的凭据登录时,常见于用 SSH 远程登录,或用 su 以其他本地用户的权限进行访问的场景。这些都由[可插拔认证模块][1](简称 PAM)记录。在你的日志中会看到像 
Failed password 和 user unknown 这样的字符串。成功的认证记录则包括 Accepted password 和 session opened 这样的字符串。
+
+失败的例子:
+
+    pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2
+    Failed password for invalid user hoover from 10.0.2.2 port 4791 ssh2
+    pam_unix(sshd:auth): check pass; user unknown
+    PAM service(sshd) ignoring max retries; 6 > 3
+
+成功的例子:
+
+    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
+    pam_unix(sshd:session): session opened for user hoover by (uid=0)
+    pam_unix(sshd:session): session closed for user hoover
+
+你可以使用 grep 来查找哪些用户账户的失败登录次数最多。这些就是潜在攻击者正在尝试访问却失败的账户。下面是一个在 Ubuntu 系统上的例子。
+
+    $ grep "invalid user" /var/log/auth.log | cut -d ' ' -f 10 | sort | uniq -c | sort -nr
+    23 oracle
+    18 postgres
+    17 nagios
+    10 zabbix
+    6 test
+
+由于没有标准格式,你需要为每个应用程序的日志编写不同的命令。能够自动解析日志的日志管理系统会对日志做有效的归一化处理,帮助你提取像用户名这样的关键字段。
+
+日志管理系统可以使用自动解析功能从 Linux 日志中提取用户名。这使你可以对用户一目了然,并能单击筛选某个用户。在这个例子中,我们可以看到 root 用户登录了超过 2700 次,因为我们筛选的日志只显示 root 用户的登录尝试。
+
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png)
+
+日志管理系统还可以让你查看以时间为坐标轴的图表,使你更容易发现异常。如果有人在几分钟内登录失败一两次,可能是真实用户忘记了密码。但是,如果出现了几百次失败登录,或者使用的都是不同的用户名,就更可能是有人在试图攻击系统。在这里,你可以看到在 3 月 12 日,有人试图以 test 和 nagios 的身份登录了几百次。这显然不是系统的合法使用。
+
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png)
+
+### 重启的原因 ###
+
+有时候,一台服务器会因为系统崩溃或重启而宕机。你怎么知道它是何时发生的,又是谁做的?
+
+#### 关机命令 ####
+
+如果有人手动运行了 shutdown 命令,可以在认证(auth)日志文件中看到记录。在这里可以看到,有人从 IP 50.0.134.125 以 ubuntu 用户的身份远程登录,然后关闭了系统。
+
+    Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh
+    Mar 19 18:36:41 ip-172-31-11-231 23437]:sshd[ pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
+    Mar 19 18:37:09 ip-172-31-11-231 sudo: ubuntu : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/sbin/shutdown -r now
+
+#### 内核初始化 ####
+
+如果你想知道服务器所有重启的原因(包括崩溃),可以搜索内核初始化时的日志。也就是搜索来源(facility)为 kernel 的消息,以及 Initializing cpu 这样的字样。
+
+    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset
+    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu
+    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Linux version 3.8.0-44-generic (buildd@tipua) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 (Ubuntu 3.8.0-44.66~precise1-generic 3.8.13.25)
+
+### 检测内存问题 ###
+
+服务器崩溃的原因有很多,其中一个常见原因就是内存用尽。
+
+当系统内存不足时,进程会被杀死,通常最先杀死的是能释放最多资源的进程。当系统已经用完所有内存,而新的或现有的进程还试图获取更多内存时,就会发生这种错误。在日志文件中查找像 Out of Memory 这样的字符串,或者内核发出的类似 to kill 的警告。这些信息表明系统是故意杀死了进程或应用程序,而不是任由进程崩溃。
+
+例如:
+
+    [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
+    [29923450.995084] select 5230 (docker), adj 0, size 708, to kill
+
+你可以使用像 grep 这样的工具找到这些日志。下面是在 Ubuntu 中的例子:
+
+    $ grep “Out of memory” /var/log/syslog
+    [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
+
+请记住,grep 本身也要使用内存,所以仅仅运行 grep 就有可能触发内存不足的错误。这也是应该把日志集中管理的另一个原因!
+
+### 定时任务错误日志 ###
+
+cron 守护进程是一个在指定日期和时间运行进程的调度器。如果进程运行失败或无法完成,cron 的错误就会出现在你的日志文件中。根据发行版的不同,你可以在 /var/log/cron、/var/log/messages 和 /var/log/syslog 中找到这些日志。cron 任务失败的原因有很多。通常情况下,问题出在进程本身而不是 cron 守护进程。
+
+默认情况下,cron 作业的输出会通过 Postfix 以电子邮件的形式发送。下面的日志显示发送了一封电子邮件。不幸的是,在这里你看不到邮件的正文。
+
+    Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from=
+    Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110>
+    Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=, size=607, nrcpt=1 (queue active)
+    Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp)
+
+你可以考虑把 cron 任务的标准输出记录到日志中,以帮助定位问题。下面展示了如何使用 logger 命令把 cron 的标准输出重定向到 syslog。把 echo 命令换成你自己的脚本,helloCron 可以设置成任何你想要的应用名称。
+
+    */5 * * * * echo ‘Hello World’ 2>&1 | /usr/bin/logger -t helloCron
+
+它会创建如下日志条目:
+
+    Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron)
+    Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World!
+ +每个 cron 作业将根据作业的具体类型以及如何输出数据来记录不同的日志。希望在日志中有问题根源的线索,也可以根据需要添加额外的日志记录。 + +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-logs/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:http://linux.die.net/man/8/pam.d From 1252e6195493dd66bcd0f314379c21c6547c5d7e Mon Sep 17 00:00:00 2001 From: ZTinoZ Date: Tue, 11 Aug 2015 10:28:42 +0800 Subject: [PATCH 124/697] Translating by ZTinoZ --- .../20150806 Installation Guide for Puppet on Ubuntu 15.04.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md index ea8fcd6e2e..501cb4a8dc 100644 --- a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md +++ b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md @@ -1,3 +1,4 @@ +Translating by ZTinoZ Installation Guide for Puppet on Ubuntu 15.04 ================================================================================ Hi everyone, today in this article we'll learn how to install puppet to manage your server infrastructure running ubuntu 15.04. Puppet is an open source software configuration management tool which is developed and maintained by Puppet Labs that allows us to automate the provisioning, configuration and management of a server infrastructure. 
Whether we're managing just a few servers or thousands of physical and virtual machines to orchestration and reporting, puppet automates tasks that system administrators often do manually which frees up time and mental space so sysadmins can work on improving other aspects of your overall setup. It ensures consistency, reliability and stability of the automated jobs processed. It facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available in two solutions configuration management and data center automation. They are **puppet open source and puppet enterprise**. Puppet open source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. Whereas puppet enterprise edition is a proven commercial solution for diverse enterprise IT environments which lets us get all the benefits of open source puppet, plus puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes. 
@@ -426,4 +427,4 @@ via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/arunp/ -[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html \ No newline at end of file +[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html From e0a4f5017065fd958548f89125c41bad72a7d550 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 11 Aug 2015 10:37:03 +0800 Subject: [PATCH 125/697] =?UTF-8?q?20150811-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Install Snort and Usage in Ubuntu 15.04.md | 203 ++++++++++++++++++ ...k files from Google Play Store on Linux.md | 99 +++++++++ 2 files changed, 302 insertions(+) create mode 100644 sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md create mode 100644 sources/tech/20150811 How to download apk files from Google Play Store on Linux.md diff --git a/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md b/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md new file mode 100644 index 0000000000..7bf2438c95 --- /dev/null +++ b/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md @@ -0,0 +1,203 @@ +How to Install Snort and Usage in Ubuntu 15.04 +================================================================================ +Intrusion detection in a network is important for IT security. Intrusion Detection System used for the detection of illegal and malicious attempts in the network. Snort is well-known open source intrusion detection system. Web interface (Snorby) can be used for better analysis of alerts. Snort can be used as an intrusion prevention system with iptables/pf firewall. In this article, we will install and configure an open source IDS system snort. 

### Snort Installation ###

#### Prerequisite ####

The Data Acquisition library (DAQ) is used by Snort for abstract calls to packet capture libraries. It is available on the Snort website, and the download process is shown in the following screenshot.

![downloading_daq](http://blog.linoxide.com/wp-content/uploads/2015/07/downloading_daq.png)

Extract it and run the ./configure, make and make install commands to install DAQ. However, DAQ requires other tools, so on a fresh system the ./configure script will generate the following errors.

flex and bison error:

![flexandbison_error](http://blog.linoxide.com/wp-content/uploads/2015/07/flexandbison_error.png)

libpcap error:

![libpcap error](http://blog.linoxide.com/wp-content/uploads/2015/07/libpcap-error.png)

Therefore, first install flex/bison and libpcap before installing DAQ, as shown in the figure.

![install_flex](http://blog.linoxide.com/wp-content/uploads/2015/07/install_flex.png)

Installation of the libpcap development library is shown below.

![libpcap-dev installation](http://blog.linoxide.com/wp-content/uploads/2015/07/libpcap-dev-installation.png)

After installing the necessary tools, run the ./configure script again, which will show the following output.

![without_error_configure](http://blog.linoxide.com/wp-content/uploads/2015/07/without_error_configure.png)

The results of the make and make install commands are shown in the following screens.

![make install](http://blog.linoxide.com/wp-content/uploads/2015/07/make-install.png)

![make](http://blog.linoxide.com/wp-content/uploads/2015/07/make.png)

After the successful installation of DAQ, we will now install Snort itself. Downloading it with wget is shown in the figure below.

![downloading_snort](http://blog.linoxide.com/wp-content/uploads/2015/07/downloading_snort.png)

Extract the compressed package using the command given below.

    #tar -xvzf snort-2.9.7.3.tar.gz

![snort_extraction](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_extraction.png)

Create the installation directory and set the prefix parameter in the configure script. It is also recommended to enable the sourcefire flag for Packet Performance Monitoring (PPM).

    #mkdir /usr/local/snort

    #./configure --prefix=/usr/local/snort/ --enable-sourcefire

![snort_installation](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_installation.png)

The configure script generates errors if the libpcre-dev, libdumbnet-dev and zlib development libraries are missing.

Error due to the missing libpcre library:

![pcre-error](http://blog.linoxide.com/wp-content/uploads/2015/07/pcre-error.png)

Error due to the missing dnet (libdumbnet) library:

![libdnt error](http://blog.linoxide.com/wp-content/uploads/2015/07/libdnt-error.png)

Error due to the missing zlib library:

![zlib error](http://blog.linoxide.com/wp-content/uploads/2015/07/zlib-error.png)

Installation of all the required development libraries is shown in the next screenshots.

    # aptitude install libpcre3-dev

![libpcre3-dev install](http://blog.linoxide.com/wp-content/uploads/2015/07/libpcre3-dev-install.png)

    # aptitude install libdumbnet-dev

![libdumnet-dev installation](http://blog.linoxide.com/wp-content/uploads/2015/07/libdumnet-dev-installation.png)

    # aptitude install zlib1g-dev

![zlibg-dev installation](http://blog.linoxide.com/wp-content/uploads/2015/07/zlibg-dev-installation.png)

After installing the libraries required by Snort, run the configure script again; this time it completes without any error.

Run the make and make install commands to compile and install Snort into the /usr/local/snort directory.

    #make

![make snort](http://blog.linoxide.com/wp-content/uploads/2015/07/make-snort.png)

    #make install

![make install snort](http://blog.linoxide.com/wp-content/uploads/2015/07/make-install-snort.png)

Finally, run Snort from the /usr/local/snort/bin directory. Here it is running in promiscuous (packet dump) mode on all traffic on the eth0 interface.

![snort running](http://blog.linoxide.com/wp-content/uploads/2015/07/snort-running.png)

The traffic dumped by Snort is shown in the following figure.

![traffic](http://blog.linoxide.com/wp-content/uploads/2015/07/traffic1.png)

#### Rules and Configuration of Snort ####

A Snort installation from source code also requires rules and configuration files, so we will now copy them under the /etc/snort directory. We have created a single bash script for the rules and configuration setup. It performs the following Snort settings:

- Creation of a snort user for the Snort IDS service on Linux.
- Creation of directories and files under the /etc directory for the Snort configuration.
- Permission setting and copying data from the etc directory of the Snort source code.
- Commenting out (#) the rule include paths in the snort.conf file (selected rules are enabled again later).

    #!/bin/bash
    ##PATH of source code of snort
    snort_src="/home/test/Downloads/snort-2.9.7.3"
    echo "adding group and user for snort..."
    groupadd snort &> /dev/null
    useradd snort -r -s /sbin/nologin -d /var/log/snort -c snort_idps -g snort &> /dev/null
    #snort configuration
    echo "Configuring snort..."
    mkdir -p /etc/snort
    mkdir -p /etc/snort/rules
    touch /etc/snort/rules/black_list.rules
    touch /etc/snort/rules/white_list.rules
    touch /etc/snort/rules/local.rules
    mkdir /etc/snort/preproc_rules
    mkdir /var/log/snort
    mkdir -p /usr/local/lib/snort_dynamicrules
    chmod -R 775 /etc/snort
    chmod -R 775 /var/log/snort
    chmod -R 775 /usr/local/lib/snort_dynamicrules
    chown -R snort:snort /etc/snort
    chown -R snort:snort /var/log/snort
    chown -R snort:snort /usr/local/lib/snort_dynamicrules
    ###copy configuration and rules from etc directory under source code of snort
    echo "copying from snort source to /etc/snort ....."
    echo $snort_src
    echo "-------------"
    cp $snort_src/etc/*.conf* /etc/snort
    cp $snort_src/etc/*.map /etc/snort
    ##comment out the rule includes; selected rules are enabled later
    sed -i 's/include \$RULE\_PATH/#include \$RULE\_PATH/' /etc/snort/snort.conf
    echo "---DONE---"

Change the Snort source directory in the script to match your system and run it. The following output appears on success.

![running script](http://blog.linoxide.com/wp-content/uploads/2015/08/running_script.png)

The script copied the following files/directories from the Snort source into the /etc/snort configuration directory.

![files copied](http://blog.linoxide.com/wp-content/uploads/2015/08/created.png)

The Snort configuration file is quite complex; however, the following changes to snort.conf are required for the IDS to work properly.

    ipvar HOME_NET 192.168.1.0/24 # LAN side

----------

    ipvar EXTERNAL_NET !$HOME_NET # WAN side

![variable set](http://blog.linoxide.com/wp-content/uploads/2015/08/12.png)

    var RULE_PATH /etc/snort/rules # snort signature path
    var SO_RULE_PATH /etc/snort/so_rules #rules in shared libraries
    var PREPROC_RULE_PATH /etc/snort/preproc_rules # Preprocessor path
    var WHITE_LIST_PATH /etc/snort/rules # dont scan
    var BLACK_LIST_PATH /etc/snort/rules # Must scan

![main path](http://blog.linoxide.com/wp-content/uploads/2015/08/rule-path.png)

    include $RULE_PATH/local.rules # file for custom rules

Remove the comment sign (#) from other rules such as ftp.rules, exploit.rules, etc.

![path rules](http://blog.linoxide.com/wp-content/uploads/2015/08/path-rules.png)

Now [download the community][1] rules and extract them under the /etc/snort/rules directory. Enable the community and emerging-threats rules in the snort.conf file.

![wget_rules](http://blog.linoxide.com/wp-content/uploads/2015/08/wget_rules.png)

![community rules](http://blog.linoxide.com/wp-content/uploads/2015/08/community-rules1.png)

Run the following command to test the configuration file after the above-mentioned changes.

    #snort -T -c /etc/snort/snort.conf

![snort running](http://blog.linoxide.com/wp-content/uploads/2015/08/snort-final.png)

### Conclusion ###

In this article our focus was on the installation and configuration of the open source IDPS Snort on an Ubuntu distribution. By default it is used for the monitoring of events; however, it can be configured in inline mode for the protection of the network. Snort rules can also be tested and analysed offline against a pcap capture file.
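
As a follow-up to the alert monitoring discussed above, Snort's fast-style alerts (written under /var/log/snort on the layout used in this guide) can be triaged with ordinary shell tools. The sketch below is only an illustration: the three alert lines are fabricated samples in the fast-alert style, and the signature IDs in them are hypothetical, not output captured from this setup.

```shell
# Fabricated sample of snort fast-style alerts; on the setup above,
# real alerts would be read from /var/log/snort instead.
cat > /tmp/sample_alerts.txt <<'EOF'
08/11-10:15:03.123456  [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Priority: 1] {TCP} 203.0.113.7:80 -> 192.168.1.10:51234
08/11-10:15:09.654321  [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Priority: 1] {TCP} 203.0.113.7:80 -> 192.168.1.11:40122
08/11-10:16:44.000001  [**] [1:366:7] ICMP PING *NIX [**] [Priority: 3] {ICMP} 192.168.1.5 -> 192.168.1.1
EOF

# Split each line on '[' and ']'; field 4 is the generator:sid:rev triple.
# Counting the triples shows which signatures fire most often, busiest first.
awk -F'[][]' '{print $4}' /tmp/sample_alerts.txt | sort | uniq -c | sort -nr
```

The same one-liner applies unchanged to a real fast-alert file; feeding the signature IDs it surfaces back into the rule files is a quick way to decide which noisy rules to tune or disable.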
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/security/install-snort-usage-ubuntu-15-04/ + +作者:[nido][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/naveeda/ +[1]:https://www.snort.org/downloads/community/community-rules.tar.gz \ No newline at end of file diff --git a/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md b/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md new file mode 100644 index 0000000000..529e877d7b --- /dev/null +++ b/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md @@ -0,0 +1,99 @@ +How to download apk files from Google Play Store on Linux +================================================================================ +Suppose you want to install an Android app on your Android device. However, for whatever reason, you cannot access Google Play Store on the Android device. What can you do then? One way to install the app without Google Play Store access is to download its APK file using some other means, and then [install the APK][1] file on the Android device manually. + +There are several ways to download official APK files from Google Play Store on non-Android devices such as regular computers and laptops. For example, there are browser plugins (e.g., for [Chrome][2] or [Firefox][3]) or online APK archives that allow you to download APK files using a web browser. If you do not trust these closed-source plugins or third-party APK repositories, there is yet another way to download official APK files manually, and that is via an open-source Linux app called [GooglePlayDownloader][4]. + +GooglePlayDownloader is a Python-based GUI application that enables you to search and download APK files from Google Play Store. 
Since it is completely open source, you can use it with confidence. In this tutorial, I am going to show how to download an APK file from Google Play Store using GooglePlayDownloader in a Linux environment.

### Python requirement ###

GooglePlayDownloader requires Python with SNI (Server Name Indication) support for SSL/TLS communication. This feature comes with Python 2.7.9 or higher, which leaves out older distributions such as Debian 7 Wheezy or earlier, Ubuntu 14.04 or earlier, and CentOS/RHEL 7 or earlier. Assuming that you have a Linux distribution with Python 2.7.9 or higher, proceed to install GooglePlayDownloader as follows.

### Install GooglePlayDownloader on Ubuntu ###

On Ubuntu, you can use the official deb build. One catch is that you may need to install one required dependency manually.

#### On Ubuntu 14.10 ####

Download the [python-ndg-httpsclient][5] deb package, which is a missing dependency on older Ubuntu distributions. Also download GooglePlayDownloader's official deb package.

    $ wget http://mirrors.kernel.org/ubuntu/pool/main/n/ndg-httpsclient/python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb
    $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb

We are going to use the [gdebi command][6] to install those two deb files as follows. The gdebi command automatically handles any other dependencies.

    $ sudo apt-get install gdebi-core
    $ sudo gdebi python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb
    $ sudo gdebi googleplaydownloader_1.7-1_all.deb

#### On Ubuntu 15.04 or later ####

Recent Ubuntu distributions ship with all the required dependencies, so the installation is straightforward, as follows.
+
+    $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb
+    $ sudo apt-get install gdebi-core
+    $ sudo gdebi googleplaydownloader_1.7-1_all.deb
+
+### Install GooglePlayDownloader on Debian ###
+
+Due to its Python requirement, GooglePlayDownloader cannot be installed on Debian 7 Wheezy or earlier unless you upgrade its stock Python.
+
+#### On Debian 8 Jessie and higher ####
+
+    $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb
+    $ sudo apt-get install gdebi-core
+    $ sudo gdebi googleplaydownloader_1.7-1_all.deb
+
+### Install GooglePlayDownloader on Fedora ###
+
+Since GooglePlayDownloader was originally developed for Debian-based distributions, you need to install it from source if you want to use it on Fedora.
+
+First, install the necessary dependencies.
+
+    $ sudo yum install python-pyasn1 wxPython python-ndg_httpsclient protobuf-python python-requests
+
+Then install it as follows.
+
+    $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7.orig.tar.gz
+    $ tar -xvf googleplaydownloader_1.7.orig.tar.gz
+    $ cd googleplaydownloader-1.7
+    $ chmod o+r -R .
+    $ sudo python setup.py install
+    $ sudo sh -c "echo 'python /usr/lib/python2.7/site-packages/googleplaydownloader-1.7-py2.7.egg/googleplaydownloader/googleplaydownloader.py' > /usr/bin/googleplaydownloader"
+
+### Download APK Files from Google Play Store with GooglePlayDownloader ###
+
+Once you have installed GooglePlayDownloader, you can download APK files from Google Play Store as follows.
+
+First, launch the app by typing:
+
+    $ googleplaydownloader
+
+![](https://farm1.staticflickr.com/425/20229024898_105396fa68_b.jpg)
+
+In the search bar, type the name of the app you want to download from Google Play Store.
+ +![](https://farm1.staticflickr.com/503/20230360479_925f5da613_b.jpg) + +Once you find the app in the search list, choose the app, and click on "Download selected APK(s)" button. You will find the downloaded APK file in your home directory. Now you can move the APK file to the Android device of your choice, and install it manually. + +Hope this helps. + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/download-apk-files-google-play-store.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/how-to-install-apk-file-on-android-phone-or-tablet.html +[2]:https://chrome.google.com/webstore/detail/apk-downloader/cgihflhdpokeobcfimliamffejfnmfii +[3]:https://addons.mozilla.org/en-us/firefox/addon/apk-downloader/ +[4]:http://codingteam.net/project/googleplaydownloader +[5]:http://packages.ubuntu.com/vivid/python-ndg-httpsclient +[6]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html \ No newline at end of file From 4f320ebb4d4b5982e931f6960e3c7fbd41219c2c Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 11 Aug 2015 10:41:56 +0800 Subject: [PATCH 126/697] PUB:20150806 Linux FAQs with Answers--How to install git on Linux @mr-ping --- ...Qs with Answers--How to install git on Linux.md | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) rename {translated/tech => published}/20150806 Linux FAQs with Answers--How to install git on Linux.md (66%) diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md b/published/20150806 Linux FAQs with Answers--How to install git on Linux.md similarity index 66% rename from translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md rename to published/20150806 Linux FAQs with Answers--How to 
install git on Linux.md index e6d3f59c71..1d30a02083 100644 --- a/translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md +++ b/published/20150806 Linux FAQs with Answers--How to install git on Linux.md @@ -1,15 +1,15 @@ -Linux问答 -- 如何在Linux上安装Git +Linux有问必答:如何在Linux上安装Git ================================================================================ -> **问题:** 我尝试从一个Git公共仓库克隆项目,但出现了这样的错误提示:“git: command not found”。 请问我该如何安装Git? [注明一下是哪个Linux发行版]? +> **问题:** 我尝试从一个Git公共仓库克隆项目,但出现了这样的错误提示:“git: command not found”。 请问我该如何在某某发行版上安装Git? -Git是一个流行的并且开源的版本控制系统(VCS),最初是为Linux环境开发的。跟CVS或者SVN这些版本控制系统不同的是,Git的版本控制被认为是“分布式的”,某种意义上,git的本地工作目录可以作为一个功能完善的仓库来使用,它具备完整的历史记录和版本追踪能力。在这种工作模型之下,各个协作者将内容提交到他们的本地仓库中(与之相对的会直接提交到核心仓库),如果有必要,再有选择性地推送到核心仓库。这就为Git这个版本管理系统带来了大型协作系统所必须的可扩展能力和冗余能力。 +Git是一个流行的开源版本控制系统(VCS),最初是为Linux环境开发的。跟CVS或者SVN这些版本控制系统不同的是,Git的版本控制被认为是“分布式的”,某种意义上,git的本地工作目录可以作为一个功能完善的仓库来使用,它具备完整的历史记录和版本追踪能力。在这种工作模型之下,各个协作者将内容提交到他们的本地仓库中(与之相对的会总是提交到核心仓库),如果有必要,再有选择性地推送到核心仓库。这就为Git这个版本管理系统带来了大型协作系统所必须的可扩展能力和冗余能力。 ![](https://farm1.staticflickr.com/341/19433194168_c79d4570aa_b.jpg) ### 使用包管理器安装Git ### -Git已经被所有的主力Linux发行版所支持。所以安装它最简单的方法就是使用各个Linux发行版的包管理器。 +Git已经被所有的主流Linux发行版所支持。所以安装它最简单的方法就是使用各个Linux发行版的包管理器。 **Debian, Ubuntu, 或 Linux Mint** @@ -18,6 +18,8 @@ Git已经被所有的主力Linux发行版所支持。所以安装它最简单的 **Fedora, CentOS 或 RHEL** $ sudo yum install git + 或 + $ sudo dnf install git **Arch Linux** @@ -33,7 +35,7 @@ Git已经被所有的主力Linux发行版所支持。所以安装它最简单的 ### 从源码安装Git ### -如果由于某些原因,你希望从源码安装Git,安装如下介绍操作。 +如果由于某些原因,你希望从源码安装Git,按照如下介绍操作。 **安装依赖包** @@ -65,7 +67,7 @@ via: http://ask.xmodulo.com/install-git-linux.html 作者:[Dan Nanni][a] 译者:[mr-ping](https://github.com/mr-ping) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 3726bb850331c17fc57933c997ef739b8cfbb9a7 Mon Sep 17 00:00:00 2001 From: XIAOYU <1136299502@qq.com> Date: Tue, 11 Aug 2015 16:45:36 +0800 
Subject: [PATCH 127/697] translated translated this article --- ...20150810 For Linux, Supercomputers R Us.md | 59 ++++++++++++++++++ ...20150810 For Linux, Supercomputers R Us.md | 60 ------------------- 2 files changed, 59 insertions(+), 60 deletions(-) create mode 100644 published/20150810 For Linux, Supercomputers R Us.md delete mode 100644 sources/talk/20150810 For Linux, Supercomputers R Us.md diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/published/20150810 For Linux, Supercomputers R Us.md new file mode 100644 index 0000000000..ef9b32684c --- /dev/null +++ b/published/20150810 For Linux, Supercomputers R Us.md @@ -0,0 +1,59 @@ +Linux:我们最好用的超级计算机系统 +================================================================================ +![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) +首图来源:By Michel Ngilen,[ CC BY 2.0 ], via Wikimedia Commons + +> 几乎所有超级计算机上运行的系统都是Linux,其中包括那些由树莓派(Raspberry Pi)板和PlayStation 3游戏机板组成的计算机。 + +超级计算机是很正经的工具,目的是做严肃的计算。它们往往从事于严肃的追求,比如原子弹的模拟,气候模拟和高级物理学。当然,它们也需要大笔资金的投资。在最新的超级计算机[500强][1]排名中,中国国防科大研制的天河2号位居第一。天河2号耗资约3.9亿美元。 + +但是,也有一个超级计算机,是由博伊西州立大学电气和计算机工程系的一名在读博士Joshua Kiepert[用树莓派构建完成][2]的。其创建成本低于2000美元。 + +不,这不是我编造的。这是一个真实的超级计算机,由超频1GHz的[B型树莓派][3]ARM11处理器与VideoCore IV GPU组成。每个都配备了512MB的RAM,一对USB端口和1个10/100 BaseT以太网端口。 + +那么天河2号和博伊西州立大学的超级计算机有什么共同点?它们都运行Linux系统。世界最快的超级计算机[前500强中有486][4]个也同样运行的是Linux系统。这是20多年前就开始的一种覆盖。现在Linux开始建立于廉价的超级计算机。因为Kiepert的机器并不是唯一的预算数字计算机。 + +Gaurav Khanna,麻省大学达特茅斯分校的物理学副教授,创建了一台超级计算机仅用了[不足200的PlayStation3视频游戏机][5]。 + +PlayStation游戏机是由一个3.2 GHz的基于PowerPC的电源处理单元供电。每个都配有512M的RAM。你现在仍然可以花200美元买到一个,尽管索尼将在年底逐步淘汰它们。Khanna仅用16个PlayStation 3s构建了他第一台超级计算机,所以你也可以花费不到4000美元就拥有你自己的超级计算机。 + +这些机器可能是从玩具建成的,但他们不是玩具。Khanna已经用它做了严肃的天体物理学研究。一个白帽子黑客组织使用了类似的[PlayStation 3超级计算机在2008年破解了SSL的MD5哈希算法][6]。 + +两年后,美国空军研究实验室研制的[Condor 
Cluster,使用了1,760个索尼的PlayStation3的处理器][7]和168个通用的图形处理单元。这个低廉的超级计算机,每秒运行约500TFLOPs,或500万亿次浮点运算。 + +其他的一些便宜且适用于构建家庭超级计算机的构件包括,专业并行处理板比如信用卡大小[99美元的Parallella板][8],以及高端显卡比如[Nvidia的 Titan Z][9] 以及[ AMD的 FirePro W9100][10].这些高端主板市场零售价约3000美元,被一些[英特尔极限大师赛世界锦标赛英雄联盟参赛][11]玩家觊觎能够赢得的梦想的机器,c[传说][12]这项比赛第一名能获得超过10万美元奖金。另一方面,一个人能够独自提供超过2.5TFLOPS,并且他们为科学家和研究人员提供了一个经济的方法,使得他们拥有自己专属的超级计算机。 + +作为Linux的连接,这一切都开始于1994年戈达德航天中心的第一个名为[Beowulf超级计算机][13]。 + +按照我们的标准,Beowulf不能算是最优越的。但在那个时期,作为第一台自制的超级计算机,其16英特尔486DX处理器和10Mbps的以太网总线,是伟大的创举。由[美国航空航天局承包人Don Becker和Thomas Sterling设计的Beowulf][14],是第一个“制造者”超级计算机。它的“计算部件”486DX PCs,成本仅有几千美元。尽管它的速度只有一位数的浮点运算,[Beowulf][15]表明了你可以用商用现货(COTS)硬件和Linux创建超级计算机。 + +我真希望我参与创作了一部分,但是我1994年就离开了戈达德,开始了作为一名全职的科技记者的职业生涯。该死。 + +但是尽管我只是使用笔记本的记者,我依然能够体会到COTS和开源软件是如何永远的改变了超级计算机。我希望现在读这篇文章的你也能。因为,无论是Raspberry Pis集群,还是超过300万英特尔的Ivy Bridge和Xeon Phi芯片的庞然大物,几乎所有当代的超级计算机都可以追溯到Beowulf。 + +-------------------------------------------------------------------------------- + +via: + +作者:[Steven J. Vaughan-Nichols][a] +译者:[xiaoyu33](https://github.com/xiaoyu33) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ +[1]:http://www.top500.org/ +[2]:http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/ +[3]:https://www.raspberrypi.org/products/model-b/ +[4]:http://www.zdnet.com/article/linux-still-rules-supercomputing/ +[5]:http://www.nytimes.com/2014/12/23/science/an-economical-way-to-save-progress.html?smid=fb-nytimes&smtyp=cur&bicmp=AD&bicmlukp=WT.mc_id&bicmst=1409232722000&bicmet=1419773522000&_r=4 +[6]:http://www.computerworld.com/article/2529932/cybercrime-hacking/researchers-hack-verisign-s-ssl-scheme-for-securing-web-sites.html +[7]:http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html +[8]:http://www.zdnet.com/article/parallella-the-99-linux-supercomputer/ 
+[9]:http://blogs.nvidia.com/blog/2014/03/25/titan-z/ +[10]:http://www.amd.com/en-us/press-releases/Pages/amd-flagship-professional-2014apr7.aspx +[11]:http://en.intelextrememasters.com/news/check-out-the-intel-extreme-masters-katowice-prize-money-distribution/ +[12]:http://www.google.com/url?q=http%3A%2F%2Fen.intelextrememasters.com%2Fnews%2Fcheck-out-the-intel-extreme-masters-katowice-prize-money-distribution%2F&sa=D&sntz=1&usg=AFQjCNE6yoAGGz-Hpi2tPF4gdhuPBEckhQ +[13]:http://www.beowulf.org/overview/history.html +[14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html +[15]:http://www.beowulf.org/ diff --git a/sources/talk/20150810 For Linux, Supercomputers R Us.md b/sources/talk/20150810 For Linux, Supercomputers R Us.md deleted file mode 100644 index 8f7302cca1..0000000000 --- a/sources/talk/20150810 For Linux, Supercomputers R Us.md +++ /dev/null @@ -1,60 +0,0 @@ -Translating by xiaoyu33 -For Linux, Supercomputers R Us -================================================================================ -![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) -Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons - -> Almost all supercomputers run Linux, including the ones built from Raspberry Pi boards and PlayStation 3 game consoles - -Supercomputers are serious things, called on to do serious computing. They tend to be engaged in serious pursuits like atomic bomb simulations, climate modeling and high-level physics. Naturally, they cost serious money. At the very top of the latest [Top500][1] supercomputer ranking is the Tianhe-2 supercomputer at China’s National University of Defense Technology. It cost about $390 million to build. - -But then there’s the supercomputer that Joshua Kiepert, a doctoral student at Boise State’s Electrical and Computer Engineering department, [created with Raspberry Pi computers][2].It cost less than $2,000. 
- -No, I’m not making that up. It’s an honest-to-goodness supercomputer made from overclocked 1-GHz [Model B Raspberry Pi][3] ARM11 processors with Videocore IV GPUs. Each one comes with 512MB of RAM, a pair of USB ports and a 10/100 BaseT Ethernet port. - -And what do the Tianhe-2 and the Boise State supercomputer have in common? They both run Linux. As do [486 out of the world’s fastest 500 supercomputers][4]. It’s part of a domination of the category that began over 20 years ago. And now it’s trickling down to built-on-the-cheap supercomputers. Because Kiepert’s machine isn’t the only budget number cruncher out there. - -Gaurav Khanna, an associate professor of physics at the University of Massachusetts Dartmouth, created a [supercomputer with something shy of 200 PlayStation 3 video game consoles][5]. - -The PlayStations are powered by a 3.2-GHz PowerPC-based Power Processing Element. Each comes with 512MB of RAM. You can still buy one, although Sony will be phasing them out by year’s end, for just over $200. Khanna started with only 16 PlayStation 3s for his first supercomputer, so you too could put a supercomputer on your credit card for less than four grand. - -These machines may be built from toys, but they’re not playthings. Khanna has done serious astrophysics on his rig. A white-hat hacking group used a similar [PlayStation 3 supercomputer in 2008 to crack the SSL MD5 hashing algorithm][6] in 2008. - -Two years later, the Air Force Research Laboratory [Condor Cluster was using 1,760 Sony PlayStation 3 processors][7] and 168 general-purpose graphical processing units. This bargain-basement supercomputer runs at about 500TFLOPs, or 500 trillion floating point operations per second. - -Other cheap options for home supercomputers include specialist parallel-processing boards such as the [$99 credit-card-sized Parallella board][8], and high-end graphics boards such as [Nvidia’s Titan Z][9] and [AMD’s FirePro W9100][10]. 
Those high-end boards, coveted by gamers with visions of a dream machine or even a chance at winning the first-place prize of over $100,000 in the [Intel Extreme Masters World Championship League of][11] [Legends][12], cost considerably more, retailing for about $3,000. On the other hand, a single one can deliver over 2.5TFLOPS all by itself, and for scientists and researchers, they offer an affordable way to get a supercomputer they can call their own. - -As for the Linux connection, that all started in 1994 at the Goddard Space Flight Center with the first [Beowulf supercomputer][13]. - -By our standards, there wasn’t much that was super about the first Beowulf. But in its day, the first homemade supercomputer, with its 16 Intel 486DX processors and 10Mbps Ethernet for the bus, was great. [Beowulf, designed by NASA contractors Don Becker and Thomas Sterling][14], was the first “maker” supercomputer. Its “compute components,” 486DX PCs, cost only a few thousand dollars. While its speed was only in single-digit gigaflops, [Beowulf][15] showed you could build supercomputers from commercial off-the-shelf (COTS) hardware and Linux. - -I wish I’d had a part in its creation, but I’d already left Goddard by 1994 for a career as a full-time technology journalist. Darn it! - -But even from this side of my reporter’s notebook, I can still appreciate how COTS and open-source software changed supercomputing forever. I hope you can too. Because, whether it’s a cluster of Raspberry Pis or a monster with over 3 million Intel Ivy Bridge and Xeon Phi chips, almost all of today’s supercomputers trace their ancestry to Beowulf. - --------------------------------------------------------------------------------- - -via: - -作者:[Steven J. 
Vaughan-Nichols][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ -[1]:http://www.top500.org/ -[2]:http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/ -[3]:https://www.raspberrypi.org/products/model-b/ -[4]:http://www.zdnet.com/article/linux-still-rules-supercomputing/ -[5]:http://www.nytimes.com/2014/12/23/science/an-economical-way-to-save-progress.html?smid=fb-nytimes&smtyp=cur&bicmp=AD&bicmlukp=WT.mc_id&bicmst=1409232722000&bicmet=1419773522000&_r=4 -[6]:http://www.computerworld.com/article/2529932/cybercrime-hacking/researchers-hack-verisign-s-ssl-scheme-for-securing-web-sites.html -[7]:http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html -[8]:http://www.zdnet.com/article/parallella-the-99-linux-supercomputer/ -[9]:http://blogs.nvidia.com/blog/2014/03/25/titan-z/ -[10]:http://www.amd.com/en-us/press-releases/Pages/amd-flagship-professional-2014apr7.aspx -[11]:http://en.intelextrememasters.com/news/check-out-the-intel-extreme-masters-katowice-prize-money-distribution/ -[12]:http://www.google.com/url?q=http%3A%2F%2Fen.intelextrememasters.com%2Fnews%2Fcheck-out-the-intel-extreme-masters-katowice-prize-money-distribution%2F&sa=D&sntz=1&usg=AFQjCNE6yoAGGz-Hpi2tPF4gdhuPBEckhQ -[13]:http://www.beowulf.org/overview/history.html -[14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html -[15]:http://www.beowulf.org/ From 0584a85ed9e385ce4967c0563afe1e586c995144 Mon Sep 17 00:00:00 2001 From: XIAOYU <1136299502@qq.com> Date: Tue, 11 Aug 2015 16:47:43 +0800 Subject: [PATCH 128/697] translated translated --- published/20150810 For Linux, Supercomputers R Us.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/published/20150810 For Linux, Supercomputers R Us.md index 
ef9b32684c..e5022a658f 100644 --- a/published/20150810 For Linux, Supercomputers R Us.md +++ b/published/20150810 For Linux, Supercomputers R Us.md @@ -1,4 +1,4 @@ -Linux:我们最好用的超级计算机系统 +Linux:称霸超级计算机系统 ================================================================================ ![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) 首图来源:By Michel Ngilen,[ CC BY 2.0 ], via Wikimedia Commons From b0b4e31da96977b4d3a15eb0d978e2d06db2df13 Mon Sep 17 00:00:00 2001 From: XIAOYU <1136299502@qq.com> Date: Tue, 11 Aug 2015 16:56:12 +0800 Subject: [PATCH 129/697] Rename published/20150810 For Linux, Supercomputers R Us.md to translated/talk/20150810 For Linux, Supercomputers R Us.md --- .../talk}/20150810 For Linux, Supercomputers R Us.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {published => translated/talk}/20150810 For Linux, Supercomputers R Us.md (100%) diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/translated/talk/20150810 For Linux, Supercomputers R Us.md similarity index 100% rename from published/20150810 For Linux, Supercomputers R Us.md rename to translated/talk/20150810 For Linux, Supercomputers R Us.md From b8f075d64a773a10553643568f5dde7b83a4d290 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 11 Aug 2015 20:15:34 +0800 Subject: [PATCH 130/697] [Translated]RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md --- ...or Analyzing text with grep and regexps.md | 256 ----------------- ...or Analyzing text with grep and regexps.md | 258 ++++++++++++++++++ 2 files changed, 258 insertions(+), 256 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with 
grep and regexps.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md b/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md deleted file mode 100644 index f3de8528fc..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md +++ /dev/null @@ -1,256 +0,0 @@ -FSSlc translating - -RHCSA Series: Editing Text Files with Nano and Vim / Analyzing text with grep and regexps – Part 4 -================================================================================ -Every system administrator has to deal with text files as part of his daily responsibilities. That includes editing existing files (most likely configuration files), or creating new ones. It has been said that if you want to start a holy war in the Linux world, you can ask sysadmins what their favorite text editor is and why. We are not going to do that in this article, but will present a few tips that will be helpful to use two of the most widely used text editors in RHEL 7: nano (due to its simplicity and easiness of use, specially to new users), and vi/m (due to its several features that convert it into more than a simple editor). I am sure that you can find many more reasons to use one or the other, or perhaps some other editor such as emacs or pico. It’s entirely up to you. - -![Learn Nano and vi Editors](http://www.tecmint.com/wp-content/uploads/2015/03/Learn-Nano-and-vi-Editors.png) - -RHCSA: Editing Text Files with Nano and Vim – Part 4 - -### Editing Files with Nano Editor ### - -To launch nano, you can either just type nano at the command prompt, optionally followed by a filename (in this case, if the file exists, it will be opened in edition mode). 
If the file does not exist, or if we omit the filename, nano will also be opened in edition mode but will present a blank screen for us to start typing: - -![Nano Editor](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Editor.png) - -Nano Editor - -As you can see in the previous image, nano displays at the bottom of the screen several functions that are available via the indicated shortcuts (^, aka caret, indicates the Ctrl key). To name a few of them: - -- Ctrl + G: brings up the help menu with a complete list of functions and descriptions:Ctrl + X: exits the current file. If changes have not been saved, they are discarded. -- Ctrl + R: lets you choose a file to insert its contents into the present file by specifying a full path. - -![Nano Editor Help Menu](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Help.png) - -Nano Editor Help Menu - -- Ctrl + O: saves changes made to a file. It will let you save the file with the same name or a different one. Then press Enter to confirm. - -![Nano Editor Save Changes Mode](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Save-Changes.png) - -Nano Editor Save Changes Mode - -- Ctrl + X: exits the current file. If changes have not been saved, they are discarded. -- Ctrl + R: lets you choose a file to insert its contents into the present file by specifying a full path. - -![Nano: Insert File Content to Parent File](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-File-Content.png) - -Nano: Insert File Content to Parent File - -will insert the contents of /etc/passwd into the current file. - -- Ctrl + K: cuts the current line. -- Ctrl + U: paste. -- Ctrl + C: cancels the current operation and places you at the previous screen. - -To easily navigate the opened file, nano provides the following features: - -- Ctrl + F and Ctrl + B move the cursor forward or backward, whereas Ctrl + P and Ctrl + N move it up or down one line at a time, respectively, just like the arrow keys. 
-- Ctrl + space and Alt + space move the cursor forward and backward one word at a time. - -Finally, - -- Ctrl + _ (underscore) and then entering X,Y will take you precisely to Line X, column Y, if you want to place the cursor at a specific place in the document. - -![Navigate to Line Numbers in Nano](http://www.tecmint.com/wp-content/uploads/2015/03/Column-Numbers.png) - -Navigate to Line Numbers in Nano - -The example above will take you to line 15, column 14 in the current document. - -If you can recall your early Linux days, specially if you came from Windows, you will probably agree that starting off with nano is the best way to go for a new user. - -### Editing Files with Vim Editor ### - -Vim is an improved version of vi, a famous text editor in Linux that is available on all POSIX-compliant *nix systems, such as RHEL 7. If you have the chance and can install vim, go ahead; if not, most (if not all) the tips given in this article should also work. - -One of vim’s distinguishing features is the different modes in which it operates: - - -- Command mode will allow you to browse through the file and enter commands, which are brief and case-sensitive combinations of one or more letters. If you need to repeat one of them a certain number of times, you can prefix it with a number (there are only a few exceptions to this rule). For example, yy (or Y, short for yank) copies the entire current line, whereas 4yy (or 4Y) copies the entire current line along with the next three lines (4 lines in total). -- In ex mode, you can manipulate files (including saving a current file and running outside programs or commands). To enter ex mode, we must type a colon (:) starting from command mode (or in other words, Esc + :), directly followed by the name of the ex-mode command that you want to use. -- In insert mode, which is accessed by typing the letter i, we simply enter text. Most keystrokes result in text appearing on the screen. 
-- We can always enter command mode (regardless of the mode we’re working on) by pressing the Esc key. - -Let’s see how we can perform the same operations that we outlined for nano in the previous section, but now with vim. Don’t forget to hit the Enter key to confirm the vim command! - -To access vim’s full manual from the command line, type :help while in command mode and then press Enter: - -![vim Edito Help Menu](http://www.tecmint.com/wp-content/uploads/2015/03/vim-Help-Menu.png) - -vim Edito Help Menu - -The upper section presents an index list of contents, with defined sections dedicated to specific topics about vim. To navigate to a section, place the cursor over it and press Ctrl + ] (closing square bracket). Note that the bottom section displays the current file. - -1. To save changes made to a file, run any of the following commands from command mode and it will do the trick: - - :wq! - :x! - ZZ (yes, double Z without the colon at the beginning) - -2. To exit discarding changes, use :q!. This command will also allow you to exit the help menu described above, and return to the current file in command mode. - -3. Cut N number of lines: type Ndd while in command mode. - -4. Copy M number of lines: type Myy while in command mode. - -5. Paste lines that were previously cutted or copied: press the P key while in command mode. - -6. To insert the contents of another file into the current one: - - :r filename - -For example, to insert the contents of `/etc/fstab`, do: - -![Insert Content of File in vi Editor](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Content-vi-Editor.png) - -Insert Content of File in vi Editor - -7. To insert the output of a command into the current document: - - :r! 
command - -For example, to insert the date and time in the line below the current position of the cursor: - -![Insert Time an Date in vi Editor](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Time-and-Date-in-vi-Editor.png) - -Insert Time an Date in vi Editor - -In another article that I wrote for, ([Part 2 of the LFCS series][1]), I explained in greater detail the keyboard shortcuts and functions available in vim. You may want to refer to that tutorial for further examples on how to use this powerful text editor. - -### Analyzing Text with Grep and Regular Expressions ### - -By now you have learned how to create and edit files using nano or vim. Say you become a text editor ninja, so to speak – now what? Among other things, you will also need how to search for regular expressions inside text. - -A regular expression (also known as “regex” or “regexp“) is a way of identifying a text string or pattern so that a program can compare the pattern against arbitrary text strings. Although the use of regular expressions along with grep would deserve an entire article on its own, let us review the basics here: - -**1. The simplest regular expression is an alphanumeric string (i.e., the word “svm”) or two (when two are present, you can use the | (OR) operator):** - - # grep -Ei 'svm|vmx' /proc/cpuinfo - -The presence of either of those two strings indicate that your processor supports virtualization: - -![Regular Expression Example](http://www.tecmint.com/wp-content/uploads/2015/03/Regular-Expression-Example.png) - -Regular Expression Example - -**2. A second kind of a regular expression is a range list, enclosed between square brackets.** - -For example, `c[aeiou]t` matches the strings cat, cet, cit, cot, and cut, whereas `[a-z]` and `[0-9]` match any lowercase letter or decimal digit, respectively. If you want to repeat the regular expression X certain number of times, type `{X}` immediately following the regexp. 
- -For example, let’s extract the UUIDs of storage devices from `/etc/fstab`: - - # grep -Ei '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}' -o /etc/fstab - -![Extract String from a File in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/Extract-String-from-a-File.png) - -Extract String from a File - -The first expression in brackets `[0-9a-f]` is used to denote lowercase hexadecimal characters, and `{8}` is a quantifier that indicates the number of times that the preceding match should be repeated (the first sequence of characters in an UUID is a 8-character long hexadecimal string). - -The parentheses, the `{4}` quantifier, and the hyphen indicate that the next sequence is a 4-character long hexadecimal string, and the quantifier that follows `({3})` denote that the expression should be repeated 3 times. - -Finally, the last sequence of 12-character long hexadecimal string in the UUID is retrieved with `[0-9a-f]{12}`, and the -o option prints only the matched (non-empty) parts of the matching line in /etc/fstab. - -**3. POSIX character classes.** - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Character Class | Matches… |
| --- | --- |
| `[[:alnum:]]` | Any alphanumeric [a-zA-Z0-9] character |
| `[[:alpha:]]` | Any alphabetic [a-zA-Z] character |
| `[[:blank:]]` | Spaces or tabs |
| `[[:cntrl:]]` | Any control characters (ASCII 0 to 32) |
| `[[:digit:]]` | Any numeric digits [0-9] |
| `[[:graph:]]` | Any visible characters |
| `[[:lower:]]` | Any lowercase [a-z] character |
| `[[:print:]]` | Any non-control characters |
| `[[:space:]]` | Any whitespace |
| `[[:punct:]]` | Any punctuation marks |
| `[[:upper:]]` | Any uppercase [A-Z] character |
| `[[:xdigit:]]` | Any hex digits [0-9a-fA-F] |
| `[:word:]` | Any letters, numbers, and underscores [a-zA-Z0-9_] |
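Before moving on, here is a small way to try two of these classes together on one line (the passwd-style record below is made up for the demo — a hypothetical "deploy" user, not data from the article):

```shell
# A made-up /etc/passwd-style record, just to exercise the classes
line='deploy:x:1004:1004:Deploy User:/home/deploy:/bin/bash'

# [[:digit:]]{4} with -o pulls out the 4-digit UID and GID fields
echo "$line" | grep -oE '[[:digit:]]{4}'

# ^[[:lower:]]+: counts 1 because the login name is all lowercase
echo "$line" | grep -cE '^[[:lower:]]+:'
```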
- -For example, we may be interested in finding out which UIDs and GIDs (refer to [Part 2][2] of this series to refresh your memory) are in use by the real users that have been added to our system. Thus, we will search for sequences of 4 digits in /etc/passwd: - - # grep -Ei '[[:digit:]]{4}' /etc/passwd - -![Search For a String in File](http://www.tecmint.com/wp-content/uploads/2015/03/Search-For-String-in-File.png) - -Search For a String in File - -The above example may not be the best real-world use case for regular expressions, but it clearly illustrates how to use POSIX character classes to analyze text along with grep. - -### Conclusion ### - -In this article we have provided some tips to make the most of nano and vim, two text editors for command-line users. Both tools are supported by extensive documentation, which you can consult on their respective official web sites (links given below) and by using the suggestions given in [Part 1][3] of this series. - -#### Reference Links #### - -- [http://www.nano-editor.org/][4] -- [http://www.vim.org/][5] - -------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-how-to-use-nano-vi-editors/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/vi-editor-usage/ -[2]:http://www.tecmint.com/file-and-directory-management-in-linux/ -[3]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ -[4]:http://www.nano-editor.org/ -[5]:http://www.vim.org/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md b/translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md 
new file mode 100644 index 0000000000..8438ec0351 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md @@ -0,0 +1,258 @@ +RHCSA 系列:使用 Nano 和 Vim 编辑文本文件/使用 grep 和 regexps 分析文本 – Part 4 +================================================================================ +作为系统管理员的日常职责的一部分,每个系统管理员都必须处理文本文件,这包括编辑现存文件(大多可能是配置文件),或创建新的文件。有这样一个说法,假如你想在 Linux 世界中挑起一场圣战,你可以询问系统管理员们,什么是他们最喜爱的编辑器以及为什么。在这篇文章中,我们并不打算那样做,但我们将向你呈现一些技巧,这些技巧对使用两款在 RHEL 7 中最为常用的文本编辑器: nano(由于其简单和易用,特别是对于新手来说) 和 vi/m(由于其自身的几个特色使得它不仅仅是一个简单的编辑器)来说都大有裨益。我确信你可以找到更多的理由来使用其中的一个或另一个,或许其他的一些编辑器如 emacs 或 pico。这完全取决于你。 + +![学习 Nano 和 vi 编辑器](http://www.tecmint.com/wp-content/uploads/2015/03/Learn-Nano-and-vi-Editors.png) + +RHCSA: 使用 Nano 和 Vim 编辑文本文件 – Part 4 + +### 使用 Nano 编辑器来编辑文件 ### + +要启动 nano,你可以在命令提示符下输入 `nano`,或选择性地跟上一个文件名(在这种情况下,若文件存在,它将在编辑模式中被打开)。若文件不存在,或我们省略了文件名, nano 也将在 编辑模式下开启,但将为我们开启一个空白屏以便开始输入: + +![Nano 编辑器](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Editor.png) + +Nano 编辑器 + +正如你在上一张图片中所见的那样, nano 在屏幕的底部呈现出一些功能,它们可以通过暗指的快捷键来触发(^,即插入记号,代指 Ctrl 键)。它们中的一些是: + +- Ctrl + G: 触发一个帮助菜单,带有一个关于功能和相应的描述的完整列表; +- Ctrl + X: 离开当前文件,假如更改没有被保存,则它们将被丢弃; +- Ctrl + R: 通过指定一个完整的文件路径,让你选择一个文件来将该文件的内容插入到当前文件中; + +![Nano 编辑器帮助菜单](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Help.png) + +Nano 编辑器帮助菜单 + +- Ctrl + O: 保存更改到一个文件。它将让你用一个与源文件相同或不同的名称来保存该文件,然后按 Enter 键来确认。 + +![Nano 编辑器保存更改模式](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Save-Changes.png) + +Nano 编辑器的保存更改模式 + +- Ctrl + X: 离开当前文件,假如更改没有被保存,则它们将被丢弃; +- Ctrl + R: 通过指定一个完整的文件路径,让你选择一个文件来将该文件的内容插入到当前文件中; + +![Nano: 插入文件内容到主文件中](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-File-Content.png) + +Nano: 插入文件内容到主文件中 + +上图的操作将把 `/etc/passwd` 的内容插入到当前文件中。 + +- Ctrl + K: 剪切当前行; +- Ctrl + U: 粘贴; +- Ctrl + C: 取消当前的操作并返回先前的屏幕; + +为了轻松地在打开的文件中浏览, nano 提供了下面的功能: + +- Ctrl + F 和 Ctrl + B 分别先前或向后移动光标;而 Ctrl + P 和 Ctrl + N 则分别向上或向下移动一行,功能与箭头键相同; 
+- Ctrl + space 和 Alt + space 分别向前或向后移动一个单词; + +最后, + +- 假如你想将光标移动到文档中的特定位置,使用 Ctrl + _ (下划线) 并接着输入 X,Y 将准确地带你到 第 X 行,第 Y 列。 + +![在 nano 中定位到具体的行,列](http://www.tecmint.com/wp-content/uploads/2015/03/Column-Numbers.png) + +在 nano 中定位到具体的行和列 + +上面的例子将带你到当前文档的第 15 行,第 14 列。 + +假如你可以回忆起你早期的 Linux 岁月,特别是当你刚从 Windows 迁移到 Linux 中,你就可能会同意:对于一个新手来说,使用 nano 来开始学习是最好的方式。 + +### 使用 Vim 编辑器来编辑文件 ### + + +Vim 是 vi 的加强版本,它是 Linux 中一个著名的文本编辑器,可在所有兼容 POSIX 的 *nix 系统中获取到,例如在 RHEL 7 中。假如你有机会并可以安装 Vim,请继续;假如不能,这篇文章中的大多数(若不是全部)的提示也应该可以正常工作。 + +Vim 的一个出众的特点是可以在多个不同的模式中进行操作: + +- 命令模式将允许你在文件中跳转和输入命令,这些命令是由一个或多个字母组成的简洁且对大小写敏感的组合。假如你想重复执行某个命令特定次,你可以在这个命令前加上需要重复的次数(这个规则只有极少数例外)。例如, yy(或 Y,yank 的缩写)可以复制整个当前行,而 4yy(或 4Y)则复制当前行以及接下来的 3 行(总共 4 行)。 +- 在 ex 模式中,你可以操作文件(包括保存当前文件和运行外部的程序或命令)。要进入 ex 模式,你必须在命令模式中(换句话说,Esc + :)输入一个冒号(:),再直接跟上你想使用的 ex 模式命令的名称。 +- 对于插入模式,可以输入字母 i 进入,我们只需要输入文字即可。大多数的键击结果都将出现在屏幕中的文本中。 +- 我们总是可以通过敲击 Esc 键来进入命令模式(无论我们正工作在哪个模式下)。 + +现在,让我们看看如何在 vim 中执行在上一节列举的针对 nano 的相同的操作。不要忘记敲击 Enter 键来确认 vim 命令。 + +为了从命令行中获取 vim 的完整手册,在命令模式下键入 `:help` 并敲击 Enter 键: + +![vim 编辑器帮助菜单](http://www.tecmint.com/wp-content/uploads/2015/03/vim-Help-Menu.png) + +vim 编辑器帮助菜单 + +上面的小节呈现出一个目录列表,而定义过的小节则主要关注 Vim 的特定话题。要浏览某一个小节,可以将光标放到它的上面,然后按 Ctrl + ] (闭方括号)。注意,底部的小节展示的是当前文件的内容。 + +1. 要保存更改到文件,在命令模式中运行下面命令中的任意一个,就可以达到这个目的: + +``` +:wq! +:x! +ZZ (是的,两个 ZZ,前面无需添加冒号) +``` + +2. 要离开并丢弃更改,使用 `:q!`。这个命令也将允许你离开上面描述过的帮助菜单,并返回到命令模式中的当前文件。 + +3. 剪切 N 行:在命令模式中键入 `Ndd`。 + +4. 复制 M 行:在命令模式中键入 `Myy`。 + +5. 粘贴先前剪贴或复制过的行:在命令模式中按 `P`键。 + +6. 要插入另一个文件的内容到当前文件: + + :r filename + +例如,插入 `/etc/fstab` 的内容,可以这样做: + +![在 vi 编辑器中插入文件的内容](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Content-vi-Editor.png) + +在 vi 编辑器中插入文件的内容 + +7. 插入一个命令的输出到当前文档: + + :r! 
command + +例如,要在光标所在的当前位置后面插入日期和时间: + +![在 vi 编辑器中插入时间和日期](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Time-and-Date-in-vi-Editor.png) + +在 vi 编辑器中插入时间和日期 + +在我写的另一篇文章([LFCS 系列的 Part 2][1])中,我更加详细地解释了在 vim 中可用的键盘快捷键和功能。或许你可以参考那个教程来查看如何使用这个强大的文本编辑器的更深入的例子。 + +### 使用 Grep 和正则表达式来分析文本 ### + +到现在为止,你已经学习了如何使用 nano 或 vim 创建和编辑文件。打个比方说,假如你成为了一个文本编辑器忍者 – 那又怎样呢? 除此之外,你也需要知道如何在文本中搜索正则表达式。 + +正则表达式(也称为 "regex" 或 "regexp") 是一种识别一个特定文本字符串或模式的方式,使得一个程序可以将这个模式和任意的文本字符串相比较。尽管利用 grep 来使用正则表达式值得用一整篇文章来描述,这里就让我们复习一些基本的知识: + +**1. 最简单的正则表达式是一个由数字和字母构成的字符串(即,单词 "svm") 或两个(在使用两个字符串时,你可以使用 `|`(或) 操作符):** + + # grep -Ei 'svm|vmx' /proc/cpuinfo + +上面命令的输出结果中若有这两个字符串之一的出现,则标志着你的处理器支持虚拟化: + +![正则表达式示例](http://www.tecmint.com/wp-content/uploads/2015/03/Regular-Expression-Example.png) + +正则表达式示例 + +**2. 第二种正则表达式是一个范围列表,由方括号包裹。** + +例如, `c[aeiou]t` 匹配字符串 cat,cet,cit,cot 和 cut,而 `[a-z]` 和 `[0-9]` 则相应地匹配小写字母或十进制数字。假如你想重复正则表达式 X 次,在正则表达式的后面立即输入 `{X}`即可。 + +例如,让我们从 `/etc/fstab` 中析出存储设备的 UUID: + + # grep -Ei '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}' -o /etc/fstab + +![在 Linux 中从一个文件中析出字符串](http://www.tecmint.com/wp-content/uploads/2015/03/Extract-String-from-a-File.png) + +从一个文件中析出字符串 + +方括号中的第一个表达式 `[0-9a-f]` 被用来表示小写的十六进制字符,`{8}`是一个量词,暗示前面匹配的字符串应该重复的次数(在一个 UUID 中的开头序列是一个 8 个字符长的十六进制字符串)。 + +圆括号、量词 `{4}`和连字符表示下一个序列是一个 4 个字符长的十六进制字符串,接着的量词 `({3})`表示前面的表达式要重复 3 次。 + +最后,在 UUID 中的最后一个 12 个字符长的十六进制字符串可以由 `[0-9a-f]{12}` 取得, `-o` 选项表示只打印出在 `/etc/fstab`中匹配行中的匹配的(非空)部分。 + +**3. POSIX 字符类** + +注:表格 + +
| 字符类 | 匹配 … |
| --- | --- |
| `[[:alnum:]]` | 任意字母或数字 [a-zA-Z0-9] |
| `[[:alpha:]]` | 任意字母 [a-zA-Z] |
| `[[:blank:]]` | 空格或制表符 |
| `[[:cntrl:]]` | 任意控制字符 (ASCII 码的 0 至 32) |
| `[[:digit:]]` | 任意数字 [0-9] |
| `[[:graph:]]` | 任意可见字符 |
| `[[:lower:]]` | 任意小写字母 [a-z] |
| `[[:print:]]` | 任意非控制字符 |
| `[[:space:]]` | 任意空格 |
| `[[:punct:]]` | 任意标点字符 |
| `[[:upper:]]` | 任意大写字母 [A-Z] |
| `[[:xdigit:]]` | 任意十六进制数字 [0-9a-fA-F] |
| `[:word:]` | 任意字母,数字和下划线 [a-zA-Z0-9_] |
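在进入下面的例子之前,可以先用一两行虚构的示例数据快速试验这些字符类(以下示例数据均为假设,并非来自原文):

```shell
# [[:xdigit:]]{8} 只会匹配连续 8 个十六进制字符的片段
echo 'MAC=0a:1b:2C UUID-start=3fa85f64' | grep -oE '[[:xdigit:]]{8}'

# 一个大写字母加若干小写字母:匹配出每个首字母大写的单词
echo 'Hello World 123' | grep -oE '[[:upper:]][[:lower:]]+'
```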
+ +例如,我们可能会对查找已添加到我们系统中给真实用户的 UID 和 GID(参考这个系列的 [Part 2][2]来回忆起这些知识)感兴趣。那么,我们将在 `/etc/passwd` 文件中查找 4 个字符长的序列: + + # grep -Ei [[:digit:]]{4} /etc/passwd + +![在文件中查找一个字符串](http://www.tecmint.com/wp-content/uploads/2015/03/Search-For-String-in-File.png) + +在文件中查找一个字符串 + +上面的示例可能不是真实世界中使用正则表达式的最好案例,但它清晰地启发了我们如何使用 POSIX 字符类来使用 grep 分析文本。 + +### 总结 ### + + +在这篇文章中,我们已经提供了一些技巧来最大地利用针对命令行用户的两个文本编辑器 nano 和 vim,这两个工具都有相关的扩展文档可供阅读,你可以分别查询它们的官方网站(链接在下面给出)以及使用这个系列中的 [Part 1][3] 给出的建议。 + +#### 参考文件链接 #### + +- [http://www.nano-editor.org/][4] +- [http://www.vim.org/][5] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-how-to-use-nano-vi-editors/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/vi-editor-usage/ +[2]:http://www.tecmint.com/file-and-directory-management-in-linux/ +[3]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ +[4]:http://www.nano-editor.org/ +[5]:http://www.vim.org/ From 8e789f8324e065ffd379940e6aaf70234213dbcc Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 11 Aug 2015 20:20:36 +0800 Subject: [PATCH 131/697] Update 20150811 How to download apk files from Google Play Store on Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...w to download apk files from Google Play Store on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md b/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md index 529e877d7b..50bf618e86 100644 --- a/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md +++ 
b/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md @@ -1,3 +1,5 @@ +FSSlc translating + How to download apk files from Google Play Store on Linux ================================================================================ Suppose you want to install an Android app on your Android device. However, for whatever reason, you cannot access Google Play Store on the Android device. What can you do then? One way to install the app without Google Play Store access is to download its APK file using some other means, and then [install the APK][1] file on the Android device manually. @@ -96,4 +98,4 @@ via: http://xmodulo.com/download-apk-files-google-play-store.html [3]:https://addons.mozilla.org/en-us/firefox/addon/apk-downloader/ [4]:http://codingteam.net/project/googleplaydownloader [5]:http://packages.ubuntu.com/vivid/python-ndg-httpsclient -[6]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html \ No newline at end of file +[6]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html From e0f43461b533db0e4444eb78cc24e749d8209713 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 11 Aug 2015 22:28:25 +0800 Subject: [PATCH 132/697] PUB:20150810 For Linux, Supercomputers R Us @xiaoyu33 --- ...20150810 For Linux, Supercomputers R Us.md | 61 +++++++++++++++++++ ...20150810 For Linux, Supercomputers R Us.md | 59 ------------------ 2 files changed, 61 insertions(+), 59 deletions(-) create mode 100644 published/20150810 For Linux, Supercomputers R Us.md delete mode 100644 translated/talk/20150810 For Linux, Supercomputers R Us.md diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/published/20150810 For Linux, Supercomputers R Us.md new file mode 100644 index 0000000000..e173d7513c --- /dev/null +++ b/published/20150810 For Linux, Supercomputers R Us.md @@ -0,0 +1,61 @@ +Linux:称霸超级计算机系统 +================================================================================ + +> 几乎所有超级计算机上运行的系统都是 
Linux,其中包括那些由树莓派(Raspberry Pi)板卡和 PlayStation 3游戏机组成的计算机。 + +![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) + +*题图来源:By Michel Ngilen,[ CC BY 2.0 ], via Wikimedia Commons* + +超级计算机是一种严肃的工具,做的都是高大上的计算。它们往往从事于严肃的用途,比如原子弹模拟、气候模拟和高等物理学。当然,它们的花费也很高大上。在最新的超级计算机 [Top500][1] 排名中,中国国防科技大学研制的天河 2 号位居第一,而天河 2 号的建造耗资约 3.9 亿美元! + +但是,也有一个超级计算机,是由博伊西州立大学电气和计算机工程系的一名在读博士 Joshua Kiepert [用树莓派构建完成][2]的,其建造成本低于2000美元。 + +不,这不是我编造的。它一个真实的超级计算机,由超频到 1GHz 的 [B 型树莓派][3]的 ARM11 处理器与 VideoCore IV GPU 组成。每个都配备了 512MB 的内存、一对 USB 端口和 1 个 10/100 BaseT 以太网端口。 + +那么天河 2 号和博伊西州立大学的超级计算机有什么共同点吗?它们都运行 Linux 系统。世界最快的超级计算机[前 500 强中有 486][4] 个也同样运行的是 Linux 系统。这是从 20 多年前就开始的格局。而现在的趋势是超级计算机开始由廉价单元组成,因为 Kiepert 的机器并不是唯一一个无所谓预算的超级计算机。 + +麻省大学达特茅斯分校的物理学副教授 Gaurav Khanna 创建了一台超级计算机仅用了[不足 200 台的 PlayStation3 视频游戏机][5]。 + +PlayStation 游戏机由一个 3.2 GHz 的基于 PowerPC 的 Power 处理器所驱动。每个都配有 512M 的内存。你现在仍然可以花 200 美元买到一个,尽管索尼将在年底逐步淘汰它们。Khanna 仅用了 16 个 PlayStation 3 构建了他第一台超级计算机,所以你也可以花费不到 4000 美元就拥有你自己的超级计算机。 + +这些机器可能是用玩具建成的,但他们不是玩具。Khanna 已经用它做了严肃的天体物理学研究。一个白帽子黑客组织使用了类似的 [PlayStation 3 超级计算机在 2008 年破解了 SSL 的 MD5 哈希算法][6]。 + +两年后,美国空军研究实验室研制的 [Condor Cluster,使用了 1760 个索尼的 PlayStation 3 的处理器][7]和168 个通用的图形处理单元。这个低廉的超级计算机,每秒运行约 500 TFLOP ,即每秒可进行 500 万亿次浮点运算。 + +其他的一些便宜且适用于构建家庭超级计算机的构件包括,专业并行处理板卡,比如信用卡大小的 [99 美元的 Parallella 板卡][8],以及高端显卡,比如 [Nvidia 的 Titan Z][9] 和 [ AMD 的 FirePro W9100][10]。这些高端板卡的市场零售价约 3000 美元,一些想要一台梦幻般的机器的玩家为此参加了[英特尔极限大师赛:英雄联盟世界锦标赛][11],要是甚至有机会得到了第一名的话,能获得超过 10 万美元奖金。另一方面,一个能够自己提供超过 2.5TFLOPS 计算能力的计算机,对于科学家和研究人员来说,这为他们提供了一个可以拥有自己专属的超级计算机的经济的方法。 + +而超级计算机与 Linux 的连接,这一切都始于 1994 年戈达德航天中心的第一个名为 [Beowulf 超级计算机][13]。 + +按照我们的标准,Beowulf 不能算是最优越的。但在那个时期,作为第一台自制的超级计算机,它的 16 个英特尔486DX 处理器和 10Mbps 的以太网总线,是个伟大的创举。[Beowulf 是由美国航空航天局的承建商 Don Becker 和 Thomas Sterling 所设计的][14],是第一台“创客”超级计算机。它的“计算部件” 486DX PC,成本仅有几千美元。尽管它的速度只有个位数的 GFLOPS (吉拍,每秒10亿次)浮点运算,[Beowulf][15] 表明了你可以用商用现货(COTS)硬件和 Linux 创建超级计算机。 + 
+我真希望我参与创建了一部分,但是我 1994 年就离开了戈达德,开始了作为一名全职的科技记者的职业生涯。该死。 + +但是尽管我只是使用笔记本的记者,我依然能够体会到 COTS 和开源软件是如何永远的改变了超级计算机。我希望现在读这篇文章的你也能。因为,无论是 Raspberry Pi 集群,还是超过 300 万个英特尔的 Ivy Bridge 和 Xeon Phi 芯片的庞然大物,几乎所有当代的超级计算机都可以追溯到 Beowulf。 + +-------------------------------------------------------------------------------- + +via: http://www.computerworld.com/article/2960701/linux/for-linux-supercomputers-r-us.html + +作者:[Steven J. Vaughan-Nichols][a] +译者:[xiaoyu33](https://github.com/xiaoyu33) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ +[1]:http://www.top500.org/ +[2]:http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/ +[3]:https://www.raspberrypi.org/products/model-b/ +[4]:http://www.zdnet.com/article/linux-still-rules-supercomputing/ +[5]:http://www.nytimes.com/2014/12/23/science/an-economical-way-to-save-progress.html?smid=fb-nytimes&smtyp=cur&bicmp=AD&bicmlukp=WT.mc_id&bicmst=1409232722000&bicmet=1419773522000&_r=4 +[6]:http://www.computerworld.com/article/2529932/cybercrime-hacking/researchers-hack-verisign-s-ssl-scheme-for-securing-web-sites.html +[7]:http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html +[8]:http://www.zdnet.com/article/parallella-the-99-linux-supercomputer/ +[9]:http://blogs.nvidia.com/blog/2014/03/25/titan-z/ +[10]:http://www.amd.com/en-us/press-releases/Pages/amd-flagship-professional-2014apr7.aspx +[11]:http://en.intelextrememasters.com/news/check-out-the-intel-extreme-masters-katowice-prize-money-distribution/ + +[13]:http://www.beowulf.org/overview/history.html +[14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html +[15]:http://www.beowulf.org/ diff --git a/translated/talk/20150810 For Linux, Supercomputers R Us.md b/translated/talk/20150810 For Linux, Supercomputers R Us.md deleted file mode 100644 index e5022a658f..0000000000 --- 
a/translated/talk/20150810 For Linux, Supercomputers R Us.md +++ /dev/null @@ -1,59 +0,0 @@ -Linux:称霸超级计算机系统 -================================================================================ -![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) -首图来源:By Michel Ngilen,[ CC BY 2.0 ], via Wikimedia Commons - -> 几乎所有超级计算机上运行的系统都是Linux,其中包括那些由树莓派(Raspberry Pi)板和PlayStation 3游戏机板组成的计算机。 - -超级计算机是很正经的工具,目的是做严肃的计算。它们往往从事于严肃的追求,比如原子弹的模拟,气候模拟和高级物理学。当然,它们也需要大笔资金的投资。在最新的超级计算机[500强][1]排名中,中国国防科大研制的天河2号位居第一。天河2号耗资约3.9亿美元。 - -但是,也有一个超级计算机,是由博伊西州立大学电气和计算机工程系的一名在读博士Joshua Kiepert[用树莓派构建完成][2]的。其创建成本低于2000美元。 - -不,这不是我编造的。这是一个真实的超级计算机,由超频1GHz的[B型树莓派][3]ARM11处理器与VideoCore IV GPU组成。每个都配备了512MB的RAM,一对USB端口和1个10/100 BaseT以太网端口。 - -那么天河2号和博伊西州立大学的超级计算机有什么共同点?它们都运行Linux系统。世界最快的超级计算机[前500强中有486][4]个也同样运行的是Linux系统。这是20多年前就开始的一种覆盖。现在Linux开始建立于廉价的超级计算机。因为Kiepert的机器并不是唯一的预算数字计算机。 - -Gaurav Khanna,麻省大学达特茅斯分校的物理学副教授,创建了一台超级计算机仅用了[不足200的PlayStation3视频游戏机][5]。 - -PlayStation游戏机是由一个3.2 GHz的基于PowerPC的电源处理单元供电。每个都配有512M的RAM。你现在仍然可以花200美元买到一个,尽管索尼将在年底逐步淘汰它们。Khanna仅用16个PlayStation 3s构建了他第一台超级计算机,所以你也可以花费不到4000美元就拥有你自己的超级计算机。 - -这些机器可能是从玩具建成的,但他们不是玩具。Khanna已经用它做了严肃的天体物理学研究。一个白帽子黑客组织使用了类似的[PlayStation 3超级计算机在2008年破解了SSL的MD5哈希算法][6]。 - -两年后,美国空军研究实验室研制的[Condor Cluster,使用了1,760个索尼的PlayStation3的处理器][7]和168个通用的图形处理单元。这个低廉的超级计算机,每秒运行约500TFLOPs,或500万亿次浮点运算。 - -其他的一些便宜且适用于构建家庭超级计算机的构件包括,专业并行处理板比如信用卡大小[99美元的Parallella板][8],以及高端显卡比如[Nvidia的 Titan Z][9] 以及[ AMD的 FirePro W9100][10].这些高端主板市场零售价约3000美元,被一些[英特尔极限大师赛世界锦标赛英雄联盟参赛][11]玩家觊觎能够赢得的梦想的机器,c[传说][12]这项比赛第一名能获得超过10万美元奖金。另一方面,一个人能够独自提供超过2.5TFLOPS,并且他们为科学家和研究人员提供了一个经济的方法,使得他们拥有自己专属的超级计算机。 - -作为Linux的连接,这一切都开始于1994年戈达德航天中心的第一个名为[Beowulf超级计算机][13]。 - -按照我们的标准,Beowulf不能算是最优越的。但在那个时期,作为第一台自制的超级计算机,其16英特尔486DX处理器和10Mbps的以太网总线,是伟大的创举。由[美国航空航天局承包人Don Becker和Thomas Sterling设计的Beowulf][14],是第一个“制造者”超级计算机。它的“计算部件”486DX 
PCs,成本仅有几千美元。尽管它的速度只有一位数的浮点运算,[Beowulf][15]表明了你可以用商用现货(COTS)硬件和Linux创建超级计算机。 - -我真希望我参与创作了一部分,但是我1994年就离开了戈达德,开始了作为一名全职的科技记者的职业生涯。该死。 - -但是尽管我只是使用笔记本的记者,我依然能够体会到COTS和开源软件是如何永远的改变了超级计算机。我希望现在读这篇文章的你也能。因为,无论是Raspberry Pis集群,还是超过300万英特尔的Ivy Bridge和Xeon Phi芯片的庞然大物,几乎所有当代的超级计算机都可以追溯到Beowulf。 - --------------------------------------------------------------------------------- - -via: - -作者:[Steven J. Vaughan-Nichols][a] -译者:[xiaoyu33](https://github.com/xiaoyu33) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ -[1]:http://www.top500.org/ -[2]:http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/ -[3]:https://www.raspberrypi.org/products/model-b/ -[4]:http://www.zdnet.com/article/linux-still-rules-supercomputing/ -[5]:http://www.nytimes.com/2014/12/23/science/an-economical-way-to-save-progress.html?smid=fb-nytimes&smtyp=cur&bicmp=AD&bicmlukp=WT.mc_id&bicmst=1409232722000&bicmet=1419773522000&_r=4 -[6]:http://www.computerworld.com/article/2529932/cybercrime-hacking/researchers-hack-verisign-s-ssl-scheme-for-securing-web-sites.html -[7]:http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html -[8]:http://www.zdnet.com/article/parallella-the-99-linux-supercomputer/ -[9]:http://blogs.nvidia.com/blog/2014/03/25/titan-z/ -[10]:http://www.amd.com/en-us/press-releases/Pages/amd-flagship-professional-2014apr7.aspx -[11]:http://en.intelextrememasters.com/news/check-out-the-intel-extreme-masters-katowice-prize-money-distribution/ -[12]:http://www.google.com/url?q=http%3A%2F%2Fen.intelextrememasters.com%2Fnews%2Fcheck-out-the-intel-extreme-masters-katowice-prize-money-distribution%2F&sa=D&sntz=1&usg=AFQjCNE6yoAGGz-Hpi2tPF4gdhuPBEckhQ -[13]:http://www.beowulf.org/overview/history.html -[14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html -[15]:http://www.beowulf.org/ From 
cc16c9074da3fd5406449f3e7cadbfd39998790c Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 11 Aug 2015 23:13:10 +0800 Subject: [PATCH 133/697] =?UTF-8?q?20150811-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...k Traffic Analyzer--Install it on Linux.md | 62 +++++++++++++++++++ 1 file changed, 62 insertions(+) create mode 100644 sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md diff --git a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md new file mode 100644 index 0000000000..9f78722cb6 --- /dev/null +++ b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md @@ -0,0 +1,62 @@ +Darkstat is a Web Based Network Traffic Analyzer – Install it on Linux +================================================================================ +Darkstat is a simple, web-based network traffic analyzer application. It works on many popular operating systems like Linux, Solaris, Mac and AIX. It keeps running in the background as a daemon, continuously collecting and sniffing network data, and presents it in an easily understandable format within its web interface. It can generate traffic reports for hosts, identify which ports are open on some particular host, and is an IPv6-compliant application. Let’s see how we can install and configure it on a Linux operating system. + +### Installing Darkstat on Linux ### + +**Install Darkstat on Fedora/CentOS/RHEL:** + +In order to install it on Fedora/RHEL and CentOS Linux distributions, run the following command on the terminal. + + sudo yum install darkstat + +**Install Darkstat on Ubuntu/Debian:** + +Run the following on the terminal to install it on Ubuntu and Debian. 
+ + sudo apt-get install darkstat + +Congratulations, Darkstat has been installed on your Linux system now. + +### Configuring Darkstat ### + +In order to run this application properly, we need to perform some basic configuration. Edit the /etc/darkstat/init.cfg file in the Gedit text editor by running the following command on the terminal. + + sudo gedit /etc/darkstat/init.cfg + +![](http://linuxpitstop.com/wp-content/uploads/2015/08/13.png) +Edit Darkstat + +Change the START_DARKSTAT parameter to “yes” and provide your network interface in “INTERFACE”. Make sure to uncomment the DIR, PORT, BINDIP, and LOCAL parameters here. If you wish to bind the web interface for Darkstat to some specific IP, provide it in the BINDIP section. + +### Starting Darkstat Daemon ### + +Once the installation and configuration for Darkstat is complete, run the following command to start its daemon. + + sudo /etc/init.d/darkstat start + +![Restarting Darkstat](http://linuxpitstop.com/wp-content/uploads/2015/08/23.png) + +You can configure Darkstat to start on system boot by running the following command: + + chkconfig darkstat on + +Launch your browser and load **http://localhost:666** and it will display the web-based graphical interface for Darkstat. Start using this tool to analyze your network traffic. + +![Darkstat](http://linuxpitstop.com/wp-content/uploads/2015/08/32.png) + +### Conclusion ### + +It is a lightweight tool with a very low memory footprint. The key reason for the popularity of this tool is its simplicity, ease of configuration and usage. It is a must-have application for System and Network Administrators. 
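As a side note, the configuration edit described above can also be scripted instead of done by hand in Gedit. The sketch below is hypothetical: it assumes the Debian-style key names shown in the screenshot (START_DARKSTAT, INTERFACE) and practices on a throwaway stand-in file rather than the real /etc/darkstat/init.cfg:

```shell
# Stand-in copy of init.cfg with made-up contents mirroring the keys above
cfg=$(mktemp)
printf 'START_DARKSTAT=no\nINTERFACE="-i eth0"\n' > "$cfg"

# Flip the daemon on -- the same change the manual edit makes
sed -i 's/^START_DARKSTAT=.*/START_DARKSTAT=yes/' "$cfg"

grep '^START_DARKSTAT=' "$cfg"   # now reads START_DARKSTAT=yes
```

To apply it for real you would point `cfg` at /etc/darkstat/init.cfg (with root privileges, and ideally after backing the file up).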
+ +-------------------------------------------------------------------------------- + +via: http://linuxpitstop.com/install-darkstat-on-ubuntu-linux/ + +作者:[Aun][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://linuxpitstop.com/author/aun/ \ No newline at end of file From accf7e54ceb2f72f33d9db5b6ce3e35fe1a3e9f3 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 11 Aug 2015 23:24:39 +0800 Subject: [PATCH 134/697] =?UTF-8?q?20150811-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ind and Delete Duplicate Files in Linux.md | 265 ++++++++++++++++++ 1 file changed, 265 insertions(+) create mode 100644 sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md diff --git a/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md b/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md new file mode 100644 index 0000000000..f89f060c92 --- /dev/null +++ b/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md @@ -0,0 +1,265 @@ +fdupes – A Command-line Tool to Find and Delete Duplicate Files in Linux +================================================================================ +It is a common requirement for most computer users to find and remove duplicate files. Finding and removing duplicate files is a tiresome job that demands time and patience. Finding duplicate files can be very easy if your machine is powered by GNU/Linux, thanks to the ‘**fdupes**‘ utility. + +![Find and Delete Duplicate Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/find-and-delete-duplicate-files-in-linux.png) + +Fdupes – Find and Delete Duplicate Files in Linux + +### What is fdupes? 
### + +**Fdupes** is a Linux utility written by **Adrian Lopez** in the C programming language and released under the MIT License. The application is able to find duplicate files in the given set of directories and sub-directories. Fdupes recognizes duplicates by comparing the MD5 signatures of files, followed by a byte-to-byte comparison. A lot of options can be passed to Fdupes to list, delete and replace the files with hardlinks to duplicates. + +The comparison starts in the order: + +**size comparison > Partial MD5 Signature Comparison > Full MD5 Signature Comparison > Byte-to-Byte Comparison.** + +### Install fdupes on Linux ### + +Installing the latest version of fdupes (version 1.51) is as easy as running the following command on **Debian**-based systems such as **Ubuntu** and **Linux Mint**. + + $ sudo apt-get install fdupes + +On CentOS/RHEL and Fedora based systems, you need to turn on the [epel repository][1] to install the fdupes package. + + # yum install fdupes + # dnf install fdupes [On Fedora 22 onwards] + +**Note**: The default package manager yum is replaced by dnf from Fedora 22 onwards… + +### How to use fdupes command? ### + +1. For demonstration purposes, let’s create a few duplicate files under a directory (say tecmint), simply as: + + $ mkdir /home/"$USER"/Desktop/tecmint && cd /home/"$USER"/Desktop/tecmint && for i in {1..15}; do echo "I Love Tecmint. Tecmint is a very nice community of Linux Users." > tecmint${i}.txt ; done + +After running the above command, let’s verify whether the duplicate files were created, using the ls [command][2]. 
+ + $ ls -l + + total 60 + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint10.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint11.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint12.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint13.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint14.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint15.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint1.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint2.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint3.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint4.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint5.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint6.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint7.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint8.txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9.txt + +The above script creates **15** files, namely tecmint1.txt, tecmint2.txt…tecmint15.txt, and every file contains the same data, i.e., + + "I Love Tecmint. Tecmint is a very nice community of Linux Users." + +2. Now search for duplicate files within the folder **tecmint**. + + $ fdupes /home/$USER/Desktop/tecmint + + /home/tecmint/Desktop/tecmint/tecmint13.txt + /home/tecmint/Desktop/tecmint/tecmint8.txt + /home/tecmint/Desktop/tecmint/tecmint11.txt + /home/tecmint/Desktop/tecmint/tecmint3.txt + /home/tecmint/Desktop/tecmint/tecmint4.txt + /home/tecmint/Desktop/tecmint/tecmint6.txt + /home/tecmint/Desktop/tecmint/tecmint7.txt + /home/tecmint/Desktop/tecmint/tecmint9.txt + /home/tecmint/Desktop/tecmint/tecmint10.txt + /home/tecmint/Desktop/tecmint/tecmint2.txt + /home/tecmint/Desktop/tecmint/tecmint5.txt + /home/tecmint/Desktop/tecmint/tecmint14.txt + /home/tecmint/Desktop/tecmint/tecmint1.txt + /home/tecmint/Desktop/tecmint/tecmint15.txt + /home/tecmint/Desktop/tecmint/tecmint12.txt + +3. 
Search for duplicates recursively under every directory, including its sub-directories, using the **-r** option. + +It searches across all files and folders recursively; depending upon the number of files and folders, it will take some time to scan for duplicates. In the meantime, you will be presented with the total progress in the terminal, something like this. + + $ fdupes -r /home + + Progress [37780/54747] 69% + +4. See the size of duplicates found within a folder using the **-S** option. + + $ fdupes -S /home/$USER/Desktop/tecmint + + 65 bytes each: + /home/tecmint/Desktop/tecmint/tecmint13.txt + /home/tecmint/Desktop/tecmint/tecmint8.txt + /home/tecmint/Desktop/tecmint/tecmint11.txt + /home/tecmint/Desktop/tecmint/tecmint3.txt + /home/tecmint/Desktop/tecmint/tecmint4.txt + /home/tecmint/Desktop/tecmint/tecmint6.txt + /home/tecmint/Desktop/tecmint/tecmint7.txt + /home/tecmint/Desktop/tecmint/tecmint9.txt + /home/tecmint/Desktop/tecmint/tecmint10.txt + /home/tecmint/Desktop/tecmint/tecmint2.txt + /home/tecmint/Desktop/tecmint/tecmint5.txt + /home/tecmint/Desktop/tecmint/tecmint14.txt + /home/tecmint/Desktop/tecmint/tecmint1.txt + /home/tecmint/Desktop/tecmint/tecmint15.txt + /home/tecmint/Desktop/tecmint/tecmint12.txt + +5. 
You can see the size of duplicate files for every directory and its subdirectories by using the **-S** and **-r** options at the same time, as: + + $ fdupes -Sr /home/avi/Desktop/ + + 65 bytes each: + /home/tecmint/Desktop/tecmint/tecmint13.txt + /home/tecmint/Desktop/tecmint/tecmint8.txt + /home/tecmint/Desktop/tecmint/tecmint11.txt + /home/tecmint/Desktop/tecmint/tecmint3.txt + /home/tecmint/Desktop/tecmint/tecmint4.txt + /home/tecmint/Desktop/tecmint/tecmint6.txt + /home/tecmint/Desktop/tecmint/tecmint7.txt + /home/tecmint/Desktop/tecmint/tecmint9.txt + /home/tecmint/Desktop/tecmint/tecmint10.txt + /home/tecmint/Desktop/tecmint/tecmint2.txt + /home/tecmint/Desktop/tecmint/tecmint5.txt + /home/tecmint/Desktop/tecmint/tecmint14.txt + /home/tecmint/Desktop/tecmint/tecmint1.txt + /home/tecmint/Desktop/tecmint/tecmint15.txt + /home/tecmint/Desktop/tecmint/tecmint12.txt + + 107 bytes each: + /home/tecmint/Desktop/resume_files/r-csc.html + /home/tecmint/Desktop/resume_files/fc.html + +6. Other than searching in one folder or all folders recursively, you may choose to search in two or three folders, as required. Not to mention you can use the options **-S** and/or **-r** if required. + + $ fdupes /home/avi/Desktop/ /home/avi/Templates/ + +7. To delete the duplicate files while preserving a copy, you can use the option ‘**-d**’. Extra care should be taken while using this option, else you might end up losing necessary files/data; mind that the process is unrecoverable. 
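Before reaching for ‘-d’, it can be reassuring to cross-check which files really are byte-identical using standard tools. A hedged sketch on throwaway demo data (the directory and file names below are invented for the demo, not from the article):

```shell
# Build a throwaway directory with two identical files and one different file
demo=$(mktemp -d)
echo "same content"  > "$demo/a.txt"
echo "same content"  > "$demo/b.txt"
echo "different one" > "$demo/c.txt"

# Group files by MD5 checksum: -w32 compares only the 32-char hash prefix,
# -D prints every member of each duplicate group (a.txt and b.txt here)
md5sum "$demo"/*.txt | sort | uniq -w32 -D
```

Only after the checksum groups match what fdupes reported would you proceed with deletion.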
+ + $ fdupes -d /home/$USER/Desktop/tecmint + + [1] /home/tecmint/Desktop/tecmint/tecmint13.txt + [2] /home/tecmint/Desktop/tecmint/tecmint8.txt + [3] /home/tecmint/Desktop/tecmint/tecmint11.txt + [4] /home/tecmint/Desktop/tecmint/tecmint3.txt + [5] /home/tecmint/Desktop/tecmint/tecmint4.txt + [6] /home/tecmint/Desktop/tecmint/tecmint6.txt + [7] /home/tecmint/Desktop/tecmint/tecmint7.txt + [8] /home/tecmint/Desktop/tecmint/tecmint9.txt + [9] /home/tecmint/Desktop/tecmint/tecmint10.txt + [10] /home/tecmint/Desktop/tecmint/tecmint2.txt + [11] /home/tecmint/Desktop/tecmint/tecmint5.txt + [12] /home/tecmint/Desktop/tecmint/tecmint14.txt + [13] /home/tecmint/Desktop/tecmint/tecmint1.txt + [14] /home/tecmint/Desktop/tecmint/tecmint15.txt + [15] /home/tecmint/Desktop/tecmint/tecmint12.txt + + Set 1 of 1, preserve files [1 - 15, all]: + +You may notice that all the duplicates are listed and you are prompted to delete them, either one by one, in a certain range, or all in one go. You may select a range, something like below, to delete the files of a specific range. + + Set 1 of 1, preserve files [1 - 15, all]: 2-15 + + [-] /home/tecmint/Desktop/tecmint/tecmint13.txt + [+] /home/tecmint/Desktop/tecmint/tecmint8.txt + [-] /home/tecmint/Desktop/tecmint/tecmint11.txt + [-] /home/tecmint/Desktop/tecmint/tecmint3.txt + [-] /home/tecmint/Desktop/tecmint/tecmint4.txt + [-] /home/tecmint/Desktop/tecmint/tecmint6.txt + [-] /home/tecmint/Desktop/tecmint/tecmint7.txt + [-] /home/tecmint/Desktop/tecmint/tecmint9.txt + [-] /home/tecmint/Desktop/tecmint/tecmint10.txt + [-] /home/tecmint/Desktop/tecmint/tecmint2.txt + [-] /home/tecmint/Desktop/tecmint/tecmint5.txt + [-] /home/tecmint/Desktop/tecmint/tecmint14.txt + [-] /home/tecmint/Desktop/tecmint/tecmint1.txt + [-] /home/tecmint/Desktop/tecmint/tecmint15.txt + [-] /home/tecmint/Desktop/tecmint/tecmint12.txt + +8. 
From safety point of view, you may like to print the output of ‘**fdupes**’ to file and then check text file to decide what file to delete. This decrease chances of getting your file deleted accidentally. You may do: + + $ fdupes -Sr /home > /home/fdupes.txt + +**Note**: You may replace ‘**/home**’ with the your desired folder. Also use option ‘**-r**’ and ‘**-S**’ if you want to search recursively and Print Size, respectively. + +9. You may omit the first file from each set of matches by using option ‘**-f**’. + +First List files of the directory. + + $ ls -l /home/$USER/Desktop/tecmint + + total 20 + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9 (3rd copy).txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9 (4th copy).txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9 (another copy).txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9 (copy).txt + -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9.txt + +and then omit the first file from each set of matches. + + $ fdupes -f /home/$USER/Desktop/tecmint + + /home/tecmint/Desktop/tecmint9 (copy).txt + /home/tecmint/Desktop/tecmint9 (3rd copy).txt + /home/tecmint/Desktop/tecmint9 (another copy).txt + /home/tecmint/Desktop/tecmint9 (4th copy).txt + +10. Check installed version of fdupes. + + $ fdupes --version + + fdupes 1.51 + +11. If you need any help on fdupes you may use switch ‘**-h**’. + + $ fdupes -h + + Usage: fdupes [options] DIRECTORY... 
+ + -r --recurse for every directory given follow subdirectories + encountered within + -R --recurse: for each directory given after this option follow + subdirectories encountered within (note the ':' at + the end of the option, manpage for more details) + -s --symlinks follow symlinks + -H --hardlinks normally, when two or more files point to the same + disk area they are treated as non-duplicates; this + option will change this behavior + -n --noempty exclude zero-length files from consideration + -A --nohidden exclude hidden files from consideration + -f --omitfirst omit the first file in each set of matches + -1 --sameline list each set of matches on a single line + -S --size show size of duplicate files + -m --summarize summarize dupe information + -q --quiet hide progress indicator + -d --delete prompt user for files to preserve and delete all + others; important: under particular circumstances, + data may be lost when using this option together + with -s or --symlinks, or when specifying a + particular directory more than once; refer to the + fdupes documentation for additional information + -N --noprompt together with --delete, preserve the first file in + each set of duplicates and delete the rest without + prompting the user + -v --version display fdupes version + -h --help display this help message + +That’s for all now. Let me know how you were finding and deleting duplicates files till now in Linux? and also tell me your opinion about this utility. Put your valuable feedback in the comment section below and don’t forget to like/share us and help us get spread. + +I am working on another utility called **fslint** to remove duplicate files, will soon post and you people will love to read. 
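One practical way to build on tips 8 and 9 above is to post-process a saved fdupes listing with a short awk script, so that nothing is removed until every generated command has been reviewed. The sketch below uses stand-in data: the `dupes.txt` name and the `/tmp/demo` paths are invented for illustration, and the listing is assumed to come from a plain `fdupes -r DIR > dupes.txt` run (without the `-S` size headers). Since deletion is unrecoverable, always inspect the printed commands before piping them to a shell.

```shell
# Create a tiny stand-in listing in the format a plain `fdupes -r DIR > dupes.txt`
# run produces: one path per line, duplicate sets separated by a blank line.
printf '%s\n' '/tmp/demo/a.txt' '/tmp/demo/b.txt' '' '/tmp/demo/c.txt' '/tmp/demo/d.txt' > dupes.txt

# Keep the FIRST path of every set and print an `rm` command for each of the
# others. Nothing is deleted here; review the output, then pipe it to `sh`
# yourself once you are satisfied.
awk 'BEGIN   { first = 1 }
     NF == 0 { first = 1; next }   # a blank line ends the current set
     first   { first = 0; next }   # preserve the first file of each set
     { printf "rm -v -- \"%s\"\n", $0 }' dupes.txt
```

This prints `rm -v -- "/tmp/demo/b.txt"` and `rm -v -- "/tmp/demo/d.txt"`, leaving the first file of each set untouched; the double quotes matter for names such as `tecmint9 (copy).txt` that contain spaces.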
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/fdupes-find-and-delete-duplicate-files-in-linux/ + +作者:[Avishek Kumar][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ \ No newline at end of file From 54f9b2ca351aa6a65fa5896a6896aaf93255c5cc Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 12 Aug 2015 00:35:39 +0800 Subject: [PATCH 135/697] =?UTF-8?q?=E4=BF=AE=E6=94=B9=E6=A0=87=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- published/20150810 For Linux, Supercomputers R Us.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/published/20150810 For Linux, Supercomputers R Us.md index e173d7513c..f86b3694d0 100644 --- a/published/20150810 For Linux, Supercomputers R Us.md +++ b/published/20150810 For Linux, Supercomputers R Us.md @@ -1,4 +1,4 @@ -Linux:称霸超级计算机系统 +有了 Linux,你就可以搭建自己的超级计算机 ================================================================================ > 几乎所有超级计算机上运行的系统都是 Linux,其中包括那些由树莓派(Raspberry Pi)板卡和 PlayStation 3游戏机组成的计算机。 From d3454dc30effe283eeca9dac490e57e860333fe0 Mon Sep 17 00:00:00 2001 From: joeren Date: Wed, 12 Aug 2015 09:29:13 +0800 Subject: [PATCH 136/697] Update 20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md --- ...mndline Tool to Find and Delete Duplicate Files in Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md b/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete 
Duplicate Files in Linux.md index f89f060c92..1e55090d67 100644 --- a/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md +++ b/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md @@ -1,3 +1,4 @@ +Translating by GOLinux! fdupes – A Comamndline Tool to Find and Delete Duplicate Files in Linux ================================================================================ It is a common requirement to find and replace duplicate files for most of the computer users. Finding and removing duplicate files is a tiresome job that demands time and patience. Finding duplicate files can be very easy if your machine is powered by GNU/Linux, thanks to ‘**fdupes**‘ utility. @@ -262,4 +263,4 @@ via: http://www.tecmint.com/fdupes-find-and-delete-duplicate-files-in-linux/ [a]:http://www.tecmint.com/author/avishek/ [1]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ -[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ \ No newline at end of file +[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ From b161d18382714006ea52e2cf6cb46b3febe0ba92 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Wed, 12 Aug 2015 10:51:29 +0800 Subject: [PATCH 137/697] [Translated]20150811 fdupes--A Commandline Tool to Find and Delete Duplicate Files in Linux.md --- ...ind and Delete Duplicate Files in Linux.md | 69 +++++++++---------- 1 file changed, 33 insertions(+), 36 deletions(-) rename {sources => translated}/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md (67%) diff --git a/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md b/translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md similarity index 67% rename from sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md rename to 
translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md
index 1e55090d67..09f10fb546 100644
--- a/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md
+++ b/translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md
@@ -1,40 +1,38 @@
-Translating by GOLinux!
-fdupes – A Comamndline Tool to Find and Delete Duplicate Files in Linux
+fdupes——Linux中查找并删除重复文件的命令行工具
 ================================================================================
-It is a common requirement to find and replace duplicate files for most of the computer users. Finding and removing duplicate files is a tiresome job that demands time and patience. Finding duplicate files can be very easy if your machine is powered by GNU/Linux, thanks to ‘**fdupes**‘ utility.
+对于大多数计算机用户而言,查找并替换重复的文件是一个常见的需求。查找并移除重复文件真是一项令人不胜其烦的工作,它耗时又耗力。如果你的机器上跑着GNU/Linux,那么查找重复文件会变得十分简单,这多亏了`**fdupes**`工具。
 
 ![Find and Delete Duplicate Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/find-and-delete-duplicate-files-in-linux.png)
 
-Fdupes – Find and Delete Duplicate Files in Linux
+Fdupes——在Linux中查找并删除重复文件
 
-### What is fdupes? ###
+### fdupes是啥东东? ###
 
-**Fdupes** is a Linux utility written by **Adrian Lopez** in C programming Language released under MIT License. The application is able to find duplicate files in the given set of directories and sub-directories. Fdupes recognize duplicates by comparing MD5 signature of files followed by a byte-to-byte comparison. A lots of options can be passed with Fdupes to list, delete and replace the files with hardlinks to duplicates.
+**Fdupes**是Linux下的一个工具,它由**Adrian Lopez**用C编程语言编写并基于MIT许可证发行,该应用程序可以在指定的目录及子目录中查找重复的文件。Fdupes通过对比文件的MD5签名,以及逐字节比较文件来识别重复内容,可以为Fdupes指定大量的选项以实现对文件的列出、删除、替换为文件副本的硬链接等操作。
 
-The comparison starts in the order:
+对比按以下顺序进行:
 
-**size comparison > Partial MD5 Signature Comparison > Full MD5 Signature Comparison > Byte-to-Byte Comparison.**
+**大小对比 > 部分 MD5 签名对比 > 完整 MD5 签名对比 > 逐字节对比**
 
-### Install fdupes on a Linux ###
+### 安装 fdupes 到 Linux ###
 
-Installation of latest version of fdupes (fdupes version 1.51) as easy as running following command on **Debian** based systems such as **Ubuntu** and **Linux Mint**.
+在基于**Debian**的系统上,如**Ubuntu**和**Linux Mint**,安装最新版fdupes,用下面的命令手到擒来。
 
     $ sudo apt-get install fdupes
 
-On CentOS/RHEL and Fedora based systems, you need to turn on [epel repository][1] to install fdupes package.
+在基于CentOS/RHEL和Fedora的系统上,你需要开启[epel仓库][1]来安装fdupes包。
 
     # yum install fdupes
     # dnf install fdupes    [On Fedora 22 onwards]
 
-**Note**: The default package manager yum is replaced by dnf from Fedora 22 onwards…
+**注意**:自Fedora 22之后,默认的包管理器yum被dnf取代了。
 
-### How to use fdupes command? ###
-
-1. For demonstration purpose, let’s a create few duplicate files under a directory (say tecmint) simply as:
+### fdupes命令咋个搞? ###
+1. 作为演示的目的,让我们在某个目录(比如 tecmint)下创建一些重复文件,命令如下:
 
    $ mkdir /home/"$USER"/Desktop/tecmint && cd /home/"$USER"/Desktop/tecmint && for i in {1..15}; do echo "I Love Tecmint. Tecmint is a very nice community of Linux Users." > tecmint${i}.txt ; done
 
-After running above command, let’s verify the duplicates files are created or not using ls [command][2].
+在执行以上命令后,让我们使用 ls [命令][2] 验证重复文件是否创建成功。
 
     $ ls -l
 
@@ -55,11 +53,11 @@ After running above command, let’s verify the duplicates files are created or
     -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint8.txt
     -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9.txt
 
-The above script create **15** files namely tecmint1.txt, tecmint2.txt…tecmint15.txt and every files contains the same data i.e.,
+上面的脚本创建了**15**个文件,名称分别为tecmint1.txt,tecmint2.txt……tecmint15.txt,并且每个文件的数据相同,即
 
     "I Love Tecmint. Tecmint is a very nice community of Linux Users."
 
-2. Now search for duplicate files within the folder **tecmint**.
+2. 现在在**tecmint**文件夹内搜索重复的文件。
 
     $ fdupes /home/$USER/Desktop/tecmint
 
@@ -79,15 +77,15 @@ The above script create **15** files namely tecmint1.txt, tecmint2.txt…tecmint
     /home/tecmint/Desktop/tecmint/tecmint15.txt
     /home/tecmint/Desktop/tecmint/tecmint12.txt
 
-3. Search for duplicates recursively under every directory including it’s sub-directories using the **-r** option.
+3. 使用**-r**选项在每个目录包括其子目录中递归搜索重复文件。
 
-It search across all the files and folder recursively, depending upon the number of files and folders it will take some time to scan duplicates. In that mean time, you will be presented with the total progress in terminal, something like this.
+它会递归搜索所有文件和文件夹,花一点时间来扫描重复文件,时间的长短取决于文件和文件夹的数量。在此期间,终端中会显示全部过程,像下面这样。
 
     $ fdupes -r /home
 
     Progress [37780/54747] 69%
 
-4. See the size of duplicates found within a folder using the **-S** option.
+4. 使用**-S**选项来查看某个文件夹内找到的重复文件的大小。
 
     $ fdupes -S /home/$USER/Desktop/tecmint
 
@@ -108,7 +106,7 @@ It search across all the files and folder recursively, depending upon the number
     /home/tecmint/Desktop/tecmint/tecmint15.txt
     /home/tecmint/Desktop/tecmint/tecmint12.txt
 
-5. You can see the size of duplicate files for every directory and subdirectories encountered within using the **-S** and **-r** options at the same time, as:
+5. 你可以同时使用**-S**和**-r**选项来查看所有涉及到的目录和子目录中的重复文件的大小,如下:
 
     $ fdupes -Sr /home/avi/Desktop/
 
@@ -133,11 +131,11 @@ It search across all the files and folder recursively, depending upon the number
     /home/tecmint/Desktop/resume_files/r-csc.html
     /home/tecmint/Desktop/resume_files/fc.html
 
-6. Other than searching in one folder or all the folders recursively, you may choose to choose in two folders or three folders as required. Not to mention you can use option **-S** and/or **-r** if required.
+6. 不同于在一个或所有文件夹内递归搜索,你可以选择按要求有选择性地在两个或三个文件夹内进行搜索。不必再提醒你了吧,如有需要,你可以使用**-S**和/或**-r**选项。
 
     $ fdupes /home/avi/Desktop/ /home/avi/Templates/
 
-7. To delete the duplicate files while preserving a copy you can use the option ‘**-d**’. Extra care should be taken while using this option else you might end up loosing necessary files/data and mind it the process is unrecoverable.
+7. 要删除重复文件,同时保留一个副本,你可以使用`**-d**`选项。使用该选项,你必须额外小心,否则最终结果可能会是文件/数据的丢失。郑重提醒,此操作不可恢复。
 
     $ fdupes -d /home/$USER/Desktop/tecmint
 
@@ -159,7 +157,7 @@ It search across all the files and folder recursively, depending upon the number
 
     Set 1 of 1, preserve files [1 - 15, all]:
 
-You may notice that all the duplicates are listed and you are prompted to delete, either one by one or certain range or all in one go. You may select a range something like below to delete files files of specific range.
+你可能注意到了,所有重复的文件被列了出来,并给出删除提示,一个一个来,或者指定范围,或者一次性全部删除。你可以选择一个范围,就像下面这样,来删除指定范围内的文件。
 
@@ -179,15 +177,15 @@ You may notice that all the duplicates are listed and you are prompted to delete
     [-] /home/tecmint/Desktop/tecmint/tecmint15.txt
     [-] /home/tecmint/Desktop/tecmint/tecmint12.txt
 
-8. From safety point of view, you may like to print the output of ‘**fdupes**’ to file and then check text file to decide what file to delete. This decrease chances of getting your file deleted accidentally. You may do:
+8. 从安全角度出发,你可能想要打印`**fdupes**`的输出结果到文件中,然后检查文本文件来决定要删除什么文件。这可以降低意外删除文件的风险。你可以这么做:
 
     $ fdupes -Sr /home > /home/fdupes.txt
 
-**Note**: You may replace ‘**/home**’ with the your desired folder. Also use option ‘**-r**’ and ‘**-S**’ if you want to search recursively and Print Size, respectively.
+**注意**:你可以替换`**/home**`为你想要的文件夹。同时,如果你想要递归搜索并打印大小,可以使用`**-r**`和`**-S**`选项。
 
-9. You may omit the first file from each set of matches by using option ‘**-f**’.
+9. 你可以使用`**-f**`选项来忽略每个匹配集中的首个文件。
 
-First List files of the directory.
+首先列出该目录中的文件。
 
     $ ls -l /home/$USER/Desktop/tecmint
 
@@ -198,7 +196,7 @@ First List files of the directory.
     -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9 (copy).txt
     -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9.txt
 
-and then omit the first file from each set of matches.
+然后,忽略掉每个匹配集中的首个文件。
 
     $ fdupes -f /home/$USER/Desktop/tecmint
 
@@ -207,13 +205,13 @@ and then omit the first file from each set of matches.
     /home/tecmint/Desktop/tecmint9 (another copy).txt
     /home/tecmint/Desktop/tecmint9 (4th copy).txt
 
-10. Check installed version of fdupes.
+10. 检查已安装的fdupes版本。
 
     $ fdupes --version
 
     fdupes 1.51
 
-11. If you need any help on fdupes you may use switch ‘**-h**’.
+11. 如果你需要关于fdupes的帮助,可以使用`**-h**`开关。
 
     $ fdupes -h
 
@@ -247,16 +245,15 @@ and then omit the first file from each set of matches.
     -v --version        display fdupes version
     -h --help           display this help message
 
-That’s for all now. Let me know how you were finding and deleting duplicates files till now in Linux? and also tell me your opinion about this utility. Put your valuable feedback in the comment section below and don’t forget to like/share us and help us get spread.
+到此为止了。让我知道到现在为止,你是怎么在Linux中查找并删除重复文件的?同时,也让我知道你关于这个工具的看法。在下面的评论部分中提供你有价值的反馈吧,别忘了为我们点赞并分享,帮助我们扩散哦。
 
-I am working on another utility called **fslint** to remove duplicate files, will soon post and you people will love to read.
+我正在研究另外一个移除重复文件的工具,它叫**fslint**。很快就会把使用心得分享给大家哦,你们一定会喜欢看的。
 
 --------------------------------------------------------------------------------
 
 via: http://www.tecmint.com/fdupes-find-and-delete-duplicate-files-in-linux/
 
 作者:[Avishek Kumar][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[GOLinux](https://github.com/GOLinux)
 校对:[校对者ID](https://github.com/校对者ID)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

From 31c2f1416e45300f44b57be5518b824e926053ef Mon Sep 17 00:00:00 2001
From: wi-cuckoo
Date: Wed, 12 Aug 2015 12:56:35 +0800
Subject: [PATCH 138/697] translated wi-cuckoo

---
 ...ence on RedHat Linux Package Management.md | 349 ------------------
 ...ence on RedHat Linux Package Management.md | 348 +++++++++++++++++
 2 files changed, 348 insertions(+), 349 deletions(-)
 delete mode 100644 sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md
 create mode 100644 translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md

diff --git a/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md b/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md
deleted file mode 100644
index 6243a8c0de..0000000000
--- a/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md
+++ /dev/null
@@ -1,349 +0,0 @@
-translating wi-cuckoo
-Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management
-================================================================================
-**Shilpa Nair has just graduated in the year 2015. She went to apply for Trainee position in a National News Television located in Noida, Delhi. When she was in the last year of graduation and searching for help on her assignments she came across Tecmint.
Since then she has been visiting Tecmint regularly.** - -![Linux Interview Questions on RPM](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Interview-Questions-on-RPM.jpeg) - -Linux Interview Questions on RPM - -All the questions and answers are rewritten based upon the memory of Shilpa Nair. - -> “Hi friends! I am Shilpa Nair from Delhi. I have completed my graduation very recently and was hunting for a Trainee role soon after my degree. I have developed a passion for UNIX since my early days in the collage and I was looking for a role that suits me and satisfies my soul. I was asked a lots of questions and most of them were basic questions related to RedHat Package Management.” - -Here are the questions, that I was asked and their corresponding answers. I am posting only those questions that are related to RedHat GNU/Linux Package Management, as they were mainly asked. - -### 1. How will you find if a package is installed or not? Say you have to find if ‘nano’ is installed or not, what will you do? ### - -> **Answer** : To find the package nano, weather installed or not, we can use rpm command with the option -q is for query and -a stands for all the installed packages. -> -> # rpm -qa nano -> OR -> # rpm -qa | grep -i nano -> -> nano-2.3.1-10.el7.x86_64 -> -> Also the package name must be complete, an incomplete package name will return the prompt without printing anything which means that package (incomplete package name) is not installed. It can be understood easily by the example below: -> -> We generally substitute vim command with vi. But if we find package vi/vim we will get no result on the standard output. -> -> # vi -> # vim -> -> However we can clearly see that the package is installed by firing vi/vim command. Here is culprit is incomplete file name. 
If we are not sure of the exact file-name we can use wildcard as: -> -> # rpm -qa vim* -> -> vim-minimal-7.4.160-1.el7.x86_64 -> -> This way we can find information about any package, if installed or not. - -### 2. How will you install a package XYZ using rpm? ### - -> **Answer** : We can install any package (*.rpm) using rpm command a shown below, here options -i (install), -v (verbose or display additional information) and -h (print hash mark during package installation). -> -> # rpm -ivh peazip-1.11-1.el6.rf.x86_64.rpm -> -> Preparing... ################################# [100%] -> Updating / installing... -> 1:peazip-1.11-1.el6.rf ################################# [100%] -> -> If upgrading a package from earlier version -U switch should be used, option -v and -h follows to make sure we get a verbose output along with hash Mark, that makes it readable. - -### 3. You have installed a package (say httpd) and now you want to see all the files and directories installed and created by the above package. What will you do? ### - -> **Answer** : We can list all the files (Linux treat everything as file including directories) installed by the package httpd using options -l (List all the files) and -q (is for query). -> -> # rpm -ql httpd -> -> /etc/httpd -> /etc/httpd/conf -> /etc/httpd/conf.d -> ... - -### 4. You are supposed to remove a package say postfix. What will you do? ### - -> **Answer** : First we need to know postfix was installed by what package. Find the package name that installed postfix using options -e erase/uninstall a package) and –v (verbose output). -> -> # rpm -qa postfix* -> -> postfix-2.10.1-6.el7.x86_64 -> -> and then remove postfix as: -> -> # rpm -ev postfix-2.10.1-6.el7.x86_64 -> -> Preparing packages... -> postfix-2:3.0.1-2.fc22.x86_64 - -### 5. Get detailed information about an installed package, means information like Version, Release, Install Date, Size, Summary and a brief description. 
### - -> **Answer** : We can get detailed information about an installed package by using option -qa with rpm followed by package name. -> -> For example to find details of package openssh, all I need to do is: -> -> # rpm -qi openssh -> -> [root@tecmint tecmint]# rpm -qi openssh -> Name : openssh -> Version : 6.8p1 -> Release : 5.fc22 -> Architecture: x86_64 -> Install Date: Thursday 28 May 2015 12:34:50 PM IST -> Group : Applications/Internet -> Size : 1542057 -> License : BSD -> .... - -### 6. You are not sure about what are the configuration files provided by a specific package say httpd. How will you find list of all the configuration files provided by httpd and their location. ### - -> **Answer** : We need to run option -c followed by package name with rpm command and it will list the name of all the configuration file and their location. -> -> # rpm -qc httpd -> -> /etc/httpd/conf.d/autoindex.conf -> /etc/httpd/conf.d/userdir.conf -> /etc/httpd/conf.d/welcome.conf -> /etc/httpd/conf.modules.d/00-base.conf -> /etc/httpd/conf/httpd.conf -> /etc/sysconfig/httpd -> -> Similarly we can list all the associated document files as: -> -> # rpm -qd httpd -> -> /usr/share/doc/httpd/ABOUT_APACHE -> /usr/share/doc/httpd/CHANGES -> /usr/share/doc/httpd/LICENSE -> ... -> -> also, we can list the associated License file as: -> -> # rpm -qL openssh -> -> /usr/share/licenses/openssh/LICENCE -> -> Not to mention that the option -d and option -L in the above command stands for ‘documents‘ and ‘License‘, respectively. - -### 7. You came across a configuration file located at ‘/usr/share/alsa/cards/AACI.conf’ and you are not sure this configuration file is associated with what package. How will you find out the parent package name? ### - -> **Answer** : When a package is installed, the relevant information gets stored in the database. So it is easy to trace what provides the above package using option -qf (-f query packages owning files). 
-> -> # rpm -qf /usr/share/alsa/cards/AACI.conf -> alsa-lib-1.0.28-2.el7.x86_64 -> -> Similarly we can find (what provides) information about any sub-packge, document files and License files. - -### 8. How will you find list of recently installed software’s using rpm? ### - -> **Answer** : As said earlier, everything being installed is logged in database. So it is not difficult to query the rpm database and find the list of recently installed software’s. -> -> We can do this by running the below commands using option –last (prints the most recent installed software’s). -> -> # rpm -qa --last -> -> The above command will print all the packages installed in a order such that, the last installed software appears at the top. -> -> If our concern is to find out specific package, we can grep that package (say sqlite) from the list, simply as: -> -> # rpm -qa --last | grep -i sqlite -> -> sqlite-3.8.10.2-1.fc22.x86_64 Thursday 18 June 2015 05:05:43 PM IST -> -> We can also get a list of 10 most recently installed software simply as: -> -> # rpm -qa --last | head -> -> We can refine the result to output a more custom result simply as: -> -> # rpm -qa --last | head -n 2 -> -> In the above command -n represents number followed by a numeric value. The above command prints a list of 2 most recent installed software. - -### 9. Before installing a package, you are supposed to check its dependencies. What will you do? ### - -> **Answer** : To check the dependencies of a rpm package (XYZ.rpm), we can use switches -q (query package), -p (query a package file) and -R (Requires / List packages on which this package depends i.e., dependencies). -> -> # rpm -qpR gedit-3.16.1-1.fc22.i686.rpm -> -> /bin/sh -> /usr/bin/env -> glib2(x86-32) >= 2.40.0 -> gsettings-desktop-schemas -> gtk3(x86-32) >= 3.16 -> gtksourceview3(x86-32) >= 3.16 -> gvfs -> libX11.so.6 -> ... - -### 10. Is rpm a front-end Package Management Tool? ### - -> **Answer** : No! 
rpm is a back-end package management for RPM based Linux Distribution. -> -> [YUM][1] which stands for Yellowdog Updater Modified is the front-end for rpm. YUM automates the overall process of resolving dependencies and everything else. -> -> Very recently [DNF][2] (Dandified YUM) replaced YUM in Fedora 22. Though YUM is still available to be used in RHEL and CentOS, we can install dnf and use it alongside of YUM. DNF is said to have a lots of improvement over YUM. -> -> Good to know, you keep yourself updated. Lets move to the front-end part. - -### 11. How will you list all the enabled repolist on a system. ### - -> **Answer** : We can list all the enabled repos on a system simply using following commands. -> -> # yum repolist -> or -> # dnf repolist -> -> Last metadata expiration check performed 0:30:03 ago on Mon Jun 22 16:50:00 2015. -> repo id repo name status -> *fedora Fedora 22 - x86_64 44,762 -> ozonos Repository for Ozon OS 61 -> *updates Fedora 22 - x86_64 - Updates -> -> The above command will only list those repos that are enabled. If we need to list all the repos, enabled or not, we can do. -> -> # yum repolist all -> or -> # dnf repolist all -> -> Last metadata expiration check performed 0:29:45 ago on Mon Jun 22 16:50:00 2015. -> repo id repo name status -> *fedora Fedora 22 - x86_64 enabled: 44,762 -> fedora-debuginfo Fedora 22 - x86_64 - Debug disabled -> fedora-source Fedora 22 - Source disabled -> ozonos Repository for Ozon OS enabled: 61 -> *updates Fedora 22 - x86_64 - Updates enabled: 5,018 -> updates-debuginfo Fedora 22 - x86_64 - Updates - Debug - -### 12. How will you list all the available and installed packages on a system? ### - -> **Answer** : To list all the available packages on a system, we can do: -> -> # yum list available -> or -> # dnf list available -> -> ast metadata expiration check performed 0:34:09 ago on Mon Jun 22 16:50:00 2015. 
-> Available Packages -> 0ad.x86_64 0.0.18-1.fc22 fedora -> 0ad-data.noarch 0.0.18-1.fc22 fedora -> 0install.x86_64 2.6.1-2.fc21 fedora -> 0xFFFF.x86_64 0.3.9-11.fc22 fedora -> 2048-cli.x86_64 0.9-4.git20141214.723738c.fc22 fedora -> 2048-cli-nocurses.x86_64 0.9-4.git20141214.723738c.fc22 fedora -> .... -> -> To list all the installed Packages on a system, we can do. -> -> # yum list installed -> or -> # dnf list installed -> -> Last metadata expiration check performed 0:34:30 ago on Mon Jun 22 16:50:00 2015. -> Installed Packages -> GeoIP.x86_64 1.6.5-1.fc22 @System -> GeoIP-GeoLite-data.noarch 2015.05-1.fc22 @System -> NetworkManager.x86_64 1:1.0.2-1.fc22 @System -> NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System -> aajohan-comfortaa-fonts.noarch 2.004-4.fc22 @System -> .... -> -> To list all the available and installed packages on a system, we can do. -> -> # yum list -> or -> # dnf list -> -> Last metadata expiration check performed 0:32:56 ago on Mon Jun 22 16:50:00 2015. -> Installed Packages -> GeoIP.x86_64 1.6.5-1.fc22 @System -> GeoIP-GeoLite-data.noarch 2015.05-1.fc22 @System -> NetworkManager.x86_64 1:1.0.2-1.fc22 @System -> NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System -> aajohan-comfortaa-fonts.noarch 2.004-4.fc22 @System -> acl.x86_64 2.2.52-7.fc22 @System -> .... - -### 13. How will you install and update a package and a group of packages separately on a system using YUM/DNF? ### - -> Answer : To Install a package (say nano), we can do, -> -> # yum install nano -> -> To Install a Group of Package (say Haskell), we can do. -> -> # yum groupinstall 'haskell' -> -> To update a package (say nano), we can do. -> -> # yum update nano -> -> To update a Group of Package (say Haskell), we can do. -> -> # yum groupupdate 'haskell' - -### 14. How will you SYNC all the installed packages on a system to stable release? 
### - -> **Answer** : We can sync all the packages on a system (say CentOS or Fedora) to stable release as, -> -> # yum distro-sync [On CentOS/RHEL] -> or -> # dnf distro-sync [On Fedora 20 Onwards] - -Seems you have done a good homework before coming for the interview,Good!. Before proceeding further I just want to ask one more question. - -### 15. Are you familiar with YUM local repository? Have you tried making a Local YUM repository? Let me know in brief what you will do to create a local YUM repo. ### - -> **Answer** : First I would like to Thank you Sir for appreciation. Coming to question, I must admit that I am quiet familiar with Local YUM repositories and I have already implemented it for testing purpose in my local machine. -> -> 1. To set up Local YUM repository, we need to install the below three packages as: -> -> # yum install deltarpm python-deltarpm createrepo -> -> 2. Create a directory (say /home/$USER/rpm) and copy all the RPMs from RedHat/CentOS DVD to that folder. -> -> # mkdir /home/$USER/rpm -> # cp /path/to/rpm/on/DVD/*.rpm /home/$USER/rpm -> -> 3. Create base repository headers as. -> -> # createrepo -v /home/$USER/rpm -> -> 4. Create the .repo file (say abc.repo) at the location /etc/yum.repos.d simply as: -> -> cd /etc/yum.repos.d && cat << EOF > abc.repo -> [local-installation]name=yum-local -> baseurl=file:///home/$USER/rpm -> enabled=1 -> gpgcheck=0 -> EOF - -**Important**: Make sure to remove $USER with user_name. - -That’s all we need to do to create a Local YUM repository. We can now install applications from here, that is relatively fast, secure and most important don’t need an Internet connection. - -Okay! It was nice interviewing you. I am done. I am going to suggest your name to HR. You are a young and brilliant candidate we would like to have in our organization. If you have any question you may ask me. - -**Me**: Sir, it was really a very nice interview and I feel very lucky today, to have cracked the interview.. 
- -Obviously it didn’t end here. I asked a lots of questions like the project they are handling. What would be my role and responsibility and blah..blah..blah - -Friends, by the time all these were documented I have been called for HR round which is 3 days from now. Hope I do my best there as well. All your blessings will count. - -Thankyou friends and Tecmint for taking time and documenting my experience. Mates I believe Tecmint is doing some really extra-ordinary which must be praised. When we share ours experience with other, other get to know many things from us and we get to know our mistakes. - -It enhances our confidence level. If you have given any such interview recently, don’t keep it to yourself. Spread it! Let all of us know that. You may use the below form to share your experience with us. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-rpm-package-management-interview-questions/ - -作者:[Avishek Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ diff --git a/translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md b/translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md new file mode 100644 index 0000000000..f095a31c65 --- /dev/null +++ b/translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md @@ -0,0 +1,348 @@ +Shilpa Nair 分享了她面试 RedHat Linux 包管理方面的经验 +======================================================================== +**Shilpa Nair 刚于2015年毕业。她之后去了一家位于 Noida,Delhi 
的国家新闻电视台,应聘实习生的岗位。在她去年毕业季的时候,常逛 Tecmint 寻求作业上的帮助。从那时开始,她就常去 Tecmint。** + +![Linux Interview Questions on RPM](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Interview-Questions-on-RPM.jpeg) + +有关 RPM 方面的 Linux 面试题 + +所有的问题和回答都是 Shilpa Nair 根据回忆重写的。 + +> “大家好!我是来自 Delhi 的Shilpa Nair。我不久前才顺利毕业,正寻找一个实习的机会。在大学早期的时候,我就对 UNIX 十分喜爱,所以我也希望这个机会能适合我,满足我的兴趣。我被提问了很多问题,大部分都是关于 RedHat 包管理的基础问题。” + +下面就是我被问到的问题,和对应的回答。我仅贴出了与 RedHat GNU/Linux 包管理相关的,也是主要被提问的。 + +### 1,里如何查找一个包安装与否?假设你需要确认 ‘nano’ 有没有安装,你怎么做? ### + +> **回答**:为了确认 nano 软件包有没有安装,我们可以使用 rpm 命令,配合 -q 和 -a 选项来查询所有已安装的包 +> +> # rpm -qa nano +> OR +> # rpm -qa | grep -i nano +> +> nano-2.3.1-10.el7.x86_64 +> +> 同时包的名字必须是完成的,不完整的包名返回提示,不打印任何东西,就是说这包(包名字不全)未安装。下面的例子会更好理解些: +> +> 我们通常使用 vim 替代 vi 命令。当时如果我们查找安装包 vi/vim 的时候,我们就会看到标准输出上没有任何结果。 +> +> # vi +> # vim +> +> 尽管如此,我们仍然可以通过使用 vi/vim 命令来清楚地知道包有没有安装。Here is ... name(这句不知道)。如果我们不确切知道完整的文件名,我们可以使用通配符: +> +> # rpm -qa vim* +> +> vim-minimal-7.4.160-1.el7.x86_64 +> +> 通过这种方式,我们可以获得任何软件包的信息,安装与否。 + +### 2. 你如何使用 rpm 命令安装 XYZ 软件包? ### + +> **回答**:我们可以使用 rpm 命令安装任何的软件包(*.rpm),像下面这样,选项 -i(install),-v(冗余或者显示额外的信息)和 -h(打印#号显示进度,在安装过程中)。 +> +> # rpm -ivh peazip-1.11-1.el6.rf.x86_64.rpm +> +> Preparing... ################################# [100%] +> Updating / installing... +> 1:peazip-1.11-1.el6.rf ################################# [100%] +> +> 如果要升级一个早期版本的包,应加上 -U 选项,选项 -v 和 -h 可以确保我们得到用 # 号表示的冗余输出,这增加了可读性。 + +### 3. 你已经安装了一个软件包(假设是 httpd),现在你想看看软件包创建并安装的所有文件和目录,你会怎么做? ### + +> **回答**:使用选项 -l(列出所有文件)和 -q(查询)列出 httpd 软件包安装的所有文件(Linux哲学:所有的都是文件,包括目录)。 +> +> # rpm -ql httpd +> +> /etc/httpd +> /etc/httpd/conf +> /etc/httpd/conf.d +> ... + +### 4. 假如你要移除一个软件包,叫 postfix。你会怎么做? ### + +> **回答**:首先我们需要知道什么包安装了 postfix。查找安装 postfix 的包名后,使用 -e(擦除/卸载软件包)和 -v(冗余输出)两个选项来实现。 +> +> # rpm -qa postfix* +> +> postfix-2.10.1-6.el7.x86_64 +> +> 然后移除 postfix,如下: +> +> # rpm -ev postfix-2.10.1-6.el7.x86_64 +> +> Preparing packages... +> postfix-2:3.0.1-2.fc22.x86_64 + +### 5. 
获得一个已安装包的具体信息,如版本,发行号,安装日期,大小,总结和一个间短的描述。 ### + +> **回答**:我们通过使用 rpm 的选项 -qi,后面接包名,可以获得关于一个已安装包的具体信息。 +> +> 举个例子,为了获得 openssh 包的具体信息,我需要做的就是: +> +> # rpm -qi openssh +> +> [root@tecmint tecmint]# rpm -qi openssh +> Name : openssh +> Version : 6.8p1 +> Release : 5.fc22 +> Architecture: x86_64 +> Install Date: Thursday 28 May 2015 12:34:50 PM IST +> Group : Applications/Internet +> Size : 1542057 +> License : BSD +> .... + +### 6. 假如你不确定一个指定包的配置文件在哪,比如 httpd。你如何找到所有 httpd 提供的配置文件列表和位置。 ### + +> **回答**: 我们需要用选项 -c 接包名,这会列出所有配置文件的名字和他们的位置。 +> +> # rpm -qc httpd +> +> /etc/httpd/conf.d/autoindex.conf +> /etc/httpd/conf.d/userdir.conf +> /etc/httpd/conf.d/welcome.conf +> /etc/httpd/conf.modules.d/00-base.conf +> /etc/httpd/conf/httpd.conf +> /etc/sysconfig/httpd +> +> 相似地,我们可以列出所有相关的文档文件,如下: +> +> # rpm -qd httpd +> +> /usr/share/doc/httpd/ABOUT_APACHE +> /usr/share/doc/httpd/CHANGES +> /usr/share/doc/httpd/LICENSE +> ... +> +> 我们也可以列出所有相关的证书文件,如下: +> +> # rpm -qL openssh +> +> /usr/share/licenses/openssh/LICENCE +> +> 忘了说明上面的选项 -d 和 -L 分别表示 “文档” 和 “证书”,抱歉。 + +### 7. 你进入了一个配置文件,位于‘/usr/share/alsa/cards/AACI.conf’,现在你不确定该文件属于哪个包。你如何查找出包的名字? ### + +> **回答**:当一个包被安装后,相关的信息就存储在了数据库里。所以使用选项 -qf(-f 查询包拥有的文件)很容易追踪谁提供了上述的包。 +> +> # rpm -qf /usr/share/alsa/cards/AACI.conf +> alsa-lib-1.0.28-2.el7.x86_64 +> +> 类似地,我们可以查找(谁提供的)关于任何子包,文档和证书文件的信息。 + +### 8. 你如何使用 rpm 查找最近安装的软件列表? ### + +> **回答**:如刚刚说的,每一样被安装的文件都记录在了数据库里。所以这并不难,通过查询 rpm 的数据库,找到最近安装软件的列表。 +> +> 我们通过运行下面的命令,使用选项 -last(打印出最近安装的软件)达到目的。 +> +> # rpm -qa --last +> +> 上面的命令会打印出所有安装的软件,最近一次安装的软件在列表的顶部。 +> +> 如果我们关心的是找出特定的包,我们可以使用 grep 命令从列表中匹配包(假设是 sqlite ),简单如下: +> +> # rpm -qa --last | grep -i sqlite +> +> sqlite-3.8.10.2-1.fc22.x86_64 Thursday 18 June 2015 05:05:43 PM IST +> +> 我们也可以获得10个最近安装的软件列表,简单如下: +> +> # rpm -qa --last | head +> +> 我们可以重定义一下,输出想要的结果,简单如下: +> +> # rpm -qa --last | head -n 2 +> +> 上面的命令中,-n 代表数目,后面接一个常数值。该命令是打印2个最近安装的软件的列表。 + +### 9. 安装一个包之前,你如果要检查其依赖。你会怎么做? 
### + +> **回答**:检查一个 rpm 包(XYZ.rpm)的依赖,我们可以使用选项 -q(查询包),-p(指定包名)和 -R(查询/列出该包依赖的包,嗯,就是依赖)。 +> +> # rpm -qpR gedit-3.16.1-1.fc22.i686.rpm +> +> /bin/sh +> /usr/bin/env +> glib2(x86-32) >= 2.40.0 +> gsettings-desktop-schemas +> gtk3(x86-32) >= 3.16 +> gtksourceview3(x86-32) >= 3.16 +> gvfs +> libX11.so.6 +> ... + +### 10. rpm 是不是一个前端的包管理工具呢? ### + +> **回答**:不是!rpm 是一个后端管理工具,适用于基于 Linux 发行版的 RPM (此处指 Redhat Package Management)。 +> +> [YUM][1],全称 Yellowdog Updater Modified,是一个 RPM 的前端工具。YUM 命令自动完成所有工作,包括解决依赖和其他一切事务。 +> +> 最近,[DNF][2](YUM命令升级版)在Fedora 22发行版中取代了 YUM。尽管 YUM 仍然可以在 RHEL 和 CentOS 平台使用,我们也可以安装 dnf,与 YUM 命令共存使用。据说 DNF 较于 YUM 有很多提高。 +> +> 知道更多总是好的,保持自我更新。现在我们移步到前端部分来谈谈。 + +### 11. 你如何列出一个系统上面所有可用的仓库列表。 ### + +> **回答**:简单地使用下面的命令,我们就可以列出一个系统上所有可用的仓库列表。 +> +> # yum repolist +> 或 +> # dnf repolist +> +> Last metadata expiration check performed 0:30:03 ago on Mon Jun 22 16:50:00 2015. +> repo id repo name status +> *fedora Fedora 22 - x86_64 44,762 +> ozonos Repository for Ozon OS 61 +> *updates Fedora 22 - x86_64 - Updates +> +> 上面的命令仅会列出可用的仓库。如果你需要列出所有的仓库,不管可用与否,可以这样做。 +> +> # yum repolist all +> or +> # dnf repolist all +> +> Last metadata expiration check performed 0:29:45 ago on Mon Jun 22 16:50:00 2015. +> repo id repo name status +> *fedora Fedora 22 - x86_64 enabled: 44,762 +> fedora-debuginfo Fedora 22 - x86_64 - Debug disabled +> fedora-source Fedora 22 - Source disabled +> ozonos Repository for Ozon OS enabled: 61 +> *updates Fedora 22 - x86_64 - Updates enabled: 5,018 +> updates-debuginfo Fedora 22 - x86_64 - Updates - Debug + +### 12. 你如何列出一个系统上所有可用并且安装了的包? ### + +> **回答**:列出一个系统上所有可用的包,我们可以这样做: +> +> # yum list available +> 或 +> # dnf list available +> +> ast metadata expiration check performed 0:34:09 ago on Mon Jun 22 16:50:00 2015. 
+> Available Packages +> 0ad.x86_64 0.0.18-1.fc22 fedora +> 0ad-data.noarch 0.0.18-1.fc22 fedora +> 0install.x86_64 2.6.1-2.fc21 fedora +> 0xFFFF.x86_64 0.3.9-11.fc22 fedora +> 2048-cli.x86_64 0.9-4.git20141214.723738c.fc22 fedora +> 2048-cli-nocurses.x86_64 0.9-4.git20141214.723738c.fc22 fedora +> .... +> +> 而列出一个系统上所有已安装的包,我们可以这样做。 +> +> # yum list installed +> or +> # dnf list installed +> +> Last metadata expiration check performed 0:34:30 ago on Mon Jun 22 16:50:00 2015. +> Installed Packages +> GeoIP.x86_64 1.6.5-1.fc22 @System +> GeoIP-GeoLite-data.noarch 2015.05-1.fc22 @System +> NetworkManager.x86_64 1:1.0.2-1.fc22 @System +> NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System +> aajohan-comfortaa-fonts.noarch 2.004-4.fc22 @System +> .... +> +> 而要同时满足两个要求的时候,我们可以这样做。 +> +> # yum list +> 或 +> # dnf list +> +> Last metadata expiration check performed 0:32:56 ago on Mon Jun 22 16:50:00 2015. +> Installed Packages +> GeoIP.x86_64 1.6.5-1.fc22 @System +> GeoIP-GeoLite-data.noarch 2015.05-1.fc22 @System +> NetworkManager.x86_64 1:1.0.2-1.fc22 @System +> NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System +> aajohan-comfortaa-fonts.noarch 2.004-4.fc22 @System +> acl.x86_64 2.2.52-7.fc22 @System +> .... + +### 13. 你会怎么分别安装和升级一个包与一组包,在一个系统上面使用 YUM/DNF? ### + +> **回答**:安装一个包(假设是 nano),我们可以这样做, +> +> # yum install nano +> +> 而安装一组包(假设是 Haskell),我们可以这样做, +> +> # yum groupinstall 'haskell' +> +> 升级一个包(还是 nano),我们可以这样做, +> +> # yum update nano +> +> 而为了升级一组包(还是 haskell),我们可以这样做, +> +> # yum groupupdate 'haskell' + +### 14. 你会如何同步一个系统上面的所有安装软件到稳定发行版? ### + +> **回答**:我们可以一个系统上(假设是 CentOS 或者 Fedora)的所有包到稳定发行版,如下, +> +> # yum distro-sync [On CentOS/ RHEL] +> 或 +> # dnf distro-sync [On Fedora 20之后版本] + +似乎来面试之前你做了相当不多的功课,很好!在进一步交谈前,我还想问一两个问题。 + +### 15. 你对 YUM 本地仓库熟悉吗?你尝试过建立一个本地 YUM 仓库吗?让我们简单看看你会怎么建立一个本地 YUM 仓库。 ### + +> **回答**:首先,感谢你的夸奖。回到问题,我必须承认我对本地 YUM 仓库十分熟悉,并且在我的本地主机上也部署过,作为测试用。 +> +> 1. 
为了建立本地 YUM 仓库,我们需要安装下面三个包: +> +> # yum install deltarpm python-deltarpm createrepo +> +> 2. 新建一个目录(假设 /home/$USER/rpm),然后复制 RedHat/CentOS DVD 上的 RPM 包到这个文件夹下 +> +> # mkdir /home/$USER/rpm +> # cp /path/to/rpm/on/DVD/*.rpm /home/$USER/rpm +> +> 3. 新建基本的库头文件如下。 +> +> # createrepo -v /home/$USER/rpm +> +> 4. 在路径 /etc/yum.repo.d 下创建一个 .repo 文件(如 abc.repo): +> +> cd /etc/yum.repos.d && cat << EOF > abc.repo +> [local-installation]name=yum-local +> baseurl=file:///home/$USER/rpm +> enabled=1 +> gpgcheck=0 +> EOF + +**重要**:用你的用户名替换掉 $USER。 + +以上就是创建一个本地 YUM 仓库所要做的全部工作。我们现在可以从这里安装软件了,相对快一些,安全一些,并且最重要的是不需要 Internet 连接。 + +好了!面试过程很愉快。我已经问完了。我会将你推荐给 HR。你是一个年轻且十分聪明的候选者,我们很愿意你加入进来。如果你有任何问题,你可以问我。 + +**我**:谢谢,这确实是一次愉快的面试,我感到非常幸运今天,然后这次面试就毁了。。。 + +显然,不会在这里结束。我问了很多问题,比如他们正在做的项目。我会担任什么角色,负责什么,,,balabalabala + +小伙伴们,3天以前 HR 轮的所有问题到时候也会被写成文档。希望我当时表现不错。感谢你们所有的祝福。 + +谢谢伙伴们和 Tecmint,花时间来编辑我的面试经历。我相信 Tecmint 好伙伴们做了很大的努力,必要要赞一个。当我们与他人分享我们的经历的时候,其他人从我们这里知道了更多,而我们自己则发现了自己的不足。 + +这增加了我们的信心。如果你最近也有任何类似的面试经历,别自己蔵着。分享出来!让我们所有人都知道。你可以使用如下的格式来与我们分享你的经历。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-rpm-package-management-interview-questions/ + +作者:[Avishek Kumar][a] +译者:[wi-cuckoo](https://github.com/wi-cuckoo) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ From d00a59eee12db7b2ae81fea8ca39046d75e00f77 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 12 Aug 2015 16:24:17 +0800 Subject: [PATCH 139/697] =?UTF-8?q?20150812-1=20=E9=80=89=E9=A2=98=20=20RH?= =?UTF-8?q?CE=20=E4=B8=93=E9=A2=98=20=E6=96=87=E7=AB=A0=E6=9C=AA=E5=85=A8?= =?UTF-8?q?=E9=83=A8=E5=AE=8C=E7=BB=93=EF=BC=8C=E7=9B=AE=E5=89=8D=E5=8F=AA?= 
=?UTF-8?q?=E6=9C=89=E4=B8=89=E7=AF=87?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...o Setup and Test Static Network Routing.md | 227 ++++++++++++++++++
 ...ation and Set Kernel Runtime Parameters.md | 177 ++++++++++++++
 ...m Activity Reports Using Linux Toolsets.md | 182 ++++++++++++++
 3 files changed, 586 insertions(+)
 create mode 100644 sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
 create mode 100644 sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md
 create mode 100644 sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md

diff --git a/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md b/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
new file mode 100644
index 0000000000..03356f9dd1
--- /dev/null
+++ b/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
@@ -0,0 +1,227 @@
+Part 1 - RHCE Series: How to Setup and Test Static Network Routing
+================================================================================
+RHCE (Red Hat Certified Engineer) is a certification from Red Hat, the company that provides an open source operating system and software to the enterprise community, along with training, support and consulting services.
+
+![RHCE Exam Preparation Guide](http://www.tecmint.com/wp-content/uploads/2015/07/RHCE-Exam-Series-by-TecMint.jpg)
+
+RHCE Exam Preparation Guide
+
+RHCE is a performance-based exam (codename EX300) that certifies the additional skills, knowledge, and abilities required of a senior system administrator responsible for Red Hat Enterprise Linux (RHEL) systems.
+
+**Important**: [Red Hat Certified System Administrator][1] (RHCSA) certification is required to earn RHCE certification.
+
+Following are the exam objectives based on the Red Hat Enterprise Linux 7 version of the exam, which we are going to cover in this RHCE series:
+
+- Part 1: How to Setup and Test Static Routing in RHEL 7
+- Part 2: How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters
+- Part 3: How to Produce and Deliver System Activity Reports Using Linux Toolsets
+- Part 4: Automate System Maintenance Tasks Using Shell Scripts
+- Part 5: How to Configure Local and Remote System Logging
+- Part 6: How to Configure a Samba Server and a NFS Server
+- Part 7: Setting Up Complete SMTP Server for Mailing
+- Part 8: Setting Up HTTPS and TLS on RHEL 7
+- Part 9: Setting Up Network Time Protocol
+- Part 10: How to Configure a Cache-Only DNS Server
+
+To view fees and register for an exam in your country, check the [RHCE Certification][2] page.
+
+In this Part 1 of the RHCE series and the next, we will present basic, yet typical, cases where the principles of static routing, packet filtering, and network address translation come into play.
+
+![Setup Static Network Routing in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Setup-Static-Network-Routing-in-RHEL-7.jpg)
+
+RHCE: Setup and Test Network Static Routing – Part 1
+
+Please note that we will not cover them in depth, but rather organize these contents in such a way that will be helpful to take the first steps and build from there.
+
+### Static Routing in Red Hat Enterprise Linux 7 ###
+
+One of the wonders of modern networking is the vast availability of devices that can connect groups of computers, whether in relatively small numbers and confined to a single room or several machines in the same building, city, country, or across continents.
+ +However, in order to effectively accomplish this in any situation, network packets need to be routed, or in other words, the path they follow from source to destination must be ruled somehow. + +Static routing is the process of specifying a route for network packets other than the default, which is provided by a network device known as the default gateway. Unless specified otherwise through static routing, network packets are directed to the default gateway; with static routing, other paths are defined based on predefined criteria, such as the packet destination. + +Let us define the following scenario for this tutorial. We have a Red Hat Enterprise Linux 7 box connecting to router #1 [192.168.0.1] to access the Internet and machines in 192.168.0.0/24. + +A second router (router #2) has two network interface cards: enp0s3 is also connected to router #1 to access the Internet and to communicate with the RHEL 7 box and other machines in the same network, whereas the other (enp0s8) is used to grant access to the 10.0.0.0/24 network where internal services reside, such as a web and / or database server. + +This scenario is illustrated in the diagram below: + +![Static Routing Network Diagram](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png) + +Static Routing Network Diagram + +In this article we will focus exclusively on setting up the routing table on our RHEL 7 box to make sure that it can both access the Internet through router #1 and the internal network via router #2. + +In RHEL 7, you will use the [ip command][3] to configure and show devices and routing using the command line. These changes can take effect immediately on a running system but since they are not persistent across reboots, we will use ifcfg-enp0sX and route-enp0sX files inside /etc/sysconfig/network-scripts to save our configuration permanently. 
+ +To begin, let’s print our current routing table: + + # ip route show + +![Check Routing Table in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Current-Routing-Table.png) + +Check Current Routing Table + +From the output above, we can see the following facts: + +- The default gateway’s IP address is 192.168.0.1 and can be accessed via the enp0s3 NIC. +- When the system booted up, it enabled the zeroconf route to 169.254.0.0/16 (just in case). In few words, if a machine is set to obtain an IP address through DHCP but fails to do so for some reason, it is automatically assigned an address in this network. Bottom line is, this route will allow us to communicate, also via enp0s3, with other machines who have failed to obtain an IP address from a DHCP server. +- Last, but not least, we can communicate with other boxes inside the 192.168.0.0/24 network through enp0s3, whose IP address is 192.168.0.18. + +These are the typical tasks that you would have to perform in such a setting. Unless specified otherwise, the following tasks should be performed in router #2: + +Make sure all NICs have been properly installed: + + # ip link show + +If one of them is down, bring it up: + + # ip link set dev enp0s8 up + +and assign an IP address in the 10.0.0.0/24 network to it: + + # ip addr add 10.0.0.17 dev enp0s8 + +Oops! We made a mistake in the IP address. We will have to remove the one we assigned earlier and then add the right one (10.0.0.18): + + # ip addr del 10.0.0.17 dev enp0s8 + # ip addr add 10.0.0.18 dev enp0s8 + +Now, please note that you can only add a route to a destination network through a gateway that is itself already reachable. 
For that reason, we need to assign an IP address within the 192.168.0.0/24 range to enp0s3 so that our RHEL 7 box can communicate with it:
+
+    # ip addr add 192.168.0.19 dev enp0s3
+
+Finally, we will need to enable packet forwarding:
+
+    # echo "1" > /proc/sys/net/ipv4/ip_forward
+
+and stop / disable (just for the time being – until we cover packet filtering in the next article) the firewall:
+
+    # systemctl stop firewalld
+    # systemctl disable firewalld
+
+Back in our RHEL 7 box (192.168.0.18), let’s configure a route to 10.0.0.0/24 through 192.168.0.19 (enp0s3 in router #2):
+
+    # ip route add 10.0.0.0/24 via 192.168.0.19
+
+After that, the routing table looks as follows:
+
+    # ip route show
+
+![Show Network Routing Table](http://www.tecmint.com/wp-content/uploads/2015/07/Show-Network-Routing.png)
+
+Confirm Network Routing Table
+
+Likewise, add the corresponding route in the machine(s) you’re trying to reach in 10.0.0.0/24:
+
+    # ip route add 192.168.0.0/24 via 10.0.0.18
+
+You can test for basic connectivity using ping:
+
+In the RHEL 7 box, run
+
+    # ping -c 4 10.0.0.20
+
+where 10.0.0.20 is the IP address of a web server in the 10.0.0.0/24 network.
+
+In the web server (10.0.0.20), run
+
+    # ping -c 4 192.168.0.18
+
+where 192.168.0.18 is, as you will recall, the IP address of our RHEL 7 machine.
+
+Alternatively, we can use [tcpdump][4] (you may need to install it with yum install tcpdump) to check the 2-way communication over TCP between our RHEL 7 box and the web server at 10.0.0.20.
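Before moving on to that verification, the router #2 steps above can be collected into a single script. This is a hedged sketch for our scenario only: the `run` helper and the DRYRUN variable are our own convention (not part of any tool), DRYRUN defaults to 1 so the plan is only printed, and DRYRUN=0 applies the commands for real (run as root on router #2):

```shell
#!/bin/sh
# Sketch of the router #2 configuration steps above (scenario-specific values).
# DRYRUN=1 (the default) only prints each command; DRYRUN=0 executes it.
run() {
    if [ "${DRYRUN:-1}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

run ip link set dev enp0s8 up
run ip addr add 10.0.0.18 dev enp0s8                  # internal 10.0.0.0/24 side
run ip addr add 192.168.0.19 dev enp0s3               # side facing router #1
run sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'    # enable packet forwarding
run systemctl stop firewalld                          # temporary, as noted above
run systemctl disable firewalld
```

Reviewing the printed plan first makes it harder to repeat the wrong-address mistake we made earlier. With the router configured, we can go back to watching the traffic with tcpdump.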
+ +To do so, let’s start the logging in the first machine with: + + # tcpdump -qnnvvv -i enp0s3 host 10.0.0.20 + +and from another terminal in the same system let’s telnet to port 80 in the web server (assuming Apache is listening on that port; otherwise, indicate the right port in the following command): + + # telnet 10.0.0.20 80 + +The tcpdump log should look as follows: + +![Check Network Communication between Servers](http://www.tecmint.com/wp-content/uploads/2015/07/Tcpdump-logs.png) + +Check Network Communication between Servers + +Where the connection has been properly initialized, as we can tell by looking at the 2-way communication between our RHEL 7 box (192.168.0.18) and the web server (10.0.0.20). + +Please remember that these changes will go away when you restart the system. If you want to make them persistent, you will need to edit (or create, if they don’t already exist) the following files, in the same systems where we performed the above commands. + +Though not strictly necessary for our test case, you should know that /etc/sysconfig/network contains system-wide network parameters. A typical /etc/sysconfig/network looks as follows: + + # Enable networking on this system? + NETWORKING=yes + # Hostname. Should match the value in /etc/hostname + HOSTNAME=yourhostnamehere + # Default gateway + GATEWAY=XXX.XXX.XXX.XXX + # Device used to connect to default gateway. Replace X with the appropriate number. + GATEWAYDEV=enp0sX + +When it comes to setting specific variables and values for each NIC (as we did for router #2), you will have to edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 and /etc/sysconfig/network-scripts/ifcfg-enp0s8. + +Following our case, + + TYPE=Ethernet + BOOTPROTO=static + IPADDR=192.168.0.19 + NETMASK=255.255.255.0 + GATEWAY=192.168.0.1 + NAME=enp0s3 + ONBOOT=yes + +and + + TYPE=Ethernet + BOOTPROTO=static + IPADDR=10.0.0.18 + NETMASK=255.255.255.0 + GATEWAY=10.0.0.1 + NAME=enp0s8 + ONBOOT=yes + +for enp0s3 and enp0s8, respectively. 
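If you prefer to script those two persistent files, they can be generated with a here-document. This is only a sketch: the `write_ifcfg` helper and the OUTDIR variable are ours, and OUTDIR defaults to a scratch directory so nothing under /etc/sysconfig/network-scripts is touched until you copy the result there yourself.

```shell
#!/bin/sh
# Generate ifcfg-style files from the values used in this scenario.
# OUTDIR defaults to a temporary directory for a safe dry run.
OUTDIR=${OUTDIR:-$(mktemp -d)}

write_ifcfg() { # write_ifcfg NAME IPADDR GATEWAY
    cat > "$OUTDIR/ifcfg-$1" <<EOF
TYPE=Ethernet
BOOTPROTO=static
IPADDR=$2
NETMASK=255.255.255.0
GATEWAY=$3
NAME=$1
ONBOOT=yes
EOF
    echo "wrote $OUTDIR/ifcfg-$1"
}

write_ifcfg enp0s3 192.168.0.19 192.168.0.1
write_ifcfg enp0s8 10.0.0.18 10.0.0.1
```

Inspect the generated files, then copy them into /etc/sysconfig/network-scripts as root.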
+ +As for routing in our client machine (192.168.0.18), we will need to edit /etc/sysconfig/network-scripts/route-enp0s3: + + 10.0.0.0/24 via 192.168.0.19 dev enp0s3 + +Now reboot your system and you should see that route in your table. + +### Summary ### + +In this article we have covered the essentials of static routing in Red Hat Enterprise Linux 7. Although scenarios may vary, the case presented here illustrates the required principles and the procedures to perform this task. Before wrapping up, I would like to suggest you to take a look at [Chapter 4][5] of the Securing and Optimizing Linux section in The Linux Documentation Project site for further details on the topics covered here. + +Free ebook on Securing & Optimizing Linux: The Hacking Solution (v.3.0) – This 800+ eBook contains comprehensive collection of Linux security tips and how to use them safely and easily to configure Linux-based applications and services. + +![Linux Security and Optimization Book](http://www.tecmint.com/wp-content/uploads/2015/07/Linux-Security-Optimization-Book.gif) + +Linux Security and Optimization Book + +[Download Now][6] + +In the next article we will talk about packet filtering and network address translation to sum up the networking basic skills needed for the RHCE certification. + +As always, we look forward to hearing from you, so feel free to leave your questions, comments, and suggestions using the form below. 
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ +[2]:https://www.redhat.com/en/services/certification/rhce +[3]:http://www.tecmint.com/ip-command-examples/ +[4]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/ +[5]:http://www.tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/net-manage.html +[6]:http://tecmint.tradepub.com/free/w_opeb01/prgm.cgi \ No newline at end of file diff --git a/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md b/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md new file mode 100644 index 0000000000..8a5f4e6cf4 --- /dev/null +++ b/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md @@ -0,0 +1,177 @@ +Part 2 - How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters +================================================================================ +As promised in Part 1 (“[Setup Static Network Routing][1]”), in this article (Part 2 of RHCE series) we will begin by introducing the principles of packet filtering and network address translation (NAT) in Red Hat Enterprise Linux 7, before diving into setting runtime kernel parameters to modify the behavior of a running kernel if certain conditions change or needs arise. 
+ +![Network Packet Filtering in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Packet-Filtering-in-RHEL.jpg) + +RHCE: Network Packet Filtering – Part 2 + +### Network Packet Filtering in RHEL 7 ### + +When we talk about packet filtering, we refer to a process performed by a firewall in which it reads the header of each data packet that attempts to pass through it. Then, it filters the packet by taking the required action based on rules that have been previously defined by the system administrator. + +As you probably know, beginning with RHEL 7, the default service that manages firewall rules is [firewalld][2]. Like iptables, it talks to the netfilter module in the Linux kernel in order to examine and manipulate network packets. Unlike iptables, updates can take effect immediately without interrupting active connections – you don’t even have to restart the service. + +Another advantage of firewalld is that it allows us to define rules based on pre-configured service names (more on that in a minute). + +In Part 1, we used the following scenario: + +![Static Routing Network Diagram](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png) + +Static Routing Network Diagram + +However, you will recall that we disabled the firewall on router #2 to simplify the example since we had not covered packet filtering yet. Let’s see now how we can enable incoming packets destined for a specific service or port in the destination. 
+
+First, let’s add a permanent rule to allow inbound traffic in enp0s3 (192.168.0.19) to enp0s8 (10.0.0.18):
+
+    # firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT
+
+The above command will save the rule to /etc/firewalld/direct.xml:
+
+    # cat /etc/firewalld/direct.xml
+
+![Check Firewalld Saved Rules in CentOS 7](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firewalld-Save-Rules.png)
+
+Check Firewalld Saved Rules
+
+Then enable the rule for it to take effect immediately:
+
+    # firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT
+
+Now you can telnet to the web server from the RHEL 7 box and run [tcpdump][3] again to monitor the TCP traffic between the two machines, this time with the firewall in router #2 enabled.
+
+    # telnet 10.0.0.20 80
+    # tcpdump -qnnvvv -i enp0s3 host 10.0.0.20
+
+What if you want to only allow incoming connections to the web server (port 80) from 192.168.0.18 and block connections from other sources in the 192.168.0.0/24 network?
+
+In the web server’s firewall, add the following rules:
+
+    # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/32" service name="http" accept'
+    # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/32" service name="http" accept' --permanent
+    # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop'
+    # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' --permanent
+
+Note that the /32 prefix restricts the first two rules to the single host 192.168.0.18.
+
+Now you can make HTTP requests to the web server, from 192.168.0.18 and from some other machine in 192.168.0.0/24. In the first case the connection should complete successfully, whereas in the second it will eventually time out.
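The prefix length in a rich rule’s source address decides how many hosts it covers: /32 matches exactly one address, while /24 matches the whole subnet. The following sketch (our own helpers, not part of firewalld) reproduces that match with plain POSIX shell arithmetic:

```shell
#!/bin/sh
# Check whether an IPv4 address falls inside a CIDR block -- the same test
# a rich rule's "source address" element performs.
to_int() { # dotted quad -> 32-bit integer
    old_ifs=$IFS; IFS=.; set -- $1; IFS=$old_ifs
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

in_cidr() { # in_cidr ADDR CIDR
    net=${2%/*}; bits=${2#*/}
    mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( $(to_int "$1") & mask )) -eq $(( $(to_int "$net") & mask )) ]
}

in_cidr 192.168.0.50 192.168.0.18/32 && echo match || echo no match   # another host vs /32
in_cidr 192.168.0.50 192.168.0.0/24  && echo match || echo no match   # another host vs /24
```

Keeping those prefix lengths in mind, let’s go back and make the HTTP requests described above.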
+
+To do so, any of the following commands will do the trick:
+
+    # telnet 10.0.0.20 80
+    # wget 10.0.0.20
+
+I strongly advise you to check out the [Firewalld Rich Language][4] documentation in the Fedora Project Wiki for further details on rich rules.
+
+### Network Address Translation in RHEL 7 ###
+
+Network Address Translation (NAT) is the process where a group of computers (it can also be just one of them) in a private network are assigned a unique public IP address. As a result, they are still uniquely identified by their own private IP address inside the network, but to the outside they all “seem” the same.
+
+In addition, NAT makes it possible for computers inside a network to send requests to outside resources (like the Internet) and have the corresponding responses sent back to the source system only.
+
+Let’s now consider the following scenario:
+
+![Network Address Translation in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Address-Translation-Diagram.png)
+
+Network Address Translation
+
+In router #2, we will move the enp0s3 interface to the external zone, where masquerading (that is, NAT) is enabled by default, and enp0s8 to the internal zone:
+
+    # firewall-cmd --list-all --zone=external
+    # firewall-cmd --change-interface=enp0s3 --zone=external
+    # firewall-cmd --change-interface=enp0s3 --zone=external --permanent
+    # firewall-cmd --change-interface=enp0s8 --zone=internal
+    # firewall-cmd --change-interface=enp0s8 --zone=internal --permanent
+
+For our current setup, the internal zone – along with everything that is enabled in it – will be the default zone:
+
+    # firewall-cmd --set-default-zone=internal
+
+Next, let’s reload the firewall rules and keep state information:
+
+    # firewall-cmd --reload
+
+Finally, let’s add router #2 as the default gateway in the web server:
+
+    # ip route add default via 10.0.0.18
+
+You can now verify that you can ping router #1 and an external site (tecmint.com, for example) from the web server:
+
+    # ping -c 2 
192.168.0.1
+    # ping -c 2 tecmint.com
+
+![Verify Network Routing](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Network-Routing.png)
+
+Verify Network Routing
+
+### Setting Kernel Runtime Parameters in RHEL 7 ###
+
+In Linux, you are allowed to change, enable, and disable the kernel runtime parameters, and RHEL is no exception. The /proc/sys interface (sysctl) lets you set runtime parameters on-the-fly to modify the system’s behavior without much hassle when operating conditions change.
+
+To do so, the echo shell built-in is used to write to files inside /proc/sys/<category>, where <category> is most likely one of the following directories:
+
+- dev: parameters for specific devices connected to the machine.
+- fs: filesystem configuration (quotas and inodes, for example).
+- kernel: kernel-specific configuration.
+- net: network configuration.
+- vm: use of the kernel’s virtual memory.
+
+To display the list of all the currently available values, run
+
+    # sysctl -a | less
+
+In Part 1, we changed the value of the net.ipv4.ip_forward parameter by doing
+
+    # echo 1 > /proc/sys/net/ipv4/ip_forward
+
+in order to allow a Linux machine to act as a router.
+
+Another runtime parameter that you may want to set is kernel.sysrq, which enables the SysRq key on your keyboard to instruct the system to gracefully perform some low-level functions, such as rebooting the system if it has frozen for some reason:
+
+    # echo 1 > /proc/sys/kernel/sysrq
+
+To display the value of a specific parameter, use sysctl as follows:
+
+    # sysctl <parameter.name>
+
+For example,
+
+    # sysctl net.ipv4.ip_forward
+    # sysctl kernel.sysrq
+
+Some parameters, such as the ones mentioned above, require only one value, whereas others (for example, fs.inode-state) require multiple values:
+
+![Check Kernel Parameters in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Kernel-Parameters.png)
+
+Check Kernel Parameters
+
+In either case, you need to read the kernel’s documentation before making any changes.
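The dotted names sysctl accepts and the /proc/sys paths we wrote to earlier are two views of the same tree: dots map to slashes. A small sketch (the helper names are ours) makes the mapping explicit and reads a value even where the sysctl binary is unavailable:

```shell
#!/bin/sh
# Translate a sysctl-style name into its /proc/sys path, then read it.
sysctl_path() {
    echo "/proc/sys/$(echo "$1" | tr . /)"
}

sysctl_read() {
    cat "$(sysctl_path "$1")"
}

sysctl_path net.ipv4.ip_forward                                # -> /proc/sys/net/ipv4/ip_forward
sysctl_read net.ipv4.ip_forward || echo "unreadable (is /proc mounted?)"
```

The same mapping works in reverse, which is why `sysctl net.ipv4.ip_forward` and `cat /proc/sys/net/ipv4/ip_forward` report the same value.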
+ +Please note that these settings will go away when the system is rebooted. To make these changes permanent, we will need to add .conf files inside the /etc/sysctl.d as follows: + + # echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/10-forward.conf + +(where the number 10 indicates the order of processing relative to other files in the same directory). + +and enable the changes with + + # sysctl -p /etc/sysctl.d/10-forward.conf + +### Summary ### + +In this tutorial we have explained the basics of packet filtering, network address translation, and setting kernel runtime parameters on a running system and persistently across reboots. I hope you have found this information useful, and as always, we look forward to hearing from you! +Don’t hesitate to share with us your questions, comments, or suggestions using the form below. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/perform-packet-filtering-network-address-translation-and-set-kernel-runtime-parameters-in-rhel/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/ +[2]:http://www.tecmint.com/firewalld-rules-for-centos-7/ +[3]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/ +[4]:https://fedoraproject.org/wiki/Features/FirewalldRichLanguage \ No newline at end of file diff --git a/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md b/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md new file mode 100644 index 0000000000..34693ea6bf --- /dev/null +++ b/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md @@ -0,0 
+1,182 @@
+Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets
+================================================================================
+As a system engineer, you will often need to produce reports that show the utilization of your system’s resources in order to make sure that: 1) they are being utilized optimally, 2) bottlenecks are prevented, and 3) scalability is ensured, among other reasons.
+
+![Monitor Linux Performance Activity Reports](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Performance-Activity-Reports.jpg)
+
+RHCE: Monitor Linux Performance Activity Reports – Part 3
+
+Besides the well-known native Linux tools that are used to check disk, memory, and CPU usage – to name a few examples – Red Hat Enterprise Linux 7 provides two additional toolsets to enhance the data you can collect for your reports: sysstat and dstat.
+
+In this article we will describe both, but let’s first start by reviewing the usage of the classic tools.
+
+### Native Linux Tools ###
+
+With df, you will be able to report disk space and inode usage by filesystem. You need to monitor both because a lack of space will prevent you from being able to save further files (and may even cause the system to crash), just like running out of inodes will mean you can’t link further files with their corresponding data structures, thus producing the same effect: you won’t be able to save those files to disk.
+
+    # df -h [Display output in human-readable form]
+    # df -h --total [Produce a grand total]
+
+![Check Linux Total Disk Usage](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-Disk-Usage.png)
+
+Check Linux Total Disk Usage
+
+    # df -i [Show inode count by filesystem]
+    # df -i --total [Produce a grand total]
+
+![Check Linux Total inode Numbers](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-inode-Numbers.png)
+
+Check Linux Total inode Numbers
+
+With du, you can estimate file space usage by either file, directory, or filesystem.
+
+For example, let’s see how much space is used by the /home directory, which includes all of the users’ personal files. The first command will return the overall space currently used by the entire /home directory, whereas the second will also display a disaggregated list by sub-directory:
+
+    # du -sch /home
+    # du -sch /home/*
+
+![Check Linux Directory Disk Size](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Directory-Disk-Size.png)
+
+Check Linux Directory Disk Size
+
+Don’t Miss:
+
+- [12 ‘df’ Command Examples to Check Linux Disk Space Usage][1]
+- [10 ‘du’ Command Examples to Find Disk Usage of Files/Directories][2]
+
+Another utility that can’t be missing from your toolset is vmstat. It will allow you to see at a quick glance information about processes, CPU and memory usage, disk activity, and more.
+
+If run without arguments, vmstat will return averages since the last reboot. While you may use this form of the command once in a while, it will be more helpful to take a certain number of system utilization samples, one after another, with a defined time separation between samples.
+
+For example,
+
+    # vmstat 5 10
+
+will return 10 samples taken every 5 seconds:
+
+![Check Linux System Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Systerm-Performance.png)
+
+Check Linux System Performance
+
+As you can see in the above picture, the output of vmstat is divided by columns: procs (processes), memory, swap, io, system, and cpu. The meaning of each field can be found in the FIELD DESCRIPTION sections in the man page of vmstat.
+
+Where can vmstat come in handy? Let’s examine the behavior of the system before and during a yum update:
+
+    # vmstat -a 1 5
+
+![Vmstat Linux Performance Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/Vmstat-Linux-Peformance-Monitoring.png)
+
+Vmstat Linux Performance Monitoring
+
+Please note that as files are being modified on disk, the amount of active memory increases and so does the number of blocks written to disk (bo) and the CPU time that is dedicated to user processes (us).
+
+Or while a large file is being saved directly to disk (synchronized writes forced by the dsync flag):
+
+    # vmstat -a 1 5
+    # dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync
+
+![VmStat Linux Disk Performance Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/VmStat-Linux-Disk-Performance-Monitoring.png)
+
+VmStat Linux Disk Performance Monitoring
+
+In this case, we can see an even larger number of blocks being written to disk (bo), which was to be expected, but also an increase in the amount of CPU time spent waiting for I/O operations to complete before processing tasks (wa).
+
+**Don’t Miss**: [Vmstat – Linux Performance Monitoring][3]
+
+### Other Linux Tools ###
+
+As mentioned in the introduction of this chapter, there are other tools that you can use to check the system status and utilization (they are not only provided by Red Hat but also by other major distributions from their officially supported repositories).
+
+The sysstat package contains the following utilities:
+
+- sar (collect, report, or save system activity information).
+- sadf (display data collected by sar in multiple formats).
+- mpstat (report processor-related statistics).
+- iostat (report CPU statistics and I/O statistics for devices and partitions).
+- pidstat (report statistics for Linux tasks).
+- nfsiostat (report input/output statistics for NFS).
+- cifsiostat (report CIFS statistics).
+- sa1 (collect and store binary data in the system activity daily data file).
+- sa2 (write a daily report in the /var/log/sa directory).
+
+The dstat tool, in turn, adds some extra features to the functionality provided by those utilities, along with more counters and flexibility. You can find an overall description of each tool by running yum info sysstat or yum info dstat, respectively, or checking the individual man pages after installation.
+
+To install both packages:
+
+    # yum update && yum install sysstat dstat
+
+The main configuration file for sysstat is /etc/sysconfig/sysstat. You will find the following parameters in that file:
+
+    # How long to keep log files (in days).
+    # If value is greater than 28, then log files are kept in
+    # multiple directories, one for each month.
+    HISTORY=28
+    # Compress (using gzip or bzip2) sa and sar files older than (in days):
+    COMPRESSAFTER=31
+    # Parameters for the system activity data collector (see sadc manual page)
+    # which are used for the generation of log files.
+    SADC_OPTIONS="-S DISK"
+    # Compression program to use.
+    ZIP="bzip2"
+
+When sysstat is installed, two cron jobs are added and enabled in /etc/cron.d/sysstat. The first job runs the system activity accounting tool every 10 minutes and stores the reports in /var/log/sa/saXX, where XX is the day of the month.
+
+Thus, /var/log/sa/sa05 will contain all the system activity reports from the 5th of the month.
This assumes that we are using the default value in the HISTORY variable in the configuration file above:
+
+    */10 * * * * root /usr/lib64/sa/sa1 1 1
+
+The second job generates a daily summary of process accounting at 11:53 pm every day and stores it in /var/log/sa/sarXX files, where XX has the same meaning as in the previous example:
+
+    53 23 * * * root /usr/lib64/sa/sa2 -A
+
+For example, you may want to output system statistics from 9:30 am through 5:30 pm of the sixth of the month to a .csv file that can easily be viewed using LibreOffice Calc or Microsoft Excel (this approach will also allow you to create charts or graphs):
+
+    # sadf -s 09:30:00 -e 17:30:00 -dh /var/log/sa/sa06 -- | sed 's/;/,/g' > system_stats20150806.csv
+
+You could alternatively use the -j flag instead of -d in the sadf command above to output the system stats in JSON format, which could be useful if you need to consume the data in a web application, for example.
+
+![Linux System Statistics](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-System-Statistics.png)
+
+Linux System Statistics
+
+Finally, let’s see what dstat has to offer. Please note that if run without arguments, dstat assumes -cdngy by default (short for CPU, disk, network, memory pages, and system stats, respectively), and adds one line every second (execution can be interrupted anytime with Ctrl + C):
+
+    # dstat
+
+![Linux Disk Statistics Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/dstat-command.png)
+
+Linux Disk Statistics Monitoring
+
+To output the stats to a .csv file, use the --output flag followed by a file name. Let’s see how this looks on LibreOffice Calc:
+
+![Monitor Linux Statistics Output](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Statistics-Output.png)
+
+Monitor Linux Statistics Output
+
+I strongly advise you to check out the man page of dstat, included with this article along with the man page of sysstat in PDF format for your reading convenience.
You will find several other options that will help you create custom and detailed system activity reports. + +**Don’t Miss**: [Sysstat – Linux Usage Activity Monitoring Tool][4] + +### Summary ### + +In this guide we have explained how to use both native Linux tools and specific utilities provided with RHEL 7 in order to produce reports on system utilization. At one point or another, you will come to rely on these reports as best friends. + +You will probably have used other tools that we have not covered in this tutorial. If so, feel free to share them with the rest of the community along with any other suggestions / questions / comments that you may have- using the form below. + +We look forward to hearing from you. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/how-to-check-disk-space-in-linux/ +[2]:http://www.tecmint.com/check-linux-disk-usage-of-files-and-directories/ +[3]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/ +[4]:http://www.tecmint.com/install-sysstat-in-linux/ \ No newline at end of file From eaecf395ae651fbf78bef16e9acc27cea0e636c0 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 12 Aug 2015 16:39:28 +0800 Subject: [PATCH 140/697] =?UTF-8?q?20150812-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...edule a Job and Watch Commands in Linux.md | 143 ++++++++++++++++++ 1 file changed, 143 insertions(+) create mode 100644 sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md diff --git 
a/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md b/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md
new file mode 100644
index 0000000000..1ad92c594b
--- /dev/null
+++ b/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md
@@ -0,0 +1,143 @@
+Linux Tricks: Play Game in Chrome, Text-to-Speech, Schedule a Job and Watch Commands in Linux
+================================================================================
+Here again, I have compiled a list of four things under the [Linux Tips and Tricks][1] series that you can do to remain more productive and entertained in a Linux environment.
+
+![Linux Tips and Tricks Series](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-Tips-and-Tricks.png)
+
+Linux Tips and Tricks Series
+
+The topics I have covered include Google Chrome’s built-in small game, text-to-speech in the Linux terminal, quick job scheduling using the ‘at‘ command, and watching a command at regular intervals.
+
+### 1. Play A Game in Google Chrome Browser ###
+
+Very often when there is a power outage, or no network due to some other reason, I don’t put my Linux box into maintenance mode. I keep myself engaged with the little fun game built into Google Chrome. I am not a gamer, and hence I have not installed any creepy third-party games. Security is another concern.
+
+So when there is an Internet-related issue and my web page looks something like this:
+
+![Unable to Connect Internet](http://www.tecmint.com/wp-content/uploads/2015/08/Unable-to-Connect-Internet.png)
+
+Unable to Connect Internet
+
+You may play the game built into Google Chrome simply by hitting the space-bar. There is no limit on the number of times you can play. The best thing is you need not break a sweat installing and using it.
+
+No third-party application/plugin required.
It should work well on other platforms like Windows and Mac, but our niche is Linux, so I’ll talk about Linux only; rest assured, it works well on Linux. It is a very simple game (a kind of time pass).
+
+Use Space-Bar/Navigation-up-key to jump. A glimpse of the game in action.
+
+![Play Game in Google Chrome](http://www.tecmint.com/wp-content/uploads/2015/08/Play-Game-in-Google-Chrome.gif)
+
+Play Game in Google Chrome
+
+### 2. Text to Speech in Linux Terminal ###
+
+For those who may not be aware of it, espeak is a Linux command-line text-to-speech converter. Write anything in a variety of languages and the espeak utility will read it aloud for you.
+
+Espeak should be installed on your system by default; however, if it is not, you may do:
+
+    # apt-get install espeak (Debian)
+    # yum install espeak (CentOS)
+    # dnf install espeak (Fedora 22 onwards)
+
+You may ask espeak to accept input interactively from the standard input device and convert it to speech for you. You may do:
+
+    $ espeak [Hit Return Key]
+
+For detailed output you may do:
+
+    $ espeak --stdout | aplay [Hit Return Key][Double - Here]
+
+espeak is flexible, and you can ask espeak to accept input from a text file and read it aloud for you. All you need to do is:
+
+    $ espeak -f /path/to/text/file/file_name.txt --stdout | aplay [Hit Enter]
+
+You may ask espeak to speak fast/slow for you. The default speed is 160 words per minute. Define your preference using the switch ‘-s’.
+
+To ask espeak to speak 30 words per minute, you may do:
+
+    $ espeak -s 30 -f /path/to/text/file/file_name.txt --stdout | aplay
+
+To ask espeak to speak 200 words per minute, you may do:
+
+    $ espeak -s 200 -f /path/to/text/file/file_name.txt --stdout | aplay
+
+To use another language, say Hindi (my mother tongue), you may do:
+
+    $ espeak -v hindi --stdout 'टेकमिंट विश्व की एक बेहतरीन लाइंक्स आधारित वेबसाइट है|' | aplay
+
+You may choose any language of your preference and ask espeak to speak in it as suggested above. To get the list of all the languages supported by espeak, you need to run:
+
+    $ espeak --voices
+
+### 3. Quickly Schedule a Job ###
+
+Most of us are already familiar with [cron][2], which is a daemon to execute scheduled commands.
+
+Cron is an advanced tool often used by Linux sysadmins to schedule a job, such as a backup, or practically anything else, at a certain time/interval.
+
+Are you aware of the ‘at’ command in Linux, which lets you schedule a job/command to run at a specific time? You can tell ‘at’ what to do and when to do it, and everything else will be taken care of by the ‘at’ command.
+
+For example, say you want to log the output of the uptime command at 11:02 AM. All you need to do is:
+
+    $ at 11:02
+    uptime >> /home/$USER/uptime.txt
+    Ctrl+D
+
+![Schedule Job in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Schedule-Job-in-Linux.png)
+
+Schedule Job in Linux
+
+To check whether the command/script/job has been scheduled by the ‘at’ command, you may do:
+
+    $ at -l
+
+![View Scheduled Jobs](http://www.tecmint.com/wp-content/uploads/2015/08/View-Scheduled-Jobs.png)
+
+View Scheduled Jobs
+
+You may schedule more than one command in one go using at, simply as:
+
+    $ at 12:30
+    Command – 1
+    Command – 2
+    …
+    command – 50
+    …
+    Ctrl + D
+
+### 4. Watch a Command at Specific Interval ###
+
+Sometimes we need to run a command repeatedly at a regular interval. Just for example, say we need to print the current time and watch the output every 3 seconds.
+
+To see the current time, we need to run the below command in the terminal.
+
+    $ date +"%H:%M:%S"
+
+![Check Date and Time in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Date-in-Linux.png)
+
+Check Date and Time in Linux
+
+and to check the output of this command every three seconds, we need to run the below command in the terminal.
+
+    $ watch -n 3 'date +"%H:%M:%S"'
+
+![Watch Command in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Watch-Command-in-Linux.gif)
+
+Watch Command in Linux
+
+The switch ‘-n’ in the watch command sets the interval. In the above example we defined the interval to be 3 seconds. You may define yours as required. You may also pass any command/script to watch in order to run that command/script at the defined interval.
+
+That’s all for now. Hope you like this series, which aims at making you more productive with Linux, and with some fun inside. All suggestions are welcome in the comments below. Stay tuned for more such posts. Keep connected and enjoy…
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/text-to-speech-in-terminal-schedule-a-job-and-watch-commands-in-linux/
+
+作者:[Avishek Kumar][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/avishek/
+[1]:http://www.tecmint.com/tag/linux-tricks/
+[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
\ No newline at end of file
From 4ea6ca5ad27848c8bed914a6ffa5ea4ee08016d5 Mon Sep 17 00:00:00 2001
From: Chang Liu
Date: Wed, 12 Aug 2015 19:57:08 +0800
Subject: [PATCH 141/697] [Translated]20150811 How to download apk files from Google Play Store on Linux.md

---
 ...k files from Google Play Store on Linux.md | 101 ------------------
 ...k files from Google Play Store on Linux.md |  99 +++++++++++++++++
 2 files changed, 99 insertions(+), 101
deletions(-) delete mode 100644 sources/tech/20150811 How to download apk files from Google Play Store on Linux.md create mode 100644 translated/tech/20150811 How to download apk files from Google Play Store on Linux.md diff --git a/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md b/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md deleted file mode 100644 index 50bf618e86..0000000000 --- a/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md +++ /dev/null @@ -1,101 +0,0 @@ -FSSlc translating - -How to download apk files from Google Play Store on Linux -================================================================================ -Suppose you want to install an Android app on your Android device. However, for whatever reason, you cannot access Google Play Store on the Android device. What can you do then? One way to install the app without Google Play Store access is to download its APK file using some other means, and then [install the APK][1] file on the Android device manually. - -There are several ways to download official APK files from Google Play Store on non-Android devices such as regular computers and laptops. For example, there are browser plugins (e.g., for [Chrome][2] or [Firefox][3]) or online APK archives that allow you to download APK files using a web browser. If you do not trust these closed-source plugins or third-party APK repositories, there is yet another way to download official APK files manually, and that is via an open-source Linux app called [GooglePlayDownloader][4]. - -GooglePlayDownloader is a Python-based GUI application that enables you to search and download APK files from Google Play Store. Since this is completely open-source, you can be assured while using it. In this tutorial, I am going to show how to download an APK file from Google Play Store using GooglePlayDownloader in Linux environment. 
- -### Python requirement ### - -GooglePlayDownloader requires Python with SNI (Server Name Indication) support for SSL/TLS communication. This feature comes with Python 2.7.9 or higher. This leaves out older distributions such as Debian 7 Wheezy or earlier, Ubuntu 14.04 or earlier, or CentOS/RHEL 7 or earlier. Assuming that you have a Linux distribution with Python 2.7.9 or higher, proceed to install GooglePlayDownloader as follows. - -### Install GooglePlayDownloader on Ubuntu ### - -On Ubuntu, you can use the official deb build. One catch is that you may need to install one required dependency manually. - -#### On Ubuntu 14.10 #### - -Download [python-ndg-httpsclient][5] deb package, which is a missing dependency on older Ubuntu distributions. Also download GooglePlayDownloader's official deb package. - - $ wget http://mirrors.kernel.org/ubuntu/pool/main/n/ndg-httpsclient/python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb - $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb - -We are going to use [gdebi command][6] to install those two deb files as follows. The gdebi command will automatically handle any other dependencies. - - $ sudo apt-get install gdebi-core - $ sudo gdebi python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb - $ sudo gdebi googleplaydownloader_1.7-1_all.deb - -#### On Ubuntu 15.04 or later #### - -Recent Ubuntu distributions ship all required dependencies, and thus the installation is straightforward as follows. - - $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb - $ sudo apt-get install gdebi-core - $ sudo gdebi googleplaydownloader_1.7-1_all.deb - -### Install GooglePlayDownloader on Debian ### - -Due to its Python requirement, GooglePlayDownloader cannot be installed on Debian 7 Wheezy or earlier unless you upgrade its stock Python. 
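
Since the SNI requirement above is the most common stumbling block on older distributions, it can help to ask the interpreter itself whether it qualifies before attempting any of the distribution-specific installs. The following check is a minimal sketch (not part of the original instructions) and assumes the `python` binary on your PATH is the one GooglePlayDownloader would run with:

```python
import ssl
import sys

# GooglePlayDownloader needs Python 2.7.9+ (or any Python 3), whose ssl
# module advertises SNI support through the HAS_SNI flag. On older
# interpreters the flag is missing or False, and TLS connections to
# Google Play will fail.
version = tuple(sys.version_info[:3])
has_sni = getattr(ssl, "HAS_SNI", False)

print("Python %d.%d.%d, SNI support: %s" % (version + (has_sni,)))

if not has_sni or version < (2, 7, 9):
    raise SystemExit("This Python build lacks SNI support; upgrade it first.")
```

If the script exits with the error message, upgrade Python (or switch distributions) before proceeding with the steps below.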
- -#### On Debian 8 Jessie and higher: #### - - $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb - $ sudo apt-get install gdebi-core - $ sudo gdebi googleplaydownloader_1.7-1_all.deb - -### Install GooglePlayDownloader on Fedora ### - -Since GooglePlayDownloader was originally developed for Debian based distributions, you need to install it from the source if you want to use it on Fedora. - -First, install necessary dependencies. - - $ sudo yum install python-pyasn1 wxPython python-ndg_httpsclient protobuf-python python-requests - -Then install it as follows. - - $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7.orig.tar.gz - $ tar -xvf googleplaydownloader_1.7.orig.tar.gz - $ cd googleplaydownloader-1.7 - $ chmod o+r -R . - $ sudo python setup.py install - $ sudo sh -c "echo 'python /usr/lib/python2.7/site-packages/googleplaydownloader-1.7-py2.7.egg/googleplaydownloader/googleplaydownloader.py' > /usr/bin/googleplaydownloader" - -### Download APK Files from Google Play Store with GooglePlayDownloader ### - -Once you installed GooglePlayDownloader, you can download APK files from Google Play Store as follows. - -First launch the app by typing: - - $ googleplaydownloader - -![](https://farm1.staticflickr.com/425/20229024898_105396fa68_b.jpg) - -At the search bar, type the name of the app you want to download from Google Play Store. - -![](https://farm1.staticflickr.com/503/20230360479_925f5da613_b.jpg) - -Once you find the app in the search list, choose the app, and click on "Download selected APK(s)" button. You will find the downloaded APK file in your home directory. Now you can move the APK file to the Android device of your choice, and install it manually. - -Hope this helps. 
- --------------------------------------------------------------------------------- - -via: http://xmodulo.com/download-apk-files-google-play-store.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:http://xmodulo.com/how-to-install-apk-file-on-android-phone-or-tablet.html -[2]:https://chrome.google.com/webstore/detail/apk-downloader/cgihflhdpokeobcfimliamffejfnmfii -[3]:https://addons.mozilla.org/en-us/firefox/addon/apk-downloader/ -[4]:http://codingteam.net/project/googleplaydownloader -[5]:http://packages.ubuntu.com/vivid/python-ndg-httpsclient -[6]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html diff --git a/translated/tech/20150811 How to download apk files from Google Play Store on Linux.md b/translated/tech/20150811 How to download apk files from Google Play Store on Linux.md new file mode 100644 index 0000000000..670c0f331b --- /dev/null +++ b/translated/tech/20150811 How to download apk files from Google Play Store on Linux.md @@ -0,0 +1,99 @@ +如何在 Linux 中从 Google Play 商店里下载 apk 文件 +================================================================================ +假设你想在你的 Android 设备中安装一个 Android 应用,然而由于某些原因,你不能在 Andor 设备上访问 Google Play 商店。接着你该怎么做呢?在不访问 Google Play 商店的前提下安装应用的一种可能的方法是使用其他的手段下载该应用的 APK 文件,然后手动地在 Android 设备上 [安装 APK 文件][1]。 + +在非 Android 设备如常规的电脑和笔记本电脑上,有着几种方式来从 Google Play 商店下载到官方的 APK 文件。例如,使用浏览器插件(例如, 针对 [Chrome][2] 或针对 [Firefox][3] 的插件) 或利用允许你使用浏览器下载 APK 文件的在线的 APK 存档等。假如你不信任这些闭源的插件或第三方的 APK 仓库,这里有另一种手动下载官方 APK 文件的方法,它使用一个名为 [GooglePlayDownloader][4] 的开源 Linux 应用。 + +GooglePlayDownloader 是一个基于 Python 的 GUI 应用,使得你可以从 Google Play 商店上搜索和下载 APK 文件。由于它是完全开源的,你可以放心地使用它。在本篇教程中,我将展示如何在 Linux 环境下,使用 GooglePlayDownloader 来从 Google Play 商店下载 APK 文件。 + +### Python 需求 ### + +GooglePlayDownloader 需要使用 Python 中 SSL 模块的扩展 SNI(服务器名称指示) 来支持 SSL/TLS 
通信,该功能由 Python 2.7.9 或更高版本带来。这使得一些旧的发行版本如 Debian 7 Wheezy 及早期版本,Ubuntu 14.04 及早期版本或 CentOS/RHEL 7 及早期版本均不能满足该要求。假设你已经有了一个带有 Python 2.7.9 或更高版本的发行版本,可以像下面这样接着安装 GooglePlayDownloader。 + +### 在 Ubuntu 上安装 GooglePlayDownloader ### + +在 Ubuntu 上,你可以使用官方构建的 deb 包。有一个条件是你可能需要手动地安装一个必需的依赖。 + +#### 在 Ubuntu 14.10 上 #### + +下载 [python-ndg-httpsclient][5] deb 软件包,这在旧一点的 Ubuntu 发行版本中是一个缺失的依赖。同时还要下载 GooglePlayDownloader 的官方 deb 软件包。 + + $ wget http://mirrors.kernel.org/ubuntu/pool/main/n/ndg-httpsclient/python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb + $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb + +如下所示,我们将使用 [gdebi 命令][6] 来安装这两个 deb 文件。 gedbi 命令将自动地处理任何其他的依赖。 + + $ sudo apt-get install gdebi-core + $ sudo gdebi python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb + $ sudo gdebi googleplaydownloader_1.7-1_all.deb + +#### 在 Ubuntu 15.04 或更新的版本上 #### + +最近的 Ubuntu 发行版本上已经配备了所有需要的依赖,所以安装过程可以如下面那样直接进行。 + + $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb + $ sudo apt-get install gdebi-core + $ sudo gdebi googleplaydownloader_1.7-1_all.deb + +### 在 Debian 上安装 GooglePlayDownloader ### + +由于其 Python 需求, Googleplaydownloader 不能被安装到 Debian 7 Wheezy 或早期版本上,除非你升级了它自备的 Python 版本。 + +#### 在 Debian 8 Jessie 及更高版本上: #### + + $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb + $ sudo apt-get install gdebi-core + $ sudo gdebi googleplaydownloader_1.7-1_all.deb + +### 在 Fedora 上安装 GooglePlayDownloader ### + +由于 GooglePlayDownloader 原本是针对基于 Debian 的发行版本所开发的,假如你想在 Fedora 上使用它,你需要从它的源码开始安装。 + +首先安装必需的依赖。 + + $ sudo yum install python-pyasn1 wxPython python-ndg_httpsclient protobuf-python python-requests + +然后像下面这样安装它。 + + $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7.orig.tar.gz + $ tar -xvf googleplaydownloader_1.7.orig.tar.gz + $ cd googleplaydownloader-1.7 + $ chmod o+r -R . 
+ $ sudo python setup.py install + $ sudo sh -c "echo 'python /usr/lib/python2.7/site-packages/googleplaydownloader-1.7-py2.7.egg/googleplaydownloader/googleplaydownloader.py' > /usr/bin/googleplaydownloader" + +### 使用 GooglePlayDownloader 从 Google Play 商店下载 APK 文件 ### + +一旦你安装好 GooglePlayDownloader 后,你就可以像下面那样从 Google Play 商店下载 APK 文件。 + +首先通过输入下面的命令来启动该应用: + + $ googleplaydownloader + +![](https://farm1.staticflickr.com/425/20229024898_105396fa68_b.jpg) + +在搜索栏中,输入你想从 Google Play 商店下载的应用的名称。 + +![](https://farm1.staticflickr.com/503/20230360479_925f5da613_b.jpg) + +一旦你从搜索列表中找到了该应用,就选择该应用,接着点击 "下载选定的 APK 文件" 按钮。最后你将在你的家目录中找到下载的 APK 文件。现在,你就可以将下载到的 APK 文件转移到你所选择的 Android 设备上,然后手动安装它。 + +希望这篇教程对你有所帮助。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/download-apk-files-google-play-store.html + +作者:[Dan Nanni][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/how-to-install-apk-file-on-android-phone-or-tablet.html +[2]:https://chrome.google.com/webstore/detail/apk-downloader/cgihflhdpokeobcfimliamffejfnmfii +[3]:https://addons.mozilla.org/en-us/firefox/addon/apk-downloader/ +[4]:http://codingteam.net/project/googleplaydownloader +[5]:http://packages.ubuntu.com/vivid/python-ndg-httpsclient +[6]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html From c80b246c12f9be9c9dea4267d399b779f230a186 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Wed, 12 Aug 2015 20:01:28 +0800 Subject: [PATCH 142/697] Update RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...ment in RHEL 7--Boot Shutdown and Everything in Between.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) 
diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md b/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md index 2befb7bc55..23bf9f0ac1 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Process Management in RHEL 7: Boot, Shutdown, and Everything in Between – Part 5 ================================================================================ We will start this article with an overall and brief revision of what happens since the moment you press the Power button to turn on your RHEL 7 server until you are presented with the login screen in a command line interface. @@ -213,4 +215,4 @@ via: http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/ [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/dmesg-commands/ [2]:http://www.tecmint.com/systemd-replaces-init-in-linux/ -[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/ \ No newline at end of file +[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/ From 69122f983c12c6911363b696af572a6d06a06d68 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Wed, 12 Aug 2015 23:10:47 +0800 Subject: [PATCH 143/697] =?UTF-8?q?[Translating]=20RHCE=20=E7=B3=BB?= =?UTF-8?q?=E5=88=97?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
RHCE Series--How to Setup and Test Static Network Routing.md | 1 + ...work Address Translation and Set Kernel Runtime Parameters.md | 1 + ...e and Deliver System Activity Reports Using Linux Toolsets.md | 1 + 3 files changed, 3 insertions(+) diff --git a/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md b/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md index 03356f9dd1..731e78e5cf 100644 --- a/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md +++ b/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md @@ -1,3 +1,4 @@ +Translating by ictlyh Part 1 - RHCE Series: How to Setup and Test Static Network Routing ================================================================================ RHCE (Red Hat Certified Engineer) is a certification from Red Hat company, which gives an open source operating system and software to the enterprise community, It also gives training, support and consulting services for the companies. 
diff --git a/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md b/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md index 8a5f4e6cf4..cd798b906d 100644 --- a/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md +++ b/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md @@ -1,3 +1,4 @@ +Translating by ictlyh Part 2 - How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters ================================================================================ As promised in Part 1 (“[Setup Static Network Routing][1]”), in this article (Part 2 of RHCE series) we will begin by introducing the principles of packet filtering and network address translation (NAT) in Red Hat Enterprise Linux 7, before diving into setting runtime kernel parameters to modify the behavior of a running kernel if certain conditions change or needs arise. 
diff --git a/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md b/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md index 34693ea6bf..ea0157be4f 100644 --- a/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md +++ b/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md @@ -1,3 +1,4 @@ +Translating by ictlyh Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets ================================================================================ As a system engineer, you will often need to produce reports that show the utilization of your system’s resources in order to make sure that: 1) they are being utilized optimally, 2) prevent bottlenecks, and 3) ensure scalability, among other reasons. From 604582f47ad84aebdf1df2a0fef838c702f59086 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 12 Aug 2015 23:55:02 +0800 Subject: [PATCH 144/697] PUB:20141211 Open source all over the world MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @fyh 这篇不好翻译啊,翻译的不错! 
--- ...20141211 Open source all over the world.md | 47 +++++++++---------- 1 file changed, 23 insertions(+), 24 deletions(-) rename {translated/talk => published}/20141211 Open source all over the world.md (67%) diff --git a/translated/talk/20141211 Open source all over the world.md b/published/20141211 Open source all over the world.md similarity index 67% rename from translated/talk/20141211 Open source all over the world.md rename to published/20141211 Open source all over the world.md index 0abb08121f..e07db43680 100644 --- a/translated/talk/20141211 Open source all over the world.md +++ b/published/20141211 Open source all over the world.md @@ -2,8 +2,6 @@ ================================================================================ ![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUS_OpenSourceExperience_520x292_cm.png) -图片来源 : opensource.com - 经过了一整天的Opensource.com[社区版主][1]年会,最后一项日程提了上来,内容只有“特邀嘉宾:待定”几个字。作为[Opensource.com][3]的项目负责人和社区管理员,[Jason Hibbets][2]起身解释道,“因为这个嘉宾有可能无法到场,因此我不想提前说是谁。在几个月前我问他何时有空过来,他给了我两个时间点,我选了其中一个。今天是这三周中Jim唯一能来的一天”。(译者注:Jim是指下文中提到的Jim Whitehurst,即红帽公司总裁兼首席执行官) 这句话在版主们(Moderators)中引起一阵轰动,他们从世界各地赶来参加此次的[拥抱开源大会(All Things Open Conference)][4]。版主们纷纷往前挪动椅子,仔细聆听。 @@ -14,7 +12,7 @@ “大家好!”,这个家伙开口了。他没穿正装,只是衬衫和休闲裤。 -这时会场中第二高个子的人,红帽全球意识部门(Global Awareness)的高级主管[Jeff Mackanic][5],告诉他大部分社区版本今天都在场,然后让每个人开始作简单的自我介绍。 +这时会场中第二高个子的人,红帽全球意识部门(Global Awareness)的高级主管[Jeff Mackanic][5],告诉他大部分社区版主今天都在场,然后让每个人开始作简单的自我介绍。 “我叫[Jen Wike Huger][6],负责Opensource.com的内容管理,很高兴见到大家。” @@ -22,13 +20,13 @@ “我叫[Robin][9],从2013年开始参与版主项目。我在OSDC做了一些事情,工作是在[City of the Hague][10]维护[网站][11]。” -“我叫[Marcus Hanwell][12],来自英格兰,在[Kitware][13]工作。同时,我是FOSS科学软件的技术总监,和国家实验室在[Titan][14] Z和[Gpu programming][15]方面合作。我主要使用[Gentoo][16]和[KDE][17]。最后,我很激动能加入FOSS和开源科学。” +“我叫[Marcus Hanwell][12],来自英格兰,在[Kitware][13]工作。同时,我是FOSS science software的技术总监,和国家实验室在[Titan][14] Z和[Gpu programming][15]方面合作。我主要使用[Gentoo][16]和[KDE][17]。最后,我很激动能参与到FOSS和开源科学。” 
-“我叫[Phil Shapiro][18],是华盛顿的一个小图书馆28个Linux工作站的管理员。我视各位为我的同事。非常高兴能一起交流分享,贡献力量。我主要关注FOSS和自豪感的关系,以及FOSS如何提升自豪感。” +“我叫[Phil Shapiro][18],是华盛顿的一个小图书馆的28个Linux工作站的管理员。我视各位为我的同事。非常高兴能一起交流分享,贡献力量。我主要关注FOSS和自豪感的关系,以及FOSS如何提升自豪感。” “我叫[Joshua Holm][19]。我大多数时间都在关注系统更新,以及帮助人们在网上找工作。” -“我叫[Mel Chernoff][20],在红帽工作,和[Jason Hibbets]和[Mark Bohannon]一起主要关注政府渠道方面。” +“我叫[Mel Chernoff][20],在红帽工作,和[Jason Hibbets][22]和[Mark Bohannon][23]一起主要关注[政府][21]渠道方面。” “我叫[Scott Nesbitt][24],写过很多东西,使用FOSS很久了。我是个普通人,不是系统管理员,也不是程序员,只希望能更加高效工作。我帮助人们在商业和生活中使用FOSS。” @@ -38,41 +36,41 @@ “你在[新FOSS Minor][30]教书?!”,Jim说道,“很酷!” -“我叫[Jason Baker][31]。我是红慢的一个云专家,主要做[OpenStack][32]方面的工作。” +“我叫[Jason Baker][31]。我是红帽的一个云专家,主要做[OpenStack][32]方面的工作。” “我叫[Mark Bohannan][33],是红帽全球开放协议的一员,在华盛顿外工作。和Mel一样,我花了相当多时间写作,也从法律和政府部门中找合作者。我做了一个很好的小册子来讨论正在发生在政府中的积极变化。” -“我叫[Jason Hibbets][34],我组织了这次会议。” +“我叫[Jason Hibbets][34],我组织了这次讨论。” 会场中一片笑声。 -“我也组织了这片讨论,可以这么说,”这个棕红色头发笑容灿烂的家伙说道。笑声持续一会逐渐平息。 +“我也组织了这个讨论,可以这么说,”这个棕红色头发笑容灿烂的家伙说道。笑声持续一会逐渐平息。 -我当时在他左边,时不时从转录空隙中抬头看一眼,然后从眼神中注意到微笑背后暗示的那个自2008年1月起开始领导公司的人,红帽的CEO[Jim Whitehurst][35]。 +我当时在他左边,时不时从记录的间隙中抬头看一眼,我注意到淡淡微笑背后的那个令人瞩目的人,是自2008年1月起开始领导红帽公司的CEO [Jim Whitehurst][35]。 -“我有世界上最好的工作,”稍稍向后靠、叉腿抱头,Whitehurst开始了演讲。“我开始领导红帽,在世界各地旅行到处看看情况。在这里的七年中,FOSS和广泛的开源创新所发生的美好的事情是开源已经脱离了条条框框。我现在认为,IT正处在FOSS之前所在的位置。我们可以预见FOSS从一个替代走向创新驱动力。”用户也看到了这一点。他们用FOSS并不是因为它便宜,而是因为它能提供和创新的解决方案。这也十一个全球现象。比如,我刚才还在印度,然后发现那里的用户拥抱开源的两个理由:一个是创新,另一个是那里的市场有些特殊,需要完全的控制。 +“我有世界上最好的工作,”稍稍向后靠、叉腿抱头,Whitehurst开始了演讲。“我开始领导红帽,在世界各地旅行到处看看情况。在这里的七年中,FOSS和广泛的开源创新所发生的最美好的事情是开源已经脱离了条条框框。我现在认为,信息技术正处在FOSS之前所在的位置。我们可以预见FOSS从一个替代品走向创新驱动力。我们的用户也看到了这一点。他们用FOSS并不是因为它便宜,而是因为它能带来可控和创新的解决方案。这也是个全球现象。比如,我刚才还在印度,然后发现那里的用户拥抱开源的两个理由:一个是创新,另一个是那里的市场有些特殊,需要完全的可控。” -“[孟买证券交易所][36]想得到源代码并加以控制,五年前这在证券交易领域闻所未闻。那时FOSS正在重复发明轮子。今天看来,FOSS正在做几乎所有的结合了大数据的事物。几乎所有的新框架,语言和方法论,包括流动(尽管不包括设备),都首先发生在开源世界。” +“[孟买证券交易所][36]想得到源代码并加以控制,五年前这种事情在证券交易领域就没有听说过。那时FOSS正在重复发明轮子。今天看来,实际上大数据的每件事情都出现在FOSS领域。几乎所有的新框架,语言和方法论,包括移动通讯(尽管不包括设备),都首先发生在开源世界。” 
-“这是因为用户数量已经达到了相当的规模。这不只是红帽遇到的情况,[Google][37],[Amazon][38],[Facebook][39]等也出现这样的情况。他们想解决自己的问题,用开源的方式。忘掉协议吧,开源绝不仅如此。我们建立了一个交通工具,一套规则,例如[Hadoop][40],[Cassandra][41]和其他工具。事实上,开源驱动创新。例如,Hadoop在厂商们意识的规模带来的问题。他们实际上有足够的资和资源金来解决自己的问题。”开源是许多领域的默认技术方案。这在一个更加注重内容的世界中更是如此,例如[3D打印][42]和其他使用信息内容的物理产品。” +“这是因为用户数量已经达到了相当的规模。这不只是红帽遇到的情况,[Google][37],[Amazon][38],[Facebook][39]等也出现这样的情况。他们想解决自己的问题,用开源的方式。忘掉许可协议吧,开源绝不仅如此。我们建立了一个交通工具,一套规则,例如[Hadoop][40],[Cassandra][41]和其他工具。事实上,开源驱动创新。例如,Hadoop是在厂商们意识到规模带来的问题时的一个解决方案。他们实际上有足够的资金和资源来解决自己的问题。开源是许多领域的默认技术方案。这在一个更加注重内容的世界中更是如此,例如[3D打印][42]和其他使用信息内容的实体产品。” “源代码的开源确实很酷,但开源不应当仅限于此。在各行各业不同领域开源仍有可以用武之地。我们要问下自己:‘开源能够为教育,政府,法律带来什么?其它的呢?其它的领域如何能学习我们?’” -“还有内容的问题。内容在现在是免费的,当然我们可以投资更多的免费内容,不过我们也需要商业模式围绕的内容。这是我们更应该关注的。如果你相信开放的创新能带来更好,那么我们需要更多的商业模式。” +“还有内容的问题。内容在现在是免费的,当然我们可以投资更多的免费内容,不过我们也需要商业模式围绕的内容。这是我们更应该关注的。如果你相信开放的创新更好,那么我们需要更多的商业模式。” -“教育让我担心其相比与‘社区’它更关注‘内容’。例如,无论我走到哪里,大学校长们都会说,‘等等,难道教育将会免费?!’对于下游来说FOSS免费很棒,但别忘了上游很强大。免费课程很棒,但我们同样需要社区来不断迭代和完善。这是很多人都在做的事情,Opensource.com是一个提供交流的社区。问题不是‘我们如何控制内容’,也不是‘如何建立和分发内容’,而是要确保它处在不断的完善当中,而且能给其他领域提供有价值的参考。” +“教育让我担心,其相比与‘社区’它更关注‘内容’。例如,无论我走到哪里,大学的校长们都会说,‘等等,难道教育将会免费?!’对于下游来说FOSS免费很棒,但别忘了上游很强大。免费课程很棒,但我们同样需要社区来不断迭代和完善。这是很多人都在做的事情,Opensource.com是一个提供交流的社区。问题不是‘我们如何控制内容’,也不是‘如何建立和分发内容’,而是要确保它处在不断的完善当中,而且能给其他领域提供有价值的参考。” “改变世界的潜力是无穷无尽的,我们已经取得了很棒的进步。”六年前我们痴迷于制定宣言,我们说‘我们是领导者’。我们用错词了,因为那潜在意味着控制。积极的参与者们同样也不能很好理解……[Máirín Duffy][43]提出了[催化剂][44]这个词。然后我们组成了红帽,不断地促进行动,指引方向。” -“Opensource.com也是其他领域的催化剂,而这正是它的本义所在,我希望你们也这样认为。当时的内容质量和现在比起来都令人难以置信。你可以看到每季度它都在进步。谢谢你们的时间!谢谢成为了催化剂!这是一个让世界变得更好的机会。我想听听你们的看法。” +“Opensource.com也是其他领域的催化剂,而这正是它的本义所在,我希望你们也这样认为。当时的内容质量和现在比起来都令人难以置信。你可以看到每季度它都在进步。谢谢你们付出的时间!谢谢成为了催化剂!这是一个让世界变得更好的机会。我想听听你们的看法。” 我瞥了一下桌子,发现几个人眼中带泪。 然后Whitehurst又回顾了大会的开放教育议题。“极端一点看,如果你有一门[Ulysses][45]的公开课。在这里你能和一群人一起合作体验课堂。这样就和代码块一样的:大家一起努力,代码随着时间不断改进。” -在这一点上,我有发言权。当谈论其FOSS和学术团体之间的差异,向基础和可能的不调和这些词语都跳了出来。 +在这一点上,我有发言权。当谈论其FOSS和学术团体之间的差异,像“基础”和“可能不调和”这些词语都跳了出来。 -**Remy**: 
“倒退带来死亡。如果你在论文或者发布的代码中烦了一个错误,有可能带来十分严重的后果。学校一直都是避免失败寻求正确答案的地方。复制意味着抄袭。轮子在一遍遍地教条地被发明。FOSS你能快速失败,但在学术界,你只能带来无效的结果。” +**Remy**: “倒退带来死亡。如果你在论文或者发布的代码中犯了一个错误,有可能带来十分严重的后果。学校一直都是避免失败寻求正确答案的地方。复制意味着抄袭。轮子在一遍遍地教条地被发明。FOSS让你能快速失败,但在学术界,你只能带来无效的结果。” **Nicole**: “学术界有太多自我的家伙,你们需要一个发布经理。” @@ -80,20 +78,21 @@ **Luis**: “团队和分享应该优先考虑,红帽可以多向它们强调这一点。” -**Jim**: “还有公司在其中扮演积极角色吗?” +**Jim**: “还有公司在其中扮演积极角色了吗?” -[Phil Shapiro][46]: “我对FOSS的临界点感兴趣。联邦没有改用[LibreOffice][47]把我逼疯了。我们没有在软件上花税款,也不应当在字处理软件或者微软的Office上浪费税钱。” +[Phil Shapiro][46]: “我对FOSS的临界点感兴趣。Fed没有改用[LibreOffice][47]把我逼疯了。我们没有在软件上花税款,也不应当在字处理软件或者微软的Office上浪费税钱。” -**Jim**: “我们经常提倡这一点。我们能做更多吗?这是个问题。首先,我们在我们的产品涉足的地方取得了进步。我们在政府中有坚实的专营权。我们比私有公司平均话费更多。银行和电信业都和政府挨着。我们在欧洲做的更好,我认为在那工作又更低的税。下一代计算就像‘终结者’,我们到处取得了进步,但仍然需要忧患意识。” +**Jim**: “我们经常提倡这一点。我们能做更多吗?这是个问题。首先,我们在我们的产品涉足的地方取得了进步。我们在政府中有坚实的专营权。我们比私有公司平均花费更多。银行和电信业都和政府挨着。我们在欧洲做的更好,我认为在那工作有更低的税。下一代计算就像‘终结者’,我们到处取得了进步,但仍然需要忧患意识。” + +突然,门开了。Jim转身向门口站着的执行助理点头。他要去参加下一场会了。他并拢双腿,站着向前微倾。然后,他再次向每个人的工作和奉献表示感谢,微笑着出了门……留给我们更多的激励。 -突然,门开了。Jim转身向门口站着的执行助理点头。他要去参加下一场会了。他并拢双腿,站着向前微倾。然后,他再次向每个人的工作和奉献表示感谢,微笑着除了门……留给我们更多的激励。 -------------------------------------------------------------------------------- via: https://opensource.com/business/14/12/jim-whitehurst-inspiration-open-source 作者:[Remy][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[fyh](https://github.com/fyh) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 83b197b3ce99246cb5c1965fc1de693d9f090f55 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 13 Aug 2015 09:31:59 +0800 Subject: [PATCH 145/697] translating --- ...Web Based Network Traffic Analyzer--Install it on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md 
index 9f78722cb6..3b3fe49a7f 100644 --- a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md +++ b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md @@ -1,3 +1,5 @@ +translating-----geekpi + Darkstat is a Web Based Network Traffic Analyzer – Install it on Linux ================================================================================ Darkstat is a simple, web based network traffic analyzer application. It works on many popular operating systems like Linux, Solaris, Mac and AIX. It keeps running in the background as a daemon and continues collecting and sniffing network data and presents it in easily understandable format within its web interface. It can generate traffic reports for hosts, identify which ports are open on some particular host and is IPV 6 complaint application. Let’s see how we can install and configure it on Linux operating system. @@ -59,4 +61,4 @@ via: http://linuxpitstop.com/install-darkstat-on-ubuntu-linux/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 -[a]:http://linuxpitstop.com/author/aun/ \ No newline at end of file +[a]:http://linuxpitstop.com/author/aun/ From f5f2a55acba0a563381de7dc8718d856de00ff22 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 13 Aug 2015 10:12:57 +0800 Subject: [PATCH 146/697] translating --- ...k Traffic Analyzer--Install it on Linux.md | 40 +++++++++---------- 1 file changed, 19 insertions(+), 21 deletions(-) diff --git a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md index 3b3fe49a7f..e8e6bace07 100644 --- a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md +++ b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md @@ -1,62 +1,60 @@ 
-translating-----geekpi
-
-Darkstat is a Web Based Network Traffic Analyzer – Install it on Linux
+Darkstat:一个基于网络的流量分析器 - 在Linux中安装
 ================================================================================
-Darkstat is a simple, web based network traffic analyzer application. It works on many popular operating systems like Linux, Solaris, Mac and AIX. It keeps running in the background as a daemon and continues collecting and sniffing network data and presents it in easily understandable format within its web interface. It can generate traffic reports for hosts, identify which ports are open on some particular host and is IPV 6 complaint application. Let’s see how we can install and configure it on Linux operating system.
+Darkstat是一个简易的、基于网络的流量分析程序。它可以在许多主流的操作系统上工作,如Linux、Solaris、Mac和AIX。它以守护进程的形式持续运行在后台,不断地收集和嗅探网络数据,并在其网页界面中以简单易懂的形式展现出来。它可以为主机生成流量报告,鉴别特定主机上哪些端口是打开的,并且兼容IPv6。让我们看下如何在Linux中安装和配置它。

-### Installing Darkstat on Linux ###
+### 在Linux中安装配置Darkstat ###

-**Install Darkstat on Fedora/CentOS/RHEL:**
+**在Fedora/CentOS/RHEL中安装Darkstat:**

-In order to install it on Fedora/RHEL and CentOS Linux distributions, run following command on the terminal.
+要在Fedora/RHEL和CentOS中安装,运行下面的命令。

    sudo yum install darkstat

-**Install Darkstat on Ubuntu/Debian:**
+**在Ubuntu/Debian中安装Darkstat:**

-Run following on the terminal to install it on Ubuntu and Debian.
+运行下面的命令在Ubuntu和Debian中安装。

    sudo apt-get install darkstat

-Congratulations, Darkstat has been installed on your Linux system now.
+恭喜你,Darkstat已经在你的Linux中安装了。

-### Configuring Darkstat ###
+### 配置 Darkstat ###

-In order to run this application properly, we need to perform some basic configurations. Edit /etc/darkstat/init.cfg file in Gedit text editor by running the following command on the terminal.
+为了正确运行这个程序,我们需要执行一些基本的配置。运行下面的命令用gedit编辑器打开/etc/darkstat/init.cfg文件。

    sudo gedit /etc/darkstat/init.cfg

![](http://linuxpitstop.com/wp-content/uploads/2015/08/13.png)

-Edit Darkstat
+编辑 Darkstat

-Change START_DARKSTAT parameter to “yes” and provide your network interface in “INTERFACE”. Make sure to uncomment DIR, PORT, BINDIP, and LOCAL parameters here. If you wish to bind the web interface for Darkstat to some specific IP, provide it in BINDIP section.
+修改START_DARKSTAT这个参数为“yes”,并在“INTERFACE”中提供你的网络接口。确保取消了DIR、PORT、BINDIP和LOCAL这些参数的注释。如果你希望把Darkstat的网页界面绑定到特定的IP,在BINDIP中提供它。

-### Starting Darkstat Daemon ###
+### 启动Darkstat守护进程 ###

-Once the installation and configuration for Darkstat is complete, run following command to start its daemon.
+安装并配置完Darkstat后,运行下面的命令启动它的守护进程。

    sudo /etc/init.d/darkstat start

![Restarting Darkstat](http://linuxpitstop.com/wp-content/uploads/2015/08/23.png)

-You can configure Darkstat to start on system boot by running the following command:
+你可以用下面的命令让Darkstat在开机时自动启动:

    chkconfig darkstat on

-Launch your browser and load **http://localhost:666** and it will display the web based graphical interface for Darkstat. Start using this tool to analyze your network traffic.
+打开浏览器并访问**http://localhost:666**,它会显示Darkstat的网页界面。使用这个工具来分析你的网络流量。

![Darkstat](http://linuxpitstop.com/wp-content/uploads/2015/08/32.png)

-### Conclusion ###
+### 总结 ###

-It is a lightweight tool with very low memory footprints. The key reason for the popularity of this tool is simplicity, ease of configuration and usage. It is a must-have application for System and Network Administrators.
+它是一个占用很少内存的轻量级工具。这个工具流行的原因是简易、易于配置和使用。这是一个对系统管理员而言必须拥有的程序 -------------------------------------------------------------------------------- via: http://linuxpitstop.com/install-darkstat-on-ubuntu-linux/ 作者:[Aun][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From ed05a3b8483d65879b9bfc48daaa909e29afd347 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 13 Aug 2015 10:22:14 +0800 Subject: [PATCH 147/697] =?UTF-8?q?20150813-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ow to get Public IP from Linux Terminal.md | 68 +++ ...150813 Linux file system hierarchy v2.0.md | 438 ++++++++++++++++++ ... Install The Latest Nvidia Linux Driver.md | 63 +++ 3 files changed, 569 insertions(+) create mode 100644 sources/tech/20150813 How to get Public IP from Linux Terminal.md create mode 100644 sources/tech/20150813 Linux file system hierarchy v2.0.md create mode 100644 sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md diff --git a/sources/tech/20150813 How to get Public IP from Linux Terminal.md b/sources/tech/20150813 How to get Public IP from Linux Terminal.md new file mode 100644 index 0000000000..f0bba2cea9 --- /dev/null +++ b/sources/tech/20150813 How to get Public IP from Linux Terminal.md @@ -0,0 +1,68 @@ +How to get Public IP from Linux Terminal? +================================================================================ +![](http://www.blackmoreops.com/wp-content/uploads/2015/06/256x256xHow-to-get-Public-IP-from-Linux-Terminal-blackMORE-Ops.png.pagespeed.ic.GKEAEd4UNr.png) + +Public addresses are assigned by InterNIC and consist of class-based network IDs or blocks of CIDR-based addresses (called CIDR blocks) that are guaranteed to be globally unique to the Internet. 
+When public addresses are assigned, routes are programmed into the routers of the Internet so that traffic to the assigned public addresses can reach their locations. Traffic to destination public addresses is reachable on the Internet. For example, when an organization is assigned a CIDR block in the form of a network ID and subnet mask, that [network ID, subnet mask] pair also exists as a route in the routers of the Internet. IP packets destined to an address within the CIDR block are routed to the proper destination. In this post I will show several ways to find your public IP address from the Linux terminal. This may seem pointless for normal users, but it is handy when you are at the terminal of a headless Linux server (i.e. no GUI, or you are connected as a user with minimal tools). Either way, being able to get your public IP from the Linux terminal can be useful in many cases, or it could be one of those things that might just come in handy someday.
+
+There are two main commands we use, curl and wget. You can use them interchangeably.
+ +### Curl output in plain text format: ### + + curl icanhazip.com + curl ifconfig.me + curl curlmyip.com + curl ip.appspot.com + curl ipinfo.io/ip + curl ipecho.net/plain + curl www.trackip.net/i + +### curl output in JSON format: ### + + curl ipinfo.io/json + curl ifconfig.me/all.json + curl www.trackip.net/ip?json (bit ugly) + +### curl output in XML format: ### + + curl ifconfig.me/all.xml + +### curl all IP details – The motherload ### + + curl ifconfig.me/all + +### Using DYNDNS (Useful when you’re using DYNDNS service) ### + + curl -s 'http://checkip.dyndns.org' | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g' + curl -s http://checkip.dyndns.org/ | grep -o "[[:digit:].]\+" + +### Using wget instead of curl ### + + wget http://ipecho.net/plain -O - -q ; echo + wget http://observebox.com/ip -O - -q ; echo + +### Using host and dig command (cause we can) ### + +You can also use host and dig command assuming they are available or installed + + host -t a dartsclink.com | sed 's/.*has address //' + dig +short myip.opendns.com @resolver1.opendns.com + +### Sample bash script: ### + + #!/bin/bash + + PUBLIC_IP=`wget http://ipecho.net/plain -O - -q ; echo` + echo $PUBLIC_IP + +Quite a few to pick from. + +I was actually writing a small script to track all the IP changes of my router each day and save those into a file. I found these nifty commands and sites to use while doing some online research. Hope they help someone else someday too. Thanks for reading, please Share and RT. 
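The one-off commands above can be combined into something a little more defensive for scripting. The sketch below (hypothetical helper names `is_ipv4` and `get_public_ip` are my own; the service URLs are taken from the list above) tries several services in order and only accepts an answer that actually looks like an IPv4 address:

```shell
#!/bin/bash
# Hypothetical helper: accept a string only if it looks like a dotted-quad IPv4 address.
is_ipv4() {
  [[ $1 =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]
}

# Try several of the services listed above, falling back to the next one
# if a service is down, times out, or returns something unexpected.
get_public_ip() {
  local svc ip
  for svc in icanhazip.com ifconfig.me ipinfo.io/ip ipecho.net/plain; do
    ip=$(curl -s --max-time 5 "$svc") || continue
    if is_ipv4 "$ip"; then
      printf '%s\n' "$ip"
      return 0
    fi
  done
  echo "could not determine public IP" >&2
  return 1
}
```

This still depends on third-party services being reachable, so treat it as a convenience for scripts like the IP-change tracker mentioned below, not as something guaranteed to succeed.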
+ +-------------------------------------------------------------------------------- + +via: http://www.blackmoreops.com/2015/06/14/how-to-get-public-ip-from-linux-terminal/ + +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file diff --git a/sources/tech/20150813 Linux file system hierarchy v2.0.md b/sources/tech/20150813 Linux file system hierarchy v2.0.md new file mode 100644 index 0000000000..9df6d23dcf --- /dev/null +++ b/sources/tech/20150813 Linux file system hierarchy v2.0.md @@ -0,0 +1,438 @@ +Linux file system hierarchy v2.0 +================================================================================ +What is a file in Linux? What is file system in Linux? Where are all the configuration files? Where do I keep my downloaded applications? Is there really a filesystem standard structure in Linux? Well, the above image explains Linux file system hierarchy in a very simple and non-complex way. It’s very useful when you’re looking for a configuration file or a binary file. I’ve added some explanation and examples below, but that’s TL;DR. + +Another issue is when you got configuration and binary files all over the system that creates inconsistency and if you’re a large organization or even an end user, it can compromise your system (binary talking with old lib files etc.) and when you do [security audit of your Linux system][1], you find it is vulnerable to different exploits. So keeping a clean operating system (no matter Windows or Linux) is important. + +### What is a file in Linux? ### + +A simple description of the UNIX system, also applicable to Linux, is this: + +> On a UNIX system, everything is a file; if something is not a file, it is a process. 
+ +This statement is true because there are special files that are more than just files (named pipes and sockets, for instance), but to keep things simple, saying that everything is a file is an acceptable generalization. A Linux system, just like UNIX, makes no difference between a file and a directory, since a directory is just a file containing names of other files. Programs, services, texts, images, and so forth, are all files. Input and output devices, and generally all devices, are considered to be files, according to the system. + +![](http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png) + +- Version 2.0 – 17-06-2015 + - – Improved: Added title and version history. + - – Improved: Added /srv, /media and /proc. + - – Improved: Updated descriptions to reflect modern Linux File Systems. + - – Fixed: Multiple typo’s. + - – Fixed: Appearance and colour. +- Version 1.0 – 14-02-2015 + - – Created: Initial diagram. + - – Note: Discarded lowercase version. + +### Download Links ### + +Following are two links for download. If you need this in any other format, let me know and I will try to create that and upload it somewhere. + +- [Large (PNG) Format – 2480×1755 px – 184KB][2] +- [Largest (PDF) Format – 9919x7019 px – 1686KB][3] + +**Note**: PDF Format is best for printing and very high in quality + +### Linux file system description ### + +In order to manage all those files in an orderly fashion, man likes to think of them in an ordered tree-like structure on the hard disk, as we know from `MS-DOS` (Disk Operating System) for instance. The large branches contain more branches, and the branches at the end contain the tree’s leaves or normal files. For now we will use this image of the tree, but we will find out later why this is not a fully accurate image. 
+ +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Directory | Description |
+|-----------|-------------|
+| `/` | Primary hierarchy root and root directory of the entire file system hierarchy. |
+| `/bin` | Essential command binaries that need to be available in single user mode; for all users, e.g., cat, ls, cp. |
+| `/boot` | Boot loader files, e.g., kernels, initrd. |
+| `/dev` | Essential devices, e.g., /dev/null. |
+| `/etc` | Host-specific system-wide configuration files. There has been controversy over the meaning of the name itself. In early versions of the UNIX Implementation Document from Bell Labs, /etc is referred to as the *etcetera* directory, as this directory historically held everything that did not belong elsewhere (however, the FHS restricts /etc to static configuration files, and it may not contain binaries). Since the publication of early documentation, the directory name has been re-designated in various ways. Recent interpretations include backronyms such as “Editable Text Configuration” or “Extended Tool Chest”. |
+| `/etc/opt` | Configuration files for add-on packages that are stored in /opt/. |
+| `/etc/sgml` | Configuration files, such as catalogs, for software that processes SGML. |
+| `/etc/X11` | Configuration files for the X Window System, version 11. |
+| `/etc/xml` | Configuration files, such as catalogs, for software that processes XML. |
+| `/home` | Users’ home directories, containing saved files, personal settings, etc. |
+| `/lib` | Libraries essential for the binaries in /bin/ and /sbin/. |
+| `/lib<qual>` | Alternate format essential libraries. Such directories are optional, but if they exist, they have some requirements. |
+| `/media` | Mount points for removable media such as CD-ROMs (appeared in FHS-2.3). |
+| `/mnt` | Temporarily mounted filesystems. |
+| `/opt` | Optional application software packages. |
+| `/proc` | Virtual filesystem providing process and kernel information as files. In Linux, corresponds to a procfs mount. |
+| `/root` | Home directory for the root user. |
+| `/sbin` | Essential system binaries, e.g., init, ip, mount. |
+| `/srv` | Site-specific data which are served by the system. |
+| `/tmp` | Temporary files (see also /var/tmp). Often not preserved between system reboots. |
+| `/usr` | Secondary hierarchy for read-only user data; contains the majority of (multi-)user utilities and applications. |
+| `/usr/bin` | Non-essential command binaries (not needed in single user mode); for all users. |
+| `/usr/include` | Standard include files. |
+| `/usr/lib` | Libraries for the binaries in /usr/bin/ and /usr/sbin/. |
+| `/usr/lib<qual>` | Alternate format libraries (optional). |
+| `/usr/local` | Tertiary hierarchy for local data, specific to this host. Typically has further subdirectories, e.g., bin/, lib/, share/. |
+| `/usr/sbin` | Non-essential system binaries, e.g., daemons for various network services. |
+| `/usr/share` | Architecture-independent (shared) data. |
+| `/usr/src` | Source code, e.g., the kernel source code with its header files. |
+| `/usr/X11R6` | X Window System, Version 11, Release 6. |
+| `/var` | Variable files—files whose content is expected to continually change during normal operation of the system—such as logs, spool files, and temporary e-mail files. |
+| `/var/cache` | Application cache data. Such data are locally generated as a result of time-consuming I/O or calculation. The application must be able to regenerate or restore the data. The cached files can be deleted without loss of data. |
+| `/var/lib` | State information. Persistent data modified by programs as they run, e.g., databases, packaging system metadata, etc. |
+| `/var/lock` | Lock files. Files keeping track of resources currently in use. |
+| `/var/log` | Log files. Various logs. |
+| `/var/mail` | Users’ mailboxes. |
+| `/var/opt` | Variable data from add-on packages that are stored in /opt/. |
+| `/var/run` | Information about the running system since last boot, e.g., currently logged-in users and running daemons. |
+| `/var/spool` | Spool for tasks waiting to be processed, e.g., print queues and outgoing mail queue. |
+| `/var/spool/mail` | Deprecated location for users’ mailboxes. |
+| `/var/tmp` | Temporary files to be preserved between reboots. |
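As a quick sanity check, the top-level entries in the table can be probed on a live system. This is only an illustrative sketch; directories that the FHS treats as optional or site-specific (e.g. /srv, /media) may legitimately be absent:

```shell
#!/bin/bash
# Report which of the standard top-level FHS directories exist on this system.
for d in /bin /boot /dev /etc /home /lib /media /mnt /opt /proc /root /sbin /srv /tmp /usr /var; do
  if [ -d "$d" ]; then
    echo "present: $d"
  else
    echo "missing: $d"
  fi
done
```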
+ +### Types of files in Linux ### + +Most files are just files, called `regular` files; they contain normal data, for example text files, executable files or programs, input for or output from a program and so on. + +While it is reasonably safe to suppose that everything you encounter on a Linux system is a file, there are some exceptions. + +- `Directories`: files that are lists of other files. +- `Special files`: the mechanism used for input and output. Most special files are in `/dev`, we will discuss them later. +- `Links`: a system to make a file or directory visible in multiple parts of the system’s file tree. We will talk about links in detail. +- `(Domain) sockets`: a special file type, similar to TCP/IP sockets, providing inter-process networking protected by the file system’s access control. +- `Named pipes`: act more or less like sockets and form a way for processes to communicate with each other, without using network socket semantics. + +### File system in reality ### + +For most users and for most common system administration tasks, it is enough to accept that files and directories are ordered in a tree-like structure. The computer, however, doesn’t understand a thing about trees or tree-structures. + +Every partition has its own file system. By imagining all those file systems together, we can form an idea of the tree-structure of the entire system, but it is not as simple as that. In a file system, a file is represented by an `inode`, a kind of serial number containing information about the actual data that makes up the file: to whom this file belongs, and where is it located on the hard disk. + +Every partition has its own set of inodes; throughout a system with multiple partitions, files with the same inode number can exist. + +Each inode describes a data structure on the hard disk, storing the properties of a file, including the physical location of the file data. 
+When a hard disk is initialized to accept data storage, usually during the initial system installation process or when adding extra disks to an existing system, a fixed number of inodes per partition is created. This number will be the maximum number of files, of all types (including directories, special files, links etc.) that can exist at the same time on the partition. We typically count on having 1 inode per 2 to 8 kilobytes of storage.
+
+At the time a new file is created, it gets a free inode. In that inode is the following information:
+
+- Owner and group owner of the file.
+- File type (regular, directory, …)
+- Permissions on the file
+- Date and time of creation, last read and change.
+- Date and time this information has been changed in the inode.
+- Number of links to this file (see later in this chapter).
+- File size
+- An address defining the actual location of the file data.
+
+The only information not included in an inode is the file name and directory. These are stored in the special directory files. By comparing file names and inode numbers, the system can make up a tree-structure that the user understands. Users can display inode numbers using the -i option to ls. The inodes have their own separate space on the disk.
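The inode number and link count described above can be observed directly with `ls -i` and `stat`. The following throwaway-directory sketch (it assumes the GNU coreutils `stat -c` format option) shows that a hard link reuses the original file's inode, while a copy gets a fresh one:

```shell
#!/bin/bash
# Demonstrate inode sharing: a hard link reuses an inode, a copy does not.
set -e
tmpdir=$(mktemp -d)
cd "$tmpdir"

echo "hello" > original.txt
ln original.txt hardlink.txt   # hard link: same inode, link count becomes 2
cp original.txt copy.txt       # copy: brand new inode

ls -i original.txt hardlink.txt copy.txt   # first column is the inode number

orig_inode=$(stat -c %i original.txt)
link_inode=$(stat -c %i hardlink.txt)
copy_inode=$(stat -c %i copy.txt)
links=$(stat -c %h original.txt)

echo "original inode: $orig_inode (link count $links)"
echo "hardlink inode: $link_inode"
echo "copy inode:     $copy_inode"
```

Deleting one name only decrements the link count; the data blocks are freed when the count reaches zero and no process holds the file open.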
+ +-------------------------------------------------------------------------------- + +via: http://www.blackmoreops.com/2015/06/18/linux-file-system-hierarchy-v2-0/ + +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.blackmoreops.com/2015/02/15/in-light-of-recent-linux-exploits-linux-security-audit-is-a-must/ +[2]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png +[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf \ No newline at end of file diff --git a/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md b/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md new file mode 100644 index 0000000000..2bae0061c4 --- /dev/null +++ b/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md @@ -0,0 +1,63 @@ +Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver +================================================================================ +![Ubuntu Gamers are on the rise -and so is demand for the latest drivers](http://www.omgubuntu.co.uk/wp-content/uploads/2014/03/ubuntugamer_logo_dark-500x250.jpg) +Ubuntu Gamers are on the rise -and so is demand for the latest drivers + +**Installing the latest upstream NVIDIA graphics driver on Ubuntu could be about to get much easier. ** + +Ubuntu developers are considering the creation of a brand new ‘official’ PPA to distribute the latest closed-source NVIDIA binary drivers to desktop users. + +The move would benefit Ubuntu gamers **without** risking the stability of the OS for everyone else. 
+
+New upstream drivers would be installed and updated from this new PPA **only** when a user explicitly opts in to it. Everyone else would continue to receive and use the more recent stable NVIDIA Linux driver snapshot included in the Ubuntu archive.
+
+### Why Is This Needed? ###
+
+![Ubuntu provides drivers – but they’re not the latest](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/drivers.jpg)
+Ubuntu provides drivers – but they’re not the latest
+
+The closed-source NVIDIA graphics drivers that are available to install on Ubuntu from the archive (using the command line, synaptic or through the additional drivers tool) work fine for most and can handle the composited Unity desktop shell with ease.
+
+For gaming needs it’s a different story.
+
+If you want to squeeze every last frame and HD texture out of the latest big-name Steam game you’ll need the latest binary drivers blob.
+
+> ‘Installing the very latest Nvidia Linux driver on Ubuntu is not easy and not always safe.’
+
+The more recent the driver the more likely it is to support the latest features and technologies, or come pre-packed with game-specific tweaks and bug fixes too.
+
+The problem is that installing the very latest Nvidia Linux driver on Ubuntu is not easy and not always safe.
+
+To fill the void many third-party PPAs maintained by enthusiasts have emerged. Since many of these PPAs also distribute other experimental or bleeding-edge software their use is **not without risk**. Adding a bleeding-edge PPA is often the fastest way to entirely hose a system!
+
+A solution that lets Ubuntu users install the latest proprietary graphics drivers as offered in third-party PPAs is needed **but** with the safety catch of being able to roll back to the stable archive version if needed.
+ +### ‘Demand for fresh drivers is hard to ignore’ ### + +> ‘A solution that lets Ubuntu users get the latest hardware drivers safely is coming.’ + +‘The demand for fresh drivers in a fast developing market is becoming hard to ignore, users are going to want the latest upstream has to offer,’ Castro explains in an e-mail to the Ubuntu Desktop mailing list. + +‘[NVIDIA] can deliver a kickass experience with almost no effort from the user [in Windows 10]. Until we can convince NVIDIA to do the same with Ubuntu we’re going to have to pick up the slack.’ + +Castro’s proposition of a “blessed” NVIDIA PPA is the easiest way to do this. + +Gamers would be able to opt-in to receive new drivers from the PPA straight from Ubuntu’s default proprietary hardware drivers tool — no need for them to copy and paste terminal commands from websites or wiki pages. + +The drivers within this PPA would be packaged and maintained by a select band of community members and receive benefits from being a semi-official option, namely **automated testing**. + +As Castro himself puts it: ‘People want the latest bling, and no matter what they’re going to do it. We might as well put a framework around it so people can get what they want without breaking their computer.’ + +**Would you make use of this PPA? How would you rate the performance of the default Nvidia drivers on Ubuntu? Share your thoughts in the comments, folks! 
** + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers + +作者:[Joey-Elijah Sneddon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author \ No newline at end of file From cc0d58299115994ccd4982cdb806460dc18e718b Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 13 Aug 2015 10:35:12 +0800 Subject: [PATCH 148/697] translated --- ...s a Web Based Network Traffic Analyzer--Install it on Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md (100%) diff --git a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md b/translated/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md similarity index 100% rename from sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md rename to translated/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md From 2857115c5c775ef504947bd7cfc8fdffcafcb256 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 13 Aug 2015 10:45:09 +0800 Subject: [PATCH 149/697] =?UTF-8?q?20150813-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...How to Install Logwatch on Ubuntu 15.04.md | 137 +++++++++++++++ ...st Disk I O Performance With dd Command.md | 162 ++++++++++++++++++ 2 files changed, 299 insertions(+) create mode 100644 sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md create mode 100644 sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md diff --git 
a/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md b/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
new file mode 100644
index 0000000000..fa9458dcb4
--- /dev/null
+++ b/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
@@ -0,0 +1,137 @@
+How to Install Logwatch on Ubuntu 15.04
+================================================================================
+Hi, today we are going to illustrate the setup of Logwatch on the Ubuntu 15.04 operating system, though it can be used on any Linux and UNIX like operating system. Logwatch is a customizable system log analyzer and log-monitoring/reporting system that goes through your logs for a given period of time and makes a report in the areas that you wish, with the details you want. It's an easy tool to install, configure and review, and the data it provides lets you take actions that will improve security. Logwatch scans the log files of major operating system components, like SSH and the web server, and forwards a summary that contains the valuable items that need to be looked at.
+
+### Pre-installation Setup ###
+
+We will be using the Ubuntu 15.04 operating system to deploy Logwatch. As a prerequisite for the installation of Logwatch, make sure that your email setup is working, as it will be used to send the daily reports to the administrators. Your system repositories should be enabled, as we will be installing Logwatch from the available repositories.
+
+Then open the terminal of your Ubuntu operating system and log in as the root user to update your system packages before moving on to the Logwatch installation.
+
+    root@ubuntu-15:~# apt-get update
+
+### Installing Logwatch ###
+
+Once your system is updated and you have fulfilled all the prerequisites, run the following command to start the installation of Logwatch on your server.
+
+    root@ubuntu-15:~# apt-get install logwatch
+
+The Logwatch installation process starts with the addition of some extra required packages, shown once you press “Y” to accept the required changes to the system.
+
+During the installation process you will be prompted to configure Postfix according to your mail server’s setup. Here we used “Local only” in the tutorial for ease; you can choose from the other available options as per your infrastructure requirements and then press “OK” to proceed.
+
+![Postfix Configurations](http://blog.linoxide.com/wp-content/uploads/2015/08/21.png)
+
+Then you have to choose your mail server’s name, which will also be used by other programs, so it should be a single fully qualified domain name (FQDN).
+
+![Postfix Setup](http://blog.linoxide.com/wp-content/uploads/2015/08/31.png)
+
+Once you press “OK” after the Postfix configuration, the Logwatch installation completes with the default Postfix configuration.
+
+![Logwatch Completion](http://blog.linoxide.com/wp-content/uploads/2015/08/41.png)
+
+You can check the status of Postfix by issuing the following command in the terminal; it should be in the active state.
+
+    root@ubuntu-15:~# service postfix status
+
+![Postfix Status](http://blog.linoxide.com/wp-content/uploads/2015/08/51.png)
+
+To confirm the installation of Logwatch with its default configuration, issue the simple “logwatch” command as shown.
+
+    root@ubuntu-15:~# logwatch
+
+The output of the above command results in the following compiled report in the terminal.
+
+![Logwatch Report](http://blog.linoxide.com/wp-content/uploads/2015/08/61.png)
+
+### Logwatch Configurations ###
+
+Now, after the successful installation of Logwatch, we need to make a few changes in its configuration file, located at the path shown below. So, let’s open it with a file editor to update its configuration as required.
+
+    root@ubuntu-15:~# vim /usr/share/logwatch/default.conf/logwatch.conf
+
+**Output/Format Options**
+
+By default Logwatch will print to stdout in text with no encoding. To make email the default, set “Output = mail”, and to save to a file, set “Output = file”. You can comment out its default configuration as per your required settings.
+
+    Output = stdout
+
+To make HTML the default format, update the following line if you are using Internet email configurations.
+
+    Format = text
+
+Now set the default person the mail reports should be sent to; it could be a local account or a complete email address that you are free to mention in this line.
+
+    MailTo = root
+    #MailTo = user@test.com
+
+The default sender of the mail reports can likewise be a local account or any other address you wish to use.
+
+    # complete email address.
+    MailFrom = Logwatch
+
+Save the changes made in the Logwatch configuration file, leaving the other parameters at their defaults.
+
+**Cronjob Configuration**
+
+Now edit the "00logwatch" file in the daily cron directory to configure the email address that the Logwatch reports should be forwarded to.
+
+    root@ubuntu-15:~# vim /etc/cron.daily/00logwatch
+
+Here you need to use "--mailto user@test.com" instead of "--output mail", and save the file.
+
+![Logwatch Cronjob](http://blog.linoxide.com/wp-content/uploads/2015/08/71.png)
+
+### Using Logwatch Report ###
+
+Now we generate a test report by executing the "logwatch" command in the terminal, which shows its result in text format within the terminal.
+
+    root@ubuntu-15:~# logwatch
+
+The generated report starts by showing its execution time and date. It comprises different sections, each opening with a begin marker and closing with an end marker after showing the complete log information for that section.
+
+Here is what its beginning looks like: it starts by showing all the installed packages in the system, as shown below.
+
+![dpkg status](http://blog.linoxide.com/wp-content/uploads/2015/08/81.png)
+
+The following sections show the log information about login sessions, rsyslog and SSH connections for the current and last sessions on the system.
+
+![logwatch report](http://blog.linoxide.com/wp-content/uploads/2015/08/9.png)
+
+The Logwatch report ends by showing the secure sudo logs and the disk space usage of the root directory, as shown below.
+
+![Logwatch end report](http://blog.linoxide.com/wp-content/uploads/2015/08/10.png)
+
+You can also check the emails generated for the Logwatch reports by opening the following file.
+
+    root@ubuntu-15:~# vim /var/mail/root
+
+Here you will be able to see all the emails generated for your configured users, with their message delivery status.
+
+### More about Logwatch ###
+
+Logwatch is a great tool that is worth learning more about; if you are interested, you can get much help from the few commands below.
+
+    root@ubuntu-15:~# man logwatch
+
+The above command opens the user manual of Logwatch, so read it carefully; to exit the manual, simply press "q".
+
+To get help about logwatch command usage, you can run the following help command for further details.
+
+    root@ubuntu-15:~# logwatch --help
+
+### Conclusion ###
+
+In this tutorial you learned the complete setup of Logwatch on Ubuntu 15.04, including its installation and configuration. Now you can start monitoring your logs in a customizable form, whether you monitor the logs of all the services running on your system or customize it to send you reports about specific services on scheduled days. So, let's use this tool, and feel free to leave us a comment if you face any issue or need to know more about Logwatch usage.
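For quick reference, after the edits described earlier, the relevant part of `/usr/share/logwatch/default.conf/logwatch.conf` ends up looking like the fragment below. This is only a sketch: `user@test.com` is the placeholder address used earlier in the tutorial, so substitute your own values.

```
# Send the report by email instead of printing it to stdout
Output = mail

# Deliver the report as HTML (use "text" for plain text)
Format = html

# Recipient: a local account or a complete email address
MailTo = user@test.com

# Sender shown on the report emails
MailFrom = Logwatch
```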
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/
+
+作者:[Kashif Siddique][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
\ No newline at end of file
diff --git a/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md b/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md
new file mode 100644
index 0000000000..c30619d13e
--- /dev/null
+++ b/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md
@@ -0,0 +1,162 @@
+Linux and Unix Test Disk I/O Performance With dd Command
+================================================================================
+How can I use the dd command on Linux to test the I/O performance of my hard disk drive? How do I check the performance of a hard drive, including the read and write speed, on Linux operating systems?
+
+You can use the following commands on Linux or Unix-like systems for a simple I/O performance test:
+
+- **dd command** : It is used to monitor the writing performance of a disk device on Linux and Unix-like systems.
+- **hdparm command** : It is used to get/set hard disk parameters, including testing the reading and caching performance of a disk device on a Linux-based system.
+
+In this tutorial you will learn how to use the dd command to test disk I/O performance.
+
+### Use dd command to monitor the reading and writing performance of a disk device: ###
+
+- Open a shell prompt.
+- Or log in to a remote server via ssh.
+- Use the dd command to measure server throughput (write speed): `dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync`
+- Use the dd command to measure server latency: `dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync`
+
+#### Understanding dd command options ####
+
+In this example, I'm using a RAID-10 array (Adaptec 5405Z with SAS SSDs) running on an Ubuntu Linux 14.04 LTS server. The basic syntax is:
+
+    dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync
+    ## GNU dd syntax ##
+    dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
+    ## OR alternate syntax for GNU/dd ##
+    dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
+
+Sample outputs:
+
+![Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd](http://s0.cyberciti.org/uploads/faq/2015/08/dd-server-test-io-speed-output.jpg)
+Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd

+Please note that one gigabyte was written for the test, and 135 MB/s was the server throughput for this test. Where,
+
+- `if=/dev/zero (if=/dev/input.file)` : The name of the input file you want dd to read from.
+- `of=/tmp/test1.img (of=/path/to/output.file)` : The name of the output file you want dd to write the input file to.
+- `bs=1G (bs=block-size)` : Set the size of the block you want dd to use. 1 gigabyte was written for the test.
+- `count=1 (count=number-of-blocks)` : The number of blocks you want dd to read.
+- `oflag=dsync (oflag=dsync)` : Use synchronized I/O for data. Do not skip this option. This option gets rid of caching and gives you good and accurate results.
+- `conv=fdatasync` : Again, this tells dd to require a complete "sync" once, right before it exits. Unlike oflag=dsync, which syncs every block as it is written, it flushes to disk only once, at the end.
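Before running the 1 GB examples above on a real server, it can help to dry-run the same flags at a much smaller scale. The sketch below (file name and sizes are arbitrary choices) writes just 8 MB to /tmp with the same oflag=dsync option, so you can verify the behaviour safely first:

```shell
# Small-scale dry run of the throughput test: 8 blocks of 1 MB,
# with oflag=dsync so every block is physically synced to disk.
dd if=/dev/zero of=/tmp/dd_dryrun.img bs=1M count=8 oflag=dsync

# dd prints its summary line (bytes written, elapsed time, throughput)
# on stderr. Verify the file size and clean up afterwards:
ls -l /tmp/dd_dryrun.img
rm -f /tmp/dd_dryrun.img
```

Once the options behave as expected, scale bs and count back up for a meaningful benchmark.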
+
+In this example, 512 bytes were written one thousand times to get the RAID10 server latency time:
+
+    dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync
+
+Sample outputs:
+
+    1000+0 records in
+    1000+0 records out
+    512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s
+
+Please note that server throughput and latency time depend upon server/application load too. So I recommend that you run these tests on a newly rebooted server as well as at peak time to get a better idea about your workload. You can now compare these numbers with all your devices.
+
+#### But why are the server throughput and latency time so low? ####
+
+Low values do not mean you are using slow hardware. The value can be low because of the hardware RAID10 controller's cache.
+
+Use the hdparm command to see buffered and cached disk read speeds.
+
+I suggest you run the following commands 2 or 3 times to perform timings of device reads for benchmark and comparison purposes:
+
+    ### Buffered disk read test for /dev/sda ##
+    hdparm -t /dev/sda1
+    ## OR ##
+    hdparm -t /dev/sda
+
+To perform timings of cache reads for benchmark and comparison purposes, again run the following command 2-3 times (note the -T option):
+
+    ## Cache read benchmark for /dev/sda ###
+    hdparm -T /dev/sda1
+    ## OR ##
+    hdparm -T /dev/sda
+
+OR combine both tests:
+
+    hdparm -Tt /dev/sda
+
+Sample outputs:
+
+![Fig.02: Linux hdparm command to test reading and caching disk performance](http://s0.cyberciti.org/uploads/faq/2015/08/hdparam-output.jpg)
+Fig.02: Linux hdparm command to test reading and caching disk performance
+
+Again, note that due to filesystem caching of file operations, you will always see high read rates.
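The caching effect noted above is easy to reproduce with dd alone, without hdparm and without root access. The sketch below (file name and sizes are arbitrary choices) reads the same file twice; the second read is normally served from the page cache and reports a much higher rate:

```shell
# Create an 8 MB test file and make sure it is flushed to disk
dd if=/dev/zero of=/tmp/cache_demo.img bs=1M count=8 2>/dev/null
sync

# First read: may have to touch the disk
dd if=/tmp/cache_demo.img of=/dev/null bs=1M 2>&1 | tail -n 1

# Second read: served from the page cache, usually much faster
dd if=/tmp/cache_demo.img of=/dev/null bs=1M 2>&1 | tail -n 1

# Clean up
rm -f /tmp/cache_demo.img
```

Compare the two summary lines: the difference between them is the cache, not the hardware.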
+
+**Use the dd command on Linux to test read speed**
+
+To get accurate read test data, first discard caches before testing by running the following commands:
+
+    sync
+    echo 3 | sudo tee /proc/sys/vm/drop_caches
+    time dd if=/path/to/bigfile of=/dev/null bs=8k
+
+**Linux Laptop example**
+
+Run the following command:
+
+    ### Debian Laptop Throughput With Cache ##
+    dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct
+
+    ### Deactivate the cache ###
+    hdparm -W0 /dev/sda
+
+    ### Debian Laptop Throughput Without Cache ##
+    dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct
+
+**Apple OS X Unix (Macbook pro) example**
+
+GNU dd has many more options, but the OS X/BSD and Unix-like dd command needs to be run as follows, with a sync added, to test real disk I/O and not memory:
+
+    ## Run command 2-3 times to get good results ###
+    time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync"
+
+Sample outputs:
+
+    1024+0 records in
+    1024+0 records out
+    104857600 bytes transferred in 0.165040 secs (635346520 bytes/sec)
+
+    real 0m0.241s
+    user 0m0.004s
+    sys 0m0.113s
+
+So I'm getting a write speed of 635346520 bytes/sec (about 635 MB/s) on my MBP.
+
+**Not a fan of command line...?**
+
+You can use the disk utility (gnome-disk-utility) on a Linux or Unix-based system to get the same information. The following screenshot is taken from my Fedora Linux v22 VM.
+
+**Graphical method**
+
+Click on "Activities" or press the "Super" key to switch between the Activities overview and the desktop. Type "Disks".
+
+![Fig.03: Start the Gnome disk utility](http://s0.cyberciti.org/uploads/faq/2015/08/disk-1.jpg)
+Fig.03: Start the Gnome disk utility
+
+Select your hard disk in the left pane, click on the configure button, and then click on "Benchmark partition":
+
+![Fig.04: Benchmark disk/partition](http://s0.cyberciti.org/uploads/faq/2015/08/disks-2.jpg)
+Fig.04: Benchmark disk/partition
+
+Finally, click on the "Start Benchmark..." 
button (you may be prompted for the admin username and password):
+
+![Fig.05: Final benchmark result](http://s0.cyberciti.org/uploads/faq/2015/08/disks-3.jpg)
+Fig.05: Final benchmark result
+
+Which method and command do you recommend to use?
+
+- I recommend the dd command on all Unix-like systems (`time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync"`)
+- If you are using GNU/Linux use the dd command (`dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync`)
+- Make sure you adjust the count and bs arguments as per your setup to get a good set of results.
+- The GUI method is recommended only for Linux/Unix laptop users running the GNOME 2 or 3 desktop.
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
+
+作者:Vivek Gite
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file
From fc087db114a358509e8efdd90b708b5c6e72548a Mon Sep 17 00:00:00 2001
From: runningwater
Date: Thu, 13 Aug 2015 10:49:04 +0800
Subject: [PATCH 150/697] by runningwater

---
 .../tech/20150813 How to Install Logwatch on Ubuntu 15.04.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md b/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
index fa9458dcb4..24c71b0cbe 100644
--- a/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
+++ b/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
@@ -1,3 +1,4 @@
+(translating by runningwater)
 How to Install Logwatch on Ubuntu 15.04
 ================================================================================
 Hi, Today we are going to illustrate the setup of Logwatch on Ubuntu 15.04 Operating system where as it can be used for any Linux and UNIX like
operating systems. Logwatch is a customizable system log analyzer and reporting log-monitoring system that go through your logs for a given period of time and make a report in the areas that you wish with the details you want. Its an easy tool to install, configure, review and to take actions that will improve security from data it provides. Logwatch scans the log files of major operating system components, like SSH, Web Server and forwards a summary that contains the valuable items in it that needs to be looked at. @@ -129,9 +130,9 @@ At the end of this tutorial you learn about the complete setup of Logwatch on Ub via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/ 作者:[Kashif Siddique][a] -译者:[译者ID](https://github.com/译者ID) +译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://linoxide.com/author/kashifs/ \ No newline at end of file +[a]:http://linoxide.com/author/kashifs/ From 14f172121f66f179cb7525924c2c46a0cdb064b2 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 13 Aug 2015 11:01:53 +0800 Subject: [PATCH 151/697] =?UTF-8?q?20150813-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ation GA with OData in Docker Container.md | 102 ++++++++++++++++++ 1 file changed, 102 insertions(+) create mode 100644 sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md diff --git a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md new file mode 100644 index 0000000000..0893b9a361 --- /dev/null +++ b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md @@ -0,0 +1,102 @@ +Howto Run JBoss Data Virtualization GA with OData in 
Docker Container
+================================================================================
+Hi everyone, today we'll learn how to run JBoss Data Virtualization 6.0.0.GA with OData in a Docker container. JBoss Data Virtualization is a data supply and integration platform that transforms data scattered across multiple sources, treats it as a single source, and delivers the required data as actionable information at business speed to any application or user. JBoss Data Virtualization can help us easily combine and transform data into reusable, business-friendly data models and make unified data easily consumable through open standard interfaces. It offers comprehensive data abstraction, federation, integration, transformation, and delivery capabilities to combine data from one or multiple sources into reusable models for agile data utilization and sharing. For more information about JBoss Data Virtualization, we can check out [its official page][1]. Docker is an open source platform to pack, ship and run any application as a lightweight container. Running JBoss Data Virtualization with OData in a Docker container makes it easy to handle and launch.
+
+Here is an easy-to-follow tutorial on how we can run JBoss Data Virtualization with OData in a Docker container.
+
+### 1. Cloning the Repository ###
+
+First of all, we'll want to clone the repository of OData with Data Virtualization, i.e. [https://github.com/jbossdemocentral/dv-odata-docker-integration-demo][2], using the git command. As we have an Ubuntu 15.04 distribution of Linux running on our machine, we'll need to install git first using the apt-get command.
+
+    # apt-get install git
+
+Then, after installing git, we'll clone the repository by running the command below.
+
+    # git clone https://github.com/jbossdemocentral/dv-odata-docker-integration-demo
+
+    Cloning into 'dv-odata-docker-integration-demo'...
+    remote: Counting objects: 96, done.
+    remote: Total 96 (delta 0), reused 0 (delta 0), pack-reused 96
+    Unpacking objects: 100% (96/96), done.
+    Checking connectivity... done.
+
+### 2. Downloading JBoss Data Virtualization Installer ###
+
+Now, we'll need to download the JBoss Data Virtualization installer from the download page, i.e. [http://www.jboss.org/products/datavirt/download/][3]. After we download **jboss-dv-installer-6.0.0.GA-redhat-4.jar**, we'll need to keep it under the directory named **software**.
+
+### 3. Building the Docker Image ###
+
+Next, after we have downloaded the JBoss Data Virtualization installer, we'll build the Docker image using the Dockerfile and the resources we just cloned from the repository.
+
+    # cd dv-odata-docker-integration-demo/
+    # docker build -t jbossdv600 .
+
+    ...
+    Step 22 : USER jboss
+     ---> Running in 129f701febd0
+     ---> 342941381e37
+    Removing intermediate container 129f701febd0
+    Step 23 : EXPOSE 8080 9990 31000
+     ---> Running in 61e6d2c26081
+     ---> 351159bb6280
+    Removing intermediate container 61e6d2c26081
+    Step 24 : CMD $JBOSS_HOME/bin/standalone.sh -c standalone.xml -b 0.0.0.0 -bmanagement 0.0.0.0
+     ---> Running in a9fed69b3000
+     ---> 407053dc470e
+    Removing intermediate container a9fed69b3000
+    Successfully built 407053dc470e
+
+Note: Here, we assume that you have already installed Docker and that it is running on your machine.
+
+### 4. Starting the Docker Container ###
+
+As we have built the Docker image of JBoss Data Virtualization with OData, we'll now run the Docker container and publish its port with the -p flag. To do so, we'll run the following command.
+
+    # docker run -p 8080:8080 -d -t jbossdv600
+
+    7765dee9cd59c49ca26850e88f97c21f46859d2dc1d74166353d898773214c9c
+
+### 5. Getting the Container IP ###
+
+After we have started the Docker container, we'll want to get the IP address of the running container. To do so, we'll run the docker inspect command followed by the running container's ID.
+
+    # docker inspect <$containerID>
+
+    ...
+    "NetworkSettings": {
+        "Bridge": "",
+        "EndpointID": "3e94c5900ac5954354a89591a8740ce2c653efde9232876bc94878e891564b39",
+        "Gateway": "172.17.42.1",
+        "GlobalIPv6Address": "",
+        "GlobalIPv6PrefixLen": 0,
+        "HairpinMode": false,
+        "IPAddress": "172.17.0.8",
+        "IPPrefixLen": 16,
+        "IPv6Gateway": "",
+        "LinkLocalIPv6Address": "",
+        "LinkLocalIPv6PrefixLen": 0,
+
+### 6. Web Interface ###
+
+Now, if everything went as expected, we'll see the login screen of JBoss Data Virtualization with OData when pointing our web browser to http://container-ip:8080/, and the JBoss Management console at http://container-ip:9990. The management credentials are username admin and password redhat1!, whereas the Data Virtualization credentials are username user and password user. After that, we can navigate the contents via the web interface.
+
+**Note**: It is strongly recommended to change the passwords as soon as possible after the first login. Thanks :)
+
+### Conclusion ###
+
+Finally, we've successfully run a Docker container running JBoss Data Virtualization with an OData multisource virtual database. JBoss Data Virtualization is really an awesome platform for virtualizing data from multiple different sources, transforming it into reusable, business-friendly data models, and producing data that is easily consumable through open standard interfaces. The deployment of JBoss Data Virtualization with OData has been very easy, secure and fast to set up with Docker technology. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you ! 
Enjoy :-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/run-jboss-data-virtualization-ga-odata-docker-container/ + +作者:[Arun Pyasi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://www.redhat.com/en/technologies/jboss-middleware/data-virtualization +[2]:https://github.com/jbossdemocentral/dv-odata-docker-integration-demo +[3]:http://www.jboss.org/products/datavirt/download/ \ No newline at end of file From e5c72db0c2ee4f1b0b0a57d65de1c17ecbdfe2bd Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Thu, 13 Aug 2015 11:29:35 +0800 Subject: [PATCH 152/697] Update 20150813 Linux and Unix Test Disk I O Performance With dd Command.md --- ...inux and Unix Test Disk I O Performance With dd Command.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md b/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md index c30619d13e..bcd9f8455f 100644 --- a/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md +++ b/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md @@ -1,3 +1,5 @@ +DongShuaike is translating. + Linux and Unix Test Disk I/O Performance With dd Command ================================================================================ How can I use dd command on a Linux to test I/O performance of my hard disk drive? How do I check the performance of a hard drive including the read and write speed on a Linux operating systems? 
@@ -159,4 +161,4 @@ via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 363e76972134acc20f1b92efe0b5c4bf2f145615 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 13 Aug 2015 13:34:50 +0800 Subject: [PATCH 153/697] PUB:20150803 Linux Logging Basics @FSSlc --- .../20150803 Linux Logging Basics.md | 52 +++++++++---------- 1 file changed, 25 insertions(+), 27 deletions(-) rename {translated/tech => published}/20150803 Linux Logging Basics.md (53%) diff --git a/translated/tech/20150803 Linux Logging Basics.md b/published/20150803 Linux Logging Basics.md similarity index 53% rename from translated/tech/20150803 Linux Logging Basics.md rename to published/20150803 Linux Logging Basics.md index 00acdf183e..de8a5d661c 100644 --- a/translated/tech/20150803 Linux Logging Basics.md +++ b/published/20150803 Linux Logging Basics.md @@ -1,6 +1,6 @@ Linux 日志基础 ================================================================================ -首先,我们将描述有关 Linux 日志是什么,到哪儿去找它们以及它们是如何创建的基础知识。如果你已经知道这些,请随意跳至下一节。 +首先,我们将描述有关 Linux 日志是什么,到哪儿去找它们,以及它们是如何创建的基础知识。如果你已经知道这些,请随意跳至下一节。 ### Linux 系统日志 ### @@ -10,71 +10,69 @@ Linux 日志基础 一些最为重要的 Linux 系统日志包括: -- `/var/log/syslog` 或 `/var/log/messages` 存储所有的全局系统活动数据,包括开机信息。基于 Debian 的系统如 Ubuntu 在 `/var/log/syslog` 目录中存储它们,而基于 RedHat 的系统如 RHEL 或 CentOS 则在 `/var/log/messages` 中存储它们。 +- `/var/log/syslog` 或 `/var/log/messages` 存储所有的全局系统活动数据,包括开机信息。基于 Debian 的系统如 Ubuntu 在 `/var/log/syslog` 中存储它们,而基于 RedHat 的系统如 RHEL 或 CentOS 则在 `/var/log/messages` 中存储它们。 - `/var/log/auth.log` 或 `/var/log/secure` 存储来自可插拔认证模块(PAM)的日志,包括成功的登录,失败的登录尝试和认证方式。Ubuntu 和 Debian 在 `/var/log/auth.log` 中存储认证信息,而 RedHat 和 CentOS 则在 `/var/log/secure` 中存储该信息。 -- 
`/var/log/kern` 存储内核错误和警告数据,这对于排除与自定义内核相关的故障尤为实用。 +- `/var/log/kern` 存储内核的错误和警告数据,这对于排除与定制内核相关的故障尤为实用。 - `/var/log/cron` 存储有关 cron 作业的信息。使用这个数据来确保你的 cron 作业正成功地运行着。 -Digital Ocean 有一个完整的关于这些文件及 rsyslog 如何在常见的发行版本如 RedHat 和 CentOS 中创建它们的 [教程][1] 。 +Digital Ocean 有一个关于这些文件的完整[教程][1],介绍了 rsyslog 如何在常见的发行版本如 RedHat 和 CentOS 中创建它们。 应用程序也会在这个目录中写入日志文件。例如像 Apache,Nginx,MySQL 等常见的服务器程序可以在这个目录中写入日志文件。其中一些日志文件由应用程序自己创建,其他的则通过 syslog (具体见下文)来创建。 ### 什么是 Syslog? ### -Linux 系统日志文件是如何创建的呢?答案是通过 syslog 守护程序,它在 syslog -套接字 `/dev/log` 上监听日志信息,然后将它们写入适当的日志文件中。 +Linux 系统日志文件是如何创建的呢?答案是通过 syslog 守护程序,它在 syslog 套接字 `/dev/log` 上监听日志信息,然后将它们写入适当的日志文件中。 -单词“syslog” 是一个重载的条目,并经常被用来简称如下的几个名称之一: +单词“syslog” 代表几个意思,并经常被用来简称如下的几个名称之一: -1. **Syslog 守护进程** — 一个用来接收,处理和发送 syslog 信息的程序。它可以[远程发送 syslog][2] 到一个集中式的服务器或写入一个本地文件。常见的例子包括 rsyslogd 和 syslog-ng。在这种使用方式中,人们常说 "发送到 syslog." -1. **Syslog 协议** — 一个指定日志如何通过网络来传送的传输协议和一个针对 syslog 信息(具体见下文) 的数据格式的定义。它在 [RFC-5424][3] 中被正式定义。对于文本日志,标准的端口是 514,对于加密日志,端口是 6514。在这种使用方式中,人们常说"通过 syslog 传送." -1. **Syslog 信息** — syslog 格式的日志信息或事件,它包括一个带有几个标准域的文件头。在这种使用方式中,人们常说"发送 syslog." +1. **Syslog 守护进程** — 一个用来接收、处理和发送 syslog 信息的程序。它可以[远程发送 syslog][2] 到一个集中式的服务器或写入到一个本地文件。常见的例子包括 rsyslogd 和 syslog-ng。在这种使用方式中,人们常说“发送到 syslog”。 +1. **Syslog 协议** — 一个指定日志如何通过网络来传送的传输协议和一个针对 syslog 信息(具体见下文) 的数据格式的定义。它在 [RFC-5424][3] 中被正式定义。对于文本日志,标准的端口是 514,对于加密日志,端口是 6514。在这种使用方式中,人们常说“通过 syslog 传送”。 +1. 
**Syslog 信息** — syslog 格式的日志信息或事件,它包括一个带有几个标准字段的消息头。在这种使用方式中,人们常说“发送 syslog”。 -Syslog 信息或事件包括一个带有几个标准域的 header ,使得分析和路由更方便。它们包括时间戳,应用程序的名称,在系统中信息来源的分类或位置,以及事件的优先级。 +Syslog 信息或事件包括一个带有几个标准字段的消息头,可以使分析和路由更方便。它们包括时间戳、应用程序的名称、在系统中信息来源的分类或位置、以及事件的优先级。 -下面展示的是一个包含 syslog header 的日志信息,它来自于 sshd 守护进程,它控制着到该系统的远程登录,这个信息描述的是一次失败的登录尝试: +下面展示的是一个包含 syslog 消息头的日志信息,它来自于控制着到该系统的远程登录的 sshd 守护进程,这个信息描述的是一次失败的登录尝试: <34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2 -### Syslog 格式和域 ### +### Syslog 格式和字段 ### -每条 syslog 信息包含一个带有域的 header,这些域是结构化的数据,使得分析和路由事件更加容易。下面是我们使用的用来产生上面的 syslog 例子的格式,你可以将每个值匹配到一个特定的域的名称上。 +每条 syslog 信息包含一个带有字段的信息头,这些字段是结构化的数据,使得分析和路由事件更加容易。下面是我们使用的用来产生上面的 syslog 例子的格式,你可以将每个值匹配到一个特定的字段的名称上。 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%n -下面,你将看到一些在查找或排错时最常使用的 syslog 域: +下面,你将看到一些在查找或排错时最常使用的 syslog 字段: #### 时间戳 #### [时间戳][4] (上面的例子为 2003-10-11T22:14:15.003Z) 暗示了在系统中发送该信息的时间和日期。这个时间在另一系统上接收该信息时可能会有所不同。上面例子中的时间戳可以分解为: -- **2003-10-11** 年,月,日. -- **T** 为时间戳的必需元素,它将日期和时间分离开. -- **22:14:15.003** 是 24 小时制的时间,包括进入下一秒的毫秒数(**003**). -- **Z** 是一个可选元素,指的是 UTC 时间,除了 Z,这个例子还可以包括一个偏移量,例如 -08:00,这意味着时间从 UTC 偏移 8 小时,即 PST 时间. +- **2003-10-11** 年,月,日。 +- **T** 为时间戳的必需元素,它将日期和时间分隔开。 +- **22:14:15.003** 是 24 小时制的时间,包括进入下一秒的毫秒数(**003**)。 +- **Z** 是一个可选元素,指的是 UTC 时间,除了 Z,这个例子还可以包括一个偏移量,例如 -08:00,这意味着时间从 UTC 偏移 8 小时,即 PST 时间。 #### 主机名 #### -[主机名][5] 域(在上面的例子中对应 server1.com) 指的是主机的名称或发送信息的系统. +[主机名][5] 字段(在上面的例子中对应 server1.com) 指的是主机的名称或发送信息的系统. #### 应用名 #### -[应用名][6] 域(在上面的例子中对应 sshd:auth) 指的是发送信息的程序的名称. +[应用名][6] 字段(在上面的例子中对应 sshd:auth) 指的是发送信息的程序的名称. 
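这些消息头字段以空格分隔,便于用简单的工具提取。作为示意(这只是针对本文样例格式的一个演示,并非完整的 RFC 5424 解析;消息即上文的示例):

```shell
# 上文的样例消息:
msg='<34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure'

# 消息头按空格切分:第 2、3、4 个字段分别是时间戳、主机名与应用名。
echo "$msg" | awk '{print "timestamp=" $2, "hostname=" $3, "appname=" $4}'
# 输出:timestamp=2003-10-11T22:14:15.003Z hostname=server1.com appname=sshd
```

这样取出的字段即对应上文各小节所描述的内容;消息开头的 `<34>` 即 pri 字段。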
#### 优先级 #### -优先级域或缩写为 [pri][7] (在上面的例子中对应 <34>) 告诉我们这个事件有多紧急或多严峻。它由两个数字域组成:设备域和紧急性域。紧急性域从代表 debug 类事件的数字 7 一直到代表紧急事件的数字 0 。设备域描述了哪个进程创建了该事件。它从代表内核信息的数字 0 到代表本地应用使用的 23 。 +优先级字段或缩写为 [pri][7] (在上面的例子中对应 <34>) 告诉我们这个事件有多紧急或多严峻。它由两个数字字段组成:设备字段和紧急性字段。紧急性字段从代表 debug 类事件的数字 7 一直到代表紧急事件的数字 0 。设备字段描述了哪个进程创建了该事件。它从代表内核信息的数字 0 到代表本地应用使用的 23 。 + +Pri 有两种输出方式。第一种是以一个单独的数字表示,可以这样计算:先用设备字段的值乘以 8,再加上紧急性字段的值:(设备字段)(8) + (紧急性字段)。第二种是 pri 文本,将以“设备字段.紧急性字段” 的字符串格式输出。后一种格式更方便阅读和搜索,但占据更多的存储空间。 -Pri 有两种输出方式。第一种是以一个单独的数字表示,可以这样计算:先用设备域的值乘以 8,再加上紧急性域的值:(设备域)(8) + (紧急性域)。第二种是 pri 文本,将以“设备域.紧急性域” 的字符串格式输出。后一种格式更方便阅读和搜索,但占据更多的存储空间。 -------------------------------------------------------------------------------- via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/ -作者:[Jason Skowronski][a1] -作者:[Amy Echeverri][a2] -作者:[Sadequl Hussain][a3] +作者:[Jason Skowronski][a1],[Amy Echeverri][a2],[Sadequl Hussain][a3] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 1e2265405f51c68843b693a2b28d1895b6f8d286 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Thu, 13 Aug 2015 14:31:47 +0800 Subject: [PATCH 154/697] =?UTF-8?q?=E3=80=90Translating=20by=20dingdongnig?= =?UTF-8?q?etou=E3=80=9120150813=20Linux=20file=20system=20hierarchy=20v2.?= =?UTF-8?q?0.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20150813 Linux file system hierarchy v2.0.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150813 Linux file system hierarchy v2.0.md b/sources/tech/20150813 Linux file system hierarchy v2.0.md index 9df6d23dcf..0021bb57c9 100644 --- a/sources/tech/20150813 Linux file system hierarchy v2.0.md +++ b/sources/tech/20150813 Linux file system hierarchy v2.0.md @@ -1,3 +1,6 @@ + +Translating by dingdongnigetou + Linux file system hierarchy 
v2.0 ================================================================================ What is a file in Linux? What is file system in Linux? Where are all the configuration files? Where do I keep my downloaded applications? Is there really a filesystem standard structure in Linux? Well, the above image explains Linux file system hierarchy in a very simple and non-complex way. It’s very useful when you’re looking for a configuration file or a binary file. I’ve added some explanation and examples below, but that’s TL;DR. @@ -435,4 +438,4 @@ via: http://www.blackmoreops.com/2015/06/18/linux-file-system-hierarchy-v2-0/ [1]:http://www.blackmoreops.com/2015/02/15/in-light-of-recent-linux-exploits-linux-security-audit-is-a-must/ [2]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png -[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf \ No newline at end of file +[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf From 4aec4761431227cc01ac683281d99eef6c87460e Mon Sep 17 00:00:00 2001 From: XIAOYU <1136299502@qq.com> Date: Thu, 13 Aug 2015 19:41:10 +0800 Subject: [PATCH 155/697] translating by xiaoyu33 translating by xiaoyu33 --- ...kr Is An Open-Source RSS News Ticker for Linux Desktops.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md b/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md index 638482a144..ccbbd3abd8 100644 --- a/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md +++ b/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md @@ -1,3 +1,5 @@ +translating by xiaoyu33 + Tickr Is An Open-Source RSS News Ticker for Linux Desktops 
================================================================================ ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/rss-tickr.jpg) @@ -92,4 +94,4 @@ via: http://www.omgubuntu.co.uk/2015/06/tickr-open-source-desktop-rss-news-ticke 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:apt://tickr \ No newline at end of file +[1]:apt://tickr From efdfdebe94f2ae4e42040c614e7ad05491451d17 Mon Sep 17 00:00:00 2001 From: Kevin Sicong Jiang Date: Thu, 13 Aug 2015 11:45:11 -0500 Subject: [PATCH 156/697] Update 20150813 How to get Public IP from Linux Terminal.md --- .../tech/20150813 How to get Public IP from Linux Terminal.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150813 How to get Public IP from Linux Terminal.md b/sources/tech/20150813 How to get Public IP from Linux Terminal.md index f0bba2cea9..c22fec283d 100644 --- a/sources/tech/20150813 How to get Public IP from Linux Terminal.md +++ b/sources/tech/20150813 How to get Public IP from Linux Terminal.md @@ -1,3 +1,4 @@ +KevinSJ Translating How to get Public IP from Linux Terminal? 
================================================================================ ![](http://www.blackmoreops.com/wp-content/uploads/2015/06/256x256xHow-to-get-Public-IP-from-Linux-Terminal-blackMORE-Ops.png.pagespeed.ic.GKEAEd4UNr.png) @@ -65,4 +66,4 @@ via: http://www.blackmoreops.com/2015/06/14/how-to-get-public-ip-from-linux-term 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From f166b13f668e522a49246db26e8a746a6fae67d4 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 14 Aug 2015 01:27:35 +0800 Subject: [PATCH 157/697] Delete 20150717 How to monitor NGINX with Datadog - Part 3.md --- ... to monitor NGINX with Datadog - Part 3.md | 151 ------------------ 1 file changed, 151 deletions(-) delete mode 100644 sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md diff --git a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md deleted file mode 100644 index 727c552ed0..0000000000 --- a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md +++ /dev/null @@ -1,151 +0,0 @@ -translation by strugglingyouth -How to monitor NGINX with Datadog - Part 3 -================================================================================ -![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png) - -If you’ve already read [our post on monitoring NGINX][1], you know how much information you can gain about your web environment from just a handful of metrics. And you’ve also seen just how easy it is to start collecting metrics from NGINX on ad hoc basis. 
But to implement comprehensive, ongoing NGINX monitoring, you will need a robust monitoring system to store and visualize your metrics, and to alert you when anomalies happen. In this post, we’ll show you how to set up NGINX monitoring in Datadog so that you can view your metrics on customizable dashboards like this: - -![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png) - -Datadog allows you to build graphs and alerts around individual hosts, services, processes, metrics—or virtually any combination thereof. For instance, you can monitor all of your NGINX hosts, or all hosts in a certain availability zone, or you can monitor a single key metric being reported by all hosts with a certain tag. This post will show you how to: - -- Monitor NGINX metrics on Datadog dashboards, alongside all your other systems -- Set up automated alerts to notify you when a key metric changes dramatically - -### Configuring NGINX ### - -To collect metrics from NGINX, you first need to ensure that NGINX has an enabled status module and a URL for reporting its status metrics. Step-by-step instructions [for configuring open-source NGINX][2] and [NGINX Plus][3] are available in our companion post on metric collection. - -### Integrating Datadog and NGINX ### - -#### Install the Datadog Agent #### - -The Datadog Agent is [the open-source software][4] that collects and reports metrics from your hosts so that you can view and monitor them in Datadog. Installing the agent usually takes [just a single command][5]. - -As soon as your Agent is up and running, you should see your host reporting metrics [in your Datadog account][6]. - -![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png) - -#### Configure the Agent #### - -Next you’ll need to create a simple NGINX configuration file for the Agent. The location of the Agent’s configuration directory for your OS can be found [here][7]. 
- -Inside that directory, at conf.d/nginx.yaml.example, you will find [a sample NGINX config file][8] that you can edit to provide the status URL and optional tags for each of your NGINX instances: - - init_config: - - instances: - - - nginx_status_url: http://localhost/nginx_status/ - tags: - - instance:foo - -Once you have supplied the status URLs and any tags, save the config file as conf.d/nginx.yaml. - -#### Restart the Agent #### - -You must restart the Agent to load your new configuration file. The restart command varies somewhat by platform—see the specific commands for your platform [here][9]. - -#### Verify the configuration settings #### - -To check that Datadog and NGINX are properly integrated, run the Datadog info command. The command for each platform is available [here][10]. - -If the configuration is correct, you will see a section like this in the output: - - Checks - ====== - - [...] - - nginx - ----- - - instance #0 [OK] - - Collected 8 metrics & 0 events - -#### Install the integration #### - -Finally, switch on the NGINX integration inside your Datadog account. It’s as simple as clicking the “Install Integration” button under the Configuration tab in the [NGINX integration settings][11]. - -![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png) - -### Metrics! ### - -Once the Agent begins reporting NGINX metrics, you will see [an NGINX dashboard][12] among your list of available dashboards in Datadog. - -The basic NGINX dashboard displays a handful of graphs encapsulating most of the key metrics highlighted [in our introduction to NGINX monitoring][13]. (Some metrics, notably request processing time, require log analysis and are not available in Datadog.) - -You can easily create a comprehensive dashboard for monitoring your entire web stack by adding additional graphs with important metrics from outside NGINX. 
For example, you might want to monitor host-level metrics on your NGINX hosts, such as system load. To start building a custom dashboard, simply clone the default NGINX dashboard by clicking on the gear near the upper right of the dashboard and selecting “Clone Dash”. - -![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png) - -You can also monitor your NGINX instances at a higher level using Datadog’s [Host Maps][14]—for instance, color-coding all your NGINX hosts by CPU usage to identify potential hotspots. - -![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png) - -### Alerting on NGINX metrics ### - -Once Datadog is capturing and visualizing your metrics, you will likely want to set up some monitors to automatically keep tabs on your metrics—and to alert you when there are problems. Below we’ll walk through a representative example: a metric monitor that alerts on sudden drops in NGINX throughput. - -#### Monitor your NGINX throughput #### - -Datadog metric alerts can be threshold-based (alert when the metric exceeds a set value) or change-based (alert when the metric changes by a certain amount). In this case we’ll take the latter approach, alerting when our incoming requests per second drop precipitously. Such drops are often indicative of problems. - -1.**Create a new metric monitor**. Select “New Monitor” from the “Monitors” dropdown in Datadog. Select “Metric” as the monitor type. - -![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png) - -2.**Define your metric monitor**. We want to know when our total NGINX requests per second drop by a certain amount. So we define the metric of interest to be the sum of nginx.net.request_per_s across our infrastructure. - -![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png) - -3.**Set metric alert conditions**. 
Since we want to alert on a change, rather than on a fixed threshold, we select “Change Alert.” We’ll set the monitor to alert us whenever the request volume drops by 30 percent or more. Here we use a one-minute window of data to represent the metric’s value “now” and alert on the average change across that interval, as compared to the metric’s value 10 minutes prior. - -![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png) - -4.**Customize the notification**. If our NGINX request volume drops, we want to notify our team. In this case we will post a notification in the ops team’s chat room and page the engineer on call. In “Say what’s happening”, we name the monitor and add a short message that will accompany the notification to suggest a first step for investigation. We @mention the Slack channel that we use for ops and use @pagerduty to [route the alert to PagerDuty][15] - -![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png) - -5.**Save the integration monitor**. Click the “Save” button at the bottom of the page. You’re now monitoring a key NGINX [work metric][16], and your on-call engineer will be paged anytime it drops rapidly. - -### Conclusion ### - -In this post we’ve walked you through integrating NGINX with Datadog to visualize your key metrics and notify your team when your web infrastructure shows signs of trouble. - -If you’ve followed along using your own Datadog account, you should now have greatly improved visibility into what’s happening in your web environment, as well as the ability to create automated monitors tailored to your environment, your usage patterns, and the metrics that are most valuable to your organization. - -If you don’t yet have a Datadog account, you can sign up for [a free trial][17] and start monitoring your infrastructure, your applications, and your services today. 
- ----------- - -Source Markdown for this post is available [on GitHub][18]. Questions, corrections, additions, etc.? Please [let us know][19]. - ------------------------------------------------------------- - -via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ - -作者:K Young -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ -[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source -[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus -[4]:https://github.com/DataDog/dd-agent -[5]:https://app.datadoghq.com/account/settings#agent -[6]:https://app.datadoghq.com/infrastructure -[7]:http://docs.datadoghq.com/guides/basic_agent_usage/ -[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example -[9]:http://docs.datadoghq.com/guides/basic_agent_usage/ -[10]:http://docs.datadoghq.com/guides/basic_agent_usage/ -[11]:https://app.datadoghq.com/account/settings#integrations/nginx -[12]:https://app.datadoghq.com/dash/integration/nginx -[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ -[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/ -[15]:https://www.datadoghq.com/blog/pagerduty/ -[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics -[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up -[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md -[19]:https://github.com/DataDog/the-monitor/issues From dda288ff51ce354aed0d3015ec76b2525e678124 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 14 Aug 2015 01:28:35 +0800 Subject: [PATCH 158/697] Create 20150717 How to monitor NGINX with Datadog - Part 3.md --- ... 
to monitor NGINX with Datadog - Part 3.md | 154 ++++++++++++++++++
 1 file changed, 154 insertions(+)
 create mode 100644 translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md

diff --git a/translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md b/translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md
new file mode 100644
index 0000000000..003290a915
--- /dev/null
+++ b/translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md
@@ -0,0 +1,154 @@
+
+如何使用 Datadog 监控 NGINX - 第3部分
+================================================================================
+![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)
+
+如果你已经阅读了[前面的如何监控 NGINX][1],你应该知道,仅凭几个指标就能获得大量关于网络环境的信息,也已经看到了临时性地从 NGINX 收集指标是多么容易。但要实现对 NGINX 全面、持续的监控,你需要一个强大的监控系统来存储指标、将其可视化,并在异常发生时提醒你。在这篇文章中,我们将向你展示如何在 Datadog 中配置 NGINX 监控,以便你可以在这样的定制仪表盘中查看这些指标:
+
+![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png)
+
+Datadog 能够围绕单个主机、服务、进程、指标,或者它们的几乎任意组合来构建图表和警报。例如,你可以监控全部 NGINX 主机,或某个可用区内的全部主机,也可以只监控带有特定标签的所有主机上报的某一个关键指标。本文将向你展示如何:
+
+- 在 Datadog 仪表盘上将 NGINX 指标与你的其他系统放在一起监控
+- 当一个关键指标急剧变化时,通过自动警报通知你
+
+### 配置 NGINX ###
+
+为了收集 NGINX 指标,首先需要确保 NGINX 已启用 status 模块,并提供一个用于报告 status 指标的 URL。[配置开源 NGINX][2] 和 [NGINX Plus][3] 的分步说明见我们关于指标收集的姊妹篇。
+
+### 整合 Datadog 和 NGINX ###
+
+#### 安装 Datadog 代理 ####
+
+Datadog 代理是[一个开源软件][4],它收集并上报你主机的指标,这样你就可以在 Datadog 中查看和监控它们。安装代理通常[仅需要一个命令][5]。
+
+只要你的代理启动并运行着,你就会看到你的主机[在你的 Datadog 账号中][6]上报指标。
+
+![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png)
+
+#### 配置 Agent ####
+
+接下来,你需要为代理创建一个简单的 NGINX 配置文件。你所用系统中代理配置目录的位置可以[在这儿][7]找到。
+
+在该目录下的 conf.d/nginx.yaml.example 中,你会发现[一个示例 NGINX 配置文件][8],你可以编辑它,为每个 NGINX 实例提供 status URL 和可选的标签:
+
+    init_config:
+
+    instances:
+
+    - nginx_status_url: http://localhost/nginx_status/
+      tags:
+          - instance:foo
+
+一旦你填好了 status URL 和标签,将配置文件保存为 conf.d/nginx.yaml。
+
+#### 重启代理 ####
+
+你必须重新启动代理来加载新的配置文件。重启命令[在这里][9],各平台有所不同。
+
+#### 检查配置 ####
+
+要检查 Datadog 和 NGINX 是否正确整合,运行 Datadog 的 info 命令。各平台使用的命令[看这儿][10]。
+
+如果配置正确,你会在输出中看到这样一段:
+
+    Checks
+    ======
+
+    [...]
+
+    nginx
+    -----
+      - instance #0 [OK]
+      - Collected 8 metrics & 0 events
+
+#### 安装整合 ####
+
+最后,在你的 Datadog 帐户里启用 NGINX 整合。这非常简单,只需在 [NGINX 整合设置][11]的 Configuration 标签页中点击“Install Integration”按钮。
+
+![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png)
+
+### 指标! ###
+
+一旦代理开始上报 NGINX 指标,你就会在 Datadog 的可用仪表盘列表中看到[一个 NGINX 仪表盘][12]。
+
+基本的 NGINX 仪表盘用几幅图表展示了[我们的 NGINX 监控介绍][13]中强调的大部分关键指标。(某些指标,特别是请求处理时间,需要做日志分析,Datadog 暂不支持。)
+
+通过添加 NGINX 之外的重要指标的图表,你可以轻松创建一个监控整个 Web 技术栈的综合仪表盘。例如,你可能想监控 NGINX 主机上的主机层面指标,如系统负载。要着手构建自定义仪表盘,只需点击仪表盘右上角附近的齿轮,选择“Clone Dash”来克隆一份默认的 NGINX 仪表盘。
+
+![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png)
+
+你也可以使用 Datadog 的 [Host Maps][14] 在更高层面监控你的 NGINX 实例——例如,按 CPU 使用率为所有 NGINX 主机着色,以辨别潜在热点。
+
+![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png)
+
+### 对 NGINX 指标设置警报 ###
+
+一旦 Datadog 捕获并可视化了你的指标,你多半会希望设置一些监控器来自动密切关注这些指标,并在出现问题时提醒你。下面我们将演示一个典型的例子:一个在 NGINX 吞吐量突然下降时提醒你的指标监控器。
+
+#### 监控 NGINX 吞吐量 ####
+
+Datadog 指标警报可以是 threshold-based(当指标超过设定值时报警)或 change-based(当指标的变化超过一定范围时报警)。在本例中我们采用后一种方式,当每秒传入的请求量急剧下降时提醒我们。这样的下降往往意味着出了问题。
+
+1.**创建一个新的指标监控器**。从 Datadog 的“Monitors”下拉列表中选择“New Monitor”。选择“Metric”作为监视器类型。
+
+![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png)
+
+2.**定义你的指标监视器**。我们想知道 NGINX 每秒的总请求量何时下降,因此把感兴趣的指标定义为整个基础设施上 nginx.net.request_per_s 的总和。
+
+![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png)
+
+3.**设置指标警报条件**。我们想针对变化而非固定阈值报警,所以选择“Change Alert”。我们将监控器设置为在请求量下降 30% 以上时报警。这里我们用一分钟的数据窗口表示指标“当前”的值,将该窗口内的平均变化与 10 分钟前的指标值作比较来触发警报。
+
+![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png)
+
+4.**自定义通知**。如果 NGINX 的请求量下降,我们想要通知我们的团队。在本例中,我们会在 ops 团队的聊天室发出通知,并呼叫值班工程师。在“Say what’s happening”中,我们为监控器命名,并附上一条随通知一起发出的短消息,提示从哪里开始调查。我们 @mention 了 ops 使用的 Slack 频道,并用 @pagerduty [把警报转给 PagerDuty][15]。
+
+![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png)
+
+5.**保存监控器**。点击页面底部的“Save”按钮。现在你已经监控起了一个关键的 NGINX [工作指标][16],一旦它迅速下降,值班工程师就会收到呼叫。
+
+### 结论 ###
+
+在这篇文章中,我们一步步演示了如何整合 NGINX 与 Datadog,以便将你的关键指标可视化,并在 Web 基础架构显露出问题迹象时通知你的团队。
+
+如果你用自己的 Datadog 账号跟着本文操作,现在你应该对 Web 环境中发生的事情有了大为改善的可见性,也能够针对你的环境、你的使用模式,以及对你的组织最有价值的指标,创建自动化的监控器。
+
+如果你还没有 Datadog 帐户,你可以注册[免费试用][17],从今天开始监控你的基础架构、应用程序和服务。
+
+----------
+本文的 Markdown 源文件可以[在 GitHub 上][18]获取。有问题、更正或补充?请[联系我们][19]。
+ +------------------------------------------------------------ + +via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ + +作者:K Young +译者:[strugglingyouth](https://github.com/译者ID) +校对:[strugglingyouth](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ +[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source +[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus +[4]:https://github.com/DataDog/dd-agent +[5]:https://app.datadoghq.com/account/settings#agent +[6]:https://app.datadoghq.com/infrastructure +[7]:http://docs.datadoghq.com/guides/basic_agent_usage/ +[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example +[9]:http://docs.datadoghq.com/guides/basic_agent_usage/ +[10]:http://docs.datadoghq.com/guides/basic_agent_usage/ +[11]:https://app.datadoghq.com/account/settings#integrations/nginx +[12]:https://app.datadoghq.com/dash/integration/nginx +[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ +[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/ +[15]:https://www.datadoghq.com/blog/pagerduty/ +[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics +[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up +[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md +[19]:https://github.com/DataDog/the-monitor/issues From 0ab376d789381a52cc6928c859782fa63a99faa4 Mon Sep 17 00:00:00 2001 From: joeren Date: Fri, 14 Aug 2015 08:48:38 +0800 Subject: [PATCH 159/697] Update 20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md --- ...Easier For You To Install The Latest Nvidia Linux Driver.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150813 Ubuntu Want To Make It Easier 
For You To Install The Latest Nvidia Linux Driver.md b/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md index 2bae0061c4..2dfc45cc4f 100644 --- a/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md +++ b/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md @@ -1,3 +1,4 @@ +Translating by GOLinux! Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver ================================================================================ ![Ubuntu Gamers are on the rise -and so is demand for the latest drivers](http://www.omgubuntu.co.uk/wp-content/uploads/2014/03/ubuntugamer_logo_dark-500x250.jpg) @@ -60,4 +61,4 @@ via: http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux- 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:https://plus.google.com/117485690627814051450/?rel=author \ No newline at end of file +[a]:https://plus.google.com/117485690627814051450/?rel=author From 8595d6c8e51f7c3de6417721c1d05e0936617749 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Fri, 14 Aug 2015 11:01:01 +0800 Subject: [PATCH 160/697] [Translated]20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md --- ... Install The Latest Nvidia Linux Driver.md | 64 ------------------- ... 
Install The Latest Nvidia Linux Driver.md | 63 ++++++++++++++++++ 2 files changed, 63 insertions(+), 64 deletions(-) delete mode 100644 sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md create mode 100644 translated/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md diff --git a/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md b/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md deleted file mode 100644 index 2dfc45cc4f..0000000000 --- a/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md +++ /dev/null @@ -1,64 +0,0 @@ -Translating by GOLinux! -Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver -================================================================================ -![Ubuntu Gamers are on the rise -and so is demand for the latest drivers](http://www.omgubuntu.co.uk/wp-content/uploads/2014/03/ubuntugamer_logo_dark-500x250.jpg) -Ubuntu Gamers are on the rise -and so is demand for the latest drivers - -**Installing the latest upstream NVIDIA graphics driver on Ubuntu could be about to get much easier. ** - -Ubuntu developers are considering the creation of a brand new ‘official’ PPA to distribute the latest closed-source NVIDIA binary drivers to desktop users. - -The move would benefit Ubuntu gamers **without** risking the stability of the OS for everyone else. - -New upstream drivers would be installed and updated from this new PPA **only** when a user explicitly opts-in to it. Everyone else would continue to receive and use the more recent stable NVIDIA Linux driver snapshot included in the Ubuntu archive. - -### Why Is This Needed? 
### - -![Ubuntu provides drivers – but they’re not the latest](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/drivers.jpg) -Ubuntu provides drivers – but they’re not the latest - -The closed-source NVIDIA graphics drivers that are available to install on Ubuntu from the archive (using the command line, synaptic or through the additional drivers tool) work fine for most and can handle the composited Unity desktop shell with ease. - -For gaming needs it’s a different story. - -If you want to squeeze every last frame and HD texture out of the latest big-name Steam game you’ll need the latest binary drivers blob. - -> ‘Installing the very latest Nvidia Linux driver on Ubuntu is not easy and not always safe.’ - -The more recent the driver the more likely it is to support the latest features and technologies, or come pre-packed with game-specific tweaks and bug fixes too. - -The problem is that installing the very latest Nvidia Linux driver on Ubuntu is not easy and not always safe. - -To fill the void many third-party PPAs maintained by enthusiasts have emerged. Since many of these PPAs also distribute other experimental or bleeding-edge software their use is **not without risk**. Adding a bleeding edge PPA is often the fastest way to entirely hose a system! - -A solution that lets Ubuntu users install the latest propriety graphics drivers as offered in third-party PPAs is needed **but** with the safety catch of being able to roll-back to the stable archive version if needed. - -### ‘Demand for fresh drivers is hard to ignore’ ### - -> ‘A solution that lets Ubuntu users get the latest hardware drivers safely is coming.’ - -‘The demand for fresh drivers in a fast developing market is becoming hard to ignore, users are going to want the latest upstream has to offer,’ Castro explains in an e-mail to the Ubuntu Desktop mailing list. - -‘[NVIDIA] can deliver a kickass experience with almost no effort from the user [in Windows 10]. 
Until we can convince NVIDIA to do the same with Ubuntu we’re going to have to pick up the slack.’ - -Castro’s proposition of a “blessed” NVIDIA PPA is the easiest way to do this. - -Gamers would be able to opt-in to receive new drivers from the PPA straight from Ubuntu’s default proprietary hardware drivers tool — no need for them to copy and paste terminal commands from websites or wiki pages. - -The drivers within this PPA would be packaged and maintained by a select band of community members and receive benefits from being a semi-official option, namely **automated testing**. - -As Castro himself puts it: ‘People want the latest bling, and no matter what they’re going to do it. We might as well put a framework around it so people can get what they want without breaking their computer.’ - -**Would you make use of this PPA? How would you rate the performance of the default Nvidia drivers on Ubuntu? Share your thoughts in the comments, folks! ** - --------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers - -作者:[Joey-Elijah Sneddon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author diff --git a/translated/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md b/translated/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md new file mode 100644 index 0000000000..bf24d3e5c2 --- /dev/null +++ b/translated/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md @@ -0,0 +1,63 @@ +Ubuntu想要让你安装最新版Nvidia Linux驱动更简单 +================================================================================ +![Ubuntu Gamers are on the rise -and so is demand for 
the latest drivers](http://www.omgubuntu.co.uk/wp-content/uploads/2014/03/ubuntugamer_logo_dark-500x250.jpg)
+Ubuntu游戏在增长——因而对最新版驱动的需求也在增长
+
+**在Ubuntu上安装最新的上游NVIDIA图形驱动即将变得更加容易。**
+
+Ubuntu开发者正在考虑构建一个全新的'官方'PPA,以便为桌面用户分发最新的闭源NVIDIA二进制驱动。
+
+该项举措将使Ubuntu游戏玩家受益,并且**不会**给其他人造成系统稳定性方面的风险。
+
+**只有**在用户明确选择使用时,新的上游驱动才会通过该新PPA安装并更新。其他人将继续接收并使用Ubuntu归档中包含的较新的稳定版NVIDIA Linux驱动快照。
+
+### 为什么需要该项目? ###
+
+![Ubuntu provides drivers – but they’re not the latest](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/drivers.jpg)
+Ubuntu提供了驱动——但是它们不是最新的
+
+可以从归档中(使用命令行、synaptic,或者通过额外驱动工具)安装到Ubuntu上的闭源NVIDIA图形驱动,在大多数情况下都能工作得很好,并且可以轻松地处理混合的Unity桌面shell。
+
+但对于游戏需求而言,那完全是另外一码事儿。
+
+如果你想从最新的大牌Steam游戏中榨取出每一帧画面和每一张高清纹理,你就需要最新的二进制驱动。
+
+> '在Ubuntu上安装最新的Nvidia Linux驱动不是件容易的事儿,而且也不总是安全的。'
+
+驱动越新,就越可能支持最新的特性和技术,或者预先打包好针对特定游戏的优化和漏洞修复。
+
+问题在于,在Ubuntu上安装最新的Nvidia Linux驱动不是件容易的事儿,而且也不总是安全的。
+
+为了填补这个空白,许多由热心人维护的第三方PPA出现了。但由于许多这类PPA也发布其它实验性的或者前沿的软件,使用它们**并非没有风险**。添加一个前沿PPA往往是彻底搞坏一个系统的最快方式!
+
+我们需要一个解决方案,让Ubuntu用户可以像第三方PPA那样安装最新的专有图形驱动,**但是**带有安全保障:在需要时能够回滚到归档中的稳定版本。
+
+### ‘对全新驱动的需求难以忽视’ ###
+
+> '一个让Ubuntu用户安全地获得最新硬件驱动的解决方案即将到来。'
+
+'在快速发展的市场中,对全新驱动的需求正变得难以忽视,用户会想要上游提供的最新版本,'卡斯特罗在一封发给Ubuntu桌面邮件列表的电子邮件中解释道。
+
+'[NVIDIA]几乎不需要用户做任何事,就能[在Windows 10中]提供出色的体验。在我们能说服NVIDIA为Ubuntu做同样的事情之前,我们只能自己来填补这个空缺。'
+
+卡斯特罗提议的一个受到“官方认可”的NVIDIA PPA,就是实现这一目标最容易的方式。
+
+游戏玩家将可以在Ubuntu的默认专有硬件驱动工具中选择接收来自该PPA的新驱动——不需要他们从网站或维基页面拷贝并粘贴终端命令了。
+
+该PPA内的驱动将由一个选定的社区成员团队打包并维护,并因其半官方的身份而获得好处,即**自动化测试**。
+
+就像卡斯特罗自己说的那样:'人们想要最新最炫的东西,而且无论如何他们都会去装。我们不如为此提供一个框架,让人们可以得到他们想要的,而不会搞坏自己的计算机。'
+
+**你想要使用这个PPA吗?你怎样评价Ubuntu上默认Nvidia驱动的性能呢?在评论中分享你的想法吧,伙计们!**
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers
+
+作者:[Joey-Elijah Sneddon][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author From 3b7a14d182ecdfb4a1b30861b572d5c2d4566ae9 Mon Sep 17 00:00:00 2001 From: wi-cuckoo Date: Fri, 14 Aug 2015 15:19:33 +0800 Subject: [PATCH 161/697] translating wi-cuckoo --- ...ss Data Virtualization GA with OData in Docker Container.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md index 0893b9a361..f1505c5649 100644 --- a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md +++ b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md @@ -1,3 +1,4 @@ +translating wi-cuckoo Howto Run JBoss Data Virtualization GA with OData in Docker Container ================================================================================ Hi everyone, today we'll learn how to run JBoss Data Virtualization 6.0.0.GA with OData in a Docker Container. JBoss Data Virtualization is a data supply and integration solution platform that transforms various scatered multiple sources data, treats them as single source and delivers the required data into actionable information at business speed to any applications or users. JBoss Data Virtualization can help us easily combine and transform data into reusable business friendly data models and make unified data easily consumable through open standard interfaces. It offers comprehensive data abstraction, federation, integration, transformation, and delivery capabilities to combine data from one or multiple sources into reusable for agile data utilization and sharing.For more information about JBoss Data Virtualization, we can check out [its official page][1]. Docker is an open source platform that provides an open platform to pack, ship and run any application as a lightweight container. 
Running JBoss Data Virtualization with OData in Docker Container makes us easy to handle and launch. @@ -99,4 +100,4 @@ via: http://linoxide.com/linux-how-to/run-jboss-data-virtualization-ga-odata-doc [a]:http://linoxide.com/author/arunp/ [1]:http://www.redhat.com/en/technologies/jboss-middleware/data-virtualization [2]:https://github.com/jbossdemocentral/dv-odata-docker-integration-demo -[3]:http://www.jboss.org/products/datavirt/download/ \ No newline at end of file +[3]:http://www.jboss.org/products/datavirt/download/ From 74c949d60bc82fe61d3115007b3335bc6b4e6ed2 Mon Sep 17 00:00:00 2001 From: VicYu Date: Fri, 14 Aug 2015 18:03:52 +0800 Subject: [PATCH 162/697] Update 20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md --- ...xt-to-Speech Schedule a Job and Watch Commands in Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md b/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md index 1ad92c594b..28981add17 100644 --- a/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md +++ b/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md @@ -1,3 +1,5 @@ + Vic020 + Linux Tricks: Play Game in Chrome, Text-to-Speech, Schedule a Job and Watch Commands in Linux ================================================================================ Here again, I have compiled a list of four things under [Linux Tips and Tricks][1] series you may do to remain more productive and entertained with Linux Environment. 
@@ -140,4 +142,4 @@ via: http://www.tecmint.com/text-to-speech-in-terminal-schedule-a-job-and-watch-
 
 [a]:http://www.tecmint.com/author/avishek/
 [1]:http://www.tecmint.com/tag/linux-tricks/
-[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
\ No newline at end of file
+[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/

From dd71c92d85e2bb24c957e6d2179ed005b1ae28ec Mon Sep 17 00:00:00 2001
From: KS
Date: Fri, 14 Aug 2015 22:54:04 +0800
Subject: [PATCH 163/697] Create 20150803 Managing Linux Logs.md

---
 .../tech/20150803 Managing Linux Logs.md | 418 ++++++++++++++++++
 1 file changed, 418 insertions(+)
 create mode 100644 translated/tech/20150803 Managing Linux Logs.md

diff --git a/translated/tech/20150803 Managing Linux Logs.md b/translated/tech/20150803 Managing Linux Logs.md
new file mode 100644
index 0000000000..59b41aa831
--- /dev/null
+++ b/translated/tech/20150803 Managing Linux Logs.md
@@ -0,0 +1,418 @@
+Linux日志管理
+================================================================================
+管理日志的一个关键最佳实践,是将日志集中或整合到一个地方,特别是当你有许多服务器或者多层级架构时。我们将告诉你为什么这是一个好主意,并给出一些让它更容易实现的小技巧。
+
+### 集中管理日志的好处 ###
+
+如果你有很多服务器,逐个查看日志文件可能会很麻烦。现代的网站和服务经常包括多个服务器层级、分布式的负载均衡器等等。找到正确的日志要花很长时间,而跨服务器关联问题则要花更长的时间。没有什么比发现你要找的信息根本没有被记录更令人沮丧的了,也没有什么比本可能包含答案的日志文件在重启后恰好丢失更令人沮丧的了。
+
+集中日志可以使查找更快速,帮助你更快地解决线上问题。你不用猜测是哪台服务器出了问题,因为所有的日志都在同一个地方。此外,你可以使用更强大的工具去分析它们,包括日志管理解决方案。一些解决方案能[转换纯文本日志][1]为一些字段,更容易查找和分析。
+
+集中日志也可以使它们更易于管理:
+
+- 当日志被备份归档到一个单独的区域时,可以防止意外或者有意的丢失。如果你的服务器宕机或者无响应,你可以使用集中的日志去调试问题。
+- 你不用在出问题的系统上执行ssh或者低效的grep命令,额外消耗它的资源。
+- 你不用担心磁盘被日志占满,从而让你的服务器死机。
+- 你能保证生产服务器的安全,而不必为了查看日志就给整个团队登录权限。让团队从集中区域访问日志要安全得多。
+
+使用集中日志管理,你仍需要处理由于网络连通性不好,或者占用大量网络带宽,而导致日志无法传输到中心区域的风险。在下面的章节中,我们将讨论如何聪明地解决这些问题。
+
+### 流行的日志归集工具 ###
+
+在Linux上最常见的日志归集方式是使用系统日志守护进程或者代理。系统日志守护进程支持本地日志的采集,然后通过系统日志协议传输日志到中心服务器。你可以使用很多流行的守护进程来归集你的日志文件:
+
+- [rsyslog][2]是一个轻量级守护进程,在大多数常见的Linux发行版上都已默认安装。
+- [syslog-ng][3]是第二流行的Linux系统日志守护进程。
+- [logstash][4]是一个重量级的代理,它可以做更多高级的加工和解析。
+- 
[fluentd][5]是另一个有高级处理能力的代理。
+
+Rsyslog是集中日志数据最流行的守护进程,因为它在大多数常见的Linux发行版上都是默认安装的。你不用下载或安装它,并且它很轻量,不会占用你太多的系统资源。
+
+如果你需要更先进的过滤或者自定义解析功能,并且不在乎额外的系统开销,那么Logstash是下一个最流行的选择。
+
+### 配置Rsyslog.conf ###
+
+既然rsyslog是最广泛使用的系统日志程序,我们将展示如何配置它来集中日志。它的全局配置文件位于/etc/rsyslog.conf,负责加载模块、设置全局指令,并包含位于/etc/rsyslog.d目录中的应用特有配置文件。该目录中的/etc/rsyslog.d/50-default.conf指示rsyslog将系统日志写入文件。你可以在[rsyslog文档][6]中阅读更多相关配置。
+
+rsyslog的配置语言是[RainerScript][7]。你可以为日志设置特定的输入,以及将它们输出到另一个目标的动作。Rsyslog已经默认配置好了标准的系统日志输入,所以你通常只需增加一个输出,指向你的日志服务器。这里有一个rsyslog输出到外部服务器的配置例子。在本例中,**BEBOP**是一个服务器的主机名,你应该将它替换为你自己的服务器名。
+
+    action(type="omfwd" protocol="tcp" target="BEBOP" port="514")
+
+你可以把日志发送到一个有充足存储空间的日志服务器,以便查询、备份和分析。如果你把日志存储在文件系统中,那么你应该建立[日志转储][8]来防止磁盘被占满。
+
+作为另一种选择,你可以把这些日志发送到一个日志管理方案。如果你的解决方案安装在本地,你可以按照系统文档中的说明发送到本地的主机和端口。如果你使用基于云的提供商,则把它们发送到提供商指定的主机名和端口。
+
+### 日志目录 ###
+
+你可以归集一个目录内,或者匹配一个通配符模式的所有文件。nxlog和syslog-ng程序支持目录和通配符(*)。
+
+常见版本的rsyslog不能直接监控目录。作为一种变通方案,你可以设置一个定时任务去监控目录中的新文件,然后配置rsyslog将这些文件发送到目的地,比如你的日志管理系统。作为一个例子,日志管理提供商Loggly有一个开源版本的[目录监控脚本][9]。
+
+### 哪个协议: UDP, TCP, or RELP? 
###
+
+当你在网络上传输日志数据时,可以选择三个主流的协议。最常见的是在局域网内使用UDP,在互联网上使用TCP。如果你不能容忍丢失日志,就要使用更高级的RELP协议。
+
+[UDP][10]发送的是数据报,也就是单个的信息包。它是一个单向协议,不会向你返回接收确认(ACK),并且只会尝试发送一次。当网络拥堵时,UDP通常会巧妙地降级或者丢弃日志。它通常用在像局域网一样的可靠网络上。
+
+[TCP][11]通过多个包发送流式信息并返回确认。TCP会多次尝试发送数据包,但是受限于[TCP缓存][12]的大小。这是在互联网上发送日志最常用的协议。
+
+[RELP][13]是这三个协议中最可靠的,但它是为rsyslog创建的,行业应用较少。它在应用层确认数据已被接收,如果出现错误则会重新发送。请确认你的目的端也支持这个协议。
+
+### 用磁盘辅助队列可靠地传送 ###
+
+如果rsyslog在存储日志时遇到问题,例如网络连接不可用,它能将日志排队,直到连接恢复。队列中的日志默认被存储在内存里。然而,内存是有限的,如果问题持续存在,日志就会超出内存容量。
+
+**警告:如果你只把日志存储在内存里,就有可能丢失数据。**
+
+Rsyslog能在内存被占满时将日志队列放到磁盘上。[磁盘辅助队列][14]使日志的传输更可靠。这里有一个如何为rsyslog配置磁盘辅助队列的例子:
+
+    $WorkDirectory /var/spool/rsyslog # where to place spool files
+    $ActionQueueFileName fwdRule1 # unique name prefix for spool files
+    $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
+    $ActionQueueSaveOnShutdown on # save messages to disk on shutdown
+    $ActionQueueType LinkedList # run asynchronously
+    $ActionResumeRetryCount -1 # infinite retries if host is down
+
+### 使用TLS加密日志 ###
+
+当你在意数据的安全和隐私时,你应该考虑加密你的日志。如果你以明文在互联网上传输日志,嗅探器和中间人就可以读到它们。如果日志包含私人信息、敏感的身份数据或者受政府管制的数据,你就应该加密日志。rsyslog程序能使用TLS协议加密你的日志,让你的数据更安全。
+
+要建立TLS加密,你需要做如下任务:
+
+1. 生成一个[证书颁发机构][15](CA)。在/contrib/gnutls中有一些示例证书,它们只适用于测试,生产环境中你需要创建自己的证书。如果你正在使用日志管理服务,它会为你准备好一个。
+1. 为你的服务器生成一个[数字证书][16]以启用SSL功能,或者使用你的日志管理服务提供商提供的数字证书。
+1. 
配置你的rsyslog程序,将TLS加密数据发送到你的日志管理系统。
+
+这里有一个配置rsyslog使用TLS加密的例子。请将CERT和DOMAIN_NAME替换为你自己的服务器配置。
+
+    $DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt
+    $ActionSendStreamDriver gtls
+    $ActionSendStreamDriverMode 1
+    $ActionSendStreamDriverAuthMode x509/name
+    $ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com
+
+### 应用日志的最佳管理方法 ###
+
+除Linux默认创建的日志之外,归集重要的应用日志也是一个好主意。几乎所有基于Linux的服务器类应用都把它们的状态信息写入到独立专门的日志文件中。这包括数据库产品,像PostgreSQL或者MySQL;网站服务器,像Nginx或者Apache;还有防火墙、打印和文件共享服务、目录和DNS服务等等。
+
+管理员安装完一个应用后要做的第一件事就是配置它。Linux应用程序通常在/etc目录的某处有一个.conf文件。它也可能在其他地方,但/etc是大家找配置文件时首先会看的地方。
+
+根据应用程序的复杂程度和规模,可设置的参数可能寥寥几个,也可能多达数百个。如前所述,大多数应用程序都会把它们的状态写入某种日志文件,而配置文件中除了其它设置之外,也定义了日志相关的设置。
+
+如果你不确定配置文件在哪,可以使用locate命令去找到它:
+
+    [root@localhost ~]# locate postgresql.conf
+    /usr/pgsql-9.4/share/postgresql.conf.sample
+    /var/lib/pgsql/9.4/data/postgresql.conf
+
+#### 设置一个日志文件的标准位置 ####
+
+Linux系统一般把日志文件保存在/var/log目录下。这样没有问题,但要检查一下应用是否把日志保存在/var/log下的某个专门目录里。如果是,很好;如果不是,你也许想在/var/log下为该应用创建一个专用目录。为什么?因为其他程序也在/var/log下保存它们的日志文件,如果你的应用保存多于一个日志文件——也许每天一个,或者每次重启生成一个——在这么大的目录中要找到你想要的文件也许会有点难。
+
+如果你在网络中运行着多个该应用的实例,这个方法也很便利。想想这样的情景:你的网络中也许运行着一打web服务器。当排查其中任何一台的问题时,你会确切地知道去什么位置找日志。
+
+#### 使用一个标准的文件名 ####
+
+给你的应用最新的日志使用一个标准的文件名。这会让事情变得容易,因为你可以监控和追踪一个单独的文件。很多应用程序会在日志文件名中加上某种时间戳,这会让rsyslog更难找到最新的文件,也更难设置文件监控。一个更好的方法是用日志转储给旧的日志文件加上时间戳,这样更易于归档和历史查询。
+
+#### 追加日志文件 ####
+
+日志文件会在每次应用重启后被覆盖吗?如果是,我们建议关掉这种行为。应用每次重启后都应该追加到日志文件。这样,你总能回溯到重启前的最后一行日志。
+
+#### 日志文件追加 vs. 
转储 ####
+
+即使应用在每次重启后写一个新的日志文件,又该如何保存当前日志?追加到一个单独的、巨大的文件中吗?Linux系统并不以频繁重启或者崩溃著称:应用程序可以不间断地运行很长时间,但这也会让日志文件变得非常大。如果你要查询分析上周发生连接错误的原因,你很可能要在成千上万行日志里搜索。
+
+我们建议你配置应用在每天午夜转储它的日志文件。
+
+为什么?首先它将变得可管理。在文件名中带上特定日期去查找,比在一个大文件里翻找特定日期的条目更容易。文件也会小得多:你不用担心用vi打开日志文件时卡死。第二,如果你要把日志发送到另一个位置——也许每晚由备份任务拷贝到归集日志服务器——这样不会消耗你太多的网络带宽。第三,这样有助于你做日志保留。如果你想剔除旧的日志记录,删除超过指定日期的文件,要比让程序去解析一个大文件容易得多。
+
+#### 日志文件的保留 ####
+
+你要保留日志文件多长时间?这完全取决于业务需求。你可能被要求保留一个星期的日志信息,或者管理层要求你保留一年的数据。无论如何,日志迟早都需要从服务器上删除。
+
+在我们看来,除非必要,只需在线保留最近一个月的日志文件,并把它们拷贝到第二个地方,如日志服务器。任何比这更旧的日志可以转到一个单独的介质上。例如,如果你在AWS上,你的旧日志可以被拷贝到Glacier。
+
+#### 给日志单独的磁盘分区 ####
+
+典型的Linux安装通常建议把/var目录挂载到一个单独的文件系统上,这是因为该目录的I/O很高。我们推荐把/var/log目录挂载到一个单独的磁盘系统上。这样可以避免与主应用的数据产生I/O竞争。另外,如果日志文件变得太多,或者单个文件变得太大,也不会占满整个磁盘。
+
+#### 日志条目 ####
+
+每个日志条目中应该捕获什么信息?
+
+这依赖于你想用日志来做什么。你只想用它来排除故障,还是想捕获所有发生的事?是否有合规要求需要捕获每个用户在运行什么或查看什么?
+
+如果你用日志只是为了排查错误,那就只保存错误、告警或者致命级别的信息。没有理由去捕获调试信息:应用也许默认记录了调试信息,或者另一个管理员为了排查故障打开了调试开关,但你应该把它关掉,因为它肯定会很快填满磁盘空间。最低限度下,应捕获日期、时间、客户端应用名、来源ip或者客户端主机名、执行的动作以及日志信息本身。
+
+#### 一个PostgreSQL的实例 ####
+
+作为一个例子,让我们看看一个原生(未做定制的)PostgreSQL 9.4安装的主配置文件。它叫做postgresql.conf,与Linux系统中的其他配置文件不同,它不保存在/etc目录下。在下面的代码段中,我们可以在Centos 7服务器的/var/lib/pgsql目录下看到它:
+
+    [root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf
+    ...
+    #------------------------------------------------------------------------------
+    # ERROR REPORTING AND LOGGING
+    #------------------------------------------------------------------------------
+    # - Where to Log -
+    log_destination = 'stderr'
+    # Valid values are combinations of
+    # stderr, csvlog, syslog, and eventlog,
+    # depending on platform. csvlog
+    # requires logging_collector to be on.
+    # This is used when logging to stderr:
+    logging_collector = on
+    # Enable capturing of stderr and csvlog
+    # into log files. Required to be on for
+    # csvlogs. 
+ # (change requires restart) + # These are only used if logging_collector is on: + log_directory = 'pg_log' + # directory where log files are written, + # can be absolute or relative to PGDATA + log_filename = 'postgresql-%a.log' # log file name pattern, + # can include strftime() escapes + # log_file_mode = 0600 . + # creation mode for log files, + # begin with 0 to use octal notation + log_truncate_on_rotation = on # If on, an existing log file with the + # same name as the new log file will be + # truncated rather than appended to. + # But such truncation only occurs on + # time-driven rotation, not on restarts + # or size-driven rotation. Default is + # off, meaning append to existing files + # in all cases. + log_rotation_age = 1d + # Automatic rotation of logfiles will happen after that time. 0 disables. + log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables. + # These are relevant when logging to syslog: + #syslog_facility = 'LOCAL0' + #syslog_ident = 'postgres' + # This is only relevant when logging to eventlog (win32): + #event_source = 'PostgreSQL' + # - When to Log - + #client_min_messages = notice # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # log + # notice + # warning + # error + #log_min_messages = warning # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic + #log_min_error_statement = error # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic (effectively off) + #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements + # and their durations, > 0 logs only + # statements running at least this number + # of milliseconds + # - What to Log + #debug_print_parse = off + #debug_print_rewritten = off + 
#debug_print_plan = off
+    #debug_pretty_print = on
+    #log_checkpoints = off
+    #log_connections = off
+    #log_disconnections = off
+    #log_duration = off
+    #log_error_verbosity = default
+    # terse, default, or verbose messages
+    #log_hostname = off
+    log_line_prefix = '< %m >' # special values:
+    # %a = application name
+    # %u = user name
+    # %d = database name
+    # %r = remote host and port
+    # %h = remote host
+    # %p = process ID
+    # %t = timestamp without milliseconds
+    # %m = timestamp with milliseconds
+    # %i = command tag
+    # %e = SQL state
+    # %c = session ID
+    # %l = session line number
+    # %s = session start timestamp
+    # %v = virtual transaction ID
+    # %x = transaction ID (0 if none)
+    # %q = stop here in non-session
+    # processes
+    # %% = '%'
+    # e.g. '<%u%%%d> '
+    #log_lock_waits = off # log lock waits >= deadlock_timeout
+    #log_statement = 'none' # none, ddl, mod, all
+    #log_temp_files = -1 # log temporary files equal or larger
+    # than the specified size in kilobytes; -1 disables, 0 logs all temp files
+    log_timezone = 'Australia/ACT'
+
+虽然大多数参数被注释掉了,但它们显示的是默认值。我们可以看到日志文件目录是pg_log(log_directory参数),文件名以postgresql开头(log_filename参数),文件每天转储一次(log_rotation_age参数),并且每条日志记录以时间戳开头(log_line_prefix参数)。特别有趣的是log_line_prefix参数:你可以在其中包含很多丰富的信息。
+
+查看/var/lib/pgsql/9.4/data/pg_log目录,可以看到这些文件:
+
+    [root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log
+    total 20
+    -rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log
+    -rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log
+    -rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log
+    -rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log
+    -rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log
+
+所以日志文件名只带有星期几的标签。我们可以改变它。如何做?在postgresql.conf中配置log_filename参数。
+
+查看其中一个日志的内容,可以看到它的条目以日期时间开头:
+
+    [root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log
+    ... 
+    < 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request
+    < 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions
+    < 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down
+    < 2015-02-27 01:21:27.036 EST >LOG: shutting down
+    < 2015-02-27 01:21:27.211 EST >LOG: database system is shut down
+
+### 集中应用日志 ###
+
+#### 使用Imfile监控日志 ####
+
+习惯上,应用程序把它们的数据记录在文件里。文件在单台机器上容易查找,但在多台服务器间就不那么方便了。你可以设置日志文件监控,当有新的日志被追加到文件末尾时,就把事件发送到一个集中服务器。在/etc/rsyslog.d/里创建一个新的配置文件,然后增加一个文件输入,像这样:
+
+    $ModLoad imfile
+    $InputFilePollInterval 10
+    $PrivDropToGroup adm
+
+----------
+
+    # Input for FILE1
+    $InputFileName /FILE1
+    $InputFileTag APPNAME1
+    $InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled
+    $InputFileSeverity info
+    $InputFilePersistStateInterval 20000
+    $InputRunFileMonitor
+
+将FILE1和APPNAME1替换为你自己的文件名和应用名称。Rsyslog会把它发送到你配置的输出中。
+
+#### 本地套接字日志与Imuxsock ####
+
+套接字类似UNIX文件句柄,所不同的是套接字的内容由系统日志程序读取到内存中,然后发送到目的地,不需要写入任何文件。例如,logger命令就把它的日志发送到这个UNIX套接字。
+
+如果你的服务器I/O有限,或者你不需要本地文件日志,这个方法可以有效利用系统资源。这个方法的缺点是套接字有队列大小的限制。如果你的系统日志程序宕掉或者处理不过来,你就可能丢失日志数据。
+
+rsyslog程序默认会从/dev/log套接字中读取,但你需要用如下命令加载[imuxsock输入模块][17]使它生效:
+
+    $ModLoad imuxsock
+
+#### UDP日志与Imudp ####
+
+一些应用程序使用UDP格式输出日志数据,这是在网络上或者本地传输日志的标准系统日志协议。你的系统日志程序收集这些日志,然后处理它们或者用不同的格式传输它们。或者,你也可以把日志发送到你的日志服务器或者一个日志管理方案中。
+
+使用如下命令配置rsyslog,通过标准端口514接收UDP系统日志数据:
+
+    $ModLoad imudp
+
+----------
+
+    $UDPServerRun 514
+
+### 用Logrotate管理日志 ###
+
+日志转储是当日志到达指定时期时自动归档日志文件的方法。如果不加干预,日志文件会一直增长,用尽磁盘空间,最终拖垮你的机器。
+
+logrotate工具能在日志过期时对其截断,腾出空间。新的日志文件保持原文件名,旧的日志文件则被重命名,在后面加上一个数字后缀。每次logrotate运行时,一个新文件被建立,现存的文件被逐一重命名。由你来决定旧文件何时被删除或归档的阈值。
+
+当logrotate拷贝一个文件时,新的文件会有一个新的索引节点(inode),这会妨碍rsyslog监控新文件。你可以通过在logrotate配置中增加copytruncate参数来缓解这个问题。这个参数会把现有日志文件的内容拷贝到新文件,然后把现有文件截空。因为日志文件本身保持不变,索引节点也不会改变;而它原来的内容则在一个新文件中。
+
+logrotate使用的主配置文件是/etc/logrotate.conf,应用特有的设置在/etc/logrotate.d/目录下。DigitalOcean有一个详细的[logrotate教程][18]。
+
+### 管理很多服务器的配置 ###
+ 
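+在介绍具体工具之前,先给出一个简化的示意:用一个简单的shell循环,把同一份rsyslog配置分发到多台服务器并重启服务。注意,其中的主机名(web1、web2、web3)和配置文件名(60-forward.conf)都只是假设,请替换为你自己的环境;为安全起见,示例只是生成并打印将要执行的命令(相当于“干跑”),实际使用时可改为直接执行scp/ssh。

```shell
# 假设的主机列表与待分发的 rsyslog 配置文件(均为示意,请替换)
hosts="web1 web2 web3"
conf="/etc/rsyslog.d/60-forward.conf"

# 为每台主机生成“分发配置 + 重启 rsyslog”的命令清单
cmds=""
for h in $hosts; do
  cmds="$cmds
scp $conf root@$h:/etc/rsyslog.d/
ssh root@$h systemctl restart rsyslog"
done

# 打印命令清单;确认无误后,可把循环体改成真正执行这些命令
echo "$cmds"
```

+后面几节介绍的pssh、Puppet/Chef等工具,本质上就是把这类分发步骤做得更可靠、更可扩展。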
+当你只有很少几台服务器时,你可以登录上去手动配置。一旦你有几打或者更多的服务器,你就可以借助工具让这件事变得更容易、更可扩展。基本上,所有要做的就是把你的rsyslog配置拷贝到每台服务器,然后重启rsyslog使更改生效。
+
+#### Pssh ####
+
+这个工具可以让你在很多服务器上并行地运行一个ssh命令。pssh只适合在少量服务器上部署。如果其中一台服务器部署失败,你就得ssh到那台服务器上手动部署。如果失败的服务器很多,手动部署会花费很长时间。
+
+#### Puppet/Chef ####
+
+Puppet和Chef是两个不同的工具,它们都能按照你规定的标准,自动配置你网络中的所有服务器。它们的报告工具可以让你了解出现的错误,并定期重新同步。Puppet和Chef都有一些狂热的支持者。如果你不确定哪个更适合你的部署配置管理,可以参考一下[InfoWorld上这两个工具的对比][19]。
+
+一些厂商也提供配置rsyslog的模块或者方法。这有一个Loggly的Puppet模块的例子。它提供了一个rsyslog类,你可以在其中添加一个标识令牌:
+
+    node 'my_server_node.example.net' {
+    # Send syslog events to Loggly
+    class { 'loggly::rsyslog':
+    customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000',
+    }
+    }
+
+#### Docker ####
+
+Docker使用容器来运行应用,而不依赖于底层服务。所有东西都在容器内部运行,你可以把它想象成一个功能单元。ZDNet有一篇深入的文章,介绍了在你的数据中心[使用Docker][20]。
+
+从Docker容器记录日志有很多方式,包括链接到一个日志容器、记录到一个共享卷,或者直接在容器里添加一个系统日志代理。其中最流行的日志容器叫做[logspout][21]。
+
+#### 供应商的脚本或代理 ####
+
+大多数日志管理方案都提供脚本或者代理,以便比较简单地从一台或多台服务器发送数据。重量级代理会消耗额外的系统资源。一些供应商,像Loggly,提供配置脚本,让使用现有的系统日志程序更加轻松。这有一个Loggly的[脚本][22]的例子,它能运行在任意数量的服务器上。
+
+--------------------------------------------------------------------------------
+
+via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/
+
+作者:[Jason Skowronski][a1]
+作者:[Amy Echeverri][a2]
+作者:[Sadequl Hussain][a3]
+译者:[wyangsun](https://github.com/wyangsun)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a1]:https://www.linkedin.com/in/jasonskowronski
+[a2]:https://www.linkedin.com/in/amyecheverri
+[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
+[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl
+[2]:http://www.rsyslog.com/
+[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system
+[4]:http://logstash.net/
+[5]:http://www.fluentd.org/
+[6]:http://www.rsyslog.com/doc/rsyslog_conf.html
+[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html 
+[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87 +[9]:https://www.loggly.com/docs/file-monitoring/ +[10]:http://www.networksorcery.com/enp/protocol/udp.htm +[11]:http://www.networksorcery.com/enp/protocol/tcp.htm +[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html +[13]:http://www.rsyslog.com/doc/relp.html +[14]:http://www.rsyslog.com/doc/queues.html +[15]:http://www.rsyslog.com/doc/tls_cert_ca.html +[16]:http://www.rsyslog.com/doc/tls_cert_machine.html +[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html +[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10 +[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html +[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/ +[21]:https://github.com/progrium/logspout +[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ From 3ade8029a018fbac08e0f65bf37dfe8f5dbc6986 Mon Sep 17 00:00:00 2001 From: KS Date: Fri, 14 Aug 2015 22:54:54 +0800 Subject: [PATCH 164/697] Delete 20150803 Managing Linux Logs.md --- sources/tech/20150803 Managing Linux Logs.md | 419 ------------------- 1 file changed, 419 deletions(-) delete mode 100644 sources/tech/20150803 Managing Linux Logs.md diff --git a/sources/tech/20150803 Managing Linux Logs.md b/sources/tech/20150803 Managing Linux Logs.md deleted file mode 100644 index e317a63253..0000000000 --- a/sources/tech/20150803 Managing Linux Logs.md +++ /dev/null @@ -1,419 +0,0 @@ -wyangsun translating -Managing Linux Logs -================================================================================ -A key best practice for logging is to centralize or aggregate your logs in one place, especially if you have multiple servers or tiers in your architecture. 
We’ll tell you why this is a good idea and give tips on how to do it easily. - -### Benefits of Centralizing Logs ### - -It can be cumbersome to look at individual log files if you have many servers. Modern web sites and services often include multiple tiers of servers, distributed with load balancers, and more. It’d take a long time to hunt down the right file, and even longer to correlate problems across servers. There’s nothing more frustrating than finding the information you are looking for hasn’t been captured, or the log file that could have held the answer has just been lost after a restart. - -Centralizing your logs makes them faster to search, which can help you solve production issues faster. You don’t have to guess which server had the issue because all the logs are in one place. Additionally, you can use more powerful tools to analyze them, including log management solutions. These solutions can [transform plain text logs][1] into fields that can be easily searched and analyzed. - -Centralizing your logs also makes them easier to manage: - -- They are safer from accidental or intentional loss when they are backed up and archived in a separate location. If your servers go down or are unresponsive, you can use the centralized logs to debug the problem. -- You don’t have to worry about ssh or inefficient grep commands requiring more resources on troubled systems. -- You don’t have to worry about full disks, which can crash your servers. -- You can keep your production servers secure without giving your entire team access just to look at logs. It’s much safer to give your team access to logs from the central location. - -With centralized log management, you still must deal with the risk of being unable to transfer logs to the centralized location due to poor network connectivity or of using up a lot of network bandwidth. We’ll discuss how to intelligently address these issues in the sections below. 
- -### Popular Tools for Centralizing Logs ### - -The most common way of centralizing logs on Linux is by using syslog daemons or agents. The syslog daemon supports local log collection, then transports logs through the syslog protocol to a central server. There are several popular daemons that you can use to centralize your log files: - -- [rsyslog][2] is a light-weight daemon installed on most common Linux distributions. -- [syslog-ng][3] is the second most popular syslog daemon for Linux. -- [logstash][4] is a heavier-weight agent that can do more advanced processing and parsing. -- [fluentd][5] is another agent with advanced processing capabilities. - -Rsyslog is the most popular daemon for centralizing your log data because it’s installed by default in most common distributions of Linux. You don’t need to download it or install it, and it’s lightweight so it won’t take up much of your system resources. - -If you need more advanced filtering or custom parsing capabilities, Logstash is the next most popular choice if you don’t mind the extra system footprint. - -### Configure Rsyslog.conf ### - -Since rsyslog is the most widely used syslog daemon, we’ll show how to configure it to centralize logs. The global configuration file is located at /etc/rsyslog.conf. It loads modules, sets global directives, and has an include for application-specific files located in the directory /etc/rsyslog.d/. This directory contains /etc/rsyslog.d/50-default.conf which instructs rsyslog to write the system logs to file. You can read more about the configuration files in the [rsyslog documentation][6]. - -The configuration language for rsyslog is [RainerScript][7]. You set up specific inputs for logs as well as actions to output them to another destination. Rsyslog already configures standard defaults for syslog input, so you usually just need to add an output to your log server. Here is an example configuration for rsyslog to output logs to an external server. 
In this example, **BEBOP** is the hostname of the server, so you should replace it with your own server name. - - action(type="omfwd" protocol="tcp" target="BEBOP" port="514") - -You could send your logs to a log server with ample storage to keep a copy for search, backup, and analysis. If you’re storing logs in the file system, then you should set up [log rotation][8] to keep your disk from getting full. - -Alternatively, you can send these logs to a log management solution. If your solution is installed locally you can send it to your local host and port as specified in your system documentation. If you use a cloud-based provider, you will send them to the hostname and port specified by your provider. - -### Log Directories ### - -You can centralize all the files in a directory or matching a wildcard pattern. The nxlog and syslog-ng daemons support both directories and wildcards (*). - -Common versions of rsyslog can’t monitor directories directly. As a workaround, you can setup a cron job to monitor the directory for new files, then configure rsyslog to send those files to a destination, such as your log management system. As an example, the log management vendor Loggly has an open source version of a [script to monitor directories][9]. - -### Which Protocol: UDP, TCP, or RELP? ### - -There are three main protocols that you can choose from when you transmit data over the Internet. The most common is UDP for your local network and TCP for the Internet. If you cannot lose logs, then use the more advanced RELP protocol. - -[UDP][10] sends a datagram packet, which is a single packet of information. It’s an outbound-only protocol, so it doesn’t send you an acknowledgement of receipt (ACK). It makes only one attempt to send the packet. UDP can be used to smartly degrade or drop logs when the network gets congested. It’s most commonly used on reliable networks like localhost. - -[TCP][11] sends streaming information in multiple packets and returns an ACK. 
TCP makes multiple attempts to send the packet, but is limited by the size of the [TCP buffer][12]. This is the most common protocol for sending logs over the Internet. - -[RELP][13] is the most reliable of these three protocols but was created for rsyslog and has less industry adoption. It acknowledges receipt of data in the application layer and will resend if there is an error. Make sure your destination also supports this protocol. - -### Reliably Send with Disk Assisted Queues ### - -If rsyslog encounters a problem when storing logs, such as an unavailable network connection, it can queue the logs until the connection is restored. The queued logs are stored in memory by default. However, memory is limited and if the problem persists, the logs can exceed memory capacity. - -**Warning: You can lose data if you store logs only in memory.** - -Rsyslog can queue your logs to disk when memory is full. [Disk-assisted queues][14] make transport of logs more reliable. Here is an example of how to configure rsyslog with a disk-assisted queue: - - $WorkDirectory /var/spool/rsyslog # where to place spool files - $ActionQueueFileName fwdRule1 # unique name prefix for spool files - $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible) - $ActionQueueSaveOnShutdown on # save messages to disk on shutdown - $ActionQueueType LinkedList # run asynchronously - $ActionResumeRetryCount -1 # infinite retries if host is down - -### Encrypt Logs Using TLS ### - -When the security and privacy of your data is a concern, you should consider encrypting your logs. Sniffers and middlemen could read your log data if you transmit it over the Internet in clear text. You should encrypt your logs if they contain private information, sensitive identification data, or government-regulated data. The rsyslog daemon can encrypt your logs using the TLS protocol and keep your data safer. - -To set up TLS encryption, you need to do the following tasks: - -1. 
Generate a [certificate authority][15] (CA). There are sample certificates in /contrib/gnutls, which are good only for testing, but you need to create your own for production. If you’re using a log management service, it will have one ready for you. -1. Generate a [digital certificate][16] for your server to enable SSL operation, or use one from your log management service provider. -1. Configure your rsyslog daemon to send TLS-encrypted data to your log management system. - -Here’s an example rsyslog configuration with TLS encryption. Replace CERT and DOMAIN_NAME with your own server settings. - - $DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt - $ActionSendStreamDriver gtls - $ActionSendStreamDriverMode 1 - $ActionSendStreamDriverAuthMode x509/name - $ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com - -### Best Practices for Application Logging ### - -In addition to the logs that Linux creates by default, it’s also a good idea to centralize logs from important applications. Almost all Linux-based server-class applications write their status information in separate, dedicated log files. This includes database products like PostgreSQL or MySQL, web servers like Nginx or Apache, firewalls, print and file sharing services, directory and DNS servers, and so on. - -The first thing administrators do after installing an application is to configure it. Linux applications typically have a .conf file somewhere within the /etc directory. It can be somewhere else too, but that’s the first place where people look for configuration files. - -Depending on how complex or large the application is, the number of settable parameters can range from a few to hundreds. As mentioned before, most applications write their status to some sort of log file, and the configuration file is where log settings, among other things, are defined.
- -If you’re not sure where it is, you can use the locate command to find it: - - [root@localhost ~]# locate postgresql.conf - /usr/pgsql-9.4/share/postgresql.conf.sample - /var/lib/pgsql/9.4/data/postgresql.conf - -#### Set a Standard Location for Log Files #### - -Linux systems typically save their log files under the /var/log directory. This works fine, but check if the application saves under a specific directory under /var/log. If it does, great; if not, you may want to create a dedicated directory for the app under /var/log. Why? That’s because other applications save their log files under /var/log too, and if your app saves more than one log file – perhaps once every day or after each service restart – it may be a bit difficult to trawl through a large directory to find the file you want. - -If you have more than one instance of the application running in your network, this approach also comes in handy. Think about a situation where you may have a dozen web servers running in your network. When troubleshooting any one of the boxes, you would know exactly where to go. - -#### Use A Standard Filename #### - -Use a standard filename for the latest logs from your application. This makes it easy because you can monitor and tail a single file. Many applications add some sort of date-time stamp to the filename. This makes it much more difficult to find the latest file and to set up file monitoring by rsyslog. A better approach is to add timestamps to older log files using logrotate. This makes them easier to archive and search historically. - -#### Append the Log File #### - -Is the log file going to be overwritten after each application restart? If so, we recommend turning that off. After each restart the app should append to the log file. That way, you can always go back to the last log line before the restart. - -#### Appending vs. Rotation of Log File #### - -Even if the application writes a new log file after each restart, how is it saving the current log?
Is it appending to one single, massive file? Linux systems are not known for frequent reboots or crashes: applications can run for very long periods without even blinking, but that can also make the log file very large. If you are trying to analyze the root cause of a connection failure that happened last week, you could easily be searching through tens of thousands of lines. - -We recommend you configure the application to rotate its log file once every day, say at midnight. - -Why? Well, for starters, it becomes manageable. It’s much easier to find a file name with a specific date-time pattern than to search through one file for that date’s entries. Files are also much smaller: you don’t think vi has frozen when you open a log file. Secondly, if you are sending the log file over the wire to a different location – perhaps a nightly backup job copying to a centralized log server – it doesn’t chew up your network’s bandwidth. Third and finally, it helps with your log retention. If you want to cull old log entries, it’s easier to delete files older than a particular date than to have an application parse one single large file. - -#### Retention of Log File #### - -How long do you keep a log file? That definitely comes down to business requirements. You could be asked to keep one week’s worth of logging information, or it may be a regulatory requirement to keep ten years’ worth of data. Whatever it is, logs need to go from the server at one time or another. - -In our opinion, unless otherwise required, keep at least a month’s worth of log files online, plus copy them to a secondary location like a logging server. Anything older than that can be offloaded to separate media. For example, if you are on AWS, your older logs can be copied to Glacier. - -#### Separate Disk Location for Log Files #### - -Linux best practice usually suggests mounting the /var directory to a separate file system. This is because of the high number of I/Os associated with this directory.
We would recommend mounting the /var/log directory on a separate disk system. This can save I/O contention with the main application’s data. Also, if the number of log files becomes too large or the single log file becomes too big, it doesn’t fill up the entire disk. - -#### Log Entries #### - -What information should be captured in each log entry? - -That depends on what you want to use the log for. Do you want to use it only for troubleshooting purposes, or do you want to capture everything that’s happening? Is it a legal requirement to capture what each user is running or viewing? - -If you are using logs for troubleshooting purposes, save only errors, warnings or fatal messages. There’s no reason to capture debug messages, for example. The app may log debug messages by default or another administrator might have turned this on for another troubleshooting exercise, but you need to turn this off because it can definitely fill up the space quickly. At a minimum, capture the date, time, client application name, source IP or client host name, action performed and the message itself. - -#### A Practical Example for PostgreSQL #### - -As an example, let’s look at the main configuration file for a vanilla PostgreSQL 9.4 installation. It’s called postgresql.conf and, unlike other config files in Linux systems, it’s not saved under the /etc directory. In the code snippet below, we can see it’s in the /var/lib/pgsql directory of our CentOS 7 server: - - [root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf - ... - #------------------------------------------------------------------------------ - # ERROR REPORTING AND LOGGING - #------------------------------------------------------------------------------ - # - Where to Log - - log_destination = 'stderr' - # Valid values are combinations of - # stderr, csvlog, syslog, and eventlog, - # depending on platform. csvlog - # requires logging_collector to be on.
- # This is used when logging to stderr: - logging_collector = on - # Enable capturing of stderr and csvlog - # into log files. Required to be on for - # csvlogs. - # (change requires restart) - # These are only used if logging_collector is on: - log_directory = 'pg_log' - # directory where log files are written, - # can be absolute or relative to PGDATA - log_filename = 'postgresql-%a.log' # log file name pattern, - # can include strftime() escapes - # log_file_mode = 0600 - # creation mode for log files, - # begin with 0 to use octal notation - log_truncate_on_rotation = on # If on, an existing log file with the - # same name as the new log file will be - # truncated rather than appended to. - # But such truncation only occurs on - # time-driven rotation, not on restarts - # or size-driven rotation. Default is - # off, meaning append to existing files - # in all cases. - log_rotation_age = 1d - # Automatic rotation of logfiles will happen after that time. 0 disables. - log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables.
- # These are relevant when logging to syslog: - #syslog_facility = 'LOCAL0' - #syslog_ident = 'postgres' - # This is only relevant when logging to eventlog (win32): - #event_source = 'PostgreSQL' - # - When to Log - - #client_min_messages = notice # values in order of decreasing detail: - # debug5 - # debug4 - # debug3 - # debug2 - # debug1 - # log - # notice - # warning - # error - #log_min_messages = warning # values in order of decreasing detail: - # debug5 - # debug4 - # debug3 - # debug2 - # debug1 - # info - # notice - # warning - # error - # log - # fatal - # panic - #log_min_error_statement = error # values in order of decreasing detail: - # debug5 - # debug4 - # debug3 - # debug2 - # debug1 - # info - # notice - # warning - # error - # log - # fatal - # panic (effectively off) - #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements - # and their durations, > 0 logs only - # statements running at least this number - # of milliseconds - # - What to Log - #debug_print_parse = off - #debug_print_rewritten = off - #debug_print_plan = off - #debug_pretty_print = on - #log_checkpoints = off - #log_connections = off - #log_disconnections = off - #log_duration = off - #log_error_verbosity = default - # terse, default, or verbose messages - #log_hostname = off - log_line_prefix = '< %m >' # special values: - # %a = application name - # %u = user name - # %d = database name - # %r = remote host and port - # %h = remote host - # %p = process ID - # %t = timestamp without milliseconds - # %m = timestamp with milliseconds - # %i = command tag - # %e = SQL state - # %c = session ID - # %l = session line number - # %s = session start timestamp - # %v = virtual transaction ID - # %x = transaction ID (0 if none) - # %q = stop here in non-session - # processes - # %% = '%' - # e.g. 
'<%u%%%d> ' -#log_lock_waits = off # log lock waits >= deadlock_timeout -#log_statement = 'none' # none, ddl, mod, all -#log_temp_files = -1 # log temporary files equal or larger - # than the specified size in kilobytes; # -1 disables, 0 logs all temp files - log_timezone = 'Australia/ACT' - -Although most parameters are commented out, they assume default values. We can see the log file directory is pg_log (log_directory parameter), the file names should start with postgresql (log_filename parameter), the files are rotated once every day (log_rotation_age parameter) and the log entries start with a timestamp (log_line_prefix parameter). Of particular interest is the log_line_prefix parameter: there is a whole gamut of information you can include there. - -Looking under the /var/lib/pgsql/9.4/data/pg_log directory shows us these files: - - [root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log - total 20 - -rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log - -rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log - -rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log - -rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log - -rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log - -So the log files only have the name of the weekday stamped in the file name. We can change it. How? Configure the log_filename parameter in postgresql.conf. - -Looking inside one log file shows its entries start with date and time only: - - [root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log - ...
- < 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request - < 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions - < 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down - < 2015-02-27 01:21:27.036 EST >LOG: shutting down - < 2015-02-27 01:21:27.211 EST >LOG: database system is shut down - -### Centralizing Application Logs ### - -#### Log File Monitoring with Imfile #### - -Traditionally, the most common way for applications to log their data is with files. Files are easy to search on a single machine but don’t scale well with more servers. You can set up log file monitoring and send the events to a centralized server when new logs are appended to the bottom. Create a new configuration file in /etc/rsyslog.d/ then add a file input like this: - - $ModLoad imfile - $InputFilePollInterval 10 - $PrivDropToGroup adm - ----------- - - # Input for FILE1 - $InputFileName /FILE1 - $InputFileTag APPNAME1 - $InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled - $InputFileSeverity info - $InputFilePersistStateInterval 20000 - $InputRunFileMonitor - -Replace FILE1 and APPNAME1 with your own file and application names. Rsyslog will send it to the outputs you have configured. - -#### Local Socket Logs with Imuxsock #### - -A socket is similar to a UNIX file handle except that the socket is read into memory by your syslog daemon and then sent to the destination. No file needs to be written. As an example, the logger command sends its logs to this UNIX socket. - -This approach makes efficient use of system resources if your server is constrained by disk I/O or you have no need for local file logs. The disadvantage of this approach is that the socket has a limited queue size. If your syslog daemon goes down or can’t keep up, then you could lose log data. 
- -The rsyslog daemon will read from the /dev/log socket by default, but you can specifically enable it with the [imuxsock input module][17] using the following command: - - $ModLoad imuxsock - -#### UDP Logs with Imudp #### - -Some applications output log data in UDP format, which is the standard syslog protocol when transferring log files over a network or your localhost. Your syslog daemon receives these logs and can process them or transmit them in a different format. Alternatively, you can send the logs to your log server or to a log management solution. - -Use the following command to configure rsyslog to accept syslog data over UDP on the standard port 514: - - $ModLoad imudp - ----------- - - $UDPServerRun 514 - -### Manage Logs with Logrotate ### - -Log rotation is a process that archives log files automatically when they reach a specified age. Without intervention, log files keep growing, using up disk space. Eventually they will crash your machine. - -The logrotate utility can truncate your logs as they age, freeing up space. Your new log file retains the filename. Your old log file is renamed with a number appended to the end of it. Each time the logrotate utility runs, a new file is created and the existing file is renamed in sequence. You determine the threshold at which old files are deleted or archived. - -When logrotate copies a file, the new file has a new inode, which can interfere with rsyslog’s ability to monitor the new file. You can alleviate this issue by adding the copytruncate parameter to your logrotate cron job. This parameter copies existing log file contents to a new file and truncates these contents from the existing file. The inode never changes because the log file itself remains the same; its contents are in a new file. - -The logrotate utility uses the main configuration file at /etc/logrotate.conf and application-specific settings in the directory /etc/logrotate.d/. DigitalOcean has a detailed [tutorial on logrotate][18].
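Tying the logrotate discussion together, a minimal drop-in file under /etc/logrotate.d/ might look like the sketch below. The application name and log path are hypothetical placeholders, and the directives shown are standard logrotate options:

```conf
# /etc/logrotate.d/myapp — "myapp" and its log path are placeholder names
/var/log/myapp/myapp.log {
    # rotate once per day, keeping roughly a month of history online
    daily
    rotate 30
    # compress rotated files, but keep the newest rotation
    # uncompressed so it is still easy to tail or grep
    compress
    delaycompress
    # do not raise errors for a missing log, and skip rotation when it is empty
    missingok
    notifempty
    # copy the file and truncate it in place, so the inode
    # rsyslog is monitoring never changes
    copytruncate
}
```

With copytruncate there is no need to signal the application to reopen its log file after rotation, at the cost of a small window in which lines written between the copy and the truncate can be lost.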
- -### Manage Configuration on Many Servers ### - -When you have just a few servers, you can manually configure logging on them. Once you have a few dozen or more servers, you can take advantage of tools that make this easier and more scalable. At a basic level, all of these copy your rsyslog configuration to each server, and then restart rsyslog so the changes take effect. - -#### Pssh #### - -This tool lets you run an ssh command on several servers in parallel. Use a pssh deployment for only a small number of servers. If one of your servers fails, then you have to ssh into the failed server and do the deployment manually. If you have several failed servers, then the manual deployment on them can take a long time. - -#### Puppet/Chef #### - -Puppet and Chef are two different tools that can automatically configure all of the servers in your network to a specified standard. Their reporting tools let you know about failures and can resync periodically. Both Puppet and Chef have enthusiastic supporters. If you aren’t sure which one is more suitable for your deployment configuration management, you might appreciate [InfoWorld’s comparison of the two tools][19]. - -Some vendors also offer modules or recipes for configuring rsyslog. Here is an example from Loggly’s Puppet module. It offers a class for rsyslog to which you can add an identifying token: - - node 'my_server_node.example.net' { - # Send syslog events to Loggly - class { 'loggly::rsyslog': - customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000', - } - } - -#### Docker #### - -Docker uses containers to run applications independent of the underlying server. Everything runs from inside a container, which you can think of as a unit of functionality. ZDNet has an in-depth article about [using Docker][20] in your data center. - -There are several ways to log from Docker containers including linking to a logging container, logging to a shared volume, or adding a syslog agent directly inside the container. 
One of the most popular logging containers is called [logspout][21]. - -#### Vendor Scripts or Agents #### - -Most log management solutions offer scripts or agents to make sending data from one or more servers relatively easy. Heavyweight agents can use up extra system resources. Some vendors like Loggly offer configuration scripts to make using existing syslog daemons easier. Here is an example [script][22] from Loggly which can run on any number of servers. - --------------------------------------------------------------------------------- - -via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/ - -作者:[Jason Skowronski][a1] -作者:[Amy Echeverri][a2] -作者:[Sadequl Hussain][a3] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a1]:https://www.linkedin.com/in/jasonskowronski -[a2]:https://www.linkedin.com/in/amyecheverri -[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 -[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl -[2]:http://www.rsyslog.com/ -[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system -[4]:http://logstash.net/ -[5]:http://www.fluentd.org/ -[6]:http://www.rsyslog.com/doc/rsyslog_conf.html -[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html -[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87 -[9]:https://www.loggly.com/docs/file-monitoring/ -[10]:http://www.networksorcery.com/enp/protocol/udp.htm -[11]:http://www.networksorcery.com/enp/protocol/tcp.htm -[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html -[13]:http://www.rsyslog.com/doc/relp.html -[14]:http://www.rsyslog.com/doc/queues.html -[15]:http://www.rsyslog.com/doc/tls_cert_ca.html -[16]:http://www.rsyslog.com/doc/tls_cert_machine.html 
-[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html -[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10 -[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html -[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/ -[21]:https://github.com/progrium/logspout -[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ From a14dc152fd6de7cb5357d7a23eed25a89c147afe Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 15 Aug 2015 00:46:41 +0800 Subject: [PATCH 165/697] PUB:20150728 How to Update Linux Kernel for Improved System Performance @geekpi --- ... Kernel for Improved System Performance.md | 120 ++++++++++++++++ ... Kernel for Improved System Performance.md | 129 ------------------ 2 files changed, 120 insertions(+), 129 deletions(-) create mode 100644 published/20150728 How to Update Linux Kernel for Improved System Performance.md delete mode 100644 translated/tech/20150728 How to Update Linux Kernel for Improved System Performance.md diff --git a/published/20150728 How to Update Linux Kernel for Improved System Performance.md b/published/20150728 How to Update Linux Kernel for Improved System Performance.md new file mode 100644 index 0000000000..89823aaad7 --- /dev/null +++ b/published/20150728 How to Update Linux Kernel for Improved System Performance.md @@ -0,0 +1,120 @@ +如何更新 Linux 内核来提升系统性能 +================================================================================ +![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/update-linux-kernel-644x373.jpg?2c3c1f) + +目前的 [Linux 内核][1]的开发速度是前所未有的,大概每2到3个月就会有一个主要的版本发布。每个发布都带来几个的新的功能和改进,可以让很多人的处理体验更快、更有效率、或者其它的方面更好。 + +问题是,你不能在这些内核发布的时候就用它们,你要等到你的发行版带来新内核的发布。我们先前讲到[定期更新内核的好处][2],所以你不必等到那时。让我们来告诉你该怎么做。 + +> 免责声明: 我们先前的一些文章已经提到过,升级内核有(很小)的风险可能会破坏你系统。如果发生这种情况,通常可以通过使用旧内核来使系统保持工作,但是有时还是不行。因此我们对系统的任何损坏都不负责,你得自己承担风险! 
+ +### 预备工作 ### + +要更新你的内核,你首先要确定你使用的是32位还是64位的系统。打开终端并运行: + + uname -a + +检查一下输出的是 x86\_64 还是 i686。如果是 x86\_64,你就运行64位的版本,否则就运行32位的版本。千万记住这个,这很重要。 + +![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/linux_kernel_arch.jpg?2c3c1f) + +接下来,访问[官方的 Linux 内核网站][3],它会告诉你目前稳定内核的版本。愿意的话,你可以尝试下发布预选版(RC),但是这比稳定版少了很多测试。除非你确定想要需要发布预选版,否则就用稳定内核。 + +![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/kernel_latest_version.jpg?2c3c1f) + +### Ubuntu 指导 ### + +对 Ubuntu 及其衍生版的用户而言升级内核非常简单,这要感谢 Ubuntu 主线内核 PPA。虽然,官方把它叫做 PPA,但是你不能像其他 PPA 一样将它添加到你软件源列表中,并指望它自动升级你的内核。实际上,它只是一个简单的网页,你应该浏览并下载到你想要的内核。 + +![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_new_kernels.jpg?2c3c1f) + +现在,访问这个[内核 PPA 网页][4],并滚到底部。列表的最下面会含有最新发布的预选版本(你可以在名字中看到“rc”字样),但是这上面就可以看到最新的稳定版(说的更清楚些,本文写作时最新的稳定版是4.1.2。LCTT 译注:这里虽然 4.1.2 是当时的稳定版,但是由于尚未进入 Ubuntu 发行版中,所以文件夹名称为“-unstable”)。点击文件夹名称,你会看到几个选择。你需要下载 3 个文件并保存到它们自己的文件夹中(如果你喜欢的话可以放在下载文件夹中),以便它们与其它文件相隔离: + +1. 针对架构的含“generic”(通用)的头文件(我这里是64位,即“amd64”) +2. 放在列表中间,在文件名末尾有“all”的头文件 +3. 
针对架构的含“generic”内核文件(再说一次,我会用“amd64”,但是你如果用32位的,你需要使用“i686”) + +你还可以在下面看到含有“lowlatency”(低延时)的文件。但最好忽略它们。这些文件相对不稳定,并且只为那些通用文件不能满足像音频录制这类任务想要低延迟的人准备的。再说一次,首选通用版,除非你有特定的任务需求不能很好地满足。一般的游戏和网络浏览不是使用低延时版的借口。 + +你把它们放在各自的文件夹下,对么?现在打开终端,使用`cd`命令切换到新创建的文件夹下,如 + + cd /home/user/Downloads/Kernel + +接着运行: + + sudo dpkg -i *.deb + +这个命令会标记文件夹中所有的“.deb”文件为“待安装”,接着执行安装。这是推荐的安装方法,因为不可以很简单地选择一个文件安装,它总会报出依赖问题。这这样一起安装就可以避免这个问题。如果你不清楚`cd`和`sudo`是什么。快速地看一下 [Linux 基本命令][5]这篇文章。 + +![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_install_kernel.jpg?2c3c1f) + +安装完成后,**重启**你的系统,这时应该就会运行刚安装的内核了!你可以在命令行中使用`uname -a`来检查输出。 + +### Fedora 指导 ### + +如果你使用的是 Fedora 或者它的衍生版,过程跟 Ubuntu 很类似。不同的是文件获取的位置不同,安装的命令也不同。 + +![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/fedora_new_kernels.jpg?2c3c1f) + +查看 [最新 Fedora 内核构建][6]列表。选取列表中最新的稳定版并翻页到下面选择 i686 或者 x86_64 版。这取决于你的系统架构。这时你需要下载下面这些文件并保存到它们对应的目录下(比如“Kernel”到下载目录下): + +- kernel +- kernel-core +- kernel-headers +- kernel-modules +- kernel-modules-extra +- kernel-tools +- perf 和 python-perf (可选) + +如果你的系统是 i686(32位)同时你有 4GB 或者更大的内存,你需要下载所有这些文件的 PAE 版本。PAE 是用于32位系统上的地址扩展技术,它允许你使用超过 3GB 的内存。 + +现在使用`cd`命令进入文件夹,像这样 + + cd /home/user/Downloads/Kernel + +接着运行下面的命令来安装所有的文件 + + yum --nogpgcheck localinstall *.rpm + +最后**重启**你的系统,这样你就可以运行新的内核了! + +#### 使用 Rawhide #### + +另外一个方案是,Fedora 用户也可以[切换到 Rawhide][7],它会自动更新所有的包到最新版本,包括内核。然而,Rawhide 经常会破坏系统(尤其是在早期的开发阶段中),它**不应该**在你日常使用的系统中用。 + +### Arch 指导 ### + +[Arch 用户][8]应该总是使用的是最新和最棒的稳定版(或者相当接近的版本)。如果你想要更接近最新发布的稳定版,你可以启用测试库提前2到3周获取到主要的更新。 + +要这么做,用[你喜欢的编辑器][9]以`sudo`权限打开下面的文件 + + /etc/pacman.conf + +接着取消注释带有 testing 的三行(删除行前面的#号)。如果你启用了 multilib 仓库,就把 multilib-testing 也做相同的事情。如果想要了解更多参考[这个 Arch 的 wiki 界面][10]。 + +升级内核并不简单(有意这么做的),但是这会给你带来很多好处。只要你的新内核不会破坏任何东西,你可以享受它带来的性能提升,更好的效率,更多的硬件支持和潜在的新特性。尤其是你正在使用相对较新的硬件时,升级内核可以帮助到你。 + + +**怎么升级内核这篇文章帮助到你了么?你认为你所喜欢的发行版对内核的发布策略应该是怎样的?**。在评论栏让我们知道! 
+ +-------------------------------------------------------------------------------- + +via: http://www.makeuseof.com/tag/update-linux-kernel-improved-system-performance/ + +作者:[Danny Stieben][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.makeuseof.com/tag/author/danny/ +[1]:http://www.makeuseof.com/tag/linux-kernel-explanation-laymans-terms/ +[2]:http://www.makeuseof.com/tag/5-reasons-update-kernel-linux/ +[3]:http://www.kernel.org/ +[4]:http://kernel.ubuntu.com/~kernel-ppa/mainline/ +[5]:http://www.makeuseof.com/tag/an-a-z-of-linux-40-essential-commands-you-should-know/ +[6]:http://koji.fedoraproject.org/koji/packageinfo?packageID=8 +[7]:http://www.makeuseof.com/tag/bleeding-edge-linux-fedora-rawhide/ +[8]:http://www.makeuseof.com/tag/arch-linux-letting-you-build-your-linux-system-from-scratch/ +[9]:http://www.makeuseof.com/tag/nano-vs-vim-terminal-text-editors-compared/ +[10]:https://wiki.archlinux.org/index.php/Pacman#Repositories diff --git a/translated/tech/20150728 How to Update Linux Kernel for Improved System Performance.md b/translated/tech/20150728 How to Update Linux Kernel for Improved System Performance.md deleted file mode 100644 index 2114549452..0000000000 --- a/translated/tech/20150728 How to Update Linux Kernel for Improved System Performance.md +++ /dev/null @@ -1,129 +0,0 @@ -如何更新Linux内核提升系统性能 -================================================================================ -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/update-linux-kernel-644x373.jpg?2c3c1f) - -[Linux内核][1]内核的开发速度目前是空前,大概每2到3个月就会有一个主要的版本发布。每个发布都带来让很多人的计算更加快、更加有效率、或者更好的功能和提升。 - -问题是你不能在这些内核发布的时候就用它们-你要等到你的发行版带来新内核的发布。我们先前发布了[定期更新内核的好处][2],你不必等到那时。我们会向你展示该怎么做。 - -> 免责声明: 我们先前的一些文章已经提到过,升级内核会带来(很小的)破坏你系统的风险。在这种情况下,通常可以通过旧内核来使系统工作,但是有时还是不行。因此我们对系统的任何损坏都不负责-你自己承担风险! 
- -### 预备工作 ### - -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/linux_kernel_arch.jpg?2c3c1f) - -要更新你的内核,你首先要确定你使用的是32位还是64位的系统。打开终端并运行: - - uname -a - -检查一下输出的是x86_64还是i686。如果是x86_64,你就运行64位的版本,否则就运行32位的版本。记住这个因为这个很重要。 - -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/kernel_latest_version.jpg?2c3c1f) - -接下来,访问[官方Linux内核网站][3],它会告诉你目前稳定内核的版本。如果你喜欢你可以尝试发布预选版,但是这比稳定版少了很多测试。除非你确定想要用发布预选版否则就用稳定内核。 - -### Ubuntu指导 ### - -对Ubuntu及其衍生版的用户而言升级内核非常简单,这要感谢Ubuntu主线内核PPA。虽然,官方称为一个PPA。但是你不能像其他PPA一样用来添加它到你软件源列表中,并指望它自动升级你的内核。而它只是一个简单的网页,你可以下载到你想要的内核。 - -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_new_kernels.jpg?2c3c1f) - -现在,访问[内核PPA网页][4],并滚到底部。列表的最下面会含有最新发布的预选版本(你可以在名字中看到“rc”字样),但是这上面就可以看到最新的稳定版(为了更容易地解释这个,这时最新的稳定版是4.1.2)。点击它,你会看到几个选项。你需要下载3个文件并保存到各自的文件夹中(如果你喜欢的话可以在下载文件夹中),这样就可以将它们相互隔离了: - -- 针对架构的含“generic”的头文件(我这里是64位或者“amd64”) -- 中间的头文件在文件名末尾有“all” -- 针对架构的含“generic”内核文件(再说一次,我会用“amd64”,但是你如果用32位的,你需要使用“i686”) - -你会看到还有含有“lowlatency”的文件可以下面。但最好忽略它们。这些文件相对不稳定,并且只为那些通用文件不能满足像录音这类任务想要低延迟的人准备的。再说一次,首选通用版除非你特定的任务需求不能很好地满足。一般的游戏和网络浏览不是使用低延时版的借口。 - -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_install_kernel.jpg?2c3c1f) - -你把它们放在各自的文件夹下,对么?现在打开终端,使用 - - cd - -命令到新创建的文件夹下,像 - - cd /home/user/Downloads/Kernel - -接着运行: - - sudo dpkg -i *.deb - -这个命令会标记所有文件夹的“.deb”文件为“待安装”,接着执行安装。这是推荐的安装放大,因为除非可以很简单地选择一个文件安装,它总会报出依赖问题。这个方法可以避免这个问题。如果你不清楚cd和sudo是什么。快速地看一下[Linux基本命令][5]这篇文章。 - -安装完成后,**重启**你的系统,这时应该就会运行刚安装的内核了!你可以在命令行中使用uname -a来检查输出。 - -### Fedora指导 ### - -如果你使用的是Fedora或者它的衍生版,过程跟Ubuntu很类似。不同的是文件获取的位置不同,安装的命令也不同。 - -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/fedora_new_kernels.jpg?2c3c1f) - -查看[Fedora最新内核编译][6]列表。选取列表中最新的稳定版并滚到下面选择i686或者x86_64版。这依赖于你的系统架构。这时你需要下载下面这些文件并保存到它们对应的目录下(比如“Kernel”到下载目录下): - -- kernel -- kernel-core -- kernel-headers -- kernel-modules -- kernel-modules-extra -- kernel-tools -- perf and python-perf (optional) - 
-如果你的系统是i686(32位)同时你有4GB或者更大的内存,你需要下载所有这些文件的PAE版本。PAE是用于32位的地址扩展技术上,它允许你使用3GB的内存。 - -现在使用 - - cd - -命令进入文件夹,像这样 - - cd /home/user/Downloads/Kernel - -and then run the following command to install all the files: -接着运行下面的命令来安装所有的文件 - - yum --nogpgcheck localinstall *.rpm - -最后**重启**你的系统,这样你就可以运行新的内核了! - -### 使用 Rawhide ### - -另外一个方案是,Fedora用户也可以[切换到Rawhide][7],它会自动更新所有的包到最新版本,包括内核。然而,Rawhide经常会破坏系统(尤其是在早期的开发版中),它**不应该**在你日常使用的系统中用。 - -### Arch指导 ### - -[Arch][8]应该总是使用的是最新和最棒的稳定版(或者相当接近的版本)。如果你想要更接近最新发布的稳定版,你可以启用测试库提前2到3周获取到主要的更新。 - -要这么做,用[你喜欢的编辑器][9]以sudo权限打开下面的文件 - - /etc/pacman.conf - -接着取消注释带有testing的三行(删除行前面的井号)。如果你想要启用multilib仓库,就把multilib-testing也做相同的事情。如果想要了解更多参考[这个Arch的wiki界面][10]。 - -升级内核并不简单(有意这么做),但是这会给你带来很多好处。只要你的新内核不会破坏任何东西,你可以享受它带来的性能提升,更好的效率,支持更多的硬件和潜在的新特性。尤其是你正在使用相对更新的硬件,升级内核可以帮助到它。 - - -**怎么升级内核这篇文章帮助到你了么?你认为你所喜欢的发行版对内核的发布策略应该是怎样的?**。在评论栏让我们知道! - --------------------------------------------------------------------------------- - -via: http://www.makeuseof.com/tag/update-linux-kernel-improved-system-performance/ - -作者:[Danny Stieben][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.makeuseof.com/tag/author/danny/ -[1]:http://www.makeuseof.com/tag/linux-kernel-explanation-laymans-terms/ -[2]:http://www.makeuseof.com/tag/5-reasons-update-kernel-linux/ -[3]:http://www.kernel.org/ -[4]:http://kernel.ubuntu.com/~kernel-ppa/mainline/ -[5]:http://www.makeuseof.com/tag/an-a-z-of-linux-40-essential-commands-you-should-know/ -[6]:http://koji.fedoraproject.org/koji/packageinfo?packageID=8 -[7]:http://www.makeuseof.com/tag/bleeding-edge-linux-fedora-rawhide/ -[8]:http://www.makeuseof.com/tag/arch-linux-letting-you-build-your-linux-system-from-scratch/ -[9]:http://www.makeuseof.com/tag/nano-vs-vim-terminal-text-editors-compared/ -[10]:https://wiki.archlinux.org/index.php/Pacman#Repositories From 
29c2cfb01350f3960ca314df890fc8a3308f8e30 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 15 Aug 2015 01:24:33 +0800 Subject: [PATCH 166/697] PUB:20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver @GOLinux --- ... Install The Latest Nvidia Linux Driver.md | 63 +++++++++++++++++++ ... Install The Latest Nvidia Linux Driver.md | 63 ------------------- 2 files changed, 63 insertions(+), 63 deletions(-) create mode 100644 published/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md delete mode 100644 translated/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md diff --git a/published/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md b/published/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md new file mode 100644 index 0000000000..37d106b69f --- /dev/null +++ b/published/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md @@ -0,0 +1,63 @@ +Ubuntu 有望让你安装最新 Nvidia Linux 驱动更简单 +================================================================================ +![Ubuntu Gamers are on the rise -and so is demand for the latest drivers](http://www.omgubuntu.co.uk/wp-content/uploads/2014/03/ubuntugamer_logo_dark-500x250.jpg) + +*Ubuntu 上的游戏玩家在增长——因而需要最新版驱动* + +**在 Ubuntu 上安装上游的 NVIDIA 图形驱动即将变得更加容易。** + +Ubuntu 开发者正在考虑构建一个全新的'官方' PPA,以便为桌面用户分发最新的闭源 NVIDIA 二进制驱动。 + +该项改变会让 Ubuntu 游戏玩家收益,并且*不会*给其它人造成 OS 稳定性方面的风险。 + +**仅**当用户明确选择它时,新的上游驱动将通过这个新 PPA 安装并更新。其他人将继续得到并使用更近的包含在 Ubuntu 归档中的稳定版 NVIDIA Linux 驱动快照。 + +### 为什么需要该项目? 
### + +![Ubuntu provides drivers – but they’re not the latest](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/drivers.jpg) + +*Ubuntu 提供了驱动——但是它们不是最新的* + +可以从归档中(使用命令行、synaptic,或者通过额外驱动工具)安装到 Ubuntu 上的闭源 NVIDIA 图形驱动在大多数情况下都能工作得很好,并且可以轻松地处理 Unity 桌面外壳的混染。 + +但对于游戏需求而言,那完全是另外一码事儿。 + +如果你想要将最高帧率和 HD 纹理从最新流行的 Steam 游戏中压榨出来,你需要最新的二进制驱动文件。 + +驱动越新,越可能支持最新的特性和技术,或者带有预先打包的游戏专门的优化和漏洞修复。 + +问题在于,在 Ubuntu 上安装最新 Nvidia Linux 驱动不是件容易的事儿,而且也不具安全保证。 + +要填补这个空白,许多由热心人维护的第三方 PPA 就出现了。由于许多这些 PPA 也发布了其它实验性的或者前沿软件,它们的使用**并不是毫无风险的**。添加一个前沿的 PPA 通常是搞崩整个系统的最快的方式! + +一个解决方法是,让 Ubuntu 用户安装最新的专有图形驱动以满足对第三方 PPA 的需要,**但是**提供一个安全机制,如果有需要,你可以回滚到稳定版本。 + +### ‘对全新驱动的需求难以忽视’ ### + +> '一个让Ubuntu用户安全地获得最新硬件驱动的解决方案出现了。' + +‘在快速发展的市场中,对全新驱动的需求正变得难以忽视,用户将想要最新的上游软件,’卡斯特罗在一封给 Ubuntu 桌面邮件列表的电子邮件中解释道。 + +‘[NVIDIA] 可以毫不费力为 [Windows 10] 用户带来了不起的体验。直到我们可以说服 NVIDIA 在 Ubuntu 中做了同样的工作,这样我们就可以搞定这一切了。’ + +卡斯特罗的“官方的” NVIDIA PPA 方案就是最实现这一目的的最容易的方式。 + +游戏玩家将可以在 Ubuntu 的默认专有软件驱动工具中选择接收来自该 PPA 的新驱动,再也不需要它们从网站或维基页面拷贝并粘贴终端命令了。 + +该 PPA 内的驱动将由一个选定的社区成员组成的团队打包并维护,并受惠于一个名为**自动化测试**的半官方方式。 + +就像卡斯特罗自己说的那样:'人们想要最新的闪光的东西,而不管他们想要做什么。我们也许也要在其周围放置一个框架,因此人们可以获得他们所想要的,而不必破坏他们的计算机。' + +**你想要使用这个 PPA 吗?你怎样来评估 Ubuntu 上默认 Nvidia 驱动的性能呢?在评论中分享你的想法吧,伙计们!** + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers + +作者:[Joey-Elijah Sneddon][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author diff --git a/translated/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md b/translated/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md deleted file mode 100644 index bf24d3e5c2..0000000000 --- a/translated/tech/20150813 Ubuntu Want To Make It Easier 
For You To Install The Latest Nvidia Linux Driver.md +++ /dev/null @@ -1,63 +0,0 @@ -Ubuntu想要让你安装最新版Nvidia Linux驱动更简单 -================================================================================ -![Ubuntu Gamers are on the rise -and so is demand for the latest drivers](http://www.omgubuntu.co.uk/wp-content/uploads/2014/03/ubuntugamer_logo_dark-500x250.jpg) -Ubuntu游戏在增长——因而需要最新版驱动 - -**Ubuntu上安装上游NVIDIA图形驱动即将变得更加容易。** - -Ubuntu开发者正在考虑构建一个全新的'官方'PPA,以便为桌面用户分发最新的闭源NVIDIA二进制驱动。 - -该项运动将使得Ubuntu游戏玩家收益,并且**不会**给其它人造成OS稳定性方面的风险。 - -新的上游驱动将通过该新PPA安装并更新,**只有**在用户明确选择它。其他人将继续接收并使用更近的包含在Ubuntu归档中的稳定版NVIDIA Linux驱动快照。 - -### 为什么需要该项目? ### - -![Ubuntu provides drivers – but they’re not the latest](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/drivers.jpg) -Ubuntu提供了驱动——但是它们不是最新的 - -可以从归档中(使用命令行、synaptic,或者通过额外驱动工具)安装到Ubuntu上的闭源NVIDIA图形驱动在大多数情况下都能工作得很好,并且可以轻松地处理混合Unity桌面shell。 - -对于游戏需求而言,那完全是另外一码事儿。 - -如果你想要将所有最后帧和HD纹理从最新的大游戏Steam游戏中挤压出来,你需要最新的二进制驱动对象。 - -> '安装最新Nvidia Linux驱动到Ubuntu不是件容易的事儿,而且也不具安全保证。' - -驱动越新,越可能支持最新的特性和技术,或者预先打包好了游戏专门优化和漏洞修复。 - -问题在于安装最新的Nvidia Linux驱动到Ubuntu上不是件容易的事儿,也没有安全保证。 - -要填补空白,许多由热心人维护的第三方PPA就出现了。由于许多这些PPA也发布了其它实验性的或者前沿软件,它们的使用**没有风险**。添加一个前沿的PPA通常是最快的方式,以完全满足系统需要! 
- -一个让Ubuntu用户安装作为第三方PPA提供最新专有图形驱动解决方案就十分需要了,**但是**提供了一个安全机制,如果有需要,你可以回滚到稳定版本。 - -### ‘对全新驱动的需求难以忽视’ ### - -> '一个让Ubuntu用户安全地获得最新硬件驱动的解决方案出现了。' - -'在快速开发市场中,对全新驱动的需求正变得难以忽视,用户将想要最新的上游软件,'卡斯特罗在一封给Ubuntu桌面邮件列表的电子邮件中解释道。 - -'[NVIDIA]可以分发一个了不起的体验,几乎像[Windows 10]用户那样毫不费力。直到我们可以证实NVIDIA在Ubuntu中做了同样的工作,我们就可以收拾残局了。' - -卡斯特罗关于一个“神圣的”NVIDIA PPA命题就是最实现这一目的的最容易的方式。 - -游戏玩家将可以在Ubuntu的默认专有软件驱动工具中选择接收来自该PPA的新驱动——不需要它们从网站或维基页面拷贝并粘贴终端命令了。 - -该PPA内的驱动将由一个选定的社区成员组成的团队打包并维护,并从一个名为**自动化测试**的半官方选项受惠。 - -就像卡斯特罗自己说的那样:'人们想要最新的闪光的东西,不管他们想要做什么。我们也许也要在其周围放置一个框架,因此人们可以获得他们所想要的,而不必破坏他们的计算机。' - -**你想要使用这个PPA吗?你怎样来评估Ubuntu上默认Nvidia驱动的性能呢?在评论中分享你的想法吧,伙计们!** - --------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers - -作者:[Joey-Elijah Sneddon][a] -译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author From aa0754b2b1578f65ba69065fb6b3b2ffb3db9a51 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 15 Aug 2015 01:40:34 +0800 Subject: [PATCH 167/697] PUB:20150811 How to download apk files from Google Play Store on Linux @FSSlc --- ...k files from Google Play Store on Linux.md | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) rename {translated/tech => published}/20150811 How to download apk files from Google Play Store on Linux.md (67%) diff --git a/translated/tech/20150811 How to download apk files from Google Play Store on Linux.md b/published/20150811 How to download apk files from Google Play Store on Linux.md similarity index 67% rename from translated/tech/20150811 How to download apk files from Google Play Store on Linux.md rename to published/20150811 How to download apk files from Google Play Store on Linux.md index 670c0f331b..615dcca7c2 100644 --- a/translated/tech/20150811 
How to download apk files from Google Play Store on Linux.md +++ b/published/20150811 How to download apk files from Google Play Store on Linux.md @@ -1,14 +1,15 @@ -如何在 Linux 中从 Google Play 商店里下载 apk 文件 +如何在 Linux 上从 Google Play 商店里下载 apk 文件 ================================================================================ -假设你想在你的 Android 设备中安装一个 Android 应用,然而由于某些原因,你不能在 Andor 设备上访问 Google Play 商店。接着你该怎么做呢?在不访问 Google Play 商店的前提下安装应用的一种可能的方法是使用其他的手段下载该应用的 APK 文件,然后手动地在 Android 设备上 [安装 APK 文件][1]。 -在非 Android 设备如常规的电脑和笔记本电脑上,有着几种方式来从 Google Play 商店下载到官方的 APK 文件。例如,使用浏览器插件(例如, 针对 [Chrome][2] 或针对 [Firefox][3] 的插件) 或利用允许你使用浏览器下载 APK 文件的在线的 APK 存档等。假如你不信任这些闭源的插件或第三方的 APK 仓库,这里有另一种手动下载官方 APK 文件的方法,它使用一个名为 [GooglePlayDownloader][4] 的开源 Linux 应用。 +假设你想在你的 Android 设备中安装一个 Android 应用,然而由于某些原因,你不能在 Andord 设备上访问 Google Play 商店(LCTT 译注:显然这对于我们来说是常态)。接着你该怎么做呢?在不访问 Google Play 商店的前提下安装应用的一种可能的方法是,使用其他的手段下载该应用的 APK 文件,然后手动地在 Android 设备上 [安装 APK 文件][1]。 -GooglePlayDownloader 是一个基于 Python 的 GUI 应用,使得你可以从 Google Play 商店上搜索和下载 APK 文件。由于它是完全开源的,你可以放心地使用它。在本篇教程中,我将展示如何在 Linux 环境下,使用 GooglePlayDownloader 来从 Google Play 商店下载 APK 文件。 +在非 Android 设备如常规的电脑和笔记本电脑上,有着几种方式来从 Google Play 商店下载到官方的 APK 文件。例如,使用浏览器插件(例如,针对 [Chrome][2] 或针对 [Firefox][3] 的插件) 或利用允许你使用浏览器下载 APK 文件的在线的 APK 存档等。假如你不信任这些闭源的插件或第三方的 APK 仓库,这里有另一种手动下载官方 APK 文件的方法,它使用一个名为 [GooglePlayDownloader][4] 的开源 Linux 应用。 + +GooglePlayDownloader 是一个基于 Python 的 GUI 应用,它可以让你从 Google Play 商店上搜索和下载 APK 文件。由于它是完全开源的,你可以放心地使用它。在本篇教程中,我将展示如何在 Linux 环境下,使用 GooglePlayDownloader 来从 Google Play 商店下载 APK 文件。 ### Python 需求 ### -GooglePlayDownloader 需要使用 Python 中 SSL 模块的扩展 SNI(服务器名称指示) 来支持 SSL/TLS 通信,该功能由 Python 2.7.9 或更高版本带来。这使得一些旧的发行版本如 Debian 7 Wheezy 及早期版本,Ubuntu 14.04 及早期版本或 CentOS/RHEL 7 及早期版本均不能满足该要求。假设你已经有了一个带有 Python 2.7.9 或更高版本的发行版本,可以像下面这样接着安装 GooglePlayDownloader。 +GooglePlayDownloader 需要使用带有 SNI(Server Name Indication 服务器名称指示)的 Python 来支持 SSL/TLS 通信,该功能由 Python 2.7.9 或更高版本引入。这使得一些旧的发行版本如 Debian 7 Wheezy 及早期版本,Ubuntu 14.04 及早期版本或 
CentOS/RHEL 7 及早期版本均不能满足该要求。这里假设你已经有了一个带有 Python 2.7.9 或更高版本的发行版本,可以像下面这样接着安装 GooglePlayDownloader。 ### 在 Ubuntu 上安装 GooglePlayDownloader ### @@ -16,7 +17,7 @@ GooglePlayDownloader 需要使用 Python 中 SSL 模块的扩展 SNI(服务器 #### 在 Ubuntu 14.10 上 #### -下载 [python-ndg-httpsclient][5] deb 软件包,这在旧一点的 Ubuntu 发行版本中是一个缺失的依赖。同时还要下载 GooglePlayDownloader 的官方 deb 软件包。 +下载 [python-ndg-httpsclient][5] deb 软件包,这是一个较旧的 Ubuntu 发行版本中缺失的依赖。同时还要下载 GooglePlayDownloader 的官方 deb 软件包。 $ wget http://mirrors.kernel.org/ubuntu/pool/main/n/ndg-httpsclient/python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb @@ -64,7 +65,7 @@ GooglePlayDownloader 需要使用 Python 中 SSL 模块的扩展 SNI(服务器 ### 使用 GooglePlayDownloader 从 Google Play 商店下载 APK 文件 ### -一旦你安装好 GooglePlayDownloader 后,你就可以像下面那样从 Google Play 商店下载 APK 文件。 +一旦你安装好 GooglePlayDownloader 后,你就可以像下面那样从 Google Play 商店下载 APK 文件。(LCTT 译注:显然你需要让你的 Linux 能爬梯子) 首先通过输入下面的命令来启动该应用: @@ -76,7 +77,7 @@ GooglePlayDownloader 需要使用 Python 中 SSL 模块的扩展 SNI(服务器 ![](https://farm1.staticflickr.com/503/20230360479_925f5da613_b.jpg) -一旦你从搜索列表中找到了该应用,就选择该应用,接着点击 "下载选定的 APK 文件" 按钮。最后你将在你的家目录中找到下载的 APK 文件。现在,你就可以将下载到的 APK 文件转移到你所选择的 Android 设备上,然后手动安装它。 +一旦你从搜索列表中找到了该应用,就选择该应用,接着点击 “下载选定的 APK 文件” 按钮。最后你将在你的家目录中找到下载的 APK 文件。现在,你就可以将下载到的 APK 文件转移到你所选择的 Android 设备上,然后手动安装它。 希望这篇教程对你有所帮助。 @@ -86,7 +87,7 @@ via: http://xmodulo.com/download-apk-files-google-play-store.html 作者:[Dan Nanni][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From eaf4b2444bbff4896171b0e4dbdef671d9435a52 Mon Sep 17 00:00:00 2001 From: XIAOYU <1136299502@qq.com> Date: Sat, 15 Aug 2015 09:58:49 +0800 Subject: [PATCH 168/697] translated translated --- ...urce RSS News Ticker for Linux Desktops.md | 93 +++++++++---------- 1 file changed, 46 insertions(+), 47 deletions(-) 
diff --git a/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md b/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md index ccbbd3abd8..d7bb0e425b 100644 --- a/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md +++ b/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md @@ -1,94 +1,93 @@ -translating by xiaoyu33 - -Tickr Is An Open-Source RSS News Ticker for Linux Desktops +Trickr:一个开源的Linux桌面RSS新闻速递 ================================================================================ ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/rss-tickr.jpg) -**Latest! Latest! Read all about it!** +**最新的!最新的!阅读关于它的一切!** -Alright, so the app we’re highlighting today isn’t quite the binary version of an old newspaper seller — but it is a great way to have the latest news brought to you, on your desktop. +好了,所以我们今天要强调的应用程序不是相当于旧报纸的二进制版本—而是它会以一个伟大的方式,将最新的新闻推送到你的桌面上。 -Tick is a GTK-based news ticker for the Linux desktop that scrolls the latest headlines and article titles from your favourite RSS feeds in horizontal strip that you can place anywhere on your desktop. +Tick是一个基于GTK的Linux桌面新闻速递,能够在水平带滚动显示最新头条新闻,以及你最爱的RSS资讯文章标题,当然你可以放置在你桌面的任何地方。 -Call me Joey Calamezzo; I put mine on the bottom TV news station style. +请叫我Joey Calamezzo;我把我的放在底部,有电视新闻台的风格。 -“Over to you, sub-heading.” +“到你了,子标题” -### RSS — Remember That? ### +### RSS -还记得吗? ### -“Thanks paragraph ending.” +“谢谢段落结尾。” -In an era of push notifications, social media, and clickbait, cajoling us into reading the latest mind-blowing, humanity saving listicle ASAP, RSS can seem a bit old hat. +在一个推送通知,社交媒体,以及点击诱饵的时代,哄骗我们阅读最新的令人惊奇的,人人都爱读的清单,RSS看起来有一点过时了。 -For me? Well, RSS lives up to its name of Really Simple Syndication. It’s the easiest, most manageable way to have news come to me. 
I can manage and read stuff when I want; there’s no urgency to view lest the tweet vanish into the stream or the push notification vanish. +对我来说?恩,RSS是名副其实的真正简单的聚合。这是将消息通知给我的最简单,最易于管理的方式。我可以在我愿意的时候,管理和阅读一些东西;没必要匆忙的去看,以防这条微博消失在信息流中,或者推送通知消失。 -The beauty of Tickr is in its utility. You can have a constant stream of news trundling along the bottom of your screen, which you can passively glance at from time to time. +tickr的美在于它的实用性。你可以不断地有新闻滚动在屏幕的底部,然后不时地瞥一眼。 ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/tickr-close-up-750x58.jpg) -There’s no pressure to ‘read’ or ‘mark all read’ or any of that. When you see something you want to read you just click it to open it in a web browser. +你不会有“阅读”或“标记所有为已读”的压力。当你看到一些你想读的东西,你只需点击它,将它在Web浏览器中打开。 -### Setting it Up ### +### 开始设置 ### ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/tickr-rss-settings.jpg) -Although Tickr is available to install from the Ubuntu Software Centre it hasn’t been updated for a long time. Nowhere is this sense of abandonment more keenly felt than when opening the unwieldy and unintuitive configuration panel. +尽管虽然tickr可以从Ubuntu软件中心安装,然而它已经很久没有更新了。当你打开笨拙的不直观的控制面板的时候,没有什么能够比这更让人感觉被遗弃的了。 -To open it: +打开它: -1. Right click on the Tickr bar -1. Go to Edit > Preferences -1. Adjust the various settings +1. 右键单击tickr条 +1. 转至编辑>首选项 +1. 调整各种设置 -Row after row of options and settings, few of which seem to make sense at first. 
But poke and prod around and you’ll controls for pretty much everything, including: +选项和设置行的后面,有些似乎是容易理解的。但是知己知彼你能够几乎掌控一切,包括: -- Set scrolling speed -- Choose behaviour when mousing over -- Feed update frequency -- Font, including font sizes and color -- Separator character (‘delineator’) -- Position of Tickr on screen -- Color and opacity of Tickr bar -- Choose how many articles each feed displays +- 设置滚动速度 +- 选择鼠标经过时的行为 +- 资讯更新频率 +- 字体,包括字体大小和颜色 +- 分隔符(“delineator”) +- tickr在屏幕上的位置 +- tickr条的颜色和不透明度 +- 选择每种资讯显示多少文章 -One ‘quirk’ worth mentioning is that pressing the ‘Apply’ only updates the on-screen Tickr to preview changes. For changes to take effect when you exit the Preferences window you need to click ‘OK’. +有个值得一提的“怪癖”是,当你点击“应用”按钮,只会更新tickr的屏幕预览。当您退出“首选项”窗口时,请单击“确定”。 -Getting the bar to sit flush on your display can also take a fair bit of tweaking, especially on Unity. +想要滚动条在你的显示屏上水平显示,也需要公平一点的调整,特别是统一显示。 -Press the “full width button” to have the app auto-detect your screen width. By default when placed at the top or bottom it leaves a 25px gap (the app was created back in the days of GNOME 2.x desktops). After hitting the top or bottom buttons just add an extra 25 pixels to the input box compensate for this. +按下“全宽按钮”,能够让应用程序自动检测你的屏幕宽度。默认情况下,当放置在顶部或底部时,会留下25像素的间距(应用程序被创建在过去的GNOME2.x桌面)。只需添加额外的25像素到输入框,来弥补这个问题。 -Other options available include: choose which browser articles open in; whether Tickr appears within a regular window frame; whether a clock is shown; and how often the app checks feed for articles. +其他可供选择的选项包括:选择文章在哪个浏览器打开;tickr是否以一个常规的窗口出现; +是否显示一个时钟;以及应用程序多久检查一次文章资讯。 -#### Adding Feeds #### +#### 添加资讯 #### -Tickr comes with a built-in list of over 30 different feeds, ranging from technology blogs to mainstream news services. +tickr自带的有超过30种不同的资讯列表,从技术博客到主流新闻服务。 ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/feed-picker-750x398.jpg) -You can select as many of these as you like to show headlines in the on screen ticker. 
If you want to add your own feeds you can: – +你可以选择很多你想在屏幕上显示的新闻提要。如果你想添加自己的资讯,你可以:— -1. Right click on the Tickr bar -1. Go to File > Open Feed -1. Enter Feed URL -1. Click ‘Add/Upd’ button -1. Click ‘OK (select)’ +1. 右键单击tickr条 +1. 转至文件>打开资讯 +1. 输入资讯网址 +1. 点击“添加/更新”按钮 +1. 单击“确定”(选择) -To set how many items from each feed shows in the ticker change the “Read N items max per feed” in the other preferences window. +如果想设置每个资讯在ticker中显示多少条文章,可以去另一个首选项窗口修改“每个资讯最大读取N条文章” -### Install Tickr in Ubuntu 14.04 LTS and Up ### +### 在Ubuntu 14.04 LTS或更高版本上安装Tickr ### -So that’s Tickr. It’s not going to change the world but it will keep you abreast of what’s happening in it. +在Ubuntu 14.04 LTS或更高版本上安装Tickr -To install it in Ubuntu 14.04 LTS or later head to the Ubuntu Software Centre but clicking the button below. +在Ubuntu 14.04 LTS或更高版本中安装,转到Ubuntu软件中心,但要点击下面的按钮。 -- [Click to install Tickr form the Ubuntu Software Center][1] +- [点击此处进入Ubuntu软件中心安装tickr][1] -------------------------------------------------------------------------------- via: http://www.omgubuntu.co.uk/2015/06/tickr-open-source-desktop-rss-news-ticker 作者:[Joey-Elijah Sneddon][a] -译者:[译者ID](https://github.com/译者ID) +译者:[xiaoyu33](https://github.com/xiaoyu33) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 6c69a46dc31c5ac42d88dee04c787b82d804d77b Mon Sep 17 00:00:00 2001 From: XIAOYU <1136299502@qq.com> Date: Sat, 15 Aug 2015 10:00:20 +0800 Subject: [PATCH 169/697] translated change the url --- ... 
Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md (100%) diff --git a/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md b/translated/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md similarity index 100% rename from sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md rename to translated/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md From 8de21554f0dd6220e53f6d3cee738a5a4c40a372 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 15 Aug 2015 10:39:03 +0800 Subject: [PATCH 170/697] translating --- ...20150811 How to Install Snort and Usage in Ubuntu 15.04.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md b/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md index 7bf2438c95..96759a29e6 100644 --- a/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md +++ b/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md @@ -1,3 +1,5 @@ +translating----geekpi + How to Install Snort and Usage in Ubuntu 15.04 ================================================================================ Intrusion detection in a network is important for IT security. Intrusion Detection System used for the detection of illegal and malicious attempts in the network. Snort is well-known open source intrusion detection system. Web interface (Snorby) can be used for better analysis of alerts. Snort can be used as an intrusion prevention system with iptables/pf firewall. In this article, we will install and configure an open source IDS system snort. 
@@ -200,4 +202,4 @@ via: http://linoxide.com/security/install-snort-usage-ubuntu-15-04/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/naveeda/ -[1]:https://www.snort.org/downloads/community/community-rules.tar.gz \ No newline at end of file +[1]:https://www.snort.org/downloads/community/community-rules.tar.gz From 1c9c7bcec0630789c0a7b3a8aeff64abfafddda2 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 15 Aug 2015 11:34:52 +0800 Subject: [PATCH 171/697] translating --- ...Install Snort and Usage in Ubuntu 15.04.md | 80 +++++++++---------- 1 file changed, 39 insertions(+), 41 deletions(-) diff --git a/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md b/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md index 96759a29e6..06fbfd62b8 100644 --- a/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md +++ b/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md @@ -1,56 +1,54 @@ -translating----geekpi - -How to Install Snort and Usage in Ubuntu 15.04 +在Ubuntu 15.04中如何安装和使用Snort ================================================================================ -Intrusion detection in a network is important for IT security. Intrusion Detection System used for the detection of illegal and malicious attempts in the network. Snort is well-known open source intrusion detection system. Web interface (Snorby) can be used for better analysis of alerts. Snort can be used as an intrusion prevention system with iptables/pf firewall. In this article, we will install and configure an open source IDS system snort. 
+对于IT安全而言,入侵检测是一件非常重要的事。入侵检测系统用于检测网络中非法与恶意的请求。Snort是一款知名的开源入侵检测系统。其Web界面(Snorby)可以用于更好地分析警告。结合iptables/pf防火墙,Snort还可以用作入侵防御系统。本篇中,我们会安装并配置一个开源的IDS系统snort。

-### Snort Installation ###
+### Snort 安装 ###

-#### Prerequisite ####
+#### 要求 ####

-Data Acquisition library (DAQ) is used by the snort for abstract calls to packet capture libraries. It is available on snort website. Downloading process is shown in the following screenshot.
+snort使用数据采集库(DAQ)对数据包捕获库进行抽象调用。该库可以在snort官网下载,下载过程如下截图所示。

![downloading_daq](http://blog.linoxide.com/wp-content/uploads/2015/07/downloading_daq.png)

-Extract it and run ./configure, make and make install commands for DAQ installation. However, DAQ required other tools therefore ./configure script will generate following errors .
+解压后依次运行./configure、make、make install来安装DAQ。然而,DAQ还依赖其他工具,因此./configure脚本会报出下面的错误。

-flex and bison error
+flex和bison错误

![flexandbison_error](http://blog.linoxide.com/wp-content/uploads/2015/07/flexandbison_error.png)

-libpcap error.
+libpcap错误

![libpcap error](http://blog.linoxide.com/wp-content/uploads/2015/07/libpcap-error.png)

-Therefore first install flex/bison and libcap before DAQ installation which is shown in the figure.
+因此,如下图所示,在安装DAQ之前先安装flex/bison和libpcap。

![install_flex](http://blog.linoxide.com/wp-content/uploads/2015/07/install_flex.png)

-Installation of libpcap development library is shown below
+如下所示,安装libpcap开发库。

![libpcap-dev installation](http://blog.linoxide.com/wp-content/uploads/2015/07/libpcap-dev-installation.png)

-After installation of necessary tools, again run ./configure script which will show following output.
+安装完必要的工具后,再次运行./configure脚本,将会显示下面的输出。

![without_error_configure](http://blog.linoxide.com/wp-content/uploads/2015/07/without_error_configure.png)

-make and make install commands result is shown in the following screens. 
+make和make install 命令的结果如下所示。 ![make install](http://blog.linoxide.com/wp-content/uploads/2015/07/make-install.png) ![make](http://blog.linoxide.com/wp-content/uploads/2015/07/make.png) -After successful installation of DAQ, now we will install snort. Downloading using wget is shown in the below figure. +成功安装DAQ之后,我们现在安装snort。如下图使用wget下载它。 ![downloading_snort](http://blog.linoxide.com/wp-content/uploads/2015/07/downloading_snort.png) -Extract compressed package using below given command. +使用下面的命令解压安装包。 #tar -xvzf snort-2.9.7.3.tar.gz ![snort_extraction](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_extraction.png) -Create installation directory and set prefix parameter in the configure script. It is also recommended to enable sourcefire flag for Packet Performance Monitoring (PPM). +创建安装目录并在脚本中设置prefix参数。同样也建议启用包性能监控(PPM)标志。 #mkdir /usr/local/snort @@ -58,21 +56,21 @@ Create installation directory and set prefix parameter in the configure script. ![snort_installation](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_installation.png) -Configure script generates error due to missing libpcre-dev , libdumbnet-dev and zlib development libraries. +配置脚本由于缺少libpcre-dev、libdumbnet-dev 和zlib开发库而报错。 -error due to missing libpcre library. +配置脚本由于缺少libpcre库报错。 ![pcre-error](http://blog.linoxide.com/wp-content/uploads/2015/07/pcre-error.png) -error due to missing dnet (libdumbnet) library. +配置脚本由于缺少dnet(libdumbnet)库而报错。 ![libdnt error](http://blog.linoxide.com/wp-content/uploads/2015/07/libdnt-error.png) -configure script generate error due to missing zlib library. +配置脚本由于缺少zlib库而报错 ![zlib error](http://blog.linoxide.com/wp-content/uploads/2015/07/zlib-error.png) -Installation of all required development libraries is shown in the next screenshots. 
+如下所示,安装所有需要的开发库。 # aptitude install libpcre3-dev @@ -86,9 +84,9 @@ Installation of all required development libraries is shown in the next screensh ![zlibg-dev installation](http://blog.linoxide.com/wp-content/uploads/2015/07/zlibg-dev-installation.png) -After installation of above required libraries for snort, again run the configure scripts without any error. +安装完snort需要的库之后,再次运行配置脚本就不会报错了。 -Run make & make install commands for the compilation and installations of snort in /usr/local/snort directory. +运行make和make install命令在/usr/local/snort目录下完成安装。 #make @@ -98,22 +96,22 @@ Run make & make install commands for the compilation and installations of snort ![make install snort](http://blog.linoxide.com/wp-content/uploads/2015/07/make-install-snort.png) -Finally snort running from /usr/local/snort/bin directory. Currently it is in promisc mode (packet dump mode) of all traffic on eth0 interface. +最终snort在/usr/local/snort/bin中运行。现在它对eth0的所有流量都处在promisc模式(包转储模式)。 ![snort running](http://blog.linoxide.com/wp-content/uploads/2015/07/snort-running.png) -Traffic dump by the snort interface is shown in following figure. +如下图所示snort转储流量。 ![traffic](http://blog.linoxide.com/wp-content/uploads/2015/07/traffic1.png) -#### Rules and Configuration of Snort #### +#### Snort的规则和配置 #### -Snort installation from source code required rules and configuration setting therefore now we will copy rules and configuration under /etc/snort directory. We have created single bash scripts for rules and configuration setting. It is used for following snort setting. +从源码安装的snort需要规则和安装配置,因此我们会从/etc/snort下面复制规则和配置。我们已经创建了单独的bash脚本来用于规则和配置。它会设置下面这些snort设置。 -- Creation of snort user for snort IDS service on linux. -- Creation of directories and files under /etc directory for snort configuration. -- Permission setting and copying data from etc directory of snort source code. -- Remove # (comment sign) from rules path in snort.conf file. 
+- 在linux中创建snort用户用于snort IDS服务。 +- 在/etc下面创建snort的配置文件和文件夹。 +- 权限设置并从etc中复制snortsnort源代码 +- 从snort文件中移除规则中的#(注释符号)。 #!/bin/bash##PATH of source code of snort snort_src="/home/test/Downloads/snort-2.9.7.3" @@ -143,15 +141,15 @@ Snort installation from source code required rules and configuration setting the sed -i 's/include \$RULE\_PATH/#include \$RULE\_PATH/' /etc/snort/snort.conf echo "---DONE---" -Change the snort source directory in the script and run it. Following output appear in case of success. +改变脚本中的snort源目录并运行。下面是成功的输出。 ![running script](http://blog.linoxide.com/wp-content/uploads/2015/08/running_script.png) -Above script copied following files/directories from snort source into /etc/snort configuration file. +上面的脚本从snort源中复制下面的文件/文件夹到/etc/snort配置文件中 ![files copied](http://blog.linoxide.com/wp-content/uploads/2015/08/created.png) -Snort configuration file is very complex however following necessary changes are required in snort.conf for IDS proper working. +、snort的配置非常复杂,然而为了IDS能正常工作需要进行下面必要的修改。 ipvar HOME_NET 192.168.1.0/24 # LAN side @@ -171,32 +169,32 @@ Snort configuration file is very complex however following necessary changes are include $RULE_PATH/local.rules # file for custom rules -remove comment sign (#) from other rules such as ftp.rules,exploit.rules etc. +移除ftp.rules、exploit.rules前面的注释符号(#)。 ![path rules](http://blog.linoxide.com/wp-content/uploads/2015/08/path-rules.png) -Now [Download community][1] rules and extract under /etc/snort/rules directory. Enable community and emerging threats rules in snort.conf file. +下载[下载社区][1]规则并解压到/etc/snort/rules。启用snort.conf中的社区及紧急威胁规则。 ![wget_rules](http://blog.linoxide.com/wp-content/uploads/2015/08/wget_rules.png) ![community rules](http://blog.linoxide.com/wp-content/uploads/2015/08/community-rules1.png) -Run following command to test the configuration file after above mentioned changes. 
+进行了上面的更改后,运行下面的命令来检验配置文件。
+
+    #snort -T -c /etc/snort/snort.conf
+
+![snort running](http://blog.linoxide.com/wp-content/uploads/2015/08/snort-final.png)
+
-### Conclusion ###
+### 总结 ###

-In this article our focus was on the installation and configuration of an open source IDPS system snort on Ubuntu distribution. By default it is used for the monitoring of events however it can con configured inline mode for the protection of network. Snort rules can be tested and analysed in offline mode using pcap capture file.
+本篇中,我们重点介绍了开源IDPS系统snort在Ubuntu上的安装和配置。默认情况下它用于监控事件,然而它也可以配置成内联模式用于网络保护。snort规则可以在离线模式下使用pcap捕获文件进行测试和分析。

--------------------------------------------------------------------------------

via: http://linoxide.com/security/install-snort-usage-ubuntu-15-04/

作者:[nido][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

From 19ccf708732b86651de67cef8804946927b127aa Mon Sep 17 00:00:00 2001
From: ictlyh
Date: Sat, 15 Aug 2015 11:41:25 +0800
Subject: [PATCH 172/697] [Translate] tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md

---
 ...o Setup and Test Static Network Routing.md | 228 ------------------
 ...o Setup and Test Static Network Routing.md | 227 +++++++++++++++++
 2 files changed, 227 insertions(+), 228 deletions(-)
 delete mode 100644 sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
 create mode 100644 translated/tech/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md

diff --git a/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md b/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
deleted file mode 100644
index 731e78e5cf..0000000000
--- a/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
+++ /dev/null
@@ -1,228 +0,0 @@ 
-Translating by ictlyh -Part 1 - RHCE Series: How to Setup and Test Static Network Routing -================================================================================ -RHCE (Red Hat Certified Engineer) is a certification from Red Hat company, which gives an open source operating system and software to the enterprise community, It also gives training, support and consulting services for the companies. - -![RHCE Exam Preparation Guide](http://www.tecmint.com/wp-content/uploads/2015/07/RHCE-Exam-Series-by-TecMint.jpg) - -RHCE Exam Preparation Guide - -This RHCE (Red Hat Certified Engineer) is a performance-based exam (codename EX300), who possesses the additional skills, knowledge, and abilities required of a senior system administrator responsible for Red Hat Enterprise Linux (RHEL) systems. - -**Important**: [Red Hat Certified System Administrator][1] (RHCSA) certification is required to earn RHCE certification. - -Following are the exam objectives based on the Red Hat Enterprise Linux 7 version of the exam, which will going to cover in this RHCE series: - -- Part 1: How to Setup and Test Static Routing in RHEL 7 -- Part 2: How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters -- Part 3: How to Produce and Deliver System Activity Reports Using Linux Toolsets -- Part 4: Automate System Maintenance Tasks Using Shell Scripts -- Part 5: How to Configure Local and Remote System Logging -- Part 6: How to Configure a Samba Server and a NFS Server -- Part 7: Setting Up Complete SMTP Server for Mailing -- Part 8: Setting Up HTTPS and TLS on RHEL 7 -- Part 9: Setting Up Network Time Protocol -- Part 10: How to Configure a Cache-Only DNS Server - -To view fees and register for an exam in your country, check the [RHCE Certification][2] page. 
- -In this Part 1 of the RHCE series and the next, we will present basic, yet typical, cases where the principles of static routing, packet filtering, and network address translation come into play. - -![Setup Static Network Routing in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Setup-Static-Network-Routing-in-RHEL-7.jpg) - -RHCE: Setup and Test Network Static Routing – Part 1 - -Please note that we will not cover them in depth, but rather organize these contents in such a way that will be helpful to take the first steps and build from there. - -### Static Routing in Red Hat Enterprise Linux 7 ### - -One of the wonders of modern networking is the vast availability of devices that can connect groups of computers, whether in relatively small numbers and confined to a single room or several machines in the same building, city, country, or across continents. - -However, in order to effectively accomplish this in any situation, network packets need to be routed, or in other words, the path they follow from source to destination must be ruled somehow. - -Static routing is the process of specifying a route for network packets other than the default, which is provided by a network device known as the default gateway. Unless specified otherwise through static routing, network packets are directed to the default gateway; with static routing, other paths are defined based on predefined criteria, such as the packet destination. - -Let us define the following scenario for this tutorial. We have a Red Hat Enterprise Linux 7 box connecting to router #1 [192.168.0.1] to access the Internet and machines in 192.168.0.0/24. - -A second router (router #2) has two network interface cards: enp0s3 is also connected to router #1 to access the Internet and to communicate with the RHEL 7 box and other machines in the same network, whereas the other (enp0s8) is used to grant access to the 10.0.0.0/24 network where internal services reside, such as a web and / or database server. 
- -This scenario is illustrated in the diagram below: - -![Static Routing Network Diagram](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png) - -Static Routing Network Diagram - -In this article we will focus exclusively on setting up the routing table on our RHEL 7 box to make sure that it can both access the Internet through router #1 and the internal network via router #2. - -In RHEL 7, you will use the [ip command][3] to configure and show devices and routing using the command line. These changes can take effect immediately on a running system but since they are not persistent across reboots, we will use ifcfg-enp0sX and route-enp0sX files inside /etc/sysconfig/network-scripts to save our configuration permanently. - -To begin, let’s print our current routing table: - - # ip route show - -![Check Routing Table in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Current-Routing-Table.png) - -Check Current Routing Table - -From the output above, we can see the following facts: - -- The default gateway’s IP address is 192.168.0.1 and can be accessed via the enp0s3 NIC. -- When the system booted up, it enabled the zeroconf route to 169.254.0.0/16 (just in case). In few words, if a machine is set to obtain an IP address through DHCP but fails to do so for some reason, it is automatically assigned an address in this network. Bottom line is, this route will allow us to communicate, also via enp0s3, with other machines who have failed to obtain an IP address from a DHCP server. -- Last, but not least, we can communicate with other boxes inside the 192.168.0.0/24 network through enp0s3, whose IP address is 192.168.0.18. - -These are the typical tasks that you would have to perform in such a setting. 
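The three facts above follow from how a routing table is consulted: among all entries whose destination network contains the packet's target address, the most specific (longest) prefix wins, with the default route (0.0.0.0/0) as the catch-all. A small sketch of that selection rule — a conceptual model only, not the kernel's actual implementation:

```python
import ipaddress

# The three routes from the table above, as (prefix, description) pairs.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "default via 192.168.0.1"),
    (ipaddress.ip_network("169.254.0.0/16"), "zeroconf, dev enp0s3"),
    (ipaddress.ip_network("192.168.0.0/24"), "directly connected, dev enp0s3"),
]

def lookup(dst):
    """Longest-prefix match: the most specific matching route wins."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, desc) for net, desc in routes if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("192.168.0.20"))  # directly connected, dev enp0s3
print(lookup("8.8.8.8"))       # default via 192.168.0.1
```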
Unless specified otherwise, the following tasks should be performed in router #2: - -Make sure all NICs have been properly installed: - - # ip link show - -If one of them is down, bring it up: - - # ip link set dev enp0s8 up - -and assign an IP address in the 10.0.0.0/24 network to it: - - # ip addr add 10.0.0.17 dev enp0s8 - -Oops! We made a mistake in the IP address. We will have to remove the one we assigned earlier and then add the right one (10.0.0.18): - - # ip addr del 10.0.0.17 dev enp0s8 - # ip addr add 10.0.0.18 dev enp0s8 - -Now, please note that you can only add a route to a destination network through a gateway that is itself already reachable. For that reason, we need to assign an IP address within the 192.168.0.0/24 range to enp0s3 so that our RHEL 7 box can communicate with it: - - # ip addr add 192.168.0.19 dev enp0s3 - -Finally, we will need to enable packet forwarding: - - # echo "1" > /proc/sys/net/ipv4/ip_forward - -and stop / disable (just for the time being – until we cover packet filtering in the next article) the firewall: - - # systemctl stop firewalld - # systemctl disable firewalld - -Back in our RHEL 7 box (192.168.0.18), let’s configure a route to 10.0.0.0/24 through 192.168.0.19 (enp0s3 in router #2): - - # ip route add 10.0.0.0/24 via 192.168.0.19 - -After that, the routing table looks as follows: - - # ip route show - -![Show Network Routing Table](http://www.tecmint.com/wp-content/uploads/2015/07/Show-Network-Routing.png) - -Confirm Network Routing Table - -Likewise, add the corresponding route in the machine(s) you’re trying to reach in 10.0.0.0/24: - - # ip route add 192.168.0.0/24 via 10.0.0.18 - -You can test for basic connectivity using ping: - -In the RHEL 7 box, run - - # ping -c 4 10.0.0.20 - -where 10.0.0.20 is the IP address of a web server in the 10.0.0.0/24 network. - -In the web server (10.0.0.20), run - - # ping -c 192.168.0.18 - -where 192.168.0.18 is, as you will recall, the IP address of our RHEL 7 machine. 
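The reachability rule used above — a route's `via` gateway must itself sit on a directly connected network — can be expressed as a one-line check. A hedged sketch (the `gateway_ok` helper is our own invention, not a system API):

```python
import ipaddress

# Directly connected networks of the RHEL 7 box, derived from its interface address.
connected = [ipaddress.ip_network("192.168.0.18/24", strict=False)]

def gateway_ok(gw):
    """A 'via' gateway is acceptable only if it lies on a connected network."""
    gw = ipaddress.ip_address(gw)
    return any(gw in net for net in connected)

# ip route add 10.0.0.0/24 via 192.168.0.19  -> gateway is on-link, accepted.
assert gateway_ok("192.168.0.19")
# A gateway inside the not-yet-reachable 10.0.0.0/24 would be rejected here.
assert not gateway_ok("10.0.0.18")
```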
- -Alternatively, we can use [tcpdump][4] (you may need to install it with yum install tcpdump) to check the 2-way communication over TCP between our RHEL 7 box and the web server at 10.0.0.20. - -To do so, let’s start the logging in the first machine with: - - # tcpdump -qnnvvv -i enp0s3 host 10.0.0.20 - -and from another terminal in the same system let’s telnet to port 80 in the web server (assuming Apache is listening on that port; otherwise, indicate the right port in the following command): - - # telnet 10.0.0.20 80 - -The tcpdump log should look as follows: - -![Check Network Communication between Servers](http://www.tecmint.com/wp-content/uploads/2015/07/Tcpdump-logs.png) - -Check Network Communication between Servers - -Where the connection has been properly initialized, as we can tell by looking at the 2-way communication between our RHEL 7 box (192.168.0.18) and the web server (10.0.0.20). - -Please remember that these changes will go away when you restart the system. If you want to make them persistent, you will need to edit (or create, if they don’t already exist) the following files, in the same systems where we performed the above commands. - -Though not strictly necessary for our test case, you should know that /etc/sysconfig/network contains system-wide network parameters. A typical /etc/sysconfig/network looks as follows: - - # Enable networking on this system? - NETWORKING=yes - # Hostname. Should match the value in /etc/hostname - HOSTNAME=yourhostnamehere - # Default gateway - GATEWAY=XXX.XXX.XXX.XXX - # Device used to connect to default gateway. Replace X with the appropriate number. - GATEWAYDEV=enp0sX - -When it comes to setting specific variables and values for each NIC (as we did for router #2), you will have to edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 and /etc/sysconfig/network-scripts/ifcfg-enp0s8. 
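Since these files are plain KEY=VALUE text with `#` comments, they are easy to inspect programmatically. A minimal sketch (a hypothetical helper, not part of any system tool):

```python
def parse_sysconfig(text):
    """Read ifcfg-style KEY=VALUE lines into a dict, skipping blanks and comments."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and '#' comments
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

sample = """\
# Enable networking on this system?
NETWORKING=yes
GATEWAY=192.168.0.1
GATEWAYDEV=enp0s3
"""
conf = parse_sysconfig(sample)
print(conf["GATEWAY"])  # 192.168.0.1
```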
- -Following our case, - - TYPE=Ethernet - BOOTPROTO=static - IPADDR=192.168.0.19 - NETMASK=255.255.255.0 - GATEWAY=192.168.0.1 - NAME=enp0s3 - ONBOOT=yes - -and - - TYPE=Ethernet - BOOTPROTO=static - IPADDR=10.0.0.18 - NETMASK=255.255.255.0 - GATEWAY=10.0.0.1 - NAME=enp0s8 - ONBOOT=yes - -for enp0s3 and enp0s8, respectively. - -As for routing in our client machine (192.168.0.18), we will need to edit /etc/sysconfig/network-scripts/route-enp0s3: - - 10.0.0.0/24 via 192.168.0.19 dev enp0s3 - -Now reboot your system and you should see that route in your table. - -### Summary ### - -In this article we have covered the essentials of static routing in Red Hat Enterprise Linux 7. Although scenarios may vary, the case presented here illustrates the required principles and the procedures to perform this task. Before wrapping up, I would like to suggest you to take a look at [Chapter 4][5] of the Securing and Optimizing Linux section in The Linux Documentation Project site for further details on the topics covered here. - -Free ebook on Securing & Optimizing Linux: The Hacking Solution (v.3.0) – This 800+ eBook contains comprehensive collection of Linux security tips and how to use them safely and easily to configure Linux-based applications and services. - -![Linux Security and Optimization Book](http://www.tecmint.com/wp-content/uploads/2015/07/Linux-Security-Optimization-Book.gif) - -Linux Security and Optimization Book - -[Download Now][6] - -In the next article we will talk about packet filtering and network address translation to sum up the networking basic skills needed for the RHCE certification. - -As always, we look forward to hearing from you, so feel free to leave your questions, comments, and suggestions using the form below. 
- --------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
-
-作者:[Gabriel Cánepa][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/gacanepa/
-[1]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
-[2]:https://www.redhat.com/en/services/certification/rhce
-[3]:http://www.tecmint.com/ip-command-examples/
-[4]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/
-[5]:http://www.tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/net-manage.html
-[6]:http://tecmint.tradepub.com/free/w_opeb01/prgm.cgi
\ No newline at end of file
diff --git a/translated/tech/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md b/translated/tech/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
new file mode 100644
index 0000000000..03038b92d5
--- /dev/null
+++ b/translated/tech/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
@@ -0,0 +1,227 @@
+RHCE 系列第一部分:如何设置和测试静态网络路由
+================================================================================
+RHCE(Red Hat Certified Engineer,红帽认证工程师)是红帽公司的一个认证。红帽公司向企业社区贡献开源操作系统和软件,同时还为企业提供培训、支持和咨询服务。
+
+![RHCE 考试准备指南](http://www.tecmint.com/wp-content/uploads/2015/07/RHCE-Exam-Series-by-TecMint.jpg)
+
+RHCE 考试准备指南
+
+RHCE 是一个基于实际操作能力的考试(代号 EX300),面向那些拥有更多的技能、知识和能力的红帽企业版 Linux(RHEL)系统高级系统管理员。
+
+**重要**:获得 RHCE 认证前,需要先通过 [红帽认证系统管理员][1](Red Hat Certified System Administrator,RHCSA)认证。
+
+以下是基于红帽企业版 Linux 7 考试的考试目标,我们会在该 RHCE 系列中分别介绍:
+
+- 第一部分:如何在 RHEL 7 中设置和测试静态路由
+- 第二部分:如何进行包过滤、网络地址转换和设置内核运行时参数
+- 第三部分:如何使用 Linux 工具集生成和发送系统活动报告
+- 第四部分:使用 Shell 脚本进行自动化系统维护
+- 第五部分:如何配置本地和远程系统日志
+- 第六部分:如何配置一个 Samba 服务器或 NFS 服务器(译者注:Samba 是在 Linux 和 UNIX 系统上实现 SMB 
协议的一个免费软件,由服务器及客户端程序构成。SMB,Server Message Block,信息服务块,是一种在局域网上共享文件和打印机的通信协议,它为局域网内的不同计算机之间提供文件及打印机等资源的共享服务。)
+- 第七部分:为收发邮件配置完整的 SMTP 服务器
+- 第八部分:在 RHEL 7 上设置 HTTPS 和 TLS
+- 第九部分:设置网络时间协议
+- 第十部分:如何配置一个 Cache-Only DNS 服务器
+
+想查看你所在国家的考试费用并注册考试,可以到 [RHCE 认证][2] 网页。
+
+在 RHCE 系列的第一和第二部分,我们会介绍一些基本但典型的情形,也就是静态路由原理、包过滤和网络地址转换。
+
+![在 RHEL 中设置静态网络路由](http://www.tecmint.com/wp-content/uploads/2015/07/Setup-Static-Network-Routing-in-RHEL-7.jpg)
+
+RHCE 系列第一部分:设置和测试网络静态路由
+
+请注意我们不会作深入的介绍,但以这种方式组织内容能帮助你开始第一步并继续后面的内容。
+
+### 红帽企业版 Linux 7 中的静态路由 ###
+
+现代网络的一个奇迹就是有很多可用的设备能将一组计算机连接起来,不管是在一个房间里少量的机器还是在一栋建筑物、城市、国家或者大洲之间的多台机器。
+
+然而,为了能在任意情形下有效地实现这些,需要对网络包进行路由,或者换句话说,它们从源到目的地的路径需要遵循某种规则。
+
+静态路由是为网络包指定一个路由的过程,而不是使用网络设备提供的默认网关。除非通过静态路由另有指定,否则网络包会被导向默认网关;使用静态路由,则可以基于预定义的标准(例如数据包目的地)定义其它路径。
+
+我们在该篇指南中会考虑以下场景。我们有一台红帽企业版 Linux 7,连接到路由器 1号 [192.168.0.1] 以访问因特网以及 192.168.0.0/24 中的其它机器。
+
+第二个路由器(路由器 2号)有两个网卡:enp0s3 同样通过网络连接到路由器 1号,以便连接 RHEL 7 以及相同网络中的其它机器,另外一个网卡(enp0s8)用于授权访问内部服务所在的 10.0.0.0/24 网络,例如 web 或数据库服务器。
+
+该场景可以用下面的示意图表示:
+
+![静态路由网络示意图](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png)
+
+静态路由网络示意图
+
+在这篇文章中我们会集中介绍在 RHEL 7 中设置路由表,确保它能通过路由器 1号访问因特网以及通过路由器 2号访问内部网络。
+
+在 RHEL 7 中,你会通过命令行用 [命令 ip][3] 配置和显示设备和路由。这些更改能在运行的系统中立即生效,但由于重启后不会保存,我们会使用 /etc/sysconfig/network-scripts 目录下的 ifcfg-enp0sX 和 route-enp0sX 文件永久保存我们的配置。
+
+首先,让我们打印出当前的路由表:
+
+ # ip route show
+
+![在 Linux 中检查路由表](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Current-Routing-Table.png)
+
+检查当前路由表
+
+从上面的输出中,我们可以得出以下结论:
+
+- 默认网关的 IP 是 192.168.0.1,可以通过网卡 enp0s3 访问。
+- 系统启动的时候,它启用了到 169.254.0.0/16 的 zeroconf 路由(以防万一)。也就是说,如果机器设置为通过 DHCP 获取一个 IP 地址,但是由于某些原因失败了,它就会在该网络中自动分配到一个地址。总之,该路由会允许我们通过 enp0s3 与其它没有从 DHCP 服务器成功获得 IP 地址的机器连接。
+- 最后,但同样重要的是,我们也可以通过 IP 地址是 192.168.0.18 的 enp0s3 和 192.168.0.0/24 网络中的其它机器连接。
+
+下面是这样的配置中你需要做的一些典型任务。除非另有说明,下面的任务都在路由器 2号上进行。
+
+确保正确安装了所有网卡:
+
+ # ip link show
+
+如果有某块网卡停用了,启动它:
+
+ # ip link set dev enp0s8 up
+
+分配 10.0.0.0/24 
网络中的一个 IP 地址给它:
+
+ # ip addr add 10.0.0.17 dev enp0s8
+
+噢!我们分配了一个错误的 IP 地址。我们需要删除之前分配的那个并添加正确的地址(10.0.0.18):
+
+ # ip addr del 10.0.0.17 dev enp0s8
+ # ip addr add 10.0.0.18 dev enp0s8
+
+现在请注意,只能通过一个本身已经可达的网关去添加到目标网络的路由。因为这个原因,我们需要在 192.168.0.0/24 范围中给 enp0s3 分配一个 IP 地址,这样我们的 RHEL 7 才能连接到它:
+
+ # ip addr add 192.168.0.19 dev enp0s3
+
+最后,我们需要启用包转发:
+
+ # echo "1" > /proc/sys/net/ipv4/ip_forward
+
+并暂时停用/禁用防火墙(直到下一篇文章中介绍了包过滤为止):
+
+ # systemctl stop firewalld
+ # systemctl disable firewalld
+
+回到我们的 RHEL 7(192.168.0.18),让我们配置一个通过 192.168.0.19(路由器 2号的 enp0s3)到 10.0.0.0/24 的路由:
+
+ # ip route add 10.0.0.0/24 via 192.168.0.19
+
+之后,路由表看起来像下面这样:
+
+ # ip route show
+
+![显示网络路由表](http://www.tecmint.com/wp-content/uploads/2015/07/Show-Network-Routing.png)
+
+确认网络路由表
+
+同样,在你尝试连接的 10.0.0.0/24 网络的机器中添加对应的路由:
+
+ # ip route add 192.168.0.0/24 via 10.0.0.18
+
+你可以使用 ping 测试基本连接:
+
+在 RHEL 7 中运行:
+
+ # ping -c 4 10.0.0.20
+
+10.0.0.20 是 10.0.0.0/24 网络中一个 web 服务器的 IP 地址。
+
+在 web 服务器(10.0.0.20)中运行
+
+ # ping -c 4 192.168.0.18
+
+192.168.0.18 也就是我们的 RHEL 7 机器的 IP 地址。
+
+另外,我们还可以使用 [tcpdump][4](需要通过 yum install tcpdump 安装)来检查我们 RHEL 7 和 10.0.0.20 中 web 服务器之间的 TCP 双向通信。
+
+首先在第一台机器中启用日志:
+
+ # tcpdump -qnnvvv -i enp0s3 host 10.0.0.20
+
+在同一个系统上的另一个终端,让我们通过 telnet 连接到 web 服务器的 80 号端口(假设 Apache 正在监听该端口;否则在下面命令中使用正确的端口):
+
+ # telnet 10.0.0.20 80
+
+tcpdump 日志看起来像下面这样:
+
+![检查服务器之间的网络连接](http://www.tecmint.com/wp-content/uploads/2015/07/Tcpdump-logs.png)
+
+检查服务器之间的网络连接
+
+通过查看我们 RHEL 7(192.168.0.18)和 web 服务器(10.0.0.20)之间的双向通信,可以看出已经正确地初始化了连接。
+
+请注意你重启系统后会丢失这些更改。如果你想把它们永久保存下来,你需要在我们运行上面的命令的相同系统中编辑(如果不存在的话就创建)以下的文件。
+
+尽管对于我们的测试例子不是严格要求,你需要知道 /etc/sysconfig/network 包含了一些系统范围的网络参数。一个典型的 /etc/sysconfig/network 看起来类似下面这样:
+
+ # Enable networking on this system?
+ NETWORKING=yes
+ # Hostname. Should match the value in /etc/hostname
+ HOSTNAME=yourhostnamehere
+ # Default gateway
+ GATEWAY=XXX.XXX.XXX.XXX
+ # Device used to connect to default gateway. 
Replace X with the appropriate number.
+ GATEWAYDEV=enp0sX
+
+当需要为每个网卡设置特定的变量和值时(正如我们在路由器 2号上面做的),你需要编辑 /etc/sysconfig/network-scripts/ifcfg-enp0s3 和 /etc/sysconfig/network-scripts/ifcfg-enp0s8 文件。
+
+下面是我们的例子,
+
+ TYPE=Ethernet
+ BOOTPROTO=static
+ IPADDR=192.168.0.19
+ NETMASK=255.255.255.0
+ GATEWAY=192.168.0.1
+ NAME=enp0s3
+ ONBOOT=yes
+
+以及
+
+ TYPE=Ethernet
+ BOOTPROTO=static
+ IPADDR=10.0.0.18
+ NETMASK=255.255.255.0
+ GATEWAY=10.0.0.1
+ NAME=enp0s8
+ ONBOOT=yes
+
+分别对应 enp0s3 和 enp0s8。
+
+至于我们客户端机器(192.168.0.18)上的路由,我们需要编辑 /etc/sysconfig/network-scripts/route-enp0s3:
+
+ 10.0.0.0/24 via 192.168.0.19 dev enp0s3
+
+现在重启系统,你应该可以在路由表中看到该路由规则。
+
+### 总结 ###
+
+在这篇文章中我们介绍了红帽企业版 Linux 7 的静态路由。尽管场景可能不同,这里介绍的例子说明了所需的原理以及进行该任务的步骤。结束之前,我还建议你看一下 Linux 文档项目中 [第四章][5] 保护和优化 Linux 部分,以了解这里介绍主题的更详细内容。
+
+免费电子书 Securing & Optimizing Linux: The Hacking Solution (v.3.0) - 这本 800 多页的电子书全面收集了 Linux 安全的小技巧,以及如何安全而简便地使用它们去配置基于 Linux 的应用和服务。
+
+![Linux 安全和优化](http://www.tecmint.com/wp-content/uploads/2015/07/Linux-Security-Optimization-Book.gif)
+
+Linux 安全和优化
+
+[马上下载][6]
+
+在下篇文章中我们会介绍数据包过滤和网络地址转换,以此总结 RHCE 认证需要的基本网络技能。
+
+如往常一样,我们期望听到你的回复,用下面的表格留下你的疑问、评论和建议吧。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
+
+作者:[Gabriel Cánepa][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
+[2]:https://www.redhat.com/en/services/certification/rhce
+[3]:http://www.tecmint.com/ip-command-examples/
+[4]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/
+[5]:http://www.tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/net-manage.html 
+[6]:http://tecmint.tradepub.com/free/w_opeb01/prgm.cgi \ No newline at end of file From 2177afef00a1e6fcb1108dad75fb280cc9d040af Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 15 Aug 2015 12:32:52 +0800 Subject: [PATCH 173/697] Rename sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md to translated/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md --- .../20150811 How to Install Snort and Usage in Ubuntu 15.04.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md (100%) diff --git a/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md b/translated/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md similarity index 100% rename from sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md rename to translated/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md From 9a5f4c41627956b74c3daaa6b1e2ee3e31bc0014 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 15 Aug 2015 15:31:15 +0800 Subject: [PATCH 174/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译了一部分,太长了 --- ...28 Process of the Linux kernel building.md | 23 ++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index 1c03ebbe72..f11c1cc7a2 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -1,26 +1,41 @@ Translating by Ezio Process of the Linux kernel building +如何构建Linux 内核的 ================================================================================ -Introduction +介绍 -------------------------------------------------------------------------------- I will not tell you how to 
build and install a custom Linux kernel on your machine; you can find many [resources](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) that will help you do it. Instead, in this part we will learn what occurs when you type `make` in the directory with the Linux kernel source code. When I just started to learn the source code of the Linux kernel, the [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) was the first file that I opened. And it was scary :) This [makefile](https://en.wikipedia.org/wiki/Make_%28software%29) contained `1591` lines of code at the time when I wrote this part, which was the [third](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) release candidate.

+我不会告诉你怎么在自己的电脑上去构建、安装一个定制化的 Linux 内核,这样的[资料](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) 太多了,它们会对你有帮助。本文会告诉你当你在内核源码路径里敲下`make` 时会发生什么。当我刚刚开始学习内核代码时,[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 是我打开的第一个文件,这个文件真令人害怕 :)。那时候这个[Makefile](https://en.wikipedia.org/wiki/Make_%28software%29) 包含了`1591` 行代码,当我开始写本文时,这个[Makefile](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) 已经是第三个候选版本了。

This makefile is the top makefile in the Linux kernel source code, and the kernel build starts here. Yes, it is big, and moreover, if you've read the source code of the Linux kernel you may have noted that every directory with source code has its own makefile. Of course it is not feasible to describe how each source file is compiled and linked. So, we will see the compilation only for the standard case. You will not find here the building of the kernel's documentation, cleaning of the kernel source code, [tags](https://en.wikipedia.org/wiki/Ctags) generation, [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler) related stuff, etc. 
We will start from the `make` execution with the standard kernel configuration file and will finish with the building of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage). It would be good if you're already familiar with the [make](https://en.wikipedia.org/wiki/Make_%28software%29) util, but I will anyway try to describe all the code that appears in this part. So let's start.

+这个makefile 是Linux 内核代码的顶端makefile ,内核构建就始于此处。是的,它的内容很多,但是如果你已经读过内核源代码,你就会发现每个包含代码的目录都有一个自己的makefile。当然了,我们不会去描述每个代码文件是怎么编译链接的,所以我们将只会挑选一些通用的例子来说明问题。你不会在这里找到构建内核文档、清理内核代码、[tags](https://en.wikipedia.org/wiki/Ctags) 的生成和[交叉编译](https://en.wikipedia.org/wiki/Cross_compiler) 相关的说明,等等。我们将从`make` 开始,使用标准的内核配置文件,到生成了内核镜像[bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage) 结束。
+
+如果你已经很了解[make](https://en.wikipedia.org/wiki/Make_%28software%29) 工具那是最好,但是我也会描述本文出现的相关代码。
+
+让我们开始吧
+
+
 Preparation before the kernel compilation
+编译内核前的准备
---------------------------------------------------------------------------------

There are many things to prepare before the kernel compilation can be started. The main point here is to find and configure the type of compilation, and to parse the command line arguments that are passed to the `make` util. So let's dive into the top `Makefile` of the Linux kernel.

+在开始编译前要进行很多准备工作。最主要的就是确定编译的类型并完成配置,以及解析传递给 `make` 命令的命令行参数。

The Linux kernel top `Makefile` is responsible for building two major products: [vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (the resident kernel image) and the modules (any module files). 
The [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel starts from the definition of the following variables: +内核顶端的`Makefile` 负责构建两个主要的产品:[vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (内核镜像可执行文件)和模块文件。内核的 [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 以次开始: + ```Makefile VERSION = 4 PATCHLEVEL = 2 @@ -31,12 +46,16 @@ NAME = Hurr durr I'ma sheep These variables determine the current version of the Linux kernel and are used in the different places, for example in the forming of the `KERNELVERSION` variable: +这些变量决定了当前内核的版本,并且被使用在很多不同的地方,比如`KERNELVERSION` : + ```Makefile KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION) ``` After this we can see a couple of the `ifeq` condition that check some of the parameters passed to `make`. The Linux kernel `makefiles` provides a special `make help` target that prints all available targets and some of the command line arguments that can be passed to `make`. For example: `make V=1` - provides verbose builds. The first `ifeq` condition checks if the `V=n` option is passed to make: +接下来我们会看到很多`ifeq` 条件判断语句,它们负责检查传给`make` 的参数。内核的`Makefile` 提供了一个特殊的编译选项`make help` ,这个选项可以生成所有的可用目标和一些能传给`make` 的有效的命令行参数。举个例子,`make V=1` 会在构建过程中输出详细的编译信息,第一个`ifeq` 就是检查传递给make的`V=n` 选项。 + ```Makefile ifeq ("$(origin V)", "command line") KBUILD_VERBOSE = $(V) @@ -58,6 +77,8 @@ export quiet Q KBUILD_VERBOSE If this option is passed to `make` we set the `KBUILD_VERBOSE` variable to the value of the `V` option. Otherwise we set the `KBUILD_VERBOSE` variable to zero. After this we check value of the `KBUILD_VERBOSE` variable and set values of the `quiet` and `Q` variables depends on the `KBUILD_VERBOSE` value. The `@` symbols suppress the output of the command and if it will be set before a command we will see something like this: `CC scripts/mod/empty.o` instead of the `Compiling .... scripts/mod/empty.o`. 
In the end we just export all of these variables. The next `ifeq` statement checks that `O=/dir` option was passed to the `make`. This option allows to locate all output files in the given `dir`: +如果`V=n` 这个选项传给了`make` ,系统就会给变量`KBUILD_VERBOSE` 选项附上`V` 的值,否则的话`KBUILD_VERBOSE` 就会为0。然后系统会检查`KBUILD_VERBOSE` 的值,以此来决定`quiet` 和`Q` 的值。符号`@` 控制命令的输出,如果它被放在一个命令之前,这条命令的执行将会是`CC scripts/mod/empty.o`,而不是`Compiling .... scripts/mod/empty.o`(注:CC 在makefile 中一般都是编译命令)。最后系统仅仅导出所有的变量。下一个`ifeq` 语句检查的是传递给`make` 的选项`O=/dir`,这个选项允许在指定的目录`dir` 输出所有的结果文件: + ```Makefile ifeq ($(KBUILD_SRC),) From 4452a5c19d27820e077a99971d68c633e4ae8818 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 15 Aug 2015 16:25:13 +0800 Subject: [PATCH 175/697] [Translated] tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md --- ...ation and Set Kernel Runtime Parameters.md | 178 ------------------ ...o Setup and Test Static Network Routing.md | 0 ...ation and Set Kernel Runtime Parameters.md | 175 +++++++++++++++++ 3 files changed, 175 insertions(+), 178 deletions(-) delete mode 100644 sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md rename translated/tech/{ => RHCE}/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md (100%) create mode 100644 translated/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md diff --git a/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md b/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md deleted file mode 100644 index cd798b906d..0000000000 --- a/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md +++ /dev/null @@ -1,178 +0,0 @@ -Translating by ictlyh 
-Part 2 - How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters -================================================================================ -As promised in Part 1 (“[Setup Static Network Routing][1]”), in this article (Part 2 of RHCE series) we will begin by introducing the principles of packet filtering and network address translation (NAT) in Red Hat Enterprise Linux 7, before diving into setting runtime kernel parameters to modify the behavior of a running kernel if certain conditions change or needs arise. - -![Network Packet Filtering in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Packet-Filtering-in-RHEL.jpg) - -RHCE: Network Packet Filtering – Part 2 - -### Network Packet Filtering in RHEL 7 ### - -When we talk about packet filtering, we refer to a process performed by a firewall in which it reads the header of each data packet that attempts to pass through it. Then, it filters the packet by taking the required action based on rules that have been previously defined by the system administrator. - -As you probably know, beginning with RHEL 7, the default service that manages firewall rules is [firewalld][2]. Like iptables, it talks to the netfilter module in the Linux kernel in order to examine and manipulate network packets. Unlike iptables, updates can take effect immediately without interrupting active connections – you don’t even have to restart the service. - -Another advantage of firewalld is that it allows us to define rules based on pre-configured service names (more on that in a minute). - -In Part 1, we used the following scenario: - -![Static Routing Network Diagram](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png) - -Static Routing Network Diagram - -However, you will recall that we disabled the firewall on router #2 to simplify the example since we had not covered packet filtering yet. 
Let’s see now how we can enable incoming packets destined for a specific service or port in the destination. - -First, let’s add a permanent rule to allow inbound traffic in enp0s3 (192.168.0.19) to enp0s8 (10.0.0.18): - - # firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT - -The above command will save the rule to /etc/firewalld/direct.xml: - - # cat /etc/firewalld/direct.xml - -![Check Firewalld Saved Rules in CentOS 7](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firewalld-Save-Rules.png) - -Check Firewalld Saved Rules - -Then enable the rule for it to take effect immediately: - - # firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT - -Now you can telnet to the web server from the RHEL 7 box and run [tcpdump][3] again to monitor the TCP traffic between the two machines, this time with the firewall in router #2 enabled. - - # telnet 10.0.0.20 80 - # tcpdump -qnnvvv -i enp0s3 host 10.0.0.20 - -What if you want to only allow incoming connections to the web server (port 80) from 192.168.0.18 and block connections from other sources in the 192.168.0.0/24 network? - -In the web server’s firewall, add the following rules: - - # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/24" service name="http" accept' - # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/24" service name="http" accept' --permanent - # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' - # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' --permanent - -Now you can make HTTP requests to the web server, from 192.168.0.18 and from some other machine in 192.168.0.0/24. In the first case the connection should complete successfully, whereas in the second it will eventually timeout. 
- -To do so, any of the following commands will do the trick: - - # telnet 10.0.0.20 80 - # wget 10.0.0.20 - -I strongly advise you to check out the [Firewalld Rich Language][4] documentation in the Fedora Project Wiki for further details on rich rules. - -### Network Address Translation in RHEL 7 ### - -Network Address Translation (NAT) is the process where a group of computers (it can also be just one of them) in a private network are assigned an unique public IP address. As result, they are still uniquely identified by their own private IP address inside the network but to the outside they all “seem” the same. - -In addition, NAT makes it possible that computers inside a network sends requests to outside resources (like the Internet) and have the corresponding responses be sent back to the source system only. - -Let’s now consider the following scenario: - -![Network Address Translation in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Address-Translation-Diagram.png) - -Network Address Translation - -In router #2, we will move the enp0s3 interface to the external zone, and enp0s8 to the internal zone, where masquerading, or NAT, is enabled by default: - - # firewall-cmd --list-all --zone=external - # firewall-cmd --change-interface=enp0s3 --zone=external - # firewall-cmd --change-interface=enp0s3 --zone=external --permanent - # firewall-cmd --change-interface=enp0s8 --zone=internal - # firewall-cmd --change-interface=enp0s8 --zone=internal --permanent - -For our current setup, the internal zone – along with everything that is enabled in it will be the default zone: - - # firewall-cmd --set-default-zone=internal - -Next, let’s reload firewall rules and keep state information: - - # firewall-cmd --reload - -Finally, let’s add router #2 as default gateway in the web server: - - # ip route add default via 10.0.0.18 - -You can now verify that you can ping router #1 and an external site (tecmint.com, for example) from the web server: - - # ping -c 2 
192.168.0.1 - # ping -c 2 tecmint.com - -![Verify Network Routing](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Network-Routing.png) - -Verify Network Routing - -### Setting Kernel Runtime Parameters in RHEL 7 ### - -In Linux, you are allowed to change, enable, and disable the kernel runtime parameters, and RHEL is no exception. The /proc/sys interface (sysctl) lets you set runtime parameters on-the-fly to modify the system’s behavior without much hassle when operating conditions change. - -To do so, the echo shell built-in is used to write to files inside /proc/sys/, where is most likely one of the following directories: - -- dev: parameters for specific devices connected to the machine. -- fs: filesystem configuration (quotas and inodes, for example). -- kernel: kernel-specific configuration. -- net: network configuration. -- vm: use of the kernel’s virtual memory. - -To display the list of all the currently available values, run - - # sysctl -a | less - -In Part 1, we changed the value of the net.ipv4.ip_forward parameter by doing - - # echo 1 > /proc/sys/net/ipv4/ip_forward - -in order to allow a Linux machine to act as router. - -Another runtime parameter that you may want to set is kernel.sysrq, which enables the Sysrq key in your keyboard to instruct the system to perform gracefully some low-level functions, such as rebooting the system if it has frozen for some reason: - - # echo 1 > /proc/sys/kernel/sysrq - -To display the value of a specific parameter, use sysctl as follows: - - # sysctl - -For example, - - # sysctl net.ipv4.ip_forward - # sysctl kernel.sysrq - -Some parameters, such as the ones mentioned above, require only one value, whereas others (for example, fs.inode-state) require multiple values: - -![Check Kernel Parameters in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Kernel-Parameters.png) - -Check Kernel Parameters - -In either case, you need to read the kernel’s documentation before making any changes. 
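The dotted names accepted by sysctl map mechanically onto paths under /proc/sys — each dot becomes a slash. A tiny sketch of that mapping (sufficient for the simple parameter names used in this article; note that a few real parameters contain literal dots in their last component and would need special handling):

```python
def sysctl_path(name):
    """Translate a dotted sysctl name into its /proc/sys file path."""
    return "/proc/sys/" + name.replace(".", "/")

assert sysctl_path("net.ipv4.ip_forward") == "/proc/sys/net/ipv4/ip_forward"
assert sysctl_path("kernel.sysrq") == "/proc/sys/kernel/sysrq"
print(sysctl_path("fs.inode-state"))  # /proc/sys/fs/inode-state
```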
- -Please note that these settings will go away when the system is rebooted. To make these changes permanent, we will need to add .conf files inside the /etc/sysctl.d as follows: - - # echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/10-forward.conf - -(where the number 10 indicates the order of processing relative to other files in the same directory). - -and enable the changes with - - # sysctl -p /etc/sysctl.d/10-forward.conf - -### Summary ### - -In this tutorial we have explained the basics of packet filtering, network address translation, and setting kernel runtime parameters on a running system and persistently across reboots. I hope you have found this information useful, and as always, we look forward to hearing from you! -Don’t hesitate to share with us your questions, comments, or suggestions using the form below. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/perform-packet-filtering-network-address-translation-and-set-kernel-runtime-parameters-in-rhel/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/ -[2]:http://www.tecmint.com/firewalld-rules-for-centos-7/ -[3]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/ -[4]:https://fedoraproject.org/wiki/Features/FirewalldRichLanguage \ No newline at end of file diff --git a/translated/tech/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md b/translated/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md similarity index 100% rename from translated/tech/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md rename to translated/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static 
Network Routing.md diff --git a/translated/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md b/translated/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md new file mode 100644 index 0000000000..74b162be1c --- /dev/null +++ b/translated/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md @@ -0,0 +1,175 @@ +RHCE 第二部分 - 如何进行包过滤、网络地址转换和设置内核运行时参数 +================================================================================ +正如第一部分(“[设置静态网络路由][1]”)承诺的,在这篇文章(RHCE 系列第二部分),我们首先介绍红帽企业版 Linux 7中包过滤和网络地址转换原理,然后再介绍某些条件发送变化或者需要激活时设置运行时内核参数以改变运行时内核行为。 + +![RHEL 中的网络包过滤](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Packet-Filtering-in-RHEL.jpg) + +RHCE 第二部分:网络包过滤 + +### RHEL 7 中的网络包过滤 ### + +当我们讨论数据包过滤的时候,我们指防火墙读取每个尝试通过它的数据包的包头所进行的处理。然后,根据系统管理员之前定义的规则,通过采取所要求的动作过滤数据包。 + +正如你可能知道的,从 RHEL 7 开始,管理防火墙的默认服务是 [firewalld][2]。类似 iptables,它和 Linux 内核的 netfilter 模块交互以便检查和操作网络数据包。不像 iptables,Firewalld 的更新可以立即生效,而不用中断活跃的连接 - 你甚至不需要重启服务。 + +Firewalld 的另一个优势是它允许我们定义基于预配置服务名称的规则(之后会详细介绍)。 + +在第一部分,我们用了下面的场景: + +![静态路由网络示意图](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png) + +静态路由网络示意图 + +然而,你应该记得,由于还没有介绍包过滤,为了简化例子,我们停用了路由器 2号 的防火墙。现在让我们来看看如何可以使接收的数据包发送到目的地的特定服务或端口。 + +首先,让我们添加一条永久规则允许从 enp0s3 (192.168.0.19) 到 enp0s8 (10.0.0.18) 的绑定流量: + + # firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT + +上面的命令会把规则保存到 /etc/firewalld/direct.xml: + + # cat /etc/firewalld/direct.xml + +![在 CentOS 7 中检查 Firewalld 保存的规则](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firewalld-Save-Rules.png) + +检查 Firewalld 保存的规则 + +然后启用规则使其立即生效: + + # firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT + +现在你可以从 RHEL 7 中通过 telnet 登录到 web 服务器并再次运行 [tcpdump][3] 监视两台机器之间的 TCP 流量,这次路由器 2号已经启用了防火墙。 
+ + # telnet 10.0.0.20 80 + # tcpdump -qnnvvv -i enp0s3 host 10.0.0.20 + +如果你想只允许从 192.168.0.18 到 web 服务器(80 号端口)的连接而阻塞 192.168.0.0/24 网络中的其它来源呢? + +在 web 服务器的防火墙中添加以下规则: + + # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/24" service name="http" accept' + # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/24" service name="http" accept' --permanent + # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' + # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' --permanent + +现在你可以从 192.168.0.18 和 192.168.0.0/24 中的其它机器发送到 web 服务器的 HTTP 请求。第一种情况连接会成功完成,但第二种情况最终会超时。 + +任何下面的命令可以验证这个结果: + + # telnet 10.0.0.20 80 + # wget 10.0.0.20 + +我强烈建议你看看 Fedora Project Wiki 中的 [Firewalld Rich Language][4] 文档更详细地了解关于富规则的内容。 + +### RHEL 7 中的网络地址转换 ### + +网络地址转换(NAT)是为专用网络中的一组计算机(也可能是其中的一台)分配一个独立的公共 IP 地址的过程。结果,在内部网络中仍然可以用它们自己的私有 IP 地址区别,但外部“看来”它们是一样的。 + +另外,网络地址转换使得内部网络中的计算机发送请求到外部资源(例如因特网)然后只有源系统能接收到对应的响应成为可能。 + +现在让我们考虑下面的场景: + +![RHEL 中的网络地址转换](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Address-Translation-Diagram.png) + +网络地址转换 + +在路由器 2 中,我们会把 enp0s3 接口移动到外部区域,enp0s8 到内部区域,伪装或者说 NAT 默认是启用的: + + # firewall-cmd --list-all --zone=external + # firewall-cmd --change-interface=enp0s3 --zone=external + # firewall-cmd --change-interface=enp0s3 --zone=external --permanent + # firewall-cmd --change-interface=enp0s8 --zone=internal + # firewall-cmd --change-interface=enp0s8 --zone=internal --permanent + +对于我们当前的设置,内部区域 - 以及和它一起启用的任何东西都是默认区域: + + # firewall-cmd --set-default-zone=internal + +下一步,让我们重载防火墙规则并保持状态信息: + + # firewall-cmd --reload + +最后,在 web 服务器中添加路由器 2 为默认网关: + + # ip route add default via 10.0.0.18 + +现在你会发现在 web 服务器中你可以 ping 路由器 1 和外部网站(例如 tecmint.com): + + # ping -c 2 192.168.0.1 + # ping -c 2 tecmint.com + 
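如果上面的 ping 不通,可以先确认 NAT 的两个前提:external 区域启用了伪装,以及内核开启了 IPv4 转发。下面是一个示意脚本(nat_ready 这个函数名和 FW 变量都是本文为演示补充的假设写法,并非 firewalld 自带):

```shell
#!/bin/sh
# 在路由器 2 上检查 NAT 前提条件的示意脚本。
# FW 变量默认指向 firewall-cmd,留出它只是为了便于在无 root 的环境下演练。
FW="${FW:-firewall-cmd}"

nat_ready() {
    # --query-masquerade 在对应区域启用了伪装时输出 "yes"
    masq="$($FW --zone=external --query-masquerade 2>/dev/null)"
    fwd="$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null)"
    [ "$masq" = "yes" ] && [ "$fwd" = "1" ]
}

nat_ready && echo "NAT 已就绪" || echo "NAT 未就绪"
```

两项检查都通过之后,再去排查默认网关、路由表等其它因素。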
+![验证网络路由](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Network-Routing.png) + +验证网络路由 + +### 在 RHEL 7 中设置内核运行时参数 ### + +在 Linux 中,允许你更改、启用以及停用内核运行时参数,RHEL 也不例外。/proc/sys 接口允许你当操作条件发生变化时实时设置运行时参数以改变系统行为而不需太多麻烦。 + +为了实现这个目的,会用内建的 echo shell 写 /proc/sys/ 中的文件,其中 很可能是以下目录中的一个: + +- dev: 连接到机器中的特定设备的参数。 +- fs: 文件系统配置(例如 quotas 和 inodes)。 +- kernel: 内核配置。 +- net: 网络配置。 +- vm: 内核虚拟内存的使用。 + +要显示所有当前可用值的列表,运行 + + # sysctl -a | less + +在第一部分中,我们通过以下命令改变了 net.ipv4.ip_forward 参数的值以允许 Linux 机器作为一个路由器。 + + # echo 1 > /proc/sys/net/ipv4/ip_forward + +另一个你可能想要设置的运行时参数是 kernel.sysrq,它会启用你键盘上的 Sysrq 键,以使系统更好的运行一些底层函数,例如如果由于某些原因冻结了后重启系统: + + # echo 1 > /proc/sys/kernel/sysrq + +要显示特定参数的值,可以按照下面方式使用 sysctl: + + # sysctl + +例如, + + # sysctl net.ipv4.ip_forward + # sysctl kernel.sysrq + +一些参数,例如上面提到的一个,只需要一个值,而其它一些(例如 fs.inode-state)要求多个值: + +![在 Linux 中查看内核参数](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Kernel-Parameters.png) + +查看内核参数 + +不管什么情况下,做任何更改之前你都需要阅读内核文档。 + +请注意系统重启后这些设置会丢失。要使这些更改永久生效,我们需要添加内容到 /etc/sysctl.d 目录的 .conf 文件,像下面这样: + + # echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/10-forward.conf + +(其中数字 10 表示相对同一个目录中其它文件的处理顺序)。 + +并用下面命令启用更改 + + # sysctl -p /etc/sysctl.d/10-forward.conf + +### 总结 ### + +在这篇指南中我们解释了基本的包过滤、网络地址变换和在运行的系统中设置内核运行时参数并使重启后能持久化。我希望这些信息能对你有用,如往常一样,我们期望收到你的回复! 
+别犹豫,在下面的表格中和我们分享你的疑问、评论和建议吧。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/perform-packet-filtering-network-address-translation-and-set-kernel-runtime-parameters-in-rhel/ + +作者:[Gabriel Cánepa][a] +译者:[ictlyh](https://github.com/ictlyh) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/ +[2]:http://www.tecmint.com/firewalld-rules-for-centos-7/ +[3]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/ +[4]:https://fedoraproject.org/wiki/Features/FirewalldRichLanguage \ No newline at end of file From cf65318b5f157da02ce80fade5ce656818407155 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Sat, 15 Aug 2015 16:33:29 +0800 Subject: [PATCH 176/697] =?UTF-8?q?=E3=80=90=E7=BF=BB=E8=AF=91=E5=AE=8C?= =?UTF-8?q?=E6=AF=95=E3=80=9120150730=20Howto=20Configure=20Nginx=20as=20R?= =?UTF-8?q?reverse=20Proxy=20or=20Load=20Balancer=20with=20Weave=20and=20D?= =?UTF-8?q?ocker.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
or Load Balancer with Weave and Docker.md | 49 +++++++++---------- 1 file changed, 23 insertions(+), 26 deletions(-) diff --git a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md index f217db9c70..f38acdd874 100644 --- a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md +++ b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md @@ -1,24 +1,21 @@ - -Translating by dingdongnigetou - -Howto Configure Nginx as Rreverse Proxy / Load Balancer with Weave and Docker +如何使用Weave以及Docker搭建Nginx反向代理/负载均衡服务器 ================================================================================ -Hi everyone today we'll learnHowto configure Nginx as Rreverse Proxy / Load balancer with Weave and Docker Weave creates a virtual network that connects Docker containers with each other, deploys across multiple hosts and enables their automatic discovery. It allows us to focus on developing our application, rather than our infrastructure. It provides such an awesome environment that the applications uses the network as if its containers were all plugged into the same network without need to configure ports, mappings, link, etc. The services of the application containers on the network can be easily accessible to the external world with no matter where its running. Here, in this tutorial we'll be using weave to quickly and easily deploy nginx web server as a load balancer for a simple php application running in docker containers on multiple nodes in Amazon Web Services. Here, we will be introduced to WeaveDNS, which provides a simple way for containers to find each other using hostname with no changes in codes and tells other containers to connect to those names. 
+Hi, 今天我们将会学习如何使用如何使用Weave和Docker搭建Nginx反向代理/负载均衡服务器。Weave创建一个虚拟网络将跨主机部署的Docker容器连接在一起并使它们自动暴露给外部世界。它让我们更加专注于应用的开发,而不是基础架构。Weave提供了一个如此棒的环境,仿佛它的所有容器都属于同个网络,不需要端口/映射/连接等的配置。容器中的应用提供的服务在weave网络中可以轻易地被外部世界访问,不论你的容器运行在哪里。在这个教程里我们将会使用weave快速并且轻易地将nginx web服务器部署为一个负载均衡器,反向代理一个运行在Amazon Web Services里面多个节点上的docker容器中的简单php应用。这里我们将会介绍WeaveDNS,它提供一个简单的方式让容器利用主机名找到彼此,不需要改变代码,并且能够告诉其他容器连接到这些主机名。 -Here, in this tutorial, we will use Nginx to load balance requests to a set of containers running Apache. Here are the simple and easy to do steps on using Weave to configure nginx as a load balancer running in ubuntu docker container. +在这篇教程里,我们需要一个运行的容器集合来配置nginx负载均衡服务器。最简单轻松的方法就是使用Weave在ubuntu的docker容器中搭建nginx负载均衡服务器。 -### 1. Settting up AWS Instances ### +### 1. 搭建AWS实例 ### -First of all, we'll need to setup Amazon Web Service Instances so that we can run docker containers with Weave and Ubuntu as Operating System. We will use the [AWS CLI][1] to setup and configure two AWS EC2 instances. Here, in this tutorial, we'll use the smallest available instances, t1.micro. We will need to have a valid **Amazon Web Services account** with AWS CLI setup and configured. We'll first gonna clone the repository of weave from the github by running the following command in AWS CLI. +首先,我们需要搭建Amzaon Web Service实例,这样才能在ubuntu下用weave跑docker容器。我们将会使用[AWS CLI][1]来搭建和配置两个AWS EC2实例。在这里,我们使用最小的有效实例,t1.micro。我们需要一个有效的**Amazon Web Services账户**用以AWS命令行界面的搭建和配置。我们先在AWS命令行界面下使用下面的命令将github上的weave仓库克隆下来。 $ git clone http://github.com/fintanr/weave-gs $ cd weave-gs/aws-nginx-ubuntu-simple -After cloning the repository, we wanna run the script that will deploy two instances of t1.micro instance running Weave and Docker in Ubuntu Operating System. +在克隆完仓库之后,我们执行下面的脚本,这个脚本将会部署两个t1.micro实例,每个实例中都是ubuntu作为操作系统并用weave跑着docker容器。 $ sudo ./demo-aws-setup.sh -Here, for this tutorial we'll need the IP addresses of these instances further in future. 
These are stored in an environment file weavedemo.env which is created during the execution of the demo-aws-setup.sh. To get those ip addresses, we need to run the following command which will give the output similar to the output below. +在这里,我们将会在以后用到这些实例的IP地址。这些地址储存在一个weavedemo.env文件中,这个文件在执行demo-aws-setup.sh脚本的期间被创建。为了获取这些IP地址,我们需要执行下面的命令,命令输出类似下面的信息。 $ cat weavedemo.env @@ -27,56 +24,56 @@ Here, for this tutorial we'll need the IP addresses of these instances further i export WEAVE_AWS_DEMO_HOSTCOUNT=2 export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141) -Please note these are not the IP addresses for our tutorial, AWS dynamically allocate IP addresses to our instances. +请注意这些不是固定的IP地址,AWS会为我们的实例动态地分配IP地址。 -As were are using a bash, we will just source this file and execute it using the command below. +我们在bash下执行下面的命令使环境变量生效。 . ./weavedemo.env -### 2. Launching Weave and WeaveDNS ### +### 2. 启动Weave and WeaveDNS ### -After deploying the instances, we'll want to launch weave and weavedns on each hosts. Weave and weavedns allows us to easily deploy our containers to a new infrastructure and configuration without the need of changing the codes and without the need to understand concepts such as ambassador containers and links. Here are the commands to launch them in the first host. +在安装完实例之后,我们将会在每台主机上启动weave以及weavedns。Weave以及weavedns使得我们能够轻易地将容器部署到一个全新的基础架构以及配置中, 不需要改变代码,也不需要去理解像Ambassador容器以及Link机制之类的概念。下面是在第一台主机上启动weave以及weavedns的命令。 ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 $ sudo weave launch $ sudo weave launch-dns 10.2.1.1/24 -Next, we'll also wanna launch them in our second host. +下一步,我也准备在第二台主机上启动weave以及weavedns。 ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 $ sudo weave launch $WEAVE_AWS_DEMO_HOST1 $ sudo weave launch-dns 10.2.1.2/24 -### 3. Launching Application Containers ### +### 3. 启动应用容器 ### -Now, we wanna launch six containers across our two hosts running an Apache2 Web Server instance with our simple php site. 
So, we'll be running the following commands which will run 3 containers running Apache2 Web Server on our 1st instance. +现在,我们准备跨两台主机启动六个容器,这两台主机都用Apache2 Web服务实例跑着简单的php网站。为了在第一个Apache2 Web服务器实例跑三个容器, 我们将会使用下面的命令。 ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 $ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache $ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache $ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache -After that, we'll again launch 3 containers running apache2 web server in our 2nd instance as shown below. +在那之后,我们将会在第二个实例上启动另外三个容器,请使用下面的命令。 ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 $ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache $ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache $ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache -Note: Here, --with-dns option tells the container to use weavedns to resolve names and -h x.weave.local allows the host to be resolvable with WeaveDNS. +注意: 在这里,--with-dns选项告诉容器使用weavedns来解析主机名,-h x.weave.local则使得weavedns能够解析指定主机。 -### 4. Launching Nginx Container ### +### 4. 启动Nginx容器 ### -After our application containers are running well as expected, we'll wanna launch an nginx container which contains the nginx configuration which will round-robin across the severs for the reverse proxy or load balancing. To run the nginx container, we'll need to run the following command. +在应用容器运行得有如意料中的稳定之后,我们将会启动nginx容器,它将会在六个应用容器服务之间轮询并提供反向代理或者负载均衡。 为了启动nginx容器,请使用下面的命令。 ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 $ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple -Hence, our Nginx container is publicly exposed as a http server on $WEAVE_AWS_DEMO_HOST1. +因此,我们的nginx容器在$WEAVE_AWS_DEMO_HOST1上公开地暴露成为一个http服务器。 -### 5. 
Testing the Load Balancer ### +### 5. 测试负载均衡服务器 ### -To test our load balancer is working or not, we'll run a script that will make http requests to our nginx container. We'll make six requests so that we can see nginx moving through each of the webservers in round-robin turn. +为了测试我们的负载均衡服务器是否可以工作,我们执行一段可以发送http请求给nginx容器的脚本。我们将会发送6个请求,这样我们就能看到nginx在一次的轮询中服务于每台web服务器之间。 $ ./access-aws-hosts.sh @@ -113,14 +110,14 @@ To test our load balancer is working or not, we'll run a script that will make h ### Conclusion ### -Finally, we've successfully configured nginx as a reverse proxy or load balancer with weave and docker running ubuntu server in AWS (Amazon Web Service) EC2 . From the above output in above step, it is clear that we have configured it correctly. We can see that the request is being sent to 6 application containers in round-robin turn which is running a PHP app hosted in apache web server. Here, weave and weavedns did great work to deploy a containerised PHP application using nginx across multiple hosts on AWS EC2 without need to change in codes and connected the containers to eachother with the hostname using weavedns. If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! 
Enjoy :-) +我们最终成功地将nginx配置成一个反向代理/负载均衡服务器,通过使用weave以及运行在AWS(Amazon Web Service)EC2之中的ubuntu服务器里面的docker。从上面的步骤输出可以清楚的看到我们已经成功地配置了nginx。我们可以看到请求在一次循环中被发送到6个应用容器,这些容器在Apache2 Web服务器中跑着PHP应用。在这里,我们部署了一个容器化的PHP应用,使用nginx横跨多台在AWS EC2上的主机而不需要改变代码,利用weavedns使得每个容器连接在一起,只需要主机名就够了,眼前的这些便捷, 都要归功于weave以及weavedns。 如果你有任何的问题、建议、反馈,请在评论中注明,这样我们才能够做得更好,谢谢:-) -------------------------------------------------------------------------------- via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/ 作者:[Arun Pyasi][a] -译者:[译者ID](https://github.com/译者ID) +译者:[dingdongnigetou](https://github.com/dingdongnigetou) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From a7ac6a469b2bd157ce10739bc1cf2a4162a48b36 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Sat, 15 Aug 2015 16:34:48 +0800 Subject: [PATCH 177/697] =?UTF-8?q?=E3=80=90=E7=BF=BB=E8=AF=91=E5=AE=8C?= =?UTF-8?q?=E6=AF=95=E3=80=9120150730=20Howto=20Configure=20Nginx=20as=20R?= =?UTF-8?q?reverse=20Proxy=20or=20Load=20Balancer=20with=20Weave=20and=20D?= =?UTF-8?q?ocker.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
as Rreverse Proxy or Load Balancer with Weave and Docker.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md index f38acdd874..f90a1ce76d 100644 --- a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md +++ b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md @@ -108,7 +108,7 @@ Hi, 今天我们将会学习如何使用如何使用Weave和Docker搭建Nginx反 "date" : "2015-06-26 12:24:23" } -### Conclusion ### +### 结束语 ### 我们最终成功地将nginx配置成一个反向代理/负载均衡服务器,通过使用weave以及运行在AWS(Amazon Web Service)EC2之中的ubuntu服务器里面的docker。从上面的步骤输出可以清楚的看到我们已经成功地配置了nginx。我们可以看到请求在一次循环中被发送到6个应用容器,这些容器在Apache2 Web服务器中跑着PHP应用。在这里,我们部署了一个容器化的PHP应用,使用nginx横跨多台在AWS EC2上的主机而不需要改变代码,利用weavedns使得每个容器连接在一起,只需要主机名就够了,眼前的这些便捷, 都要归功于weave以及weavedns。 如果你有任何的问题、建议、反馈,请在评论中注明,这样我们才能够做得更好,谢谢:-) From 9a3838add4034cdec57f4a73c89d1506795a5b8e Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Sat, 15 Aug 2015 16:39:06 +0800 Subject: [PATCH 178/697] Create 20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md --- ... 
or Load Balancer with Weave and Docker.md | 126 ++++++++++++++++++ 1 file changed, 126 insertions(+) create mode 100644 translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md diff --git a/translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md new file mode 100644 index 0000000000..f90a1ce76d --- /dev/null +++ b/translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md @@ -0,0 +1,126 @@ +如何使用Weave以及Docker搭建Nginx反向代理/负载均衡服务器 +================================================================================ +Hi, 今天我们将会学习如何使用Weave和Docker搭建Nginx反向代理/负载均衡服务器。Weave创建一个虚拟网络将跨主机部署的Docker容器连接在一起并使它们自动暴露给外部世界。它让我们更加专注于应用的开发,而不是基础架构。Weave提供了一个如此棒的环境,仿佛它的所有容器都属于同个网络,不需要端口/映射/连接等的配置。容器中的应用提供的服务在weave网络中可以轻易地被外部世界访问,不论你的容器运行在哪里。在这个教程里我们将会使用weave快速并且轻易地将nginx web服务器部署为一个负载均衡器,反向代理一个运行在Amazon Web Services里面多个节点上的docker容器中的简单php应用。这里我们将会介绍WeaveDNS,它提供一个简单的方式让容器利用主机名找到彼此,不需要改变代码,并且能够告诉其他容器连接到这些主机名。 + +在这篇教程里,我们需要一个运行的容器集合来配置nginx负载均衡服务器。最简单轻松的方法就是使用Weave在ubuntu的docker容器中搭建nginx负载均衡服务器。 + +### 1. 
搭建AWS实例 ### + +首先,我们需要搭建Amazon Web Service实例,这样才能在ubuntu下用weave跑docker容器。我们将会使用[AWS CLI][1]来搭建和配置两个AWS EC2实例。在这里,我们使用最小的有效实例,t1.micro。我们需要一个有效的**Amazon Web Services账户**用以AWS命令行界面的搭建和配置。我们先在AWS命令行界面下使用下面的命令将github上的weave仓库克隆下来。 + + $ git clone http://github.com/fintanr/weave-gs + $ cd weave-gs/aws-nginx-ubuntu-simple + +在克隆完仓库之后,我们执行下面的脚本,这个脚本将会部署两个t1.micro实例,每个实例中都是ubuntu作为操作系统并用weave跑着docker容器。 + + $ sudo ./demo-aws-setup.sh + +在这里,我们将会在以后用到这些实例的IP地址。这些地址储存在一个weavedemo.env文件中,这个文件在执行demo-aws-setup.sh脚本的期间被创建。为了获取这些IP地址,我们需要执行下面的命令,命令输出类似下面的信息。 + + $ cat weavedemo.env + + export WEAVE_AWS_DEMO_HOST1=52.26.175.175 + export WEAVE_AWS_DEMO_HOST2=52.26.83.141 + export WEAVE_AWS_DEMO_HOSTCOUNT=2 + export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141) + +请注意这些不是固定的IP地址,AWS会为我们的实例动态地分配IP地址。 + +我们在bash下执行下面的命令使环境变量生效。 + + . ./weavedemo.env + +### 2. 启动Weave and WeaveDNS ### + +在安装完实例之后,我们将会在每台主机上启动weave以及weavedns。Weave以及weavedns使得我们能够轻易地将容器部署到一个全新的基础架构以及配置中, 不需要改变代码,也不需要去理解像Ambassador容器以及Link机制之类的概念。下面是在第一台主机上启动weave以及weavedns的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 + $ sudo weave launch + $ sudo weave launch-dns 10.2.1.1/24 + +下一步,我们也准备在第二台主机上启动weave以及weavedns。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 + $ sudo weave launch $WEAVE_AWS_DEMO_HOST1 + $ sudo weave launch-dns 10.2.1.2/24 + +### 3. 
启动应用容器 ### + +现在,我们准备跨两台主机启动六个容器,这两台主机都用Apache2 Web服务实例跑着简单的php网站。为了在第一个Apache2 Web服务器实例跑三个容器, 我们将会使用下面的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 + $ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache + +在那之后,我们将会在第二个实例上启动另外三个容器,请使用下面的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 + $ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache + +注意: 在这里,--with-dns选项告诉容器使用weavedns来解析主机名,-h x.weave.local则使得weavedns能够解析指定主机。 + +### 4. 启动Nginx容器 ### + +在应用容器运行得有如意料中的稳定之后,我们将会启动nginx容器,它将会在六个应用容器服务之间轮询并提供反向代理或者负载均衡。 为了启动nginx容器,请使用下面的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 + $ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple + +因此,我们的nginx容器在$WEAVE_AWS_DEMO_HOST1上公开地暴露成为一个http服务器。 + +### 5. 
测试负载均衡服务器 ### + +为了测试我们的负载均衡服务器是否可以工作,我们执行一段可以发送http请求给nginx容器的脚本。我们将会发送6个请求,这样我们就能看到nginx在一次的轮询中服务于每台web服务器之间。 + + $ ./access-aws-hosts.sh + + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws1.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws2.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws3.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws4.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws5.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws6.weave.local", + "date" : "2015-06-26 12:24:23" + } + +### 结束语 ### + +我们最终成功地将nginx配置成一个反向代理/负载均衡服务器,通过使用weave以及运行在AWS(Amazon Web Service)EC2之中的ubuntu服务器里面的docker。从上面的步骤输出可以清楚的看到我们已经成功地配置了nginx。我们可以看到请求在一次循环中被发送到6个应用容器,这些容器在Apache2 Web服务器中跑着PHP应用。在这里,我们部署了一个容器化的PHP应用,使用nginx横跨多台在AWS EC2上的主机而不需要改变代码,利用weavedns使得每个容器连接在一起,只需要主机名就够了,眼前的这些便捷, 都要归功于weave以及weavedns。 如果你有任何的问题、建议、反馈,请在评论中注明,这样我们才能够做得更好,谢谢:-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/ + +作者:[Arun Pyasi][a] +译者:[dingdongnigetou](https://github.com/dingdongnigetou) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://console.aws.amazon.com/ From b95ff897f01ac5378675bbbe5a390d395c7db16e Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Sat, 15 Aug 2015 16:39:26 +0800 Subject: [PATCH 179/697] Delete 20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md --- ... 
or Load Balancer with Weave and Docker.md | 126 ------------------ 1 file changed, 126 deletions(-) delete mode 100644 sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md diff --git a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md deleted file mode 100644 index f90a1ce76d..0000000000 --- a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md +++ /dev/null @@ -1,126 +0,0 @@ -如何使用Weave以及Docker搭建Nginx反向代理/负载均衡服务器 -================================================================================ -Hi, 今天我们将会学习如何使用如何使用Weave和Docker搭建Nginx反向代理/负载均衡服务器。Weave创建一个虚拟网络将跨主机部署的Docker容器连接在一起并使它们自动暴露给外部世界。它让我们更加专注于应用的开发,而不是基础架构。Weave提供了一个如此棒的环境,仿佛它的所有容器都属于同个网络,不需要端口/映射/连接等的配置。容器中的应用提供的服务在weave网络中可以轻易地被外部世界访问,不论你的容器运行在哪里。在这个教程里我们将会使用weave快速并且轻易地将nginx web服务器部署为一个负载均衡器,反向代理一个运行在Amazon Web Services里面多个节点上的docker容器中的简单php应用。这里我们将会介绍WeaveDNS,它提供一个简单的方式让容器利用主机名找到彼此,不需要改变代码,并且能够告诉其他容器连接到这些主机名。 - -在这篇教程里,我们需要一个运行的容器集合来配置nginx负载均衡服务器。最简单轻松的方法就是使用Weave在ubuntu的docker容器中搭建nginx负载均衡服务器。 - -### 1. 
搭建AWS实例 ### - -首先,我们需要搭建Amzaon Web Service实例,这样才能在ubuntu下用weave跑docker容器。我们将会使用[AWS CLI][1]来搭建和配置两个AWS EC2实例。在这里,我们使用最小的有效实例,t1.micro。我们需要一个有效的**Amazon Web Services账户**用以AWS命令行界面的搭建和配置。我们先在AWS命令行界面下使用下面的命令将github上的weave仓库克隆下来。 - - $ git clone http://github.com/fintanr/weave-gs - $ cd weave-gs/aws-nginx-ubuntu-simple - -在克隆完仓库之后,我们执行下面的脚本,这个脚本将会部署两个t1.micro实例,每个实例中都是ubuntu作为操作系统并用weave跑着docker容器。 - - $ sudo ./demo-aws-setup.sh - -在这里,我们将会在以后用到这些实例的IP地址。这些地址储存在一个weavedemo.env文件中,这个文件在执行demo-aws-setup.sh脚本的期间被创建。为了获取这些IP地址,我们需要执行下面的命令,命令输出类似下面的信息。 - - $ cat weavedemo.env - - export WEAVE_AWS_DEMO_HOST1=52.26.175.175 - export WEAVE_AWS_DEMO_HOST2=52.26.83.141 - export WEAVE_AWS_DEMO_HOSTCOUNT=2 - export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141) - -请注意这些不是固定的IP地址,AWS会为我们的实例动态地分配IP地址。 - -我们在bash下执行下面的命令使环境变量生效。 - - . ./weavedemo.env - -### 2. 启动Weave and WeaveDNS ### - -在安装完实例之后,我们将会在每台主机上启动weave以及weavedns。Weave以及weavedns使得我们能够轻易地将容器部署到一个全新的基础架构以及配置中, 不需要改变代码,也不需要去理解像Ambassador容器以及Link机制之类的概念。下面是在第一台主机上启动weave以及weavedns的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 - $ sudo weave launch - $ sudo weave launch-dns 10.2.1.1/24 - -下一步,我也准备在第二台主机上启动weave以及weavedns。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 - $ sudo weave launch $WEAVE_AWS_DEMO_HOST1 - $ sudo weave launch-dns 10.2.1.2/24 - -### 3. 
启动应用容器 ### - -现在,我们准备跨两台主机启动六个容器,这两台主机都用Apache2 Web服务实例跑着简单的php网站。为了在第一个Apache2 Web服务器实例跑三个容器, 我们将会使用下面的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 - $ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache - -在那之后,我们将会在第二个实例上启动另外三个容器,请使用下面的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 - $ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache - -注意: 在这里,--with-dns选项告诉容器使用weavedns来解析主机名,-h x.weave.local则使得weavedns能够解析指定主机。 - -### 4. 启动Nginx容器 ### - -在应用容器运行得有如意料中的稳定之后,我们将会启动nginx容器,它将会在六个应用容器服务之间轮询并提供反向代理或者负载均衡。 为了启动nginx容器,请使用下面的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 - $ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple - -因此,我们的nginx容器在$WEAVE_AWS_DEMO_HOST1上公开地暴露成为一个http服务器。 - -### 5. 
测试负载均衡服务器 ### - -为了测试我们的负载均衡服务器是否可以工作,我们执行一段可以发送http请求给nginx容器的脚本。我们将会发送6个请求,这样我们就能看到nginx在一次的轮询中服务于每台web服务器之间。 - - $ ./access-aws-hosts.sh - - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws1.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws2.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws3.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws4.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws5.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws6.weave.local", - "date" : "2015-06-26 12:24:23" - } - -### 结束语 ### - -我们最终成功地将nginx配置成一个反向代理/负载均衡服务器,通过使用weave以及运行在AWS(Amazon Web Service)EC2之中的ubuntu服务器里面的docker。从上面的步骤输出可以清楚的看到我们已经成功地配置了nginx。我们可以看到请求在一次循环中被发送到6个应用容器,这些容器在Apache2 Web服务器中跑着PHP应用。在这里,我们部署了一个容器化的PHP应用,使用nginx横跨多台在AWS EC2上的主机而不需要改变代码,利用weavedns使得每个容器连接在一起,只需要主机名就够了,眼前的这些便捷, 都要归功于weave以及weavedns。 如果你有任何的问题、建议、反馈,请在评论中注明,这样我们才能够做得更好,谢谢:-) - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/ - -作者:[Arun Pyasi][a] -译者:[dingdongnigetou](https://github.com/dingdongnigetou) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:http://console.aws.amazon.com/ From db0e5a7401554ce7ab4b6e0baa04d17242cee1a2 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sat, 15 Aug 2015 17:12:33 +0800 Subject: [PATCH 180/697] [Translated]RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md --- ...Boot Shutdown and Everything in Between.md | 218 ------------------ 
...Boot Shutdown and Everything in Between.md | 214 +++++++++++++++++ 2 files changed, 214 insertions(+), 218 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md b/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md deleted file mode 100644 index 23bf9f0ac1..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md +++ /dev/null @@ -1,218 +0,0 @@ -FSSlc translating - -RHCSA Series: Process Management in RHEL 7: Boot, Shutdown, and Everything in Between – Part 5 -================================================================================ -We will start this article with an overall and brief revision of what happens since the moment you press the Power button to turn on your RHEL 7 server until you are presented with the login screen in a command line interface. - -![RHEL 7 Boot Process](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Process.png) - -Linux Boot Process - -**Please note that:** - -1. the same basic principles apply, with perhaps minor modifications, to other Linux distributions as well, and -2. the following description is not intended to represent an exhaustive explanation of the boot process, but only the fundamentals. - -### Linux Boot Process ### - -1. The POST (Power On Self Test) initializes and performs hardware checks. - -2. 
When the POST finishes, the system control is passed to the first stage boot loader, which is stored on either the boot sector of one of the hard disks (for older systems using BIOS and MBR), or a dedicated (U)EFI partition. - -3. The first stage boot loader then loads the second stage boot loader, most usually GRUB (GRand Unified Boot Loader), which resides inside /boot, which in turn loads the kernel and the initial RAM-based file system (also known as initramfs, which contains programs and binary files that perform the necessary actions needed to ultimately mount the actual root filesystem). - -4. We are presented with a splash screen that allows us to choose an operating system and kernel to boot: - -![RHEL 7 Boot Screen](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Screen.png) - -Boot Menu Screen - -5. The kernel sets up the hardware attached to the system and, once the root filesystem has been mounted, launches the process with PID 1, which in turn will initialize other processes and present us with a login prompt. - -Note that if we wish to do so at a later time, we can examine the specifics of this process using the [dmesg command][1] and filtering its output using the tools that we have explained in previous articles of this series. - -![Login Screen and Process PID](http://www.tecmint.com/wp-content/uploads/2015/03/Login-Screen-Process-PID.png) - -Login Screen and Process PID - -In the example above, we used the well-known ps command to display a list of current processes whose parent process (or in other words, the process that started them) is systemd (the system and service manager that most modern Linux distributions have switched to) during system startup: - - # ps -o ppid,pid,uname,comm --ppid=1 - -Remember that the -o flag (short for --format) allows you to present the output of ps in a customized format to suit your needs using the keywords specified in the STANDARD FORMAT SPECIFIERS section in man ps. 
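To make the idea of format specifiers a bit more concrete, here is a short sketch; it is not part of the original article, and the chosen columns are only one possible combination out of the many listed in man ps:

```shell
#!/bin/sh
# Illustrative sketch: a custom column set for ps. Every keyword after -o
# comes from the STANDARD FORMAT SPECIFIERS section of `man ps`.

# Children of PID 1, with parent PID, own PID, user, command, elapsed time.
# Inside a container PID 1 may have no children, so tolerate an empty match.
ps -o ppid,pid,user,comm,etime --ppid=1 || true

# The same flag works for a single PID; here, the shell running this script:
ps -o pid,comm,etime -p $$
```

Swapping in other keywords (pcpu, pmem, tty, start_time, and so on) changes only the columns, not the selection of processes.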
- -Another case in which you will want to define the output of ps instead of going with the default is when you need to find processes that are causing a significant CPU and / or memory load, and sort them accordingly: - - # ps aux --sort=+pcpu # Sort by %CPU (ascending) - # ps aux --sort=-pcpu # Sort by %CPU (descending) - # ps aux --sort=+pmem # Sort by %MEM (ascending) - # ps aux --sort=-pmem # Sort by %MEM (descending) - # ps aux --sort=+pcpu,-pmem # Combine sort by %CPU (ascending) and %MEM (descending) - -![http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png](http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png) - -Customize ps Command Output - -### An Introduction to SystemD ### - -Few decisions in the Linux world have caused more controversies than the adoption of systemd by major Linux distributions. Systemd’s advocates name as its main advantages the following facts: - -Read Also: [The Story Behind ‘init’ and ‘systemd’][2] - -1. Systemd allows more processing to be done in parallel during system startup (as opposed to older SysVinit, which always tends to be slower because it starts processes one by one, checks if one depends on another, and then waits for daemons to launch so more services can start), and - -2. It works as a dynamic resource management in a running system. Thus, services are started when needed (to avoid consuming system resources if they are not being used) instead of being launched without a valid reason during boot. - -3. Backwards compatibility with SysVinit scripts. - -Systemd is controlled by the systemctl utility. If you come from a SysVinit background, chances are you will be familiar with: - -- the service tool, which -in those older systems- was used to manage SysVinit scripts, and -- the chkconfig utility, which served the purpose of updating and querying runlevel information for system services. 
-- shutdown, which you must have used several times to either restart or halt a running system. - -The following table shows the similarities between the use of these legacy tools and systemctl: - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Legacy tool | Systemctl equivalent | Description |
| --- | --- | --- |
| service name start | systemctl start name | Start name (where name is a service) |
| service name stop | systemctl stop name | Stop name |
| service name condrestart | systemctl try-restart name | Restarts name (if it’s already running) |
| service name restart | systemctl restart name | Restarts name |
| service name reload | systemctl reload name | Reloads the configuration for name |
| service name status | systemctl status name | Displays the current status of name |
| service --status-all | systemctl | Displays the status of all current services |
| chkconfig name on | systemctl enable name | Enable name to run on startup as specified in the unit file (the file to which the symlink points). The process of enabling or disabling a service to start automatically on boot consists in adding or removing symbolic links inside the /etc/systemd/system directory. |
| chkconfig name off | systemctl disable name | Disables name to run on startup as specified in the unit file (the file to which the symlink points) |
| chkconfig --list name | systemctl is-enabled name | Verify whether name (a specific service) is currently enabled |
| chkconfig --list | systemctl --type=service | Displays all services and tells whether they are enabled or disabled |
| shutdown -h now | systemctl poweroff | Power-off the machine (halt) |
| shutdown -r now | systemctl reboot | Reboot the system |
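If you maintain shell scripts that still call the legacy syntax, a tiny wrapper function can document the mapping from the table above. The sketch below is not part of the original article (sshd and httpd are just example service names); it only echoes the systemctl command it would run, so the translation is easy to inspect:

```shell
#!/bin/sh
# Hypothetical helper: translate "service NAME ACTION" calls into the
# equivalent systemctl invocation shown in the table above. It echoes the
# command instead of executing it; drop the echo to run it for real.
service_compat() {
    name=$1
    action=$2
    case $action in
        start|stop|restart|reload|status) echo "systemctl $action $name" ;;
        condrestart)                      echo "systemctl try-restart $name" ;;
        *) echo "unsupported action: $action" >&2; return 1 ;;
    esac
}

service_compat sshd status       # -> systemctl status sshd
service_compat sshd condrestart  # -> systemctl try-restart sshd
```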
- -Systemd also introduced the concepts of units (which can be either a service, a mount point, a device, or a network socket) and targets (which is how systemd manages to start several related processes at the same time, and can be considered, though not strictly equal, as the equivalent of runlevels in SysVinit-based systems). - -### Summing Up ### - -Other tasks related to process management include, but may not be limited to, the ability to: - -**1. Adjust the execution priority of a process as far as the use of system resources is concerned:** - -This is accomplished through the renice utility, which alters the scheduling priority of one or more running processes. In simple terms, the scheduling priority is a feature that allows the kernel (in versions >= 2.6) to allocate system resources as per the assigned execution priority (aka niceness, in a range from -20 through 19) of a given process. - -The basic syntax of renice is as follows: - - # renice [-n] priority [-gpu] identifier - -In the generic command above, the first argument is the priority value to be used, whereas the other argument can be interpreted as process IDs (which is the default setting), process group IDs, user IDs, or user names. A normal user (other than root) can only modify the scheduling priority of a process he or she owns, and only increase the niceness level (which means taking up less system resources). - -![Renice Process in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/Process-Scheduling-Priority.png) - -Process Scheduling Priority - -**2. Kill (or interrupt the normal execution of) a process as needed:** - -In more precise terms, killing a process entails sending it a signal to either finish its execution gracefully (SIGTERM=15) or immediately (SIGKILL=9) through the [kill or pkill commands][3]. 
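Since the distinction between a graceful SIGTERM and an unconditional SIGKILL matters in practice, here is a small self-contained sketch (not part of the original article) that rehearses the usual escalation sequence on a disposable sleep process:

```shell
#!/bin/sh
# Start a throwaway background process to practice on.
sleep 300 &
pid=$!

kill -s TERM "$pid"     # polite request: SIGTERM (15), the default signal
sleep 1                 # give the process a moment to exit cleanly

# kill -0 delivers no signal at all; it merely checks that the PID exists.
if kill -0 "$pid" 2>/dev/null; then
    kill -9 "$pid"      # escalate: SIGKILL (9) cannot be caught or ignored
fi
```

In real life the grace period is usually longer than one second, and the target PID would more likely come from pgrep than from `$!`.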
- -The difference between these two tools is that the former is used to terminate a specific process or a process group altogether, while the latter allows you to do the same based on name and other attributes. - -In addition, pkill comes bundled with pgrep, which shows you the PIDs that will be affected should pkill be used. For example, before running: - - # pkill -u gacanepa - -It may be useful to view at a glance which PIDs are owned by gacanepa: - - # pgrep -l -u gacanepa - -![Find PIDs of User](http://www.tecmint.com/wp-content/uploads/2015/03/Find-PIDs-of-User.png) - -Find PIDs of User - -By default, both kill and pkill send the SIGTERM signal to the process. As we mentioned above, this signal can be ignored (either temporarily, while the process finishes its execution, or permanently), so when you seriously need to stop a running process with a valid reason, you will need to specify the SIGKILL signal on the command line: - - # kill -9 identifier # Kill a process or a process group - # kill -s SIGNAL identifier # Idem - # pkill -s SIGNAL identifier # Kill a process by name or other attributes - -### Conclusion ### - -In this article we have explained the basics of the boot process in a RHEL 7 system, and analyzed some of the tools that are available to help you with managing processes using common utilities and systemd-specific commands. - -Note that this list is not intended to cover all the bells and whistles of this topic, so feel free to add your own preferred tools and commands to this article using the comment form below. Questions and other comments are also welcome. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/dmesg-commands/ -[2]:http://www.tecmint.com/systemd-replaces-init-in-linux/ -[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md b/translated/tech/RHCSA/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md new file mode 100644 index 0000000000..91e2482e49 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md @@ -0,0 +1,214 @@ +RHCSA 系列:RHEL 7 中的进程管理:开机,关机,以及两者之间的所有其他事项 – Part 5 +================================================================================ +我们将概括和简要地复习从你按开机按钮来打开你的 RHEL 7 服务器到呈现出命令行界面的登录屏幕之间所发生的所有事情,以此来作为这篇文章的开始。 + +![RHEL 7 开机过程](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Process.png) + +Linux 开机过程 + +**请注意:** + +1. 相同的基本原则也可以应用到其他的 Linux 发行版本中,但可能需要较小的更改,并且 +2. 
下面的描述并不是旨在给出开机过程的一个详尽的解释,而只是介绍一些基础的东西 + +### Linux 开机过程 ### + +1.初始化 POST(加电自检)并执行硬件检查; + +2.当 POST 完成后,系统的控制权将移交给启动管理器的第一阶段,它存储在一个硬盘的引导扇区(对于使用 BIOS 和 MBR 的旧式的系统)或存储在一个专门的 (U)EFI 分区上。 + +3.启动管理器的第一阶段完成后,接着进入启动管理器的第二阶段,通常大多数使用的是 GRUB(GRand Unified Boot Loader 的简称),它驻留在 `/boot` 中,反过来加载内核和驻留在 RAM 中的初始化文件系统(被称为 initramfs,它包含执行必要操作所需要的程序和二进制文件,以此来最终挂载真实的根文件系统)。 + +4.接着经历了闪屏过后,呈现在我们眼前的是类似下图的画面,它允许我们选择一个操作系统和内核来启动: + +![RHEL 7 开机屏幕](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Screen.png) + +启动菜单屏幕 + +5.然后内核对挂载到系统的硬件进行设置,一旦根文件系统被挂载,接着便启动 PID 为 1 的进程,反过来这个进程将初始化其他的进程并最终呈现给我们一个登录提示符界面。 + +注意:假如我们之后想要查看这一过程的细节,可以使用 [dmesg 命令][1](注:这篇文章已经翻译并发表了,链接是 https://linux.cn/article-3587-1.html ),并使用这个系列里前面的文章中解释过的工具来过滤它的输出。 + +![登录屏幕和进程的 PID](http://www.tecmint.com/wp-content/uploads/2015/03/Login-Screen-Process-PID.png) + +登录屏幕和进程的 PID + +在上面的例子中,我们使用了众所周知的 `ps` 命令来显示在系统启动过程中的一系列当前进程的信息,它们的父进程(或者换句话说,就是那个开启这些进程的进程)为 systemd(大多数现代的 Linux 发行版本已经切换到的系统和服务管理器): + + # ps -o ppid,pid,uname,comm --ppid=1 + +记住 `-o`(为 --format 的简写)选项允许你以一个自定义的格式来显示 ps 的输出,以此来满足你的需求;这个自定义格式使用 man ps 里 STANDARD FORMAT SPECIFIERS 一节中的特定关键词。 + +另一个你想自定义 ps 的输出而不是使用其默认输出的情形是:当你需要找到引起 CPU 或内存消耗过多的那些进程,并按照下列方式来对它们进行排序时: + + # ps aux --sort=+pcpu # 以 %CPU 来排序(增序) + # ps aux --sort=-pcpu # 以 %CPU 来排序(降序) + # ps aux --sort=+pmem # 以 %MEM 来排序(增序) + # ps aux --sort=-pmem # 以 %MEM 来排序(降序) + # ps aux --sort=+pcpu,-pmem # 结合 %CPU (增序) 和 %MEM (降序)来排列 + +![http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png](http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png) + +自定义 ps 命令的输出 + +### systemd 的一个介绍 ### + +在 Linux 世界中,很少有决定能够比在主流的 Linux 发行版本中采用 systemd 引起更多的争论。systemd 的倡导者认为它的主要优势在于以下事实: + +另外请阅读: ['init' 和 'systemd' 背后的故事][2] + +1. 在系统启动期间,systemd 允许并发地启动更多的进程(相比于先前的 SysVinit,SysVinit 似乎总是表现得更慢,因为它一个接一个地启动进程,检查一个进程是否依赖于另一个进程,然后等待守护进程去开启可以开始的更多的服务),并且 +2. 
在一个运行着的系统中,它作为一个动态的资源管理器来工作。这样在开机期间,当一个服务被需要时,才启动它(以此来避免消耗系统资源)而不是在没有一个合理的原因的情况下启动额外的服务。 +3. 向后兼容 sysvinit 的脚本。 + +systemd 由 systemctl 工具控制,假如你带有 SysVinit 背景,你将会对以下的内容感到熟悉: + +- service 工具, 在旧一点的系统中,它被用来管理 SysVinit 脚本,以及 +- chkconfig 工具, 为系统服务升级和查询运行级别信息 +- shutdown, 你一定使用过几次来重启或关闭一个运行的系统。 + +下面的表格展示了使用传统的工具和 systemctl 之间的相似之处: + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Legacy tool | Systemctl equivalent | Description |
| --- | --- | --- |
| service name start | systemctl start name | Start name (where name is a service) |
| service name stop | systemctl stop name | Stop name |
| service name condrestart | systemctl try-restart name | Restarts name (if it’s already running) |
| service name restart | systemctl restart name | Restarts name |
| service name reload | systemctl reload name | Reloads the configuration for name |
| service name status | systemctl status name | Displays the current status of name |
| service --status-all | systemctl | Displays the status of all current services |
| chkconfig name on | systemctl enable name | Enable name to run on startup as specified in the unit file (the file to which the symlink points). The process of enabling or disabling a service to start automatically on boot consists in adding or removing symbolic links inside the /etc/systemd/system directory. |
| chkconfig name off | systemctl disable name | Disables name to run on startup as specified in the unit file (the file to which the symlink points) |
| chkconfig --list name | systemctl is-enabled name | Verify whether name (a specific service) is currently enabled |
| chkconfig --list | systemctl --type=service | Displays all services and tells whether they are enabled or disabled |
| shutdown -h now | systemctl poweroff | Power-off the machine (halt) |
| shutdown -r now | systemctl reboot | Reboot the system |
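为了更直观地展示上表中 chkconfig 与 systemctl 的对应关系,下面给出一个简单的示意脚本(并非原文内容,sshd 只是一个假设的服务名):它只把将要执行的 systemctl 命令回显出来,便于核对;去掉 echo 即可真正执行。

```shell
#!/bin/sh
# 示意用的辅助函数(假设性的例子):把旧式的 chkconfig 用法
# 翻译成上表中对应的 systemctl 命令。这里只回显命令本身,
# 以便查看对应关系。
chkconfig_compat() {
    name=$1
    action=$2
    case $action in
        on)   echo "systemctl enable $name" ;;
        off)  echo "systemctl disable $name" ;;
        list) echo "systemctl is-enabled $name" ;;
        *)    echo "unsupported action: $action" >&2; return 1 ;;
    esac
}

chkconfig_compat sshd on    # -> systemctl enable sshd
chkconfig_compat sshd list  # -> systemctl is-enabled sshd
```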
+ +systemd 也引进了单元(它可能是一个服务,一个挂载点,一个设备或者一个网络套接字)和目标(它们定义了 systemd 如何去管理和同时开启几个相关的进程,并可认为它们与在基于 SysVinit 的系统中的运行级别大致等价,尽管事实上它们并不完全等价)。 + +### 总结归纳 ### + +其他与进程管理相关的任务包括(但不限于)以下几项: + +**1. 在系统资源的使用方面,调整一个进程的执行优先级:** + +这是通过 `renice` 工具来完成的,它可以改变一个或多个正在运行着的进程的调度优先级。简单来说,调度优先级是一个允许内核(在 >= 2.6 的版本中提供)根据某个给定进程被分配的执行优先级(也被称为 niceness,取值范围从 -20 到 19)来为其分配系统资源的功能。 + +`renice` 的基本语法如下: + + # renice [-n] priority [-gpu] identifier + +在上面的通用命令中,第一个参数是将要使用的优先级数值,而另一个参数可以解释为进程 ID(这是默认的设定),进程组 ID,用户 ID 或者用户名。一个常规的用户(即除 root 以外的用户)只可以更改他或她所拥有的进程的调度优先级,并且只能增加优先级的层次(这意味着占用更少的系统资源)。 + +![在 Linux 中调整进程的优先级](http://www.tecmint.com/wp-content/uploads/2015/03/Process-Scheduling-Priority.png) + +进程调度优先级 + +**2. 按照需要杀死一个进程(或终止其正常执行):** + +更精确地说,杀死一个进程指的是通过 [kill 或 pkill][3] 命令给该进程发送一个信号,让它优雅地(SIGTERM=15)或立即(SIGKILL=9)结束它的执行。 + +这两个工具的不同之处在于前一个被用来终止一个特定的进程或一个进程组,而后一个则允许你在进程的名称和其他属性的基础上,执行相同的动作。 + +另外,pkill 与 pgrep 相捆绑,pgrep 可以提供将会受到影响的进程的 PID 给 pkill 使用。例如,在运行下面的命令之前: + + # pkill -u gacanepa + +查看一眼由 gacanepa 所拥有的 PID 或许会带来点帮助: + + # pgrep -l -u gacanepa + +![找到用户拥有的 PID](http://www.tecmint.com/wp-content/uploads/2015/03/Find-PIDs-of-User.png) + +找到用户拥有的 PID + +默认情况下,kill 和 pkill 都发送 SIGTERM 信号给进程。如我们上面提到的那样,这个信号可能会被忽略(可能只是在进程结束运行之前被暂时忽略,也可能被一直忽略),所以当你确实有合理的理由需要停止一个运行着的进程时,你将需要在命令行中指定 SIGKILL 信号: + + # kill -9 identifier # 杀死一个进程或一个进程组 + # kill -s SIGNAL identifier # 同上 + # pkill -s SIGNAL identifier # 通过名称或其他属性来杀死一个进程 + +### 结论 ### + +在这篇文章中,我们解释了在 RHEL 7 系统中,有关开机启动过程的基本知识,并分析了一些可用的工具来帮助你通过使用一般的程序和 systemd 特有的命令来管理进程。 + +请注意,这个列表并不旨在涵盖有关这个话题的所有花哨的工具,请随意使用下面的评论栏来添加你自己钟爱的工具和命令。同时欢迎你的提问和其他的评论。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ 
+[1]:http://www.tecmint.com/dmesg-commands/ +[2]:http://www.tecmint.com/systemd-replaces-init-in-linux/ +[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/ From 1d98d80fc80ef59545d36b7b4fe98f11ab6b7947 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sat, 15 Aug 2015 17:20:30 +0800 Subject: [PATCH 181/697] Update RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...rted' and 'SSM' to Configure and Encrypt System Storage.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md b/sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md index 474b707d23..0e631ce37d 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Using ‘Parted’ and ‘SSM’ to Configure and Encrypt System Storage – Part 6 ================================================================================ In this article we will discuss how to set up and configure local system storage in Red Hat Enterprise Linux 7 using classic tools and introducing the System Storage Manager (also known as SSM), which greatly simplifies this task. 
@@ -266,4 +268,4 @@ via: http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-p 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/create-lvm-storage-in-linux/ \ No newline at end of file +[1]:http://www.tecmint.com/create-lvm-storage-in-linux/ From 26c93e98c624b0e916b8f6b78fab8ed4c29f062a Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 15 Aug 2015 17:53:07 +0800 Subject: [PATCH 182/697] [Translated] tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md --- ...m Activity Reports Using Linux Toolsets.md | 183 ------------------ ...m Activity Reports Using Linux Toolsets.md | 182 +++++++++++++++++ 2 files changed, 182 insertions(+), 183 deletions(-) delete mode 100644 sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md create mode 100644 translated/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md diff --git a/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md b/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md deleted file mode 100644 index ea0157be4f..0000000000 --- a/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md +++ /dev/null @@ -1,183 +0,0 @@ -Translating by ictlyh -Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets -================================================================================ -As a system engineer, you will often need to produce reports that show the utilization of your system’s resources in order to make sure that: 1) they are being utilized optimally, 2) prevent bottlenecks, and 3) ensure scalability, among other reasons. 
- -![Monitor Linux Performance Activity Reports](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Performance-Activity-Reports.jpg) - -RHCE: Monitor Linux Performance Activity Reports – Part 3 - -Besides the well-known native Linux tools that are used to check disk, memory, and CPU usage – to name a few examples, Red Hat Enterprise Linux 7 provides two additional toolsets to enhance the data you can collect for your reports: sysstat and dstat. - -In this article we will describe both, but let’s first start by reviewing the usage of the classic tools. - -### Native Linux Tools ### - -With df, you will be able to report disk space and inode usage by filesystem. You need to monitor both because a lack of space will prevent you from being able to save further files (and may even cause the system to crash), just like running out of inodes will mean you can’t link further files with their corresponding data structures, thus producing the same effect: you won’t be able to save those files to disk. - - # df -h [Display output in human-readable form] - # df -h --total [Produce a grand total] - -![Check Linux Total Disk Usage](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-Disk-Usage.png) - -Check Linux Total Disk Usage - - # df -i [Show inode count by filesystem] - # df -i --total [Produce a grand total] - -![Check Linux Total inode Numbers](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-inode-Numbers.png) - -Check Linux Total inode Numbers - -With du, you can estimate file space usage by either file, directory, or filesystem. - -For example, let’s see how much space is used by the /home directory, which includes all of the user’s personal files. 
The first command will return the overall space currently used by the entire /home directory, whereas the second will also display a disaggregated list by sub-directory as well: - - # du -sch /home - # du -sch /home/* - -![Check Linux Directory Disk Size](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Directory-Disk-Size.png) - -Check Linux Directory Disk Size - -Don’t Miss: - -- [12 ‘df’ Command Examples to Check Linux Disk Space Usage][1] -- [10 ‘du’ Command Examples to Find Disk Usage of Files/Directories][2] - -Another utility that can’t be missing from your toolset is vmstat. It will allow you to see at a quick glance information about processes, CPU and memory usage, disk activity, and more. - -If run without arguments, vmstat will return averages since the last reboot. While you may use this form of the command once in a while, it will be more helpful to take a certain amount of system utilization samples, one after another, with a defined time separation between samples. - -For example, - - # vmstat 5 10 - -will return 10 samples taken every 5 seconds: - -![Check Linux System Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Systerm-Performance.png) - -Check Linux System Performance - -As you can see in the above picture, the output of vmstat is divided by columns: procs (processes), memory, swap, io, system, and cpu. The meaning of each field can be found in the FIELD DESCRIPTION sections in the man page of vmstat. - -Where can vmstat come in handy? Let’s examine the behavior of the system before and during a yum update: - - # vmstat -a 1 5 - -![Vmstat Linux Performance Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/Vmstat-Linux-Peformance-Monitoring.png) - -Vmstat Linux Performance Monitoring - -Please note that as files are being modified on disk, the amount of active memory increases and so does the number of blocks written to disk (bo) and the CPU time that is dedicated to user processes (us). 
- -Or during the saving process of a large file directly to disk (caused by dsync): - - # vmstat -a 1 5 - # dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync - -![VmStat Linux Disk Performance Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/VmStat-Linux-Disk-Performance-Monitoring.png) - -VmStat Linux Disk Performance Monitoring - -In this case, we can see a yet larger number of blocks being written to disk (bo), which was to be expected, but also an increase of the amount of CPU time that it has to wait for I/O operations to complete before processing tasks (wa). - -**Don’t Miss**: [Vmstat – Linux Performance Monitoring][3] - -### Other Linux Tools ### - -As mentioned in the introduction of this chapter, there are other tools that you can use to check the system status and utilization (they are not only provided by Red Hat but also by other major distributions from their officially supported repositories). - -The sysstat package contains the following utilities: - -- sar (collect, report, or save system activity information). -- sadf (display data collected by sar in multiple formats). -- mpstat (report processors related statistics). -- iostat (report CPU statistics and I/O statistics for devices and partitions). -- pidstat (report statistics for Linux tasks). -- nfsiostat (report input/output statistics for NFS). -- cifsiostat (report CIFS statistics) and -- sa1 (collect and store binary data in the system activity daily data file. -- sa2 (write a daily report in the /var/log/sa directory) tools. - -whereas dstat adds some extra features to the functionality provided by those tools, along with more counters and flexibility. You can find an overall description of each tool by running yum info sysstat or yum info dstat, respectively, or checking the individual man pages after installation. - -To install both packages: - - # yum update && yum install sysstat dstat - -The main configuration file for sysstat is /etc/sysconfig/sysstat. 
You will find the following parameters in that file: - - # How long to keep log files (in days). - # If value is greater than 28, then log files are kept in - # multiple directories, one for each month. - HISTORY=28 - # Compress (using gzip or bzip2) sa and sar files older than (in days): - COMPRESSAFTER=31 - # Parameters for the system activity data collector (see sadc manual page) - # which are used for the generation of log files. - SADC_OPTIONS="-S DISK" - # Compression program to use. - ZIP="bzip2" - -When sysstat is installed, two cron jobs are added and enabled in /etc/cron.d/sysstat. The first job runs the system activity accounting tool every 10 minutes and stores the reports in /var/log/sa/saXX where XX is the day of the month. - -Thus, /var/log/sa/sa05 will contain all the system activity reports from the 5th of the month. This assumes that we are using the default value in the HISTORY variable in the configuration file above: - - */10 * * * * root /usr/lib64/sa/sa1 1 1 - -The second job generates a daily summary of process accounting at 11:53 pm every day and stores it in /var/log/sa/sarXX files, where XX has the same meaning as in the previous example: - - 53 23 * * * root /usr/lib64/sa/sa2 -A - -For example, you may want to output system statistics from 9:30 am through 5:30 pm of the sixth of the month to a .csv file that can easily be viewed using LibreOffice Calc or Microsoft Excel (this approach will also allow you to create charts or graphs): - - # sadf -s 09:30:00 -e 17:30:00 -dh /var/log/sa/sa06 -- | sed 's/;/,/g' > system_stats20150806.csv - -You could alternatively use the -j flag instead of -d in the sadf command above to output the system stats in JSON format, which could be useful if you need to consume the data in a web application, for example. - -![Linux System Statistics](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-System-Statistics.png) - -Linux System Statistics - -Finally, let’s see what dstat has to offer. 
Please note that if run without arguments, dstat assumes -cdngy by default (short for CPU, disk, network, memory pages, and system stats, respectively), and adds one line every second (execution can be interrupted anytime with Ctrl + C): - - # dstat - -![Linux Disk Statistics Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/dstat-command.png) - -Linux Disk Statistics Monitoring - -To output the stats to a .csv file, use the –output flag followed by a file name. Let’s see how this looks on LibreOffice Calc: - -![Monitor Linux Statistics Output](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Statistics-Output.png) - -Monitor Linux Statistics Output - -I strongly advise you to check out the man page of dstat, included with this article along with the man page of sysstat in PDF format for your reading convenience. You will find several other options that will help you create custom and detailed system activity reports. - -**Don’t Miss**: [Sysstat – Linux Usage Activity Monitoring Tool][4] - -### Summary ### - -In this guide we have explained how to use both native Linux tools and specific utilities provided with RHEL 7 in order to produce reports on system utilization. At one point or another, you will come to rely on these reports as best friends. - -You will probably have used other tools that we have not covered in this tutorial. If so, feel free to share them with the rest of the community along with any other suggestions / questions / comments that you may have- using the form below. - -We look forward to hearing from you. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/how-to-check-disk-space-in-linux/ -[2]:http://www.tecmint.com/check-linux-disk-usage-of-files-and-directories/ -[3]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/ -[4]:http://www.tecmint.com/install-sysstat-in-linux/ \ No newline at end of file diff --git a/translated/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md b/translated/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md new file mode 100644 index 0000000000..7a373cd76b --- /dev/null +++ b/translated/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md @@ -0,0 +1,182 @@ +RHCE 第三部分 - 如何使用 Linux 工具集产生和发送系统活动报告 +================================================================================ +作为一个系统工程师,你经常需要生成一些显示系统资源利用率的报告,以便确保:1)它们正被最佳地利用,2)防止出现瓶颈,3)确保可扩展性,以及其它原因。 + +![监视 Linux 性能活动报告](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Performance-Activity-Reports.jpg) + +RHCE 第三部分:监视 Linux 性能活动报告 + +除了用于检测磁盘、内存和 CPU 使用率的几个著名的原生 Linux 工具之外,红帽企业版 Linux 7 还提供了两个额外的工具集用于为你的报告增加可以收集的数据:sysstat 和 dstat。 + +在这篇文章中,我们会介绍两者,但首先让我们来回顾一下传统工具的使用。 + +### 原生 Linux 工具 ### + +使用 df,你可以报告磁盘空间以及文件系统的 inode 使用情况。你需要监视两者,因为缺少磁盘空间会阻止你保存更多文件(甚至会导致系统崩溃),就像耗尽 inode 意味着你不能将文件链接到对应的数据结构,从而导致同样的结果:你不能将那些文件保存到磁盘中。 + + # df -h [以人类可读形式显示输出] + # df -h --total [生成总计] + +![检查 Linux 总的磁盘使用](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-Disk-Usage.png) + +检查 Linux 总的磁盘使用 + + # df -i [显示文件系统的 
 inode 数目] + # df -i --total [生成总计] + +![检查 Linux 总的 inode 数目](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-inode-Numbers.png) + +检查 Linux 总的 inode 数目 + +用 du,你可以估计文件、目录或文件系统的文件空间使用。 + +举个例子,让我们来看看 /home 目录使用了多少空间,它包括了所有用户的个人文件。第一条命令会返回整个 /home 目录当前使用的所有空间,第二条命令会显示子目录的分类列表: + + # du -sch /home + # du -sch /home/* + +![检查 Linux 目录磁盘大小](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Directory-Disk-Size.png) + +检查 Linux 目录磁盘大小 + +别错过了: + +- [检查 Linux 磁盘空间使用的 12 个 ‘df’ 命令例子][1] +- [查看文件/目录磁盘使用的 10 个 ‘du’ 命令例子][2] + +另一个你工具集中不容忽视的工具就是 vmstat。它允许你查看进程、CPU 和内存使用、磁盘活动以及其它的大概信息。 + +如果不带参数运行,vmstat 会返回自从上一次启动后的平均信息。虽然你可能偶尔会以这种形式使用该命令,但更有帮助的做法是连续采集一定数量的系统使用率样本,并为样本之间设定固定的时间间隔。 + +例如 + + # vmstat 5 10 + +会每隔 5 秒采样一次,共返回 10 个样本: + +![检查 Linux 系统性能](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Systerm-Performance.png) + +检查 Linux 系统性能 + +正如你从上面图片看到的,vmstat 的输出分为很多列:proc(process)、memory、swap、io、system 和 CPU。每个字段的意义可以在 vmstat man 手册的 FIELD DESCRIPTION 部分找到。 + +vmstat 在什么情况下可以派上用场呢?让我们在 yum 升级之前和升级期间检查系统行为: + + # vmstat -a 1 5 + +![Vmstat Linux 性能监视](http://www.tecmint.com/wp-content/uploads/2015/08/Vmstat-Linux-Peformance-Monitoring.png) + +Vmstat Linux 性能监视 + +请注意,当磁盘上的文件被更改时,活跃内存的数量会增加,写到磁盘的块数目(bo)和属于用户进程的 CPU 时间(us)也是这样。 + +或者在把一个大文件直接保存到磁盘时(由 dsync 引发): + + # vmstat -a 1 5 + # dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync + +![Vmstat Linux 磁盘性能监视](http://www.tecmint.com/wp-content/uploads/2015/08/VmStat-Linux-Disk-Performance-Monitoring.png) + +Vmstat Linux 磁盘性能监视 + +在这个例子中,我们可以看到很大数目的块被写入到磁盘(bo),这正如预期的那样,同时 CPU 处理任务之前等待 IO 操作完成的时间(wa)也增加了。 + +**别错过**: [Vmstat – Linux 性能监视][3] + +### 其它 Linux 工具 ### + +正如本文介绍部分提到的,这里有其它的工具你可以用来检测系统状态和利用率(这些工具不仅由红帽提供,其它主流发行版也在其官方支持库中提供)。 + +sysstat 软件包包含以下工具: + +- sar (收集、报告、或者保存系统活动信息)。 +- sadf (以多种方式显示 sar 收集的数据)。 +- mpstat (报告处理器相关的统计信息)。 +- iostat (报告 CPU 统计信息和设备以及分区的 IO 统计信息)。 +- pidstat (报告 Linux 任务统计信息)。 +- nfsiostat (报告 NFS 的输入/输出统计信息)。 +- cifsiostat (报告 CIFS 统计信息) +- sa1 (收集并保存系统活动日常文件的二进制数据)。 +- 
sa2 (在 /var/log/sa 目录写每日报告)。 + +dstat 为这些工具提供的功能添加了一些额外的特性,以及更多的计数器和更大的灵活性。你可以通过运行 yum info sysstat 或者 yum info dstat 找到每个工具完整的介绍,或者安装完成后分别查看每个工具的 man 手册。 + +安装两个软件包: + + # yum update && yum install sysstat dstat + +sysstat 主要的配置文件是 /etc/sysconfig/sysstat。你可以在该文件中找到下面的参数: + + # How long to keep log files (in days). + # If value is greater than 28, then log files are kept in + # multiple directories, one for each month. + HISTORY=28 + # Compress (using gzip or bzip2) sa and sar files older than (in days): + COMPRESSAFTER=31 + # Parameters for the system activity data collector (see sadc manual page) + # which are used for the generation of log files. + SADC_OPTIONS="-S DISK" + # Compression program to use. + ZIP="bzip2" + +sysstat 安装完成后,/etc/cron.d/sysstat 中会添加和启用两个 cron 作业。第一个作业每 10 分钟运行一次系统活动记录工具,并把报告保存到 /var/log/sa/saXX 文件中,其中 XX 是该月中的日期。 + +因此,/var/log/sa/sa05 会包括该月份第 5 天所有的系统活动报告。这里假设我们在上面的配置文件中对 HISTORY 变量使用默认的值: + + */10 * * * * root /usr/lib64/sa/sa1 1 1 + +第二个作业在每天夜间 11:53 生成每日进程记录总结并把它保存到 /var/log/sa/sarXX 文件,其中 XX 和之前例子中的含义相同: + + 53 23 * * * root /usr/lib64/sa/sa2 -A + +例如,你可能想要输出该月份第 6 天从上午 9:30 到晚上 5:30 的系统统计信息到一个 LibreOffice Calc 或 Microsoft Excel 可以查看的 .csv 文件(它也允许你创建表格和图片): + + # sadf -s 09:30:00 -e 17:30:00 -dh /var/log/sa/sa06 -- | sed 's/;/,/g' > system_stats20150806.csv + +你可以在上面的 sadf 命令中用 -j 标记代替 -d 以 JSON 格式输出系统统计信息,这在你需要在 web 应用中使用这些数据的时候非常有用。 + +![Linux 系统统计信息](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-System-Statistics.png) + +Linux 系统统计信息 + +最后,让我们看看 dstat 提供什么功能。请注意如果不带参数运行,dstat 默认使用 -cdngy(表示 CPU、磁盘、网络、内存页和系统统计信息),并每秒添加一行(可以在任何时候用 Ctrl + C 中断执行): + + # dstat + +![Linux 磁盘统计检测](http://www.tecmint.com/wp-content/uploads/2015/08/dstat-command.png) + +Linux 磁盘统计检测 + +要输出统计信息到 .csv 文件,可以用 --output 标记后面跟一个文件名称。让我们来看看在 LibreOffice Calc 中该文件看起来是怎样的: + +![检测 Linux 统计信息输出](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Statistics-Output.png) + +检测 Linux 统计信息输出 + +我强烈建议你查看 dstat 的 man 手册;为了方便你阅读,本文还以 PDF 格式附上了它以及 sysstat 的 man 
手册。你会找到其它能帮助你创建自定义的详细系统活动报告的选项。 + +**别错过**: [Sysstat – Linux 的使用活动检测工具][4] + +### 总结 ### + +在该指南中我们解释了如何使用 Linux 原生工具以及 RHEL 7 提供的特定工具来生成系统使用报告。在某种情况下,你可能像依赖最好的朋友那样依赖这些报告。 + +你很可能使用过这篇指南中我们没有介绍到的其它工具。如果真是这样的话,用下面的表格和社区中的其他成员一起分享吧,也可以是任何其它的建议/疑问/或者评论。 + +我们期待你的回复。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/ + +作者:[Gabriel Cánepa][a] +译者:[ictlyh](https://github.com/ictlyh) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/how-to-check-disk-space-in-linux/ +[2]:http://www.tecmint.com/check-linux-disk-usage-of-files-and-directories/ +[3]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/ +[4]:http://www.tecmint.com/install-sysstat-in-linux/ \ No newline at end of file From 973441749b1e0fcc303d5cc53653850a2baf7247 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 15 Aug 2015 18:07:53 +0800 Subject: [PATCH 183/697] PUB:20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux @geekpi --- ...work Traffic Analyzer--Install it on Linux.md | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) rename {translated/tech => published}/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md (73%) diff --git a/translated/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md b/published/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md similarity index 73% rename from translated/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md rename to published/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md index e8e6bace07..53f7b8a9d4 100644 --- a/translated/tech/20150811 Darkstat 
is a Web Based Network Traffic Analyzer--Install it on Linux.md +++ b/published/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md @@ -1,6 +1,7 @@ -Darkstat一个基于网络的流量分析器 - 在Linux中安装 +在 Linux 中安装 Darkstat:基于网页的流量分析器 ================================================================================ -Darkstat是一个简易的,基于网络的流量分析程序。它可以在主流的操作系统如Linux、Solaris、MAC、AIX上工作。它以守护进程的形式持续工作在后台并不断地嗅探网络数据并以简单易懂的形式展现在网页上。它可以为主机生成流量报告,鉴别特定主机上哪些端口打开并且兼容IPv6。让我们看下如何在Linux中安装和配置它。 + +Darkstat是一个简易的,基于网页的流量分析程序。它可以在主流的操作系统如Linux、Solaris、MAC、AIX上工作。它以守护进程的形式持续工作在后台,不断地嗅探网络数据,以简单易懂的形式展现在它的网页上。它可以为主机生成流量报告,识别特定的主机上哪些端口是打开的,它兼容IPv6。让我们看下如何在Linux中安装和配置它。 ### 在Linux中安装配置Darkstat ### @@ -20,14 +21,15 @@ Darkstat是一个简易的,基于网络的流量分析程序。它可以在主 ### 配置 Darkstat ### -为了正确运行这个程序,我恩需要执行一些基本的配置。运行下面的命令用gedit编辑器打开/etc/darkstat/init.cfg文件。 +为了正确运行这个程序,我们需要执行一些基本的配置。运行下面的命令用gedit编辑器打开/etc/darkstat/init.cfg文件。 sudo gedit /etc/darkstat/init.cfg ![](http://linuxpitstop.com/wp-content/uploads/2015/08/13.png) -编辑 Darkstat -修改START_DARKSTAT这个参数为yes,并在“INTERFACE”中提供你的网络接口。确保取消了DIR、PORT、BINDIP和LOCAL这些参数的注释。如果你希望绑定Darkstat到特定的IP,在BINDIP中提供它 +*编辑 Darkstat* + +修改START_DARKSTAT这个参数为yes,并在“INTERFACE”中提供你的网络接口。确保取消了DIR、PORT、BINDIP和LOCAL这些参数的注释。如果你希望绑定Darkstat到特定的IP,在BINDIP参数中提供它。 ### 启动Darkstat守护进程 ### @@ -47,7 +49,7 @@ Darkstat是一个简易的,基于网络的流量分析程序。它可以在主 ### 总结 ### -它是一个占用很少内存的轻量级工具。这个工具流行的原因是简易、易于配置和使用。这是一个对系统管理员而言必须拥有的程序 +它是一个占用很少内存的轻量级工具。这个工具流行的原因是简易、易于配置使用。这是一个对系统管理员而言必须拥有的程序。 -------------------------------------------------------------------------------- @@ -55,7 +57,7 @@ via: http://linuxpitstop.com/install-darkstat-on-ubuntu-linux/ 作者:[Aun][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 1e48d11ef118229b2618fec29cdd44276a58a4da Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Sat, 15 Aug 2015 18:18:42 +0800 Subject: [PATCH 184/697] 
Delete 20150813 Linux and Unix Test Disk I O Performance With dd Command.md --- ...st Disk I O Performance With dd Command.md | 164 ------------------ 1 file changed, 164 deletions(-) delete mode 100644 sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md diff --git a/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md b/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md deleted file mode 100644 index bcd9f8455f..0000000000 --- a/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md +++ /dev/null @@ -1,164 +0,0 @@ -DongShuaike is translating. - -Linux and Unix Test Disk I/O Performance With dd Command -================================================================================ -How can I use dd command on a Linux to test I/O performance of my hard disk drive? How do I check the performance of a hard drive including the read and write speed on a Linux operating systems? - -You can use the following commands on a Linux or Unix-like systems for simple I/O performance test: - -- **dd command** : It is used to monitor the writing performance of a disk device on a Linux and Unix-like system -- **hdparm command** : It is used to get/set hard disk parameters including test the reading and caching performance of a disk device on a Linux based system. - -In this tutorial you will learn how to use the dd command to test disk I/O performance. - -### Use dd command to monitor the reading and writing performance of a disk device: ### - -- Open a shell prompt. -- Or login to a remote server via ssh. 
-- Use the dd command to measure server throughput (write speed) `dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync` -- Use the dd command to measure server latency `dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync` - -#### Understanding dd command options #### - -In this example, I'm using RAID-10 (Adaptec 5405Z with SAS SSD) array running on a Ubuntu Linux 14.04 LTS server. The basic syntax is - - dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync - ## GNU dd syntax ## - dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync - ## OR alternate syntax for GNU/dd ## - dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync - -Sample outputs: - -![Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd](http://s0.cyberciti.org/uploads/faq/2015/08/dd-server-test-io-speed-output.jpg) -Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd - -Please note that one gigabyte was written for the test and 135 MB/s was server throughput for this test. Where, - -- `if=/dev/zero (if=/dev/input.file)` : The name of the input file you want dd the read from. -- `of=/tmp/test1.img (of=/path/to/output.file)` : The name of the output file you want dd write the input.file to. -- `bs=1G (bs=block-size)` : Set the size of the block you want dd to use. 1 gigabyte was written for the test. -- `count=1 (count=number-of-blocks)`: The number of blocks you want dd to read. -- `oflag=dsync (oflag=dsync)` : Use synchronized I/O for data. Do not skip this option. This option get rid of caching and gives you good and accurate results -- `conv=fdatasyn`: Again, this tells dd to require a complete "sync" once, right before it exits. This option is equivalent to oflag=dsync. 
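As a sketch of the same measurement at a safer scale (the 64k block size, 16-block count, and temp-file paths below are illustrative choices, not values from the article), you can shrink the write to 1 MB and still exercise `oflag=dsync`; scale `bs` and `count` back up for a real benchmark:

```shell
# Hypothetical small-scale throughput check: write 16 x 64 KB = 1 MB with
# synchronized I/O and keep only dd's final summary line.
tmpfile=$(mktemp /tmp/ddtest.XXXXXX)
dd if=/dev/zero of="$tmpfile" bs=64k count=16 oflag=dsync 2>&1 | tail -n 1 | tee /tmp/dd_summary.txt
rm -f "$tmpfile"
```

The summary line reports the byte count, elapsed time, and throughput — the figure you compare across devices.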
- -In this example, 512 bytes were written one thousand times to get RAID10 server latency time: - - dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync - -Sample outputs: - - 1000+0 records in - 1000+0 records out - 512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s - -Please note that server throughput and latency time depends upon server/application load too. So I recommend that you run these tests on a newly rebooted server as well as peak time to get better idea about your workload. You can now compare these numbers with all your devices. - -#### But why the server throughput and latency time are so low? #### - -Low values does not mean you are using slow hardware. The value can be low because of the HARDWARE RAID10 controller's cache. - -Use hdparm command to see buffered and cached disk read speed - -I suggest you run the following commands 2 or 3 times Perform timings of device reads for benchmark and comparison purposes: - - ### Buffered disk read test for /dev/sda ## - hdparm -t /dev/sda1 - ## OR ## - hdparm -t /dev/sda - -To perform timings of cache reads for benchmark and comparison purposes again run the following command 2-3 times (note the -T option): - - ## Cache read benchmark for /dev/sda ### - hdparm -T /dev/sda1 - ## OR ## - hdparm -T /dev/sda - -OR combine both tests: - - hdparm -Tt /dev/sda - -Sample outputs: - -![Fig.02: Linux hdparm command to test reading and caching disk performance](http://s0.cyberciti.org/uploads/faq/2015/08/hdparam-output.jpg) -Fig.02: Linux hdparm command to test reading and caching disk performance - -Again note that due to filesystems caching on file operations, you will always see high read rates. 
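That caching effect can be sketched without root or a raw block device (the 8 MB file size and paths below are illustrative, not from the article): read the same ordinary file twice and compare dd's summary lines — the second pass is served largely from the page cache:

```shell
# Create an 8 MB scratch file, then time two back-to-back reads of it.
# Without dropping caches (which needs root), the second read shows the
# inflated, cached read rate described above.
f=$(mktemp /tmp/readtest.XXXXXX)
dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null
for pass in 1 2; do
    printf 'pass %s: ' "$pass"
    dd if="$f" of=/dev/null bs=64k 2>&1 | tail -n 1
done | tee /tmp/read_passes.txt
rm -f "$f"
```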
- -**Use dd command on Linux to test read speed** - -To get accurate read test data, first discard caches before testing by running the following commands: - - flush - echo 3 | sudo tee /proc/sys/vm/drop_caches - time time dd if=/path/to/bigfile of=/dev/null bs=8k - -**Linux Laptop example** - -Run the following command: - - ### Debian Laptop Throughput With Cache ## - dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct - - ### Deactivate the cache ### - hdparm -W0 /dev/sda - - ### Debian Laptop Throughput Without Cache ## - dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct - -**Apple OS X Unix (Macbook pro) example** - -GNU dd has many more options but OS X/BSD and Unix-like dd command need to run as follows to test real disk I/O and not memory add sync option as follows: - - ## Run command 2-3 times to get good results ### - time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync" - -Sample outputs: - - 1024+0 records in - 1024+0 records out - 104857600 bytes transferred in 0.165040 secs (635346520 bytes/sec) - - real 0m0.241s - user 0m0.004s - sys 0m0.113s - -So I'm getting 635346520 bytes (635.347 MB/s) write speed on my MBP. - -**Not a fan of command line...?** - -You can use disk utility (gnome-disk-utility) on a Linux or Unix based system to get the same information. The following screenshot is taken from my Fedora Linux v22 VM. - -**Graphical method** - -Click on the "Activities" or press the "Super" key to switch between the Activities overview and desktop. Type "Disks" - -![Fig.03: Start the Gnome disk utility](http://s0.cyberciti.org/uploads/faq/2015/08/disk-1.jpg) -Fig.03: Start the Gnome disk utility - -Select your hard disk at left pane and click on configure button and click on "Benchmark partition": - -![Fig.04: Benchmark disk/partition](http://s0.cyberciti.org/uploads/faq/2015/08/disks-2.jpg) -Fig.04: Benchmark disk/partition - -Finally, click on the "Start Benchmark..." 
button (you may be promoted for the admin username and password): - -![Fig.05: Final benchmark result](http://s0.cyberciti.org/uploads/faq/2015/08/disks-3.jpg) -Fig.05: Final benchmark result - -Which method and command do you recommend to use? - -- I recommend dd command on all Unix-like systems (`time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync`" -- If you are using GNU/Linux use the dd command (`dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync`) -- Make sure you adjust count and bs arguments as per your setup to get a good set of result. -- The GUI method is recommended only for Linux/Unix laptop users running Gnome2 or 3 desktop. - --------------------------------------------------------------------------------- - -via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/ - -作者:Vivek Gite -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 7fc831fe8fd5ada1167f8f4bc7f57bff9b5e1034 Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Sat, 15 Aug 2015 18:22:26 +0800 Subject: [PATCH 185/697] Create Linux and Unix Test Disk IO Performance With dd Command.MD --- ...est Disk IO Performance With dd Command.MD | 167 ++++++++++++++++++ 1 file changed, 167 insertions(+) create mode 100644 translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD diff --git a/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD b/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD new file mode 100644 index 0000000000..ab3615876c --- /dev/null +++ b/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD @@ -0,0 +1,167 @@ +使用dd命令在Linux和Unix环境下进行硬盘I/O性能检测 +================================================================================ +如何使用dd命令测试硬盘的性能?如何在linux操作系统下检测硬盘的读写能力? 
+ +你可以使用以下命令在一个Linux或类Unix操作系统上进行简单的I/O性能测试。 + +- **dd命令**:它被用来在Linux和类Unix系统下对硬盘设备进行写性能的检测。 +- **hdparm命令**:它被用来获取或设置硬盘参数,包括测试读性能以及缓存性能等。 + +在这篇指南中,你将会学到如何使用dd命令来测试硬盘性能。 + +### 使用dd命令来监控硬盘的读写性能 ### + +- 打开shell终端。 +- 或者通过ssh登录到远程服务器。 +- 使用dd命令来测量服务器的吞吐率(写速度) `dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync` +- 使用dd命令测量服务器延迟 `dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync` + +#### 理解dd命令的选项 #### + +在这个例子当中,我将使用搭载Ubuntu Linux 14.04 LTS系统的RAID-10(配有SAS SSD的Adaptec 5405Z)服务器阵列来运行。基本语法为: + + dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync + ## GNU dd语法 ## + dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync + ##另外一种GNU dd的语法 ## + dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync + +输出样例: + +![Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd](http://s0.cyberciti.org/uploads/faq/2015/08/dd-server-test-io-speed-output.jpg) +Fig.01: 使用dd命令获取的服务器吞吐率 + +请注意,在这个实验中我们写入了1 GB的数据,测得的服务器吞吐率是135 MB/s,其中: + +- `if=/dev/zero (if=/dev/input.file)` :用来设置dd命令读取的输入文件名。 +- `of=/tmp/test1.img (of=/path/to/output.file)` :dd命令将input.file写入的输出文件的名字。 +- `bs=1G (bs=block-size)` :设置dd命令读取的块的大小。例子中为1个G。 +- `count=1 (count=number-of-blocks)`: dd命令读取的块的个数。 +- `oflag=dsync (oflag=dsync)` :使用同步I/O。不要省略这个选项。这个选项能够帮助你去除caching的影响,以便呈现给你精准的结果。 +- `conv=fdatasync`: 这个选项要求dd在退出前完成一次完整的数据同步,和`oflag=dsync`含义一样。 + +在这个例子中,一共写了1000次,每次写入512字节来获得RAID10服务器的延迟时间: + + dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync + +输出样例: + + 1000+0 records in + 1000+0 records out + 512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s + +请注意服务器的吞吐率以及延迟时间也取决于服务器/应用的负载。所以我推荐你分别在刚刚重启过的服务器上以及高峰期运行这些测试,以便对你的工作负载有更好的了解。现在你可以在你的所有设备上互相比较这些测试结果了。 + +#### 为什么服务器的吞吐率和延迟时间都这么差? #### + +低的数值并不意味着你在使用差劲的硬件。可能是HARDWARE RAID10的控制器缓存导致的。 + +使用hdparm命令来查看硬盘缓存的读速度。 + +我建议你运行下面的命令2-3次来对设备读性能进行检测,以作为参照和相互比较: + + ### 有缓存的硬盘读性能测试——/dev/sda ### + hdparm -t /dev/sda1 + ## 或者 ## + hdparm -t /dev/sda + 
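上面“运行 2-3 次”的建议可以用一个简单的 shell 循环来自动化(以下只是一段示意脚本:为了无需 root 权限即可演示,循环里用一条小数据量的 dd 写测试代替了 hdparm,文件名和次数都是假设的例子;真实测试时可以把循环体换成 hdparm -t /dev/sda 或调大 bs/count):

```shell
# 示意:把要重复的测试命令放进循环,逐次记录输出以便比较。
f=/tmp/bench.img
for i in 1 2 3; do
    printf 'run %s: ' "$i"
    dd if=/dev/zero of="$f" bs=64k count=16 oflag=dsync 2>&1 | tail -n 1
done | tee /tmp/bench_runs.txt
rm -f "$f"
```

多次运行并取平均值,可以减小单次波动带来的误差。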
然后运行下面这个命令2-3次来对缓存的读性能进行对照性检测: + + ## Cache读基准——/dev/sda ## + hdparm -T /dev/sda1 + ## 或者 ## + hdparm -T /dev/sda + +或者干脆把两个测试结合起来: + + hdparm -Tt /dev/sda + +输出样例: + +![Fig.02: Linux hdparm command to test reading and caching disk performance](http://s0.cyberciti.org/uploads/faq/2015/08/hdparam-output.jpg) +Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令 + +请再一次注意,由于文件系统对文件操作的缓存特性,你将总是会看到很高的读速度。 + +**使用dd命令来测试读入速度** + +为了获得精确的读测试数据,首先在测试前运行下列命令,来将缓存设置为无效: + + flush + echo 3 | sudo tee /proc/sys/vm/drop_caches + time dd if=/path/to/bigfile of=/dev/null bs=8k + +**笔记本上的示例** + +运行下列命令: + + ### Cache存在的Debian系统笔记本吞吐率 ### + dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct + + ### 使cache失效 ### + hdparm -W0 /dev/sda + + ### 没有Cache的Debian系统笔记本吞吐率 ### + dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct + +**苹果OS X Unix(Macbook pro)的例子** + +GNU dd命令还有许多其他选项,但是在OS X/BSD等类Unix系统中,需要像下面这样加上sync来运行dd命令,才能测试真实的硬盘I/O而不是内存: + + ## 运行这个命令2-3次来获得更好的结果 ## + time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync" + +输出样例: + + 1024+0 records in + 1024+0 records out + 104857600 bytes transferred in 0.165040 secs (635346520 bytes/sec) + + real 0m0.241s + user 0m0.004s + sys 0m0.113s + +本人Macbook Pro的写速度是每秒635346520字节(即635.347MB/s)。 + +**不喜欢用命令行?^_^** + +你可以在Linux或基于Unix的系统上使用disk utility(gnome-disk-utility)这款工具来得到同样的信息。下面的那个图就是在我的Fedora Linux v22 VM上截取的。 + +**图形化方法** + +点击“Activities”或者“Super”按键来在桌面和Activities视图间切换。输入“Disks” + +![Fig.03: Start the Gnome disk utility](http://s0.cyberciti.org/uploads/faq/2015/08/disk-1.jpg) +Fig.03: 打开Gnome硬盘工具 + +在左边的面板上选择你的硬盘,点击configure按钮,然后点击“Benchmark partition”: + +![Fig.04: Benchmark disk/partition](http://s0.cyberciti.org/uploads/faq/2015/08/disks-2.jpg) +Fig.04: 评测硬盘/分区 + +最后,点击“Start Benchmark...”按钮(你可能被要求输入管理员用户名和密码): + +![Fig.05: Final benchmark 
result](http://s0.cyberciti.org/uploads/faq/2015/08/disks-3.jpg) +Fig.05: 最终的评测结果 + +如果你要问,我推荐使用哪种命令和方法? + +- 我推荐在所有的类Unix系统上使用dd命令(`time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync`) +- 如果你在使用GNU/Linux,使用dd命令 (`dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync`) +- 确保你每次使用时,都调整了count以及bs参数以获得更好的结果。 +- GUI方法只适合桌面系统为Gnome2或Gnome3的Linux/Unix笔记本用户。 + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/ + +作者:Vivek Gite +译者:[DongShuaike](https://github.com/DongShuaike) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + + + + From af421be7153844a90ebc512b164be5b92da1bca1 Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Sat, 15 Aug 2015 18:24:35 +0800 Subject: [PATCH 186/697] Update Linux and Unix Test Disk IO Performance With dd Command.MD --- ...Linux and Unix Test Disk IO Performance With dd Command.MD | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD b/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD index ab3615876c..be5986b78e 100644 --- a/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD +++ b/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD @@ -21,9 +21,9 @@ 在这个例子当中,我将使用搭载Ubuntu Linux 14.04 LTS系统的RAID-10(配有SAS SSD的Adaptec 5405Z)服务器阵列来运行。基本语法为: dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync - ## GNU dd syntax ## + ## GNU dd语法 ## dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync - ## OR alternate syntax for GNU/dd ## + ##另外一种GNU dd的语法 ## dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync 输出样例: From 2bf20aff9200598742e6b7646426fe03c843f6bf Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 15 
Aug 2015 23:41:11 +0800 Subject: [PATCH 187/697] PUB:20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @XLCYun 翻译的很好!下周每天连载一篇 --- ...t Right & Wrong - Page 1 - Introduction.md | 56 +++++++++++++++++++ ...t Right & Wrong - Page 1 - Introduction.md | 55 ------------------ 2 files changed, 56 insertions(+), 55 deletions(-) create mode 100644 published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md delete mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md diff --git a/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md new file mode 100644 index 0000000000..61e181c80c --- /dev/null +++ b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md @@ -0,0 +1,56 @@ + 一周 GNOME 之旅:品味它和 KDE 的是是非非(第一节 介绍) +================================================================================ + +*作者声明: 如果你是因为某种神迹而在没看标题的情况下点开了这篇文章,那么我想再重申一些东西……这是一篇评论文章,文中的观点都是我自己的,不代表 Phoronix 网站和 Michael 的观点。它们完全是我自己的想法。* + +另外,没错……这可能是一篇引战的文章。我希望 KDE 和 Gnome 社团变得更好一些,因为我想发起一个讨论并反馈给他们。为此,当我想指出(我所看到的)一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明:本文另一可选标题为“死于成千上万的[纸割][1]”(LCTT 译注:paper cuts——纸割,被纸片割伤——指易修复但烦人的缺陷。Ubuntu 从 9.10 开始,发起了 [One Hundred Papercuts][1] 项目,用于修复那些小而烦人的易用性问题)。 + +现在,重申完毕……文章开始。 + +![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920) + +当我把[《评价 Fedora 22 KDE 》][2]一文发给 Michael 时,感觉很不是滋味。不是因为我不喜欢 KDE,或者不待见 Fedora,远非如此。事实上,我刚开始想把我的 T450s 的系统换为 Arch Linux 时,马上又决定放弃了,因为我很享受 fedora 在很多方面所带来的便捷性。 + +我感觉很不是滋味的原因是 Fedora 
的开发者花费了大量的时间和精力在他们的“工作站”产品上,但是我却一点也没看到。在使用 Fedora 时,我并没有采用那些主要开发者希望用户采用的那种使用方式,因此我也就体验不到所谓的“Fedora 体验”。它感觉就像一个人评价 Ubuntu 时用的却是 Kubuntu,评价 OS X 时用的却是 Hackintosh,或者评价 Gentoo 时用的却是 Sabayon。根据论坛里大量读者对 Michael 的说法,他们在评价各种发行版时都是使用的默认设置——我也不例外。但是我还是认为这些评价应该在“真实”配置下完成,当然我也知道在给定的情况下评论某些东西也的确是有价值的——无论是好是坏。 + +正是在怀着这种态度的情况下,我决定跳到 Gnome 这个水坑里来泡泡澡。 + +但是,我还要在此多加一个声明……我在这里所看到的 KDE 和 Gnome 都是打包在 Fedora 中的。OpenSUSE、 Kubuntu、 Arch等发行版的各个桌面可能有不同的实现方法,使得我这里所说的具体的“痛点”跟你所用的发行版有所不同。还有,虽然用了这个标题,但这篇文章将会是一篇“很 KDE”的重量级文章。之所以这样称呼,是因为我在“使用” Gnome 之后,才知道 KDE 的“纸割”到底有多么的多。 + +### 登录界面 ### + +![Gnome 登录界面](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920) + +我一般情况下都不会介意发行版带着它们自己的特别主题,因为一般情况下桌面看起来会更好看。可我今天可算是找到了一个例外。 + +第一印象很重要,对吧?那么,GDM(LCTT 译注: Gnome Display Manager:Gnome 显示管理器。)绝对干得漂亮。它的登录界面看起来极度简洁,每一部分都应用了一致的设计风格。使用通用图标而不是文本框为它的简洁加了分。 + +![ KDE 登录界面](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920) + +这并不是说 Fedora 22 KDE ——现在已经是 SDDM 而不是 KDM 了——的登录界面不好看,但是看起来绝对没有它这样和谐。 + +问题到底出在哪?顶部栏。看看 Gnome 的截图——你选择一个用户,然后用一个很小的齿轮简单地选择想登入哪个会话。设计很简洁,一点都不碍事,实话讲,如果你没注意的话可能完全会看不到它。现在看看那蓝色( LCTT 译注:blue,有忧郁之意,一语双关)的 KDE 截图,顶部栏看起来甚至不像是用同一个工具渲染出来的,它的整个位置的安排好像是某人想着:“哎哟妈呀,我们需要把这个选项扔在哪个地方……”之后决定下来的。 + +对于右上角的重启和关机选项也一样。为什么不单单用一个电源按钮,点击后会下拉出一个菜单,里面包括重启、关机、挂起的功能?按钮的颜色跟背景色不同肯定会让它更加突兀和显眼……但我可不觉得这样子有多好。同样,这看起来可真像“苦思”后的决定。 + +从实用观点来看,GDM 还要远远实用得多,再看看顶部一栏。时间被列了出来,还有一个音量控制按钮,如果你想保持周围安静,你甚至可以在登录前设置静音,还有一个可用性按钮来实现高对比度、缩放、语音转文字等功能,所有可用的功能通过简单的一个开关按钮就能得到。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920) + +切换到上游(KDE 自带)的 Breeze 主题……突然间,我抱怨的大部分问题都被解决了。通用图标,所有东西都放在了屏幕中央,但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白,在中间也就酝酿出了一种美好的和谐。还是有一个文本框来切换会话,但既然电源按钮被做成了通用图标,那么这点还算可以原谅。当前时间以一种漂亮的感觉呈现,旁边还有电量指示器。当然 gnome 还是有一些很好的附加物,例如音量小程序和可用性按钮,但 Breeze 总归要比 Fedora 的 KDE 主题进步。 + +到 Windows(Windows 8和10之前)或者 OS X 中去,你会看到类似的东西——非常简洁的,“不碍事”的锁屏与登录界面,它们都没有文本框或者其它分散视觉的小工具。这是一种有效的不分散人注意力的设计。Fedora……默认带有 Breeze 主题。VDG 在 
Breeze 主题设计上干得不错。可别糟蹋了它。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts +[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1 +[3]:https://launchpad.net/hundredpapercuts diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md deleted file mode 100644 index 582708f5a4..0000000000 --- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md +++ /dev/null @@ -1,55 +0,0 @@ -将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第一节 - 简介 -================================================================================ -*作者声明: 如果你是因为某种神迹而在没看标题的情况下点开了这篇文章,那么我想再重申一些东西...这是一篇评论文章。文中的观点都是我自己的,不代表Phoronix和Michael的观点。它们完全是我自己的想法。 - -另外,没错……这可能是一篇引战的文章。我希望社团成员们更沉稳一些,因为我确实想在KDE和Gnome的社团上发起讨论,反馈。因此当我想指出——我所看到的——一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明:本文另一可选标题为“被[细纸片][1]千刀万剐”(原文含paper cuts一词,指易修复但烦人的缺陷,译者注)。 - -现在,重申完毕……文章开始。 - -![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920) - -当我把[《评价Fedora 22 KDE》][2]一文发给Michael时,感觉很不是滋味。不是因为我不喜欢KDE,或者不享受Fedora,远非如此。事实上,我刚开始想把我的T450s的系统换为Arch Linux时,马上又决定放弃了,因为我很享受fedora在很多方面所带来的便捷性。 - -我感觉很不是滋味的原因是Fedora的开发者花费了大量的时间和精力在他们的“工作站”产品上,但是我却一点也没看到。在使用Fedora时,我采用的并非那些主要开发者希望用户采用的那种使用方式,因此我也就体验不到所谓的“Fedora体验”。它感觉就像一个人评价Ubuntu时用的却是Kubuntu,评价OS 
X时用的却是Hackintosh,或者评价Gentoo时用的却是Sabayon。根据大量Michael论坛的读者的说法,它们在评价各种发行版时使用的都是默认设置的发行版——我也不例外。但是我还是认为这些评价应该在“真实”配置下完成,当然我也知道在给定的情况下评论某些东西也的确是有价值的——无论是好是坏。 - -正是在怀着这种态度的情况下,我决定到Gnome这个水坑里来泡泡澡。 - -但是,我还要在此多加一个声明……我在这里所看到的KDE和Gnome都是打包在Fedora中的。OpenSUSE, Kubuntu, Arch等发行版的各个桌面可能有不同的实现方法,使得我这里所说的具体的“痛处”跟你所用的发行版有所不同。还有,虽然用了这个标题,但这篇文章将会是一篇很沉重的非常“KDE”的文章。之所以这样称呼这篇文章,是因为我在使用了Gnome之后,才知道KDE的“剪纸”到底有多多。 - -### 登录界面 ### - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920) - -我一般情况下都不会介意发行版装载它们自己的特别主题,因为一般情况下桌面看起来会更好看。可我今天可算是找到了一个例外。 - -第一印象很重要,对吧?那么,GDM(Gnome Display Manage:Gnome显示管理器,译者注,下同。)决对干得漂亮。它的登录界面看起来极度简洁,每一部分都应用了一致的设计风格。使用通用图标而不是输入框为它的简洁加了分。 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920) - -这并不是说Fedora 22 KDE——现在已经是SDDM而不是KDM了——的登录界面不好看,但是看起来决对没有它这样和谐。 - -问题到底出来在哪?顶部栏。看看Gnome的截图——你选择一个用户,然后用一个很小的齿轮简单地选择想登入哪个会话。设计很简洁,它不挡着你的道儿,实话讲,如果你没注意的话可能完全会看不到它。现在看看那蓝色( blue,有忧郁之意,一语双关,译者注)的KDE截图,顶部栏看起来甚至不像是用同一个工具渲染出来的,它的整个位置的安排好像是某人想着:“哎哟妈呀,我们需要把这个选项扔在哪个地方……”之后决定下来的。 - -对于右上角的重启和关机选项也一样。为什么不单单用一个电源按钮,点击后会下拉出一个菜单,里面包括重启,关机,挂起的功能?按钮的颜色跟背景色不同肯定会让它更加突兀和显眼……但我可不觉得这样子有多好。同样,这看起来可真像“苦思”后的决定。 - -从实用观点来看,GDM还要远远实用的多,再看看顶部一栏。时间被列了出来,还有一个音量控制按钮,如果你想保持周围安静,你甚至可以在登录前设置静音,还有一个可用的按钮来实现高对比度,缩放,语音转文字等功能,所有可用的功能通过简单的一个开关按钮就能得到。 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920) - -切换到upstream的Breeve主题……突然间,我抱怨的大部分问题都被完善了。通用图标,所有东西都放在了屏幕中央,但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白,在中间也就酝酿出了一种美好的和谐。还是有一个输入框来切换会话,但既然电源按钮被做成了通用图标,那么这点还算可以原谅。当然gnome还是有一些很好的附加物,例如音量小程序和可访问按钮,但Breeze总归是Fedora的KDE主题的一个进步。 - -到Windows(Windows 8和10之前)或者OS X中去,你会看到类似的东西——非常简洁的,“不挡你道”的锁屏与登录界面,它们都没有输入框或者其它分散视觉的小工具。这是一种有效的不分散人注意力的设计。Fedora……默认装有Breeze。VDG在Breeze主题设计上干得不错。可别糟蹋了它。 - --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1 - -作者:Eric Griffith 
-译者:[XLCYun](https://github.com/XLCYun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts -[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1 -[3]:https://launchpad.net/hundredpapercuts From 716529e17d8b7ab2a7bf975282f1f34ea00ecf07 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 16 Aug 2015 00:38:02 +0800 Subject: [PATCH 188/697] PUB:20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop @XLCYun --- ...ht & Wrong - Page 2 - The GNOME Desktop.md | 32 +++++++++++++++++++ ...ht & Wrong - Page 2 - The GNOME Desktop.md | 31 ------------------ 2 files changed, 32 insertions(+), 31 deletions(-) create mode 100644 published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md delete mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md diff --git a/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md new file mode 100644 index 0000000000..e47e59eaed --- /dev/null +++ b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md @@ -0,0 +1,32 @@ +一周 GNOME 之旅:品味它和 KDE 的是是非非(第二节 GNOME桌面) +================================================================================ + +### 桌面 ### + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920) + +在我这一周的前五天中,我都是直接手动登录进 Gnome 的——没有打开自动登录功能。在第五天的晚上,每一次都要手动登录让我觉得很厌烦,所以我就到用户管理器中打开了自动登录功能。下一次我登录的时候收到了一个提示:“你的密钥链(keychain)未解锁,请输入你的密码解锁”。在这时我才意识到了什么……Gnome 以前一直都在自动解锁我的密钥链(KDE 中叫做我的钱包),每当我通过 GDM 登录时 !当我绕开 GDM 
的登录程序时,Gnome 才不得不介入让我手动解锁。 + +现在,鄙人的陋见是如果你打开了自动登录功能,那么你的密钥链也应当自动解锁——否则,这功能还有何用?无论如何,你还是需要输入你的密码,况且在 GDM 登录界面你还有机会选择要登录的会话,如果你想换的话。 + +但是,这点且不提,也就是在那一刻,我意识到要让这桌面感觉就像它在**和我**一起工作一样是多么简单的一件事。当我通过 SDDM 登录 KDE 时?甚至连启动界面都还没加载完成,就有一个窗口弹出来遮挡了启动动画(因此启动动画也就被破坏了),它提示我解锁我的 KDE 钱包或 GPG 钥匙环。 + +如果当前还没有钱包,你就会收到一个创建钱包的提醒——就不能在创建用户的时候同时为我创建一个吗?接着它又让你在两种加密模式中选择一种,甚至还暗示我们其中一种(Blowfish)是不安全的,既然是为了安全,为什么还要我选择一个不安全的东西?作者声明:如果你安装了真正的 KDE spin 版本而不是仅仅安装了被 KDE 搞过的版本,那么在创建用户时,它就会为你创建一个钱包。但很不幸的是,它不会帮你自动解锁,并且它似乎还使用了更老的 Blowfish 加密模式,而不是更新而且更安全的 GPG 模式。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920) + +如果你选择了那个安全的加密模式(GPG),那么它会尝试加载 GPG 密钥……我希望你已经创建过一个了,因为如果你没有,那么你可又要被指责一番了。怎么样才能创建一个?额……它不帮你创建一个……也不告诉你怎么创建……假如你真的搞明白了你应该使用 KGpg 来创建一个密钥,接着在你就会遇到一层层的菜单和一个个的提示,而这些菜单和提示只能让新手感到困惑。为什么你要问我 GPG 的二进制文件在哪?天知道在哪!如果不止一个,你就不能为我选择一个最新的吗?如果只有一个,我再问一次,为什么你还要问我? + +为什么你要问我要使用多大的密钥大小和加密算法?你既然默认选择了 2048 和 RSA/RSA,为什么不直接使用?如果你想让这些选项能够被修改,那就把它们扔在下面的“Expert mode(专家模式)” 按钮里去。这里不仅仅是说让配置可被用户修改的问题,而是说根本不需要默认把多余的东西扔在了用户面前。这种问题将会成为这篇文章剩下的主要内容之一……KDE 需要更理智的默认配置。配置是好的,我很喜欢在使用 KDE 时的配置,但它还需要知道什么时候应该,什么时候不应该去提示用户。而且它还需要知道“嗯,它是可配置的”不能做为默认配置做得不好的借口。用户最先接触到的就是默认配置,不好的默认配置注定要失去用户。 + +让我们抛开密钥链的问题,因为我想我已经表达出了我的想法。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md deleted file mode 100644 index 5ce4dcd8d5..0000000000 --- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 
2 - The GNOME Desktop.md +++ /dev/null @@ -1,31 +0,0 @@ -将GNOME作为我的Linux桌面的一周:他们做对的与做错的 - 第二节 - GNOME桌面 -================================================================================ -### 桌面 ### - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920) - -在我这一周的前五天中,我都是直接手动登录进Gnome的——没有打开自动登录功能。在第五天的晚上,每一次都要手动登录让我觉得很厌烦,所以我就到用户管理器中打开了自动登录功能。下一次我登录的时候收到了一个提示:“你的密钥链(keychain)未解锁,请输入你的密码解锁”。在这时我才意识到了什么……每一次我通过GDM登录时——我的KDB钱包提示——Gnome以前一直都在自动解锁我的密钥链!当我绕开GDM的登录程序时,Gnome才不得不介入让我手动解锁。 - -现在,鄙人的陋见是如果你打开了自动登录功能,那么你的密钥链也应当自动解锁——否则,这功能还有何用?无论如何,你还是要输入你的密码,况且在GDM登录界面你还能有机会选择要登录的会话。 - -但是,这点且不提,也就是在那一刻,我意识到要让这桌面感觉是它在**和我**一起工作是多么简单的一件事。当我通过SDDM登录KDE时?甚至连启动界面都还没加载完成,就有一个窗口弹出来遮挡了启动动画——因此启动动画也就被破坏了——提示我解锁我的KDE钱包或GPG钥匙环。 - -如果当前不存在钱包,你就会收到一个创建钱包的提醒——就不能在创建用户的时候同时为我创建一个吗?——接着它又让你在两种加密模式中选择一种,甚至还暗示我们其中一种是不安全的(Blowfish),既然是为了安全,为什么还要我选择一个不安全的东西?作者声明:如果你安装了真正的KDE spin版本而不是仅仅安装了KDE的事后版本,那么在创建用户时,它就会为你创建一个钱包。但很不幸的是,它不会帮你解锁,并且它似乎还使用了更老的Blowfish加密模式,而不是更新而且更安全的GPG模式。 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920) - -如果你选择了那个安全的加密模式(GPG),那么它会尝试加载GPG密钥……我希望你已经创建过一个了,因为如果你没有,那么你可又要被批一顿了。怎么样才能创建一个?额……它不帮你创建一个……也不告诉你怎么创建……假如你真的搞明白了你应该使用KGpg来创建一个密钥,接着在你就会遇到一层层的菜单和一个个的提示,而这些菜单和提示只能让新手感到困惑。为什么你要问我GPG的二进制文件在哪?天知道在哪!如果不止一个,你就不能为我选择一个最新的吗?如果只有一个,我再问一次,为什么你还要问我? 
- -为什么你要问我要使用多大的密钥大小和加密算法?你既然默认选择了2048和RSA/RSA,为什么不直接使用?如果你想让这些选项能够被改变,那就把它们扔在下面的"Expert mode(专家模式)"按钮里去。这不仅仅关于使配置可被用户改变,而是关于默认地把多余的东西扔在了用户面前。这种问题将会成为剩下文章的主题之一……KDE需要更好更理智的默认配置。配置是好的,我很喜欢在使用KDE时的配置,但它还需要知道什么时候应该,什么时候不应该去提示用户。而且它还需要知道“嗯,它是可配置的”不能做为默认配置做得不好的借口。用户最先接触到的就是默认配置,不好的默认配置注定要失去用户。 - -让我们抛开密钥链的问题,因为我想我已经表达出了我的想法。 - --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2 - -作者:Eric Griffith -译者:[XLCYun](https://github.com/XLCYun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 88435ed2b8be50e2c4e7534259af9336397bb1c6 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Sun, 16 Aug 2015 02:34:08 +0800 Subject: [PATCH 189/697] =?UTF-8?q?20150816-1=20RHCE=20=E7=AC=AC=E5=9B=9B?= =?UTF-8?q?=E7=AF=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Automate Linux System Maintenance Tasks.md | 207 ++++++++++++++++++ 1 file changed, 207 insertions(+) create mode 100644 sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md diff --git a/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md b/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md new file mode 100644 index 0000000000..bcd058611a --- /dev/null +++ b/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md @@ -0,0 +1,207 @@ +Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks +================================================================================ +Some time ago I read that one of the distinguishing characteristics of an effective system administrator / engineer is laziness. 
It seemed a little contradictory at first but the author then proceeded to explain why:
+
+![Automate Linux System Maintenance Tasks](http://www.tecmint.com/wp-content/uploads/2015/08/Automate-Linux-System-Maintenance-Tasks.png)
+
+RHCE Series: Automate Linux System Maintenance Tasks – Part 4
+
+if a sysadmin spends most of his time solving issues and doing repetitive tasks, you can suspect he or she is not doing things quite right. In other words, an effective system administrator / engineer should develop a plan to perform repetitive tasks with as little action on his / her part as possible, and should foresee problems by using,
+
+for example, the tools reviewed in Part 3 – [Monitor System Activity Reports Using Linux Toolsets][1] of this series. Thus, although he or she may not seem to be doing much, it’s because most of his / her responsibilities have been taken care of with the help of shell scripting, which is what we’re going to talk about in this tutorial.
+
+### What is a shell script? ###
+
+In a few words, a shell script is nothing more and nothing less than a program that is executed step by step by a shell, which is another program that provides an interface layer between the Linux kernel and the end user.
+
+By default, the shell used for user accounts in RHEL 7 is bash (/bin/bash). If you want a detailed description and some historical background, you can refer to [this Wikipedia article][2].
+
+To find out more about the enormous set of features provided by this shell, you may want to check out its **man page**, which can be downloaded in PDF format ([Bash Commands][3]). Other than that, it is assumed that you are familiar with Linux commands (if not, I strongly advise you to go through the [A Guide from Newbies to SysAdmin][4] article on **Tecmint.com** before proceeding). Now let’s get started.
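The "step by step" behaviour described above fits in a file of just three commands. The following is a throwaway sketch (the variable names and greeting are invented for illustration, and are not part of the RHCE series scripts):

```shell
#!/bin/bash
# The shell reads this file and executes each command in order, top to bottom,
# exactly as if you had typed the lines at an interactive prompt.
user=$(whoami)                 # command substitution: capture a command's output
today=$(date +%A)              # the current day of the week, e.g. "Tuesday"
message="Hello $user, today is $today"
echo "$message"
```

Saved as, say, `hello.sh` and made executable with `chmod +x`, it performs the same steps every time it runs, which is the whole point of scripting repetitive work.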
+
+### Writing a script to display system information ###
+
+For our convenience, let’s create a directory to store our shell scripts:
+
+    # mkdir scripts
+    # cd scripts
+
+And open a new text file named `system_info.sh` with your preferred text editor. We will begin by inserting a few comments at the top and some commands afterwards:
+
+    #!/bin/bash
+
+    # Sample script written for Part 4 of the RHCE series
+    # This script will return the following set of system information:
+    # -Hostname information:
+    echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m"
+    hostnamectl
+    echo ""
+    # -File system disk space usage:
+    echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m"
+    df -h
+    echo ""
+    # -Free and used memory in the system:
+    echo -e "\e[31;43m***** FREE AND USED MEMORY *****\e[0m"
+    free
+    echo ""
+    # -System uptime and load:
+    echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m"
+    uptime
+    echo ""
+    # -Logged-in users:
+    echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m"
+    who
+    echo ""
+    # -Top 5 processes as far as memory usage is concerned
+    echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m"
+    ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6
+    echo ""
+    echo -e "\e[1;32mDone.\e[0m"
+
+Next, give the script execute permissions:
+
+    # chmod +x system_info.sh
+
+and run it:
+
+    ./system_info.sh
+
+Note that the headers of each section are shown in color for better visualization:
+
+![Server Monitoring Shell Script](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Shell-Script.png)
+
+Server Monitoring Shell Script
+
+That functionality is provided by this command:
+
+    echo -e "\e[COLOR1;COLOR2m<YOUR TEXT HERE>\e[0m"
+
+Where COLOR1 and COLOR2 are the foreground and background colors, respectively (more info and options are explained in this entry from the [Arch Linux Wiki][5]) and `<YOUR TEXT HERE>` is the string that you want to show in color.
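Those escape codes can be rehearsed in isolation before using them in a larger script. The snippet below is a minimal sketch (the header text is invented): it builds one colored header the same way system_info.sh does, then strips the codes again with sed to show that the color only wraps the text, it never changes it:

```shell
#!/bin/bash
# 31 = red foreground, 43 = yellow background; "0m" resets all attributes.
ESC=$(printf '\033')                        # the literal escape character, i.e. \e
header="${ESC}[31;43m***** DEMO HEADER *****${ESC}[0m"
echo "$header"                              # renders in color on an ANSI terminal
# Removing the escape sequences leaves only the plain text behind:
plain=$(printf '%s\n' "$header" | sed "s/${ESC}\[[0-9;]*m//g")
echo "$plain"
```

On a terminal that does not understand ANSI sequences, the raw codes show up as junk characters, which is one reason scripts meant for log files often leave coloring out.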
+
+### Automating Tasks ###
+
+The tasks that you may need to automate may vary from case to case. Thus, we cannot possibly cover all of the possible scenarios in a single article, but we will present three classic tasks that can be automated using shell scripting:
+
+**1)** update the local file database, 2) find (and alternatively delete) files with 777 permissions, and 3) alert when filesystem usage surpasses a defined limit.
+
+Let’s create a file named `auto_tasks.sh` in our scripts directory with the following content:
+
+    #!/bin/bash
+
+    # Sample script to automate tasks:
+    # -Update local file database:
+    echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m"
+    updatedb
+    if [ $? -eq 0 ]; then
+        echo "The local file database was updated correctly."
+    else
+        echo "The local file database was not updated correctly."
+    fi
+    echo ""
+
+    # -Find and / or delete files with 777 permissions.
+    echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m"
+    # Enable either option (comment out the other line), but not both.
+    # Option 1: Delete files without prompting for confirmation. Assumes GNU version of find.
+    #find . -type f -perm 0777 -delete
+    # Option 2: Ask for confirmation before deleting files. More portable across systems.
+    find . -type f -perm 0777 -exec rm -i {} \;
+    echo ""
+    # -Alert when file system usage surpasses a defined limit
+    echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m"
+    THRESHOLD=30
+    while read line; do
+        # This variable stores the file system path as a string
+        FILESYSTEM=$(echo $line | awk '{print $1}')
+        # This variable stores the use percentage (XX%)
+        PERCENTAGE=$(echo $line | awk '{print $5}')
+        # Use percentage without the % sign.
+        USAGE=${PERCENTAGE%?}
+        if [ $USAGE -gt $THRESHOLD ]; then
+            echo "The remaining available space in $FILESYSTEM is critically low. Used: $PERCENTAGE"
+        fi
+    done < <(df -h --total | grep -vi filesystem)
+
+Please note that there is a space between the two `<` signs in the last line of the script.
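The percentage-parsing logic in that last block can be replayed on a single canned df-style line, so the result does not depend on the disks of the machine running it (the device name and numbers below are invented):

```shell
#!/bin/bash
# One line shaped like the output of 'df -h': device, size, used, avail, use%, mountpoint
line="/dev/sda1 50G 20G 28G 42% /"
FILESYSTEM=$(echo $line | awk '{print $1}')   # first field: the file system path
PERCENTAGE=$(echo $line | awk '{print $5}')   # fifth field: use percentage with % sign
USAGE=${PERCENTAGE%?}                         # ${var%?} chops the last character: 42% -> 42
THRESHOLD=30
if [ "$USAGE" -gt "$THRESHOLD" ]; then
    echo "The remaining available space in $FILESYSTEM is critically low. Used: $PERCENTAGE"
fi
```

The real script feeds `df -h --total` into the same loop through process substitution (`done < <(...)`), which, unlike piping `df` into `while`, keeps the loop running in the current shell, so any variables set inside it survive after the loop finishes.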
+
+![Shell Script to Find 777 Permissions](http://www.tecmint.com/wp-content/uploads/2015/08/Shell-Script-to-Find-777-Permissions.png)
+
+Shell Script to Find 777 Permissions
+
+### Using Cron ###
+
+To take efficiency one step further, you will not want to sit in front of your computer and run those scripts manually. Rather, you will use cron to schedule those tasks to run on a periodic basis and send the results to a predefined list of recipients via email, or save them to a file that can be viewed using a web browser.
+
+The following script (filesystem_usage.sh) will run the well-known **df -h** command, format the output into an HTML table and save it in the **report.html** file:
+
+    #!/bin/bash
+    # Sample script to demonstrate the creation of an HTML report using shell scripting
+    # Web directory
+    WEB_DIR=/var/www/html
+    # A little CSS and table layout to make the report look a little nicer
+    echo "<html>
+    <head>
+    <style>
+    table { border-collapse: collapse; }
+    table, td, th { border: 1px solid black; }
+    </style>
+    </head>
+    <body>" > $WEB_DIR/report.html
+    # View hostname and insert it at the top of the html body
+    HOST=$(hostname)
+    echo "Filesystem usage for host <strong>$HOST</strong><br>
+    Last updated: <strong>$(date)</strong><br><br>
+    <table>
+    <tr><th>Filesystem</th><th>Size</th><th>Use %</th></tr>" >> $WEB_DIR/report.html
+    # Read the output of df -h line by line
+    while read line; do
+        echo "<tr><td>" >> $WEB_DIR/report.html
+        echo $line | awk '{print $1}' >> $WEB_DIR/report.html
+        echo "</td><td>" >> $WEB_DIR/report.html
+        echo $line | awk '{print $2}' >> $WEB_DIR/report.html
+        echo "</td><td>" >> $WEB_DIR/report.html
+        echo $line | awk '{print $5}' >> $WEB_DIR/report.html
+        echo "</td></tr>" >> $WEB_DIR/report.html
+    done < <(df -h | grep -vi filesystem)
+    echo "</table></body></html>" >> $WEB_DIR/report.html
+
+In our **RHEL 7** server (**192.168.0.18**), this looks as follows:
+
+![Server Monitoring Report](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Report.png)
+
+Server Monitoring Report
+
+You can add to that report as much information as you want. To run the script every day at 1:30 pm, add the following crontab entry:
+
+    30 13 * * * /root/scripts/filesystem_usage.sh
+
+### Summary ###
+
+You will most likely think of several other tasks that you want or need to automate; as you can see, using shell scripting will greatly simplify this effort. Feel free to let us know if you find this article helpful and don't hesitate to add your own ideas or comments via the form below.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/
+
+作者:[Gabriel Cánepa][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/
+[2]:https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29
+[3]:http://www.tecmint.com/wp-content/pdf/bash.pdf
+[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
+[5]:https://wiki.archlinux.org/index.php/Color_Bash_Prompt
\ No newline at end of file
From 71b69ec5747ae3160a79e0fd15dd181d35871fa6 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Sun, 16 Aug 2015 02:59:16 +0800
Subject: [PATCH 190/697] =?UTF-8?q?20150816-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...raphics Drivers PPA Is Ready For Action.md | 62 ++++++
 ...box--A Web based AJAX Terminal Emulator.md | 157 +++++++++++++++
 ...ow to migrate MySQL to MariaDB on Linux.md | 
186 ++++++++++++++++++
 3 files changed, 405 insertions(+)
 create mode 100644 sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md
 create mode 100644 sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md
 create mode 100644 sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md

diff --git a/sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md b/sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md
new file mode 100644
index 0000000000..e2a78e88dc
--- /dev/null
+++ b/sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md
@@ -0,0 +1,62 @@
+Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action
+================================================================================
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screen-Shot-2015-08-12-at-14.19.42.png)
+
+Get your frame-rate on
+
+**Well, that didn’t take long. **
+
+Just days after [proposing][1] the creation of a new PPA to provide Ubuntu users with the latest NVIDIA graphics drivers, the Ubuntu community has clubbed together to do, well, just that.
+
+The plainly named ‘**Graphics Drivers PPA**‘ contains the latest release of NVIDIA’s proprietary Linux graphics drivers, packaged up for Ubuntu users to upgrade to – no binary runtime headaches needed!
+
+The PPA is designed to offer gamers a way to run the latest games on the latest drivers on Ubuntu as easily as possible.
+
+#### Ready, But Not Ready ####
+
+Jorge Castro’s idea to create a ‘blessed’ PPA containing newer NVIDIA graphics drivers for those wot want ’em has been greeted with enthusiasm by Ubuntu users and games developers alike.
+
+Even those involved in porting some of Steam’s biggest titles to Linux have chimed in to offer advice and suggestions.
+
+Edwin Smith, head of production at Feral Interactive (‘Shadow of Mordor’) welcomed the initiative to provide users with an “easier way of updating drivers”.
+
+### How To Use The New Nvidia Drivers PPA ###
+
+Although the new ‘Graphics Drivers PPA’ is live, it is not strictly ready for prime time. Its maintainers caution:
+
+> “This PPA is currently in testing, you should be experienced with packaging before you dive in here. Give a few days to sort out the kinks.”
+
+Jorge, who soft launched the PPA in a post to the Ubuntu desktop mailing list, also notes that gamers using existing PPAs, like xorg-edgers, for timely graphics driver updates won’t notice any driver difference for now (as the drivers have simply been copied over from some of those PPAs to this new one).
+
+“The real fun begins when new drivers are released,” he adds.
+
+Right now, as of writing, the PPA contains a batch of recent Nvidia drivers for Ubuntu 12.04.1 through 15.10. Not all drivers are available for all releases.
+
+> **It should go without saying: unless you know what you’re doing, and how to undo it, do not follow the instructions that follow. **
+
+To add the PPA run the following in a new Terminal window:
+
+    sudo add-apt-repository ppa:graphics-drivers/ppa
+
+To upgrade to or install the latest Nvidia drivers:
+
+    sudo apt-get update && sudo apt-get install nvidia-355
+
+Remember: if the PPA breaks your system you are allowed to keep both halves.
+
+To roll back/undo changes made by the PPA you should use the ppa-purge command.
+
+Feel free to leave any advice/help/corrections/thoughts on the PPA (and as I don’t have NVIDIA hardware to test the above out for myself, it’s all appreciated) in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2015/08/ubuntu-nvidia-graphics-drivers-ppa-is-ready-for-action
+
+作者:[Joey-Elijah Sneddon][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers
\ No newline at end of file
diff --git a/sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md b/sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md
new file mode 100644
index 0000000000..002a2ed10f
--- /dev/null
+++ b/sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md
@@ -0,0 +1,157 @@
+shellinabox – A Web based AJAX Terminal Emulator
+================================================================================
+### About shellinabox ###
+
+Greetings Unixmen readers!
+
+We usually access remote servers using well-known communication tools like OpenSSH, PuTTY, etc. But one important limitation is that we can’t reach remote systems with those tools from behind a firewall, or through firewalls that allow only HTTPS traffic. No worries! We still have some options to access your remote systems even if you’re behind a firewall. And you don’t need to install any communication tools like OpenSSH or PuTTY. All you need is a modern JavaScript and CSS enabled browser. You don’t need to install any plugins or third-party software either.
+
+Meet **Shell In A Box**, pronounced as **shellinabox**, a free, open source, web based AJAX Terminal emulator developed by **Markus Gutschke**. It uses AJAX technology to provide the look and feel of a native shell via a web browser. The **shellinaboxd** daemon implements a webserver that listens on the specified port.
The web server publishes one or more services that will be displayed in a VT100 emulator implemented as an AJAX web application. By default, the port is 4200. You can change the default port to any random port number of your choice. After installing shellinabox on all the remote servers that you want to access from your local system, open up the web browser and navigate to: **http://IP-Address:4200/**. Enter your user name and password and start using your remote system’s shell. Sounds interesting, doesn’t it? Indeed!
+
+**Disclaimer**:
+
+Shellinabox is not an SSH client or any sort of security software. It is just an application that emulates a remote system’s shell via a web browser. Also, it has nothing to do with SSH in any way. It’s not a bulletproof way of securing your remote systems; it is just one of the easiest methods so far. You should not run it on any public network for any reason.
+
+### Install shellinabox ###
+
+#### In Debian/Ubuntu based systems: ####
+
+shellinabox is available in the default repositories. So, you can install it using command:
+
+    $ sudo apt-get install shellinabox
+
+#### In RHEL/CentOS systems: ####
+
+First, install EPEL repository using command:
+
+    # yum install epel-release
+
+Then, install shellinabox using command:
+
+    # yum install shellinabox
+
+Done!
+
+### Configure shellinabox ###
+
+As I mentioned before, shellinabox listens on port **4200** by default. You can change this port to any random number of your choice to make it harder for anyone to guess.
+
+The shellinabox config file is located at **/etc/default/shellinabox** by default on Debian/Ubuntu systems. In RHEL/CentOS/Fedora, the default location of the config file is **/etc/sysconfig/shellinaboxd**.
+
+If you want to change the default port,
+
+In Debian/Ubuntu:
+
+    $ sudo vi /etc/default/shellinabox
+
+In RHEL/CentOS/Fedora:
+
+    # vi /etc/sysconfig/shellinaboxd
+
+Change your port to any random number.
Since I am testing it on my local network, I use the default values.
+
+    # Shell in a box daemon configuration
+    # For details see shellinaboxd man page
+
+    # Basic options
+    USER=shellinabox
+    GROUP=shellinabox
+    CERTDIR=/var/lib/shellinabox
+    PORT=4200
+    OPTS="--disable-ssl-menu -s /:LOGIN"
+
+    # Additional examples with custom options:
+
+    # Fancy configuration with right-click menu choice for black-on-white:
+    # OPTS="--user-css Normal:+black-on-white.css,Reverse:-white-on-black.css --disable-ssl-menu -s /:LOGIN"
+
+    # Simple configuration for running it as an SSH console with SSL disabled:
+    # OPTS="-t -s /:SSH:host.example.com"
+
+Restart the shellinabox service.
+
+**In Debian/Ubuntu:**
+
+    $ sudo systemctl restart shellinabox
+
+Or
+
+    $ sudo service shellinabox restart
+
+In RHEL/CentOS systems run the following command to start the shellinaboxd service automatically on every reboot.
+
+    # systemctl enable shellinaboxd
+
+Or
+
+    # chkconfig shellinaboxd on
+
+Remember to open up port **4200**, or whatever port you assigned, if you are running a firewall.
+
+For example, in RHEL/CentOS systems, you can allow the port as shown below.
+
+    # firewall-cmd --permanent --add-port=4200/tcp
+
+----------
+
+    # firewall-cmd --reload
+
+### Usage ###
+
+Now, go to your client systems, open up the web browser and navigate to: **https://ip-address-of-remote-servers:4200**.
+
+**Note**: Mention the correct port if you have changed it.
+
+You’ll get a warning message about a certificate issue. Accept the certificate and go on.
+
+![Privacy error - Google Chrome_001](http://www.unixmen.com/wp-content/uploads/2015/08/Privacy-error-Google-Chrome_001.jpg)
+
+Enter your remote system’s username and password. Now, you’ll be able to access the remote system’s shell right from the browser itself.
+
+![Shell In A Box - Google Chrome_003](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_003.jpg)
+
+You can get some additional menu options, which might be useful, by right clicking on an empty space of your browser window.
+
+![Shell In A Box - Google Chrome_004](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_004.jpg)
+
+From now on, you can do whatever you want to do on your remote server from the local system’s web browser.
+
+Once you are done, type **exit** in the shell.
+
+To connect again to the remote system, click the **Connect** button and then type the user name and password of your remote server.
+
+![Shell In A Box - Google Chrome_005](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_005.jpg)
+
+For more details about shellinabox, type the following command in your Terminal:
+
+    # man shellinabox
+
+Or
+
+    # shellinaboxd -help
+
+Also, refer to the [shellinabox wiki page][1] for comprehensive usage details.
+
+### Conclusion ###
+
+Like I mentioned before, web-based SSH tools are very useful if you’re running servers behind a firewall. There are many web-based SSH tools, but shellinabox is a pretty simple and useful tool for emulating a remote system’s shell from anywhere in your network. Since it is browser based, you can access your remote server from any device as long as you have a JavaScript and CSS enabled browser.
+
+That’s all for now. Have a good day!
+
+#### Reference link: ####
+
+- [shellinabox website][2]
+
+--------------------------------------------------------------------------------
+
+via: http://www.unixmen.com/shellinabox-a-web-based-ajax-terminal-emulator/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://www.unixmen.com/author/sk/
+[1]:https://code.google.com/p/shellinabox/wiki/shellinaboxd_man
+[2]:https://code.google.com/p/shellinabox/
\ No newline at end of file
diff --git a/sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md b/sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md
new file mode 100644
index 0000000000..8c1b68b1ed
--- /dev/null
+++ b/sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md
@@ -0,0 +1,186 @@
+How to migrate MySQL to MariaDB on Linux
+================================================================================
+Since Oracle's acquisition of MySQL, a lot of MySQL developers and users have moved away from MySQL due to Oracle's more closed-door stance on MySQL development and maintenance. The community-driven outcome of that movement is a fork of MySQL, called MariaDB. Led by original MySQL developers, the development of MariaDB follows the open-source philosophy and maintains [its binary compatibility with MySQL][1]. Linux distributions such as the Red Hat family (Fedora, CentOS, RHEL), Ubuntu and Mint, openSUSE and Debian have already started to use and support MariaDB as a drop-in replacement for MySQL.
+
+If you want to migrate your database from MySQL to MariaDB, this article is what you are looking for. Fortunately, due to their binary compatibility, the MySQL-to-MariaDB migration process is pretty straightforward. If you follow the steps below, the migration from MySQL to MariaDB will most likely be painless.
+
+### Prepare a MySQL Database and a Table ###
+
+For demonstration purposes, let's create a test MySQL database and one table in the database before doing the migration. Skip this step if you already have existing MySQL database(s) to migrate to MariaDB. Otherwise proceed as follows.
+
+Log into MySQL from a terminal by typing your MySQL root user password.
+
+    $ mysql -u root -p
+
+Create a database and a table.
+
+    mysql> create database test01;
+    mysql> use test01;
+    mysql> create table pet(name varchar(30), owner varchar(30), species varchar(20), sex char(1));
+
+Add some records to the table.
+
+    mysql> insert into pet values('brandon','Jack','puddle','m'),('dixie','Danny','chihuahua','f');
+
+Then quit the MySQL database.
+
+### Backup the MySQL Database ###
+
+The next step is to back up existing MySQL database(s). Use the following mysqldump command to export all existing databases to a file. Before running this command, make sure that binary logging is enabled in your MySQL server. If you don't know how to enable binary logging, see the instructions toward the end of the tutorial.
+
+    $ mysqldump --all-databases --user=root --password --master-data > backupdb.sql
+
+![](https://farm6.staticflickr.com/5775/20555772385_21b89335e3_b.jpg)
+
+Now create a backup of the my.cnf file somewhere in your system before uninstalling MySQL. This step is optional.
+
+    $ sudo cp /etc/mysql/my.cnf /opt/my.cnf.bak
+
+### Uninstall MySQL Package ###
+
+First, you need to stop the MySQL service.
+
+    $ sudo service mysql stop
+
+or:
+
+    $ sudo systemctl stop mysql
+
+or:
+
+    $ sudo /etc/init.d/mysql stop
+
+Then go ahead and remove MySQL packages and configurations as follows.
+
+On RPM based systems (e.g., CentOS, Fedora or RHEL):
+
+    $ sudo yum remove mysql* mysql-server mysql-devel mysql-libs
+    $ sudo rm -rf /var/lib/mysql
+
+On Debian based systems (e.g., Debian, Ubuntu or Mint):
+
+    $ sudo apt-get remove mysql-server mysql-client mysql-common
+    $ sudo apt-get autoremove
+    $ sudo apt-get autoclean
+    $ sudo deluser mysql
+    $ sudo rm -rf /var/lib/mysql
+
+### Install MariaDB Package ###
+
+The latest CentOS/RHEL 7 and Ubuntu (14.04 or later) contain MariaDB packages in their official repositories. In Fedora, MariaDB has been a replacement for MySQL since version 19. If you are using an older or LTS release, such as Ubuntu 13.10 or earlier, you can still install MariaDB by adding its official repository.
+
+The [MariaDB website][2] provides an online tool to help you add MariaDB's official repository according to your Linux distribution. This tool provides steps to add the MariaDB repository for openSUSE, Arch Linux, Mageia, Fedora, CentOS, RedHat, Mint, Ubuntu, and Debian.
+
+![](https://farm6.staticflickr.com/5809/20367745260_073020b910_c.jpg)
+
+As an example, let's use the Ubuntu 14.04 distribution and CentOS 7 to configure the MariaDB repository.
+
+**Ubuntu 14.04**
+
+    $ sudo apt-get install software-properties-common
+    $ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
+    $ sudo add-apt-repository 'deb http://mirror.mephi.ru/mariadb/repo/5.5/ubuntu trusty main'
+    $ sudo apt-get update
+    $ sudo apt-get install mariadb-server
+
+**CentOS 7**
+
+Create a custom yum repository file for MariaDB as follows.
+
+    $ sudo vi /etc/yum.repos.d/MariaDB.repo
+
+----------
+
+    [mariadb]
+    name = MariaDB
+    baseurl = http://yum.mariadb.org/5.5/centos7-amd64
+    gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
+    gpgcheck=1
+
+----------
+
+    $ sudo yum install MariaDB-server MariaDB-client
+
+After all necessary packages are installed, you may be asked to type a new password for the root user account.
After setting the root password, don't forget to restore your my.cnf backup file.
+
+    $ sudo cp /opt/my.cnf.bak /etc/mysql/my.cnf
+
+Now start the MariaDB service as follows.
+
+    $ sudo service mariadb start
+
+or:
+
+    $ sudo systemctl start mariadb
+
+or:
+
+    $ sudo /etc/init.d/mariadb start
+
+### Importing MySQL Database(s) ###
+
+Finally, we have to import the previously exported database(s) back into the MariaDB server as follows.
+
+    $ mysql -u root -p < backupdb.sql
+
+Enter your MariaDB root password, and the database import process will start. When the import process is finished, it will return to a command prompt.
+
+To check whether or not the import process completed successfully, log into the MariaDB server and perform some sample queries.
+
+    $ mysql -u root -p
+
+----------
+
+    MariaDB [(none)]> show databases;
+    MariaDB [(none)]> use test01;
+    MariaDB [test01]> select * from pet;
+
+![](https://farm6.staticflickr.com/5820/20562243721_428a9a12a7_b.jpg)
+
+### Conclusion ###
+
+As you can see in this tutorial, MySQL-to-MariaDB migration is not difficult. MariaDB has a lot of new features that MySQL lacks, and you should know about them. As far as configuration is concerned, in my test case, I simply used my old MySQL configuration file (my.cnf) as the MariaDB configuration file, and the import process completed without any issue. My suggestion for the configuration is that you read the documentation on MariaDB configuration options carefully before the migration, especially if you are using specific MySQL configurations.
+
+If you are running a more complex setup with tons of tables and databases, including clustering or master-slave replication, take a look at the [more detailed guide][3] by the Mozilla IT and Operations team, or the [official MariaDB documentation][4].
+
+### Troubleshooting ###
+
+1. While running the mysqldump command to back up databases, you get the following error.
+ + $ mysqldump --all-databases --user=root --password --master-data > backupdb.sql + +---------- + + mysqldump: Error: Binlogging on server not active + +By using "--master-data", you are trying to include binary log information in the exported output, which is useful for database replication and recovery. However, binary logging is not enabled in MySQL server. To fix this error, modify your my.cnf file, and add the following option under [mysqld] section. + + log-bin=mysql-bin + +Save my.cnf file, and restart the MySQL service: + + $ sudo service mysql restart + +or: + + $ sudo systemctl restart mysql + +or: + + $ sudo /etc/init.d/mysql restart + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/migrate-mysql-to-mariadb-linux.html + +作者:[Kristophorus Hadiono][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/kristophorus +[1]:https://mariadb.com/kb/en/mariadb/mariadb-vs-mysql-compatibility/ +[2]:https://downloads.mariadb.org/mariadb/repositories/#mirror=aasaam +[3]:https://blog.mozilla.org/it/2013/12/16/upgrading-from-mysql-5-1-to-mariadb-5-5/ +[4]:https://mariadb.com/kb/en/mariadb/documentation/ \ No newline at end of file From bf1893ed319087970b2a8b1cf72ba1cbc4f11871 Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Sun, 16 Aug 2015 08:45:34 +0800 Subject: [PATCH 191/697] Update 20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md --- ... 
Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md b/sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md index e2a78e88dc..9309069bb8 100644 --- a/sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md +++ b/sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md @@ -1,3 +1,5 @@ +DongShuaike is translating. + Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action ================================================================================ ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screen-Shot-2015-08-12-at-14.19.42.png) @@ -59,4 +61,4 @@ via: http://www.omgubuntu.co.uk/2015/08/ubuntu-nvidia-graphics-drivers-ppa-is-re 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers \ No newline at end of file +[1]:http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers From 04b06a015b47139520f7940ee17e5a2d5e861c8b Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 16 Aug 2015 09:51:17 +0800 Subject: [PATCH 192/697] Update 20150816 How to migrate MySQL to MariaDB on Linux.md --- .../tech/20150816 How to migrate MySQL to MariaDB on Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md b/sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md index 8c1b68b1ed..83dfe7b923 100644 --- a/sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md +++ b/sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md @@ -1,3 +1,4 @@ +translation by strugglingyouth How to migrate MySQL to MariaDB on 
Linux ================================================================================ Since the Oracle's acquisition of MySQL, a lot of MySQL developers and users moved away from MySQL due to Oracle's more closed-door stance on MySQL development and maintenance. The community-driven outcome of such movement is a fork of MySQL, called MariaDB. Led by original MySQL developers, the development of MariaDB follows the open-source philosophy and makes sure of [its binary compatibility with MySQL][1]. The Linux distributions such as Red Hat families (Fedora, CentOS, RHEL), Ubuntu and Mint, openSUSE and Debian already started to use and support MariaDB as a drop-in replacement of MySQL. @@ -183,4 +184,4 @@ via: http://xmodulo.com/migrate-mysql-to-mariadb-linux.html [1]:https://mariadb.com/kb/en/mariadb/mariadb-vs-mysql-compatibility/ [2]:https://downloads.mariadb.org/mariadb/repositories/#mirror=aasaam [3]:https://blog.mozilla.org/it/2013/12/16/upgrading-from-mysql-5-1-to-mariadb-5-5/ -[4]:https://mariadb.com/kb/en/mariadb/documentation/ \ No newline at end of file +[4]:https://mariadb.com/kb/en/mariadb/documentation/ From 83ff975b611de4e0b84c8d799c2a0c1cd13056b0 Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Sun, 16 Aug 2015 10:14:53 +0800 Subject: [PATCH 193/697] Delete 20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md --- ...raphics Drivers PPA Is Ready For Action.md | 64 ------------------- 1 file changed, 64 deletions(-) delete mode 100644 sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md diff --git a/sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md b/sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md deleted file mode 100644 index 9309069bb8..0000000000 --- a/sources/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md +++ /dev/null @@ -1,64 +0,0 @@ -DongShuaike is translating. 
- -Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action -================================================================================ -![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screen-Shot-2015-08-12-at-14.19.42.png) - -Get your frame-rate on - -**Well, that didn’t take long. ** - -Just days after [proposing][1] the creation of a new PPA to provide Ubuntu users with the latest NVIDIA graphics drivers the Ubuntu community has clubbed together to do, well, just that. - -The plainly named ‘**Graphics Drivers PPA**‘ contains the latest release of NVIDIA’s proprietary Linux grapics drivers, packaged up for Ubuntu users to upgrade to – no binary runtime headaches needed! - -The PPA is designed to offer gamers a way to run the latest games on the latest on Ubuntu as easily as possible. - -#### Ready, But Not Ready #### - -Jorge Castro’s idea to create a ‘blessed’ PPA containing newer NVIDIA graphics drivers for those wot want ’em has been greeted with enthusiasm by Ubuntu users and games developers alike. - -Even those involved in porting some of Steam’s biggest titles to Linux have chimed in to offer advice and suggestions. - -Edwin Smith, head of production at Feral Interactive (‘Shadow of Mordor’) welcomed the initiative to prove users with “easier way of updating drivers”. - -### How To Use The New Nvidia Drivers PPA ### - -Although the new ‘Graphic Drivers PPA’ is live it is not strictly ready for the prime time. Its maintainers caution: - -> “This PPA is currently in testing, you should be experienced with packaging before you dive in here. Give a few days to sort out the kinks.” - -Jorge, who soft launched the PPA in a post to the Ubuntu desktop mailing list, also notes that gamers using existing PPAs, like xorg-edgers, for timely graphics driver updates won’t notice any driver difference for now (as the drivers have simply been copied over from some of those PPAs to this new one). 
- -“The real fun begins when new drivers are released,” he adds. - -Right now, as of writing, the PPA contains a batch of recent Nvidia drivers for Ubuntu 12.04.1 through 15.10. Note all drivers are available for all releases. - -> **It should go without saying: unless you know what you’re doing, and how to undo it, do not follow the instructions that follow. ** - -To add the PPA run the following in a new Terminal window: - - sudo add-apt-repository ppa:graphics-drivers/ppa - -To upgrade to or install the latest Nvidia drivers: - - sudo apt-get update && sudo apt-get install nvidia-355 - -Remember: if the PPA breaks your system you are allowed to keep both halves. - -To roll back/undo changes made the PPA you should use the ppa-purge command. - -Feel free to leave any advice/help/corrections/thoughts on the PPA (and as I don’t have NVIDIA hardware to test the above out for myself, it’s all appreciated) in the comments below. - --------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2015/08/ubuntu-nvidia-graphics-drivers-ppa-is-ready-for-action - -作者:[Joey-Elijah Sneddon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers From 3dc4222fa12ead237c9b256a171beaf6d9071e5d Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Sun, 16 Aug 2015 10:15:33 +0800 Subject: [PATCH 194/697] Create 20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md --- ...raphics Drivers PPA Is Ready For Action.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 translated/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md diff --git a/translated/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For 
Action.md b/translated/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md new file mode 100644 index 0000000000..3726a6465a --- /dev/null +++ b/translated/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md @@ -0,0 +1,66 @@ +Ubuntu NVIDIA显卡驱动PPA已经做好准备 +================================================================================ +![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screen-Shot-2015-08-12-at-14.19.42.png) + +加速你的帧频! + +**嘿,各位,稍安勿躁,很快就好。** + +就在提议开发一个新的PPA来提供给Ubuntu用户们最新的NVIDIA显卡驱动后不久,ubuntu社区的人们又集结起来了,就是为了这件事。 + +顾名思义,‘**Graphics Drivers PPA**’包含了最新的NVIDIA Linux显卡驱动发行版,打包起来给用户升级用——没有让人头疼的二进制运行时文件! + +PPA被设计用来让家们尽可能方便地在Ubuntu上上运行最新款的游戏。 + +#### 万事俱备,只欠东风 #### + +Jorge Castro开发一个包含NVIDIA最新显卡驱动的PPA神器的想法得到了Ubuntu用户和广大游戏开发者的热烈响应。 + +就连那些致力于将游戏从“Steam平台”移植到Linux的游戏人员,也给了不少建议。 + +Edwin Smith,Feral Interactive公司的生产总监,对于“开发PPA源,让用户更方便地更新驱动”这种原创行为表示非常欣慰。 + +### 如何使用最新的Nvidia Drivers PPA### + +虽然新的“显卡PPA”已经开发出来,但是现在还远远达不到成熟。开发者们提醒到: + +>“这款PPA还处于测试阶段,在你使用它之前最好对打包经验丰富。请大家稍安勿躁,再等几天。” + +将PPA试发布给Ubuntu desktop邮件列表的Jorge,也强调说,使用现行的一些PPA(比如xorg-edgers)的玩家可能发现不了什么区别(因为现在的驱动只不过是把内容从其他那些现存驱动拷贝过来了) + +“新驱动发布的时候,好戏才会上演呢,”他说。 + +截至写作本文时为止,这个PPA囊括了从Ubuntu 12.04.1到 15.10各个版本的Nvidia驱动。注意这些驱动对所有的发行版都适用。 + +> **毫无疑问,除非你清楚自己在干些什么,并且知道如果出了问题应该怎么撤销,否则就不要进行下面的操作。** + +新打开一个终端窗口,运行下面的命令加入PPA: + + sudo add-apt-repository ppa:graphics-drivers/ppa + +安装或更新到最新的Nvidia显卡驱动: + + sudo apt-get update && sudo apt-get install nvidia-355 + +记住:如果PPA把你的系统弄崩了,你可得自己去想办法,我们提醒过了哦。(译者注:切记!) 
+ +如果想要撤销对PPA的改变,使用ppa-purge命令。 + +有什么意见,想法,或者指正,就在下面的评论栏里写下来吧。(我没有NVIDIA的硬件来为我自己验证上面的这些东西,如果你可以验证的话,那就太感谢了。) + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2015/08/ubuntu-nvidia-graphics-drivers-ppa-is-ready-for-action + +作者:[Joey-Elijah Sneddon][a] +译者:[DongShuaike](https://github.com/DongShuaike) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers + + + + From 4df5e8680dd705fad61bbe298684e8a24bb46391 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 16 Aug 2015 10:56:35 +0800 Subject: [PATCH 195/697] PUB:20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @DongShuaike 翻译的挺好~加油。 --- ...raphics Drivers PPA Is Ready For Action.md | 66 +++++++++++++++++++ ...raphics Drivers PPA Is Ready For Action.md | 66 ------------------- 2 files changed, 66 insertions(+), 66 deletions(-) create mode 100644 published/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md delete mode 100644 translated/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md diff --git a/published/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md b/published/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md new file mode 100644 index 0000000000..a310c6be3a --- /dev/null +++ b/published/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md @@ -0,0 +1,66 @@ +Ubuntu NVIDIA 显卡驱动 PPA 已经做好准备 +================================================================================ +![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screen-Shot-2015-08-12-at-14.19.42.png) + +加速你的帧率! 
+ +**嘿,各位,稍安勿躁,很快就好。** + +就在提议开发一个[新的 PPA][1] 来给 Ubuntu 用户们提供最新的 NVIDIA 显卡驱动后不久,ubuntu 社区的人们又集结起来了,就是为了这件事。 + +顾名思义,‘**Graphics Drivers PPA**’ 包含了最新的 NVIDIA Linux 显卡驱动发布,已经打包好可供用户升级使用,没有让人头疼的二进制运行时文件! + +这个 PPA 被设计用来让玩家们尽可能方便地在 Ubuntu 上运行最新款的游戏。 + +#### 万事俱备,只欠东风 #### + +Jorge Castro 开发一个包含 NVIDIA 最新显卡驱动的 PPA 神器的想法得到了 Ubuntu 用户和广大游戏开发者的热烈响应。 + +就连那些致力于将“Steam平台”上的知名大作移植到 Linux 上的人们,也给了不少建议。 + +Edwin Smith,Feral Interactive 公司(‘Shadow of Mordor’) 的产品总监,对于“让用户更方便地更新驱动”的倡议表示非常欣慰。 + +### 如何使用最新的 Nvidia Drivers PPA### + +虽然新的“显卡PPA”已经开发出来,但是现在还远远达不到成熟。开发者们提醒到: + +> “这个 PPA 还处于测试阶段,在你使用它之前最好有一些打包的经验。请大家稍安勿躁,再等几天。” + +将 PPA 试发布给 Ubuntu desktop 邮件列表的 Jorge,也强调说,使用现行的一些 PPA(比如 xorg-edgers)的玩家可能发现不了什么区别(因为现在的驱动只不过是把内容从其他那些现存驱动拷贝过来了) + +“新驱动发布的时候,好戏才会上演呢,”他说。 + +截至写作本文时为止,这个 PPA 囊括了从 Ubuntu 12.04.1 到 15.10 各个版本的 Nvidia 驱动。注意这些驱动对所有的发行版都适用。 + +> **毫无疑问,除非你清楚自己在干些什么,并且知道如果出了问题应该怎么撤销,否则就不要进行下面的操作。** + +新打开一个终端窗口,运行下面的命令加入 PPA: + + sudo add-apt-repository ppa:graphics-drivers/ppa + +安装或更新到最新的 Nvidia 显卡驱动: + + sudo apt-get update && sudo apt-get install nvidia-355 + +记住:如果PPA把你的系统弄崩了,你可得自己去想办法,我们提醒过了哦。(译者注:切记!) 
+ +如果想要撤销对PPA的改变,使用 `ppa-purge` 命令。 + +有什么意见,想法,或者指正,就在下面的评论栏里写下来吧。(我没有 NVIDIA 的硬件来为我自己验证上面的这些东西,如果你可以验证的话,那就太感谢了。) + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2015/08/ubuntu-nvidia-graphics-drivers-ppa-is-ready-for-action + +作者:[Joey-Elijah Sneddon][a] +译者:[DongShuaike](https://github.com/DongShuaike) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:https://linux.cn/article-6030-1.html + + + + diff --git a/translated/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md b/translated/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md deleted file mode 100644 index 3726a6465a..0000000000 --- a/translated/news/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md +++ /dev/null @@ -1,66 +0,0 @@ -Ubuntu NVIDIA显卡驱动PPA已经做好准备 -================================================================================ -![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screen-Shot-2015-08-12-at-14.19.42.png) - -加速你的帧频! - -**嘿,各位,稍安勿躁,很快就好。** - -就在提议开发一个新的PPA来提供给Ubuntu用户们最新的NVIDIA显卡驱动后不久,ubuntu社区的人们又集结起来了,就是为了这件事。 - -顾名思义,‘**Graphics Drivers PPA**’包含了最新的NVIDIA Linux显卡驱动发行版,打包起来给用户升级用——没有让人头疼的二进制运行时文件! 
- -PPA被设计用来让家们尽可能方便地在Ubuntu上上运行最新款的游戏。 - -#### 万事俱备,只欠东风 #### - -Jorge Castro开发一个包含NVIDIA最新显卡驱动的PPA神器的想法得到了Ubuntu用户和广大游戏开发者的热烈响应。 - -就连那些致力于将游戏从“Steam平台”移植到Linux的游戏人员,也给了不少建议。 - -Edwin Smith,Feral Interactive公司的生产总监,对于“开发PPA源,让用户更方便地更新驱动”这种原创行为表示非常欣慰。 - -### 如何使用最新的Nvidia Drivers PPA### - -虽然新的“显卡PPA”已经开发出来,但是现在还远远达不到成熟。开发者们提醒到: - ->“这款PPA还处于测试阶段,在你使用它之前最好对打包经验丰富。请大家稍安勿躁,再等几天。” - -将PPA试发布给Ubuntu desktop邮件列表的Jorge,也强调说,使用现行的一些PPA(比如xorg-edgers)的玩家可能发现不了什么区别(因为现在的驱动只不过是把内容从其他那些现存驱动拷贝过来了) - -“新驱动发布的时候,好戏才会上演呢,”他说。 - -截至写作本文时为止,这个PPA囊括了从Ubuntu 12.04.1到 15.10各个版本的Nvidia驱动。注意这些驱动对所有的发行版都适用。 - -> **毫无疑问,除非你清楚自己在干些什么,并且知道如果出了问题应该怎么撤销,否则就不要进行下面的操作。** - -新打开一个终端窗口,运行下面的命令加入PPA: - - sudo add-apt-repository ppa:graphics-drivers/ppa - -安装或更新到最新的Nvidia显卡驱动: - - sudo apt-get update && sudo apt-get install nvidia-355 - -记住:如果PPA把你的系统弄崩了,你可得自己去想办法,我们提醒过了哦。(译者注:切记!) - -如果想要撤销对PPA的改变,使用ppa-purge命令。 - -有什么意见,想法,或者指正,就在下面的评论栏里写下来吧。(我没有NVIDIA的硬件来为我自己验证上面的这些东西,如果你可以验证的话,那就太感谢了。) - --------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2015/08/ubuntu-nvidia-graphics-drivers-ppa-is-ready-for-action - -作者:[Joey-Elijah Sneddon][a] -译者:[DongShuaike](https://github.com/DongShuaike) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers - - - - From faaf65211719dfa5152015f20cf6780b4a17beec Mon Sep 17 00:00:00 2001 From: XIAOYU <1136299502@qq.com> Date: Sun, 16 Aug 2015 11:43:06 +0800 Subject: [PATCH 196/697] translating by xiaoyu33 translating by xiaoyu33 --- ...0150816 shellinabox--A Web based AJAX Terminal Emulator.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md b/sources/share/20150816 shellinabox--A Web 
based AJAX Terminal Emulator.md index 002a2ed10f..c6f44a30f5 100644 --- a/sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md +++ b/sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md @@ -1,3 +1,5 @@ +translating by xiaoyu33 + shellinabox – A Web based AJAX Terminal Emulator ================================================================================ ### About shellinabox ### @@ -154,4 +156,4 @@ via: http://www.unixmen.com/shellinabox-a-web-based-ajax-terminal-emulator/ [a]:http://www.unixmen.com/author/sk/ [1]:https://code.google.com/p/shellinabox/wiki/shellinaboxd_man -[2]:https://code.google.com/p/shellinabox/ \ No newline at end of file +[2]:https://code.google.com/p/shellinabox/ From fd32cc53be0833b4a5609b44c75ade3a6b884f73 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 16 Aug 2015 12:58:38 +0800 Subject: [PATCH 197/697] Delete 20150816 How to migrate MySQL to MariaDB on Linux.md --- ...ow to migrate MySQL to MariaDB on Linux.md | 187 ------------------ 1 file changed, 187 deletions(-) delete mode 100644 sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md diff --git a/sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md b/sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md deleted file mode 100644 index 83dfe7b923..0000000000 --- a/sources/tech/20150816 How to migrate MySQL to MariaDB on Linux.md +++ /dev/null @@ -1,187 +0,0 @@ -translation by strugglingyouth -How to migrate MySQL to MariaDB on Linux -================================================================================ -Since the Oracle's acquisition of MySQL, a lot of MySQL developers and users moved away from MySQL due to Oracle's more closed-door stance on MySQL development and maintenance. The community-driven outcome of such movement is a fork of MySQL, called MariaDB. 
Led by original MySQL developers, the development of MariaDB follows the open-source philosophy and makes sure of [its binary compatibility with MySQL][1]. The Linux distributions such as Red Hat families (Fedora, CentOS, RHEL), Ubuntu and Mint, openSUSE and Debian already started to use and support MariaDB as a drop-in replacement of MySQL. - -If you want to migrate your database from MySQL to MariaDB, this article is what you are looking for. Fortunately, due to their binary compatibility, MySQL-to-MariaDB migration process is pretty much straightforward. If you follow the steps below, the migration from MySQL to MariaDB will most likely be painless. - -### Prepare a MySQL Database and a Table ### - -For demonstration purpose, let's create a test MySQL database and one table in the database before doing the migration. Skip this step if you already have existing MySQL database(s) to migrate to MariaDB. Otherwise proceed as follows. - -Log in into MySQL from a terminal by typing your MySQL root user password. - - $ mysql -u root -p - -Create a database and a table. - - mysql> create database test01; - mysql> use test01; - mysql> create table pet(name varchar(30), owner varchar(30), species varchar(20), sex char(1)); - -Add some records to the table. - - mysql> insert into pet values('brandon','Jack','puddle','m'),('dixie','Danny','chihuahua','f'); - -Then quit the MySQL database. - -### Backup the MySQL Database ### - -The next step is to back up existing MySQL database(s). Use the following mysqldump command to export all existing databases to a file. Before running this command, make sure that binary logging is enabled in your MySQL server. If you don't know how to enable binary logging, see the instructions toward the end of the tutorial. 
- - $ mysqldump --all-databases --user=root --password --master-data > backupdb.sql - -![](https://farm6.staticflickr.com/5775/20555772385_21b89335e3_b.jpg) - -Now create a backup of my.cnf file somewhere in your system before uninstalling MySQL. This step is optional. - - $ sudo cp /etc/mysql/my.cnf /opt/my.cnf.bak - -### Uninstall MySQL Package ### - -First, you need to stop the MySQL service. - - $ sudo service mysql stop - -or: - - $ sudo systemctl stop mysql - -or: - - $ sudo /etc/init.d/mysql stop - -Then go ahead and remove MySQL packages and configurations as follows. - -On RPM based system (e.g., CentOS, Fedora or RHEL): - - $ sudo yum remove mysql* mysql-server mysql-devel mysql-libs - $ sudo rm -rf /var/lib/mysql - -On Debian based system (e.g., Debian, Ubuntu or Mint): - - $ sudo apt-get remove mysql-server mysql-client mysql-common - $ sudo apt-get autoremove - $ sudo apt-get autoclean - $ sudo deluser mysql - $ sudo rm -rf /var/lib/mysql - -### Install MariaDB Package ### - -The latest CentOS/RHEL 7 and Ubuntu (14.04 or later) contain MariaDB packages in their official repositories. In Fedora, MariaDB has become a replacement of MySQL since version 19. If you are using an old version or LTS type like Ubuntu 13.10 or earlier, you still can install MariaDB by adding its official repository. - -[MariaDB website][2] provide an online tool to help you add MariaDB's official repository according to your Linux distribution. This tool provides steps to add the MariaDB repository for openSUSE, Arch Linux, Mageia, Fedora, CentOS, RedHat, Mint, Ubuntu, and Debian. - -![](https://farm6.staticflickr.com/5809/20367745260_073020b910_c.jpg) - -As an example, let's use the Ubuntu 14.04 distribution and CentOS 7 to configure the MariaDB repository. 
- -**Ubuntu 14.04** - - $ sudo apt-get install software-properties-common - $ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db - $ sudo add-apt-repository 'deb http://mirror.mephi.ru/mariadb/repo/5.5/ubuntu trusty main' - $ sudo apt-get update - $ sudo apt-get install mariadb-server - -**CentOS 7** - -Create a custom yum repository file for MariaDB as follows. - - $ sudo vi /etc/yum.repos.d/MariaDB.repo - ----------- - - [mariadb] - name = MariaDB - baseurl = http://yum.mariadb.org/5.5/centos7-amd64 - gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB - gpgcheck=1 - ----------- - - $ sudo yum install MariaDB-server MariaDB-client - -After all necessary packages are installed, you may be asked to type a new password for root user account. After setting the root password, don't forget to recover my.cnf backup file. - - $ sudo cp /opt/my.cnf /etc/mysql/ - -Now start MariaDB service as follows. - - $ sudo service mariadb start - -or: - - $ sudo systemctl start mariadb - -or: - - $ sudo /etc/init.d/mariadb start - -### Importing MySQL Database(s) ### - -Finally, we have to import the previously exported database(s) back to MariaDB server as follows. - - $ mysql -u root -p < backupdb.sql - -Enter your MariaDB's root password, and the database import process will start. When the import process is finished, it will return to a command prompt. - -To check whether or not the import process is completed successfully, log in into MariaDB server and perform some sample queries. - - $ mysql -u root -p - ----------- - - MariaDB [(none)]> show databases; - MariaDB [(none)]> use test01; - MariaDB [test01]> select * from pet; - -![](https://farm6.staticflickr.com/5820/20562243721_428a9a12a7_b.jpg) - -### Conclusion ### - -As you can see in this tutorial, MySQL-to-MariaDB migration is not difficult. MariaDB has a lot of new features than MySQL, that you should know about. 
As far as configuration is concerned, in my test case, I simply used my old MySQL configuration file (my.cnf) as a MariaDB configuration file, and the import process was completed fine without any issue. My suggestion for the configuration is that you read the documentation on MariaDB configuration options carefully before the migration, especially if you are using specific MySQL configurations. - -If you are running more complex setup with tons of tables and databases including clustering or master-slave replication, take a look at the [more detailed guide][3] by the Mozilla IT and Operations team, or the [official MariaDB documentation][4]. - -### Troubleshooting ### - -1. While running mysqldump command to back up databases, you are getting the following error. - - $ mysqldump --all-databases --user=root --password --master-data > backupdb.sql - ----------- - - mysqldump: Error: Binlogging on server not active - -By using "--master-data", you are trying to include binary log information in the exported output, which is useful for database replication and recovery. However, binary logging is not enabled in MySQL server. To fix this error, modify your my.cnf file, and add the following option under [mysqld] section. 
- - log-bin=mysql-bin - -Save my.cnf file, and restart the MySQL service: - - $ sudo service mysql restart - -or: - - $ sudo systemctl restart mysql - -or: - - $ sudo /etc/init.d/mysql restart - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/migrate-mysql-to-mariadb-linux.html - -作者:[Kristophorus Hadiono][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/kristophorus -[1]:https://mariadb.com/kb/en/mariadb/mariadb-vs-mysql-compatibility/ -[2]:https://downloads.mariadb.org/mariadb/repositories/#mirror=aasaam -[3]:https://blog.mozilla.org/it/2013/12/16/upgrading-from-mysql-5-1-to-mariadb-5-5/ -[4]:https://mariadb.com/kb/en/mariadb/documentation/ From bd1a96a2eb86a075400dec864a6501f8e943bcab Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 16 Aug 2015 12:59:52 +0800 Subject: [PATCH 198/697] Create 20150816 How to migrate MySQL to MariaDB on Linux.md --- ...ow to migrate MySQL to MariaDB on Linux.md | 188 ++++++++++++++++++ 1 file changed, 188 insertions(+) create mode 100644 translated/tech/20150816 How to migrate MySQL to MariaDB on Linux.md diff --git a/translated/tech/20150816 How to migrate MySQL to MariaDB on Linux.md b/translated/tech/20150816 How to migrate MySQL to MariaDB on Linux.md new file mode 100644 index 0000000000..70856ec874 --- /dev/null +++ b/translated/tech/20150816 How to migrate MySQL to MariaDB on Linux.md @@ -0,0 +1,188 @@ + +在 Linux 中怎样将 MySQL 迁移到 MariaDB 上 +================================================================================ + +自从甲骨文收购 MySQL 后,很多 MySQL 的开发者和用户放弃了 MySQL 由于甲骨文对 MySQL 的开发和维护更多倾向于闭门的立场。在社区驱动下,促使更多人移到 MySQL 的另一个分支中,叫 MariaDB。在原有 MySQL 开发人员的带领下,MariaDB 的开发遵循开源的理念,并确保 [它的二进制格式与 MySQL 兼容][1]。Linux 发行版如 Red Hat 家族(Fedora,CentOS,RHEL),Ubuntu 和Mint,openSUSE 和 Debian 已经开始使用,并支持 MariaDB 作为 
MySQL 的简易替换品。 + +如果想要将 MySQL 中的数据库迁移到 MariaDB 中,这篇文章就是你所期待的。幸运的是,由于他们的二进制兼容性,MySQL-to-MariaDB 迁移过程是非常简单的。如果你按照下面的步骤,将 MySQL 迁移到 MariaDB 会是无痛的。 + +### 准备 MySQL 数据库和表 ### + +出于演示的目的,我们在做迁移之前在数据库中创建一个测试的 MySQL 数据库和表。如果你在 MySQL 中已经有了要迁移到 MariaDB 的数据库,跳过此步骤。否则,按以下步骤操作。 + +在终端输入 root 密码登录到 MySQL 。 + + $ mysql -u root -p + +创建一个数据库和表。 + + mysql> create database test01; + mysql> use test01; + mysql> create table pet(name varchar(30), owner varchar(30), species varchar(20), sex char(1)); + +在表中添加一些数据。 + + mysql> insert into pet values('brandon','Jack','puddle','m'),('dixie','Danny','chihuahua','f'); + +退出 MySQL 数据库. + +### 备份 MySQL 数据库 ### + +下一步是备份现有的 MySQL 数据库。使用下面的 mysqldump 命令导出现有的数据库到文件中。运行此命令之前,请确保你的 MySQL 服务器上启用了二进制日志。如果你不知道如何启用二进制日志,请参阅结尾的教程说明。 + + $ mysqldump --all-databases --user=root --password --master-data > backupdb.sql + +![](https://farm6.staticflickr.com/5775/20555772385_21b89335e3_b.jpg) + +现在,在卸载 MySQL 之前先在系统上备份 my.cnf 文件。此步是可选的。 + + $ sudo cp /etc/mysql/my.cnf /opt/my.cnf.bak + +### 卸载 MySQL ### + +首先,停止 MySQL 服务。 + + $ sudo service mysql stop + +或者: + + $ sudo systemctl stop mysql + +或: + + $ sudo /etc/init.d/mysql stop + +然后继续下一步,使用以下命令移除 MySQL 和配置文件。 + +在基于 RPM 的系统上 (例如, CentOS, Fedora 或 RHEL): + + $ sudo yum remove mysql* mysql-server mysql-devel mysql-libs + $ sudo rm -rf /var/lib/mysql + +在基于 Debian 的系统上(例如, Debian, Ubuntu 或 Mint): + + $ sudo apt-get remove mysql-server mysql-client mysql-common + $ sudo apt-get autoremove + $ sudo apt-get autoclean + $ sudo deluser mysql + $ sudo rm -rf /var/lib/mysql + +### 安装 MariaDB ### + +在 CentOS/RHEL 7和Ubuntu(14.04或更高版本)上,最新的 MariaDB 包含在其官方源。在 Fedora 上,自19版本后 MariaDB 已经替代了 MySQL。如果你使用的是旧版本或 LTS 类型如 Ubuntu 13.10 或更早的,你仍然可以通过添加其官方仓库来安装 MariaDB。 + +[MariaDB 网站][2] 提供了一个在线工具帮助你依据你的 Linux 发行版中来添加 MariaDB 的官方仓库。此工具为 openSUSE, Arch Linux, Mageia, Fedora, CentOS, RedHat, Mint, Ubuntu, 和 Debian 提供了 MariaDB 的官方仓库. 
+ +![](https://farm6.staticflickr.com/5809/20367745260_073020b910_c.jpg) + +下面例子中,我们使用 Ubuntu 14.04 发行版和 CentOS 7 配置 MariaDB 库。 + +**Ubuntu 14.04** + + $ sudo apt-get install software-properties-common + $ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db + $ sudo add-apt-repository 'deb http://mirror.mephi.ru/mariadb/repo/5.5/ubuntu trusty main' + $ sudo apt-get update + $ sudo apt-get install mariadb-server + +**CentOS 7** + +以下为 MariaDB 创建一个自定义的 yum 仓库文件。 + + $ sudo vi /etc/yum.repos.d/MariaDB.repo + +---------- + + [mariadb] + name = MariaDB + baseurl = http://yum.mariadb.org/5.5/centos7-amd64 + gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB + gpgcheck=1 + +---------- + + $ sudo yum install MariaDB-server MariaDB-client + +安装了所有必要的软件包后,你可能会被要求为 root 用户创建一个新密码。设置 root 的密码后,别忘了恢复备份的 my.cnf 文件。 + + $ sudo cp /opt/my.cnf /etc/mysql/ + +现在启动 MariaDB 服务。 + + $ sudo service mariadb start + +或者: + + $ sudo systemctl start mariadb + +或: + + $ sudo /etc/init.d/mariadb start + +### 导入 MySQL 的数据库 ### + +最后,我们将以前导出的数据库导入到 MariaDB 服务器中。 + + $ mysql -u root -p < backupdb.sql + +输入你 MariaDB 的 root 密码,数据库导入过程将开始。导入过程完成后,将返回到命令提示符下。 + +要检查导入过程是否完全成功,请登录到 MariaDB 服务器,并查看一些样本来检查。 + + $ mysql -u root -p + +---------- + + MariaDB [(none)]> show databases; + MariaDB [(none)]> use test01; + MariaDB [test01]> select * from pet; + +![](https://farm6.staticflickr.com/5820/20562243721_428a9a12a7_b.jpg) + +### 结论 ### + +如你在本教程中看到的,MySQL-to-MariaDB 的迁移并不难。MariaDB 相比 MySQL 有很多新的功能,你应该知道的。至于配置方面,在我的测试情况下,我只是将我旧的 MySQL 配置文件(my.cnf)作为 MariaDB 的配置文件,导入过程完全没有出现任何问题。对于配置文件,我建议你在迁移之前请仔细阅读MariaDB 配置选项的文件,特别是如果你正在使用 MySQL 的特殊配置。 + +如果你正在运行更复杂的配置有海量的数据库和表,包括群集或主从复制,看一看 Mozilla IT 和 Operations 团队的 [更详细的指南][3] ,或者 [官方的 MariaDB 文档][4]。 + +### 故障排除 ### + +1.在运行 mysqldump 命令备份数据库时出现以下错误。 + + $ mysqldump --all-databases --user=root --password --master-data > backupdb.sql + +---------- + + mysqldump: Error: Binlogging on server not active + +通过使用 
"--master-data",你要在导出的输出中包含二进制日志信息,这对于数据库的复制和恢复是有用的。但是,二进制日志未在 MySQL 服务器启用。要解决这个错误,修改 my.cnf 文件,并在 [mysqld] 部分添加下面的选项。 + + log-bin=mysql-bin + +保存 my.cnf 文件,并重新启动 MySQL 服务: + + $ sudo service mysql restart + +或者: + + $ sudo systemctl restart mysql + +或: + + $ sudo /etc/init.d/mysql restart + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/migrate-mysql-to-mariadb-linux.html + +作者:[Kristophorus Hadiono][a] +译者:[strugglingyouth](https://github.com/译者ID) +校对:[strugglingyouth](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/kristophorus +[1]:https://mariadb.com/kb/en/mariadb/mariadb-vs-mysql-compatibility/ +[2]:https://downloads.mariadb.org/mariadb/repositories/#mirror=aasaam +[3]:https://blog.mozilla.org/it/2013/12/16/upgrading-from-mysql-5-1-to-mariadb-5-5/ +[4]:https://mariadb.com/kb/en/mariadb/documentation/ From b01218101e32a365db7a9f75ec47269e647aa4cd Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 16 Aug 2015 13:13:39 +0800 Subject: [PATCH 199/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译到212line --- ...28 Process of the Linux kernel building.md | 24 ++++++++++++++++++- 1 file changed, 23 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index f11c1cc7a2..4191968e04 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -110,6 +110,14 @@ We check the `KBUILD_SRC` that represent top directory of the source code of the * If custom output directory created sucessfully, execute `make` again with the new directory (see `-C` option). 
The next `ifeq` statements checks that `C` or `M` options was passed to the make:
+系统会检查变量`KBUILD_SRC`(它代表内核源码的顶层目录),如果它是空的(第一次执行makefile 时总是空的),并且变量`KBUILD_OUTPUT` 被设成了选项`O` 的值(如果这个选项被传进来了)。下一步会检查变量`KBUILD_OUTPUT` ,如果之前设置过这个变量,那么接下来会做以下几件事:
+
+* 将变量`KBUILD_OUTPUT` 的值保存到临时变量`saved-output`;
+* 尝试创建输出目录;
+* 检查创建的输出目录,如果失败了就打印错误;
+* 如果成功创建了输出目录,那么就在新目录重新执行`make` 命令(参见选项`-C`)。
+
+下一个`ifeq` 语句会检查传递给make 的选项`C` 和`M`:

```Makefile
ifeq ("$(origin C)", "command line")
@@ -126,6 +134,8 @@ endif

The first `C` option tells to the `makefile` that need to check all `c` source code with a tool provided by the `$CHECK` environment variable, by default it is [sparse](https://en.wikipedia.org/wiki/Sparse). The second `M` option provides build for the external modules (will not see this case in this part). As we set this variables we make a check of the `KBUILD_SRC` variable and if it is not set we set `srctree` variable to `.`:
+第一个选项`C` 会告诉`makefile` 需要使用环境变量`$CHECK` 提供的工具来检查全部`c` 代码,默认情况下会使用[sparse](https://en.wikipedia.org/wiki/Sparse)。第二个选项`M` 会用来编译外部模块(本文不做讨论)。因为设置了这两个变量,系统还会检查变量`KBUILD_SRC`,如果`KBUILD_SRC` 没有被设置,系统会设置变量`srctree` 为`.`:
+
```Makefile
ifeq ($(KBUILD_SRC),)
	srctree := .
@@ -138,7 +148,9 @@ obj := $(objtree)
export srctree objtree VPATH
```

-That tells to `Makefile` that source tree of the Linux kernel will be in the current directory where `make` command was executed. After this we set `objtree` and other variables to this directory and export these variables. The next step is the getting value for the `SUBARCH` variable that will represent tewhat the underlying archicecture is:
+That tells to `Makefile` that source tree of the Linux kernel will be in the current directory where `make` command was executed. After this we set `objtree` and other variables to this directory and export these variables. 
The next step is the setting value for the `SUBARCH` variable that will represent what the underlying archicecture is:
+
+这将会告诉`Makefile` 内核的源码树就在执行make 命令的目录。然后要设置`objtree` 和其他变量为执行make 命令的目录,并且将这些变量导出。接着就是要获取`SUBARCH` 的值,这个变量代表了当前的系统架构(注:一般指 CPU 架构):

```Makefile
SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
@@ -151,6 +163,8 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \

As you can see it executes [uname](https://en.wikipedia.org/wiki/Uname) utils that prints information about machine, operating system and architecture. As it will get output of the `uname` util, it will parse it and assign to the `SUBARCH` variable. As we got `SUBARCH`, we set the `SRCARCH` variable that provides directory of the certain architecture and `hfr-arch` that provides directory for the header files:
+如你所见,系统执行[uname](https://en.wikipedia.org/wiki/Uname) 得到机器、操作系统和架构的信息。因为我们得到的是`uname` 的输出,所以我们需要做一些处理再赋给变量`SUBARCH` 。获得`SUBARCH` 之后就要设置`SRCARCH` 和`hfr-arch`,`SRCARCH`提供了硬件架构相关代码的目录,`hfr-arch` 提供了相关头文件的目录:
+
```Makefile
ifeq ($(ARCH),i386)
	SRCARCH := x86
@@ -164,6 +178,8 @@ hdr-arch := $(SRCARCH)

Note that `ARCH` is the alias for the `SUBARCH`. 
In the next step we set the `KCONFIG_CONFIG` variable that represents path to the kernel configuration file and if it was not set before, it will be `.config` by default:
+注意:`ARCH` 是`SUBARCH` 的别名。如果没有设置过代表内核配置文件路径的变量`KCONFIG_CONFIG`,下一步系统会设置它,默认情况下就是`.config` :
+
```Makefile
KCONFIG_CONFIG ?= .config
export KCONFIG_CONFIG
@@ -171,6 +187,8 @@ export KCONFIG_CONFIG

and the [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) that will be used during kernel compilation:

+和编译内核过程中要用到的[shell](https://en.wikipedia.org/wiki/Shell_%28computing%29):
+
```Makefile
CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
	  else if [ -x /bin/bash ]; then echo /bin/bash; \
@@ -179,6 +197,9 @@ CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \

The next set of variables related to the compiler that will be used during Linux kernel compilation. We set the host compilers for the `c` and `c++` and flags for it:
+接下来就要设置一组和编译内核的编译器相关的变量。我们会设置host 的C 和C++ 的编译器及相关配置项:
+
+
```Makefile
HOSTCC = gcc
HOSTCXX = g++
@@ -188,6 +209,7 @@ HOSTCXXFLAGS = -O2

Next we will meet the `CC` variable that represent compiler too, so why do we need in the `HOST*` variables? The `CC` is the target compiler that will be used during kernel compilation, but `HOSTCC` will be used during compilation of the set of the `host` programs (we will see it soon). 
After this we can see definition of the `KBUILD_MODULES` and `KBUILD_BUILTIN` variables that are used for the determination of the what to compile (kernel, modules or both):
+接下来我们会接触到代表编译器的变量`CC`,为什么还要`HOST*` 这些选项呢?`CC` 是编译内核过程中要使用的目标架构的编译器,但是`HOSTCC` 是要被用来编译一组`host` 程序的(下面我们就会看到)。然后我们就看看变量`KBUILD_MODULES` 和`KBUILD_BUILTIN` 的定义,这两个变量决定了我们要编译什么(内核、模块还是其他?):

```Makefile
KBUILD_MODULES :=
KBUILD_BUILTIN := 1

From 686b2855ac1c72f760ac469f12728e08e1f216a2 Mon Sep 17 00:00:00 2001
From: XIAOYU <1136299502@qq.com>
Date: Sun, 16 Aug 2015 13:55:13 +0800
Subject: [PATCH 200/697] translated shellinabox--A Web based AJAX

translated shellinabox--A Web based AJAX Terminal Emulator
---
 ...box--A Web based AJAX Terminal Emulator.md | 159 ------------------
 ...box--A Web based AJAX Terminal Emulator.md | 157 +++++++++++++++++
 2 files changed, 157 insertions(+), 159 deletions(-)
 delete mode 100644 sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md
 create mode 100644 translated/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md

diff --git a/sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md b/sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md
deleted file mode 100644
index c6f44a30f5..0000000000
--- a/sources/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md
+++ /dev/null
@@ -1,159 +0,0 @@
-translating by xiaoyu33
-
-shellinabox – A Web based AJAX Terminal Emulator
-================================================================================
-### About shellinabox ###
-
-Greetings Unixmen readers!
-
-We, usually, access any remote servers using well known communication tools like OpenSSH, and Putty etc. But, one important thing is we can’t access the remote systems using those tools behind a Firewall or the firewalls that allow only HTTPS traffic. No worries! We, still, have some options to access your remote systems even if you’re behind a firewall. 
And also, you don’t need to install any communications tools like OpenSSH or Putty. All you need is only a modern JavaScript and CSS enabled browser. And you don’t need to install any plugins or third party softwares either. - -Meet **Shell In A Box**, pronounced as **shellinabox**, a free, open source, web based AJAX Terminal emulator developed by **Markus Gutschke**. It uses AJAX technology to provide the look and feel of a native shell via a web browser. The **shellinaboxd** daemon implements a webserver that listens on the specified port. The web server publishes one or more services that will be displayed in a VT100 emulator implemented as an AJAX web application. By default, the port is 4200. You can change the default port to any random port number of your choice. After installing shellinabox on all your remote servers that you want to access them from your local system, open up the web browser and navigate to: **http://IP-Address:4200/**. Enter your user name and password and start using your remote system’s shell. Seems interesting, isn’t it? Indeed! - -**Disclaimer**: - -Shellinabox is not a ssh client or any sort of security software. It is just a application that emulates a remote system’s shell via a web browser. Also, It has nothing to do with SSH in anyway. It’s not a bullet proof security way to remote your systems. It is just one of the easiest methods so far. You should not run it on any public network for any reason. - -### Install shellinabox ### - -#### In Debian/Ubuntu based systems: #### - -shellinabox is available in the default repositories. So, you can install it using command: - - $ sudo apt-get install shellinabox - -#### In RHEL/CentOS systems: #### - -First, install EPEL repository using command: - - # yum install epel-release - -Then, install shellinabox using command: - - # yum install shellinabox - -Done! - -### Configure shellinabox ### - -As I mentioned before, shellinabox listens on port **4200** by default. 
You can change this port to any random number of your choice to make it difficult to guess by anyone. - -The shellinabox config file is located in **/etc/default/shellinabox** file by default in Debian/Ubuntu systems. In RHEL/CentOS/Fedora, the default location of config file is **/etc/sysconfig/shellinaboxd**. - -If you want to change the default port, - -In Debian/Ubuntu: - - $ sudo vi /etc/default/shellinabox - -In RHEL/CentOS/Fedora: - - # vi /etc/sysconfig/shellinaboxd - -Change your port to any random number. Since I am testing it on my local network, I use the default values. - - # Shell in a box daemon configuration - # For details see shellinaboxd man page - - # Basic options - USER=shellinabox - GROUP=shellinabox - CERTDIR=/var/lib/shellinabox - PORT=4200 - OPTS="--disable-ssl-menu -s /:LOGIN" - - # Additional examples with custom options: - - # Fancy configuration with right-click menu choice for black-on-white: - # OPTS="--user-css Normal:+black-on-white.css,Reverse:-white-on-black.css --disable-ssl-menu -s /:LOGIN" - - # Simple configuration for running it as an SSH console with SSL disabled: - # OPTS="-t -s /:SSH:host.example.com" - -Restart shelinabox service. - -**In Debian/Ubuntu:** - - $ sudo systemctl restart shellinabox - -Or - - $ sudo service shellinabox restart - -In RHEL/CentOS systems run the following command to start shellinaboxd service automatically on every reboot. - - # systemctl enable shellinaboxd - -Or - - # chkconfig shellinaboxd on - -Remember to open up port **4200** or any port that you assign if you are running a firewall. - -For example, in RHEL/CentOS systems, you can allow the port as shown below. - - # firewall-cmd --permanent --add-port=4200/tcp - ----------- - - # firewall-cmd --reload - -### Usage ### - -Now, go to your client systems, open up the web browser and navigate to: **https://ip-address-of-remote-servers:4200**. - -**Note**: Mention the correct port if you have changed it. 
- -You’ll get a warning message of certificate issue. Accept the certificate and go on. - -![Privacy error - Google Chrome_001](http://www.unixmen.com/wp-content/uploads/2015/08/Privacy-error-Google-Chrome_001.jpg) - -Enter your remote system’s username and password. Now, you’ll be able to access the remote system’s shell right from the browser itself. - -![Shell In A Box - Google Chrome_003](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_003.jpg) - -You can get some additional menu options which might be useful by right clicking on the empty space of your browser. - -![Shell In A Box - Google Chrome_004](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_004.jpg) - -From now on, you can do whatever you want to do in your remote server from the local system’s web browser. - -Once you done, type **exit** in the shell. - -To connect again to the remote system, click the **Connect** button and then type the user name and password of your remote server. - -![Shell In A Box - Google Chrome_005](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_005.jpg) - -For more details about shellinabox, type the following command in your Terminal: - - # man shellinabox - -Or - - # shellinaboxd -help - -Also, refer the [shellinabox wiki page][1] for comprehensive usage details. - -### Conclusion ### - -Like I mentioned before, web-based SSH tools are very useful if you’re running servers behind a Firewall. There are many web-based ssh tools, but Shellinabox is pretty simple and useful tool to emulate a remote system’s shell from anywhere in your network. Since, it is browser based, you can access your remote server from any device as long as you have a JavaScript and CSS enabled browser. - -That’s all for now. Have a good day! 
-
-#### Reference link: ####
-
-- [shellinabox website][2]
-
---------------------------------------------------------------------------------
-
-via: http://www.unixmen.com/shellinabox-a-web-based-ajax-terminal-emulator/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[a]:http://www.unixmen.com/author/sk/
-[1]:https://code.google.com/p/shellinabox/wiki/shellinaboxd_man
-[2]:https://code.google.com/p/shellinabox/
diff --git a/translated/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md b/translated/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md
new file mode 100644
index 0000000000..71acf990c1
--- /dev/null
+++ b/translated/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md
@@ -0,0 +1,157 @@
+shellinabox–基于Web的Ajax的终端模拟器安装及使用详解
+================================================================================
+### shellinabox简介 ###
+
+unixmen的读者朋友们,你们好!
+
+通常情况下,我们访问任何远程服务器时,使用常见的通信工具如OpenSSH和Putty等。但是如果我们在防火墙外,或者防火墙只允许HTTPS流量才能通过,那么我们就不能再使用这些工具来访问远程系统了。不用担心!即使你在防火墙后面,我们依然有办法来访问你的远程系统。而且,你不需要安装任何类似于OpenSSH或Putty的通讯工具。你只需要有一个支持JavaScript和CSS的现代浏览器。并且你不用安装任何插件或第三方应用软件。
+
+让我们来认识一下 **Shell In A Box**,发音是**shellinabox**,它是由**Markus Gutschke**开发的一款免费的、开源的、基于Web的Ajax终端模拟器。它使用AJAX技术,通过Web浏览器提供原生shell的外观和使用感受。**shellinaboxd** 守护进程实现了一个能够侦听指定端口的Web服务器。这个Web服务器会发布一个或多个服务,这些服务会显示在一个以AJAX Web应用程序方式实现的VT100模拟器中。默认情况下,端口为4200。你可以将默认端口更改为你选择的任意端口号。在你的远程服务器安装shellinabox以后,如果你想从本地系统接入,打开Web浏览器并导航到:**http://IP-Address:4200/**。输入你的用户名和密码,然后就可以开始使用你远程系统的shell。看起来很有趣,不是吗?确实! 
+
+**免责声明**:
+
+shellinabox不是SSH客户端或任何安全软件。它仅仅是一个应用程序,能够通过Web浏览器模拟一个远程系统的shell。同时,它和SSH没有任何关系。它并不是一种万无一失的、用来安全远程控制你的系统的方式。这只是迄今为止最简单的方法之一。无论什么原因,你都不应该在任何公共网络上运行它。
+
+### 安装shellinabox ###
+
+#### 在Debian / Ubuntu系统上: ####
+
+shellinabox在默认仓库中是可用的。所以,你可以使用命令来安装它:
+
+    $ sudo apt-get install shellinabox
+
+#### 在RHEL / CentOS系统上: ####
+
+首先,使用命令安装EPEL仓库:
+
+    # yum install epel-release
+
+然后,使用命令安装shellinabox:
+
+    # yum install shellinabox
+
+完成!
+
+### 配置shellinabox ###
+
+正如我之前提到的,shellinabox侦听端口默认为**4200**。你可以将此端口更改为任意数字,让别人难以猜到。
+
+在Debian/Ubuntu系统上shellinabox配置文件的默认位置是**/etc/default/shellinabox**。在RHEL/CentOS/Fedora上,默认位置在**/etc/sysconfig/shellinaboxd**。
+
+如果要更改默认端口,
+
+在Debian / Ubuntu:
+
+    $ sudo vi /etc/default/shellinabox
+
+在RHEL / CentOS / Fedora:
+
+    # vi /etc/sysconfig/shellinaboxd
+
+将你的端口更改为任意数字。因为我在本地网络上测试它,所以我使用默认值。
+
+    # Shell in a box daemon configuration
+    # For details see shellinaboxd man page
+
+    # Basic options
+    USER=shellinabox
+    GROUP=shellinabox
+    CERTDIR=/var/lib/shellinabox
+    PORT=4200
+    OPTS="--disable-ssl-menu -s /:LOGIN"
+
+    # Additional examples with custom options:
+
+    # Fancy configuration with right-click menu choice for black-on-white:
+    # OPTS="--user-css Normal:+black-on-white.css,Reverse:-white-on-black.css --disable-ssl-menu -s /:LOGIN"
+
+    # Simple configuration for running it as an SSH console with SSL disabled:
+    # OPTS="-t -s /:SSH:host.example.com"
+
+重启shellinabox服务。
+
+**在Debian/Ubuntu:**
+
+    $ sudo systemctl restart shellinabox
+
+或者
+
+    $ sudo service shellinabox restart
+
+在RHEL/CentOS系统,运行下面的命令能在每次重启时自动启动shellinaboxd服务。
+
+    # systemctl enable shellinaboxd
+
+或者
+
+    # chkconfig shellinaboxd on
+
+如果你正在运行一个防火墙,记得要打开端口**4200**或任何你指定的端口。
+
+例如,在RHEL/CentOS系统,你可以如下所示允许端口。
+
+    # firewall-cmd --permanent --add-port=4200/tcp
+
+----------
+
+    # firewall-cmd --reload
+
+### 使用 ###
+
+现在,去你的客户端系统,打开Web浏览器并导航到:**https://ip-address-of-remote-servers:4200**。
+
+**注意**:如果你改变了端口,请填写修改后的端口。
+
+你会得到一个证书问题的警告信息。接受该证书并继续。
+
+![Privacy error - Google 
Chrome_001](http://www.unixmen.com/wp-content/uploads/2015/08/Privacy-error-Google-Chrome_001.jpg)
+
+输入远程系统的用户名和密码。现在,你就能够从浏览器本身访问远程系统的shell。
+
+![Shell In A Box - Google Chrome_003](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_003.jpg)
+
+右键点击你浏览器的空白位置。你可以得到一些很有用的额外菜单选项。
+
+![Shell In A Box - Google Chrome_004](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_004.jpg)
+
+从现在开始,你可以通过本地系统的Web浏览器在你的远程服务器随意操作。
+
+当你完成时,记得在shell中键入**exit**退出。
+
+当再次连接到远程系统时,单击**连接**按钮,然后输入远程服务器的用户名和密码。
+
+![Shell In A Box - Google Chrome_005](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_005.jpg)
+
+如果想了解shellinabox更多细节,在你的终端键入下面的命令:
+
+    # man shellinabox
+
+或者
+
+    # shellinaboxd -help
+
+同时,参考[shellinabox 在wiki页面的介绍][1],来了解shellinabox的综合使用细节。
+
+### 结论 ###
+
+正如我之前提到的,如果你的服务器运行在防火墙后面,那么基于web的SSH工具是非常有用的。有许多基于web的SSH工具,但shellinabox是非常简单并且有用的工具,能从网络上的任何地方模拟一个远程系统的shell。因为它是基于浏览器的,所以你可以从任何设备访问你的远程服务器,只要你有一个支持JavaScript和CSS的浏览器。
+
+就这些啦。祝你今天有个好心情! 
+ +#### 参考链接: #### + +- [shellinabox website][2] + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/shellinabox-a-web-based-ajax-terminal-emulator/ + +作者:[SK][a] +译者:[xiaoyu33](https://github.com/xiaoyu33) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/sk/ +[1]:https://code.google.com/p/shellinabox/wiki/shellinaboxd_man +[2]:https://code.google.com/p/shellinabox/ From a545ebe14d5e8611fdcc905db5296ed53a2a4c97 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sun, 16 Aug 2015 16:37:23 +0800 Subject: [PATCH 201/697] [Translating] tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md --- ...Shell Scripting to Automate Linux System Maintenance Tasks.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md b/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md index bcd058611a..6b534423e7 100644 --- a/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md +++ b/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md @@ -1,3 +1,4 @@ +ictlyh Translating Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks ================================================================================ Some time ago I read that one of the distinguishing characteristics of an effective system administrator / engineer is laziness. 
It seemed a little contradictory at first but the author then proceeded to explain why: From ef473497499f650a03962097e990cb74babe2f27 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 16 Aug 2015 18:49:19 +0800 Subject: [PATCH 202/697] PUB:20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux @GOLinux --- ...mportError--No module named wxversion' on Linux.md | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) rename {translated/tech => published}/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md (75%) diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md b/published/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md similarity index 75% rename from translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md rename to published/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md index 2a937daeff..4b3eaf0aa0 100644 --- a/translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md +++ b/published/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md @@ -1,7 +1,7 @@ -Linux有问必答——如何修复Linux上的“ImportError: No module named wxversion”错误 +Linux有问必答:如何修复“ImportError: No module named wxversion”错误 ================================================================================ -> **问题** 我试着在[你的Linux发行版]上运行一个Python应用,但是我得到了这个错误"ImportError: No module named wxversion."。我怎样才能解决Python程序中的这个错误呢? +> **问题** 我试着在[某某 Linux 发行版]上运行一个 Python 应用,但是我得到了这个错误“ImportError: No module named wxversion.”。我怎样才能解决 Python 程序中的这个错误呢? Looking for python... 
2.7.9 - Traceback (most recent call last): File "/home/dev/playonlinux/python/check_python.py", line 1, in 
 import wxversion 
 ImportError: No module named wxversion 
 failed tests 
 
 该错误表明,你的Python应用是基于GUI的,依赖于一个名为wxPython的缺失模块。[wxPython][1]是一个用于wxWidgets GUI库的Python扩展模块,普遍被C++程序员用来设计GUI应用。该wxPython扩展允许Python开发者在任何Python应用中方便地设计和整合GUI。 
-To solve this import error, you need to install wxPython on your Linux, as described below.
+
+要解决这个 import 错误,你需要在你的 Linux 上安装 wxPython,如下:

### 安装wxPython到Debian,Ubuntu或Linux Mint ###

@@ -40,10 +41,10 @@ via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html

作者:[Dan Nanni][a]
译者:[GOLinux](https://github.com/GOLinux)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ask.xmodulo.com/author/nanni
[1]:http://wxpython.org/
-[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
+[2]:https://linux.cn/article-2324-1.html

From 3b29580b83bb95018b748cea58036b45882bc9cc Mon Sep 17 00:00:00 2001
From: wxy
Date: Sun, 16 Aug 2015 19:07:46 +0800
Subject: [PATCH 203/697] PUB:20150728 Tips to Create ISO from CD, Watch User
 Activity and Check Memory Usages of Browser

@strugglingyouth
---
 ...vity and Check Memory Usages of Browser.md | 57 +++++++++++--------
 1 file changed, 32 insertions(+), 25 deletions(-)
 rename {translated/tech => published}/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md (72%)

diff --git a/translated/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md b/published/20150728 Tips to Create ISO from CD, Watch User Activity 
and Check Memory Usages of Browser.md index 02805f62ff..97dd874c43 100644 --- a/translated/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md +++ b/published/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md @@ -1,47 +1,52 @@ - -用 CD 创建 ISO,观察用户活动和检查浏览器内存的技巧 +一些 Linux 小技巧 ================================================================================ 我已经写过 [Linux 提示和技巧][1] 系列的一篇文章。写这篇文章的目的是让你知道这些小技巧可以有效地管理你的系统/服务器。 ![Create Cdrom ISO Image and Monitor Users in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/creating-cdrom-iso-watch-users-in-linux.jpg) -在Linux中创建 Cdrom ISO 镜像和监控用户 +*在Linux中创建 Cdrom ISO 镜像和监控用户* -在这篇文章中,我们将看到如何使用 CD/DVD 驱动器中加载到的内容来创建 ISO 镜像,打开随机手册页学习,看到登录用户的详细情况和查看浏览器内存使用量,而所有这些完全使用本地工具/命令无任何第三方应用程序/组件。让我们开始吧... +在这篇文章中,我们将看到如何使用 CD/DVD 驱动器中载入的碟片来创建 ISO 镜像;打开随机手册页学习;看到登录用户的详细情况和查看浏览器内存使用量,而所有这些完全使用本地工具/命令,无需任何第三方应用程序/组件。让我们开始吧…… -### 用 CD 中创建 ISO 映像 ### +### 用 CD 碟片创建 ISO 映像 ### 我们经常需要备份/复制 CD/DVD 的内容。如果你是在 Linux 平台上,不需要任何额外的软件。所有需要的是进入 Linux 终端。 要从 CD/DVD 上创建 ISO 镜像,你需要做两件事。第一件事就是需要找到CD/DVD 驱动器的名称。要找到 CD/DVD 驱动器的名称,可以使用以下三种方法。 -**1. 从终端/控制台上运行 lsblk 命令(单个驱动器).** +**1. 从终端/控制台上运行 lsblk 命令(列出块设备)** $ lsblk ![Find Block Devices in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Block-Devices.png) -找驱动器 +*找块设备* -**2.要查看有关 CD-ROM 的信息,可以使用以下命令。** +从上图可以看到,sr0 就是你的 cdrom (即 /dev/sr0 )。 + +**2. 要查看有关 CD-ROM 的信息,可以使用以下命令** $ less /proc/sys/dev/cdrom/info ![Check Cdrom Information](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Cdrom-Inforamtion.png) -检查 Cdrom 信息 +*检查 Cdrom 信息* + +从上图可以看到, 设备名称是 sr0 (即 /dev/sr0)。 **3. 
使用 [dmesg 命令][2] 也会得到相同的信息,并使用 egrep 来自定义输出。** -命令 ‘dmesg‘ 命令的输出/控制内核缓冲区信息。‘egrep‘ 命令输出匹配到的行。选项 -i 和 -color 与 egrep 连用时会忽略大小写,并高亮显示匹配的字符串。 +命令 ‘dmesg‘ 命令的输出/控制内核缓冲区信息。‘egrep‘ 命令输出匹配到的行。egrep 使用选项 -i 和 -color 时会忽略大小写,并高亮显示匹配的字符串。 $ dmesg | egrep -i --color 'cdrom|dvd|cd/rw|writer' ![Find Device Information](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Device-Information.png) -查找设备信息 +*查找设备信息* -一旦知道 CD/DVD 的名称后,在 Linux 上你可以用下面的命令来创建 ISO 镜像。 +从上图可以看到,设备名称是 sr0 (即 /dev/sr0)。 + +一旦知道 CD/DVD 的名称后,在 Linux 上你可以用下面的命令来创建 ISO 镜像(你看,只需要 cat 即可!)。 $ cat /dev/sr0 > /path/to/output/folder/iso_name.iso @@ -49,11 +54,11 @@ ![Create ISO Image of CDROM in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Create-ISO-Image-of-CDROM.png) -创建 CDROM 的 ISO 映像 +*创建 CDROM 的 ISO 映像* ### 随机打开一个手册页 ### -如果你是 Linux 新人并想学习使用命令行开关,这个修改是为你做的。把下面的代码行添加在`〜/ .bashrc`文件的末尾。 +如果你是 Linux 新人并想学习使用命令行开关,这个技巧就是给你的。把下面的代码行添加在`〜/ .bashrc`文件的末尾。 /use/bin/man $(ls /bin | shuf | head -1) @@ -63,17 +68,19 @@ ![LoadKeys Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/LoadKeys-Man-Pages.png) -LoadKeys 手册页 +*LoadKeys 手册页* ![Zgrep Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/Zgrep-Man-Pages.png) -Zgrep 手册页 +*Zgrep 手册页* + +希望你知道如何退出手册页浏览——如果你已经厌烦了每次都看到手册页,你可以删除你添加到 `.bashrc`文件中的那几行。 ### 查看登录用户的状态 ### 了解其他用户正在共享服务器上做什么。 -一般情况下,你是共享的 Linux 服务器的用户或管理员的。如果你担心自己服务器的安全并想要查看哪些用户在做什么,你可以使用命令 'w'。 +一般情况下,你是共享的 Linux 服务器的用户或管理员的。如果你担心自己服务器的安全并想要查看哪些用户在做什么,你可以使用命令 `w`。 这个命令可以让你知道是否有人在执行恶意代码或篡改服务器,让他停下或使用其他方法。'w' 是查看登录用户状态的首选方式。 @@ -83,33 +90,33 @@ Zgrep 手册页 ![Check Linux User Activity](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Linux-User-Activity.png) -检查 Linux 用户状态 +*检查 Linux 用户状态* ### 查看浏览器的内存使用状况 ### -最近有不少谈论关于 Google-chrome 内存使用量。如果你想知道一个浏览器的内存用量,你可以列出进程名,PID 和它的使用情况。要检查浏览器的内存使用情况,只需在地址栏输入 “about:memory” 不要带引号。 +最近有不少谈论关于 Google-chrome 的内存使用量。如要检查浏览器的内存使用情况,只需在地址栏输入 “about:memory”,不要带引号。 我已经在 Google-Chrome 和 Mozilla 的 Firefox 
网页浏览器进行了测试。你可以查看任何浏览器,如果它工作得很好,你可能会承认我们在下面的评论。你也可以杀死浏览器进程在 Linux 终端的进程/服务中。 -在 Google Chrome 中,在地址栏输入 `about:memory`,你应该得到类似下图的东西。 +在 Google Chrome 中,在地址栏输入 `about:memory`,你应该得到类似下图的东西。 ![Check Chrome Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Chrome-Memory-Usage.png) -查看 Chrome 内存使用状况 +*查看 Chrome 内存使用状况* 在Mozilla Firefox浏览器,在地址栏输入 `about:memory`,你应该得到类似下图的东西。 ![Check Firefox Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firefox-Memory-Usage.png) -查看 Firefox 内存使用状况 +*查看 Firefox 内存使用状况* 如果你已经了解它是什么,除了这些选项。要检查内存用量,你也可以点击最左边的 ‘Measure‘ 选项。 ![Firefox Main Process](http://www.tecmint.com/wp-content/uploads/2015/07/Firefox-Main-Processes.png) -Firefox 主进程 +*Firefox 主进程* -它将通过浏览器树形展示进程内存使用量 +它将通过浏览器树形展示进程内存使用量。 目前为止就这样了。希望上述所有的提示将会帮助你。如果你有一个(或多个)技巧,分享给我们,将帮助 Linux 用户更有效地管理他们的 Linux 系统/服务器。 @@ -122,7 +129,7 @@ via: http://www.tecmint.com/creating-cdrom-iso-image-watch-user-activity-in-linu 作者:[Avishek Kumar][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 85aa60c1a7364d152d0471ce09b0125b59e90d3e Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 17 Aug 2015 00:41:29 +0800 Subject: [PATCH 204/697] PUB:20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications @XLCYun --- ...t & Wrong - Page 3 - GNOME Applications.md | 66 +++++++++++++++++++ ...t & Wrong - Page 3 - GNOME Applications.md | 61 ----------------- 2 files changed, 66 insertions(+), 61 deletions(-) create mode 100644 published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md delete mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md diff --git a/published/20150716 A Week With GNOME As My Linux 
Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md
new file mode 100644
index 0000000000..61600366c9
--- /dev/null
+++ b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md
@@ -0,0 +1,66 @@
+一周 GNOME 之旅:品味它和 KDE 的是是非非(第三节 GNOME应用)
+================================================================================
+
+### 应用 ###
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920)
+
+这是一个基本扯平的方面。每一个桌面环境都有一些非常好的应用,也有一些不怎么样的。再次强调,Gnome 把那些 KDE 完全错失的小细节给做对了。我不是想说 KDE 中有哪些应用不好。它们都能工作,但仅此而已。也就是说:它们合格了,但确实还没有达到甚至接近100分。
+
+Gnome 是一个样子,KDE 是另外一种。Dragon 播放器运行得很好,清晰地标出了播放文件、URL 或光盘的按钮,正如你在 Gnome Videos 中能做到的一样……但是在便利的文件名和用户的友好度方面,Gnome 多走了一小步。它默认显示了在你的电脑上检测到的所有影像文件,不需要你做任何事情。KDE 有 [Baloo][1](正如之前的 [Nepomuk][2],LCTT 译注:这是 KDE 中一种文件索引服务框架)为什么不使用它们?它们能列出可读取的影像文件……但却没被使用。
+
+下一步……音乐播放器
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920)
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920)
+
+这两个应用,左边的是 Rhythmbox ,右边的是 Amarok,都是打开后没有做任何修改直接截屏的。看到差别了吗?Rhythmbox 看起来像个音乐播放器,直截了当,排序文件的方法也很清晰,它知道它应该是什么样的,它的工作是什么:就是播放音乐。
+
+Amarok 感觉就像是某个人为了展示而把所有的扩展和选项都尽可能地塞进一个应用程序中去,而做出来的一个技术演示产品(tech demos),或者一个库演示产品(library demos)——而这些是不应该做为产品装进去的,它只应该展示其中一点东西。而 Amarok 给人的感觉却是这样的:好像是某个人想把每一个感觉可能很酷的东西都塞进一个媒体播放器里,甚至都不停下来想“我想写啥来着?一个播放音乐的应用?”
+
+看看默认布局就行了。前面和中心都呈现了什么?一个可视化工具和集成了维基百科——占了整个页面最大和最显眼的区域。第二大的呢?播放列表。第三大,同时也是最小的呢?真正的音乐列表。这种默认设置对于一个核心应用来说,怎么可能称得上理智? 
+ +软件管理器!它在最近几年当中有很大的进步,而且接下来的几个月中,很可能只能看到它更大的进步。不幸的是,这是另一个 KDE 做得差一点点就能……但还是在终点线前以脸戗地了。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920) + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920) + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920) + +Gnome 软件中心可能是我的新的最爱的软件中心,先放下牢骚等下再发。Muon, 我想爱上你,真的。但你就是个设计上的梦魇。当 VDG 给你画设计草稿时(草图如下),你看起来真漂亮。白色空间用得很好,设计简洁,类别列表也很好,你的整个“不要分开做成两个应用程序”的设计都很不错。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920) + +接着就有人为你写代码,实现真正的UI,但是,我猜这些家伙当时一定是喝醉了。 + +我们来看看 Gnome 软件中心。正中间是什么?软件,软件截图和软件描述等等。Muon 的正中心是什么?白白浪费的大块白色空间。Gnome 软件中心还有一个贴心便利特点,那就是放了一个“运行”的按钮在那儿,以防你已经安装了这个软件。便利性和易用性很重要啊,大哥。说实话,仅仅让 Muon 把东西都居中对齐了可能看起来的效果都要好得多。 + +Gnome 软件中心沿着顶部的东西是什么,像个标签列表?所有软件,已安装软件,软件升级。语言简洁,直接,直指要点。Muon,好吧,我们有个“发现”,这个语言表达上还算差强人意,然后我们又有一个“已安装软件”,然后,就没有然后了。软件升级哪去了? + +好吧……开发者决定把升级独立分开成一个应用程序,这样你就得打开两个应用程序才能管理你的软件——一个用来安装,一个用来升级——自从有了新立得图形软件包管理器以来,首次有这种破天荒的设计,与任何已存的软件中心的设计范例相违背。 + +我不想贴上截图给你们看,因为我不想等下还得清理我的电脑,如果你进入 Muon 安装了什么,那么它就会在屏幕下方根据安装的应用名创建一个标签,所以如果你一次性安装很多软件的话,那么下面的标签数量就会慢慢的增长,然后你就不得不手动检查清除它们,因为如果你不这样做,当标签增长到超过屏幕显示时,你就不得不一个个找过去来才能找到最近正在安装的软件。想想:在火狐浏览器中打开50个标签是什么感受。太烦人,太不方便! 
+ +我说过我会给 Gnome 一点打击,我是认真的。Muon 有一点做得比 Gnome 软件中心做得好。在 Muon 的设置栏下面有个“显示技术包”,即:编辑器,软件库,非图形应用程序,无 AppData 的应用等等(LCTT 译注:AppData 是软件包中的一个特殊文件,用于专门存储软件的信息)。Gnome 则没有。如果你想安装其中任何一项你必须跑到终端操作。我想这是他们做得不对的一点。我完全理解他们推行 AppData 的心情,但我想他们太急了(LCTT 译注:推行所有软件包带有 AppData 是 Gnome 软件中心的目标之一)。我是在想安装 PowerTop,而 Gnome 不显示这个软件时我才发现这点的——因为它没有 AppData,也没有“显示技术包”设置。 + +更不幸的事实是,如果你在 KDE 下你不能说“用 [Apper][3] 就行了”,因为…… + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920) + +Apper 对安装本地软件包的支持大约在 Fedora 19 时就中止了,几乎两年了。我喜欢关注细节与质量。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://community.kde.org/Baloo +[2]:http://www.ikde.org/tech/kde-tech-nepomuk/ +[3]:https://en.wikipedia.org/wiki/Apper diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md deleted file mode 100644 index 42539badcc..0000000000 --- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md +++ /dev/null @@ -1,61 +0,0 @@ -将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第三节 - GNOME应用 -================================================================================ -### 应用 ### - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920) - -这是一个基本上一潭死水的地方。每一个桌面环境都有一些非常好的和不怎么样的应用。再次强调,Gnome把那些KDE完全错失的小细节给做对了。我不是想说KDE中有哪些应用不好。他们都能工作。但仅此而已。也就是说:它们合格了,但确实还没有达到甚至接近100分。 - -Gnome的在左边,KDE的在右边。Dragon运行得很好,清晰的标出了播放文件、URL或和光盘的按钮,正如你在Gnome 
Videos中能做到的一样……但是在便利的文件名和用户的友好度方面,Gnome多走了一小步。它默认显示了在你的电脑上检测到的所有影像文件,不需要你做任何事情。KDE有Baloo——正如之前有Nepomuk——为什么不使用它们?它们能列出可读取的影像文件……但却没被使用。 - -下一步……音乐播放器 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920) - -这两个应用,左边的Rhythmbox和右边的Amarok,都是打开后没有做任何修改直接截屏的。看到差别了吗?Rhythmbox看起来像个音乐播放器,直接了当,排序文件的方法也很清晰,它知道它应该是什么样的,它的工作是什么:就是播放音乐。 - -Amarok感觉就像是某个人为了展示而把所有的扩展和选项都尽可能地塞进一个应用程序中去而做出来的一个技术演示产品(tech demos),或者一个库演示产品(library demos)——而这些是不应该做为产品装进去的,它只应该展示一些零碎的东西。而Amarok给人的感觉却是这样的:好像是某个人想把每一个感觉可能很酷的东西都塞进一个媒体播放器里,甚至都不停下来想“我想写啥来着?一个播放音乐的应用?” - -看看默认布局就行了。前面和中心都呈现了什么?一个可视化工具和维基集成(wikipedia integration)——占了整个页面最大和最显眼的区域。第二大的呢?播放列表。第三大,同时也是最小的呢?真正的音乐列表。这种默认设置对于一个核心应用来说,怎么可能称得上理智? - -软件管理器!它在最近几年当中有很大的进步,而且接下来的几个月中,很可能只能看到它更大的进步。不幸的是,这是另一个地方KDE做得差一点点就能……但还是在终点线前摔了脸。 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920) - -Gnome软件中心可能是我最新的最爱,先放下牢骚等下再发。Muon, 我想爱上你,真的。但你就是个设计上的梦魇。当VDG给你画设计草稿时(模型在下面),你看起来真漂亮。白色空间用得很好,设计简洁,类别列表也很好,你的整个“不要分开做成两个应用程序”的设计都很不错。 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920) - -接着就有人为你写代码,实现真正的UI,但是,我猜这些家伙当时一定是喝醉了。 - -我们来看看Gnome软件中心。正中间是什么?软件,软件截图和软件描述等等。Muon的正中心是什么?白白浪费的大块白色空间。Gnome软件中心还有一个贴心便利特点,那就是放了一个“运行“的按钮在那儿,以防你已经安装了这个软件。便利性和易用性很重要啊,大哥。说实话,仅仅让Muon把东西都居中对齐了可能看起来的效果都要好得多。 - -Gnome软件中心沿着顶部的东西是什么,像个标签列表?所有软件,已安装软件,软件升级。语言简洁,直接,直指要点。Muon,好吧,我们有个”发现“,这个语言表达上还算差强人意,然后我们又有一个”已安装软件“,然后,就没有然后了。软件升级哪去了? 
- -好吧……开发者决定把升级独立分开成一个应用程序,这样你就得打开两个应用程序才能管理你的软件——一个用来安装,一个用来升级——自从有了新得立图形软件包管理器以来,首次有这种破天荒的设计,与任何已存的软件中心的设计范例相违背。 - -我不想贴上截图给你们看,因为我不想等下还得清理我的电脑,如果你进入Muon安装了什么,那么它就会在屏幕下方根据安装的应用名创建一个标签,所以如果你一次性安装很多软件的话,那么下面的标签数量就会慢慢的增长,然后你就不得不手动检查清除它们,因为如果你不这样做,当标签增长到超过屏幕显示时,你就不得不一个个找过去来才能找到最近正在安装的软件。想想:在火狐浏览器打开50个标签。太烦人,太不方便! - -我说过我会给Gnome一点打击,我是认真的。Muon有一点做得比Gnome软件中心做得好。在Muon的设置栏下面有个“显示技术包”,即:编辑器,软件库,非图形应用程序,无AppData的应用等等(AppData,软件包中的一个特殊文件,用于专门存储软件的信息,译注)。Gnome则没有。如果你想安装其中任何一项你必须跑到终端操作。我想这是他们做得不对的一点。我完全理解他们推行AppData的心情,但我想他们太急了(推行所有软件包带有AppData,是Gnome软件中心的目标之一,译注)。我是在想安装PowerTop,而Gnome不显示这个软件时我才发现这点的——没有AppData,没有“显示技术包“设置。 - -更不幸的事实是你不能“用Apper就行了”,自从…… - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920) - -Apper对安装本地软件包的支持大约在Fedora 19时就中止了,几乎两年了。我喜欢那种对细节与质量的关注。 - --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3 - -作者:Eric Griffith -译者:[XLCYun](https://github.com/XLCYun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From a083bab979c96d4b979a0c5cdbb34dbe2fbed3be Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 17 Aug 2015 10:19:17 +0800 Subject: [PATCH 205/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译到300行 --- ...150728 Process of the Linux kernel building.md | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index 4191968e04..d3c71f0a43 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -210,6 +210,7 @@ HOSTCXXFLAGS = -O2 Next we will meet the `CC` variable that 
represent compiler too, so why do we need in the `HOST*` variables? The `CC` is the target compiler that will be used during kernel compilation, but `HOSTCC` will be used during compilation of the set of the `host` programs (we will see it soon). After this we can see definition of the `KBUILD_MODULES` and `KBUILD_BUILTIN` variables that are used for the determination of the what to compile (kernel, modules or both):
接下来我们会看到另一个代表编译器的变量`CC`,那么为什么还需要`HOST*` 这些变量呢?`CC` 是编译内核过程中要使用的目标架构的编译器,但是`HOSTCC` 是要被用来编译一组`host` 程序的(下面我们就会看到)。然后我们就看看变量`KBUILD_MODULES` 和`KBUILD_BUILTIN` 的定义,这两个变量决定了我们要编译什么(内核、模块还是两者?):
+
```Makefile
KBUILD_MODULES :=
KBUILD_BUILTIN := 1
@@ -221,12 +222,16 @@ endif
Here we can see definition of these variables and the value of the `KBUILD_BUILTIN` will depens on the `CONFIG_MODVERSIONS` kernel configuration parameter if we pass only `modules` to the `make`. The next step is including of the:
+在这里我们可以看到这些变量的定义,并且,如果我们仅仅传递了`modules` 给`make`,变量`KBUILD_BUILTIN` 会依赖于内核配置选项`CONFIG_MODVERSIONS`。下一步操作是引入:
+
```Makefile
include scripts/Kbuild.include
```
`kbuild` file. The [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) or `Kernel Build System` is the special infrastructure to manage building of the kernel and its modules. The `kbuild` files has the same syntax that makefiles. The [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) file provides some generic definitions for the `kbuild` system.
As we included this `kbuild` files we can see definition of the variables that are related to the different tools that will be used during kernel and modules compilation (like linker, compilers, utils from the [binutils](http://www.gnu.org/software/binutils/) and etc...):
+文件`kbuild` ,[Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) 或者又叫做 `Kernel Build System`,是一个用来管理构建内核和模块的特殊框架。`kbuild` 文件的语法与makefile 一样。文件[scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) 为`kbuild` 系统提供了一些通用的定义。因为我们包含了这个`kbuild` 文件,我们可以看到和不同工具关联的这些变量的定义,这些工具会在内核和模块编译过程中被使用(比如链接器、编译器、二进制工具包[binutils](http://www.gnu.org/software/binutils/),等等):
+
```Makefile
AS = $(CROSS_COMPILE)as
LD = $(CROSS_COMPILE)ld
CC = $(CROSS_COMPILE)gcc
CPP = $(CC) -E
AR = $(CROSS_COMPILE)ar
NM = $(CROSS_COMPILE)nm
STRIP = $(CROSS_COMPILE)strip
OBJCOPY = $(CROSS_COMPILE)objcopy
OBJDUMP = $(CROSS_COMPILE)objdump
AWK = awk
@@ -245,6 +250,8 @@ AWK = awk
After definition of these variables we define two variables: `USERINCLUDE` and `LINUXINCLUDE`. They will contain paths of the directories with headers (public for users in the first case and for kernel in the second case):
+在这些定义好的变量之后,我们又定义了两个变量:`USERINCLUDE` 和`LINUXINCLUDE`。它们包含了存放头文件的目录路径(第一个是给用户用的,第二个是给内核用的):
+
```Makefile
USERINCLUDE := \
-I$(srctree)/arch/$(hdr-arch)/include/uapi \
-I$(srctree)/arch/$(hdr-arch)/include/generated/uapi \
@@ -259,7 +266,7 @@ LINUXINCLUDE := \
```
And the standard flags for the C compiler:
-
+以及标准的C 编译器标志:
```Makefile
KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -fno-common \
@@ -269,6 +276,7 @@ KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
```
It is the not last compiler flags, they can be updated by the other makefiles (for example kbuilds from `arch/`). After all of these, all variables will be exported to be available in the other makefiles.
The following two the `RCS_FIND_IGNORE` and the `RCS_TAR_IGNORE` variables will contain files that will be ignored in the version control system:
+这并不是最终确定的编译器标志,它们还可以在其他makefile 里面更新(比如`arch/` 里面的kbuild)。经过所有这些,全部变量会被导出,这样其他makefile 就可以直接使用了。下面的两个变量`RCS_FIND_IGNORE` 和 `RCS_TAR_IGNORE` 包含了被版本控制系统忽略的文件:
```Makefile
export RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o \
@@ -280,11 +288,16 @@ export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \
That's all. We have finished with the all preparations, next point is the building of `vmlinux`.
+这就是全部了,我们已经完成了所有的准备工作,下一步就是如何构建`vmlinux` 了。
+
Directly to the kernel build
+直面构建内核
--------------------------------------------------------------------------------
As we have finished all preparations, next step in the root makefile is related to the kernel build. Before this moment we will not see in the our terminal after the execution of the `make` command. But now first steps of the compilation are started. In this moment we need to go on the [598](https://github.com/torvalds/linux/blob/master/Makefile#L598) line of the Linux kernel top makefile and we will see `vmlinux` target there:
+现在我们已经完成了所有的准备工作,根makefile(注:内核根目录下的makefile)的下一步工作就是和编译内核相关的了。在此之前,执行`make` 命令后我们还不会在终端看到任何输出,但是现在编译的第一步开始了。这里我们需要从内核根makefile的[598](https://github.com/torvalds/linux/blob/master/Makefile#L598) 行开始,这里可以看到目标`vmlinux`:
+
```Makefile
all: vmlinux
include arch/$(SRCARCH)/Makefile

From 4dbc0c3601189cae703cd9e97bb1507fa0d39c2e Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Mon, 17 Aug 2015 10:56:51 +0800
Subject: [PATCH 206/697] =?UTF-8?q?20150817-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...
Top 5 Torrent Clients For Ubuntu Linux.md | 117 ++++++++++++++++++
 ...number of threads in a process on Linux.md |  50 ++++++++
 2 files changed, 167 insertions(+)
 create mode 100644 sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md
 create mode 100644 sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md

diff --git a/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md b/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md
new file mode 100644
index 0000000000..5ae03e4df1
--- /dev/null
+++ b/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md
@@ -0,0 +1,117 @@
+Top 5 Torrent Clients For Ubuntu Linux
+================================================================================
+![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png)
+
+Looking for the **best torrent client in Ubuntu**? Indeed there are a number of torrent clients available for desktop Linux. But which ones are the **best Ubuntu torrent clients** among them?
+
+I am going to list top 5 torrent clients for Linux, which are lightweight, feature-rich and have an impressive GUI. Ease of installation and use is also a factor.
+
+### Best torrent programs for Ubuntu ###
+
+Since Ubuntu comes by default with Transmission, I am going to exclude it from the list. This doesn’t mean that Transmission doesn’t deserve to be on the list. Transmission is a good torrent client to have for Ubuntu and this is the reason why it is the default Torrent application in several Linux distributions, including Ubuntu.
+
+----------
+
+### Deluge ###
+
+![Logo of Deluge torrent client for Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Deluge.png)
+
+[Deluge][1] has been chosen as the best torrent client for Linux by Lifehacker, and that speaks for itself about the usefulness of Deluge.
And it’s not just Lifehacker who is a fan of Deluge; check out any forum and you’ll find a number of people admitting that Deluge is their favorite.
+
+A fast, sleek and intuitive interface makes Deluge a hot favorite among Linux users.
+
+Deluge is available in Ubuntu repositories and you can install it in Ubuntu Software Center or by using the command below:
+
+    sudo apt-get install deluge
+
+----------
+
+### qBittorrent ###
+
+![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png)
+
+As the name suggests, [qBittorrent][2] is the Qt version of the famous [Bittorrent][3] application. You’ll see an interface similar to the Bittorrent client in Windows, if you have ever used it. Somewhat lightweight, with all the standard features of a torrent program, qBittorrent is also available in the default Ubuntu repository.
+
+It could be installed from Ubuntu Software Center or using the command below:
+
+    sudo apt-get install qbittorrent
+
+----------
+
+### Tixati ###
+
+![Tixati torrent client logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/tixati_icon.png)
+
+[Tixati][4] is another nice-to-have torrent client for Ubuntu. It has a default dark theme which might be preferred by many, but not me. It has all the standard features that you would expect in a torrent client.
+
+In addition to that, there is an additional data analysis feature. You can measure and analyze bandwidth and other statistics in nice charts.
+
+- [Download Tixati][5]
+
+----------
+
+### Vuze ###
+
+![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png)
+
+[Vuze][6] is a favorite torrent application of a number of Linux as well as Windows users. Apart from the standard features, you can search for torrents directly in the application.
You can also subscribe to episodic content so that you won’t have to search for new content, as it shows up under your subscriptions in the sidebar.
+
+It also comes with a video player that can play HD videos with subtitles and all. But I don’t think you would prefer it over better video players such as VLC.
+
+Vuze can be installed from Ubuntu Software Center or using the command below:
+
+    sudo apt-get install vuze
+
+----------
+
+### Frostwire ###
+
+![Logo of Frostwire torrent client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/frostwire.png)
+
+[Frostwire][7] is the torrent application you might want to try. It is more than just a simple torrent client. Also available for Android, you can use it to share files over WiFi.
+
+You can search for torrents from within the application and play them inside the application. In addition to the downloaded files, it can browse your local media and have them organized inside the player. The same is applicable for the Android version.
+
+An additional feature is that Frostwire also provides access to legal music by indie artists. You can download and listen to them for free, legally.
+
+- [Download Frostwire][8]
+
+----------
+
+### Honorable mention ###
+
+On Windows, uTorrent (pronounced mu torrent) is my favorite torrent application. While uTorrent may be available for Linux, I deliberately skipped it from the list because installing and using uTorrent in Linux is neither easy nor does it provide a complete application experience (it runs within a web browser).
+
+You can read about uTorrent installation in Ubuntu [here][9].
+
+#### Quick tip: ####
+
+Most of the time, torrent applications do not start by default. You might want to change this behavior. Read this post to learn [how to manage startup applications in Ubuntu][10].
+
+### What’s your favorite? ###
+
+That was my opinion on the best Torrent clients in Ubuntu. What is your favorite one? Do leave a comment.
You can also check the [best download managers for Ubuntu][11] in related posts. And if you use Popcorn Time, check these [Popcorn Time Tips][12]. + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/best-torrent-ubuntu/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://deluge-torrent.org/ +[2]:http://www.qbittorrent.org/ +[3]:http://www.bittorrent.com/ +[4]:http://www.tixati.com/ +[5]:http://www.tixati.com/download/ +[6]:http://www.vuze.com/ +[7]:http://www.frostwire.com/ +[8]:http://www.frostwire.com/downloads +[9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/ +[10]:http://itsfoss.com/manage-startup-applications-ubuntu/ +[11]:http://itsfoss.com/4-best-download-managers-for-linux/ +[12]:http://itsfoss.com/popcorn-time-tips/ \ No newline at end of file diff --git a/sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md b/sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md new file mode 100644 index 0000000000..7993b32628 --- /dev/null +++ b/sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md @@ -0,0 +1,50 @@ +Linux FAQs with Answers--How to count the number of threads in a process on Linux +================================================================================ +> **Question**: I have an application running, which forks a number of threads at run-time. I want to know how many threads are actively running in the program. What is the easiest way to check the thread count of a process on Linux? + +If you want to see the number of threads per process in Linux environments, there are several ways to do it. 
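A minimal, self-contained sketch of the underlying idea (using `/proc/self`, the symlink through which a process sees its own `/proc` entry, so no particular PID is assumed):

```shell
# Each thread of a process appears as one directory under /proc/<pid>/task.
# /proc/self refers to whichever process reads it -- here, the shell itself.
threads=$(ls /proc/self/task | wc -l)
echo "thread count: $threads"
```

Substituting a real PID for `self` gives the count for any other process you are allowed to inspect.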
+
+### Method One: /proc ###
+
+The proc pseudo filesystem, which resides in the /proc directory, is the easiest way to see the thread count of any active process. The /proc directory exports, in the form of readable text files, a wealth of information related to existing processes and system hardware such as CPU, interrupts, memory, disk, etc.
+
+    $ cat /proc/<pid>/status
+
+The above command will show detailed information about the process with <pid>, which includes process state (e.g., sleeping, running), parent PID, UID, GID, the number of file descriptors used, and the number of context switches. The output also indicates **the total number of threads created in a process** as follows.
+
+    Threads: <N>
+
+For example, to check the thread count of a process with PID 20571:
+
+    $ cat /proc/20571/status
+
+![](https://farm6.staticflickr.com/5649/20341236279_f4a4d809d2_b.jpg)
+
+The output indicates that the process has 28 threads in it.
+
+Alternatively, you could simply count the number of directories found in /proc/<pid>/task, as shown below.
+
+    $ ls /proc/<pid>/task | wc
+
+This is because, for every thread created within a process, there is a corresponding directory created in /proc/<pid>/task, named with its thread ID. Thus the total number of directories in /proc/<pid>/task represents the number of threads in the process.
+
+### Method Two: ps ###
+
+If you are an avid user of the versatile ps command, this command can also show you individual threads of a process (with the "H" option). The following command will print the thread count of a process. The "h" option is needed to hide the header in the ps output.
+
+    $ ps hH p <pid> | wc -l
+
+If you want to monitor the hardware resources (CPU & memory) consumed by different threads of a process, refer to [this tutorial][1].(注:此文我们翻译过)
+
+--------------------------------------------------------------------------------
+
+via: http://ask.xmodulo.com/number-of-threads-process-linux.html
+
+作者:[Dan Nanni][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://ask.xmodulo.com/author/nanni
+[1]:http://ask.xmodulo.com/view-threads-process-linux.html
\ No newline at end of file

From 66bcdcee0665664be02af42b513d49a4680560b2 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Mon, 17 Aug 2015 11:11:24 +0800
Subject: [PATCH 207/697] =?UTF-8?q?20150817-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...cketing System in Fedora 22 or Centos 7.md | 179 ++++++++++++++++++
 ...x Wireshark GUI freeze on Linux desktop.md |  61 ++++++
 2 files changed, 240 insertions(+)
 create mode 100644 sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md
 create mode 100644 sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md

diff --git a/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md b/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md
new file mode 100644
index 0000000000..515b15844a
--- /dev/null
+++ b/sources/tech/20150817 How to Install OsTicket Ticketing System in Fedora 22 or Centos 7.md
@@ -0,0 +1,179 @@
+How to Install OsTicket Ticketing System in Fedora 22 / Centos 7
+================================================================================
+In this article, we'll learn how to set up a help desk ticketing system with osTicket in our machine or server running Fedora 22 or CentOS 7 as operating
system. osTicket is a popular free and open source customer support ticketing system developed and maintained by [Enhancesoft][1] and its contributors. osTicket is the best solution for a help and support ticketing system, and for managing better communication and support assistance with clients and customers. It can easily integrate inquiries created via email, phone and web based forms into a beautiful multi-user web interface. osTicket makes it easy for us to manage, organize and log all our support requests and responses in one single place. It is a simple, lightweight, reliable, open source, web-based help desk ticketing system that is easy to set up and use.
+
+Here are some easy steps on how we can set up a Help Desk ticketing system with osTicket in Fedora 22 or CentOS 7 operating system.
+
+### 1. Installing LAMP stack ###
+
+First of all, we'll need to install LAMP Stack to make osTicket work. LAMP stack is the combination of Apache web server, MySQL or MariaDB database system and PHP. To install the complete suite of the LAMP stack that we need for the installation of osTicket, we'll need to run the following commands in a shell or a terminal.
+
+**On Fedora 22**
+
+LAMP stack is available on the official repository of Fedora 22. As the default package manager of Fedora 22 is the latest DNF package manager, we'll need to run the following command.
+
+    $ sudo dnf install httpd mariadb mariadb-server php php-mysql php-fpm php-cli php-xml php-common php-gd php-imap php-mbstring wget
+
+**On CentOS 7**
+
+As there is LAMP stack available on the official repository of CentOS 7, we'll gonna install it using yum package manager.
+
+    $ sudo yum install httpd mariadb mariadb-server php php-mysql php-fpm php-cli php-xml php-common php-gd php-imap php-mbstring wget
+
+### 2. Starting Apache Web Server and MariaDB ###
+
+Next, we'll gonna start MariaDB server and Apache Web Server to get started.
+
+    $ sudo systemctl start mariadb httpd
+
+Then, we'll gonna enable them to start on every boot of the system.
+
+    $ sudo systemctl enable mariadb httpd
+
+    Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
+    Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
+
+### 3. Downloading osTicket package ###
+
+Next, we'll gonna download the latest release of osTicket, i.e., version 1.9.9. We can download it from the official download page [http://osticket.com/download][2] or from the official github repository [https://github.com/osTicket/osTicket-1.8/releases][3]. Here, in this tutorial we'll download the zip archive of the latest release of osTicket from the github release page using wget command.
+
+    $ cd /tmp/
+    $ wget https://github.com/osTicket/osTicket-1.8/releases/download/v1.9.9/osTicket-v1.9.9-1-gbe2f138.zip
+
+    --2015-07-16 09:14:23-- https://github.com/osTicket/osTicket-1.8/releases/download/v1.9.9/osTicket-v1.9.9-1-gbe2f138.zip
+    Resolving github.com (github.com)... 192.30.252.131
+    ...
+    Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.244.4|:443... connected.
+    HTTP request sent, awaiting response... 200 OK
+    Length: 7150871 (6.8M) [application/octet-stream]
+    Saving to: ‘osTicket-v1.9.9-1-gbe2f138.zip’
+    osTicket-v1.9.9-1-gb 100%[========================>] 6.82M 1.25MB/s in 12s
+    2015-07-16 09:14:37 (604 KB/s) - ‘osTicket-v1.9.9-1-gbe2f138.zip’ saved [7150871/7150871]
+
+### 4. Extracting the osTicket ###
+
+After we have successfully downloaded the osTicket zipped package, we'll now gonna extract the zip. As the default root directory of Apache web server is /var/www/html/ , we'll gonna create a directory called "**support**" where we'll extract the whole directory and files of the compressed zip file. To do so, we'll need to run the following commands in a terminal or a shell.
+
+    $ unzip osTicket-v1.9.9-1-gbe2f138.zip
+
+Then, we'll move all the extracted files to it.
+
+    $ sudo mv /tmp/upload /var/www/html/support
+
+### 5. Fixing Ownership and Permission ###
+
+Now, we'll gonna assign the ownership of the directories and files under /var/www/html/support to apache to enable writable access for the apache process owner. To do so, we'll need to run the following command.
+
+    $ sudo chown apache: -R /var/www/html/support
+
+Then, we'll also need to copy a sample configuration file to its default configuration file. To do so, we'll need to run the below command.
+
+    $ cd /var/www/html/support/
+    $ sudo cp include/ost-sampleconfig.php include/ost-config.php
+    $ sudo chmod 0666 include/ost-config.php
+
+If you have SELinux enabled on the system, run the following commands.
+
+    $ sudo chcon -R -t httpd_sys_content_t /var/www/html/support
+    $ sudo chcon -R -t httpd_sys_rw_content_t /var/www/html/support
+
+### 6. Configuring MariaDB ###
+
+As this is the first time we're going to configure MariaDB, we'll need to create a password for the root user of mariadb so that we can use it to login and create the database for our osTicket installation. To do so, we'll need to run the following command in a terminal or a shell.
+
+    $ sudo mysql_secure_installation
+
+    ...
+    Enter current password for root (enter for none):
+    OK, successfully used password, moving on...
+
+    Setting the root password ensures that nobody can log into the MariaDB
+    root user without the proper authorisation.
+
+    Set root password? [Y/n] y
+    New password:
+    Re-enter new password:
+    Password updated successfully!
+    Reloading privilege tables..
+    Success!
+    ...
+    All done! If you've completed all of the above steps, your MariaDB
+    installation should now be secure.
+
+    Thanks for using MariaDB!
+
+Note: Above, we are asked to enter the root password of the mariadb server, but as we are setting it up for the first time and no password has been set yet, we'll simply hit enter when asked for the current mariadb root password. Then, we'll need to enter twice the new password we wanna set. Then, we can simply hit enter for every prompt in order to set the default configurations.
+
+### 7. Creating osTicket Database ###
+
+As osTicket needs a database system to store its data and information, we'll be configuring MariaDB for osTicket. So, we'll need to first login into the mariadb command environment. To do so, we'll need to run the following command.
+
+    $ sudo mysql -u root -p
+
+Now, we'll gonna create a new database "**osticket_db**" with user "**osticket_user**" and password "osticket_password" which will be granted access to the database. To do so, we'll need to run the following commands inside the MariaDB command environment.
+
+    > CREATE DATABASE osticket_db;
+    > CREATE USER 'osticket_user'@'localhost' IDENTIFIED BY 'osticket_password';
+    > GRANT ALL PRIVILEGES on osticket_db.* TO 'osticket_user'@'localhost' ;
+    > FLUSH PRIVILEGES;
+    > EXIT;
+
+**Note**: It is strictly recommended to replace the database name, user and password as you desire, for security reasons.
+
+### 8. Allowing Firewall ###
+
+If we are running a firewall program, we'll need to configure our firewall to allow port 80 so that the Apache web server's default port will be accessible externally. This will allow us to navigate our web browser to osTicket's web interface with the default http port 80. To do so, we'll need to run the following command.
+
+    $ sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
+
+Once done, we'll need to reload our firewall service.
+
+    $ sudo firewall-cmd --reload
+
+### 9.
Web based Installation ###
+
+Finally, if everything is done as described above, we should now be able to navigate to osTicket's installer by pointing our web browser to http://domain.com/support or http://ip-address/support . Now, we'll be shown whether the dependencies required by osTicket are installed or not. As we've already installed all the necessary packages, we'll be welcomed with a **green colored tick** to proceed forward.
+
+![osTicket Requirements Check](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-requirements-check1.png)
+
+After that, we'll be required to enter the details for our osTicket instance as shown below. We'll need to enter the database name, username, password and hostname and other important account information that we'll require while logging into the admin panel.
+
+![osticket configuration](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-configuration.png)
+
+After the installation has been completed successfully, we'll be welcomed by a Congratulations screen. There we can see two links, one for our Admin Panel and the other for the support center as the homepage of the osTicket Support Help Desk.
+
+![osticket installation completed](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-installation-completed.png)
+
+If we click on http://ip-address/support or http://domain.com/support, we'll be redirected to the osTicket support page which is as shown below.
+
+![osticket support homepage](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-support-homepage.png)
+
+Next, to log into the admin panel, we'll need to navigate our web browser to http://ip-address/support/scp or http://domain.com/support/scp . Then, we'll need to enter the login details we had just created above while configuring the database and other information in the web installer. After successful login, we'll be able to access our dashboard and other admin sections.
+ +![osticket admin panel](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-admin-panel.png) + +### 10. Post Installation ### + +After we have finished the web installation of osTicket, we'll now need to secure some of our configuration files. To do so, we'll need to run the following command. + + $ sudo rm -rf /var/www/html/support/setup/ + $ sudo chmod 644 /var/www/html/support/include/ost-config.php + +### Conclusion ### + +osTicket is an awesome help desk ticketing system providing several new features. It supports rich text or HTML emails, ticket filters, agent collision avoidance, auto-responder and many more features. The user interface of osTicket is very beautiful with easy to use control panel. It is a complete set of tools required for a help and support ticketing system. It is the best solution for providing customers a better way to communicate with the support team. It helps a company to make their customers happy with them regarding the support and help desk. If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! 
Enjoy :-) + +------------------------------------------------------------------------------ + +via: http://linoxide.com/linux-how-to/install-osticket-fedora-22-centos-7/ + +作者:[Arun Pyasi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://www.enhancesoft.com/ +[2]:http://osticket.com/download +[3]:https://github.com/osTicket/osTicket-1.8/releases \ No newline at end of file diff --git a/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md b/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md new file mode 100644 index 0000000000..f6fb3be325 --- /dev/null +++ b/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md @@ -0,0 +1,61 @@ +Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop +================================================================================ +> **Question**: When I try to open a pre-recorded packet dump on Wireshark on Ubuntu, its UI suddenly freezes, and the following errors and warnings appear in the terminal where I launched Wireshark. How can I fix this problem? + +Wireshark is a GUI-based packet capture and sniffer tool. This tool is popularly used by network administrators, network security engineers or developers for various tasks where packet-level network analysis is required, for example during network troubleshooting, vulnerability testing, application debugging, or protocol reverse engineering. Wireshark allows one to capture live packets and browse their protocol headers and payloads via a convenient GUI. 
+
+![](https://farm1.staticflickr.com/722/20584224675_f4d7a59474_c.jpg)
+
+It is known that Wireshark's UI, especially when run under the Ubuntu desktop, sometimes hangs or freezes with the following errors while you are scrolling up or down the packet list view, or starting to load a pre-recorded packet dump file.
+
+    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject'
+    (wireshark:3480): GLib-GObject-CRITICAL **: g_object_set_qdata_full: assertion 'G_IS_OBJECT (object)' failed
+    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkRange'
+    (wireshark:3480): Gtk-CRITICAL **: gtk_range_get_adjustment: assertion 'GTK_IS_RANGE (range)' failed
+    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkOrientable'
+    (wireshark:3480): Gtk-CRITICAL **: gtk_orientable_get_orientation: assertion 'GTK_IS_ORIENTABLE (orientable)' failed
+    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkScrollbar'
+    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkWidget'
+    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject'
+    (wireshark:3480): GLib-GObject-CRITICAL **: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed
+    (wireshark:3480): Gtk-CRITICAL **: gtk_widget_set_name: assertion 'GTK_IS_WIDGET (widget)' failed
+
+Apparently this error is caused by an incompatibility between Wireshark and overlay-scrollbar, and it has not been fixed in the latest Ubuntu desktop (e.g., as of Ubuntu 15.04 Vivid Vervet).
+
+A workaround to avoid this Wireshark UI freeze problem is to **temporarily disable overlay-scrollbar**. There are two ways to disable overlay-scrollbar in Wireshark, depending on how you launch Wireshark on your desktop.
+
+### Command-Line Solution ###
+
+Overlay-scrollbar can be disabled by setting the "**LIBOVERLAY_SCROLLBAR**" environment variable to "0".
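Before making any permanent change, it is worth testing the workaround with a one-off launch; an environment variable assignment placed directly before a command applies to that command only (a generic shell sketch — it assumes wireshark is on your PATH):

```shell
# Disable overlay scrollbars for this single Wireshark invocation;
# the variable is not exported to the surrounding shell session.
LIBOVERLAY_SCROLLBAR=0 wireshark
```

If the UI no longer freezes when launched this way, you can make the setting permanent with one of the two methods below.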
+
+So if you are launching Wireshark from the command line in a terminal, you can disable overlay-scrollbar in Wireshark as follows.
+
+Open your .bashrc, and define the following alias.
+
+    alias wireshark="LIBOVERLAY_SCROLLBAR=0 /usr/bin/wireshark"
+
+### Desktop Launcher Solution ###
+
+If you are launching Wireshark using a desktop launcher, you can edit its desktop launcher file.
+
+    $ sudo vi /usr/share/applications/wireshark.desktop
+
+Look for a line that starts with "Exec", and change it as follows.
+
+    Exec=env LIBOVERLAY_SCROLLBAR=0 wireshark %f
+
+While this solution will benefit all desktop users system-wide, it will not survive a Wireshark upgrade. If you want to preserve the modified .desktop file, copy it to your home directory as follows.
+
+    $ cp /usr/share/applications/wireshark.desktop ~/.local/share/applications/
+
+--------------------------------------------------------------------------------
+
+via: http://ask.xmodulo.com/fix-wireshark-gui-freeze-linux-desktop.html
+
+作者:[Dan Nanni][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://ask.xmodulo.com/author/nanni
\ No newline at end of file

From 64ff0afb64ffd64f11f553e6517d0d64c8d1e834 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Mon, 17 Aug 2015 11:40:27 +0800
Subject: [PATCH 208/697] Update 20150817 Linux FAQs with Answers--How to count
 the number of threads in a process on Linux.md

---
 ...How to count the number of threads in a process on Linux.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150817 Linux FAQs with Answers--How to count the
number of threads in a process on Linux.md +++ b/sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Linux FAQs with Answers--How to count the number of threads in a process on Linux ================================================================================ > **Question**: I have an application running, which forks a number of threads at run-time. I want to know how many threads are actively running in the program. What is the easiest way to check the thread count of a process on Linux? @@ -47,4 +48,4 @@ via: http://ask.xmodulo.com/number-of-threads-process-linux.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://ask.xmodulo.com/author/nanni -[1]:http://ask.xmodulo.com/view-threads-process-linux.html \ No newline at end of file +[1]:http://ask.xmodulo.com/view-threads-process-linux.html From 63bfa49922873b3c26beac5a4217e7f5bb1df899 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 17 Aug 2015 11:45:19 +0800 Subject: [PATCH 209/697] Update 20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md --- ...nswers--How to fix Wireshark GUI freeze on Linux desktop.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md b/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md index f6fb3be325..d906349ff9 100644 --- a/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md +++ b/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop 
================================================================================ > **Question**: When I try to open a pre-recorded packet dump on Wireshark on Ubuntu, its UI suddenly freezes, and the following errors and warnings appear in the terminal where I launched Wireshark. How can I fix this problem? @@ -58,4 +59,4 @@ via: http://ask.xmodulo.com/fix-wireshark-gui-freeze-linux-desktop.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file +[a]:http://ask.xmodulo.com/author/nanni From 93096c2c2bb8675b0b9da23a0438b69b37094d0e Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 17 Aug 2015 14:59:32 +0800 Subject: [PATCH 210/697] =?UTF-8?q?20150817-3=20=E9=80=89=E9=A2=98=20LFCS?= =?UTF-8?q?=20=E4=B8=93=E9=A2=98=201-5=EF=BC=8C=E5=85=B1=E5=8D=81=E7=AF=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...eate Edit and Manipulate files in Linux.md | 220 ++++++++++ ...and Use vi or vim as a Full Text Editor.md | 387 ++++++++++++++++++ ...e Attributes and Finding Files in Linux.md | 382 +++++++++++++++++ ...esystems and Configuring Swap Partition.md | 191 +++++++++ ...work Samba and NFS Filesystems in Linux.md | 232 +++++++++++ 5 files changed, 1412 insertions(+) create mode 100644 sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md create mode 100644 sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md create mode 100644 sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md create mode 100644 sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md create mode 100644 sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in 
Linux.md diff --git a/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md b/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md new file mode 100644 index 0000000000..ca96b7dac6 --- /dev/null +++ b/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md @@ -0,0 +1,220 @@ +Part 1 - LFCS: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux +================================================================================ +The Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, a new program that aims at helping individuals all over the world to get certified in basic to intermediate system administration tasks for Linux systems. This includes supporting running systems and services, along with first-hand troubleshooting and analysis, and smart decision-making to escalate issues to engineering teams. + +![Linux Foundation Certified Sysadmin](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-1.png) + +Linux Foundation Certified Sysadmin – Part 1 + +Please watch the following video that demonstrates about The Linux Foundation Certification Program. 
+
+注:youtube 视频
+
+
+The series will be titled Preparation for the LFCS (Linux Foundation Certified Sysadmin) Parts 1 through 10 and cover the following topics for Ubuntu, CentOS, and openSUSE:
+
+- Part 1: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux
+- Part 2: How to Install and Use vi/m as a full Text Editor
+- Part 3: Archiving Files/Directories and Finding Files on the Filesystem
+- Part 4: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition
+- Part 5: Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux
+- Part 6: Assembling Partitions as RAID Devices – Creating & Managing System Backups
+- Part 7: Managing System Startup Process and Services (SysVinit, Systemd and Upstart)
+- Part 8: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts
+- Part 9: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper
+- Part 10: Learning Basic Shell Scripting and Filesystem Troubleshooting
+
+This post is Part 1 of a 10-tutorial series, which will cover the necessary domains and competencies that are required for the LFCS certification exam. That being said, fire up your terminal, and let’s start.
+
+### Processing Text Streams in Linux ###
+
+Linux treats the input to and the output from programs as streams (or sequences) of characters. To begin understanding redirection and pipes, we must first understand the three most important types of I/O (Input and Output) streams, which are in fact special files (by convention in UNIX and Linux, data streams and peripherals, or device files, are also treated as ordinary files).
+
+The difference between > (the redirection operator) and | (the pipeline operator) is that while the first connects a command with a file, the latter connects the output of one command with another command.
+
+    # command > file
+    # command1 | command2
+
+Since the redirection operator creates or overwrites files silently, we must use it with extreme caution, and never confuse it with a pipeline. One advantage of pipes on Linux and UNIX systems is that there is no intermediate file involved with a pipe – the stdout of the first command is not written to a file and then read by the second command.
+
+For the following practice exercises we will use the poem “A happy child” (anonymous author).
+
+![cat command](http://www.tecmint.com/wp-content/uploads/2014/10/cat-command.png)
+
+cat command example
+
+#### Using sed ####
+
+The name sed is short for stream editor. For those unfamiliar with the term, a stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline).
+
+The most basic (and popular) usage of sed is the substitution of characters. We will begin by changing every occurrence of the lowercase y to UPPERCASE Y and redirecting the output to ahappychild2.txt. The g flag indicates that sed should perform the substitution for all instances of term on every line of file. If this flag is omitted, sed will replace only the first occurrence of term on each line.
+
+**Basic syntax:**
+
+    # sed 's/term/replacement/flag' file
+
+**Our example:**
+
+    # sed 's/y/Y/g' ahappychild.txt > ahappychild2.txt
+
+![sed command](http://www.tecmint.com/wp-content/uploads/2014/10/sed-command.png)
+
+sed command example
+
+Should you want to search for or replace a special character (such as /, \, &), you need to escape it, in the term or replacement strings, with a backslash.
+
+For example, we will substitute the word and for an ampersand. At the same time, we will replace the word I with You when the first one is found at the beginning of a line.
+ + # sed 's/and/\&/g;s/^I/You/g' ahappychild.txt + +![sed replace string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-replace-string.png) + +sed replace string + +In the above command, a ^ (caret sign) is a well-known regular expression that is used to represent the beginning of a line. + +As you can see, we can combine two or more substitution commands (and use regular expressions inside them) by separating them with a semicolon and enclosing the set inside single quotes. + +Another use of sed is showing (or deleting) a chosen portion of a file. In the following example, we will display the first 5 lines of /var/log/messages from Jun 8. + + # sed -n '/^Jun 8/ p' /var/log/messages | sed -n 1,5p + +Note that by default, sed prints every line. We can override this behaviour with the -n option and then tell sed to print (indicated by p) only the part of the file (or the pipe) that matches the pattern (Jun 8 at the beginning of line in the first case and lines 1 through 5 inclusive in the second case). + +Finally, it can be useful while inspecting scripts or configuration files to inspect the code itself and leave out comments. The following sed one-liner deletes (d) blank lines or those starting with # (the | character indicates a boolean OR between the two regular expressions). + + # sed '/^#\|^$/d' apache2.conf + +![sed match string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-match-string.png) + +sed match string + +#### uniq Command #### + +The uniq command allows us to report or remove duplicate lines in a file, writing to stdout by default. We must note that uniq does not detect repeated lines unless they are adjacent. Thus, uniq is commonly used along with a preceding sort (which is used to sort lines of text files). By default, sort takes the first field (separated by spaces) as key field. To specify a different key field, we need to use the -k option. 
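The adjacency requirement is easy to demonstrate with a quick, self-contained sketch (the sample lines below are made up for illustration):

```shell
# uniq only collapses *adjacent* duplicate lines, so unsorted input slips through
printf 'apple\nbanana\napple\n' | uniq          # prints: apple, banana, apple
printf 'apple\nbanana\napple\n' | sort | uniq   # prints: apple, banana

# sort -k2 sorts on the 2nd whitespace-separated field instead of the 1st
printf 'b 2\na 1\n' | sort -k2                  # prints: a 1, then b 2
```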
+
+**Examples**
+
+The du -sch /path/to/directory/* command returns the disk space usage per subdirectory and file within the specified directory in human-readable format (it also shows a total per directory), and does not order the output by size, but by subdirectory and file name. We can use the following command to sort by size.
+
+    # du -sch /var/* | sort -h
+
+![sort command](http://www.tecmint.com/wp-content/uploads/2014/10/sort-command.jpg)
+
+sort command example
+
+You can count the number of events in a log by date by telling uniq to perform the comparison using the first 6 characters (-w 6) of each line (where the date is specified), and prefixing each output line by the number of occurrences (-c) with the following command.
+
+    # cat /var/log/mail.log | uniq -c -w 6
+
+![Count Numbers in File](http://www.tecmint.com/wp-content/uploads/2014/10/count-numbers-in-file.jpg)
+
+Count Numbers in File
+
+Finally, you can combine sort and uniq (as they usually are combined). Consider the following file with a list of donors, donation date, and amount. Suppose we want to know how many unique donors there are. We will use the following command to cut the first field (fields are delimited by a colon), sort by name, and remove duplicate lines.
+
+    # cat sortuniq.txt | cut -d: -f1 | sort | uniq
+
+![Find Unique Records in File](http://www.tecmint.com/wp-content/uploads/2014/10/find-uniqu-records-in-file.jpg)
+
+Find Unique Records in File
+
+- Read Also: [13 “cat” Command Examples][1]
+
+#### grep Command ####
+
+grep searches text files (or command output) for occurrences of a specified regular expression and outputs any line containing a match to standard output.
+
+**Examples**
+
+Display the information from /etc/passwd for user gacanepa, ignoring case.
+
+    # grep -i gacanepa /etc/passwd
+
+![grep Command](http://www.tecmint.com/wp-content/uploads/2014/10/grep-command.jpg)
+
+grep command example
+
+Show all the contents of /etc whose name begins with rc followed by any single number.
+
+    # ls -l /etc | grep rc[0-9]
+
+![List Content Using grep](http://www.tecmint.com/wp-content/uploads/2014/10/list-content-using-grep.jpg)
+
+List Content Using grep
+
+- Read Also: [12 “grep” Command Examples][2]
+
+#### tr Command Usage ####
+
+The tr command can be used to translate (change) or delete characters from stdin, and write the result to stdout.
+
+**Examples**
+
+Change all lowercase to uppercase in the sortuniq.txt file.
+
+    # cat sortuniq.txt | tr '[:lower:]' '[:upper:]'
+
+![Sort Strings in File](http://www.tecmint.com/wp-content/uploads/2014/10/sort-strings.jpg)
+
+Sort Strings in File
+
+Squeeze the delimiter in the output of ls -l to only one space.
+
+    # ls -l | tr -s ' '
+
+![Squeeze Delimiter](http://www.tecmint.com/wp-content/uploads/2014/10/squeeze-delimeter.jpg)
+
+Squeeze Delimiter
+
+#### cut Command Usage ####
+
+The cut command extracts portions of input lines (from stdin or files) and displays the result on standard output, based on number of bytes (-b option), characters (-c), or fields (-f). In this last case (based on fields), the default field separator is a tab, but a different delimiter can be specified by using the -d option.
+
+**Examples**
+
+Extract the user accounts and the default shells assigned to them from /etc/passwd (the -d option allows us to specify the field delimiter, and the -f switch indicates which field(s) will be extracted).
+
+    # cat /etc/passwd | cut -d: -f1,7
+
+![Extract User Accounts](http://www.tecmint.com/wp-content/uploads/2014/10/extract-user-accounts.jpg)
+
+Extract User Accounts
+
+Summing up, we will create a text stream consisting of the first and third non-blank fields of the output of the last command (which shows recent login sessions).
We will use grep as a first filter to check for sessions of user gacanepa, then squeeze the delimiters down to a single space (tr -s ' '). Next, we'll extract the first and third fields with cut, and finally sort by the second field (IP addresses in this case), showing unique entries.
+
+    # last | grep gacanepa | tr -s ' ' | cut -d' ' -f1,3 | sort -k2 | uniq
+
+![last command](http://www.tecmint.com/wp-content/uploads/2014/10/last-command.png)
+
+last command example
+
+The above command shows how multiple commands and pipes can be combined to obtain filtered data according to our needs. Feel free to also run it by parts, to help you see the output that is pipelined from one command to the next (this can be a great learning experience, by the way!).
+
+### Summary ###
+
+Although these examples may not seem very useful at first sight, they are a nice starting point for experimenting with the commands that are used to create, edit, and manipulate files from the Linux command line. Feel free to leave your questions and comments below – they will be much appreciated!
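To experiment with the same filtering pattern safely, the pipeline above can be reproduced with made-up input — here printf stands in for last, since real session data varies from machine to machine:

```shell
# Three fake "last"-style records: user, tty, source IP
printf 'gacanepa pts/0 10.0.0.2\ngacanepa pts/1 10.0.0.2\nroot pts/2 10.0.0.9\n' \
  | grep gacanepa \
  | tr -s ' ' \
  | cut -d' ' -f1,3 \
  | sort -k2 \
  | uniq
# prints a single line: gacanepa 10.0.0.2 (the duplicate session is collapsed)
```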
+ +#### Reference Links #### + +- [About the LFCS][3] +- [Why get a Linux Foundation Certification?][4] +- [Register for the LFCS exam][5] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ +[2]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/ +[3]:https://training.linuxfoundation.org/certification/LFCS +[4]:https://training.linuxfoundation.org/certification/why-certify-with-us +[5]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file diff --git a/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md b/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md new file mode 100644 index 0000000000..7537f784bd --- /dev/null +++ b/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md @@ -0,0 +1,387 @@ +Part 2 - LFCS: How to Install and Use vi/vim as a Full Text Editor +================================================================================ +A couple of months ago, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification in order to help individuals from all over the world to verify they are capable of doing basic to intermediate system administration tasks on Linux systems: system support, first-hand troubleshooting and maintenance, plus intelligent decision-making to know when it’s time to raise issues to upper support teams. 
+
+![Learning VI Editor in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/LFCS-Part-2.png)
+
+Learning VI Editor in Linux
+
+Please take a look at the video below, which explains The Linux Foundation Certification Program.
+
+注:youtube 视频
+
+
+This post is Part 2 of a 10-tutorial series. In this part, we will cover the basic file editing operations and an understanding of modes in the vi/m editor, which are required for the LFCS certification exam.
+
+### Perform Basic File Editing Operations Using vi/m ###
+
+Vi was the first full-screen text editor written for Unix. Although it was intended to be small and simple, it can be a bit challenging for people used exclusively to GUI text editors, such as NotePad++ or gedit, to name a few examples.
+
+To use Vi, we must first understand the 3 modes in which this powerful program operates, in order to begin learning later about its powerful text-editing procedures.
+
+Please note that most modern Linux distributions ship with a variant of vi known as vim (“Vi improved”), which supports more features than the original vi does. For that reason, throughout this tutorial we will use vi and vim interchangeably.
+
+If your distribution does not have vim installed, you can install it as follows.
+
+- Ubuntu and derivatives: aptitude update && aptitude install vim
+- Red Hat-based distributions: yum update && yum install vim
+- openSUSE: zypper update && zypper install vim
+
+### Why should I want to learn vi? ###
+
+There are at least 2 good reasons to learn vi.
+
+1. vi is always available (no matter what distribution you’re using) since it is required by POSIX.
+
+2. vi does not consume a considerable amount of system resources and allows us to perform any imaginable task without lifting our fingers from the keyboard.
+
+In addition, vi has a very extensive built-in manual, which can be launched using the :help command right after the program is started.
This built-in manual contains more information than vi/m’s man page. + +![vi Man Pages](http://www.tecmint.com/wp-content/uploads/2014/10/vi-man-pages.png) + +vi Man Pages + +#### Launching vi #### + +To launch vi, type vi in your command prompt. + +![Start vi Editor](http://www.tecmint.com/wp-content/uploads/2014/10/start-vi-editor.png) + +Start vi Editor + +Then press i to enter Insert mode, and you can start typing. Another way to launch vi/m is. + + # vi filename + +Which will open a new buffer (more on buffers later) named filename, which you can later save to disk. + +#### Understanding Vi modes #### + +1. In command mode, vi allows the user to navigate around the file and enter vi commands, which are brief, case-sensitive combinations of one or more letters. Almost all of them can be prefixed with a number to repeat the command that number of times. + +For example, yy (or Y) copies the entire current line, whereas 3yy (or 3Y) copies the entire current line along with the two next lines (3 lines in total). We can always enter command mode (regardless of the mode we’re working on) by pressing the Esc key. The fact that in command mode the keyboard keys are interpreted as commands instead of text tends to be confusing to beginners. + +2. In ex mode, we can manipulate files (including saving a current file and running outside programs). To enter this mode, we must type a colon (:) from command mode, directly followed by the name of the ex-mode command that needs to be used. After that, vi returns automatically to command mode. + +3. In insert mode (the letter i is commonly used to enter this mode), we simply enter text. Most keystrokes result in text appearing on the screen (one important exception is the Esc key, which exits insert mode and returns to command mode). 
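Since ex-mode commands are plain text, they can also be run non-interactively — a handy property when scripting edits. A minimal sketch (assuming the ex binary, which normally ships alongside vi/vim, is available; the file path is just an example):

```shell
# -s puts ex in silent/batch mode, reading its commands from standard input
printf 'old line\n' > /tmp/demo.txt
printf '%%s/old/new/g\nwq\n' | ex -s /tmp/demo.txt
cat /tmp/demo.txt    # the file now reads: new line
```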
+
+![vi Insert Mode](http://www.tecmint.com/wp-content/uploads/2014/10/vi-insert-mode.png)
+
+vi Insert Mode
+
+#### Vi Commands ####
+
+The following table shows a list of commonly used vi commands. File editing commands can be forced by appending an exclamation sign to the command (for example, :q! quits without saving changes).
+
+| Key command | Description |
+| ----------- | ----------- |
+| h or left arrow | Go one character to the left |
+| j or down arrow | Go down one line |
+| k or up arrow | Go up one line |
+| l (lowercase L) or right arrow | Go one character to the right |
+| H | Go to the top of the screen |
+| L | Go to the bottom of the screen |
+| G | Go to the end of the file |
+| w | Move one word to the right |
+| b | Move one word to the left |
+| 0 (zero) | Go to the beginning of the current line |
+| ^ | Go to the first nonblank character on the current line |
+| $ | Go to the end of the current line |
+| Ctrl-B | Go back one screen |
+| Ctrl-F | Go forward one screen |
+| i | Insert at the current cursor position |
+| I (uppercase i) | Insert at the beginning of the current line |
+| J (uppercase j) | Join current line with the next one (move next line up) |
+| a | Append after the current cursor position |
+| o (lowercase o) | Creates a blank line after the current line |
+| O (uppercase o) | Creates a blank line before the current line |
+| r | Replace the character at the current cursor position |
+| R | Overwrite at the current cursor position |
+| x | Delete the character at the current cursor position |
+| X | Delete the character immediately before (to the left of) the current cursor position |
+| dd | Cut (for later pasting) the entire current line |
+| D | Cut from the current cursor position to the end of the line (this command is equivalent to d$) |
+| yX | Given a movement command X, copy (yank) the appropriate number of characters, words, or lines from the current cursor position |
+| yy or Y | Yank (copy) the entire current line |
+| p | Paste after (next line) the current cursor position |
+| P | Paste before (previous line) the current cursor position |
+| . (period) | Repeat the last command |
+| u | Undo the last command |
+| U | Undo the last command in the last line. This will work as long as the cursor is still on the line. |
+| n | Find the next match in a search |
+| N | Find the previous match in a search |
+| :n | Next file; when multiple files are specified for editing, this command loads the next file. |
+| :e file | Load file in place of the current file. |
+| :r file | Insert the contents of file after (next line) the current cursor position |
+| :q | Quit without saving changes. |
+| :w file | Write the current buffer to file. To append to an existing file, use :w >> file. |
+| :wq | Write the contents of the current file and quit. Equivalent to :x and ZZ. |
+| :r! command | Execute command and insert output after (next line) the current cursor position. |
+
+#### Vi Options ####
+
+The following options can come in handy while running vim (we need to add them to our ~/.vimrc file).
+
+    # echo set number >> ~/.vimrc
+    # echo syntax on >> ~/.vimrc
+    # echo set tabstop=4 >> ~/.vimrc
+    # echo set autoindent >> ~/.vimrc
+
+![vi Editor Options](http://www.tecmint.com/wp-content/uploads/2014/10/vi-options.png)
+
+vi Editor Options
+
+- set number shows line numbers when vi opens an existing or a new file.
+- syntax on turns on syntax highlighting (for multiple file extensions) in order to make code and config files more readable.
+- set tabstop=4 sets the tab size to 4 spaces (default value is 8).
+- set autoindent carries over previous indent to the next line.
+
+#### Search and replace ####
+
+vi has the ability to move the cursor to a certain location (on a single line or over an entire file) based on searches. It can also perform text replacements with or without confirmation from the user.
+ +a). Searching within a line: the f command searches a line and moves the cursor to the next occurrence of a specified character in the current line. + +For example, the command fh would move the cursor to the next instance of the letter h within the current line. Note that neither the letter f nor the character you’re searching for will appear anywhere on your screen, but the character will be highlighted after you press Enter. + +For example, this is what I get after pressing f4 in command mode. + +![Search String in Vi](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-string.png) + +Search String in Vi + +b). Searching an entire file: use the / command, followed by the word or phrase to be searched for. A search may be repeated using the previous search string with the n command, or the next one (using the N command). This is the result of typing /Jane in command mode. + +![Vi Search String in File](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-line.png) + +Vi Search String in File + +c). vi uses a command (similar to sed’s) to perform substitution operations over a range of lines or an entire file. To change the word “old” to “young” for the entire file, we must enter the following command. + + :%s/old/young/g + +**Notice**: The colon at the beginning of the command. + +![Vi Search and Replace](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-and-replace.png) + +Vi Search and Replace + +The colon (:) starts the ex command, s in this case (for substitution), % is a shortcut meaning from the first line to the last line (the range can also be specified as n,m which means “from line n to line m”), old is the search pattern, while young is the replacement text, and g indicates that the substitution should be performed on every occurrence of the search string in the file. + +Alternatively, a c can be added to the end of the command to ask for confirmation before performing any substitution. 
+
+    :%s/old/young/gc
+
+Before replacing the original text with the new one, vi/m will present us with the following message.
+
+![Replace String in Vi](http://www.tecmint.com/wp-content/uploads/2014/10/vi-replace-old-with-young.png)
+
+Replace String in Vi
+
+- y: perform the substitution (yes)
+- n: skip this occurrence and go to the next one (no)
+- a: perform the substitution in this and all subsequent instances of the pattern.
+- q or Esc: quit substituting.
+- l (lowercase L): perform this substitution and quit (last).
+- Ctrl-e, Ctrl-y: Scroll down and up, respectively, to view the context of the proposed substitution.
+
+#### Editing Multiple Files at a Time ####
+
+Let’s type vim file1 file2 file3 in our command prompt.
+
+    # vim file1 file2 file3
+
+First, vim will open file1. To switch to the next file (file2), we need to use the :n command. When we want to return to the previous file, :N will do the job.
+
+In order to switch from file1 to file3:
+
+a). The :buffers command will show a list of the files currently being edited.
+
+    :buffers
+
+![Edit Multiple Files](http://www.tecmint.com/wp-content/uploads/2014/10/vi-edit-multiple-files.png)
+
+Edit Multiple Files
+
+b). The command :buffer 3 (without the s at the end) will open file3 for editing.
+
+In the image above, a pound sign (#) indicates that the file is currently open but in the background, while %a marks the file that is currently being edited. On the other hand, a blank space after the file number (3 in the above example) indicates that the file has not yet been opened.
+
+#### Temporary vi buffers ####
+
+To copy a couple of consecutive lines (let’s say 4, for example) into a temporary buffer named a (not associated with a file) and place those lines in another part of the file later in the current vi session, we need to…
+
+1. Press the ESC key to be sure we are in vi Command mode.
+
+2. Place the cursor on the first line of the text we wish to copy.
+
+3.
Type “a4yy to copy the current line, along with the 3 subsequent lines, into a buffer named a. We can continue editing our file – we do not need to insert the copied lines immediately. + +4. When we reach the location for the copied lines, use “a before the p or P commands to insert the lines copied into the buffer named a: + +- Type “ap to insert the lines copied into buffer a after the current line on which the cursor is resting. +- Type “aP to insert the lines copied into buffer a before the current line. + +If we wish, we can repeat the above steps to insert the contents of buffer a in multiple places in our file. A temporary buffer, as the one in this section, is disposed when the current window is closed. + +### Summary ### + +As we have seen, vi/m is a powerful and versatile text editor for the CLI. Feel free to share your own tricks and comments below. + +#### Reference Links #### + +- [About the LFCS][1] +- [Why get a Linux Foundation Certification?][2] +- [Register for the LFCS exam][3] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/vi-editor-usage/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:https://training.linuxfoundation.org/certification/LFCS +[2]:https://training.linuxfoundation.org/certification/why-certify-with-us +[3]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file diff --git a/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md b/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md new file mode 100644 index 0000000000..6ac3d104a0 --- /dev/null +++ b/sources/tech/LFCS/Part 3 - 
LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md @@ -0,0 +1,382 @@ +Part 3 - LFCS: How to Archive/Compress Files & Directories, Setting File Attributes and Finding Files in Linux +================================================================================ +Recently, the Linux Foundation started the LFCS (Linux Foundation Certified Sysadmin) certification, a brand new program whose purpose is allowing individuals from all corners of the globe to have access to an exam, which if approved, certifies that the person is knowledgeable in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-level troubleshooting and analysis, plus the ability to decide when to escalate issues to engineering teams. + +![Linux Foundation Certified Sysadmin – Part 3](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-3.png) + +Linux Foundation Certified Sysadmin – Part 3 + +Please watch the below video that gives the idea about The Linux Foundation Certification Program. + +注:youtube 视频 + + +This post is Part 3 of a 10-tutorial series, here in this part, we will cover how to archive/compress files and directories, set file attributes, and find files on the filesystem, that are required for the LFCS certification exam. + +### Archiving and Compression Tools ### + +A file archiving tool groups a set of files into a single standalone file that we can backup to several types of media, transfer across a network, or send via email. The most frequently used archiving utility in Linux is tar. When an archiving utility is used along with a compression tool, it allows to reduce the disk size that is needed to store the same files and information. + +#### The tar utility #### + +tar bundles a group of files together into a single archive (commonly called a tar file or tarball). 
The name originally stood for tape archiver, but we must note that we can use this tool to archive data to any kind of writeable media (not only to tapes). Tar is normally used with a compression tool such as gzip, bzip2, or xz to produce a compressed tarball. + +**Basic syntax:** + + # tar [options] [pathname ...] + +Where … represents the expression used to specify which files should be acted upon. + +#### Most commonly used tar commands #### + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Long option | Abbreviation | Description |
|-------------|--------------|-------------|
| --create | c | Creates a tar archive |
| --concatenate | A | Appends tar files to an archive |
| --append | r | Appends files to the end of an archive |
| --update | u | Appends files newer than the copy in the archive |
| --diff or --compare | d | Finds differences between the archive and the file system |
| --file archive | f | Uses the archive file or device ARCHIVE |
| --list | t | Lists the contents of a tarball |
| --extract or --get | x | Extracts files from an archive |
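As an illustration of the operations in the table above, here is a minimal sketch (using throwaway files in a temporary directory) that creates, lists, appends to, and extracts an archive:

```shell
#!/bin/sh
# Minimal sketch of the create (c), list (t), append (r) and
# extract (x) operations, using throwaway files in a temp dir.
set -e
cd "$(mktemp -d)"
echo "one" > file1
echo "two" > file2

tar cf backup.tar file1 file2    # --create
tar tf backup.tar                # --list: prints file1, file2

echo "three" > file3
tar rf backup.tar file3          # --append: add file3 at the end

mkdir restore
tar xf backup.tar -C restore     # --extract into restore/
ls restore                       # file1  file2  file3
```

Note that the append (r) operation only works on uncompressed archives, a point the article returns to below.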
+ +#### Normally used operation modifiers #### + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Long option | Abbreviation | Description |
|-------------|--------------|-------------|
| --directory dir | C | Changes to directory dir before performing operations |
| --same-permissions | p | Preserves original permissions |
| --verbose | v | Lists all files read or extracted. When used along with --list, also displays file sizes, ownership, and time stamps |
| --verify | W | Verifies the archive after writing it |
| --exclude=pattern | — | Excludes files matching PATTERN from the archive |
| --exclude-from=file | X | Excludes files matching the patterns listed in file |
| --gzip or --gunzip | z | Processes an archive through gzip |
| --bzip2 | j | Processes an archive through bzip2 |
| --xz | J | Processes an archive through xz |
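The modifiers combine freely with the operations from the previous table; here is a minimal sketch (throwaway files in a temporary directory) that creates a gzip-compressed tarball verbosely and then extracts it elsewhere while preserving permissions:

```shell
#!/bin/sh
# Sketch combining modifiers: z (gzip), v (verbose), p (same permissions)
# and f (archive file), plus -C (--directory) to extract elsewhere.
set -e
cd "$(mktemp -d)"
mkdir data restore
echo "hello" > data/file1
chmod 640 data/file1

tar czvf archive.tar.gz data        # create + gzip, listing each file as it is read
tar xzpf archive.tar.gz -C restore  # extract under restore/, keeping the 640 mode
ls -l restore/data/file1
```

Without p, an unprivileged user's umask would be applied to the extracted files instead of the modes stored in the archive.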
+ +Gzip is the oldest compression tool and provides the least compression, while bzip2 provides improved compression. In addition, xz is the newest and (usually) provides the best compression. These advantages of better compression come at a price: the time it takes to complete the operation, and the system resources used during the process. + +Normally, tar files compressed with these utilities have .gz, .bz2, or .xz extensions, respectively. In the following examples we will be using these files: file1, file2, file3, file4, and file5. + +**Grouping and compressing with gzip, bzip2 and xz** + +Group all the files in the current working directory and compress the resulting bundle with gzip, bzip2, and xz (please note the use of a wildcard pattern to specify which files should be included in the bundle – this is to prevent the archiving tool from grouping the tarballs created in previous steps). + + # tar czf myfiles.tar.gz file[0-9] + # tar cjf myfiles.tar.bz2 file[0-9] + # tar cJf myfile.tar.xz file[0-9] + +![Compress Multiple Files Using tar](http://www.tecmint.com/wp-content/uploads/2014/10/Compress-Multiple-Files.png) + +Compress Multiple Files + +**Listing the contents of a tarball and updating / appending files to the bundle** + +List the contents of a tarball and display the same information as a long directory listing. Note that update or append operations cannot be applied to compressed files directly (if you need to update or append a file to a compressed tarball, you need to uncompress the tar file, update / append to it, and then compress it again). 
+ + # tar tvf [tarball] + +![Check Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/List-Archive-Content.png) + +List Archive Content + +Run any of the following commands: + + # gzip -d myfiles.tar.gz [#1] + # bzip2 -d myfiles.tar.bz2 [#2] + # xz -d myfiles.tar.xz [#3] + +Then + + # tar --delete --file myfiles.tar file4 (deletes the file inside the tarball) + # tar --update --file myfiles.tar file4 (adds the updated file) + +and + + # gzip myfiles.tar [ if you choose #1 above ] + # bzip2 myfiles.tar [ if you choose #2 above ] + # xz myfiles.tar [ if you choose #3 above ] + +Finally, + + # tar tvf [tarball] #again + +and compare the modification date and time of file4 with the same information as shown earlier. + +**Excluding file types** + +Suppose you want to perform a backup of user’s home directories. A good sysadmin practice would be (may also be specified by company policies) to exclude all video and audio files from backups. + +Maybe your first approach would be to exclude from the backup all files with an .mp3 or .mp4 extension (or other extensions). What if you have a clever user who can change the extension to .txt or .bkp, your approach won’t do you much good. In order to detect an audio or video file, you need to check its file type with file. The following shell script will do the job. + + #!/bin/bash + # Pass the directory to backup as first argument. + DIR=$1 + # Create the tarball and compress it. Exclude files with the MPEG string in its file type. + # -If the file type contains the string mpeg, $? (the exit status of the most recently executed command) expands to 0, and the filename is redirected to the exclude option. Otherwise, it expands to 1. + # -If $? equals 0, add the file to the list of files to be backed up. + tar X <(for i in $DIR/*; do file $i | grep -i mpeg; if [ $? 
-eq 0 ]; then echo $i; fi;done) -cjf backupfile.tar.bz2 $DIR/* + +![Exclude Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Exclude-Files-in-Tar.png) + +Exclude Files in tar + +**Restoring backups with tar preserving permissions** + +You can then restore the backup to the original user’s home directory (user_restore in this example), preserving permissions, with the following command. + + # tar xjf backupfile.tar.bz2 --directory user_restore --same-permissions + +![Restore Files from tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-tar-Backup-Files.png) + +Restore Files from Archive + +**Read Also:** + +- [18 tar Command Examples in Linux][1] +- [Dtrx – An Intelligent Archive Tool for Linux][2] + +### Using find Command to Search for Files ### + +The find command is used to search recursively through directory trees for files or directories that match certain characteristics, and can then either print the matching files or directories or perform other operations on the matches. + +Normally, we will search by name, owner, group, type, permissions, date, and size. + +#### Basic syntax: #### + +# find [directory_to_search] [expression] + +**Finding files recursively according to Size** + +Find all files (-f) in the current directory (.) and 2 subdirectories below (-maxdepth 3 includes the current working directory and 2 levels down) whose size (-size) is greater than 2 MB. + + # find . -maxdepth 3 -type f -size +2M + +![Find Files by Size in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-Based-on-Size.png) + +Find Files Based on Size + +**Finding and deleting files that match a certain criteria** + +Files with 777 permissions are sometimes considered an open door to external attackers. Either way, it is not safe to let anyone do anything with files. We will take a rather aggressive approach and delete them! (‘{}‘ + is used to “collect” the results of the search). 
+ + # find /home/user -perm 777 -exec rm '{}' + + +![Find all 777 Permission Files](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-with-777-Permission.png) + +Find Files with 777 Permission + +**Finding files per atime or mtime** + +Search for configuration files in /etc that have been accessed (-atime) or modified (-mtime) more (+180) or less (-180) than 6 months ago, or exactly 6 months ago (180). + +Modify the following command as per the example below: + + # find /etc -iname "*.conf" -mtime -180 -print + +![Find Files by Modification Time](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Modified-Files.png) + +Find Modified Files + +- Read Also: [35 Practical Examples of Linux ‘find’ Command][3] + +### File Permissions and Basic Attributes ### + +The first 10 characters in the output of ls -l are the file attributes. The first of these characters is used to indicate the file type: + +- - : a regular file +- d : a directory +- l : a symbolic link +- c : a character device (which treats data as a stream of bytes, i.e. a terminal) +- b : a block device (which handles data in blocks, i.e. storage devices) + +The next nine characters of the file attributes are called the file mode and represent the read (r), write (w), and execute (x) permissions of the file’s owner, the file’s group owner, and the rest of the users (commonly referred to as “the world”). + +Whereas the read permission on a file allows it to be opened and read, the same permission on a directory allows its contents to be listed if the execute permission is also set. In addition, the execute permission on a file allows it to be handled as a program and run, while on a directory it allows that directory to be cd’ed into. + +File permissions are changed with the chmod command, whose basic syntax is as follows: + + # chmod [new_mode] file + +Where new_mode is either an octal number or an expression that specifies the new permissions. 
+ +The octal number can be converted from its binary equivalent, which is calculated from the desired file permissions for the owner, the group, and the world, as follows: + +The presence of a certain permission equals a power of 2 (r=2^2=4, w=2^1=2, x=2^0=1), while its absence equates to 0. For example: + +![Linux File Permissions](http://www.tecmint.com/wp-content/uploads/2014/10/File-Permissions.png) + +File Permissions + +To set the file’s permissions as above in octal form, type: + + # chmod 744 myfile + +You can also set a file’s mode using an expression that indicates the owner’s rights with the letter u, the group owner’s rights with the letter g, and the rest with o. All of these “individuals” can be represented at the same time with the letter a. Permissions are granted (or revoked) with the + or - signs, respectively. + +**Revoking execute permission for a shell script to all users** + +As we explained earlier, we can revoke a certain permission by prepending it with the minus sign and indicating whether it needs to be revoked for the owner, the group owner, or all users. The one-liner below can be interpreted as follows: change mode for all (a) users, revoke (-) execute permission (x). + + # chmod a-x backup.sh + +Granting read, write, and execute permissions for a file to the owner and group owner, and read permissions for the world: + +When we use a 3-digit octal number to set permissions for a file, the first digit indicates the permissions for the owner, the second digit for the group owner and the third digit for everyone else: + +- Owner: (r=4 + w=2 + x=1 = 7) +- Group owner: (r=4 + w=2 + x=1 = 7) +- World: (r=4 + w=0 + x=0 = 4) + + # chmod 774 myfile + +In time, and with practice, you will be able to decide which method of changing a file mode works best for you in each case. 
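As a quick check that the two methods are interchangeable, the sketch below (with a throwaway file named myfile in a temporary directory) sets the same 744 mode first in octal and then in symbolic form:

```shell
#!/bin/sh
# Sketch: octal and symbolic chmod forms produce identical modes.
set -e
cd "$(mktemp -d)"
touch myfile

chmod 744 myfile         # octal: owner rwx (4+2+1=7), group r (4), world r (4)
ls -l myfile             # mode column reads -rwxr--r--

chmod 000 myfile         # clear all permissions
chmod u=rwx,go=r myfile  # symbolic equivalent of 744
ls -l myfile             # -rwxr--r-- again
```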
A long directory listing also shows the file’s owner and its group owner (which serve as a rudimentary yet effective access control to files in a system): + +![Linux File Listing](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-File-Listing.png) + +Linux File Listing + +File ownership is changed with the chown command. The owner and the group owner can be changed at the same time or separately. Its basic syntax is as follows: + + # chown user:group file + +Where at least user or group need to be present. + +**Few Examples** + +Changing the owner of a file to a certain user. + + # chown gacanepa sent + +Changing the owner and group of a file to an specific user:group pair. + + # chown gacanepa:gacanepa TestFile + +Changing only the group owner of a file to a certain group. Note the colon before the group’s name. + + # chown :gacanepa email_body.txt + +### Conclusion ### + +As a sysadmin, you need to know how to create and restore backups, how to find files in your system and change their attributes, along with a few tricks that can make your life easier and will prevent you from running into future issues. + +I hope that the tips provided in the present article will help you to achieve that goal. Feel free to add your own tips and ideas in the comments section for the benefit of the community. Thanks in advance! 
+Reference Links + +- [About the LFCS][4] +- [Why get a Linux Foundation Certification?][5] +- [Register for the LFCS exam][6] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/compress-files-and-finding-files-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/18-tar-command-examples-in-linux/ +[2]:http://www.tecmint.com/dtrx-an-intelligent-archive-extraction-tar-zip-cpio-rpm-deb-rar-tool-for-linux/ +[3]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/ +[4]:https://training.linuxfoundation.org/certification/LFCS +[5]:https://training.linuxfoundation.org/certification/why-certify-with-us +[6]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file diff --git a/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md b/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md new file mode 100644 index 0000000000..ada637fabb --- /dev/null +++ b/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md @@ -0,0 +1,191 @@ +Part 4 - LFCS: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition +================================================================================ +Last August, the Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators to show, through a performance-based exam, that they can perform overall operational support of Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation – if needed – to other 
support teams. + +![Linux Foundation Certified Sysadmin – Part 4](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-4.png) + +Linux Foundation Certified Sysadmin – Part 4 + +Please be aware that Linux Foundation certifications are precise, totally based on performance and available through an online portal anytime, anywhere. Thus, you no longer have to travel to an examination center to get the certifications you need to establish your skills and expertise. + +Please watch the below video that explains The Linux Foundation Certification Program. + +注:youtube 视频 + + +This post is Part 4 of a 10-tutorial series; here in this part, we will cover partitioning storage devices, formatting filesystems and configuring a swap partition, which are required for the LFCS certification exam. + +### Partitioning Storage Devices ### + +Partitioning is a means to divide a single hard drive into one or more parts or “slices” called partitions. A partition is a section on a drive that is treated as an independent disk and which contains a single type of file system, whereas a partition table is an index that relates those physical sections of the hard drive to partition identifications. + +In Linux, the traditional tool for managing MBR partitions (up to ~2009) in IBM PC compatible systems is fdisk. For GPT partitions (~2010 and later) we will use gdisk. Each of these tools can be invoked by typing its name followed by a device name (such as /dev/sdb). + +#### Managing MBR Partitions with fdisk #### + +We will cover fdisk first. + + # fdisk /dev/sdb + +A prompt appears asking for the next operation. If you are unsure, you can press the ‘m‘ key to display the help contents. + +![fdisk Help Menu](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-help.png) + +fdisk Help Menu + +In the above image, the most frequently used options are highlighted. At any moment, you can press ‘p‘ to display the current partition table. 
+ +![Check Partition Table in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Show-Partition-Table.png) + +Show Partition Table + +The Id column shows the partition type (or partition id) that has been assigned by fdisk to the partition. A partition type serves as an indicator of the file system, the partition contains or, in simple words, the way data will be accessed in that partition. + +Please note that a comprehensive study of each partition type is out of the scope of this tutorial – as this series is focused on the LFCS exam, which is performance-based. + +**Some of the options used by fdisk as follows:** + +You can list all the partition types that can be managed by fdisk by pressing the ‘l‘ option (lowercase l). + +Press ‘d‘ to delete an existing partition. If more than one partition is found in the drive, you will be asked which one should be deleted. + +Enter the corresponding number, and then press ‘w‘ (write modifications to partition table) to apply changes. + +In the following example, we will delete /dev/sdb2, and then print (p) the partition table to verify the modifications. + +![fdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-options.png) + +fdisk Command Options + +Press ‘n‘ to create a new partition, then ‘p‘ to indicate it will be a primary partition. Finally, you can accept all the default values (in which case the partition will occupy all the available space), or specify a size as follows. + +![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-New-Partition.png) + +Create New Partition + +If the partition Id that fdisk chose is not the right one for our setup, we can press ‘t‘ to change it. + +![Change Partition Name in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Partition-Name.png) + +Change Partition Name + +When you’re done setting up the partitions, press ‘w‘ to commit the changes to disk. 
+ +![Save Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Partition-Changes.png) + +Save Partition Changes + +#### Managing GPT Partitions with gdisk #### + +In the following example, we will use /dev/sdb. + + # gdisk /dev/sdb + +We must note that gdisk can be used either to create MBR or GPT partitions. + +![Create GPT Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-GPT-Partitions.png) + +Create GPT Partitions + +The advantage of using GPT partitioning is that we can create up to 128 partitions in the same disk whose size can be up to the order of petabytes, whereas the maximum size for MBR partitions is 2 TB. + +Note that most of the options in fdisk are the same in gdisk. For that reason, we will not go into detail about them, but here’s a screenshot of the process. + +![gdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/gdisk-options.png) + +gdisk Command Options + +### Formatting Filesystems ### + +Once we have created all the necessary partitions, we must create filesystems. To find out the list of filesystems supported in your system, run. + + # ls /sbin/mk* + +![Check Filesystems Type in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Filesystems.png) + +Check Filesystems Type + +The type of filesystem that you should choose depends on your requirements. You should consider the pros and cons of each filesystem and its own set of features. Two important attributes to look for in a filesystem are. + +- Journaling support, which allows for faster data recovery in the event of a system crash. +- Security Enhanced Linux (SELinux) support, as per the project wiki, “a security enhancement to Linux which allows users and administrators more control over access control”. + +In our next example, we will create an ext4 filesystem (supports both journaling and SELinux) labeled Tecmint on /dev/sdb1, using mkfs, whose basic syntax is. 
+ + # mkfs -t [filesystem] -L [label] device + or + # mkfs.[filesystem] -L [label] device + +![Create ext4 Filesystems in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystems.png) + +Create ext4 Filesystems + +### Creating and Using Swap Partitions ### + +Swap partitions are necessary if we need our Linux system to have access to virtual memory, which is a section of the hard disk designated for use as memory, when the main system memory (RAM) is all in use. For that reason, a swap partition may not be needed on systems with enough RAM to meet all its requirements; however, even in that case it’s up to the system administrator to decide whether to use a swap partition or not. + +A simple rule of thumb to decide the size of a swap partition is as follows. + +Swap should usually equal 2x physical RAM for up to 2 GB of physical RAM, and then an additional 1x physical RAM for any amount above 2 GB, but never less than 32 MB. + +So, if: + +M = Amount of RAM in GB, and S = Amount of swap in GB, then + + If M < 2 + S = M *2 + Else + S = M + 2 + +Remember this is just a formula and that only you, as a sysadmin, have the final word as to the use and size of a swap partition. + +To configure a swap partition, create a regular partition as demonstrated earlier with the desired size. Next, we need to add the following entry to the /etc/fstab file (X can be either b or c). + + /dev/sdX1 swap swap sw 0 0 + +Finally, let’s format and enable the swap partition. + + # mkswap /dev/sdX1 + # swapon -v /dev/sdX1 + +To display a snapshot of the swap partition(s). + + # cat /proc/swaps + +To disable the swap partition. + + # swapoff /dev/sdX1 + +For the next example, we’ll use /dev/sdc1 (=512 MB, for a system with 256 MB of RAM) to set up a partition with fdisk that we will use as swap, following the steps detailed above. Note that we will specify a fixed size in this case. 
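The sizing rule of thumb given above can be expressed as a small helper function (a rough sketch, integer GB only; the function name suggested_swap_gb is ours, not a standard tool):

```shell
#!/bin/sh
# Rough sketch of the swap sizing rule of thumb from the text:
#   RAM < 2 GB  -> swap = 2 x RAM
#   RAM >= 2 GB -> swap = RAM + 2 GB
# Integer GB only; the final decision is still the sysadmin's.
suggested_swap_gb() {
    m=$1                    # physical RAM in GB
    if [ "$m" -lt 2 ]; then
        echo $((m * 2))
    else
        echo $((m + 2))
    fi
}

suggested_swap_gb 1   # prints 2
suggested_swap_gb 8   # prints 10
```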
+ +![Create-Swap-Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Swap-Partition.png) + +Create Swap Partition + +![Add Swap Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Swap-Partition.png) + +Enable Swap Partition + +### Conclusion ### + +Creating partitions (including swap) and formatting filesystems are crucial in your road to Sysadminship. I hope that the tips given in this article will guide you to achieve your goals. Feel free to add your own tips & ideas in the comments section below, for the benefit of the community. +Reference Links + +- [About the LFCS][1] +- [Why get a Linux Foundation Certification?][2] +- [Register for the LFCS exam][3] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:https://training.linuxfoundation.org/certification/LFCS +[2]:https://training.linuxfoundation.org/certification/why-certify-with-us +[3]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file diff --git a/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md b/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md new file mode 100644 index 0000000000..1544a378bc --- /dev/null +++ b/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md @@ -0,0 +1,232 @@ +Part 5 - LFCS: How to Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux +================================================================================ +The Linux Foundation launched the 
LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is allowing individuals from all corners of the globe to get certified in basic to intermediate system administration tasks for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams. + +![Linux Foundation Certified Sysadmin – Part 5](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-5.png) + +Linux Foundation Certified Sysadmin – Part 5 + +The following video shows an introduction to The Linux Foundation Certification Program. + +注:youtube 视频 + + +This post is Part 5 of a 10-tutorial series, here in this part, we will explain How to mount/unmount local and network filesystems in linux, that are required for the LFCS certification exam. + +### Mounting Filesystems ### + +Once a disk has been partitioned, Linux needs some way to access the data on the partitions. Unlike DOS or Windows (where this is done by assigning a drive letter to each partition), Linux uses a unified directory tree where each partition is mounted at a mount point in that tree. + +A mount point is a directory that is used as a way to access the filesystem on the partition, and mounting the filesystem is the process of associating a certain filesystem (a partition, for example) with a specific directory in the directory tree. + +In other words, the first step in managing a storage device is attaching the device to the file system tree. This task can be accomplished on a one-time basis by using tools such as mount (and then unmounted with umount) or persistently across reboots by editing the /etc/fstab file. + +The mount command (without any options or arguments) shows the currently mounted filesystems. 
+ + # mount + +![Check Mounted Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/check-mounted-filesystems.png) + +Check Mounted Filesystem + +In addition, mount is used to mount filesystems into the filesystem tree. Its standard syntax is as follows. + + # mount -t type device dir -o options + +This command instructs the kernel to mount the filesystem found on device (a partition, for example, that has been formatted with a filesystem type) at the directory dir, using all options. In this form, mount does not look in /etc/fstab for instructions. + +If only a directory or device is specified, for example. + + # mount /dir -o options + or + # mount device -o options + +mount tries to find a mount point and if it can’t find any, then searches for a device (both cases in the /etc/fstab file), and finally attempts to complete the mount operation (which usually succeeds, except for the case when either the directory or the device is already being used, or when the user invoking mount is not root). + +You will notice that every line in the output of mount has the following format. + + device on directory type (options) + +For example, + + /dev/mapper/debian-home on /home type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered) + +Reads: + +dev/mapper/debian-home is mounted on /home, which has been formatted as ext4, with the following options: rw,relatime,user_xattr,barrier=1,data=ordered + +**Mount Options** + +Most frequently used mount options include. + +- async: allows asynchronous I/O operations on the file system being mounted. +- auto: marks the file system as enabled to be mounted automatically using mount -a. It is the opposite of noauto. +- defaults: this option is an alias for async,auto,dev,exec,nouser,rw,suid. Note that multiple options must be separated by a comma without any spaces. If by accident you type a space between options, mount will interpret the subsequent text string as another argument. 
+- loop: Mounts an image (an .iso file, for example) as a loop device. This option can be used to simulate the presence of the disk’s contents in an optical media reader. +- noexec: prevents the execution of executable files on the particular filesystem. It is the opposite of exec. +- nouser: prevents any users (other than root) from mounting and unmounting the filesystem. It is the opposite of user. +- remount: mounts the filesystem again in case it is already mounted. +- ro: mounts the filesystem as read only. +- rw: mounts the file system with read and write capabilities. +- relatime: makes access time to files be updated only if atime is earlier than mtime. +- user_xattr: allows users to set and remove extended filesystem attributes. + +**Mounting a device with ro and noexec options** + + # mount -t ext4 /dev/sdg1 /mnt -o ro,noexec + +In this case we can see that attempts to write a file to our mount point, or to run a binary file located inside it, fail with the corresponding error messages. + + # touch /mnt/myfile + # /mnt/bin/echo “Hi there” + +![Mount Device in Read Write Mode](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device-Read-Write.png) + +Mount Device Read Write + +**Mounting a device with default options** + +In the following scenario, we will try to write a file to our newly mounted device and run an executable file located within its filesystem tree using the same commands as in the previous example. + + # mount -t ext4 /dev/sdg1 /mnt -o defaults + +![Mount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device.png) + +Mount Device + +In this last case, it works perfectly. + +### Unmounting Devices ### + +Unmounting a device (with the umount command) means finishing writing all the remaining “in transit” data so that it can be safely removed. Note that if you try to remove a mounted device without properly unmounting it first, you run the risk of damaging the device itself or causing data loss. 
+
+That being said, in order to unmount a device, you must be “standing outside” its block device descriptor or mount point. In other words, your current working directory must be something other than the mount point. Otherwise, you will get a message saying that the device is busy.
+
+![Unmount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Unmount-Device.png)
+
+Unmount Device
+
+An easy way to “leave” the mount point is typing the cd command, which, in the absence of arguments, will take us to our current user’s home directory, as shown above.
+
+### Mounting Common Networked Filesystems ###
+
+The two most frequently used network file systems are SMB (which stands for “Server Message Block”) and NFS (“Network File System”). Chances are you will use NFS if you need to set up a share for Unix-like clients only, and will opt for Samba if you need to share files with Windows-based clients and perhaps other Unix-like clients as well.
+
+Read Also
+
+- [Setup Samba Server in RHEL/CentOS and Fedora][1]
+- [Setting up NFS (Network File System) on RHEL/CentOS/Fedora and Debian/Ubuntu][2]
+
+The following steps assume that Samba and NFS shares have already been set up on the server with IP 192.168.0.10 (please note that setting up an NFS share is one of the competencies required for the LFCE exam, which we will cover after the present series).
+
+#### Mounting a Samba share on Linux ####
+
+Step 1: Install the samba-client, samba-common and cifs-utils packages on Red Hat and Debian based distributions.
+
+    # yum update && yum install samba-client samba-common cifs-utils
+    # aptitude update && aptitude install samba-client samba-common cifs-utils
+
+Then run the following command to look for available samba shares on the server.
+
+    # smbclient -L 192.168.0.10
+
+And enter the password for the root account on the remote machine.
+
+![Mount Samba Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Samba-Share.png)
+
+Mount Samba Share
+
+In the above image we have highlighted the share that is ready for mounting on our local system. You will need a valid samba username and password on the remote server in order to access it.
+
+Step 2: When mounting a password-protected network share, it is not a good idea to write your credentials in the /etc/fstab file. Instead, you can store them in a hidden file somewhere with permissions set to 600, like so.
+
+    # mkdir /media/samba
+    # echo "username=samba_username" > /media/samba/.smbcredentials
+    # echo "password=samba_password" >> /media/samba/.smbcredentials
+    # chmod 600 /media/samba/.smbcredentials
+
+Step 3: Then add the following line to the /etc/fstab file (note that this is an fstab entry, not a command, so it must not be preceded by a # sign, which would turn it into a comment).
+
+    //192.168.0.10/gacanepa /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0
+
+Step 4: You can now mount your samba share, either manually (mount //192.168.0.10/gacanepa) or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.
+
+![Mount Password Protect Samba Share](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Password-Protect-Samba-Share.png)
+
+Mount Password Protect Samba Share
+
+#### Mounting an NFS share on Linux ####
+
+Step 1: Install the NFS client packages (nfs-utils and nfs-utils-lib on Red Hat based distributions, nfs-common on Debian based ones).
+
+    # yum update && yum install nfs-utils nfs-utils-lib
+    # aptitude update && aptitude install nfs-common
+
+Step 2: Create a mount point for the NFS share.
+
+    # mkdir /media/nfs
+
+Step 3: Add the following line to the /etc/fstab file.
+
+    192.168.0.10:/NFS-SHARE /media/nfs nfs defaults 0 0
+
+Step 4: You can now mount your nfs share, either manually (mount 192.168.0.10:/NFS-SHARE) or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.
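Whichever method you choose, you can verify from the command line that a share is attached by filtering the mount table by filesystem type. The sketch below uses proc as the type so that it runs on any Linux box; replace it with nfs (or cifs for the Samba case above) to check the network shares configured in this section:

```shell
#!/bin/sh
# Print mounted filesystems of a given type, read from /proc/self/mounts.
# "proc" is used as the type so the example works everywhere; use "nfs"
# or "cifs" to verify the network shares discussed in this section.
fstype="proc"
awk -v t="$fstype" '$3 == t { print $1 " on " $2 " type " $3 }' /proc/self/mounts
```

An empty output means that no filesystem of the given type is currently mounted.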
+
+![Mount NFS Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-NFS-Share.png)
+
+Mount NFS Share
+
+### Mounting Filesystems Permanently ###
+
+As shown in the previous two examples, the /etc/fstab file controls how Linux provides access to disk partitions and removable media devices and consists of a series of lines that contain six fields each; the fields are separated by one or more spaces or tabs. A line that begins with a hash mark (#) is a comment and is ignored.
+
+Each line has the following format.
+
+    <file system> <mount point> <type> <options> <dump> <pass>
+
+Where:
+
+- <file system>: The first column specifies the mount device. Most distributions now specify partitions by their labels or UUIDs. This practice can help reduce problems if partition numbers change.
+- <mount point>: The second column specifies the mount point.
+- <type>: The file system type code is the same as the type code used to mount a filesystem with the mount command. A file system type code of auto lets the kernel auto-detect the filesystem type, which can be a convenient option for removable media devices. Note that this option may not be available for all filesystems out there.
+- <options>: One (or more) mount option(s).
+- <dump>: You will most likely leave this at 0 (otherwise set it to 1) to disable the dump utility from backing up the filesystem upon boot (the dump program was once a common backup tool, but it is much less popular today).
+- <pass>: This column specifies whether the integrity of the filesystem should be checked at boot time with fsck. A 0 means that fsck should not check the filesystem. The higher the number, the lower the priority. Thus, the root partition will most likely have a value of 1, while all others that should be checked should have a value of 2.
+
+**Mount Examples**
+
+1. To mount a partition with label TECMINT at boot time with rw and noexec attributes, you should add the following line in the /etc/fstab file.
+
+    LABEL=TECMINT /mnt ext4 rw,noexec 0 0
+
+2. If you want the contents of a disk in your DVD drive to be available at boot time.
+ + /dev/sr0 /media/cdrom0 iso9660 ro,user,noauto 0 0 + +Where /dev/sr0 is your DVD drive. + +### Summary ### + +You can rest assured that mounting and unmounting local and network filesystems from the command line will be part of your day-to-day responsibilities as sysadmin. You will also need to master /etc/fstab. I hope that you have found this article useful to help you with those tasks. Feel free to add your comments (or ask questions) below and to share this article through your network social profiles. +Reference Links + +- [About the LFCS][3] +- [Why get a Linux Foundation Certification?][4] +- [Register for the LFCS exam][5] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/mount-filesystem-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/setup-samba-server-using-tdbsam-backend-on-rhel-centos-6-3-5-8-and-fedora-17-12/ +[2]:http://www.tecmint.com/how-to-setup-nfs-server-in-linux/ +[3]:https://training.linuxfoundation.org/certification/LFCS +[4]:https://training.linuxfoundation.org/certification/why-certify-with-us +[5]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file From 2bd491e4887a8ccf9e80670d9ef772cf3dc54ea8 Mon Sep 17 00:00:00 2001 From: KS Date: Mon, 17 Aug 2015 16:25:20 +0800 Subject: [PATCH 211/697] Update 20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md --- ...ll Strongswan - A Tool to Setup IPsec Based VPN in Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md b/sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md index ca909934fa..cd9ee43213 100644 
--- a/sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md +++ b/sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md @@ -1,3 +1,4 @@ +wyangsun translating Install Strongswan - A Tool to Setup IPsec Based VPN in Linux ================================================================================ IPsec is a standard which provides the security at network layer. It consist of authentication header (AH) and encapsulating security payload (ESP) components. AH provides the packet Integrity and confidentiality is provided by ESP component . IPsec ensures the following security features at network layer. @@ -110,4 +111,4 @@ via: http://linoxide.com/security/install-strongswan/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/naveeda/ -[1]:https://www.strongswan.org/ \ No newline at end of file +[1]:https://www.strongswan.org/ From ca3975194fd28a2a417cd36a198be1034033bea2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 17 Aug 2015 21:22:39 +0800 Subject: [PATCH 212/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译一点点 --- sources/tech/20150728 Process of the Linux kernel building.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index d3c71f0a43..e139143ccc 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -305,8 +305,12 @@ all: vmlinux Don't worry that we have missed many lines in Makefile that are placed after `export RCS_FIND_IGNORE.....` and before `all: vmlinux.....`. 
This part of the makefile is responsible for the `make *.config` targets and as I wrote in the beginning of this part we will see only building of the kernel in a general way. +不要操心我们略过的从`export RCS_FIND_IGNORE.....` 到`all: vmlinux.....` 这一部分makefile 代码,他们只是负责根据各种配置文件生成不同目标内核的,因为之前我就说了这一部分我们只讨论构建内核的通用途径。 + The `all:` target is the default when no target is given on the command line. You can see here that we include architecture specific makefile there (in our case it will be [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). From this moment we will continue from this makefile. As we can see `all` target depends on the `vmlinux` target that defined a little lower in the top makefile: +目标`all:` 是在命令行里不指定目标时默认生成的目标。你可以看到这里我们包含了架构相关的makefile(默认情况下会是[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile))。从这一时刻起,我们会从这个makefile 继续进行下去。如我们所见,目标`all` 依赖于根makefile 后面一点的生命`vmlinux`: + ```Makefile vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE ``` From 8cceff8c289b4e4c694af9de6bb755634cb10297 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 17 Aug 2015 22:44:42 +0800 Subject: [PATCH 213/697] PUB:20150816 shellinabox--A Web based AJAX Terminal Emulator @xiaoyu33 --- ...box--A Web based AJAX Terminal Emulator.md | 23 ++++++++++--------- 1 file changed, 12 insertions(+), 11 deletions(-) rename {translated/share => published}/20150816 shellinabox--A Web based AJAX Terminal Emulator.md (66%) diff --git a/translated/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md b/published/20150816 shellinabox--A Web based AJAX Terminal Emulator.md similarity index 66% rename from translated/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md rename to published/20150816 shellinabox--A Web based AJAX Terminal Emulator.md index 71acf990c1..c4d9523d50 100644 --- a/translated/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md +++ b/published/20150816 shellinabox--A Web based AJAX 
Terminal Emulator.md @@ -1,16 +1,17 @@ -shellinabox–基于Web的Ajax的终端模拟器安装及使用详解 +shellinabox:一款使用 AJAX 的基于 Web 的终端模拟器 ================================================================================ + ### shellinabox简介 ### -unixmen的读者朋友们,你们好! +通常情况下,我们在访问任何远程服务器时,会使用常见的通信工具如OpenSSH和Putty等。但是,有可能我们在防火墙后面不能使用这些工具访问远程系统,或者防火墙只允许HTTPS流量才能通过。不用担心!即使你在这样的防火墙后面,我们依然有办法来访问你的远程系统。而且,你不需要安装任何类似于OpenSSH或Putty的通讯工具。你只需要有一个支持JavaScript和CSS的现代浏览器,并且你不用安装任何插件或第三方应用软件。 -通常情况下,我们访问任何远程服务器时,使用常见的通信工具如OpenSSH和Putty等。但是如果我们在防火墙外,或者防火墙只允许HTTPS流量才能通过,那么我们就不能再使用这些工具来访问远程系统了。不用担心!即使你在防火墙后面,我们依然有办法来访问你的远程系统。而且,你不需要安装任何类似于OpenSSH或Putty的通讯工具。你只需要有一个支持JavaScript和CSS的现代浏览器。并且你不用安装任何插件或第三方应用软件。 +这个 **Shell In A Box**,发音是**shellinabox**,是由**Markus Gutschke**开发的一款自由开源的基于Web的Ajax的终端模拟器。它使用AJAX技术,通过Web浏览器提供了类似原生的 Shell 的外观和感受。 -Meet **Shell In A Box**,发音是**shellinabox**,是由**Markus Gutschke**开发的一款免费的,开源的,基于Web的Ajax的终端模拟器。它使用AJAX技术,通过Web浏览器提供的外观和感觉像一个原生壳。该**shellinaboxd**的守护进程实现了一个Web服务器,能够侦听指定的端口。Web服务器发布一个或多个服务,这些服务将在VT100模拟器实现为一个AJAX的Web应用程序显示。默认情况下,端口为4200。你可以更改默认端口到任意选择的任意端口号。在你的远程服务器安装shellinabox以后,如果你想从本地系统接入,打开Web浏览器并导航到:**http://IP-Address:4200/**。输入你的用户名和密码,然后就可以开始使用你远程系统的外壳。看起来很有趣,不是吗?确实! +这个**shellinaboxd**守护进程实现了一个Web服务器,能够侦听指定的端口。其Web服务器可以发布一个或多个服务,这些服务显示在用 AJAX Web 应用实现的VT100模拟器中。默认情况下,端口为4200。你可以更改默认端口到任意选择的任意端口号。在你的远程服务器安装shellinabox以后,如果你想从本地系统接入,打开Web浏览器并导航到:**http://IP-Address:4200/**。输入你的用户名和密码,然后就可以开始使用你远程系统的Shell。看起来很有趣,不是吗?确实 有趣! 
**免责声明**: -shellinabox不是SSH客户端或任何安全软件。它仅仅是一个应用程序,能够通过Web浏览器模拟一个远程系统的壳。同时,它和SSH没有任何关系。这不是防弹的安全的方式来远程控制您的系统。这只是迄今为止最简单的方法之一。无论什么原因,你都不应该在任何公共网络上运行它。 +shellinabox不是SSH客户端或任何安全软件。它仅仅是一个应用程序,能够通过Web浏览器模拟一个远程系统的Shell。同时,它和SSH没有任何关系。这不是可靠的安全地远程控制您的系统的方式。这只是迄今为止最简单的方法之一。无论如何,你都不应该在任何公共网络上运行它。 ### 安装shellinabox ### @@ -48,7 +49,7 @@ shellinabox在默认库是可用的。所以,你可以使用命令来安装它 # vi /etc/sysconfig/shellinaboxd -更改你的端口到任意数量。因为我在本地网络上测试它,所以我使用默认值。 +更改你的端口到任意数字。因为我在本地网络上测试它,所以我使用默认值。 # Shell in a box daemon configuration # For details see shellinaboxd man page @@ -98,7 +99,7 @@ shellinabox在默认库是可用的。所以,你可以使用命令来安装它 ### 使用 ### -现在,去你的客户端系统,打开Web浏览器并导航到:**https://ip-address-of-remote-servers:4200**。 +现在,在你的客户端系统,打开Web浏览器并导航到:**https://ip-address-of-remote-servers:4200**。 **注意**:如果你改变了端口,请填写修改后的端口。 @@ -110,13 +111,13 @@ shellinabox在默认库是可用的。所以,你可以使用命令来安装它 ![Shell In A Box - Google Chrome_003](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_003.jpg) -右键点击你浏览器的空白位置。你可以得到一些有很有用的额外的菜单选项。 +右键点击你浏览器的空白位置。你可以得到一些有很有用的额外菜单选项。 ![Shell In A Box - Google Chrome_004](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_004.jpg) 从现在开始,你可以通过本地系统的Web浏览器在你的远程服务器随意操作。 -当你完成时,记得点击**退出**。 +当你完成工作时,记得输入`exit`退出。 当再次连接到远程系统时,单击**连接**按钮,然后输入远程服务器的用户名和密码。 @@ -134,7 +135,7 @@ shellinabox在默认库是可用的。所以,你可以使用命令来安装它 ### 结论 ### -正如我之前提到的,如果你在服务器运行在防火墙后面,那么基于web的SSH工具是非常有用的。有许多基于web的SSH工具,但shellinabox是非常简单并且有用的工具,能从的网络上的任何地方,模拟一个远程系统的壳。因为它是基于浏览器的,所以你可以从任何设备访问您的远程服务器,只要你有一个支持JavaScript和CSS的浏览器。 +正如我之前提到的,如果你在服务器运行在防火墙后面,那么基于web的SSH工具是非常有用的。有许多基于web的SSH工具,但shellinabox是非常简单而有用的工具,可以从的网络上的任何地方,模拟一个远程系统的Shell。因为它是基于浏览器的,所以你可以从任何设备访问您的远程服务器,只要你有一个支持JavaScript和CSS的浏览器。 就这些啦。祝你今天有个好心情! 
@@ -148,7 +149,7 @@ via: http://www.unixmen.com/shellinabox-a-web-based-ajax-terminal-emulator/ 作者:[SK][a] 译者:[xiaoyu33](https://github.com/xiaoyu33) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 40cbb17f9ee6f21418829d1c6ebf83976f2ad59a Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 17 Aug 2015 23:53:26 +0800 Subject: [PATCH 214/697] PUB:20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @XLCYun 翻译的渐入佳境了! --- ...t & Wrong - Page 3 - GNOME Applications.md | 2 +- ...Right & Wrong - Page 4 - GNOME Settings.md | 52 ++++++++++++++++++ ...Right & Wrong - Page 4 - GNOME Settings.md | 54 ------------------- 3 files changed, 53 insertions(+), 55 deletions(-) create mode 100644 published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md delete mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md diff --git a/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md index 61600366c9..4dd942dd29 100644 --- a/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md +++ b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md @@ -7,7 +7,7 @@ 这是一个基本扯平的方面。每一个桌面环境都有一些非常好的应用,也有一些不怎么样的。再次强调,Gnome 把那些 KDE 完全错失的小细节给做对了。我不是想说 KDE 中有哪些应用不好。他们都能工作,但仅此而已。也就是说:它们合格了,但确实还没有达到甚至接近100分。 -Gnome 是一个样子,KDE 是另外一种。Dragon 播放器运行得很好,清晰的标出了播放文件、URL或和光盘的按钮,正如你在 Gnome Videos 中能做到的一样……但是在便利的文件名和用户的友好度方面,Gnome 
多走了一小步。它默认显示了在你的电脑上检测到的所有影像文件,不需要你做任何事情。KDE 有 [Baloo][](正如之前的 [Nepomuk][2],LCTT 译注:这是 KDE 中一种文件索引服务框架)为什么不使用它们?它们能列出可读取的影像文件……但却没被使用。 +Gnome 在左,KDE 在右。Dragon 播放器运行得很好,清晰的标出了播放文件、URL或和光盘的按钮,正如你在 Gnome Videos 中能做到的一样……但是在便利的文件名和用户的友好度方面,Gnome 多走了一小步。它默认显示了在你的电脑上检测到的所有影像文件,不需要你做任何事情。KDE 有 [Baloo][](正如之前的 [Nepomuk][2],LCTT 译注:这是 KDE 中一种文件索引服务框架)为什么不使用它们?它们能列出可读取的影像文件……但却没被使用。 下一步……音乐播放器 diff --git a/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md new file mode 100644 index 0000000000..289c1cb14e --- /dev/null +++ b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md @@ -0,0 +1,52 @@ +一周 GNOME 之旅:品味它和 KDE 的是是非非(第四节 GNOME设置) +================================================================================ + +### 设置 ### + +在这我要挑一挑几个特定 KDE 控制模块的毛病,大部分原因是因为相比它们的对手GNOME来说,糟糕得太可笑,实话说,真是悲哀。 + +第一个接招的?打印机。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920) + +GNOME 在左,KDE 在右。你知道左边跟右边的打印程序有什么区别吗?当我在 GNOME 控制中心打开“打印机”时,程序窗口弹出来了,然后这样就可以使用了。而当我在 KDE 系统设置打开“打印机”时,我得到了一条密码提示。甚至我都没能看一眼打印机呢,我就必须先交出 ROOT 密码。 + +让我再重复一遍。在今天这个有了 PolicyKit 和 Logind 的日子里,对一个应该是 sudo 的操作,我依然被询问要求 ROOT 的密码。我安装系统的时候甚至都没设置 root 密码。所以我必须跑到 Konsole 去,接着运行 'sudo passwd root' 命令,这样我才能给 root 设一个密码,然后我才能回到系统设置中的打印程序,再交出 root 密码,然后仅仅是看一看哪些打印机可用。完成了这些工作后,当我点击“添加打印机”时,我再次得到请求 ROOT 密码的提示,当我解决了它后再选择一个打印机和驱动时,我再次得到请求 ROOT 密码的提示。仅仅是为了添加一个打印机到系统我就收到三次密码请求! 
+ +而在 GNOME 下添加打印机,在点击打印机程序中的“解锁”之前,我没有得到任何请求 SUDO 密码的提示。整个过程我只被请求过一次,仅此而已。KDE,求你了……采用 GNOME 的“解锁”模式吧。不到一定需要的时候不要发出提示。还有,不管是哪个库,只要它允许 KDE 应用程序绕过 PolicyKit/Logind(如果有的话)并直接请求 ROOT 权限……那就把它封进箱里吧。如果这是个多用户系统,那我要么必须交出 ROOT 密码,要么我必须时时刻刻待命,以免有一个用户需要升级、更改或添加一个新的打印机。而这两种情况都是完全无法接受的。 + +有还一件事…… + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920) + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920) + +这个问题问大家:怎么样看起来更简洁?我在写这篇文章时意识到:当有任何附加的打印机准备好时,Gnome 打印机程序会把过程做得非常简洁,它们在左边上放了一个竖直栏来列出这些打印机。而我在 KDE 中添加第二台打印机时,它突然增加出一个左边栏来。而在添加之前,我脑海中已经有了一个恐怖的画面,它会像图片文件夹显示预览图一样直接在界面里插入另外一个图标。我很高兴也很惊讶的看到我是错的。但是事实是它直接“长出”另外一个从未存在的竖直栏,彻底改变了它的界面布局,而这样也称不上“好”。终究还是一种令人困惑,奇怪而又不直观的设计。 + +打印机说得够多了……下一个接受我公开石刑的 KDE 系统设置是?多媒体,即 Phonon。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920) + +一如既往,GNOME 在左边,KDE 在右边。让我们先看看 GNOME 的系统设置先……眼睛移动是从左到右,从上到下,对吧?来吧,就这样做。首先:音量控制滑条。滑条中的蓝色条与空条百分百清晰地消除了哪边是“音量增加”的困惑。在音量控制条后马上就是一个 On/Off 开关,用来开关静音功能。Gnome 的再次得分在于静音后能记住当前设置的音量,而在点击音量增加按钮取消静音后能回到原来设置的音量中来。Kmixer,你个健忘的垃圾,我真的希望我能多讨论你一下。 + +继续!输入输出和应用程序的标签选项?每一个应用程序的音量随时可控?Gnome,每过一秒,我爱你越深。音量均衡选项、声音配置、和清晰地标上标志的“测试麦克风”选项。 + +我不清楚它能否以一种更干净更简洁的设计实现。是的,它只是一个 Gnome 化的 Pavucontrol,但我想这就是重要的地方。Pavucontrol 在这方面几乎完全做对了,Gnome 控制中心中的“声音”应用程序的改善使它向完美更进了一步。 + +Phonon,该你上了。但开始前我想说:我 TM 看到的是什么?!我知道我看到的是音频设备的优先级列表,但是它呈现的方式有点太坑。还有,那些用户可能关心的那些东西哪去了?拥有一个优先级列表当然很好,它也应该存在,但问题是优先级列表属于那种用户乱搞一两次之后就不会再碰的东西。它还不够重要,或者说不够常用到可以直接放在正中间位置的程度。音量控制滑块呢?对每个应用程序的音量控制功能呢?那些用户使用最频繁的东西呢?好吧,它们在 Kmix 中,一个分离的程序,拥有它自己的配置选项……而不是在系统设置下……这样真的让“系统设置”这个词变得有点用词不当。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920) + +上面展示的 Gnome 的网络设置。KDE 的没有展示,原因就是我接下来要吐槽的内容了。如果你进入 KDE 的系统设置里,然后点击“网络”区域中三个选项中的任何一个,你会得到一大堆的选项:蓝牙设置、Samba 分享的默认用户名和密码(说真的,“连通性(Connectivity)”下面只有两个选项:SMB 的用户名和密码。TMD 怎么就配得上“连通性”这么大的词?),浏览器身份验证控制(只有 Konqueror 能用……一个已经倒闭的项目),代理设置,等等……我的 wifi 
设置哪去了?它们没在这。哪去了?好吧,它们在网络应用程序的设置里面……而不是在网络设置里…… + +KDE,你这是要杀了我啊,你有“系统设置”当凶器,拿着它动手吧! + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md deleted file mode 100644 index 1c0cc4bd86..0000000000 --- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md +++ /dev/null @@ -1,54 +0,0 @@ -将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第四节 - GNOME设置 -================================================================================ -### Settings设置 ### - -在这我要挑一挑几个特定KDE控制模块的毛病,大部分原因是因为相比它们的对手GNOME来说,糟糕得太可笑,实话说,真是悲哀。 - -第一个接招的?打印机。 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920) - -GNOME在左,KDE在右。你知道左边跟右边的打印程序有什么区别吗?当我在GNOME控制中心打开“打印机”时,程序窗口弹出来了,之后没有也没发生。而当我在KDE系统设置打开“打印机”时,我收到了一条密码提示。甚至我都没能看一眼打印机呢,我就必须先交出ROOT密码。 - -让我再重复一遍。在今天,PolicyKit和Logind的日子里,对一个应该是sudo的操作,我依然被询问要求ROOT的密码。我安装系统的时候甚至都没设置root密码。所以我必须跑到Konsole去,然后运行'sudo passwd root'命令,这样我才能给root设一个密码,这样我才能回到系统设置中的打印程序,然后交出root密码,然后仅仅是看一看哪些打印机可用。完成了这些工作后,当我点击“添加打印机”时,我再次收到请求ROOT密码的提示,当我解决了它后再选择一个打印机和驱动时,我再次收到请求ROOT密码的提示。仅仅是为了添加一个打印机到系统我就收到三次密码请求。 - -而在GNOME下添加打印机,在点击打印机程序中的”解锁“之前,我没有收到任何请求SUDO密码的提示。整个过程我只被请求过一次,仅此而已。KDE,求你了……采用GNOME的”解锁“模式吧。不到一定需要的时候不要发出提示。还有,不管是哪个库,只要它允许KDE应用程序绕过PolicyKit/Logind(如果有的话)并直接请求ROOT权限……那就把它封进箱里吧。如果这是个多用户系统,那我要么必须交出ROOT密码,要么我必须时时刻刻呆着以免有一个用户需要升级、更改或添加一个新的打印机。而这两种情况都是完全无法接受的。 - -有还一件事…… - 
-![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920) - -给论坛的问题:怎么样看起来更简洁?我在写这篇文章时意识到:当有任何的附加打印机准备好时,Gnome打印机程序会把过程做得非常简洁,它们在左边上放了一个竖直栏来列出这些打印机。而我在KDE添加第二台打印机时,它突然增加出一个左边栏来。而在添加之前,我脑海中已经有了一个恐怖的画面它会像图片文件夹显示预览图一样,直接插入另外一个图标到界面里去。我很高兴也很惊讶的看到我是错的。但是事实是它直接”长出”另外一个从末存在的竖直栏,彻底改变了它的界面布局,而这样也称不上“好”。终究还是一种令人困惑,奇怪而又不直观的设计。 - -打印机说得够多了……下一个接受我公开石刑的KDE系统设置是?多媒体,即Phonon。 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920) - -一如既往,GNOME在左边,KDE在右边。让我们先看看GNOME的系统设置先……眼睛从左到右,从上到下,对吧?来吧,就这样做。首先:音量控制滑条。滑条中的蓝色条与空白条百分百清晰地消除了哪边是“音量增加”的困惑。在音量控制条后马上就是一个On/Off开关,用来开关静音功能。Gnome的再次得分在于静音后能记住当前设置的音量,而在点击音量增加按钮取消静音后能回到原来设置的音量中来。Kmixer,你个健忘的垃圾,我真的希望我能多讨论你。 - - -继续!输入输出和应用程序的标签选项?每一个应用程序的音量随时可控?Gnome,每过一秒,我爱你越深。均衡的选项设置,声音配置,和清晰地标上标志的“测试麦克风”选项。 - - - -我不清楚它能否以一种更干净更简洁的设计实现。是的,它只是一个Gnome化的Pavucontrol,但我想这就是重要的地方。Pavucontrol在这方面几乎完全做对了,Gnome控制中心中的“声音”应用程序的改善使它向完美更进了一步。 - -Phonon,该你上了。但开始前我想说:我TM看到的是什么?我知道我看到的是音频设备的权限列表,但是它呈现的方式有点太坑。还有,那些用户可能关心的那些东西哪去了?拥有一个权限列表当然很好,它也应该存在,但问题是权限列表属于那种用户乱搞一两次之后就不会再碰的东西。它还不够重要,或者说常用到可以直接放在正中间位置的程度。音量控制滑块呢?对每个应用程序的音量控制功能呢?那些用户使用最频繁的东西呢?好吧,它们在Kmix中,一个分离的程序,拥有它自己的配置选项……而不是在系统设置下……这样真的让“系统设置”这个词变得有点用词不当。 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920) - -上面展示的Gnome的网络设置。KDE的没有展示,原因就是我接下来要吐槽的内容了。如果你进入KDE的系统设置里,然后点击“网络”区域中三个选项中的任何一个,你会得到一大堆的选项:蓝牙设置,Samba分享的默认用户名和密码(说真的,“连通性(Connectivity)”下面只有两个选项:SMB的用户名和密码。TMD怎么就配得上“连通性”这么大的词?),浏览器身份验证控制(只有Konqueror能用……一个已经倒闭的项目),代理设置,等等……我的wifi设置哪去了?它们没在这。哪去了?好吧,它们在网络应用程序的设置里面……而不是在网络设置里…… - -KDE,你这是要杀了我啊,你有“系统设置”当凶器,拿着它动手吧! 
- --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4 - -作者:Eric Griffith -译者:[XLCYun](https://github.com/XLCYun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From d675e982b31faa8e48135ab58931502ba44c0ce4 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 18 Aug 2015 00:55:00 +0800 Subject: [PATCH 215/697] Delete 20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md --- ...number of threads in a process on Linux.md | 51 ------------------- 1 file changed, 51 deletions(-) delete mode 100644 sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md diff --git a/sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md b/sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md deleted file mode 100644 index 35ee2f00de..0000000000 --- a/sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md +++ /dev/null @@ -1,51 +0,0 @@ -translation by strugglingyouth -Linux FAQs with Answers--How to count the number of threads in a process on Linux -================================================================================ -> **Question**: I have an application running, which forks a number of threads at run-time. I want to know how many threads are actively running in the program. What is the easiest way to check the thread count of a process on Linux? - -If you want to see the number of threads per process in Linux environments, there are several ways to do it. - -### Method One: /proc ### - -The proc pseudo filesystem, which resides in /proc directory, is the easiest way to see the thread count of any active process. 
The /proc directory exports, in the form of readable text files, a wealth of information related to existing processes and system hardware such as CPU, interrupts, memory, disk, etc.
-
-    $ cat /proc/<pid>/status
-
-The above command will show detailed information about the process with <pid>, which includes process state (e.g., sleeping, running), parent PID, UID, GID, the number of file descriptors used, and the number of context switches. The output also indicates **the total number of threads created in a process** as follows.
-
-    Threads: <N>
-
-For example, to check the thread count of a process with PID 20571:
-
-    $ cat /proc/20571/status
-
-![](https://farm6.staticflickr.com/5649/20341236279_f4a4d809d2_b.jpg)
-
-The output indicates that the process has 28 threads in it.
-
-Alternatively, you could simply count the number of directories found in /proc/<pid>/task, as shown below.
-
-    $ ls /proc/<pid>/task | wc
-
-This is because, for every thread created within a process, there is a corresponding directory created in /proc/<pid>/task, named with its thread ID. Thus the total number of directories in /proc/<pid>/task represents the number of threads in the process.
-
-### Method Two: ps ###
-
-If you are an avid user of the versatile ps command, this command can also show you individual threads of a process (with the "H" option). The following command will print the thread count of a process. The "h" option is needed to hide the header line from the output.
-
-    $ ps hH p <pid> | wc -l
-
-If you want to monitor the hardware resources (CPU & memory) consumed by different threads of a process, refer to [this tutorial][1].(注:此文我们翻译过)
-
---------------------------------------------------------------------------------
-
-via: http://ask.xmodulo.com/number-of-threads-process-linux.html
-
-作者:[Dan Nanni][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://ask.xmodulo.com/author/nanni
-[1]:http://ask.xmodulo.com/view-threads-process-linux.html
From 56225b0911420c15454682f279958d0ccde22755 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Tue, 18 Aug 2015 00:55:36 +0800
Subject: [PATCH 216/697] Create 20150817 Linux FAQs with Answers--How to count
 the number of threads in a process on Linux.md

---
 ...number of threads in a process on Linux.md | 51 +++++++++++++++++
 1 file changed, 51 insertions(+)
 create mode 100644 translated/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md

diff --git a/translated/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md b/translated/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md
new file mode 100644
index 0000000000..96bf143533
--- /dev/null
+++ b/translated/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md
@@ -0,0 +1,51 @@
+
+Linux 有问必答 - 如何在 Linux 中统计一个进程的线程数
+================================================================================
+> **问题**: 我正在运行一个程序,它在运行时会派生出多个线程。我想知道程序在运行时会有多少线程。在 Linux 中检查进程的线程数最简单的方法是什么?
+
+如果你想看到 Linux 中每个进程的线程数,有以下几种方法可以做到这一点。
+
+### 方法一: /proc ###
+
+驻留在 /proc 目录下的 proc 伪文件系统,是查看任何活动进程线程数的最简单方法。/proc 目录以可读文本文件的形式,提供现有进程和系统硬件相关的信息,如 CPU、中断、内存、磁盘等。
+
+    $ cat /proc/<pid>/status
+
+上面的命令将显示进程 <pid> 的详细信息,包括进程状态(例如 sleeping、running)、父进程 PID、UID、GID、使用的文件描述符的数量,以及上下文切换的数量。输出也包括**进程创建的总线程数**,如下所示。
+
+    Threads: <N>
+
+例如,检查 PID 为 20571 的进程的线程数:
+
+    $ cat /proc/20571/status
+
+![](https://farm6.staticflickr.com/5649/20341236279_f4a4d809d2_b.jpg)
+
+输出表明该进程有28个线程。
+
+或者,你可以简单地统计 /proc/<pid>/task 下目录的数量,如下所示。
+
+    $ ls /proc/<pid>/task | wc
+
+这是因为,对于进程中创建的每个线程,都会在 /proc/<pid>/task 中创建一个以其线程 ID 命名的相应目录。因此 /proc/<pid>/task 中目录的总数就表示进程中的线程数。
+
+### 方法二: ps ###
+
+如果你是功能强大的 ps 命令的忠实用户,这个命令也可以显示一个进程的各个线程(使用“H”选项)。下面的命令将输出进程的线程数,其中“h”选项用于隐藏输出中的标题行。
+
+    $ ps hH p <pid> | wc -l
+
+如果你想监视一个进程的不同线程消耗的硬件资源(CPU & memory),请参阅[此教程][1]。(注:此文我们翻译过)
+
+--------------------------------------------------------------------------------
+
+via: http://ask.xmodulo.com/number-of-threads-process-linux.html
+
+作者:[Dan Nanni][a]
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://ask.xmodulo.com/author/nanni
+[1]:http://ask.xmodulo.com/view-threads-process-linux.html
From 579b625083dad361059c070506a5218a8badd6c0 Mon Sep 17 00:00:00 2001
From: Vic___
Date: Tue, 18 Aug 2015 01:19:45 +0800
Subject: [PATCH 217/697] translated

---
 ...edule a Job and Watch Commands in Linux.md | 104 +++++++++---------
 1 file changed, 53 insertions(+), 51 deletions(-)

diff --git a/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md b/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md
index 28981add17..54d3996e0e 100644
--- a/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md
+++ b/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md
@@ -1,102 +1,103 @@
- Vic020
-
-Linux Tricks: Play Game in Chrome, 
Text-to-Speech, Schedule a Job and Watch Commands in Linux +Linux小技巧:Chrome小游戏,文字说话,计划作业,重复执行命令 ================================================================================ -Here again, I have compiled a list of four things under [Linux Tips and Tricks][1] series you may do to remain more productive and entertained with Linux Environment. -![Linux Tips and Tricks Series](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-Tips-and-Tricks.png) +重要的事情说两遍,我完成了一个[Linux提示与彩蛋][1]系列,让你的Linux获得更多创造和娱乐。 -Linux Tips and Tricks Series +![Linux提示与彩蛋系列](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-Tips-and-Tricks.png) -The topics I have covered includes Google-chrome inbuilt small game, Text-to-speech in Linux Terminal, Quick job scheduling using ‘at‘ command and watch a command at regular interval. +Linux提示与彩蛋系列 -### 1. Play A Game in Google Chrome Browser ### +本文,我将会讲解Google-chrome内建小游戏,在终端中如何让文字说话,使用‘at’命令设置作业和使用watch命令重复执行命令。 -Very often when there is a power shedding or no network due to some other reason, I don’t put my Linux box into maintenance mode. I keep myself engage in a little fun game by Google Chrome. I am not a gamer and hence I have not installed third-party creepy games. Security is another concern. +### 1. Google Chrome 浏览器小游戏彩蛋 ### -So when there is Internet related issue and my web page seems something like this: +网线脱掉或者其他什么原因连不上网时,Google Chrome就会出现一个小游戏。声明,我并不是游戏玩家,因此我的电脑上并没有安装任何第三方的恶意游戏。安全是第一位。 -![Unable to Connect Internet](http://www.tecmint.com/wp-content/uploads/2015/08/Unable-to-Connect-Internet.png) +所以当Internet发生出错,会出现一个这样的界面: -Unable to Connect Internet +![不能连接到互联网](http://www.tecmint.com/wp-content/uploads/2015/08/Unable-to-Connect-Internet.png) -You may play the Google-chrome inbuilt game simply by hitting the space-bar. There is no limitation for the number of times you can play. The best thing is you need not break a sweat installing and using it. +不能连接到互联网 -No third-party application/plugin required. 
It should work well on other platforms like Windows and Mac but our niche is Linux and I’ll talk about Linux only and mind it, it works well on Linux. It is a very simple game (a kind of time pass).
+按下空格键即可开始Google-chrome的彩蛋游戏。游戏次数没有限制,而且完全不需要费力安装。

-Use Space-Bar/Navigation-up-key to jump. A glimpse of the game in action.
+不需要任何第三方应用或插件。这个彩蛋同样存在于Windows和Mac平台,但我的平台是Linux,所以只谈Linux。当然,它在Linux上运行得很好。游戏很简单,正好用来打发时间。

-![Play Game in Google Chrome](http://www.tecmint.com/wp-content/uploads/2015/08/Play-Game-in-Google-Chrome.gif)
+使用空格/向上方向键来跳跃。请看下列截图:

-Play Game in Google Chrome
+![Google Chrome中玩游戏](http://www.tecmint.com/wp-content/uploads/2015/08/Play-Game-in-Google-Chrome.gif)

-### 2. Text to Speech in Linux Terminal ###
+Google Chrome中玩游戏

-For those who may not be aware of espeak utility, It is a Linux command-line text to speech converter. Write anything in a variety of languages and espeak utility will read it loud for you.
+### 2. Linux 终端中朗读文字 ###

-Espeak should be installed in your system by default, however it is not installed for your system, you may do:
+如果你还不了解 espeak,它是一个 Linux 命令行下的文字转语音工具。
+espeak支持多种语言,可以即时朗读输入的文字。

     # apt-get install espeak (Debian)
     # yum install espeak (CentOS)
     # dnf install espeak (Fedora 22 onwards)

 You may ask espeak to accept Input Interactively from standard Input device and convert it to speech for you. You may do:
+系统应该默认安装了Espeak,如果你的系统没有安装,你可以使用下列命令来安装:
+
+你可以让 espeak 交互地接收标准输入的内容,并即时转换成语音朗读出来。这样设置:

-    $ espeak [Hit Return Key]
+    $ espeak [按回车键]

-For detailed output you may do:
+要获得更详细的输出,你可以这样做:

-    $ espeak --stdout | aplay [Hit Return Key][Double - Here]
+    $ espeak --stdout | aplay [按回车键][这里需按两次回车]

-espeak is flexible and you can ask espeak to accept input from a text file and speak it loud for you. All you need to do is:
+espeak设置灵活,也可以朗读文本文件。你可以这样设置:

     $ espeak --stdout /path/to/text/file/file_name.txt | aplay [Hit Enter]

-You may ask espeak to speak fast/slow for you. The default speed is 160 words per minute.
Define your preference using switch ‘-s’.
+espeak可以设置朗读速度。默认速度是160词每分钟。使用-s参数来设置。

-To ask espeak to speak 30 words per minute, you may do:
+设置30词每分钟:

    $ espeak -s 30 -f /path/to/text/file/file_name.txt | aplay

-To ask espeak to speak 200 words per minute, you may do:
+设置200词每分钟:

    $ espeak -s 200 -f /path/to/text/file/file_name.txt | aplay

-To use another language say Hindi (my mother tongue), you may do:
+要使用其他语言,比如北印度语(作者的母语),可以这样设置:

    $ espeak -v hindi --stdout 'टेकमिंट विश्व की एक बेहतरीन लाइंक्स आधारित वेबसाइट है|' | aplay

-You may choose any language of your preference and ask to speak in your preferred language as suggested above. To get the list of all the languages supported by espeak, you need to run:
+espeak支持多种语言,你可以按上面的方式选用自己需要的语言。使用下列命令来获得语言表:

    $ espeak --voices

-### 3. Quick Schedule a Job ###
+### 3. 快速计划作业 ###

-Most of us are already familiar with [cron][2] which is a daemon to execute scheduled commands.
+我们已经非常熟悉[cron][2]了,它是一个用于执行计划命令的守护进程。

-Cron is an advanced command often used by Linux SYSAdmins to schedule a job such as Backup or practically anything at certain time/interval.
+Cron是Linux系统管理员常用的高级命令,用于在指定时间或按一定间隔执行备份之类的各种计划任务。

-Are you aware of ‘at’ command in Linux which lets you schedule a job/command to run at specific time? You can tell ‘at’ what to do and when to do and everything else will be taken care by command ‘at’.
+但是,你是否知道 at 命令可以让作业或命令在指定时间运行?只要告诉 at 要做什么、什么时候做,剩下的事情都会由 at 处理好。

-For an example, say you want to print the output of uptime command at 11:02 AM, All you need to do is:
+例如,你打算在早上11点2分执行uptime命令,你只需要这样做:

    $ at 11:02
    uptime >> /home/$USER/uptime.txt
    Ctrl+D

-![Schedule Job in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Schedule-Job-in-Linux.png)
+![Linux中计划作业](http://www.tecmint.com/wp-content/uploads/2015/08/Schedule-Job-in-Linux.png)

-Schedule Job in Linux
+Linux中计划作业

-To check if the command/script/job has been set or not by ‘at’ command, you may do:
+检查at命令是否成功设置,使用:

    $ at -l

-![View Scheduled Jobs](http://www.tecmint.com/wp-content/uploads/2015/08/View-Scheduled-Jobs.png)
+![浏览计划作业](http://www.tecmint.com/wp-content/uploads/2015/08/View-Scheduled-Jobs.png)

-View Scheduled Jobs
+浏览计划作业

-You may schedule more than one command in one go using at, simply as:
+at支持计划多个命令,例如:

    $ at 12:30
    Command – 1
@@ -106,36 +107,37 @@ You may schedule more than one command in one go using at, simply as:
    …
    Ctrl + D

-### 4. Watch a Command at Specific Interval ###
+### 4. 特定时间重复执行命令 ###

-We need to run some command for specified amount of time at regular interval. Just for example say we need to print the current time and watch the output every 3 seconds.
+有时,我们可能需要每隔一段时间重复执行某个命令。例如,每3秒打印一次当前时间。

-To see current time we need to run the below command in terminal.
+查看当前时间,使用下列命令。

    $ date +"%H:%M:%S"

-![Check Date and Time in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Date-in-Linux.png)
+![Linux中查看日期和时间](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Date-in-Linux.png)

-Check Date and Time in Linux
+Linux中查看日期和时间

-and to check the output of this command every three seconds, we need to run the below command in Terminal.
+为了查看这个命令每三秒的输出,我需要运行下列命令:

    $ watch -n 3 'date +"%H:%M:%S"'

-![Watch Command in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Watch-Command-in-Linux.gif)
+![Linux中watch命令](http://www.tecmint.com/wp-content/uploads/2015/08/Watch-Command-in-Linux.gif)

-Watch Command in Linux
+Linux中watch命令

-The switch ‘-n’ in watch command is for Interval. In the above example we defined Interval to be 3 sec. You may define yours as required. Also you may pass any command/script with watch command to watch that command/script at the defined interval.
+watch命令的‘-n’开关设定时间间隔。在上述命令中,我们定义了时间间隔为3秒。你可以按你的需求定义。同样,watch 也支持其他命令或者脚本。

-That’s all for now. Hope you are like this series that aims at making you more productive with Linux and that too with fun inside. All the suggestions are welcome in the comments below. Stay tuned for more such posts. Keep connected and Enjoy…
+本文到此为止。希望你喜欢这个系列的文章,它们能让你的Linux使用更有创造性,也更有乐趣。欢迎在评论中提出建议,也欢迎你看看其他文章,谢谢。

--------------------------------------------------------------------------------

via: http://www.tecmint.com/text-to-speech-in-terminal-schedule-a-job-and-watch-commands-in-linux/

作者:[Avishek Kumar][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[VicYu/Vic020](http://vicyu.net)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

From 5a1be0ab80258ceb49b2240bd72a83f89d8438b4 Mon Sep 17 00:00:00 2001
From: Vic___
Date: Tue, 18 Aug 2015 01:20:29 +0800
Subject: [PATCH 218/697] moved

---
 ...e Text-to-Speech Schedule a Job and Watch Commands in Linux.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {sources => translated}/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md (100%)

diff --git a/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in
Linux.md similarity index 100% rename from sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md rename to translated/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md From 46cf1d082cb263bb04cf0919c2d3bd1800e54199 Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 18 Aug 2015 10:14:10 +0800 Subject: [PATCH 219/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 编译到400行 --- ...28 Process of the Linux kernel building.md | 21 +++++++++++++++++-- 1 file changed, 19 insertions(+), 2 deletions(-) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index e139143ccc..f4510d81b2 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -309,7 +309,7 @@ Don't worry that we have missed many lines in Makefile that are placed after `ex The `all:` target is the default when no target is given on the command line. You can see here that we include architecture specific makefile there (in our case it will be [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). From this moment we will continue from this makefile. 
As we can see `all` target depends on the `vmlinux` target that defined a little lower in the top makefile:

-目标`all:` 是在命令行里不指定目标时默认生成的目标。你可以看到这里我们包含了架构相关的makefile(默认情况下会是[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile))。从这一时刻起,我们会从这个makefile 继续进行下去。如我们所见,目标`all` 依赖于根makefile 后面一点的生命`vmlinux`:
+目标`all:` 是在命令行里不指定目标时默认生成的目标。你可以看到这里我们包含了架构相关的makefile(默认情况下会是[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile))。从这一时刻起,我们会从这个makefile 继续进行下去。如我们所见,目标`all` 依赖于根makefile 后面声明的`vmlinux`:

```Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
```

The `vmlinux` is the Linux kernel in a statically linked executable file format. The [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script combines the different compiled subsystems into vmlinux. The second target is the `vmlinux-deps` that is defined as:

+`vmlinux` 是Linux 内核的静态链接可执行文件格式。脚本[scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 把不同的编译好的子模块链接到一起形成了vmlinux。第二个目标是`vmlinux-deps`,它的定义如下:
+
+
```Makefile
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN)
```

and consists of the set of the `built-in.o` from each top directory of the Linux kernel. Later, when we will go through all directories in the Linux kernel, the `Kbuild` will compile all the `$(obj-y)` files. It then calls `$(LD) -r` to merge these files into one `built-in.o` file. For this moment we have no `vmlinux-deps`, so the `vmlinux` target will not be executed now.
For me `vmlinux-deps` contains following files:

+它是由内核代码下的每个顶级目录的`built-in.o` 组成的。之后我们会遍历内核的所有目录,`kbuild` 会编译各个目录下所有的对应`$(obj-y)` 的源文件。接着调用`$(LD) -r` 把这些文件合并到一个`built-in.o` 文件里。此时我们还没有`vmlinux-deps`, 所以目标`vmlinux` 现在还不会被构建。对我而言`vmlinux-deps` 包含下面的文件
+
```
arch/x86/kernel/vmlinux.lds arch/x86/kernel/head_64.o arch/x86/kernel/head64.o arch/x86/kernel/head.o
net/built-in.o
```

The next target that can be executed is following:

+下一个可以被执行的目标如下:
+
```Makefile
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
$(vmlinux-dirs): prepare scripts
```

As we can see the `vmlinux-dirs` depends on the two targets: `prepare` and `scripts`. The first `prepare` defined in the top `Makefile` of the Linux kernel and executes three stages of preparations:

+就像我们看到的,`vmlinux-dirs` 依赖于两个目标:`prepare` 和`scripts`。其中`prepare` 定义在内核的根`makefile` 里,准备工作分成三个阶段:
+
```Makefile
prepare: prepare0
prepare0: archprepare FORCE
...
prepare1: prepare2 $(version_h) include/generated/utsrelease.h \
...
prepare2: prepare3 outputmakefile asm-generic
```

-The first `prepare0` expands to the `archprepare` that exapnds to the `archheaders` and `archscripts` that defined in the `x86_64` specific [Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look on it. The `x86_64` specific makefile starts from the definition of the variables that are related to the archicteture-specific configs ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs) and etc.). After this it defines flags for the compiling of the [16-bit](https://en.wikipedia.org/wiki/Real_mode) code, calculating of the `BITS` variable that can be `32` for `i386` or `64` for the `x86_64` flags for the assembly source code, flags for the linker and many many more (all definitions you can find in the [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)).
The first target is `archheaders` in the makefile generates syscall table:
+The first `prepare0` expands to the `archprepare` that expands to the `archheaders` and `archscripts` that defined in the `x86_64` specific [Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look on it. The `x86_64` specific makefile starts from the definition of the variables that are related to the archicteture-specific configs ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs) and etc.). After this it defines flags for the compiling of the [16-bit](https://en.wikipedia.org/wiki/Real_mode) code, calculating of the `BITS` variable that can be `32` for `i386` or `64` for the `x86_64`, flags for the assembly source code, flags for the linker and many many more (all definitions you can find in the [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). The first target is `archheaders` in the makefile generates syscall table:
+
+第一个`prepare0` 展开到`archprepare` ,后者又展开到`archheaders` 和`archscripts`,这两个目标定义在`x86_64` 相关的[Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)。让我们看看这个文件。`x86_64` 特定的makefile从变量定义开始,这些变量都是和特定架构的配置文件 ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs),等等)有关联。变量定义之后,这个makefile 定义了编译[16-bit](https://en.wikipedia.org/wiki/Real_mode)代码的编译选项,并根据变量`BITS` 的值(`i386` 对应`32`,`x86_64` 对应`64`)确定汇编代码、链接器等使用的参数,此外还有很多其它定义(全部的定义都可以在[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)找到)。这个makefile 里的第一个目标是`archheaders`,它用于生成系统调用列表(syscall table):

```Makefile
archheaders:
```

And the second target in this makefile is `archscripts`:

+这个makefile 里第二个目标就是`archscripts`:
+
```Makefile
archscripts: scripts_basic
	$(Q)$(MAKE) $(build)=arch/x86/tools relocs
```

We can see that it depends on the `scripts_basic` target from the top
[Makefile](https://github.com/torvalds/linux/blob/master/Makefile). At the first we can see the `scripts_basic` target that executes make for the [scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) makefile:

+ 我们可以看到`archscripts` 是依赖于根[Makefile](https://github.com/torvalds/linux/blob/master/Makefile)里的`scripts_basic` 。首先我们可以看出`scripts_basic` 是根据[scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) 的makefile 执行的:
+
```Makefile
scripts_basic:
	$(Q)$(MAKE) $(build)=scripts/basic
```

The `scripts/basic/Makefile` contains targets for compilation of the two host programs: `fixdep` and `bin2c`:

+`scripts/basic/Makefile`包含了编译两个主机程序`fixdep` 和`bin2c` 的目标:
+
```Makefile
hostprogs-y := fixdep
hostprogs-$(CONFIG_BUILD_BIN2C) += bin2c

From dc7c041b3ce8958672f07089b74b1c8b34918f42 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Tue, 18 Aug 2015 10:55:26 +0800
Subject: =?UTF-8?q?20150818-1=20=E9=80=89=E9=A2=98=20LFCS?=
 =?UTF-8?q?=20=E4=B8=93=E9=A2=98=206-10=20=E5=85=B1=E5=8D=81=E7=AF=87=20?=
 =?UTF-8?q?=E5=AE=8C=E7=BB=93?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ng and Linux Filesystem Troubleshooting.md | 315 +++++++++++++++
 ...es – Creating & Managing System Backups.md | 276 +++++++++++++
 ...d Services SysVinit Systemd and Upstart.md | 367 ++++++++++++++++++
 ...es and Enabling sudo Access on Accounts.md | 330 ++++++++++++++++
 ...th Yum RPM Apt Dpkg Aptitude and Zypper.md | 229 +++++++++++
 5 files changed, 1517 insertions(+)
 create mode 100644 sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md
 create mode 100644 sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md
 create mode 100644 sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md
create mode 100644 sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md create mode 100644 sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md diff --git a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md new file mode 100644 index 0000000000..45029ac20e --- /dev/null +++ b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md @@ -0,0 +1,315 @@ +Part 10 - LFCS: Understanding & Learning Basic Shell Scripting and Linux Filesystem Troubleshooting +================================================================================ +The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new initiative whose purpose is to allow individuals everywhere (and anywhere) to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams. + +![Basic Shell Scripting and Filesystem Troubleshooting](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-10.png) + +Linux Foundation Certified Sysadmin – Part 10 + +Check out the following video that guides you an introduction to the Linux Foundation Certification Program. + +注:youtube 视频 + + + +This is the last article (Part 10) of the present 10-tutorial long series. In this article we will focus on basic shell scripting and troubleshooting Linux file systems. Both topics are required for the LFCS certification exam. + +### Understanding Terminals and Shells ### + +Let’s clarify a few concepts first. 
+
+- A shell is a program that takes commands and gives them to the operating system to be executed.
+- A terminal is a program that allows us as end users to interact with the shell. One example of a terminal is GNOME terminal, as shown in the below image.
+
+![Gnome Terminal](http://www.tecmint.com/wp-content/uploads/2014/11/Gnome-Terminal.png)
+
+Gnome Terminal
+
+When we first start a shell, it presents a command prompt (also known as the command line), which tells us that the shell is ready to start accepting commands from its standard input device, which is usually the keyboard.
+
+You may want to refer to another article in this series ([Use Command to Create, Edit, and Manipulate files – Part 1][1]) to review some useful commands.
+
+Linux provides a range of options for shells, the following being the most common:
+
+**bash Shell**
+
+Bash stands for Bourne Again SHell and is the GNU Project’s default shell. It incorporates useful features from the Korn shell (ksh) and C shell (csh), offering several improvements at the same time. This is the default shell used by the distributions covered in the LFCS certification, and it is the shell that we will use in this tutorial.
+
+**sh Shell**
+
+The Bourne SHell is the oldest shell and therefore has been the default shell of many UNIX-like operating systems for many years.
+
+**ksh Shell**
+
+The Korn SHell is a Unix shell which was developed by David Korn at Bell Labs in the early 1980s. It is backward-compatible with the Bourne shell and includes many features of the C shell.
+
+A shell script is nothing more and nothing less than a text file turned into an executable program that combines commands that are executed by the shell one after another.
+
+### Basic Shell Scripting ###
+
+As mentioned earlier, a shell script is born as a plain text file. Thus, it can be created and edited using our preferred text editor.
You may want to consider using vi/m (refer to [Usage of vi Editor – Part 2][2] of this series), which features syntax highlighting for your convenience. + +Type the following command to create a file named myscript.sh and press Enter. + + # vim myscript.sh + +The very first line of a shell script must be as follows (also known as a shebang). + + #!/bin/bash + +It “tells” the operating system the name of the interpreter that should be used to run the text that follows. + +Now it’s time to add our commands. We can clarify the purpose of each command, or the entire script, by adding comments as well. Note that the shell ignores those lines beginning with a pound sign # (explanatory comments). + + #!/bin/bash + echo This is Part 10 of the 10-article series about the LFCS certification + echo Today is $(date +%Y-%m-%d) + +Once the script has been written and saved, we need to make it executable. + + # chmod 755 myscript.sh + +Before running our script, we need to say a few words about the $PATH environment variable. If we run, + + echo $PATH + +from the command line, we will see the contents of $PATH: a colon-separated list of directories that are searched when we enter the name of a executable program. It is called an environment variable because it is part of the shell environment – a set of information that becomes available for the shell and its child processes when the shell is first started. + +When we type a command and press Enter, the shell searches in all the directories listed in the $PATH variable and executes the first instance that is found. Let’s see an example, + +![Linux Environment Variables](http://www.tecmint.com/wp-content/uploads/2014/11/Environment-Variable.png) + +Environment Variables + +If there are two executable files with the same name, one in /usr/local/bin and another in /usr/bin, the one in the first directory will be executed first, whereas the other will be disregarded. 
+
+If we haven’t saved our script inside one of the directories listed in the $PATH variable, we need to append ./ to the file name in order to execute it. Otherwise, we can run it just as we would do with a regular command.
+
+    # pwd
+    # ./myscript.sh
+    # cp myscript.sh ../bin
+    # cd ../bin
+    # pwd
+    # myscript.sh
+
+![Execute Script in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Execute-Script.png)
+
+Execute Script
+
+#### Conditionals ####
+
+Whenever you need to specify different courses of action to be taken in a shell script, as a result of the success or failure of a command, you will use the if construct to define such conditions. Its basic syntax is:
+
+    if CONDITION; then
+    COMMANDS;
+    else
+    OTHER-COMMANDS
+    fi
+
+Where CONDITION can be one of the following (only the most frequent conditions are cited here) and evaluates to true when:
+
+- [ -a file ] → file exists.
+- [ -d file ] → file exists and is a directory.
+- [ -f file ] → file exists and is a regular file.
+- [ -u file ] → file exists and its SUID (set user ID) bit is set.
+- [ -g file ] → file exists and its SGID bit is set.
+- [ -k file ] → file exists and its sticky bit is set.
+- [ -r file ] → file exists and is readable.
+- [ -s file ] → file exists and is not empty.
+- [ -w file ] → file exists and is writable.
+- [ -x file ] → file exists and is executable.
+- [ string1 = string2 ] → the strings are equal.
+- [ string1 != string2 ] → the strings are not equal.
+
+[ int1 op int2 ] evaluates to true when the comparison between int1 and int2 holds, where op is one of the following comparison operators.
+
+- -eq –> is true if int1 is equal to int2.
+- -ne –> true if int1 is not equal to int2.
+- -lt –> true if int1 is less than int2.
+- -le –> true if int1 is less than or equal to int2.
+- -gt –> true if int1 is greater than int2.
- -ge –> true if int1 is greater than or equal to int2.
+
+#### For Loops ####
+
+This loop allows us to execute one or more commands for each value in a list of values. Its basic syntax is:
+
+    for item in SEQUENCE; do
+    COMMANDS;
+    done
+
+Where item is a generic variable that represents each value in SEQUENCE during each iteration.
+
+#### While Loops ####
+
+This loop allows us to execute a series of repetitive commands as long as the control command executes with an exit status equal to zero (successfully). Its basic syntax is:
+
+    while EVALUATION_COMMAND; do
+    EXECUTE_COMMANDS;
+    done
+
+Where EVALUATION_COMMAND can be any command(s) that can exit with a success (0) or failure (other than 0) status, and EXECUTE_COMMANDS can be any program, script or shell construct, including other nested loops.
+
+#### Putting It All Together ####
+
+We will demonstrate the use of the if construct and the for loop with the following example.
+
+**Determining if a service is running in a systemd-based distro**
+
+Let’s create a file with a list of services that we want to monitor at a glance.
+
+    # cat myservices.txt
+
+    sshd
+    mariadb
+    httpd
+    crond
+    firewalld
+
+![Script to Monitor Linux Services](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Services.png)
+
+Script to Monitor Linux Services
+
+Our shell script should look like this:
+
+    #!/bin/bash
+
+    # This script iterates over a list of services and
+    # is used to determine whether they are running or not.
+
+    for service in $(cat myservices.txt); do
+    systemctl status $service | grep --quiet "running"
+    if [ $? -eq 0 ]; then
+    echo $service "is [ACTIVE]"
+    else
+    echo $service "is [INACTIVE or NOT INSTALLED]"
+    fi
+    done
+
+![Linux Service Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Script.png)
+
+Linux Service Monitoring Script
+
+**Let’s explain how the script works.**
+
+1). The for loop reads the myservices.txt file one element of LIST at a time.
That single element is denoted by the generic variable named service. The LIST is populated with the output of, + + # cat myservices.txt + +2). The above command is enclosed in parentheses and preceded by a dollar sign to indicate that it should be evaluated to populate the LIST that we will iterate over. + +3). For each element of LIST (meaning every instance of the service variable), the following command will be executed. + + # systemctl status $service | grep --quiet "running" + +This time we need to precede our generic variable (which represents each element in LIST) with a dollar sign to indicate it’s a variable and thus its value in each iteration should be used. The output is then piped to grep. + +The –quiet flag is used to prevent grep from displaying to the screen the lines where the word running appears. When that happens, the above command returns an exit status of 0 (represented by $? in the if construct), thus verifying that the service is running. + +An exit status different than 0 (meaning the word running was not found in the output of systemctl status $service) indicates that the service is not running. + +![Services Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Services-Monitoring-Script.png) + +Services Monitoring Script + +We could go one step further and check for the existence of myservices.txt before even attempting to enter the for loop. + + #!/bin/bash + + # This script iterates over a list of services and + # is used to determine whether they are running or not. + + if [ -f myservices.txt ]; then + for service in $(cat myservices.txt); do + systemctl status $service | grep --quiet "running" + if [ $? 
-eq 0 ]; then + echo $service "is [ACTIVE]" + else + echo $service "is [INACTIVE or NOT INSTALLED]" + fi + done + else + echo "myservices.txt is missing" + fi + +**Pinging a series of network or internet hosts for reply statistics** + +You may want to maintain a list of hosts in a text file and use a script to determine every now and then whether they’re pingable or not (feel free to replace the contents of myhosts and try for yourself). + +The read shell built-in command tells the while loop to read myhosts line by line and assigns the content of each line to variable host, which is then passed to the ping command. + + #!/bin/bash + + # This script is used to demonstrate the use of a while loop + + while read host; do + ping -c 2 $host + done < myhosts + +![Script to Ping Servers](http://www.tecmint.com/wp-content/uploads/2014/11/Script-to-Ping-Servers.png) + +Script to Ping Servers + +Read Also: + +- [Learn Shell Scripting: A Guide from Newbies to System Administrator][3] +- [5 Shell Scripts to Learn Shell Programming][4] + +### Filesystem Troubleshooting ### + +Although Linux is a very stable operating system, if it crashes for some reason (for example, due to a power outage), one (or more) of your file systems will not be unmounted properly and thus will be automatically checked for errors when Linux is restarted. + +In addition, each time the system boots during a normal boot, it always checks the integrity of the filesystems before mounting them. In both cases this is performed using a tool named fsck (“file system check”). + +fsck will not only check the integrity of file systems, but also attempt to repair corrupt file systems if instructed to do so. Depending on the severity of damage, fsck may succeed or not; when it does, recovered portions of files are placed in the lost+found directory, located in the root of each file system. 
+
+Last but not least, we must note that inconsistencies may also happen if we try to remove a USB drive when the operating system is still writing to it, and may even result in hardware damage.
+
+The basic syntax of fsck is as follows:
+
+    # fsck [options] filesystem
+
+**Checking a filesystem for errors and attempting to repair automatically**
+
+In order to check a filesystem with fsck, we must first unmount it.
+
+    # mount | grep sdg1
+    # umount /mnt
+    # fsck -y /dev/sdg1
+
+![Scan Linux Filesystem for Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Filesystem-Errors.png)
+
+Check Filesystem Errors
+
+Besides the -y flag, we can use the -a option to automatically repair the file systems without asking any questions, and add the -f flag to force the check even when the filesystem looks clean.
+
+    # fsck -af /dev/sdg1
+
+If we’re only interested in finding out what’s wrong (without trying to fix anything for the time being) we can run fsck with the -n option, which will output the filesystem issues to standard output.
+
+    # fsck -n /dev/sdg1
+
+Depending on the error messages in the output of fsck, we will know whether we can try to solve the issue ourselves or escalate it to engineering teams to perform further checks on the hardware.
+
+### Summary ###
+
+We have arrived at the end of this 10-article series, where we have tried to cover the basic domain competencies required to pass the LFCS exam.
+
+For obvious reasons, it is not possible to cover every single aspect of these topics in any single tutorial, and that’s why we hope that these articles have put you on the right track to try new stuff yourself and continue learning.
+
+If you have any questions or comments, they are always welcome – so don’t hesitate to drop us a line via the form below!
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[2]:http://www.tecmint.com/vi-editor-usage/ +[3]:http://www.tecmint.com/learning-shell-scripting-language-a-guide-from-newbies-to-system-administrator/ +[4]:http://www.tecmint.com/basic-shell-programming-part-ii/ \ No newline at end of file diff --git a/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md b/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md new file mode 100644 index 0000000000..bdabfb1f9d --- /dev/null +++ b/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md @@ -0,0 +1,276 @@ +Part 6 - LFCS: Assembling Partitions as RAID Devices – Creating & Managing System Backups +================================================================================ +Recently, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification, a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of performing overall operational support on Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation, when required, to other support teams. 
+ +![Linux Foundation Certified Sysadmin – Part 6](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-6.png) + +Linux Foundation Certified Sysadmin – Part 6 + +The following video provides an introduction to The Linux Foundation Certification Program. + +注:youtube 视频 + + +This post is Part 6 of a 10-tutorial series, here in this part, we will explain How to Assemble Partitions as RAID Devices – Creating & Managing System Backups, that are required for the LFCS certification exam. + +### Understanding RAID ### + +The technology known as Redundant Array of Independent Disks (RAID) is a storage solution that combines multiple hard disks into a single logical unit to provide redundancy of data and/or improve performance in read / write operations to disk. + +However, the actual fault-tolerance and disk I/O performance lean on how the hard disks are set up to form the disk array. Depending on the available devices and the fault tolerance / performance needs, different RAID levels are defined. You can refer to the RAID series here in Tecmint.com for a more detailed explanation on each RAID level. + +- RAID Guide: [What is RAID, Concepts of RAID and RAID Levels Explained][1] + +Our tool of choice for creating, assembling, managing, and monitoring our software RAIDs is called mdadm (short for multiple disks admin). + + ---------------- Debian and Derivatives ---------------- + # aptitude update && aptitude install mdadm + +---------- + + ---------------- Red Hat and CentOS based Systems ---------------- + # yum update && yum install mdadm + +---------- + + ---------------- On openSUSE ---------------- + # zypper refresh && zypper install mdadm # + +#### Assembling Partitions as RAID Devices #### + +The process of assembling existing partitions as RAID devices consists of the following steps. + +**1. 
Create the array using mdadm**
+
+If one of the partitions has been formatted previously, or has previously been a part of another RAID array, you will be prompted to confirm the creation of the new array. Assuming you have taken the necessary precautions to avoid losing important data that may have resided in them, you can safely type y and press Enter.
+
+    # mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
+
+![Creating RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Creating-RAID-Array.png)
+
+Creating RAID Array
+
+**2. Check the array creation status**
+
+After creating the RAID array, you can check the status of the array using the following commands.
+
+    # cat /proc/mdstat
+    or
+    # mdadm --detail /dev/md0	[More detailed summary]
+
+![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png)
+
+Check RAID Array Status
+
+**3. Format the RAID Device**
+
+Format the device with a filesystem as per your needs / requirements, as explained in [Part 4][2] of this series.
+
+**4. Monitor RAID Array Service**
+
+Instruct the monitoring service to “keep an eye” on the array. Add the output of mdadm --detail --scan to /etc/mdadm/mdadm.conf (Debian and derivatives) or /etc/mdadm.conf (CentOS / openSUSE), like so.
+
+    # mdadm --detail --scan
+
+![Monitor RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Monitor-RAID-Array.png)
+
+Monitor RAID Array
+
+    # mdadm --assemble --scan 	[Assemble the array]
+
+To ensure the service starts on system boot, run the following commands as root.
+
+**Debian and Derivatives**
+
+On Debian and derivatives, the service should start on boot by default, but to make sure you can run:
+
+    # update-rc.d mdadm defaults
+
+Edit the /etc/default/mdadm file and add the following line. 
+ + AUTOSTART=true + +**On CentOS and openSUSE (systemd-based)** + + # systemctl start mdmonitor + # systemctl enable mdmonitor + +**On CentOS and openSUSE (SysVinit-based)** + + # service mdmonitor start + # chkconfig mdmonitor on + +**5. Check RAID Disk Failure** + +In RAID levels that support redundancy, replace failed drives when needed. When a device in the disk array becomes faulty, a rebuild automatically starts only if there was a spare device added when we first created the array. + +![Check RAID Faulty Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Faulty-Disk.png) + +Check RAID Faulty Disk + +Otherwise, we need to manually attach an extra physical drive to our system and run. + + # mdadm /dev/md0 --add /dev/sdX1 + +Where /dev/md0 is the array that experienced the issue and /dev/sdX1 is the new device. + +**6. Disassemble a working array** + +You may have to do this if you need to create a new array using the devices – (Optional Step). + + # mdadm --stop /dev/md0 # Stop the array + # mdadm --remove /dev/md0 # Remove the RAID device + # mdadm --zero-superblock /dev/sdX1 # Overwrite the existing md superblock with zeroes + +**7. Set up mail alerts** + +You can configure a valid email address or system account to send alerts to (make sure you have this line in mdadm.conf). – (Optional Step) + + MAILADDR root + +In this case, all alerts that the RAID monitoring daemon collects will be sent to the local root account’s mail box. One of such alerts looks like the following. + +**Note**: This event is related to the example in STEP 5, where a device was marked as faulty and the spare device was automatically built into the array by mdadm. Thus, we “ran out” of healthy spare devices and we got the alert. 
+
+![RAID Monitoring Alerts](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Monitoring-Alerts.png)
+
+RAID Monitoring Alerts
+
+#### Understanding RAID Levels ####
+
+**RAID 0**
+
+The total array size is n times the size of the smallest partition, where n is the number of independent disks in the array (you will need at least two drives). Run the following command to assemble a RAID 0 array using partitions /dev/sdb1 and /dev/sdc1.
+
+    # mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
+
+Common uses: Setups that support real-time applications where performance is more important than fault-tolerance.
+
+**RAID 1 (aka Mirroring)**
+
+The total array size equals the size of the smallest partition (you will need at least two drives). Run the following command to assemble a RAID 1 array using partitions /dev/sdb1 and /dev/sdc1.
+
+    # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
+
+Common uses: Installation of the operating system or important subdirectories, such as /home.
+
+**RAID 5 (aka drives with Parity)**
+
+The total array size will be (n – 1) times the size of the smallest partition. The “lost” space in (n-1) is used for parity (redundancy) calculation (you will need at least three drives).
+
+Note that you can specify a spare device (/dev/sde1 in this case) to replace a faulty part when an issue occurs. Run the following command to assemble a RAID 5 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 as spare.
+
+    # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1
+
+Common uses: Web and file servers.
+
+**RAID 6 (aka drives with double Parity)**
+
+The total array size will be (n*s)-2*s, where n is the number of independent disks in the array and s is the size of the smallest disk. Note that you can specify a spare device (/dev/sdf1 in this case) to replace a faulty part when an issue occurs. 
+
+Run the following command to assemble a RAID 6 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, and /dev/sdf1 as spare.
+
+    # mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1
+
+Common uses: File and backup servers with large capacity and high availability requirements.
+
+**RAID 1+0 (aka stripe of mirrors)**
+
+The total array size is computed based on the formulas for RAID 0 and RAID 1, since RAID 1+0 is a combination of both. First, calculate the size of each mirror and then the size of the stripe.
+
+Note that you can specify a spare device (/dev/sdf1 in this case) to replace a faulty part when an issue occurs. Run the following command to assemble a RAID 1+0 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, and /dev/sdf1 as spare.
+
+    # mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 --spare-devices=1 /dev/sdf1
+
+Common uses: Database and application servers that require fast I/O operations.
+
+#### Creating and Managing System Backups ####
+
+It never hurts to remember that RAID with all its bounties IS NOT A REPLACEMENT FOR BACKUPS! Write it 1000 times on the chalkboard if you need to, but make sure you keep that idea in mind at all times. Before we begin, we must note that there is no one-size-fits-all solution for system backups, but here are some things that you do need to take into account while planning a backup strategy.
+
+- What do you use your system for? (Desktop or server? If the latter case applies, what are the most critical services – whose configuration would be a real pain to lose?)
+- How often do you need to take backups of your system?
+- What is the data (e.g. files / directories / database dumps) that you want to back up? You may also want to consider if you really need to back up huge files (such as audio or video files).
+- Where (meaning physical place and media) will those backups be stored? 
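Before moving on to backups, the capacity formulas given for each RAID level above can be sanity-checked with a few lines of shell arithmetic. raid_size below is purely illustrative (it is not part of mdadm); n is the number of member disks and s the size of the smallest one, in whatever unit you prefer.

```shell
#!/bin/sh
# Usable capacity per RAID level, following the formulas above:
# RAID 0 -> n*s, RAID 1 -> s, RAID 5 -> (n-1)*s, RAID 6 -> (n-2)*s,
# RAID 1+0 -> (n/2)*s. Every member counts as the smallest disk, s.
raid_size() {
    level="$1"; n="$2"; s="$3"
    case "$level" in
        0)  echo $(( n * s )) ;;
        1)  echo "$s" ;;
        5)  echo $(( (n - 1) * s )) ;;
        6)  echo $(( (n - 2) * s )) ;;
        10) echo $(( n / 2 * s )) ;;
        *)  echo "unsupported level: $level" >&2; return 1 ;;
    esac
}

raid_size 5 3 500     # 3 x 500 GB disks in RAID 5 -> prints 1000
raid_size 6 4 500     # 4 x 500 GB disks in RAID 6 -> prints 1000
```

For example, the RAID 5 array created earlier from three partitions of equal size keeps two thirds of the raw capacity, which matches (n - 1) * s.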
+
+**Backing Up Your Data**
+
+Method 1: Back up entire drives with the dd command. You can either back up an entire hard disk or a partition by creating an exact image at any point in time. Note that this works best when the device is offline, meaning it’s not mounted and there are no processes accessing it for I/O operations.
+
+The downside of this backup approach is that the image will have the same size as the disk or partition, even when the actual data occupies a small percentage of it. For example, if you want to image a partition of 20 GB that is only 10% full, the image file will still be 20 GB in size. In other words, it’s not only the actual data that gets backed up, but the entire partition itself. You may consider using this method if you need exact backups of your devices.
+
+**Creating an image file out of an existing device**
+
+    # dd if=/dev/sda of=/system_images/sda.img
+    OR
+    --------------------- Alternatively, you can compress the image file ---------------------
+    # dd if=/dev/sda | gzip -c > /system_images/sda.img.gz
+
+**Restoring the backup from the image file**
+
+    # dd if=/system_images/sda.img of=/dev/sda
+    OR
+
+    --------------------- Depending on your choice while creating the image ---------------------
+    # gzip -dc /system_images/sda.img.gz | dd of=/dev/sda
+
+Method 2: Back up certain files / directories with the tar command – already covered in [Part 3][3] of this series. You may consider using this method if you need to keep copies of specific files and directories (configuration files, users’ home directories, and so on).
+
+Method 3: Synchronize files with the rsync command. Rsync is a versatile remote (and local) file-copying tool. If you need to back up and synchronize your files to/from network drives, rsync is the way to go.
+
+Whether you’re synchronizing two local directories or local <--> remote directories mounted on the local filesystem, the basic syntax is the same. 
+Synchronizing two local directories or local <--> remote directories mounted on the local filesystem
+
+    # rsync -av source_directory destination_directory
+
+Here, the -a option recurses into subdirectories (if they exist) and preserves symbolic links, timestamps, permissions, and the original owner / group, while -v enables verbose output.
+
+![rsync Synchronizing Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronizing-Files.png)
+
+rsync Synchronizing Files
+
+In addition, if you want to increase the security of the data transfer over the wire, you can use rsync over ssh.
+
+**Synchronizing local → remote directories over ssh**
+
+    # rsync -avzhe ssh backups root@remote_host:/remote_directory/
+
+This example will synchronize the local backups directory with /remote_directory on the remote host.
+
+Here, the -h option shows file sizes in human-readable format, and the -e flag is used to indicate an ssh connection.
+
+![rsync Synchronize Remote Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronize-Remote-Files.png)
+
+rsync Synchronize Remote Files
+
+Synchronizing remote → local directories over ssh.
+
+In this case, switch the source and destination directories from the previous example.
+
+    # rsync -avzhe ssh root@remote_host:/remote_directory/ backups
+
+Please note that these are only 3 examples (the most frequent cases you’re likely to run into) of the use of rsync. More examples and uses of rsync commands can be found in the following article.
+
+- Read Also: [10 rsync Commands to Sync Files in Linux][4]
+
+### Summary ###
+
+As a sysadmin, you need to ensure that your systems perform as well as possible. If you’re well prepared, and if the integrity of your data is well supported by a storage technology such as RAID and regular system backups, you’ll be safe.
+
+If you have questions, comments, or further ideas on how this article can be improved, feel free to speak out below. 
In addition, please consider sharing this series through your social network profiles. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[2]:http://www.tecmint.com/create-partitions-and-filesystems-in-linux/ +[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/ +[4]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/ \ No newline at end of file diff --git a/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md new file mode 100644 index 0000000000..b024c89540 --- /dev/null +++ b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md @@ -0,0 +1,367 @@ +Part 7 - LFCS: Managing System Startup Process and Services (SysVinit, Systemd and Upstart) +================================================================================ +A couple of months ago, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, an exciting new program whose aim is allowing individuals from all ends of the world to get certified in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-hand problem-finding and analysis, plus the ability to decide when to raise issues to engineering teams. 
+ +![Linux Foundation Certified Sysadmin – Part 7](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-7.png) + +Linux Foundation Certified Sysadmin – Part 7 + +The following video describes an brief introduction to The Linux Foundation Certification Program. + +注:youtube 视频 + + +This post is Part 7 of a 10-tutorial series, here in this part, we will explain how to Manage Linux System Startup Process and Services, that are required for the LFCS certification exam. + +### Managing the Linux Startup Process ### + +The boot process of a Linux system consists of several phases, each represented by a different component. The following diagram briefly summarizes the boot process and shows all the main components involved. + +![Linux Boot Process](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-Boot-Process.png) + +Linux Boot Process + +When you press the Power button on your machine, the firmware that is stored in a EEPROM chip in the motherboard initializes the POST (Power-On Self Test) to check on the state of the system’s hardware resources. When the POST is finished, the firmware then searches and loads the 1st stage boot loader, located in the MBR or in the EFI partition of the first available disk, and gives control to it. + +#### MBR Method #### + +The MBR is located in the first sector of the disk marked as bootable in the BIOS settings and is 512 bytes in size. + +- First 446 bytes: The bootloader contains both executable code and error message text. +- Next 64 bytes: The Partition table contains a record for each of four partitions (primary or extended). Among other things, each record indicates the status (active / not active), size, and start / end sectors of each partition. +- Last 2 bytes: The magic number serves as a validation check of the MBR. + +The following command performs a backup of the MBR (in this example, /dev/sda is the first hard disk). 
The resulting file, mbr.bkp can come in handy should the partition table become corrupt, for example, rendering the system unbootable. + +Of course, in order to use it later if the need arises, we will need to save it and store it somewhere else (like a USB drive, for example). That file will help us restore the MBR and will get us going once again if and only if we do not change the hard drive layout in the meanwhile. + +**Backup MBR** + + # dd if=/dev/sda of=mbr.bkp bs=512 count=1 + +![Backup MBR in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Backup-MBR-in-Linux.png) + +Backup MBR in Linux + +**Restoring MBR** + + # dd if=mbr.bkp of=/dev/sda bs=512 count=1 + +![Restore MBR in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-MBR-in-Linux.png) + +Restore MBR in Linux + +#### EFI/UEFI Method #### + +For systems using the EFI/UEFI method, the UEFI firmware reads its settings to determine which UEFI application is to be launched and from where (i.e., in which disk and partition the EFI partition is located). + +Next, the 2nd stage boot loader (aka boot manager) is loaded and run. GRUB [GRand Unified Boot] is the most frequently used boot manager in Linux. One of two distinct versions can be found on most systems used today. + +- GRUB legacy configuration file: /boot/grub/menu.lst (older distributions, not supported by EFI/UEFI firmwares). +- GRUB2 configuration file: most likely, /etc/default/grub. + +Although the objectives of the LFCS exam do not explicitly request knowledge about GRUB internals, if you’re brave and can afford to mess up your system (you may want to try it first on a virtual machine, just in case), you need to run. + + # update-grub + +As root after modifying GRUB’s configuration in order to apply the changes. + +Basically, GRUB loads the default kernel and the initrd or initramfs image. 
In few words, initrd or initramfs help to perform the hardware detection, the kernel module loading and the device discovery necessary to get the real root filesystem mounted. + +Once the real root filesystem is up, the kernel executes the system and service manager (init or systemd, whose process identification or PID is always 1) to begin the normal user-space boot process in order to present a user interface. + +Both init and systemd are daemons (background processes) that manage other daemons, as the first service to start (during boot) and the last service to terminate (during shutdown). + +![Systemd and Init](http://www.tecmint.com/wp-content/uploads/2014/10/systemd-and-init.png) + +Systemd and Init + +### Starting Services (SysVinit) ### + +The concept of runlevels in Linux specifies different ways to use a system by controlling which services are running. In other words, a runlevel controls what tasks can be accomplished in the current execution state = runlevel (and which ones cannot). + +Traditionally, this startup process was performed based on conventions that originated with System V UNIX, with the system passing executing collections of scripts that start and stop services as the machine entered a specific runlevel (which, in other words, is a different mode of running the system). + +Within each runlevel, individual services can be set to run, or to be shut down if running. Latest versions of some major distributions are moving away from the System V standard in favour of a rather new service and system manager called systemd (which stands for system daemon), but usually support sysv commands for compatibility purposes. This means that you can run most of the well-known sysv init tools in a systemd-based distribution. + +- Read Also: [Why ‘systemd’ replaces ‘init’ in Linux][1] + +Besides starting the system process, init looks to the /etc/inittab file to decide what runlevel must be entered. 
+ +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Runlevel Description
0 Halt the system. Runlevel 0 is a special transitional state used to shutdown the system quickly.
1 Also aliased to s, or S, this runlevel is sometimes called maintenance mode. What services, if any, are started at this runlevel varies by distribution. It’s typically used for low-level system maintenance that may be impaired by normal system operation.
2 Multiuser. On Debian systems and derivatives, this is the default runlevel, and includes -if available- a graphical login. On Red-Hat based systems, this is multiuser mode without networking.
3 On Red-Hat based systems, this is the default multiuser mode, which runs everything except the graphical environment. This runlevel and levels 4 and 5 usually are not used on Debian-based systems.
4 Typically unused by default and therefore available for customization.
5 On Red-Hat based systems, full multiuser mode with GUI login. This runlevel is like level 3, but with a GUI login available.
6 Reboot the system.
+ +To switch between runlevels, we can simply issue a runlevel change using the init command: init N (where N is one of the runlevels listed above). Please note that this is not the recommended way of taking a running system to a different runlevel because it gives no warning to existing logged-in users (thus causing them to lose work and processes to terminate abnormally). + +Instead, the shutdown command should be used to restart the system (which first sends a warning message to all logged-in users and blocks any further logins; it then signals init to switch runlevels); however, the default runlevel (the one the system will boot to) must be edited in the /etc/inittab file first. + +For that reason, follow these steps to properly switch between runlevels, As root, look for the following line in /etc/inittab. + + id:2:initdefault: + +and change the number 2 for the desired runlevel with your preferred text editor, such as vim (described in [How to use vi/vim editor in Linux – Part 2][2] of this series). + +Next, run as root. + + # shutdown -r now + +That last command will restart the system, causing it to start in the specified runlevel during next boot, and will run the scripts located in the /etc/rc[runlevel].d directory in order to decide which services should be started and which ones should not. For example, for runlevel 2 in the following system. + +![Change Runlevels in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Runlevels-in-Linux.jpeg) + +Change Runlevels in Linux + +#### Manage Services using chkconfig #### + +To enable or disable system services on boot, we will use [chkconfig command][3] in CentOS / openSUSE and sysv-rc-conf in Debian and derivatives. This tool can also show us what is the preconfigured state of a service for a particular runlevel. + +- Read Also: [How to Stop and Disable Unwanted Services in Linux][4] + +Listing the runlevel configuration for a service. 
+ + # chkconfig --list [service name] + # chkconfig --list postfix + # chkconfig --list mysqld + +![Listing Runlevel Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Listing-Runlevel-Configuration.png) + +Listing Runlevel Configuration + +In the above image we can see that postfix is set to start when the system enters runlevels 2 through 5, whereas mysqld will be running by default for runlevels 2 through 4. Now suppose that this is not the expected behaviour. + +For example, we need to turn on mysqld for runlevel 5 as well, and turn off postfix for runlevels 4 and 5. Here’s what we would do in each case (run the following commands as root). + +**Enabling a service for a particular runlevel** + + # chkconfig --level [level(s)] service on + # chkconfig --level 5 mysqld on + +**Disabling a service for particular runlevels** + + # chkconfig --level [level(s)] service off + # chkconfig --level 45 postfix off + +![Enable Disable Services in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Disable-Services.png) + +Enable Disable Services + +We will now perform similar tasks in a Debian-based system using sysv-rc-conf. + +#### Manage Services using sysv-rc-conf #### + +Configuring a service to start automatically on a specific runlevel and prevent it from starting on all others. + +1. Let’s use the following command to see what are the runlevels where mdadm is configured to start. + + # ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm' + +![Check Runlevel of Service Running](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Service-Runlevel.png) + +Check Runlevel of Service Running + +2. We will use sysv-rc-conf to prevent mdadm from starting on all runlevels except 2. Just check or uncheck (with the space bar) as desired (you can move up, down, left, and right with the arrow keys). 
+ + # sysv-rc-conf + +![SysV Runlevel Config](http://www.tecmint.com/wp-content/uploads/2014/10/SysV-Runlevel-Config.png) + +SysV Runlevel Config + +Then press q to quit. + +3. We will restart the system and run again the command from STEP 1. + + # ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm' + +![Verify Service Runlevel](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Service-Runlevel.png) + +Verify Service Runlevel + +In the above image we can see that mdadm is configured to start only on runlevel 2. + +### What About systemd? ### + +systemd is another service and system manager that is being adopted by several major Linux distributions. It aims to allow more processing to be done in parallel during system startup (unlike sysvinit, which always tends to be slower because it starts processes one at a time, checks whether one depends on another, and waits for daemons to launch so more services can start), and to serve as a dynamic resource management to a running system. + +Thus, services are started when needed (to avoid consuming system resources) instead of being launched without a solid reason during boot. + +Viewing the status of all the processes running on your system, both systemd native and SysV services, run the following command. + + # systemctl + +![Check All Running Processes in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-All-Running-Processes.png) + +Check All Running Processes + +The LOAD column shows whether the unit definition (refer to the UNIT column, which shows the service or anything maintained by systemd) was properly loaded, while the ACTIVE and SUB columns show the current status of such unit. +Displaying information about the current status of a service + +When the ACTIVE column indicates that an unit’s status is other than active, we can check what happened using. + + # systemctl status [unit] + +For example, in the image above, media-samba.mount is in failed state. Let’s run. 
+
+    # systemctl status media-samba.mount
+
+![Check Linux Service Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Service-Status.png)
+
+Check Service Status
+
+We can see that media-samba.mount failed because the mount process on host dev1 was unable to find the network share at //192.168.0.10/gacanepa.
+
+### Starting or Stopping Services ###
+
+Once the network share //192.168.0.10/gacanepa becomes available, let’s try to start, then stop, and finally restart the unit media-samba.mount. After performing each action, let’s run systemctl status media-samba.mount to check on its status.
+
+    # systemctl start media-samba.mount
+    # systemctl status media-samba.mount
+    # systemctl stop media-samba.mount
+    # systemctl restart media-samba.mount
+    # systemctl status media-samba.mount
+
+![Starting Stopping Services](http://www.tecmint.com/wp-content/uploads/2014/10/Starting-Stoping-Service.jpeg)
+
+Starting Stopping Services
+
+**Enabling or disabling a service to start during boot**
+
+Under systemd you can enable or disable a service to start at boot.
+
+    # systemctl enable [service] # enable a service
+    # systemctl disable [service] # prevent a service from starting at boot
+
+The process of enabling or disabling a service to start automatically on boot consists of adding or removing symbolic links in the /etc/systemd/system/multi-user.target.wants directory.
+
+![Enabling Disabling Services](http://www.tecmint.com/wp-content/uploads/2014/10/Enabling-Disabling-Services.jpeg)
+
+Enabling Disabling Services
+
+Alternatively, you can find out a service’s current status (enabled or disabled) with the command.
+
+    # systemctl is-enabled [service]
+
+For example,
+
+    # systemctl is-enabled postfix.service
+
+In addition, you can reboot or shut down the system with the following commands. 
+
+    # systemctl reboot
+    # systemctl shutdown
+
+### Upstart ###
+
+Upstart is an event-based replacement for the /sbin/init daemon and was born out of the need for starting services only when they are needed (also supervising them while they are running) and handling events as they occur, thus surpassing the classic, dependency-based sysvinit system.
+
+It was originally developed for the Ubuntu distribution, but is used in Red Hat Enterprise Linux 6.0. Though it was intended to be suitable for deployment in all Linux distributions as a replacement for sysvinit, in time it was overshadowed by systemd. On February 14, 2014, Mark Shuttleworth (founder of Canonical Ltd.) announced that future releases of Ubuntu would use systemd as the default init daemon.
+
+Because the SysV startup script system has been so common for so long, a large number of software packages include SysV startup scripts. To accommodate such packages, Upstart provides a compatibility mode: It runs SysV startup scripts in the usual locations (/etc/rc.d/rc?.d, /etc/init.d/rc?.d, /etc/rc?.d, or a similar location). Thus, if we install a package that doesn’t yet include an Upstart configuration script, it should still launch in the usual way.
+
+Furthermore, if we have installed utilities such as [chkconfig][5], we should be able to use them to manage our SysV-based services just as we would on sysvinit-based systems.
+
+Upstart scripts also support starting or stopping services based on a wider variety of actions than do SysV startup scripts; for example, Upstart can launch a service whenever a particular hardware device is attached.
+
+A system that uses Upstart and its native scripts exclusively replaces the /etc/inittab file and the runlevel-specific SysV startup script directories with .conf scripts in the /etc/init directory.
+
+These *.conf scripts (also known as job definitions) generally consist of the following:
+
+- Description of the process. 
+- Runlevels where the process should run or events that should trigger it. +- Runlevels where process should be stopped or events that should stop it. +- Options. +- Command to launch the process. + +For example, + + # My test service - Upstart script demo description "Here goes the description of 'My test service'" author "Dave Null " + # Stanzas + + # + # Stanzas define when and how a process is started and stopped + # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn + # When to start the service + start on runlevel [2345] + # When to stop the service + stop on runlevel [016] + # Automatically restart process in case of crash + respawn + # Specify working directory + chdir /home/dave/myfiles + # Specify the process/command (add arguments if needed) to run + exec bash backup.sh arg1 arg2 + +To apply changes, you will need to tell upstart to reload its configuration. + + # initctl reload-configuration + +Then start your job by typing the following command. + + $ sudo start yourjobname + +Where yourjobname is the name of the job that was added earlier with the yourjobname.conf script. + +A more complete and detailed reference guide for Upstart is available in the project’s web site under the menu “[Cookbook][6]”. + +### Summary ### + +A knowledge of the Linux boot process is necessary to help you with troubleshooting tasks as well as with adapting the computer’s performance and running services to your needs. + +In this article we have analyzed what happens from the moment when you press the Power switch to turn on the machine until you get a fully operational user interface. I hope you have learned reading it as much as I did while putting it together. Feel free to leave your comments or questions below. We always look forward to hearing from our readers! 
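To wrap up, the equivalent "enable this service at boot" action under each of the three init systems covered in this part can be summarized in one place. The helper below is purely illustrative (print_enable_cmd is not a real tool; it only prints the command or stanza you would use, and the service names are examples):

```shell
#!/bin/sh
# Print the equivalent "enable at boot" action for each init system
# discussed above. Illustrative only: nothing is changed on the system.
print_enable_cmd() {
    init="$1"; svc="$2"
    case "$init" in
        systemd)  echo "systemctl enable $svc" ;;
        sysvinit) echo "chkconfig $svc on" ;;   # or update-rc.d on Debian-based systems
        upstart)  echo "add 'start on runlevel [2345]' to /etc/init/$svc.conf" ;;
        *) echo "unknown init system: $init" >&2; return 1 ;;
    esac
}

print_enable_cmd systemd mdmonitor      # prints: systemctl enable mdmonitor
print_enable_cmd sysvinit mdmonitor     # prints: chkconfig mdmonitor on
```

The real commands, of course, must be run as root on the respective system, as shown in the sections above.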
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-boot-process-and-manage-services/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/systemd-replaces-init-in-linux/ +[2]:http://www.tecmint.com/vi-editor-usage/ +[3]:http://www.tecmint.com/chkconfig-command-examples/ +[4]:http://www.tecmint.com/remove-unwanted-services-from-linux/ +[5]:http://www.tecmint.com/chkconfig-command-examples/ +[6]:http://upstart.ubuntu.com/cookbook/ \ No newline at end of file diff --git a/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md b/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md new file mode 100644 index 0000000000..4ccf3f20f6 --- /dev/null +++ b/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md @@ -0,0 +1,330 @@ +Part 8 - LFCS: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts +================================================================================ +Last August, the Linux Foundation started the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is to allow individuals everywhere and anywhere take an exam in order to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus intelligent decision-making to be able to decide when it’s necessary to escalate issues to higher level support teams. 
+ +![Linux Users and Groups Management](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-8.png) + +Linux Foundation Certified Sysadmin – Part 8 + +Please have a quick look at the following video that describes an introduction to the Linux Foundation Certification Program. + +注:youtube视频 + + +This article is Part 8 of a 10-tutorial long series, here in this section, we will guide you on how to manage users and groups permissions in Linux system, that are required for the LFCS certification exam. + +Since Linux is a multi-user operating system (in that it allows multiple users on different computers or terminals to access a single system), you will need to know how to perform effective user management: how to add, edit, suspend, or delete user accounts, along with granting them the necessary permissions to do their assigned tasks. + +### Adding User Accounts ### + +To add a new user account, you can run either of the following two commands as root. + + # adduser [new_account] + # useradd [new_account] + +When a new user account is added to the system, the following operations are performed. + +1. His/her home directory is created (/home/username by default). + +2. The following hidden files are copied into the user’s home directory, and will be used to provide environment variables for his/her user session. + + .bash_logout + .bash_profile + .bashrc + +3. A mail spool is created for the user at /var/spool/mail/username. + +4. A group is created and given the same name as the new user account. + +**Understanding /etc/passwd** + +The full account information is stored in the /etc/passwd file. This file contains a record per system user account and has the following format (fields are delimited by a colon). + + [username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell] + +- Fields [username] and [Comment] are self explanatory. 
+- The x in the second field indicates that the account is protected by a shadowed password (in /etc/shadow), which is needed to logon as [username]. +- The [UID] and [GID] fields are integers that represent the User IDentification and the primary Group IDentification to which [username] belongs, respectively. +- The [Home directory] indicates the absolute path to [username]’s home directory, and +- The [Default shell] is the shell that will be made available to this user when he or she logins the system. + +**Understanding /etc/group** + +Group information is stored in the /etc/group file. Each record has the following format. + + [Group name]:[Group password]:[GID]:[Group members] + +- [Group name] is the name of group. +- An x in [Group password] indicates group passwords are not being used. +- [GID]: same as in /etc/passwd. +- [Group members]: a comma separated list of users who are members of [Group name]. + +![Add User Accounts in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-user-accounts.png) + +Add User Accounts + +After adding an account, you can edit the following information (to name a few fields) using the usermod command, whose basic syntax of usermod is as follows. + + # usermod [options] [username] + +**Setting the expiry date for an account** + +Use the –expiredate flag followed by a date in YYYY-MM-DD format. + + # usermod --expiredate 2014-10-30 tecmint + +**Adding the user to supplementary groups** + +Use the combined -aG, or –append –groups options, followed by a comma separated list of groups. + + # usermod --append --groups root,users tecmint + +**Changing the default location of the user’s home directory** + +Use the -d, or –home options, followed by the absolute path to the new home directory. + + # usermod --home /tmp tecmint + +**Changing the shell the user will use by default** + +Use –shell, followed by the path to the new shell. 
+ + # usermod --shell /bin/sh tecmint + +**Displaying the groups an user is a member of** + + # groups tecmint + # id tecmint + +Now let’s execute all the above commands in one go. + + # usermod --expiredate 2014-10-30 --append --groups root,users --home /tmp --shell /bin/sh tecmint + +![usermod Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/usermod-command-examples.png) + +usermod Command Examples + +Read Also: + +- [15 useradd Command Examples in Linux][1] +- [15 usermod Command Examples in Linux][2] + +For existing accounts, we can also do the following. + +**Disabling account by locking password** + +Use the -L (uppercase L) or the –lock option to lock a user’s password. + + # usermod --lock tecmint + +**Unlocking user password** + +Use the –u or the –unlock option to unlock a user’s password that was previously blocked. + + # usermod --unlock tecmint + +![Lock User in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/lock-user-in-linux.png) + +Lock User Accounts + +**Creating a new group for read and write access to files that need to be accessed by several users** + +Run the following series of commands to achieve the goal. + + # groupadd common_group # Add a new group + # chown :common_group common.txt # Change the group owner of common.txt to common_group + # usermod -aG common_group user1 # Add user1 to common_group + # usermod -aG common_group user2 # Add user2 to common_group + # usermod -aG common_group user3 # Add user3 to common_group + +**Deleting a group** + +You can delete a group with the following command. + + # groupdel [group_name] + +If there are files owned by group_name, they will not be deleted, but the group owner will be set to the GID of the group that was deleted. 
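If you later need to track down files left behind with such an orphaned GID, `find` can flag files whose group no longer resolves to a name, or search by the stale numeric GID directly (1005 below is a hypothetical GID used only for illustration):

```shell
# Files whose group field no longer matches any entry in /etc/group
find /home -nogroup 2>/dev/null

# Or search by the numeric GID the deleted group used to have
find /home -gid 1005 2>/dev/null
```

You can then reassign those files with chgrp before the stale GID gets recycled by a newly created group.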
+ +### Linux File Permissions ### + +Besides the basic read, write, and execute permissions that we discussed in [Setting File Attributes – Part 3][3] of this series, there are other less used (but not less important) permission settings, sometimes referred to as “special permissions”. + +Like the basic permissions discussed earlier, they are set using an octal file or through a letter (symbolic notation) that indicates the type of permission. +Deleting user accounts + +You can delete an account (along with its home directory, if it’s owned by the user, and all the files residing therein, and also the mail spool) using the userdel command with the –remove option. + + # userdel --remove [username] + +#### Group Management #### + +Every time a new user account is added to the system, a group with the same name is created with the username as its only member. Other users can be added to the group later. One of the purposes of groups is to implement a simple access control to files and other system resources by setting the right permissions on those resources. + +For example, suppose you have the following users. + +- user1 (primary group: user1) +- user2 (primary group: user2) +- user3 (primary group: user3) + +All of them need read and write access to a file called common.txt located somewhere on your local system, or maybe on a network share that user1 has created. You may be tempted to do something like, + + # chmod 660 common.txt + OR + # chmod u=rw,g=rw,o= common.txt [notice the space between the last equal sign and the file name] + +However, this will only provide read and write access to the owner of the file and to those users who are members of the group owner of the file (user1 in this case). Again, you may be tempted to add user2 and user3 to group user1, but that will also give them access to the rest of the files owned by user user1 and group user1. + +This is where groups come in handy, and here’s what you should do in a case like this. 
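What you should do is hand the file to a dedicated group and enroll only the users who need it. Here is a minimal sketch, to be run as root; common_group, common.txt, and user1–user3 are the example names used in this section, and the sketch creates them if they do not exist yet:

```shell
# Run as root. Create the demo users if missing (-M: do not create home dirs)
for u in user1 user2 user3; do
    id "$u" >/dev/null 2>&1 || useradd -M "$u"
done

groupadd -f common_group             # -f: do not fail if the group already exists
touch common.txt
chown user1:common_group common.txt  # user1 owns it; common_group is group owner
chmod 660 common.txt                 # rw for owner and group, nothing for others

for u in user1 user2 user3; do
    usermod -aG common_group "$u"    # -aG: append to supplementary groups
done

getent group common_group            # verify the membership list
stat -c '%a %U:%G' common.txt        # verify mode and ownership
```

This way user2 and user3 gain access to common.txt only, without inheriting access to the rest of the files owned by user1 and group user1.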
+ +**Understanding Setuid** + +When the setuid permission is applied to an executable file, an user running the program inherits the effective privileges of the program’s owner. Since this approach can reasonably raise security concerns, the number of files with setuid permission must be kept to a minimum. You will likely find programs with this permission set when a system user needs to access a file owned by root. + +Summing up, it isn’t just that the user can execute the binary file, but also that he can do so with root’s privileges. For example, let’s check the permissions of /bin/passwd. This binary is used to change the password of an account, and modifies the /etc/shadow file. The superuser can change anyone’s password, but all other users should only be able to change their own. + +![passwd Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/passwd-command.png) + +passwd Command Examples + +Thus, any user should have permission to run /bin/passwd, but only root will be able to specify an account. Other users can only change their corresponding passwords. + +![Change User Password in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/change-user-password.png) + +Change User Password + +**Understanding Setgid** + +When the setgid bit is set, the effective GID of the real user becomes that of the group owner. Thus, any user can access a file under the privileges granted to the group owner of such file. In addition, when the setgid bit is set on a directory, newly created files inherit the same group as the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory. You will most likely use this approach whenever members of a certain group need access to all the files in a directory, regardless of the file owner’s primary group. + + # chmod g+s [filename] + +To set the setgid in octal form, prepend the number 2 to the current (or desired) basic permissions. 
+ + # chmod 2755 [directory] + +**Setting the SETGID in a directory** + +![Add Setgid in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-setgid-to-directory.png) + +Add Setgid to Directory + +**Understanding Sticky Bit** + +When the “sticky bit” is set on files, Linux just ignores it, whereas for directories it has the effect of preventing users from deleting or even renaming the files it contains unless the user owns the directory, the file, or is root. + +# chmod o+t [directory] + +To set the sticky bit in octal form, prepend the number 1 to the current (or desired) basic permissions. + +# chmod 1755 [directory] + +Without the sticky bit, anyone able to write to the directory can delete or rename files. For that reason, the sticky bit is commonly found on directories, such as /tmp, that are world-writable. + +![Add Stickybit in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-sticky-bit-to-directory.png) + +Add Stickybit to Directory + +### Special Linux File Attributes ### + +There are other attributes that enable further limits on the operations that are allowed on files. For example, prevent the file from being renamed, moved, deleted, or even modified. They are set with the [chattr command][4] and can be viewed using the lsattr tool, as follows. + + # chattr +i file1 + # chattr +a file2 + +After executing those two commands, file1 will be immutable (which means it cannot be moved, renamed, modified or deleted) whereas file2 will enter append-only mode (can only be open in append mode for writing). + +![Protect File from Deletion](http://www.tecmint.com/wp-content/uploads/2014/10/chattr-command.png) + +Chattr Command to Protect Files + +### Accessing the root Account and Using sudo ### + +One of the ways users can gain access to the root account is by typing. + + $ su + +and then entering root’s password. + +If authentication succeeds, you will be logged on as root with the current working directory as the same as you were before. 
If you want to be placed in root’s home directory instead, run. + + $ su - + +and then enter root’s password. + +![Enable sudo Access on Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Sudo-Access.png) + +Enable Sudo Access on Users + +The above procedure requires that a normal user knows root’s password, which poses a serious security risk. For that reason, the sysadmin can configure the sudo command to allow an ordinary user to execute commands as a different user (usually the superuser) in a very controlled and limited way. Thus, restrictions can be set on a user so as to enable him to run one or more specific privileged commands and no others. + +- Read Also: [Difference Between su and sudo User][5] + +To authenticate using sudo, the user uses his/her own password. After entering the command, we will be prompted for our password (not the superuser’s) and if the authentication succeeds (and if the user has been granted privileges to run the command), the specified command is carried out. + +To grant access to sudo, the system administrator must edit the /etc/sudoers file. It is recommended that this file is edited using the visudo command instead of opening it directly with a text editor. + + # visudo + +This opens the /etc/sudoers file using vim (you can follow the instructions given in [Install and Use vim as Editor – Part 2][6] of this series to edit the file). + +These are the most relevant lines. + + Defaults secure_path="/usr/sbin:/usr/bin:/sbin" + root ALL=(ALL) ALL + tecmint ALL=/bin/yum update + gacanepa ALL=NOPASSWD:/bin/updatedb + %admin ALL=(ALL) ALL + +Let’s take a closer look at them. + + Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin" + +This line lets you specify the directories that will be used for sudo, and is used to prevent using user-specific directories, which can harm the system. + +The next lines are used to specify permissions. 
+ + root ALL=(ALL) ALL + +- The first ALL keyword indicates that this rule applies to all hosts. +- The second ALL indicates that the user in the first column can run commands with the privileges of any user. +- The third ALL means any command can be run. + + tecmint ALL=/bin/yum update + +If no user is specified after the = sign, sudo assumes the root user. In this case, user tecmint will be able to run yum update as root. + + gacanepa ALL=NOPASSWD:/bin/updatedb + +The NOPASSWD directive allows user gacanepa to run /bin/updatedb without needing to enter his password. + + %admin ALL=(ALL) ALL + +The % sign indicates that this line applies to a group called “admin”. The meaning of the rest of the line is identical to that of an regular user. This means that members of the group “admin” can run all commands as any user on all hosts. + +To see what privileges are granted to you by sudo, use the “-l” option to list them. + +![Sudo Access Rules](http://www.tecmint.com/wp-content/uploads/2014/10/sudo-access-rules.png) + +Sudo Access Rules + +### Summary ### + +Effective user and file management skills are essential tools for any system administrator. In this article we have covered the basics and hope you can use it as a good starting to point to build upon. Feel free to leave your comments or questions below, and we’ll respond quickly. 
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/manage-users-and-groups-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/add-users-in-linux/ +[2]:http://www.tecmint.com/usermod-command-examples/ +[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/ +[4]:http://www.tecmint.com/chattr-command-examples/ +[5]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/ +[6]:http://www.tecmint.com/vi-editor-usage/ \ No newline at end of file diff --git a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md new file mode 100644 index 0000000000..7b58a467d7 --- /dev/null +++ b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md @@ -0,0 +1,229 @@ +Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper +================================================================================ +Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including finally issue escalation, when needed, to engineering support teams. 
+ +![Linux Package Management](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-9.png) + +Linux Foundation Certified Sysadmin – Part 9 + +Watch the following video that explains about the Linux Foundation Certification Program. + +注:youtube 视频 + + +This article is a Part 9 of 10-tutorial long series, today in this article we will guide you about Linux Package Management, that are required for the LFCS certification exam. + +### Package Management ### + +In few words, package management is a method of installing and maintaining (which includes updating and probably removing as well) software on the system. + +In the early days of Linux, programs were only distributed as source code, along with the required man pages, the necessary configuration files, and more. Nowadays, most Linux distributors use by default pre-built programs or sets of programs called packages, which are presented to users ready for installation on that distribution. However, one of the wonders of Linux is still the possibility to obtain source code of a program to be studied, improved, and compiled. + +**How package management systems work** + +If a certain package requires a certain resource such as a shared library, or another package, it is said to have a dependency. All modern package management systems provide some method of dependency resolution to ensure that when a package is installed, all of its dependencies are installed as well. + +**Packaging Systems** + +Almost all the software that is installed on a modern Linux system will be found on the Internet. It can either be provided by the distribution vendor through central repositories (which can contain several thousands of packages, each of which has been specifically built, tested, and maintained for the distribution) or be available in source code that can be downloaded and installed manually. 
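As a quick illustration of that dependency metadata, each family's high-level tool (described below) can list what a package requires; bash is used here only as a familiar example, and only the command matching your distribution will be available:

```shell
apt-cache depends bash         # Debian and derivatives
yum deplist bash               # CentOS
zypper info --requires bash    # openSUSE
```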
+ +Because different distribution families use different packaging systems (Debian: *.deb / CentOS: *.rpm / openSUSE: *.rpm built specially for openSUSE), a package intended for one distribution will not be compatible with another distribution. However, most distributions are likely to fall into one of the three distribution families covered by the LFCS certification. + +**High and low-level package tools** + +In order to perform the task of package management effectively, you need to be aware that you will have two types of available utilities: low-level tools (which handle in the backend the actual installation, upgrade, and removal of package files), and high-level tools (which are in charge of ensuring that the tasks of dependency resolution and metadata searching -”data about the data”- are performed). + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| DISTRIBUTION           | LOW-LEVEL TOOL | HIGH-LEVEL TOOL    |
| ---------------------- | -------------- | ------------------ |
| Debian and derivatives | dpkg           | apt-get / aptitude |
| CentOS                 | rpm            | yum                |
| openSUSE               | rpm            | zypper             |
+ +Let us see the descrption of the low-level and high-level tools. + +dpkg is a low-level package manager for Debian-based systems. It can install, remove, provide information about and build *.deb packages but it can’t automatically download and install their corresponding dependencies. + +- Read More: [15 dpkg Command Examples][1] + +apt-get is a high-level package manager for Debian and derivatives, and provides a simple way to retrieve and install packages, including dependency resolution, from multiple sources using the command line. Unlike dpkg, apt-get does not work directly with *.deb files, but with the package proper name. + +- Read More: [25 apt-get Command Examples][2] + +aptitude is another high-level package manager for Debian-based systems, and can be used to perform management tasks (installing, upgrading, and removing packages, also handling dependency resolution automatically) in a fast and easy way. It provides the same functionality as apt-get and additional ones, such as offering access to several versions of a package. + +rpm is the package management system used by Linux Standard Base (LSB)-compliant distributions for low-level handling of packages. Just like dpkg, it can query, install, verify, upgrade, and remove packages, and is more frequently used by Fedora-based distributions, such as RHEL and CentOS. + +- Read More: [20 rpm Command Examples][3] + +yum adds the functionality of automatic updates and package management with dependency management to RPM-based systems. As a high-level tool, like apt-get or aptitude, yum works with repositories. + +- Read More: [20 yum Command Examples][4] +- +### Common Usage of Low-Level Tools ### + +The most frequent tasks that you will do with low level tools are as follows: + +**1. Installing a package from a compiled (*.deb or *.rpm) file** + +The downside of this installation method is that no dependency resolution is provided. 
You will most likely choose to install a package from a compiled file when such package is not available in the distribution’s repositories and therefore cannot be downloaded and installed through a high-level tool. Since low-level tools do not perform dependency resolution, they will exit with an error if we try to install a package with unmet dependencies. + + # dpkg -i file.deb [Debian and derivative] + # rpm -i file.rpm [CentOS / openSUSE] + +**Note**: Do not attempt to install on CentOS a *.rpm file that was built for openSUSE, or vice-versa! + +**2. Upgrading a package from a compiled file** + +Again, you will only upgrade an installed package manually when it is not available in the central repositories. + + # dpkg -i file.deb [Debian and derivative] + # rpm -U file.rpm [CentOS / openSUSE] + +**3. Listing installed packages** + +When you first get your hands on an already working system, chances are you’ll want to know what packages are installed. + + # dpkg -l [Debian and derivative] + # rpm -qa [CentOS / openSUSE] + +If you want to know whether a specific package is installed, you can pipe the output of the above commands to grep, as explained in [manipulate files in Linux – Part 1][6] of this series. Suppose we need to verify if package mysql-common is installed on an Ubuntu system. + + # dpkg -l | grep mysql-common + +![Check Installed Packages in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Installed-Package.png) + +Check Installed Packages + +Another way to determine if a package is installed. + + # dpkg --status package_name [Debian and derivative] + # rpm -q package_name [CentOS / openSUSE] + +For example, let’s find out whether package sysdig is installed on our system. + + # rpm -qa | grep sysdig + +![Check sysdig Package](http://www.tecmint.com/wp-content/uploads/2014/11/Check-sysdig-Package.png) + +Check sysdig Package + +**4. 
Finding out which package installed a file** + + # dpkg --search file_name + # rpm -qf file_name + +For example, which package installed pw_dict.hwm? + + # rpm -qf /usr/share/cracklib/pw_dict.hwm + +![Query File in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Query-File-in-Linux.png) + +Query File in Linux + +### Common Usage of High-Level Tools ### + +The most frequent tasks that you will do with high level tools are as follows. + +**1. Searching for a package** + +aptitude update will update the list of available packages, and aptitude search will perform the actual search for package_name. + + # aptitude update && aptitude search package_name + +In the search all option, yum will search for package_name not only in package names, but also in package descriptions. + + # yum search package_name + # yum search all package_name + # yum whatprovides “*/package_name” + +Let’s supposed we need a file whose name is sysdig. To know that package we will have to install, let’s run. + + # yum whatprovides “*/sysdig” + +![Check Package Description in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Description.png) + +Check Package Description + +whatprovides tells yum to search the package the will provide a file that matches the above regular expression. + + # zypper refresh && zypper search package_name [On openSUSE] + +**2. Installing a package from a repository** + +While installing a package, you may be prompted to confirm the installation after the package manager has resolved all dependencies. Note that running update or refresh (according to the package manager being used) is not strictly necessary, but keeping installed packages up to date is a good sysadmin practice for security and dependency reasons. + + # aptitude update && aptitude install package_name [Debian and derivatives] + # yum update && yum install package_name [CentOS] + # zypper refresh && zypper install package_name [openSUSE] + +**3. 
Removing a package** + +The option remove will uninstall the package but leaving configuration files intact, whereas purge will erase every trace of the program from your system. +# aptitude remove / purge package_name +# yum erase package_name + + ---Notice the minus sign in front of the package that will be uninstalled, openSUSE --- + + # zypper remove -package_name + +Most (if not all) package managers will prompt you, by default, if you’re sure about proceeding with the uninstallation before actually performing it. So read the onscreen messages carefully to avoid running into unnecessary trouble! + +**4. Displaying information about a package** + +The following command will display information about the birthday package. + + # aptitude show birthday + # yum info birthday + # zypper info birthday + +![Check Package Information in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Information.png) + +Check Package Information + +### Summary ### + +Package management is something you just can’t sweep under the rug as a system administrator. You should be prepared to use the tools described in this article at a moment’s notice. Hope you find it useful in your preparation for the LFCS exam and for your daily tasks. Feel free to leave your comments or questions below. We will be more than glad to get back to you as soon as possible. 
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-package-management/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/dpkg-command-examples/ +[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ +[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ +[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ \ No newline at end of file From e07871e6e337a534fa779edab8cbf90a275f9fa3 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 18 Aug 2015 17:04:51 +0800 Subject: [PATCH 221/697] =?UTF-8?q?20150818-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
Limits--IBM Launch LinuxONE Mainframes.md | 52 +++++++++++++++++++ 1 file changed, 52 insertions(+) create mode 100644 sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md diff --git a/sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md b/sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md new file mode 100644 index 0000000000..f97c690e3a --- /dev/null +++ b/sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md @@ -0,0 +1,52 @@ +Linux Without Limits: IBM Launch LinuxONE Mainframes +================================================================================ +![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screenshot-2015-08-17-at-12.58.10.png) + +LinuxONE Emperor MainframeGood news for Ubuntu’s server team today as [IBM launch the LinuxONE][1] a Linux-only mainframe that is also able to run Ubuntu. + +The largest of the LinuxONE systems launched by IBM is called ‘Emperor’ and can scale up to 8000 virtual machines or tens of thousands of containers – a possible record for any one single Linux system. + +The LinuxONE is described by IBM as a ‘game changer’ that ‘unleashes the potential of Linux for business’. + +IBM and Canonical are working together on the creation of an Ubuntu distribution for LinuxONE and other IBM z Systems. Ubuntu will join RedHat and SUSE as ‘premier Linux distributions’ on IBM z. + +Alongside the ‘Emperor’ IBM is also offering the LinuxONE Rockhopper, a smaller mainframe for medium-sized businesses and organisations. + +IBM is the market leader in mainframes and commands over 90% of the mainframe market. + +注:youtube 视频 + + +### What Is a Mainframe Computer Used For? ### + +The computer you’re reading this article on would be dwarfed by a ‘big iron’ mainframe. 
They are large, hulking great cabinets packed full of high-end components, custom designed technology and dizzying amounts of storage (that is data storage, not ample room for pens and rulers). + +Mainframes computers are used by large organizations and businesses to process and store large amounts of data, crunch through statistics, and handle large-scale transaction processing. + +### ‘World’s Fastest Processor’ ### + +IBM has teamed up with Canonical Ltd to use Ubuntu on the LinuxONE and other IBM z Systems. + +The LinuxONE Emperor uses the IBM z13 processor. The chip, announced back in January, is said to be the world’s fastest microprocessor. It is able to deliver transaction response times in the milliseconds. + +But as well as being well equipped to handle for high-volume mobile transactions, the z13 inside the LinuxONE is also an ideal cloud system. + +It can handle more than 50 virtual servers per core for a total of 8000 virtual servers, making it a cheaper, greener and more performant way to scale-out to the cloud. + +**You don’t have to be a CIO or mainframe spotter to appreciate this announcement. The possibilities LinuxONE provides are clear enough. 
** + +Source: [Reuters (h/t @popey)][2] + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership + +作者:[Joey-Elijah Sneddon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:http://www-03.ibm.com/systems/z/announcement.html +[2]:http://www.reuters.com/article/2015/08/17/us-ibm-linuxone-idUSKCN0QM09P20150817 \ No newline at end of file From bb62496b06ad16f7baf2c1a992b84d7f73b156c7 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 18 Aug 2015 17:29:00 +0800 Subject: [PATCH 222/697] =?UTF-8?q?20150818-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...After More than 8 Years--See Comparison.md | 344 ++++++++++++++++++ ... 
22 Years of Journey and Still Counting.md | 109 ++++++
 ...k quotes from the command line on Linux.md |  99 +++++
 3 files changed, 552 insertions(+)
 create mode 100644 sources/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md
 create mode 100644 sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md
 create mode 100644 sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md

diff --git a/sources/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md b/sources/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md
new file mode 100644
index 0000000000..cf472613c4
--- /dev/null
+++ b/sources/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md
@@ -0,0 +1,344 @@
+A Linux User Using ‘Windows 10’ After More than 8 Years – See Comparison
+================================================================================
+Windows 10 is the newest member of the Windows NT family; it became generally available on July 29, 2015. It is the successor to Windows 8.1. Windows 10 is supported on 32-bit Intel architecture, AMD64 and ARMv7 processors.
+
+![Windows 10 and Linux Comparison](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-vs-Linux.jpg)
+
+Windows 10 and Linux Comparison
+
+As a Linux user of more than 8 continuous years, I thought I would test Windows 10, as it is making a lot of news these days. This article is a rundown of my observations. I will be seeing everything from the perspective of a Linux user, so you may find it a bit biased towards Linux, but with absolutely no false information.
+
+1. I searched Google with the text “download windows 10” and clicked the first link. 
+
+![Search Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Windows-10.jpg)
+
+Search Windows 10
+
+You may directly go to the link: [https://www.microsoft.com/en-us/software-download/windows10ISO][1]
+
+2. I was supposed to select an edition from ‘Windows 10’, ‘Windows 10 KN’, ‘Windows 10 N’ and ‘Windows 10 Single Language’.
+
+![Select Windows 10 Edition](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Windows-10-Edition.jpg)
+
+Select Windows 10 Edition
+
+For those who want to know the differences between the editions of Windows 10, here are brief details of each edition.
+
+- Windows 10 – Contains everything offered by Microsoft for this OS.
+- Windows 10 N – This edition comes without Media Player.
+- Windows 10 KN – This edition comes without media playing capabilities.
+- Windows 10 Single Language – Only one language pre-installed.
+
+3. I selected the first option, ‘Windows 10’, and clicked ‘Confirm’. Then I was supposed to select a product language. I chose ‘English’.
+
+I was provided with two download links, one for 32-bit and the other for 64-bit. I clicked 64-bit, as per my architecture.
+
+![Download Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Download-Windows-10.jpg)
+
+Download Windows 10
+
+With my download speed (15Mbps), it took me 3 long hours to download it. Unfortunately there was no torrent file for the OS, which could otherwise have made the overall process smoother. The OS iso image size is 3.8 GB.
+
+I could not find an image of a smaller size, but the truth is that there is no such thing as a net-installer image for Windows. Also there is no way to verify the hash value of the iso image after it has been downloaded.
+
+I wonder why Windows ignores such issues. To verify that the iso downloaded correctly, I need to write the image to a disk or a USB flash drive, then boot my system and keep my fingers crossed till the setup is finished.
+
+Let’s start. 
I made my USB flash drive bootable with the windows 10 iso using dd command, as: + + # dd if=/home/avi/Downloads/Win10_English_x64.iso of=/dev/sdb1 bs=512M; sync + +It took a few minutes to complete the process. I then rebooted the system and choose to boot from USB flash Drive in my UEFI (BIOS) settings. + +#### System Requirements #### + +If you are upgrading + +- Upgrade supported only from Windows 7 SP1 or Windows 8.1 + +If you are fresh Installing + +- Processor: 1GHz or faster +- RAM : 1GB and Above(32-bit), 2GB and Above(64-bit) +- HDD: 16GB and Above(32-bit), 20GB and Above(64-bit) +- Graphic card: DirectX 9 or later + WDDM 1.0 Driver + +### Installation of Windows 10 ### + +1. Windows 10 boots. Yet again they changed the logo. Also no information on whats going on. + +![Windows 10 Logo](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Logo.jpg) + +Windows 10 Logo + +2. Selected Language to install, Time & currency format and keyboard & Input methods before clicking Next. + +![Select Language and Time](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Language-and-Time.jpg) + +Select Language and Time + +3. And then ‘Install Now‘ Menu. + +![Install Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Windows-10.jpg) + +Install Windows 10 + +4. The next screen is asking for Product key. I clicked ‘skip’. + +![Windows 10 Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Product-Key.jpg) + +Windows 10 Product Key + +5. Choose from a listed OS. I chose ‘windows 10 pro‘. + +![Select Install Operating System](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Operating-System.jpg) + +Select Install Operating System + +6. oh yes the license agreement. Put a check mark against ‘I accept the license terms‘ and click next. + +![Accept License](http://www.tecmint.com/wp-content/uploads/2015/08/Accept-License.jpg) + +Accept License + +7. 
Next came the choice to upgrade (to Windows 10 from a previous version of Windows) or to install Windows. I don’t know why ‘Custom: Install Windows only’ is labelled as advanced by Windows. Anyway, I chose to install Windows only.
+
+![Select Installation Type](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Installation-Type.jpg)
+
+Select Installation Type
+
+8. Selected the file-system and clicked ‘next’.
+
+![Select Install Drive](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Drive.jpg)
+
+Select Install Drive
+
+9. The installer started to copy files, get files ready for installation, install features, install updates and finish up. It would be better if the installer showed verbose output on the actions it is taking.
+
+![Installing Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Installing-Windows.jpg)
+
+Installing Windows
+
+10. And then Windows restarted. They said a reboot was needed to continue.
+
+![Windows Installation Process](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Installation-Process.jpg)
+
+Windows Installation Process
+
+11. And then all I got was the below screen, which reads “Getting Ready”. It took 5+ minutes at this point. No idea what was going on. No output.
+
+![Windows Getting Ready](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Getting-Ready.jpg)
+
+Windows Getting Ready
+
+12. Yet again, it was time to “Enter Product Key”. I clicked “Do this later” and then used express settings.
+
+![Enter Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Enter-Product-Key.jpg)
+
+Enter Product Key
+
+![Select Express Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Express-Settings.jpg)
+
+Select Express Settings
+
+14. And then three more output screens, where I as a Linuxer expected that the installer would tell me what it was doing, but all in vain. 
+ +![Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Loading-Windows.jpg) + +Loading Windows + +![Getting Updates](http://www.tecmint.com/wp-content/uploads/2015/08/Getting-Updates.jpg) + +Getting Updates + +![Still Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Still-Loading-Windows.jpg) + +Still Loading Windows + +15. And then the installer wanted to know who owns this machine “My organization” or I myself. Chose “I own it” and then next. + +![Select Organization](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Organization.jpg) + +Select Organization + +16. Installer prompted me to join “Azure Ad” or “Join a domain”, before I can click ‘continue’. I chooses the later option. + +![Connect Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Connect-Windows.jpg) + +Connect Windows + +17. The Installer wants me to create an account. So I entered user_name and clicked ‘Next‘, I was expecting an error message that I must enter a password. + +![Create Account](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Account.jpg) + +Create Account + +18. To my surprise Windows didn’t even showed warning/notification that I must create password. Such a negligence. Anyway I got my desktop. + +![Windows 10 Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Desktop.jpg) + +Windows 10 Desktop + +#### Experience of a Linux-user (Myself) till now #### + +- No Net-installer Image +- Image size too heavy +- No way to check the integrity of iso downloaded (no hash check) +- The booting and installation remains same as it was in XP, Windows 7 and 8 perhaps. +- As usual no output on what windows Installer is doing – What file copying or what package installing. +- Installation was straight forward and easy as compared to the installation of a Linux distribution. + +### Windows 10 Testing ### + +19. The default Desktop is clean. It has a recycle bin Icon on the default desktop. 
Search web directly from the desktop itself. Additionally icons for Task viewing, Internet browsing, folder browsing and Microsoft store is there. As usual notification bar is present on the bottom right to sum up desktop. + +![Deskop Shortcut Icons](http://www.tecmint.com/wp-content/uploads/2015/08/Deskop-Shortcut-icons.jpg) + +Deskop Shortcut Icons + +20. Internet Explorer replaced with Microsoft Edge. Windows 10 has replace the legacy web browser Internet Explorer also known as IE with Edge aka project spartan. + +![Microsoft Edge Browser](http://www.tecmint.com/wp-content/uploads/2015/08/Edge-browser.jpg) + +Microsoft Edge Browser + +It is fast at least as compared to IE (as it seems it testing). Familiar user Interface. The home screen contains news feed updates. There is also a search bar title that reads ‘Where to next?‘. The browser loads time is considerably low which result in improving overall speed and performance. The memory usages of Edge seems normal. + +![Windows Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Performance.jpg) + +Windows Performance + +Edge has got cortana – Intelligent Personal Assistant, Support for chrome-extension, web Note – Take notes while Browsing, Share – Right from the tab without opening any other TAB. + +#### Experience of a Linux-user (Myself) on this point #### + +21. Microsoft has really improved web browsing. Lets see how stable and fine it remains. It don’t lag as of now. + +22. Though RAM usages by Edge was fine for me, a lots of users are complaining that Edge is notorious for Excessive RAM Usages. + +23. Difficult to say at this point if Edge is ready to compete with Chrome and/or Firefox at this point of time. Lets see what future unfolds. + +#### A few more Virtual Tour #### + +24. Start Menu redesigned – Seems clear and effective. Metro icons make it live. 
Populated with most commonly applications viz., Calendar, Mail, Edge, Photos, Contact, Temperature, Companion suite, OneNote, Store, Xbox, Music, Movies & TV, Money, News, Store, etc. + +![Windows Look and Feel](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Look.jpg) + +Windows Look and Feel + +In Linux on Gnome Desktop Environment, I use to search required applications simply by pressing windows key and then type the name of the application. + +![Search Within Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Within-Desktop.jpg) + +Search Within Desktop + +25. File Explorer – seems clear Designing. Edges are sharp. In the left pane there is link to quick access folders. + +![Windows File Explorer](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-File-Explorer.jpg) + +Windows File Explorer + +Equally clear and effective file explorer on Gnome Desktop Environment on Linux. Removed UN-necessary graphics and images from icons is a plus point. + +![File Browser on Gnome](http://www.tecmint.com/wp-content/uploads/2015/08/File-Browser.jpg) + +File Browser on Gnome + +26. Settings – Though the settings are a bit refined on Windows 10, you may compare it with the settings on a Linux Box. + +**Settings on Windows** + +![Windows 10 Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Settings.jpg) + +Windows 10 Settings + +**Setting on Linux Gnome** + +![Gnome Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Settings.jpg) + +Gnome Settings + +27. List of Applications – List of Application on Linux is better than what they use to provide (based upon my memory, when I was a regular windows user) but still it stands low as compared to how Gnome3 list application. 
+ +**Application Listed by Windows** + +![Application List on Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Application-List-on-Windows-10.jpg) + +Application List on Windows 10 + +**Application Listed by Gnome3 on Linux** + +![Gnome Application List on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Application-List-on-Linux.jpg) + +Gnome Application List on Linux + +28. Virtual Desktop – Virtual Desktop feature of Windows 10 is one of those topic which are very much talked about these days. + +Here is the virtual Desktop in Windows 10. + +![Windows Virtual Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Virtual-Desktop.jpg) + +Windows Virtual Desktop + +and the virtual Desktop on Linux we are using for more than 2 decades. + +![Virtual Desktop on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Virtual-Desktop-on-Linux.jpg) + +Virtual Desktop on Linux + +#### A few other features of Windows 10 #### + +29. Windows 10 comes with wi-fi sense. It shares your password with others. Anyone who is in the range of your wi-fi and connected to you over Skype, Outlook, Hotmail or Facebook can be granted access to your wifi network. And mind it this feature has been added as a feature by microsoft to save time and hassle-free connection. + +In a reply to question raised by Tecmint, Microsoft said – The user has to agree to enable wifi sense, everytime on a new network. oh! What a pathetic taste as far as security is concerned. I am not convinced. + +30. Up-gradation from Windows 7 and Windows 8.1 is free though the retail cost of Home and pro editions are approximately $119 and $199 respectively. + +31. Microsoft released first cumulative update for windows 10, which is said to put system into endless crash loop for a few people. Windows perhaps don’t understand such problem or don’t want to work on that part don’t know why. + +32. Microsoft’s inbuilt utility to block/hide unwanted updates don’t work in my case. 
This means if an update is there, there is no way to block/hide it. Sorry, Windows users!
+
+#### A few features native to Linux that Windows 10 has ####
+
+Windows 10 has a lot of features that were taken directly from Linux. If Linux had not been released under the GNU license, perhaps Microsoft would never have had the below features.
+
+33. Command-line package management – Yup! You heard it right. Windows 10 has built-in package management. It works only in Windows PowerShell. OneGet is the official package manager for Windows. The Windows package manager in action:
+
+![Windows 10 Package Manager](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Package-Manager.jpg)
+
+Windows 10 Package Manager
+
+- Border-less windows
+- Flat icons
+- Virtual desktops
+- One search for online + offline results
+- Convergence of mobile and desktop OS
+
+### Overall Conclusion ###
+
+- Improved responsiveness
+- Well-implemented animation
+- Low on resources
+- Improved battery life
+- The Microsoft Edge web browser is rock solid
+- Supported on Raspberry Pi 2
+- It is good because Windows 8/8.1 was not up to the mark and really bad.
+- It is the same old wine in a new bottle. Almost the same things with brushed-up icons.
+
+What my testing suggests is that Windows 10 has improved on a few things like look and feel (as Windows always did); +1 for Project Spartan, virtual desktops, command-line package management, and one search for online and offline results. It is overall an improved product, but those who think that Windows 10 will prove to be the last nail in the coffin of Linux are mistaken.
+
+Linux is years ahead of Windows. Their approach is different. In the near future Windows won’t stand anywhere near Linux, and there is nothing for which a Linux user needs to go to Windows 10.
+
+That’s all for now. Hope you liked the post. I will be here again with another interesting post you people will love to read. Provide us with your valuable feedback in the comments below. 
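One complaint raised in this review was that no hash is published to verify the downloaded iso against. On Linux that check is routine; below is a minimal sketch of the workflow. Note the filename and the expected checksum are placeholder values for illustration (the demo creates an empty stand-in file, whose SHA-256 is the well-known empty-input digest), not anything published by Microsoft.

```shell
# Sketch: verify a downloaded image against a vendor-published SHA-256 value.
# "expected" below is the SHA-256 of empty input, used only so this demo passes;
# a real check would paste the hash published alongside the download.
iso="Win10_English_x64.iso"
expected="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

printf '' > "$iso"   # stand-in "download" so the sketch is self-contained

actual=$(sha256sum "$iso" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH"
fi
```

With a real image, a mismatch would flag a corrupted or tampered download before you ever write it to a USB stick.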
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/a-linux-user-using-windows-10-after-more-than-8-years-see-comparison/ + +作者:[vishek Kumar][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:https://www.microsoft.com/en-us/software-download/windows10ISO \ No newline at end of file diff --git a/sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md b/sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md new file mode 100644 index 0000000000..f74384b616 --- /dev/null +++ b/sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md @@ -0,0 +1,109 @@ +Debian GNU/Linux Birthday : A 22 Years of Journey and Still Counting… +================================================================================ +On 16th August 2015, the Debian project has celebrated its 22nd anniversary, making it one of the oldest popular distribution in open source world. Debian project was conceived and founded in the year 1993 by Ian Murdock. By that time Slackware had already made a remarkable presence as one of the earliest Linux Distribution. + +![Happy 22nd Birthday to Debian](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-22nd-Birthday.png) + +Happy 22nd Birthday to Debian Linux + +Ian Ashley Murdock, an American Software Engineer by profession, conceived the idea of Debian project, when he was a student of Purdue University. He named the project Debian after the name of his then-girlfriend Debra Lynn (Deb) and his name. He later married her and then got divorced in January 2008. 
+ +![Ian Murdock](http://www.tecmint.com/wp-content/uploads/2014/08/Ian-Murdock.jpeg) + +Debian Creator: Ian Murdock + +Ian is currently serving as Vice President of Platform and Development Community at ExactTarget. + +Debian (as Slackware) was the result of unavailability of up-to mark Linux Distribution, that time. Ian in an interview said – “Providing the first class Product without profit would be the sole aim of Debian Project. Even Linux was not reliable and up-to mark that time. I Remember…. Moving files between file-system and dealing with voluminous file would often result in Kernel Panic. However the project Linux was promising. The availability of Source Code freely and the potential it seemed was qualitative.” + +I remember … like everyone else I wanted to solve problem, run something like UNIX at home, but it was not possible…neither financially nor legally, in the other sense . Then I come to know about GNU kernel Development and its non-association with any kind of legal issues, he added. He was sponsored by Free Software Foundation (FSF) in the early days when he was working on Debian, it also helped Debian to take a giant step though Ian needed to finish his degree and hence quited FSF roughly after one year of sponsorship. + +### Debian Development History ### + +- **Debian 0.01 – 0.09** : Released between August 1993 – December 1993. +- **Debian 0.91 ** – Released in January 1994 with primitive package system, No dependencies. +- **Debian 0.93 rc5** : March 1995. It is the first modern release of Debian, dpkg was used to install and maintain packages after base system installation. +- **Debian 0.93 rc6**: Released in November 1995. It was last a.out release, deselect made an appearance for the first time – 60 developers were maintaining packages, then at that time. +- **Debian 1.1**: Released in June 1996. Code name – Buzz, Packages count – 474, Package Manager dpkg, Kernel 2.0, ELF. +- **Debian 1.2**: Released in December 1996. 
Code name – Rex, Packages count – 848, Developers Count – 120. +- **Debian 1.3**: Released in July 1997. Code name – Bo, package count 974, Developers count – 200. +- **Debian 2.0**: Released in July 1998. Code name: Hamm, Support for architecture – Intel i386 and Motorola 68000 series, Number of Packages: 1500+, Number of Developers: 400+, glibc included. +- **Debian 2.1**: Released on March 09, 1999. Code name – slink, support architecture Alpha and Sparc, apt came in picture, Number of package – 2250. +- **Debian 2.2**: Released on August 15, 2000. Code name – Potato, Supported architecture – Intel i386, Motorola 68000 series, Alpha, SUN Sparc, PowerPC and ARM architecture. Number of packages: 3900+ (binary) and 2600+ (Source), Number of Developers – 450. There were a group of people studied and came with an article called Counting potatoes, which shows – How a free software effort could lead to a modern operating system despite all the issues around it. +- **Debian 3.0** : Released on July 19th, 2002. Code name – woody, Architecture supported increased– HP, PA_RISC, IA-64, MIPS and IBM, First release in DVD, Package Count – 8500+, Developers Count – 900+, Cryptography. +- **Debian 3.1**: Release on June 6th, 2005. Code name – sarge, Architecture support – same as woody + AMD64 – Unofficial Port released, Kernel – 2.4 qnd 2.6 series, Number of Packages: 15000+, Number of Developers : 1500+, packages like – OpenOffice Suite, Firefox Browser, Thunderbird, Gnome 2.8, kernel 3.3 Advanced Installation Support: RAID, XFS, LVM, Modular Installer. +- **Debian 4.0**: Released on April 8th, 2007. Code name – etch, architecture support – same as sarge, included AMD64. Number of packages: 18,200+ Developers count : 1030+, Graphical Installer. +- **Debian 5.0**: Released on February 14th, 2009. Code name – lenny, Architecture Support – Same as before + ARM. Number of packages: 23000+, Developers Count: 1010+. +- **Debian 6.0** : Released on July 29th, 2009. 
Code name – squeeze, Package included : kernel 2.6.32, Gnome 2.3. Xorg 7.5, DKMS included, Dependency-based. Architecture : Same as pervious + kfreebsd-i386 and kfreebsd-amd64, Dependency based booting. +- **Debian 7.0**: Released on may 4, 2013. Code name: wheezy, Support for Multiarch, Tools for private cloud, Improved Installer, Third party repo need removed, full featured multimedia-codec, Kernel 3.2, Xen Hypervisor 4.1.4 Package Count: 37400+. +- **Debian 8.0**: Released on May 25, 2015 and Code name: Jessie, Systemd as the default init system, powered by Kernel 3.16, fast booting, cgroups for services, possibility of isolating part of the services, 43000+ packages. Sysvinit init system available in Jessie. + +**Note**: Linux Kernel initial release was on October 05, 1991 and Debian initial release was on September 15, 1993. So, Debian is there for 22 Years running Linux Kernel which is there for 24 years. + +### Debian Facts ### + +Year 1994 was spent on organizing and managing Debian project so that it would be easy for others to contribute. Hence no release for users were made this year however there were certain internal release. + +Debian 1.0 was never released. A CDROM manufacturer company by mistakenly labelled an unreleased version as Debian 1.0. Hence to avoid confusion Debian 1.0 was released as Debian 1.1 and since then only the concept of official CDROM images came into existence. + +Each release of Debian is a character of Toy Story. + +Debian remains available in old stable, stable, testing and experimental, all the time. + +The Debian Project continues to work on the unstable distribution (codenamed sid, after the evil kid from the Toy Story). Sid is the permanent name for the unstable distribution and is remains ‘Still In Development’. The testing release is intended to become the next stable release and is currently codenamed jessie. + +Debian official distribution includes only Free and OpenSource Software and nothing else. 
However the availability of the contrib and non-free repositories makes it possible to install packages which are free but whose dependencies are not licensed free (contrib), and packages licensed under non-free terms.
+
+Debian is the mother of a lot of Linux distributions. Some of these include:
+
+- Damn Small Linux
+- KNOPPIX
+- Linux Advanced
+- MEPIS
+- Ubuntu
+- 64studio (No longer active)
+- LMDE
+
+Debian is the world’s largest non-commercial Linux distribution. It is written mainly in the C programming language (32.1%), with the rest in 70 other languages.
+
+![Debian Contribution](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-Programming.png)
+
+Debian Contribution
+
+Image Source: [Xmodulo][1]
+
+The Debian project contains 68.5 million actual loc (lines of code), plus 4.5 million lines of comments and whitespace.
+
+The International Space Station dropped Windows & Red Hat in favour of Debian – the astronauts are using one release back, now “squeeze”, for stability and the strength of its community.
+
+Thank God! Who would have heard the scream from space on a Windows Metro screen :P
+
+#### The Black Wednesday ####
+
+On November 20th, 2002, the University of Twente Network Operation Center (NOC) caught fire. The fire department gave up protecting the server area. The NOC hosted satie.debian.org, which included the security and non-US archives, New Maintainer, quality assurance and databases – everything was turned to ashes. These services were later rebuilt by Debian.
+
+#### The Future Distro ####
+
+Next in the list is Debian 9, code name Stretch; what it will include is yet to be revealed. The best is yet to come, just wait for it!
+
+A lot of distributions made an appearance in the Linux distro genre and then disappeared. In most cases, managing them as they got bigger was a concern. But this is certainly not the case with Debian. It has hundreds of thousands of developers and maintainers all across the globe. It is one distro which has been there from the initial days of Linux. 
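Figures like the 68.5 million lines of code quoted above come from tools such as sloccount that walk the entire source tree; the core idea can be sketched in a couple of shell commands. This is only a rough raw-line count (real tools also strip comments and blank lines), and the sample files are created just to make the sketch self-contained:

```shell
# Rough sketch of a raw line count over a source tree.
# Real tools (sloccount, cloc) also exclude comments and blank lines.
mkdir -p /tmp/loc-demo/src
printf 'int main(void) {\n    return 0;\n}\n' > /tmp/loc-demo/src/a.c
printf '/* stub */\n' > /tmp/loc-demo/src/b.c

# Count lines across every C file under the tree
find /tmp/loc-demo -name '*.c' -print0 | xargs -0 cat | wc -l
```

Running the same kind of walk over the full Debian archive, bucketed per file extension, is essentially how per-language breakdowns like the chart above are produced.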
+
+The contribution of Debian to the Linux ecosystem can’t be measured in words. If there had been no Debian, Linux would not be so rich and user-friendly. Debian is among the distros considered highly reliable, secure and stable, and a perfect choice for web servers.
+
+That was just the beginning of Debian. It has come a long way and is still going. The future is here! If you have not used Debian till now, what are you waiting for? Just download your image and get started; we will be here if you get into trouble.
+
+- [Debian Homepage][2]
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/happy-birthday-to-debian-gnu-linux/
+
+作者:[Avishek Kumar][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/avishek/
+[1]:http://xmodulo.com/2013/08/interesting-facts-about-debian-linux.html
+[2]:https://www.debian.org/
\ No newline at end of file

diff --git a/sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md b/sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md
new file mode 100644
index 0000000000..662ac1eb84
--- /dev/null
+++ b/sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md
@@ -0,0 +1,99 @@
+How to monitor stock quotes from the command line on Linux
+================================================================================
+If you are a stock investor or trader, monitoring the stock market will be one of your daily routines. Most likely you will be using an online trading platform which comes with some fancy real-time charts and all sorts of advanced stock analysis and tracking tools. 
While such sophisticated market research tools are a must for any serious stock investor to read the market, monitoring the latest stock quotes still goes a long way toward building a profitable portfolio.
+
+If you are a full-time system admin constantly sitting in front of terminals while trading stocks as a hobby during the day, a simple command-line tool that shows real-time stock quotes will be a blessing for you.
+
+In this tutorial, let me introduce a neat command-line tool that allows you to monitor stock quotes from the command line on Linux.
+
+This tool is called [Mop][1]. Written in Go, this lightweight command-line tool is extremely handy for tracking the latest stock quotes from the U.S. markets. You can easily customize the list of stocks to monitor, and it shows the latest stock quotes in an ncurses-based, easy-to-read interface.
+
+**Note**: Mop obtains the latest stock quotes via the Yahoo! Finance API. Be aware that their stock quotes are known to be delayed by 15 minutes. So if you are looking for "real-time" stock quotes with zero delay, Mop is not the tool for you. Such "live" stock quote feeds are usually available for a fee via some proprietary closed-door interface. With that being said, let's see how you can use Mop in a Linux environment.
+
+### Install Mop on Linux ###
+
+Since Mop is implemented in Go, you will need to install the Go language first. If you don't have Go installed, follow [this guide][2] to install Go on your Linux platform. Make sure to set the GOPATH environment variable as described in the guide.
+
+Once Go is installed, proceed to install Mop as follows.
+
+**Debian, Ubuntu or Linux Mint**
+
+    $ sudo apt-get install git
+    $ go get github.com/michaeldv/mop
+    $ cd $GOPATH/src/github.com/michaeldv/mop
+    $ make install
+
+**Fedora, CentOS or RHEL**
+
+    $ sudo yum install git
+    $ go get github.com/michaeldv/mop
+    $ cd $GOPATH/src/github.com/michaeldv/mop
+    $ make install
+
+The above commands will install Mop under $GOPATH/bin. 
+
+Now edit your .bashrc to include $GOPATH/bin in your PATH variable.
+
+    export PATH="$PATH:$GOPATH/bin"
+
+----------
+
+    $ source ~/.bashrc
+
+### Monitor Stock Quotes from the Command Line with Mop ###
+
+To launch Mop, simply run the command called cmd.
+
+    $ cmd
+
+At the first launch, you will see a few stock tickers which Mop comes pre-configured with.
+
+![](https://farm6.staticflickr.com/5749/20018949104_c8c64e0e06_c.jpg)
+
+The quotes show information like the latest price, %change, daily low/high, 52-week low/high, dividend, and annual yield. Mop obtains market overview information from [CNN][3], and individual stock quotes from [Yahoo Finance][4]. The stock quote information updates itself within the terminal periodically.
+
+### Customize Stock Quotes in Mop ###
+
+Let's try customizing the stock list. Mop provides easy-to-remember shortcuts for this: '+' to add a new stock, and '-' to remove a stock.
+
+To add a new stock, press '+', and type a stock ticker symbol to add (e.g., MSFT). You can add more than one stock at once by typing a comma-separated list of tickers (e.g., "MSFT, AMZN, TSLA").
+
+![](https://farm1.staticflickr.com/636/20648164441_642ae33a22_c.jpg)
+
+Removing stocks from the list can be done similarly by pressing '-'.
+
+### Sort Stock Quotes in Mop ###
+
+You can sort the stock quote list based on any column. To sort, press 'o', and use the left/right keys to choose the column to sort by. When a particular column is chosen, you can sort the list in either increasing or decreasing order by pressing ENTER.
+
+![](https://farm1.staticflickr.com/724/20648164481_15631eefcf_c.jpg)
+
+By pressing 'g', you can group your stocks based on whether they are advancing or declining for the day. Advancing issues are represented in green, while declining issues are colored in white.
+
+![](https://c2.staticflickr.com/6/5633/20615252696_a5bd44d3aa_b.jpg)
+
+If you want to access the help page, simply press '?'. 
+ +![](https://farm1.staticflickr.com/573/20632365342_da196b657f_c.jpg) + +### Conclusion ### + +As you can see, Mop is a lightweight, yet extremely handy stock monitoring tool. Of course you can easily access stock quote information elsewhere, from online websites, your smartphone, etc. However, if you spend a great deal of your time in a terminal environment, Mop can easily fit into your workspace, hopefully without distracting much of your workflow. Just let it run and continuously update market data in one of your terminals, and be done with it. + +Happy trading! + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/monitor-stock-quotes-command-line-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:https://github.com/michaeldv/mop +[2]:http://ask.xmodulo.com/install-go-language-linux.html +[3]:http://money.cnn.com/data/markets/ +[4]:http://finance.yahoo.com/ \ No newline at end of file From 95631da14d2cff6bfb59aee4227d291999a3b4f6 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 18 Aug 2015 17:34:47 +0800 Subject: [PATCH 223/697] =?UTF-8?q?20150818-4=20=E9=80=89=E9=A2=98=20?= =?UTF-8?q?=E5=8F=A6=E4=B8=80=E7=AF=87=20IBM=20=E7=9A=84=E5=A4=87=E9=80=89?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Ubuntu Linux is coming to IBM mainframes.md | 46 +++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md diff --git a/sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md b/sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md new file mode 100644 index 0000000000..8da7227eee --- /dev/null +++ b/sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md 
@@ -0,0 +1,46 @@ +​Ubuntu Linux is coming to IBM mainframes +================================================================================ +SEATTLE -- It's finally happened. At [LinuxCon][1], IBM and [Canonical][2] announced that [Ubuntu Linux][3] will soon be running on IBM mainframes. + +![](http://zdnet2.cbsistatic.com/hub/i/2015/08/17/f389e12f-03f5-48cc-8019-af4ccf6c6ecd/f15b099e439c0e3a5fd823637d4bcf87/ubuntu-mainframe.jpg) + +You'll soon be able to get your IBM mainframe in Ubuntu Linux orange + +According to Ross Mauri, IBM's General Manager of System z, and Mark Shuttleworth, Canonical and Ubuntu's founder, this move came about because of customer demand. For over a decade, [Red Hat Enterprise Linux (RHEL)][4] and [SUSE Linux Enterprise Server (SLES)][5] were the only supported IBM mainframe Linux distributions. + +As Ubuntu matured, more and more businesses turned to it for their enterprise Linux, and more and more of them wanted it on IBM big iron hardware. In particular, banks wanted Ubuntu there. Soon, financial CIOs will have their wish granted. + +In an interview, Shuttleworth said that Ubuntu Linux will be available on the mainframe by April 2016 in the next long-term support version of Ubuntu: Ubuntu 16.04. Canonical and IBM already made the first move in this direction in late 2014 by bringing [Ubuntu to IBM's POWER][6] architecture. + +Before that, Canonical and IBM almost signed on the dotted line to bring [Ubuntu to IBM mainframes in 2011][7], but that deal was never finalized. This time, it's happening. + +Jane Silber, Canonical's CEO, explained in a statement, "Our [expansion of Ubuntu platform][8] support to [IBM z Systems][9] is a recognition of the number of customers that count on z Systems to run their businesses, and the maturity the hybrid cloud is reaching in the marketplace." 
+ +**Silber continued:** + +> With support of z Systems, including [LinuxONE][10], Canonical is also expanding our relationship with IBM, building on our support for the POWER architecture and OpenPOWER ecosystem. Just as Power Systems clients are now benefiting from the scaleout capabilities of Ubuntu, and our agile development process which results in first to market support of new technologies such as CAPI (Coherent Accelerator Processor Interface) on POWER8, z Systems clients can expect the same rapid rollout of technology advancements, and benefit from [Juju][11] and our other cloud tools to enable faster delivery of new services to end users. In addition, our collaboration with IBM includes the enablement of scale-out deployment of many IBM software solutions with Juju Charms. Mainframe clients will delight in having a wealth of 'charmed' IBM solutions, other software provider products, and open source solutions, deployable on mainframes via Juju. + +Shuttleworth expects Ubuntu on z to be very successful. "It's blazingly fast, and with its support for OpenStack, people who want exceptional cloud region performance will be very happy." + +-------------------------------------------------------------------------------- + +via: http://www.zdnet.com/article/ubuntu-linux-is-coming-to-the-mainframe/#ftag=RSSbaffb68 + +作者:[Steven J. 
Vaughan-Nichols][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[1]:http://events.linuxfoundation.org/events/linuxcon-north-america +[2]:http://www.canonical.com/ +[3]:http://www.ubuntu.com/ +[4]:http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux +[5]:https://www.suse.com/products/server/ +[6]:http://www.zdnet.com/article/ibm-doubles-down-on-linux/ +[7]:http://www.zdnet.com/article/mainframe-ubuntu-linux/ +[8]:https://insights.ubuntu.com/2015/08/17/ibm-and-canonical-plan-ubuntu-support-on-ibm-z-systems-mainframe/ +[9]:http://www-03.ibm.com/systems/uk/z/ +[10]:http://www.zdnet.com/article/linuxone-ibms-new-linux-mainframes/ +[11]:https://jujucharms.com/ \ No newline at end of file From 6d3282d2b50edc9a671187069efdcb768c4d761e Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 18 Aug 2015 17:41:53 +0800 Subject: [PATCH 224/697] =?UTF-8?q?20150818-5=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ity Components Live Container Migration.md | 53 +++++++++++++++++++ 1 file changed, 53 insertions(+) create mode 100644 sources/talk/20150818 Docker Working on Security Components Live Container Migration.md diff --git a/sources/talk/20150818 Docker Working on Security Components Live Container Migration.md b/sources/talk/20150818 Docker Working on Security Components Live Container Migration.md new file mode 100644 index 0000000000..ad974b4859 --- /dev/null +++ b/sources/talk/20150818 Docker Working on Security Components Live Container Migration.md @@ -0,0 +1,53 @@ +Docker Working on Security Components, Live Container Migration +================================================================================ +![Docker Container 
Talk](http://www.eweek.com/imagesvr_ce/1905/290x195DockerMarianna.jpg) + +**Docker developers take the stage at Containercon and discuss their work on future container innovations for security and live migration.** + +SEATTLE—Containers are one of the hottest topics in IT today, and at the Linuxcon USA event here there is a co-located event called Containercon, dedicated to this virtualization technology. + +Docker, the lead commercial sponsor of the open-source Docker effort, brought three of its top people to the keynote stage today, but not Docker founder Solomon Hykes. + +Hykes, who delivered a Linuxcon keynote in 2014, was in the audience though, as Senior Vice President of Engineering Marianna Tessel, Docker security chief Diogo Monica and Docker chief maintainer Michael Crosby presented what's new and what's coming in Docker. + +Tessel emphasized that Docker is very real today and used in production environments at some of the largest organizations on the planet, including the U.S. Government. Docker is also working in small environments, including the Raspberry Pi small form factor ARM computer, which now can support up to 2,300 containers on a single device. + +"We're getting more powerful and at the same time Docker will also get simpler to use," Tessel said. + +As a metaphor, Tessel said that the whole Docker experience is much like a cruise ship, where there is powerful and complex machinery that powers the ship, yet the experience for passengers is all smooth sailing. + +One area that Docker is trying to make easier is security. Tessel said that security is mind-numbingly complex for most people as organizations constantly try to avoid network breaches. + +That's where Docker Content Trust comes into play, which is a configurable feature in the recent Docker 1.8 release. Diogo Mónica, security lead for Docker, joined Tessel on stage and said that security is a hard topic, which is why Docker Content Trust is being developed. 
+ +With Docker Content Trust there is a verifiable way to make sure that a given Docker application image is authentic. There also are controls to limit fraud and potential malicious code injection by verifying application freshness. + +To prove his point, Monica did a live demonstration of what could happen if Content Trust is not enabled. In one instance, a Website update is manipulated to allow the demo Web app to be defaced. When Content Trust is enabled, the hack didn't work and was blocked. + +"Don't let the simple demo fool you," Tessel said. "You have seen the best security possible." + +One area where containers haven't been put to use before is for live migration, which on VMware virtual machines is a technology called vMotion. It's an area that Docker is currently working on. + +Docker chief maintainer Michael Crosby did an onstage demonstration of a live migration of Docker containers. Crosby referred to the approach as checkpoint and restore, where a running container gets a checkpoint snapshot and is then restored to another location. + +A container also can be cloned and then run in another location. Crosby humorously referred to his cloned container as "Dolly," a reference to the world's first cloned animal, Dolly the sheep. + +Tessel also took time to talk about the RunC component of containers, which is now a technology component that is being developed by the Open Containers Initiative as a multi-stakeholder process. With RunC, containers expand beyond Linux to multiple operating systems including Windows and Solaris. + +Overall, Tessel said that she can't predict the future of Docker, though she is very optimistic. + +"I'm not sure what the future is, but I'm sure it'll be out of this world," Tessel said. + +Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist. 
+ +-------------------------------------------------------------------------------- + +via: http://www.eweek.com/virtualization/docker-working-on-security-components-live-container-migration.html + +作者:[Sean Michael Kerner][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ \ No newline at end of file From 4d084a2f3144ab58cf65b7f540a5058e97c85d65 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 18 Aug 2015 22:36:23 +0800 Subject: [PATCH 225/697] [Translated]RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md --- ...to Configure and Encrypt System Storage.md | 271 ------------------ ...to Configure and Encrypt System Storage.md | 269 +++++++++++++++++ 2 files changed, 269 insertions(+), 271 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md b/sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md deleted file mode 100644 index 0e631ce37d..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md +++ /dev/null @@ -1,271 +0,0 @@ -FSSlc translating - -RHCSA Series: Using ‘Parted’ and ‘SSM’ to Configure and Encrypt System Storage – Part 6 -================================================================================ -In this article we will discuss how to set up and configure local system storage in Red Hat Enterprise Linux 7 using classic tools and introducing 
the System Storage Manager (also known as SSM), which greatly simplifies this task. - -![Configure and Encrypt System Storage](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-and-Encrypt-System-Storage.png) - -RHCSA: Configure and Encrypt System Storage – Part 6 - -Please note that we will present this topic in this article but will continue its description and usage on the next one (Part 7) due to vastness of the subject. - -### Creating and Modifying Partitions in RHEL 7 ### - -In RHEL 7, parted is the default utility to work with partitions, and will allow you to: - -- Display the current partition table -- Manipulate (increase or decrease the size of) existing partitions -- Create partitions using free space or additional physical storage devices - -It is recommended that before attempting the creation of a new partition or the modification of an existing one, you should ensure that none of the partitions on the device are in use (`umount /dev/partition`), and if you’re using part of the device as swap you need to disable it (`swapoff -v /dev/partition`) during the process. - -The easiest way to do this is to boot RHEL in rescue mode using an installation media such as a RHEL 7 installation DVD or USB (Troubleshooting → Rescue a Red Hat Enterprise Linux system) and Select Skip when you’re prompted to choose an option to mount the existing Linux installation, and you will be presented with a command prompt where you can start typing the same commands as shown as follows during the creation of an ordinary partition in a physical device that is not being used. - -![RHEL 7 Rescue Mode](http://www.tecmint.com/wp-content/uploads/2015/04/RHEL-7-Rescue-Mode.png) - -RHEL 7 Rescue Mode - -To start parted, simply type. 
- - # parted /dev/sdb - -Where `/dev/sdb` is the device where you will create the new partition; next, type print to display the current drive’s partition table: - -![Creat New Partition](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partition.png) - -Creat New Partition - -As you can see, in this example we are using a virtual drive of 5 GB. We will now proceed to create a 4 GB primary partition and then format it with the xfs filesystem, which is the default in RHEL 7. - -You can choose from a variety of file systems. You will need to manually create the partition with mkpart and then format it with mkfs.fstype as usual because mkpart does not support many modern filesystems out-of-the-box. - -In the following example we will set a label for the device and then create a primary partition `(p)` on `/dev/sdb`, which starts at the 0% percentage of the device and ends at 4000 MB (4 GB): - -![Set Partition Name in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Label-Partition.png) - -Label Partition Name - -Next, we will format the partition as xfs and print the partition table again to verify that changes were applied: - - # mkfs.xfs /dev/sdb1 - # parted /dev/sdb print - -![Format Partition in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Format-Partition-in-Linux.png) - -Format Partition as XFS Filesystem - -For older filesystems, you could use the resize command in parted to resize a partition. Unfortunately, this only applies to ext2, fat16, fat32, hfs, linux-swap, and reiserfs (if libreiserfs is installed). - -Thus, the only way to resize a partition is by deleting it and creating it again (so make sure you have a good backup of your data!). No wonder the default partitioning scheme in RHEL 7 is based on LVM. 
- -To remove a partition with parted: - - # parted /dev/sdb print - # parted /dev/sdb rm 1 - -![Remove Partition in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-Partition-in-Linux.png) - -Remove or Delete Partition - -### The Logical Volume Manager (LVM) ### - -Once a disk has been partitioned, it can be difficult or risky to change the partition sizes. For that reason, if we plan on resizing the partitions on our system, we should consider the possibility of using LVM instead of the classic partitioning system, where several physical devices can form a volume group that will host a defined number of logical volumes, which can be expanded or reduced without any hassle. - -In simple terms, you may find the following diagram useful to remember the basic architecture of LVM. - -![Basic Architecture of LVM](http://www.tecmint.com/wp-content/uploads/2015/04/LVM-Diagram.png) - -Basic Architecture of LVM - -#### Creating Physical Volumes, Volume Group and Logical Volumes #### - -Follow these steps in order to set up LVM using classic volume management tools. Since you can expand this topic reading the [LVM series on this site][1], I will only outline the basic steps to set up LVM, and then compare them to implementing the same functionality with SSM. - -**Note**: That we will use the whole disks `/dev/sdb` and `/dev/sdc` as PVs (Physical Volumes) but it’s entirely up to you if you want to do the same. - -**1. Create partitions `/dev/sdb1` and `/dev/sdc1` using 100% of the available disk space in /dev/sdb and /dev/sdc:** - - # parted /dev/sdb print - # parted /dev/sdc print - -![Create New Partitions](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partitions.png) - -Create New Partitions - -**2. 
Create 2 physical volumes on top of /dev/sdb1 and /dev/sdc1, respectively.** - - # pvcreate /dev/sdb1 - # pvcreate /dev/sdc1 - -![Create Two Physical Volumes](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Physical-Volumes.png) - -Create Two Physical Volumes - -Remember that you can use pvdisplay /dev/sd{b,c}1 to show information about the newly created PVs. - -**3. Create a VG on top of the PV that you created in the previous step:** - - # vgcreate tecmint_vg /dev/sd{b,c}1 - -![Create Volume Group in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Volume-Group.png) - -Create Volume Group - -Remember that you can use vgdisplay tecmint_vg to show information about the newly created VG. - -**4. Create three logical volumes on top of VG tecmint_vg, as follows:** - - # lvcreate -L 3G -n vol01_docs tecmint_vg [vol01_docs → 3 GB] - # lvcreate -L 1G -n vol02_logs tecmint_vg [vol02_logs → 1 GB] - # lvcreate -l 100%FREE -n vol03_homes tecmint_vg [vol03_homes → 6 GB] - -![Create Logical Volumes in LVM](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Logical-Volumes.png) - -Create Logical Volumes - -Remember that you can use lvdisplay tecmint_vg to show information about the newly created LVs on top of VG tecmint_vg. - -**5. Format each of the logical volumes with xfs (do NOT use xfs if you’re planning on shrinking volumes later!):** - - # mkfs.xfs /dev/tecmint_vg/vol01_docs - # mkfs.xfs /dev/tecmint_vg/vol02_logs - # mkfs.xfs /dev/tecmint_vg/vol03_homes - -**6. Finally, mount them:** - - # mount /dev/tecmint_vg/vol01_docs /mnt/docs - # mount /dev/tecmint_vg/vol02_logs /mnt/logs - # mount /dev/tecmint_vg/vol03_homes /mnt/homes - -#### Removing Logical Volumes, Volume Group and Physical Volumes #### - -**7. 
Now we will reverse the LVM implementation and remove the LVs, the VG, and the PVs:** - - # lvremove /dev/tecmint_vg/vol01_docs - # lvremove /dev/tecmint_vg/vol02_logs - # lvremove /dev/tecmint_vg/vol03_homes - # vgremove /dev/tecmint_vg - # pvremove /dev/sd{b,c}1 - -**8. Now let’s install SSM and we will see how to perform the above in ONLY 1 STEP!** - - # yum update && yum install system-storage-manager - -We will use the same names and sizes as before: - - # ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 /mnt/docs /dev/sd{b,c}1 - # ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 /mnt/logs /dev/sd{b,c}1 - # ssm create -n vol03_homes -p tecmint_vg --fstype ext4 /mnt/homes /dev/sd{b,c}1 - -Yes! SSM will let you: - -- initialize block devices as physical volumes -- create a volume group -- create logical volumes -- format LVs, and -- mount them using only one command - -**9. We can now display the information about PVs, VGs, or LVs, respectively, as follows:** - - # ssm list dev - # ssm list pool - # ssm list vol - -![Check Information of PVs, VGs, or LVs](http://www.tecmint.com/wp-content/uploads/2015/04/Display-LVM-Information.png) - -Check Information of PVs, VGs, or LVs - -**10. As we already know, one of the distinguishing features of LVM is the possibility to resize (expand or decrease) logical volumes without downtime.** - -Say we are running out of space in vol02_logs but have plenty of space in vol03_homes. We will resize vol03_homes to 4 GB and expand vol02_logs to use the remaining space: - - # ssm resize -s 4G /dev/tecmint_vg/vol03_homes - -Run ssm list pool again and take note of the free space in tecmint_vg: - -![Check Volume Size](http://www.tecmint.com/wp-content/uploads/2015/04/Check-LVM-Free-Space.png) - -Check Volume Size - -Then do: - - # ssm resize -s+1.99 /dev/tecmint_vg/vol02_logs - -**Note**: that the plus sign after the -s flag indicates that the specified value should be added to the present value. - -**11. 
Removing logical volumes and volume groups is much easier with ssm as well. A simple,** - - # ssm remove tecmint_vg - -will return a prompt asking you to confirm the deletion of the VG and the LVs it contains: - -![Remove Logical Volume and Volume Group](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-LV-VG.png) - -Remove Logical Volume and Volume Group - -### Managing Encrypted Volumes ### - -SSM also provides system administrators with the capability of managing encryption for new or existing volumes. You will need the cryptsetup package installed first: - - # yum update && yum install cryptsetup - -Then issue the following command to create an encrypted volume. You will be prompted to enter a passphrase to maximize security: - - # ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/docs /dev/sd{b,c}1 - # ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/logs /dev/sd{b,c}1 - # ssm create -n vol03_homes -p tecmint_vg --fstype ext4 --encrypt luks /mnt/homes /dev/sd{b,c}1 - -Our next task consists in adding the corresponding entries in /etc/fstab in order for those logical volumes to be available on boot. Rather than using the device identifier (/dev/something). 
- -We will use each LV’s UUID (so that our devices will still be uniquely identified should we add other logical volumes or devices), which we can find out with the blkid utility: - - # blkid -o value UUID /dev/tecmint_vg/vol01_docs - # blkid -o value UUID /dev/tecmint_vg/vol02_logs - # blkid -o value UUID /dev/tecmint_vg/vol03_homes - -In our case: - -![Find Logical Volume UUID](http://www.tecmint.com/wp-content/uploads/2015/04/Logical-Volume-UUID.png) - -Find Logical Volume UUID - -Next, create the /etc/crypttab file with the following contents (change the UUIDs for the ones that apply to your setup): - - docs UUID=ba77d113-f849-4ddf-8048-13860399fca8 none - logs UUID=58f89c5a-f694-4443-83d6-2e83878e30e4 none - homes UUID=92245af6-3f38-4e07-8dd8-787f4690d7ac none - -And insert the following entries in /etc/fstab. Note that device_name (/dev/mapper/device_name) is the mapper identifier that appears in the first column of /etc/crypttab. - - # Logical volume vol01_docs: - /dev/mapper/docs /mnt/docs ext4 defaults 0 2 - # Logical volume vol02_logs - /dev/mapper/logs /mnt/logs ext4 defaults 0 2 - # Logical volume vol03_homes - /dev/mapper/homes /mnt/homes ext4 defaults 0 2 - -Now reboot (systemctl reboot) and you will be prompted to enter the passphrase for each LV. Afterwards you can confirm that the mount operation was successful by checking the corresponding mount points: - -![Verify Logical Volume Mount Points](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-LV-Mount-Points.png) - -Verify Logical Volume Mount Points - -### Conclusion ### - -In this tutorial we have started to explore how to set up and configure system storage using classic volume management tools and SSM, which also integrates filesystem and encryption capabilities in one package. This makes SSM an invaluable tool for any sysadmin. - -Let us know if you have any questions or comments – feel free to use the form below to get in touch with us! 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/create-lvm-storage-in-linux/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md b/translated/tech/RHCSA/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md new file mode 100644 index 0000000000..41890b2280 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md @@ -0,0 +1,269 @@ +RHCSA 系列:使用 'Parted' 和 'SSM' 来配置和加密系统存储 – Part 6 +================================================================================ +在本篇文章中,我们将讨论在 RHEL 7 中如何使用传统的工具来设置和配置本地系统存储,并介绍系统存储管理器(也称为 SSM),它将极大地简化上面的任务。 + +![配置和加密系统存储](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-and-Encrypt-System-Storage.png) + +RHCSA: 配置和加密系统存储 – Part 6 + +请注意,我们将在这篇文章中展开这个话题,但由于该话题的宽泛性,我们将在下一期(Part 7)中继续介绍有关它的描述和使用。 + +### 在 RHEL 7 中创建和修改分区 ### + +在 RHEL 7 中, parted 是默认的用来处理分区的程序,且它允许你: + +- 展示当前的分区表 +- 操纵(增加或减少分区的大小)现有的分区 +- 利用空余的磁盘空间或额外的物理存储设备来创建分区 + +强烈建议你在试图增加一个新的分区或对一个现有分区进行更改前,你应当确保设备上没有任何一个分区正在使用(`umount /dev/partition`),且假如你正使用设备的一部分来作为 swap 分区,在进行上面的操作期间,你需要将它禁用(`swapoff -v /dev/partition`) 。 + +实施上面的操作的最简单的方法是使用一个安装介质例如一个 RHEL 7 安装 DVD 或 USB 以急救模式启动 RHEL(Troubleshooting → Rescue a Red Hat Enterprise Linux system),然后当让你选择一个选项来挂载现有的 Linux 安装时,选择'跳过'这个选项,接着你将看到一个命令行提示符,在其中你可以像下图显示的那样开始键入与在一个未被使用的物理设备上创建一个正常的分区时所用的相同的命令。 + +![RHEL 7 急救模式](http://www.tecmint.com/wp-content/uploads/2015/04/RHEL-7-Rescue-Mode.png) + +RHEL 7 急救模式 + +要启动 parted,只需键入: + + # 
parted /dev/sdb + +其中 `/dev/sdb` 是你将要创建新分区所在的设备;然后键入 `print` 来显示当前设备的分区表: + +![创建新的分区](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partition.png) + +创建新的分区 + +正如你所看到的那样,在这个例子中,我们正在使用一个 5 GB 的虚拟光驱。现在我们将要创建一个 4 GB 的主分区,然后将它格式化为 xfs 文件系统,它是 RHEL 7 中默认的文件系统。 + +你可以从一系列的文件系统中进行选择。你将需要使用 mkpart 来手动地创建分区,接着和平常一样,用 mkfs.fstype 来对分区进行格式化,因为 mkpart 并不支持许多现代的文件系统以达到即开即用。 + +在下面的例子中,我们将为设备设定一个标记,然后在 `/dev/sdb` 上创建一个主分区 `(p)`,它从设备的 0% 开始,并在 4000MB(4 GB) 处结束。 + +![在 Linux 中设定分区名称](http://www.tecmint.com/wp-content/uploads/2015/04/Label-Partition.png) + +标记分区的名称 + +接下来,我们将把分区格式化为 xfs 文件系统,然后再次打印出分区表,以此来确保更改已被应用。 + + # mkfs.xfs /dev/sdb1 + # parted /dev/sdb print + +![在 Linux 中格式化分区](http://www.tecmint.com/wp-content/uploads/2015/04/Format-Partition-in-Linux.png) + +格式化分区为 XFS 文件系统 + +对于旧一点的文件系统,在 parted 中你应该使用 `resize` 命令来改变分区的大小。不幸的是,这只适用于 ext2, fat16, fat32, hfs, linux-swap, 和 reiserfs (若 libreiserfs 已被安装)。 + +因此,改变分区大小的唯一方式是删除它然后再创建它(所以确保你对你的数据做了完整的备份!)。毫无疑问,在 RHEL 7 中默认的分区方案是基于 LVM 的。 + +使用 parted 来移除一个分区,可以用: + + # parted /dev/sdb print + # parted /dev/sdb rm 1 + +![在 Linux 中移除分区](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-Partition-in-Linux.png) + +移除或删除分区 + +### 逻辑卷管理(LVM) ### + +一旦一个磁盘被分好了分区,再去更改分区的大小就是一件困难或冒险的事了。基于这个原因,假如我们计划在我们的系统上对分区的大小进行更改,我们应当考虑使用 LVM 的可能性,而不是使用传统的分区系统。这样多个物理设备可以组成一个逻辑组,以此来寄宿可自定义数目的逻辑卷,而逻辑卷的增大或减少不会带来任何麻烦。 + +简单来说,你会发现下面的示意图对记住 LVM 的基础架构或许有用。 + +![LVM 的基本架构](http://www.tecmint.com/wp-content/uploads/2015/04/LVM-Diagram.png) + +LVM 的基本架构 + +#### 创建物理卷,卷组和逻辑卷 #### + +遵循下面的步骤是为了使用传统的卷管理工具来设置 LVM。由于你可以通过阅读这个网站上的 LVM 系列来扩展这个话题,我将只是概要的介绍设置 LVM 的基本步骤,然后与使用 SSM 来实现相同功能做个比较。 + +**注**: 我们将使用整个磁盘 `/dev/sdb` 和 `/dev/sdc` 来作为 PVs (物理卷),但是否执行相同的操作完全取决于你。 + +**1. 使用 /dev/sdb 和 /dev/sdc 中 100% 的可用磁盘空间来创建分区 `/dev/sdb1` 和 `/dev/sdc1`:** + + # parted /dev/sdb print + # parted /dev/sdc print + +![创建新分区](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partitions.png) + +创建新分区 + +**2. 
分别在 /dev/sdb1 和 /dev/sdc1 上共创建 2 个物理卷。** + + # pvcreate /dev/sdb1 + # pvcreate /dev/sdc1 + +![创建两个物理卷](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Physical-Volumes.png) + +创建两个物理卷 + +记住,你可以使用 pvdisplay /dev/sd{b,c}1 来显示有关新建的 PV 的信息。 + +**3. 在上一步中创建的 PV 之上创建一个 VG:** + + # vgcreate tecmint_vg /dev/sd{b,c}1 + +![在 Linux 中创建卷组](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Volume-Group.png) + +创建卷组 + +记住,你可使用 vgdisplay tecmint_vg 来显示有关新建的 VG 的信息。 + +**4. 像下面那样,在 VG tecmint_vg 之上创建 3 个逻辑卷:** + + # lvcreate -L 3G -n vol01_docs tecmint_vg [vol01_docs → 3 GB] + # lvcreate -L 1G -n vol02_logs tecmint_vg [vol02_logs → 1 GB] + # lvcreate -l 100%FREE -n vol03_homes tecmint_vg [vol03_homes → 6 GB] + +![在 LVM 中创建逻辑卷](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Logical-Volumes.png) + +创建逻辑卷 + +记住,你可以使用 lvdisplay tecmint_vg 来显示有关在 VG tecmint_vg 之上新建的 LV 的信息。 + +**5. 格式化每个逻辑卷为 xfs 文件系统格式(假如你计划在以后将要缩小卷的大小,请别使用 xfs 文件系统格式!):** + + # mkfs.xfs /dev/tecmint_vg/vol01_docs + # mkfs.xfs /dev/tecmint_vg/vol02_logs + # mkfs.xfs /dev/tecmint_vg/vol03_homes + +**6. 最后,挂载它们:** + + # mount /dev/tecmint_vg/vol01_docs /mnt/docs + # mount /dev/tecmint_vg/vol02_logs /mnt/logs + # mount /dev/tecmint_vg/vol03_homes /mnt/homes + +#### 移除逻辑卷,卷组和物理卷 #### + +**7.现在我们将进行与刚才相反的操作并移除 LV,VG 和 PV:** + + # lvremove /dev/tecmint_vg/vol01_docs + # lvremove /dev/tecmint_vg/vol02_logs + # lvremove /dev/tecmint_vg/vol03_homes + # vgremove /dev/tecmint_vg + # pvremove /dev/sd{b,c}1 + +**8. 现在,让我们来安装 SSM,我们将看到如何只用一步就完成上面所有的操作!** + + # yum update && yum install system-storage-manager + +我们将和上面一样,使用相同的名称和大小: + + # ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 /mnt/docs /dev/sd{b,c}1 + # ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 /mnt/logs /dev/sd{b,c}1 + # ssm create -n vol03_homes -p tecmint_vg --fstype ext4 /mnt/homes /dev/sd{b,c}1 + +是的! SSM 可以让你: + +- 初始化块设备来作为物理卷 +- 创建一个卷组 +- 创建逻辑卷 +- 格式化 LV 和 +- 只使用一个命令来挂载它们 + +**9. 
现在,我们可以使用下面的命令来展示有关 PV,VG 或 LV 的信息:** + + # ssm list dev + # ssm list pool + # ssm list vol + +![检查有关 PV, VG,或 LV 的信息](http://www.tecmint.com/wp-content/uploads/2015/04/Display-LVM-Information.png) + +检查有关 PV, VG,或 LV 的信息 + +**10. 正如我们知道的那样, LVM 的一个显著的特点是可以在不停机的情况下更改(增大或缩小) 逻辑卷的大小:** + +假定在 vol02_logs 上我们用尽了空间,而 vol03_homes 还留有足够的空间。我们将把 vol03_homes 的大小调整为 4 GB,并使用剩余的空间来扩展 vol02_logs: + + # ssm resize -s 4G /dev/tecmint_vg/vol03_homes + +再次运行 `ssm list pool`,并记录 tecmint_vg 中的剩余空间的大小: + +![查看卷的大小](http://www.tecmint.com/wp-content/uploads/2015/04/Check-LVM-Free-Space.png) + +查看卷的大小 + +然后执行: + + # ssm resize -s+1.99 /dev/tecmint_vg/vol02_logs + +**注**: 在 `-s` 后的加号暗示特定值应该被加到当前值上。 + +**11. 使用 ssm 来移除逻辑卷和卷组也更加简单,只需使用:** + + # ssm remove tecmint_vg + +这个命令将返回一个提示,询问你是否确认删除 VG 和它所包含的 LV: + +![移除逻辑卷和卷组](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-LV-VG.png) + +移除逻辑卷和卷组 + +### 管理加密的卷 ### + +SSM 也给系统管理员提供了为新的或现存的卷加密的能力。首先,你将需要安装 cryptsetup 软件包: + + # yum update && yum install cryptsetup + +然后写出下面的命令来创建一个加密卷,你将被要求输入一个密码来增强安全性: + + # ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/docs /dev/sd{b,c}1 + # ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/logs /dev/sd{b,c}1 + # ssm create -n vol03_homes -p tecmint_vg --fstype ext4 --encrypt luks /mnt/homes /dev/sd{b,c}1 + +我们的下一个任务是往 /etc/fstab 中添加条目来让这些逻辑卷在启动时可用,而不是使用设备识别编号(/dev/something)。 + +我们将使用每个 LV 的 UUID (使得当我们添加其他的逻辑卷或设备后,我们的设备仍然可以被唯一的标记),而我们可以使用 blkid 应用来找到它们的 UUID: + + # blkid -o value UUID /dev/tecmint_vg/vol01_docs + # blkid -o value UUID /dev/tecmint_vg/vol02_logs + # blkid -o value UUID /dev/tecmint_vg/vol03_homes + +在我们的例子中: + +![找到逻辑卷的 UUID](http://www.tecmint.com/wp-content/uploads/2015/04/Logical-Volume-UUID.png) + +找到逻辑卷的 UUID + +接着,使用下面的内容来创建 /etc/crypttab 文件(请更改 UUID 来适用于你的设置): + + docs UUID=ba77d113-f849-4ddf-8048-13860399fca8 none + logs UUID=58f89c5a-f694-4443-83d6-2e83878e30e4 none + homes UUID=92245af6-3f38-4e07-8dd8-787f4690d7ac 
none + +然后在 /etc/fstab 中添加如下的条目。请注意,device_name(/dev/mapper/device_name)是出现在 /etc/crypttab 中第一列的映射标识: + + # Logical volume vol01_docs: + /dev/mapper/docs /mnt/docs ext4 defaults 0 2 + # Logical volume vol02_logs + /dev/mapper/logs /mnt/logs ext4 defaults 0 2 + # Logical volume vol03_homes + /dev/mapper/homes /mnt/homes ext4 defaults 0 2 + +现在重启(systemctl reboot),你将被要求为每个 LV 输入密码。随后,你可以通过检查相应的挂载点来验证挂载操作是否成功: + +![验证逻辑卷挂载点](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-LV-Mount-Points.png) + +验证逻辑卷挂载点 + +### 总结 ### + +在这篇教程中,我们开始探索如何使用传统的卷管理工具和 SSM 来设置和配置系统存储,SSM 也在一个软件包中集成了文件系统和加密功能。这使得对于任何系统管理员来说,SSM 都是一个非常有价值的工具。 + +假如你有任何的问题或评论,请让我们知晓 – 请随意使用下面的评论框来与我们保持联系! + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/create-lvm-storage-in-linux/ From 9be2a6d3692adc8a49338a52ce31b4ca2b21068d Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 18 Aug 2015 22:42:37 +0800 Subject: [PATCH 226/697] Update RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...(Access Control Lists) and Mounting Samba or NFS Shares.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md b/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md index d4801d9923..9237e8bd1c 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs 
(Access Control Lists) and Mounting Samba or NFS Shares.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares – Part 7 ================================================================================ In the last article ([RHCSA series Part 6][1]) we started explaining how to set up and configure local system storage using parted and ssm. @@ -209,4 +211,4 @@ via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ [2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ -[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html \ No newline at end of file +[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html From daa0150cb50a7ad9e3a28a301cd969c30dd20215 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 19 Aug 2015 09:33:47 +0800 Subject: [PATCH 227/697] Delete Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md --- ... 
(Mirroring) using 'Two Disks' in Linux.md | 213 ------------------ 1 file changed, 213 deletions(-) delete mode 100644 sources/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md diff --git a/sources/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/sources/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md deleted file mode 100644 index 4acfe4366b..0000000000 --- a/sources/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md +++ /dev/null @@ -1,213 +0,0 @@ -struggling 翻译中 -Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 -================================================================================ -RAID Mirroring means an exact clone (or mirror) of the same data writing to two drives. A minimum two number of disks are more required in an array to create RAID1 and it’s useful only, when read performance or reliability is more precise than the data storage capacity. - -![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) - -Setup Raid1 in Linux - -Mirrors are created to protect against data loss due to disk failure. Each disk in a mirror involves an exact copy of the data. When one disk fails, the same data can be retrieved from other functioning disk. However, the failed drive can be replaced from the running computer without any user interruption. - -### Features of RAID 1 ### - -- Mirror has Good Performance. -- 50% of space will be lost. Means if we have two disk with 500GB size total, it will be 1TB but in Mirroring it will only show us 500GB. -- No data loss in Mirroring if one disk fails, because we have the same content in both disks. -- Reading will be good than writing data to drive. - -#### Requirements #### - -Minimum Two number of disks are allowed to create RAID 1, but you can add more disks by using twice as 2, 4, 6, 8. 
To add more disks, your system must have a RAID physical adapter (hardware card). - -Here we’re using software raid not a Hardware raid, if your system has an inbuilt physical hardware raid card you can access it from it’s utility UI or using Ctrl+I key. - -Read Also: [Basic Concepts of RAID in Linux][1] - -#### My Server Setup #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.226 - Hostname : rd1.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - -This article will guide you through a step-by-step instructions on how to setup a software RAID 1 or Mirror using mdadm (creates and manages raid) on Linux Platform. Although the same instructions also works on other Linux distributions such as RedHat, CentOS, Fedora, etc. - -### Step 1: Installing Prerequisites and Examine Drives ### - -1. As I said above, we’re using mdadm utility for creating and managing RAID in Linux. So, let’s install the mdadm software package on Linux using yum or apt-get package manager tool. - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -2. Once ‘mdadm‘ package has been installed, we need to examine our disk drives whether there is already any raid configured using the following command. - - # mdadm -E /dev/sd[b-c] - -![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) - -Check RAID on Disks - -As you see from the above screen, that there is no any super-block detected yet, means no RAID defined. - -### Step 2: Drive Partitioning for RAID ### - -3. As I mentioned above, that we’re using minimum two partitions /dev/sdb and /dev/sdc for creating RAID1. Let’s create partitions on these two drives using ‘fdisk‘ command and change the type to raid during partition creation. - - # fdisk /dev/sdb - -Follow the below instructions - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. -- Next select the partition number as 1. 
-- Give the default full size by just pressing two times Enter key. -- Next press ‘p‘ to print the defined partition. -- Press ‘L‘ to list all available types. -- Type ‘t‘to choose the partitions. -- Choose ‘fd‘ for Linux raid auto and press Enter to apply. -- Then again use ‘p‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) - -Create Disk Partitions - -After ‘/dev/sdb‘ partition has been created, next follow the same instructions to create new partition on /dev/sdc drive. - - # fdisk /dev/sdc - -![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) - -Create Second Partitions - -4. Once both the partitions are created successfully, verify the changes on both sdb & sdc drive using the same ‘mdadm‘ command and also confirm the RAID type as shown in the following screen grabs. - - # mdadm -E /dev/sd[b-c] - -![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) - -Verify Partitions Changes - -![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) - -Check RAID Type - -**Note**: As you see in the above picture, there is no any defined RAID on the sdb1 and sdc1 drives so far, that’s the reason we are getting as no super-blocks detected. - -### Step 3: Creating RAID1 Devices ### - -5. Next create RAID1 Device called ‘/dev/md0‘ using the following command and verity it. - - # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 - # cat /proc/mdstat - -![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) - -Create RAID Device - -6. Next check the raid devices type and raid array using following commands. 
- - # mdadm -E /dev/sd[b-c]1 - # mdadm --detail /dev/md0 - -![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) - -Check RAID Device type - -![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) - -Check RAID Device Array - -From the above pictures, one can easily understand that raid1 have been created and using /dev/sdb1 and /dev/sdc1 partitions and also you can see the status as resyncing. - -### Step 4: Creating File System on RAID Device ### - -7. Create file system using ext4 for md0 and mount under /mnt/raid1. - - # mkfs.ext4 /dev/md0 - -![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) - -Create RAID Device Filesystem - -8. Next, mount the newly created filesystem under ‘/mnt/raid1‘ and create some files and verify the contents under mount point. - - # mkdir /mnt/raid1 - # mount /dev/md0 /mnt/raid1/ - # touch /mnt/raid1/tecmint.txt - # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt - -![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) - -Mount Raid Device - -9. To auto-mount RAID1 on system reboot, you need to make an entry in fstab file. Open ‘/etc/fstab‘ file and add the following line at the bottom of the file. - - /dev/md0 /mnt/raid1 ext4 defaults 0 0 - -![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) - -Raid Automount Device - -10. Run ‘mount -a‘ to check whether there are any errors in fstab entry. - - # mount -av - -![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) - -Check Errors in fstab - -11. Next, save the raid configuration manually to ‘mdadm.conf‘ file using the below command. 
- - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) - -Save Raid Configuration - -The above configuration file is read by the system at the reboots and load the RAID devices. - -### Step 5: Verify Data After Disk Failure ### - -12. Our main purpose is, even after any of hard disk fail or crash our data needs to be available. Let’s see what will happen when any of disk disk is unavailable in array. - - # mdadm --detail /dev/md0 - -![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) - -Raid Device Verify - -In the above image, we can see there are 2 devices available in our RAID and Active Devices are 2. Now let us see what will happen when a disk plugged out (removed sdc disk) or fails. - - # ls -l /dev | grep sd - # mdadm --detail /dev/md0 - -![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) - -Test RAID Devices - -Now in the above image, you can see that one of our drive is lost. I unplugged one of the drive from my Virtual machine. Now let us check our precious data. - - # cd /mnt/raid1/ - # cat tecmint.txt - -![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) - -Verify RAID Data - -Did you see our data is still available. From this we come to know the advantage of RAID 1 (mirror). In next article, we will see how to setup a RAID 5 striping with distributed Parity. Hope this helps you to understand how the RAID 1 (Mirror) Works. 
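The fstab entry added in Step 4 above is easy to get wrong before `mount -av` ever runs. As a minimal sketch (not part of the original article), the field count of a candidate entry can be checked on its own — a valid fstab line has exactly six whitespace-separated fields:

```shell
# Sanity-check the shape of a candidate fstab entry before trusting it
# to 'mount -a'. Only the field count is verified here; 'mount -av'
# remains the authoritative check.
entry='/dev/md0 /mnt/raid1 ext4 defaults 0 0'
nfields=$(printf '%s\n' "$entry" | awk '{print NF}')
if [ "$nfields" -eq 6 ]; then
    echo "fstab entry looks well-formed: $entry"
else
    echo "malformed fstab entry ($nfields fields): $entry" >&2
fi
```

This catches a truncated or run-together entry early, before a reboot turns it into a failed mount.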
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid1-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ \ No newline at end of file From e229a59ce691a09656ed58958d06edbabf81dbc7 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 19 Aug 2015 09:34:00 +0800 Subject: [PATCH 228/697] Delete Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md --- ...iping with Distributed Parity) in Linux.md | 286 ------------------ 1 file changed, 286 deletions(-) delete mode 100644 sources/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md diff --git a/sources/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md b/sources/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md deleted file mode 100644 index dafdf514aa..0000000000 --- a/sources/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md +++ /dev/null @@ -1,286 +0,0 @@ -struggling 翻译中 -Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 -================================================================================ -In RAID 5, data strips across multiple drives with distributed parity. The striping with distributed parity means it will split the parity information and stripe data over the multiple disks, which will have good data redundancy. - -![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg) - -Setup Raid 5 in Linux - -For RAID Level it should have at least three hard drives or more. 
RAID 5 are being used in the large scale production environment where it’s cost effective and provide performance as well as redundancy. - -#### What is Parity? #### - -Parity is a simplest common method of detecting errors in data storage. Parity stores information in each disks, Let’s say we have 4 disks, in 4 disks one disk space will be split to all disks to store the parity information’s. If any one of the disks fails still we can get the data by rebuilding from parity information after replacing the failed disk. - -#### Pros and Cons of RAID 5 #### - -- Gives better performance -- Support Redundancy and Fault tolerance. -- Support hot spare options. -- Will loose a single disk capacity for using parity information. -- No data loss if a single disk fails. We can rebuilt from parity after replacing the failed disk. -- Suits for transaction oriented environment as the reading will be faster. -- Due to parity overhead, writing will be slow. -- Rebuild takes long time. - -#### Requirements #### - -Minimum 3 hard drives are required to create Raid 5, but you can add more disks, only if you’ve a dedicated hardware raid controller with multi ports. Here, we are using software RAID and ‘mdadm‘ package to create raid. - -mdadm is a package which allow us to configure and manage RAID devices in Linux. By default there is no configuration file is available for RAID, we must save the configuration file after creating and configuring RAID setup in separate file called mdadm.conf. - -Before moving further, I suggest you to go through the following articles for understanding the basics of RAID in Linux. 
- -- [Basic Concepts of RAID in Linux – Part 1][1] -- [Creating RAID 0 (Stripe) in Linux – Part 2][2] -- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3] - -#### My Server Setup #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.227 - Hostname : rd5.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd - -This article is a Part 4 of a 9-tutorial RAID series, here we are going to setup a software RAID 5 with distributed parity in Linux systems or servers using three 20GB disks named /dev/sdb, /dev/sdc and /dev/sdd. - -### Step 1: Installing mdadm and Verify Drives ### - -1. As we said earlier, that we’re using CentOS 6.5 Final release for this raid setup, but same steps can be followed for RAID setup in any Linux based distributions. - - # lsb_release -a - # ifconfig | grep inet - -![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png) - -CentOS 6.5 Summary - -2. If you’re following our raid series, we assume that you’ve already installed ‘mdadm‘ package, if not, use the following command according to your Linux distribution to install the package. - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -3. After the ‘mdadm‘ package installation, let’s list the three 20GB disks which we have added in our system using ‘fdisk‘ command. - - # fdisk -l | grep sd - -![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png) - -Install mdadm Tool - -4. Now it’s time to examine the attached three drives for any existing RAID blocks on these drives using following command. - - # mdadm -E /dev/sd[b-d] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd - -![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png) - -Examine Drives For Raid - -**Note**: From the above image illustrated that there is no any super-block detected yet. 
So, there is no RAID defined in all three drives. Let us start to create one now. - -### Step 2: Partitioning the Disks for RAID ### - -5. First and foremost, we have to partition the disks (/dev/sdb, /dev/sdc and /dev/sdd) before adding to a RAID, So let us define the partition using ‘fdisk’ command, before forwarding to the next steps. - - # fdisk /dev/sdb - # fdisk /dev/sdc - # fdisk /dev/sdd - -#### Create /dev/sdb Partition #### - -Please follow the below instructions to create partition on /dev/sdb drive. - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. Here we are choosing Primary because there is no partitions defined yet. -- Then choose ‘1‘ to be the first partition. By default it will be 1. -- Here for cylinder size we don’t have to choose the specified size because we need the whole partition for RAID so just Press Enter two times to choose the default full size. -- Next press ‘p‘ to print the created partition. -- Change the Type, If we need to know the every available types Press ‘L‘. -- Here, we are selecting ‘fd‘ as my type is RAID. -- Next press ‘p‘ to print the defined partition. -- Then again use ‘p‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png) - -Create sdb Partition - -**Note**: We have to follow the steps mentioned above to create partitions for sdc & sdd drives too. - -#### Create /dev/sdc Partition #### - -Now partition the sdc and sdd drives by following the steps given in the screenshot or you can follow above steps. - - # fdisk /dev/sdc - -![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png) - -Create sdc Partition - -#### Create /dev/sdd Partition #### - - # fdisk /dev/sdd - -![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png) - -Create sdd Partition - -6. 
After creating partitions, check for changes in all three drives sdb, sdc, & sdd. - - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd - - or - - # mdadm -E /dev/sd[b-c] - -![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png) - -Check Partition Changes - -**Note**: In the above pic. depict the type is fd i.e. for RAID. - -7. Now Check for the RAID blocks in newly created partitions. If no super-blocks detected, than we can move forward to create a new RAID 5 setup on these drives. - -![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png) - -Check Raid on Partition - -### Step 3: Creating md device md0 ### - -8. Now create a Raid device ‘md0‘ (i.e. /dev/md0) and include raid level on all newly created partitions (sdb1, sdc1 and sdd1) using below command. - - # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 - - or - - # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1 - -9. After creating raid device, check and verify the RAID, devices included and RAID Level from the mdstat output. - - # cat /proc/mdstat - -![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png) - -Verify Raid Device - -If you want to monitor the current building process, you can use ‘watch‘ command, just pass through the ‘cat /proc/mdstat‘ with watch command which will refresh screen every 1 second. - - # watch -n1 cat /proc/mdstat - -![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png) - -Monitor Raid 5 Process - -![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png) - -Raid 5 Process Summary - -10. After creation of raid, Verify the raid devices using the following command. 
- - # mdadm -E /dev/sd[b-d]1 - -![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png) - -Verify Raid Level - -**Note**: The Output of the above command will be little long as it prints the information of all three drives. - -11. Next, verify the RAID array to assume that the devices which we’ve included in the RAID level are running and started to re-sync. - - # mdadm --detail /dev/md0 - -![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png) - -Verify Raid Array - -### Step 4: Creating file system for md0 ### - -12. Create a file system for ‘md0‘ device using ext4 before mounting. - - # mkfs.ext4 /dev/md0 - -![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png) - -Create md0 Filesystem - -13. Now create a directory under ‘/mnt‘ then mount the created filesystem under /mnt/raid5 and check the files under mount point, you will see lost+found directory. - - # mkdir /mnt/raid5 - # mount /dev/md0 /mnt/raid5/ - # ls -l /mnt/raid5/ - -14. Create few files under mount point /mnt/raid5 and append some text in any one of the file to verify the content. - - # touch /mnt/raid5/raid5_tecmint_{1..5} - # ls -l /mnt/raid5/ - # echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1 - # cat /mnt/raid5/raid5_tecmint_1 - # cat /proc/mdstat - -![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png) - -Mount Raid Device - -15. We need to add entry in fstab, else will not display our mount point after system reboot. To add an entry, we should edit the fstab file and append the following line as shown below. The mount point will differ according to your environment. - - # vim /etc/fstab - - /dev/md0 /mnt/raid5 ext4 defaults 0 0 - -![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png) - -Raid 5 Automount - -16. 
Next, run ‘mount -av‘ command to check whether any errors in fstab entry. - - # mount -av - -![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png) - -Check Fstab Errors - -### Step 5: Save Raid 5 Configuration ### - -17. As mentioned earlier in requirement section, by default RAID don’t have a config file. We have to save it manually. If this step is not followed RAID device will not be in md0, it will be in some other random number. - -So, we must have to save the configuration before system reboot. If the configuration is saved it will be loaded to the kernel during the system reboot and RAID will also gets loaded. - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png) - -Save Raid 5 Configuration - -Note: Saving the configuration will keep the RAID level stable in md0 device. - -### Step 6: Adding Spare Drives ### - -18. What the use of adding a spare drive? its very useful if we have a spare drive, if any one of the disk fails in our array, this spare drive will get active and rebuild the process and sync the data from other disk, so we can see a redundancy here. - -For more instructions on how to add spare drive and check Raid 5 fault tolerance, read #Step 6 and #Step 7 in the following article. - -- [Add Spare Drive to Raid 5 Setup][4] - -### Conclusion ### - -Here, in this article, we have seen how to setup a RAID 5 using three number of disks. Later in my upcoming articles, we will see how to troubleshoot when a disk fails in RAID 5 and how to replace for recovery. 
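The "What is Parity?" section above can be made concrete with a toy sketch (not from the article, and no real disks involved): parity is the XOR of the data chunks, so XOR-ing the surviving chunk with the stored parity regenerates whatever was lost.

```shell
# Two pretend one-byte data chunks and their parity, as a RAID 5 array
# would compute it. Losing d1 is recoverable: parity XOR d2 == d1.
d1=170                          # chunk on disk 1 (0xAA)
d2=85                           # chunk on disk 2 (0x55)
parity=$(( d1 ^ d2 ))           # stored on disk 3
rebuilt_d1=$(( parity ^ d2 ))   # "rebuild" after disk 1 fails
echo "parity=$parity rebuilt_d1=$rebuilt_d1"
```

Real arrays do this per stripe across all members and rotate which disk holds the parity chunk, which is why a single failed disk costs redundancy but not data.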
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid-5-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ -[3]:http://www.tecmint.com/create-raid1-in-linux/ -[4]:http://www.tecmint.com/create-raid-6-in-linux/ \ No newline at end of file From 6ae6e7cd43e62c6d22e2621c689dd892e88ba7b1 Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 19 Aug 2015 10:05:45 +0800 Subject: [PATCH 229/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译到Line 483 --- ...50728 Process of the Linux kernel building.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index f4510d81b2..e55605f863 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -411,6 +411,8 @@ $(addprefix $(obj)/,$(filter-out fixdep,$(always))): $(obj)/fixdep First program is `fixdep` - optimizes list of dependencies generated by the [gcc](https://gcc.gnu.org/) that tells make when to remake a source code file. The second program is `bin2c` depends on the value of the `CONFIG_BUILD_BIN2C` kernel configuration option and very little C program that allows to convert a binary on stdin to a C include on stdout. You can note here strange notation: `hostprogs-y` and etc. 
This notation is used in the all `kbuild` files and more about it you can read in the [documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt). In our case the `hostprogs-y` tells to the `kbuild` that there is one host program named `fixdep` that will be built from the will be built from `fixdep.c` that located in the same directory that `Makefile`. The first output after we will execute `make` command in our terminal will be result of this `kbuild` file: +第一个工具是`fixdep`:用来优化 [gcc](https://gcc.gnu.org/) 生成的依赖列表,这个列表用于告诉 make 何时需要重新编译某个源文件。第二个工具是`bin2c`,它依赖于内核配置选项`CONFIG_BUILD_BIN2C`,是一个非常小的 C 程序,用来把从标准输入(stdin)读到的二进制数据转换成标准输出(stdout)上的 C 头文件。你可以注意到这里有些奇怪的标志,如`hostprogs-y` 等。这些标志在所有的`kbuild` 文件中使用,更多的信息你可以从[documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt) 获得。在我们的例子中,`hostprogs-y` 告诉`kbuild` 这里有一个名为`fixdep` 的主机程序,它将由和`Makefile` 位于同一目录的`fixdep.c` 编译而来。执行 make 之后,终端的第一个输出就是这个`kbuild` 文件的结果: + ``` $ make HOSTCC scripts/basic/fixdep ``` As `script_basic` target was executed, the `archscripts` target will execute `make` for the [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) makefile with the `relocs` target: +当目标`script_basic` 执行完毕,目标`archscripts` 就会以`relocs` 为目标,对 [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) 下的 makefile 执行 make: + ```Makefile $(Q)$(MAKE) $(build)=arch/x86/tools relocs ``` The `relocs_32.c` and the `relocs_64.c` will be compiled that will contain [relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29) information and we will see it in the `make` output: +`relocs_32.c` 和`relocs_64.c` 包含了[重定位](https://en.wikipedia.org/wiki/Relocation_%28computing%29) 信息,它们将会被编译,这一点可以在`make` 的输出中看到: + ```Makefile HOSTCC arch/x86/tools/relocs_32.o HOSTCC arch/x86/tools/relocs_64.o @@ -433,6 +439,8 @@ The `relocs_32.c` and the `relocs_64.c` will be compiled that will contain [relo There 
is checking of the `version.h` after compiling of the `relocs.c`: +在编译完`relocs.c` 之后会检查`version.h`: + ```Makefile $(version_h): $(srctree)/Makefile FORCE $(call filechk,version.h) @@ -441,12 +449,17 @@ $(version_h): $(srctree)/Makefile FORCE We can see it in the output: +我们可以在输出中看到它: + + ``` CHK include/config/kernel.release ``` and the building of the `generic` assembly headers with the `asm-generic` target from the `arch/x86/include/generated/asm` that generated in the top Makefile of the Linux kernel. After the `asm-generic` target the `archprepare` will be done, so the `prepare0` target will be executed. As I wrote above: +以及在内核根 Makefile 中使用`arch/x86/include/generated/asm` 的目标`asm-generic` 来构建`generic` 汇编头文件。在目标`asm-generic` 完成之后,`archprepare` 也就完成了,所以目标`prepare0` 会接着被执行,如我上面所写: + ```Makefile prepare0: archprepare FORCE $(Q)$(MAKE) $(build)=. @@ -454,12 +467,15 @@ prepare0: archprepare FORCE Note on the `build`. It defined in the [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) and looks like this: +注意`build`,它定义在文件 [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) 中,内容是这样的: + ```Makefile build := -f $(srctree)/scripts/Makefile.build obj ``` or in our case it is current source directory - `.`: +或者在我们的例子中,它就是当前源码目录路径——`.`: ```Makefile $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=. 
``` From 01457fa101936d30230d3b2c0a5cb5175c3a3da6 Mon Sep 17 00:00:00 2001 From: wi-cuckoo Date: Wed, 19 Aug 2015 21:11:44 +0800 Subject: [PATCH 230/697] cancel pr --- ...Boss Data Virtualization GA with OData in Docker Container.md | 1 - 1 file changed, 1 deletion(-) diff --git a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md index f1505c5649..007f16493b 100644 --- a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md +++ b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md @@ -1,4 +1,3 @@ -translating wi-cuckoo Howto Run JBoss Data Virtualization GA with OData in Docker Container ================================================================================ Hi everyone, today we'll learn how to run JBoss Data Virtualization 6.0.0.GA with OData in a Docker Container. JBoss Data Virtualization is a data supply and integration solution platform that transforms various scatered multiple sources data, treats them as single source and delivers the required data into actionable information at business speed to any applications or users. JBoss Data Virtualization can help us easily combine and transform data into reusable business friendly data models and make unified data easily consumable through open standard interfaces. It offers comprehensive data abstraction, federation, integration, transformation, and delivery capabilities to combine data from one or multiple sources into reusable for agile data utilization and sharing.For more information about JBoss Data Virtualization, we can check out [its official page][1]. Docker is an open source platform that provides an open platform to pack, ship and run any application as a lightweight container. 
Running JBoss Data Virtualization with OData in Docker Container makes us easy to handle and launch. From 554825a1f4a7069a42bc7b2a7b4560e4152be42b Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 19 Aug 2015 21:29:41 +0800 Subject: [PATCH 231/697] PUB:20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @XLCYun 翻译这个系列辛苦啦! --- ...Get Right & Wrong - Page 5 - Conclusion.md | 40 +++++++++++++++++++ ...Get Right & Wrong - Page 5 - Conclusion.md | 39 ------------------ 2 files changed, 40 insertions(+), 39 deletions(-) create mode 100644 published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md delete mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md diff --git a/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md new file mode 100644 index 0000000000..ee9ded7f77 --- /dev/null +++ b/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md @@ -0,0 +1,40 @@ +一周 GNOME 之旅:品味它和 KDE 的是是非非(第三节 总结) +================================================================================ + +### 用户体验和最后想法 ### + +当 Gnome 2.x 和 KDE 4.x 要正面交锋时……我在它们之间左右逢源。我对它们爱恨交织,但总的来说它们使用起来还算是一种乐趣。然后 Gnome 3.x 来了,带着一场 Gnome Shell 的戏剧。那时我就放弃了 Gnome,我尽我所能的避开它。当时它对用户是不友好的,而且不直观,它打破了原有的设计典范,只为平板的统治世界做准备……而根据平板下跌的销量来看,这样的未来不可能实现。 + +在 Gnome 3 后续发布了八个版本后,奇迹发生了。Gnome 变得对对用户友好了,变得直观了。它完美吗?当然不。我还是很讨厌它想推动的那种设计范例,我讨厌它总想给我强加一种工作流(work flow),但是在付出时间和耐心后,这两都能被接受。只要你能够回头去看看 Gnome Shell 那外星人一样的界面,然后开始跟 Gnome 的其它部分(特别是控制中心)互动,你就能发现 Gnome 绝对做对了:细节,对细节的关注! 
+ +人们能适应新的界面设计范例,能适应新的工作流—— iPhone 和 iPad 都证明了这一点——但真正让他们操心的一直是“纸割”——那些不完美的细节。 + +它带出了 KDE 和 Gnome 之间最重要的一个区别。Gnome 感觉像一个产品,像一种非凡的体验。你用它的时候,觉得它是完整的,你要的东西都触手可及。它让人感觉就像是一个拥有 Windows 或者 OS X 那样桌面体验的 Linux 桌面版:你要的都在里面,而且它是被同一个目标一致的团队中的同一个人写出来的。天,即使是一个应用程序发出的 sudo 请求都感觉是 Gnome 下的一个特意设计的部分,就像在 Windows 下的一样。而在 KDE 下感觉就是随便一个应用程序都能创建的那种各种外观的弹窗。它不像是以系统本身这样的正式身份停下来说“嘿,有个东西要请求管理员权限!你要给它吗?”。 + +KDE 让人体验不到有凝聚力的体验。KDE 像是在没有方向地打转,感觉没有完整的体验。它就像是一堆东西往不同的的方向移动,只不过恰好它们都有一个共同享有的工具包而已。如果开发者对此很开心,那么好吧,他们开心就好,但是如果他们想提供最好体验的话,那么就需要多关注那些小地方了。用户体验跟直观应当做为每一个应用程序的设计中心,应当有一个视野,知道 KDE 要提供什么——并且——知道它看起来应该是什么样的。 + +是不是有什么原因阻止我在 KDE 下使用 Gnome 磁盘管理? Rhythmbox 呢? Evolution 呢? 没有,没有,没有。但是这样说又错过了关键。Gnome 和 KDE 都称它们自己为“桌面环境”。那么它们就应该是完整的环境,这意味着他们的各个部件应该汇集并紧密结合在一起,意味着你应该使用它们环境下的工具,因为它们说“您在一个完整的桌面中需要的任何东西,我们都支持。”说真的?只有 Gnome 看起来能符合完整的要求。KDE 在“汇集在一起”这一方面感觉就像个半成品,更不用说提供“完整体验”中你所需要的东西。Gnome 磁盘管理没有相应的对手—— kpartionmanage 要求 ROOT 权限。KDE 不运行“首次用户注册”的过程(原文:No 'First Time User' run through。可能是指系统安装过程中KDE没有创建新用户的过程,译注) ,现在也不过是在 Kubuntu 下引入了一个用户管理器。老天,Gnome 甚至提供了地图、笔记、日历和时钟应用。这些应用都是百分百要紧的吗?不,当然不了。但是正是这些应用帮助 Gnome 推动“Gnome 是一种完整丰富的体验”的想法。 + +我吐槽的 KDE 问题并非不可能解决,决对不是这样的!但是它需要人去关心它。它需要开发者为他们的作品感到自豪,而不仅仅是为它们实现的功能而感到自豪——组织的价值可大了去了。别夺走用户设置选项的能力—— GNOME 3.x 就是因为缺乏配置选项的能力而为我所诟病,但别把“好吧,你想怎么设置就怎么设置”作为借口而不提供任何理智的默认设置。默认设置是用户将看到的东西,它们是用户从打开软件的第一刻开始进行评判的关键。给用户留个好印象吧。 + +我知道 KDE 开发者们知道设计很重要,这也是为什么VDG(Visual Design Group 视觉设计组)存在的原因,但是感觉好像他们没有让 VDG 充分发挥,所以 KDE 里存在组织上的缺陷。不是 KDE 没办法完整,不是它没办法汇集整合在一起然后解决衰败问题,只是开发者们没做到。他们瞄准了靶心……但是偏了。 + +还有,在任何人说这句话之前……千万别说“欢迎给我们提交补丁啊"。因为当我开心的为某个人提交补丁时,只要开发者坚持以他们喜欢的却不直观的方式干事,更多这样的烦人事就会不断发生。这不关 Muon 有没有中心对齐。也不关 Amarok 的界面太丑。也不关每次我敲下快捷键后,弹出的音量和亮度调节窗口占用了我一大块的屏幕“地皮”(说真的,有人会把这些东西缩小)。 + +这跟心态的冷漠有关,跟开发者们在为他们的应用设计 UI 时根本就不多加思考有关。KDE 团队做的东西都工作得很好。Amarok 能播放音乐。Dragon 能播放视频。Kwin 或 Qt 和 kdelibs 似乎比 Mutter/gtk 更有力更效率(仅根据我的电池电量消耗计算。非科学性测试)。这些都很好,很重要……但是它们呈现的方式也很重要。甚至可以说,呈现方式是最重要的,因为它是用户看到的并与之交互的东西。 + +KDE 应用开发者们……让 VDG 参与进来吧。让 VDG 审查并核准每一个“核心”应用,让一个 VDG 的 UI/UX 专家来设计应用的使用模式和使用流程,以此保证其直观性。真见鬼,不管你们在开发的是啥应用,仅仅把它的模型发到 VDG 
论坛寻求反馈甚至都可能都能得到一些非常好的指点跟反馈。你有这么好的资源在这,现在赶紧用吧。 + +我不想说得好像我一点都不懂感恩。我爱 KDE,我爱那些志愿者们为了给 Linux 用户一个可视化的桌面而付出的工作与努力,也爱可供选择的 Gnome。正是因为我关心我才写这篇文章。因为我想看到更好的 KDE,我想看到它走得比以前更加遥远。而这样做需要每个人继续努力,并且需要人们不再躲避批评。它需要人们对系统互动及系统崩溃的地方都保持诚实。如果我们不能直言批评,如果我们不说“这真垃圾!”,那么情况永远不会变好。 + +这周后我会继续使用 Gnome 吗?可能不。应该不。Gnome 还在试着强迫我接受其工作流,而我不想追随,也不想遵循,因为我在使用它的时候感觉变得不够高效,因为它并不遵循我的思维模式。可是对于我的朋友们,当他们问我“我该用哪种桌面环境?”我可能会推荐 Gnome,特别是那些不大懂技术,只要求“能工作”就行的朋友。根据目前 KDE 的形势来看,这可能是我能说出的最狠毒的评估了。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md deleted file mode 100644 index 02ee7425fc..0000000000 --- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md +++ /dev/null @@ -1,39 +0,0 @@ -将GNOME作为我的Linux桌面的一周:他们做对的与做错的 - 第五节 - 总结 -================================================================================ -### 用户体验和最后想法 ### - -当Gnome 2.x和KDE 4.x要正面交锋时……我相当开心的跳到其中。我爱的东西它们有,恨的东西也有,但总的来说它们使用起来还算是一种乐趣。然后Gnome 3.x来了,带着一场Gnome Shell的戏剧。那时我就放弃了Gnome,我尽我所能的避开它。当时它对用户是不友好的,而且不直观,它打破了原有的设计典范,只为平板的统治世界做准备……而根据平板下跌的销量来看,这样的未来不可能实现。 - -Gnome 3后续发面了八个版本后,奇迹发生了。Gnome变得对对用户友好了。变得直观了。它完美吗?当然不了。我还是很讨厌它想推动的那种设计范例,我讨厌它总想把工作流(work flow)强加给我,但是在时间和耐心的作用下,这两都能被接受。只要你能够回头去看看Gnome Shell那外星人一样的界面,然后开始跟Gnome的其它部分(特别是控制中心)互动,你就能发现Gnome绝对做对了:细节。对细节的关注! 
- -人们能适应新的界面设计范例,能适应新的工作流——iPhone和iPad都证明了这一点——但真正一直让他们操心的是“纸片的割伤”(paper cuts,此处指易于修复但烦人的缺陷,译注)。 - -它带出了KDE和Gnome之间最重要的一个区别。Gnome感觉像一个产品。像一种非凡的体验。你用它的时候,觉得它是完整的,你要的东西都在你的指尖。它让人感觉就像是一个拥有windows或者OS X那样桌面体验的Linux桌面版:你要的都在里面,而且它是被同一个目标一致的团队中的同一个人写出来的。天,即使是一个应用程序发出的sudo请求都感觉是Gnome下的一个特意设计的部分,就像在Windows下的一样。而在KDE它就像是任何应用程序都能创建的那种随机外观的弹窗。它不像是以系统的一部分这样的正式身份停下来说“嘿,有个东西要请求管理员权限!你要给它吗?”。 - -KDE让人体验不到有凝聚力的体验。KDE像是在没有方向地打转,感觉没有完整的体验。它就像是一堆东西往不同的的方向移动,只不过恰好它们都有一个共同享有的工具包。如果开发者对此很开心,那么好吧,他们开心就好,但是如果他们想提供最好体验的话,那么就需要多关注那些小地方了。用户体验跟直观应当做为每一个应用程序的设计中心,应当有一个视野,知道KDE要提供什么——并且——知道它看起来应该是什么样的。 - -是不是有什么原因阻止我在KDE下使用Gnome磁盘管理? Rhythmbox? Evolution? 没有。没有。没有。但是这样说又错过了关键。Gnome和KDE都称它们为“桌面环境”。那么它们就应该是完整的环境,这意味着他们的各个部件应该汇集并紧密结合在一起,意味着你使用它们环境下的工具,因为它们说“您在一个完整的桌面中需要的任何东西,我们都支持。”说真的?只有Gnome看起来能符合完整的要求。KDE在“汇集在一起”这一方面感觉就像个半成品,更不用说提供“完整体验”中你所需要的东西。Gnome磁盘管理没有相应的对手——kpartionmanage要求ROOT权限。KDE不运行“首次用户注册”的过程(原文:No 'First Time User' run through.可能是指系统安装过程中KDE没有创建新用户的过程,译注) ,现在也不过是在Kubuntu下引入了一个用户管理器。老天,Gnome甚至提供了地图,笔记,日历和时钟应用。这些应用都是百分百要紧的吗?不,当然不了。但是正是这些应用帮助Gnome推动“Gnome是一种完整丰富的体验”的想法。 - -我吐槽的KDE问题并非不可能解决,决对不是这样的!但是它需要人去关心它。它需要开发者为他们的作品感到自豪,而不仅仅是为它们实现的功能而感到自豪——组织的价值可大了去了。别夺走用户设置选项的能力——GNOME 3.x就是因为缺乏配置选项的能力而为我所诟病,但别把“好吧,你想怎么设置就怎么设置,”作为借口而不提供任何理智的默认设置。默认设置是用户将看到的东西,它们是用户从打开软件的第一刻开始进行评判的关键。给用户留个好印象吧。 - -我知道KDE开发者们知道设计很重要,这也是为什么Visual Design Group(视觉设计团体)存在的原因,但是感觉好像他们没有让VDG充分发挥。所以KDE里存在组织上的缺陷。不是KDE没办法完整,不是它没办法汇集整合在一起然后解决衰败问题,只是开发者们没做到。他们瞄准了靶心……但是偏了。 - -还有,在任何人说这句话之前……千万别说“补丁很受欢迎啊"。因为当我开心的为个人提交补丁时,只要开发者坚持以他们喜欢的却不直观的方式干事,更多这样的烦事就会不断发生。这不关Muon有没有中心对齐。也不关Amarok的界面太丑。也不关每次我敲下快捷键后,弹出的音量和亮度调节窗口占用了我一大块的屏幕“房地产”(说真的,有人会去缩小这些东西)。 - -这跟心态的冷漠有关,跟开发者们在为他们的应用设计UI时根本就不多加思考有关。KDE团队做的东西都工作得很好。Amarok能播放音乐。Dragon能播放视频。Kwin或Qt和kdelibs似乎比Mutter/gtk更有力更效率(仅根本我的电池电量消耗计算。非科学性测试)。这些都很好,很重要……但是它们呈现的方式也很重要。甚至可以说,呈现方式是最重要的,因为它是用户看到的和与之交互的东西。 - -KDE应用开发者们……让VDG参与进来吧。让VDG审查并核准每一个”核心“应用,让一个VDG的UI/UX专家来设计应用的使用模式和使用流程,以此保证其直观性。真见鬼,不管你们在开发的是啥应用,仅仅把它的模型发到VDG论坛寻求反馈甚至都可能都能得到一些非常好的指点跟反馈。你有这么好的资源在这,现在赶紧用吧。 - 
-我不想说得好像我一点都不懂感恩。我爱KDE,我爱那些志愿者们为了给Linux用户一个可视化的桌面而付出的工作与努力,也爱可供选择的Gnome。正是因为我关心我才写这篇文章。因为我想看到更好的KDE,我想看到它走得比以前更加遥远。而这样做需要每个人继续努力,并且需要人们不再躲避批评。它需要人们对系统互动及系统崩溃的地方都保持诚实。如果我们不能直言批评,如果我们不说”这真垃圾!”,那么情况永远不会变好。 - -这周后我会继续使用Gnome吗?可能不,不。Gnome还在试着强迫我接受其工作流,而我不想追随,也不想遵循,因为我在使用它的时候感觉变得不够高效,因为它并不遵循我的思维模式。可是对于我的朋友们,当他们问我“我该用哪种桌面环境?”我可能会推荐Gnome,特别是那些不大懂技术,只要求“能工作”就行的朋友。根据目前KDE的形势来看,这可能是我能说出的最狠毒的评估了。 - --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5 - -作者:Eric Griffith -译者:[XLCYun](https://github.com/XLCYun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 5cbc818b77dd1743ebac25a7b6f1bac3610a6ae5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Thu, 20 Aug 2015 01:21:17 +0800 Subject: [PATCH 232/697] RHCSA Series--Part 03-Done MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 更新完成 --- ...ow to Manage Users and Groups in RHEL 7.md | 224 ++++++++++++++++++ 1 file changed, 224 insertions(+) create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md diff --git a/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md b/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md new file mode 100644 index 0000000000..1436621c4e --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md @@ -0,0 +1,224 @@ +RHCSA 系列: 如何管理RHEL7的用户和组 – Part 3 +================================================================================ +和管理其他Linux服务器一样,管理一个 RHEL 7 服务器 要求你能够添加,修改,暂停或删除用户帐户,并且授予他们文件,目录,其他系统资源所必要的权限。 +![User and Group Management in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/User-and-Group-Management-in-Linux.png) + +RHCSA: 用户和组管理 – Part 3 + +### 
管理用户帐户 ### + +如果想要给RHEL 7 服务器添加账户,你需要以root用户执行如下两条命令之一 + + # adduser [new_account] + # useradd [new_account] + +当添加新的用户帐户时,默认会执行下列操作。 + +- 他/她的主目录会被创建(一般是"/home/用户名",除非你特别设置) +- 一些隐藏文件,如`.bash_logout`, `.bash_profile` 以及 `.bashrc`,会被复制到用户的主目录,用来为用户的会话提供环境变量。你可以进一步查看它们的相关细节。 +- 会为该帐户创建一个邮件池目录 +- 会创建一个和用户名同名的组 + +用户帐户的全部信息保存在`/etc/passwd `文件中。这个文件以如下格式保存每一个系统帐户的所有信息(各字段以冒号分隔) + [username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell] + +- `[username]` 和`[Comment]` 这两个字段的含义不言自明 +- 第二个字段中的‘x’表示该帐户使用影子密码保护(具体内容在`/etc/shadow`文件中),我们以`[username]`登录时就要用到它 +- `[UID]` 和`[GID]`是整数,分别表示`[username]` 的用户 ID 和所属主组的组 ID。 + +最后, + +- `[Home directory]`显示`[username]`的主目录的绝对路径 +- `[Default shell]` 是当用户登录系统后使用的默认shell + +另外一个你必须熟悉的重要文件是存储组信息的`/etc/group`。和`/etc/passwd`类似,它每行一条记录,各字段同样以冒号分隔 + [Group name]:[Group password]:[GID]:[Group members] + + + +- `[Group name]` 是组名 +- 第二个字段表示这个组是否使用了组密码 (如果是"x"意味着没有) +- `[GID]`: 和`/etc/passwd`中的含义一样 +- `[Group members]`:以逗号分隔的用户列表,包含组内的所有用户 + +添加过帐户后,任何时候你都可以通过 usermod 命令来修改用户帐户,基础的语法如下: + # usermod [options] [username] + +相关阅读 + +- [15 ‘useradd’ Command Examples][1] +- [15 ‘usermod’ Command Examples][2] + +#### 示例1 : 设置帐户的过期时间 #### + +如果你的公司有一些短期使用的帐户,或者你想在有限的时间内授予帐户访问权限,你可以使用 `--expiredate` 参数 ,后加YYYY-MM-DD格式的日期。为了查看是否生效,你可以对比修改前后如下命令的输出 + # chage -l [username] + +帐户更新前后的变动如下图所示 +![Change User Account Information](http://www.tecmint.com/wp-content/uploads/2015/03/Change-User-Account-Information.png) + +修改用户信息 + +#### 示例 2: 向组内追加用户 #### + +除了创建用户时的主用户组,一个用户还能被添加到别的附加组。你需要使用 -aG(即 --append --groups)选项,后跟逗号分隔的组名 +#### 示例 3: 修改用户主目录或默认Shell #### + +如果因为一些原因,你需要修改默认的用户主目录(一般为 /home/用户名),你需要使用 -d(即 --home)参数,后跟新主目录的绝对路径 +如果有用户想要使用其他的shell来取代默认分配的bash(比如sh),可以使用 usermod 的 --shell 参数,后加新的shell的路径 +#### 示例 4: 展示用户所属的组 #### + +当把用户添加到附加组后,你可以使用如下命令验证它属于哪些组 + + # groups [username] + # id [username] + +下面的图片演示了示例 2 到示例 4 + +![Adding User to Supplementary Group](http://www.tecmint.com/wp-content/uploads/2015/03/Adding-User-to-Supplementary-Group.png) + +添加用户到附加组 + +在上面的示例中: + + # usermod --append
--groups gacanepa,users --home /tmp --shell /bin/sh tecmint + +如果想要从组内删除用户,省略上面命令中的 `--append` 选项,并在 `--groups` 后列出你希望用户保留的组 + +#### 示例 5: 通过锁定密码来停用帐户 #### + +如果想要关闭帐户,你可以使用 -l(小写的 L)或 --lock 选项来锁定用户的密码。这将会阻止用户登录。 + +#### 示例 6: 解锁密码 #### + +当你想要重新启用帐户、让用户可以继续登录时,可以使用 -u 或 --unlock 选项来解锁之前按示例 5 锁定的密码 + + # usermod --unlock tecmint + +下面的图片展示了示例5和示例6 + +![Lock Unlock User Account](http://www.tecmint.com/wp-content/uploads/2015/03/Lock-Unlock-User-Account.png) + +锁定和解锁用户帐户 + +#### 示例 7:删除组和用户 #### + +如果要删除一个组,你需要使用 groupdel;如果需要删除用户,你需要使用 userdel(添加 -r 选项可以一并删除主目录和邮件池的内容) + # groupdel [group_name] # 删除组 + # userdel -r [user_name] # 删除用户,并删除主目录和邮件池 + +如果有文件属于被删除的组,这些文件不会被删除,但其属组会变为已删除组的GID。 +### 列举,设置,并且修改 ugo/rwx 权限 ### + +著名的 [ls 命令][3] 是每位管理员的好帮手。配合 -l 参数使用时,它能以长格式(详细格式)列出一个目录的内容。 + +该命令也可以用于单个文件。无论哪种方式,`ls -l` 输出中每行的前10个字符表示文件的属性。 +这10个字符序列的第一个字符用于表示文件类型: + +- – (连字符): 一个标准文件 +- d: 一个目录 +- l: 一个符号链接 +- c: 字符设备(将数据作为字节流,即一个终端) +- b: 块设备(处理数据块,即存储设备) + +文件属性接下来的九个字符,从左到右每三个为一组,被称为文件模式,分别表示授予文件属主、文件属组以及其余用户(通常被称为"世界")的读(r)、写(w)、执行(x)权限。 +文件上的读权限允许文件被打开和读取;目录上的读权限则在同时设置了执行权限时允许列出其内容。另外,文件上的执行权限允许它作为程序运行。 +文件权限是通过chmod命令改变的,它的基本语法如下: + + # chmod [new_mode] file + +其中 new_mode 是一个八进制数或一个表达式,用于指定新的权限。两种方式各有适用场景,你可以自由选用自己最顺手的方法来设置文件权限。 +八进制数可以根据二进制等价形式计算出来,而二进制形式又可以由属主、属组和其他用户各自所需的权限得到。某项权限存在时,其值等于相应的 2 的幂(r=2²=4,w=2¹=2,x=2⁰=1),不存在时记为 0。例如: +![File Permissions](http://www.tecmint.com/wp-content/uploads/2015/03/File-Permissions.png) + +文件权限 + +要以八进制形式设置上图所示的文件权限,可以输入 + + # chmod 744 myfile + +请花一分钟,把我们之前的计算和更改文件权限后的实际输出对比一下: + +![Long List Format](http://www.tecmint.com/wp-content/uploads/2015/03/Long-List-Format.png) + +长列表格式 + +#### 示例 8: 寻找777权限的文件 #### + +出于安全考虑,你应该确保在正常情况下尽量避免 777 权限(即任何人都可读、可写、可执行)。虽然我们会在以后的教程中介绍如何更有效地在系统中定位具有某种权限的所有文件,你现在仍可以结合使用 ls 和 grep 来获取这类信息。 +在下面的例子,我们会寻找 /etc 目录下的777权限文件.
注意,我们要使用第二章讲到的管道的知识[第二章:文件和目录管理][4]: + + # ls -l /etc | grep rwxrwxrwx + +![Find All Files with 777 Permission](http://www.tecmint.com/wp-content/uploads/2015/03/Find-All-777-Files.png) + +查找所有777权限的文件 + +#### 示例 9: 为所有用户指定特定权限 #### + +对于所有用户都应该有权运行的shell脚本和二进制文件(而不仅仅是其属主和属组),应当相应地设置执行位(稍后我们会讨论一种特殊情况): + # chmod a+x script.sh + +**注意**: 我们也可以用表达式来设置文件模式:字母 `u` 表示属主权限,`g` 表示属组权限,其余用户为 `o`,全部三者为 `a`。权限可以通过`+` 或 `-` 来授予或回收。 + +![Set Execute Permission on File](http://www.tecmint.com/wp-content/uploads/2015/03/Set-Execute-Permission-on-File.png) + +为文件设置执行权限 + +长格式的目录列表还会在第一列和第二列分别显示文件的属主和属组。此功能可作为系统中文件的第一级访问控制方法: + +![Check File Owner and Group](http://www.tecmint.com/wp-content/uploads/2015/03/Check-File-Owner-and-Group.png) + +检查文件的属主和属组 + +要改变文件的属主,需要使用chown命令。注意,你可以同时或分别更改文件的属主和属组: + # chown user:group file + +你可以只更改用户或组,也可以同时更改两个属性,但是不要忘记中间的冒号;如果只想更新其中一个属性,把另一边留空即可: + # chown :group file # Change group ownership only + # chown user: file # Change user ownership only + +#### 示例 10:从一个文件复制属主信息到另一个文件 #### + +如果你想把一个文件的属主信息"克隆"给另一个文件,可以使用 --reference 参数,如下: + # chown --reference=ref_file file + +ref_file 的属主和属组会被同样设置给 file: + +![Clone File Ownership](http://www.tecmint.com/wp-content/uploads/2015/03/Clone-File-Ownership.png) + +复制文件属主信息 + +### 设置 SETGID 协作目录 ### + +如果你需要让某个特定的用户组可以访问某个目录下的所有文件,常用的做法是为该目录设置 setgid 位。设置了 setgid 位之后,真实用户的有效 GID 会变成该目录属组的 GID。 +因此,任何用户都能以该目录属组被授予的权限来访问其中的文件。此外,当目录设置了 setgid 位时,其中新创建的文件会继承该目录的属组,新创建的子目录也会继承父目录的 setgid 位。 + # chmod g+s [filename] + +如果要以八进制形式设置 setgid 位,在基本权限前面加上数字 2 即可 + # chmod 2755 [directory] + +### 总结 ### + +扎实的用户和组管理知识,加上标准与特殊的 Linux 权限管理,再辅以实践,可以帮你快速定位并解决 RHEL 7 服务器上的文件权限问题。 +我向你保证,只要你按照本文概述的步骤练习,并借助系统文档(如本系列 [Part 1: Reviewing Essential Commands & System Documentation][5] 中所介绍的),你就能掌握系统管理的这项基本能力。 + +如果你有任何问题或意见,请随时使用下面的表单告诉我们。 + +-------------------------------------------------------------------------------- + +via:
http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ + +作者:[Gabriel Cánepa][a] +译者:[xiqingongzi](https://github.com/xiqingongzi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/add-users-in-linux/ +[2]:http://www.tecmint.com/usermod-command-examples/ +[3]:http://www.tecmint.com/ls-interview-questions/ +[4]:http://www.tecmint.com/file-and-directory-management-in-linux/ +[5]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ From a450026ce4ddab845ce40bffc5f6e6c86e79da78 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Thu, 20 Aug 2015 01:21:53 +0800 Subject: [PATCH 233/697] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=A4=9A=E4=BD=99?= =?UTF-8?q?=E6=96=87=E4=BB=B6?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ow to Manage Users and Groups in RHEL 7.md | 249 ------------------ 1 file changed, 249 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md b/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md deleted file mode 100644 index 0b85744c6c..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md +++ /dev/null @@ -1,249 +0,0 @@ -[translated by xiqingongzi] -RHCSA Series: How to Manage Users and Groups in RHEL 7 – Part 3 -================================================================================ -Managing a RHEL 7 server, as it is the case with any other Linux server, will require that you know how to add, edit, suspend, or delete user accounts, and grant users the necessary permissions to files, directories, and other system 
resources to perform their assigned tasks. - -![User and Group Management in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/User-and-Group-Management-in-Linux.png) - -RHCSA: User and Group Management – Part 3 - -### Managing User Accounts ### - -To add a new user account to a RHEL 7 server, you can run either of the following two commands as root: - - # adduser [new_account] - # useradd [new_account] - -When a new user account is added, by default the following operations are performed. - -- His/her home directory is created (`/home/username` unless specified otherwise). -- These `.bash_logout`, `.bash_profile` and `.bashrc` hidden files are copied inside the user’s home directory, and will be used to provide environment variables for his/her user session. You can explore each of them for further details. -- A mail spool directory is created for the added user account. -- A group is created with the same name as the new user account. - -The full account summary is stored in the `/etc/passwd `file. This file holds a record per system user account and has the following format (fields are separated by a colon): - - [username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell] - -- These two fields `[username]` and `[Comment]` are self explanatory. -- The second filed ‘x’ indicates that the account is secured by a shadowed password (in `/etc/shadow`), which is used to logon as `[username]`. -- The fields `[UID]` and `[GID]` are integers that shows the User IDentification and the primary Group IDentification to which `[username]` belongs, equally. - -Finally, - -- The `[Home directory]` shows the absolute location of `[username]’s` home directory, and -- `[Default shell]` is the shell that is commit to this user when he/she logins into the system. - -Another important file that you must become familiar with is `/etc/group`, where group information is stored. 
As it is the case with `/etc/passwd`, there is one record per line and its fields are also delimited by a colon: - - [Group name]:[Group password]:[GID]:[Group members] - -where, - -- `[Group name]` is the name of group. -- Does this group use a group password? (An “x” means no). -- `[GID]`: same as in `/etc/passwd`. -- `[Group members]`: a list of users, separated by commas, that are members of each group. - -After adding an account, at anytime, you can edit the user’s account information using usermod, whose basic syntax is: - - # usermod [options] [username] - -Read Also: - -- [15 ‘useradd’ Command Examples][1] -- [15 ‘usermod’ Command Examples][2] - -#### EXAMPLE 1: Setting the expiry date for an account #### - -If you work for a company that has some kind of policy to enable account for a certain interval of time, or if you want to grant access to a limited period of time, you can use the `--expiredate` flag followed by a date in YYYY-MM-DD format. To verify that the change has been applied, you can compare the output of - - # chage -l [username] - -before and after updating the account expiry date, as shown in the following image. - -![Change User Account Information](http://www.tecmint.com/wp-content/uploads/2015/03/Change-User-Account-Information.png) - -Change User Account Information - -#### EXAMPLE 2: Adding the user to supplementary groups #### - -Besides the primary group that is created when a new user account is added to the system, a user can be added to supplementary groups using the combined -aG, or –append –groups options, followed by a comma separated list of groups. - -#### EXAMPLE 3: Changing the default location of the user’s home directory and / or changing its shell #### - -If for some reason you need to change the default location of the user’s home directory (other than /home/username), you will need to use the -d, or –home options, followed by the absolute path to the new home directory. 
- -If a user wants to use another shell other than bash (for example, sh), which gets assigned by default, use usermod with the –shell flag, followed by the path to the new shell. - -#### EXAMPLE 4: Displaying the groups an user is a member of #### - -After adding the user to a supplementary group, you can verify that it now actually belongs to such group(s): - - # groups [username] - # id [username] - -The following image depicts Examples 2 through 4: - -![Adding User to Supplementary Group](http://www.tecmint.com/wp-content/uploads/2015/03/Adding-User-to-Supplementary-Group.png) - -Adding User to Supplementary Group - -In the example above: - - # usermod --append --groups gacanepa,users --home /tmp --shell /bin/sh tecmint - -To remove a user from a group, omit the `--append` switch in the command above and list the groups you want the user to belong to following the `--groups` flag. - -#### EXAMPLE 5: Disabling account by locking password #### - -To disable an account, you will need to use either the -l (lowercase L) or the –lock option to lock a user’s password. This will prevent the user from being able to log on. - -#### EXAMPLE 6: Unlocking password #### - -When you need to re-enable the user so that he can log on to the server again, use the -u or the –unlock option to unlock a user’s password that was previously blocked, as explained in Example 5 above. 
- - # usermod --unlock tecmint - -The following image illustrates Examples 5 and 6: - -![Lock Unlock User Account](http://www.tecmint.com/wp-content/uploads/2015/03/Lock-Unlock-User-Account.png) - -Lock Unlock User Account - -#### EXAMPLE 7: Deleting a group or an user account #### - -To delete a group, you’ll want to use groupdel, whereas to delete a user account you will use userdel (add the –r switch if you also want to delete the contents of its home directory and mail spool): - - # groupdel [group_name] # Delete a group - # userdel -r [user_name] # Remove user_name from the system, along with his/her home directory and mail spool - -If there are files owned by group_name, they will not be deleted, but the group owner will be set to the GID of the group that was deleted. - -### Listing, Setting and Changing Standard ugo/rwx Permissions ### - -The well-known [ls command][3] is one of the best friends of any system administrator. When used with the -l flag, this tool allows you to view a list a directory’s contents in long (or detailed) format. - -However, this command can also be applied to a single file. Either way, the first 10 characters in the output of `ls -l` represent each file’s attributes. - -The first char of this 10-character sequence is used to indicate the file type: - -- – (hyphen): a regular file -- d: a directory -- l: a symbolic link -- c: a character device (which treats data as a stream of bytes, i.e. a terminal) -- b: a block device (which handles data in blocks, i.e. storage devices) - -The next nine characters of the file attributes, divided in groups of three from left to right, are called the file mode and indicate the read (r), write(w), and execute (x) permissions granted to the file’s owner, the file’s group owner, and the rest of the users (commonly referred to as “the world”), respectively. 
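As an editorial aside (not part of the original article), each rwx triplet described above maps onto one octal digit with weights r=4, w=2 and x=1. A small illustrative shell helper (the function name `perm_digit` is invented for this sketch) computes a single digit:

```shell
# Illustrative sketch only: compute the octal digit for one rwx triplet
# using the weights r=4, w=2, x=1.
perm_digit() {
    local t=$1 n=0
    [ "${t:0:1}" = "r" ] && n=$((n + 4))
    [ "${t:1:1}" = "w" ] && n=$((n + 2))
    [ "${t:2:1}" = "x" ] && n=$((n + 1))
    printf '%s' "$n"
}

# rwxr--r-- yields 7, 4 and 4 for the owner, group and world slots
printf '%s%s%s\n' "$(perm_digit rwx)" "$(perm_digit r--)" "$(perm_digit r--)"
```

Running it prints `744`, the same mode used with chmod in the example that follows.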
- -While the read permission on a file allows the same to be opened and read, the same permission on a directory allows its contents to be listed if the execute permission is also set. In addition, the execute permission in a file allows it to be handled as a program and run. - -File permissions are changed with the chmod command, whose basic syntax is as follows: - - # chmod [new_mode] file - -where new_mode is either an octal number or an expression that specifies the new permissions. Feel free to use the mode that works best for you in each case. Or perhaps you already have a preferred way to set a file’s permissions – so feel free to use the method that works best for you. - -The octal number can be calculated based on the binary equivalent, which can in turn be obtained from the desired file permissions for the owner of the file, the owner group, and the world. The presence of a certain permission equals a power of 2 (r=2²=4, w=2¹=2, x=2⁰=1), while its absence means 0. For example: - -![File Permissions](http://www.tecmint.com/wp-content/uploads/2015/03/File-Permissions.png) - -File Permissions - -To set the file’s permissions as indicated above in octal form, type: - - # chmod 744 myfile - -Please take a minute to compare our previous calculation to the actual output of `ls -l` after changing the file’s permissions: - -![Long List Format](http://www.tecmint.com/wp-content/uploads/2015/03/Long-List-Format.png) - -Long List Format - -#### EXAMPLE 8: Searching for files with 777 permissions #### - -As a security measure, you should make sure that files with 777 permissions (read, write, and execute for everyone) are avoided like the plague under normal circumstances. Although we will explain in a later tutorial how to more effectively locate all the files in your system with a certain permission set, you can -by now- combine ls with grep to obtain such information. - -In the following example, we will look for files with 777 permissions in the /etc directory only.
Note that we will use pipelining as explained in [Part 2: File and Directory Management][4] of this RHCSA series: - - # ls -l /etc | grep rwxrwxrwx - -![Find All Files with 777 Permission](http://www.tecmint.com/wp-content/uploads/2015/03/Find-All-777-Files.png) - -Find All Files with 777 Permission - -#### EXAMPLE 9: Assigning a specific permission to all users #### - -Shell scripts, along with some binaries that all users should have access to (not just their corresponding owner and group), should have the execute bit set accordingly (please note that we will discuss a special case later): - - # chmod a+x script.sh - -**Note**: That we can also set a file’s mode using an expression that indicates the owner’s rights with the letter `u`, the group owner’s rights with the letter `g`, and the rest with `o`. All of these rights can be represented at the same time with the letter `a`. Permissions are granted (or revoked) with the `+` or `-` signs, respectively. - -![Set Execute Permission on File](http://www.tecmint.com/wp-content/uploads/2015/03/Set-Execute-Permission-on-File.png) - -Set Execute Permission on File - -A long directory listing also shows the file’s owner and its group owner in the first and second columns, respectively. This feature serves as a first-level access control method to files in a system: - -![Check File Owner and Group](http://www.tecmint.com/wp-content/uploads/2015/03/Check-File-Owner-and-Group.png) - -Check File Owner and Group - -To change file ownership, you will use the chown command. 
Note that you can change the file and group ownership at the same time or separately: - - # chown user:group file - -**Note**: That you can change the user or group, or the two attributes at the same time, as long as you don’t forget the colon, leaving user or group blank if you want to update the other attribute, for example: - - # chown :group file # Change group ownership only - # chown user: file # Change user ownership only - -#### EXAMPLE 10: Cloning permissions from one file to another #### - -If you would like to “clone” ownership from one file to another, you can do so using the –reference flag, as follows: - - # chown --reference=ref_file file - -where the owner and group of ref_file will be assigned to file as well: - -![Clone File Ownership](http://www.tecmint.com/wp-content/uploads/2015/03/Clone-File-Ownership.png) - -Clone File Ownership - -### Setting Up SETGID Directories for Collaboration ### - -Should you need to grant access to all the files owned by a certain group inside a specific directory, you will most likely use the approach of setting the setgid bit for such directory. When the setgid bit is set, the effective GID of the real user becomes that of the group owner. - -Thus, any user can access a file under the privileges granted to the group owner of such file. In addition, when the setgid bit is set on a directory, newly created files inherit the same group as the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory. - - # chmod g+s [filename] - -To set the setgid in octal form, prepend the number 2 to the current (or desired) basic permissions. - - # chmod 2755 [directory] - -### Conclusion ### - -A solid knowledge of user and group management, along with standard and special Linux permissions, when coupled with practice, will allow you to quickly identify and troubleshoot issues with file permissions in your RHEL 7 server. 
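As an editorial aside (a hypothetical check, not part of the original article), you can verify that prepending the digit 2 to the basic permissions, as described in the setgid section above, really sets the setgid bit by reading the mode back with GNU stat:

```shell
# Illustrative sketch: apply mode 2755 to a scratch directory and inspect it.
demo_dir=$(mktemp -d)
chmod 2755 "$demo_dir"
mode=$(stat -c '%a' "$demo_dir")
perms=$(ls -ld "$demo_dir" | cut -c1-10)
echo "$mode $perms"   # with GNU tools this prints: 2755 drwxr-sr-x
rm -rf "$demo_dir"
```

The `s` in the group triplet of the long listing is how the setgid bit shows up when the group execute bit is also set.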
- -I assure you that as you follow the steps outlined in this article and use the system documentation (as explained in [Part 1: Reviewing Essential Commands & System Documentation][5] of this series) you will master this essential competence of system administration. - -Feel free to let us know if you have any questions or comments using the form below. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/add-users-in-linux/ -[2]:http://www.tecmint.com/usermod-command-examples/ -[3]:http://www.tecmint.com/ls-interview-questions/ -[4]:http://www.tecmint.com/file-and-directory-management-in-linux/ -[5]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ From c4c3eca5f243fd41b4cfa2da2cca0aa2bdf9517c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Thu, 20 Aug 2015 01:24:52 +0800 Subject: [PATCH 234/697] =?UTF-8?q?=E3=80=90=E7=BF=BB=E8=AF=91=E4=B8=AD?= =?UTF-8?q?=E3=80=91RHCSA=20Series--Part=2007?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
(Access Control Lists) and Mounting Samba or NFS Shares.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md b/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md index d4801d9923..f8d9d45d27 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md @@ -1,3 +1,4 @@ +[xiqingongzi Translating] RHCSA Series: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares – Part 7 ================================================================================ In the last article ([RHCSA series Part 6][1]) we started explaining how to set up and configure local system storage using parted and ssm. 
@@ -209,4 +210,4 @@ via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ [2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ -[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html \ No newline at end of file +[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html From 9c78c7296ec672e8e0825d08106bcba40362f8f4 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 20 Aug 2015 10:03:16 +0800 Subject: [PATCH 235/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译到line 600 --- ...28 Process of the Linux kernel building.md | 21 +++++++++++++++++-- 1 file changed, 19 insertions(+), 2 deletions(-) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index e55605f863..c622e43ba0 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -482,12 +482,16 @@ $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=. The [scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) tries to find the `Kbuild` file by the given directory via the `obj` parameter, include this `Kbuild` files: +参数`obj` 会告诉脚本[scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) 那些目录包含`kbuild` 文件,脚本以此来寻找各个`kbuild` 文件: + ```Makefile include $(kbuild-file) ``` and build targets from it. In our case `.` contains the [Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild) file that generates the `kernel/bounds.s` and the `arch/x86/kernel/asm-offsets.s`. 
After this the `prepare` target finished to work. The `vmlinux-dirs` also depends on the second target - `scripts` that compiles following programs: `file2alias`, `mk_elfconfig`, `modpost` and etc... After scripts/host-programs compilation our `vmlinux-dirs` target can be executed. First of all let's try to understand what does `vmlinux-dirs` contain. For my case it contains paths of the following kernel directories: +然后根据这个构建目标。我们这里`.` 包含了[Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild),就用这个文件来生成`kernel/bounds.s` 和`arch/x86/kernel/asm-offsets.s`。这样目标`prepare` 就完成了它的工作。`vmlinux-dirs` 也依赖于第二个目标——`scripts` ,`scripts`会编译接下来的几个程序:`file2alias`,`mk_elfconfig`,`modpost`等等。`scripts/host-programs` 编译完之后,我们的目标`vmlinux-dirs` 就可以开始编译了。第一步,我们先来理解一下`vmlinux-dirs` 都包含了哪些东西。在我们的例子中它包含了接下来的内核目录的路径: + ``` init usr arch/x86 kernel mm fs ipc security crypto block drivers sound firmware arch/x86/pci arch/x86/power @@ -496,6 +500,8 @@ arch/x86/video net lib arch/x86/lib We can find definition of the `vmlinux-dirs` in the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel: +我们可以在内核的根[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 里找到`vmlinux-dirs` 的定义: + ```Makefile vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \ $(core-y) $(core-m) $(drivers-y) $(drivers-m) \ @@ -512,6 +518,8 @@ libs-y := lib/ Here we remove the `/` symbol from the each directory with the help of the `patsubst` and `filter` functions and put it to the `vmlinux-dirs`.
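The `$(patsubst %/,%,$(filter %/, ...))` expression above can be emulated outside of Make. Here is a small illustrative Python sketch (not part of the kernel build, and specialized to the `%/` pattern used here, while Make's `%` is a general wildcard) that mimics how directory entries are kept and their trailing slashes stripped:

```python
# Emulation of the two GNU Make text functions as used in the
# vmlinux-dirs definition: keep only the words ending in "/" (filter),
# then strip that trailing "/" from each word (patsubst).

def make_filter(suffix, words):
    # emulates $(filter %/, words) for the "%/" pattern:
    # keep only words that end with the given suffix
    return [w for w in words if w.endswith(suffix)]

def make_patsubst(words):
    # emulates $(patsubst %/,%,words): drop the trailing "/" from each word
    return [w[:-1] if w.endswith("/") else w for w in words]

# core-y/libs-y style lists mix directories ("kernel/") and plain targets
words = ["init/", "usr/", "arch/x86/", "kernel/", "mm/", "fs/", "vmlinux.lds"]
vmlinux_dirs = make_patsubst(make_filter("/", words))
print(vmlinux_dirs)  # ['init', 'usr', 'arch/x86', 'kernel', 'mm', 'fs']
```

Note how `vmlinux.lds` is dropped by the filter step: only directory entries (the words ending in `/`) survive into `vmlinux-dirs`.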
So we have list of directories in the `vmlinux-dirs` and the following code: +这里我们借助函数`patsubst` 和`filter`去掉了每个目录路径里的符号`/`,并且把结果放到`vmlinux-dirs` 里。所以我们就有了`vmlinux-dirs` 里的目录的列表,以及下面的代码: + ```Makefile $(vmlinux-dirs): prepare scripts $(Q)$(MAKE) $(build)=$@ @@ -519,6 +527,8 @@ $(vmlinux-dirs): prepare scripts The `$@` represents `vmlinux-dirs` here that means that it will go recursively over all directories from the `vmlinux-dirs` and its internal directories (depens on configuration) and will execute `make` in there. We can see it in the output: +符号`$@` 在这里代表了`vmlinux-dirs`,这就表明程序会递归遍历`vmlinux-dirs` 以及它内部的全部目录(依赖于配置),并且在对应的目录下执行`make` 命令。我们可以在输出看到结果: + ``` CC init/main.o CHK include/generated/compile.h @@ -535,7 +545,7 @@ The `$@` represents `vmlinux-dirs` here that means that it will go recursively o ``` Source code in each directory will be compiled and linked to the `built-in.o`: - +每个目录下的源代码将会被编译并且链接到`built-in.o` 里: ``` $ find . -name built-in.o ./arch/x86/crypto/built-in.o @@ -549,6 +559,8 @@ $ find . -name built-in.o Ok, all buint-in.o(s) built, now we can back to the `vmlinux` target. As you remember, the `vmlinux` target is in the top Makefile of the Linux kernel. Before the linking of the `vmlinux` it builds [samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation) and etc., but I will not describe it in this part as I wrote in the beginning of this part. +好了,所有的`built-in.o` 都构建完了,现在我们回到目标`vmlinux` 上。你应该还记得,目标`vmlinux` 是在内核的根makefile 里。在链接`vmlinux` 之前,系统会构建[samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation)等等,但是如上文所述,我不会在本文描述这些。 + ```Makefile vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE ...
@@ -558,6 +570,8 @@ vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE As you can see main purpose of it is a call of the [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script is linking of the all `built-in.o`(s) to the one statically linked executable and creation of the [System.map](https://en.wikipedia.org/wiki/System.map). In the end we will see following output: +你可以看到,`vmlinux` 的调用脚本[scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 的主要目的是把所有的`built-in.o` 链接成一个静态可执行文件、生成[System.map](https://en.wikipedia.org/wiki/System.map)。 最后我们来看看下面的输出: + ``` LINK vmlinux LD vmlinux.o @@ -575,7 +589,7 @@ As you can see main purpose of it is a call of the [scripts/link-vmlinux.sh](htt ``` and `vmlinux` and `System.map` in the root of the Linux kernel source tree: - +还有内核源码树根目录下的`vmlinux` 和`System.map` ``` $ ls vmlinux System.map System.map vmlinux @@ -583,7 +597,10 @@ System.map vmlinux That's all, `vmlinux` is ready. The next step is creation of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage). +这就是全部了,`vmlinux` 构建好了,下一步就是创建[bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage). + Building bzImage +制作bzImage -------------------------------------------------------------------------------- The `bzImage` is the compressed Linux kernel image. We can get it with the execution of the `make bzImage` after the `vmlinux` built. In other way we can just execute `make` without arguments and will get `bzImage` anyway because it is default image: From 60092d5f44b9660dca43a1fe5e8df24cf19073c1 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 20 Aug 2015 16:16:43 +0800 Subject: [PATCH 236/697] =?UTF-8?q?20150820-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...con--The Changing Role of the Server OS.md | 49 +++++++++++++++++++ ... 
Torvalds muses about open-source software.md | 46 +++++++++++++++++ 2 files changed, 95 insertions(+) create mode 100644 sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md create mode 100644 sources/talk/20150820 LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software.md diff --git a/sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md b/sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md new file mode 100644 index 0000000000..8f6d80c7e9 --- /dev/null +++ b/sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md @@ -0,0 +1,49 @@ +Linuxcon: The Changing Role of the Server OS +================================================================================ +SEATTLE - Containers might one day change the world, but it will take time and it will also change the role of the operating system. That's the message delivered during a Linuxcon keynote here today by Wim Coekaerts, SVP Linux and virtualization engineering at Oracle. + +![](http://www.serverwatch.com/imagesvr_ce/6421/wim-200x150.jpg) + +Coekaerts started his presentation by putting up a slide stating it's the year of the desktop, which generated a few laughs from the audience. Truly, though, Coekaerts said it is now apparent that 2015 is the year of the container, and more importantly the year of the application, which is what containers really are all about. + +"What do you need an operating system for?" Coekaerts asked. "It's really just there to run an application; an operating system is there to manage hardware and resources so your app can run." + +Coekaerts added that with Docker containers, the focus is once again on the application. At Oracle, Coekaerts said much of the focus is on how to make the app run better on the OS. + +"Many people are used to installing apps, but many of the younger generation just click a button on their mobile device and it runs," Coekaerts said.
+ +Coekaerts said that people now wonder why it's more complex in the enterprise to install software, and Docker helps to change that. + +"The role of the operating system is changing," Coekaerts said. + +The rise of Docker does not mean the demise of virtual machines (VMs), though. Coekaerts said it will take a very long time for things to mature in the containerization space and get used in the real world. + +During that period VMs and containers will co-exist and there will be a need for transition and migration tools between containers and VMs. For example, Coekaerts noted that Oracle's VirtualBox open-source technology is widely used on desktop systems today as a way to help users run Docker. The Docker Kitematic project makes use of VirtualBox to boot Docker on Macs today. + +### The Open Container Initiative and Write Once, Deploy Anywhere for Containers ### + +A key promise that needs to be enabled for containers to truly be successful is the concept of write once, deploy anywhere. That's an area where the Linux Foundation's Open Container Initiative (OCI) will play a key role in enabling interoperability across container runtimes. + +"With OCI, it will make it easier to build once and run anywhere, so what you package locally you can run wherever you want," Coekaerts said. + +Overall, though, Coekaerts said that while there is a lot of interest in moving to the container model, it's not quite ready yet. He noted Oracle is working on certifying its products to run in containers, but it's a hard process. + +"Running the database is easy; it's everything else around it that is complex," Coekaerts said. "Containers don't behave the same as VMs, and some applications depend on low-level system configuration items that are not exposed from the host to the container." + +Additionally, Coekaerts commented that debugging problems inside a container is different than in a VM, and there is currently a lack of mature tools for proper container app debugging.
+ +Coekaerts emphasized that as containers matures it's important to not forget about the existing technology that organizations use to run and deploy applications on servers today. He said enterprises don't typically throw out everything they have just to start with new technology. + +"Deploying new technology is hard, and you need to be able to transition from what you have," Coekaerts said. "The technology that allows you to transition easily is the technology that wins." + +-------------------------------------------------------------------------------- + +via: http://www.serverwatch.com/server-news/linuxcon-the-changing-role-of-the-server-os.html + +作者:[Sean Michael Kerner][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm \ No newline at end of file diff --git a/sources/talk/20150820 LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software.md b/sources/talk/20150820 LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software.md new file mode 100644 index 0000000000..c045233630 --- /dev/null +++ b/sources/talk/20150820 LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software.md @@ -0,0 +1,46 @@ +LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software +================================================================================ +> In a broad-ranging question and answer session, Linus Torvalds, Linux's founder, shared his thoughts on the current state of open source and Linux. + +**SEATTLE** -- [LinuxCon][1] attendees got an early Christmas present when the Wednesday morning "surprise" keynote speaker turned out to be Linux's founder, Linus Torvalds. 
+ +![zemlin-and-torvalds-08192015-1.jpg](http://zdnet2.cbsistatic.com/hub/i/2015/08/19/9951f05a-fedf-4bf4-a4a1-3b4a15458de6/c19c89ded58025eccd090787ba40e803/zemlin-and-torvalds-08192015-1.jpg) + +Jim Zemlin and Linus Torvalds shooting the breeze at LinuxCon in Seattle. -- sjvn + +Jim Zemlin, the Linux Foundation's executive director, opened the question and answer session by quoting from a recent article about Linus, "[Torvalds may be the most influential individual economic force][2] of the past 20 years. ... Torvalds has, in effect, been as instrumental in retooling the production lines of the modern economy as Henry Ford was 100 years earlier." + +Torvalds replied, "I don't think I'm all that powerful, but I'm glad to get all the credit for open source." For someone who's arguably been more influential on technology than Bill Gates, Steve Jobs, or Larry Ellison, Torvalds remains amusingly modest. That's probably one reason [Torvalds, who doesn't suffer fools gladly][3], remains the unchallenged leader of Linux. + +It also helps that he doesn't take himself seriously, except when it comes to code quality. Zemlin reminded him that he was also described in the same article as being "5-feet, ho-hum tall with a paunch, ... his body type and gait resemble that of Tux, the penguin mascot of Linux." Torvalds' reply was to grin and say "What is this? A roast?" He added that 5'8" was a perfectly good height. + +More seriously, Zemlin asked Torvalds what he thought about the current excitement over containers. Indeed, at times LinuxCon has felt like DockerCon. Torvalds replied, "I'm glad that the kernel is far removed from containers and other buzzwords. We only care about just the kernel. I'm so focused on the kernel I really don't care. I don't get involved in the politics above the kernel and I'm really happy that I don't know." + +Moving on, Zemlin asked Torvalds what he thought about the demand from the Internet of Things (IoT) for an even smaller Linux kernel.
"Everyone has always wished for a smaller kernel," Torvalds said. "But, with all the modules it's still tens of MegaBytes in size. It's shocking that it used to fit into a MB. We'd like it to be a lean, mean IT machine again." + +"But," Torvalds continued, "it's hard to get rid of unnecessary fat. Things tend to grow. Realistically I don't think we can get down to the sizes we were 20 years ago." + +As for security, the next topic, Torvalds said, "I'm at odds with the security community. They tend to see technology as black and white. If it's not security they don't care at all about it." The truth is "security is bugs. Most of the security issues we've had in the kernel hasn't been that big. Most of them have been really stupid and then some clever person takes advantage of it." + +The bottom line is, "We'll never get rid of bugs so security will never be perfect. We do try to be really careful about code. With user space we have to be very strict." But, "Bugs happen and all you can do is mitigate them. Open source is doing fairly well, but anyone who thinks we'll ever be completely secure is foolish." + +Zemlin concluded by asking Torvalds where he saw Linux ten years from now. Torvalds replied that he doesn't look at it this way. "I'm plodding, pedestrian, I look ahead six months, I don't plan 10 years ahead. I think that's insane." + +Sure, "companies plan ten years, and their plans use open source. Their whole process is very forward thinking. But I'm not worried about 10 years ahead. I look to the next release and the release beyond that." + +For Torvalds, who works at home where "the FedEx guy is no longer surprised to find me in my bathrobe at 2 in the afternoon," looking ahead a few months works just fine. And so do all the businesses -- both technology-based Amazon, Google, Facebook and more mainstream, WalMart, the New York Stock Exchange, and McDonalds -- that live on Linux every day.
+ +-------------------------------------------------------------------------------- + +via: http://www.zdnet.com/article/linus-torvalds-muses-about-open-source-software/ + +作者:[Steven J. Vaughan-Nichols][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[1]:http://events.linuxfoundation.org/events/linuxcon-north-america +[2]:http://www.bloomberg.com/news/articles/2015-06-16/the-creator-of-linux-on-the-future-without-him +[3]:http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/ \ No newline at end of file From 42ff87dfae1d59fe49974424b2849c3fbfc88a66 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 20 Aug 2015 16:31:42 +0800 Subject: [PATCH 237/697] =?UTF-8?q?20150820-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ook at What's Next for the Linux Kernel.md | 49 ++++++ ...butions Would Presidential Hopefuls Run.md | 53 +++++++ .../20150820 Why did you start using Linux.md | 147 ++++++++++++++++++ 3 files changed, 249 insertions(+) create mode 100644 sources/talk/20150820 A Look at What's Next for the Linux Kernel.md create mode 100644 sources/talk/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md create mode 100644 sources/talk/20150820 Why did you start using Linux.md diff --git a/sources/talk/20150820 A Look at What's Next for the Linux Kernel.md b/sources/talk/20150820 A Look at What's Next for the Linux Kernel.md new file mode 100644 index 0000000000..9705fd3a90 --- /dev/null +++ b/sources/talk/20150820 A Look at What's Next for the Linux Kernel.md @@ -0,0 +1,49 @@ +A Look at What's Next for the Linux Kernel +================================================================================ 
+![](http://www.eweek.com/imagesvr_ce/485/290x195cilinux1.jpg) + +**The upcoming Linux 4.2 kernel will have more contributors than any other Linux kernel in history, according to Linux kernel developer Jonathan Corbet.** + +SEATTLE—The Linux kernel continues to grow—both in lines of code and the number of developers that contribute to it—yet some challenges need to be addressed. That was one of the key messages from Linux kernel developer Jonathan Corbet during his annual Kernel Report session at the LinuxCon conference here. + +The Linux 4.2 kernel is still under development, with general availability expected on Aug. 23. Corbet noted that 1,569 developers have contributed code for the Linux 4.2 kernel. Of those, 277 developers made their first contribution ever, during the Linux 4.2 development cycle. + +Even as more developers are coming to Linux, the pace of development and releases is very fast, Corbet said. He estimates that it now takes approximately 63 days for the community to build a new Linux kernel milestone. + +Linux 4.2 will benefit from a number of improvements that have been evolving in Linux over the last several releases. One such improvement is the introduction of OverlayFS, a new type of read-only file system that is useful because it can enable many containers to be layered on top of each other, Corbet said. + +Linux networking also is set to improve small packet performance, which is important for areas such as high-frequency financial trading. The improvements are aimed at reducing the amount of time and power needed to process each data packet, Corbet said. + +New drivers are always being added to Linux. On average, there are 60 to 80 new or updated drivers added in every Linux kernel development cycle, Corbet said. + +Another key area that continues to improve is that of Live Kernel patching, first introduced in the Linux 4.0 kernel. 
With live kernel patching, the promise is that a system administrator can patch a live running kernel without the need to reboot a running production system. While the basic elements of live kernel patching are in the kernel already, work is under way to make the technology all work with the right level of consistency and stability, Corbet explained. + +**Linux Security, IoT and Other Concerns** + +Security has been a hot topic in the open-source community in the past year due to high-profile issues, including Heartbleed and Shellshock. + +"I don't doubt there are some unpleasant surprises in the neglected Linux code at this point," Corbet said. + +He noted that there are more than 3 million lines of code in the Linux kernel today that have been untouched in the last decade by developers and that the Shellshock vulnerability was a flaw in 20-year-old code that hadn't been looked at in some time. + +Another issue that concerns Corbet is the Unix 2038 issue—the Linux equivalent of the Y2K bug, which could have caused global havoc in the year 2000 if it hadn't been fixed. With the 2038 issue, there is a bug that could shut down Linux and Unix machines in the year 2038. Corbet said that while 2038 is still 23 years away, there are systems being deployed now that will be in use in 2038. + +Some initial work took place to fix the 2038 flaw in Linux, but much more remains to be done, Corbet said. "The time to fix this is now, not 20 years from now in a panic when we're all trying to enjoy our retirement," Corbet said. + +The Internet of things (IoT) is another area of Linux concern for Corbet. Today, Linux is a leading embedded operating system for IoT, but that might not always be the case. Corbet is concerned that the Linux kernel's growth is making it too big in terms of memory footprint to work in future IoT devices. + +A Linux project is now under way to minimize the size of the Linux kernel, and it's important that it gets the support it needs, Corbet said.
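The 2038 issue Corbet warns about comes down to simple arithmetic: a signed 32-bit `time_t` counts seconds since 1970-01-01 UTC, so it runs out early in 2038. A small illustrative Python sketch (not from the article; the 32-bit wraparound is simulated here, since Python integers don't overflow on their own):

```python
# Sketch of the year-2038 problem: a signed 32-bit time_t counts seconds
# since the Unix epoch (1970-01-01 UTC) and tops out at 2**31 - 1.
from datetime import datetime, timezone

MAX_32BIT_TIME_T = 2**31 - 1  # 2,147,483,647 seconds

rollover = datetime.fromtimestamp(MAX_32BIT_TIME_T, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00 -- the last representable moment

def wrap_to_signed_32(seconds):
    """Simulate storing a second count in a signed 32-bit integer."""
    return (seconds + 2**31) % 2**32 - 2**31

# One second past the maximum, the counter wraps around to -2**31,
# which a naive conversion reads as a date back in 1901.
wrapped = wrap_to_signed_32(MAX_32BIT_TIME_T + 1)
print(wrapped)  # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc).year)  # 1901
```

This is why the fix matters now for long-lived deployments: any system storing timestamps in a 32-bit `time_t` that is still running in January 2038 would see time jump back to 1901.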
+ +"Either Linux is suitable for IoT, or something else will come along and that something else might not be as free and open as Linux," Corbet said. "We can't assume the continued dominance of Linux in IoT. We have to earn it. We have to pay attention to stuff that makes the kernel bigger." + +-------------------------------------------------------------------------------- + +via: http://www.eweek.com/enterprise-apps/a-look-at-whats-next-for-the-linux-kernel.html + +作者:[Sean Michael Kerner][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ \ No newline at end of file diff --git a/sources/talk/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md b/sources/talk/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md new file mode 100644 index 0000000000..2a850a7468 --- /dev/null +++ b/sources/talk/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md @@ -0,0 +1,53 @@ +Which Open Source Linux Distributions Would Presidential Hopefuls Run? +================================================================================ +![Republican presidential candidate Donald Trump +](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/08/donaldtrump.jpg) + +Republican presidential candidate Donald Trump + +If people running for president used Linux or another open source operating system, which distribution would it be? That's a key question that the rest of the press—distracted by issues of questionable relevance such as "policy platforms" and whether it's appropriate to add an exclamation point to one's Christian name—has been ignoring. But the ignorance ends here: Read on for this sometime-journalist's take on presidential elections and Linux distributions. 
+ +If this sounds like a familiar topic to those of you who have been reading my drivel for years (is anyone, other than my dear editor, unfortunate enough to have actually done that?), it's because I wrote a [similar post][1] during the last presidential election cycle. Some kind readers took that article more seriously than I intended, so I'll take a moment to point out that I don't actually believe that open source software and political campaigns have anything meaningful to do with one another. I am just trying to amuse myself at the start of a new week. + +But you can make of this what you will. You're the reader, after all. + +### Linux Distributions of Choice: Republicans ### + +Today, I'll cover just the Republicans. And I won't even discuss all of them, since the candidates hoping for the Republican party's nomination are too numerous to cover fully here in one post. But for starters: + +If **Jeb (Jeb!?) Bush** ran Linux, it would be [Debian][2]. It's a relatively boring distribution designed for serious, grown-up hackers—the kind who see it as their mission to be the adults in the pack and clean up the messes that less-experienced open source fans create. Of course, this also makes Debian relatively unexciting, and its user base remains perennially small as a result. + +**Scott Walker**, for his part, would be a [Damn Small Linux][3] (DSL) user. Requiring merely 50MB of disk space and 16MB of RAM to run, DSL can breathe new life into 20-year-old 486 computers—which is exactly what a cost-cutting guru like Walker would want. Of course, the user experience you get from DSL is damn primitive; the platform barely runs a browser. But at least you won't be wasting money on new computer hardware when the stuff you bought in 1993 can still serve you perfectly well. + +How about **Chris Christie**? He'd obviously be clinging to [Relax-and-Recover Linux][4], which bills itself as a "setup-and-forget Linux bare metal disaster recovery solution." 
"Setup-and-forget" has basically been Christie's political strategy ever since that unfortunate incident on the George Washington Bridge stymied his political momentum. Disaster recovery may or may not bring back everything for Christie in the end, but at least he might succeed in recovering a confidential email or two that accidentally disappeared when his computer crashed. + +As for **Carly Fiorina**, she'd no doubt be using software developed for "[The Machine][5]" operating system from [Hewlett-Packard][6] (HPQ), the company she led from 1999 to 2005. The Machine actually may run several different operating systems, which may or may not be based on Linux—details remain unclear—and its development began well after Fiorina's tenure at HP came to a conclusion. Still, her roots as a successful executive in the IT world form an important part of her profile today, meaning that her ties to HP have hardly been severed fully. + +Last but not least—and you knew this was coming—there's **Donald Trump**. He'd most likely pay a team of elite hackers millions of dollars to custom-build an operating system just for him—even though he could obtain a perfectly good, ready-made operating system for free—to show off how much money he has to waste. He'd then brag about it being the best operating system ever made, though it would of course not be compliant with POSIX or anything else, because that would mean catering to the establishment. The platform would also be totally undocumented, since, if Trump explained how his operating system actually worked, he'd risk giving away all his secrets to the Islamic State—obviously. + +Alternatively, if Trump had to go with a Linux platform already out there, [Ubuntu][7] seems like the most obvious choice. Like Trump, the Ubuntu developers have taken a we-do-what-we-want approach to building open source software by implementing their own, sometimes proprietary applications and interfaces. 
Free-software purists hate Ubuntu for that, but plenty of ordinary people like it a lot. Of course, whether playing purely by your own rules—in the realms of either software or politics—is sustainable in the long run remains to be seen. + +### Stay Tuned ### + +If you're wondering why I haven't yet mentioned the Democratic candidates, worry not. I am not leaving them out of today's writing because I like them any more or less than the Republicans. (Personally, I think the peculiar American practice of having only two viable political parties—which virtually no other functioning democracy does—is ridiculous, and I am suspicious of all of these candidates as a result.) + +On the contrary, there's plenty to say about the Linux distributions the Democrats might use, too. And I will, in a future post. Stay tuned. + +-------------------------------------------------------------------------------- + +via: http://thevarguy.com/open-source-application-software-companies/081715/which-open-source-linux-distributions-would-presidential- + +作者:[Christopher Tozzi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://thevarguy.com/author/christopher-tozzi +[1]:http://thevarguy.com/open-source-application-software-companies/aligning-linux-distributions-presidential-hopefuls +[2]:http://debian.org/ +[3]:http://www.damnsmalllinux.org/ +[4]:http://relax-and-recover.org/ +[5]:http://thevarguy.com/open-source-application-software-companies/061614/hps-machine-open-source-os-truly-revolutionary +[6]:http://hp.com/ +[7]:http://ubuntu.com/ \ No newline at end of file diff --git a/sources/talk/20150820 Why did you start using Linux.md b/sources/talk/20150820 Why did you start using Linux.md new file mode 100644 index 0000000000..f83742a7a1 --- /dev/null +++ b/sources/talk/20150820 Why did you start using Linux.md @@ -0,0 +1,147 @@ +Why did you start using 
Linux? +================================================================================ +> In today's open source roundup: What got you started with Linux? Plus: IBM's Linux only Mainframe. And why you should skip Windows 10 and go with Linux + +### Why did you start using Linux? ### + +Linux has become quite popular over the years, with many users defecting to it from OS X or Windows. But have you ever wondered what got people started with Linux? A redditor asked that question and got some very interesting answers. + +SilverKnight asked his question on the Linux subreddit: + +> I know this has been asked before, but I wanted to hear more from the younger generation why it is that they started using linux and what keeps them here. +> +> I dont want to discourage others from giving their linux origin stories, because those are usually pretty good, but I was mostly curious about our younger population since there isn't much out there from them yet. +> +> I myself am 27 and am a linux dabbler. I have installed quite a few different distros over the years but I haven't made the plunge to full time linux. I guess I am looking for some more reasons/inspiration to jump on the bandwagon. +> +> [More at Reddit][1] + +Fellow redditors in the Linux subreddit responded with their thoughts: + +> **DoublePlusGood**: "I started using Backtrack Linux (now Kali) at 12 because I wanted to be a "1337 haxor". I've stayed with Linux (Archlinux currently) because it lets me have the endless freedom to make my computer do what I want." +> +> **Zack**: "I'm a Linux user since, I think, the age of 12 or 13, I'm 15 now. +> +> It started when I got tired with Windows XP at 11 and the waiting, dammit am I impatient sometimes, but waiting for a basic task such as shutting down just made me tired of Windows all together. 
+> +> A few months previously I had started participating in discussions in a channel on the freenode IRC network which was about a game, and as freenode usually goes, it was open source and most of the users used Linux. +> +> I kept on hearing about this Linux but wasn't that interested in it at the time. However, because the channel (and most of freenode) involved quite a bit of programming I started learning Python. +> +> A year passed and I was attempting to install GNU/Linux (specifically Ubuntu) on my new (technically old, but I had just got it for my birthday) PC, unfortunately it continually froze, for reasons unknown (probably a bad hard drive, or a lot of dust or something else...). +> +> Back then I was the type to give up on things, so I just continually nagged my dad to try and install Ubuntu, he couldn't do it for the same reasons. +> +> After wanting Linux for a while I became determined to get Linux and ditch windows for good. So instead of Ubuntu I tried Linux Mint, being a derivative of Ubuntu(?) I didn't have high hopes, but it worked! +> +> I continued using it for another 6 months. +> +> During that time a friend on IRC gave me a virtual machine (which ran Ubuntu) on their server, I kept it for a year a bit until my dad got me my own server. +> +> After the 6 months I got a new PC (which I still use!) I wanted to try something different. +> +> I decided to install openSUSE. +> +> I liked it a lot, and on the same Christmas I obtained a Raspberry Pi, and stuck with Debian on it for a while due to the lack of support other distros had for it." +> +> **Cqz**: "Was about 9 when the Windows 98 machine handed down to me stopped working for reasons unknown. We had no Windows install disk, but Dad had one of those magazines that comes with demo programs and stuff on CDs. This one happened to have install media for Mandrake Linux, and so suddenly I was a Linux user. 
Had no idea what I was doing but had a lot of fun doing it, and although in following years I often dual booted with various Windows versions, the FLOSS world always felt like home. Currently only have one Windows installation, which is a virtual machine for games." +> +> **Tosmarcel**: "I was 15 and was really curious about this new concept called 'programming' and then I stumbled upon this Harvard course, CS50. They told users to install a Linux vm to use the command line. But then I asked myself: "Why doesn't windows have this command line?!". I googled 'linux' and Ubuntu was the top result -Ended up installing Ubuntu and deleted the windows partition accidentally... It was really hard to adapt because I knew nothing about linux. Now I'm 16 and running arch linux, never looked back and I love it!" +> +> **Micioonthet**: "First heard about Linux in the 5th grade when I went over to a friend's house and his laptop was running MEPIS (an old fork of Debian) instead of Windows XP. +> +> Turns out his dad was a socialist (in America) and their family didn't trust Microsoft. This was completely foreign to me, and I was confused as to why he would bother using an operating system that didn't support the majority of software that I knew. +> +> Fast forward to when I was 13 and without a laptop. Another friend of mine was complaining about how slow his laptop was, so I offered to buy it off of him so I could fix it up and use it for myself. I paid $20 and got a virus filled, unusable HP Pavilion with Windows Vista. Instead of trying to clean up the disgusting Windows install, I remembered that Linux was a thing and that it was free. I burned an Ubuntu 12.04 disc and installed it right away, and was absolutely astonished by the performance. +> +> Minecraft (one of the few early Linux games because it ran on Java), which could barely run at 5 FPS on Vista, ran at an entirely playable 25 FPS on a clean install of Ubuntu. 
+> +> I actually still have that old laptop and use it occasionally, because why not? Linux doesn't care how old your hardware is. +> +> I since converted my dad to Linux and we buy old computers at lawn sales and thrift stores for pennies and throw Linux Mint or some other lightweight distros on them." +> +> **Webtm**: "My dad had every computer in the house with some distribution on it, I think a couple with OpenSUSE and Debian, and his personal computer had Slackware on it. So I remember being little and playing around with Debian and not really getting into it much. So I had a Windows laptop for a few years and my dad asked me if I wanted to try out Debian. It was a fun experience and ever since then I've been using Debian and trying out distributions. I currently moved away from Linux and have been using FreeBSD for around 5 months now, and I am absolutely happy with it. +> +> The control over your system is fantastic. There are a lot of cool open source projects. I guess a lot of the fun was figuring out how to do the things I want by myself and tweaking those things in ways to make them do something else. Stability and performance is also a HUGE plus. Not to mention the level of privacy when switching." +> +> **Wyronaut**: "I'm currently 18, but I first started using Linux when I was 13. Back then my first distro was Ubuntu. The reason why I wanted to check out Linux, was because I was hosting little Minecraft game servers for myself and a couple of friends, back then Minecraft was pretty new-ish. I read that the defacto operating system for hosting servers was Linux. +> +> I was a big newbie when it came to command line work, so Linux scared me a little, because I had to take care of a lot of things myself. But thanks to google and a few wiki pages I managed to get up a couple of simple servers running on a few older PC's I had lying around. Great use for all that older hardware no one in the house ever uses. 
+> +> After running a few game servers I started running a few web servers as well. Experimenting with HTML, CSS and PHP. I worked with those for a year or two. Afterwards, took a look at Java. I made the terrible mistake of watching TheNewBoston video's. +> +> So after like a week I gave up on Java and went to pick up a book on Python instead. That book was Learn Python The Hard Way by Zed A. Shaw. After I finished that at the fast pace of two weeks, I picked up the book C++ Primer, because at the time I wanted to become a game developer. Went trough about half of the book (~500 pages) and burned out on learning. At that point I was spending a sickening amount of time behind my computer. +> +> After taking a bit of a break, I decided to pick up JavaScript. Read like 2 books, made like 4 different platformers and called it a day. +> +> Now we're arriving at the present. I had to go through the horrendous process of finding a school and deciding what job I wanted to strive for when I graduated. I ruled out anything in the gaming sector as I didn't want anything to do with graphics programming anymore, I also got completely sick of drawing and modelling. And I found this bachelor that had something to do with netsec and I instantly fell in love. I picked up a couple books on C to shred this vacation period and brushed up on some maths and I'm now waiting for the new school year to commence. +> +> Right now, I am having loads of fun with Arch Linux, made couple of different arrangements on different PC's and it's going great! +> +> In a sense Linux is what also got me into programming and ultimately into what I'm going to study in college starting this september. I probably have my future life to thank for it." +> +> **Linuxllc**: "You also can learn from old farts like me. +> +> The crutch, The crutch, The crutch. Getting rid of the crutch will inspired you and have good reason to stick with Linux. +> +> I got rid of my crutch(Windows XP) back in 2003. 
Took me only 5 days to get all my computer task back and running at a 100% workflow. Including all my peripheral devices. Minus any Windows games. I just play native Linux games." +> +> **Highclass**: "Hey I'm 28 not sure if this is the age group you are looking for. +> +> To be honest, I was always interested in computers and the thought of a free operating system was intriguing even though at the time I didn't fully grasp the free software philosophy, to me it was free as in no cost. I also did not find the CLI too intimidating as from an early age I had exposure to DOS. +> +> I believe my first distro was Mandrake, I was 11 or 12, I messed up the family computer on several occasions.... I ended up sticking with it always trying to push myself to the next level. Now I work in the industry with Linux everyday. +> +> /shrug" +> +> Matto: "My computer couldn't run fast enough for XP (got it at a garage sale), so I started looking for alternatives. Ubuntu came up in Google. I was maybe 15 or 16 at the time. Now I'm 23 and have a job working on a product that uses Linux internally." +> +> [More at Reddit][2] + +### IBM's Linux only Mainframe ### + +IBM has a long history with Linux, and now the company has created a Mainframe that features Ubuntu Linux. The new machine is named LinuxOne. + +Ron Miller reports for TechCrunch: + +> The new mainframes come in two flavors, named for penguins (Linux — penguins — get it?). The first is called Emperor and runs on the IBM z13, which we wrote about in January. The other is a smaller mainframe called the Rockhopper designed for a more “entry level” mainframe buyer. +> +> You may have thought that mainframes went the way of the dinosaur, but they are still alive and well and running in large institutions throughout the world. 
IBM as part of its broader strategy to promote the cloud, analytics and security is hoping to expand the potential market for mainframes by running Ubuntu Linux and supporting a range of popular open source enterprise software such as Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL and Chef. +> +> The metered mainframe will still sit inside the customer’s on-premises data center, but billing will be based on how much the customer uses the system, much like a cloud model, Mauri explained. +> +> ...IBM is looking for ways to increase those sales. Partnering with Canonical and encouraging use of open source tools on a mainframe gives the company a new way to attract customers to a small, but lucrative market. +> +> [More at TechCrunch][3] + +### Why you should skip Windows 10 and opt for Linux ### + +Since Windows 10 has been released there has been quite a bit of media coverage about its potential to spy on users. ZDNet has listed some reasons why you should skip Windows 10 and opt for Linux instead on your computer. + +SJVN reports for ZDNet: + +> You can try to turn Windows 10's data-sharing ways off, but, bad news: Windows 10 will keep sharing some of your data with Microsoft anyway. There is an alternative: Desktop Linux. +> +> You can do a lot to keep Windows 10 from blabbing, but you can't always stop it from talking. Cortana, Windows 10's voice activated assistant, for example, will share some data with Microsoft, even when it's disabled. That data includes a persistent computer ID to identify your PC to Microsoft. +> +> So, if that gives you a privacy panic attack, you can either stick with your old operating system, which is likely Windows 7, or move to Linux. Eventually, when Windows 7 is no longer supported, if you want privacy you'll have no other viable choice but Linux. +> +> There are other, more obscure desktop operating systems that are also desktop-based and private. 
These include the BSD Unix family such as FreeBSD, PCBSD, and NetBSD and eComStation, OS/2 for the 21st century. Your best choice, though, is a desktop-based Linux with a low learning curve. +> +> [More at ZDNet][4] + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.html + +作者:[Jim Lynch][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Jim-Lynch/ +[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ +[2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ +[3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/ +[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/ \ No newline at end of file From cabdaf5b35fda0ad86949befaa9075e2ebe4a483 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Fri, 21 Aug 2015 09:00:19 +0800 Subject: [PATCH 238/697] =?UTF-8?q?=E6=92=A4=E9=94=80=E7=BF=BB=E8=AF=91=20?= =?UTF-8?q?20150813=20Linux=20file=20system=20hierarchy=20v2.0.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20150813 Linux file system hierarchy v2.0.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/sources/tech/20150813 Linux file system hierarchy v2.0.md b/sources/tech/20150813 Linux file system hierarchy v2.0.md index 0021bb57c9..ec4f47234c 100644 --- a/sources/tech/20150813 Linux file system hierarchy v2.0.md +++ b/sources/tech/20150813 Linux file system hierarchy v2.0.md @@ -1,6 +1,3 @@ - -Translating by dingdongnigetou - Linux file system hierarchy v2.0 ================================================================================ What is a file in Linux? What is file system in Linux? 
Where are all the configuration files? Where do I keep my downloaded applications? Is there really a filesystem standard structure in Linux? Well, the above image explains Linux file system hierarchy in a very simple and non-complex way. It’s very useful when you’re looking for a configuration file or a binary file. I’ve added some explanation and examples below, but that’s TL;DR. From 81e6ae11687c0af2cb40e455bb805aac0b7470a2 Mon Sep 17 00:00:00 2001 From: "Y.C.S.M" Date: Fri, 21 Aug 2015 09:13:10 +0800 Subject: [PATCH 239/697] Update 20150813 Linux file system hierarchy v2.0.md --- sources/tech/20150813 Linux file system hierarchy v2.0.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20150813 Linux file system hierarchy v2.0.md b/sources/tech/20150813 Linux file system hierarchy v2.0.md index ec4f47234c..23f70258b1 100644 --- a/sources/tech/20150813 Linux file system hierarchy v2.0.md +++ b/sources/tech/20150813 Linux file system hierarchy v2.0.md @@ -1,3 +1,5 @@ +translating by tnuoccalanosrep + Linux file system hierarchy v2.0 ================================================================================ What is a file in Linux? What is file system in Linux? Where are all the configuration files? Where do I keep my downloaded applications? Is there really a filesystem standard structure in Linux? Well, the above image explains Linux file system hierarchy in a very simple and non-complex way. It’s very useful when you’re looking for a configuration file or a binary file. I’ve added some explanation and examples below, but that’s TL;DR. 
From e48fefb9952ead67f4bc5592cce6d79f8e33659a Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 21 Aug 2015 09:35:40 +0800 Subject: [PATCH 240/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译到line 699 --- ...0728 Process of the Linux kernel building.md | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index c622e43ba0..00504e60fd 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -605,12 +605,16 @@ Building bzImage The `bzImage` is the compressed Linux kernel image. We can get it with the execution of the `make bzImage` after the `vmlinux` built. In other way we can just execute `make` without arguments and will get `bzImage` anyway because it is default image: +`bzImage` 就是压缩了的linux 内核镜像。我们可以在构建了`vmlinux` 之后通过执行`make bzImage` 获得`bzImage`。同时我们可以仅仅执行`make` 而不带任何参数也可以生成`bzImage` ,因为它是在[arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) 里定义的、默认会生成的镜像: + ```Makefile all: bzImage ``` in the [arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look on this target, it will help us to understand how this image builds. 
As I already said the `bzImage` target defined in the [arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) and looks like this: +让我们看看这个目标,它能帮助我们理解这个镜像是怎么构建的。我已经说过了`bzImage` 是被定义在[arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile),定义如下: + ```Makefile bzImage: vmlinux $(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE) @@ -620,12 +624,16 @@ bzImage: vmlinux We can see here, that first of all called `make` for the boot directory, in our case it is: +在这里我们可以看到第一次为boot 目录执行`make`,在我们的例子里是这样的: + ```Makefile boot := arch/x86/boot ``` The main goal now to build source code in the `arch/x86/boot` and `arch/x86/boot/compressed` directories, build `setup.bin` and `vmlinux.bin`, and build the `bzImage` from they in the end. First target in the [arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) is the `$(obj)/setup.elf`: +现在的主要目标是编译目录`arch/x86/boot` 和`arch/x86/boot/compressed` 的代码,构建`setup.bin` 和`vmlinux.bin`,然后用这两个文件生成`bzImage`。第一个目标是定义在[arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) 的`$(obj)/setup.elf`: + ```Makefile $(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE $(call if_changed,ld) @@ -633,6 +641,8 @@ $(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE We already have the `setup.ld` linker script in the `arch/x86/boot` directory and the `SETUP_OBJS` expands to the all source files from the `boot` directory.
We can see first output: +我们已经在目录`arch/x86/boot`有了链接脚本`setup.ld`,并且变量`SETUP_OBJS` 会扩展为`boot` 目录下的全部源文件。我们可以看看第一个输出: + ```Makefile AS arch/x86/boot/bioscall.o CC arch/x86/boot/cmdline.o @@ -648,11 +658,14 @@ We already have the `setup.ld` linker script in the `arch/x86/boot` directory an The next source code file is the [arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S), but we can't build it now because this target depends on the following two header files: +下一个源码文件是[arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S),但是我们不能现在就编译它,因为这个目标依赖于下面两个头文件: + ```Makefile $(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h ``` The first is `voffset.h` generated by the `sed` script that gets two addresses from the `vmlinux` with the `nm` util: +第一个头文件`voffset.h` 是使用`sed` 脚本生成的,包含用`nm` 工具从`vmlinux` 获取的两个地址: ```C #define VO__end 0xffffffff82ab0000 @@ -661,6 +674,8 @@ The first is `voffset.h` generated by the `sed` script that gets two addresses f They are start and end of the kernel. The second is `zoffset.h` depens on the `vmlinux` target from the [arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile): +这两个地址是内核的起始和结束地址。第二个头文件`zoffset.h` 在[arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile) 可以看出是依赖于目标`vmlinux`的: + ```Makefile $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE $(call if_changed,zoffset) @@ -668,6 +683,8 @@ $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE The `$(obj)/compressed/vmlinux` target depends on the `vmlinux-objs-y` that compiles source code files from the [arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) directory and generates `vmlinux.bin`, `vmlinux.bin.bz2`, and compiles programm - `mkpiggy`.
We can see this in the output: +目标`$(obj)/compressed/vmlinux` 依赖于变量`vmlinux-objs-y` —— 表明要编译目录[arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) 下的源代码,然后生成`vmlinux.bin` 和`vmlinux.bin.bz2`,并编译工具`mkpiggy`。我们可以在下面的输出看出来: + ```Makefile LDS arch/x86/boot/compressed/vmlinux.lds AS arch/x86/boot/compressed/head_64.o From a6b8923adb1e6c085115162b17b824a93532c8f1 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Fri, 21 Aug 2015 12:54:09 +0800 Subject: [PATCH 241/697] [translated]Howto Manage Host Using Docker Machine in a VirtualBox --- ...st Using Docker Machine in a VirtualBox.md | 63 +++++++++---------- 1 file changed, 30 insertions(+), 33 deletions(-) diff --git a/translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md b/translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md index 64c044b100..153035c9f4 100644 --- a/translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md +++ b/translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md @@ -1,63 +1,60 @@ -[bazz2] 在 VirtualBox 中使用 Docker Machine 管理主机 ================================================================================ 大家好,今天我们学习在 VirtualBox 中使用 Docker Machine 来创建和管理 Docker 主机。Docker Machine 是一个应用,用于在我们的电脑上、在云端、在数据中心创建 Docker 主机,然后用户可以使用 Docker 客户端来配置一些东西。这个 API 为本地主机、或数据中心的虚拟机、或云端的实例提供 Docker 服务。Docker Machine 支持 Windows、OSX 和 Linux,并且是以一个独立的二进制文件包形式安装的。使用(与现有 Docker 工具)相同的接口,我们就可以充分利用已经提供 Docker 基础框架的生态系统。只要一个命令,用户就能快速部署 Docker 容器。 -Here are some easy and simple steps that helps us to deploy docker containers using Docker Machine. +本文列出一些简单的步骤,用 Docker Machine 来部署 docker 容器。 -### 1. Installing Docker Machine ### +### 1. 安装 Docker Machine ### -Docker Machine supports awesome on every Linux Operating System. First of all, we'll need to download the latest version of Docker Machine from the [Github site][1] .
Here, we'll use curl to download the latest version of Docker Machine ie 0.2.0 . +Docker Machine 完美支持所有 Linux 操作系统。首先我们需要从 [github][1] 下载最新版本的 Docker Machine,本文使用 curl 作为下载工具,Docker Machine 版本为 0.2.0。 -**For 64 Bit Operating System** +** 64 位操作系统 ** # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine -**For 32 Bit Operating System** +** 32 位操作系统 ** # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine -After downloading the latest release of Docker Machine, we'll make the file named **docker-machine** under **/usr/local/bin/** executable using the command below. +下载完成后,用下面的命令给 **/usr/local/bin** 目录下的 **docker-machine** 文件增加可执行权限: # chmod +x /usr/local/bin/docker-machine -After doing the above, we'll wanna ensure that we have successfully installed docker-machine. To check it, we can run the docker-machine -v which will give output of the version of docker-machine installed in our system. +确认是否成功安装了 docker-machine,可以运行下面的命令,它会打印 Docker Machine 的版本信息: # docker-machine -v -![Installing Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png) +![安装 Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png) -To enable Docker commands on our machines, make sure to install the Docker client as well by running the command below. +运行下面的命令,安装 Docker 客户端,以便于在我们自己的电脑上运行 Docker 命令: # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker # chmod +x /usr/local/bin/docker -### 2. Creating VirualBox VM ### +### 2. 创建 VirtualBox 虚拟机 ### -After we have successfully installed Docker Machine in our Linux running machine, we'll definitely wanna go for creating a Virtual Machine using VirtualBox.
To get started, we need to run docker-machine create command followed by --driver flag with string as virtualbox as we are trying to deploy docker inside of Virtual Box running VM and the final argument is the name of the machine, here we have machine name as "linux". This command will download [boot2docker][2] iso which is a light-weighted linux distribution based on Tiny Core Linux with the Docker daemon installed and will create and start a VirtualBox VM with Docker running as mentioned above. - -To do so, we'll run the following command in a terminal or shell in our box. +在 Linux 系统上安装完 Docker Machine 后,接下来我们可以创建 VirtualBox 虚拟机,运行下面的命令就可以了。--driver virtualbox 选项表示我们要在 VirtualBox 的虚拟机里面部署 docker,最后的参数“linux” 是虚拟机的名称。这个命令会下载 [boot2docker][2] iso,它是个基于 Tiny Core Linux 的轻量级发行版,自带 Docker 程序,然后 docker-machine 命令会创建一个 VirtualBox 虚拟机(LCTT 译注:当然,我们也可以选择其他的虚拟机软件)来运行这个 boot2docker 系统。 # docker-machine create --driver virtualbox linux -![Creating Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-docker-machine.png) +![创建 Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-docker-machine.png) -Now, to check whether we have successfully create a Virtualbox running Docker or not, we'll run the command **docker-machine** ls as shown below. +测试下有没有成功运行 VirtualBox 和 Docker,运行命令: # docker-machine ls ![Docker Machine List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-list.png) -If the host is active, we can see * under the ACTIVE column in the output as shown above. +如果执行成功,我们可以看到在 ACTIVE 那列下面会出现一个星号“*”。 -### 3. Setting Environment Variables ### +### 3. 设置环境变量 ### -Now, we'll need to make docker talk with the machine. We can do that by running docker-machine env and then the machine name, here we have named **linux** as above.
+现在我们需要让 docker 与虚拟机通信,运行 docker-machine env <虚拟机名称> 来实现这个目的。 # eval "$(docker-machine env linux)" # docker ps -This will set environment variables that the Docker client will read which specify the TLS settings. Note that we'll need to do this every time we reboot our machine or start a new tab. We can see what variables will be set by running the following command. +这个命令会设置 TLS 认证的环境变量,每次重启机器或者重新打开一个会话都需要执行一下这个命令,我们可以看到它的输出内容: # docker-machine env linux @@ -65,46 +62,46 @@ This will set environment variables that the Docker client will read which speci export DOCKER_CERT_PATH=/Users//.docker/machine/machines/dev export DOCKER_HOST=tcp://192.168.99.100:2376 -### 4. Running Docker Containers ### +### 4. 运行 Docker 容器 ### -Finally, after configuring the environment variables and Virtual Machine, we are able to run docker containers in the host running inside the Virtual Machine. To give it a test, we'll run a busybox container out of it run running **docker run busybox** command with **echo hello world** so that we can get the output of the container. +完成配置后我们就可以在 VirtualBox 上运行 docker 容器了。测试一下,在虚拟机里执行 **docker run busybox echo hello world** 命令,我们可以看到容器的输出信息。 # docker run busybox echo hello world -![Running Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/05/running-docker-container.png) +![运行 Docker 容器](http://blog.linoxide.com/wp-content/uploads/2015/05/running-docker-container.png) -### 5. Getting Docker Host's IP ### +### 5. 拿到 Docker 主机的 IP ### -We can get the IP Address of the running Docker Host's using the **docker-machine ip** command. We can see any exposed ports that are available on the Docker host’s IP address. +我们可以执行下面的命令获取 Docker 主机的 IP 地址。 # docker-machine ip -![Docker IP Address](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-ip-address.png) +![Docker IP 地址](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-ip-address.png) -### 6. Managing the Hosts ### +### 6. 
管理主机 ### -Now we can manage as many local VMs running Docker as we desire by running docker-machine create command again and again as mentioned in above steps +现在我们可以随心所欲地使用上述的 docker-machine 命令来不断创建主机了。 -If you are finished working with the running docker, we can simply run **docker-machine stop** command to stop the whole hosts which are Active and if wanna start again, we can run **docker-machine start**. +当你使用完 docker 时,可以运行 **docker-machine stop** 来停止所有主机,如果想开启所有主机,运行 **docker-machine start**。 # docker-machine stop # docker-machine start -You can also specify a host to stop or start using the host name as an argument. +你也可以只停止或开启一台主机: $ docker-machine stop linux $ docker-machine start linux -### Conclusion ### +### 总结 ### -Finally, we have successfully created and managed a Docker host inside a VirtualBox using Docker Machine. Really, Docker Machine enables people fast and easy to create, deploy and manage Docker hosts in different platforms as here we are running Docker hosts using Virtualbox platform. This virtualbox driver API works for provisioning Docker on a local machine, on a virtual machine in the data center. Docker Machine ships with drivers for provisioning Docker locally with Virtualbox as well as remotely on Digital Ocean instances whereas more drivers are in the work for AWS, Azure, VMware, and other infrastructure. If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! 
Enjoy :-) +最后,我们使用 Docker Machine 成功在 VirtualBox 上创建并管理一台 Docker 主机。Docker Machine 确实能让用户快速地在不同的平台上部署 Docker 主机,就像我们这里部署在 VirtualBox 上一样。这个 --driver virtualbox 驱动可以在本地机器上使用,也可以在数据中心的虚拟机上使用。Docker Machine 驱动除了支持本地的 VirtualBox 之外,还支持远端的 Digital Ocean、AWS、Azure、VMware 以及其他基础设施。如果你有任何疑问,或者建议,请在评论栏中写出来,我们会不断改进我们的内容。谢谢,祝愉快。 -------------------------------------------------------------------------------- via: http://linoxide.com/linux-how-to/host-virtualbox-docker-machine/ 作者:[Arun Pyasi][a] -译者:[译者ID](https://github.com/译者ID) +译者:[bazz2](https://github.com/bazz2) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 35731ea8f2793176705cd62264b3447fc1acb3f7 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 21 Aug 2015 14:53:23 +0800 Subject: [PATCH 242/697] =?UTF-8?q?PR=20=E8=A1=A5=E5=AE=8C?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @bazz2 --- ...st Using Docker Machine in a VirtualBox.md | 114 ------------------ 1 file changed, 114 deletions(-) delete mode 100644 sources/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md diff --git a/sources/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md b/sources/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md deleted file mode 100644 index 77292a03ee..0000000000 --- a/sources/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md +++ /dev/null @@ -1,114 +0,0 @@ -[bazz2] -Howto Manage Host Using Docker Machine in a VirtualBox -================================================================================ -Hi all, today we'll learn how to create and manage a Docker host using Docker Machine in a VirtualBox. Docker Machine is an application that helps to create Docker hosts on our computer, on cloud providers and inside our own data center.
It provides easy solution for creating servers, installing Docker on them and then configuring the Docker client according the users configuration and requirements. This API works for provisioning Docker on a local machine, on a virtual machine in the data center, or on a public cloud instance. Docker Machine is supported on Windows, OSX, and Linux and is available for installation as one standalone binary. It enables us to take full advantage of ecosystem partners providing Docker-ready infrastructure, while still accessing everything through the same interface. It makes people able to deploy the docker containers in the respective platform pretty fast and in pretty easy way with just a single command. - -Here are some easy and simple steps that helps us to deploy docker containers using Docker Machine. - -### 1. Installing Docker Machine ### - -Docker Machine supports awesome on every Linux Operating System. First of all, we'll need to download the latest version of Docker Machine from the [Github site][1] . Here, we'll use curl to download the latest version of Docker Machine ie 0.2.0 . - -**For 64 Bit Operating System** - - # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine - -**For 32 Bit Operating System** - - # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine - -After downloading the latest release of Docker Machine, we'll make the file named **docker-machine** under **/usr/local/bin/** executable using the command below. - - # chmod +x /usr/local/bin/docker-machine - -After doing the above, we'll wanna ensure that we have successfully installed docker-machine. To check it, we can run the docker-machine -v which will give output of the version of docker-machine installed in our system. 
- - # docker-machine -v - -![Installing Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png) - -To enable Docker commands on our machines, make sure to install the Docker client as well by running the command below. - - # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker - # chmod +x /usr/local/bin/docker - -### 2. Creating VirualBox VM ### - -After we have successfully installed Docker Machine in our Linux running machine, we'll definitely wanna go for creating a Virtual Machine using VirtualBox. To get started, we need to run docker-machine create command followed by --driver flag with string as virtualbox as we are trying to deploy docker inside of Virtual Box running VM and the final argument is the name of the machine, here we have machine name as "linux". This command will download [boot2docker][2] iso which is a light-weighted linux distribution based on Tiny Core Linux with the Docker daemon installed and will create and start a VirtualBox VM with Docker running as mentioned above. - -To do so, we'll run the following command in a terminal or shell in our box. - - # docker-machine create --driver virtualbox linux - -![Creating Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-docker-machine.png) - -Now, to check whether we have successfully create a Virtualbox running Docker or not, we'll run the command **docker-machine** ls as shown below. - - # docker-machine ls - -![Docker Machine List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-list.png) - -If the host is active, we can see * under the ACTIVE column in the output as shown above. - -### 3. Setting Environment Variables ### - -Now, we'll need to make docker talk with the machine. We can do that by running docker-machine env and then the machine name, here we have named **linux** as above. 
- - # eval "$(docker-machine env linux)" - # docker ps - -This will set environment variables that the Docker client will read which specify the TLS settings. Note that we'll need to do this every time we reboot our machine or start a new tab. We can see what variables will be set by running the following command. - - # docker-machine env linux - - export DOCKER_TLS_VERIFY=1 - export DOCKER_CERT_PATH=/Users//.docker/machine/machines/dev - export DOCKER_HOST=tcp://192.168.99.100:2376 - -### 4. Running Docker Containers ### - -Finally, after configuring the environment variables and Virtual Machine, we are able to run docker containers in the host running inside the Virtual Machine. To give it a test, we'll run a busybox container out of it run running **docker run busybox** command with **echo hello world** so that we can get the output of the container. - - # docker run busybox echo hello world - -![Running Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/05/running-docker-container.png) - -### 5. Getting Docker Host's IP ### - -We can get the IP Address of the running Docker Host's using the **docker-machine ip** command. We can see any exposed ports that are available on the Docker host’s IP address. - - # docker-machine ip - -![Docker IP Address](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-ip-address.png) - -### 6. Managing the Hosts ### - -Now we can manage as many local VMs running Docker as we desire by running docker-machine create command again and again as mentioned in above steps - -If you are finished working with the running docker, we can simply run **docker-machine stop** command to stop the whole hosts which are Active and if wanna start again, we can run **docker-machine start**. - - # docker-machine stop - # docker-machine start - -You can also specify a host to stop or start using the host name as an argument. 
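For managing several local VMs at once, the per-host stop and start commands in this section can be wrapped in a small loop. This is only an illustrative sketch: the MACHINES list is hard-coded so the example reads standalone; on a real setup you would fill it from the NAME column of `docker-machine ls`.

```shell
#!/bin/bash
# Illustrative sketch only: batch-stop several machines with one loop.
# MACHINES is hard-coded here so the example is self-contained; on a
# real installation it would come from the output of `docker-machine ls`.
MACHINES="linux dev"
for machine in $MACHINES; do
    # In real use, replace the echo with: docker-machine stop "$machine"
    echo "stopping $machine"
done
```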
- - $ docker-machine stop linux - $ docker-machine start linux - -### Conclusion ### - -Finally, we have successfully created and managed a Docker host inside a VirtualBox using Docker Machine. Really, Docker Machine enables people fast and easy to create, deploy and manage Docker hosts in different platforms as here we are running Docker hosts using Virtualbox platform. This virtualbox driver API works for provisioning Docker on a local machine, on a virtual machine in the data center. Docker Machine ships with drivers for provisioning Docker locally with Virtualbox as well as remotely on Digital Ocean instances whereas more drivers are in the work for AWS, Azure, VMware, and other infrastructure. If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! Enjoy :-) - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/host-virtualbox-docker-machine/ - -作者:[Arun Pyasi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:https://github.com/docker/machine/releases -[2]:https://github.com/boot2docker/boot2docker From 530b8c6af21937d0930548f4124963c025a08d5d Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 21 Aug 2015 15:47:47 +0800 Subject: [PATCH 243/697] =?UTF-8?q?20150821-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
to Install Visual Studio Code in Linux.md | 127 ++++++++++++++++++ 1 file changed, 127 insertions(+) create mode 100644 sources/tech/20150821 How to Install Visual Studio Code in Linux.md diff --git a/sources/tech/20150821 How to Install Visual Studio Code in Linux.md b/sources/tech/20150821 How to Install Visual Studio Code in Linux.md new file mode 100644 index 0000000000..2fac79701e --- /dev/null +++ b/sources/tech/20150821 How to Install Visual Studio Code in Linux.md @@ -0,0 +1,127 @@ +How to Install Visual Studio Code in Linux +================================================================================ +Hi everyone, today we'll learn how to install Visual Studio Code in Linux Distributions. Visual Studio Code is a code-optimized editor based on Electron, a piece of software that is based on Chromium, which is used to deploy io.js applications for the desktop. It is a source code editor and text editor developed by Microsoft for all the operating system platforms including Linux. Visual Studio Code is free but not an open source software ie. its under proprietary software license terms. It is an awesome powerful and fast code editor for our day to day use. Some of the cool features of visual studio code are navigation, intellisense support, syntax highlighting, bracket matching, auto indentation, and snippets, keyboard support with customizable bindings and support for dozens of languages like Python, C++, jade, PHP, XML, Batch, F#, DockerFile, Coffee Script, Java, HandleBars, R, Objective-C, PowerShell, Luna, Visual Basic, .Net, Asp.Net, C#, JSON, Node.js, Javascript, HTML, CSS, Less, Sass and Markdown. Visual Studio Code integrates with package managers and repositories, and builds and other common tasks to make everyday workflows faster. The most popular feature in Visual Studio Code is its debugging feature which includes a streamlined support for Node.js debugging in the preview. 
+ +Note: Please note that, Visual Studio Code is only available for 64-bit versions of Linux Distributions. + +Here are some easy-to-follow steps on how to install Visual Studio Code in any Linux distribution. + +### 1. Downloading Visual Studio Code Package ### + +First of all, we'll gonna download the Visual Studio Code Package for 64-bit Linux Operating System from the Microsoft server using the given url [http://go.microsoft.com/fwlink/?LinkID=534108][1] . Here, we'll use wget to download it and keep it under the /tmp/vscode directory as shown below. + + # mkdir /tmp/vscode; cd /tmp/vscode/ + # wget https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip + + --2015-06-24 06:02:54-- https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip + Resolving az764295.vo.msecnd.net (az764295.vo.msecnd.net)... 93.184.215.200, 2606:2800:11f:179a:1972:2405:35b:459 + Connecting to az764295.vo.msecnd.net (az764295.vo.msecnd.net)|93.184.215.200|:443... connected. + HTTP request sent, awaiting response... 200 OK + Length: 64992671 (62M) [application/octet-stream] + Saving to: ‘VSCode-linux-x64.zip’ + 100%[================================================>] 64,992,671 14.9MB/s in 4.1s + 2015-06-24 06:02:58 (15.0 MB/s) - ‘VSCode-linux-x64.zip’ saved [64992671/64992671] + +### 2. Extracting the Package ### + +Now, after we have successfully downloaded the zipped package of Visual Studio Code, we'll gonna extract it using the unzip command to the /opt/ directory. To do so, we'll need to run the following command in a terminal or a console. + + # unzip /tmp/vscode/VSCode-linux-x64.zip -d /opt/ + +Note: If we don't have unzip already installed, we'll need to install it via our Package Manager. If you're running Ubuntu, apt-get can be used, whereas if you're running Fedora or CentOS, dnf or yum can be used to install it. + +### 3. Running Visual Studio Code ### + +After we have extracted the package, we can directly launch the Visual Studio Code by executing a file named Code. 
+ + # sudo chmod +x /opt/VSCode-linux-x64/Code + # sudo /opt/VSCode-linux-x64/Code + +If we want to launch Code and have it available globally from a terminal anywhere, we'll need to create a link from /opt/VSCode-linux-x64/Code to /usr/local/bin/code . + + # ln -s /opt/VSCode-linux-x64/Code /usr/local/bin/code + +Now, we can launch Visual Studio Code by running the following command in a terminal. + + # code . + +### 4. Creating a Desktop Launcher ### + +Next, after we have successfully extracted the Visual Studio Code package, we'll gonna create a desktop launcher so that it will be easily available in the launchers, menus, desktop, according to the desktop environment so that anyone can launch it from them. So, first we'll gonna copy the icon file to /usr/share/icons/ directory. + + # cp /opt/VSCode-linux-x64/resources/app/vso.png /usr/share/icons/ + +Then, we'll gonna create the desktop launcher having the extension as .desktop. Here, we'll create a file named visualstudiocode.desktop under the /tmp/vscode/ folder using our favorite text editor. + + # vi /tmp/vscode/visualstudiocode.desktop + +Then, we'll gonna paste the following lines into that file. + + [Desktop Entry] + Name=Visual Studio Code + Comment=Multi-platform code editor for Linux + Exec=/opt/VSCode-linux-x64/Code + Icon=/usr/share/icons/vso.png + Type=Application + StartupNotify=true + Categories=TextEditor;Development;Utility; + MimeType=text/plain; + +After we're done creating the desktop file, we'll wanna copy that desktop file to /usr/share/applications/ directory so that it will be available in launchers and menus for use with a single click. + + # cp /tmp/vscode/visualstudiocode.desktop /usr/share/applications/ + +Once it's done, we can launch it by opening it from the Launcher or Menu. 
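As an alternative to typing the launcher file into vi by hand, the same .desktop entry described above can be generated with a here-document. This is only a sketch: it writes to /tmp first so the result can be reviewed before it is copied into /usr/share/applications/; the paths simply mirror the ones used in this section.

```shell
#!/bin/bash
# Illustrative sketch only: write the launcher file with a heredoc
# instead of editing it interactively. The file lands in /tmp so it can
# be inspected before being copied to /usr/share/applications/.
LAUNCHER=/tmp/visualstudiocode.desktop
cat > "$LAUNCHER" <<'EOF'
[Desktop Entry]
Name=Visual Studio Code
Comment=Multi-platform code editor for Linux
Exec=/opt/VSCode-linux-x64/Code
Icon=/usr/share/icons/vso.png
Type=Application
StartupNotify=true
Categories=TextEditor;Development;Utility;
MimeType=text/plain;
EOF
echo "wrote $LAUNCHER"
```

The quoted 'EOF' delimiter keeps the body literal, so no variable expansion happens inside the entry.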
+ +![Visual Studio Code](http://blog.linoxide.com/wp-content/uploads/2015/06/visual-studio-code.png) + +### Installing Visual Studio Code in Ubuntu ### + +We can use Ubuntu Make 0.7 in order to install Visual Studio Code in Ubuntu 14.04/14.10/15.04 distribution of linux. This method is the most easiest way to setup Code in ubuntu as we just need to execute few commands for it. First of all, we'll need to install Ubuntu Make 0.7 in our ubuntu distribution of linux. To install it, we'll need to add PPA for it. This can be done by running the command below. + + # add-apt-repository ppa:ubuntu-desktop/ubuntu-make + + This ppa proposes package backport of Ubuntu make for supported releases. + More info: https://launchpad.net/~ubuntu-desktop/+archive/ubuntu/ubuntu-make + Press [ENTER] to continue or ctrl-c to cancel adding it + gpg: keyring `/tmp/tmpv0vf24us/secring.gpg' created + gpg: keyring `/tmp/tmpv0vf24us/pubring.gpg' created + gpg: requesting key A1231595 from hkp server keyserver.ubuntu.com + gpg: /tmp/tmpv0vf24us/trustdb.gpg: trustdb created + gpg: key A1231595: public key "Launchpad PPA for Ubuntu Desktop" imported + gpg: no ultimately trusted keys found + gpg: Total number processed: 1 + gpg: imported: 1 (RSA: 1) + OK + +Then, we'll gonna update the local repository index and install ubuntu-make. + + # apt-get update + # apt-get install ubuntu-make + +After Ubuntu Make is installed in our ubuntu operating system, we'll gonna install Code by running the following command in a terminal. + + # umake web visual-studio-code + +![Umake Web Code](http://blog.linoxide.com/wp-content/uploads/2015/06/umake-web-code.png) + +After running the above command, we'll be asked to enter the path where we want to install it. Then, it will ask for permission to install Visual Studio Code in our ubuntu system. Then, we'll press "a". Once we do that, it will download and install it in our ubuntu machine. Finally, we can launch it by opening it from the Launcher or Menu. 
+ +### Conclusion ### + +We have successfully installed Visual Studio Code in Linux Distribution. Installing Visual Studio Code in every linux distribution is the same as shown in the above steps where we can also use umake to install it in ubuntu distributions. Umake is a popular tool for the development tools, IDEs, Languages. We can easily install Android Studios, Eclipse and many other popular IDEs with umake. Visual Studio Code is based on a project in Github called [Electron][2] which is a part of [Atom.io][3] Editor. It has a bunch of new cool and improved features that Atom.io Editor doesn't have. Visual Studio Code is currently only available in 64-bit platform of linux operating system. So, If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! Enjoy :-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-visual-studio-code-linux/ + +作者:[Arun Pyasi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://go.microsoft.com/fwlink/?LinkID=534108 +[2]:https://github.com/atom/electron +[3]:https://github.com/atom/atom \ No newline at end of file From 8710f13ab6ee96fb7b425d6a53cfe58ce8309b86 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 21 Aug 2015 16:30:54 +0800 Subject: [PATCH 244/697] =?UTF-8?q?20150821-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
open source command-line email clients.md | 79 +++++++++++++++++++ ...Kernel To Add The MOST Driver Subsystem.md | 28 +++++++ 2 files changed, 107 insertions(+) create mode 100644 sources/share/20150821 Top 4 open source command-line email clients.md create mode 100644 sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md diff --git a/sources/share/20150821 Top 4 open source command-line email clients.md b/sources/share/20150821 Top 4 open source command-line email clients.md new file mode 100644 index 0000000000..df96173c18 --- /dev/null +++ b/sources/share/20150821 Top 4 open source command-line email clients.md @@ -0,0 +1,79 @@ +Top 4 open source command-line email clients +================================================================================ +![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_mail.png) + +Like it or not, email isn't dead yet. And for Linux power users who live and die by the command line, leaving the shell to use a traditional desktop or web based email client just doesn't cut it. After all, if there's one thing that the command line excels at, it's letting you process files, and especially text, with uninterrupted efficiency. + +Fortunately, there are a number of great command-line email clients, many with a devoted following of users who can help you get started and answer any questions you might have along the way. But fair warning: once you've mastered one of these clients, you may find it hard to go back to your old GUI-based solution! + +To install any of these four clients is pretty easy; most are available in standard repositories for major Linux distributions, and can be installed with a normal package manager. You may also have luck finding and running them on other operating systems as well, although I haven't tried it and can't speak to the experience. 
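As a purely illustrative sketch of that install step, the snippet below picks an install command based on whichever package manager the system provides. The package name mutt is just an example; swap in alpine, sup, or notmuch as you prefer, and note that exact package names can vary by distribution.

```shell
#!/bin/bash
# Illustrative sketch only: choose the install command matching the
# package manager found on this system. "mutt" is an example package.
PKG=mutt
if command -v apt-get >/dev/null 2>&1; then
    INSTALL_CMD="apt-get install $PKG"
elif command -v dnf >/dev/null 2>&1; then
    INSTALL_CMD="dnf install $PKG"
elif command -v yum >/dev/null 2>&1; then
    INSTALL_CMD="yum install $PKG"
else
    INSTALL_CMD=""
fi
echo "${INSTALL_CMD:-no known package manager detected}"
```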
+ +### Mutt ### + +- [Project page][1] +- [Source code][2] +- License: [GPLv2][3] + +Many terminal enthusiasts may already have heard of or even be familiar with Mutt and Alpine, which have both been on the scene for many years. Let's first take a look at Mutt. + +Mutt supports many of the features you've come to expect from any email system: message threading, color coding, availability in a number of languages, and lots of configuration options. It supports POP3 and IMAP, the two most common email transfer protocols, and multiple mailbox formats. Having first been released in 1995, Mutt still has an active development community, but in recent years, new releases have focused on bug fixes and security updates rather than new features. That's okay for many Mutt users, though, who are comfortable with the interface and adhere to the project's slogan: “All mail clients suck. This one just sucks less.” + +### Alpine ### + +- [Project page][4] +- [Source code][5] +- License: [Apache 2.0][6] + +Alpine is the other well-known client for terminal email, developed at the University of Washington and designed to be an open source, Unicode-friendly alternative to Pine, also originally from UW. + +Designed to be friendly to beginners, but also chock-full of features for advanced users, Alpine also supports a multitude of protocols—IMAP, LDAP, NNTP, POP, SMTP, etc.—as well as different mailbox formats. Alpine is packaged with Pico, a simple text editing utility that many use as a standalone tool, but it should also work with your text editor of choice: vi, Emacs, etc. + +While Alpine is still infrequently updated, there is also a fork, re-alpine, which was created to allow a different set of maintainers to continue the project's development. + +Alpine features contextual help on the screen, which some users may prefer to breaking out the manual with Mutt, but both are well documented. 
Between Mutt and Alpine, users may want to try both and let personal preference guide their decision, or they may wish to check out a couple of the newer options below. + +### Sup ### + +- [Project page][7] +- [Source code][8] +- License: [GPLv2][9] + +Sup is the first of two of what can be called "high volume email clients" on our list. Described as a "console-based email client for people with a lot of email," Sup's goal is to provide an interface to email with a hierarchical design and to allow tagging of threads for easier organization. + +Written in Ruby, Sup provides exceptionally fast searching, manages your contact list automatically, and allows for custom extensions. For people who are used to Gmail as a webmail interface, these features will seem familiar, and Sup might be seen as a more modern approach to email on the command line. + +### Notmuch ### + +- [Project page][10] +- [Source code][11] +- License: [GPLv3][12] + +"Sup? Notmuch." Notmuch was written as a response to Sup, originally starting out as a speed-focused rewrite of some portions of Sup to enhance performance. Eventually, the project grew in scope and is now a stand-alone email client. + +Notmuch is also a fairly trim program. It doesn't actually send or receive email messages on its own, and the code which enables Notmuch's super-fast searching is actually designed as a separate library which the program can call. But its modular nature enables you to pick your favorite tools for composing, sending, and receiving, and instead focuses on doing one task and doing it well—efficient browsing and management of your email. + +This list isn’t by any means comprehensive; there are a lot more email clients out there which might be an even better fit for you. What’s your favorite? Did we leave one out that you want to share about? Let us know in the comments below! 
+ +-------------------------------------------------------------------------------- + +via: http://opensource.com/life/15/8/top-4-open-source-command-line-email-clients + +作者:[Jason Baker][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://opensource.com/users/jason-baker +[1]:http://www.mutt.org/ +[2]:http://dev.mutt.org/trac/ +[3]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html +[4]:http://www.washington.edu/alpine/ +[5]:http://www.washington.edu/alpine/acquire/ +[6]:http://www.apache.org/licenses/LICENSE-2.0 +[7]:http://supmua.org/ +[8]:https://github.com/sup-heliotrope/sup +[9]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html +[10]:http://notmuchmail.org/ +[11]:http://notmuchmail.org/releases/ +[12]:http://www.gnu.org/licenses/gpl.html \ No newline at end of file diff --git a/sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md b/sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md new file mode 100644 index 0000000000..5b4ad2251f --- /dev/null +++ b/sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md @@ -0,0 +1,28 @@ +Linux 4.3 Kernel To Add The MOST Driver Subsystem +================================================================================ + While the [Linux 4.2][1] kernel hasn't been officially released yet, Greg Kroah-Hartman sent in early his pull requests for the various subsystems he maintains for the Linux 4.3 merge window. + +The pull requests sent in by Greg KH on Thursday include the Linux 4.3 merge window updates for the driver core, TTY/serial, USB driver, char/misc, and the staging area. These pull requests don't offer any really shocking changes but mostly routine work on improvements / additions / bug-fixes. The staging area once again is heavy with various fixes and clean-ups but there's also a new driver subsystem. 
+ +Greg mentioned of the [4.3 staging changes][2], "Lots of things all over the place, almost all of them trivial fixups and changes. The usual IIO updates and new drivers and we have added the MOST driver subsystem which is getting cleaned up in the tree. The ozwpan driver is finally being deleted as it is obviously abandoned and no one cares about it." + +The MOST driver subsystem is short for the Media Oriented Systems Transport. The documentation to be added in the Linux 4.3 kernel explains, "The Media Oriented Systems Transport (MOST) driver gives Linux applications access a MOST network: The Automotive Information Backbone and the de-facto standard for high-bandwidth automotive multimedia networking. MOST defines the protocol, hardware and software layers necessary to allow for the efficient and low-cost transport of control, real-time and packet data using a single medium (physical layer). Media currently in use are fiber optics, unshielded twisted pair cables (UTP) and coax cables. MOST also supports various speed grades up to 150 Mbps." As explained, MOST is mostly about Linux in automotive applications. + +While Greg KH sent in his various subsystem updates for Linux 4.3, he didn't yet propose the [KDBUS][5] kernel code be pulled. He's previously expressed plans for [KDBUS in Linux 4.3][3] so we'll wait until the 4.3 merge window officially gets going to see what happens. Stay tuned to Phoronix for more Linux 4.3 kernel coverage next week when the merge window will begin, [assuming Linus releases 4.2][4] this weekend. 
+ +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.3-Staging-Pull + +作者:[Michael Larabel][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.michaellarabel.com/ +[1]:http://www.phoronix.com/scan.php?page=search&q=Linux+4.2 +[2]:http://lkml.iu.edu/hypermail/linux/kernel/1508.2/02604.html +[3]:http://www.phoronix.com/scan.php?page=news_item&px=KDBUS-Not-In-Linux-4.2 +[4]:http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.2-rc7-Released +[5]:http://www.phoronix.com/scan.php?page=search&q=KDBUS \ No newline at end of file From 778c920c34eda71845723237a4bf4bfa714aa636 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 21 Aug 2015 16:37:50 +0800 Subject: [PATCH 245/697] =?UTF-8?q?20150821-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rs--How to check MariaDB server version.md | 49 +++++++++++++++++++ 1 file changed, 49 insertions(+) create mode 100644 sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md diff --git a/sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md b/sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md new file mode 100644 index 0000000000..11bf478f09 --- /dev/null +++ b/sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md @@ -0,0 +1,49 @@ +Linux FAQs with Answers--How to check MariaDB server version +================================================================================ +> **Question**: I am on a VPS server where MariaDB server is running. How can I find out which version of MariaDB server it is running? 
+ +There are circumstances where you need to know the version of your database server, e.g., when upgrading the database or patching any known server vulnerabilities. There are a few ways to find out what the version of your MariaDB server is. + +### Method One ### + +The first method to identify MariaDB server version is by logging in to the MariaDB server. Right after you log in, you will see a welcome message where MariaDB server version is indicated. + +![](https://farm6.staticflickr.com/5807/20669891016_91249d3239_c.jpg) + +Alternatively, simply type the 'status' command at the MariaDB prompt any time while you are logged in. The output will show server version as well as protocol version as follows. + +![](https://farm6.staticflickr.com/5801/20669891046_73f60e5c81_c.jpg) + +### Method Two ### + +If you don't have access to the MariaDB server, you cannot use the first method. In this case, you can infer MariaDB server version by checking which MariaDB package was installed. This works only when the MariaDB server was installed using a distribution's package manager. + +You can search for the installed MariaDB server package as follows. + +#### Debian, Ubuntu or Linux Mint: #### + + $ dpkg -l | grep mariadb + +The output below indicates that the installed MariaDB server is version 10.0.17. + +![](https://farm1.staticflickr.com/607/20669890966_b611fcd915_c.jpg) + +#### Fedora, CentOS or RHEL: #### + + $ rpm -qa | grep mariadb + +The output below indicates that the installed version is 5.5.41. 
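If a script needs the bare version number from the second method, the package string can be filtered with grep. The value of PKG below is a made-up sample in the usual rpm naming style; on a real host it would be filled from the rpm or dpkg query shown above:

```shell
#!/bin/bash
# Illustrative sketch only: pull the bare version number out of a package
# string. PKG holds a sample value; on a real host you would feed it from:
#   rpm -qa | grep mariadb      (or: dpkg -l | grep mariadb)
PKG="mariadb-server-5.5.41-2.el7_0.x86_64"
VERSION=$(echo "$PKG" | grep -o '[0-9]\+\.[0-9]\+\.[0-9]\+' | head -n 1)
echo "MariaDB version: $VERSION"
```

The `head -n 1` keeps only the first three-part number, so release suffixes in the package name are ignored.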
+ +![](https://farm1.staticflickr.com/764/20508160748_23d9808256_b.jpg) + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/check-mariadb-server-version.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file From f038435d712622128d7d3cc811e2c4380a00cbb3 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Sat, 22 Aug 2015 08:36:54 +0800 Subject: [PATCH 246/697] [translating by bazz2]Docker Working on Security Components Live Container Migration --- ... Working on Security Components Live Container Migration.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) rename {sources => translated}/talk/20150818 Docker Working on Security Components Live Container Migration.md (98%) diff --git a/sources/talk/20150818 Docker Working on Security Components Live Container Migration.md b/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md similarity index 98% rename from sources/talk/20150818 Docker Working on Security Components Live Container Migration.md rename to translated/talk/20150818 Docker Working on Security Components Live Container Migration.md index ad974b4859..356c6f943c 100644 --- a/sources/talk/20150818 Docker Working on Security Components Live Container Migration.md +++ b/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md @@ -1,3 +1,4 @@ +[bazz2 translating] Docker Working on Security Components, Live Container Migration ================================================================================ ![Docker Container Talk](http://www.eweek.com/imagesvr_ce/1905/290x195DockerMarianna.jpg) @@ -50,4 +51,4 @@ via: http://www.eweek.com/virtualization/docker-working-on-security-components-l 本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ \ No newline at end of file +[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ From 3417a617f81e13d2b5930cf6efc1ccaef0dc8a06 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Sat, 22 Aug 2015 08:50:21 +0800 Subject: [PATCH 247/697] [place it into error place] --- ...ker Working on Security Components Live Container Migration.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated => sources}/talk/20150818 Docker Working on Security Components Live Container Migration.md (100%) diff --git a/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md b/sources/talk/20150818 Docker Working on Security Components Live Container Migration.md similarity index 100% rename from translated/talk/20150818 Docker Working on Security Components Live Container Migration.md rename to sources/talk/20150818 Docker Working on Security Components Live Container Migration.md From de5e769323a47f73f7cdba495524ffe9425c27f7 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 22 Aug 2015 09:09:52 +0800 Subject: [PATCH 248/697] translating --- ...18 Linux Without Limits--IBM Launch LinuxONE Mainframes.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md b/sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md index f97c690e3a..dc1f0e8986 100644 --- a/sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md +++ b/sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md @@ -1,3 +1,5 @@ +Translating----geekpi + Linux Without Limits: IBM Launch LinuxONE Mainframes ================================================================================ ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screenshot-2015-08-17-at-12.58.10.png) @@ -49,4 +51,4 
@@ via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnershi [a]:https://plus.google.com/117485690627814051450/?rel=author [1]:http://www-03.ibm.com/systems/z/announcement.html -[2]:http://www.reuters.com/article/2015/08/17/us-ibm-linuxone-idUSKCN0QM09P20150817 \ No newline at end of file +[2]:http://www.reuters.com/article/2015/08/17/us-ibm-linuxone-idUSKCN0QM09P20150817 From 5842e697b11a834c1b9b57757800a95ca9aafc0c Mon Sep 17 00:00:00 2001 From: Kevin Sicong Jiang Date: Fri, 21 Aug 2015 20:13:54 -0500 Subject: [PATCH 249/697] Update 20150813 How to get Public IP from Linux Terminal.md --- ...ow to get Public IP from Linux Terminal.md | 32 +++++++++---------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/sources/tech/20150813 How to get Public IP from Linux Terminal.md b/sources/tech/20150813 How to get Public IP from Linux Terminal.md index c22fec283d..98c0ec7b31 100644 --- a/sources/tech/20150813 How to get Public IP from Linux Terminal.md +++ b/sources/tech/20150813 How to get Public IP from Linux Terminal.md @@ -1,13 +1,13 @@ -KevinSJ Translating -How to get Public IP from Linux Terminal? +如何在 Linux 终端中获取公有 IP ================================================================================ ![](http://www.blackmoreops.com/wp-content/uploads/2015/06/256x256xHow-to-get-Public-IP-from-Linux-Terminal-blackMORE-Ops.png.pagespeed.ic.GKEAEd4UNr.png) -Public addresses are assigned by InterNIC and consist of class-based network IDs or blocks of CIDR-based addresses (called CIDR blocks) that are guaranteed to be globally unique to the Internet. How to get Public IP from Linux Terminal - blackMORE OpsWhen the public addresses are assigned, routes are programmed into the routers of the Internet so that traffic to the assigned public addresses can reach their locations. Traffic to destination public addresses are reachable on the Internet. 
For example, when an organization is assigned a CIDR block in the form of a network ID and subnet mask, that [network ID, subnet mask] pair also exists as a route in the routers of the Internet. IP packets destined to an address within the CIDR block are routed to the proper destination. In this post I will show several ways to find your public IP address from Linux terminal. This though seems like a waste for normal users, but when you are in a terminal of a headless Linux server(i.e. no GUI or you’re connected as a user with minimal tools). Either way, being able to getHow to get Public IP from Linux Terminal public IP from Linux terminal can be useful in many cases or it could be one of those things that might just come in handy someday. +公有地址由InterNIC分配,由基于类的网络 ID 或基于 CIDR 的地址块构成(被称为 CIDR 块),并保证了在全球互联网中的唯一性。当公有地址被分配时,路由会被记录到互联网中的路由器中,这样访问公有地址的流量就能顺利到达。发往公有地址的流量在互联网上是可达的。比如,当一个 CIDR 块以网络 ID 和子网掩码的形式分配给一个组织时,对应的 [网络 ID,子网掩码] 也会同时作为路由储存在互联网中的路由器中。发往 CIDR 块中地址的 IP 封包会被导向对应的位置。在本文中我将会介绍几种在 Linux 终端中查看你的公有 IP 地址的方法。虽然这对普通用户来说似乎有些多余,但当你在无图形界面的 Linux 服务器终端中(即没有 GUI,或者以只能使用基本工具的用户身份登录)时,它就很有用了。无论如何,能从 Linux 终端中获取公有 IP 在很多情况下都很有用,说不定某一天就能用得着。 -There’s two main commands we use, curl and wget. You can use them interchangeably. -### Curl output in plain text format: ### +以下是我们主要使用的两个命令,curl 和 wget。你可以互换使用它们。 + +### Curl 纯文本格式输出: ### curl icanhazip.com curl ifconfig.me @@ -17,53 +17,53 @@ There’s two main commands we use,
You can use them interchangea curl ipecho.net/plain curl www.trackip.net/i -### curl output in JSON format: ### +### curl JSON格式输出: ### curl ipinfo.io/json curl ifconfig.me/all.json curl www.trackip.net/ip?json (bit ugly) -### curl output in XML format: ### +### curl XML格式输出: ### curl ifconfig.me/all.xml -### curl all IP details – The motherload ### +### curl 所有IP细节 ### curl ifconfig.me/all -### Using DYNDNS (Useful when you’re using DYNDNS service) ### +### 使用 DYNDNS(当你使用 DYNDNS 服务时有用) ### curl -s 'http://checkip.dyndns.org' | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g' curl -s http://checkip.dyndns.org/ | grep -o "[[:digit:].]\+" -### Using wget instead of curl ### +### 使用 Wget 代替 Curl ### wget http://ipecho.net/plain -O - -q ; echo wget http://observebox.com/ip -O - -q ; echo -### Using host and dig command (cause we can) ### +### 使用 host 和 dig 命令 ### -You can also use host and dig command assuming they are available or installed +在可用时,你可以直接使用 host 和 dig 命令。 host -t a dartsclink.com | sed 's/.*has address //' dig +short myip.opendns.com @resolver1.opendns.com -### Sample bash script: ### +### bash 脚本示例: ### #!/bin/bash PUBLIC_IP=`wget http://ipecho.net/plain -O - -q ; echo` echo $PUBLIC_IP -Quite a few to pick from. +已经有不少选项了。 -I was actually writing a small script to track all the IP changes of my router each day and save those into a file. I found these nifty commands and sites to use while doing some online research. Hope they help someone else someday too. Thanks for reading, please Share and RT.
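上面的 bash 示例只取一次 IP;作者提到的“每天记录路由器公网 IP 变化”的思路,可以按下面的草稿来组织(仅为示意:服务列表取自上文,日志路径和函数名均为假设,可自行替换):

```shell
#!/bin/bash
# 校验字符串是否形如 IPv4 地址(避免把服务返回的错误页当成 IP 记录下来)
valid_ipv4() {
    echo "$1" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}$'
}

# 依次尝试上文列出的几个服务,任一失效时回退到下一个
get_public_ip() {
    local service ip
    for service in icanhazip.com ifconfig.me ipecho.net/plain; do
        ip=$(curl -s --max-time 5 "$service") || continue
        if valid_ipv4 "$ip"; then
            echo "$ip"
            return 0
        fi
    done
    return 1
}

# 仅当 IP 与日志中最后一条记录不同时才追加一行(带时间戳)
record_ip() {
    local ip="$1" log="$2" last
    last=$(awk '{print $NF}' "$log" 2>/dev/null | tail -n 1)
    if [ "$ip" != "$last" ]; then
        echo "$(date '+%Y-%m-%d %H:%M:%S') $ip" >> "$log"
    fi
}
```

配合 cron 每天运行一次即可,例如:`ip=$(get_public_ip) && record_ip "$ip" ~/public_ip.log`。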
+我实际上写了一个用于记录每日我的路由器中所有 IP 变化并保存到一个文件的脚本。我在搜索过程中找到了这些很好用的命令。希望某天它能帮到其他人。 -------------------------------------------------------------------------------- via: http://www.blackmoreops.com/2015/06/14/how-to-get-public-ip-from-linux-terminal/ -译者:[译者ID](https://github.com/译者ID) +译者:[KevinSJ](https://github.com/KevinSJ) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From e965dbb00062f017cf5c64e93103adc22e6d597c Mon Sep 17 00:00:00 2001 From: Kevin Sicong Jiang Date: Fri, 21 Aug 2015 20:15:39 -0500 Subject: [PATCH 250/697] Rename sources/tech/20150813 How to get Public IP from Linux Terminal.md to translated/tech/20150813 How to get Public IP from Linux Terminal.md --- .../tech/20150813 How to get Public IP from Linux Terminal.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20150813 How to get Public IP from Linux Terminal.md (100%) diff --git a/sources/tech/20150813 How to get Public IP from Linux Terminal.md b/translated/tech/20150813 How to get Public IP from Linux Terminal.md similarity index 100% rename from sources/tech/20150813 How to get Public IP from Linux Terminal.md rename to translated/tech/20150813 How to get Public IP from Linux Terminal.md From a4d55d4f6da743e381beec1e26fd37e7871e24f2 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 22 Aug 2015 14:53:23 +0800 Subject: [PATCH 251/697] [Translated] tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md --- ...Automate Linux System Maintenance Tasks.md | 208 ------------------ ...Automate Linux System Maintenance Tasks.md | 205 +++++++++++++++++ 2 files changed, 205 insertions(+), 208 deletions(-) delete mode 100644 sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md create mode 100644 translated/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md diff --git 
a/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md b/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md deleted file mode 100644 index 6b534423e7..0000000000 --- a/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md +++ /dev/null @@ -1,208 +0,0 @@ -ictlyh Translating -Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks -================================================================================ -Some time ago I read that one of the distinguishing characteristics of an effective system administrator / engineer is laziness. It seemed a little contradictory at first but the author then proceeded to explain why: - -![Automate Linux System Maintenance Tasks](http://www.tecmint.com/wp-content/uploads/2015/08/Automate-Linux-System-Maintenance-Tasks.png) - -RHCE Series: Automate Linux System Maintenance Tasks – Part 4 - -if a sysadmin spends most of his time solving issues and doing repetitive tasks, you can suspect he or she is not doing things quite right. In other words, an effective system administrator / engineer should develop a plan to perform repetitive tasks with as less action on his / her part as possible, and should foresee problems by using, - -for example, the tools reviewed in Part 3 – [Monitor System Activity Reports Using Linux Toolsets][1] of this series. Thus, although he or she may not seem to be doing much, it’s because most of his / her responsibilities have been taken care of with the help of shell scripting, which is what we’re going to talk about in this tutorial. - -### What is a shell script? ### - -In few words, a shell script is nothing more and nothing less than a program that is executed step by step by a shell, which is another program that provides an interface layer between the Linux kernel and the end user. 
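Two building blocks the scripts in this article rely on can be seen in isolation first: commands simply run top to bottom, and the colored section headers are plain ANSI escape sequences. A minimal sketch (the helper name here is made up for illustration; `printf '\033[...m'` is the portable equivalent of the article's `echo -e "\e[...m"`):

```shell
#!/bin/bash
# print_header wraps the "\e[31;43m ... \e[0m" pattern used by the
# scripts in this article (red foreground on yellow background).
print_header() {
    printf '\033[31;43m***** %s *****\033[0m\n' "$1"
}

# Commands execute step by step, top to bottom:
print_header "HOSTNAME INFORMATION"
uname -n    # prints the hostname
```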
- -By default, the shell used for user accounts in RHEL 7 is bash (/bin/bash). If you want a detailed description and some historical background, you can refer to [this Wikipedia article][2]. - -To find out more about the enormous set of features provided by this shell, you may want to check out its **man page**, which is downloaded in in PDF format at ([Bash Commands][3]). Other than that, it is assumed that you are familiar with Linux commands (if not, I strongly advise you to go through [A Guide from Newbies to SysAdmin][4] article in **Tecmint.com** before proceeding). Now let’s get started. - -### Writing a script to display system information ### - -For our convenience, let’s create a directory to store our shell scripts: - - # mkdir scripts - # cd scripts - -And open a new text file named `system_info.sh` with your preferred text editor. We will begin by inserting a few comments at the top and some commands afterwards: - - #!/bin/bash - - # Sample script written for Part 4 of the RHCE series - # This script will return the following set of system information: - # -Hostname information: - echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m" - hostnamectl - echo "" - # -File system disk space usage: - echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m" - df -h - echo "" - # -Free and used memory in the system: - echo -e "\e[31;43m ***** FREE AND USED MEMORY *****\e[0m" - free - echo "" - # -System uptime and load: - echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m" - uptime - echo "" - # -Logged-in users: - echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m" - who - echo "" - # -Top 5 processes as far as memory usage is concerned - echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m" - ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6 - echo "" - echo -e "\e[1;32mDone.\e[0m" - -Next, give the script execute permissions: - - # chmod +x system_info.sh - -and run it: - - ./system_info.sh - -Note that the headers of 
each section are shown in color for better visualization: - -![Server Monitoring Shell Script](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Shell-Script.png) - -Server Monitoring Shell Script - -That functionality is provided by this command: - - echo -e "\e[COLOR1;COLOR2m\e[0m" - -Where COLOR1 and COLOR2 are the foreground and background colors, respectively (more info and options are explained in this entry from the [Arch Linux Wiki][5]) and is the string that you want to show in color. - -### Automating Tasks ### - -The tasks that you may need to automate may vary from case to case. Thus, we cannot possibly cover all of the possible scenarios in a single article, but we will present three classic tasks that can be automated using shell scripting: - -**1)** update the local file database, 2) find (and alternatively delete) files with 777 permissions, and 3) alert when filesystem usage surpasses a defined limit. - -Let’s create a file named `auto_tasks.sh` in our scripts directory with the following content: - - #!/bin/bash - - # Sample script to automate tasks: - # -Update local file database: - echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m" - updatedb - if [ $? == 0 ]; then - echo "The local file database was updated correctly." - else - echo "The local file database was not updated correctly." - fi - echo "" - - # -Find and / or delete files with 777 permissions. - echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m" - # Enable either option (comment out the other line), but not both. - # Option 1: Delete files without prompting for confirmation. Assumes GNU version of find. - #find -type f -perm 0777 -delete - # Option 2: Ask for confirmation before deleting files. More portable across systems. 
- find -type f -perm 0777 -exec rm -i {} +; - echo "" - # -Alert when file system usage surpasses a defined limit - echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m" - THRESHOLD=30 - while read line; do - # This variable stores the file system path as a string - FILESYSTEM=$(echo $line | awk '{print $1}') - # This variable stores the use percentage (XX%) - PERCENTAGE=$(echo $line | awk '{print $5}') - # Use percentage without the % sign. - USAGE=${PERCENTAGE%?} - if [ $USAGE -gt $THRESHOLD ]; then - echo "The remaining available space in $FILESYSTEM is critically low. Used: $PERCENTAGE" - fi - done < <(df -h --total | grep -vi filesystem) - -Please note that there is a space between the two `<` signs in the last line of the script. - -![Shell Script to Find 777 Permissions](http://www.tecmint.com/wp-content/uploads/2015/08/Shell-Script-to-Find-777-Permissions.png) - -Shell Script to Find 777 Permissions - -### Using Cron ### - -To take efficiency one step further, you will not want to sit in front of your computer and run those scripts manually. Rather, you will use cron to schedule those tasks to run on a periodic basis and sends the results to a predefined list of recipients via email or save them to a file that can be viewed using a web browser. - -The following script (filesystem_usage.sh) will run the well-known **df -h** command, format the output into a HTML table and save it in the **report.html** file: - - #!/bin/bash - # Sample script to demonstrate the creation of an HTML report using shell scripting - # Web directory - WEB_DIR=/var/www/html - # A little CSS and table layout to make the report look a little nicer - echo " - - - - - " > $WEB_DIR/report.html - # View hostname and insert it at the top of the html body - HOST=$(hostname) - echo "Filesystem usage for host $HOST
- Last updated: $(date)

- - " >> $WEB_DIR/report.html - # Read the output of df -h line by line - while read line; do - echo "" >> $WEB_DIR/report.html - done < <(df -h | grep -vi filesystem) - echo "
Filesystem - Size - Use % -
" >> $WEB_DIR/report.html - echo $line | awk '{print $1}' >> $WEB_DIR/report.html - echo "" >> $WEB_DIR/report.html - echo $line | awk '{print $2}' >> $WEB_DIR/report.html - echo "" >> $WEB_DIR/report.html - echo $line | awk '{print $5}' >> $WEB_DIR/report.html - echo "
" >> $WEB_DIR/report.html - -In our **RHEL 7** server (**192.168.0.18**), this looks as follows: - -![Server Monitoring Report](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Report.png) - -Server Monitoring Report - -You can add to that report as much information as you want. To run the script every day at 1:30 pm, add the following crontab entry: - - 30 13 * * * /root/scripts/filesystem_usage.sh - -### Summary ### - -You will most likely think of several other tasks that you want or need to automate; as you can see, using shell scripting will greatly simplify this effort. Feel free to let us know if you find this article helpful and don't hesitate to add your own ideas or comments via the form below. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/ -[2]:https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29 -[3]:http://www.tecmint.com/wp-content/pdf/bash.pdf -[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/ -[5]:https://wiki.archlinux.org/index.php/Color_Bash_Prompt \ No newline at end of file diff --git a/translated/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md b/translated/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md new file mode 100644 index 0000000000..37a3dbe11c --- /dev/null +++ b/translated/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md @@ -0,0 +1,205 @@ +第四部分 - 使用 Shell 脚本自动化 Linux 系统维护任务 
+================================================================================ +之前我听说高效系统管理员/工程师的其中一个特点是懒惰。一开始看起来很矛盾,但作者接下来解释了其中的原因: + +![自动化 Linux 系统维护任务](http://www.tecmint.com/wp-content/uploads/2015/08/Automate-Linux-System-Maintenance-Tasks.png) + +RHCE 系列:第四部分 - 自动化 Linux 系统维护任务 + +如果一个系统管理员花费大量的时间解决问题以及做重复的工作,你就应该怀疑他这么做是否正确。换句话说,一个高效的系统管理员/工程师应该制定一个计划使得尽量花费少的时间去做重复的工作,以及通过使用该系列中第三部分 [使用 Linux 工具集监视系统活动报告][1] 介绍的工具预见问题。因此,尽管看起来他/她没有做很多的工作,但那是因为 shell 脚本帮助完成了他/她的大部分任务,这也就是本章我们将要探讨的东西。 + +### 什么是 shell 脚本? ### + +简单地说,shell 脚本就是一个由 shell 一步一步执行的程序,而 shell 是在 Linux 内核和终端用户之间提供接口的另一个程序。 + +默认情况下,RHEL 7 中用户使用的 shell 是 bash(/bin/bash)。如果你想知道详细的信息和历史背景,你可以查看 [维基页面][2]。 + +关于这个 shell 提供的众多功能的介绍,可以查看 **man 手册**,也可以从 ([Bash 命令][3])下载 PDF 格式。除此之外,假设你已经熟悉 Linux 命令(否则我强烈建议你首先看一下 **Tecmint.com** 中的文章 [从新手到系统管理员指南][4] )。现在让我们开始吧。 + +### 写一个脚本显示系统信息 ### + +为了方便,首先让我们新建一个目录用于保存我们的 shell 脚本: + + # mkdir scripts + # cd scripts + +然后用喜欢的文本编辑器打开新的文本文件 `system_info.sh`。我们首先在头部插入一些注释以及一些命令: + + #!/bin/bash + + # RHCE 系列第四部分示例脚本 + # 该脚本会返回以下这些系统信息: + # -主机名称: + echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m" + hostnamectl + echo "" + # -文件系统磁盘空间使用: + echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m" + df -h + echo "" + # -系统空闲和使用中的内存: + echo -e "\e[31;43m ***** FREE AND USED MEMORY *****\e[0m" + free + echo "" + # -系统运行时间和负载: + echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m" + uptime + echo "" + # -登录的用户: + echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m" + who + echo "" + # -使用内存最多的 5 个进程 + echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m" + ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6 + echo "" + echo -e "\e[1;32mDone.\e[0m" + +然后,给脚本可执行权限: + + # chmod +x system_info.sh + +运行脚本: + + ./system_info.sh + +注意为了更好的可视化效果各部分标题都用颜色显示: + +![服务器监视 Shell 脚本](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Shell-Script.png) + +服务器监视 Shell 脚本 + +该功能用以下命令提供: + + echo -e "\e[COLOR1;COLOR2m\e[0m" + +其中 COLOR1
和 COLOR2 是前景色和背景色([Arch Linux Wiki][5] 有更多的信息和选项解释), 是你想用颜色显示的字符串。 + +### 使任务自动化 ### + +你想使其自动化的任务可能因情况而不同。因此,我们不可能在一篇文章中覆盖所有可能的场景,但是我们会介绍使用 shell 脚本可以使其自动化的三种典型任务: + +**1)** 更新本地文件数据库, 2) 查找(或者删除)有 777 权限的文件, 以及 3) 文件系统使用超过定义的阈值时发出警告。 + +让我们在脚本目录中新建一个名为 `auto_tasks.sh` 的文件并添加以下内容: + + #!/bin/bash + + # 自动化任务示例脚本: + # -更新本地文件数据库: + echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m" + updatedb + if [ $? == 0 ]; then + echo "The local file database was updated correctly." + else + echo "The local file database was not updated correctly." + fi + echo "" + + # -查找 和/或 删除有 777 权限的文件。 + echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m" + # Enable either option (comment out the other line), but not both. + # Option 1: Delete files without prompting for confirmation. Assumes GNU version of find. + #find -type f -perm 0777 -delete + # Option 2: Ask for confirmation before deleting files. More portable across systems. + find -type f -perm 0777 -exec rm -i {} +; + echo "" + # -文件系统使用率超过定义的阈值时发出警告 + echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m" + THRESHOLD=30 + while read line; do + # This variable stores the file system path as a string + FILESYSTEM=$(echo $line | awk '{print $1}') + # This variable stores the use percentage (XX%) + PERCENTAGE=$(echo $line | awk '{print $5}') + # Use percentage without the % sign. + USAGE=${PERCENTAGE%?} + if [ $USAGE -gt $THRESHOLD ]; then + echo "The remaining available space in $FILESYSTEM is critically low.
Used: $PERCENTAGE" + fi + done < <(df -h --total | grep -vi filesystem) + +请注意该脚本最后一行两个 `<` 符号之间有个空格。 + +![查找 777 权限文件的 Shell 脚本](http://www.tecmint.com/wp-content/uploads/2015/08/Shell-Script-to-Find-777-Permissions.png) + +查找 777 权限文件的 Shell 脚本 + +### 使用 Cron ### + +想更进一步提高效率,你不会想只是坐在你的电脑前手动执行这些脚本。相反,你会使用 cron 来调度这些任务周期性地执行,并把结果通过邮件发送给预定义的接收者,或者将它们保存到可以用 web 浏览器查看的文件中。 + +下面的脚本(filesystem_usage.sh)会运行有名的 **df -h** 命令,格式化输出到 HTML 表格并保存到 **report.html** 文件中: + + #!/bin/bash + # Sample script to demonstrate the creation of an HTML report using shell scripting + # Web directory + WEB_DIR=/var/www/html + # A little CSS and table layout to make the report look a little nicer + echo " + + + + + " > $WEB_DIR/report.html + # View hostname and insert it at the top of the html body + HOST=$(hostname) + echo "Filesystem usage for host $HOST
+ Last updated: $(date)

+ + " >> $WEB_DIR/report.html + # Read the output of df -h line by line + while read line; do + echo "" >> $WEB_DIR/report.html + done < <(df -h | grep -vi filesystem) + echo "
Filesystem + Size + Use % +
" >> $WEB_DIR/report.html + echo $line | awk '{print $1}' >> $WEB_DIR/report.html + echo "" >> $WEB_DIR/report.html + echo $line | awk '{print $2}' >> $WEB_DIR/report.html + echo "" >> $WEB_DIR/report.html + echo $line | awk '{print $5}' >> $WEB_DIR/report.html + echo "
" >> $WEB_DIR/report.html + +在我们的 **RHEL 7** 服务器(**192.168.0.18**)中,看起来像下面这样: + +![服务器监视报告](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Report.png) + +服务器监视报告 + +你可以添加任何你想要的信息到那个报告中。添加下面的 crontab 条目在每天下午的 1:30 运行该脚本: + + 30 13 * * * /root/scripts/filesystem_usage.sh + +### 总结 ### + +你很可能想起各种其他想要自动化的任务;正如你看到的,使用 shell 脚本能极大的简化任务。如果你觉得这篇文章对你有所帮助就告诉我们吧,别犹豫在下面的表格中添加你自己的想法或评论。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/ + +作者:[Gabriel Cánepa][a] +译者:[ictlyh](https://github.com/ictlyh) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/ +[2]:https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29 +[3]:http://www.tecmint.com/wp-content/pdf/bash.pdf +[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/ +[5]:https://wiki.archlinux.org/index.php/Color_Bash_Prompt \ No newline at end of file From 94448c4d63edaa885d1e3506d39883fc3781196e Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 22 Aug 2015 16:13:59 +0800 Subject: [PATCH 252/697] translated --- ... Limits--IBM Launch LinuxONE Mainframes.md | 54 ------------------- ... 
Limits--IBM Launch LinuxONE Mainframes.md | 52 ++++++++++++++++++ 2 files changed, 52 insertions(+), 54 deletions(-) delete mode 100644 sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md create mode 100644 translated/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md diff --git a/sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md b/sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md deleted file mode 100644 index dc1f0e8986..0000000000 --- a/sources/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md +++ /dev/null @@ -1,54 +0,0 @@ -Translating----geekpi - -Linux Without Limits: IBM Launch LinuxONE Mainframes -================================================================================ -![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screenshot-2015-08-17-at-12.58.10.png) - -LinuxONE Emperor MainframeGood news for Ubuntu’s server team today as [IBM launch the LinuxONE][1] a Linux-only mainframe that is also able to run Ubuntu. - -The largest of the LinuxONE systems launched by IBM is called ‘Emperor’ and can scale up to 8000 virtual machines or tens of thousands of containers – a possible record for any one single Linux system. - -The LinuxONE is described by IBM as a ‘game changer’ that ‘unleashes the potential of Linux for business’. - -IBM and Canonical are working together on the creation of an Ubuntu distribution for LinuxONE and other IBM z Systems. Ubuntu will join RedHat and SUSE as ‘premier Linux distributions’ on IBM z. - -Alongside the ‘Emperor’ IBM is also offering the LinuxONE Rockhopper, a smaller mainframe for medium-sized businesses and organisations. - -IBM is the market leader in mainframes and commands over 90% of the mainframe market. - -注:youtube 视频 - - -### What Is a Mainframe Computer Used For? ### - -The computer you’re reading this article on would be dwarfed by a ‘big iron’ mainframe. 
They are large, hulking great cabinets packed full of high-end components, custom designed technology and dizzying amounts of storage (that is data storage, not ample room for pens and rulers). - -Mainframes computers are used by large organizations and businesses to process and store large amounts of data, crunch through statistics, and handle large-scale transaction processing. - -### ‘World’s Fastest Processor’ ### - -IBM has teamed up with Canonical Ltd to use Ubuntu on the LinuxONE and other IBM z Systems. - -The LinuxONE Emperor uses the IBM z13 processor. The chip, announced back in January, is said to be the world’s fastest microprocessor. It is able to deliver transaction response times in the milliseconds. - -But as well as being well equipped to handle for high-volume mobile transactions, the z13 inside the LinuxONE is also an ideal cloud system. - -It can handle more than 50 virtual servers per core for a total of 8000 virtual servers, making it a cheaper, greener and more performant way to scale-out to the cloud. - -**You don’t have to be a CIO or mainframe spotter to appreciate this announcement. The possibilities LinuxONE provides are clear enough. 
** - -Source: [Reuters (h/t @popey)][2] - -------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership - -作者:[Joey-Elijah Sneddon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:http://www-03.ibm.com/systems/z/announcement.html -[2]:http://www.reuters.com/article/2015/08/17/us-ibm-linuxone-idUSKCN0QM09P20150817 diff --git a/translated/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md b/translated/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md new file mode 100644 index 0000000000..8d9b3dbccb --- /dev/null +++ b/translated/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md @@ -0,0 +1,52 @@ +Linux无极限:IBM发布LinuxONE大型机 +================================================================================ +![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screenshot-2015-08-17-at-12.58.10.png) + +对 Ubuntu 服务器团队来说,今天传来了好消息:[IBM 发布了 LinuxONE][1],一种只支持 Linux、也可以运行 Ubuntu 的大型机。 + +IBM发布的最大的LinuxONE系统称作‘Emperor’,它可以扩展到8000台虚拟机或者上万台容器,这可能是单一 Linux 系统的最高纪录。 + +LinuxONE被IBM称作‘游戏改变者’,它‘释放了Linux的商业潜力’。 + +IBM和Canonical正在一起协作为LinuxONE和其他IBM z系统创建Ubuntu发行版。Ubuntu 将加入 RedHat 和 SUSE 的行列,成为 IBM z 上首屈一指的 Linux 发行版。 + +与‘Emperor’一同发布的还有 LinuxONE Rockhopper,一款面向中等规模企业和组织的较小的大型机。 + +IBM是大型机中的领导者,并且占有大型机市场中90%的份额。 + +注:youtube 视频 + + +### 大型机用于什么?
### +你阅读这篇文章所使用的电脑在一个‘大铁块’一样的大型机前会显得很矮小。它们是巨大、笨重的机柜,里面充满了高端的组件、自行设计的技术和眼花缭乱的大量存储(指数据存储,不是放钢笔和尺子的空间)。 + +大型机被大型机构和企业用来处理和存储大量数据、进行统计分析,以及处理大规模的事务。 + +### ‘世界最快的处理器’ ### + +IBM已经与Canonical Ltd合作,在LinuxONE和其他IBM z系统中使用Ubuntu。 + +LinuxONE Emperor使用IBM z13处理器。发布于一月的芯片声称是世界上最快的微处理器。它可以在几毫秒内响应事务。 + +除了能够很好地处理大量移动事务之外,LinuxONE 中的 z13 也是一个理想的云系统。 + +每个核心可以处理超过50个虚拟服务器,总共可以支持8000台虚拟服务器,这使它成为一种更便宜、更环保、性能更高的云扩展方式。 + +**你不必是 CIO 或者大型机行家也能领会这一发布的意义。LinuxONE 提供的可能性足够清晰。** + +来源: [Reuters (h/t @popey)][2] + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership + +作者:[Joey-Elijah Sneddon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:http://www-03.ibm.com/systems/z/announcement.html +[2]:http://www.reuters.com/article/2015/08/17/us-ibm-linuxone-idUSKCN0QM09P20150817 From 5877780a5dfb3e99fd82039e47b0d20b7053e934 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 22 Aug 2015 16:15:50 +0800 Subject: [PATCH 253/697] translating --- ...
FAQs with Answers--How to check MariaDB server version.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md b/sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md index 11bf478f09..f358d20756 100644 --- a/sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md +++ b/sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md @@ -1,3 +1,5 @@ +translating---geekpi + Linux FAQs with Answers--How to check MariaDB server version ================================================================================ > **Question**: I am on a VPS server where MariaDB server is running. How can I find out which version of MariaDB server it is running? @@ -46,4 +48,4 @@ via: http://ask.xmodulo.com/check-mariadb-server-version.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file +[a]:http://ask.xmodulo.com/author/nanni From c18956ce3476f54064bf39b0ea58d5b4c2ad476e Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 22 Aug 2015 16:30:41 +0800 Subject: [PATCH 254/697] translated --- ...rs--How to check MariaDB server version.md | 51 ------------------- ...rs--How to check MariaDB server version.md | 49 ++++++++++++++++++ 2 files changed, 49 insertions(+), 51 deletions(-) delete mode 100644 sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md create mode 100644 translated/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md diff --git a/sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md b/sources/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md deleted file mode 100644 index f358d20756..0000000000 --- a/sources/tech/20150821 Linux FAQs with Answers--How to check 
MariaDB server version.md +++ /dev/null @@ -1,51 +0,0 @@ -translating---geekpi - -Linux FAQs with Answers--How to check MariaDB server version -================================================================================ -> **Question**: I am on a VPS server where MariaDB server is running. How can I find out which version of MariaDB server it is running? - -There are circumstances where you need to know the version of your database server, e.g., when upgrading the database or patching any known server vulnerabilities. There are a few ways to find out what the version of your MariaDB server is. - -### Method One ### - -The first method to identify MariaDB server version is by logging in to the MariaDB server. Right after you log in, your will see a welcome message where MariaDB server version is indicated. - -![](https://farm6.staticflickr.com/5807/20669891016_91249d3239_c.jpg) - -Alternatively, simply type 'status' command at the MariaDB prompt any time while you are logged in. The output will show server version as well as protocol version as follows. - -![](https://farm6.staticflickr.com/5801/20669891046_73f60e5c81_c.jpg) - -### Method Two ### - -If you don't have access to the MariaDB server, you cannot use the first method. In this case, you can infer MariaDB server version by checking which MariaDB package was installed. This works only when the MariaDB server was installed using a distribution's package manager. - -You can search for the installed MariaDB server package as follows. - -#### Debian, Ubuntu or Linux Mint: #### - - $ dpkg -l | grep mariadb - -The output below indicates that installed MariaDB server is version 10.0.17. - -![](https://farm1.staticflickr.com/607/20669890966_b611fcd915_c.jpg) - -#### Fedora, CentOS or RHEL: #### - - $ rpm -qa | grep mariadb - -The output below indicates that the installed version is 5.5.41. 
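When a script needs the bare version number rather than the full package listing (for example, to compare against a minimum before applying an upgrade), it can be parsed out of the client banner with standard text tools. A sketch, assuming the usual banner format such as `mysql  Ver 15.1 Distrib 10.0.17-MariaDB, for debian-linux-gnu` (the wording may vary between builds, so treat the pattern as a starting point):

```shell
#!/bin/bash
# Extract "10.0.17" from a client banner like:
#   mysql  Ver 15.1 Distrib 10.0.17-MariaDB, for debian-linux-gnu (x86_64)
mariadb_version() {
    echo "$1" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+-MariaDB' | head -n 1 | cut -d- -f1
}
```

Typical use would be `mariadb_version "$(mysql --version)"`.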
- -![](https://farm1.staticflickr.com/764/20508160748_23d9808256_b.jpg) - -------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/check-mariadb-server-version.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni diff --git a/translated/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md b/translated/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md new file mode 100644 index 0000000000..36ea2d15d6 --- /dev/null +++ b/translated/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md @@ -0,0 +1,49 @@ +Linux有问必答--如何检查MariaDB服务端版本 +================================================================================ +> **提问**: 我使用的是一台运行MariaDB的VPS。我该如何检查MariaDB服务端的版本? + +需要知道数据库版本的情况有:比如当你升级数据库,或者为服务器修补已知漏洞时。这里有几种找出MariaDB服务端版本的方法。 + +### 方法一 ### + +第一种找出版本的方法是登录MariaDB服务器,登录之后,你会看到一些MariaDB的版本信息。 + +![](https://farm6.staticflickr.com/5807/20669891016_91249d3239_c.jpg) + +另一种方法是在登录MariaDB后出现的命令行中输入‘status’命令。输出会显示服务器的版本还有协议版本。 + +![](https://farm6.staticflickr.com/5801/20669891046_73f60e5c81_c.jpg) + +### 方法二 ### + +如果你不能访问MariaDB,那么你就不能用第一种方法。这种情况下你可以根据MariaDB的安装包的版本来推测。这种方法只有在MariaDB是通过包管理器安装的情况下才有用。 + +你可以用下面的方法检查MariaDB的安装包。 + +#### Debian、Ubuntu或者Linux Mint: #### + + $ dpkg -l | grep mariadb + +下面的输出说明MariaDB的版本是10.0.17。 + +![](https://farm1.staticflickr.com/607/20669890966_b611fcd915_c.jpg) + +#### Fedora、CentOS或者 RHEL: #### + + $ rpm -qa | grep mariadb + +下面的输出说明安装的版本是5.5.41。 + +![](https://farm1.staticflickr.com/764/20508160748_23d9808256_b.jpg) + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/check-mariadb-server-version.html + +作者:[Dan Nanni][a] +译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni From 595f01ffe4122410e9cc05f2592d14ed4ac1af80 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 22 Aug 2015 16:32:02 +0800 Subject: [PATCH 255/697] Update 20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md --- ...0818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md b/translated/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md index 8d9b3dbccb..7899bfaf31 100644 --- a/translated/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md +++ b/translated/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md @@ -42,7 +42,7 @@ LinuxONE Emperor使用IBM z13处理器。发布于一月的芯片声称是时间 via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership 作者:[Joey-Elijah Sneddon][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From dddad3bdeb95ac64c57565bd62a0f4f221f3c408 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 22 Aug 2015 19:57:00 +0800 Subject: [PATCH 256/697] =?UTF-8?q?[Translating]=20news/20150818=20?= =?UTF-8?q?=E2=80=8BUbuntu=20Linux=20is=20coming=20to=20IBM=20mainframes.m?= =?UTF-8?q?d?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md b/sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md index 8da7227eee..3eec354255 100644 --- a/sources/news/20150818 ​Ubuntu Linux 
is coming to IBM mainframes.md +++ b/sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md @@ -1,4 +1,5 @@ -​Ubuntu Linux is coming to IBM mainframes +ictlyh Translating​ +Ubuntu Linux is coming to IBM mainframes ================================================================================ SEATTLE -- It's finally happened. At [LinuxCon][1], IBM and [Canonical][2] announced that [Ubuntu Linux][3] will soon be running on IBM mainframes. From e34f824c8a877802d09f41b8cba8d1ddc693f8a1 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 22 Aug 2015 20:35:29 +0800 Subject: [PATCH 257/697] =?UTF-8?q?[Translated]=20news/20150818=20?= =?UTF-8?q?=E2=80=8BUbuntu=20Linux=20is=20coming=20to=20IBM=20mainframes.m?= =?UTF-8?q?d?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Ubuntu Linux is coming to IBM mainframes.md | 47 ------------------- ...Ubuntu Linux is coming to IBM mainframes.md | 46 ++++++++++++++++++ 2 files changed, 46 insertions(+), 47 deletions(-) delete mode 100644 sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md create mode 100644 translated/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md diff --git a/sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md b/sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md deleted file mode 100644 index 3eec354255..0000000000 --- a/sources/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md +++ /dev/null @@ -1,47 +0,0 @@ -ictlyh Translating​ -Ubuntu Linux is coming to IBM mainframes -================================================================================ -SEATTLE -- It's finally happened. At [LinuxCon][1], IBM and [Canonical][2] announced that [Ubuntu Linux][3] will soon be running on IBM mainframes. 
- -![](http://zdnet2.cbsistatic.com/hub/i/2015/08/17/f389e12f-03f5-48cc-8019-af4ccf6c6ecd/f15b099e439c0e3a5fd823637d4bcf87/ubuntu-mainframe.jpg) - -You'll soon to be able to get your IBM mainframe in Ubuntu Linux orange - -According to Ross Mauri, IBM's General Manager of System z, and Mark Shuttleworth, Canonical and Ubuntu's founder, this move came about because of customer demand. For over a decade, [Red Hat Enterprise Linux (RHEL)][4] and [SUSE Linux Enterprise Server (SLES)][5] were the only supported IBM mainframe Linux distributions. - -As Ubuntu matured, more and more businesses turned to it for the enterprise Linux, and more and more of them wanted it on IBM big iron hardware. In particular, banks wanted Ubuntu there. Soon, financial CIOs will have their wish granted. - -In an interview Shuttleworth said that Ubuntu Linux will be available on the mainframe by April 2016 in the next long-term support version of Ubuntu: Ubuntu 16.04. Canonical and IBM already took the first move in this direction in late 2014 by bringing [Ubuntu to IBM's POWER][6] architecture. - -Before that, Canonical and IBM almost signed the dotted line to bring [Ubuntu to IBM mainframes in 2011][7] but that deal was never finalized. This time, it's happening. - -Jane Silber, Canonical's CEO, explained in a statement, "Our [expansion of Ubuntu platform][8] support to [IBM z Systems][9] is a recognition of the number of customers that count on z Systems to run their businesses, and the maturity the hybrid cloud is reaching in the marketplace. - -**Silber continued:** - -> With support of z Systems, including [LinuxONE][10], Canonical is also expanding our relationship with IBM, building on our support for the POWER architecture and OpenPOWER ecosystem. 
Just as Power Systems clients are now benefiting from the scaleout capabilities of Ubuntu, and our agile development process which results in first to market support of new technologies such as CAPI (Coherent Accelerator Processor Interface) on POWER8, z Systems clients can expect the same rapid rollout of technology advancements, and benefit from [Juju][11] and our other cloud tools to enable faster delivery of new services to end users. In addition, our collaboration with IBM includes the enablement of scale-out deployment of many IBM software solutions with Juju Charms. Mainframe clients will delight in having a wealth of 'charmed' IBM solutions, other software provider products, and open source solutions, deployable on mainframes via Juju. - -Shuttleworth expects Ubuntu on z to be very successful. "It's blazingly fast, and with its support for OpenStack, people who want exceptional cloud region performance will be very happy. - --------------------------------------------------------------------------------- - -via: http://www.zdnet.com/article/ubuntu-linux-is-coming-to-the-mainframe/#ftag=RSSbaffb68 - -作者:[Steven J. 
Vaughan-Nichols][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ -[1]:http://events.linuxfoundation.org/events/linuxcon-north-america -[2]:http://www.canonical.com/ -[3]:http://www.ubuntu.comj/ -[4]:http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux -[5]:https://www.suse.com/products/server/ -[6]:http://www.zdnet.com/article/ibm-doubles-down-on-linux/ -[7]:http://www.zdnet.com/article/mainframe-ubuntu-linux/ -[8]:https://insights.ubuntu.com/2015/08/17/ibm-and-canonical-plan-ubuntu-support-on-ibm-z-systems-mainframe/ -[9]:http://www-03.ibm.com/systems/uk/z/ -[10]:http://www.zdnet.com/article/linuxone-ibms-new-linux-mainframes/ -[11]:https://jujucharms.com/ \ No newline at end of file diff --git a/translated/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md b/translated/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md new file mode 100644 index 0000000000..d31f9c34a7 --- /dev/null +++ b/translated/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md @@ -0,0 +1,46 @@ +IBM 大型机将搭载 Ubuntu Linux +================================================================================ +西雅图 -- 最终还是发生了。在 [LinuxCon][1] 上,IBM 和 [Canonical][2] 宣布 [Ubuntu Linux][3] 不久就会运行在 IBM 大型机上。 + +![](http://zdnet2.cbsistatic.com/hub/i/2015/08/17/f389e12f-03f5-48cc-8019-af4ccf6c6ecd/f15b099e439c0e3a5fd823637d4bcf87/ubuntu-mainframe.jpg) + +很快你就可以让你的 IBM 大型机披上 Ubuntu Linux 的橙色啦 + +根据 IBM z 系统的总经理 Ross Mauri 以及 Canonical 和 Ubuntu 的创立者 Mark Shuttleworth 所言,这是因为客户需要。十多年来,IBM 大型机只支持 [红帽企业版 Linux (RHEL)][4] 和 [SUSE Linux 企业版 (SLES)][5] 这两个 Linux 发行版。 + +随着 Ubuntu 越来越成熟,更多的企业把它作为企业级 Linux,也有更多的人希望它能运行在 IBM 大型机上。尤其是银行希望如此。不久,金融 CIO 们就可以满足他们的需求啦。 + +在一次采访中,Shuttleworth 表示 Ubuntu Linux 将在 2016 年 4 月发布的下一个长期支持版本 Ubuntu 16.04 中登陆大型机。2014 年底 Canonical 和 IBM 将 [Ubuntu 带到 IBM 的 
POWER][6] 架构中就迈出了第一步。 + +在那之前,Canonical 和 IBM 几乎签署协议 [在 2011 年实现 Ubuntu 支持 IBM 大型机][7],但最终也没有实现。这次,真的发生了。 + +Canonical 的 CEO Jane Silber 解释说 “[扩大 Ubuntu 平台支持][8] 到 [IBM z 系统][9] 是因为认识到需要 z 系统运行其业务的客户数量以及混合云市场的成熟。” + +**Silber 还说:** + +> 由于 z 系统的支持,包括 [LinuxONE][10],Canonical 和 IBM 的关系进一步加深,构建了对 POWER 架构和 OpenPOWER 生态系统的支持。正如 Power 系统的客户受益于 Ubuntu 的横向扩展能力,我们的敏捷开发过程也使得像 POWER8 上的 CAPI(Coherent Accelerator Processor Interface,一致性加速器接口)这样的新技术能率先获得支持,z 系统的客户也可以期望技术进步能快速部署,并从 [Juju][11] 和我们的其它云工具中获益,使得能快速向终端用户提供新服务。另外,我们和 IBM 的合作还包括让很多 IBM 软件解决方案能够通过 Juju Charms 进行横向扩展部署。大型机客户对于能通过 Juju 将丰富‘迷人的’ IBM 解决方案、其它软件供应商的产品、开源解决方案部署到大型机上感到高兴。 + +Shuttleworth 期望 z 系统上的 Ubuntu 能取得巨大成功。它的速度快如闪电,由于对 OpenStack 的支持,希望有卓越云性能的人会感到非常高兴。 + +-------------------------------------------------------------------------------- + +via: http://www.zdnet.com/article/ubuntu-linux-is-coming-to-the-mainframe/#ftag=RSSbaffb68 + +作者:[Steven J. Vaughan-Nichols][a] +译者:[ictlyh](https://github.com/ictlyh) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[1]:http://events.linuxfoundation.org/events/linuxcon-north-america +[2]:http://www.canonical.com/ +[3]:http://www.ubuntu.comj/ +[4]:http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux +[5]:https://www.suse.com/products/server/ +[6]:http://www.zdnet.com/article/ibm-doubles-down-on-linux/ +[7]:http://www.zdnet.com/article/mainframe-ubuntu-linux/ +[8]:https://insights.ubuntu.com/2015/08/17/ibm-and-canonical-plan-ubuntu-support-on-ibm-z-systems-mainframe/ +[9]:http://www-03.ibm.com/systems/uk/z/ +[10]:http://www.zdnet.com/article/linuxone-ibms-new-linux-mainframes/ +[11]:https://jujucharms.com/ \ No newline at end of file From a252e2a53ed53a47e2420e45f7b908ce5ca53e03 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sat, 22 Aug 2015 20:41:11 +0800 Subject: [PATCH 258/697] [Translated]RHCSA Series--Part 07--Using ACLs 
(Access Control Lists) and Mounting Samba or NFS shares.md --- ...Lists) and Mounting Samba or NFS Shares.md | 214 ----------------- ...Lists) and Mounting Samba or NFS Shares.md | 215 ++++++++++++++++++ 2 files changed, 215 insertions(+), 214 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md b/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md deleted file mode 100644 index 9237e8bd1c..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md +++ /dev/null @@ -1,214 +0,0 @@ -FSSlc translating - -RHCSA Series: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares – Part 7 -================================================================================ -In the last article ([RHCSA series Part 6][1]) we started explaining how to set up and configure local system storage using parted and ssm. - -![Configure ACL's and Mounting NFS / Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ACLs-and-Mounting-NFS-Samba-Shares.png) - -RHCSA Series:: Configure ACL’s and Mounting NFS / Samba Shares – Part 7 - -We also discussed how to create and mount encrypted volumes with a password during system boot. In addition, we warned you to avoid performing critical storage management operations on mounted filesystems. 
With that in mind we will now review the most used file system formats in Red Hat Enterprise Linux 7 and then proceed to cover the topics of mounting, using, and unmounting both manually and automatically network filesystems (CIFS and NFS), along with the implementation of access control lists for your system. - -#### Prerequisites #### - -Before proceeding further, please make sure you have a Samba server and a NFS server available (note that NFSv2 is no longer supported in RHEL 7). - -During this guide we will use a machine with IP 192.168.0.10 with both services running in it as server, and a RHEL 7 box as client with IP address 192.168.0.18. Later in the article we will tell you which packages you need to install on the client. - -### File System Formats in RHEL 7 ### - -Beginning with RHEL 7, XFS has been introduced as the default file system for all architectures due to its high performance and scalability. It currently supports a maximum filesystem size of 500 TB as per the latest tests performed by Red Hat and its partners for mainstream hardware. - -Also, XFS enables user_xattr (extended user attributes) and acl (POSIX access control lists) as default mount options, unlike ext3 or ext4 (ext2 is considered deprecated as of RHEL 7), which means that you don’t need to specify those options explicitly either on the command line or in /etc/fstab when mounting a XFS filesystem (if you want to disable such options in this last case, you have to explicitly use no_acl and no_user_xattr). - -Keep in mind that the extended user attributes can be assigned to files and directories for storing arbitrary additional information such as the mime type, character set or encoding of a file, whereas the access permissions for user attributes are defined by the regular file permission bits. 
- -#### Access Control Lists #### - -As every system administrator, either beginner or expert, is well acquainted with regular access permissions on files and directories, which specify certain privileges (read, write, and execute) for the owner, the group, and “the world” (all others). However, feel free to refer to [Part 3 of the RHCSA series][2] if you need to refresh your memory a little bit. - -However, since the standard ugo/rwx set does not allow to configure different permissions for different users, ACLs were introduced in order to define more detailed access rights for files and directories than those specified by regular permissions. - -In fact, ACL-defined permissions are a superset of the permissions specified by the file permission bits. Let’s see how all of this translates is applied in the real world. - -1. There are two types of ACLs: access ACLs, which can be applied to either a specific file or a directory), and default ACLs, which can only be applied to a directory. If files contained therein do not have a ACL set, they inherit the default ACL of their parent directory. - -2. To begin, ACLs can be configured per user, per group, or per an user not in the owning group of a file. - -3. ACLs are set (and removed) using setfacl, with either the -m or -x options, respectively. - -For example, let us create a group named tecmint and add users johndoe and davenull to it: - - # groupadd tecmint - # useradd johndoe - # useradd davenull - # usermod -a -G tecmint johndoe - # usermod -a -G tecmint davenull - -And let’s verify that both users belong to supplementary group tecmint: - - # id johndoe - # id davenull - -![Verify Users](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Users.png) - -Verify Users - -Let’s now create a directory called playground within /mnt, and a file named testfile.txt inside. 
We will set the group owner to tecmint and change its default ugo/rwx permissions to 770 (read, write, and execute permissions granted to both the owner and the group owner of the file): - - # mkdir /mnt/playground - # touch /mnt/playground/testfile.txt - # chmod 770 /mnt/playground/testfile.txt - -Then switch user to johndoe and davenull, in that order, and write to the file: - - echo "My name is John Doe" > /mnt/playground/testfile.txt - echo "My name is Dave Null" >> /mnt/playground/testfile.txt - -So far so good. Now let’s have user gacanepa write to the file – and the write operation will, which was to be expected. - -But what if we actually need user gacanepa (who is not a member of group tecmint) to have write permissions on /mnt/playground/testfile.txt? The first thing that may come to your mind is adding that user account to group tecmint. But that will give him write permissions on ALL files were the write bit is set for the group, and we don’t want that. We only want him to be able to write to /mnt/playground/testfile.txt. - - # touch /mnt/playground/testfile.txt - # chown :tecmint /mnt/playground/testfile.txt - # chmod 777 /mnt/playground/testfile.txt - # su johndoe - $ echo "My name is John Doe" > /mnt/playground/testfile.txt - $ su davenull - $ echo "My name is Dave Null" >> /mnt/playground/testfile.txt - $ su gacanepa - $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt - -![Manage User Permissions](http://www.tecmint.com/wp-content/uploads/2015/04/User-Permissions.png) - -Manage User Permissions - -Let’s give user gacanepa read and write access to /mnt/playground/testfile.txt. - -Run as root, - - # setfacl -R -m u:gacanepa:rwx /mnt/playground - -and you’ll have successfully added an ACL that allows gacanepa to write to the test file. 
Then switch to user gacanepa and try to write to the file again: - - $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt - -To view the ACLs for a specific file or directory, use getfacl: - - # getfacl /mnt/playground/testfile.txt - -![Check ACLs of Files](http://www.tecmint.com/wp-content/uploads/2015/04/Check-ACL-of-File.png) - -Check ACLs of Files - -To set a default ACL to a directory (which its contents will inherit unless overwritten otherwise), add d: before the rule and specify a directory instead of a file name: - - # setfacl -m d:o:r /mnt/playground - -The ACL above will allow users not in the owner group to have read access to the future contents of the /mnt/playground directory. Note the difference in the output of getfacl /mnt/playground before and after the change: - -![Set Default ACL in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Set-Default-ACL-in-Linux.png) - -Set Default ACL in Linux - -[Chapter 20 in the official RHEL 7 Storage Administration Guide][3] provides more ACL examples, and I highly recommend you take a look at it and have it handy as reference. - -#### Mounting NFS Network Shares #### - -To show the list of NFS shares available in your server, you can use the showmount command with the -e option, followed by the machine name or its IP address. 
This tool is included in the nfs-utils package: - - # yum update && yum install nfs-utils - -Then do: - - # showmount -e 192.168.0.10 - -and you will get a list of the available NFS shares on 192.168.0.10: - -![Check Available NFS Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Mount-NFS-Shares.png) - -Check Available NFS Shares - -To mount NFS network shares on the local client using the command line on demand, use the following syntax: - - # mount -t nfs -o [options] remote_host:/remote/directory /local/directory - -which, in our case, translates to: - - # mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs - -If you get the following error message: “Job for rpc-statd.service failed. See “systemctl status rpc-statd.service” and “journalctl -xn” for details.”, make sure the rpcbind service is enabled and started in your system first: - - # systemctl enable rpcbind.socket - # systemctl restart rpcbind.service - -and then reboot. That should do the trick and you will be able to mount your NFS share as explained earlier. If you need to mount the NFS share automatically on system boot, add a valid entry to the /etc/fstab file: - - remote_host:/remote/directory /local/directory nfs options 0 0 - -The variables remote_host, /remote/directory, /local/directory, and options (which is optional) are the same ones used when manually mounting an NFS share from the command line. As per our previous example: - - 192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0 - -#### Mounting CIFS (Samba) Network Shares #### - -Samba represents the tool of choice to make a network share available in a network with *nix and Windows machines. To show the Samba shares that are available, use the smbclient command with the -L flag, followed by the machine name or its IP address. 
This tool is included in the samba-client package: - -You will be prompted for root’s password in the remote host: - - # smbclient -L 192.168.0.10 - -![Check Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Samba-Shares.png) - -Check Samba Shares - -To mount Samba network shares on the local client you will need to install first the cifs-utils package: - - # yum update && yum install cifs-utils - -Then use the following syntax on the command line: - - # mount -t cifs -o credentials=/path/to/credentials/file //remote_host/samba_share /local/directory - -which, in our case, translates to: - - # mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba - -where smbcredentials: - - username=gacanepa - password=XXXXXX - -is a hidden file inside root’s home (/root/) with permissions set to 600, so that no one else but the owner of the file can read or write to it. - -Please note that the samba_share is the name of the Samba share as returned by smbclient -L remote_host as shown above. - -Now, if you need the Samba share to be available automatically on system boot, add a valid entry to the /etc/fstab file as follows: - - //remote_host:/samba_share /local/directory cifs options 0 0 - -The variables remote_host, /samba_share, /local/directory, and options (which is optional) are the same ones used when manually mounting a Samba share from the command line. Following the definitions given in our previous example: - - //192.168.0.10/gacanepa /mnt/samba cifs credentials=/root/smbcredentials,defaults 0 0 - -### Conclusion ### - -In this article we have explained how to set up ACLs in Linux, and discussed how to mount CIFS and NFS network shares in a RHEL 7 client. - -I recommend you to practice these concepts and even mix them (go ahead and try to set ACLs in mounted network shares) until you feel comfortable. If you have questions or comments feel free to use the form below to contact us anytime. 
Also, feel free to share this article through your social networks. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ -[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ -[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html diff --git a/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md b/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md new file mode 100644 index 0000000000..a68d36de2b --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md @@ -0,0 +1,215 @@ +RHCSA 系列:使用 ACL(访问控制列表) 和挂载 Samba/NFS 共享 – Part 7 +================================================================================ +在上一篇文章([RHCSA 系列 Part 6][1])中,我们解释了如何使用 parted 和 ssm 来设置和配置本地系统存储。 + +![配置 ACL 及挂载 NFS/Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ACLs-and-Mounting-NFS-Samba-Shares.png) + +RHCSA Series: 配置 ACL 及挂载 NFS/Samba 共享 – Part 7 + +我们也讨论了如何创建和在系统启动时使用一个密码来挂载加密的卷。另外,我们告诫过你要避免在挂载的文件系统上执行苛刻的存储管理操作。记住了这点后,现在,我们将回顾在 RHEL 7 中最常使用的文件系统格式,然后将涵盖有关手动或自动挂载、使用和卸载网络文件系统(CIFS 和 NFS)的话题以及在你的操作系统上实现访问控制列表的使用。 + +#### 前提条件 #### + +在进一步深入之前,请确保你可使用 Samba 服务和 NFS 服务(注意在 RHEL 7 中 NFSv2 已不再被支持)。 + +在本次指导中,我们将使用一个IP 地址为 192.168.0.10 且同时运行着 Samba 服务和 NFS 服务的机子来作为服务器,使用一个 IP 地址为 192.168.0.18 的 RHEL 7 机子来作为客户端。在这篇文章的后面部分,我们将告诉你在客户端上你需要安装哪些软件包。 + 
+### RHEL 7 中的文件系统格式 ### + +从 RHEL 7 开始,由于 XFS 的高性能和可扩展性,它已经被引入所有的架构中来作为默认的文件系统。 +根据 Red Hat 及其合作伙伴在主流硬件上执行的最新测试,当前 XFS 已支持最大为 500 TB 大小的文件系统。 + +另外, XFS 启用了 user_xattr(扩展用户属性) 和 acl( +POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext4(对于 RHEL 7 来说, ext2 已过时),这意味着当挂载一个 XFS 文件系统时,你不必显式地在命令行或 /etc/fstab 中指定这些选项(假如你想在后一种情况下禁用这些选项,你必须显式地使用 no_acl 和 no_user_xattr)。 + +请记住扩展用户属性可以被指定到文件和目录中来存储任意的额外信息如 mime 类型,字符集或文件的编码,而用户属性中的访问权限由一般的文件权限位来定义。 + +#### 访问控制列表 #### + +作为一名系统管理员,无论你是新手还是专家,你一定非常熟悉与文件和目录有关的常规访问权限,这些权限为所有者,所属组和“世界”(所有的其他人)指定了特定的权限(可读,可写及可执行)。但如若你需要稍微更新你的记忆,请随意参考 [RHCSA 系列的 Part 3][2]。 + +但是,由于标准的 `ugo/rwx` 集合并不允许为不同的用户配置不同的权限,所以 ACL 便被引入了进来,为的是为文件和目录定义比常规权限更加详细的访问权限。 + +事实上, ACL 定义的权限是由文件权限位所特别指定的权限的一个超集。下面就让我们看看这一切在真实世界中是如何被应用的吧。 + +1. 存在两种类型的 ACL:访问 ACL,可被应用到一个特定的文件或目录上,以及默认 ACL,只可被应用到一个目录上。假如目录中的文件没有 ACL,则它们将继承它们的父目录的默认 ACL 。 + +2. ACL 可以为每个用户,每个组或不在文件所属组中的用户配置相应的权限。 + +3. ACL 可使用 `setfacl` 来设置(和移除),可相应地使用 -m 或 -x 选项。 + +例如,让我们创建一个名为 tecmint 的组,并将用户 johndoe 和 davenull 加入该组: + + # groupadd tecmint + # useradd johndoe + # useradd davenull + # usermod -a -G tecmint johndoe + # usermod -a -G tecmint davenull + +并且让我们检验这两个用户都已属于追加的组 tecmint: + + # id johndoe + # id davenull + +![检验用户](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Users.png) + +检验用户 + +现在,我们在 /mnt 下创建一个名为 playground 的目录,并在该目录下创建一个名为 testfile.txt 的文件。我们将设定该文件的属组为 tecmint,并更改它的默认 ugo/rwx 权限为 770(即赋予该文件的属主和属组可读,可写和可执行权限): + + # mkdir /mnt/playground + # touch /mnt/playground/testfile.txt + # chmod 770 /mnt/playground/testfile.txt + +接着,依次切换为 johndoe 和 davenull 用户,并在文件中写入一些信息: + + echo "My name is John Doe" > /mnt/playground/testfile.txt + echo "My name is Dave Null" >> /mnt/playground/testfile.txt + +到目前为止,一切正常。现在我们让用户 gacanepa 来向该文件执行写操作 – 则写操作将会失败,这是可以预料的。 + +但实际上我们需要用户 gacanepa(TA 不是组 tecmint 的成员)在文件 /mnt/playground/testfile.txt 上有写权限,那又该怎么办呢?首先映入你脑海里的可能是将该用户添加到组 tecmint 中。但那将使得他在所有该组具有写权限位的文件上均拥有写权限,但我们并不想这样,我们只想他能够在文件 /mnt/playground/testfile.txt 上有写权限。 + + # touch 
/mnt/playground/testfile.txt + # chown :tecmint /mnt/playground/testfile.txt + # chmod 777 /mnt/playground/testfile.txt + # su johndoe + $ echo "My name is John Doe" > /mnt/playground/testfile.txt + $ su davenull + $ echo "My name is Dave Null" >> /mnt/playground/testfile.txt + $ su gacanepa + $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt + +![管理用户的权限](http://www.tecmint.com/wp-content/uploads/2015/04/User-Permissions.png) + +管理用户的权限 + +现在,让我们给用户 gacanepa 赋予对 /mnt/playground/testfile.txt 文件的读写权限。 + +以 root 的身份运行如下命令: + + # setfacl -R -m u:gacanepa:rwx /mnt/playground + +这样你就成功地添加了一条 ACL,允许 gacanepa 对那个测试文件执行写操作。然后切换为 gacanepa 用户,并再次尝试向该文件写入一些信息: + + $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt + +要观察一个特定的文件或目录的 ACL,可以使用 `getfacl` 命令: + + # getfacl /mnt/playground/testfile.txt + +![检查文件的 ACL](http://www.tecmint.com/wp-content/uploads/2015/04/Check-ACL-of-File.png) + +检查文件的 ACL + +要为目录设定默认 ACL(它的内容将被该目录下的文件继承,除非另外被覆写),在规则前添加 `d:`并特别指定一个目录名,而不是文件名: + + # setfacl -m d:o:r /mnt/playground + +上面的 ACL 将允许不在属组中的用户对目录 /mnt/playground 中的内容有读权限。请注意观察这次更改前后 +`getfacl /mnt/playground` 的输出结果的不同: + +![在 Linux 中设定默认 ACL](http://www.tecmint.com/wp-content/uploads/2015/04/Set-Default-ACL-in-Linux.png) + +在 Linux 中设定默认 ACL + +[在官方的 RHEL 7 存储管理指导手册的第 20 章][3] 中提供了更多有关 ACL 的例子,我极力推荐你看一看它并将它放在身边作为参考。 + +#### 挂载 NFS 网络共享 #### + +要显示你服务器上可用的 NFS 共享的列表,你可以使用带有 -e 选项的 `showmount` 命令,再跟上机器的名称或它的 IP 地址。这个工具包含在 `nfs-utils` 软件包中: + + # yum update && yum install nfs-utils + +接着运行: + + # showmount -e 192.168.0.10 + +则你将得到一个在 192.168.0.10 上可用的 NFS 共享的列表: + +![检查可用的 NFS 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Mount-NFS-Shares.png) + +检查可用的 NFS 共享 + +要按照需求在本地客户端上使用命令行来挂载 NFS 网络共享,可使用下面的语法: + + # mount -t nfs -o [options] remote_host:/remote/directory /local/directory + +其中,在我们的例子中,对应为: + + # mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs + +若你得到如下的错误信息:“Job for rpc-statd.service failed. 
See “systemctl status rpc-statd.service”及“journalctl -xn” for details.”,请确保 `rpcbind` 服务被启用且已在你的系统中启动了。 + + # systemctl enable rpcbind.socket + # systemctl restart rpcbind.service + +接着重启。这就应该达到了上面的目的,且你将能够像先前解释的那样挂载你的 NFS 共享了。若你需要在系统启动时自动挂载 NFS 共享,可以向 /etc/fstab 文件添加一个有效的条目: + + remote_host:/remote/directory /local/directory nfs options 0 0 + +上面的变量 remote_host, /remote/directory, /local/directory 和 options(可选) 和在命令行中手动挂载一个 NFS 共享时使用的一样。按照我们前面的例子,对应为: + + 192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0 + +#### 挂载 CIFS (Samba) 网络共享 #### + +Samba 是首选工具,它使得在由 *nix 和 Windows 机器组成的网络中进行网络共享成为可能。要显示可用的 Samba 共享,可使用带有 -L 选项的 smbclient 命令,再跟上机器的名称或它的 IP 地址。这个工具包含在 samba-client 软件包中: + +你将被提示在远程主机上输入 root 用户的密码: + + # smbclient -L 192.168.0.10 + +![检查 Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Samba-Shares.png) + +检查 Samba 共享 + +要在本地客户端上挂载 Samba 网络共享,你需要已安装好 cifs-utils 软件包: + + # yum update && yum install cifs-utils + +然后在命令行中使用下面的语法: + + # mount -t cifs -o credentials=/path/to/credentials/file //remote_host/samba_share /local/directory + +其中,在我们的例子中,对应为: + + # mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba + +其中 `smbcredentials` + + username=gacanepa + password=XXXXXX + +是一个位于 root 用户的家目录(/root/) 中的隐藏文件,其权限被设置为 600,所以除了该文件的属主外,其他人对该文件既不可读也不可写。 + +请注意 samba_share 是 Samba 分享的名称,由上面展示的 `smbclient -L remote_host` 所返回。 + +现在,若你需要在系统启动时自动地使得 Samba 分享可用,可以向 /etc/fstab 文件添加一个像下面这样的有效条目: + + //remote_host:/samba_share /local/directory cifs options 0 0 + +上面的变量 remote_host, /samba_share, /local/directory 和 options(可选) 和在命令行中手动挂载一个 Samba 共享时使用的一样。按照我们前面的例子中所给的定义,对应为: + + //192.168.0.10/gacanepa /mnt/samba cifs credentials=/root/smbcredentials,defaults 0 0 + +### 结论 ### + +在这篇文章中,我们已经解释了如何在 Linux 中设置 ACL,并讨论了如何在一个 RHEL 7 客户端上挂载 CIFS 和 NFS 网络共享。 + +我建议你去练习这些概念,甚至混合使用它们(试着在一个挂载的网络共享上设置 ACL),直至你感觉舒适。假如你有问题或评论,请随时随意地使用下面的评论框来联系我们。另外,请随意通过你的社交网络分享这篇文章。 + 
+-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ +[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ +[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html \ No newline at end of file From 0ff0aa172f23fc7372361f35a179e925f50769ca Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 22 Aug 2015 20:42:00 +0800 Subject: [PATCH 259/697] [Translating] sources/tech/20150821 How to Install Visual Studio Code in Linux.md --- .../tech/20150821 How to Install Visual Studio Code in Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150821 How to Install Visual Studio Code in Linux.md b/sources/tech/20150821 How to Install Visual Studio Code in Linux.md index 2fac79701e..9c00401f76 100644 --- a/sources/tech/20150821 How to Install Visual Studio Code in Linux.md +++ b/sources/tech/20150821 How to Install Visual Studio Code in Linux.md @@ -1,3 +1,4 @@ +ictlyh Translating How to Install Visual Studio Code in Linux ================================================================================ Hi everyone, today we'll learn how to install Visual Studio Code in Linux Distributions. Visual Studio Code is a code-optimized editor based on Electron, a piece of software that is based on Chromium, which is used to deploy io.js applications for the desktop. It is a source code editor and text editor developed by Microsoft for all the operating system platforms including Linux. 
Visual Studio Code is free but not an open source software ie. its under proprietary software license terms. It is an awesome powerful and fast code editor for our day to day use. Some of the cool features of visual studio code are navigation, intellisense support, syntax highlighting, bracket matching, auto indentation, and snippets, keyboard support with customizable bindings and support for dozens of languages like Python, C++, jade, PHP, XML, Batch, F#, DockerFile, Coffee Script, Java, HandleBars, R, Objective-C, PowerShell, Luna, Visual Basic, .Net, Asp.Net, C#, JSON, Node.js, Javascript, HTML, CSS, Less, Sass and Markdown. Visual Studio Code integrates with package managers and repositories, and builds and other common tasks to make everyday workflows faster. The most popular feature in Visual Studio Code is its debugging feature which includes a streamlined support for Node.js debugging in the preview. From 90ccfc60e7d072ac6c3ab2ea07381e3ce43ca2fa Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sat, 22 Aug 2015 20:47:45 +0800 Subject: [PATCH 260/697] Update RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...ing SSH, Setting Hostname and Enabling Network Services.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md b/sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md index a381b1c94a..40fa771580 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Securing SSH, Setting Hostname and Enabling 
Network Services – Part 8 ================================================================================ As a system administrator you will often have to log on to remote systems to perform a variety of administration tasks using a terminal emulator. You will rarely sit in front of a real (physical) terminal, so you need to set up a way to log on remotely to the machines that you will be asked to manage. @@ -212,4 +214,4 @@ via: http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network- [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/ -[2]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ \ No newline at end of file +[2]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ From 0f984b2a584921758f4d6afba60ebf0bb65d2af5 Mon Sep 17 00:00:00 2001 From: runningwater Date: Sun, 23 Aug 2015 00:21:15 +0800 Subject: [PATCH 261/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...How to Install Logwatch on Ubuntu 15.04.md | 138 ------------------ ...How to Install Logwatch on Ubuntu 15.04.md | 137 +++++++++++++++++ 2 files changed, 137 insertions(+), 138 deletions(-) delete mode 100644 sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md create mode 100644 translated/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md diff --git a/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md b/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md deleted file mode 100644 index 24c71b0cbe..0000000000 --- a/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md +++ /dev/null @@ -1,138 +0,0 @@ -(translating by runningwater) -How to Install Logwatch on Ubuntu 15.04 -================================================================================ -Hi, Today we are going to illustrate the setup of 
Logwatch on Ubuntu 15.04 Operating system where as it can be used for any Linux and UNIX like operating systems. Logwatch is a customizable system log analyzer and reporting log-monitoring system that go through your logs for a given period of time and make a report in the areas that you wish with the details you want. Its an easy tool to install, configure, review and to take actions that will improve security from data it provides. Logwatch scans the log files of major operating system components, like SSH, Web Server and forwards a summary that contains the valuable items in it that needs to be looked at. - -### Pre-installation Setup ### - -We will be using Ubuntu 15.04 operating system to deploy Logwatch on it so as a perquisite for the installation of Logwatch, make sure that your emails setup is working as it will be used to send email to the administrators for daily reports on the gathered reports.Your system repositories should be enabled as we will be installing it from its available universal repositories. - -Then open the terminal of your ubuntu operating system and login with root user to update your system packages before moving to Logwatch installation. - - root@ubuntu-15:~# apt-get update - -### Installing Logwatch ### - -Once your system is updated and your have fulfilled all its prerequisites then run the following command to start the installation of Logwatch in your server. - - root@ubuntu-15:~# apt-get install logwatch - -The logwatch installation process will starts with addition of some extra required packages as shown once you press “Y” to accept the required changes to the system. - -During the installation process you will be prompted to configure the Postfix Configurations according to your mail server’s setup. Here we used “Local only” in the tutorial for ease, we can choose from the other available options as per your infrastructure requirements and then press “OK” to proceed. 
- -![Potfix Configurations](http://blog.linoxide.com/wp-content/uploads/2015/08/21.png) - -Then you have to choose your mail server’s name that will also be used by other programs, so it should be single fully qualified domain name (FQDN). - -![Postfix Setup](http://blog.linoxide.com/wp-content/uploads/2015/08/31.png) - -Once you press “OK” after postfix configurations, then it will completes the Logwatch installation process with default configurations of Postfix. - -![Logwatch Completion](http://blog.linoxide.com/wp-content/uploads/2015/08/41.png) - -You can check the status of Logwatch by issuing the following command in the terminal that should be in active state. - - root@ubuntu-15:~# service postfix status - -![Postfix Status](http://blog.linoxide.com/wp-content/uploads/2015/08/51.png) - -To confirm the installation of Logwatch with its default configurations, issue the simple “logwatch” command as shown. - - root@ubuntu-15:~# logwatch - -The output from the above executed command will results in following compiled report form in the terminal. - -![Logwatch Report](http://blog.linoxide.com/wp-content/uploads/2015/08/61.png) - -### Logwatch Configurations ### - -Now after successful installation of Logwatch, we need to make few configuration changes in its configuration file located under following shown path. So, let’s open it with the file editor to update its configurations as required. - - root@ubuntu-15:~# vim /usr/share/logwatch/default.conf/logwatch.conf - -**Output/Format Options** - -By default Logwatch will print to stdout in text with no encoding.To make email Default set “Output = mail” and to save to file set “Output = file”. So you can comment out the its default configurations as per your required settings. - - Output = stdout - -To make Html the default formatting update the following line if you are using Internet email configurations. 
- - Format = text - -Now add the default person to mail reports should be sent to, it could be a local account or a complete email address that you are free to mention in this line - - MailTo = root - #MailTo = user@test.com - -Default person to mail reports sent from can be a local account or any other you wish to use. - - # complete email address. - MailFrom = Logwatch - -Save the changes made in the configuration file of Logwatch while leaving the other parameter as default. - -**Cronjob Configuration** - -Now edit the "00logwatch" file in daily crons directory to configure your desired email address to forward reports from logwatch. - - root@ubuntu-15:~# vim /etc/cron.daily/00logwatch - -Here you need to use "--mailto" user@test.com instead of --output mail and save the file. - -![Logwatch Cronjob](http://blog.linoxide.com/wp-content/uploads/2015/08/71.png) - -### Using Logwatch Report ### - -Now we generate the test report by executing the "logwatch" command in the terminal to get its result shown in the Text format within the terminal. - - root@ubuntu-15:~#logwatch - -The generated report starts with showing its execution time and date, it will be comprising of different sections that starts with its begin status and closed with end status after showing the complete information about its logs of the mentioned sections. - -Here is its starting point looks like, where it starts by showing all the installed packages in the system as shown below. - -![dpkg status](http://blog.linoxide.com/wp-content/uploads/2015/08/81.png) - -The following sections shows the logs informmation about the login sessions, rsyslogs and SSH connections about the current and last sessions enabled on the system. - -![logwatch report](http://blog.linoxide.com/wp-content/uploads/2015/08/9.png) - -The logwatch report will ends up by showing the secure sudo logs and the disk space usage of the root diretory as shown below. 
- -![Logwatch end report](http://blog.linoxide.com/wp-content/uploads/2015/08/10.png) - -You can also check for the generated emails about the logwatch reports by opening the following file. - - root@ubuntu-15:~# vim /var/mail/root - -Here you will be able to see all the generated emails to your configured users with their message delivery status. - -### More about Logwatch ### - -Logwatch is a great tool to lern more about it, so if your more interested to learn more about its logwatch then you can also get much help from the below few commands. - - root@ubuntu-15:~# man logwatch - -The above command contains all the users manual about the logwatch, so read it carefully and to exit from the manuals section simply press "q". - -To get help about the logwatch commands usage you can run the following help command for further information in details. - - root@ubuntu-15:~# logwatch --help - -### Conclusion ### - -At the end of this tutorial you learn about the complete setup of Logwatch on Ubuntu 15.04 that includes with its installation and configurations guide. Now you can start monitoring your logs in a customize able form, whether you monitor the logs of all the services rnning on your system or you customize it to send you the reports about the specific services on the scheduled days. So, let's use this tool and feel free to leave us a comment if you face any issue or need to know more about logwatch usage. 
-
---------------------------------------------------------------------------------
-
-via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/
-
-作者:[Kashif Siddique][a]
-译者:[runningwater](https://github.com/runningwater)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/kashifs/
diff --git a/translated/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md b/translated/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
new file mode 100644
index 0000000000..8bb0836755
--- /dev/null
+++ b/translated/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
@@ -0,0 +1,137 @@
+在 Ubuntu 15.04 系统中安装 Logwatch
+================================================================================
+大家好,今天我们会讲述在 Ubuntu 15.04 操作系统上如何安装 Logwatch 软件,它也可以在任意的 Linux 系统和类 Unix 系统上安装。Logwatch 是一款可定制的日志分析和日志监控报告生成系统,它可以根据一段时间的日志文件生成您所希望关注的详细报告。它具有易安装、易配置、可审查等特性,同时在其提供数据的安全性上也有一些保障措施。Logwatch 会扫描重要的操作系统组件(像 SSH、网站服务等)的日志文件,然后生成用户所关心的有价值的条目汇总报告。
+
+### 预安装设置 ###
+
+我们会使用 Ubuntu 15.04 版本的操作系统来部署 Logwatch,所以安装 Logwatch 之前,要确保系统上的邮件服务设置是正常可用的,因为它会每天把生成的报告以日报的形式发送邮件给管理员。您的系统软件源也应该设置可用,以便可以从通用源库来安装 Logwatch。
+
+然后打开您的 Ubuntu 系统终端,用 root 账号登录,在进入 Logwatch 的安装操作前,先更新您的系统软件包。
+
+    root@ubuntu-15:~# apt-get update
+
+### 安装 Logwatch ###
+
+只要您的系统已经更新并且满足了前面所说的先决条件,就可以在您的机器上输入如下命令来安装 Logwatch。
+
+    root@ubuntu-15:~# apt-get install logwatch
+
+在安装过程中,一旦您按提示按下“Y”键同意对系统的修改,Logwatch 将会开始安装一些额外的必需软件包。
+
+安装过程中会根据您机器上邮件服务器的设置情况,弹出 Postfix 设置的配置界面。在这篇教程中我们使用最容易的 “仅本地” 选项。根据您的基础设施情况也可以选择其它的可选项,然后点击“确定”继续。
+
+![Potfix Configurations](http://blog.linoxide.com/wp-content/uploads/2015/08/21.png)
+
+随后您得选择邮件服务器名,这个邮件服务器名也会被其它程序使用,所以它应该是一个完全限定域名(FQDN),且只能有一个。
+
+![Postfix Setup](http://blog.linoxide.com/wp-content/uploads/2015/08/31.png)
+
+一旦按下在 postfix 配置提示底端的 “OK”,安装进程就会用 Postfix 的默认配置来安装,并完成 Logwatch 的整个安装过程。
+
+![Logwatch 
Completion](http://blog.linoxide.com/wp-content/uploads/2015/08/41.png)
+
+您可以在终端下执行如下命令来检查 Logwatch 的状态,正常情况下它应该处于激活状态。
+
+    root@ubuntu-15:~# service postfix status
+
+![Postfix Status](http://blog.linoxide.com/wp-content/uploads/2015/08/51.png)
+
+要确认 Logwatch 在默认配置下已安装成功,可以如下所示简单地执行 “logwatch” 命令。
+
+    root@ubuntu-15:~# logwatch
+
+上面的命令执行后,会在终端中以文本报表的形式展现输出结果。
+
+![Logwatch Report](http://blog.linoxide.com/wp-content/uploads/2015/08/61.png)
+
+### 配置 Logwatch ###
+
+成功安装好 Logwatch 后,我们需要在它的配置文件中做一些修改,配置文件位于如下所示的路径。那么,就让我们用文本编辑器打开它,然后按需要做些变动。
+
+    root@ubuntu-15:~# vim /usr/share/logwatch/default.conf/logwatch.conf
+
+**输出/格式化选项**
+
+默认情况下,Logwatch 会以不带编码的纯文本打印到标准输出。要改为以邮件为默认输出方式,需设置“Output = mail”;要改为保存成文件,需设置“Output = file”。您可以根据自己的要求修改下面的默认配置。
+
+    Output = stdout
+
+如果使用的是因特网电子邮件配置,要将 HTML 设为默认输出格式,需要修改如下所示的行。
+
+    Format = text
+
+现在添加默认的报告收件人地址,可以是本地账号,也可以是完整的邮件地址,按需要写在这一行即可。
+
+    MailTo = root
+    #MailTo = user@test.com
+
+默认的发件人可以是本地账号,也可以是您希望使用的其它名字。
+
+    # complete email address.
+    MailFrom = Logwatch
+
+保存对配置文件所做的修改,其它参数保持默认即可,无需改动。
+
+**调度任务配置**
+
+现在编辑 cron.daily 目录下的 “00logwatch” 文件,来配置 logwatch 生成的报告需要发送到的邮件地址。
+
+    root@ubuntu-15:~# vim /etc/cron.daily/00logwatch
+
+在这儿您需要使用“--mailto user@test.com”来替换掉“--output mail”,然后保存文件。
+
+![Logwatch Cronjob](http://blog.linoxide.com/wp-content/uploads/2015/08/71.png)
+
+### 生成报告 ###
+
+现在我们在终端中执行“logwatch”命令来生成测试报告,生成的结果会在终端中以文本格式显示出来。
+
+    root@ubuntu-15:~# logwatch
+
+生成的报告开始部分显示的是执行的时间和日期。报告包含不同的部分,每个部分以开始标识开始、以结束标识结束,中间显示该部分的完整日志信息。
+
+这儿演示的是报告开头部分的样子,它显示了系统上所有已安装软件包的信息,如下所示:
+
+![dpkg status](http://blog.linoxide.com/wp-content/uploads/2015/08/81.png)
+
+接下来的部分显示的日志信息是关于当前的系统登录会话、rsyslog 以及当前和最近的 SSH 连接会话。
+
+![logwatch report](http://blog.linoxide.com/wp-content/uploads/2015/08/9.png)
+
+Logwatch 报告最后显示的是 sudo 安全日志及 root 目录的磁盘使用情况,如下所示:
+
+![Logwatch end report](http://blog.linoxide.com/wp-content/uploads/2015/08/10.png)
+
+您也可以打开如下的文件来查看生成的 logwatch 报告电子邮件。
+
+    root@ubuntu-15:~# vim /var/mail/root
+ 
+您会看到所有已生成的邮件发送到其配置用户时的投递状态。
+
+### 更多详情 ###
+
+Logwatch 是一款很不错的工具,可以学习的东西很多,所以如果您对它的日志监控功能很感兴趣的话,也可以通过如下所示的简短命令来获得更多帮助。
+
+    root@ubuntu-15:~# man logwatch
+
+上面的命令包含关于 logwatch 的完整用户手册,请仔细阅读,要退出手册的话可以简单地输入“q”。
+
+关于 logwatch 命令的使用,您可以使用如下所示的帮助命令来获得更多的详细信息。
+
+    root@ubuntu-15:~# logwatch --help
+
+### 结论 ###
+
+教程结束,您已经学会了在 Ubuntu 15.04 上安装和配置 Logwatch 的全部过程。现在您就可以自定义监控您的系统日志,不管是监控所有服务的运行情况,还是让特定的服务在指定的日期发送报告,都可以做到。所以,开始使用这个工具吧,无论何时有问题或想知道更多关于 logwatch 的使用方法,都可以给我们留言。
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/
+
+作者:[Kashif Siddique][a]
+译者:[runningwater](https://github.com/runningwater)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
From 4eb314a47b3d8decfb737dccd8ac92e45603c7e3 Mon Sep 17 00:00:00 2001
From: DongShuaike
Date: Sun, 23 Aug 2015 09:43:03 +0800
Subject: [PATCH 262/697] Create 20150823 How learning data structures and
 algorithms make you a better developer.md

---
 ...
algorithms make you a better developer.md | 126 ++++++++++++++++++ 1 file changed, 126 insertions(+) create mode 100644 sources/talk/20150823 How learning data structures and algorithms make you a better developer.md diff --git a/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md b/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md new file mode 100644 index 0000000000..7152efa1ed --- /dev/null +++ b/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md @@ -0,0 +1,126 @@ +How learning data structures and algorithms make you a better developer +================================================================================ + +> "I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful […] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important." +-- Linus Torvalds + +--- + +> "Smart data structures and dumb code works a lot better than the other way around." +-- Eric S. Raymond, The Cathedral and The Bazaar + +Learning about data structures and algorithms makes you a stonking good programmer. + +**Data structures and algorithms are patterns for solving problems.** The more of them you have in your utility belt, the greater variety of problems you'll be able to solve. You'll also be able to come up with more elegant solutions to new problems than you would otherwise be able to. + +You'll understand, ***in depth***, how your computer gets things done. This informs any technical decisions you make, regardless of whether or not you're using a given algorithm directly. 
Everything from memory allocation in the depths of your operating system, to the inner workings of your RDBMS, to how your networking stack manages to send data from one corner of Earth to another. All computers rely on fundamental data structures and algorithms, so understanding them better makes you understand the computer better.
+
+Cultivate a broad and deep knowledge of algorithms and you'll have stock solutions to large classes of problems. Problem spaces that you had difficulty modelling before often slot neatly into well-worn data structures that elegantly handle the known use-cases. Dive deep into the implementation of even the most basic data structures and you'll start seeing applications for them in your day-to-day programming tasks.
+
+You'll also be able to come up with novel solutions to the somewhat fruitier problems you're faced with. Data structures and algorithms have the habit of proving themselves useful in situations that they weren't originally intended for, and the only way you'll discover these on your own is by having a deep and intuitive knowledge of at least the basics.
+
+But enough with the theory, have a look at some examples:
+
+###Figuring out the fastest way to get somewhere###
+Let's say we're creating software to figure out the shortest distance from one international airport to another. Assume we're constrained to the following routes:
+
+![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/airport-graph-d2e32b3344b708383e405d67a80c29ea.svg)
+
+Given the above graph of destinations and the distances between them, how can we find the shortest distance, say, from Helsinki to London? **Dijkstra's algorithm** is the algorithm that will definitely get us the right answer in the shortest time.
+
+In all likelihood, if you ever came across this problem and knew that Dijkstra's algorithm was the solution, you'd probably never have to implement it from scratch. 
Just ***knowing*** about it would point you to a library implementation that solves the problem for you.
+
+If you did dive deep into the implementation, you'd be working through one of the most important graph algorithms we know of. You'd know that in practice it's a little resource intensive so an extension called A* is often used in its place. It gets used everywhere from robot guidance to routing TCP packets to GPS pathfinding.
+
+###Figuring out the order to do things in###
+Let's say you're trying to model courses on a new Massive Open Online Courses platform (like Udemy or Khan Academy). Some of the courses depend on each other. For example, a user has to have taken Calculus before she's eligible for the course on Newtonian Mechanics. Courses can have multiple dependencies. Here are some examples of what that might look like written out in YAML:

+    # Mapping from course name to requirements
+    #
+    # If you're a physicist or a mathematician and you're reading this, sincere
+    # apologies for the completely made-up dependency tree :)
+    courses:
+      arithmetic: []
+      algebra: [arithmetic]
+      trigonometry: [algebra]
+      calculus: [algebra, trigonometry]
+      geometry: [algebra]
+      mechanics: [calculus, trigonometry]
+      atomic_physics: [mechanics, calculus]
+      electromagnetism: [calculus, atomic_physics]
+      radioactivity: [algebra, atomic_physics]
+      astrophysics: [radioactivity, calculus]
+      quantum_mechanics: [atomic_physics, radioactivity, calculus]
+
+Given those dependencies, as a user, I want to be able to pick any course and have the system give me an ordered list of courses that I would have to take to be eligible. So if I picked `calculus`, I'd want the system to return the list:
+
+    arithmetic -> algebra -> trigonometry -> calculus
+
+Two important constraints on this that may not be self-evident:
+
+ - At every stage in the course list, the dependencies of the next course must be met.
+ - We don't want any duplicate courses in the list.
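Before getting into implementations, we can sketch the behaviour we're after with the standard Unix `tsort` utility (part of GNU coreutils — assumed available here), which the article comes back to below. Each input line is a `prerequisite course` pair derived from the YAML above; for the `calculus` sub-graph the constraints happen to force a unique valid order:

```shell
# Each line is a pair "A B", meaning "A must come before B".
# These pairs encode the prerequisites of `calculus` from the YAML above.
printf '%s\n' \
    'arithmetic algebra' \
    'algebra trigonometry' \
    'algebra calculus' \
    'trigonometry calculus' | tsort
# Prints the only ordering that satisfies every constraint:
#   arithmetic
#   algebra
#   trigonometry
#   calculus
```

Both constraints above hold in the output: every course appears after all of its prerequisites, and no course appears twice.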
+
+This is an example of resolving dependencies and the algorithm we're looking for to solve this problem is called topological sort (tsort). Tsort works on a dependency graph like we've outlined in the YAML above. Here's what that would look like in a graph (where each arrow means `requires`):

+![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/course-graph-2f60f42bb0dc95319954ce34c02705a2.svg)

+What topological sort does is take a graph like the one above and find an ordering in which all the dependencies are met at each stage. So if we took a sub-graph that only contained `radioactivity` and its dependencies, then ran tsort on it, we might get the following ordering:

+    arithmetic
+    algebra
+    trigonometry
+    calculus
+    mechanics
+    atomic_physics
+    radioactivity

+This meets the requirements set out by the use case we described above. A user just has to pick `radioactivity` and they'll get an ordered list of all the courses they have to work through before they're allowed to.

+We don't even need to go into the details of how topological sort works before we put it to good use. In all likelihood, your programming language of choice probably has an implementation of it in the standard library. In the worst case scenario, your Unix probably has the `tsort` utility installed by default; run `man tsort` and have a play with it.

+###Other places tsort gets used###

+ - **Tools like `make`** allow you to declare task dependencies. Topological sort is used under the hood to figure out what order the tasks should be executed in.
+ - **Any programming language that has a `require` directive**, indicating that the current file requires the code in a different file to be run first. Here topological sort can be used to figure out what order the files should be loaded in so that each is only loaded once and all dependencies are met.
+ - **Project management tools with Gantt charts**. 
A Gantt chart is a graph that outlines all the dependencies of a given task and gives you an estimate of when it will be complete based on those dependencies. I'm not a fan of Gantt charts, but it's highly likely that tsort will be used to draw them.

+###Squeezing data with Huffman coding###
+[Huffman coding](http://en.wikipedia.org/wiki/Huffman_coding) is an algorithm used for lossless data compression. It works by analyzing the data you want to compress and creating a binary code for each character. More frequently occurring characters get smaller codes, so `e` might be encoded as `111` while `x` might be `10010`. The codes are created so that they can be concatenated without a delimiter and still be decoded accurately.

+Huffman coding is used along with LZ77 in the DEFLATE algorithm which is used by gzip to compress things. gzip is used all over the place, in particular for compressing files (typically anything with a `.gz` extension) and for HTTP requests/responses in transit.

+Knowing how to implement and use Huffman coding has a number of benefits:

+ - You'll know why a larger compression context results in better compression overall (e.g. the more you compress, the better the compression ratio). This is one of the proposed benefits of SPDY: that you get better compression on multiple HTTP requests/responses.
+ 
To operate as a web developer you need to know markup languages, high level languages like ruby/python, regular expressions, SQL and JavaScript. You need to know the fine details of HTTP, how to drive a unix terminal and the subtle art of object oriented programming. It's difficult to navigate that landscape effectively and choose what to learn next. + +I'm not a fast learner so I have to choose what to spend time on very carefully. As much as possible, I want to learn skills and techniques that are evergreen, that is, won't be rendered obsolete in a few years time. That means I'm hesitant to learn the javascript framework of the week or untested programming languages and environments. + +As long as our dominant model of computation stays the same, data structures and algorithms that we use today will be used in some form or another in the future. You can safely spend time on gaining a deep and thorough knowledge of them and know that they will pay dividends for your entire career as a programmer. + +###Sign up to the Happy Bear Software List### +Find this article useful? For a regular dose of freshly squeezed technical content delivered straight to your inbox, **click on the big green button below to sign up to the Happy Bear Software mailing list.** + +We'll only be in touch a few times per month and you can unsubscribe at any time. 
+ +-------------------------------------------------------------------------------- + +via: http://www.happybearsoftware.com/how-learning-data-structures-and-algorithms-makes-you-a-better-developer + +作者:[Happy Bear][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.happybearsoftware.com/ +[1]:http://en.wikipedia.org/wiki/Huffman_coding +[2]:http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation + + + From 308d8a627262cc3cc9b7a25701b1813fe02cea01 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sun, 23 Aug 2015 15:30:33 +0800 Subject: [PATCH 263/697] [Translated] tech/20150821 How to Install Visual Studio Code in Linux.md --- ... to Install Visual Studio Code in Linux.md | 128 ------------------ ... to Install Visual Studio Code in Linux.md | 127 +++++++++++++++++ 2 files changed, 127 insertions(+), 128 deletions(-) delete mode 100644 sources/tech/20150821 How to Install Visual Studio Code in Linux.md create mode 100644 translated/tech/20150821 How to Install Visual Studio Code in Linux.md diff --git a/sources/tech/20150821 How to Install Visual Studio Code in Linux.md b/sources/tech/20150821 How to Install Visual Studio Code in Linux.md deleted file mode 100644 index 9c00401f76..0000000000 --- a/sources/tech/20150821 How to Install Visual Studio Code in Linux.md +++ /dev/null @@ -1,128 +0,0 @@ -ictlyh Translating -How to Install Visual Studio Code in Linux -================================================================================ -Hi everyone, today we'll learn how to install Visual Studio Code in Linux Distributions. Visual Studio Code is a code-optimized editor based on Electron, a piece of software that is based on Chromium, which is used to deploy io.js applications for the desktop. It is a source code editor and text editor developed by Microsoft for all the operating system platforms including Linux. 
Visual Studio Code is free but not an open source software ie. its under proprietary software license terms. It is an awesome powerful and fast code editor for our day to day use. Some of the cool features of visual studio code are navigation, intellisense support, syntax highlighting, bracket matching, auto indentation, and snippets, keyboard support with customizable bindings and support for dozens of languages like Python, C++, jade, PHP, XML, Batch, F#, DockerFile, Coffee Script, Java, HandleBars, R, Objective-C, PowerShell, Luna, Visual Basic, .Net, Asp.Net, C#, JSON, Node.js, Javascript, HTML, CSS, Less, Sass and Markdown. Visual Studio Code integrates with package managers and repositories, and builds and other common tasks to make everyday workflows faster. The most popular feature in Visual Studio Code is its debugging feature which includes a streamlined support for Node.js debugging in the preview. - -Note: Please note that, Visual Studio Code is only available for 64-bit versions of Linux Distributions. - -Here, are some easy to follow steps on how to install Visual Sudio Code in all Linux Distribution. - -### 1. Downloading Visual Studio Code Package ### - -First of all, we'll gonna download the Visual Studio Code Package for 64-bit Linux Operating System from the Microsoft server using the given url [http://go.microsoft.com/fwlink/?LinkID=534108][1] . Here, we'll use wget to download it and keep it under /tmp/VSCODE directory as shown below. - - # mkdir /tmp/vscode; cd /tmp/vscode/ - # wget https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip - - --2015-06-24 06:02:54-- https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip - Resolving az764295.vo.msecnd.net (az764295.vo.msecnd.net)... 93.184.215.200, 2606:2800:11f:179a:1972:2405:35b:459 - Connecting to az764295.vo.msecnd.net (az764295.vo.msecnd.net)|93.184.215.200|:443... connected. - HTTP request sent, awaiting response... 
200 OK - Length: 64992671 (62M) [application/octet-stream] - Saving to: ‘VSCode-linux-x64.zip’ - 100%[================================================>] 64,992,671 14.9MB/s in 4.1s - 2015-06-24 06:02:58 (15.0 MB/s) - ‘VSCode-linux-x64.zip’ saved [64992671/64992671] - -### 2. Extracting the Package ### - -Now, after we have successfully downloaded the zipped package of Visual Studio Code, we'll gonna extract it using the unzip command to /opt/directory. To do so, we'll need to run the following command in a terminal or a console. - - # unzip /tmp/vscode/VSCode-linux-x64.zip -d /opt/ - -Note: If we don't have unzip already installed, we'll need to install it via our Package Manager. If you're running Ubuntu, apt-get whereas if you're running Fedora, CentOS, dnf or yum can be used to install it. - -### 3. Running Visual Studio Code ### - -After we have extracted the package, we can directly launch the Visual Studio Code by executing a file named Code. - - # sudo chmod +x /opt/VSCode-linux-x64/Code - # sudo /opt/VSCode-linux-x64/Code - -If we want to launch Code and want to be available globally via terminal in any place, we'll need to create the link of /opt/vscode/Code as/usr/local/bin/code . - - # ln -s /opt/VSCode-linux-x64/Code /usr/local/bin/code - -Now, we can launch Visual Studio Code by running the following command in a terminal. - - # code . - -### 4. Creating a Desktop Launcher ### - -Next, after we have successfully extracted the Visual Studio Code package, we'll gonna create a desktop launcher so that it will be easily available in the launchers, menus, desktop, according to the desktop environment so that anyone can launch it from them. So, first we'll gonna copy the icon file to /usr/share/icons/ directory. - - # cp /opt/VSCode-linux-x64/resources/app/vso.png /usr/share/icons/ - -Then, we'll gonna create the desktop launcher having the extension as .desktop. 
Here, we'll create a file named visualstudiocode.desktop under the /tmp/vscode/ folder using our favorite text editor. - - # vi /tmp/vscode/visualstudiocode.desktop - -Then, we'll paste the following lines into that file. - - [Desktop Entry] - Name=Visual Studio Code - Comment=Multi-platform code editor for Linux - Exec=/opt/VSCode-linux-x64/Code - Icon=/usr/share/icons/vso.png - Type=Application - StartupNotify=true - Categories=TextEditor;Development;Utility; - MimeType=text/plain; - -After we're done creating the desktop file, we'll copy it to the /usr/share/applications/ directory so that it is available in launchers and menus for single-click use. - - # cp /tmp/vscode/visualstudiocode.desktop /usr/share/applications/ - -Once that's done, we can launch it by opening it from the launcher or menu. - -![Visual Studio Code](http://blog.linoxide.com/wp-content/uploads/2015/06/visual-studio-code.png) - -### Installing Visual Studio Code in Ubuntu ### - -We can use Ubuntu Make 0.7 to install Visual Studio Code on the Ubuntu 14.04/14.10/15.04 distributions of Linux. This method is the easiest way to set up Code on Ubuntu, as we just need to execute a few commands. First of all, we'll need to install Ubuntu Make 0.7 on our Ubuntu distribution. To install it, we'll need to add its PPA. This can be done by running the command below. - - # add-apt-repository ppa:ubuntu-desktop/ubuntu-make - - This ppa proposes package backport of Ubuntu make for supported releases. 
- More info: https://launchpad.net/~ubuntu-desktop/+archive/ubuntu/ubuntu-make - Press [ENTER] to continue or ctrl-c to cancel adding it - gpg: keyring `/tmp/tmpv0vf24us/secring.gpg' created - gpg: keyring `/tmp/tmpv0vf24us/pubring.gpg' created - gpg: requesting key A1231595 from hkp server keyserver.ubuntu.com - gpg: /tmp/tmpv0vf24us/trustdb.gpg: trustdb created - gpg: key A1231595: public key "Launchpad PPA for Ubuntu Desktop" imported - gpg: no ultimately trusted keys found - gpg: Total number processed: 1 - gpg: imported: 1 (RSA: 1) - OK - -Then, we'll update the local repository index and install ubuntu-make. - - # apt-get update - # apt-get install ubuntu-make - -After Ubuntu Make is installed on our Ubuntu operating system, we'll install Code by running the following command in a terminal. - - # umake web visual-studio-code - -![Umake Web Code](http://blog.linoxide.com/wp-content/uploads/2015/06/umake-web-code.png) - -After running the above command, we'll be asked to enter the path where we want to install it. Then, it will ask for permission to install Visual Studio Code on our Ubuntu system, and we'll press "a". Once we do that, it will download and install Code on our Ubuntu machine. Finally, we can launch it by opening it from the launcher or menu. - -### Conclusion ### - -We have successfully installed Visual Studio Code on a Linux distribution. Installing Visual Studio Code is the same on every Linux distribution, as shown in the steps above, and on Ubuntu distributions we can also use umake to install it. Umake is a popular tool for installing development tools, IDEs and languages; we can easily install Android Studio, Eclipse and many other popular IDEs with it. Visual Studio Code is based on a GitHub project called [Electron][2], which is a part of the [Atom.io][3] editor. It has a bunch of cool new and improved features that the Atom.io editor doesn't have. Visual Studio Code is currently only available for the 64-bit Linux platform. 
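As a final sanity check after a manual install, a short script can report which of the expected pieces are in place. This is only a sketch: the three paths it knows about are the ones used in this guide, so adjust them if you installed elsewhere.

```shell
#!/bin/sh
# Sketch: report which pieces of the manual VS Code install exist.
# The paths below are the ones used in this guide; adjust if yours differ.
check_path() {
    if [ -e "$1" ]; then
        printf 'ok:      %s\n' "$1"
    else
        printf 'missing: %s\n' "$1"
    fi
}

for p in /opt/VSCode-linux-x64/Code \
         /usr/local/bin/code \
         /usr/share/applications/visualstudiocode.desktop; do
    check_path "$p"
done
```

If all three paths report ok, both `code .` in a terminal and the desktop launcher should work.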
So, If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! Enjoy :-) - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/install-visual-studio-code-linux/ - -作者:[Arun Pyasi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:http://go.microsoft.com/fwlink/?LinkID=534108 -[2]:https://github.com/atom/electron -[3]:https://github.com/atom/atom \ No newline at end of file diff --git a/translated/tech/20150821 How to Install Visual Studio Code in Linux.md b/translated/tech/20150821 How to Install Visual Studio Code in Linux.md new file mode 100644 index 0000000000..48f68ade0b --- /dev/null +++ b/translated/tech/20150821 How to Install Visual Studio Code in Linux.md @@ -0,0 +1,127 @@ +如何在 Linux 中安装 Visual Studio Code +================================================================================ +大家好,今天我们一起来学习如何在 Linux 发行版中安装 Visual Studio Code。Visual Studio Code 是基于 Electron 优化代码后的编辑器,后者是基于 Chromium 的一款软件,用于为桌面系统发布 io.js 应用。Visual Studio Code 是微软开发的包括 Linux 在内的全平台代码编辑器和文本编辑器。它是免费软件但不开源,在专有软件许可条款下发布。它是我们日常使用的超级强大和快速的代码编辑器。Visual Studio Code 有很多很酷的功能,例如导航、智能感知支持、语法高亮、括号匹配、自动补全、片段、支持自定义键盘绑定、并且支持多种语言,例如 Python、C++、Jade、PHP、XML、Batch、F#、DockerFile、Coffee Script、Java、HandleBars、 R、 Objective-C、 PowerShell、 Luna、 Visual Basic、 .Net、 Asp.Net、 C#、 JSON、 Node.js、 Javascript、 HTML、 CSS、 Less、 Sass 和 Markdown。Visual Studio Code 集成了包管理器和库,并构建通用任务使得加速每日的工作流。Visual Studio Code 中最受欢迎的是它的调试功能,它包括流式支持 Node.js 的预览调试。 + +注意:请注意 Visual Studio Code 只支持 64 位 Linux 发行版。 + +下面是在所有 Linux 发行版中安装 Visual Studio Code 的几个简单步骤。 + +### 1. 
下载 Visual Studio Code 软件包 ### + +首先,我们要从微软服务器中下载 64 位 Linux 操作系统的 Visual Studio Code 安装包,链接是 [http://go.microsoft.com/fwlink/?LinkID=534108][1]。这里我们使用 wget 下载并保存到 tmp/VSCODE 目录。 + + # mkdir /tmp/vscode; cd /tmp/vscode/ + # wget https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip + + --2015-06-24 06:02:54-- https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip + Resolving az764295.vo.msecnd.net (az764295.vo.msecnd.net)... 93.184.215.200, 2606:2800:11f:179a:1972:2405:35b:459 + Connecting to az764295.vo.msecnd.net (az764295.vo.msecnd.net)|93.184.215.200|:443... connected. + HTTP request sent, awaiting response... 200 OK + Length: 64992671 (62M) [application/octet-stream] + Saving to: ‘VSCode-linux-x64.zip’ + 100%[================================================>] 64,992,671 14.9MB/s in 4.1s + 2015-06-24 06:02:58 (15.0 MB/s) - ‘VSCode-linux-x64.zip’ saved [64992671/64992671] + +### 2. 提取软件包 ### + +现在,下载好 Visual Studio Code 的 zip 压缩包之后,我们打算使用 unzip 命令解压它。我们要在终端或者控制台中运行以下命令。 + + # unzip /tmp/vscode/VSCode-linux-x64.zip -d /opt/ + +注意:如果我们还没有安装 unzip,我们首先需要通过软件包管理器安装它。如果你运行的是 Ubuntu,使用 apt-get,如果运行的是 Fedora、CentOS、可以用 dnf 或 yum 安装它。 + +### 3. 运行 Visual Studio Code ### + +提取软件包之后,我们可以直接运行一个名为 Code 的文件启动 Visual Studio Code。 + + # sudo chmod +x /opt/VSCode-linux-x64/Code + # sudo /opt/VSCode-linux-x64/Code + +如果我们想启动 Code 并通过终端能在任何地方打开,我们就需要创建 /opt/vscode/Code 的一个链接 /usr/local/bin/code。 + + # ln -s /opt/VSCode-linux-x64/Code /usr/local/bin/code + +现在,我们就可以在终端中运行以下命令启动 Visual Studio Code 了。 + + # code . + +### 4. 
创建桌面启动器 ### + +下一步,成功抽取 Visual Studio Code 软件包之后,我们打算创建桌面启动程序,使得根据不同桌面环境能够从启动器、菜单、桌面启动它。首先我们要复制一个图标文件到 /usr/share/icons/ 目录。 + + # cp /opt/VSCode-linux-x64/resources/app/vso.png /usr/share/icons/ + +然后,我们创建一个桌面启动程序,文件扩展名为 .desktop。这里我们在 /tmp/vscode/ 目录中使用喜欢的文本编辑器创建名为 visualstudiocode.desktop 的文件。 + + # vi /tmp/vscode/visualstudiocode.desktop + +然后,粘贴下面的行到那个文件中。 + + [Desktop Entry] + Name=Visual Studio Code + Comment=Multi-platform code editor for Linux + Exec=/opt/VSCode-linux-x64/Code + Icon=/usr/share/icons/vso.png + Type=Application + StartupNotify=true + Categories=TextEditor;Development;Utility; + MimeType=text/plain; + +创建完桌面文件之后,我们会复制这个桌面文件到 /usr/share/applications/ 目录,这样启动器和菜单中就可以单击启动 Visual Studio Code 了。 + + # cp /tmp/vscode/visualstudiocode.desktop /usr/share/applications/ + +完成之后,我们可以在启动器或者菜单中启动它。 + +![Visual Studio Code](http://blog.linoxide.com/wp-content/uploads/2015/06/visual-studio-code.png) + +### 在 Ubuntu 中安装 Visual Studio Code ### + +要在 Ubuntu 14.04/14.10/15.04 Linux 发行版中安装 Visual Studio Code,我们可以使用 Ubuntu Make 0.7。这是在 ubuntu 中安装 code 最简单的方法,因为我们只需要执行几个命令。首先,我们要在我们的 ubuntu linux 发行版中安装 Ubuntu Make 0.7。要安装它,首先要为它添加 PPA。可以通过运行下面命令完成。 + + # add-apt-repository ppa:ubuntu-desktop/ubuntu-make + + This ppa proposes package backport of Ubuntu make for supported releases. 
+ More info: https://launchpad.net/~ubuntu-desktop/+archive/ubuntu/ubuntu-make + Press [ENTER] to continue or ctrl-c to cancel adding it + gpg: keyring `/tmp/tmpv0vf24us/secring.gpg' created + gpg: keyring `/tmp/tmpv0vf24us/pubring.gpg' created + gpg: requesting key A1231595 from hkp server keyserver.ubuntu.com + gpg: /tmp/tmpv0vf24us/trustdb.gpg: trustdb created + gpg: key A1231595: public key "Launchpad PPA for Ubuntu Desktop" imported + gpg: no ultimately trusted keys found + gpg: Total number processed: 1 + gpg: imported: 1 (RSA: 1) + OK + +然后,更新本地库索引并安装 ubuntu-make。 + + # apt-get update + # apt-get install ubuntu-make + +在我们的 ubuntu 操作系统上安装完 Ubuntu Make 之后,我们打算在一个终端中运行以下命令安装 Code。 + + # umake web visual-studio-code + +![Umake Web Code](http://blog.linoxide.com/wp-content/uploads/2015/06/umake-web-code.png) + +运行完上面的命令之后,会要求我们输入想要的安装路径。然后,会请求我们允许在 ubuntu 系统中安装 Visual Studio Code。我们敲击 “a”。点击完后,它会在 ubuntu 机器上下载和安装 Code。最后,我们可以在启动器或者菜单中启动它。 + +### 总结 ### + +我们已经成功地在 Linux 发行版上安装了 Visual Studio Code。在所有 linux 发行版上安装 Visual Studio Code 都和上面介绍的相似,我们同样可以使用 umake 在 linux 发行版中安装。Umake 是一个安装开发工具,IDEs 和语言流行的工具。我们可以用 Umake 轻松地安装 Android Studios、Eclipse 和很多其它流行 IDE。Visual Studio Code 是基于 Github 上一个叫 [Electron][2] 的项目,它是 [Atom.io][3] 编辑器的一部分。它有很多 Atom.io 编辑器没有的改进功能。当前 Visual Studio Code 只支持 64 位 linux 操作系统平台。如果你有任何疑问、建议或者反馈,请在下面的评论框中留言以便我们改进和更新我们的内容。非常感谢!Enjoy :-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-visual-studio-code-linux/ + +作者:[Arun Pyasi][a] +译者:[ictlyh](https://github.com/ictlyh) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://go.microsoft.com/fwlink/?LinkID=534108 +[2]:https://github.com/atom/electron +[3]:https://github.com/atom/atom \ No newline at end of file From eeac34ccc2ad8e98ff8f31caa9679ebf513e326e Mon Sep 17 00:00:00 2001 From: 
wxy Date: Sun, 23 Aug 2015 22:09:26 +0800 Subject: [PATCH 264/697] =?UTF-8?q?PUB:20150818=20=E2=80=8BUbuntu=20Linux?= =?UTF-8?q?=20is=20coming=20to=20IBM=20mainframes?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ictlyh --- ...Ubuntu Linux is coming to IBM mainframes.md | 29 ++++++++++++------- 1 file changed, 18 insertions(+), 11 deletions(-) rename {translated/news => published}/20150818 ​Ubuntu Linux is coming to IBM mainframes.md (53%) diff --git a/translated/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md b/published/20150818 ​Ubuntu Linux is coming to IBM mainframes.md similarity index 53% rename from translated/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md rename to published/20150818 ​Ubuntu Linux is coming to IBM mainframes.md index d31f9c34a7..d3bacb3da6 100644 --- a/translated/news/20150818 ​Ubuntu Linux is coming to IBM mainframes.md +++ b/published/20150818 ​Ubuntu Linux is coming to IBM mainframes.md @@ -1,34 +1,41 @@ -IBM 大型机将搭载 Ubuntu Linux +Ubuntu Linux 来到 IBM 大型机 ================================================================================ -西雅图 -- 最终还是发生了。在 [LinuxCon][1] 上,IBM 和 [Canonical][2] 宣布 [Ubuntu Linux][3] 不久就会运行在 IBM 大型机上。 +最终来到了。在 [LinuxCon][1] 上,IBM 和 [Canonical][2] 宣布 [Ubuntu Linux][3] 不久就会运行在 IBM 大型机 [LinuxONE][1] 上,这是一种只支持 Linux 的大型机,现在也可以运行 Ubuntu 了。 + +这个 IBM 发布的最大的 LinuxONE 系统称作‘Emperor’,它可以扩展到 8000 台虚拟机或者上万台容器,这可能是单独一台 Linux 系统的记录。 + +LinuxONE 被 IBM 称作‘游戏改变者’,它‘释放了 Linux 的商业潜力’。 ![](http://zdnet2.cbsistatic.com/hub/i/2015/08/17/f389e12f-03f5-48cc-8019-af4ccf6c6ecd/f15b099e439c0e3a5fd823637d4bcf87/ubuntu-mainframe.jpg) -很快你就可以在你的 IBM 大型机上安装 Ubuntu Linux orange 啦 +*很快你就可以在你的 IBM 大型机上安装 Ubuntu Linux orange 啦* 根据 IBM z 系统的总经理 Ross Mauri 以及 Canonical 和 Ubuntu 的创立者 Mark Shuttleworth 所言,这是因为客户需要。十多年来,IBM 大型机只支持 [红帽企业版 Linux (RHEL)][4] 和 [SUSE Linux 企业版 (SLES)][5] Linux 发行版。 随着 Ubuntu 越来越成熟,更多的企业把它作为企业级 Linux,也有更多的人希望它能运行在 IBM 大型机上。尤其是银行希望如此。不久,金融 CIO 们就可以满足他们的需求啦。 
-在一次采访中 Shuttleworth 说 Ubuntu Linux 在 2016 年 4 月下一次长期支持版 Ubuntu 16.04 中就可以用到大型机上。2014 年底 Canonical 和 IBM 将 [Ubuntu 带到 IBM 的 POWER][6] 架构中就迈出了第一步。 +在一次采访中 Shuttleworth 说 Ubuntu Linux 在 2016 年 4 月下一次长期支持版 Ubuntu 16.04 中就可以用到大型机上。而在 2014 年底 Canonical 和 IBM 将 [Ubuntu 带到 IBM 的 POWER][6] 架构中就迈出了第一步。 -在那之前,Canonical 和 IBM 几乎签署协议 [在 2011 年实现 Ubuntu 支持 IBM 大型机][7],但最终也没有实现。这次,真的发生了。 +在那之前,Canonical 和 IBM 差点签署了协议 [在 2011 年实现 Ubuntu 支持 IBM 大型机][7],但最终也没有实现。这次,真的发生了。 -Canonical 的 CEO Jane Silber 解释说 “[扩大 Ubuntu 平台支持][8] 到 [IBM z 系统][9] 是因为认识到需要 z 系统运行其业务的客户数量以及混合云市场的成熟。” +Canonical 的 CEO Jane Silber 解释说 “[把 Ubuntu 平台支持扩大][8]到 [IBM z 系统][9] 是因为认识到需要 z 系统运行其业务的客户数量以及混合云市场的成熟。” **Silber 还说:** -> 由于 z 系统的支持,包括 [LinuxONE][10],Canonical 和 IBM 的关系进一步加深,构建了对 POWER 架构的支持和 OpenPOWER 生态系统。正如 Power 系统的客户受益于 Ubuntu 的可扩展能力,我们的敏捷开发过程也使得类似 POWER8 CAPI(Coherent Accelerator Processor Interface,一致性加速器接口)达到市场支持,z 系统的客户也可以期望技术进步能快速部署,并从 [Juju][11] 和我们的其它云工具中获益,使得能快速向端用户提供新服务。另外,我们和 IBM 的合作包括实现扩展部署很多 IBM 和 Juju 的软件解决方案。大型机客户对于能通过 Juju 将丰富‘迷人的’ IBM 解决方案、其它软件供应商的产品、开源解决方案部署到大型机上感到高兴。 +> 由于 z 系统的支持,包括 [LinuxONE][10],Canonical 和 IBM 的关系进一步加深,构建了对 POWER 架构的支持和 OpenPOWER 生态系统。正如 Power 系统的客户受益于 Ubuntu 的可扩展能力,我们的敏捷开发过程也使得类似 POWER8 CAPI (Coherent Accelerator Processor Interface,一致性加速器接口)得到了市场支持,z 系统的客户也可以期望技术进步能快速部署,并从 [Juju][11] 和我们的其它云工具中获益,使得能快速向端用户提供新服务。另外,我们和 IBM 的合作包括实现扩展部署很多 IBM 和 Juju 的软件解决方案。大型机客户对于能通过 Juju 将丰富‘迷人的’ IBM 解决方案、其它软件供应商的产品、开源解决方案部署到大型机上感到高兴。 Shuttleworth 期望 z 系统上的 Ubuntu 能取得巨大成功。它发展很快,由于对 OpenStack 的支持,希望有卓越云性能的人会感到非常高兴。 + -------------------------------------------------------------------------------- -via: http://www.zdnet.com/article/ubuntu-linux-is-coming-to-the-mainframe/#ftag=RSSbaffb68 +via: http://www.zdnet.com/article/ubuntu-linux-is-coming-to-the-mainframe/ -作者:[Steven J. Vaughan-Nichols][a] -译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership + +作者:[Steven J. 
Vaughan-Nichols][a],[Joey-Elijah Sneddon][a] +译者:[ictlyh](https://github.com/ictlyh),[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 8ac7cb03119286775fcf23de3bc511c03fb1078a Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 23 Aug 2015 22:11:48 +0800 Subject: [PATCH 265/697] =?UTF-8?q?=E6=9C=AA=E6=A0=A1=E5=AF=B9=EF=BC=8C?= =?UTF-8?q?=E6=91=98=E5=8F=96=E9=83=A8=E5=88=86=E5=86=85=E5=AE=B9=E5=90=88?= =?UTF-8?q?=E5=B9=B6=E5=88=B0=E7=9B=B8=E4=BC=BC=E6=96=87=E7=AB=A0?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @geekpi 这篇翻译质量不行啊,请翻译后通读一遍。此外,这篇和另外一篇内容相近,合并发布了。 --- ...150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/news => published}/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md (100%) diff --git a/translated/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md b/published/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md similarity index 100% rename from translated/news/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md rename to published/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md From 60a0d30df8bea3b4d6b16cdf2a5259f105090756 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 23 Aug 2015 22:52:41 +0800 Subject: [PATCH 266/697] PUB:20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @xiaoyu33 这篇有一些口语化的动向,如果你再细心些,可以翻译的更好。 --- ...urce RSS News Ticker for Linux Desktops.md | 35 +++++++++---------- 1 file changed, 17 insertions(+), 18 deletions(-) rename {translated/share => published}/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md (59%) diff --git a/translated/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux 
Desktops.md b/published/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md similarity index 59% rename from translated/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md rename to published/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md index d7bb0e425b..fec42d22fa 100644 --- a/translated/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md +++ b/published/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md @@ -1,24 +1,24 @@ -Trickr:一个开源的Linux桌面RSS新闻速递 +Tickr:一个开源的 Linux 桌面 RSS 新闻速递应用 ================================================================================ ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/rss-tickr.jpg) **最新的!最新的!阅读关于它的一切!** -好了,所以我们今天要强调的应用程序不是相当于旧报纸的二进制版本—而是它会以一个伟大的方式,将最新的新闻推送到你的桌面上。 +好了,我们今天要推荐的应用程序可不是旧式报纸的二进制版本——它会以一种漂亮的方式将最新的新闻推送到你的桌面上。 -Tick是一个基于GTK的Linux桌面新闻速递,能够在水平带滚动显示最新头条新闻,以及你最爱的RSS资讯文章标题,当然你可以放置在你桌面的任何地方。 +Tickr 是一个基于 GTK 的 Linux 桌面新闻速递应用,能够以横条方式滚动显示最新头条新闻以及你最爱的RSS资讯文章标题,当然你可以放置在你桌面的任何地方。 -请叫我Joey Calamezzo;我把我的放在底部,有电视新闻台的风格。 +请叫我 Joey Calamezzo;我把它放在底部,就像电视新闻台的滚动字幕一样。 (LCTT 译注: Joan Callamezzo 是 Pawnee Today 的主持人,一位 Pawnee 的本地新闻/脱口秀主持人。而本文作者是 Joey。) -“到你了,子标题” +“到你了,副标题”。 ### RSS -还记得吗? ### -“谢谢段落结尾。” +“谢谢,这段结束了。” -在一个推送通知,社交媒体,以及点击诱饵的时代,哄骗我们阅读最新的令人惊奇的,人人都爱读的清单,RSS看起来有一点过时了。 +在一个充斥着推送通知、社交媒体、标题党,以及哄骗人们点击的清单体的时代,RSS看起来有一点过时了。 -对我来说?恩,RSS是名副其实的真正简单的聚合。这是将消息通知给我的最简单,最易于管理的方式。我可以在我愿意的时候,管理和阅读一些东西;没必要匆忙的去看,以防这条微博消失在信息流中,或者推送通知消失。 +对我来说呢?恩,RSS是名副其实的真正简单的聚合(RSS : Really Simple Syndication)。这是将消息通知给我的最简单、最易于管理的方式。我可以在我愿意的时候,管理和阅读一些东西;没必要匆忙的去看,以防这条微博消失在信息流中,或者推送通知消失。 tickr的美在于它的实用性。你可以不断地有新闻滚动在屏幕的底部,然后不时地瞥一眼。 @@ -32,31 +32,30 @@ tickr的美在于它的实用性。你可以不断地有新闻滚动在屏幕的 尽管虽然tickr可以从Ubuntu软件中心安装,然而它已经很久没有更新了。当你打开笨拙的不直观的控制面板的时候,没有什么能够比这更让人感觉被遗弃的了。 -打开它: +要打开它: 1. 右键单击tickr条 1. 转至编辑>首选项 1. 
调整各种设置 -选项和设置行的后面,有些似乎是容易理解的。但是知己知彼你能够几乎掌控一切,包括: +选项和设置行的后面,有些似乎是容易理解的。但是详细了解这些你才能够掌握一切,包括: - 设置滚动速度 - 选择鼠标经过时的行为 - 资讯更新频率 - 字体,包括字体大小和颜色 -- 分隔符(“delineator”) +- 消息分隔符(“delineator”) - tickr在屏幕上的位置 - tickr条的颜色和不透明度 - 选择每种资讯显示多少文章 有个值得一提的“怪癖”是,当你点击“应用”按钮,只会更新tickr的屏幕预览。当您退出“首选项”窗口时,请单击“确定”。 -想要滚动条在你的显示屏上水平显示,也需要公平一点的调整,特别是统一显示。 +想要得到完美的显示效果, 你需要一点点调整,特别是在 Unity 上。 -按下“全宽按钮”,能够让应用程序自动检测你的屏幕宽度。默认情况下,当放置在顶部或底部时,会留下25像素的间距(应用程序被创建在过去的GNOME2.x桌面)。只需添加额外的25像素到输入框,来弥补这个问题。 +按下“全宽按钮”,能够让应用程序自动检测你的屏幕宽度。默认情况下,当放置在顶部或底部时,会留下25像素的间距(应用程序以前是在GNOME2.x桌面上创建的)。只需添加额外的25像素到输入框,来弥补这个问题。 -其他可供选择的选项包括:选择文章在哪个浏览器打开;tickr是否以一个常规的窗口出现; -是否显示一个时钟;以及应用程序多久检查一次文章资讯。 +其他可供选择的选项包括:选择文章在哪个浏览器打开;tickr是否以一个常规的窗口出现;是否显示一个时钟;以及应用程序多久检查一次文章资讯。 #### 添加资讯 #### @@ -76,9 +75,9 @@ tickr自带的有超过30种不同的资讯列表,从技术博客到主流新 ### 在Ubuntu 14.04 LTS或更高版本上安装Tickr ### -在Ubuntu 14.04 LTS或更高版本上安装Tickr +这就是 Tickr,它不会改变世界,但是它能让你知道世界上发生了什么。 -在Ubuntu 14.04 LTS或更高版本中安装,转到Ubuntu软件中心,但要点击下面的按钮。 +在Ubuntu 14.04 LTS或更高版本中安装,点击下面的按钮转到Ubuntu软件中心。 - [点击此处进入Ubuntu软件中心安装tickr][1] @@ -88,7 +87,7 @@ via: http://www.omgubuntu.co.uk/2015/06/tickr-open-source-desktop-rss-news-ticke 作者:[Joey-Elijah Sneddon][a] 译者:[xiaoyu33](https://github.com/xiaoyu33) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 2080842fadc7b00bfa207fc528aa808368c397b8 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 23 Aug 2015 23:13:01 +0800 Subject: [PATCH 267/697] PUB:20150813 How to get Public IP from Linux Terminal @KevinSJ --- ...ow to get Public IP from Linux Terminal.md | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) rename {translated/tech => published}/20150813 How to get Public IP from Linux Terminal.md (50%) diff --git a/translated/tech/20150813 How to get Public IP from Linux Terminal.md b/published/20150813 How to get Public IP from Linux Terminal.md similarity index 50% rename from translated/tech/20150813 How to get 
Public IP from Linux Terminal.md rename to published/20150813 How to get Public IP from Linux Terminal.md index 98c0ec7b31..c454db655c 100644 --- a/translated/tech/20150813 How to get Public IP from Linux Terminal.md +++ b/published/20150813 How to get Public IP from Linux Terminal.md @@ -1,9 +1,10 @@ -如何在 Linux 终端中获取公有 IP +如何在 Linux 终端中知道你的公有 IP ================================================================================ ![](http://www.blackmoreops.com/wp-content/uploads/2015/06/256x256xHow-to-get-Public-IP-from-Linux-Terminal-blackMORE-Ops.png.pagespeed.ic.GKEAEd4UNr.png) -公有地址由InterNIC分配并由基于类的网络 ID 或基于 CIDR 地址块构成(被称为 CIDR 块)并保证了在全球英特网中的唯一性。当公有地址被分配时,路径将会被记录到互联网中的路由器中,这样访问公有地址的流量就能顺利到达。访问目标公有地址的流量可通过互联网获取。比如,当一个一个 CIDR 块被以网络 ID 和子网掩码的形式分配给一个组织时,对应的 [网络 ID,子网掩码] 也会同时作为路径储存在英特网中的路由器中。访问 CIDR 块中的地址的 IP 封包会被导向对应的位置。在本文中我将会介绍在几种在 Linux 终端中查看你的公有 IP 地址的方法。这对普通用户来说并无意义,但 Linux 服务器(无GUI或者作为只能使用基本工具的用户登录时)会很有用。无论如何,从 Linux 终端中获取公有 IP 在各种方面都很意义,说不定某一天就能用得着。 +公有地址由 InterNIC 分配并由基于类的网络 ID 或基于 CIDR 的地址块构成(被称为 CIDR 块),并保证了在全球互联网中的唯一性。当公有地址被分配时,其路由将会被记录到互联网中的路由器中,这样访问公有地址的流量就能顺利到达。访问目标公有地址的流量可经由互联网抵达。比如,当一个 CIDR 块被以网络 ID 和子网掩码的形式分配给一个组织时,对应的 [网络 ID,子网掩码] 也会同时作为路由储存在互联网中的路由器中。目标是 CIDR 块中的地址的 IP 封包会被导向对应的位置。 +在本文中我将会介绍在几种在 Linux 终端中查看你的公有 IP 地址的方法。这对普通用户来说并无意义,但 Linux 服务器(无GUI或者作为只能使用基本工具的用户登录时)会很有用。无论如何,从 Linux 终端中获取公有 IP 在各种方面都很意义,说不定某一天就能用得着。 以下是我们主要使用的两个命令,curl 和 wget。你可以换着用。 @@ -21,17 +22,17 @@ curl ipinfo.io/json curl ifconfig.me/all.json - curl www.trackip.net/ip?json (bit ugly) + curl www.trackip.net/ip?json (有点丑陋) ### curl XML格式输出: ### curl ifconfig.me/all.xml -### curl 所有IP细节 ### +### curl 得到所有IP细节 (挖掘机)### curl ifconfig.me/all -### 使用 DYDNS (当你使用 DYDNS 服务时有用)Using DYNDNS (Useful when you’re using DYNDNS service) ### +### 使用 DYDNS (当你使用 DYDNS 服务时有用)### curl -s 'http://checkip.dyndns.org' | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g' curl -s http://checkip.dyndns.org/ | grep -o "[[:digit:].]\+" @@ -43,7 +44,7 @@ ### 使用 host 和 dig 命令 ### -在可用时,你可以直接使用 host 和 dig 
命令。 +如果有的话,你也可以直接使用 host 和 dig 命令。 host -t a dartsclink.com | sed 's/.*has address //' dig +short myip.opendns.com @resolver1.opendns.com @@ -55,15 +56,15 @@ PUBLIC_IP=`wget http://ipecho.net/plain -O - -q ; echo` echo $PUBLIC_IP -已经由不少选项了。 +简单易用。 -我实际上写了一个用于记录每日我的路由器中所有 IP 变化并保存到一个文件的脚本。我在搜索过程中找到了这些很好用的命令。希望某天它能帮到其他人。 +我实际上是在写一个用于记录每日我的路由器中所有 IP 变化并保存到一个文件的脚本。我在搜索过程中找到了这些很好用的命令。希望某天它能帮到其他人。 -------------------------------------------------------------------------------- via: http://www.blackmoreops.com/2015/06/14/how-to-get-public-ip-from-linux-terminal/ 译者:[KevinSJ](https://github.com/KevinSJ) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 5f42179fb3c8b136b74f29bb174e376fae1d5d1f Mon Sep 17 00:00:00 2001 From: Kevin Sicong Jiang Date: Sun, 23 Aug 2015 10:20:33 -0500 Subject: [PATCH 268/697] KevinSJ Translating --- .../20150821 Top 4 open source command-line email clients.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/share/20150821 Top 4 open source command-line email clients.md b/sources/share/20150821 Top 4 open source command-line email clients.md index df96173c18..afdcd8cf4a 100644 --- a/sources/share/20150821 Top 4 open source command-line email clients.md +++ b/sources/share/20150821 Top 4 open source command-line email clients.md @@ -1,3 +1,4 @@ +KevinSJ Translating Top 4 open source command-line email clients ================================================================================ ![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_mail.png) @@ -76,4 +77,4 @@ via: http://opensource.com/life/15/8/top-4-open-source-command-line-email-client [9]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html [10]:http://notmuchmail.org/ [11]:http://notmuchmail.org/releases/ -[12]:http://www.gnu.org/licenses/gpl.html \ No newline at end of file 
+[12]:http://www.gnu.org/licenses/gpl.html From e076870e0de5683c653bcbc0d79ea359e97275e4 Mon Sep 17 00:00:00 2001 From: joeren Date: Mon, 24 Aug 2015 09:34:38 +0800 Subject: [PATCH 269/697] Update 20150818 How to monitor stock quotes from the command line on Linux.md --- ...w to monitor stock quotes from the command line on Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md b/sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md index 662ac1eb84..48c4979f3e 100644 --- a/sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md +++ b/sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md @@ -1,3 +1,4 @@ +Translating by GOLinux! How to monitor stock quotes from the command line on Linux ================================================================================ If you are one of those stock investors or traders, monitoring the stock market will be one of your daily routines. Most likely you will be using an online trading platform which comes with some fancy real-time charts and all sort of advanced stock analysis and tracking tools. While such sophisticated market research tools are a must for any serious stock investors to read the market, monitoring the latest stock quotes still goes a long way to build a profitable portfolio. 
@@ -96,4 +97,4 @@ via: http://xmodulo.com/monitor-stock-quotes-command-line-linux.html [1]:https://github.com/michaeldv/mop [2]:http://ask.xmodulo.com/install-go-language-linux.html [3]:http://money.cnn.com/data/markets/ -[4]:http://finance.yahoo.com/ \ No newline at end of file +[4]:http://finance.yahoo.com/ From 7680623bb62923e10160f70655fba96023da0ab8 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 24 Aug 2015 10:08:48 +0800 Subject: [PATCH 270/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 全部翻译完 --- ...28 Process of the Linux kernel building.md | 27 ++++++++++++++++++- 1 file changed, 26 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index 00504e60fd..6f9368384c 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -698,6 +698,8 @@ The `$(obj)/compressed/vmlinux` target depends on the `vmlinux-objs-y` that comp Where the `vmlinux.bin` is the `vmlinux` with striped debuging information and comments and the `vmlinux.bin.bz2` compressed `vmlinux.bin.all` + `u32` size of `vmlinux.bin.all`. The `vmlinux.bin.all` is `vmlinux.bin + vmlinux.relocs`, where `vmlinux.relocs` is the `vmlinux` that was handled by the `relocs` program (see above). 
As we got these files, the `piggy.S` assembly files will be generated with the `mkpiggy` program and compiled: +`vmlinux.bin` 是去掉了调试信息和注释的`vmlinux` 二进制文件,加上了占用了`u32` (注:即4-Byte)的长度信息的`vmlinux.bin.all` 压缩后就是`vmlinux.bin.bz2`。其中`vmlinux.bin.all` 包含了`vmlinux.bin` 和`vmlinux.relocs`(注:vmlinux 的重定位信息),其中`vmlinux.relocs` 是`vmlinux` 经过程序`relocs` 处理之后的`vmlinux` 镜像(见上文所述)。我们现在已经获取到了这些文件,汇编文件`piggy.S` 将会被`mkpiggy` 生成、然后编译: + ```Makefile MKPIGGY arch/x86/boot/compressed/piggy.S AS arch/x86/boot/compressed/piggy.o @@ -705,12 +707,16 @@ Where the `vmlinux.bin` is the `vmlinux` with striped debuging information and c This assembly files will contain computed offset from a compressed kernel. After this we can see that `zoffset` generated: +这个汇编文件会包含经过计算得来的、压缩内核的偏移信息。处理完这个汇编文件,我们就可以看到`zoffset` 生成了: + ```Makefile ZOFFSET arch/x86/boot/zoffset.h ``` As the `zoffset.h` and the `voffset.h` are generated, compilation of the source code files from the [arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) can be continued: +现在`zoffset.h` 和`voffset.h` 已经生成了,[arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) 里的源文件可以继续编译: + ```Makefile AS arch/x86/boot/header.o CC arch/x86/boot/main.o @@ -731,36 +737,48 @@ As the `zoffset.h` and the `voffset.h` are generated, compilation of the source As all source code files will be compiled, they will be linked to the `setup.elf`: +所有的源代码会被编译,他们最终会被链接到`setup.elf` : + ```Makefile LD arch/x86/boot/setup.elf ``` or: +或者: + ``` ld -m elf_x86_64 -T arch/x86/boot/setup.ld arch/x86/boot/a20.o arch/x86/boot/bioscall.o arch/x86/boot/cmdline.o arch/x86/boot/copy.o arch/x86/boot/cpu.o arch/x86/boot/cpuflags.o arch/x86/boot/cpucheck.o arch/x86/boot/early_serial_console.o arch/x86/boot/edd.o arch/x86/boot/header.o arch/x86/boot/main.o arch/x86/boot/mca.o arch/x86/boot/memory.o arch/x86/boot/pm.o arch/x86/boot/pmjump.o arch/x86/boot/printf.o arch/x86/boot/regs.o arch/x86/boot/string.o arch/x86/boot/tty.o 
arch/x86/boot/video.o arch/x86/boot/video-mode.o arch/x86/boot/version.o arch/x86/boot/video-vga.o arch/x86/boot/video-vesa.o arch/x86/boot/video-bios.o -o arch/x86/boot/setup.elf ``` The last two things are the creation of the `setup.bin` that will contain compiled code from the `arch/x86/boot/*` directory: +最后两件事是创建包含目录`arch/x86/boot/*` 下的编译过的代码的`setup.bin`: + ``` objcopy -O binary arch/x86/boot/setup.elf arch/x86/boot/setup.bin ``` and the creation of the `vmlinux.bin` from the `vmlinux`: +以及从`vmlinux` 生成`vmlinux.bin` : + ``` objcopy -O binary -R .note -R .comment -S arch/x86/boot/compressed/vmlinux arch/x86/boot/vmlinux.bin ``` In the end we compile the host program [arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c) that will create our `bzImage` from the `setup.bin` and the `vmlinux.bin`: +最后,我们编译主机程序[arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c),它将会用来把`setup.bin` 和`vmlinux.bin` 打包成`bzImage`: + ``` arch/x86/boot/tools/build arch/x86/boot/setup.bin arch/x86/boot/vmlinux.bin arch/x86/boot/zoffset.h arch/x86/boot/bzImage ``` Actually the `bzImage` is the concatenated `setup.bin` and the `vmlinux.bin`. In the end we will see the output which is familiar to all who have ever built the Linux kernel from source: +实际上`bzImage` 就是把`setup.bin` 和`vmlinux.bin` 连接到一起。最终我们会看到输出结果,就和那些用源码编译过内核的同行的结果一样: 
Hope this part will help you to understand process of the Linux kernel building. +这就是本文的最后一节。本文我们了解了编译内核的全部步骤:从执行`make` 命令开始,到最后生成`bzImage`。我知道,linux 内核的makefiles 和构建linux 的过程第一眼看起来可能比较迷惑,但是这并不是很难。希望本文可以帮助你理解构建linux 内核的整个流程。 + + Links +链接 ================================================================================ * [GNU make util](https://en.wikipedia.org/wiki/Make_%28software%29) @@ -797,7 +822,7 @@ Links via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled.md -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/oska874) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 31094aa2e37bf5449c639b400433517c39b47508 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Mon, 24 Aug 2015 10:41:43 +0800 Subject: [PATCH 271/697] [Translated]20150818 How to monitor stock quotes from the command line on Linux.md --- ...k quotes from the command line on Linux.md | 100 ------------------ ...k quotes from the command line on Linux.md | 99 +++++++++++++++++ 2 files changed, 99 insertions(+), 100 deletions(-) delete mode 100644 sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md create mode 100644 translated/tech/20150818 How to monitor stock quotes from the command line on Linux.md diff --git a/sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md b/sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md deleted file mode 100644 index 48c4979f3e..0000000000 --- a/sources/tech/20150818 How to monitor stock quotes from the command line on Linux.md +++ /dev/null @@ -1,100 +0,0 @@ -Translating by GOLinux! -How to monitor stock quotes from the command line on Linux -================================================================================ -If you are one of those stock investors or traders, monitoring the stock market will be one of your daily routines. 
Most likely you will be using an online trading platform which comes with some fancy real-time charts and all sort of advanced stock analysis and tracking tools. While such sophisticated market research tools are a must for any serious stock investors to read the market, monitoring the latest stock quotes still goes a long way to build a profitable portfolio. - -If you are a full-time system admin constantly sitting in front of terminals while trading stocks as a hobby during the day, a simple command-line tool that shows real-time stock quotes will be a blessing for you. - -In this tutorial, let me introduce a neat command-line tool that allows you to monitor stock quotes from the command line on Linux. - -This tool is called [Mop][1]. Written in Go, this lightweight command-line tool is extremely handy for tracking the latest stock quotes from the U.S. markets. You can easily customize the list of stocks to monitor, and it shows the latest stock quotes in ncurses-based, easy-to-read interface. - -**Note**: Mop obtains the latest stock quotes via Yahoo! Finance API. Be aware that their stock quotes are known to be delayed by 15 minutes. So if you are looking for "real-time" stock quotes with zero delay, Mop is not a tool for you. Such "live" stock quote feeds are usually available for a fee via some proprietary closed-door interface. With that being said, let's see how you can use Mop under Linux environment. - -### Install Mop on Linux ### - -Since Mop is implemented in Go, you will need to install Go language first. If you don't have Go installed, follow [this guide][2] to install Go on your Linux platform. Make sure to set GOPATH environment variable as described in the guide. - -Once Go is installed, proceed to install Mop as follows. 
- -**Debian, Ubuntu or Linux Mint** - - $ sudo apt-get install git - $ go get github.com/michaeldv/mop - $ cd $GOPATH/src/github.com/michaeldv/mop - $ make install - -Fedora, CentOS, RHEL - - $ sudo yum install git - $ go get github.com/michaeldv/mop - $ cd $GOPATH/src/github.com/michaeldv/mop - $ make install - -The above commands will install Mop under $GOPATH/bin. - -Now edit your .bashrc to include $GOPATH/bin in your PATH variable. - - export PATH="$PATH:$GOPATH/bin" - ----------- - - $ source ~/.bashrc - -### Monitor Stock Quotes from the Command Line with Mop ### - -To launch Mod, simply run the command called cmd. - - $ cmd - -At the first launch, you will see a few stock tickers which Mop comes pre-configured with. - -![](https://farm6.staticflickr.com/5749/20018949104_c8c64e0e06_c.jpg) - -The quotes show information like the latest price, %change, daily low/high, 52-week low/high, dividend, and annual yield. Mop obtains market overview information from [CNN][3], and individual stock quotes from [Yahoo Finance][4]. The stock quote information updates itself within the terminal periodically. - -### Customize Stock Quotes in Mop ### - -Let's try customizing the stock list. Mop provides easy-to-remember shortcuts for this: '+' to add a new stock, and '-' to remove a stock. - -To add a new stock, press '+', and type a stock ticker symbol to add (e.g., MSFT). You can add more than one stock at once by typing a comma-separated list of tickers (e.g., "MSFT, AMZN, TSLA"). - -![](https://farm1.staticflickr.com/636/20648164441_642ae33a22_c.jpg) - -Removing stocks from the list can be done similarly by pressing '-'. - -### Sort Stock Quotes in Mop ### - -You can sort the stock quote list based on any column. To sort, press 'o', and use left/right key to choose the column to sort by. When a particular column is chosen, you can sort the list either in increasing order or in decreasing order by pressing ENTER. 
- -![](https://farm1.staticflickr.com/724/20648164481_15631eefcf_c.jpg) - -By pressing 'g', you can group your stocks based on whether they are advancing or declining for the day. Advancing issues are represented in green color, while declining issues are colored in white. - -![](https://c2.staticflickr.com/6/5633/20615252696_a5bd44d3aa_b.jpg) - -If you want to access help page, simply press '?'. - -![](https://farm1.staticflickr.com/573/20632365342_da196b657f_c.jpg) - -### Conclusion ### - -As you can see, Mop is a lightweight, yet extremely handy stock monitoring tool. Of course you can easily access stock quotes information elsewhere, from online websites, your smartphone, etc. However, if you spend a great deal of your time in a terminal environment, Mop can easily fit in to your workspace, hopefully without distracting must of your workflow. Just let it run and continuously update market date in one of your terminals, and be done with it. - -Happy trading! - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/monitor-stock-quotes-command-line-linux.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:https://github.com/michaeldv/mop -[2]:http://ask.xmodulo.com/install-go-language-linux.html -[3]:http://money.cnn.com/data/markets/ -[4]:http://finance.yahoo.com/ diff --git a/translated/tech/20150818 How to monitor stock quotes from the command line on Linux.md b/translated/tech/20150818 How to monitor stock quotes from the command line on Linux.md new file mode 100644 index 0000000000..c2a9e5d576 --- /dev/null +++ b/translated/tech/20150818 How to monitor stock quotes from the command line on Linux.md @@ -0,0 +1,99 @@ +Linux中通过命令行监控股票报价 +================================================================================ 
+如果你是那些股票投资者或者交易者中的一员,那么监控证券市场将成为你日常工作中的其中一项任务。最有可能是你会使用一个在线交易平台,这个平台有着一些漂亮的实时图表和全部种类的高级股票分析和交易工具。虽然这种复杂的市场研究工具是任何严肃的证券投资者阅读市场的必备,但是监控最新的股票报价来构建有利可图的投资组合仍然有很长一段路要走。 + +如果你是一位长久坐在终端前的全职系统管理员,而证券交易又成了你日常生活中的业余兴趣,那么一个简单地显示实时股票报价的命令行工具会是你的恩赐。 + +在本教程中,让我来介绍一个灵巧而简洁的命令行工具,它可以让你在Linux上从命令行监控股票报价。 + +这个工具叫做[Mop][1]。它是用Go编写的一个轻量级命令行工具,可以极其方便地跟踪来自美国市场的最新股票报价。你可以很轻松地自定义要监控的证券列表,它会在一个基于ncurses的便于阅读的界面显示最新的股票报价。 + +**注意**:Mop是通过雅虎金融API获取最新的股票报价的。你必须意识到,他们的股票报价已知会有15分钟的延时。所以,如果你正在寻找0延时的“实时”股票报价,那么Mop就不是你的菜了。这种“实时”股票报价订阅通常需要通过一些不公开的私有接口付费获取。话虽如此,让我们来看看怎样在Linux环境下使用Mop吧。 + +### 安装 Mop 到 Linux ### + +由于Mop是用Go实现的,你首先需要安装Go语言。如果你还没有安装Go,请参照[此指南][2]将Go安装到你的Linux平台中。请确保按指南中所讲的设置GOPATH环境变量。 + +安装完Go后,继续像下面这样安装Mop。 + +**Debian,Ubuntu 或 Linux Mint** + + $ sudo apt-get install git + $ go get github.com/michaeldv/mop + $ cd $GOPATH/src/github.com/michaeldv/mop + $ make install + +**Fedora,CentOS,RHEL** + + $ sudo yum install git + $ go get github.com/michaeldv/mop + $ cd $GOPATH/src/github.com/michaeldv/mop + $ make install + +上述命令将安装Mop到$GOPATH/bin。 + +现在,编辑你的.bashrc,将$GOPATH/bin写到你的PATH变量中。 + + export PATH="$PATH:$GOPATH/bin" + +---------- + + $ source ~/.bashrc + +### 使用Mop来通过命令行监控股票报价 ### + +要启动Mop,只需运行名为cmd的命令。 + + $ cmd + +首次启动时,你将看到一些Mop预先配置好的股票代码。 + +![](https://farm6.staticflickr.com/5749/20018949104_c8c64e0e06_c.jpg) + +报价显示了像最新价格、涨跌百分比、每日低/高、52周低/高、股利以及年收益率等信息。Mop从[CNN][3]获取市场总览信息,从[雅虎金融][4]获得个股报价,股票报价信息会在终端内周期性自动更新。 + +### 自定义Mop中的股票报价 ### + +让我们来试试自定义证券列表吧。对此,Mop提供了易于记忆的快捷键:‘+’用于添加一只新股,而‘-’则用于移除一只股票。 + +要添加新股,请按‘+’,然后输入股票代码来添加(如MSFT)。你可以通过输入一个由逗号分隔的交易代码列表来一次添加多个股票(如”MSFT, AMZN, TSLA”)。 + +![](https://farm1.staticflickr.com/636/20648164441_642ae33a22_c.jpg) + +从列表中移除股票可以类似地按‘-’来完成。 + +### 对Mop中的股票报价排序 ### + +你可以基于任何栏目对股票报价列表进行排序。要排序,请按‘o’,然后使用左/右键来选择排序的基准栏目。当选定了一个特定栏目后,你可以按回车来对列表进行升序排序,或者降序排序。 + +![](https://farm1.staticflickr.com/724/20648164481_15631eefcf_c.jpg) + +通过按‘g’,你可以根据股票当日的涨或跌来分组。涨的情况以绿色表示,跌的情况以白色表示。 +
+![](https://c2.staticflickr.com/6/5633/20615252696_a5bd44d3aa_b.jpg) + +如果你想要访问帮助页,只需要按‘?’。 + +![](https://farm1.staticflickr.com/573/20632365342_da196b657f_c.jpg) + +### 尾声 ### + +正如你所见,Mop是一个轻量级的,然而极其方便的证券监控工具。当然,你可以很轻松地从其它地方,如在线站点、你的智能手机等等访问到股票报价信息。然而,如果你在终端环境中花费大量时间,Mop可以很容易地融入你的工作空间,希望不会过多地干扰你的工作流程。只要让它在你其中一个终端中运行并保持市场数据持续更新,就让它在那干着吧。 + +交易快乐! + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/monitor-stock-quotes-command-line-linux.html + +作者:[Dan Nanni][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:https://github.com/michaeldv/mop +[2]:http://ask.xmodulo.com/install-go-language-linux.html +[3]:http://money.cnn.com/data/markets/ +[4]:http://finance.yahoo.com/ From ec95766dae136d277438da8d53d59186df375fa4 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Mon, 24 Aug 2015 12:36:06 +0800 Subject: [PATCH 272/697] [translated by bazz2]Docker Working on Security Components Live Container Migration --- ...ity Components Live Container Migration.md | 54 ------------------- ...ity Components Live Container Migration.md | 53 ++++++++++++++++++ 2 files changed, 53 insertions(+), 54 deletions(-) delete mode 100644 sources/talk/20150818 Docker Working on Security Components Live Container Migration.md create mode 100644 translated/talk/20150818 Docker Working on Security Components Live Container Migration.md diff --git a/sources/talk/20150818 Docker Working on Security Components Live Container Migration.md b/sources/talk/20150818 Docker Working on Security Components Live Container Migration.md deleted file mode 100644 index 356c6f943c..0000000000 --- a/sources/talk/20150818 Docker Working on Security Components Live Container Migration.md +++ /dev/null @@ -1,54 +0,0 @@ -[bazz2 translating] -Docker Working on Security Components, Live
Container Migration -================================================================================ -![Docker Container Talk](http://www.eweek.com/imagesvr_ce/1905/290x195DockerMarianna.jpg) - -**Docker developers take the stage at Containercon and discuss their work on future container innovations for security and live migration.** - -SEATTLE—Containers are one of the hottest topics in IT today and at the Linuxcon USA event here there is a co-located event called Containercon, dedicated to this virtualization technology. - -Docker, the lead commercial sponsor of the open-source Docker effort brought three of its top people to the keynote stage today, but not Docker founder Solomon Hykes. - -Hykes who delivered a Linuxcon keynote in 2014 was in the audience though, as Senior Vice President of Engineering Marianna Tessel, Docker security chief Diogo Monica and Docker chief maintainer Michael Crosby presented what's new and what's coming in Docker. - -Tessel emphasized that Docker is very real today and used in production environments at some of the largest organizations on the planet, including the U.S. Government. Docker also is working in small environments too, including the Raspberry Pi small form factor ARM computer, which now can support up to 2,300 containers on a single device. - -"We're getting more powerful and at the same time Docker will also get simpler to use," Tessel said. - -As a metaphor, Tessel said that the whole Docker experience is much like a cruise ship, where there is powerful and complex machinery that powers the ship, yet the experience for passengers is all smooth sailing. - -One area that Docker is trying to make easier is security. Tessel said that security is mind-numbingly complex for most people as organizations constantly try to avoid network breaches. - -That's where Docker Content Trust comes into play, which is a configurable feature in the recent Docker 1.8 release. 
Diogo Mónica, security lead for Docker joined Tessel on stage and said that security is a hard topic, which is why Docker content trust is being developed. - -With Docker Content Trust there is a verifiable way to make sure that a given Docker application image is authentic. There also are controls to limit fraud and potential malicious code injection by verifying application freshness. - -To prove his point, Monica did a live demonstration of what could happen if Content Trust is not enabled. In one instance, a Website update is manipulated to allow the demo Web app to be defaced. When Content Trust is enabled, the hack didn't work and was blocked. - -"Don't let the simple demo fool you," Tessel said. "You have seen the best security possible." - -One area where containers haven't been put to use before is for live migration, which on VMware virtual machines is a technology called vMotion. It's an area that Docker is currently working on. - -Docker chief maintainer Michael Crosby did an onstage demonstration of a live migration of Docker containers. Crosby referred to the approach as checkpoint and restore, where a running container gets a checkpoint snapshot and is then restored to another location. - -A container also can be cloned and then run in another location. Crosby humorously referred to his cloned container as "Dolly," a reference to the world's first cloned animal, Dolly the sheep. - -Tessel also took time to talk about the RunC component of containers, which is now a technology component that is being developed by the Open Containers Initiative as a multi-stakeholder process. With RunC, containers expand beyond Linux to multiple operating systems including Windows and Solaris. - -Overall, Tessel said that she can't predict the future of Docker, though she is very optimistic. - -"I'm not sure what the future is, but I'm sure it'll be out of this world," Tessel said. - -Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. 
Follow him on Twitter @TechJournalist. - -------------------------------------------------------------------------------- - -via: http://www.eweek.com/virtualization/docker-working-on-security-components-live-container-migration.html - -作者:[Sean Michael Kerner][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ diff --git a/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md b/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md new file mode 100644 index 0000000000..bd3f0451c7 --- /dev/null +++ b/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md @@ -0,0 +1,53 @@ +Docker Working on Security Components, Live Container Migration +================================================================================ +![Docker Container Talk](http://www.eweek.com/imagesvr_ce/1905/290x195DockerMarianna.jpg) + +**Docker 开发者在 Containercon 上的演讲,谈论将来的容器在安全和实时迁移方面的创新** + +来自西雅图的消息。当前 IT 界最热的词汇是“容器”,美国有两大研讨会:Linuxcon USA 和 Containercon,后者就是为容器而生的。 + +Docker 公司是开源 Docker 项目的商业赞助商,本次研讨会这家公司有 3 位高管带来主题演讲,但公司创始人 Solomon Hykes 没上场演讲。 + +Hykes 曾在 2014 年的 Linuxcon 上进行过一次主题演讲,但今年的 Containercon 他只坐在观众席上。而工程部高级副总裁 Marianna Tessel、Docker 首席安全员 Diogo Monica 和核心维护员 Michael Crosby 为我们讲解 Docker 新增的功能和将来会有的功能。 + +Tessel 强调 Docker 现在已经被很多世界上最大的组织用在生产环境中,包括美国政府。Docker 也被用在小环境中,比如树莓派,一块树莓派上可以跑 2300 个容器。 + +“Docker 的功能正在变得越来越强大,而部署方法变得越来越简单。”Tessel 在会上说道。 + +Tessel 把 Docker 形容成一艘游轮,内部由强大而复杂的机器驱动,外部为乘客提供平稳航行的体验。 + +Docker 试图解决的领域是简化安全配置。Tessel 认为对于大多数用户和组织来说,避免网络漏洞所涉及的安全问题是一个乏味而且复杂的过程。 + +于是 Docker Content Trust 就出现在 Docker 1.8 release 版本中了。安全项目领导 Diogo Mónica 加入 Tessel 上台讨论,说安全是一个难题,而 Docker Content Trust 就是为解决这个难题而存在的。 + +Docker Content Trust 提供一种方法来验证一个 Docker 应用是否可信,以及多种方法来限制欺骗和恶意代码注入。 + +为了证明他的观点,Monica
做了个现场示范,演示 Content Trust 的效果。在一个实验中,一个网站在更新过程中其 Web App 被人为攻破,而当 Content Trust 启动后,这个黑客行为再也无法得逞。 + +“不要被这个表面上简单的演示欺骗了,”Tessel 说道,“你们看到的是最安全的可行方案。” + +Docker 以前没有实现的领域是实时迁移,这个技术在 VMware 虚拟机中叫做 vMotion,而现在,Docker 也实现了这个功能。 + +Docker 首席维护员 Michael Crosby 在台上做了个实时迁移的演示,Crosby 把这个过程称为快照和恢复:首先从运行中的容器拿到一个快照,之后将这个快照移到另一个地方恢复。 + +一个容器也可以克隆到另一个地方,Crosby 将他的克隆容器称为“多利”,就是世界上第一只被克隆出来的羊的名字。 + +Tessel 也花了点时间聊了下 RunC 组件,这是一个由 Open Container Initiative 以多方协作方式开发的项目,目的是让容器兼容 Linux、Windows 和 Solaris。 + +Tessel 总结说她不知道 Docker 的未来是什么样,但对此抱有非常乐观的态度。 + +“我不确定未来是什么样的,但我很确定 Docker 会在这个世界中脱颖而出”,Tessel 说道。 + +Sean Michael Kerner 是 eWEEK 和 InternetNews.com 网站的高级编辑,可通过推特 @TechJournalist 关注他。 + +-------------------------------------------------------------------------------- + +via: http://www.eweek.com/virtualization/docker-working-on-security-components-live-container-migration.html + +作者:[Sean Michael Kerner][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ From ac233258c71eb2db6d53f037eed36ef9ef67789c Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 24 Aug 2015 14:18:37 +0800 Subject: =?UTF-8?q?20150824-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Open Source Collaborative Editing Tools.md | 228 ++++++++++++++++++ ...out to gain a new file system--bcachefs.md | 25 ++ ...
NetworkManager Command Line Tool Nmcli.md | 153 ++++++++++++ ...u 15.04 to connect to Android or iPhone.md | 74 ++++++ 4 files changed, 480 insertions(+) create mode 100644 sources/share/20150824 Great Open Source Collaborative Editing Tools.md create mode 100644 sources/talk/20150824 Linux about to gain a new file system--bcachefs.md create mode 100644 sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md create mode 100644 sources/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md diff --git a/sources/share/20150824 Great Open Source Collaborative Editing Tools.md b/sources/share/20150824 Great Open Source Collaborative Editing Tools.md new file mode 100644 index 0000000000..8f3ab16110 --- /dev/null +++ b/sources/share/20150824 Great Open Source Collaborative Editing Tools.md @@ -0,0 +1,228 @@ +Great Open Source Collaborative Editing Tools +================================================================================ +In a nutshell, collaborative writing is writing done by more than one person. There are benefits and risks of collaborative working. Some of the benefits include a more integrated / co-ordinated approach, better use of existing resources, and a stronger, united voice. For me, the greatest advantage is one of the most transparent. That's when I need to take colleagues' views. Sending files back and forth between colleagues is inefficient, causes unnecessary delays and leaves people (i.e. me) unhappy with the whole notion of collaboration. With good collaborative software, I can share notes, data and files, and use comments to share thoughts in real-time or asynchronously. Working together on documents, images, video, presentations, and tasks is made less of a chore. + +There are many ways to collaborate online, and it has never been easier. This article highlights my favourite open source tools to collaborate on documents in real time. 
+ +Google Docs is an excellent productivity application with most of the features I need. It serves as a collaborative tool for editing documents in real time. Documents can be shared, opened, and edited by multiple users simultaneously and users can see character-by-character changes as other collaborators make edits. While Google Docs is free for individuals, it is not open source. + +Here is my take on the finest open source collaborative editors which help you focus on writing without interruption, yet work mutually with others. + +---------- + +### Hackpad ### + +![Hackpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Hackpad.png) + +Hackpad is an open source web-based realtime wiki, based on the open source EtherPad collaborative document editor. + +Hackpad allows users to share your docs realtime and it uses color coding to show which authors have contributed to which content. It also allows in line photos, checklists and can also be used for coding as it offers syntax highlighting. + +While Dropbox acquired Hackpad in April 2014, it is only this month that the software has been released under an open source license. It has been worth the wait. 
+ +Features include: + +- Very rich set of functions, similar to those offered by wikis +- Take collaborative notes, share data and files, and use comments to share your thoughts in real-time or asynchronously +- Granular privacy permissions enable you to invite a single friend, a dozen teammates, or thousands of Twitter followers +- Intelligent execution +- Directly embed videos from popular video sharing sites +- Tables +- Syntax highlighting for most common programming languages including C, C#, CSS, CoffeeScript, Java, and HTML + +- Website: [hackpad.com][1] +- Source code: [github.com/dropbox/hackpad][2] +- Developer: [Contributors][3] +- License: Apache License, Version 2.0 +- Version Number: - + +---------- + +### Etherpad ### + +![Etherpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Etherpad.png) + +Etherpad is an open source web-based collaborative real-time editor, allowing authors to simultaneously edit a text document leave comments, and interact with others using an integrated chat. + +Etherpad is implemented in JavaScript, on top of the AppJet platform, with the real-time functionality achieved using Comet streaming. + +Features include: + +- Well designed spartan interface +- Simple text formatting features +- "Time slider" - explore the history of a pad +- Download documents in plain text, PDF, Microsoft Word, Open Document, and HTML +- Auto-saves the document at regular, short intervals +- Highly customizable +- Client side plugins extend the editor functionality +- Hundreds of plugins extend Etherpad including support for email notifications, pad management, authentication +- Accessibility enabled +- Interact with Pad contents in real time from within Node and from your CLI + +- Website: [etherpad.org][4] +- Source code: [github.com/ether/etherpad-lite][5] +- Developer: David Greenspan, Aaron Iba, J.D. 
Zamfiresc, Daniel Clemens, David Cole +- License: Apache License Version 2.0 +- Version Number: 1.5.7 + +---------- + +### Firepad ### + +![Firepad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Firepad.png) + +Firepad is an open source, collaborative text editor. It is designed to be embedded inside larger web applications with collaborative code editing added in only a few days. + +Firepad is a full-featured text editor, with capabilities like conflict resolution, cursor synchronization, user attribution, and user presence detection. It uses Firebase as a backend, and doesn't need any server-side code. It can be added to any web app. Firepad can use either the CodeMirror editor or the Ace editor to render documents, and its operational transform code borrows from ot.js. + +If you want to extend your web application capabilities by adding the simple document and code editor, Firepad is perfect. + +Firepad is used by several editors, including the Atlassian Stash Realtime Editor, Nitrous.IO, LiveMinutes, and Koding. + +Features include: + +- True collaborative editing +- Intelligent OT-based merging and conflict resolution +- Support for both rich text and code editing +- Cursor position synchronization +- Undo / redo +- Text highlighting +- User attribution +- Presence detection +- Version checkpointing +- Images +- Extend Firepad through its API +- Supports all modern browsers: Chrome, Safari, Opera 11+, IE8+, Firefox 3.6+ + +- Website: [www.firepad.io][6] +- Source code: [github.com/firebase/firepad][7] +- Developer: Michael Lehenbauer and the team at Firebase +- License: MIT +- Version Number: 1.1.1 + +---------- + +### OwnCloud Documents ### + +![ownCloud Documents in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-ownCloud.png) + +ownCloud Documents is an ownCloud app to work with office documents alone and/or collaboratively. 
It allows up to 5 individuals to collaborate editing .odt and .doc files in a web browser. + +ownCloud is a self-hosted file sync and share server. It provides access to your data through a web interface, sync clients or WebDAV while providing a platform to view, sync and share across devices easily. + +Features include: + +- Cooperative edit, with multiple users editing files simultaneously +- Document creation within ownCloud +- Document upload +- Share and edit files in the browser, and then share them inside ownCloud or through a public link +- ownCloud features like versioning, local syncing, encryption, undelete +- Seamless support for Microsoft Word documents by way of transparent conversion of file formats + +- Website: [owncloud.org][8] +- Source code: [github.com/owncloud/documents][9] +- Developer: OwnCloud Inc. +- License: AGPLv3 +- Version Number: 8.1.1 + +---------- + +### Gobby ### + +![Gobby in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Gobby.png) + +Gobby is a collaborative editor supporting multiple documents in one session and a multi-user chat. All users could work on the file simultaneously without the need to lock it. The parts the various users write are highlighted in different colours and it supports syntax highlighting of various programming and markup languages. + +Gobby allows multiple users to edit the same document together over the internet in real-time. It integrates well with the GNOME environment. It features a client-server architecture which supports multiple documents in one session, document synchronisation on request, password protection and an IRC-like chat for communication out of band. Users can choose a colour to highlight the text they have written in a document. + +A dedicated server called infinoted is also provided. 
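+
+For readers who want to try the dedicated server, a minimal sketch follows. The binary may be versioned depending on packaging (e.g. infinoted-0.5), and the exact flags can differ between releases, so treat the options below as assumptions to verify against `infinoted --help`; the key and certificate paths are arbitrary example names.
+
+    # generate a self-signed key/certificate pair, then start the server with them
+    infinoted --create-key --create-certificate -k key.pem -c cert.pem
+
+Gobby clients can then connect to the host on infinoted's default port.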
+ +Features include: + +- Full-fledged text editing capabilities including syntax highlighting using GtkSourceView +- Real-time, lock-free collaborative text editing through encrypted connections (including PFS) +- Integrated group chat +- Local group undo: Undo does not affect changes of remote users +- Shows cursors and selections of remote users +- Highlights text written by different users with different colors +- Syntax highlighting for most programming languages, auto indentation, configurable tab width +- Zeroconf support +- Encrypted data transfer including perfect forward secrecy (PFS) +- Sessions can be password-protected +- Sophisticated access control with Access Control Lists (ACLs) +- Highly configurable dedicated server +- Automatic saving of documents +- Advanced search and replace options +- Internationalisation +- Full Unicode support + +- Website: [gobby.github.io][10] +- Source code: [github.com/gobby][11] +- Developer: Armin Burgmeier, Philipp Kern and contributors +- License: GNU GPLv2+ and ISC +- Version Number: 0.5.0 + +---------- + +### OnlyOffice ### + +![OnlyOffice in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-OnlyOffice.png) + +ONLYOFFICE (formerly known as Teamlab Office) is a multifunctional cloud online office suite integrated with CRM system, document and project management toolset, Gantt chart and email aggregator. + +It allows you to organize business tasks and milestones, store and share your corporate or personal documents, use social networking tools such as blogs and forums, as well as communicate with your team members via corporate IM. + +Manage documents, projects, team and customer relations in one place. OnlyOffice combines text, spreadsheet and presentation editors that include features similar to Microsoft desktop editors (Word, Excel and PowerPoint), but then allow to co-edit, comment and chat in real time. 
+ +OnlyOffice is written in ASP.NET, based on HTML5 Canvas element, and translated to 21 languages. + +Features include: + +- As powerful as a desktop editor when working with large documents, paging and zooming +- Document sharing in view / edit modes +- Document embedding +- Spreadsheet and presentation editors +- Co-editing +- Commenting +- Integrated chat +- Mobile applications +- Gantt charts +- Time management +- Access right management +- Invoicing system +- Calendar +- Integration with file storage systems: Google Drive, Box, OneDrive, Dropbox, OwnCloud +- Integration with CRM, email aggregator and project management module +- Mail server +- Mail aggregator +- Edit documents, spreadsheets and presentations of the most popular formats: DOC, DOCX, ODT, RTF, TXT, XLS, XLSX, ODS, CSV, PPTX, PPT, ODP + +- Website: [www.onlyoffice.com][12] +- Source code: [github.com/ONLYOFFICE/DocumentServer][13] +- Developer: Ascensio System SIA +- License: GNU GPL v3 +- Version Number: 7.7 + +-------------------------------------------------------------------------------- + +via: http://www.linuxlinks.com/article/20150823085112605/CollaborativeEditing.html + +作者:Frazer Kline +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://hackpad.com/ +[2]:https://github.com/dropbox/hackpad +[3]:https://github.com/dropbox/hackpad/blob/master/CONTRIBUTORS +[4]:http://etherpad.org/ +[5]:https://github.com/ether/etherpad-lite +[6]:http://www.firepad.io/ +[7]:https://github.com/firebase/firepad +[8]:https://owncloud.org/ +[9]:http://github.com/owncloud/documents/ +[10]:https://gobby.github.io/ +[11]:https://github.com/gobby +[12]:https://www.onlyoffice.com/free-edition.aspx +[13]:https://github.com/ONLYOFFICE/DocumentServer \ No newline at end of file diff --git a/sources/talk/20150824 Linux about to gain a new file system--bcachefs.md b/sources/talk/20150824 
Linux about to gain a new file system--bcachefs.md new file mode 100644 index 0000000000..df3cd14682 --- /dev/null +++ b/sources/talk/20150824 Linux about to gain a new file system--bcachefs.md @@ -0,0 +1,25 @@ +Linux about to gain a new file system – bcachefs +================================================================================ +A five year old file system built by Kent Overstreet, formerly of Google, is near feature complete with all critical components in place. Bcachefs boasts the performance and reliability of the widespread ext4 and xfs as well as the feature list similar to that of btrfs and zfs. Notable features include checksumming, compression, multiple devices, caching and eventually snapshots and other “nifty” features. + +Bcachefs started out as **bcache** which was a block caching layer, the evolution from bcache to a fully featured [copy-on-write][1] file system has been described as a metamorphosis. + +Responding to the self-imposed question “Yet another new filesystem? Why?” Kent Overstreet replies with the following “Well, years ago (going back to when I was still at Google), I and the other people working on bcache realized that what we were working on was, almost by accident, a good chunk of the functionality of a full blown filesystem – and there was a really clean and elegant design to be had there if we took it and ran with it. And a fast one – the main goal of bcachefs to match ext4 and xfs on performance and reliability, but with the features of btrfs/xfs.” + +Overstreet has invited people to use and test bcachefs out on their own systems. To find instructions to use bcachefs on your system check out the mailing list [announcement][2]. + +The file system situation on Linux is a fairly drawn out one, Fedora 16 for instance aimed to use btrfs instead of ext4 as the default file system, this switch still has not happened. 
Currently all of the Debian based distros, including Ubuntu, Mint and elementary OS, still use ext4 as their default file system and none have even whispered about switching to a new default file system yet. + +-------------------------------------------------------------------------------- + +via: http://www.linuxveda.com/2015/08/22/linux-gain-new-file-system-bcachefs/ + +作者:[Paul Hill][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxveda.com/author/paul_hill/ +[1]:https://en.wikipedia.org/wiki/Copy-on-write +[2]:https://lkml.org/lkml/2015/8/21/22 \ No newline at end of file diff --git a/sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md b/sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md new file mode 100644 index 0000000000..577411f58a --- /dev/null +++ b/sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md @@ -0,0 +1,153 @@ +Basics Of NetworkManager Command Line Tool, Nmcli +================================================================================ +![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/08/networking1.jpg) + +### Introduction ### + +In this tutorial, we will discuss the NetworkManager command line tool, aka **nmcli**, on CentOS / RHEL 7. Users who are used to **ifconfig** should avoid that command in CentOS 7. + +Let's configure some networking settings with the nmcli utility.
+ +#### To get all address information of all interfaces connected with System #### + + [root@localhost ~]# ip addr show + +**Sample Output:** + + 1: lo: mtu 65536 qdisc noqueue state UNKNOWN + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host + valid_lft forever preferred_lft forever + 2: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000 + link/ether 00:0c:29:67:2f:4c brd ff:ff:ff:ff:ff:ff + inet 192.168.1.51/24 brd 192.168.1.255 scope global eno16777736 + valid_lft forever preferred_lft forever + inet6 fe80::20c:29ff:fe67:2f4c/64 scope link + valid_lft forever preferred_lft forever + +#### To retrieve packets statistics related with connected interfaces #### + + [root@localhost ~]# ip -s link show eno16777736 + +**Sample Output:** + +![unxmen_(011)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0111.png) + +#### Get routing configuration #### + + [root@localhost ~]# ip route + +Sample Output: + + default via 192.168.1.1 dev eno16777736 proto static metric 100 + 192.168.1.0/24 dev eno16777736 proto kernel scope link src 192.168.1.51 metric 100 + +#### Analyze path for some host/website #### + + [root@localhost ~]# tracepath unixmen.com + +Output will be just like traceroute but in more managed form. + +![unxmen_0121](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_01211.png) + +### nmcli utility ### + +**Nmcli** is a very rich and flexible command line utility. some of the terms used in nmcli are: + +- **Device** – A network interface being used. +- **Connection** – A set of configuration settings, for a single device you can have multiple connections, you can switch between connections. 
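The device/connection split above also lends itself to scripting. Below is a small sketch of parsing nmcli's terse output to see which profile is active on which device; the stand-in string replaces live `nmcli -t -f NAME,DEVICE connection show` output, and the profile names are illustrative assumptions, not from a real system.

```shell
# Stand-in for `nmcli -t -f NAME,DEVICE connection show` output:
# one "NAME:DEVICE" pair per line, "--" meaning the profile is inactive.
sample='eno1:eno16777736
static:--
dhcp:--'

# Print only the profiles that are currently bound to a device.
printf '%s\n' "$sample" | awk -F: '$2 != "--" { print $1 " is active on " $2 }'
# → eno1 is active on eno16777736
```

On a real CentOS 7 box you would pipe the actual `nmcli -t -f NAME,DEVICE connection show` output into the same awk filter.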
+ +#### Find out how many connections are available for how many devices #### + + [root@localhost ~]# nmcli connection show + +![unxmen_(013)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_013.png) + +#### Get details of a specific connection #### + + [root@localhost ~]# nmcli connection show eno1 + +**Sample output:** + +![unxmen_(014)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0141.png) + +#### Get the Network device status #### + + [root@localhost ~]# nmcli device status + +---------- + + DEVICE TYPE STATE CONNECTION + eno16777736 ethernet connected eno1 + lo loopback unmanaged -- + +#### Create a new connection with “dhcp” #### + + [root@localhost ~]# nmcli connection add con-name "dhcp" type ethernet ifname eno16777736 + +Where, + +- **Connection add** – To add new connection +- **con-name** – connection name +- **type** – type of device +- **ifname** – interface name + +This command will add connection with dhcp protocol. + +**Sample output:** + + Connection 'dhcp' (163a6822-cd50-4d23-bb42-8b774aeab9cb) successfully added. + +#### Instead of assigning an IP via dhcp, you can add ip address as “static” #### + + [root@localhost ~]# nmcli connection add con-name "static" ifname eno16777736 autoconnect no type ethernet ip4 192.168.1.240 gw4 192.168.1.1 + +**Sample Output:** + + Connection 'static' (8e69d847-03d7-47c7-8623-bb112f5cc842) successfully added. + +**Update connection:** + + [root@localhost ~]# nmcli connection up eno1 + +Again Check, whether ip address is changed or not. + + [root@localhost ~]# ip addr show + +![unxmen_(015)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0151.png) + +#### Add DNS settings to Static connections. #### + + [root@localhost ~]# nmcli connection modify "static" ipv4.dns 202.131.124.4 + +#### Add additional DNS value. 
#### + +[root@localhost ~]# nmcli connection modify "static" +ipv4.dns 8.8.8.8 + +**Note**: For additional entries **+** symbol will be used and **+ipv4.dns** will be used instead on **ip4.dns** + +Put an additional ip address: + + [root@localhost ~]# nmcli connection modify "static" +ipv4.addresses 192.168.200.1/24 + +Refresh settings using command: + + [root@localhost ~]# nmcli connection up eno1 + +![unxmen_(016)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_016.png) + +You will see, setting are effective now. + +That’s it. + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/basics-networkmanager-command-line-tool-nmcli/ + +作者:Rajneesh Upadhyay +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file diff --git a/sources/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md b/sources/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md new file mode 100644 index 0000000000..a8e21419fb --- /dev/null +++ b/sources/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md @@ -0,0 +1,74 @@ +How to create an AP in Ubuntu 15.04 to connect to Android/iPhone +================================================================================ +I tried creating a wireless access point via Gnome Network Manager in 15.04 and was successful. I’m sharing the steps with our readers. Please note: you must have a wifi card which allows you to create an Access Point. If you want to know how to find that, type iw list in a terminal. + +If you don’t have iw installed, you can install iw in Ubuntu using the command sudo apt-get install iw. 
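If you would rather script this check than read the output by eye, a sketch like the following can work; the sample text is a stand-in for real `iw list` output, since the exact formatting may vary between wireless drivers.

```shell
# Stand-in for the "Supported interface modes" section of `iw list`.
iw_output='Supported interface modes:
         * IBSS
         * managed
         * AP
         * AP/VLAN
         * monitor'

# The $-anchored pattern matches "* AP" but not "* AP/VLAN".
if printf '%s\n' "$iw_output" | grep -Eq '^[[:space:]]*\* AP$'; then
    echo "AP mode supported"
else
    echo "AP mode NOT supported"
fi
# → AP mode supported
```

On a real system, replace the stand-in with `iw list` itself, e.g. `iw list | grep -Eq '^[[:space:]]*\* AP$'`.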
+ +After you type iw list, look for supported interface section, where it should be a entry called AP like the one shown below: + +Supported interface modes: + +* IBSS +* managed +* AP +* AP/VLAN +* monitor +* mesh point + +Let’s see the steps in detail + +1. Disconnect WIFI. Get a an internet cable and plug into your laptop so that you are connected to a wired internet connection +1. Go to Network Icon on the top panel -> Edit Connections then click the Add button in the pop-up window +1. Choose Wi-Fi from the drop-down menu +1. Next, + +a. Type in a connection name e.g. Hotspot + +b. Type in a SSID e.g. Hotspot + +c. Select mode: Infrastructure + +d. Device MAC address: select your wireless card from drop-down menu + +![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome1.jpg) + +1. Go to Wi-Fi Security tab, select security type WPA & WPA2 Personal and set a password +1. Go to IPv4 Settings tab, from Method drop-down box, select Shared to other computers + +![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome4.jpg) + +1. Go to IPv6 tab and set Method to ignore (do this only if you do not use IPv6) +1. Hit the “Save” button to save the configuration +1. Open a terminal from the menu/dash +1. Now, edit the connection with you just created via network settings + +VIM editor: + + sudo vim /etc/NetworkManager/system-connections/Hotspot + +Gedit: + + gksu gedit /etc/NetworkManager/system-connections/Hotspot + +Replace name Hotspot with the connection name you have given in step 4 + +![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome2.jpg?resize=640%2C402) + +1. Change the line mode=infrastructure to mode=ap and save the file +1. Once you save the file, you should be able to see the wifi named Hotspot showing up in the list of available wifi networks. 
(If the network does not show, disable and enable wifi ) + +![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome3.jpg?resize=290%2C375) + +1. You can now connect your Android phone. Connection tested using Xioami Mi4i running Android 5.0 (Downloaded 1GB to test speed and reliability) + +-------------------------------------------------------------------------------- + +via: http://www.linuxveda.com/2015/08/23/how-to-create-an-ap-in-ubuntu-15-04-to-connect-to-androidiphone/ + +作者:[Sayantan Das][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxveda.com/author/sayantan_das/ \ No newline at end of file From 90c65877863ed475a05d0e87023217510503e4ee Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 24 Aug 2015 14:47:25 +0800 Subject: [PATCH 274/697] =?UTF-8?q?20150824-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...aving Fun With Linux Terminal In Ubuntu.md | 37 +++ ...ice Found Error After Installing Ubuntu.md | 97 ++++++++ ...gari Support In Antergos And Arch Linux.md | 47 ++++ ...phyr Test Management Tool on CentOS 7.x.md | 233 ++++++++++++++++++ 4 files changed, 414 insertions(+) create mode 100644 sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md create mode 100644 sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md create mode 100644 sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md create mode 100644 sources/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md diff --git a/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md b/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md new file mode 100644 index 0000000000..6adcbbc3bc --- 
/dev/null +++ b/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md @@ -0,0 +1,37 @@ +Watch These Kids Having Fun With Linux Terminal In Ubuntu +================================================================================ +I found this short video of children having fun with Linux terminals in their computer lab at school. I do not know where do they belong to, but I guess it is either in Indonesia or Malaysia. + +注:youtube 视频 + + +### Run train in Linux terminal ### + +There is no magic here. It’s just a small command line fun tool called ‘sl’. I presume that it was developed entirely to have some fun when command ls is wrongly typed. If you ever worked on Linux terminal, you know that ls is one of the most commonly used commands and perhaps one of the most frequently mis-typed command as well. + +If you want to have little fun with this terminal train, you can install it using the following command: + + sudo apt-get install sl + +To run the terminal train, just type **sl** in the terminal. It also has the following options: + +- -a : Accident mode. You can see people crying help +- -l : shows a smaller train but with more coaches +- -F : A flying train +- -e : Allows interrupt by Ctrl+C. In other mode you cannot use Ctrl+C to stop the train. But then, it doesn’t run for long. + +Normally, you should hear the whistle as well but it doesn’t work in most of the Linux OS, Ubuntu 14.04 being one of them. 
Here is the accidental terminal train :) + +![Linux Terminal Train](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/04/Linux_Terminal_Train.jpeg) + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/ubuntu-terminal-train/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ \ No newline at end of file diff --git a/sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md b/sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md new file mode 100644 index 0000000000..3281a51137 --- /dev/null +++ b/sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md @@ -0,0 +1,97 @@ +Fix No Bootable Device Found Error After Installing Ubuntu +================================================================================ +Usually, I dual boot Ubuntu and Windows but this time I decided to go for a clean Ubuntu installation i.e. eliminating Windows completely. After the clean install of Ubuntu, I ended up with a screen saying **no bootable device found** instead of the Grub screen. Clearly, the installation messed up with the UEFI boot settings. + +![No Bootable Device Found After Installing Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_1.jpg) + +I am going to show you how I fixed **no bootable device found error after installing Ubuntu in Acer laptops**. It is important that I mention that I am using Acer Aspire R13 because we have to change things in firmware settings and those settings might look different from manufacturer to manufacturer and from device to device. 
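Before retracing the steps, it can help to confirm the Secure Boot state from a live session first. This is a minimal sketch, assuming the `mokutil` package is available (as it is on Ubuntu); the variable below stands in for real `mokutil --sb-state` output so the handling can be shown end to end.

```shell
# Stand-in for the output of: mokutil --sb-state
sb_state='SecureBoot enabled'

case "$sb_state" in
    *enabled*)  echo "Secure Boot is ON"  ;;
    *disabled*) echo "Secure Boot is OFF" ;;
    *)          echo "Unknown state: $sb_state" ;;
esac
# → Secure Boot is ON
```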
+ +So before you go on trying the steps mentioned here, let’s first see what state my computer was in during this error: + +- My Acer Aspire R13 came preinstalled with Windows 8.1 and with UEFI boot manager +- Secure boot was not turned off (my laptop has just come from repair and the service guy had put the secure boot on again, I did not know until I ran up in the problem). You can read this post to know [how disable secure boot in Acer laptops][1] +- I chose to install Ubuntu by erasing everything i.e. existing Windows 8.1, various partitions etc. +- After installing Ubuntu, I saw no bootable device found error while booting from the hard disk. Booting from live USB worked just fine + +In my opinion, not disabling the secure boot was the reason of this error. However, I have no data to backup my claim. It is just a hunch. Interestingly, dual booting Windows and Linux often ends up in common Grub issues like these two: + +- [error: no such partition grub rescue][2] +- [Minimal BASH like line editing is supported][3] + +If you are in similar situation, you can try the fix which worked for me. + +### Fix no bootable device found error after installing Ubuntu ### + +Pardon me for poor quality images. My OnePlus camera seems to be not very happy with my laptop screen. + +#### Step 1 #### + +Turn the power off and boot into boot settings. I had to press Fn+F2 (to press F2 key) on Acer Aspire R13 quickly. You have to be very quick with it if you are using SSD hard disk because SSDs are very fast in booting. Depending upon your manufacturer/model, you might need to use Del or F10 or F12 keys. + +#### Step 2 #### + +In the boot settings, make sure that Secure Boot is turned on. It should be under the Boot tab. + +#### Step 3 #### + +Go to Security tab and look for “Select an UEFI file as trusted for executing” and click enter. 
+
+![Fix no bootable device found ](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_2.jpg)
+
+Just for your information, what we are going to do here is to add the UEFI settings file (generated during the Ubuntu installation) to the trusted UEFI boot entries of your device. If you remember, the main aim of UEFI boot is to provide security, and since Secure Boot was not disabled (perhaps), the device did not intend to boot from the newly installed OS. Adding it as trusted, a kind of whitelisting, will let the device boot from the Ubuntu UEFI file.
+
+#### Step 4 ####
+
+You should see your hard disk, like HDD0 etc., here. If you have more than one hard disk, I hope you remember where you installed Ubuntu. Press Enter here as well.
+
+![Fix no bootable device found in boot settings](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_3.jpg)
+
+#### Step 5 ####
+
+You should see the EFI directory entry here. Press Enter.
+
+![Fix settings in UEFI](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_4.jpg)
+
+#### Step 6 ####
+
+You’ll see an ubuntu entry in the next screen. Don’t get impatient, you are almost there.
+
+![Fixing boot error after installing Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_5.jpg)
+
+#### Step 7 ####
+
+You’ll see the shimx64.efi, grubx64.efi and MokManager.efi files here. The important one is shimx64.efi. Select it and press Enter.
+
+![Fix no bootable device found](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_6.jpg)
+
+In the next screen, type Yes and press Enter.
+
+![No_Bootable_Device_Found_7](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_7.jpg)
+
+#### Step 8 ####
+
+Once we have added it as a trusted EFI file to be executed, press F10 to save and exit.
+ +![Save and exist firmware settings](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_8.jpg) + +Reboot your system and this time you should be seeing the familiar Grub screen. Even if you do not see Grub screen, you should at least not be seeing “no bootable device found” screen anymore. You should be able to boot into Ubuntu. + +If your Grub screen was messed up after the fix but you got to login into it, you can reinstall Grub to boot into the familiar purple Grub screen of Ubuntu. + +I hope this tutorial helped you to fix no bootable device found error. Any questions or suggestions or a word of thanks is always welcomed. + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/no-bootable-device-found-ubuntu/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://itsfoss.com/disable-secure-boot-in-acer/ +[2]:http://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/ +[3]:http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux/ \ No newline at end of file diff --git a/sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md b/sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md new file mode 100644 index 0000000000..db36df66e6 --- /dev/null +++ b/sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md @@ -0,0 +1,47 @@ +How To Add Hindi And Devanagari Support In Antergos And Arch Linux +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Indian-languages.jpg) + +You might be knowing by now that I have been trying my hands on [Antergos Linux][1] 
lately. One of the first few things I noticed after installing [Antergos][2] was that **Hindi scripts were not displayed properly** in the default Chromium browser.
+
+This is a strange thing that I had never encountered before in my desktop Linux experience. First, I thought maybe it could be a browser problem, so I went on to install Firefox, only to see the same story repeated. Firefox also could not display Hindi properly. Unlike Chromium, which displayed nothing, Firefox did display something, but it was not readable.
+
+![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_1.jpeg)
+
+Hindi display in Chromium
+
+![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_2.jpeg)
+
+Hindi display in Firefox
+
+Strange? So there is no Hindi support in Arch-based Antergos Linux by default? I did not verify it, but I presume that it would be the same for other Indian languages that are also based on the Devanagari script.
+
+In this quick tutorial, I am going to show you how to add Devanagari support so that Hindi and other Indian languages are displayed properly.
+
+### Add Indian language support in Antergos and Arch Linux ###
+
+Open a terminal and use the following command:
+
+    sudo yaourt -S ttf-indic-otf
+
+Enter the password, and it will provide rendering support for Indian languages.
+
+Restarting Firefox was enough to display Hindi correctly there, but it took a full restart for the fonts to take effect everywhere. For that reason, I advise that you **restart your system** after installing the Indian fonts.
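To confirm that the new fonts are actually visible to applications, fontconfig's language filter can be queried. This is a sketch; the two family names below are illustrative stand-ins for real `fc-list :lang=hi family` output.

```shell
# Stand-in for the output of: fc-list :lang=hi family
families='Lohit Devanagari
Noto Sans Devanagari'

# Count the distinct Devanagari-capable font families.
n=$(printf '%s\n' "$families" | sort -u | wc -l | tr -d ' ')
echo "Devanagari-capable families found: $n"
# → Devanagari-capable families found: 2
```

If the count is zero on a real system, the fonts did not register, and running `fc-cache -f` (or rebooting, as advised above) may be needed.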
+ +![Adding Hindi display support in Arch based Antergos Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_4.jpeg) + +I hope tis quick helped you to read Hindi, Sanskrit, Tamil, Telugu, Malayalam, Bangla and other Indian languages in Antergos and other Arch based Linux distros such as Manjaro Linux. + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/display-hindi-arch-antergos/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://antergos.com/ +[2]:http://itsfoss.com/tag/antergos/ \ No newline at end of file diff --git a/sources/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md b/sources/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md new file mode 100644 index 0000000000..b4014bb009 --- /dev/null +++ b/sources/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md @@ -0,0 +1,233 @@ +How to Setup Zephyr Test Management Tool on CentOS 7.x +================================================================================ +Test Management encompasses anything and everything that you need to do as testers. Test management tools are used to store information on how testing is to be done, plan testing activities and report the status of quality assurance activities. So in this article we will illustrate you about the setup of Zephyr test management tool that includes everything needed to manage the test process can save testers hassle of installing separate applications that are necessary for the testing process. 
Once you have done with its setup, you will be able to track bugs and defects, and collaborate on project tasks with your team, as you can easily share and access data across multiple project teams for communication throughout the testing process.
+
+### Requirements for Zephyr ###
+
+We are going to install and run Zephyr with the following minimum set of resources; they can be increased as per your infrastructure requirements. We will be installing Zephyr on 64-bit CentOS 7, while its binary distributions are available for almost all Linux operating systems.
+
+注:表格
| Zephyr test management tool | | |
|---|---|---|
| Linux OS | CentOS Linux 7 (Core), 64-bit | |
| Packages | JDK 7 or above, Oracle JDK 6 update | No prior Tomcat or MySQL installed |
| RAM | 4 GB | Preferred 8 GB |
| CPU | 2.0 GHz or higher | |
| Hard Disk | 30 GB | At least 5 GB must be free |
+
+You must have super user (root) access to perform the installation of Zephyr. Make sure that your network is properly configured with a static IP address and that the default set of ports is available and allowed in the firewall: ports 80/443, 8005, 8009 and 8010 will be used by Tomcat, and port 443 or 2099 will be used within Zephyr by Flex for the RTMP protocol.
+
+### Install Java JDK 7 ###
+
+Java JDK 7 is a basic requirement for the installation of Zephyr. If it is not already installed on your operating system, do the following to install Java and set up its JAVA_HOME environment variable properly.
+
+Let’s issue the below commands to install Java JDK 7.
+
+    [root@centos-007 ~]# yum install java-1.7.0-openjdk-1.7.0.79-2.5.5.2.el7_1
+
+----------
+
+    [root@centos-007 ~]# yum install java-1.7.0-openjdk-devel-1.7.0.85-2.6.1.2.el7_1.x86_64
+
+Once Java is installed, including its required dependencies, run the following commands to set the JAVA_HOME environment variables.
+
+    [root@centos-007 ~]# export JAVA_HOME=/usr/java/default
+    [root@centos-007 ~]# export PATH=/usr/java/default/bin:$PATH
+
+Now check the version of Java to verify the installation with the following command.
+
+    [root@centos-007 ~]# java –version
+
+----------
+
+    java version "1.7.0_79"
+    OpenJDK Runtime Environment (rhel-2.5.5.2.el7_1-x86_64 u79-b14)
+    OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)
+
+The output shows that we have successfully installed OpenJDK Java version 1.7.0_79.
+
+### Install MySQL 5.6.X ###
+
+If you have other MySQL versions on the machine, it is recommended to remove them and install this version on top of them, or to upgrade their schemas to what is specified, as this specific major/minor (5.6.X) version of MySQL, with the root username, is a prerequisite of Zephyr.
+
+To install MySQL 5.6 on CentOS 7.1, let's do the following steps:
+
+Download the rpm package, which will create a yum repo file for the MySQL Server installation.
+
+    [root@centos-007 ~]# yum install wget
+    [root@centos-007 ~]# wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
+
+Now install the downloaded rpm package using the rpm command.
+
+    [root@centos-007 ~]# rpm -ivh mysql-community-release-el7-5.noarch.rpm
+
+After the installation of this package you will get two new yum repos related to MySQL. Then, using the yum command, we will install MySQL Server 5.6; all dependencies will be installed automatically.
+
+    [root@centos-007 ~]# yum install mysql-server
+
+Once the installation process completes, run the following commands to start the mysqld service and check whether it is active or not.
+
+    [root@centos-007 ~]# service mysqld start
+    [root@centos-007 ~]# service mysqld status
+
+On a fresh installation of MySQL Server, the MySQL root user password is blank. As a good security practice, we should reset the password of the MySQL root user.
+
+Connect to MySQL using the auto-generated empty password and change the root password.
+
+    [root@centos-007 ~]# mysql
+    mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your_password');
+    mysql> flush privileges;
+    mysql> quit;
+
+Now we need to configure the required database parameters in the default configuration file of MySQL. Let's open the file, located in the "/etc/" folder, and update it as follows.
+
+    [root@centos-007 ~]# vi /etc/my.cnf
+
+----------
+
+    [mysqld]
+    datadir=/var/lib/mysql
+    socket=/var/lib/mysql/mysql.sock
+    symbolic-links=0
+
+    sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
+    max_allowed_packet=150M
+    max_connections=600
+    default-storage-engine=INNODB
+    character-set-server=utf8
+    collation-server=utf8_unicode_ci
+
+    [mysqld_safe]
+    log-error=/var/log/mysqld.log
+    pid-file=/var/run/mysqld/mysqld.pid
+    default-storage-engine=INNODB
+    character-set-server=utf8
+    collation-server=utf8_unicode_ci
+
+    [mysql]
+    max_allowed_packet = 150M
+    [mysqldump]
+    quick
+
+Save the changes made in the configuration file and restart the mysql service.
+
+    [root@centos-007 ~]# service mysqld restart
+
+### Download Zephyr Installation Package ###
+
+We are done with the installation of the packages required to install Zephyr. Now we need to get the binary distribution package of Zephyr and its license key. Go to the official Zephyr download page at http://download.yourzephyr.com/linux/download.php, give your email ID and click to download.
+
+![Zephyr Download](http://blog.linoxide.com/wp-content/uploads/2015/08/13.png)
+
+Then confirm the email address you provided, and you will get the Zephyr download link and its license key link. Click on the provided links and choose the appropriate version for your operating system to download the binary installation package and its license file to the server.
+
+We have placed it in the home directory and modified its permissions to make it executable.
+
+![Zephyr Binary](http://blog.linoxide.com/wp-content/uploads/2015/08/22.png)
+
+### Start Zephyr Installation and Configuration ###
+
+Now we are ready to start the installation of Zephyr by executing its binary installation script as below.
+
+    [root@centos-007 ~]# ./zephyr_4_7_9213_linux_setup.sh –c
+
+Once you run the above command, it will check that the Java environment variables are properly set up and configured.
If there is some misconfiguration, you might see an error like:
+
+    testing JVM in /usr ...
+    Starting Installer ...
+    Error : Either JDK is not found at expected locations or JDK version is mismatched.
+    Zephyr requires Oracle Java Development Kit (JDK) version 1.7 or higher.
+
+Once you have properly configured Java, the installation of Zephyr will start and ask you to press "o" to proceed or "c" to cancel the setup. Let's type "o" and press the "Enter" key to start the installation.
+
+![install zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/32.png)
+
+The next step is to review all the requirements for the Zephyr setup; press "Enter" to move forward to the next option.
+
+![zephyr requirements](http://blog.linoxide.com/wp-content/uploads/2015/08/42.png)
+
+To accept the license agreement, type "1" and press Enter.
+
+    I accept the terms of this license agreement [1], I do not accept the terms of this license agreement [2, Enter]
+
+Here we choose the destination location where we want to install Zephyr and accept the default ports; if you want ports other than the defaults, you are free to specify them here.
+
+![installation folder](http://blog.linoxide.com/wp-content/uploads/2015/08/52.png)
+
+Then customize the MySQL database parameters and give the right paths to the configuration files. You might see an error at this point, as shown below.
+
+    Please update MySQL configuration. Configuration parameter max_connection should be at least 500 (max_connection = 500) and max_allowed_packet should be at least 50MB (max_allowed_packet = 50M).
+
+To overcome this error, make sure that you have configured the "max_connections" and "max_allowed_packet" limits properly in the MySQL configuration file. Confirm these settings, connect to the MySQL server and run the commands as shown.
+
+![mysql connections](http://blog.linoxide.com/wp-content/uploads/2015/08/62.png)
+
+Once you have configured your MySQL database properly, the installer will extract the configuration files to complete the setup.
+
+![mysql customization](http://blog.linoxide.com/wp-content/uploads/2015/08/72.png)
+
+The process completes with a successful installation of Zephyr 4.7 on your computer. To launch Zephyr Desktop, type "y" to finish the Zephyr installation.
+
+![launch zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/82.png)
+
+### Launch Zephyr Desktop ###
+
+Open your web browser and launch Zephyr Desktop with your server's IP address; you will be directed to the Zephyr Desktop.
+
+    http://your_server_IP/zephyr/desktop/
+
+![Zephyr Desktop](http://blog.linoxide.com/wp-content/uploads/2015/08/91.png)
+
+From your Zephyr dashboard, click on "Test Manager" and log in with the default user name and password, which is "test.manager".
+
+![Test Manage Login](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manager_login.png)
+
+Once you are logged in, you will be able to configure your administrative settings as shown. Choose the settings you wish to apply according to your environment.
+
+![Test Manage Administration](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manage_admin.png)
+
+Save the settings once you are done with your administrative settings, do the same for resource management and project setup, and start using Zephyr as a complete test management tool. You can check and edit the status of your administrative settings from the Department Dashboard Management, as shown.
We hope you are now much aware of Zephyr Test management tool which offer the prospect of streamlining the testing process and allow quick access to data analysis, collaborative tools and easy communication across multiple project teams. Feel free to comment us if you find any difficulty while you are doing it in your environment. + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/setup-zephyr-tool-centos-7-x/ + +作者:[Kashif Siddique][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/kashifs/ \ No newline at end of file From d29562dafcdb2241616dd46beff562c2107f80ca Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 24 Aug 2015 15:24:44 +0800 Subject: [PATCH 275/697] =?UTF-8?q?20150824-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ng before CoreOS and the Atomic Project.md | 92 +++++++++ ...artition into One Large Virtual Storage.md | 186 ++++++++++++++++++ 2 files changed, 278 insertions(+) create mode 100644 sources/talk/20150824 LinuxCon exclusive--Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project.md create mode 100644 sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md diff --git a/sources/talk/20150824 LinuxCon exclusive--Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project.md b/sources/talk/20150824 LinuxCon exclusive--Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project.md new file mode 100644 index 0000000000..2c45b6064b --- /dev/null +++ b/sources/talk/20150824 LinuxCon exclusive--Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project.md @@ -0,0 +1,92 @@ +LinuxCon exclusive: Mark Shuttleworth says 
Snappy was born long before CoreOS and the Atomic Project +================================================================================ +![](http://images.techhive.com/images/article/2015/08/mark-100608730-primary.idge.jpg) + +Mark Shuttleworth at LinuxCon Credit: Swapnil Bhartiya + +> Mark Shuttleworth, founder of Canonical and Ubuntu, made a surprise visit at LinuxCon. I sat down with him for a video interview and talked about Ubuntu on IBM’s new LinuxONE systems, Canonical’s plans for containers, open source in the enterprise space and much more. + +### You made a surprise entry during the keynote. What brought you to LinuxCon? ### + +**Mark Shuttleworth**: I am here at LinuxCon to support IBM and Canonical in their announcement of Ubuntu on their new Linux-only super-high-end mainframe LinuxONE. These are the biggest machines in the world, purpose-built to run only Linux. And we will be bringing Ubuntu to them, which is a real privilege for us and is going to be incredible for developers. + +![mark selfie](http://images.techhive.com/images/article/2015/08/mark-selfie-100608731-large.idge.jpg) + +Swapnil Bhartiya + +Mark Shuttleworth and Swapnil Bhartiya, mandatory selfie at LinuxCon + +### Only Red Hat and SUSE were supported on it. Why was Ubuntu missing from the mainframe scene? ### + +**Mark**: Ubuntu has always been about developers. It has been about enabling the free software platform from where it is collaboratively built to be available at no cost to developers in the world, so they are limited only by their imagination—not by money, not by geography. + +There was an incredible story told today about a 12-year-old kid who started out with Ubuntu; there are incredible stories about people building giant businesses with Ubuntu. And for me, being able to empower people, whether they come from one part of the world or another to express their ideas on free software, is what Ubuntu is all about. 
It's been a journey for us essentially, going to the platforms those developers care about, and just in the last year, we suddenly saw a flood of requests from companies who run mainframes, who are using Ubuntu for their infrastructure—70% of OpenStack deployments are on Ubuntu. Those same people said, “Look, there is the mainframe, and we like to unleash it and think of it as a region in the cloud.” So when IBM started talking to us, saying that they have this project in the works, it felt like a very natural fit: You are going to be able to take your Ubuntu laptop, build code there and ship it straight to every cloud, every virtualization environment, every bare metal in every architecture including the mainframe, and that's going to be beautiful. + +### Will Canonical be offering support for these systems? ### + +**Mark**: Yes. Ubuntu on z Systems is going to be completely supported. We will make long-term commitments to that. The idea is to bring together scale-out-fast cloud-like workloads, which is really born on Ubuntu; 70% of workloads on Amazon and other public clouds run on Ubuntu. Now you can think of running that on a mainframe if that makes sense to you. + +We are going to provide exactly the same platform that we do on the cloud, and we are going to provide that on the mainframe as well. We are also going to expose it to the OpenStack API so you can consume it on a mainframe with exactly the same tools and exactly the same processes that you would consume on a laptop, or OpenStack or public cloud resources. So all of the things that Ubuntu builds to make your life easy as a developer are going to be available across that full range of platforms and systems, and all of that is commercially supported. + +### Canonical is doing a lot of things: It is into enterprise, and it’s in the consumer space with mobile and desktop. So what is the core focus of Canonical now? 
### + +**Mark**: The trick for us is to enable the reuse of specifically the same parts [of our technology] in as many useful ways as possible. So if you look at the work that we do at z Systems, it's absolutely defined by the work that we do on the cloud. We want to deliver exactly the same libraries on exactly the same date for the mainframe as we do for public clouds and for x86, ARM and Power servers today. + +We don't allow Ubuntu or our focus to fragment very dramatically because we don't allow different products managers to find Ubuntu in different ways in different environments. We just want to bring that standard experience that developers love to this new environment. + +Similarly if you look at the work we are doing on IoT [Internet of Things], Snappy Ubuntu is the heart of the phone. It’s the phone without the GUI. So the definitions, the tools, the kernels, the mechanisms are shared across those projects. So we are able to multiply the impact of the work. We have an incredible community, and we try to enable the community to do things that they want to do that we can’t do. So that's why we have so many buntus, and it's kind of incredible for me to see what they do with that. + +We also see the community climbing in. We see hundreds of developers working with Snappy for IoT, and we see developers working with Snappy on mobile, for personal computing as convergence becomes real. And, of course, there is the cloud server story: 70% of the world is Ubuntu, so there is a huge audience. We don't have to do all the work that we do; we just have to be open and willing to, kind of, do the core infrastructure and then reuse it as efficiently as possible. + +### Is Snappy a response to Atomic or CoreOS? ### + +**Mark**: Snappy as a project was born four years ago when we started working on the phone, which was long before the CoreOS, long before Atomic. 
I think the principles of atomicity and transactionality are beautiful, but remember: We needed to build the same things for the phone. And with Snappy, we have the ability to deliver transactional updates to any of these systems—phones, servers and cloud devices.
+
+Of course, it feels a little different because in order to provide those guarantees, we have to shape the system in such a way that we can guarantee the guarantees. And that's why Snappy is snappy; it's a new thing. It's not based on an old packaging system. Though we will keep both of them: All Snaps for us that Canonical makes, the core snaps that define the OS, are all built from Debian packages. They are two different faces of the same coin for us, and developers will use them as tools. We use the right tools for the job.
+
+There are a couple of key advantages for Snappy over CoreOS and Atomic, and the main one is this: We took the view that we wanted the base idea to be extensible. So with Snappy, the core operating system is tiny. You make all the choices, and you take all the decisions about the things you want to bolt onto that: you want to bolt on Docker; you want to bolt on Kubernetes; you want to bolt on Mesos; you want to bolt on Lattice from Pivotal; you want to bolt on OpenStack. Those are the things you choose to add with Snappy. Whereas with Atomic and CoreOS, it's one blob and you have to do it exactly the way they want you to do it. You have to live with the versions of software and the choices they make.
+
+Whereas with Snappy, we really preserve this idea that the choices you have in Ubuntu are now transactionally available on Snappy systems. That makes the core much smaller, and it gives you the choice of different container systems, different container management systems, different cloud infrastructure systems or different apps of every description. I think that's the winning idea. 
In the fullness of time, people will realize that they wanted to make those choices themselves; they just want Canonical to do the work of providing the updates in a really efficient manner.
+
+### There is so much competition in the container space with Docker, Rocket and many other players. Where will Canonical stand amid this competition? ###
+
+**Mark**: Canonical is focused on platform tools, and we see things like Rocket and Docker as super-useful for developers; we just make sure that those work best on Ubuntu. Docker, for years, ran only on Ubuntu because we work very closely with them, and we are glad now that it's available everywhere else. But if you look at the numbers, the vast majority of Docker containers are on Ubuntu. Because we work really hard, as developers, you get the best experience with all of these tools on Ubuntu. We don't want to try and control everything, and it’s great for us to have those guys competing.
+
+I think in the end people will see that there are really two kinds of containers. 1) There are cases where a container is just like a VM. It feels like a whole machine, it runs all processes, all the logs and cron jobs are there. It's like a VM, just that it's much cheaper, much lighter, much faster, and that's LXD. 2) And then there are process containers, which are like Docker or Rocket; they are there to run a specific application very fast. I think we lead the world in the general machine container story, which is our hypervisor LXD, and I think Docker leads the story when it comes to application containers, process containers. And those two work together really beautifully.
+
+### Microsoft and Canonical are working together on LXD? Can you tell us about this engagement? ###
+
+**Mark**: LXD is two things. First, it's an implementation on top of Canonical's work on the kernel so that you can start to create full machine containers on any host. But it's also a REST API. That’s the transition from LXC to LXD. 
We got a daemon there so you can talk to the daemon over the network, if it's listening on the network, and ask it: tell me about the containers on that machine, tell me about the file systems on that machine, the networks on that machine; start or stop a container.
+
+So LXD effectively becomes a distributed hypervisor. Very interestingly, last week Microsoft announced that they like the REST API. It is very clean, very simple, very well engineered, and they are going to implement the same API for Windows machines. It's completely cross-platform, which means you will be able to talk to any machine—Linux or Windows. So it gives you very clean and simple APIs to talk about containers on any host on the network.
+
+Of course, we have led the work in [OpenStack to bind LXD to Nova][1], which is the compute control system in OpenStack, so that's how we create a whole cloud with the OpenStack API with the individual VMs actually being containers: so much denser, much faster, much lighter, much cheaper.
+
+### Open Source is becoming a norm in the enterprise segment. What do you think is driving the adoption of open source in the enterprise? ###
+
+**Mark**: The reason why open source has become so popular in the enterprise is that it enables them to go faster. We are all competing at some level, and if you can't make progress because you have to call up some vendor, you can't dig in and help yourself go faster, then you feel frustrated. And given the choice between frustration and at least the ability to dig into a problem, enterprises over time will always choose to give themselves the ability to dig in and help themselves. So that is why open source is phenomenal.
+
+I think it goes a bit deeper than that. I think people have started to realize that as much as we compete, 99% of what we need to do is shared, and there is something meaningful about contributing to something that is shared. 
I have seen Ubuntu go from something that developers love to something that CIOs love because developers love it. As that happens, it's not a one-way ticket. They often ask how they can contribute to make this whole thing go faster.
+
+We have always seen a curve of complexity, and open source has traditionally been higher up on the curve of complexity and therefore considered threatening or difficult or too uncertain for people who are not comfortable with the complexity. What's wonderful to me is that many open source projects have identified that as a blocker for their own future. So in Ubuntu we have made user experience, design and “making it easy” a first-class goal. We have done the same for OpenStack. With Ubuntu tools for OpenStack anybody can build an OpenStack cloud in an hour, and if you want, that cloud can run itself, scale itself, manage itself, can deal with failures. It becomes something you can just fire up and forget, which also makes it really cheap. It also makes it something that's not a distraction, and so by making open source easier and easier, we are broadening its appeal to consumers and into the enterprise and potentially into the government.
+
+### How open are governments to open source? Can you tell us about the utilization of open source by governments, especially in the U.S.? ###
+
+**Mark**: I don't track the usage in government, but part of government utilization in the modern era is the realization of how untrustworthy other governments might be. There is a desire for people to be able to say, “Look, I want to review or check and potentially self-build all the things that I depend on.” That's a really important mission. At the end of the day, some people see this as a game where maybe they can get something out of the other guy. I see it as a game where we can make a level playing field, where everybody gets to compete. 
I have a very strong interest in making sure that Ubuntu is trustworthy, which means the way we build it, the way we run it, the governance around it is such that people can have confidence in it as an independent thing. + +### You are quite vocal about freedom, privacy and other social issues on Google+. How do you see yourself, your company and Ubuntu playing a role in making the world a better place? ### + +**Mark**: The most important thing for us to do is to build confidence in trusted platforms, platforms that are freely available but also trustworthy. At any given time, there will always be people who can make arguments about why they should have access to something. But we know from history that at the end of the day, due process of law, justice, doesn't depend on the abuse of privacy, abuse of infrastructure, the abuse of data. So I am very strongly of the view that in the fullness of time, all of the different major actors will come to the view that their primary interest is in having something that is conceptually trustworthy. This isn't about what America can steal from Germany or what China can learn in Russia. This is about saying we’re all going to be able to trust our infrastructure; that's a generational journey. But I believe Ubuntu can be right at the center of people's thinking about that. 
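The LXD REST API described above can be sketched with a few plain HTTP calls against the daemon's local socket. This is a hypothetical illustration, not part of the interview and not an official client: the socket path, the `/1.0` endpoint names, and the use of curl's `--unix-socket` flag (curl >= 7.40) are assumptions that may differ on your system.

```shell
#!/bin/sh
# Minimal sketch of querying the LXD REST API over its local unix socket.
# Socket path and endpoints are assumptions based on the LXD 1.0 API.
LXD_SOCKET="${LXD_SOCKET:-/var/lib/lxd/unix.socket}"

lxd_get() {
    # Print the request being made, then perform it only if a daemon is
    # actually listening, so the sketch degrades gracefully without LXD.
    echo "GET $1"
    if [ -S "$LXD_SOCKET" ] && command -v curl >/dev/null 2>&1; then
        curl -s --unix-socket "$LXD_SOCKET" "http://lxd$1"
        echo
    fi
}

lxd_get /1.0              # daemon and API metadata
lxd_get /1.0/containers   # "tell me about the containers on that machine"
lxd_get /1.0/networks     # "the networks on that machine"
```

Starting or stopping a container would likewise be a state-change request against that container's own endpoint, which is what makes the API usable as a remote, distributed hypervisor interface.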
+
+--------------------------------------------------------------------------------
+
+via: http://www.itworld.com/article/2973116/linux/linuxcon-exclusive-mark-shuttleworth-says-snappy-was-born-long-before-coreos-and-the-atomic-project.html
+
+作者:[Swapnil Bhartiya][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
+[1]:https://wiki.openstack.org/wiki/HypervisorSupportMatrix
\ No newline at end of file
diff --git a/sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md b/sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md
new file mode 100644
index 0000000000..6815fa64d8
--- /dev/null
+++ b/sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md
@@ -0,0 +1,186 @@
+Mhddfs – Combine Several Smaller Partition into One Large Virtual Storage
+================================================================================
+Let’s assume that you have 30GB of movies and you have 3 drives, each 20 GB in size. So how will you store them?
+
+Obviously you can split your videos across two or three different volumes and store them on the drives manually. This certainly is not a good idea; it is exhausting work which requires manual intervention and a lot of your time.
+
+Another solution is to create a [RAID array of disks][1]. RAID, however, is notorious for its cost in usable disk space (and, in configurations such as RAID 0, in storage reliability). Yet another solution is mhddfs.
+
+![Combine Multiple Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Combine-Multiple-Partitions-in-Linux.png)
+
+Mhddfs – Combine Multiple Partitions in Linux
+
+mhddfs is a driver for Linux that combines several mount points into one virtual disk. 
It is a fuse based driver, which provides a easy solution for large data storage. It combines all small file systems to create a single big virtual filesystem which contains every particle of its member filesystem including files and free spaces. + +#### Why you need Mhddfs? #### + +All your storage devices creates a single virtual pool and it can be mounted right at the boot. This small utility takes care of, which drive is full and which is empty and to write data to what drive, intelligently. Once you create virtual drives successfully, you can share your virtual filesystem using [SAMBA][2]. Your client will always see a huge drive and lots of free space. + +#### Features of Mhddfs #### + +- Get attributes of the file system and system information. +- Set attributes of the file system. +- Create, Read, Remove and write Directories and files. +- Support for file locks and Hardlinks on single device. + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pros of mhddfsCons of mhddfs
 Perfect for home users.mhddfs driver is not built in the Linux Kernel
 Simple to run. Required lots of processing power during runtime
 No evidence of Data loss No redundancy solution.
 Do not split the file. Hardlinks moving not supported
 Add new files to the combined virtual filesystem. 
 Manage the location where these files are saved. 
  Extended file attributes 
+
+### Installation of Mhddfs in Linux ###
+
+On Debian and Debian-like systems, you can install the mhddfs package using the following command.
+
+    # apt-get update && apt-get install mhddfs
+
+![Install Mhddfs on Debian based Systems](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Ubuntu.png)
+
+Install Mhddfs on Debian based Systems
+
+On RHEL/CentOS Linux systems, you need to enable the [epel-repository][3] and then execute the command below to install the mhddfs package.
+
+    # yum install mhddfs
+
+On Fedora 22+ systems, you can get it with the dnf package manager as shown below.
+
+    # dnf install mhddfs
+
+![Install Mhddfs on Fedora](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Fedora.png)
+
+Install Mhddfs on Fedora
+
+If the mhddfs package isn’t available from the epel repository, you need to resolve the following dependencies and then compile and install it from source as shown below.
+
+- FUSE header files
+- GCC
+- libc6 header files
+- uthash header files
+- libattr1 header files (optional)
+
+Next, download the latest source package as suggested below and compile it.
+
+    # wget http://mhddfs.uvw.ru/downloads/mhddfs_0.1.39.tar.gz
+    # tar -zxvf mhddfs*.tar.gz
+    # cd mhddfs-0.1.39/
+    # make
+
+You should now see the mhddfs binary in the current directory. As root, move it to /usr/bin/ and /usr/local/bin/.
+
+    # cp mhddfs /usr/bin/
+    # cp mhddfs /usr/local/bin/
+
+All set, mhddfs is ready to be used.
+
+### How do I use Mhddfs? ###
+
+1. Let’s see all the HDDs currently mounted on my system.
+
+    $ df -h
+
+![Check Mounted Devices](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mounted-Devices.gif)
+
+**Sample Output**
+
+    Filesystem      Size  Used Avail Use% Mounted on
+
+    /dev/sda1       511M  132K  511M   1% /boot/efi
+    /dev/sda2       451G   92G  336G  22% /
+    /dev/sdb1       1.9T  161G  1.7T   9% /media/avi/BD9B-5FCE
+    /dev/sdc1       555M  555M     0 100% /media/avi/Debian 8.1.0 M-A 1
+
+Notice the ‘Mount Point’ names here, which we will be using later.
+
+2. 
Create a directory `/mnt/virtual_hdd` where all these file systems will be grouped together:
+
+    # mkdir /mnt/virtual_hdd
+
+3. Then mount all the file systems, either as root or as a user who is a member of the FUSE group.
+
+    # mhddfs /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd -o allow_other
+
+![Mount All File System in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Mount-All-File-System-in-Linux.png)
+
+Mount All File System in Linux
+
+**Note**: We have used the mount point names of all the HDDs here. Obviously the mount points in your case will be different. Also notice that the “-o allow_other” option makes this virtual file system visible to all other users, not only to the person who created it.
+
+4. Now run “df -h” to see all the filesystems. It should contain the one you just created.
+
+    $ df -h
+
+![Verify Virtual File System Mount](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Virtual-File-System.png)
+
+Verify Virtual File System Mount
+
+You can perform any operation on the virtual file system you created, just as you would on a mounted drive.
+
+5. To create this virtual file system on every system boot, as root add the line below (which in your case will be different, depending upon your mount points) at the end of the /etc/fstab file.
+
+    mhddfs# /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd fuse defaults,allow_other 0 0
+
+6. If at any point of time you want to add a drive to or remove one from virtual_hdd: mount the new drive, copy the contents of the mount point /mnt/virtual_hdd, un-mount the volume, eject the drive you want to remove and/or mount the new drive you want to include, then mount the overall filesystem under /mnt/virtual_hdd using the mhddfs command again, and you should be done.
+
+#### How do I Un-Mount Virtual_hdd? 
#### + +Unmounting virtual_hdd is as easy as, + + # umount /mnt/virtual_hdd + +![Unmount Virtual Filesystem](http://www.tecmint.com/wp-content/uploads/2015/08/Unmount-Virtual-Filesystem.png) + +Unmount Virtual Filesystem + +Notice it is umount and not unmount. A lot of user type it wrong. + +That’s all for now. I am working on another post you people will love to read. Till then stay tuned and connected to Tecmint. Provide us with your valuable feedback in the comments below. Like and share us and help us get spread. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/combine-partitions-into-one-in-linux-using-mhddfs/ + +作者:[Avishek Kumar][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[2]:http://www.tecmint.com/mount-filesystem-in-linux/ +[3]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ \ No newline at end of file From 401102481e8c4ae48d054704c1e1e8d32e3bd3d9 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 24 Aug 2015 17:35:32 +0800 Subject: [PATCH 276/697] PUB:Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @strugglingyouth 有些地方原文有误,应根据理解修正。 --- ... RAID, Concepts of RAID and RAID Levels.md | 156 ++++++++++++++++++ ... 
RAID, Concepts of RAID and RAID Levels.md | 146 ---------------- 2 files changed, 156 insertions(+), 146 deletions(-) create mode 100644 published/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md delete mode 100644 translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md diff --git a/published/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md b/published/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md new file mode 100644 index 0000000000..d54e794459 --- /dev/null +++ b/published/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md @@ -0,0 +1,156 @@ +在 Linux 下使用 RAID(一):介绍 RAID 的级别和概念 +================================================================================ + +RAID 的意思是廉价磁盘冗余阵列(Redundant Array of Inexpensive Disks),但现在它被称为独立磁盘冗余阵列(Redundant Array of Independent Drives)。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是一系列放在一起,成为一个逻辑卷的磁盘集合。 + +![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) + +*在 Linux 中理解 RAID 设置* + +RAID 包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。将至少两个磁盘连接到一个 RAID 控制器,而成为一个逻辑卷,也可以将多个驱动器放在一个组中。一组磁盘只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 + +这个系列被命名为“在 Linux 下使用 RAID”,分为9个部分,包括以下主题: + +- 第1部分:介绍 RAID 的级别和概念 +- 第2部分:在Linux中如何设置 RAID0(条带化) +- 第3部分:在Linux中如何设置 RAID1(镜像化) +- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) +- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) +- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) +- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 +- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 +- 第9部分:在 Linux 中管理 RAID + +这是9篇系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 + +### 软件 RAID 和硬件 RAID ### + +软件 RAID 的性能较低,因为其使用主机的资源。 需要加载 RAID 软件以从软件 RAID 卷中读取数据。在加载 RAID 软件前,操作系统需要引导起来才能加载 RAID 软件。在软件 RAID 中无需物理硬件。零成本投资。 + +硬件 RAID 的性能较高。他们采用 PCI Express 卡物理地提供有专用的 RAID 控制器。它不会使用主机资源。他们有 NVRAM 用于缓存的读取和写入。缓存用于 RAID 重建时,即使出现电源故障,它会使用后备的电池电源保持缓存。对于大规模使用是非常昂贵的投资。 + +硬件 RAID 卡如下所示: + +![Hardware 
RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) + +*硬件 RAID* + +#### 重要的 RAID 概念 #### + +- **校验**方式用在 RAID 重建中从校验所保存的信息中重新生成丢失的内容。 RAID 5,RAID 6 基于校验。 +- **条带化**是将切片数据随机存储到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用2个磁盘,则每个磁盘存储我们的一半数据。 +- **镜像**被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1 中,它会保存相同的内容到其他盘上。 +- **热备份**只是我们的服务器上的一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动用于重建 RAID。 +- **块**是 RAID 控制器每次读写数据时的最小单位,最小 4KB。通过定义块大小,我们可以增加 I/O 性能。 + +RAID有不同的级别。在这里,我们仅列出在真实环境下的使用最多的 RAID 级别。 + +- RAID0 = 条带化 +- RAID1 = 镜像 +- RAID5 = 单磁盘分布式奇偶校验 +- RAID6 = 双磁盘分布式奇偶校验 +- RAID10 = 镜像 + 条带。(嵌套RAID) + +RAID 在大多数 Linux 发行版上使用名为 mdadm 的软件包进行管理。让我们先对每个 RAID 级别认识一下。 + +#### RAID 0 / 条带化 #### + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/9/9b/RAID_0.svg/150px-RAID_0.svg.png) + +条带化有很好的性能。在 RAID 0(条带化)中数据将使用切片的方式被写入到磁盘。一半的内容放在一个磁盘上,另一半内容将被写入到另一个磁盘。 + +假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。(LCTT 译注:实际上不可能按字节切片,是按数据块切片的。) + +在这种情况下,如果驱动器中的任何一个发生故障,我们就会丢失数据,因为一个盘中只有一半的数据,不能用于重建 RAID。不过,当比较写入速度和性能时,RAID 0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 + +- 高性能。 +- RAID 0 中容量零损失。 +- 零容错。 +- 写和读有很高的性能。 + +#### RAID 1 / 镜像化 #### + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/RAID_1.svg/150px-RAID_1.svg.png) + +镜像也有不错的性能。镜像可以对我们的数据做一份相同的副本。假设我们有两个2TB的硬盘驱动器,我们总共有4TB,但在镜像中,但是放在 RAID 控制器后面的驱动器形成了一个逻辑驱动器,我们只能看到这个逻辑驱动器有2TB。 + +当我们保存数据时,它将同时写入这两个2TB驱动器中。创建 RAID 1(镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以通过更换一个新的磁盘恢复 RAID 。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为另外的磁盘中也有相同的数据。所以是零数据丢失。 + +- 良好的性能。 +- 总容量丢失一半可用空间。 +- 完全容错。 +- 重建会更快。 +- 写性能变慢。 +- 读性能变好。 +- 能用于操作系统和小规模的数据库。 + +#### RAID 5 / 分布式奇偶校验 #### + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/6/64/RAID_5.svg/300px-RAID_5.svg.png) + +RAID 5 多用于企业级。 RAID 5 的以分布式奇偶校验的方式工作。奇偶校验信息将被用于重建数据。它从剩下的正常驱动器上的信息来重建。在驱动器发生故障时,这可以保护我们的数据。 + +假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 
的驱动器。奇偶校验信息将被存储在每个驱动器的256G中,而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 + +- 性能卓越 +- 读速度将非常好。 +- 写速度处于平均水准,如果我们不使用硬件 RAID 控制器,写速度缓慢。 +- 从所有驱动器的奇偶校验信息中重建。 +- 完全容错。 +- 1个磁盘空间将用于奇偶校验。 +- 可以被用在文件服务器,Web服务器,非常重要的备份中。 + +#### RAID 6 双分布式奇偶校验磁盘 #### + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/7/70/RAID_6.svg/300px-RAID_6.svg.png) + +RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大数量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以更换新的驱动器后重建数据。 + +它比 RAID 5 慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度就处于平均水准。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 + +- 性能不佳。 +- 读的性能很好。 +- 如果我们不使用硬件 RAID 控制器写的性能会很差。 +- 从两个奇偶校验驱动器上重建。 +- 完全容错。 +- 2个磁盘空间将用于奇偶校验。 +- 可用于大型阵列。 +- 用于备份和视频流中,用于大规模。 + +#### RAID 10 / 镜像+条带 #### + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/e/e6/RAID_10_01.svg/300px-RAID_10_01.svg.png) + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/a/ad/RAID_01.svg/300px-RAID_01.svg.png) + +RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 + +假设,我们有4个驱动器。当我逻辑卷上写数据时,它会使用镜像和条带的方式将数据保存到4个驱动器上。 + +如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下方式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入另外两个磁盘,所有数据都写入两块磁盘。这样可以将每个数据复制到另外的磁盘。 + +同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一组盘,“E”写入第二组盘。再次将“C”写入第一组盘,“M”到第二组盘。 + +- 良好的读写性能。 +- 总容量丢失一半的可用空间。 +- 容错。 +- 从副本数据中快速重建。 +- 由于其高性能和高可用性,常被用于数据库的存储中。 + +### 结论 ### + +在这篇文章中,我们已经了解了什么是 RAID 和在实际环境大多采用哪个级别的 RAID。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容可以基本满足你对 RAID 的了解。 + +在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/understanding-raid-setup-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ diff --git a/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of 
RAID and RAID Levels.md b/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md deleted file mode 100644 index 8ca0ecbd7e..0000000000 --- a/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md +++ /dev/null @@ -1,146 +0,0 @@ - -RAID的级别和概念的介绍 - 第1部分 -================================================================================ -RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 - - -![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) - -在 Linux 中理解 RAID 的设置 - -RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 - -这个系列被命名为RAID的构建共包含9个部分包括以下主题。 - -- 第1部分:RAID的级别和概念的介绍 -- 第2部分:在Linux中如何设置 RAID0(条带化) -- 第3部分:在Linux中如何设置 RAID1(镜像化) -- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) -- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) -- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) -- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 -- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 -- 第9部分:在 Linux 中管理 RAID - -这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 - - -### 软件RAID和硬件RAID ### - -软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 - -硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 - -硬件 RAID 卡如下所示: - -![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) - -硬件RAID - -#### 精选的 RAID 概念 #### - -- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 -- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 -- 镜像被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 -- 在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 -- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 - -RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 - -- RAID0 = 条带化 -- RAID1 = 镜像 -- RAID5 = 单个磁盘分布式奇偶校验 -- RAID6 
= 双盘分布式奇偶校验 -- RAID10 = 镜像 + 条带。(嵌套RAID) - -RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 - -#### RAID 0(或)条带化 #### - -条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 - -假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 - -在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 - -- 高性能。 -- 在 RAID0 上零容量损失。 -- 零容错。 -- 写和读有很高的性能。 - -#### RAID1(或)镜像化 #### - -镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两组2TB的硬盘驱动器,我们总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成一个逻辑驱动器,我们只能看到逻辑驱动器有2TB。 - -当我们保存数据时,它将同时写入2TB驱动器中。创建 RAID 1 (镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以恢复 RAID 通过更换一个新的磁盘。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为其他的磁盘中也有相同的数据。所以是零数据丢失。 - -- 良好的性能。 -- 空间的一半将在总容量丢失。 -- 完全容错。 -- 重建会更快。 -- 写性能将是缓慢的。 -- 读将会很好。 -- 被操作系统和数据库使用的规模很小。 - -#### RAID 5(或)分布式奇偶校验 #### - -RAID 5 多用于企业的水平。 RAID 5 的工作通过分布式奇偶校验的方法。奇偶校验信息将被用于重建数据。它需要留下的正常驱动器上的信息去重建。驱动器故障时,这会保护我们的数据。 - -假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 - -- 性能卓越 -- 读速度将非常好。 -- 如果我们不使用硬件 RAID 控制器,写速度是缓慢的。 -- 从所有驱动器的奇偶校验信息中重建。 -- 完全容错。 -- 1个磁盘空间将用于奇偶校验。 -- 可以被用在文件服务器,Web服务器,非常重要的备份中。 - -#### RAID 6 两个分布式奇偶校验磁盘 #### - -RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 - -它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 - -- 性能不佳。 -- 读的性能很好。 -- 如果我们不使用硬件 RAID 控制器写的性能会很差。 -- 从2奇偶校验驱动器上重建。 -- 完全容错。 -- 2个磁盘空间将用于奇偶校验。 -- 可用于大型阵列。 -- 在备份和视频流中大规模使用。 - -#### RAID 10(或)镜像+条带 #### - -RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 - -假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 - -如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 - -同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 - -- 
良好的读写性能。 -- 空间的一半将在总容量丢失。 -- 容错。 -- 从备份数据中快速重建。 -- 它的高性能和高可用性常被用于数据库的存储中。 - -### 结论 ### - -在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 的哪个级别。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 - -在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/understanding-raid-setup-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ From 43ec03e9126a76f442cb07c3a995670b8649d184 Mon Sep 17 00:00:00 2001 From: Jerry Ling Date: Mon, 24 Aug 2015 18:00:30 +0800 Subject: [PATCH 277/697] =?UTF-8?q?[20150824=20How=20to=20create=20an=20AP?= =?UTF-8?q?=20in=20Ubuntu=2015.04=20to=20connect=20to=20Android=20or=20iPh?= =?UTF-8?q?one]=20=E7=BF=BB=E8=AF=91=E4=B8=AD?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone --- ...u 15.04 to connect to Android or iPhone.md | 74 ------------------- 1 file changed, 74 deletions(-) delete mode 100644 sources/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md diff --git a/sources/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md b/sources/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md deleted file mode 100644 index a8e21419fb..0000000000 --- a/sources/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md +++ /dev/null @@ -1,74 +0,0 @@ -How to create an AP in Ubuntu 15.04 to connect to Android/iPhone -================================================================================ -I tried creating a wireless access point via Gnome Network Manager in 
15.04 and was successful. I’m sharing the steps with our readers. Please note: you must have a wifi card which allows you to create an Access Point. If you want to know how to find that, type iw list in a terminal. - -If you don’t have iw installed, you can install iw in Ubuntu using the command sudo apt-get install iw. - -After you type iw list, look for supported interface section, where it should be a entry called AP like the one shown below: - -Supported interface modes: - -* IBSS -* managed -* AP -* AP/VLAN -* monitor -* mesh point - -Let’s see the steps in detail - -1. Disconnect WIFI. Get a an internet cable and plug into your laptop so that you are connected to a wired internet connection -1. Go to Network Icon on the top panel -> Edit Connections then click the Add button in the pop-up window -1. Choose Wi-Fi from the drop-down menu -1. Next, - -a. Type in a connection name e.g. Hotspot - -b. Type in a SSID e.g. Hotspot - -c. Select mode: Infrastructure - -d. Device MAC address: select your wireless card from drop-down menu - -![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome1.jpg) - -1. Go to Wi-Fi Security tab, select security type WPA & WPA2 Personal and set a password -1. Go to IPv4 Settings tab, from Method drop-down box, select Shared to other computers - -![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome4.jpg) - -1. Go to IPv6 tab and set Method to ignore (do this only if you do not use IPv6) -1. Hit the “Save” button to save the configuration -1. Open a terminal from the menu/dash -1. Now, edit the connection with you just created via network settings - -VIM editor: - - sudo vim /etc/NetworkManager/system-connections/Hotspot - -Gedit: - - gksu gedit /etc/NetworkManager/system-connections/Hotspot - -Replace name Hotspot with the connection name you have given in step 4 - -![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome2.jpg?resize=640%2C402) - -1. 
Change the line mode=infrastructure to mode=ap and save the file -1. Once you save the file, you should be able to see the wifi named Hotspot showing up in the list of available wifi networks. (If the network does not show, disable and enable wifi ) - -![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome3.jpg?resize=290%2C375) - -1. You can now connect your Android phone. Connection tested using Xioami Mi4i running Android 5.0 (Downloaded 1GB to test speed and reliability) - --------------------------------------------------------------------------------- - -via: http://www.linuxveda.com/2015/08/23/how-to-create-an-ap-in-ubuntu-15-04-to-connect-to-androidiphone/ - -作者:[Sayantan Das][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxveda.com/author/sayantan_das/ \ No newline at end of file From 34c8df979a55aa0507c226d1bfcfc7efe1f5a8d5 Mon Sep 17 00:00:00 2001 From: Jerry Ling Date: Mon, 24 Aug 2015 19:22:45 +0800 Subject: [PATCH 278/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...u 15.04 to connect to Android or iPhone.md | 74 +++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 translated/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md diff --git a/translated/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md b/translated/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md new file mode 100644 index 0000000000..02aef62d82 --- /dev/null +++ b/translated/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md @@ -0,0 +1,74 @@ +如何在 Ubuntu 15.04 下创建连接至 Android/iOS 的 AP 
+================================================================================ +我成功地在 Ubuntu 15.04 下用 Gnome Network Manager 创建了一个无线 AP 热点,接下来我要分享一下我的步骤。请注意:你必须要有一个支持创建 AP 热点的无线网卡。如果你不确定自己的网卡是否支持,可以在终端(Terminal)里输入 `iw list` 查看。 + +如果你没有安装 `iw` 的话,在 Ubuntu 下你可以使用 `sudo apt-get install iw` 进行安装。 + +在你键入 `iw list` 之后,找到 Supported interface modes(支持的接口模式)部分,其中应该有一个名为 AP 的条目,类似下面这样: + +Supported interface modes: + +* IBSS +* managed +* AP +* AP/VLAN +* monitor +* mesh point + +让我们一步步来看: + +1. 断开 WIFI 连接,使用网线接入你的笔记本,让它处于有线上网状态。 +1. 在顶栏面板里点击网络图标 -> Edit Connections(编辑连接)-> 在弹出窗口里点击 Add(新增)按钮。 +1. 在下拉菜单内选择 Wi-Fi。 +1. 接下来, + +a. 输入一个连接名,比如:Hotspot + +b. 输入一个 SSID,比如:Hotspot + +c. 选择模式(mode):Infrastructure + +d. 设备 MAC 地址:在下拉菜单里选择你的无线设备 + +![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome1.jpg) + +1. 进入 Wi-Fi 安全选项卡,选择安全类型 WPA & WPA2 Personal 并且设置密码。 +1. 进入 IPv4 设置选项卡,在 Method(方法)下拉菜单里,选择 Shared to other computers(共享至其他电脑)。 + +![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome4.jpg) + +1. 进入 IPv6 选项卡,把 Method(方法)设置为 ignore(忽略)(只有在你不使用 IPv6 的情况下才这么做)。 +1. 点击 Save(保存)按钮以保存配置。 +1. 从 menu/dash 里打开 Terminal。 +1. 接下来,修改你刚刚通过网络设置创建的连接。 + +使用 VIM 编辑器: + + sudo vim /etc/NetworkManager/system-connections/Hotspot + +使用 Gedit 编辑器: + + gksu gedit /etc/NetworkManager/system-connections/Hotspot + +把文件名中的 Hotspot 替换为你在第4步里起的连接名。 + +![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome2.jpg?resize=640%2C402) + +1. 把 `mode=infrastructure` 这一行改成 `mode=ap`,并保存文件。 +1. 一旦保存了这个文件,你应该就能在可用的 Wifi 网络列表里看到名为 Hotspot 的热点了。(如果没有出现,请在顶栏里关闭再打开一次 Wifi 选项) + +![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome3.jpg?resize=290%2C375) + +1. 你现在可以把你的设备连上 Wifi 了。
已经过 Android 5.0的小米4测试.(下载了1GB的文件以测试速度与稳定性) + +-------------------------------------------------------------------------------- + +via: http://www.linuxveda.com/2015/08/23/how-to-create-an-ap-in-ubuntu-15-04-to-connect-to-androidiphone/ + +作者:[Sayantan Das][a] +译者:[jerryling315](https://github.com/jerryling315) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxveda.com/author/sayantan_das/ From 8168cc7b2e85e9b26839ab33c45ac7a3bca579be Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 24 Aug 2015 22:13:59 +0800 Subject: [PATCH 279/697] Delete Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md --- ...ith Double Distributed Parity) in Linux.md | 321 ------------------ 1 file changed, 321 deletions(-) delete mode 100644 sources/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md diff --git a/sources/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md b/sources/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md deleted file mode 100644 index ea1d5993c0..0000000000 --- a/sources/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md +++ /dev/null @@ -1,321 +0,0 @@ -struggling 翻译中 -Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5 -================================================================================ -RAID 6 is upgraded version of RAID 5, where it has two distributed parity which provides fault tolerance even after two drives fails. Mission critical system still operational incase of two concurrent disks failures. It’s alike RAID 5, but provides more robust, because it uses one more disk for parity. 
- -In our earlier article, we’ve seen distributed parity in RAID 5, but in this article we will going to see RAID 6 with double distributed parity. Don’t expect extra performance than any other RAID, if so we have to install a dedicated RAID Controller too. Here in RAID 6 even if we loose our 2 disks we can get the data back by replacing a spare drive and build it from parity. - -![Setup RAID 6 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Setup-RAID-6-in-Linux.jpg) - -Setup RAID 6 in Linux - -To setup a RAID 6, minimum 4 numbers of disks or more in a set are required. RAID 6 have multiple disks even in some set it may be have some bunch of disks, while reading, it will read from all the drives, so reading would be faster whereas writing would be poor because it has to stripe over multiple disks. - -Now, many of us comes to conclusion, why we need to use RAID 6, when it doesn’t perform like any other RAID. Hmm… those who raise this question need to know that, if they need high fault tolerance choose RAID 6. In every higher environments with high availability for database, they use RAID 6 because database is the most important and need to be safe in any cost, also it can be useful for video streaming environments. - -#### Pros and Cons of RAID 6 #### - -- Performance are good. -- RAID 6 is expensive, as it requires two independent drives are used for parity functions. -- Will loose a two disks capacity for using parity information (double parity). -- No data loss, even after two disk fails. We can rebuilt from parity after replacing the failed disk. -- Reading will be better than RAID 5, because it reads from multiple disk, But writing performance will be very poor without dedicated RAID Controller. - -#### Requirements #### - -Minimum 4 numbers of disks are required to create a RAID 6. If you want to add more disks, you can, but you must have dedicated raid controller. In software RAID, we will won’t get better performance in RAID 6. 
So we need a physical RAID controller. - -Those who are new to RAID setup, we recommend to go through RAID articles below. - -- [Basic Concepts of RAID in Linux – Part 1][1] -- [Creating Software RAID 0 (Stripe) in Linux – Part 2][2] -- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3] - -#### My Server Setup #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.228 - Hostname : rd6.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd - Disk 4 [20GB] : /dev/sde - -This article is a Part 5 of a 9-tutorial RAID series, here we are going to see how we can create and setup Software RAID 6 or Striping with Double Distributed Parity in Linux systems or servers using four 20GB disks named /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde. - -### Step 1: Installing mdadm Tool and Examine Drives ### - -1. If you’re following our last two Raid articles (Part 2 and Part 3), where we’ve already shown how to install ‘mdadm‘ tool. If you’re new to this article, let me explain that ‘mdadm‘ is a tool to create and manage Raid in Linux systems, let’s install the tool using following command according to your Linux distribution. - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -2. After installing the tool, now it’s time to verify the attached four drives that we are going to use for raid creation using the following ‘fdisk‘ command. - - # fdisk -l | grep sd - -![Check Hard Disk in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Linux-Disks.png) - -Check Disks in Linux - -3. Before creating a RAID drives, always examine our disk drives whether there is any RAID is already created on the disks. 
- - # mdadm -E /dev/sd[b-e] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde - -![Check Raid on Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Disk-Raid.png) - -Check Raid on Disk - -**Note**: In the above image depicts that there is no any super-block detected or no RAID is defined in four disk drives. We may move further to start creating RAID 6. - -### Step 2: Drive Partitioning for RAID 6 ### - -4. Now create partitions for raid on ‘/dev/sdb‘, ‘/dev/sdc‘, ‘/dev/sdd‘ and ‘/dev/sde‘ with the help of following fdisk command. Here, we will show how to create partition on sdb drive and later same steps to be followed for rest of the drives. - -**Create /dev/sdb Partition** - - # fdisk /dev/sdb - -Please follow the instructions as shown below for creating partition. - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. -- Next choose the partition number as 1. -- Define the default value by just pressing two times Enter key. -- Next press ‘P‘ to print the defined partition. -- Press ‘L‘ to list all available types. -- Type ‘t‘ to choose the partitions. -- Choose ‘fd‘ for Linux raid auto and press Enter to apply. -- Then again use ‘P‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition.png) - -Create /dev/sdb Partition - -**Create /dev/sdb Partition** - - # fdisk /dev/sdc - -![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition.png) - -Create /dev/sdc Partition - -**Create /dev/sdd Partition** - - # fdisk /dev/sdd - -![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition.png) - -Create /dev/sdd Partition - -**Create /dev/sde Partition** - - # fdisk /dev/sde - -![Create sde Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sde-Partition.png) - -Create /dev/sde Partition - -5. 
After creating partitions, it’s always good habit to examine the drives for super-blocks. If super-blocks does not exist than we can go head to create a new RAID setup. - - # mdadm -E /dev/sd[b-e]1 - - - or - - # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 - -![Check Raid on New Partitions](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Partitions.png) - -Check Raid on New Partitions - -### Step 3: Creating md device (RAID) ### - -6. Now it’s time to create Raid device ‘md0‘ (i.e. /dev/md0) and apply raid level on all newly created partitions and confirm the raid using following commands. - - # mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 - # cat /proc/mdstat - -![Create Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Raid-6-Device.png) - -Create Raid 6 Device - -7. You can also check the current process of raid using watch command as shown in the screen grab below. - - # watch -n1 cat /proc/mdstat - -![Check Raid 6 Process](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Process.png) - -Check Raid 6 Process - -8. Verify the raid devices using the following command. - -# mdadm -E /dev/sd[b-e]1 - -**Note**:: The above command will be display the information of the four disks, which is quite long so not possible to post the output or screen grab here. - -9. Next, verify the RAID array to confirm that the re-syncing is started. - - # mdadm --detail /dev/md0 - -![Check Raid 6 Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Array.png) - -Check Raid 6 Array - -### Step 4: Creating FileSystem on Raid Device ### - -10. Create a filesystem using ext4 for ‘/dev/md0‘ and mount it under /mnt/raid5. Here we’ve used ext4, but you can use any type of filesystem as per your choice. 
- - # mkfs.ext4 /dev/md0 - -![Create File System on Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Create-File-System-on-Raid.png) - -Create File System on Raid 6 - -11. Mount the created filesystem under /mnt/raid6 and verify the files under mount point, we can see lost+found directory. - - # mkdir /mnt/raid6 - # mount /dev/md0 /mnt/raid6/ - # ls -l /mnt/raid6/ - -12. Create some files under mount point and append some text in any one of the file to verify the content. - - # touch /mnt/raid6/raid6_test.txt - # ls -l /mnt/raid6/ - # echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt - # cat /mnt/raid6/raid6_test.txt - -![Verify Raid Content](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Content.png) - -Verify Raid Content - -13. Add an entry in /etc/fstab to auto mount the device at the system startup and append the below entry, mount point may differ according to your environment. - - # vim /etc/fstab - - /dev/md0 /mnt/raid6 ext4 defaults 0 0 - -![Automount Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Automount-Raid-Device.png) - -Automount Raid 6 Device - -14. Next, execute ‘mount -a‘ command to verify whether there is any error in fstab entry. - - # mount -av - -![Verify Raid Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Automount-Raid-Devices.png) - -Verify Raid Automount - -### Step 5: Save RAID 6 Configuration ### - -15. Please note by default RAID don’t have a config file. We have to save it by manually using below command and then verify the status of device ‘/dev/md0‘. - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - # mdadm --detail /dev/md0 - -![Save Raid 6 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png) - -Save Raid 6 Configuration - -![Check Raid 6 Status](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png) - -Check Raid 6 Status - -### Step 6: Adding a Spare Drives ### - -16. 
Now it has 4 disks and there are two parity information’s available. In some cases, if any one of the disk fails we can get the data, because there is double parity in RAID 6. - -May be if the second disk fails, we can add a new one before loosing third disk. It is possible to add a spare drive while creating our RAID set, But I have not defined the spare drive while creating our raid set. But, we can add a spare drive after any drive failure or while creating the RAID set. Now we have already created the RAID set now let me add a spare drive for demonstration. - -For the demonstration purpose, I’ve hot-plugged a new HDD disk (i.e. /dev/sdf), let’s verify the attached disk. - - # ls -l /dev/ | grep sd - -![Check New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-New-Disk.png) - -Check New Disk - -17. Now again confirm the new attached disk for any raid is already configured or not using the same mdadm command. - - # mdadm --examine /dev/sdf - -![Check Raid on New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Disk.png) - -Check Raid on New Disk - -**Note**: As usual, like we’ve created partitions for four disks earlier, similarly we’ve to create new partition on the new plugged disk using fdisk command. - - # fdisk /dev/sdf - -![Create sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Partition-on-sdf.png) - -Create /dev/sdf Partition - -18. Again after creating new partition on /dev/sdf, confirm the raid on the partition, include the spare drive to the /dev/md0 raid device and verify the added device. 
- - # mdadm --examine /dev/sdf - # mdadm --examine /dev/sdf1 - # mdadm --add /dev/md0 /dev/sdf1 - # mdadm --detail /dev/md0 - -![Verify Raid on sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-on-sdf.png) - -Verify Raid on sdf Partition - -![Add sdf Partition to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Add-sdf-Partition-to-Raid.png) - -Add sdf Partition to Raid - -![Verify sdf Partition Details](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-sdf-Details.png) - -Verify sdf Partition Details - -### Step 7: Check Raid 6 Fault Tolerance ### - -19. Now, let us check whether spare drive works automatically, if anyone of the disk fails in our Array. For testing, I’ve personally marked one of the drive is failed. - -Here, we’re going to mark /dev/sdd1 as failed drive. - - # mdadm --manage --fail /dev/md0 /dev/sdd1 - -![Check Raid 6 Fault Tolerance](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Failover.png) - -Check Raid 6 Fault Tolerance - -20. Let me get the details of RAID set now and check whether our spare started to sync. - - # mdadm --detail /dev/md0 - -![Check Auto Raid Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Auto-Raid-Syncing.png) - -Check Auto Raid Syncing - -**Hurray!** Here, we can see the spare got activated and started rebuilding process. At the bottom we can see the faulty drive /dev/sdd1 listed as faulty. We can monitor build process using following command. - - # cat /proc/mdstat - -![Raid 6 Auto Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-6-Auto-Syncing.png) - -Raid 6 Auto Syncing - -### Conclusion: ### - -Here, we have seen how to setup RAID 6 using four disks. This RAID level is one of the expensive setup with high redundancy. We will see how to setup a Nested RAID 10 and much more in the next articles. Till then, stay connected with TECMINT. 
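The whole procedure above can be reviewed as a single command sequence before touching any real disks. The sketch below is a hypothetical helper (the `print_raid6_steps` function is ours, not part of mdadm) that only prints, in order, the commands used in this article; nothing is executed, so it is safe to run anywhere:

```shell
# Print the RAID 6 command sequence from this article without executing it.
print_raid6_steps() {
    local md=$1; shift            # first argument: md device; the rest: member partitions
    cat <<EOF
mdadm --create $md --level=6 --raid-devices=$# $*
mkfs.ext4 $md
mkdir /mnt/raid6
mount $md /mnt/raid6
mdadm --detail --scan --verbose >> /etc/mdadm.conf
EOF
}

print_raid6_steps /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# First printed line: mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```

Once the printed sequence looks right, the individual commands can be run as root, step by step, exactly as described above.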
- -------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid-6-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ -[3]:http://www.tecmint.com/create-raid1-in-linux/ \ No newline at end of file From cb66c17ac5617ef4f84b17efef743bed9fe83369 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 24 Aug 2015 22:15:25 +0800 Subject: [PATCH 280/697] Create Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md --- ...ith Double Distributed Parity) in Linux.md | 321 ++++++++++++++++++ 1 file changed, 321 insertions(+) create mode 100644 translated/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md diff --git a/translated/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md b/translated/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md new file mode 100644 index 0000000000..1890a242e2 --- /dev/null +++ b/translated/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md @@ -0,0 +1,321 @@ + +在 Linux 中安装 RAID 6(条带化双分布式奇偶校验) - 第5部分 +================================================================================ +RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即使两个磁盘同时发生故障,它依然有容错能力,系统的关键任务仍然能正常运行。它与 RAID 5 相似,但更为健壮,因为它多用了一个磁盘来保存奇偶校验。 + +在之前的文章中,我们已经看过了 RAID 5 的分布式奇偶校验,而在本文中,我们将要看到的是 RAID 6 的双分布式奇偶校验。不要期望它比其他 RAID 级别有更好的性能,除非同时安装了专用的 RAID 控制器。在 RAID 6 中,即使我们坏了两个磁盘,仍然可以通过更换磁盘、从奇偶校验中重建数据来取回数据。 + +![Setup RAID 6 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Setup-RAID-6-in-Linux.jpg) + +在 Linux 中安装 
RAID 6 + +要建立一个 RAID 6,一组最少需要4个磁盘,有些配置中甚至会用到更多磁盘。读取数据时,它会同时从所有磁盘读取,所以读取速度会更快;写入数据时,因为要把数据条带化地写到多个磁盘上,所以写入性能会较差。 + +现在,很多人都在讨论为什么我们需要使用 RAID 6,毕竟它的性能和其他 RAID 相比并不突出。提出这个问题的人需要知道:如果需要高容错性,就应该选择 RAID 6。在每一个对数据库可用性要求较高的环境中都会用到 RAID 6,因为数据库最为重要,无论花费多少都要保证其安全;它在视频流环境中也非常有用。 + +#### RAID 6 的优点和缺点 #### + +- 性能很不错。 +- RAID 6 比较昂贵,因为它要用两个独立磁盘的容量来实现奇偶校验功能。 +- 会损失两个磁盘的容量来保存奇偶校验信息(双奇偶校验)。 +- 即使两个磁盘损坏,数据也不会丢失。我们可以在更换损坏的磁盘后从奇偶校验中重建数据。 +- 读性能比 RAID 5 更好,因为它从多个磁盘读取;但对于没有专用 RAID 控制器的设备,写性能将非常差。 + +#### 要求 #### + +要创建一个 RAID 6 最少需要4个磁盘。你也可以添加更多的磁盘,但必须有专用的 RAID 控制器。使用软件 RAID 时,RAID 6 并不会得到更好的性能,所以我们需要一个物理 RAID 控制器。 + +如果你是刚开始接触 RAID 设置,我们建议先看完以下 RAID 文章。 + +- [Linux 中 RAID 的基本概念 – 第一部分][1] +- [在 Linux 上创建软件 RAID 0 (条带化) – 第二部分][2] +- [在 Linux 上创建软件 RAID 1 (镜像) – 第三部分][3] + +#### 我的服务器设置 #### + + 操作系统 : CentOS 6.5 Final + IP 地址 : 192.168.0.228 + 主机名 : rd6.tecmintlocal.com + 磁盘 1 [20GB] : /dev/sdb + 磁盘 2 [20GB] : /dev/sdc + 磁盘 3 [20GB] : /dev/sdd + 磁盘 4 [20GB] : /dev/sde + +这篇文章是 RAID 系列9篇教程的第5部分,在这里我们将看到如何使用四个 20GB 的磁盘 /dev/sdb、/dev/sdc、/dev/sdd 和 /dev/sde,在 Linux 系统或服务器上创建和设置软件 RAID 6(条带化双分布式奇偶校验)。
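上面提到的容量开销(两个磁盘的容量用于双奇偶校验)可以用一个简单的 shell 函数来演示。这只是一个用于说明的假设性小脚本(函数名 raid6_usable 是本文虚构的),并非 mdadm 的功能:

```shell
# 演示:计算 RAID 6 阵列的可用容量
# 可用容量 = (磁盘数 - 2) × 单盘容量,其中两个磁盘的容量用于双奇偶校验
raid6_usable() {
    local disks=$1 size_gb=$2
    if [ "$disks" -lt 4 ]; then
        echo "RAID 6 最少需要 4 个磁盘" >&2
        return 1
    fi
    echo $(( (disks - 2) * size_gb ))
}

raid6_usable 4 20    # 本文的设置:4 个 20GB 磁盘,输出 40(GB)
```

也就是说,本文的四盘阵列最终只有 40GB 可用空间,这正是 RAID 6 高容错能力的代价。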
+ +### 第1步:安装 mdadm 工具,并检查磁盘 ### + +1. 如果你看过我们最近的两篇 RAID 文章(第2篇和第3篇),就会知道我们已经在其中展示了如何安装 mdadm 工具。如果你是直接看的这篇文章,这里先解释一下:mdadm 是一个在 Linux 系统中创建和管理 RAID 的工具。首先根据你的 Linux 发行版,使用以下命令来安装。 + + # yum install mdadm [on RedHat systems] + # apt-get install mdadm [on Debian systems] + +2. 安装该工具后,使用下面的 fdisk 命令来检查我们将用于创建 RAID 的四个磁盘。 + + # fdisk -l | grep sd + +![Check Hard Disk in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Linux-Disks.png) + +在 Linux 中检查磁盘 + +3. 在创建 RAID 前,先检查一下我们的磁盘上是否已经创建过 RAID 分区。 + + # mdadm -E /dev/sd[b-e] + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde + +![Check Raid on Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Disk-Raid.png) + +在磁盘上检查 Raid 分区 + +**注意**: 在上面的图片中,没有检测到任何 super-block,也就是说这四个磁盘上不存在 RAID。现在我们可以开始创建 RAID 6。 + +### 第2步:为 RAID 6 创建磁盘分区 ### + +4. 现在使用下面的 fdisk 命令,在 /dev/sdb、/dev/sdc、/dev/sdd 和 /dev/sde 上为 RAID 创建分区。在这里,我们将展示如何在 sdb 磁盘上创建分区,同样的步骤也适用于其他磁盘。 + +**创建 /dev/sdb 分区** + + # fdisk /dev/sdb + +请按照以下说明,如下图所示创建分区。 + +- 按 'n' 创建新的分区。 +- 然后按 'P' 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 'P' 来打印创建好的分区。 +- 按 'L',列出所有可用的类型。 +- 按 't' 去修改分区类型。 +- 键入 'fd' 设置为 Linux 的 RAID 类型,然后按回车确认。 +- 然后再次使用 'p' 查看我们所做的更改。 +- 使用 'w' 保存更改。 + +![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition.png) + +创建 /dev/sdb 分区 + +**创建 /dev/sdc 分区** + + # fdisk /dev/sdc + +![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition.png) + +创建 /dev/sdc 分区 + +**创建 /dev/sdd 分区** + + # fdisk /dev/sdd + +![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition.png) + +创建 /dev/sdd 分区 + +**创建 /dev/sde 分区** + + # fdisk /dev/sde + +![Create sde Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sde-Partition.png) + +创建 /dev/sde 分区 + +5. 创建好分区后,检查一下磁盘的 super-block 是个好习惯。如果不存在 super-block,我们就可以继续创建新的 RAID。 + + # mdadm -E /dev/sd[b-e]1 + + 或者 + + # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 
/dev/sde1 + +![Check Raid on New Partitions](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Partitions.png) + +在新分区中检查 Raid + +### 第3步:创建 md 设备(RAID) ### + +6. 现在可以创建 RAID 设备 md0(即 /dev/md0)了:在所有新创建的分区上应用 RAID 级别,然后使用以下命令确认 RAID 的状态。 + + # mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 + # cat /proc/mdstat + +![Create Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Raid-6-Device.png) + +创建 Raid 6 设备 + +7. 你还可以使用 watch 命令来查看当前 RAID 的构建进度,如下图所示。 + + # watch -n1 cat /proc/mdstat + +![Check Raid 6 Process](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Process.png) + +检查 Raid 6 进度 + +8. 使用以下命令验证 RAID 设备。 + + # mdadm -E /dev/sd[b-e]1 + +**注意**: 上述命令将显示四个磁盘的信息,输出相当长,所以这里没有贴出完整的输出。 + +9. 接下来,验证 RAID 阵列,以确认重新同步(re-syncing)已经开始。 + + # mdadm --detail /dev/md0 + +![Check Raid 6 Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Array.png) + +检查 Raid 6 阵列 + +### 第4步:在 RAID 设备上创建文件系统 ### + +10. 使用 ext4 为 /dev/md0 创建一个文件系统,并将它挂载到 /mnt/raid6。这里我们使用的是 ext4,但你可以根据自己的选择使用任意类型的文件系统。 + + # mkfs.ext4 /dev/md0 + +![Create File System on Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Create-File-System-on-Raid.png) + +在 Raid 6 上创建文件系统 + +11. 将创建的文件系统挂载到 /mnt/raid6,并验证挂载点下的文件,我们可以看到 lost+found 目录。 + + # mkdir /mnt/raid6 + # mount /dev/md0 /mnt/raid6/ + # ls -l /mnt/raid6/ + +12. 在挂载点下创建一些文件,并在任意一个文件中添加一些文字,然后验证其内容。 + + # touch /mnt/raid6/raid6_test.txt + # ls -l /mnt/raid6/ + # echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt + # cat /mnt/raid6/raid6_test.txt + +![Verify Raid Content](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Content.png) + +验证 Raid 内容 + +13. 在 /etc/fstab 中添加以下条目,使系统启动时自动挂载设备;环境不同,挂载点可能会有所不同。 + + # vim /etc/fstab + + /dev/md0 /mnt/raid6 ext4 defaults 0 0 + +![Automount Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Automount-Raid-Device.png) + +自动挂载 Raid 6 设备 + +14. 接下来,执行 mount -a 命令来验证 fstab 中的条目是否有错误。 + + # mount -av + 
+![Verify Raid Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Automount-Raid-Devices.png) + +验证 Raid 是否自动挂载 + +### 第5步:保存 RAID 6 的配置 ### + +15. 请注意,RAID 默认没有配置文件。我们需要使用以下命令手动保存它,然后再检查设备 /dev/md0 的状态。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + # mdadm --detail /dev/md0 + +![Save Raid 6 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png) + +保存 Raid 6 配置 + +![Check Raid 6 Status](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png) + +检查 Raid 6 状态 + +### 第6步:添加备用磁盘 ### + +16. 现在,阵列使用了4个磁盘,其中两个磁盘的容量用于保存奇偶校验信息。由于 RAID 6 使用双奇偶校验,即使其中任意一个磁盘出现故障,我们仍可以取回数据。 + +如果第二个磁盘也出现了故障,我们可以在第三块磁盘损坏前添加一块新磁盘,它可以作为备用磁盘并入 RAID 集合。其实在创建 RAID 集合时就可以定义备用磁盘,只是我当时没有这么做;不过,无论是在磁盘损坏后还是在创建 RAID 集合时,我们都可以添加备用磁盘。现在 RAID 已经创建好了,下面让我演示如何添加备用磁盘。 + +为了达到演示的目的,我热插入了一个新的 HDD 磁盘(即 /dev/sdf),让我们来验证接入的磁盘。 + + # ls -l /dev/ | grep sd + +![Check New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-New-Disk.png) + +检查新磁盘 + +17. 现在再次使用 mdadm 命令,确认新连接的磁盘上没有配置过 RAID。 + + # mdadm --examine /dev/sdf + +![Check Raid on New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Disk.png) + +在新磁盘中检查 Raid + +**注意**: 像往常一样,就像我们早前为四个磁盘创建分区那样,我们同样要使用 fdisk 命令为新插入的磁盘创建新分区。 + + # fdisk /dev/sdf + +![Create sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Partition-on-sdf.png) + +为 /dev/sdf 创建分区 + +18. 在 /dev/sdf 上创建新分区后,先确认该分区上不存在 RAID,然后将其作为备用磁盘添加到 RAID 设备 /dev/md0 中,并验证添加的设备。 + + # mdadm --examine /dev/sdf + # mdadm --examine /dev/sdf1 + # mdadm --add /dev/md0 /dev/sdf1 + # mdadm --detail /dev/md0 + +![Verify Raid on sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-on-sdf.png) + +在 sdf 分区上验证 Raid + +![Add sdf Partition to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Add-sdf-Partition-to-Raid.png) + +为 RAID 添加 sdf 分区 + +![Verify sdf Partition Details](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-sdf-Details.png) + +验证 sdf 分区信息 + +### 第7步:检查 RAID 6 容错 
### + +19. 现在,让我们检查当阵列中的任何一个磁盘出现故障时,备用磁盘是否能自动开始工作。为了测试,我手动将一个磁盘标记为故障设备。 + +在这里,我们把 /dev/sdd1 标记为故障磁盘。 + + # mdadm --manage --fail /dev/md0 /dev/sdd1 + +![Check Raid 6 Fault Tolerance](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Failover.png) + +检查 Raid 6 容错 + +20. 让我们查看 RAID 的详细信息,并检查备用磁盘是否已开始同步。 + + # mdadm --detail /dev/md0 + +![Check Auto Raid Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Auto-Raid-Syncing.png) + +检查 Raid 自动同步 + +**哇塞!** 这里,我们看到备用磁盘已被激活,并开始了重建过程。在底部,我们可以看到有故障的磁盘 /dev/sdd1 被标记为 faulty。可以使用下面的命令查看重建进度。 + + # cat /proc/mdstat + +![Raid 6 Auto Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-6-Auto-Syncing.png) + +Raid 6 自动同步 + +### 结论 ### + +在这里,我们看到了如何使用四个磁盘设置 RAID 6。这种 RAID 级别是具有高冗余的昂贵设置之一。在接下来的文章中,我们将看到如何建立嵌套的 RAID 10 以及更多内容。至此,请继续关注 TECMINT。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid-6-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[2]:http://www.tecmint.com/create-raid0-in-linux/ +[3]:http://www.tecmint.com/create-raid1-in-linux/ From 079be8dc226607baac11d0663901e3b3f6a15c2c Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 24 Aug 2015 22:29:00 +0800 Subject: [PATCH 281/697] =?UTF-8?q?PUB:Part=202=20-=20Creating=20Software?= =?UTF-8?q?=20RAID0=20(Stripe)=20on=20=E2=80=98Two=20Devices=E2=80=99=20Us?= =?UTF-8?q?ing=20=E2=80=98mdadm=E2=80=99=20Tool=20in=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @strugglingyouth --- ...Two Devices’ Using ‘mdadm’ Tool in Linux.md | 219 ++++++++++++++++++ ...Two Devices’ Using ‘mdadm’ Tool in Linux.md | 218 ----------------- 2 files changed, 219 
insertions(+), 218 deletions(-) create mode 100644 published/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md delete mode 100644 translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md diff --git a/published/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md b/published/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md new file mode 100644 index 0000000000..650897d1d5 --- /dev/null +++ b/published/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md @@ -0,0 +1,219 @@ +在 Linux 下使用 RAID(一):使用 mdadm 工具创建软件 RAID 0 (条带化) +================================================================================ + +RAID 即廉价磁盘冗余阵列,其高可用性和可靠性适用于大规模环境中,相比正常使用,数据更需要被保护。RAID 是一些磁盘的集合,是包含一个阵列的逻辑卷。驱动器可以组合起来成为一个阵列或称为(组的)集合。 + +创建 RAID 最少应使用2个连接到 RAID 控制器的磁盘组成,来构成逻辑卷,可以根据定义的 RAID 级别将更多的驱动器添加到一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 也叫做穷人 RAID。 + +![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) + +*在 Linux 中创建 RAID0* + +使用 RAID 的主要目的是为了在发生单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 + +#### 在 RAID 0 中条带是什么 #### + +条带是通过将数据在同时分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到该逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID 0 逻辑卷的操作系统来提高重要文件的安全性。 + +- RAID 0 性能较高。 +- 在 RAID 0 上,空间零浪费。 +- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 +- 写和读性能都很好。 + +#### 要求 #### + +创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,不过数目应该是2,4,6,8等的偶数。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 + +在这里,我们没有使用硬件 RAID,此设置只需要软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的功能界面访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问它的界面。 + +如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 + +- [介绍 RAID 的级别和概念][1] + +**我的服务器设置** + + 操作系统 : CentOS 6.5 Final + IP 地址 : 192.168.0.225 + 两块盘 : 20 GB each + +这是9篇系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID 
0(条带化),以名为 sdb 和 sdc 两个 20GB 的硬盘为例。 + +### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### + +1、 在 Linux 上设置 RAID 0 前,我们先更新一下系统,然后安装`mdadm` 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 + + # yum clean all && yum update + # yum install mdadm -y + +![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) + +*安装 mdadm 工具* + +### 第2步:确认连接了两个 20GB 的硬盘 ### + +2、 在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 + + # ls -l /dev | grep sd + +![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) + +*检查硬盘* + +3、 一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的`mdadm` 命令来查看。 + + # mdadm --examine /dev/sd[b-c] + +![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) + +*检查 RAID 设备* + +从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 + +### 第3步:创建 RAID 分区 ### + +4、 现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 + + # fdisk /dev/sdb + +请按照以下说明创建分区。 + +- 按`n` 创建新的分区。 +- 然后按`P` 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按`P` 来显示创建好的分区。 + +![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) + +*创建分区* + +请按照以下说明将分区创建为 Linux 的 RAID 类型。 + +- 按`L`,列出所有可用的类型。 +- 按`t` 去修改分区。 +- 键入`fd` 设置为 Linux 的 RAID 类型,然后按回车确认。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 + +![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) + +*在 Linux 上创建 RAID 分区* + +**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 + +5、 创建分区后,验证这两个驱动器是否正确定义 RAID,使用下面的命令。 + + # mdadm --examine /dev/sd[b-c] + # mdadm --examine /dev/sd[b-c]1 + +![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) + +*验证 RAID 分区* + +### 第4步:创建 RAID md 设备 ### + +6、 现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 + + # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 + # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 + +- -C – 创建 +- -l – 级别 +- -n – RAID 设备数 + 
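上面第 4 步里 `-l raid0` 所启用的,正是前文介绍的条带化:连续的 chunk 被轮流写到各个磁盘上。下面用一小段 Python 示意这一分配规律(其中 chunk 大小、磁盘数都只是举例用的假设值,`stripe_location` 也只是演示用的示意实现,并非 mdadm 的真实内部逻辑):

```python
# 示意 RAID 0 条带化:逻辑偏移 -> (磁盘编号, 磁盘内偏移)
def stripe_location(offset, chunk_size, n_disks):
    """返回逻辑偏移 offset 落在哪块磁盘、磁盘内什么位置。"""
    chunk_index = offset // chunk_size                  # 第几个 chunk
    disk = chunk_index % n_disks                        # 轮流分配到各磁盘
    disk_offset = (chunk_index // n_disks) * chunk_size + offset % chunk_size
    return disk, disk_offset

# 以两块盘、512KB chunk(mdadm 的默认 chunk 大小)为例:
CHUNK = 512 * 1024
for off in (0, CHUNK, 2 * CHUNK, 3 * CHUNK):
    print(off, stripe_location(off, CHUNK, 2))
```

从输出可以看到相邻的 chunk 交替落在两块盘上,这也直观地解释了为什么 RAID 0 读写快、但任何一块盘损坏都会丢失数据。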
+7、 一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 + + # cat /proc/mdstat + +![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) + +*查看 RAID 级别* + + # mdadm -E /dev/sd[b-c]1 + +![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) + +*查看 RAID 设备* + + # mdadm --detail /dev/md0 + +![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) + +*查看 RAID 阵列* + +### 第5步:给 RAID 设备创建文件系统 ### + +8、 将 RAID 设备 /dev/md0 创建为 ext4 文件系统,并挂载到 /mnt/raid0 下。 + + # mkfs.ext4 /dev/md0 + +![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) + +*创建 ext4 文件系统* + +9、 在 RAID 设备上创建好 ext4 文件系统后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 + + # mkdir /mnt/raid0 + # mount /dev/md0 /mnt/raid0/ + +10、下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 + + # df -h + +11、 接下来,在挂载点 /mnt/raid0 下创建一个名为`tecmint.txt` 的文件,为创建的文件添加一些内容,并查看文件和目录的内容。 + + # touch /mnt/raid0/tecmint.txt + # echo "Hi everyone how you doing ?" 
> /mnt/raid0/tecmint.txt + # cat /mnt/raid0/tecmint.txt + # ls -l /mnt/raid0/ + +![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) + +*验证挂载的设备* + +12、 当你验证挂载点后,就可以将它添加到 /etc/fstab 文件中。 + + # vim /etc/fstab + +添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 + + /dev/md0 /mnt/raid0 ext4 deaults 0 0 + +![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) + +*添加设备到 fstab 文件中* + +13、 使用 mount 命令的 `-a` 来检查 fstab 的条目是否有误。 + + # mount -av + +![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) + +*检查 fstab 文件是否有误* + +### 第6步:保存 RAID 配置 ### + +14、 最后,保存 RAID 配置到一个文件中,以供将来使用。我们再次使用带有`-s` (scan) 和`-v` (verbose) 选项的 `mdadm` 命令,如图所示。 + + # mdadm -E -s -v >> /etc/mdadm.conf + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + # cat /etc/mdadm.conf + +![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) + +*保存 RAID 配置* + +就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID 0 。在接下来的文章中,我们将看到如何设置 RAID 1。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid0-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:https://linux.cn/article-6085-1.html diff --git a/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md b/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md deleted file mode 100644 index 9feba99609..0000000000 --- a/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md +++ /dev/null @@ -1,218 +0,0 @@ -在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 
‘两个设备’ 上 - 第2部分 -================================================================================ -RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 - -创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 - -![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) - -在 Linux 中创建 RAID0 - -使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 - -#### 在 RAID 0 中条带是什么 #### - -条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 - -- RAID 0 性能较高。 -- 在 RAID 0 上,空间零浪费。 -- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 -- 写和读性能得以提高。 - -#### 要求 #### - -创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 - -在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 - -如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 - -- [Introduction to RAID and RAID Concepts][1] - -**我的服务器设置** - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.225 - Two Disks : 20 GB each - -这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 - -### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### - -1.在 Linux 上设置 RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 - - # yum clean all && yum update - # yum install mdadm -y - -![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) - -安装 mdadm 工具 - -### 第2步:检测并连接两个 20GB 的硬盘 ### - -2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 - - # ls -l /dev | grep sd - -![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) - -检查硬盘 - -3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 - - # mdadm --examine /dev/sd[b-c] - -![Check 
RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) - -检查 RAID 设备 - -从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 - -### 第3步:创建 RAID 分区 ### - -4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 - - # fdisk /dev/sdb - -请按照以下说明创建分区。 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 只需按两次回车键选择默认值即可。 -- 然后,按 ‘P’ 来打印创建好的分区。 - -![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) - -创建分区 - -请按照以下说明将分区创建为 Linux 的 RAID 类型。 - -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 去修改分区。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) - -在 Linux 上创建 RAID 分区 - -**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 - -5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 - - # mdadm --examine /dev/sd[b-c] - # mdadm --examine /dev/sd[b-c]1 - -![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) - -验证 RAID 分区 - -### 第4步:创建 RAID md 设备 ### - -6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 - - # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 - # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 - -- -C – create -- -l – level -- -n – No of raid-devices - -7.一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 - - # cat /proc/mdstat - -![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) - -查看 RAID 级别 - - # mdadm -E /dev/sd[b-c]1 - -![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) - -查看 RAID 设备 - - # mdadm --detail /dev/md0 - -![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) - -查看 RAID 阵列 - -### 第5步:挂载 RAID 设备到文件系统 ### - -8.将 RAID 设备 /dev/md0 创建为 ext4 文件系统并挂载到 /mnt/raid0 下。 - - # mkfs.ext4 /dev/md0 - -![Create ext4 Filesystem in 
Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) - -创建 ext4 文件系统 - -9. ext4 文件系统为 RAID 设备创建好后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 - - # mkdir /mnt/raid0 - # mount /dev/md0 /mnt/raid0/ - -10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 - - # df -h - -11.接下来,创建一个名为 ‘tecmint.txt’ 的文件挂载到 /mnt/raid0 下,为创建的文件添加一些内容,并查看文件和目录的内容。 - - # touch /mnt/raid0/tecmint.txt - # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt - # cat /mnt/raid0/tecmint.txt - # ls -l /mnt/raid0/ - -![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) - -验证挂载的设备 - -12.一旦你验证挂载点后,同时将它添加到 /etc/fstab 文件中。 - - # vim /etc/fstab - -添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 - - /dev/md0 /mnt/raid0 ext4 deaults 0 0 - -![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) - -添加设备到 fstab 文件中 - -13.使用 mount ‘-a‘ 来检查 fstab 的条目是否有误。 - - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) - -检查 fstab 文件是否有误 - -### 第6步:保存 RAID 配置 ### - -14.最后,保存 RAID 配置到一个文件中,以供将来使用。同样,我们使用 ‘mdadm’ 命令带有 ‘-s‘ (scan) 和 ‘-v‘ (verbose) 选项,如图所示。 - - # mdadm -E -s -v >> /etc/mdadm.conf - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - # cat /etc/mdadm.conf - -![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) - -保存 RAID 配置 - -就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID0 级别。在接下来的文章中,我们将看到如何设置 RAID5。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid0-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 
234e307b477bcad5a12a40c540be4f08b125d8c4 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 24 Aug 2015 23:58:14 +0800 Subject: [PATCH 282/697] Delete 20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md --- ...x Wireshark GUI freeze on Linux desktop.md | 62 ------------------- 1 file changed, 62 deletions(-) delete mode 100644 sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md diff --git a/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md b/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md deleted file mode 100644 index d906349ff9..0000000000 --- a/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md +++ /dev/null @@ -1,62 +0,0 @@ -translation by strugglingyouth -Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop -================================================================================ -> **Question**: When I try to open a pre-recorded packet dump on Wireshark on Ubuntu, its UI suddenly freezes, and the following errors and warnings appear in the terminal where I launched Wireshark. How can I fix this problem? - -Wireshark is a GUI-based packet capture and sniffer tool. This tool is popularly used by network administrators, network security engineers or developers for various tasks where packet-level network analysis is required, for example during network troubleshooting, vulnerability testing, application debugging, or protocol reverse engineering. Wireshark allows one to capture live packets and browse their protocol headers and payloads via a convenient GUI. 
- -![](https://farm1.staticflickr.com/722/20584224675_f4d7a59474_c.jpg) - -It is known that Wireshark's UI, especially run under Ubuntu desktop, sometimes hangs or freezes with the following errors, while you are scrolling up or down the packet list view, or starting to load a pre-recorded packet dump file. - - (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject' - (wireshark:3480): GLib-GObject-CRITICAL **: g_object_set_qdata_full: assertion 'G_IS_OBJECT (object)' failed - (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkRange' - (wireshark:3480): Gtk-CRITICAL **: gtk_range_get_adjustment: assertion 'GTK_IS_RANGE (range)' failed - (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkOrientable' - (wireshark:3480): Gtk-CRITICAL **: gtk_orientable_get_orientation: assertion 'GTK_IS_ORIENTABLE (orientable)' failed - (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkScrollbar' - (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkWidget' - (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject' - (wireshark:3480): GLib-GObject-CRITICAL **: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed - (wireshark:3480): Gtk-CRITICAL **: gtk_widget_set_name: assertion 'GTK_IS_WIDGET (widget)' failed - -Apparently this error is caused by some incompatibility between Wireshark and overlay-scrollbar, and has not been fixed in the latest Ubuntu desktop (e.g., as of Ubuntu 15.04 Vivid Vervet). - -A workaround to avoid this Wireshark UI freeze problem is to **temporarily disabling overlay-scrollbar**. There are two ways to disable overlay-scrollbar in Wireshark, depending on how you launch Wireshark on your desktop. - -### Command-Line Solution ### - -Overlay-scrollbar can be disabled by setting "**LIBOVERLAY_SCROLLBAR**" environment variable to "0". 
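A quick way to convince yourself that such an environment override actually reaches the launched process is a small sketch like the following (it substitutes a shell `echo` for the wireshark binary, so it only illustrates the mechanism behind the alias below, not Wireshark itself):

```python
# Sketch: launch a child process with one extra environment variable set,
# without touching the parent shell's environment.
import os
import subprocess

env = dict(os.environ)
env["LIBOVERLAY_SCROLLBAR"] = "0"          # disable overlay-scrollbar

# equivalent of: LIBOVERLAY_SCROLLBAR=0 /usr/bin/wireshark
result = subprocess.run(
    ["sh", "-c", 'echo "LIBOVERLAY_SCROLLBAR=$LIBOVERLAY_SCROLLBAR"'],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())               # LIBOVERLAY_SCROLLBAR=0
```

The parent environment is untouched; only the child sees the variable, which is exactly what the alias and launcher edits below rely on.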
- -So if you are launching Wireshark from the command in a terminal, you can disable overlay-scrollbar in Wireshark as follows. - -Open your .bashrc, and define the following alias. - - alias wireshark="LIBOVERLAY_SCROLLBAR=0 /usr/bin/wireshark" - -### Desktop Launcher Solution ### - -If you are launching Wireshark using a desktop launcher, you can edit its desktop launcher file. - - $ sudo vi /usr/share/applications/wireshark.desktop - -Look for a line that starts with "Exec", and change it as follows. - - Exec=env LIBOVERLAY_SCROLLBAR=0 wireshark %f - -While this solution will be beneficial for all desktop users system-wide, it will not survive Wireshark upgrade. If you want to preserve the modified .desktop file, copy it to your home directory as follows. - - $ cp /usr/share/applications/wireshark.desktop ~/.local/share/applications/ - --------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/fix-wireshark-gui-freeze-linux-desktop.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni From 68ca6216a812bae6178844c68fadf162a57d7db1 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 24 Aug 2015 23:58:40 +0800 Subject: [PATCH 283/697] Create 20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md --- ...x Wireshark GUI freeze on Linux desktop.md | 64 +++++++++++++++++++ 1 file changed, 64 insertions(+) create mode 100644 translated/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md diff --git a/translated/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md b/translated/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md new file mode 100644 index 
0000000000..9db7231a68 --- /dev/null +++ b/translated/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md @@ -0,0 +1,64 @@ + +Linux 有问必答--如何解决 Linux 桌面上的 Wireshark GUI 死机 +================================================================================ +> **问题**: 当我试图在 Ubuntu 上的 Wireshark 中打开一个 pre-recorded 数据包转储时,它的 UI 突然死机,在我发起 Wireshark 的终端出现了下面的错误和警告。我该如何解决这个问题? + +Wireshark 是一个基于 GUI 的数据包捕获和嗅探工具。该工具被网络管理员普遍使用,网络安全工程师或开发人员对于各种任务的 packet-level 网络分析是必需的,例如在网络故障,漏洞测试,应用程序调试,或逆向协议工程是必需的。 Wireshark 允许记录存活数据包,并通过便捷的图形用户界面浏览他们的协议首部和有效负荷。 + +![](https://farm1.staticflickr.com/722/20584224675_f4d7a59474_c.jpg) + +这是 Wireshark 的 UI,尤其是在 Ubuntu 桌面下运行,有时会挂起或冻结出现以下错误,而你是向上或向下滚动分组列表视图时,就开始加载一个 pre-recorded 包转储文件。 + + + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject' + (wireshark:3480): GLib-GObject-CRITICAL **: g_object_set_qdata_full: assertion 'G_IS_OBJECT (object)' failed + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkRange' + (wireshark:3480): Gtk-CRITICAL **: gtk_range_get_adjustment: assertion 'GTK_IS_RANGE (range)' failed + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkOrientable' + (wireshark:3480): Gtk-CRITICAL **: gtk_orientable_get_orientation: assertion 'GTK_IS_ORIENTABLE (orientable)' failed + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkScrollbar' + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkWidget' + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject' + (wireshark:3480): GLib-GObject-CRITICAL **: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed + (wireshark:3480): Gtk-CRITICAL **: gtk_widget_set_name: assertion 'GTK_IS_WIDGET (widget)' failed + +显然,这个错误是由 Wireshark 和叠加滚动条之间的一些不兼容造成的,在最新的 Ubuntu 桌面还没有被解决(例如,Ubuntu 15.04 的桌面)。 + +一种避免 Wireshark 的 UI 卡死的办法就是 **暂时禁用叠加滚动条**。在 
Wireshark 上有两种方法来禁用叠加滚动条,这取决于你在桌面上如何启动 Wireshark 的。 + +### 命令行解决方法 ### + +叠加滚动条可以通过设置"**LIBOVERLAY_SCROLLBAR**"环境变量为“0”来被禁止。 + +所以,如果你是在终端使用命令行启动 Wireshark 的,你可以在 Wireshark 中禁用叠加滚动条,如下所示。 + +打开你的 .bashrc 文件,并定义以下 alias。 + + alias wireshark="LIBOVERLAY_SCROLLBAR=0 /usr/bin/wireshark" + +### 桌面启动解决方法 ### + +如果你是使用桌面启动器启动的 Wireshark,你可以编辑它的桌面启动器文件。 + + $ sudo vi /usr/share/applications/wireshark.desktop + +查找以"Exec"开头的行,并如下更改。 + + Exec=env LIBOVERLAY_SCROLLBAR=0 wireshark %f + +虽然这种解决方法将有利于所有桌面用户的 system-wide,但它将无法升级 Wireshark。如果你想保留修改的 .desktop 文件,如下所示将它复制到你的主目录。 + + $ cp /usr/share/applications/wireshark.desktop ~/.local/share/applications/ + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/fix-wireshark-gui-freeze-linux-desktop.html + +作者:[Dan Nanni][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni + From 98d249980e03d618467d60c8145d859d84ff91da Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 25 Aug 2015 00:28:35 +0800 Subject: [PATCH 284/697] PUB:20150717 How to monitor NGINX with Datadog - Part 3 @strugglingyouth --- ... to monitor NGINX with Datadog - Part 3.md | 146 +++++++++++++++++ ... 
to monitor NGINX with Datadog - Part 3.md | 154 ------------------ 2 files changed, 146 insertions(+), 154 deletions(-) create mode 100644 published/20150717 How to monitor NGINX with Datadog - Part 3.md delete mode 100644 translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md diff --git a/published/20150717 How to monitor NGINX with Datadog - Part 3.md b/published/20150717 How to monitor NGINX with Datadog - Part 3.md new file mode 100644 index 0000000000..fecab87e66 --- /dev/null +++ b/published/20150717 How to monitor NGINX with Datadog - Part 3.md @@ -0,0 +1,146 @@ +如何使用 Datadog 监控 NGINX(第三篇) +================================================================================ +![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png) + +如果你已经阅读了前面的[如何监控 NGINX][1],你应该知道从你网络环境的几个指标中可以获取多少信息。而且你也看到了从 NGINX 特定的基础中收集指标是多么容易的。但要实现全面,持续的监控 NGINX,你需要一个强大的监控系统来存储并将指标可视化,当异常发生时能提醒你。在这篇文章中,我们将向你展示如何使用 Datadog 安装 NGINX 监控,以便你可以在定制的仪表盘中查看这些指标: + +![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png) + +Datadog 允许你以单个主机、服务、流程和度量来构建图形和警告,或者使用它们的几乎任何组合构建。例如,你可以监控你的所有主机,或者某个特定可用区域的所有NGINX主机,或者您可以监视具有特定标签的所有主机的一个关键指标。本文将告诉您如何: + +- 在 Datadog 仪表盘上监控 NGINX 指标,就像监控其他系统一样 +- 当一个关键指标急剧变化时设置自动警报来通知你 + +### 配置 NGINX ### + +为了收集 NGINX 指标,首先需要确保 NGINX 已启用 status 模块和一个 报告 status 指标的 URL。一步步的[配置开源 NGINX][2] 和 [NGINX Plus][3] 请参见之前的相关文章。 + +### 整合 Datadog 和 NGINX ### + +#### 安装 Datadog 代理 #### + +Datadog 代理是[一个开源软件][4],它能收集和报告你主机的指标,这样就可以使用 Datadog 查看和监控他们。安装这个代理通常[仅需要一个命令][5] + +只要你的代理启动并运行着,你会看到你主机的指标报告[在你 Datadog 账号下][6]。 + +![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png) + +#### 配置 Agent #### + +接下来,你需要为代理创建一个简单的 NGINX 配置文件。在你系统中代理的配置目录应该[在这儿][7]找到。 + +在目录里面的 conf.d/nginx.yaml.example 中,你会发现[一个简单的配置文件][8],你可以编辑并提供 status URL 和可选的标签为每个NGINX 实例: + + init_config: + + instances: + + - nginx_status_url: http://localhost/nginx_status/ + tags: + - 
instance:foo + +当你提供了 status URL 和任意 tag,将配置文件保存为 conf.d/nginx.yaml。 + +#### 重启代理 #### + +你必须重新启动代理程序来加载新的配置文件。重新启动命令[在这里][9],根据平台的不同而不同。 + +#### 检查配置文件 #### + +要检查 Datadog 和 NGINX 是否正确整合,运行 Datadog 的 info 命令。每个平台使用的命令[看这儿][10]。 + +如果配置是正确的,你会看到这样的输出: + + Checks + ====== + + [...] + + nginx + ----- + - instance #0 [OK] + - Collected 8 metrics & 0 events + +#### 安装整合 #### + +最后,在你的 Datadog 帐户打开“Nginx 整合”。这非常简单,你只要在 [NGINX 整合设置][11]中点击“Install Integration”按钮。 + +![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png) + +### 指标! ### + +一旦代理开始报告 NGINX 指标,你会看到[一个 NGINX 仪表盘][12]出现在在你 Datadog 可用仪表盘的列表中。 + +基本的 NGINX 仪表盘显示有用的图表,囊括了几个[我们的 NGINX 监控介绍][13]中的关键指标。 (一些指标,特别是请求处理时间要求进行日志分析,Datadog 不支持。) + +你可以通过增加 NGINX 之外的重要指标的图表来轻松创建一个全面的仪表盘,以监控你的整个网站设施。例如,你可能想监视你 NGINX 的主机级的指标,如系统负载。要构建一个自定义的仪表盘,只需点击靠近仪表盘的右上角的选项并选择“Clone Dash”来克隆一个默认的 NGINX 仪表盘。 + +![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png) + +你也可以使用 Datadog 的[主机地图][14]在更高层面监控你的 NGINX 实例,举个例子,用颜色标示你所有的 NGINX 主机的 CPU 使用率来辨别潜在热点。 + +![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png) + +### NGINX 指标警告 ### + +一旦 Datadog 捕获并可视化你的指标,你可能会希望建立一些监控自动地密切关注你的指标,并当有问题提醒你。下面将介绍一个典型的例子:一个提醒你 NGINX 吞吐量突然下降时的指标监控器。 + +#### 监控 NGINX 吞吐量 #### + +Datadog 指标警报可以是“基于吞吐量的”(当指标超过设定值会警报)或“基于变化幅度的”(当指标的变化超过一定范围会警报)。在这个例子里,我们会采取后一种方式,当每秒传入的请求急剧下降时会提醒我们。下降往往意味着有问题。 + +1. **创建一个新的指标监控**。从 Datadog 的“Monitors”下拉列表中选择“New Monitor”。选择“Metric”作为监视器类型。 + + ![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png) + +2. **定义你的指标监视器**。我们想知道 NGINX 每秒总的请求量下降的数量,所以我们在基础设施中定义我们感兴趣的 nginx.net.request_per_s 之和。 + + ![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png) + +3. 
**设置指标警报条件**。我们想要在变化时警报,而不是一个固定的值,所以我们选择“Change Alert”。我们设置监控为无论何时请求量下降了30%以上时警报。在这里,我们使用一个一分钟的数据窗口来表示 “now” 指标的值,对横跨该间隔内的平均变化和之前 10 分钟的指标值作比较。 + + ![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png) + +4. **自定义通知**。如果 NGINX 的请求量下降,我们想要通知我们的团队。在这个例子中,我们将给 ops 团队的聊天室发送通知,并给值班工程师发送短信。在“Say what’s happening”中,我们会为监控器命名,并添加一个伴随该通知的短消息,建议首先开始调查的内容。我们会 @ ops 团队使用的 Slack,并 @pagerduty [将警告发给短信][15]。 + + ![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png) + +5. **保存集成监控**。点击页面底部的“Save”按钮。你现在在监控一个关键的 NGINX [工作指标][16],而当它快速下跌时会给值班工程师发短信。 + +### 结论 ### + +在这篇文章中,我们谈到了通过整合 NGINX 与 Datadog 来可视化你的关键指标,并当你的网络基础架构有问题时会通知你的团队。 + +如果你一直使用你自己的 Datadog 账号,你现在应该可以极大的提升你的 web 环境的可视化,也有能力对你的环境、你所使用的模式、和对你的组织最有价值的指标创建自动监控。 + +如果你还没有 Datadog 帐户,你可以注册[免费试用][17],并开始监视你的基础架构,应用程序和现在的服务。 + +------------------------------------------------------------ + +via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ + +作者:K Young +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://linux.cn/article-5970-1.html +[2]:https://linux.cn/article-5985-1.html#open-source +[3]:https://linux.cn/article-5985-1.html#plus +[4]:https://github.com/DataDog/dd-agent +[5]:https://app.datadoghq.com/account/settings#agent +[6]:https://app.datadoghq.com/infrastructure +[7]:http://docs.datadoghq.com/guides/basic_agent_usage/ +[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example +[9]:http://docs.datadoghq.com/guides/basic_agent_usage/ +[10]:http://docs.datadoghq.com/guides/basic_agent_usage/ +[11]:https://app.datadoghq.com/account/settings#integrations/nginx +[12]:https://app.datadoghq.com/dash/integration/nginx +[13]:https://linux.cn/article-5970-1.html 
+[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/ +[15]:https://www.datadoghq.com/blog/pagerduty/ +[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics +[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up +[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md +[19]:https://github.com/DataDog/the-monitor/issues diff --git a/translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md b/translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md deleted file mode 100644 index 003290a915..0000000000 --- a/translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md +++ /dev/null @@ -1,154 +0,0 @@ - -如何使用 Datadog 监控 NGINX - 第3部分 -================================================================================ -![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png) - -如果你已经阅读了[前面的如何监控 NGINX][1],你应该知道从你网络环境的几个指标中可以获取多少信息。而且你也看到了从 NGINX 特定的基础中收集指标是多么容易的。但要实现全面,持续的监控 NGINX,你需要一个强大的监控系统来存储并将指标可视化,当异常发生时能提醒你。在这篇文章中,我们将向你展示如何使用 Datadog 安装 NGINX 监控,以便你可以在定制的仪表盘中查看这些指标: - -![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png) - -Datadog 允许你建立单个主机,服务,流程,度量,或者几乎任何它们的组合图形周围和警报。例如,你可以在一定的可用性区域监控所有NGINX主机,或所有主机,或者您可以监视被报道具有一定标签的所有主机的一个关键指标。本文将告诉您如何: - -Datadog 允许你来建立图表并报告周围的主机,进程,指标或其他的。例如,你可以在特定的可用性区域监控所有 NGINX 主机,或所有主机,或者你可以监视一个关键指标并将它报告给周围所有标记的主机。本文将告诉你如何做: - -- 在 Datadog 仪表盘上监控 NGINX 指标,对其他所有系统 -- 当一个关键指标急剧变化时设置自动警报来通知你 - -### 配置 NGINX ### - -为了收集 NGINX 指标,首先需要确保 NGINX 已启用 status 模块和一个URL 来报告 status 指标。下面将一步一步展示[配置开源 NGINX ][2]和[NGINX Plus][3]。 - -### 整合 Datadog 和 NGINX ### - -#### 安装 Datadog 代理 #### - -Datadog 代理是 [一个开源软件][4] 能收集和报告你主机的指标,这样就可以使用 Datadog 查看和监控他们。安装代理通常 [仅需要一个命令][5] - -只要你的代理启动并运行着,你会看到你主机的指标报告[在你 Datadog 账号下][6]。 - -![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png) - -#### 配置 Agent 
#### - - -接下来,你需要为代理创建一个简单的 NGINX 配置文件。在你系统中代理的配置目录应该 [在这儿][7]。 - -在目录里面的 conf.d/nginx.yaml.example 中,你会发现[一个简单的配置文件][8],你可以编辑并提供 status URL 和可选的标签为每个NGINX 实例: - - init_config: - - instances: - - - nginx_status_url: http://localhost/nginx_status/ - tags: - - instance:foo - -一旦你修改了 status URLs 和其他标签,将配置文件保存为 conf.d/nginx.yaml。 - -#### 重启代理 #### - - -你必须重新启动代理程序来加载新的配置文件。重新启动命令 [在这里][9] 根据平台的不同而不同。 - -#### 检查配置文件 #### - -要检查 Datadog 和 NGINX 是否正确整合,运行 Datadog 的信息命令。每个平台使用的命令[看这儿][10]。 - -如果配置是正确的,你会看到这样的输出: - - Checks - ====== - - [...] - - nginx - ----- - - instance #0 [OK] - - Collected 8 metrics & 0 events - -#### 安装整合 #### - -最后,在你的 Datadog 帐户里面整合 Nginx。这非常简单,你只要点击“Install Integration”按钮在 [NGINX 集成设置][11] 配置表中。 - -![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png) - -### 指标! ### - -一旦代理开始报告 NGINX 指标,你会看到 [一个 NGINX 仪表盘][12] 在你 Datadog 可用仪表盘的列表中。 - -基本的 NGINX 仪表盘显示了几个关键指标 [在我们介绍的 NGINX 监控中][13] 的最大值。 (一些指标,特别是请求处理时间,日志分析,Datadog 不提供。) - -你可以轻松创建一个全面的仪表盘来监控你的整个网站区域通过增加额外的图形与 NGINX 外部的重要指标。例如,你可能想监视你 NGINX 主机的host-level 指标,如系统负载。你需要构建一个自定义的仪表盘,只需点击靠近仪表盘的右上角的选项并选择“Clone Dash”来克隆一个默认的 NGINX 仪表盘。 - -![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png) - -你也可以更高级别的监控你的 NGINX 实例通过使用 Datadog 的 [Host Maps][14] -对于实例,color-coding 你所有的 NGINX 主机通过 CPU 使用率来辨别潜在热点。 - -![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png) - -### NGINX 指标 ### - -一旦 Datadog 捕获并可视化你的指标,你可能会希望建立一些监控自动密切的关注你的指标,并当有问题提醒你。下面将介绍一个典型的例子:一个提醒你 NGINX 吞吐量突然下降时的指标监控器。 - -#### 监控 NGINX 吞吐量 #### - -Datadog 指标警报可以是 threshold-based(当指标超过设定值会警报)或 change-based(当指标的变化超过一定范围会警报)。在这种情况下,我们会采取后一种方式,当每秒传入的请求急剧下降时会提醒我们。下降往往意味着有问题。 - -1.**创建一个新的指标监控**. 从 Datadog 的“Monitors”下拉列表中选择“New Monitor”。选择“Metric”作为监视器类型。 - -![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png) - -2.**定义你的指标监视器**. 
我们想知道 NGINX 每秒总的请求量下降的数量。所以我们在基础设施中定义我们感兴趣的 nginx.net.request_per_s度量和。 - -![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png) - -3.**设置指标警报条件**.我们想要在变化时警报,而不是一个固定的值,所以我们选择“Change Alert”。我们设置监控为无论何时请求量下降了30%以上时警报。在这里,我们使用一个 one-minute 数据窗口来表示“now” 指标的值,警报横跨该间隔内的平均变化,和之前 10 分钟的指标值作比较。 - -![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png) - -4.**自定义通知**.如果 NGINX 的请求量下降,我们想要通知我们的团队。在这种情况下,我们将给 ops 队的聊天室发送通知,网页呼叫工程师。在“Say what’s happening”中,我们将其命名为监控器并添加一个短消息将伴随该通知并建议首先开始调查。我们使用 @mention 作为一般警告,使用 ops 并用 @pagerduty [专门给 PagerDuty 发警告][15]。 - -![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png) - -5.**保存集成监控**.点击页面底部的“Save”按钮。你现在监控的关键指标NGINX [work 指标][16],它边打电话给工程师并在它迅速下时随时分页。 - -### 结论 ### - -在这篇文章中,我们已经通过整合 NGINX 与 Datadog 来可视化你的关键指标,并当你的网络基础架构有问题时会通知你的团队。 - -如果你一直使用你自己的 Datadog 账号,你现在应该在 web 环境中有了很大的可视化提高,也有能力根据你的环境创建自动监控,你所使用的模式,指标应该是最有价值的对你的组织。 - -如果你还没有 Datadog 帐户,你可以注册[免费试用][17],并开始监视你的基础架构,应用程序和现在的服务。 - ----------- -这篇文章的来源在 [on GitHub][18]. 问题,错误,补充等?请[联系我们][19]. 
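上文第 3 步配置的“变化告警”,其判定思路可以用下面的 Python 草图来示意(这里简单地把最近 1 分钟的采样均值与 10 分钟前的数值作比较,阈值取 30%;`should_alert` 只是演示用的假设实现,并非 Datadog 的实际算法):

```python
# 示意 change alert 的判定:请求量较基线下降超过阈值时触发
def should_alert(recent_window, baseline, drop_threshold=0.30):
    """recent_window: 最近 1 分钟的每秒请求数采样;baseline: 10 分钟前的值。"""
    if baseline <= 0:
        return False                      # 没有可比较的基线,不触发
    now = sum(recent_window) / len(recent_window)
    change = (now - baseline) / baseline
    return change <= -drop_threshold      # 下降 30% 以上即告警

print(should_alert([70, 68, 72], baseline=100))   # True:约下降 30%
print(should_alert([95, 96, 94], baseline=100))   # False:仅下降 5%
```

与固定阈值告警相比,这种按相对变化判定的方式不需要事先知道“正常”的请求量是多少,因而更适合流量随时间波动的站点。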
- ------------------------------------------------------------- - -via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ - -作者:K Young -译者:[strugglingyouth](https://github.com/译者ID) -校对:[strugglingyouth](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ -[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source -[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus -[4]:https://github.com/DataDog/dd-agent -[5]:https://app.datadoghq.com/account/settings#agent -[6]:https://app.datadoghq.com/infrastructure -[7]:http://docs.datadoghq.com/guides/basic_agent_usage/ -[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example -[9]:http://docs.datadoghq.com/guides/basic_agent_usage/ -[10]:http://docs.datadoghq.com/guides/basic_agent_usage/ -[11]:https://app.datadoghq.com/account/settings#integrations/nginx -[12]:https://app.datadoghq.com/dash/integration/nginx -[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ -[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/ -[15]:https://www.datadoghq.com/blog/pagerduty/ -[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics -[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up -[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md -[19]:https://github.com/DataDog/the-monitor/issues From 999fbdbcc2b902c5c8e023bd7351d05b6f945c6d Mon Sep 17 00:00:00 2001 From: Kevin Sicong Jiang Date: Mon, 24 Aug 2015 19:40:44 -0500 Subject: [PATCH 285/697] KevinSJ translated --- ... open source command-line email clients.md | 80 ------------------- ... 
open source command-line email clients.md | 80 +++++++++++++++++++ 2 files changed, 80 insertions(+), 80 deletions(-) delete mode 100644 sources/share/20150821 Top 4 open source command-line email clients.md create mode 100644 translated/share/20150821 Top 4 open source command-line email clients.md diff --git a/sources/share/20150821 Top 4 open source command-line email clients.md b/sources/share/20150821 Top 4 open source command-line email clients.md deleted file mode 100644 index afdcd8cf4a..0000000000 --- a/sources/share/20150821 Top 4 open source command-line email clients.md +++ /dev/null @@ -1,80 +0,0 @@ -KevinSJ Translating -Top 4 open source command-line email clients -================================================================================ -![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_mail.png) - -Like it or not, email isn't dead yet. And for Linux power users who live and die by the command line, leaving the shell to use a traditional desktop or web based email client just doesn't cut it. After all, if there's one thing that the command line excels at, it's letting you process files, and especially text, with uninterrupted efficiency. - -Fortunately, there are a number of great command-line email clients, many with a devoted following of users who can help you get started and answer any questions you might have along the way. But fair warning: once you've mastered one of these clients, you may find it hard to go back to your old GUI-based solution! - -To install any of these four clients is pretty easy; most are available in standard repositories for major Linux distributions, and can be installed with a normal package manager. You may also have luck finding and running them on other operating systems as well, although I haven't tried it and can't speak to the experience. 
- -### Mutt ### - -- [Project page][1] -- [Source code][2] -- License: [GPLv2][3] - -Many terminal enthusiasts may already have heard of or even be familiar with Mutt and Alpine, which have both been on the scene for many years. Let's first take a look at Mutt. - -Mutt supports many of the features you've come to expect from any email system: message threading, color coding, availability in a number of languages, and lots of configuration options. It supports POP3 and IMAP, the two most common email transfer protocols, and multiple mailbox formats. Having first been released in 1995, Mutt still has an active development community, but in recent years, new releases have focused on bug fixes and security updates rather than new features. That's okay for many Mutt users, though, who are comfortable with the interface and adhere to the project's slogan: "All mail clients suck. This one just sucks less." - -### Alpine ### - -- [Project page][4] -- [Source code][5] -- License: [Apache 2.0][6] - -Alpine is the other well-known client for terminal email, developed at the University of Washington and designed to be an open source, Unicode-friendly alternative to Pine, also originally from UW. - -Designed to be friendly to beginners, but also chocked full of features for advanced users, Alpine also supports a multitude of protocols—IMAP, LDAP, NNTP, POP, SMTP, etc.—as well as different mailbox formats. Alpine is packaged with Pico, a simple text editing utility that many use as a standalone tool, but it also should work with your text editor of choice: vi, Emacs, etc. - -While Alpine is still infrequently updated, there is also a fork, re-alpine, which was created to allow a different set of maintainers to continue the project's development. - -Alpine features contextual help on the screen, which some users may prefer to breaking out the manual with Mutt, but both are well documented. 
Between Mutt and Alpine, users may want to try both and let personal preference guide their decision, or they may wish to check out a couple of the newer options below. - -### Sup ### - -- [Project page][7] -- [Source code][8] -- License: [GPLv2][9] - -Sup is the first of two of what can be called "high volume email clients" on our list. Described as a "console-based email client for people with a lot of email," Sup's goal is to provide an interface to email with a hierarchical design and to allow tagging of threads for easier organization. - -Written in Ruby, Sup provides exceptionally fast searching, manages your contact list automatically, and allows for custom extensions. For people who are used to Gmail as a webmail interface, these features will seem familiar, and Sup might be seen as a more modern approach to email on the command line. - -### Notmuch ### - -- [Project page][10] -- [Source code][11] -- License: [GPLv3][12] - -"Sup? Notmuch." Notmuch was written as a response to Sup, originally starting out as a speed-focused rewrite of some portions of Sup to enhance performance. Eventually, the project grew in scope and is now a stand-alone email client. - -Notmuch is also a fairly trim program. It doesn't actually send or receive email messages on its own, and the code which enables Notmuch's super-fast searching is actually designed as a separate library which the program can call. But its modular nature enables you to pick your favorite tools for composing, sending, and receiving, and instead focuses on doing one task and doing it well—efficient browsing and management of your email. - -This list isn’t by any means comprehensive; there are a lot more email clients out there which might be an even better fit for you. What’s your favorite? Did we leave one out that you want to share about? Let us know in the comments below! 
- -------------------------------------------------------------------------------- - -via: http://opensource.com/life/15/8/top-4-open-source-command-line-email-clients - -作者:[Jason Baker][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://opensource.com/users/jason-baker -[1]:http://www.mutt.org/ -[2]:http://dev.mutt.org/trac/ -[3]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html -[4]:http://www.washington.edu/alpine/ -[5]:http://www.washington.edu/alpine/acquire/ -[6]:http://www.apache.org/licenses/LICENSE-2.0 -[7]:http://supmua.org/ -[8]:https://github.com/sup-heliotrope/sup -[9]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html -[10]:http://notmuchmail.org/ -[11]:http://notmuchmail.org/releases/ -[12]:http://www.gnu.org/licenses/gpl.html diff --git a/translated/share/20150821 Top 4 open source command-line email clients.md b/translated/share/20150821 Top 4 open source command-line email clients.md new file mode 100644 index 0000000000..db28f4c543 --- /dev/null +++ b/translated/share/20150821 Top 4 open source command-line email clients.md @@ -0,0 +1,80 @@ +KevinSJ Translating +四大开源版命令行邮件客户端 +================================================================================ +![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_mail.png) + +无论你承认与否,email 并没有消亡。对依赖命令行的 Linux 高级用户而言,离开 shell 转而使用传统的桌面或网页版邮件客户端并不合适。归根结底,命令行最善于处理文件,特别是文本文件,能使效率倍增。 + +幸运的是,也有不少的命令行邮件客户端,它们的用户大都乐于帮助你入门并回答你使用中遇到的问题。但别说我没警告过你:一旦你完全掌握了其中一个客户端,要再使用基于图形界面的客户端将会变得很困难! 
+ +要安装下述四个客户端中的任何一个都是非常容易的;主要 Linux 发行版的软件仓库中都提供此类软件,并可通过包管理器进行安装。你也可以在其他的操作系统中寻找并安装这类客户端,但我并未尝试过,也没有相关的经验。 + +### Mutt ### + +- [项目主页][1] +- [源代码][2] +- 授权协议: [GPLv2][3] + +许多终端爱好者都听说过甚至熟悉 Mutt 和 Alpine,它们已经存在多年。让我们先看看 Mutt。 + +Mutt 支持许多你所期望 email 系统支持的功能:会话、颜色区分、多语言支持,同时还有很多设置选项。它支持 POP3 和 IMAP 这两个主要的邮件传输协议,以及许多邮箱格式。自从 1995 年诞生以来,Mutt 一直拥有一个活跃的开发社区,但最近几年,新版本更多地关注于修复问题和安全更新,而非提供新功能。这对大多数 Mutt 用户而言并无大碍,他们钟爱这样的界面,并支持此项目的口号:“所有邮件客户端都很烂,只是这个烂得没那么彻底。” + +### Alpine ### + +- [项目主页][4] +- [源代码][5] +- 授权协议: [Apache 2.0][6] + +Alpine 是另一款知名的终端邮件客户端,它由华盛顿大学开发,初衷是作为同样出自华盛顿大学的 Pine 的开源、支持 Unicode 的替代版本。 + +Alpine 不仅容易上手,还为高级用户提供了很多特性,它支持很多协议 —— IMAP, LDAP, NNTP, POP, SMTP 等,同时也支持不同的邮箱格式。Alpine 内置了一款名为 Pico 的可独立使用的简易文本编辑工具,但你也可以使用你常用的文本编辑器: vi, Emacs等。 + +尽管 Alpine 的更新并不频繁,但名为 re-alpine 的分支使得另一批维护者能够继续此项目的开发。 + +Alpine 支持在屏幕上显示上下文帮助,而一些用户会更喜欢 Mutt 式的独立说明手册,不过两者的文档都很完善。用户可以同时尝试 Mutt 和 Alpine,并根据个人喜好作出决定,也可以尝试以下几个比较新颖的选择。 + +### Sup ### + +- [项目主页][7] +- [源代码][8] +- 授权协议: [GPLv2][9] + +Sup 是我们列表中能被称为“大容量邮件客户端”的两个之一。它自称是“为邮件较多的人设计的命令行客户端”,目标是提供一个层次化设计、并允许为会话添加标签以便简单整理的界面。 + +由于采用 Ruby 编写,Sup 能提供十分快速的搜索,并能自动管理联系人列表,同时还允许自定义扩展。对于习惯了 Gmail 这类网页邮件界面的人们,这些功能都是耳熟能详的,这就使得 Sup 成为一种比较现代的命令行邮件管理方式。 + +### Notmuch ### + +- [项目主页][10] +- [源代码][11] +- 授权协议: [GPLv3][12] + +"Sup? Notmuch." Notmuch 是作为 Sup 的回应出现的,最初只是为提高性能而重写了 Sup 的一小部分。最终,这个项目逐渐变大,并成为了一个独立的邮件客户端。 + +Notmuch 是一款相当精简的软件。它并不能独立地收发邮件,实现 Notmuch 快速搜索的代码实际上被设计成一个可供程序调用的独立库。但这样的模块化设计也使你能选用自己最爱的工具来写信、发信和收信,而 Notmuch 自己则专注于做好一件事:高效地浏览和管理你的邮件。 + +这个列表并不完整,还有很多 email 客户端,它们或许才是你的最佳选择。你最喜欢的客户端是什么?有没有我们遗漏而你想分享的?请在下面的评论中告诉我们! 
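上文提到这四个客户端都可以通过发行版的包管理器安装。下面是一个演示性的 shell 片段,用来检查它们是否已经安装;这里假设命令名与包名一致(Sup 在部分发行版中包名可能不同,也可以按其项目主页的说明通过 RubyGems 安装),仅供参考:

```shell
# 演示:检查一个命令行邮件客户端是否已经安装(在任何系统上都可以安全运行)
check_client() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: 已安装"
  else
    echo "$1: 未安装(Debian/Ubuntu 下可尝试 sudo apt-get install $1)"
  fi
}

for client in mutt alpine sup notmuch; do
  check_client "$client"
done
```

`command -v` 是 POSIX 规定的查询命令是否存在的方式,比解析 `which` 的输出更可移植。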
+-------------------------------------------------------------------------------- + +via: http://opensource.com/life/15/8/top-4-open-source-command-line-email-clients + +作者:[Jason Baker][a] +译者:[KevinSJ](https://github.com/KevinSj) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://opensource.com/users/jason-baker +[1]:http://www.mutt.org/ +[2]:http://dev.mutt.org/trac/ +[3]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html +[4]:http://www.washington.edu/alpine/ +[5]:http://www.washington.edu/alpine/acquire/ +[6]:http://www.apache.org/licenses/LICENSE-2.0 +[7]:http://supmua.org/ +[8]:https://github.com/sup-heliotrope/sup +[9]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html +[10]:http://notmuchmail.org/ +[11]:http://notmuchmail.org/releases/ +[12]:http://www.gnu.org/licenses/gpl.html From 27d3c91bd4976c5e27e4a82658f2cb56e59aa5f3 Mon Sep 17 00:00:00 2001 From: Kevin Sicong Jiang Date: Mon, 24 Aug 2015 19:47:16 -0500 Subject: [PATCH 286/697] KevinSJ Translating --- sources/talk/20150820 Why did you start using Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150820 Why did you start using Linux.md b/sources/talk/20150820 Why did you start using Linux.md index f83742a7a1..3ddf90c560 100644 --- a/sources/talk/20150820 Why did you start using Linux.md +++ b/sources/talk/20150820 Why did you start using Linux.md @@ -1,3 +1,4 @@ +KevinSJ translating Why did you start using Linux? ================================================================================ > In today's open source roundup: What got you started with Linux? Plus: IBM's Linux only Mainframe. And why you should skip Windows 10 and go with Linux @@ -144,4 +145,4 @@ via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux. 
[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ [2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/ [3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/ -[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/ \ No newline at end of file +[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/ From 2f88b1dba1741c25a5c56c3770c81be33b06f009 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 25 Aug 2015 09:14:48 +0800 Subject: [PATCH 287/697] translating --- ...tch These Kids Having Fun With Linux Terminal In Ubuntu.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md b/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md index 6adcbbc3bc..3ce51b9379 100644 --- a/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md +++ b/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md @@ -1,3 +1,5 @@ +translating---geekpi + Watch These Kids Having Fun With Linux Terminal In Ubuntu ================================================================================ I found this short video of children having fun with Linux terminals in their computer lab at school. I do not know where do they belong to, but I guess it is either in Indonesia or Malaysia. 
@@ -34,4 +36,4 @@ via: http://itsfoss.com/ubuntu-terminal-train/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://itsfoss.com/author/abhishek/ \ No newline at end of file +[a]:http://itsfoss.com/author/abhishek/ From 4b56b819865e4dce4e392add7b9860a61437aa52 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Tue, 25 Aug 2015 09:20:24 +0800 Subject: [PATCH 288/697] [bazz2 translating]A Look at What's Next for the Linux Kernel --- .../20150820 A Look at What's Next for the Linux Kernel.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150820 A Look at What's Next for the Linux Kernel.md b/sources/talk/20150820 A Look at What's Next for the Linux Kernel.md index 9705fd3a90..e46a4b538d 100644 --- a/sources/talk/20150820 A Look at What's Next for the Linux Kernel.md +++ b/sources/talk/20150820 A Look at What's Next for the Linux Kernel.md @@ -1,3 +1,4 @@ +[bazz22222222222] A Look at What's Next for the Linux Kernel ================================================================================ ![](http://www.eweek.com/imagesvr_ce/485/290x195cilinux1.jpg) @@ -46,4 +47,4 @@ via: http://www.eweek.com/enterprise-apps/a-look-at-whats-next-for-the-linux-ker 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ \ No newline at end of file +[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ From 8c98033bc94605d350e1ff4b7acb89adad181e7d Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 25 Aug 2015 09:32:07 +0800 Subject: [PATCH 289/697] translated --- ...aving Fun With Linux Terminal In Ubuntu.md | 39 ------------------- ...aving Fun With Linux Terminal In Ubuntu.md | 37 ++++++++++++++++++ 2 files changed, 37 insertions(+), 39 deletions(-) delete mode 100644 sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md create mode 100644 translated/share/20150824 Watch These Kids 
Having Fun With Linux Terminal In Ubuntu.md diff --git a/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md b/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md deleted file mode 100644 index 3ce51b9379..0000000000 --- a/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md +++ /dev/null @@ -1,39 +0,0 @@ -translating---geekpi - -Watch These Kids Having Fun With Linux Terminal In Ubuntu -================================================================================ -I found this short video of children having fun with Linux terminals in their computer lab at school. I do not know where do they belong to, but I guess it is either in Indonesia or Malaysia. - -注:youtube 视频 - - -### Run train in Linux terminal ### - -There is no magic here. It’s just a small command line fun tool called ‘sl’. I presume that it was developed entirely to have some fun when command ls is wrongly typed. If you ever worked on Linux terminal, you know that ls is one of the most commonly used commands and perhaps one of the most frequently mis-typed command as well. - -If you want to have little fun with this terminal train, you can install it using the following command: - - sudo apt-get install sl - -To run the terminal train, just type **sl** in the terminal. It also has the following options: - -- -a : Accident mode. You can see people crying help -- -l : shows a smaller train but with more coaches -- -F : A flying train -- -e : Allows interrupt by Ctrl+C. In other mode you cannot use Ctrl+C to stop the train. But then, it doesn’t run for long. - -Normally, you should hear the whistle as well but it doesn’t work in most of the Linux OS, Ubuntu 14.04 being one of them. 
Here is the accidental terminal train :) - -![Linux Terminal Train](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/04/Linux_Terminal_Train.jpeg) - --------------------------------------------------------------------------------- - -via: http://itsfoss.com/ubuntu-terminal-train/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ diff --git a/translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md b/translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md new file mode 100644 index 0000000000..3d0efff7b5 --- /dev/null +++ b/translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md @@ -0,0 +1,37 @@ +看这些孩子在Ubuntu的Linux终端下玩耍 +================================================================================ +我发现了一个孩子们在学校的计算机教室里玩得很开心的视频。我不知道他们在哪里,但我猜测是在印度尼西亚或者马来西亚。 + +注:youtube 视频 + + +### 在Linux终端下面跑火车 ### + +这里没有魔术。只是一个叫做“sl”的命令行工具。我假定它完全是为了在把 ls 打错时找点乐子而开发的。如果你曾经在 Linux 终端下工作过,你会知道 ls 是最常使用的命令之一,也许也是最经常打错的命令之一。 + +如果你想从这个终端火车上获得一些乐趣,你可以使用下面的命令安装它。 + + sudo apt-get install sl + +要运行终端火车,只需要在终端中输入**sl**。它有以下几个选项: + +- -a : 意外模式。你会看见哭喊求救的群众 +- -l : 显示一个更小的火车,但有更多的车厢 +- -F : 一个飞行的火车 +- -e : 允许用 Ctrl+C 中断。在其他模式下你不能使用 Ctrl+C 停下火车,不过它也不会跑太久。 + +正常情况下,你还应该能听到汽笛声,但是它在大多数 Linux 系统下都不起作用,Ubuntu 14.04 也是其中之一。下面就是处于意外模式的终端火车 :) + +![Linux Terminal Train](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/04/Linux_Terminal_Train.jpeg) + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/ubuntu-terminal-train/ + +作者:[Abhishek][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ From 
24b14a84926db605be221bff4de094cbc60a5129 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 25 Aug 2015 09:34:06 +0800 Subject: [PATCH 290/697] Revert "translated" This reverts commit 8c98033bc94605d350e1ff4b7acb89adad181e7d. --- ...aving Fun With Linux Terminal In Ubuntu.md | 39 +++++++++++++++++++ ...aving Fun With Linux Terminal In Ubuntu.md | 37 ------------------ 2 files changed, 39 insertions(+), 37 deletions(-) create mode 100644 sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md delete mode 100644 translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md diff --git a/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md b/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md new file mode 100644 index 0000000000..3ce51b9379 --- /dev/null +++ b/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md @@ -0,0 +1,39 @@ +translating---geekpi + +Watch These Kids Having Fun With Linux Terminal In Ubuntu +================================================================================ +I found this short video of children having fun with Linux terminals in their computer lab at school. I do not know where do they belong to, but I guess it is either in Indonesia or Malaysia. + +注:youtube 视频 + + +### Run train in Linux terminal ### + +There is no magic here. It’s just a small command line fun tool called ‘sl’. I presume that it was developed entirely to have some fun when command ls is wrongly typed. If you ever worked on Linux terminal, you know that ls is one of the most commonly used commands and perhaps one of the most frequently mis-typed command as well. + +If you want to have little fun with this terminal train, you can install it using the following command: + + sudo apt-get install sl + +To run the terminal train, just type **sl** in the terminal. It also has the following options: + +- -a : Accident mode. 
You can see people crying help +- -l : shows a smaller train but with more coaches +- -F : A flying train +- -e : Allows interrupt by Ctrl+C. In other mode you cannot use Ctrl+C to stop the train. But then, it doesn’t run for long. + +Normally, you should hear the whistle as well but it doesn’t work in most of the Linux OS, Ubuntu 14.04 being one of them. Here is the accidental terminal train :) + +![Linux Terminal Train](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/04/Linux_Terminal_Train.jpeg) + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/ubuntu-terminal-train/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ diff --git a/translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md b/translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md deleted file mode 100644 index 3d0efff7b5..0000000000 --- a/translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md +++ /dev/null @@ -1,37 +0,0 @@ -看这些孩子在Ubuntu的Linux终端下玩耍 -================================================================================ -我发现了一个孩子们在他们的计算机教室里玩得很开心的视频。我不知道他们在哪里,但我猜测是在印度尼西亚或者马来西亚。 - -注:youtube 视频 - - -### 在Linux终端下面跑火车 ### - -这里没有魔术。只是一个叫做“sl”的命令行工具。我假定它是在把ls打错的情况下为了好玩而开发的。如果你曾经在Linux的命令行下工作,你会知道ls是一个最常使用的一个命令,也许也是一个最经常打错的命令。 - -如果你想从这个终端下的火车获得一些乐趣,你可以使用下面的命令安装它。 - - sudo apt-get install sl - -要运行终端火车,只需要在终端中输入**sl**。它有以下几个选项: - -- -a : 意外模式。你会看见哭救的群众 -- -l : 显示一个更小的火车但有更多的车厢 -- -F : 一个飞行的火车 -- -e : 允许通过Ctrl+C。使用其他模式你不能使用Ctrl+C中断火车。但是,它不能长时间运行。 - -正常情况下,你应该会听到汽笛声但是在大多数Linux系统下都不管用,Ubuntu是其中一个。这就是一个意外的终端火车。 - -![Linux Terminal Train](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/04/Linux_Terminal_Train.jpeg) - 
--------------------------------------------------------------------------------- - -via: http://itsfoss.com/ubuntu-terminal-train/ - -作者:[Abhishek][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ From 373fbde7526c0f45ddd783871703b3dea575a3ee Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 25 Aug 2015 09:34:30 +0800 Subject: [PATCH 291/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 校对到200line --- ...28 Process of the Linux kernel building.md | 75 ++++--------------- 1 file changed, 13 insertions(+), 62 deletions(-) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index 6f9368384c..b1c4388e66 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -1,40 +1,25 @@ Translating by Ezio -Process of the Linux kernel building -如何构建Linux 内核的 +如何构建Linux 内核 ================================================================================ 介绍 -------------------------------------------------------------------------------- -I will not tell you how to build and install custom Linux kernel on your machine, you can find many many [resources](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) that will help you to do it. Instead, we will know what does occur when you are typed `make` in the directory with Linux kernel source code in this part. When I just started to learn source code of the Linux kernel, the [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) file was a first file that I've opened. 
And it was scary :) This [makefile](https://en.wikipedia.org/wiki/Make_%28software%29) contains `1591` lines of code at the time when I wrote this part and it was [third](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) release candidate. +我不会告诉你怎么在自己的电脑上去构建、安装一个定制化的Linux 内核,这样的[资料](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) 太多了,它们会对你有帮助。本文会告诉你当你在内核源码路径里敲下`make` 时会发生什么。当我刚刚开始学习内核代码时,[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 是我打开的第一个文件,这个文件看起来真令人害怕 :)。那时候这个[Makefile](https://en.wikipedia.org/wiki/Make_%28software%29) 还只包含了`1591` 行代码,当我开始写本文是,这个[Makefile](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) 已经是第三个候选版本了。 -我不会告诉你怎么在自己的电脑上去构建、安装一个定制化的Linux 内核,这样的[资料](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) 太多了,它们会对你有帮助。本文会告诉你当你在内核源码路径里敲下`make` 时会发生什么。当我刚刚开始学习内核代码时,[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 是我打开的第一个文件,这个文件真令人害怕 :)。那时候这个[Makefile](https://en.wikipedia.org/wiki/Make_%28software%29) 包含了`1591` 行代码,当我开始写本文是,这个[Makefile](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) 已经是第三个候选版本了。 - -This makefile is the the top makefile in the Linux kernel source code and kernel build starts here. Yes, it is big, but moreover, if you've read the source code of the Linux kernel you can noted that all directories with a source code has an own makefile. Of course it is not real to describe how each source files compiled and linked. So, we will see compilation only for the standard case. You will not find here building of the kernel's documentation, cleaning of the kernel source code, [tags](https://en.wikipedia.org/wiki/Ctags) generation, [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler) related stuff and etc. 
We will start from the `make` execution with the standard kernel configuration file and will finish with the building of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage). - -It would be good if you're already familiar with the [make](https://en.wikipedia.org/wiki/Make_%28software%29) util, but I will anyway try to describe all code that will be in this part. - -So let's start. - -这个makefile 是Linux 内核代码的顶端makefile ,内核构件就始于此处。是的,它的内容很多,但是如果你已经读过内核源代码,你就会发现每个包含代码的目录都有一个自己的makefile。当然了,我们不会去描述每个代码文件是怎么编译链接的。所以我们将只会挑选一些通用的例子来说明问题,而你不会在这里找到构建内核的文档,如何整洁内核代码, [tags](https://en.wikipedia.org/wiki/Ctags) 的生成,和[交叉编译](https://en.wikipedia.org/wiki/Cross_compiler) 相关的说明,等等。我们将从`make` 开始,使用标准的内核配置文件,到生成了内核镜像[bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage) 结束。 +这个makefile 是Linux 内核代码的根makefile ,内核构建就始于此处。是的,它的内容很多,但是如果你已经读过内核源代码,你就会发现每个包含代码的目录都有一个自己的makefile。当然了,我们不会去描述每个代码文件是怎么编译链接的。所以我们将只会挑选一些通用的例子来说明问题,而你不会在这里找到构建内核的文档、如何整洁内核代码、[tags](https://en.wikipedia.org/wiki/Ctags) 的生成和[交叉编译](https://en.wikipedia.org/wiki/Cross_compiler) 相关的说明,等等。我们将从`make` 开始,使用标准的内核配置文件,到生成了内核镜像[bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage) 结束。 如果你已经很了解[make](https://en.wikipedia.org/wiki/Make_%28software%29) 工具那是最好,但是我也会描述本文出现的相关代码。 让我们开始吧 -Preparation before the kernel compilation 编译内核前的准备 --------------------------------------------------------------------------------- -There are many things to preparate before the kernel compilation will be started. The main point here is to find and configure -the type of compilation, to parse command line arguments that are passed to the `make` util and etc. So let's dive into the top `Makefile` of the Linux kernel. 
+在开始编译前要进行很多准备工作。最主要的就是找到并配置好配置文件,`make` 命令要使用到的参数都需要从这些配置文件获取。现在就让我们深入内核的根`makefile` 吧 -在开始便以前要进行很多准备工作。最主要的就是找到并配置好配置文件,`make` 命令要使用到的参数都需要从这些配置文件获取。 - -The Linux kernel top `Makefile` is responsible for building two major products: [vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (the resident kernel image) and the modules (any module files). The [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel starts from the definition of the following variables: - -内核顶端的`Makefile` 负责构建两个主要的产品:[vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (内核镜像可执行文件)和模块文件。内核的 [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 以次开始: +内核的根`Makefile` 负责构建两个主要的文件:[vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (内核镜像可执行文件)和模块文件。内核的 [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 从此处开始: ```Makefile VERSION = 4 @@ -44,16 +29,12 @@ EXTRAVERSION = -rc3 NAME = Hurr durr I'ma sheep ``` -These variables determine the current version of the Linux kernel and are used in the different places, for example in the forming of the `KERNELVERSION` variable: - 这些变量决定了当前内核的版本,并且被使用在很多不同的地方,比如`KERNELVERSION` : ```Makefile KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION) ``` -After this we can see a couple of the `ifeq` condition that check some of the parameters passed to `make`. The Linux kernel `makefiles` provides a special `make help` target that prints all available targets and some of the command line arguments that can be passed to `make`. For example: `make V=1` - provides verbose builds. 
The first `ifeq` condition checks if the `V=n` option is passed to make: - 接下来我们会看到很多`ifeq` 条件判断语句,它们负责检查传给`make` 的参数。内核的`Makefile` 提供了一个特殊的编译选项`make help` ,这个选项可以生成所有的可用目标和一些能传给`make` 的有效的命令行参数。举个例子,`make V=1` 会在构建过程中输出详细的编译信息,第一个`ifeq` 就是检查传递给make的`V=n` 选项。 ```Makefile @@ -75,9 +56,7 @@ endif export quiet Q KBUILD_VERBOSE ``` -If this option is passed to `make` we set the `KBUILD_VERBOSE` variable to the value of the `V` option. Otherwise we set the `KBUILD_VERBOSE` variable to zero. After this we check value of the `KBUILD_VERBOSE` variable and set values of the `quiet` and `Q` variables depends on the `KBUILD_VERBOSE` value. The `@` symbols suppress the output of the command and if it will be set before a command we will see something like this: `CC scripts/mod/empty.o` instead of the `Compiling .... scripts/mod/empty.o`. In the end we just export all of these variables. The next `ifeq` statement checks that `O=/dir` option was passed to the `make`. This option allows to locate all output files in the given `dir`: - -如果`V=n` 这个选项传给了`make` ,系统就会给变量`KBUILD_VERBOSE` 选项附上`V` 的值,否则的话`KBUILD_VERBOSE` 就会为0。然后系统会检查`KBUILD_VERBOSE` 的值,以此来决定`quiet` 和`Q` 的值。符号`@` 控制命令的输出,如果它被放在一个命令之前,这条命令的执行将会是`CC scripts/mod/empty.o`,而不是`Compiling .... scripts/mod/empty.o`(注:CC 在makefile 中一般都是编译命令)。最后系统仅仅导出所有的变量。下一个`ifeq` 语句检查的是传递给`make` 的选项`O=/dir`,这个选项允许在指定的目录`dir` 输出所有的结果文件: +如果`V=n` 这个选项传给了`make` ,系统就会给变量`KBUILD_VERBOSE` 选项附上`V` 的值,否则的话`KBUILD_VERBOSE` 就会为`0`。然后系统会检查`KBUILD_VERBOSE` 的值,以此来决定`quiet` 和`Q` 的值。符号`@` 控制命令的输出,如果它被放在一个命令之前,这条命令的执行将会是`CC scripts/mod/empty.o`,而不是`Compiling .... 
scripts/mod/empty.o`(注:CC 在makefile 中一般都是编译命令)。最后系统仅仅导出所有的变量。下一个`ifeq` 语句检查的是传递给`make` 的选项`O=/dir`,这个选项允许在指定的目录`dir` 输出所有的结果文件: ```Makefile ifeq ($(KBUILD_SRC),) @@ -102,15 +81,7 @@ endif # ifneq ($(KBUILD_OUTPUT),) endif # ifeq ($(KBUILD_SRC),) ``` -We check the `KBUILD_SRC` that represent top directory of the source code of the linux kernel and if it is empty (it is empty every time while makefile executes first time) and the set the `KBUILD_OUTPUT` variable to the value that passed with the `O` option (if this option was passed). In the next step we check this `KBUILD_OUTPUT` variable and if we set it, we do following things: - -* Store value of the `KBUILD_OUTPUT` in the temp `saved-output` variable; -* Try to create given output directory; -* Check that directory created, in other way print error; -* If custom output directory created sucessfully, execute `make` again with the new directory (see `-C` option). - -The next `ifeq` statements checks that `C` or `M` options was passed to the make: -系统会检查变量`KBUILD_SRC`,如果他是空的(第一次执行makefile 时总是空的),并且变量`KBUILD_OUTPUT` 被设成了选项`O` 的值(如果这个选项被传进来了),那么这个值就会用来代表内核源码的顶层目录。下一步会检查变量`KBUILD_OUTPUT` ,如果之前设置过这个变量,那么接下来会做一下几件事: +系统会检查变量`KBUILD_SRC`,如果他是空的(第一次执行makefile 时总是空的),并且变量`KBUILD_OUTPUT` 被设成了选项`O` 的值(如果这个选项被传进来了),那么这个值就会用来代表内核源码的顶层目录。下一步会检查变量`KBUILD_OUTPUT` ,如果之前设置过这个变量,那么接下来会做以下几件事: * 将变量`KBUILD_OUTPUT` 的值保存到临时变量`saved-output`; * 尝试创建输出目录; @@ -132,8 +103,6 @@ ifeq ("$(origin M)", "command line") endif ``` -The first `C` option tells to the `makefile` that need to check all `c` source code with a tool provided by the `$CHECK` environment variable, by default it is [sparse](https://en.wikipedia.org/wiki/Sparse). The second `M` option provides build for the external modules (will not see this case in this part). 
As we set this variables we make a check of the `KBUILD_SRC` variable and if it is not set we set `srctree` variable to `.`: - 第一个选项`C` 会告诉`makefile` 需要使用环境变量`$CHECK` 提供的工具来检查全部`c` 代码,默认情况下会使用[sparse](https://en.wikipedia.org/wiki/Sparse)。第二个选项`M` 会用来编译外部模块(本文不做讨论)。因为设置了这两个变量,系统还会检查变量`KBUILD_SRC`,如果`KBUILD_SRC` 没有被设置,系统会设置变量`srctree` 为`.`: ```Makefile @@ -148,9 +117,7 @@ obj := $(objtree) export srctree objtree VPATH ``` -That tells to `Makefile` that source tree of the Linux kernel will be in the current directory where `make` command was executed. After this we set `objtree` and other variables to this directory and export these variables. The next step is the setting value for the `SUBARCH` variable that will represent what the underlying archicecture is: - -这将会告诉`Makefile` 内核的源码树就在执行make 命令的目录。然后要设置`objtree` 和其他变量为执行make 命令的目录,并且将这些变量导出。接着就是要获取`SUBARCH` 的值,这个变量代表了当前的系统架构(注:一般值CPU 架构): +这将会告诉`Makefile` 内核的源码树就在执行make 命令的目录。然后要设置`objtree` 和其他变量为执行make 命令的目录,并且将这些变量导出。接着就是要获取`SUBARCH` 的值,这个变量代表了当前的系统架构(注:一般都指CPU 架构): ```Makefile SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \ @@ -161,8 +128,6 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \ -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ ) ``` -As you can see it executes [uname](https://en.wikipedia.org/wiki/Uname) utils that prints information about machine, operating system and architecture. As it will get output of the `uname` util, it will parse it and assign to the `SUBARCH` variable. As we got `SUBARCH`, we set the `SRCARCH` variable that provides directory of the certain architecture and `hfr-arch` that provides directory for the header files: - 如你所见,系统执行[uname](https://en.wikipedia.org/wiki/Uname) 得到机器、操作系统和架构的信息。因为我们得到的是`uname` 的输出,所以我们需要做一些处理在赋给变量`SUBARCH` 。获得`SUBARCH` 之后就要设置`SRCARCH` 和`hfr-arch`,`SRCARCH`提供了硬件架构相关代码的目录,`hfr-arch` 提供了相关头文件的目录: ```Makefile @@ -176,18 +141,13 @@ endif hdr-arch := $(SRCARCH) ``` -Note that `ARCH` is the alias for the `SUBARCH`. 
In the next step we set the `KCONFIG_CONFIG` variable that represents path to the kernel configuration file and if it was not set before, it will be `.config` by default: - -注意:`ARCH` 是`SUBARCH` 的别名。如果没有设置过代表内核配置文件路径的变量`KCONFIG_CONFIG`,下一步系统会设置他,默认情况下就是`.config` : +注意:`ARCH` 是`SUBARCH` 的别名。如果没有设置过代表内核配置文件路径的变量`KCONFIG_CONFIG`,下一步系统会设置它,默认情况下就是`.config` : ```Makefile KCONFIG_CONFIG ?= .config export KCONFIG_CONFIG ``` - -and the [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) that will be used during kernel compilation: - -和编译内核过程中要用到的[shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) +以及编译内核过程中要用到的[shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) ```Makefile CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \ @@ -195,10 +155,7 @@ CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \ else echo sh; fi ; fi) ``` -The next set of variables related to the compiler that will be used during Linux kernel compilation. We set the host compilers for the `c` and `c++` and flags for it: - -接下来就要设置一组和编译内核的编译器相关的变量。我们会设置host 的C 和C++ 的编译器及相关配置项: - +接下来就要设置一组和编译内核的编译器相关的变量。我们会设置主机的`C` 和`C++` 的编译器及相关配置项: ```Makefile HOSTCC = gcc @@ -207,9 +164,7 @@ HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-p HOSTCXXFLAGS = -O2 ``` -Next we will meet the `CC` variable that represent compiler too, so why do we need in the `HOST*` variables? The `CC` is the target compiler that will be used during kernel compilation, but `HOSTCC` will be used during compilation of the set of the `host` programs (we will see it soon). 
After this we can see definition of the `KBUILD_MODULES` and `KBUILD_BUILTIN` variables that are used for the determination of the what to compile (kernel, modules or both): - -然后会去适配代表编译器的变量`CC`,为什么还要`HOST*` 这些选项呢?`CC` 是编译内核过程中要使用的目标架构的编译器,但是`HOSTCC` 是要被用来编译一组`host` 程序的(下面我们就会看到)。然后我们就看看变量`KBUILD_MODULES` 和`KBUILD_BUILTIN` 的定义,这两个变量据欸的那个了我们要编译什么(内核、模块还是其他?): +下一步我们会看到代表编译器的变量`CC`,那为什么还要`HOST*` 这些选项呢?这是因为`CC` 是编译内核过程中要使用的目标架构的编译器,但是`HOSTCC` 是要被用来编译一组`host` 程序的(下面我们就会看到)。然后我们就看看变量`KBUILD_MODULES` 和`KBUILD_BUILTIN` 的定义,这两个变量决定了我们要编译什么东西(内核、模块还是其他): ```Makefile KBUILD_MODULES := @@ -220,16 +175,12 @@ ifeq ($(MAKECMDGOALS),modules) endif ``` -Here we can see definition of these variables and the value of the `KBUILD_BUILTIN` will depens on the `CONFIG_MODVERSIONS` kernel configuration parameter if we pass only `modules` to the `make`. The next step is including of the: - -在这我们可以看到这些变量的定义,并且,如果们仅仅传递了`modules` 给`make`,变量`KBUILD_BUILTIN` 会依赖于内核配置选项`CONFIG_MODVERSIONS`。下一步操作是引入: +在这我们可以看到这些变量的定义,并且,如果我们仅仅传递了`modules` 给`make`,变量`KBUILD_BUILTIN` 会依赖于内核配置选项`CONFIG_MODVERSIONS`。下一步操作是引入下面的文件: ```Makefile include scripts/Kbuild.include ``` -`kbuild` file. The [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) or `Kernel Build System` is the special infrastructure to manage building of the kernel and its modules. The `kbuild` files has the same syntax that makefiles. The [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) file provides some generic definitions for the `kbuild` system.
As we included this `kbuild` files we can see definition of the variables that are related to the different tools that will be used during kernel and modules compilation (like linker, compilers, utils from the [binutils](http://www.gnu.org/software/binutils/) and etc...): - 文件`kbuild` ,[Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) 或者又叫做 `Kernel Build System`是一个用来管理构建内核和模块的特殊框架。`kbuild` 文件的语法与makefile 一样。文件[scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) 为`kbuild` 系统同提供了一些原生的定义。因为我们包含了这个`kbuild` 文件,我们可以看到和不同工具关联的这些变量的定义,这些工具会在内核和模块编译过程中被使用(比如链接器、编译器、二进制工具包[binutils](http://www.gnu.org/software/binutils/),等等): ```Makefile From 640b23a26acd2f824897e27b7d033398edc16989 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 25 Aug 2015 09:35:53 +0800 Subject: [PATCH 292/697] translated --- ...aving Fun With Linux Terminal In Ubuntu.md | 26 +++++++++---------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md b/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md index 3ce51b9379..3d0efff7b5 100644 --- a/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md +++ b/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md @@ -1,28 +1,26 @@ -translating---geekpi - -Watch These Kids Having Fun With Linux Terminal In Ubuntu +看这些孩子在Ubuntu的Linux终端下玩耍 ================================================================================ -I found this short video of children having fun with Linux terminals in their computer lab at school. I do not know where do they belong to, but I guess it is either in Indonesia or Malaysia. +我发现了一个孩子们在他们的计算机教室里玩得很开心的视频。我不知道他们在哪里,但我猜测是在印度尼西亚或者马来西亚。 注:youtube 视频 -### Run train in Linux terminal ### +### 在Linux终端下面跑火车 ### -There is no magic here. 
It’s just a small command line fun tool called ‘sl’. I presume that it was developed entirely to have some fun when command ls is wrongly typed. If you ever worked on Linux terminal, you know that ls is one of the most commonly used commands and perhaps one of the most frequently mis-typed command as well. +这里没有魔术。只是一个叫做“sl”的命令行工具。我猜它是在把ls打错的情况下为了好玩而开发的。如果你曾经在Linux的命令行下工作,你会知道ls是最常使用的命令之一,也许也是最经常打错的命令。 -If you want to have little fun with this terminal train, you can install it using the following command: +如果你想从这个终端下的火车获得一些乐趣,你可以使用下面的命令安装它: sudo apt-get install sl -To run the terminal train, just type **sl** in the terminal. It also has the following options: +要运行终端火车,只需要在终端中输入**sl**。它有以下几个选项: -- -a : Accident mode. You can see people crying help -- -l : shows a smaller train but with more coaches -- -F : A flying train -- -e : Allows interrupt by Ctrl+C. In other mode you cannot use Ctrl+C to stop the train. But then, it doesn’t run for long. +- -a : 意外模式。你会看见呼救的群众 +- -l : 显示一个更小的火车但有更多的车厢 +- -F : 一个飞行的火车 +- -e : 允许使用Ctrl+C中断。在其他模式下你不能使用Ctrl+C中断火车。但是,它不能长时间运行。 -Normally, you should hear the whistle as well but it doesn’t work in most of the Linux OS, Ubuntu 14.04 being one of them.
Here is the accidental terminal train :) +正常情况下,你应该会听到汽笛声但是在大多数Linux系统下都不管用,Ubuntu是其中一个。这就是一个意外的终端火车。 ![Linux Terminal Train](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/04/Linux_Terminal_Train.jpeg) @@ -31,7 +29,7 @@ Normally, you should hear the whistle as well but it doesn’t work in most of t via: http://itsfoss.com/ubuntu-terminal-train/ 作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 2fccde8110f326870b455cc0613c4ac8a6f96b69 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 25 Aug 2015 09:38:06 +0800 Subject: [PATCH 293/697] Rename sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md to translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md --- ...4 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md (100%) diff --git a/sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md b/translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md similarity index 100% rename from sources/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md rename to translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md From 734846d1c71a5a96351b16d5d8b0e2913bf78df6 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 25 Aug 2015 09:41:56 +0800 Subject: [PATCH 294/697] translating --- ...0824 Basics Of NetworkManager Command Line Tool Nmcli.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md b/sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md index 
577411f58a..d911cfcc0e 100644 --- a/sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md +++ b/sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md @@ -1,4 +1,6 @@ - Basics Of NetworkManager Command Line Tool, Nmcli +translating----geekpi + +Basics Of NetworkManager Command Line Tool, Nmcli ================================================================================ ![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/08/networking1.jpg) @@ -150,4 +152,4 @@ via: http://www.unixmen.com/basics-networkmanager-command-line-tool-nmcli/ 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From e156ecf77804e1661f6625c3d9dac3a81cbf37f2 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 25 Aug 2015 10:16:45 +0800 Subject: [PATCH 295/697] translated --- ... 
NetworkManager Command Line Tool Nmcli.md | 82 +++++++++---------- 1 file changed, 41 insertions(+), 41 deletions(-) rename {sources => translated}/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md (62%) diff --git a/sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md b/translated/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md similarity index 62% rename from sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md rename to translated/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md index d911cfcc0e..5ddb31d1ea 100644 --- a/sources/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md +++ b/translated/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md @@ -1,20 +1,18 @@ -translating----geekpi - -Basics Of NetworkManager Command Line Tool, Nmcli +NetworkManager 命令行工具基础,Nmcli ================================================================================ ![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/08/networking1.jpg) -### Introduction ### +### 介绍 ### -In this tutorial, we will discuss NetworkManager command line tool, aka **nmcli**, in CentOS / RHEL 7. Users who are using **ifconfig** should avoid this command in Centos 7. +在本教程中,我们会讨论CentOS / RHEL 7中的 NetworkManager 命令行工具,也叫**nmcli**。那些使用**ifconfig**的用户在CentOS 7中应该避免使用这个命令。 -Lets configure some networking settings with nmcli utility. +让我们用nmcli工具配置一些网络设置。 -#### To get all address information of all interfaces connected with System #### +#### 要得到系统中所有接口的地址信息 #### [root@localhost ~]# ip addr show -**Sample Output:** +**示例输出:** 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 @@ -29,53 +27,53 @@ Lets configure some networking settings with nmcli utility.
inet6 fe80::20c:29ff:fe67:2f4c/64 scope link valid_lft forever preferred_lft forever -#### To retrieve packets statistics related with connected interfaces #### +#### 检索与连接的接口相关的数据包统计 #### [root@localhost ~]# ip -s link show eno16777736 -**Sample Output:** +**示例输出:** ![unxmen_(011)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0111.png) -#### Get routing configuration #### +#### 得到路由配置 #### [root@localhost ~]# ip route -Sample Output: +示例输出: default via 192.168.1.1 dev eno16777736 proto static metric 100 192.168.1.0/24 dev eno16777736 proto kernel scope link src 192.168.1.51 metric 100 -#### Analyze path for some host/website #### +#### 分析主机/网站路径 #### [root@localhost ~]# tracepath unixmen.com -Output will be just like traceroute but in more managed form. +输出和traceroute类似,但是更有条理。 ![unxmen_0121](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_01211.png) -### nmcli utility ### +### nmcli 工具 ### -**Nmcli** is a very rich and flexible command line utility. some of the terms used in nmcli are: +**Nmcli** 是一个非常丰富和灵活的命令行工具。nmcli中用到的一些术语有: -- **Device** – A network interface being used. -- **Connection** – A set of configuration settings, for a single device you can have multiple connections, you can switch between connections. +- **设备** – 正在使用的网络接口 +- **连接** – 一组配置设置,对于一个单一的设备可以有多个连接,可以在连接之间切换。 -#### Find out how many connections are available for how many devices #### +#### 找出有多少连接服务于多少设备 #### [root@localhost ~]# nmcli connection show ![unxmen_(013)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_013.png) -#### Get details of a specific connection #### +#### 得到特定连接的详情 #### [root@localhost ~]# nmcli connection show eno1 -**Sample output:** +**示例输出:** ![unxmen_(014)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0141.png) -#### Get the Network device status #### +#### 得到网络设备状态 #### [root@localhost ~]# nmcli device status
eno16777736 ethernet connected eno1 lo loopback unmanaged -- -#### Create a new connection with “dhcp” #### +#### 使用“dhcp”创建新的连接 #### [root@localhost ~]# nmcli connection add con-name "dhcp" type ethernet ifname eno16777736 -Where, +这里, -- **Connection add** – To add new connection -- **con-name** – connection name -- **type** – type of device -- **ifname** – interface name +- **Connection add** – 添加新的连接 +- **con-name** – 连接名 +- **type** – 设备类型 +- **ifname** – 接口名 -This command will add connection with dhcp protocol. +这个命令会使用dhcp协议添加连接。 -**Sample output:** +**示例输出:** Connection 'dhcp' (163a6822-cd50-4d23-bb42-8b774aeab9cb) successfully added. -#### Instead of assigning an IP via dhcp, you can add ip address as “static” #### +#### 不通过dhcp分配IP,使用“static”添加地址 #### [root@localhost ~]# nmcli connection add con-name "static" ifname eno16777736 autoconnect no type ethernet ip4 192.168.1.240 gw4 192.168.1.1 -**Sample Output:** +**示例输出:** Connection 'static' (8e69d847-03d7-47c7-8623-bb112f5cc842) successfully added. -**Update connection:** +**更新连接:** [root@localhost ~]# nmcli connection up eno1 Again Check, whether ip address is changed or not. +再检查一遍,ip地址是否已经改变。 [root@localhost ~]# ip addr show ![unxmen_(015)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0151.png) -#### Add DNS settings to Static connections. #### +#### 添加DNS设置到静态连接中 #### [root@localhost ~]# nmcli connection modify "static" ipv4.dns 202.131.124.4 -#### Add additional DNS value.
#### [root@localhost ~]# nmcli connection modify "static" +ipv4.dns 8.8.8.8 -**Note**: For additional entries **+** symbol will be used and **+ipv4.dns** will be used instead on **ip4.dns** +**注意**:添加额外的条目时要使用**+**符号,并且要用**+ipv4.dns**,而不是**ip4.dns**。 + +添加一个额外的ip地址: [root@localhost ~]# nmcli connection modify "static" +ipv4.addresses 192.168.200.1/24 -Refresh settings using command: +使用命令刷新设置: [root@localhost ~]# nmcli connection up eno1 ![unxmen_(016)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_016.png) -You will see, setting are effective now. +你会看见,设置生效了。 -That’s it. +完结 -------------------------------------------------------------------------------- via: http://www.unixmen.com/basics-networkmanager-command-line-tool-nmcli/ 作者:Rajneesh Upadhyay -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From dec3d0e0bde984923c90b288293efabd641d9a4a Mon Sep 17 00:00:00 2001 From: bazz2 Date: Tue, 25 Aug 2015 11:59:20 +0800 Subject: [PATCH 296/697] [translated by bazz2]A Look at What's Next for the Linux Kernel --- ...ook at What's Next for the Linux Kernel.md | 50 ------------------- ...ook at What's Next for the Linux Kernel.md | 49 ++++++++++++++++++ 2 files changed, 49 insertions(+), 50 deletions(-) delete mode 100644 sources/talk/20150820 A Look at What's Next for the Linux Kernel.md create mode 100644 translated/talk/20150820 A Look at What's Next for the Linux Kernel.md diff --git a/sources/talk/20150820 A Look at What's Next for the Linux Kernel.md b/sources/talk/20150820 A Look at What's Next for the Linux Kernel.md deleted file mode 100644 index e46a4b538d..0000000000 --- a/sources/talk/20150820 A Look at What's Next for the Linux Kernel.md +++ /dev/null @@ -1,50 +0,0 @@ -[bazz22222222222] -A Look at What's Next for the Linux Kernel
-================================================================================ -![](http://www.eweek.com/imagesvr_ce/485/290x195cilinux1.jpg) - -**The upcoming Linux 4.2 kernel will have more contributors than any other Linux kernel in history, according to Linux kernel developer Jonathan Corbet.** - -SEATTLE—The Linux kernel continues to grow—both in lines of code and the number of developers that contribute to it—yet some challenges need to be addressed. That was one of the key messages from Linux kernel developer Jonathan Corbet during his annual Kernel Report session at the LinuxCon conference here. - -The Linux 4.2 kernel is still under development, with general availability expected on Aug. 23. Corbet noted that 1,569 developers have contributed code for the Linux 4.2 kernel. Of those, 277 developers made their first contribution ever, during the Linux 4.2 development cycle. - -Even as more developers are coming to Linux, the pace of development and releases is very fast, Corbet said. He estimates that it now takes approximately 63 days for the community to build a new Linux kernel milestone. - -Linux 4.2 will benefit from a number of improvements that have been evolving in Linux over the last several releases. One such improvement is the introduction of OverlayFS, a new type of read-only file system that is useful because it can enable many containers to be layered on top of each other, Corbet said. - -Linux networking also is set to improve small packet performance, which is important for areas such as high-frequency financial trading. The improvements are aimed at reducing the amount of time and power needed to process each data packet, Corbet said. - -New drivers are always being added to Linux. On average, there are 60 to 80 new or updated drivers added in every Linux kernel development cycle, Corbet said. - -Another key area that continues to improve is that of Live Kernel patching, first introduced in the Linux 4.0 kernel. 
With live kernel patching, the promise is that a system administrator can patch a live running kernel without the need to reboot a running production system. While the basic elements of live kernel patching are in the kernel already, work is under way to make the technology all work with the right level of consistency and stability, Corbet explained. - -**Linux Security, IoT and Other Concerns** - -Security has been a hot topic in the open-source community in the past year due to high-profile issues, including Heartbleed and Shellshock. - -"I don't doubt there are some unpleasant surprises in the neglected Linux code at this point," Corbet said. - -He noted that there are more than 3 millions lines of code in the Linux kernel today that have been untouched in the last decade by developers and that the Shellshock vulnerability was a flaw in 20-year-old code that hadn't been looked at in some time. - -Another issue that concerns Corbet is the Unix 2038 issue—the Linux equivalent of the Y2K bug, which could have caused global havoc in the year 2000 if it hadn't been fixed. With the 2038 issue, there is a bug that could shut down Linux and Unix machines in the year 2038. Corbet said that while 2038 is still 23 years away, there are systems being deployed now that will be in use in the 2038. - -Some initial work took place to fix the 2038 flaw in Linux, but much more remains to be done, Corbet said. "The time to fix this is now, not 20 years from now in a panic when we're all trying to enjoy our retirement," Corbet said. - -The Internet of things (IoT) is another area of Linux concern for Corbet. Today, Linux is a leading embedded operating system for IoT, but that might not always be the case. Corbet is concerned that the Linux kernel's growth is making it too big in terms of memory footprint to work in future IoT devices. - -A Linux project is now under way to minimize the size of the Linux kernel, and it's important that it gets the support it needs, Corbet said. 
- -"Either Linux is suitable for IoT, or something else will come along and that something else might not be as free and open as Linux," Corbet said. "We can't assume the continued dominance of Linux in IoT. We have to earn it. We have to pay attention to stuff that makes the kernel bigger." - --------------------------------------------------------------------------------- - -via: http://www.eweek.com/enterprise-apps/a-look-at-whats-next-for-the-linux-kernel.html - -作者:[Sean Michael Kerner][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ diff --git a/translated/talk/20150820 A Look at What's Next for the Linux Kernel.md b/translated/talk/20150820 A Look at What's Next for the Linux Kernel.md new file mode 100644 index 0000000000..daf3e4d0e3 --- /dev/null +++ b/translated/talk/20150820 A Look at What's Next for the Linux Kernel.md @@ -0,0 +1,49 @@ +Linux 内核的发展方向 +================================================================================ +![](http://www.eweek.com/imagesvr_ce/485/290x195cilinux1.jpg) + +**即将到来的 Linux 4.2 内核涉及到史上最多的贡献者数量,内核开发者 Jonathan Corbet 如是说。** + +来自西雅图。Linux 内核持续增长:代码量在增加,代码贡献者数量也在增加。而随之而来的一些挑战需要处理一下。以上是 Jonathan Corbet 在今年的 LinuxCon 的内核年度报告上提出的主要观点。以下是他的主要演讲内容: + +Linux 4.2 内核依然处于开发阶段,预计在8月23号释出。Corbet 强调有 1569 名开发者为这个版本贡献了代码,其中 277 名是第一次提交代码。 + +越来越多的开发者的加入,内核更新非常快,Corbet 估计现在大概 63 天就能产生一个新的内核里程碑。 + +Linux 4.2 涉及多方面的更新。其中一个就是引进了 OverLayFS,这是一种只读型文件系统,它可以实现在一个容器之上再放一个容器。 + +网络系统对小包传输性能也有了提升,这对于高频传输领域如金融交易而言非常重要。提升的方面主要集中在减小处理数据包的时间的能耗。 + +依然有新的驱动中加入内核。在每个内核发布周期,平均会有 60 到 80 个新增或升级驱动中加入。 + +另一个主要更新是实时内核补丁,这个特性在 4.0 版首次引进,好处是系统管理员可以在生产环境中打上内核补丁而不需要重启系统。当补丁所需要的元素都已准备就绪,打补丁的过程会在后台持续而稳定地进行。 + +**Linux 安全, IoT 和其他关注点 ** + +过去一年中,安全问题在开源社区是一个很热的话题,这都归因于那些引发高度关注的事件,比如 Heartbleed 和 Shellshock。 + +“我毫不怀疑 Linux 代码对这些方面的忽视会产生一些令人不悦的问题”,Corbet 原话。 + +他强调说过去 10 年间有超过 3 
百万行代码不再被开发者修改,而产生 Shellshock 漏洞的代码的年龄已经是 20 岁了,近年来更是无人问津。 + +另一个关注点是 2038 问题,Linux 界的“千年虫”,如果不解决,2000 年出现过的问题还会重现。2038 问题说的是在 2038 年一些 Linux 和 Unix 机器会死机(LCTT 译注:32 位系统记录的时间,在2038年1月19日星期二晚上03:14:07之后的下一秒,会变成负数)。Corbet 说现在离 2038 年还有 23 年时间,现在部署的系统都会考虑 2038 问题。 + +Linux 已经开始一些初步的方案来修复 2038 问题了,但做的还远远不够。“现在就要修复这个问题,而不是等 20 年后把这个头疼的问题留给下一代解决,我们却享受着退休的美好时光”。 + +物联网(IoT)也是 Linux 关注的领域,Linux 是物联网嵌入式操作系统的主要占有者,但这种情况未必会一直保持下去。Corbet 认为日渐臃肿的内核对于未来的物联网设备来说肯定过于庞大。 + +现在有一个项目就是做内核最小化的,获取足够的支持对于这个项目来说非常重要。 + +“除了 Linux 之外,也有其他项目可以做物联网,但那些项目不会像 Linux 一样开放”,Corbet 说,“我们不能指望 Linux 在物联网领域一直保持优势,我们需要靠自己的努力去做到这点,我们需要注意不能让内核变得越来越臃肿。” + +-------------------------------------------------------------------------------- + +via: http://www.eweek.com/enterprise-apps/a-look-at-whats-next-for-the-linux-kernel.html + +作者:[Sean Michael Kerner][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ From bb90b935cb19af90b0dd9a5b118f0e5e3cd185f3 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Tue, 25 Aug 2015 12:04:59 +0800 Subject: [PATCH 297/697] [translating by bazz2]Linuxcon--The Changing Role of the Server OS --- .../20150819 Linuxcon--The Changing Role of the Server OS.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md b/sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md index 8f6d80c7e9..832cb6a30a 100644 --- a/sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md +++ b/sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md @@ -1,3 +1,4 @@ +[bazz2222222] Linuxcon: The Changing Role of the Server OS ================================================================================ SEATTLE - Containers might one day change the world, but it will take time and it will also change the role of the
operating system. That's the message delivered during a Linuxcon keynote here today by Wim Coekaerts, SVP Linux and virtualization engineering at Oracle. @@ -46,4 +47,4 @@ via: http://www.serverwatch.com/server-news/linuxcon-the-changing-role-of-the-se 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm \ No newline at end of file +[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm From 451807e602f9dc33916e0a76e9056e582440b778 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Tue, 25 Aug 2015 14:18:53 +0800 Subject: [PATCH 298/697] [translated by bazz2]Linuxcon--The Changing Role of the Server OS --- ...con--The Changing Role of the Server OS.md | 50 ------------------- ...con--The Changing Role of the Server OS.md | 50 +++++++++++++++++++ 2 files changed, 50 insertions(+), 50 deletions(-) delete mode 100644 sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md create mode 100644 translated/talk/20150819 Linuxcon--The Changing Role of the Server OS.md diff --git a/sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md b/sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md deleted file mode 100644 index 832cb6a30a..0000000000 --- a/sources/talk/20150819 Linuxcon--The Changing Role of the Server OS.md +++ /dev/null @@ -1,50 +0,0 @@ -[bazz2222222] -Linuxcon: The Changing Role of the Server OS -================================================================================ -SEATTLE - Containers might one day change the world, but it will take time and it will also change the role of the operating system. That's the message delivered during a Linuxcon keynote here today by Wim Coekaerts, SVP Linux and virtualization engineering at Oracle. 
- -![](http://www.serverwatch.com/imagesvr_ce/6421/wim-200x150.jpg) - -Coekaerts started his presentation by putting up a slide stating it's the year of the desktop, which generated a few laughs from the audience. Oracle Wim Coekarts Truly, though, Coekaerts said it is now apparent that 2015 is the year of the container, and more importantly the year of the application, which is what containers really are all about. - -"What do you need an operating system for?" Coekaerts asked. "It's really just there to run an application; an operating system is there to manage hardware and resources so your app can run." - -Coekaerts added that with Docker containers, the focus is once again on the application. At Oracle, Coekaerts said much of the focus is on how to make the app run better on the OS. - -"Many people are used to installing apps, but many of the younger generation just click a button on their mobile device and it runs," Coekaerts said. - -Coekaerts said that people now wonder why it's more complex in the enterprise to install software, and Docker helps to change that. - -"The role of the operating system is changing," Coekaerts said. - -The rise of Docker does not mean the demise of virtual machines (VMs), though. Coekaerts said it will take a very long time for things to mature in the containerization space and get used in real world. - -During that period VMs and containers will co-exist and there will be a need for transition and migration tools between containers and VMs. For example, Coekaerts noted that Oracle's VirtualBox open-source technology is widely used on desktop systems today as a way to help users run Docker. The Docker Kitematic project makes use of VirtualBox to boot Docker on Macs today. - -### The Open Compute Initiative and Write Once, Deploy Anywhere for Containers ### - -A key promise that needs to be enabled for containers to truly be successful is the concept of write once, deploy anywhere. 
That's an area where the Linux Foundations' Open Compute Initiative (OCI) will play a key role in enabling interoperability across container runtimes. - -"With OCI, it will make it easier to build once and run anywhere, so what you package locally you can run wherever you want," Coekaerts said. - -Overall, though, Coekaerts said that while there is a lot of interest in moving to the container model, it's not quite ready yet. He noted Oracle is working on certifying its products to run in containers, but it's a hard process. - -"Running the database is easy; it's everything else around it that is complex," Coekaerts said. "Containers don't behave the same as VMs, and some applications depend on low-level system configuration items that are not exposed from the host to the container." - -Additionally, Coekaerts commented that debugging problems inside a container is different than in a VM, and there is currently a lack of mature tools for proper container app debugging. - -Coekaerts emphasized that as containers matures it's important to not forget about the existing technology that organizations use to run and deploy applications on servers today. He said enterprises don't typically throw out everything they have just to start with new technology. - -"Deploying new technology is hard, and you need to be able to transition from what you have," Coekaerts said. "The technology that allows you to transition easily is the technology that wins." 
- --------------------------------------------------------------------------------- - -via: http://www.serverwatch.com/server-news/linuxcon-the-changing-role-of-the-server-os.html - -作者:[Sean Michael Kerner][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm diff --git a/translated/talk/20150819 Linuxcon--The Changing Role of the Server OS.md b/translated/talk/20150819 Linuxcon--The Changing Role of the Server OS.md new file mode 100644 index 0000000000..98a0f94b03 --- /dev/null +++ b/translated/talk/20150819 Linuxcon--The Changing Role of the Server OS.md @@ -0,0 +1,50 @@ +LinuxCon: 服务器操作系统的转型 +================================================================================ +来自西雅图。容器迟早要改变世界,以及改变操作系统的角色。这是 Wim Coekaerts 带来的 LinuxCon 演讲主题,Coekaerts 是 Oracle 公司 Linux 与虚拟化工程的高级副总裁。 + +![](http://www.serverwatch.com/imagesvr_ce/6421/wim-200x150.jpg) + +Coekaerts 在开始演讲的时候拿出一张关于“桌面之年”的幻灯片,引发了现场观众的一片笑声。之后他说 2015 年很明显是容器之年,更是应用之年,应用才是容器的关键。 + +“你需要操作系统做什么事情?”,Coekaerts 回答现场观众:“只需一件事:运行一个应用。操作系统负责管理硬件和资源,来让你的应用运行起来。” + +Coakaerts 说在 Docker 容器的帮助下,我们的注意力再次集中在应用上,而在 Oracle,我们将注意力放在如何让应用更好地运行在操作系统上。 + +“许多人过去常常需要繁琐地安装应用,而现在的年轻人只需要按一个按钮就能让应用在他们的移动设备上运行起来”。 + +人们对安装企业版的软件需要这么复杂的步骤而感到惊讶,而 Docker 帮助他们脱离了这片苦海。 + +“操作系统的角色已经变了。” Coekaerts 说。 + +Docker 的出现不代表虚拟机的淘汰,容器化过程需要经过很长时间才能变得成熟,然后才能在世界范围内得到应用。 + +在这段时间内,容器会与虚拟机共存,并且我们需要一些工具,将应用在容器和虚拟机之间进行转换迁移。Coekaerts 举例说 Oracle 的 VirtualBox 就可以用来帮助用户运行 Docker,而它原来是被广泛用在桌面系统上的一项开源技术。现在 Docker 的 Kitematic 项目将在 Mac 上使用 VirtualBox 运行 Docker。 + +### The Open Compute Initiative and Write Once, Deploy Anywhere for Containers ### +### 容器的开放计算计划和一次写随处部署 ### + +一个能让容器成功的关键是“一次写,随处部署”的概念。而在容器之间的互操作领域,Linux 基金会的开放计算计划(OCI)扮演一个非常关键的角色。 + +“使用 OCI,应用编译一次后就可以很方便地在多地运行,所以你可以将你的应用部署在任何地方”。 + +Coekaerts 总结说虽然在迁移到容器模型过程中会发生很多好玩的事情,但容器还没真正做好准备,他强调 Oracle 
现在正在验证将产品运行在容器内的可行性,但这是一个非常艰难的过程。 + +“运行数据库很简单,难的是要搞定数据库所需的环境”,Coekaerts 说:“容器与虚拟机不一样,一些需要依赖底层系统配置的应用无法从主机迁移到容器中。” + +另外,Coekaerts 指出在容器内调试问题与在虚拟机内调试问题也是不一样的,现在还没有成熟的工具来进行容器应用的调试。 + +Coekaerts 强调当容器足够成熟时,有一点很重要:不要抛弃现有的技术。组织和企业不能抛弃现有的部署好的应用,而完全投入新技术的怀抱。 + +“部署新技术是很困难的事情,你需要缓慢地迁移过去,能让你顺利迁移的技术才是成功的技术。”Coekaerts 说。 + +-------------------------------------------------------------------------------- + +via: http://www.serverwatch.com/server-news/linuxcon-the-changing-role-of-the-server-os.html + +作者:[Sean Michael Kerner][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm From 82d34ad4dac28b9c63901c546981e892d2da581b Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 25 Aug 2015 14:54:10 +0800 Subject: [PATCH 299/697] PUB:20150209 Install OpenQRM Cloud Computing Platform In Debian @FSSlc --- ...nQRM Cloud Computing Platform In Debian.md | 84 ++++++++++--------- 1 file changed, 43 insertions(+), 41 deletions(-) rename {translated/tech => published}/20150209 Install OpenQRM Cloud Computing Platform In Debian.md (54%) diff --git a/translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md b/published/20150209 Install OpenQRM Cloud Computing Platform In Debian.md similarity index 54% rename from translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md rename to published/20150209 Install OpenQRM Cloud Computing Platform In Debian.md index 2eacc933b9..fdaa039b2f 100644 --- a/translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md +++ b/published/20150209 Install OpenQRM Cloud Computing Platform In Debian.md @@ -1,48 +1,49 @@ 在 Debian 中安装 OpenQRM 云计算平台 ================================================================================ + ### 简介 ### **openQRM**是一个基于 Web 的开源云计算和数据中心管理平台,可灵活地与企业数据中心的现存组件集成。 它支持下列虚拟技术: -- KVM, -- XEN, 
-- Citrix XenServer, -- VMWare ESX, -- LXC, -- OpenVZ. +- KVM +- XEN +- Citrix XenServer +- VMWare ESX +- LXC +- OpenVZ -openQRM 中的杂交云连接器通过 **Amazon AWS**, **Eucalyptus** 或 **OpenStack** 来支持一系列的私有或公有云提供商,以此来按需扩展你的基础设施。它也自动地进行资源调配、 虚拟化、 存储和配置管理,且关注高可用性。集成计费系统的自助服务云门户可使终端用户按需请求新的服务器和应用堆栈。 +openQRM 中的混合云连接器支持 **Amazon AWS**, **Eucalyptus** 或 **OpenStack** 等一系列的私有或公有云提供商,以此来按需扩展你的基础设施。它也可以自动地进行资源调配、 虚拟化、 存储和配置管理,且保证高可用性。集成的计费系统的自服务云门户可使终端用户按需请求新的服务器和应用堆栈。 openQRM 有两种不同风格的版本可获取: - 企业版 - 社区版 -你可以在[这里][1] 查看这两个版本间的区别。 +你可以在[这里][1]查看这两个版本间的区别。 ### 特点 ### -- 私有/杂交的云计算平台; -- 可管理物理或虚拟的服务器系统; -- 可与所有主流的开源或商业的存储技术集成; -- 跨平台: Linux, Windows, OpenSolaris, and BSD; -- 支持 KVM, XEN, Citrix XenServer, VMWare ESX(i), lxc, OpenVZ 和 VirtualBox; -- 支持使用额外的 Amazon AWS, Eucalyptus, Ubuntu UEC 等云资源来进行杂交云设置; -- 支持 P2V, P2P, V2P, V2V 迁移和高可用性; -- 集成最好的开源管理工具 – 如 puppet, nagios/Icinga 或 collectd; -- 有超过 50 个插件来支持扩展功能并与你的基础设施集成; -- 针对终端用户的自助门户; -- 集成计费系统. +- 私有/混合的云计算平台 +- 可管理物理或虚拟的服务器系统 +- 集成了所有主流的开源或商业的存储技术 +- 跨平台: Linux, Windows, OpenSolaris 和 BSD +- 支持 KVM, XEN, Citrix XenServer, VMWare ESX(i), lxc, OpenVZ 和 VirtualBox +- 支持使用额外的 Amazon AWS, Eucalyptus, Ubuntu UEC 等云资源来进行混合云设置 +- 支持 P2V, P2P, V2P, V2V 迁移和高可用性 +- 集成最好的开源管理工具 – 如 puppet, nagios/Icinga 或 collectd +- 有超过 50 个插件来支持扩展功能并与你的基础设施集成 +- 针对终端用户的自服务门户 +- 集成了计费系统 ### 安装 ### -在这里我们将在 in Debian 7.5 上安装 openQRM。你的服务器必须至少满足以下要求: +在这里我们将在 Debian 7.5 上安装 openQRM。你的服务器必须至少满足以下要求: -- 1 GB RAM; -- 100 GB Hdd(硬盘驱动器); -- 可选: Bios 支持虚拟化(Intel CPUs 的 VT 或 AMD CPUs AMD-V). 
+- 1 GB RAM +- 100 GB Hdd(硬盘驱动器) +- 可选: Bios 支持虚拟化(Intel CPUs 的 VT 或 AMD CPUs AMD-V) 首先,安装 `make` 软件包来编译 openQRM 源码包: @@ -52,7 +53,7 @@ openQRM 有两种不同风格的版本可获取: 然后,逐次运行下面的命令来安装 openQRM。 -从[这里][2] 下载最新的可用版本: +从[这里][2]下载最新的可用版本: wget http://sourceforge.net/projects/openqrm/files/openQRM-Community-5.1/openqrm-community-5.1.tgz @@ -66,35 +67,35 @@ openQRM 有两种不同风格的版本可获取: sudo make start -安装期间,你将被询问去更新文件 `php.ini` +安装期间,会要求你更新文件 `php.ini` -![~-openqrm-community-5.1-src_001](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_001.png) +![~-openqrm-community-5.1-src_001](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_001.png) 输入 mysql root 用户密码。 -![~-openqrm-community-5.1-src_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_002.png) +![~-openqrm-community-5.1-src_002](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_002.png) 再次输入密码: -![~-openqrm-community-5.1-src_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_003.png) +![~-openqrm-community-5.1-src_003](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_003.png) -选择邮件服务器配置类型。 +选择邮件服务器配置类型: -![~-openqrm-community-5.1-src_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_004.png) +![~-openqrm-community-5.1-src_004](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_004.png) 假如你不确定该如何选择,可选择 `Local only`。在我们的这个示例中,我选择了 **Local only** 选项。 -![~-openqrm-community-5.1-src_005](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_005.png) +![~-openqrm-community-5.1-src_005](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_005.png) 输入你的系统邮件名称,并最后输入 Nagios 管理员密码。 
-![~-openqrm-community-5.1-src_007](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_007.png) +![~-openqrm-community-5.1-src_007](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_007.png) 根据你的网络连接状态,上面的命令可能将花费很长的时间来下载所有运行 openQRM 所需的软件包,请耐心等待。 最后你将得到 openQRM 配置 URL 地址以及相关的用户名和密码。 -![~_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@debian-_002.png) +![~_002](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@debian-_002.png) ### 配置 ### @@ -104,23 +105,23 @@ openQRM 有两种不同风格的版本可获取: 默认的用户名和密码是: **openqrm/openqrm** 。 -![Mozilla Firefox_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/Mozilla-Firefox_003.png) +![Mozilla Firefox_003](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/Mozilla-Firefox_003.png) 选择一个网卡来给 openQRM 管理网络使用。 -![openQRM Server - Mozilla Firefox_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_004.png) +![openQRM Server - Mozilla Firefox_004](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_004.png) 选择一个数据库类型,在我们的示例中,我选择了 mysql。 -![openQRM Server - Mozilla Firefox_006](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_006.png) +![openQRM Server - Mozilla Firefox_006](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_006.png) 现在,配置数据库连接并初始化 openQRM, 在这里,我使用 **openQRM** 作为数据库名称, **root** 作为用户的身份,并将 debian 作为数据库的密码。 请小心,你应该输入先前在安装 openQRM 时创建的 mysql root 用户密码。 -![openQRM Server - Mozilla Firefox_012](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_012.png) +![openQRM Server - Mozilla Firefox_012](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_012.png) -祝贺你!! openQRM 已经安装并配置好了。 +祝贺你! 
openQRM 已经安装并配置好了。 -![openQRM Server - Mozilla Firefox_013](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_013.png) +![openQRM Server - Mozilla Firefox_013](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_013.png) ### 更新 openQRM ### @@ -129,16 +130,17 @@ openQRM 有两种不同风格的版本可获取: cd openqrm/src/ make update -到现在为止,我们做的只是在我们的 Ubuntu 服务器中安装和配置 openQRM, 至于 创建、运行虚拟,管理存储,额外的系统集成和运行你自己的私有云等内容,我建议你阅读 [openQRM 管理员指南][3]。 +到现在为止,我们做的只是在我们的 Debian 服务器中安装和配置 openQRM, 至于 创建、运行虚拟,管理存储,额外的系统集成和运行你自己的私有云等内容,我建议你阅读 [openQRM 管理员指南][3]。 就是这些了,欢呼吧!周末快乐! + -------------------------------------------------------------------------------- via: http://www.unixmen.com/install-openqrm-cloud-computing-platform-debian/ 作者:[SK][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From dd8a39c8fd46ad9f708a39f0c87ddf817c4879c0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=BC=A9=E6=B6=A1?= Date: Tue, 25 Aug 2015 19:46:26 +0800 Subject: [PATCH 300/697] Update 20150817 Top 5 Torrent Clients For Ubuntu Linux.md --- .../share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md b/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md index 5ae03e4df1..95ad8d2b5d 100644 --- a/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md +++ b/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Top 5 Torrent Clients For Ubuntu Linux ================================================================================ ![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png) @@ -114,4 +116,4 @@ via: 
http://itsfoss.com/best-torrent-ubuntu/ [9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/ [10]:http://itsfoss.com/manage-startup-applications-ubuntu/ [11]:http://itsfoss.com/4-best-download-managers-for-linux/ -[12]:http://itsfoss.com/popcorn-time-tips/ \ No newline at end of file +[12]:http://itsfoss.com/popcorn-time-tips/ From 26056a3f72760922c6a54a97cb7ad9ba537d5c42 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 25 Aug 2015 23:37:34 +0800 Subject: [PATCH 301/697] PUB:Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux @strugglingyouth --- ... (Mirroring) using 'Two Disks' in Linux.md | 126 +++++++++--------- 1 file changed, 63 insertions(+), 63 deletions(-) rename {translated/tech/RAID => published}/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md (50%) diff --git a/translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/published/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md similarity index 50% rename from translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md rename to published/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md index 948e530ed8..dba520121f 100644 --- a/translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md +++ b/published/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md @@ -1,83 +1,82 @@ -在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分 +在 Linux 下使用 RAID(三):用两块磁盘创建 RAID 1(镜像) ================================================================================ -RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。 + +**RAID 镜像**意味着相同数据的完整克隆(或镜像),分别写入到两个磁盘中。创建 RAID 1 至少需要两个磁盘,而且仅用于读取性能或者可靠性要比数据存储容量更重要的场合。 ![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) -在 Linux 中设置 RAID1 +*在 Linux 中设置 RAID 1* 
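顺带说明一下镜像健康状态的判断方法:`/proc/mdstat` 中的 `[UU]` 表示两块成员盘都在线,出现 `[U_]` 或 `[_U]` 则说明镜像已降级。下面是一个示意性的 shell 片段,用一行假设的示例字符串代替真实的 `/proc/mdstat` 输出,仅用来演示这个判断逻辑,并非原文步骤的一部分:

```shell
#!/bin/sh
# 示意:根据 /proc/mdstat 风格的状态行判断 RAID 1 镜像是否健康。
# 真实环境应读取 /proc/mdstat;这里用假设的示例字符串代替。
sample="md0 : active raid1 sdc1[1] sdb1[0] 20955136 blocks super 1.2 [2/2] [UU]"
case "$sample" in
  *"[UU]"*) state="healthy"  ;;  # [UU]:两块成员盘均在线
  *)        state="degraded" ;;  # [U_] 或 [_U]:有磁盘掉线
esac
echo "md0 mirror state: $state"
```

实际使用时,可以把示例字符串换成 `/proc/mdstat` 中对应阵列的状态行。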
创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。 ### RAID 1 的特点 ### --镜像具有良好的性能。 +- 镜像具有良好的性能。 --磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 +- 磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 --在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 +- 在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 --读取数据会比写入性能更好。 +- 读取性能会比写入性能更好。 #### 要求 #### +创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8等偶数。要添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 -创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 +这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的功能界面或使用 Ctrl + I 键来访问它。 -这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。 - -需要阅读: [Basic Concepts of RAID in Linux][1] +需要阅读: [介绍 RAID 的级别和概念][1] #### 在我的服务器安装 #### - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.226 - Hostname : rd1.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc + 操作系统 : CentOS 6.5 Final + IP 地址 : 192.168.0.226 + 主机名 : rd1.tecmintlocal.com + 磁盘 1 [20GB] : /dev/sdb + 磁盘 2 [20GB] : /dev/sdc -本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。 +本文将指导你在 Linux 平台上使用 mdadm (用于创建和管理 RAID )一步步的建立一个软件 RAID 1 (镜像)。同样的做法也适用于如 RedHat,CentOS,Fedora 等 Linux 发行版。 -### 第1步:安装所需要的并且检查磁盘 ### +### 第1步:安装所需软件并且检查磁盘 ### -1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 +1、 正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] + # yum install mdadm [在 RedHat 系统] + # apt-get install mdadm [在 Debain 系统] -2. 
一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 +2、 一旦安装好`mdadm`包,我们需要使用下面的命令来检查磁盘是否已经配置好。 # mdadm -E /dev/sd[b-c] ![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) -检查 RAID 的磁盘 - +*检查 RAID 的磁盘* 正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 ### 第2步:为 RAID 创建分区 ### -3. 正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 +3、 正如我提到的,我们使用最少的两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID 1。我们首先使用`fdisk`命令来创建这两个分区并更改其类型为 raid。 # fdisk /dev/sdb 按照下面的说明 -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 +- 按 `n` 创建新的分区。 +- 然后按 `P` 选择主分区。 - 接下来选择分区号为1。 - 按两次回车键默认将整个容量分配给它。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 修改分区类型。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 +- 然后,按 `P` 来打印创建好的分区。 +- 按 `L`,列出所有可用的类型。 +- 按 `t` 修改分区类型。 +- 键入 `fd` 设置为 Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 ![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) -创建磁盘分区 +*创建磁盘分区* 在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 @@ -85,59 +84,59 @@ RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁 ![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) -创建第二个分区 +*创建第二个分区* -4. 
一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 +4、 一旦这两个分区创建成功后,使用相同的命令来检查 sdb 和 sdc 分区并确认 RAID 分区的类型如上图所示。 # mdadm -E /dev/sd[b-c] ![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) -验证分区变化 +*验证分区变化* ![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) -检查 RAID 类型 +*检查 RAID 类型* **注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 -### 步骤3:创建 RAID1 设备 ### +### 第3步:创建 RAID 1 设备 ### -5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 +5、 接下来使用以下命令来创建一个名为 /dev/md0 的“RAID 1”设备并验证它 # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 # cat /proc/mdstat ![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) -创建RAID设备 +*创建RAID设备* -6. 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 +6、 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 # mdadm -E /dev/sd[b-c]1 # mdadm --detail /dev/md0 ![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) -检查 RAID 设备类型 +*检查 RAID 设备类型* ![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) -检查 RAID 设备阵列 +*检查 RAID 设备阵列* -从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 +从上图中,人们很容易理解,RAID 1 已经创建好了,使用了 /dev/sdb1 和 /dev/sdc1 分区,你也可以看到状态为 resyncing(重新同步中)。 ### 第4步:在 RAID 设备上创建文件系统 ### -7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . +7、 给 md0 上创建 ext4 文件系统 # mkfs.ext4 /dev/md0 ![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) -创建 RAID 设备文件系统 +*创建 RAID 设备文件系统* -8. 
接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 +8、 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 # mkdir /mnt/raid1 # mount /dev/md0 /mnt/raid1/ @@ -146,51 +145,52 @@ RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁 ![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) -挂载 RAID 设备 +*挂载 RAID 设备* -9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。 +9、为了在系统重新启动自动挂载 RAID 1,需要在 fstab 文件中添加条目。打开`/etc/fstab`文件并添加以下行: /dev/md0 /mnt/raid1 ext4 defaults 0 0 ![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) -自动挂载 Raid 设备 +*自动挂载 Raid 设备* + +10、 运行`mount -av`,检查 fstab 中的条目是否有错误 -10. 运行“mount -a”,检查 fstab 中的条目是否有错误 # mount -av ![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) -检查 fstab 中的错误 +*检查 fstab 中的错误* -11. 接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。 +11、 接下来,使用下面的命令保存 RAID 的配置到文件“mdadm.conf”中。 # mdadm --detail --scan --verbose >> /etc/mdadm.conf ![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) -保存 Raid 的配置 +*保存 Raid 的配置* 上述配置文件在系统重启时会读取并加载 RAID 设备。 ### 第5步:在磁盘故障后检查数据 ### -12.我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 +12、我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 # mdadm --detail /dev/md0 ![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) -验证 Raid 设备 +*验证 RAID 设备* -在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的并且 Active Devices 是2.现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 +在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的,并且 Active Devices 是2。现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 # ls -l /dev | grep sd # mdadm --detail /dev/md0 ![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) -测试 RAID 设备 +*测试 RAID 设备* 现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。 @@ -199,9 +199,9 @@ RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁 ![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) -验证 
RAID 数据 +*验证 RAID 数据* -你有没有看到我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 +你可以看到我们的数据仍然可用。由此,我们可以了解 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 -------------------------------------------------------------------------------- @@ -209,9 +209,9 @@ via: http://www.tecmint.com/create-raid1-in-linux/ 作者:[Babin Lonston][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[1]:https://linux.cn/article-6085-1.html From 39a941d9846d6f0fd2494f44979e26d211918cef Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 26 Aug 2015 00:01:44 +0800 Subject: [PATCH 302/697] Delete Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md --- ...ing Up RAID 10 or 1+0 (Nested) in Linux.md | 276 ------------------ 1 file changed, 276 deletions(-) delete mode 100644 sources/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md diff --git a/sources/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md b/sources/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md deleted file mode 100644 index a08903e00e..0000000000 --- a/sources/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md +++ /dev/null @@ -1,276 +0,0 @@ -struggling 翻译中 -Setting Up RAID 10 or 1+0 (Nested) in Linux – Part 6 -================================================================================ -RAID 10 is a combine of RAID 0 and RAID 1 to form a RAID 10. To setup Raid 10, we need at least 4 number of disks. In our earlier articles, we’ve seen how to setup a RAID 0 and RAID 1 with minimum 2 number of disks. 
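Before partitioning anything for RAID 10, it can help to sanity-check the disk count. The small sketch below encodes the rule stated here — at least four disks, grouped in mirrored pairs (so an even count). The `count_disks` helper is made up for this illustration and is not part of the original how-to:

```shell
#!/bin/sh
# Illustrative pre-flight check: RAID 10 (1+0) wants at least 4 member
# disks, in mirrored pairs. The helper name is an assumption for this sketch.
count_disks() {
  n=$1
  if [ "$n" -ge 4 ] && [ $((n % 2)) -eq 0 ]; then
    echo "$n disks: ok for RAID 10"
  else
    echo "$n disks: not usable for RAID 10"
  fi
}

count_disks 4   # meets the minimum
count_disks 3   # too few, and not a full mirrored pair
```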
- -Here we will use both RAID 0 and RAID 1 to perform a Raid 10 setup with minimum of 4 drives. Assume, that we’ve some data saved to logical volume, which is created with RAID 10. Just for an example, if we are saving a data “apple” this will be saved under all 4 disk by this following method. - -![Create Raid 10 in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/raid10.jpg) - -Create Raid 10 in Linux - -Using RAID 0 it will save as “A” in first disk and “p” in the second disk, then again “p” in first disk and “l” in second disk. Then “e” in first disk, like this it will continue the Round robin process to save the data. From this we come to know that RAID 0 will write the half of the data to first disk and other half of the data to second disk. - -In RAID 1 method, same data will be written to other 2 disks as follows. “A” will write to both first and second disks, “P” will write to both disk, Again other “P” will write to both the disks. Thus using RAID 1 it will write to both the disks. This will continue in round robin process. - -Now you all came to know that how RAID 10 works by combining of both RAID 0 and RAID 1. If we have 4 number of 20 GB size disks, it will be 80 GB in total, but we will get only 40 GB of Storage capacity, the half of total capacity will be lost for building RAID 10. - -#### Pros and Cons of RAID 5 #### - -- Gives better performance. -- We will loose two of the disk capacity in RAID 10. -- Reading and writing will be very good, because it will write and read to all those 4 disk at the same time. -- It can be used for Database solutions, which needs a high I/O disk writes. - -#### Requirements #### - -In RAID 10, we need minimum of 4 disks, the first 2 disks for RAID 0 and other 2 Disks for RAID 1. Like I said before, RAID 10 is just a Combine of RAID 0 & 1. If we need to extended the RAID group, we must increase the disk by minimum 4 disks. 
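The character-by-character layout described above can be mimicked with a tiny script. This is purely illustrative — real RAID 10 stripes fixed-size blocks, not letters — but it shows where each piece of "apple" lands:

```shell
#!/bin/sh
# Toy model of the RAID 10 layout described above: stripe "apple" across
# two disks (RAID 0), then mirror each stripe onto a second disk (RAID 1).
data="apple"
d1=""; d2=""
i=0
while [ "$i" -lt "${#data}" ]; do
  ch=$(printf '%s\n' "$data" | cut -c "$((i + 1))")
  if [ $((i % 2)) -eq 0 ]; then
    d1="$d1$ch"        # even positions go to the first stripe
  else
    d2="$d2$ch"        # odd positions go to the second stripe
  fi
  i=$((i + 1))
done
echo "disk1 (stripe): $d1   disk3 (mirror of disk1): $d1"
echo "disk2 (stripe): $d2   disk4 (mirror of disk2): $d2"
```

This prints disk1/disk3 holding "ape" and disk2/disk4 holding "pl", matching the round-robin distribution described above.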
- -**My Server Setup** - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.229 - Hostname : rd10.tecmintlocal.com - Disk 1 [20GB] : /dev/sdd - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd - Disk 4 [20GB] : /dev/sde - -There are two ways to setup RAID 10, but here I’m going to show you both methods, but I prefer you to follow the first method, which makes the work lot easier for setting up a RAID 10. - -### Method 1: Setting Up Raid 10 ### - -1. First, verify that all the 4 added disks are detected or not using the following command. - - # ls -l /dev | grep sd - -2. Once the four disks are detected, it’s time to check for the drives whether there is already any raid existed before creating a new one. - - # mdadm -E /dev/sd[b-e] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde - -![Verify 4 Added Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-4-Added-Disks.png) - -Verify 4 Added Disks - -**Note**: In the above output, you see there isn’t any super-block detected yet, that means there is no RAID defined in all 4 drives. - -#### Step 1: Drive Partitioning for RAID #### - -3. Now create a new partition on all 4 disks (/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde) using the ‘fdisk’ tool. - - # fdisk /dev/sdb - # fdisk /dev/sdc - # fdisk /dev/sdd - # fdisk /dev/sde - -**Create /dev/sdb Partition** - -Let me show you how to partition one of the disk (/dev/sdb) using fdisk, this steps will be the same for all the other disks too. - - # fdisk /dev/sdb - -Please use the below steps for creating a new partition on /dev/sdb drive. - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. -- Then choose ‘1‘ to be the first partition. -- Next press ‘p‘ to print the created partition. -- Change the Type, If we need to know the every available types Press ‘L‘. -- Here, we are selecting ‘fd‘ as my type is RAID. -- Next press ‘p‘ to print the defined partition. 
-- Then again use ‘p‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Disk sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-sdb-Partition.png) - -Disk sdb Partition - -**Note**: Please use the above same instructions for creating partitions on other disks (sdc, sdd sdd sde). - -4. After creating all 4 partitions, again you need to examine the drives for any already existing raid using the following command. - - # mdadm -E /dev/sd[b-e] - # mdadm -E /dev/sd[b-e]1 - - OR - - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde - # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 - -![Check All Disks for Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Check-All-Disks-for-Raid.png) - -Check All Disks for Raid - -**Note**: The above outputs shows that there isn’t any super-block detected on all four newly created partitions, that means we can move forward to create RAID 10 on these drives. - -#### Step 2: Creating ‘md’ RAID Device #### - -5. Now it’s time to create a ‘md’ (i.e. /dev/md0) device, using ‘mdadm’ raid management tool. Before, creating device, your system must have ‘mdadm’ tool installed, if not install it first. - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -Once ‘mdadm’ tool installed, you can now create a ‘md’ raid device using the following command. - - # mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 - -6. Next verify the newly created raid device using the ‘cat’ command. - - # cat /proc/mdstat - -![Create md raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-raid-Device.png) - -Create md raid Device - -7. Next, examine all the 4 drives using the below command. The output of the below command will be long as it displays the information of all 4 disks. - - # mdadm --examine /dev/sd[b-e]1 - -8. Next, check the details of Raid Array with the help of following command. 
- - # mdadm --detail /dev/md0 - -![Check Raid Array Details](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Array-Details.png) - -Check Raid Array Details - -**Note**: You see in the above results, that the status of Raid was active and re-syncing. - -#### Step 3: Creating Filesystem #### - -9. Create a file system using ext4 for ‘md0′ and mount it under ‘/mnt/raid10‘. Here, I’ve used ext4, but you can use any filesystem type if you want. - - # mkfs.ext4 /dev/md0 - -![Create md Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-Filesystem.png) - -Create md Filesystem - -10. After creating filesystem, mount the created file-system under ‘/mnt/raid10‘ and list the contents of the mount point using ‘ls -l’ command. - - # mkdir /mnt/raid10 - # mount /dev/md0 /mnt/raid10/ - # ls -l /mnt/raid10/ - -Next, add some files under mount point and append some text in any one of the file and check the content. - - # touch /mnt/raid10/raid10_files.txt - # ls -l /mnt/raid10/ - # echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt - # cat /mnt/raid10/raid10_files.txt - -![Mount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-md-Device.png) - -Mount md Device - -11. For automounting, open the ‘/etc/fstab‘ file and append the below entry in fstab, may be mount point will differ according to your environment. Save and quit using wq!. - - # vim /etc/fstab - - /dev/md0 /mnt/raid10 ext4 defaults 0 0 - -![AutoMount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/AutoMount-md-Device.png) - -AutoMount md Device - -12. Next, verify the ‘/etc/fstab‘ file for any errors before restarting the system using ‘mount -a‘ command. - - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Errors-in-Fstab.png) - -Check Errors in Fstab - -#### Step 4: Save RAID Configuration #### - -13. 
By default RAID don’t have a config file, so we need to save it manually after making all the above steps, to preserve these settings during system boot. - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid10 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid10-Configuration.png) - -Save Raid10 Configuration - -That’s it, we have created RAID 10 using method 1, this method is the easier one. Now let’s move forward to setup RAID 10 using method 2. - -### Method 2: Creating RAID 10 ### - -1. In method 2, we have to define 2 sets of RAID 1 and then we need to define a RAID 0 using those created RAID 1 sets. Here, what we will do is to first create 2 mirrors (RAID1) and then striping over RAID0. - -First, list the disks which are all available for creating RAID 10. - - # ls -l /dev | grep sd - -![List 4 Devices](http://www.tecmint.com/wp-content/uploads/2014/11/List-4-Devices.png) - -List 4 Devices - -2. Partition the all 4 disks using ‘fdisk’ command. For partitioning, you can follow #step 3 above. - - # fdisk /dev/sdb - # fdisk /dev/sdc - # fdisk /dev/sdd - # fdisk /dev/sde - -3. After partitioning all 4 disks, now examine the disks for any existing raid blocks. - - # mdadm --examine /dev/sd[b-e] - # mdadm --examine /dev/sd[b-e]1 - -![Examine 4 Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-4-Disks.png) - -Examine 4 Disks - -#### Step 1: Creating RAID 1 #### - -4. First let me create 2 sets of RAID 1 using 4 disks ‘sdb1′ and ‘sdc1′ and other set using ‘sdd1′ & ‘sde1′. 
- - # mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1 - # mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1 - # cat /proc/mdstat - -![Creating Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png) - -Creating Raid 1 - -![Check Details of Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png) - -Check Details of Raid 1 - -#### Step 2: Creating RAID 0 #### - -5. Next, create the RAID 0 using the md1 and md2 devices. - - # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2 - # cat /proc/mdstat - -![Creating Raid 0](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-0.png) - -Creating Raid 0 - -#### Step 3: Save RAID Configuration #### - -6. We need to save the configuration under ‘/etc/mdadm.conf‘ so that all raid devices are loaded at every reboot. - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -After this, we need to follow #step 3 (Creating Filesystem) of method 1. - -That’s it! We have created RAID 1+0 using method 2. We will lose two disks’ worth of space here, but the performance will be excellent compared to any other raid setup. - -### Conclusion ### - -Here we have created RAID 10 using two methods. RAID 10 offers good performance and redundancy. Hope this helps you understand RAID 10, the nested RAID level. We will see how to grow an existing raid array and much more in upcoming articles. 
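As a side note on the capacity trade-off mentioned above, the usable size of a RAID 10 array is always half its raw size, because every block is mirrored once. A quick sketch of the arithmetic (the helper function is my own illustration, not part of the article):

```shell
#!/bin/sh
# Hypothetical helper (not from the article): usable capacity of a RAID 10
# array. Every block is mirrored once, so usable space is half the raw total.
raid10_usable_gb() {
    disks=$1     # number of member disks (even, at least 4)
    size_gb=$2   # capacity of each disk in GB
    echo $(( disks / 2 * size_gb ))
}

# Four 20 GB disks, as used in this article: 80 GB raw, 40 GB usable.
raid10_usable_gb 4 20    # prints 40
```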
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid-10-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ \ No newline at end of file From fe8f7603a69585d6bbaa8f75149dd0a5558b636f Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 26 Aug 2015 00:02:53 +0800 Subject: [PATCH 303/697] Create Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md --- ...ing Up RAID 10 or 1+0 (Nested) in Linux.md | 277 ++++++++++++++++++ 1 file changed, 277 insertions(+) create mode 100644 translated/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md diff --git a/translated/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md b/translated/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md new file mode 100644 index 0000000000..850f6c3e49 --- /dev/null +++ b/translated/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md @@ -0,0 +1,277 @@ + +在 Linux 中设置 RAID 10 或 1 + 0(嵌套) - 第6部分 +================================================================================ +RAID 10 是结合 RAID 0 和 RAID 1 形成的。要设置 RAID 10,我们至少需要4个磁盘。在之前的文章中,我们已经看到了如何使用两个磁盘设置 RAID 0 和 RAID 1。 + +在这里,我们将使用最少4个磁盘结合 RAID 0 和 RAID 1 来设置 RAID 10。假设,我们已经在逻辑卷保存了一些数据,这是 RAID 10 创建的,如果我们要保存数据“apple”,它将使用以下方法将其保存在4个磁盘中。 + +![Create Raid 10 in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/raid10.jpg) + +在 Linux 中创建 Raid 10 + +使用 RAID 0 时,它将“A”保存在第一个磁盘,“p”保存在第二个磁盘,下一个“P”又在第一个磁盘,“L”在第二个磁盘。然后,“e”又在第一个磁盘,像这样它会继续循环此过程将数据保存完整。由此我们知道,RAID 0 是将数据的一半保存到第一个磁盘,另一半保存到第二个磁盘。 + +在 RAID 1 方法中,相同的数据将被写入到两个磁盘中。 “A”将同时被写入到第一和第二个磁盘中,“P”也将被同时写入到两个磁盘中,下一个“P”也将同时被写入到两个磁盘。因此,使用 RAID 1 将同时写入到两个磁盘。它将继续循环此过程。 + +现在大家来了解 RAID 10 怎样结合 RAID 0 和 RAID 1 来工作。如果我们有4个20 GB 的磁盘,总共为 80 GB,但我们将只能得到40 GB 
的容量,另一半的容量将用于构建 RAID 10。 + +#### RAID 10 的优点和缺点 #### + +- 提供更好的性能。 +- 在 RAID 10 中我们将失去两个磁盘的容量。 +- 读与写的性能将会很好,因为它会同时进行写入和读取。 +- 它能解决数据库的高 I/O 磁盘写操作。 + +#### 要求 #### + +在 RAID 10 中,我们至少需要4个磁盘,2个磁盘为 RAID 0,其他2个磁盘为 RAID 1,就像我之前说的,RAID 10 仅仅是结合了 RAID 0和1。如果我们需要扩展 RAID 组,最少需要添加4个磁盘。 + +**我的服务器设置** + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.229 + Hostname : rd10.tecmintlocal.com + Disk 1 [20GB] : /dev/sdd + Disk 2 [20GB] : /dev/sdc + Disk 3 [20GB] : /dev/sdd + Disk 4 [20GB] : /dev/sde + +有两种方法来设置 RAID 10,在这里两种方法我都会演示,但我更喜欢第一种方法,使用它来设置 RAID 10 更简单。 + +### 方法1:设置 RAID 10 ### + +1.首先,使用以下命令确认所添加的4块磁盘没有被使用。 + + # ls -l /dev | grep sd + +2.四个磁盘被检测后,然后来检查磁盘是否存在 RAID 分区。 + + # mdadm -E /dev/sd[b-e] + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde + +![Verify 4 Added Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-4-Added-Disks.png) + +验证添加的4块磁盘 + +**注意**: 在上面的输出中,如果没有检测到 super-block 意味着在4块磁盘中没有定义过 RAID。 + +#### 第1步:为 RAID 分区 #### + +3.现在,使用‘fdisk’,命令为4个磁盘(/dev/sdb, /dev/sdc, /dev/sdd 和 /dev/sde)创建新分区。 + + # fdisk /dev/sdb + # fdisk /dev/sdc + # fdisk /dev/sdd + # fdisk /dev/sde + +**为 /dev/sdb 创建分区** + +我来告诉你如何使用 fdisk 为磁盘(/dev/sdb)进行分区,此步也适用于其他磁盘。 + + # fdisk /dev/sdb + +请使用以下步骤为 /dev/sdb 创建一个新的分区。 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 ‘P’ 来打印创建好的分区。 +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 去修改分区。 +- 键入 ‘fd’ 设置为 Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Disk sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-sdb-Partition.png) + +为磁盘 sdb 分区 + +**注意**: 请使用上面相同的指令对其他磁盘(sdc, sdd sdd sde)进行分区。 + +4.创建好4个分区后,需要使用下面的命令来检查磁盘是否存在 raid。 + + # mdadm -E /dev/sd[b-e] + # mdadm -E /dev/sd[b-e]1 + + 或者 + + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde + # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 + +![Check All Disks for Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Check-All-Disks-for-Raid.png) + +检查磁盘 + +**注意**: 以上输出显示,新创建的四个分区中没有检测到 
super-block,这意味着我们可以继续在这些磁盘上创建 RAID 10。 + +#### 第2步: 创建 RAID 设备 ‘md’ #### + +5.现在改创建一个‘md’(即 /dev/md0)设备,使用“mdadm” raid 管理工具。在创建设备之前,必须确保系统已经安装了‘mdadm’工具,如果没有请使用下面的命令来安装。 + + # yum install mdadm [on RedHat systems] + # apt-get install mdadm [on Debain systems] + +‘mdadm’工具安装完成后,可以使用下面的命令创建一个‘md’ raid 设备。 + + # mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 + +6.接下来使用‘cat’命令验证新创建的 raid 设备。 + + # cat /proc/mdstat + +![Create md raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-raid-Device.png) + +创建 md raid 设备 + +7.接下来,使用下面的命令来检查4个磁盘。下面命令的输出会很长,因为它会显示4个磁盘的所有信息。 + + # mdadm --examine /dev/sd[b-e]1 + +8.接下来,使用以下命令来查看 RAID 阵列的详细信息。 + + # mdadm --detail /dev/md0 + +![Check Raid Array Details](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Array-Details.png) + +查看 Raid 阵列详细信息 + +**注意**: 你在上面看到的结果,该 RAID 的状态是 active 和re-syncing。 + +#### 第3步:创建文件系统 #### + +9.使用 ext4 作为‘md0′的文件系统并将它挂载到‘/mnt/raid10‘下。在这里,我用的是 ext4,你可以使用你想要的文件系统类型。 + + # mkfs.ext4 /dev/md0 + +![Create md Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-Filesystem.png) + +创建 md 文件系统 + +10.在创建文件系统后,挂载文件系统到‘/mnt/raid10‘下,并使用‘ls -l’命令列出挂载点下的内容。 + + # mkdir /mnt/raid10 + # mount /dev/md0 /mnt/raid10/ + # ls -l /mnt/raid10/ + +接下来,在挂载点下创建一些文件,并在文件中添加些内容,然后检查内容。 + + # touch /mnt/raid10/raid10_files.txt + # ls -l /mnt/raid10/ + # echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt + # cat /mnt/raid10/raid10_files.txt + +![Mount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-md-Device.png) + +挂载 md 设备 + +11.要想自动挂载,打开‘/etc/fstab‘文件并添加下面的条目,挂载点根据你环境的不同来添加。使用 wq! 
保存并退出。 + + # vim /etc/fstab + + /dev/md0 /mnt/raid10 ext4 defaults 0 0 + +![AutoMount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/AutoMount-md-Device.png) + +挂载 md 设备 + +12.接下来,在重新启动系统前使用‘mount -a‘来确认‘/etc/fstab‘文件是否有错误。 + + # mount -av + +![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Errors-in-Fstab.png) + +检查 Fstab 中的错误 + +#### 第四步:保存 RAID 配置 #### + +13.默认情况下 RAID 没有配置文件,所以我们需要在上述步骤完成后手动保存它。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + +![Save Raid10 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid10-Configuration.png) + +保存 Raid10 的配置 + +就这样,我们使用方法1创建完了 RAID 10,这种方法是比较容易的。现在,让我们使用方法2来设置 RAID 10。 + +### 方法2:创建 RAID 10 ### + +1.在方法2中,我们必须定义2组 RAID 1,然后我们需要使用这些创建好的 RAID 1 的集来定义一个 RAID 0。在这里,我们将要做的是先创建2个镜像(RAID1),然后创建 RAID0 (条带化)。 + +首先,列出所有的可用于创建 RAID 10 的磁盘。 + + # ls -l /dev | grep sd + +![List 4 Devices](http://www.tecmint.com/wp-content/uploads/2014/11/List-4-Devices.png) + +列出了 4 设备 + +2.将4个磁盘使用‘fdisk’命令进行分区。对于如何分区,您可以按照 #步骤 3。 + + # fdisk /dev/sdb + # fdisk /dev/sdc + # fdisk /dev/sdd + # fdisk /dev/sde + +3.在完成4个磁盘的分区后,现在检查磁盘是否存在 RAID块。 + + # mdadm --examine /dev/sd[b-e] + # mdadm --examine /dev/sd[b-e]1 + +![Examine 4 Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-4-Disks.png) + +检查 4 个磁盘 + +#### 第1步:创建 RAID 1 #### + +4.首先,使用4块磁盘创建2组 RAID 1,一组为‘sdb1′和 ‘sdc1′,另一组是‘sdd1′ 和 ‘sde1′。 + + # mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1 + # mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1 + # cat /proc/mdstat + +![Creating Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png) + +创建 Raid 1 + +![Check Details of Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png) + +查看 Raid 1 的详细信息 + +#### 第2步:创建 RAID 0 #### + +5.接下来,使用 md1 和 md2 来创建 RAID 0。 + + # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2 + # cat /proc/mdstat + 
+![Creating Raid 0](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-0.png) + +创建 Raid 0 + +#### 第3步:保存 RAID 配置 #### + +6.我们需要将配置文件保存在‘/etc/mdadm.conf‘文件中,使其每次重新启动后都能加载所有的 raid 设备。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + +在此之后,我们需要按照方法1中的#第3步来创建文件系统。 + +就是这样!我们采用的方法2创建完了 RAID 1+0.我们将会失去两个磁盘的空间,但相比其他 RAID ,它的性能将是非常好的。 + +### 结论 ### + +在这里,我们采用两种方法创建 RAID 10。RAID 10 具有良好的性能和冗余性。希望这篇文章可以帮助你了解 RAID 10(嵌套 RAID 的级别)。在后面的文章中我们会看到如何扩展现有的 RAID 阵列以及更多精彩的。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid-10-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ From 7b242fb2a91191ed601102eff134a0edbace3389 Mon Sep 17 00:00:00 2001 From: Xuanwo Date: Wed, 26 Aug 2015 06:10:36 +0800 Subject: [PATCH 304/697] finish the translating of 20150817 Top 5 Torrent Clients For Ubuntu Linux --- ... Top 5 Torrent Clients For Ubuntu Linux.md | 119 ----------------- ... 
Top 5 Torrent Clients For Ubuntu Linux.md | 120 ++++++++++++++++++ 2 files changed, 120 insertions(+), 119 deletions(-) delete mode 100644 sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md create mode 100644 translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md diff --git a/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md b/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md deleted file mode 100644 index 95ad8d2b5d..0000000000 --- a/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md +++ /dev/null @@ -1,119 +0,0 @@ -Translating by Xuanwo - -Top 5 Torrent Clients For Ubuntu Linux -================================================================================ -![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png) - -Looking for the **best torrent client in Ubuntu**? Indeed there are a number of torrent clients available for desktop Linux. But which ones are the **best Ubuntu torrent clients** among them? - -I am going to list top 5 torrent clients for Linux, which are lightweight, feature rich and have impressive GUI. Ease of installation and using is also a factor. - -### Best torrent programs for Ubuntu ### - -Since Ubuntu comes by default with Transmission, I am going to exclude it from the list. This doesn’t mean that Transmission doesn’t deserve to be on the list. Transmission is a good to have torrent client for Ubuntu and this is the reason why it is the default Torrent application in several Linux distributions, including Ubuntu. - ----------- - -### Deluge ### - -![Logo of Deluge torrent client for Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Deluge.png) - -[Deluge][1] has been chosen as the best torrent client for Linux by Lifehacker and that speaks itself of the usefulness of Deluge. 
And it’s not just Lifehacker who is fan of Deluge, check out any forum and you’ll find a number of people admitting that Deluge is their favorite. - -Fast, sleek and intuitive interface makes Deluge a hot favorite among Linux users. - -Deluge is available in Ubuntu repositories and you can install it in Ubuntu Software Center or by using the command below: - - sudo apt-get install deluge - ----------- - -### qBittorrent ### - -![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png) - -As the name suggests, [qBittorrent][2] is the Qt version of famous [Bittorrent][3] application. You’ll see an interface similar to Bittorrent client in Windows, if you ever used it. Sort of lightweight and have all the standard features of a torrent program, qBittorrent is also available in default Ubuntu repository. - -It could be installed from Ubuntu Software Center or using the command below: - - sudo apt-get install qbittorrent - ----------- - -### Tixati ### - -![Tixati torrent client logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/tixati_icon.png) - -[Tixati][4] is another nice to have torrent client for Ubuntu. It has a default dark theme which might be preferred by many but not me. It has all the standard features that you can seek in a torrent client. - -In addition to that, there are additional feature of data analysis. You can measure and analyze bandwidth and other statistics in nice charts. - -- [Download Tixati][5] - ----------- - -### Vuze ### - -![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png) - -[Vuze][6] is favorite torrent application of a number of Linux as well as Windows users. Apart from the standard features, you can search for torrents directly in the application. 
You can also subscribe to episodic content so that you won’t have to search for new contents as you can see it in your subscription in sidebar. - -It also comes with a video player that can play HD videos with subtitles and all. But I don’t think you would like to use it over the better video players such as VLC. - -Vuze can be installed from Ubuntu Software Center or using the command below: - - sudo apt-get install vuze - ----------- - -### Frostwire ### - -![Logo of Frostwire torrent client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/frostwire.png) - -[Frostwire][7] is the torrent application you might want to try. It is more than just a simple torrent client. Also available for Android, you can use it to share files over WiFi. - -You can search for torrents from within the application and play them inside the application. In addition to the downloaded files, it can browse your local media and have them organized inside the player. The same is applicable for the Android version. - -An additional feature is that Frostwire also provides access to legal music by indi artists. You can download them and listen to it, for free, for legal. - -- [Download Frostwire][8] - ----------- - -### Honorable mention ### - -On Windows, uTorrent (pronounced mu torrent) is my favorite torrent application. While uTorrent may be available for Linux, I deliberately skipped it from the list because installing and using uTorrent in Linux is neither easy nor does it provide a complete application experience (runs with in web browser). - -You can read about uTorrent installation in Ubuntu [here][9]. - -#### Quick tip: #### - -Most of the time, torrent applications do not start by default. You might want to change this behavior. Read this post to learn [how to manage startup applications in Ubuntu][10]. - -### What’s your favorite? ### - -That was my opinion on the best Torrent clients in Ubuntu. What is your favorite one? Do leave a comment. 
You can also check the [best download managers for Ubuntu][11] in related posts. And if you use Popcorn Time, check these [Popcorn Time Tips][12]. - --------------------------------------------------------------------------------- - -via: http://itsfoss.com/best-torrent-ubuntu/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:http://deluge-torrent.org/ -[2]:http://www.qbittorrent.org/ -[3]:http://www.bittorrent.com/ -[4]:http://www.tixati.com/ -[5]:http://www.tixati.com/download/ -[6]:http://www.vuze.com/ -[7]:http://www.frostwire.com/ -[8]:http://www.frostwire.com/downloads -[9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/ -[10]:http://itsfoss.com/manage-startup-applications-ubuntu/ -[11]:http://itsfoss.com/4-best-download-managers-for-linux/ -[12]:http://itsfoss.com/popcorn-time-tips/ diff --git a/translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md b/translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md new file mode 100644 index 0000000000..0ba1ac3e03 --- /dev/null +++ b/translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md @@ -0,0 +1,120 @@ +Translating by Xuanwo + +介绍Ubuntu下五大BT客户端 +================================================================================ +![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png) + +你在寻找**Ubuntu中最好的BT客户端**吗?事实上,桌面平台中有许多可用的BT客户端,但是它们中的哪些才是**最好的**呢? 
+ +我将会列出最好的五个BT客户端,它们都拥有着体积轻盈,功能强大的特点,而且还有令人印象深刻的用户界面。自然,易于安装和使用也是特性之一。 + +### Ubuntu下最好的BT客户端 ### + +考虑到Ubuntu默认安装了Transmission,所以我将会从这个列表中删去Transmission。但是这并不意味着Transmission没有资格出现在这个列表中,事实上,Transmission是一个非常好的BT客户端,这也正是它被多个发行版默认安装的原因,Ubuntu也不例外。 + +---------- + +### Deluge ### + +![Logo of Deluge torrent client for Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Deluge.png) + +[Deluge][1] 被Lifehacker选为Linux下最好的BT客户端,这说明了Deluge是多么的有用。而且,并不仅仅只有Lifehacker是Deluge的粉丝,纵观多个论坛,你都会发现不少Deluge的忠实拥趸。 + +快速,时尚而且直观的界面使得Deluge成为Linux用户的挚爱。 + +Deluge可在Ubuntu的仓库中获取,你能够在Ubuntu软件中心中安装它,或者使用下面的命令: + + sudo apt-get install delugeFast, sleek and intuitive interface makes Deluge a hot favorite among Linux users. + +---------- + +### qBittorrent ### + +![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png) + +正如它的名字所暗示的,[qBittorrent][2] 是著名的 [Bittorrent][3] 应用的Qt版本。如果你曾经使用过它,你将会看到和Windows下的Bittorrent相似的界面。同样轻巧并且有着BT客户端的所有标准功能,qBittorrent也可以在Ubuntu的默认仓库中找到。 + +它可以通过Ubuntu软件仓库安装,或者使用下面的命令: + + sudo apt-get install qbittorrent + +---------- + +### Tixati ### + +![Tixati torrent client logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/tixati_icon.png) + +[Tixati][4] 是另一个不错的Ubuntu下的BT客户端。它有着一个默认的黑暗主题,尽管很多人喜欢,但是我例外。它拥有着一切你能在BT客户端中找到的功能。 + +除此之外,它还有着数据分析的额外功能。你可以在美观的图表中分析流量以及其它数据。 + +- [下载 Tixati][5] + +---------- + +### Vuze ### + +![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png) + +[Vuze][6]是许多Linux以及Windows用户最喜欢的BT客户端。除了标准的功能,你可以直接在应用程序中搜索种子。你也可以订阅系列片源,这样你就无需再去寻找新的片源了,因为你可以在侧边栏中的订阅看到它们。 + +它还配备了一个视频播放器,可以播放带有字幕的高清视频以及一切。但是我不认为你会用它来代替那些更好的视频播放器,比如VLC。 + +Vuze可以通过Ubuntu软件中心安装或者使用下列命令: + + sudo apt-get install vuze + +---------- + +### Frostwire ### + +![Logo of Frostwire torrent client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/frostwire.png) + 
+[Frostwire][7]可能会是一个你想要去尝试的应用。它不仅仅是一个简单的BT客户端,它还可以应用于安卓,你可以用它通过Wifi来共享文件。 + +你可以在应用中搜索种子并且播放他们。除了下载文件,它还可以浏览你本地的影音文件,并且将它们有条理的呈现在播放器中。这同样适用于安卓版本。 + +还有一个特点是:Frostwire提供了印度艺术家的合法音乐下载。你可以下载并且欣赏它们,免费而且合法。 + +- [下载 Frostwire][8] + +---------- + +### 荣誉奖 ### + +在Windows中,uTorrent(发音:mu torrent)是我最喜欢的BT应用。尽管uTorrent可以在Linux下运行,但是我还是特意忽略了它。因为在Linux下使用uTorrent不仅困难,而且无法获得完整的应用体验(运行在浏览器中)。 + +你可以[在这里][9]阅读Ubuntu下uTorrent的安装教程。 + +#### 快速提示: #### + +大多数情况下,BT应用不会默认自动自动启动。如果你想改变这一行为,阅读[如何管理Ubuntu下的自启程序][10]来学习。 + +### 你最喜欢的是什么? ### + +这些是我对于Ubuntu下最好的BT客户端的一键。你最喜欢的是什么呢?请发表评论。你也可以查看与本主题相关的[Ubuntu最好的下载管理器][11]。如果你使用Popcorn Time,试试[Popcorn Time技巧][12] + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/best-torrent-ubuntu/ + +作者:[Abhishek][a] +译者:[Xuanwo](https://github.com/Xuanwo) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://deluge-torrent.org/ +[2]:http://www.qbittorrent.org/ +[3]:http://www.bittorrent.com/ +[4]:http://www.tixati.com/ +[5]:http://www.tixati.com/download/ +[6]:http://www.vuze.com/ +[7]:http://www.frostwire.com/ +[8]:http://www.frostwire.com/downloads +[9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/ +[10]:http://itsfoss.com/manage-startup-applications-ubuntu/ +[11]:http://itsfoss.com/4-best-download-managers-for-linux/ +[12]:http://itsfoss.com/popcorn-time-tips/ + From 8040c6c76cfaa973d7700e59e51f724a88a5e961 Mon Sep 17 00:00:00 2001 From: bazz2 Date: Wed, 26 Aug 2015 07:54:27 +0800 Subject: [PATCH 305/697] [bazz2 translating]Linux about to gain a new file system--bcachefs --- ...20150824 Linux about to gain a new file system--bcachefs.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150824 Linux about to gain a new file system--bcachefs.md b/sources/talk/20150824 Linux about to gain a new file 
system--bcachefs.md index df3cd14682..9568d05836 100644 --- a/sources/talk/20150824 Linux about to gain a new file system--bcachefs.md +++ b/sources/talk/20150824 Linux about to gain a new file system--bcachefs.md @@ -1,3 +1,4 @@ +[bazz2222222] Linux about to gain a new file system – bcachefs ================================================================================ A five year old file system built by Kent Overstreet, formerly of Google, is near feature complete with all critical components in place. Bcachefs boasts the performance and reliability of the widespread ext4 and xfs as well as the feature list similar to that of btrfs and zfs. Notable features include checksumming, compression, multiple devices, caching and eventually snapshots and other “nifty” features. @@ -22,4 +23,4 @@ via: http://www.linuxveda.com/2015/08/22/linux-gain-new-file-system-bcachefs/ [a]:http://www.linuxveda.com/author/paul_hill/ [1]:https://en.wikipedia.org/wiki/Copy-on-write -[2]:https://lkml.org/lkml/2015/8/21/22 \ No newline at end of file +[2]:https://lkml.org/lkml/2015/8/21/22 From 2c0561dc2f517c08dca59d4fa1948c18cb2c0704 Mon Sep 17 00:00:00 2001 From: Xuanwo Date: Wed, 26 Aug 2015 08:41:08 +0800 Subject: [PATCH 306/697] apply for the LFCS translate --- ...eate Edit and Manipulate files in Linux.md | 2 ++ ...ng and Linux Filesystem Troubleshooting.md | 28 ++++++++++--------- ...and Use vi or vim as a Full Text Editor.md | 4 ++- ...e Attributes and Finding Files in Linux.md | 8 ++++-- ...esystems and Configuring Swap Partition.md | 2 ++ ...work Samba and NFS Filesystems in Linux.md | 2 ++ ...es – Creating & Managing System Backups.md | 22 ++++++++------- ...d Services SysVinit Systemd and Upstart.md | 6 ++-- ...es and Enabling sudo Access on Accounts.md | 4 ++- ...th Yum RPM Apt Dpkg Aptitude and Zypper.md | 12 ++++---- 10 files changed, 55 insertions(+), 35 deletions(-) diff --git a/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate 
files in Linux.md b/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md index ca96b7dac6..083078fa62 100644 --- a/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md +++ b/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 1 - LFCS: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux ================================================================================ The Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, a new program that aims at helping individuals all over the world to get certified in basic to intermediate system administration tasks for Linux systems. This includes supporting running systems and services, along with first-hand troubleshooting and analysis, and smart decision-making to escalate issues to engineering teams. 
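Since this part of the series covers GNU sed, a few self-contained one-liners of the kind the article walks through may be helpful; they read from pipes and touch no files, so they are safe to try as-is:

```shell
#!/bin/sh
# Substitute a pattern on every line (the classic s/old/new/ form).
printf 'old dog\nold cat\n' | sed 's/old/young/g'

# Print only the second line of the input.
printf 'one\ntwo\nthree\n' | sed -n '2p'

# Delete comment lines starting with '#'.
printf 'keep\n# drop me\nkeep\n' | sed '/^#/d'
```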
diff --git a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md index 45029ac20e..5dd1782a98 100644 --- a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md +++ b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 10 - LFCS: Understanding & Learning Basic Shell Scripting and Linux Filesystem Troubleshooting ================================================================================ The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new initiative whose purpose is to allow individuals everywhere (and anywhere) to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams. @@ -99,10 +101,10 @@ Execute Script Whenever you need to specify different courses of action to be taken in a shell script, as result of the success or failure of a command, you will use the if construct to define such conditions. Its basic syntax is: - if CONDITION; then + if CONDITION; then COMMANDS; else - OTHER-COMMANDS + OTHER-COMMANDS fi Where CONDITION can be one of the following (only the most frequent conditions are cited here) and evaluates to true when: @@ -133,8 +135,8 @@ Where CONDITION can be one of the following (only the most frequent conditions a This loop allows to execute one or more commands for each value in a list of values. 
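The if and for constructs described in this hunk can be exercised with a harmless stand-in for the service checks; the check_file helper below is my own illustration, not from the guide:

```shell
#!/bin/sh
# Illustrates the "if CONDITION; then ... else ... fi" construct and the
# "for item in SEQUENCE; do ... done" loop from the text above.
check_file() {
    if [ -f "$1" ]; then
        echo "$1: regular file"
    else
        echo "$1: missing"
    fi
}

# Iterate over a list of paths, applying the same test to each one.
for path in /etc/hostname /no/such/file; do
    check_file "$path"
done
```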
Its basic syntax is: - for item in SEQUENCE; do - COMMANDS; + for item in SEQUENCE; do + COMMANDS; done Where item is a generic variable that represents each value in SEQUENCE during each iteration. @@ -143,8 +145,8 @@ Where item is a generic variable that represents each value in SEQUENCE during e This loop allows to execute a series of repetitive commands as long as the control command executes with an exit status equal to zero (successfully). Its basic syntax is: - while EVALUATION_COMMAND; do - EXECUTE_COMMANDS; + while EVALUATION_COMMAND; do + EXECUTE_COMMANDS; done Where EVALUATION_COMMAND can be any command(s) that can exit with a success (0) or failure (other than 0) status, and EXECUTE_COMMANDS can be any program, script or shell construct, including other nested loops. @@ -158,7 +160,7 @@ We will demonstrate the use of the if construct and the for loop with the follow Let’s create a file with a list of services that we want to monitor at a glance. # cat myservices.txt - + sshd mariadb httpd @@ -172,10 +174,10 @@ Script to Monitor Linux Services Our shell script should look like. #!/bin/bash - + # This script iterates over a list of services and # is used to determine whether they are running or not. - + for service in $(cat myservices.txt); do systemctl status $service | grep --quiet "running" if [ $? -eq 0 ]; then @@ -214,10 +216,10 @@ Services Monitoring Script We could go one step further and check for the existence of myservices.txt before even attempting to enter the for loop. #!/bin/bash - + # This script iterates over a list of services and # is used to determine whether they are running or not. 
- + if [ -f myservices.txt ]; then for service in $(cat myservices.txt); do systemctl status $service | grep --quiet "running" @@ -238,9 +240,9 @@ You may want to maintain a list of hosts in a text file and use a script to dete The read shell built-in command tells the while loop to read myhosts line by line and assigns the content of each line to variable host, which is then passed to the ping command. #!/bin/bash - + # This script is used to demonstrate the use of a while loop - + while read host; do ping -c 2 $host done < myhosts diff --git a/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md b/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md index 7537f784bd..1d069e08ea 100644 --- a/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md +++ b/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 2 - LFCS: How to Install and Use vi/vim as a Full Text Editor ================================================================================ A couple of months ago, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification in order to help individuals from all over the world to verify they are capable of doing basic to intermediate system administration tasks on Linux systems: system support, first-hand troubleshooting and maintenance, plus intelligent decision-making to know when it’s time to raise issues to upper support teams. @@ -295,7 +297,7 @@ Vi Search String in File c). vi uses a command (similar to sed’s) to perform substitution operations over a range of lines or an entire file. To change the word “old” to “young” for the entire file, we must enter the following command. - :%s/old/young/g + :%s/old/young/g **Notice**: The colon at the beginning of the command. 
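The while/read pattern from the myhosts script above can be tried without ping or a hosts file by feeding the loop a here-document; the host names below are placeholders:

```shell
#!/bin/sh
# Same "while read host; do ... done < FILE" shape as the myhosts script,
# but reading a here-document and echoing instead of pinging, so it runs
# anywhere.
while read -r host; do
    echo "would ping: $host"
done <<EOF
localhost
192.168.0.29
EOF
```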
diff --git a/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md b/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md index 6ac3d104a0..77fe5cf040 100644 --- a/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md +++ b/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 3 - LFCS: How to Archive/Compress Files & Directories, Setting File Attributes and Finding Files in Linux ================================================================================ Recently, the Linux Foundation started the LFCS (Linux Foundation Certified Sysadmin) certification, a brand new program whose purpose is allowing individuals from all corners of the globe to have access to an exam, which if approved, certifies that the person is knowledgeable in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-level troubleshooting and analysis, plus the ability to decide when to escalate issues to engineering teams. 
@@ -178,9 +180,9 @@ List Archive Content Run any of the following commands: - # gzip -d myfiles.tar.gz [#1] - # bzip2 -d myfiles.tar.bz2 [#2] - # xz -d myfiles.tar.xz [#3] + # gzip -d myfiles.tar.gz [#1] + # bzip2 -d myfiles.tar.bz2 [#2] + # xz -d myfiles.tar.xz [#3] Then diff --git a/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md b/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md index ada637fabb..93e4b2966b 100644 --- a/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md +++ b/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 4 - LFCS: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition ================================================================================ Last August, the Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators to show, through a performance-based exam, that they can perform overall operational support of Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation – if needed – to other support teams. 
diff --git a/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md b/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md index 1544a378bc..4316e32c16 100644 --- a/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md +++ b/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 5 - LFCS: How to Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux ================================================================================ The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is allowing individuals from all corners of the globe to get certified in basic to intermediate system administration tasks for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams. 
diff --git a/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md b/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md index bdabfb1f9d..901fb7b4f1 100644 --- a/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md +++ b/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 6 - LFCS: Assembling Partitions as RAID Devices – Creating & Managing System Backups ================================================================================ Recently, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification, a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of performing overall operational support on Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation, when required, to other support teams. @@ -24,7 +26,7 @@ However, the actual fault-tolerance and disk I/O performance lean on how the har Our tool of choice for creating, assembling, managing, and monitoring our software RAIDs is called mdadm (short for multiple disks admin). ---------------- Debian and Derivatives ---------------- - # aptitude update && aptitude install mdadm + # aptitude update && aptitude install mdadm ---------- @@ -34,7 +36,7 @@ Our tool of choice for creating, assembling, managing, and monitoring our softwa ---------- ---------------- On openSUSE ---------------- - # zypper refresh && zypper install mdadm # + # zypper refresh && zypper install mdadm # #### Assembling Partitions as RAID Devices #### @@ -55,7 +57,7 @@ Creating RAID Array After creating the RAID array, you can check the status of the array using the following commands.
# cat /proc/mdstat - or + or # mdadm --detail /dev/md0 [More detailed summary] ![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png) @@ -203,16 +205,16 @@ The downside of this backup approach is that the image will have the same size a # dd if=/dev/sda of=/system_images/sda.img OR - --------------------- Alternatively, you can compress the image file --------------------- - # dd if=/dev/sda | gzip -c > /system_images/sda.img.gz + --------------------- Alternatively, you can compress the image file --------------------- + # dd if=/dev/sda | gzip -c > /system_images/sda.img.gz **Restoring the backup from the image file** # dd if=/system_images/sda.img of=/dev/sda - OR - - --------------------- Depending on your choice while creating the image --------------------- - gzip -dc /system_images/sda.img.gz | dd of=/dev/sda + OR + + --------------------- Depending on your choice while creating the image --------------------- + gzip -dc /system_images/sda.img.gz | dd of=/dev/sda Method 2: Backup certain files / directories with tar command – already covered in [Part 3][3] of this series. You may consider using this method if you need to keep copies of specific files and directories (configuration files, users’ home directories, and so on). @@ -247,7 +249,7 @@ Synchronizing remote → local directories over ssh. In this case, switch the source and destination directories from the previous example. - # rsync -avzhe ssh root@remote_host:/remote_directory/ backups + # rsync -avzhe ssh root@remote_host:/remote_directory/ backups Please note that these are only 3 examples (most frequent cases you’re likely to run into) of the use of rsync. More examples and usages of rsync commands can be found in the following article.
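The Part 6 hunks above only realign whitespace around the `dd`/`gzip` image-backup commands; as a hedged sketch of that pipeline (run against a throwaway image file instead of a real `/dev/sda`, since imaging a live disk requires root — the file names are invented for the demo):

```shell
# Back up a "disk" to a compressed image and restore it, using a plain
# file in place of /dev/sda so the demo is safe to run unprivileged.
set -e
rm -rf dd_demo && mkdir dd_demo && cd dd_demo
dd if=/dev/urandom of=disk.img bs=1024 count=64 2>/dev/null
# Compressed backup (mirrors: dd if=/dev/sda | gzip -c > sda.img.gz)
dd if=disk.img 2>/dev/null | gzip -c > disk.img.gz
# Restore (mirrors: gzip -dc sda.img.gz | dd of=/dev/sda)
gzip -dc disk.img.gz | dd of=restored.img 2>/dev/null
cmp -s disk.img restored.img && echo "images match"
```

Because gzip is streaming, the backup never needs scratch space for an uncompressed copy — the same reason the article pipes `dd` straight into `gzip -c`.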
diff --git a/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md index b024c89540..4b7cdf9fe2 100644 --- a/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md +++ b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 7 - LFCS: Managing System Startup Process and Services (SysVinit, Systemd and Upstart) ================================================================================ A couple of months ago, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, an exciting new program whose aim is allowing individuals from all ends of the world to get certified in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-hand problem-finding and analysis, plus the ability to decide when to raise issues to engineering teams. @@ -267,7 +269,7 @@ Starting Stoping Services Under systemd you can enable or disable a service when it boots. - # systemctl enable [service] # enable a service + # systemctl enable [service] # enable a service # systemctl disable [service] # prevent a service from starting at boot The process of enabling or disabling a service to start automatically on boot consists in adding or removing symbolic links in the /etc/systemd/system/multi-user.target.wants directory. 
@@ -315,7 +317,7 @@ For example, # My test service - Upstart script demo description "Here goes the description of 'My test service'" author "Dave Null " # Stanzas - + # # Stanzas define when and how a process is started and stopped # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn diff --git a/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md b/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md index 4ccf3f20f6..50f39ee2d9 100644 --- a/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md +++ b/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 8 - LFCS: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts ================================================================================ Last August, the Linux Foundation started the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is to allow individuals everywhere and anywhere take an exam in order to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus intelligent decision-making to be able to decide when it’s necessary to escalate issues to higher level support teams. @@ -191,7 +193,7 @@ Thus, any user should have permission to run /bin/passwd, but only root will be ![Change User Password in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/change-user-password.png) Change User Password - + **Understanding Setgid** When the setgid bit is set, the effective GID of the real user becomes that of the group owner. 
Thus, any user can access a file under the privileges granted to the group owner of such file. In addition, when the setgid bit is set on a directory, newly created files inherit the same group as the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory. You will most likely use this approach whenever members of a certain group need access to all the files in a directory, regardless of the file owner’s primary group. diff --git a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md index 7b58a467d7..a363a50c09 100644 --- a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md +++ b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper ================================================================================ Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including finally issue escalation, when needed, to engineering support teams. @@ -85,7 +87,7 @@ rpm is the package management system used by Linux Standard Base (LSB)-compliant yum adds the functionality of automatic updates and package management with dependency management to RPM-based systems. As a high-level tool, like apt-get or aptitude, yum works with repositories. 
- Read More: [20 yum Command Examples][4] -- +- ### Common Usage of Low-Level Tools ### The most frequent tasks that you will do with low level tools are as follows: @@ -155,7 +157,7 @@ The most frequent tasks that you will do with high level tools are as follows. aptitude update will update the list of available packages, and aptitude search will perform the actual search for package_name. - # aptitude update && aptitude search package_name + # aptitude update && aptitude search package_name In the search all option, yum will search for package_name not only in package names, but also in package descriptions. @@ -190,8 +192,8 @@ The option remove will uninstall the package but leaving configuration files int # yum erase package_name ---Notice the minus sign in front of the package that will be uninstalled, openSUSE --- - - # zypper remove -package_name + + # zypper remove -package_name Most (if not all) package managers will prompt you, by default, if you’re sure about proceeding with the uninstallation before actually performing it. So read the onscreen messages carefully to avoid running into unnecessary trouble! @@ -199,7 +201,7 @@ Most (if not all) package managers will prompt you, by default, if you’re sure The following command will display information about the birthday package. 
- # aptitude show birthday + # aptitude show birthday # yum info birthday # zypper info birthday From f87d0c6fdfc6cc275997d7b679719349609d14e8 Mon Sep 17 00:00:00 2001 From: Xuanwo Date: Wed, 26 Aug 2015 08:46:50 +0800 Subject: [PATCH 307/697] small bug fixed --- ...50817 Top 5 Torrent Clients For Ubuntu Linux.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md b/translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md index 0ba1ac3e03..80c9899637 100644 --- a/translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md +++ b/translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md @@ -4,7 +4,7 @@ Translating by Xuanwo ================================================================================ ![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png) -你在寻找**Ubuntu中最好的BT客户端**吗?事实上,桌面平台中有许多可用的BT客户端,但是它们中的哪些才是**最好的**呢? +在寻找**Ubuntu中最好的BT客户端**吗?事实上,桌面平台中有许多可用的BT客户端,但是它们中的哪些才是**最好的**呢? 
我将会列出最好的五个BT客户端,它们都拥有着体积轻盈,功能强大的特点,而且还有令人印象深刻的用户界面。自然,易于安装和使用也是特性之一。 @@ -32,7 +32,7 @@ Deluge可在Ubuntu的仓库中获取,你能够在Ubuntu软件中心中安装 ![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png) -正如它的名字所暗示的,[qBittorrent][2] 是著名的 [Bittorrent][3] 应用的Qt版本。如果你曾经使用过它,你将会看到和Windows下的Bittorrent相似的界面。同样轻巧并且有着BT客户端的所有标准功能,qBittorrent也可以在Ubuntu的默认仓库中找到。 +正如它的名字所暗示的,[qBittorrent][2] 是著名的 [Bittorrent][3] 应用的Qt版本。如果曾经使用过它,你将会看到和Windows下的Bittorrent相似的界面。同样轻巧并且有着BT客户端的所有标准功能,qBittorrent也可以在Ubuntu的默认仓库中找到。 它可以通过Ubuntu软件仓库安装,或者使用下面的命令: @@ -56,7 +56,7 @@ Deluge可在Ubuntu的仓库中获取,你能够在Ubuntu软件中心中安装 ![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png) -[Vuze][6]是许多Linux以及Windows用户最喜欢的BT客户端。除了标准的功能,你可以直接在应用程序中搜索种子。你也可以订阅系列片源,这样你就无需再去寻找新的片源了,因为你可以在侧边栏中的订阅看到它们。 +[Vuze][6]是许多Linux以及Windows用户最喜欢的BT客户端。除了标准的功能,你可以直接在应用程序中搜索种子,也可以订阅系列片源,这样就无需再去寻找新的片源了,因为你可以在侧边栏中的订阅看到它们。 它还配备了一个视频播放器,可以播放带有字幕的高清视频以及一切。但是我不认为你会用它来代替那些更好的视频播放器,比如VLC。 @@ -72,7 +72,7 @@ Vuze可以通过Ubuntu软件中心安装或者使用下列命令: [Frostwire][7]可能会是一个你想要去尝试的应用。它不仅仅是一个简单的BT客户端,它还可以应用于安卓,你可以用它通过Wifi来共享文件。 -你可以在应用中搜索种子并且播放他们。除了下载文件,它还可以浏览你本地的影音文件,并且将它们有条理的呈现在播放器中。这同样适用于安卓版本。 +你可以在应用中搜索种子并且播放它们。除了下载文件,它还可以浏览本地的影音文件,并且将它们有条理的呈现在播放器中。这同样适用于安卓版本。 还有一个特点是:Frostwire提供了印度艺术家的合法音乐下载。你可以下载并且欣赏它们,免费而且合法。 @@ -84,15 +84,15 @@ Vuze可以通过Ubuntu软件中心安装或者使用下列命令: 在Windows中,uTorrent(发音:mu torrent)是我最喜欢的BT应用。尽管uTorrent可以在Linux下运行,但是我还是特意忽略了它。因为在Linux下使用uTorrent不仅困难,而且无法获得完整的应用体验(运行在浏览器中)。 -你可以[在这里][9]阅读Ubuntu下uTorrent的安装教程。 +可以[在这里][9]阅读Ubuntu下uTorrent的安装教程。 #### 快速提示: #### -大多数情况下,BT应用不会默认自动自动启动。如果你想改变这一行为,阅读[如何管理Ubuntu下的自启程序][10]来学习。 +大多数情况下,BT应用不会默认自动启动。如果想改变这一行为,请阅读[如何管理Ubuntu下的自启程序][10]来学习。 ### 你最喜欢的是什么? 
### -这些是我对于Ubuntu下最好的BT客户端的一键。你最喜欢的是什么呢?请发表评论。你也可以查看与本主题相关的[Ubuntu最好的下载管理器][11]。如果你使用Popcorn Time,试试[Popcorn Time技巧][12] +这些是我对于Ubuntu下最好的BT客户端的意见。你最喜欢的是什么呢?请发表评论。也可以查看与本主题相关的[Ubuntu最好的下载管理器][11]。如果使用Popcorn Time,试试[Popcorn Time技巧][12] -------------------------------------------------------------------------------- From 3646e757e6ecde5c85cfcdcbddae02e91d71aa7c Mon Sep 17 00:00:00 2001 From: bazz2 Date: Wed, 26 Aug 2015 08:50:34 +0800 Subject: [PATCH 308/697] [translated by bazz2]Linux about to gain a new file system--bcachefs --- ...out to gain a new file system--bcachefs.md | 26 ------------------- ...out to gain a new file system--bcachefs.md | 25 ++++++++++++++++++ 2 files changed, 25 insertions(+), 26 deletions(-) delete mode 100644 sources/talk/20150824 Linux about to gain a new file system--bcachefs.md create mode 100644 translated/talk/20150824 Linux about to gain a new file system--bcachefs.md diff --git a/sources/talk/20150824 Linux about to gain a new file system--bcachefs.md b/sources/talk/20150824 Linux about to gain a new file system--bcachefs.md deleted file mode 100644 index 9568d05836..0000000000 --- a/sources/talk/20150824 Linux about to gain a new file system--bcachefs.md +++ /dev/null @@ -1,26 +0,0 @@ -[bazz2222222] -Linux about to gain a new file system – bcachefs -================================================================================ -A five year old file system built by Kent Overstreet, formerly of Google, is near feature complete with all critical components in place. Bcachefs boasts the performance and reliability of the widespread ext4 and xfs as well as the feature list similar to that of btrfs and zfs. Notable features include checksumming, compression, multiple devices, caching and eventually snapshots and other “nifty” features. - -Bcachefs started out as **bcache** which was a block caching layer, the evolution from bcache to a fully featured [copy-on-write][1] file system has been described as a metamorphosis. 
- -Responding to the self-imposed question “Yet another new filesystem? Why?” Kent Overstreet replies with the following “Well, years ago (going back to when I was still at Google), I and the other people working on bcache realized that what we were working on was, almost by accident, a good chunk of the functionality of a full blown filesystem – and there was a really clean and elegant design to be had there if we took it and ran with it. And a fast one – the main goal of bcachefs to match ext4 and xfs on performance and reliability, but with the features of btrfs/xfs.” - -Overstreet has invited people to use and test bcachefs out on their own systems. To find instructions to use bcachefs on your system check out the mailing list [announcement][2]. - -The file system situation on Linux is a fairly drawn out one, Fedora 16 for instance aimed to use btrfs instead of ext4 as the default file system, this switch still has not happened. Currently all of the Debian based distros, including Ubuntu, Mint and elementary OS, still use ext4 as their default file systems and none have even whispered about switching to a new default file system yet. 
- -------------------------------------------------------------------------------- - -via: http://www.linuxveda.com/2015/08/22/linux-gain-new-file-system-bcachefs/ - -作者:[Paul Hill][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxveda.com/author/paul_hill/ -[1]:https://en.wikipedia.org/wiki/Copy-on-write -[2]:https://lkml.org/lkml/2015/8/21/22 diff --git a/translated/talk/20150824 Linux about to gain a new file system--bcachefs.md b/translated/talk/20150824 Linux about to gain a new file system--bcachefs.md new file mode 100644 index 0000000000..4fe4bf8ff9 --- /dev/null +++ b/translated/talk/20150824 Linux about to gain a new file system--bcachefs.md @@ -0,0 +1,25 @@ +Linux 界将出现一个新的文件系统:bcachefs +================================================================================ +这个由曾就职于谷歌的 Kent Overstreet 创建的文件系统已有 5 年历史,目前所有关键组件均已就位,接近功能完整。Bcachefs 文件系统自称其性能和稳定性与 ext4 和 xfs 相同,而其他方面的功能又可以与 btrfs 和 zfs 相媲美。主要特性包括校验、压缩、多设备支持、缓存、快照与其他好用的特性。 + +Bcachefs 来自 **bcache**,这是一个块级缓存层,从 bcache 到一个功能完整的[写时复制][1]文件系统,堪称是一项质的转变。 + +对于自己提出的问题“为什么要出一个新的文件系统”,Kent Overstreet 作了以下回答:当我还在谷歌的时候,我与其他在 bcache 上工作的同事在偶然的情况下意识到我们正在使用的东西可以成为一个成熟文件系统的功能块,我们可以用 bcache 创建一个拥有干净而优雅设计的文件系统,而最重要的一点是,bcachefs 的主要目的就是在性能和稳定性上能与 ext4 和 xfs 匹敌,同时拥有 btrfs 和 zfs 的特性。 + +Overstreet 邀请人们在自己的系统上测试 bcachefs,可以通过邮件列表[通告][2]获取 bcachefs 的操作指南。 + +Linux 生态系统中文件系统几乎处于一家独大状态,Fedora 在第 16 版的时候就想用 btrfs 换掉 ext4 作为其默认文件系统,但是到现在(LCTT:都出到 Fedora 22 了)还在使用 ext4。而几乎所有 Debian 系的发行版(Ubuntu、Mint、elementary OS 等)也使用 ext4 作为默认文件系统,并且这些主流的发行版都没有替换默认文件系统的意思。 + +-------------------------------------------------------------------------------- + +via: http://www.linuxveda.com/2015/08/22/linux-gain-new-file-system-bcachefs/ + +作者:[Paul Hill][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxveda.com/author/paul_hill/ +[1]:https://en.wikipedia.org/wiki/Copy-on-write +[2]:https://lkml.org/lkml/2015/8/21/22 From 43513f894f561fb8013d27ed9f90a3cb54d28aa2 Mon Sep 17 00:00:00 2001 From: joeren Date: Wed, 26 Aug 2015 08:58:41 +0800 Subject: [PATCH 309/697] Update 20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md --- ... Hindi And Devanagari Support In Antergos And Arch Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md b/sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md index db36df66e6..826fa248a0 100644 --- a/sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md +++ b/sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md @@ -1,3 +1,4 @@ +Translating by GOLinux! 
How To Add Hindi And Devanagari Support In Antergos And Arch Linux ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Indian-languages.jpg) @@ -44,4 +45,4 @@ via: http://itsfoss.com/display-hindi-arch-antergos/ [a]:http://itsfoss.com/author/abhishek/ [1]:http://antergos.com/ -[2]:http://itsfoss.com/tag/antergos/ \ No newline at end of file +[2]:http://itsfoss.com/tag/antergos/ From 678a245caf421c84d52e5cabbed3d4b167e6c1f0 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Wed, 26 Aug 2015 09:24:56 +0800 Subject: [PATCH 310/697] [Translated]20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md --- ...gari Support In Antergos And Arch Linux.md | 48 ------------------- ...gari Support In Antergos And Arch Linux.md | 46 ++++++++++++++++++ 2 files changed, 46 insertions(+), 48 deletions(-) delete mode 100644 sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md create mode 100644 translated/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md diff --git a/sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md b/sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md deleted file mode 100644 index 826fa248a0..0000000000 --- a/sources/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md +++ /dev/null @@ -1,48 +0,0 @@ -Translating by GOLinux! -How To Add Hindi And Devanagari Support In Antergos And Arch Linux -================================================================================ -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Indian-languages.jpg) - -You might be knowing by now that I have been trying my hands on [Antergos Linux][1] lately. 
One of the first few things I noticed after installing [Antergos][2] was that **Hindi scripts were not displayed properly** in the default chromium browser. - -This is a strange thing that I never encountered before in my desktop Linux experience ever. First, I thought maybe it could be a browser problem so I went on to install Firefox only to see the same story repeated. Firefox also could not display Hindi properly. Unlike Chromium that displayed nothing, Firefox did display something but it was not readable. - -![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_1.jpeg) - -Hindi display in Chromium - -![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_2.jpeg) - -Hindi display in Firefox - -Strange? So no Hindi support in Arch based Antergos Linux by default? I did not verify, but I presume that it would be the same for other Indian languages etc that are also based on Devanagari script. - -I this quick tutorial, I am going to show you how to add Devanagari support so that Hindi and other Indian languages are displayed properly. - -### Add Indian language support in Antergos and Arch Linux ### - -Open a terminal and use the following command: - - sudo yaourt -S ttf-indic-otf - -Enter the password. And it will provide rendering support for Indian languages. - -Restarting Firefox displayed Hindi correctly immediately, but it took a restart to display Hindi. For that reason, I advise that you **restart your system** after installing the Indian fonts. 
- -![Adding Hindi display support in Arch based Antergos Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_4.jpeg) - -I hope tis quick helped you to read Hindi, Sanskrit, Tamil, Telugu, Malayalam, Bangla and other Indian languages in Antergos and other Arch based Linux distros such as Manjaro Linux. - --------------------------------------------------------------------------------- - -via: http://itsfoss.com/display-hindi-arch-antergos/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:http://antergos.com/ -[2]:http://itsfoss.com/tag/antergos/ diff --git a/translated/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md b/translated/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md new file mode 100644 index 0000000000..1bcc05a080 --- /dev/null +++ b/translated/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md @@ -0,0 +1,46 @@ +为Antergos与Arch Linux添加印度语和梵文支持 +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Indian-languages.jpg) + +你们到目前或许知道,我最近一直在尝试体验[Antergos Linux][1]。在安装完[Antergos][2]后我所首先注意到的一些事情是在默认的Chromium浏览器中**没法正确显示印度语脚本**。 + +这是一件奇怪的事情,在我之前桌面Linux的体验中是从未遇到过的。起初,我认为是浏览器的问题,所以我安装了Firefox,然而问题依旧,Firefox也不能正确显示印度语。和Chromium不显示任何东西不同的是,Firefox确实显示了一些东西,但是毫无可读性。 + +![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_1.jpeg) +Chromium中的印度语显示 + + +![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_2.jpeg) +Firefox中的印度语显示 + 
+奇怪吧?那么,默认情况下基于Arch的Antergos Linux中没有印度语的支持吗?我没有去验证,但是我假设其它基于梵语脚本的印地语之类会产生同样的问题。 + +在这个快速指南中,我打算为大家演示如何来添加梵语支持,以便让印度语和其它印地语都能正确显示。 + +### 在Antergos和Arch Linux中添加印地语支持 ### + +打开终端,使用以下命令: + + sudo yaourt -S ttf-indic-otf + +键入密码,它会为印地语提供渲染支持。 + +重启Firefox,会马上正确显示印度语了,但是它需要一次重启来显示印度语。因此,我建议你在安装了印地语字体后**重启你的系统**。 + +![Adding Hindi display support in Arch based Antergos Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_4.jpeg) + +我希望这篇快速指南能够帮助你,让你可以在Antergos和其它基于Arch的Linux发行版中,如Manjaro Linux,阅读印度语、梵文、泰米尔语、泰卢固语、马拉雅拉姆语、孟加拉语,以及其它印地语。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/display-hindi-arch-antergos/ + +作者:[Abhishek][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://antergos.com/ +[2]:http://itsfoss.com/tag/antergos/ From d0b3823ee81ddae2ddc374aee4bacb2d34956387 Mon Sep 17 00:00:00 2001 From: joeren Date: Wed, 26 Aug 2015 09:27:15 +0800 Subject: [PATCH 311/697] Update 20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md --- ...Several Smaller Partition into One Large Virtual Storage.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md b/sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md index 6815fa64d8..ebf3a9c4fd 100644 --- a/sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md +++ b/sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md @@ -1,3 +1,4 @@ +Translating by GOLinux!
Mhddfs – Combine Several Smaller Partition into One Large Virtual Storage ================================================================================ Let’s assume that you have 30GB of movies and you have 3 drives each 20 GB in size. So how will you store? @@ -183,4 +184,4 @@ via: http://www.tecmint.com/combine-partitions-into-one-in-linux-using-mhddfs/ [a]:http://www.tecmint.com/author/avishek/ [1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ [2]:http://www.tecmint.com/mount-filesystem-in-linux/ -[3]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ \ No newline at end of file +[3]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ From 08862d689ef1eb4776633573c33b7b8f9257c2c7 Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 26 Aug 2015 09:43:49 +0800 Subject: [PATCH 312/697] Update 20150728 Process of the Linux kernel building.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 20150728 Process of the Linux kernel building.md --- ...28 Process of the Linux kernel building.md | 127 ++---------------- 1 file changed, 12 insertions(+), 115 deletions(-) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index b1c4388e66..9ee5f795ef 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -199,9 +199,7 @@ AWK = awk ... ``` -After definition of these variables we define two variables: `USERINCLUDE` and `LINUXINCLUDE`. They will contain paths of the directories with headers (public for users in the first case and for kernel in the second case): - -在这些定义好的变量之后,我们又定义了两个变量:`USERINCLUDE` 和`LINUXINCLUDE`。他们为包含头文件的路径(第一个是给用户用的,第二个是给内核用的): +在这些定义好的变量后面,我们又定义了两个变量:`USERINCLUDE` 和`LINUXINCLUDE`。他们包含了头文件的路径(第一个是给用户用的,第二个是给内核用的): ```Makefile USERINCLUDE := \ @@ -216,7 +214,6 @@ LINUXINCLUDE := \ ... 
``` -And the standard flags for the C compiler: 以及标准的C 编译器标志: ```Makefile KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ @@ -226,8 +223,7 @@ KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ -std=gnu89 ``` -It is the not last compiler flags, they can be updated by the other makefiles (for example kbuilds from `arch/`). After all of these, all variables will be exported to be available in the other makefiles. The following two the `RCS_FIND_IGNORE` and the `RCS_TAR_IGNORE` variables will contain files that will be ignored in the version control system: -这并不是最终确定的编译器标志,他们还可以在其他makefile 里面更新(比如`arch/` 里面的kbuild)。经过所有这些,全部变量会被导出,这样其他makefile 就可以直接使用了。下面的两个变量`RCS_FIND_IGNORE` 和 `RCS_TAR_IGNORE` 包含了被版本控制系统忽略的文件: +这并不是最终确定的编译器标志,他们还可以在其他makefile 里面更新(比如`arch/` 里面的kbuild)。变量定义完之后,全部会被导出供其他makefile 使用。下面的两个变量`RCS_FIND_IGNORE` 和 `RCS_TAR_IGNORE` 包含了被版本控制系统忽略的文件: ```Makefile export RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o \ @@ -237,16 +233,11 @@ export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \ --exclude CVS --exclude .pc --exclude .hg --exclude .git ``` -That's all. We have finished with the all preparations, next point is the building of `vmlinux`. - 这就是全部了,我们已经完成了所有的准备工作,下一个点就是如果构建`vmlinux`. -Directly to the kernel build 直面构建内核 -------------------------------------------------------------------------------- -As we have finished all preparations, next step in the root makefile is related to the kernel build. Before this moment we will not see in the our terminal after the execution of the `make` command. But now first steps of the compilation are started. 
In this moment we need to go on the [598](https://github.com/torvalds/linux/blob/master/Makefile#L598) line of the Linux kernel top makefile and we will see `vmlinux` target there: - 现在我们已经完成了所有的准备工作,根makefile(注:内核根目录下的makefile)的下一步工作就是和编译内核相关的了。在我们执行`make` 命令之前,我们不会在终端看到任何东西。但是现在编译的第一步开始了,这里我们需要从内核根makefile的的[598](https://github.com/torvalds/linux/blob/master/Makefile#L598) 行开始,这里可以看到目标`vmlinux`: ```Makefile @@ -254,29 +245,20 @@ all: vmlinux include arch/$(SRCARCH)/Makefile ``` -Don't worry that we have missed many lines in Makefile that are placed after `export RCS_FIND_IGNORE.....` and before `all: vmlinux.....`. This part of the makefile is responsible for the `make *.config` targets and as I wrote in the beginning of this part we will see only building of the kernel in a general way. - 不要操心我们略过的从`export RCS_FIND_IGNORE.....` 到`all: vmlinux.....` 这一部分makefile 代码,他们只是负责根据各种配置文件生成不同目标内核的,因为之前我就说了这一部分我们只讨论构建内核的通用途径。 -The `all:` target is the default when no target is given on the command line. You can see here that we include architecture specific makefile there (in our case it will be [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). From this moment we will continue from this makefile. As we can see `all` target depends on the `vmlinux` target that defined a little lower in the top makefile: - -目标`all:` 是在命令行里不指定目标时默认生成的目标。你可以看到这里我们包含了架构相关的makefile(默认情况下会是[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile))。从这一时刻起,我们会从这个makefile 继续进行下去。如我们所见,目标`all` 依赖于根makefile 后面声明的`vmlinux`: +目标`all:` 是在命令行如果不指定具体目标时默认使用的目标。你可以看到这里包含了架构相关的makefile(在这里就指的是[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile))。从这一时刻起,我们会从这个makefile 继续进行下去。如我们所见,目标`all` 依赖于根makefile 后面声明的`vmlinux`: ```Makefile vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE ``` -The `vmlinux` is is the Linux kernel in an statically linked executable file format. 
The [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script links combines different compiled subsystems into vmlinux. The second target is the `vmlinux-deps` that defined as: - `vmlinux` 是linux 内核的静态链接可执行文件格式。脚本[scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 把不同的编译好的子模块链接到一起形成了vmlinux。第二个目标是`vmlinux-deps`,它的定义如下: - ```Makefile vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) ``` -and consists from the set of the `built-in.o` from the each top directory of the Linux kernel. Later, when we will go through all directories in the Linux kernel, the `Kbuild` will compile all the `$(obj-y)` files. It then calls `$(LD) -r` to merge these files into one `built-in.o` file. For this moment we have no `vmlinux-deps`, so the `vmlinux` target will not be executed now. For me `vmlinux-deps` contains following files: - 它是由内核代码下的每个顶级目录的`built-in.o` 组成的。之后我们还会检查内核所有的目录,`kbuild` 会编译各个目录下所有的对应`$(obj-y)` 的源文件。接着调用`$(LD) -r` 把这些文件合并到一个`built-in.o` 文件里。此时我们还没有`vmlinux-deps`, 所以目标`vmlinux` 现在还不会被构建。对我而言`vmlinux-deps` 包含下面的文件: ``` @@ -295,8 +277,6 @@ arch/x86/power/built-in.o arch/x86/video/built-in.o net/built-in.o ``` -The next target that can be executed is following: - 下一个可以被执行的目标如下: ```Makefile @@ -305,8 +285,6 @@ $(vmlinux-dirs): prepare scripts $(Q)$(MAKE) $(build)=$@ ``` -As we can see the `vmlinux-dirs` depends on the two targets: `prepare` and `scripts`.
The first `prepare` defined in the top `Makefile` of the Linux kernel and executes three stages of preparations: - 就像我们看到的,`vmlinux-dirs` 依赖于两部分:`prepare` 和`scripts`。第一个`prepare` 定义在内核的根`makefile` ,准备工作分成三个阶段: ```Makefile @@ -321,17 +299,13 @@ prepare1: prepare2 $(version_h) include/generated/utsrelease.h \ prepare2: prepare3 outputmakefile asm-generic ``` -The first `prepare0` expands to the `archprepare` that exapnds to the `archheaders` and `archscripts` that defined in the `x86_64` specific [Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look on it. The `x86_64` specific makefile starts from the definition of the variables that are related to the archicteture-specific configs ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs) and etc.). After this it defines flags for the compiling of the [16-bit](https://en.wikipedia.org/wiki/Real_mode) code,calculating of the `BITS` variable that can be `32` for `i386` or `64` for the `x86_64` flags for the assembly source code, flags for the linker and many many more (all definitions you can find in the [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)).
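下面用一小段纯 shell 来示意这种“根据 `BITS` 的取值选择不同参数”的逻辑。注意 `biarch`、`ld_emul` 这些变量名只是为了说明而虚构的,真实的 arch/x86/Makefile 用的是 make 的条件分支(如 `ifeq ($(CONFIG_X86_32),y)`),并不是 shell:

```shell
# 按 BITS 选择 32/64 位对应参数的示意(变量名为虚构,仅用于说明)
BITS=64
case $BITS in
  32) biarch=i386;   ld_emul=elf_i386 ;;
  64) biarch=x86_64; ld_emul=elf_x86_64 ;;
esac
echo "$biarch $ld_emul"
```

把 `BITS` 改成 `32` 再运行一次,就能看到输出切换成 `i386 elf_i386`,这正是上文说的“`32` 对应 `i386`,`64` 对应 `x86_64`”的取值关系。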
The first target is `archheaders` in the makefile generates syscall table: - -第一个`prepare0` 展开到`archprepare` ,后者又展开到`archheader` 和`archscripts`,这两个变量定义在`x86_64` 相关的[Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)。让我们看看这个文件。`x86_64` 特定的makefile从变量定义开始,这些变量都是和特定架构的配置文件 ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs),等等)有关联。变量定义之后,这个makefile 定义了编译[16-bit](https://en.wikipedia.org/wiki/Real_mode)代码的编译选项,根据变量`BITS` 的值,如果是`32` 汇编代码、链接器、以及其它很多东西(全部的定义都可以在[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)找到)对应的参数就是`i386`,而`64`就对应的是`x86_84`。生成的系统调用列表(syscall table)的makefile 里第一个目标就是`archheaders` : +第一个`prepare0` 展开到`archprepare` ,后者又展开到`archheaders` 和`archscripts`,这两个目标定义在`x86_64` 相关的[Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)。让我们看看这个文件。`x86_64` 特定的makefile从变量定义开始,这些变量都是和特定架构的配置文件 ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs),等等)有关联。变量定义之后,这个makefile 定义了编译[16-bit](https://en.wikipedia.org/wiki/Real_mode)代码的编译选项,根据变量`BITS` 的值,如果是`32`, 汇编代码、链接器、以及其它很多东西(全部的定义都可以在[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)找到)对应的参数就是`i386`,而`64`就对应的是`x86_64`。这个makefile 里的第一个目标是`archheaders`,它用来生成系统调用列表(syscall table): ```Makefile archheaders: $(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all ``` -And the second target is `archscripts` in this makefile is: - 这个makefile 里第二个目标就是`archscripts`: ```Makefile @@ -339,17 +313,13 @@ archscripts: scripts_basic $(Q)$(MAKE) $(build)=arch/x86/tools relocs ``` -We can see that it depends on the `scripts_basic` target from the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile).
At the first we can see the `scripts_basic` target that executes make for the [scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) makefile: - - 我们可以看到`archscripts` 是依赖于根[Makefile](https://github.com/torvalds/linux/blob/master/Makefile)里的`scripts_basic` 。首先我们可以看出`scripts_basic` 是根据[scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) 的mekefile 执行的: + 我们可以看到`archscripts` 是依赖于根[Makefile](https://github.com/torvalds/linux/blob/master/Makefile)里的`scripts_basic` 。首先我们可以看出`scripts_basic` 是按照[scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) 的makefile 执行make 的: ```Makefile scripts_basic: $(Q)$(MAKE) $(build)=scripts/basic ``` -The `scripts/basic/Makefile` contains targets for compilation of the two host programs: `fixdep` and `bin2`: - `scripts/basic/Makefile`包含了编译两个主机程序`fixdep` 和`bin2c` 的目标: ```Makefile @@ -360,8 +330,6 @@ always := $(hostprogs-y) $(addprefix $(obj)/,$(filter-out fixdep,$(always))): $(obj)/fixdep ``` -First program is `fixdep` - optimizes list of dependencies generated by the [gcc](https://gcc.gnu.org/) that tells make when to remake a source code file. The second program is `bin2c` depends on the value of the `CONFIG_BUILD_BIN2C` kernel configuration option and very little C program that allows to convert a binary on stdin to a C include on stdout. You can note here strange notation: `hostprogs-y` and etc. This notation is used in the all `kbuild` files and more about it you can read in the [documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt). In our case the `hostprogs-y` tells to the `kbuild` that there is one host program named `fixdep` that will be built from the will be built from `fixdep.c` that located in the same directory that `Makefile`.
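`bin2c` 做的事情——把标准输入的二进制流转成可以被 C 代码包含的数组——可以用下面的 shell 草图来体会。注意这只是一个示意:内核里真正的 bin2c 是 scripts/basic 下的一个小 C 程序,这里用 `od` 模拟的只是同样的思路,`bin2c_sketch` 这个函数名也是虚构的:

```shell
# 用 od 模拟 bin2c 的思路:把标准输入的字节转成 C 数组(仅为示意)
bin2c_sketch() {
  echo "const char data[] = {"
  # od 按字节输出十六进制,再整理成 "0xNN," 一行一个的形式
  od -An -v -tx1 | tr -s ' ' '\n' | sed -e '/^$/d' -e 's/^/0x/' -e 's/$/,/'
  echo "};"
}
printf 'Hi' | bin2c_sketch
```

对输入 `Hi`(字节 0x48 0x69)运行,会得到一个包着 `0x48,` 和 `0x69,` 的 C 数组声明。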
The first output after we will execute `make` command in our terminal will be result of this `kbuild` file: - 第一个工具是`fixdep`:用来优化[gcc](https://gcc.gnu.org/) 生成的依赖列表,然后在重新编译源文件的时候告诉make。第二个工具是`bin2c`,它依赖于内核配置选项`CONFIG_BUILD_BIN2C`,并且它是一个用来将标准输入接口(注:即stdin)收到的二进制流通过标准输出接口(即:stdout)转换成C 头文件的非常小的C 程序。你可以注意到这里有些奇怪的标志,如`hostprogs-y`等。这些标志使用在所有的`kbuild` 文件,更多的信息你可以从[documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt) 获得。在我们的用例`hostprogs-y` 中,它告诉`kbuild` 这里有个名为`fixdep` 的程序,这个程序会通过和`Makefile` 相同目录的`fixdep.c` 编译而来。执行make 之后,终端的第一个输出就是`kbuild` 的结果: ``` @@ -369,16 +337,12 @@ $ make HOSTCC scripts/basic/fixdep ``` -As `script_basic` target was executed, the `archscripts` target will execute `make` for the [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) makefile with the `relocs` target: - 当目标`scripts_basic` 被执行,目标`archscripts` 就会make [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) 下的makefile 和目标`relocs`: ```Makefile $(Q)$(MAKE) $(build)=arch/x86/tools relocs ``` -The `relocs_32.c` and the `relocs_64.c` will be compiled that will contain [relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29) information and we will see it in the `make` output: - 代码`relocs_32.c` 和`relocs_64.c` 包含了[重定位](https://en.wikipedia.org/wiki/Relocation_%28computing%29) 的信息,将会被编译,这可以在`make` 的输出中看到: ```Makefile @@ -388,8 +352,6 @@ The `relocs_32.c` and the `relocs_64.c` will be compiled that will contain [relo HOSTLD arch/x86/tools/relocs ``` -There is checking of the `version.h` after compiling of the `relocs.c`: - 在编译完`relocs.c` 之后会检查`version.h`: ```Makefile @@ -398,17 +360,12 @@ $(version_h): $(srctree)/Makefile FORCE $(Q)rm -f $(old_version_h) ``` -We can see it in the output: - 我们可以在输出看到它: - ``` CHK include/config/kernel.release ``` -and the building of the `generic` assembly headers with the `asm-generic` target from the `arch/x86/include/generated/asm` that
generated in the top Makefile of the Linux kernel. After the `asm-generic` target the `archprepare` will be done, so the `prepare0` target will be executed. As I wrote above: - 以及在内核根Makefile 使用`arch/x86/include/generated/asm`的目标`asm-generic` 来构建`generic` 汇编头文件。在目标`asm-generic` 之后,`archprepare` 就会被完成,所以目标`prepare0` 会接着被执行,如我上面所写: ```Makefile @@ -416,32 +373,24 @@ prepare0: archprepare FORCE $(Q)$(MAKE) $(build)=. ``` -Note on the `build`. It defined in the [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) and looks like this: - 注意`build`,它是定义在文件[scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include),内容是这样的: ```Makefile build := -f $(srctree)/scripts/Makefile.build obj ``` -or in our case it is current source directory - `.`: - 或者在我们的例子中,它就是当前源码目录路径——`.`: ```Makefile $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=. ``` -The [scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) tries to find the `Kbuild` file by the given directory via the `obj` parameter, include this `Kbuild` files: - 参数`obj` 会告诉脚本[scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) 哪些目录包含`kbuild` 文件,脚本以此来寻找各个`kbuild` 文件: ```Makefile include $(kbuild-file) ``` -and build targets from it. In our case `.` contains the [Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild) file that generates the `kernel/bounds.s` and the `arch/x86/kernel/asm-offsets.s`. After this the `prepare` target finished to work. The `vmlinux-dirs` also depends on the second target - `scripts` that compiles following programs: `file2alias`, `mk_elfconfig`, `modpost` and etc... After scripts/host-programs compilation our `vmlinux-dirs` target can be executed. First of all let's try to understand what does `vmlinux-dirs` contain.
For my case it contains paths of the following kernel directories: - -然后根据这个构建目标。我们这里`.` 包含了[Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild),就用这个文件来生成`kernel/bounds.s` 和`arch/x86/kernel/asm-offsets.s`。这样目标`prepare` 就完成了它的工作。`vmlinux-dirs` 也依赖于第二个目标——`scripts` ,`scripts`会编译接下来的几个程序:`filealias`,`mk_elfconfig`,`modpost`等等。`scripts/host-programs` 编译完之后,我们的目标`vmlinux-dirs` 就可以开始编译了。第一步,我们先来理解一下`vmlinux-dirs` 都包含了那些东西。在我们的例子中它包含了接下来的内核目录的路径: +然后根据这个构建目标。我们这里`.` 包含了[Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild),就用这个文件来生成`kernel/bounds.s` 和`arch/x86/kernel/asm-offsets.s`。这样目标`prepare` 就完成了它的工作。`vmlinux-dirs` 也依赖于第二个目标——`scripts` ,`scripts`会编译接下来的几个程序:`file2alias`,`mk_elfconfig`,`modpost`等等。`scripts` 下的主机程序编译完之后,我们的目标`vmlinux-dirs` 就可以开始编译了。第一步,我们先来理解一下`vmlinux-dirs` 都包含了哪些东西。在我们的例子中它包含了接下来要使用的内核目录的路径: ``` init usr arch/x86 kernel mm fs ipc security crypto block @@ -449,8 +398,6 @@ drivers sound firmware arch/x86/pci arch/x86/power arch/x86/video net lib arch/x86/lib ``` -We can find definition of the `vmlinux-dirs` in the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel: - 我们可以在内核的根[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 里找到`vmlinux-dirs` 的定义: ```Makefile @@ -467,8 +414,6 @@ libs-y := lib/ ... ``` -Here we remove the `/` symbol from the each directory with the help of the `patsubst` and `filter` functions and put it to the `vmlinux-dirs`. So we have list of directories in the `vmlinux-dirs` and the following code: - 这里我们借助函数`patsubst` 和`filter`去掉了每个目录路径里的符号`/`,并且把结果放到`vmlinux-dirs` 里。所以我们就有了`vmlinux-dirs` 里的目录的列表,以及下面的代码: ```Makefile @@ -476,8 +421,6 @@ $(vmlinux-dirs): prepare scripts $(Q)$(MAKE) $(build)=$@ ``` -The `$@` represents `vmlinux-dirs` here that means that it will go recursively over all directories from the `vmlinux-dirs` and its internal directories (depens on configuration) and will execute `make` in there.
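`$(filter %/, ...)` 加 `$(patsubst %/,%,...)` 这一组合的效果,可以用一小段纯 shell 来模拟(下面的目录列表是随手举的例子,并非真实配置;真实代码当然是用 make 函数写的):

```shell
# 模拟 Makefile 函数 filter %/ 与 patsubst %/,% 的组合效果(仅为示意)
dirs="init/ usr/ kernel/ mm/ vmlinux.lds"
vmlinux_dirs=""
for d in $dirs; do
  case $d in
    */) vmlinux_dirs="$vmlinux_dirs ${d%/}" ;;   # 只保留以 / 结尾的项并去掉末尾的 /
  esac
done
vmlinux_dirs=${vmlinux_dirs# }   # 去掉开头多余的空格
echo "$vmlinux_dirs"             # init usr kernel mm
```

可以看到非目录项(`vmlinux.lds`)被 `filter %/` 过滤掉了,目录项则被去掉了末尾的 `/`,这正是根 Makefile 组装 `vmlinux-dirs` 时做的事情。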
We can see it in the output: - 符号`$@` 在这里代表了`vmlinux-dirs`,这就表明程序会递归遍历从`vmlinux-dirs` 以及它内部的全部目录(依赖于配置),并且在对应的目录下执行`make` 命令。我们可以在输出看到结果: ``` @@ -495,8 +438,8 @@ The `$@` represents `vmlinux-dirs` here that means that it will go recursively o CC arch/x86/entry/syscall_64.o ``` -Source code in each directory will be compiled and linked to the `built-in.o`: 每个目录下的源代码将会被编译并且链接到`built-in.o` 里: + ``` $ find . -name built-in.o ./arch/x86/crypto/built-in.o @@ -508,8 +451,6 @@ $ find . -name built-in.o ... ``` -Ok, all buint-in.o(s) built, now we can back to the `vmlinux` target. As you remember, the `vmlinux` target is in the top Makefile of the Linux kernel. Before the linking of the `vmlinux` it builds [samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation) and etc., but I will not describe it in this part as I wrote in the beginning of this part. - 好了,所有的`built-in.o` 都构建完了,现在我们回到目标`vmlinux` 上。你应该还记得,目标`vmlinux` 是在内核的根makefile 里。在链接`vmlinux` 之前,系统会构建[samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation)等等,但是如上文所述,我不会在本文描述这些。 ```Makefile @@ -519,8 +460,6 @@ vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE +$(call if_changed,link-vmlinux) ``` -As you can see main purpose of it is a call of the [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script is linking of the all `built-in.o`(s) to the one statically linked executable and creation of the [System.map](https://en.wikipedia.org/wiki/System.map).
In the end we will see following output: - 你可以看到,`vmlinux` 的调用脚本[scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 的主要目的是把所有的`built-in.o` 链接成一个静态可执行文件、生成[System.map](https://en.wikipedia.org/wiki/System.map)。 最后我们来看看下面的输出: ``` @@ -539,31 +478,24 @@ As you can see main purpose of it is a call of the [scripts/link-vmlinux.sh](htt SYSMAP System.map ``` -and `vmlinux` and `System.map` in the root of the Linux kernel source tree: -还有内核源码树根目录下的`vmlinux` 和`System.map` +以及内核源码树根目录下的`vmlinux` 和`System.map` + ``` $ ls vmlinux System.map System.map vmlinux ``` -That's all, `vmlinux` is ready. The next step is creation of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage). - 这就是全部了,`vmlinux` 构建好了,下一步就是创建[bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage). -Building bzImage 制作bzImage -------------------------------------------------------------------------------- -The `bzImage` is the compressed Linux kernel image. We can get it with the execution of the `make bzImage` after the `vmlinux` built. In other way we can just execute `make` without arguments and will get `bzImage` anyway because it is default image: - -`bzImage` 就是压缩了的linux 内核镜像。我们可以在构建了`vmlinux` 之后通过执行`make bzImage` 获得`bzImage`。同时我们可以仅仅执行`make` 而不带任何参数也可以生成`bzImage` ,因为它是在[arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) 里定义的、默认会生成的镜像: +`bzImage` 就是压缩了的linux 内核镜像。我们可以在构建了`vmlinux` 之后通过执行`make bzImage` 获得`bzImage`。同时我们可以仅仅执行`make` 而不带任何参数也可以生成`bzImage` ,因为它是在[arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) 里预定义的、默认生成的镜像: ```Makefile all: bzImage ``` -in the [arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look on this target, it will help us to understand how this image builds. 
As I already said the `bzImage` target defined in the [arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) and looks like this: - 让我们看看这个目标,它能帮助我们理解这个镜像是怎么构建的。我已经说过了`bzImage` 是定义在[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile),定义如下: ```Makefile @@ -573,16 +505,12 @@ bzImage: vmlinux $(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@ ``` -We can see here, that first of all called `make` for the boot directory, in our case it is: - 在这里我们可以看到第一次为boot 目录执行`make`,在我们的例子里是这样的: ```Makefile boot := arch/x86/boot ``` -The main goal now to build source code in the `arch/x86/boot` and `arch/x86/boot/compressed` directories, build `setup.bin` and `vmlinux.bin`, and build the `bzImage` from they in the end. First target in the [arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) is the `$(obj)/setup.elf`: - 现在的主要目标是编译目录`arch/x86/boot` 和`arch/x86/boot/compressed` 的代码,构建`setup.bin` 和`vmlinux.bin`,然后用这两个文件生成`bzImage`。第一个目标是定义在[arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) 的`$(obj)/setup.elf`: ```Makefile @@ -590,8 +518,6 @@ $(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE $(call if_changed,ld) ``` -We already have the `setup.ld` linker script in the `arch/x86/boot` directory and the `SETUP_OBJS` expands to the all source files from the `boot` directory.
We can see first output: - 我们已经在目录`arch/x86/boot`有了链接脚本`setup.ld`,并且将变量`SETUP_OBJS` 扩展到`boot` 目录下的全部源代码。我们可以看看第一个输出: ```Makefile @@ -607,15 +533,12 @@ We already have the `setup.ld` linker script in the `arch/x86/boot` directory an CC arch/x86/boot/edd.o ``` -The next source code file is the [arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S), but we can't build it now because this target depends on the following two header files: - -下一个源码文件是[arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S),但是我们不能现在就编译他,因为这个目标依赖于下面两个头文件: +下一个源码文件是[arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S),但是我们不能现在就编译它,因为这个目标依赖于下面两个头文件: ```Makefile $(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h ``` -The first is `voffset.h` generated by the `sed` script that gets two addresses from the `vmlinux` with the `nm` util: 第一个头文件`voffset.h` 是使用`sed` 脚本生成的,包含用`nm` 工具从`vmlinux` 获取的两个地址: ```C @@ -623,8 +546,6 @@ The first is `voffset.h` generated by the `sed` script that gets two addresses f #define VO__text 0xffffffff81000000 ``` -They are start and end of the kernel. The second is `zoffset.h` depens on the `vmlinux` target from the [arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile): - 这两个地址是内核的起始和结束地址。第二个头文件`zoffset.h` 在[arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile) 可以看出是依赖于目标`vmlinux`的: ```Makefile @@ -632,9 +553,7 @@ $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE $(call if_changed,zoffset) ``` -The `$(obj)/compressed/vmlinux` target depends on the `vmlinux-objs-y` that compiles source code files from the [arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) directory and generates `vmlinux.bin`, `vmlinux.bin.bz2`, and compiles programm - `mkpiggy`. 
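生成 `voffset.h` 的那条 sed 管道,本质上就是从 `nm vmlinux` 的输出里抓出 `_text`、`_end` 两个符号的地址。下面用一段伪造的 nm 输出(地址取自上文出现过的示例值)加 awk 来示意这一步——真实流程是对 vmlinux 跑 nm 再用 sed 过滤,这里只是等价的示意:

```shell
# 用伪造的 nm 输出示意 voffset.h 的生成过程(真实流程:nm vmlinux | sed ...)
nm_out='ffffffff81000000 T _text
ffffffff81a00000 B _end'
voffset=$(printf '%s\n' "$nm_out" | \
  awk '$3 == "_text" || $3 == "_end" { printf "#define VO_%s 0x%s\n", $3, $1 }')
echo "$voffset"
```

输出形如 `#define VO__text 0xffffffff81000000`,和上文 `voffset.h` 的内容一致:`VO_` 前缀拼上符号名 `_text` 就得到了 `VO__text`。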
We can see this in the output: - -目标`$(obj)/compressed/vmlinux` 依赖于变量`vmlinux-objs-y` —— 表明要编译目录[arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) 下的源代码,然后生成`vmlinux.bin`, `vmlinux.bin.bz2`, 和编译工具 - `mkpiggy`。我们可以在下面的输出看出来: +目标`$(obj)/compressed/vmlinux` 依赖于变量`vmlinux-objs-y` —— 说明需要编译目录[arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) 下的源代码,然后生成`vmlinux.bin`, `vmlinux.bin.bz2`, 和编译工具 - `mkpiggy`。我们可以在下面的输出看出来: ```Makefile LDS arch/x86/boot/compressed/vmlinux.lds @@ -647,8 +566,6 @@ The `$(obj)/compressed/vmlinux` target depends on the `vmlinux-objs-y` that comp HOSTCC arch/x86/boot/compressed/mkpiggy ``` -Where the `vmlinux.bin` is the `vmlinux` with striped debuging information and comments and the `vmlinux.bin.bz2` compressed `vmlinux.bin.all` + `u32` size of `vmlinux.bin.all`. The `vmlinux.bin.all` is `vmlinux.bin + vmlinux.relocs`, where `vmlinux.relocs` is the `vmlinux` that was handled by the `relocs` program (see above). As we got these files, the `piggy.S` assembly files will be generated with the `mkpiggy` program and compiled: - `vmlinux.bin` 是去掉了调试信息和注释的`vmlinux` 二进制文件,加上了占用了`u32` (注:即4-Byte)的长度信息的`vmlinux.bin.all` 压缩后就是`vmlinux.bin.bz2`。其中`vmlinux.bin.all` 包含了`vmlinux.bin` 和`vmlinux.relocs`(注:vmlinux 的重定位信息),其中`vmlinux.relocs` 是`vmlinux` 经过程序`relocs` 处理之后的`vmlinux` 镜像(见上文所述)。我们现在已经获取到了这些文件,汇编文件`piggy.S` 将会被`mkpiggy` 生成、然后编译: ```Makefile @@ -656,16 +573,12 @@ Where the `vmlinux.bin` is the `vmlinux` with striped debuging information and c AS arch/x86/boot/compressed/piggy.o ``` -This assembly files will contain computed offset from a compressed kernel. 
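mkpiggy 生成的 `piggy.S` 大致长下面这个样子——用 `.incbin` 把压缩后的内核镜像原样嵌进来,并以符号的形式记下它的长度。注意这里的数值、节名和路径都是虚构的示例,真实文件由 mkpiggy 按压缩镜像的实际大小生成:

```shell
# 手工拼一个形如 mkpiggy 输出的 piggy.S(数值/路径均为虚构示例)
z_input_len=18782501
piggy=$(cat <<EOF
.section ".rodata..compressed","a",@progbits
.globl z_input_len
z_input_len = $z_input_len
.globl input_data, input_data_end
input_data:
.incbin "arch/x86/boot/compressed/vmlinux.bin.gz"
input_data_end:
EOF
)
printf '%s\n' "$piggy"
```

这样,解压用的桩代码(stub)在链接后就能通过 `input_data`/`input_data_end` 和 `z_input_len` 找到压缩内核的位置和大小。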
After this we can see that `zoffset` generated: - 这个汇编文件会包含经过计算得来的、压缩内核的偏移信息。处理完这个汇编文件,我们就可以看到`zoffset` 生成了: ```Makefile ZOFFSET arch/x86/boot/zoffset.h ``` -As the `zoffset.h` and the `voffset.h` are generated, compilation of the source code files from the [arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) can be continued: - 现在`zoffset.h` 和`voffset.h` 已经生成了,[arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) 里的源文件可以继续编译: ```Makefile @@ -686,15 +599,12 @@ As the `zoffset.h` and the `voffset.h` are generated, compilation of the source CC arch/x86/boot/video-bios.o ``` -As all source code files will be compiled, they will be linked to the `setup.elf`: - 所有的源代码会被编译,他们最终会被链接到`setup.elf` : ```Makefile LD arch/x86/boot/setup.elf ``` -or: 或者: @@ -702,32 +612,24 @@ or: ld -m elf_x86_64 -T arch/x86/boot/setup.ld arch/x86/boot/a20.o arch/x86/boot/bioscall.o arch/x86/boot/cmdline.o arch/x86/boot/copy.o arch/x86/boot/cpu.o arch/x86/boot/cpuflags.o arch/x86/boot/cpucheck.o arch/x86/boot/early_serial_console.o arch/x86/boot/edd.o arch/x86/boot/header.o arch/x86/boot/main.o arch/x86/boot/mca.o arch/x86/boot/memory.o arch/x86/boot/pm.o arch/x86/boot/pmjump.o arch/x86/boot/printf.o arch/x86/boot/regs.o arch/x86/boot/string.o arch/x86/boot/tty.o arch/x86/boot/video.o arch/x86/boot/video-mode.o arch/x86/boot/version.o arch/x86/boot/video-vga.o arch/x86/boot/video-vesa.o arch/x86/boot/video-bios.o -o arch/x86/boot/setup.elf ``` -The last two things is the creation of the `setup.bin` that will contain compiled code from the `arch/x86/boot/*` directory: - 最后两件事是创建包含目录`arch/x86/boot/*` 下的编译过的代码的`setup.bin`: ``` objcopy -O binary arch/x86/boot/setup.elf arch/x86/boot/setup.bin ``` -and the creation of the `vmlinux.bin` from the `vmlinux`: - 以及从`vmlinux` 生成`vmlinux.bin` : ``` objcopy -O binary -R .note -R .comment -S arch/x86/boot/compressed/vmlinux arch/x86/boot/vmlinux.bin ``` -In the end we compile host program: 
[arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c) that will create our `bzImage` from the `setup.bin` and the `vmlinux.bin`: - 最后,我们编译主机程序[arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c),它将会用来把`setup.bin` 和`vmlinux.bin` 打包成`bzImage`: ``` arch/x86/boot/tools/build arch/x86/boot/setup.bin arch/x86/boot/vmlinux.bin arch/x86/boot/zoffset.h arch/x86/boot/bzImage ``` -Actually the `bzImage` is the concatenated `setup.bin` and the `vmlinux.bin`. In the end we will see the output which familiar to all who once build the Linux kernel from source: - 实际上`bzImage` 就是把`setup.bin` 和`vmlinux.bin` 连接到一起。最终我们会看到输出结果,就和那些用源码编译过内核的同行的结果一样: ``` @@ -737,20 +639,15 @@ CRC 94a88f9a Kernel: arch/x86/boot/bzImage is ready (#5) ``` -That's all. 全部结束。 -Conclusion 结论 ================================================================================ -It is the end of this part and here we saw all steps from the execution of the `make` command to the generation of the `bzImage`. I know, the Linux kernel makefiles and process of the Linux kernel building may seem confusing at first glance, but it is not so hard. Hope this part will help you to understand process of the Linux kernel building. 
- 这就是本文的最后一节。本文我们了解了编译内核的全部步骤:从执行`make` 命令开始,到最后生成`bzImage`。我知道,linux 内核的makefiles 和构建linux 的过程第一眼看起来可能比较迷惑,但是这并不是很难。希望本文可以帮助你理解构建linux 内核的整个流程。 -Links 链接 ================================================================================ From a4e422dd35b5560e84bc1955ab2a2797a03abf59 Mon Sep 17 00:00:00 2001 From: zl Date: Wed, 26 Aug 2015 10:01:29 +0800 Subject: [PATCH 313/697] move file MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 Process of the Linux kernel building.md 并移动到translated --- .../tech/20150728 Process of the Linux kernel building.md | 2 -- 1 file changed, 2 deletions(-) rename {sources => translated}/tech/20150728 Process of the Linux kernel building.md (99%) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/translated/tech/20150728 Process of the Linux kernel building.md similarity index 99% rename from sources/tech/20150728 Process of the Linux kernel building.md rename to translated/tech/20150728 Process of the Linux kernel building.md index 9ee5f795ef..b8ded80179 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/translated/tech/20150728 Process of the Linux kernel building.md @@ -1,5 +1,3 @@ -Translating by Ezio - 如何构建Linux 内核 ================================================================================ 介绍 From 461ef3145bae861e099915bfa1e9c50a836a2850 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 26 Aug 2015 11:55:19 +0800 Subject: [PATCH 314/697] PUB:20150817 Top 5 Torrent Clients For Ubuntu Linux MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @Xuanwo 其中有一处 indi artists, 根据我的判断,应该是指“independent artists” ,独立音乐人。根据其网站描述,应该是指以 CC 协议提供音乐的人。所以我径直调整了,如有不当,请讨论。:> --- ... Top 5 Torrent Clients For Ubuntu Linux.md | 115 +++++++++++++++++ ... 
Top 5 Torrent Clients For Ubuntu Linux.md | 120 ------------------ 2 files changed, 115 insertions(+), 120 deletions(-) create mode 100644 published/20150817 Top 5 Torrent Clients For Ubuntu Linux.md delete mode 100644 translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md diff --git a/published/20150817 Top 5 Torrent Clients For Ubuntu Linux.md b/published/20150817 Top 5 Torrent Clients For Ubuntu Linux.md new file mode 100644 index 0000000000..0ad6d04671 --- /dev/null +++ b/published/20150817 Top 5 Torrent Clients For Ubuntu Linux.md @@ -0,0 +1,115 @@ +Ubuntu 下五个最好的 BT 客户端 +================================================================================ + +![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png) + +在寻找 **Ubuntu 中最好的 BT 客户端**吗?事实上,Linux 桌面平台中有许多 BT 客户端,但是它们中的哪些才是**最好的 Ubuntu 客户端**呢? + +我将会列出 Linux 上最好的五个 BT 客户端,它们都拥有着体积轻盈,功能强大的特点,而且还有令人印象深刻的用户界面。自然,易于安装和使用也是特性之一。 + +### Ubuntu 下最好的 BT 客户端 ### + +考虑到 Ubuntu 默认安装了 Transmission,所以我将会从这个列表中排除了 Transmission。但是这并不意味着 Transmission 没有资格出现在这个列表中,事实上,Transmission 是一个非常好的BT客户端,这也正是它被包括 Ubuntu 在内的多个发行版默认安装的原因。 + +### Deluge ### + +![Logo of Deluge torrent client for Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Deluge.png) + +[Deluge][1] 被 Lifehacker 评选为 Linux 下最好的 BT 客户端,这说明了 Deluge 是多么的有用。而且,并不仅仅只有 Lifehacker 是 Deluge 的粉丝,纵观多个论坛,你都会发现不少 Deluge 的忠实拥趸。 + +快速,时尚而直观的界面使得 Deluge 成为 Linux 用户的挚爱。 + +Deluge 可在 Ubuntu 的仓库中获取,你能够在 Ubuntu 软件中心中安装它,或者使用下面的命令: + + sudo apt-get install deluge + +### qBittorrent ### + +![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png) + +正如它的名字所暗示的,[qBittorrent][2] 是著名的 [Bittorrent][3] 应用的 Qt 版本。如果曾经使用过它,你将会看到和 Windows 下的 Bittorrent 相似的界面。同样轻巧并且有着 BT 客户端的所有标准功能, qBittorrent 也可以在 Ubuntu 的默认仓库中找到。 + +它可以通过 Ubuntu 软件仓库安装,或者使用下面的命令: + + sudo apt-get install qbittorrent + + +### Tixati ### + +![Tixati torrent 
client logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/tixati_icon.png) + +[Tixati][4] 是另一个不错的 Ubuntu 下的 BT 客户端。它有着一个默认的黑暗主题,尽管很多人喜欢,但是我例外。它拥有着一切你能在 BT 客户端中找到的功能。 + +除此之外,它还有着数据分析的额外功能。你可以在美观的图表中分析流量以及其它数据。 + +- [下载 Tixati][5] + + + +### Vuze ### + +![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png) + +[Vuze][6] 是许多 Linux 以及 Windows 用户最喜欢的 BT 客户端。除了标准的功能,你可以直接在应用程序中搜索种子,也可以订阅系列片源,这样就无需再去寻找新的片源了,因为你可以在侧边栏中的订阅看到它们。 + +它还配备了一个视频播放器,可以播放带有字幕的高清视频等等。但是我不认为你会用它来代替那些更好的视频播放器,比如 VLC。 + +Vuze 可以通过 Ubuntu 软件中心安装或者使用下列命令: + + sudo apt-get install vuze + + + +### Frostwire ### + +![Logo of Frostwire torrent client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/frostwire.png) + +[Frostwire][7] 是一个你应该试一下的应用。它不仅仅是一个简单的 BT 客户端,它还可以应用于安卓,你可以用它通过 Wifi 来共享文件。 + +你可以在应用中搜索种子并且播放他们。除了下载文件,它还可以浏览本地的影音文件,并且将它们有条理的呈现在播放器中。这同样适用于安卓版本。 + +还有一个特点是:Frostwire 提供了独立音乐人的[合法音乐下载][13]。你可以下载并且欣赏它们,免费而且合法。 + +- [下载 Frostwire][8] + + + +### 荣誉奖 ### + +在 Windows 中,uTorrent(发音:mu torrent)是我最喜欢的 BT 应用。尽管 uTorrent 可以在 Linux 下运行,但是我还是特意忽略了它。因为在 Linux 下使用 uTorrent 不仅困难,而且无法获得完整的应用体验(运行在浏览器中)。 + +可以[在这里][9]阅读 Ubuntu下uTorrent 的安装教程。 + +#### 快速提示: #### + +大多数情况下,BT 应用不会默认自动启动。如果想改变这一行为,请阅读[如何管理 Ubuntu 下的自启动程序][10]来学习。 + +### 你最喜欢的是什么? 
### + +这些是我对于 Ubuntu 下最好的 BT 客户端的意见。你最喜欢的是什么呢?请发表评论。也可以查看与本主题相关的[Ubuntu 最好的下载管理器][11]。如果使用 Popcorn Time,试试 [Popcorn Time 技巧][12] + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/best-torrent-ubuntu/ + +作者:[Abhishek][a] +译者:[Xuanwo](https://github.com/Xuanwo) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://deluge-torrent.org/ +[2]:http://www.qbittorrent.org/ +[3]:http://www.bittorrent.com/ +[4]:http://www.tixati.com/ +[5]:http://www.tixati.com/download/ +[6]:http://www.vuze.com/ +[7]:http://www.frostwire.com/ +[8]:http://www.frostwire.com/downloads +[9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/ +[10]:http://itsfoss.com/manage-startup-applications-ubuntu/ +[11]:http://itsfoss.com/4-best-download-managers-for-linux/ +[12]:http://itsfoss.com/popcorn-time-tips/ +[13]:http://www.frostclick.com/wp/ + diff --git a/translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md b/translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md deleted file mode 100644 index 80c9899637..0000000000 --- a/translated/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md +++ /dev/null @@ -1,120 +0,0 @@ -Translating by Xuanwo - -介绍Ubuntu下五大BT客户端 -================================================================================ -![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png) - -在寻找**Ubuntu中最好的BT客户端**吗?事实上,桌面平台中有许多可用的BT客户端,但是它们中的哪些才是**最好的**呢? 
- -我将会列出最好的五个BT客户端,它们都拥有着体积轻盈,功能强大的特点,而且还有令人印象深刻的用户界面。自然,易于安装和使用也是特性之一。 - -### Ubuntu下最好的BT客户端 ### - -考虑到Ubuntu默认安装了Transmission,所以我将会从这个列表中删去Transmission。但是这并不意味着Transmission没有资格出现在这个列表中,事实上,Transmission是一个非常好的BT客户端,这也正是它被多个发行版默认安装的原因,Ubuntu也不例外。 - ----------- - -### Deluge ### - -![Logo of Deluge torrent client for Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Deluge.png) - -[Deluge][1] 被Lifehacker选为Linux下最好的BT客户端,这说明了Deluge是多么的有用。而且,并不仅仅只有Lifehacker是Deluge的粉丝,纵观多个论坛,你都会发现不少Deluge的忠实拥趸。 - -快速,时尚而且直观的界面使得Deluge成为Linux用户的挚爱。 - -Deluge可在Ubuntu的仓库中获取,你能够在Ubuntu软件中心中安装它,或者使用下面的命令: - - sudo apt-get install delugeFast, sleek and intuitive interface makes Deluge a hot favorite among Linux users. - ----------- - -### qBittorrent ### - -![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png) - -正如它的名字所暗示的,[qBittorrent][2] 是著名的 [Bittorrent][3] 应用的Qt版本。如果曾经使用过它,你将会看到和Windows下的Bittorrent相似的界面。同样轻巧并且有着BT客户端的所有标准功能,qBittorrent也可以在Ubuntu的默认仓库中找到。 - -它可以通过Ubuntu软件仓库安装,或者使用下面的命令: - - sudo apt-get install qbittorrent - ----------- - -### Tixati ### - -![Tixati torrent client logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/tixati_icon.png) - -[Tixati][4] 是另一个不错的Ubuntu下的BT客户端。它有着一个默认的黑暗主题,尽管很多人喜欢,但是我例外。它拥有着一切你能在BT客户端中找到的功能。 - -除此之外,它还有着数据分析的额外功能。你可以在美观的图表中分析流量以及其它数据。 - -- [下载 Tixati][5] - ----------- - -### Vuze ### - -![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png) - -[Vuze][6]是许多Linux以及Windows用户最喜欢的BT客户端。除了标准的功能,你可以直接在应用程序中搜索种子,也可以订阅系列片源,这样就无需再去寻找新的片源了,因为你可以在侧边栏中的订阅看到它们。 - -它还配备了一个视频播放器,可以播放带有字幕的高清视频以及一切。但是我不认为你会用它来代替那些更好的视频播放器,比如VLC。 - -Vuze可以通过Ubuntu软件中心安装或者使用下列命令: - - sudo apt-get install vuze - ----------- - -### Frostwire ### - -![Logo of Frostwire torrent client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/frostwire.png) - 
-[Frostwire][7]可能会是一个你想要去尝试的应用。它不仅仅是一个简单的BT客户端,它还可以应用于安卓,你可以用它通过Wifi来共享文件。 - -你可以在应用中搜索种子并且播放它们。除了下载文件,它还可以浏览本地的影音文件,并且将它们有条理地呈现在播放器中。这同样适用于安卓版本。 - -还有一个特点是:Frostwire提供了印度艺术家的合法音乐下载。你可以下载并且欣赏它们,免费而且合法。 - -- [下载 Frostwire][8] - ----------- - -### 荣誉奖 ### - -在Windows中,uTorrent(发音:mu torrent)是我最喜欢的BT应用。尽管uTorrent可以在Linux下运行,但是我还是特意忽略了它。因为在Linux下使用uTorrent不仅困难,而且无法获得完整的应用体验(运行在浏览器中)。 - -可以[在这里][9]阅读Ubuntu下uTorrent的安装教程。 - -#### 快速提示: #### - -大多数情况下,BT应用不会默认自动启动。如果想改变这一行为,请阅读[如何管理Ubuntu下的自启程序][10]来学习。 - -### 你最喜欢的是什么? ### - -这些是我对于Ubuntu下最好的BT客户端的意见。你最喜欢的是什么呢?请发表评论。也可以查看与本主题相关的[Ubuntu最好的下载管理器][11]。如果使用Popcorn Time,试试[Popcorn Time技巧][12] - -------------------------------------------------------------------------------- - -via: http://itsfoss.com/best-torrent-ubuntu/ - -作者:[Abhishek][a] -译者:[Xuanwo](https://github.com/Xuanwo) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:http://deluge-torrent.org/ -[2]:http://www.qbittorrent.org/ -[3]:http://www.bittorrent.com/ -[4]:http://www.tixati.com/ -[5]:http://www.tixati.com/download/ -[6]:http://www.vuze.com/ -[7]:http://www.frostwire.com/ -[8]:http://www.frostwire.com/downloads -[9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/ -[10]:http://itsfoss.com/manage-startup-applications-ubuntu/ -[11]:http://itsfoss.com/4-best-download-managers-for-linux/ -[12]:http://itsfoss.com/popcorn-time-tips/ - From 09c53626e630571fc79671978a3f52e7039d0cb7 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 26 Aug 2015 15:41:08 +0800 Subject: [PATCH 315/697] =?UTF-8?q?20150826-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Is Out And It's Packed Full Of Features.md | 87 +++++++++++++++++ 1 file changed, 87 insertions(+) create mode 100644 sources/news/20150826 Plasma 5.4 Is Out And It's Packed Full Of Features.md 
diff --git a/sources/news/20150826 Plasma 5.4 Is Out And It's Packed Full Of Features.md b/sources/news/20150826 Plasma 5.4 Is Out And It's Packed Full Of Features.md new file mode 100644 index 0000000000..a103c6b505 --- /dev/null +++ b/sources/news/20150826 Plasma 5.4 Is Out And It's Packed Full Of Features.md @@ -0,0 +1,87 @@ +Plasma 5.4 Is Out And It’s Packed Full Of Features +================================================================================ +KDE has [announced][1] a brand new feature release of Plasma 5 — and it’s a corker. + +![kde network applet graphs](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/kde-network-applet-graphs.jpg) + +Better network details are among the changes + +Plasma 5.4.0 builds on [April’s 5.3.0 milestone][2] in a number of ways, ranging from the inherently technical, Wayland preview session, ahoy, to lavish aesthetic touches, like **1,400 brand new icons**. + +A handful of new components also feature in the release, including a new Plasma Widget for volume control, a monitor calibration tool and an improved user management tool. + +The ‘Kicker’ application menu has been powered up to let you favourite all types of content, not just applications. + +**KRunner now remembers searches** so that it can automatically offer suggestions based on your earlier queries as you type. + +The **network applet displays a graph** to give you a better understanding of your network traffic. It also gains two new VPN plugins for SSH and SSTP connections. + +Minor tweaks to the digital clock see it adapt better in slim panel mode, it gains ISO date support and makes it easier for you to toggle between 12 hour and 24 hour clock. Week numbers have been added to the calendar. 
+ +### Application Dashboard ### + +![plasma 5.4 fullscreen dashboard](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/plasma-fullscreen-dashboard.jpg) + +The new ‘Application Dashboard’ in KDE Plasma 5.4.0 + +**A new full screen launcher, called ‘Application Dashboard’**, is also available. + +This full-screen dash offers the same features as the traditional Application Menu but with “sophisticated scaling to screen size and full spatial keyboard navigation”. + +Like the Unity launcher, the new Plasma Application Dashboard helps you quickly find applications, sift through files and contacts based on your previous activity. + +### Changes in KDE Plasma 5.4.0 at a glance ### + +- Improved high DPI support +- KRunner autocompletion +- KRunner search history +- Application Dashboard add on +- 1,400 New icons +- Wayland tech preview + +For a full list of changes in Plasma 5.4 refer to [this changelog][3]. + +### Install Plasma 5.4 in Kubuntu 15.04 ### + +![new plasma desktop](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/new-plasma-desktop-.jpg) + +![Kubuntu logo](http://www.omgubuntu.co.uk/wp-content/uploads/2012/02/logo-kubuntu.png) + +To **install Plasma 5.4 in Kubuntu 15.04** you will need to add the KDE Backports PPA to your Software Sources. + +Adding the Kubuntu backports PPA **is not strictly advised** as it may upgrade other parts of the KDE desktop, application suite, developer frameworks or Kubuntu specific config files. + +If you like your desktop being stable, don’t proceed. + +The quickest way to upgrade to Plasma 5.4 once it lands in the Kubuntu Backports PPA is to use the Terminal: + + sudo add-apt-repository ppa:kubuntu-ppa/backports + + sudo apt-get update && sudo apt-get dist-upgrade + +Let the upgrade process complete. Assuming no errors emerge, reboot your computer for changes to take effect. + +If you’re not already using Kubuntu, i.e. 
you’re using the Unity version of Ubuntu, you should first install the Kubuntu desktop package (you’ll find it in the Ubuntu Software Centre). + +To undo the changes above and downgrade to the most recent version of Plasma available in the Ubuntu archives use the PPA-Purge tool: + + sudo apt-get install ppa-purge + + sudo ppa-purge ppa:kubuntu-ppa/backports + +Let us know how your upgrade/testing goes in the comments below and don’t forget to mention the features you hope to see added to the Plasma 5 desktop next. + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2015/08/plasma-5-4-new-features + +作者:[Joey-Elijah Sneddon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:https://dot.kde.org/2015/08/25/kde-ships-plasma-540-feature-release-august +[2]:http://www.omgubuntu.co.uk/2015/04/kde-plasma-5-3-released-heres-how-to-upgrade-in-kubuntu-15-04 +[3]:https://www.kde.org/announcements/plasma-5.3.2-5.4.0-changelog.php \ No newline at end of file From 14fba98900ff292c4029455f6e7d5fe14f6023de Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 26 Aug 2015 15:59:39 +0800 Subject: [PATCH 316/697] =?UTF-8?q?20150826-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...stem status page of your infrastructure.md | 294 ++++++++++++++++++ 1 file changed, 294 insertions(+) create mode 100644 sources/tech/20150826 How to set up a system status page of your infrastructure.md diff --git a/sources/tech/20150826 How to set up a system status page of your infrastructure.md b/sources/tech/20150826 How to set up a system status page of your infrastructure.md new file mode 100644 index 0000000000..44fb4ed8d5 --- /dev/null +++ b/sources/tech/20150826 How to set up 
a system status page of your infrastructure.md @@ -0,0 +1,294 @@ +How to set up a system status page of your infrastructure +================================================================================ +If you are a system administrator who is responsible for critical IT infrastructure or services of your organization, you will understand the importance of effective communication in your day-to-day tasks. Suppose your production storage server is on fire. You want your entire team on the same page in order to resolve the issue as fast as you can. While you are at it, you don't want half of all users contacting you asking why they cannot access their documents. When a scheduled maintenance is coming up, you want to notify interested parties of the event ahead of the schedule, so that unnecessary support tickets can be avoided. + +All these require some sort of streamlined communication channel between you, your team and people you serve. One way to achieve that is to maintain a centralized system status page, where the detail of downtime incidents, progress updates and maintenance schedules are reported and chronicled. That way, you can minimize unnecessary distractions during downtime, and also have any interested party informed and opt-in for any status update. + +One good **open-source, self-hosted system status page solution** is [Cachet][1]. In this tutorial, I am going to describe how to set up a self-hosted system status page using Cachet. + +### Cachet Features ### + +Before going into the detail of setting up Cachet, let me briefly introduce its main features. + +- **Full JSON API**: The Cachet API allows you to connect any external program or script (e.g., uptime script) to Cachet to report incidents or update status automatically. +- **Authentication**: Cachet supports Basic Auth and API token in JSON API, so that only authorized personnel can update the status page. 
+- **Metrics system**: This is useful to visualize custom data over time (e.g., server load or response time). +- **Notification**: Optionally you can send notification emails about reported incidents to anyone who signed up to the status page. +- **Multiple languages**: The status page can be translated into 11 different languages. +- **Two factor authentication**: This allows you to lock your Cachet admin account with Google's two-factor authentication. +- **Cross database support**: You can choose between MySQL, SQLite, Redis, APC, and PostgreSQL for a backend storage. + +In the rest of the tutorial, I explain how to install and configure Cachet on Linux. + +### Step One: Download and Install Cachet ### + +Cachet requires a web server and a backend database to operate. In this tutorial, I am going to use the LAMP stack. Here are distro-specific instructions to install Cachet and LAMP stack. + +#### Debian, Ubuntu or Linux Mint #### + + $ sudo apt-get install curl git apache2 mysql-server mysql-client php5 php5-mysql + $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet + $ cd /var/www/cachet + $ sudo git checkout v1.1.1 + $ sudo chown -R www-data:www-data . + +For more detail on setting up LAMP stack on Debian-based systems, refer to [this tutorial][2]. + +#### Fedora, CentOS or RHEL #### + +On Red Hat based systems, you first need to [enable REMI repository][3] (to meet PHP version requirement). Then proceed as follows. + + $ sudo yum install curl git httpd mariadb-server + $ sudo yum --enablerepo=remi-php56 install php php-mysql php-mbstring + $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet + $ cd /var/www/cachet + $ sudo git checkout v1.1.1 + $ sudo chown -R apache:apache . 
+ $ sudo firewall-cmd --permanent --zone=public --add-service=http + $ sudo firewall-cmd --reload + $ sudo systemctl enable httpd.service; sudo systemctl start httpd.service + $ sudo systemctl enable mariadb.service; sudo systemctl start mariadb.service + +For more details on setting up LAMP on Red Hat-based systems, refer to [this tutorial][4]. + +### Configure a Backend Database for Cachet ### + +The next step is to configure database backend. + +Log in to MySQL/MariaDB server, and create an empty database called 'cachet'. + + $ sudo mysql -uroot -p + +---------- + + mysql> create database cachet; + mysql> quit + +Now create a Cachet configuration file by using a sample configuration file. + + $ cd /var/www/cachet + $ sudo mv .env.example .env + +In .env file, fill in database information (i.e., DB_*) according to your setup. Leave other fields unchanged for now. + + APP_ENV=production + APP_DEBUG=false + APP_URL=http://localhost + APP_KEY=SomeRandomString + + DB_DRIVER=mysql + DB_HOST=localhost + DB_DATABASE=cachet + DB_USERNAME=root + DB_PASSWORD= + + CACHE_DRIVER=apc + SESSION_DRIVER=apc + QUEUE_DRIVER=database + + MAIL_DRIVER=smtp + MAIL_HOST=mailtrap.io + MAIL_PORT=2525 + MAIL_USERNAME=null + MAIL_PASSWORD=null + MAIL_ADDRESS=null + MAIL_NAME=null + + REDIS_HOST=null + REDIS_DATABASE=null + REDIS_PORT=null + +### Step Three: Install PHP Dependencies and Perform DB Migration ### + +Next, we are going to install necessary PHP dependencies. For that we will use composer. If you do not have composer installed on your system, install it first: + + $ curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer + +Now go ahead and install PHP dependencies using composer. + + $ cd /var/www/cachet + $ sudo composer install --no-dev -o + +Next, perform one-time database migration. This step will populate the empty database we created earlier with necessary tables. 
+ + $ sudo php artisan migrate + +Assuming the database config in /var/www/cachet/.env is correct, database migration should be completed successfully as shown below. + +![](https://farm6.staticflickr.com/5814/20235620184_54048676b0_c.jpg) + +Next, create a security key, which will be used to encrypt the data entered in Cachet. + + $ sudo php artisan key:generate + $ sudo php artisan config:cache + +![](https://farm6.staticflickr.com/5717/20831952096_7105c9fdc7_c.jpg) + +The generated app key will be automatically added to the APP_KEY variable of your .env file. No need to edit .env on your own here. + +### Step Four: Configure Apache HTTP Server ### + +Now it's time to configure the web server that Cachet will be running on. As we are using Apache HTTP server, create a new [virtual host][5] for Cachet as follows. + +#### Debian, Ubuntu or Linux Mint #### + + $ sudo vi /etc/apache2/sites-available/cachet.conf + +---------- + + <VirtualHost *:80> + ServerName cachethost + ServerAlias cachethost + DocumentRoot "/var/www/cachet/public" + <Directory "/var/www/cachet/public"> + Require all granted + Options Indexes FollowSymLinks + AllowOverride All + Order allow,deny + Allow from all + </Directory> + </VirtualHost> + +Enable the new Virtual Host and mod_rewrite with: + + $ sudo a2ensite cachet.conf + $ sudo a2enmod rewrite + $ sudo service apache2 restart + +#### Fedora, CentOS or RHEL #### + +On Red Hat based systems, create a virtual host file as follows. + + $ sudo vi /etc/httpd/conf.d/cachet.conf + +---------- + + <VirtualHost *:80> + ServerName cachethost + ServerAlias cachethost + DocumentRoot "/var/www/cachet/public" + <Directory "/var/www/cachet/public"> + Require all granted + Options Indexes FollowSymLinks + AllowOverride All + Order allow,deny + Allow from all + </Directory> + </VirtualHost> + +Now reload Apache configuration: + + $ sudo systemctl reload httpd.service + +### Step Five: Configure /etc/hosts for Testing Cachet ### + +At this point, the initial Cachet status page should be up and running, and now it's time to test. 
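Before moving on to the client-side /etc/hosts tweak, it can save time to confirm on the server that Apache actually accepted the new virtual host. The quick check below is my own addition, not part of the original walkthrough; the binary name differs across distros, so it probes for whichever one is present.

```shell
# Ask Apache to parse its configuration without restarting it.
# Debian-family systems ship apachectl; Red Hat-family systems ship httpd.
if command -v apachectl >/dev/null 2>&1; then
    check_msg=$(apachectl configtest 2>&1)
elif command -v httpd >/dev/null 2>&1; then
    check_msg=$(httpd -t 2>&1)
else
    check_msg="no Apache binary found on PATH"
fi
echo "$check_msg"
```

A "Syntax OK" result here means any remaining problem is in DNS/hosts resolution or permissions rather than in the vhost file itself.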
+ +Since Cachet is configured as a virtual host of Apache HTTP server, we need to tweak /etc/hosts of your client computer to be able to access it. Here the client computer is the one from which you will be accessing the Cachet page. + +Open /etc/hosts, and add the following entry. + + $ sudo vi /etc/hosts + + <IP-address-of-Cachet-server> cachethost + +In the above, the name "cachethost" must match with ServerName specified in the Apache virtual host file for Cachet. + +### Test Cachet Status Page ### + +Now you are ready to access Cachet status page. Type http://cachethost in your browser address bar. You will be redirected to the initial Cachet setup page as follows. + +![](https://farm6.staticflickr.com/5745/20858228815_405fce1301_c.jpg) + +Choose cache/session driver. Here let's choose "File" for both cache and session drivers. + +Next, type basic information about the status page (e.g., site name, domain, timezone and language), as well as administrator account. + +![](https://farm1.staticflickr.com/611/20237229693_c22014e4fd_c.jpg) + +![](https://farm6.staticflickr.com/5707/20858228875_b056c9e1b4_c.jpg) + +![](https://farm6.staticflickr.com/5653/20671482009_8629572886_c.jpg) + +Your initial status page will finally be ready. + +![](https://farm6.staticflickr.com/5692/20237229793_f6a48f379a_c.jpg) + +Go ahead and create components (units of your system), incidents or any scheduled maintenance as you want. + +For example, to add a new component: + +![](https://farm6.staticflickr.com/5672/20848624752_9d2e0a07be_c.jpg) + +To add a scheduled maintenance: + +This is what the public Cachet status page looks like: + +![](https://farm1.staticflickr.com/577/20848624842_df68c0026d_c.jpg) + +With SMTP integration, you can send out emails on status updates to any subscribers. Also, you can fully customize the layout and style of the status page using CSS and markdown formatting. + +### Conclusion ### + +Cachet is pretty easy-to-use, self-hosted status page software. 
One of the nicest features of Cachet is its support for full JSON API. Using its RESTful API, one can easily hook up Cachet with separate monitoring backends (e.g., [Nagios][6]), and feed Cachet with incident reports and status updates automatically. This is far quicker and more efficient than manually managing a status page. + +As final words, I'd like to mention one thing. While setting up a fancy status page with Cachet is straightforward, making the best use of the software is not as easy as installing it. You need total commitment from the IT team on updating the status page in an accurate and timely manner, thereby building credibility of the published information. At the same time, you need to educate users to turn to the status page. At the end of the day, it would be pointless to set up a status page if it's not populated well, and/or no one is checking it. Remember this when you consider deploying Cachet in your work environment. + +### Troubleshooting ### + +As a bonus, here are some useful troubleshooting tips in case you encounter problems while setting up Cachet. + +1. The Cachet page does not load anything, and you are getting the following error. + + production.ERROR: exception 'RuntimeException' with message 'No supported encrypter found. The cipher and / or key length are invalid.' in /var/www/cachet/bootstrap/cache/compiled.php:6695 + +**Solution**: Make sure that you create an app key, as well as clear configuration cache as follows. + + $ cd /path/to/cachet + $ sudo php artisan key:generate + $ sudo php artisan config:cache + +2. You are getting the following error while invoking composer command. + + - danielstjules/stringy 1.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. + - laravel/framework v5.1.8 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. 
+ - league/commonmark 0.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. + +**Solution**: Make sure to install the required PHP extension mbstring on your system which is compatible with your PHP. On Red Hat based system, since we installed PHP from REMI-56 repository, we install the extension from the same repository. + + $ sudo yum --enablerepo=remi-php56 install php-mbstring + +3. You are getting a blank page while trying to access Cachet status page. The HTTP log shows the following error. + + PHP Fatal error: Uncaught exception 'UnexpectedValueException' with message 'The stream or file "/var/www/cachet/storage/logs/laravel-2015-08-21.log" could not be opened: failed to open stream: Permission denied' in /var/www/cachet/bootstrap/cache/compiled.php:12851 + +**Solution**: Try the following commands. + + $ cd /var/www/cachet + $ sudo php artisan cache:clear + $ sudo chmod -R 777 storage + $ sudo composer dump-autoload + +If the above solution does not work, try disabling SELinux: + + $ sudo setenforce 0 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/setup-system-status-page.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:https://cachethq.io/ +[2]:http://xmodulo.com/install-lamp-stack-ubuntu-server.html +[3]:http://ask.xmodulo.com/install-remi-repository-centos-rhel.html +[4]:http://xmodulo.com/install-lamp-stack-centos.html +[5]:http://xmodulo.com/configure-virtual-hosts-apache-http-server.html +[6]:http://xmodulo.com/monitor-common-services-nagios.html \ No newline at end of file From 52b47060e88e3bc423bdc82b15c0af5531076e79 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 26 Aug 2015 16:12:45 +0800 Subject: [PATCH 317/697] 
=?UTF-8?q?20150826-3=20RHCE=20=E7=AC=AC=E4=BA=94?= =?UTF-8?q?=E7=AF=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ate and Import Into Database) in RHEL 7.md | 165 ++++++++++++++++++ 1 file changed, 165 insertions(+) create mode 100644 sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md diff --git a/sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md b/sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md new file mode 100644 index 0000000000..217116cfee --- /dev/null +++ b/sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md @@ -0,0 +1,165 @@ +Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7 +================================================================================ +In order to keep your RHEL 7 systems secure, you need to know how to monitor all of the activities that take place on such systems by examining log files. Thus, you will be able to detect any unusual or potentially malicious activity and perform system troubleshooting or take another appropriate action. + +![Linux Rotate Log Files Using Rsyslog and Logrotate](http://www.tecmint.com/wp-content/uploads/2015/08/Manage-and-Rotate-Linux-Logs-Using-Rsyslog-Logrotate.jpg) + +RHCE Exam: Manage System Logs Using Rsyslogd and Logrotate – Part 5 + +In RHEL 7, the [rsyslogd][1] daemon is responsible for system logging and reads its configuration from /etc/rsyslog.conf (this file specifies the default location for all system logs) and from files inside /etc/rsyslog.d, if any. + +### Rsyslogd Configuration ### + +A quick inspection of the [rsyslog.conf][2] will be helpful to start. 
This file is divided into 3 main sections: Modules (since rsyslog follows a modular design), Global directives (used to set global properties of the rsyslogd daemon), and Rules. As you will probably guess, this last section indicates what gets logged or shown (also known as the selector) and where, and will be our focus throughout this article. + +A typical line in rsyslog.conf is as follows: + +![Rsyslogd Configuration](http://www.tecmint.com/wp-content/uploads/2015/08/Rsyslogd-Configuration.png) + +Rsyslogd Configuration + +In the image above, we can see that a selector consists of one or more pairs Facility:Priority separated by semicolons, where Facility describes the type of message (refer to [section 4.1.1 in RFC 3164][3] to see the complete list of facilities available for rsyslog) and Priority indicates its severity, which can be one of the following self-explanatory words: + +- debug +- info +- notice +- warning +- err +- crit +- alert +- emerg + +Though not a priority itself, the keyword none means no priority at all for the given facility. + +**Note**: A given priority indicates that all messages of such priority and above should be logged. Thus, the line in the example above instructs the rsyslogd daemon to log all messages of priority info or higher (regardless of the facility) except those belonging to mail, authpriv, and cron services (no messages coming from these facilities will be taken into account) to /var/log/messages. + +You can also group multiple facilities using the comma sign to apply the same priority to all of them. Thus, the line: + + *.info;mail.none;authpriv.none;cron.none /var/log/messages + +Could be rewritten as + + *.info;mail,authpriv,cron.none /var/log/messages + +In other words, the facilities mail, authpriv, and cron are grouped and the keyword none is applied to the three of them. 
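To make the selector syntax more concrete, here is a small illustrative sketch of my own; the facility choices and destination paths below are examples, not part of the stock RHEL 7 configuration:

```
# Kernel messages of priority warning or above go to their own file
kern.warn                        /var/log/kern-warn.log
# Everything from mail and cron, at any priority, goes to a shared file
mail,cron.*                      /var/log/mail-cron.log
# Emergency messages are broadcast to every logged-in user's terminal
*.emerg                          :omusrmsg:*
```

Any snippet like this can be dropped into a file under /etc/rsyslog.d/ and picked up after restarting the rsyslog service.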
+ +#### Creating a custom log file #### + +To log all daemon messages to /var/log/tecmint.log, we need to add the following line either in rsyslog.conf or in a separate file (easier to manage) inside /etc/rsyslog.d: + + daemon.* /var/log/tecmint.log + +Let’s restart the daemon (note that the service name does not end with a d): + + # systemctl restart rsyslog + +And check the contents of our custom log before and after restarting two random daemons: + +![Linux Create Custom Log File](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Custom-Log-File.png) + +Create Custom Log File + +As a self-study exercise, I would recommend you play around with the facilities and priorities and either log additional messages to existing log files or create new ones as in the previous example. + +### Rotating Logs using Logrotate ### + +To prevent log files from growing endlessly, the logrotate utility is used to rotate, compress, remove, and alternatively mail logs, thus easing the administration of systems that generate large numbers of log files. + +Logrotate runs daily as a cron job (/etc/cron.daily/logrotate) and reads its configuration from /etc/logrotate.conf and from files located in /etc/logrotate.d, if any. + +As with the case of rsyslog, even when you can include settings for specific services in the main file, creating separate configuration files for each one will help organize your settings better. + +Let’s take a look at a typical logrotate.conf: + +![Logrotate Configuration](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Configuration.png) + +Logrotate Configuration + +In the example above, logrotate will perform the following actions for /var/log/wtmp: attempt to rotate only once a month, but only if the file is at least 1 MB in size, then create a brand new log file with permissions set to 0664 and ownership given to user root and group utmp. 
Next, only keep one archived log, as specified by the rotate directive: + +![Logrotate Logs Monthly](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Logs-Monthly.png) + +Logrotate Logs Monthly + +Let’s now consider another example as found in /etc/logrotate.d/httpd: + +![Rotate Apache Log Files](http://www.tecmint.com/wp-content/uploads/2015/08/Rotate-Apache-Log-Files.png) + +Rotate Apache Log Files + +You can read more about the settings for logrotate in its man pages ([man logrotate][4] and [man logrotate.conf][5]). Both files are provided along with this article in PDF format for your reading convenience. + +As a system engineer, it will be pretty much up to you to decide for how long logs will be stored and in what format, depending on whether you have /var in a separate partition / logical volume. Otherwise, you really want to consider removing old logs to save storage space. On the other hand, you may be forced to keep several logs for future security auditing according to your company’s or client’s internal policies. + +#### Saving Logs to a Database #### + +Of course examining logs (even with the help of tools such as grep and regular expressions) can become a rather tedious task. For that reason, rsyslog allows us to export them into a database (out-of-the-box supported RDBMS include MySQL, MariaDB, PostgreSQL, and Oracle). 
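Before moving on to the database setup, note that the custom /var/log/tecmint.log created earlier can itself be wired into logrotate with a small drop-in of its own. The file name and directive values below are an illustrative sketch of mine, not part of the original walkthrough:

```
# /etc/logrotate.d/tecmint -- illustrative values only
/var/log/tecmint.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```

With this in place, the daily logrotate cron job would keep four compressed weekly archives of the custom log and skip rotation quietly when the file is missing or empty.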
+ +This section of the tutorial assumes that you have already installed the MariaDB server and client in the same RHEL 7 box where the logs are being managed: + + # yum update && yum install mariadb mariadb-server mariadb-client rsyslog-mysql + # systemctl enable mariadb && systemctl start mariadb + +Then use the `mysql_secure_installation` utility to set the password for the root user and other security considerations: + +![Secure MySQL Database](http://www.tecmint.com/wp-content/uploads/2015/08/Secure-MySQL-Database.png) + +Secure MySQL Database + +Note: If you don’t want to use the MariaDB root user to insert log messages to the database, you can configure another user account to do so. Explaining how to do that is out of the scope of this tutorial but is explained in detail in [MariaDB knowledge][6] base. In this tutorial we will use the root account for simplicity. + +Next, download the createDB.sql script from [GitHub][7] and import it into your database server: + + # mysql -u root -p < createDB.sql + +![Save Server Logs to Database](http://www.tecmint.com/wp-content/uploads/2015/08/Save-Server-Logs-to-Database.png) + +Save Server Logs to Database + +Finally, add the following lines to /etc/rsyslog.conf: + + $ModLoad ommysql + $ActionOmmysqlServerPort 3306 + *.* :ommysql:localhost,Syslog,root,YourPasswordHere + +Restart rsyslog and the database server: + + # systemctl restart rsyslog + # systemctl restart mariadb + +#### Querying the Logs using SQL syntax #### + +Now perform some tasks that will modify the logs (like stopping and starting services, for example), then log to your DB server and use standard SQL commands to display and search in the logs: + + USE Syslog; + SELECT ReceivedAt, Message FROM SystemEvents; + +![Query Logs in Database](http://www.tecmint.com/wp-content/uploads/2015/08/Query-Logs-in-Database.png) + +Query Logs in Database + +### Summary ### + +In this article we have explained how to set up system logging, how to rotate logs, and how 
to redirect the messages to a database for easier search. We hope that these skills will be helpful as you prepare for the [RHCE exam][8] and in your daily responsibilities as well. + +As always, your feedback is more than welcome. Feel free to use the form below to reach us. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/manage-linux-system-logs-using-rsyslogd-and-logrotate/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/wp-content/pdf/rsyslogd.pdf +[2]:http://www.tecmint.com/wp-content/pdf/rsyslog.conf.pdf +[3]:https://tools.ietf.org/html/rfc3164#section-4.1.1 +[4]:http://www.tecmint.com/wp-content/pdf/logrotate.pdf +[5]:http://www.tecmint.com/wp-content/pdf/logrotate.conf.pdf +[6]:https://mariadb.com/kb/en/mariadb/create-user/ +[7]:https://github.com/sematext/rsyslog/blob/master/plugins/ommysql/createDB.sql +[8]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/ \ No newline at end of file From 8c775ab3d9ad4a223a37da1785f7b1ff00412dc4 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 26 Aug 2015 16:24:27 +0800 Subject: [PATCH 318/697] =?UTF-8?q?20150826-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Connecting Remote Unix or Linux Systems.md | 110 ++++++++++++++++++ 1 file changed, 110 insertions(+) create mode 100644 sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md diff --git a/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md b/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md new file mode 100644 index 
0000000000..f36e1b21df --- /dev/null +++ b/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md @@ -0,0 +1,110 @@ +Mosh Shell – A SSH Based Client for Connecting Remote Unix/Linux Systems +================================================================================ +Mosh, which stands for Mobile Shell, is a command-line application used for connecting to a server from a client computer over the Internet. It can be used like SSH and contains more features than Secure Shell. It is an application similar to SSH, but with additional features. The application was originally written by Keith Winstein for Unix-like operating systems and released under GNU GPL v3. + +![Mosh Shell SSH Client](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-SSH-Client.png) + +Mosh Shell SSH Client + +#### Features of Mosh #### + +- It is a remote terminal application that supports roaming. +- Available for all major UNIX-like OS viz., Linux, FreeBSD, Solaris, Mac OS X and Android. +- Intermittent Connectivity supported. +- Provides intelligent local echo. +- Line editing of user keystrokes supported. +- Responsive design and Robust Nature over wifi, cellular and long-distance links. +- Remains connected even when the IP changes. It uses UDP in place of TCP (used by SSH). TCP times out when the connection is reset or a new IP is assigned, but UDP keeps the connection open. +- The Connection remains intact when you resume the session after a long time. +- No network lag. Shows the user's typed keys and deletions immediately without network lag. +- Same old method to login as it was in SSH. +- Mechanism to handle packet loss. + +### Installation of Mosh Shell in Linux ### + +On Debian, Ubuntu and Mint alike systems, you can easily install the Mosh package with the help of [apt-get package manager][1] as shown. 
+ + # apt-get update + # apt-get install mosh + +On RHEL/CentOS/Fedora-based distributions, you need to enable the third-party [EPEL][2] repository, in order to install mosh from it using the [yum package manager][3] as shown. + + # yum update + # yum install mosh + +On Fedora 22 and later, you need to use the [dnf package manager][4] to install mosh as shown. + + # dnf install mosh + +### How do I use Mosh Shell? ### + +1. Let’s try to log in to a remote Linux server using the mosh shell. + + $ mosh root@192.168.0.150 + +![Mosh Shell Remote Connection](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Remote-Connection.png) + +Mosh Shell Remote Connection + +**Note**: As you can see, the connection failed because the required port was not open on my remote CentOS 7 box. A quick but not recommended workaround I used was: + + # systemctl stop firewalld [on Remote Server] + +The preferred way is to open the required port, update the firewall rules, and then connect to mosh on a predefined port. For in-depth details on firewalld you may like to visit this post. + +- [How to Configure Firewalld][5] + +2. Let’s assume that the default SSH port 22 was changed to port 70; in this case you can define a custom port with the help of the ‘-p‘ switch of mosh. + + $ mosh -p 70 root@192.168.0.150 + +3. Check the version of the installed Mosh. + + $ mosh --version + +![Check Mosh Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mosh-Version.png) + +Check Mosh Version + +4. To close a mosh session, type ‘exit‘ at the prompt. + + $ exit + +5. Mosh supports a lot of options, which you can list with: + + $ mosh --help + +![Mosh Shell Options](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Options.png) + +Mosh Shell Options + +#### Cons of Mosh Shell #### + +- Mosh has an additional prerequisite: it must be allowed to connect directly via UDP, which is not required by SSH. +- Dynamic port allocation in the range of 60000-61000. The first open port is allocated.
It requires one port per connection. +- Default port allocation is a serious security concern, especially in production. +- IPv6 connections are supported, but roaming on IPv6 is not. +- Scrollback is not supported. +- No X11 forwarding support. +- No support for ssh-agent forwarding. + +### Conclusion ### + +Mosh is a nice small utility which is available for download in the repositories of most Linux distributions. Though it has a few shortcomings, especially the security concerns and the additional requirements, its features — such as staying connected even while roaming — are its plus points. My recommendation is that every Linux user who deals with SSH should try this application; mind it, Mosh is worth a try. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/install-mosh-shell-ssh-client-in-linux/ + +作者:[Avishek Kumar][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ +[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[3]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[4]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ +[5]:http://www.tecmint.com/configure-firewalld-in-centos-7/ \ No newline at end of file From 455f69011e3d39c5668f35173c529842fa77872a Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 26 Aug 2015 17:01:18 +0800 Subject: [PATCH 319/697] =?UTF-8?q?20150826-5=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...50826 Five Super Cool Open Source Games.md | 65 ++++++++++++++++ ...
Run Kali Linux 2.0 In Docker Container.md | 74 +++++++++++++++++++ 2 files changed, 139 insertions(+) create mode 100644 sources/share/20150826 Five Super Cool Open Source Games.md create mode 100644 sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md diff --git a/sources/share/20150826 Five Super Cool Open Source Games.md b/sources/share/20150826 Five Super Cool Open Source Games.md new file mode 100644 index 0000000000..0b92dcedff --- /dev/null +++ b/sources/share/20150826 Five Super Cool Open Source Games.md @@ -0,0 +1,65 @@ +Five Super Cool Open Source Games +================================================================================ +In 2014 and 2015, Linux became home to a list of popular commercial titles such as the popular Borderlands, Witcher, Dead Island, and Counter Strike series of games. While this is exciting news, what of the gamer on a budget? Commercial titles are good, but even better are free-to-play alternatives made by developers who know what players like. + +Some time ago, I came across a three year old YouTube video with the ever optimistic title [5 Open Source Games that Don’t Suck][1]. Although the video praises some open source games, I’d prefer to approach the subject with a bit more enthusiasm, at least as far as the title goes. So, here’s my list of five super cool open source games. + +### Tux Racer ### + +![Tux Racer](http://fossforce.com/wp-content/uploads/2015/08/tuxracer-550x413.jpg) + +Tux Racer + +[Tux Racer][2] is the first game on this list because I’ve had plenty of experience with it. On a [recent trip to Mexico][3] that my brother and I took with [Kids on Computers][4], Tux Racer was one of the games that kids and teachers alike enjoyed. In this game, players use the Linux mascot, the penguin Tux, to race on downhill ski slopes in time trials in which players challenge their own personal bests. Currently there’s no multiplayer version available, but that could be subject to change. 
Available for Linux, OS X, Windows, and Android. + +### Warsow ### + +![Warsow](http://fossforce.com/wp-content/uploads/2015/08/warsow-550x413.jpg) + +Warsow + +The [Warsow][5] website explains: “Set in a futuristic cartoonish world, Warsow is a completely free fast-paced first-person shooter (FPS) for Windows, Linux and Mac OS X. Warsow is the Art of Respect and Sportsmanship Over the Web.” I was reluctant to include games from the FPS genre on this list, because many have played games in this genre, but I was amused by Warsow. It prioritizes lots of movement and the game is fast paced with a set of eight weapons to start with. The cartoonish style makes playing feel less serious and more casual, something for friends and family to play together. However, it boasts competitive play, and when I experienced the game I found there were, indeed, some expert players around. Available for Linux, Windows and OS X. + +### M.A.R.S – A ridiculous shooter ### + +![M.A.R.S. - A ridiculous shooter](http://fossforce.com/wp-content/uploads/2015/08/MARS-screenshot-550x344.jpg) + +M.A.R.S. – A ridiculous shooter + +[M.A.R.S – A ridiculous shooter][6] is appealing because of it’s vibrant coloring and style. There is support for two players on the same keyboard, but an online multiplayer version is currently in the works — meaning plans to play with friends have to wait for now. Regardless, it’s an entertaining space shooter with a few different ships and weapons to play as. There are different shaped ships, ranging from shotguns, lasers, scattered shots and more (one of the random ships shot bubbles at my opponents, which was funny amid the chaotic gameplay). There are a few modes of play, such as the standard death match against opponents to score a certain limit or score high, along with other modes called Spaceball, Grave-itation Pit and Cannon Keep. Available for Linux, Windows and OS X. 
+ +### Valyria Tear ### + +![Valyria Tear](http://fossforce.com/wp-content/uploads/2015/08/bronnan-jump-to-enemy-550x413.jpg) + +Valyria Tear + +[Valyria Tear][7] resembles many fan favorite role-playing games (RPGs) spanning the years. The story is set in the usual era of fantasy games, full of knights, kingdoms and wizardry, and follows the main character Bronann. The design team did great work in designing the world and gives players everything expected from the genre: hidden chests, random monster encounters, non-player character (NPC) interaction, and something no RPG would be complete without: grinding for experience on lower level slime monsters until you’re ready for the big bosses. When I gave it a try, time didn’t permit me to play too far into the campaign, but for those interested there is a ‘[Let’s Play][8]‘ series by YouTube user Yohann Ferriera. Available for Linux, Windows and OS X. + +### SuperTuxKart ### + +![SuperTuxKart](http://fossforce.com/wp-content/uploads/2015/08/hacienda_tux_antarctica-550x293.jpg) + +SuperTuxKart + +Last but not least is [SuperTuxKart][9], a clone of Mario Kart that is every bit as fun as the original. It started development around 2000-2004 as Tux Kart, but there were errors in its production which led to a cease in development for a few years. Since development picked up again in 2006, it’s been improving, with version 0.9 debuting four months ago. In the game, our old friend Tux starts in the role of Mario and a few other open source mascots. One recognizable face among them is Suzanne, the monkey mascot for Blender. The graphics are solid and gameplay is fluent. While online play is in the planning stages, split screen multiplayer action is available, with up to four players supported on a single computer. Available for Linux, Windows, OS X, AmigaOS 4, AROS and MorphOS. 
+ +-------------------------------------------------------------------------------- + +via: http://fossforce.com/2015/08/five-super-cool-open-source-games/ + +作者:Hunter Banks +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.youtube.com/watch?v=BEKVl-XtOP8 +[2]:http://tuxracer.sourceforge.net/download.html +[3]:http://fossforce.com/2015/07/banks-family-values-texas-linux-fest/ +[4]:http://www.kidsoncomputers.org/an-amazing-week-in-oaxaca +[5]:https://www.warsow.net/download +[6]:http://mars-game.sourceforge.net/ +[7]:http://valyriatear.blogspot.com/ +[8]:https://www.youtube.com/channel/UCQ5KrSk9EqcT_JixWY2RyMA +[9]:http://supertuxkart.sourceforge.net/ \ No newline at end of file diff --git a/sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md b/sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md new file mode 100644 index 0000000000..8018ca17e9 --- /dev/null +++ b/sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md @@ -0,0 +1,74 @@ +How to Run Kali Linux 2.0 In Docker Container +================================================================================ +### Introduction ### + +Kali Linux is a well-known operating system for security testers and ethical hackers. It comes bundled with a large list of security-related applications and makes it easy to perform penetration testing. Recently, [Kali Linux 2.0][1] came out, and it is being considered one of the most important releases for this operating system. On the other hand, Docker technology is getting massive popularity due to its scalability and ease of use. Docker makes it super easy to ship your software applications to your users.
The big news is that you can now run Kali Linux in Docker; let’s see how :) + +### Running Kali Linux 2.0 In Docker ### + +**Related Notes** + +If you don’t have docker installed on your system, you can install it by using the following commands: + +**For Ubuntu/Linux Mint/Debian:** + + sudo apt-get install docker + +**For Fedora/RHEL/CentOS:** + + sudo yum install docker + +**For Fedora 22:** + + dnf install docker + +You can start the docker service by running: + + sudo systemctl start docker + +On older releases without systemd, use “sudo service docker start” instead. + +First of all, make sure that the docker service is running fine by using the following command: + + sudo systemctl status docker + +The Kali Linux docker image has been uploaded online by the Kali Linux development team; simply run the following command to download this image to your system. + + docker pull kalilinux/kali-linux-docker + +![Pull Kali Linux docker](http://linuxpitstop.com/wp-content/uploads/2015/08/129.png) + +Once the download is complete, run the following command to find out the Image ID of your downloaded Kali Linux docker image. + + docker images + +![Kali Linux Image ID](http://linuxpitstop.com/wp-content/uploads/2015/08/230.png) + +Now run the following command to start your Kali Linux docker container from the image (replace the Image ID with the correct one for your system). + + docker run -i -t 198cd6df71ab3 /bin/bash + +It will immediately start the container and log you into the operating system; you can start working with Kali Linux here. + +![Kali Linux Login](http://linuxpitstop.com/wp-content/uploads/2015/08/328.png) + +You can verify that the container is up and running fine by using the following command: + + docker ps + +### Conclusion ### + +![Docker Kali](http://linuxpitstop.com/wp-content/uploads/2015/08/421.png) + +Docker is one of the smartest ways to deploy and distribute your packages. The Kali Linux docker image is pretty handy, does not consume a large amount of disk space, and makes it easy to test this wonderful distro on any operating system with Docker installed.
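As a side note, picking the Image ID out of the `docker images` table by hand is easy to get wrong. A small awk filter can extract it automatically — shown here against a captured sample listing (the listing and ID below are illustrative only; the exact columns and ID will differ on your system):

```shell
# Sample `docker images` output, captured for illustration. On a live
# system you would pipe `docker images` directly into awk instead.
listing='REPOSITORY                    TAG      IMAGE ID       CREATED        VIRTUAL SIZE
kalilinux/kali-linux-docker   latest   198cd6df71ab   2 weeks ago    325.8 MB'

# Print column 3 (IMAGE ID) of the row whose repository name matches.
image_id=$(printf '%s\n' "$listing" | awk '$1 == "kalilinux/kali-linux-docker" { print $3 }')
echo "$image_id"   # prints 198cd6df71ab
```

With a running Docker daemon, the same filter becomes `docker images | awk '$1 == "kalilinux/kali-linux-docker" { print $3 }'`, and its result can be passed straight to `docker run`.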
+ +-------------------------------------------------------------------------------- + +via: http://linuxpitstop.com/run-kali-linux-2-0-in-docker-container/ + +作者:[Aun][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linuxpitstop.com/author/aun/ +[1]:http://linuxpitstop.com/install-kali-linux-2-0/ \ No newline at end of file From 83975466763ab52f4e455bdc788b110496c0a1d0 Mon Sep 17 00:00:00 2001 From: Luoyuanhao Date: Wed, 26 Aug 2015 19:19:47 +0800 Subject: [PATCH 320/697] [Translating] tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md --- ...ogs (Configure, Rotate and Import Into Database) in RHEL 7.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md b/sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md index 217116cfee..8f3370f741 100644 --- a/sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md +++ b/sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md @@ -1,3 +1,4 @@ +ictlyh Translating Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7 ================================================================================ In order to keep your RHEL 7 systems secure, you need to know how to monitor all of the activities that take place on such systems by examining log files. Thus, you will be able to detect any unusual or potentially malicious activity and perform system troubleshooting or take another appropriate action. 
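As a quick, illustrative taste of that kind of log inspection (the excerpt below is made-up sample data; on a real RHEL 7 box you would read `/var/log/secure` itself, as covered later in the series):

```shell
# Count failed SSH password attempts in an auth-log excerpt.
# The excerpt is hypothetical; on RHEL 7 the equivalent live check is:
#   grep -c 'Failed password' /var/log/secure
log='Aug 26 10:02:11 host sshd[1023]: Failed password for root from 10.0.0.5 port 4321 ssh2
Aug 26 10:02:15 host sshd[1024]: Accepted password for gacanepa from 10.0.0.7 port 4400 ssh2
Aug 26 10:02:19 host sshd[1025]: Failed password for invalid user admin from 10.0.0.5 port 4330 ssh2'

printf '%s\n' "$log" | grep -c 'Failed password'   # prints 2
```

A count that suddenly jumps is exactly the sort of unusual activity the paragraph above is talking about.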
From 5c0aeccce15bf2f8d6efbc16defeba75e422913b Mon Sep 17 00:00:00 2001 From: locez Date: Thu, 27 Aug 2015 02:31:11 +0800 Subject: [PATCH 321/697] kde-plasma-5.4 --- translated/kde-plasma-5.4.md | 77 ++++++++++++++++++++++++++++++++++++ 1 file changed, 77 insertions(+) create mode 100644 translated/kde-plasma-5.4.md diff --git a/translated/kde-plasma-5.4.md b/translated/kde-plasma-5.4.md new file mode 100644 index 0000000000..4c6e66bd69 --- /dev/null +++ b/translated/kde-plasma-5.4.md @@ -0,0 +1,77 @@ +#KDE 发行 Plasma 5.4.0 , 8 月版本特性 + +![Plasma 5.4](https://www.kde.org/announcements/plasma-5.4/plasma-screen-desktop-2-shadow.png) +2015 年 8 月 25 ,星期二,KDE发布了 Plasma 5 的一个新版本。 +此版本为我们带来了许多非常棒的触感,如优化了对高 DPI 的支持,KRunner 自动补全和一些新的美观的 Breeze 图标。这还为不久以后的 Wayland 桌面奠定了基础。我们还发行了几个新组件,如声音部件,显示器校准工具和测试版的用户管理工具。 + +###新的音频音量程序 +![The new Audio Volume Applet](https://www.kde.org/announcements/plasma-5.4/plasma-screen-audiovolume-shadows-wee.png) +新的音频音量程序直接与 PulseAudio (Linux 上一个非常流行的音频服务) 共同提供服务,并且在一个漂亮的简约的界面提供一个完全的音量控制和输出设定。 + +###应用控制面板起动器 +![he new Dashboard alternative launcher](https://www.kde.org/announcements/plasma-5.4/plasma-screen-dashboard-2-shadow-wee.png) +Plasma 5.4 在 kdeplasma-addons 带来了一个全新的全屏应用控制面板,它具有应用菜单的所有功能还支持缩放和全空间键盘导航。新的起动器可以像你目前所用的最近使用的或收藏的文档和联系人一样简单和快速地查找应用。 + +###丰富的艺术图标 +![Just some of the new icons in this release](https://kver.files.wordpress.com/2015/07/image10430.png) +Plasma 5.4 加入了 1400 多个新图标,其中不仅包含 KDE 程序的,而且还为 Inkscape, Firefox 和 Libreoffice 提供 Breeze 主题的艺术图标,可以体验到更加集成和本地化的感觉。 + +###KRunner 历史记录 +![KRunner](https://www.kde.org/announcements/plasma-5.4/plasma-screen-krunner-shadow-wee.png) +KRunner 现在可以记住之前的搜索历史并根据历史记录进行自动补全。 + +###Network 程序中实用的图形展示 +![Network Graphs](https://www.kde.org/announcements/plasma-5.4/plasma-screen-nm-graph-shadow-wee.png) +Network 程序现在可以以图形显示网络流量了,这也支持两个新的 VPN 插件,甚至是 SSH 或 SSTP。 + +###Wayland 技术预览 +Plasma 5.4 ,这个第一个 Wayland 桌面技术预览版已经发布了。在使用开源驱动的系统上可以使用 KWin,Plasma 的 Wayland 复合器, X11 窗口管理器和通过内核设定来运行 
Plasma。现在已经支持的功能需求来自于[手机 Plasma 项目](https://dot.kde.org/2015/07/25/plasma-mobile-free-mobile-platform),更多的桌面原生功能还未被完全实现。现在还不能作为替换 Xorg 的基础桌面来使用,但可以轻松地测试它,贡献和观看感人的免费视频。有关如何在 Wayland 中使用 Plasma 的介绍请到:[KWin wiki pages](https://community.kde.org/KWin/Wayland#Start_a_Plasma_session_on_Wayland)。Wlayland 将会在我们构建一个稳定的版本的目标中得到改进。 + +###其他的改变和添加 + - 优化对高 DPI 支持 + - 更少的内存占用 + - 桌面搜索更新和更快的后端处理 + - 便笺添加拖拉和支持键盘导航 + - 回收站重新支持拖拉 + - 系统托盘获得更快的可配置性 + - 文档重新修订和更新 + - 优化数字时钟的布局 + - 数字时钟支持 ISO 日期 + - 更简单的方式切换数字时钟 12/24 格式 + - 日历显示第几周 + - 任何项目都可以收藏进应用菜单,支持收藏文档和 Telepathy 联系人 + - Telepathy 联系人收藏可以展示联系人的照片和实时状态 + - 优化程序与容器间的焦点和激活处理 + - 文件夹视图中各种小修复:更好的默认大喜哦,鼠标交互问题以及文本标签换行 + - 任务管理器更好的呈现起动器默认的应用图标 + - 可再次通过在任务管理器删除程序来添加起动器 + - 可配置中间点击在任务管理器中的行为:无动作,关闭窗口,起动一个相同的程序 + - 任务管理器现在以列排序优先无论用户是否更倾向于行优先;许多用户更喜欢这样排序是因为它会使更少的任务按钮像窗口一样移来移去 + - 优化任务管理器的图标和页边标度 + - 任务管理器中各种小修复:垂直下拉,触摸事件处理,组扩展箭头视觉问题 + - 提供可用的目的框架技术预览版,可以使用 QuickShare Plasmoid 在许多 web 服务分享文件 + - 添加 显示器配置工具 + - 添加 kwallet-pam 来在登陆时打开 wallet + - 用户管理器现在已经接入到设置中,并且用户账户模块被舍弃 + - 应用程序菜单(Kicker)的性能得到改善 + - 应用程序菜单(Kicker)各种小修复:可用 隐藏/显示 程序更加可靠,顶部面板对齐修复,文件夹视图中 “添加到桌面”更加可靠,在 KActivities-based 最新的模块中有更好的表现 + - 支持自定义菜单布局 (kmenuedit)和应用程序菜单(Kicker)支持菜单项目分隔 + - 文件夹视图已经优化在面板中的模式 [blog](https://blogs.kde.org/2015/06/04/folder-view-panel-popups-are-list-views-again) + - 在桌面容器中删除一个文件夹现在可在文件夹视图中恢复 + + [Full Plasma 5.4 changelog](https://www.kde.org/announcements/plasma-5.3.2-5.4.0-changelog.php) + + ###Live 镜像 + 偿鲜的最简单的方式就是从 U 盘中启动,可以在 KDE 社区 Wiki 中找到 各种 [ Live Images with Plasma 5](https://community.kde.org/Plasma/LiveImages) + + ###下载软件包 + 各发行版已经构建了软件包,或者正在构建,wiki 中的列出了各发行版的软件包名:[Package download wiki page](https://community.kde.org/Plasma/Packages) + + ###源码下载 + 可以直接从源码中安装 Plasma 5。KDE 社区 Wiki 已经介绍了怎样编译[instructions to compile](http://community.kde.org/Frameworks/Building)。 +注意,Plasma 5 与 Plasma 4 不兼容,必须先卸载旧版本,或者安装到不同的前缀处。 + - [Source Info Page](https://www.kde.org/info/plasma-5.4.0.php) + + --- + via:https://www.kde.org/announcements/plasma-5.4.0.php + 
译者:[Locez](http://locez.com) \ No newline at end of file From 777337a9ff87a9322c11ca57dd89ec7b636784bf Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 26 Aug 2015 22:32:52 +0800 Subject: [PATCH 322/697] PUB:kde-plasma-5.4 @Locez --- published/kde-plasma-5.4.md | 109 +++++++++++++++++++++++++++++++++++ translated/kde-plasma-5.4.md | 77 ------------------------- 2 files changed, 109 insertions(+), 77 deletions(-) create mode 100644 published/kde-plasma-5.4.md delete mode 100644 translated/kde-plasma-5.4.md diff --git a/published/kde-plasma-5.4.md b/published/kde-plasma-5.4.md new file mode 100644 index 0000000000..6d5b77bfd0 --- /dev/null +++ b/published/kde-plasma-5.4.md @@ -0,0 +1,109 @@ +KDE Plasma 5.4.0 发布,八月特色版 +============================= + +![Plasma 5.4](https://www.kde.org/announcements/plasma-5.4/plasma-screen-desktop-2-shadow.png) + +2015 年 8 月 25 ,星期二,KDE 发布了 Plasma 5 的一个特色新版本。 + +此版本为我们带来了许多非常棒的感受,如优化了对高分辨率的支持,KRunner 自动补全和一些新的 Breeze 漂亮图标。这还为不久以后的技术预览版的 Wayland 桌面奠定了基础。我们还带来了几个新组件,如声音音量部件,显示器校准工具和测试版的用户管理工具。 + +###新的音频音量程序 + +![The new Audio Volume Applet](https://www.kde.org/announcements/plasma-5.4/plasma-screen-audiovolume-shadows.png) + +新的音频音量程序直接工作于 PulseAudio (Linux 上一个非常流行的音频服务) 之上 ,并且在一个漂亮的简约的界面提供一个完整的音量控制和输出设定。 + +###替代的应用控制面板起动器 + +![he new Dashboard alternative launcher](https://www.kde.org/announcements/plasma-5.4/plasma-screen-dashboard-2-shadow.png) + +Plasma 5.4 在 kdeplasma-addons 软件包中提供了一个全新的全屏的应用控制面板,它具有应用菜单的所有功能,还支持缩放和全空间键盘导航。新的起动器可以像你目前所用的“最近使用的”或“收藏的文档和联系人”一样简单和快速地查找应用。 + +###丰富的艺术图标 + +![Just some of the new icons in this release](https://kver.files.wordpress.com/2015/07/image10430.png) + +Plasma 5.4 提供了超过 1400 个的新图标,其中不仅包含 KDE 程序的,而且还为 Inkscape, Firefox 和 Libreoffice 提供 Breeze 主题的艺术图标,可以体验到更加一致和本地化的感觉。 + +###KRunner 历史记录 + +![KRunner](https://www.kde.org/announcements/plasma-5.4/plasma-screen-krunner-shadow.png) + +KRunner 现在可以记住之前的搜索历史并根据历史记录进行自动补全。 + +###Network 程序中实用的图形展示 + +![Network 
Graphs](https://www.kde.org/announcements/plasma-5.4/plasma-screen-nm-graph-shadow.png) + +Network 程序现在可以以图形形式显示网络流量了,同时也支持两个新的 VPN 插件:通过 SSH 连接或通过 SSTP 连接。 + +###Wayland 技术预览 + +随着 Plasma 5.4 ,Wayland 桌面发布了第一个技术预览版。在使用自由图形驱动(free graphics drivers)的系统上可以使用 KWin(Plasma 的 Wayland 合成器和 X11 窗口管理器)通过[内核模式设定][1]来运行 Plasma。现在已经支持的功能需求来自于[手机 Plasma 项目][2],更多的面向桌面的功能还未被完全实现。现在还不能作为替换那些基于 Xorg 的桌面,但可以轻松地对它测试和贡献,以及观看令人激动视频。有关如何在 Wayland 中使用 Plasma 的介绍请到:[KWin 维基页][3]。Wlayland 将随着我们构建的稳定版本而逐步得到改进。 + +###其他的改变和添加 + + - 优化对高 DPI 支持 + - 更少的内存占用 + - 桌面搜索使用了更快的新后端 + - 便笺增加拖拉支持和键盘导航 + - 回收站重新支持拖拉 + - 系统托盘获得更快的可配置性 + - 文档重新修订和更新 + - 优化了窄小面板上的数字时钟的布局 + - 数字时钟支持 ISO 日期 + - 切换数字时钟 12/24 格式更简单 + - 日历显示第几周 + - 任何项目都可以收藏进应用菜单(Kicker),支持收藏文档和 Telepathy 联系人 + - Telepathy 联系人收藏可以展示联系人的照片和实时状态 + - 优化程序与容器间的焦点和激活处理 + - 文件夹视图中各种小修复:更好的默认尺寸,鼠标交互问题以及文本标签换行 + - 任务管理器更好的呈现起动器默认的应用图标 + - 可再次通过将程序拖入任务管理器来添加启动器 + - 可配置中间键点击在任务管理器中的行为:无动作,关闭窗口,启动一个相同的程序的新实例 + - 任务管理器现在以列排序优先,无论用户是否更倾向于行优先;许多用户更喜欢这样排序是因为它会使更少的任务按钮像窗口一样移来移去 + - 优化任务管理器的图标和缩放边 + - 任务管理器中各种小修复:垂直下拉,触摸事件处理现在支持所有系统,组扩展箭头的视觉问题 + - 提供可用的目的框架技术预览版,可以使用 QuickShare Plasmoid,它可以让许多 web 服务分享文件更容易 + - 增加了显示器配置工具 + - 增加的 kwallet-pam 可以在登录时打开 wallet + - 用户管理器现在会同步联系人到 KConfig 的设置中,用户账户模块被丢弃了 + - 应用程序菜单(Kicker)的性能得到改善 + - 应用程序菜单(Kicker)各种小修复:隐藏/显示程序更加可靠,顶部面板的对齐修复,文件夹视图中 “添加到桌面”更加可靠,在基于 KActivities 的最新模块中有更好的表现 + - 支持自定义菜单布局 (kmenuedit)和应用程序菜单(Kicker)支持菜单项目分隔 + - 当在面板中时,改进了文件夹视图,参见 [blog][4] + - 将文件夹拖放到桌面容器现在会再次创建一个文件夹视图 + +[完整的 Plasma 5.4 变更日志在此](https://www.kde.org/announcements/plasma-5.3.2-5.4.0-changelog.php) + +###Live 镜像 + +尝鲜的最简单的方式就是从 U 盘中启动,可以在 KDE 社区 Wiki 中找到 各种 [带有 Plasma 5 的 Live 镜像][5]。 + +###下载软件包 + +各发行版已经构建了软件包,或者正在构建,wiki 中的列出了各发行版的软件包名:[软件包下载维基页][6]。 + +###源码下载 + +可以直接从源码中安装 Plasma 5。KDE 社区 Wiki 已经介绍了[怎样编译][7]。 + +注意,Plasma 5 与 Plasma 4 不兼容,必须先卸载旧版本,或者安装到不同的前缀处。 + + +- [源代码信息页][8] + +--- + +via: https://www.kde.org/announcements/plasma-5.4.0.php + +译者:[Locez](http://locez.com) 校对:[wxy](http://github.com/wxy) + 
+[1]:https://en.wikipedia.org/wiki/Direct_Rendering_Manager +[2]:https://dot.kde.org/2015/07/25/plasma-mobile-free-mobile-platform +[3]:https://community.kde.org/KWin/Wayland#Start_a_Plasma_session_on_Wayland +[4]:https://blogs.kde.org/2015/06/04/folder-view-panel-popups-are-list-views-again +[5]:https://community.kde.org/Plasma/LiveImages +[6]:https://community.kde.org/Plasma/Packages +[7]:http://community.kde.org/Frameworks/Building +[8]:https://www.kde.org/info/plasma-5.4.0.php \ No newline at end of file diff --git a/translated/kde-plasma-5.4.md b/translated/kde-plasma-5.4.md deleted file mode 100644 index 4c6e66bd69..0000000000 --- a/translated/kde-plasma-5.4.md +++ /dev/null @@ -1,77 +0,0 @@ -#KDE 发行 Plasma 5.4.0 , 8 月版本特性 - -![Plasma 5.4](https://www.kde.org/announcements/plasma-5.4/plasma-screen-desktop-2-shadow.png) -2015 年 8 月 25 ,星期二,KDE发布了 Plasma 5 的一个新版本。 -此版本为我们带来了许多非常棒的触感,如优化了对高 DPI 的支持,KRunner 自动补全和一些新的美观的 Breeze 图标。这还为不久以后的 Wayland 桌面奠定了基础。我们还发行了几个新组件,如声音部件,显示器校准工具和测试版的用户管理工具。 - -###新的音频音量程序 -![The new Audio Volume Applet](https://www.kde.org/announcements/plasma-5.4/plasma-screen-audiovolume-shadows-wee.png) -新的音频音量程序直接与 PulseAudio (Linux 上一个非常流行的音频服务) 共同提供服务,并且在一个漂亮的简约的界面提供一个完全的音量控制和输出设定。 - -###应用控制面板起动器 -![he new Dashboard alternative launcher](https://www.kde.org/announcements/plasma-5.4/plasma-screen-dashboard-2-shadow-wee.png) -Plasma 5.4 在 kdeplasma-addons 带来了一个全新的全屏应用控制面板,它具有应用菜单的所有功能还支持缩放和全空间键盘导航。新的起动器可以像你目前所用的最近使用的或收藏的文档和联系人一样简单和快速地查找应用。 - -###丰富的艺术图标 -![Just some of the new icons in this release](https://kver.files.wordpress.com/2015/07/image10430.png) -Plasma 5.4 加入了 1400 多个新图标,其中不仅包含 KDE 程序的,而且还为 Inkscape, Firefox 和 Libreoffice 提供 Breeze 主题的艺术图标,可以体验到更加集成和本地化的感觉。 - -###KRunner 历史记录 -![KRunner](https://www.kde.org/announcements/plasma-5.4/plasma-screen-krunner-shadow-wee.png) -KRunner 现在可以记住之前的搜索历史并根据历史记录进行自动补全。 - -###Network 程序中实用的图形展示 -![Network 
Graphs](https://www.kde.org/announcements/plasma-5.4/plasma-screen-nm-graph-shadow-wee.png) -Network 程序现在可以以图形显示网络流量了,这也支持两个新的 VPN 插件,甚至是 SSH 或 SSTP。 - -###Wayland 技术预览 -Plasma 5.4 ,这个第一个 Wayland 桌面技术预览版已经发布了。在使用开源驱动的系统上可以使用 KWin,Plasma 的 Wayland 复合器, X11 窗口管理器和通过内核设定来运行 Plasma。现在已经支持的功能需求来自于[手机 Plasma 项目](https://dot.kde.org/2015/07/25/plasma-mobile-free-mobile-platform),更多的桌面原生功能还未被完全实现。现在还不能作为替换 Xorg 的基础桌面来使用,但可以轻松地测试它,贡献和观看感人的免费视频。有关如何在 Wayland 中使用 Plasma 的介绍请到:[KWin wiki pages](https://community.kde.org/KWin/Wayland#Start_a_Plasma_session_on_Wayland)。Wlayland 将会在我们构建一个稳定的版本的目标中得到改进。 - -###其他的改变和添加 - - 优化对高 DPI 支持 - - 更少的内存占用 - - 桌面搜索更新和更快的后端处理 - - 便笺添加拖拉和支持键盘导航 - - 回收站重新支持拖拉 - - 系统托盘获得更快的可配置性 - - 文档重新修订和更新 - - 优化数字时钟的布局 - - 数字时钟支持 ISO 日期 - - 更简单的方式切换数字时钟 12/24 格式 - - 日历显示第几周 - - 任何项目都可以收藏进应用菜单,支持收藏文档和 Telepathy 联系人 - - Telepathy 联系人收藏可以展示联系人的照片和实时状态 - - 优化程序与容器间的焦点和激活处理 - - 文件夹视图中各种小修复:更好的默认大喜哦,鼠标交互问题以及文本标签换行 - - 任务管理器更好的呈现起动器默认的应用图标 - - 可再次通过在任务管理器删除程序来添加起动器 - - 可配置中间点击在任务管理器中的行为:无动作,关闭窗口,起动一个相同的程序 - - 任务管理器现在以列排序优先无论用户是否更倾向于行优先;许多用户更喜欢这样排序是因为它会使更少的任务按钮像窗口一样移来移去 - - 优化任务管理器的图标和页边标度 - - 任务管理器中各种小修复:垂直下拉,触摸事件处理,组扩展箭头视觉问题 - - 提供可用的目的框架技术预览版,可以使用 QuickShare Plasmoid 在许多 web 服务分享文件 - - 添加 显示器配置工具 - - 添加 kwallet-pam 来在登陆时打开 wallet - - 用户管理器现在已经接入到设置中,并且用户账户模块被舍弃 - - 应用程序菜单(Kicker)的性能得到改善 - - 应用程序菜单(Kicker)各种小修复:可用 隐藏/显示 程序更加可靠,顶部面板对齐修复,文件夹视图中 “添加到桌面”更加可靠,在 KActivities-based 最新的模块中有更好的表现 - - 支持自定义菜单布局 (kmenuedit)和应用程序菜单(Kicker)支持菜单项目分隔 - - 文件夹视图已经优化在面板中的模式 [blog](https://blogs.kde.org/2015/06/04/folder-view-panel-popups-are-list-views-again) - - 在桌面容器中删除一个文件夹现在可在文件夹视图中恢复 - - [Full Plasma 5.4 changelog](https://www.kde.org/announcements/plasma-5.3.2-5.4.0-changelog.php) - - ###Live 镜像 - 偿鲜的最简单的方式就是从 U 盘中启动,可以在 KDE 社区 Wiki 中找到 各种 [ Live Images with Plasma 5](https://community.kde.org/Plasma/LiveImages) - - ###下载软件包 - 各发行版已经构建了软件包,或者正在构建,wiki 中的列出了各发行版的软件包名:[Package download wiki page](https://community.kde.org/Plasma/Packages) - - ###源码下载 - 可以直接从源码中安装 Plasma 5。KDE 社区 
Wiki 已经介绍了怎样编译[instructions to compile](http://community.kde.org/Frameworks/Building)。 -注意,Plasma 5 与 Plasma 4 不兼容,必须先卸载旧版本,或者安装到不同的前缀处。 - - [Source Info Page](https://www.kde.org/info/plasma-5.4.0.php) - - --- - via:https://www.kde.org/announcements/plasma-5.4.0.php - 译者:[Locez](http://locez.com) \ No newline at end of file From e97706ebbf6275e19c35e8ea6e75333b95187f43 Mon Sep 17 00:00:00 2001 From: Xuanwo Date: Thu, 27 Aug 2015 01:21:06 +0800 Subject: [PATCH 323/697] update --- ... 'sed' Command to Create Edit and Manipulate files in Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md (100%) diff --git a/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md similarity index 100% rename from sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md rename to translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md From bea736fc794a2f7986cf6aa969e00adb7745dd65 Mon Sep 17 00:00:00 2001 From: Xuanwo Date: Thu, 27 Aug 2015 02:08:59 +0800 Subject: [PATCH 324/697] woking hard >.< --- ...eate Edit and Manipulate files in Linux.md | 32 +++++++++---------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md index 083078fa62..80f7aa6339 100644 --- a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md +++ b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' 
Command to Create Edit and Manipulate files in Linux.md @@ -1,33 +1,33 @@ Translating by Xuanwo Part 1 - LFCS: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux +LFCS系列第一讲:如何在Linux上使用GNU'sed'命令来创建、编辑和操作文件 ================================================================================ -The Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, a new program that aims at helping individuals all over the world to get certified in basic to intermediate system administration tasks for Linux systems. This includes supporting running systems and services, along with first-hand troubleshooting and analysis, and smart decision-making to escalate issues to engineering teams. +Linux基金会宣布了一个全新的LFCS(Linux Foundation Certified Sysadmin,Linux基金会认证系统管理员)认证计划。这一计划旨在帮助遍布全世界的人们获得其在处理Linux系统管理任务上能力的认证。这些能力包括支持运行的系统服务,以及第一手的故障诊断和分析和为工程师团队在升级时提供智能决策。 ![Linux Foundation Certified Sysadmin](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-1.png) -Linux Foundation Certified Sysadmin – Part 1 +Linux基金会认证系统管理员——第一讲 -Please watch the following video that demonstrates about The Linux Foundation Certification Program. 
+请观看下面关于Linux基金会认证计划的演示: 注:youtube 视频 -The series will be titled Preparation for the LFCS (Linux Foundation Certified Sysadmin) Parts 1 through 10 and cover the following topics for Ubuntu, CentOS, and openSUSE: +该系列将命名为《LFCS预备第一讲》至《LFCS预备第十讲》并覆盖关于Ubuntu,CentOS以及openSUSE的下列话题。 -- Part 1: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux -- Part 2: How to Install and Use vi/m as a full Text Editor -- Part 3: Archiving Files/Directories and Finding Files on the Filesystem -- Part 4: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition -- Part 5: Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux -- Part 6: Assembling Partitions as RAID Devices – Creating & Managing System Backups -- Part 7: Managing System Startup Process and Services (SysVinit, Systemd and Upstart -- Part 8: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts -- Part 9: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper -- Part 10: Learning Basic Shell Scripting and Filesystem Troubleshooting +- 第一讲:如何在Linux上使用GNU'sed'命令来创建、编辑和操作文件 +- 第二讲:如何安装和使用vi/m全功能文字编辑器 +- 第三讲:归档文件/目录和在文件系统中寻找文件 +- 第四讲:为存储设备分区,格式化文件系统和配置交换分区 +- 第五讲:在Linux中挂载/卸载本地和网络(Samba & NFS)文件系统 +- 第六讲:组合分区作为RAID设备——创建&管理系统备份 +- 第七讲:管理系统启动进程和服务(使用SysVinit, Systemd 和 Upstart) +- 第八讲:管理用户和组,文件权限和属性以及启用账户的sudo权限 +- 第九讲:Linux包管理与Yum,RPM,Apt,Dpkg,Aptitude,Zypper +- 第十讲:学习简单的Shell脚本和文件系统故障排除 - -This post is Part 1 of a 10-tutorial series, which will cover the necessary domains and competencies that are required for the LFCS certification exam. That being said, fire up your terminal, and let’s start. +本文是覆盖这个参加LFCS认证考试的所必需的范围和能力的十个教程的第一讲。话虽如此,快打开你的终端,让我们开始吧! 
### Processing Text Streams in Linux ### From ffdee792a740a45fbbcb9e5172a4b97ca87dc8e4 Mon Sep 17 00:00:00 2001 From: Xuanwo Date: Thu, 27 Aug 2015 02:42:24 +0800 Subject: [PATCH 325/697] have a break: --- ...eate Edit and Manipulate files in Linux.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md index 80f7aa6339..21579f0ed9 100644 --- a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md +++ b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md @@ -29,34 +29,34 @@ Linux基金会认证系统管理员——第一讲 本文是覆盖这个参加LFCS认证考试的所必需的范围和能力的十个教程的第一讲。话虽如此,快打开你的终端,让我们开始吧! -### Processing Text Streams in Linux ### +### 处理Linux中的文本流 ### -Linux treats the input to and the output from programs as streams (or sequences) of characters. To begin understanding redirection and pipes, we must first understand the three most important types of I/O (Input and Output) streams, which are in fact special files (by convention in UNIX and Linux, data streams and peripherals, or device files, are also treated as ordinary files). +Linux将程序中的输入和输出当成字符流或者字符序列。在开始理解重定向和管道之前,我们必须先了解三种最重要的I/O(Input and Output,输入和输出)流,事实上,它们都是特殊的文件(根据UNIX和Linux中的约定,数据流和外围设备或者设备文件也被视为普通文件)。 -The difference between > (redirection operator) and | (pipeline operator) is that while the first connects a command with a file, the latter connects the output of a command with another command. +> (重定向操作符) 和 | (管道操作符)之间的区别是:前者将命令与文件相连接,而后者将命令的输出和另一个命令相连接。 # command > file # command1 | command2 -Since the redirection operator creates or overwrites files silently, we must use it with extreme caution, and never mistake it with a pipeline. 
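下面是一个可以直接运行的小示例(纯属示意,临时目录和 poem.txt 文件名都是假设的),用来体会重定向与管道的差别:

```shell
# 示意性示例:体会 > (重定向)与 | (管道)的差别。
# 在临时目录中进行,poem.txt 是假设的文件名,不会动到真实数据。
tmpdir=$(mktemp -d)

# 重定向:把命令的输出写入(或静默覆盖!)一个文件
printf 'a happy child\n' > "$tmpdir/poem.txt"

# 管道:第一个命令的输出不经过中间文件,直接被第二个命令读取
piped=$(cat "$tmpdir/poem.txt" | tr 'a-z' 'A-Z')

echo "$piped"
rm -r "$tmpdir"
```

运行后变量 piped 中保存的是大写后的诗句,而磁盘上只留下过重定向产生的那一个文件。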
One advantage of pipes on Linux and UNIX systems is that there is no intermediate file involved with a pipe – the stdout of the first command is not written to a file and then read by the second command. +由于重定向操作符静默创建或覆盖文件,我们必须特别小心谨慎地使用它,并且永远不要把它和管道混淆起来。在Linux和UNIX系统上管道的优势是:第一个命令的输出不会写入一个文件而是直接被第二个命令读取。 -For the following practice exercises we will use the poem “A happy child” (anonymous author). +在下面的操作练习中,我们将会使用这首诗——《A happy child》(匿名作者) ![cat command](http://www.tecmint.com/wp-content/uploads/2014/10/cat-command.png) cat command example -#### Using sed #### +#### 使用 sed #### -The name sed is short for stream editor. For those unfamiliar with the term, a stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline). +sed是流编辑器(stream editor)的缩写。为那些不懂术语的人额外解释一下,流编辑器是用来在一个输入流(文件或者管道中的输入)执行基本的文本转换的工具。 -The most basic (and popular) usage of sed is the substitution of characters. We will begin by changing every occurrence of the lowercase y to UPPERCASE Y and redirecting the output to ahappychild2.txt. The g flag indicates that sed should perform the substitution for all instances of term on every line of file. If this flag is omitted, sed will replace only the first occurrence of term on each line. +sed最基本的用法是字符替换。我们将通过把每个出现的小写y改写为大写Y并且将输出重定向到ahappychild2.txt开始。g标志表示sed应该替换文件每一行中所有应当替换的实例。如果这个标志省略了,sed将会只替换每一行中第一次出现的实例。 -**Basic syntax:** +**基本语法:** # sed ‘s/term/replacement/flag’ file -**Our example:** +**我们的样例:** # sed ‘s/y/Y/g’ ahappychild.txt > ahappychild2.txt @@ -64,9 +64,9 @@ The most basic (and popular) usage of sed is the substitution of characters. We sed command example -Should you want to search for or replace a special character (such as /, \, &) you need to escape it, in the term or replacement strings, with a backward slash. +如果你要在替换文本中搜索或者替换特殊字符(如/,\,&),你需要使用反斜杠对它进行转义。 -For example, we will substitute the word and for an ampersand. 
At the same time, we will replace the word I with You when the first one is found at the beginning of a line.
+例如,我们将会用一个 & 符号(ampersand)来替换单词 and。与此同时,我们将把位于一行开头的第一个 I 替换为 You。
 
     # sed 's/and/\&/g;s/^I/You/g' ahappychild.txt
 
@@ -74,9 +74,9 @@ For example, we will substitute the word and for an ampersand. At the same time,
 
 sed replace string
 
-In the above command, a ^ (caret sign) is a well-known regular expression that is used to represent the beginning of a line.
+在上面的命令中,^(插入符号)是一个众所周知的、用来表示一行开头的正则表达式。
 
-As you can see, we can combine two or more substitution commands (and use regular expressions inside them) by separating them with a semicolon and enclosing the set inside single quotes.
+正如你所看到的,我们可以用分号分隔、并用单引号把整组命令包裹起来,从而把两个或者更多的替换命令(并在它们中使用正则表达式)组合起来。
 
 Another use of sed is showing (or deleting) a chosen portion of a file. In the following example, we will display the first 5 lines of /var/log/messages from Jun 8.
 

From c8652ceacacb5784660136aece5fc8e0c137a394 Mon Sep 17 00:00:00 2001
From: GOLinux
Date: Thu, 27 Aug 2015 08:52:38 +0800
Subject: [PATCH 326/697] [Translated]20150824 Mhddfs--Combine Several Smaller
 Partition into One Large Virtual Storage.md

---
 ...artition into One Large Virtual Storage.md | 187 ------------------
 ...artition into One Large Virtual Storage.md | 183 +++++++++++++++++
 2 files changed, 183 insertions(+), 187 deletions(-)
 delete mode 100644 sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md
 create mode 100644 translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md

diff --git a/sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md b/sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md
deleted file mode 100644
index ebf3a9c4fd..0000000000
--- a/sources/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md
+++ /dev/null
@@ -1,187 +0,0 @@
-Translating by 
GOLinux! -Mhddfs – Combine Several Smaller Partition into One Large Virtual Storage -================================================================================ -Let’s assume that you have 30GB of movies and you have 3 drives each 20 GB in size. So how will you store? - -Obviously you can split your videos in two or three different volumes and store them on the drive manually. This certainly is not a good idea, it is an exhaustive work which requires manual intervention and a lots of your time. - -Another solution is to create a [RAID array of disk][1]. The RAID has always remained notorious for loss of storage reliability and usable disk space. Another solution is mhddfs. - -![Combine Multiple Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Combine-Multiple-Partitions-in-Linux.png) - -Mhddfs – Combine Multiple Partitions in Linux - -mhddfs is a driver for Linux that combines several mount points into one virtual disk. It is a fuse based driver, which provides a easy solution for large data storage. It combines all small file systems to create a single big virtual filesystem which contains every particle of its member filesystem including files and free spaces. - -#### Why you need Mhddfs? #### - -All your storage devices creates a single virtual pool and it can be mounted right at the boot. This small utility takes care of, which drive is full and which is empty and to write data to what drive, intelligently. Once you create virtual drives successfully, you can share your virtual filesystem using [SAMBA][2]. Your client will always see a huge drive and lots of free space. - -#### Features of Mhddfs #### - -- Get attributes of the file system and system information. -- Set attributes of the file system. -- Create, Read, Remove and write Directories and files. -- Support for file locks and Hardlinks on single device. - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-| Pros of mhddfs                                    | Cons of mhddfs                                   |
-|---------------------------------------------------|--------------------------------------------------|
-| Perfect for home users.                           | mhddfs driver is not built in the Linux Kernel   |
-| Simple to run.                                    | Required lots of processing power during runtime |
-| No evidence of Data loss                          | No redundancy solution.                          |
-| Do not split the file.                            | Hardlinks moving not supported                   |
-| Add new files to the combined virtual filesystem. |                                                  |
-| Manage the location where these files are saved.  |                                                  |
-| Extended file attributes                          |                                                  |
- -### Installation of Mhddfs in Linux ### - -On Debian and portable to alike systems, you can install mhddfs package using following command. - - # apt-get update && apt-get install mhddfs - -![Install Mhddfs on Debian based Systems](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Ubuntu.png) - -Install Mhddfs on Debian based Systems - -On RHEL/CentOS Linux systems, you need to turn on [epel-repository][3] and then execute the below command to install mhddfs package. - - # yum install mhddfs - -On Fedora 22+ systems, you may get it by dnf package manger as shown below. - - # dnf install mhddfs - -![Install Mhddfs on Fedora](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Fedora.png) - -Install Mhddfs on Fedora - -If incase, mhddfs package isn’t available from epel repository, then you need to resolve following dependencies to install and compile it from source as shown below. - -- FUSE header files -- GCC -- libc6 header files -- uthash header files -- libattr1 header files (optional) - -Next, download the latest source package simply as suggested below and compile it. - - # wget http://mhddfs.uvw.ru/downloads/mhddfs_0.1.39.tar.gz - # tar -zxvf mhddfs*.tar.gz - # cd mhddfs-0.1.39/ - # make - -You should be able to see binary mhddfs in the current directory. Move it to /usr/bin/ and /usr/local/bin/ as root. - - # cp mhddfs /usr/bin/ - # cp mhddfs /usr/local/bin/ - -All set, mhddfs is ready to be used. - -### How do I use Mhddfs? ### - -1. Lets see all the HDD mounted to my system currently. - - $ df -h - -![Check Mounted Devices](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mounted-Devices.gif) - -**Sample Output** - - Filesystem Size Used Avail Use% Mounted on - - /dev/sda1 511M 132K 511M 1% /boot/efi - /dev/sda2 451G 92G 336G 22% / - /dev/sdb1 1.9T 161G 1.7T 9% /media/avi/BD9B-5FCE - /dev/sdc1 555M 555M 0 100% /media/avi/Debian 8.1.0 M-A 1 - -Notice the ‘Mount Point‘ name here, which we will be using later. - -2. 
Create a directory `/mnt/virtual_hdd` where all these all file system will be grouped together as, - - # mkdir /mnt/virtual_hdd - -3. And then mount all the file-systems. Either as root or as a user who is a member of FUSE group. - - # mhddfs /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd -o allow_other - -![Mount All File System in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Mount-All-File-System-in-Linux.png) - -Mount All File System in Linux - -**Note**: We are used mount Point names here of all the HDDs. Obviously the mount point in your case will be different. Also notice “-o allow_other” option makes this Virtual file system visible to all others and not only the person who created it. - -4. Now run “df -h” see all the filesystems. It should contain the one you created just now. - - $ df -h - -![Verify Virtual File System Mount](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Virtual-File-System.png) - -Verify Virtual File System Mount - -You can perform all the option to the Virtual File System you created as you would have done to a Mounted Drive. - -5. To create this Virtual File system on every system boot, you should add the below line of code (in your case it should be different, depending upon your mount point), at the end of /etc/fstab file as root. - - mhddfs# /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd fuse defaults,allow_other 0 0 - -6. If at any point of time you want to add/remove a new drive to Virtual_hdd, you may mount a new drive, copy the contents of mount point /mnt/virtual_hdd, un-mount the volume, Eject the Drive you want to remove and/or mount the new drive you want to include, Mount the overall filesystem under Virtual_hdd using mhddfs command and you should be done. - -#### How do I Un-Mount Virtual_hdd? 
#### - -Unmounting virtual_hdd is as easy as, - - # umount /mnt/virtual_hdd - -![Unmount Virtual Filesystem](http://www.tecmint.com/wp-content/uploads/2015/08/Unmount-Virtual-Filesystem.png) - -Unmount Virtual Filesystem - -Notice it is umount and not unmount. A lot of user type it wrong. - -That’s all for now. I am working on another post you people will love to read. Till then stay tuned and connected to Tecmint. Provide us with your valuable feedback in the comments below. Like and share us and help us get spread. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/combine-partitions-into-one-in-linux-using-mhddfs/ - -作者:[Avishek Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/mount-filesystem-in-linux/ -[3]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ diff --git a/translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md b/translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md new file mode 100644 index 0000000000..04d9f18eb9 --- /dev/null +++ b/translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md @@ -0,0 +1,183 @@ +Mhddfs——将多个小分区合并成一个大的虚拟存储 +================================================================================ + +让我们假定你有30GB的电影,并且你有3个驱动器,每个的大小为20GB。那么,你会怎么来存放东西呢? 
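在继续之前,可以用下面这个纯属示意的脚本(驱动器数量和电影大小都是虚构数字)体会一下手工分配有多么繁琐——而这正是 mhddfs 可以替你自动完成的工作:

```shell
# 纯属示意:用"首次适应"策略把总共 30GB 的电影手工分配到 3 个 20GB 驱动器上。
# 所有数字都是虚构的,仅用来说明手工放置文件的麻烦。
d1=20; d2=20; d3=20
placement=""

for size in 9 8 7 6; do        # 四部电影,共 30GB
    if   [ "$size" -le "$d1" ]; then d1=$((d1 - size)); placement="$placement ${size}GB->盘1"
    elif [ "$size" -le "$d2" ]; then d2=$((d2 - size)); placement="$placement ${size}GB->盘2"
    elif [ "$size" -le "$d3" ]; then d3=$((d3 - size)); placement="$placement ${size}GB->盘3"
    else placement="$placement ${size}GB->放不下"
    fi
done

echo "分配结果:$placement"
echo "各盘剩余:$d1 $d2 $d3"
```

每新增一部电影都要重新盘算一遍各盘剩余空间——而一个合并后的虚拟文件系统可以让你完全不用操心这件事。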
+ +很明显,你可以将你的视频分割成2个或者3个不同的卷,并将它们手工存储到驱动器上。这当然不是一个好主意,它成了一项费力的工作,它需要你手工干预,而且花费你大量时间。 + +另外一个解决方案是创建一个[RAID磁盘阵列][1]。然而,RAID在缺乏存储可靠性,磁盘空间可用性差等方面声名狼藉。另外一个解决方案,就是mhddfs。 + +![Combine Multiple Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Combine-Multiple-Partitions-in-Linux.png) +Mhddfs——在Linux中合并多个分区 + +mhddfs是一个用于Linux的驱动,它可以将多个挂载点合并到一个虚拟磁盘中。它是一个基于FUSE的驱动,提供了一个用于大数据存储的简单解决方案。它将所有小文件系统合并,以创建一个单一的大虚拟文件系统,该文件系统包含其成员文件系统的所有颗粒,包括文件和空闲空间。 + +#### 你为什么需要Mhddfs? #### + +你所有存储设备创建了一个单一的虚拟池,它可以在启动时被挂载。这个小工具可以智能地照看并处理哪个驱动器满了,哪个驱动器空着,将数据写到哪个驱动器中。当你成功创建虚拟驱动器后,你可以使用[SAMBA][2]来共享你的虚拟文件系统。你的客户端将在任何时候都看到一个巨大的驱动器和大量的空闲空间。 + +#### Mhddfs特性 #### + +- 获取文件系统属性和系统信息。 +- 设置文件系统属性。 +- 创建、读取、移除和写入目录和文件。 +- 支持文件锁和单一设备上的硬链接。 + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| mhddfs的优点                 | mhddfs的缺点                  |
+|------------------------------|-------------------------------|
+| 适合家庭用户                 | mhddfs驱动没有内建在Linux内核中 |
+| 运行简单                     | 运行时需要大量处理能力        |
+| 没有明显的数据丢失           | 没有冗余解决方案              |
+| 不分割文件                   | 不支持移动硬链接              |
+| 添加新文件到合并的虚拟文件系统 |                               |
+| 管理文件保存的位置           |                               |
+| 扩展文件属性                 |                               |
+ +### Linux中安装Mhddfs ### + +在Debian及其类似的移植系统中,你可以使用下面的命令来安装mhddfs包。 + + # apt-get update && apt-get install mhddfs + +![Install Mhddfs on Debian based Systems](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Ubuntu.png) +安装Mhddfs到基于Debian的系统中 + +在RHEL/CentOS Linux系统中,你需要开启[epel仓库][3],然后执行下面的命令来安装mhddfs包。 + + # yum install mhddfs + +在Fedora 22及以上系统中,你可以通过dnf包管理来获得它,就像下面这样。 + + # dnf install mhddfs + +![Install Mhddfs on Fedora](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Fedora.png) +安装Mhddfs到Fedora + +如果万一mhddfs包不能从epel仓库获取到,那么你需要解决下面的依赖,然后像下面这样来编译源码并安装。 + +- FUSE头文件 +- GCC +- libc6头文件 +- uthash头文件 +- libattr1头文件(可选) + +接下来,只需从下面建议的地址下载最新的源码包,然后编译。 + + # wget http://mhddfs.uvw.ru/downloads/mhddfs_0.1.39.tar.gz + # tar -zxvf mhddfs*.tar.gz + # cd mhddfs-0.1.39/ + # make + +你应该可以在当前目录中看到mhddfs的二进制文件,以root身份将它移动到/usr/bin/和/usr/local/bin/中。 + + # cp mhddfs /usr/bin/ + # cp mhddfs /usr/local/bin/ + +一切搞定,mhddfs已经可以用了。 + +### 我怎么使用Mhddfs? ### + +1.让我们看看当前所有挂载到我们系统中的硬盘。 + + + $ df -h + +![Check Mounted Devices](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mounted-Devices.gif) +**样例输出** + + Filesystem Size Used Avail Use% Mounted on + + /dev/sda1 511M 132K 511M 1% /boot/efi + /dev/sda2 451G 92G 336G 22% / + /dev/sdb1 1.9T 161G 1.7T 9% /media/avi/BD9B-5FCE + /dev/sdc1 555M 555M 0 100% /media/avi/Debian 8.1.0 M-A 1 + +注意这里的‘挂载点’名称,我们后面会使用到它们。 + +2.创建目录‘/mnt/virtual_hdd’,在这里,所有这些文件系统将被组成组。 + + + # mkdir /mnt/virtual_hdd + +3.然后,挂载所有文件系统。你可以通过root或者FUSE组中的某个成员来完成。 + + + # mhddfs /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd -o allow_other + +![Mount All File System in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Mount-All-File-System-in-Linux.png) +在Linux中挂载所有文件系统 + +**注意**:这里我们使用了所有硬盘的挂载点名称,很明显,你的挂载点名称会有所不同。也请注意“-o allow_other”选项可以让这个虚拟文件系统让其它所有人可见,而不仅仅是创建它的人。 + +4.现在,运行“df -h”来看看所有文件系统。它应该包含了你刚才创建的那个。 + + + $ df -h + +![Verify Virtual File System 
Mount](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Virtual-File-System.png)
+验证虚拟文件系统挂载
+
+你可以像操作已挂载的驱动器那样,对你创建的虚拟文件系统执行所有操作。
+
+5.要在每次系统启动时创建这个虚拟文件系统,你应该以root身份添加下面的这行代码(在你那里会有点不同,取决于你的挂载点)到/etc/fstab文件的末尾。
+
+    mhddfs# /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd  fuse defaults,allow_other 0 0
+
+6.如果在任何时候你想要往虚拟硬盘中添加/移除一个驱动器,你可以挂载一个新的驱动器,拷贝/mnt/virtual_hdd的内容,卸载卷,弹出你要移除的驱动器,并/或挂载你要包含的新驱动器。然后使用mhddfs命令把全部文件系统挂载到Virtual_hdd下,这样就全部搞定了。
+#### 我怎么卸载Virtual_hdd? ####
+
+卸载virtual_hdd相当简单,就像下面这样
+
+    # umount /mnt/virtual_hdd
+
+![Unmount Virtual Filesystem](http://www.tecmint.com/wp-content/uploads/2015/08/Unmount-Virtual-Filesystem.png)
+卸载虚拟文件系统
+
+注意,是umount,而不是unmount,很多用户都输错了。
+
+到现在为止全部结束了。我正在写另外一篇你们一定喜欢读的文章。到那时,请继续关注Tecmint。请在下面的评论中给我们提供有用的反馈吧。请为我们点赞并分享,帮助我们扩散。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/combine-partitions-into-one-in-linux-using-mhddfs/
+
+作者:[Avishek Kumar][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/avishek/
+[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
+[2]:http://www.tecmint.com/mount-filesystem-in-linux/
+[3]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/

From 3bf11430e3f94531e8aa8d686244163ffb2cb9f4 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 27 Aug 2015 10:40:18 +0800
Subject: [PATCH 327/697] translating

---
 .../20150826 How to Run Kali Linux 2.0 In Docker Container.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md b/sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md
index 8018ca17e9..15c8d1011e 100644
--- a/sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md
+++ 
b/sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md @@ -1,3 +1,5 @@ +translating---geekpi + How to Run Kali Linux 2.0 In Docker Container ================================================================================ ### Introduction ### @@ -71,4 +73,4 @@ via: http://linuxpitstop.com/run-kali-linux-2-0-in-docker-container/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linuxpitstop.com/author/aun/ -[1]:http://linuxpitstop.com/install-kali-linux-2-0/ \ No newline at end of file +[1]:http://linuxpitstop.com/install-kali-linux-2-0/ From f7dcc0a1b9da2726bd4d6d9e02cbea7b027c5d0c Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 27 Aug 2015 10:59:21 +0800 Subject: [PATCH 328/697] translated --- ... Run Kali Linux 2.0 In Docker Container.md | 76 ------------------- ... Run Kali Linux 2.0 In Docker Container.md | 74 ++++++++++++++++++ 2 files changed, 74 insertions(+), 76 deletions(-) delete mode 100644 sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md create mode 100644 translated/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md diff --git a/sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md b/sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md deleted file mode 100644 index 15c8d1011e..0000000000 --- a/sources/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md +++ /dev/null @@ -1,76 +0,0 @@ -translating---geekpi - -How to Run Kali Linux 2.0 In Docker Container -================================================================================ -### Introduction ### - -Kali Linux is a well known operating system for security testers and ethical hackers. It comes bundled with a large list of security related applications and make it easy to perform penetration testing. Recently, [Kali Linux 2.0][1] is out and it is being considered as one of the most important release for this operating system. 
On the other hand, Docker technology is getting massive popularity due to its scalability and ease of use. Dockers make it super easy to ship your software applications to your users. Breaking news is that you can now run Kali Linux via Dockers; let’s see how :) - -### Running Kali Linux 2.0 In Docker ### - -**Related Notes** - -If you don’t have docker installed on your system, you can do it by using the following commands: - -**For Ubuntu/Linux Mint/Debian:** - - sudo apt-get install docker - -**For Fedora/RHEL/CentOS:** - - sudo yum install docker - -**For Fedora 22:** - - dnf install docker - -You can start docker service by running: - - sudo docker start - -First of all make sure that docker service is running fine by using the following command: - - sudo docker status - -Kali Linux docker image has been uploaded online by Kali Linux development team, simply run following command to download this image to your system. - - docker pull kalilinux/kali-linux-docker - -![Pull Kali Linux docker](http://linuxpitstop.com/wp-content/uploads/2015/08/129.png) - -Once download is complete, run following command to find out the Image ID for your downloaded Kali Linux docker image file. - - docker images - -![Kali Linux Image ID](http://linuxpitstop.com/wp-content/uploads/2015/08/230.png) - -Now run following command to start your kali Linux docker container from image file (Here replace Image ID with correct one). - - docker run -i -t 198cd6df71ab3 /bin/bash - -It will immediately start the container and will log you into the operating system, you can start working on Kali Linux here. - -![Kali Linux Login](http://linuxpitstop.com/wp-content/uploads/2015/08/328.png) - -You can verify that container is started/running fine, by using the following command: - - docker ps - -![Docker Kali](http://linuxpitstop.com/wp-content/uploads/2015/08/421.png) - -### Conclusion ### - -Dockers are the smartest way to deploy and distribute your packages. 
Kali Linux docker image is pretty handy, does not consume any high amount of space on the disk and it is pretty easy to test this wonderful distro on any docker installed operating system now. - --------------------------------------------------------------------------------- - -via: http://linuxpitstop.com/run-kali-linux-2-0-in-docker-container/ - -作者:[Aun][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linuxpitstop.com/author/aun/ -[1]:http://linuxpitstop.com/install-kali-linux-2-0/ diff --git a/translated/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md b/translated/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md new file mode 100644 index 0000000000..5c65ec0286 --- /dev/null +++ b/translated/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md @@ -0,0 +1,74 @@ +如何在Docker容器中运行Kali Linux 2.0 +================================================================================ +### 介绍 ### + +Kali Linux是一个对于安全测试人员和白帽的一个知名的操作系统。它带有大量安全相关的程序,这让它很容易用于渗透测试。最近,[Kali Linux 2.0][1]发布了,并且它被认为是这个操作系统最重要的一次发布。另一方面,Docker技术由于它的可扩展性和易用性让它变得很流行。Dokcer让你非常容易地将你的程序带给你的用户。好消息是你可以通过Docker运行Kali Linux了,让我们看看该怎么做:) + +### 在Docker中运行Kali Linux 2.0 ### + +**相关提示** + +如果你还没有在系统中安装docker,你可以运行下面的命令: + +**对于 Ubuntu/Linux Mint/Debian:** + + sudo apt-get install docker + +**对于 Fedora/RHEL/CentOS:** + + sudo yum install docker + +**对于 Fedora 22:** + + dnf install docker + +你可以运行下面的命令来启动docker: + + sudo docker start + +首先运行下面的命令确保服务正在运行: + + sudo docker status + +Kali Linux的开发团队已将Kali Linux的docker镜像上传了,只需要输入下面的命令来下载镜像。 + + docker pull kalilinux/kali-linux-docker + +![Pull Kali Linux docker](http://linuxpitstop.com/wp-content/uploads/2015/08/129.png) + +下载完成后,运行下面的命令来找出你下载的docker镜像的ID。 + + docker images + +![Kali Linux Image ID](http://linuxpitstop.com/wp-content/uploads/2015/08/230.png) + +现在运行下面的命令来从镜像文件启动kali linux 
docker容器(这里用正确的镜像ID替换)。
+
+    docker run -i -t 198cd6df71ab3 /bin/bash
+
+它会立刻启动容器并且会登录操作系统,你现在可以在Kali Linux中工作了。
+
+![Kali Linux Login](http://linuxpitstop.com/wp-content/uploads/2015/08/328.png)
+
+你可以通过下面的命令来验证容器已经启动/运行中了:
+
+    docker ps
+
+![Docker Kali](http://linuxpitstop.com/wp-content/uploads/2015/08/421.png)
+
+### 总结 ###
+
+Docker是一种最聪明的用来部署和分发包的方式。Kali Linux docker镜像非常容易上手,也不会消耗很大的硬盘空间,这样就可以很容易地在任何安装了docker的操作系统上测试这个很棒的发行版了。
+
+--------------------------------------------------------------------------------
+
+via: http://linuxpitstop.com/run-kali-linux-2-0-in-docker-container/
+
+作者:[Aun][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxpitstop.com/author/aun/
+[1]:http://linuxpitstop.com/install-kali-linux-2-0/

From 223b0022828c66ca36215a5d35aa5c474fcc31fe Mon Sep 17 00:00:00 2001
From: Chang Liu
Date: Thu, 27 Aug 2015 11:53:55 +0800
Subject: [PATCH 329/697] [Translated]RHCSA Series--Part 08--Securing
 SSH,Setting Hostname and Enabling Network Services.md

---
 ... Hostname and Enabling Network Services.md | 217 ------------------
 ...
Hostname and Enabling Network Services.md | 215 +++++++++++++++++ 2 files changed, 215 insertions(+), 217 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md b/sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md deleted file mode 100644 index 40fa771580..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md +++ /dev/null @@ -1,217 +0,0 @@ -FSSlc translating - -RHCSA Series: Securing SSH, Setting Hostname and Enabling Network Services – Part 8 -================================================================================ -As a system administrator you will often have to log on to remote systems to perform a variety of administration tasks using a terminal emulator. You will rarely sit in front of a real (physical) terminal, so you need to set up a way to log on remotely to the machines that you will be asked to manage. - -In fact, that may be the last thing that you will have to do in front of a physical terminal. For security reasons, using Telnet for this purpose is not a good idea, as all traffic goes through the wire in unencrypted, plain text. - -In addition, in this article we will also review how to configure network services to start automatically at boot and learn how to set up network and hostname resolution statically or dynamically. 
- -![RHCSA: Secure SSH and Enable Network Services](http://www.tecmint.com/wp-content/uploads/2015/05/Secure-SSH-Server-and-Enable-Network-Services.png) - -RHCSA: Secure SSH and Enable Network Services – Part 8 - -### Installing and Securing SSH Communication ### - -For you to be able to log on remotely to a RHEL 7 box using SSH, you will have to install the openssh, openssh-clients and openssh-servers packages. The following command not only will install the remote login program, but also the secure file transfer tool, as well as the remote file copy utility: - - # yum update && yum install openssh openssh-clients openssh-servers - -Note that it’s a good idea to install the server counterparts as you may want to use the same machine as both client and server at some point or another. - -After installation, there is a couple of basic things that you need to take into account if you want to secure remote access to your SSH server. The following settings should be present in the `/etc/ssh/sshd_config` file. - -1. Change the port where the sshd daemon will listen on from 22 (the default value) to a high port (2000 or greater), but first make sure the chosen port is not being used. - -For example, let’s suppose you choose port 2500. Use [netstat][1] in order to check whether the chosen port is being used or not: - - # netstat -npltu | grep 2500 - -If netstat does not return anything, you can safely use port 2500 for sshd, and you should change the Port setting in the configuration file as follows: - - Port 2500 - -2. Only allow protocol 2: - -Protocol 2 - -3. Configure the authentication timeout to 2 minutes, do not allow root logins, and restrict to a minimum the list of users which are allowed to login via ssh: - - LoginGraceTime 2m - PermitRootLogin no - AllowUsers gacanepa - -4. 
If possible, use key-based instead of password authentication: - - PasswordAuthentication no - RSAAuthentication yes - PubkeyAuthentication yes - -This assumes that you have already created a key pair with your user name on your client machine and copied it to your server as explained here. - -- [Enable SSH Passwordless Login][2] - -### Configuring Networking and Name Resolution ### - -1. Every system administrator should be well acquainted with the following system-wide configuration files: - -- /etc/hosts is used to resolve names <---> IPs in small networks. - -Every line in the `/etc/hosts` file has the following structure: - - IP address - Hostname - FQDN - -For example, - - 192.168.0.10 laptop laptop.gabrielcanepa.com.ar - -2. `/etc/resolv.conf` specifies the IP addresses of DNS servers and the search domain, which is used for completing a given query name to a fully qualified domain name when no domain suffix is supplied. - -Under normal circumstances, you don’t need to edit this file as it is managed by the system. However, should you want to change DNS servers, be advised that you need to stick to the following structure in each line: - - nameserver - IP address - -For example, - - nameserver 8.8.8.8 - -3. 3. `/etc/host.conf` specifies the methods and the order by which hostnames are resolved within a network. In other words, tells the name resolver which services to use, and in what order. - -Although this file has several options, the most common and basic setup includes a line as follows: - - order bind,hosts - -Which indicates that the resolver should first look in the nameservers specified in `resolv.conf` and then to the `/etc/hosts` file for name resolution. - -4. `/etc/sysconfig/network` contains routing and global host information for all network interfaces. The following values may be used: - - NETWORKING=yes|no - HOSTNAME=value - -Where value should be the Fully Qualified Domain Name (FQDN). 
- - GATEWAY=XXX.XXX.XXX.XXX - -Where XXX.XXX.XXX.XXX is the IP address of the network’s gateway. - - GATEWAYDEV=value - -In a machine with multiple NICs, value is the gateway device, such as enp0s3. - -5. Files inside `/etc/sysconfig/network-scripts` (network adapters configuration files). - -Inside the directory mentioned previously, you will find several plain text files named. - - ifcfg-name - -Where name is the name of the NIC as returned by ip link show: - -![Check Network Link Status](http://www.tecmint.com/wp-content/uploads/2015/05/Check-IP-Address.png) - -Check Network Link Status - -For example: - -![Network Files](http://www.tecmint.com/wp-content/uploads/2015/05/Network-Files.png) - -Network Files - -Other than for the loopback interface, you can expect a similar configuration for your NICs. Note that some variables, if set, will override those present in `/etc/sysconfig/network` for this particular interface. Each line is commented for clarification in this article but in the actual file you should avoid comments: - - HWADDR=08:00:27:4E:59:37 # The MAC address of the NIC - TYPE=Ethernet # Type of connection - BOOTPROTO=static # This indicates that this NIC has been assigned a static IP. If this variable was set to dhcp, the NIC will be assigned an IP address by a DHCP server and thus the next two lines should not be present in that case. - IPADDR=192.168.0.18 - NETMASK=255.255.255.0 - GATEWAY=192.168.0.1 - NM_CONTROLLED=no # Should be added to the Ethernet interface to prevent NetworkManager from changing the file. - NAME=enp0s3 - UUID=14033805-98ef-4049-bc7b-d4bea76ed2eb - ONBOOT=yes # The operating system should bring up this NIC during boot - -### Setting Hostnames ### - -In Red Hat Enterprise Linux 7, the hostnamectl command is used to both query and set the system’s hostname. 
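As a quick aside, the KEY=value settings shown in the ifcfg-* files above are easy to read from a script. The following sketch is only an illustration — the configuration text is inlined instead of reading a real file under /etc/sysconfig/network-scripts/, and all values are made up:

```shell
# Illustrative only: parse KEY=value settings like those found in an
# ifcfg-* file. The sample text below stands in for a real file such as
# /etc/sysconfig/network-scripts/ifcfg-enp0s3 (values are made up).
cfg='TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.0.18
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes'

# Print the value for a given key
get() { printf '%s\n' "$cfg" | awk -F= -v k="$1" '$1 == k { print $2 }'; }

ipaddr=$(get IPADDR)
onboot=$(get ONBOOT)
echo "IP address: $ipaddr (bring up on boot: $onboot)"
```

The same one-liner works against the real files too, once you point it at the right path.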
- -To display the current hostname, type: - - # hostnamectl status - -![Check System hostname in CentOS 7](http://www.tecmint.com/wp-content/uploads/2015/05/Check-System-hostname.png) - -Check System Hostname - -To change the hostname, use - - # hostnamectl set-hostname [new hostname] - -For example, - - # hostnamectl set-hostname cinderella - -For the changes to take effect you will need to restart the hostnamed daemon (that way you will not have to log off and on again in order to apply the change): - - # systemctl restart systemd-hostnamed - -![Set System Hostname in CentOS 7](http://www.tecmint.com/wp-content/uploads/2015/05/Set-System-Hostname.png) - -Set System Hostname - -In addition, RHEL 7 also includes the nmcli utility that can be used for the same purpose. To display the hostname, run: - - # nmcli general hostname - -and to change it: - - # nmcli general hostname [new hostname] - -For example, - - # nmcli general hostname rhel7 - -![Set Hostname Using nmcli Command](http://www.tecmint.com/wp-content/uploads/2015/05/nmcli-command.png) - -Set Hostname Using nmcli Command - -### Starting Network Services on Boot ### - -To wrap up, let us see how we can ensure that network services are started automatically on boot. In simple terms, this is done by creating symlinks to certain files specified in the [Install] section of the service configuration files. 
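What enabling a service on boot does behind the scenes can be sketched with plain symlinks. The snippet below only simulates the layout inside a throwaway directory; the paths mimic, but are not, the real systemd tree:

```shell
# Simulated view of what `systemctl enable` does for a unit whose [Install]
# section says WantedBy=basic.target: it drops a symlink into the target's
# .wants directory. Everything here happens inside a scratch directory.
root=$(mktemp -d)
mkdir -p "$root/usr/lib/systemd/system" "$root/etc/systemd/system/basic.target.wants"
touch "$root/usr/lib/systemd/system/firewalld.service"

# "enable": create the symlink
ln -s "$root/usr/lib/systemd/system/firewalld.service" \
      "$root/etc/systemd/system/basic.target.wants/firewalld.service"
ls "$root/etc/systemd/system/basic.target.wants"    # firewalld.service

# "disable": remove the symlink again
rm "$root/etc/systemd/system/basic.target.wants/firewalld.service"
rm -rf "$root"
```

On a real system you would of course use `systemctl enable`/`disable` rather than managing these links by hand.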
- -In the case of firewalld (/usr/lib/systemd/system/firewalld.service): - - [Install] - WantedBy=basic.target - Alias=dbus-org.fedoraproject.FirewallD1.service - -To enable the service: - - # systemctl enable firewalld - -On the other hand, disabling firewalld entails removing the symlinks: - - # systemctl disable firewalld - -![Enable Service at System Boot](http://www.tecmint.com/wp-content/uploads/2015/05/Enable-Service-at-System-Boot.png) - -Enable Service at System Boot - -### Conclusion ### - -In this article we have summarized how to install and secure connections via SSH to a RHEL server, how to change its name, and finally how to ensure that network services are started on boot. If you notice that a certain service has failed to start properly, you can use systemctl status -l [service] and journalctl -xn to troubleshoot it. - -Feel free to let us know what you think about this article using the comment form below. Questions are also welcome. We look forward to hearing from you!
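As a closing sketch, the SSH hardening checklist covered earlier can be verified mechanically. The helper below only reads an sshd_config-style file and reports each recommendation; the demo runs against a scratch copy, not the real /etc/ssh/sshd_config:

```shell
# Hypothetical post-setup audit: check which of the hardening directives
# discussed in this article are present (as exact lines) in a config file.
audit_sshd() {
    cfg=$1
    for want in "Protocol 2" "PermitRootLogin no" "PasswordAuthentication no" "PubkeyAuthentication yes"; do
        if grep -qxF "$want" "$cfg"; then
            echo "OK $want"
        else
            echo "MISSING $want"
        fi
    done
}

sample=$(mktemp)
printf '%s\n' "Protocol 2" "PermitRootLogin no" "PubkeyAuthentication yes" > "$sample"
audit_sshd "$sample"
rm -f "$sample"
```

Note that this matches whole lines only; a production check would also need to handle leading whitespace and case-insensitive directive names.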
- -------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network-services-in-rhel-7/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/ -[2]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md b/translated/tech/RHCSA/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md new file mode 100644 index 0000000000..82245f33b1 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md @@ -0,0 +1,215 @@ +RHCSA 系列:加固 SSH,设定主机名及开启网络服务 – Part 8 +================================================================================ +作为一名系统管理员,你将经常使用一个终端模拟器来登录到一个远程的系统中,执行一系列的管理任务。你将很少有机会坐在一个真实的(物理)终端前,所以你需要设定好一种方法来使得你可以登录到你被要求去管理的那台远程主机上。 + +事实上,坐在一台物理终端前登录,可能是你登录到该主机的最后手段。基于安全原因,使用 Telnet 来达到以上目的并不是一个好主意,因为穿行在线缆上的流量并没有被加密,而是以明文方式传送的。 + +另外,在这篇文章中,我们也将复习如何配置网络服务来使得它在开机时被自动开启,并学习如何设置网络和静态或动态地解析主机名。 + +![RHCSA: 加固 SSH 和开启网络服务](http://www.tecmint.com/wp-content/uploads/2015/05/Secure-SSH-Server-and-Enable-Network-Services.png) + +RHCSA: 加固 SSH 和开启网络服务 – Part 8 + +### 安装并确保 SSH 通信安全 ### + +对于你来说,要能够使用 SSH 远程登录到一个 RHEL 7 机器,你必须安装 `openssh`,`openssh-clients` 和 `openssh-servers` 软件包。下面的命令不仅将安装远程登录程序,也会安装安全的文件传输工具以及远程文件复制程序: + + # yum update && yum install openssh openssh-clients openssh-servers + +注意,安装上服务器所需的相应软件包是一个不错的主意,因为或许在某个时刻,你想使用同一个机器来作为客户端和服务器。 + +在安装完成后,如若你想安全地访问你的 SSH 服务器,你还需要考虑一些基本的事情。下面的设定应该在文件 `/etc/ssh/sshd_config` 中得以呈现。 + +1.
更改 sshd 守护进程的监听端口,从 22(默认的端口值)改为一个更高的端口值(2000 或更大),但首先要确保所选的端口没有被占用。 + +例如,让我们假设你选择了端口 2500 。使用 [netstat][1] 来检查所选的端口是否被占用: + + # netstat -npltu | grep 2500 + +假如 netstat 没有返回任何信息,则你可以安全地为 sshd 使用端口 2500,并且你应该在上面的配置文件中更改端口的设定,具体如下: + + Port 2500 + +2. 只允许协议 2: + + Protocol 2 + +3. 配置验证超时的时间为 2 分钟,不允许以 root 身份登录,并将允许通过 ssh 登录的人数限制到最小: + + LoginGraceTime 2m + PermitRootLogin no + AllowUsers gacanepa + +4. 假如可能,使用基于公钥的验证方式而不是使用密码: + + PasswordAuthentication no + RSAAuthentication yes + PubkeyAuthentication yes + +这假设了你已经在你的客户端机器上创建了带有你的用户名的一个密钥对,并将公钥复制到了你的服务器上。 + +- [开启 SSH 无密码登录][2] + +### 配置网络和名称的解析 ### + +1. 每个系统管理员应该对下面这个系统配置文件非常熟悉: + +- /etc/hosts 被用来在小型网络中解析名称 <---> IP 地址。 + +文件 `/etc/hosts` 中的每一行拥有如下的结构: + + IP address - Hostname - FQDN + +例如, + + 192.168.0.10 laptop laptop.gabrielcanepa.com.ar + +2. `/etc/resolv.conf` 指定 DNS 服务器的 IP 地址和搜索域,它被用来在没有提供域名后缀时,将一个给定的查询名称对应为一个全称域名。 + +在正常情况下,你不必编辑这个文件,因为它是由系统管理的。然而,若你想改变 DNS 服务器的 IP 地址,建议你在该文件的每一行中,都应该遵循下面的结构: + + nameserver - IP address + +例如, + + nameserver 8.8.8.8 + +3. `/etc/host.conf` 指定在一个网络中主机名被解析的方法和顺序。换句话说,它告诉名称解析器使用哪个服务,并以什么顺序来使用。 + +尽管这个文件有几个选项,但最为常见和基本的设置包含如下的一行: + + order bind,hosts + +它意味着解析器应该首先查看 `resolv.conf` 中指定的域名服务器,然后到 `/etc/hosts` 文件中查找解析的名称。 + +4. `/etc/sysconfig/network` 包含了所有网络接口的路由和全局主机信息。下面的值可能会被使用: + + NETWORKING=yes|no + HOSTNAME=value + +其中的 value 应该是全称域名(FQDN)。 + + GATEWAY=XXX.XXX.XXX.XXX + +其中的 XXX.XXX.XXX.XXX 是网关的 IP 地址。 + + GATEWAYDEV=value + +在一个带有多个网卡的机器中, value 为网关设备名,例如 enp0s3。 + +5.
位于 `/etc/sysconfig/network-scripts` 中的文件(网络适配器配置文件)。 + +在上面提到的目录中,你将找到几个被命名为如下格式的文本文件。 + + ifcfg-name + +其中 name 为网卡的名称,由 `ip link show` 返回: + +![检查网络连接状态](http://www.tecmint.com/wp-content/uploads/2015/05/Check-IP-Address.png) + +检查网络连接状态 + +例如: + +![网络文件](http://www.tecmint.com/wp-content/uploads/2015/05/Network-Files.png) + +网络文件 + +除了环回接口,你的网卡也会有类似的配置。注意,假如设定了某些变量,它们将为这个特定的接口覆盖掉 `/etc/sysconfig/network` 中定义的值。在这篇文章中,为了能够解释清楚,每行都被加上了注释,但在实际的文件中,你应该避免加上注释: + + HWADDR=08:00:27:4E:59:37 # The MAC address of the NIC + TYPE=Ethernet # Type of connection + BOOTPROTO=static # This indicates that this NIC has been assigned a static IP. If this variable was set to dhcp, the NIC will be assigned an IP address by a DHCP server and thus the next two lines should not be present in that case. + IPADDR=192.168.0.18 + NETMASK=255.255.255.0 + GATEWAY=192.168.0.1 + NM_CONTROLLED=no # Should be added to the Ethernet interface to prevent NetworkManager from changing the file. + NAME=enp0s3 + UUID=14033805-98ef-4049-bc7b-d4bea76ed2eb + ONBOOT=yes # The operating system should bring up this NIC during boot + +### 设定主机名 ### + +在 RHEL 7 中, `hostnamectl` 命令被同时用来查询和设定系统的主机名。 + +要展示当前的主机名,输入: + + # hostnamectl status + +![在RHEL 7 中检查系统的主机名](http://www.tecmint.com/wp-content/uploads/2015/05/Check-System-hostname.png) + +检查系统的主机名 + +要更改主机名,使用 + + # hostnamectl set-hostname [new hostname] + +例如, + + # hostnamectl set-hostname cinderella + +要想使得更改生效,你需要重启 hostnamed 守护进程(这样你就不必因为要应用更改而登出系统再重新登录): + + # systemctl restart systemd-hostnamed + +![在 RHEL7 中设定系统主机名](http://www.tecmint.com/wp-content/uploads/2015/05/Set-System-Hostname.png) + +设定系统主机名 + +另外, RHEL 7 还包含 `nmcli` 工具,它可被用来达到相同的目的。要展示主机名,运行: + + # nmcli general hostname + +要改变主机名,则运行: + + # nmcli general hostname [new hostname] + +例如, + + # nmcli general hostname rhel7 + +![使用 nmcli 命令来设定主机名](http://www.tecmint.com/wp-content/uploads/2015/05/nmcli-command.png) + +使用 nmcli 命令来设定主机名 + +### 在开机时开启网络服务 ### + 
+作为本文的最后部分,就让我们看看如何确保网络服务在开机时被自动开启。简单来说,这是通过为服务配置文件中 [Install] 小节所指定的文件创建符号链接来实现的。 + +以 firewalld(/usr/lib/systemd/system/firewalld.service) 为例: + + [Install] + WantedBy=basic.target + Alias=dbus-org.fedoraproject.FirewallD1.service + +要启用该服务,运行: + + # systemctl enable firewalld + +另一方面,要禁用 firewalld,则需要移除符号链接: + + # systemctl disable firewalld + +![在开机时开启服务](http://www.tecmint.com/wp-content/uploads/2015/05/Enable-Service-at-System-Boot.png) + +在开机时开启服务 + +### 总结 ### + +在这篇文章中,我们总结了如何安装 SSH 及使用它安全地连接到一个 RHEL 服务器,如何改变主机名,并在最后如何确保在系统启动时开启服务。假如你注意到某个服务启动失败,你可以使用 `systemctl status -l [service]` 和 `journalctl -xn` 来进行排错。 + +请随意使用下面的评论框来让我们知晓你对本文的看法。提问也同样欢迎。我们期待着你的反馈! + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network-services-in-rhel-7/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/ +[2]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ \ No newline at end of file From 8bb04b7614a497065fa820b2576dc4aeb84e00cb Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Thu, 27 Aug 2015 11:59:36 +0800 Subject: [PATCH 330/697] Update RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 继续翻译该系列。 --- ...stalling, Configuring and Securing a Web and FTP Server.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md b/sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md
index 6a1e544de3..437612f124 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md @@ -1,3 +1,5 @@ +FSSlc Translating + RHCSA Series: Installing, Configuring and Securing a Web and FTP Server – Part 9 ================================================================================ A web server (also known as an HTTP server) is a service that hands content (most commonly web pages, but other types of documents as well) over to a client in a network. @@ -173,4 +175,4 @@ via: http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-an [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://httpd.apache.org/docs/2.4/ [2]:http://www.tecmint.com/manage-and-limit-downloadupload-bandwidth-with-trickle-in-linux/ -[3]:http://www.google.com/cse?cx=partner-pub-2601749019656699:2173448976&ie=UTF-8&q=virtual+hosts&sa=Search&gws_rd=cr&ei=Dy9EVbb0IdHisASnroG4Bw#gsc.tab=0&gsc.q=apache \ No newline at end of file +[3]:http://www.google.com/cse?cx=partner-pub-2601749019656699:2173448976&ie=UTF-8&q=virtual+hosts&sa=Search&gws_rd=cr&ei=Dy9EVbb0IdHisASnroG4Bw#gsc.tab=0&gsc.q=apache From 95cf2c6be1ca32d22b42c4a34deda014f0f01ea3 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 27 Aug 2015 13:09:10 +0800 Subject: [PATCH 331/697] PUB:Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux @strugglingyouth --- ...iping with Distributed Parity) in Linux.md | 163 +++++++++--------- 1 file changed, 79 insertions(+), 84 deletions(-) rename {translated/tech/RAID => published}/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md (50%) diff --git a/translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md b/published/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md similarity index 50% rename from
translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md rename to published/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md index 7de5199a08..34ac7f18b2 100644 --- a/translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md +++ b/published/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md @@ -1,89 +1,90 @@ - -在 Linux 中创建 RAID 5(条带化与分布式奇偶校验) - 第4部分 +在 Linux 下使用 RAID(四):创建 RAID 5(条带化与分布式奇偶校验) ================================================================================ -在 RAID 5 中,条带化数据跨多个驱磁盘使用分布式奇偶校验。分布式奇偶校验的条带化意味着它将奇偶校验信息和条带中的数据分布在多个磁盘上,它将有很好的数据冗余。 + +在 RAID 5 中,数据条带化后存储在分布式奇偶校验的多个磁盘上。分布式奇偶校验的条带化意味着它将奇偶校验信息和条带化数据分布在多个磁盘上,这样会有很好的数据冗余。 ![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg) -在 Linux 中配置 RAID 5 +*在 Linux 中配置 RAID 5* -对于此 RAID 级别它至少应该有三个或更多个磁盘。RAID 5 通常被用于大规模生产环境中花费更多的成本来提供更好的数据冗余性能。 +对于此 RAID 级别它至少应该有三个或更多个磁盘。RAID 5 通常被用于大规模生产环境中,以花费更多的成本来提供更好的数据冗余性能。 #### 什么是奇偶校验? 
#### -奇偶校验是在数据存储中检测错误最简单的一个方法。奇偶校验信息存储在每个磁盘中,比如说,我们有4个磁盘,其中一个磁盘空间被分割去存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,我们可以通过更换故障磁盘后,从奇偶校验信息重建得到原来的数据。 +奇偶校验是在数据存储中检测错误最简单的常见方式。奇偶校验信息存储在每个磁盘中,比如说,我们有4个磁盘,其中相当于一个磁盘大小的空间被分割去存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,我们可以通过更换故障磁盘后,从奇偶校验信息重建得到原来的数据。 #### RAID 5 的优点和缺点 #### -- 提供更好的性能 +- 提供更好的性能。 - 支持冗余和容错。 - 支持热备份。 -- 将失去一个磁盘的容量存储奇偶校验信息。 +- 将用掉一个磁盘的容量存储奇偶校验信息。 - 单个磁盘发生故障后不会丢失数据。我们可以更换故障硬盘后从奇偶校验信息中重建数据。 -- 事务处理读操作会更快。 -- 由于奇偶校验占用资源,写操作将是缓慢的。 +- 适合于面向事务处理的环境,读操作会更快。 +- 由于奇偶校验占用资源,写操作会慢一些。 - 重建需要很长的时间。 #### 要求 #### + 创建 RAID 5 最少需要3个磁盘,你也可以添加更多的磁盘,前提是你要有多端口的专用硬件 RAID 控制器。在这里,我们使用“mdadm”包来创建软件 RAID。 -mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下 RAID 没有可用的配置文件,我们在创建和配置 RAID 后必须将配置文件保存在一个单独的文件中,例如:mdadm.conf。 +mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下没有 RAID 的配置文件,我们在创建和配置 RAID 后必须将配置文件保存在一个单独的文件 mdadm.conf 中。 在进一步学习之前,我建议你通过下面的文章去了解 Linux 中 RAID 的基础知识。 -- [Basic Concepts of RAID in Linux – Part 1][1] -- [Creating RAID 0 (Stripe) in Linux – Part 2][2] -- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3] +- [介绍 RAID 的级别和概念][1] +- [使用 mdadm 工具创建软件 RAID 0 (条带化)][2] +- [用两块磁盘创建 RAID 1(镜像)][3] #### 我的服务器设置 #### - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.227 - Hostname : rd5.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd + 操作系统 : CentOS 6.5 Final + IP 地址 : 192.168.0.227 + 主机名 : rd5.tecmintlocal.com + 磁盘 1 [20GB] : /dev/sdb + 磁盘 2 [20GB] : /dev/sdc + 磁盘 3 [20GB] : /dev/sdd -这篇文章是 RAID 系列9教程的第4部分,在这里我们要建立一个软件 RAID 5(分布式奇偶校验)使用三个20GB(名为/dev/sdb, /dev/sdc 和 /dev/sdd)的磁盘在 Linux 系统或服务器中上。 +这是9篇系列教程的第4部分,在这里我们要在 Linux 系统或服务器上使用三个20GB(名为/dev/sdb, /dev/sdc 和 /dev/sdd)的磁盘建立带有分布式奇偶校验的软件 RAID 5。 ### 第1步:安装 mdadm 并检验磁盘 ### -1.正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID 设置,但同样的做法也适用于其他 Linux 发行版。 +1、 正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID 设置,但同样的做法也适用于其他 Linux 发行版。 # lsb_release -a # ifconfig | grep inet ![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png) 
-CentOS 6.5 摘要 +*CentOS 6.5 摘要* -2. 如果你按照我们的 RAID 系列去配置的,我们假设你已经安装了“mdadm”包,如果没有,根据你的 Linux 发行版使用下面的命令安装。 +2、 如果你按照我们的 RAID 系列去配置的,我们假设你已经安装了“mdadm”包,如果没有,根据你的 Linux 发行版使用下面的命令安装。 - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] + # yum install mdadm [在 RedHat 系统] + # apt-get install mdadm [在 Debain 系统] -3. “mdadm”包安装后,先使用‘fdisk‘命令列出我们在系统上增加的三个20GB的硬盘。 +3、 “mdadm”包安装后,先使用`fdisk`命令列出我们在系统上增加的三个20GB的硬盘。 # fdisk -l | grep sd ![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png) -安装 mdadm 工具 +*安装 mdadm 工具* -4. 现在该检查这三个磁盘是否存在 RAID 块,使用下面的命令来检查。 +4、 现在该检查这三个磁盘是否存在 RAID 块,使用下面的命令来检查。 # mdadm -E /dev/sd[b-d] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd # 或 ![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png) -检查 Raid 磁盘 +*检查 Raid 磁盘* **注意**: 上面的图片说明,没有检测到任何超级块。所以,这三个磁盘中没有定义 RAID。让我们现在开始创建一个吧! ### 第2步:为磁盘创建 RAID 分区 ### -5. 首先,在创建 RAID 前我们要为磁盘分区(/dev/sdb, /dev/sdc 和 /dev/sdd),在进行下一步之前,先使用‘fdisk’命令进行分区。 +5、 首先,在创建 RAID 前磁盘(/dev/sdb, /dev/sdc 和 /dev/sdd)必须有分区,因此,在进行下一步之前,先使用`fdisk`命令进行分区。 # fdisk /dev/sdb # fdisk /dev/sdc @@ -93,20 +94,20 @@ CentOS 6.5 摘要 请按照下面的说明在 /dev/sdb 硬盘上创建分区。 -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。选择主分区是因为还没有定义过分区。 -- 接下来选择分区号为1。默认就是1. 
+- 按 `n` 创建新的分区。 +- 然后按 `P` 选择主分区。选择主分区是因为还没有定义过分区。 +- 接下来选择分区号为1。默认就是1。 - 这里是选择柱面大小,我们没必要选择指定的大小,因为我们需要为 RAID 使用整个分区,所以只需按两次 Enter 键默认将整个容量分配给它。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 改变分区类型,按 ‘L’可以列出所有可用的类型。 -- 按 ‘t’ 修改分区类型。 -- 这里使用‘fd’设置为 RAID 的类型。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 +- 然后,按 `P` 来打印创建好的分区。 +- 改变分区类型,按 `L`可以列出所有可用的类型。 +- 按 `t` 修改分区类型。 +- 这里使用`fd`设置为 RAID 的类型。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 ![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png) -创建 sdb 分区 +*创建 sdb 分区* **注意**: 我们仍要按照上面的步骤来创建 sdc 和 sdd 的分区。 @@ -118,7 +119,7 @@ CentOS 6.5 摘要 ![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png) -创建 sdc 分区 +*创建 sdc 分区* #### 创建 /dev/sdd 分区 #### @@ -126,93 +127,87 @@ CentOS 6.5 摘要 ![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png) -创建 sdd 分区 +*创建 sdd 分区* -6. 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。 +6、 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。 # mdadm --examine /dev/sdb /dev/sdc /dev/sdd - - or - - # mdadm -E /dev/sd[b-c] + # mdadm -E /dev/sd[b-c] # 或 ![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png) -检查磁盘变化 +*检查磁盘变化* **注意**: 在上面的图片中,磁盘的类型是 fd。 -7.现在在新创建的分区检查 RAID 块。如果没有检测到超级块,我们就能够继续下一步,创建一个新的 RAID 5 的设置在这些磁盘中。 +7、 现在在新创建的分区检查 RAID 块。如果没有检测到超级块,我们就能够继续下一步,在这些磁盘中创建一个新的 RAID 5 配置。 ![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png) -在分区中检查 Raid +*在分区中检查 RAID * ### 第3步:创建 md 设备 md0 ### -8. 现在创建一个 RAID 设备“md0”(即 /dev/md0)使用所有新创建的分区(sdb1, sdc1 and sdd1) ,使用以下命令。 +8、 现在使用所有新创建的分区(sdb1, sdc1 和 sdd1)创建一个 RAID 设备“md0”(即 /dev/md0),使用以下命令。 # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 - - or - - # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1 + # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1 # 或 -9. 
创建 RAID 设备后,检查并确认 RAID,包括设备和从 mdstat 中输出的 RAID 级别。 +9、 创建 RAID 设备后,检查并确认 RAID,从 mdstat 中输出中可以看到包括的设备的 RAID 级别。 # cat /proc/mdstat ![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png) -验证 Raid 设备 +*验证 Raid 设备* -如果你想监视当前的创建过程,你可以使用‘watch‘命令,使用 watch ‘cat /proc/mdstat‘,它会在屏幕上显示且每隔1秒刷新一次。 +如果你想监视当前的创建过程,你可以使用`watch`命令,将 `cat /proc/mdstat` 传递给它,它会在屏幕上显示且每隔1秒刷新一次。 # watch -n1 cat /proc/mdstat ![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png) -监控 Raid 5 过程 +*监控 RAID 5 构建过程* ![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png) -Raid 5 过程概要 +*Raid 5 过程概要* -10. 创建 RAID 后,使用以下命令验证 RAID 设备 +10、 创建 RAID 后,使用以下命令验证 RAID 设备 # mdadm -E /dev/sd[b-d]1 ![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png) -验证 Raid 级别 +*验证 Raid 级别* **注意**: 因为它显示三个磁盘的信息,上述命令的输出会有点长。 -11. 接下来,验证 RAID 阵列的假设,这包含正在运行 RAID 的设备,并开始重新同步。 +11、 接下来,验证 RAID 阵列,假定包含 RAID 的设备正在运行并已经开始了重新同步。 # mdadm --detail /dev/md0 ![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png) -验证 Raid 阵列 +*验证 RAID 阵列* ### 第4步:为 md0 创建文件系统### -12. 在挂载前为“md0”设备创建 ext4 文件系统。 +12、 在挂载前为“md0”设备创建 ext4 文件系统。 # mkfs.ext4 /dev/md0 ![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png) -创建 md0 文件系统 +*创建 md0 文件系统* -13.现在,在‘/mnt‘下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下并检查下挂载点的文件,你会看到 lost+found 目录。 +13、 现在,在`/mnt`下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下,并检查挂载点下的文件,你会看到 lost+found 目录。 # mkdir /mnt/raid5 # mount /dev/md0 /mnt/raid5/ # ls -l /mnt/raid5/ -14. 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。 +14、 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。 # touch /mnt/raid5/raid5_tecmint_{1..5} # ls -l /mnt/raid5/ @@ -222,9 +217,9 @@ Raid 5 过程概要 ![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png) -挂载 Raid 设备 +*挂载 RAID 设备* -15. 
我们需要在 fstab 中添加条目,否则系统重启后将不会显示我们的挂载点。然后编辑 fstab 文件添加条目,在文件尾追加以下行,如下图所示。挂载点会根据你环境的不同而不同。 +15、 我们需要在 fstab 中添加条目,否则系统重启后将不会显示我们的挂载点。编辑 fstab 文件添加条目,在文件尾追加以下行。挂载点会根据你环境的不同而不同。 # vim /etc/fstab @@ -232,19 +227,19 @@ Raid 5 过程概要 ![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png) -自动挂载 Raid 5 +自动挂载 RAID 5 -16. 接下来,运行‘mount -av‘命令检查 fstab 条目中是否有错误。 +16、 接下来,运行`mount -av`命令检查 fstab 条目中是否有错误。 # mount -av ![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png) -检查 Fstab 错误 +*检查 Fstab 错误* ### 第5步:保存 Raid 5 的配置 ### -17. 在前面章节已经说过,默认情况下 RAID 没有配置文件。我们必须手动保存。如果此步不跟 RAID 设备将不会存在 md0,它将会跟一些其他数子。 +17、 在前面章节已经说过,默认情况下 RAID 没有配置文件。我们必须手动保存。如果不做这一步,重启后 RAID 设备将不会保持为 md0,而可能会变成其它随机编号。 所以,我们必须要在系统重新启动之前保存配置。如果保存了配置,它在系统重新启动时会被加载到内核中,RAID 也将随之加载。 # mdadm --detail --scan --verbose >> /etc/mdadm.conf ![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png) -保存 Raid 5 配置 +*保存 RAID 5 配置* -注意:保存配置将保持 RAID 级别的稳定性在 md0 设备中。 +注意:保存配置将保持 md0 设备的 RAID 级别稳定不变。 ### 第6步:添加备用磁盘 ### -18.
+[3]:https://linux.cn/article-6093-1.html [4]:http://www.tecmint.com/create-raid-6-in-linux/ From 3ea905d9e58574f61849d3f93091293016435083 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 27 Aug 2015 15:34:16 +0800 Subject: [PATCH 332/697] =?UTF-8?q?20150827-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Download Manager Updated With Fresh GUI.md | 67 ++++++++ ... DEB and DEB to RPM Package Using Alien.md | 159 ++++++++++++++++++ 2 files changed, 226 insertions(+) create mode 100644 sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md create mode 100644 sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md diff --git a/sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md b/sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md new file mode 100644 index 0000000000..767c2fdcd4 --- /dev/null +++ b/sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md @@ -0,0 +1,67 @@ +Xtreme Download Manager Updated With Fresh GUI +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-Linux.jpg) + +[Xtreme Download Manager][1], unarguably one of the [best download managers for Linux][2], has a new version named XDM 2015 which brings a fresh new look to it. + +Xtreme Download Manager, also known as XDM or XDMAN, is a popular cross-platform download manager available for Linux, Windows and Mac OS X. It is also compatible with all major web browsers such as Chrome, Firefox, Safari enabling you to download directly from XDM when you try to download something in your web browser. + +Applications such as XDM are particularly useful when you have slow/limited network connectivity and you need to manage your downloads. Imagine downloading a huge file from internet on a slow network. 
What if you could pause and resume the download at will? XDM helps you in such situations. + +Some of the main features of XDM are: + +- Pause and resume download +- [Download videos from YouTube][3] and other video sites +- Force assemble +- Download speed acceleration +- Schedule downloads +- Limit download speed +- Web browser integration +- Support for proxy servers + +Here you can see the difference between the old and new XDM. + +![Old XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-700x400_c.jpg) + +Old XDM + +![New XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme_Download_Manager.png) + +New XDM + +### Install Xtreme Download Manager in Ubuntu based Linux distros ### + +Thanks to the PPA by Noobslab, you can easily install Xtreme Download Manager using the commands below. XDM requires Java but thanks to the PPA, you don’t need to bother with installing dependencies separately. + + sudo add-apt-repository ppa:noobslab/apps + sudo apt-get update + sudo apt-get install xdman + +The above PPA should be available for Ubuntu and other Ubuntu based Linux distributions such as Linux Mint, elementary OS, Linux Lite etc. 
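Before running the install, you may want to confirm the repository actually got registered. A small, read-only check like the sketch below can help (the PPA name matches the one used above; paths are the Ubuntu defaults):

```shell
# Best-effort check that a PPA is registered in APT's source lists.
# It only greps the source-list files, so it is safe to run anywhere;
# on systems without APT it simply reports "not configured".
ppa_configured() {
    grep -rhs "ppa.launchpad.net/$1" /etc/apt/sources.list /etc/apt/sources.list.d 2>/dev/null | grep -q .
}

if ppa_configured "noobslab/apps"; then
    echo "PPA noobslab/apps is configured"
else
    echo "PPA noobslab/apps is not configured"
fi
```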
+ +#### Remove XDM #### + +To remove XDM (installed using the PPA), use the commands below: + + sudo apt-get remove xdman + sudo add-apt-repository --remove ppa:noobslab/apps + +For other Linux distributions, you can download it from the link below: + +- [Download Xtreme Download Manager][4] + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/xtreme-download-manager-install/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://xdman.sourceforge.net/ +[2]:http://itsfoss.com/4-best-download-managers-for-linux/ +[3]:http://itsfoss.com/download-youtube-videos-ubuntu/ +[4]:http://xdman.sourceforge.net/download.html \ No newline at end of file diff --git a/sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md b/sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md new file mode 100644 index 0000000000..2d3f203676 --- /dev/null +++ b/sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md @@ -0,0 +1,159 @@ +How to Convert From RPM to DEB and DEB to RPM Package Using Alien +================================================================================ +As I’m sure you already know, there are plenty of ways to install software in Linux: using the package management system provided by your distribution ([aptitude, yum, or zypper][1], to name a few examples), compiling from source (though somewhat rare these days, it was the only method available during the early days of Linux), or utilizing a low level tool such as dpkg or rpm with .deb and .rpm standalone, precompiled packages, respectively. 
![Convert RPM to DEB and DEB to RPM](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-RPM-to-DEB-and-DEB-to-RPM.png) + +Convert RPM to DEB and DEB to RPM Package Using Alien + +In this article we will introduce you to alien, a tool that converts between different Linux package formats, with .rpm to .deb (and vice versa) being the most common usage. + +This tool, even though its author is no longer maintaining it and states on his website that alien will probably always remain in experimental status, can come in handy if you need a certain type of package but can only find that program in another package format. + +For example, alien saved my day once when I was looking for a .deb driver for an inkjet printer and couldn’t find any – the manufacturer only provided a .rpm package. I installed alien, converted the package, and before long I was able to use my printer without issues. + +That said, we must clarify that this utility should not be used to replace important system files and libraries since they are set up differently across distributions. Only use alien as a last resort if the suggested installation methods at the beginning of this article are out of the question for the required program. + +Last but not least, we must note that even though we will use CentOS and Debian in this article, alien is also known to work in Slackware and even in Solaris, besides the first two distributions and their respective families. + +### Step 1: Installing Alien and Dependencies ### + +To install alien in CentOS/RHEL 7, you will need to enable the EPEL and the Nux Dextop (yes, it’s Dextop – not Desktop) repositories, in that order: + + # yum install epel-release + # rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro + +The latest version of the package that enables this repository is currently 0.5 (published on Aug. 10, 2015).
You should check [http://li.nux.ro/download/nux/dextop/el7/x86_64/][2] to see whether there’s a newer version before proceeding further: + + # rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm + +then do, + + # yum update && yum install alien + +In Fedora, you will only need to run the last command. + +In Debian and derivatives, simply do: + + # aptitude install alien + +### Step 2: Converting from .deb to .rpm Package ### + +For this test we have chosen dateutils, which provides a set of date and time utilities to deal with large amounts of financial data. We will download the .deb package to our CentOS 7 box, convert it to .rpm and install it: + +![Check CentOS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-OS-Version.png) + +Check CentOS Version + + # cat /etc/centos-release + # wget http://ftp.us.debian.org/debian/pool/main/d/dateutils/dateutils_0.3.1-1.1_amd64.deb + # alien --to-rpm --scripts dateutils_0.3.1-1.1_amd64.deb + +![Convert .deb to .rpm package in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-deb-to-rpm-package.png) + +Convert .deb to .rpm package in Linux + +**Important**: (Please note how, by default, alien increases the version minor number of the target package. If you want to override this behavior, add the –keep-version flag). + +If we try to install the package right away, we will run into a slight issue: + + # rpm -Uvh dateutils-0.3.1-2.1.x86_64.rpm + +![Install RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-RPM-Package.png) + +Install RPM Package + +To solve this issue, we will enable the epel-testing repository and install the rpmrebuild utility to edit the settings of the package to be rebuilt: + + # yum --enablerepo=epel-testing install rpmrebuild + +Then run, + + # rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm + +Which will open up your default text editor. 
Go to the `%files` section and delete the lines that refer to the directories mentioned in the error message, then save the file and exit: + +![Convert .deb to Alien Version](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-Deb-Package-to-Alien-Version.png) + +Convert .deb to Alien Version + +When you exit the file you will be prompted to continue with the rebuild. If you choose Y, the file will be rebuilt into the specified directory (different from the current working directory): + + # rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm + +![Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Build-RPM-Package.png) + +Build RPM Package + +Now you can proceed to install the package and verify as usual: + + # rpm -Uvh /root/rpmbuild/RPMS/x86_64/dateutils-0.3.1-2.1.x86_64.rpm + # rpm -qa | grep dateutils + +![Install Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Build-RPM-Package.png) + +Install Build RPM Package + +Finally, you can list the individual tools that were included with dateutils and alternatively check their respective man pages: + + # ls -l /usr/bin | grep dateutils + +![Verify Installed RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Installed-Package.png) + +Verify Installed RPM Package + +### Step 3: Converting from .rpm to .deb Package ### + +In this section we will illustrate how to convert from .rpm to .deb. In a 32-bit Debian Wheezy box, let’s download the .rpm package for the zsh shell from the CentOS 6 OS repository. Note that this shell is not available by default in Debian and derivatives.
+ + # cat /etc/shells + # lsb_release -a | tail -n 4 + +![Check Shell and Debian OS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Shell-Debian-OS-Version.png) + +Check Shell and Debian OS Version + + # wget http://mirror.centos.org/centos/6/os/i386/Packages/zsh-4.3.11-4.el6.centos.i686.rpm + # alien --to-deb --scripts zsh-4.3.11-4.el6.centos.i686.rpm + +You can safely disregard the messages about a missing signature: + +![Convert .rpm to .deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-rpm-to-deb-Package.png) + +Convert .rpm to .deb Package + +After a few moments, the .deb file should have been generated and be ready to install: + + # dpkg -i zsh_4.3.11-5_i386.deb + +![Install RPM Converted Deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Deb-Package.png) + +Install RPM Converted Deb Package + +After the installation, you can verify that zsh is added to the list of valid shells: + + # cat /etc/shells + +![Confirm Installed Zsh Package](http://www.tecmint.com/wp-content/uploads/2015/08/Confirm-Installed-Package.png) + +Confirm Installed Zsh Package + +### Summary ### + +In this article we have explained how to convert from .rpm to .deb and vice versa to install packages as a last resort when such programs are not available in the repositories or as distributable source code. You will want to bookmark this article because all of us will need alien at one time or another. + +Feel free to share your thoughts about this article using the form below. 
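+For quick reference, the two conversions demonstrated above each boil down to a single alien invocation, so they can be wrapped in a small dispatcher script. This is only a sketch: the `pick_flag` and `convert_pkg` names are ours, and it assumes alien is installed as shown in Step 1.

```bash
#!/bin/bash
# pkgconvert.sh - pick the right alien invocation from the package extension.
# Sketch only; run it as root (or via sudo), with alien installed as in Step 1.

pick_flag() {
    # Decide the conversion direction from the file extension.
    case "$1" in
        *.deb) echo "--to-rpm" ;;
        *.rpm) echo "--to-deb" ;;
        *)     return 1 ;;
    esac
}

convert_pkg() {
    local flag
    if ! flag=$(pick_flag "$1"); then
        echo "Usage: convert_pkg <package>.deb | <package>.rpm" >&2
        return 1
    fi
    # --scripts carries the maintainer scripts over, as in the examples above.
    alien "$flag" --scripts "$1"
}

# Example (only runs when a package file is passed on the command line):
#   ./pkgconvert.sh dateutils_0.3.1-1.1_amd64.deb
if [ "$#" -ge 1 ]; then
    convert_pkg "$1"
fi
```

+Keep in mind that a converted package which owns system directories may still need the rpmrebuild pass described in Step 2.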
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/convert-from-rpm-to-deb-and-deb-to-rpm-package-using-alien/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/linux-package-management/ +[2]:http://li.nux.ro/download/nux/dextop/el7/x86_64/ \ No newline at end of file From ef6c926d18b3617c8a0d65cd04cf28c58ee19ea1 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 27 Aug 2015 16:02:02 +0800 Subject: [PATCH 333/697] =?UTF-8?q?20150827-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...The Strangest Most Unique Linux Distros.md | 67 ++++++++ ... or UNIX--Bash Read a File Line By Line.md | 162 ++++++++++++++++++ 2 files changed, 229 insertions(+) create mode 100644 sources/talk/20150827 The Strangest Most Unique Linux Distros.md create mode 100644 sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md diff --git a/sources/talk/20150827 The Strangest Most Unique Linux Distros.md b/sources/talk/20150827 The Strangest Most Unique Linux Distros.md new file mode 100644 index 0000000000..04ff47952a --- /dev/null +++ b/sources/talk/20150827 The Strangest Most Unique Linux Distros.md @@ -0,0 +1,67 @@ +The Strangest, Most Unique Linux Distros +================================================================================ +From the most consumer focused distros like Ubuntu, Fedora, Mint or elementary OS to the more obscure, minimal and enterprise focused ones such as Slackware, Arch Linux or RHEL, I thought I've seen them all. Couldn't have been any further from the truth. Linux eco-system is very diverse. There's one for everyone. 
Let's discuss the weird and wacky world of niche Linux distros that represent the true diversity of open platforms.
+
+![strangest linux distros](http://2.bp.blogspot.com/--cSL2-6rIgA/VcwNc5hFebI/AAAAAAAAJzk/AgB55mVtJVQ/s1600/Puppy-Linux.png)
+
+**Puppy Linux**: An operating system which is about 1/10th the size of an average DVD-quality movie rip, that's Puppy Linux for you. The OS is just 100 MB in size! And it can run from RAM, making it unusually fast even on older PCs. You can even remove the boot medium after the operating system has started! Can it get any better than that? System requirements are bare minimum, most hardware is automatically detected, and it comes loaded with software catering to your basic needs. [Experience Puppy Linux][1].
+
+![suicide linux](http://3.bp.blogspot.com/-dfeehRIQKpo/VdMgRVQqIJI/AAAAAAAAJz0/TmBs-n2K9J8/s1600/suicide-linux.jpg)
+
+**Suicide Linux**: Did the name scare you? Well it should. 'Any time - any time - you type any remotely incorrect command, the interpreter creatively resolves it into rm -rf / and wipes your hard drive'. Simple as that. I really want to meet the people who are confident enough to risk their production machines with [Suicide Linux][2]. **Warning: DO NOT try this on production machines!** The whole thing is available in a neat [DEB package][3] if you're interested.
+
+![top 10 strangest linux distros](http://3.bp.blogspot.com/-Q0hlEMCD9-o/VdMieAiXY1I/AAAAAAAAJ0M/iS_ZjVaZAk8/s1600/papyros.png)
+
+**PapyrOS**: "Strange" in a good way. PapyrOS is trying to adapt Android's material design language to its brand-new Linux distribution. Though the project is in its early stages, it already looks very promising. The project page says the OS is 80% complete and one can expect the first Alpha release anytime soon. We did a small write-up on [PapyrOS][4] when it was announced, and by the looks of it, PapyrOS might even become a trend-setter of sorts. 
Follow the project on [Google+][5] and contribute via [BountySource][6] if you're interested. + +![10 most unique linux distros](http://3.bp.blogspot.com/-8aOtnTp3Yxk/VdMo_KWs4sI/AAAAAAAAJ0o/3NTqhaw60jM/s1600/qubes-linux.png) + +**Qubes OS**: Qubes is an open-source operating system designed to provide strong security using a Security by Compartmentalization approach. The assumption is that there can be no perfect, bug-free desktop environment. And by implementing a 'Security by Isolation' approach, [Qubes Linux][7] intends to remedy that. Qubes is based on Xen, the X Window System, and Linux, and can run most Linux applications and supports most Linux drivers. Qubes was selected as a finalist of Access Innovation Prize 2014 for Endpoint Security Solution. + +![top10 linux distros](http://3.bp.blogspot.com/-2Sqvb_lilC0/VdMq_ceoXnI/AAAAAAAAJ00/kot20ugVJFk/s1600/ubuntu-satanic.jpg) + +**Ubuntu Satanic Edition**: Ubuntu SE is a Linux distribution based on Ubuntu. "It brings together the best of free software and free metal music" in one comprehensive package consisting of themes, wallpapers, and even some heavy-metal music sourced from talented new artists. Though the project doesn't look actively developed anymore, Ubuntu Satanic Edition is strange in every sense of that word. [Ubuntu SE (Slightly NSFW)][8]. + +![10 strange linux distros](http://2.bp.blogspot.com/-ZtIVjGMqdx0/VdMv136Pz1I/AAAAAAAAJ1E/-q34j-TXyUY/s1600/tiny-core-linux.png) + +**Tiny Core Linux**: Puppy Linux not small enough? Try this. Tiny Core Linux is a 12 MB graphical Linux desktop! Yep, you read it right. One major caveat: It is not a complete desktop nor is all hardware completely supported. It represents only the core needed to boot into a very minimal X desktop typically with wired internet access. There is even a version without the GUI called Micro Core Linux which is just 9MB in size. [Tiny Core Linux][9] folks. 
+
+![top 10 unique and special linux distros](http://4.bp.blogspot.com/-idmCvIxtxeo/VdcqcggBk1I/AAAAAAAAJ1U/DTQCkiLqlLk/s1600/nixos.png)
+
+**NixOS**: A Linux distribution aimed at very experienced users, with a unique approach to package and configuration management. In other distributions, actions such as upgrades can be dangerous. Upgrading a package can cause other packages to break, and upgrading an entire system is much less reliable than reinstalling from scratch. And on top of all that, you can't safely test what the results of a configuration change will be; there's no "Undo", so to speak. In NixOS, the entire operating system is built by the Nix package manager from a description in a purely functional build language. This means that building a new configuration cannot overwrite previous configurations. Most of the other features follow this pattern. Nix stores all packages in isolation from each other. [More about NixOS][10].
+
+![strangest linux distros](http://4.bp.blogspot.com/-rOYfBXg-UiU/VddCF7w_xuI/AAAAAAAAJ1w/Nf11bOheOwM/s1600/gobolinux.jpg)
+
+**GoboLinux**: This is another highly unusual Linux distro. What makes GoboLinux so different from the rest is its unique re-arrangement of the filesystem. It has its own subdirectory tree, where all of its files and programs are stored. GoboLinux does not have a package database because the filesystem is its database. In some ways, this sort of arrangement is similar to that seen in OS X. [Get GoboLinux][11].
+
+![strangest linux distros](http://1.bp.blogspot.com/-3P22pYfih6Y/VdcucPOv4LI/AAAAAAAAJ1g/PszZDbe83sQ/s1600/hannah-montana-linux.jpg)
+
+**Hannah Montana Linux**: Here is a Linux distro based on Kubuntu with a Hannah Montana-themed boot screen, KDM, icon set, ksplash, plasma, color scheme, and wallpapers (I'm so sorry). [Link][12]. The project is not active anymore.
+
+**RLSD Linux**: An extremely minimalistic, small, lightweight and security-hardened, text-based operating system built on Linux. 
"It's a unique distribution that provides a selection of console applications and home-grown security features which might appeal to hackers," developers claim. [RLSD Linux][13]. + +Did we miss anything even stranger? Let us know. + +-------------------------------------------------------------------------------- + +via: http://www.techdrivein.com/2015/08/the-strangest-most-unique-linux-distros.html + +作者:Manuel Jose +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://puppylinux.org/main/Overview%20and%20Getting%20Started.htm +[2]:http://qntm.org/suicide +[3]:http://sourceforge.net/projects/suicide-linux/files/ +[4]:http://www.techdrivein.com/2015/02/papyros-material-design-linux-coming-soon.html +[5]:https://plus.google.com/communities/109966288908859324845/stream/3262a3d3-0797-4344-bbe0-56c3adaacb69 +[6]:https://www.bountysource.com/teams/papyros +[7]:https://www.qubes-os.org/ +[8]:http://ubuntusatanic.org/ +[9]:http://tinycorelinux.net/ +[10]:https://nixos.org/ +[11]:http://www.gobolinux.org/ +[12]:http://hannahmontana.sourceforge.net/ +[13]:http://rlsd2.dimakrasner.com/ \ No newline at end of file diff --git a/sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md b/sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md new file mode 100644 index 0000000000..971adc0a0c --- /dev/null +++ b/sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md @@ -0,0 +1,162 @@ +Linux/UNIX: Bash Read a File Line By Line +================================================================================ +How do I read a file line by line under a Linux or UNIX-like system using KSH or BASH shell? + +You can use while..do..done bash loop to read file line by line on a Linux, OSX, *BSD, or Unix-like system. + +**Syntax to read file line by line on a Bash Unix & Linux shell:** + +1. 
The syntax is as follows for bash, ksh, zsh, and all other shells -
+1. while read -r line; do COMMAND; done < input.file
+1. The -r option passed to the read command prevents backslash escapes from being interpreted.
+1. Add the IFS= option before the read command to prevent leading/trailing whitespace from being trimmed -
+1. while IFS= read -r line; do COMMAND_on $line; done < input.file
+
+Here is a more human-readable version of the same syntax:
+
+    #!/bin/bash
+    input="/path/to/txt/file"
+    while IFS= read -r var
+    do
+        echo "$var"
+    done < "$input"
+
+**Examples**
+
+Here are some examples:
+
+    #!/bin/ksh
+    file="/home/vivek/data.txt"
+    while IFS= read line
+    do
+        # display $line or do something with $line
+        echo "$line"
+    done <"$file"
+
+The same example using bash shell:
+
+    #!/bin/bash
+    file="/home/vivek/data.txt"
+    while IFS= read -r line
+    do
+        # display $line or do something with $line
+        printf '%s\n' "$line"
+    done <"$file"
+
+You can also read the file field-wise:
+
+    #!/bin/bash
+    file="/etc/passwd"
+    while IFS=: read -r f1 f2 f3 f4 f5 f6 f7
+    do
+        # display fields using f1, f2,..,f7
+        printf 'Username: %s, Shell: %s, Home Dir: %s\n' "$f1" "$f7" "$f6"
+    done <"$file"
+
+Sample outputs:
+
+![Fig.01: Bash shell scripting- read file line by line demo outputs](http://s0.cyberciti.org/uploads/faq/2011/01/Bash-Scripting-Read-File-line-by-line-demo.jpg)
+
+Fig.01: Bash shell scripting- read file line by line demo outputs
+
+**Bash Scripting: Read text file line-by-line to create pdf files**
+
+My input file is as follows (faq.txt):
+
+    4|http://www.cyberciti.biz/faq/mysql-user-creation/|Mysql User Creation: Setting Up a New MySQL User Account
+    4096|http://www.cyberciti.biz/faq/ksh-korn-shell/|What is UNIX / Linux Korn Shell?
+    4101|http://www.cyberciti.biz/faq/what-is-posix-shell/|What Is POSIX Shell?
+    17267|http://www.cyberciti.biz/faq/linux-check-battery-status/|Linux: Check Battery Status Command
+    17245|http://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/|Linux Restart NTPD Service Command
+    17183|http://www.cyberciti.biz/faq/ubuntu-linux-determine-your-ip-address/|Ubuntu Linux: Determine Your IP Address
+    17172|http://www.cyberciti.biz/faq/determine-ip-address-of-linux-server/|HowTo: Determine an IP Address My Linux Server
+    16510|http://www.cyberciti.biz/faq/unix-linux-restart-php-service-command/|Linux / Unix: Restart PHP Service Command
+    8292|http://www.cyberciti.biz/faq/mounting-harddisks-in-freebsd-with-mount-command/|FreeBSD: Mount Hard Drive / Disk Command
+    8190|http://www.cyberciti.biz/faq/rebooting-solaris-unix-server/|Reboot a Solaris UNIX System
+
+My bash script:
+
+    #!/bin/bash
+    # Usage: Create pdf files from input (wrapper script)
+    # Author: Vivek Gite under GPL v2.x+
+    #---------------------------------------------------------
+
+    # Input file
+    _db="/tmp/wordpress/faq.txt"
+
+    # Output location
+    o="/var/www/private/pdf/faq"
+
+    # Path to the pdf writer (tilde does not expand inside quotes, hence $HOME)
+    _writer="$HOME/bin/py/pdfwriter.py"
+
+    # If file exists
+    if [[ -f "$_db" ]]
+    then
+        # read it
+        while IFS='|' read -r pdfid pdfurl pdftitle
+        do
+            # plain assignment; "local" is only valid inside a function
+            pdf="$o/$pdfid.pdf"
+            echo "Creating $pdf file ..."
+            # Generate the pdf file
+            $_writer --quiet --footer-spacing 2 \
+                --footer-left "nixCraft is GIT UL++++ W+++ C++++ M+ e+++ d-" \
+                --footer-right "Page [page] of [toPage]" --footer-line \
+                --footer-font-size 7 --print-media-type "$pdfurl" "$pdf"
+        done <"$_db"
+    fi
+
+**Tip: Read from bash variable**
+
+Let us say you want a list of all installed PHP packages on a Debian or Ubuntu Linux system; enter:
+
+    # My input source is the contents of a variable called $list #
+    list=$(dpkg --list php\* | awk '/ii/{print $2}')
+    printf '%s\n' "$list"
+
+Sample outputs:
+
+    php-pear
+    php5-cli
+    php5-common
+    php5-fpm
+    php5-gd
+    php5-json
+    php5-memcache
+    php5-mysql
+    php5-readline
+    php5-suhosin-extension
+
+You can now read from $list and install the packages:
+
+    #!/bin/bash
+    # BASH can iterate over the $list variable using a "here string" #
+    while IFS= read -r pkg
+    do
+        printf 'Installing php package %s...\n' "$pkg"
+        /usr/bin/apt-get -qq install "$pkg"
+    done <<< "$list"
+    printf '*** Do not forget to run php5enmod and restart the server (httpd or php5-fpm) ***\n'
+
+Sample outputs:
+
+    Installing php package php-pear...
+    Installing php package php5-cli...
+    Installing php package php5-common...
+    Installing php package php5-fpm...
+    Installing php package php5-gd...
+    Installing php package php5-json...
+    Installing php package php5-memcache...
+    Installing php package php5-mysql...
+    Installing php package php5-readline...
+    Installing php package php5-suhosin-extension...
+ *** Do not forget to run php5enmod and restart the server (httpd or php5-fpm) *** + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/unix-howto-read-line-by-line-from-file/ + +作者:[作者名][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file From 11cf23fc623947c5939ec69b51c0809268121d8c Mon Sep 17 00:00:00 2001 From: KS Date: Thu, 27 Aug 2015 17:35:37 +0800 Subject: [PATCH 334/697] Create 20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md --- ... Tool to Setup IPsec Based VPN in Linux.md | 113 ++++++++++++++++++ 1 file changed, 113 insertions(+) create mode 100644 translated/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md diff --git a/translated/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md b/translated/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md new file mode 100644 index 0000000000..3c16463951 --- /dev/null +++ b/translated/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md @@ -0,0 +1,113 @@ +安装Strongswan - Linux上一个基于IPsec的vpn工具 +================================================================================ +IPsec是一个提供网络层安全的标准。它包含认证头(AH)和安全负载封装(ESP)组件。AH提供包的完整性,ESP组件提供包的保密性。IPsec确保了在网络层的安全特性。 + +- 保密性 +- 数据包完整性 +- 来源不可抵赖性 +- 重放攻击防护 + +[Strongswan][1]是一个IPsec协议实现的开源代码,Strongswan代表强壮开源广域网(StrongS/WAN)。它支持IPsec的VPN两个版本的密钥自动交换(网络密钥交换(IKE)V1和V2)。 + +Strongswan基本上提供了自动交换密钥共享VPN两个节点或网络,然后它使用Linux内核的IPsec(AH和ESP)实现。密钥共享使用了IKE机制的特性使用ESP编码数据。在IKE阶段,strongswan使用OpenSSL加密算法(AES,SHA等等)和其他加密类库。无论如何,ESP组成IPsec使用的安全算法,它是Linux内核实现的。Strongswan的主要特性是下面这些。 + +- x.509证书或基于预共享密钥认证 +- 支持IKEv1和IKEv2密钥交换协议 +- 可选内置插件和库的完整性和加密测试 +- 支持椭圆曲线DH群体和ECDSA证书 +- 在智能卡上存储RSA私钥和证书 + +它能被使用在客户端或服务器(road warrior模式)和网关到网关的情景。 + +### 如何安装 ### + 
+几乎所有的Linux发行版都支持Strongswan的二进制包。在这个教程,我们将从二进制包安装strongswan也编译strongswan合适的特性的源代码。 + +### 使用二进制包 ### + +可以使用以下命令安装Strongswan到Ubuntu 14.04 LTS + + $sudo aptitude install strongswan + +![安装strongswan](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-binary.png) + +strongswan的全局配置(strongswan.conf)文件和ipsec配置(ipsec.conf/ipsec.secrets)文件都在/etc/目录下。 + +### strongswan源码编译安装的依赖包 ### + +- GMP(strongswan使用的Mathematical/Precision 库) +- OpenSSL(加密算法在这个库里) +- PKCS(1,7,8,11,12)(证书编码和智能卡与Strongswan集成) + +#### 步骤 #### + +**1)** 在终端使用下面命令到/usr/src/目录 + + $cd /usr/src + +**2)** 用下面命令从strongswan网站下载源代码 + + $sudo wget http://download.strongswan.org/strongswan-5.2.1.tar.gz + +(strongswan-5.2.1.tar.gz 是最新版。) + +![下载软件](http://blog.linoxide.com/wp-content/uploads/2014/12/download_strongswan.png) + +**3)** 用下面命令提取下载软件,然后进入目录。 + + $sudo tar –xvzf strongswan-5.2.1.tar.gz; cd strongswan-5.2.1 + +**4)** 使用configure命令配置strongswan每个想要的选项。 + + ./configure --prefix=/usr/local -–enable-pkcs11 -–enable-openssl + +![检查strongswan包](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-configure.png) + +如果GMP库没有安装,然后配置脚本将会发生下面的错误。 + +![GMP library error](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-error.png) + +因此,首先,使用下面命令安装GMP库然后执行配置脚本。 + +![gmp installation](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-installation1.png) + +无论如何,如果GMP已经安装而且还一致报错,然后在Ubuntu上使用下面命令创建libgmp.so库的软连到/usr/lib,/lib/,/usr/lib/x86_64-linux-gnu/路径下。 + + $ sudo ln -s /usr/lib/x86_64-linux-gnu/libgmp.so.10.1.3 /usr/lib/x86_64-linux-gnu/libgmp.so + +![softlink of libgmp.so library](http://blog.linoxide.com/wp-content/uploads/2014/12/softlink.png) + +创建libgmp.so软连后,再执行./configure脚本也许就找到gmp库了。无论如何,gmp头文件也许发生其他错误,像下面这样。 + +![GMP header file issu](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-header.png) + +为解决上面的错误,使用下面命令安装libgmp-dev包 + + $sudo aptitude install libgmp-dev + +![Installation of Development library of 
GMP](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-dev.png) + +安装gmp的开发库后,在运行一遍配置脚本,如果没有发生错误,则将看见下面的这些输出。 + +![Output of Configure scirpt](http://blog.linoxide.com/wp-content/uploads/2014/12/successful-run.png) + +使用下面的命令编译安装strongswan。 + + $ sudo make ; sudo make install + +安装strongswan后,全局配置(strongswan.conf)和ipsec策略/密码配置文件(ipsec.conf/ipsec.secretes)被放在**/usr/local/etc**目录。 + +根据我们的安全需要Strongswan可以用作隧道或者传输模式。它提供众所周知的site-2-site模式和road warrior模式的VPN。它很容易使用在Cisco,Juniper设备上。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/security/install-strongswan/ + +作者:[nido][a] +译者:[wyangsun](https://github.com/wyangsun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/naveeda/ +[1]:https://www.strongswan.org/ From 9be5244464896eeceaa76ca8151c7f032d188b28 Mon Sep 17 00:00:00 2001 From: KS Date: Thu, 27 Aug 2015 17:35:55 +0800 Subject: [PATCH 335/697] Delete 20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md --- ... Tool to Setup IPsec Based VPN in Linux.md | 114 ------------------ 1 file changed, 114 deletions(-) delete mode 100644 sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md diff --git a/sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md b/sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md deleted file mode 100644 index cd9ee43213..0000000000 --- a/sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md +++ /dev/null @@ -1,114 +0,0 @@ -wyangsun translating -Install Strongswan - A Tool to Setup IPsec Based VPN in Linux -================================================================================ -IPsec is a standard which provides the security at network layer. 
It consist of authentication header (AH) and encapsulating security payload (ESP) components. AH provides the packet Integrity and confidentiality is provided by ESP component . IPsec ensures the following security features at network layer. - -- Confidentiality -- Integrity of packet -- Source Non. Repudiation -- Replay attack protection - -[Strongswan][1] is an open source implementation of IPsec protocol and Strongswan stands for Strong Secure WAN (StrongS/WAN). It supports the both version of automatic keying exchange in IPsec VPN (Internet keying Exchange (IKE) V1 & V2). - -Strongswan basically provides the automatic keying sharing between two nodes/gateway of the VPN and after that it uses the Linux Kernel implementation of IPsec (AH & ESP). Key shared using IKE mechanism is further used in the ESP for the encryption of data. In IKE phase, strongswan uses the encryption algorithms (AES,SHA etc) of OpenSSL and other crypto libraries. However, ESP component of IPsec uses the security algorithm which are implemented in the Linux Kernel. The main features of Strongswan are given below. - -- 509 certificates or pre-shared keys based Authentication -- Support of IKEv1 and IKEv2 key exchange protocols -- Optional built-in integrity and crypto tests for plugins and libraries -- Support of elliptic curve DH groups and ECDSA certificates -- Storage of RSA private keys and certificates on a smartcard. - -It can be used in the client / server (road warrior) and gateway to gateway scenarios. - -### How to Install ### - -Almost all Linux distro’s, supports the binary package of Strongswan. In this tutorial, we will install the strongswan from binary package and also the compilation of strongswan source code with desirable features. - -### Using binary package ### - -Strongswan can be installed using following command on Ubuntu 14.04 LTS . 
- - $sudo aptitude install strongswan - -![Installation of strongswan](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-binary.png) - -The global configuration (strongswan.conf) file and ipsec configuration (ipsec.conf/ipsec.secrets) files of strongswan are under /etc/ directory. - -### Pre-requisite for strongswan source compilation & installation ### - -- GMP (Mathematical/Precision Library used by strongswan) -- OpenSSL (Crypto Algorithms from this library) -- PKCS (1,7,8,11,12)(Certificate encoding and smart card integration with Strongswan ) - -#### Procedure #### - -**1)** Go to /usr/src/ directory using following command in the terminal. - - $cd /usr/src - -**2)** Download the source code from strongswan site suing following command - - $sudo wget http://download.strongswan.org/strongswan-5.2.1.tar.gz - -(strongswan-5.2.1.tar.gz is the latest version.) - -![Downloading software](http://blog.linoxide.com/wp-content/uploads/2014/12/download_strongswan.png) - -**3)** Extract the downloaded software and go inside it using following command. - - $sudo tar –xvzf strongswan-5.2.1.tar.gz; cd strongswan-5.2.1 - -**4)** Configure the strongswan as per desired options using configure command. - - ./configure --prefix=/usr/local -–enable-pkcs11 -–enable-openssl - -![checking packages for strongswan](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-configure.png) - -If GMP library is not installed, then configure script will generate following error. - -![GMP library error](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-error.png) - -Therefore, first of all, install the GMP library using following command and then run the configure script. - -![gmp installation](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-installation1.png) - -However, if GMP is already installed and still above error exists then create soft link of libgmp.so library at /usr/lib , /lib/, /usr/lib/x86_64-linux-gnu/ paths in Ubuntu using following command. 
- - $ sudo ln -s /usr/lib/x86_64-linux-gnu/libgmp.so.10.1.3 /usr/lib/x86_64-linux-gnu/libgmp.so - -![softlink of libgmp.so library](http://blog.linoxide.com/wp-content/uploads/2014/12/softlink.png) - -After the creation of libgmp.so softlink, again run the ./configure script and it may find the gmp library. However, it may generate another error of gmp header file which is shown the following figure. - -![GMP header file issu](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-header.png) - -Install the libgmp-dev package using following command for the solution of above error. - - $sudo aptitude install libgmp-dev - -![Installation of Development library of GMP](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-dev.png) - -After installation of development package of gmp library, again run the configure script and if it does not produce any error, then the following output will be displayed. - -![Output of Configure scirpt](http://blog.linoxide.com/wp-content/uploads/2014/12/successful-run.png) - -Type the following commands for the compilation and installation of strongswan. - - $ sudo make ; sudo make install - -After the installation of strongswan , the Global configuration (strongswan.conf) and ipsec policy/secret configuration files (ipsec.conf/ipsec.secretes) are placed in **/usr/local/etc** directory. - -Strongswan can be used as tunnel or transport mode depends on our security need. It provides well known site-2-site and road warrior VPNs. It can be use easily with Cisco,Juniper devices. 
- --------------------------------------------------------------------------------- - -via: http://linoxide.com/security/install-strongswan/ - -作者:[nido][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/naveeda/ -[1]:https://www.strongswan.org/ From 217911c8876de1313460771cfa1f1be3e0c7b33b Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Thu, 27 Aug 2015 17:55:34 +0800 Subject: [PATCH 336/697] Update 20150827 Linux or UNIX--Bash Read a File Line By Line.md --- .../20150827 Linux or UNIX--Bash Read a File Line By Line.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md b/sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md index 971adc0a0c..c0a4b6c27c 100644 --- a/sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md +++ b/sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Linux/UNIX: Bash Read a File Line By Line ================================================================================ How do I read a file line by line under a Linux or UNIX-like system using KSH or BASH shell? 
@@ -159,4 +160,4 @@ via: http://www.cyberciti.biz/faq/unix-howto-read-line-by-line-from-file/ 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From f0c171116875e4ea6cb783bda1dcdfa615d0644f Mon Sep 17 00:00:00 2001 From: KS Date: Thu, 27 Aug 2015 19:32:42 +0800 Subject: [PATCH 337/697] Update 20150826 How to set up a system status page of your infrastructure.md --- ...ow to set up a system status page of your infrastructure.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150826 How to set up a system status page of your infrastructure.md b/sources/tech/20150826 How to set up a system status page of your infrastructure.md index 44fb4ed8d5..f696e91638 100644 --- a/sources/tech/20150826 How to set up a system status page of your infrastructure.md +++ b/sources/tech/20150826 How to set up a system status page of your infrastructure.md @@ -1,3 +1,4 @@ +wyangsun translating How to set up a system status page of your infrastructure ================================================================================ If you are a system administrator who is responsible for critical IT infrastructure or services of your organization, you will understand the importance of effective communication in your day-to-day tasks. Suppose your production storage server is on fire. You want your entire team on the same page in order to resolve the issue as fast as you can. While you are at it, you don't want half of all users contacting you asking why they cannot access their documents. When a scheduled maintenance is coming up, you want to notify interested parties of the event ahead of the schedule, so that unnecessary support tickets can be avoided. 
@@ -291,4 +292,4 @@ via: http://xmodulo.com/setup-system-status-page.html [3]:http://ask.xmodulo.com/install-remi-repository-centos-rhel.html [4]:http://xmodulo.com/install-lamp-stack-centos.html [5]:http://xmodulo.com/configure-virtual-hosts-apache-http-server.html -[6]:http://xmodulo.com/monitor-common-services-nagios.html \ No newline at end of file +[6]:http://xmodulo.com/monitor-common-services-nagios.html From 720999b380ca95fe4007b5e817639f56f7b23f5d Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 27 Aug 2015 22:42:14 +0800 Subject: [PATCH 338/697] PUB:20150826 How to Run Kali Linux 2.0 In Docker Container @geekpi --- ... Run Kali Linux 2.0 In Docker Container.md | 74 +++++++++++++++++++ ... Run Kali Linux 2.0 In Docker Container.md | 74 ------------------- 2 files changed, 74 insertions(+), 74 deletions(-) create mode 100644 published/20150826 How to Run Kali Linux 2.0 In Docker Container.md delete mode 100644 translated/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md diff --git a/published/20150826 How to Run Kali Linux 2.0 In Docker Container.md b/published/20150826 How to Run Kali Linux 2.0 In Docker Container.md new file mode 100644 index 0000000000..83248bb51e --- /dev/null +++ b/published/20150826 How to Run Kali Linux 2.0 In Docker Container.md @@ -0,0 +1,74 @@ +如何在 Docker 容器中运行 Kali Linux 2.0 +================================================================================ +### 介绍 ### + +Kali Linux 是一个对于安全测试人员和白帽的一个知名操作系统。它带有大量安全相关的程序,这让它很容易用于渗透测试。最近,[Kali Linux 2.0][1] 发布了,它被认为是这个操作系统最重要的一次发布。另一方面,Docker 技术由于它的可扩展性和易用性让它变得很流行。Dokcer 让你非常容易地将你的程序带给你的用户。好消息是你可以通过 Docker 运行Kali Linux 了,让我们看看该怎么做 :) + +### 在 Docker 中运行 Kali Linux 2.0 ### + +**相关提示** + +> 如果你还没有在系统中安装docker,你可以运行下面的命令: + +> **对于 Ubuntu/Linux Mint/Debian:** + +> sudo apt-get install docker + +> **对于 Fedora/RHEL/CentOS:** + +> sudo yum install docker + +> **对于 Fedora 22:** + +> dnf install docker + +> 你可以运行下面的命令来启动docker: + +> sudo docker start + +首先运行下面的命令确保 Docker 服务运行正常: + 
+ sudo docker status + +Kali Linux 的开发团队已将 Kali Linux 的 docker 镜像上传了,只需要输入下面的命令来下载镜像。 + + docker pull kalilinux/kali-linux-docker + +![Pull Kali Linux docker](http://linuxpitstop.com/wp-content/uploads/2015/08/129.png) + +下载完成后,运行下面的命令来找出你下载的 docker 镜像的 ID。 + + docker images + +![Kali Linux Image ID](http://linuxpitstop.com/wp-content/uploads/2015/08/230.png) + +现在运行下面的命令来从镜像文件启动 kali linux docker 容器(这里需用正确的镜像ID替换)。 + + docker run -i -t 198cd6df71ab3 /bin/bash + +它会立刻启动容器并且让你登录到该操作系统,你现在可以在 Kaili Linux 中工作了。 + +![Kali Linux Login](http://linuxpitstop.com/wp-content/uploads/2015/08/328.png) + +你可以在容器外面通过下面的命令来验证容器已经启动/运行中了: + + docker ps + +![Docker Kali](http://linuxpitstop.com/wp-content/uploads/2015/08/421.png) + +### 总结 ### + +Docker 是一种最聪明的用来部署和分发包的方式。Kali Linux docker 镜像非常容易上手,也不会消耗很大的硬盘空间,这样也可以很容易地在任何安装了 docker 的操作系统上测试这个很棒的发行版了。 + +-------------------------------------------------------------------------------- + +via: http://linuxpitstop.com/run-kali-linux-2-0-in-docker-container/ + +作者:[Aun][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linuxpitstop.com/author/aun/ +[1]:https://linux.cn/article-6005-1.html diff --git a/translated/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md b/translated/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md deleted file mode 100644 index 5c65ec0286..0000000000 --- a/translated/tech/20150826 How to Run Kali Linux 2.0 In Docker Container.md +++ /dev/null @@ -1,74 +0,0 @@ -如何在Docker容器中运行Kali Linux 2.0 -================================================================================ -### 介绍 ### - -Kali Linux是一个对于安全测试人员和白帽的一个知名的操作系统。它带有大量安全相关的程序,这让它很容易用于渗透测试。最近,[Kali Linux 2.0][1]发布了,并且它被认为是这个操作系统最重要的一次发布。另一方面,Docker技术由于它的可扩展性和易用性让它变得很流行。Dokcer让你非常容易地将你的程序带给你的用户。好消息是你可以通过Docker运行Kali Linux了,让我们看看该怎么做:) - -### 在Docker中运行Kali Linux 2.0 ### - -**相关提示** - 
-如果你还没有在系统中安装docker,你可以运行下面的命令: - -**对于 Ubuntu/Linux Mint/Debian:** - - sudo apt-get install docker - -**对于 Fedora/RHEL/CentOS:** - - sudo yum install docker - -**对于 Fedora 22:** - - dnf install docker - -你可以运行下面的命令来启动docker: - - sudo docker start - -首先运行下面的命令确保服务正在运行: - - sudo docker status - -Kali Linux的开发团队已将Kali Linux的docker镜像上传了,只需要输入下面的命令来下载镜像。 - - docker pull kalilinux/kali-linux-docker - -![Pull Kali Linux docker](http://linuxpitstop.com/wp-content/uploads/2015/08/129.png) - -下载完成后,运行下面的命令来找出你下载的docker镜像的ID。 - - docker images - -![Kali Linux Image ID](http://linuxpitstop.com/wp-content/uploads/2015/08/230.png) - -现在运行下面的命令来从镜像文件启动kali linux docker容器(这里用正确的镜像ID替换)。 - - docker run -i -t 198cd6df71ab3 /bin/bash - -它会立刻启动容器并且会登录操作系统,你现在可以在Kaili Linux中工作了。 - -![Kali Linux Login](http://linuxpitstop.com/wp-content/uploads/2015/08/328.png) - -你可以通过下面的命令来验证通气已经启动/运行中了: - - docker ps - -![Docker Kali](http://linuxpitstop.com/wp-content/uploads/2015/08/421.png) - -### 总结 ### - -Docker是一种最聪明的用来部署和分发包的方式。Kali Linux docker镜像非常容易上手,也不会消耗很大的硬盘空间,这样也容易地在任何安装了docker的操作系统上测试这个很棒的发行版了。 - --------------------------------------------------------------------------------- - -via: http://linuxpitstop.com/run-kali-linux-2-0-in-docker-container/ - -作者:[Aun][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linuxpitstop.com/author/aun/ -[1]:http://linuxpitstop.com/install-kali-linux-2-0/ From b790e5703a27c4b45474431fc96e4922aec600de Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 27 Aug 2015 23:09:15 +0800 Subject: [PATCH 339/697] PUB:Linux and Unix Test Disk IO Performance With dd Command @DongShuaike --- ...est Disk IO Performance With dd Command.md | 69 ++++++++++--------- 1 file changed, 38 insertions(+), 31 deletions(-) rename translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD => published/Linux and Unix Test Disk IO 
Performance With dd Command.md (70%) diff --git a/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD b/published/Linux and Unix Test Disk IO Performance With dd Command.md similarity index 70% rename from translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD rename to published/Linux and Unix Test Disk IO Performance With dd Command.md index be5986b78e..be96c3941b 100644 --- a/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD +++ b/published/Linux and Unix Test Disk IO Performance With dd Command.md @@ -1,24 +1,25 @@ -使用dd命令在Linux和Unix环境下进行硬盘I/O性能检测 +使用 dd 命令进行硬盘 I/O 性能检测 ================================================================================ + +如何使用dd命令测试我的硬盘性能?如何在Linux操作系统下检测硬盘的读写速度? 你可以使用以下命令在一个Linux或类Unix操作系统上进行简单的I/O性能测试。 -- **dd命令** :它被用来在Linux和类Unix系统下对硬盘设备进行写性能的检测。 -- **hparm命令**:它被用来获取或设置硬盘参数,包括测试读性能以及缓存性能等。 +- **dd命令** :它被用来在Linux和类Unix系统下对硬盘设备进行写性能的检测。 +- **hdparm命令**:它用来在基于 Linux 的系统上获取或设置硬盘参数,包括测试读性能以及缓存性能等。 在这篇指南中,你将会学到如何使用dd命令来测试硬盘性能。 ### 使用dd命令来监控硬盘的读写性能:### -- 打开shell终端(这里貌似不能翻译为终端提示符)。 -- 通过ssh登录到远程服务器。 +- 打开shell终端。 +- 或者通过ssh登录到远程服务器。 - 使用dd命令来测量服务器的吞吐率(写速度) `dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync` - 使用dd命令测量服务器延迟 `dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync` ####理解dd命令的选项### -在这个例子当中,我将使用搭载Ubuntu Linux 14.04 LTS系统的RAID-10(配有SAS SSD的Adaptec 5405Z)服务器阵列来运行。基本语法为: +在这个例子当中,我将使用搭载Ubuntu Linux 14.04 LTS系统的RAID-10(配有SAS SSD的Adaptec 5405Z)服务器阵列来运行。基本语法为: dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync ## GNU dd语法 ## @@ -29,18 +30,19 @@ 输出样例: ![Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd](http://s0.cyberciti.org/uploads/faq/2015/08/dd-server-test-io-speed-output.jpg) -Fig.01: 使用dd命令获取的服务器吞吐率 + +*图01: 使用dd命令获取的服务器吞吐率* 请各位注意在这个实验中,我们写入一个G的数据,可以发现,服务器的吞吐率是135 MB/s,这其中 -- `if=/dev/zero (if=/dev/input.file)` 
:用来设置dd命令读取的输入文件名。 -- `of=/tmp/test1.img (of=/path/to/output.file)` :dd命令将input.file写入的输出文件的名字。 -- `bs=1G (bs=block-size)` :设置dd命令读取的块的大小。例子中为1个G。 -- `count=1 (count=number-of-blocks)`: dd命令读取的块的个数。 -- `oflag=dsync (oflag=dsync)` :使用同步I/O。不要省略这个选项。这个选项能够帮助你去除caching的影响,以便呈现给你精准的结果。 +- `if=/dev/zero` (if=/dev/input.file) :用来设置dd命令读取的输入文件名。 +- `of=/tmp/test1.img` (of=/path/to/output.file):dd命令将input.file写入的输出文件的名字。 +- `bs=1G` (bs=block-size) :设置dd命令读取的块的大小。例子中为1个G。 +- `count=1` (count=number-of-blocks):dd命令读取的块的个数。 +- `oflag=dsync` (oflag=dsync) :使用同步I/O。不要省略这个选项。这个选项能够帮助你去除caching的影响,以便呈现给你精准的结果。 - `conv=fdatasyn`: 这个选项和`oflag=dsync`含义一样。 -在这个例子中,一共写了1000次,每次写入512字节来获得RAID10服务器的延迟时间: +在下面这个例子中,一共写了1000次,每次写入512字节来获得RAID10服务器的延迟时间: dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync @@ -50,11 +52,11 @@ Fig.01: 使用dd命令获取的服务器吞吐率 1000+0 records out 512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s -请注意服务器的吞吐率以及延迟时间也取决于服务器/应用的加载。所以我推荐你在一个刚刚重启过并且处于峰值时间的服务器上来运行测试,以便得到更加准确的度量。现在你可以在你的所有设备上互相比较这些测试结果了。 +请注意服务器的吞吐率以及延迟时间也取决于服务器/应用的负载。所以我推荐你在一个刚刚重启过并且处于峰值时间的服务器上来运行测试,以便得到更加准确的度量。现在你可以在你的所有设备上互相比较这些测试结果了。 -####为什么服务器的吞吐率和延迟时间都这么差?### +###为什么服务器的吞吐率和延迟时间都这么差?### -低的数值并不意味着你在使用差劲的硬件。可能是HARDWARE RAID10的控制器缓存导致的。 +低的数值并不意味着你在使用差劲的硬件。可能是硬件 RAID10的控制器缓存导致的。 使用hdparm命令来查看硬盘缓存的读速度。 @@ -79,11 +81,12 @@ Fig.01: 使用dd命令获取的服务器吞吐率 输出样例: ![Fig.02: Linux hdparm command to test reading and caching disk performance](http://s0.cyberciti.org/uploads/faq/2015/08/hdparam-output.jpg) -Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令 -请再一次注意由于文件文件操作的缓存属性,你将总是会看到很高的读速度。 +*图02: 检测硬盘读入以及缓存性能的Linux hdparm命令* -**使用dd命令来测试读入速度** +请再次注意,由于文件文件操作的缓存属性,你将总是会看到很高的读速度。 + +###使用dd命令来测试读取速度### 为了获得精确的读测试数据,首先在测试前运行下列命令,来将缓存设置为无效: @@ -91,11 +94,11 @@ Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令 echo 3 | sudo tee /proc/sys/vm/drop_caches time time dd if=/path/to/bigfile of=/dev/null bs=8k -**笔记本上的示例** +####笔记本上的示例#### 运行下列命令: - ### Cache存在的Debian系统笔记本吞吐率### + ### 带有Cache的Debian系统笔记本吞吐率### dd if=/dev/zero of=/tmp/laptop.bin 
bs=1G count=1 oflag=direct ###使cache失效### @@ -104,10 +107,11 @@ Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令 ###没有Cache的Debian系统笔记本吞吐率### dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct -**苹果OS X Unix(Macbook pro)的例子** +####苹果OS X Unix(Macbook pro)的例子#### GNU dd has many more options but OS X/BSD and Unix-like dd command need to run as follows to test real disk I/O and not memory add sync option as follows: -GNU dd命令有其他许多选项但是在 OS X/BSD 以及类Unix中, dd命令需要像下面那样执行来检测去除掉内存地址同步的硬盘真实I/O性能: + +GNU dd命令有其他许多选项,但是在 OS X/BSD 以及类Unix中, dd命令需要像下面那样执行来检测去除掉内存地址同步的硬盘真实I/O性能: ## 运行这个命令2-3次来获得更好地结果 ### time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync" @@ -124,26 +128,29 @@ GNU dd命令有其他许多选项但是在 OS X/BSD 以及类Unix中, dd命令 本人Macbook Pro的写速度是635346520字节(635.347MB/s)。 -**不喜欢用命令行?^_^** +###不喜欢用命令行?\^_^### 你可以在Linux或基于Unix的系统上使用disk utility(gnome-disk-utility)这款工具来得到同样的信息。下面的那个图就是在我的Fedora Linux v22 VM上截取的。 -**图形化方法** +####图形化方法#### 点击“Activites”或者“Super”按键来在桌面和Activites视图间切换。输入“Disks” ![Fig.03: Start the Gnome disk utility](http://s0.cyberciti.org/uploads/faq/2015/08/disk-1.jpg) -Fig.03: 打开Gnome硬盘工具 + +*图03: 打开Gnome硬盘工具* 在左边的面板上选择你的硬盘,点击configure按钮,然后点击“Benchmark partition”: ![Fig.04: Benchmark disk/partition](http://s0.cyberciti.org/uploads/faq/2015/08/disks-2.jpg) -Fig.04: 评测硬盘/分区 -最后,点击“Start Benchmark...”按钮(你可能被要求输入管理员用户名和密码): +*图04: 评测硬盘/分区* + +最后,点击“Start Benchmark...”按钮(你可能需要输入管理员用户名和密码): ![Fig.05: Final benchmark result](http://s0.cyberciti.org/uploads/faq/2015/08/disks-3.jpg) -Fig.05: 最终的评测结果 + +*图05: 最终的评测结果* 如果你要问,我推荐使用哪种命令和方法? 
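上面介绍的两类测试(吞吐率与延迟)可以整理成一个简单的示意脚本。需要说明的是,这只是一个演示用的草稿:为安全起见,它只向临时目录写入很小的测试文件(文中测试用的是 1GB),脚本里的文件大小和路径都是为演示而假设的,实际测量时请换回文中的参数:

```shell
#!/bin/sh
# 演示脚本:用 dd 粗略测量写吞吐率与写延迟。
# 注意:这里只写入临时目录下的小文件(大小为演示而假设),避免触碰真实数据。
set -e

TMPDIR=$(mktemp -d)
trap 'rm -rf "$TMPDIR"' EXIT   # 脚本退出时清理测试文件

echo "== 吞吐率测试(一次写入 8MB,写完后同步到磁盘)=="
dd if=/dev/zero of="$TMPDIR/test1.img" bs=1M count=8 conv=fdatasync 2>&1 | tail -n 1

echo "== 延迟测试(写入 1000 个 512 字节的块,每块都同步)=="
dd if=/dev/zero of="$TMPDIR/test2.img" bs=512 count=1000 oflag=dsync 2>&1 | tail -n 1
```

与文中一样,oflag=dsync 和 conv=fdatasync 用于去除缓存的影响;较小的 bs 配合 dsync 反映的是延迟,较大的 bs 反映的是吞吐率(此处假设使用的是 GNU 版本的 dd)。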
@@ -158,7 +165,7 @@ via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd 作者:Vivek Gite 译者:[DongShuaike](https://github.com/DongShuaike) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 8e1b11caca9e5c6cf499f4f42bfabee5e8a3eca0 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 27 Aug 2015 23:14:15 +0800 Subject: [PATCH 340/697] PUB:20150821 Linux FAQs with Answers--How to check MariaDB server version @geekpi --- ...s with Answers--How to check MariaDB server version.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) rename {translated/tech => published}/20150821 Linux FAQs with Answers--How to check MariaDB server version.md (75%) diff --git a/translated/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md b/published/20150821 Linux FAQs with Answers--How to check MariaDB server version.md similarity index 75% rename from translated/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md rename to published/20150821 Linux FAQs with Answers--How to check MariaDB server version.md index 36ea2d15d6..899c2775de 100644 --- a/translated/tech/20150821 Linux FAQs with Answers--How to check MariaDB server version.md +++ b/published/20150821 Linux FAQs with Answers--How to check MariaDB server version.md @@ -1,8 +1,8 @@ -Linux有问必答--如何检查MatiaDB服务端版本 +Linux有问必答:如何检查MariaDB服务端版本 ================================================================================ > **提问**: 我使用的是一台运行MariaDB的VPS。我该如何检查MariaDB服务端的版本? 
-你需要知道数据库版本的情况有:当你生你数据库或者为服务器打补丁。这里有几种方法找出MariaDB版本的方法。 +有时候你需要知道你的数据库版本,比如当你升级你数据库或对已知缺陷打补丁时。这里有几种方法找出MariaDB版本的方法。 ### 方法一 ### @@ -16,7 +16,7 @@ Linux有问必答--如何检查MatiaDB服务端版本 ### 方法二 ### -如果你不能访问MariaDB,那么你就不能用第一种方法。这种情况下你可以根据MariaDB的安装包的版本来推测。这种方法只有在MariaDB通过包管理器安装的才有用。 +如果你不能访问MariaDB服务器,那么你就不能用第一种方法。这种情况下你可以根据MariaDB的安装包的版本来推测。这种方法只有在MariaDB通过包管理器安装的才有用。 你可以用下面的方法检查MariaDB的安装包。 @@ -42,7 +42,7 @@ via: http://ask.xmodulo.com/check-mariadb-server-version.html 作者:[Dan Nanni][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 966aeb1be5ec49ec838c8a90c934208a588b58c4 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Thu, 27 Aug 2015 23:34:29 +0800 Subject: [PATCH 341/697] Delete Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md --- ...Array and Removing Failed Disks in Raid.md | 180 ------------------ 1 file changed, 180 deletions(-) delete mode 100644 sources/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md diff --git a/sources/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md b/sources/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md deleted file mode 100644 index 76039f4371..0000000000 --- a/sources/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md +++ /dev/null @@ -1,180 +0,0 @@ -struggling 翻译中 -Growing an Existing RAID Array and Removing Failed Disks in Raid – Part 7 -================================================================================ -Every newbies will get confuse of the word array. Array is just a collection of disks. In other words, we can call array as a set or group. Just like a set of eggs containing 6 numbers. Likewise RAID Array contains number of disks, it may be 2, 4, 6, 8, 12, 16 etc. 
Hope now you know what Array is. - -Here we will see how to grow (extend) an existing array or raid group. For example, if we are using 2 disks in an array to form a raid 1 set, and in some situation if we need more space in that group, we can extend the size of an array using mdadm –grow command, just by adding one of the disk to the existing array. After growing (adding disk to an existing array), we will see how to remove one of the failed disk from array. - -![Grow Raid Array in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Growing-Raid-Array.jpg) - -Growing Raid Array and Removing Failed Disks - -Assume that one of the disk is little weak and need to remove that disk, till it fails let it under use, but we need to add one of the spare drive and grow the mirror before it fails, because we need to save our data. While the weak disk fails we can remove it from array this is the concept we are going to see in this topic. - -#### Features of RAID Growth #### - -- We can grow (extend) the size of any raid set. -- We can remove the faulty disk after growing raid array with new disk. -- We can grow raid array without any downtime. - -Requirements - -- To grow an RAID array, we need an existing RAID set (Array). -- We need extra disks to grow the Array. -- Here I’m using 1 disk to grow the existing array. - -Before we learn about growing and recovering of Array, we have to know about the basics of RAID levels and setups. Follow the below links to know about those setups. - -- [Understanding Basic RAID Concepts – Part 1][1] -- [Creating a Software Raid 0 in Linux – Part 2][2] - -#### My Server Setup #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.230 - Hostname : grow.tecmintlocal.com - 2 Existing Disks : 1 GB - 1 Additional Disk : 1 GB - -Here, my already existing RAID has 2 number of disks with each size is 1GB and we are now adding one more disk whose size is 1GB to our existing raid array. 
- -### Growing an Existing RAID Array ### - -1. Before growing an array, first list the existing Raid array using the following command. - - # mdadm --detail /dev/md0 - -![Check Existing Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Existing-Raid-Array.png) - -Check Existing Raid Array - -**Note**: The above output shows that I’ve already has two disks in Raid array with raid1 level. Now here we are adding one more disk to an existing array, - -2. Now let’s add the new disk “sdd” and create a partition using ‘fdisk‘ command. - - # fdisk /dev/sdd - -Please use the below instructions to create a partition on /dev/sdd drive. - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. -- Then choose ‘1‘ to be the first partition. -- Next press ‘p‘ to print the created partition. -- Here, we are selecting ‘fd‘ as my type is RAID. -- Next press ‘p‘ to print the defined partition. -- Then again use ‘p‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Create-New-sdd-Partition.png) - -Create New sdd Partition - -3. Once new sdd partition created, you can verify it using below command. - - # ls -l /dev/ | grep sd - -![Confirm sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-sdd-Partition.png) - -Confirm sdd Partition - -4. Next, examine the newly created disk for any existing raid, before adding to the array. - - # mdadm --examine /dev/sdd1 - -![Check Raid on sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-sdd-Partition.png) - -Check Raid on sdd Partition - -**Note**: The above output shows that the disk has no super-blocks detected, means we can move forward to add a new disk to an existing array. - -4. To add the new partition /dev/sdd1 in existing array md0, use the following command. 
- - # mdadm --manage /dev/md0 --add /dev/sdd1 - -![Add Disk To Raid-Array](http://www.tecmint.com/wp-content/uploads/2014/11/Add-Disk-To-Raid-Array.png) - -Add Disk To Raid-Array - -5. Once the new disk has been added, check for the added disk in our array using. - - # mdadm --detail /dev/md0 - -![Confirm Disk Added to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Disk-Added-To-Raid.png) - -Confirm Disk Added to Raid - -**Note**: In the above output, you can see the drive has been added as a spare. Here, we already having 2 disks in the array, but what we are expecting is 3 devices in array for that we need to grow the array. - -6. To grow the array we have to use the below command. - - # mdadm --grow --raid-devices=3 /dev/md0 - -![Grow Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Raid-Array.png) - -Grow Raid Array - -Now we can see the third disk (sdd1) has been added to array, after adding third disk it will sync the data from other two disks. - - # mdadm --detail /dev/md0 - -![Confirm Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Raid-Array.png) - -Confirm Raid Array - -**Note**: For large size disk it will take hours to sync the contents. Here I have used 1GB virtual disk, so its done very quickly within seconds. - -### Removing Disks from Array ### - -7. After the data has been synced to new disk ‘sdd1‘ from other two disks, that means all three disks now have same contents. - -As I told earlier let’s assume that one of the disk is weak and needs to be removed, before it fails. So, now assume disk ‘sdc1‘ is weak and needs to be removed from an existing array. - -Before removing a disk we have to mark the disk as failed one, then only we can able to remove it. 
- - # mdadm --fail /dev/md0 /dev/sdc1 - # mdadm --detail /dev/md0 - -![Disk Fail in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-Fail-in-Raid-Array.png) - -Disk Fail in Raid Array - -From the above output, we clearly see that the disk was marked as faulty at the bottom. Even its faulty, we can see the raid devices are 3, failed 1 and state was degraded. - -Now we have to remove the faulty drive from the array and grow the array with 2 devices, so that the raid devices will be set to 2 devices as before. - - # mdadm --remove /dev/md0 /dev/sdc1 - -![Remove Disk in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Remove-Disk-in-Raid-Array.png) - -Remove Disk in Raid Array - -8. Once the faulty drive is removed, now we’ve to grow the raid array using 2 disks. - - # mdadm --grow --raid-devices=2 /dev/md0 - # mdadm --detail /dev/md0 - -![Grow Disks in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Disks-in-Raid-Array.png) - -Grow Disks in Raid Array - -From the about output, you can see that our array having only 2 devices. If you need to grow the array again, follow the same steps as described above. If you need to add a drive as spare, mark it as spare so that if the disk fails, it will automatically active and rebuild. - -### Conclusion ### - -In the article, we’ve seen how to grow an existing raid set and how to remove a faulty disk from an array after re-syncing the existing contents. All these steps can be done without any downtime. During data syncing, system users, files and applications will not get affected in any case. - -In next, article I will show you how to manage the RAID, till then stay tuned to updates and don’t forget to add your comments. 
- -------------------------------------------------------------------------------- - -via: http://www.tecmint.com/grow-raid-array-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ \ No newline at end of file From 436e11b38c5e15bf85beee1c5a89187fa6d2261f Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Thu, 27 Aug 2015 23:35:02 +0800 Subject: [PATCH 342/697] Create Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md --- ...Array and Removing Failed Disks in Raid.md | 182 ++++++++++++++++++ 1 file changed, 182 insertions(+) create mode 100644 translated/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md diff --git a/translated/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md b/translated/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md new file mode 100644 index 0000000000..94d18edde2 --- /dev/null +++ b/translated/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md @@ -0,0 +1,182 @@ + +在 Raid 中扩展现有的 RAID 阵列和删除故障的磁盘 - 第7部分 +================================================================================ +每个新手都会对阵列的意思产生疑惑。阵列只是磁盘的一个集合。换句话说,我们可以称阵列为一个集合或一组。就像一组鸡蛋中包含6个。同样 RAID 阵列中包含着多个磁盘,可能是2,4,6,8,12,16等,希望你现在知道了什么是阵列。 + +在这里,我们将看到如何扩展现有的阵列或 raid 组。例如,如果我们在一组 raid 中使用2个磁盘形成一个 raid 1,在某些情况下,如果该组中需要更多的空间,就可以使用 mdadm --grow 命令来扩展阵列大小,只需将一个磁盘加入到现有的阵列中即可。在扩展(添加磁盘到现有的阵列中)后,我们将看看如何从阵列中删除故障的磁盘。 + +![Grow Raid Array in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Growing-Raid-Array.jpg) + +扩展 RAID 阵列和删除故障的磁盘 + 
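在跟着下文动手之前,可以先用一个“干跑”脚本把整个扩展流程过一遍。下面这个小脚本只会把将要用到的 mdadm 命令按顺序打印出来,并不会真正执行(其中 /dev/md0、/dev/sdd1 等设备名沿用本文示例中的假设值,请按你的实际环境替换):

```shell
#!/bin/sh
# “干跑”演练脚本:只打印将要执行的 mdadm 命令,不真正运行。
# /dev/md0 和 /dev/sdd1 为本文示例中的设备名,请按实际环境替换。
MD=/dev/md0
NEW_DISK=/dev/sdd1

run() { echo "将执行: $*"; }   # 把函数体换成 "$@" 即可真正执行(需要 root 权限)

run mdadm --detail "$MD"                    # 查看现有阵列的状态
run mdadm --examine "$NEW_DISK"             # 确认新分区上没有旧的 RAID 超级块
run mdadm --manage "$MD" --add "$NEW_DISK"  # 把新分区加入阵列(先成为备用盘)
run mdadm --grow --raid-devices=3 "$MD"     # 把阵列扩展到 3 个设备
```

确认每一步打印的命令都符合预期后,再以 root 身份按文中的步骤逐条执行真正的命令。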
+假设其中一块磁盘状态不佳,需要删除它。在它彻底损坏之前可以继续使用,但我们需要在其损坏之前添加一块备用磁盘并扩展镜像,因为我们需要保存数据。当磁盘发生故障时我们需要从阵列中删除它,这是这个主题中我们将要学习到的。 + +#### 扩展 RAID 的特性 #### + +- 我们可以增加(扩大)所有 RAID 集合的大小。 +- 我们可以在使用新磁盘扩展 RAID 阵列后删除故障的磁盘。 +- 我们可以在不停机的情况下扩展 RAID 阵列。 + +要求 + +- 为了扩展一个RAID阵列,我们需要已有的 RAID 组(阵列)。 +- 我们需要额外的磁盘来扩展阵列。 +- 在这里,我们使用一块磁盘来扩展现有的阵列。 + +在我们了解扩展和恢复阵列前,我们必须了解有关 RAID 级别和设置的基本知识。点击下面的链接了解这些。 + +- [理解 RAID 的基础概念 – 第一部分][1] +- [在 Linux 中创建软件 Raid 0 – 第二部分][2] + +#### 我的服务器设置 #### + + 操作系统 : CentOS 6.5 Final +  IP地址 : 192.168.0.230 +  主机名 : grow.tecmintlocal.com + 2 块现有磁盘 : 1 GB + 1 块额外磁盘 : 1 GB + +在这里,现有的 RAID 有2块磁盘,每个大小为1GB,我们现在再增加一个磁盘到我们现有的 RAID 阵列中,其大小为1GB。 + +### 扩展现有的 RAID 阵列 ### + +1. 在扩展阵列前,首先使用下面的命令列出现有的 RAID 阵列。 + + # mdadm --detail /dev/md0 + +![Check Existing Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Existing-Raid-Array.png) + +检查现有的 RAID 阵列 + +**注意**: 以上输出显示,已经有了两个磁盘在 RAID 阵列中,级别为 RAID 1。现在我们在这里再增加一个磁盘到现有的阵列。 + +2. 现在让我们添加新的磁盘“sdd”,并使用‘fdisk‘命令来创建分区。 + + # fdisk /dev/sdd + +请使用以下步骤为 /dev/sdd 创建一个新的分区。 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 ‘P’ 来打印创建好的分区。 +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 去修改分区。 +- 键入 ‘fd’ 设置为 Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Create-New-sdd-Partition.png) + +为 sdd 创建新的分区 + +3. 一旦新的 sdd 分区创建完成后,你可以使用下面的命令验证它。 + + # ls -l /dev/ | grep sd + +![Confirm sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-sdd-Partition.png) + +确认 sdd 分区 + +4. 接下来,在添加到阵列前先检查磁盘是否有 RAID 分区。 + + # mdadm --examine /dev/sdd1 + +![Check Raid on sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-sdd-Partition.png) + +在 sdd 分区中检查 raid + +**注意**:以上输出显示,该盘上没有发现 super-blocks,这意味着我们可以将新的磁盘添加到现有阵列。 + +4.
要添加新的分区 /dev/sdd1 到现有的阵列 md0,请使用以下命令。 + + # mdadm --manage /dev/md0 --add /dev/sdd1 + +![Add Disk To Raid-Array](http://www.tecmint.com/wp-content/uploads/2014/11/Add-Disk-To-Raid-Array.png) + +添加磁盘到 Raid 阵列 + +5. 一旦新的磁盘被添加后,在我们的阵列中检查新添加的磁盘。 + + # mdadm --detail /dev/md0 + +![Confirm Disk Added to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Disk-Added-To-Raid.png) + +确认将新磁盘添加到 Raid 中 + +**注意**: 在上面的输出,你可以看到磁盘已经被添加为备用盘(spare)。在这里,我们的阵列中已经有了2个磁盘,但我们期望阵列中有3个磁盘,因此我们需要扩展阵列。 + +6. 要扩展阵列,我们需要使用下面的命令。 + + # mdadm --grow --raid-devices=3 /dev/md0 + +![Grow Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Raid-Array.png) + +扩展 Raid 阵列 + +现在我们可以看到第三块磁盘(sdd1)已被添加到阵列中,在第三块磁盘被添加后,它将从另外两块磁盘上同步数据。 + + # mdadm --detail /dev/md0 + +![Confirm Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Raid-Array.png) + +确认 Raid 阵列 + +**注意**: 对于大容量磁盘,同步数据会需要几个小时。在这里,我们使用的是1GB的虚拟磁盘,所以非常快,在几秒钟内便会完成。 + +### 从阵列中删除磁盘 ### + +7. 在数据被从其他两个磁盘同步到新磁盘‘sdd1‘后,现在三个磁盘中的数据已经相同了。 + +正如我前面所说的,假定一个磁盘出问题了需要被删除。所以,现在假设磁盘‘sdc1‘出问题了,需要从现有阵列中删除。 + +在删除磁盘前我们要将其标记为 failed,然后我们才可以将其删除。 + + # mdadm --fail /dev/md0 /dev/sdc1 + # mdadm --detail /dev/md0 + +![Disk Fail in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-Fail-in-Raid-Array.png) + +在 Raid 阵列中模拟磁盘故障 + +从上面的输出中,我们清楚地看到,磁盘在底部被标记为 faulty。即使它已经损坏,我们仍然可以看到 raid 设备有3个,其中1个损坏,状态(state)是 degraded。 + +现在我们要从阵列中删除这个 faulty 的磁盘,raid 设备将像之前一样恢复为2个设备。 + + # mdadm --remove /dev/md0 /dev/sdc1 + +![Remove Disk in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Remove-Disk-in-Raid-Array.png) + +在 Raid 阵列中删除磁盘 + +8.
一旦故障的磁盘被删除,我们就可以只用剩下的2个磁盘来调整 raid 阵列了。 + + # mdadm --grow --raid-devices=2 /dev/md0 + # mdadm --detail /dev/md0 + +![Grow Disks in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Disks-in-Raid-Array.png) + +在 RAID 阵列扩展磁盘 + +从上面的输出中可以看到,我们的阵列中仅有2个设备。如果你需要再次扩展阵列,按照上面所述的步骤操作即可。如果你需要添加一个磁盘作为备用盘,将其标记为 spare 即可,这样当某个磁盘出现故障时,它会自动顶上去并重建数据。 + +### 结论 ### + +在这篇文章中,我们已经看到了如何扩展现有的 RAID 集合,以及如何在重新同步已有磁盘的数据后从阵列中删除故障磁盘。所有这些步骤都可以不用停机来完成。在数据同步期间,系统用户,文件和应用程序不会受到任何影响。 + +在接下来的文章中,我将告诉你如何管理 RAID,敬请关注更新,不要忘了写评论。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/grow-raid-array-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[2]:http://www.tecmint.com/create-raid0-in-linux/ From 7ae7928abc2cdd47f7e3dabe8cc03a735da4cdad Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 27 Aug 2015 23:35:46 +0800 Subject: [PATCH 343/697] PUB:20150818 How to monitor stock quotes from the command line on Linux @GOLinux --- ...ock quotes from the command line on Linux.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) rename {translated/tech => published}/20150818 How to monitor stock quotes from the command line on Linux.md (75%) diff --git a/translated/tech/20150818 How to monitor stock quotes from the command line on Linux.md b/published/20150818 How to monitor stock quotes from the command line on Linux.md similarity index 75% rename from translated/tech/20150818 How to monitor stock quotes from the command line on Linux.md rename to published/20150818 How to monitor stock quotes from the 
command line on Linux.md +++ b/published/20150818 How to monitor stock quotes from the command line on Linux.md @@ -1,18 +1,19 @@ Linux中通过命令行监控股票报价 ================================================================================ -如果你是那些股票投资者或者交易者中的一员,那么监控证券市场将成为你日常工作中的其中一项任务。最有可能是你会使用一个在线交易平台,这个平台有着一些漂亮的实时图表和全部种类的高级股票分析和交易工具。虽然这种复杂的市场研究工具是任何严肃的证券投资者阅读市场的必备,但是监控最新的股票报价来构建有利可图的投资组合仍然有很长一段路要走。 -如果你是一位长久坐在终端前的全职系统管理员,而证券交易又成了你日常生活中的业余兴趣,那么一个简单地显示实时股票报价的命令行工具会是你的恩赐。 +如果你是那些股票投资者或者交易者中的一员,那么监控证券市场将是你的日常工作之一。最有可能的是你会使用一个在线交易平台,这个平台有着一些漂亮的实时图表和全部种类的高级股票分析和交易工具。虽然这种复杂的市场研究工具是任何严肃的证券投资者了解市场的必备工具,但是监控最新的股票报价来构建有利可图的投资组合仍然有很长一段路要走。 + +如果你是一位长久坐在终端前的全职系统管理员,而证券交易又成了你日常生活中的业余兴趣,那么一个简单地显示实时股票报价的命令行工具会是给你的恩赐。 在本教程中,让我来介绍一个灵巧而简洁的命令行工具,它可以让你在Linux上从命令行监控股票报价。 这个工具叫做[Mop][1]。它是用GO编写的一个轻量级命令行工具,可以极其方便地跟踪来自美国市场的最新股票报价。你可以很轻松地自定义要监控的证券列表,它会在一个基于ncurses的便于阅读的界面显示最新的股票报价。 -**注意**:Mop是通过雅虎金融API获取最新的股票报价的。你必须意识到,他们的的股票报价已知会有15分钟的延时。所以,如果你正在寻找0延时的“实时”股票报价,那么Mop就不是你的菜了。这种“现场”股票报价订阅通常可以通过向一些不开放的私有接口付费获取。对于上面讲得,让我们来看看怎样在Linux环境下使用Mop吧。 +**注意**:Mop是通过雅虎金融API获取最新的股票报价的。你必须意识到,他们的的股票报价已知会有15分钟的延时。所以,如果你正在寻找0延时的“实时”股票报价,那么Mop就不是你的菜了。这种“现场”股票报价订阅通常可以通过向一些不开放的私有接口付费获取。了解这些之后,让我们来看看怎样在Linux环境下使用Mop吧。 ### 安装 Mop 到 Linux ### -由于Mop部署在Go中,你首先需要安装Go语言。如果你还没有安装Go,请参照[此指南][2]将Go安装到你的Linux平台中。请确保按指南中所讲的设置GOPATH环境变量。 +由于Mop是用Go实现的,你首先需要安装Go语言。如果你还没有安装Go,请参照[此指南][2]将Go安装到你的Linux平台中。请确保按指南中所讲的设置GOPATH环境变量。 安装完Go后,继续像下面这样安装Mop。 @@ -42,7 +43,7 @@ Linux中通过命令行监控股票报价 ### 使用Mop来通过命令行监控股票报价 ### -要启动Mop,只需运行名为cmd的命令。 +要启动Mop,只需运行名为cmd的命令(LCTT 译注:这名字实在是……)。 $ cmd @@ -50,7 +51,7 @@ Linux中通过命令行监控股票报价 ![](https://farm6.staticflickr.com/5749/20018949104_c8c64e0e06_c.jpg) -报价显示了像最新价格、交易百分比、每日低/高、52周低/高、股利以及年产量等信息。Mop从[CNN][3]获取市场总览信息,从[雅虎金融][4]获得个股报价,股票报价信息它自己会在终端内周期性更新。 +报价显示了像最新价格、交易百分比、每日低/高、52周低/高、股息以及年收益率等信息。Mop从[CNN][3]获取市场总览信息,从[雅虎金融][4]获得个股报价,股票报价信息它自己会在终端内周期性更新。 ### 自定义Mop中的股票报价 ### @@ -78,7 +79,7 @@ Linux中通过命令行监控股票报价 ### 尾声 ### 
-正如你所见,Mop是一个轻量级的,然而极其方便的证券监控工具。当然,你可以很轻松地从其它别的什么地方,从在线站点,你的智能手机等等访问到股票报价信息。然而,如果你在终端环境中花费大量时间,Mop可以很容易地适应你的工作空间,希望没有让你过多地从你的公罗流程中分心。只要让它在你其中一个终端中运行并保持市场日期持续更新,就让它在那干着吧。 +正如你所见,Mop是一个轻量级的,然而极其方便的证券监控工具。当然,你可以很轻松地从其它别的什么地方,从在线站点,你的智能手机等等访问到股票报价信息。然而,如果你在整天使用终端环境,Mop可以很容易地适应你的工作环境,希望没有让你过多地从你的工作流程中分心。只要让它在你其中一个终端中运行并保持市场数据持续更新,那就够了。 交易快乐! @@ -88,7 +89,7 @@ via: http://xmodulo.com/monitor-stock-quotes-command-line-linux.html 作者:[Dan Nanni][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From b36ab92cb463c9c947bb4070a29c89a11247f4c7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Fri, 28 Aug 2015 02:46:10 +0800 Subject: [PATCH 344/697] =?UTF-8?q?=E3=80=90=E7=BF=BB=E8=AF=91=E5=AE=8C?= =?UTF-8?q?=E6=88=90=E3=80=91RHCSA=20Series--Part=2007?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成第七篇 --- ...Lists) and Mounting Samba or NFS Shares.md | 192 ++++++++++++++++++ 1 file changed, 192 insertions(+) create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md diff --git a/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md b/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md new file mode 100644 index 0000000000..a9c56b1cbe --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md @@ -0,0 +1,192 @@ +[xiqingongzi Translating] +RHCSA 系列:使用 ACL(访问控制列表)和挂载 Samba/NFS 共享 – 第七部分 +================================================================================ +在第六篇文章的最后,我们解释了如何使用 parted 和 SSM 来设置和配置本地文件存储([RHCSA 系列第六部分][1])。 + 
+![Configure ACL's and Mounting NFS / Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ACLs-and-Mounting-NFS-Samba-Shares.png) + +RHCSA 系列第七部分:配置 ACL 及挂载 NFS/Samba 共享 + +我们还讨论了如何创建加密的逻辑卷,并在系统启动时输入密码来挂载它们。另外,我们也提醒过您,应避免在已挂载的文件系统上执行危险的存储管理操作。接下来,我们将回顾 RHEL 7 中最常用的文件系统格式,然后介绍如何挂载和卸载网络文件系统(CIFS 和 NFS)。 +#### 前提 #### + +在开始之前,请确保你有一台在线的 Samba 服务器和一台在线的 NFS 服务器(注意,RHEL 7 将很快不再支持 NFS v2)。 + +在这个指南中,我们将使用一台 IP 为 192.168.0.10 的机器作为服务端,一台 IP 为 192.168.0.18 的 RHEL 7 机器作为客户端。稍后我们会告诉你该安装哪些软件包。 + +### RHEL 7 中的文件系统格式 ### + +从 RHEL 7 开始,XFS 因为其高性能和可拓展性被设置为所有架构上的默认文件系统。在红帽及其合作伙伴测试过的主流硬件上,单个 XFS 文件系统目前最大支持 500TB。 +同时,XFS 默认启用 user_xattr(扩展用户属性)和 acl(POSIX 访问控制列表)这两个挂载选项,这一点与 ext3、ext4 不同(ext2 在 RHEL 7 中已被视为过时),这意味着在挂载 XFS 文件系统时,你不需要在命令行或 /etc/fstab 中显式指定这些选项(如果想禁用它们,则需要显式使用 no_acl 和 no_user_xattr)。 +记住,扩展用户属性可以为文件和目录存储任意的附加信息,如 MIME 类型、字符集或文件编码,而对用户属性的访问权限则由普通的文件权限位来控制。 +#### 访问控制列表 #### + +每一个系统管理员,无论新手还是专家,都熟悉文件和目录的常规权限:它们可以为文件的属主、属组和其他用户分别设置特定的权限(读、写和执行)。如果需要,可以回顾 [RHCSA 系列的第三部分][2]。 +然而,由于标准的 ugo/rwx 权限无法为不同的单个用户配置不同的权限,ACL 便可以为文件和目录设置比常规权限更细致的访问规则。 +事实上,ACL 定义的权限是普通文件权限的一个超集。下面我们来看看它在实际场景中是如何应用的。 +1. ACL 有两种类型:访问 ACL(可应用于特定的文件或目录)和默认 ACL(只能应用于目录)。目录中的文件如果没有设置 ACL,则继承父目录的默认 ACL。 +2. 其次,ACL 可以针对每个用户、每个组,或不属于文件属组的用户进行配置。 +3. ACL 使用 setfacl 来设置(和删除),分别使用 -m 或 -x 选项。 +例如,让我们创建一个名为 tecmint 的组,并添加用户 johndoe 和 davenull: + # groupadd tecmint + # useradd johndoe + # useradd davenull + # usermod -a -G tecmint johndoe + # usermod -a -G tecmint davenull + +让我们确认这两个用户都已属于组 tecmint: + + # id johndoe + # id davenull + +![Verify Users](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Users.png) + +验证用户 + +
We will set the group owner to tecmint and change its default ugo/rwx permissions to 770 (read, write, and execute permissions granted to both the owner and the group owner of the file): +现在让我们创建一个在/mnt下的目录名为playground,和一个名叫testfile.txt文件。我们将tecmint和更改其默认 ugo/rwx 权限为 770组所有者(读,写,和执行给予属主和属组所有者权限): + # mkdir /mnt/playground + # touch /mnt/playground/testfile.txt + # chmod 770 /mnt/playground/testfile.txt + +然后切换用户johndoe和davenull,按照这个顺序,并写入文件: + + echo "My name is John Doe" > /mnt/playground/testfile.txt + echo "My name is Dave Null" >> /mnt/playground/testfile.txt + +到目前为止很好。现在,让我们的用户gacanepa写入文件–和写操作,可以预料到出现的结果。 + +但如果我们真的需要用户gacanepa(不是tecmint组的成员)有/mnt/playground/testfile.txt的写入权限。首先,可能是你的想法是添加用户帐户组tecmint。但这会给他写上所有文件的权限,写入的是该组的权限,我们不希望这样。我们只希望他能写/mnt/playground/ testfile.txt。 + + # touch /mnt/playground/testfile.txt + # chown :tecmint /mnt/playground/testfile.txt + # chmod 777 /mnt/playground/testfile.txt + # su johndoe + $ echo "My name is John Doe" > /mnt/playground/testfile.txt + $ su davenull + $ echo "My name is Dave Null" >> /mnt/playground/testfile.txt + $ su gacanepa + $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt + +![Manage User Permissions](http://www.tecmint.com/wp-content/uploads/2015/04/User-Permissions.png) + +管理用户权限 + +让我们给用户gacanepa添加/mnt/playground/testfile.txt的读写权限 +在root下执行 + + # setfacl -R -m u:gacanepa:rwx /mnt/playground + +您已经成功添加了一个ACL允许gacanepa写入测试文件。然后切换到用户gacanepa试图写入文件: + + $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt + +要查看特定的文件或目录的ACL,使用getfacl: + + # getfacl /mnt/playground/testfile.txt + +![Check ACLs of Files](http://www.tecmint.com/wp-content/uploads/2015/04/Check-ACL-of-File.png) + +检查文件的ACLs + +设置默认ACL目录(它的内容将会继承除非被覆盖),添加d:以前的规则并且指定一个文件名来替代 + # setfacl -m d:o:r /mnt/playground + +以上的ACL将允许用户不在属组属主有/mnt/playground的读权限。注意在getfacl /mnt/playground 之前和之后的改变输出的差异: +![Set Default ACL in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Set-Default-ACL-in-Linux.png) + +Set Default ACL 
in Linux + +[Chapter 20 in the official RHEL 7 Storage Administration Guide][3] 提供了更多ACL的例子,我强烈推荐你去读读它,参考起来非常方便。 + +#### 安装NFS网络共享 #### + +显示在你的服务器的NFS共享可用的列表,您可以使用showmount命令与E选项,其次是机器名或IP地址。这个工具包含在NFS utils包: + + # yum update && yum install nfs-utils + +然后: + + # showmount -e 192.168.0.10 + +你会得到一个列表的可用的NFS分享192.168.0.10: +![Check Available NFS Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Mount-NFS-Shares.png) + +Check Available NFS Shares + +在使用命令行对必要的本地客户端挂载NFS网络共享,使用以下语法: + + # mount -t nfs -o [options] remote_host:/remote/directory /local/directory + +在我们的例子中,翻译成: + + # mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs + +如果您收到一下错误消息:"rpc-statd.service工作失败,看 “systemctl status rpc-statd.service” 和“journalctl -xn” 获取详细信息.确保你的rpcbind服务在开机时开启。 + # systemctl enable rpcbind.socket + # systemctl restart rpcbind.service + +然后重新启动。这应该做的技巧,你将能够挂载NFS共享就和前面所解释的那样。如果你需要安装NFS共享的自动引导系统,添加一个有效的条目到/etc/fstab文件: + + 远程主机:远程目录 本地目录 nfs 选项 0 0 + +变量远程主机, 远程目录, 本地目录, and 选项 (可选的)在我们手动挂载是谁同样的,就和我们之前的例子一样。 + + 192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0 + +#### 挂载 Samba 网络文件共享 #### + +Samba 代表选择可以在×nix和Windows之间进行网络共享的工具.使用Samba客户端包内的 smbclient 命令 加 -L 参数来展示 Samba 文件分享,其次是机器名或IP地址 +将会提示你输入远程主机上的密码: + # smbclient -L 192.168.0.10 + +![Check Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Samba-Shares.png) + +Check Samba Shares + +在本地客户端,你需要首先安装CIFS utils来挂载Samba: + + # yum update && yum install cifs-utils + +然后在命令行上使用下面的语法: + + # mount -t cifs -o credentials=/path/to/credentials/file //remote_host/samba_share /local/directory + +在我们的例子中,翻译成: + + # mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba + +smbcredentials内容为: + + username=gacanepa + password=XXXXXX + +是一个隐藏文件在root的主目录(/root/),权限设置为600,因此,除了该文件的所有者可以读或写,没有人能够读写。 +请注意,samba_share是Samba共享的名字就像 smbclient -L remote_host 返回的那样 + +现在,如果你需要samba共享可自动在系统启动时,添加一个有效的条目/etc/fstab文件如下: + + //远程主机:/samba_share 本地目录 cifs 选项 0 0 + +变量 远程主机, /samba_share, 本地目录, 选项 (可选的) 
和我们手动安装的意义一样 + + //192.168.0.10/gacanepa /mnt/samba cifs credentials=/root/smbcredentials,defaults 0 0 + +### 结论 ### + +在这篇文章中我们已经讲解了如何在Linux设置ACL,并探讨RHEL7中该如何挂载CIFS和NFS网络共享。 +我建议你去实践这些概念,甚至把它们一起安装(先尝试安装网络共享设置ACL)如果你有疑问或意见,请随时使用下面的表格,随时与我们联系。还可以通过你的社交网络来分享这篇文章。 +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ + +作者:[Gabriel Cánepa][a] +译者:[xiqingongzi](https://github.com/xiqingongzi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ +[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ +[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html From 4180b6aa1f70ff8fe1852ce8c2eb19fd8195123c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Fri, 28 Aug 2015 02:46:32 +0800 Subject: [PATCH 345/697] Delete RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md --- ...Lists) and Mounting Samba or NFS Shares.md | 213 ------------------ 1 file changed, 213 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md b/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md deleted file mode 100644 index f8d9d45d27..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md +++ /dev/null @@ -1,213 +0,0 @@ -[xiqingongzi Translating] -RHCSA 
Series: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares – Part 7 -================================================================================ -In the last article ([RHCSA series Part 6][1]) we started explaining how to set up and configure local system storage using parted and ssm. - -![Configure ACL's and Mounting NFS / Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ACLs-and-Mounting-NFS-Samba-Shares.png) - -RHCSA Series:: Configure ACL’s and Mounting NFS / Samba Shares – Part 7 - -We also discussed how to create and mount encrypted volumes with a password during system boot. In addition, we warned you to avoid performing critical storage management operations on mounted filesystems. With that in mind we will now review the most used file system formats in Red Hat Enterprise Linux 7 and then proceed to cover the topics of mounting, using, and unmounting both manually and automatically network filesystems (CIFS and NFS), along with the implementation of access control lists for your system. - -#### Prerequisites #### - -Before proceeding further, please make sure you have a Samba server and a NFS server available (note that NFSv2 is no longer supported in RHEL 7). - -During this guide we will use a machine with IP 192.168.0.10 with both services running in it as server, and a RHEL 7 box as client with IP address 192.168.0.18. Later in the article we will tell you which packages you need to install on the client. - -### File System Formats in RHEL 7 ### - -Beginning with RHEL 7, XFS has been introduced as the default file system for all architectures due to its high performance and scalability. It currently supports a maximum filesystem size of 500 TB as per the latest tests performed by Red Hat and its partners for mainstream hardware. 
- -Also, XFS enables user_xattr (extended user attributes) and acl (POSIX access control lists) as default mount options, unlike ext3 or ext4 (ext2 is considered deprecated as of RHEL 7), which means that you don’t need to specify those options explicitly either on the command line or in /etc/fstab when mounting a XFS filesystem (if you want to disable such options in this last case, you have to explicitly use no_acl and no_user_xattr). - -Keep in mind that the extended user attributes can be assigned to files and directories for storing arbitrary additional information such as the mime type, character set or encoding of a file, whereas the access permissions for user attributes are defined by the regular file permission bits. - -#### Access Control Lists #### - -As every system administrator, either beginner or expert, is well acquainted with regular access permissions on files and directories, which specify certain privileges (read, write, and execute) for the owner, the group, and “the world” (all others). However, feel free to refer to [Part 3 of the RHCSA series][2] if you need to refresh your memory a little bit. - -However, since the standard ugo/rwx set does not allow to configure different permissions for different users, ACLs were introduced in order to define more detailed access rights for files and directories than those specified by regular permissions. - -In fact, ACL-defined permissions are a superset of the permissions specified by the file permission bits. Let’s see how all of this translates is applied in the real world. - -1. There are two types of ACLs: access ACLs, which can be applied to either a specific file or a directory), and default ACLs, which can only be applied to a directory. If files contained therein do not have a ACL set, they inherit the default ACL of their parent directory. - -2. To begin, ACLs can be configured per user, per group, or per an user not in the owning group of a file. - -3. 
ACLs are set (and removed) using setfacl, with either the -m or -x options, respectively. - -For example, let us create a group named tecmint and add users johndoe and davenull to it: - - # groupadd tecmint - # useradd johndoe - # useradd davenull - # usermod -a -G tecmint johndoe - # usermod -a -G tecmint davenull - -And let’s verify that both users belong to supplementary group tecmint: - - # id johndoe - # id davenull - -![Verify Users](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Users.png) - -Verify Users - -Let’s now create a directory called playground within /mnt, and a file named testfile.txt inside. We will set the group owner to tecmint and change its default ugo/rwx permissions to 770 (read, write, and execute permissions granted to both the owner and the group owner of the file): - - # mkdir /mnt/playground - # touch /mnt/playground/testfile.txt - # chmod 770 /mnt/playground/testfile.txt - -Then switch user to johndoe and davenull, in that order, and write to the file: - - echo "My name is John Doe" > /mnt/playground/testfile.txt - echo "My name is Dave Null" >> /mnt/playground/testfile.txt - -So far so good. Now let’s have user gacanepa write to the file – and the write operation will, which was to be expected. - -But what if we actually need user gacanepa (who is not a member of group tecmint) to have write permissions on /mnt/playground/testfile.txt? The first thing that may come to your mind is adding that user account to group tecmint. But that will give him write permissions on ALL files were the write bit is set for the group, and we don’t want that. We only want him to be able to write to /mnt/playground/testfile.txt. 
- - # touch /mnt/playground/testfile.txt - # chown :tecmint /mnt/playground/testfile.txt - # chmod 777 /mnt/playground/testfile.txt - # su johndoe - $ echo "My name is John Doe" > /mnt/playground/testfile.txt - $ su davenull - $ echo "My name is Dave Null" >> /mnt/playground/testfile.txt - $ su gacanepa - $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt - -![Manage User Permissions](http://www.tecmint.com/wp-content/uploads/2015/04/User-Permissions.png) - -Manage User Permissions - -Let’s give user gacanepa read and write access to /mnt/playground/testfile.txt. - -Run as root, - - # setfacl -R -m u:gacanepa:rwx /mnt/playground - -and you’ll have successfully added an ACL that allows gacanepa to write to the test file. Then switch to user gacanepa and try to write to the file again: - - $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt - -To view the ACLs for a specific file or directory, use getfacl: - - # getfacl /mnt/playground/testfile.txt - -![Check ACLs of Files](http://www.tecmint.com/wp-content/uploads/2015/04/Check-ACL-of-File.png) - -Check ACLs of Files - -To set a default ACL to a directory (which its contents will inherit unless overwritten otherwise), add d: before the rule and specify a directory instead of a file name: - - # setfacl -m d:o:r /mnt/playground - -The ACL above will allow users not in the owner group to have read access to the future contents of the /mnt/playground directory. Note the difference in the output of getfacl /mnt/playground before and after the change: - -![Set Default ACL in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Set-Default-ACL-in-Linux.png) - -Set Default ACL in Linux - -[Chapter 20 in the official RHEL 7 Storage Administration Guide][3] provides more ACL examples, and I highly recommend you take a look at it and have it handy as reference. 
- -#### Mounting NFS Network Shares #### - -To show the list of NFS shares available in your server, you can use the showmount command with the -e option, followed by the machine name or its IP address. This tool is included in the nfs-utils package: - - # yum update && yum install nfs-utils - -Then do: - - # showmount -e 192.168.0.10 - -and you will get a list of the available NFS shares on 192.168.0.10: - -![Check Available NFS Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Mount-NFS-Shares.png) - -Check Available NFS Shares - -To mount NFS network shares on the local client using the command line on demand, use the following syntax: - - # mount -t nfs -o [options] remote_host:/remote/directory /local/directory - -which, in our case, translates to: - - # mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs - -If you get the following error message: “Job for rpc-statd.service failed. See “systemctl status rpc-statd.service” and “journalctl -xn” for details.”, make sure the rpcbind service is enabled and started in your system first: - - # systemctl enable rpcbind.socket - # systemctl restart rpcbind.service - -and then reboot. That should do the trick and you will be able to mount your NFS share as explained earlier. If you need to mount the NFS share automatically on system boot, add a valid entry to the /etc/fstab file: - - remote_host:/remote/directory /local/directory nfs options 0 0 - -The variables remote_host, /remote/directory, /local/directory, and options (which is optional) are the same ones used when manually mounting an NFS share from the command line. As per our previous example: - - 192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0 - -#### Mounting CIFS (Samba) Network Shares #### - -Samba represents the tool of choice to make a network share available in a network with *nix and Windows machines. To show the Samba shares that are available, use the smbclient command with the -L flag, followed by the machine name or its IP address. 
This tool is included in the samba-client package: - -You will be prompted for root’s password in the remote host: - - # smbclient -L 192.168.0.10 - -![Check Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Samba-Shares.png) - -Check Samba Shares - -To mount Samba network shares on the local client you will need to install first the cifs-utils package: - - # yum update && yum install cifs-utils - -Then use the following syntax on the command line: - - # mount -t cifs -o credentials=/path/to/credentials/file //remote_host/samba_share /local/directory - -which, in our case, translates to: - - # mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba - -where smbcredentials: - - username=gacanepa - password=XXXXXX - -is a hidden file inside root’s home (/root/) with permissions set to 600, so that no one else but the owner of the file can read or write to it. - -Please note that the samba_share is the name of the Samba share as returned by smbclient -L remote_host as shown above. - -Now, if you need the Samba share to be available automatically on system boot, add a valid entry to the /etc/fstab file as follows: - - //remote_host:/samba_share /local/directory cifs options 0 0 - -The variables remote_host, /samba_share, /local/directory, and options (which is optional) are the same ones used when manually mounting a Samba share from the command line. Following the definitions given in our previous example: - - //192.168.0.10/gacanepa /mnt/samba cifs credentials=/root/smbcredentials,defaults 0 0 - -### Conclusion ### - -In this article we have explained how to set up ACLs in Linux, and discussed how to mount CIFS and NFS network shares in a RHEL 7 client. - -I recommend you to practice these concepts and even mix them (go ahead and try to set ACLs in mounted network shares) until you feel comfortable. If you have questions or comments feel free to use the form below to contact us anytime. 
Also, feel free to share this article through your social networks. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ -[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ -[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html From 8f3984987fc2e2cc778109e39b343d9cdf38dd29 Mon Sep 17 00:00:00 2001 From: "Y.C.S.M" Date: Fri, 28 Aug 2015 08:56:28 +0800 Subject: [PATCH 346/697] Delete 20150813 Linux file system hierarchy v2.0.md --- ...150813 Linux file system hierarchy v2.0.md | 440 ------------------ 1 file changed, 440 deletions(-) delete mode 100644 sources/tech/20150813 Linux file system hierarchy v2.0.md diff --git a/sources/tech/20150813 Linux file system hierarchy v2.0.md b/sources/tech/20150813 Linux file system hierarchy v2.0.md deleted file mode 100644 index 23f70258b1..0000000000 --- a/sources/tech/20150813 Linux file system hierarchy v2.0.md +++ /dev/null @@ -1,440 +0,0 @@ -translating by tnuoccalanosrep - -Linux file system hierarchy v2.0 -================================================================================ -What is a file in Linux? What is file system in Linux? Where are all the configuration files? Where do I keep my downloaded applications? Is there really a filesystem standard structure in Linux? Well, the above image explains Linux file system hierarchy in a very simple and non-complex way. It’s very useful when you’re looking for a configuration file or a binary file. 
I’ve added some explanation and examples below, but that’s TL;DR. - -Another issue is when you got configuration and binary files all over the system that creates inconsistency and if you’re a large organization or even an end user, it can compromise your system (binary talking with old lib files etc.) and when you do [security audit of your Linux system][1], you find it is vulnerable to different exploits. So keeping a clean operating system (no matter Windows or Linux) is important. - -### What is a file in Linux? ### - -A simple description of the UNIX system, also applicable to Linux, is this: - -> On a UNIX system, everything is a file; if something is not a file, it is a process. - -This statement is true because there are special files that are more than just files (named pipes and sockets, for instance), but to keep things simple, saying that everything is a file is an acceptable generalization. A Linux system, just like UNIX, makes no difference between a file and a directory, since a directory is just a file containing names of other files. Programs, services, texts, images, and so forth, are all files. Input and output devices, and generally all devices, are considered to be files, according to the system. - -![](http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png) - -- Version 2.0 – 17-06-2015 - - – Improved: Added title and version history. - - – Improved: Added /srv, /media and /proc. - - – Improved: Updated descriptions to reflect modern Linux File Systems. - - – Fixed: Multiple typo’s. - - – Fixed: Appearance and colour. -- Version 1.0 – 14-02-2015 - - – Created: Initial diagram. - - – Note: Discarded lowercase version. - -### Download Links ### - -Following are two links for download. If you need this in any other format, let me know and I will try to create that and upload it somewhere. 
- -- [Large (PNG) Format – 2480×1755 px – 184KB][2] -- [Largest (PDF) Format – 9919x7019 px – 1686KB][3] - -**Note**: PDF Format is best for printing and very high in quality - -### Linux file system description ### - -In order to manage all those files in an orderly fashion, man likes to think of them in an ordered tree-like structure on the hard disk, as we know from `MS-DOS` (Disk Operating System) for instance. The large branches contain more branches, and the branches at the end contain the tree’s leaves or normal files. For now we will use this image of the tree, but we will find out later why this is not a fully accurate image. - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
DirectoryDescription
-
/
-
Primary hierarchy root and root directory of the entire file system hierarchy.
-
/bin
-
Essential command binaries that need to be available in single user mode; for all users, e.g., cat, ls, cp.
-
/boot
-
Boot loader files, e.g., kernels, initrd.
-
/dev
-
Essential devices, e.g., /dev/null.
-
/etc
-
Host-specific system-wide configuration filesThere has been controversy over the meaning of the name itself. In early versions of the UNIX Implementation Document from Bell labs, /etc is referred to as the etcetera directory, as this directory historically held everything that did not belong elsewhere (however, the FHS restricts /etc to static configuration files and may not contain binaries). Since the publication of early documentation, the directory name has been re-designated in various ways. Recent interpretations include backronyms such as “Editable Text Configuration” or “Extended Tool Chest”.
-
-
-
/opt
-
-
-
Configuration files for add-on packages that are stored in /opt/.
-
-
-
/sgml
-
-
-
Configuration files, such as catalogs, for software that processes SGML.
-
-
-
/X11
-
-
-
Configuration files for the X Window System, version 11.
-
-
-
/xml
-
-
-
Configuration files, such as catalogs, for software that processes XML.
-
/home
-
Users’ home directories, containing saved files, personal settings, etc.
-
/lib
-
Libraries essential for the binaries in /bin/ and /sbin/.
-
/lib<qual>
-
Alternate format essential libraries. Such directories are optional, but if they exist, they have some requirements.
-
/media
-
Mount points for removable media such as CD-ROMs (appeared in FHS-2.3).
-
/mnt
-
Temporarily mounted filesystems.
-
/opt
-
Optional application software packages.
-
/proc
-
Virtual filesystem providing process and kernel information as files. In Linux, corresponds to a procfs mount.
-
/root
-
Home directory for the root user.
-
/sbin
-
Essential system binaries, e.g., init, ip, mount.
-
/srv
-
Site-specific data which are served by the system.
-
/tmp
-
Temporary files (see also /var/tmp). Often not preserved between system reboots.
-
/usr
-
Secondary hierarchy for read-only user data; contains the majority of (multi-)user utilities and applications.
-
-
-
/bin
-
-
-
Non-essential command binaries (not needed in single user mode); for all users.
-
-
-
/include
-
-
-
Standard include files.
-
-
-
/lib
-
-
-
Libraries for the binaries in /usr/bin/ and /usr/sbin/.
-
-
-
/lib<qual>
-
-
-
Alternate format libraries (optional).
-
-
-
/local
-
-
-
Tertiary hierarchy for local data, specific to this host. Typically has further subdirectories, e.g., bin/, lib/, share/.
-
-
-
/sbin
-
-
-
Non-essential system binaries, e.g., daemons for various network-services.
-
-
-
/share
-
-
-
Architecture-independent (shared) data.
-
-
-
/src
-
-
-
Source code, e.g., the kernel source code with its header files.
-
-
-
/X11R6
-
-
-
X Window System, Version 11, Release 6.
-
/var
-
Variable files—files whose content is expected to continually change during normal operation of the system—such as logs, spool files, and temporary e-mail files.
-
-
-
/cache
-
-
-
Application cache data. Such data are locally generated as a result of time-consuming I/O or calculation. The application must be able to regenerate or restore the data. The cached files can be deleted without loss of data.
-
-
-
/lib
-
-
-
State information. Persistent data modified by programs as they run, e.g., databases, packaging system metadata, etc.
-
-
-
/lock
-
-
-
Lock files. Files keeping track of resources currently in use.
-
-
-
/log
-
-
-
Log files. Various logs.
-
-
-
/mail
-
-
-
Users’ mailboxes.
-
-
-
/opt
-
-
-
Variable data from add-on packages that are stored in /opt/.
-
-
-
/run
-
-
-
Information about the running system since last boot, e.g., currently logged-in users and running daemons.
-
-
-
/spool
-
-
-
Spool for tasks waiting to be processed, e.g., print queues and outgoing mail queue.
-
-
-
-
-
/mail
-
-
-
-
-
Deprecated location for users’ mailboxes.
-
-
-
/tmp
-
-
-
Temporary files to be preserved between reboots.
- -### Types of files in Linux ### - -Most files are just files, called `regular` files; they contain normal data, for example text files, executable files or programs, input for or output from a program and so on. - -While it is reasonably safe to suppose that everything you encounter on a Linux system is a file, there are some exceptions. - -- `Directories`: files that are lists of other files. -- `Special files`: the mechanism used for input and output. Most special files are in `/dev`, we will discuss them later. -- `Links`: a system to make a file or directory visible in multiple parts of the system’s file tree. We will talk about links in detail. -- `(Domain) sockets`: a special file type, similar to TCP/IP sockets, providing inter-process networking protected by the file system’s access control. -- `Named pipes`: act more or less like sockets and form a way for processes to communicate with each other, without using network socket semantics. - -### File system in reality ### - -For most users and for most common system administration tasks, it is enough to accept that files and directories are ordered in a tree-like structure. The computer, however, doesn’t understand a thing about trees or tree-structures. - -Every partition has its own file system. By imagining all those file systems together, we can form an idea of the tree-structure of the entire system, but it is not as simple as that. In a file system, a file is represented by an `inode`, a kind of serial number containing information about the actual data that makes up the file: to whom this file belongs, and where is it located on the hard disk. - -Every partition has its own set of inodes; throughout a system with multiple partitions, files with the same inode number can exist. - -Each inode describes a data structure on the hard disk, storing the properties of a file, including the physical location of the file data. 
When a hard disk is initialized to accept data storage, usually during the initial system installation process or when adding extra disks to an existing system, a fixed number of inodes per partition is created. This number will be the maximum amount of files, of all types (including directories, special files, links etc.) that can exist at the same time on the partition. We typically count on having 1 inode per 2 to 8 kilobytes of storage.At the time a new file is created, it gets a free inode. In that inode is the following information: - -- Owner and group owner of the file. -- File type (regular, directory, …) -- Permissions on the file -- Date and time of creation, last read and change. -- Date and time this information has been changed in the inode. -- Number of links to this file (see later in this chapter). -- File size -- An address defining the actual location of the file data. - -The only information not included in an inode, is the file name and directory. These are stored in the special directory files. By comparing file names and inode numbers, the system can make up a tree-structure that the user understands. Users can display inode numbers using the -i option to ls. The inodes have their own separate space on the disk. 
- --------------------------------------------------------------------------------- - -via: http://www.blackmoreops.com/2015/06/18/linux-file-system-hierarchy-v2-0/ - -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:http://www.blackmoreops.com/2015/02/15/in-light-of-recent-linux-exploits-linux-security-audit-is-a-must/ -[2]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png -[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf From d4b3fcf6a82cc5eaf9c46d2bb21127e2bf2a9189 Mon Sep 17 00:00:00 2001 From: "Y.C.S.M" Date: Fri, 28 Aug 2015 09:00:36 +0800 Subject: [PATCH 347/697] Create 20150813 Linux file system hierarchy v2.0.md --- ...150813 Linux file system hierarchy v2.0.md | 432 ++++++++++++++++++ 1 file changed, 432 insertions(+) create mode 100644 translated/tech/20150813 Linux file system hierarchy v2.0.md diff --git a/translated/tech/20150813 Linux file system hierarchy v2.0.md b/translated/tech/20150813 Linux file system hierarchy v2.0.md new file mode 100644 index 0000000000..6f92d3bb53 --- /dev/null +++ b/translated/tech/20150813 Linux file system hierarchy v2.0.md @@ -0,0 +1,432 @@ +translating by tnuoccalanosrep +Linux文件系统结构 v2.0 +================================================================================ +Linux中的文件是什么?它的文件系统又是什么?那些配置文件又在哪里?我下载好的程序保存在哪里了?好了,上图简明地阐释了Linux的文件系统的层次关系。当你苦于寻找配置文件或者二进制文件的时候,这便显得十分有用了。我在下方添加了一些解释以及例子,但“篇幅过长,没有阅读”。 + +有一种情况便是当你在系统中获取配置以及二进制文件时,出现了不一致性问题,如果你是一个大型组织,或者只是一个终端用户,这也有可能会破坏你的系统(比如,二进制文件运行在就旧的库文件上了)。若然你在你的Linux系统上做安全审计([security audit of your Linux system][1])的话,你将会发现它很容易遭到不同的攻击。所以,清洁操作(无论是Windows还是Linux)都显得十分重要。 +### What is a file in Linux? ### +Linux的文件是什么? 
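在往下读之前,可以先在任意一台普通的 Linux 机器上运行下面几条命令,从侧面体会这个问题的答案(这只是一个演示,不需要 root 权限):普通文件、目录乃至设备,在系统看来都是文件,区别只在于类型:

```shell
# 普通文件:权限位以 - 开头
ls -l /etc/passwd

# 目录:以 d 开头——目录本身也只是一个记录了其他文件名的文件
ls -ld /etc

# 字符设备:以 c 开头,对设备的读写就是对这个文件的读写
ls -l /dev/null

# stat 可以直接报告出文件的类型
stat -c '%F' /dev/null
```

下文会对这些文件类型做进一步的解释。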
+对于UNIX系统来说(同样适用于Linux),以下便是对文件简单的描述: +> 在UNIX系统中,一切皆为文件;若非文件,则为进程 + +> 这种定义是比较正确的,因为有些特殊的文件不仅仅是普通文件(比如命名管道和套接字),不过为了让事情变的简单,“一切皆为文件”也是一个可以让人接受的说法。Linux系统也像UNXI系统一样,将文件和目录视如同物,因为目录只是一个包含了其他文件名的文件而已。程序,服务,文本,图片等等,都是文件。对于系统来说,输入和输出设备,基本上所有的设备,都被当做是文件。 +![](http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png) + +- Version 2.0 – 17-06-2015 + - – Improved: 添加标题以及版本历史 + - – Improved: 添加/srv,/meida和/proc + - – Improved: 更新了反映当前的Linux文件系统的描述 + - – Fixed: 多处的打印错误 + - – Fixed: 外观和颜色 +- Version 1.0 – 14-02-2015 + - – Created: 基本的图表 + - – Note: 摒弃更低的版本 + +### Download Links ### +以下是结构图的下载地址。如果你需要其他结构,请跟原作者联系,他会尝试制作并且上传到某个地方以供下载 +- [Large (PNG) Format – 2480×1755 px – 184KB][2] +- [Largest (PDF) Format – 9919x7019 px – 1686KB][3] + +**注意**: PDF格式文件是打印的最好选择,因为它画质很高。 +### Linux 文件系统描述 ### +为了有序地管理那些文件,人们习惯把这些文件当做是硬盘上的有序的类树结构体,正如我们熟悉的'MS-DOS'(硬盘操作系统)。大的分枝包括更多的分枝,分枝的末梢是树的叶子或者普通的文件。现在我们将会以这树形图为例,但晚点我们会发现为什么这不是一个完全准确的一幅图。 +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Directory(目录) | Description(描述) |
+|------|------|
+| `/` | *主层次* 的根,也是整个文件系统层次结构的根目录 |
+| `/bin` | 存放单用户模式下依然可用的必要命令的二进制文件,供所有用户使用,例如 cat、ls、cp 等 |
+| `/boot` | 存放引导加载程序的文件,例如内核(kernel)、initrd 等 |
+| `/dev` | 存放必要的设备文件 |
+| `/etc` | 存放主机特定的系统级配置文件。关于这个名字本身的含义存在争议。在贝尔实验室早期的 UNIX 实现文档中,/etc 被称为“其他(etcetera)目录”,因为从历史上看,这个目录存放各种不属于其他目录的文件(然而,FHS(文件系统层次结构标准)限定 /etc 只用于存放静态配置文件,不应包含二进制文件)。早期文档发布后,人们又对这个目录名做出了各种不同的重新解读,比如“可编辑文本配置”或者“额外的工具箱” |
+| `/etc/opt` | 存放附加软件包 /opt/ 的配置文件 |
+| `/etc/sgml` | 存放处理 SGML(译者注:标准通用标记语言)的软件的配置文件,比如 catalog 文件 |
+| `/etc/X11` | X Window 系统(版本 11)的配置文件 |
+| `/etc/xml` | 存放处理 XML(译者注:可扩展标记语言)的软件的配置文件,比如 catalog 文件 |
+| `/home` | 用户的主目录,包括保存的文件、个人配置等 |
+| `/lib` | `/bin/` 和 `/sbin/` 中的二进制文件必不可少的库文件 |
+| `/lib<qual>` | 备用格式的必要库文件。这样的目录是可选的,但如果存在,它们还需要满足一些要求 |
+| `/media` | 可移动介质(如 CD-ROM)的挂载点(出现于 FHS-2.3) |
+| `/mnt` | 临时挂载的文件系统 |
+| `/opt` | 附加的应用程序软件包 |
+| `/proc` | 以文件形式提供进程以及内核信息的虚拟文件系统,在 Linux 中对应 procfs 的挂载点 |
+| `/root` | root 用户的主目录 |
+| `/sbin` | 必要的系统二进制文件,比如 init、ip、mount |
+| `/srv` | 本系统提供的站点特定数据 |
+| `/tmp` | 临时文件(另见 /var/tmp),通常在系统重启后被清除 |
+| `/usr` | *二级层次*:存放用户的只读数据;包含(多)用户使用的大部分公共文件以及应用程序 |
+| `/usr/bin` | 非必要的命令二进制文件(单用户模式中不需要用到),供所有用户使用 |
+| `/usr/include` | 标准的头文件 |
+| `/usr/lib` | `/usr/bin/` 和 `/usr/sbin/` 中的二进制文件所用的库文件 |
+| `/usr/lib<qual>` | 备用格式的库文件(可选) |
+| `/usr/local` | *三级层次*:用于本机的本地数据。通常还有下一级的子目录,比如 bin/、lib/、share/ |
+| `/usr/sbin` | 非必要的系统二进制文件,比如各种网络服务的守护进程 |
+| `/usr/share` | 架构无关的(共享)数据 |
+| `/usr/src` | 源代码,比如内核源码及其头文件 |
+| `/usr/X11R6` | X Window 系统,版本 11,发行版 6 |
+| `/var` | 各种“可变”文件,其内容会随着系统的正常运行而持续改变,比如日志文件、假脱机(spool)文件,还有临时的电子邮件文件 |
+| `/var/cache` | 应用程序的缓存数据。这些数据由耗时的 I/O(输入/输出)或运算产生,应用程序可以重新生成或恢复它们;在不丢失数据的前提下,可以删除缓存文件 |
+| `/var/lib` | 状态信息。这些信息随着程序的运行而不断改变,比如数据库、软件包管理系统的元数据等 |
+| `/var/lock` | 锁文件,用于记录当前正在使用的资源 |
+| `/var/log` | 日志文件,包含各种日志 |
+| `/var/mail` | 存放用户邮箱相关的文件 |
+| `/var/opt` | 来自 /opt/ 附加软件包的可变数据 |
+| `/var/run` | 存放自上次启动以来系统运行的相关信息,例如当前登录的用户以及正在运行的守护进程 |
+| `/var/spool` | 存放等待处理的任务,比如打印队列以及待发送的邮件队列 |
+| `/var/spool/mail` | 已过时的位置,曾用于放置用户邮箱文件 |
+| `/var/tmp` | 系统重启之间需要保留的临时文件 |
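正如前文所说,目录只是一个包含了其他文件名的文件,文件的元数据则保存在 inode 中(下文会进一步展开)。下面是一个简单的示意脚本(假设系统提供 mktemp 以及 GNU coreutils 的 stat 命令),通过硬链接演示多个文件名可以指向同一个 inode:

```shell
# 在临时目录中演示文件名与 inode 的关系
workdir=$(mktemp -d)
echo "hello" > "$workdir/demo.txt"

# 创建硬链接:同一个 inode 的第二个文件名
ln "$workdir/demo.txt" "$workdir/alias.txt"

ino1=$(stat -c %i "$workdir/demo.txt")   # 文件的 inode 编号
ino2=$(stat -c %i "$workdir/alias.txt")
links=$(stat -c %h "$workdir/demo.txt")  # 硬链接计数

echo "inode: $ino1 与 $ino2,链接数:$links"
rm -r "$workdir"
```

两个文件名打印出的 inode 编号相同,链接数为 2,也就是说删除其中一个名字并不会删除数据本身。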
+
+### Linux的文件类型 ###
+大多数文件仅仅是普通文件,它们被称为`regular`(常规)文件;它们包含普通数据,比如文本、可执行文件或者程序、程序的输入或输出文件等等。
+虽然你可以认为“在Linux中,你看到的一切皆为文件”这个观点相当保险,但这里仍有着一些例外。
+
+- `目录`:由其他文件组成的文件
+- `特殊文件`:用于输入和输出的途径。大多数特殊文件都储存在`/dev`中,我们将会在后面讨论这个问题。
+- `链接文件`:让文件或者目录在系统文件树结构上可见的机制。我们将详细地讨论这种链接文件。
+- `(域)套接字`:特殊的文件类型,和TCP/IP协议中的套接字有点像,提供受文件系统访问控制机制保护的进程间网络通信。
+- `命名管道`:或多或少有点像套接字(socket),提供一种进程间的通信机制,而不使用网络套接字协议。
+
+### 现实中的文件系统 ###
+对于大多数用户和常规系统管理任务而言,“文件和目录是一个有序的类树结构”是可以接受的。然而,对于电脑而言,它并不理解什么是树或者树结构。
+
+每个分区都有它自己的文件系统。想象一下,如果把那些文件系统看成一个整体,我们可以构思出一个关于整个系统的树结构,不过事情并没有这么简单。在文件系统中,一个文件由一个`inode`(索引节点)代表,这是一个序列号,其中包含着构成文件的实际数据的相关信息:这些数据表明文件属于谁,以及它在硬盘上的位置。
+
+每个分区都有一套属于自己的inode,在一个系统的不同分区中,可以存在具有相同inode编号的文件。
+
+每个inode都表示着一种在硬盘上的数据结构,保存着文件的属性,包括文件数据的物理地址。当硬盘被格式化并用来存储数据时(通常发生在系统初始安装过程,或者是在已有的系统中添加额外的硬盘时),每个分区都会创建固定数量的inode。这个值表示这个分区能够同时存储各类文件的最大数量。我们通常用一个inode去映射2-8k的数据块。当一个新的文件生成后,它就会获得一个空闲的inode。在这个inode里面存储着以下信息:
+
+- 文件属主和组属主
+- 文件类型(常规文件,目录文件......)
- 文件权限
+- 创建、最近一次读文件和修改文件的时间
+- inode里该信息被修改的时间
+- 文件的链接数(详见下一章)
+- 文件大小
+- 文件数据的实际地址
+
+唯一不保存在inode中的信息是文件名和目录名,它们存储在特殊的目录文件中。通过将文件名和inode编号对应起来,系统能够构造出一个便于用户理解的树结构。用户可以通过ls -i查看文件的inode编号。在硬盘上,inode有自己独立的空间。



via: http://www.blackmoreops.com/2015/06/18/linux-file-system-hierarchy-v2-0/

译者:[tnuoccalanosrep](https://github.com/tnuoccalanosrep)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://www.blackmoreops.com/2015/02/15/in-light-of-recent-linux-exploits-linux-security-audit-is-a-must/
[2]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png
[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf
From 640802a25af32cec263d382f8620fc14897265ec Mon Sep 17 00:00:00 2001
From: Xuanwo
Date: Fri, 28 Aug 2015 09:28:13 +0800
Subject: [PATCH 348/697] finished the translate

---
 ...eate Edit and Manipulate files in Linux.md | 74 +++++++++----------
 1 file changed, 36 insertions(+), 38 deletions(-)

diff --git a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md
index 21579f0ed9..c8c56c0077 100644
--- a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md
+++ b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md
@@ -1,7 +1,6 @@
 Translating by Xuanwo
-Part 1 - LFCS: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux
-LFCS系列第一讲:如何在Linux上使用GNU'sed'命令来创建、编辑和操作文件
+Part 1 - LFCS系列第一讲:如何在Linux上使用GNU 'sed' 命令来创建、编辑和操作文件
 ================================================================================
 Linux基金会宣布了一个全新的LFCS(Linux Foundation Certified 
Sysadmin,Linux基金会认证系统管理员)认证计划。这一计划旨在帮助遍布全世界的人们获得其在处理Linux系统管理任务方面能力的认证。这些能力包括维持系统服务的正常运行、进行一线的故障诊断和分析,以及为工程师团队提供是否升级的明智决策。

@@ -11,8 +10,7 @@ Linux基金会认证系统管理员——第一讲

 请观看下面关于Linux基金会认证计划的演示:

-注:youtube 视频
-
+

 该系列将命名为《LFCS预备第一讲》至《LFCS预备第十讲》并覆盖关于Ubuntu、CentOS以及openSUSE的下列话题。

@@ -78,13 +76,13 @@ sed replace string

 正如你所看到的,我们可以通过使用分号分隔以及用括号包裹,把两个或者更多的替换命令(并在其中使用正则表达式)链接起来。

-Another use of sed is showing (or deleting) a chosen portion of a file. In the following example, we will display the first 5 lines of /var/log/messages from Jun 8.
+另一种sed的用法是显示(或删除)文件中选中的一部分。在下面的样例中,将会显示/var/log/messages中6月8日的头五行日志。

 # sed -n '/^Jun 8/ p' /var/log/messages | sed -n 1,5p

-Note that by default, sed prints every line. We can override this behaviour with the -n option and then tell sed to print (indicated by p) only the part of the file (or the pipe) that matches the pattern (Jun 8 at the beginning of line in the first case and lines 1 through 5 inclusive in the second case).
+请注意,在默认情况下,sed会打印每一行。我们可以使用-n选项覆盖这一行为,并告诉sed只打印(用p表示)文件(或管道)中与模式匹配的部分(第一种情况是以“Jun 8”开头的行,第二种情况是第1行到第5行)。

-Finally, it can be useful while inspecting scripts or configuration files to inspect the code itself and leave out comments. The following sed one-liner deletes (d) blank lines or those starting with # (the | character indicates a boolean OR between the two regular expressions).
+最后,在检查脚本或者配置文件的时候,只查看其中的代码而略过注释往往很有用。下面的单行sed命令会删除(d)空行或者以`#`开头的行(|字符表示两个正则表达式之间的“或”关系)。

 # sed '/^#\|^$/d' apache2.conf

@@ -92,13 +90,13 @@ Finally, it can be useful while inspecting scripts or configuration files to ins

 sed match string

-#### uniq Command ####
+#### uniq 命令 ####

-The uniq command allows us to report or remove duplicate lines in a file, writing to stdout by default. We must note that uniq does not detect repeated lines unless they are adjacent. Thus, uniq is commonly used along with a preceding sort (which is used to sort lines of text files). 
By default, sort takes the first field (separated by spaces) as key field. To specify a different key field, we need to use the -k option.
+uniq命令允许我们报告或者删除文件中重复的行,默认写入标准输出。我们必须注意到,除非两个重复的行相邻,否则uniq命令不会删除它们。因此,uniq经常和前置的sort命令(用来对文本行进行排序)搭配使用。默认情况下,sort使用第一个字段(用空格分隔)作为关键字段。要指定一个不同的关键字段,我们需要使用-k选项。

-**Examples**
+**样例**

-The du –sch /path/to/directory/* command returns the disk space usage per subdirectories and files within the specified directory in human-readable format (also shows a total per directory), and does not order the output by size, but by subdirectory and file name. We can use the following command to sort by size.
+du –sch /path/to/directory/* 命令将会以人类可读的格式返回在指定目录下每一个子文件夹和文件的磁盘空间使用情况(也会显示每个目录总体的情况),而且不是按照大小输出,而是按照子文件夹和文件的名称排列。我们可以使用下面的命令来让它按大小排序。

 # du -sch /var/* | sort –h

@@ -106,7 +104,8 @@ The du –sch /path/to/directory/* command returns the disk space usage per subd

 sort command example

-You can count the number of events in a log by date by telling uniq to perform the comparison using the first 6 characters (-w 6) of each line (where the date is specified), and prefixing each output line by the number of occurrences (-c) with the following command.
+你可以通过下面的命令,告诉uniq只比较每一行的前6个字符(-w 6,即日期所在的位置),来统计日志中按日期分组的事件个数,并在每一行的开头输出出现的次数(-c)。
+

 # cat /var/log/mail.log | uniq -c -w 6

@@ -114,7 +113,7 @@ You can count the number of events in a log by date by telling uniq to perform t

 Count Numbers in File

-Finally, you can combine sort and uniq (as they usually are). Consider the following file with a list of donors, donation date, and amount. Suppose we want to know how many unique donors there are. We will use the following command to cut the first field (fields are delimited by a colon), sort by name, and remove duplicate lines. 
+最后,你可以组合使用sort和uniq命令(通常如此)。考虑下面文件中捐助者,捐助日期和金额的列表。假设我们想知道有多少个捐助者。我们可以使用下面的命令来分隔第一字段(字段由冒号分隔),按名称排序并且删除重复的行。 # cat sortuniq.txt | cut -d: -f1 | sort | uniq @@ -122,15 +121,15 @@ Finally, you can combine sort and uniq (as they usually are). Consider the follo Find Unique Records in File -- Read Also: [13 “cat” Command Examples][1] +- 也可阅读: [13个“cat”命令样例][1] -#### grep Command #### +#### grep 命令 #### -grep searches text files or (command output) for the occurrence of a specified regular expression and outputs any line containing a match to standard output. +grep在文件(或命令输出)中搜索指定正则表达式并且在标准输出中输出匹配的行。 -**Examples** +**样例** -Display the information from /etc/passwd for user gacanepa, ignoring case. +显示文件/etc/passwd中用户gacanepa的信息,忽略大小写。 # grep -i gacanepa /etc/passwd @@ -138,7 +137,7 @@ Display the information from /etc/passwd for user gacanepa, ignoring case. grep command example -Show all the contents of /etc whose name begins with rc followed by any single number. +显示/etc文件夹下所有rc开头并跟随任意数字的内容。 # ls -l /etc | grep rc[0-9] @@ -146,15 +145,15 @@ Show all the contents of /etc whose name begins with rc followed by any single n List Content Using grep -- Read Also: [12 “grep” Command Examples][2] +- 也可阅读: [12个“grep”命令样例][2] #### tr Command Usage #### -The tr command can be used to translate (change) or delete characters from stdin, and write the result to stdout. +tr命令可以用来从标准输入中翻译(改变)或者删除字符并将结果写入到标准输出中。 -**Examples** +**样例** -Change all lowercase to uppercase in sortuniq.txt file. +把sortuniq.txt文件中所有的小写改为大写。 # cat sortuniq.txt | tr [:lower:] [:upper:] @@ -162,21 +161,20 @@ Change all lowercase to uppercase in sortuniq.txt file. Sort Strings in File -Squeeze the delimiter in the output of ls –l to only one space. 
- +压缩`ls –l`输出中的定界符至一个空格。 # ls -l | tr -s ' ' ![Squeeze Delimiter](http://www.tecmint.com/wp-content/uploads/2014/10/squeeze-delimeter.jpg) Squeeze Delimiter -#### cut Command Usage #### +#### cut 命令使用方法 #### -The cut command extracts portions of input lines (from stdin or files) and displays the result on standard output, based on number of bytes (-b option), characters (-c), or fields (-f). In this last case (based on fields), the default field separator is a tab, but a different delimiter can be specified by using the -d option. +cut命令可以基于字节数(-b选项),字符(-c)或者字段(-f)提取部分输入(从标准输入或者文件中)并且将结果输出到标准输出。在最后一种情况下(基于字段),默认的字段分隔符是一个tab,但不同的分隔符可以由-d选项来指定。 -**Examples** +**样例** -Extract the user accounts and the default shells assigned to them from /etc/passwd (the –d option allows us to specify the field delimiter, and the –f switch indicates which field(s) will be extracted. +从/etc/passwd中提取用户账户和他们被分配的默认shell(-d选项允许我们指定分界符,-f选项指定那些字段将被提取)。 # cat /etc/passwd | cut -d: -f1,7 @@ -184,7 +182,7 @@ Extract the user accounts and the default shells assigned to them from /etc/pass Extract User Accounts -Summing up, we will create a text stream consisting of the first and third non-blank files of the output of the last command. We will use grep as a first filter to check for sessions of user gacanepa, then squeeze delimiters to only one space (tr -s ‘ ‘). Next, we’ll extract the first and third fields with cut, and finally sort by the second field (IP addresses in this case) showing unique. +总结一下,我们将使用最后一个命令的输出中第一和第三个非空文件创建一个文本流。我们将使用grep作为第一过滤器来检查用户gacanepa的会话,然后将分隔符压缩至一个空格(tr -s ' ')。下一步,我们将使用cut来提取第一和第三个字段,最后使用第二个字段(本样例中,指的是IP地址)来排序之后再用uniq去重。 # last | grep gacanepa | tr -s ‘ ‘ | cut -d’ ‘ -f1,3 | sort -k2 | uniq @@ -192,24 +190,24 @@ Summing up, we will create a text stream consisting of the first and third non-b last command example -The above command shows how multiple commands and pipes can be combined so as to obtain filtered data according to our desires. 
Feel free to also run it by parts, to help you see the output that is pipelined from one command to the next (this can be a great learning experience, by the way!). +上面的命令显示了如何将多个命令和管道结合起来以便根据我们的愿望得到过滤后的数据。你也可以逐步地使用它以帮助你理解输出是如何从一个命令传输到下一个命令的(顺便说一句,这是一个非常好的学习经验!) -### Summary ### +### 总结 ### -Although this example (along with the rest of the examples in the current tutorial) may not seem very useful at first sight, they are a nice starting point to begin experimenting with commands that are used to create, edit, and manipulate files from the Linux command line. Feel free to leave your questions and comments below – they will be much appreciated! +尽管这个例子(以及在当前教程中的其他实例)第一眼看上去可能不是非常有用,但是他们是体验在Linux命令行中创建,编辑和操作文件的一个非常好的开始。请随时留下你的问题和意见——不胜感激! -#### Reference Links #### +#### 参考链接 #### -- [About the LFCS][3] -- [Why get a Linux Foundation Certification?][4] -- [Register for the LFCS exam][5] +- [关于LFCS][3] +- [为什么需要Linux基金会认证?][4] +- [注册LFCS考试][5] -------------------------------------------------------------------------------- via: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ 作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) +译者:[Xuanwo](https://github.com/Xuanwo) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From b663886aa6da8c94947f11a0a901b56a3bba60a1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Fri, 28 Aug 2015 10:41:26 +0800 Subject: [PATCH 349/697] Delete RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md --- ...Lists) and Mounting Samba or NFS Shares.md | 192 ------------------ 1 file changed, 192 deletions(-) delete mode 100644 translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md diff --git a/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md 
b/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md deleted file mode 100644 index a9c56b1cbe..0000000000 --- a/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md +++ /dev/null @@ -1,192 +0,0 @@ -[xiqingongzi Translating] -RHCSA Series: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares – Part 7 -================================================================================ -在第六篇文章的最后,我们开始解释如何使用parted和SSM 设置和配置本地文件存储([RHCSA series Part 6][1]) - -![Configure ACL's and Mounting NFS / Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ACLs-and-Mounting-NFS-Samba-Shares.png) - -RHCSA系列::第七章 ACL的配置和安装NFS/Samba文件分享系统 - -我们还讨论了如何创建和在启动启动时用密码挂载加密逻辑卷。另外,我们要提醒您要避免在安装 在管理操作系统的存储文件系统事执行关键的操作。接下来我们要回顾在红帽Linux 7 中常用的文件系统格式然后卸载和挂载网络文件系统(CIFS和NFS), -#### 前提 #### - -在开始之前,请确保你有一个线上Samba服务器和一个线上NFS服务器(RHEL7 将很快不支持 NFS V2) - -在这个指南中,我们将使用一个IP为192.168.0.10的机器作为服务端,RHEL7 盒子作为客户端,IP为192.168.0.18,稍后我们会告诉你该安装哪些软件包。 - -### RHEL7中的文件格式 ### - -从RHEL7 开始,XFS 因为其高可用性和可拓展性被设置为所有架构的默认文件系统。目前,红帽和合作伙伴测试的主流硬件线上他支持最大500TB 每个文件系统。 -同时,XFS使user_xattr(扩展用户属性)和ACL(POSIX访问控制列表)作为默认的挂载选项,不像ext3或ext4(ext2在RHEL 7中是过时的),这意味着你不需要明确的指定命令行选项或在/etc/fstab挂载时XFS文件系统(如果你想禁用在后一种情况下,这样的选择你要明确使用no_acl和no_user_xattr)。 -记住,扩展用户属性可以指定文件和目录用于存储任意等附加信息的MIME类型,字符集或文件的编码,而对用户属性的访问权限由普通文件权限位的定义。 -#### 权限控制列表 #### - -每一个系统管理员,无论新手还是专家,都熟悉文件和目录的权限和许可。它能制定特定的权限(读,写和执行)的所有者,属组,和其他的正常访问权限。如果需要,可以回去看看 [Part 3 of the RHCSA series][2] -然而,由于标准的 ugo/rwx 设置不允许配置不同用户不同权限,所以ACL可以比一般规定更多的文件和目录权限。 -事实上,ACL定义的权限是文件权限的一个超集,我们来看一下在真正的场景下是如何转换的。 -1. 有两种类型:访问ACL (可以适用于任何一个特定的文件或目录),也是默认的ACL,它只能应用于目录。如果文件包含在其中没有ACL设置,他们继承父目录的默认ACL。 -2. 
首先,ACL可以配置每个用户,每个组,或不在组内的用户拥有文件。 -3。设置ACL(和删除)使用setfacl,分别使用M或X选项。 -例如,让我们创建一个组名为tecmint和添加用户johndoe和davenull: - # groupadd tecmint - # useradd johndoe - # useradd davenull - # usermod -a -G tecmint johndoe - # usermod -a -G tecmint davenull - -让我们确认用户属于组tecmint: - - # id johndoe - # id davenull - -![Verify Users](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Users.png) - -验证用户 - -Let’s now create a directory called playground within /mnt, and a file named testfile.txt inside. We will set the group owner to tecmint and change its default ugo/rwx permissions to 770 (read, write, and execute permissions granted to both the owner and the group owner of the file): -现在让我们创建一个在/mnt下的目录名为playground,和一个名叫testfile.txt文件。我们将tecmint和更改其默认 ugo/rwx 权限为 770组所有者(读,写,和执行给予属主和属组所有者权限): - # mkdir /mnt/playground - # touch /mnt/playground/testfile.txt - # chmod 770 /mnt/playground/testfile.txt - -然后切换用户johndoe和davenull,按照这个顺序,并写入文件: - - echo "My name is John Doe" > /mnt/playground/testfile.txt - echo "My name is Dave Null" >> /mnt/playground/testfile.txt - -到目前为止很好。现在,让我们的用户gacanepa写入文件–和写操作,可以预料到出现的结果。 - -但如果我们真的需要用户gacanepa(不是tecmint组的成员)有/mnt/playground/testfile.txt的写入权限。首先,可能是你的想法是添加用户帐户组tecmint。但这会给他写上所有文件的权限,写入的是该组的权限,我们不希望这样。我们只希望他能写/mnt/playground/ testfile.txt。 - - # touch /mnt/playground/testfile.txt - # chown :tecmint /mnt/playground/testfile.txt - # chmod 777 /mnt/playground/testfile.txt - # su johndoe - $ echo "My name is John Doe" > /mnt/playground/testfile.txt - $ su davenull - $ echo "My name is Dave Null" >> /mnt/playground/testfile.txt - $ su gacanepa - $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt - -![Manage User Permissions](http://www.tecmint.com/wp-content/uploads/2015/04/User-Permissions.png) - -管理用户权限 - -让我们给用户gacanepa添加/mnt/playground/testfile.txt的读写权限 -在root下执行 - - # setfacl -R -m u:gacanepa:rwx /mnt/playground - -您已经成功添加了一个ACL允许gacanepa写入测试文件。然后切换到用户gacanepa试图写入文件: - - $ echo "My name is Gabriel Canepa" >> 
/mnt/playground/testfile.txt - -要查看特定的文件或目录的ACL,使用getfacl: - - # getfacl /mnt/playground/testfile.txt - -![Check ACLs of Files](http://www.tecmint.com/wp-content/uploads/2015/04/Check-ACL-of-File.png) - -检查文件的ACLs - -设置默认ACL目录(它的内容将会继承除非被覆盖),添加d:以前的规则并且指定一个文件名来替代 - # setfacl -m d:o:r /mnt/playground - -以上的ACL将允许用户不在属组属主有/mnt/playground的读权限。注意在getfacl /mnt/playground 之前和之后的改变输出的差异: -![Set Default ACL in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Set-Default-ACL-in-Linux.png) - -Set Default ACL in Linux - -[Chapter 20 in the official RHEL 7 Storage Administration Guide][3] 提供了更多ACL的例子,我强烈推荐你去读读它,参考起来非常方便。 - -#### 安装NFS网络共享 #### - -显示在你的服务器的NFS共享可用的列表,您可以使用showmount命令与E选项,其次是机器名或IP地址。这个工具包含在NFS utils包: - - # yum update && yum install nfs-utils - -然后: - - # showmount -e 192.168.0.10 - -你会得到一个列表的可用的NFS分享192.168.0.10: -![Check Available NFS Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Mount-NFS-Shares.png) - -Check Available NFS Shares - -在使用命令行对必要的本地客户端挂载NFS网络共享,使用以下语法: - - # mount -t nfs -o [options] remote_host:/remote/directory /local/directory - -在我们的例子中,翻译成: - - # mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs - -如果您收到一下错误消息:"rpc-statd.service工作失败,看 “systemctl status rpc-statd.service” 和“journalctl -xn” 获取详细信息.确保你的rpcbind服务在开机时开启。 - # systemctl enable rpcbind.socket - # systemctl restart rpcbind.service - -然后重新启动。这应该做的技巧,你将能够挂载NFS共享就和前面所解释的那样。如果你需要安装NFS共享的自动引导系统,添加一个有效的条目到/etc/fstab文件: - - 远程主机:远程目录 本地目录 nfs 选项 0 0 - -变量远程主机, 远程目录, 本地目录, and 选项 (可选的)在我们手动挂载是谁同样的,就和我们之前的例子一样。 - - 192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0 - -#### 挂载 Samba 网络文件共享 #### - -Samba 代表选择可以在×nix和Windows之间进行网络共享的工具.使用Samba客户端包内的 smbclient 命令 加 -L 参数来展示 Samba 文件分享,其次是机器名或IP地址 -将会提示你输入远程主机上的密码: - # smbclient -L 192.168.0.10 - -![Check Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Samba-Shares.png) - -Check Samba Shares - -在本地客户端,你需要首先安装CIFS utils来挂载Samba: - - # yum update && yum install cifs-utils - -然后在命令行上使用下面的语法: - - # mount -t cifs 
-o credentials=/path/to/credentials/file //remote_host/samba_share /local/directory - -在我们的例子中,翻译成: - - # mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba - -smbcredentials内容为: - - username=gacanepa - password=XXXXXX - -是一个隐藏文件在root的主目录(/root/),权限设置为600,因此,除了该文件的所有者可以读或写,没有人能够读写。 -请注意,samba_share是Samba共享的名字就像 smbclient -L remote_host 返回的那样 - -现在,如果你需要samba共享可自动在系统启动时,添加一个有效的条目/etc/fstab文件如下: - - //远程主机:/samba_share 本地目录 cifs 选项 0 0 - -变量 远程主机, /samba_share, 本地目录, 选项 (可选的) 和我们手动安装的意义一样 - - //192.168.0.10/gacanepa /mnt/samba cifs credentials=/root/smbcredentials,defaults 0 0 - -### 结论 ### - -在这篇文章中我们已经讲解了如何在Linux设置ACL,并探讨RHEL7中该如何挂载CIFS和NFS网络共享。 -我建议你去实践这些概念,甚至把它们一起安装(先尝试安装网络共享设置ACL)如果你有疑问或意见,请随时使用下面的表格,随时与我们联系。还可以通过你的社交网络来分享这篇文章。 --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ - -作者:[Gabriel Cánepa][a] -译者:[xiqingongzi](https://github.com/xiqingongzi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ -[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ -[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html From 20227444b13f1cea4976677c9c559cbbb191e3c3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Fri, 28 Aug 2015 11:11:16 +0800 Subject: [PATCH 350/697] Update RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md --- ...t, Automating Tasks with Cron and Monitoring System Logs.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks 
with Cron and Monitoring System Logs.md b/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md index 04c7d7a29e..307ec72515 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md @@ -1,3 +1,4 @@ +[xiqingongzi translating] RHCSA Series: Yum Package Management, Automating Tasks with Cron and Monitoring System Logs – Part 10 ================================================================================ In this article we will review how to install, update, and remove packages in Red Hat Enterprise Linux 7. We will also cover how to automate tasks using cron, and will finish this guide explaining how to locate and interpret system logs files with the focus of teaching you why all of these are essential skills for every system administrator. 
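As a rough sketch of the cron scheduling that article goes on to cover, a crontab entry is made up of five time fields (minute, hour, day of month, month, day of week) followed by the command to run; the job and log path below are only hypothetical examples, not taken from the article:

```
# m  h  dom mon dow  command
  0  3  *   *   1    yum -y update >> /var/log/auto-yum-update.log 2>&1
```

Installed with `crontab -e`, an entry like this would run an unattended update every Monday at 3:00 AM and append its output to a log file.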
@@ -194,4 +195,4 @@ via: http://www.tecmint.com/yum-package-management-cron-job-scheduling-monitorin [1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ [2]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ [3]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/ -[4]:http://www.tecmint.com/dmesg-commands/ \ No newline at end of file +[4]:http://www.tecmint.com/dmesg-commands/ From c242f434c900997a28380b2e50620bba14b2f00b Mon Sep 17 00:00:00 2001 From: Jerry Ling Date: Fri, 28 Aug 2015 12:21:27 +0800 Subject: [PATCH 351/697] [Translating] Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting --- ...inux Birthday-- A 22 Years of Journey and Still Counting.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md b/sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md index f74384b616..da750a495a 100644 --- a/sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md +++ b/sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md @@ -1,3 +1,4 @@ +[jerryling315](https://github.com/jerryling315/) is translating. Debian GNU/Linux Birthday : A 22 Years of Journey and Still Counting… ================================================================================ On 16th August 2015, the Debian project has celebrated its 22nd anniversary, making it one of the oldest popular distribution in open source world. Debian project was conceived and founded in the year 1993 by Ian Murdock. By that time Slackware had already made a remarkable presence as one of the earliest Linux Distribution. 
@@ -106,4 +107,4 @@ via: http://www.tecmint.com/happy-birthday-to-debian-gnu-linux/ [a]:http://www.tecmint.com/author/avishek/ [1]:http://xmodulo.com/2013/08/interesting-facts-about-debian-linux.html -[2]:https://www.debian.org/ \ No newline at end of file +[2]:https://www.debian.org/ From 829b9cbb09aabfac61a8a578a31d4b8a2edfafb7 Mon Sep 17 00:00:00 2001 From: Ping Date: Fri, 28 Aug 2015 13:54:05 +0800 Subject: [PATCH 352/697] Translating sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI --- .../20150827 Xtreme Download Manager Updated With Fresh GUI.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md b/sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md index 767c2fdcd4..8879d6bf64 100644 --- a/sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md +++ b/sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md @@ -1,3 +1,4 @@ +Translating by Ping Xtreme Download Manager Updated With Fresh GUI ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-Linux.jpg) @@ -64,4 +65,4 @@ via: http://itsfoss.com/xtreme-download-manager-install/ [1]:http://xdman.sourceforge.net/ [2]:http://itsfoss.com/4-best-download-managers-for-linux/ [3]:http://itsfoss.com/download-youtube-videos-ubuntu/ -[4]:http://xdman.sourceforge.net/download.html \ No newline at end of file +[4]:http://xdman.sourceforge.net/download.html From 8697100000ccaba0be7f29aa03595410ff96dbe5 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 28 Aug 2015 14:42:22 +0800 Subject: [PATCH 353/697] PUB:20150518 How to set up a Replica Set on MongoDB @mr-ping --- ... 
How to set up a Replica Set on MongoDB.md | 78 +++++++++++-------- 1 file changed, 47 insertions(+), 31 deletions(-) rename {translated/tech => published}/20150518 How to set up a Replica Set on MongoDB.md (51%) diff --git a/translated/tech/20150518 How to set up a Replica Set on MongoDB.md b/published/20150518 How to set up a Replica Set on MongoDB.md similarity index 51% rename from translated/tech/20150518 How to set up a Replica Set on MongoDB.md rename to published/20150518 How to set up a Replica Set on MongoDB.md index 44b8535b82..7d05a48d95 100644 --- a/translated/tech/20150518 How to set up a Replica Set on MongoDB.md +++ b/published/20150518 How to set up a Replica Set on MongoDB.md @@ -1,10 +1,11 @@ -如何配置MongoDB副本集(Replica Set) +如何配置 MongoDB 副本集 ================================================================================ -MongoDB已经成为市面上最知名的NoSQL数据库。MongoDB是面向文档的,它的无模式设计使得它在各种各样的WEB应用当中广受欢迎。最让我喜欢的特性之一是它的副本集,副本集将同一数据的多份拷贝放在一组mongod节点上,从而实现数据的冗余以及高可用性。 -这篇教程将向你介绍如何配置一个MongoDB副本集。 +MongoDB 已经成为市面上最知名的 NoSQL 数据库。MongoDB 是面向文档的,它的无模式设计使得它在各种各样的WEB 应用当中广受欢迎。最让我喜欢的特性之一是它的副本集(Replica Set),副本集将同一数据的多份拷贝放在一组 mongod 节点上,从而实现数据的冗余以及高可用性。 -副本集的最常见配置涉及到一个主节点以及多个副节点。这之后启动的复制行为会从这个主节点到其他副节点。副本集不止可以针对意外的硬件故障和停机事件对数据库提供保护,同时也因为提供了更多的结点从而提高了数据库客户端数据读取的吞吐量。 +这篇教程将向你介绍如何配置一个 MongoDB 副本集。 + +副本集的最常见配置需要一个主节点以及多个副节点。这之后启动的复制行为会从这个主节点到其他副节点。副本集不止可以针对意外的硬件故障和停机事件对数据库提供保护,同时也因为提供了更多的节点从而提高了数据库客户端数据读取的吞吐量。 ### 配置环境 ### @@ -12,25 +13,25 @@ MongoDB已经成为市面上最知名的NoSQL数据库。MongoDB是面向文档 ![](https://farm8.staticflickr.com/7667/17801038505_529a5224a1.jpg) -为了达到这个目的,我们使用了3个运行在VirtualBox上的虚拟机。我会在这些虚拟机上安装Ubuntu 14.04,并且安装MongoDB官方包。 +为了达到这个目的,我们使用了3个运行在 VirtualBox 上的虚拟机。我会在这些虚拟机上安装 Ubuntu 14.04,并且安装 MongoDB 官方包。 -我会在一个虚拟机实例上配置好需要的环境,然后将它克隆到其他的虚拟机实例上。因此,选择一个名为master的虚拟机,执行以下安装过程。 +我会在一个虚拟机实例上配置好所需的环境,然后将它克隆到其他的虚拟机实例上。因此,选择一个名为 master 的虚拟机,执行以下安装过程。 -首先,我们需要在apt中增加一个MongoDB密钥: +首先,我们需要给 apt 增加一个 MongoDB 密钥: $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 
-然后,将官方的MongoDB仓库添加到source.list中: +然后,将官方的 MongoDB 仓库添加到 source.list 中: $ sudo su # echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list -接下来更新apt仓库并且安装MongoDB。 +接下来更新 apt 仓库并且安装 MongoDB。 $ sudo apt-get update $ sudo apt-get install -y mongodb-org -现在对/etc/mongodb.conf做一些更改 +现在对 /etc/mongodb.conf 做一些更改 auth = true dbpath=/var/lib/mongodb @@ -39,17 +40,17 @@ MongoDB已经成为市面上最知名的NoSQL数据库。MongoDB是面向文档 keyFile=/var/lib/mongodb/keyFile replSet=myReplica -第一行的作用是确认我们的数据库需要验证才可以使用的。keyfile用来配置用于MongoDB结点间复制行为的密钥文件。replSet用来为副本集设置一个名称。 +第一行的作用是表明我们的数据库需要验证才可以使用。keyfile 配置用于 MongoDB 节点间复制行为的密钥文件。replSet 为副本集设置一个名称。 接下来我们创建一个用于所有实例的密钥文件。 $ echo -n "MyRandomStringForReplicaSet" | md5sum > keyFile -这将会创建一个含有MD5字符串的密钥文件,但是由于其中包含了一些噪音,我们需要对他们清理后才能正式在MongoDB中使用。 +这将会创建一个含有 MD5 字符串的密钥文件,但是由于其中包含了一些噪音,我们需要对他们清理后才能正式在 MongoDB 中使用。 $ echo -n "MyReplicaSetKey" | md5sum|grep -o "[0-9a-z]\+" > keyFile -grep命令的作用的是把将空格等我们不想要的内容过滤掉之后的MD5字符串打印出来。 +grep 命令的作用的是把将空格等我们不想要的内容过滤掉之后的 MD5 字符串打印出来。 现在我们对密钥文件进行一些操作,让它真正可用。 @@ -57,7 +58,7 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 $ sudo chown mongodb:nogroup keyFile $ sudo chmod 400 keyFile -接下来,关闭此虚拟机。将其Ubuntu系统克隆到其他虚拟机上。 +接下来,关闭此虚拟机。将其 Ubuntu 系统克隆到其他虚拟机上。 ![](https://farm9.staticflickr.com/8729/17800903865_9876a9cc9c.jpg) @@ -67,55 +68,55 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 请注意,三个虚拟机示例需要在同一个网络中以便相互通讯。因此,我们需要它们弄到“互联网"上去。 -这里推荐给每个虚拟机设置一个静态IP地址,而不是使用DHCP。这样它们就不至于在DHCP分配IP地址给他们的时候失去连接。 +这里推荐给每个虚拟机设置一个静态 IP 地址,而不是使用 DHCP。这样它们就不至于在 DHCP 分配IP地址给他们的时候失去连接。 -像下面这样编辑每个虚拟机的/etc/networks/interfaces文件。 +像下面这样编辑每个虚拟机的 /etc/networks/interfaces 文件。 -在主结点上: +在主节点上: auto eth1 iface eth1 inet static address 192.168.50.2 netmask 255.255.255.0 -在副结点1上: +在副节点1上: auto eth1 iface eth1 inet static address 192.168.50.3 netmask 255.255.255.0 -在副结点2上: +在副节点2上: auto eth1 iface eth1 inet static address 192.168.50.4 netmask 255.255.255.0 -由于我们没有DNS服务,所以需要设置设置一下/etc/hosts这个文件,手工将主机名称放到次文件中。 
+由于我们没有 DNS 服务,所以需要设置设置一下 /etc/hosts 这个文件,手工将主机名称放到此文件中。 -在主结点上: +在主节点上: 127.0.0.1 localhost primary 192.168.50.2 primary 192.168.50.3 secondary1 192.168.50.4 secondary2 -在副结点1上: +在副节点1上: 127.0.0.1 localhost secondary1 192.168.50.2 primary 192.168.50.3 secondary1 192.168.50.4 secondary2 -在副结点2上: +在副节点2上: 127.0.0.1 localhost secondary2 192.168.50.2 primary 192.168.50.3 secondary1 192.168.50.4 secondary2 -使用ping命令检查各个结点之间的连接。 +使用 ping 命令检查各个节点之间的连接。 $ ping primary $ ping secondary1 @@ -123,9 +124,9 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 ### 配置副本集 ### -验证各个结点可以正常连通后,我们就可以新建一个管理员用户,用于之后的副本集操作。 +验证各个节点可以正常连通后,我们就可以新建一个管理员用户,用于之后的副本集操作。 -在主节点上,打开/etc/mongodb.conf文件,将auth和replSet两项注释掉。 +在主节点上,打开 /etc/mongodb.conf 文件,将 auth 和 replSet 两项注释掉。 dbpath=/var/lib/mongodb logpath=/var/log/mongodb/mongod.log @@ -133,21 +134,30 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 #auth = true keyFile=/var/lib/mongodb/keyFile #replSet=myReplica + +在一个新安装的 MongoDB 上配置任何用户或副本集之前,你需要注释掉 auth 行。默认情况下,MongoDB 并没有创建任何用户。而如果在你创建用户前启用了 auth,你就不能够做任何事情。你可以在创建一个用户后再次启用 auth。 -重启mongod进程。 +修改 /etc/mongodb.conf 之后,重启 mongod 进程。 $ sudo service mongod restart -连接MongoDB后,新建管理员用户。 +现在连接到 MongoDB master: + + $ mongo :27017 + +连接 MongoDB 后,新建管理员用户。 > use admin > db.createUser({ user:"admin", pwd:" }) + +重启 MongoDB: + $ sudo service mongod restart -连接到MongoDB,用以下命令将secondary1和secondary2节点添加到我们的副本集中。 +再次连接到 MongoDB,用以下命令将 副节点1 和副节点2节点添加到我们的副本集中。 > use admin > db.auth("admin","myreallyhardpassword") @@ -156,7 +166,7 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 > rs.add("secondary2:27017") -现在副本集到手了,可以开始我们的项目了。参照 [official driver documentation][1] 来了解如何连接到副本集。如果你想要用Shell来请求数据,那么你需要连接到主节点上来插入或者请求数据,副节点不行。如果你执意要尝试用附件点操作,那么以下错误信息就蹦出来招呼你了。 +现在副本集到手了,可以开始我们的项目了。参照 [官方驱动文档][1] 来了解如何连接到副本集。如果你想要用 Shell 来请求数据,那么你需要连接到主节点上来插入或者请求数据,副节点不行。如果你执意要尝试用副本集操作,那么以下错误信息就蹦出来招呼你了。 myReplica:SECONDARY> myReplica:SECONDARY> show databases @@ -166,6 +176,12 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 at shellHelper.show (src/mongo/shell/utils.js:630:33) at shellHelper 
(src/mongo/shell/utils.js:524:36)
     at (shellhelp2):1:1
     at src/mongo/shell/mongo.js:47
+
+如果你要从 shell 连接到整个副本集,可以按照如下命令操作。副本集中的故障切换是自动进行的。
+
+    $ mongo primary,secondary1,secondary2:27017/?replicaSet=myReplica
+
+如果你使用其它驱动语言(例如,JavaScript、Ruby 等等),格式也许不同。
 
 希望这篇教程能对你有所帮助。你可以使用 Vagrant 来自动完成你的本地环境配置,并且加快你的开发。
 
--------------------------------------------------------------------------------
 
 via: http://xmodulo.com/setup-replica-set-mongodb.html
 
 作者:[Christopher Valerio][a]
 译者:[mr-ping](https://github.com/mr-ping)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

From 117acaf5d0b79a7b1054f488e29410d8fbf354c6 Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 28 Aug 2015 14:46:58 +0800
Subject: [PATCH 354/697] =?UTF-8?q?=E5=9B=9E=E6=94=B6=E8=B6=85=E6=9C=9F?=
 =?UTF-8?q?=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@H-mudcup @zpl1025 @martin2011qi
---
 ...Is Out And It's Packed Full Of Features.md | 87 -------------------
 ...20141223 Defending the Free Linux World.md | 2 -
 ...s--Linus Torvalds Answers Your Question.md | 1 -
 .../talk/20150716 Interview--Larry Wall.md | 2 -
 4 files changed, 92 deletions(-)
 delete mode 100644 sources/news/20150826 Plasma 5.4 Is Out And It's Packed Full Of Features.md

diff --git a/sources/news/20150826 Plasma 5.4 Is Out And It's Packed Full Of Features.md b/sources/news/20150826 Plasma 5.4 Is Out And It's Packed Full Of Features.md
deleted file mode 100644
index a103c6b505..0000000000
--- a/sources/news/20150826 Plasma 5.4 Is Out And It's Packed Full Of Features.md
+++ /dev/null
@@ -1,87 +0,0 @@
-Plasma 5.4 Is Out And It’s Packed Full Of Features
-================================================================================
-KDE has [announced][1] a brand new feature release of Plasma 5 — and it’s a corker.
- -![kde network applet graphs](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/kde-network-applet-graphs.jpg) - -Better network details are among the changes - -Plasma 5.4.0 builds on [April’s 5.3.0 milestone][2] in a number of ways, ranging from the inherently technical, Wayland preview session, ahoy, to lavish aesthetic touches, like **1,400 brand new icons**. - -A handful of new components also feature in the release, including a new Plasma Widget for volume control, a monitor calibration tool and an improved user management tool. - -The ‘Kicker’ application menu has been powered up to let you favourite all types of content, not just applications. - -**KRunner now remembers searches** so that it can automatically offer suggestions based on your earlier queries as you type. - -The **network applet displays a graph** to give you a better understanding of your network traffic. It also gains two new VPN plugins for SSH and SSTP connections. - -Minor tweaks to the digital clock see it adapt better in slim panel mode, it gains ISO date support and makes it easier for you to toggle between 12 hour and 24 hour clock. Week numbers have been added to the calendar. - -### Application Dashboard ### - -![plasma 5.4 fullscreen dashboard](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/plasma-fullscreen-dashboard.jpg) - -The new ‘Application Dashboard’ in KDE Plasma 5.4.0 - -**A new full screen launcher, called ‘Application Dashboard’**, is also available. - -This full-screen dash offers the same features as the traditional Application Menu but with “sophisticated scaling to screen size and full spatial keyboard navigation”. - -Like the Unity launch, the new Plasma Application Dashboard helps you quickly find applications, sift through files and contacts based on your previous activity. 
- -### Changes in KDE Plasma 5.4.0 at a glance ### - -- Improved high DPI support -- KRunner autocompletion -- KRunner search history -- Application Dashboard add on -- 1,400 New icons -- Wayland tech preview - -For a full list of changes in Plasma 5.4 refer to [this changelog][3]. - -### Install Plasma 5.4 in Kubuntu 15.04 ### - -![new plasma desktop](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/new-plasma-desktop-.jpg) - -![Kubuntu logo](http://www.omgubuntu.co.uk/wp-content/uploads/2012/02/logo-kubuntu.png) - -To **install Plasma 5.4 in Kubuntu 15.04** you will need to add the KDE Backports PPA to your Software Sources. - -Adding the Kubuntu backports PPA **is not strictly advised** as it may upgrade other parts of the KDE desktop, application suite, developer frameworks or Kubuntu specific config files. - -If you like your desktop being stable, don’t proceed. - -The quickest way to upgrade to Plasma 5.4 once it lands in the Kubuntu Backports PPA is to use the Terminal: - - sudo add-apt-repository ppa:kubuntu-ppa/backports - - sudo apt-get update && sudo apt-get dist-upgrade - -Let the upgrade process complete. Assuming no errors emerge, reboot your computer for changes to take effect. - -If you’re not already using Kubuntu, i.e. you’re using the Unity version of Ubuntu, you should first install the Kubuntu desktop package (you’ll find it in the Ubuntu Software Centre). - -To undo the changes above and downgrade to the most recent version of Plasma available in the Ubuntu archives use the PPA-Purge tool: - - sudo apt-get install ppa-purge - - sudo ppa-purge ppa:kubuntu-ppa/backports - -Let us know how your upgrade/testing goes in the comments below and don’t forget to mention the features you hope to see added to the Plasma 5 desktop next. 
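The upgrade and downgrade steps in the article above pair `add-apt-repository` with `ppa-purge` so the change stays reversible. A minimal dry-run sketch of that pairing (the helper names `plasma_upgrade_cmds`/`plasma_downgrade_cmds` are hypothetical; it only prints the commands the steps would run, so nothing touches the system):

```shell
# Dry-run sketch: print the commands from the steps above without running them.
# The PPA name is the one used in the article.
ppa="ppa:kubuntu-ppa/backports"

plasma_upgrade_cmds() {
    printf '%s\n' \
        "sudo add-apt-repository $ppa" \
        "sudo apt-get update && sudo apt-get dist-upgrade"
}

plasma_downgrade_cmds() {
    printf '%s\n' \
        "sudo apt-get install ppa-purge" \
        "sudo ppa-purge $ppa"
}

plasma_upgrade_cmds
plasma_downgrade_cmds
```

Reviewing the printed commands before executing them is exactly the caution the article asks for, since the backports PPA can upgrade more of the desktop than Plasma alone.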
- --------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2015/08/plasma-5-4-new-features - -作者:[Joey-Elijah Sneddon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:https://dot.kde.org/2015/08/25/kde-ships-plasma-540-feature-release-august -[2]:http://www.omgubuntu.co.uk/2015/04/kde-plasma-5-3-released-heres-how-to-upgrade-in-kubuntu-15-04 -[3]:https://www.kde.org/announcements/plasma-5.3.2-5.4.0-changelog.php \ No newline at end of file diff --git a/sources/talk/20141223 Defending the Free Linux World.md b/sources/talk/20141223 Defending the Free Linux World.md index 0a552e640d..0df2a47383 100644 --- a/sources/talk/20141223 Defending the Free Linux World.md +++ b/sources/talk/20141223 Defending the Free Linux World.md @@ -1,5 +1,3 @@ -Translating by H-mudcup - Defending the Free Linux World ================================================================================ ![](http://www.linuxinsider.com/ai/908455/open-invention-network.jpg) diff --git a/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md b/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md index f1420fd0e4..bb04ddf0c8 100644 --- a/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md +++ b/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md @@ -1,4 +1,3 @@ -zpl1025 Interviews: Linus Torvalds Answers Your Question ================================================================================ Last Thursday you had a chance to [ask Linus Torvalds][1] about programming, hardware, and all things Linux. You can read his answers to those questions below. If you'd like to see what he had to say the last time we sat down with him, [you can do so here][2]. 
diff --git a/sources/talk/20150716 Interview--Larry Wall.md b/sources/talk/20150716 Interview--Larry Wall.md index 1362281517..3010691cff 100644 --- a/sources/talk/20150716 Interview--Larry Wall.md +++ b/sources/talk/20150716 Interview--Larry Wall.md @@ -1,5 +1,3 @@ -martin - Interview: Larry Wall ================================================================================ > Perl 6 has been 15 years in the making, and is now due to be released at the end of this year. We speak to its creator to find out what’s going on. From 8ba608059ea9e9247ec89e6c7ddd8fd5ee52ecd4 Mon Sep 17 00:00:00 2001 From: martin qi Date: Fri, 28 Aug 2015 19:01:24 +0800 Subject: [PATCH 355/697] Update 20150716 Interview--Larry Wall.md --- sources/talk/20150716 Interview--Larry Wall.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20150716 Interview--Larry Wall.md b/sources/talk/20150716 Interview--Larry Wall.md index 3010691cff..f3fea9c596 100644 --- a/sources/talk/20150716 Interview--Larry Wall.md +++ b/sources/talk/20150716 Interview--Larry Wall.md @@ -1,3 +1,5 @@ +translating... + Interview: Larry Wall ================================================================================ > Perl 6 has been 15 years in the making, and is now due to be released at the end of this year. We speak to its creator to find out what’s going on. From 3df7b23d8f3d5916d78684ea621a2d5cf4d57ab2 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 28 Aug 2015 23:33:13 +0800 Subject: [PATCH 356/697] Delete 20150827 Linux or UNIX--Bash Read a File Line By Line.md --- ... 
or UNIX--Bash Read a File Line By Line.md | 163 ------------------ 1 file changed, 163 deletions(-) delete mode 100644 sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md diff --git a/sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md b/sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md deleted file mode 100644 index c0a4b6c27c..0000000000 --- a/sources/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md +++ /dev/null @@ -1,163 +0,0 @@ -translation by strugglingyouth -Linux/UNIX: Bash Read a File Line By Line -================================================================================ -How do I read a file line by line under a Linux or UNIX-like system using KSH or BASH shell? - -You can use while..do..done bash loop to read file line by line on a Linux, OSX, *BSD, or Unix-like system. - -**Syntax to read file line by line on a Bash Unix & Linux shell:** - -1. The syntax is as follows for bash, ksh, zsh, and all other shells - -1. while read -r line; do COMMAND; done < input.file -1. The -r option passed to red command prevents backslash escapes from being interpreted. -1. Add IFS= option before read command to prevent leading/trailing whitespace from being trimmed - -1. 
while IFS= read -r line; do COMMAND_on $line; done < input.file - -Here is more human readable syntax for you: - - #!/bin/bash - input="/path/to/txt/file" - while IFS= read -r var - do - echo "$var" - done < "$input" - -**Examples** - -Here are some examples: - - #!/bin/ksh - file="/home/vivek/data.txt" - while IFS= read line - do - # display $line or do somthing with $line - echo "$line" - done <"$file" - -The same example using bash shell: - - #!/bin/bash - file="/home/vivek/data.txt" - while IFS= read -r line - do - # display $line or do somthing with $line - printf '%s\n' "$line" - done <"$file" - -You can also read field wise: - - #!/bin/bash - file="/etc/passwd" - while IFS=: read -r f1 f2 f3 f4 f5 f6 f7 - do - # display fields using f1, f2,..,f7 - printf 'Username: %s, Shell: %s, Home Dir: %s\n' "$f1" "$f7" "$f6" - done <"$file" - -Sample outputs: - -![Fig.01: Bash shell scripting- read file line by line demo outputs](http://s0.cyberciti.org/uploads/faq/2011/01/Bash-Scripting-Read-File-line-by-line-demo.jpg) - -Fig.01: Bash shell scripting- read file line by line demo outputs - -**Bash Scripting: Read text file line-by-line to create pdf files** - -My input file is as follows (faq.txt): - - 4|http://www.cyberciti.biz/faq/mysql-user-creation/|Mysql User Creation: Setting Up a New MySQL User Account - 4096|http://www.cyberciti.biz/faq/ksh-korn-shell/|What is UNIX / Linux Korn Shell? - 4101|http://www.cyberciti.biz/faq/what-is-posix-shell/|What Is POSIX Shell? 
- 17267|http://www.cyberciti.biz/faq/linux-check-battery-status/|Linux: Check Battery Status Command - 17245|http://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/|Linux Restart NTPD Service Command - 17183|http://www.cyberciti.biz/faq/ubuntu-linux-determine-your-ip-address/|Ubuntu Linux: Determine Your IP Address - 17172|http://www.cyberciti.biz/faq/determine-ip-address-of-linux-server/|HowTo: Determine an IP Address My Linux Server - 16510|http://www.cyberciti.biz/faq/unix-linux-restart-php-service-command/|Linux / Unix: Restart PHP Service Command - 8292|http://www.cyberciti.biz/faq/mounting-harddisks-in-freebsd-with-mount-command/|FreeBSD: Mount Hard Drive / Disk Command - 8190|http://www.cyberciti.biz/faq/rebooting-solaris-unix-server/|Reboot a Solaris UNIX System - -My bash script: - - #!/bin/bash - # Usage: Create pdf files from input (wrapper script) - # Author: Vivek Gite under GPL v2.x+ - #--------------------------------------------------------- - - #Input file - _db="/tmp/wordpress/faq.txt" - - #Output location - o="/var/www/prviate/pdf/faq" - - _writer="~/bin/py/pdfwriter.py" - - # If file exists - if [[ -f "$_db" ]] - then - # read it - while IFS='|' read -r pdfid pdfurl pdftitle - do - local pdf="$o/$pdfid.pdf" - echo "Creating $pdf file ..." 
- #Genrate pdf file - $_writer --quiet --footer-spacing 2 \ - --footer-left "nixCraft is GIT UL++++ W+++ C++++ M+ e+++ d-" \ - --footer-right "Page [page] of [toPage]" --footer-line \ - --footer-font-size 7 --print-media-type "$pdfurl" "$pdf" - done <"$_db" - fi - -**Tip: Read from bash variable** - -Let us say you want a list of all installed php packages on a Debian or Ubuntu Linux, enter: - - # My input source is the contents of a variable called $list # - list=$(dpkg --list php\* | awk '/ii/{print $2}') - printf '%s\n' "$list" - -Sample outputs: - - php-pear - php5-cli - php5-common - php5-fpm - php5-gd - php5-json - php5-memcache - php5-mysql - php5-readline - php5-suhosin-extension - -You can now read from $list and install the package: - - #!/bin/bash - # BASH can iterate over $list variable using a "here string" # - while IFS= read -r pkg - do - printf 'Installing php package %s...\n' "$pkg" - /usr/bin/apt-get -qq install $pkg - done <<< "$list" - printf '*** Do not forget to run php5enmod and restart the server (httpd or php5-fpm) ***\n' - -Sample outputs: - - Installing php package php-pear... - Installing php package php5-cli... - Installing php package php5-common... - Installing php package php5-fpm... - Installing php package php5-gd... - Installing php package php5-json... - Installing php package php5-memcache... - Installing php package php5-mysql... - Installing php package php5-readline... - Installing php package php5-suhosin-extension... 
- *** Do not forget to run php5enmod and restart the server (httpd or php5-fpm) ***
-
---------------------------------------------------------------------------------
-
-via: http://www.cyberciti.biz/faq/unix-howto-read-line-by-line-from-file/
-
-作者:[作者名][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

From b1a1328295ee4624811029e6f03b82485bbc0175 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Fri, 28 Aug 2015 23:34:09 +0800
Subject: [PATCH 357/697] Create 20150827 Linux or UNIX--Bash Read a File Line By Line.md

---
 ... or UNIX--Bash Read a File Line By Line.md | 166 ++++++++++++++++++
 1 file changed, 166 insertions(+)
 create mode 100644 translated/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md

diff --git a/translated/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md b/translated/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md
new file mode 100644
index 0000000000..37473334d1
--- /dev/null
+++ b/translated/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md
@@ -0,0 +1,166 @@
+
+Linux/UNIX: Bash 下如何逐行读取一个文件
+================================================================================
+在 Linux 或类 UNIX 系统下如何使用 KSH 或 BASH shell 逐行读取一个文件?
+
+在 Linux、OSX、*BSD 或者类 Unix 系统下,你可以使用 while..do..done 的 bash 循环来逐行读取一个文件。
+
+**在 Bash Unix 或者 Linux shell 中逐行读取一个文件的语法:**
+
+1. 对于 bash、ksh、zsh 和其他的 shell,语法如下:
+1. while read -r line; do COMMAND; done < input.file
+1. 给 read 命令传递 -r 选项,可以防止反斜杠转义被解释。
+1. 在 read 命令之前添加 IFS=,来防止行首和行尾的空白字符被去掉:
+1. 
while IFS= read -r line; do COMMAND_on $line; done < input.file
+
+这是更适合人类阅读的语法:
+
+    #!/bin/bash
+    input="/path/to/txt/file"
+    while IFS= read -r var
+    do
+        echo "$var"
+    done < "$input"
+
+**示例**
+
+下面是一些例子:
+
+    #!/bin/ksh
+    file="/home/vivek/data.txt"
+    while IFS= read line
+    do
+        # display $line or do something with $line
+        echo "$line"
+    done <"$file"
+
+在 bash shell 中相同的例子:
+
+    #!/bin/bash
+    file="/home/vivek/data.txt"
+    while IFS= read -r line
+    do
+        # display $line or do something with $line
+        printf '%s\n' "$line"
+    done <"$file"
+
+你还可以按字段读取:
+
+    #!/bin/bash
+    file="/etc/passwd"
+    while IFS=: read -r f1 f2 f3 f4 f5 f6 f7
+    do
+        # display fields using f1, f2,..,f7
+        printf 'Username: %s, Shell: %s, Home Dir: %s\n' "$f1" "$f7" "$f6"
+    done <"$file"
+
+示例输出:
+
+![Fig.01: Bash shell scripting- read file line by line demo outputs](http://s0.cyberciti.org/uploads/faq/2011/01/Bash-Scripting-Read-File-line-by-line-demo.jpg)
+
+图01:Bash 脚本:逐行读取文件的演示输出
+
+**Bash 脚本:逐行读取文本文件并创建 pdf 文件**
+
+我的输入文件如下(faq.txt):
+
+    4|http://www.cyberciti.biz/faq/mysql-user-creation/|Mysql User Creation: Setting Up a New MySQL User Account
+    4096|http://www.cyberciti.biz/faq/ksh-korn-shell/|What is UNIX / Linux Korn Shell?
+    4101|http://www.cyberciti.biz/faq/what-is-posix-shell/|What Is POSIX Shell?
+ 17267|http://www.cyberciti.biz/faq/linux-check-battery-status/|Linux: Check Battery Status Command + 17245|http://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/|Linux Restart NTPD Service Command + 17183|http://www.cyberciti.biz/faq/ubuntu-linux-determine-your-ip-address/|Ubuntu Linux: Determine Your IP Address + 17172|http://www.cyberciti.biz/faq/determine-ip-address-of-linux-server/|HowTo: Determine an IP Address My Linux Server + 16510|http://www.cyberciti.biz/faq/unix-linux-restart-php-service-command/|Linux / Unix: Restart PHP Service Command + 8292|http://www.cyberciti.biz/faq/mounting-harddisks-in-freebsd-with-mount-command/|FreeBSD: Mount Hard Drive / Disk Command + 8190|http://www.cyberciti.biz/faq/rebooting-solaris-unix-server/|Reboot a Solaris UNIX System + +我的 bash script: + + #!/bin/bash + # Usage: Create pdf files from input (wrapper script) + # Author: Vivek Gite under GPL v2.x+ + #--------------------------------------------------------- + + #Input file + _db="/tmp/wordpress/faq.txt" + + #Output location + o="/var/www/prviate/pdf/faq" + + _writer="~/bin/py/pdfwriter.py" + + # If file exists + if [[ -f "$_db" ]] + then + # read it + while IFS='|' read -r pdfid pdfurl pdftitle + do + local pdf="$o/$pdfid.pdf" + echo "Creating $pdf file ..." 
+ #Genrate pdf file + $_writer --quiet --footer-spacing 2 \ + --footer-left "nixCraft is GIT UL++++ W+++ C++++ M+ e+++ d-" \ + --footer-right "Page [page] of [toPage]" --footer-line \ + --footer-font-size 7 --print-media-type "$pdfurl" "$pdf" + done <"$_db" + fi + +**提示:从 bash 的变量开始读取** + +让我们看看如何在 Debian 或者 Ubuntu Linux 下列出所有安装过的 php 包,请输入: + + # 我将输出内容赋值到一个变量名为$list中 # + + list=$(dpkg --list php\* | awk '/ii/{print $2}') + printf '%s\n' "$list" + +示例输出: + + php-pear + php5-cli + php5-common + php5-fpm + php5-gd + php5-json + php5-memcache + php5-mysql + php5-readline + php5-suhosin-extension + +你现在可以从 $list 中看到安装的包: + + #!/bin/bash + # BASH can iterate over $list variable using a "here string" # + while IFS= read -r pkg + do + printf 'Installing php package %s...\n' "$pkg" + /usr/bin/apt-get -qq install $pkg + done <<< "$list" + printf '*** Do not forget to run php5enmod and restart the server (httpd or php5-fpm) ***\n' + +示例输出: + + Installing php package php-pear... + Installing php package php5-cli... + Installing php package php5-common... + Installing php package php5-fpm... + Installing php package php5-gd... + Installing php package php5-json... + Installing php package php5-memcache... + Installing php package php5-mysql... + Installing php package php5-readline... + Installing php package php5-suhosin-extension... 
+ + + *** 不要忘了运行php5enmod并重新启动服务(httpd 或 php5-fpm) *** + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/unix-howto-read-line-by-line-from-file/ + +作者:[作者名][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 772a5d000de163d51f039ed4ac0e38f373552541 Mon Sep 17 00:00:00 2001 From: Jerry Ling Date: Fri, 28 Aug 2015 23:38:22 +0800 Subject: [PATCH 358/697] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91=20?= =?UTF-8?q?Debian=20GNU=20or=20Linux=20Birthday--=20A=2022=20Years=20of=20?= =?UTF-8?q?Journey=20and=20Still=20Counting?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 完成翻译 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting Update 20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md fix error --- ... 22 Years of Journey and Still Counting.md | 110 ------------------ ... 22 Years of Journey and Still Counting.md | 109 +++++++++++++++++ 2 files changed, 109 insertions(+), 110 deletions(-) delete mode 100644 sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md create mode 100644 translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md diff --git a/sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md b/sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md deleted file mode 100644 index da750a495a..0000000000 --- a/sources/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md +++ /dev/null @@ -1,110 +0,0 @@ -[jerryling315](https://github.com/jerryling315/) is translating. 
-Debian GNU/Linux Birthday : A 22 Years of Journey and Still Counting… -================================================================================ -On 16th August 2015, the Debian project has celebrated its 22nd anniversary, making it one of the oldest popular distribution in open source world. Debian project was conceived and founded in the year 1993 by Ian Murdock. By that time Slackware had already made a remarkable presence as one of the earliest Linux Distribution. - -![Happy 22nd Birthday to Debian](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-22nd-Birthday.png) - -Happy 22nd Birthday to Debian Linux - -Ian Ashley Murdock, an American Software Engineer by profession, conceived the idea of Debian project, when he was a student of Purdue University. He named the project Debian after the name of his then-girlfriend Debra Lynn (Deb) and his name. He later married her and then got divorced in January 2008. - -![Ian Murdock](http://www.tecmint.com/wp-content/uploads/2014/08/Ian-Murdock.jpeg) - -Debian Creator: Ian Murdock - -Ian is currently serving as Vice President of Platform and Development Community at ExactTarget. - -Debian (as Slackware) was the result of unavailability of up-to mark Linux Distribution, that time. Ian in an interview said – “Providing the first class Product without profit would be the sole aim of Debian Project. Even Linux was not reliable and up-to mark that time. I Remember…. Moving files between file-system and dealing with voluminous file would often result in Kernel Panic. However the project Linux was promising. The availability of Source Code freely and the potential it seemed was qualitative.” - -I remember … like everyone else I wanted to solve problem, run something like UNIX at home, but it was not possible…neither financially nor legally, in the other sense . Then I come to know about GNU kernel Development and its non-association with any kind of legal issues, he added. 
He was sponsored by Free Software Foundation (FSF) in the early days when he was working on Debian, it also helped Debian to take a giant step though Ian needed to finish his degree and hence quited FSF roughly after one year of sponsorship. - -### Debian Development History ### - -- **Debian 0.01 – 0.09** : Released between August 1993 – December 1993. -- **Debian 0.91 ** – Released in January 1994 with primitive package system, No dependencies. -- **Debian 0.93 rc5** : March 1995. It is the first modern release of Debian, dpkg was used to install and maintain packages after base system installation. -- **Debian 0.93 rc6**: Released in November 1995. It was last a.out release, deselect made an appearance for the first time – 60 developers were maintaining packages, then at that time. -- **Debian 1.1**: Released in June 1996. Code name – Buzz, Packages count – 474, Package Manager dpkg, Kernel 2.0, ELF. -- **Debian 1.2**: Released in December 1996. Code name – Rex, Packages count – 848, Developers Count – 120. -- **Debian 1.3**: Released in July 1997. Code name – Bo, package count 974, Developers count – 200. -- **Debian 2.0**: Released in July 1998. Code name: Hamm, Support for architecture – Intel i386 and Motorola 68000 series, Number of Packages: 1500+, Number of Developers: 400+, glibc included. -- **Debian 2.1**: Released on March 09, 1999. Code name – slink, support architecture Alpha and Sparc, apt came in picture, Number of package – 2250. -- **Debian 2.2**: Released on August 15, 2000. Code name – Potato, Supported architecture – Intel i386, Motorola 68000 series, Alpha, SUN Sparc, PowerPC and ARM architecture. Number of packages: 3900+ (binary) and 2600+ (Source), Number of Developers – 450. There were a group of people studied and came with an article called Counting potatoes, which shows – How a free software effort could lead to a modern operating system despite all the issues around it. -- **Debian 3.0** : Released on July 19th, 2002. 
Code name – woody, Architecture supported increased– HP, PA_RISC, IA-64, MIPS and IBM, First release in DVD, Package Count – 8500+, Developers Count – 900+, Cryptography. -- **Debian 3.1**: Release on June 6th, 2005. Code name – sarge, Architecture support – same as woody + AMD64 – Unofficial Port released, Kernel – 2.4 qnd 2.6 series, Number of Packages: 15000+, Number of Developers : 1500+, packages like – OpenOffice Suite, Firefox Browser, Thunderbird, Gnome 2.8, kernel 3.3 Advanced Installation Support: RAID, XFS, LVM, Modular Installer. -- **Debian 4.0**: Released on April 8th, 2007. Code name – etch, architecture support – same as sarge, included AMD64. Number of packages: 18,200+ Developers count : 1030+, Graphical Installer. -- **Debian 5.0**: Released on February 14th, 2009. Code name – lenny, Architecture Support – Same as before + ARM. Number of packages: 23000+, Developers Count: 1010+. -- **Debian 6.0** : Released on July 29th, 2009. Code name – squeeze, Package included : kernel 2.6.32, Gnome 2.3. Xorg 7.5, DKMS included, Dependency-based. Architecture : Same as pervious + kfreebsd-i386 and kfreebsd-amd64, Dependency based booting. -- **Debian 7.0**: Released on may 4, 2013. Code name: wheezy, Support for Multiarch, Tools for private cloud, Improved Installer, Third party repo need removed, full featured multimedia-codec, Kernel 3.2, Xen Hypervisor 4.1.4 Package Count: 37400+. -- **Debian 8.0**: Released on May 25, 2015 and Code name: Jessie, Systemd as the default init system, powered by Kernel 3.16, fast booting, cgroups for services, possibility of isolating part of the services, 43000+ packages. Sysvinit init system available in Jessie. - -**Note**: Linux Kernel initial release was on October 05, 1991 and Debian initial release was on September 15, 1993. So, Debian is there for 22 Years running Linux Kernel which is there for 24 years. 
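The note's comparison of the two projects' ages can be double-checked with a short shell sketch (release years taken from the text above; ages counted by calendar year, the way the article counts them):

```shell
#!/bin/sh
# Sanity-check of the ages quoted in the note above: the 2015 anniversary
# year minus the founding/release years given in the text.
anniversary_year=2015
debian_founded=1993    # Debian project founded in 1993
linux_released=1991    # first Linux kernel release in 1991

echo "Debian: $(( anniversary_year - debian_founded )) years"
echo "Linux: $(( anniversary_year - linux_released )) years"
```

This reproduces the "22 years on a 24-year-old kernel" claim by year counting; exact day-of-year differences would trim each figure slightly.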
- -### Debian Facts ### - -Year 1994 was spent on organizing and managing Debian project so that it would be easy for others to contribute. Hence no release for users were made this year however there were certain internal release. - -Debian 1.0 was never released. A CDROM manufacturer company by mistakenly labelled an unreleased version as Debian 1.0. Hence to avoid confusion Debian 1.0 was released as Debian 1.1 and since then only the concept of official CDROM images came into existence. - -Each release of Debian is a character of Toy Story. - -Debian remains available in old stable, stable, testing and experimental, all the time. - -The Debian Project continues to work on the unstable distribution (codenamed sid, after the evil kid from the Toy Story). Sid is the permanent name for the unstable distribution and is remains ‘Still In Development’. The testing release is intended to become the next stable release and is currently codenamed jessie. - -Debian official distribution includes only Free and OpenSource Software and nothing else. However the availability of contrib and Non-free Packages makes it possible to install those packages which are free but their dependencies are not licensed free (contrib) and Packages licensed under non-free softwares. - -Debian is the mother of a lot of Linux distribution. Some of these Includes: - -- Damn Small Linux -- KNOPPIX -- Linux Advanced -- MEPIS -- Ubuntu -- 64studio (No more active) -- LMDE - -Debian is the world’s largest non commercial Linux Distribution. It is written in C (32.1%) programming language and rest in 70 other languages. - -![Debian Contribution](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-Programming.png) - -Debian Contribution - -Image Source: [Xmodulo][1] - -Debian project contains 68.5 million actual loc (lines of code) + 4.5 million lines of comments and white spaces. 
- -International Space station dropped Windows & Red Hat for adopting Debian – These astronauts are using one release back – now “squeeze” for stability and strength from community. - -Thank God! Who would have heard the scream from space on Windows Metro Screen :P - -#### The Black Wednesday #### - -On November 20th, 2002 the University of Twente Network Operation Center (NOC) caught fire. The fire department gave up protecting the server area. NOC hosted satie.debian.org which included Security, non-US archive, New Maintainer, quality assurance, databases – Everything was turned to ashes. Later these services were re-built by debian. - -#### The Future Distro #### - -Next in the list is Debian 9, code name – Stretch, what it will have is yet to be revealed. The best is yet to come, Just Wait for it! - -A lot of distribution made an appearance in Linux Distro genre and then disappeared. In most cases managing as it gets bigger was a concern. But certainly this is not the case with Debian. It has hundreds of thousands of developer and maintainer all across the globe. It is a one Distro which was there from the initial days of Linux. - -The contribution of Debian in Linux ecosystem can’t be measured in words. If there had been no Debian, Linux would not have been so rich and user-friendly. Debian is among one of the disto which is considered highly reliable, secure and stable and a perfect choice for Web Servers. - -That’s the beginning of Debian. It came a long way and still going. The Future is Here! The world is here! If you have not used Debian till now, What are you Waiting for. Just Download Your Image and get started, we will be here if you get into trouble. 
-
-- [Debian Homepage][2]
-
--------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/happy-birthday-to-debian-gnu-linux/
-
-作者:[Avishek Kumar][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/avishek/
-[1]:http://xmodulo.com/2013/08/interesting-facts-about-debian-linux.html
-[2]:https://www.debian.org/

diff --git a/translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md b/translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md
new file mode 100644
index 0000000000..1c92079b57
--- /dev/null
+++ b/translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md
@@ -0,0 +1,109 @@
+Debian GNU/Linux 生日: 22年未完的美妙旅程
+================================================================================
+在2015年8月16日, Debian 项目组庆祝了 Debian 的22周年纪念日; 这也是开源世界历史最悠久、最热门的发行版之一. Debian 项目于1993年由 Ian Murdock 创立. 彼时, Slackware 作为最早的 Linux 发行版已经名声在外.
+
+![Happy 22nd Birthday to Debian](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-22nd-Birthday.png)
+
+22岁生日快乐! Debian Linux!
+
+Ian Ashley Murdock, 一个美国职业软件工程师, 在他还是普渡大学的学生时构想出了 Debian 项目的计划. 他把这个项目命名为 Debian 是由于这个名字组合了他彼时女友的名字 Debra Lynn 和他自己的名字(译者: Ian). 他之后和 Lynn 顺利结婚并在2008年1月离婚.
+
+![Ian Murdock](http://www.tecmint.com/wp-content/uploads/2014/08/Ian-Murdock.jpeg)
+
+Debian 创始人:Ian Murdock
+
+Ian 目前是 ExactTarget 下 Platform and Development Community 的副总裁.
+
+Debian (如同 Slackware 一样) 都是由于当时缺乏满足作者标准的发行版才应运而生的. Ian 在一次采访中说:"免费提供一流的产品会是 Debian 项目的唯一使命. 尽管过去的 Linux 发行版均不尽然可靠抑或是优秀. 我印象里...比如在不同的文件系统间移动文件, 处理大型文件经常会导致内核出错. 但是 Linux 其实是很可靠的, 免费的源代码让这个项目本质上很有前途.
+
+"我记得过去我也像其他人一样想解决问题, 想在家里运行一个像 UNIX 的东西. 但那是不可能的, 无论是经济上还是法律上或是别的什么角度. 然后我就听闻了 GNU 内核开发项目, 以及这个项目是如何没有任何法律纷争", Ian 补充道.
他早年在开发 Debian 时曾被自由软件基金会(FSF)资助, 这份资助帮助 Debian 向前迈了一大步; 尽管一年后由于学业原因 Ian 退出了 FSF, 转而去完成他的学位.
+
+### Debian 开发历史 ###
+
+- **Debian 0.01 – 0.09**: 发布于1993年8月 – 1993年12月.
+- **Debian 0.91**: 发布于1994年1月. 有了原始的包管理系统, 没有依赖管理机制.
+- **Debian 0.93 rc5**: 发布于1995年3月. "现代"意义上的 Debian 的第一次发布, 从此系统安装后会用 dpkg 来安装以及管理其他软件包.
+- **Debian 0.93 rc6**: 发布于1995年11月. 最后一次使用 a.out 格式的发布, dselect 第一次出现, 有60位开发者在彼时维护着软件包.
+- **Debian 1.1**: 发布于1996年6月. 项目代号 – Buzz, 软件包数量 – 474, 包管理器 dpkg, 内核版本 2.0, 采用 ELF 格式.
+- **Debian 1.2**: 发布于1996年12月. 项目代号 – Rex, 软件包数量 – 848, 开发者数量 – 120.
+- **Debian 1.3**: 发布于1997年7月. 项目代号 – Bo, 软件包数量 – 974, 开发者数量 – 200.
+- **Debian 2.0**: 发布于1998年7月. 项目代号 – Hamm, 支持构架 – Intel i386 以及 Motorola 68000 系列, 软件包数量: 1500+, 开发者数量: 400+, 内置了 glibc.
+- **Debian 2.1**: 发布于1999年3月9日. 项目代号 – slink, 新增支持构架 – Alpha 和 Sparc, apt 包管理器开始成型, 软件包数量 – 2250.
+- **Debian 2.2**: 发布于2000年8月15日. 项目代号 – Potato, 支持构架 – Intel i386, Motorola 68000 系列, Alpha, SUN Sparc, PowerPC 以及 ARM. 软件包数量: 3900+ (二进制) 以及 2600+ (源代码), 开发者数量 – 450. 有一群人在那时研究并发表了一篇论文, 展示了自由软件是如何在被各种问题包围的情况下依然逐步成长为优秀的现代操作系统的.
+- **Debian 3.0**: 发布于2002年7月19日. 项目代号 – woody, 新增支持构架 – HP PA-RISC, IA-64, MIPS 以及 IBM S/390, 首次以 DVD 的形式发布, 软件包数量 – 8500+, 开发者数量 – 900+, 支持加密.
+- **Debian 3.1**: 发布于2005年6月6日. 项目代号 – sarge, 支持构架 – 不变的基础上新增 AMD64(非官方渠道发布), 内核 – 2.4 以及 2.6 系列, 软件包数量: 15000+, 开发者数量: 1500+, 新增了诸如 OpenOffice 套件, Firefox 浏览器, Thunderbird, Gnome 2.8 等软件, 并提供了对 RAID, XFS, LVM 的高级支持以及模块化安装器.
+- **Debian 4.0**: 发布于2007年4月8日. 项目代号 – etch, 支持构架 – 不变的基础上正式支持 AMD64. 软件包数量: 18200+, 开发者数量: 1030+, 提供图形化安装器.
+- **Debian 5.0**: 发布于2009年2月14日. 项目代号 – lenny, 支持构架 – 不变的基础上新增 ARM. 软件包数量: 23000+, 开发者数量: 1010+.
+- **Debian 6.0**: 发布于2011年2月6日. 项目代号 – squeeze, 包含的软件: 内核 2.6.32, GNOME 2.30, Xorg 7.5, 同时包含了 DKMS, 支持基于依赖关系的启动过程. 支持构架: 不变的基础上新增 kfreebsd-i386 以及 kfreebsd-amd64.
+- **Debian 7.0**: 发布于2013年5月4日. 项目代号 – wheezy, 支持 Multiarch, 提供私有云工具, 升级了安装器, 移除了第三方软件依赖, 提供完整的多媒体编解码器(codec)支持, 内核版本 3.2, Xen Hypervisor 4.1.4, 软件包数量: 37400+.
+- **Debian 8.0**: 发布于2015年4月25日. 项目代号 – Jessie, 将 systemd 作为默认的初始化系统, 内核版本 3.16, 增加了快速启动(fast booting), 基于 cgroups 使隔离部分服务进程成为可能, 软件包数量: 43000+. 在 Jessie 中仍可选用 Sysvinit 初始化工具.
+
+**注意**: Linux 内核的第一次发布是在1991年10月5日, 而 Debian 的首次发布则是在1993年9月13日. 所以说, Debian 已经在年仅24岁的 Linux 内核上运行了整整22年.
+
+### 有关 Debian 的小知识 ###
+
+1994年主要用来组织和重整 Debian 项目, 以便其他开发者能更好地加入. 所以那一年并没有发布面向用户的版本, 当然, 内部版本肯定是有的.
+
+Debian 1.0 从来就没有被发布过. 一家 CD-ROM 生产商错误地把某个未发布的版本标注成了 1.0, 为了避免产生混乱, 原本的 Debian 1.0 以 1.1 的版本号发布了. 从那以后才有了所谓官方 CD-ROM 的概念.
+
+每个 Debian 新版本的代号都取自玩具总动员里某个角色的名字.
+
+Debian 有四种可用版本: 旧稳定版(oldstable), 稳定版(stable), 测试版(testing)以及试验版(experimental), 一直都是如此.
+
+Debian 项目组一直致力于开发下一代发行版的不稳定版本, 这个不稳定版本始终被叫做 Sid(玩具总动员里那个邪恶的臭小孩). Sid 是 unstable 版本的永久名称, 同时 Sid 也取自 "Still In Development"(译者: 还在开发中)的首字母缩写. Sid 将会成为下一个稳定版, 此时的下一个稳定版本代号为 jessie.
+
+Debian 的官方发行版只包含开源且自由的软件, 绝无其他东西. 不过 contrib 和 non-free 仓库使得安装那些本身自由、但其依赖并不自由的软件成为可能. 那些被依赖的软件包的许可证可能不属于自由软件许可证.
+
+Debian 是许多 Linux 发行版的母亲. 举几个例子:
+
+- Damn Small Linux
+- KNOPPIX
+- Linux Advanced
+- MEPIS
+- Ubuntu
+- 64studio (不再活跃开发)
+- LMDE
+
+Debian 是世界上最大的非商业 Linux 发行版. 它主要是用 C 语言编写的(32.1%), 另外还用到了70多种其他语言.
+
+![Debian 开发语言贡献表](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-Programming.png)
+
+Debian Contribution
+
+图片来源: [Xmodulo][1]
+
+Debian 项目包含6850万行代码, 以及450万行空格和注释.
+
+国际空间站放弃了 Windows 和红帽, 转而使用 Debian; 空间站上的宇航员使用落后一个版本的稳定发行版, 目前是 squeeze, 这么做是为了获得稳定性以及来自 Debian 社区的雄厚支持.
+
+感谢上帝! 我们差点就听到国际空间站宇航员面对 Windows Metro 界面的尖叫了 :P
+
+#### 黑色星期三 ####
+
+2002年11月20日, Twente 大学的网络运营中心(NOC)发生火灾, 当地消防部门放弃了对服务器区域的保护. NOC 托管着 satie.debian.org, 上面有安全档案、非美国(non-US)档案、新维护者资料、质量保证报告以及数据库; 这一切都化为了灰烬. 之后 Debian 重建了这些服务.
+
+#### 未来版本 ####
+
+下一个待发布版本是 Debian 9, 项目代号 – Stretch, 它会带来什么还是个未知数. 最好的还在后面, 满心期待吧!
+
+有很多发行版在 Linux 的历史上昙花一现, 很快就消失了. 在多数情况下, 项目日渐庞大后的管理是开发者们面临的难题, 但这对 Debian 来说不是问题. Debian 项目有遍布全球的成百上千的开发者和维护者, 它从 Linux 诞生之初起便一直存在.
+
+Debian 在 Linux 生态环境中的贡献是难以用语言描述的. 如果 Debian 没有出现过, 那么 Linux 世界将不会像现在这样丰富和用户友好.
Debian 是为数不多的被认为安全、可靠又稳定的发行版, 也是作为网络服务器的完美选择.
+
+这仅仅是 Debian 的开始. 它从远古时代一路走到今天, 并将一直走下去. 未来即是现在! 世界近在眼前! 如果你到现在还从来没有使用过 Debian, 我只想问, 你还在等什么? 快去下载一份镜像试试吧, 我们会在此守候遇到任何问题的你.
+
+- [Debian 主页][2]
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/happy-birthday-to-debian-gnu-linux/
+
+作者:[Avishek Kumar][a]
+译者:[jerryling315](http://moelf.xyz)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/avishek/
+[1]:http://xmodulo.com/2013/08/interesting-facts-about-debian-linux.html
+[2]:https://www.debian.org/
From f61c12b96c475ae4701c362e645b8504b26b689b Mon Sep 17 00:00:00 2001
From: Luoyuanhao
Date: Sat, 29 Aug 2015 09:34:10 +0800
Subject: [PATCH 359/697] [Translated] tech/RHCE/Part 5 - How to Manage System
 Logs (Configure, Rotate and Import Into Database) in RHEL 7.md

---
 ...ate and Import Into Database) in RHEL 7.md | 166 -----------------
 ...ate and Import Into Database) in RHEL 7.md | 169 ++++++++++++++++++
 2 files changed, 169 insertions(+), 166 deletions(-)
 delete mode 100644 sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md
 create mode 100644 translated/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md

diff --git a/sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md b/sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md
deleted file mode 100644
index 8f3370f741..0000000000
--- a/sources/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md
+++ /dev/null
@@ -1,166 +0,0 @@
-ictlyh Translating
-Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7
-================================================================================
-In order to keep your RHEL 7 systems secure, you need to know how to monitor all of the activities that take place on such systems by examining log files. Thus, you will be able to detect any unusual or potentially malicious activity and perform system troubleshooting or take other appropriate action.
-
-![Linux Rotate Log Files Using Rsyslog and Logrotate](http://www.tecmint.com/wp-content/uploads/2015/08/Manage-and-Rotate-Linux-Logs-Using-Rsyslog-Logrotate.jpg)
-
-RHCE Exam: Manage System Logs Using Rsyslogd and Logrotate – Part 5
-
-In RHEL 7, the [rsyslogd][1] daemon is responsible for system logging and reads its configuration from /etc/rsyslog.conf (this file specifies the default location for all system logs) and from files inside /etc/rsyslog.d, if any.
-
-### Rsyslogd Configuration ###
-
-A quick inspection of [rsyslog.conf][2] will be helpful to start. This file is divided into 3 main sections: Modules (since rsyslog follows a modular design), Global directives (used to set global properties of the rsyslogd daemon), and Rules. As you will probably guess, this last section indicates what gets logged or shown (also known as the selector) and where, and will be our focus throughout this article.
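For reference, a heavily stripped-down skeleton of the file might look like this (an illustrative fragment only — the directives shown are common RHEL 7 defaults, and your stock file will contain many more entries):

```
#### MODULES ####
$ModLoad imuxsock    # local system logging (e.g. via the logger command)
$ModLoad imjournal   # access to the systemd journal

#### GLOBAL DIRECTIVES ####
$WorkDirectory /var/lib/rsyslog

#### RULES ####
*.info;mail.none;authpriv.none;cron.none    /var/log/messages
authpriv.*                                  /var/log/secure
```

The Rules section at the bottom is what the rest of this article dissects.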
-
-A typical line in rsyslog.conf is as follows:
-
-![Rsyslogd Configuration](http://www.tecmint.com/wp-content/uploads/2015/08/Rsyslogd-Configuration.png)
-
-Rsyslogd Configuration
-
-In the image above, we can see that a selector consists of one or more Facility:Priority pairs separated by semicolons, where Facility describes the type of message (refer to [section 4.1.1 in RFC 3164][3] to see the complete list of facilities available for rsyslog) and Priority indicates its severity, which can be one of the following self-explanatory words:
-
-- debug
-- info
-- notice
-- warning
-- err
-- crit
-- alert
-- emerg
-
-Though not a priority itself, the keyword none means no priority at all for the given facility.
-
-**Note**: A given priority indicates that all messages of such priority and above should be logged. Thus, the line in the example above instructs the rsyslogd daemon to log all messages of priority info or higher (regardless of the facility) to /var/log/messages, except those belonging to the mail, authpriv, and cron services (no messages coming from these facilities will be taken into account).
-
-You can also group multiple facilities using commas to apply the same priority to all of them. Thus, the line:
-
-    *.info;mail.none;authpriv.none;cron.none /var/log/messages
-
-Could be rewritten as
-
-    *.info;mail,authpriv,cron.none /var/log/messages
-
-In other words, the facilities mail, authpriv, and cron are grouped and the keyword none is applied to the three of them.
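To make the "priority and above" rule concrete, here is a small shell sketch (a toy model for learning purposes, not rsyslog's actual parser; the numeric severities follow the syslog convention where emerg is 0 and debug is 7):

```shell
#!/bin/sh
# Toy model of rsyslog's "this priority and above" matching.
# Numeric severities follow the syslog convention: emerg=0 ... debug=7.
sev() {
    case "$1" in
        emerg)   echo 0 ;;
        alert)   echo 1 ;;
        crit)    echo 2 ;;
        err)     echo 3 ;;
        warning) echo 4 ;;
        notice)  echo 5 ;;
        info)    echo 6 ;;
        debug)   echo 7 ;;
    esac
}

# matches SELECTOR_PRIORITY MESSAGE_PRIORITY
# prints "yes" if a message of that priority would be logged by the selector
matches() {
    if [ "$(sev "$2")" -le "$(sev "$1")" ]; then echo yes; else echo no; fi
}

matches info notice   # prints "yes": notice is more severe than info
matches info debug    # prints "no": debug is below the info threshold
```

So with the `*.info` selector above, a `daemon.notice` message ends up in /var/log/messages while a `daemon.debug` message does not.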
-
-#### Creating a custom log file ####
-
-To log all daemon messages to /var/log/tecmint.log, we need to add the following line either in rsyslog.conf or in a separate file (easier to manage) inside /etc/rsyslog.d:
-
-    daemon.* /var/log/tecmint.log
-
-Let’s restart the daemon (note that the service name does not end with a d):
-
-    # systemctl restart rsyslog
-
-And check the contents of our custom log before and after restarting two random daemons:
-
-![Linux Create Custom Log File](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Custom-Log-File.png)
-
-Create Custom Log File
-
-As a self-study exercise, I would recommend you play around with the facilities and priorities and either log additional messages to existing log files or create new ones as in the previous example.
-
-### Rotating Logs using Logrotate ###
-
-To prevent log files from growing endlessly, the logrotate utility is used to rotate, compress, remove, and optionally mail logs, thus easing the administration of systems that generate large numbers of log files.
-
-Logrotate runs daily as a cron job (/etc/cron.daily/logrotate) and reads its configuration from /etc/logrotate.conf and from files located in /etc/logrotate.d, if any.
-
-As with rsyslog, even though you can include settings for specific services in the main file, creating separate configuration files for each one will help organize your settings better.
-
-Let’s take a look at a typical logrotate.conf:
-
-![Logrotate Configuration](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Configuration.png)
-
-Logrotate Configuration
-
-In the example above, logrotate will perform the following actions for /var/log/wtmp: attempt to rotate only once a month, but only if the file is at least 1 MB in size, then create a brand new log file with permissions set to 0664 and ownership given to user root and group utmp.
Next, only keep one archived log, as specified by the rotate directive:
-
-![Logrotate Logs Monthly](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Logs-Monthly.png)
-
-Logrotate Logs Monthly
-
-Let’s now consider another example as found in /etc/logrotate.d/httpd:
-
-![Rotate Apache Log Files](http://www.tecmint.com/wp-content/uploads/2015/08/Rotate-Apache-Log-Files.png)
-
-Rotate Apache Log Files
-
-You can read more about the settings for logrotate in its man pages ([man logrotate][4] and [man logrotate.conf][5]). Both files are provided along with this article in PDF format for your reading convenience.
-
-As a system engineer, it will be pretty much up to you to decide for how long logs will be stored and in what format, depending on whether you have /var in a separate partition / logical volume. Otherwise, you really want to consider removing old logs to save storage space. On the other hand, you may be forced to keep several logs for future security auditing according to your company’s or client’s internal policies.
-
-#### Saving Logs to a Database ####
-
-Of course, examining logs (even with the help of tools such as grep and regular expressions) can become a rather tedious task. For that reason, rsyslog allows us to export them into a database (out-of-the-box supported RDBMS include MySQL, MariaDB, PostgreSQL, and Oracle).
- -This section of the tutorial assumes that you have already installed the MariaDB server and client in the same RHEL 7 box where the logs are being managed: - - # yum update && yum install mariadb mariadb-server mariadb-client rsyslog-mysql - # systemctl enable mariadb && systemctl start mariadb - -Then use the `mysql_secure_installation` utility to set the password for the root user and other security considerations: - -![Secure MySQL Database](http://www.tecmint.com/wp-content/uploads/2015/08/Secure-MySQL-Database.png) - -Secure MySQL Database - -Note: If you don’t want to use the MariaDB root user to insert log messages to the database, you can configure another user account to do so. Explaining how to do that is out of the scope of this tutorial but is explained in detail in [MariaDB knowledge][6] base. In this tutorial we will use the root account for simplicity. - -Next, download the createDB.sql script from [GitHub][7] and import it into your database server: - - # mysql -u root -p < createDB.sql - -![Save Server Logs to Database](http://www.tecmint.com/wp-content/uploads/2015/08/Save-Server-Logs-to-Database.png) - -Save Server Logs to Database - -Finally, add the following lines to /etc/rsyslog.conf: - - $ModLoad ommysql - $ActionOmmysqlServerPort 3306 - *.* :ommysql:localhost,Syslog,root,YourPasswordHere - -Restart rsyslog and the database server: - - # systemctl restart rsyslog - # systemctl restart mariadb - -#### Querying the Logs using SQL syntax #### - -Now perform some tasks that will modify the logs (like stopping and starting services, for example), then log to your DB server and use standard SQL commands to display and search in the logs: - - USE Syslog; - SELECT ReceivedAt, Message FROM SystemEvents; - -![Query Logs in Database](http://www.tecmint.com/wp-content/uploads/2015/08/Query-Logs-in-Database.png) - -Query Logs in Database - -### Summary ### - -In this article we have explained how to set up system logging, how to rotate logs, and how 
to redirect the messages to a database for easier search. We hope that these skills will be helpful as you prepare for the [RHCE exam][8] and in your daily responsibilities as well. - -As always, your feedback is more than welcome. Feel free to use the form below to reach us. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/manage-linux-system-logs-using-rsyslogd-and-logrotate/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/wp-content/pdf/rsyslogd.pdf -[2]:http://www.tecmint.com/wp-content/pdf/rsyslog.conf.pdf -[3]:https://tools.ietf.org/html/rfc3164#section-4.1.1 -[4]:http://www.tecmint.com/wp-content/pdf/logrotate.pdf -[5]:http://www.tecmint.com/wp-content/pdf/logrotate.conf.pdf -[6]:https://mariadb.com/kb/en/mariadb/create-user/ -[7]:https://github.com/sematext/rsyslog/blob/master/plugins/ommysql/createDB.sql -[8]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/ \ No newline at end of file diff --git a/translated/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md b/translated/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md new file mode 100644 index 0000000000..a37c9610fd --- /dev/null +++ b/translated/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md @@ -0,0 +1,169 @@ +第五部分 - 如何在 RHEL 7 中管理系统日志(配置、旋转以及导入到数据库) +================================================================================ +为了确保你的 RHEL 7 系统安全,你需要通过查看日志文件监控系统中发生的所有活动。这样,你就可以检测任何不正常或有潜在破坏的活动并进行系统故障排除或者其它恰当的操作。 + +![Linux 中使用 Rsyslog 和 Logrotate 
旋转日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Manage-and-Rotate-Linux-Logs-Using-Rsyslog-Logrotate.jpg)
+
+(译者注:[日志旋转][9]是系统管理中归档每天产生的日志文件的自动化过程)
+
+RHCE 考试 - 第五部分:使用 Rsyslog 和 Logrotate 管理系统日志
+
+在 RHEL 7 中,[rsyslogd][1] 守护进程负责系统日志,它从 /etc/rsyslog.conf(该文件指定所有系统日志的默认路径)和 /etc/rsyslog.d 中的所有文件(如果有的话)读取配置信息。
+
+### Rsyslogd 配置 ###
+
+快速浏览一下 [rsyslog.conf][2] 会是一个好的开端。该文件分为 3 个主要部分:模块(rsyslog 按照模块化设计)、全局指令(用于设置 rsyslogd 守护进程的全局属性)以及规则。正如你可能猜想的,最后一个部分指明记录什么内容、记录到哪里(也称为选择子),这也是这篇博文关注的重点。
+
+rsyslog.conf 中典型的一行如下所示:
+
+![Rsyslogd 配置](http://www.tecmint.com/wp-content/uploads/2015/08/Rsyslogd-Configuration.png)
+
+Rsyslogd 配置
+
+在上面的图片中,我们可以看到一个选择子包括了一个或多个用分号分隔的设备:优先级(Facility:Priority)对,其中设备描述了消息类型(参考 [RFC 3164 4.1.1 章节][3] 查看 rsyslog 可用的完整设备列表),优先级指示它的严重性,这可能是以下几种之一:
+
+- debug
+- info
+- notice
+- warning
+- err
+- crit
+- alert
+- emerg
+
+尽管自身并不是一个优先级,关键字 none 意味着指定设备没有任何优先级。
+
+**注意**:给定一个优先级表示该优先级以及之上的消息都应该记录到日志中。因此,上面例子中的行指示 rsyslogd 守护进程把所有优先级为 info 以及以上(不管是什么设备)的消息记录到 /var/log/messages,但属于 mail、authpriv 以及 cron 服务的消息除外(不考虑来自这些设备的消息)。
+
+你也可以使用逗号将多个设备分为一组,对同组中的设备使用相同的优先级。例如下面这行:
+
+    *.info;mail.none;authpriv.none;cron.none /var/log/messages
+
+也可以这样写:
+
+    *.info;mail,authpriv,cron.none /var/log/messages
+
+换句话说,mail、authpriv 以及 cron 被分为一组,并使用关键字 none。
+
+#### 创建自定义日志文件 ####
+
+要把所有的守护进程消息记录到 /var/log/tecmint.log,我们需要在 rsyslog.conf 或者 /etc/rsyslog.d 目录中的单独文件(这样易于管理)添加下面一行:
+
+    daemon.* /var/log/tecmint.log
+
+然后重启守护进程(注意服务名称不以 d 结尾):
+
+    # systemctl restart rsyslog
+
+在随机重启两个守护进程之前和之后查看自定义日志的内容:
+
+![Linux 创建自定义日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Custom-Log-File.png)
+
+创建自定义日志文件
+
+作为一个自学练习,我建议你重点关注设备和优先级,在已有的日志文件中记录额外的消息,或者像上面的例子那样创建一个新的日志文件。
+
+### 使用 Logrotate 旋转日志 ###
+
+为了防止日志文件无限制增长,logrotate 工具用于旋转、压缩、移除或者通过电子邮件发送日志,从而减轻管理那些会产生大量日志文件的系统的负担。
+
+Logrotate 作为一个 cron 作业(/etc/cron.daily/logrotate)每天运行,并从 /etc/logrotate.conf 和 /etc/logrotate.d 中的文件(如果有的话)读取配置信息。
+
+和 rsyslog 的情况一样,即使你可以在主文件中包含特定服务的设置,为每个服务创建单独的配置文件也能帮助你更好地组织设置。
+
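下面用一个可以直接运行的小例子演示这种按服务拆分配置文件的做法(仅作示意:这里用临时目录代替 /etc/logrotate.d 以避免需要 root 权限,文件名 tecmint、日志路径和各项取值都沿用上文的例子,只是示例值):

```shell
#!/bin/sh
# 示意:为上文的自定义日志单独写一个 logrotate 配置片段。
# 用临时目录代替 /etc/logrotate.d,避免需要 root 权限。
set -e
logrotate_d=$(mktemp -d)

cat > "$logrotate_d/tecmint" <<'EOF'
# /var/log/tecmint.log 的旋转策略(示例值)
/var/log/tecmint.log {
    monthly
    minsize 1M
    rotate 4
    compress
    missingok
}
EOF

cat "$logrotate_d/tecmint"
# 真实环境中,把该文件放入 /etc/logrotate.d/ 即可被每日的 cron 任务读取
```

其中 monthly 加 minsize 1M 的组合就对应上文描述的"每月尝试旋转一次,但文件至少要有 1MB"的语义。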
+让我们来看一个典型的 logrotate.conf: + +![Logrotate 配置](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Configuration.png) + +Logrotate 配置 + +在上面的例子中,logrotate 会为 /var/log/wtmp 进行以下操作:尝试每个月旋转一次,但至少文件要大于 1MB,然后用 0664 权限、用户 root、组 utmp 创建一个新的日志文件。下一步只保存一个归档日志,正如旋转指令指定的: + +![每月 Logrotate 日志](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Logs-Monthly.png) + +每月 Logrotate 日志 + +让我们再来看看 /etc/logrotate.d/httpd 中的另一个例子: + +![旋转 Apache 日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Rotate-Apache-Log-Files.png) + +旋转 Apache 日志文件 + +你可以在 logrotate 的 man 手册([man logrotate][4] 和 [man logrotate.conf][5])中阅读更多有关它的设置。为了方便你的阅读,本文还提供了两篇文章的 PDF 格式。 + +作为一个系统工程师,很可能由你决定多久按照什么格式保存一次日志,取决于你是否有一个单独的分区/逻辑卷给 /var。否则,你真的要考虑删除旧日志以节省存储空间。另一方面,根据你公司和客户内部的政策,为了以后的安全审核,你可能被迫要保留多个日志。 + +#### 保存日志到数据库 #### + +当然检查日志可能是一个很繁琐的工作(即使有类似 grep 工具和正则表达式的帮助)。因为这个原因,rsyslog 允许我们把它们导出到数据库(OTB 支持的关系数据库管理系统包括 MySQL、MariaDB、PostgreSQL 和 Oracle)。 + +指南的这部分假设你已经在要管理日志的 RHEL 7 上安装了 MariaDB 服务器和客户端: + + # yum update && yum install mariadb mariadb-server mariadb-client rsyslog-mysql + # systemctl enable mariadb && systemctl start mariadb + +然后使用 `mysql_secure_installation` 工具为 root 用户设置密码以及其它安全考量: + + +![保证 MySQL 数据库安全](http://www.tecmint.com/wp-content/uploads/2015/08/Secure-MySQL-Database.png) + +保证 MySQL 数据库安全 + +注意:如果你不想用 MariaDB root 用户插入日志消息到数据库,你也可以配置用另一个用户账户。如何实现的介绍已经超出了本文的范围,但在 [MariaDB 知识][6] 中有详细解析。为了简单在这篇指南中我们会使用 root 账户。 + +下一步,从 [GitHub][7] 下载 createDB.sql 脚本并导入到你的数据库服务器: + + # mysql -u root -p < createDB.sql + +![保存服务器日志到数据库](http://www.tecmint.com/wp-content/uploads/2015/08/Save-Server-Logs-to-Database.png) + +保存服务器日志到数据库 + +最后,添加下面的行到 /etc/rsyslog.conf: + + $ModLoad ommysql + $ActionOmmysqlServerPort 3306 + *.* :ommysql:localhost,Syslog,root,YourPasswordHere + +重启 rsyslog 和数据库服务器: + + # systemctl restart rsyslog + # systemctl restart mariadb + +#### 使用 SQL 语法查询日志 #### + +现在执行一些会改变日志的操作(例如停止和启动服务),然后登陆到你的 DB 服务器并使用标准的 SQL 命令显示和查询日志: + + USE Syslog; + SELECT ReceivedAt, 
Message FROM SystemEvents;
+
+![在数据库中查询日志](http://www.tecmint.com/wp-content/uploads/2015/08/Query-Logs-in-Database.png)
+
+在数据库中查询日志
+
+### 总结 ###
+
+在这篇文章中我们介绍了如何设置系统日志,如何旋转日志,以及为了简化查询如何将消息重定向到数据库。我们希望这些技巧能对你准备 [RHCE 考试][8] 和日常工作有所帮助。
+
+正如往常,非常欢迎你的反馈。用下面的表单和我们联系吧。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/manage-linux-system-logs-using-rsyslogd-and-logrotate/
+
+作者:[Gabriel Cánepa][a]
+译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:http://www.tecmint.com/wp-content/pdf/rsyslogd.pdf
+[2]:http://www.tecmint.com/wp-content/pdf/rsyslog.conf.pdf
+[3]:https://tools.ietf.org/html/rfc3164#section-4.1.1
+[4]:http://www.tecmint.com/wp-content/pdf/logrotate.pdf
+[5]:http://www.tecmint.com/wp-content/pdf/logrotate.conf.pdf
+[6]:https://mariadb.com/kb/en/mariadb/create-user/
+[7]:https://github.com/sematext/rsyslog/blob/master/plugins/ommysql/createDB.sql
+[8]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
+[9]:https://en.wikipedia.org/wiki/Log_rotation
\ No newline at end of file
From f424915eed5fa48b0f9a609067c787113fde8e74 Mon Sep 17 00:00:00 2001
From: Yu Ye
Date: Sat, 29 Aug 2015 09:43:45 +0800
Subject: [PATCH 360/697] Update 20141223 Defending the Free Linux World.md

---
 sources/talk/20141223 Defending the Free Linux World.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/talk/20141223 Defending the Free Linux World.md b/sources/talk/20141223 Defending the Free Linux World.md
index 0df2a47383..df53b9c93f 100644
--- a/sources/talk/20141223 Defending the Free Linux World.md
+++ b/sources/talk/20141223 Defending the Free Linux World.md
@@ -1,3 +1,5 @@
+ Defending the Free Linux World ================================================================================ ![](http://www.linuxinsider.com/ai/908455/open-invention-network.jpg) From 31a919a84c882ca402dc7324dd3b57045890eb26 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 29 Aug 2015 12:32:45 +0800 Subject: [PATCH 361/697] =?UTF-8?q?=E4=BF=AE=E6=94=B9=E4=BA=86=E7=AD=BE?= =?UTF-8?q?=E5=90=8D?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sign.md | 20 +++++++++++++++++--- 1 file changed, 17 insertions(+), 3 deletions(-) diff --git a/sign.md b/sign.md index ea83b53f1f..1c413aba40 100644 --- a/sign.md +++ b/sign.md @@ -1,8 +1,22 @@ + --- -via: +via:来源链接 -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 +作者:[作者名][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) -译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译, +[Linux中国](https://linux.cn/) 荣誉推出 +[a]:作者链接 +[1]:文内链接 +[2]: +[3]: +[4]: +[5]: +[6]: +[7]: +[8]: +[9]: \ No newline at end of file From 53cecac710138a5a60d74921296edef427bd3a7b Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 29 Aug 2015 17:31:20 +0800 Subject: [PATCH 362/697] PUB:20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux @wyangsun --- ... 
Tool to Setup IPsec Based VPN in Linux.md | 45 ++++++++++--------- 1 file changed, 23 insertions(+), 22 deletions(-) rename {translated/tech => published}/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md (60%) diff --git a/translated/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md b/published/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md similarity index 60% rename from translated/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md rename to published/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md index 3c16463951..22e50e355d 100644 --- a/translated/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md +++ b/published/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md @@ -1,5 +1,6 @@ -安装Strongswan - Linux上一个基于IPsec的vpn工具 +安装 Strongswan :Linux 上一个基于 IPsec 的 VPN 工具 ================================================================================ + IPsec是一个提供网络层安全的标准。它包含认证头(AH)和安全负载封装(ESP)组件。AH提供包的完整性,ESP组件提供包的保密性。IPsec确保了在网络层的安全特性。 - 保密性 @@ -7,27 +8,27 @@ IPsec是一个提供网络层安全的标准。它包含认证头(AH)和安全 - 来源不可抵赖性 - 重放攻击防护 -[Strongswan][1]是一个IPsec协议实现的开源代码,Strongswan代表强壮开源广域网(StrongS/WAN)。它支持IPsec的VPN两个版本的密钥自动交换(网络密钥交换(IKE)V1和V2)。 +[Strongswan][1]是一个IPsec协议的开源代码实现,Strongswan的意思是强安全广域网(StrongS/WAN)。它支持IPsec的VPN中的两个版本的密钥自动交换(网络密钥交换(IKE)V1和V2)。 -Strongswan基本上提供了自动交换密钥共享VPN两个节点或网络,然后它使用Linux内核的IPsec(AH和ESP)实现。密钥共享使用了IKE机制的特性使用ESP编码数据。在IKE阶段,strongswan使用OpenSSL加密算法(AES,SHA等等)和其他加密类库。无论如何,ESP组成IPsec使用的安全算法,它是Linux内核实现的。Strongswan的主要特性是下面这些。 +Strongswan基本上提供了在VPN的两个节点/网关之间自动交换密钥的共享,然后它使用了Linux内核的IPsec(AH和ESP)实现。密钥共享使用了之后用于ESP数据加密的IKE 机制。在IKE阶段,strongswan使用OpenSSL的加密算法(AES,SHA等等)和其他加密类库。无论如何,IPsec中的ESP组件使用的安全算法是由Linux内核实现的。Strongswan的主要特性如下: - x.509证书或基于预共享密钥认证 - 支持IKEv1和IKEv2密钥交换协议 -- 可选内置插件和库的完整性和加密测试 -- 支持椭圆曲线DH群体和ECDSA证书 +- 可选的,对于插件和库的内置完整性和加密测试 +- 支持椭圆曲线DH群和ECDSA证书 - 在智能卡上存储RSA私钥和证书 -它能被使用在客户端或服务器(road 
warrior模式)和网关到网关的情景。 +它能被使用在客户端/服务器(road warrior模式)和网关到网关的情景。 ### 如何安装 ### -几乎所有的Linux发行版都支持Strongswan的二进制包。在这个教程,我们将从二进制包安装strongswan也编译strongswan合适的特性的源代码。 +几乎所有的Linux发行版都支持Strongswan的二进制包。在这个教程,我们会从二进制包安装strongswan,也会从源代码编译带有合适的特性的strongswan。 ### 使用二进制包 ### 可以使用以下命令安装Strongswan到Ubuntu 14.04 LTS - $sudo aptitude install strongswan + $ sudo aptitude install strongswan ![安装strongswan](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-binary.png) @@ -35,35 +36,35 @@ strongswan的全局配置(strongswan.conf)文件和ipsec配置(ipsec.conf/ ### strongswan源码编译安装的依赖包 ### -- GMP(strongswan使用的Mathematical/Precision 库) -- OpenSSL(加密算法在这个库里) -- PKCS(1,7,8,11,12)(证书编码和智能卡与Strongswan集成) +- GMP(strongswan使用的高精度数学库) +- OpenSSL(加密算法来自这个库) +- PKCS(1,7,8,11,12)(证书编码和智能卡集成) #### 步骤 #### **1)** 在终端使用下面命令到/usr/src/目录 - $cd /usr/src + $ cd /usr/src **2)** 用下面命令从strongswan网站下载源代码 - $sudo wget http://download.strongswan.org/strongswan-5.2.1.tar.gz + $ sudo wget http://download.strongswan.org/strongswan-5.2.1.tar.gz -(strongswan-5.2.1.tar.gz 是最新版。) +(strongswan-5.2.1.tar.gz 是当前最新版。) ![下载软件](http://blog.linoxide.com/wp-content/uploads/2014/12/download_strongswan.png) -**3)** 用下面命令提取下载软件,然后进入目录。 +**3)** 用下面命令提取下载的软件,然后进入目录。 - $sudo tar –xvzf strongswan-5.2.1.tar.gz; cd strongswan-5.2.1 + $ sudo tar –xvzf strongswan-5.2.1.tar.gz; cd strongswan-5.2.1 **4)** 使用configure命令配置strongswan每个想要的选项。 - ./configure --prefix=/usr/local -–enable-pkcs11 -–enable-openssl + $ ./configure --prefix=/usr/local -–enable-pkcs11 -–enable-openssl ![检查strongswan包](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-configure.png) -如果GMP库没有安装,然后配置脚本将会发生下面的错误。 +如果GMP库没有安装,配置脚本将会发生下面的错误。 ![GMP library error](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-error.png) @@ -71,19 +72,19 @@ strongswan的全局配置(strongswan.conf)文件和ipsec配置(ipsec.conf/ ![gmp installation](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-installation1.png) 
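在运行 ./configure 之前,可以先用下面的小脚本粗略检查 GMP 的运行库和头文件是否就位(仅作示意:检查的路径是常见发行版的典型位置,你的系统可能不同;缺什么就按上文安装对应的包即可):

```shell
#!/bin/sh
# 示意:编译 strongswan 前粗查 GMP 库和头文件是否存在。
# 路径因发行版而异,检查结果仅供参考。
found_lib=no
for d in /usr/lib /usr/lib/x86_64-linux-gnu /usr/local/lib /lib; do
    if [ -e "$d/libgmp.so" ] || [ -e "$d/libgmp.so.10" ]; then
        found_lib=yes
    fi
done

found_hdr=no
if [ -e /usr/include/gmp.h ] || [ -e /usr/local/include/gmp.h ]; then
    found_hdr=yes
fi

echo "libgmp 运行库: $found_lib"
echo "gmp.h 头文件: $found_hdr"
```

如果两项都是 yes,configure 脚本一般就能找到 GMP;否则按下文的方法安装或创建软链接。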
-无论如何,如果GMP已经安装而且还一致报错,然后在Ubuntu上使用下面命令创建libgmp.so库的软连到/usr/lib,/lib/,/usr/lib/x86_64-linux-gnu/路径下。 +不过,如果GMP已经安装还报上述错误的话,在Ubuntu上使用如下命令,给在路径 /usr/lib,/lib/,/usr/lib/x86_64-linux-gnu/ 下的libgmp.so库创建软连接。 $ sudo ln -s /usr/lib/x86_64-linux-gnu/libgmp.so.10.1.3 /usr/lib/x86_64-linux-gnu/libgmp.so ![softlink of libgmp.so library](http://blog.linoxide.com/wp-content/uploads/2014/12/softlink.png) -创建libgmp.so软连后,再执行./configure脚本也许就找到gmp库了。无论如何,gmp头文件也许发生其他错误,像下面这样。 +创建libgmp.so软连接后,再执行./configure脚本也许就找到gmp库了。然而,如果gmp头文件发生其他错误,像下面这样。 ![GMP header file issu](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-header.png) 为解决上面的错误,使用下面命令安装libgmp-dev包 - $sudo aptitude install libgmp-dev + $ sudo aptitude install libgmp-dev ![Installation of Development library of GMP](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-dev.png) @@ -105,7 +106,7 @@ via: http://linoxide.com/security/install-strongswan/ 作者:[nido][a] 译者:[wyangsun](https://github.com/wyangsun) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 2a6de5a73155cb54c5d25c6b939705e11b5ad35e Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 29 Aug 2015 17:56:20 +0800 Subject: [PATCH 363/697] PUB:20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @oska874 这似乎是你的第一篇翻译?翻译的不错! 
--- ...un Ubuntu Snappy Core on Raspberry Pi 2.md | 89 +++++++++++++++++++ ...un Ubuntu Snappy Core on Raspberry Pi 2.md | 89 ------------------- 2 files changed, 89 insertions(+), 89 deletions(-) create mode 100644 published/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md delete mode 100644 translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md diff --git a/published/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/published/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md new file mode 100644 index 0000000000..c36ae7adb7 --- /dev/null +++ b/published/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md @@ -0,0 +1,89 @@ +如何在树莓派 2 运行 ubuntu Snappy Core +================================================================================ +物联网(Internet of Things, IoT) 时代即将来临。很快,过不了几年,我们就会问自己当初是怎么在没有物联网的情况下生存的,就像我们现在怀疑过去没有手机的年代。Canonical 就是一个物联网快速发展却还是开放市场下的竞争者。这家公司宣称自己把赌注压到了IoT 上,就像他们已经在“云”上做过的一样。在今年一月底,Canonical 启动了一个基于Ubuntu Core 的小型操作系统,名字叫做 [Ubuntu Snappy Core][1] 。 + +Snappy 代表了两种意思,它是一种用来替代 deb 的新的打包格式;也是一个用来更新系统的前端,从CoreOS、红帽子和其他系统借鉴了**原子更新**这个想法。自从树莓派 2 投入市场,Canonical 很快就发布了用于树莓派的Snappy Core 版本。而第一代树莓派因为是基于ARMv6 ,Ubuntu 的ARM 镜像是基于ARMv7 ,所以不能运行ubuntu 。不过这种状况现在改变了,Canonical 通过发布 Snappy Core 的RPI2 镜像,抓住机会证明了Snappy 就是一个用于云计算,特别是用于物联网的系统。 + +Snappy 同样可以运行在其它像Amazon EC2, Microsofts Azure, Google的 Compute Engine 这样的云端上,也可以虚拟化在 KVM、Virtuabox 和vagrant 上。Canonical Ubuntu 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,比如 Ninja Sphere、Erle Robotics,还有一些开发板生产商,比如 Odroid、Banana Pro, Udoo, PCDuino 和 Parallella 、全志,Snappy 也提供了支持。Snappy Core 同时也希望尽快运行到路由器上来帮助改进路由器生产商目前很少更新固件的策略。 + +接下来,让我们看看怎么样在树莓派 2 上运行 Ubuntu Snappy Core。 + +用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是原子升级和回滚功能会占用不小的空间。使用 Snappy 启动树莓派 2 后你就可以使用默认用户名和密码(都是ubuntu)登录系统。 + +![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg) + +sudo 已经配置好了可以直接用,安全起见,你应该使用以下命令来修改你的用户名 + + $ 
sudo usermod -l + +或者也可以使用`adduser` 为你添加一个新用户。 + +因为RPI缺少硬件时钟,而 Snappy Core 镜像并不知道这一点,所以系统会有一个小 bug:处理某些命令时会报很多错。不过这个很容易解决: + +使用这个命令来确认这个bug 是否影响: + + $ date + +如果输出类似 "Thu Jan 1 01:56:44 UTC 1970", 你可以这样做来改正: + + $ sudo date --set="Sun Apr 04 17:43:26 UTC 2015" + +改成你的实际时间。 + +![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg) + +现在你可能打算检查一下,看看有没有可用的更新。注意通常使用的命令是不行的: + + $ sudo apt-get update && sudo apt-get distupgrade + +这时系统不会让你通过,因为 Snappy 使用它自己精简过的、基于dpkg 的包管理系统。这么做的原因是 Snappy 会运行很多嵌入式程序,而同时你也会试图所有事情尽可能的简化。 + +让我们来看看最关键的部分,理解一下程序是如何与 Snappy 工作的。运行 Snappy 的SD 卡上除了 boot 分区外还有3个分区。其中的两个构成了一个重复的文件系统。这两个平行文件系统被固定挂载为只读模式,并且任何时刻只有一个是激活的。第三个分区是一个部分可写的文件系统,用来让用户存储数据。通过更新系统,标记为'system-a' 的分区会保持一个完整的文件系统,被称作核心,而另一个平行的文件系统仍然会是空的。 + +![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg) + +如果我们运行以下命令: + + $ sudo snappy update + +系统将会在'system-b' 上作为一个整体进行更新,这有点像是更新一个镜像文件。接下来你将会被告知要重启系统来激活新核心。 + +重启之后,运行下面的命令可以检查你的系统是否已经更新到最新版本,以及当前被激活的是哪个核心 + + $ sudo snappy versions -a + +经过更新-重启两步操作,你应该可以看到被激活的核心已经被改变了。 + +因为到目前为止我们还没有安装任何软件,所以可以用下面的命令更新: + + $ sudo snappy update ubuntu-core + +如果你打算仅仅更新特定的OS 版本这就够了。如果出了问题,你可以使用下面的命令回滚: + + $ sudo snappy rollback ubuntu-core + +这将会把系统状态回滚到更新之前。 + +![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg) + +再来说说那些让 Snappy 变得有用的软件。这里不会讲的太多关于如何构建软件、向 Snappy 应用商店添加软件的基础知识,但是你可以通过 Freenode 上的IRC 频道 #snappy 了解更多信息,那个上面有很多人参与。你可以通过浏览器访问http://\:4200 来浏览应用商店,然后从商店安装软件,再在浏览器里访问 http://webdm.local 来启动程序。如何构建用于 Snappy 的软件并不难,而且也有了现成的[参考文档][4] 。你也可以很容易的把 DEB 安装包使用Snappy 格式移植到Snappy 上。 + +![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg) + +尽管 Ubuntu Snappy Core 吸引了我们去研究新型的 Snappy 安装包格式和 Canonical 式的原子更新操作,但是因为有限的可用应用,它现在在生产环境里还不是很有用。但是既然搭建一个 Snappy 环境如此简单,这看起来是一个学点新东西的好机会。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html + +作者:[Ferdinand Thommes][a] 
+译者:[Ezio](https://github.com/oska874) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/ferdinand +[1]:http://www.ubuntu.com/things +[2]:http://www.raspberrypi.org/downloads/ +[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html +[4]:https://developer.ubuntu.com/en/snappy/ diff --git a/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md deleted file mode 100644 index f5e6fe60b2..0000000000 --- a/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md +++ /dev/null @@ -1,89 +0,0 @@ -如何在树莓派2 代运行ubuntu Snappy Core -================================================================================ -物联网(Internet of Things, IoT) 时代即将来临。很快,过不了几年,我们就会问自己当初是怎么在没有物联网的情况下生存的,就像我们现在怀疑过去没有手机的年代。Canonical 就是一个物联网快速发展却还是开放市场下的竞争者。这家公司宣称自己把赌注压到了IoT 上,就像他们已经在“云”上做过的一样。。在今年一月底,Canonical 启动了一个基于Ubuntu Core 的小型操作系统,名字叫做 [Ubuntu Snappy Core][1] 。 - -Snappy 是一种用来替代deb 的新的打包格式,是一个用来更新系统的前端,从CoreOS、红帽子和其他系统借鉴了**原子更新**这个想法。树莓派2 代投入市场,Canonical 很快就发布了用于树莓派的Snappy Core 版本。而第一代树莓派因为是基于ARMv6 ,Ubuntu 的ARM 镜像是基于ARMv7 ,所以不能运行ubuntu 。不过这种状况现在改变了,Canonical 通过发布用于RPI2 的镜像,抓住机会证明了Snappy 就是一个用于云计算,特别是用于物联网的系统。 - -Snappy 同样可以运行在其它像Amazon EC2, Microsofts Azure, Google的 Compute Engine 这样的云端上,也可以虚拟化在KVM、Virtuabox 和vagrant 上。Canonical Ubuntu 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,比如Ninja Sphere、Erle Robotics,还有一些开发板生产商,比如Odroid、Banana Pro, Udoo, PCDuino 和Parallella 、全志,Snappy 也提供了支持。Snappy Core 同时也希望尽快运行到路由器上来帮助改进路由器生产商目前很少更新固件的策略。 - -接下来,让我们看看怎么样在树莓派2 上运行Snappy。 - -用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是原子升级和回滚功能会占用不小的空间。使用Snappy 启动树莓派2 后你就可以使用默认用户名和密码(都是ubuntu)登录系统。 - -![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg) - -sudo 已经配置好了可以直接用,安全起见,你应该使用以下命令来修改你的用户名 - - $ sudo 
usermod -l - -或者也可以使用`adduser` 为你添加一个新用户。 - -因为RPI缺少硬件时钟,而Snappy 并不知道这一点,所以系统会有一个小bug:处理某些命令时会报很多错。不过这个很容易解决: - -使用这个命令来确认这个bug 是否影响: - - $ date - -如果输出是 "Thu Jan 1 01:56:44 UTC 1970", 你可以这样做来改正: - - $ sudo date --set="Sun Apr 04 17:43:26 UTC 2015" - -改成你的实际时间。 - -![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg) - -现在你可能打算检查一下,看看有没有可用的更新。注意通常使用的命令: - - $ sudo apt-get update && sudo apt-get distupgrade - -不过这时系统不会让你通过,因为Snappy 使用它自己精简过的、基于dpkg 的包管理系统。这么做的原因是Snappy 会运行很多嵌入式程序,而同时你也会想着所有事情尽可能的简化。 - -让我们来看看最关键的部分,理解一下程序是如何与Snappy 工作的。运行Snappy 的SD 卡上除了boot 分区外还有3个分区。其中的两个构成了一个重复的文件系统。这两个平行文件系统被固定挂载为只读模式,并且任何时刻只有一个是激活的。第三个分区是一个部分可写的文件系统,用来让用户存储数据。通过更新系统,标记为'system-a' 的分区会保持一个完整的文件系统,被称作核心,而另一个平行文件系统仍然会是空的。 - -![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg) - -如果我们运行以下命令: - - $ sudo snappy update - -系统将会在'system-b' 上作为一个整体进行更新,这有点像是更新一个镜像文件。接下来你将会被告知要重启系统来激活新核心。 - -重启之后,运行下面的命令可以检查你的系统是否已经更新到最新版本,以及当前被激活的是那个核心 - - $ sudo snappy versions -a - -经过更新-重启两步操作,你应该可以看到被激活的核心已经被改变了。 - -因为到目前为止我们还没有安装任何软件,下面的命令: - - $ sudo snappy update ubuntu-core - -将会生效,而且如果你打算仅仅更新特定的OS 版本,这也是一个办法。如果出了问题,你可以使用下面的命令回滚: - - $ sudo snappy rollback ubuntu-core - -这将会把系统状态回滚到更新之前。 - -![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg) - -再来说说那些让Snappy 有用的软件。这里不会讲的太多关于如何构建软件、向Snappy 应用商店添加软件的基础知识,但是你可以通过Freenode 上的IRC 频道#snappy 了解更多信息,那个上面有很多人参与。你可以通过浏览器访问http://:4200 来浏览应用商店,然后从商店安装软件,再在浏览器里访问http://webdm.local 来启动程序。如何构建用于Snappy 的软件并不难,而且也有了现成的[参考文档][4] 。你也可以很容易的把DEB 安装包使用Snappy 格式移植到Snappy 上。 - -![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg) - -尽管Ubuntu Snappy Core 吸引我们去研究新型的Snappy 安装包格式和Canonical 式的原子更新操作,但是因为有限的可用应用,它现在在生产环境里还不是很有用。但是既然搭建一个Snappy 环境如此简单,这看起来是一个学点新东西的好机会。 - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html - -作者:[Ferdinand Thommes][a] -译者:[Ezio](https://github.com/oska874) 
-校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/ferdinand -[1]:http://www.ubuntu.com/things -[2]:http://www.raspberrypi.org/downloads/ -[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html -[4]:https://developer.ubuntu.com/en/snappy/ From adbae49af57f326c1d1fe35127500786b65b8800 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 29 Aug 2015 18:32:52 +0800 Subject: [PATCH 364/697] PUB:20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04 @GOLinux --- ...The Update Information Is Outdated' In Ubuntu 14.04.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) rename {translated/tech => published}/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md (91%) diff --git a/translated/tech/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md b/published/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md similarity index 91% rename from translated/tech/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md rename to published/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md index cebfab93c4..c4a9e43e85 100644 --- a/translated/tech/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md +++ b/published/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md @@ -2,7 +2,7 @@ Ubuntu 14.04中修复“update information is outdated”错误 ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Fix_update_information_is_outdated.jpeg) -看到Ubuntu 14.04的顶部面板上那个显示下面这个错误的红色三角形了吗? +看到过Ubuntu 14.04的顶部面板上那个显示下面这个错误的红色三角形了吗? 
> 更新信息过时。该错误可能是由网络问题,或者某个仓库不再可用而造成的。请通过从指示器菜单中选择‘显示更新’来手动更新,然后查看是否存在有失败的仓库。 > @@ -25,7 +25,7 @@ Ubuntu 14.04中修复“update information is outdated”错误 ### 修复‘update information is outdated’错误 ### -这里讨论的‘解决方案’可能对Ubuntu的这些版本有用:Ubuntu 14.04,12.04或14.04。你所要做的仅仅是打开终端(Ctrl+Alt+T),然后使用下面的命令: +这里讨论的‘解决方案’可能对Ubuntu的这些版本有用:Ubuntu 14.04,12.04。你所要做的仅仅是打开终端(Ctrl+Alt+T),然后使用下面的命令: sudo apt-get update @@ -47,7 +47,7 @@ via: http://itsfoss.com/fix-update-information-outdated-ubuntu/ 作者:[Abhishek][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -56,4 +56,4 @@ via: http://itsfoss.com/fix-update-information-outdated-ubuntu/ [2]:http://itsfoss.com/notification-terminal-command-completion-ubuntu/ [3]:http://itsfoss.com/solve-gpg-error-signatures-verified-ubuntu/ [4]:http://itsfoss.com/install-spotify-ubuntu-1504/ -[5]:http://itsfoss.com/fix-update-errors-ubuntu-1404/ +[5]:https://linux.cn/article-5603-1.html From d446a10ccb58a74bafc829c2f0750e7acfaa6590 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 29 Aug 2015 18:33:20 +0800 Subject: [PATCH 365/697] [Translating] tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md --- ...Fix No Bootable Device Found Error After Installing Ubuntu.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md b/sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md index 3281a51137..3fac11eb35 100644 --- a/sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md +++ b/sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md @@ -1,3 +1,4 @@ +ictlyh Translating Fix No Bootable Device Found Error After Installing Ubuntu ================================================================================ Usually, I dual boot 
Ubuntu and Windows but this time I decided to go for a clean Ubuntu installation i.e. eliminating Windows completely. After the clean install of Ubuntu, I ended up with a screen saying **no bootable device found** instead of the Grub screen. Clearly, the installation messed up with the UEFI boot settings. From 2b0286e086dd8815070f8193c26800420de298fd Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 29 Aug 2015 19:19:02 +0800 Subject: [PATCH 366/697] [Translated] tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md --- ...ice Found Error After Installing Ubuntu.md | 98 ------------------- ...ice Found Error After Installing Ubuntu.md | 97 ++++++++++++++++++ 2 files changed, 97 insertions(+), 98 deletions(-) delete mode 100644 sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md create mode 100644 translated/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md diff --git a/sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md b/sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md deleted file mode 100644 index 3fac11eb35..0000000000 --- a/sources/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md +++ /dev/null @@ -1,98 +0,0 @@ -ictlyh Translating -Fix No Bootable Device Found Error After Installing Ubuntu -================================================================================ -Usually, I dual boot Ubuntu and Windows but this time I decided to go for a clean Ubuntu installation i.e. eliminating Windows completely. After the clean install of Ubuntu, I ended up with a screen saying **no bootable device found** instead of the Grub screen. Clearly, the installation messed up with the UEFI boot settings. 
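Before touching any firmware settings, it is worth confirming from a live USB session that the machine really did boot through UEFI. A minimal sketch of such a check — the `/sys/firmware/efi` path is standard on modern Linux kernels, but treat this as an illustrative aside rather than part of the original walkthrough:

```shell
# The kernel exposes /sys/firmware/efi only when the system booted via UEFI,
# so its presence distinguishes a UEFI boot from a legacy BIOS boot.
if [ -d /sys/firmware/efi ]; then
    boot_mode="UEFI"
else
    boot_mode="Legacy BIOS"
fi
echo "Firmware boot mode: $boot_mode"
```

On a machine like the one described here it should report UEFI; if it reports legacy BIOS instead, the firmware whitelisting steps that follow would not apply.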
- -![No Bootable Device Found After Installing Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_1.jpg) - -I am going to show you how I fixed **no bootable device found error after installing Ubuntu in Acer laptops**. It is important that I mention that I am using Acer Aspire R13 because we have to change things in firmware settings and those settings might look different from manufacturer to manufacturer and from device to device. - -So before you go on trying the steps mentioned here, let’s first see what state my computer was in during this error: - -- My Acer Aspire R13 came preinstalled with Windows 8.1 and with UEFI boot manager -- Secure boot was not turned off (my laptop has just come from repair and the service guy had put the secure boot on again, I did not know until I ran up in the problem). You can read this post to know [how disable secure boot in Acer laptops][1] -- I chose to install Ubuntu by erasing everything i.e. existing Windows 8.1, various partitions etc. -- After installing Ubuntu, I saw no bootable device found error while booting from the hard disk. Booting from live USB worked just fine - -In my opinion, not disabling the secure boot was the reason of this error. However, I have no data to backup my claim. It is just a hunch. Interestingly, dual booting Windows and Linux often ends up in common Grub issues like these two: - -- [error: no such partition grub rescue][2] -- [Minimal BASH like line editing is supported][3] - -If you are in similar situation, you can try the fix which worked for me. - -### Fix no bootable device found error after installing Ubuntu ### - -Pardon me for poor quality images. My OnePlus camera seems to be not very happy with my laptop screen. - -#### Step 1 #### - -Turn the power off and boot into boot settings. I had to press Fn+F2 (to press F2 key) on Acer Aspire R13 quickly. 
You have to be very quick with it if you are using SSD hard disk because SSDs are very fast in booting. Depending upon your manufacturer/model, you might need to use Del or F10 or F12 keys. - -#### Step 2 #### - -In the boot settings, make sure that Secure Boot is turned on. It should be under the Boot tab. - -#### Step 3 #### - -Go to Security tab and look for “Select an UEFI file as trusted for executing” and click enter. - -![Fix no bootable device found ](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_2.jpg) - -Just for your information, what we are going to do here is to add the UEFI settings file (it was generated while Ubuntu installation) among the trusted UEFI boots in your device. If you remember, UEFI boot’s main aim is to provide security and since Secure Boot was not disabled (perhaps) the device did not intend to boot from the newly installed OS. Adding it as trusted, kind of whitelisting, will let the device boot from the Ubuntu UEFI file. - -#### Step 4 #### - -You should see your hard disk like HDD0 etc here. If you have more than one hard disk, I hope you remember where did you install Ubuntu. Press Enter here as well. - -![Fix no bootable device found in boot settings](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_3.jpg) - -#### Step 5 #### - -You should see here. Press enter. - -![Fix settings in UEFI](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_4.jpg) - -#### Step 6 #### - -You’ll see in next screen. Don’t get impatient, you are almost there - -![Fixing boot error after installing Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_5.jpg) - -#### Step 7 #### - -You’ll see shimx64.efi, grubx64.efi and MokManager.efi file here. The important one is shimx64.efi here. Select it and click enter. 
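For context on why shimx64.efi is the file to pick: it is the signed first-stage loader that in turn starts GRUB, and all three files offered in that firmware menu live on the EFI System Partition. A hedged sketch of inspecting them once Ubuntu is up — the `/boot/efi` mount point and the `EFI/ubuntu` directory are Ubuntu's usual defaults, assumed here rather than guaranteed:

```shell
# List the boot loader files Ubuntu placed on the EFI System Partition.
# Prints a fallback message when the ESP is not mounted at the assumed path.
esp_dir="/boot/efi/EFI/ubuntu"
if [ -d "$esp_dir" ]; then
    esp_files=$(ls "$esp_dir")   # typically shimx64.efi, grubx64.efi, MokManager.efi
else
    esp_files="(no ESP mounted at $esp_dir)"
fi
echo "$esp_files"
```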
- - -![Fix no bootable device found](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_6.jpg) - -In next screen, type Yes and click enter. - -![No_Bootable_Device_Found_7](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_7.jpg) - -#### Step 8 #### - -Once we have added it as trused EFI file to be executed, press F10 to save and exit. - -![Save and exist firmware settings](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_8.jpg) - -Reboot your system and this time you should be seeing the familiar Grub screen. Even if you do not see Grub screen, you should at least not be seeing “no bootable device found” screen anymore. You should be able to boot into Ubuntu. - -If your Grub screen was messed up after the fix but you got to login into it, you can reinstall Grub to boot into the familiar purple Grub screen of Ubuntu. - -I hope this tutorial helped you to fix no bootable device found error. Any questions or suggestions or a word of thanks is always welcomed. 
- --------------------------------------------------------------------------------- - -via: http://itsfoss.com/no-bootable-device-found-ubuntu/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:http://itsfoss.com/disable-secure-boot-in-acer/ -[2]:http://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/ -[3]:http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux/ \ No newline at end of file diff --git a/translated/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md b/translated/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md new file mode 100644 index 0000000000..91aa23d6aa --- /dev/null +++ b/translated/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md @@ -0,0 +1,97 @@ +修复安装完 Ubuntu 后无可引导设备错误 +================================================================================ +通常情况下,我启动 Ubuntu 和 Windows 双系统,但是这次我决定完全消除 Windows 纯净安装 Ubuntu。纯净安装 Ubuntu 完成后,结束时屏幕输出 **no bootable device found** 而不是进入 GRUB 界面。显然,安装搞砸了 UEFI 引导设置。 + +![安装完 Ubuntu 后无可引导设备](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_1.jpg) + +我会告诉你我是如何修复**在宏碁笔记本上安装 Ubuntu 后出现无可引导设备错误**。我声明了我使用的是宏碁灵越 R13,这很重要,因为我们需要更改固件设置,而这些设置可能因制造商和设备有所不同。 + +因此在你开始这里介绍的步骤之前,先看一下发生这个错误时我计算机的状态: + +- 我的宏碁灵越 R13 预装了 Windows8.1 和 UEFI 引导管理器 +- 关闭了 Secure boot(我的笔记本刚维修过,维修人员又启用了它,直到出现了问题我才发现)。你可以阅读这篇博文了解[如何在宏碁笔记本中关闭 secure boot][1] +- 我通过选择清除所有东西安装 Ubuntu,例如现有的 Windows 8.1,各种分区等。 +- 安装完 Ubuntu 之后,从硬盘启动时我看到无可引导设备错误。但能从 USB 设备正常启动 + +在我看来,没有禁用 secure boot 可能是这个错误的原因。但是,我没有数据支撑我的观点。这仅仅是预感。有趣的是,双系统启动 Windows 和 Linux 经常会出现这两个 Grub 问题: + +- [error: no such partition grub rescue][2] +- [Minimal BASH like line editing is supported][3] + +如果你遇到类似的情况,你可以试试我的修复方法。 + +### 修复安装完 Ubuntu 后无可引导设备错误 ### + 
+请原谅我没有丰富的图片。我的一加相机不能很好地拍摄笔记本屏幕。 + +#### 第一步 #### + +关闭电源并进入 boot 设置。我需要在宏碁灵越 R13 上快速地按 Fn+F2。如果你使用固态硬盘的话要按的非常快,因为固态硬盘启动速度很快。取决于你的制造商,你可能要用 Del 或 F10 或者 F12。 + +#### 第二步 #### + +在 boot 设置中,确保启用了 Secure Boot。它在 Boot 标签里。 + +#### 第三步 #### + +进入到 Security 标签,查找 “Select an UEFI file as trusted for executing” 并敲击回车。 + +![修复无可引导设备错误](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_2.jpg) + +特意说明,我们这一步是要在你的设备中添加 UEFI 设置文件(安装 Ubuntu 的时候生成)到可信 UEFI 启动。如果你记得的话,UEFI 启动的主要目的是提供安全性,由于(可能)没有禁用 Secure Boot,设备不会试图从新安装的操作系统中启动。添加它到类似白名单的可信列表,会使设备从 Ubuntu UEFI 文件启动。 + +#### 第四步 #### + +在这里你可以看到你的硬盘,例如 HDD0。如果你有多块硬盘,我希望你记住你安装 Ubuntu 的那块。同样敲击回车。 + +![在 Boot 设置中修复无可引导设备错误](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_3.jpg) + +#### 第五步 #### + +你应该可以看到 ,敲击回车。 + +![在 UEFI 中修复设置](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_4.jpg) + +#### 第六步 #### + +在下一个屏幕中你会看到 。耐心点,马上就好了。 + +![安装完 Ubuntu 后修复启动错误](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_5.jpg) + +#### 第七步 #### + +你可以看到 shimx64.efi,grubx64.efi 和 MokManager.efi 文件。重要的是 shimx64.efi。选中它并敲击回车。 + + +![修复无可引导设备](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_6.jpg) + +在下一个屏幕中,输入 Yes 并敲击回车。 + +![无可引导设备_7](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_7.jpg) + +#### 第八步 #### + +当我们添加它到可信 EFI 文件并执行时,按 F10 保存并退出。 + +![保存并退出固件设置](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_8.jpg) + +重启你的系统,这时你就可以看到熟悉的 GRUB 界面了。就算你没有看到 Grub 界面,起码也再也不会看到“无可引导设备”。你应该可以进入 Ubuntu 了。 + +如果修复后搞乱了你的 Grub 界面,但你确实能登录系统,你可以重装 Grub 并进入到 Ubuntu 熟悉的紫色 Grub 界面。 + +我希望这篇指南能帮助你修复无可引导设备错误。欢迎提出任何疑问、建议或者感谢。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/no-bootable-device-found-ubuntu/ + +作者:[Abhishek][a] 
+译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://itsfoss.com/disable-secure-boot-in-acer/ +[2]:http://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/ +[3]:http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux/ \ No newline at end of file From 37f4d451aa96918883261e4212400c7a3fdb287d Mon Sep 17 00:00:00 2001 From: Yu Ye Date: Sat, 29 Aug 2015 20:20:20 +0800 Subject: [PATCH 367/697] Delete 20141223 Defending the Free Linux World.md --- ...20141223 Defending the Free Linux World.md | 127 ------------------ 1 file changed, 127 deletions(-) delete mode 100644 sources/talk/20141223 Defending the Free Linux World.md diff --git a/sources/talk/20141223 Defending the Free Linux World.md b/sources/talk/20141223 Defending the Free Linux World.md deleted file mode 100644 index df53b9c93f..0000000000 --- a/sources/talk/20141223 Defending the Free Linux World.md +++ /dev/null @@ -1,127 +0,0 @@ -Translating by H-mudcup Again...... - -Defending the Free Linux World -================================================================================ -![](http://www.linuxinsider.com/ai/908455/open-invention-network.jpg) - -**Co-opetition is a part of open source. The Open Invention Network model allows companies to decide where they will compete and where they will collaborate, explained OIN CEO Keith Bergelt. As open source evolved, "we had to create channels for collaboration. Otherwise, we would have hundreds of entities spending billions of dollars on the same technology."** - -The [Open Invention Network][1], or OIN, is waging a global campaign to keep Linux out of harm's way in patent litigation. Its efforts have resulted in more than 1,000 companies joining forces to become the largest defense patent management organization in history. 
- -The Open Invention Network was created in 2005 as a white hat organization to protect Linux from license assaults. It has considerable financial backing from original board members that include Google, IBM, NEC, Novell, Philips, [Red Hat][2] and Sony. Organizations worldwide have joined the OIN community by signing the free OIN license. - -Organizers founded the Open Invention Network as a bold endeavor to leverage intellectual property to protect Linux. Its business model was difficult to comprehend. It asked its members to take a royalty-free license and forever forgo the chance to sue other members over their Linux-oriented intellectual property. - -However, the surge in Linux adoptions since then -- think server and cloud platforms -- has made protecting Linux intellectual property a critically necessary strategy. - -Over the past year or so, there has been a shift in the Linux landscape. OIN is doing a lot less talking to people about what the organization is and a lot less explaining why Linux needs protection. There is now a global awareness of the centrality of Linux, according to Keith Bergelt, CEO of OIN. - -"We have seen a culture shift to recognizing how OIN benefits collaboration," he told LinuxInsider. - -### How It Works ### - -The Open Invention Network uses patents to create a collaborative environment. This approach helps ensure the continuation of innovation that has benefited software vendors, customers, emerging markets and investors. - -Patents owned by Open Invention Network are available royalty-free to any company, institution or individual. All that is required to qualify is the signer's agreement not to assert its patents against the Linux system. - -OIN ensures the openness of the Linux source code. This allows programmers, equipment vendors, independent software vendors and institutions to invest in and use Linux without excessive worry about intellectual property issues. 
This makes it more economical for companies to repackage, embed and use Linux. - -"With the diffusion of copyright licenses, the need for OIN licenses becomes more acute. People are now looking for a simpler or more utilitarian solution," said Bergelt. - -OIN legal defenses are free of charge to members. Members commit to not initiating patent litigation against the software in OIN's list. They also agree to offer their own patents in defense of that software. Ultimately, these commitments result in access to hundreds of thousands of patents cross-licensed by the network, Bergelt explained. - -### Closing the Legal Loopholes ### - -"What OIN is doing is very essential. It offers another layer of IP protection, said Greg R. Vetter, associate professor of law at the [University of Houston Law Center][3]. - -Version 2 of the GPL license is thought by some to provide an implied patent license, but lawyers always feel better with an explicit license, he told LinuxInsider. - -What OIN provides is something that bridges that gap. It also provides explicit coverage of the Linux kernel. An explicit patent license is not necessarily part of the GPLv2, but it was added in GPLv3, according to Vetter. - -Take the case of a code writer who produces 10,000 lines of code under GPLv3, for example. Over time, other code writers contribute many more lines of code, which adds to the IP. The software patent license provisions in GPLv3 would protect the use of the entire code base under all of the participating contributors' patents, Vetter said. - -### Not Quite the Same ### - -Patents and licenses are overlapping legal constructs. Figuring out how the two entities work with open source software can be like traversing a minefield. - -"Licenses are legal constructs granting additional rights based on, typically, patent and copyright laws. Licenses are thought to give a permission to do something that might otherwise be infringement of someone else's IP rights," Vetter said. 
- -Many free and open source licenses (such as the Mozilla Public License, the GNU GPLv3, and the Apache Software License) incorporate some form of reciprocal patent rights clearance. Older licenses like BSD and MIT do not mention patents, Vetter pointed out. - -A software license gives someone else certain rights to use the code the programmer created. Copyright to establish ownership is automatic, as soon as someone writes or draws something original. However, copyright covers only that particular expression and derivative works. It does not cover code functionality or ideas for use. - -Patents cover functionality. Patent rights also can be licensed. A copyright may not protect how someone independently developed implementation of another's code, but a patent fills this niche, Vetter explained. - -### Looking for Safe Passage ### - -The mixing of license and patent legalities can appear threatening to open source developers. For some, even the GPL qualifies as threatening, according to William Hurley, cofounder of [Chaotic Moon Studios][4] and [IEEE][5] Computer Society member. - -"Way back in the day, open source was a different world. Driven by mutual respect and a view of code as art, not property, things were far more open than they are today. I believe that many efforts set upon with the best of intentions almost always end up bearing unintended consequences," Hurley told LinuxInsider. - -Surpassing the 1,000-member mark might carry a mixed message about the significance of intellectual property right protection, he suggested. It might just continue to muddy the already murky waters of today's open source ecosystem. - -"At the end of the day, this shows some of the common misconceptions around intellectual property. Having thousands of developers does not decrease risk -- it increases it. The more developers licensing the patents, the more valuable they appear to be," Hurley said. 
"The more valuable they appear to be, the more likely someone with similar patents or other intellectual property will try to take advantage and extract value for their own financial gain." - -### Sharing While Competing ### - -Co-opetition is a part of open source. The OIN model allows companies to decide where they will compete and where they will collaborate, explained Bergelt. - -"Many of the changes in the evolution of open source in terms of process have moved us into a different direction. We had to create channels for collaboration. Otherwise, we would have hundreds of entities spending billions of dollars on the same technology," he said. - -A glaring example of this is the early evolution of the cellphone industry. Multiple standards were put forward by multiple companies. There was no sharing and no collaboration, noted Bergelt. - -"That damaged our ability to access technology by seven to 10 years in the U.S. Our experience with devices was far behind what everybody else in the world had. We were complacent with GSM (Global System for Mobile Communications) while we were waiting for CDMA (Code Division Multiple Access)," he said. - -### Changing Landscape ### - -OIN experienced a growth surge of 400 new licensees in the last year. That is indicative of a new trend involving open source. - -"The marketplace reached a critical mass where finally people within organizations recognized the need to explicitly collaborate and to compete. The result is doing both at the same time. This can be messy and taxing," Bergelt said. - -However, it is a sustainable transformation driven by a cultural shift in how people think about collaboration and competition. It is also a shift in how people are embracing open source -- and Linux in particular -- as the lead project in the open source community, he explained. - -One indication is that most significant new projects are not being developed under the GPLv3 license. 
- -### Two Better Than One ### - -"The GPL is incredibly important, but the reality is there are a number of licensing models being used. The relative addressability of patent issues is generally far lower in Eclipse and Apache and Berkeley licenses that it is in GPLv3," said Bergelt. - -GPLv3 is a natural complement for addressing patent issues -- but the GPL is not sufficient on its own to address the issues of potential conflicts around the use of patents. So OIN is designed as a complement to copyright licenses, he added. - -However, the overlap of patent and license may not do much good. In the end, patents are for offensive purposes -- not defensive -- in almost every case, Bergelt suggested. - -"If you are not prepared to take legal action against others, then a patent may not be the best form of legal protection for your intellectual properties," he said. "We now live in a world where the misconceptions around software, both open and proprietary, combined with an ill-conceived and outdated patent system, leave us floundering as an industry and stifling innovation on a daily basis," he said. - -### Court of Last Resort ### - -It would be nice to think the presence of OIN has dampened a flood of litigation, Bergelt said, or at the very least, that OIN's presence is neutralizing specific threats. - -"We are getting people to lay down their arms, so to say. At the same time, we are creating a new cultural norm. Once you buy into patent nonaggression in this model, the correlative effect is to encourage collaboration," he observed. - -If you are committed to collaboration, you tend not to rush to litigation as a first response. Instead, you think in terms of how can we enable you to use what we have and make some money out of it while we use what you have, Bergelt explained. - -"OIN is a multilateral solution. It encourages signers to create bilateral agreements," he said. "That makes litigation the last course of action. That is where it should be." 
- -### Bottom Line ### - -OIN is working to prevent Linux patent challenges, Bergelt is convinced. There has not been litigation in this space involving Linux. - -The only thing that comes close are the mobile wars with Microsoft, which focus on elements high in the stack. Those legal challenges may be designed to raise the cost of ownership involving the use of Linux products, Bergelt noted. - -Still, "these are not Linux-related law suits," he said. "They do not focus on what is core to Linux. They focus on what is in the Linux system." - --------------------------------------------------------------------------------- - -via: http://www.linuxinsider.com/story/Defending-the-Free-Linux-World-81512.html - -作者:Jack M. Germain -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:http://www.openinventionnetwork.com/ -[2]:http://www.redhat.com/ -[3]:http://www.law.uh.edu/ -[4]:http://www.chaoticmoon.com/ -[5]:http://www.ieee.org/ From 1a9f35cfa6c6caff0a8355783854023ed6637ab9 Mon Sep 17 00:00:00 2001 From: Yu Ye Date: Sat, 29 Aug 2015 20:21:08 +0800 Subject: [PATCH 368/697] Create 20141223 Defending the Free Linux World.md --- ...20141223 Defending the Free Linux World.md | 127 ++++++++++++++++++ 1 file changed, 127 insertions(+) create mode 100644 translated/talk/20141223 Defending the Free Linux World.md diff --git a/translated/talk/20141223 Defending the Free Linux World.md b/translated/talk/20141223 Defending the Free Linux World.md new file mode 100644 index 0000000000..cabc8af041 --- /dev/null +++ b/translated/talk/20141223 Defending the Free Linux World.md @@ -0,0 +1,127 @@ +Translating by H-mudcup + +守卫自由的Linux世界 +================================================================================ +![](http://www.linuxinsider.com/ai/908455/open-invention-network.jpg) + +**"合作是开源的一部分。OIN的CEO Keith Bergelt解释说,开放创新网络(Open Invention 
Network)模式允许众多企业和公司决定它们该在哪较量,在哪合作。随着开源的演变,“我们需要为合作创造渠道。否则我们将会有几百个团体把数十亿美元花费到同样的技术上。”** + +[开放创新网络(Open Invention Network)][1],即 OIN,正在全球范围内开展让 Linux 远离专利诉讼的伤害的活动。它的努力得到了一千多个公司的热烈回应,它们的加入让这股力量成为了历史上最大的反专利管理组织。 + +开放创新网络以白帽子组织的身份创建于2005年,目的是保护 Linux 免受来自许可证方面的困扰。包括Google、 IBM、 NEC、 Novell、 Philips、 [Red Hat][2] 和 Sony这些成员的董事会给予了它可观的经济支持。世界范围内的多个组织通过签署自由 OIN 协议加入了这个社区。 + +创立开放创新网络的组织成员把它当作利用知识产权保护 Linux 的大胆尝试。它的商业模式非常的难以理解。它要求它的成员持无专利证并永远放弃由于 Linux 相关知识产权起诉其他成员的机会。 + +然而,从 Linux 收购风波——想想服务器和云平台——那时起,保护 Linux 知识产权的策略就变得越加的迫切。 + +在过去的几年里,Linux 的版图曾经历了一场变革。OIN 不必再向人们解释这个组织的定义,也不必再解释为什么 Linux 需要保护。据 OIN 的 CEO Keith Bergelt 说,现在 Linux 的重要性得到了全世界的关注。 + +“我们已经见到了一场人们了解到OIN如何让合作受益的文化变革,”他对 LinuxInsider 说。 + +### 如何运作 ### + +开放创新网络使用专利权的方式创建了一个协作环境。这种方法有助于确保创新的延续。这已经使很多软件商贩、顾客、新型市场和投资者受益。 + +开放创新网络的专利证可以让任何公司、公共机构或个人免版权使用。这些权利的获得建立在签署者同意不会专为了维护专利而攻击 Linux 系统的基础上。 + +OIN 确保 Linux 的源代码保持开放的状态。这让编程人员、设备出售人员、独立软件开发者和公共机构在投资和使用 Linux 时不用过多的担心知识产权的问题。这让对 Linux 进行重新装配、嵌入和使用的公司省了不少钱。 + +“随着版权许可证越来越广泛的使用,对 OIN 许可证的需求也变得更加的迫切。现在,人们正在寻找更加简单或更功利的解决方法”,Bergelt 说。 + +OIN 法律防御援助对成员是免费的。成员必须承诺不对 OIN 名单上的软件发起专利诉讼。为了保护该软件,他们也同意提供他们自己的专利。最终,这些保证将导致几十万的交叉许可通过网络连接,Bergelt 如此解释道。 + +### 填补法律漏洞 ### + +“OIN 正在做的事情是非常必要的。它提供了另一层 IP 保护,”[休斯顿大学法律中心][3]的副教授 Greg R.
Vetter 这样说道。 + +他回答 LinuxInsider 说,某些人设想的第二版 GPL 许可证会隐含地提供专利许可,但是律师们更喜欢明确的许可。 + +OIN 所提供的许可填补了这个空白。它还明确地覆盖了 Linux 核心。据 Vetter 说,明确的专利许可并不是 GPLv2 中的必要部分,但是这个部分曾在 GPLv3 中。 + +拿一个在 GPLv3 中写了10000行代码的代码编写者来说。随着时间推移,其他的代码编写者会贡献更多行的代码到 IP 中。GPLv3 中的软件专利许可条款将保护所有基于参与其中的贡献者的专利的全部代码的使用,Vetter 如此说道。 + +### 并不完全一样 ### + +专利权和许可证在法律结构上层层叠叠互相覆盖。弄清两者对开源软件的作用就像是穿越雷区。 + +Vetter 说“许可证是授予通常是建立在专利和版权法律上的额外权利的法律结构。许可证被认为是给予了人们做一些可能会侵犯到其他人的 IP 权利的事的许可。” + +Vetter 指出,很多自由开源许可证(例如 Mozilla 公共许可、GNU、GPLv3 以及 Apache 软件许可)融合了某些互惠专利权的形式。Vetter 指出,像 BSD 和 MIT 这样旧的许可证不会提到专利。 + +一个软件的许可证让其他人可以在某种程度上使用这个编程人员创造的代码。版权对所属权的建立是自动的,只要某个人写或者画了某个原创的东西。然而,版权只覆盖了个别的表达方式和衍生的作品。它并没有涵盖代码的功能性或可用的想法。 + +专利涵盖了功能性。专利权还可以成为许可证。版权可能无法保护某人如何独立的对另一个人的代码的实现的开发,但是专利填补了这个小瑕疵,Vetter 解释道。 + +### 寻找安全通道 ### + +许可证和专利混合的法律性质可能会对开源开发者产生威胁。据 [Chaotic Moon Studios][4] 的创办者之一、 [IEEE][5] 计算机协会成员 William Hurley 说,对于某些人来说即使是 GPL 也会成为威胁。 + +"在很久以前,开源是个完全不同的世界。被彼此间的尊重和把代码视为艺术而非资产的观点所驱动,那时的程序和代码比现在更加的开放。我相信很多为最好的意图所做的努力几乎最后总是背负着意外的结果,"Hurley 这样告诉 LinuxInsider。 + +他暗示说,成员人数超越了1000人可能带来了一个关于知识产权保护重要性的混乱信息。这可能会继续搅混开源生态系统这滩浑水。 + +“最终,这些显现出了围绕着知识产权的常见的一些错误概念。拥有几千个开发者并不会减少风险——而是增加。给专利许可的开发者越多,它们看起来就越值钱,”Hurley 说。“它们看起来越值钱,有着类似专利的或者其他知识产权的人就越可能试图利用并从中榨取他们自己的经济利益。” + +### 共享与竞争共存 ### + +竞合策略是开源的一部分。OIN 模型让各个公司能够决定他们将在哪竞争以及在哪合作,Bergelt 解释道。 + +“开源演化中的许多改变已经把我们移到了另一个方向上。我们必须为合作创造渠道。否则我们将会有几百个团体把数十亿美元花费到同样的技术上,”他说。 + +手机产业的革新就是个很好的例子。各个公司放出了不同的标准。没有共享,没有合作,Bergelt 解释道。 + +他说:“这让我们在美国接触技术的能力落后了五到七年。我们接触设备的经验远远落后于世界其他地方的人。在我们等待 CDMA (Code Division Multiple Access 码分多址访问通信技术)时自满于 GSM (Global System for Mobile Communications 全球移动通信系统)。” + +### 改变格局 ### + +OIN 在去年经历了增长了400个新许可的浪潮。这意味着开源有了新趋势。 + +Bergelt 说:“市场到达了一个临界点,组织内的人们终于意识到直白地合作和竞争的需要。结果是两件事同时进行。这可能会变得复杂、费力。” + +然而,这个由人们开始考虑合作和竞争的文化革新所驱动的转换过程是可以忍受的。他解释说,这也是人们在以把开源作为开源社区的最重要的工程的方式拥抱开源——尤其是 Linux——的转变。 + +还有一个迹象是,最具意义的新工程都没有在 GPLv3 许可下开发。 + +### 二个总比一个好 ### + +“GPL 极为重要,但是事实是有一堆的许可模型正被使用着。在Eclipse、Apache 和 Berkeley 许可中,专利问题的相对可解决性通常远远低于在 GPLv3 中的。”Bergelt 说。 + +GPLv3 对于解决专利问题是个自然的补充——但是 GPL
自身不足以独自解决围绕专利使用的潜在冲突。所以 OIN 的设计是以能够补充版权许可为目的的,他补充道。 + +然而,层层叠叠的专利和许可也许并没有带来多少好处。到最后,专利在几乎所有的案例中都被用于攻击目的——而不是防御目的,Bergelt 暗示说。 + +“如果你不准备对其他人采取法律行动,那么对于你的知识财产来说专利可能并不是最佳的法律保护方式”,他说。“我们现在生活在一个对软件(无论开源还是专有)误解重重的世界里,再加上一个考虑不周且过时的专利系统,使得我们这个行业举步维艰,创新每天都在被扼杀”,他说。 + +### 法院是最后的手段 ### + +想到 OIN 的出现抑制了诉讼的泛滥就感到十分欣慰,Bergelt 说,或者至少可以说 OIN 的出现扼制了特定的某些威胁。 + +“可以说我们让人们放下了他们的武器。同时我们正在创建一种新的文化规范。一旦你接受了这种模式中的专利互不侵犯原则,随之而来的影响就是对合作的鼓励”,他说。 + +如果你愿意承诺合作,你的第一反应就会趋向于不急着起诉。相反的,你会想如何让我们允许你使用我们所拥有的东西并让它为你赚钱,而同时我们也能使用你所拥有的东西,Bergelt 解释道。 + +“OIN 是个多面的解决方式。它鼓励签署者创造双赢协议”,他说。“这让起诉成为最逼不得已的行为。那才是它的位置。” + +### 底线 ### + +Bergelt 坚信,OIN 的运作是为了阻止 Linux 受到专利伤害。这个领域至今还没有发生过涉及 Linux 的诉讼。 + +唯一临近的是和微软的移动大战,这些诉讼主要针对堆栈中较高层的元素。那些来自法律的挑战可能是为了提高使用 Linux 产品的拥有成本,Bergelt 说。 + +尽管如此,“这些并不是与 Linux 相关的诉讼”,他说。“它们的重点并不在于 Linux 的核心。它们关注的是 Linux 系统里都有些什么。” + +-------------------------------------------------------------------------------- + +via: http://www.linuxinsider.com/story/Defending-the-Free-Linux-World-81512.html + +作者:Jack M. Germain +译者:[H-mudcup](https://github.com/H-mudcup) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[1]:http://www.openinventionnetwork.com/ +[2]:http://www.redhat.com/ +[3]:http://www.law.uh.edu/ +[4]:http://www.chaoticmoon.com/ +[5]:http://www.ieee.org/ From 2a0ce7cbfbf700299cc62c8a6fed50d829ddb3cc Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 29 Aug 2015 23:51:27 +0800 Subject: [PATCH 369/697] PUB:20150803 Troubleshooting with Linux Logs @strugglingyouth --- ...0150803 Troubleshooting with Linux Logs.md | 32 ++++++++++--------- 1 file changed, 17 insertions(+), 15 deletions(-) rename {translated/tech => published}/20150803 Troubleshooting with Linux Logs.md (61%) diff --git a/translated/tech/20150803 Troubleshooting with Linux Logs.md b/published/20150803 Troubleshooting with Linux Logs.md similarity index 61% rename from translated/tech/20150803 Troubleshooting with Linux Logs.md rename to published/20150803
Troubleshooting with Linux Logs.md index 5950a69d98..ca117d8af3 100644 --- a/translated/tech/20150803 Troubleshooting with Linux Logs.md +++ b/published/20150803 Troubleshooting with Linux Logs.md @@ -1,10 +1,11 @@ 在 Linux 中使用日志来排错 ================================================================================ -人们创建日志的主要原因是排错。通常你会诊断为什么问题发生在你的 Linux 系统或应用程序中。错误信息或一些列事件可以给你提供造成根本原因的线索,说明问题是如何发生的,并指出如何解决它。这里有几个使用日志来解决的样例。 + +人们创建日志的主要原因是排错。通常你会诊断为什么问题发生在你的 Linux 系统或应用程序中。错误信息或一系列的事件可以给你提供找出根本原因的线索,说明问题是如何发生的,并指出如何解决它。这里有几个使用日志来解决的样例。 ### 登录失败原因 ### -如果你想检查你的系统是否安全,你可以在验证日志中检查登录失败的和登录成功但可疑的用户。当有人通过不正当或无效的凭据来登录时会出现认证失败,经常使用 SSH 进行远程登录或 su 到本地其他用户来进行访问权。这些是由[插入式验证模块][1]来记录,或 PAM 进行短期记录。在你的日志中会看到像 Failed 这样的字符串密码和未知的用户。成功认证记录包括像 Accepted 这样的字符串密码并打开会话。 +如果你想检查你的系统是否安全,你可以在验证日志中检查登录失败的和登录成功但可疑的用户。当有人通过不正当或无效的凭据来登录时会出现认证失败,这通常发生在使用 SSH 进行远程登录或 su 到本地其他用户来进行访问权时。这些是由[插入式验证模块(PAM)][1]来记录的。在你的日志中会看到像 Failed password 和 user unknown 这样的字符串。而成功认证记录则会包括像 Accepted password 和 session opened 这样的字符串。 失败的例子: @@ -30,22 +31,21 @@ 由于没有标准格式,所以你需要为每个应用程序的日志使用不同的命令。日志管理系统,可以自动分析日志,将它们有效的归类,帮助你提取关键字,如用户名。 -日志管理系统可以使用自动解析功能从 Linux 日志中提取用户名。这使你可以看到用户的信息,并能单个的筛选。在这个例子中,我们可以看到,root 用户登录了 2700 次,因为我们筛选的日志显示尝试登录的只有 root 用户。 +日志管理系统可以使用自动解析功能从 Linux 日志中提取用户名。这使你可以看到用户的信息,并能通过点击过滤。在下面这个例子中,我们可以看到,root 用户登录了 2700 次之多,因为我们筛选的日志仅显示 root 用户的尝试登录记录。 ![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png) -日志管理系统也让你以时间为做坐标轴的图标来查看使你更容易发现异常。如果有人在几分钟内登录失败一次或两次,它可能是一个真正的用户而忘记了密码。但是,如果有几百个失败的登录并且使用的都是不同的用户名,它更可能是在试图攻击系统。在这里,你可以看到在3月12日,有人试图登录 Nagios 几百次。这显然​​不是一个合法的系统用户。 +日志管理系统也可以让你以时间为做坐标轴的图表来查看,使你更容易发现异常。如果有人在几分钟内登录失败一次或两次,它可能是一个真正的用户而忘记了密码。但是,如果有几百个失败的登录并且使用的都是不同的用户名,它更可能是在试图攻击系统。在这里,你可以看到在3月12日,有人试图登录 Nagios 几百次。这显然​​不是一个合法的系统用户。 ![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png) ### 重启的原因 ### - 有时候,一台服务器由于系统崩溃或重启而宕机。你怎么知道它何时发生,是谁做的? 
#### 关机命令 #### -如果有人手动运行 shutdown 命令,你可以看到它的身份在验证日志文件中。在这里,你可以看到,有人从 IP 50.0.134.125 上作为 ubuntu 的用户远程登录了,然后关闭了系统。 +如果有人手动运行 shutdown 命令,你可以在验证日志文件中看到它。在这里,你可以看到,有人从 IP 50.0.134.125 上作为 ubuntu 的用户远程登录了,然后关闭了系统。 Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh Mar 19 18:36:41 ip-172-31-11-231 23437]:sshd[ pam_unix(sshd:session): session opened for user ubuntu by (uid=0) @@ -53,7 +53,7 @@ #### 内核初始化 #### -如果你想看看服务器重新启动的所有原因(包括崩溃),你可以从内核初始化日志中寻找。你需要搜索内核设施和初始化 cpu 的信息。 +如果你想看看服务器重新启动的所有原因(包括崩溃),你可以从内核初始化日志中寻找。你需要搜索内核类(kernel)和 cpu 初始化(Initializing)的信息。 Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu @@ -61,9 +61,9 @@ ### 检测内存问题 ### -有很多原因可能导致服务器崩溃,但一个普遍的原因是内存用尽。 +有很多原因可能导致服务器崩溃,但一个常见的原因是内存用尽。 -当你系统的内存不足时,进程会被杀死,通常会杀死使用最多资源的进程。当系统正在使用的内存发生错误并且有新的或现有的进程试图使用更多的内存。在你的日志文件查找像 Out of Memory 这样的字符串,内核也会发出杀死进程的警告。这些信息表明系统故意杀死进程或应用程序,而不是允许进程崩溃。 +当你系统的内存不足时,进程会被杀死,通常会杀死使用最多资源的进程。当系统使用了所有内存,而新的或现有的进程试图使用更多的内存时就会出现错误。在你的日志文件查找像 Out of Memory 这样的字符串或类似 kill 这样的内核警告信息。这些信息表明系统故意杀死进程或应用程序,而不是允许进程崩溃。 例如: @@ -75,20 +75,20 @@ $ grep “Out of memory” /var/log/syslog [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child -请记住,grep 也要使用内存,所以导致内存不足的错误可能只是运行的 grep。这是另一个分析日志的独特方法! +请记住,grep 也要使用内存,所以只是运行 grep 也可能导致内存不足的错误。这是另一个你应该中央化存储日志的原因! 
### 定时任务错误日志 ### -cron 守护程序是一个调度器只在指定的日期和时间运行进程。如果进程运行失败或无法完成,那么 cron 的错误出现在你的日志文件中。你可以找到这些文件在 /var/log/cron,/var/log/messages,和 /var/log/syslog 中,具体取决于你的发行版。cron 任务失败原因有很多。通常情况下,问题出在进程中而不是 cron 守护进程本身。 +cron 守护程序是一个调度器,可以在指定的日期和时间运行进程。如果进程运行失败或无法完成,那么 cron 的错误出现在你的日志文件中。具体取决于你的发行版,你可以在 /var/log/cron,/var/log/messages,和 /var/log/syslog 几个位置找到这个日志。cron 任务失败原因有很多。通常情况下,问题出在进程中而不是 cron 守护进程本身。 -默认情况下,cron 作业会通过电子邮件发送信息。这里是一个日志中记录的发送电子邮件的内容。不幸的是,你不能看到邮件的内容在这里。 +默认情况下,cron 任务的输出会通过 postfix 发送电子邮件。这是一个显示了该邮件已经发送的日志。不幸的是,你不能在这里看到邮件的内容。 Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from= Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110> Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=, size=607, nrcpt=1 (queue active) Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp) -你应该想想 cron 在日志中的标准输出以帮助你定位问题。这里展示你可以使用 logger 命令重定向 cron 标准输出到 syslog。用你的脚本来代替 echo 命令,helloCron 可以设置为任何你想要的应用程序的名字。 +你可以考虑将 cron 的标准输出记录到日志中,以帮助你定位问题。这是一个你怎样使用 logger 命令重定向 cron 标准输出到 syslog的例子。用你的脚本来代替 echo 命令,helloCron 可以设置为任何你想要的应用程序的名字。 */5 * * * * echo ‘Hello World’ 2>&1 | /usr/bin/logger -t helloCron @@ -97,7 +97,9 @@ cron 守护程序是一个调度器只在指定的日期和时间运行进程。 Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron) Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World! 
-每个 cron 作业将根据作业的具体类型以及如何输出数据来记录不同的日志。希望在日志中有问题根源的线索,也可以根据需要添加额外的日志记录。 +每个 cron 任务将根据任务的具体类型以及如何输出数据来记录不同的日志。 + +希望在日志中有问题根源的线索,也可以根据需要添加额外的日志记录。 -------------------------------------------------------------------------------- @@ -107,7 +109,7 @@ via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-log 作者:[Amy Echeverri][a2] 作者:[Sadequl Hussain][a3] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From e9b99485c8ccdb6033e073e3a8ed5a8b93e582eb Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 30 Aug 2015 00:22:07 +0800 Subject: [PATCH 370/697] PUB:20150527 Howto Manage Host Using Docker Machine in a VirtualBox @bazz2 --- ...st Using Docker Machine in a VirtualBox.md | 26 ++++++++++--------- 1 file changed, 14 insertions(+), 12 deletions(-) rename {translated/tech => published}/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md (63%) diff --git a/translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md b/published/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md similarity index 63% rename from translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md rename to published/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md index 153035c9f4..f47f79b3b7 100644 --- a/translated/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md +++ b/published/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md @@ -1,6 +1,6 @@ 在 VirtualBox 中使用 Docker Machine 管理主机 ================================================================================ -大家好,今天我们学习在 VirtualBox 中使用 Docker Machine 来创建和管理 Docker 主机。Docker Machine 是一个应用,用于在我们的电脑上、在云端、在数据中心创建 Docker 主机,然后用户可以使用 Docker 客户端来配置一些东西。这个 API 为本地主机、或数据中心的虚拟机、或云端的实例提供 Docker 服务。Docker Machine 支持 Windows、OSX 和 
Linux,并且是以一个独立的二进制文件包形式安装的。使用(与现有 Docker 工具)相同的接口,我们就可以充分利用已经提供 Docker 基础框架的生态系统。只要一个命令,用户就能快速部署 Docker 容器。 +大家好,今天我们学习在 VirtualBox 中使用 Docker Machine 来创建和管理 Docker 主机。Docker Machine 是一个可以帮助我们在电脑上、在云端、在数据中心内创建 Docker 主机的应用。它为根据用户的配置和需求创建服务器并在其上安装 Docker和客户端提供了一个轻松的解决方案。这个 API 可以用于在本地主机、或数据中心的虚拟机、或云端的实例提供 Docker 服务。Docker Machine 支持 Windows、OSX 和 Linux,并且是以一个独立的二进制文件包形式安装的。仍然使用(与现有 Docker 工具)相同的接口,我们就可以充分利用已经提供 Docker 基础框架的生态系统。只要一个命令,用户就能快速部署 Docker 容器。 本文列出一些简单的步骤用 Docker Machine 来部署 docker 容器。 @@ -8,15 +8,15 @@ Docker Machine 完美支持所有 Linux 操作系统。首先我们需要从 [github][1] 下载最新版本的 Docker Machine,本文使用 curl 作为下载工具,Docker Machine 版本为 0.2.0。 -** 64 位操作系统 ** +**64 位操作系统** # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine -** 32 位操作系统 ** +**32 位操作系统** # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine -下载完成后,找到 **/usr/local/bin** 目录下的 **docker-machine** 文件,执行一下: +下载完成后,找到 **/usr/local/bin** 目录下的 **docker-machine** 文件,让其可以执行: # chmod +x /usr/local/bin/docker-machine @@ -28,12 +28,12 @@ Docker Machine 完美支持所有 Linux 操作系统。首先我们需要从 [gi 运行下面的命令,安装 Docker 客户端,以便于在我们自己的电脑止运行 Docker 命令: - # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker - # chmod +x /usr/local/bin/docker + # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker + # chmod +x /usr/local/bin/docker ### 2. 
创建 VirtualBox 虚拟机 ### -在 Linux 系统上安装完 Docker Machine 后,接下来我们可以安装 VirtualBox 虚拟机,运行下面的就可以了。--driver virtualbox 选项表示我们要在 VirtualBox 的虚拟机里面部署 docker,最后的参数“linux” 是虚拟机的名称。这个命令会下载 [boot2docker][2] iso,它是个基于 Tiny Core Linux 的轻量级发行版,自带 Docker 程序,然后 docker-machine 命令会创建一个 VirtualBox 虚拟机(LCTT:当然,我们也可以选择其他的虚拟机软件)来运行这个 boot2docker 系统。 +在 Linux 系统上安装完 Docker Machine 后,接下来我们可以安装 VirtualBox 虚拟机,运行下面的就可以了。`--driver virtualbox` 选项表示我们要在 VirtualBox 的虚拟机里面部署 docker,最后的参数“linux” 是虚拟机的名称。这个命令会下载 [boot2docker][2] iso,它是个基于 Tiny Core Linux 的轻量级发行版,自带 Docker 程序,然后 `docker-machine` 命令会创建一个 VirtualBox 虚拟机(LCTT译注:当然,我们也可以选择其他的虚拟机软件)来运行这个 boot2docker 系统。 # docker-machine create --driver virtualbox linux @@ -49,7 +49,7 @@ Docker Machine 完美支持所有 Linux 操作系统。首先我们需要从 [gi ### 3. 设置环境变量 ### -现在我们需要让 docker 与虚拟机通信,运行 docker-machine env <虚拟机名称> 来实现这个目的。 +现在我们需要让 docker 与 docker-machine 通信,运行 `docker-machine env <虚拟机名称>` 来实现这个目的。 # eval "$(docker-machine env linux)" # docker ps @@ -64,7 +64,7 @@ Docker Machine 完美支持所有 Linux 操作系统。首先我们需要从 [gi ### 4. 运行 Docker 容器 ### -完成配置后我们就可以在 VirtualBox 上运行 docker 容器了。测试一下,在虚拟机里执行 **docker run busybox echo hello world** 命令,我们可以看到容器的输出信息。 +完成配置后我们就可以在 VirtualBox 上运行 docker 容器了。测试一下,我们可以运行虚拟机 `docker run busybox` ,并在里面里执行 `echo hello world` 命令,我们可以看到容器的输出信息。 # docker run busybox echo hello world @@ -72,7 +72,7 @@ Docker Machine 完美支持所有 Linux 操作系统。首先我们需要从 [gi ### 5. 
拿到 Docker 主机的 IP ### -我们可以执行下面的命令获取 Docker 主机的 IP 地址。 +我们可以执行下面的命令获取运行 Docker 的主机的 IP 地址。我们可以看到在 Docker 主机的 IP 地址上的任何暴露出来的端口。 # docker-machine ip @@ -94,7 +94,9 @@ Docker Machine 完美支持所有 Linux 操作系统。首先我们需要从 [gi ### 总结 ### -最后,我们使用 Docker Machine 成功在 VirtualBox 上创建并管理一台 Docker 主机。Docker Machine 确实能让用户快速地在不同的平台上部署 Docker 主机,就像我们这里部署在 VirtualBox 上一样。这个 --driver virtulbox 驱动可以在本地机器上使用,也可以在数据中心的虚拟机上使用。Docker Machine 驱动除了支持本地的 VirtualBox 之外,还支持远端的 Digital Ocean、AWS、Azure、VMware 以及其他基础设施。如果你有任何疑问,或者建议,请在评论栏中写出来,我们会不断改进我们的内容。谢谢,祝愉快。 +最后,我们使用 Docker Machine 成功在 VirtualBox 上创建并管理一台 Docker 主机。Docker Machine 确实能让用户快速地在不同的平台上部署 Docker 主机,就像我们这里部署在 VirtualBox 上一样。这个 virtualbox 驱动可以在本地机器上使用,也可以在数据中心的虚拟机上使用。Docker Machine 驱动除了支持本地的 VirtualBox 之外,还支持远端的 Digital Ocean、AWS、Azure、VMware 以及其它基础设施。 + +如果你有任何疑问,或者建议,请在评论栏中写出来,我们会不断改进我们的内容。谢谢,祝愉快。 -------------------------------------------------------------------------------- @@ -102,7 +104,7 @@ via: http://linoxide.com/linux-how-to/host-virtualbox-docker-machine/ 作者:[Arun Pyasi][a] 译者:[bazz2](https://github.com/bazz2) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From b4b7ad67cb8d9a73ff5d0b8b13186d986f8d0d25 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 30 Aug 2015 00:40:53 +0800 Subject: [PATCH 371/697] PUB:20150821 How to Install Visual Studio Code in Linux @ictlyh --- ... 
to Install Visual Studio Code in Linux.md | 22 ++++++++++--------- 1 file changed, 12 insertions(+), 10 deletions(-) rename {translated/tech => published}/20150821 How to Install Visual Studio Code in Linux.md (70%) diff --git a/translated/tech/20150821 How to Install Visual Studio Code in Linux.md b/published/20150821 How to Install Visual Studio Code in Linux.md similarity index 70% rename from translated/tech/20150821 How to Install Visual Studio Code in Linux.md rename to published/20150821 How to Install Visual Studio Code in Linux.md index 48f68ade0b..9694b23d4f 100644 --- a/translated/tech/20150821 How to Install Visual Studio Code in Linux.md +++ b/published/20150821 How to Install Visual Studio Code in Linux.md @@ -1,8 +1,8 @@ 如何在 Linux 中安装 Visual Studio Code ================================================================================ -大家好,今天我们一起来学习如何在 Linux 发行版中安装 Visual Studio Code。Visual Studio Code 是基于 Electron 优化代码后的编辑器,后者是基于 Chromium 的一款软件,用于为桌面系统发布 io.js 应用。Visual Studio Code 是微软开发的包括 Linux 在内的全平台代码编辑器和文本编辑器。它是免费软件但不开源,在专有软件许可条款下发布。它是我们日常使用的超级强大和快速的代码编辑器。Visual Studio Code 有很多很酷的功能,例如导航、智能感知支持、语法高亮、括号匹配、自动补全、片段、支持自定义键盘绑定、并且支持多种语言,例如 Python、C++、Jade、PHP、XML、Batch、F#、DockerFile、Coffee Script、Java、HandleBars、 R、 Objective-C、 PowerShell、 Luna、 Visual Basic、 .Net、 Asp.Net、 C#、 JSON、 Node.js、 Javascript、 HTML、 CSS、 Less、 Sass 和 Markdown。Visual Studio Code 集成了包管理器和库,并构建通用任务使得加速每日的工作流。Visual Studio Code 中最受欢迎的是它的调试功能,它包括流式支持 Node.js 的预览调试。 +大家好,今天我们一起来学习如何在 Linux 发行版中安装 Visual Studio Code。Visual Studio Code 是基于 Electron 优化代码后的编辑器,后者是基于 Chromium 的一款软件,用于为桌面系统发布 io.js 应用。Visual Studio Code 是微软开发的支持包括 Linux 在内的全平台代码编辑器和文本编辑器。它是免费软件但不开源,在专有软件许可条款下发布。它是可以用于我们日常使用的超级强大和快速的代码编辑器。Visual Studio Code 有很多很酷的功能,例如导航、智能感知支持、语法高亮、括号匹配、自动补全、代码片段、支持自定义键盘绑定、并且支持多种语言,例如 Python、C++、Jade、PHP、XML、Batch、F#、DockerFile、Coffee Script、Java、HandleBars、 R、 Objective-C、 PowerShell、 Luna、 Visual Basic、 .Net、 Asp.Net、 C#、 JSON、 Node.js、 Javascript、 HTML、 CSS、 Less、 Sass 和 
Markdown。Visual Studio Code 集成了包管理器、库、构建,以及其它通用任务,以加速日常的工作流。Visual Studio Code 中最受欢迎的是它的调试功能,它包括流式支持 Node.js 的预览调试。 -注意:请注意 Visual Studio Code 只支持 64 位 Linux 发行版。 +注意:请注意 Visual Studio Code 只支持 64 位的 Linux 发行版。 下面是在所有 Linux 发行版中安装 Visual Studio Code 的几个简单步骤。 @@ -32,12 +32,12 @@ ### 3. 运行 Visual Studio Code ### -提取软件包之后,我们可以直接运行一个名为 Code 的文件启动 Visual Studio Code。 +展开软件包之后,我们可以直接运行一个名为 Code 的文件启动 Visual Studio Code。 # sudo chmod +x /opt/VSCode-linux-x64/Code # sudo /opt/VSCode-linux-x64/Code -如果我们想启动 Code 并通过终端能在任何地方打开,我们就需要创建 /opt/vscode/Code 的一个链接 /usr/local/bin/code。 +如果我们想通过终端在任何地方启动 Code,我们就需要创建 /opt/vscode/Code 的一个链接 /usr/local/bin/code。 # ln -s /opt/VSCode-linux-x64/Code /usr/local/bin/code @@ -47,11 +47,11 @@ ### 4. 创建桌面启动 ### -下一步,成功抽取 Visual Studio Code 软件包之后,我们打算创建桌面启动程序,使得根据不同桌面环境能够从启动器、菜单、桌面启动它。首先我们要复制一个图标文件到 /usr/share/icons/ 目录。 +下一步,成功展开 Visual Studio Code 软件包之后,我们打算创建桌面启动程序,使得根据不同桌面环境能够从启动器、菜单、桌面启动它。首先我们要复制一个图标文件到 /usr/share/icons/ 目录。 # cp /opt/VSCode-linux-x64/resources/app/vso.png /usr/share/icons/ -然后,我们创建一个桌面启动程序,文件扩展名为 .desktop。这里我们在 /tmp/VSCODE/ 目录中使用喜欢的文本编辑器创建名为 visualstudiocode.desktop 的文件。 +然后,我们创建一个桌面启动程序,文件扩展名为 .desktop。这里我们使用喜欢的文本编辑器在 /tmp/VSCODE/ 目录中创建名为 visualstudiocode.desktop 的文件。 # vi /tmp/vscode/visualstudiocode.desktop @@ -99,17 +99,19 @@ # apt-get update # apt-get install ubuntu-make -在我们的 ubuntu 操作系统上安装完 Ubuntu Make 之后,我们打算在一个终端中运行以下命令安装 Code。 +在我们的 ubuntu 操作系统上安装完 Ubuntu Make 之后,我们可以在一个终端中运行以下命令来安装 Code。 # umake web visual-studio-code ![Umake Web Code](http://blog.linoxide.com/wp-content/uploads/2015/06/umake-web-code.png) -运行完上面的命令之后,会要求我们输入想要的安装路径。然后,会请求我们允许在 ubuntu 系统中安装 Visual Studio Code。我们敲击 “a”。点击完后,它会在 ubuntu 机器上下载和安装 Code。最后,我们可以在启动器或者菜单中启动它。 +运行完上面的命令之后,会要求我们输入想要的安装路径。然后,会请求我们允许在 ubuntu 系统中安装 Visual Studio Code。我们输入“a”(接受)。输入完后,它会在 ubuntu 机器上下载和安装 Code。最后,我们可以在启动器或者菜单中启动它。 ### 总结 ### -我们已经成功地在 Linux 发行版上安装了 Visual Studio Code。在所有 linux 发行版上安装 Visual Studio Code 都和上面介绍的相似,我们同样可以使用 umake 在 linux 发行版中安装。Umake 
是一个安装开发工具,IDEs 和语言流行的工具。我们可以用 Umake 轻松地安装 Android Studios、Eclipse 和很多其它流行 IDE。Visual Studio Code 是基于 Github 上一个叫 [Electron][2] 的项目,它是 [Atom.io][3] 编辑器的一部分。它有很多 Atom.io 编辑器没有的改进功能。当前 Visual Studio Code 只支持 64 位 linux 操作系统平台。如果你有任何疑问、建议或者反馈,请在下面的评论框中留言以便我们改进和更新我们的内容。非常感谢!Enjoy :-) +我们已经成功地在 Linux 发行版上安装了 Visual Studio Code。在所有 linux 发行版上安装 Visual Studio Code 都和上面介绍的相似,我们也可以使用 umake 在 Ubuntu 发行版中安装。Umake 是一个安装开发工具,IDEs 和语言的流行工具。我们可以用 Umake 轻松地安装 Android Studios、Eclipse 和很多其它流行 IDE。Visual Studio Code 是基于 Github 上一个叫 [Electron][2] 的项目,它是 [Atom.io][3] 编辑器的一部分。它有很多 Atom.io 编辑器没有的改进功能。当前 Visual Studio Code 只支持 64 位 linux 操作系统平台。 + +如果你有任何疑问、建议或者反馈,请在下面的评论框中留言以便我们改进和更新我们的内容。非常感谢!Enjoy :-) -------------------------------------------------------------------------------- @@ -117,7 +119,7 @@ via: http://linoxide.com/linux-how-to/install-visual-studio-code-linux/ 作者:[Arun Pyasi][a] 译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 1aef169420f51d19ac95db726849152d9144402f Mon Sep 17 00:00:00 2001 From: joeren Date: Sun, 30 Aug 2015 07:59:25 +0800 Subject: [PATCH 372/697] Update 20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md --- ...nvert From RPM to DEB and DEB to RPM Package Using Alien.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md b/sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md index 2d3f203676..96cb8d82cc 100644 --- a/sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md +++ b/sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md @@ -1,3 +1,4 @@ +Translating by GOLinux! 
How to Convert From RPM to DEB and DEB to RPM Package Using Alien ================================================================================ As I’m sure you already know, there are plenty of ways to install software in Linux: using the package management system provided by your distribution ([aptitude, yum, or zypper][1], to name a few examples), compiling from source (though somewhat rare these days, it was the only method available during the early days of Linux), or utilizing a low level tool such as dpkg or rpm with .deb and .rpm standalone, precompiled packages, respectively. @@ -156,4 +157,4 @@ via: http://www.tecmint.com/convert-from-rpm-to-deb-and-deb-to-rpm-package-using [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/linux-package-management/ -[2]:http://li.nux.ro/download/nux/dextop/el7/x86_64/ \ No newline at end of file +[2]:http://li.nux.ro/download/nux/dextop/el7/x86_64/ From 84a86849d24fe6367cf9a1f3c9d193c4544d1cee Mon Sep 17 00:00:00 2001 From: Chr1sh3ng Date: Sun, 30 Aug 2015 10:20:53 +0800 Subject: [PATCH 373/697] cygmris is translating --- .../20150824 Great Open Source Collaborative Editing Tools.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/share/20150824 Great Open Source Collaborative Editing Tools.md b/sources/share/20150824 Great Open Source Collaborative Editing Tools.md index 8f3ab16110..4696862569 100644 --- a/sources/share/20150824 Great Open Source Collaborative Editing Tools.md +++ b/sources/share/20150824 Great Open Source Collaborative Editing Tools.md @@ -1,3 +1,4 @@ +cygmris is translating... Great Open Source Collaborative Editing Tools ================================================================================ In a nutshell, collaborative writing is writing done by more than one person. There are benefits and risks of collaborative working. 
Some of the benefits include a more integrated / co-ordinated approach, better use of existing resources, and a stronger, united voice. For me, the greatest advantage is one of the most transparent. That's when I need to take colleagues' views. Sending files back and forth between colleagues is inefficient, causes unnecessary delays and leaves people (i.e. me) unhappy with the whole notion of collaboration. With good collaborative software, I can share notes, data and files, and use comments to share thoughts in real-time or asynchronously. Working together on documents, images, video, presentations, and tasks is made less of a chore. @@ -225,4 +226,4 @@ via: http://www.linuxlinks.com/article/20150823085112605/CollaborativeEditing.ht [10]:https://gobby.github.io/ [11]:https://github.com/gobby [12]:https://www.onlyoffice.com/free-edition.aspx -[13]:https://github.com/ONLYOFFICE/DocumentServer \ No newline at end of file +[13]:https://github.com/ONLYOFFICE/DocumentServer From 93923560cc08d25c72f25d698ca5486130bd9f84 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Sun, 30 Aug 2015 13:00:42 +0800 Subject: [PATCH 374/697] [Translated]20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md --- ... DEB and DEB to RPM Package Using Alien.md | 160 ------------------ ... 
DEB and DEB to RPM Package Using Alien.md | 148 ++++++++++++++++ 2 files changed, 148 insertions(+), 160 deletions(-) delete mode 100644 sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md create mode 100644 translated/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md diff --git a/sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md b/sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md deleted file mode 100644 index 96cb8d82cc..0000000000 --- a/sources/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md +++ /dev/null @@ -1,160 +0,0 @@ -Translating by GOLinux! -How to Convert From RPM to DEB and DEB to RPM Package Using Alien -================================================================================ -As I’m sure you already know, there are plenty of ways to install software in Linux: using the package management system provided by your distribution ([aptitude, yum, or zypper][1], to name a few examples), compiling from source (though somewhat rare these days, it was the only method available during the early days of Linux), or utilizing a low level tool such as dpkg or rpm with .deb and .rpm standalone, precompiled packages, respectively. - -![Convert RPM to DEB and DEB to RPM](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-RPM-to-DEB-and-DEB-to-RPM.png) - -Convert RPM to DEB and DEB to RPM Package Using Alien - -In this article we will introduce you to alien, a tool that converts between different Linux package formats, with .rpm to .deb (and vice versa) being the most common usage. - -This tool, even when its author is no longer maintaining it and states in his website that alien will always probably remain in experimental status, can come in handy if you need a certain type of package but can only find that program in another package format. 
- -For example, alien saved my day once when I was looking for a .deb driver for a inkjet printer and couldn’t find any – the manufacturer only provided a .rpm package. I installed alien, converted the package, and before long I was able to use my printer without issues. - -That said, we must clarify that this utility should not be used to replace important system files and libraries since they are set up differently across distributions. Only use alien as a last resort if the suggested installation methods at the beginning of this article are out of the question for the required program. - -Last but not least, we must note that even though we will use CentOS and Debian in this article, alien is also known to work in Slackware and even in Solaris, besides the first two distributions and their respective families. - -### Step 1: Installing Alien and Dependencies ### - -To install alien in CentOS/RHEL 7, you will need to enable the EPEL and the Nux Dextop (yes, it’s Dextop – not Desktop) repositories, in that order: - - # yum install epel-release - # rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro - -The latest version of the package that enables this repository is currently 0.5 (published on Aug. 10, 2015). You should check [http://li.nux.ro/download/nux/dextop/el7/x86_64/][2] to see whether there’s a newer version before proceeding further: - - # rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm - -then do, - - # yum update && yum install alien - -In Fedora, you will only need to run the last command. - -In Debian and derivatives, simply do: - - # aptitude install alien - -### Step 2: Converting from .deb to .rpm Package ### - -For this test we have chosen dateutils, which provides a set of date and time utilities to deal with large amounts of financial data. 
We will download the .deb package to our CentOS 7 box, convert it to .rpm and install it: - -![Check CentOS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-OS-Version.png) - -Check CentOS Version - - # cat /etc/centos-release - # wget http://ftp.us.debian.org/debian/pool/main/d/dateutils/dateutils_0.3.1-1.1_amd64.deb - # alien --to-rpm --scripts dateutils_0.3.1-1.1_amd64.deb - -![Convert .deb to .rpm package in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-deb-to-rpm-package.png) - -Convert .deb to .rpm package in Linux - -**Important**: (Please note how, by default, alien increases the version minor number of the target package. If you want to override this behavior, add the –keep-version flag). - -If we try to install the package right away, we will run into a slight issue: - - # rpm -Uvh dateutils-0.3.1-2.1.x86_64.rpm - -![Install RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-RPM-Package.png) - -Install RPM Package - -To solve this issue, we will enable the epel-testing repository and install the rpmrebuild utility to edit the settings of the package to be rebuilt: - - # yum --enablerepo=epel-testing install rpmrebuild - -Then run, - - # rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm - -Which will open up your default text editor. Go to the `%files` section and delete the lines that refer to the directories mentioned in the error message, then save the file and exit: - -![Convert .deb to Alien Version](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-Deb-Package-to-Alien-Version.png) - -Convert .deb to Alien Version - -When you exit the file you will be prompted to continue with the rebuild. 
If you choose Y, the file will be rebuilt into the specified directory (different than the current working directory): - - # rpmrebuild –pe dateutils-0.3.1-2.1.x86_64.rpm - -![Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Build-RPM-Package.png) - -Build RPM Package - -Now you can proceed to install the package and verify as usual: - - # rpm -Uvh /root/rpmbuild/RPMS/x86_64/dateutils-0.3.1-2.1.x86_64.rpm - # rpm -qa | grep dateutils - -![Install Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Build-RPM-Package.png) - -Install Build RPM Package - -Finally, you can list the individual tools that were included with dateutils and alternatively check their respective man pages: - - # ls -l /usr/bin | grep dateutils - -![Verify Installed RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Installed-Package.png) - -Verify Installed RPM Package - -### Step 3: Converting from .rpm to .deb Package ### - -In this section we will illustrate how to convert from .rpm to .deb. In a 32-bit Debian Wheezy box, let’s download the .rpm package for the zsh shell from the CentOS 6 OS repository. Note that this shell is not available by default in Debian and derivatives. 
- - # cat /etc/shells - # lsb_release -a | tail -n 4 - -![Check Shell and Debian OS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Shell-Debian-OS-Version.png) - -Check Shell and Debian OS Version - - # wget http://mirror.centos.org/centos/6/os/i386/Packages/zsh-4.3.11-4.el6.centos.i686.rpm - # alien --to-deb --scripts zsh-4.3.11-4.el6.centos.i686.rpm - -You can safely disregard the messages about a missing signature: - -![Convert .rpm to .deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-rpm-to-deb-Package.png) - -Convert .rpm to .deb Package - -After a few moments, the .deb file should have been generated and be ready to install: - - # dpkg -i zsh_4.3.11-5_i386.deb - -![Install RPM Converted Deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Deb-Package.png) - -Install RPM Converted Deb Package - -After the installation, you can verify that zsh is added to the list of valid shells: - - # cat /etc/shells - -![Confirm Installed Zsh Package](http://www.tecmint.com/wp-content/uploads/2015/08/Confirm-Installed-Package.png) - -Confirm Installed Zsh Package - -### Summary ### - -In this article we have explained how to convert from .rpm to .deb and vice versa to install packages as a last resort when such programs are not available in the repositories or as distributable source code. You will want to bookmark this article because all of us will need alien at one time or another. - -Feel free to share your thoughts about this article using the form below. 
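The release-number bump noted in Step 2 can be predicted from the .deb file name alone. The bash sketch below is not part of alien itself — the +1 release bump and the amd64→x86_64 mapping are assumptions taken from this article's dateutils example, and the function name is made up for illustration:

```shell
#!/bin/bash
# Sketch only: model the file name alien produced above. The release bump
# (+1) and the amd64 -> x86_64 mapping are assumptions based on this
# article's example, not guarantees about every alien version.
predict_rpm_name() {
    local base name rest verrel arch version release major minor
    base="${1%.deb}"                 # dateutils_0.3.1-1.1_amd64
    name="${base%%_*}"               # dateutils
    rest="${base#*_}"                # 0.3.1-1.1_amd64
    verrel="${rest%_*}"              # 0.3.1-1.1
    arch="${rest##*_}"               # amd64
    version="${verrel%-*}"           # 0.3.1
    release="${verrel#*-}"           # 1.1
    major="${release%%.*}"           # 1 (alien bumps this by default)
    minor="${release#*.}"            # 1
    [ "$arch" = amd64 ] && arch=x86_64
    printf '%s-%s-%d.%s.%s.rpm\n' "$name" "$version" "$((major + 1))" "$minor" "$arch"
}

predict_rpm_name dateutils_0.3.1-1.1_amd64.deb   # dateutils-0.3.1-2.1.x86_64.rpm
```

If you pass `--keep-version` to alien instead, the release is left untouched and this bump does not apply.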
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/convert-from-rpm-to-deb-and-deb-to-rpm-package-using-alien/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/linux-package-management/ -[2]:http://li.nux.ro/download/nux/dextop/el7/x86_64/ diff --git a/translated/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md b/translated/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md new file mode 100644 index 0000000000..98abba27f3 --- /dev/null +++ b/translated/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md @@ -0,0 +1,148 @@ +Alien大法:RPM和DEB互转 +================================================================================ +正如我确信,你们一定知道Linux下的多种软件安装方式:使用发行版所提供的包管理系统([aptitude,yum,或者zypper][1],还可以举很多例子),从源码编译(尽管现在很少用了,但在Linux发展早期却是唯一可用的方法),或者使用各自的低级工具dpkg用于.deb,以及rpm用于.rpm,预编译包等等。 + +![Convert RPM to DEB and DEB to RPM](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-RPM-to-DEB-and-DEB-to-RPM.png) +使用Alien将RPM转换成DEB以及将DEB转换成RPM + +在本文中,我们将为你介绍alien,一个用于在各种不同的Linux包格式相互转换的工具,将.rpm转换成.deb(或者反过来)是最常见的用法。 + +如果你需要某个特定类型的包,而你只能找到其它格式的包的时候,该工具,即使当其作者不再维护,并且在其网站声明:alien将可能永远维持在实验状态,迟早派得上用场。 + +例如,有一次,我正查找一个用于喷墨打印机的.deb驱动,但是却没有找到——生产厂家只提供.rpm包,这时候alien拯救了我。我安装了alien,将包进行转换,不久之后我就可以使用我的打印机了,没有任何问题。 + +即便如此,我们也必须澄清一下,这个工具不应当用来替换重要的系统文件和库,因为它们在不同的发行版中有不同的配置。只有在本文开头提出的安装方法根本不适合所需的程序时,alien才能作为最后手段使用。 + +最后一项要点是,我们必须注意,虽然我们在本文中使用CentOS和Debian,除了前两个发行版及其各自的家族体系外,alien也据我们所知可以工作在Slackware中,甚至Solaris中。 + +### 步骤1:安装Alien及其依赖 ### + +要安装alien到CentOS/RHEL 7中,你需要启用EPEL和Nux Dextop(是的,是Dextop——不是Desktop)仓库,顺序如下: + + # yum install epel-release + # rpm --import 
http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro + +启用该仓库的包的当前最新版本是0.5(2015年8月10日发布),在安装之前你可以查看[http://li.nux.ro/download/nux/dextop/el7/x86_64/][2]上是否有更新的版本。 + + # rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm + +然后再做, + + # yum update && yum install alien + +在Fedora中,你只需要运行上面的命令即可。 + +在Debian及其衍生版中,只需要: + + # aptitude install alien + +### 步骤2:将.deb转换成.rpm包 ### + +对于本次测试,我们选择了date工具,它提供了一系列日期和时间工具用于处理大量金融数据。我们将下载.deb包到我们的CentOS 7机器中,将它转换成.rpm并安装: + +![Check CentOS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-OS-Version.png) + +检查CentOS版本 + + # cat /etc/centos-release + # wget http://ftp.us.debian.org/debian/pool/main/d/dateutils/dateutils_0.3.1-1.1_amd64.deb + # alien --to-rpm --scripts dateutils_0.3.1-1.1_amd64.deb + +![Convert .deb to .rpm package in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-deb-to-rpm-package.png) +在Linux中将.deb转换成.rpm + +**重要**:(请注意alien是怎样来增加目标包的次版本号的。如果你想要无视该行为,请添加-keep-version标识)。 + +如果我们尝试马上安装该包,我们将碰到些许问题: + + # rpm -Uvh dateutils-0.3.1-2.1.x86_64.rpm + +![Install RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-RPM-Package.png) +安装RPM包 + +要解决该问题,我们需要启用epel-testing仓库,然后安装rpmbuild工具来编辑该包的配置以重建包: + + # yum --enablerepo=epel-testing install rpmrebuild + +然后运行, + + # rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm + +它会打开你的默认文本编辑器。转到`%files`章节并删除涉及到错误信息中提到的目录的行,然后保存文件并退出: + +![Convert .deb to Alien Version](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-Deb-Package-to-Alien-Version.png) +转换.deb到Alien版 + +但你退出该文件后,将提示你继续去重构。如果你选择Y,该文件会重构到指定的目录(与当前工作目录不同): + + # rpmrebuild –pe dateutils-0.3.1-2.1.x86_64.rpm + +![Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Build-RPM-Package.png) +构建RPM包 + +现在你可以像以往一样继续来安装包并验证: + + # rpm -Uvh /root/rpmbuild/RPMS/x86_64/dateutils-0.3.1-2.1.x86_64.rpm + # rpm -qa | grep dateutils + +![Install Build RPM 
Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Build-RPM-Package.png) +安装构建RPM包 + +最后,你可以列出date工具包含的各个工具,并可选择性地查看各自的手册页: + + # ls -l /usr/bin | grep dateutils + +![Verify Installed RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Installed-Package.png) +验证安装的RPM包 + +### 步骤3:将.rpm转换成.deb包 ### + +在本节中,我们将演示如何将.rpm转换成.deb。在一台32位的Debian Wheezy机器中,让我们从CentOS 6操作系统仓库中下载用于zsh shell的.rpm包。注意,该shell在Debian及其衍生版的默认安装中是不可用的。 + + # cat /etc/shells + # lsb_release -a | tail -n 4 + +![Check Shell and Debian OS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Shell-Debian-OS-Version.png) +检查Shell和Debian操作系统版本 + + # wget http://mirror.centos.org/centos/6/os/i386/Packages/zsh-4.3.11-4.el6.centos.i686.rpm + # alien --to-deb --scripts zsh-4.3.11-4.el6.centos.i686.rpm + +你可以安全地无视关于签名丢失的信息: + +![Convert .rpm to .deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-rpm-to-deb-Package.png) +将.rpm转换成.deb包 + +过了一会儿后,.deb包应该已经生成,并可以安装了: + + # dpkg -i zsh_4.3.11-5_i386.deb + +![Install RPM Converted Deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Deb-Package.png) +安装RPM转换来的Deb包 + +安装完后,你可以zsh是否添加到了合法shell列表中: + + # cat /etc/shells + +![Confirm Installed Zsh Package](http://www.tecmint.com/wp-content/uploads/2015/08/Confirm-Installed-Package.png) +确认安装的Zsh包 + +### 小结 ### + +在本文中,我们已经解释了如何将.rpm转换成.deb及其反向转换并作为这类程序不能从仓库中或者作为可分发源代码获得的最后安装手段。你一定想要将本文添加到书签中,因为我们都需要alien。 + +请自由分享你关于本文的想法,写到下面的表格中吧。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/convert-from-rpm-to-deb-and-deb-to-rpm-package-using-alien/ + +作者:[Gabriel Cánepa][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/linux-package-management/ 
+[2]:http://li.nux.ro/download/nux/dextop/el7/x86_64/ From 70d1a832dbdfe534ced889cb9fa47fb00100de7a Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 30 Aug 2015 22:01:24 +0800 Subject: [PATCH 375/697] PUB:20150813 How to Install Logwatch on Ubuntu 15.04 @runningwater --- ...How to Install Logwatch on Ubuntu 15.04.md | 29 ++++++++++--------- 1 file changed, 15 insertions(+), 14 deletions(-) rename {translated/tech => published}/20150813 How to Install Logwatch on Ubuntu 15.04.md (77%) diff --git a/translated/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md b/published/20150813 How to Install Logwatch on Ubuntu 15.04.md similarity index 77% rename from translated/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md rename to published/20150813 How to Install Logwatch on Ubuntu 15.04.md index 8bb0836755..4ea05688cd 100644 --- a/translated/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md +++ b/published/20150813 How to Install Logwatch on Ubuntu 15.04.md @@ -1,6 +1,7 @@ -Ubuntu 15.04 and系统中安装 Logwatch +如何在 Ubuntu 15.04 系统中安装 Logwatch ================================================================================ -大家好,今天我们会讲述在 Ubuntu 15.04 操作系统上如何安装 Logwatch 软件,它也可以在任意的 Linux 系统和类 Unix 系统上安装。Logwatch 是一款可定制的日志分析和日志监控报告生成系统,它可以根据一段时间的日志文件生成您所希望关注的详细报告。它具有易安装、易配置、可审查等特性,同时对其提供的数据的安全性上也有一些保障措施。Logwatch 会扫描重要的操作系统组件像 SSH、网站服务等的日志文件,然后生成用户所关心的有价值的条目汇总报告。 + +大家好,今天我们会讲述在 Ubuntu 15.04 操作系统上如何安装 Logwatch 软件,它也可以在各种 Linux 系统和类 Unix 系统上安装。Logwatch 是一款可定制的日志分析和日志监控报告生成系统,它可以根据一段时间的日志文件生成您所希望关注的详细报告。它具有易安装、易配置、可审查等特性,同时对其提供的数据的安全性上也有一些保障措施。Logwatch 会扫描重要的操作系统组件像 SSH、网站服务等的日志文件,然后生成用户所关心的有价值的条目汇总报告。 ### 预安装设置 ### @@ -16,13 +17,13 @@ Ubuntu 15.04 and系统中安装 Logwatch root@ubuntu-15:~# apt-get install logwatch -在安装过程中,一旦您按提示按下“Y”健同意对系统修改的话,Logwatch 将会开始安装一些额外的必须软件包。 +在安装过程中,一旦您按提示按下“Y”键同意对系统修改的话,Logwatch 将会开始安装一些额外的必须软件包。 -在安装过程中会根据您机器上的邮件服务器设置情况弹出提示对 Postfix 设置的配置界面。在这篇教程中我们使用最容易的 “仅本地” 选项。根据您的基础设施情况也可以选择其它的可选项,然后点击“确定”继续。 +在安装过程中会根据您机器上的邮件服务器设置情况弹出提示对 Postfix 
设置的配置界面。在这篇教程中我们使用最容易的 “仅本地(Local only)” 选项。根据您的基础设施情况也可以选择其它的可选项,然后点击“确定”继续。 ![Potfix Configurations](http://blog.linoxide.com/wp-content/uploads/2015/08/21.png) -随后您得选择邮件服务器名,这邮件服务器名也会被其它程序使用,所以它应该是一个完全合格域名/全称域名(FQDN),且只一个。 +随后您得选择邮件服务器名,这邮件服务器名也会被其它程序使用,所以它应该是一个完全合格域名/全称域名(FQDN)。 ![Postfix Setup](http://blog.linoxide.com/wp-content/uploads/2015/08/31.png) @@ -70,11 +71,11 @@ Ubuntu 15.04 and系统中安装 Logwatch # complete email address. MailFrom = Logwatch -对这个配置文件保存修改,至于其它的参数就让它是默认的,无需改动。 +对这个配置文件保存修改,至于其它的参数就让它保持默认,无需改动。 **调度任务配置** -现在编辑在日常 crons 目录下的 “00logwatch” 文件来配置从 logwatch 生成的报告需要发送的邮件地址。 +现在编辑在 “daily crons” 目录下的 “00logwatch” 文件来配置从 logwatch 生成的报告需要发送的邮件地址。 root@ubuntu-15:~# vim /etc/cron.daily/00logwatch @@ -88,25 +89,25 @@ Ubuntu 15.04 and系统中安装 Logwatch root@ubuntu-15:~#logwatch -生成的报告开始部分显示的是执行的时间和日期。它包含不同的部分,每个部分以开始标识开始而以结束标识结束,中间显示的标识部分提到的完整日志信息。 +生成的报告开始部分显示的是执行的时间和日期。它包含不同的部分,每个部分以开始标识开始而以结束标识结束,中间显示的是该部分的完整信息。 -这儿演示的是开始标识头的样子,要显示系统上所有安装包的信息,如下所示: +这儿显示的是开始的样子,它以显示系统上所有安装的软件包的部分开始,如下所示: ![dpkg status](http://blog.linoxide.com/wp-content/uploads/2015/08/81.png) -接下来的部分显示的日志信息是关于当前系统登陆会话、rsyslogs 和当前及最后可用的会话 SSH 连接信息。 +接下来的部分显示的日志信息是关于当前系统登录会话、rsyslogs 和当前及最近的 SSH 会话信息。 ![logwatch report](http://blog.linoxide.com/wp-content/uploads/2015/08/9.png) -Logwatch 报告最后显示的是安全 sudo 日志及root目录磁盘使用情况,如下示: +Logwatch 报告最后显示的是安全方面的 sudo 日志及根目录磁盘使用情况,如下示: ![Logwatch end report](http://blog.linoxide.com/wp-content/uploads/2015/08/10.png) -您也可以打开如下的文件来检查生成的 logwatch 报告电子邮件。 +您也可以打开如下的文件来查看生成的 logwatch 报告电子邮件。 root@ubuntu-15:~# vim /var/mail/root -您会看到所有已生成的邮件到其配置用户的信息传送状态。 +您会看到发送给你配置的用户的所有已生成的邮件及其邮件递交状态。 ### 更多详情 ### @@ -130,7 +131,7 @@ via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/ 作者:[Kashif Siddique][a] 译者:[runningwater](https://github.com/runningwater) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 
a4f24be373df0ce44866cd46efaed13a8d40e4ee Mon Sep 17 00:00:00 2001 From: Xuanwo Date: Mon, 31 Aug 2015 07:38:55 +0800 Subject: [PATCH 376/697] little bug fix in line 3 --- ...sed' Command to Create Edit and Manipulate files in Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md index c8c56c0077..34ef170213 100644 --- a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md +++ b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md @@ -1,6 +1,6 @@ Translating by Xuanwo -Part 1 - LFCS系列第一讲:如何在Linux上使用GNU'sed'命令来创建、编辑和操作文件 +LFCS系列第一讲:如何在Linux上使用GNU'sed'命令来创建、编辑和操作文件 ================================================================================ Linux基金会宣布了一个全新的LFCS(Linux Foundation Certified Sysadmin,Linux基金会认证系统管理员)认证计划。这一计划旨在帮助遍布全世界的人们获得其在处理Linux系统管理任务上能力的认证。这些能力包括支持运行的系统服务,以及第一手的故障诊断和分析和为工程师团队在升级时提供智能决策。 From aefa39e69eeda1cee8c1ae7a05541f924d2068d1 Mon Sep 17 00:00:00 2001 From: Xuanwo Date: Mon, 31 Aug 2015 07:43:46 +0800 Subject: [PATCH 377/697] change the series name --- ...sed' Command to Create Edit and Manipulate files in Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md index 34ef170213..4f3094b6f5 100644 --- a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md +++ b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md @@ -12,7 
+12,7 @@ Linux基金会认证系统管理员——第一讲 -该系列将命名为《LFCS预备第一讲》至《LFCS预备第十讲》并覆盖关于Ubuntu,CentOS以及openSUSE的下列话题。 +该系列将命名为《LFCS系列第一讲》至《LFCS系列第十讲》并覆盖关于Ubuntu,CentOS以及openSUSE的下列话题。 - 第一讲:如何在Linux上使用GNU'sed'命令来创建、编辑和操作文件 - 第二讲:如何安装和使用vi/m全功能文字编辑器 From fd6c58b0e4e8dd531648e08c339ca9ebe93d8bdf Mon Sep 17 00:00:00 2001 From: Xuanwo Date: Mon, 31 Aug 2015 09:45:57 +0800 Subject: [PATCH 378/697] update some translate --- ...eate Edit and Manipulate files in Linux.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md index 4f3094b6f5..79e263d7e0 100644 --- a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md +++ b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md @@ -25,7 +25,7 @@ Linux基金会认证系统管理员——第一讲 - 第九讲:Linux包管理与Yum,RPM,Apt,Dpkg,Aptitude,Zypper - 第十讲:学习简单的Shell脚本和文件系统故障排除 -本文是覆盖这个参加LFCS认证考试的所必需的范围和能力的十个教程的第一讲。话虽如此,快打开你的终端,让我们开始吧! +本文是覆盖这个参加LFCS认证考试的所必需的范围和能力的十个教程的第一讲。话说了那么多,快打开你的终端,让我们开始吧! 
### 处理Linux中的文本流 ### @@ -42,7 +42,7 @@ Linux将程序中的输入和输出当成字符流或者字符序列。在开始 ![cat command](http://www.tecmint.com/wp-content/uploads/2014/10/cat-command.png) -cat command example +cat 命令样例 #### 使用 sed #### @@ -60,7 +60,7 @@ sed最基本的用法是字符替换。我们将通过把每个出现的小写y ![sed command](http://www.tecmint.com/wp-content/uploads/2014/10/sed-command.png) -sed command example +sed 命令样例 如果你要在替换文本中搜索或者替换特殊字符(如/,\,&),你需要使用反斜杠对它进行转义。 @@ -70,7 +70,7 @@ sed command example ![sed replace string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-replace-string.png) -sed replace string +sed 替换字符串 在上面的命令中,^(插入符号)是众所周知用来表示一行开头的正则表达式。 @@ -88,7 +88,7 @@ sed replace string ![sed match string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-match-string.png) -sed match string +sed 匹配字符串 #### uniq C命令 #### @@ -102,7 +102,7 @@ du –sch /path/to/directory/* 命令将会以人类可读的格式返回在指 ![sort command](http://www.tecmint.com/wp-content/uploads/2014/10/sort-command.jpg) -sort command example +sort 命令样例 你可以通过使用下面的命令告诉uniq比较每一行的前6个字符(-w 6)(指定了不同的日期)来统计日志事件的个数,而且在每一行的开头输出出现的次数(-c)。 @@ -111,7 +111,7 @@ sort command example ![Count Numbers in File](http://www.tecmint.com/wp-content/uploads/2014/10/count-numbers-in-file.jpg) -Count Numbers in File +统计文件中数字 最后,你可以组合使用sort和uniq命令(通常如此)。考虑下面文件中捐助者,捐助日期和金额的列表。假设我们想知道有多少个捐助者。我们可以使用下面的命令来分隔第一字段(字段由冒号分隔),按名称排序并且删除重复的行。 @@ -119,7 +119,7 @@ Count Numbers in File ![Find Unique Records in File](http://www.tecmint.com/wp-content/uploads/2014/10/find-uniqu-records-in-file.jpg) -Find Unique Records in File +寻找文件中不重复的记录 - 也可阅读: [13个“cat”命令样例][1] @@ -135,7 +135,7 @@ grep在文件(或命令输出)中搜索指定正则表达式并且在标准 ![grep Command](http://www.tecmint.com/wp-content/uploads/2014/10/grep-command.jpg) -grep command example +grep 命令样例 显示/etc文件夹下所有rc开头并跟随任意数字的内容。 @@ -143,11 +143,11 @@ grep command example ![List Content Using grep](http://www.tecmint.com/wp-content/uploads/2014/10/list-content-using-grep.jpg) -List Content Using grep +使用grep列出内容 - 也可阅读: [12个“grep”命令样例][2] -#### tr Command Usage #### +#### tr 
命令使用技巧 #### tr命令可以用来从标准输入中翻译(改变)或者删除字符并将结果写入到标准输出中。 @@ -159,14 +159,14 @@ tr命令可以用来从标准输入中翻译(改变)或者删除字符并将 ![Sort Strings in File](http://www.tecmint.com/wp-content/uploads/2014/10/sort-strings.jpg) -Sort Strings in File +排序文件中的字符串 压缩`ls –l`输出中的定界符至一个空格。 # ls -l | tr -s ' ' ![Squeeze Delimiter](http://www.tecmint.com/wp-content/uploads/2014/10/squeeze-delimeter.jpg) -Squeeze Delimiter +压缩分隔符 #### cut 命令使用方法 #### @@ -180,7 +180,7 @@ cut命令可以基于字节数(-b选项),字符(-c)或者字段(-f ![Extract User Accounts](http://www.tecmint.com/wp-content/uploads/2014/10/extract-user-accounts.jpg) -Extract User Accounts +提取用户账户 总结一下,我们将使用最后一个命令的输出中第一和第三个非空文件创建一个文本流。我们将使用grep作为第一过滤器来检查用户gacanepa的会话,然后将分隔符压缩至一个空格(tr -s ' ')。下一步,我们将使用cut来提取第一和第三个字段,最后使用第二个字段(本样例中,指的是IP地址)来排序之后再用uniq去重。 @@ -188,7 +188,7 @@ Extract User Accounts ![last command](http://www.tecmint.com/wp-content/uploads/2014/10/last-command.png) -last command example +last 命令样例 上面的命令显示了如何将多个命令和管道结合起来以便根据我们的愿望得到过滤后的数据。你也可以逐步地使用它以帮助你理解输出是如何从一个命令传输到下一个命令的(顺便说一句,这是一个非常好的学习经验!) From 9f3fda990911d1d7b0bc8c20667a06d894f15f27 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 31 Aug 2015 10:07:16 +0800 Subject: [PATCH 379/697] PUB:20150827 Linux or UNIX--Bash Read a File Line By Line @strugglingyouth --- ... 
or UNIX--Bash Read a File Line By Line.md | 51 ++++++++++--------- 1 file changed, 27 insertions(+), 24 deletions(-) rename {translated/tech => published}/20150827 Linux or UNIX--Bash Read a File Line By Line.md (75%) diff --git a/translated/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md b/published/20150827 Linux or UNIX--Bash Read a File Line By Line.md similarity index 75% rename from translated/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md rename to published/20150827 Linux or UNIX--Bash Read a File Line By Line.md index 37473334d1..8702ddec41 100644 --- a/translated/tech/20150827 Linux or UNIX--Bash Read a File Line By Line.md +++ b/published/20150827 Linux or UNIX--Bash Read a File Line By Line.md @@ -1,17 +1,21 @@ - -Linux/UNIX: Bash 下如何逐行读取一个文件 +Bash 下如何逐行读取一个文件 ================================================================================ - 在 Linux 或类 UNIX 系统下如何使用 KSH 或 BASH shell 逐行读取一个文件? -在 Linux, OSX, * BSD ,或者类 Unix 系统下你可以使用​​while..do..done bash 的循环来逐行读取一个文件。 +在 Linux 或类 UNIX 系统下如何使用 KSH 或 BASH shell 逐行读取一个文件? -**在 Bash Unix 或者 Linux shell 中逐行读取一个文件的语法:** +在 Linux、OSX、 *BSD 或者类 Unix 系统下你可以使用 ​​while..do..done 的 bash 循环来逐行读取一个文件。 -1.对于 bash, ksh, zsh,和其他的 shells 语法如下 - -1. while read -r line; do COMMAND; done < input.file -1.通过 -r 选项传递给红色的命令阻止反斜杠被解释。 -1.在 read 命令之前添加 IFS= option,来防止 leading/trailing 尾随的空白字符被分割 - -1. 
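To see why both options matter before moving on, here is a short self-contained test; the sample text is arbitrary, chosen only because it has leading spaces and backslashes:

```shell
#!/bin/bash
# The sample line below starts with three spaces and contains backslashes.
tmpfile=$(mktemp)
printf '   C:\\temp\\log\n' > "$tmpfile"

IFS= read -r line < "$tmpfile"       # keeps the spaces and the backslashes
printf 'with  IFS= -r : [%s]\n' "$line"    # -> [   C:\temp\log]

read line < "$tmpfile"               # strips leading spaces, eats backslashes
printf 'plain read    : [%s]\n' "$line"    # -> [C:templog]

rm -f "$tmpfile"
```

Without `-r`, read treats each backslash as an escape character and drops it; without `IFS=`, leading and trailing whitespace is stripped — which is why the loops in this article always use both.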
while IFS= read -r line; do COMMAND_on $line; done < input.file +###在 Bash Unix 或者 Linux shell 中逐行读取一个文件的语法 + +对于 bash、ksh、 zsh 和其他的 shells 语法如下 + + while read -r line; do COMMAND; done < input.file + +通过 -r 选项传递给 read 命令以防止阻止解释其中的反斜杠转义符。 + +在 read 命令之前添加 `IFS=` 选项,来防止首尾的空白字符被去掉。 + + while IFS= read -r line; do COMMAND_on $line; done < input.file 这是更适合人类阅读的语法: @@ -30,7 +34,7 @@ Linux/UNIX: Bash 下如何逐行读取一个文件 file="/home/vivek/data.txt" while IFS= read line do - # display $line or do somthing with $line + # display $line or do somthing with $line echo "$line" done <"$file" @@ -40,7 +44,7 @@ Linux/UNIX: Bash 下如何逐行读取一个文件 file="/home/vivek/data.txt" while IFS= read -r line do - # display $line or do somthing with $line + # display $line or do somthing with $line printf '%s\n' "$line" done <"$file" @@ -50,17 +54,17 @@ Linux/UNIX: Bash 下如何逐行读取一个文件 file="/etc/passwd" while IFS=: read -r f1 f2 f3 f4 f5 f6 f7 do - # display fields using f1, f2,..,f7 - printf 'Username: %s, Shell: %s, Home Dir: %s\n' "$f1" "$f7" "$f6" + # display fields using f1, f2,..,f7 + printf 'Username: %s, Shell: %s, Home Dir: %s\n' "$f1" "$f7" "$f6" done <"$file" 示例输出: ![Fig.01: Bash shell scripting- read file line by line demo outputs](http://s0.cyberciti.org/uploads/faq/2011/01/Bash-Scripting-Read-File-line-by-line-demo.jpg) -图01:Bash shell scripting- 读取文件并逐行输出文件 +*图01:Bash 脚本:读取文件并逐行输出文件* -**Bash Scripting: 逐行读取文本文件并创建为 pdf 文件** +###Bash 脚本:逐行读取文本文件并创建为 pdf 文件 我的输入文件如下(faq.txt): @@ -75,7 +79,7 @@ Linux/UNIX: Bash 下如何逐行读取一个文件 8292|http://www.cyberciti.biz/faq/mounting-harddisks-in-freebsd-with-mount-command/|FreeBSD: Mount Hard Drive / Disk Command 8190|http://www.cyberciti.biz/faq/rebooting-solaris-unix-server/|Reboot a Solaris UNIX System -我的 bash script: +我的 bash 脚本: #!/bin/bash # Usage: Create pdf files from input (wrapper script) @@ -106,11 +110,11 @@ Linux/UNIX: Bash 下如何逐行读取一个文件 done <"$_db" fi -**提示:从 bash 的变量开始读取** +###技巧:从 bash 变量中读取 让我们看看如何在 Debian 或者 Ubuntu Linux 下列出所有安装过的 php 包,请输入: - # 
我将输出内容赋值到一个变量名为$list中 # + # 我将输出内容赋值到一个变量名为 $list中 # list=$(dpkg --list php\* | awk '/ii/{print $2}') printf '%s\n' "$list" @@ -128,7 +132,7 @@ Linux/UNIX: Bash 下如何逐行读取一个文件 php5-readline php5-suhosin-extension -你现在可以从 $list 中看到安装的包: +你现在可以从 $list 中看到它们,并安装这些包: #!/bin/bash # BASH can iterate over $list variable using a "here string" # @@ -152,15 +156,14 @@ Linux/UNIX: Bash 下如何逐行读取一个文件 Installing php package php5-readline... Installing php package php5-suhosin-extension... - - *** 不要忘了运行php5enmod并重新启动服务(httpd 或 php5-fpm) *** + *** Do not forget to run php5enmod and restart the server (httpd or php5-fpm) *** -------------------------------------------------------------------------------- via: http://www.cyberciti.biz/faq/unix-howto-read-line-by-line-from-file/ -作者:[作者名][a] +作者: VIVEK GIT 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 61e0b79ff34515e3428f66ba8ee2b23b7eca2311 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 31 Aug 2015 10:47:16 +0800 Subject: [PATCH 380/697] PUB:20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien @GOLinux --- ... 
DEB and DEB to RPM Package Using Alien.md | 68 +++++++++++-------- 1 file changed, 40 insertions(+), 28 deletions(-) rename {translated/tech => published}/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md (73%) diff --git a/translated/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md b/published/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md similarity index 73% rename from translated/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md rename to published/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md index 98abba27f3..366a3c1e98 100644 --- a/translated/tech/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md +++ b/published/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md @@ -1,29 +1,31 @@ -Alien大法:RPM和DEB互转 +Alien 魔法:RPM 和 DEB 互转 ================================================================================ -正如我确信,你们一定知道Linux下的多种软件安装方式:使用发行版所提供的包管理系统([aptitude,yum,或者zypper][1],还可以举很多例子),从源码编译(尽管现在很少用了,但在Linux发展早期却是唯一可用的方法),或者使用各自的低级工具dpkg用于.deb,以及rpm用于.rpm,预编译包等等。 + +正如我确信,你们一定知道Linux下的多种软件安装方式:使用发行版所提供的包管理系统([aptitude,yum,或者zypper][1],还可以举很多例子),从源码编译(尽管现在很少用了,但在Linux发展早期却是唯一可用的方法),或者使用各自的低级工具dpkg用于.deb,以及rpm用于.rpm,预编译包,如此这般。 ![Convert RPM to DEB and DEB to RPM](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-RPM-to-DEB-and-DEB-to-RPM.png) -使用Alien将RPM转换成DEB以及将DEB转换成RPM -在本文中,我们将为你介绍alien,一个用于在各种不同的Linux包格式相互转换的工具,将.rpm转换成.deb(或者反过来)是最常见的用法。 +*使用Alien将RPM转换成DEB以及将DEB转换成RPM* -如果你需要某个特定类型的包,而你只能找到其它格式的包的时候,该工具,即使当其作者不再维护,并且在其网站声明:alien将可能永远维持在实验状态,迟早派得上用场。 +在本文中,我们将为你介绍alien,一个用于在各种不同的Linux包格式相互转换的工具,其最常见的用法是将.rpm转换成.deb(或者反过来)。 + +如果你需要某个特定类型的包,而你只能找到其它格式的包的时候,该工具迟早能派得上用场——即使是其作者不再维护,并且在其网站声明:alien将可能永远维持在实验状态。 例如,有一次,我正查找一个用于喷墨打印机的.deb驱动,但是却没有找到——生产厂家只提供.rpm包,这时候alien拯救了我。我安装了alien,将包进行转换,不久之后我就可以使用我的打印机了,没有任何问题。 
-即便如此,我们也必须澄清一下,这个工具不应当用来替换重要的系统文件和库,因为它们在不同的发行版中有不同的配置。只有在本文开头提出的安装方法根本不适合所需的程序时,alien才能作为最后手段使用。 +即便如此,我们也必须澄清一下,这个工具不应当用来转换重要的系统文件和库,因为它们在不同的发行版中有不同的配置。只有在前面说的那种情况下所建议的安装方法根本不适合时,alien才能作为最后手段使用。 -最后一项要点是,我们必须注意,虽然我们在本文中使用CentOS和Debian,除了前两个发行版及其各自的家族体系外,alien也据我们所知可以工作在Slackware中,甚至Solaris中。 +最后一项要点是,我们必须注意,虽然我们在本文中使用CentOS和Debian,除了前两个发行版及其各自的家族体系外,据我们所知,alien可以工作在Slackware中,甚至Solaris中。 -### 步骤1:安装Alien及其依赖 ### +### 步骤1:安装Alien及其依赖包 ### 要安装alien到CentOS/RHEL 7中,你需要启用EPEL和Nux Dextop(是的,是Dextop——不是Desktop)仓库,顺序如下: # yum install epel-release + +启用Nux Dextop仓库的包的当前最新版本是0.5(2015年8月10日发布),在安装之前你可以查看[http://li.nux.ro/download/nux/dextop/el7/x86_64/][2]上是否有更新的版本。 + # rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro - -启用该仓库的包的当前最新版本是0.5(2015年8月10日发布),在安装之前你可以查看[http://li.nux.ro/download/nux/dextop/el7/x86_64/][2]上是否有更新的版本。 - # rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm 然后再做, @@ -49,7 +51,8 @@ Alien大法:RPM和DEB互转 # alien --to-rpm --scripts dateutils_0.3.1-1.1_amd64.deb ![Convert .deb to .rpm package in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-deb-to-rpm-package.png) -在Linux中将.deb转换成.rpm + +*在Linux中将.deb转换成.rpm* **重要**:(请注意alien是怎样来增加目标包的次版本号的。如果你想要无视该行为,请添加-keep-version标识)。 @@ -58,7 +61,8 @@ Alien大法:RPM和DEB互转 # rpm -Uvh dateutils-0.3.1-2.1.x86_64.rpm ![Install RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-RPM-Package.png) -安装RPM包 + +*安装RPM包* 要解决该问题,我们需要启用epel-testing仓库,然后安装rpmbuild工具来编辑该包的配置以重建包: @@ -68,17 +72,19 @@ Alien大法:RPM和DEB互转 # rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm -它会打开你的默认文本编辑器。转到`%files`章节并删除涉及到错误信息中提到的目录的行,然后保存文件并退出: +它会打开你的默认文本编辑器。请转到`%files`章节并删除涉及到错误信息中提到的目录的行,然后保存文件并退出: ![Convert .deb to Alien Version](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-Deb-Package-to-Alien-Version.png) -转换.deb到Alien版 -但你退出该文件后,将提示你继续去重构。如果你选择Y,该文件会重构到指定的目录(与当前工作目录不同): +*转换.deb到Alien版* + +但你退出该文件后,将提示你继续去重构。如果你选择“Y”,该文件会重构到指定的目录(与当前工作目录不同): # 
rpmrebuild –pe dateutils-0.3.1-2.1.x86_64.rpm ![Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Build-RPM-Package.png) -构建RPM包 + +*构建RPM包* 现在你可以像以往一样继续来安装包并验证: @@ -86,14 +92,16 @@ Alien大法:RPM和DEB互转 # rpm -qa | grep dateutils ![Install Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Build-RPM-Package.png) -安装构建RPM包 -最后,你可以列出date工具包含的各个工具,并可选择性地查看各自的手册页: +*安装构建RPM包* + +最后,你可以列出date工具包含的各个工具,也可以查看各自的手册页: # ls -l /usr/bin | grep dateutils ![Verify Installed RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Installed-Package.png) -验证安装的RPM包 + +*验证安装的RPM包* ### 步骤3:将.rpm转换成.deb包 ### @@ -103,7 +111,8 @@ Alien大法:RPM和DEB互转 # lsb_release -a | tail -n 4 ![Check Shell and Debian OS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Shell-Debian-OS-Version.png) -检查Shell和Debian操作系统版本 + +*检查Shell和Debian操作系统版本* # wget http://mirror.centos.org/centos/6/os/i386/Packages/zsh-4.3.11-4.el6.centos.i686.rpm # alien --to-deb --scripts zsh-4.3.11-4.el6.centos.i686.rpm @@ -111,27 +120,30 @@ Alien大法:RPM和DEB互转 你可以安全地无视关于签名丢失的信息: ![Convert .rpm to .deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-rpm-to-deb-Package.png) -将.rpm转换成.deb包 + +*将.rpm转换成.deb包* 过了一会儿后,.deb包应该已经生成,并可以安装了: # dpkg -i zsh_4.3.11-5_i386.deb ![Install RPM Converted Deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Deb-Package.png) -安装RPM转换来的Deb包 -安装完后,你可以zsh是否添加到了合法shell列表中: +*安装RPM转换来的Deb包* + +安装完后,你看看可以zsh是否添加到了合法shell列表中: # cat /etc/shells ![Confirm Installed Zsh Package](http://www.tecmint.com/wp-content/uploads/2015/08/Confirm-Installed-Package.png) -确认安装的Zsh包 + +*确认安装的Zsh包* ### 小结 ### -在本文中,我们已经解释了如何将.rpm转换成.deb及其反向转换并作为这类程序不能从仓库中或者作为可分发源代码获得的最后安装手段。你一定想要将本文添加到书签中,因为我们都需要alien。 +在本文中,我们已经解释了如何将.rpm转换成.deb及其反向转换,这可以作为这类程序不能从仓库中或者作为可分发源代码获得的最后安装手段。你一定想要将本文添加到书签中,因为我们都需要alien。 -请自由分享你关于本文的想法,写到下面的表格中吧。 +请自由分享你关于本文的想法,写到下面的表单中吧。 
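The `/etc/shells` check above can also be wrapped in a small reusable helper. This is only a sketch: the helper name is invented for illustration, and it runs against a throwaway copy so the example does not depend on the real `/etc/shells`; on a live system you would point it at `/etc/shells` itself:

```shell
#!/bin/bash
# Sketch: verify that a shell is listed in an /etc/shells-style file.
# A throwaway copy is used here; replace "$shells_file" with /etc/shells
# on a real system.
shells_file=$(mktemp)
printf '%s\n' /bin/sh /bin/bash /bin/zsh > "$shells_file"

shell_registered() {                 # hypothetical helper name
    grep -qx "$1" "$2"               # -x: match the whole line only
}

if shell_registered /bin/zsh "$shells_file"; then
    echo "/bin/zsh is a valid login shell"
else
    echo "/bin/zsh is missing from $shells_file"
fi

rm -f "$shells_file"
```

The `-x` flag matters: without it, `grep /bin/sh` would also match the `/bin/zsh` line's substring-free cousins such as `/bin/bash` never — but `/bin/sh` would wrongly match inside longer paths like `/usr/bin/shelltool`.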
-------------------------------------------------------------------------------- @@ -139,7 +151,7 @@ via: http://www.tecmint.com/convert-from-rpm-to-deb-and-deb-to-rpm-package-using 作者:[Gabriel Cánepa][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 5d2647c6a89ed1b0805362f6c2111e627f58865b Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 31 Aug 2015 15:51:24 +0800 Subject: [PATCH 381/697] PUB:Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux @strugglingyouth --- ...ith Double Distributed Parity) in Linux.md | 185 +++++++++--------- 1 file changed, 92 insertions(+), 93 deletions(-) rename {translated/tech/RAID => published}/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md (50%) diff --git a/translated/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md b/published/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md similarity index 50% rename from translated/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md rename to published/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md index 1890a242e2..d222a997e5 100644 --- a/translated/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md +++ b/published/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md @@ -1,77 +1,78 @@ - -在 Linux 中安装 RAID 6(条带化双分布式奇偶校验) - 第5部分 +在 Linux 下使用 RAID(五):安装 RAID 6(条带化双分布式奇偶校验) ================================================================================ -RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两个磁盘发生故障后依然有容错能力。两并列的磁盘发生故障时,系统的关键任务仍然能运行。它与 RAID 5 相似,但性能更健壮,因为它多用了一个磁盘来进行奇偶校验。 -在之前的文章中,我们已经在 RAID 5 看了分布式奇偶校验,但在本文中,我们将看到的是 RAID 6 双分布式奇偶校验。不要期望比其他 RAID 有额外的性能,我们仍然需要安装一个专用的 
RAID 控制器。在 RAID 6 中,即使我们失去了2个磁盘,我们仍可以取回数据通过更换磁盘,然后从校验中构建数据。 +RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即使两个磁盘发生故障后依然有容错能力。在两个磁盘同时发生故障时,系统的关键任务仍然能运行。它与 RAID 5 相似,但性能更健壮,因为它多用了一个磁盘来进行奇偶校验。 + +在之前的文章中,我们已经在 RAID 5 看了分布式奇偶校验,但在本文中,我们将看到的是 RAID 6 双分布式奇偶校验。不要期望比其他 RAID 有更好的性能,除非你也安装了一个专用的 RAID 控制器。在 RAID 6 中,即使我们失去了2个磁盘,我们仍可以通过更换磁盘,从校验中构建数据,然后取回数据。 ![Setup RAID 6 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Setup-RAID-6-in-Linux.jpg) -在 Linux 中安装 RAID 6 +*在 Linux 中安装 RAID 6* -要建立一个 RAID 6,一组最少需要4个磁盘。RAID 6 甚至在有些设定中会有多组磁盘,当读取数据时,它会同时从所有磁盘读取,所以读取速度会更快,当写数据时,因为它要将数据写在条带化的多个磁盘上,所以性能会较差。 +要建立一个 RAID 6,一组最少需要4个磁盘。RAID 6 甚至在有些组中会有更多磁盘,这样将多个硬盘捆在一起,当读取数据时,它会同时从所有磁盘读取,所以读取速度会更快,当写数据时,因为它要将数据写在条带化的多个磁盘上,所以性能会较差。 -现在,很多人都在讨论为什么我们需要使用 RAID 6,它的性能和其他 RAID 相比并不太好。提出这个问题首先需要知道的是,如果需要高容错的必须选择 RAID 6。在每一个对数据库的高可用性要求较高的环境中,他们需要 RAID 6 因为数据库是最重要,无论花费多少都需要保护其安全,它在视频流环境中也是非常有用的。 +现在,很多人都在讨论为什么我们需要使用 RAID 6,它的性能和其他 RAID 相比并不太好。提出这个问题首先需要知道的是,如果需要高容错性就选择 RAID 6。在每一个用于数据库的高可用性要求较高的环境中,他们需要 RAID 6 因为数据库是最重要,无论花费多少都需要保护其安全,它在视频流环境中也是非常有用的。 #### RAID 6 的的优点和缺点 #### -- 性能很不错。 -- RAID 6 非常昂贵,因为它要求两个独立的磁盘用于奇偶校验功能。 +- 性能不错。 +- RAID 6 比较昂贵,因为它要求两个独立的磁盘用于奇偶校验功能。 - 将失去两个磁盘的容量来保存奇偶校验信息(双奇偶校验)。 -- 不存在数据丢失,即时两个磁盘损坏。我们可以在更换损坏的磁盘后从校验中重建数据。 +- 即使两个磁盘损坏,数据也不会丢失。我们可以在更换损坏的磁盘后从校验中重建数据。 - 读性能比 RAID 5 更好,因为它从多个磁盘读取,但对于没有专用的 RAID 控制器的设备写性能将非常差。 #### 要求 #### -要创建一个 RAID 6 最少需要4个磁盘.你也可以添加更多的磁盘,但你必须有专用的 RAID 控制器。在软件 RAID 中,我们在 RAID 6 中不会得到更好的性能,所以我们需要一个物理 RAID 控制器。 +要创建一个 RAID 6 最少需要4个磁盘。你也可以添加更多的磁盘,但你必须有专用的 RAID 控制器。使用软件 RAID 我们在 RAID 6 中不会得到更好的性能,所以我们需要一个物理 RAID 控制器。 -这些是新建一个 RAID 需要的设置,我们建议先看完以下 RAID 文章。 +如果你新接触 RAID 设置,我们建议先看完以下 RAID 文章。 -- [Linux 中 RAID 的基本概念 – 第一部分][1] -- [在 Linux 上创建软件 RAID 0 (条带化) – 第二部分][2] -- [在 Linux 上创建软件 RAID 1 (镜像) – 第三部分][3] +- [介绍 RAID 的级别和概念][1] +- [使用 mdadm 工具创建软件 RAID 0 (条带化)][2] +- [用两块磁盘创建 RAID 1(镜像)][3] +- [创建 RAID 5(条带化与分布式奇偶校验)](4) -#### My Server Setup #### +#### 我的服务器设置 #### - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.228 - Hostname : 
rd6.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd - Disk 4 [20GB] : /dev/sde + 操作系统 : CentOS 6.5 Final + IP 地址 : 192.168.0.228 + 主机名 : rd6.tecmintlocal.com + 磁盘 1 [20GB] : /dev/sdb + 磁盘 2 [20GB] : /dev/sdc + 磁盘 3 [20GB] : /dev/sdd + 磁盘 4 [20GB] : /dev/sde -这篇文章是9系列 RAID 教程的第5部分,在这里我们将看到我们如何在 Linux 系统或者服务器上创建和设置软件 RAID 6 或条带化双分布式奇偶校验,使用四个 20GB 的磁盘 /dev/sdb, /dev/sdc, /dev/sdd 和 /dev/sde. +这是9篇系列教程的第5部分,在这里我们将看到如何在 Linux 系统或者服务器上使用四个 20GB 的磁盘(名为 /dev/sdb、 /dev/sdc、 /dev/sdd 和 /dev/sde)创建和设置软件 RAID 6 (条带化双分布式奇偶校验)。 ### 第1步:安装 mdadm 工具,并检查磁盘 ### -1.如果你按照我们最进的两篇 RAID 文章(第2篇和第3篇),我们已经展示了如何安装‘mdadm‘工具。如果你直接看的这篇文章,我们先来解释下在Linux系统中如何使用‘mdadm‘工具来创建和管理 RAID,首先根据你的 Linux 发行版使用以下命令来安装。 +1、 如果你按照我们最进的两篇 RAID 文章(第2篇和第3篇),我们已经展示了如何安装`mdadm`工具。如果你直接看的这篇文章,我们先来解释下在 Linux 系统中如何使用`mdadm`工具来创建和管理 RAID,首先根据你的 Linux 发行版使用以下命令来安装。 - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] + # yum install mdadm [在 RedHat 系统] + # apt-get install mdadm [在 Debain 系统] -2.安装该工具后,然后来验证需要的四个磁盘,我们将会使用下面的‘fdisk‘命令来检验用于创建 RAID 的磁盘。 +2、 安装该工具后,然后来验证所需的四个磁盘,我们将会使用下面的`fdisk`命令来检查用于创建 RAID 的磁盘。 # fdisk -l | grep sd ![Check Hard Disk in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Linux-Disks.png) -在 Linux 中检查磁盘 +*在 Linux 中检查磁盘* -3.在创建 RAID 磁盘前,先检查下我们的磁盘是否创建过 RAID 分区。 +3、 在创建 RAID 磁盘前,先检查下我们的磁盘是否创建过 RAID 分区。 # mdadm -E /dev/sd[b-e] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde # 或 ![Check Raid on Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Disk-Raid.png) -在磁盘上检查 Raid 分区 +*在磁盘上检查 RAID 分区* **注意**: 在上面的图片中,没有检测到任何 super-block 或者说在四个磁盘上没有 RAID 存在。现在我们开始创建 RAID 6。 ### 第2步:为 RAID 6 创建磁盘分区 ### -4.现在为 raid 创建分区‘/dev/sdb‘, ‘/dev/sdc‘, ‘/dev/sdd‘ 和 ‘/dev/sde‘使用下面 fdisk 命令。在这里,我们将展示如何创建分区在 sdb 磁盘,同样的步骤也适用于其他分区。 +4、 现在在 `/dev/sdb`, `/dev/sdc`, `/dev/sdd` 和 `/dev/sde`上为 RAID 创建分区,使用下面的 fdisk 命令。在这里,我们将展示如何在 sdb 磁盘创建分区,同样的步骤也适用于其他分区。 
**创建 /dev/sdb 分区** @@ -79,20 +80,20 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 请按照说明进行操作,如下图所示创建分区。 -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 +- 按 `n`创建新的分区。 +- 然后按 `P` 选择主分区。 - 接下来选择分区号为1。 - 只需按两次回车键选择默认值即可。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 去修改分区。 -- 键入 ‘fd’ 设置为 Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 +- 然后,按 `P` 来打印创建好的分区。 +- 按 `L`,列出所有可用的类型。 +- 按 `t` 去修改分区。 +- 键入 `fd` 设置为 Linux 的 RAID 类型,然后按回车确认。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 ![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition.png) -创建 /dev/sdb 分区 +*创建 /dev/sdb 分区* **创建 /dev/sdc 分区** @@ -100,7 +101,7 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 ![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition.png) -创建 /dev/sdc 分区 +*创建 /dev/sdc 分区* **创建 /dev/sdd 分区** @@ -108,7 +109,7 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 ![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition.png) -创建 /dev/sdd 分区 +*创建 /dev/sdd 分区* **创建 /dev/sde 分区** @@ -116,71 +117,67 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 ![Create sde Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sde-Partition.png) -创建 /dev/sde 分区 +*创建 /dev/sde 分区* -5.创建好分区后,检查磁盘的 super-blocks 是个好的习惯。如果 super-blocks 不存在我们可以按前面的创建一个新的 RAID。 +5、 创建好分区后,检查磁盘的 super-blocks 是个好的习惯。如果 super-blocks 不存在我们可以按前面的创建一个新的 RAID。 - # mdadm -E /dev/sd[b-e]1 - - - 或者 - - # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 + # mdadm -E /dev/sd[b-e]1 + # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 # 或 ![Check Raid on New Partitions](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Partitions.png) -在新分区中检查 Raid +*在新分区中检查 RAID * ### 步骤3:创建 md 设备(RAID) ### -6,现在是时候来创建 RAID 设备‘md0‘ (即 /dev/md0)并应用 RAID 级别在所有新创建的分区中,确认 raid 使用以下命令。 +6、 现在可以使用以下命令创建 RAID 设备`md0` (即 /dev/md0),并在所有新创建的分区中应用 RAID 级别,然后确认 RAID 设置。 # mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 
/dev/sde1 # cat /proc/mdstat ![Create Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Raid-6-Device.png) -创建 Raid 6 设备 +*创建 Raid 6 设备* -7.你还可以使用 watch 命令来查看当前 raid 的进程,如下图所示。 +7、 你还可以使用 watch 命令来查看当前创建 RAID 的进程,如下图所示。 # watch -n1 cat /proc/mdstat ![Check Raid 6 Process](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Process.png) -检查 Raid 6 进程 +*检查 RAID 6 创建过程* -8.使用以下命令验证 RAID 设备。 +8、 使用以下命令验证 RAID 设备。 -# mdadm -E /dev/sd[b-e]1 + # mdadm -E /dev/sd[b-e]1 **注意**::上述命令将显示四个磁盘的信息,这是相当长的,所以没有截取其完整的输出。 -9.接下来,验证 RAID 阵列,以确认 re-syncing 被启动。 +9、 接下来,验证 RAID 阵列,以确认重新同步过程已经开始。 # mdadm --detail /dev/md0 ![Check Raid 6 Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Array.png) -检查 Raid 6 阵列 +*检查 Raid 6 阵列* ### 第4步:在 RAID 设备上创建文件系统 ### -10.使用 ext4 为‘/dev/md0‘创建一个文件系统并将它挂载在 /mnt/raid5 。这里我们使用的是 ext4,但你可以根据你的选择使用任意类型的文件系统。 +10、 使用 ext4 为`/dev/md0`创建一个文件系统,并将它挂载在 /mnt/raid6 。这里我们使用的是 ext4,但你可以根据你的选择使用任意类型的文件系统。 # mkfs.ext4 /dev/md0 ![Create File System on Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Create-File-System-on-Raid.png) -在 Raid 6 上创建文件系统 +*在 RAID 6 上创建文件系统* -11.挂载创建的文件系统到 /mnt/raid6,并验证挂载点下的文件,我们可以看到 lost+found 目录。 +11、 将创建的文件系统挂载到 /mnt/raid6,并验证挂载点下的文件,我们可以看到 lost+found 目录。 # mkdir /mnt/raid6 # mount /dev/md0 /mnt/raid6/ # ls -l /mnt/raid6/ -12.在挂载点下创建一些文件,在任意文件中添加一些文字并验证其内容。 +12、 在挂载点下创建一些文件,在任意文件中添加一些文字并验证其内容。 # touch /mnt/raid6/raid6_test.txt # ls -l /mnt/raid6/ @@ -189,9 +186,9 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 ![Verify Raid Content](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Content.png) -验证 Raid 内容 +*验证 RAID 内容* -13.在 /etc/fstab 中添加以下条目使系统启动时自动挂载设备,环境不同挂载点可能会有所不同。 +13、 在 /etc/fstab 中添加以下条目使系统启动时自动挂载设备,操作系统环境不同挂载点可能会有所不同。 # vim /etc/fstab @@ -199,36 +196,37 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 ![Automount Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Automount-Raid-Device.png) -自动挂载 Raid 6 设备 +*自动挂载 RAID 6 设备* -14.接下来,执行‘mount -a‘命令来验证 fstab 
中的条目是否有错误。 +14、 接下来,执行`mount -a`命令来验证 fstab 中的条目是否有错误。 # mount -av ![Verify Raid Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Automount-Raid-Devices.png) -验证 Raid 是否自动挂载 +*验证 RAID 是否自动挂载* ### 第5步:保存 RAID 6 的配置 ### -15.请注意默认 RAID 没有配置文件。我们需要使用以下命令手动保存它,然后检查设备‘/dev/md0‘的状态。 +15、 请注意,默认情况下 RAID 没有配置文件。我们需要使用以下命令手动保存它,然后检查设备`/dev/md0`的状态。 # mdadm --detail --scan --verbose >> /etc/mdadm.conf + # cat /etc/mdadm.conf # mdadm --detail /dev/md0 ![Save Raid 6 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png) -保存 Raid 6 配置 +*保存 RAID 6 配置* ![Check Raid 6 Status](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png) -检查 Raid 6 状态 +*检查 RAID 6 状态* ### 第6步:添加备用磁盘 ### -16.现在,它使用了4个磁盘,并且有两个作为奇偶校验信息来使用。在某些情况下,如果任意一个磁盘出现故障,我们仍可以得到数据,因为在 RAID 6 使用双奇偶校验。 +16、 现在,已经使用了4个磁盘,并且其中两个作为奇偶校验信息来使用。在某些情况下,如果任意一个磁盘出现故障,我们仍可以得到数据,因为在 RAID 6 使用双奇偶校验。 -如果第二个磁盘也出现故障,在第三块磁盘损坏前我们可以添加一个​​新的。它可以作为一个备用磁盘并入 RAID 集合,但我在创建 raid 集合前没有定义备用的磁盘。但是,在磁盘损坏后或者创建 RAId 集合时我们可以添加一块磁盘。现在,我们已经创建好了 RAID,下面让我演示如何添加备用磁盘。 +如果第二个磁盘也出现故障,在第三块磁盘损坏前我们可以添加一个​​新的。可以在创建 RAID 集时加入一个备用磁盘,但我在创建 RAID 集合前没有定义备用的磁盘。不过,我们可以在磁盘损坏后或者创建 RAID 集合时添加一块备用磁盘。现在,我们已经创建好了 RAID,下面让我演示如何添加备用磁盘。 为了达到演示的目的,我已经热插入了一个新的 HDD 磁盘(即 /dev/sdf),让我们来验证接入的磁盘。 @@ -236,15 +234,15 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 ![Check New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-New-Disk.png) -检查新 Disk +*检查新磁盘* -17.现在再次确认新连接的磁盘没有配置过 RAID ,使用 mdadm 来检查。 +17、 现在再次确认新连接的磁盘没有配置过 RAID ,使用 mdadm 来检查。 # mdadm --examine /dev/sdf ![Check Raid on New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Disk.png) -在新磁盘中检查 Raid +*在新磁盘中检查 RAID* **注意**: 像往常一样,我们早前已经为四个磁盘创建了分区,同样,我们使用 fdisk 命令为新插入的磁盘创建新分区。 @@ -252,9 +250,9 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 ![Create sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Partition-on-sdf.png) -为 /dev/sdf 创建分区 +*为 /dev/sdf 创建分区* -18.在 /dev/sdf 创建新的分区后,在新分区上确认 raid,包括/dev/md0 raid 设备的备用磁盘,并验证添加的设备。 
+18、 在 /dev/sdf 创建新的分区后,在新分区上确认没有 RAID,然后将备用磁盘添加到 RAID 设备 /dev/md0 中,并验证添加的设备。 # mdadm --examine /dev/sdf # mdadm --examine /dev/sdf1 @@ -263,19 +261,19 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 ![Verify Raid on sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-on-sdf.png) -在 sdf 分区上验证 Raid +*在 sdf 分区上验证 Raid* ![Add sdf Partition to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Add-sdf-Partition-to-Raid.png) -为 RAID 添加 sdf 分区 +*添加 sdf 分区到 RAID * ![Verify sdf Partition Details](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-sdf-Details.png) -验证 sdf 分区信息 +*验证 sdf 分区信息* ### 第7步:检查 RAID 6 容错 ### -19.现在,让我们检查备用驱动器是否能自动工作,当我们阵列中的任何一个磁盘出现故障时。为了测试,我亲自将一个磁盘模拟为故障设备。 +19、 现在,让我们检查备用驱动器是否能自动工作,当我们阵列中的任何一个磁盘出现故障时。为了测试,我将一个磁盘手工标记为故障设备。 在这里,我们标记 /dev/sdd1 为故障磁盘。 @@ -283,15 +281,15 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 ![Check Raid 6 Fault Tolerance](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Failover.png) -检查 Raid 6 容错 +*检查 RAID 6 容错* -20.让我们查看 RAID 的详细信息,并检查备用磁盘是否开始同步。 +20、 让我们查看 RAID 的详细信息,并检查备用磁盘是否开始同步。 # mdadm --detail /dev/md0 ![Check Auto Raid Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Auto-Raid-Syncing.png) -检查 Raid 自动同步 +*检查 RAID 自动同步* **哇塞!** 这里,我们看到备用磁盘激活了,并开始重建进程。在底部,我们可以看到有故障的磁盘 /dev/sdd1 标记为 faulty。可以使用下面的命令查看进程重建。 @@ -299,11 +297,11 @@ RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两 ![Raid 6 Auto Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-6-Auto-Syncing.png) -Raid 6 自动同步 +*RAID 6 自动同步* ### 结论: ### -在这里,我们看到了如何使用四个磁盘设置 RAID 6。这种 RAID 级别是具有高冗余的昂贵设置之一。在接下来的文章中,我们将看到如何建立一个嵌套的 RAID 10 甚至更多。至此,请继续关注 TECMINT。 +在这里,我们看到了如何使用四个磁盘设置 RAID 6。这种 RAID 级别是具有高冗余的昂贵设置之一。在接下来的文章中,我们将看到如何建立一个嵌套的 RAID 10 甚至更多。请继续关注。 -------------------------------------------------------------------------------- @@ -311,11 +309,12 @@ via: http://www.tecmint.com/create-raid-6-in-linux/ 作者:[Babin Lonston][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) 
+校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ -[3]:http://www.tecmint.com/create-raid1-in-linux/ +[1]:https://linux.cn/article-6085-1.html +[2]:https://linux.cn/article-6087-1.html +[3]:https://linux.cn/article-6093-1.html +[4]:https://linux.cn/article-6102-1.html From cbe0470402a2ef2303db565a5a3475d50643f9c1 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 31 Aug 2015 16:12:20 +0800 Subject: [PATCH 382/697] =?UTF-8?q?20150831-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...31 Linux workstation security checklist.md | 800 ++++++++++++++++++ 1 file changed, 800 insertions(+) create mode 100644 sources/tech/20150831 Linux workstation security checklist.md diff --git a/sources/tech/20150831 Linux workstation security checklist.md b/sources/tech/20150831 Linux workstation security checklist.md new file mode 100644 index 0000000000..bc2b59f16a --- /dev/null +++ b/sources/tech/20150831 Linux workstation security checklist.md @@ -0,0 +1,800 @@ +Linux workstation security checklist +================================================================================ +This is a set of recommendations used by the Linux Foundation for their systems +administrators. All of LF employees are remote workers and we use this set of +guidelines to ensure that a sysadmin's system passes core security requirements +in order to reduce the risk of it becoming an attack vector against the rest +of our infrastructure. + +Even if your systems administrators are not remote workers, chances are that +they perform a lot of their work either from a portable laptop in a work +environment, or set up their home systems to access the work infrastructure +for after-hours/emergency support. 
In either case, you can adapt this set of +recommendations to suit your environment. + +This, by no means, is an exhaustive "workstation hardening" document, but +rather an attempt at a set of baseline recommendations to avoid most glaring +security errors without introducing too much inconvenience. You may read this +document and think it is way too paranoid, while someone else may think this +barely scratches the surface. Security is just like driving on the highway -- +anyone going slower than you is an idiot, while anyone driving faster than you +is a crazy person. These guidelines are merely a basic set of core safety +rules that is neither exhaustive, nor a replacement for experience, vigilance, +and common sense. + +Each section is split into two areas: + +- The checklist that can be adapted to your project's needs +- Free-form list of considerations that explain what dictated these decisions + +## Severity levels + +The items in each checklist include the severity level, which we hope will help +guide your decision: + +- _(CRITICAL)_ items should definitely be high on the consideration list. + If not implemented, they will introduce high risks to your workstation + security. +- _(MODERATE)_ items will improve your security posture, but are less + important, especially if they interfere too much with your workflow. +- _(LOW)_ items may improve the overall security, but may not be worth the + convenience trade-offs. +- _(PARANOID)_ is reserved for items we feel will dramatically improve your + workstation security, but will probably require a lot of adjustment to the + way you interact with your operating system. + +Remember, these are only guidelines. If you feel these severity levels do not +reflect your project's commitment to security, you should adjust them as you +see fit. 
+ +## Choosing the right hardware + +We do not mandate that our admins use a specific vendor or a specific model, so +this section addresses core considerations when choosing a work system. + +### Checklist + +- [ ] System supports SecureBoot _(CRITICAL)_ +- [ ] System has no firewire, thunderbolt or ExpressCard ports _(MODERATE)_ +- [ ] System has a TPM chip _(LOW)_ + +### Considerations + +#### SecureBoot + +Despite its controversial nature, SecureBoot offers prevention against many +attacks targeting workstations (Rootkits, "Evil Maid," etc), without +introducing too much extra hassle. It will not stop a truly dedicated attacker, +plus there is a pretty high degree of certainty that state security agencies +have ways to defeat it (probably by design), but having SecureBoot is better +than having nothing at all. + +Alternatively, you may set up [Anti Evil Maid][1] which offers a more +wholesome protection against the type of attacks that SecureBoot is supposed +to prevent, but it will require more effort to set up and maintain. + +#### Firewire, thunderbolt, and ExpressCard ports + +Firewire is a standard that, by design, allows any connecting device full +direct memory access to your system ([see Wikipedia][2]). Thunderbolt and +ExpressCard are guilty of the same, though some later implementations of +Thunderbolt attempt to limit the scope of memory access. It is best if the +system you are getting has none of these ports, but it is not critical, as +they usually can be turned off via UEFI or disabled in the kernel itself. + +#### TPM Chip + +Trusted Platform Module (TPM) is a crypto chip bundled with the motherboard +separately from the core processor, which can be used for additional platform +security (such as to store full-disk encryption keys), but is not normally used +for day-to-day workstation operation. At best, this is a nice-to-have, unless +you have a specific need to use TPM for your workstation security. 
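The hardware checklist above can be spot-checked from an already-running Linux system. Here is a hedged sketch -- `/proc/modules` and `/sys/class/tpm` are standard on modern kernels, but results will naturally vary per machine, and absent drivers are not proof the ports don't exist (also check `lspci` output and your UEFI settings):

```shell
# Spot-check the hardware checklist: DMA-capable ports and TPM presence
fw_status="absent"
grep -Eq 'firewire|thunderbolt' /proc/modules 2>/dev/null && fw_status="loaded"
echo "firewire/thunderbolt drivers: $fw_status"

# The kernel exposes discovered TPM chips under /sys/class/tpm
tpm_status="not detected"
[ -e /sys/class/tpm/tpm0 ] && tpm_status="present"
echo "TPM chip: $tpm_status"
```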
+ +## Pre-boot environment + +This is a set of recommendations for your workstation before you even start +with OS installation. + +### Checklist + +- [ ] UEFI boot mode is used (not legacy BIOS) _(CRITICAL)_ +- [ ] Password is required to enter UEFI configuration _(CRITICAL)_ +- [ ] SecureBoot is enabled _(CRITICAL)_ +- [ ] UEFI-level password is required to boot the system _(LOW)_ + +### Considerations + +#### UEFI and SecureBoot + +UEFI, with all its warts, offers a lot of goodies that legacy BIOS doesn't, +such as SecureBoot. Most modern systems come with UEFI mode on by default. + +Make sure a strong password is required to enter UEFI configuration mode. Pay +attention, as many manufacturers quietly limit the length of the password you +are allowed to use, so you may need to choose high-entropy short passwords vs. +long passphrases (see below for more on passphrases). + +Depending on the Linux distribution you decide to use, you may or may not have +to jump through additional hoops in order to import your distribution's +SecureBoot key that would allow you to boot the distro. Many distributions have +partnered with Microsoft to sign their released kernels with a key that is +already recognized by most system manufacturers, therefore saving you the +trouble of having to deal with key importing. + +As an extra measure, before someone is allowed to even get to the boot +partition and try some badness there, let's make them enter a password. This +password should be different from your UEFI management password, in order to +prevent shoulder-surfing. If you shut down and start a lot, you may choose to +not bother with this, as you will already have to enter a LUKS passphrase and +this will save you a few extra keystrokes. + +## Distro choice considerations + +Chances are you'll stick with a fairly widely-used distribution such as Fedora, +Ubuntu, Arch, Debian, or one of their close spin-offs. 
In any case, this is +what you should consider when picking a distribution to use. + +### Checklist + +- [ ] Has a robust MAC/RBAC implementation (SELinux/AppArmor/Grsecurity) _(CRITICAL)_ +- [ ] Publishes security bulletins _(CRITICAL)_ +- [ ] Provides timely security patches _(CRITICAL)_ +- [ ] Provides cryptographic verification of packages _(CRITICAL)_ +- [ ] Fully supports UEFI and SecureBoot _(CRITICAL)_ +- [ ] Has robust native full disk encryption support _(CRITICAL)_ + +### Considerations + +#### SELinux, AppArmor, and GrSecurity/PaX + +Mandatory Access Controls (MAC) or Role-Based Access Controls (RBAC) are an +extension of the basic user/group security mechanism used in legacy POSIX +systems. Most distributions these days either already come bundled with a +MAC/RBAC implementation (Fedora, Ubuntu), or provide a mechanism to add it via +an optional post-installation step (Gentoo, Arch, Debian). Obviously, it is +highly advised that you pick a distribution that comes pre-configured with a +MAC/RBAC system, but if you have strong feelings about a distribution that +doesn't have one enabled by default, do plan to configure it +post-installation. + +Distributions that do not provide any MAC/RBAC mechanisms should be strongly +avoided, as traditional POSIX user- and group-based security should be +considered insufficient in this day and age. If you would like to start out +with a MAC/RBAC workstation, AppArmor and PaX are generally considered easier +to learn than SELinux. Furthermore, on a workstation, where there are few or +no externally listening daemons, and where user-run applications pose the +highest risk, GrSecurity/PaX will _probably_ offer more security benefits than +SELinux. 
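To see which MAC implementation (if any) a running system actually has active, you can poke at sysfs/securityfs directly; a rough sketch, assuming a reasonably recent kernel (where installed, `sestatus` or `aa-status` give much richer detail):

```shell
# Report the active Mandatory Access Control implementation, if any
mac="none detected"
if [ -d /sys/fs/selinux ]; then
    # /sys/fs/selinux/enforce reads 1 when enforcing, 0 when permissive
    mac="SELinux (enforce=$(cat /sys/fs/selinux/enforce 2>/dev/null || echo '?'))"
elif [ -d /sys/kernel/security/apparmor ]; then
    mac="AppArmor"
fi
echo "MAC implementation: $mac"
```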
+
+#### Distro security bulletins
+
+Most of the widely used distributions have a mechanism to deliver security
+bulletins to their users, but if you are fond of something esoteric, check
+whether the developers have a documented mechanism of alerting the users about
+security vulnerabilities and patches. Absence of such a mechanism is a major
+warning sign that the distribution is not mature enough to be considered for a
+primary admin workstation.
+
+#### Timely and trusted security updates
+
+Most of the widely used distributions deliver regular security updates, but it
+is worth checking to ensure that critical package updates are provided in a
+timely fashion. Avoid using spin-offs and "community rebuilds" for this
+reason, as they routinely delay security updates due to having to wait for the
+upstream distribution to release it first.
+
+You'll be hard-pressed to find a distribution that does not use cryptographic
+signatures on packages, updates metadata, or both. That being said, fairly
+widely used distributions have been known to go for years before introducing
+this basic security measure (Arch, I'm looking at you), so this is a thing
+worth checking.
+
+#### Distros supporting UEFI and SecureBoot
+
+Check that the distribution supports UEFI and SecureBoot. Find out whether it
+requires importing an extra key or whether it signs its boot kernels with a key
+already trusted by systems manufacturers (e.g. via an agreement with
+Microsoft). Some distributions do not support UEFI/SecureBoot but offer
+alternatives to ensure tamper-proof or tamper-evident boot environments
+([Qubes-OS][3] uses Anti Evil Maid, mentioned earlier). If a distribution
+doesn't support SecureBoot and has no mechanisms to prevent boot-level attacks,
+look elsewhere.
+
+#### Full disk encryption
+
+Full disk encryption is a requirement for securing data at rest, and is
+supported by most distributions.
As an alternative, systems with +self-encrypting hard drives may be used (normally implemented via the on-board +TPM chip) and offer comparable levels of security plus faster operation, but at +a considerably higher cost. + +## Distro installation guidelines + +All distributions are different, but here are general guidelines: + +### Checklist + +- [ ] Use full disk encryption (LUKS) with a robust passphrase _(CRITICAL)_ +- [ ] Make sure swap is also encrypted _(CRITICAL)_ +- [ ] Require a password to edit bootloader (can be same as LUKS) _(CRITICAL)_ +- [ ] Set up a robust root password (can be same as LUKS) _(CRITICAL)_ +- [ ] Use an unprivileged account, part of administrators group _(CRITICAL)_ +- [ ] Set up a robust user-account password, different from root _(CRITICAL)_ + +### Considerations + +#### Full disk encryption + +Unless you are using self-encrypting hard drives, it is important to configure +your installer to fully encrypt all the disks that will be used for storing +your data and your system files. It is not sufficient to simply encrypt the +user directory via auto-mounting cryptfs loop files (I'm looking at you, older +versions of Ubuntu), as this offers no protection for system binaries or swap, +which is likely to contain a slew of sensitive data. The recommended +encryption strategy is to encrypt the LVM device, so only one passphrase is +required during the boot process. + +The `/boot` partition will always remain unencrypted, as the bootloader needs +to be able to actually boot the kernel before invoking LUKS/dm-crypt. The +kernel image itself should be protected against tampering with a cryptographic +signature checked by SecureBoot. + +In other words, `/boot` should always be the only unencrypted partition on your +system. + +#### Choosing good passphrases + +Modern Linux systems have no limitation of password/passphrase length, so the +only real limitation is your level of paranoia and your stubbornness. 
If you
+boot your system a lot, you will probably have to type at least two different
+passwords: one to unlock LUKS, and another one to log in, so having long
+passphrases will probably get old really fast. Pick passphrases that are 2-3
+words long, easy to type, and preferably from rich/mixed vocabularies.
+
+Examples of good passphrases (yes, you can use spaces):
+- nature abhors roombas
+- 12 in-flight Jebediahs
+- perdon, tengo flatulence
+
+You can also stick with non-vocabulary passwords that are at least 10-12
+characters long, if you prefer that to typing passphrases.
+
+Unless you have concerns about physical security, it is fine to write down your
+passphrases and keep them in a safe place away from your work desk.
+
+#### Root, user passwords and the admin group
+
+We recommend that you use the same passphrase for your root password as you
+use for your LUKS encryption (unless you share your laptop with other trusted
+people who should be able to unlock the drives, but shouldn't be able to
+become root). If you are the sole user of the laptop, then having your root
+password be different from your LUKS password has no meaningful security
+advantages. Generally, you can use the same passphrase for your UEFI
+administration, disk encryption, and root account -- knowing any of these will
+give an attacker full control of your system anyway, so there is little
+security benefit to have them be different on a single-user workstation.
+
+You should have a different, but equally strong password for your regular user
+account that you will be using for day-to-day tasks. This user should be a
+member of the admin group (e.g. `wheel` or similar, depending on the
+distribution), allowing you to perform `sudo` to elevate privileges.
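If you would rather not invent passphrases of the kind described above by hand, pulling random words from a word list gets you most of the way there. A minimal sketch -- the `/usr/share/dict/words` path is an assumption (it is shipped by the `words` package on many distributions), hence the tiny inline fallback:

```shell
# Generate a 3-word candidate passphrase from a word list
wordlist=/usr/share/dict/words
if [ -r "$wordlist" ]; then
    passphrase=$(shuf -n3 "$wordlist" | tr '\n' ' ')
else
    # Inline list purely for illustration -- use a real word list
    passphrase=$(printf 'nature\nabhors\nroombas\njebediahs\nperdon\nflatulence\n' \
        | shuf -n3 | tr '\n' ' ')
fi
echo "candidate passphrase: $passphrase"
```

Discard any result that is awkward to type and generate another one.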
+
+In other words, if you are the sole user on your workstation, you should have 2
+distinct, robust, equally strong passphrases you will need to remember:
+
+**Admin-level**, used in the following locations:
+
+- UEFI administration
+- Bootloader (GRUB)
+- Disk encryption (LUKS)
+- Workstation admin (root user)
+
+**User-level**, used for the following:
+
+- User account and sudo
+- Master password for the password manager
+
+All of them, obviously, can be different if there is a compelling reason.
+
+## Post-installation hardening
+
+Post-installation security hardening will depend greatly on your distribution
+of choice, so it is futile to provide detailed instructions in a general
+document such as this one. However, here are some steps you should take:
+
+### Checklist
+
+- [ ] Globally disable firewire and thunderbolt modules _(CRITICAL)_
+- [ ] Check your firewalls to ensure all incoming ports are filtered _(CRITICAL)_
+- [ ] Make sure root mail is forwarded to an account you check _(CRITICAL)_
+- [ ] Check to ensure sshd service is disabled by default _(MODERATE)_
+- [ ] Set up an automatic OS update schedule, or update reminders _(MODERATE)_
+- [ ] Configure the screensaver to auto-lock after a period of inactivity _(MODERATE)_
+- [ ] Set up logwatch _(MODERATE)_
+- [ ] Install and use rkhunter _(LOW)_
+- [ ] Install an Intrusion Detection System _(PARANOID)_
+
+### Considerations
+
+#### Blacklisting modules
+
+To blacklist the firewire and thunderbolt modules, add the following lines to
+the file `/etc/modprobe.d/blacklist-dma.conf`:
+
+    blacklist firewire-core
+    blacklist thunderbolt
+
+The modules will be blacklisted upon reboot. It doesn't hurt to do this even if
+you don't have these ports (but it doesn't do anything either).
+
+#### Root mail
+
+By default, root mail is just saved on the system and tends to never be read.
+Make sure you set your `/etc/aliases` to forward root mail to a mailbox that +you actually read, otherwise you may miss important system notifications and +reports: + + # Person who should get root's mail + root: bob@example.com + +Run `newaliases` after this edit and test it out to make sure that it actually +gets delivered, as some email providers will reject email coming in from +nonexistent or non-routable domain names. If that is the case, you will need to +play with your mail forwarding configuration until this actually works. + +#### Firewalls, sshd, and listening daemons + +The default firewall settings will depend on your distribution, but many of +them will allow incoming `sshd` ports. Unless you have a compelling legitimate +reason to allow incoming ssh, you should filter that out and disable the `sshd` +daemon. + + systemctl disable sshd.service + systemctl stop sshd.service + +You can always start it temporarily if you need to use it. + +In general, your system shouldn't have any listening ports apart from +responding to ping. This will help safeguard you against network-level 0-day +exploits. + +#### Automatic updates or notifications + +It is recommended to turn on automatic updates, unless you have a very good +reason not to do so, such as fear that an automatic update would render your +system unusable (it's happened in the past, so this fear is not unfounded). At +the very least, you should enable automatic notifications of available updates. +Most distributions already have this service automatically running for you, so +chances are you don't have to do anything. Consult your distribution +documentation to find out more. + +You should apply all outstanding errata as soon as possible, even if something +isn't specifically labeled as "security update" or has an associated CVE code. +All bugs have the potential of being security bugs and erring on the side of +newer, unknown bugs is _generally_ a safer strategy than sticking with old, +known ones. 
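The point above about listening ports can be spot-checked without any extra tools by reading the kernel's socket tables directly; for day-to-day use, `ss -tlnp` is friendlier since it names the owning processes. A sketch:

```shell
# Count TCP sockets in LISTEN state (state code 0A in /proc/net/tcp*)
listeners=$(awk '$4 == "0A"' /proc/net/tcp /proc/net/tcp6 2>/dev/null \
    | wc -l | tr -d ' ')
echo "TCP sockets in LISTEN state: $listeners"
if [ "$listeners" -eq 0 ]; then
    echo "no listening TCP ports (good)"
fi
```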
+
+#### Watching logs
+
+You should have a keen interest in what happens on your system. For this
+reason, you should install `logwatch` and configure it to send nightly activity
+reports of everything that happens on your system. This won't prevent a
+dedicated attacker, but is a good safety-net feature to have in place.
+
+Note that many systemd distros will no longer automatically install a syslog
+server that `logwatch` needs (due to systemd relying on its own journal), so
+you will need to install and enable `rsyslog` to make sure your `/var/log` is
+not empty before logwatch will be of any use.
+
+#### Rkhunter and IDS
+
+Installing `rkhunter` and an intrusion detection system (IDS) like `aide` or
+`tripwire` will not be that useful unless you actually understand how they work
+and take the necessary steps to set them up properly (such as keeping the
+databases on external media, running checks from a trusted environment,
+remembering to refresh the hash databases after performing system updates and
+configuration changes, etc). If you are not willing to take these steps and
+adjust how you do things on your own workstation, these tools will introduce
+hassle without any tangible security benefit.
+
+We do recommend that you install `rkhunter` and run it nightly. It's fairly
+easy to learn and use, and though it will not deter a sophisticated attacker,
+it may help you catch your own mistakes.
+
+## Personal workstation backups
+
+Workstation backups tend to be overlooked or done in a haphazard, often unsafe
+manner.
+
+### Checklist
+
+- [ ] Set up encrypted workstation backups to external storage _(CRITICAL)_
+- [ ] Use zero-knowledge backup tools for cloud backups _(MODERATE)_
+
+### Considerations
+
+#### Full encrypted backups to external storage
+
+It is handy to have an external hard drive where one can dump full backups
+without having to worry about such things as bandwidth and upstream speeds
+(in this day and age most providers still offer dramatically asymmetric
+upload/download speeds). Needless to say, this hard drive needs to itself be
+encrypted (again, via LUKS), or you should use a backup tool that creates
+encrypted backups, such as `duplicity` or its GUI companion, `deja-dup`. I
+recommend using the latter with a good randomly generated passphrase, stored in
+your password manager. If you travel with your laptop, leave this drive at home
+to have something to come back to in case your laptop is lost or stolen.
+
+In addition to your home directory, you should also back up `/etc` and
+`/var/log` for various forensic purposes.
+
+Above all, avoid copying your home directory onto any unencrypted storage, even
+as a quick way to move your files around between systems, as you will most
+certainly forget to erase it once you're done, exposing potentially private or
+otherwise security-sensitive data to snooping hands -- especially if you keep
+that storage media in the same bag with your laptop.
+
+#### Selective zero-knowledge backups off-site
+
+Off-site backups are also extremely important and can be done either to your
+employer, if they offer space for it, or to a cloud provider. You can set up a
+separate duplicity/deja-dup profile to only include the most important files in
+order to avoid transferring huge amounts of data that you don't really care to
+back up off-site (internet cache, music, downloads, etc).
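A selective off-site profile with `duplicity` can be as simple as an exclude list and a target URL. This is a sketch only -- the source path, target URL, and passphrase below are placeholders to replace with your own (duplicity reads the symmetric GnuPG passphrase from the `PASSPHRASE` environment variable):

```shell
# Hypothetical selective backup: adjust SRC and TARGET to your own layout
SRC=/home/user
TARGET=file:///media/backup/home    # could also be an sftp:// or s3:// URL

if command -v duplicity >/dev/null 2>&1 && [ -d /media/backup ]; then
    PASSPHRASE='from-your-password-manager' duplicity \
        --exclude "$SRC/Downloads" \
        --exclude "$SRC/.cache" \
        "$SRC" "$TARGET"
else
    echo "duplicity or backup target not available -- dry sketch only"
fi
```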
+ +Alternatively, you can use a zero-knowledge backup tool, such as +[SpiderOak][5], which offers an excellent Linux GUI tool and has additional +useful features such as synchronizing content between multiple systems and +platforms. + +## Best practices + +What follows is a curated list of best practices that we think you should +adopt. It is most certainly non-exhaustive, but rather attempts to offer +practical advice that strikes a workable balance between security and overall +usability. + +### Browsing + +There is no question that the web browser will be the piece of software with +the largest and the most exposed attack surface on your system. It is a tool +written specifically to download and execute untrusted, frequently hostile +code. It attempts to shield you from this danger by employing multiple +mechanisms such as sandboxes and code sanitization, but they have all been +previously defeated on multiple occasions. You should learn to approach +browsing websites as the most insecure activity you'll engage in on any given +day. + +There are several ways you can reduce the impact of a compromised browser, but +the truly effective ways will require significant changes in the way you +operate your workstation. + +#### 1: Use two different browsers + +This is the easiest to do, but only offers minor security benefits. Not all +browser compromises give an attacker full unfettered access to your system -- +sometimes they are limited to allowing one to read local browser storage, +steal active sessions from other tabs, capture input entered into the browser, +etc. Using two different browsers, one for work/high security sites, and +another for everything else will help prevent minor compromises from giving +attackers access to the whole cookie jar. The main inconvenience will be the +amount of memory consumed by two different browser processes. 
+

Here's what we recommend:

##### Firefox for work and high security sites

Use Firefox to access work-related sites, where extra care should be taken to
ensure that data like cookies, sessions, login information, keystrokes, etc,
should most definitely not fall into attackers' hands. You should NOT use
this browser for accessing any other sites except a select few.

You should install the following Firefox add-ons:

- [ ] NoScript _(CRITICAL)_
  - NoScript prevents active content from loading, except from user
    whitelisted domains. It is a great hassle to use with your default browser
    (though it offers really good security benefits), so we recommend only
    enabling it on the browser you use to access work-related sites.

- [ ] Privacy Badger _(CRITICAL)_
  - EFF's Privacy Badger will prevent most external trackers and ad platforms
    from being loaded, which will help avoid compromises on these tracking
    sites from affecting your browser (trackers and ad sites are very commonly
    targeted by attackers, as they allow rapid infection of thousands of
    systems worldwide).

- [ ] HTTPS Everywhere _(CRITICAL)_
  - This EFF-developed add-on will ensure that most of your sites are accessed
    over a secure connection, even if a link you click is using http:// (great
    to avoid a number of attacks, such as [SSL-strip][7]).

- [ ] Certificate Patrol _(MODERATE)_
  - This tool will alert you if the site you're accessing has recently changed
    their TLS certificates -- especially if it wasn't nearing expiration dates
    or if it is now using a different certification authority. It helps
    alert you if someone is trying to man-in-the-middle your connection,
    but generates a lot of benign false-positives.

You should leave Firefox as your default browser for opening links, as
NoScript will prevent most active content from loading or executing.
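The add-on list above can be complemented with profile-level preferences. As a sketch (these are real Firefox preference names, but the chosen values are only suggestions), a `user.js` file placed in the work profile's directory re-applies its settings on every browser start:

```js
// user.js -- hypothetical hardening fragment for the "work" profile;
// values are suggestions, adjust to your site requirements
user_pref("network.cookie.cookieBehavior", 1);       // refuse third-party cookies
user_pref("privacy.donottrackheader.enabled", true); // send the DNT header
user_pref("browser.formfill.enable", false);         // no form autofill on work sites
```

Because `user.js` is re-read at startup, a compromised or careless in-browser settings change does not silently persist across restarts.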
+

##### Chrome/Chromium for everything else

Chromium developers are ahead of Firefox in adding a lot of nice security
features (at least [on Linux][6]), such as seccomp sandboxes, kernel user
namespaces, etc, which act as an added layer of isolation between the sites
you visit and the rest of your system. Chromium is the upstream open-source
project, and Chrome is Google's proprietary binary build based on it (insert
the usual paranoid caution about not using it for anything you don't want
Google to know about).

It is recommended that you install **Privacy Badger** and **HTTPS Everywhere**
extensions in Chrome as well and give it a distinct theme from Firefox to
indicate that this is your "untrusted sites" browser.

#### 2: Use two different browsers, one inside a dedicated VM

This is a similar recommendation to the above, except you will add an extra
step of running Chrome inside a dedicated VM that you access via a fast
protocol, allowing you to share clipboards and forward sound events (e.g.
Spice or RDP). This will add an excellent layer of isolation between the
untrusted browser and the rest of your work environment, ensuring that
attackers who manage to fully compromise your browser will then have to
additionally break out of the VM isolation layer in order to get to the rest
of your system.

This is a surprisingly workable configuration, but requires a lot of RAM and
fast processors that can handle the increased load. It will also require a
significant amount of dedication on the part of the admin who will need to
adjust their work practices accordingly.

#### 3: Fully separate your work and play environments via virtualization

See the [Qubes-OS project][3], which strives to provide a high-security
workstation environment via compartmentalizing your applications into separate
fully isolated VMs.
+

### Password managers

#### Checklist

- [ ] Use a password manager _(CRITICAL)_
- [ ] Use unique passwords on unrelated sites _(CRITICAL)_
- [ ] Use a password manager that supports team sharing _(MODERATE)_
- [ ] Use a separate password manager for non-website accounts _(PARANOID)_

#### Considerations

Using good, unique passwords should be a critical requirement for every member
of your team. Credential theft is happening all the time -- either via
compromised computers, stolen database dumps, remote site exploits, or any
number of other means. No credentials should ever be reused across sites,
especially for critical applications.

##### In-browser password manager

Every browser has a mechanism for saving passwords that is fairly secure and
can sync with vendor-maintained cloud storage while keeping the data encrypted
with a user-provided passphrase. However, this mechanism has important
disadvantages:

1. It does not work across browsers
2. It does not offer any way of sharing credentials with team members

There are several well-supported, free-or-cheap password managers that are
well-integrated into multiple browsers, work across platforms, and offer
group sharing (usually as a paid service). Solutions can be easily found via
search engines.

##### Standalone password manager

One of the major drawbacks of any password manager that comes integrated with
the browser is the fact that it's part of the application that is most likely
to be attacked by intruders. If this makes you uncomfortable (and it should),
you may choose to have two different password managers -- one for websites
that is integrated into your browser, and one that runs as a standalone
application. The latter can be used to store high-risk credentials such as
root passwords, database passwords, other shell account credentials, etc.
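Whichever manager you pick, feed it machine-generated secrets rather than human-invented ones. A minimal sketch using only the kernel's CSPRNG (the length and character set here are arbitrary choices, not a standard):

```shell
# Draw a random alphanumeric secret from /dev/urandom.
# 28 characters of [A-Za-z0-9] is roughly 166 bits of entropy.
gen_secret() {
    LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "${1:-28}"
    echo    # trailing newline for terminal use
}

gen_secret 28
```

Paste the result straight into the password manager entry; there is no reason for a stored credential to ever be memorable.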
+

It may be particularly useful to have such a tool for sharing superuser account
credentials with other members of your team (server root passwords, ILO
passwords, database admin passwords, bootloader passwords, etc).

A few tools can help you:

- [KeePassX][8], which improves team sharing in version 2
- [Pass][9], which uses text files and PGP and integrates with git
- [Django-Pstore][10], which uses GPG to share credentials between admins
- [Hiera-Eyaml][11], which, if you are already using Puppet for your
  infrastructure, may be a handy way to track your server/service credentials
  as part of your encrypted Hiera data store

### Securing SSH and PGP private keys

Personal encryption keys, including SSH and PGP private keys, are going to be
the most prized items on your workstation -- something the attackers will be
most interested in obtaining, as that would allow them to further attack your
infrastructure or impersonate you to other admins. You should take extra steps
to ensure that your private keys are well protected against theft.

#### Checklist

- [ ] Strong passphrases are used to protect private keys _(CRITICAL)_
- [ ] PGP Master key is stored on removable storage _(MODERATE)_
- [ ] Auth, Sign and Encrypt Subkeys are stored on a smartcard device _(MODERATE)_
- [ ] SSH is configured to use PGP Auth key as ssh private key _(MODERATE)_

#### Considerations

The best way to prevent private key theft is to use a smartcard to store your
encryption private keys and never copy them onto the workstation. There are
several manufacturers that offer OpenPGP capable devices:

- [Kernel Concepts][12], where you can purchase both the OpenPGP compatible
  smartcards and the USB readers, should you need one.
- [Yubikey NEO][13], which offers OpenPGP smartcard functionality in addition
  to many other cool features (U2F, PIV, HOTP, etc).
+

It is also important to make sure that the master PGP key is not stored on the
main workstation, and only subkeys are used. The master key will only be
needed when signing someone else's keys or creating new subkeys -- operations
which do not happen very frequently. You may follow [the Debian subkeys][14]
guide to learn how to move your master key to removable storage and how to
create subkeys.

You should then configure your gnupg agent to act as ssh agent and use the
smartcard-based PGP Auth key to act as your ssh private key. We publish a
[detailed guide][15] on how to do that using either a smartcard reader or a
Yubikey NEO.

If you are not willing to go that far, at least make sure you have a strong
passphrase on both your PGP private key and your SSH private key, which will
make it harder for attackers to steal and use them.

### SELinux on the workstation

If you are using a distribution that comes bundled with SELinux (such as
Fedora), here are some recommendations on how to make the best use of it to
maximize your workstation security.

#### Checklist

- [ ] Make sure SELinux is enforcing on your workstation _(CRITICAL)_
- [ ] Never blindly run `audit2allow -M`, always check _(CRITICAL)_
- [ ] Never `setenforce 0` _(MODERATE)_
- [ ] Switch your account to SELinux user `staff_u` _(MODERATE)_

#### Considerations

SELinux is a Mandatory Access Controls (MAC) extension to core POSIX
permissions functionality. It is mature, robust, and has come a long way since
its initial roll-out. Regardless, many sysadmins to this day repeat the
outdated mantra of "just turn it off."

That being said, SELinux will have limited security benefits on the
workstation, as most applications you will be running as a user are going to
be running unconfined.
It does provide enough net benefit to warrant leaving
it on, as it will likely help prevent an attacker from escalating privileges
to gain root-level access via a vulnerable daemon service.

Our recommendation is to leave it on and enforcing.

##### Never `setenforce 0`

It's tempting to use `setenforce 0` to flip SELinux into permissive mode
on a temporary basis, but you should avoid doing that. This essentially turns
off SELinux for the entire system, while what you really want is to
troubleshoot a particular application or daemon.

Instead of `setenforce 0` you should be using `semanage permissive -a
[somedomain_t]` to put only that domain into permissive mode. First, find out
which domain is causing trouble by running `ausearch`:

    ausearch -ts recent -m avc

and then look for the `scontext=` (source SELinux context) line, like so:

    scontext=staff_u:staff_r:gpg_pinentry_t:s0-s0:c0.c1023
                             ^^^^^^^^^^^^^^

This tells you that the domain being denied is `gpg_pinentry_t`, so if you
want to troubleshoot the application, you should add it to permissive domains:

    semanage permissive -a gpg_pinentry_t

This will allow you to use the application and collect the rest of the AVCs,
which you can then use in conjunction with `audit2allow` to write a local
policy. Once that is done and you see no new AVC denials, you can remove that
domain from permissive by running:

    semanage permissive -d gpg_pinentry_t

##### Use your workstation as SELinux role staff_r

SELinux comes with a native implementation of roles that prohibit or grant
certain privileges based on the role associated with the user account. As an
administrator, you should be using the `staff_r` role, which will restrict
access to many configuration and other security-sensitive files, unless you
first perform `sudo`.
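Before making a change like the role switch described next, it helps to be fluent at reading AVC denials; pulling the denied source domain out of raw records is easily scripted. A small sketch (the sample record is abbreviated and hypothetical):

```shell
# Extract the source SELinux domain (third field of scontext=) from AVC records.
avc_domains() {
    sed -n 's/.*scontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p' | sort -u
}

# Example with a canned, abbreviated record; in real use you would pipe in:
#   ausearch -ts recent -m avc | avc_domains
echo 'avc: denied { read } scontext=staff_u:staff_r:gpg_pinentry_t:s0-s0:c0.c1023 tcontext=...' \
    | avc_domains
```

Run against a real `ausearch` dump, this gives a deduplicated list of domains (here it prints `gpg_pinentry_t`), which is exactly the list of candidates for `semanage permissive -a`.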
+ +By default, accounts are created as `unconfined_r` and most applications you +execute will run unconfined, without any (or with only very few) SELinux +constraints. To switch your account to the `staff_r` role, run the following +command: + + usermod -Z staff_u [username] + +You should log out and log back in to enable the new role, at which point if +you run `id -Z`, you'll see: + + staff_u:staff_r:staff_t:s0-s0:c0.c1023 + +When performing `sudo`, you should remember to add an extra flag to tell +SELinux to transition to the "sysadmin" role. The command you want is: + + sudo -i -r sysadm_r + +At which point `id -Z` will show: + + staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023 + +**WARNING**: you should be comfortable using `ausearch` and `audit2allow` +before you make this switch, as it's possible some of your applications will +no longer work when you're running as role `staff_r`. At the time of writing, +the following popular applications are known to not work under `staff_r` +without policy tweaks: + +- Chrome/Chromium +- Skype +- VirtualBox + +To switch back to `unconfined_r`, run the following command: + + usermod -Z unconfined_u [username] + +and then log out and back in to get back into the comfort zone. + +## Further reading + +The world of IT security is a rabbit hole with no bottom. 
If you would like to +go deeper, or find out more about security features on your particular +distribution, please check out the following links: + +- [Fedora Security Guide](https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html) +- [CESG Ubuntu Security Guide](https://www.gov.uk/government/publications/end-user-devices-security-guidance-ubuntu-1404-lts) +- [Debian Security Manual](https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html) +- [Arch Linux Security Wiki](https://wiki.archlinux.org/index.php/Security) +- [Mac OSX Security](https://www.apple.com/support/security/guides/) + +## License +This work is licensed under a +[Creative Commons Attribution-ShareAlike 4.0 International License][0]. + +-------------------------------------------------------------------------------- + +via: https://github.com/lfit/itpol/blob/master/linux-workstation-security.md#linux-workstation-security-checklist + +作者:[mricon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://github.com/mricon +[0]: http://creativecommons.org/licenses/by-sa/4.0/ +[1]: https://github.com/QubesOS/qubes-antievilmaid +[2]: https://en.wikipedia.org/wiki/IEEE_1394#Security_issues +[3]: https://qubes-os.org/ +[4]: https://xkcd.com/936/ +[5]: https://spideroak.com/ +[6]: https://code.google.com/p/chromium/wiki/LinuxSandboxing +[7]: http://www.thoughtcrime.org/software/sslstrip/ +[8]: https://keepassx.org/ +[9]: http://www.passwordstore.org/ +[10]: https://pypi.python.org/pypi/django-pstore +[11]: https://github.com/TomPoulton/hiera-eyaml +[12]: http://shop.kernelconcepts.de/ +[13]: https://www.yubico.com/products/yubikey-hardware/yubikey-neo/ +[14]: https://wiki.debian.org/Subkeys +[15]: https://github.com/lfit/ssh-gpg-smartcard-config \ No newline at end of file From 5f120e9c132a7436091b845a1bd002b13d742759 Mon Sep 17 00:00:00 2001 
From: DeadFire Date: Mon, 31 Aug 2015 17:39:53 +0800 Subject: [PATCH 383/697] =?UTF-8?q?20150831-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...orkManager to systemd-networkd on Linux.md | 165 ++++++++++++++++++ 1 file changed, 165 insertions(+) create mode 100644 sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md diff --git a/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md b/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md new file mode 100644 index 0000000000..2f2405043c --- /dev/null +++ b/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md @@ -0,0 +1,165 @@ +How to switch from NetworkManager to systemd-networkd on Linux +================================================================================ +In the world of Linux, adoption of [systemd][1] has been a subject of heated controversy, and the debate between its proponents and critics is still going on. As of today, most major Linux distributions have adopted systemd as a default init system. + +Billed as a "never finished, never complete, but tracking progress of technology" by its author, systemd is not just the init daemon, but is designed as a more broad system and service management platform which encompasses the growing ecosystem of core system daemons, libraries and utilities. + +One of many additions to **systemd** is **systemd-networkd**, which is responsible for network configuration within the systemd ecosystem. Using systemd-networkd, you can configure basic DHCP/static IP networking for network devices. It can also configure virtual networking features such as bridges, tunnels or VLANs. Wireless networking is not directly handled by systemd-networkd, but you can use wpa_supplicant service to configure wireless adapters, and then hook it up with **systemd-networkd**. 
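As a sketch of that wireless hookup (the interface name `wlp2s0`, the SSID and the passphrase are placeholders): wpa_supplicant's per-interface service handles association, while a matching `.network` file lets systemd-networkd do the addressing.

```ini
# /etc/wpa_supplicant/wpa_supplicant-wlp2s0.conf -- association (placeholders)
network={
    ssid="ExampleSSID"
    psk="example-passphrase"
}

# /etc/systemd/network/25-wireless.network -- addressing via systemd-networkd
[Match]
Name=wlp2s0

[Network]
DHCP=yes
```

On distributions that ship the `wpa_supplicant@.service` template unit, `sudo systemctl enable wpa_supplicant@wlp2s0` brings the interface up at boot, and systemd-networkd then runs DHCP over it.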
+

On many Linux distributions, NetworkManager has been and is still used as a default network configuration manager. Compared to NetworkManager, **systemd-networkd** is still under active development, and is missing some features. For example, it does not have NetworkManager's intelligence to keep your computer connected across various interfaces at all times. It does not provide ifup/ifdown hooks for advanced scripting. Yet, systemd-networkd is integrated well with the rest of the systemd components (e.g., **resolved** for DNS, **timesyncd** for NTP, udevd for naming), and the role of **systemd-networkd** may only grow over time in the systemd environment.

If you are happy with the way **systemd** is evolving, one thing you can consider is to switch from NetworkManager to systemd-networkd. If you are feverishly against systemd, and perfectly happy with NetworkManager or [basic network service][2], that is totally cool.

But for those of you who want to try out systemd-networkd, you can read on, and find out in this tutorial how to switch from NetworkManager to systemd-networkd on Linux.

### Requirements ###

systemd-networkd is available in systemd version 210 and higher. Thus distributions like Debian 8 "Jessie" (systemd 215), Fedora 21 (systemd 217), Ubuntu 15.04 (systemd 219) or later are compatible with systemd-networkd.

For other distributions, check the version of your systemd before proceeding.

    $ systemctl --version

### Switch from NetworkManager to systemd-networkd ###

It is relatively straightforward to switch from NetworkManager to systemd-networkd (and vice versa).

First, disable the NetworkManager service, and enable systemd-networkd as follows.

    $ sudo systemctl disable NetworkManager
    $ sudo systemctl enable systemd-networkd

You also need to enable the **systemd-resolved** service, which is used by systemd-networkd for network name resolution. This service implements a caching DNS server.
+

    $ sudo systemctl enable systemd-resolved
    $ sudo systemctl start systemd-resolved

Once started, **systemd-resolved** will create its own resolv.conf somewhere under the /run/systemd directory. However, it is a common practice to store DNS resolver information in /etc/resolv.conf, and many applications still rely on /etc/resolv.conf. Thus for compatibility reasons, create a symlink to /etc/resolv.conf as follows.

    $ sudo rm /etc/resolv.conf
    $ sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf

### Configure Network Connections with Systemd-networkd ###

To configure network devices with systemd-networkd, you must specify configuration information in text files with the .network extension. These network configuration files are then stored and loaded from /etc/systemd/network. When there are multiple files, systemd-networkd loads and processes them one by one in lexical order.

Let's start by creating a folder /etc/systemd/network.

    $ sudo mkdir /etc/systemd/network

#### DHCP Networking ####

Let's configure DHCP networking first. For this, create the following configuration file. The name of a file can be arbitrary, but remember that files are processed in lexical order.

    $ sudo vi /etc/systemd/network/20-dhcp.network

----------

    [Match]
    Name=enp3*

    [Network]
    DHCP=yes

As you can see above, each network configuration file contains one or more "sections" with each section preceded by a [XXX] heading. Each section contains one or more key/value pairs. The [Match] section determines which network device(s) are configured by this configuration file. For example, this file matches any network interface whose name starts with enp3 (e.g., enp3s0, enp3s1, enp3s2, etc). For matched interface(s), it then applies the DHCP network configuration specified under the [Network] section.

#### Static IP Networking ####

If you want to assign a static IP address to a network interface, create the following configuration file.
+

    $ sudo vi /etc/systemd/network/10-static-enp3s0.network

----------

    [Match]
    Name=enp3s0

    [Network]
    Address=192.168.10.50/24
    Gateway=192.168.10.1
    DNS=8.8.8.8

As you can guess, the interface enp3s0 will be assigned an address 192.168.10.50/24, a default gateway 192.168.10.1, and a DNS server 8.8.8.8. One subtlety here is that the interface name enp3s0, in fact, matches the pattern rule defined in the earlier DHCP configuration as well. However, since the file "10-static-enp3s0.network" is processed before "20-dhcp.network" according to lexical order, the static configuration takes priority over the DHCP configuration in the case of the enp3s0 interface.

Once you are done with creating configuration files, restart the systemd-networkd service or reboot.

    $ sudo systemctl restart systemd-networkd

Check the status of the service by running:

    $ systemctl status systemd-networkd
    $ systemctl status systemd-resolved

![](https://farm1.staticflickr.com/719/21010813392_76abe123ed_c.jpg)

### Configure Virtual Network Devices with Systemd-networkd ###

**systemd-networkd** also allows you to configure virtual network devices such as bridges, VLANs, tunnels, VXLAN, bonding, etc. You must configure these virtual devices in files with the .netdev extension.

Here I'll show how to configure a bridge interface.

#### Linux Bridge ####

If you want to create a Linux bridge (br0) and add a physical interface (eth1) to the bridge, create the following configuration.

    $ sudo vi /etc/systemd/network/bridge-br0.netdev

----------

    [NetDev]
    Name=br0
    Kind=bridge

Then configure the bridge interface br0 and the slave interface eth1 using .network files as follows.
+ + $ sudo vi /etc/systemd/network/bridge-br0-slave.network + +---------- + + [Match] + Name=eth1 + + [Network] + Bridge=br0 + +---------- + + $ sudo vi /etc/systemd/network/bridge-br0.network + +---------- + + [Match] + Name=br0 + + [Network] + Address=192.168.10.100/24 + Gateway=192.168.10.1 + DNS=8.8.8.8 + +Finally, restart systemd-networkd: + + $ sudo systemctl restart systemd-networkd + +You can use [brctl tool][3] to verify that a bridge br0 has been created. + +### Summary ### + +When systemd promises to be a system manager for Linux, it is no wonder something like systemd-networkd came into being to manage network configurations. At this stage, however, systemd-networkd seems more suitable for a server environment where network configurations are relatively stable. For desktop/laptop environments which involve various transient wired/wireless interfaces, NetworkManager may still be a preferred choice. + +For those who want to check out more on systemd-networkd, refer to the official [man page][4] for a complete list of supported sections and keys. 
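As one more illustration of the `.netdev`/`.network` split (a sketch only; `vlan10`, VLAN ID 10 and the parent interface name `enp3s0` are example values -- consult the man page for the authoritative key names), a VLAN on top of a physical NIC would look like:

```ini
# /etc/systemd/network/vlan10.netdev -- define the virtual device itself
[NetDev]
Name=vlan10
Kind=vlan

[VLAN]
Id=10

# /etc/systemd/network/enp3s0.network -- attach the VLAN to its parent NIC
[Match]
Name=enp3s0

[Network]
VLAN=vlan10
```

After another `sudo systemctl restart systemd-networkd`, the tagged interface can get its own `.network` file for addressing, just like br0 above.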
+ +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/switch-from-networkmanager-to-systemd-networkd.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/use-systemd-system-administration-debian.html +[2]:http://xmodulo.com/disable-network-manager-linux.html +[3]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html +[4]:http://www.freedesktop.org/software/systemd/man/systemd.network.html \ No newline at end of file From 7fb1753774781e1984c95228eb37bad1bfa664f1 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 31 Aug 2015 21:56:56 +0800 Subject: [PATCH 384/697] PUB:Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux @strugglingyouth --- ...ing Up RAID 10 or 1+0 (Nested) in Linux.md | 275 +++++++++++++++++ ...ing Up RAID 10 or 1+0 (Nested) in Linux.md | 277 ------------------ 2 files changed, 275 insertions(+), 277 deletions(-) create mode 100644 published/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md delete mode 100644 translated/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md diff --git a/published/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md b/published/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md new file mode 100644 index 0000000000..c0b03f3dba --- /dev/null +++ b/published/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md @@ -0,0 +1,275 @@ +在 Linux 下使用 RAID(六):设置 RAID 10 或 1 + 0(嵌套) +================================================================================ + +RAID 10 是组合 RAID 1 和 RAID 0 形成的。要设置 RAID 10,我们至少需要4个磁盘。在之前的文章中,我们已经看到了如何使用最少两个磁盘设置 RAID 1 和 RAID 0。 + +在这里,我们将使用最少4个磁盘组合 RAID 1 和 RAID 0 来设置 RAID 10。假设我们已经在用 RAID 10 创建的逻辑卷保存了一些数据。比如我们要保存数据 “TECMINT”,它将使用以下方法将其保存在4个磁盘中。 + +![Create Raid 10 in 
Linux](http://www.tecmint.com/wp-content/uploads/2014/11/raid10.jpg)

*在 Linux 中创建 Raid 10(LCTT 译注:此图有误,请参照文字说明和本系列第一篇文章)*

RAID 10 是先做镜像,再做条带。因此,在 RAID 1 中,相同的数据将被写入到两个磁盘中,“T”将同时被写入到第一和第二个磁盘中。接着的数据被条带化到另外两个磁盘,“E”将被同时写入到第三和第四个磁盘中。它将继续循环此过程,“C”将同时被写入到第一和第二个磁盘,以此类推。

(LCTT 译注:原文中此处描述混淆有误,已经根据实际情况进行修改。)

现在你已经了解 RAID 10 怎样组合 RAID 1 和 RAID 0 来工作的了。如果我们有4个20 GB 的磁盘,总共为 80 GB,但我们将只能得到40 GB 的容量,另一半的容量在构建 RAID 10 中丢失。

#### RAID 10 的优点和缺点 ####

- 提供更好的性能。
- 在 RAID 10 中我们将失去一半的磁盘容量。
- 读与写的性能都很好,因为它会同时进行写入和读取。
- 它能解决数据库的高 I/O 磁盘写操作。

#### 要求 ####

在 RAID 10 中,我们至少需要4个磁盘,前2个磁盘为 RAID 1,其他2个磁盘为 RAID 0,就像我之前说的,RAID 10 仅仅是组合了 RAID 0和1。如果我们需要扩展 RAID 组,最少需要添加4个磁盘。

**我的服务器设置**

    操作系统 : CentOS 6.5 Final
    IP 地址 : 192.168.0.229
    主机名 : rd10.tecmintlocal.com
    磁盘 1 [20GB] : /dev/sdb
    磁盘 2 [20GB] : /dev/sdc
    磁盘 3 [20GB] : /dev/sdd
    磁盘 4 [20GB] : /dev/sde

有两种方法来设置 RAID 10,在这里两种方法我都会演示,但我更喜欢第一种方法,使用它来设置 RAID 10 更简单。

### 方法1:设置 RAID 10 ###

1、 首先,使用以下命令确认所添加的4块磁盘没有被使用。

    # ls -l /dev | grep sd

2、 四个磁盘被检测后,然后来检查磁盘是否存在 RAID 分区。

    # mdadm -E /dev/sd[b-e]
    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde # 或

![Verify 4 Added Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-4-Added-Disks.png)

*验证添加的4块磁盘*

**注意**: 在上面的输出中,如果没有检测到 super-block 意味着在4块磁盘中没有定义过 RAID。

#### 第1步:为 RAID 分区 ####

3、 现在,使用 `fdisk` 命令为4个磁盘(/dev/sdb, /dev/sdc, /dev/sdd 和 /dev/sde)创建新分区。

    # fdisk /dev/sdb
    # fdisk /dev/sdc
    # fdisk /dev/sdd
    # fdisk /dev/sde

##### 为 /dev/sdb 创建分区 #####

我来告诉你如何使用 fdisk 为磁盘(/dev/sdb)进行分区,此步也适用于其他磁盘。

    # fdisk /dev/sdb

请使用以下步骤为 /dev/sdb 创建一个新的分区。

- 按 `n` 创建新的分区。
- 然后按 `P` 选择主分区。
- 接下来选择分区号为1。
- 只需按两次回车键选择默认值即可。
- 然后,按 `P` 来打印创建好的分区。
- 按 `L`,列出所有可用的类型。
- 按 `t` 去修改分区。
- 键入 `fd` 设置为 Linux 的 RAID 类型,然后按 Enter 确认。
- 然后再次使用`p`查看我们所做的更改。
- 使用`w`保存更改。

![Disk sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-sdb-Partition.png)

*为磁盘 sdb 分区*

**注意**: 请使用上面相同的指令对其他磁盘(sdc, sdd, 
sde)进行分区。

4、 创建好4个分区后,需要使用下面的命令来检查磁盘是否存在 raid。

    # mdadm -E /dev/sd[b-e]
    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde # 或

    # mdadm -E /dev/sd[b-e]1
    # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 # 或

![Check All Disks for Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Check-All-Disks-for-Raid.png)

*检查磁盘*

**注意**: 以上输出显示,新创建的四个分区中没有检测到 super-block,这意味着我们可以继续在这些磁盘上创建 RAID 10。

#### 第2步: 创建 RAID 设备 `md` ####

5、 现在该创建一个`md`(即 /dev/md0)设备了,使用“mdadm” raid 管理工具。在创建设备之前,必须确保系统已经安装了`mdadm`工具,如果没有请使用下面的命令来安装。

    # yum install mdadm		[在 RedHat 系统]
    # apt-get install mdadm	[在 Debian 系统]

`mdadm`工具安装完成后,可以使用下面的命令创建一个`md` raid 设备。

    # mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1

6、 接下来使用`cat`命令验证新创建的 raid 设备。

    # cat /proc/mdstat

![Create md raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-raid-Device.png)

*创建 md RAID 设备*

7、 接下来,使用下面的命令来检查4个磁盘。下面命令的输出会很长,因为它会显示4个磁盘的所有信息。

    # mdadm --examine /dev/sd[b-e]1

8、 接下来,使用以下命令来查看 RAID 阵列的详细信息。

    # mdadm --detail /dev/md0

![Check Raid Array Details](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Array-Details.png)

*查看 RAID 阵列详细信息*

**注意**: 你在上面看到的结果,该 RAID 的状态是 active 和 re-syncing。

#### 第3步:创建文件系统 ####

9、 使用 ext4 作为`md0`的文件系统,并将它挂载到`/mnt/raid10`下。在这里,我用的是 ext4,你可以使用你想要的文件系统类型。

    # mkfs.ext4 /dev/md0

![Create md Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-Filesystem.png)

*创建 md 文件系统*

10、 在创建文件系统后,挂载文件系统到`/mnt/raid10`下,并使用`ls -l`命令列出挂载点下的内容。

    # mkdir /mnt/raid10
    # mount /dev/md0 /mnt/raid10/
    # ls -l /mnt/raid10/

接下来,在挂载点下创建一些文件,并在文件中添加些内容,然后检查内容。

    # touch /mnt/raid10/raid10_files.txt
    # ls -l /mnt/raid10/
    # echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt
    # cat /mnt/raid10/raid10_files.txt

![Mount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-md-Device.png)

*挂载 md 设备*

11、 
要想自动挂载,打开`/etc/fstab`文件并添加下面的条目,挂载点根据你环境的不同来添加。使用 wq! 保存并退出。

    # vim /etc/fstab

    /dev/md0                /mnt/raid10              ext4    defaults        0 0

![AutoMount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/AutoMount-md-Device.png)

*挂载 md 设备*

12、 接下来,在重新启动系统前使用`mount -a`来确认`/etc/fstab`文件是否有错误。

    # mount -av

![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Errors-in-Fstab.png)

*检查 Fstab 中的错误*

#### 第四步:保存 RAID 配置 ####

13、 默认情况下 RAID 没有配置文件,所以我们需要在上述步骤完成后手动保存它。

    # mdadm --detail --scan --verbose >> /etc/mdadm.conf

![Save Raid10 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid10-Configuration.png)

*保存 RAID10 的配置*

就这样,我们使用方法1创建完了 RAID 10,这种方法是比较容易的。现在,让我们使用方法2来设置 RAID 10。

### 方法2:创建 RAID 10 ###

1、 在方法2中,我们必须定义2组 RAID 1,然后我们需要使用这些创建好的 RAID 1 的集合来定义一个 RAID 0。在这里,我们将要做的是先创建2个镜像(RAID1),然后创建 RAID0 (条带化)。

首先,列出所有的可用于创建 RAID 10 的磁盘。

    # ls -l /dev | grep sd

![List 4 Devices](http://www.tecmint.com/wp-content/uploads/2014/11/List-4-Devices.png)

*列出了 4 个设备*

2、 将4个磁盘使用`fdisk`命令进行分区。对于如何分区,您可以按照上面的第1步。

    # fdisk /dev/sdb
    # fdisk /dev/sdc
    # fdisk /dev/sdd
    # fdisk /dev/sde

3、 在完成4个磁盘的分区后,现在检查磁盘是否存在 RAID 块。

    # mdadm --examine /dev/sd[b-e]
    # mdadm --examine /dev/sd[b-e]1

![Examine 4 Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-4-Disks.png)

*检查 4 个磁盘*

#### 第1步:创建 RAID 1 ####

4、 首先,使用4块磁盘创建2组 RAID 1,一组为`sdb1`和`sdc1`,另一组是`sdd1`和`sde1`。

    # mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1
    # mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1
    # cat /proc/mdstat

![Creating Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png)

*创建 RAID 1*

![Check Details of Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png)

*查看 RAID 1 的详细信息*

#### 第2步:创建 RAID 0 ####

5、 接下来,使用 md1 和 md2 来创建 RAID 0。

    # mdadm --create 
/dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2 + # cat /proc/mdstat + +![Creating Raid 0](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-0.png) + +*创建 RAID 0* + +#### 第3步:保存 RAID 配置 #### + +6、 我们需要将配置文件保存在`/etc/mdadm.conf`文件中,使其每次重新启动后都能加载所有的 RAID 设备。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + +在此之后,我们需要按照方法1中的第3步来创建文件系统。 + +就是这样!我们采用的方法2创建完了 RAID 1+0。我们将会失去一半的磁盘空间,但相比其他 RAID ,它的性能将是非常好的。 + +### 结论 ### + +在这里,我们采用两种方法创建 RAID 10。RAID 10 具有良好的性能和冗余性。希望这篇文章可以帮助你了解 RAID 10 嵌套 RAID。在后面的文章中我们会看到如何扩展现有的 RAID 阵列以及更多精彩的内容。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid-10-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ diff --git a/translated/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md b/translated/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md deleted file mode 100644 index 850f6c3e49..0000000000 --- a/translated/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md +++ /dev/null @@ -1,277 +0,0 @@ - -在 Linux 中设置 RAID 10 或 1 + 0(嵌套) - 第6部分 -================================================================================ -RAID 10 是结合 RAID 0 和 RAID 1 形成的。要设置 RAID 10,我们至少需要4个磁盘。在之前的文章中,我们已经看到了如何使用两个磁盘设置 RAID 0 和 RAID 1。 - -在这里,我们将使用最少4个磁盘结合 RAID 0 和 RAID 1 来设置 RAID 10。假设,我们已经在逻辑卷保存了一些数据,这是 RAID 10 创建的,如果我们要保存数据“apple”,它将使用以下方法将其保存在4个磁盘中。 - -![Create Raid 10 in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/raid10.jpg) - -在 Linux 中创建 Raid 10 - -使用 RAID 0 时,它将“A”保存在第一个磁盘,“p”保存在第二个磁盘,下一个“P”又在第一个磁盘,“L”在第二个磁盘。然后,“e”又在第一个磁盘,像这样它会继续循环此过程将数据保存完整。由此我们知道,RAID 0 是将数据的一半保存到第一个磁盘,另一半保存到第二个磁盘。 - -在 RAID 1 方法中,相同的数据将被写入到两个磁盘中。 “A”将同时被写入到第一和第二个磁盘中,“P”也将被同时写入到两个磁盘中,下一个“P”也将同时被写入到两个磁盘。因此,使用 
RAID 1 将同时写入到两个磁盘。它将继续循环此过程。 - -现在大家来了解 RAID 10 怎样结合 RAID 0 和 RAID 1 来工作。如果我们有4个20 GB 的磁盘,总共为 80 GB,但我们将只能得到40 GB 的容量,另一半的容量将用于构建 RAID 10。 - -#### RAID 10 的优点和缺点 #### - -- 提供更好的性能。 -- 在 RAID 10 中我们将失去两个磁盘的容量。 -- 读与写的性能将会很好,因为它会同时进行写入和读取。 -- 它能解决数据库的高 I/O 磁盘写操作。 - -#### 要求 #### - -在 RAID 10 中,我们至少需要4个磁盘,2个磁盘为 RAID 0,其他2个磁盘为 RAID 1,就像我之前说的,RAID 10 仅仅是结合了 RAID 0和1。如果我们需要扩展 RAID 组,最少需要添加4个磁盘。 - -**我的服务器设置** - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.229 - Hostname : rd10.tecmintlocal.com - Disk 1 [20GB] : /dev/sdd - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd - Disk 4 [20GB] : /dev/sde - -有两种方法来设置 RAID 10,在这里两种方法我都会演示,但我更喜欢第一种方法,使用它来设置 RAID 10 更简单。 - -### 方法1:设置 RAID 10 ### - -1.首先,使用以下命令确认所添加的4块磁盘没有被使用。 - - # ls -l /dev | grep sd - -2.四个磁盘被检测后,然后来检查磁盘是否存在 RAID 分区。 - - # mdadm -E /dev/sd[b-e] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde - -![Verify 4 Added Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-4-Added-Disks.png) - -验证添加的4块磁盘 - -**注意**: 在上面的输出中,如果没有检测到 super-block 意味着在4块磁盘中没有定义过 RAID。 - -#### 第1步:为 RAID 分区 #### - -3.现在,使用‘fdisk’,命令为4个磁盘(/dev/sdb, /dev/sdc, /dev/sdd 和 /dev/sde)创建新分区。 - - # fdisk /dev/sdb - # fdisk /dev/sdc - # fdisk /dev/sdd - # fdisk /dev/sde - -**为 /dev/sdb 创建分区** - -我来告诉你如何使用 fdisk 为磁盘(/dev/sdb)进行分区,此步也适用于其他磁盘。 - - # fdisk /dev/sdb - -请使用以下步骤为 /dev/sdb 创建一个新的分区。 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 只需按两次回车键选择默认值即可。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 去修改分区。 -- 键入 ‘fd’ 设置为 Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Disk sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-sdb-Partition.png) - -为磁盘 sdb 分区 - -**注意**: 请使用上面相同的指令对其他磁盘(sdc, sdd sdd sde)进行分区。 - -4.创建好4个分区后,需要使用下面的命令来检查磁盘是否存在 raid。 - - # mdadm -E /dev/sd[b-e] - # mdadm -E /dev/sd[b-e]1 - - 或者 - - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde - # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 - -![Check All Disks for 
Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Check-All-Disks-for-Raid.png) - -检查磁盘 - -**注意**: 以上输出显示,新创建的四个分区中没有检测到 super-block,这意味着我们可以继续在这些磁盘上创建 RAID 10。 - -#### 第2步: 创建 RAID 设备 ‘md’ #### - -5.现在改创建一个‘md’(即 /dev/md0)设备,使用“mdadm” raid 管理工具。在创建设备之前,必须确保系统已经安装了‘mdadm’工具,如果没有请使用下面的命令来安装。 - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -‘mdadm’工具安装完成后,可以使用下面的命令创建一个‘md’ raid 设备。 - - # mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 - -6.接下来使用‘cat’命令验证新创建的 raid 设备。 - - # cat /proc/mdstat - -![Create md raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-raid-Device.png) - -创建 md raid 设备 - -7.接下来,使用下面的命令来检查4个磁盘。下面命令的输出会很长,因为它会显示4个磁盘的所有信息。 - - # mdadm --examine /dev/sd[b-e]1 - -8.接下来,使用以下命令来查看 RAID 阵列的详细信息。 - - # mdadm --detail /dev/md0 - -![Check Raid Array Details](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Array-Details.png) - -查看 Raid 阵列详细信息 - -**注意**: 你在上面看到的结果,该 RAID 的状态是 active 和re-syncing。 - -#### 第3步:创建文件系统 #### - -9.使用 ext4 作为‘md0′的文件系统并将它挂载到‘/mnt/raid10‘下。在这里,我用的是 ext4,你可以使用你想要的文件系统类型。 - - # mkfs.ext4 /dev/md0 - -![Create md Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-Filesystem.png) - -创建 md 文件系统 - -10.在创建文件系统后,挂载文件系统到‘/mnt/raid10‘下,并使用‘ls -l’命令列出挂载点下的内容。 - - # mkdir /mnt/raid10 - # mount /dev/md0 /mnt/raid10/ - # ls -l /mnt/raid10/ - -接下来,在挂载点下创建一些文件,并在文件中添加些内容,然后检查内容。 - - # touch /mnt/raid10/raid10_files.txt - # ls -l /mnt/raid10/ - # echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt - # cat /mnt/raid10/raid10_files.txt - -![Mount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-md-Device.png) - -挂载 md 设备 - -11.要想自动挂载,打开‘/etc/fstab‘文件并添加下面的条目,挂载点根据你环境的不同来添加。使用 wq! 
保存并退出。 - - # vim /etc/fstab - - /dev/md0 /mnt/raid10 ext4 defaults 0 0 - -![AutoMount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/AutoMount-md-Device.png) - -挂载 md 设备 - -12.接下来,在重新启动系统前使用‘mount -a‘来确认‘/etc/fstab‘文件是否有错误。 - - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Errors-in-Fstab.png) - -检查 Fstab 中的错误 - -#### 第四步:保存 RAID 配置 #### - -13.默认情况下 RAID 没有配置文件,所以我们需要在上述步骤完成后手动保存它。 - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid10 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid10-Configuration.png) - -保存 Raid10 的配置 - -就这样,我们使用方法1创建完了 RAID 10,这种方法是比较容易的。现在,让我们使用方法2来设置 RAID 10。 - -### 方法2:创建 RAID 10 ### - -1.在方法2中,我们必须定义2组 RAID 1,然后我们需要使用这些创建好的 RAID 1 的集来定义一个 RAID 0。在这里,我们将要做的是先创建2个镜像(RAID1),然后创建 RAID0 (条带化)。 - -首先,列出所有的可用于创建 RAID 10 的磁盘。 - - # ls -l /dev | grep sd - -![List 4 Devices](http://www.tecmint.com/wp-content/uploads/2014/11/List-4-Devices.png) - -列出了 4 设备 - -2.将4个磁盘使用‘fdisk’命令进行分区。对于如何分区,您可以按照 #步骤 3。 - - # fdisk /dev/sdb - # fdisk /dev/sdc - # fdisk /dev/sdd - # fdisk /dev/sde - -3.在完成4个磁盘的分区后,现在检查磁盘是否存在 RAID块。 - - # mdadm --examine /dev/sd[b-e] - # mdadm --examine /dev/sd[b-e]1 - -![Examine 4 Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-4-Disks.png) - -检查 4 个磁盘 - -#### 第1步:创建 RAID 1 #### - -4.首先,使用4块磁盘创建2组 RAID 1,一组为‘sdb1′和 ‘sdc1′,另一组是‘sdd1′ 和 ‘sde1′。 - - # mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1 - # mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1 - # cat /proc/mdstat - -![Creating Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png) - -创建 Raid 1 - -![Check Details of Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png) - -查看 Raid 1 的详细信息 - -#### 第2步:创建 RAID 0 #### - -5.接下来,使用 md1 和 md2 来创建 RAID 0。 - - # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2 - # cat /proc/mdstat - 
-![Creating Raid 0](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-0.png) - -创建 Raid 0 - -#### 第3步:保存 RAID 配置 #### - -6.我们需要将配置文件保存在‘/etc/mdadm.conf‘文件中,使其每次重新启动后都能加载所有的 raid 设备。 - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -在此之后,我们需要按照方法1中的#第3步来创建文件系统。 - -就是这样!我们采用的方法2创建完了 RAID 1+0.我们将会失去两个磁盘的空间,但相比其他 RAID ,它的性能将是非常好的。 - -### 结论 ### - -在这里,我们采用两种方法创建 RAID 10。RAID 10 具有良好的性能和冗余性。希望这篇文章可以帮助你了解 RAID 10(嵌套 RAID 的级别)。在后面的文章中我们会看到如何扩展现有的 RAID 阵列以及更多精彩的。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid-10-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ From 2fba10b6b89bf9a779b286d6054fbe6beadfc8f0 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Mon, 31 Aug 2015 22:18:14 +0800 Subject: [PATCH 385/697] [Translating] tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md --- ...Boss Data Virtualization GA with OData in Docker Container.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md index 007f16493b..8f5b4a68d3 100644 --- a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md +++ b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md @@ -1,3 +1,4 @@ +ictlyh Translating Howto Run JBoss Data Virtualization GA with OData in Docker Container ================================================================================ Hi everyone, today we'll learn how to run JBoss Data Virtualization 6.0.0.GA with OData in a Docker Container. 
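At its core, everything this guide does ends up as a single `docker run` invocation. As a rough sketch, the shape of that command can be composed as follows — note that the image name, port numbers, and the `run_cmd` variable below are hypothetical placeholders for illustration, not the actual image or commands used later in this article:

```shell
# Sketch: compose a typical detached 'docker run' command that publishes a port.
# "example/jboss-dv-odata" is a made-up placeholder image name, and the command
# is only printed here (actually running it would require a Docker daemon).
image="example/jboss-dv-odata"
host_port=8080
container_port=8080

run_cmd="docker run -d -p ${host_port}:${container_port} ${image}"
echo "$run_cmd"
```

The `-d` flag runs the container in the background and `-p` maps a host port to the container port; the sections below fill in the real image and ports.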
JBoss Data Virtualization is a data supply and integration solution platform that transforms various scattered data from multiple sources, treats them as a single source and delivers the required data into actionable information at business speed to any applications or users. JBoss Data Virtualization can help us easily combine and transform data into reusable business friendly data models and make unified data easily consumable through open standard interfaces. It offers comprehensive data abstraction, federation, integration, transformation, and delivery capabilities to combine data from one or multiple sources into reusable models for agile data utilization and sharing. For more information about JBoss Data Virtualization, we can check out [its official page][1]. Docker is an open source platform to pack, ship and run any application as a lightweight container. Running JBoss Data Virtualization with OData in a Docker container makes it easy for us to handle and launch. From 08b165287dc0e5a2743dba17f50cff9b9760c7fd Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 31 Aug 2015 22:28:34 +0800 Subject: [PATCH 386/697] PUB:Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @strugglingyouth 终于完了,可是目前看起来源站像是太监了,剩下两篇没消息了。。 --- ...Array and Removing Failed Disks in Raid.md | 182 ++++++++++++++++++ ...Array and Removing Failed Disks in Raid.md | 182 ------------------ 2 files changed, 182 insertions(+), 182 deletions(-) create mode 100644 published/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md delete mode 100644 translated/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md diff --git a/published/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md b/published/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md new file mode 100644 index 
0000000000..3376376a2a --- /dev/null +++ b/published/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md @@ -0,0 +1,182 @@ +在 Linux 下使用 RAID(七):在 RAID 中扩展现有的 RAID 阵列和删除故障的磁盘 +================================================================================ + +每个新手都会对阵列(array)这个词所代表的意思产生疑惑。阵列只是磁盘的一个集合。换句话说,我们可以称阵列为一个集合(set)或一组(group)。就像一组鸡蛋中包含6个一样。同样 RAID 阵列中包含着多个磁盘,可能是2,4,6,8,12,16等,希望你现在知道了什么是阵列。 + +在这里,我们将看到如何扩展现有的阵列或 RAID 组。例如,如果我们在阵列中使用2个磁盘形成一个 RAID 1 集合,在某些情况,如果该组中需要更多的空间,就可以使用 mdadm --grow 命令来扩展阵列大小,只需要将一个磁盘加入到现有的阵列中即可。在说完扩展(添加磁盘到现有的阵列中)后,我们将看看如何从阵列中删除故障的磁盘。 + +![Grow Raid Array in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Growing-Raid-Array.jpg) + +*扩展 RAID 阵列和删除故障的磁盘* + +假设某个磁盘出了问题需要删除,但在删除它之前,我们需要先添加一个备用磁盘来扩展该镜像,因为我们需要保存我们的数据。当磁盘发生故障时我们需要从阵列中删除它,这是这个主题中我们将要学习到的。 + +#### 扩展 RAID 的特性 #### + +- 我们可以增加(扩展)任意 RAID 集合的大小。 +- 我们可以在使用新磁盘扩展 RAID 阵列后删除故障的磁盘。 +- 我们可以扩展 RAID 阵列而无需停机。 + +#### 要求 #### + +- 为了扩展一个 RAID 阵列,我们需要一个已有的 RAID 组(阵列)。 +- 我们需要额外的磁盘来扩展阵列。 +- 在这里,我们使用一块磁盘来扩展现有的阵列。 + +在我们了解扩展和恢复阵列前,我们必须了解有关 RAID 级别和设置的基本知识。点击下面的链接了解这些。 + +- [介绍 RAID 的级别和概念][1] +- [使用 mdadm 工具创建软件 RAID 0 (条带化)][2] + +#### 我的服务器设置 #### + + 操作系统 : CentOS 6.5 Final +  IP地址 : 192.168.0.230 +  主机名 : grow.tecmintlocal.com + 2 块现有磁盘 : 1 GB + 1 块额外磁盘 : 1 GB + +在这里,我们已有一个 RAID,有2块磁盘,每个大小为1GB,我们现在再增加一个磁盘到我们现有的 RAID 阵列中,其大小为1GB。 + +### 扩展现有的 RAID 阵列 ### + +1、 在扩展阵列前,首先使用下面的命令列出现有的 RAID 阵列。 + + # mdadm --detail /dev/md0 + +![Check Existing Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Existing-Raid-Array.png) + +*检查现有的 RAID 阵列* + +**注意**: 以上输出显示,已经有了两个磁盘在 RAID 阵列中,级别为 RAID 1。现在我们增加一个磁盘到现有的阵列里。 + +2、 现在让我们添加新的磁盘“sdd”,并使用`fdisk`命令来创建分区。 + + # fdisk /dev/sdd + +请使用以下步骤为 /dev/sdd 创建一个新的分区。 + +- 按 `n` 创建新的分区。 +- 然后按 `P` 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 `P` 来打印创建好的分区。 +- 按 `L`,列出所有可用的类型。 +- 按 `t` 去修改分区。 +- 键入 `fd` 设置为 Linux 的 RAID 类型,然后按回车确认。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 + +![Create New Partition in 
Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Create-New-sdd-Partition.png) + +*为 sdd 创建新的分区* + +3、 一旦新的 sdd 分区创建完成后,你可以使用下面的命令验证它。 + + # ls -l /dev/ | grep sd + +![Confirm sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-sdd-Partition.png) + +*确认 sdd 分区* + +4、 接下来,在添加到阵列前先检查磁盘是否有 RAID 分区。 + + # mdadm --examine /dev/sdd1 + +![Check Raid on sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-sdd-Partition.png) + +*在 sdd 分区中检查 RAID* + +**注意**:以上输出显示,没有在该磁盘上发现 super-block,意味着我们可以将新的磁盘添加到现有阵列。 + +5、 要添加新的分区 /dev/sdd1 到现有的阵列 md0,请使用以下命令。 + + # mdadm --manage /dev/md0 --add /dev/sdd1 + +![Add Disk To Raid-Array](http://www.tecmint.com/wp-content/uploads/2014/11/Add-Disk-To-Raid-Array.png) + +*添加磁盘到 RAID 阵列* + +6、 一旦新的磁盘被添加后,在我们的阵列中检查新添加的磁盘。 + + # mdadm --detail /dev/md0 + +![Confirm Disk Added to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Disk-Added-To-Raid.png) + +*确认将新磁盘添加到 RAID 中* + +**注意**: 在上面的输出中,你可以看到磁盘是作为备用磁盘添加进来的。在这里,我们的阵列中已经有了2个磁盘,但我们期待阵列中有3个磁盘,因此我们需要扩展阵列。 + +7、 要扩展阵列,我们需要使用下面的命令。 + + # mdadm --grow --raid-devices=3 /dev/md0 + +![Grow Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Raid-Array.png) + +*扩展 RAID 阵列* + +现在我们可以看到第三块磁盘(sdd1)已被添加到阵列中,在第三块磁盘被添加后,它将从另外两块磁盘上同步数据。 + + # mdadm --detail /dev/md0 + +![Confirm Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Raid-Array.png) + +*确认 RAID 阵列* + +**注意**: 对于大容量磁盘会需要几个小时来同步数据。在这里,我们使用的是1GB的虚拟磁盘,所以非常快,在几秒钟内便会完成。 + +### 从阵列中删除磁盘 ### + +8、 在数据被从其他两个磁盘同步到新磁盘`sdd1`后,现在三个磁盘中的数据已经相同了(镜像)。 + +正如我前面所说的,假定一个磁盘出问题了需要被删除。所以,现在假设磁盘`sdc1`出问题了,需要从现有阵列中删除。 + +在删除磁盘前我们要将其标记为失效,然后我们才可以将其删除。 + + # mdadm --fail /dev/md0 /dev/sdc1 + # mdadm --detail /dev/md0 + +![Disk Fail in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-Fail-in-Raid-Array.png) + +*在 RAID 阵列中模拟磁盘故障* + +从上面的输出中,我们清楚地看到,磁盘在下面被标记为 faulty。即使它是 faulty 的,我们仍然可以看到 raid 设备有3个,1个损坏了,状态是 degraded。 + +现在我们要从阵列中删除 faulty 的磁盘,raid 
设备将像之前一样继续有2个设备。 + + # mdadm --remove /dev/md0 /dev/sdc1 + +![Remove Disk in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Remove-Disk-in-Raid-Array.png) + +*在 Raid 阵列中删除磁盘* + +9、 一旦故障的磁盘被删除,然后我们只能使用2个磁盘来扩展 raid 阵列了。 + + # mdadm --grow --raid-devices=2 /dev/md0 + # mdadm --detail /dev/md0 + +![Grow Disks in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Disks-in-Raid-Array.png) + +*在 RAID 阵列扩展磁盘* + +从上面的输出中可以看到,我们的阵列中仅有2台设备。如果你需要再次扩展阵列,按照如上所述的同样步骤进行。如果你需要添加一个磁盘作为备用,将其标记为 spare,因此,如果磁盘出现故障时,它会自动顶上去并重建数据。 + +### 结论 ### + +在这篇文章中,我们已经看到了如何扩展现有的 RAID 集合,以及如何在重新同步已有磁盘的数据后从一个阵列中删除故障磁盘。所有这些步骤都可以不用停机来完成。在数据同步期间,系统用户,文件和应用程序不会受到任何影响。 + +在接下来的文章我将告诉你如何管理 RAID,敬请关注更新,不要忘了写评论。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/grow-raid-array-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:https://linux.cn/article-6085-1.html +[2]:https://linux.cn/article-6087-1.html diff --git a/translated/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md b/translated/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md deleted file mode 100644 index 94d18edde2..0000000000 --- a/translated/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md +++ /dev/null @@ -1,182 +0,0 @@ - -在 Raid 中扩展现有的 RAID 阵列和删除故障的磁盘 - 第7部分 -================================================================================ -每个新手都会对阵列的意思产生疑惑。阵列只是磁盘的一个集合。换句话说,我们可以称阵列为一个集合或一组。就像一组鸡蛋中包含6个。同样 RAID 阵列中包含着多个磁盘,可能是2,4,6,8,12,16等,希望你现在知道了什么是阵列。 - -在这里,我们将看到如何扩展现有的阵列或 raid 组。例如,如果我们在一组 raid 中使用2个磁盘形成一个 raid 1,在某些情况,如果该组中需要更多的空间,就可以使用mdadm -grow 
命令来扩展阵列大小,只是将一个磁盘加入到现有的阵列中。在扩展(添加磁盘到现有的阵列中)后,我们将看看如何从阵列中删除故障的磁盘。 - -![Grow Raid Array in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Growing-Raid-Array.jpg) - -扩展 RAID 阵列和删除故障的磁盘 - -假设磁盘中的一个有问题了需要删除该磁盘,但我们需要添加一个备用磁盘来扩展该镜像再删除磁盘前,因为我们需要保存数据。当磁盘发生故障时我们需要从阵列中删除它,这是这个主题中我们将要学习到的。 - -#### 扩展 RAID 的特性 #### - -- 我们可以增加(扩大)所有 RAID 集和的大小。 -- 我们在使用新磁盘扩展 RAID 阵列后删除故障的磁盘。 -- 我们可以扩展 RAID 阵列不存在宕机时间。 - -要求 - -- 为了扩展一个RAID阵列,我们需要已有的 RAID 组(阵列)。 -- 我们需要额外的磁盘来扩展阵列。 -- 在这里,我们使用一块磁盘来扩展现有的阵列。 - -在我们了解扩展和恢复阵列前,我们必须了解有关 RAID 级别和设置的基本知识。点击下面的链接了解这些。 - -- [理解 RAID 的基础概念 – 第一部分][1] -- [在 Linux 中创建软件 Raid 0 – 第二部分][2] - -#### 我的服务器设置 #### - - 操作系统 : CentOS 6.5 Final -  IP地址 : 192.168.0.230 -  主机名 : grow.tecmintlocal.com - 2 块现有磁盘 : 1 GB - 1 块额外磁盘 : 1 GB - -在这里,现有的 RAID 有2块磁盘,每个大小为1GB,我们现在再增加一个磁盘到我们现有的 RAID 阵列中,其大小为1GB。 - -### 扩展现有的 RAID 阵列 ### - -1. 在扩展阵列前,首先使用下面的命令列出现有的 RAID 阵列。 - - # mdadm --detail /dev/md0 - -![Check Existing Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Existing-Raid-Array.png) - -检查现有的 RAID 阵列 - -**注意**: 以上输出显示,已经有了两个磁盘在 RAID 阵列中,级别为 RAID 1。现在我们在这里再增加一个磁盘到现有的阵列。 - -2.现在让我们添加新的磁盘“sdd”,并使用‘fdisk‘命令来创建分区。 - - # fdisk /dev/sdd - -请使用以下步骤为 /dev/sdd 创建一个新的分区。 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 只需按两次回车键选择默认值即可。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 去修改分区。 -- 键入 ‘fd’ 设置为 Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Create-New-sdd-Partition.png) - -为 sdd 创建新的分区 - -3. 一旦新的 sdd 分区创建完成后,你可以使用下面的命令验证它。 - - # ls -l /dev/ | grep sd - -![Confirm sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-sdd-Partition.png) - -确认 sdd 分区 - -4.接下来,在添加到阵列前先检查磁盘是否有 RAID 分区。 - - # mdadm --examine /dev/sdd1 - -![Check Raid on sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-sdd-Partition.png) - -在 sdd 分区中检查 raid - -**注意**:以上输出显示,该盘有没有发现 super-blocks,意味着我们可以将新的磁盘添加到现有阵列。 - -4. 
要添加新的分区 /dev/sdd1 到现有的阵列 md0,请使用以下命令。 - - # mdadm --manage /dev/md0 --add /dev/sdd1 - -![Add Disk To Raid-Array](http://www.tecmint.com/wp-content/uploads/2014/11/Add-Disk-To-Raid-Array.png) - -添加磁盘到 Raid 阵列 - -5. 一旦新的磁盘被添加后,在我们的阵列中检查新添加的磁盘。 - - # mdadm --detail /dev/md0 - -![Confirm Disk Added to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Disk-Added-To-Raid.png) - -确认将新磁盘添加到 Raid 中 - -**注意**: 在上面的输出,你可以看到磁盘已经被添加作为备用的。在这里,我们的阵列中已经有了2个磁盘,但我们期待阵列中有3个磁盘,因此我们需要扩展阵列。 - -6. 要扩展阵列,我们需要使用下面的命令。 - - # mdadm --grow --raid-devices=3 /dev/md0 - -![Grow Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Raid-Array.png) - -扩展 Raid 阵列 - -现在我们可以看到第三块磁盘(sdd1)已被添加到阵列中,在第三块磁盘被添加后,它将从另外两块磁盘上同步数据。 - - # mdadm --detail /dev/md0 - -![Confirm Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Raid-Array.png) - -确认 Raid 阵列 - -**注意**: 对于容量磁盘会需要几个小时来同步数据。在这里,我们使用的是1GB的虚拟磁盘,所以它非常快在几秒钟内便会完成。 - -### 从阵列中删除磁盘 ### - -7. 在数据被从其他两个磁盘同步到新磁盘‘sdd1‘后,现在三个磁盘中的数据已经相同了。 - -正如我前面所说的,假定一个磁盘出问题了需要被删除。所以,现在假设磁盘‘sdc1‘出问题了,需要从现有阵列中删除。 - -在删除磁盘前我们要将其标记为 failed,然后我们才可以将其删除。 - - # mdadm --fail /dev/md0 /dev/sdc1 - # mdadm --detail /dev/md0 - -![Disk Fail in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-Fail-in-Raid-Array.png) - -在 Raid 阵列中模拟磁盘故障 - -从上面的输出中,我们清楚地看到,磁盘在底部被标记为 faulty。即使它是 faulty 的,我们仍然可以看到 raid 设备有3个,1个损坏了 state 是 degraded。 - -现在我们要从阵列中删除 faulty 的磁盘,raid 设备将像之前一样继续有2个设备。 - - # mdadm --remove /dev/md0 /dev/sdc1 - -![Remove Disk in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Remove-Disk-in-Raid-Array.png) - -在 Raid 阵列中删除磁盘 - -8. 
一旦故障的磁盘被删除,然后我们只能使用2个磁盘来扩展 raid 阵列了。 - - # mdadm --grow --raid-devices=2 /dev/md0 - # mdadm --detail /dev/md0 - -![Grow Disks in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Disks-in-Raid-Array.png) - -在 RAID 阵列扩展磁盘 - -从上面的输出中可以看到,我们的阵列中仅有2台设备。如果你需要再次扩展阵列,按照同样的步骤,如上所述。如果你需要添加一个磁盘作为备用,将其标记为 spare,因此,如果磁盘出现故障时,它会自动顶上去并重建数据。 - -### 结论 ### - -在这篇文章中,我们已经看到了如何扩展现有的 RAID 集合,以及如何从一个阵列中删除故障磁盘在重新同步已有磁盘的数据后。所有这些步骤都可以不用停机来完成。在数据同步期间,系统用户,文件和应用程序不会受到任何影响。 - -在接下来的文章我将告诉你如何管理 RAID,敬请关注更新,不要忘了写评论。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/grow-raid-array-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ From fe9588990f741f9a9c40998d27fc12115c490d42 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 31 Aug 2015 23:45:37 +0800 Subject: [PATCH 387/697] PUB:20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @GOLinux 这篇翻译的不错~ --- ...r Easy Partition Resizing and Snapshots.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) rename {translated/tech => published}/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md (75%) diff --git a/translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md b/published/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md similarity index 75% rename from translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md rename to published/20150318 How to Use LVM on Ubuntu for 
Easy Partition Resizing and Snapshots.md index 2e66e27f31..adf9abd11c 100644 --- a/translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md +++ b/published/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md @@ -1,14 +1,14 @@ -Ubuntu上使用LVM轻松调整分区并制作快照 +Ubuntu 上使用 LVM 轻松调整分区并制作快照 ================================================================================ ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035707bbd74.png.pagespeed.ic.9_yebxUF1C.png) -Ubuntu的安装器提供了一个轻松“使用LVM”的复选框。说明中说,它启用了逻辑卷管理,因此你可以制作快照,并更容易地调整硬盘分区大小——这里将为大家讲述如何完成这些操作。 +Ubuntu的安装器提供了一个轻松“使用LVM”的复选框。它的描述中说,启用逻辑卷管理可以让你制作快照,并更容易地调整硬盘分区大小——这里将为大家讲述如何完成这些操作。 -LVM是一种技术,某种程度上和[RAID阵列][1]或[Windows上的存储空间][2]类似。虽然该技术在服务器上更为有用,但是它也可以在桌面端PC上使用。 +LVM是一种技术,某种程度上和[RAID阵列][1]或[Windows上的“存储空间”][2]类似。虽然该技术在服务器上更为有用,但是它也可以在桌面端PC上使用。 ### 你应该在新安装Ubuntu时使用LVM吗? ### -第一个问题是,你是否想要在安装Ubuntu时使用LVM?如果是,那么Ubuntu让这一切变得很简单,只需要轻点鼠标就可以完成,但是该选项默认是不启用的。正如安装器所说的,它允许你调整分区、创建快照、合并多个磁盘到一个逻辑卷等等——所有这一切都可以在系统运行时完成。不同于传统分区,你不需要关掉你的系统,从Live CD或USB驱动,然后[调整这些不使用的分区][3]。 +第一个问题是,你是否想要在安装Ubuntu时使用LVM?如果是,那么Ubuntu让这一切变得很简单,只需要轻点鼠标就可以完成,但是该选项默认是不启用的。正如安装器所说的,它允许你调整分区、创建快照、将多个磁盘合并到一个逻辑卷等等——所有这一切都可以在系统运行时完成。不同于传统分区,你不需要关掉你的系统,从Live CD或USB驱动,然后[当这些分区不使用时才能调整][3]。 完全坦率地说,普通Ubuntu桌面用户可能不会意识到他们是否正在使用LVM。但是,如果你想要在今后做一些更高深的事情,那么LVM就会有所帮助了。LVM可能更复杂,可能会在你今后恢复数据时会导致问题——尤其是在你经验不足时。这里不会有显著的性能损失——LVM是彻底地在Linux内核中实现的。 @@ -18,7 +18,7 @@ LVM是一种技术,某种程度上和[RAID阵列][1]或[Windows上的存储空 前面,我们已经[说明了何谓LVM][4]。概括来讲,它在你的物理磁盘和呈现在你系统中的分区之间提供了一个抽象层。例如,你的计算机可能装有两个硬盘驱动器,它们的大小都是 1 TB。你必须得在这些磁盘上至少分两个区,每个区大小 1 TB。 -LVM就在这些分区上提供了一个抽象层。用于取代磁盘上的传统分区,LVM将在你对这些磁盘初始化后,将它们当作独立的“物理卷”来对待。然后,你就可以基于这些物理卷创建“逻辑卷”。例如,你可以将这两个 1 TB 的磁盘组合成一个 2 TB 的分区,你的系统将只看到一个 2 TB 的卷,而LVM将会在后台处理这一切。一组物理卷以及一组逻辑卷被称之为“卷组”,一个标准的系统只会有一个卷组。 +LVM就在这些分区上提供了一个抽象层。用于取代磁盘上的传统分区,LVM将在你对这些磁盘初始化后,将它们当作独立的“物理卷”来对待。然后,你就可以基于这些物理卷创建“逻辑卷”。例如,你可以将这两个 1 TB 的磁盘组合成一个 2 TB 的分区,你的系统将只看到一个 2 TB 的卷,而LVM将会在后台处理这一切。一组物理卷以及一组逻辑卷被称之为“卷组”,一个典型的系统只会有一个卷组。 
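上面把两块 1 TB 磁盘合并为一个 2 TB 卷组的容量叠加,可以用下面这个简单的 shell 片段来示意(这只是一个假设性的演示,仅做容量的加法计算;其中的数值、设备名和卷组/逻辑卷名称都是为演示而假设的,真实环境中创建卷组需要 root 权限):

```shell
# 示意:两个物理卷(PV)加入同一个卷组(VG)后的总容量计算。
# 数值为示例假设(以 GiB 计),并非真实探测结果。
pv1_gib=1024   # 第一块 1 TB 磁盘上的物理卷
pv2_gib=1024   # 第二块 1 TB 磁盘上的物理卷

vg_total_gib=$((pv1_gib + pv2_gib))
echo "卷组总容量:${vg_total_gib} GiB"

# 真实操作的大致流程(示意,设备名与名称均为假设值,需 root 权限):
#   pvcreate /dev/sdb1 /dev/sdc1
#   vgcreate myvg /dev/sdb1 /dev/sdc1
#   lvcreate -l 100%FREE -n mylv myvg
```

注释中的 `myvg`、`mylv` 及设备名仅为示例;LVM 会在后台把这些物理卷拼合成系统所见的单个逻辑卷。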
该抽象层使得调整分区、将多个磁盘组合成单个卷、甚至为一个运行着的分区的文件系统创建“快照”变得十分简单,而完成所有这一切都无需先卸载分区。 @@ -28,11 +28,11 @@ LVM就在这些分区上提供了一个抽象层。用于取代磁盘上的传 通常,[LVM通过Linux终端命令来管理][5]。这在Ubuntu上也行得通,但是有个更简单的图形化方法可供大家采用。如果你是一个Linux用户,对GParted或者与其类似的分区管理器熟悉,算了,别瞎掰了——GParted根本不支持LVM磁盘。 -然而,你可以使用Ubuntu附带的磁盘工具。该工具也被称之为GNOME磁盘工具,或者叫Palimpsest。点击停靠盘上的图标来开启它吧,搜索磁盘然后敲击回车。不像GParted,该磁盘工具将会在“其它设备”下显示LVM分区,因此你可以根据需要格式化这些分区,也可以调整其它选项。该工具在Live CD或USB 驱动下也可以使用。 +然而,你可以使用Ubuntu附带的磁盘工具。该工具也被称之为GNOME磁盘工具,或者叫Palimpsest。点击dash中的图标来开启它吧,搜索“磁盘”然后敲击回车。不像GParted,该磁盘工具将会在“其它设备”下显示LVM分区,因此你可以根据需要格式化这些分区,也可以调整其它选项。该工具在Live CD或USB 驱动下也可以使用。 ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550361b3772f7.png.pagespeed.ic.nZWwLJUywR.png) -不幸的是,该磁盘工具不支持LVM的大多数强大的特性,没有管理卷组、扩展分区,或者创建快照等选项。对于这些操作,你可以通过终端来实现,但是你没有那个必要。相反,你可以打开Ubuntu软件中心,搜索关键字LVM,然后安装逻辑卷管理工具,你可以在终端窗口中运行**sudo apt-get install system-config-lvm**命令来安装它。安装完之后,你就可以从停靠盘上打开逻辑卷管理工具了。 +不幸的是,该磁盘工具不支持LVM的大多数强大的特性,没有管理卷组、扩展分区,或者创建快照等选项。对于这些操作,你可以通过终端来实现,但是没有那个必要。相反,你可以打开Ubuntu软件中心,搜索关键字LVM,然后安装逻辑卷管理工具,你可以在终端窗口中运行**sudo apt-get install system-config-lvm**命令来安装它。安装完之后,你就可以从dash上打开逻辑卷管理工具了。 这个图形化配置工具是由红帽公司开发的,虽然有点陈旧了,但却是唯一的图形化方式,你可以通过它来完成上述操作,将那些终端命令抛诸脑后了。 @@ -40,11 +40,11 @@ LVM就在这些分区上提供了一个抽象层。用于取代磁盘上的传 ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363106789c.png.pagespeed.ic.drVInt3Weq.png) -卷组视图会列出你所有物理卷和逻辑卷的总览。这里,我们有两个横跨两个独立硬盘驱动器的物理分区,我们有一个交换分区和一个根分区,就像Ubuntu默认设置的分区图表。由于我们从另一个驱动器添加了第二个物理分区,现在那里有大量未使用空间。 +卷组视图会列出你所有的物理卷和逻辑卷的总览。这里,我们有两个横跨两个独立硬盘驱动器的物理分区,我们有一个交换分区和一个根分区,这是Ubuntu默认设置的分区图表。由于我们从另一个驱动器添加了第二个物理分区,现在那里有大量未使用空间。 ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363f631c19.png.pagespeed.ic.54E_Owcq8y.png) -要扩展逻辑分区到物理空间,你可以在逻辑视图下选择它,点击编辑属性,然后修改大小来扩大分区。你也可以在这里缩减分区。 +要扩展逻辑分区到物理空间,你可以在逻辑视图下选择它,点击编辑属性,然后修改大小来扩大分区。你也可以在这里缩小分区。 ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55036893712d3.png.pagespeed.ic.ce7y_Mt0uF.png) @@ -55,7 +55,7 @@ system-config-lvm的其它选项允许你设置快照和镜像。对于传统桌 via: 
http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition-resizing-and-snapshots/ 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 7cefdbd8c4c506524a1af2b9c5e357edf70201a2 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 31 Aug 2015 23:46:47 +0800 Subject: [PATCH 388/697] =?UTF-8?q?=E5=BD=92=E6=A1=A3201508?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- published/{ => 201508}/20141211 Open source all over the world.md | 0 .../20150128 7 communities driving open source development.md | 0 ...stall Strongswan - A Tool to Setup IPsec Based VPN in Linux.md | 0 ...20150209 Install OpenQRM Cloud Computing Platform In Debian.md | 0 ...to Manage and Use LVM (Logical Volume Management) in Ubuntu.md | 0 ...Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md | 0 .../20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md | 0 ... to access a Linux server behind NAT via reverse SSH tunnel.md | 0 .../20150518 How to set up a Replica Set on MongoDB.md | 0 published/{ => 201508}/20150522 Analyzing Linux Logs.md | 0 ...0527 Howto Manage Host Using Docker Machine in a VirtualBox.md | 0 ...50602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md | 0 ...hares Her Interview Experience on Linux 'iptables' Firewall.md | 0 ... Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md | 0 ...150625 How to Provision Swarm Clusters using Docker Machine.md | 0 ... Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md | 0 ...esktop--What They Get Right & Wrong - Page 1 - Introduction.md | 0 ...p--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md | 0 ...--What They Get Right & Wrong - Page 3 - GNOME Applications.md | 0 ...ktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md | 0 ... 
Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md | 0 ... Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md | 0 .../20150717 How to collect NGINX metrics - Part 2.md | 0 .../20150717 How to monitor NGINX with Datadog - Part 3.md | 0 published/{ => 201508}/20150717 How to monitor NGINX- Part 1.md | 0 ...150717 Howto Configure FTP Server with Proftpd on Fedora 22.md | 0 ...To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md | 0 .../20150722 How To Manage StartUp Applications In Ubuntu.md | 0 ...150727 Easy Backup Restore and Migrate Containers in Docker.md | 0 ... Fix--There is no command installed for 7-zip archive files.md | 0 ... How to Update Linux Kernel for Improved System Performance.md | 0 ... CD, Watch User Activity and Check Memory Usages of Browser.md | 0 ...Shell Commands Easily Using 'Explain Shell' Script in Linux.md | 0 ...ogical Volume Management and How Do You Enable It in Ubuntu.md | 0 published/{ => 201508}/20150730 Compare PDF Files on Ubuntu.md | 0 .../20150730 Must-Know Linux Commands For New Users.md | 0 ...0150803 Handy commands for profiling your Unix file systems.md | 0 published/{ => 201508}/20150803 Linux Logging Basics.md | 0 .../{ => 201508}/20150803 Troubleshooting with Linux Logs.md | 0 ...6 5 Reasons Why Software Developer is a Great Career Choice.md | 0 ...ow to fix 'ImportError--No module named wxversion' on Linux.md | 0 ...150806 Linux FAQs with Answers--How to install git on Linux.md | 0 ... 
Bash Environment Variables on a Linux and Unix-like System.md | 0 published/{ => 201508}/20150810 For Linux, Supercomputers R Us.md | 0 ...s a Web Based Network Traffic Analyzer--Install it on Linux.md | 0 ...1 How to download apk files from Google Play Store on Linux.md | 0 .../20150813 How to Install Logwatch on Ubuntu 15.04.md | 0 .../20150813 How to get Public IP from Linux Terminal.md | 0 ...It Easier For You To Install The Latest Nvidia Linux Driver.md | 0 ...0816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md | 0 .../20150816 shellinabox--A Web based AJAX Terminal Emulator.md | 0 .../20150817 Top 5 Torrent Clients For Ubuntu Linux.md | 0 ... How to monitor stock quotes from the command line on Linux.md | 0 ...150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md | 0 .../20150818 ​Ubuntu Linux is coming to IBM mainframes.md | 0 .../20150821 How to Install Visual Studio Code in Linux.md | 0 ...inux FAQs with Answers--How to check MariaDB server version.md | 0 .../20150826 How to Run Kali Linux 2.0 In Docker Container.md | 0 ... Convert From RPM to DEB and DEB to RPM Package Using Alien.md | 0 .../20150827 Linux or UNIX--Bash Read a File Line By Line.md | 0 .../Linux and Unix Test Disk IO Performance With dd Command.md | 0 ... 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md | 0 ...RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md | 0 ... - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md | 0 ...Creating RAID 5 (Striping with Distributed Parity) in Linux.md | 0 ... 
Level 6 (Striping with Double Distributed Parity) in Linux.md | 0 .../Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md | 0 ...ng an Existing RAID Array and Removing Failed Disks in Raid.md | 0 published/{ => 201508}/kde-plasma-5.4.md | 0 69 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => 201508}/20141211 Open source all over the world.md (100%) rename published/{ => 201508}/20150128 7 communities driving open source development.md (100%) rename published/{ => 201508}/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md (100%) rename published/{ => 201508}/20150209 Install OpenQRM Cloud Computing Platform In Debian.md (100%) rename published/{ => 201508}/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md (100%) rename published/{ => 201508}/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md (100%) rename published/{ => 201508}/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md (100%) rename published/{ => 201508}/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md (100%) rename published/{ => 201508}/20150518 How to set up a Replica Set on MongoDB.md (100%) rename published/{ => 201508}/20150522 Analyzing Linux Logs.md (100%) rename published/{ => 201508}/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md (100%) rename published/{ => 201508}/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md (100%) rename published/{ => 201508}/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md (100%) rename published/{ => 201508}/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md (100%) rename published/{ => 201508}/20150625 How to Provision Swarm Clusters using Docker Machine.md (100%) rename published/{ => 201508}/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md (100%) rename published/{ => 201508}/20150716 A Week With GNOME 
As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md (100%) rename published/{ => 201508}/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md (100%) rename published/{ => 201508}/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md (100%) rename published/{ => 201508}/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md (100%) rename published/{ => 201508}/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md (100%) rename published/{ => 201508}/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md (100%) rename published/{ => 201508}/20150717 How to collect NGINX metrics - Part 2.md (100%) rename published/{ => 201508}/20150717 How to monitor NGINX with Datadog - Part 3.md (100%) rename published/{ => 201508}/20150717 How to monitor NGINX- Part 1.md (100%) rename published/{ => 201508}/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md (100%) rename published/{ => 201508}/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md (100%) rename published/{ => 201508}/20150722 How To Manage StartUp Applications In Ubuntu.md (100%) rename published/{ => 201508}/20150727 Easy Backup Restore and Migrate Containers in Docker.md (100%) rename published/{ => 201508}/20150728 How To Fix--There is no command installed for 7-zip archive files.md (100%) rename published/{ => 201508}/20150728 How to Update Linux Kernel for Improved System Performance.md (100%) rename published/{ => 201508}/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md (100%) rename published/{ => 201508}/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md (100%) rename published/{ => 201508}/20150729 What is Logical Volume Management and How Do 
You Enable It in Ubuntu.md (100%) rename published/{ => 201508}/20150730 Compare PDF Files on Ubuntu.md (100%) rename published/{ => 201508}/20150730 Must-Know Linux Commands For New Users.md (100%) rename published/{ => 201508}/20150803 Handy commands for profiling your Unix file systems.md (100%) rename published/{ => 201508}/20150803 Linux Logging Basics.md (100%) rename published/{ => 201508}/20150803 Troubleshooting with Linux Logs.md (100%) rename published/{ => 201508}/20150806 5 Reasons Why Software Developer is a Great Career Choice.md (100%) rename published/{ => 201508}/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md (100%) rename published/{ => 201508}/20150806 Linux FAQs with Answers--How to install git on Linux.md (100%) rename published/{ => 201508}/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md (100%) rename published/{ => 201508}/20150810 For Linux, Supercomputers R Us.md (100%) rename published/{ => 201508}/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md (100%) rename published/{ => 201508}/20150811 How to download apk files from Google Play Store on Linux.md (100%) rename published/{ => 201508}/20150813 How to Install Logwatch on Ubuntu 15.04.md (100%) rename published/{ => 201508}/20150813 How to get Public IP from Linux Terminal.md (100%) rename published/{ => 201508}/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md (100%) rename published/{ => 201508}/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md (100%) rename published/{ => 201508}/20150816 shellinabox--A Web based AJAX Terminal Emulator.md (100%) rename published/{ => 201508}/20150817 Top 5 Torrent Clients For Ubuntu Linux.md (100%) rename published/{ => 201508}/20150818 How to monitor stock quotes from the command line on Linux.md (100%) rename published/{ => 201508}/20150818 Linux Without 
Limits--IBM Launch LinuxONE Mainframes.md (100%) rename published/{ => 201508}/20150818 Ubuntu Linux is coming to IBM mainframes.md (100%) rename published/{ => 201508}/20150821 How to Install Visual Studio Code in Linux.md (100%) rename published/{ => 201508}/20150821 Linux FAQs with Answers--How to check MariaDB server version.md (100%) rename published/{ => 201508}/20150826 How to Run Kali Linux 2.0 In Docker Container.md (100%) rename published/{ => 201508}/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md (100%) rename published/{ => 201508}/20150827 Linux or UNIX--Bash Read a File Line By Line.md (100%) rename published/{ => 201508}/Linux and Unix Test Disk IO Performance With dd Command.md (100%) rename published/{ => 201508}/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md (100%) rename published/{ => 201508}/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md (100%) rename published/{ => 201508}/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md (100%) rename published/{ => 201508}/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md (100%) rename published/{ => 201508}/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md (100%) rename published/{ => 201508}/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md (100%) rename published/{ => 201508}/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md (100%) rename published/{ => 201508}/kde-plasma-5.4.md (100%)
From c19a8f948cd35070c325abafe1d36bcc40c8f5c7 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 1 Sep 2015 11:17:39 +0800 Subject: [PATCH 389/697] PUB:20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux @GOLinux --- ...ind and Delete Duplicate Files in Linux.md | 45 ++++++++++--------- 1 file changed, 23 insertions(+), 22 deletions(-) rename {translated/tech => published}/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md (80%) diff --git a/translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md b/published/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md similarity index 80% rename from translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md rename to published/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md index 09f10fb546..76a06ea37c 100644 --- a/translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md +++ b/published/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md @@ -1,16 +1,16 @@ -fdupes——Linux中查找并删除重复文件的命令行工具 +fdupes:Linux中查找并删除重复文件的命令行工具 ================================================================================ -对于大多数计算机用户而言,查找并替换重复的文件是一个常见的需求。查找并移除重复文件真是一项领人不胜其烦的工作,它耗时又耗力。如果你的机器上跑着GNU/Linux,那么查找重复文件会变得十分简单,这多亏了`**fdupes**`工具。 +对于大多数计算机用户而言,查找并替换重复的文件是一个常见的需求。查找并移除重复文件真是一项令人不胜其烦的工作,它耗时又耗力。但如果你的机器上跑着GNU/Linux,那么查找重复文件会变得十分简单,这多亏了`fdupes`工具。 ![Find and Delete Duplicate Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/find-and-delete-duplicate-files-in-linux.png)
-Fdupes——在Linux中查找并删除重复文件 +*fdupes——在Linux中查找并删除重复文件* ### fdupes是啥东东? ### -**Fdupes**是Linux下的一个工具,它由**Adrian Lopez**用C编程语言编写并基于MIT许可证发行,该应用程序可以在指定的目录及子目录中查找重复的文件。Fdupes通过对比文件的MD5签名,以及逐字节比较文件来识别重复内容,可以为Fdupes指定大量的选项以实现对文件的列出、删除、替换到文件副本的硬链接等操作。 +**fdupes**是Linux下的一个工具,它由**Adrian Lopez**用C编程语言编写并基于MIT许可证发行,该应用程序可以在指定的目录及子目录中查找重复的文件。fdupes通过对比文件的MD5签名,以及逐字节比较文件来识别重复内容,fdupes有各种选项,可以实现对文件的列出、删除、替换为文件副本的硬链接等操作。 -对比以下列顺序开始: +文件对比以下列顺序开始: **大小对比 > 部分 MD5 签名对比 > 完整 MD5 签名对比 > 逐字节对比** @@ -27,8 +27,9 @@ Fdupes——在Linux中查找并删除重复文件 **注意**:自Fedora 22之后,默认的包管理器yum被dnf取代了。 -### fdupes命令咋个搞? ### -1.作为演示的目的,让我们来在某个目录(比如 tecmint)下创建一些重复文件,命令如下: +### fdupes命令如何使用 ### + +1、 作为演示的目的,让我们来在某个目录(比如 tecmint)下创建一些重复文件,命令如下: $ mkdir /home/"$USER"/Desktop/tecmint && cd /home/"$USER"/Desktop/tecmint && for i in {1..15}; do echo "I Love Tecmint. Tecmint is a very nice community of Linux Users." > tecmint${i}.txt ; done @@ -57,7 +58,7 @@ Fdupes——在Linux中查找并删除重复文件 "I Love Tecmint. Tecmint is a very nice community of Linux Users." 
-2.现在在**tecmint**文件夹内搜索重复的文件。 +2、 现在在**tecmint**文件夹内搜索重复的文件。 $ fdupes /home/$USER/Desktop/tecmint @@ -77,7 +78,7 @@ Fdupes——在Linux中查找并删除重复文件 /home/tecmint/Desktop/tecmint/tecmint15.txt /home/tecmint/Desktop/tecmint/tecmint12.txt -3.使用**-r**选项在每个目录包括其子目录中递归搜索重复文件。 +3、 使用**-r**选项在每个目录包括其子目录中递归搜索重复文件。 它会递归搜索所有文件和文件夹,花一点时间来扫描重复文件,时间的长短取决于文件和文件夹的数量。在此其间,终端中会显示全部过程,像下面这样。 @@ -85,7 +86,7 @@ Fdupes——在Linux中查找并删除重复文件 Progress [37780/54747] 69% -4.使用**-S**选项来查看某个文件夹内找到的重复文件的大小。 +4、 使用**-S**选项来查看某个文件夹内找到的重复文件的大小。 $ fdupes -S /home/$USER/Desktop/tecmint @@ -106,7 +107,7 @@ Fdupes——在Linux中查找并删除重复文件 /home/tecmint/Desktop/tecmint/tecmint15.txt /home/tecmint/Desktop/tecmint/tecmint12.txt -5.你可以同时使用**-S**和**-r**选项来查看所有涉及到的目录和子目录中的重复文件的大小,如下: +5、 你可以同时使用**-S**和**-r**选项来查看所有涉及到的目录和子目录中的重复文件的大小,如下: $ fdupes -Sr /home/avi/Desktop/ @@ -131,11 +132,11 @@ Fdupes——在Linux中查找并删除重复文件 /home/tecmint/Desktop/resume_files/r-csc.html /home/tecmint/Desktop/resume_files/fc.html -6.不同于在一个或所有文件夹内递归搜索,你可以选择按要求有选择性地在两个或三个文件夹内进行搜索。不必再提醒你了吧,如有需要,你可以使用**-S**和/或**-r**选项。 +6、 不同于在一个或所有文件夹内递归搜索,你可以选择按要求有选择性地在两个或三个文件夹内进行搜索。不必再提醒你了吧,如有需要,你可以使用**-S**和/或**-r**选项。 $ fdupes /home/avi/Desktop/ /home/avi/Templates/ -7.要删除重复文件,同时保留一个副本,你可以使用`**-d**`选项。使用该选项,你必须额外小心,否则最终结果可能会是文件/数据的丢失。郑重提醒,此操作不可恢复。 +7、 要删除重复文件,同时保留一个副本,你可以使用`-d`选项。使用该选项,你必须额外小心,否则最终结果可能会是文件/数据的丢失。郑重提醒,此操作不可恢复。 $ fdupes -d /home/$USER/Desktop/tecmint @@ -177,13 +178,13 @@ Fdupes——在Linux中查找并删除重复文件 [-] /home/tecmint/Desktop/tecmint/tecmint15.txt [-] /home/tecmint/Desktop/tecmint/tecmint12.txt -8.从安全角度出发,你可能想要打印`**fdupes**`的输出结果到文件中,然后检查文本文件来决定要删除什么文件。这可以降低意外删除文件的风险。你可以这么做: +8、 从安全角度出发,你可能想要打印`fdupes`的输出结果到文件中,然后检查文本文件来决定要删除什么文件。这可以降低意外删除文件的风险。你可以这么做: $ fdupes -Sr /home > /home/fdupes.txt -**注意**:你可以替换`**/home**`为你想要的文件夹。同时,如果你想要递归搜索并打印大小,可以使用`**-r**`和`**-S**`选项。 +**注意**:你应该替换`/home`为你想要的文件夹。同时,如果你想要递归搜索并打印大小,可以使用`-r`和`-S`选项。 -9.你可以使用`**-f**`选项来忽略每个匹配集中的首个文件。 +9、 你可以使用`-f`选项来忽略每个匹配集中的首个文件。 首先列出该目录中的文件。 @@ -205,13 +206,13 @@ Fdupes——在Linux中查找并删除重复文件 
/home/tecmint/Desktop/tecmint9 (another copy).txt /home/tecmint/Desktop/tecmint9 (4th copy).txt -10.检查已安装的fdupes版本。 +10、 检查已安装的fdupes版本。 $ fdupes --version fdupes 1.51 -11.如果你需要关于fdupes的帮助,可以使用`**-h**`开关。 +11、 如果你需要关于fdupes的帮助,可以使用`-h`开关。 $ fdupes -h @@ -245,7 +246,7 @@ Fdupes——在Linux中查找并删除重复文件 -v --version display fdupes version -h --help display this help message -到此为止了。让我知道你到现在为止你是怎么在Linux中查找并删除重复文件的?同时,也让我知道你关于这个工具的看法。在下面的评论部分中提供你有价值的反馈吧,别忘了为我们点赞并分享,帮助我们扩散哦。 +到此为止了。让我知道你以前怎么在Linux中查找并删除重复文件的吧?同时,也让我知道你关于这个工具的看法。在下面的评论部分中提供你有价值的反馈吧,别忘了为我们点赞并分享,帮助我们扩散哦。 我正在使用另外一个移除重复文件的工具,它叫**fslint**。很快就会把使用心得分享给大家哦,你们一定会喜欢看的。 @@ -254,10 +255,10 @@ Fdupes——在Linux中查找并删除重复文件 via: http://www.tecmint.com/fdupes-find-and-delete-duplicate-files-in-linux/ 作者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ -[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ +[1]:https://linux.cn/article-2324-1.html +[2]:https://linux.cn/article-5109-1.html From 93520ea5154d8ec595cd582dc12c75abcea35a7d Mon Sep 17 00:00:00 2001 From: Yu Ye Date: Tue, 1 Sep 2015 11:21:40 +0800 Subject: [PATCH 390/697] Update 20150826 Five Super Cool Open Source Games.md --- sources/share/20150826 Five Super Cool Open Source Games.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/share/20150826 Five Super Cool Open Source Games.md b/sources/share/20150826 Five Super Cool Open Source Games.md index 0b92dcedff..0d3d3c8bfd 100644 --- a/sources/share/20150826 Five Super Cool Open Source Games.md +++ b/sources/share/20150826 Five Super Cool Open Source Games.md @@ -1,3 +1,4 @@ +Translating by H-mudcup Five Super Cool Open Source Games ================================================================================ 
In 2014 and 2015, Linux became home to a list of popular commercial titles such as the popular Borderlands, Witcher, Dead Island, and Counter Strike series of games. While this is exciting news, what of the gamer on a budget? Commercial titles are good, but even better are free-to-play alternatives made by developers who know what players like. @@ -62,4 +63,4 @@ via: http://fossforce.com/2015/08/five-super-cool-open-source-games/ [6]:http://mars-game.sourceforge.net/ [7]:http://valyriatear.blogspot.com/ [8]:https://www.youtube.com/channel/UCQ5KrSk9EqcT_JixWY2RyMA -[9]:http://supertuxkart.sourceforge.net/ \ No newline at end of file +[9]:http://supertuxkart.sourceforge.net/ From 1ba1a83818f600ef041c21b1c0506146bad8a67b Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 1 Sep 2015 15:21:03 +0800 Subject: [PATCH 391/697] =?UTF-8?q?20150901-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... open source board games to play online.md | 194 ++++++++++++++++++ ... automatically dim your screen on Linux.md | 52 +++++ 2 files changed, 246 insertions(+) create mode 100644 sources/share/20150901 5 best open source board games to play online.md create mode 100644 sources/tech/20150901 How to automatically dim your screen on Linux.md diff --git a/sources/share/20150901 5 best open source board games to play online.md b/sources/share/20150901 5 best open source board games to play online.md new file mode 100644 index 0000000000..505ca76f10 --- /dev/null +++ b/sources/share/20150901 5 best open source board games to play online.md @@ -0,0 +1,194 @@ +5 best open source board games to play online +================================================================================ +I have always had a fascination with board games, in part because they are a device of social interaction, they challenge the mind and, most importantly, they are great fun to play. 
In my misspent youth, a group of friends and I gathered together to escape the horrors of the classroom, and indulge in a little escapism. The time provided an outlet for tension and rivalry. Board games help teach diplomacy, how to make and break alliances, bring families and friends together, and learn valuable lessons. + +I had a penchant for abstract strategy games such as chess and draughts, as well as word games. I can still never resist a game of Escape from Colditz, a strategy card and dice-based board game, or Risk: two timeless multi-player strategy board games. But Catan remains my favourite board game. + +Board games have seen a resurgence in recent years, and Linux has a good range of board games to choose from. There is a credible implementation of Catan called Pioneers. But for my favourite implementations of classic board games to play online, check out the recommendations below. + +---------- + +### TripleA ### + +![TripleA in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-TripleA.png) + +TripleA is an open source online turn-based strategy game. It allows people to implement and play various strategy board games (i.e. Axis & Allies). The TripleA engine has full networking support for online play, support for sounds, XML support for game files, and has its own imaging subsystem that allows customized, user-editable maps to be used. TripleA is versatile, scalable and robust. + +TripleA started out as a World War II simulation, but now includes different conflicts, as well as variations and mods of popular games and maps. TripleA comes with multiple games, and over 100 more games can be downloaded from the user community.
+ +Features include: + +- Good interface and attractive graphics +- Optional scenarios +- Multiplayer games +- TripleA comes with the following supported games that use its game engine (just to name a few): + - Axis & Allies : Classic edition (2nd, 3rd with options enabled) + - Axis & Allies : Revised Edition + - Pact of Steel A&A Variant + - Big World 1942 A&A Variant + - Four if by Sea + - Battle Ship Row + - Capture The Flag + - Minimap +- Hot-seat +- Play By Email mode allows people to play a game via email without having to be connected to each other online + - More time to think out moves + - Only need to come online to send your turn to the next player + - Dice rolls are done by a dedicated dice server that is independent of TripleA + - All dice rolls are PGP-verified and emailed to every player + - Every move and every dice roll is logged and saved in TripleA's History Window + - An online game can later be continued under PBEM mode + - Hard for others to cheat +- Hosted online lobby +- Utilities for editing maps +- Website: [triplea.sourceforge.net][1] +- Developer: Sean Bridges (original developer), Mark Christopher Duncan +- License: GNU GPL v2 +- Version Number: 1.8.0.7 + +---------- + +### Domination ### + +![Domination in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Domination.png) + +Domination is an open source game that shares common themes with the hugely popular Risk board game. It has many game options and includes many maps. + +In the classic “World Domination” game of military strategy, you are battling to conquer the world. To win, you must launch daring attacks, defend yourself on all fronts, and sweep across vast continents with boldness and cunning. But remember, the dangers, as well as the rewards, are high. Just when the world is within your grasp, your opponent might strike and take it all away!
+ +Features include: + +- Simple to learn + - Domination - you must occupy all countries on the map, and thereby eliminate all opponents. These can be long, drawn-out games + - Capital - each player has a country they have selected as a Capital. To win the game, you must occupy all Capitals + - Mission - each player draws a random mission. The first to complete their mission wins. Missions may include the elimination of a certain colour, occupation of a particular continent, or a mix of both +- Map editor +- Simple map format +- Multiplayer network play +- Single player +- Hotseat +- 5 user interfaces +- Game types: +- Play online +- Website: [domination.sourceforge.net][2] +- Developer: Yura Mamyrin, Christian Weiske, Mike Chaten, and many others +- License: GNU GPL v3 +- Version Number: 1.1.1.5 + +---------- + +### PyChess ### + +![Micro-Max in action](http://www.linuxlinks.com/portal/content/reviews/Games/Screenshot-Pychess.jpg) + +PyChess is a Gnome-inspired chess client written in Python. + +The goal of PyChess is to provide a fully featured, nice looking, easy-to-use chess client for the Gnome desktop. + +The client should be usable by those totally new to chess, those who want to play an occasional game, and those who want to use the computer to further enhance their play.
+ +Features include: + +- Attractive interface +- Chess Engine Communication Protocol (CECP) and Universal Chess Interface (UCI) Engine support +- Free online play on the Free Internet Chess Server (FICS) +- Reads and writes PGN, EPD and FEN chess file formats +- Built-in Python based engine +- Undo and pause functions +- Board and piece animation +- Drag and drop +- Tabbed interface +- Hints and spyarrows +- Opening book sidepanel using sqlite +- Score plot sidepanel +- "Enter game" in pgn dialog +- Optional sounds +- Legal move highlighting +- Internationalised or figure pieces in notation +- Website: [www.pychess.org][3] +- Developer: Thomas Dybdahl Ahle +- License: GNU GPL v2 +- Version Number: 0.12 Anderssen rc4 + +---------- + +### Scrabble ### + +![Scrabble in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Scrabble3D.png) + +Scrabble3D is a highly customizable Scrabble game that supports not only Classic Scrabble and Superscrabble but also 3D games and custom boards. You can play locally against the computer or connect to a game server to find other players. + +Scrabble is a board game with the goal of placing letters crossword-style. Up to four players take part, and each gets a limited number of letters (usually 7 or 8). In turn, each player tries to compose their letters into one or more words that combine with the words already placed on the board. The value of the move depends on the letters (rare letters get more points) and on bonus fields which multiply the value of a letter or the whole word. The player with the most points wins. + +Scrabble3D extends this idea into the third dimension. Of course, a classic game with 15x15 fields or Superscrabble with 21x21 fields can be played, and you may configure any field setting yourself. The game can be played with the provided free program against the computer, against other local players, or via the internet. Last but not least, it's possible to connect to a game server to find other players and to obtain a rating.
Most options are configurable, including the number and valuation of letters, the dictionary used, the language of dialogs and, of course, colors, fonts etc. + +Features include: + +- Configurable board, letterset and design +- Board in OpenGL graphics with user-definable wavefront model +- Game against computer with support of multithreading +- Post-hoc game analysis with calculation of best move by computer +- Match with other players connected on a game server +- NSA rating and highscore at game server +- Time limit of games +- Localization; use of non-standard digraphs like CH, RR, LL and right to left reading +- Multilanguage help / wiki +- Network games are buffered and asynchronous games are possible +- Running games can be kibitzed +- International rules including Italian "Cambio Secco" +- Challenge mode, What-if-variant, CLABBERS, etc +- Website: [sourceforge.net/projects/scrabble][4] +- Developer: Heiko Tietze +- License: GNU GPL v3 +- Version Number: 3.1.3 + +---------- + +### Backgammon ### + +![Backgammon in action](http://www.linuxlinks.com/portal/content/reviews/Games/Screenshot-gnubg.png) + +GNU Backgammon (gnubg) is a strong backgammon program (world-class with a bearoff database installed) usable either as an engine by other programs or as a standalone backgammon game. It is able to play and analyze both money games and tournament matches, evaluate and roll out positions, and more. + +In addition to supporting simple play, it also has extensive analysis features, a tutor mode, adjustable difficulty, and support for exporting annotated games. + +It currently plays at about the level of a championship flight tournament player and is gradually improving. + +gnubg can be played on numerous on-line backgammon servers, such as the First Internet Backgammon Server (FIBS).
+ +Features include: + +- A command line interface (with full command editing features if GNU readline is available) that lets you play matches and sessions against GNU Backgammon with a rough ASCII representation of the board on text terminals +- Support for a GTK+ interface with a graphical board window. Both 2D and 3D graphics are available +- Tournament match and money session cube handling and cubeful play +- Support for both 1-sided and 2-sided bearoff databases: 1-sided bearoff database for 15 checkers on the first 6 points and optional 2-sided database kept in memory. Optional larger 1-sided and 2-sided databases stored on disk +- Automated rollouts of positions, with lookahead and race variance reduction where appropriate. Rollouts may be extended +- Functions to generate legal moves and evaluate positions at varying search depths +- Neural net functions for giving cubeless evaluations of all other contact and race positions +- Automatic and manual annotation (analysis and commentary) of games and matches +- Record keeping of statistics of players in games and matches (both native inside GNU Backgammon and externally using relational databases and Python) +- Loading and saving analyzed games and matches as .sgf files (Smart Game Format) +- Exporting positions, games and matches to: (.eps) Encapsulated Postscript, (.gam) Jellyfish Game, (.html) HTML, (.mat) Jellyfish Match, (.pdf) PDF, (.png) Portable Network Graphics, (.pos) Jellyfish Position, (.ps) PostScript, (.sgf) Gnu Backgammon File, (.tex) LaTeX, (.txt) Plain Text, (.txt) Snowie Text +- Import of matches and positions from a number of file formats: (.bkg) Hans Berliner's BKG Format, (.gam) GammonEmpire Game, (.gam) PartyGammon Game, (.mat) Jellyfish Match, (.pos) Jellyfish Position, (.sgf) Gnu Backgammon File, (.sgg) GamesGrid Save Game, (.tmg) TrueMoneyGames, (.txt) Snowie Text +- Python Scripting +- Native language support; 10 languages complete or in progress +- Website: [www.gnubg.org][5] +- 
Developer: Joseph Heled, Oystein Johansen, Jonathan Kinsey, David Montgomery, Jim Segrave, Joern Thyssen, Gary Wong and contributors +- License: GPL v2 +- Version Number: 1.05.000 + +-------------------------------------------------------------------------------- + +via: http://www.linuxlinks.com/article/20150830011533893/BoardGames.html + +作者:Frazer Kline +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://triplea.sourceforge.net/ +[2]:http://domination.sourceforge.net/ +[3]:http://www.pychess.org/ +[4]:http://sourceforge.net/projects/scrabble/ +[5]:http://www.gnubg.org/ \ No newline at end of file diff --git a/sources/tech/20150901 How to automatically dim your screen on Linux.md b/sources/tech/20150901 How to automatically dim your screen on Linux.md new file mode 100644 index 0000000000..b8a9ead16b --- /dev/null +++ b/sources/tech/20150901 How to automatically dim your screen on Linux.md @@ -0,0 +1,52 @@ +How to automatically dim your screen on Linux +================================================================================ +When you start spending the majority of your time in front of a computer, natural questions start arising. Is this healthy? How can I diminish the strain on my eyes? Why is the sunlight burning me? Although active research is still going on to answer these questions, a lot of programmers have already adopted a few applications to make their daily habits a little healthier for their eyes. Among those applications, there are two which I found particularly interesting: Calise and Redshift. + +### Calise ### + +In and out of development limbo, [Calise][1] stands for "Camera Light Sensor." In other terms, it is an open source program that computes the best backlight level for your screen based on the light intensity received by your webcam. 
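That core idea, turning a measured light intensity into a backlight level, can be sketched in a few lines of shell. The clamped linear mapping below is a hypothetical illustration of mine, not Calise's actual algorithm:

```shell
# Map an ambient-light reading (0-255, e.g. the mean pixel value of a
# webcam frame) to a backlight percentage between 10 and 100.
# A made-up linear mapping for illustration only; Calise's real logic
# is more refined.
backlight_percent() {
  a=$1
  [ "$a" -lt 0 ] && a=0        # clamp sensor noise
  [ "$a" -gt 255 ] && a=255
  echo $(( 10 + (90 * a) / 255 ))
}

backlight_percent 0     # dark room  -> 10
backlight_percent 255   # bright sun -> 100
```

In the real application the ambient value would come from averaging webcam frames, which is why a covered or taped-over webcam hurts its precision.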
And for more precision, Calise is capable of taking into account the weather in your area based on your geographical coordinates. What I like about it is its compatibility with every desktop, even non-X ones. + +![](https://farm1.staticflickr.com/569/21016715646_6e1e95f066_o.jpg) + +It comes with a command line interface and a GUI, supports multiple user profiles, and can even export its data to CSV. After installation, you will have to calibrate it quickly before the magic happens. + +![](https://farm6.staticflickr.com/5770/21050571901_1e7b2d63ec_c.jpg) + +What is less likeable, unfortunately, is that if you are as paranoid as I am, you have a little piece of tape in front of your webcam, which greatly affects Calise's precision. But that aside, Calise is a great application which deserves our attention and support. As I mentioned earlier, it has gone through some rough patches in its development schedule over the last couple of years, so I really hope that this project will continue. + +![](https://farm1.staticflickr.com/633/21032989702_9ae563db1e_o.png) + +### Redshift ### + +If you have already considered reducing the strain on your eyes caused by your screen, you may have heard of f.lux, a free but proprietary program that modifies the luminosity and color scheme of your display based on the time of day. However, if you really prefer open source software, there is an alternative: [Redshift][2]. Inspired by f.lux, Redshift also alters the color scheme and luminosity to enhance the experience of sitting in front of your screen at night. On startup, you can configure it with your geographic position as longitude and latitude, and then let it run in the tray. Redshift will smoothly adjust the color scheme of your screen based on the position of the sun. At night, you will see the screen's color temperature turn towards red, making it a lot less painful for your eyes.
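Conceptually, that adjustment is just an interpolation between a daytime and a nighttime color temperature, driven by the sun's elevation. The thresholds and the linear blend in this sketch are guesses of mine for illustration, not Redshift's actual transition curve:

```shell
# Blend the screen color temperature (in kelvin) between day (6500K)
# and night (3500K) based on the sun's elevation in whole degrees.
# The thresholds (-6 and 3 degrees) and the linear blend are
# illustrative assumptions, not Redshift's real curve.
color_temperature() {
  e=$1
  high=3 low=-6
  if [ "$e" -ge "$high" ]; then
    echo 6500                    # full daylight
  elif [ "$e" -le "$low" ]; then
    echo 3500                    # full night
  else
    echo $(( 3500 + (6500 - 3500) * (e - low) / (high - low) ))
  fi
}

color_temperature 45    # midday -> 6500
color_temperature -30   # night  -> 3500
```

This is also why Redshift asks for your longitude and latitude: they are what it needs to compute the sun's position.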
+ +![](https://farm6.staticflickr.com/5823/20420303684_2b6e917fee_b.jpg) + +Just like Calise, it proposes a command line interface as well as a GUI client. To start Redshift quickly, just use the command: + + $ redshift -l [LAT]:[LON] + +Replacing [LAT]:[LON] by your latitude and longitude. + +However, it is also possible to input your coordinates by GPS via the gpsd module. For Arch Linux users, I recommend this [wiki page][3]. + +### Conclusion ### + +To conclude, Linux users have no excuse for not taking care of their eyes. Calise and Redshift are both amazing. I really hope that their development will continue and that they get the support they deserve. Of course, there are more than just two programs out there to fulfill the purpose of protecting your eyes and staying healthy, but I feel that Calise and Redshift are a good start. + +If there is a program that you really like and that you use regularly to reduce the strain on your eyes, please let us know in the comments. + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/automatically-dim-your-screen-linux.html + +作者:[Adrien Brochard][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/adrien +[1]:http://calise.sourceforge.net/ +[2]:http://jonls.dk/redshift/ +[3]:https://wiki.archlinux.org/index.php/Redshift#Automatic_location_based_on_GPS \ No newline at end of file From 61b8a08d3eb574c2e7c800889b4b038f176b41c3 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 1 Sep 2015 15:38:55 +0800 Subject: [PATCH 392/697] =?UTF-8?q?20150901-2=20=E9=80=89=E9=A2=98=20?= =?UTF-8?q?=E4=B8=A4=E7=AF=87=E5=85=B3=E8=81=94=E7=9A=84?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r Upgrade to Linux Kernel 4.2 in Ubuntu.md | 88 +++++++++++++++++++ ...ux Kernel in Ubuntu 
Easily via A Script.md | 79 +++++++++++++++++ 2 files changed, 167 insertions(+) create mode 100644 sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md create mode 100644 sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md diff --git a/sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md b/sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md new file mode 100644 index 0000000000..0a18e76db1 --- /dev/null +++ b/sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md @@ -0,0 +1,88 @@ +How to Install / Upgrade to Linux Kernel 4.2 in Ubuntu +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) + +Linux Kernel 4.2 was released yesterday, at noon. Linus Torvalds wrote on [lkml.org][1]: + +> So judging by how little happened this week, it wouldn’t have been a mistake to release 4.2 last week after all, but hey, there’s certainly a few fixes here, and it’s not like delaying 4.2 for a week should have caused any problems either. +> +> So here it is, and the merge window for 4.3 is now open. I already have a few pending early pull requests, but as usual I’ll start processing them tomorrow and give the release some time to actually sit. +> +> The shortlog from rc8 is tiny, and appended. The patch is pretty tiny too… + +### What’s New in Kernel 4.2: ### + +- rewrites of Intel Assembly x86 code +- support for new ARM boards and SoCs +- F2FS per-file encryption +- The AMDGPU kernel DRM driver +- VCE1 video encode support for the Radeon DRM driver +- Initial support for Intel Broxton Atom SoCs +- Support for ARCv2 and HS38 CPU cores. +- added queue spinlocks support +- many other improvements and updated drivers. 
+ +### How to Install Kernel 4.2 in Ubuntu: ### + +The binary packages of this kernel release are available for download at link below: + +- [Download Kernel 4.2 (.DEB)][1] + +First check out your OS type, 32-bit (i386) or 64-bit (amd64), then download and install the packages below in turn: + +1. linux-headers-4.2.0-xxx_all.deb +1. linux-headers-4.2.0-xxx-generic_xxx_i386/amd64.deb +1. linux-image-4.2.0-xxx-generic_xxx_i386/amd64.deb + +After installing the kernel, you may run `sudo update-grub` command in terminal (Ctrl+Alt+T) to refresh grub boot-loader. + +If you need a low latency system (e.g. for recording audio) then download & install below packages instead: + +1. linux-headers-4.2.0_xxx_all.deb +1. linux-headers-4.2.0-xxx-lowlatency_xxx_i386/amd64.deb +1. linux-image-4.2.0-xxx-lowlatency_xxx_i386/amd64.deb + +For Ubuntu Server without a graphical UI, you may run below commands one by one to grab packages via wget and install them via dpkg: + +For 64-bit system run: + + cd /tmp/ + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200_4.2.0-040200.201508301530_all.deb + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200-generic_4.2.0-040200.201508301530_amd64.deb + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-image-4.2.0-040200-generic_4.2.0-040200.201508301530_amd64.deb + + sudo dpkg -i linux-headers-4.2.0-*.deb linux-image-4.2.0-*.deb + +For 32-bit system, run: + + cd /tmp/ + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200_4.2.0-040200.201508301530_all.deb + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200-generic_4.2.0-040200.201508301530_i386.deb + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-image-4.2.0-040200-generic_4.2.0-040200.201508301530_i386.deb + + sudo dpkg -i linux-headers-4.2.0-*.deb linux-image-4.2.0-*.deb + 
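The 64-bit and 32-bit sequences above differ only in the architecture suffix of the package names, so the choice can be scripted. The sketch below is a dry run that only prints what it would fetch (the file names are the ones quoted above; swap the `echo` for `wget` to actually download):

```shell
# Detect the package architecture; this sketch treats anything that is
# not 32-bit x86 as amd64.
arch=$(dpkg --print-architecture 2>/dev/null || uname -m)
case "$arch" in
  i386|i686) arch=i386 ;;
  *)         arch=amd64 ;;
esac

base="http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable"
ver="4.2.0-040200"
stamp="201508301530"

for pkg in "linux-headers-${ver}_${ver}.${stamp}_all.deb" \
           "linux-headers-${ver}-generic_${ver}.${stamp}_${arch}.deb" \
           "linux-image-${ver}-generic_${ver}.${stamp}_${arch}.deb"; do
  echo "would fetch: $base/$pkg"    # dry run; replace echo with wget
done
```

Nothing here is specific to v4.2 except the hard-coded version and timestamp strings taken from the commands above.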
+Finally, restart your computer for the new kernel to take effect. + +To revert, or to remove old kernels, see [install kernel simply via a script][3]. + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/08/upgrade-kernel-4-2-ubuntu/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://lkml.org/lkml/2015/8/30/96 +[2]:http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/ +[3]:http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ \ No newline at end of file diff --git a/sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md b/sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md new file mode 100644 index 0000000000..7022efd817 --- /dev/null +++ b/sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md @@ -0,0 +1,79 @@ +Install The Latest Linux Kernel in Ubuntu Easily via A Script +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) + +Want to install the latest Linux Kernel? A simple script can always do the job and make things easier in Ubuntu. + +Michael Murphy has created a script that makes installing the latest RC, stable, or lowlatency Kernel easier in Ubuntu. The script asks some questions and automatically downloads and installs the latest Kernel packages from the [Ubuntu kernel mainline page][1]. + +### Install / Upgrade Linux Kernel via the Script: ### + +1. Download the script from the right sidebar of the [github page][2] (click the “Download Zip” button). + +2. Decompress the Zip archive by right-clicking on it in your user Downloads folder and selecting “Extract Here”. + +3.
Navigate to the result folder in a terminal by right-clicking on that folder and selecting “Open in Terminal”: + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/open-terminal.jpg) + +It opens a terminal window and automatically navigates into the result folder. If you **DON’T** find the “Open in Terminal” option, search for and install `nautilus-open-terminal` in Ubuntu Software Center and then log out and back in (or run the `nautilus -q` command in a terminal instead to apply the changes). + +4. When you’re in the terminal, give the script executable permission (you only need to do this once). + + chmod +x * + +FINALLY, run the script every time you want to install / upgrade the Linux Kernel in Ubuntu: + + ./* + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/run-script.jpg) + +I use * instead of the SCRIPT NAME in both commands since it’s the only file in that folder. + +If the script runs successfully, restart your computer when done. + +### Revert and Uninstall the new Kernel: ### + +To revert and remove the new kernel for any reason, restart your computer and select the old kernel entry under the **Advanced Options** menu when you’re at the Grub boot-loader. + +When it boots up, see the section below. + +### How to Remove the old (or new) Kernels: ### + +1. Install Synaptic Package Manager from Ubuntu Software Center. + +2. Launch Synaptic Package Manager and do the following: + +- click the **Reload** button in case you want to remove the new kernel. +- select **Status -> Installed** on the left pane to keep the search list clear. +- search **linux-image**- using the Quick filter box. +- select a kernel image “linux-image-x.xx.xx-generic” and mark it for (complete) removal +- finally, apply the changes + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/remove-old-kernel1.jpg) + +Repeat until you have removed all unwanted kernels. DON’T carelessly remove the currently running kernel; check it via the `uname -r` command (see the picture below).
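That last check can even be scripted. The helper below is a sketch of my own (it is not part of Michael Murphy's script): it filters the currently running kernel out of a package list, so it never appears among removal candidates:

```shell
# Print only the kernel packages that do NOT belong to the running
# kernel. Usage: list_removable RUNNING_VERSION  (package names on stdin)
list_removable() {
  grep -v -- "$1" || true    # an empty result is not an error
}

# Only query dpkg where it exists (e.g. on Ubuntu); the filter itself
# works on any list of package names.
if command -v dpkg >/dev/null 2>&1; then
  dpkg -l | awk '/^ii/ && $2 ~ /^linux-image-/ {print $2}' | list_removable "$(uname -r)"
fi
```

Matching on the exact `uname -r` string is deliberately conservative: any package whose name contains the running version is kept out of the list.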
+ +For Ubuntu Server, you may run below commands one by one: + + uname -r + + dpkg -l | grep linux-image- + + sudo apt-get autoremove KERNEL_IMAGE_NAME + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/remove-kernel-terminal.jpg) + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:http://kernel.ubuntu.com/~kernel-ppa/mainline/ +[2]:https://gist.github.com/mmstick/8493727 \ No newline at end of file From ec7dffbf93a8d29a2797802971d4bdee42dd37b5 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 1 Sep 2015 15:58:56 +0800 Subject: [PATCH 393/697] =?UTF-8?q?20150901-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../talk/20150901 Is Linux Right For You.md | 63 +++++++++ ...0150901 How to Defragment Linux Systems.md | 125 ++++++++++++++++++ 2 files changed, 188 insertions(+) create mode 100644 sources/talk/20150901 Is Linux Right For You.md create mode 100644 sources/tech/20150901 How to Defragment Linux Systems.md diff --git a/sources/talk/20150901 Is Linux Right For You.md b/sources/talk/20150901 Is Linux Right For You.md new file mode 100644 index 0000000000..89044347ec --- /dev/null +++ b/sources/talk/20150901 Is Linux Right For You.md @@ -0,0 +1,63 @@ +Is Linux Right For You? +================================================================================ +> Not everyone should opt for Linux -- for many users, remaining with Windows or OSX is the better choice. + +I enjoy using Linux on the desktop. Not because of software politics or because I despise other operating systems. I simply like Linux because it just works. 
+ +It's been my experience that not everyone is cut out for the Linux lifestyle. In this article, I'll help you run through the pros and cons of making the switch to Linux so you can determine if switching is right for you. + +### When to make the switch ### + +Switching to Linux makes sense when there is a decisive reason to do so. The same can be said about moving from Windows to OS X or vice versa. In order to have success with switching, you must be able to identify your reason for jumping ship in the first place. + +For some people, the reason for switching is frustration with their current platform. Maybe the latest upgrade left them with a lousy experience and they're ready to chart new horizons. In other instances, perhaps it's simply a matter of curiosity. Whatever the motivation, you must have a good reason for switching operating systems. If you're pushing yourself in this direction without a good reason, then no one wins. + +However, there are exceptions to every rule. And if you're really interested in trying Linux on the desktop, then maybe coming to terms with a workable compromise is the way to go. + +### Starting off slow ### + +After trying Linux for the first time, I've seen people blast their Windows installation to bits because they had a good experience with Ubuntu on a flash drive for 20 minutes. Folks, this isn't a test. Instead I'd suggest the following: + +- Run the [Linux distro in a virtual machine][1] for a week. This means you are committing to running that distro for all browser work, email and other tasks you might otherwise do on that machine. +- If running a VM for a week is too resource intensive, try doing the same with a USB drive running Linux that offers [some persistent storage][2]. This will allow you to leave your main OS alone and intact. At the same time, you'll still be able to "live inside" of your Linux distribution for a week. 
+- If you find that everything is successful after a week of running Linux, the next step is to examine how many times you booted into Windows that week. If only occasionally, then the next step is to look into [dual-booting Windows][3] and Linux. For those of you that only found themselves using their Linux distro, it might be worth considering making the switch full time. +- Before you hose your Windows partition completely, it might make more sense to purchase a second hard drive to install Linux onto instead. This allows you to dual-boot, but to do so with ample hard drive space. It also makes Windows available to you if something should come up. + +### What do you gain adopting Linux? ### + +So what does one gain by switching to Linux? Generally it comes down to personal freedom for most people. With Linux, if something isn't to your liking, you're free to change it. Using Linux also saves users oodles of money in avoiding hardware upgrades and unnecessary software expenses. Additionally, you're not burdened with tracking down lost license keys for software. And if you dislike the direction a particular distribution is headed, you can switch to another distribution with minimal hassle. + +The sheer volume of desktop choice on the Linux desktop is staggering. This level of choice might even seem overwhelming to the newcomer. But if you find a distro base (Debian, Fedora, Arch, etc) that you like, the hard work is already done. All you need to do now is find a variation of the distro and the desktop environment you prefer. + +Now one of the most common complaints I hear is that there isn't much in the way of software for Linux. However, this isn't accurate at all. While other operating systems may have more of it, today's Linux desktop has applications to do just about anything you can think of. Video editing (home and pro-level), photography, office management, remote access, music (listening and creation), plus much, much more. 
+ +### What you lose adopting Linux? ### + +As much as I enjoy using Linux, my wife's home office relies on OS X. She's perfectly content using Linux for some tasks, however she relies on OS X for specific software not available for Linux. This is a common problem that many people face when first looking at making the switch. You must decide whether or not you're going to be losing out on critical software if you make the switch. + +Sometimes the issue is because the software has content locked down with it. In other cases, it's a workflow and functionality that was found with the legacy applications and not with the software available for Linux. I myself have never experienced this type of challenge, but I know those who have. Many of the software titles available for Linux are also available for other operating systems. So if there is a concern about such things, I encourage you to try out comparable apps on your native OS first. + +Another thing you might lose by switching to Linux is the luxury of local support when you need it. People scoff at this, but I know of countless instances where a newcomer to Linux was dismayed to find their only recourse for solving Linux challenges was from strangers on the Web. This is especially problematic if their only PC is the one having issues. Windows and OS X users are spoiled in that there are endless support techs in cities all over the world that support their platform(s). + +### How to proceed from here ### + +Perhaps the single biggest piece of advice to remember is always have a fallback plan. Remember, once you wipe that copy of Windows 10 from your hard drive, you may find yourself spending money to get it reinstalled. This is especially true for those of you who upgrade from other Windows releases. Accepting this, persistent flash drives with Linux or dual-booting Windows and Linux is always a preferable way forward for newcomers. Odds are that you may be just fine and take to Linux like a fish to water. 
But having that fallback plan in place just means you'll sleep better at night. + +If instead you've been relying on a dual-boot installation for weeks and feel ready to take the plunge, then by all means do it. Wipe your drive and start off with a clean installation of your favorite Linux distribution. I've been a full time Linux enthusiast for years and I can tell you for certain, it's a great feeling. How long? Let's just say my first Linux experience was with early Red Hat. I finally installed a dedicated installation on my laptop by 2003. + +Existing Linux enthusiasts, where did you first get started? Was your switch an exciting one or was it filled with angst? Hit the Comments and share your experiences. + +-------------------------------------------------------------------------------- + +via: http://www.datamation.com/open-source/is-linux-right-for-you.html + +作者:[Matt Hartley][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.datamation.com/author/Matt-Hartley-3080.html +[1]:http://www.psychocats.net/ubuntu/virtualbox +[2]:http://www.howtogeek.com/howto/14912/create-a-persistent-bootable-ubuntu-usb-flash-drive/ +[3]:http://www.linuxandubuntu.com/home/dual-boot-ubuntu-15-04-14-10-and-windows-10-8-1-8-step-by-step-tutorial-with-screenshots \ No newline at end of file diff --git a/sources/tech/20150901 How to Defragment Linux Systems.md b/sources/tech/20150901 How to Defragment Linux Systems.md new file mode 100644 index 0000000000..4b9095c1de --- /dev/null +++ b/sources/tech/20150901 How to Defragment Linux Systems.md @@ -0,0 +1,125 @@ +How to Defragment Linux Systems +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-featured.png) + +There is a common myth that Linux disks never need defragmentation at all. 
In most cases this is true, due mostly to the way the filesystems Linux uses (ext2/3/4, Btrfs, etc.) allocate files on disk. However, fragmentation can still occur in some specific cases. If that happens to you, the solution is fortunately very simple.
+
+### What is fragmentation? ###
+
+Fragmentation occurs when a filesystem updates files in small chunks, but these chunks do not form a contiguous whole and are instead scattered around the disk. This is particularly true of FAT and FAT32 filesystems. It was somewhat mitigated in NTFS, and it almost never happens in Linux (extX). Here is why.
+
+In filesystems such as FAT and FAT32, files are written right next to each other on the disk. There is no room left for file growth or updates:
+
+![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fragmented.png)
+
+NTFS leaves somewhat more room between files, so there is room to grow. But as the space between chunks is limited, fragmentation will still occur over time.
+
+![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-ntfs.png)
+
+Linux’s filesystems take a different approach. Instead of placing files right beside each other, files are spread out over the disk, leaving generous amounts of free space between them. There is sufficient room for file updates and growth, and fragmentation rarely occurs.
+
+![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-journal.png)
+
+Additionally, if fragmentation does happen, most Linux filesystems will attempt to shuffle files and chunks around to make them contiguous again.
+
+### Disk fragmentation on Linux ###
+
+Disk fragmentation seldom occurs on Linux unless you have a small hard drive or it is running out of space.
Some possible fragmentation cases include: + +- if you edit large video files or raw image files, and disk space is limited +- if you use older hardware like an old laptop, and you have a small hard drive +- if your hard drives start filling up (above 85% used) +- if you have many small partitions cluttering your home folder + +The best solution is to buy a larger hard drive. If it’s not possible, this is where defragmentation becomes useful. + +### How to check for fragmentation ### + +The `fsck` command will do this for you – that is, if you have an opportunity to run it from a live CD, with **all affected partitions unmounted**. + +This is very important: **RUNNING FSCK ON A MOUNTED PARTITION CAN AND WILL SEVERELY DAMAGE YOUR DATA AND YOUR DISK**. + +You have been warned. Before proceeding, make a full system backup. + +**Disclaimer**: The author of this article and Make Tech Easier take no responsibility for any damage to your files, data, system, or any other damage, caused by your actions after following this advice. You may proceed at your own risk. If you do proceed, you accept and acknowledge this. + +You should just boot into a live session (like an installer disk, system rescue CD, etc.) and run `fsck` on your UNMOUNTED partitions. To check for any problems, run the following command with root permission: + + fsck -fn [/path/to/your/partition] + +You can check what the `[/path/to/your/partition]` is by running + + sudo fdisk -l + +There is a way to run `fsck` (relatively) safely on a mounted partition – that is by using the `-n` switch. This will result in a read only file system check without touching anything. Of course, there is no guarantee of safety here, and you should only proceed after creating a backup. On an ext2 filesystem, running + + sudo fsck.ext2 -fn /path/to/your/partition + +would result in plenty of output – most of them error messages resulting from the fact that the partition is mounted. 
In the end, it will give you fragmentation-related information.
+
+![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fsck.png)
+
+If your fragmentation is above 20%, you should proceed to defragment your system.
+
+### How to easily defragment Linux filesystems ###
+
+All you need to do is back up **ALL** your files and data to another drive (by manually **copying** them over), format the partition, and copy your files back (don’t use a backup program for this). The journalling filesystem will handle them as new files and place them neatly on the disk without fragmentation.
+
+To back up your files, run
+
+    cp -afv [/path/to/source/partition]/* [/path/to/destination/folder]
+
+Mind the asterisk (*); it is important.
+
+Note: It is generally agreed that to copy large files or large amounts of data, the dd command might be best. This is a very low-level operation that copies everything “as is”, including the empty space and even leftover junk. This is not what we want, so it is probably better to use `cp`.
+
+Now you only need to remove all the original files.
+
+    sudo rm -rf [/path/to/source/partition]/*
+
+**Optional**: you can fill the empty space with zeros. You could achieve this with formatting as well, but if, for example, you did not copy the whole partition, only large files (which are most likely to cause fragmentation), this might not be an option.
+
+    sudo dd if=/dev/zero of=[/path/to/source/partition]/temp-zero.txt
+
+Wait for it to finish. You could also monitor the progress with `pv`:
+
+    sudo apt-get install pv
+    sudo dd if=/dev/zero | pv -tpreb | sudo dd of=[/path/to/source/partition]/temp-zero.txt
+
+![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-dd.png)
+
+When it is done, just delete the temporary file.
+ + sudo rm [/path/to/source/partition]/temp-zero.txt + +After you zeroed out the empty space (or just skipped that step entirely), copy your files back, reversing the first cp command: + + cp -afv [/path/to/original/destination/folder]/* [/path/to/original/source/partition] + +### Using e4defrag ### + +If you prefer a simpler approach, install `e2fsprogs`, + + sudo apt-get install e2fsprogs + +and run `e4defrag` as root on the affected partition. If you don’t want to or cannot unmount the partition, you can use its mount point instead of its path. To defragment your whole system, run + + sudo e4defrag / + +It is not guaranteed to succeed while mounted (you should also stop using your system while it is running), but it is much easier than copying all files away and back. + +### Conclusion ### + +Fragmentation should rarely be an issue on a Linux system due to the the journalling filesystem’s efficient data handling. If you do run into fragmentation due to any circumstances, there are simple ways to reallocate your disk space like copying all files away and back or using `e4defrag`. It is important, however, to keep your data safe, so before attempting any operation that would affect all or most of your files, make sure you make a backup just to be on the safe side. 
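The 20% rule of thumb mentioned earlier can also be checked mechanically. A minimal sketch; the sample line mimics the summary that e2fsck prints, and the device name and numbers are made up for illustration:

```shell
# Extract the non-contiguous percentage from an fsck summary line and
# apply the 20% threshold discussed above.
line='/dev/sda1: 112/1290240 files (26.8% non-contiguous), 312456/5158912 blocks'
frac=$(printf '%s\n' "$line" | sed -n 's/.*(\([0-9.]*\)% non-contiguous.*/\1/p')
if awk -v f="$frac" 'BEGIN { exit !(f > 20) }'; then
  echo "defragment"
else
  echo "leave as-is"
fi
# prints: defragment
```

On a real system you would capture the output of the `fsck -fn` run described above instead of hard-coding the sample line.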
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/defragment-linux/
+
+作者:[Attila Orosz][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/attilaorosz/
\ No newline at end of file
From 5aa1cea090f7c50b5d1b24eed03fee68a3ed0cfe Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Tue, 1 Sep 2015 16:14:32 +0800
Subject: [PATCH 394/697] =?UTF-8?q?20150901-4=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...Apache with MariaDB on Debian or Ubuntu.md | 182 ++++++++++++++++++
 1 file changed, 182 insertions(+)
 create mode 100644 sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md

diff --git a/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md b/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md
new file mode 100644
index 0000000000..d157b2b7aa
--- /dev/null
+++ b/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md
@@ -0,0 +1,182 @@
+Setting Up High-Performance ‘HHVM’ and Nginx/Apache with MariaDB on Debian/Ubuntu
+================================================================================
+HHVM (HipHop Virtual Machine) is an open-source virtual machine created for running applications written in Hack (a programming language built for HHVM) and PHP. HHVM uses a just-in-time (JIT) compilation approach to achieve remarkable performance while keeping the flexibility that PHP developers are accustomed to.
To date, HHVM has achieved over a 9x increase in HTTP request throughput and more than a 5x cut in memory utilization (when running on low system memory) for Facebook, compared with the PHP engine + [APC (Alternative PHP Cache)][1].
+
+HHVM can also be used along with a FastCGI-based web server like Nginx or Apache.
+
+![Install HHVM, Nginx and Apache with MariaDB](http://www.tecmint.com/wp-content/uploads/2015/08/Install-HHVM-Nginx-Apache-MariaDB.png)
+
+Install HHVM, Nginx and Apache with MariaDB
+
+In this tutorial we shall look at the steps for setting up the Nginx/Apache web server, the MariaDB database server, and HHVM. For this setup, we will use Ubuntu 15.04 (64-bit), as HHVM runs on 64-bit systems only, although Debian and Linux Mint distributions are also supported.
+
+### Step 1: Installing Nginx and Apache Web Server ###
+
+1. First do a system upgrade to update the repository list with the help of the following commands.
+
+    # apt-get update && apt-get upgrade
+
+![System Upgrade](http://www.tecmint.com/wp-content/uploads/2015/08/System-Upgrade.png)
+
+System Upgrade
+
+2. As mentioned, HHVM can be used with both the Nginx and Apache web servers. It’s your choice which web server you are going to use, but here we will show the installation of both and how to use them with HHVM.
+
+#### Installing Nginx ####
+
+In this step, we will install the Nginx web server from the package repository using the following command.
+
+    # apt-get install nginx
+
+![Install Nginx Web Server](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Nginx-Web-Server.png)
+
+Install Nginx Web Server
+
+#### Installing Apache ####
+
+    # apt-get install apache2
+
+![Install Apache Web Server](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Apache-Web-Server.png)
+
+Install Apache Web Server
+
+At this point, you should be able to navigate to the following URL and see the Nginx or Apache default page.
+ + http://localhost + OR + http://IP-Address + +#### Nginx Default Page #### + +![Nginx Welcome Page](http://www.tecmint.com/wp-content/uploads/2015/08/Nginx-Welcome-Page.png) + +Nginx Welcome Page + +#### Apache Default Page #### + +![Apache Default Page](http://www.tecmint.com/wp-content/uploads/2015/08/Apache-Default-Page.png) + +Apache Default Page + +### Step 2: Install and Configure MariaDB ### + +3. In this step, we will install MariaDB, as it providers better performance as compared to MySQL. + + # apt-get install mariadb-client mariadb-server + +![Install MariaDB Database](http://www.tecmint.com/wp-content/uploads/2015/08/Install-MariaDB-Database.png) + +Install MariaDB Database + +4. After MariaDB successful installation, you can start MariaDB and set root password to secure the database: + + # systemctl start mysql + # mysql_secure_installation + +Answer the following questions by typing `y` or `n` and press enter. Make sure you read the instructions carefully before answering the questions. + + Enter current password for root (enter for none) = press enter + Set root password? [Y/n] = y + Remove anonymous users[y/n] = y + Disallow root login remotely[y/n] = y + Remove test database and access to it [y/n] = y + Reload privileges tables now[y/n] = y + +5. After setting root password for MariaDB, you can connect to MariaDB prompt with the new root password. + + # mysql -u root -p + +### Step 3: Installation of HHVM ### + +6. At this stage we shall install and configure HHVM. You need to add the HHVM repository to your `sources.list` file and then you have to update your repository list using following series of commands. + + # wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | apt-key add - + # echo deb http://dl.hhvm.com/ubuntu DISTRIBUTION_VERSION main | sudo tee /etc/apt/sources.list.d/hhvm.list + # apt-get update + +**Important**: Don’t forget to replace DISTRIBUTION_VERSION with your Ubuntu distribution version (i.e. lucid, precise, or trusty.) 
and on Debian replace it with jessie or wheezy. On Linux Mint the installation instructions are the same, but petra is currently the only supported release.
+
+After adding the HHVM repository, you can easily install it as shown.
+
+    # apt-get install -y hhvm
+
+Installing HHVM starts it up right away, but it is not configured to auto-start at the next system boot. To make it start automatically at boot, use the following command.
+
+    # update-rc.d hhvm defaults
+
+### Step 4: Configuring Nginx/Apache to Talk to HHVM ###
+
+7. Now nginx/apache and HHVM are installed and running independently, so we need to configure the web server to talk to HHVM. The crucial part is that we have to tell nginx/apache to forward all PHP files to HHVM for execution.
+
+If you are using Nginx, follow these instructions.
+
+By default, the nginx configuration lives under /etc/nginx/sites-available/default, and this config looks in /usr/share/nginx/html for files to serve, but it doesn’t know what to do with PHP.
+
+To make Nginx talk to HHVM, we need to run the following include script, which configures nginx correctly by placing an hhvm.conf at the beginning of the nginx config mentioned above.
+
+The script makes nginx pass any file that ends with .hh or .php to HHVM via FastCGI.
+
+    # /usr/share/hhvm/install_fastcgi.sh
+
+![Configure Nginx for HHVM](http://www.tecmint.com/wp-content/uploads/2015/08/Configure-Nginx-for-HHVM.png)
+
+Configure Nginx for HHVM
+
+**Important**: If you are using Apache, no configuration is needed at this point.
+
+8. Next, set /usr/bin/hhvm to provide /usr/bin/php (php) by running the command below.
+
+    # /usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60
+
+After all the above steps are done, you can now start HHVM and test it.
+
+    # systemctl start hhvm
+
+### Step 5: Testing HHVM with Nginx/Apache ###
+
+9. 
To verify that HHVM is working, you need to create a hello.php file under the nginx/apache document root directory.
+
+    # nano /usr/share/nginx/html/hello.php [For Nginx]
+    OR
+    # nano /var/www/html/hello.php [For Nginx and Apache]
+
+Add the following snippet to this file.
+
+
+
+and then navigate to the following URL to verify that you see the “hello world” page.
+
+    http://localhost/hello.php
+    OR
+    http://IP-Address/hello.php
+
+![HHVM Page](http://www.tecmint.com/wp-content/uploads/2015/08/HHVM-Page.png)
+
+HHVM Page
+
+If the “HHVM” page appears, it means you’re all set!
+
+### Conclusion ###
+
+These steps are very easy to follow. We hope you find this tutorial useful, and if you get any error during the installation of any package, post a comment and we shall find a solution together. Any additional ideas are welcome.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/install-hhvm-and-nginx-apache-with-mariadb-on-debian-ubuntu/
+
+作者:[Ravi Saive][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/admin/
+[1]:http://www.tecmint.com/install-apc-alternative-php-cache-in-rhel-centos-fedora/
\ No newline at end of file
From c1cf9482557f16a896789afb6d7e2bf2b29ceba4 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Tue, 1 Sep 2015 17:12:31 +0800
Subject: [PATCH 395/697] Update 20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md

---
 ... 
How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md b/sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md index 0a18e76db1..b26e225586 100644 --- a/sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md +++ b/sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md @@ -1,3 +1,4 @@ +translation by strugglingyouth How to Install / Upgrade to Linux Kernel 4.2 in Ubuntu ================================================================================ ![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) @@ -85,4 +86,4 @@ via: http://ubuntuhandbook.org/index.php/2015/08/upgrade-kernel-4-2-ubuntu/ [a]:http://ubuntuhandbook.org/index.php/about/ [1]:https://lkml.org/lkml/2015/8/30/96 [2]:http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/ -[3]:http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ \ No newline at end of file +[3]:http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ From bc49c88739870c74bbe6a725e44669584220d34b Mon Sep 17 00:00:00 2001 From: KS Date: Tue, 1 Sep 2015 17:42:29 +0800 Subject: [PATCH 396/697] Delete 20150826 How to set up a system status page of your infrastructure.md --- ...stem status page of your infrastructure.md | 295 ------------------ 1 file changed, 295 deletions(-) delete mode 100644 sources/tech/20150826 How to set up a system status page of your infrastructure.md diff --git a/sources/tech/20150826 How to set up a system status page of your infrastructure.md b/sources/tech/20150826 How to set up a system status page of your infrastructure.md deleted file mode 100644 index f696e91638..0000000000 --- a/sources/tech/20150826 How to set up a system status page of your infrastructure.md +++ /dev/null @@ -1,295 +0,0 @@ -wyangsun 
translating -How to set up a system status page of your infrastructure -================================================================================ -If you are a system administrator who is responsible for critical IT infrastructure or services of your organization, you will understand the importance of effective communication in your day-to-day tasks. Suppose your production storage server is on fire. You want your entire team on the same page in order to resolve the issue as fast as you can. While you are at it, you don't want half of all users contacting you asking why they cannot access their documents. When a scheduled maintenance is coming up, you want to notify interested parties of the event ahead of the schedule, so that unnecessary support tickets can be avoided. - -All these require some sort of streamlined communication channel between you, your team and people you serve. One way to achieve that is to maintain a centralized system status page, where the detail of downtime incidents, progress updates and maintenance schedules are reported and chronicled. That way, you can minimize unnecessary distractions during downtime, and also have any interested party informed and opt-in for any status update. - -One good **open-source, self-hosted system status page solution** is [Cachet][1]. In this tutorial, I am going to describe how to set up a self-hosted system status page using Cachet. - -### Cachet Features ### - -Before going into the detail of setting up Cachet, let me briefly introduce its main features. - -- **Full JSON API**: The Cachet API allows you to connect any external program or script (e.g., uptime script) to Cachet to report incidents or update status automatically. -- **Authentication**: Cachet supports Basic Auth and API token in JSON API, so that only authorized personnel can update the status page. -- **Metrics system**: This is useful to visualize custom data over time (e.g., server load or response time). 
-- **Notification**: Optionally you can send notification emails about reported incidents to anyone who signed up to the status page. -- **Multiple languages**: The status page can be translated into 11 different languages. -- **Two factor authentication**: This allows you to lock your Cachet admin account with Google's two-factor authentication. -- **Cross database support**: You can choose between MySQL, SQLite, Redis, APC, and PostgreSQL for a backend storage. - -In the rest of the tutorial, I explain how to install and configure Cachet on Linux. - -### Step One: Download and Install Cachet ### - -Cachet requires a web server and a backend database to operate. In this tutorial, I am going to use the LAMP stack. Here are distro-specific instructions to install Cachet and LAMP stack. - -#### Debian, Ubuntu or Linux Mint #### - - $ sudo apt-get install curl git apache2 mysql-server mysql-client php5 php5-mysql - $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet - $ cd /var/www/cachet - $ sudo git checkout v1.1.1 - $ sudo chown -R www-data:www-data . - -For more detail on setting up LAMP stack on Debian-based systems, refer to [this tutorial][2]. - -#### Fedora, CentOS or RHEL #### - -On Red Hat based systems, you first need to [enable REMI repository][3] (to meet PHP version requirement). Then proceed as follows. - - $ sudo yum install curl git httpd mariadb-server - $ sudo yum --enablerepo=remi-php56 install php php-mysql php-mbstring - $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet - $ cd /var/www/cachet - $ sudo git checkout v1.1.1 - $ sudo chown -R apache:apache . - $ sudo firewall-cmd --permanent --zone=public --add-service=http - $ sudo firewall-cmd --reload - $ sudo systemctl enable httpd.service; sudo systemctl start httpd.service - $ sudo systemctl enable mariadb.service; sudo systemctl start mariadb.service - -For more details on setting up LAMP on Red Hat-based systems, refer to [this tutorial][4]. 
- -### Configure a Backend Database for Cachet ### - -The next step is to configure database backend. - -Log in to MySQL/MariaDB server, and create an empty database called 'cachet'. - - $ sudo mysql -uroot -p - ----------- - - mysql> create database cachet; - mysql> quit - -Now create a Cachet configuration file by using a sample configuration file. - - $ cd /var/www/cachet - $ sudo mv .env.example .env - -In .env file, fill in database information (i.e., DB_*) according to your setup. Leave other fields unchanged for now. - - APP_ENV=production - APP_DEBUG=false - APP_URL=http://localhost - APP_KEY=SomeRandomString - - DB_DRIVER=mysql - DB_HOST=localhost - DB_DATABASE=cachet - DB_USERNAME=root - DB_PASSWORD= - - CACHE_DRIVER=apc - SESSION_DRIVER=apc - QUEUE_DRIVER=database - - MAIL_DRIVER=smtp - MAIL_HOST=mailtrap.io - MAIL_PORT=2525 - MAIL_USERNAME=null - MAIL_PASSWORD=null - MAIL_ADDRESS=null - MAIL_NAME=null - - REDIS_HOST=null - REDIS_DATABASE=null - REDIS_PORT=null - -### Step Three: Install PHP Dependencies and Perform DB Migration ### - -Next, we are going to install necessary PHP dependencies. For that we will use composer. If you do not have composer installed on your system, install it first: - - $ curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer - -Now go ahead and install PHP dependencies using composer. - - $ cd /var/www/cachet - $ sudo composer install --no-dev -o - -Next, perform one-time database migration. This step will populate the empty database we created earlier with necessary tables. - - $ sudo php artisan migrate - -Assuming the database config in /var/www/cachet/.env is correct, database migration should be completed successfully as shown below. - -![](https://farm6.staticflickr.com/5814/20235620184_54048676b0_c.jpg) - -Next, create a security key, which will be used to encrypt the data entered in Cachet. 
- - $ sudo php artisan key:generate - $ sudo php artisan config:cache - -![](https://farm6.staticflickr.com/5717/20831952096_7105c9fdc7_c.jpg) - -The generated app key will be automatically added to the APP_KEY variable of your .env file. No need to edit .env on your own here. - -### Step Four: Configure Apache HTTP Server ### - -Now it's time to configure the web server that Cachet will be running on. As we are using Apache HTTP server, create a new [virtual host][5] for Cachet as follows. - -#### Debian, Ubuntu or Linux Mint #### - - $ sudo vi /etc/apache2/sites-available/cachet.conf - ----------- - - <VirtualHost *:80> - ServerName cachethost - ServerAlias cachethost - DocumentRoot "/var/www/cachet/public" - <Directory "/var/www/cachet/public"> - Require all granted - Options Indexes FollowSymLinks - AllowOverride All - Order allow,deny - Allow from all - </Directory> - </VirtualHost> - -Enable the new Virtual Host and mod_rewrite with: - - $ sudo a2ensite cachet.conf - $ sudo a2enmod rewrite - $ sudo service apache2 restart - -#### Fedora, CentOS or RHEL #### - -On Red Hat based systems, create a virtual host file as follows. - - $ sudo vi /etc/httpd/conf.d/cachet.conf - ----------- - - <VirtualHost *:80> - ServerName cachethost - ServerAlias cachethost - DocumentRoot "/var/www/cachet/public" - <Directory "/var/www/cachet/public"> - Require all granted - Options Indexes FollowSymLinks - AllowOverride All - Order allow,deny - Allow from all - </Directory> - </VirtualHost> - -Now reload Apache configuration: - - $ sudo systemctl reload httpd.service - -### Step Five: Configure /etc/hosts for Testing Cachet ### - -At this point, the initial Cachet status page should be up and running, and now it's time to test. - -Since Cachet is configured as a virtual host of Apache HTTP server, we need to tweak /etc/hosts of your client computer to be able to access it. Here the client computer is the one from which you will be accessing the Cachet page. - -Open /etc/hosts, and add the following entry. 
- - $ sudo vi /etc/hosts - ----------- - - cachethost - -In the above, the name "cachethost" must match with ServerName specified in the Apache virtual host file for Cachet. - -### Test Cachet Status Page ### - -Now you are ready to access Cachet status page. Type http://cachethost in your browser address bar. You will be redirected to the initial Cachet setup page as follows. - -![](https://farm6.staticflickr.com/5745/20858228815_405fce1301_c.jpg) - -Choose cache/session driver. Here let's choose "File" for both cache and session drivers. - -Next, type basic information about the status page (e.g., site name, domain, timezone and language), as well as administrator account. - -![](https://farm1.staticflickr.com/611/20237229693_c22014e4fd_c.jpg) - -![](https://farm6.staticflickr.com/5707/20858228875_b056c9e1b4_c.jpg) - -![](https://farm6.staticflickr.com/5653/20671482009_8629572886_c.jpg) - -Your initial status page will finally be ready. - -![](https://farm6.staticflickr.com/5692/20237229793_f6a48f379a_c.jpg) - -Go ahead and create components (units of your system), incidents or any scheduled maintenance as you want. - -For example, to add a new component: - -![](https://farm6.staticflickr.com/5672/20848624752_9d2e0a07be_c.jpg) - -To add a scheduled maintenance: - -This is what the public Cachet status page looks like: - -![](https://farm1.staticflickr.com/577/20848624842_df68c0026d_c.jpg) - -With SMTP integration, you can send out emails on status updates to any subscribers. Also, you can fully customize the layout and style of the status page using CSS and markdown formatting. - -### Conclusion ### - -Cachet is pretty easy-to-use, self-hosted status page software. One of the nicest features of Cachet is its support for full JSON API. Using its RESTful API, one can easily hook up Cachet with separate monitoring backends (e.g., [Nagios][6]), and feed Cachet with incident reports and status updates automatically. 
This is far quicker and more efficient than manually managing a status page. - -As a final word, I'd like to mention one thing. While setting up a fancy status page with Cachet is straightforward, making the best use of the software is not as easy as installing it. You need total commitment from the IT team to update the status page in an accurate and timely manner, thereby building credibility for the published information. At the same time, you need to educate users to turn to the status page. At the end of the day, it would be pointless to set up a status page if it's not populated well, and/or no one is checking it. Remember this when you consider deploying Cachet in your work environment. - -### Troubleshooting ### - -As a bonus, here are some useful troubleshooting tips in case you encounter problems while setting up Cachet. - -1. The Cachet page does not load anything, and you are getting the following error. - - production.ERROR: exception 'RuntimeException' with message 'No supported encrypter found. The cipher and / or key length are invalid.' in /var/www/cachet/bootstrap/cache/compiled.php:6695 - -**Solution**: Make sure that you create an app key, as well as clear the configuration cache, as follows. - - $ cd /path/to/cachet - $ sudo php artisan key:generate - $ sudo php artisan config:cache - -2. You are getting the following error while invoking the composer command. - - - danielstjules/stringy 1.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. - - laravel/framework v5.1.8 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. - - league/commonmark 0.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. - -**Solution**: Make sure to install the required PHP extension mbstring on your system, which must be compatible with your PHP version. 
On Red Hat based system, since we installed PHP from REMI-56 repository, we install the extension from the same repository. - - $ sudo yum --enablerepo=remi-php56 install php-mbstring - -3. You are getting a blank page while trying to access Cachet status page. The HTTP log shows the following error. - - PHP Fatal error: Uncaught exception 'UnexpectedValueException' with message 'The stream or file "/var/www/cachet/storage/logs/laravel-2015-08-21.log" could not be opened: failed to open stream: Permission denied' in /var/www/cachet/bootstrap/cache/compiled.php:12851 - -**Solution**: Try the following commands. - - $ cd /var/www/cachet - $ sudo php artisan cache:clear - $ sudo chmod -R 777 storage - $ sudo composer dump-autoload - -If the above solution does not work, try disabling SELinux: - - $ sudo setenforce 0 - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/setup-system-status-page.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:https://cachethq.io/ -[2]:http://xmodulo.com/install-lamp-stack-ubuntu-server.html -[3]:http://ask.xmodulo.com/install-remi-repository-centos-rhel.html -[4]:http://xmodulo.com/install-lamp-stack-centos.html -[5]:http://xmodulo.com/configure-virtual-hosts-apache-http-server.html -[6]:http://xmodulo.com/monitor-common-services-nagios.html From 50c391918ff889dc8b38a409cf0f333a5eccbd8b Mon Sep 17 00:00:00 2001 From: KS Date: Tue, 1 Sep 2015 17:44:32 +0800 Subject: [PATCH 397/697] Create 20150826 How to set up a system status page of your infrastructure.md --- ...stem status page of your infrastructure.md | 294 ++++++++++++++++++ 1 file changed, 294 insertions(+) create mode 100644 translated/tech/20150826 How to set up a system status page of your infrastructure.md diff --git 
a/translated/tech/20150826 How to set up a system status page of your infrastructure.md b/translated/tech/20150826 How to set up a system status page of your infrastructure.md new file mode 100644 index 0000000000..53c97670d8 --- /dev/null +++ b/translated/tech/20150826 How to set up a system status page of your infrastructure.md @@ -0,0 +1,294 @@ +如何部署你自己的公共系统状态页面 +================================================================================ +如果你是一个系统管理员,负责公司里关键的 IT 基础设施或服务,你一定明白有效的沟通在日常工作中的重要性。假设你的线上存储服务器故障了,你会希望团队里所有人对这一问题达成共识,以便尽快解决它。当你忙于排障时,你不会想让一半的用户来问你为什么他们访问不了自己的文档。当一次计划内维护临近时,你也会想提前通知相关人员,以免造成不必要的麻烦。 + +这一切都要求你与团队、用户之间有一条高效的沟通渠道。实现它的一种方法是维护一个集中的系统状态页面:故障详情、进度更新和维护计划都会在这里报告和记录。这样,在故障期间你可以减少不必要的打扰,相关方也能随时了解进展,并自行选择是否订阅状态更新。 + +一个不错的**开源、自托管系统状态页面**就是 [Cachet][1]。在这个教程中,我将描述如何用 Cachet 部署一个自托管的系统状态页面。 + +### Cachet 特性 ### + +在详细配置 Cachet 之前,让我先简单介绍一下它的主要特性。 + +- **全 JSON API**:Cachet API 允许你将任意外部程序或脚本(例如 uptime 脚本)接入 Cachet,来报告突发事件或自动更新状态。 +- **认证**:Cachet 支持基础认证和 JSON API 的 API 令牌,所以只有认证用户可以更新状态页面。 +- **指标系统**:用来展示随时间推移的自定义数据(例如服务器负载或者响应时间)。 +- **通知**:你可以选择给任何注册了状态页面的人发送通知邮件,报告新的事件。 +- **多语言**:状态页可以被翻译为 11 种不同的语言。 +- **双因子认证**:这允许你使用 Google 的双因子认证来锁定你的 Cachet 管理账户。 +- **跨数据库支持**:你可以选择 MySQL、SQLite、Redis、APC 或 PostgreSQL 作为后端存储。 + +在教程的其余部分,我将说明如何在 Linux 上安装和配置 Cachet。 + +### 第一步:下载和安装 Cachet ### + +Cachet 需要一个 web 服务器和一个后端数据库才能运转。在这个教程中,我将使用 LAMP 架构。以下是在各发行版上安装 Cachet 和 LAMP 架构的指令。 + +#### Debian, Ubuntu 或者 Linux Mint #### + + $ sudo apt-get install curl git apache2 mysql-server mysql-client php5 php5-mysql + $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet + $ cd /var/www/cachet + $ sudo git checkout v1.1.1 + $ sudo chown -R www-data:www-data .
+ +在基于Debian的系统上更多详细的设置LAMP架构,参考这个[教程][2]。 + +#### Fedora, CentOS 或 RHEL #### + +在基于Red Hat系统上,你首先需要[设置REMI资源库][3](以满足PHP版本需求)。然后执行下面命令。 + + $ sudo yum install curl git httpd mariadb-server + $ sudo yum --enablerepo=remi-php56 install php php-mysql php-mbstring + $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet + $ cd /var/www/cachet + $ sudo git checkout v1.1.1 + $ sudo chown -R apache:apache . + $ sudo firewall-cmd --permanent --zone=public --add-service=http + $ sudo firewall-cmd --reload + $ sudo systemctl enable httpd.service; sudo systemctl start httpd.service + $ sudo systemctl enable mariadb.service; sudo systemctl start mariadb.service + +在基于Red Hat系统上更多详细设置LAMP,参考这个[教程][4]。 + +### 配置Cachet的后端数据库### + +下一步是配置后端数据库。 + +登陆到MySQL/MariaDB服务,然后创建一个空的数据库称为‘cachet’。 + + $ sudo mysql -uroot -p + +---------- + + mysql> create database cachet; + mysql> quit + +现在用一个样本配置文件创建一个Cachet配置文件。 + + $ cd /var/www/cachet + $ sudo mv .env.example .env + +在.env文件里,填写你自己设置的数据库信息(例如,DB\_\*)。其他的字段先不改变。 + + APP_ENV=production + APP_DEBUG=false + APP_URL=http://localhost + APP_KEY=SomeRandomString + + DB_DRIVER=mysql + DB_HOST=localhost + DB_DATABASE=cachet + DB_USERNAME=root + DB_PASSWORD= + + CACHE_DRIVER=apc + SESSION_DRIVER=apc + QUEUE_DRIVER=database + + MAIL_DRIVER=smtp + MAIL_HOST=mailtrap.io + MAIL_PORT=2525 + MAIL_USERNAME=null + MAIL_PASSWORD=null + MAIL_ADDRESS=null + MAIL_NAME=null + + REDIS_HOST=null + REDIS_DATABASE=null + REDIS_PORT=null + +### 第三步:安装PHP依赖和执行数据库迁移 ### + +下面,我们将要安装必要的PHP依赖包。所以我们将使用composer。如果你的系统还没有安装composer,先安装它: + + $ curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer + +现在开始用composer安装PHP依赖包。 + + $ cd /var/www/cachet + $ sudo composer install --no-dev -o + +下面执行一次数据库迁移。这一步将我们早期创建的必要表填充到数据库。 + + $ sudo php artisan migrate + +假设数据库配置在/var/www/cachet/.env是正确的,数据库迁移应该像下面显示一样完成成功。 + +![](https://farm6.staticflickr.com/5814/20235620184_54048676b0_c.jpg) + 
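如果想进一步确认迁移确实生成了数据表,可以列出 cachet 库中的表。下面的示意脚本只负责拼出(并打印)要执行的命令,并不会真正连接数据库;其中的库名来自上文创建的数据库,root 口令等细节请按你的环境补全:

```shell
# 示意:打印用于检查迁移结果的命令;DB_NAME 即上文创建的数据库。
DB_NAME="cachet"
check_cmd="mysql -uroot -p -e 'SHOW TABLES FROM $DB_NAME;'"
echo "$check_cmd"
```

若迁移成功,执行打印出的命令后应能看到 Cachet 生成的各个数据表。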
+下面,创建一个密钥,它将用来加密存入 Cachet 的数据。 + + $ sudo php artisan key:generate + $ sudo php artisan config:cache + +![](https://farm6.staticflickr.com/5717/20831952096_7105c9fdc7_c.jpg) + +生成的应用密钥将自动添加到你的 .env 文件的 APP\_KEY 变量中,你不需要自己编辑 .env。 + +### 第四步:配置 Apache HTTP 服务 ### + +现在该配置 Cachet 将运行于其上的 web 服务器了。我们使用 Apache HTTP 服务器,按如下方式为 Cachet 创建一个新的[虚拟主机][5]。 + +#### Debian, Ubuntu 或 Linux Mint #### + + $ sudo vi /etc/apache2/sites-available/cachet.conf + +---------- + + <VirtualHost *:80> + ServerName cachethost + ServerAlias cachethost + DocumentRoot "/var/www/cachet/public" + <Directory "/var/www/cachet/public"> + Require all granted + Options Indexes FollowSymLinks + AllowOverride All + Order allow,deny + Allow from all + </Directory> + </VirtualHost> + +启用新虚拟主机和 mod_rewrite: + + $ sudo a2ensite cachet.conf + $ sudo a2enmod rewrite + $ sudo service apache2 restart + +#### Fedora, CentOS 或 RHEL #### + +在基于 Red Hat 的系统上,按如下方式创建一个虚拟主机文件。 + + $ sudo vi /etc/httpd/conf.d/cachet.conf + +---------- + + <VirtualHost *:80> + ServerName cachethost + ServerAlias cachethost + DocumentRoot "/var/www/cachet/public" + <Directory "/var/www/cachet/public"> + Require all granted + Options Indexes FollowSymLinks + AllowOverride All + Order allow,deny + Allow from all + </Directory> + </VirtualHost> + +现在重载 Apache 配置: + + $ sudo systemctl reload httpd.service + +### 第五步:配置 /etc/hosts 来测试 Cachet ### + +这时候,初始的 Cachet 状态页面应该已经启动运行了,现在来测试一下。 + +由于 Cachet 被配置为 Apache HTTP 服务的虚拟主机,我们需要调整客户机的 /etc/hosts 才能访问它。这里的客户机是指你将用来访问 Cachet 页面的那台电脑。 + +打开 /etc/hosts,添加如下条目。
+ + $ sudo vi /etc/hosts + +---------- + + cachethost + +上面的名称“cachethost”必须与 Cachet 的 Apache 虚拟主机配置文件中的 ServerName 一致。 + +### 测试 Cachet 状态页面 ### + +现在你可以访问 Cachet 状态页面了。在浏览器地址栏输入 http://cachethost,你将被转到如下的 Cachet 初始设置页面。 + +![](https://farm6.staticflickr.com/5745/20858228815_405fce1301_c.jpg) + +选择 cache/session 驱动。这里 cache 和 session 驱动都选“File”。 + +下一步,输入状态页面的基本信息(例如站点名称、域名、时区和语言),以及管理员账户。 + +![](https://farm1.staticflickr.com/611/20237229693_c22014e4fd_c.jpg) + +![](https://farm6.staticflickr.com/5707/20858228875_b056c9e1b4_c.jpg) + +![](https://farm6.staticflickr.com/5653/20671482009_8629572886_c.jpg) + +至此,你的初始状态页就准备好了。 + +![](https://farm6.staticflickr.com/5692/20237229793_f6a48f379a_c.jpg) + +接下来,按需创建组件(即你的系统单元)、事件或者维护计划。 + +例如,增加一个组件: + +![](https://farm6.staticflickr.com/5672/20848624752_9d2e0a07be_c.jpg) + +增加一个维护计划: + +公共的 Cachet 状态页就像这样: + +![](https://farm1.staticflickr.com/577/20848624842_df68c0026d_c.jpg) + +集成 SMTP 之后,你可以在状态更新时发送邮件给订阅者。并且你可以使用 CSS 和 markdown 格式完全自定义状态页面的布局和样式。 + +### 结论 ### + +Cachet 是一个相当易于使用的自托管状态页面软件。Cachet 的一个高级特性是支持全 JSON API。使用它的 RESTful API,可以轻松地把 Cachet 与独立的监控后端(例如 [Nagios][6])连接起来,自动向 Cachet 提交事件报告并更新状态。这比手动管理状态页要快得多,也高效得多。 + +最后,我想再提一点。虽然用 Cachet 搭建一个漂亮的状态页并不难,但要真正用好它却不像安装它那么容易。你需要 IT 团队全力配合,准确并及时地更新状态页,以建立所发布信息的公信力;同时,你也需要引导用户去查看状态页面。说到底,如果状态页没有得到良好维护,或者根本没有人看,那么部署它就毫无意义。当你考虑在工作环境中部署 Cachet 时,请记住这一点。 + +### 故障排查 ### + +作为补充,万一你在安装 Cachet 时遇到问题,这里有一些有用的故障排查技巧。 + +1. Cachet 页面没有加载任何东西,并且你看到如下报错。 + + production.ERROR: exception 'RuntimeException' with message 'No supported encrypter found. The cipher and / or key length are invalid.' in /var/www/cachet/bootstrap/cache/compiled.php:6695 + +**解决方案**:如下所示,确保你已创建应用密钥,并清除了配置缓存。 + + $ cd /path/to/cachet + $ sudo php artisan key:generate + $ sudo php artisan config:cache + +2. 调用 composer 命令时有如下报错。 + + - danielstjules/stringy 1.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. 
+ - laravel/framework v5.1.8 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. + - league/commonmark 0.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. + +**解决方案**:确保安装了必要的PHP扩展mbstring到你的系统上,并且兼容你的PHP。在基于Red Hat的系统上,由于我们从REMI-56库安装PHP,要从同一个库安装扩展。 + + $ sudo yum --enablerepo=remi-php56 install php-mbstring + +3. 你访问Cachet状态页面时得到一个白屏。HTTP日志显示如下错误。 + + PHP Fatal error: Uncaught exception 'UnexpectedValueException' with message 'The stream or file "/var/www/cachet/storage/logs/laravel-2015-08-21.log" could not be opened: failed to open stream: Permission denied' in /var/www/cachet/bootstrap/cache/compiled.php:12851 + +**解决方案**:尝试如下命令。 + + $ cd /var/www/cachet + $ sudo php artisan cache:clear + $ sudo chmod -R 777 storage + $ sudo composer dump-autoload + +如果上面的方法不起作用,试试禁止SELinux: + + $ sudo setenforce 0 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/setup-system-status-page.html + +作者:[Dan Nanni][a] +译者:[wyangsun](https://github.com/wyangsun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:https://cachethq.io/ +[2]:http://xmodulo.com/install-lamp-stack-ubuntu-server.html +[3]:http://ask.xmodulo.com/install-remi-repository-centos-rhel.html +[4]:http://xmodulo.com/install-lamp-stack-centos.html +[5]:http://xmodulo.com/configure-virtual-hosts-apache-http-server.html +[6]:http://xmodulo.com/monitor-common-services-nagios.html From a8737174ca2e6205174d4967072bbd173514a75e Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 1 Sep 2015 21:52:02 +0800 Subject: [PATCH 398/697] [Translate] RHCSA Series--Part op--Installing,Configuring and Securing a Web and FTP Server.md --- ...uring and Securing a Web and FTP Server.md | 178 ------------------ ...uring and Securing a Web and FTP Server.md | 175 
+++++++++++++++++ 2 files changed, 175 insertions(+), 178 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md b/sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md deleted file mode 100644 index 437612f124..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md +++ /dev/null @@ -1,178 +0,0 @@ -FSSlc Translating - -RHCSA Series: Installing, Configuring and Securing a Web and FTP Server – Part 9 -================================================================================ -A web server (also known as a HTTP server) is a service that handles content (most commonly web pages, but other types of documents as well) over to a client in a network. - -A FTP server is one of the oldest and most commonly used resources (even to this day) to make files available to clients on a network in cases where no authentication is necessary since FTP uses username and password without encryption. - -The web server available in RHEL 7 is version 2.4 of the Apache HTTP Server. As for the FTP server, we will use the Very Secure Ftp Daemon (aka vsftpd) to establish connections secured by TLS. - -![Configuring and Securing Apache and FTP Server](http://www.tecmint.com/wp-content/uploads/2015/05/Install-Configure-Secure-Apache-FTP-Server.png) - -RHCSA: Installing, Configuring and Securing Apache and FTP – Part 9 - -In this article we will explain how to install, configure, and secure a web server and a FTP server in RHEL 7. 
- -### Installing Apache and FTP Server ### - -In this guide we will use a RHEL 7 server with a static IP address of 192.168.0.18/24. To install Apache and VSFTPD, run the following command: - - # yum update && yum install httpd vsftpd - -When the installation completes, both services will be disabled initially, so we need to start them manually for the time being and enable them to start automatically beginning with the next boot: - - # systemctl start httpd - # systemctl enable httpd - # systemctl start vsftpd - # systemctl enable vsftpd - -In addition, we have to open ports 80 and 21, where the web and ftp daemons are listening, respectively, in order to allow access to those services from the outside: - - # firewall-cmd --zone=public --add-port=80/tcp --permanent - # firewall-cmd --zone=public --add-service=ftp --permanent - # firewall-cmd --reload - -To confirm that the web server is working properly, fire up your browser and enter the IP of the server. You should see the test page: - -![Confirm Apache Web Server](http://www.tecmint.com/wp-content/uploads/2015/05/Confirm-Apache-Web-Server.png) - -Confirm Apache Web Server - -As for the ftp server, we will have to configure it further, which we will do in a minute, before confirming that it’s working as expected. - -### Configuring and Securing Apache Web Server ### - -The main configuration file for Apache is located in `/etc/httpd/conf/httpd.conf`, but it may rely on other files present inside `/etc/httpd/conf.d`. - -Although the default configuration should be sufficient for most cases, it’s a good idea to become familiar with all the available options as described in the [official documentation][1]. 
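The stock httpd.conf is mostly comments, so while getting familiar with the options it helps to display only the lines that actually take effect. The sketch below demonstrates the filter on an inline sample (the sample lines mimic the RHEL 7 defaults, and are an assumption for illustration); on a real server, point grep at /etc/httpd/conf/httpd.conf instead:

```shell
# Sketch: strip comments and blank lines to see only the effective
# settings; the inline sample stands in for /etc/httpd/conf/httpd.conf.
sample='# This is the main Apache HTTP server configuration file.
ServerRoot "/etc/httpd"

Listen 80
# Load config files in the conf.d directory
IncludeOptional conf.d/*.conf'

printf '%s\n' "$sample" | grep -Ev '^[[:space:]]*(#|$)'
```

On the sample, only the three directive lines survive the filter.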
- -As always, make a backup copy of the main configuration file before editing it: - - # cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.$(date +%Y%m%d) - -Then open it with your preferred text editor and look for the following variables: - -- ServerRoot: the directory where the server’s configuration, error, and log files are kept. -- Listen: instructs Apache to listen on specific IP address and / or ports. -- Include: allows the inclusion of other configuration files, which must exist. Otherwise, the server will fail, as opposed to the IncludeOptional directive, which is silently ignored if the specified configuration files do not exist. -- User and Group: the name of the user/group to run the httpd service as. -- DocumentRoot: The directory out of which Apache will serve your documents. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations. -- ServerName: this directive sets the hostname (or IP address) and port that the server uses to identify itself. - -The first security measure will consist of creating a dedicated user and group (i.e. tecmint/tecmint) to run the web server as and changing the default port to a higher one (9000 in this case): - - ServerRoot "/etc/httpd" - Listen 192.168.0.18:9000 - User tecmint - Group tecmint - DocumentRoot "/var/www/html" - ServerName 192.168.0.18:9000 - -You can test the configuration file with. - - # apachectl configtest - -and if everything is OK, then restart the web server. - - # systemctl restart httpd - -and don’t forget to enable the new port (and disable the old one) in the firewall: - - # firewall-cmd --zone=public --remove-port=80/tcp --permanent - # firewall-cmd --zone=public --add-port=9000/tcp --permanent - # firewall-cmd --reload - -Note that, due to SELinux policies, you can only use the ports returned by - - # semanage port -l | grep -w '^http_port_t' - -for the web server. - -If you want to use another port (i.e. 
TCP port 8100), you will have to add it to SELinux port context for the httpd service: - -# semanage port -a -t http_port_t -p tcp 8100 - -![Add Apache Port to SELinux Policies](http://www.tecmint.com/wp-content/uploads/2015/05/Add-Apache-Port-to-SELinux-Policies.png) - -Add Apache Port to SELinux Policies - -To further secure your Apache installation, follow these steps: - -1. The user Apache is running as should not have access to a shell: - - # usermod -s /sbin/nologin tecmint - -2. Disable directory listing in order to prevent the browser from displaying the contents of a directory if there is no index.html present in that directory. - -Edit `/etc/httpd/conf/httpd.conf` (and the configuration files for virtual hosts, if any) and make sure that the Options directive, both at the top and at Directory block levels, is set to None: - - Options None - -3. Hide information about the web server and the operating system in HTTP responses. Edit /etc/httpd/conf/httpd.conf as follows: - - ServerTokens Prod - ServerSignature Off - -Now you are ready to start serving content from your /var/www/html directory. - -### Configuring and Securing FTP Server ### - -As in the case of Apache, the main configuration file for Vsftpd `(/etc/vsftpd/vsftpd.conf)` is well commented and while the default configuration should suffice for most applications, you should become acquainted with the documentation and the man page `(man vsftpd.conf)` in order to operate the ftp server more efficiently (I can’t emphasize that enough!). 
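One directive that often confuses newcomers is local_umask. The umask bits are cleared from the base modes, 666 for new files and 777 for new directories (the usual defaults). The following sketch just performs that arithmetic, so you can predict the permissions that uploaded files will get:

```shell
# Sketch: permission bits produced by local_umask=022; the 666/777
# base modes are the usual defaults for new files and directories.
umask_val=0022
printf 'new files: %03o\n' "$((0666 & ~umask_val))"
printf 'new dirs:  %03o\n' "$((0777 & ~umask_val))"
```

With the 022 value used below, uploads arrive as mode 644 and new directories as mode 755.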
- -In our case, these are the directives used: - - anonymous_enable=NO - local_enable=YES - write_enable=YES - local_umask=022 - dirmessage_enable=YES - xferlog_enable=YES - connect_from_port_20=YES - xferlog_std_format=YES - chroot_local_user=YES - allow_writeable_chroot=YES - listen=NO - listen_ipv6=YES - pam_service_name=vsftpd - userlist_enable=YES - tcp_wrappers=YES - -By using `chroot_local_user=YES`, local users will be (by default) placed in a chroot’ed jail in their home directory right after login. This means that local users will not be able to access any files outside their corresponding home directories. - -Finally, to allow ftp to read files in the user’s home directory, set the following SELinux boolean: - - # setsebool -P ftp_home_dir on - -You can now connect to the ftp server using a client such as Filezilla: - -![Check FTP Connection](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FTP-Connection.png) - -Check FTP Connection - -Note that the `/var/log/xferlo`g log records downloads and uploads, which concur with the above directory listing: - -![Monitor FTP Download and Upload](http://www.tecmint.com/wp-content/uploads/2015/05/Monitor-FTP-Download-Upload.png) - -Monitor FTP Download and Upload - -Read Also: [Limit FTP Network Bandwidth Used by Applications in a Linux System with Trickle][2] - -### Summary ### - -In this tutorial we have explained how to set up a web and a ftp server. Due to the vastness of the subject, it is not possible to cover all the aspects of these topics (i.e. virtual web hosts). Thus, I recommend you also check other excellent articles in this website about [Apache][3]. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-and-ftp-in-rhel/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://httpd.apache.org/docs/2.4/ -[2]:http://www.tecmint.com/manage-and-limit-downloadupload-bandwidth-with-trickle-in-linux/ -[3]:http://www.google.com/cse?cx=partner-pub-2601749019656699:2173448976&ie=UTF-8&q=virtual+hosts&sa=Search&gws_rd=cr&ei=Dy9EVbb0IdHisASnroG4Bw#gsc.tab=0&gsc.q=apache diff --git a/translated/tech/RHCSA/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md b/translated/tech/RHCSA/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md new file mode 100644 index 0000000000..190c32ece5 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md @@ -0,0 +1,175 @@ +RHCSA 系列: 安装,配置及加固一个 Web 和 FTP 服务器 – Part 9 +================================================================================ +Web 服务器(也被称为 HTTP 服务器)是在网络中将内容(最为常见的是网页,但也支持其他类型的文件)进行处理并传递给客户端的服务。 + +FTP 服务器是最为古老且最常使用的资源之一(即便到今天也是这样),在身份认证不是必须的情况下,它可使得在一个网络里文件对于客户端可用,因为 FTP 使用没有加密的用户名和密码。 + +在 RHEL 7 中可用的 web 服务器是版本号为 2.4 的 Apache HTTP 服务器。至于 FTP 服务器,我们将使用 Very Secure Ftp Daemon (又名 vsftpd) 来建立用 TLS 加固的连接。 + +![配置和加固 Apache 和 FTP 服务器](http://www.tecmint.com/wp-content/uploads/2015/05/Install-Configure-Secure-Apache-FTP-Server.png) + +RHCSA: 安装,配置及加固 Apache 和 FTP 服务器 – Part 9 + +在这篇文章中,我们将解释如何在 RHEL 7 中安装,配置和加固 web 和 FTP 服务器。 + +### 安装 Apache 和 FTP 服务器 ### + +在本指导中,我们将使用一个静态 IP 地址为 192.168.0.18/24 的 RHEL 7 服务器。为了安装 Apache 和 VSFTPD,运行下面的命令: + + # yum update && yum install httpd vsftpd + +当安装完成后,这两个服务在开始时是默认被禁用的,所以我们需要暂时手动开启它们并让它们在下一次启动时自动地开启它们: 
+ + # systemctl start httpd + # systemctl enable httpd + # systemctl start vsftpd + # systemctl enable vsftpd + +另外,我们必须打开 80 和 21 端口,它们分别是 web 和 ftp 守护进程监听的端口,为的是允许从外面访问这些服务: + + # firewall-cmd --zone=public --add-port=80/tcp --permanent + # firewall-cmd --zone=public --add-service=ftp --permanent + # firewall-cmd --reload + +为了确认 web 服务工作正常,打开你的浏览器并输入服务器的 IP,则你应该可以看到如下的测试页面: + +![确认 Apache Web 服务器](http://www.tecmint.com/wp-content/uploads/2015/05/Confirm-Apache-Web-Server.png) + +确认 Apache Web 服务器 + +对于 ftp 服务器,在确保它如期望中的那样工作之前,我们必须进一步地配置它,我们将在几分钟后来做这件事。 + +### 配置并加固 Apache Web 服务器 ### + +Apache 的主要配置文件位于 `/etc/httpd/conf/httpd.conf` 中,但它可能依赖 `/etc/httpd/conf.d` 中的其他文件。 + +尽管默认的配置对于大多数的情形是充分的,熟悉描述在 [官方文档][1] 中的所有可用选项是一个不错的主意。 + +同往常一样,在编辑主配置文件前先做一个备份: + + # cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.$(date +%Y%m%d) + +然后用你钟爱的文本编辑器打开它,并查找下面这些变量: + +- ServerRoot: 服务器的配置,错误和日志文件保存的目录。 +- Listen: 通知 Apache 去监听特定的 IP 地址或端口。 +- Include: 允许包含其他配置文件,这个必须存在,否则,服务器将会崩溃。它恰好与 IncludeOptional 相反,假如特定的配置文件不存在,它将静默地忽略掉它们。 +- User 和 Group: 运行 httpd 服务的用户/组的名称。 +- DocumentRoot: Apache 为你的文档服务的目录。默认情况下,所有的请求将在这个目录中被获取,但符号链接和别名可能会被用于指向其他位置。 +- ServerName: 这个指令将设定用于识别它自身的主机名(或 IP 地址)和端口。 + +安全措施的第一步将包含创建一个特定的用户和组(如 tecmint/tecmint)来运行 web 服务器以及更改默认的端口为一个更高的端口(在这个例子中为 9000): + + ServerRoot "/etc/httpd" + Listen 192.168.0.18:9000 + User tecmint + Group tecmint + DocumentRoot "/var/www/html" + ServerName 192.168.0.18:9000 + +你可以使用下面的命令来测试配置文件: + + # apachectl configtest + +假如一切 OK,接着重启 web 服务器。 + + # systemctl restart httpd + +并别忘了在防火墙中开启新的端口(和禁用旧的端口): + + + # firewall-cmd --zone=public --remove-port=80/tcp --permanent + # firewall-cmd --zone=public --add-port=9000/tcp --permanent + # firewall-cmd --reload + +请注意,由于 SELinux 的策略,你只可使用如下命令所返回的端口来分配给 web 服务器。 + + # semanage port -l | grep -w '^http_port_t' + +假如你想使用另一个端口(如 TCP 端口 8100)来给 httpd 服务,你必须将它加到 SELinux 的端口上下文: + + # semanage port -a -t http_port_t -p tcp 8100 + +![添加 Apache 端口到 SELinux 
策略](http://www.tecmint.com/wp-content/uploads/2015/05/Add-Apache-Port-to-SELinux-Policies.png) + +添加 Apache 端口到 SELinux 策略 + +为了进一步加固你安装的 Apache,请遵循以下步骤: + +1. 运行 Apache 的用户不应该拥有访问 shell 的能力: + + # usermod -s /sbin/nologin tecmint + +2. 禁用目录列表功能,以防止在目录中不存在 index.html 文件时,浏览器直接展示该目录的内容。 + +编辑 `/etc/httpd/conf/httpd.conf`(以及虚拟主机的配置文件,假如有的话),并确保 Options 指令在顶层配置和各 Directory 配置块中都被设置为 None: + + Options None + +3. 在 HTTP 响应中隐藏有关 web 服务器和操作系统的信息。像下面这样编辑文件 `/etc/httpd/conf/httpd.conf`: + + ServerTokens Prod + ServerSignature Off + +现在,你已经做好了从 `/var/www/html` 目录开始提供内容的准备了。 + +### 配置并加固 FTP 服务器 ### + +和 Apache 的情形类似,Vsftpd 的主配置文件 `(/etc/vsftpd/vsftpd.conf)` 带有详细的注释,虽然对于大多数的应用实例,默认的配置应该足够了,但为了更有效率地操作 ftp 服务器,你应该开始熟悉相关的文档和 man 页 `(man vsftpd.conf)`(对于这点,再多的强调也不为过!)。 + +在我们的示例中,使用了这些指令: + + anonymous_enable=NO + local_enable=YES + write_enable=YES + local_umask=022 + dirmessage_enable=YES + xferlog_enable=YES + connect_from_port_20=YES + xferlog_std_format=YES + chroot_local_user=YES + allow_writeable_chroot=YES + listen=NO + listen_ipv6=YES + pam_service_name=vsftpd + userlist_enable=YES + tcp_wrappers=YES + +通过使用 `chroot_local_user=YES`,本地用户(默认情况下)在登录之后,会立即被限制(chroot)在其自己的家目录中。这意味着本地用户将不能访问其家目录之外的任何文件。 + +最后,为了让 ftp 能够读取用户家目录中的文件,设置如下的 SELinux 布尔值: + + # setsebool -P ftp_home_dir on + +现在,你可以使用一个客户端例如 Filezilla 来连接 ftp 服务器: + +![查看 FTP 连接](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FTP-Connection.png) + +查看 FTP 连接 + +注意,`/var/log/xferlog` 日志将会记录下载和上传的情况,这与上图的目录列表一致: + +![监视 FTP 的下载和上传情况](http://www.tecmint.com/wp-content/uploads/2015/05/Monitor-FTP-Download-Upload.png) + +监视 FTP 的下载和上传情况 + +另外请参考: [在 Linux 系统中使用 Trickle 来限制应用使用的 FTP 网络带宽][2] + +### 总结 ### + +在本教程中,我们解释了如何设置 web 和 ftp 服务器。由于这个主题的广泛性,涵盖这些话题的所有方面是不可能的(例如虚拟主机)。因此,我推荐你也阅读这个网站中其他有关 [Apache][3] 的优秀文章。 + +-------------------------------------------------------------------------------- + +via: 
http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-and-ftp-in-rhel/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://httpd.apache.org/docs/2.4/ +[2]:http://www.tecmint.com/manage-and-limit-downloadupload-bandwidth-with-trickle-in-linux/ +[3]:http://www.google.com/cse?cx=partner-pub-2601749019656699:2173448976&ie=UTF-8&q=virtual+hosts&sa=Search&gws_rd=cr&ei=Dy9EVbb0IdHisASnroG4Bw#gsc.tab=0&gsc.q=apache From ee751a1697e297f7f0c1e432008759e8149853f9 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 1 Sep 2015 21:55:40 +0800 Subject: [PATCH 399/697] Update RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译这篇文章。 --- ...nd Network Traffic Control Using FirewallD and Iptables.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md b/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md index fd27f4c6fc..022953429d 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md @@ -1,3 +1,5 @@ +FSSlc Translating + RHCSA Series: Firewall Essentials and Network Traffic Control Using FirewallD and Iptables – Part 11 ================================================================================ In simple words, a firewall is a security system that controls the incoming and 
outgoing traffic in a network based on a set of predefined rules (such as the packet destination / source or type of traffic, for example). @@ -188,4 +190,4 @@ via: http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in [3]:http://www.tecmint.com/configure-iptables-firewall/ [4]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html [5]:http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-and-ftp-in-rhel/ -[6]:http://www.tecmint.com/firewalld-rules-for-centos-7/ \ No newline at end of file +[6]:http://www.tecmint.com/firewalld-rules-for-centos-7/ From c0dc3defc1c065b2b3c8ef18bf1948992f2e6654 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 1 Sep 2015 23:12:59 +0800 Subject: [PATCH 400/697] PUB:20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu @geekpi --- ...e Kids Having Fun With Linux Terminal In Ubuntu.md | 11 ++++------- 1 file changed, 4 insertions(+), 7 deletions(-) rename {translated/share => published}/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md (68%) diff --git a/translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md b/published/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md similarity index 68% rename from translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md rename to published/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md index 3d0efff7b5..e7e2d88e03 100644 --- a/translated/share/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md +++ b/published/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md @@ -1,13 +1,10 @@ -看这些孩子在Ubuntu的Linux终端下玩耍 +看这些孩子在 Ubuntu 的 Linux 终端下玩耍 ================================================================================ -我发现了一个孩子们在他们的计算机教室里玩得很开心的视频。我不知道他们在哪里,但我猜测是在印度尼西亚或者马来西亚。 - -注:youtube 视频 - 
+我发现了一个孩子们在他们的计算机教室里玩得很开心的视频。我不知道他们在哪里,但我猜测是在印度尼西亚或者马来西亚。视频请自行搭梯子: http://www.youtube.com/z8taQPomp0Y ### 在Linux终端下面跑火车 ### -这里没有魔术。只是一个叫做“sl”的命令行工具。我假定它是在把ls打错的情况下为了好玩而开发的。如果你曾经在Linux的命令行下工作,你会知道ls是一个最常使用的一个命令,也许也是一个最经常打错的命令。 +这里没有魔术。只是一个叫做“sl”的命令行工具。我想它是在把ls打错的情况下为了好玩而开发的。如果你曾经在Linux的命令行下工作,你会知道ls是一个最常使用的一个命令,也许也是一个最经常打错的命令。 如果你想从这个终端下的火车获得一些乐趣,你可以使用下面的命令安装它。 @@ -30,7 +27,7 @@ via: http://itsfoss.com/ubuntu-terminal-train/ 作者:[Abhishek][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From bb56ba76bec513c707365f6ef9b866cfbbca79f8 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 1 Sep 2015 23:39:27 +0800 Subject: [PATCH 401/697] PUB:20150722 Howto Interactively Perform Tasks with Docker using Kitematic @ictlyh --- ...rform Tasks with Docker using Kitematic.md | 21 ++++++++++++------- 1 file changed, 13 insertions(+), 8 deletions(-) rename {translated/tech => published}/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md (60%) diff --git a/translated/tech/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md b/published/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md similarity index 60% rename from translated/tech/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md rename to published/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md index 8ad03dd06c..ac93dceb50 100644 --- a/translated/tech/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md +++ b/published/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md @@ -1,8 +1,9 @@ -如何在 Docker 中通过 Kitematic 交互式执行任务 +如何在 Windows 上通过 Kitematic 使用 Docker ================================================================================ -在本篇文章中,我们会学习如何在 Windows 操作系统上安装 Kitematic 以及部署一个 Hello World Nginx Web 服务器。Kitematic 
是一个自由开源软件,它有现代化的界面设计使得允许我们在 Docker 中交互式执行任务。Kitematic 设计非常漂亮、界面也很不错。我们可以简单快速地开箱搭建我们的容器而不需要输入命令,我们可以在图形用户界面中通过简单的点击从而在容器上部署我们的应用。Kitematic 集成了 Docker Hub,允许我们搜索、拉取任何需要的镜像,并在上面部署应用。它同时也能很好地切换到命令行用户接口模式。目前,它包括了自动映射端口、可视化更改环境变量、配置卷、精简日志以及其它功能。 -下面是在 Windows 上安装 Kitematic 并部署 Hello World Nginx Web 服务器的 3 个简单步骤。 +在本篇文章中,我们会学习如何在 Windows 操作系统上安装 Kitematic 以及部署一个测试性的 Nginx Web 服务器。Kitematic 是一个具有现代化的界面设计的自由开源软件,它可以让我们在 Docker 中交互式执行任务。Kitematic 设计的非常漂亮、界面美观。使用它,我们可以简单快速地开箱搭建我们的容器而不需要输入命令,可以在图形用户界面中通过简单的点击从而在容器上部署我们的应用。Kitematic 集成了 Docker Hub,允许我们搜索、拉取任何需要的镜像,并在上面部署应用。它同时也能很好地切换到命令行用户接口模式。目前,它包括了自动映射端口、可视化更改环境变量、配置卷、流式日志以及其它功能。 + +下面是在 Windows 上安装 Kitematic 并部署测试性 Nginx Web 服务器的 3 个简单步骤。 ### 1. 下载 Kitematic ### @@ -16,15 +17,15 @@ ### 2. 安装 Kitematic ### -下载好可执行安装程序之后,我们现在打算在我们的 Windows 操作系统上安装 Kitematic。安装程序现在会开始下载并安装运行 Kitematic 需要的依赖,包括 Virtual Box 和 Docker。如果已经在系统上安装了 Virtual Box,它会把它升级到最新版本。安装程序会在几分钟内完成,但取决于你网络和系统的速度。如果你还没有安装 Virtual Box,它会问你是否安装 Virtual Box 网络驱动。建议安装它,因为它有助于 Virtual Box 的网络。 +下载好可执行安装程序之后,我们现在就可以在我们的 Windows 操作系统上安装 Kitematic了。安装程序现在会开始下载并安装运行 Kitematic 需要的依赖软件,包括 Virtual Box 和 Docker。如果已经在系统上安装了 Virtual Box,它会把它升级到最新版本。安装程序会在几分钟内完成,但取决于你网络和系统的速度。如果你还没有安装 Virtual Box,它会问你是否安装 Virtual Box 网络驱动。建议安装它,因为它用于 Virtual Box 的网络功能。 ![安装 Kitematic](http://blog.linoxide.com/wp-content/uploads/2015/06/installing-kitematic.png) -需要的依赖 Docker 和 Virtual Box 安装完成并运行后,会让我们登录到 Docker Hub。如果我们还没有账户或者还不想登录,可以点击 **SKIP FOR NOW** 继续后面的步骤。 +所需的依赖 Docker 和 Virtual Box 安装完成并运行后,会让我们登录到 Docker Hub。如果我们还没有账户或者还不想登录,可以点击 **SKIP FOR NOW** 继续后面的步骤。 ![登录 Docker Hub](http://blog.linoxide.com/wp-content/uploads/2015/06/login-docker-hub.jpg) -如果你还没有账户,你可以在应用程序上点击注册链接并在 Docker Hub 上创建账户。 +如果你还没有账户,你可以在应用程序上点击注册(Sign Up)链接并在 Docker Hub 上创建账户。 完成之后,就会出现 Kitematic 应用程序的第一个界面。正如下面看到的这样。我们可以搜索可用的 docker 镜像。 @@ -50,7 +51,11 @@ ### 总结 ### -我们终于成功在 Windows 操作系统上安装了 Kitematic 并部署了一个 Hello World Ngnix 服务器。总是推荐下载安装 Kitematic 最新的发行版,因为会增加很多新的高级功能。由于 Docker 运行在 64 位平台,当前 Kitematic 也是为 64 
位操作系统构建。它只能在 Windows 7 以及更高版本上运行。在这篇教程中,我们部署了一个 Nginx Web 服务器,类似地我们可以在 Kitematic 中简单的点击就能通过镜像部署任何 docker 容器。Kitematic 已经有可用的 Mac OS X 和 Windows 版本,Linux 版本也在开发中很快就会发布。如果你有任何疑问、建议或者反馈,请在下面的评论框中写下来以便我们更改地改进或更新我们的内容。非常感谢!Enjoy :-)
+我们终于成功在 Windows 操作系统上安装了 Kitematic 并部署了一个 Hello World Nginx 服务器。推荐下载安装 Kitematic 最新的发行版,因为会增加很多新的高级功能。由于 Docker 运行在 64 位平台,当前 Kitematic 也是为 64 位操作系统构建。它只能在 Windows 7 以及更高版本上运行。
+
+在这篇教程中,我们部署了一个 Nginx Web 服务器,类似地我们可以在 Kitematic 中简单地点击就能通过镜像部署任何 docker 容器。Kitematic 已经有可用的 Mac OS X 和 Windows 版本,Linux 版本也在开发中很快就会发布。
+
+如果你有任何疑问、建议或者反馈,请在下面的评论框中写下来以便我们更好地改进或更新我们的内容。非常感谢!Enjoy :-)
 
 --------------------------------------------------------------------------------
 
@@ -58,7 +63,7 @@ via: http://linoxide.com/linux-how-to/interactively-docker-kitematic/
 
 作者:[Arun Pyasi][a]
 译者:[ictlyh](https://github.com/ictlyh)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

From 33d8630650a99d389246091d55d1c540cdd0b757 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Tue, 1 Sep 2015 23:46:42 +0800
Subject: [PATCH 402/697] Delete 20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md

---
 ...r Upgrade to Linux Kernel 4.2 in Ubuntu.md | 89 -------------------
 1 file changed, 89 deletions(-)
 delete mode 100644 sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md

diff --git a/sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md b/sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md
deleted file mode 100644
index b26e225586..0000000000
--- a/sources/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md
+++ /dev/null
@@ -1,89 +0,0 @@
-translation by strugglingyouth
-How to Install / Upgrade to Linux Kernel 4.2 in Ubuntu
-================================================================================ 
-![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) - -Linux Kernel 4.2 was released yesterday, at noon. Linus Torvalds wrote on [lkml.org][1]: - -> So judging by how little happened this week, it wouldn’t have been a mistake to release 4.2 last week after all, but hey, there’s certainly a few fixes here, and it’s not like delaying 4.2 for a week should have caused any problems either. -> -> So here it is, and the merge window for 4.3 is now open. I already have a few pending early pull requests, but as usual I’ll start processing them tomorrow and give the release some time to actually sit. -> -> The shortlog from rc8 is tiny, and appended. The patch is pretty tiny too… - -### What’s New in Kernel 4.2: ### - -- rewrites of Intel Assembly x86 code -- support for new ARM boards and SoCs -- F2FS per-file encryption -- The AMDGPU kernel DRM driver -- VCE1 video encode support for the Radeon DRM driver -- Initial support for Intel Broxton Atom SoCs -- Support for ARCv2 and HS38 CPU cores. -- added queue spinlocks support -- many other improvements and updated drivers. - -### How to Install Kernel 4.2 in Ubuntu: ### - -The binary packages of this kernel release are available for download at link below: - -- [Download Kernel 4.2 (.DEB)][1] - -First check out your OS type, 32-bit (i386) or 64-bit (amd64), then download and install the packages below in turn: - -1. linux-headers-4.2.0-xxx_all.deb -1. linux-headers-4.2.0-xxx-generic_xxx_i386/amd64.deb -1. linux-image-4.2.0-xxx-generic_xxx_i386/amd64.deb - -After installing the kernel, you may run `sudo update-grub` command in terminal (Ctrl+Alt+T) to refresh grub boot-loader. - -If you need a low latency system (e.g. for recording audio) then download & install below packages instead: - -1. linux-headers-4.2.0_xxx_all.deb -1. linux-headers-4.2.0-xxx-lowlatency_xxx_i386/amd64.deb -1. 
linux-image-4.2.0-xxx-lowlatency_xxx_i386/amd64.deb - -For Ubuntu Server without a graphical UI, you may run below commands one by one to grab packages via wget and install them via dpkg: - -For 64-bit system run: - - cd /tmp/ - - wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200_4.2.0-040200.201508301530_all.deb - - wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200-generic_4.2.0-040200.201508301530_amd64.deb - - wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-image-4.2.0-040200-generic_4.2.0-040200.201508301530_amd64.deb - - sudo dpkg -i linux-headers-4.2.0-*.deb linux-image-4.2.0-*.deb - -For 32-bit system, run: - - cd /tmp/ - - wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200_4.2.0-040200.201508301530_all.deb - - wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200-generic_4.2.0-040200.201508301530_i386.deb - - wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-image-4.2.0-040200-generic_4.2.0-040200.201508301530_i386.deb - - sudo dpkg -i linux-headers-4.2.0-*.deb linux-image-4.2.0-*.deb - -Finally restart your computer to take effect. - -To revert back, remove old kernels, see [install kernel simply via a script][3]. 
-
--------------------------------------------------------------------------------
-
-via: http://ubuntuhandbook.org/index.php/2015/08/upgrade-kernel-4-2-ubuntu/
-
-作者:[Ji m][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://ubuntuhandbook.org/index.php/about/
-[1]:https://lkml.org/lkml/2015/8/30/96
-[2]:http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/
-[3]:http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/

From f7d25f28a1fe4099e8ffa1203683caa827215f8b Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Tue, 1 Sep 2015 23:47:13 +0800
Subject: [PATCH 403/697] Create 20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md

---
 ...r Upgrade to Linux Kernel 4.2 in Ubuntu.md | 94 +++++++++++++++++++
 1 file changed, 94 insertions(+)
 create mode 100644 translated/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md

diff --git a/translated/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md b/translated/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md
new file mode 100644
index 0000000000..71d4985fe0
--- /dev/null
+++ b/translated/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md
@@ -0,0 +1,94 @@
+
+在 Ubuntu 中如何安装/升级 Linux 内核到4.2
+================================================================================
+![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png)
+
+Linux 内核 4.2 已于昨天中午发布。Linus Torvalds 在 [lkml.org][1] 上写道:
+
+> 从这一周的改动如此之少来看,其实上周就发布 4.2 也不算错,不过这里确实有几处修复,而且把 4.2 推迟一周发布也不至于引发什么问题。
+
+> 
+
+> 所以 4.2 就在这里发布了,4.3 的合并窗口现已开放。我已经收到了几个提前提交的合并请求(pull request),但像往常一样,我会从明天开始处理它们,先让这个发布版本沉淀一段时间。
+
+> 
+
+> 从 rc8 以来的 shortlog 非常短,已附在后面。这个补丁本身也非常小……
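在动手升级之前,不妨先确认当前正在运行的内核版本,以及系统是 64 位还是 32 位,以便稍后在 amd64 和 i386 软件包之间做出选择。下面是一个最小的示例;`uname` 命令在任何 Linux 发行版上都可用,注释中提到的输出只是示意,实际结果因系统而异:

```shell
# 查看当前正在运行的内核版本(例如可能显示 3.19.0-15-generic)
uname -r

# 查看系统架构:x86_64 表示 64 位(选择 amd64 包),i686/i386 表示 32 位
uname -m
```

如果 `uname -r` 显示的版本号已经以 4.2.0 开头,说明无需重复安装。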
+
+
+### 新内核 4.2 有哪些改进?: ###
+
+- 重写了英特尔的 x86 汇编代码
+- 支持新的 ARM 板和 SoCs
+- 支持 F2FS 的单文件(per-file)加密
+- 新增 AMDGPU 内核 DRM 驱动程序
+- Radeon DRM 驱动支持 VCE1 视频编码
+- 初步支持英特尔的 Broxton Atom SoCs
+- 支持 ARCv2 和 HS38 CPU 内核
+- 增加了排队自旋锁的支持
+- 许多其他的改进和驱动更新。
+
+### 在 Ubuntu 中如何安装 4.2 内核 : ###
+
+此内核版本的二进制包可以从下面的链接下载:
+
+- [下载 4.2 内核(.DEB)][1]
+
+首先检查你的操作系统类型,32位(i386)的或64位(amd64)的,然后使用下面的方式依次下载并安装软件包:
+
+1. linux-headers-4.2.0-xxx_all.deb
+1. linux-headers-4.2.0-xxx-generic_xxx_i386/amd64.deb
+1. linux-image-4.2.0-xxx-generic_xxx_i386/amd64.deb
+
+安装内核后,在终端(Ctrl+Alt+T)中运行`sudo update-grub`命令来更新 grub boot-loader。
+
+如果你需要一个低延迟系统(例如用于录制音频),请下载并安装下面的包:
+
+1. linux-headers-4.2.0_xxx_all.deb
+1. linux-headers-4.2.0-xxx-lowlatency_xxx_i386/amd64.deb
+1. linux-image-4.2.0-xxx-lowlatency_xxx_i386/amd64.deb
+
+对于没有图形用户界面的 Ubuntu 服务器,你可以运行下面的命令通过 wget 来逐一下载,并通过 dpkg 来安装:
+
+对于64位的系统请运行:
+
+    cd /tmp/
+
+    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200_4.2.0-040200.201508301530_all.deb
+
+    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200-generic_4.2.0-040200.201508301530_amd64.deb
+
+    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-image-4.2.0-040200-generic_4.2.0-040200.201508301530_amd64.deb
+
+    sudo dpkg -i linux-headers-4.2.0-*.deb linux-image-4.2.0-*.deb
+
+对于32位的系统,请运行:
+
+    cd /tmp/
+
+    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200_4.2.0-040200.201508301530_all.deb
+
+    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200-generic_4.2.0-040200.201508301530_i386.deb
+
+    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-image-4.2.0-040200-generic_4.2.0-040200.201508301530_i386.deb
+
+    sudo dpkg -i linux-headers-4.2.0-*.deb linux-image-4.2.0-*.deb
+
+最后,重新启动计算机才能生效。
+
+要恢复或删除旧的内核,请参阅[通过脚本安装内核][3]。
+
+--------------------------------------------------------------------------------
+
+via: 
http://ubuntuhandbook.org/index.php/2015/08/upgrade-kernel-4-2-ubuntu/ + +作者:[Ji m][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://lkml.org/lkml/2015/8/30/96 +[2]:http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/ +[3]:http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ From 411308a16dfdcafbb88c9895c858220f4cee43fd Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Tue, 1 Sep 2015 23:51:53 +0800 Subject: [PATCH 404/697] sources/tech/20150901 Setting Up High-Performance HHVM and Nginx or Apache with MariaDB on Debian or Ubuntu.md --- ...and Nginx or Apache with MariaDB on Debian or Ubuntu.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md b/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md index d157b2b7aa..ef8897a39e 100644 --- a/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md +++ b/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md @@ -1,7 +1,8 @@ -Setting Up High-Performance ‘HHVM’ and Nginx/Apache with MariaDB on Debian/Ubuntu +translating by mike +在 Debian 或者 Ubuntu 上配置高性能的 HHVM、Nginx/Apache 和 MariaDB ================================================================================ HHVM stands for HipHop Virtual Machine, is an open source virtual machine created for running Hack (it’s a programming language for HHVM) and PHP written applications. HHVM uses a last minute compilation path to achieve remarkable performance while keeping the flexibility that PHP programmers are addicted to. 
Till date, HHVM has achieved over a 9x increase in http request throughput and more than 5x cut in memory utilization (when running on low system memory) for Facebook compared with the PHP engine + [APC (Alternative PHP Cache)][1]. - +HHVM全称为 HipHop Virtual Machine, 它是一个由 running Hack(一种编程语言)和 PHP的相关应用组成的开源虚拟机。HHVM HHVM can also be used along with a FastCGI-based web-server like Nginx or Apache. ![Install HHVM, Nginx and Apache with MariaDB](http://www.tecmint.com/wp-content/uploads/2015/08/Install-HHVM-Nginx-Apache-MariaDB.png) @@ -179,4 +180,4 @@ via: http://www.tecmint.com/install-hhvm-and-nginx-apache-with-mariadb-on-debian 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/admin/ -[1]:http://www.tecmint.com/install-apc-alternative-php-cache-in-rhel-centos-fedora/ \ No newline at end of file +[1]:http://www.tecmint.com/install-apc-alternative-php-cache-in-rhel-centos-fedora/ From 426c7e2414476858426cb605a62cb52a4a87a504 Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Wed, 2 Sep 2015 00:48:54 +0800 Subject: [PATCH 405/697] sources/tech/20150901 Setting Up High-Performance HHVM and Nginx or Apache with MariaDB on Debian or Ubuntu.md --- ...Apache with MariaDB on Debian or Ubuntu.md | 107 +++++++++--------- 1 file changed, 54 insertions(+), 53 deletions(-) diff --git a/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md b/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md index ef8897a39e..60b2137c55 100644 --- a/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md +++ b/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md @@ -1,19 +1,21 @@ -translating by mike 在 Debian 或者 Ubuntu 上配置高性能的 HHVM、Nginx/Apache 和 MariaDB 
================================================================================
-HHVM stands for HipHop Virtual Machine, is an open source virtual machine created for running Hack (it’s a programming language for HHVM) and PHP written applications. HHVM uses a last minute compilation path to achieve remarkable performance while keeping the flexibility that PHP programmers are addicted to. Till date, HHVM has achieved over a 9x increase in http request throughput and more than 5x cut in memory utilization (when running on low system memory) for Facebook compared with the PHP engine + [APC (Alternative PHP Cache)][1].
-HHVM全称为 HipHop Virtual Machine, 它是一个由 running Hack(一种编程语言)和 PHP的相关应用组成的开源虚拟机。HHVM
+HHVM 全称为 HipHop Virtual Machine,它是一个开源虚拟机,用于运行由 Hack(一种编程语言)和 PHP 编写的应用。HHVM 在保证 PHP 程序员最为看重的高灵活性的前提下,通过即时(JIT)编译来获得可观的性能。到目前为止,相比 PHP 引擎加 [APC (Alternative PHP Cache)][1] 的组合,HHVM 为 Facebook 将 HTTP 请求的吞吐量提升了 9 倍以上,并将内存占用(在系统内存较少时运行)降低了 5 倍左右。
+
 HHVM can also be used along with a FastCGI-based web-server like Nginx or Apache.
+同时,HHVM 也可以通过 FastCGI 接口,与 Nginx 或者 Apache 这样的 Web 服务器集成。
 
 ![Install HHVM, Nginx and Apache with MariaDB](http://www.tecmint.com/wp-content/uploads/2015/08/Install-HHVM-Nginx-Apache-MariaDB.png)
 
-Install HHVM, Nginx and Apache with MariaDB
+安装 HHVM、Nginx/Apache 和 MariaDB
 
-In this tutorial we shall look at steps for setting up Nginx/Apache web server, MariaDB database server and HHVM. For this setup, we will use Ubuntu 15.04 (64-bit) as HHVM runs on 64-bit system only, although Debian and Linux Mint distributions are also supported.
+在本教程中,我们将一起完成 Nginx/Apache Web 服务器、数据库服务器 MariaDB 和 HHVM 的设置。我们将使用 Ubuntu 15.04(64 位),因为 HHVM 只能运行在 64 位系统上;同时,该教程也适用于 Debian 和 Linux Mint。
 
-### Step 1: Installing Nginx and Apache Web Server ###
+### Step 1: 安装 Nginx 或者 Apache 服务器 ###
 
-1. First do a system upgrade to update repository list with the help of following commands.
+1. 首先,更新软件仓库列表并对系统进行一次升级: 
# apt-get update && apt-get upgrade @@ -21,139 +23,138 @@ In this tutorial we shall look at steps for setting up Nginx/Apache web server, System Upgrade -2. As I said HHVM can be used with both Nginx and Apache web server. So, it’s your choice which web server you will going to use, but here we will show you both web servers installation and how to use them with HHVM. +2. 正如我之前说的,HHVM 能和 Nginx 和 Apache 进行集成。所以,究竟使用哪个服务器,这是你的自由,不过,我们会教你如何安装这两个服务器。 -#### Installing Nginx #### +#### 安装 Nginx #### -In this step, we will install Nginx/Apache web server from the packages repository using following command. +我们通过下面的命令安装 Nginx/Apache 服务器 # apt-get install nginx ![Install Nginx Web Server](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Nginx-Web-Server.png) -Install Nginx Web Server +安装 Nginx 服务器 -#### Installing Apache #### +#### 安装 Apache #### # apt-get install apache2 ![Install Apache Web Server](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Apache-Web-Server.png) -Install Apache Web Server +安装 Apache 服务器 -At this point, you should be able to navigate to following URL and you will able to see Nginx or Apache default page. +完成这一步,你能通过以下的链接看到 Nginx 或者 Apache 的默认页面 http://localhost OR http://IP-Address -#### Nginx Default Page #### +#### Nginx 默认页面 #### ![Nginx Welcome Page](http://www.tecmint.com/wp-content/uploads/2015/08/Nginx-Welcome-Page.png) -Nginx Welcome Page +Nginx 默认页面 -#### Apache Default Page #### +#### Apache 默认页面 #### ![Apache Default Page](http://www.tecmint.com/wp-content/uploads/2015/08/Apache-Default-Page.png) -Apache Default Page +Apache 默认页面 -### Step 2: Install and Configure MariaDB ### +### Step 2: 安装和配置 MariaDB ### -3. In this step, we will install MariaDB, as it providers better performance as compared to MySQL. +3. 
这一步,我们将通过如下命令安装 MariaDB,它的性能要比 MySQL 更好。
 
    # apt-get install mariadb-client mariadb-server
 
 ![Install MariaDB Database](http://www.tecmint.com/wp-content/uploads/2015/08/Install-MariaDB-Database.png)
 
-Install MariaDB Database
+安装 MariaDB
 
-4. After MariaDB successful installation, you can start MariaDB and set root password to secure the database:
+4. 在 MariaDB 成功安装之后,你可以启动它,并且设置 root 密码来保护数据库:
 
    # systemctl start mysql
    # mysql_secure_installation
 
-Answer the following questions by typing `y` or `n` and press enter. Make sure you read the instructions carefully before answering the questions.
+回答以下问题,只需要按下`y`或者 `n`并且回车。请确保你仔细地阅读过说明。
 
    Enter current password for root (enter for none) = press enter
    Set root password? [Y/n] = y
    Remove anonymous users[y/n] = y
    Disallow root login remotely[y/n] = y
    Remove test database and access to it [y/n] = y
-    Reload privileges tables now[y/n] = y 
+    Reload privileges tables now[y/n] = y
 
-5. After setting root password for MariaDB, you can connect to MariaDB prompt with the new root password.
+5. 在设置了密码之后,你就可以登录 MariaDB 了。
 
    # mysql -u root -p
 
-### Step 3: Installation of HHVM ###
+### Step 3: 安装 HHVM ###
 
-6. At this stage we shall install and configure HHVM. You need to add the HHVM repository to your `sources.list` file and then you have to update your repository list using following series of commands.
+6. 我们需要把 HHVM 的仓库添加到你的 `sources.list` 文件中,然后更新软件仓库列表。
 
    # wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | apt-key add -
    # echo deb http://dl.hhvm.com/ubuntu DISTRIBUTION_VERSION main | sudo tee /etc/apt/sources.list.d/hhvm.list
    # apt-get update
 
-**Important**: Don’t forget to replace DISTRIBUTION_VERSION with your Ubuntu distribution version (i.e. lucid, precise, or trusty.) and also on Debian replace with jessie or wheezy. On Linux Mint installation instructions are same, but petra is the only currently supported distribution. 
+**重要**:不要忘记用你的 Ubuntu 发行版型号替换上述的 DISTRIBUTION_VERSION (比如:lucid, precise, trusty) 或者是 Debian 的 jessie 或者 wheezy。在 Linux Mint 中也是一样的,不过只支持 petra。
 
+添加了 HHVM 仓库之后,你就可以安装了。
 
    # apt-get install -y hhvm
 
+安装之后,即可启动它,不过它默认并不会开机自启。可以用如下命令将其设为开机启动。
 
    # update-rc.d hhvm defaults
 
-### Step 4: Configuring Nginx/Apache to Talk to HHVM ###
+### Step 4: 配置 Nginx/Apache 连接 HHVM ###
 
-7. Now, nginx/apache and HHVM are installed and running as independent, so we need to configure both web servers to talk to each other. The crucial part is that we have to tell nginx/apache to forward all PHP files to HHVM to execute.
+7. 现在,nginx/apache 和 HHVM 都已经安装完成,并且都独立运行起来了,所以我们需要对它们进行设置,让它们互相连通。关键的一步,就是让 nginx/apache 把所有的 php 文件都交给 HHVM 进行处理。
 
-If you are using Nginx, follow this instructions as explained..
+如果你用了 Nginx,请按照如下步骤:
 
-By default, the nginx configuration lives under /etc/nginx/sites-available/default and these config looks in /usr/share/nginx/html for files to execute, but it don’t know what to do with PHP.
+nginx 的配置文件在 /etc/nginx/sites-available/default,它会在 /usr/share/nginx/html 中寻找要执行的文件,不过,它并不知道如何处理 PHP。
 
-To make Nginx to talk with HHVM, we need to run the following include script that will configure nginx correctly by placing a hhvm.conf at the beginning of the nginx config as mentioned above.
+为了确保 Nginx 可以连接 HHVM,我们需要执行如下的脚本。它可以帮助我们正确地配置 Nginx。
 
-This script makes the nginx to talk to any file that ends with .hh or .php and send it to HHVM via fastcgi. 
+这个脚本可以确保 Nginx 对以 .hh 和 .php 结尾的文件做正确的处理,并通过 fastcgi 将它们交给 HHVM
 
    # /usr/share/hhvm/install_fastcgi.sh
 
 ![Configure Nginx for HHVM](http://www.tecmint.com/wp-content/uploads/2015/08/Configure-Nginx-for-HHVM.png)
 
-Configure Nginx for HHVM
+配置 Nginx、HHVM
 
-**Important**: If you are using Apache, there isn’t any configuration is needed now.
+**重要**: 如果你使用的是 Apache,这边就不需要进行配置了
 
-8. Next, you need to use /usr/bin/hhvm to provide /usr/bin/php (php) by running this command below.
+8. 接下来,你需要运行下面的命令,用 /usr/bin/hhvm 来提供 /usr/bin/php(php 命令)。
 
    # /usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60
 
-After all the above steps are done, you can now start HHVM and test it.
+以上步骤完成之后,你现在可以启动并且测试它了。
 
    # systemctl start hhvm
 
-### Step 5: Testing HHVM with Nginx/Apache ###
+### Step 5: 测试 HHVM 和 Nginx/Apache ###
 
-9. To verify that hhvm working, you need to create a hello.php file under nginx/apache document root directory.
+9. 为了确认 hhvm 是否正常工作,你需要在 nginx/apache 的文档根目录下建立一个 hello.php 文件。
 
    # nano /usr/share/nginx/html/hello.php        [For Nginx]
    OR
    # nano /var/www/html/hello.php                [For Nginx and Apache]
 
-Add the following snippet to this file.
+在文件中添加如下代码:
 
-and then navigate to the following URL and verify to see “hello world“.
+然后访问如下链接,确认自己能否看到 "hello world"
 
    http://localhost/info.php
    OR
@@ -163,18 +164,18 @@ and then navigate to the following URL and verify to see “hello world“.
 
 HHVM Page
 
-If “HHVM” page appears, then it means you’re all set!
+如果 “HHVM” 的页面出现了,那就说明你成功了
 
-### Conclusion ###
+### 结论 ###
 
-These steps are very easy to follow and hope your find this tutorial useful and if you get any error during installation of any packages, post a comment and we shall find solutions together. And any additional ideas are welcome. 
+以上的步骤都是非常简单的,希望你能觉得这是一篇有用的教程,如果你在以上的步骤中遇到了问题,给我们留一个评论,我们将全力解决。 -------------------------------------------------------------------------------- via: http://www.tecmint.com/install-hhvm-and-nginx-apache-with-mariadb-on-debian-ubuntu/ 作者:[Ravi Saive][a] -译者:[译者ID](https://github.com/译者ID) +译者:[MikeCoder](https://github.com/MikeCoder) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8c011f677e747b487f06aac6fce2154c2bc50403 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 2 Sep 2015 00:49:59 +0800 Subject: [PATCH 406/697] PUB:20150813 Linux file system hierarchy v2.0 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @tnuoccalanosrep 这是你的第一篇吧?翻译的还不错。加油! --- ...150813 Linux file system hierarchy v2.0.md | 440 ++++++++++++++++++ ...150813 Linux file system hierarchy v2.0.md | 432 ----------------- 2 files changed, 440 insertions(+), 432 deletions(-) create mode 100644 published/20150813 Linux file system hierarchy v2.0.md delete mode 100644 translated/tech/20150813 Linux file system hierarchy v2.0.md diff --git a/published/20150813 Linux file system hierarchy v2.0.md b/published/20150813 Linux file system hierarchy v2.0.md new file mode 100644 index 0000000000..6a68efbd67 --- /dev/null +++ b/published/20150813 Linux file system hierarchy v2.0.md @@ -0,0 +1,440 @@ +Linux 文件系统结构介绍 +================================================================================ + +![](http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png) + +Linux中的文件是什么?它的文件系统又是什么?那些配置文件又在哪里?我下载好的程序保存在哪里了?在 Linux 中文件系统是标准结构的吗?好了,上图简明地阐释了Linux的文件系统的层次关系。当你苦于寻找配置文件或者二进制文件的时候,这便显得十分有用了。我在下方添加了一些解释以及例子,不过“篇幅较长,可以有空再看”。 + +另外一种情况便是当你在系统中获取配置以及二进制文件时,出现了不一致性问题,如果你是在一个大型组织中,或者只是一个终端用户,这也有可能会破坏你的系统(比如,二进制文件运行在旧的库文件上了)。若然你在[你的Linux系统上做安全审计][1]的话,你将会发现它很容易遭到各种攻击。所以,保持一个清洁的操作系统(无论是Windows还是Linux)都显得十分重要。 + +### Linux的文件是什么? 
### + +对于UNIX系统来说(同样适用于Linux),以下便是对文件简单的描述: + +> 在UNIX系统中,一切皆为文件;若非文件,则为进程 + +这种定义是比较正确的,因为有些特殊的文件不仅仅是普通文件(比如命名管道和套接字),不过为了让事情变的简单,“一切皆为文件”也是一个可以让人接受的说法。Linux系统也像UNIX系统一样,将文件和目录视如同物,因为目录只是一个包含了其他文件名的文件而已。程序、服务、文本、图片等等,都是文件。对于系统来说,输入和输出设备,基本上所有的设备,都被当做是文件。 + +题图版本历史: + +- Version 2.0 – 17-06-2015 + - – Improved: 添加标题以及版本历史 + - – Improved: 添加/srv,/meida和/proc + - – Improved: 更新了反映当前的Linux文件系统的描述 + - – Fixed: 多处的打印错误 + - – Fixed: 外观和颜色 +- Version 1.0 – 14-02-2015 + - – Created: 基本的图表 + - – Note: 摒弃更低的版本 + +### 下载链接 ### + +以下是大图的下载地址。如果你需要其他格式,请跟原作者联系,他会尝试制作并且上传到某个地方以供下载 + +- [大图 (PNG 格式) – 2480×1755 px – 184KB][2] +- [最大图 (PDF 格式) – 9919x7019 px – 1686KB][3] + +**注意**: PDF格式文件是打印的最好选择,因为它画质很高。 + +### Linux 文件系统描述 ### + +为了有序地管理那些文件,人们习惯把这些文件当做是硬盘上的有序的树状结构,正如我们熟悉的'MS-DOS'(磁盘操作系统)就是一个例子。大的分枝包括更多的分枝,分枝的末梢是树的叶子或者普通的文件。现在我们将会以这树形图为例,但晚点我们会发现为什么这不是一个完全准确的一幅图。 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
目录描述
+ / + 主层次 的根,也是整个文件系统层次结构的根目录
+ /bin + 存放在单用户模式可用的必要命令二进制文件,所有用户都可用,如 cat、ls、cp等等
+ /boot + 存放引导加载程序文件,例如kernels、initrd等
+ /dev + 存放必要的设备文件,例如/dev/null
+ /etc + 存放主机特定的系统级配置文件。其实这里有个关于它名字本身意义上的的争议。在贝尔实验室的UNIX实施文档的早期版本中,/etc表示是“其他(etcetera)目录”,因为从历史上看,这个目录是存放各种不属于其他目录的文件(然而,文件系统目录标准 FSH 限定 /etc 用于存放静态配置文件,这里不该存有二进制文件)。早期文档出版后,这个目录名又重新定义成不同的形式。近期的解释中包含着诸如“可编辑文本配置”或者“额外的工具箱”这样的重定义
+ + + /etc/opt + + + 存储着新增包的配置文件 /opt/.
+ + + /etc/sgml + + + 存放配置文件,比如 catalogs,用于那些处理SGML(译者注:标准通用标记语言)的软件的配置文件
+ + + /etc/X11 + + + X Window 系统11版本的的配置文件
+ + + /etc/xml + + + 配置文件,比如catalogs,用于那些处理XML(译者注:可扩展标记语言)的软件的配置文件
+ /home + 用户的主目录,包括保存的文件,个人配置,等等
+ /lib + /bin//sbin/中的二进制文件的必需的库文件
+ /lib<架构位数> + 备用格式的必要的库文件。 这样的目录是可选的,但如果他们存在的话肯定是有需要用到它们的程序
+ /media + 可移动的多媒体(如CD-ROMs)的挂载点。(出现于 FHS-2.3)
+ /mnt + 临时挂载的文件系统
+ /opt + 可选的应用程序软件包
+ /proc + 以文件形式提供进程以及内核信息的虚拟文件系统,在Linux中,对应进程文件系统(procfs )的挂载点
+ /root + 根用户的主目录
+ /sbin + 必要的系统级二进制文件,比如 init、ip、mount
+ /srv + 系统提供的站点特定数据
+ /tmp + 临时文件(另见 /var/tmp),通常在系统重启后被删除
+ /usr + 二级层级:存放用户的只读数据;包含大多数(多)用户工具和应用程序
+ + + /usr/bin + + + 非必要的命令二进制文件 (在单用户模式中不需要用到的);用于所有用户
+ + + /usr/include + + + 标准的包含文件
+ + + /usr/lib + + + 库文件,用于 /usr/bin/ 和 /usr/sbin/ 中的二进制文件
+ + + /usr/lib<架构位数> + + + 备用格式库(可选的)
+ + + /usr/local + + + 三级层次:用于存放本机特定的本地数据。通常还有下一级子目录,比如 bin/、lib/、share/。
+ + + /usr/sbin + + + 非必要的系统二进制文件,比如各种网络服务的守护进程
+ + + /usr/share + + + 架构无关的(共享)数据
+ + + /usr/src + + + 源代码,比如内核源文件以及与它相关的头文件
+ + + /usr/X11R6 + + + X Window系统,版本号:11,发行版本:6
+ /var + 各种可变(variable)文件:随着系统常规操作而内容持续改变的文件就放在这里,比如日志文件、脱机(spool)文件,还有临时的电子邮件文件
+ + + /var/cache + + + 应用程序缓存数据。这些数据是耗时的 I/O(输入/输出)或运算在本地生成的结果,应用程序必须能够重新生成或恢复这些数据;缓存文件可以被删除而不会丢失数据
+ + + /var/lib + + + 状态信息。程序运行过程中持续修改的数据,比如数据库、软件包系统的元数据等等
+ + + /var/lock + + + 锁文件。这些文件用于跟踪正在使用的资源
+ + + /var/log + + + 日志文件。包含各种日志。
+ + + /var/mail + + + 内含用户邮箱的相关文件
+ + + /var/opt + + + 存放 /opt/ 中附加软件包的可变数据
+ + + /var/run + + + 存放当前系统上次启动以来的相关信息,例如当前登入的用户以及当前运行的守护进程(daemon)。
+ + + /var/spool + + + 该spool主要用于存放将要被处理的任务,比如打印队列以及邮件外发队列
+ + + + + /var/spool/mail + + + + + 过时的位置,用于放置用户邮箱文件
+ + + /var/tmp + + + 存放重启后保留的临时文件
+ +### Linux的文件类型 ### + +大多数文件仅仅是普通文件,他们被称为`regular`文件;他们包含普通数据,比如,文本、可执行文件、或者程序、程序的输入或输出等等 + +虽然你可以认为“在Linux中,一切你看到的皆为文件”这个观点相当保险,但这里仍有着一些例外。 + +- `目录`:由其他文件组成的文件 +- `特殊文件`:用于输入和输出的途径。大多数特殊文件都储存在`/dev`中,我们将会在后面讨论这个问题。 +- `链接文件`:让文件或者目录出现在系统文件树结构上多个地方的机制。我们将详细地讨论这个链接文件。 +- `(域)套接字`:特殊的文件类型,和TCP/IP协议中的套接字有点像,提供进程间网络通讯,并受文件系统的访问控制机制保护。 +- `命名管道` : 或多或少有点像sockets(套接字),提供一个进程间的通信机制,而不用网络套接字协议。 + +### 现实中的文件系统 ### + +对于大多数用户和常规系统管理任务而言,“文件和目录是一个有序的类树结构”是可以接受的。然而,对于电脑而言,它是不会理解什么是树,或者什么是树结构。 + +每个分区都有它自己的文件系统。想象一下,如果把那些文件系统想成一个整体,我们可以构思一个关于整个系统的树结构,不过这并没有这么简单。在文件系统中,一个文件代表着一个`inode`(索引节点),这是一种包含着构建文件的实际数据信息的序列号:这些数据表示文件是属于谁的,还有它在硬盘中的位置。 + +每个分区都有一套属于他们自己的inode,在一个系统的不同分区中,可以存在有相同inode的文件。 + +每个inode都表示着一种在硬盘上的数据结构,保存着文件的属性,包括文件数据的物理地址。当硬盘被格式化并用来存储数据时(通常发生在初始系统安装过程,或者是在一个已经存在的系统中添加额外的硬盘),每个分区都会创建固定数量的inode。这个值表示这个分区能够同时存储各类文件的最大数量。我们通常用一个inode去映射2-8k的数据块。当一个新的文件生成后,它就会获得一个空闲的inode。在这个inode里面存储着以下信息: + +- 文件属主和组属主 +- 文件类型(常规文件,目录文件......) +- 文件权限 +- 创建、最近一次读文件和修改文件的时间 +- inode里该信息被修改的时间 +- 文件的链接数(详见下一章) +- 文件大小 +- 文件数据的实际地址 + +唯一不在inode的信息是文件名和目录。它们存储在特殊的目录文件。通过比较文件名和inode的数目,系统能够构造出一个便于用户理解的树结构。用户可以通过ls -i查看inode的数目。在硬盘上,inodes有他们独立的空间。 + +------------------------ + +via: http://www.blackmoreops.com/2015/06/18/linux-file-system-hierarchy-v2-0/ + +译者:[tnuoccalanosrep](https://github.com/tnuoccalanosrep) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.blackmoreops.com/2015/02/15/in-light-of-recent-linux-exploits-linux-security-audit-is-a-must/ +[2]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png +[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf diff --git a/translated/tech/20150813 Linux file system hierarchy v2.0.md b/translated/tech/20150813 Linux file system hierarchy v2.0.md deleted file mode 100644 index 6f92d3bb53..0000000000 --- 
a/translated/tech/20150813 Linux file system hierarchy v2.0.md +++ /dev/null @@ -1,432 +0,0 @@ -translating by tnuoccalanosrep -Linux文件系统结构 v2.0 -================================================================================ -Linux中的文件是什么?它的文件系统又是什么?那些配置文件又在哪里?我下载好的程序保存在哪里了?好了,上图简明地阐释了Linux的文件系统的层次关系。当你苦于寻找配置文件或者二进制文件的时候,这便显得十分有用了。我在下方添加了一些解释以及例子,但“篇幅过长,没有阅读”。 - -有一种情况便是当你在系统中获取配置以及二进制文件时,出现了不一致性问题,如果你是一个大型组织,或者只是一个终端用户,这也有可能会破坏你的系统(比如,二进制文件运行在就旧的库文件上了)。若然你在你的Linux系统上做安全审计([security audit of your Linux system][1])的话,你将会发现它很容易遭到不同的攻击。所以,清洁操作(无论是Windows还是Linux)都显得十分重要。 -### What is a file in Linux? ### -Linux的文件是什么? -对于UNIX系统来说(同样适用于Linux),以下便是对文件简单的描述: -> 在UNIX系统中,一切皆为文件;若非文件,则为进程 - -> 这种定义是比较正确的,因为有些特殊的文件不仅仅是普通文件(比如命名管道和套接字),不过为了让事情变的简单,“一切皆为文件”也是一个可以让人接受的说法。Linux系统也像UNXI系统一样,将文件和目录视如同物,因为目录只是一个包含了其他文件名的文件而已。程序,服务,文本,图片等等,都是文件。对于系统来说,输入和输出设备,基本上所有的设备,都被当做是文件。 -![](http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png) - -- Version 2.0 – 17-06-2015 - - – Improved: 添加标题以及版本历史 - - – Improved: 添加/srv,/meida和/proc - - – Improved: 更新了反映当前的Linux文件系统的描述 - - – Fixed: 多处的打印错误 - - – Fixed: 外观和颜色 -- Version 1.0 – 14-02-2015 - - – Created: 基本的图表 - - – Note: 摒弃更低的版本 - -### Download Links ### -以下是结构图的下载地址。如果你需要其他结构,请跟原作者联系,他会尝试制作并且上传到某个地方以供下载 -- [Large (PNG) Format – 2480×1755 px – 184KB][2] -- [Largest (PDF) Format – 9919x7019 px – 1686KB][3] - -**注意**: PDF格式文件是打印的最好选择,因为它画质很高。 -### Linux 文件系统描述 ### -为了有序地管理那些文件,人们习惯把这些文件当做是硬盘上的有序的类树结构体,正如我们熟悉的'MS-DOS'(硬盘操作系统)。大的分枝包括更多的分枝,分枝的末梢是树的叶子或者普通的文件。现在我们将会以这树形图为例,但晚点我们会发现为什么这不是一个完全准确的一幅图。 -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Directory(目录)Description(描述)
-
/
-
主层次 的根,也是整个文件系统层次结构的根目录
-
/bin
-
存放在单用户模式可用的必要命令二进制文件,对于所有用户而言,则是像cat,ls,cp等等的文件
-
/boot
-
存放引导加载程序文件,例如kernels,initrd等
-
/dev
-
存放必要的设备文件
-
/etc
-
存放主机特定的系统范围内的配置文件。其实这里有个关于它名字本身意义上的的争议。在贝尔实验室的早期UNIX实施文档版本中,/etc表示是“其他目录”,因为从历史上看,这个目录是存放各种不属于其他目录的文件(然而,FSH(文件系统目录标准)限定 /ect是用于存放静态配置文件,这里不该存有二进制文件)。早期文档出版后,这个目录名又重新定义成不同的形式。近期的解释中包含着诸如“可编辑文本配置”或者“额外的工具箱”这样的重定义
-
-
-
/opt
-
-
-
存储着新增包的配置文件 /opt/.
-
-
-
/sgml
-
-
-
存放配置文件,比如目录,还有那些处理SGML(译者注:标准通用标记语言)的软件的配置文件
-
-
-
/X11
-
-
-
X Window系统的配置文件,版本号为11
-
-
-
/xml
-
-
-
配置文件,比如目录,处理XML(译者注:可扩展标记语言)的软件的配置文件
-
/home
-
用户的主目录,包括保存的文件, 个人配置, 等等.
-
/lib
-
/bin/ and /sbin/中的二进制文件必不可少的库文件
-
/lib<qual>
-
备用格式的必要的库文件. 这样的目录视可选的,但如果他们存在的话, 他们还有一些要求.
-
/media
-
可移动的多媒体(如CD-ROMs)的挂载点.(出现于 FHS-2.3)
-
/mnt
-
临时挂载的文件系统
-
/opt
-
自定义应用程序软件包
-
/proc
-
以文件形式提供进程以及内核信息的虚拟文件系统,在Linux中,对应进程文件系统的挂载点
-
/root
-
根用户的主目录
-
/sbin
-
必要系统二进制文件, 比如, init, ip, mount.
-
/srv
-
系统提供的站点特定数据
-
/tmp
-
临时文件 (另见 /var/tmp). 通常在系统重启后删除
-
/usr
-
二级层级 存储用户的只读数据; 包含(多)用户主要的公共文件以及应用程序
-
-
-
/bin
-
-
-
非必要的命令二进制文件 (在单用户模式中不需要用到的); 用于所有用户.
-
-
-
/include
-
-
-
标准的包含文件
-
-
-
/lib
-
-
-
库文件,用于/usr/bin//usr/sbin/.中的二进制文件
-
-
-
/lib<qual>
-
-
-
备用格式库(可选的).
-
-
-
/local
-
-
-
三级层次 用于本地数据, 具体到该主机上的.通常会有下一个子目录, 比如, bin/, lib/, share/.
-
-
-
/sbin
-
-
-
非必要系统的二进制文件, 比如,用于不同网络服务的守护进程
-
-
-
/share
-
-
-
独立架构的 (共享) 数据.
-
-
-
/src
-
-
-
源代码, 比如, 内核源文件以及与它相关的头文件
-
-
-
/X11R6
-
-
-
X Window系统,版本号:11,发行版本:6
-
/var
-
各式各样的文件,一些随着系统常规操作而持续改变的文件就放在这里,比如日志文件,脱机文件,还有临时的电子邮件文件
-
-
-
/cache
-
-
-
应用程序缓存数据. 这些数据是根据I/O(输入/输出)的耗时结果或者是运算生成的.这些应用程序是可以重新生成或者恢复数据的.当没有数据丢失的时候,可以删除缓存文件.
-
-
-
/lib
-
-
-
状态信息.这些信息随着程序的运行而不停地改变,比如,数据库,系统元数据的打包等等
-
-
-
/lock
-
-
-
锁文件。这些文件会持续监控正在使用的资源
-
-
-
/log
-
-
-
日志文件. 包含各种日志.
-
-
-
/mail
-
-
-
内含用户邮箱的相关文件
-
-
-
/opt
-
-
-
来自附加包的各种数据都会存储在 /opt/.
-
-
-
/run
-
-
-
Information about the running system since last boot, e.g., currently logged-in users and running daemons.存放当前系统上次启动的相关信息, 例如, 当前登入的用户以及当前运行的daemons(守护进程).
-
-
-
/spool
-
-
-
该spool主要用于存放将要被处理的任务, 比如, 打印队列以及邮件传出队列
-
-
-
-
-
/mail
-
-
-
-
-
过时的位置,用于放置用户邮箱文件
-
-
-
/tmp
-
-
-
存放重启之前的临时接口
- -### Types of files in Linux ### -### Linux的文件类型 ### -大多数文件也仅仅是文件,他们被称为`regular`文件;他们包含普通数据,比如,文本,可执行文件,或者程序,程序输入或输出文件等等 -While it is reasonably safe to suppose that everything you encounter on a Linux system is a file, there are some exceptions. -虽然你可以认为“在Linux中,一切你看到的皆为文件”这个观点相当保险,但这里仍有着一些例外。 - -- `目录`:由其他文件组成的文件 -- `特殊文件`:用于输入和输出的途径。大多数特殊文件都储存在`/dev`中,我们将会在后面讨论这个问题。 -- `链接文件`:让文件或者目录在系统文件树结构上可见的机制。我们将详细地讨论这个链接文件。 -- `(域)套接字`:特殊的文件类型,和TCP/IP协议中的套接字有点像,提供进程网络,并受文件系统的访问控制机制保护。 --`命名管道` : 或多或少有点像sockets(套接字),提供一个进程间的通信机制,而不用网络套接字协议。 -### File system in reality ### -### 现实中的文件系统 ### -对于大多数用户和常规系统管理任务而言,"文件和目录是一个有序的类树结构"是可以接受的。然而,对于电脑而言,它是不会理解什么是树,或者什么是树结构。 - -每个分区都有它自己的文件系统。想象一下,如果把那些文件系统想成一个整体,我们可以构思一个关于整个系统的树结构,不过这并没有这么简单。在文件系统中,一个文件代表着一个`inode`(索引节点),一种包含着构建文件的实际数据信息的序列号:这些数据表示文件是属于谁的,还有它在硬盘中的位置。 - -每个分区都有一套属于他们自己的inodes,在一个系统的不同分区中,可以存在有相同inodes的文件。 - -每个inode都表示着一种在硬盘上的数据结构,保存着文件的属性,包括文件数据的物理地址。当硬盘被格式化并用来存储数据时(通常发生在初始系统安装过程,或者是在一个已经存在的系统中添加额外的硬盘),每个分区都会创建关于inodes的固定值。这个值表示这个分区能够同时存储各类文件的最大数量。我们通常用一个inode去映射2-8k的数据块。当一个新的文件生成后,它就会获得一个空闲的indoe。在这个inode里面存储着以下信息: - -- 文件属主和组属主 -- 文件类型(常规文件,目录文件......) 
-- 文件权限 -- 创建、最近一次读文件和修改文件的时间 -- inode里该信息被修改的时间 -- 文件的链接数(详见下一章) -- 文件大小 -- 文件数据的实际地址 - -唯一不在inode的信息是文件名和目录。它们存储在特殊的目录文件。通过比较文件名和inodes的数目,系统能够构造出一个便于用户理解的树结构。用户可以通过ls -i查看inode的数目。在硬盘上,inodes有他们独立的空间。 - - - -via: http://www.blackmoreops.com/2015/06/18/linux-file-system-hierarchy-v2-0/ - -译者:[译者ID](https://github.com/tnuoccalanosrep) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:http://www.blackmoreops.com/2015/02/15/in-light-of-recent-linux-exploits-linux-security-audit-is-a-must/ -[2]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png -[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf From df238a689f7557654979a7aa2a8fdb1061c655d4 Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Wed, 2 Sep 2015 00:50:38 +0800 Subject: [PATCH 407/697] finish translate --- ...'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md (100%) diff --git a/sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md b/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md similarity index 100% rename from sources/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md rename to translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md From c05677e6043ac0a422c67dec984e8f061f3885e8 Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Wed, 2 Sep 2015 00:54:27 +0800 Subject: [PATCH 408/697] finish translate --- ...inx or Apache with MariaDB on Debian or Ubuntu.md | 12 ++++++++---- 1 
file changed, 8 insertions(+), 4 deletions(-) diff --git a/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md b/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md index 60b2137c55..61e7c80bdf 100644 --- a/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md +++ b/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md @@ -16,9 +16,9 @@ HHVM can also be used along with a FastCGI-based web-server like Nginx or Apache ### Step 1: 安装 Nginx 或者 Apache 服务器 ### 1. 首先,先进行一次系统的升级或者更新软件仓库列表. - +``` # apt-get update && apt-get upgrade - +``` ![System Upgrade](http://www.tecmint.com/wp-content/uploads/2015/08/System-Upgrade.png) System Upgrade @@ -64,17 +64,19 @@ Apache 默认页面 ### Step 2: 安装和配置 MariaDB ### 3. 这一步,我们将通过如下命令安装 MariaDB,它是一个比 MySQL 更好的数据库 - +``` # apt-get install mariadb-client mariadb-server - +``` ![Install MariaDB Database](http://www.tecmint.com/wp-content/uploads/2015/08/Install-MariaDB-Database.png) 安装 MariaDB 4. 在 MariaDB 成功安装之后,你可以启动它,并且设置 root 密码来保护数据库: +``` # systemctl start mysql # mysql_secure_installation +``` 回答以下问题,只需要按下`y`或者 `n`并且回车。请确保你仔细的阅读过说明。 @@ -87,7 +89,9 @@ Apache 默认页面 5. 
在设置了密码之后,你就可以登陆 MariaDB 了。 +``` # mysql -u root -p +``` ### Step 3: 安装 HHVM ### From 3f3faa942ba4736d928cad2f595de5042b14ac5e Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Wed, 2 Sep 2015 00:55:26 +0800 Subject: [PATCH 409/697] finish translate change style --- ...inx or Apache with MariaDB on Debian or Ubuntu.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md b/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md index 61e7c80bdf..1591def307 100644 --- a/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md +++ b/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md @@ -96,11 +96,11 @@ Apache 默认页面 ### Step 3: 安装 HHVM ### 6. 我们需要添加 HHVM 的仓库到你的`sources.list`文件中,然后更新软件列表。 - +``` # wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | apt-key add - # echo deb http://dl.hhvm.com/ubuntu DISTRIBUTION_VERSION main | sudo tee /etc/apt/sources.list.d/hhvm.list # apt-get update - +``` **重要**:不要忘记用你的 Ubuntu 发行版型号替换上述的DISTRIBUTION_VERSION (比如:lucid, precise, trusty) 或者是 Debian 的 jessie 或者 wheezy。在 Linux Mint 中也是一样的,不过只支持 petra。 添加了 HHVM 仓库之后,你就可以安装了。 @@ -132,9 +132,9 @@ nginx 的配置文件在 /etc/nginx/sites-available/default, 并且这些配 **重要**: 如果你使用的是 Apache,这边就不需要进行配置了 8. 接下来,你需要使用 hhvm 来提供 php 的运行环境。 - +``` # /usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60 - +``` 以上步骤完成之后,你现在可以启动并且测试他了。 # systemctl start hhvm @@ -142,11 +142,11 @@ nginx 的配置文件在 /etc/nginx/sites-available/default, 并且这些配 ### Step 5: 测试 HHVM 和 Nginx/Apache ### 9. 
为了确认 hhvm 是否工作,你需要在 nginx/apache 的根目录下建立 hello.php。 - +``` # nano /usr/share/nginx/html/hello.php [For Nginx] OR # nano /var/www/html/hello.php [For Nginx and Apache] - +``` 在文件中添加如下代码: Date: Wed, 2 Sep 2015 07:30:29 +0800 Subject: [PATCH 410/697] [translating by bazz2]How to filter BGP routes in Quagga BGP router --- .../20150202 How to filter BGP routes in Quagga BGP router.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md b/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md index d92c47c774..f227e0c506 100644 --- a/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md +++ b/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md @@ -1,3 +1,4 @@ +[bazz222] How to filter BGP routes in Quagga BGP router ================================================================================ In the [previous tutorial][1], we demonstrated how to turn a CentOS box into a BGP router using Quagga. We also covered basic BGP peering and prefix exchange setup. In this tutorial, we will focus on how we can control incoming and outgoing BGP prefixes by using **prefix-list** and **route-map**. 
@@ -198,4 +199,4 @@ via: http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://xmodulo.com/author/sarmed -[1]:http://xmodulo.com/centos-bgp-router-quagga.html \ No newline at end of file +[1]:http://xmodulo.com/centos-bgp-router-quagga.html From 219c5746cec877427b4ded700e69d91a7602cf17 Mon Sep 17 00:00:00 2001 From: joeren Date: Wed, 2 Sep 2015 08:21:36 +0800 Subject: [PATCH 411/697] Update 20150901 How to automatically dim your screen on Linux.md --- .../20150901 How to automatically dim your screen on Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150901 How to automatically dim your screen on Linux.md b/sources/tech/20150901 How to automatically dim your screen on Linux.md index b8a9ead16b..3a494421a2 100644 --- a/sources/tech/20150901 How to automatically dim your screen on Linux.md +++ b/sources/tech/20150901 How to automatically dim your screen on Linux.md @@ -1,3 +1,4 @@ +Translating by GOLinux! How to automatically dim your screen on Linux ================================================================================ When you start spending the majority of your time in front of a computer, natural questions start arising. Is this healthy? How can I diminish the strain on my eyes? Why is the sunlight burning me? Although active research is still going on to answer these questions, a lot of programmers have already adopted a few applications to make their daily habits a little healthier for their eyes. Among those applications, there are two which I found particularly interesting: Calise and Redshift. 
@@ -49,4 +50,4 @@ via: http://xmodulo.com/automatically-dim-your-screen-linux.html [a]:http://xmodulo.com/author/adrien [1]:http://calise.sourceforge.net/ [2]:http://jonls.dk/redshift/ -[3]:https://wiki.archlinux.org/index.php/Redshift#Automatic_location_based_on_GPS \ No newline at end of file +[3]:https://wiki.archlinux.org/index.php/Redshift#Automatic_location_based_on_GPS From b2a9aa92c2c3ac2bb5a684eadc3396c746270ce7 Mon Sep 17 00:00:00 2001 From: Ping Date: Wed, 2 Sep 2015 09:13:35 +0800 Subject: [PATCH 412/697] Complete Xtreme Download Manager Updated With Fresh GUI --- ...Download Manager Updated With Fresh GUI.md | 68 ------------------- ...Download Manager Updated With Fresh GUI.md | 68 +++++++++++++++++++ 2 files changed, 68 insertions(+), 68 deletions(-) delete mode 100644 sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md create mode 100644 translated/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md diff --git a/sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md b/sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md deleted file mode 100644 index 8879d6bf64..0000000000 --- a/sources/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md +++ /dev/null @@ -1,68 +0,0 @@ -Translating by Ping -Xtreme Download Manager Updated With Fresh GUI -================================================================================ -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-Linux.jpg) - -[Xtreme Download Manager][1], unarguably one of the [best download managers for Linux][2], has a new version named XDM 2015 which brings a fresh new look to it. - -Xtreme Download Manager, also known as XDM or XDMAN, is a popular cross-platform download manager available for Linux, Windows and Mac OS X. 
It is also compatible with all major web browsers such as Chrome, Firefox, Safari enabling you to download directly from XDM when you try to download something in your web browser. - -Applications such as XDM are particularly useful when you have slow/limited network connectivity and you need to manage your downloads. Imagine downloading a huge file from internet on a slow network. What if you could pause and resume the download at will? XDM helps you in such situations. - -Some of the main features of XDM are: - -- Pause and resume download -- [Download videos from YouTube][3] and other video sites -- Force assemble -- Download speed acceleration -- Schedule downloads -- Limit download speed -- Web browser integration -- Support for proxy servers - -Here you can see the difference between the old and new XDM. - -![Old XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-700x400_c.jpg) - -Old XDM - -![New XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme_Download_Manager.png) - -New XDM - -### Install Xtreme Download Manager in Ubuntu based Linux distros ### - -Thanks to the PPA by Noobslab, you can easily install Xtreme Download Manager using the commands below. XDM requires Java but thanks to the PPA, you don’t need to bother with installing dependencies separately. - - sudo add-apt-repository ppa:noobslab/apps - sudo apt-get update - sudo apt-get install xdman - -The above PPA should be available for Ubuntu and other Ubuntu based Linux distributions such as Linux Mint, elementary OS, Linux Lite etc. 
- -#### Remove XDM #### - -To remove XDM (installed using the PPA), use the commands below: - - sudo apt-get remove xdman - sudo add-apt-repository --remove ppa:noobslab/apps - -For other Linux distributions, you can download it from the link below: - -- [Download Xtreme Download Manager][4] - --------------------------------------------------------------------------------- - -via: http://itsfoss.com/xtreme-download-manager-install/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:http://xdman.sourceforge.net/ -[2]:http://itsfoss.com/4-best-download-managers-for-linux/ -[3]:http://itsfoss.com/download-youtube-videos-ubuntu/ -[4]:http://xdman.sourceforge.net/download.html diff --git a/translated/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md b/translated/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md new file mode 100644 index 0000000000..d9ab3ab9f3 --- /dev/null +++ b/translated/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md @@ -0,0 +1,68 @@ +Xtreme下载管理器升级全新用户界面 +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-Linux.jpg) + +[Xtreme 下载管理器][1], 毫无疑问是[Linux界最好的下载管理器][2]之一 , 它的新版本名叫 XDM 2015 ,这次的新版本给我们带来了全新的外观体验! 
+ +Xtreme 下载管理器,也被称作 XDM 或 XDMAN,它是一个跨平台的下载管理器,可以用于 Linux、Windows 和 Mac OS X 系统之上。同时它兼容于主流的浏览器,如 Chrome, Firefox, Safari 等,因此当你从浏览器下载东西的时候可以直接使用 XDM 下载。 + +当你的网络连接超慢并且需要管理下载文件的时候,像 XDM 这种软件可以帮到你大忙。例如说你在一个慢的要死的网络速度下下载一个超大文件, XDM 可以帮助你暂停并且继续下载。 + +XDM 的主要功能: + +- 暂停和继续下载 +- [从 YouTube 下载视频][3],其他视频网站同样适用 +- 强制聚合 +- 下载加速 +- 计划下载 +- 下载限速 +- 与浏览器整合 +- 支持代理服务器 + +下面你可以看到 XDM 新旧版本之间的差别。 + +![Old XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-700x400_c.jpg) + +老版本XDM + +![New XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme_Download_Manager.png) + +新版本XDM + +### 在基于 Ubuntu 的 Linux 发行版上安装 Xtreme下载管理器 ### + +感谢 Noobslab 提供的 PPA,你可以使用以下命令来安装 Xtreme 下载管理器。虽然 XDM 依赖 Java,但是托 PPA 的福,你不需要对其进行单独的安装。 + + sudo add-apt-repository ppa:noobslab/apps + sudo apt-get update + sudo apt-get install xdman + +以上的 PPA 可以在 Ubuntu 或者其他基于 Ubuntu 的发行版上使用,如 Linux Mint, elementary OS, Linux Lite 等。 + +#### 删除 XDM #### + +如果你是使用 PPA 安装的 XDM ,可以通过以下命令将其删除: + + sudo apt-get remove xdman + sudo add-apt-repository --remove ppa:noobslab/apps + +对于其他Linux发行版,可以通过以下连接下载: + +- [Download Xtreme Download Manager][4] + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/xtreme-download-manager-install/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/mr-ping) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://xdman.sourceforge.net/ +[2]:http://itsfoss.com/4-best-download-managers-for-linux/ +[3]:http://itsfoss.com/download-youtube-videos-ubuntu/ +[4]:http://xdman.sourceforge.net/download.html + From 628748de5c7ea948b5e22aac67112a463d8b85e7 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Wed, 2 Sep 2015 10:16:55 +0800 Subject: [PATCH 413/697] [Translated] --- ... automatically dim your screen on Linux.md | 53 ------------------- ... 
automatically dim your screen on Linux.md | 52 ++++++++++++++++++ 2 files changed, 52 insertions(+), 53 deletions(-) delete mode 100644 sources/tech/20150901 How to automatically dim your screen on Linux.md create mode 100644 translated/tech/20150901 How to automatically dim your screen on Linux.md diff --git a/sources/tech/20150901 How to automatically dim your screen on Linux.md b/sources/tech/20150901 How to automatically dim your screen on Linux.md deleted file mode 100644 index 3a494421a2..0000000000 --- a/sources/tech/20150901 How to automatically dim your screen on Linux.md +++ /dev/null @@ -1,53 +0,0 @@ -Translating by GOLinux! -How to automatically dim your screen on Linux -================================================================================ -When you start spending the majority of your time in front of a computer, natural questions start arising. Is this healthy? How can I diminish the strain on my eyes? Why is the sunlight burning me? Although active research is still going on to answer these questions, a lot of programmers have already adopted a few applications to make their daily habits a little healthier for their eyes. Among those applications, there are two which I found particularly interesting: Calise and Redshift. - -### Calise ### - -In and out of development limbo, [Calise][1] stands for "Camera Light Sensor." In other terms, it is an open source program that computes the best backlight level for your screen based on the light intensity received by your webcam. And for more precision, Calise is capable of taking in account the weather in your area based on your geographical coordinates. What I like about it is the compatibility with every desktops, even non-X ones. - -![](https://farm1.staticflickr.com/569/21016715646_6e1e95f066_o.jpg) - -It comes with a command line interface and a GUI, supports multiple user profiles, and can even export its data to CSV. 
After installation, you will have to calibrate it quickly before the magic happens. - -![](https://farm6.staticflickr.com/5770/21050571901_1e7b2d63ec_c.jpg) - -What is less likeable is unfortunately that if you are as paranoid as I am, you have a little piece of tape in front of your webcam, which greatly affects Calise's precision. But that aside, Calise is a great application, which deserves our attention and support. As I mentioned earlier, it has gone through some rough patches in its development schedule over the last couple of years, so I really hope that this project will continue. - -![](https://farm1.staticflickr.com/633/21032989702_9ae563db1e_o.png) - -### Redshift ### - -If you already considered decreasing the strain on your eyes caused by your screen, it is possible that you have heard of f.lux, a free proprietary software that modifies the luminosity and color scheme of your display based on the time of the day. However, if you really prefer open source software, there is an alternative: [Redshift][2]. Inspired by f.lux, Redshift also alters the color scheme and luminosity to enhance the experience of sitting in front of your screen at night. On startup, you can configure it with you geographic position as longitude and latitude, and then let it run in tray. Redshift will smoothly adjust the color scheme or your screen based on the position of the sun. At night, you will see the screen's color temperature turn towards red, making it a lot less painful for your eyes. - -![](https://farm6.staticflickr.com/5823/20420303684_2b6e917fee_b.jpg) - -Just like Calise, it proposes a command line interface as well as a GUI client. To start Redshift quickly, just use the command: - - $ redshift -l [LAT]:[LON] - -Replacing [LAT]:[LON] by your latitude and longitude. - -However, it is also possible to input your coordinates by GPS via the gpsd module. For Arch Linux users, I recommend this [wiki page][3]. 
- -### Conclusion ### - -To conclude, Linux users have no excuse for not taking care of their eyes. Calise and Redshift are both amazing. I really hope that their development will continue and that they get the support they deserve. Of course, there are more than just two programs out there to fulfill the purpose of protecting your eyes and staying healthy, but I feel that Calise and Redshift are a good start. - -If there is a program that you really like and that you use regularly to reduce the strain on your eyes, please let us know in the comments. - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/automatically-dim-your-screen-linux.html - -作者:[Adrien Brochard][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/adrien -[1]:http://calise.sourceforge.net/ -[2]:http://jonls.dk/redshift/ -[3]:https://wiki.archlinux.org/index.php/Redshift#Automatic_location_based_on_GPS diff --git a/translated/tech/20150901 How to automatically dim your screen on Linux.md b/translated/tech/20150901 How to automatically dim your screen on Linux.md new file mode 100644 index 0000000000..cf82291db3 --- /dev/null +++ b/translated/tech/20150901 How to automatically dim your screen on Linux.md @@ -0,0 +1,52 @@ +Linux上如何让屏幕自动变暗 +================================================================================ +当你开始在计算机前花费大量时间的时候,自然的问题开始显现。这健康吗?怎样才能舒缓我眼睛的压力呢?为什么太阳光灼烧着我?尽管解答这些问题的研究仍然在活跃进行着,许多程序员已经采用了一些应用来让他们的日常习惯对他们的眼睛更健康点。在这些应用中,我发现了两个特别有趣的东西:Calise和Redshift。 + +### Calise ### + +在开发状态之中和之外,[Calise][1]都表示“相机光感应器”。换句话说,它是一个开源程序,用于基于摄像头接收到的光密度计算屏幕最佳的背景光级别。更精确地说,Calise可以基于你的地理坐标来考虑你所在地区的天气。我喜欢它是因为它兼容各个桌面,甚至非X系列。 + +![](https://farm1.staticflickr.com/569/21016715646_6e1e95f066_o.jpg) + +它同时附带了命令行界面和图形界面,支持多用户配置,而且甚至可以导出数据为CSV。安装完后,你必须在魔法展开前快速进行校正。 + 
+![](https://farm6.staticflickr.com/5770/21050571901_1e7b2d63ec_c.jpg) + +不怎么令人喜欢的是,如果你和我一样偏执,在你的摄像头前面贴了一条胶带,那就会比较不幸了,这会大大影响Calise的精确度。除此之外,Calise还是个很棒的应用,值得我们关注和支持。正如我先前提到的,它在过去几年中经历了一段修修补补的艰难阶段,所以我真的希望这个项目继续开展下去。 + +![](https://farm1.staticflickr.com/633/21032989702_9ae563db1e_o.png) + +### Redshift ### + +如果你已经考虑好要减少由屏幕导致的眼睛的压力,那么你很可能听过f.lux,它是一个免费的专有软件,用于根据一天中的时间来修改显示器的亮度和配色。然而,如果真的偏好于开源软件,那么一个可选方案就是:[Redshift][2]。灵感来自f.lux,Redshift也可以改变配色和亮度来加强你夜间坐在屏幕前的体验。启动时,你可以配置使用经度和纬度来配置地理坐标,然后就可以让它在托盘中运行了。Redshift将根据太阳的位置平滑地调整你的配色或者屏幕。在夜里,你可以看到屏幕的色温调向偏暖色,这会让你的眼睛少遭些罪。 + +![](https://farm6.staticflickr.com/5823/20420303684_2b6e917fee_b.jpg) + +和Calise一样,它提供了一个命令行界面,同时也提供了一个图形客户端。要快速启动Redshift,只需使用命令: + + $ redshift -l [LAT]:[LON] + +替换[LAT]:[LON]为你的维度和经度。 + +然而,它也可以通过gpsd模块来输入你的坐标。对于Arch Linux用户,我推荐你读一读这个[维基页面][3]。 + +### 尾声 ### + +总而言之,Linux用户没有理由不去保护自己的眼睛,Calise和Redshift两个都很棒。我真希望它们的开发能够继续下去,他们能获得应有的支持。当然,还有比这两个更多的程序可以满足保护眼睛和保持健康的目的,但是我感觉Calise和Redshift会是一个不错的开始。 + +如果你有一个真正喜欢的程序,而且也经常用它来舒缓眼睛的压力,请在下面的评论中留言吧。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/automatically-dim-your-screen-linux.html + +作者:[Adrien Brochard][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/adrien +[1]:http://calise.sourceforge.net/ +[2]:http://jonls.dk/redshift/ +[3]:https://wiki.archlinux.org/index.php/Redshift#Automatic_location_based_on_GPS From 61caa70d6f7935c209d03629c5b89ac50592de1e Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 2 Sep 2015 12:43:09 +0800 Subject: [PATCH 414/697] PUB:RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @xiqingongzi 好长,辛苦了~有一些小错误。 --- ...ntial Commands and System Documentation.md | 313 +++++++++++++++++ ...ntial Commands and 
System Documentation.md | 320 ------------------ 2 files changed, 313 insertions(+), 320 deletions(-) create mode 100644 published/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md delete mode 100644 translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md diff --git a/published/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md b/published/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md new file mode 100644 index 0000000000..a2b540a8ad --- /dev/null +++ b/published/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md @@ -0,0 +1,313 @@ +RHCSA 系列(一): 回顾基础命令及系统文档 +================================================================================ + +RHCSA (红帽认证系统工程师) 是由 RedHat 公司举行的认证考试,这家公司给商业公司提供开源操作系统和软件,除此之外,还为这些企业和机构提供支持、训练以及咨询服务等。 + +![RHCSA Exam Guide](http://www.tecmint.com/wp-content/uploads/2015/02/RHCSA-Series-by-Tecmint.png) + +*RHCSA 考试准备指南* + +RHCSA 考试(考试编号 EX200)通过后可以获取由 RedHat 公司颁发的证书. 
RHCSA 考试是 RHCT(红帽认证技师)的升级版,而且 RHCSA 必须在新的 Red Hat Enterprise Linux(红帽企业版)下完成。RHCT 和 RHCSA 的主要变化就是 RHCT 基于 RHEL5,而 RHCSA 基于 RHEL6 或者7,这两个认证的等级也有所不同。 + +红帽认证管理员最起码可以在红帽企业版的环境下执行如下系统管理任务: + +- 理解并会使用命令管理文件、目录、命令行以及系统/软件包的文档 +- 在不同的启动等级操作运行中的系统,识别和控制进程,启动或停止虚拟机 +- 使用分区和逻辑卷管理本地存储 +- 创建并且配置本地文件系统和网络文件系统,设置他们的属性(权限、加密、访问控制表) +- 部署、配置、并且控制系统,包括安装、升级和卸载软件 +- 管理系统用户和组,以及使用集中制的 LDAP 目录进行用户验证 +- 确保系统安全,包括基础的防火墙规则和 SELinux 配置 + +关于你所在国家的考试注册和费用请参考 [RHCSA 认证页面][1]。 + +在这个有15章的 RHCSA(红帽认证管理员)备考系列中,我们将覆盖以下的关于红帽企业 Linux 第七版的最新的信息: + +- Part 1: 回顾基础命令及系统文档 +- Part 2: 在 RHEL7 中如何进行文件和目录管理 +- Part 3: 在 RHEL7 中如何管理用户和组 +- Part 4: 使用 nano 和 vim 管理命令,使用 grep 和正则表达式分析文本 +- Part 5: RHEL7 的进程管理:启动,关机,以及这之间的各种事情 +- Part 6: 使用 'Parted' 和 'SSM' 来管理和加密系统存储 +- Part 7: 使用 ACL(访问控制表)并挂载 Samba/NFS 文件分享 +- Part 8: 加固 SSH,设置主机名并开启网络服务 +- Part 9: 安装、配置和加固一个 Web 和 FTP 服务器 +- Part 10: Yum 包管理方式,使用 Cron 进行自动任务管理以及监控系统日志 +- Part 11: 使用 FirewallD 和 Iptables 设置防火墙,控制网络流量 +- Part 12: 使用 Kickstart 自动安装 RHEL 7 +- Part 13: RHEL7:什么是 SeLinux?他的原理是什么? +- Part 14: 在 RHEL7 中使用基于 LDAP 的权限控制 +- Part 15: 虚拟化基础和用KVM管理虚拟机 + +在第一章,我们讲解如何在终端或者 Shell 窗口输入和运行正确的命令,并且讲解如何找到、查阅,以及使用系统文档。 + +![RHCSA: Reviewing Essential Linux Commands – Part 1](http://www.tecmint.com/wp-content/uploads/2015/02/Reviewing-Essential-Linux-Commands.png) + +*RHCSA:回顾必会的 Linux 命令 - 第一部分* + +#### 前提: #### + +至少你要熟悉如下命令 + +- [cd 命令][2] (改变目录) +- [ls 命令][3] (列举文件) +- [cp 命令][4] (复制文件) +- [mv 命令][5] (移动或重命名文件) +- [touch 命令][6] (创建一个新的文件或更新已存在文件的时间表) +- rm 命令 (删除文件) +- mkdir 命令 (创建目录) + +在这篇文章中你将会找到更多的关于如何更好的使用他们的正确用法和特殊用法. 
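在继续之前,可以用下面这个小练习快速复习一下上述基础命令。这只是一个演示脚本,其中的 demo_dir 目录和各个文件名均为假设,可以在任何有写权限的目录中运行:

```shell
# 创建一个演示目录,并依次练习 touch、cp、mv、ls、rm 等命令
mkdir demo_dir                               # mkdir:创建目录
touch demo_dir/file1.txt                     # touch:创建一个空文件
cp demo_dir/file1.txt demo_dir/file2.txt     # cp:复制文件
mv demo_dir/file2.txt demo_dir/file3.txt     # mv:重命名文件
ls demo_dir                                  # ls:列出文件,输出 file1.txt 和 file3.txt
rm demo_dir/file1.txt demo_dir/file3.txt     # rm:删除文件
rmdir demo_dir                               # 删除已经清空的演示目录
```

练习结束后,演示目录会被完整清理,不会在系统中留下多余的文件。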
+ +虽然没有严格的要求,但是为了练习本文讨论的常用 Linux 命令和在 Linux 中搜索信息的方法,你应该安装 RHEL7 来尝试使用文章中提到的命令。这将会使你学习起来更省力。 + +- [红帽企业版 Linux(RHEL)7 安装指南][7] + +### 使用 Shell 进行交互 ### + +如果我们使用文本模式登录 Linux,我们就会直接进入到我们的默认 shell 中。另一方面,如果我们使用图形化界面登录,我们必须通过启动一个终端来开启 shell。无论哪种方式,我们都会看到用户提示符,并且我们可以在这里输入并且执行命令(当按下回车时,命令就会被执行)。 + +命令是由两个部分组成的: + +- 命令本身 +- 参数 + +某些参数,称为选项(通常使用一个连字符开头),会改变命令的行为方式,而另外一些则指定了命令所操作的对象。 + +type 命令可以帮助我们识别某一个特定的命令是由 shell 内置的还是由一个单独的包提供的。这样的区别在于我们能够在哪里找到该命令的更多信息。对 shell 内置的命令,我们需要看 shell 的手册页;如果是其他的,我们需要看软件包自己的手册页。 + +![Check Shell built in Commands](http://www.tecmint.com/wp-content/uploads/2015/02/Check-shell-built-in-Commands.png) + +*检查Shell的内置命令* + +在上面的例子中, `cd` 和 `type` 是 shell 内置的命令,`top` 和 `less` 是由 shell 之外的其他的二进制文件提供的(在这种情况下,type将返回命令的位置)。 + +其他的内置命令: + +- [echo 命令][8]: 展示字符串 +- [pwd 命令][9]: 输出当前的工作目录 + +![More Built in Shell Commands](http://www.tecmint.com/wp-content/uploads/2015/02/More-Built-in-Shell-Commands.png) + +*其它内置命令* + +**exec 命令** + +它用来运行我们指定的外部程序。请注意在多数情况下,只需要输入我们想要运行的程序的名字就行,不过 `exec` 命令有一个特殊的特性:不是在 shell 之外创建新的进程运行,而是这个新的进程会替代原来的 shell,可以通过下列命令来验证。 + + # ps -ef | grep [shell 进程的PID] + +当新的进程终止时,Shell 也随之终止。运行 `exec top` ,然后按下 `q` 键来退出 top,你会注意到 shell 会话也同时终止,如下面的屏幕录像展示的那样: + + + +**export 命令** + +将环境变量导出,供之后执行的命令使用。 + +**history 命令** + +展示数行之前的历史命令。在命令编号前加上感叹号可以再次执行这个命令。如果我们需要编辑历史列表中的命令,我们可以按下 `Ctrl + r` 并输入与命令相关的第一个字符。匹配到的命令会自动补全,我们可以根据目前的需要来编辑它: + + + +命令列表会保存在一个叫 `.bash_history` 的文件里。`history` 命令是一个非常有用的用于减少输入次数的工具,特别是进行命令行编辑的时候。默认情况下,bash 保留最后输入的500个命令,不过可以通过修改 HISTSIZE 环境变量来增加: + +![Linux history Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-history-Command.png) + +*Linux history 命令* + +但上述变化在我们下一次启动后不会保留。为了让 HISTSIZE 变量的修改保持下去,我们需要手工编辑配置文件: + + # 要设置 history 长度,请看 bash(1)文档中的 HISTSIZE 和 HISTFILESIZE + HISTSIZE=1000 + +**重要**: 我们的更改不会立刻生效,除非我们重启了 shell 。 + +**alias 命令** + +没有参数或使用 `-p` 选项时将会以“名称=值”的标准形式输出别名列表。当提供了参数时,就会按照给定的名字和值定义一个别名。 + +使用 `alias` ,我们可以创建我们自己的命令,或使用所需的参数修改现有的命令。举个例子,假设我们将 `ls` 定义别名为 `ls --color=auto` 
,这样就可以使用不同颜色输出文件、目录、链接等等。 + + + # alias ls='ls --color=auto' + +![Linux alias Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-alias-Command.png) + +*Linux 别名命令* + +**注意**: 你可以给你的“新命令”起任何的名字,并且使用单引号包括很多命令,但是你要用分号区分开它们。如下: + + # alias myNewCommand='cd /usr/bin; ls; cd; clear' + +**exit 命令** + +`exit` 和 `logout` 命令都可以退出 shell 。`exit` 命令可以退出所有的 shell,`logout` 命令只注销登录的 shell(即你用文本模式登录时自动启动的那个)。 + +**man 和 info 命令** +如果你对某个程序有疑问,可以参考它的手册页,可以使用 `man` 命令调出它。此外,还有一些关于重要文件(inittab、fstab、hosts 等等)、库函数、shell、设备及其他功能的手册页。 + +举例: + +- man uname (输出系统信息,如内核名称、处理器、操作系统类型、架构等) +- man inittab (初始化守护进程的设置) + +另外一个重要的信息的来源是由 `info` 命令提供的,`info` 命令常常被用来读取 info 文件。这些文件往往比手册页 提供了更多信息。可以通过 `info keyword` 调用某个命令的信息: + + # info ls + # info cut + +另外,在 `/usr/share/doc` 文件夹包含了大量的子目录,里面可以找到大量的文档。它们是文本文件或其他可读格式。 + +你要习惯于使用这三种方法去查找命令的信息。重点关注每个命令文档中介绍的详细的语法。 + +**使用 expand 命令把制表符转换为空格** + +有时候文本文档包含了制表符,但是程序无法很好的处理。或者我们只是简单的希望将制表符转换成空格。这就是用到 `expand` 地方(由GNU核心组件包提供) 。 + +举个例子,我们有个文件 NumberList.txt,让我们使用 `expand` 处理它,将制表符转换为一个空格,并且显示在标准输出上。 + + # expand --tabs=1 NumbersList.txt + +![Linux expand Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-expand-Command.png) + +*Linux expand 命令* + +unexpand命令可以实现相反的功能(将空格转为制表符) + +**使用 head 输出文件首行及使用 tail 输出文件尾行** + +通常情况下,`head` 命令后跟着文件名时,将会输出该文件的前十行,我们可以通过 `-n` 参数来自定义具体的行数。 + + # head -n3 /etc/passwd + # tail -n3 /etc/passwd + +![Linux head and tail Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-head-and-tail-Command.png) + +*Linux 的 head 和 tail 命令* + +`tail` 最有意思的一个特性就是能够显示增长的输入文件(`tail -f my.log`,my.log 是我们需要监视的文件。)这在我们监控一个持续增加的日志文件时非常有用。 + +- [使用 head 和 tail 命令有效地管理文件][10] + +**使用 paste 按行合并文本文件** + +`paste` 命令一行一行的合并文件,默认会以制表符来区分每个文件的行,或者你可以自定义的其它分隔符。(下面的例子就是输出中的字段使用等号分隔)。 + + # paste -d= file1 file2 + +![Merge Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Merge-Files-in-Linux-with-paste-command.png) + +*Linux 中的 merge 命令* + +**使用 split 命令将文件分块** + +`split` 
命令常常用于把一个文件切割成两个或多个由我们自定义的前缀命名的文件。可以根据大小、区块、行数等进行切割,生成的文件会有一个数字或字母的后缀。在下面的例子中,我们将切割 bash.pdf ,每个文件 50KB (-b 50KB),使用数字后缀 (-d): + + # split -b 50KB -d bash.pdf bash_ + +![Split Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Split-Files-in-Linux-with-split-command.png) + +*在 Linux 下切割文件* + +你可以使用如下命令来合并这些文件,生成原来的文件: + + # cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf + +**使用 tr 命令替换字符** + +`tr` 命令多用于一对一地替换(改变)字符,或者替换某个字符范围。和之前一样,下面的例子中我们仍将使用之前的文件 file2,我们要做的是: + +- 小写字母 o 变成大写 +- 所有的小写字母都变成大写字母 + + # cat file2 | tr o O + # cat file2 | tr [a-z] [A-Z] + +![Translate Characters in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Translate-characters-in-Linux-with-tr-command.png) + +*在 Linux 中替换字符* + +**使用 uniq 和 sort 检查或删除重复的文字** + +`uniq` 命令可以帮我们查出或删除文件中的重复的行,默认会输出到标准输出。我们应当注意,`uniq` 只能查出相邻的相同行,所以,`uniq` 往往和 `sort` 一起使用(`sort` 一般用于对文本文件的内容进行排序)。 + +默认情况下,`sort` 以第一个字段(使用空格分隔)为关键字段。想要指定不同关键字段,我们需要使用 -k 参数,请注意如何使用 `sort` 和 `uniq` 输出我们想要的字段,具体可以看下面的例子: + + # cat file3 + # sort file3 | uniq + # sort -k2 file3 | uniq + # sort -k3 file3 | uniq + +![删除文件中重复的行](http://www.tecmint.com/wp-content/uploads/2015/02/Remove-Duplicate-Lines-in-file.png) + +*删除文件中重复的行* + +**从文件中提取文本的命令** + +`cut` 命令基于字节数(-b)、字符数(-c)或者字段数(-f),从输入(标准输入或文件)中提取相应的部分,并显示在标准输出上。 + +当我们按字段使用 `cut` 时,默认的分隔符是一个制表符,不过你可以通过 -d 参数来自定义分隔符。 + + # cut -d: -f1,3 /etc/passwd # 这个例子提取了第一和第三字段的文本 + # cut -d: -f2-4 /etc/passwd # 这个例子提取了第二到第四字段的文本 + +![从文件中提取文本](http://www.tecmint.com/wp-content/uploads/2015/02/Extract-Text-from-a-file.png) + +*从文件中提取文本* + +注意,简洁起见,上方的两个输出的结果是截断的。 + +**使用 fmt 命令重新格式化文件** + +`fmt` 被用于去“清理”有大量内容或行的文件,或者有多级缩进的文件。新的段落格式每行不会超过75个字符宽,你能通过 -w (width 宽度)参数改变这个设定,它可以设置行宽为一个特定的数值。 + +举个例子,让我们看看当我们用 `fmt` 以 100 个字符的定宽来显示文件 /etc/passwd 时会发生什么。同样,为简洁起见,输出是截断的。 + + # fmt -w100 /etc/passwd + +![File Reformatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Reformatting-in-Linux-with-fmt-command.png) + +*Linux 文件重新格式化* + +**使用 pr 命令格式化打印内容** + +`pr` 
分页并且在按列或多列的方式显示一个或多个文件。 换句话说,使用 `pr` 格式化一个文件使它打印出来时看起来更好。举个例子,下面这个命令: + + # ls -a /etc | pr -n --columns=3 -h "Files in /etc" + +以一个友好的排版方式(3列)输出/etc下的文件,自定义了页眉(通过 -h 选项实现)、行号(-n)。 + +![File Formatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Formatting-in-Linux-with-pr-command.png) + +*Linux的文件格式化* + +### 总结 ### + +在这篇文章中,我们已经讨论了如何在 Shell 或终端以正确的语法输入和执行命令,并解释如何找到,查阅和使用系统文档。正如你看到的一样简单,这就是你成为 RHCSA 的第一大步。 + +如果你希望添加一些其他的你经常使用的能够有效帮你完成你的日常工作的基础命令,并愿意分享它们,请在下方留言。也欢迎提出问题。我们期待您的回复。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ + +作者:[Gabriel Cánepa][a] +译者:[xiqingongzi](https://github.com/xiqingongzi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:https://www.redhat.com/en/services/certification/rhcsa +[2]:http://linux.cn/article-2479-1.html +[3]:https://linux.cn/article-5109-1.html +[4]:http://linux.cn/article-2687-1.html +[5]:http://www.tecmint.com/rename-multiple-files-in-linux/ +[6]:http://linux.cn/article-2740-1.html +[7]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/ +[8]:https://linux.cn/article-3948-1.html +[9]:https://linux.cn/article-3422-1.html +[10]:http://www.tecmint.com/view-contents-of-file-in-linux/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md b/translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md deleted file mode 100644 index 93c2787c7e..0000000000 --- a/translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md +++ /dev/null @@ -1,320 +0,0 @@ -[translating by xiqingongzi] - -RHCSA系列: 复习基础命令及系统文档 – 第一部分 -================================================================================ -RHCSA 
(红帽认证系统工程师) 是由给商业公司提供开源操作系统和软件的RedHat公司举行的认证考试, 除此之外,红帽公司还为这些企业和机构提供支持、训练以及咨询服务 - -![RHCSA Exam Guide](http://www.tecmint.com/wp-content/uploads/2015/02/RHCSA-Series-by-Tecmint.png) - -RHCSA 考试准备指南 - -RHCSA 考试(考试编号 EX200)通过后可以获取由Red Hat 公司颁发的证书. RHCSA 考试是RHCT(红帽认证技师)的升级版,而且RHCSA必须在新的Red Hat Enterprise Linux(红帽企业版)下完成.RHCT和RHCSA的主要变化就是RHCT基于 RHEL5 , 而RHCSA基于RHEL6或者7, 这两个认证的等级也有所不同. - -红帽认证管理员所会的最基础的是在红帽企业版的环境下执行如下系统管理任务: - -- 理解并会使用命令管理文件、目录、命令行以及系统/软件包的文档 -- 使用不同的启动等级启动系统,认证和控制进程,启动或停止虚拟机 -- 使用分区和逻辑卷管理本地存储 -- 创建并且配置本地文件系统和网络文件系统,设置他们的属性(许可、加密、访问控制表) -- 部署、配置、并且控制系统,包括安装、升级和卸载软件 -- 管理系统用户和组,独立使用集中制的LDAP目录权限控制 -- 确保系统安全,包括基础的防火墙规则和SELinux配置 - - -关于你所在国家的考试注册费用参考 [RHCSA Certification page][1]. - -关于你所在国家的考试注册费用参考RHCSA 认证页面 - - -在这个有15章的RHCSA(红帽认证管理员)备考系列,我们将覆盖以下的关于红帽企业Linux第七版的最新的信息 - -- Part 1: 回顾必会的命令和系统文档 -- Part 2: 在RHEL7如何展示文件和管理目录 -- Part 3: 在RHEL7中如何管理用户和组 -- Part 4: 使用nano和vim管理命令/ 使用grep和正则表达式分析文本 -- Part 5: RHEL7的进程管理:启动,关机,以及其他介于二者之间的. -- Part 6: 使用 'Parted'和'SSM'来管理和加密系统存储 -- Part 7: 使用ACLs(访问控制表)并挂载 Samba /NFS 文件分享 -- Part 8: 加固SSH,设置主机名并开启网络服务 -- Part 9: 安装、配置和加固一个Web,FTP服务器 -- Part 10: Yum 包管理方式,使用Cron进行自动任务管理以及监控系统日志 -- Part 11: 使用FirewallD和Iptables设置防火墙,控制网络流量 -- Part 12: 使用Kickstart 自动安装RHEL 7 -- Part 13: RHEL7:什么是SeLinux?他的原理是什么? -- Part 14: 在RHEL7 中使用基于LDAP的权限控制 -- Part 15: RHEL7的虚拟化:KVM 和虚拟机管理 - -在第一章,我们讲解如何输入和运行正确的命令在终端或者Shell窗口,并且讲解如何找到、插入,以及使用系统文档 - -![RHCSA: Reviewing Essential Linux Commands – Part 1](http://www.tecmint.com/wp-content/uploads/2015/02/Reviewing-Essential-Linux-Commands.png) - -RHCSA:回顾必会的Linux命令 - 第一部分 - -#### 前提: #### - -至少你要熟悉如下命令 - -- [cd command][2] (改变目录) -- [ls command][3] (列举文件) -- [cp command][4] (复制文件) -- [mv command][5] (移动或重命名文件) -- [touch command][6] (创建一个新的文件或更新已存在文件的时间表) -- rm command (删除文件) -- mkdir command (创建目录) - -在这篇文章中你将会找到更多的关于如何更好的使用他们的正确用法和特殊用法. - -虽然没有严格的要求,但是作为讨论常用的Linux命令和方法,你应该安装RHEL7 来尝试使用文章中提到的命令.这将会使你学习起来更省力. 
- -- [红帽企业版Linux(RHEL)7 安装指南][7] - -### 使用Shell进行交互 ### -如果我们使用文本模式登陆Linux,我们就无法使用鼠标在默认的shell。另一方面,如果我们使用图形化界面登陆,我们将会通过启动一个终端来开启shell,无论那种方式,我们都会看到用户提示,并且我们可以开始输入并且执行命令(当按下Enter时,命令就会被执行) - - -当我们使用文本模式登陆Linux时, -命令是由两个部分组成的: - -- 命令本身 -- 参数 - -某些参数,称为选项(通常使用一个连字符区分),改变了由其他参数定义的命令操作. - -命令的类型可以帮助我们识别某一个特定的命令是由shell内建的还是由一个单独的包提供。这样的区别在于我们能够找到更多关于该信息的命令,对shell内置的命令,我们需要看shell的ManPage,如果是其他提供的,我们需要看它自己的ManPage. - -![Check Shell built in Commands](http://www.tecmint.com/wp-content/uploads/2015/02/Check-shell-built-in-Commands.png) - -检查Shell的内建命令 - -在上面的例子中, cd 和 type 是shell内建的命令,top和 less 是由其他的二进制文件提供的(在这种情况下,type将返回命令的位置) -其他的内建命令 - -- [echo command][8]: 展示字符串 -- [pwd command][9]: 输出当前的工作目录 - -![More Built in Shell Commands](http://www.tecmint.com/wp-content/uploads/2015/02/More-Built-in-Shell-Commands.png) - -更多内建函数 - -**exec 命令** - -运行我们指定的外部程序。请注意,最好是只输入我们想要运行的程序的名字,不过exec命令有一个特殊的特性:使用旧的shell运行,而不是创建新的进程,可以作为子请求的验证. - - # ps -ef | grep [shell 进程的PID] - -当新的进程注销,Shell也随之注销,运行 exec top 然后按下 q键来退出top,你会注意到shell 会话会结束,如下面的屏幕录像展示的那样: - -注:youtube视频 - - -**export 命令** - -输出之后执行的命令的环境的变量 - -**history 命令** - -展示数行之前的历史命令.在感叹号前输入命令编号可以再次执行这个命令.如果我们需要编辑历史列表中的命令,我们可以按下 Ctrl + r 并输入与命令相关的第一个字符. 
-当我们看到的命令自动补全,我们可以根据我们目前的需要来编辑它: - -注:youtube视频 - - -命令列表会保存在一个叫 .bash_history的文件里.history命令是一个非常有用的用于减少输入次数的工具,特别是进行命令行编辑的时候.默认情况下,bash保留最后输入的500个命令,不过可以通过修改 HISTSIZE 环境变量来增加: - - -![Linux history Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-history-Command.png) - -Linux history 命令 - -但上述变化,在我们的下一次启动不会保留。为了保持HISTSIZE变量的变化,我们需要通过手工修改文件编辑: - - # 设置history请看 HISTSIZE 和 HISTFILESIZE 在 bash(1)的文档 - HISTSIZE=1000 - -**重要**: 我们的更改不会生效,除非我们重启了系统 - -**alias 命令** -没有参数或使用-p参数将会以 名称=值的标准形式输出alias 列表.当提供了参数时,一个alias 将被定义给给定的命令和值 - -使用alias ,我们可以创建我们自己的命令,或修改现有的命令,包括需要的参数.举个例子,假设我们想别名 ls 到 ls –color=auto ,这样就可以使用不同颜色输出文件、目录、链接 - - - # alias ls='ls --color=auto' - -![Linux alias Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-alias-Command.png) - -Linux 别名命令 - -**Note**: 你可以给你的新命令起任何的名字,并且附上足够多的使用单引号分割的参数,但是这样的情况下你要用分号区分开他们. - - # alias myNewCommand='cd /usr/bin; ls; cd; clear' - -**exit 命令** - -Exit和logout命令都是退出shell.exit命令退出所有的shell,logout命令只注销登陆的shell,其他的自动以文本模式启动的shell不算. - -如果我们对某个程序由疑问,我们可以看他的man Page,可以使用man命令调出它,额外的,还有一些重要的文件的手册页(inittab,fstab,hosts等等),库函数,shells,设备及其他功能 - -#### 举例: #### - -- man uname (输出系统信息,如内核名称、处理器、操作系统类型、架构等). -- man inittab (初始化守护设置). - -另外一个重要的信息的来源就是info命令提供的,info命令常常被用来读取信息文件.这些文件往往比manpage 提供更多信息.通过info 关键词调用某个命令的信息 - - # info ls - # info cut - - -另外,在/usr/share/doc 文件夹包含了大量的子目录,里面可以找到大量的文档.他们包含文本文件或其他友好的格式. -确保你使用这三种方法去查找命令的信息。重点关注每个命令文档中介绍的详细的语法 - -**使用expand命令把tabs转换为空格** - -有时候文本文档包含了tabs但是程序无法很好的处理的tabs.或者我们只是简单的希望将tabs转换成空格.这就是为什么expand (GNU核心组件提供)工具出现, - -举个例子,给我们一个文件 NumberList.txt,让我们使用expand处理它,将tabs转换为一个空格.并且以标准形式输出. 
- - # expand --tabs=1 NumbersList.txt - -![Linux expand Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-expand-Command.png) - -Linux expand 命令 - -unexpand命令可以实现相反的功能(将空格转为tab) - -**使用head输出文件首行及使用tail输出文件尾行** - -通常情况下,head命令后跟着文件名时,将会输出该文件的前十行,我们可以通过 -n 参数来自定义具体的行数。 - - # head -n3 /etc/passwd - # tail -n3 /etc/passwd - -![Linux head and tail Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-head-and-tail-Command.png) - -Linux 的 head 和 tail 命令 - -tail 最有意思的一个特性就是能够展现信息(最后一行)就像我们输入文件(tail -f my.log,一行一行的,就像我们在观察它一样。)这在我们监控一个持续增加的日志文件时非常有用 - -更多: [Manage Files Effectively using head and tail Commands][10] - -**使用paste合并文本文件** -paste命令一行一行的合并文件,默认会以tab来区分每一行,或者其他你自定义的分行方式.(下面的例子就是输出使用等号划分行的文件). - # paste -d= file1 file2 - -![Merge Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Merge-Files-in-Linux-with-paste-command.png) - -Merge Files in Linux - -**使用split命令将文件分块** - -split 命令常常用于把一个文件切割成两个或多个文由我们自定义的前缀命名的件文件.这些文件可以通过大小、区块、行数,生成的文件会有一个数字或字母的后缀.在下面的例子中,我们将切割bash.pdf ,每个文件50KB (-b 50KB) ,使用命名后缀 (-d): - - # split -b 50KB -d bash.pdf bash_ - -![Split Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Split-Files-in-Linux-with-split-command.png) - -在Linux下划分文件 - -你可以使用如下命令来合并这些文件,生成源文件: - - # cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf - -**使用tr命令改变字符** - -tr 命令多用于变化(改变)一个一个的字符活使用字符范围.和之前一样,下面的实例我们江使用同样的文件file2,我们将实习: - -- 小写字母 o 变成大写 -- 所有的小写字母都变成大写字母 - - # cat file2 | tr o O - # cat file2 | tr [a-z] [A-Z] - -![Translate Characters in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Translate-characters-in-Linux-with-tr-command.png) - -在Linux中替换文字 - -**使用uniq和sort检查或删除重复的文字** - -uniq命令可以帮我们查出或删除文件中的重复的行,默认会写出到stdout.我们应当注意, uniq 只能查出相邻的两个相同的单纯,所以, uniq 往往和sort 一起使用(sort一般用于对文本文件的内容进行排序) - - -默认的,sort 以第一个参数(使用空格区分)为关键字.想要定义特殊的关键字,我们需要使用 -k参数,请注意如何使用sort 和uniq输出我们想要的字段,具体可以看下面的例子 - - # cat file3 - # sort file3 | uniq - # sort -k2 file3 | uniq - # sort -k3 file3 | uniq - 
-![删除文件中重复的行](http://www.tecmint.com/wp-content/uploads/2015/02/Remove-Duplicate-Lines-in-file.png) - -删除文件中重复的行 - -**从文件中提取文本的命令** - -Cut命令基于字节(-b),字符(-c),或者区块(-f)从stdin活文件中提取到的部分将会以标准的形式展现在屏幕上 - -当我们使用区块切割时,默认的分隔符是一个tab,不过你可以通过 -d 参数来自定义分隔符. - - # cut -d: -f1,3 /etc/passwd # 这个例子提取了第一块和第三块的文本 - # cut -d: -f2-4 /etc/passwd # 这个例子提取了第一块到第三块的文本 - -![从文件中提取文本](http://www.tecmint.com/wp-content/uploads/2015/02/Extract-Text-from-a-file.png) - -从文件中提取文本 - - -注意,上方的两个输出的结果是十分简洁的。 - -**使用fmt命令重新格式化文件** - -fmt 被用于去“清理”有大量内容或行的文件,或者有很多缩进的文件.新的锻炼格式每行不会超过75个字符款,你能改变这个设定通过 -w(width 宽度)参数,它可以设置行宽为一个特定的数值 - -举个例子,让我们看看当我们用fmt显示定宽为100个字符的时候的文件/etc/passwd 时会发生什么.再来一次,输出值变得更加简洁. - - # fmt -w100 /etc/passwd - -![File Reformatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Reformatting-in-Linux-with-fmt-command.png) - -Linux文件重新格式化 - -**使用pr命令格式化打印内容** - -pr 分页并且在列中展示一个或多个用于打印的文件. 换句话说,使用pr格式化一个文件使他打印出来时看起来更好.举个例子,下面这个命令 - - # ls -a /etc | pr -n --columns=3 -h "Files in /etc" - -以一个友好的排版方式(3列)输出/etc下的文件,自定义了页眉(通过 -h 选项实现),行号(-n) - -![File Formatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Formatting-in-Linux-with-pr-command.png) - -Linux的文件格式 - -### 总结 ### - -在这篇文章中,我们已经讨论了如何在Shell或终端以正确的语法输入和执行命令,并解释如何找到,检查和使用系统文档。正如你看到的一样简单,这就是你成为RHCSA的第一大步 - -如果你想添加一些其他的你经常使用的能够有效帮你完成你的日常工作的基础命令,并为分享他们而感到自豪,请在下方留言.也欢迎提出问题.我们期待您的回复. 
- - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:https://www.redhat.com/en/services/certification/rhcsa -[2]:http://www.tecmint.com/cd-command-in-linux/ -[3]:http://www.tecmint.com/ls-command-interview-questions/ -[4]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/ -[5]:http://www.tecmint.com/rename-multiple-files-in-linux/ -[6]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/ -[7]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/ -[8]:http://www.tecmint.com/echo-command-in-linux/ -[9]:http://www.tecmint.com/pwd-command-examples/ -[10]:http://www.tecmint.com/view-contents-of-file-in-linux/ From e82658e0f8f22e40ad56e17aab0fa0690aa1152c Mon Sep 17 00:00:00 2001 From: Ping Date: Wed, 2 Sep 2015 10:04:56 +0800 Subject: [PATCH 415/697] Translating by Ping --- ...l The Latest Linux Kernel in Ubuntu Easily via A Script.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md b/sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md index 7022efd817..cdb1d244d0 100644 --- a/sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md +++ b/sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md @@ -1,3 +1,5 @@ +Translating by Ping + Install The Latest Linux Kernel in Ubuntu Easily via A Script ================================================================================ 
![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) @@ -76,4 +78,4 @@ via: http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ [a]:http://ubuntuhandbook.org/index.php/about/ [1]:http://kernel.ubuntu.com/~kernel-ppa/mainline/ -[2]:https://gist.github.com/mmstick/8493727 \ No newline at end of file +[2]:https://gist.github.com/mmstick/8493727 From 2abff784ad9f38725baf3606b3e12edf761ddb09 Mon Sep 17 00:00:00 2001 From: Ping Date: Wed, 2 Sep 2015 10:29:09 +0800 Subject: [PATCH 416/697] Translating by Ping --- ...switch from NetworkManager to systemd-networkd on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md b/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md index 2f2405043c..bc7ebee015 100644 --- a/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md +++ b/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md @@ -1,3 +1,5 @@ +Translating by Ping + How to switch from NetworkManager to systemd-networkd on Linux ================================================================================ In the world of Linux, adoption of [systemd][1] has been a subject of heated controversy, and the debate between its proponents and critics is still going on. As of today, most major Linux distributions have adopted systemd as a default init system. 
@@ -162,4 +164,4 @@ via: http://xmodulo.com/switch-from-networkmanager-to-systemd-networkd.html [1]:http://xmodulo.com/use-systemd-system-administration-debian.html [2]:http://xmodulo.com/disable-network-manager-linux.html [3]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html -[4]:http://www.freedesktop.org/software/systemd/man/systemd.network.html \ No newline at end of file +[4]:http://www.freedesktop.org/software/systemd/man/systemd.network.html From 53b01d741f1da3218224d7a6bab1a95af8654c5b Mon Sep 17 00:00:00 2001 From: Ping Date: Wed, 2 Sep 2015 14:00:22 +0800 Subject: [PATCH 417/697] Complete Install The Latest Linux Kernel in Ubuntu Easily via A Script --- ...ux Kernel in Ubuntu Easily via A Script.md | 81 ------------------- ...ux Kernel in Ubuntu Easily via A Script.md | 79 ++++++++++++++++++ 2 files changed, 79 insertions(+), 81 deletions(-) delete mode 100644 sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md create mode 100644 translated/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md diff --git a/sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md b/sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md deleted file mode 100644 index cdb1d244d0..0000000000 --- a/sources/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md +++ /dev/null @@ -1,81 +0,0 @@ -Translating by Ping - -Install The Latest Linux Kernel in Ubuntu Easily via A Script -================================================================================ -![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) - -Want to install the latest Linux Kernel? A simple script can always do the job and make things easier in Ubuntu. - -Michael Murphy has created a script makes installing the latest RC, stable, or lowlatency Kernel easier in Ubuntu. 
The script asks some questions and automatically downloads and installs the latest Kernel packages from [Ubuntu kernel mainline page][1]. - -### Install / Upgrade Linux Kernel via the Script: ### - -1. Download the script from the right sidebar of the [github page][2] (click the “Download Zip” button). - -2. Decompress the Zip archive by right-clicking on it in your user Downloads folder and select “Extract Here”. - -3. Navigate to the result folder in terminal by right-clicking on that folder and select “Open in Terminal”: - -![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/open-terminal.jpg) - -It opens a terminal window and automatically navigates into the result folder. If you **DON’T** find the “Open in Terminal” option, search for and install `nautilus-open-terminal` in Ubuntu Software Center and then log out and back in (or run `nautilus -q` command in terminal instead to apply changes). - -4. When you’re in terminal, give the script executable permission for once. - - chmod +x * - -FINALLY run the script every time you want to install / upgrade Linux Kernel in Ubuntu: - - ./* - -![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/run-script.jpg) - -I use * instead of the SCRIPT NAME in both commands since it’s the only file in that folder. - -If the script runs successfully, restart your computer when done. - -### Revert back and Uninstall the new Kernel: ### - -To revert back and remove the new kernel for any reason, restart your computer and select boot with the old kernel entry under **Advanced Options** menu when you’re at Grub boot-loader. - -When it boots up, see below section. - -### How to Remove the old (or new) Kernels: ### - -1. Install Synaptic Package Manager from Ubuntu Software Center. - -2. Launch Synaptic Package Manager and do: - -- click the **Reload** button in case you want to remove the new kernel. -- select **Status -> Installed** on the left pane to make search list clear. 
-- search **linux-image**- using Quick filter box. -- select a kernel image “linux-image-x.xx.xx-generic” and mark for (complete) removal -- finally apply changes - -![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/remove-old-kernel1.jpg) - -Repeat until you removed all unwanted kernels. DON’T carelessly remove the current running kernel, check it out via `uname -r` (see below pic.) command. - -For Ubuntu Server, you may run below commands one by one: - - uname -r - - dpkg -l | grep linux-image- - - sudo apt-get autoremove KERNEL_IMAGE_NAME - -![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/remove-kernel-terminal.jpg) - --------------------------------------------------------------------------------- - -via: http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ - -作者:[Ji m][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ubuntuhandbook.org/index.php/about/ -[1]:http://kernel.ubuntu.com/~kernel-ppa/mainline/ -[2]:https://gist.github.com/mmstick/8493727 diff --git a/translated/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md b/translated/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md new file mode 100644 index 0000000000..dbe5dec7cd --- /dev/null +++ b/translated/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md @@ -0,0 +1,79 @@ +使用脚本便捷地在Ubuntu系统中安装最新的Linux内核 +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) + +想要安装最新的Linux内核吗?一个简单的脚本就可以在Ubuntu系统中方便的完成这项工作。 + +Michael Murphy 写了一个脚本用来将最新的候选版、标准版、或者低延时版内核安装到 Ubuntu 系统中。这个脚本会在询问一些问题后从 [Ubuntu kernel mainline page][1] 下载安装最新的 Linux 内核包。 + +### 通过脚本来安装、升级Linux内核: ### + +1. 点击 [github page][2] 右上角的 “Download Zip” 来下载脚本。 + +2. 
鼠标右键单击用户下载目录下的 Zip 文件,选择 “Extract Here” 将其解压到此处。 + +3. 右键点击解压后的文件夹,选择 “Open in Terminal” 在终端中导航到此文件夹下。 + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/open-terminal.jpg) + +此时将会打开一个终端,并且自动导航到结果文件夹下。如果你找不到 “Open in Terminal” 选项的话,在 Ubuntu 软件中心搜索安装 `nautilus-open-terminal` ,然后重新登录系统即可(也可以再终端中运行 `nautilus -q` 来取代重新登录系统的操作)。 +4. 当进入终端后,运行以下命令来赋予脚本执行本次操作的权限。 + + chmod +x * + +最后,每当你想要安装或升级 Ubuntu 的 linux 内核时都可以运行此脚本。 + + ./* + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/run-script.jpg) + +这里之所以使用 * 替代脚本名称是因为文件夹中只有它一个文件。 + +如果脚本运行成功,重启电脑即可。 + +### 恢复并且卸载新版内核 ### + +如果因为某些原因要恢复并且移除新版内核的话,请重启电脑,在 Grub 启动器的 **高级选项** 菜单下选择旧版内核来启动系统。 + +当系统启动后,参照下边章节继续执行。 + +### 如何移除旧的(或新的)内核: ### + +1. 从Ubuntu软件中心安装 Synaptic Package Manager。 + +2. 打开 Synaptic Package Manager 然后如下操作: + +- 点击 **Reload** 按钮,让想要被删除的新内核显示出来. +- 在左侧面板中选择 **Status -> Installed** ,让查找列表更清晰一些。 +- 在 Quick filter 输入框中输入 **linux-image-** 用于查询。 +- 选择一个内核镜像 “linux-image-x.xx.xx-generic” 然后将其标记为removal(或者Complete Removal) +- 最后,应用变更 + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/remove-old-kernel1.jpg) + +重复以上操作直到移除所有你不需要的内核。注意,不要随意移除此刻正在运行的内核,你可以通过 `uname -r` 命令来查看运行的内核。 + +对于 Ubuntu 服务器来说,你可以一步步运行下面的命令: + + uname -r + + dpkg -l | grep linux-image- + + sudo apt-get autoremove KERNEL_IMAGE_NAME + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/remove-kernel-terminal.jpg) + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/mr-ping) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:http://kernel.ubuntu.com/~kernel-ppa/mainline/ +[2]:https://gist.github.com/mmstick/8493727 + From c87292138d342c6c32a2ee865e4c4aba990d13f1 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 2 Sep 2015 14:40:18 
+0800 Subject: [PATCH 418/697] PUB:20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting @GOLinux --- ...Switch for debugging and troubleshooting.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) rename {translated/tech => published}/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md (71%) diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/published/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md similarity index 71% rename from translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md rename to published/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md index 542cf31cb3..f5afec9a88 100644 --- a/translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md +++ b/published/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md @@ -1,10 +1,10 @@ -Linux有问必答——如何启用Open vSwitch的日志功能以便调试和排障 +Linux有问必答:如何启用Open vSwitch的日志功能以便调试和排障 ================================================================================ > **问题** 我试着为我的Open vSwitch部署排障,鉴于此,我想要检查它的由内建日志机制生成的调试信息。我怎样才能启用Open vSwitch的日志功能,并且修改它的日志等级(如,修改成INFO/DEBUG级别)以便于检查更多详细的调试信息呢? 
-Open vSwitch(OVS)是Linux平台上用于虚拟切换的最流行的开源部署。由于当今的数据中心日益依赖于软件定义的网络(SDN)架构,OVS被作为数据中心的SDN部署中实际上的标准网络元素而快速采用。 +Open vSwitch(OVS)是Linux平台上最流行的开源的虚拟交换机。由于当今的数据中心日益依赖于软件定义网络(SDN)架构,OVS被作为数据中心的SDN部署中的事实标准上的网络元素而得到飞速应用。 -Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允许你在各种切换组件中启用并自定义日志,由VLOG生成的日志信息可以被发送到一个控制台,syslog以及一个独立日志文件组合,以供检查。你可以通过一个名为`ovs-appctl`的命令行工具在运行时动态配置OVS日志。 +Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允许你在各种网络交换组件中启用并自定义日志,由VLOG生成的日志信息可以被发送到一个控制台、syslog以及一个便于查看的单独日志文件。你可以通过一个名为`ovs-appctl`的命令行工具在运行时动态配置OVS日志。 ![](https://farm1.staticflickr.com/499/19300367114_cd8aac2fb2_c.jpg) @@ -14,7 +14,7 @@ Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允 $ sudo ovs-appctl vlog/set module[:facility[:level]] -- **Module**:OVS中的任何合法组件的名称(如netdev,ofproto,dpif,vswitchd,以及其它大量组件) +- **Module**:OVS中的任何合法组件的名称(如netdev,ofproto,dpif,vswitchd等等) - **Facility**:日志信息的目的地(必须是:console,syslog,或者file) - **Level**:日志的详细程度(必须是:emer,err,warn,info,或者dbg) @@ -36,13 +36,13 @@ Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允 ![](https://farm1.staticflickr.com/465/19734939478_7eb5d44635_c.jpg) -输出结果显示了用于三个工具(console,syslog,file)的各个模块的调试级别。默认情况下,所有模块的日志等级都被设置为INFO。 +输出结果显示了用于三个场合(facility:console,syslog,file)的各个模块的调试级别。默认情况下,所有模块的日志等级都被设置为INFO。 -指定任何一个OVS模块,你可以选择性地修改任何特定工具的调试级别。例如,如果你想要在控制台屏幕中查看dpif更为详细的调试信息,可以运行以下命令。 +指定任何一个OVS模块,你可以选择性地修改任何特定场合的调试级别。例如,如果你想要在控制台屏幕中查看dpif更为详细的调试信息,可以运行以下命令。 $ sudo ovs-appctl vlog/set dpif:console:dbg -你将看到dpif模块的console工具已经将其日志等级修改为DBG,而其它两个工具syslog和file的日志级别仍然没有改变。 +你将看到dpif模块的console工具已经将其日志等级修改为DBG,而其它两个场合syslog和file的日志级别仍然没有改变。 ![](https://farm1.staticflickr.com/333/19896760146_5d851311ae_c.jpg) @@ -52,7 +52,7 @@ Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允 ![](https://farm1.staticflickr.com/351/19734939828_8c7f59e404_c.jpg) -同时,如果你想要一次性修改所有三个工具的日志级别,你可以指定“ANY”作为工具名。例如,下面的命令将修改每个模块的所有工具的日志级别为DBG。 +同时,如果你想要一次性修改所有三个场合的日志级别,你可以指定“ANY”作为场合名。例如,下面的命令将修改每个模块的所有场合的日志级别为DBG。 $ sudo ovs-appctl vlog/set ANY:ANY:dbg @@ -62,7 +62,7 @@ via: http://ask.xmodulo.com/enable-logging-open-vswitch.html 
作者:[Dan Nanni][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 31c87ccf4c9bae259d785b66ef06792ee225bfaf Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 2 Sep 2015 15:13:00 +0800 Subject: [PATCH 419/697] PUB:20150811 How to Install Snort and Usage in Ubuntu 15.04 @geekpi --- ...Install Snort and Usage in Ubuntu 15.04.md | 36 ++++++++++--------- 1 file changed, 20 insertions(+), 16 deletions(-) rename {translated/tech => published}/20150811 How to Install Snort and Usage in Ubuntu 15.04.md (77%) diff --git a/translated/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md b/published/20150811 How to Install Snort and Usage in Ubuntu 15.04.md similarity index 77% rename from translated/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md rename to published/20150811 How to Install Snort and Usage in Ubuntu 15.04.md index 06fbfd62b8..01d7f7ec13 100644 --- a/translated/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md +++ b/published/20150811 How to Install Snort and Usage in Ubuntu 15.04.md @@ -1,12 +1,13 @@ -在Ubuntu 15.04中如何安装和使用Snort +在 Ubuntu 15.04 中如何安装和使用 Snort ================================================================================ -对于IT安全而言入侵检测是一件非常重要的事。入侵检测系统用于检测网络中非法与恶意的请求。Snort是一款知名的开源入侵检测系统。Web界面(Snorby)可以用于更好地分析警告。Snort使用iptables/pf防火墙来作为入侵检测系统。本篇中,我们会安装并配置一个开源的IDS系统snort。 + +对于网络安全而言入侵检测是一件非常重要的事。入侵检测系统(IDS)用于检测网络中非法与恶意的请求。Snort是一款知名的开源的入侵检测系统。其 Web界面(Snorby)可以用于更好地分析警告。Snort使用iptables/pf防火墙来作为入侵检测系统。本篇中,我们会安装并配置一个开源的入侵检测系统snort。 ### Snort 安装 ### #### 要求 #### -snort所使用的数据采集库(DAQ)用于抽象地调用采集库。这个在snort上就有。下载过程如下截图所示。 +snort所使用的数据采集库(DAQ)用于一个调用包捕获库的抽象层。这个在snort上就有。下载过程如下截图所示。 ![downloading_daq](http://blog.linoxide.com/wp-content/uploads/2015/07/downloading_daq.png) @@ -48,7 +49,7 @@ make和make install 命令的结果如下所示。 
![snort_extraction](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_extraction.png) -创建安装目录并在脚本中设置prefix参数。同样也建议启用包性能监控(PPM)标志。 +创建安装目录并在脚本中设置prefix参数。同样也建议启用包性能监控(PPM)的sourcefire标志。 #mkdir /usr/local/snort @@ -56,7 +57,7 @@ make和make install 命令的结果如下所示。 ![snort_installation](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_installation.png) -配置脚本由于缺少libpcre-dev、libdumbnet-dev 和zlib开发库而报错。 +配置脚本会由于缺少libpcre-dev、libdumbnet-dev 和zlib开发库而报错。 配置脚本由于缺少libpcre库报错。 @@ -96,7 +97,7 @@ make和make install 命令的结果如下所示。 ![make install snort](http://blog.linoxide.com/wp-content/uploads/2015/07/make-install-snort.png) -最终snort在/usr/local/snort/bin中运行。现在它对eth0的所有流量都处在promisc模式(包转储模式)。 +最后,从/usr/local/snort/bin中运行snort。现在它对eth0的所有流量都处在promisc模式(包转储模式)。 ![snort running](http://blog.linoxide.com/wp-content/uploads/2015/07/snort-running.png) @@ -106,14 +107,17 @@ make和make install 命令的结果如下所示。 #### Snort的规则和配置 #### -从源码安装的snort需要规则和安装配置,因此我们会从/etc/snort下面复制规则和配置。我们已经创建了单独的bash脚本来用于规则和配置。它会设置下面这些snort设置。 +从源码安装的snort还需要设置规则和配置,因此我们需要复制规则和配置到/etc/snort下面。我们已经创建了单独的bash脚本来用于设置规则和配置。它会设置下面这些snort设置。 -- 在linux中创建snort用户用于snort IDS服务。 +- 在linux中创建用于snort IDS服务的snort用户。 - 在/etc下面创建snort的配置文件和文件夹。 -- 权限设置并从etc中复制snortsnort源代码 +- 权限设置并从源代码的etc目录中复制数据。 - 从snort文件中移除规则中的#(注释符号)。 - #!/bin/bash##PATH of source code of snort +- + + #!/bin/bash# + # snort源代码的路径 snort_src="/home/test/Downloads/snort-2.9.7.3" echo "adding group and user for snort..." 
groupadd snort &> /dev/null @@ -141,15 +145,15 @@ make和make install 命令的结果如下所示。 sed -i 's/include \$RULE\_PATH/#include \$RULE\_PATH/' /etc/snort/snort.conf echo "---DONE---" -改变脚本中的snort源目录并运行。下面是成功的输出。 +改变脚本中的snort源目录路径并运行。下面是成功的输出。 ![running script](http://blog.linoxide.com/wp-content/uploads/2015/08/running_script.png) -上面的脚本从snort源中复制下面的文件/文件夹到/etc/snort配置文件中 +上面的脚本从snort源中复制下面的文件和文件夹到/etc/snort配置文件中 ![files copied](http://blog.linoxide.com/wp-content/uploads/2015/08/created.png) -、snort的配置非常复杂,然而为了IDS能正常工作需要进行下面必要的修改。 +snort的配置非常复杂,要让IDS能正常工作需要进行下面必要的修改。 ipvar HOME_NET 192.168.1.0/24 # LAN side @@ -173,7 +177,7 @@ ![path rules](http://blog.linoxide.com/wp-content/uploads/2015/08/path-rules.png) -下载[下载社区][1]规则并解压到/etc/snort/rules。启用snort.conf中的社区及紧急威胁规则。 +现在[下载社区规则][1]并解压到/etc/snort/rules。启用snort.conf中的社区及紧急威胁规则。 ![wget_rules](http://blog.linoxide.com/wp-content/uploads/2015/08/wget_rules.png) @@ -187,7 +191,7 @@ ### 总结 ### -本篇中,我们致力于开源IDPS系统snort在Ubuntu上的安装和配置。默认它用于监控时间,然而它可以被配置成用于网络保护的内联模式。snort规则可以在离线模式中可以使用pcap文件测试和分析 +本篇中,我们关注了开源IDPS系统snort在Ubuntu上的安装和配置。通常它用于监控事件,然而它可以被配置成用于网络保护的在线模式。snort规则可以在离线模式中使用pcap捕获文件进行测试和分析。 -------------------------------------------------------------------------------- via: http://linoxide.com/security/install-snort-usage-ubuntu-15-04/ 作者:[nido][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From e4fcc9361a6538789ae1b6d1c9902828a50cb43e Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 2 Sep 2015 23:47:16 +0800 Subject: [PATCH 420/697] PUB:20150803 Managing Linux Logs MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @wyangsun 辛苦啦,我校对都用了一个下午加一个晚上呢,更别说翻译了! 
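顺带一提,上面这篇 snort 译文的设置脚本里,`sed -i 's/include \$RULE\_PATH/#include \$RULE\_PATH/'` 这条命令的作用是把 snort.conf 中的规则 include 行加上 `#` 注释掉。下面用一个示意性的 Python 片段演示同样的文本替换逻辑(仅作演示,文件内容为假设,并非译文的一部分):

```python
import re

def comment_rule_includes(text):
    """模拟脚本中的 sed 替换:
    在每个 'include $RULE_PATH' 前加 '#',将其注释掉。"""
    return re.sub(r'include \$RULE_PATH', '#include $RULE_PATH', text)

# 假设的 snort.conf 片段
conf = "include $RULE_PATH/community.rules\ninclude $RULE_PATH/local.rules\n"
print(comment_rule_includes(conf))
```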
--- published/20150803 Managing Linux Logs.md | 418 ++++++++++++++++++ .../tech/20150803 Managing Linux Logs.md | 418 ------------------ 2 files changed, 418 insertions(+), 418 deletions(-) create mode 100644 published/20150803 Managing Linux Logs.md delete mode 100644 translated/tech/20150803 Managing Linux Logs.md diff --git a/published/20150803 Managing Linux Logs.md b/published/20150803 Managing Linux Logs.md new file mode 100644 index 0000000000..dca518e531 --- /dev/null +++ b/published/20150803 Managing Linux Logs.md @@ -0,0 +1,418 @@ +Linux 日志管理指南 +================================================================================ + +管理日志的一个最好做法是将你的日志集中或整合到一个地方,特别是在你有许多服务器或多层级架构时。我们将告诉你为什么这是一个好主意,然后给出如何更容易的做这件事的一些小技巧。 + +### 集中管理日志的好处 ### + +如果你有很多服务器,查看某个日志文件可能会很麻烦。现代的网站和服务经常包括许多服务器层级、分布式的负载均衡器,等等。找到正确的日志将花费很长时间,甚至要花更长时间在登录服务器的相关问题上。没什么比发现你找的信息没有被保存下来更沮丧的了,或者本该保留的日志文件正好在重启后丢失了。 + +集中你的日志使它们查找更快速,可以帮助你更快速的解决产品问题。你不用猜测哪个服务器存在问题,因为所有的日志在同一个地方。此外,你可以使用更强大的工具去分析它们,包括日志管理解决方案。一些解决方案能[转换纯文本日志][1]为一些字段,更容易查找和分析。 + +集中你的日志也可以使它们更易于管理: + +- 它们更安全:备份归档到一个单独区域,可以防止它们被有意或无意地丢失。如果你的服务器宕机或者无响应,你可以使用集中的日志去调试问题。 +- 你不用担心ssh或者低效的grep命令在陷入困境的系统上需要更多的资源。 +- 你不用担心磁盘占满,这个能让你的服务器死机。 +- 你能保持你的产品服务器的安全性,只是为了查看日志无需给你所有团队登录权限。给你的团队从日志集中区域访问日志权限更安全。 + +随着集中日志管理,你仍需处理由于网络联通性不好或者耗尽大量网络带宽从而导致不能传输日志到中心区域的风险。在下面的章节我们将要讨论如何聪明的解决这些问题。 + +### 流行的日志归集工具 ### + +在 Linux 上最常见的日志归集是通过使用 syslog 守护进程或者日志代理。syslog 守护进程支持本地日志的采集,然后通过syslog 协议传输日志到中心服务器。你可以使用很多流行的守护进程来归集你的日志文件: + +- [rsyslog][2] 是一个轻量后台程序,在大多数 Linux 分支上已经安装。 +- [syslog-ng][3] 是第二流行的 Linux 系统日志后台程序。 +- [logstash][4] 是一个重量级的代理,它可以做更多高级加工和分析。 +- [fluentd][5] 是另一个具有高级处理能力的代理。 + +Rsyslog 是集中日志数据最流行的后台程序,因为它在大多数 Linux 分支上是被默认安装的。你不用下载或安装它,并且它是轻量的,所以不需要占用你太多的系统资源。 + +如果你需要更先进的过滤或者自定义分析功能,并且不在乎额外的系统负载,Logstash 是另一个最流行的选择。 + +### 配置 rsyslog.conf ### + +既然 rsyslog 是最广泛使用的系统日志程序,我们将展示如何配置它为日志中心。它的全局配置文件位于 /etc/rsyslog.conf。它加载模块,设置全局指令,和包含位于目录 /etc/rsyslog.d 中的应用的特有的配置。目录中包含的 /etc/rsyslog.d/50-default.conf 指示 rsyslog 将系统日志写到文件。在 [rsyslog 
文档][6]中你可以阅读更多相关配置。 + +rsyslog 配置语言是是[RainerScript][7]。你可以给日志指定输入,就像将它们输出到另外一个位置一样。rsyslog 已经配置标准输入默认是 syslog ,所以你通常只需增加一个输出到你的日志服务器。这里有一个 rsyslog 输出到一个外部服务器的配置例子。在本例中,**BEBOP** 是一个服务器的主机名,所以你应该替换为你的自己的服务器名。 + + action(type="omfwd" protocol="tcp" target="BEBOP" port="514") + +你可以发送你的日志到一个有足够的存储容量的日志服务器来存储,提供查询,备份和分析。如果你存储日志到文件系统,那么你应该建立[日志轮转][8]来防止你的磁盘爆满。 + +作为一种选择,你可以发送这些日志到一个日志管理方案。如果你的解决方案是安装在本地你可以发送到系统文档中指定的本地主机和端口。如果你使用基于云提供商,你将发送它们到你的提供商特定的主机名和端口。 + +### 日志目录 ### + +你可以归集一个目录或者匹配一个通配符模式的所有文件。nxlog 和 syslog-ng 程序支持目录和通配符(*)。 + +常见的 rsyslog 不能直接监控目录。作为一种解决办法,你可以设置一个定时任务去监控这个目录的新文件,然后配置 rsyslog 来发送这些文件到目的地,比如你的日志管理系统。举个例子,日志管理提供商 Loggly 有一个开源版本的[目录监控脚本][9]。 + +### 哪个协议: UDP、TCP 或 RELP? ### + +当你使用网络传输数据时,有三个主流协议可以选择。UDP 在你自己的局域网是最常用的,TCP 用在互联网。如果你不能失去(任何)日志,就要使用更高级的 RELP 协议。 + +[UDP][10] 发送一个数据包,那只是一个单一的信息包。它是一个只外传的协议,所以它不会发送给你回执(ACK)。它只尝试发送包。当网络拥堵时,UDP 通常会巧妙的降级或者丢弃日志。它通常使用在类似局域网的可靠网络。 + +[TCP][11] 通过多个包和返回确认发送流式信息。TCP 会多次尝试发送数据包,但是受限于 [TCP 缓存][12]的大小。这是在互联网上发送送日志最常用的协议。 + +[RELP][13] 是这三个协议中最可靠的,但是它是为 rsyslog 创建的,而且很少有行业采用。它在应用层接收数据,如果有错误就会重发。请确认你的日志接受位置也支持这个协议。 + +### 用磁盘辅助队列可靠的传送 ### + +如果 rsyslog 在存储日志时遭遇错误,例如一个不可用网络连接,它能将日志排队直到连接还原。队列日志默认被存储在内存里。无论如何,内存是有限的并且如果问题仍然存在,日志会超出内存容量。 + +**警告:如果你只存储日志到内存,你可能会失去数据。** + +rsyslog 能在内存被占满时将日志队列放到磁盘。[磁盘辅助队列][14]使日志的传输更可靠。这里有一个例子如何配置rsyslog 的磁盘辅助队列: + + $WorkDirectory /var/spool/rsyslog # 暂存文件(spool)放置位置 + $ActionQueueFileName fwdRule1 # 暂存文件的唯一名字前缀 + $ActionQueueMaxDiskSpace 1g # 1gb 空间限制(尽可能大) + $ActionQueueSaveOnShutdown on # 关机时保存日志到磁盘 + $ActionQueueType LinkedList # 异步运行 + $ActionResumeRetryCount -1 # 如果主机宕机,不断重试 + +### 使用 TLS 加密日志 ### + +如果你担心你的数据的安全性和隐私性,你应该考虑加密你的日志。如果你使用纯文本在互联网传输日志,嗅探器和中间人可以读到你的日志。如果日志包含私人信息、敏感的身份数据或者政府管制数据,你应该加密你的日志。rsyslog 程序能使用 TLS 协议加密你的日志保证你的数据更安全。 + +建立 TLS 加密,你应该做如下任务: + +1. 生成一个[证书授权(CA)][15]。在 /contrib/gnutls 有一些证书例子,可以用来测试,但是你需要为产品环境创建自己的证书。如果你正在使用一个日志管理服务,它会给你一个证书。 +1. 为你的服务器生成一个[数字证书][16]使它能启用 SSL 操作,或者使用你自己的日志管理服务提供商的一个数字证书。 +1. 
配置你的 rsyslog 程序来发送 TLS 加密数据到你的日志管理系统。 + +这有一个 rsyslog 配置 TLS 加密的例子。替换 CERT 和 DOMAIN_NAME 为你自己的服务器配置。 + + $DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt + $ActionSendStreamDriver gtls + $ActionSendStreamDriverMode 1 + $ActionSendStreamDriverAuthMode x509/name + $ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com + +### 应用日志的最佳管理方法 ### + +除 Linux 默认创建的日志之外,归集重要的应用日志也是一个好主意。几乎所有基于 Linux 的服务器应用都把它们的状态信息写入到独立、专门的日志文件中。这包括数据库产品,像 PostgreSQL 或者 MySQL,网站服务器,像 Nginx 或者 Apache,防火墙,打印和文件共享服务,目录和 DNS 服务等等。 + +管理员安装一个应用后要做的第一件事是配置它。Linux 应用程序典型的有一个放在 /etc 目录里 .conf 文件。它也可能在其它地方,但是那是大家找配置文件首先会看的地方。 + +根据应用程序有多复杂多庞大,可配置参数的数量可能会很少或者上百行。如前所述,大多数应用程序可能会在某种日志文件写它们的状态:配置文件是定义日志设置和其它东西的地方。 + +如果你不确定它在哪,你可以使用locate命令去找到它: + + [root@localhost ~]# locate postgresql.conf + /usr/pgsql-9.4/share/postgresql.conf.sample + /var/lib/pgsql/9.4/data/postgresql.conf + +#### 设置一个日志文件的标准位置 #### + +Linux 系统一般保存它们的日志文件在 /var/log 目录下。一般是这样,但是需要检查一下应用是否保存它们在 /var/log 下的特定目录。如果是,很好,如果不是,你也许想在 /var/log 下创建一个专用目录?为什么?因为其它程序也在 /var/log 下保存它们的日志文件,如果你的应用保存超过一个日志文件 - 也许每天一个或者每次重启一个 - 在这么大的目录也许有点难于搜索找到你想要的文件。 + +如果在你网络里你有运行多于一个的应用实例,这个方法依然便利。想想这样的情景,你也许有一打 web 服务器在你的网络运行。当排查任何一个机器的问题时,你就很容易知道确切的位置。 + +#### 使用一个标准的文件名 #### + +给你的应用最新的日志使用一个标准的文件名。这使一些事变得容易,因为你可以监控和追踪一个单独的文件。很多应用程序在它们的日志文件上追加一种时间戳。它让 rsyslog 更难于找到最新的文件和设置文件监控。一个更好的方法是使用日志轮转给老的日志文件增加时间。这样更易去归档和历史查询。 + +#### 追加日志文件 #### + +日志文件会在每个应用程序重启后被覆盖吗?如果这样,我们建议关掉它。每次重启 app 后应该去追加日志文件。这样,你就可以追溯重启前最后的日志。 + +#### 日志文件追加 vs. 
轮转 #### + +要是应用程序每次重启后写一个新日志文件,如何保存当前日志?追加到一个单独的、巨大的文件?Linux 系统并不以频繁重启或者崩溃而出名:应用程序可以运行很长时间甚至不间歇,但是也会使日志文件非常大。如果你查询分析上周发生连接错误的原因,你可能无疑的要在成千上万行里搜索。 + +我们建议你配置应用每天半晚轮转(rotate)它的日志文件。 + +为什么?首先它将变得可管理。找一个带有特定日期的文件名比遍历一个文件中指定日期的条目更容易。文件也小的多:你不用考虑当你打开一个日志文件时 vi 僵住。第二,如果你正发送日志到另一个位置 - 也许每晚备份任务拷贝到归集日志服务器 - 这样不会消耗你的网络带宽。最后第三点,这样帮助你做日志保留。如果你想剔除旧的日志记录,这样删除超过指定日期的文件比用一个应用解析一个大文件更容易。 + +#### 日志文件的保留 #### + +你保留你的日志文件多长时间?这绝对可以归结为业务需求。你可能被要求保持一个星期的日志信息,或者管理要求保持一年的数据。无论如何,日志需要在一个时刻或其它情况下从服务器删除。 + +在我们看来,除非必要,只在线保持最近一个月的日志文件,并拷贝它们到第二个地方如日志服务器。任何比这更旧的日志可以被转到一个单独的介质上。例如,如果你在 AWS 上,你的旧日志可以被拷贝到 Glacier。 + +#### 给日志单独的磁盘分区 #### + +更好的,Linux 通常建议挂载到 /var 目录到一个单独的文件系统。这是因为这个目录的高 I/O。我们推荐挂载 /var/log 目录到一个单独的磁盘系统下。这样可以节省与主要的应用数据的 I/O 竞争。另外,如果一些日志文件变的太多,或者一个文件变的太大,不会占满整个磁盘。 + +#### 日志条目 #### + +每个日志条目中应该捕获什么信息? + +这依赖于你想用日志来做什么。你只想用它来排除故障,或者你想捕获所有发生的事?这是一个捕获每个用户在运行什么或查看什么的规则条件吗? + +如果你正用日志做错误排查的目的,那么只保存错误,报警或者致命信息。没有理由去捕获调试信息,例如,应用也许默认记录了调试信息或者另一个管理员也许为了故障排查而打开了调试信息,但是你应该关闭它,因为它肯定会很快的填满空间。在最低限度上,捕获日期、时间、客户端应用名、来源 ip 或者客户端主机名、执行的动作和信息本身。 + +#### 一个 PostgreSQL 的实例 #### + +作为一个例子,让我们看看 vanilla PostgreSQL 9.4 安装的主配置文件。它叫做 postgresql.conf,与其它Linux 系统中的配置文件不同,它不保存在 /etc 目录下。下列的代码段,我们可以在我们的 Centos 7 服务器的 /var/lib/pgsql 目录下找到它: + + root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf + ... + #------------------------------------------------------------------------------ + # ERROR REPORTING AND LOGGING + #------------------------------------------------------------------------------ + # - Where to Log - + log_destination = 'stderr' + # Valid values are combinations of + # stderr, csvlog, syslog, and eventlog, + # depending on platform. csvlog + # requires logging_collector to be on. + # This is used when logging to stderr: + logging_collector = on + # Enable capturing of stderr and csvlog + # into log files. Required to be on for + # csvlogs. 
+ # (change requires restart) + # These are only used if logging_collector is on: + log_directory = 'pg_log' + # directory where log files are written, + # can be absolute or relative to PGDATA + log_filename = 'postgresql-%a.log' # log file name pattern, + # can include strftime() escapes + # log_file_mode = 0600 . + # creation mode for log files, + # begin with 0 to use octal notation + log_truncate_on_rotation = on # If on, an existing log file with the + # same name as the new log file will be + # truncated rather than appended to. + # But such truncation only occurs on + # time-driven rotation, not on restarts + # or size-driven rotation. Default is + # off, meaning append to existing files + # in all cases. + log_rotation_age = 1d + # Automatic rotation of logfiles will happen after that time. 0 disables. + log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables. + # These are relevant when logging to syslog: + #syslog_facility = 'LOCAL0' + #syslog_ident = 'postgres' + # This is only relevant when logging to eventlog (win32): + #event_source = 'PostgreSQL' + # - When to Log - + #client_min_messages = notice # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # log + # notice + # warning + # error + #log_min_messages = warning # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic + #log_min_error_statement = error # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic (effectively off) + #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements + # and their durations, > 0 logs only + # statements running at least this number + # of milliseconds + # - What to Log + #debug_print_parse = off + #debug_print_rewritten = off + 
#debug_print_plan = off + #debug_pretty_print = on + #log_checkpoints = off + #log_connections = off + #log_disconnections = off + #log_duration = off + #log_error_verbosity = default + # terse, default, or verbose messages + #log_hostname = off + log_line_prefix = '< %m >' # special values: + # %a = application name + # %u = user name + # %d = database name + # %r = remote host and port + # %h = remote host + # %p = process ID + # %t = timestamp without milliseconds + # %m = timestamp with milliseconds + # %i = command tag + # %e = SQL state + # %c = session ID + # %l = session line number + # %s = session start timestamp + # %v = virtual transaction ID + # %x = transaction ID (0 if none) + # %q = stop here in non-session + # processes + # %% = '%' + # e.g. '<%u%%%d> ' + #log_lock_waits = off # log lock waits >= deadlock_timeout + #log_statement = 'none' # none, ddl, mod, all + #log_temp_files = -1 # log temporary files equal or larger + # than the specified size in kilobytes;5# -1 disables, 0 logs all temp files5 + log_timezone = 'Australia/ACT' + +虽然大多数参数被加上了注释,它们使用了默认值。我们可以看见日志文件目录是 pg_log(log_directory 参数,在 /var/lib/pgsql/9.4/data/ 下的子目录),文件名应该以 postgresql 开头(log_filename参数),文件每天轮转一次(log_rotation_age 参数)然后每行日志记录以时间戳开头(log_line_prefix参数)。特别值得说明的是 log_line_prefix 参数:全部的信息你都可以包含在这。 + +看 /var/lib/pgsql/9.4/data/pg_log 目录下展现给我们这些文件: + + [root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log + total 20 + -rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log + -rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log + -rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log + -rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log + -rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log + +所以日志文件名只有星期命名的标签。我们可以改变它。如何做?在 postgresql.conf 配置 log_filename 参数。 + +查看一个日志内容,它的条目仅以日期时间开头: + + [root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log + ... 
+ < 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request + < 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions + < 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down + < 2015-02-27 01:21:27.036 EST >LOG: shutting down + < 2015-02-27 01:21:27.211 EST >LOG: database system is shut down + +### 归集应用的日志 ### + +#### 使用 imfile 监控日志 #### + +习惯上,应用通常记录它们数据在文件里。文件容易在一个机器上寻找,但是多台服务器上就不是很恰当了。你可以设置日志文件监控,然后当新的日志被添加到文件尾部后就发送事件到一个集中服务器。在 /etc/rsyslog.d/ 里创建一个新的配置文件然后增加一个配置文件,然后输入如下: + + $ModLoad imfile + $InputFilePollInterval 10 + $PrivDropToGroup adm + +----- + # Input for FILE1 + $InputFileName /FILE1 + $InputFileTag APPNAME1 + $InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled + $InputFileSeverity info + $InputFilePersistStateInterval 20000 + $InputRunFileMonitor + +替换 FILE1 和 APPNAME1 为你自己的文件名和应用名称。rsyslog 将发送它到你配置的输出目标中。 + +#### 本地套接字日志与 imuxsock #### + +套接字类似 UNIX 文件句柄,所不同的是套接字内容是由 syslog 守护进程读取到内存中,然后发送到目的地。不需要写入文件。作为一个例子,logger 命令发送它的日志到这个 UNIX 套接字。 + +如果你的服务器 I/O 有限或者你不需要本地文件日志,这个方法可以使系统资源有效利用。这个方法缺点是套接字有队列大小的限制。如果你的 syslog 守护进程宕掉或者不能保持运行,然后你可能会丢失日志数据。 + +rsyslog 程序将默认从 /dev/log 套接字中读取,但是你需要使用如下命令来让 [imuxsock 输入模块][17] 启用它: + + $ModLoad imuxsock + +#### UDP 日志与 imupd #### + +一些应用程序使用 UDP 格式输出日志数据,这是在网络上或者本地传输日志文件的标准 syslog 协议。你的 syslog 守护进程接受这些日志,然后处理它们或者用不同的格式传输它们。备选的,你可以发送日志到你的日志服务器或者到一个日志管理方案中。 + +使用如下命令配置 rsyslog 通过 UDP 来接收标准端口 514 的 syslog 数据: + + $ModLoad imudp + +---------- + + $UDPServerRun 514 + +### 用 logrotate 管理日志 ### + +日志轮转是当日志到达指定的时期时自动归档日志文件的方法。如果不介入,日志文件一直增长,会用尽磁盘空间。最后它们将破坏你的机器。 + +logrotate 工具能随着日志的日期截取你的日志,腾出空间。你的新日志文件保持该文件名。你的旧日志文件被重命名加上后缀数字。每次 logrotate 工具运行,就会创建一个新文件,然后现存的文件被逐一重命名。你来决定何时旧文件被删除或归档的阈值。 + +当 logrotate 拷贝一个文件,新的文件会有一个新的 inode,这会妨碍 rsyslog 监控新文件。你可以通过增加copytruncate 参数到你的 logrotate 定时任务来缓解这个问题。这个参数会拷贝现有的日志文件内容到新文件然后从现有文件截短这些内容。因为日志文件还是同一个,所以 inode 不会改变;但它的内容是一个新文件。 + +logrotate 工具使用的主配置文件是 /etc/logrotate.conf,应用特有设置在 /etc/logrotate.d/ 目录下。DigitalOcean 有一个详细的 
[logrotate 教程][18] + +### 管理很多服务器的配置 ### + +当你只有很少的服务器,你可以登录上去手动配置。一旦你有几打或者更多服务器,你可以利用工具的优势使这变得更容易和更可扩展。基本上,所有的事情就是拷贝你的 rsyslog 配置到每个服务器,然后重启 rsyslog 使更改生效。 + +#### pssh #### + +这个工具可以让你在很多服务器上并行的运行一个 ssh 命令。使用 pssh 部署仅用于少量服务器。如果其中一个服务器失败,你必须 ssh 到失败的服务器,然后手动部署。如果你有很多服务器失败,那么手动部署它们会花费很长时间。 + +#### Puppet/Chef #### + +Puppet 和 Chef 是两个不同的工具,它们能在你的网络按你规定的标准自动的配置所有服务器。它们的报表工具可以使你了解错误情况,然后定期重新同步。Puppet 和 Chef 都有一些狂热的支持者。如果你不确定哪个更适合你的部署配置管理,你可以拜读一下 [InfoWorld 上这两个工具的对比][19] + +一些厂商也提供一些配置 rsyslog 的模块或者方法。这有一个 Loggly 上 Puppet 模块的例子。它提供给 rsyslog 一个类,你可以添加一个标识令牌: + + node 'my_server_node.example.net' { + # Send syslog events to Loggly + class { 'loggly::rsyslog': + customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000', + } + } + +#### Docker #### + +Docker 使用容器去运行应用,不依赖于底层服务。所有东西都运行在内部的容器,你可以把它想象为一个功能单元。ZDNet 有一篇关于在你的数据中心[使用 Docker][20] 的深入文章。 + +这里有很多方式从 Docker 容器记录日志,包括链接到一个日志容器,记录到一个共享卷,或者直接在容器里添加一个 syslog 代理。其中最流行的日志容器叫做 [logspout][21]。 + +#### 供应商的脚本或代理 #### + +大多数日志管理方案提供一些脚本或者代理,可以从一个或更多服务器相对容易地发送数据。重量级代理会耗尽额外的系统资源。一些供应商像 Loggly 提供配置脚本,来使用现存的 syslog 守护进程更轻松。这有一个 Loggly 上的例子[脚本][22],它能运行在任意数量的服务器上。 + +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[wyangsun](https://github.com/wyangsun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl +[2]:http://www.rsyslog.com/ +[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system +[4]:http://logstash.net/ +[5]:http://www.fluentd.org/ 
+[6]:http://www.rsyslog.com/doc/rsyslog_conf.html +[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html +[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87 +[9]:https://www.loggly.com/docs/file-monitoring/ +[10]:http://www.networksorcery.com/enp/protocol/udp.htm +[11]:http://www.networksorcery.com/enp/protocol/tcp.htm +[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html +[13]:http://www.rsyslog.com/doc/relp.html +[14]:http://www.rsyslog.com/doc/queues.html +[15]:http://www.rsyslog.com/doc/tls_cert_ca.html +[16]:http://www.rsyslog.com/doc/tls_cert_machine.html +[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html +[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10 +[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html +[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/ +[21]:https://github.com/progrium/logspout +[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ diff --git a/translated/tech/20150803 Managing Linux Logs.md b/translated/tech/20150803 Managing Linux Logs.md deleted file mode 100644 index 59b41aa831..0000000000 --- a/translated/tech/20150803 Managing Linux Logs.md +++ /dev/null @@ -1,418 +0,0 @@ -Linux日志管理 -================================================================================ -管理日志的一个关键典型做法是集中或整合你的日志到一个地方,特别是如果你有许多服务器或多层级架构。我们将告诉你为什么这是一个好主意然后给出如何更容易的做这件事的一些小技巧。 - -### 集中管理日志的好处 ### - -如果你有很多服务器,查看单独的一个日志文件可能会很麻烦。现代的网站和服务经常包括许多服务器层级,分布式的负载均衡器,还有更多。这将花费很长时间去获取正确的日志,甚至花更长时间在登录服务器的相关问题上。没什么比发现你找的信息没有被捕获更沮丧的了,或者本能保留答案时正好在重启后丢失了日志文件。 - -集中你的日志使他们查找更快速,可以帮助你更快速的解决产品问题。你不用猜测那个服务器存在问题,因为所有的日志在同一个地方。此外,你可以使用更强大的工具去分析他们,包括日志管理解决方案。一些解决方案能[转换纯文本日志][1]为一些字段,更容易查找和分析。 - -集中你的日志也可以是他们更易于管理: - -- 他们更安全,当他们备份归档一个单独区域时意外或者有意的丢失。如果你的服务器宕机或者无响应,你可以使用集中的日志去调试问题。 -- 
你不用担心ssh或者低效的grep命令需要更多的资源在陷入困境的系统。 -- 你不用担心磁盘占满,这个能让你的服务器死机。 -- 你能保持你的产品服务安全性,只是为了查看日志无需给你所有团队登录权限。给你的团队从中心区域访问日志权限更安全。 - -随着集中日志管理,你仍需处理由于网络联通性不好或者用尽大量网络带宽导致不能传输日志到中心区域的风险。在下面的章节我们将要讨论如何聪明的解决这些问题。 - -### 流行的日志归集工具 ### - -在Linux上最常见的日志归集是通过使用系统日志守护进程或者代理。系统日志守护进程支持本地日志的采集,然后通过系统日志协议传输日志到中心服务器。你可以使用很多流行的守护进程来归集你的日志文件: - -- [rsyslog][2]是一个轻量后台程序在大多数Linux分支上已经安装。 -- [syslog-ng][3]是第二流行的Linux系统日志后台程序。 -- [logstash][4]是一个重量级的代理,他可以做更多高级加工和分析。 -- [fluentd][5]是另一个有高级处理能力的代理。 - -Rsyslog是集中日志数据最流行的后台程序因为他在大多数Linux分支上是被默认安装的。你不用下载或安装它,并且它是轻量的,所以不需要占用你太多的系统资源。 - -如果你需要更多先进的过滤或者自定义分析功能,如果你不在乎额外的系统封装Logstash是下一个最流行的选择。 - -### 配置Rsyslog.conf ### - -既然rsyslog成为最广泛使用的系统日志程序,我们将展示如何配置它为日志中心。全局配置文件位于/etc/rsyslog.conf。它加载模块,设置全局指令,和包含应用特有文件位于目录/etc/rsyslog.d中。这些目录包含/etc/rsyslog.d/50-default.conf命令rsyslog写系统日志到文件。在[rsyslog文档][6]你可以阅读更多相关配置。 - -rsyslog配置语言是是[RainerScript][7]。你建立特定的日志输入就像输出他们到另一个目标。Rsyslog已经配置为系统日志输入的默认标准,所以你通常只需增加一个输出到你的日志服务器。这里有一个rsyslog输出到一个外部服务器的配置例子。在举例中,**BEBOP**是一个服务器的主机名,所以你应该替换为你的自己的服务器名。 - - action(type="omfwd" protocol="tcp" target="BEBOP" port="514") - -你可以发送你的日志到一个有丰富存储的日志服务器来存储,提供查询,备份和分析。如果你正存储日志在文件系统,然后你应该建立[日志转储][8]来防止你的磁盘报满。 - -作为一种选择,你可以发送这些日志到一个日志管理方案。如果你的解决方案是安装在本地你可以发送到您的本地系统文档中指定主机和端口。如果你使用基于云提供商,你将发送他们到你的提供商特定的主机名和端口。 - -### 日志目录 ### - -你可以归集一个目录或者匹配一个通配符模式的所有文件。nxlog和syslog-ng程序支持目录和通配符(*)。 - -rsyslog的通用形式不支持直接的监控目录。一种解决方案,你可以设置一个定时任务去监控这个目录的新文件,然后配置rsyslog来发送这些文件到目的地,比如你的日志管理系统。作为一个例子,日志管理提供商Loggly有一个开源版本的[目录监控脚本][9]。 - -### 哪个协议: UDP, TCP, or RELP? 
### - -当你使用网络传输数据时,你可以选择三个主流的协议。UDP在你自己的局域网是最常用的,TCP是用在互联网。如果你不能失去日志,就要使用更高级的RELP协议。 - -[UDP][10]发送一个数据包,那只是一个简单的包信息。它是一个只外传的协议,所以他不发送给你回执(ACK)。它只尝试发送包。当网络拥堵时,UDP通常会巧妙的降级或者丢弃日志。它通常使用在类似局域网的可靠网络。 - -[TCP][11]通过多个包和返回确认发送流信息。TCP会多次尝试发送数据包,但是受限于[TCP缓存][12]大小。这是在互联网上发送送日志最常用的协议。 - -[RELP][13]是这三个协议中最可靠的,但是它是为rsyslog创建而且很少有行业应用。它在应用层接收数据然后再发出是否有错误。确认你的目标也支持这个协议。 - -### 用磁盘辅助队列可靠的传送 ### - -如果rsyslog在存储日志时遭遇错误,例如一个不可用网络连接,他能将日志排队直到连接还原。队列日志默认被存储在内存里。无论如何,内存是有限的并且如果问题仍然存在,日志会超出内存容量。 - -**警告:如果你只存储日志到内存,你可能会失去数据。** - -Rsyslog能在内存被占满时将日志队列放到磁盘。[磁盘辅助队列][14]使日志的传输更可靠。这里有一个例子如何配置rsyslog的磁盘辅助队列: - - $WorkDirectory /var/spool/rsyslog # where to place spool files - $ActionQueueFileName fwdRule1 # unique name prefix for spool files - $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible) - $ActionQueueSaveOnShutdown on # save messages to disk on shutdown - $ActionQueueType LinkedList # run asynchronously - $ActionResumeRetryCount -1 # infinite retries if host is down - -### 使用TLS加密日志 ### - -当你的安全隐私数据是一个关心的事,你应该考虑加密你的日志。如果你使用纯文本在互联网传输日志,嗅探器和中间人可以读到你的日志。如果日志包含私人信息、敏感的身份数据或者政府管制数据,你应该加密你的日志。rsyslog程序能使用TLS协议加密你的日志保证你的数据更安全。 - -建立TLS加密,你应该做如下任务: - -1. 生成一个[证书授权][15](CA)。在/contrib/gnutls有一些简单的证书,只是有助于测试,但是你需要创建自己的产品证书。如果你正在使用一个日志管理服务,它将有一个证书给你。 -1. 为你的服务器生成一个[数字证书][16]使它能SSL运算,或者使用你自己的日志管理服务提供商的一个数字证书。 -1. 
配置你的rsyslog程序来发送TLS加密数据到你的日志管理系统。 - -这有一个rsyslog配置TLS加密的例子。替换CERT和DOMAIN_NAME为你自己的服务器配置。 - - $DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt - $ActionSendStreamDriver gtls - $ActionSendStreamDriverMode 1 - $ActionSendStreamDriverAuthMode x509/name - $ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com - -### 应用日志的最佳管理方法 ### - -除Linux默认创建的日志之外,归集重要的应用日志也是一个好主意。几乎所有基于Linux的服务器的应用把他们的状态信息写入到独立专门的日志文件。这包括数据库产品,像PostgreSQL或者MySQL,网站服务器像Nginx或者Apache,防火墙,打印和文件共享服务还有DNS服务等等。 - -管理员要做的第一件事是安装一个应用后配置它。Linux应用程序典型的有一个.conf文件在/etc目录里。它也可能在其他地方,但是那是大家找配置文件首先会看的地方。 - -根据应用程序有多复杂多庞大,可配置参数的数量可能会很少或者上百行。如前所述,大多数应用程序可能会在某种日志文件写他们的状态:配置文件是日志设置的地方定义了其他的东西。 - -如果你不确定它在哪,你可以使用locate命令去找到它: - - [root@localhost ~]# locate postgresql.conf - /usr/pgsql-9.4/share/postgresql.conf.sample - /var/lib/pgsql/9.4/data/postgresql.conf - -#### 设置一个日志文件的标准位置 #### - -Linux系统一般保存他们的日志文件在/var/log目录下。如果是,很好,如果不是,你也许想在/var/log下创建一个专用目录?为什么?因为其他程序也在/var/log下保存他们的日志文件,如果你的应用报错多于一个日志文件 - 也许每天一个或者每次重启一个 - 通过这么大的目录也许有点难于搜索找到你想要的文件。 - -如果你有多于一个的应用实例在你网络运行,这个方法依然便利。想想这样的情景,你也许有一打web服务器在你的网络运行。当排查任何一个盒子的问题,你将知道确切的位置。 - -#### 使用一个标准的文件名 #### - -给你的应用最新的日志使用一个标准的文件名。这使一些事变得容易,因为你可以监控和追踪一个单独的文件。很多应用程序在他们的日志上追加一种时间戳。他让rsyslog更难于找到最新的文件和设置文件监控。一个更好的方法是使用日志转储增加时间戳到老的日志文件。这样更易去归档和历史查询。 - -#### 追加日志文件 #### - -日志文件会在每个应用程序重启后被覆盖?如果这样,我们建议关掉它。每次重启app后应该去追加日志文件。这样,你就可以追溯重启前最后的日志。 - -#### 日志文件追加 vs. 
转储 #### - -虽然应用程序每次重启后写一个新日志文件,如何保存当前日志?追加到一个单独文件,巨大的文件?Linux系统不是因频繁重启或者崩溃出名的:应用程序可以运行很长时间甚至不间歇,但是也会使日志文件非常大。如果你查询分析上周发生连接错误的原因,你可能无疑的要在成千上万行里搜索。 - -我们建议你配置应用每天半晚转储它的日志文件。 - -为什么?首先它将变得可管理。找一个有特定日期部分的文件名比遍历一个文件指定日期的条目更容易。文件也小的多:你不用考虑当你打开一个日志文件时vi僵住。第二,如果你正发送日志到另一个位置 - 也许每晚备份任务拷贝到归集日志服务器 - 这样不会消耗你的网络带宽。最后第三点,这样帮助你做日志保持。如果你想剔除旧的日志记录,这样删除超过指定日期的文件比一个应用解析一个大文件更容易。 - -#### 日志文件的保持 #### - -你保留你的日志文件多长时间?这绝对可以归结为业务需求。你可能被要求保持一个星期的日志信息,或者管理要求保持一年的数据。无论如何,日志需要在一个时刻或其他从服务器删除。 - -在我们看来,除非必要,只在线保持最近一个月的日志文件,加上拷贝他们到第二个地方如日志服务器。任何比这更旧的日志可以被转到一个单独的介质上。例如,如果你在AWS上,你的旧日志可以被拷贝到Glacier。 - -#### 给日志单独的磁盘分区 #### - -Linux最典型的方式通常建议挂载到/var目录到一个单独度的文件系统。这是因为这个目录的高I/Os。我们推荐挂在/var/log目录到一个单独的磁盘系统下。这样可以节省与主应用的数据I/O竞争。另外,如果一些日志文件变的太多,或者一个文件变的太大,不会占满整个磁盘。 - -#### 日志条目 #### - -每个日志条目什么信息应该被捕获? - -这依赖于你想用日志来做什么。你只想用它来排除故障,或者你想捕获所有发生的事?这是一个规则条件去捕获每个用户在运行什么或查看什么? - -如果你正用日志做错误排查的目的,只保存错误,报警或者致命信息。没有理由去捕获调试信息,例如,应用也许默认记录了调试信息或者另一个管理员也许为了故障排查使用打开了调试信息,但是你应该关闭它,因为它肯定会很快的填满空间。在最低限度上,捕获日期,时间,客户端应用名,原ip或者客户端主机名,执行动作和它自身信息。 - -#### 一个PostgreSQL的实例 #### - -作为一个例子,让我们看看vanilla(这是一个开源论坛)PostgreSQL 9.4安装主配置文件。它叫做postgresql.conf与其他Linux系统中的配置文件不同,他不保存在/etc目录下。在代码段下,我们可以在我们的Centos 7服务器的/var/lib/pgsql目录下看见: - - root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf - ... - #------------------------------------------------------------------------------ - # ERROR REPORTING AND LOGGING - #------------------------------------------------------------------------------ - # - Where to Log - - log_destination = 'stderr' - # Valid values are combinations of - # stderr, csvlog, syslog, and eventlog, - # depending on platform. csvlog - # requires logging_collector to be on. - # This is used when logging to stderr: - logging_collector = on - # Enable capturing of stderr and csvlog - # into log files. Required to be on for - # csvlogs. 
- # (change requires restart) - # These are only used if logging_collector is on: - log_directory = 'pg_log' - # directory where log files are written, - # can be absolute or relative to PGDATA - log_filename = 'postgresql-%a.log' # log file name pattern, - # can include strftime() escapes - # log_file_mode = 0600 . - # creation mode for log files, - # begin with 0 to use octal notation - log_truncate_on_rotation = on # If on, an existing log file with the - # same name as the new log file will be - # truncated rather than appended to. - # But such truncation only occurs on - # time-driven rotation, not on restarts - # or size-driven rotation. Default is - # off, meaning append to existing files - # in all cases. - log_rotation_age = 1d - # Automatic rotation of logfiles will happen after that time. 0 disables. - log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables. - # These are relevant when logging to syslog: - #syslog_facility = 'LOCAL0' - #syslog_ident = 'postgres' - # This is only relevant when logging to eventlog (win32): - #event_source = 'PostgreSQL' - # - When to Log - - #client_min_messages = notice # values in order of decreasing detail: - # debug5 - # debug4 - # debug3 - # debug2 - # debug1 - # log - # notice - # warning - # error - #log_min_messages = warning # values in order of decreasing detail: - # debug5 - # debug4 - # debug3 - # debug2 - # debug1 - # info - # notice - # warning - # error - # log - # fatal - # panic - #log_min_error_statement = error # values in order of decreasing detail: - # debug5 - # debug4 - # debug3 - # debug2 - # debug1 - # info - # notice - # warning - # error - # log - # fatal - # panic (effectively off) - #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements - # and their durations, > 0 logs only - # statements running at least this number - # of milliseconds - # - What to Log - #debug_print_parse = off - #debug_print_rewritten = off - 
#debug_print_plan = off - #debug_pretty_print = on - #log_checkpoints = off - #log_connections = off - #log_disconnections = off - #log_duration = off - #log_error_verbosity = default - # terse, default, or verbose messages - #log_hostname = off - log_line_prefix = '< %m >' # special values: - # %a = application name - # %u = user name - # %d = database name - # %r = remote host and port - # %h = remote host - # %p = process ID - # %t = timestamp without milliseconds - # %m = timestamp with milliseconds - # %i = command tag - # %e = SQL state - # %c = session ID - # %l = session line number - # %s = session start timestamp - # %v = virtual transaction ID - # %x = transaction ID (0 if none) - # %q = stop here in non-session - # processes - # %% = '%' - # e.g. '<%u%%%d> ' - #log_lock_waits = off # log lock waits >= deadlock_timeout - #log_statement = 'none' # none, ddl, mod, all - #log_temp_files = -1 # log temporary files equal or larger - # than the specified size in kilobytes;5# -1 disables, 0 logs all temp files5 - log_timezone = 'Australia/ACT' - -虽然大多数参数被加上了注释,他们呈现了默认数值。我们可以看见日志文件目录是pg_log(log_directory参数),文件名应该以postgresql开头(log_filename参数),文件每天转储一次(log_rotation_age参数)然后日志记录以时间戳开头(log_line_prefix参数)。特别说明有趣的是log_line_prefix参数:你可以包含很多整体丰富的信息在这。 - -看/var/lib/pgsql/9.4/data/pg_log目录下展现给我们这些文件: - - [root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log - total 20 - -rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log - -rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log - -rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log - -rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log - -rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log - -所以日志文件命只有工作日命名的标签。我们可以改变他。如何做?在postgresql.conf配置log_filename参数。 - -查看一个日志内容,它的条目仅以日期时间开头: - - [root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log - ... 
- < 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request - < 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions - < 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down - < 2015-02-27 01:21:27.036 EST >LOG: shutting down - < 2015-02-27 01:21:27.211 EST >LOG: database system is shut down - -### 集中应用日志 ### - -#### 使用Imfile监控日志 #### - -习惯上,应用通常将它们的数据记录在文件里。文件在单台机器上容易查找,但在多台服务器上就不那么方便了。你可以设置对日志文件的监控,当新的日志被追加到文件末尾时,就发送事件到一个集中服务器。在/etc/rsyslog.d/里创建一个新的配置文件然后增加一个文件输入,像这样: - - $ModLoad imfile - $InputFilePollInterval 10 - $PrivDropToGroup adm - ----------- - - # Input for FILE1 - $InputFileName /FILE1 - $InputFileTag APPNAME1 - $InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled - $InputFileSeverity info - $InputFilePersistStateInterval 20000 - $InputRunFileMonitor - -将FILE1和APPNAME1替换为你自己的文件名和应用名称。rsyslog将把它发送到你配置的输出中。 - -#### 本地套接字日志与Imuxsock #### - -套接字类似UNIX文件句柄,所不同的是套接字的内容由系统日志程序读取到内存中,然后发送到目的地,不需要写入任何文件。例如,logger命令就把它的日志发送到这个UNIX套接字。 - -如果你的服务器I/O有限或者你不需要本地文件日志,这个方法可以有效利用系统资源。这个方法的缺点是套接字有队列大小的限制。如果你的系统日志程序宕掉或者来不及处理,你可能会丢失日志数据。 - -rsyslog程序默认会从/dev/log套接字中读取,但你需要用如下命令加载[imuxsock输入模块][17]使它生效: - - $ModLoad imuxsock - -#### UDP日志与Imudp #### - -一些应用程序使用UDP格式输出日志数据,这是在网络上或者本地传输日志文件的标准系统日志协议。你的系统日志程序收集这些日志,然后处理它们或者以不同的格式转发它们。或者,你也可以把日志发送到你的日志服务器或者日志管理方案中。 - -使用如下命令配置rsyslog,在标准端口514上接收UDP系统日志数据: - - $ModLoad imudp - ----------- - - $UDPServerRun 514 - -### 用Logrotate管理日志 ### - -日志转储是在日志文件到达指定时限时自动归档它们的方法。如果不加干预,日志文件会一直增长,用尽磁盘空间,最终可能会搞垮你的机器。 - -logrotate工具能按日志的时间截断日志,腾出空间。新的日志文件保持原来的文件名,旧的日志文件则被重命名,在文件名后加上数字后缀。每次logrotate运行时,都会建立一个新文件,并将现存的文件逐一重命名。由你来决定删除或归档旧文件的阈值。 - -当logrotate拷贝一个文件时,新的文件会有一个新的索引节点,这会妨碍rsyslog监控新文件。你可以通过增加copytruncate参数到你的logrotate定时任务来缓解这个问题。这个参数把现有日志文件的内容拷贝到新文件,然后将现有文件截短。索引节点从不改变,因为日志文件本身保持不变;被转储走的只是它的内容。 - -logrotate使用的主配置文件是/etc/logrotate.conf,应用特有的设置在/etc/logrotate.d/目录下。DigitalOcean有一个详细的[logrotate教程][18]。 - -### 管理很多服务器的配置 ### - 
-当你只有很少的服务器时,你可以登录上去手动配置。一旦你有几打或者更多的服务器,你就可以利用高级工具使这件事变得更容易、更可扩展。基本上,所有要做的就是把你的rsyslog配置拷贝到每个服务器,然后重启rsyslog使更改生效。 - -#### Pssh #### - -这个工具可以让你在很多服务器上并行地运行一个ssh命令。使用pssh只适合部署少量的服务器。如果其中一个服务器部署失败,你就必须ssh到失败的服务器上手动部署。如果有很多服务器部署失败,那么手动处理它们会花费很长时间。 - -#### Puppet/Chef #### - -Puppet和Chef是两个不同的工具,它们都能按照你规定的标准自动配置你网络中的所有服务器。它们的报表工具可以让你了解错误情况,并定期重新同步配置。Puppet和Chef都有一些狂热的支持者。如果你不确定哪个更适合你的部署配置管理,你可以参考一下[InfoWorld上这两个工具的对比][19]。 - -一些厂商也提供一些配置rsyslog的模块或者方法。下面是一个Loggly的Puppet模块的例子。它提供了一个rsyslog类,你可以在其中添加一个标识令牌: - - node 'my_server_node.example.net' { - # Send syslog events to Loggly - class { 'loggly::rsyslog': - customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000', - } - } - -#### Docker #### - -Docker使用容器来运行应用,而不依赖于底层服务。所有东西都运行在容器内部,你可以把容器想象成一个功能单元。ZDNet有一篇深入的文章,介绍如何在你的数据中心[使用Docker][20]。 - -从Docker容器中收集日志有很多方式,包括链接到一个日志容器、记录到一个共享卷,或者直接在容器里添加一个系统日志代理。其中最流行的日志容器叫做[logspout][21]。 - -#### 供应商的脚本或代理 #### - -大多数日志管理方案都提供一些脚本或者代理,以便简单地从一个或多个服务器发送数据。重量级代理会消耗额外的系统资源。一些供应商(像Loggly)提供配置脚本,让使用现有的系统日志程序变得更轻松。下面是一个Loggly的示例[脚本][22],它能运行在任意数量的服务器上。 - -------------------------------------------------------------------------------- - -via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/ - -作者:[Jason Skowronski][a1] -作者:[Amy Echeverri][a2] -作者:[Sadequl Hussain][a3] -译者:[wyangsun](https://github.com/wyangsun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a1]:https://www.linkedin.com/in/jasonskowronski -[a2]:https://www.linkedin.com/in/amyecheverri -[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 -[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl -[2]:http://www.rsyslog.com/ -[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system -[4]:http://logstash.net/ -[5]:http://www.fluentd.org/ -[6]:http://www.rsyslog.com/doc/rsyslog_conf.html -[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html 
-[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87 -[9]:https://www.loggly.com/docs/file-monitoring/ -[10]:http://www.networksorcery.com/enp/protocol/udp.htm -[11]:http://www.networksorcery.com/enp/protocol/tcp.htm -[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html -[13]:http://www.rsyslog.com/doc/relp.html -[14]:http://www.rsyslog.com/doc/queues.html -[15]:http://www.rsyslog.com/doc/tls_cert_ca.html -[16]:http://www.rsyslog.com/doc/tls_cert_machine.html -[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html -[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10 -[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html -[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/ -[21]:https://github.com/progrium/logspout -[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ From 718a63f33fadbdcb373c9500274f129e3b945cec Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 3 Sep 2015 00:01:52 +0800 Subject: [PATCH 421/697] PUB:20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu @strugglingyouth --- ...r Upgrade to Linux Kernel 4.2 in Ubuntu.md | 34 +++++++------------ 1 file changed, 13 insertions(+), 21 deletions(-) rename {translated/tech => published}/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md (76%) diff --git a/translated/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md b/published/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md similarity index 76% rename from translated/tech/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md rename to published/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md index 71d4985fe0..3737b88438 100644 --- a/translated/tech/20150901 How to Install or Upgrade to 
Linux Kernel 4.2 in Ubuntu.md +++ b/published/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md @@ -1,31 +1,23 @@ - -在 Ubuntu 中如何安装/升级 Linux 内核到4.2 +在 Ubuntu 中如何安装或升级 Linux 内核到4.2 ================================================================================ ![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) -Linux 内核4.2在昨天中午被公布。Linus Torvalds 写了 [lkml.org][1]: - -> 通过这周的小的变动,4.2版本应该不会有问题,毕竟这是最后一周,但在这里有几个补丁,4.2延迟一个星期也会引发问题。 - -> - -> 所以在这里它是,并且4.3的合并窗口现已开放。我已经早期引入了几个悬而未决的请求,但像往常一样,我会从明天开始处理它们,并会发布完成的时间。 - -> - -> 从 rc8 中的 shortlog 非常小,并且是追加的。这个补丁也很完美... +Linux 内核 4.2已经发布了。Linus Torvalds 在 [lkml.org][1] 上写到: +> 通过这周这么小的变动,看来在最后一周 发布 4.2 版本应该不会有问题,当然还有几个修正,但是看起来也并不需要延迟一周。 +> 所以这就到了,而且 4.3 的合并窗口现已打开。我已经有了几个等待处理的合并请求,明天我开始处理它们,然后在适当的时候放出来。 +> 从 rc8 以来的简短日志很小,已经附加。这个补丁也很小... ### 新内核 4.2 有哪些改进?: ### -- 英特尔的x86汇编代码重写 -- 支持新的 ARM 板和 SoCs +- 重写英特尔的x86汇编代码 +- 支持新的 ARM 板和 SoC - 对 F2FS 的 per-file 加密 -- 有 AMDGPU 内核 DRM 驱动程序 -- 使用Radeon DRM 来支持 VCE1 视频编码 -- 初步支持英特尔的 Broxton Atom SoCs -- 支持ARCv2和HS38 CPU内核。 -- 增加了排队自旋锁的支持 +- AMDGPU 的内核 DRM 驱动程序 +- 对 Radeon DRM 驱动的 VCE1 视频编码支持 +- 初步支持英特尔的 Broxton Atom SoC +- 支持 ARCv2 和 HS38 CPU 内核 +- 增加了队列自旋锁的支持 - 许多其他的改进和驱动更新。 ### 在 Ubuntu 中如何下载4.2内核 : ### @@ -84,7 +76,7 @@ via: http://ubuntuhandbook.org/index.php/2015/08/upgrade-kernel-4-2-ubuntu/ 作者:[Ji m][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f2a4f490bd44575688ce6dd289b3cb3254cab1bb Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 3 Sep 2015 13:16:05 +0800 Subject: [PATCH 422/697] translating --- sources/tech/20150901 How to Defragment Linux Systems.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150901 How to Defragment Linux Systems.md b/sources/tech/20150901 How to Defragment Linux Systems.md index 
4b9095c1de..ba1b6a4b5a 100644 --- a/sources/tech/20150901 How to Defragment Linux Systems.md +++ b/sources/tech/20150901 How to Defragment Linux Systems.md @@ -1,3 +1,5 @@ +translating----geekpi + How to Defragment Linux Systems ================================================================================ ![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-featured.png) @@ -122,4 +124,4 @@ via: https://www.maketecheasier.com/defragment-linux/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:https://www.maketecheasier.com/author/attilaorosz/ \ No newline at end of file +[a]:https://www.maketecheasier.com/author/attilaorosz/ From e088fc312c7cc8202e9efd3758bfb7b32b8bfd10 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 3 Sep 2015 14:29:36 +0800 Subject: [PATCH 423/697] translated --- ...0150901 How to Defragment Linux Systems.md | 127 ------------------ ...0150901 How to Defragment Linux Systems.md | 125 +++++++++++++++++ 2 files changed, 125 insertions(+), 127 deletions(-) delete mode 100644 sources/tech/20150901 How to Defragment Linux Systems.md create mode 100644 translated/tech/20150901 How to Defragment Linux Systems.md diff --git a/sources/tech/20150901 How to Defragment Linux Systems.md b/sources/tech/20150901 How to Defragment Linux Systems.md deleted file mode 100644 index ba1b6a4b5a..0000000000 --- a/sources/tech/20150901 How to Defragment Linux Systems.md +++ /dev/null @@ -1,127 +0,0 @@ -translating----geekpi - -How to Defragment Linux Systems -================================================================================ -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-featured.png) - -There is a common myth that Linux disks never need defragmentation at all. In most cases, this is true, due mostly to the excellent journaling filesystems Linux uses (ext2, 3, 4, btrfs, etc.) to handle the filesystem. 
However, in some specific cases, fragmentation might still occur. If that happens to you, the solution is fortunately very simple. - -### What is fragmentation? ### - -Fragmentation occurs when a file system updates files in little chunks, but these chunks do not form a contiguous whole and are scattered around the disk instead. This is particularly true for FAT and FAT32 filesystems. It was somewhat mitigated in NTFS and almost never happens in Linux (extX). Here is why. - -In filesystems such as FAT and FAT32, files are written right next to each other on the disk. There is no room left for file growth or updates: - -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fragmented.png) - -The NTFS leaves somewhat more room between the files, so there is room to grow. As the space between chunks is limited, fragmentation will still occur over time. - -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-ntfs.png) - -Linux’s journaling filesystems take a different approach. Instead of placing files beside each other, each file is scattered all over the disk, leaving generous amounts of free space between each file. There is sufficient room for file updates/growth and fragmentation rarely occurs. - -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-journal.png) - -Additionally, if fragmentation does happen, most Linux filesystems would attempt to shuffle files and chunks around to make them contiguous again. - -### Disk fragmentation on Linux ### - -Disk fragmentation seldom occurs in Linux unless you have a small hard drive, or it is running out of space. 
Some possible fragmentation cases include: - -- if you edit large video files or raw image files, and disk space is limited -- if you use older hardware like an old laptop, and you have a small hard drive -- if your hard drives start filling up (above 85% used) -- if you have many small partitions cluttering your home folder - -The best solution is to buy a larger hard drive. If it’s not possible, this is where defragmentation becomes useful. - -### How to check for fragmentation ### - -The `fsck` command will do this for you – that is, if you have an opportunity to run it from a live CD, with **all affected partitions unmounted**. - -This is very important: **RUNNING FSCK ON A MOUNTED PARTITION CAN AND WILL SEVERELY DAMAGE YOUR DATA AND YOUR DISK**. - -You have been warned. Before proceeding, make a full system backup. - -**Disclaimer**: The author of this article and Make Tech Easier take no responsibility for any damage to your files, data, system, or any other damage, caused by your actions after following this advice. You may proceed at your own risk. If you do proceed, you accept and acknowledge this. - -You should just boot into a live session (like an installer disk, system rescue CD, etc.) and run `fsck` on your UNMOUNTED partitions. To check for any problems, run the following command with root permission: - - fsck -fn [/path/to/your/partition] - -You can check what the `[/path/to/your/partition]` is by running - - sudo fdisk -l - -There is a way to run `fsck` (relatively) safely on a mounted partition – that is by using the `-n` switch. This will result in a read only file system check without touching anything. Of course, there is no guarantee of safety here, and you should only proceed after creating a backup. On an ext2 filesystem, running - - sudo fsck.ext2 -fn /path/to/your/partition - -would result in plenty of output – most of them error messages resulting from the fact that the partition is mounted. 
In the end it will give you fragmentation related information. - -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fsck.png) - -If your fragmentation is above 20%, you should proceed to defragment your system. - -### How to easily defragment Linux filesystems ### - -All you need to do is to back up **ALL** your files and data to another drive (by manually **copying** them over), format the partition, and copy your files back (don’t use a backup program for this). The journalling file system will handle them as new files and place them neatly to the disk without fragmentation. - -To back up your files, run - - cp -afv [/path/to/source/partition]/* [/path/to/destination/folder] - -Mind the asterix (*); it is important. - -Note: It is generally agreed that to copy large files or large amounts of data, the dd command might be best. This is a very low level operation and does copy everything “as is”, including the empty space, and even the junk left over. This is not what we want, so it is probably better to use `cp`. - -Now you only need to remove all the original files. - - sudo rm -rf [/path/to/source/partition]/* - -**Optional**: you can fill the empty space with zeros. You could achieve this with formatting as well, but if for example you did not copy the whole partition, only large files (which are most likely to cause fragmentation), this might not be an option. - - sudo dd if=/dev/zero of=[/path/to/source/partition]/temp-zero.txt - -Wait for it to finish. You could also monitor the progress with `pv`. - - sudo apt-get install pv - sudo pv -tpreb | of=[/path/to/source/partition]/temp-zero.txt - -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-dd.png) - -When it is done, just delete the temporary file. 
- - sudo rm [/path/to/source/partition]/temp-zero.txt - -After you zeroed out the empty space (or just skipped that step entirely), copy your files back, reversing the first cp command: - - cp -afv [/path/to/original/destination/folder]/* [/path/to/original/source/partition] - -### Using e4defrag ### - -If you prefer a simpler approach, install `e2fsprogs`, - - sudo apt-get install e2fsprogs - -and run `e4defrag` as root on the affected partition. If you don’t want to or cannot unmount the partition, you can use its mount point instead of its path. To defragment your whole system, run - - sudo e4defrag / - -It is not guaranteed to succeed while mounted (you should also stop using your system while it is running), but it is much easier than copying all files away and back. - -### Conclusion ### - -Fragmentation should rarely be an issue on a Linux system due to the the journalling filesystem’s efficient data handling. If you do run into fragmentation due to any circumstances, there are simple ways to reallocate your disk space like copying all files away and back or using `e4defrag`. It is important, however, to keep your data safe, so before attempting any operation that would affect all or most of your files, make sure you make a backup just to be on the safe side. 
- -------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/defragment-linux/ - -作者:[Attila Orosz][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/attilaorosz/ diff --git a/translated/tech/20150901 How to Defragment Linux Systems.md b/translated/tech/20150901 How to Defragment Linux Systems.md new file mode 100644 index 0000000000..49d16a8f18 --- /dev/null +++ b/translated/tech/20150901 How to Defragment Linux Systems.md @@ -0,0 +1,125 @@ +如何在Linux中整理磁盘碎片 +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-featured.png) + +有一个流传的神话:Linux的磁盘从来不需要整理碎片。在大多数情况下这是真的,这主要归功于Linux所使用的优秀的日志文件系统(ext2、3、4,btrfs等等)。然而,在一些特殊情况下,碎片仍旧会产生。如果正巧发生在你身上,解决方法很简单。 + +### 什么是磁盘碎片 ### + +当文件系统以小块为单位更新文件,而这些块没有形成连续完整的文件、散布在磁盘的各个角落时,就产生了碎片。FAT和FAT32文件系统尤其如此。这种情况在NTFS中有所减轻,在Linux(extX)中几乎不会发生。下面是原因。 + +在像FAT和FAT32这类文件系统中,文件紧挨着写入到磁盘中。文件之间没有留下空间用于增长或者更新: + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fragmented.png) + +NTFS在文件之间保留了一些空间,因此有空间进行增长。但因为块之间的空间是有限的,碎片仍会随着时间出现。 + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-ntfs.png) + +Linux的日志文件系统采用了一个不同的方案。文件并不是彼此紧挨着放置,而是每个文件分布在磁盘的各处,每个文件之间留下了大量的剩余空间。这样文件有足够的空间用于更新和增长,碎片很少会产生。 + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-journal.png) + +此外,碎片一旦出现了,大多数Linux文件系统会尝试将文件和块重新连续起来。 + +### Linux中的磁盘整理 ### + +除非你用的是一个很小的硬盘或者空间不够了,不然Linux很少会需要磁盘整理。一些可能需要磁盘整理的情况包括: + +- 如果你编辑的是大型视频文件或者RAW图片文件,但磁盘空间有限 +- 如果你使用的是旧硬件,如旧笔记本电脑,并且硬盘很小 +- 如果你的磁盘开始满了(大约使用了85%) +- 如果你的家目录中有许多小分区 + +最好的解决方案是购买一个大硬盘。如果不可能,磁盘碎片整理就很有用了。 + +### 如何检查碎片 ### + +`fsck`命令可以为你检查碎片,前提是你能从live CD中运行它,并且**卸载所有受影响的分区**。 + 
+这一点很重要:**在已经挂载的分区上运行fsck将会严重危害到你的数据和磁盘**。 + +你已经被警告过了。开始之前,先做一个完整的备份。 + +**免责声明**: 本文的作者与Make Tech Easier将不会对您的文件、数据、系统或者其他损害负责。你需要自己承担风险。如果你继续,你需要接受并了解这点。 + +你应该启动到一个live会话中(如安装光盘、系统救援CD等),并在**已卸载**的分区上运行`fsck`。要检查是否有任何问题,请以root权限运行下面的命令: + + fsck -fn [/path/to/your/partition] + +你可以运行以下命令来查看分区`[/path/to/your/partition]`的路径: + + sudo fdisk -l + +有一个(相对)安全地在已挂载的分区上运行`fsck`的方法,就是使用‘-n’开关。它会对文件系统进行只读检查,而不改动任何东西。当然,这里并不能保证安全,你应该在创建备份之后再进行。在ext2文件系统上,运行 + + sudo fsck.ext2 -fn /path/to/your/partition + +会产生大量的输出,其中大多数错误信息的原因是分区已经挂载了。最后它会给出碎片相关的信息。 + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fsck.png) + +如果碎片率大于20%了,那么你应该开始整理你的磁盘碎片了。 + +### 如何简单地在Linux中整理碎片 ### + +你要做的是备份你**所有**的文件和数据到另外一块硬盘中(手动**复制**它们),格式化分区,然后重新复制回去(不要使用备份软件)。日志文件系统会把它们当作新的文件,并将它们整齐地放置到磁盘中而不产生碎片。 + +要备份你的文件,运行 + + cp -afv [/path/to/source/partition]/* [/path/to/destination/folder] + +记住星号(*)是很重要的。 + +注意:通常认为复制大文件或者大量文件时,使用dd或许是最好的。这是一个非常底层的操作,它会“原样”复制一切,包含空闲的空间甚至是残留的垃圾。这不是我们想要的,因此这里最好使用`cp`。 + +现在你只需要删除源文件。 + + sudo rm -rf [/path/to/source/partition]/* + +**可选**:你可以将空闲空间置零。用格式化也可以达到这个目的,但如果你并没有复制整个分区,而只复制了大文件(它们最可能造成碎片),那么格式化可能就不可行了。 + + sudo dd if=/dev/zero of=[/path/to/source/partition]/temp-zero.txt + +等待它结束。你也可以用`pv`来监测进度。 + + sudo apt-get install pv + sudo dd if=/dev/zero | pv -tpreb | sudo dd of=[/path/to/source/partition]/temp-zero.txt + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-dd.png) + +这就完成了,只要删除临时文件就行。 + + sudo rm [/path/to/source/partition]/temp-zero.txt + +当你清零了空闲空间(或者完全跳过了这一步)之后,把文件复制回来,即将第一个cp命令的源和目标对调: + + cp -afv [/path/to/original/destination/folder]/* [/path/to/original/source/partition] + +### 使用 e4defrag ### + +如果你想要更简单的方法,安装`e2fsprogs`, + + sudo apt-get install e2fsprogs + +然后以root权限在受影响的分区上运行 `e4defrag`。如果你不想或不能卸载该分区,你可以使用它的挂载点而不是路径。要整理整个系统的碎片,运行: + + sudo e4defrag / + +在分区已挂载的情况下不保证能成功(它运行时你也应该停止使用你的系统),但是这比把全部文件复制出去再复制回来简单多了。 + +### 总结 ### + +由于日志文件系统对数据的高效处理,Linux系统中很少会出现碎片。如果你因任何原因产生了碎片,简单的解决方法是把所有文件复制出去再复制回来,以重新分配磁盘空间,或者使用`e4defrag`。然而重要的是保证你数据的安全:在进行任何可能影响你全部或者大多数文件的操作之前,确保你的文件已经被备份到了另外一个安全的地方。 
+-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/defragment-linux/ + +作者:[Attila Orosz][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ From eaf2beefe32694ff03cf4b2c5de3ec02f73e9e45 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 3 Sep 2015 18:35:10 +0800 Subject: [PATCH 424/697] PUB:20150826 How to set up a system status page of your infrastructure MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @wyangsun 不错! --- ...stem status page of your infrastructure.md | 295 ++++++++++++++++++ ...stem status page of your infrastructure.md | 294 ----------------- 2 files changed, 295 insertions(+), 294 deletions(-) create mode 100644 published/20150826 How to set up a system status page of your infrastructure.md delete mode 100644 translated/tech/20150826 How to set up a system status page of your infrastructure.md diff --git a/published/20150826 How to set up a system status page of your infrastructure.md b/published/20150826 How to set up a system status page of your infrastructure.md new file mode 100644 index 0000000000..7725538ddd --- /dev/null +++ b/published/20150826 How to set up a system status page of your infrastructure.md @@ -0,0 +1,295 @@ +如何为你的平台部署一个公开的系统状态页 +================================================================================ + +如果你是一个系统管理员,负责关键的 IT 基础设施或公司的服务,你将明白有效的沟通在日常任务中的重要性。假设你的线上存储服务器故障了。你希望团队所有人都了解状况,以便尽快解决问题。当你忙来忙去时,你不会想让一半的人来问你为什么他们不能访问他们的文档。当一个维护计划临近时,你也想提前提醒相关人员,这样可以避免不必要的开销。 + +这一切都要求你改进你、你的团队以及你的服务的用户之间的沟通渠道。一个实现它的方法是维护一个集中的系统状态页面,报告和记录故障停机详情、进度更新和维护计划等。这样,在故障期间你避免了不必要的打扰,也可以提醒一些相关方,以及加入一些可选的状态更新。 + +有一个不错的**开源、自承载的系统状态页解决方案**叫做 [Cachet][1]。在这个教程,我将要描述如何用 Cachet 部署一个自承载系统状态页面。 + +### Cachet 特性 ### + +在详细的配置 Cachet 之前,让我简单的介绍一下它的主要特性。 + +- **全
JSON API**:Cachet API 可以让你使用任意的外部程序或脚本(例如,uptime 脚本)连接到 Cachet 来自动报告突发事件或更新状态。 +- **认证**:Cachet 支持基础认证和 JSON API 的 API 令牌,所以只有认证用户可以更新状态页面。 +- **衡量系统**:这通常用来展现随着时间推移的自定义数据(例如,服务器负载或者响应时间)。 +- **通知**:可选地,你可以给任一注册了状态页面的人发送突发事件的提示邮件。 +- **多语言**:状态页被翻译为11种不同的语言。 +- **双因子认证**:这允许你使用 Google 的双因子认证来提升 Cachet 管理账户的安全性。 +- **跨数据库支持**:你可以选择 MySQL,SQLite,Redis,APC 和 PostgreSQL 作为后端存储。 + +剩下的教程,我会说明如何在 Linux 上安装配置 Cachet。 + +### 第一步:下载和安装 Cachet ### + +Cachet 需要一个 web 服务器和一个后端数据库来运转。在这个教程中,我将使用 LAMP 架构。以下是一些特定发行版上安装 Cachet 和 LAMP 架构的指令。 + +#### Debian,Ubuntu 或者 Linux Mint #### + + $ sudo apt-get install curl git apache2 mysql-server mysql-client php5 php5-mysql + $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet + $ cd /var/www/cachet + $ sudo git checkout v1.1.1 + $ sudo chown -R www-data:www-data . + +在基于 Debian 的系统上设置 LAMP 架构的更多细节,参考这个[教程][2]。 + +#### Fedora, CentOS 或 RHEL #### + +在基于 Red Hat 系统上,你首先需要[设置 REMI 软件库][3](以满足 PHP 的版本需求)。然后执行下面命令。 + + $ sudo yum install curl git httpd mariadb-server + $ sudo yum --enablerepo=remi-php56 install php php-mysql php-mbstring + $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet + $ cd /var/www/cachet + $ sudo git checkout v1.1.1 + $ sudo chown -R apache:apache . 
+ $ sudo firewall-cmd --permanent --zone=public --add-service=http + $ sudo firewall-cmd --reload + $ sudo systemctl enable httpd.service; sudo systemctl start httpd.service + $ sudo systemctl enable mariadb.service; sudo systemctl start mariadb.service + +在基于 Red Hat 系统上设置 LAMP 的更多细节,参考这个[教程][4]。 + +### 第二步:配置 Cachet 的后端数据库 ### + +下一步是配置后端数据库。 + +登录到 MySQL/MariaDB 服务,然后创建一个名为‘cachet’的空数据库。 + + $ sudo mysql -uroot -p + +---------- + + mysql> create database cachet; + mysql> quit + +现在用示例配置文件创建一个 Cachet 配置文件。 + + $ cd /var/www/cachet + $ sudo mv .env.example .env + +在 .env 文件里,填写你自己设置的数据库信息(例如,DB\_\*)。其他的字段先不改变。 + + APP_ENV=production + APP_DEBUG=false + APP_URL=http://localhost + APP_KEY=SomeRandomString + + DB_DRIVER=mysql + DB_HOST=localhost + DB_DATABASE=cachet + DB_USERNAME=root + DB_PASSWORD= + + CACHE_DRIVER=apc + SESSION_DRIVER=apc + QUEUE_DRIVER=database + + MAIL_DRIVER=smtp + MAIL_HOST=mailtrap.io + MAIL_PORT=2525 + MAIL_USERNAME=null + MAIL_PASSWORD=null + MAIL_ADDRESS=null + MAIL_NAME=null + + REDIS_HOST=null + REDIS_DATABASE=null + REDIS_PORT=null + +### 第三步:安装 PHP 依赖和执行数据库迁移 ### + +下面,我们将要安装必要的 PHP 依赖包。我们会使用 composer 来安装。如果你的系统还没有安装 composer,先安装它: + + $ curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer + +现在开始用 composer 安装 PHP 依赖包。 + + $ cd /var/www/cachet + $ sudo composer install --no-dev -o + +下面执行一次性的数据库迁移。这一步会在我们之前创建的数据库里面创建那些所需的表。 + + $ sudo php artisan migrate + +假设在 /var/www/cachet/.env 的数据库配置无误,数据库迁移应该像下面显示一样成功完成。 + +![](https://farm6.staticflickr.com/5814/20235620184_54048676b0_c.jpg) + +下面,创建一个密钥,它将用来加密进入 Cachet 的数据。 + + $ sudo php artisan key:generate + $ sudo php artisan config:cache + +![](https://farm6.staticflickr.com/5717/20831952096_7105c9fdc7_c.jpg) + +生成的应用密钥将自动添加到你的 .env 文件 APP\_KEY 变量中。你不需要自己编辑 .env。 + +### 第四步:配置 Apache HTTP 服务 ### + +现在到了配置运行 Cachet 的 web 服务的时候了。我们使用 Apache HTTP 服务器,为 Cachet 创建一个新的[虚拟主机][5],如下: + +#### Debian,Ubuntu 或 Linux Mint #### + + $ sudo vi
/etc/apache2/sites-available/cachet.conf + +---------- + + <VirtualHost *:80> + ServerName cachethost + ServerAlias cachethost + DocumentRoot "/var/www/cachet/public" + <Directory "/var/www/cachet/public"> + Require all granted + Options Indexes FollowSymLinks + AllowOverride All + Order allow,deny + Allow from all + </Directory> + </VirtualHost> + +启用新虚拟主机和 mod_rewrite: + + $ sudo a2ensite cachet.conf + $ sudo a2enmod rewrite + $ sudo service apache2 restart + +#### Fedora, CentOS 或 RHEL #### + +在基于 Red Hat 系统上,创建一个虚拟主机文件,如下: + + $ sudo vi /etc/httpd/conf.d/cachet.conf + +---------- + + <VirtualHost *:80> + ServerName cachethost + ServerAlias cachethost + DocumentRoot "/var/www/cachet/public" + <Directory "/var/www/cachet/public"> + Require all granted + Options Indexes FollowSymLinks + AllowOverride All + Order allow,deny + Allow from all + </Directory> + </VirtualHost> + +现在重载 Apache 配置: + + $ sudo systemctl reload httpd.service + +### 第五步:配置 /etc/hosts 来测试 Cachet ### + +这时候,初始的 Cachet 状态页面应该启动运行了,现在测试一下。 + +由于 Cachet 被配置为 Apache HTTP 服务的虚拟主机,我们需要调整你的客户机的 /etc/hosts 来访问它。你将从这个客户端电脑访问 Cachet 页面。(LCTT 译注:如果你给了这个页面一个正式的主机地址,则不需要这一步。) + +打开 /etc/hosts,加入如下行: + + $ sudo vi /etc/hosts + +---------- + + <IP 地址> cachethost + +上面的主机名“cachethost”必须与 Cachet 的 Apache 虚拟主机文件中的 ServerName 相匹配。 + +### 测试 Cachet 状态页面 ### + +现在你可以访问 Cachet 状态页面了。在你浏览器地址栏输入 http://cachethost。你将被转到如下的 Cachet 状态页的初始化设置页面。 + +![](https://farm6.staticflickr.com/5745/20858228815_405fce1301_c.jpg) + +选择 cache/session 驱动。这里 cache 和 session 驱动两个都选“File”。 + +下一步,输入关于状态页面的基本信息(例如,站点名称、域名、时区和语言),以及管理员认证账户。 + +![](https://farm1.staticflickr.com/611/20237229693_c22014e4fd_c.jpg) + +![](https://farm6.staticflickr.com/5707/20858228875_b056c9e1b4_c.jpg) + +![](https://farm6.staticflickr.com/5653/20671482009_8629572886_c.jpg) + +你的状态页初始化就要完成了。 + +![](https://farm6.staticflickr.com/5692/20237229793_f6a48f379a_c.jpg) + +继续创建组件(你的系统单元)、事件或者任意你要做的维护计划。 + +例如,增加一个组件: + +![](https://farm6.staticflickr.com/5672/20848624752_9d2e0a07be_c.jpg) + +增加一个维护计划: + +公共 Cachet 状态页就像这样: + +![](https://farm1.staticflickr.com/577/20848624842_df68c0026d_c.jpg) + +集成了 SMTP,你可以在状态更新时发送邮件给订阅者。并且你可以使用 CSS 和 markdown 
格式来完全自定义布局和状态页面。 + +### 结论 ### + +Cachet 是一个相当易于使用、自托管的状态页面软件。Cachet 的一个高级特性是支持全 JSON API。使用它的 RESTful API,Cachet 可以轻松地与单独的监控后端(例如,[Nagios][6])连接,由监控后端向 Cachet 报告突发事件并自动更新状态。这比手工管理一个状态页更快、更有效率。 + +最后,我想提一件事。用 Cachet 设置一个漂亮的状态页面是很简单的,但要将这个软件用好并不像安装它那么容易。你需要让整个 IT 团队养成准确及时更新状态页的习惯,从而建立公开信息的可信度。同时,你需要教会用户去查看状态页面。归根结底,如果状态页面没有好好地维护数据,或者根本没有人查看,那么部署它就没有意义。记住这一点,尤其是当你考虑在你的工作环境中部署 Cachet 时。 + +### 故障排查 ### + +作为补充,万一你安装 Cachet 时遇到问题,这里有一些有用的故障排查技巧。 + +1. Cachet 页面没有加载任何东西,并且你看到如下报错。 + + production.ERROR: exception 'RuntimeException' with message 'No supported encrypter found. The cipher and / or key length are invalid.' in /var/www/cachet/bootstrap/cache/compiled.php:6695 + + **解决方案**:确保你按如下命令创建了应用密钥并配置了缓存。 + + $ cd /path/to/cachet + $ sudo php artisan key:generate + $ sudo php artisan config:cache + +2. 调用 composer 命令时有如下报错。 + + - danielstjules/stringy 1.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. + - laravel/framework v5.1.8 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. + - league/commonmark 0.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. + + **解决方案**:确保在你的系统上安装了必要的 PHP 扩展 mbstring,并且它兼容你的 PHP 版本。在基于 Red Hat 的系统上,由于我们从 remi-php56 库安装 PHP,所以要从同一个库安装该扩展。 + + $ sudo yum --enablerepo=remi-php56 install php-mbstring + +3. 
你访问 Cachet 状态页面时得到一个白屏。HTTP 日志显示如下错误。 + + PHP Fatal error: Uncaught exception 'UnexpectedValueException' with message 'The stream or file "/var/www/cachet/storage/logs/laravel-2015-08-21.log" could not be opened: failed to open stream: Permission denied' in /var/www/cachet/bootstrap/cache/compiled.php:12851 + + **解决方案**:尝试如下命令。 + + $ cd /var/www/cachet + $ sudo php artisan cache:clear + $ sudo chmod -R 777 storage + $ sudo composer dump-autoload + + 如果上面的方法不起作用,试试禁止SELinux: + + $ sudo setenforce 0 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/setup-system-status-page.html + +作者:[Dan Nanni][a] +译者:[wyangsun](https://github.com/wyangsun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:https://cachethq.io/ +[2]:http://xmodulo.com/install-lamp-stack-ubuntu-server.html +[3]:https://linux.cn/article-4192-1.html +[4]:https://linux.cn/article-5789-1.html +[5]:http://xmodulo.com/configure-virtual-hosts-apache-http-server.html +[6]:http://xmodulo.com/monitor-common-services-nagios.html diff --git a/translated/tech/20150826 How to set up a system status page of your infrastructure.md b/translated/tech/20150826 How to set up a system status page of your infrastructure.md deleted file mode 100644 index 53c97670d8..0000000000 --- a/translated/tech/20150826 How to set up a system status page of your infrastructure.md +++ /dev/null @@ -1,294 +0,0 @@ -如何部署一个你的公共系统状态页面 -================================================================================ -如果你是一个系统管理员,负责关键的IT基础设置或你公司的服务,你将明白有效的沟通在日常任务中的重要性。假设你的线上存储服务器故障了。你希望团队所有人达成共识你好尽快的解决问题。当你忙来忙去时,你不想一半的人问你为什么他们不能访问他们的文档。当一个维护计划快到时间了你想在计划前提醒相关人员,这样避免了不必要的开销。 - -这一切的要求或多或少改进了你和你的团队,用户和你的服务的沟通渠道。一个实现它方法是维护一个集中的系统状态页面,故障停机详情,进度更新和维护计划会被报告和记录。这样,在故障期间你避免了不必要的打扰,也有一些相关方提供的资料和任何选状态更新择性加入。 - -一个不错的**开源, 自承载系统状态页面**是is 
[Cachet][1]。在这个教程,我将要描述如何用Cachet部署一个自承载系统状态页面。 - -### Cachet 特性 ### - -在详细的配置Cachet之前,让我简单的介绍一下它的主要特性。 - -- **全JSON API**:Cachet API允许你使用任意外部程序或脚本(例如,uptime脚本)链接到Cachet来报告突发事件或自动更新状态。 -- **认证**:Cachet支持基础认证和JSON API的API令牌,所以只有认证用户可以更新状态页面。 -- **衡量系统**:这通常用来展现随着时间推移的自定义数据(例如,服务器负载或者相应时间)。 -- **通知**:你可以随意的发送通知邮件,报告事件给任一注册了状态页面的人。 -- **多语言**:状态也可以被转换为11种不同的语言。 -- **双因子认证**:这允许你使用Google的双因子认证管理账户锁定你的Cachet(什么事Google?呵呵!)。 -- **支持交叉数据库**:你可以选择MySQL,SQLite,Redis,APC和PostgreSQL作为后端存储。 - -剩下的教程,我说明如何在Linux上安装配置Cachet。 - -### 第一步:下载和安装Cachet ### - -Cachet需要一个web服务器和一个后端数据库来运转。在这个教程中,我将使用LAMP架构。这里有特定发行版安装Cachet和LAMP架构的指令。 - -#### Debian, Ubuntu 或者 Linux Mint #### - - $ sudo apt-get install curl git apache2 mysql-server mysql-client php5 php5-mysql - $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet - $ cd /var/www/cachet - $ sudo git checkout v1.1.1 - $ sudo chown -R www-data:www-data . - -在基于Debian的系统上更多详细的设置LAMP架构,参考这个[教程][2]。 - -#### Fedora, CentOS 或 RHEL #### - -在基于Red Hat系统上,你首先需要[设置REMI资源库][3](以满足PHP版本需求)。然后执行下面命令。 - - $ sudo yum install curl git httpd mariadb-server - $ sudo yum --enablerepo=remi-php56 install php php-mysql php-mbstring - $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet - $ cd /var/www/cachet - $ sudo git checkout v1.1.1 - $ sudo chown -R apache:apache . 
- $ sudo firewall-cmd --permanent --zone=public --add-service=http - $ sudo firewall-cmd --reload - $ sudo systemctl enable httpd.service; sudo systemctl start httpd.service - $ sudo systemctl enable mariadb.service; sudo systemctl start mariadb.service - -在基于Red Hat系统上更多详细设置LAMP,参考这个[教程][4]。 - -### 配置Cachet的后端数据库### - -下一步是配置后端数据库。 - -登陆到MySQL/MariaDB服务,然后创建一个空的数据库称为‘cachet’。 - - $ sudo mysql -uroot -p - ----------- - - mysql> create database cachet; - mysql> quit - -现在用一个样本配置文件创建一个Cachet配置文件。 - - $ cd /var/www/cachet - $ sudo mv .env.example .env - -在.env文件里,填写你自己设置的数据库信息(例如,DB\_\*)。其他的字段先不改变。 - - APP_ENV=production - APP_DEBUG=false - APP_URL=http://localhost - APP_KEY=SomeRandomString - - DB_DRIVER=mysql - DB_HOST=localhost - DB_DATABASE=cachet - DB_USERNAME=root - DB_PASSWORD= - - CACHE_DRIVER=apc - SESSION_DRIVER=apc - QUEUE_DRIVER=database - - MAIL_DRIVER=smtp - MAIL_HOST=mailtrap.io - MAIL_PORT=2525 - MAIL_USERNAME=null - MAIL_PASSWORD=null - MAIL_ADDRESS=null - MAIL_NAME=null - - REDIS_HOST=null - REDIS_DATABASE=null - REDIS_PORT=null - -### 第三步:安装PHP依赖和执行数据库迁移 ### - -下面,我们将要安装必要的PHP依赖包。所以我们将使用composer。如果你的系统还没有安装composer,先安装它: - - $ curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer - -现在开始用composer安装PHP依赖包。 - - $ cd /var/www/cachet - $ sudo composer install --no-dev -o - -下面执行一次数据库迁移。这一步将我们早期创建的必要表填充到数据库。 - - $ sudo php artisan migrate - -假设数据库配置在/var/www/cachet/.env是正确的,数据库迁移应该像下面显示一样完成成功。 - -![](https://farm6.staticflickr.com/5814/20235620184_54048676b0_c.jpg) - -下面,创建一个密钥,它将用来加密进入Cachet的数据。 - - $ sudo php artisan key:generate - $ sudo php artisan config:cache - -![](https://farm6.staticflickr.com/5717/20831952096_7105c9fdc7_c.jpg) - -生成的应用密钥将自动添加到你的.env文件APP\_KEY变量中。你不需要单独编辑.env。 - -### 第四步:配置Apache HTTP服务 ### - -现在到了配置web服务的时候,Cachet将运行在上面。我们使用Apache HTTP服务器,为Cachet创建一个新的[虚拟主机][5]如下所述。 - -#### Debian, Ubuntu 或 Linux Mint #### - - $ sudo vi /etc/apache2/sites-available/cachet.conf - ----------- - 
- - ServerName cachethost - ServerAlias cachethost - DocumentRoot "/var/www/cachet/public" - - Require all granted - Options Indexes FollowSymLinks - AllowOverride All - Order allow,deny - Allow from all - - - -启用新虚拟主机和mod_rewrite: - - $ sudo a2ensite cachet.conf - $ sudo a2enmod rewrite - $ sudo service apache2 restart - -#### Fedora, CentOS 或 RHEL #### - -在基于Red Hat系统上,创建一个虚拟主机文件如下所述。 - - $ sudo vi /etc/httpd/conf.d/cachet.conf - ----------- - - - ServerName cachethost - ServerAlias cachethost - DocumentRoot "/var/www/cachet/public" - - Require all granted - Options Indexes FollowSymLinks - AllowOverride All - Order allow,deny - Allow from all - - - -现在重载Apache配置: - - $ sudo systemctl reload httpd.service - -### 第五步:配置/etc/hosts来测试Cachet ### - -这时候,初始的Cachet状态页面应该启动运行了,现在测试一下。 - -由于Cachet被配置为Apache HTTP服务的虚拟主机,我们需要调整你的客户机的/etc/hosts来访问他。你将从这个客户端电脑访问Cachet页面。 - -Open /etc/hosts, and add the following entry. - - $ sudo vi /etc/hosts - ----------- - - cachethost - -上面名为“cachethost”必须匹配Cachet的Apache虚拟主机文件的ServerName。 - -### 测试Cachet状态页面 ### - -现在你准备好访问Cachet状态页面。在你浏览器地址栏输入http://cachethost。你将被转到初始Cachet状态页如下。 - -![](https://farm6.staticflickr.com/5745/20858228815_405fce1301_c.jpg) - -选择cache/session驱动。这里cache和session驱动两个都选“File”。 - -下一步,输入关于状态页面的基本信息(例如,站点名称,域名,时区和语言),以及管理员认证账户。 - -![](https://farm1.staticflickr.com/611/20237229693_c22014e4fd_c.jpg) - -![](https://farm6.staticflickr.com/5707/20858228875_b056c9e1b4_c.jpg) - -![](https://farm6.staticflickr.com/5653/20671482009_8629572886_c.jpg) - -你的初始状态页将要最终完成。 - -![](https://farm6.staticflickr.com/5692/20237229793_f6a48f379a_c.jpg) - -继续创建组件(你的系统单位),事件或者任意你想要的维护计划。 - -例如,增加一个组件: - -![](https://farm6.staticflickr.com/5672/20848624752_9d2e0a07be_c.jpg) - -增加一个维护计划: - -公共Cachet状态页就像这样: - -![](https://farm1.staticflickr.com/577/20848624842_df68c0026d_c.jpg) - -集成SMTP,你可以在状态更新时发送邮件给订阅者。并且你可以完全自定义布局和状态页面使用的CSS和markdown格式。 - -### 结论 ### - -Cachet是一个相当易于使用,自托管的状态页面软件。Cachet一个高级特性是支持全JSON API。使用它的RESTful 
API,Cachet可以轻松连接单独的监控后端(例如,[Nagios][6]),然后将事件报告回馈给Cachet,自动更新状态。比起手动管理一个状态页,这样更快、也更有效率。
-
-最后,我想提一点。虽然用Cachet可以轻松地搭建一个漂亮的状态页面,但要用好这个软件并不像安装它那么容易。你需要确保整个IT团队养成准确及时更新状态页的习惯,从而建立公开信息的公信力。同时,你需要教会用户去查看状态页面。说到底,如果状态页面没有被好好地维护,或者根本没有人去看它,部署它就没有什么意义。当你考虑在你的工作环境中部署Cachet时,请记住这一点。
-
-### 故障排查 ###
-
-另外,万一你安装Cachet时遇到问题,这里有一些有用的故障排查技巧。
-
-1. Cachet页面没有加载任何东西,并且你看到如下报错。
-
-    production.ERROR: exception 'RuntimeException' with message 'No supported encrypter found. The cipher and / or key length are invalid.' in /var/www/cachet/bootstrap/cache/compiled.php:6695
-
-**解决方案**:确保你创建了一个应用密钥,并按如下方式显式地配置缓存。
-
-    $ cd /path/to/cachet
-    $ sudo php artisan key:generate
-    $ sudo php artisan config:cache
-
-2. 调用composer命令时有如下报错。
-
-    - danielstjules/stringy 1.10.0 requires ext-mbstring *
-the requested PHP extension mbstring is missing from your system.
-    - laravel/framework v5.1.8 requires ext-mbstring *
-the requested PHP extension mbstring is missing from your system.
-    - league/commonmark 0.10.0 requires ext-mbstring *
-the requested PHP extension mbstring is missing from your system.
-
-**解决方案**:确保你的系统上安装了必要的PHP扩展mbstring,并且它与你的PHP版本兼容。在基于Red Hat的系统上,由于我们从REMI-56库安装PHP,所以要从同一个库安装该扩展。
-
-    $ sudo yum --enablerepo=remi-php56 install php-mbstring
-
-3. 
你访问Cachet状态页面时得到一个白屏。HTTP日志显示如下错误。 - - PHP Fatal error: Uncaught exception 'UnexpectedValueException' with message 'The stream or file "/var/www/cachet/storage/logs/laravel-2015-08-21.log" could not be opened: failed to open stream: Permission denied' in /var/www/cachet/bootstrap/cache/compiled.php:12851 - -**解决方案**:尝试如下命令。 - - $ cd /var/www/cachet - $ sudo php artisan cache:clear - $ sudo chmod -R 777 storage - $ sudo composer dump-autoload - -如果上面的方法不起作用,试试禁止SELinux: - - $ sudo setenforce 0 - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/setup-system-status-page.html - -作者:[Dan Nanni][a] -译者:[wyangsun](https://github.com/wyangsun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:https://cachethq.io/ -[2]:http://xmodulo.com/install-lamp-stack-ubuntu-server.html -[3]:http://ask.xmodulo.com/install-remi-repository-centos-rhel.html -[4]:http://xmodulo.com/install-lamp-stack-centos.html -[5]:http://xmodulo.com/configure-virtual-hosts-apache-http-server.html -[6]:http://xmodulo.com/monitor-common-services-nagios.html From df99ff42a451c077b8a665d5a80240e6efd3741e Mon Sep 17 00:00:00 2001 From: Luoyuanhao Date: Fri, 4 Sep 2015 11:25:31 +0800 Subject: [PATCH 425/697] [Translated] tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md --- ...ation GA with OData in Docker Container.md | 103 ----------------- ...ation GA with OData in Docker Container.md | 105 ++++++++++++++++++ 2 files changed, 105 insertions(+), 103 deletions(-) delete mode 100644 sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md create mode 100644 translated/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md diff --git a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker 
Container.md b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md deleted file mode 100644 index 8f5b4a68d3..0000000000 --- a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md +++ /dev/null @@ -1,103 +0,0 @@ -ictlyh Translating -Howto Run JBoss Data Virtualization GA with OData in Docker Container -================================================================================ -Hi everyone, today we'll learn how to run JBoss Data Virtualization 6.0.0.GA with OData in a Docker Container. JBoss Data Virtualization is a data supply and integration solution platform that transforms various scatered multiple sources data, treats them as single source and delivers the required data into actionable information at business speed to any applications or users. JBoss Data Virtualization can help us easily combine and transform data into reusable business friendly data models and make unified data easily consumable through open standard interfaces. It offers comprehensive data abstraction, federation, integration, transformation, and delivery capabilities to combine data from one or multiple sources into reusable for agile data utilization and sharing.For more information about JBoss Data Virtualization, we can check out [its official page][1]. Docker is an open source platform that provides an open platform to pack, ship and run any application as a lightweight container. Running JBoss Data Virtualization with OData in Docker Container makes us easy to handle and launch. - -Here are some easy to follow tutorial on how we can run JBoss Data Virtualization with OData in Docker Container. - -### 1. Cloning the Repository ### - -First of all, we'll wanna clone the repository of OData with Data Virtualization ie [https://github.com/jbossdemocentral/dv-odata-docker-integration-demo][2] using git command. As we have an Ubuntu 15.04 distribution of linux running in our machine. 
We'll need to install git initially using apt-get command. - - # apt-get install git - -Then after installing git, we'll wanna clone the repository by running the command below. - - # git clone https://github.com/jbossdemocentral/dv-odata-docker-integration-demo - - Cloning into 'dv-odata-docker-integration-demo'... - remote: Counting objects: 96, done. - remote: Total 96 (delta 0), reused 0 (delta 0), pack-reused 96 - Unpacking objects: 100% (96/96), done. - Checking connectivity... done. - -### 2. Downloading JBoss Data Virtualization Installer ### - -Now, we'll need to download JBoss Data Virtualization Installer from the Download Page ie [http://www.jboss.org/products/datavirt/download/][3] . After we download **jboss-dv-installer-6.0.0.GA-redhat-4.jar**, we'll need to keep it under the directory named **software**. - -### 3. Building the Docker Image ### - -Next, after we have downloaded the JBoss Data Virtualization installer, we'll then go for building the docker image using the Dockerfile and its resources we had just cloned from the repository. - - # cd dv-odata-docker-integration-demo/ - # docker build -t jbossdv600 . - - ... - Step 22 : USER jboss - ---> Running in 129f701febd0 - ---> 342941381e37 - Removing intermediate container 129f701febd0 - Step 23 : EXPOSE 8080 9990 31000 - ---> Running in 61e6d2c26081 - ---> 351159bb6280 - Removing intermediate container 61e6d2c26081 - Step 24 : CMD $JBOSS_HOME/bin/standalone.sh -c standalone.xml -b 0.0.0.0 -bmanagement 0.0.0.0 - ---> Running in a9fed69b3000 - ---> 407053dc470e - Removing intermediate container a9fed69b3000 - Successfully built 407053dc470e - -Note: Here, we assume that you have already installed docker and is running in your machine. - -### 4. Starting the Docker Container ### - -As we have built the Docker Image of JBoss Data Virtualization with oData, we'll now gonna run the docker container and expose its port with -P flag. To do so, we'll run the following command. 
- - # docker run -p 8080:8080 -d -t jbossdv600 - - 7765dee9cd59c49ca26850e88f97c21f46859d2dc1d74166353d898773214c9c - -### 5. Getting the Container IP ### - -After we have started the Docker Container, we'll wanna get the IP address of the running docker container. To do so, we'll run the docker inspect command followed by the running container id. - - # docker inspect <$containerID> - - ... - "NetworkSettings": { - "Bridge": "", - "EndpointID": "3e94c5900ac5954354a89591a8740ce2c653efde9232876bc94878e891564b39", - "Gateway": "172.17.42.1", - "GlobalIPv6Address": "", - "GlobalIPv6PrefixLen": 0, - "HairpinMode": false, - "IPAddress": "172.17.0.8", - "IPPrefixLen": 16, - "IPv6Gateway": "", - "LinkLocalIPv6Address": "", - "LinkLocalIPv6PrefixLen": 0, - -### 6. Web Interface ### - -Now, if everything went as expected as done above, we'll gonna see the login screen of JBoss Data Virtualization with oData when pointing our web browser to http://container-ip:8080/ and the JBoss Management from http://container-ip:9990. The Management credentials for username is admin and password is redhat1! whereas the Data virtualization credentials for username is user and password is user . After that, we can navigate the contents via the web interface. - -**Note**: It is strongly recommended to change the password as soon as possible after the first login. Thanks :) - -### Conclusion ### - -Finally we've successfully run Docker Container running JBoss Data Virtualization with OData Multisource Virtual Database. JBoss Data Virtualization is really an awesome platform for the virtualization of data from different multiple source and transform them into reusable business friendly data models and produces data easily consumable through open standard interfaces. The deployment of JBoss Data Virtualization with OData Multisource Virtual Database has been very easy, secure and fast to setup with the Docker Technology. 
If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! Enjoy :-)
-
--------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/run-jboss-data-virtualization-ga-odata-docker-container/
-
-作者:[Arun Pyasi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arunp/
-[1]:http://www.redhat.com/en/technologies/jboss-middleware/data-virtualization
-[2]:https://github.com/jbossdemocentral/dv-odata-docker-integration-demo
-[3]:http://www.jboss.org/products/datavirt/download/
diff --git a/translated/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md b/translated/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md
new file mode 100644
index 0000000000..4d14bbc904
--- /dev/null
+++ b/translated/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md
@@ -0,0 +1,105 @@
+如何在 Docker 容器中运行支持 OData 的 JBoss 数据虚拟化 GA
+================================================================================
+大家好,我们今天来学习如何在一个 Docker 容器中运行支持 OData(译者注:Open Data Protocol,开放数据协议)的 JBoss 数据虚拟化 6.0.0 GA(译者注:GA,General Availability,具体定义可以查看[WIKI][4])。JBoss 数据虚拟化是一个数据提供和集成解决方案平台,它能将多种分散的数据源当作单一数据源统一对待,并在正确的时间将所需数据传递给任意的应用或者用户。JBoss 数据虚拟化可以帮助我们将数据快速组合和转换为可重用的、商业友好的数据模型,并通过开放标准接口使统一后的数据易于使用。它提供全面的数据抽取、联合、集成、转换以及传输功能,将来自一个或多个源的数据组合为可重复使用和共享的灵活数据。要了解更多关于 JBoss 数据虚拟化的信息,可以查看它的[官方文档][1]。Docker 是一个开源平台,用于将任何应用打包、装载并以轻量级容器的形式运行。使用 Docker 容器,我们可以轻松处理和启用支持 OData 的 JBoss 数据虚拟化。
+
+下面是该指南中在 Docker 容器中运行支持 OData 的 JBoss 数据虚拟化的简单步骤。
+
+### 1. 
克隆仓库 ### + +首先,我们要用 git 命令从 [https://github.com/jbossdemocentral/dv-odata-docker-integration-demo][2] 克隆带数据虚拟化的 OData 仓库。假设我们的机器上运行着 Ubuntu 15.04 linux 发行版。我们要使用 apt-get 命令安装 git。 + + # apt-get install git + +安装完 git 之后,我们运行下面的命令克隆仓库。 + + # git clone https://github.com/jbossdemocentral/dv-odata-docker-integration-demo + + Cloning into 'dv-odata-docker-integration-demo'... + remote: Counting objects: 96, done. + remote: Total 96 (delta 0), reused 0 (delta 0), pack-reused 96 + Unpacking objects: 100% (96/96), done. + Checking connectivity... done. + +### 2. 下载 JBoss 数据虚拟化安装器 ### + +现在,我们需要从下载页 [http://www.jboss.org/products/datavirt/download/][3] 下载 JBoss 数据虚拟化安装器。下载了 **jboss-dv-installer-6.0.0.GA-redhat-4.jar** 后,我们把它保存在名为 **software** 的目录下。 + +### 3. 创建 Docker 镜像 ### + +下一步,下载了 JBoss 数据虚拟化安装器之后,我们打算使用 Dockerfile 和刚从仓库中克隆的资源创建 docker 镜像。 + + # cd dv-odata-docker-integration-demo/ + # docker build -t jbossdv600 . + + ... + Step 22 : USER jboss + ---> Running in 129f701febd0 + ---> 342941381e37 + Removing intermediate container 129f701febd0 + Step 23 : EXPOSE 8080 9990 31000 + ---> Running in 61e6d2c26081 + ---> 351159bb6280 + Removing intermediate container 61e6d2c26081 + Step 24 : CMD $JBOSS_HOME/bin/standalone.sh -c standalone.xml -b 0.0.0.0 -bmanagement 0.0.0.0 + ---> Running in a9fed69b3000 + ---> 407053dc470e + Removing intermediate container a9fed69b3000 + Successfully built 407053dc470e + +注意:在这里我们假设你已经安装了 docker 并正在运行。 + +### 4. 启动 Docker 容器 ### + +创建了支持 oData 的 JBoss 数据虚拟化 Docker 镜像之后,我们打算运行 docker 容器并用 -P 标签指定端口。我们运行下面的命令来实现。 + + # docker run -p 8080:8080 -d -t jbossdv600 + + 7765dee9cd59c49ca26850e88f97c21f46859d2dc1d74166353d898773214c9c + +### 5. 获取容器 IP ### + +启动了 Docker 容器之后,我们想要获取正在运行的 docker 容器的 IP 地址。要做到这点,我们运行后面添加了正在运行容器 id 号的 docker inspect 命令。 + + # docker inspect <$containerID> + + ... 
+    "NetworkSettings": {
+        "Bridge": "",
+        "EndpointID": "3e94c5900ac5954354a89591a8740ce2c653efde9232876bc94878e891564b39",
+        "Gateway": "172.17.42.1",
+        "GlobalIPv6Address": "",
+        "GlobalIPv6PrefixLen": 0,
+        "HairpinMode": false,
+        "IPAddress": "172.17.0.8",
+        "IPPrefixLen": 16,
+        "IPv6Gateway": "",
+        "LinkLocalIPv6Address": "",
+        "LinkLocalIPv6PrefixLen": 0,
+
+### 6. Web 界面 ###
+
+现在,如果一切如期望的那样进行,当我们用浏览器打开 http://container-ip:8080/ 和 http://container-ip:9990 时,会看到支持 OData 的 JBoss 数据虚拟化登录界面和 JBoss 管理界面。管理界面验证的用户名是 admin,密码是 redhat1!;数据虚拟化验证的用户名和密码都是 user。之后,我们可以通过 web 界面在内容间导航。
+
+**注意**: 强烈建议在第一次登录后尽快修改密码。
+
+### 总结 ###
+
+最终,我们成功地运行了跑着支持 OData 多源虚拟数据库的 JBoss 数据虚拟化的 Docker 容器。JBoss 数据虚拟化真的是一个很棒的平台,它能对多种不同来源的数据进行虚拟化,并将它们转换为商业友好的数据模型,产生通过开放标准接口简单可用的数据。使用 Docker 技术可以简单、安全、快速地部署支持 OData 多源虚拟数据库的 JBoss 数据虚拟化。如果你有任何疑问、建议或者反馈,请在下面的评论框中写下来,以便我们可以改进和更新内容。非常感谢!Enjoy:-)
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/run-jboss-data-virtualization-ga-odata-docker-container/
+
+作者:[Arun Pyasi][a]
+译者:[ictlyh](http://www.mutouxiaogui.cn/blog)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arunp/
+[1]:http://www.redhat.com/en/technologies/jboss-middleware/data-virtualization
+[2]:https://github.com/jbossdemocentral/dv-odata-docker-integration-demo
+[3]:http://www.jboss.org/products/datavirt/download/
+[4]:https://en.wikipedia.org/wiki/Software_release_life_cycle#General_availability_.28GA.29
\ No newline at end of file
From fb4f6e7a37a12b59e15ad08c9bbdfe6a2e756708 Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 4 Sep 2015 17:39:34 +0800
Subject: [PATCH 426/697] PUB:20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router

@martin2011qi

---
 ...GP peering and filtering in Quagga BGP router.md | 13 +++++++------
 1 
deletions(-) rename {translated/tech => published}/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md (97%) diff --git a/translated/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md b/published/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md similarity index 97% rename from translated/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md rename to published/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md index 23e2314576..1e17c7c6d3 100644 --- a/translated/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md +++ b/published/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md @@ -1,5 +1,6 @@ -如何设置在Quagga BGP路由器中设置IPv6的BGP对等体和过滤 +如何设置在 Quagga BGP 路由器中设置 IPv6 的 BGP 对等体和过滤 ================================================================================ + 在之前的教程中,我们演示了如何使用Quagga建立一个[完备的BGP路由器][1]和配置[前缀过滤][2]。在本教程中,我们会向你演示如何创建IPv6 BGP对等体并通过BGP通告IPv6前缀。同时我们也将演示如何使用前缀列表和路由映射特性来过滤通告的或者获取到的IPv6前缀。 ### 拓扑 ### @@ -47,7 +48,7 @@ Quagga内部提供一个叫作vtysh的shell,其界面与那些主流路由厂 # vtysh -提示将改为: +提示符将改为: router-a# @@ -65,7 +66,7 @@ Quagga内部提供一个叫作vtysh的shell,其界面与那些主流路由厂 router-a# configure terminal -提示将变更成: +提示符将变更成: router-a(config)# @@ -246,13 +247,13 @@ Quagga内部提供一个叫作vtysh的shell,其界面与那些主流路由厂 via: http://xmodulo.com/ipv6-bgp-peering-filtering-quagga-bgp-router.html 作者:[Sarmed Rahman][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[martin2011qi](https://github.com/martin2011qi) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://xmodulo.com/author/sarmed -[1]:http://xmodulo.com/centos-bgp-router-quagga.html +[1]:https://linux.cn/article-4232-1.html [2]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html [3]:http://ask.xmodulo.com/open-port-firewall-centos-rhel.html 
[4]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html From aaf155be81100df6bd233740ccbb5bbc927ae620 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 4 Sep 2015 22:54:36 +0800 Subject: [PATCH 427/697] PUB:20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker @dingdongnigetou --- ... or Load Balancer with Weave and Docker.md | 129 ++++++++++++++++++ ... or Load Balancer with Weave and Docker.md | 126 ----------------- 2 files changed, 129 insertions(+), 126 deletions(-) create mode 100644 published/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md delete mode 100644 translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md diff --git a/published/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/published/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md new file mode 100644 index 0000000000..0f08cf12fa --- /dev/null +++ b/published/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md @@ -0,0 +1,129 @@ +如何使用 Weave 以及 Docker 搭建 Nginx 反向代理/负载均衡服务器 +================================================================================ + +Hi, 今天我们将会学习如何使用 Weave 和 Docker 搭建 Nginx 的反向代理/负载均衡服务器。Weave 可以创建一个虚拟网络将 Docker 容器彼此连接在一起,支持跨主机部署及自动发现。它可以让我们更加专注于应用的开发,而不是基础架构。Weave 提供了一个如此棒的环境,仿佛它的所有容器都属于同个网络,不需要端口/映射/连接等的配置。容器中的应用提供的服务在 weave 网络中可以轻易地被外部世界访问,不论你的容器运行在哪里。在这个教程里我们将会使用 weave 快速并且简单地将 nginx web 服务器部署为一个负载均衡器,反向代理一个运行在 Amazon Web Services 里面多个节点上的 docker 容器中的简单 php 应用。这里我们将会介绍 WeaveDNS,它提供一个不需要改变代码就可以让容器利用主机名找到的简单方式,并且能够让其他容器通过主机名连接彼此。 + +在这篇教程里,我们将使用 nginx 来将负载均衡分配到一个运行 Apache 的容器集合。最简单轻松的方法就是使用 Weave 来把运行在 ubuntu 上的 docker 容器中的 nginx 配置成负载均衡服务器。 + +### 1. 
搭建 AWS 实例 ### + +首先,我们需要搭建 Amzaon Web Service 实例,这样才能在 ubuntu 下用 weave 跑 docker 容器。我们将会使用[AWS 命令行][1] 来搭建和配置两个 AWS EC2 实例。在这里,我们使用最小的可用实例,t1.micro。我们需要一个有效的**Amazon Web Services 账户**使用 AWS 命令行界面来搭建和配置。我们先在 AWS 命令行界面下使用下面的命令将 github 上的 weave 仓库克隆下来。 + + $ git clone http://github.com/fintanr/weave-gs + $ cd weave-gs/aws-nginx-ubuntu-simple + +在克隆完仓库之后,我们执行下面的脚本,这个脚本将会部署两个 t1.micro 实例,每个实例中都是 ubuntu 作为操作系统并用 weave 跑着 docker 容器。 + + $ sudo ./demo-aws-setup.sh + +在这里,我们将会在以后用到这些实例的 IP 地址。这些地址储存在一个 weavedemo.env 文件中,这个文件创建于执行 demo-aws-setup.sh 脚本期间。为了获取这些 IP 地址,我们需要执行下面的命令,命令输出类似下面的信息。 + + $ cat weavedemo.env + + export WEAVE_AWS_DEMO_HOST1=52.26.175.175 + export WEAVE_AWS_DEMO_HOST2=52.26.83.141 + export WEAVE_AWS_DEMO_HOSTCOUNT=2 + export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141) + +请注意这些不是固定的 IP 地址,AWS 会为我们的实例动态地分配 IP 地址。 + +我们在 bash 下执行下面的命令使环境变量生效。 + + . ./weavedemo.env + +### 2. 启动 Weave 和 WeaveDNS ### + +在安装完实例之后,我们将会在每台主机上启动 weave 以及 weavedns。Weave 以及 weavedns 使得我们能够轻易地将容器部署到一个全新的基础架构以及配置中, 不需要改变代码,也不需要去理解像 Ambassador 容器以及 Link 机制之类的概念。下面是在第一台主机上启动 weave 以及 weavedns 的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 + $ sudo weave launch + $ sudo weave launch-dns 10.2.1.1/24 + +下一步,我也准备在第二台主机上启动 weave 以及 weavedns。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 + $ sudo weave launch $WEAVE_AWS_DEMO_HOST1 + $ sudo weave launch-dns 10.2.1.2/24 + +### 3. 
启动应用容器 ### + +现在,我们准备跨两台主机启动六个容器,这两台主机都用 Apache2 Web 服务实例跑着简单的 php 网站。为了在第一个 Apache2 Web 服务器实例跑三个容器, 我们将会使用下面的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 + $ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache + +在那之后,我们将会在第二个实例上启动另外三个容器,请使用下面的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 + $ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache + +注意: 在这里,--with-dns 选项告诉容器使用 weavedns 来解析主机名,-h x.weave.local 则使得 weavedns 能够解析该主机。 + +### 4. 启动 Nginx 容器 ### + +在应用容器如预期的运行后,我们将会启动 nginx 容器,它将会在六个应用容器服务之间轮询并提供反向代理或者负载均衡。 为了启动 nginx 容器,请使用下面的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 + $ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple + +因此,我们的 nginx 容器在 $WEAVE_AWS_DEMO_HOST1 上公开地暴露成为一个 http 服务器。 + +### 5. 
测试负载均衡服务器 ### + +为了测试我们的负载均衡服务器是否可以工作,我们执行一段可以发送 http 请求给 nginx 容器的脚本。我们将会发送6个请求,这样我们就能看到 nginx 在一次的轮询中服务于每台 web 服务器之间。 + + $ ./access-aws-hosts.sh + + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws1.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws2.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws3.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws4.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws5.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws6.weave.local", + "date" : "2015-06-26 12:24:23" + } + +### 结束语 ### + +我们最终成功地将 nginx 配置成一个反向代理/负载均衡服务器,通过使用 weave 以及运行在 AWS(Amazon Web Service)EC2 里面的 ubuntu 服务器中的 docker。从上面的步骤输出可以清楚的看到我们已经成功地配置了 nginx。我们可以看到请求在一次轮询中被发送到6个应用容器,这些容器在 Apache2 Web 服务器中跑着 PHP 应用。在这里,我们部署了一个容器化的 PHP 应用,使用 nginx 横跨多台在 AWS EC2 上的主机而不需要改变代码,利用 weavedns 使得每个容器连接在一起,只需要主机名就够了,眼前的这些便捷, 都要归功于 weave 以及 weavedns。 + +如果你有任何的问题、建议、反馈,请在评论中注明,这样我们才能够做得更好,谢谢:-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/ + +作者:[Arun Pyasi][a] +译者:[dingdongnigetou](https://github.com/dingdongnigetou) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://console.aws.amazon.com/ diff --git a/translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md deleted file mode 100644 index f90a1ce76d..0000000000 --- 
a/translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md +++ /dev/null @@ -1,126 +0,0 @@ -如何使用Weave以及Docker搭建Nginx反向代理/负载均衡服务器 -================================================================================ -Hi, 今天我们将会学习如何使用如何使用Weave和Docker搭建Nginx反向代理/负载均衡服务器。Weave创建一个虚拟网络将跨主机部署的Docker容器连接在一起并使它们自动暴露给外部世界。它让我们更加专注于应用的开发,而不是基础架构。Weave提供了一个如此棒的环境,仿佛它的所有容器都属于同个网络,不需要端口/映射/连接等的配置。容器中的应用提供的服务在weave网络中可以轻易地被外部世界访问,不论你的容器运行在哪里。在这个教程里我们将会使用weave快速并且轻易地将nginx web服务器部署为一个负载均衡器,反向代理一个运行在Amazon Web Services里面多个节点上的docker容器中的简单php应用。这里我们将会介绍WeaveDNS,它提供一个简单的方式让容器利用主机名找到彼此,不需要改变代码,并且能够告诉其他容器连接到这些主机名。 - -在这篇教程里,我们需要一个运行的容器集合来配置nginx负载均衡服务器。最简单轻松的方法就是使用Weave在ubuntu的docker容器中搭建nginx负载均衡服务器。 - -### 1. 搭建AWS实例 ### - -首先,我们需要搭建Amzaon Web Service实例,这样才能在ubuntu下用weave跑docker容器。我们将会使用[AWS CLI][1]来搭建和配置两个AWS EC2实例。在这里,我们使用最小的有效实例,t1.micro。我们需要一个有效的**Amazon Web Services账户**用以AWS命令行界面的搭建和配置。我们先在AWS命令行界面下使用下面的命令将github上的weave仓库克隆下来。 - - $ git clone http://github.com/fintanr/weave-gs - $ cd weave-gs/aws-nginx-ubuntu-simple - -在克隆完仓库之后,我们执行下面的脚本,这个脚本将会部署两个t1.micro实例,每个实例中都是ubuntu作为操作系统并用weave跑着docker容器。 - - $ sudo ./demo-aws-setup.sh - -在这里,我们将会在以后用到这些实例的IP地址。这些地址储存在一个weavedemo.env文件中,这个文件在执行demo-aws-setup.sh脚本的期间被创建。为了获取这些IP地址,我们需要执行下面的命令,命令输出类似下面的信息。 - - $ cat weavedemo.env - - export WEAVE_AWS_DEMO_HOST1=52.26.175.175 - export WEAVE_AWS_DEMO_HOST2=52.26.83.141 - export WEAVE_AWS_DEMO_HOSTCOUNT=2 - export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141) - -请注意这些不是固定的IP地址,AWS会为我们的实例动态地分配IP地址。 - -我们在bash下执行下面的命令使环境变量生效。 - - . ./weavedemo.env - -### 2. 
启动Weave and WeaveDNS ### - -在安装完实例之后,我们将会在每台主机上启动weave以及weavedns。Weave以及weavedns使得我们能够轻易地将容器部署到一个全新的基础架构以及配置中, 不需要改变代码,也不需要去理解像Ambassador容器以及Link机制之类的概念。下面是在第一台主机上启动weave以及weavedns的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 - $ sudo weave launch - $ sudo weave launch-dns 10.2.1.1/24 - -下一步,我也准备在第二台主机上启动weave以及weavedns。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 - $ sudo weave launch $WEAVE_AWS_DEMO_HOST1 - $ sudo weave launch-dns 10.2.1.2/24 - -### 3. 启动应用容器 ### - -现在,我们准备跨两台主机启动六个容器,这两台主机都用Apache2 Web服务实例跑着简单的php网站。为了在第一个Apache2 Web服务器实例跑三个容器, 我们将会使用下面的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 - $ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache - -在那之后,我们将会在第二个实例上启动另外三个容器,请使用下面的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 - $ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache - -注意: 在这里,--with-dns选项告诉容器使用weavedns来解析主机名,-h x.weave.local则使得weavedns能够解析指定主机。 - -### 4. 启动Nginx容器 ### - -在应用容器运行得有如意料中的稳定之后,我们将会启动nginx容器,它将会在六个应用容器服务之间轮询并提供反向代理或者负载均衡。 为了启动nginx容器,请使用下面的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 - $ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple - -因此,我们的nginx容器在$WEAVE_AWS_DEMO_HOST1上公开地暴露成为一个http服务器。 - -### 5. 
测试负载均衡服务器 ### - -为了测试我们的负载均衡服务器是否可以工作,我们执行一段可以发送http请求给nginx容器的脚本。我们将会发送6个请求,这样我们就能看到nginx在一次的轮询中服务于每台web服务器之间。 - - $ ./access-aws-hosts.sh - - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws1.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws2.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws3.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws4.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws5.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws6.weave.local", - "date" : "2015-06-26 12:24:23" - } - -### 结束语 ### - -我们最终成功地将nginx配置成一个反向代理/负载均衡服务器,通过使用weave以及运行在AWS(Amazon Web Service)EC2之中的ubuntu服务器里面的docker。从上面的步骤输出可以清楚的看到我们已经成功地配置了nginx。我们可以看到请求在一次循环中被发送到6个应用容器,这些容器在Apache2 Web服务器中跑着PHP应用。在这里,我们部署了一个容器化的PHP应用,使用nginx横跨多台在AWS EC2上的主机而不需要改变代码,利用weavedns使得每个容器连接在一起,只需要主机名就够了,眼前的这些便捷, 都要归功于weave以及weavedns。 如果你有任何的问题、建议、反馈,请在评论中注明,这样我们才能够做得更好,谢谢:-) - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/ - -作者:[Arun Pyasi][a] -译者:[dingdongnigetou](https://github.com/dingdongnigetou) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:http://console.aws.amazon.com/ From 98001abdb7328004f64f4d5d83b6d879dcb37eeb Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 5 Sep 2015 17:36:52 +0800 Subject: [PATCH 428/697] PUB:20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux @Vic020 --- ...edule a Job and Watch Commands in Linux.md | 50 +++++++++---------- 1 
file changed, 24 insertions(+), 26 deletions(-) rename {translated/tech => published}/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md (74%) diff --git a/translated/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md b/published/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md similarity index 74% rename from translated/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md rename to published/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md index 54d3996e0e..80e90df7fb 100644 --- a/translated/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md +++ b/published/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md @@ -1,11 +1,11 @@ -Linux小技巧:Chrome小游戏,文字说话,计划作业,重复执行命令 +Linux 小技巧:Chrome 小游戏,让文字说话,计划作业,重复执行命令 ================================================================================ 重要的事情说两遍,我完成了一个[Linux提示与彩蛋][1]系列,让你的Linux获得更多创造和娱乐。 ![Linux提示与彩蛋系列](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-Tips-and-Tricks.png) -Linux提示与彩蛋系列 +*Linux提示与彩蛋系列* 本文,我将会讲解Google-chrome内建小游戏,在终端中如何让文字说话,使用‘at’命令设置作业和使用watch命令重复执行命令。 @@ -17,7 +17,7 @@ Linux提示与彩蛋系列 ![不能连接到互联网](http://www.tecmint.com/wp-content/uploads/2015/08/Unable-to-Connect-Internet.png) -不能连接到互联网 +*不能连接到互联网* 按下空格键来激活Google-chrome彩蛋游戏。游戏没有时间限制。并且还不需要浪费时间安装使用。 @@ -27,27 +27,25 @@ Linux提示与彩蛋系列 ![Google Chrome中玩游戏](http://www.tecmint.com/wp-content/uploads/2015/08/Play-Game-in-Google-Chrome.gif) -Google Chrome中玩游戏 +*Google Chrome中玩游戏* ### 2. 
Linux 终端中朗读文字 ### -对于那些不能文字朗读的设备,有个小工具可以实现文字说话的转换器。 -espeak支持多种语言,可以及时朗读输入文字。 +对于那些不能文字朗读的设备,有个小工具可以实现文字说话的转换器。用各种语言写一些东西,espeak就可以朗读给你。 系统应该默认安装了Espeak,如果你的系统没有安装,你可以使用下列命令来安装: # apt-get install espeak (Debian) # yum install espeak (CentOS) - # dnf install espeak (Fedora 22 onwards) + # dnf install espeak (Fedora 22 及其以后) -You may ask espeak to accept Input Interactively from standard Input device and convert it to speech for you. You may do: -你可以设置接受从标准输入的交互地输入并及时转换成语音朗读出来。这样设置: +你可以让espeak接受标准输入的交互输入并及时转换成语音朗读出来。如下: $ espeak [按回车键] 更详细的输出你可以这样做: - $ espeak --stdout | aplay [按回车键][这里需要双击] + $ espeak --stdout | aplay [按回车键][再次回车] espeak设置灵活,也可以朗读文本文件。你可以这样设置: @@ -55,29 +53,29 @@ espeak设置灵活,也可以朗读文本文件。你可以这样设置: espeak可以设置朗读速度。默认速度是160词每分钟。使用-s参数来设置。 -设置30词每分钟: +设置每分钟30词的语速: $ espeak -s 30 -f /path/to/text/file/file_name.txt | aplay -设置200词每分钟: +设置每分钟200词的语速: $ espeak -s 200 -f /path/to/text/file/file_name.txt | aplay -让其他语言说北印度语(作者母语),这样设置: +说其他语言,比如北印度语(作者母语),这样设置: $ espeak -v hindi --stdout 'टेकमिंट विश्व की एक बेहतरीन लाइंक्स आधारित वेबसाइट है|' | aplay -espeak支持多种语言,支持自定义设置。使用下列命令来获得语言表: +你可以使用各种语言,让espeak如上面说的以你选择的语言朗读。使用下列命令来获得语言列表: $ espeak --voices -### 3. 快速计划作业 ### +### 3. 
快速调度任务 ### -我们已经非常熟悉使用[cron][2]后台执行一个计划命令。 +我们已经非常熟悉使用[cron][2]守护进程执行一个计划命令。 Cron是一个Linux系统管理的高级命令,用于计划定时任务如备份或者指定时间或间隔的任何事情。 -但是,你是否知道at命令可以让你计划一个作业或者命令在指定时间?at命令可以指定时间和指定内容执行作业。 +但是,你是否知道at命令可以让你在指定时间调度一个任务或者命令?at命令可以指定时间执行指定内容。 例如,你打算在早上11点2分执行uptime命令,你只需要这样做: @@ -85,17 +83,17 @@ Cron是一个Linux系统管理的高级命令,用于计划定时任务如备 uptime >> /home/$USER/uptime.txt Ctrl+D -![Linux中计划作业](http://www.tecmint.com/wp-content/uploads/2015/08/Schedule-Job-in-Linux.png) +![Linux中计划任务](http://www.tecmint.com/wp-content/uploads/2015/08/Schedule-Job-in-Linux.png) -Linux中计划作业 +*Linux中计划任务* 检查at命令是否成功设置,使用: $ at -l -![浏览计划作业](http://www.tecmint.com/wp-content/uploads/2015/08/View-Scheduled-Jobs.png) +![浏览计划任务](http://www.tecmint.com/wp-content/uploads/2015/08/View-Scheduled-Jobs.png) -浏览计划作业 +*浏览计划任务* at支持计划多个命令,例如: @@ -117,17 +115,17 @@ at支持计划多个命令,例如: ![Linux中查看日期和时间](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Date-in-Linux.png) -Linux中查看日期和时间 +*Linux中查看日期和时间* -为了查看这个命令每三秒的输出,我需要运行下列命令: +为了每三秒查看一下这个命令的输出,我需要运行下列命令: $ watch -n 3 'date +"%H:%M:%S"' ![Linux中watch命令](http://www.tecmint.com/wp-content/uploads/2015/08/Watch-Command-in-Linux.gif) -Linux中watch命令 +*Linux中watch命令* -watch命令的‘-n’开关设定时间间隔。在上诉命令中,我们定义了时间间隔为3秒。你可以按你的需求定义。同样watch +watch命令的‘-n’开关设定时间间隔。在上述命令中,我们定义了时间间隔为3秒。你可以按你的需求定义。同样watch 也支持其他命令或者脚本。 至此。希望你喜欢这个系列的文章,让你的linux更有创造性,获得更多快乐。所有的建议欢迎评论。欢迎你也看看其他文章,谢谢。 @@ -138,7 +136,7 @@ via: http://www.tecmint.com/text-to-speech-in-terminal-schedule-a-job-and-watch- 作者:[Avishek Kumar][a] 译者:[VicYu/Vic020](http://vicyu.net) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From fe9573d2e6ae85a282eda58c5c0e8a0e102cc4e9 Mon Sep 17 00:00:00 2001 From: KS Date: Sat, 5 Sep 2015 19:47:05 +0800 Subject: [PATCH 429/697] Update 20150831 Linux workstation security checklist.md --- sources/tech/20150831 Linux workstation security checklist.md | 3 ++- 1 file changed, 2 insertions(+), 
1 deletion(-) diff --git a/sources/tech/20150831 Linux workstation security checklist.md b/sources/tech/20150831 Linux workstation security checklist.md index bc2b59f16a..9ef46339d0 100644 --- a/sources/tech/20150831 Linux workstation security checklist.md +++ b/sources/tech/20150831 Linux workstation security checklist.md @@ -1,3 +1,4 @@ +wyangsun translating Linux workstation security checklist ================================================================================ This is a set of recommendations used by the Linux Foundation for their systems @@ -797,4 +798,4 @@ via: https://github.com/lfit/itpol/blob/master/linux-workstation-security.md#lin [12]: http://shop.kernelconcepts.de/ [13]: https://www.yubico.com/products/yubikey-hardware/yubikey-neo/ [14]: https://wiki.debian.org/Subkeys -[15]: https://github.com/lfit/ssh-gpg-smartcard-config \ No newline at end of file +[15]: https://github.com/lfit/ssh-gpg-smartcard-config From 192a41cb299b3af31930728e159e552f48bd129f Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 5 Sep 2015 21:32:57 +0800 Subject: [PATCH 430/697] PUB:20150901 How to automatically dim your screen on Linux @GOLinux --- ... automatically dim your screen on Linux.md | 53 +++++++++++++++++++ ... 
automatically dim your screen on Linux.md | 52 ------------------ 2 files changed, 53 insertions(+), 52 deletions(-) create mode 100644 published/20150901 How to automatically dim your screen on Linux.md delete mode 100644 translated/tech/20150901 How to automatically dim your screen on Linux.md diff --git a/published/20150901 How to automatically dim your screen on Linux.md b/published/20150901 How to automatically dim your screen on Linux.md new file mode 100644 index 0000000000..1fcdc19d47 --- /dev/null +++ b/published/20150901 How to automatically dim your screen on Linux.md @@ -0,0 +1,53 @@ +如何在 Linux 上自动调整屏幕亮度保护眼睛 +================================================================================ + +当你开始在计算机前花费大量时间的时候,问题自然开始显现。这健康吗?怎样才能舒缓我眼睛的压力呢?为什么光线灼烧着我?尽管解答这些问题的研究仍然在不断进行着,许多程序员已经采用了一些应用来改变他们的日常习惯,让他们的眼睛更健康点。在这些应用中,我发现了两个特别有趣的东西:Calise和Redshift。 + +### Calise ### + +处于时断时续的开发中,[Calise][1]的意思是“相机光感应器(Camera Light Sensor)”。换句话说,它是一个根据摄像头接收到的光强度计算屏幕最佳的背光级别的开源程序。更进一步地说,Calise可以基于你的地理坐标来考虑你所在地区的天气。我喜欢它是因为它兼容各个桌面,甚至非X系列。 + +![](https://farm1.staticflickr.com/569/21016715646_6e1e95f066_o.jpg) + +它同时附带了命令行界面和图形界面,支持多用户配置,而且甚至可以导出数据为CSV。安装完后,你必须在见证奇迹前对它进行快速校正。 + +![](https://farm6.staticflickr.com/5770/21050571901_1e7b2d63ec_c.jpg) + +不怎么令人喜欢的是,如果你和我一样有被偷窥妄想症,在你的摄像头前面贴了一条胶带,那就会比较不幸了,这会大大影响Calise的精确度。除此之外,Calise还是个很棒的应用,值得我们关注和支持。正如我先前提到的,它在过去几年中经历了一段修修补补的艰难阶段,所以我真的希望这个项目继续开展下去。 + +![](https://farm1.staticflickr.com/633/21032989702_9ae563db1e_o.png) + +### Redshift ### + +如果你想过要减少由屏幕导致的眼睛的压力,那么你很可能听过f.lux,它是一个免费的专有软件,用于根据一天中的时间来修改显示器的亮度和配色。然而,如果真的偏好于开源软件,那么一个可选方案就是:[Redshift][2]。灵感来自f.lux,Redshift也可以改变配色和亮度来加强你夜间坐在屏幕前的体验。启动时,你可以使用经度和纬度来配置地理坐标,然后就可以让它在托盘中运行了。Redshift将根据太阳的位置平滑地调整你的配色或者屏幕。在夜里,你可以看到屏幕的色温调向偏暖色,这会让你的眼睛少遭些罪。 + +![](https://farm6.staticflickr.com/5823/20420303684_2b6e917fee_b.jpg) + +和Calise一样,它提供了一个命令行界面,同时也提供了一个图形客户端。要快速启动Redshift,只需使用命令: + + $ redshift -l [LAT]:[LON] + +替换[LAT]:[LON]为你的维度和经度。 + +然而,它也可以通过gpsd模块来输入你的坐标。对于Arch Linux用户,我推荐你读一读这个[维基页面][3]。 + 
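顺带一提,除了每次在命令行里传入坐标,Redshift 还支持把这些设置写进配置文件。下面是一个 ~/.config/redshift.conf 的示例草稿(其中的色温数值和坐标只是演示用的假设值,具体键名和取值请以 Redshift 自带的文档为准):

```ini
; Redshift 配置示例(数值均为假设的演示值)
[redshift]
; 白天与夜间的色温(单位:开尔文)
temp-day=5700
temp-night=3500
; 不使用 gpsd 等自动定位,改为手动指定位置
location-provider=manual

[manual]
; 纬度和经度,请替换为你所在地的坐标
lat=39.9
lon=116.4
```

保存之后,直接运行不带参数的 redshift 应该就会读取该配置,而不必再用 -l 选项指定坐标。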
+### 尾声 ### + +总而言之,Linux用户没有理由不去保护自己的眼睛,Calise和Redshift两个都很棒。我真希望它们的开发能够继续下去,让它们获得应有的支持。当然,还有比这两个更多的程序可以满足保护眼睛和保持健康的目的,但是我感觉Calise和Redshift会是一个不错的开端。 + +如果你有一个经常用来舒缓眼睛的压力的喜欢的程序,请在下面的评论中留言吧。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/automatically-dim-your-screen-linux.html + +作者:[Adrien Brochard][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/adrien +[1]:http://calise.sourceforge.net/ +[2]:http://jonls.dk/redshift/ +[3]:https://wiki.archlinux.org/index.php/Redshift#Automatic_location_based_on_GPS diff --git a/translated/tech/20150901 How to automatically dim your screen on Linux.md b/translated/tech/20150901 How to automatically dim your screen on Linux.md deleted file mode 100644 index cf82291db3..0000000000 --- a/translated/tech/20150901 How to automatically dim your screen on Linux.md +++ /dev/null @@ -1,52 +0,0 @@ -Linux上如何让屏幕自动变暗 -================================================================================ -当你开始在计算机前花费大量时间的时候,自然的问题开始显现。这健康吗?怎样才能舒缓我眼睛的压力呢?为什么太阳光灼烧着我?尽管解答这些问题的研究仍然在活跃进行着,许多程序员已经采用了一些应用来让他们的日常习惯对他们的眼睛更健康点。在这些应用中,我发现了两个特别有趣的东西:Calise和Redshift。 - -### Calise ### - -在开发状态之中和之外,[Calise][1]都表示“相机光感应器”。换句话说,它是一个开源程序,用于基于摄像头接收到的光密度计算屏幕最佳的背景光级别。更精确地说,Calise可以基于你的地理坐标来考虑你所在地区的天气。我喜欢它是因为它兼容各个桌面,甚至非X系列。 - -![](https://farm1.staticflickr.com/569/21016715646_6e1e95f066_o.jpg) - -它同时附带了命令行界面和图形界面,支持多用户配置,而且甚至可以导出数据为CSV。安装完后,你必须在魔法展开前快速进行校正。 - -![](https://farm6.staticflickr.com/5770/21050571901_1e7b2d63ec_c.jpg) - -不怎么令人喜欢的是,如果你和我一样偏执,在你的摄像头前面贴了一条胶带,那就会比较不幸了,这会大大影响Calise的精确度。除此之外,Calise还是个很棒的应用,值得我们关注和支持。正如我先前提到的,它在过去几年中经历了一段修修补补的艰难阶段,所以我真的希望这个项目继续开展下去。 - -![](https://farm1.staticflickr.com/633/21032989702_9ae563db1e_o.png) - -### Redshift ### - 
-如果你已经考虑好要减少由屏幕导致的眼睛的压力,那么你很可能听过f.lux,它是一个免费的专有软件,用于根据一天中的时间来修改显示器的亮度和配色。然而,如果真的偏好于开源软件,那么一个可选方案就是:[Redshift][2]。灵感来自f.lux,Redshift也可以改变配色和亮度来加强你夜间坐在屏幕前的体验。启动时,你可以配置使用经度和纬度来配置地理坐标,然后就可以让它在托盘中运行了。Redshift将根据太阳的位置平滑地调整你的配色或者屏幕。在夜里,你可以看到屏幕的色温调向偏暖色,这会让你的眼睛少遭些罪。 - -![](https://farm6.staticflickr.com/5823/20420303684_2b6e917fee_b.jpg) - -和Calise一样,它提供了一个命令行界面,同时也提供了一个图形客户端。要快速启动Redshift,只需使用命令: - - $ redshift -l [LAT]:[LON] - -替换[LAT]:[LON]为你的维度和经度。 - -然而,它也可以通过gpsd模块来输入你的坐标。对于Arch Linux用户,我推荐你读一读这个[维基页面][3]。 - -### 尾声 ### - -总而言之,Linux用户没有理由不去保护自己的眼睛,Calise和Redshift两个都很棒。我真希望它们的开发能够继续下去,他们能获得应有的支持。当然,还有比这两个更多的程序可以满足保护眼睛和保持健康的目的,但是我感觉Calise和Redshift会是一个不错的开始。 - -如果你有一个真正喜欢的程序,而且也经常用它来舒缓眼睛的压力,请在下面的评论中留言吧。 - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/automatically-dim-your-screen-linux.html - -作者:[Adrien Brochard][a] -译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/adrien -[1]:http://calise.sourceforge.net/ -[2]:http://jonls.dk/redshift/ -[3]:https://wiki.archlinux.org/index.php/Redshift#Automatic_location_based_on_GPS From ab6da0826824152231de4e81a00e3c941c2b9a84 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 5 Sep 2015 23:40:13 +0800 Subject: [PATCH 431/697] PUB:20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @MikeCoder 好久没见你了? 
--- ...Apache with MariaDB on Debian or Ubuntu.md | 182 +++++++++++++++++ ...Apache with MariaDB on Debian or Ubuntu.md | 188 ------------------ 2 files changed, 182 insertions(+), 188 deletions(-) create mode 100644 published/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md delete mode 100644 translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md diff --git a/published/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md b/published/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md new file mode 100644 index 0000000000..5682e18a84 --- /dev/null +++ b/published/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md @@ -0,0 +1,182 @@ +在 Ubuntu 上配置高性能的 HHVM 环境 +================================================================================ + +HHVM全称为 HipHop Virtual Machine,它是一个开源虚拟机,用来运行由 Hack(一种编程语言)和 PHP 开发应用。HHVM 在保证了 PHP 程序员最关注的高灵活性的要求下,通过使用最新的编译方式来取得了非凡的性能。到目前为止,相对于 PHP + [APC (Alternative PHP Cache)][1] ,HHVM 为 FaceBook 在 HTTP 请求的吞吐量上提高了9倍的性能,在内存的占用上,减少了5倍左右的内存占用。 + +同时,HHVM 也可以与基于 FastCGI 的 Web 服务器(如 Nginx 或者 Apache )协同工作。 + +![Install HHVM, Nginx and Apache with MariaDB](http://www.tecmint.com/wp-content/uploads/2015/08/Install-HHVM-Nginx-Apache-MariaDB.png) + +*安装 HHVM,Nginx和 Apache 还有 MariaDB* + +在本教程中,我们一起来配置 Nginx/Apache web 服务器、 数据库服务器 MariaDB 和 HHVM 。我们将使用 Ubuntu 15.04 (64 位),因为 HHVM 只能运行在64位系统上。同时,该教程也适用于 Debian 和 Linux Mint。 + +### 第一步: 安装 Nginx 或者 Apache 服务器 ### + +1、首先,先进行一次系统的升级并更新软件仓库列表,命令如下 + + # apt-get update && apt-get upgrade + +![System Upgrade](http://www.tecmint.com/wp-content/uploads/2015/08/System-Upgrade.png) + +*系统升级* + +2、 正如我之前说的,HHVM 能和 Nginx 和 Apache 进行集成。所以,究竟使用哪个服务器,这是你的自由,不过,我们会教你如何安装这两个服务器。 + +#### 安装 Nginx #### + +我们通过下面的命令安装 Nginx/Apache 服务器 + + # apt-get install nginx + 
+![Install Nginx Web Server](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Nginx-Web-Server.png) + +*安装 Nginx 服务器* + +#### 安装 Apache #### + + # apt-get install apache2 + +![Install Apache Web Server](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Apache-Web-Server.png) + +*安装 Apache 服务器* + +完成这一步,你能通过以下的链接看到 Nginx 或者 Apache 的默认页面 + + http://localhost + 或 + http://IP-Address + + +![Nginx Welcome Page](http://www.tecmint.com/wp-content/uploads/2015/08/Nginx-Welcome-Page.png) + +*Nginx 默认页面* + +![Apache Default Page](http://www.tecmint.com/wp-content/uploads/2015/08/Apache-Default-Page.png) + +*Apache 默认页面* + +### 第二步: 安装和配置 MariaDB ### + +3、 这一步,我们将通过如下命令安装 MariaDB,它是一个比 MySQL 性能更好的数据库 + + # apt-get install mariadb-client mariadb-server + +![Install MariaDB Database](http://www.tecmint.com/wp-content/uploads/2015/08/Install-MariaDB-Database.png) + +*安装 MariaDB* + +4、 在 MariaDB 成功安装之后,你可以启动它,并且设置 root 密码来保护数据库: + + + # systemctl start mysql + # mysql_secure_installation + +回答以下问题,只需要按下`y`或者 `n`并且回车。请确保你仔细的阅读过说明。 + + Enter current password for root (enter for none) = press enter + Set root password? 
[Y/n] = y + Remove anonymous users[y/n] = y + Disallow root login remotely[y/n] = y + Remove test database and access to it [y/n] = y + Reload privileges tables now[y/n] = y + +5、 在设置了密码之后,你就可以登录 MariaDB 了。 + + + # mysql -u root -p + + +### 第三步: 安装 HHVM ### + +6、 在此阶段,我们将安装 HHVM。我们需要添加 HHVM 的仓库到你的`sources.list`文件中,然后更新软件列表。 + + # wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | apt-key add - + # echo deb http://dl.hhvm.com/ubuntu DISTRIBUTION_VERSION main | sudo tee /etc/apt/sources.list.d/hhvm.list + # apt-get update + +**重要**:不要忘记用你的 Ubuntu 发行版代号替换上述的 DISTRIBUTION_VERSION (比如:lucid, precise, trusty) 或者是 Debian 的 jessie 或者 wheezy。在 Linux Mint 中也是一样的,不过只支持 petra。 + +添加了 HHVM 仓库之后,你就可以轻松安装了。 + + # apt-get install -y hhvm + +安装之后,就可以启动它,但是它并没有做到开机启动。可以用如下命令做到开机启动。 + + # update-rc.d hhvm defaults + +### 第四步: 配置 Nginx/Apache 连接 HHVM ### + +7、 现在,nginx/apache 和 HHVM 都已经安装完成了,并且都独立运行起来了,所以我们需要对它们进行设置,来让它们互相关联。这个关键的步骤,就是需要告知 nginx/apache 将所有的 php 文件,都交给 HHVM 进行处理。 + +如果你用了 Nginx,请按照如下步骤: + +nginx 的配置文件在 /etc/nginx/sites-available/default, 并且这些配置文件会在 /usr/share/nginx/html 中寻找文件执行,不过,它不知道如何处理 PHP。 + +为了确保 Nginx 可以连接 HHVM,我们需要执行所带的如下脚本。它可以帮助我们正确的配置 Nginx,将 hhvm.conf 放到 上面提到的配置文件 nginx.conf 的头部。 + +这个脚本可以确保 Nginx 可以对 .hh 和 .php 的做正确的处理,并且将它们通过 fastcgi 发送给 HHVM。 + + # /usr/share/hhvm/install_fastcgi.sh + +![Configure Nginx for HHVM](http://www.tecmint.com/wp-content/uploads/2015/08/Configure-Nginx-for-HHVM.png) + +*配置 Nginx、HHVM* + +**重要**: 如果你使用的是 Apache,这里不需要进行配置。 + +8、 接下来,你需要使用 hhvm 来提供 php 的运行环境。 + + # /usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60 + +以上步骤完成之后,你现在可以启动并且测试它了。 + + # systemctl start hhvm + +### 第五步: 测试 HHVM 和 Nginx/Apache ### + +9、 为了确认 hhvm 是否工作,你需要在 nginx/apache 的文档根目录下建立 hello.php。 + + # nano /usr/share/nginx/html/hello.php [对于 Nginx] + 或 + # nano /var/www/html/hello.php [对于 Nginx 和 Apache] + +在文件中添加如下代码: + + + +然后访问如下链接,确认自己能否看到 "hello world" + + http://localhost/info.php + 或 + http://IP-Address/info.php + +![HHVM 
Page](http://www.tecmint.com/wp-content/uploads/2015/08/HHVM-Page.png) + +*HHVM 页面* + +如果 “HHVM” 的页面出现了,那就说明你成功了。 + +### 结论 ### + +以上的步骤都是非常简单的,希望你能觉得这是一篇有用的教程,如果你在以上的步骤中遇到了问题,给我们留一个评论,我们将全力解决。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/install-hhvm-and-nginx-apache-with-mariadb-on-debian-ubuntu/ + +作者:[Ravi Saive][a] +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/admin/ +[1]:http://www.tecmint.com/install-apc-alternative-php-cache-in-rhel-centos-fedora/ diff --git a/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md b/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md deleted file mode 100644 index 1591def307..0000000000 --- a/translated/tech/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md +++ /dev/null @@ -1,188 +0,0 @@ -在 Debian 或者 Ubuntu 上配置高性能的 HHVM、Nginx/Apache 和 MariaDB -================================================================================ -HHVM全称为 HipHop Virtual Machine, 它是一个由 running Hack(一种编程语言)和 PHP的相关应用组成的开源虚拟机。HHVM 在保证了 PHP 程序员最关注的高灵活性的要求下,通过使用最新编译结果的方式来达到一个客观的性能。到目前为止,HHVM 为 FaceBook 在 HTTP 请求的吞吐量上提高了9倍的性能,在内存的占用上,减少了5倍左右的内存占用。 - -+ [APC (Alternative PHP Cache)][1]. - -HHVM can also be used along with a FastCGI-based web-server like Nginx or Apache. 
-同时,HHVM 也可以通过 FastCGI 接口,与像 Nginx 或者 Apache 进行集成。 - -![Install HHVM, Nginx and Apache with MariaDB](http://www.tecmint.com/wp-content/uploads/2015/08/Install-HHVM-Nginx-Apache-MariaDB.png) - -安装 HHVM,Nginx和 Apache 还有 MariaDB - -在本教程中,我们一起来进行 Nginx/Apache web 服务器、 数据库服务器 MariaDB 和 HHVM 的设置。设置中,我们将使用 Ubuntu 15.04 (64 位),同时,该教程也适用于 Debian 和 Linux Mint。 - -### Step 1: 安装 Nginx 或者 Apache 服务器 ### - -1. 首先,先进行一次系统的升级或者更新软件仓库列表. -``` - # apt-get update && apt-get upgrade -``` -![System Upgrade](http://www.tecmint.com/wp-content/uploads/2015/08/System-Upgrade.png) - -System Upgrade - -2. 正如我之前说的,HHVM 能和 Nginx 和 Apache 进行集成。所以,究竟使用哪个服务器,这是你的自由,不过,我们会教你如何安装这两个服务器。 - -#### 安装 Nginx #### - -我们通过下面的命令安装 Nginx/Apache 服务器 - - # apt-get install nginx - -![Install Nginx Web Server](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Nginx-Web-Server.png) - -安装 Nginx 服务器 - -#### 安装 Apache #### - - # apt-get install apache2 - -![Install Apache Web Server](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Apache-Web-Server.png) - -安装 Apache 服务器 - -完成这一步,你能通过以下的链接看到 Nginx 或者 Apache 的默认页面 - - http://localhost - OR - http://IP-Address - -#### Nginx 默认页面 #### - -![Nginx Welcome Page](http://www.tecmint.com/wp-content/uploads/2015/08/Nginx-Welcome-Page.png) - -Nginx 默认页面 - -#### Apache 默认页面 #### - -![Apache Default Page](http://www.tecmint.com/wp-content/uploads/2015/08/Apache-Default-Page.png) - -Apache 默认页面 - -### Step 2: 安装和配置 MariaDB ### - -3. 这一步,我们将通过如下命令安装 MariaDB,它是一个比 MySQL 更好的数据库 -``` - # apt-get install mariadb-client mariadb-server -``` -![Install MariaDB Database](http://www.tecmint.com/wp-content/uploads/2015/08/Install-MariaDB-Database.png) - -安装 MariaDB - -4. 在 MariaDB 成功安装之后,你可以启动它,并且设置 root 密码来保护数据库: - -``` - # systemctl start mysql - # mysql_secure_installation -``` - -回答以下问题,只需要按下`y`或者 `n`并且回车。请确保你仔细的阅读过说明。 - - Enter current password for root (enter for none) = press enter - Set root password? 
[Y/n] = y - Remove anonymous users[y/n] = y - Disallow root login remotely[y/n] = y - Remove test database and access to it [y/n] = y - Reload privileges tables now[y/n] = y - -5. 在设置了密码之后,你就可以登陆 MariaDB 了。 - -``` - # mysql -u root -p -``` - -### Step 3: 安装 HHVM ### - -6. 我们需要添加 HHVM 的仓库到你的`sources.list`文件中,然后更新软件列表。 -``` - # wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | apt-key add - - # echo deb http://dl.hhvm.com/ubuntu DISTRIBUTION_VERSION main | sudo tee /etc/apt/sources.list.d/hhvm.list - # apt-get update -``` -**重要**:不要忘记用你的 Ubuntu 发行版型号替换上述的DISTRIBUTION_VERSION (比如:lucid, precise, trusty) 或者是 Debian 的 jessie 或者 wheezy。在 Linux Mint 中也是一样的,不过只支持 petra。 - -添加了 HHVM 仓库之后,你就可以安装了。 - - # apt-get install -y hhvm - -安装之后,即可启动它,但是它并没有做到开机启动。可以用如下命令做到开机启动。 - - # update-rc.d hhvm defaults - -### Step 4: 配置 Nginx/Apache 连接 HHVM ### - -7. 现在,nginx/apache 和 HHVM 都已经安装完成了,并且都独立运行起来了,所以我们需要对他们进行设置,来让他们互相关联。这个关键的步骤,就是需要nginx/apache 将所有的 php 文件,都交给 HHVM 进行处理。 - -如果你用了 Nginx,请按照如下步骤: - -nginx 的配置文件在 /etc/nginx/sites-available/default, 并且这些配置文件会在 /usr/share/nginx/html 中寻找文件执行,不过,他不知道如何处理 PHP。 - -为了确保 Nginx 可以连接 HHVM,我们需要执行如下的脚本。他可以帮助我们正确的配置 Nginx。 - -这个脚本可以确保 Nginx 可以对 .hh 和 .php 的做正确的处理,并且通过 fastcgi 与 HHVM 进行通信 - - # /usr/share/hhvm/install_fastcgi.sh - -![Configure Nginx for HHVM](http://www.tecmint.com/wp-content/uploads/2015/08/Configure-Nginx-for-HHVM.png) - -配置 Nginx、HHVM - -**重要**: 如果你使用的是 Apache,这边就不需要进行配置了 - -8. 接下来,你需要使用 hhvm 来提供 php 的运行环境。 -``` - # /usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60 -``` -以上步骤完成之后,你现在可以启动并且测试他了。 - - # systemctl start hhvm - -### Step 5: 测试 HHVM 和 Nginx/Apache ### - -9. 
为了确认 hhvm 是否工作,你需要在 nginx/apache 的根目录下建立 hello.php。 -``` - # nano /usr/share/nginx/html/hello.php [For Nginx] - OR - # nano /var/www/html/hello.php [For Nginx and Apache] -``` -在文件中添加如下代码: - - - -然后访问如下链接,确认自己能否看到 "hello world" - - http://localhost/info.php - OR - http://IP-Address/info.php - -![HHVM Page](http://www.tecmint.com/wp-content/uploads/2015/08/HHVM-Page.png) - -HHVM Page - -如果 “HHVM” 的页面出现了,那就说明你成功了 - -### 结论 ### - -以上的步骤都是非常简单的,希望你能觉得这是一篇有用的教程,如果你在以上的步骤中遇到了问题,给我们留一个评论,我们将全力解决。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/install-hhvm-and-nginx-apache-with-mariadb-on-debian-ubuntu/ - -作者:[Ravi Saive][a] -译者:[MikeCoder](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/admin/ -[1]:http://www.tecmint.com/install-apc-alternative-php-cache-in-rhel-centos-fedora/ From 6dcdae7a6ddb84a0c36c2c981ffeb8fb67e1878e Mon Sep 17 00:00:00 2001 From: Yu Ye Date: Sat, 5 Sep 2015 23:55:23 +0800 Subject: [PATCH 432/697] Create 20150826 Five Super Cool Open Source Games.md --- ...50826 Five Super Cool Open Source Games.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 translated/share/20150826 Five Super Cool Open Source Games.md diff --git a/translated/share/20150826 Five Super Cool Open Source Games.md b/translated/share/20150826 Five Super Cool Open Source Games.md new file mode 100644 index 0000000000..30ca09e171 --- /dev/null +++ b/translated/share/20150826 Five Super Cool Open Source Games.md @@ -0,0 +1,66 @@ +Translated by H-mudcup +五大超酷的开源游戏 +================================================================================ +在2014年和2015年,Linux 成了一堆流行商业品牌的家,例如备受欢迎的 Borderlands、Witcher、Dead Island 和 CS系列游戏。虽然这是令人激动的消息,但这跟玩家的预算有什么关系?商业品牌很好,但更好的是由了解玩家喜好的开发者开发的免费的替代品。 + +前段时间,我偶然看到了一个三年前发布的 YouTube 
视频,标题非常正能量:[5个不算糟糕的开源游戏][1]。虽然视频表扬了一些开源游戏,我还是更喜欢用一个更加热情的方式来切入这个话题,至少如标题所说。所以,下面是我的一份五大超酷开源游戏的清单。 + +### Tux Racer ### + +![Tux Racer](http://fossforce.com/wp-content/uploads/2015/08/tuxracer-550x413.jpg) + +Tux Racer + +[《Tux Racer》][2]是这份清单上的第一个游戏,因为我对这个游戏很熟悉。在我和兄弟随[电脑上的孩子们][4]项目[最近一次去墨西哥的旅途中][3],Tux Racer 是孩子和教师都喜欢玩的游戏之一。在这个游戏中,玩家使用 Linux 吉祥物企鹅 Tux,在下山雪道上以计时赛的方式进行比赛,不断挑战自己的最佳纪录。目前还没有多玩家版本,但这是有可能改变的。适用于 Linux、OS X、Windows 和 Android。 + +### Warsow ### + +![Warsow](http://fossforce.com/wp-content/uploads/2015/08/warsow-550x413.jpg) + +Warsow + +[《Warsow》][5]网站解释道:“设定在一个有未来感的卡通世界中,Warsow 是个完全开放的、适用于 Windows、Linux 和 Mac OS X 平台的快节奏第一人称射击游戏(FPS)。Warsow 意为‘网络上的尊重与体育精神的艺术’(Warsow is the Art of Respect and Sportsmanship Over the Web,首字母缩写即 Warsow)。” 我很不情愿地把 FPS 类放到了这个列表中,因为很多人玩过这类的游戏,但是我的确被 Warsow 打动了。它十分注重动作性,游戏节奏很快,一开始就有八种武器。卡通化的风格让玩的过程变得没有那么严肃,更加休闲,非常适合和亲友一同玩。然而,它却以竞技性游戏自居,并且当我体验这个游戏时,我发现周围确实有一些专家级的玩家。适用于 Linux、Windows 和 OS X。 + +### M.A.R.S.——一个荒诞的射击游戏 ### + +![M.A.R.S. - A ridiculous shooter](http://fossforce.com/wp-content/uploads/2015/08/MARS-screenshot-550x344.jpg) + +M.A.R.S.——一个荒诞的射击游戏 + +[《M.A.R.S.——一个荒诞的射击游戏》][6]之所以吸引人,是因为它充满活力的色彩和画风。它支持两个玩家使用同一个键盘对战,而在线多玩家版本目前还在开发中——这意味着想要和朋友们一起在线玩暂时还要等等。不论如何,它是个可以使用几种不同飞船和武器的有趣的太空射击游戏。各种飞船形状各异,武器则从普通的枪、激光、散射枪到更有趣的类型都有(随机出现的飞船中有一艘会对敌人发射泡泡,这为这款混乱的游戏增添了很多乐趣)。游戏有几种模式,比如标准模式(与对方殊死搏斗以获得高分或先达到某个分数线),还有其他的模式:空间球(Spaceball)、坟坑(Grave-itation Pit)和加农炮要塞(Cannon Keep)。适用于 Linux、Windows 和 OS X。 + +### Valyria Tear ### + +![Valyria Tear](http://fossforce.com/wp-content/uploads/2015/08/bronnan-jump-to-enemy-550x413.jpg) + +Valyria Tear + +[Valyria Tear][7] 类似于多年来拥有众多粉丝的那类角色扮演游戏(RPG)。故事设定在奇幻游戏的常见年代,充满了骑士、王国和魔法,主角是 Bronann。设计团队在构建这个世界、满足玩家对这类游戏的所有期望方面做得非常棒:隐藏的宝藏、游荡的怪物、非玩家角色(NPC)的互动,以及所有 RPG 不可或缺的:在低级别的怪物上刷经验,直到可以面对大 BOSS。我在试玩的时候,时间不允许我太过深入这个游戏的故事,但是感兴趣的人可以看 YouTube 上由 Yohann Ferriera 用户发布的‘[Let’s Play][8]’系列视频。适用于 Linux、Windows 和 OS X。 + +### SuperTuxKart ### 
+![SuperTuxKart](http://fossforce.com/wp-content/uploads/2015/08/hacienda_tux_antarctica-550x293.jpg) + +SuperTuxKart + +最后一个同样好玩的游戏是 [SuperTuxKart][9],一个效仿 Mario Kart(马里奥卡丁车)但丝毫不必原作差的好游戏。它在2000年-2004年间开始以 Tux Kart 开发,但是在成品中有错误,结果开发就停止了几年。从2006年开始重新开发时起,它就一直在改进,直到四个月前0.9版首次发布。在游戏里,我们的老朋友 Tux 与马里奥和其他一些开源吉祥物一同开始。其中一个熟悉的面孔是 Suzanne,Blender 的那只吉祥物猴子。画面很给力,游戏很流畅。虽然在线游戏还在计划阶段,但是分屏多玩家游戏是可以的。一个电脑最多可以四个玩家同时玩。适用于 Linux、Windows、OS X、AmigaOS 4、AROS 和 MorphOS。 + +-------------------------------------------------------------------------------- + +via: http://fossforce.com/2015/08/five-super-cool-open-source-games/ + +作者:Hunter Banks +译者:[H-mudcup](https://github.com/H-mudcup) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.youtube.com/watch?v=BEKVl-XtOP8 +[2]:http://tuxracer.sourceforge.net/download.html +[3]:http://fossforce.com/2015/07/banks-family-values-texas-linux-fest/ +[4]:http://www.kidsoncomputers.org/an-amazing-week-in-oaxaca +[5]:https://www.warsow.net/download +[6]:http://mars-game.sourceforge.net/ +[7]:http://valyriatear.blogspot.com/ +[8]:https://www.youtube.com/channel/UCQ5KrSk9EqcT_JixWY2RyMA +[9]:http://supertuxkart.sourceforge.net/ From 993fdc129e92bc67ae7ac5cc6f130a83f96255f0 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 6 Sep 2015 01:34:22 +0800 Subject: [PATCH 433/697] PUB:RHCSA Series--Part 02--How to Perform File and Directory Management @xiqingongzi --- ...o Perform File and Directory Management.md | 173 ++++++++---------- 1 file changed, 80 insertions(+), 93 deletions(-) rename {translated/tech/RHCSA => published}/RHCSA Series--Part 02--How to Perform File and Directory Management.md (59%) diff --git a/translated/tech/RHCSA/RHCSA Series--Part 02--How to Perform File and Directory Management.md b/published/RHCSA Series--Part 02--How to Perform File and Directory Management.md similarity index 59% rename from translated/tech/RHCSA/RHCSA Series--Part 
02--How to Perform File and Directory Management.md rename to published/RHCSA Series--Part 02--How to Perform File and Directory Management.md index f46fd93321..8751949b40 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 02--How to Perform File and Directory Management.md +++ b/published/RHCSA Series--Part 02--How to Perform File and Directory Management.md @@ -1,68 +1,63 @@ -RHCSA 系列: 如何执行文件并进行文件管理 – Part 2 +RHCSA 系列(二): 如何进行文件和目录管理 ================================================================================ -在本篇(RHCSA 第二篇:文件和目录管理)中,我们江回顾一些系统管理员日常任务需要的技能 +在本篇中,我们将回顾一些系统管理员日常任务需要的技能。 ![RHCSA: Perform File and Directory Management – Part 2](http://www.tecmint.com/wp-content/uploads/2015/03/RHCSA-Part2.png) +*RHCSA: 运行文件以及进行文件夹管理 - 第二部分* -RHCSA : 运行文件以及进行文件夹管理 - 第二章 -### 创建,删除,复制和移动文件及目录 ### +### 创建、删除、复制和移动文件及目录 ### -文件和目录管理是每一个系统管理员都应该掌握的必要的技能.它包括了从头开始的创建、删除文本文件(每个程序的核心配置)以及目录(你用来组织文件和其他目录),以及识别存在的文件的类型 +文件和目录管理是每一个系统管理员都应该掌握的必备技能。它包括了从头开始的创建、删除文本文件(每个程序的核心配置)以及目录(你用来组织文件和其它目录),以及识别已有文件的类型。 - [touch 命令][1] 不仅仅能用来创建空文件,还能用来更新已存在的文件的权限和时间表 +[`touch` 命令][1] 不仅仅能用来创建空文件,还能用来更新已有文件的访问时间和修改时间。 ![touch command example](http://www.tecmint.com/wp-content/uploads/2015/03/touch-command-example.png) -touch 命令示例 +*touch 命令示例* -你可以使用 `file [filename]`来判断一个文件的类型 (在你用文本编辑器编辑之前,判断类型将会更方便编辑). 
+你可以使用 `file [filename]`来判断一个文件的类型 (在你用文本编辑器编辑之前,判断类型将会更方便编辑)。 ![file command example](http://www.tecmint.com/wp-content/uploads/2015/03/file-command-example.png) -file 命令示例 +*file 命令示例* -使用`rm [filename]` 可以删除文件 +使用`rm [filename]` 可以删除文件。 ![Linux rm command examples](http://www.tecmint.com/wp-content/uploads/2015/03/rm-command-examples.png) -rm 命令示例 - -对于目录,你可以使用`mkdir [directory]`在已经存在的路径中创建目录,或者使用 `mkdir -p [/full/path/to/directory].`带全路径创建文件夹 +*rm 命令示例* +对于目录,你可以使用`mkdir [directory]`在已经存在的路径中创建目录,或者使用 `mkdir -p [/full/path/to/directory]`带全路径创建文件夹。 ![mkdir command example](http://www.tecmint.com/wp-content/uploads/2015/03/mkdir-command-example.png) -mkdir 命令示例 +*mkdir 命令示例* -当你想要去删除目录时,在你使用`rmdir [directory]` 前,你需要先确保目录是空的,或者使用更加强力的命令(小心使用它)`rm -rf [directory]`.后者会强制删除`[directory]`以及他的内容.所以使用这个命令存在一定的风险 +当你想要去删除目录时,在你使用`rmdir [directory]` 前,你需要先确保目录是空的,或者使用更加强力的命令(小心使用它!)`rm -rf [directory]`。后者会强制删除`[directory]`以及它的内容,所以使用这个命令存在一定的风险。 ### 输入输出重定向以及管道 ### -命令行环境提供了两个非常有用的功能:允许命令重定向的输入和输出到文件和发送到另一个文件,分别称为重定向和管道 +命令行环境提供了两个非常有用的功能:允许重定向命令的输入和输出为另一个文件,以及发送命令的输出到另一个命令,这分别称为重定向和管道。 -To understand those two important concepts, we must first understand the three most important types of I/O (Input and Output) streams (or sequences) of characters, which are in fact special files, in the *nix sense of the word. -为了理解这两个重要概念,我们首先需要理解通常情况下三个重要的输入输出流的形式 +为了理解这两个重要概念,我们首先需要理解三个最重要的字符输入输出流类型,以 *nix 的话来说,它们实际上是特殊的文件。 -- 标准输入 (aka stdin) 是指默认使用键盘链接. 换句话说,键盘是输入命令到命令行的标准输入设备。 -- 标准输出 (aka stdout) 是指默认展示再屏幕上, 显示器接受输出命令,并且展示在屏幕上。 -- 标准错误 (aka stderr), 是指命令的状态默认输出, 同时也会展示在屏幕上 +- 标准输入 (即 stdin),默认连接到键盘。 换句话说,键盘是输入命令到命令行的标准输入设备。 +- 标准输出 (即 stdout),默认连接到屏幕。 找个设备“接受”命令的输出,并展示到屏幕上。 +- 标准错误 (即 stderr),默认是命令的状态消息出现的地方,它也是屏幕。 -In the following example, the output of `ls /var` is sent to stdout (the screen), as well as the result of ls /tecmint. But in the latter case, it is stderr that is shown. 
-在下面的例子中,`ls /var`的结果被发送到stdout(屏幕展示),就像ls /tecmint 的结果。但在后一种情况下,它是标准错误输出。 +在下面的例子中,`ls /var`的结果被发送到stdout(屏幕展示),ls /tecmint 的结果也一样。但在后一种情况下,它显示在标准错误输出上。 ![Linux input output redirect](http://www.tecmint.com/wp-content/uploads/2015/03/Linux-input-output-redirect.png) -输入和输出命令实例 -为了更容易识别这些特殊文件,每个文件都被分配有一个文件描述符(用于控制他们的抽象标识)。主要要理解的是,这些文件就像其他人一样,可以被重定向。这就意味着你可以从一个文件或脚本中捕获输出,并将它传送到另一个文件、命令或脚本中。你就可以在在磁盘上存储命令的输出结果,用于稍后的分析 +*输入和输出命令实例* -To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operators are available. +为了更容易识别这些特殊文件,每个文件都被分配有一个文件描述符,这是用于访问它们的抽象标识。主要要理解的是,这些文件就像其它的一样,可以被重定向。这就意味着你可以从一个文件或脚本中捕获输出,并将它传送到另一个文件、命令或脚本中。这样你就可以在磁盘上存储命令的输出结果,用于稍后的分析。 -注:表格 - - - +要重定向 stdin (fd 0)、 stdout (fd 1) 或 stderr (fd 2),可以使用如下操作符。 + +
@@ -70,102 +65,98 @@ To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operato - + - + - + - + - + - + - +
转向操作
>标准输出到一个文件。如果目标文件存在,内容就会被重写重定向标准输出到一个文件。如果目标文件存在,内容就会被重写。
>>添加标准输出到文件尾部添加标准输出到文件尾部。
2>标准错误输出到一个文件。如果目标文件存在,内容就会被重写重定向标准错误输出到一个文件。如果目标文件存在,内容就会被重写。
2>>添加标准错误输出到文件尾部.添加标准错误输出到文件尾部。
&>标准错误和标准输出都到一个文件。如果目标文件存在,内容就会被重写重定向标准错误和标准输出到一个文件。如果目标文件存在,内容就会被重写。
<使用特定的文件做标准输出使用特定的文件做标准输入。
<>使用特定的文件做标准输出和标准错误使用特定的文件做标准输入和标准输出。
- -相比与重定向,管道是通过在命令后添加一个竖杠`(|)`再添加另一个命令 . +与重定向相比,管道是通过在命令后和另外一个命令前之间添加一个竖杠`(|)`。 记得: -- 重定向是用来定向命令的输出到一个文件,或定向一个文件作为输入到一个命令。 -- 管道是用来将命令的输出转发到另一个命令作为输入。 +- *重定向*是用来定向命令的输出到一个文件,或把一个文件发送作为到一个命令的输入。 +- *管道*是用来将命令的输出转发到另一个命令作为其输入。 #### 重定向和管道的使用实例 #### -** 例1:将一个命令的输出到文件 ** +**例1:将一个命令的输出到文件** -有些时候,你需要遍历一个文件列表。要做到这样,你可以先将该列表保存到文件中,然后再按行读取该文件。虽然你可以遍历直接ls的输出,不过这个例子是用来说明重定向。 +有些时候,你需要遍历一个文件列表。要做到这样,你可以先将该列表保存到文件中,然后再按行读取该文件。虽然你可以直接遍历ls的输出,不过这个例子是用来说明重定向。 # ls -1 /var/mail > mail.txt ![Redirect output of command tot a file](http://www.tecmint.com/wp-content/uploads/2015/03/Redirect-output-to-a-file.png) -将一个命令的输出到文件 +*将一个命令的输出重定向到文件* -** 例2:重定向stdout和stderr到/dev/null ** +**例2:重定向stdout和stderr到/dev/null** -如果不想让标准输出和标准错误展示在屏幕上,我们可以把文件描述符重定向到 `/dev/null` 请注意在执行这个命令时该如何更改输出 +如果不想让标准输出和标准错误展示在屏幕上,我们可以把这两个文件描述符重定向到 `/dev/null`。请注意对于同样的命令,重定向是如何改变了输出。 # ls /var /tecmint # ls /var/ /tecmint &> /dev/null ![Redirecting stdout and stderr ouput to /dev/null](http://www.tecmint.com/wp-content/uploads/2015/03/Redirecting-stdout-stderr-ouput.png) -重定向stdout和stderr到/dev/null +*重定向stdout和stderr到/dev/null* -#### 例3:使用一个文件作为命令的输入 #### +**例3:使用一个文件作为命令的输入** -当官方的[cat 命令][2]的语法如下时 +[cat 命令][2]的经典用法如下 # cat [file(s)] -您还可以使用正确的重定向操作符传送一个文件作为输入。 +您还可以使用正确的重定向操作符发送一个文件作为输入。 # cat < mail.txt ![Linux cat command examples](http://www.tecmint.com/wp-content/uploads/2015/03/cat-command-examples.png) -cat 命令实例 +*cat 命令实例* -#### 例4:发送一个命令的输出作为另一个命令的输入 #### +**例4:发送一个命令的输出作为另一个命令的输入** -如果你有一个较大的目录或进程列表,并且想快速定位,你或许需要将列表通过管道传送给grep +如果你有一个较大的目录或进程列表,并且想快速定位,你或许需要将列表通过管道传送给grep。 -接下来我们使用管道在下面的命令中,第一个是查找所需的关键词,第二个是除去产生的 `grep command`.这个例子列举了所有与apache用户有关的进程 +接下来我们会在下面的命令中使用管道,第一个管道是查找所需的关键词,第二个管道是除去产生的 `grep command`。这个例子列举了所有与apache用户有关的进程: # ps -ef | grep apache | grep -v grep ![Send output of command as input to another](http://www.tecmint.com/wp-content/uploads/2015/03/Send-output-of-command-as-input-to-another1.png) -发送一个命令的输出作为另一个命令的输入 +*发送一个命令的输出作为另一个命令的输入* ### 归档,压缩,解包,解压文件 ### 
-如果你需要传输,备份,或者通过邮件发送一组文件,你可以使用一个存档(或文件夹)如 [tar][3]工具,通常使用gzip,bzip2,或XZ压缩工具. +如果你需要传输、备份、或者通过邮件发送一组文件,你可以使用一个存档(或打包)工具,如 [tar][3],通常与gzip,bzip2,或 xz 等压缩工具配合使用。 -您选择的压缩工具每一个都有自己的定义的压缩速度和速率的。这三种压缩工具,gzip是最古老和提供最小压缩的工具,bzip2提供经过改进的压缩,以及XZ提供最信和最好的压缩。通常情况下,这些文件都是被压缩的如.gz .bz2或.xz -注:表格 - - - - +您选择的压缩工具每一个都有自己不同的压缩速度和压缩率。这三种压缩工具,gzip是最古老和可以较小压缩的工具,bzip2提供经过改进的压缩,以及xz是最新的而且压缩最大。通常情况下,使用这些压缩工具压缩的文件的扩展名依次是.gz、.bz2或.xz。 + +
@@ -180,12 +171,12 @@ cat 命令实例 - + - + @@ -195,26 +186,22 @@ cat 命令实例 - + - + - +
命令
–concatenate A向归档中添加tar文件添加tar归档到另外一个归档中
–append r向归档中添加非tar文件添加非tar归档到另外一个归档中
–update
–diff or –compare d将归档和硬盘的文件夹进行对比将归档中的文件和硬盘的文件进行对比
–list t列举一个tar的压缩包列举一个tar压缩包的内容
–extract or –get x从归档中解压文件从归档中提取文件
-注:表格 - - - - +
@@ -234,34 +221,34 @@ cat 命令实例 - + - + - + - + - +
操作参数
–verbose v列举所有文件用于读取或提取,这里包含列表,并显示文件的大小、所有权和时间戳列举所有读取或提取的文件,如果和 --list 参数一起使用,也会显示文件的大小、所有权和时间戳
exclude file 排除存档文件。在这种情况下,文件可以是一个实际的文件或目录。从存档中排除文件。在这种情况下,文件可以是一个实际的文件或匹配模式。
gzip or gunzip z使用gzip压缩文件使用gzip压缩归档
–bzip2 j使用bzip2压缩文件使用bzip2压缩归档
–xz J使用xz压缩文件使用xz压缩归档
-#### 例5:创建一个文件,然后使用三种压缩工具压缩#### +**例5:创建一个tar文件,然后使用三种压缩工具压缩** -在决定使用一个或另一个工具之前,您可能想比较每个工具的压缩效率。请注意压缩小文件或几个文件,结果可能不会有太大的差异,但可能会给你看出他们的差异 +在决定使用这个还是那个工具之前,您可能想比较每个工具的压缩效率。请注意压缩小文件或几个文件,结果可能不会有太大的差异,但可能会给你看出它们的差异。 # tar cf ApacheLogs-$(date +%Y%m%d).tar /var/log/httpd/* # Create an ordinary tarball # tar czf ApacheLogs-$(date +%Y%m%d).tar.gz /var/log/httpd/* # Create a tarball and compress with gzip @@ -270,42 +257,42 @@ cat 命令实例 ![Linux tar command examples](http://www.tecmint.com/wp-content/uploads/2015/03/tar-command-examples.png) -tar 命令实例 +*tar 命令实例* -#### 例6:归档时同时保存原始权限和所有权 #### +**例6:归档时同时保存原始权限和所有权** -如果你创建的是用户的主目录的备份,你需要要存储的个人文件与原始权限和所有权,而不是通过改变他们的用户帐户或守护进程来执行备份。下面的命令可以在归档时保留文件属性 +如果你正在从用户的主目录创建备份,你需要要存储的个人文件与原始权限和所有权,而不是通过改变它们的用户帐户或守护进程来执行备份。下面的命令可以在归档时保留文件属性。 # tar cJf ApacheLogs-$(date +%Y%m%d).tar.xz /var/log/httpd/* --same-permissions --same-owner ### 创建软连接和硬链接 ### -在Linux中,有2种类型的链接文件:硬链接和软(也称为符号)链接。因为硬链接文件代表另一个名称是由同一点确定,然后链接到实际的数据;符号链接指向的文件名,而不是实际的数据 +在Linux中,有2种类型的链接文件:硬链接和软(也称为符号)链接。因为硬链接文件只是现存文件的另一个名字,使用相同的 inode 号,它指向实际的数据;而符号链接只是指向的文件名。 -此外,硬链接不占用磁盘上的空间,而符号链接做占用少量的空间来存储的链接本身的文本。硬链接的缺点就是要求他们必须在同一个innode内。而符号链接没有这个限制,符号链接因为只保存了文件名和目录名,所以可以跨文件系统. 
+此外,硬链接不占用磁盘上的空间,而符号链接则占用少量的空间来存储的链接本身的文本。硬链接的缺点就是要求它们必须在同一个文件系统内,因为 inode 在一个文件系统内是唯一的。而符号链接没有这个限制,它们通过文件名而不是 inode 指向其它文件或目录,所以可以跨文件系统。 创建链接的基本语法看起来是相似的: # ln TARGET LINK_NAME #从Link_NAME到Target的硬链接 # ln -s TARGET LINK_NAME #从Link_NAME到Target的软链接 -#### 例7:创建硬链接和软链接 #### +**例7:创建硬链接和软链接** -没有更好的方式来形象的说明一个文件和一个指向它的符号链接的关系,而不是创建这些链接。在下面的截图中你会看到文件的硬链接指向它共享相同的节点都是由466个字节的磁盘使用情况确定。 +没有更好的方式来形象的说明一个文件和一个指向它的硬链接或符号链接的关系,而不是创建这些链接。在下面的截图中你会看到文件和指向它的硬链接共享相同的inode,都是使用了相同的466个字节的磁盘。 -另一方面,在别的磁盘创建一个硬链接将占用5个字节,并不是说你将耗尽存储容量,而是这个例子足以说明一个硬链接和软链接之间的区别。 +另一方面,在别的磁盘创建一个硬链接将占用5个字节,这并不是说你将耗尽存储容量,而是这个例子足以说明一个硬链接和软链接之间的区别。 ![Difference between a hard link and a soft link](http://www.tecmint.com/wp-content/uploads/2015/03/hard-soft-link.png) -软连接和硬链接之间的不同 +*软连接和硬链接之间的不同* -符号链接的典型用法是在Linux系统的版本文件参考。假设有需要一个访问文件foo X.Y 想图书馆一样经常被访问,你想更新一个就可以而不是更新所有的foo X.Y,这时使用软连接更为明智和安全。有文件被看成foo X.Y的链接符号,从而找到foo X.Y +在Linux系统上符号链接的典型用法是指向一个带版本的文件。假设有几个程序需要访问文件fooX.Y,但麻烦是版本经常变化(像图书馆一样)。每次版本更新时我们都需要更新指向 fooX.Y 的单一引用,而更安全、更快捷的方式是,我们可以让程序寻找名为 foo 的符号链接,它实际上指向 fooX.Y。 -这样的话,当你的X和Y发生变化后,你只需更新一个文件,而不是更新每个文件。 +这样的话,当你的X和Y发生变化后,你只需更新符号链接 foo 到新的目标文件,而不用跟踪每个对目标文件的使用并更新。 ### 总结 ### -在这篇文章中,我们回顾了一些基本的文件和目录管理技能,这是每个系统管理员的工具集的一部分。请确保阅读了本系列的其他部分,以及复习并将这些主题与本教程所涵盖的内容相结合。 +在这篇文章中,我们回顾了一些基本的文件和目录管理技能,这是每个系统管理员的工具集的一部分。请确保阅读了本系列的其它部分,并将这些主题与本教程所涵盖的内容相结合。 如果你有任何问题或意见,请随时告诉我们。我们总是很高兴从读者那获取反馈. 
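+上面描述的带版本符号链接的用法,可以用下面的小例子直观地验证。这只是一个演示性的示例:文件名均为假设,并且在一个临时目录中操作,不会影响系统。

```shell
# 在临时目录中演示带版本的符号链接(文件名均为示例)
cd "$(mktemp -d)"
echo "v1.2" > foo1.2        # 带版本号的实际文件
ln -s foo1.2 foo            # 程序只访问不带版本号的 foo
readlink foo                # 输出 foo1.2
echo "v1.3" > foo1.3        # 新版本发布
ln -sfn foo1.3 foo          # 只需更新符号链接,使其指向新版本
cat foo                     # 输出 v1.3
```

当新版本出现问题时,也可以用同样的 `ln -sfn` 把 foo 切回旧版本,程序本身无需任何改动。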
@@ -315,11 +302,11 @@ via: http://www.tecmint.com/file-and-directory-management-in-linux/ 作者:[Gabriel Cánepa][a] 译者:[xiqingongzi](https://github.com/xiqingongzi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/ +[1]:https://linux.cn/article-2740-1.html [2]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ [3]:http://www.tecmint.com/18-tar-command-examples-in-linux/ From eb242db8397aee293edf7c2ae2251a2fcd9c6782 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Sun, 6 Sep 2015 10:51:55 +0800 Subject: [PATCH 434/697] =?UTF-8?q?20150906-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...og Files With Logrotate On Ubuntu 12.10.md | 116 ++++++++++++++++++ 1 file changed, 116 insertions(+) create mode 100644 sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md diff --git a/sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md b/sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md new file mode 100644 index 0000000000..2968dc113e --- /dev/null +++ b/sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md @@ -0,0 +1,116 @@ +How To Manage Log Files With Logrotate On Ubuntu 12.10 +================================================================================ +#### About Logrotate #### + +Logrotate is a utility/tool that manages activities like automatic rotation, removal and compression of log files in a system. This is an excellent tool to manage your logs conserve precious disk space. By having a simple yet powerful configuration file, different parameters of logrotation can be controlled. 
This gives complete control over the way logs are automatically managed, with no need for manual intervention.
+
+### Prerequisites ###
+
+As a prerequisite, we are assuming that you have gone through the article on how to set up your droplet or VPS. If not, you can find the article [here][1]. This tutorial requires you to have a VPS up and running and to be logged into it.
+
+#### Setup Logrotate ####
+
+### Step 1—Update System and System Packages ###
+
+Run the following command to update the package lists from apt-get and get the information on the newest versions of packages and their dependencies.
+
+    sudo apt-get update
+
+### Step 2—Install Logrotate ###
+
+If logrotate is not already on your VPS, install it now through apt-get.
+
+    sudo apt-get install logrotate
+
+### Step 3 — Confirmation ###
+
+To verify that logrotate was successfully installed, run this at the command prompt.
+
+    logrotate
+
+Since the logrotate utility is based on configuration files, the above command will not rotate any files; it shows a brief overview of the usage and the switch options available.
+
+### Step 4—Configure Logrotate ###
+
+Configurations and default options for the logrotate utility are present in:
+
+    /etc/logrotate.conf
+
+Some of the important configuration settings are: rotation-interval, log-file-size, rotation-count and compression.
+
+Application-specific log file information (to override the defaults) is kept at:
+
+    /etc/logrotate.d/
+
+We will have a look at a few examples to understand the concept better.
+
+### Step 5—Example ###
+
+An example application configuration setting would be the dpkg (Debian package management system), which is stored in /etc/logrotate.d/dpkg.
One of the entries in this file would be:
+
+    /var/log/dpkg.log {
+            monthly
+            rotate 12
+            compress
+            delaycompress
+            missingok
+            notifempty
+            create 644 root root
+    }
+
+What this means is that:
+
+- the logrotation for dpkg monitors the /var/log/dpkg.log file and does this on a monthly basis; this is the rotation interval.
+- 'rotate 12' signifies that 12 rotated log files are kept before older ones are deleted; with monthly rotation, that is a year's worth of logs.
+- log files can be compressed into the gzip format by specifying 'compress'; 'delaycompress' postpones the compression until the next log rotation, and works only if the 'compress' option is also specified.
+- 'missingok' avoids halting on any error and carries on with the next log file.
+- 'notifempty' avoids log rotation if the log file is empty.
+- 'create <mode> <owner> <group>' creates a new empty log file with the specified properties after log rotation.
+
+Though missing in the above example, 'size' is also an important setting if you want to control how large the log files can grow in the system.
+
+A configuration setting of around 100MB would look like:
+
+    size 100M
+
+Note that if both size and rotation interval are set, then size takes the higher priority. That is, if a configuration file has the following settings:
+
+    monthly
+    size 100M
+
+then the logs are rotated once the file size reaches 100M, without waiting for the monthly cycle.
+
+### Step 6—Cron Job ###
+
+You can also set up the log rotation as a cron job so that the manual process can be avoided and rotation is taken care of automatically. By specifying an entry in /etc/cron.daily/logrotate, the rotation is triggered daily.
+
+### Step 7—Status Check and Verification ###
+
+To verify whether a particular log is indeed rotating, and to check the last date and time of its rotation, check the /var/lib/logrotate/status file. This is a neatly formatted file that contains the log file name and the date on which it was last rotated.
+ + cat /var/lib/logrotate/status + +A few entries from this file, for example: + + "/var/log/lpr.log" 2013-4-11 + "/var/log/dpkg.log" 2013-4-11 + "/var/log/pm-suspend.log" 2013-4-11 + "/var/log/syslog" 2013-4-11 + "/var/log/mail.info" 2013-4-11 + "/var/log/daemon.log" 2013-4-11 + "/var/log/apport.log" 2013-4-11 + +Congratulations! You have logrotate installed in your system. Now, change the configuration settings as per your requirements. + +Try 'man logrotate' or 'logrotate -?' for more details. + +-------------------------------------------------------------------------------- + +via: https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10 + +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.digitalocean.com/community/articles/initial-server-setup-with-ubuntu-12-04 \ No newline at end of file From 0985333bb0b25055c9b88e9134e90c0ff68f5a31 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Sun, 6 Sep 2015 10:54:07 +0800 Subject: [PATCH 435/697] =?UTF-8?q?20150906-2=20=E9=80=89=E9=A2=98=20strug?= =?UTF-8?q?gling=20=E6=8E=A8=E8=8D=90=E5=B9=B6=E8=AE=A4=E9=A2=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...lling NGINX and NGINX Plus With Ansible.md | 450 ++++++++++++++++++ 1 file changed, 450 insertions(+) create mode 100644 sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md diff --git a/sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md b/sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md new file mode 100644 index 0000000000..42ebe26e1b --- /dev/null +++ b/sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md @@ -0,0 +1,450 @@ +Installing NGINX and NGINX Plus With Ansible +================================================================================ +Coming from 
a production operations background, I have learned to love all things related to automation. Why do something by hand if a computer can do it for you? But creating and implementing automation can be a difficult task given an ever-changing infrastructure and the various technologies surrounding your environments. This is why I love [Ansible][1]. Ansible is an open source tool for IT configuration management, deployment, and orchestration that is extremely easy to use.
+
+One of my favorite features of Ansible is that it is completely clientless. To manage a system, a connection is made over SSH, using either [Paramiko][2] (a Python library) or native [OpenSSH][3]. Another attractive feature of Ansible is its extensive selection of modules. These modules can be used to perform some of the common tasks of a system administrator. In particular, they make Ansible a powerful tool for installing and configuring any application across multiple servers, environments, and operating systems, all from one central location.
+
+In this tutorial I will walk you through the steps for using Ansible to install and deploy the open source [NGINX][4] software and [NGINX Plus][5], our commercial product. I’m showing deployment onto a [CentOS][6] server, but I have included details about deploying on Ubuntu servers in [Creating an Ansible Playbook for Installing NGINX and NGINX Plus on Ubuntu][7] below.
+
+For this tutorial I will be using Ansible version 1.9.2 and performing the deployment from a server running CentOS 7.1.
+
+    $ ansible --version
+    ansible 1.9.2
+
+    $ cat /etc/redhat-release
+    CentOS Linux release 7.1.1503 (Core)
+
+If you don’t already have Ansible, you can get instructions for installing it [at the Ansible site][8].
+
+If you are using CentOS, installing Ansible is as easy as typing the following command. If you want to compile from source or for other distributions, see the instructions at the Ansible link provided just above.
+ + $ sudo yum install -y epel-release && sudo yum install -y ansible + +Depending on your environment, some of the commands in this tutorial might require sudo privileges. The path to the files, usernames, and destination servers are all values that will be specific to your environment. + +### Creating an Ansible Playbook for Installing NGINX (CentOS) ### + +First we create a working directory for our NGINX deployment, along with subdirectories and deployment configuration files. I usually recommend creating the directory in your home directory and show that in all examples in this tutorial. + + $ cd $HOME + $ mkdir -p ansible-nginx/tasks/ + $ touch ansible-nginx/deploy.yml + $ touch ansible-nginx/tasks/install_nginx.yml + +The directory structure now looks like this. You can check by using the tree command. + + $ tree $HOME/ansible-nginx/ + /home/kjones/ansible-nginx/ + ├── deploy.yml + └── tasks + └── install_nginx.yml + + 1 directory, 2 files + +If you do not have tree installed, you can do so using the following command. + + $ sudo yum install -y tree + +#### Creating the Main Deployment File #### + +Next we open **deploy.yml** in a text editor. I prefer vim for editing configuration files on the command line, and will use it throughout the tutorial. + + $ vim $HOME/ansible-nginx/deploy.yml + +The **deploy.yml** file is our main Ansible deployment file, which we’ll reference when we run the ansible‑playbook command in [Running Ansible to Deploy NGINX][9]. Within this file we specify the inventory for Ansible to use along with any other configuration files to include at runtime. + +In my example I use the [include][10] module to specify a configuration file that has the steps for installing NGINX. While it is possible to create a playbook in one very large file, I recommend that you separate the steps into smaller included files to keep things organized. 
Sample use cases for an include are copying static content, copying configuration files, or assigning variables for a more advanced deployment with configuration logic. + +Type the following lines into the file. I include the filename at the top in a comment for reference. + + # ./ansible-nginx/deploy.yml + + - hosts: nginx + tasks: + - include: 'tasks/install_nginx.yml' + +The hosts statement tells Ansible to deploy to all servers in the **nginx** group, which is defined in **/etc/ansible/hosts**. We’ll edit this file in [Creating the List of NGINX Servers below][11]. + +The include statement tells Ansible to read in and execute the contents of the **install_nginx.yml** file from the **tasks** directory during deployment. The file includes the steps for downloading, installing, and starting NGINX. We’ll create this file in the next section. + +#### Creating the Deployment File for NGINX #### + +Now let’s save our work to **deploy.yml** and open up **install_nginx.yml** in the editor. + + $ vim $HOME/ansible-nginx/tasks/install_nginx.yml + +The file is going to contain the instructions – written in [YAML][12] format – for Ansible to follow when installing and configuring our NGINX deployment. Each section (step in the process) starts with a name statement (preceded by hyphen) that describes the step. The string following name: is written to stdout during the Ansible deployment and can be changed as you wish. The next line of a section in the YAML file is the module that will be used during that deployment step. In the configuration below, both the [yum][13] and [service][14] modules are used. The yum module is used to install packages on CentOS. The service module is used to manage UNIX services. The final line or lines in a section specify any parameters for the module (in the example, these lines start with name and state). + +Type the following lines into the file. As with **deploy.yml**, the first line in our file is a comment that names the file for reference. 
The first section tells Ansible to install the **.rpm** file for CentOS 7 from the NGINX repository. This directs the package manager to install the most recent stable version of NGINX directly from NGINX. Modify the pathname as necessary for your CentOS version. A list of available packages can be found on the [open source NGINX website][15]. The next two sections tell Ansible to install the latest NGINX version using the yum module and then start NGINX using the service module. + +**Note:** In the first section, the pathname to the CentOS package appears on two lines only for space reasons. Type the entire path on a single line. + + # ./ansible-nginx/tasks/install_nginx.yml + + - name: NGINX | Installing NGINX repo rpm + yum: + name: http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm + + - name: NGINX | Installing NGINX + yum: + name: nginx + state: latest + + - name: NGINX | Starting NGINX + service: + name: nginx + state: started + +#### Creating the List of NGINX Servers #### + +Now that we have our Ansible deployment configuration files all set up, we need to tell Ansible exactly which servers to deploy to. We specify this in the Ansible **hosts** file I mentioned earlier. Let’s make a backup of the existing file and create a new one just for our deployment. + + $ sudo mv /etc/ansible/hosts /etc/ansible/hosts.backup + $ sudo vim /etc/ansible/hosts + +Type (or edit) the following lines in the file to create a group called **nginx** and list the servers to install NGINX on. You can designate servers by hostname, IP address, or in an array such as **server[1-3].domain.com**. Here I designate one server by its IP address. + + # /etc/ansible/hosts + + [nginx] + 172.16.239.140 + +#### Setting Up Security #### + +We are almost all set, but before deployment we need to ensure that Ansible has authorization to access our destination server over SSH. 
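The simplest route is key-based authentication. A minimal sketch of generating a dedicated deployment key follows; the key path and comment are illustrative choices, and the user and IP address are this tutorial's placeholders, not requirements.

```shell
# Generate a passphrase-less RSA key pair for Ansible deployments.
# The key directory is illustrative; in practice you would likely use ~/.ssh.
keydir="$(mktemp -d)"
ssh-keygen -t rsa -b 2048 -N '' -q -f "$keydir/ansible_deploy" -C "ansible-deploy"
ls "$keydir"

# Then install the public key on each destination server (placeholder
# user/IP from this tutorial), after which no password prompt is needed:
#   ssh-copy-id -i "$keydir/ansible_deploy.pub" kjones@172.16.239.140
```

Once the public key is in the destination server's **authorized_keys** file, the `--ask-pass` flag used later in this tutorial can be dropped.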
+ +The preferred and most secure method is to add the Ansible deployment server’s RSA SSH key to the destination server’s **authorized_keys** file, which gives Ansible unrestricted SSH permissions on the destination server. To learn more about this configuration, see [Securing OpenSSH][16] on wiki.centos.org. This way you can automate your deployments without user interaction. + +Alternatively, you can request the password interactively during deployment. I strongly recommend that you use this method during testing only, because it is insecure and there is no way to track changes to a destination host’s fingerprint. If you want to do this, change the value of StrictHostKeyChecking from the default yes to no in the **/etc/ssh/ssh_config** file on each of your destination hosts. Then add the --ask-pass flag on the ansible-playbook command to have Ansible prompt for the SSH password. + +Here I illustrate how to edit the **ssh_config** file to disable strict host key checking on the destination server. We manually SSH into the server to which we’ll deploy NGINX and change the value of StrictHostKeyChecking to no. + + $ ssh kjones@172.16.239.140 + kjones@172.16.239.140's password:*********** + + [kjones@nginx ]$ sudo vim /etc/ssh/ssh_config + +After you make the change, save **ssh_config**, and connect to your Ansible server via SSH. The setting should look as below before you save your work. + + # /etc/ssh/ssh_config + + StrictHostKeyChecking no + +#### Running Ansible to Deploy NGINX #### + +If you have followed the steps in this tutorial, you can run the following command to have Ansible deploy NGINX. (Again, if you have set up RSA SSH key authentication, then the --ask-pass flag is not needed.) Run the command on the Ansible server with the configuration files we created above. + + $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx/deploy.yml + +Ansible prompts for the SSH password and produces output like the following. 
A recap that reports failed=0 like this one indicates that deployment succeeded. + + $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx/deploy.yml + SSH password: + + PLAY [all] ******************************************************************** + + GATHERING FACTS *************************************************************** + ok: [172.16.239.140] + + TASK: [NGINX | Installing NGINX repo rpm] ************************************* + changed: [172.16.239.140] + + TASK: [NGINX | Installing NGINX] ********************************************** + changed: [172.16.239.140] + + TASK: [NGINX | Starting NGINX] ************************************************ + changed: [172.16.239.140] + + PLAY RECAP ******************************************************************** + 172.16.239.140 : ok=4 changed=3 unreachable=0 failed=0 + +If you didn’t get a successful play recap, you can try running the ansible-playbook command again with the -vvvv flag (verbose with connection debugging) to troubleshoot the deployment process. + +When deployment succeeds (as it did for us on the first try), you can verify that NGINX is running on the remote server by running the following basic [cURL][17] command. Here it returns 200 OK. Success! We have successfully installed NGINX using Ansible. + + $ curl -Is 172.16.239.140 | grep HTTP + HTTP/1.1 200 OK + +### Creating an Ansible Playbook for Installing NGINX Plus (CentOS) ### + +Now that I’ve shown you how to install the open source version of NGINX, I’ll walk you through the steps for installing NGINX Plus. This requires some additional changes to the deployment configuration and showcases some of Ansible’s other features. + +#### Copying the NGINX Plus Certificate and Key to the Ansible Server #### + +To install and configure NGINX Plus with Ansible, we first need to copy the key and certificate for our NGINX Plus subscription from the [NGINX Plus Customer Portal][18] to the standard location on the Ansible deployment server. 
+ +Access to the NGINX Plus Customer Portal is available for customers who have purchased NGINX Plus or are evaluating it. If you are interested in evaluating NGINX Plus, you can request a 30-day free trial [here][19]. You will receive a link to your trial certificate and key shortly after you sign up. + +On a Mac or Linux host, use the [scp][20] utility as I show here. On a Microsoft Windows host, you can use [WinSCP][21]. For this tutorial, I downloaded the files to my Mac laptop, then used scp to copy them to the Ansible server. These commands place both the key and certificate in my home directory. + + $ cd /path/to/nginx-repo-files/ + $ scp nginx-repo.* user@destination-server:. + +Next we SSH to the Ansible server, make sure the SSL directory for NGINX Plus exists, and move the files there. + + $ ssh user@destination-server + $ sudo mkdir -p /etc/ssl/nginx/ + $ sudo mv nginx-repo.* /etc/ssl/nginx/ + +Verify that your **/etc/ssl/nginx** directory contains both the certificate (**.crt**) and key (**.key**) files. You can check by using the tree command. + + $ tree /etc/ssl/nginx + /etc/ssl/nginx + ├── nginx-repo.crt + └── nginx-repo.key + + 0 directories, 2 files + +If you do not have tree installed, you can do so using the following command. + + $ sudo yum install -y tree + +#### Creating the Ansible Directory Structure #### + +The remaining steps are very similar to the ones for open source NGINX that we performed in [Creating an Ansible Playbook for Installing NGINX (CentOS)][22]. First we set up a working directory for our NGINX Plus deployment. Again I prefer creating it as a subdirectory of my home directory. + + $ cd $HOME + $ mkdir -p ansible-nginx-plus/tasks/ + $ touch ansible-nginx-plus/deploy.yml + $ touch ansible-nginx-plus/tasks/install_nginx_plus.yml + +The directory structure now looks like this. 
+ + $ tree $HOME/ansible-nginx-plus/ + /home/kjones/ansible-nginx-plus/ + ├── deploy.yml + └── tasks + └── install_nginx_plus.yml + + 1 directory, 2 files + +#### Creating the Main Deployment File #### + +Next we use vim to create the **deploy.yml** file as for open source NGINX. + + $ vim ansible-nginx-plus/deploy.yml + +The only difference from the open source NGINX deployment is that we change the name of the included file to **install_nginx_plus.yml**. As a reminder, the file tells Ansible to deploy NGINX Plus on all servers in the **nginx** group (which is defined in **/etc/ansible/hosts**), and to read in and execute the contents of the **install_nginx_plus.yml** file from the **tasks** directory during deployment. + + # ./ansible-nginx-plus/deploy.yml + + - hosts: nginx + tasks: + - include: 'tasks/install_nginx_plus.yml' + +If you have not done so already, you also need to create the hosts file as detailed in [Creating the List of NGINX Servers][23] above. + +#### Creating the Deployment File for NGINX Plus #### + +Open **install_nginx_plus.yml** in a text editor. The file is going to contain the instructions for Ansible to follow when installing and configuring your NGINX Plus deployment. The commands and modules are specific to CentOS and some are unique to NGINX Plus. + + $ vim ansible-nginx-plus/tasks/install_nginx_plus.yml + +The first section uses the [file][24] module, telling Ansible to create the SSL directory for NGINX Plus as specified by the path and state arguments, set the ownership to root, and change the mode to 0700. + + # ./ansible-nginx-plus/tasks/install_nginx_plus.yml + + - name: NGINX Plus | Creating NGINX Plus ssl cert repo directory + file: path=/etc/ssl/nginx state=directory group=root mode=0700 + +The next two sections use the [copy][25] module to copy the NGINX Plus certificate and key from the Ansible deployment server to the NGINX Plus server during the deployment, again setting ownership to root and the mode to 0700. 
+ + - name: NGINX Plus | Copying NGINX Plus repository certificate + copy: src=/etc/ssl/nginx/nginx-repo.crt dest=/etc/ssl/nginx/nginx-repo.crt owner=root group=root mode=0700 + + - name: NGINX Plus | Copying NGINX Plus repository key + copy: src=/etc/ssl/nginx/nginx-repo.key dest=/etc/ssl/nginx/nginx-repo.key owner=root group=root mode=0700 + +Next we tell Ansible to use the [get_url][26] module to download the CA certificate from the NGINX Plus repository at the remote location specified by the url argument, put it in the directory specified by the dest argument, and set the mode to 0700. + + - name: NGINX Plus | Downloading NGINX Plus CA certificate + get_url: url=https://cs.nginx.com/static/files/CA.crt dest=/etc/ssl/nginx/CA.crt mode=0700 + +Similarly, we tell Ansible to download the NGINX Plus repo file using the get_url module and copy it to the **/etc/yum.repos.d** directory on the NGINX Plus server. + + - name: NGINX Plus | Downloading yum NGINX Plus repository + get_url: url=https://cs.nginx.com/static/files/nginx-plus-7.repo dest=/etc/yum.repos.d/nginx-plus-7.repo mode=0700 + +The final two name sections tell Ansible to install and start NGINX Plus using the yum and service modules. + + - name: NGINX Plus | Installing NGINX Plus + yum: + name: nginx-plus + state: latest + + - name: NGINX Plus | Starting NGINX Plus + service: + name: nginx + state: started + +#### Running Ansible to Deploy NGINX Plus #### + +After saving the **install_nginx_plus.yml** file, we run the ansible-playbook command to deploy NGINX Plus. Again here we include the --ask-pass flag to have Ansible prompt for the SSH password and pass it to each NGINX Plus server, and specify the path to the main Ansible **deploy.yml** file. 
+
+    $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx-plus/deploy.yml
+
+    PLAY [nginx] ******************************************************************
+
+    GATHERING FACTS ***************************************************************
+    ok: [172.16.239.140]
+
+    TASK: [NGINX Plus | Creating NGINX Plus ssl cert repo directory] **************
+    changed: [172.16.239.140]
+
+    TASK: [NGINX Plus | Copying NGINX Plus repository certificate] ****************
+    changed: [172.16.239.140]
+
+    TASK: [NGINX Plus | Copying NGINX Plus repository key] ************************
+    changed: [172.16.239.140]
+
+    TASK: [NGINX Plus | Downloading NGINX Plus CA certificate] ********************
+    changed: [172.16.239.140]
+
+    TASK: [NGINX Plus | Downloading yum NGINX Plus repository] ********************
+    changed: [172.16.239.140]
+
+    TASK: [NGINX Plus | Installing NGINX Plus] ************************************
+    changed: [172.16.239.140]
+
+    TASK: [NGINX Plus | Starting NGINX Plus] **************************************
+    changed: [172.16.239.140]
+
+    PLAY RECAP ********************************************************************
+    172.16.239.140             : ok=8    changed=7    unreachable=0    failed=0
+
+The playbook recap was successful. Now we can run a quick curl command to verify that NGINX Plus is running. Great, we get 200 OK! Success! We have successfully installed NGINX Plus with Ansible.
+
+    $ curl -Is http://172.16.239.140 | grep HTTP
+    HTTP/1.1 200 OK
+
+### Creating an Ansible Playbook for Installing NGINX and NGINX Plus on Ubuntu ###
+
+The process for deploying NGINX and NGINX Plus on [Ubuntu servers][27] is pretty similar to the process on CentOS, so instead of providing step-by-step instructions I’ll show the complete deployment files and point out the slight differences from CentOS.
+
+First create the Ansible directory structure and the main Ansible deployment file, as for CentOS.
Also create the **/etc/ansible/hosts** file as described in [Creating the List of NGINX Servers][28]. For NGINX Plus, you need to copy over the key and certificate as described in [Copying the NGINX Plus Certificate and Key to the Ansible Server][29].
+
+Here’s the **install_nginx.yml** deployment file for open source NGINX. In the first section, we use the [apt_key][30] module to import the NGINX signing key. The next two sections use the [lineinfile][31] module to add the package URLs for Ubuntu 14.04 to the **sources.list** file. Lastly we use the [apt][32] module to update the cache and install NGINX (apt replaces the yum module we used for deploying to CentOS).
+
+    # ./ansible-nginx/tasks/install_nginx.yml
+
+    - name: NGINX | Adding NGINX signing key
+      apt_key: url=http://nginx.org/keys/nginx_signing.key state=present
+
+    - name: NGINX | Adding sources.list deb url for NGINX
+      lineinfile: dest=/etc/apt/sources.list line="deb http://nginx.org/packages/mainline/ubuntu/ trusty nginx"
+
+    - name: NGINX | Adding sources.list deb-src url for NGINX
+      lineinfile: dest=/etc/apt/sources.list line="deb-src http://nginx.org/packages/mainline/ubuntu/ trusty nginx"
+
+    - name: NGINX | Updating apt cache
+      apt:
+        update_cache: yes
+
+    - name: NGINX | Installing NGINX
+      apt:
+        pkg: nginx
+        state: latest
+
+    - name: NGINX | Starting NGINX
+      service:
+        name: nginx
+        state: started
+
+Here’s the **install_nginx_plus.yml** deployment file for NGINX Plus. The first four sections set up the NGINX Plus key and certificate. Then we use the apt_key module to import the signing key as for open source NGINX, and the get_url module to download the apt configuration file for NGINX Plus. The [shell][33] module invokes a printf command that writes its output to the **nginx-plus.list** file in the **sources.list.d** directory. The final name sections are the same as for open source NGINX.
+
+    # ./ansible-nginx-plus/tasks/install_nginx_plus.yml
+
+    - name: NGINX Plus | Creating NGINX Plus ssl cert repo directory
+      file: path=/etc/ssl/nginx state=directory group=root mode=0700
+
+    - name: NGINX Plus | Copying NGINX Plus repository certificate
+      copy: src=/etc/ssl/nginx/nginx-repo.crt dest=/etc/ssl/nginx/nginx-repo.crt owner=root group=root mode=0700
+
+    - name: NGINX Plus | Copying NGINX Plus repository key
+      copy: src=/etc/ssl/nginx/nginx-repo.key dest=/etc/ssl/nginx/nginx-repo.key owner=root group=root mode=0700
+
+    - name: NGINX Plus | Downloading NGINX Plus CA certificate
+      get_url: url=https://cs.nginx.com/static/files/CA.crt dest=/etc/ssl/nginx/CA.crt mode=0700
+
+    - name: NGINX Plus | Adding NGINX Plus signing key
+      apt_key: url=http://nginx.org/keys/nginx_signing.key state=present
+
+    - name: NGINX Plus | Downloading Apt-Get NGINX Plus repository
+      get_url: url=https://cs.nginx.com/static/files/90nginx dest=/etc/apt/apt.conf.d/90nginx mode=0700
+
+    - name: NGINX Plus | Adding sources.list url for NGINX Plus
+      shell: printf "deb https://plus-pkgs.nginx.com/ubuntu `lsb_release -cs` nginx-plus\n" >/etc/apt/sources.list.d/nginx-plus.list
+
+    - name: NGINX Plus | Running apt-get update
+      apt:
+        update_cache: yes
+
+    - name: NGINX Plus | Installing NGINX Plus via apt-get
+      apt:
+        pkg: nginx-plus
+        state: latest
+
+    - name: NGINX Plus | Start NGINX Plus
+      service:
+        name: nginx
+        state: started
+
+We’re now ready to run the ansible-playbook command:
+
+    $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx-plus/deploy.yml
+
+You should get a successful play recap. If the play is not successful, you can use the verbose flag to help troubleshoot your deployment as described in [Running Ansible to Deploy NGINX][34].
+
+### Summary ###
+
+What I demonstrated in this tutorial is just the beginning of what Ansible can do to help automate your NGINX or NGINX Plus deployment.
There are many useful modules ranging from user account management to custom configuration templates. If you are interested in learning more about these, please visit the extensive [Ansible documentation][35] site.
+
+To learn more about Ansible, come hear my talk on deploying NGINX Plus with Ansible at [NGINX.conf 2015][36], September 22–24 in San Francisco.
+
+--------------------------------------------------------------------------------
+
+via: https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/
+
+作者:[Kevin Jones][a]
+译者:[struggling](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.nginx.com/blog/author/kjones/
+[1]:http://www.ansible.com/
+[2]:http://www.paramiko.org/
+[3]:http://www.openssh.com/
+[4]:http://nginx.org/en/
+[5]:https://www.nginx.com/products/
+[6]:http://www.centos.org/
+[7]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#ubuntu
+[8]:http://docs.ansible.com/ansible/intro_installation.html#installing-the-control-machine
+[9]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#deploy-nginx
+[10]:http://docs.ansible.com/ansible/playbooks_roles.html#task-include-files-and-encouraging-reuse
+[11]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#list-nginx
+[12]:http://docs.ansible.com/ansible/YAMLSyntax.html
+[13]:http://docs.ansible.com/ansible/yum_module.html
+[14]:http://docs.ansible.com/ansible/service_module.html
+[15]:http://nginx.org/en/linux_packages.html
+[16]:http://wiki.centos.org/HowTos/Network/SecuringSSH
+[17]:http://curl.haxx.se/
+[18]:https://cs.nginx.com/
+[19]:https://www.nginx.com/#free-trial
+[20]:http://linux.die.net/man/1/scp
+[21]:https://winscp.net/eng/download.php
+[22]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#playbook-nginx
+[23]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#list-nginx
+[24]:http://docs.ansible.com/ansible/file_module.html +[25]:http://docs.ansible.com/ansible/copy_module.html +[26]:http://docs.ansible.com/ansible/get_url_module.html +[27]:http://www.ubuntu.com/ +[28]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#list-nginx +[29]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#copy-cert-key +[30]:http://docs.ansible.com/ansible/apt_key_module.html +[31]:http://docs.ansible.com/ansible/lineinfile_module.html +[32]:http://docs.ansible.com/ansible/apt_module.html +[33]:http://docs.ansible.com/ansible/shell_module.html +[34]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#deploy-nginx +[35]:http://docs.ansible.com/ +[36]:https://www.nginx.com/nginxconf/ \ No newline at end of file From f06d5858a4d431cfcf1f2dcdb4e6f317e57ee525 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 6 Sep 2015 12:45:46 +0800 Subject: [PATCH 436/697] Update 20150906 Installing NGINX and NGINX Plus With Ansible.md --- .../20150906 Installing NGINX and NGINX Plus With Ansible.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md b/sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md index 42ebe26e1b..3fa66fe6b1 100644 --- a/sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md +++ b/sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md @@ -1,4 +1,5 @@ -Installing NGINX and NGINX Plus With Ansible +translation by strugglingyouth +nstalling NGINX and NGINX Plus With Ansible ================================================================================ Coming from a production operations background, I have learned to love all things related to automation. Why do something by hand if a computer can do it for you? 
But creating and implementing automation can be a difficult task given an ever-changing infrastructure and the various technologies surrounding your environments. This is why I love [Ansible][1]. Ansible is an open source tool for IT configuration management, deployment, and orchestration that is extremely easy to use. @@ -447,4 +448,4 @@ via: https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/ [33]:http://docs.ansible.com/ansible/shell_module.html [34]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#deploy-nginx [35]:http://docs.ansible.com/ -[36]:https://www.nginx.com/nginxconf/ \ No newline at end of file +[36]:https://www.nginx.com/nginxconf/ From c3481798efafdc0ae4fd0667c1effe79d2d89764 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Sun, 6 Sep 2015 15:05:37 +0800 Subject: [PATCH 437/697] =?UTF-8?q?20150906-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...h In Ubuntu And elementary OS With NaSC.md | 53 +++++ ... 
How To Set Up Your FTP Server In Linux.md | 102 +++++++++
 ...ata intrusion detection system on Linux.md | 197 ++++++++++++++++++
 3 files changed, 352 insertions(+)
 create mode 100644 sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md
 create mode 100644 sources/tech/20150906 How To Set Up Your FTP Server In Linux.md
 create mode 100644 sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md

diff --git a/sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md b/sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md
new file mode 100644
index 0000000000..67601c8ce6
--- /dev/null
+++ b/sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md
@@ -0,0 +1,53 @@
+Do Simple Math In Ubuntu And elementary OS With NaSC
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Make-Math-Simpler-with-NaSC.jpg)
+
+[NaSC][1], an abbreviation of Not a Soulver Clone, is a third party app developed for elementary OS. As the name suggests, NaSC is heavily inspired by [Soulver][2], an OS X app for doing maths like a normal person.
+
+elementary OS itself draws from OS X, so it is no surprise that a number of its third party apps are also inspired by OS X apps.
+
+Coming back to NaSC, what exactly does “maths like a normal person” mean? Well, it means writing the way you think in your mind. As per the description of the app:
+
+> “Its an app where you do maths like a normal person. It lets you type whatever you want and smartly figures out what is math and spits out an answer on the right pane. Then you can plug those answers in to future equations and if that answer changes, so does the equations its used in.”
+
+Still not convinced? Here, take a look at this screenshot.
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/NaSC.png)
+
+Now do you see what ‘maths like a normal person’ means? Honestly, I am not a fan of such apps, but it might be useful for some of you. Let’s see how you can install NaSC in elementary OS, Ubuntu and Linux Mint.
+
+### Install NaSC in Ubuntu, elementary OS and Mint ###
+
+There is a PPA available for installing NaSC. The PPA says ‘daily’, which could mean daily builds (i.e. unstable), but in my quick test it worked just fine.
+
+Open a terminal and use the following commands:
+
+    sudo apt-add-repository ppa:nasc-team/daily
+    sudo apt-get update
+    sudo apt-get install nasc
+
+Here is a screenshot of NaSC in Ubuntu 15.04:
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/NaSC-Ubuntu.png)
+
+If you want to remove it, you can use the following commands:
+
+    sudo apt-get remove nasc
+    sudo apt-add-repository --remove ppa:nasc-team/daily
+
+If you try it, do share your experience with it. You can also try [Vocal podcast app for Linux][3], another third party app for elementary OS.
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/math-ubuntu-nasc/
+
+作者:[Abhishek][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:http://parnold-x.github.io/nasc/
+[2]:http://www.acqualia.com/soulver/
+[3]:http://itsfoss.com/podcast-app-vocal-linux/
\ No newline at end of file
diff --git a/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md b/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md
new file mode 100644
index 0000000000..a3e2096359
--- /dev/null
+++ b/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md
@@ -0,0 +1,102 @@
+How To Set Up Your FTP Server In Linux
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Setup-FTP-Server-in-Linux.jpg)
+
+In this lesson, I will explain how to set up your own FTP server. But first, let me quickly tell you what FTP is.
+
+### What is FTP? ###
+
+[FTP][1] is an acronym for File Transfer Protocol. As the name suggests, FTP is used to transfer files between computers on a network. You can use FTP to exchange files between computer accounts, transfer files between an account and a desktop computer, or access online software archives. Keep in mind, however, that many FTP sites are heavily used and may require several attempts before connecting.
+
+An FTP address looks a lot like an HTTP or website address except that it uses the prefix ftp:// instead of http://.
+
+### What is an FTP Server? ###
+
+Typically, a computer with an FTP address is dedicated to receiving FTP connections. A computer dedicated to receiving FTP connections is referred to as an FTP server or FTP site.
+
+Now, let’s begin a special adventure.
We will set up an FTP server to share files with friends and family. I will use [vsftpd][2] for this purpose.
+
+VSFTPD is FTP server software that claims to be the most secure FTP software. In fact, the first two letters in VSFTPD stand for “very secure”. The software was designed with the vulnerabilities of the FTP protocol in mind.
+
+Nevertheless, you should always remember that there are better solutions for the secure transfer and management of files, such as SFTP (which uses [OpenSSH][3]). The FTP protocol is particularly useful for sharing non-sensitive data and is very reliable at that.
+
+#### Installing VSFTPD in rpm distributions: ####
+
+You can quickly install VSFTPD on your server through the command line interface with:
+
+    dnf -y install vsftpd
+
+#### Installing VSFTPD in deb distributions: ####
+
+You can quickly install VSFTPD on your server through the command line interface with:
+
+    sudo apt-get install vsftpd
+
+#### Installing VSFTPD in Arch distribution: ####
+
+You can quickly install VSFTPD on your server through the command line interface with:
+
+    sudo pacman -S vsftpd
+
+#### Configuring FTP server ####
+
+Most of VSFTPD’s configuration takes place in /etc/vsftpd.conf. The file itself is well documented, so this section only highlights some important changes you may want to make. For all available options and basic documentation, see the man page:
+
+    man vsftpd.conf
+
+Files are served by default from /srv/ftp, as per the Filesystem Hierarchy Standard.
+
+**Enable Uploading:**
+
+The “write_enable” flag must be set to YES in order to allow changes to the filesystem, such as uploading:
+
+    write_enable=YES
+
+**Allow Local Users to Login:**
+
+In order to allow users listed in /etc/passwd to log in, the “local_enable” directive must look like this:
+
+    local_enable=YES
+
+**Anonymous Login**
+
+The following lines control whether anonymous users can log in:
+
+    # Allow anonymous login
+    anonymous_enable=YES
+    # No password is required for an anonymous login (Optional)
+    no_anon_password=YES
+    # Maximum transfer rate for an anonymous client in Bytes/second (Optional)
+    anon_max_rate=30000
+    # Directory to be used for an anonymous login (Optional)
+    anon_root=/example/directory/
+
+**Chroot Jail**
+
+It is possible to set up a chroot environment, which prevents the user from leaving their home directory. To enable this, add/change the following lines in the configuration file:
+
+    chroot_list_enable=YES
+    chroot_list_file=/etc/vsftpd.chroot_list
+
+The “chroot_list_file” variable specifies the file that lists the users to be jailed.
+
+Finally, you must restart your FTP server. Type in your command line:
+
+    sudo systemctl restart vsftpd
+
+That’s it. Your FTP server is up and running.
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/set-ftp-server-linux/
+
+作者:[alimiracle][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/ali/
+[1]:https://en.wikipedia.org/wiki/File_Transfer_Protocol
+[2]:https://security.appspot.com/vsftpd.html
+[3]:http://www.openssh.com/
\ No newline at end of file
diff --git a/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md b/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md
new file mode 100644
index 0000000000..fe4a784d5a
--- /dev/null
+++ b/sources/tech/20150906 How to install Suricata intrusion detection system on Linux.md
@@ -0,0 +1,197 @@
+How to install Suricata intrusion detection system on Linux
+================================================================================
+With incessant security threats, an intrusion detection system (IDS) has become one of the most critical requirements in today's data center environments. However, as more and more servers upgrade their NICs to 10GB/40GB Ethernet, it is increasingly difficult to implement compute-intensive intrusion detection on commodity hardware at line rates. One approach to scaling IDS performance is a **multi-threaded IDS**, where the CPU-intensive deep packet inspection workload is parallelized into multiple concurrent tasks. Such parallelized inspection can exploit multi-core hardware to scale up IDS throughput easily. Two well-known open-source efforts in this area are [Suricata][1] and [Bro][2].
+
+In this tutorial, I am going to demonstrate **how to install and configure Suricata IDS on a Linux server**.
+
+### Install Suricata IDS on Linux ###
+
+Let's build Suricata from source. You first need to install several required dependencies as follows.
+
+#### Install Dependencies on Debian, Ubuntu or Linux Mint ####
+
+    $ sudo apt-get install wget build-essential libpcre3-dev libpcre3-dbg automake autoconf libtool libpcap-dev libnet1-dev libyaml-dev zlib1g-dev libcap-ng-dev libjansson-dev
+
+#### Install Dependencies on CentOS, Fedora or RHEL ####
+
+    $ sudo yum install wget libpcap-devel libnet-devel pcre-devel gcc-c++ automake autoconf libtool make libyaml-devel zlib-devel file-devel jansson-devel nss-devel
+
+Once you install all required packages, go ahead and install Suricata as follows.
+
+First, download the latest Suricata source code from [http://suricata-ids.org/download/][3], and build it. As of this writing, the latest version is 2.0.8.
+
+    $ wget http://www.openinfosecfoundation.org/download/suricata-2.0.8.tar.gz
+    $ tar -xvf suricata-2.0.8.tar.gz
+    $ cd suricata-2.0.8
+    $ ./configure --sysconfdir=/etc --localstatedir=/var
+
+Here is example output from the configure step.
+
+    Suricata Configuration:
+      AF_PACKET support: yes
+      PF_RING support: no
+      NFQueue support: no
+      NFLOG support: no
+      IPFW support: no
+      DAG enabled: no
+      Napatech enabled: no
+      Unix socket enabled: yes
+      Detection enabled: yes
+
+      libnss support: yes
+      libnspr support: yes
+      libjansson support: yes
+      Prelude support: no
+      PCRE jit: yes
+      LUA support: no
+      libluajit: no
+      libgeoip: no
+      Non-bundled htp: no
+      Old barnyard2 support: no
+      CUDA enabled: no
+
+Now compile and install it.
+
+    $ make
+    $ sudo make install
+
+Suricata source code comes with default configuration files. Let's install these default configuration files as follows.
+
+    $ sudo make install-conf
+
+As you know, Suricata is useless without IDS rule sets. Conveniently, the Makefile comes with an IDS rule installation option. To install IDS rules, run the following command.
+
+    $ sudo make install-rules
+
+The above rule installation command will download the current snapshot of community rulesets available from [EmergingThreats.net][4], and store them under /etc/suricata/rules.
+
+![](https://farm1.staticflickr.com/691/20482669553_8b67632277_c.jpg)
+
+### Configure Suricata IDS the First Time ###
+
+Now it's time to configure Suricata. The configuration file is located at **/etc/suricata/suricata.yaml**. Open the file with a text editor for editing.
+
+    $ sudo vi /etc/suricata/suricata.yaml
+
+Here are some basic settings to get you started.
+
+The "default-log-dir" keyword should point to the location of Suricata log files.
+
+    default-log-dir: /var/log/suricata/
+
+Under the "vars" section, you will find several important variables used by Suricata. "HOME_NET" should point to the local network to be inspected by Suricata. "!$HOME_NET" (assigned to EXTERNAL_NET) refers to any network other than the local network. "XXX_PORTS" indicates the port number(s) used by different services. Note that Suricata can automatically detect HTTP traffic regardless of the port it uses, so it is not critical to specify the HTTP_PORTS variable correctly.
+
+    vars:
+      HOME_NET: "[192.168.122.0/24]"
+      EXTERNAL_NET: "!$HOME_NET"
+      HTTP_PORTS: "80"
+      SHELLCODE_PORTS: "!80"
+      SSH_PORTS: 22
+
+The "host-os-policy" section is used to defend against some well-known attacks which exploit the behavior of an operating system's network stack (e.g., TCP reassembly) to evade detection. As a countermeasure, modern IDSes came up with so-called "target-based" inspection, where the inspection engine fine-tunes its detection algorithm based on the target operating system of the traffic. Thus, if you know what OS individual local hosts are running, you can feed that information to Suricata to potentially enhance its detection rate. This is where the "host-os-policy" section is used.
In this example, the default IDS policy is Linux; if no OS information is known for a particular IP address, Suricata will apply Linux-based inspection. When traffic for 192.168.122.0/28 and 192.168.122.155 is captured, Suricata will apply the Windows-based inspection policy.
+
+    host-os-policy:
+      # These are Windows machines.
+      windows: [192.168.122.0/28, 192.168.122.155]
+      bsd: []
+      bsd-right: []
+      old-linux: []
+      # Make the default policy Linux.
+      linux: [0.0.0.0/0]
+      old-solaris: []
+      solaris: ["::1"]
+      hpux10: []
+      hpux11: []
+      irix: []
+      macos: []
+      vista: []
+      windows2k3: []
+
+Under the "threading" section, you can specify CPU affinity for different Suricata threads. By default, [CPU affinity][5] is disabled ("set-cpu-affinity: no"), meaning that Suricata threads will be scheduled on any available CPU cores. Suricata will then create one "detect" thread for each CPU core by default. You can adjust this behavior by specifying "detect-thread-ratio: N". This will create N*M detect threads, where M is the total number of CPU cores on the host.
+
+    threading:
+      set-cpu-affinity: no
+      detect-thread-ratio: 1.5
+
+With the above threading settings, Suricata will create 1.5*M detection threads, where M is the total number of CPU cores on the system.
+
+For more information about Suricata configuration, you can read the default configuration file itself, which is heavily commented for clarity.
+
+### Perform Intrusion Detection with Suricata ###
+
+Now it's time to test-run Suricata. Before launching it, there's one more step to do.
+
+When you are using pcap capture mode, it is highly recommended to turn off any packet offload features (e.g., LRO/GRO) on the NIC that Suricata is listening on, as those features may interfere with live packet capture.
+
+Here is how to turn off LRO/GRO on the network interface eth0:
+
+    $ sudo ethtool -K eth0 gro off lro off
+
+Note that depending on your NIC, you may see the following warning, which you can ignore.
It simply means that your NIC does not support LRO.
+
+    Cannot change large-receive-offload
+
+Suricata supports a number of runmodes. A runmode determines how different threads are used for IDS. The following command lists all [available runmodes][6].
+
+    $ sudo /usr/local/bin/suricata --list-runmodes
+
+![](https://farm6.staticflickr.com/5730/20481140934_25080d04d7_c.jpg)
+
+The default runmode used by Suricata is autofp (which stands for "auto flow pinned load balancing"). In this mode, packets from each distinct flow are assigned to a single detect thread. Flows are assigned to the threads with the lowest number of unprocessed packets.
+
+Finally, let's start Suricata, and see it in action.
+
+    $ sudo /usr/local/bin/suricata -c /etc/suricata/suricata.yaml -i eth0 --init-errors-fatal
+
+![](https://farm1.staticflickr.com/701/21077552366_c577746e36_c.jpg)
+
+In this example, we are monitoring a network interface eth0 on an 8-core system. As shown above, Suricata creates 13 packet processing threads and 3 management threads. The packet processing threads consist of one PCAP packet capture thread and 12 detect threads (equal to 8*1.5). This means that the packets captured by one capture thread are load-balanced to 12 detect threads for IDS. The management threads are one flow manager and two counter/stats related threads.
+
+Here is a thread-view of the Suricata process (plotted by [htop][7]).
+
+![](https://farm6.staticflickr.com/5775/20482669593_174f8f41cb_c.jpg)
+
+Suricata detection logs are stored in the /var/log/suricata directory.
+
+    $ tail -f /var/log/suricata/fast.log
+
+----------
+
+    04/01/2015-15:47:12.559075  [**] [1:2200074:1] SURICATA TCPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {TCP} 172.16.253.158:22 -> 172.16.253.1:46997
+    04/01/2015-15:49:06.565901  [**] [1:2200074:1] SURICATA TCPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {TCP} 172.16.253.158:22 -> 172.16.253.1:46317
+    04/01/2015-15:49:06.566759  [**] [1:2200074:1] SURICATA TCPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {TCP} 172.16.253.158:22 -> 172.16.253.1:46317
+
+For ease of import, the log is also available in JSON format:
+
+    $ tail -f /var/log/suricata/eve.json
+
+----------
+
+    {"timestamp":"2015-04-01T15:49:06.565901","event_type":"alert","src_ip":"172.16.253.158","src_port":22,"dest_ip":"172.16.253.1","dest_port":46317,"proto":"TCP","alert":{"action":"allowed","gid":1,"signature_id":2200074,"rev":1,"signature":"SURICATA TCPv4 invalid checksum","category":"","severity":3}}
+    {"timestamp":"2015-04-01T15:49:06.566759","event_type":"alert","src_ip":"172.16.253.158","src_port":22,"dest_ip":"172.16.253.1","dest_port":46317,"proto":"TCP","alert":{"action":"allowed","gid":1,"signature_id":2200074,"rev":1,"signature":"SURICATA TCPv4 invalid checksum","category":"","severity":3}}
+
+### Conclusion ###
+
+In this tutorial, I demonstrated how you can set up Suricata IDS on a multi-core Linux server. Unlike the single-threaded [Snort IDS][8], Suricata can easily benefit from multi-core/many-core hardware with multi-threading. There is a great deal of customization available in Suricata to maximize its performance and detection coverage. The Suricata folks maintain an [online Wiki][9] quite well, so I strongly recommend you check it out if you want to deploy Suricata in your environment.
+
+Are you currently using Suricata? If so, feel free to share your experience.
+ +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/install-suricata-intrusion-detection-system-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://suricata-ids.org/ +[2]:https://www.bro.org/ +[3]:http://suricata-ids.org/download/ +[4]:http://rules.emergingthreats.net/ +[5]:http://xmodulo.com/run-program-process-specific-cpu-cores-linux.html +[6]:https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Runmodes +[7]:http://ask.xmodulo.com/view-threads-process-linux.html +[8]:http://xmodulo.com/how-to-compile-and-install-snort-from-source-code-on-ubuntu.html +[9]:https://redmine.openinfosecfoundation.org/projects/suricata/wiki \ No newline at end of file From e9f7ade861d0deb3043807f507f3afe306cdb76b Mon Sep 17 00:00:00 2001 From: cvsher <478990879@qq.com> Date: Sun, 6 Sep 2015 15:58:24 +0800 Subject: [PATCH 438/697] Update 20150906 How To Set Up Your FTP Server In Linux.md --- .../tech/20150906 How To Set Up Your FTP Server In Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md b/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md index a3e2096359..718539b7a1 100644 --- a/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md +++ b/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md @@ -1,3 +1,4 @@ +translating by cvsher How To Set Up Your FTP Server In Linux ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Setup-FTP-Server-in-Linux.jpg) @@ -99,4 +100,4 @@ via: http://itsfoss.com/set-ftp-server-linux/ [a]:http://itsfoss.com/author/ali/ 
[1]:https://en.wikipedia.org/wiki/File_Transfer_Protocol [2]:https://security.appspot.com/vsftpd.html -[3]:http://www.openssh.com/ \ No newline at end of file +[3]:http://www.openssh.com/ From cff69fea0e9f67f4bda119e34f9098f912d75545 Mon Sep 17 00:00:00 2001 From: cvsher <478990879@qq.com> Date: Sun, 6 Sep 2015 16:03:53 +0800 Subject: [PATCH 439/697] Update 20150906 How To Set Up Your FTP Server In Linux.md --- .../tech/20150906 How To Set Up Your FTP Server In Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md b/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md index a3e2096359..718539b7a1 100644 --- a/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md +++ b/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md @@ -1,3 +1,4 @@ +translating by cvsher How To Set Up Your FTP Server In Linux ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Setup-FTP-Server-in-Linux.jpg) @@ -99,4 +100,4 @@ via: http://itsfoss.com/set-ftp-server-linux/ [a]:http://itsfoss.com/author/ali/ [1]:https://en.wikipedia.org/wiki/File_Transfer_Protocol [2]:https://security.appspot.com/vsftpd.html -[3]:http://www.openssh.com/ \ No newline at end of file +[3]:http://www.openssh.com/ From 7b383ef98a19e92e61b0e50980a4179407e46d10 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Sun, 6 Sep 2015 16:26:33 +0800 Subject: [PATCH 440/697] =?UTF-8?q?20150906-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r-friendly command line shell for Linux.md | 60 +++++ ... How to Configure OpenNMS on CentOS 7.x.md | 219 ++++++++++++++++++ ...tall DNSCrypt and Unbound in Arch Linux.md | 174 ++++++++++++++ ... 
to Install QGit Viewer in Ubuntu 14.04.md | 113 +++++++++
 ....9.0 Winamp-like Audio Player in Ubuntu.md | 72 ++++++
 ...ple in Ubuntu or Elementary OS via NaSC.md | 62 +++++
 6 files changed, 700 insertions(+)
 create mode 100644 sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md
 create mode 100644 sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md
 create mode 100644 sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md
 create mode 100644 sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md
 create mode 100644 sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md
 create mode 100644 sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md

diff --git a/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md b/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md
new file mode 100644
index 0000000000..180616e2d2
--- /dev/null
+++ b/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md
@@ -0,0 +1,60 @@
+FISH – A smart and user-friendly command line shell for Linux
+================================================================================
+FISH stands for the friendly interactive shell. fish is a user-friendly command line shell intended mostly for interactive use. A shell is a program used to execute other programs.
+
+### FISH Features ###
+
+#### Autosuggestions ####
+
+fish suggests commands as you type based on history and completions, just like a web browser. Watch out, Netscape Navigator 4.0!
+
+#### Glorious VGA Color ####
+
+fish natively supports term256, the state of the art in terminal technology. You'll have an astonishing 256 colors available for use!
+
+#### Sane Scripting ####
+
+fish is fully scriptable, and its syntax is simple, clean, and consistent. You'll never write esac again.
+ +#### Web Based configuration #### + +For those lucky few with a graphical computer, you can set your colors and view functions, variables, and history all from a web page. + +#### Man Page Completions #### + +Other shells support programmable completions, but only fish generates them automatically by parsing your installed man pages. + +#### Works Out Of The Box #### + +fish will delight you with features like tab completions and syntax highlighting that just work, with nothing new to learn or configure. + +### Install FISH On ubuntu 15.04 ### + +Open the terminal and run the following commands + + sudo apt-add-repository ppa:fish-shell/release-2 + sudo apt-get update + sudo apt-get install fish + +**Using FISH** + +Open the terminal and run the following command to start FISH + + fish + +Welcome to fish, the friendly interactive shell Type help for instructions on how to use fish + +Check [FISH Documentation][1] How to use. + +-------------------------------------------------------------------------------- + +via: http://www.ubuntugeek.com/fish-a-smart-and-user-friendly-command-line-shell-for-linux.html + +作者:[ruchi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.ubuntugeek.com/author/ubuntufix +[1]:http://fishshell.com/docs/current/index.html#introduction \ No newline at end of file diff --git a/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md b/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md new file mode 100644 index 0000000000..c7810d06ef --- /dev/null +++ b/sources/tech/20150906 How to Configure OpenNMS on CentOS 7.x.md @@ -0,0 +1,219 @@ +How to Configure OpenNMS on CentOS 7.x +================================================================================ +Systems management and monitoring services are very important that provides information to view important systems management 
information, allowing us to make decisions based on it. To keep the network running at its best and to minimize downtime, we need to monitor and improve application performance. So, in this article we will walk you through the step-by-step procedure to set up OpenNMS in your IT infrastructure. OpenNMS is a free, open source, enterprise-level network monitoring and management platform that provides the information we need to make decisions about future network and capacity planning.
+
+OpenNMS is designed to manage tens of thousands of devices from a single server, and can manage an unlimited number of devices using a cluster of servers. It includes a discovery engine to automatically configure and manage network devices without operator intervention. It is written in Java and is published under the GNU General Public License. OpenNMS is known for its scalability, with its main functional areas being service monitoring, data collection using SNMP, and event management and notifications.
+
+### Installing OpenNMS RPM Repository ###
+
+We will start with the installation of the OpenNMS RPM repository on our CentOS 7.1 operating system, as it is available for most RPM-based distributions through Yum at the official link http://yum.opennms.org/ .
+
+![OpenNMS RPM](http://blog.linoxide.com/wp-content/uploads/2015/08/18.png)
+
+Then open the command line interface of CentOS 7.1, log in with root credentials, and run the below command with “wget” to get the required RPM.
+
+    [root@open-nms ~]# wget http://yum.opennms.org/repofiles/opennms-repo-stable-rhel7.noarch.rpm
+
+![Download RPM](http://blog.linoxide.com/wp-content/uploads/2015/08/26.png)
+
+Now we need to install this repository so that the OpenNMS package information becomes available through yum for installation. Let’s run the command below with the same root-level credentials to do so.
+
+    [root@open-nms ~]# rpm -Uvh opennms-repo-stable-rhel7.noarch.rpm
+
+![Installing RPM](http://blog.linoxide.com/wp-content/uploads/2015/08/36.png)
+
+### Installing Prerequisite Packages for OpenNMS ###
+
+Now, before we start the installation of OpenNMS, let’s make sure you’ve completed the following prerequisites.
+
+**Install JDK 7**
+
+It is recommended that you install the latest stable Java 7 JDK from Oracle for the best performance; an OpenJDK build is also available in the YUM repositories as a fallback. To use Oracle's build, go to the Oracle Java 7 SE JDK download page, accept the license if you agree, and choose the platform and architecture. Once it has finished downloading, execute it from the command line and then install the resulting JDK rpm.
+
+Otherwise, run the below command to install OpenJDK using Yum from the available system repositories.
+
+    [root@open-nms ~]# yum install java-1.7.0-openjdk-1.7.0.85-2.6.1.2.el7_1
+
+Once you have installed Java, you can confirm the installation and check the installed version using the below command.
+
+    [root@open-nms ~]# java -version
+
+![Java version](http://blog.linoxide.com/wp-content/uploads/2015/08/46.png)
+
+**Install PostgreSQL**
+
+Now we will install PostgreSQL, which is required to set up the database for OpenNMS. PostgreSQL is included in all of the major YUM-based distributions. To install it, simply run the below command.
+
+    [root@open-nms ~]# yum install postgresql postgresql-server
+
+![Installing Postgresql](http://blog.linoxide.com/wp-content/uploads/2015/08/55.png)
+
+### Prepare the Database for OpenNMS ###
+
+Once you have installed PostgreSQL, you'll need to make sure that it is up and active. Let’s run the below commands to first initialize the database and then start its service.
+
+    [root@open-nms ~]# /sbin/service postgresql initdb
+    [root@open-nms ~]# /sbin/service postgresql start
+
+![start DB](http://blog.linoxide.com/wp-content/uploads/2015/08/64.png)
+
+Now, to confirm the status of your PostgreSQL database, you can run the below command.
+
+    [root@open-nms ~]# service postgresql status
+
+![PostgreSQL status](http://blog.linoxide.com/wp-content/uploads/2015/08/74.png)
+
+To ensure that PostgreSQL will start after a reboot, enable it on bootup using the “systemctl” command as shown below.
+
+    [root@open-nms ~]# systemctl enable postgresql
+    ln -s '/usr/lib/systemd/system/postgresql.service' '/etc/systemd/system/multi-user.target.wants/postgresql.service'
+
+### Configure PostgreSQL ###
+
+Locate the Postgres “data” directory; it is usually /var/lib/pgsql/data. Open the postgresql.conf file in a text editor and configure the following parameters as shown.
+
+    [root@open-nms ~]# vim /var/lib/pgsql/data/postgresql.conf
+
+----------
+
+    #------------------------------------------------------------------------------
+    # CONNECTIONS AND AUTHENTICATION
+    #------------------------------------------------------------------------------
+
+    listen_addresses = 'localhost'
+    max_connections = 256
+
+    #------------------------------------------------------------------------------
+    # RESOURCE USAGE (except WAL)
+    #------------------------------------------------------------------------------
+
+    shared_buffers = 1024MB
+
+**User Access to the Database**
+
+PostgreSQL only allows you to connect if you are logged in to the local account name that matches the PostgreSQL user. Since OpenNMS runs as root, it cannot connect as the "postgres" or "opennms" user by default, so we have to change the configuration to allow user access to the database by opening the below configuration file.
+
+    [root@open-nms ~]# vim /var/lib/pgsql/data/pg_hba.conf
+
+Update the configuration file as shown below, changing the METHOD settings from "ident" to "trust".
+
+![user access to db](http://blog.linoxide.com/wp-content/uploads/2015/08/84.png)
+
+Write and quit the file to save the changes, then restart the PostgreSQL service.
+
+    [root@open-nms ~]# service postgresql restart
+
+### Starting OpenNMS Installation ###
+
+Now we are ready to go ahead with the installation of OpenNMS, as we are almost done with its prerequisites. The YUM packaging system will download and install all of the required components and their dependencies if they are not already installed on your system.
+So let's run the below command to start the OpenNMS installation; it will pull in everything you need for a working OpenNMS, including the OpenNMS core, the web UI, and a set of common plugins.
+
+    [root@open-nms ~]# yum -y install opennms
+
+![OpenNMS Installation](http://blog.linoxide.com/wp-content/uploads/2015/08/93.png)
+
+The above command ends with a successful installation of OpenNMS and its dependent packages.
+
+### Configure JAVA for OpenNMS ###
+
+In order to integrate the default version of Java with OpenNMS, we will run the below command.
+
+    [root@open-nms ~]# /opt/opennms/bin/runjava -s
+
+![java integration](http://blog.linoxide.com/wp-content/uploads/2015/08/102.png)
+
+### Run the OpenNMS installer ###
+
+Now it's time to run the OpenNMS installer, which creates and configures the OpenNMS database; the same command is also used when we want to update it to the latest version. To do so, we will run the following command.
+
+    [root@open-nms ~]# /opt/opennms/bin/install -dis
+
+The above install command takes the following options.
+ +-d - to update the database +-i - to insert any default data that belongs in the database +-s - to create or update the stored procedures OpenNMS uses for certain kinds of data access + + ============================================================================== + OpenNMS Installer + ============================================================================== + + Configures PostgreSQL tables, users, and other miscellaneous settings. + + DEBUG: Platform is IPv6 ready: true + - searching for libjicmp.so: + - trying to load /usr/lib64/libjicmp.so: OK + - searching for libjicmp6.so: + - trying to load /usr/lib64/libjicmp6.so: OK + - searching for libjrrd.so: + - trying to load /usr/lib64/libjrrd.so: OK + - using SQL directory... /opt/opennms/etc + - using create.sql... /opt/opennms/etc/create.sql + 17:27:51.178 [Main] INFO org.opennms.core.schema.Migrator - PL/PgSQL call handler exists + 17:27:51.180 [Main] INFO org.opennms.core.schema.Migrator - PL/PgSQL language exists + - checking if database "opennms" is unicode... ALREADY UNICODE + - Creating imports directory (/opt/opennms/etc/imports... OK + - Checking for old import files in /opt/opennms/etc... DONE + INFO 16/08/15 17:27:liquibase: Reading from databasechangelog + Installer completed successfully! 
+
+    ==============================================================================
+    OpenNMS Upgrader
+    ==============================================================================
+
+    OpenNMS is currently stopped
+    Found upgrade task SnmpInterfaceRrdMigratorOnline
+    Found upgrade task KscReportsMigrator
+    Found upgrade task JettyConfigMigratorOffline
+    Found upgrade task DataCollectionConfigMigratorOffline
+    Processing RequisitionsMigratorOffline: Remove non-ip-snmp-primary and non-ip-interfaces from requisitions: NMS-5630, NMS-5571
+    - Running pre-execution phase
+    Backing up: /opt/opennms/etc/imports
+    - Running post-execution phase
+    Removing backup /opt/opennms/etc/datacollection.zip
+
+    Finished in 0 seconds
+
+    Upgrade completed successfully!
+
+### Firewall configurations to Allow OpenNMS ###
+
+Here we have to allow the OpenNMS management interface port 8980 through the firewall or router so that the management web interface can be accessed from remote systems. Use the following commands to do so.
+
+    [root@open-nms etc]# firewall-cmd --permanent --add-port=8980/tcp
+    [root@open-nms etc]# firewall-cmd --reload
+
+### Start OpenNMS and Login to Web Interface ###
+
+Let's start the OpenNMS service and enable it to start at each boot by using the below commands.
+
+    [root@open-nms ~]# systemctl start opennms
+    [root@open-nms ~]# systemctl enable opennms
+
+Once the services are up, we are ready to use the web management interface. Open your web browser and access it using your server's IP address and port 8980.
+
+http://servers_ip:8980/
+
+Enter the username and password; the defaults are admin/admin.
+
+![opennms login](http://blog.linoxide.com/wp-content/uploads/2015/08/opennms-login.png)
+
+After successful authentication with your username and password, you will be directed to the Home page of OpenNMS, where you can configure new monitoring devices/nodes/services, etc.
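Once the service is up, a scripted reachability check can confirm that port 8980 is actually answering before you reach for a browser. This is only a sketch; the helper name and the example address are my own, not part of OpenNMS.

```shell
# Succeeds if the given host accepts a TCP connection on the OpenNMS
# web port (8980 by default). Uses bash's /dev/tcp, so no extra tools.
opennms_port_open() {
    host="$1"
    port="${2:-8980}"
    timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Usage (example address -- replace with your server's IP):
# opennms_port_open 192.168.1.10 && echo "OpenNMS is listening on 8980"
```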
+
+![opennms home](http://blog.linoxide.com/wp-content/uploads/2015/08/opennms-home.png)
+
+### Conclusion ###
+
+Congratulations! We have successfully set up OpenNMS on CentOS 7.1. At the end of this tutorial, you are now able to install and configure OpenNMS along with its prerequisites, which included the PostgreSQL and Java setup. So enjoy this great network monitoring system with open source roots; OpenNMS provides a bevy of features at no cost compared to its high-end competitors, and can scale to monitor large numbers of network nodes.
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/monitoring-2/install-configure-opennms-centos-7-x/
+
+作者:[Kashif Siddique][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
\ No newline at end of file
diff --git a/sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md b/sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md
new file mode 100644
index 0000000000..98cb0e9b55
--- /dev/null
+++ b/sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md
@@ -0,0 +1,174 @@
+How to Install DNSCrypt and Unbound in Arch Linux
+================================================================================
+**DNSCrypt** is a protocol that encrypts and authenticates communications between a DNS client and a DNS resolver, preventing DNS spoofing and man-in-the-middle attacks. DNSCrypt is available for most operating systems, including Linux, Windows, Mac OS X, Android and iOS. In this tutorial I'm using Arch Linux with kernel 4.1.
+
+Unbound is a caching DNS server used to resolve any DNS query it receives.
If the user makes a new query, unbound stores the answer in its cache, and when the user makes the same query a second time, unbound serves it from the saved cache. This is much faster than the first query.
+
+Now I will try to install "DNSCrypt" to secure the DNS communication, and make it faster with the DNS cache "Unbound".
+
+### Step 1 - Install yaourt ###
+
+Yaourt is an AUR (Arch User Repository) helper that makes it easy for Arch Linux users to install programs from the AUR. Yaourt uses the same syntax as pacman, so you can install programs with yaourt. Here is an easy way to install yaourt:
+
+1. Edit the Arch repository configuration file, "/etc/pacman.conf", with nano or vi.
+
+    $ nano /etc/pacman.conf
+
+2. Add the yaourt repository at the bottom; just paste the lines below:
+
+    [archlinuxfr]
+    SigLevel = Never
+    Server = http://repo.archlinux.fr/$arch
+
+3. Save it by pressing "Ctrl + x" and then "Y".
+
+4. Now update the repository database and install yaourt with the pacman command:
+
+    $ sudo pacman -Sy yaourt
+
+### Step 2 - Install DNSCrypt and Unbound ###
+
+DNSCrypt and unbound are available in the Arch Linux repository, so you can install them with the pacman command:
+
+    $ sudo pacman -S dnscrypt-proxy unbound
+
+Wait for it, and press "Y" to proceed with the installation.
+
+### Step 3 - Install dnscrypt-autoinstall ###
+
+Dnscrypt-autoinstall is a script for installing and automatically configuring DNSCrypt on Linux-based systems. Dnscrypt-autoinstall is available in the AUR (Arch User Repository), so you must use the "yaourt" command to install it:
+
+    $ yaourt -S dnscrypt-autoinstall
+
+Note :
+
+-S = the same as pacman -S, to install a software/program.
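The manual edit in Step 1 can also be done non-interactively with a heredoc instead of an editor. The sketch below writes to a scratch file name of my choosing so the result can be inspected first; append the same block to /etc/pacman.conf as root once it looks right.

```shell
# Append the [archlinuxfr] repository entry without opening nano.
# The quoted 'EOF' keeps $arch literal -- pacman expands it itself.
conf="pacman.conf.demo"
cat >> "$conf" <<'EOF'

[archlinuxfr]
SigLevel = Never
Server = http://repo.archlinux.fr/$arch
EOF

grep -q '^\[archlinuxfr\]' "$conf" && echo "repository entry added"
```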
+
+### Step 4 - Run dnscrypt-autoinstall ###
+
+Run the command "dnscrypt-autoinstall" with root privileges to configure DNSCrypt automatically:
+
+    $ sudo dnscrypt-autoinstall
+
+Press "Enter" for the next configuration, then type "y" and choose the DNS provider you want to use; here I use DNSCrypt.eu, which features no logs and DNSSEC.
+
+![DNSCrypt autoinstall](http://blog.linoxide.com/wp-content/uploads/2015/08/DNSCrypt-autoinstall.png)
+
+### Step 5 - Configure DNSCrypt and Unbound ###
+
+1. Open the dnscrypt configuration file "/etc/conf.d/dnscrypt-config" and make sure "DNSCRYPT_LOCALIP" points to the **localhost IP**; the port set in "DNSCRYPT_LOCALPORT" is up to you, here I use port **40**.
+
+    $ nano /etc/conf.d/dnscrypt-config
+
+    DNSCRYPT_LOCALIP=127.0.0.1
+    DNSCRYPT_LOCALIP2=127.0.0.2
+    DNSCRYPT_LOCALPORT=40
+
+![DNSCrypt Configuration](http://blog.linoxide.com/wp-content/uploads/2015/08/DNSCryptConfiguration.png)
+
+Save and exit.
+
+2. Now edit the unbound configuration in "/etc/unbound/". Open the configuration file with the nano editor:
+
+    $ nano /etc/unbound/unbound.conf
+
+3. Add the following lines at the end of the file:
+
+    do-not-query-localhost: no
+    forward-zone:
+    name: "."
+    forward-addr: 127.0.0.1@40
+
+Make sure the "**forward-addr**" port is the same as the "**DNSCRYPT_LOCALPORT**" setting in DNSCrypt. As you can see, I use port **40**.
+
+![Unbound Configuration](http://blog.linoxide.com/wp-content/uploads/2015/08/UnboundConfiguration.png)
+
+Then save and exit.
+
+### Step 6 - Run DNSCrypt and Unbound, then Add to startup/Boot ###
+
+Run DNSCrypt and unbound with root privileges; you can do this with the systemctl command:
+
+    $ sudo systemctl start dnscrypt-proxy unbound
+
+Add the services at boot time/startup.
You can do it by running "systemctl enable":
+
+    $ sudo systemctl enable dnscrypt-proxy unbound
+
+The command creates symlinks in the "/etc/systemd/system/multi-user.target.wants/" directory pointing to the unit files in "/usr/lib/systemd/system/".
+
+### Step 7 - Configure resolv.conf and restart all services ###
+
+Resolv.conf is a file used by Linux to configure the Domain Name System (DNS) resolver. It is just plain text created by the administrator, so you must edit it with root privileges and then make it immutable so that no one can modify it.
+
+Edit it with the nano editor:
+
+    $ nano /etc/resolv.conf
+
+and add the localhost IP "**127.0.0.1**". Now make the file immutable with the "chattr" command:
+
+    $ chattr +i /etc/resolv.conf
+
+Note :
+
+If you want to edit it again, make it writable with the command "chattr -i /etc/resolv.conf".
+
+Now you need to restart DNSCrypt, unbound and the network:
+
+    $ sudo systemctl restart dnscrypt-proxy unbound netctl
+
+If you see an error, check your configuration files.
+
+### Testing ###
+
+1. Test DNSCrypt
+
+You can make sure DNSCrypt is working correctly by visiting https://dnsleaktest.com/, clicking on "Standard Test" or "Extended Test", and waiting for the process to finish.
+
+And now you can see that DNSCrypt is working with DNSCrypt.eu as your DNS provider.
+
+![Testing DNSCrypt](http://blog.linoxide.com/wp-content/uploads/2015/08/TestingDNSCrypt.png)
+
+2. Test Unbound
+
+Now you should ensure that unbound is working correctly with the "dig" or "drill" command.
+
+These are the results for the dig command:
+
+    $ dig linoxide.com
+
+See in the results that the "Query time" is "533 msec":
+
+    ;; Query time: 533 msec
+    ;; SERVER: 127.0.0.1#53(127.0.0.1)
+    ;; WHEN: Sun Aug 30 14:48:19 WIB 2015
+    ;; MSG SIZE rcvd: 188
+
+Try again with the same command, and you will see the "Query time" is "0 msec".
+
+    ;; Query time: 0 msec
+    ;; SERVER: 127.0.0.1#53(127.0.0.1)
+    ;; WHEN: Sun Aug 30 14:51:05 WIB 2015
+    ;; MSG SIZE rcvd: 188
+
+![Unbound Test](http://blog.linoxide.com/wp-content/uploads/2015/08/UnboundTest.png)
+
+In the end, DNSCrypt is securing communications between the DNS client and the DNS resolver perfectly, and Unbound makes repeated requests faster by serving them from its saved cache.
+
+### Conclusion ###
+
+DNSCrypt is a protocol that encrypts the data flow between the DNS client and the DNS resolver, and it can run on various operating systems, both mobile and desktop. Choosing a DNS provider also matters: pick one that provides DNSSEC and keeps no logs. Unbound can be used as a DNS cache, speeding up the resolving process, because Unbound stores each answer in its cache and serves the same query from that cache the next time a client asks. DNSCrypt and Unbound are a powerful combination for safety and speed.
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/tools/install-dnscrypt-unbound-archlinux/
+
+作者:[Arul][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arulm/
\ No newline at end of file
diff --git a/sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md b/sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md
new file mode 100644
index 0000000000..0c18bd4c99
--- /dev/null
+++ b/sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md
@@ -0,0 +1,113 @@
+How to Install QGit Viewer in Ubuntu 14.04
+================================================================================
+QGit is a free and Open Source GUI git viewer written in Qt and C++ by Marco Costalba.
It is a handy git viewer that lets us browse revision history, view commits, and view the patches applied to files in a simple GUI environment. It uses the git command line to execute commands and display the output. It has common features such as viewing revisions, diffs, file history, file annotation and the archive tree. With QGit Viewer we can format and apply patch series from selected commits, drag and drop commits between two instances, and more. Using its built-in Action Builder, it also allows us to create custom buttons that execute a specific command when pressed.
+
+Here are some easy steps for compiling and installing QGit Viewer from its source code in Ubuntu 14.04 LTS "Trusty".
+
+### 1. Installing QT4 Libraries ###
+
+First of all, we'll need the QT4 libraries installed in order to run QGit viewer on our Ubuntu machine. As apt is the default package manager of Ubuntu, and the QT4 packages are available in the official repository of Ubuntu, we'll install qt4-default using the apt-get command as shown below.
+
+    $ sudo apt-get install qt4-default
+
+### 2. Downloading QGit Tarball ###
+
+After installing the Qt4 libraries, we'll install git so that we can clone the Git repository of QGit Viewer for Qt 4. To do so, we'll run the following apt-get command.
+
+    $ sudo apt-get install git
+
+Now, we'll clone the repository using the git command as shown below.
+
+    $ git clone git://repo.or.cz/qgit4/redivivus.git
+
+    Cloning into 'redivivus'...
+    remote: Counting objects: 7128, done.
+    remote: Compressing objects: 100% (2671/2671), done.
+    remote: Total 7128 (delta 5464), reused 5711 (delta 4438)
+    Receiving objects: 100% (7128/7128), 2.39 MiB | 470.00 KiB/s, done.
+    Resolving deltas: 100% (5464/5464), done.
+    Checking connectivity... done.
+
+### 3. Compiling QGit ###
+
+After we have cloned the repository, we'll enter the directory named redivivus and create the makefile we need to compile qgit viewer. To enter the directory, we'll run the following command.
+
+    $ cd redivivus
+
+Next, we'll run the following command to generate a new Makefile from the qmake project file, i.e. qgit.pro.
+
+    $ qmake qgit.pro
+
+After the Makefile has been generated, we can finally compile the qgit source code and get the binary as output. To do so, we first need to install the make and g++ packages so that we can compile, as it is a program written in C++.
+
+    $ sudo apt-get install make g++
+
+Now, we'll compile the code using the make command.
+
+    $ make
+
+### 4. Installing QGit ###
+
+As we have successfully compiled the source code of QGit viewer, we'll now install it on our Ubuntu 14.04 machine so that we can run it from our system. To do so, we'll run the following command.
+
+    $ sudo make install
+
+    cd src/ && make -f Makefile install
+    make[1]: Entering directory `/home/arun/redivivus/src'
+    make -f Makefile.Release install
+    make[2]: Entering directory `/home/arun/redivivus/src'
+    install -m 755 -p "../bin/qgit" "/usr/lib/x86_64-linux-gnu/qt4/bin/qgit"
+    strip "/usr/lib/x86_64-linux-gnu/qt4/bin/qgit"
+    make[2]: Leaving directory `/home/arun/redivivus/src'
+    make[1]: Leaving directory `/home/arun/redivivus/src'
+
+Next, we'll need to copy the built qgit binary from the bin directory to the /usr/bin/ directory so that it will be available as a global command.
+
+    $ sudo cp bin/qgit /usr/bin/
+
+### 5. Creating Desktop File ###
+
+As we have successfully installed qgit in our Ubuntu box, we'll now create a desktop file so that QGit will be available in the menu or launcher of our desktop environment. To do so, we'll need to create a new file named qgit.desktop under the /usr/share/applications/ directory.
+ + $ sudo nano /usr/share/applications/qgit.desktop + +Then, we'll need to paste the following lines into the file. + + [Desktop Entry] + Name=qgit + GenericName=git GUI viewer + Exec=qgit + Icon=qgit + Type=Application + Comment=git GUI viewer + Terminal=false + MimeType=inode/directory; + Categories=Qt;Development;RevisionControl; + +After done, we'll simply save the file and exit. + +### 6. Running QGit Viewer ### + +After QGit is installed successfully in our Ubuntu box, we can now run it from any launcher or application menu. In order to run QGit from the terminal, we'll need to run as follows. + + $ qgit + +This will open the Qt4 Framework based QGit Viewer in GUI mode. + +![QGit Viewer](http://blog.linoxide.com/wp-content/uploads/2015/07/qgit-viewer.png) + +### Conclusion ### + +QGit is really an awesome QT based git viewer. It is available on all three platforms Linux, Mac OSX and Microsoft Windows. It helps us to easily navigate to the history, revisions, branches and more from the available git repository. It reduces the need of running git command line for the common stuffs like viewing revisions, history, diff, etc as graphical interface of it makes easy to do tasks. The latest version of qgit is also available in the default repository of ubuntu which we can install using **apt-get install qgit** command. So, qgit makes our work pretty fast and easy to do with its simple GUI. 
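The desktop entry from step 5 can likewise be written in one shot with a heredoc instead of nano. As a sketch, it writes to the current directory first (my choice, for easy review) and is then copied into place as root:

```shell
# Recreate the qgit.desktop entry from step 5 non-interactively.
cat > qgit.desktop <<'EOF'
[Desktop Entry]
Name=qgit
GenericName=git GUI viewer
Exec=qgit
Icon=qgit
Type=Application
Comment=git GUI viewer
Terminal=false
MimeType=inode/directory;
Categories=Qt;Development;RevisionControl;
EOF

grep -q '^Exec=qgit$' qgit.desktop && echo "desktop entry written"
# Then install it:
#   sudo cp qgit.desktop /usr/share/applications/
```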
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/ubuntu-how-to/install-qgit-viewer-ubuntu-14-04/ + +作者:[Arun Pyasi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ \ No newline at end of file diff --git a/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md b/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md new file mode 100644 index 0000000000..503d57fdbf --- /dev/null +++ b/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md @@ -0,0 +1,72 @@ +Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2015/01/qmmp-icon-simple.png) + +Qmmp, Qt-based audio player with winamp or xmms like user interface, now is at 0.9.0 release. PPA updated for Ubuntu 15.10, Ubuntu 15.04, Ubuntu 14.04, Ubuntu 12.04 and derivatives. + +Qmmp 0.9.0 is a big release with many new features, improvements and some translation updates. 
It added: + +- audio-channel sequence converter; +- 9 channels support to equalizer; +- album artist tag support; +- asynchronous sorting; +- sorting by file modification date; +- sorting by album artist; +- multiple column support; +- feature to hide track length; +- feature to disable plugins without qmmp.pri modification (qmake only) +- feature to remember playlist scroll position; +- feature to exclude cue data files; +- feature to change user agent; +- feature to change window title; +- feature to reset fonts; +- feature to restore default shortcuts; +- default hotkey for the “Rename List” action; +- feature to disable fadeout in the gme plugin; +- Simple User Interface (QSUI) with the following changes: + - added multiple column support; + - added sorting by album artist; + - added sorting by file modification date; + - added feature to hide song length; + - added default hotkey for the “Rename List” action; + - added “Save List” action to the tab menu; + - added feature to reset fonts; + - added feature to reset shortcuts; + - improved status bar; + +It also improved playlist changes notification, playlist container, sample rate converter, cmake build scripts, title formatter, ape tags support in the mpeg plugin, fileops plugin, reduced cpu usage, changed default skin (to Glare) and playlist separator. + +![qmmp-090](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-090.jpg) + +### Install Qmmp 0.9.0 in Ubuntu: ### + +New release has been made into PPA, available for all current Ubuntu releases and derivatives. + +1. To add the [Qmmp PPA][1]. + +Open terminal from the Dash, App Launcher, or via Ctrl+Alt+T shortcut keys. When it opens, run command: + + sudo add-apt-repository ppa:forkotov02/ppa + +![qmmp-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-ppa.jpg) + +2. After adding the PPA, upgrade Qmmp player through Software Updater. 
Or refresh system cache and install the software via below commands: + + sudo apt-get update + + sudo apt-get install qmmp qmmp-plugin-pack + +That’s it. Enjoy! + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/09/qmmp-0-9-0-in-ubuntu/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://launchpad.net/~forkotov02/+archive/ubuntu/ppa \ No newline at end of file diff --git a/sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md b/sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md new file mode 100644 index 0000000000..4abc41325f --- /dev/null +++ b/sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md @@ -0,0 +1,62 @@ +Make Math Simple in Ubuntu / Elementary OS via NaSC +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-icon.png) + +NaSC (Not a Soulver Clone) is an open source software designed for Elementary OS to do arithmetics. It’s kinda similar to the Mac app [Soulver][1]. + +> Its an app where you do maths like a normal person. It lets you type whatever you want and smartly figures out what is math and spits out an answer on the right pane. Then you can plug those answers in to future equations and if that answer changes, so does the equations its used in. 
+
+With NaSC you can, for example:
+
+- Perform calculations with variables you can define yourself
+- Convert units and values (m to cm, dollar to euro, …)
+- Find the surface area of a planet
+- Solve second-degree polynomials
+- and more …
+
+![nasc-eos](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-eos.jpg)
+
+On first launch, NaSC offers a tutorial that details its features. You can later click the help icon on the headerbar to learn more.
+
+![nasc-help](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-help.jpg)
+
+In addition, the software allows you to save your file in order to continue the work later. It can also be shared on Pastebin with a set expiration time.
+
+### Install NaSC in Ubuntu / Elementary OS Freya: ###
+
+For Ubuntu 15.04, Ubuntu 15.10 and Elementary OS Freya, open a terminal from the Dash or App Launcher and run the commands below one by one:
+
+1. Add the [NaSC PPA][2] via command:
+
+    sudo apt-add-repository ppa:nasc-team/daily
+
+![nasc-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-ppa.jpg)
+
+2. If you’ve installed Synaptic Package Manager, click the Reload button, then search for and install `nasc` via it.
+
+Or run the commands below to update the system cache and install the software:
+
+    sudo apt-get update
+
+    sudo apt-get install nasc
+
+3. **(Optional)** To remove the PPA as well as NaSC, run:
+
+    sudo apt-get remove nasc && sudo add-apt-repository -r ppa:nasc-team/daily
+
+For those who don’t want to add the PPA, grab the .deb package directly from [this page][3].
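For that direct-download route, the fetch-and-install steps look roughly like this. The version and architecture below are placeholders picked for illustration; check the pool page for the current filename.

```shell
# Sketch: compose the pool URL for a nasc .deb and install it with dpkg.
pool="http://ppa.launchpad.net/nasc-team/daily/ubuntu/pool/main/n/nasc"
deb_name() { printf 'nasc_%s_%s.deb' "$1" "$2"; }

# Example with a hypothetical version/arch:
#   wget "$pool/$(deb_name 0.4.5 amd64)"
#   sudo dpkg -i "$(deb_name 0.4.5 amd64)"
#   sudo apt-get -f install    # fix up any missing dependencies
echo "$pool/$(deb_name 0.4.5 amd64)"
```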
+ +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/09/make-math-simple-in-ubuntu-elementary-os-via-nasc/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:http://www.acqualia.com/soulver/ +[2]:https://launchpad.net/~nasc-team/+archive/ubuntu/daily/ +[3]:http://ppa.launchpad.net/nasc-team/daily/ubuntu/pool/main/n/nasc/ \ No newline at end of file From bba159333a0ff81bf91a5dfee08157e88b8c1989 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sun, 6 Sep 2015 20:27:34 +0800 Subject: [PATCH 441/697] [Translated]RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md --- ...ic Control Using FirewallD and Iptables.md | 193 ------------------ ...ic Control Using FirewallD and Iptables.md | 193 ++++++++++++++++++ 2 files changed, 193 insertions(+), 193 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md b/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md deleted file mode 100644 index 022953429d..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md +++ /dev/null @@ -1,193 +0,0 @@ -FSSlc Translating - -RHCSA Series: Firewall Essentials and Network Traffic Control Using FirewallD and Iptables – Part 11 
-================================================================================ -In simple words, a firewall is a security system that controls the incoming and outgoing traffic in a network based on a set of predefined rules (such as the packet destination / source or type of traffic, for example). - -![Control Network Traffic with FirewallD and Iptables](http://www.tecmint.com/wp-content/uploads/2015/05/Control-Network-Traffic-Using-Firewall.png) - -RHCSA: Control Network Traffic with FirewallD and Iptables – Part 11 - -In this article we will review the basics of firewalld, the default dynamic firewall daemon in Red Hat Enterprise Linux 7, and iptables service, the legacy firewall service for Linux, with which most system and network administrators are well acquainted, and which is also available in RHEL 7. - -### A Comparison Between FirewallD and Iptables ### - -Under the hood, both firewalld and the iptables service talk to the netfilter framework in the kernel through the same interface, not surprisingly, the iptables command. However, as opposed to the iptables service, firewalld can change the settings during normal system operation without existing connections being lost. - -Firewalld should be installed by default in your RHEL system, though it may not be running. You can verify with the following commands (firewall-config is the user interface configuration tool): - - # yum info firewalld firewall-config - -![Check FirewallD Information](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FirewallD-Information.png) - -Check FirewallD Information - -and, - - # systemctl status -l firewalld.service - -![Check FirewallD Status](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FirewallD-Status.png) - -Check FirewallD Status - -On the other hand, the iptables service is not included by default, but can be installed through. 
- - # yum update && yum install iptables-services - -Both daemons can be started and enabled to start on boot with the usual systemd commands: - - # systemctl start firewalld.service | iptables.service - # systemctl enable firewalld.service | iptables.service - -Read Also: [Useful Commands to Manage Systemd Services][1] - -As for the configuration files, the iptables service uses `/etc/sysconfig/iptables` (which will not exist if the package is not installed in your system). On a RHEL 7 box used as a cluster node, this file looks as follows: - -![Iptables Firewall Configuration](http://www.tecmint.com/wp-content/uploads/2015/05/Iptables-Rules.png) - -Iptables Firewall Configuration - -Whereas firewalld stores its configuration across two directories, `/usr/lib/firewalld` and `/etc/firewalld`: - - # ls /usr/lib/firewalld /etc/firewalld - -![FirewallD Configuration](http://www.tecmint.com/wp-content/uploads/2015/05/Firewalld-configuration.png) - -FirewallD Configuration - -We will examine these configuration files in more detail later in this article, after we add a few rules here and there. For now it will suffice to remind you that you can always find more information about both tools with: - - # man firewalld.conf - # man firewall-cmd - # man iptables - -Other than that, remember to take a look at [Reviewing Essential Commands & System Documentation – Part 1][2] of the current series, where I described several sources where you can get information about the packages installed on your RHEL 7 system. - -### Using Iptables to Control Network Traffic ### - -You may want to refer to [Configure Iptables Firewall – Part 8][3] of the Linux Foundation Certified Engineer (LFCE) series to refresh your memory about iptables internals before proceeding further. That way, we will be able to jump right into the examples.
- -**Example 1: Allowing both incoming and outgoing web traffic** - -TCP ports 80 and 443 are the default ports used by the Apache web server to handle normal (HTTP) and secure (HTTPS) web traffic. You can allow incoming and outgoing web traffic through both ports on the enp0s3 interface as follows: - - # iptables -A INPUT -i enp0s3 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT - # iptables -A OUTPUT -o enp0s3 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT - # iptables -A INPUT -i enp0s3 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT - # iptables -A OUTPUT -o enp0s3 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT - -**Example 2: Block all (or some) incoming connections from a specific network** - -There may be times when you need to block all (or some) types of traffic originating from a specific network, say 192.168.1.0/24 for example: - - # iptables -I INPUT -s 192.168.1.0/24 -j DROP - -will drop all packets coming from the 192.168.1.0/24 network, whereas, - - # iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 22 -j ACCEPT - -will only allow incoming TCP traffic through port 22 (note that the `--dport` match requires a protocol, hence the added `-p tcp`). - -**Example 3: Redirect incoming traffic to another destination** - -If you use your RHEL 7 box not only as a software firewall, but also as the actual hardware-based one, so that it sits between two distinct networks, IP forwarding must already be enabled in your system. If not, you need to edit `/etc/sysctl.conf` and set the value of net.ipv4.ip_forward to 1, as follows: - - net.ipv4.ip_forward = 1 - -then save the change, close your text editor, and finally run the following command to apply the change: - - # sysctl -p /etc/sysctl.conf - -For example, you may have a printer installed at an internal box with IP 192.168.0.10, with the CUPS service listening on port 631 (both on the print server and on your firewall).
In order to forward print requests from clients on the other side of the firewall, you should add the following iptables rule: - - # iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 631 -j DNAT --to 192.168.0.10:631 - -Please keep in mind that iptables reads its rules sequentially, so make sure the default policies or later rules do not override those outlined in the examples above. - -### Getting Started with FirewallD ### - -One of the changes introduced with firewalld is zones. This concept allows networks to be separated into different zones according to the level of trust the user has decided to place on the devices and traffic within each network. - -To list the active zones: - - # firewall-cmd --get-active-zones - -In the example below, the public zone is active, and the enp0s3 interface has been assigned to it automatically. To view all the information about a particular zone: - - # firewall-cmd --zone=public --list-all - -![List all FirewallD Zones](http://www.tecmint.com/wp-content/uploads/2015/05/View-FirewallD-Zones.png) - -List all FirewallD Zones - -Since you can read more about zones in the [RHEL 7 Security guide][4], we will only list some specific examples here. - -**Example 4: Allowing services through the firewall** - -To get a list of the supported services, use: - - # firewall-cmd --get-services - -![List All Supported Services](http://www.tecmint.com/wp-content/uploads/2015/05/List-All-Supported-Services.png) - -List All Supported Services - -To allow http and https web traffic through the firewall, effective immediately and on subsequent boots: - - # firewall-cmd --zone=MyZone --add-service=http - # firewall-cmd --zone=MyZone --permanent --add-service=http - # firewall-cmd --zone=MyZone --add-service=https - # firewall-cmd --zone=MyZone --permanent --add-service=https - # firewall-cmd --reload - -If `--zone` is omitted, the default zone (you can check with `firewall-cmd --get-default-zone`) is used.
- -To remove the rule, replace the word add with remove in the above commands. - -**Example 5: IP / Port forwarding** - -First off, you need to find out if masquerading is enabled for the desired zone: - - # firewall-cmd --zone=MyZone --query-masquerade - -In the image below, we can see that masquerading is enabled for the external zone, but not for public: - -![Check Masquerading Status in Firewalld](http://www.tecmint.com/wp-content/uploads/2015/05/Check-masquerading.png) - -Check Masquerading Status - -You can either enable masquerading for public: - - # firewall-cmd --zone=public --add-masquerade - -or use masquerading in external. Here’s what we would do to replicate Example 3 with firewalld: - - # firewall-cmd --zone=external --add-forward-port=port=631:proto=tcp:toport=631:toaddr=192.168.0.10 - -And don’t forget to reload the firewall. - -You can find further examples on [Part 9][5] of the RHCSA series, where we explained how to allow or disable the ports that are usually used by a web server and a ftp server, and how to change the corresponding rule when the default port for those services are changed. In addition, you may want to refer to the firewalld wiki for further examples. - -Read Also: [Useful FirewallD Examples to Configure Firewall in RHEL 7][6] - -### Conclusion ### - -In this article we have explained what a firewall is, what are the available services to implement one in RHEL 7, and provided a few examples that can help you get started with this task. If you have any comments, suggestions, or questions, feel free to let us know using the form below. Thank you in advance! 
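One closing note on Example 1: the INPUT/OUTPUT rule pairs there follow a single fixed pattern per port, so they can be generated instead of typed by hand. The sketch below reuses the article's iptables syntax verbatim, but the generator function itself (its name, arguments, and print-instead-of-execute behavior) is an assumption added here for illustration; review the output before feeding it to a root shell.

```shell
#!/bin/sh
# Emit the Example 1 style rule pair for each TCP port given,
# bound to the requested interface.  Commands are printed, not
# executed, so no privileges are needed to preview them.
emit_web_rules() {
    iface=$1
    shift
    for port in "$@"; do
        echo "iptables -A INPUT -i $iface -p tcp --dport $port -m state --state NEW,ESTABLISHED -j ACCEPT"
        echo "iptables -A OUTPUT -o $iface -p tcp --sport $port -m state --state ESTABLISHED -j ACCEPT"
    done
}

# Reproduce Example 1 (ports 80 and 443 on enp0s3):
emit_web_rules enp0s3 80 443
```

`emit_web_rules enp0s3 80 443 | sh` run as root would apply exactly the four rules shown in Example 1.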
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in-firewall/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/ -[2]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ -[3]:http://www.tecmint.com/configure-iptables-firewall/ -[4]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html -[5]:http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-and-ftp-in-rhel/ -[6]:http://www.tecmint.com/firewalld-rules-for-centos-7/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md b/translated/tech/RHCSA/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md new file mode 100644 index 0000000000..80e64c088d --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md @@ -0,0 +1,193 @@ +RHCSA 系列: 防火墙简要和使用 FirewallD 和 Iptables 来控制网络流量 – Part 11 +================================================================================ + +简单来说,防火墙就是一个基于一系列预先定义的规则(例如流量包的目的地或来源,流量的类型等)的安全系统,它控制着一个网络中的流入和流出流量。 + +![使用 FirewallD 和 Iptables 来控制网络流量](http://www.tecmint.com/wp-content/uploads/2015/05/Control-Network-Traffic-Using-Firewall.png) + +RHCSA: 使用 FirewallD 和 Iptables 来控制网络流量 – Part 11 + +在本文中,我们将回顾 firewalld 和 iptables 的基础知识。前者是 RHEL 7 中的默认动态防火墙守护进程,而后者则是针对 Linux 的传统的防火墙服务,大多数的系统和网络管理员都非常熟悉它,并且在 RHEL 7 中也可以获取到。 + +### FirewallD 和 Iptables 的一个比较 ### + +在后台, firewalld 和 iptables 
服务都通过相同的接口来与内核中的 netfilter 框架相交流,这不足为奇,即它们都通过 iptables 命令来与 netfilter 交互。然而,与 iptables 服务相反, firewalld 可以在不丢失现有连接的情况下,在正常的系统操作期间更改设定。 + +在默认情况下, firewalld 应该已经安装在你的 RHEL 系统中了,尽管它可能没有在运行。你可以使用下面的命令来确认(firewall-config 是用户界面配置工具): + + # yum info firewalld firewall-config + +![检查 FirewallD 的信息](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FirewallD-Information.png) + +检查 FirewallD 的信息 + +以及, + + # systemctl status -l firewalld.service + +![检查 FirewallD 的状态](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FirewallD-Status.png) + +检查 FirewallD 的状态 + +另一方面, iptables 服务在默认情况下没有被包含在 RHEL 系统中,但可以被安装上。 + + # yum update && yum install iptables-services + +这两个守护进程都可以使用常规的 systemd 命令来在开机时被启动和开启: + + # systemctl start firewalld.service | iptables-service.service + # systemctl enable firewalld.service | iptables-service.service + +另外,请阅读:[管理 Systemd 服务的实用命令][1] (注: 本文已被翻译发表,在 https://linux.cn/article-5926-1.html) + +至于配置文件, iptables 服务使用 `/etc/sysconfig/iptables` 文件(假如这个软件包在你的系统中没有被安装,则这个文件将不存在)。在一个被用作集群节点的 RHEL 7 机子上,这个文件长得像这样: + +![Iptables 防火墙配置文件](http://www.tecmint.com/wp-content/uploads/2015/05/Iptables-Rules.png) + +Iptables 防火墙配置文件 + +而 firewalld 则在两个目录中存储它的配置文件,即 `/usr/lib/firewalld` 和 `/etc/firewalld`: + + # ls /usr/lib/firewalld /etc/firewalld + +![FirewallD 的配置文件](http://www.tecmint.com/wp-content/uploads/2015/05/Firewalld-configuration.png) + +FirewallD 的配置文件 + +在这篇文章中后面,我们将进一步查看这些配置文件,在那之后,我们将在各处添加一些规则。 +现在,是时候提醒你了,你总可以使用下面的命令来找到更多有关这两个工具的信息。 + + # man firewalld.conf + # man firewall-cmd + # man iptables + +除了这些,记得查看一下当前系列的第一篇 [RHCSA 系列(一): 回顾基础命令及系统文档][2](注: 本文已被翻译发表,在 https://linux.cn/article-6133-1.html ),在其中我描述了几种渠道来得到安装在你的 RHEL 7 系统上的软件包的信息。 + +### 使用 Iptables 来控制网络流量 ### + +在进一步深入之前,或许你需要参考 Linux 基金会认证工程师(Linux Foundation Certified Engineer,LFCE) 系列中的 [配置 Iptables 防火墙 – Part 8][3] 来复习你脑中有关 iptables 的知识。 + +**例 1:同时允许流入和流出的网络流量** + +TCP 端口 80 和 443 是 Apache web 服务器使用的用来处理常规(HTTP) 和安全(HTTPS)网络流量的默认端口。你可以像下面这样在 enp0s3 接口上允许流入和流出网络流量通过这两个端口: + + 
# iptables -A INPUT -i enp0s3 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT + # iptables -A OUTPUT -o enp0s3 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT + # iptables -A INPUT -i enp0s3 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT + # iptables -A OUTPUT -o enp0s3 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT + +**例 2:从某个特定网络中阻挡所有(或某些)流入连接** + +或许有时你需要阻挡来自于某个特定网络的所有(或某些)类型的来源流量,比方说 192.168.1.0/24: + + # iptables -I INPUT -s 192.168.1.0/24 -j DROP + +上面的命令将丢掉所有来自 192.168.1.0/24 网络的数据包,而 + + # iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 22 -j ACCEPT + +将只允许通过端口 22 的流入流量(注意 `--dport` 匹配需要指定协议,所以这里加上了 `-p tcp`)。 + +**例 3:将流入流量重定向到另一个目的地** + +假如你不仅使用你的 RHEL 7 机子来作为一个软件防火墙,而且还将它作为一个硬件防火墙,使得它位于两个不同的网络之间,则在你的系统上 IP 转发一定已经被开启了。假如没有开启,你需要编辑 `/etc/sysctl.conf` 文件并将 `net.ipv4.ip_forward` 的值设为 1,即: + + net.ipv4.ip_forward = 1 + +接着保存更改,关闭你的文本编辑器,并最终运行下面的命令来应用更改: + + # sysctl -p /etc/sysctl.conf + +例如,你可能在一个内部的机子上安装了一个打印机,它的 IP 地址为 192.168.0.10,CUPS 服务在端口 631 上进行监听(同时在你的打印服务器和你的防火墙上)。为了从防火墙另一边的客户端传递打印请求,你应该添加下面的 iptables 规则: + + # iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 631 -j DNAT --to 192.168.0.10:631 + +请记住 iptables 逐条地读取它的规则,所以请确保默认的策略或后面的规则不会覆盖上面例子中列出的那些规则。 + +### FirewallD 入门 ### + +引入 firewalld 的一个改变是区域(zone) (注:翻译参考了 https://fedoraproject.org/wiki/FirewallD/zh-cn) 的概念。它允许将网络划分为拥有不同信任级别的区域,由用户决定将设备和流量放置到哪个区域。 + +要获取活动的区域,使用: + + # firewall-cmd --get-active-zones + +在下面的例子中,公用区域被激活了,并且 enp0s3 接口被自动地分配到了这个区域。要查看有关一个特定区域的所有信息,可使用: + + # firewall-cmd --zone=public --list-all + +![列出所有的 Firewalld 区域](http://www.tecmint.com/wp-content/uploads/2015/05/View-FirewallD-Zones.png) + +列出所有的 Firewalld 区域 + +由于你可以在 [RHEL 7 安全指南][4] 中阅读到更多有关区域的知识,这里我们将仅列出一些特别的例子。 + +**例 4:允许服务通过防火墙** + +要获取受支持的服务的列表,可以使用: + + # firewall-cmd --get-services + +![列出所有受支持的服务](http://www.tecmint.com/wp-content/uploads/2015/05/List-All-Supported-Services.png) + +列出所有受支持的服务 + +要立刻且在随后的开机中使得 http 和 https 网络流量通过防火墙,可以这样: + + # firewall-cmd --zone=MyZone --add-service=http + #
firewall-cmd --zone=MyZone --permanent --add-service=http + # firewall-cmd --zone=MyZone --add-service=https + # firewall-cmd --zone=MyZone --permanent --add-service=https + # firewall-cmd --reload + +假如 `--zone` 被忽略,则默认的区域(你可以使用 `firewall-cmd --get-default-zone` 来查看)将会被使用。 + +若要移除这些规则,可以在上面的命令中将 `add` 替换为 `remove`。 + +**例 5:IP 转发或端口转发** + +首先,你需要查看在目标区域中,伪装是否被开启: + + # firewall-cmd --zone=MyZone --query-masquerade + +在下面的图片中,我们可以看到对于外部区域,伪装已被开启,但对于公用区域则没有: + +![在 firewalld 中查看伪装状态](http://www.tecmint.com/wp-content/uploads/2015/05/Check-masquerading.png) + +查看伪装状态 + +你可以为公用区域开启伪装: + + # firewall-cmd --zone=public --add-masquerade + +或者在外部区域中使用伪装。下面是使用 firewalld 来重复例 3 中的任务所需的命令: + + # firewall-cmd --zone=external --add-forward-port=port=631:proto=tcp:toport=631:toaddr=192.168.0.10 + +并且别忘了重新加载防火墙。 + +在 RHCSA 系列的 [Part 9][5] 中你可以找到更深入的例子,在那篇文章中我们解释了如何允许或禁用通常被 web 服务器和 ftp 服务器使用的端口,以及在针对这两个服务所使用的默认端口被改变时,如何更改相应的规则。另外,你或许想参考 firewalld 的 wiki 来查看更深入的例子。 + +另请阅读:[在 RHEL 7 中配置防火墙的几个实用的 firewalld 例子][6] + +### 总结 ### + +在这篇文章中,我们已经解释了防火墙是什么,介绍了在 RHEL 7 中用来实现防火墙的几个可用的服务,并提供了可以帮助你入门防火墙的几个例子。假如你有任何的评论,建议或问题,请随意使用下面的评论框来让我们知晓。这里就事先感谢了!
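上面例 5 的几个步骤可以整理成一个小脚本。其中 firewall-cmd 的语法取自原文;脚本本身(函数名、只打印命令而不直接执行的做法)则是这里为了示意而添加的假设,并非原文内容。打印出的命令需要以 root 身份在运行着 firewalld 的主机上执行:

```shell
#!/bin/sh
# 生成例 5 中的伪装查询与端口转发命令(firewall-cmd 语法来自正文)。
# 脚本只负责打印命令,便于先审查、再交给 root shell 执行。
forward_port_cmds() {
    zone=$1
    port=$2
    toaddr=$3
    echo "firewall-cmd --zone=$zone --query-masquerade"
    echo "firewall-cmd --zone=$zone --add-forward-port=port=$port:proto=tcp:toport=$port:toaddr=$toaddr"
    echo "firewall-cmd --reload"
}

# 重现例 5:把 631 端口(CUPS)的请求转发到内部打印服务器
forward_port_cmds external 631 192.168.0.10
```

正如正文所说,若 `--query-masquerade` 的结果不是 yes,需要先为该区域执行 `--add-masquerade`。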
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in-firewall/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/ +[2]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ +[3]:http://www.tecmint.com/configure-iptables-firewall/ +[4]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html +[5]:http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-and-ftp-in-rhel/ +[6]:http://www.tecmint.com/firewalld-rules-for-centos-7/ \ No newline at end of file From 9914cfab72f15b32c1d4ba7653cf64673a876e35 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sun, 6 Sep 2015 20:30:28 +0800 Subject: [PATCH 442/697] Update RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...art 12--Automate RHEL 7 Installations Using 'Kickstart'.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md b/sources/tech/RHCSA Series/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md index a4365e311e..3d8b578a32 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Automate RHEL 7 Installations Using ‘Kickstart’ – Part 12 
================================================================================ Linux servers are rarely standalone boxes. Whether it is in a datacenter or in a lab environment, chances are that you have had to install several machines that will interact one with another in some way. If you multiply the time that it takes to install Red Hat Enterprise Linux 7 manually on a single server by the number of boxes that you need to set up, this can lead to a rather lengthy effort that can be avoided through the use of an unattended installation tool known as kickstart. @@ -139,4 +141,4 @@ via: http://www.tecmint.com/automatic-rhel-installations-using-kickstart/ [a]:http://www.tecmint.com/author/gacanepa/ [1]:https://access.redhat.com/labs/kickstartconfig/ -[2]:http://www.tecmint.com/multiple-centos-installations-using-kickstart/ \ No newline at end of file +[2]:http://www.tecmint.com/multiple-centos-installations-using-kickstart/ From 303b95f336868fb67d589d88435bc5d04038df19 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sun, 6 Sep 2015 21:59:32 +0800 Subject: [PATCH 443/697] =?UTF-8?q?Translating=EF=BC=9A=20sources/tech/201?= =?UTF-8?q?50906=20Do=20Simple=20Math=20In=20Ubuntu=20And=20elementary=20O?= =?UTF-8?q?S=20With=20NaSC.md=20sources/tech/20150906=20How=20To=20Manage?= =?UTF-8?q?=20Log=20Files=20With=20Logrotate=20On=20Ubuntu=2012.10.md=20so?= =?UTF-8?q?urces/tech/20150906=20Make=20Math=20Simple=20in=20Ubuntu=20or?= =?UTF-8?q?=20Elementary=20OS=20via=20NaSC.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...50906 Do Simple Math In Ubuntu And elementary OS With NaSC.md | 1 + ...906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md | 1 + ...50906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md | 1 + 3 files changed, 3 insertions(+) diff --git a/sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md b/sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md 
index 67601c8ce6..512c0669f9 100644 --- a/sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md +++ b/sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md @@ -1,3 +1,4 @@ +ictlyh Translating Do Simple Math In Ubuntu And elementary OS With NaSC ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Make-Math-Simpler-with-NaSC.jpg) diff --git a/sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md b/sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md index 2968dc113e..0c7ae1a7e3 100644 --- a/sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md +++ b/sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md @@ -1,3 +1,4 @@ +ictlyh Translating How To Manage Log Files With Logrotate On Ubuntu 12.10 ================================================================================ #### About Logrotate #### diff --git a/sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md b/sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md index 4abc41325f..2ddb2a072c 100644 --- a/sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md +++ b/sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md @@ -1,3 +1,4 @@ +ictlyh Translating Make Math Simple in Ubuntu / Elementary OS via NaSC ================================================================================ ![](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-icon.png) From a1393e930fd6e4a40875824ad3fb7328350a5c07 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 6 Sep 2015 22:23:56 +0800 Subject: [PATCH 444/697] Update 20150906 FISH--A smart and user-friendly command line shell for Linux.md --- ...-A smart and user-friendly command line shell for Linux.md | 4 +++- 1 file changed, 3 
insertions(+), 1 deletion(-) diff --git a/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md b/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md index 180616e2d2..6e0c78246b 100644 --- a/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md +++ b/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md @@ -1,3 +1,5 @@ +translating by oska874 + FISH – A smart and user-friendly command line shell for Linux ================================================================================ The friendly interactive shell (FISH). fish is a user friendly command line shell intended mostly for interactive use. A shell is a program used to execute other programs. @@ -57,4 +59,4 @@ via: http://www.ubuntugeek.com/fish-a-smart-and-user-friendly-command-line-shell 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.ubuntugeek.com/author/ubuntufix -[1]:http://fishshell.com/docs/current/index.html#introduction \ No newline at end of file +[1]:http://fishshell.com/docs/current/index.html#introduction From d2ae274381a71eb01861251cc738afc8e963258a Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 7 Sep 2015 09:48:01 +0800 Subject: [PATCH 445/697] Update 20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md --- ...06 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md b/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md index 503d57fdbf..36e4c70d2c 100644 --- a/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md +++ b/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Install Qmmp 0.9.0 
Winamp-like Audio Player in Ubuntu ================================================================================ ![](http://ubuntuhandbook.org/wp-content/uploads/2015/01/qmmp-icon-simple.png) @@ -69,4 +70,4 @@ via: http://ubuntuhandbook.org/index.php/2015/09/qmmp-0-9-0-in-ubuntu/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://ubuntuhandbook.org/index.php/about/ -[1]:https://launchpad.net/~forkotov02/+archive/ubuntu/ppa \ No newline at end of file +[1]:https://launchpad.net/~forkotov02/+archive/ubuntu/ppa From c2ad6f0bf7b55a98ba75e766537be8909340c6b4 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 7 Sep 2015 11:02:25 +0800 Subject: [PATCH 446/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r-friendly command line shell for Linux.md | 47 +++++++++---------- 1 file changed, 22 insertions(+), 25 deletions(-) diff --git a/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md b/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md index 6e0c78246b..208f1f6781 100644 --- a/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md +++ b/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md @@ -1,59 +1,56 @@ -translating by oska874 -FISH – A smart and user-friendly command line shell for Linux +FISH - Linux 的一个智能、易用的SHELL ================================================================================ -The friendly interactive shell (FISH). fish is a user friendly command line shell intended mostly for interactive use. A shell is a program used to execute other programs. 
-### FISH Features ### +FISH:友好的交互式shell。 fish 是一个用户友好的命令行shell,主要是用来进行交互式使用。shell 就是一个用来执行其他程序的程序。 -#### Autosuggestions #### +### FISH 特性 ### -fish suggests commands as you type based on history and completions, just like a web browser. Watch out, Netscape Navigator 4.0! +#### 自动建议 #### -#### Glorious VGA Color #### +fish 会根据你的历史输入和已经完成的命令来提供建议,方便输入,就像一个网络浏览器一样。注意了,就是Netscape Navigator 4.0! -fish natively supports term256, the state of the art in terminal technology. You'll have an astonishing 256 colors available for use! +#### 漂亮的VGA 色彩 #### +fish 原生支持term256, 它就是一个终端技术的艺术国度。 你将可以拥有一个难以置信的、256 色的shell 来使用。 -#### Sane Scripting #### +#### 理智的脚本 #### -fish is fully scriptable, and its syntax is simple, clean, and consistent. You'll never write esac again. +fish 是完全可以通过脚本控制的,而且它的语法又是那么的简单、干净,而且一致。你甚至不需要去重写。 -#### Web Based configuration #### +#### 基于web 的配置 #### -For those lucky few with a graphical computer, you can set your colors and view functions, variables, and history all from a web page. +对于少数能使用图形计算机的幸运儿, 你们可以在网页上配置你们自己的色彩方案,以及查看函数、变量和历史记录。 -#### Man Page Completions #### +#### 帮助手册补全 #### -Other shells support programmable completions, but only fish generates them automatically by parsing your installed man pages. +其它的shell 支持可配置的补全, 但是只有fish 可以通过自动转换你安装好的man 手册来实现补全功能。 -#### Works Out Of The Box #### +#### 开箱即用 #### -fish will delight you with features like tab completions and syntax highlighting that just work, with nothing new to learn or configure. 
+fish 将会通过tab 补全和语法高亮是你非常愉快的使用shell, 同时不需要太多的学习或者配置。 -### Install FISH On ubuntu 15.04 ### +### 在ubuntu 15.04 上安装FISH -Open the terminal and run the following commands +打开终端,运行下列命令: sudo apt-add-repository ppa:fish-shell/release-2 sudo apt-get update sudo apt-get install fish -**Using FISH** - -Open the terminal and run the following command to start FISH +**使用FISH** +打开终端,运行下列命令来启动FISH: fish -Welcome to fish, the friendly interactive shell Type help for instructions on how to use fish - -Check [FISH Documentation][1] How to use. +欢迎来到fish, 友好的交互式shell,输入指令help 来了解怎么使用fish。 +阅读[FISH 文档][1] ,掌握使用方法。 -------------------------------------------------------------------------------- via: http://www.ubuntugeek.com/fish-a-smart-and-user-friendly-command-line-shell-for-linux.html 作者:[ruchi][a] -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/oska874) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c50498f743252e6eb6246c463f41a5ce81f96b3d Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 7 Sep 2015 11:02:42 +0800 Subject: [PATCH 447/697] Update 20150906 FISH--A smart and user-friendly command line shell for Linux.md --- ...SH--A smart and user-friendly command line shell for Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md b/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md index 208f1f6781..3a6e5cafa0 100644 --- a/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md +++ b/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md @@ -45,6 +45,7 @@ fish 将会通过tab 补全和语法高亮是你非常愉快的使用shell, 欢迎来到fish, 友好的交互式shell,输入指令help 来了解怎么使用fish。 阅读[FISH 文档][1] ,掌握使用方法。 + -------------------------------------------------------------------------------- via: 
http://www.ubuntugeek.com/fish-a-smart-and-user-friendly-command-line-shell-for-linux.html From 50656a3d038d05410e679011cbee135aacb1362c Mon Sep 17 00:00:00 2001 From: ezio Date: Mon, 7 Sep 2015 11:09:56 +0800 Subject: [PATCH 448/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=B9=B6=E7=A7=BB?= =?UTF-8?q?=E5=8A=A8?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ISH--A smart and user-friendly command line shell for Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md (100%) diff --git a/sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md b/translated/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md similarity index 100% rename from sources/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md rename to translated/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md From 979d48279e393a6e0154ca303e7b7290ef3c66a4 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 7 Sep 2015 23:30:48 +0800 Subject: [PATCH 449/697] PUB:20150821 Top 4 open source command-line email clients @KevinSJ --- ... 
open source command-line email clients.md | 27 +++++++++---------- 1 file changed, 13 insertions(+), 14 deletions(-) rename {translated/share => published}/20150821 Top 4 open source command-line email clients.md (52%) diff --git a/translated/share/20150821 Top 4 open source command-line email clients.md b/published/20150821 Top 4 open source command-line email clients.md similarity index 52% rename from translated/share/20150821 Top 4 open source command-line email clients.md rename to published/20150821 Top 4 open source command-line email clients.md index db28f4c543..1e9ae59c5c 100644 --- a/translated/share/20150821 Top 4 open source command-line email clients.md +++ b/published/20150821 Top 4 open source command-line email clients.md @@ -1,13 +1,12 @@ -KevinSJ Translating -四大开源版命令行邮件客户端 +4 个开源的命令行邮件客户端 ================================================================================ ![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_mail.png) -无论你承认与否,email并没有消亡。对依赖命令行的 Linux 高级用户而言,离开 shell 转而使用传统的桌面或网页版邮件客户端并不合适。归根结底,命令行最善于处理文件,特别是文本文件,能使效率倍增。 +无论你承认与否,email并没有消亡。对那些对命令行至死不渝的 Linux 高级用户而言,离开 shell 转而使用传统的桌面或网页版邮件客户端并不适应。归根结底,命令行最善于处理文件,特别是文本文件,能使效率倍增。 -幸运的是,也有不少的命令行邮件客户端,他们的用户大都乐于帮助你入门并回答你使用中遇到的问题。但别说我没警告过你:一旦你完全掌握了其中一个客户端,要再使用图基于图形界面的客户端将回变得很困难! +幸运的是,也有不少的命令行邮件客户端,而它们的用户大都乐于帮助你入门并回答你使用中遇到的问题。但别说我没警告过你:一旦你完全掌握了其中一个客户端,你会发现很难回到基于图形界面的客户端! 
-要安装下述四个客户端中的任何一个是非常容易的;主要 Linux 发行版的软件仓库中都提供此类软件,并可通过包管理器进行安装。你也可以再其他的操作系统中寻找并安装这类客户端,但我并未尝试过也没有相关的经验。 +要安装下述四个客户端中的任何一个是非常容易的;主要的 Linux 发行版的软件仓库中都提供此类软件,并可通过包管理器进行安装。你也可以在其它的操作系统中寻找并安装这类客户端,但我并未尝试过也没有相关的经验。 ### Mutt ### @@ -17,7 +16,7 @@ KevinSJ Translating 许多终端爱好者都听说过甚至熟悉 Mutt 和 Alpine, 他们已经存在多年。让我们先看看 Mutt。 -Mutt 支持许多你所期望 email 系统支持的功能:会话,颜色区分,支持多语言,同时还有很多设置选项。它支持 POP3 和 IMAP, 两个主要的邮件传输协议,以及许多邮箱格式。自从1995年诞生以来, Mutt 即拥有一个活跃的开发社区,但最近几年,新版本更多的关注于修复问题和安全更新而非提供新功能。这对大多数 Mutt 用户而言并无大碍,他们钟爱这样的界面,并支持此项目的口号:“所有邮件客户端都很烂,只是这个烂的没那么彻底。” +Mutt 支持许多你所期望 email 系统支持的功能:会话,颜色区分,支持多语言,同时还有很多设置选项。它支持 POP3 和 IMAP 这两个主要的邮件传输协议,以及许多邮箱格式。自从1995年诞生以来, Mutt 就拥有了一个活跃的开发社区,但最近几年,新版本更多的关注于修复问题和安全更新而非提供新功能。这对大多数 Mutt 用户而言并无大碍,他们钟爱这样的界面,并支持此项目的口号:“所有邮件客户端都很烂,只是这个烂的没那么彻底。” ### Alpine ### @@ -25,13 +24,13 @@ Mutt 支持许多你所期望 email 系统支持的功能:会话,颜色区 - [源代码][5] - 授权协议: [Apache 2.0][6] -Alpine 是另一款知名的终端邮件客户端,它由华盛顿大学开发,初衷是作为 UW 开发的 Pine 的开源,支持unicode的替代版本。 +Alpine 是另一款知名的终端邮件客户端,它由华盛顿大学开发,设计初衷是作为一个开源的、支持 unicode 的 Pine (也来自华盛顿大学)的替代版本。 Alpine 不仅容易上手,还为高级用户提供了很多特性,它支持很多协议 —— IMAP, LDAP, NNTP, POP, SMTP 等,同时也支持不同的邮箱格式。Alpine 内置了一款名为 Pico 的可独立使用的简易文本编辑工具,但你也可以使用你常用的文本编辑器: vi, Emacs等。 -尽管Alpine的升级并不频繁,名为re-alpine的分支为不同的开发者提供了开发此项目的机会。 +尽管 Alpine 的升级并不频繁,不过有个名为 re-alpine 的分支为不同的开发者提供了开发此项目的机会。 -Alpine 支持再屏幕上显示上下文帮助,但一些用户回喜欢 Mutt 式的独立说明手册,但这两种提供了较好的说明。用户可以同时尝试 Mutt 和 Alpine,并由个人喜好作出决定,也可以尝试以下几个比较新颖的选项。 +Alpine 支持在屏幕上显示上下文帮助,但一些用户会喜欢 Mutt 式的独立说明手册,不过它们两个的文档都很完善。用户可以同时尝试 Mutt 和 Alpine,并由个人喜好作出决定,也可以尝试以下的几个新选择。 ### Sup ### @@ -39,10 +38,9 @@ Alpine 支持再屏幕上显示上下文帮助,但一些用户回喜欢 Mutt - [源代码][8] - 授权协议: [GPLv2][9] -Sup 是我们列表中能被称为“大容量邮件客户端”的两个之一。自称“为邮件较多的人设计的命令行客户端”,Sup 的目标是提供一个支持层次化设计并允许再为会话添加标签进行简单整理的界面。 +Sup 是我们列表中能被称为“大容量邮件客户端”的二者之一。自称“为邮件较多的人设计的命令行客户端”,Sup 的目标是提供一个支持层次化设计并允许为会话添加标签进行简单整理的界面。 由于采用 Ruby 编写,Sup 能提供十分快速的搜索并能自动管理联系人列表,同时还允许自定义插件。对于使用 Gmail 作为网页邮件客户端的人们,这些功能都是耳熟能详的,这就使得 Sup 成为一种比较现代的命令行邮件管理方式。 -Written in Ruby, Sup provides exceptionally fast searching, manages your contact list automatically, and allows 
for custom extensions. For people who are used to Gmail as a webmail interface, these features will seem familiar, and Sup might be seen as a more modern approach to email on the command line. ### Notmuch ### @@ -52,16 +50,17 @@ Written in Ruby, Sup provides exceptionally fast searching, manages your contact "Sup? Notmuch." Notmuch 作为 Sup 的回应,最初只是重写了 Sup 的一小部分来提高性能。最终,这个项目逐渐变大并成为了一个独立的邮件客户端。 -Notmuch是一款相当精简的软件。它并不能独立的收发邮件,启用 Notmuch 的快速搜索功能的代码实际上是一个需要调用的独立库。但这样的模块化设计也使得你能使用你最爱的工具进行写信,发信和收信,集中精力做好一件事情并有效浏览和管理你的邮件。 +Notmuch 是一款相当精简的软件。它并不能独立的收发邮件,启用 Notmuch 的快速搜索功能的代码实际上是设计成一个程序可以调用的独立库。但这样的模块化设计也使得你能使用你最爱的工具进行写信,发信和收信,集中精力做好一件事情并有效浏览和管理你的邮件。 + +这个列表并不完整,还有很多 email 客户端,它们或许才是你的最佳选择。你喜欢什么客户端呢? -这个列表并不完整,还有很多 email 客户端,他们或许才是你的最佳选择。你喜欢什么客户端呢? -------------------------------------------------------------------------------- via: http://opensource.com/life/15/8/top-4-open-source-command-line-email-clients 作者:[Jason Baker][a] 译者:[KevinSJ](https://github.com/KevinSj) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From c1ceb5869d1d80ff3c819030e2882d374c7b6e67 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 7 Sep 2015 23:47:48 +0800 Subject: [PATCH 450/697] PUB:20150906 FISH--A smart and user-friendly command line shell for Linux @oska874 --- ...r-friendly command line shell for Linux.md | 27 ++++++++++--------- 1 file changed, 15 insertions(+), 12 deletions(-) rename {translated/share => published}/20150906 FISH--A smart and user-friendly command line shell for Linux.md (54%) diff --git a/translated/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md b/published/20150906 FISH--A smart and user-friendly command line shell for Linux.md similarity index 54% rename from translated/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md rename to published/20150906 FISH--A smart and user-friendly command line shell 
for Linux.md index 3a6e5cafa0..de5d3946f0 100644 --- a/translated/share/20150906 FISH--A smart and user-friendly command line shell for Linux.md +++ b/published/20150906 FISH--A smart and user-friendly command line shell for Linux.md @@ -1,33 +1,35 @@ - -FISH - Linux 的一个智能、易用的SHELL +FISH:Linux 的一个智能易用的 Shell ================================================================================ -FISH:友好的交互式shell。 fish 是一个用户友好的命令行shell,主要是用来进行交互式使用。shell 就是一个用来执行其他程序的程序。 +FISH(friendly interactive shell)是一个用户友好的命令行 shell,主要是用来进行交互式使用。shell 就是一个用来执行其他程序的程序。 ### FISH 特性 ### #### 自动建议 #### -fish 会根据你的历史输入和已经完成的命令来提供建议,方便输入,就像一个网络浏览器一样。注意了,就是Netscape Navigator 4.0! +fish 会根据你的历史输入和补完来提供命令建议,就像一个网络浏览器一样。注意了,就是Netscape Navigator 4.0! + +![](http://www.tecmint.com/wp-content/uploads/2015/07/Fish-Auto-Suggestion.gif) #### 漂亮的VGA 色彩 #### -fish 原生支持term256, 它就是一个终端技术的艺术国度。 你将可以拥有一个难以置信的、256 色的shell 来使用。 + +fish 原生支持 term256, 它就是一个终端技术的艺术国度。 你将可以拥有一个难以置信的、256 色的shell 来使用。 #### 理智的脚本 #### fish 是完全可以通过脚本控制的,而且它的语法又是那么的简单、干净,而且一致。你甚至不需要去重写。 -#### 基于web 的配置 #### +#### 基于 web 的配置 #### 对于少数能使用图形计算机的幸运儿, 你们可以在网页上配置你们自己的色彩方案,以及查看函数、变量和历史记录。 #### 帮助手册补全 #### -其它的shell 支持可配置的补全, 但是只有fish 可以通过自动转换你安装好的man 手册来实现补全功能。 +其它的 shell 支持可配置的补全, 但是只有 fish 可以通过自动转换你安装好的 man 手册来实现补全功能。 #### 开箱即用 #### -fish 将会通过tab 补全和语法高亮是你非常愉快的使用shell, 同时不需要太多的学习或者配置。 +fish 将会通过 tab 补全和语法高亮使你非常愉快的使用shell, 同时不需要太多的学习或者配置。 ### 在ubuntu 15.04 上安装FISH @@ -37,12 +39,13 @@ fish 将会通过tab 补全和语法高亮是你非常愉快的使用shell, sudo apt-get update sudo apt-get install fish -**使用FISH** +###使用FISH### 打开终端,运行下列命令来启动FISH: + fish -欢迎来到fish, 友好的交互式shell,输入指令help 来了解怎么使用fish。 +欢迎来到 fish,友好的交互式shell,输入指令 help 来了解怎么使用fish。 阅读[FISH 文档][1] ,掌握使用方法。 @@ -51,8 +54,8 @@ fish 将会通过tab 补全和语法高亮是你非常愉快的使用shell, via: http://www.ubuntugeek.com/fish-a-smart-and-user-friendly-command-line-shell-for-linux.html 作者:[ruchi][a] -译者:[译者ID](https://github.com/oska874) -校对:[校对者ID](https://github.com/校对者ID) +译者:[oska874](https://github.com/oska874) 
+校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 46a139830ca8be02c9f6de86db5ebd73620dfb6d Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 8 Sep 2015 00:14:35 +0800 Subject: [PATCH 451/697] PUB:20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @jerryling315 ,请注意,要使用中文标点。 --- ...u 15.04 to connect to Android or iPhone.md | 77 +++++++++++++++++++ ...u 15.04 to connect to Android or iPhone.md | 74 ------------------ 2 files changed, 77 insertions(+), 74 deletions(-) create mode 100644 published/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md delete mode 100644 translated/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md diff --git a/published/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md b/published/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md new file mode 100644 index 0000000000..e7ee7d760d --- /dev/null +++ b/published/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md @@ -0,0 +1,77 @@ +如何在 Ubuntu 15.04 下创建一个可供 Android/iOS 连接的 AP +================================================================================ +我成功地在 Ubuntu 15.04 下用 Gnome Network Manager 创建了一个无线AP热点。接下来我要分享一下我的步骤。请注意:你必须要有一个可以用来创建AP热点的无线网卡。如果你不知道如何确认它的话,在终端(Terminal)里输入`iw list`。 + +如果你没有安装`iw`的话, 在Ubuntu下你可以使用`sudo apt-get install iw`进行安装. 
+ +在你键入`iw list`之后, 查看“支持的接口模式”, 你应该会看到类似下面的条目中看到 AP: + + Supported interface modes: + + * IBSS + * managed + * AP + * AP/VLAN + * monitor + * mesh point + +让我们一步步看: + +1、 断开WIFI连接。使用有线网络接入你的笔记本。 + +2、 在顶栏面板里点击网络的图标 -> Edit Connections(编辑连接) -> 在弹出窗口里点击Add(新增)按钮。 + +3、 在下拉菜单内选择Wi-Fi。 + +4、 接下来: + +a、 输入一个链接名 比如: Hotspot 1 + +b、 输入一个 SSID 比如: Hotspot 1 + +c、 选择模式(mode): Infrastructure (基础设施) + +d、 设备 MAC 地址: 在下拉菜单里选择你的无线设备 + +![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome1.jpg) + +5、 进入Wi-Fi安全选项卡,选择 WPA & WPA2 Personal 并且输入密码。 +6、 进入IPv4设置选项卡,在Method(方法)下拉菜单里,选择Shared to other computers(共享至其他电脑)。 + +![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome4.jpg) + +7、 进入IPv6选项卡,在Method(方法)里设置为忽略ignore (只有在你不使用IPv6的情况下这么做) +8、 点击 Save(保存) 按钮以保存配置。 +9、 从 menu/dash 里打开Terminal。 +10、 修改你刚刚使用 network settings 创建的连接。 + +使用 VIM 编辑器: + + sudo vim /etc/NetworkManager/system-connections/Hotspot + +或使用Gedit 编辑器: + + gksu gedit /etc/NetworkManager/system-connections/Hotspot + +把名字 Hotspot 用你在第4步里起的连接名替换掉。 + +![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome2.jpg?resize=640%2C402) + +a、 把 `mode=infrastructure` 改成 `mode=ap` 并且保存文件。 +b、 一旦你保存了这个文件,你应该能在 Wifi 菜单里看到你刚刚建立的AP了。(如果没有的话请再顶栏里 关闭/打开 Wifi 选项一次) + +![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome3.jpg?resize=290%2C375) + +11、你现在可以把你的设备连上Wifi了。已经过 Android 5.0的小米4测试。(下载了1GB的文件以测试速度与稳定性) + +-------------------------------------------------------------------------------- + +via: http://www.linuxveda.com/2015/08/23/how-to-create-an-ap-in-ubuntu-15-04-to-connect-to-androidiphone/ + +作者:[Sayantan Das][a] +译者:[jerryling315](https://github.com/jerryling315) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxveda.com/author/sayantan_das/ diff --git a/translated/tech/20150824 How to create an AP in Ubuntu 15.04 to 
connect to Android or iPhone.md b/translated/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md deleted file mode 100644 index 02aef62d82..0000000000 --- a/translated/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md +++ /dev/null @@ -1,74 +0,0 @@ -如何在 Ubuntu 15.04 下创建连接至 Android/iOS 的 AP -================================================================================ -我成功地在 Ubuntu 15.04 下用 Gnome Network Manager 创建了一个无线AP热点. 接下来我要分享一下我的步骤. 请注意: 你必须要有一个可以用来创建AP热点的无线网卡. 如果你不知道如何找到连上了的设备的话, 在终端(Terminal)里输入`iw list`. - -如果你没有安装`iw`的话, 在Ubuntu下你可以使用`udo apt-get install iw`进行安装. - -在你键入`iw list`之后, 寻找可用的借口, 你应该会看到类似下列的条目: - -Supported interface modes: - -* IBSS -* managed -* AP -* AP/VLAN -* monitor -* mesh point - -让我们一步步看 - -1. 断开WIFI连接. 使用有线网络接入你的笔记本. -1. 在顶栏面板里点击网络的图标 -> Edit Connections(编辑连接) -> 在弹出窗口里点击Add(新增)按钮. -1. 在下拉菜单内选择Wi-Fi. -1. 接下来, - -a. 输入一个链接名 比如: Hotspot - -b. 输入一个 SSID 比如: Hotspot - -c. 选择模式(mode): Infrastructure - -d. 设备 MAC 地址: 在下拉菜单里选择你的无线设备 - -![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome1.jpg) - -1. 进入Wi-Fi安全选项卡, 选择 WPA & WPA2 Personal 并且输入密码. -1. 进入IPv4设置选项卡, 在Method(方法)下拉菜单里, 选择Shared to other computers(共享至其他电脑). - -![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome4.jpg) - -1. 进入IPv6选项卡, 在Method(方法)里设置为忽略ignore (只有在你不使用IPv6的情况下这么做) -1. 点击 Save(保存) 按钮以保存配置. -1. 从 menu/dash 里打开Terminal. -1. 修改你刚刚使用 network settings 创建的连接. - -使用 VIM 编辑器: - - sudo vim /etc/NetworkManager/system-connections/Hotspot - -使用Gedit 编辑器: - - gksu gedit /etc/NetworkManager/system-connections/Hotspot - -把名字 Hotspot 用你在第4步里起的连接名替换掉. - -![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome2.jpg?resize=640%2C402) - -1. 把 `mode=infrastructure` 改成 `mode=ap` 并且保存文件 -1. 一旦你保存了这个文件, 你应该能在 Wifi 菜单里看到你刚刚建立的AP了. 
(如果没有的话请再顶栏里 关闭/打开 Wifi 选项一次) - -![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome3.jpg?resize=290%2C375) - -1. 你现在可以把你的设备连上Wifi了. 已经过 Android 5.0的小米4测试.(下载了1GB的文件以测试速度与稳定性) - --------------------------------------------------------------------------------- - -via: http://www.linuxveda.com/2015/08/23/how-to-create-an-ap-in-ubuntu-15-04-to-connect-to-androidiphone/ - -作者:[Sayantan Das][a] -译者:[jerryling315](https://github.com/jerryling315) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxveda.com/author/sayantan_das/ From 8fca2c5bb751b650d65c884f11048df9f6253176 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 8 Sep 2015 09:22:01 +0800 Subject: [PATCH 452/697] translating --- .../20150906 How to Install QGit Viewer in Ubuntu 14.04.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md b/sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md index 0c18bd4c99..747a87b13b 100644 --- a/sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md +++ b/sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md @@ -1,3 +1,5 @@ +Translating----geekpi + How to Install QGit Viewer in Ubuntu 14.04 ================================================================================ QGit is a free and Open Source GUI git viewer written on Qt and C++ by Marco Costalba. It is a better git viewer which provides us the ability to browse revisions history, view commits and patches applied to the files under a simple GUI environment. It utilizes git command line to process execute the commands and to display the output. It has some common features like to view revisions, diffs, files history, files annotation, archive tree. 
We can format and apply patch series with the selected commits, drag and drop commits between two instances and more with QGit Viewer. It allows us to create custom buttons with which we can add more buttons to execute a specific command when pressed using its builtin Action Builder. @@ -110,4 +112,4 @@ via: http://linoxide.com/ubuntu-how-to/install-qgit-viewer-ubuntu-14-04/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://linoxide.com/author/arunp/ \ No newline at end of file +[a]:http://linoxide.com/author/arunp/ From 47fe31595e2bdb312b18e2d74d986c6b727daed9 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 8 Sep 2015 10:08:39 +0800 Subject: [PATCH 453/697] translated --- ... to Install QGit Viewer in Ubuntu 14.04.md | 115 ------------------ ... to Install QGit Viewer in Ubuntu 14.04.md | 113 +++++++++++++++++ 2 files changed, 113 insertions(+), 115 deletions(-) delete mode 100644 sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md create mode 100644 translated/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md diff --git a/sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md b/sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md deleted file mode 100644 index 747a87b13b..0000000000 --- a/sources/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md +++ /dev/null @@ -1,115 +0,0 @@ -Translating----geekpi - -How to Install QGit Viewer in Ubuntu 14.04 -================================================================================ -QGit is a free and Open Source GUI git viewer written on Qt and C++ by Marco Costalba. It is a better git viewer which provides us the ability to browse revisions history, view commits and patches applied to the files under a simple GUI environment. It utilizes git command line to process execute the commands and to display the output. 
It has some common features like to view revisions, diffs, files history, files annotation, archive tree. We can format and apply patch series with the selected commits, drag and drop commits between two instances and more with QGit Viewer. It allows us to create custom buttons with which we can add more buttons to execute a specific command when pressed using its builtin Action Builder. - -Here are some easy steps on how we can compile and install QGit Viewer from its source code in Ubuntu 14.04 LTS "Trusty". - -### 1. Installing QT4 Libraries ### - -First of all, we'll need have QT4 Libraries installed in order to run QGit viewer in our ubuntu machine. As apt is the default package manager of ubuntu and QT4 packages is available in the official repository of ubutnu, we'll gonna install qt4-default using apt-get command as shown below. - - $ sudo apt-get install qt4-default - -### 2. Downloading QGit Tarball ### - -After installing Qt4 libraries, we'll gonna install git so that we can clone the Git repository of QGit Viewer for Qt 4 . To do so, we'll run the following apt-get command. - - $ sudo apt-get install git - -Now, we'll clone the repository using git command as shown below. - - $ git clone git://repo.or.cz/qgit4/redivivus.git - - Cloning into 'redivivus'... - remote: Counting objects: 7128, done. - remote: Compressing objects: 100% (2671/2671), done. - remote: Total 7128 (delta 5464), reused 5711 (delta 4438) - Receiving objects: 100% (7128/7128), 2.39 MiB | 470.00 KiB/s, done. - Resolving deltas: 100% (5464/5464), done. - Checking connectivity... done. - -### 3. Compiling QGit ### - -After we have cloned the repository, we'll now enter into the directory named redivivus and create the makefile which we'll require to compile qgit viewer. So, to enter into the directory, we'll run the following command. - - $ cd redivivus - -Next, we'll run the following command in order to generate a new Makefile from qmake project file ie qgit.pro. 
- - $ qmake qgit.pro - -After the Makefile has been generated, we'll now finally compile the source codes of qgit and get the binary as output. To do so, first we'll need to install make and g++ package so that we can compile, as it is a program written in C++ . - - $ sudo apt-get install make g++ - -Now, we'll gonna compile the codes using make command. - - $ make - -### 4. Installing QGit ### - -As we have successfully compiled the source code of QGit viewer, now we'll surely wanna install it in our Ubuntu 14.04 machine so that we can execute it from our system. To do so, we'll run the following command. - - $ sudo make install - - cd src/ && make -f Makefile install - make[1]: Entering directory `/home/arun/redivivus/src' - make -f Makefile.Release install - make[2]: Entering directory `/home/arun/redivivus/src' - install -m 755 -p "../bin/qgit" "/usr/lib/x86_64-linux-gnu/qt4/bin/qgit" - strip "/usr/lib/x86_64-linux-gnu/qt4/bin/qgit" - make[2]: Leaving directory `/home/arun/redivivus/src' - make[1]: Leaving directory `/home/arun/redivivus/src' - -Next, we'll need to copy the built qgit binary file from bin directory to /usr/bin/ directory so that it will be available as global command. - - $ sudo cp bin/qgit /usr/bin/ - -### 5. Creating Desktop File ### - -As we have successfully installed qgit in our Ubuntu box, we'll now go for create a desktop file so that QGit will be available under Menu or Launcher of our Desktop Environment. To do so, we'll need to create a new file named qgit.desktop under /usr/share/applications/ directory. - - $ sudo nano /usr/share/applications/qgit.desktop - -Then, we'll need to paste the following lines into the file. - - [Desktop Entry] - Name=qgit - GenericName=git GUI viewer - Exec=qgit - Icon=qgit - Type=Application - Comment=git GUI viewer - Terminal=false - MimeType=inode/directory; - Categories=Qt;Development;RevisionControl; - -After done, we'll simply save the file and exit. - -### 6. 
Running QGit Viewer ### - -After QGit is installed successfully in our Ubuntu box, we can now run it from any launcher or application menu. In order to run QGit from the terminal, we'll need to run as follows. - - $ qgit - -This will open the Qt4 Framework based QGit Viewer in GUI mode. - -![QGit Viewer](http://blog.linoxide.com/wp-content/uploads/2015/07/qgit-viewer.png) - -### Conclusion ### - -QGit is really an awesome QT based git viewer. It is available on all three platforms Linux, Mac OSX and Microsoft Windows. It helps us to easily navigate to the history, revisions, branches and more from the available git repository. It reduces the need of running git command line for the common stuffs like viewing revisions, history, diff, etc as graphical interface of it makes easy to do tasks. The latest version of qgit is also available in the default repository of ubuntu which we can install using **apt-get install qgit** command. So, qgit makes our work pretty fast and easy to do with its simple GUI. 
- --------------------------------------------------------------------------------- - -via: http://linoxide.com/ubuntu-how-to/install-qgit-viewer-ubuntu-14-04/ - -作者:[Arun Pyasi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ diff --git a/translated/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md b/translated/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md new file mode 100644 index 0000000000..317e610a6f --- /dev/null +++ b/translated/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md @@ -0,0 +1,113 @@ +如何在Ubuntu中安装QGit浏览器 +================================================================================ +QGit是一款Marco Costalba用Qt和C++写的开源GUI Git浏览器。它是一款在GUI环境下更好地提供浏览历史记录、提交记录和文件补丁的浏览器。它利用git命令行来执行并显示输出。它有一些常规的功能像浏览历史、比较、文件历史、文件标注、档案树。我们可以格式化并用选中的提交应用补丁,在两个实例之间拖拽并提交等等。它允许我们创建自定义的按钮来用它内置的生成器来执行特定的命令。 + +这里有简单的几步在Ubuntu 14.04 LTS "Trusty"中编译并安装QGit浏览器。 + +### 1. 安装 QT4 库 ### + +首先在ubuntu中运行QGit需要先安装QT4库。由于apt是ubuntu默认的包管理器,同时qt4也在官方的仓库中,因此我们直接用下面的apt-get命令来安装qt4。 + + $ sudo apt-get install qt4-default + +### 2. 下载QGit压缩包 ### + +安装完Qt4之后,我们要安装git,这样我们才能在QGit中克隆git仓库。运行下面的apt-get命令。 + + $ sudo apt-get install git + +现在,我们要使用下面的git命令来克隆仓库。 + + $ git clone git://repo.or.cz/qgit4/redivivus.git + + Cloning into 'redivivus'... + remote: Counting objects: 7128, done. + remote: Compressing objects: 100% (2671/2671), done. + remote: Total 7128 (delta 5464), reused 5711 (delta 4438) + Receiving objects: 100% (7128/7128), 2.39 MiB | 470.00 KiB/s, done. + Resolving deltas: 100% (5464/5464), done. + Checking connectivity... done. + +### 3. 
编译 QGit ### + +克隆之后,我们现在进入redivivus的目录,并创建我们编译需要的makefile文件。因此,要进入目录,我们要运行下面的命令。 + + $ cd redivivus + +接下来,我们运行下面的命令从qmake项目也就是qgit.pro来生成新的Makefile。 + + $ qmake qgit.pro + +生成Makefile之后,我们现在终于要编译qgit的源代码并得到二进制的输出。首先我们要安装make和g++包用于编译,因为这是一个用C++写的程序。 + + $ sudo apt-get install make g++ + +现在,我们要用make命令来编译代码了 + + $ make + +### 4. 安装 QGit ### + +成功编译QGit的源码之后,我们就要在Ubuntu 14.04中安装它了,这样就可以在系统中执行它。因此我们将运行下面的命令、 + + $ sudo make install + + cd src/ && make -f Makefile install + make[1]: Entering directory `/home/arun/redivivus/src' + make -f Makefile.Release install + make[2]: Entering directory `/home/arun/redivivus/src' + install -m 755 -p "../bin/qgit" "/usr/lib/x86_64-linux-gnu/qt4/bin/qgit" + strip "/usr/lib/x86_64-linux-gnu/qt4/bin/qgit" + make[2]: Leaving directory `/home/arun/redivivus/src' + make[1]: Leaving directory `/home/arun/redivivus/src' + +接下来,我们需要从bin目录下复制qgit的二进制文件到/usr/bin/,这样我们就可以全局运行它了。 + + $ sudo cp bin/qgit /usr/bin/ + +### 5. 创建桌面文件 ### + +既然我们已经在ubuntu中成功安装了qgit,我们来创建一个桌面文件,这样QGit就可以在我们桌面环境中的菜单或者启动器中找到了。要做到这点,我们要在/usr/share/applications/创建一个新文件叫qgit.desktop。 + + $ sudo nano /usr/share/applications/qgit.desktop + +接下来复制下面的行到文件中。 + + [Desktop Entry] + Name=qgit + GenericName=git GUI viewer + Exec=qgit + Icon=qgit + Type=Application + Comment=git GUI viewer + Terminal=false + MimeType=inode/directory; + Categories=Qt;Development;RevisionControl; + +完成之后,保存并退出。 + +### 6. 
运行 QGit 浏览器 ### + +QGit安装完成之后,我们现在就可以从任何启动器或者程序菜单中启动它了。要在终端下面运行QGit,我们可以像下面那样。 + + $ qgit + +这会打开基于Qt4框架GUI模式的QGit。 + +![QGit Viewer](http://blog.linoxide.com/wp-content/uploads/2015/07/qgit-viewer.png) + +### 总结 ### + +QGit是一个很棒的基于QT的git浏览器。它可以在Linux、MAC OSX和 Microsoft Windows所有这三个平台中运行。它帮助我们很容易地浏览历史、版本、分支等等git仓库提供的信息。它减少了使用命令行的方式去执行诸如浏览版本、历史、比较功能的需求,并用图形化的方式来简化了这些任务。最新的qgit版本也在默认仓库中,你可以使用 **apt-get install qgit** 命令来安装。因此。qgit用它简单的GUI使得我们的工作更加简单和快速。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/ubuntu-how-to/install-qgit-viewer-ubuntu-14-04/ + +作者:[Arun Pyasi][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ From caffc9312cb2921598ce6c145547562aefd0a2a0 Mon Sep 17 00:00:00 2001 From: cvsher <478990879@qq.com> Date: Tue, 8 Sep 2015 11:08:06 +0800 Subject: [PATCH 454/697] [translated]20150906 How To Set Up Your FTP Server In Linux.md [translated]20150906 How To Set Up Your FTP Server In Linux.md --- ... 
How To Set Up Your FTP Server In Linux.md | 105 ++++++++++++++++++ 1 file changed, 105 insertions(+) create mode 100644 translated/tech/20150906 How To Set Up Your FTP Server In Linux.md diff --git a/translated/tech/20150906 How To Set Up Your FTP Server In Linux.md b/translated/tech/20150906 How To Set Up Your FTP Server In Linux.md new file mode 100644 index 0000000000..e2a27dd36b --- /dev/null +++ b/translated/tech/20150906 How To Set Up Your FTP Server In Linux.md @@ -0,0 +1,105 @@ +如何在linux中搭建FTP服务 +===================================================================== +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Setup-FTP-Server-in-Linux.jpg) + +在本教程中,我将会解释如何搭建你自己的FTP服务。但是,首先我们应该来的学习一下FTP是什么。 + +###FTP是什么?### + +[FTP][1] 是文件传输协议(File Transfer Protocol)的缩写。顾名思义,FTP是用于计算机之间通过网络进行文件传输。你可以通过FTP在计算机账户间进行文件传输,也可以在账户和桌面计算机之间传输文件,或者访问在线软件文档。但是,需要注意的是多数的FTP站点的使用率非常高,并且在连接前需要进行多次尝试。 + +FTP地址和HTTP地址(即网页地址)非常相似,只是FTP地址使用ftp://前缀而不是http:// + +###FTP服务器是什么?### + +通常,拥有FTP地址的计算机是专用于接收FTP连接请求的。一台专用于接收FTP连接请求的计算机即为FTP服务器或者FTP站点。 + +现在,我们来开始一个特别的冒险,我们将会搭建一个FTP服务用于和家人、朋友进行文件共享。在本教程,我们将以[vsftpd][2]作为ftp服务。 + +VSFTPD是一个自称为最安全的FTP服务端软件。事实上VSFTPD的前两个字母表示“非常安全的(very secure)”。该软件的构建绕开了FTP协议的漏洞。 + +尽管如此,你应该知道对于安全的文件管理和传输还有更好的解决方法,如:SFTP(使用[OpenSSH][3])。FTP协议对于共享非敏感数据是非常有用和可靠的。 + +####在rpm distributions中安装VSFTPD:#### + +你可以使用如下命令在命令行界面中快捷的安装VSFTPD: + + dnf -y install vsftpd + +####在deb distributions中安装VSFTPD:#### + +你可以使用如下命令在命令行界面中快捷的安装VSFTPD: + + sudo apt-get install vsftpd + +####在Arch distribution中安装VSFTPD:#### + +你可以使用如下命令在命令行界面中快捷的安装VSFTPD: + + sudo apt-get install vsftpd + +####配置FTP服务#### + +多数的VSFTPD配置项都在/etc/vsftpd.conf配置文件中。这个文件本身已经有非常良好的文档说明了,因此,在本节中,我只强调一些你可能进行修改的重要选项。使用man页面查看所有可用的选项和基本的 文档说明: + + man vsftpd.conf + +根据文件系统层级标准,FTP共享文件默认位于/srv/ftp目录中。 + +**允许上传:** + +为了允许ftp用户可以修改文件系统的内容,如上传文件等,“write_enable”标志必须设置为 YES。 + + write_enable=YES + +**允许本地用户登陆:** + +为了允许文件/etc/passwd中记录的用户可以登陆ftp服务,“local_enable”标记必须设置为YES。 + + local_enable=YES + 
+**匿名用户登陆** + +下面配置内容控制匿名用户是否允许登陆: + + # Allow anonymous login + anonymous_enable=YES + # No password is required for an anonymous login (Optional) + no_anon_password=YES + # Maximum transfer rate for an anonymous client in Bytes/second (Optional) + anon_max_rate=30000 + # Directory to be used for an anonymous login (Optional) + anon_root=/example/directory/ + +**根目录限制(Chroot Jail)** + +(译者注:chroot jail是类unix系统中的一种安全机制,用于修改进程运行的根目录环境,限制该线程不能感知到其根目录树以外的其他目录结构和文件的存在。详情参看[chroot jail][4]) + +有时我们需要设置根目录(chroot)环境来禁止用户离开他们的家(home)目录。在配置文件中增加/修改下面配置开启根目录限制(Chroot Jail): + + chroot_list_enable=YES + chroot_list_file=/etc/vsftpd.chroot_list + +“chroot_list_file”变量指定根目录监狱所包含的文件/目录(译者注:即用户只能访问这些文件/目录) + +最后你必须重启ftp服务,在命令行中输入以下命令: + + sudo systemctl restart vsftpd + +到此为止,你的ftp服务已经搭建完成并且启动了 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/set-ftp-server-linux/ + +作者:[alimiracle][a] +译者:[cvsher](https://github.com/cvsher) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/ali/ +[1]:https://en.wikipedia.org/wiki/File_Transfer_Protocol +[2]:https://security.appspot.com/vsftpd.html +[3]:http://www.openssh.com/ +[4]:https://zh.wikipedia.org/wiki/Chroot From e85221fe828f8879170fabe02cd7d835e971ee89 Mon Sep 17 00:00:00 2001 From: cvsher <478990879@qq.com> Date: Tue, 8 Sep 2015 11:28:29 +0800 Subject: [PATCH 455/697] Update 20150906 How To Set Up Your FTP Server In Linux.md --- .../tech/20150906 How To Set Up Your FTP Server In Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20150906 How To Set Up Your FTP Server In Linux.md b/translated/tech/20150906 How To Set Up Your FTP Server In Linux.md index e2a27dd36b..8c754786fe 100644 --- a/translated/tech/20150906 How To Set Up Your FTP Server In Linux.md +++ b/translated/tech/20150906 How To Set Up Your FTP Server In 
Linux.md @@ -18,7 +18,7 @@ FTP地址和HTTP地址(即网页地址)非常相似,只是FTP地址使用f VSFTPD是一个自称为最安全的FTP服务端软件。事实上VSFTPD的前两个字母表示“非常安全的(very secure)”。该软件的构建绕开了FTP协议的漏洞。 -尽管如此,你应该知道对于安全的文件管理和传输还有更好的解决方法,如:SFTP(使用[OpenSSH][3])。FTP协议对于共享非敏感数据是非常有用和可靠的。 +尽管如此,你应该知道还有更安全的方法进行文件管理和传输,如:SFTP(使用[OpenSSH][3])。FTP协议对于共享非敏感数据是非常有用和可靠的。 ####在rpm distributions中安装VSFTPD:#### From 4f869d3e23e53fd746c3e7761007f9c5680d92de Mon Sep 17 00:00:00 2001 From: cvsher <478990879@qq.com> Date: Tue, 8 Sep 2015 11:46:13 +0800 Subject: [PATCH 456/697] Delete 20150906 How To Set Up Your FTP Server In Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成,删除源文件 --- ... How To Set Up Your FTP Server In Linux.md | 103 ------------------ 1 file changed, 103 deletions(-) delete mode 100644 sources/tech/20150906 How To Set Up Your FTP Server In Linux.md diff --git a/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md b/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md deleted file mode 100644 index 718539b7a1..0000000000 --- a/sources/tech/20150906 How To Set Up Your FTP Server In Linux.md +++ /dev/null @@ -1,103 +0,0 @@ -translating by cvsher -How To Set Up Your FTP Server In Linux -================================================================================ -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Setup-FTP-Server-in-Linux.jpg) - -In this lesson, I will explain to you how to Set up your FTP server. But first, let me quickly tell you what is FTP. - -### What is FTP? ### - -[FTP][1] is an acronym for File Transfer Protocol. As the name suggests, FTP is used to transfer files between computers on a network. You can use FTP to exchange files between computer accounts, transfer files between an account and a desktop computer, or access online software archives. Keep in mind, however, that many FTP sites are heavily used and require several attempts before connecting. 
- -An FTP address looks a lot like an HTTP or website address except it uses the prefix ftp:// instead of http://. - -### What is an FTP Server? ### - -Typically, a computer with an FTP address is dedicated to receive an FTP connection. A computer dedicated to receiving an FTP connection is referred to as an FTP server or FTP site. - -Now, let’s begin a special adventure. We will make FTP server to share files with friends and family. I will use [vsftpd][2] for this purpose. - -VSFTPD is an FTP server software which claims to be the most secure FTP software. In fact, the first two letters in VSFTPD, stand for “very secure”. The software was built around the vulnerabilities of the FTP protocol. - -Nevertheless, you should always remember that there are better solutions for secure transfer and management of files such as SFTP (uses [OpenSSH][3]). The FTP protocol is particularly useful for sharing non-sensitive data and is very reliable at that. - -#### Installing VSFTPD in rpm distributions: #### - -You can quickly install VSFTPD on your server through the command line interface with: - - dnf -y install vsftpd - -#### Installing VSFTPD in deb distributions: #### - -You can quickly install VSFTPD on your server through the command line interface with: - -sudo apt-get install vsftpd - -#### Installing VSFTPD in Arch distribution: #### - -You can quickly install VSFTPD on your server through the command line interface with: - - sudo pacman -S vsftpd - -#### Configuring FTP server #### - -Most VSFTPD’s configuration takes place in /etc/vsftpd.conf. The file itself is well-documented, so this section only highlights some important changes you may want to make. For all available options and basic documentation see the man pages: - - man vsftpd.conf - -Files are served by default from /srv/ftp as per the Filesystem Hierarchy Standard. 
-
-**Enable Uploading:**
-
-The “write_enable” flag must be set to YES in order to allow changes to the filesystem, such as uploading:
-
-    write_enable=YES
-
-**Allow Local Users to Log In:**
-
-In order to allow users listed in /etc/passwd to log in, the “local_enable” directive must look like this:
-
-    local_enable=YES
-
-**Anonymous Login**
-
-The following lines control whether anonymous users can log in:
-
-    # Allow anonymous login
-    anonymous_enable=YES
-    # No password is required for an anonymous login (Optional)
-    no_anon_password=YES
-    # Maximum transfer rate for an anonymous client in Bytes/second (Optional)
-    anon_max_rate=30000
-    # Directory to be used for an anonymous login (Optional)
-    anon_root=/example/directory/
-
-**Chroot Jail**
-
-It is possible to set up a chroot environment, which prevents users from leaving their home directory. To enable this, add/change the following lines in the configuration file:
-
-    chroot_list_enable=YES
-    chroot_list_file=/etc/vsftpd.chroot_list
-
-The “chroot_list_file” variable specifies the file that lists the jailed users.
-
-In the end you must restart your FTP server. Type this in your command line:
-
-    sudo systemctl restart vsftpd
-
-That’s it. Your FTP server is up and running.
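For repeatable setups, those edits can be scripted instead of made by hand. A minimal sketch (the `set_opt` helper and the sample file name are illustrative, not part of vsftpd; point it at the real `/etc/vsftpd.conf` only deliberately, and keep a backup):

```shell
#!/bin/sh
# Toggle key=value options in a vsftpd-style config file.
# Works on a throwaway sample here; aim it at /etc/vsftpd.conf only on purpose.
conf=vsftpd.conf.sample
printf '#write_enable=NO\nanonymous_enable=NO\n' > "$conf"

set_opt() {
    key=$1 value=$2
    if grep -q "^#\{0,1\}$key=" "$conf"; then
        # Update (and uncomment) the existing line.
        sed -i "s|^#\{0,1\}$key=.*|$key=$value|" "$conf"
    else
        echo "$key=$value" >> "$conf"
    fi
}

set_opt write_enable YES   # allow uploads
set_opt local_enable YES   # allow users from /etc/passwd to log in
cat "$conf"
```

After editing the real file, the same `sudo systemctl restart vsftpd` shown above applies the changes.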
- --------------------------------------------------------------------------------- - -via: http://itsfoss.com/set-ftp-server-linux/ - -作者:[alimiracle][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/ali/ -[1]:https://en.wikipedia.org/wiki/File_Transfer_Protocol -[2]:https://security.appspot.com/vsftpd.html -[3]:http://www.openssh.com/ From 00ff0244546b628399bc86e32b36f3637394aa24 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 8 Sep 2015 15:16:30 +0800 Subject: [PATCH 457/697] =?UTF-8?q?20150908-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...tall and Configure Plank Dock in Ubuntu.md | 66 +++++++++++++ ... Files Directly From the HDD with GRUB2.md | 96 +++++++++++++++++++ 2 files changed, 162 insertions(+) create mode 100644 sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md create mode 100644 sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md diff --git a/sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md b/sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md new file mode 100644 index 0000000000..4f0a5f9ea1 --- /dev/null +++ b/sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md @@ -0,0 +1,66 @@ +How to Download, Install, and Configure Plank Dock in Ubuntu +================================================================================ +It’s a well-known fact that Linux is extremely customizable with users having a lot of options to choose from – be it the operating systems’ various distributions or desktop environments available for a single distro. Like users of any other OS, Linux users also have different tastes and preferences, especially when it comes to desktop. 
+ +While some users aren’t particularly bothered about their desktop, others take special care to make sure that their desktop looks cool and attractive, something for which there are various applications available. One such application that brings life to your desktop – especially if you use a global menu on the top – is the dock. There are many dock applications available for Linux; if you’re looking for the simplest one, then look no further than [Plank][1], which we’ll be discussing in this article. + +**Note**: the examples and commands mentioned here have been tested on Ubuntu (version 14.10) and Plank version 0.9.1.1383. + +### Plank ### + +The official documentation describes Plank as the “simplest dock on the planet.” The project’s goal is to provide just what a dock needs, although it’s essentially a library which can be extended to create other dock programs with more advanced features. + +What’s worth mentioning here is that Plank, which comes pre-installed in elementary OS, is the underlying technology for Docky, a popular dock application which is very similar in functionality to Mac OS X’s Dock. + +### Download and Install ### + +You can download and install Plank by executing the following commands on your terminal: + + sudo add-apt-repository ppa:docky-core/stable + sudo apt-get update + sudo apt-get install plank + +Once installed successfully, you can open the application by typing the name Plank in Unity Dash (see image below), or open it from the App Menu if you aren’t using the Unity environment. + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-unity-dash.png) + +### Features ### + +Once the Plank dock is enabled, you’ll see it sitting at the center-bottom of your desktop. + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-enabled-new.jpg) + +As you can see in the image above, the dock contains some application icons with an orange color indication below those which are currently running. 
Needless to say, you can click an icon to open that application. Also, a right-click on any application icon will produce some more options that you might be interested in. For example, see the screen-shot below: + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-right-click-icons-new.jpg) + +To access the configuration options, you’ll have to do a right-click on Plank’s icon (which is the first one from the left), and then click the Preferences option. This will produce the following window. + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-preferences.png) + +As you can see, the preference window consists of two tabs: Appearance and Behavior, with the former being selected by default. The Appearance tab contains settings related to the Plank theme, the dock’s position, and alignment, as well as that related to icons, while the Behavior tab contains settings related to the dock itself. + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-behavior-settings.png) + +For example, I changed the position of the dock to Right from within the Appearance tab and locked the icons (which means no “Keep in Dock” option on right-click) from the Behavior tab. + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-right-lock-new.jpg) + +As you can see in the screen-shot above, the changes came into effect. Similarly, you can tweak any available setting as per your requirement. + +### Conclusion ### + +Like I said in the beginning, having a dock isn’t mandatory. However, using one definitely makes things convenient, especially if you’ve been using Mac and have recently switched over to Linux for whatever reason. For its part, Plank not only offers simplicity, but dependability and stability as well – the project is well-maintained. 
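One thing the Preferences window does not offer is starting Plank automatically at login. A hedged sketch using a standard XDG autostart entry (the `.desktop` contents below are a common desktop-environment pattern, not something taken from Plank's own documentation):

```shell
#!/bin/sh
# Create an XDG autostart entry so Plank launches at login.
# AUTOSTART_DIR defaults to the standard per-user location; override it to test safely.
AUTOSTART_DIR="${AUTOSTART_DIR:-$HOME/.config/autostart}"
mkdir -p "$AUTOSTART_DIR"
cat > "$AUTOSTART_DIR/plank.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Plank
Exec=plank
X-GNOME-Autostart-enabled=true
EOF
echo "wrote $AUTOSTART_DIR/plank.desktop"
```

Deleting the file reverts the change; most desktops also expose the same entries in their "Startup Applications" settings.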
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/download-install-configure-plank-dock-ubuntu/
+
+作者:[Himanshu Arora][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/himanshu/
+[1]:https://launchpad.net/plank
\ No newline at end of file
diff --git a/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md b/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md
new file mode 100644
index 0000000000..7de3640532
--- /dev/null
+++ b/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md
@@ -0,0 +1,96 @@
+How to Run ISO Files Directly From the HDD with GRUB2
+================================================================================
+![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-featured.png)
+
+Most Linux distros offer a live environment, which you can boot up from a USB drive, to let you test the system without installing it. You can either use it to evaluate the distro or as a disposable OS. While it is easy to copy these onto a USB disk, in certain cases one might want to run the same ISO image often or run different ones regularly. GRUB 2 can be configured so that you do not need to burn the ISOs to disc or use a USB drive, but can run a live environment directly from the boot menu.
+
+### Obtaining and checking bootable ISO images ###
+
+To obtain an ISO image, you should usually visit the website of the desired distribution and download any image that is compatible with your setup. If the image can be started from a USB drive, it should be able to start from the GRUB menu as well.
+
+Once the image has finished downloading, you should check its integrity by running a simple md5 check on it.
This will output a long string of alphanumeric characters,
+
+![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-md5.png)
+
+which you can compare against the MD5 checksum provided on the download page. The two should be identical.
+
+### Setting up GRUB 2 ###
+
+ISO images contain full systems. All you need to do is direct GRUB2 to the appropriate file, and tell it where it can find the kernel and the initial ramdisk or initramfs (depending on which one your distribution uses).
+
+In this example, a Kubuntu 15.04 live environment will be set up to run on an Ubuntu 14.04 box as a GRUB menu item. It should work for most newer Ubuntu-based systems and derivatives. If you have a different system or want to achieve something else, you can get some ideas on how to do this from one of [these files][1], although it will require a little experience with GRUB.
+
+In this example the file `kubuntu-15.04-desktop-amd64.iso`
+
+lives in `/home/maketecheasier/TempISOs/` on `/dev/sda1`.
+
+To make GRUB2 look for it in the right place, you need to edit the following file:
+
+    /etc/grub.d/40_custom
+
+![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-40-custom-empty.png)
+
+To start Kubuntu from the above location, add the following code (after adjusting it to your needs) below the commented section, without modifying the original content.
+
+    menuentry "Kubuntu 15.04 ISO" {
+    set isofile="/home/maketecheasier/TempISOs/kubuntu-15.04-desktop-amd64.iso"
+    loopback loop (hd0,1)$isofile
+    echo "Starting $isofile..."
+    linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash
+    initrd (loop)/casper/initrd.lz
+    }
+
+![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-40-custom-new.png)
+
+### Breaking down the above code ###
+
+First we set up a variable named `$isofile`. This is where the ISO file is located.
If you want to change to a different ISO, you need to change the bit where it says set `isofile="/path/to/file/name-of-iso-file.iso"`.
+
+The next line is where you specify the loopback device; you also need to give it the right partition number. This is the bit where it says
+
+    loopback loop (hd0,1)$isofile
+
+Note the hd0,1 bit; it is important. This means first HDD, first partition (`/dev/sda1`).
+
+GRUB’s naming here is slightly confusing. For HDDs, it starts counting from “0”, making the first HDD #0, the second one #1, the third one #2, etc. However, for partitions, it starts counting from 1. The first partition is #1, the second is #2, etc. There might be a good reason for this, but not necessarily a sane one (UX-wise it is a disaster, to be sure).
+
+This makes the first disk, first partition, which in Linux would usually look something like `/dev/sda1`, become `hd0,1` in GRUB2. The second disk, third partition would be `hd1,3`, and so on.
+
+The next important line is
+
+    linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash
+
+It will load the kernel image. On newer Ubuntu live CDs, this would be in the `/casper` directory and called `vmlinuz.efi`. If you use a different system, your kernel might be missing the `.efi` extension or be located somewhere else entirely (you can easily check this by opening the ISO file with an archive manager and looking inside `/casper`). The last options, `quiet splash`, would be your regular GRUB options, if you care to change them.
+
+Finally
+
+    initrd (loop)/casper/initrd.lz
+
+will load `initrd`, which is responsible for loading a RAM disk into memory for bootup.
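Since only the ISO path and the `(hdX,Y)` device change from one entry to the next, the stanza can be generated to avoid typos. A small sketch (`iso_entry` is a hypothetical helper, not a GRUB tool; it assumes the Ubuntu-style `/casper` layout discussed above and writes to a sample file for review rather than touching GRUB's configuration):

```shell
#!/bin/sh
# Print a GRUB menu entry for a loop-booted ISO.
# $1 = ISO path as GRUB will see it, $2 = GRUB device such as hd0,1.
iso_entry() {
    iso=$1 dev=$2
    cat <<EOF
menuentry "$(basename "$iso" .iso) ISO" {
    set isofile="$iso"
    loopback loop ($dev)\$isofile
    echo "Starting \$isofile..."
    linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=\${isofile} quiet splash
    initrd (loop)/casper/initrd.lz
}
EOF
}

# Review the output before appending it to GRUB's custom-menu file.
iso_entry /home/maketecheasier/TempISOs/kubuntu-15.04-desktop-amd64.iso hd0,1 > 40_custom.sample
cat 40_custom.sample
```

Only after checking the sample would you append it to the custom-menu file and update GRUB as described below.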
+ +### Booting into your live system ### + +To make it all work, you will only need to update GRUB2 + + sudo update-grub + +![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-updare-grub.png) + +When you reboot your system, you should be presented with a new GRUB entry which will allow you to load into the ISO image you’ve just set up. + +![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-grub-menu.png) + +Selecting the new entry should boot you into the live environment, just like booting from a DVD or USB would. + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/run-iso-files-hdd-grub2/ + +作者:[Attila Orosz][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:http://git.marmotte.net/git/glim/tree/grub2 \ No newline at end of file From b4e47be63df2f95725301cc0a5cf80c3fa0c820c Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 8 Sep 2015 15:48:28 +0800 Subject: [PATCH 458/697] =?UTF-8?q?20150906-2=20=E9=80=89=E9=A2=98=20Learn?= =?UTF-8?q?=20with=20Linux=20=E4=B8=93=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../Learn with Linux--Learning Music.md | 155 ++++++++++++++++++ .../Learn with Linux--Learning to Type.md | 121 ++++++++++++++ ...-Master Your Math with These Linux Apps.md | 126 ++++++++++++++ .../Learn with Linux--Physics Simulation.md | 107 ++++++++++++ .../Learn with Linux--Two Geography Apps.md | 103 ++++++++++++ 5 files changed, 612 insertions(+) create mode 100644 sources/tech/Learn with Linux/Learn with Linux--Learning Music.md create mode 100644 sources/tech/Learn with Linux/Learn with Linux--Learning to Type.md create mode 100644 sources/tech/Learn with Linux/Learn with Linux--Master Your Math with 
These Linux Apps.md
 create mode 100644 sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md
 create mode 100644 sources/tech/Learn with Linux/Learn with Linux--Two Geography Apps.md
diff --git a/sources/tech/Learn with Linux/Learn with Linux--Learning Music.md b/sources/tech/Learn with Linux/Learn with Linux--Learning Music.md
new file mode 100644
index 0000000000..e6467eb810
--- /dev/null
+++ b/sources/tech/Learn with Linux/Learn with Linux--Learning Music.md
@@ -0,0 +1,155 @@
+Learn with Linux: Learning Music
+================================================================================
+![](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-featured.png)
+
+This article is part of the [Learn with Linux][1] series:
+
+- [Learn with Linux: Learning to Type][2]
+- [Learn with Linux: Physics Simulation][3]
+- [Learn with Linux: Learning Music][4]
+- [Learn with Linux: Two Geography Apps][5]
+- [Learn with Linux: Master Your Math with These Linux Apps][6]
+
+Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.
+
+Learning music is a great pastime. Training your ears to identify scales and chords and mastering an instrument or your own voice requires lots of practice and can become difficult. Music theory is extensive. There is much to memorize, and to turn it into a “skill” you will need diligence. Linux offers exceptional software to help you along your musical journey. It will not help you become a professional musician instantly, but it can ease the process of learning, being a great aid and reference point.
+
+### GNU Solfège ###
+
+[Solfège][7] is a popular music education method that is used at all levels of music education around the world.
Many popular methods (like the Kodály method) use Solfège as their basis. GNU Solfège is a great piece of software aimed more at practising Solfège than learning it. It assumes the student has already acquired the basics and wishes to practise what they have learned.
+
+As the developer states on the GNU website:
+
+> “When you study music on high school, college, music conservatory, you usually have to do ear training. Some of the exercises, like sight singing, is easy to do alone [sic]. But often you have to be at least two people, one making questions, the other answering. […] GNU Solfège tries to help out with this. With Solfege you can practise the more simple and mechanical exercises without the need to get others to help you. Just don’t forget that this program only touches a part of the subject.”
+
+The software delivers on its promise; you can practise essentially everything with audible and visual aids.
+
+GNU Solfège is in the Debian (therefore Ubuntu) repositories. To get it, just type the following command into a terminal:
+
+    sudo apt-get install solfege
+
+When it loads, you find yourself on a simple starting screen.
+
+![learnmusic-solfege-main](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-main.png)
+
+The number of options is almost overwhelming. Most of the links will open sub-categories
+
+![learnmusic-solfege-scales](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-scales.png)
+
+from where you can select individual exercises.
+
+![learnmusic-solfege-hun](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-hun.png)
+
+There are practice sessions and tests. Both will be able to play the tones through any connected MIDI device or just your sound card’s MIDI player. The exercises often have visual notation and the ability to play back the sequence slowly.
One important note about Solfège is that under Ubuntu you might not be able to hear anything with the default setup (unless you have a MIDI device connected). If that is the case, head over to “File -> Preferences,” select sound setup and choose the appropriate option for your system (choosing ALSA would probably work in most cases).
+
+![learnmusic-solfege-midi](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-solfege-midi.png)
+
+Solfège could be very helpful for your daily practice. Use it regularly and you will have trained your ear before you can sing do-re-mi.
+
+### Tete (ear trainer) ###
+
+[Tete][8] (This ear trainer ‘ere) is a Java application for simple, yet efficient, [ear training][9]. It helps you identify a variety of scales by playing them back under various circumstances, from different roots and on different MIDI sounds. [Download it from SourceForge][10]. You then need to unzip the downloaded file.
+
+    unzip Tete-*
+
+Enter the unpacked directory:
+
+    cd Tete-*
+
+Assuming you have Java installed on your system, you can run the Java file with
+
+    java -jar Tete-[your version]
+
+(To autocomplete the above command, just press the Tab key after typing “Tete-“.)
+
+Tete has a simple, one-page interface with everything on it.
+
+![learnmusic-tete-main](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-tete-main.png)
+
+You can choose to play scales (see above), chords,
+
+![learnmusic-tete-chords](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-tete-chords.png)
+
+or intervals.
+
+![learnmusic-tete-intervals](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-tete-intervals.png)
+
+You can “fine tune” your experience with various options, including the MIDI instrument’s sound, what note to start from, ascending or descending scales, and how slow/fast the playback should be. Tete’s SourceForge page includes a very useful tutorial that explains most aspects of the software.
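The `Tete-[your version]` placeholder trips some people up, so locating the jar can be scripted. A tiny sketch (the `find_tete_jar` helper and the demo directory layout are illustrative assumptions, not something shipped with Tete):

```shell
#!/bin/sh
# Locate the unpacked Tete jar regardless of its version suffix,
# so the exact "java -jar Tete-x.y" name need not be remembered.
find_tete_jar() {
    ls "$1"/Tete-*/Tete-*.jar 2>/dev/null | head -n 1
}

# Demo with a throwaway directory standing in for the real download folder:
mkdir -p demo/Tete-2.1
touch demo/Tete-2.1/Tete-2.1.jar
jar=$(find_tete_jar demo)
echo "would run: java -jar $jar"
```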
+
+### JalMus ###
+
+Jalmus is a Java-based keyboard note-reading trainer. It works with attached MIDI keyboards or with the on-screen virtual keyboard. It has many simple lessons and exercises to train in music reading. Unfortunately, its development was discontinued in 2013, but the software appears to still be functional.
+
+To get Jalmus, head over to the [SourceForge page][11] of its last version (2.3) to get the Java installer, or just type the following command into a terminal:
+
+    wget http://garr.dl.sourceforge.net/project/jalmus/Jalmus-2.3/installjalmus23.jar
+
+Once the download finishes, load the installer with
+
+    java -jar installjalmus23.jar
+
+You will be guided through a simple Java-based installer that was made for cross-platform installation.
+
+Jalmus’s main screen is plain.
+
+![learnmusic-jalmus-main](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-main.jpg)
+
+You can find lessons of varying difficulty in the Lessons menu. They range from very simple ones, where one note swims in from the left, and the corresponding key lights up on the on-screen keyboard …
+
+![learnmusic-jalmus-singlenote](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-singlenote.png)
+
+… to difficult ones with many notes swimming in from the right, where you are required to repeat the sequence on your keyboard.
+
+![learnmusic-jalmus-multinote](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-multinote.png)
+
+Jalmus also includes single-note reading exercises, which are very similar to the lessons, only without the visual hints; your score is displayed after you finish. It also aids rhythm reading of varying difficulty, where the rhythm is both audible and visually marked.
A metronome (audible and visual) aids the understanding,
+
+![learnmusic-jalmus-rhythm](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-rhythm.png)
+
+and score reading, where multiple notes will be played.
+
+![learnmusic-jalmus-score](https://www.maketecheasier.com/assets/uploads/2015/07/learnmusic-jalmus-score.png)
+
+All these options are configurable; you can switch features on and off as you like.
+
+All things considered, Jalmus probably works best for rhythm training. Although it was not necessarily its intended purpose, the software really excels in this particular use case.
+
+### Notable mentions ###
+
+#### TuxGuitar ####
+
+For guitarists, [TuxGuitar][12] works much like Guitar Pro on Windows (and it can also read Guitar Pro files).
+
+#### PianoBooster ####
+
+[Piano Booster][13] can help with piano skills. It is designed to play MIDI files, which you can play along with on an attached keyboard, watching the score roll past on the screen.
+
+### Conclusion ###
+
+Linux offers many great tools for learning, and if your particular interest is music, you will not be left without software to aid your practice. Surely there are many more excellent software tools available for music students than were mentioned above. Do you know of any? Please let us know in the comments below.
+ +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/linux-learning-music/ + +作者:[Attila Orosz][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:https://www.maketecheasier.com/series/learn-with-linux/ +[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ +[3]:https://www.maketecheasier.com/linux-physics-simulation/ +[4]:https://www.maketecheasier.com/linux-learning-music/ +[5]:https://www.maketecheasier.com/linux-geography-apps/ +[6]:https://www.maketecheasier.com/learn-linux-maths/ +[7]:https://en.wikipedia.org/wiki/Solf%C3%A8ge +[8]:http://tete.sourceforge.net/index.shtml +[9]:https://en.wikipedia.org/wiki/Ear_training +[10]:http://sourceforge.net/projects/tete/files/latest/download +[11]:http://sourceforge.net/projects/jalmus/files/Jalmus-2.3/ +[12]:http://tuxguitar.herac.com.ar/ +[13]:http://www.linuxlinks.com/article/20090517041840856/PianoBooster.html \ No newline at end of file diff --git a/sources/tech/Learn with Linux/Learn with Linux--Learning to Type.md b/sources/tech/Learn with Linux/Learn with Linux--Learning to Type.md new file mode 100644 index 0000000000..51cef0f1a8 --- /dev/null +++ b/sources/tech/Learn with Linux/Learn with Linux--Learning to Type.md @@ -0,0 +1,121 @@ +Learn with Linux: Learning to Type +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-featured.png) + +This article is part of the [Learn with Linux][1] series: + +- [Learn with Linux: Learning to Type][2] +- [Learn with Linux: Physics Simulation][3] +- [Learn with Linux: Learning Music][4] +- [Learn with Linux: Two Geography Apps][5] +- [Learn with Linux: Master Your Math with These Linux Apps][6] + +Linux offers great 
educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.
+
+Typing is taken for granted by many people; today, being keyboard savvy often comes as second nature. Yet how many of us still type with two fingers, even if ever so fast? Typing was once taught in schools, but slowly the art of ten-finger typing is giving way to two thumbs.
+
+The following two applications can help you master the keyboard so that your next thought does not get lost while your fingers catch up. They were chosen for their simplicity and ease of use. While there are some more flashy or better looking typing apps out there, the following two will get the basics covered and offer the easiest way to start out.
+
+### TuxType (or TuxTyping) ###
+
+TuxType is for children. Young students can learn how to type with ten fingers with simple lessons and practice their newly-acquired skills in fun games.
+
+Debian and derivatives (therefore all Ubuntu derivatives) should have TuxType in their standard repositories. To install it, simply type
+
+    sudo apt-get install tuxtype
+
+The application starts with a simple menu screen featuring Tux and some really bad MIDI music (fortunately, the sound can be turned off easily with the icon in the lower left corner).
+
+![learntotype-tuxtyping-main](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-main.jpg)
+
+The top two choices, “Fish Cascade” and “Comet Zap,” represent typing games, but to start learning you need to head over to the lessons.
+
+There are forty simple built-in lessons to choose from. Each one of these will take a letter from the keyboard and make the student practice while giving visual hints, such as which finger to use.
+
+![learntotype-tuxtyping-exd1](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-exd1.jpg)
+
+![learntotype-tuxtyping-exd2](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-exd2.jpg)
+
+For more advanced practice, phrase typing is also available, although for some reason this is hidden under the options menu.
+
+![learntotype-tuxtyping-phrase](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-phrase.jpg)
+
+The games are good for speed and accuracy as the player helps Tux catch falling fish
+
+![learntotype-tuxtyping-fish](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-fish.jpg)
+
+or zap incoming asteroids by typing the words written over them.
+
+![learntotype-tuxtyping-zap](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-tuxtyping-zap.jpg)
+
+Besides being a fun way to practice, these games teach spelling, speed, and hand-eye coordination, as you must type while also watching the screen, building a foundation for touch typing, if taken seriously.
+
+### GNU Typist (gtypist) ###
+
+For adults and more experienced typists, there is GNU Typist, a console-based application developed by the GNU project.
+
+GNU Typist is also carried by most Debian derivatives’ main repos. Installing it is as easy as typing
+
+    sudo apt-get install gtypist
+
+You will probably not find it in the Applications menu; instead you should start it from a terminal window.
+
+    gtypist
+
+The main menu is simple, no-nonsense and frill-free, yet it is evident how much the software has to offer. Typing lessons of all levels are immediately accessible.
+
+![learntotype-gtype-main](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-main.png)
+
+The lessons are straightforward and detailed.
+
+![learntotype-gtype-lesson](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-lesson.png)
+
+The interactive practice sessions offer little more than highlighting your mistakes. Instead of flashy visuals, you have the chance to focus on practising. At the end of each lesson you get some simple statistics on how you’ve been doing. If you make too many mistakes, you cannot proceed until you pass the level.
+
+While the basic lessons only require you to repeat some characters, more advanced drills will have the practitioner type whole sentences,
+
+![learntotype-gtype-warmup](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-warmup.png)
+
+where of course the three percent error margin means you are allowed even fewer mistakes,
+
+![learntotype-gtype-warmupfail](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-warmupfail.png)
+
+or work toward specific goals, as in the “Balanced keyboard drill.”
+
+![learntotype-gtype-balanceddrill](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-balanceddrill.png)
+
+Simple speed drills have you type quotes,
+
+![learntotype-gtype-speed-simple](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-speed-simple.png)
+
+while more advanced ones will make you write longer texts taken from classics.
+
+![learntotype-gtype-speed-advanced](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-speed-advanced.png)
+
+If you’d prefer a different language, more lessons can also be loaded as command line arguments.
+
+![learntotype-gtype-more-lessons](https://www.maketecheasier.com/assets/uploads/2015/07/learntotype-gtype-more-lessons.png)
+
+### Conclusion ###
+
+If you care to hone your typing skills, Linux has great software to offer.
The two basic, yet feature-rich, applications discussed above will cater to most aspiring typists’ needs. If you use or know of another great typing application, please don’t hesitate to let us know below in the comments. + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/learn-to-type-in-linux/ + +作者:[Attila Orosz][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:https://www.maketecheasier.com/series/learn-with-linux/ +[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ +[3]:https://www.maketecheasier.com/linux-physics-simulation/ +[4]:https://www.maketecheasier.com/linux-learning-music/ +[5]:https://www.maketecheasier.com/linux-geography-apps/ +[6]:https://www.maketecheasier.com/learn-linux-maths/ \ No newline at end of file diff --git a/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md b/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md new file mode 100644 index 0000000000..f9def558fb --- /dev/null +++ b/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md @@ -0,0 +1,126 @@ +Learn with Linux: Master Your Math with These Linux Apps +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-featured.png) + +This article is part of the [Learn with Linux][1] series: + +- [Learn with Linux: Learning to Type][2] +- [Learn with Linux: Physics Simulation][3] +- [Learn with Linux: Learning Music][4] +- [Learn with Linux: Two Geography Apps][5] +- [Learn with Linux: Master Your Math with These Linux Apps][6] + +Linux offers great educational software and many excellent tools to aid students of all grades 
and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.
+
+Mathematics is the core of computing. If one would expect a great operating system, such as GNU/Linux, to excel in any discipline, it would be Math. If you seek mathematical applications, you will not be disappointed. Linux offers many excellent tools that will make Mathematics look as intimidating as it ever did, but at least they will simplify your way of using it.
+
+### Gnuplot ###
+
+Gnuplot is a command-line scriptable and versatile graphing utility for different platforms. Despite its name, it is not part of the GNU operating system. Although it is not freely licensed, it’s freeware (meaning it’s copyrighted but free to use).
+
+To install `gnuplot` on an Ubuntu (or derivative) system, type
+
+    sudo apt-get install gnuplot gnuplot-x11
+
+into a terminal window. To start the program, type
+
+    gnuplot
+
+You will be presented with a simple command line interface
+
+![learnmath-gnuplot](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot.png)
+
+into which you can start typing functions directly. The plot command will draw a graph.
+
+Typing, for instance,
+
+    plot sin(x)/x
+
+into the `gnuplot` prompt, will open another window, wherein the graph is presented.
+
+![learnmath-gnuplot-plot1](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot1.png)
+
+You can also set different attributes of the graphs in-line. For example, specifying “title” will give them just that.
+
+    plot sin(x) title 'Sine Function', tan(x) title 'Tangent'
+
+![learnmath-gnuplot-plot2](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot2.png)
+
+You can give things a bit more depth and draw 3D graphs with the `splot` command. 
+
+    splot sin(x*y/20)
+
+![learnmath-gnuplot-plot3](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot3.png)
+
+The plot window has a few basic configuration options,
+
+![learnmath-gnuplot-options](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-options.png)
+
+but the true power of `gnuplot` lies within its command line and scripting capabilities. The extensive documentation of `gnuplot` can be found [here][7], with a great tutorial for the previous version [on Duke University’s website][8].
+
+### Maxima ###
+
+[Maxima][9] is a computer algebra system developed from the original sources of Macsyma. According to its SourceForge page,
+
+> “Maxima is a system for the manipulation of symbolic and numerical expressions, including differentiation, integration, Taylor series, Laplace transforms, ordinary differential equations, systems of linear equations, polynomials, sets, lists, vectors, matrices and tensors. Maxima yields high precision numerical results by using exact fractions, arbitrary-precision integers and variable-precision floating-point numbers. Maxima can plot functions and data in two and three dimensions.”
+
+Binary packages for Maxima, as well as for its graphical interface, are available in most Ubuntu derivatives. To install them all, type
+
+    sudo apt-get install maxima xmaxima wxmaxima
+
+into a terminal window. Maxima is a command line utility with not much of a UI, but if you start `wxmaxima`, you’ll get a simple, yet powerful GUI.
+
+![learnmath-maxima](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima.png)
+
+You can start using this by simply starting to type. 
(Hint: Enter will add more lines; if you want to evaluate an expression, use “Shift + Enter.”) + +Maxima can be used for very simple problems, as it also acts as a calculator, + +![learnmath-maxima-1and1](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-1and1.png) + +and much more complex ones as well. + +![learnmath-maxima-functions](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-functions.png) + +It uses `gnuplot` to draw simple + +![learnmath-maxima-plot](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-plot.png) + +and more elaborate graphs. + +![learnmath-maxima-plot2](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-plot2.png) + +(It needs the `gnuplot-x11` package to display them.) + +Besides beautifying the expressions, Maxima makes it possible to export them in latex format, or do some operations on the highlighted functions with a right-click context menu, + +![learnmath-maxima-menu](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-menu.png) + +while its main menus offer an overwhelming amount of functionality. Of course, Maxima is capable of much more than this. It has an extensive documentation [available online][10]. + +### Conclusion ### + +Mathematics is not an easy subject, and the excellent math software on Linux does not make it look easier, yet these applications make using Mathematics much more straightforward and productive. The above two applications are just an introduction to what Linux has to offer. If you are seriously engaged in math and need even more functionality with great documentation, you should check out the [Mathbuntu project][11]. 
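As a footnote to the Maxima section: the exact-arithmetic idea quoted above (fractions kept as fractions, integers of arbitrary size) is easy to get a feel for even outside Maxima. Here is a minimal sketch in Python, purely an illustration of the concept rather than anything Maxima-specific:

```python
from fractions import Fraction
from math import factorial

# Exact fractions: 1/3 + 1/3 + 1/3 is exactly 1, with no rounding involved.
third = Fraction(1, 3)
print(third + third + third == 1)    # True

# The same sum in binary floating point falls just short of the exact value.
print(0.1 + 0.1 + 0.1 == 0.3)       # False

# Arbitrary-precision integers: 100! is computed exactly, all 158 digits of it.
print(len(str(factorial(100))))      # 158
```

Maxima does all of this (and far more) natively; the point is only that “exact fractions and arbitrary-precision integers” is precisely what keeps results like these from silently losing precision.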
+ +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/learn-linux-maths/ + +作者:[Attila Orosz][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:https://www.maketecheasier.com/series/learn-with-linux/ +[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ +[3]:https://www.maketecheasier.com/linux-physics-simulation/ +[4]:https://www.maketecheasier.com/linux-learning-music/ +[5]:https://www.maketecheasier.com/linux-geography-apps/ +[6]:https://www.maketecheasier.com/learn-linux-maths/ +[7]:http://www.gnuplot.info/documentation.html +[8]:http://people.duke.edu/~hpgavin/gnuplot.html +[9]:http://maxima.sourceforge.net/ +[10]:http://maxima.sourceforge.net/documentation.html +[11]:http://www.mathbuntu.org/ \ No newline at end of file diff --git a/sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md b/sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md new file mode 100644 index 0000000000..2a8415dda7 --- /dev/null +++ b/sources/tech/Learn with Linux/Learn with Linux--Physics Simulation.md @@ -0,0 +1,107 @@ +Learn with Linux: Physics Simulation +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/physics-fetured.jpg) + +This article is part of the [Learn with Linux][1] series: + +- [Learn with Linux: Learning to Type][2] +- [Learn with Linux: Physics Simulation][3] +- [Learn with Linux: Learning Music][4] +- [Learn with Linux: Two Geography Apps][5] +- [Learn with Linux: Master Your Math with These Linux Apps][6] + +Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. 
The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software. + +Physics is an interesting subject, and arguably the most enjoyable part of any Physics class/lecture are the demonstrations. It is really nice to see physics in action, yet the experiments do not need to be restricted to the classroom. While Linux offers many great tools for scientists to support or conduct experiments, this article will concern a few that would make learning physics easier or more fun. + +### 1. Step ### + +[Step][7] is an interactive physics simulator, part of [KDEEdu, the KDE Education Project][8]. Nobody could better describe what Step does than the people who made it. According to the project webpage, “[Step] works like this: you place some bodies on the scene, add some forces such as gravity or springs, then click “Simulate” and Step shows you how your scene will evolve according to the laws of physics. You can change every property of bodies/forces in your experiment (even during simulation) and see how this will change the outcome of the experiment. With Step, you can not only learn but feel how physics works!” + +While of course it requires Qt and loads of KDE-specific dependencies to work, projects like this (and KDEEdu itself) are part of the reason why KDE is such an awesome environment (if you don’t mind running a heavier desktop, of course). + +Step is in the Debian repositories; to install it on derivatives, simply type + + sudo apt-get install step + +into a terminal. On a KDE system it should have minimal dependencies and install in seconds. + +Step has a simple interface, and it lets you jump right into simulations. + +![physics-step-main](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-main.png) + +You will find all available objects on the left-hand side. You can have different particles, gas, shaped objects, springs, and different forces in action. 
(1) If you select an object, a short description of it will appear on the right-hand side (2). On the right you will also see an overview of the “world” you have created (the objects it contains) (3), the properties of the currently selected object (4), and the steps you have taken so far (5).
+
+![physics-step-parts](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-parts.png)
+
+Once you have placed all you wanted on the canvas, just press “Simulate,” and watch the events unfold as the objects interact with each other.
+
+![physics-step-simulate1](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate1.png)
+
+![physics-step-simulate2](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate2.png)
+
+![physics-step-simulate3](https://www.maketecheasier.com/assets/uploads/2015/07/physics-step-simulate3.png)
+
+To get to know Step better you only need to press F1. The KDE Help Center offers a great and detailed Step handbook.
+
+### 2. Lightspeed ###
+
+Lightspeed is a simple GTK+ and OpenGL based simulator that demonstrates how one might observe a fast-moving object. Lightspeed simulates these effects based on Einstein’s special relativity. According to [their sourceforge page][9], “When an object accelerates to more than a few million meters per second, it begins to appear warped and discolored in strange and unusual ways, and as it approaches the speed of light (299,792,458 m/s) the effects become more and more bizarre. 
In addition, the manner in which the object is distorted varies drastically with the viewpoint from which it is observed.”
+
+These effects, which come into play at relativistic velocities, are:
+
+- **The Lorentz contraction** – causes the object to appear shorter
+- **The Doppler red/blue shift** – alters the hues of color observed
+- **The headlight effect** – brightens or darkens the object
+- **Optical aberration** – deforms the object in unusual ways
+
+Lightspeed is in the Debian repositories; to install it, simply type:
+
+    sudo apt-get install lightspeed
+
+The user interface is very simple. You get a shape (more can be downloaded from sourceforge) which will move along the x-axis (the animation can be started by pressing “A” or by selecting it from the object menu).
+
+![physics-lightspeed](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed.png)
+
+You control the speed of its movement with the right-hand side slider and watch how it deforms.
+
+![physics-lightspeed-deform](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed-deform.png)
+
+Some simple controls will allow you to add more visual elements.
+
+![physics-lightspeed-visual](https://www.maketecheasier.com/assets/uploads/2015/08/physics-lightspeed-visual.png)
+
+The viewing angles can be adjusted by pressing the left, middle or right button and dragging the mouse, or from the Camera menu, which also offers some other adjustments like background colour or graphics mode.
+
+### Notable mention: Physion ###
+
+Physion looks like an interesting project and a great-looking piece of software to simulate physics in a much more colorful and fun way than the above examples would allow. Unfortunately, at the time of writing, the [official website][10] was experiencing problems, and the download page was unavailable.
+
+Judging from their Youtube videos, Physion must be worth installing once a download link becomes available. Until then we can just enjoy this video demo. 
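Before wrapping up, it is worth noting that the first of the relativistic effects listed above, the Lorentz contraction, is simple enough to estimate yourself. A small Python sketch, entirely independent of Lightspeed (the function name and sample speeds are my own choices for illustration):

```python
C = 299_792_458  # speed of light in m/s, the same figure quoted above

def contracted_length(rest_length, speed):
    """Length of an object as observed when it moves at `speed` along the line of sight."""
    gamma = 1.0 / (1.0 - (speed / C) ** 2) ** 0.5  # the Lorentz factor
    return rest_length / gamma

# The effect is invisible at everyday speeds and dramatic near the speed of light.
print(round(contracted_length(1.0, 0.5 * C), 3))   # 0.866
print(round(contracted_length(1.0, 0.9 * C), 3))   # 0.436
print(round(contracted_length(1.0, 0.99 * C), 3))  # 0.141
```

A one-metre rod moving at 90% of the speed of light would thus appear less than half a metre long, which is exactly the kind of warping Lightspeed renders visually.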
+ +注:youtube 视频 + + +Do you have another favorite physics simulation/demonstration/learning applications for Linux? Please share with us in the comments below. + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/linux-physics-simulation/ + +作者:[Attila Orosz][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:https://www.maketecheasier.com/series/learn-with-linux/ +[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ +[3]:https://www.maketecheasier.com/linux-physics-simulation/ +[4]:https://www.maketecheasier.com/linux-learning-music/ +[5]:https://www.maketecheasier.com/linux-geography-apps/ +[6]:https://www.maketecheasier.com/learn-linux-maths/ +[7]:https://edu.kde.org/applications/all/step +[8]:https://edu.kde.org/ +[9]:http://lightspeed.sourceforge.net/ +[10]:http://www.physion.net/ \ No newline at end of file diff --git a/sources/tech/Learn with Linux/Learn with Linux--Two Geography Apps.md b/sources/tech/Learn with Linux/Learn with Linux--Two Geography Apps.md new file mode 100644 index 0000000000..a31e1f73b4 --- /dev/null +++ b/sources/tech/Learn with Linux/Learn with Linux--Two Geography Apps.md @@ -0,0 +1,103 @@ +Learn with Linux: Two Geography Apps +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-featured.png) + +This article is part of the [Learn with Linux][1] series: + +- [Learn with Linux: Learning to Type][2] +- [Learn with Linux: Physics Simulation][3] +- [Learn with Linux: Learning Music][4] +- [Learn with Linux: Two Geography Apps][5] +- [Learn with Linux: Master Your Math with These Linux Apps][6] + +Linux offers great educational software and many excellent tools to aid students of all 
grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.
+
+Geography is an interesting subject, used by many of us day to day, often without realizing it. But when you fire up GPS, SatNav, or just Google maps, you are using the geographical data provided by this software, with the maps drawn by cartographers. When you hear about a certain country in the news or hear financial data being recited, these all fall under the umbrella of geography. And you have some great Linux software to study and practice these, whether it is for school or your own improvement.
+
+### Kgeography ###
+
+There are only two geography-related applications readily available in most Linux repositories, and both of these are KDE applications, in fact part of the KDE Educational project. Kgeography uses simple color-coded maps of any selected country.
+
+To install kgeography just type
+
+    sudo apt-get install kgeography
+
+into a terminal window of any Ubuntu-based distribution.
+
+The interface is very basic. You are first presented with a picker menu that lets you choose an area map.
+
+![learn-geography-kgeo-pick](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-pick.png)
+
+On the map you can display the name and capital of any given territory by clicking on it,
+
+![learn-geography-kgeo-brit](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-brit.png)
+
+and test your knowledge in different quizzes.
+
+![learn-geography-kgeo-test](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-kgeo-test.png)
+
+It is an interactive way to test your basic geographical knowledge and could be an excellent tool to help you prepare for exams.
+
+### Marble ###
+
+Marble is a somewhat more advanced piece of software, offering a global view of the world without the need for 3D acceleration. 
+ +![learn-geography-marble-main](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-main.png) + +To get Marble, type + + sudo apt-get install marble + +into a terminal window of any Ubuntu-based distribution. + +Marble focuses on cartography, its main view being that of an atlas. + +![learn-geography-marble-atlas](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-atlas.jpg) + +You can have different projections, like Globe or Mercator displayed as defaults, with flat and other exotic views available from a drop-down menu. The surfaces include the basic Atlas view, a full-fledged offline map powered by OpenStreetMap, + +![learn-geography-marble-map](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-map.jpg) + +satellite view (by NASA), + +![learn-geography-marble-satellite](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-satellite.jpg) + +and political and even historical maps of the world, among others. + +![learn-geography-marble-history](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-history.jpg) + +Besides providing great offline maps with different skins and varying amount of data, Marble offers other types of information as well. You can switch on and off various offline info-boxes + +![learn-geography-marble-offline](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-offline.png) + +and online services from the menu. + +![learn-geography-marble-online](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-online.png) + +An interesting online service is Wikipedia integration. Clicking on the little Wiki logos will bring up a pop-up featuring detailed information about the selected places. 
+ +![learn-geography-marble-wiki](https://www.maketecheasier.com/assets/uploads/2015/07/learn-geography-marble-wiki.png) + +The software also includes options for location tracking, route planning, and searching for locations, among other great and useful features. If you enjoy cartography, Marble offers hours of fun exploring and learning. + +### Conclusion ### + +Linux offers many great educational applications, and the subject of geography is no exception. With the above two programs you can learn a lot about our globe and test your knowledge in a fun and interactive manner. + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/linux-geography-apps/ + +作者:[Attila Orosz][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:https://www.maketecheasier.com/series/learn-with-linux/ +[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ +[3]:https://www.maketecheasier.com/linux-physics-simulation/ +[4]:https://www.maketecheasier.com/linux-learning-music/ +[5]:https://www.maketecheasier.com/linux-geography-apps/ +[6]:https://www.maketecheasier.com/learn-linux-maths/ \ No newline at end of file From be965bed877c7286f652cc1f941d9c1953236027 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 8 Sep 2015 16:18:43 +0800 Subject: [PATCH 459/697] =?UTF-8?q?20150908-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0150908 List Of 10 Funny Linux Commands.md | 185 ++++++++++++++++++ 1 file changed, 185 insertions(+) create mode 100644 sources/tech/20150908 List Of 10 Funny Linux Commands.md diff --git a/sources/tech/20150908 List Of 10 Funny Linux Commands.md b/sources/tech/20150908 List Of 10 Funny Linux Commands.md new file mode 100644 index 
0000000000..660bd47ff5
--- /dev/null
+++ b/sources/tech/20150908 List Of 10 Funny Linux Commands.md
@@ -0,0 +1,185 @@
+List Of 10 Funny Linux Commands
+================================================================================
+**Working from the terminal is really fun. Today, we’ll list some really funny Linux commands which will bring a smile to your face.**
+
+### 1. rev ###
+
+Create a file and type some words in it; the rev command will dump everything you wrote in reverse.
+
+    # rev
+
+![Selection_002](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0021.png)
+
+![Selection_001](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0011.png)
+
+### 2. fortune ###
+
+This command is not installed by default; install it with apt-get, and fortune will display a random sentence.
+
+    crank@crank-System:~$ sudo apt-get install fortune
+
+![Selection_003](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0031.png)
+
+Use the **-s** option with fortune; it will limit the output to one sentence.
+
+    # fortune -s
+
+![Selection_004](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0042.png)
+
+### 3. yes ###
+
+    # yes <string>
+
+This command will keep displaying the string indefinitely, until the process is killed by the user.
+
+    # yes unixmen
+
+![Selection_005](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0054.png)
+
+### 4. figlet ###
+
+This command can be installed with apt-get; it comes with some ascii fonts, which are located in **/usr/share/figlet**.
+
+    cd /usr/share/figlet
+
+----------
+
+    # figlet -f <font> <string>
+
+e.g.
+
+    # figlet -f big.flf unixmen
+
+![Selection_006](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0062.png)
+
+    # figlet -f block.flf unixmen
+
+![Selection_007](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0072.png)
+
+You can try the other options as well.
+
+### 5. asciiquarium ###
+
+This command will transform your terminal into a sea aquarium. 
+Download term animator:
+
+    # wget http://search.cpan.org/CPAN/authors/id/K/KB/KBAUCOM/Term-Animation-2.4.tar.gz
+
+Install and configure the above package:
+
+    # tar -zxvf Term-Animation-2.4.tar.gz
+    # cd Term-Animation-2.4/
+    # perl Makefile.PL && make && make test
+    # sudo make install
+
+Install the following package:
+
+    # apt-get install libcurses-perl
+
+Download and install asciiquarium:
+
+    # wget http://www.robobunny.com/projects/asciiquarium/asciiquarium.tar.gz
+    # tar -zxvf asciiquarium.tar.gz
+    # cd asciiquarium_1.0/
+    # cp asciiquarium /usr/local/bin/
+
+Run:
+
+    # /usr/local/bin/asciiquarium
+
+![asciiquarium_1.1 : perl_008](http://www.unixmen.com/wp-content/uploads/2015/09/asciiquarium_1.1-perl_008.png)
+
+### 6. bb ###
+
+    # apt-get install bb
+    # bb
+
+See what comes out:
+
+![Selection_009](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0092.png)
+
+### 7. sl ###
+
+Sometimes you type **sl** instead of **ls** by mistake; actually, **sl** is a command, and a locomotive engine will start moving if you type sl.
+
+    # apt-get install sl
+
+----------
+
+    # sl
+
+![Selection_012](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0122.png)
+
+### 8. cowsay ###
+
+A very common command; it will display, in ascii form, whatever you want to say.
+
+    apt-get install cowsay
+
+----------
+
+    # cowsay <string>
+
+![Selection_013](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0132.png)
+
+Or you can use another character instead of the cow; such characters are stored in **/usr/share/cowsay/cows**.
+
+    # cd /usr/share/cowsay/cows
+
+----------
+
+    cowsay -f ghostbusters.cow unixmen
+
+![Selection_014](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0141.png)
+
+or
+
+    # cowsay -f bud-frogs.cow Rajneesh
+
+![Selection_015](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0151.png)
+
+### 9. toilet ###
+
+Yes, this is a command; it dumps ascii strings in colored form to the terminal. 
+
+    # apt-get install toilet
+
+----------
+
+    # toilet --gay unixmen
+
+![Selection_016](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0161.png)
+
+    toilet -F border -F gay unixmen
+
+![Selection_020](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_020.png)
+
+    toilet -f mono12 -F metal unixmen
+
+![Selection_018](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0181.png)
+
+### 10. aafire ###
+
+Put your terminal on fire with aafire.
+
+    # apt-get install libaa-bin
+
+----------
+
+    # aafire
+
+![Selection_019](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0191.png)
+
+That’s it. Have fun with the Linux terminal!
+
+--------------------------------------------------------------------------------
+
+via: http://www.unixmen.com/list-10-funny-linux-commands/
+
+作者:[Rajneesh Upadhyay][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.unixmen.com/author/rajneesh/
\ No newline at end of file
From a5325a3597e9cc719e6676752dbd374568562858 Mon Sep 17 00:00:00 2001
From: wxy
Date: Wed, 9 Sep 2015 08:07:44 +0800
Subject: [PATCH 460/697] PUB:20150820 A Look at What's Next for the Linux
 Kernel

@bazz2
---
 ...Look at What's Next for the Linux Kernel.md | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)
 rename {translated/talk => published}/20150820 A Look at What's Next for the Linux Kernel.md (77%)

diff --git a/translated/talk/20150820 A Look at What's Next for the Linux Kernel.md b/published/20150820 A Look at What's
Next for the Linux Kernel.md @@ -1,24 +1,24 @@ -Linux 内核的发展方向 +对 Linux 内核的发展方向的展望 ================================================================================ ![](http://www.eweek.com/imagesvr_ce/485/290x195cilinux1.jpg) -**即将到来的 Linux 4.2 内核涉及到史上最多的贡献者数量,内核开发者 Jonathan Corbet 如是说。** +** Linux 4.2 内核涉及到史上最多的贡献者数量,内核开发者 Jonathan Corbet 如是说。** -来自西雅图。Linux 内核持续增长:代码量在增加,代码贡献者数量也在增加。而随之而来的一些挑战需要处理一下。以上是 Jonathan Corbet 在今年的 LinuxCon 的内核年度报告上提出的主要观点。以下是他的主要演讲内容: +西雅图报道。Linux 内核持续增长:代码量在增加,代码贡献者数量也在增加。而随之而来的一些挑战需要处理一下。以上是 Jonathan Corbet 在今年的 LinuxCon 的内核年度报告上提出的主要观点。以下是他的主要演讲内容: -Linux 4.2 内核依然处于开发阶段,预计在8月23号释出。Corbet 强调有 1569 名开发者为这个版本贡献了代码,其中 277 名是第一次提交代码。 +Linux 4.2 内核已经于上月底释出。Corbet 强调有 1569 名开发者为这个版本贡献了代码,其中 277 名是第一次提交代码。 越来越多的开发者的加入,内核更新非常快,Corbet 估计现在大概 63 天就能产生一个新的内核里程碑。 Linux 4.2 涉及多方面的更新。其中一个就是引进了 OverLayFS,这是一种只读型文件系统,它可以实现在一个容器之上再放一个容器。 -网络系统对小包传输性能也有了提升,这对于高频传输领域如金融交易而言非常重要。提升的方面主要集中在减小处理数据包的时间的能耗。 +网络系统对小包传输性能也有了提升,这对于高频金融交易而言非常重要。提升的方面主要集中在减小处理数据包的时间的能耗。 依然有新的驱动中加入内核。在每个内核发布周期,平均会有 60 到 80 个新增或升级驱动中加入。 另一个主要更新是实时内核补丁,这个特性在 4.0 版首次引进,好处是系统管理员可以在生产环境中打上内核补丁而不需要重启系统。当补丁所需要的元素都已准备就绪,打补丁的过程会在后台持续而稳定地进行。 -**Linux 安全, IoT 和其他关注点 ** +**Linux 安全, IoT 和其他关注点** 过去一年中,安全问题在开源社区是一个很热的话题,这都归因于那些引发高度关注的事件,比如 Heartbleed 和 Shellshock。 @@ -26,9 +26,9 @@ Linux 4.2 涉及多方面的更新。其中一个就是引进了 OverLayFS,这 他强调说过去 10 年间有超过 3 百万行代码不再被开发者修改,而产生 Shellshock 漏洞的代码的年龄已经是 20 岁了,近年来更是无人问津。 -另一个关注点是 2038 问题,Linux 界的“千年虫”,如果不解决,2000 年出现过的问题还会重现。2038 问题说的是在 2038 年一些 Linux 和 Unix 机器会死机(LCTT:32 位系统记录的时间,在2038年1月19日星期二晚上03:14:07之后的下一秒,会变成负数)。Corbet 说现在离 2038 年还有 23 年时间,现在部署的系统都会考虑 2038 问题。 +另一个关注点是 2038 问题,Linux 界的“千年虫”,如果不解决,2000 年出现过的问题还会重现。2038 问题说的是在 2038 年一些 Linux 和 Unix 机器会死机(LCTT译注:32 位系统记录的时间,在2038年1月19日星期二晚上03:14:07之后的下一秒,会变成负数)。Corbet 说现在离 2038 年还有 23 年时间,现在部署的系统都会考虑 2038 问题。 -Linux 已经开始一些初步的方案来修复 2038 问题了,但做的还远远不够。“现在就要修复这个问题,而不是等 20 年后把这个头疼的问题留给下一代解决,我们却享受着退休的美好时光”。 +Linux 已经启动一些初步的方案来修复 2038 问题了,但做的还远远不够。“现在就要修复这个问题,而不是等 20 年后把这个头疼的问题留给下一代解决,我们却享受着退休的美好时光”。 物联网(IoT)也是 Linux 
关注的领域,Linux 是物联网嵌入式操作系统的主要占有者,然而这并没有什么卵用。Corget 认为日渐臃肿的内核对于未来的物联网设备来说肯定过于庞大。 @@ -42,7 +42,7 @@ via: http://www.eweek.com/enterprise-apps/a-look-at-whats-next-for-the-linux-ker 作者:[Sean Michael Kerner][a] 译者:[bazz2](https://github.com/bazz2) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From c85181676da37df61817bfa51f3c481691d0b57b Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 9 Sep 2015 09:11:12 +0800 Subject: [PATCH 461/697] PUB:20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting @jerryling315 --- ... 22 Years of Journey and Still Counting.md | 108 +++++++++++++++++ ... 22 Years of Journey and Still Counting.md | 109 ------------------ 2 files changed, 108 insertions(+), 109 deletions(-) create mode 100644 published/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md delete mode 100644 translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md diff --git a/published/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md b/published/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md new file mode 100644 index 0000000000..e14e0ba320 --- /dev/null +++ b/published/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md @@ -0,0 +1,108 @@ +Debian GNU/Linux,22 年未完的美妙旅程 +================================================================================ + +在2015年8月16日, Debian项目组庆祝了 Debian 的22周年纪念日;这也是开源世界历史最悠久、热门的发行版之一。 Debian项目于1993年由Ian Murdock创立。彼时,Slackware 作为最早的 Linux 发行版已经名声在外。 + +![Happy 22nd Birthday to Debian](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-22nd-Birthday.png) + +*22岁生日快乐! 
Debian Linux!* + +Ian Ashly Murdock, 一个美国职业软件工程师, 在他还是普渡大学的学生时构想出了 Debian 项目的计划。他把这个项目命名为 Debian 是由于这个名字组合了他彼时女友的名字 Debra Lynn 和他自己的名字 Ian。 他之后和 Lynn 结婚并在2008年1月离婚。 + +![Ian Murdock](http://www.tecmint.com/wp-content/uploads/2014/08/Ian-Murdock.jpeg) + +*Debian 创始人:Ian Murdock* + +Ian 目前是 ExactTarget 的平台与开发社区的副总裁。 + +Debian (如同Slackware一样) 都是由于当时缺乏满足合乎标准的发行版才应运而生的。 Ian 在一次采访中说:“免费提供一流的产品会是 Debian 项目的唯一使命。 尽管过去的 Linux 发行版均不尽然可靠抑或是优秀。 我印象里...比如在不同的文件系统间移动文件, 处理大型文件经常会导致内核出错。 但是 Linux 其实是很可靠的, 自由的源代码让这个项目本质上很有前途。” + +"我记得过去我像其他想解决问题的人一样, 想在家里运行一个像 UNIX 的东西。 但那是不可能的, 无论是经济上还是法律上或是别的什么角度。 然后我就听闻了 GNU 内核开发项目, 以及这个项目是如何没有任何法律纷争", Ian 补充到。 他早年在开发 Debian 时曾被自由软件基金会(FSF)资助, 这份资助帮助 Debian 取得了长足的发展; 尽管一年后由于学业原因 Ian 退出了 FSF 转而去完成他的学位。 + +### Debian开发历史 ### + +- **Debian 0.01 – 0.09** : 发布于 1993 年八月 – 1993 年十二月。 +- **Debian 0.91** : 发布于 1994 年一月。 有了原始的包管理系统, 没有依赖管理机制。 +- **Debian 0.93 rc5** : 发布于 1995 年三月。 “现代”意义的 Debian 的第一次发布, 在基础系统安装后会使用dpkg 安装以及管理其他软件包。 +- **Debian 0.93 rc6**: 发布于 1995 年十一月。 最后一次 a.out 发布, deselect 机制第一次出现, 有60位开发者在彼时维护着软件包。 +- **Debian 1.1**: 发布于 1996 年六月。 项目代号 – Buzz, 软件包数量 – 474, 包管理器 dpkg, 内核版本 2.0, ELF 二进制。 +- **Debian 1.2**: 发布于 1996 年十二月。 项目代号 – Rex, 软件包数量 – 848, 开发者数量 – 120。 +- **Debian 1.3**: 发布于 1997 年七月。 项目代号 – Bo, 软件包数量 974, 开发者数量 – 200。 +- **Debian 2.0**: 发布于 1998 年七月。 项目代号 - Hamm, 支持构架 – Intel i386 以及 Motorola 68000 系列, 软件包数量: 1500+, 开发者数量: 400+, 内置了 glibc。 +- **Debian 2.1**: 发布于1999 年三月九日。 项目代号 – slink, 支持构架 - Alpha 和 Sparc, apt 包管理器开始成型, 软件包数量 – 2250。 +- **Debian 2.2**: 发布于 2000 年八月十五日。 项目代号 – Potato, 支持构架 – Intel i386, Motorola 68000 系列, Alpha, SUN Sparc, PowerPC 以及 ARM 构架。 软件包数量: 3900+ (二进制) 以及 2600+ (源代码), 开发者数量 – 450。 有一群人在那时研究并发表了一篇论文, 论文展示了自由软件是如何在被各种问题包围的情况下依然逐步成长为优秀的现代操作系统的。 +- **Debian 3.0**: 发布于 2002 年七月十九日。 项目代号 – woody, 支持构架新增 – HP, PA_RISC, IA-64, MIPS 以及 IBM, 首次以DVD的形式发布, 软件包数量 – 8500+, 开发者数量 – 900+, 支持加密。 +- **Debian 3.1**: 发布于 2005 年六月六日。 项目代号 – sarge, 支持构架 – 新增 AMD64(非官方渠道发布), 内核 – 2.4 以及 2.6 系列, 软件包数量: 15000+, 开发者数量 : 1500+, 
增加了诸如 OpenOffice 套件, Firefox 浏览器, Thunderbird, Gnome 2.8, 支持: RAID, XFS, LVM, Modular Installer。 +- **Debian 4.0**: 发布于 2007 年四月八日。 项目代号 – etch, 支持构架 – 如前,包括 AMD64。 软件包数量: 18,200+ 开发者数量 : 1030+, 图形化安装器。 +- **Debian 5.0**: 发布于 2009 年二月十四日。 项目代号 – lenny, 支持构架 – 新增 ARM。 软件包数量: 23000+, 开发者数量: 1010+。 +- **Debian 6.0**: 发布于 2009 年七月二十九日。 项目代号 – squeeze, 包含的软件包: 内核 2.6.32, Gnome 2.3. Xorg 7.5, 同时包含了 DKMS, 基于依赖包支持。 支持构架 : 新增 kfreebsd-i386 以及 kfreebsd-amd64, 基于依赖管理的启动过程。 +- **Debian 7.0**: 发布于 2013 年五月四日。 项目代号: wheezy, 支持 Multiarch, 私有云工具, 升级了安装器, 移除了第三方软件依赖, 全功能多媒体套件-codec, 内核版本 3.2, Xen Hypervisor 4.1.4 ,软件包数量: 37400+。 +- **Debian 8.0**: 发布于 2015 年五月二十五日。 项目代号: Jessie, 将 Systemd 作为默认的初始化系统, 内核版本 3.16, 增加了快速启动(fast booting), service进程所依赖的 cgroups 使隔离部分 service 进程成为可能, 43000+ 软件包。 Sysvinit 初始化工具在 Jessie 中可用。 + +**注意**: Linux的内核第一次是在1991 年十月五日被发布, 而 Debian 的首次发布则在1993 年九月十三日。 所以 Debian 已经在只有24岁的 Linux 内核上运行了整整22年了。 + +### Debian 的那些事 ### + +1994年管理和重整了 Debian 项目以使得其他开发者能更好地加入,所以在那一年并没有发布面向用户的更新, 当然, 内部版本肯定是有的。 + +Debian 1.0 从来就没有被发布过。 一家 CD-ROM 的生产商错误地把某个未发布的版本标注为了 1.0, 为了避免产生混乱, 原本的 Debian 1.0 以1.1的面貌发布了。 从那以后才有了所谓的官方CD-ROM的概念。 + +每个 Debian 新版本的代号都是玩具总动员里某个角色的名字哦。 + +Debian 有四种可用版本: 旧稳定版(old stable), 稳定版(stable), 测试版(testing) 以及 试验版(experimental)。 始终如此。 + +Debian 项目组一直工作在不稳定发行版上, 这个不稳定版本始终被叫做Sid(玩具总动员里那个邪恶的臭小孩)。 Sid是unstable版本的永久名称, 同时Sid也取自'Still In Development"(译者:还在开发中)的首字母。 Sid 将会成为下一个稳定版, 当前的稳定版本代号为 jessie。 + +Debian 的官方发行版只包含开源并且自由的软件, 绝无其他东西. 
不过 contrib 和非自由软件包使得安装那些本身自由但是其依赖的软件包不自由(contrib)的软件和非自由软件成为了可能。 + +Debian 是一堆Linux 发行版之母。 举几个例子: + +- Damn Small Linux +- KNOPPIX +- Linux Advanced +- MEPIS +- Ubuntu +- 64studio (不再活跃开发) +- LMDE + +Debian 是世界上最大的非商业 Linux 发行版。它主要是由C编写的(32.1%), 一并的还有其他70多种语言。 + +![Debian 开发语言贡献表](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-Programming.png) + +*Debian 开发语言贡献表,图片来源: [Xmodulo][1]* + +Debian 项目包含6,850万行代码, 以及 450万行空格和注释。 + +国际空间站放弃了 Windows 和红帽子, 进而换成了 Debian - 在上面的宇航员使用落后一个版本的稳定发行版, 目前是 squeeze; 这么做是为了稳定程度以及来自 Debian 社区的雄厚帮助支持。 + +感谢上帝! 我们差点就听到来自国际空间宇航员面对 Windows Metro 界面的尖叫了 :P + +#### 黑色星期三 #### + +2002 年十一月二十日, Twente 大学的网络运营中心(NOC)着火。 当地消防部门放弃了服务器区域。 NOC维护着satie.debian.org 的网站服务器, 这个网站包含了安全、非美国相关的存档、新维护者资料、数量报告、数据库等等;这一切都化为了灰烬。 之后这些服务由 Debian 重建了。 + +#### 未来版本 #### + +下一个待发布版本是 Debian 9, 项目代号 – Stretch, 它会带来什么还是个未知数。 满心期待吧! + +有很多发行版在 Linux 发行版的历史上出现过一瞬间然后很快消失了。 在多数情况下, 维护一个日渐庞大的项目是开发者们面临的挑战。 但这对 Debian 来说不是问题。 Debian 项目有全世界成百上千的开发者、维护者。 它在 Linux 诞生的之初起便一直存在。 + +Debian 在 Linux 生态环境中的贡献是难以用语言描述的。 如果 Debian 没有出现过, 那么 Linux 世界将不会像现在这样丰富和用户友好。 Debian 是为数不多可以被认为安全可靠又稳定的发行版,是作为网络服务器完美选择。 + +这仅仅是 Debian 的一个开始。 它走过了这么长的征程, 并将一直走下去。 未来即是现在! 世界近在眼前! 如果你到现在还从来没有使用过 Debian, 我只想问, 你还再等什么? 
快去下载一份镜像试试吧, 我们会在此守候遇到任何问题的你。 + +- [Debian 主页][2] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/happy-birthday-to-debian-gnu-linux/ + +作者:[Avishek Kumar][a] +译者:[jerryling315](http://moelf.xyz) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://xmodulo.com/2013/08/interesting-facts-about-debian-linux.html +[2]:https://www.debian.org/ diff --git a/translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md b/translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md deleted file mode 100644 index 1c92079b57..0000000000 --- a/translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md +++ /dev/null @@ -1,109 +0,0 @@ -Debian GNU/Linux 生日: 22年未完的美妙旅程. -================================================================================ -在2015年8月16日, Debian项目组庆祝了 Debian 的22周年纪念日; 这也是开源世界历史最悠久, 热门的发行版之一. Debian项目于1993年由Ian Murdock创立. 彼时, Slackware 作为最早的 Linux 发行版已经名声在外. - -![Happy 22nd Birthday to Debian](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-22nd-Birthday.png) - -22岁生日快乐! Debian Linux! - -Ian Ashly Murdock, 一个美国职业软件工程师, 在他还是普渡大学的学生时构想出了 Debia n项目的计划. 他把这个项目命名为 Debian 是由于这个名字组合了他彼时女友的名字, Debra Lynn, 和他自己的名字(译者: Ian). 他之后和Lynn顺利结婚并在2008年1月离婚. - -![Ian Murdock](http://www.tecmint.com/wp-content/uploads/2014/08/Ian-Murdock.jpeg) - -Debian 创始人:Ian Murdock - -Ian 目前是 ExactTarget 下 Platform and Development Community 的副总裁. - -Debian (如同Slackware一样) 都是由于当时缺乏满足作者标准的发行版才应运而生的. Ian 在一次采访中说:"免费提供一流的产品会是Debian项目的唯一使命. 尽管过去的 Linux 发行版均不尽然可靠抑或是优秀. 我印象里...比如在不同的文件系统间移动文件, 处理大型文件经常会导致内核出错. 但是 Linux 其实是很可靠的, 免费的源代码让这个项目本质上很有前途. - -"我记得过去我也像其他人一样想解决问题, 想在家里运营一个像 UNIX 的东西. 但那是不可能的, 无论是经济上还是法律上或是别的什么角度. 然后我就听闻了GNU内核开发项目, 以及这个项目是如何没有任何法律纷争", Ian 补充到. 
他早年在开发 Debian 时曾被自由软件基金会(FSF)资助, 这份资助帮助 Debian 向前迈了一大步; 尽管一年后由于学业原因 Ian 退出了 FSF 转而去完成他的学位. - -### Debian开发历史 ### - -- **Debian 0.01 – 0.09** : 发布于 1993 八月 – 1993 十二月. -- **Debian 0.91 ** – 发布于 1994 一月. 有了原始的包管理系统, 没有依赖管理机制. -- **Debian 0.93 rc5** : 发布于 1995 三月. "现代"意义的 Debian 的第一次发布, dpkg 会在系统安装后被用作安装以及管理其他软件包. -- **Debian 0.93 rc6**: 发布于1995 十一月. 最后一次a.out发布, deselect机制第一次出现, 有60位开发者在彼时维护着软件包. -- **Debian 1.1**: 发布于1996 六月. 项目代号 – Buzz, 软件包数量 – 474, 包管理器 dpkg, 内核版本 2.0, ELF. -- **Debian 1.2**: 发布于1996 十二月. 项目代号 – Rex, 软件包数量 – 848, 开发者数量 – 120. -- **Debian 1.3**: 发布于1997 七月. 项目代号 – Bo, 软件包数量 974, 开发者数量 – 200. -- **Debian 2.0**: 发布于1998 七月. 项目代号 - Hamm, 支持构架 – Intel i386 以及 Motorola 68000 系列, 软件包数量: 1500+, 开发者数量: 400+, 内置了 glibc. -- **Debian 2.1**: 发布于1999 三月九日. 项目代号 – slink, 支持构架 - Alpha 和 Sparc, apt 包管理器开始成型, 软件包数量 – 2250. -- **Debian 2.2**: 发布于2000 八月十五日. 项目代号 – Potato, 支持构架 – Intel i386, Motorola 68000 系列, Alpha, SUN Sparc, PowerPC 以及 ARM 构架. 软件包数量: 3900+ (二进制) 以及 2600+ (源代码), 开发者数量 – 450. 有一群人在那时研究并发表了一篇论文, 论文展示了自由软件是如何在被各种问题包围的情况下依然逐步成长为优秀的现代操作系统的. -- **Debian 3.0**: 发布于2002 七月十九日. 项目代号 – woody, 支持构架新增– HP, PA_RISC, IA-64, MIPS 以及 IBM, 首次以DVD的形式发布, 软件包数量 – 8500+, 开发者数量 – 900+, 支持加密. -- **Debian 3.1**: 发布于2005 六月六日. 项目代号 – sarge, 支持构架 – 不变基础上新增 AMD64 – 非官方渠道发布, 内核 – 2.4 以及 2.6 系列, 软件包数量: 15000+, 开发者数量 : 1500+, 增加了诸如 – OpenOffice 套件, Firefox 浏览器, Thunderbird, Gnome 2.8, 内核版本 3.3 先进地支持了: RAID, XFS, LVM, Modular Installer. -- **Debian 4.0**: 发布于2007 四月八日. 项目代号 – etch, 支持构架 – 不变基础上新增 AMD64. 软件包数量: 18,200+ 开发者数量 : 1030+, 图形化安装器. -- **Debian 5.0**: Released on February 14th, 发布于2009. 项目代号 – lenny, 支持构架 – 保不变基础上新增 ARM. 软件包数量: 23000+, 开发者数量: 1010+. -- **Debian 6.0**: 发布于2009 七月二十九日. 项目代号 – squeeze, 包含的软件包: 内核 2.6.32, Gnome 2.3. Xorg 7.5, 同时包含了 DKMS, 基于依赖包支持. 支持构架 : 不变基础上新增 kfreebsd-i386 以及 kfreebsd-amd64, 基于依赖管理的启动过程. -- **Debian 7.0**: 发布于2013 五月四日. 项目代号: wheezy, 支持 Multiarch, 私人云工具, 升级了安装器, 移除了第三方软件依赖, 万能的多媒体套件-codec, 内核版本 3.2, Xen Hypervisor 4.1.4 软件包数量: 37400+. 
-- **Debian 8.0**: 发布于2015 五月二十五日. 项目代号: Jessie, 将 Systemd 作为默认的启动加载器, 内核版本 3.16, 增加了快速启动(fast booting), service进程所依赖的 cgroups 使隔离部分 service 进程成为可能, 43000+ packages. Sysvinit 初始化工具首次在 Jessie 中可用. - -**注意**: Linux的内核第一次是在1991 十月五日被发布, 而 Debian 的首次发布则在1993 九月十三日. 所以 Debian 已经在只有24岁的 Linux 内核上运行了整整22年了. - -### 有关 Debian 的小知识 ### - -1994年被用来管理和重整 Debian 项目以使得其他开发者能更好地加入. 所以在那一年并没有面向用户的更新被发布, 当然, 内部版本肯定是有的. - -Debian 1.0 从来就没有被发布过. 一家 CD-ROM 的生产商错误地把某个未发布的版本标注为了 1.0, 为了避免产生混乱, 原本的 Debian 1.0 以1.1的面貌发布了. 从那以后才有了所谓的官方CD-ROM的概念. - -每个 Debian 新版本的代号都是玩具总动员里某个角色的名字哦. - -Debian 有四种可用版本: 旧稳定版(old stable), 稳定版, 测试版 以及 试验版(experimental). 始终如此. - -Debian 项目组一直致力于开发写一代发行版的不稳定版本, 这个不稳定版本始终被叫做Sid(玩具总动员里那个邪恶的臭小孩). Sid是unstable版本的永久名称, 同时Sid也取自'Still In Development"(译者:还在开发中)的首字母. Sid 将会成为下一个稳定版, 此时的下一个稳定版本代号为 jessie. - -Debian 的官方发行版只包含开源并且免费的软件, 绝无其他东西. 不过contrib 和 不免费的软件包使得安装那些本身免费但是依赖的软件包不免费的软件成为了可能. 那些依赖包本身的证书可能不属于自由/免费软件. - -Debian 是一堆Linux 发行版的母亲. 举几个例子: - -- Damn Small Linux -- KNOPPIX -- Linux Advanced -- MEPIS -- Ubuntu -- 64studio (不再活跃开发) -- LMDE - -Debian 是世界上最大的非商业Linux 发行版.他主要是由C书写的(32.1%), 一并的还有其他70多种语言. - -![Debian 开发语言贡献表](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-Programming.png) - -Debian Contribution - -图片来源: [Xmodulo][1] - -Debian 项目包含6,850万行代码, 以及, 450万行空格和注释. - -国际空间站放弃了 Windows 和红帽子, 进而换成了Debian - 在上面的宇航员使用落后一个版本的稳定发行版, 目前是squeeze; 这么做是为了稳定程度以及来自 Debian 社区的雄厚帮助支持. - -感谢上帝! 我们差点就听到来自国际空间宇航员面对 Windows Metro 界面的尖叫了 :P - -#### 黑色星期三 #### - -2002 十一月而是日, Twente 大学的 Network Operation Center 着火 (NOC). 当地消防部门放弃了服务器区域. NOC维护了satie.debian.org的网站服务器, 这个网站包含了安全, 非美国相关的存档, 新维护者资料, 数量报告, 数据库; 这一切都化为了灰烬. 之后这些服务被使用 Debian 重新实现了. - -#### 未来版本 #### - -下一个待发布版本是 Debian 9, 项目代号 – Stretch, 它会带来什么还是个未知数. 满心期待吧! - -有很多发行版在 Linux 发行版的历史上出现过一瞬然后很快消失了. 在多数情况下, 维护一个日渐庞大的项目是开发者们面临的挑战. 但这对 Debian 来说不是问题. Debian 项目有全世界成百上千的开发者, 维护者. 它在 Linux 诞生的之初起便一直存在. - -Debian 在 Linux 生态环境中的贡献是难以用语言描述的. 如果 Debian 没有出现过, 那么 Linux 世界将不会像现在这样丰富, 用户友好. 
Debian 是为数不多可以被认为安全可靠又稳定, 是作为网络服务器完美选择的发行版. - -这仅仅是 Debian 的一个开始. 它从远古时代一路走到今天, 并将一直走下去. 未来即是现在! 世界近在眼前! 如果你到现在还从来没有使用过 Debian, 我只想问, 你还再等什么? 快去下载一份镜像试试吧, 我们会在此守候遇到任何问题的你. - -- [Debian 主页][2] - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/happy-birthday-to-debian-gnu-linux/ - -作者:[Avishek Kumar][a] -译者:[jerryling315](http://moelf.xyz) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://xmodulo.com/2013/08/interesting-facts-about-debian-linux.html -[2]:https://www.debian.org/ From 3f0aab56c681a6d185d618a34a6bfe7153c15814 Mon Sep 17 00:00:00 2001 From: "Y.C.S.M" Date: Wed, 9 Sep 2015 10:07:09 +0800 Subject: [PATCH 462/697] Update 20150908 List Of 10 Funny Linux Commands.md update the state of md file --- sources/tech/20150908 List Of 10 Funny Linux Commands.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150908 List Of 10 Funny Linux Commands.md b/sources/tech/20150908 List Of 10 Funny Linux Commands.md index 660bd47ff5..f3acbe27d3 100644 --- a/sources/tech/20150908 List Of 10 Funny Linux Commands.md +++ b/sources/tech/20150908 List Of 10 Funny Linux Commands.md @@ -1,3 +1,4 @@ +translating by tnuoccalanosrep List Of 10 Funny Linux Commands ================================================================================ **Working from the Terminal is really fun. 
Today, we’ll list really funny Linux commands which will bring smile on your face.** @@ -182,4 +183,4 @@ via: http://www.unixmen.com/list-10-funny-linux-commands/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.unixmen.com/author/rajneesh/ \ No newline at end of file +[a]:http://www.unixmen.com/author/rajneesh/ From c1504f802cb1639bcd8f1fdfb5b57fa0ed1eb6a3 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 9 Sep 2015 16:20:09 +0800 Subject: [PATCH 463/697] =?UTF-8?q?20150909-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...l Uptime of System With tuptime Utility.md | 152 ++++++++++++++++++ 1 file changed, 152 insertions(+) create mode 100644 sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md diff --git a/sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md b/sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md new file mode 100644 index 0000000000..d40f1b26af --- /dev/null +++ b/sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md @@ -0,0 +1,152 @@ +Linux Server See the Historical and Statistical Uptime of System With tuptime Utility +================================================================================ +You can use the following tools to see how long system has been running on a Linux or Unix-like system: + +- uptime : Tell how long the server has been running. +- lastt : Show the reboot and shutdown time. +- tuptime : Report the historical and statistical running time of system, keeping it between restarts. Like uptime command but with more interesting output. 
+ +#### Finding out the system last reboot time and date #### + +You [can use the following commands to get the last reboot and shutdown time and date on a Linux][1] operating system (also works on OSX/Unix-like system): + + ## Just show system reboot and shutdown date and time ### + who -b + last reboot + last shutdown + ## Uptime info ## + uptime + cat /proc/uptime + awk '{ print "up " $1 /60 " minutes"}' /proc/uptime + w + +**Sample outputs:** + +![Fig.01: Various Linux commands in action to find out the server uptime](http://s0.cyberciti.org/uploads/cms/2015/09/uptime-w-awk-outputs.jpg) + +Fig.01: Various Linux commands in action to find out the server uptime + +**Say hello to tuptime** + +The tuptime command line tool can report the following information on a Linux based system: + +1. Count system startups +1. Register first boot time (a.k.a. installation time) +1. Count nicely and accidentally shutdowns +1. Average uptime and downtime +1. Current uptime +1. Uptime and downtime rate since first boot time +1. Accumulated system uptime, downtime and total +1. Report each startup, uptime, shutdown and downtime + +#### Installation #### + +Type the [following command to clone a git repo on a Linux operating system][2]: + + $ cd /tmp + $ git clone https://github.com/rfrail3/tuptime.git + $ ls + $ cd tuptime + $ ls + +**Sample outputs:** + +![Fig.02: Cloning a git repo](http://s0.cyberciti.org/uploads/cms/2015/09/git-install-tuptime.jpg) + +Fig.02: Cloning a git repo + +Make sure you have Python v2.7 installed with the sys, optparse, os, re, string, sqlite3, datetime, distutils, and locale modules.
+ +You can simply install it as follows: + + $ sudo tuptime-install.sh + +Or do a manual installation (recommended method, which works on both systemd and non-systemd based Linux systems): + + $ sudo cp /tmp/tuptime/latest/cron.d/tuptime /etc/cron.d/tuptime + +If it is a system with systemd, copy the service file and enable it: + + $ sudo cp /tmp/tuptime/latest/systemd/tuptime.service /lib/systemd/system/ + $ sudo systemctl enable tuptime.service + +If the system doesn't have systemd, copy the init file: + + $ sudo cp /tmp/tuptime/latest/init.d/tuptime.init.d-debian7 /etc/init.d/tuptime + $ sudo update-rc.d tuptime defaults + +**Run it** + +Simply type the following command: + + $ sudo tuptime + +**Sample outputs:** + +![Fig.03: tuptime in action](http://s0.cyberciti.org/uploads/cms/2015/09/tuptime-output.jpg) + +Fig.03: tuptime in action + +After a kernel upgrade, I rebooted the box and typed the same command again: + + $ sudo tuptime + System startups: 2 since 03:52:16 PM 08/21/2015 + System shutdowns: 1 ok - 0 bad + Average uptime: 7 days, 16 hours, 48 minutes and 3 seconds + Average downtime: 2 hours, 30 minutes and 5 seconds + Current uptime: 5 minutes and 28 seconds since 06:23:06 AM 09/06/2015 + Uptime rate: 98.66 % + Downtime rate: 1.34 % + System uptime: 15 days, 9 hours, 36 minutes and 7 seconds + System downtime: 5 hours, 0 minutes and 11 seconds + System life: 15 days, 14 hours, 36 minutes and 18 seconds + +You can change date and time format as follows: + + $ sudo tuptime -d '%H:%M:%S %m-%d-%Y' + +**Sample outputs:** + + System startups: 1 since 15:52:16 08-21-2015 + System shutdowns: 0 ok - 0 bad + Average uptime: 15 days, 9 hours, 21 minutes and 19 seconds + Average downtime: 0 seconds + Current uptime: 15 days, 9 hours, 21 minutes and 19 seconds since 15:52:16 08-21-2015 + Uptime rate: 100.0 % + Downtime rate: 0.0 % + System uptime: 15 days, 9 hours, 21 minutes and 19 seconds + System downtime: 0 seconds + System life: 15 days, 9 hours, 21 minutes and 19 seconds + +Enumerate each
startup, uptime, shutdown and downtime: + + $ sudo tuptime -e + +**Sample outputs:** + + Startup: 1 at 03:52:16 PM 08/21/2015 + Uptime: 15 days, 9 hours, 22 minutes and 33 seconds + + System startups: 1 since 03:52:16 PM 08/21/2015 + System shutdowns: 0 ok - 0 bad + Average uptime: 15 days, 9 hours, 22 minutes and 33 seconds + Average downtime: 0 seconds + Current uptime: 15 days, 9 hours, 22 minutes and 33 seconds since 03:52:16 PM 08/21/2015 + Uptime rate: 100.0 % + Downtime rate: 0.0 % + System uptime: 15 days, 9 hours, 22 minutes and 33 seconds + System downtime: 0 seconds + System life: 15 days, 9 hours, 22 minutes and 33 seconds + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-on-linux-server/ + +作者:Vivek Gite +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-on-linux-server/ +[2]:http://www.cyberciti.biz/faq/debian-ubunut-linux-download-a-git-repository/ \ No newline at end of file From e043bae6c6643463b6f0420593052975d42d9935 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Wed, 9 Sep 2015 17:42:59 +0800 Subject: [PATCH 464/697] [Translated]RHCSA Series-Part 12--Automeat RHEL 7 Installations Using 'Kickstart'.md --- ... RHEL 7 Installations Using 'Kickstart'.md | 144 ------------------ ... 
RHEL 7 Installations Using 'Kickstart'.md | 142 +++++++++++++++++ 2 files changed, 142 insertions(+), 144 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md b/sources/tech/RHCSA Series/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md deleted file mode 100644 index 3d8b578a32..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md +++ /dev/null @@ -1,144 +0,0 @@ -FSSlc translating - -RHCSA Series: Automate RHEL 7 Installations Using ‘Kickstart’ – Part 12 -================================================================================ -Linux servers are rarely standalone boxes. Whether it is in a datacenter or in a lab environment, chances are that you have had to install several machines that will interact one with another in some way. If you multiply the time that it takes to install Red Hat Enterprise Linux 7 manually on a single server by the number of boxes that you need to set up, this can lead to a rather lengthy effort that can be avoided through the use of an unattended installation tool known as kickstart. - -In this article we will show what you need to use kickstart utility so that you can forget about babysitting servers during the installation process. - -![Automatic Kickstart Installation of RHEL 7](http://www.tecmint.com/wp-content/uploads/2015/05/Automatic-Kickstart-Installation-of-RHEL-7.jpg) - -RHCSA: Automatic Kickstart Installation of RHEL 7 - -#### Introducing Kickstart and Automated Installations #### - -Kickstart is an automated installation method used primarily by Red Hat Enterprise Linux (and other Fedora spin-offs, such as CentOS, Oracle Linux, etc.) 
to execute unattended operating system installation and configuration. Thus, kickstart installations allow system administrators to have identical systems, as far as installed package groups and system configuration are concerned, while sparing them the hassle of having to manually install each of them. - -### Preparing for a Kickstart Installation ### - -To perform a kickstart installation, we need to follow these steps: - -1. Create a Kickstart file, a plain text file with several predefined configuration options. - -2. Make the Kickstart file available on removable media, a hard drive or a network location. The client will use the rhel-server-7.0-x86_64-boot.iso file, whereas you will need to make the full ISO image (rhel-server-7.0-x86_64-dvd.iso) available from a network resource, such as a HTTP of FTP server (in our present case, we will use another RHEL 7 box with IP 192.168.0.18). - -3. Start the Kickstart installation - -To create a kickstart file, login to your Red Hat Customer Portal account, and use the [Kickstart configuration tool][1] to choose the desired installation options. Read each one of them carefully before scrolling down, and choose what best fits your needs: - -![Kickstart Configuration Tool](http://www.tecmint.com/wp-content/uploads/2015/05/Kickstart-Configuration-Tool.png) - -Kickstart Configuration Tool - -If you specify that the installation should be performed either through HTTP, FTP, or NFS, make sure the firewall on the server allows those services. - -Although you can use the Red Hat online tool to create a kickstart file, you can also create it manually using the following lines as reference. 
You will notice, for example, that the installation process will be in English, using the latin american keyboard layout and the America/Argentina/San_Luis time zone: - - lang en_US - keyboard la-latin1 - timezone America/Argentina/San_Luis --isUtc - rootpw $1$5sOtDvRo$In4KTmX7OmcOW9HUvWtfn0 --iscrypted - #platform x86, AMD64, or Intel EM64T - text - url --url=http://192.168.0.18//kickstart/media - bootloader --location=mbr --append="rhgb quiet crashkernel=auto" - zerombr - clearpart --all --initlabel - autopart - auth --passalgo=sha512 --useshadow - selinux --enforcing - firewall --enabled - firstboot --disable - %packages - @base - @backup-server - @print-server - %end - -In the online configuration tool, use 192.168.0.18 for HTTP Server and `/kickstart/tecmint.bin` for HTTP Directory in the Installation section after selecting HTTP as installation source. Finally, click the Download button at the right top corner to download the kickstart file. - -In the kickstart sample file above, you need to pay careful attention to. - - url --url=http://192.168.0.18//kickstart/media - -That directory is where you need to extract the contents of the DVD or ISO installation media. Before doing that, we will mount the ISO installation file in /media/rhel as a loop device: - - # mount -o loop /var/www/html/kickstart/rhel-server-7.0-x86_64-dvd.iso /media/rhel - -![Mount RHEL ISO Image](http://www.tecmint.com/wp-content/uploads/2015/05/Mount-RHEL-ISO-Image.png) - -Mount RHEL ISO Image - -Next, copy all the contents of /media/rhel to /var/www/html/kickstart/media: - - # cp -R /media/rhel /var/www/html/kickstart/media - -When you’re done, the directory listing and disk usage of /var/www/html/kickstart/media should look as follows: - -![Kickstart Media Files](http://www.tecmint.com/wp-content/uploads/2015/05/Kickstart-media-Files.png) - -Kickstart Media Files - -Now we’re ready to kick off the kickstart installation. 
- -Regardless of how you choose to create the kickstart file, it’s always a good idea to check its syntax before proceeding with the installation. To do that, install the pykickstart package. - - # yum update && yum install pykickstart - -And then use the ksvalidator utility to check the file: - - # ksvalidator /var/www/html/kickstart/tecmint.bin - -If the syntax is correct, you will not get any output, whereas if there’s an error in the file, you will get a warning notice indicating the line where the syntax is not correct or unknown. - -### Performing a Kickstart Installation ### - -To start, boot your client using the rhel-server-7.0-x86_64-boot.iso file. When the initial screen appears, select Install Red Hat Enterprise Linux 7.0 and press the Tab key to append the following stanza and press Enter: - - # inst.ks=http://192.168.0.18/kickstart/tecmint.bin - -![RHEL Kickstart Installation](http://www.tecmint.com/wp-content/uploads/2015/05/RHEL-Kickstart-Installation.png) - -RHEL Kickstart Installation - -Where tecmint.bin is the kickstart file created earlier. - -When you press Enter, the automated installation will begin, and you will see the list of packages that are being installed (the number and the names will differ depending on your choice of programs and package groups): - -![Automatic Kickstart Installation of RHEL 7](http://www.tecmint.com/wp-content/uploads/2015/05/Kickstart-Automatic-Installation.png) - -Automatic Kickstart Installation of RHEL 7 - -When the automated process ends, you will be prompted to remove the installation media and then you will be able to boot into your newly installed system: - -![RHEL 7 Boot Screen](http://www.tecmint.com/wp-content/uploads/2015/05/RHEL-7.png) - -RHEL 7 Boot Screen - -Although you can create your kickstart files manually as we mentioned earlier, you should consider using the recommended approach whenever possible. 
You can either use the online configuration tool, or the anaconda-ks.cfg file that is created by the installation process in root’s home directory. - -This file actually is a kickstart file, so you may want to install the first box manually with all the desired options (maybe modify the logical volumes layout or the file system on top of each one) and then use the resulting anaconda-ks.cfg file to automate the installation of the rest. - -In addition, using the online configuration tool or the anaconda-ks.cfg file to guide future installations will allow you to perform them using an encrypted root password out-of-the-box. - -### Conclusion ### - -Now that you know how to create kickstart files and how to use them to automate the installation of Red Hat Enterprise Linux 7 servers, you can forget about babysitting the installation process. This will give you time to do other things, or perhaps some leisure time if you’re lucky. - -Either way, let us know what you think about this article using the form below. Questions are also welcome! 
- -Read Also: [Automated Installations of Multiple RHEL/CentOS 7 Distributions using PXE and Kickstart][2] - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/automatic-rhel-installations-using-kickstart/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:https://access.redhat.com/labs/kickstartconfig/ -[2]:http://www.tecmint.com/multiple-centos-installations-using-kickstart/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md b/translated/tech/RHCSA/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md new file mode 100644 index 0000000000..25102ad8f9 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md @@ -0,0 +1,142 @@ +RHCSA 系列: 使用 ‘Kickstart’完成 RHEL 7 的自动化安装 – Part 12 +================================================================================ +无论是在数据中心还是实验室环境,Linux 服务器很少是独立的机子,很可能有时你不得不安装多个以某种方式相互联系的机子。假如你将在单个服务器上手动安装 RHEL 7 所花的时间乘以你需要配置的机子个数,则这将需要你付出相当长时间的努力,而通过使用被称为 kickstart 的无人值守安装工具则可以避免这样的麻烦。 + +在这篇文章中,我们将向你展示使用 kickstart 工具时所需的一切,以便在安装过程中,不用你时不时地照看“处在襁褓中”的服务器。 + +![RHEL 7 的自动化 Kickstart 安装](http://www.tecmint.com/wp-content/uploads/2015/05/Automatic-Kickstart-Installation-of-RHEL-7.jpg) + +RHCSA: RHEL 7 的自动化 Kickstart 安装 + +#### Kickstart 和自动化安装简介 #### + +Kickstart 是一种被用来执行无人值守操作系统安装和配置的自动化安装方法,主要被 RHEL(和其他 Fedora 的副产品,如 CentOS,Oracle Linux 等)所使用。因此,kickstart 安装方法可使得系统管理员只需考虑需要安装的软件包组和系统的配置,便可以得到相同的系统,从而省去必须手动安装这些软件包的麻烦。 + +### 准备一次 Kickstart 安装 ### + +要执行一次 kickstart 安装,我们需要遵循下面的这些步骤: + +1. 创建一个 Kickstart 文件,它是一个带有多个预定义配置选项的纯文本文件。 + +2.
使得 Kickstart 文件在可移动介质上可得,如一个硬盘或一个网络位置。客户端将使用 `rhel-server-7.0-x86_64-boot.iso` 镜像文件,而你还需要使得完全的 ISO 镜像(`rhel-server-7.0-x86_64-dvd.iso`)可从一个网络资源上获取得到,例如一个 HTTP 或 FTP 服务器(在我们当前的例子中,我们将使用另一个 IP 地址为 192.168.0.18 的 RHEL 7 机子)。 + +3. 开始 Kickstart 安装。 + +为创建一个 kickstart 文件,请登录你的红帽客户门户网站帐户,并使用 [Kickstart 配置工具][1] 来选择所需的安装选项。在向下滑动之前请仔细阅读每个选项,然后选择最适合你需求的选项: + +![Kickstart 配置工具](http://www.tecmint.com/wp-content/uploads/2015/05/Kickstart-Configuration-Tool.png) + +Kickstart 配置工具 + +假如你指定安装将通过 HTTP,FTP,NFS 来执行,请确保服务器上的防火墙允许这些服务通过。 + +尽管你可以使用红帽的在线工具来创建一个 kickstart 文件,但你还可以使用下面的代码来作为参考手动地创建它。例如,你可以注意到,下面的代码指定了安装过程将使用英语环境,使用拉丁美洲键盘布局,并设定时区为 America/Argentina/San_Luis 时区: + + lang en_US + keyboard la-latin1 + timezone America/Argentina/San_Luis --isUtc + rootpw $1$5sOtDvRo$In4KTmX7OmcOW9HUvWtfn0 --iscrypted + #platform x86, AMD64, or Intel EM64T + text + url --url=http://192.168.0.18//kickstart/media + bootloader --location=mbr --append="rhgb quiet crashkernel=auto" + zerombr + clearpart --all --initlabel + autopart + auth --passalgo=sha512 --useshadow + selinux --enforcing + firewall --enabled + firstboot --disable + %packages + @base + @backup-server + @print-server + %end + +在上面的在线配置工具中,在选择以 HTTP 来作为安装源后,设置好在安装过程中使用 192.168.0.18 来作为 HTTP 服务器的地址,`/kickstart/tecmint.bin` 作为 HTTP 目录。 + +在上面的 kickstart 示例文件中,你需要特别注意 + + url --url=http://192.168.0.18//kickstart/media + +这个目录是你解压 DVD 或 ISO 安装介质的地方。在执行解压之前,我们将把 ISO 安装文件作为一个回环设备挂载到 /media/rhel 目录下: + + # mount -o loop /var/www/html/kickstart/rhel-server-7.0-x86_64-dvd.iso /media/rhel + +![挂载 RHEL ISO 镜像](http://www.tecmint.com/wp-content/uploads/2015/05/Mount-RHEL-ISO-Image.png) + +挂载 RHEL ISO 镜像 + +接下来,复制 /media/rhel 中的全部文件到 /var/www/html/kickstart/media 目录: + + # cp -R /media/rhel /var/www/html/kickstart/media + +这一步做完后,/var/www/html/kickstart/media 目录中的文件列表和磁盘使用情况将如下所示: + +![Kickstart 媒体文件](http://www.tecmint.com/wp-content/uploads/2015/05/Kickstart-media-Files.png) + +Kickstart 媒体文件 + +现在,我们已经准备好开始 kickstart 安装了。 + +不管你如何选择创建
kickstart 文件的方式,在执行安装之前检查这个文件的语法总是一个不错的主意。为此,我们需要安装 pykickstart 软件包。 + + # yum update && yum install pykickstart + +然后使用 ksvalidator 工具来检查这个文件: + + # ksvalidator /var/www/html/kickstart/tecmint.bin + +假如文件中的语法正确,你将不会得到任何输出,反之,假如文件中存在错误,你得到警告,向你提示在某一行中语法不正确或出错原因未知。 + +### 执行一次 Kickstart 安装 ### + +首先,使用 rhel-server-7.0-x86_64-boot.iso 来启动你的客户端。当初始屏幕出现时,选择安装 RHEL 7.0 ,然后按 Tab 键来追加下面这一句,接着按 Enter 键: + + # inst.ks=http://192.168.0.18/kickstart/tecmint.bin + +![RHEL Kickstart 安装](http://www.tecmint.com/wp-content/uploads/2015/05/RHEL-Kickstart-Installation.png) + +RHEL Kickstart 安装 + +其中 tecmint.bin 是先前创建的 kickstart 文件。 + +当你按了 Enter 键后,自动安装就开始了,且你将看到一个列有正在被安装的软件的列表(软件包的数目和名称根据你所选择的程序和软件包组而有所不同): + +![RHEL 7 的自动化 Kickstart 安装](http://www.tecmint.com/wp-content/uploads/2015/05/Kickstart-Automatic-Installation.png) + +RHEL 7 的自动化 Kickstart 安装 + +当自动化过程结束后,将提示你移除安装介质,接着你就可以启动到你新安装的系统中了: + +![RHEL 7 启动屏幕](http://www.tecmint.com/wp-content/uploads/2015/05/RHEL-7.png) + +RHEL 7 启动屏幕 + +尽管你可以像我们前面提到的那样,手动地创建你的 kickstart 文件,但你应该尽可能地考虑使用受推荐的方式:你可以使用在线配置工具,或者使用在安装过程中创建的位于 root 家目录下的 anaconda-ks.cfg 文件。 + +这个文件实际上就是一个 kickstart 文件,所以你或许想在选择好所有所需的选项(可能需要更改逻辑卷布局或机子上所用的文件系统)后手动地安装第一个机子,接着使用产生的 anaconda-ks.cfg 文件来自动完成其余机子的安装过程。 + +另外,使用在线配置工具或 anaconda-ks.cfg 文件来引导将来的安装将允许你使用一个加密的 root 密码来执行系统的安装。 + +### 总结 ### + +既然你知道了如何创建 kickstart 文件并如何使用它们来自动完成 RHEL 7 服务器的安装,你就可以忘记时时照看安装进度的过程了。这将给你时间来做其他的事情,或者若你足够幸运,你还可以用来休闲一番。 + +无论以何种方式,请使用下面的评论栏来让我们知晓你对这篇文章的看法。提问也同样欢迎! 
+ +另外,请阅读:[使用 PXE 和 kickstart 来自动化安装多个 RHEL/CentOS 7 发行版本][2] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/automatic-rhel-installations-using-kickstart/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:https://access.redhat.com/labs/kickstartconfig/ +[2]:http://www.tecmint.com/multiple-centos-installations-using-kickstart/ From 3c041e32c40f4c6f7faf663c448d49ae234ac9a6 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Wed, 9 Sep 2015 17:47:24 +0800 Subject: [PATCH 465/697] Update RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...datory Access Control Essentials with SELinux in RHEL 7.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md b/sources/tech/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md index 1a0d08df8f..8d014dcc2e 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Mandatory Access Control Essentials with SELinux in RHEL 7 – Part 13 ================================================================================ During this series we have explored in detail at least two access control methods: standard ugo/rwx permissions ([Manage Users and Groups – Part 3][1]) and access control lists ([Configure ACL’s on File Systems – Part 7][2]). 
@@ -173,4 +175,4 @@ via: http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ [2]:http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ [3]:http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network-services-in-rhel-7/ [4]:https://www.nsa.gov/research/selinux/index.shtml -[5]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/SELinux_Users_and_Administrators_Guide/part_I-SELinux.html \ No newline at end of file +[5]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/SELinux_Users_and_Administrators_Guide/part_I-SELinux.html From 5c72d9096c46ebe86dfe59efcb8a9eb9a45d7cbd Mon Sep 17 00:00:00 2001 From: Yu Ye Date: Wed, 9 Sep 2015 18:22:24 +0800 Subject: [PATCH 466/697] Delete 20150826 Five Super Cool Open Source Games.md --- ...50826 Five Super Cool Open Source Games.md | 66 ------------------- 1 file changed, 66 deletions(-) delete mode 100644 sources/share/20150826 Five Super Cool Open Source Games.md diff --git a/sources/share/20150826 Five Super Cool Open Source Games.md b/sources/share/20150826 Five Super Cool Open Source Games.md deleted file mode 100644 index 0d3d3c8bfd..0000000000 --- a/sources/share/20150826 Five Super Cool Open Source Games.md +++ /dev/null @@ -1,66 +0,0 @@ -Translating by H-mudcup -Five Super Cool Open Source Games -================================================================================ -In 2014 and 2015, Linux became home to a list of popular commercial titles such as the popular Borderlands, Witcher, Dead Island, and Counter Strike series of games. While this is exciting news, what of the gamer on a budget? Commercial titles are good, but even better are free-to-play alternatives made by developers who know what players like. - -Some time ago, I came across a three year old YouTube video with the ever optimistic title [5 Open Source Games that Don’t Suck][1]. 
Although the video praises some open source games, I’d prefer to approach the subject with a bit more enthusiasm, at least as far as the title goes. So, here’s my list of five super cool open source games. - -### Tux Racer ### - -![Tux Racer](http://fossforce.com/wp-content/uploads/2015/08/tuxracer-550x413.jpg) - -Tux Racer - -[Tux Racer][2] is the first game on this list because I’ve had plenty of experience with it. On a [recent trip to Mexico][3] that my brother and I took with [Kids on Computers][4], Tux Racer was one of the games that kids and teachers alike enjoyed. In this game, players use the Linux mascot, the penguin Tux, to race on downhill ski slopes in time trials in which players challenge their own personal bests. Currently there’s no multiplayer version available, but that could be subject to change. Available for Linux, OS X, Windows, and Android. - -### Warsow ### - -![Warsow](http://fossforce.com/wp-content/uploads/2015/08/warsow-550x413.jpg) - -Warsow - -The [Warsow][5] website explains: “Set in a futuristic cartoonish world, Warsow is a completely free fast-paced first-person shooter (FPS) for Windows, Linux and Mac OS X. Warsow is the Art of Respect and Sportsmanship Over the Web.” I was reluctant to include games from the FPS genre on this list, because many have played games in this genre, but I was amused by Warsow. It prioritizes lots of movement and the game is fast paced with a set of eight weapons to start with. The cartoonish style makes playing feel less serious and more casual, something for friends and family to play together. However, it boasts competitive play, and when I experienced the game I found there were, indeed, some expert players around. Available for Linux, Windows and OS X. - -### M.A.R.S – A ridiculous shooter ### - -![M.A.R.S. - A ridiculous shooter](http://fossforce.com/wp-content/uploads/2015/08/MARS-screenshot-550x344.jpg) - -M.A.R.S. 
– A ridiculous shooter - -[M.A.R.S – A ridiculous shooter][6] is appealing because of it’s vibrant coloring and style. There is support for two players on the same keyboard, but an online multiplayer version is currently in the works — meaning plans to play with friends have to wait for now. Regardless, it’s an entertaining space shooter with a few different ships and weapons to play as. There are different shaped ships, ranging from shotguns, lasers, scattered shots and more (one of the random ships shot bubbles at my opponents, which was funny amid the chaotic gameplay). There are a few modes of play, such as the standard death match against opponents to score a certain limit or score high, along with other modes called Spaceball, Grave-itation Pit and Cannon Keep. Available for Linux, Windows and OS X. - -### Valyria Tear ### - -![Valyria Tear](http://fossforce.com/wp-content/uploads/2015/08/bronnan-jump-to-enemy-550x413.jpg) - -Valyria Tear - -[Valyria Tear][7] resembles many fan favorite role-playing games (RPGs) spanning the years. The story is set in the usual era of fantasy games, full of knights, kingdoms and wizardry, and follows the main character Bronann. The design team did great work in designing the world and gives players everything expected from the genre: hidden chests, random monster encounters, non-player character (NPC) interaction, and something no RPG would be complete without: grinding for experience on lower level slime monsters until you’re ready for the big bosses. When I gave it a try, time didn’t permit me to play too far into the campaign, but for those interested there is a ‘[Let’s Play][8]‘ series by YouTube user Yohann Ferriera. Available for Linux, Windows and OS X. - -### SuperTuxKart ### - -![SuperTuxKart](http://fossforce.com/wp-content/uploads/2015/08/hacienda_tux_antarctica-550x293.jpg) - -SuperTuxKart - -Last but not least is [SuperTuxKart][9], a clone of Mario Kart that is every bit as fun as the original. 
It started development around 2000-2004 as Tux Kart, but there were errors in its production which led to a cease in development for a few years. Since development picked up again in 2006, it’s been improving, with version 0.9 debuting four months ago. In the game, our old friend Tux starts in the role of Mario and a few other open source mascots. One recognizable face among them is Suzanne, the monkey mascot for Blender. The graphics are solid and gameplay is fluent. While online play is in the planning stages, split screen multiplayer action is available, with up to four players supported on a single computer. Available for Linux, Windows, OS X, AmigaOS 4, AROS and MorphOS. - --------------------------------------------------------------------------------- - -via: http://fossforce.com/2015/08/five-super-cool-open-source-games/ - -作者:Hunter Banks -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.youtube.com/watch?v=BEKVl-XtOP8 -[2]:http://tuxracer.sourceforge.net/download.html -[3]:http://fossforce.com/2015/07/banks-family-values-texas-linux-fest/ -[4]:http://www.kidsoncomputers.org/an-amazing-week-in-oaxaca -[5]:https://www.warsow.net/download -[6]:http://mars-game.sourceforge.net/ -[7]:http://valyriatear.blogspot.com/ -[8]:https://www.youtube.com/channel/UCQ5KrSk9EqcT_JixWY2RyMA -[9]:http://supertuxkart.sourceforge.net/ From a3d742ca44e570546154c23a383b69ad7bd6271c Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 9 Sep 2015 21:24:28 +0800 Subject: [PATCH 467/697] PUB:20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7 @geekpi --- ...TOP (IT Operational Portal) on CentOS 7.md | 25 ++++++++++--------- 1 file changed, 13 insertions(+), 12 deletions(-) rename {translated/tech => published}/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md (83%) diff --git a/translated/tech/20150730 How to Setup iTOP (IT 
Operational Portal) on CentOS 7.md b/published/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md
similarity index 83%
rename from translated/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md
rename to published/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md
index dd20493d77..a9adc3e68c 100644
--- a/translated/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md
+++ b/published/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md
@@ -1,16 +1,17 @@
-如何在CentOS上安装iTOP(IT操作门户)
+如何在 CentOS 7 上安装开源 ITIL 门户 iTOP
 ================================================================================
-iTOP简单来说是一个简单的基于网络的开源IT服务管理工具。它有所有的ITIL功能包括服务台、配置管理、事件管理、问题管理、更改管理和服务管理。iTOP依赖于Apache/IIS、MySQL和PHP,因此它可以运行在任何支持这些软件的操作系统中。因为iTOP是一个网络程序,因此你不必在用户的PC端任何客户端程序。一个简单的浏览器就足够每天的IT环境操作了。
+
+iTOP是一个简单的基于Web的开源IT服务管理工具。它有所有的ITIL功能,包括服务台、配置管理、事件管理、问题管理、变更管理和服务管理。iTOP依赖于Apache/IIS、MySQL和PHP,因此它可以运行在任何支持这些软件的操作系统中。因为iTOP是一个Web程序,因此你不必在用户的PC端安装任何客户端程序。一个简单的浏览器就足够每天的IT环境操作了。

 我们要在一台有满足基本需求的LAMP环境的CentOS 7上安装和配置iTOP。

 ### 下载 iTOP ###

-iTOP的下载包现在在SOurceForge上,我们可以从这获取它的官方[链接][1]。
+iTOP的下载包现在在SourceForge上,我们可以从这获取它的官方[链接][1]。

 ![itop download](http://blog.linoxide.com/wp-content/uploads/2015/07/1-itop-download.png)

-我们从这里的连接用wget命令获取压缩文件
+我们从这里的链接用wget命令获取压缩文件。
@@ -40,7 +41,7 @@ iTOP的下载包现在在SOurceForge上,我们可以从这获取它的官方[
 installation.xml itop-change-mgmt-itil itop-incident-mgmt-itil itop-request-mgmt-itil itop-tickets
 itop-attachments itop-config itop-knownerror-mgmt itop-service-mgmt itop-virtualization-mgmt

-在解压的目录下,通过不同的数据模型用复制命令迁移需要的扩展从datamodels复制到web扩展目录下。
+在解压的目录下,使用如下的 cp 命令将不同的数据模型从web 下的 datamodels 目录下复制到 extensions 目录,来迁移需要的扩展。

 [root@centos-7 2.x]# pwd
 /var/www/html/itop/web/datamodels/2.x

@@ -50,19 +51,19 @@ iTOP的下载包现在在SOurceForge上,我们可以从这获取它的官方[
 大多数服务端设置和配置已经完成了。最后我们安装web界面来完成安装。

-打开浏览器使用ip地址或者FQDN来访问WordPress
web目录。 +打开浏览器使用ip地址或者完整域名来访问iTop 的 web目录。 http://servers_ip_address/itop/web/ 你会被重定向到iTOP的web安装页面。让我们按照要求配置,就像在这篇教程中做的那样。 -#### 先决要求验证 #### +#### 验证先决要求 #### 这一步你就会看到验证完成的欢迎界面。如果你看到了一些警告信息,你需要先安装这些软件来解决这些问题。 ![mcrypt missing](http://blog.linoxide.com/wp-content/uploads/2015/07/2-itop-web-install.png) -这一步一个叫php mcrypt的可选包丢失了。下载下面的rpm包接着尝试安装php mcrypt包。 +这一步有一个叫php mcrypt的可选包丢失了。下载下面的rpm包接着尝试安装php mcrypt包。 [root@centos-7 ~]#yum localinstall php-mcrypt-5.3.3-1.el6.x86_64.rpm libmcrypt-2.5.8-9.el6.x86_64.rpm. @@ -76,7 +77,7 @@ iTOP的下载包现在在SOurceForge上,我们可以从这获取它的官方[ #### iTop 许可协议 #### -勾选同意iTOP所有组件的许可协议并点击“NEXT”。 +勾选接受 iTOP所有组件的许可协议,并点击“NEXT”。 ![License Agreement](http://blog.linoxide.com/wp-content/uploads/2015/07/4.png) @@ -94,7 +95,7 @@ iTOP的下载包现在在SOurceForge上,我们可以从这获取它的官方[ #### 杂项参数 #### -让我们选择额外的参数来选择你是否需要安装一个演示内容或者使用全新的数据库,接着下一步。 +让我们选择额外的参数来选择你是否需要安装一个带有演示内容的数据库或者使用全新的数据库,接着下一步。 ![Misc Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/7.png) @@ -118,7 +119,7 @@ iTOP的下载包现在在SOurceForge上,我们可以从这获取它的官方[ #### 改变管理选项 #### -选择不同的ticket类型以便管理可用选项中的IT设备更改。我们选择ITTL更改管理选项。 +选择不同的ticket类型以便管理可用选项中的IT设备变更。我们选择ITTL变更管理选项。 ![ITIL Change](http://blog.linoxide.com/wp-content/uploads/2015/07/11.png) @@ -166,7 +167,7 @@ via: http://linoxide.com/tools/setup-itop-centos-7/ 作者:[Kashif Siddique][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 4c81fb6b8fc58ca6d035d8f55c113cd31fe40074 Mon Sep 17 00:00:00 2001 From: wi-cuckoo Date: Wed, 9 Sep 2015 21:54:49 +0800 Subject: [PATCH 468/697] translating wi-cuckoo --- ...w to Download Install and Configure Plank Dock in Ubuntu.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md b/sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md index 
4f0a5f9ea1..2ebefa9297 100644 --- a/sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md +++ b/sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md @@ -1,3 +1,4 @@ +translating wi-cuckoo How to Download, Install, and Configure Plank Dock in Ubuntu ================================================================================ It’s a well-known fact that Linux is extremely customizable with users having a lot of options to choose from – be it the operating systems’ various distributions or desktop environments available for a single distro. Like users of any other OS, Linux users also have different tastes and preferences, especially when it comes to desktop. @@ -63,4 +64,4 @@ via: https://www.maketecheasier.com/download-install-configure-plank-dock-ubuntu 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.maketecheasier.com/author/himanshu/ -[1]:https://launchpad.net/plank \ No newline at end of file +[1]:https://launchpad.net/plank From 11d65975963636f0da80bff5b86f316a27d57b8f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Thu, 10 Sep 2015 04:49:24 +0800 Subject: [PATCH 469/697] Update RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md --- ...ks with Cron and Monitoring System Logs.md | 112 +++++++++--------- 1 file changed, 55 insertions(+), 57 deletions(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md b/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md index 307ec72515..180b8f8f2b 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package 
Management, Automating Tasks with Cron and Monitoring System Logs.md
@@ -1,67 +1,67 @@
 [xiqingongzi translating]
-RHCSA Series: Yum Package Management, Automating Tasks with Cron and Monitoring System Logs – Part 10
+RHCSA Series: Yum 包管理, 自动任务计划和系统监控日志 – Part 10
 ================================================================================
-In this article we will review how to install, update, and remove packages in Red Hat Enterprise Linux 7. We will also cover how to automate tasks using cron, and will finish this guide explaining how to locate and interpret system logs files with the focus of teaching you why all of these are essential skills for every system administrator.
+在这篇文章中,我们将回顾如何在RHEL 7中安装、更新和删除软件包。我们还将介绍如何使用cron安排自动化任务,最后讲解如何查找和查看系统日志文件,以及为什么这些技能是每个系统管理员的必备技能。

 ![Yum Package Management Cron Jobs Log Monitoring Linux](http://www.tecmint.com/wp-content/uploads/2015/05/Yum-Package-Management-Cron-Job-Log-Monitoring-Linux.jpg)

-RHCSA: Yum Package Management, Cron Job Scheduling and Log Monitoring – Part 10
+RHCSA: Yum包管理, 任务计划和系统监控 – 第十章

-### Managing Packages Via Yum ###
+### 使用yum 管理包 ###

-To install a package along with all its dependencies that are not already installed, you will use:
+要安装一个包以及所有尚未安装的依赖包,您可以使用:

 	# yum -y install package_name(s)

-Where package_name(s) represent at least one real package name.
+其中 package_name(s) 需要是至少一个真实的包名

-For example, to install httpd and mlocate (in that order), type.
+例如,要安装httpd和mlocate(按此顺序),输入以下命令:

 	# yum -y install httpd mlocate

-**Note**: That the letter y in the example above bypasses the confirmation prompts that yum presents before performing the actual download and installation of the requested programs. You can leave it out if you want.
+**注意**: 上例中的字母y表示绕过yum在实际下载和安装程序前的确认提示,如果愿意,你可以省略它。

-By default, yum will install the package with the architecture that matches the OS architecture, unless overridden by appending the package architecture to its name.
+默认情况下,yum将安装与操作系统体系结构相匹配的包,除非在包名后面加上架构名来覆盖这一行为。

-For example, on a 64 bit system, yum install package will install the x86_64 version of package, whereas yum install package.x86 (if available) will install the 32-bit one.
+例如,在64位系统上,yum install package将安装x86_64版本的包,而yum install package.x86(如果有的话)将安装32位版本。

-There will be times when you want to install a package but don’t know its exact name. The search all or search options can search the currently enabled repositories for a certain keyword in the package name and/or in its description as well, respectively.
+有时,你想安装一个包,但不知道它的确切名称。这时可以用search或search all选项,分别在当前启用的软件仓库中按包名搜索,或连同包的描述一起搜索。

 For example,

 	# yum search log

-will search the installed repositories for packages with the word log in their names and summaries, whereas
+将在软件仓库中搜索名字和摘要中含有log一词的软件包,而

 	# yum search all log

-will look for the same keyword in the package description and url fields as well.
+还会在包的描述和网址字段中查找相同的关键字

-Once the search returns a package listing, you may want to display further information about some of them before installing. That is when the info option will come in handy:
+一旦搜索返回包列表,您可能希望在安装前显示其中一些包的更多信息。这时info选项派上用场:

 	# yum info logwatch

 ![Search Package Information](http://www.tecmint.com/wp-content/uploads/2015/05/Search-Package-Information.png)

-Search Package Information
+搜索包信息

-You can regularly check for updates with the following command:
+您可以定期用以下命令检查更新:

 	# yum check-update

-The above command will return all the installed packages for which an update is available. In the example shown in the image below, only rhel-7-server-rpms has an update available:
+上述命令将返回可以更新的所有安装包。在下图所示的例子中,只有rhel-7-server-rpms有可用更新:

 ![Check For Package Updates](http://www.tecmint.com/wp-content/uploads/2015/05/Check-For-Updates.png)

 Check For Package Updates

-You can then update that package alone with,
+然后,您可以更新该包,

 	# yum update rhel-7-server-rpms

-If there are several packages that can be updated, yum update will update all of them at once.
+如果有几个包可以一同更新,yum update 将一次性更新所有的包

-Now what happens when you know the name of an executable, such as ps2pdf, but don’t know which package provides it? You can find out with `yum whatprovides “*/[executable]”`:
+现在,如果你知道某个可执行文件的名称(如ps2pdf),却不知道它是哪个包提供的,该怎么办?你可以通过 `yum whatprovides “*/[executable]”`找到:

 	# yum whatprovides “*/ps2pdf”

 ![Find Package Belongs to Which Package](http://www.tecmint.com/wp-content/uploads/2015/05/Find-Package-Information.png)

 Find Package Belongs to Which Package

-Now, when it comes to removing a package, you can do so with yum remove package. Easy, huh? This goes to show that yum is a complete and powerful package manager.
+要删除一个包时,你可以使用 yum remove package,很简单吧?这说明yum是一个完整而强大的包管理器。

 	# yum remove httpd

 Read Also: [20 Yum Commands to Manage RHEL 7 Package Management][1]

 ### Good Old Plain RPM ###

-RPM (aka RPM Package Manager, or originally RedHat Package Manager) can also be used to install or update packages when they come in form of standalone `.rpm` packages.
-
-It is often utilized with the `-Uvh` flags to indicate that it should install the package if it’s not already present or attempt to update it if it’s installed `(-U)`, producing a verbose output `(-v)` and a progress bar with hash marks `(-h)` while the operation is being performed. For example,
+RPM(即RPM Package Manager,原名RedHat Package Manager)也可用于安装或更新以独立`.rpm`文件形式提供的软件包。
+它往往和`-Uvh`选项一起使用,表示包不存在时安装它、已安装时尝试更新(`-U`),并在执行过程中显示详细输出(`-v`)和以井号表示的进度条(`-h`)。例如
 	# rpm -Uvh package.rpm

-Another typical use of rpm is to produce a list of currently installed packages with code>rpm -qa (short for query all):
+rpm的另一个典型用法是用 rpm -qa(query all的缩写)列出当前已安装的全部软件包:

 	# rpm -qa

 ![Query All RPM Packages](http://www.tecmint.com/wp-content/uploads/2015/05/Query-All-RPM-Packages.png)

 Query All RPM Packages

 Read Also: [20 RPM Commands to Install Packages in RHEL 7][2]

 ### Scheduling Tasks using Cron ###

-Linux and other Unix-like operating systems include a tool called cron that allows you to schedule tasks (i.e. commands or shell scripts) to run on a periodic basis. Cron checks every minute the /var/spool/cron directory for files which are named after accounts in /etc/passwd.
+Linux和类UNIX操作系统都包含一个叫做cron的工具,它允许你周期性地安排任务(即命令或shell脚本)运行。cron每分钟检查一次/var/spool/cron目录中以/etc/passwd帐户文件中的用户名命名的那些文件。

-When executing commands, any output is mailed to the owner of the crontab (or to the user specified in the MAILTO environment variable in the /etc/crontab, if it exists).
+执行命令时,所有输出会以邮件形式发送给crontab的所有者(或/etc/crontab中MAILTO环境变量所指定的用户,如果它存在的话)。

-Crontab files (which are created by typing crontab -e and pressing Enter) have the following format:
+crontab文件(通过键入crontab -e并按回车键创建)的格式如下:

 ![Crontab Entries](http://www.tecmint.com/wp-content/uploads/2015/05/Crontab-Format.png)

 Crontab Entries

-Thus, if we want to update the local file database (which is used by locate to find files by name or pattern) every second day of the month at 2:15 am, we need to add the following crontab entry:
+因此,如果我们想在每月2号凌晨2:15更新本地文件数据库(locate用它来按名称或模式查找文件),就需要添加以下crontab条目:

 	15 02 2 * * /bin/updatedb

-The above crontab entry reads, “Run /bin/updatedb on the second day of the month, every month of the year, regardless of the day of the week, at 2:15 am”. As I’m sure you already guessed, the star symbol is used as a wildcard character.
+以上条目的意思是:“每年每月的第2天凌晨2:15运行 /bin/updatedb,不管是周几”。我想你已经猜到了,星号用作通配符。

-After adding a cron job, you can see that a file named root was added inside /var/spool/cron, as we mentioned earlier. That file lists all the tasks that the crond daemon should run:
+添加cron作业后,你会看到/var/spool/cron下多了一个名为root的文件,正如前面提到的那样。该文件列出了crond守护进程要运行的所有任务:

 	# ls -l /var/spool/cron

 ![Check All Cron Jobs](http://www.tecmint.com/wp-content/uploads/2015/05/Check-All-Cron-Jobs.png)

 Check All Cron Jobs

-In the above image, the current user’s crontab can be displayed either using cat /var/spool/cron/root or,
+如上图所示,可以使用 cat /var/spool/cron/root 或如下命令来显示当前用户的crontab:

 	# crontab -l

-If you need to run a task on a more fine-grained basis (for example, twice a day or three times each month), cron can also help you to do that.
+如果你需要以更细的粒度运行任务(例如,一天两次或每月三次),cron也可以帮到你。

-For example, to run /my/script on the 1st and 15th of each month and send any output to /dev/null, you can add two crontab entries as follows:
+例如,要在每月1号和15号运行 /my/script 并将输出重定向到 /dev/null,您可以添加如下两个crontab条目:

 	01 00 1 * * /myscript > /dev/null 2>&1
 	01 00 15 * * /my/script > /dev/null 2>&1

-But in order for the task to be easier to maintain, you can combine both entries into one:
+不过为了便于维护,你可以将两个条目合并为一个:

 	01 00 1,15 * * /my/script > /dev/null 2>&1
-
-Following the previous example, we can run /my/other/script at 1:30 am on the first day of the month every three months:
+继续前面的例子,我们可以每三个月在当月第一天的凌晨1:30运行一次 /my/other/script:

 	30 01 1 1,4,7,10 * /my/other/script > /dev/null 2>&1

-But when you have to repeat a certain task every “x” minutes, hours, days, or months, you can divide the right position by the desired frequency. The following crontab entry has the exact same meaning as the previous one:
+而当你需要每隔 x 分钟、小时、天或月重复某个任务时,可以用所需的频率去除相应的时间字段。下面的crontab条目与前一个含义完全相同:

 	30 01 1 */3 * /my/other/script > /dev/null 2>&1

-Or perhaps you need to run a certain job on a fixed frequency or after the system boots, for example. You can use one of the following string instead of the five fields to indicate the exact time when you want your job to run:
+又或者,你需要以某个固定频率或在系统启动后运行某项任务。这时,你可以用下列字符串之一代替那五个时间字段,来指定任务运行的确切时间:

-	@reboot Run when the system boots.
-	@yearly Run once a year, same as 00 00 1 1 *.
-	@monthly Run once a month, same as 00 00 1 * *.
-	@weekly Run once a week, same as 00 00 * * 0.
-	@daily Run once a day, same as 00 00 * * *.
-	@hourly Run once an hour, same as 00 * * * *.
+	@reboot 仅系统启动时运行.
+	@yearly 一年一次, 类似于 00 00 1 1 *.
+	@monthly 一月一次, 类似于 00 00 1 * *.
+	@weekly 一周一次, 类似于 00 00 * * 0.
+	@daily 一天一次, 类似于 00 00 * * *.
+	@hourly 一小时一次, 类似于 00 * * * *.

 Read Also: [11 Commands to Schedule Cron Jobs in RHEL 7][3]

-### Locating and Checking Logs ###
+### 定位和查看日志 ###

-System logs are located (and rotated) inside the /var/log directory. According to the Linux Filesystem Hierarchy Standard, this directory contains miscellaneous log files, which are written to it or an appropriate subdirectory (such as audit, httpd, or samba in the image below) by the corresponding daemons during system operation:
+系统日志存放(并轮转)在 /var/log 目录中。根据Linux文件系统层次标准,这个目录包含各种日志文件,它们由相应的守护进程在系统运行期间写入该目录或其中适当的子目录(如下图中的 audit、httpd 和 samba):

 	# ls /var/log

 ![Linux Log Files Location](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-Log-Files.png)

-Linux Log Files Location
+Linux 日志定位

-Other interesting logs are [dmesg][4] (contains all messages from kernel ring buffer), secure (logs connection attempts that require user authentication), messages (system-wide messages) and wtmp (records of all user logins and logouts).
+其他值得关注的日志还有 [dmesg][4](包含内核环形缓冲区的所有消息)、secure(记录需要用户认证的连接尝试)、messages(系统级消息)和 wtmp(记录所有用户的登录与登出)。

-Logs are very important in that they allow you to have a glimpse of what is going on at all times in your system, and what has happened in the past. They represent a priceless tool to troubleshoot and monitor a Linux server, and thus are often used with the `tail -f command` to display events, in real time, as they happen and are recorded in a log.
+日志非常重要,它们让你可以了解任何时刻系统中正在发生的事情,甚至是过去发生过的事情。日志是排查和监控Linux服务器的无价工具,因此经常配合 `tail -f` 命令使用,以便在事件发生并写入日志时实时显示。

-For example, if you want to display kernel-related events, type the following command:
+举个例子,如果你想查看与内核相关的日志,输入如下命令:

 	# tail -f /var/log/dmesg

-Same if you want to view access to your web server:
+同样,如果你想查看Web服务器的访问日志,输入如下命令:

 	# tail -f /var/log/httpd/access.log

-### Summary ###
+### 总结 ###

-If you know how to efficiently manage packages, schedule tasks, and where to look for information about the current and past operation of your system you can rest assure that you will not run into surprises very often. I hope this article has helped you learn or refresh your knowledge about these basic skills.
-
-Don’t hesitate to drop us a line using the contact form below if you have any questions or comments.
+如果你知道如何高效地管理软件包、安排计划任务,并且知道到哪里查找系统当前和过去的运行信息,你就大可放心,不会经常碰到意外状况。希望这篇文章能帮你学习或温习这些基础知识。
+如果你有任何问题或意见,请使用下面的表格反馈给我们。
--------------------------------------------------------------------------------

 via: http://www.tecmint.com/yum-package-management-cron-job-scheduling-monitoring-linux-logs/

 作者:[Gabriel Cánepa][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[xiqingongzi](https://github.com/xiqingongzi)
 校对:[校对者ID](https://github.com/校对者ID)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

 [a]:http://www.tecmint.com/author/gacanepa/
 [1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
 [2]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/
 [3]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
 [4]:http://www.tecmint.com/dmesg-commands/
+

From ce276ff0c7ea8a43320fd759944dec53d3a3a160 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?=
Date: Thu, 10 Sep 2015 04:53:01 +0800
Subject: [PATCH 470/697] Update RHCSA Series--Part 10--Yum Package
 Management, Automating Tasks with Cron and Monitoring System Logs.md

---
 ...asks with Cron and Monitoring System Logs.md | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md b/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md
index 180b8f8f2b..3456361c0c 100644
--- a/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md
+++ b/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md
@@ -27,7 +27,7 @@ RHCSA: Yum包管理, 任务计划和系统监控 – 第十章

 有时,你想安装一个包,但不知道它的确切名称。这时可以用search或search all选项,分别在当前启用的软件仓库中按包名搜索,或连同包的描述一起搜索。

-For example,
+比如,

 	# yum search log

@@ -52,8 +52,7 @@ For example,
 上述命令将返回可以更新的所有安装包。在下图所示的例子中,只有rhel-7-server-rpms有可用更新:

 ![Check For Package Updates](http://www.tecmint.com/wp-content/uploads/2015/05/Check-For-Updates.png)
- 
-Check For Package Updates
+检查包更新

 然后,您可以更新该包,

@@ -67,7 +66,7 @@ Check For Package Updates

 ![Find Package Belongs to Which Package](http://www.tecmint.com/wp-content/uploads/2015/05/Find-Package-Information.png)

-Find Package Belongs to Which Package
+查找文件属于哪个包

 要删除一个包时,你可以使用 yum remove package,很简单吧?这说明yum是一个完整而强大的包管理器。

@@ -75,7 +74,7 @@ Find Package Belongs to Which Package
 Read Also: [20 Yum Commands to Manage RHEL 7 Package Management][1]

-### Good Old Plain RPM ###
+### 文本式RPM工具 ###

 RPM(即RPM Package Manager,原名RedHat Package Manager)也可用于安装或更新以独立`.rpm`文件形式提供的软件包。

@@ -88,11 +87,11 @@ RPM(即RPM Package Manager,原名RedHat Package Manager)也可用于

 ![Query All RPM Packages](http://www.tecmint.com/wp-content/uploads/2015/05/Query-All-RPM-Packages.png)

-Query All RPM Packages
+查询所有包

 Read Also: [20 RPM Commands to Install Packages in RHEL 7][2]

-### Scheduling Tasks using Cron ###
+### Cron任务计划 ###

 Linux和类UNIX操作系统都包含一个叫做cron的工具,它允许你周期性地安排任务(即命令或shell脚本)运行。cron每分钟检查一次/var/spool/cron目录中以/etc/passwd帐户文件中的用户名命名的那些文件。

 crontab文件(通过键入crontab -e并按回车键创建)的格式如下:

@@ -102,7 +101,7 @@ crontab文件(通过键入crontab -e并按回车键创建)的格式如下:

 ![Crontab Entries](http://www.tecmint.com/wp-content/uploads/2015/05/Crontab-Format.png)

-Crontab Entries
+crontab条目

 因此,如果我们想在每月2号凌晨2:15更新本地文件数据库(locate用它来按名称或模式查找文件),就需要添加以下crontab条目:

@@ -116,7 +115,7 @@

 ![Check All Cron Jobs](http://www.tecmint.com/wp-content/uploads/2015/05/Check-All-Cron-Jobs.png)

-Check All Cron Jobs
+检查所有cron工作

 如上图所示,可以使用 cat /var/spool/cron/root 或如下命令来显示当前用户的crontab:

From 9de303c7a85642e20d19139fc984d69e945e4a32 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?=
Date: Thu, 10 Sep 2015 04:53:24 +0800
Subject: [PATCH 471/697] Create RHCSA Series--Part 10--Yum Package
 Management, Automating Tasks with Cron and Monitoring System Logs.md

---
 ...ks with Cron and Monitoring System Logs.md | 195 ++++++++++++++++++
 1 file changed, 195 insertions(+)
 create mode 100644
translated/tech/RHCSA/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md

diff --git a/translated/tech/RHCSA/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md b/translated/tech/RHCSA/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md
new file mode 100644
index 0000000000..3456361c0c
--- /dev/null
+++ b/translated/tech/RHCSA/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md
@@ -0,0 +1,195 @@
+[xiqingongzi translating]
+RHCSA Series: Yum 包管理, 自动任务计划和系统监控日志 – Part 10
+================================================================================
+在这篇文章中,我们将回顾如何在RHEL 7中安装、更新和删除软件包。我们还将介绍如何使用cron安排自动化任务,最后讲解如何查找和查看系统日志文件,以及为什么这些技能是每个系统管理员的必备技能。
+
+![Yum Package Management Cron Jobs Log Monitoring Linux](http://www.tecmint.com/wp-content/uploads/2015/05/Yum-Package-Management-Cron-Job-Log-Monitoring-Linux.jpg)
+
+RHCSA: Yum包管理, 任务计划和系统监控 – 第十章
+
+### 使用yum 管理包 ###
+
+要安装一个包以及所有尚未安装的依赖包,您可以使用:
+
+	# yum -y install package_name(s)
+
+其中 package_name(s) 需要是至少一个真实的包名
+
+例如,要安装httpd和mlocate(按此顺序),输入以下命令:
+
+	# yum -y install httpd mlocate
+
+**注意**: 上例中的字母y表示绕过yum在实际下载和安装程序前的确认提示,如果愿意,你可以省略它。
+
+默认情况下,yum将安装与操作系统体系结构相匹配的包,除非在包名后面加上架构名来覆盖这一行为。
+
+例如,在64位系统上,yum install package将安装x86_64版本的包,而yum install package.x86(如果有的话)将安装32位版本。
+
+有时,你想安装一个包,但不知道它的确切名称。这时可以用search或search all选项,分别在当前启用的软件仓库中按包名搜索,或连同包的描述一起搜索。
+
+比如,
+
+	# yum search log
+
+将在软件仓库中搜索名字和摘要中含有log一词的软件包,而
+
+	# yum search all log
+
+还会在包的描述和网址字段中查找相同的关键字
+
+一旦搜索返回包列表,您可能希望在安装前显示其中一些包的更多信息。这时info选项派上用场:
+
+	# yum info logwatch
+
+![Search Package Information](http://www.tecmint.com/wp-content/uploads/2015/05/Search-Package-Information.png)
+
+搜索包信息
+
+您可以定期用以下命令检查更新:
+
+	# yum check-update
+
+上述命令将返回可以更新的所有安装包。在下图所示的例子中,只有rhel-7-server-rpms有可用更新:
+
+![Check For Package Updates](http://www.tecmint.com/wp-content/uploads/2015/05/Check-For-Updates.png)
+检查包更新
+
+然后,您可以更新该包,
+
+	# yum update rhel-7-server-rpms
+
+如果有几个包可以一同更新,yum update 将一次性更新所有的包
+
+现在,如果你知道某个可执行文件的名称(如ps2pdf),却不知道它是哪个包提供的,该怎么办?你可以通过 `yum whatprovides “*/[executable]”`找到:
+
+	# yum whatprovides “*/ps2pdf”
+
+![Find Package Belongs to Which Package](http://www.tecmint.com/wp-content/uploads/2015/05/Find-Package-Information.png)
+
+查找文件属于哪个包
+
+要删除一个包时,你可以使用 yum remove package,很简单吧?这说明yum是一个完整而强大的包管理器。
+
+	# yum remove httpd
+
+Read Also: [20 Yum Commands to Manage RHEL 7 Package Management][1]
+
+### 文本式RPM工具 ###
+
+RPM(即RPM Package Manager,原名RedHat Package Manager)也可用于安装或更新以独立`.rpm`文件形式提供的软件包。
+
+它往往和`-Uvh`选项一起使用,表示包不存在时安装它、已安装时尝试更新(`-U`),并在执行过程中显示详细输出(`-v`)和以井号表示的进度条(`-h`)。例如
+	# rpm -Uvh package.rpm
+
+rpm的另一个典型用法是用 rpm -qa(query all的缩写)列出当前已安装的全部软件包:
+
+	# rpm -qa
+
+![Query All RPM Packages](http://www.tecmint.com/wp-content/uploads/2015/05/Query-All-RPM-Packages.png)
+
+查询所有包
+
+Read Also: [20 RPM Commands to Install Packages in RHEL 7][2]
+
+### Cron任务计划 ###
+
+Linux和类UNIX操作系统都包含一个叫做cron的工具,它允许你周期性地安排任务(即命令或shell脚本)运行。cron每分钟检查一次/var/spool/cron目录中以/etc/passwd帐户文件中的用户名命名的那些文件。
+
+执行命令时,所有输出会以邮件形式发送给crontab的所有者(或/etc/crontab中MAILTO环境变量所指定的用户,如果它存在的话)。
+
+crontab文件(通过键入crontab -e并按回车键创建)的格式如下:
+
+![Crontab Entries](http://www.tecmint.com/wp-content/uploads/2015/05/Crontab-Format.png)
+
+crontab条目
+
+因此,如果我们想在每月2号凌晨2:15更新本地文件数据库(locate用它来按名称或模式查找文件),就需要添加以下crontab条目:
+
+	15 02 2 * * /bin/updatedb
+
+以上条目的意思是:“每年每月的第2天凌晨2:15运行 /bin/updatedb,不管是周几”。我想你已经猜到了,星号用作通配符。
+
+添加cron作业后,你会看到/var/spool/cron下多了一个名为root的文件,正如前面提到的那样。该文件列出了crond守护进程要运行的所有任务:
+
+	# ls -l /var/spool/cron
+
+![Check All Cron Jobs](http://www.tecmint.com/wp-content/uploads/2015/05/Check-All-Cron-Jobs.png)
+
+检查所有cron工作
+
+如上图所示,可以使用 cat /var/spool/cron/root 或如下命令来显示当前用户的crontab:
+
+	# crontab -l
+
+如果你需要以更细的粒度运行任务(例如,一天两次或每月三次),cron也可以帮到你。
+
+例如,要在每月1号和15号运行 /my/script 并将输出重定向到 /dev/null,您可以添加如下两个crontab条目:
+
+	01 00 1 * * /myscript > /dev/null 2>&1
+	01 00 15 * * /my/script > /dev/null 2>&1
+
+不过为了便于维护,你可以将两个条目合并为一个:
+
+	01 00 1,15 * * /my/script > /dev/null 2>&1
+继续前面的例子,我们可以每三个月在当月第一天的凌晨1:30运行一次 /my/other/script:
+
+	30 01 1 1,4,7,10 * /my/other/script > /dev/null 2>&1
+
+而当你需要每隔 x 分钟、小时、天或月重复某个任务时,可以用所需的频率去除相应的时间字段。下面的crontab条目与前一个含义完全相同:
+
+	30 01 1 */3 * /my/other/script > /dev/null 2>&1
+
+又或者,你需要以某个固定频率或在系统启动后运行某项任务。这时,你可以用下列字符串之一代替那五个时间字段,来指定任务运行的确切时间:
+
+	@reboot 仅系统启动时运行.
+	@yearly 一年一次, 类似于 00 00 1 1 *.
+	@monthly 一月一次, 类似于 00 00 1 * *.
+	@weekly 一周一次, 类似于 00 00 * * 0.
+	@daily 一天一次, 类似于 00 00 * * *.
+	@hourly 一小时一次, 类似于 00 * * * *.
+
+Read Also: [11 Commands to Schedule Cron Jobs in RHEL 7][3]
+
+### 定位和查看日志 ###
+
+系统日志存放(并轮转)在 /var/log 目录中。根据Linux文件系统层次标准,这个目录包含各种日志文件,它们由相应的守护进程在系统运行期间写入该目录或其中适当的子目录(如下图中的 audit、httpd 和 samba):
+
+	# ls /var/log
+
+![Linux Log Files Location](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-Log-Files.png)
+
+Linux 日志定位
+
+其他值得关注的日志还有 [dmesg][4](包含内核环形缓冲区的所有消息)、secure(记录需要用户认证的连接尝试)、messages(系统级消息)和 wtmp(记录所有用户的登录与登出)。
+
+日志非常重要,它们让你可以了解任何时刻系统中正在发生的事情,甚至是过去发生过的事情。日志是排查和监控Linux服务器的无价工具,因此经常配合 `tail -f` 命令使用,以便在事件发生并写入日志时实时显示。
+
+举个例子,如果你想查看与内核相关的日志,输入如下命令:
+
+	# tail -f /var/log/dmesg
+
+同样,如果你想查看Web服务器的访问日志,输入如下命令:
+
+	# tail -f /var/log/httpd/access.log
+
+### 总结 ###
+
+如果你知道如何高效地管理软件包、安排计划任务,并且知道到哪里查找系统当前和过去的运行信息,你就大可放心,不会经常碰到意外状况。希望这篇文章能帮你学习或温习这些基础知识。
+
+如果你有任何问题或意见,请使用下面的表格反馈给我们。
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/yum-package-management-cron-job-scheduling-monitoring-linux-logs/
+
+作者:[Gabriel Cánepa][a]
+译者:[xiqingongzi](https://github.com/xiqingongzi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
+[2]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/
+[3]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
+[4]:http://www.tecmint.com/dmesg-commands/
+

From f9c22b0115b4a5335ce83143939e722f1a92cdf7 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?=
Date: Thu, 10 Sep 2015 04:53:39 +0800
Subject: [PATCH 472/697] Delete RHCSA Series--Part 10--Yum Package
 Management, Automating Tasks with Cron and Monitoring System Logs.md

---
 ...ks with Cron and Monitoring System Logs.md | 195 ------------------
 1 file changed, 195 deletions(-)
 delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md

diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md b/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md
deleted file mode 100644
index 3456361c0c..0000000000
--- a/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md
+++ /dev/null
@@ -1,195 +0,0 @@
-[xiqingongzi translating]
-RHCSA Series: Yum 包管理, 自动任务计划和系统监控日志 – Part 10
-================================================================================
-在这篇文章中,我们将回顾如何在RHEL 7中安装、更新和删除软件包。我们还将介绍如何使用cron安排自动化任务,最后讲解如何查找和查看系统日志文件,以及为什么这些技能是每个系统管理员的必备技能。
-
-![Yum Package Management Cron Jobs Log Monitoring Linux](http://www.tecmint.com/wp-content/uploads/2015/05/Yum-Package-Management-Cron-Job-Log-Monitoring-Linux.jpg)
-
-RHCSA: Yum包管理, 任务计划和系统监控 – 第十章
-
-### 使用yum 管理包 ###
-
-要安装一个包以及所有尚未安装的依赖包,您可以使用:
-
-	# yum -y install package_name(s)
-
-其中 package_name(s) 需要是至少一个真实的包名
-
-例如,要安装httpd和mlocate(按此顺序),输入以下命令:
-
-	# yum -y install httpd mlocate
-
-**注意**: 上例中的字母y表示绕过yum在实际下载和安装程序前的确认提示,如果愿意,你可以省略它。
-
-默认情况下,yum将安装与操作系统体系结构相匹配的包,除非在包名后面加上架构名来覆盖这一行为。
-
-例如,在64位系统上,yum install package将安装x86_64版本的包,而yum install package.x86(如果有的话)将安装32位版本。
-
-有时,你想安装一个包,但不知道它的确切名称。这时可以用search或search all选项,分别在当前启用的软件仓库中按包名搜索,或连同包的描述一起搜索。
-
-比如,
-
-	# yum search log
-
-将在软件仓库中搜索名字和摘要中含有log一词的软件包,而
-
- # yum search all log - -也将在包描述和网址中寻找寻找相同的关键字 - -一旦搜索返回包列表,您可能希望在安装前显示一些信息。这时info选项派上用场: - - # yum info logwatch - -![Search Package Information](http://www.tecmint.com/wp-content/uploads/2015/05/Search-Package-Information.png) - -搜索包信息 - -您可以定期用以下命令检查更新: - - # yum check-update - -上述命令将返回可以更新的所有安装包。在下图所示的例子中,只有rhel-7-server-rpms有可用更新: - -![Check For Package Updates](http://www.tecmint.com/wp-content/uploads/2015/05/Check-For-Updates.png) -检查包更新 - -然后,您可以更新该包, - - # yum update rhel-7-server-rpms - -如果有几个包,可以一同更新,yum update 将一次性更新所有的包 - -现在,当你知道一个可执行文件的名称,如ps2pdf,但不知道那个包提供了它?你可以通过 `yum whatprovides “*/[executable]”`找到: - - # yum whatprovides “*/ps2pdf” - -![Find Package Belongs to Which Package](http://www.tecmint.com/wp-content/uploads/2015/05/Find-Package-Information.png) - -查找文件属于哪个包 - -现在,当删除包时,你可以使用 yum remove Package ,很简单吧?Yum 是一个完整的强大的包管理器。 - - # yum remove httpd - -Read Also: [20 Yum Commands to Manage RHEL 7 Package Management][1] - -### 文本式RPM工具 ### - -RPM(又名RPM包管理器,或原本RedHat软件包管理器)也可用于安装或更新软件包来当他们在独立`rpm`包装形式。 - -往往使用`-Uvh` 表面这个包应该被安装而不是已存在或尝试更新。安装是`-U` ,显示详细输出用`-v`,显示进度条用`-h` 例如 - # rpm -Uvh package.rpm - -另一个典型的使用rpm 是产生一个列表,目前安装的软件包的code > rpm -qa(缩写查询所有) - - # rpm -qa - -![Query All RPM Packages](http://www.tecmint.com/wp-content/uploads/2015/05/Query-All-RPM-Packages.png) - -查询所有包 - -Read Also: [20 RPM Commands to Install Packages in RHEL 7][2] - -### Cron任务计划 ### - -Linux和UNIX类操作系统包括其他的工具称为Cron允许你安排任务(即命令或shell脚本)运行在周期性的基础上。每分钟定时检查/var/spool/cron目录中有在/etc/passwd帐户文件中指定名称的文件。 - -执行命令时,输出是发送到crontab的所有者(或者在/etc/crontab,在MailTO环境变量中指定的用户,如果它存在的话)。 - -crontab文件(这是通过键入crontab e和按Enter键创建)的格式如下: - -![Crontab Entries](http://www.tecmint.com/wp-content/uploads/2015/05/Crontab-Format.png) - -crontab条目 - -因此,如果我们想更新本地文件数据库(这是用于定位文件或图案)每个初二日上午2:15,我们需要添加以下crontab条目: - - 15 02 2 * * /bin/updatedb - -以上的条目写着:”每年每月第二天的凌晨2:15运行 /bin/updatedb“ 无论是周几”,我想你也猜到了。星号作为通配符 - -添加一个cron作业后,你可以看到一个文件名为root被添加在/var/spool/cron,正如我们前面所提到的。该文件列出了所有的crond守护进程运行的任务: - - # ls -l 
/var/spool/cron - -![Check All Cron Jobs](http://www.tecmint.com/wp-content/uploads/2015/05/Check-All-Cron-Jobs.png) - -检查所有cron工作 - -在上图中,显示当前用户的crontab可以使用 cat /var/spool/cron 或 - - # crontab -l - -如果你需要在一个更精细的时间上运行的任务(例如,一天两次或每月三次),cron也可以帮助你。 - -例如,每个月1号和15号运行 /my/script 并将输出导出到 /dev/null,您可以添加如下两个crontab条目: - - 01 00 1 * * /myscript > /dev/null 2>&1 - 01 00 15 * * /my/script > /dev/null 2>&1 - -不过为了简单,你可以将他们合并 - - 01 00 1,15 * * /my/script > /dev/null 2>&1 -在前面的例子中,我们可以在每三个月的第一天的凌晨1:30运行 /my/other/script . - - 30 01 1 1,4,7,10 * /my/other/script > /dev/null 2>&1 - -但是当你必须每一个“十”分钟,数小时,数天或数月的重复某个任务时,你可以通过所需的频率来划分正确的时间。以下为前一个crontab条目具有相同的意义: - - 30 01 1 */3 * /my/other/script > /dev/null 2>&1 - -或者也许你需要在一个固定的时间段或系统启动后运行某个固定的工作,例如。你可以使用下列五个字符串中的一个字符串来指示你想让你的任务计划工作的确切时间: - - @reboot 仅系统启动时运行. - @yearly 一年一次, 类似与 00 00 1 1 *. - @monthly 一月一次, 类似与 00 00 1 * *. - @weekly 一周一次, 类似与 00 00 * * 0. - @daily 一天一次, 类似与 00 00 * * *. - @hourly 一小时一次, 类似与 00 * * * *. - -Read Also: [11 Commands to Schedule Cron Jobs in RHEL 7][3] - -### 定位和查看日志### - -系统日志存放在 /var/log 目录.根据Linux的文件系统层次标准,这个目录包括各种日志文件,并包含一些必要的子目录(如 audit, httpd, 或 samba ,如下图),并由相应的系统守护进程操作 - - # ls /var/log - -![Linux Log Files Location](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-Log-Files.png) - -Linux 日志定位 - -其他有趣的日志比如 [dmesg][4](包括了所有内核缓冲区的信息),安全(用户认证尝试链接),信息(系统信息),和wtmp(记录了所有用户的登录登出) - -日志是非常重要的,他们让你可以看到是任何时刻发生在你的系统的事情,甚至是已经过去的事情。他们是无价的工具,解决和监测一个Linux服务器,并因此经常使用的 “tail -f command ”来实时显示正在发生并实时写入的事件。 - -举个例子,如果你想看你的内核的日志,你需要输入如下命令 - - # tail -f /var/log/dmesg - -同样的,如果你想查看你的网络服务器日志,你需要输入如下命令 - - # tail -f /var/log/httpd/access.log - -### 总结 ### - -如果你知道如何有效的管理包,安排任务,以及知道在哪寻找系统当前和过去操作的信息,你可以放心你将不会总是有太多的惊喜。我希望这篇文章能够帮你学习或回顾这些基础知识。 - -如果你有任何问题或意见,请使用下面的表格反馈给我们。 --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/yum-package-management-cron-job-scheduling-monitoring-linux-logs/ - -作者:[Gabriel Cánepa][a] 
-译者:[xiqingongzi](https://github.com/xiqingongzi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[2]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ -[3]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/ -[4]:http://www.tecmint.com/dmesg-commands/ - From 08173b346e4c6b25a346c62ab9ea2e03bcb8b173 Mon Sep 17 00:00:00 2001 From: joeren Date: Thu, 10 Sep 2015 08:03:32 +0800 Subject: [PATCH 473/697] Update 20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md --- ...al and Statistical Uptime of System With tuptime Utility.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md b/sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md index d40f1b26af..4611471fa6 100644 --- a/sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md +++ b/sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md @@ -1,3 +1,4 @@ +Translating by GOLinux! 
Linux Server See the Historical and Statistical Uptime of System With tuptime Utility ================================================================================ You can use the following tools to see how long system has been running on a Linux or Unix-like system: @@ -149,4 +150,4 @@ via: http://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-o 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [1]:http://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-on-linux-server/ -[2]:http://www.cyberciti.biz/faq/debian-ubunut-linux-download-a-git-repository/ \ No newline at end of file +[2]:http://www.cyberciti.biz/faq/debian-ubunut-linux-download-a-git-repository/ From f4532bfea9acccae2134a76d2c41612a861b9c8b Mon Sep 17 00:00:00 2001 From: GOLinux Date: Thu, 10 Sep 2015 08:36:09 +0800 Subject: [PATCH 474/697] [Translated]20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md --- ...l Uptime of System With tuptime Utility.md | 79 +++++++++---------- 1 file changed, 38 insertions(+), 41 deletions(-) rename {sources => translated}/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md (62%) diff --git a/sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md b/translated/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md similarity index 62% rename from sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md rename to translated/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md index 4611471fa6..0d242c0be2 100644 --- a/sources/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md +++ b/translated/tech/20150909 Linux Server See the Historical and 
Statistical Uptime of System With tuptime Utility.md @@ -1,15 +1,13 @@ -Translating by GOLinux! -Linux Server See the Historical and Statistical Uptime of System With tuptime Utility +使用tuptime工具查看Linux服务器系统历史开机时间统计 ================================================================================ -You can use the following tools to see how long system has been running on a Linux or Unix-like system: +你们可以使用下面的工具来查看Linux或者类Unix系统运行了多长时间: +- uptime : 告诉你服务器运行了多长的时间。 +- lastt : 显示重启和关机时间。 +- tuptime : 报告系统的历史运行时间和统计运行时间,这是指重启之间的运行时间。和uptime命令类似,不过输出结果更有意思。 -- uptime : Tell how long the server has been running. -- lastt : Show the reboot and shutdown time. -- tuptime : Report the historical and statistical running time of system, keeping it between restarts. Like uptime command but with more interesting output. +#### 找出系统上次重启时间和日期 #### -#### Finding out the system last reboot time and date #### - -You [can use the following commands to get the last reboot and shutdown time and date on a Linux][1] operating system (also works on OSX/Unix-like system): +你[可以使用下面的命令来获取Linux操作系统的上次重启和关机时间及日期][1](在OSX/类Unix系统上也可以用): ## Just show system reboot and shutdown date and time ### who -b @@ -21,28 +19,27 @@ You [can use the following commands to get the last reboot and shutdown time and awk '{ print "up " $1 /60 " minutes"}' /proc/uptime w -**Sample outputs:** +**样例输出:** ![Fig.01: Various Linux commands in action to find out the server uptime](http://s0.cyberciti.org/uploads/cms/2015/09/uptime-w-awk-outputs.jpg) -Fig.01: Various Linux commands in action to find out the server uptime +图像01:用于找出服务器开机时间的多个Linux命令 -**Say hello to tuptime** +**跟tuptime问打个招呼吧** -The tuptime command line tool can report the following information on a Linux based system: +tuptime命令行工具可以报告基于Linux的系统上的下列信息: +1. 系统启动次数统计 +2. 注册首次启动时间(也就是安装时间) +1. 正常关机和意外关机统计 +1. 平均开机时间和故障停机时间 +1. 当前开机时间 +1. 首次启动以来的开机和故障停机率 +1. 累积系统开机时间、故障停机时间和合计 +1. 报告每次启动、开机时间、关机和故障停机时间 -1. Count system startups -1. 
Register first boot time (a.k.a. installation time) -1. Count nicely and accidentally shutdowns -1. Average uptime and downtime -1. Current uptime -1. Uptime and downtime rate since first boot time -1. Accumulated system uptime, downtime and total -1. Report each startup, uptime, shutdown and downtime +#### 安装 #### -#### Installation #### - -Type the [following command to clone a git repo on a Linux operating system][2]: +输入[下面的命令来克隆git仓库到Linux系统中][2]: $ cd /tmp $ git clone https://github.com/rfrail3/tuptime.git @@ -50,45 +47,45 @@ Type the [following command to clone a git repo on a Linux operating system][2]: $ cd tuptime $ ls -**Sample outputs:** +**样例输出:** ![Fig.02: Cloning a git repo](http://s0.cyberciti.org/uploads/cms/2015/09/git-install-tuptime.jpg) -Fig.02: Cloning a git repo +图像02:克隆git仓库 -Make sure you've Python v2.7 installed with sys, optparse, os, re, string, sqlite3, datetime, disutils, and locale modules. +确保你随sys,optparse,os,re,string,sqlite3,datetime,disutils安装了Python v2.7和本地模块。 -You can simply install it as follows: +你可以像下面这样来安装: $ sudo tuptime-install.sh -OR do a manual installation (recommended method due to systemd or non-systemd based Linux system): +或者,可以手工安装(根据基于systemd或非systemd的Linux的推荐方法): $ sudo cp /tmp/tuptime/latest/cron.d/tuptime /etc/cron.d/tuptime -If is a system with systemd, copy service file and enable it: +如果系统是systemd的,拷贝服务文件并启用: $ sudo cp /tmp/tuptime/latest/systemd/tuptime.service /lib/systemd/system/ $ sudo systemctl enable tuptime.service -If the systemd don't have systemd, copy init file: +如果系统不是systemd的,拷贝初始化文件: $ sudo cp /tmp/tuptime/latest/init.d/tuptime.init.d-debian7 /etc/init.d/tuptime $ sudo update-rc.d tuptime defaults -**Run it** +**运行** -Simply type the following command: +只需输入以下命令: $ sudo tuptime -**Sample outputs:** +**样例输出:** ![Fig.03: tuptime in action](http://s0.cyberciti.org/uploads/cms/2015/09/tuptime-output.jpg) -Fig.03: tuptime in action +图像03:tuptime工作中 -After kernel upgrade I rebooted the box and typed 
the same command again: +在更新内核后,我重启了系统,然后再次输入了同样的命令: $ sudo tuptime System startups: 2 since 03:52:16 PM 08/21/2015 @@ -102,11 +99,11 @@ After kernel upgrade I rebooted the box and typed the same command again: System downtime: 5 hours, 0 minutes and 11 seconds System life: 15 days, 14 hours, 36 minutes and 18 seconds -You can change date and time format as follows: +你可以像下面这样修改日期和时间格式: $ sudo tuptime -d '%H:%M:%S %m-%d-%Y' -**Sample outputs:** +**样例输出:** System startups: 1 since 15:52:16 08-21-2015 System shutdowns: 0 ok - 0 bad @@ -119,11 +116,11 @@ You can change date and time format as follows: System downtime: 0 seconds System life: 15 days, 9 hours, 21 minutes and 19 seconds -Enumerate each startup, uptime, shutdown and downtime: +计算每次启动、开机时间、关机和故障停机时间: $ sudo tuptime -e -**Sample outputs:** +**样例输出:** Startup: 1 at 03:52:16 PM 08/21/2015 Uptime: 15 days, 9 hours, 22 minutes and 33 seconds @@ -144,7 +141,7 @@ Enumerate each startup, uptime, shutdown and downtime: via: http://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-on-linux-server/ 作者:Vivek Gite -译者:[译者ID](https://github.com/译者ID) +译者:[GOLinux](https://github.com/GOLinux) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9495ad072c326d30571cdaacf25a4d1e19e1ca89 Mon Sep 17 00:00:00 2001 From: "Y.C.S.M" Date: Thu, 10 Sep 2015 09:26:38 +0800 Subject: [PATCH 475/697] Update 20150908 List Of 10 Funny Linux Commands.md translate as much as i can by using the working time --- sources/tech/20150908 List Of 10 Funny Linux Commands.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/sources/tech/20150908 List Of 10 Funny Linux Commands.md b/sources/tech/20150908 List Of 10 Funny Linux Commands.md index f3acbe27d3..59464c6497 100644 --- a/sources/tech/20150908 List Of 10 Funny Linux Commands.md +++ b/sources/tech/20150908 List Of 10 Funny Linux Commands.md @@ -2,9 +2,9 @@ translating by 
tnuoccalanosrep List Of 10 Funny Linux Commands ================================================================================ **Working from the Terminal is really fun. Today, we’ll list really funny Linux commands which will bring smile on your face.** - +**在终端工作是一件很有趣的事情。今天,我们将会列举一些有趣得让你笑出来的Linux命令。** ### 1. rev ### - +创建一个文件,在文件里面输入几个单词,rev命令会将你写的东西反转输出到控制台。 Create a file, type some words in this file, rev command will dump all words written by you in reverse. # rev @@ -14,7 +14,7 @@ Create a file, type some words in this file, rev command will dump all words wri ![Selection_001](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0011.png) ### 2. fortune ### - +这个命令没有被默认安装,需要先用apt-get命令安装它,fortune命令会随机显示一些句子。 This command is not install by default, install with apt-get and fortune will di crank@crank-System:~$ sudo apt-get install fortune ![Selection_003](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0031.png) Use **-s** option with fortune, it will limit the out to one sentence.
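对于上面第 1 节介绍的 rev 命令,可以用下面这个一行管道自行验证其效果(此例为编辑补充,并非原文内容;rev 通常随 util-linux 软件包提供):

```shell
# rev 会把输入的每一行按字符反转后输出
echo 'hello world' | rev
# 输出: dlrow olleh
```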
- +利用fortune命令的**_s** 选项,他会限制一个句子的输出长度。 # fortune -s ![Selection_004](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0042.png) From 2dfca1426c8b5ce3aa24939ea65880a0c5f56cc2 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Thu, 10 Sep 2015 09:57:05 +0800 Subject: [PATCH 476/697] Delete 20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md --- ....9.0 Winamp-like Audio Player in Ubuntu.md | 73 ------------------- 1 file changed, 73 deletions(-) delete mode 100644 sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md diff --git a/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md b/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md deleted file mode 100644 index 36e4c70d2c..0000000000 --- a/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md +++ /dev/null @@ -1,73 +0,0 @@ -translation by strugglingyouth -Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu -================================================================================ -![](http://ubuntuhandbook.org/wp-content/uploads/2015/01/qmmp-icon-simple.png) - -Qmmp, Qt-based audio player with winamp or xmms like user interface, now is at 0.9.0 release. PPA updated for Ubuntu 15.10, Ubuntu 15.04, Ubuntu 14.04, Ubuntu 12.04 and derivatives. - -Qmmp 0.9.0 is a big release with many new features, improvements and some translation updates. 
It added: - -- audio-channel sequence converter; -- 9 channels support to equalizer; -- album artist tag support; -- asynchronous sorting; -- sorting by file modification date; -- sorting by album artist; -- multiple column support; -- feature to hide track length; -- feature to disable plugins without qmmp.pri modification (qmake only) -- feature to remember playlist scroll position; -- feature to exclude cue data files; -- feature to change user agent; -- feature to change window title; -- feature to reset fonts; -- feature to restore default shortcuts; -- default hotkey for the “Rename List” action; -- feature to disable fadeout in the gme plugin; -- Simple User Interface (QSUI) with the following changes: - - added multiple column support; - - added sorting by album artist; - - added sorting by file modification date; - - added feature to hide song length; - - added default hotkey for the “Rename List” action; - - added “Save List” action to the tab menu; - - added feature to reset fonts; - - added feature to reset shortcuts; - - improved status bar; - -It also improved playlist changes notification, playlist container, sample rate converter, cmake build scripts, title formatter, ape tags support in the mpeg plugin, fileops plugin, reduced cpu usage, changed default skin (to Glare) and playlist separator. - -![qmmp-090](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-090.jpg) - -### Install Qmmp 0.9.0 in Ubuntu: ### - -New release has been made into PPA, available for all current Ubuntu releases and derivatives. - -1. To add the [Qmmp PPA][1]. - -Open terminal from the Dash, App Launcher, or via Ctrl+Alt+T shortcut keys. When it opens, run command: - - sudo add-apt-repository ppa:forkotov02/ppa - -![qmmp-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-ppa.jpg) - -2. After adding the PPA, upgrade Qmmp player through Software Updater. 
Or refresh system cache and install the software via below commands: - - sudo apt-get update - - sudo apt-get install qmmp qmmp-plugin-pack - -That’s it. Enjoy! - --------------------------------------------------------------------------------- - -via: http://ubuntuhandbook.org/index.php/2015/09/qmmp-0-9-0-in-ubuntu/ - -作者:[Ji m][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ubuntuhandbook.org/index.php/about/ -[1]:https://launchpad.net/~forkotov02/+archive/ubuntu/ppa From dadfcc8755a02573b297e8459b6ecfe011373212 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Thu, 10 Sep 2015 09:58:22 +0800 Subject: [PATCH 477/697] Create 20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md --- ....9.0 Winamp-like Audio Player in Ubuntu.md | 72 +++++++++++++++++++ 1 file changed, 72 insertions(+) create mode 100644 sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md diff --git a/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md b/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md new file mode 100644 index 0000000000..ac07a04b85 --- /dev/null +++ b/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md @@ -0,0 +1,72 @@ +在 Ubuntu 上安装 Qmmp 0.9.0 类似 Winamp 的音频播放器 +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2015/01/qmmp-icon-simple.png) + +Qmmp,基于 Qt 的音频播放器,与 Winamp 或 xmms 的用户界面类似,现在最新版本是0.9.0。PPA 已经在 Ubuntu 15.10,Ubuntu 15.04,Ubuntu 14.04,Ubuntu 12.04 和其衍生物中已经更新了。 + +Qmmp 0.9.0 是一个较大的版本,有许多新的功能,有许多改进和新的转变。它添加了如下功能: + +- 音频-信道序列转换器; +- 9通道支持均衡器; +- 艺术家专辑标签支持; +- 异步排序; +- 通过文件的修改日期排​​序; +- 按艺术家专辑排序; +- 支持多专栏; +- 有隐藏踪迹长度功能; +- 不用修改 qmmp.pri 来禁用插件(仅在 qmake 中)功能 +- 记住播放列表滚动位置功能; +- 排除提示数据文件功能; +- 更改用户代理功能; +- 改变窗口标题功能; +- 
复位字体功能; +- 恢复默认快捷键功能; +- 默认热键为“Rename List”功能; +- 功能禁用弹出的 GME 插件; +- 简单的用户界面(QSUI)有以下变化: + - 增加了多列表的支持; + - 增加了按艺术家专辑排序; + - 增加了按文件的修改日期进行排序; + - 增加了隐藏歌曲长度功能; + - 增加了默认热键为“Rename List”; + - 增加了“Save List”功能到标签菜单; + - 增加了复位字体功能; + - 增加了复位快捷键功能; + - 改进了状态栏; + +它还改进了播放列表的通知,播放列表容器,采样率转换器,cmake 构建脚本,标题格式,在 mpeg 插件中支持 ape 标签,fileops 插件,降低了 cpu 占用率,改变默认的皮肤(炫光)和分离播放列表。 + +![qmmp-090](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-090.jpg) + +### 在 Ubuntu 中安装 Qmmp 0.9.0 : ### + +新版本已经制做了 PPA,适用于目前所有 Ubuntu 发行版和衍生版。 + +1. 添加 [Qmmp PPA][1]. + +从 Dash 中打开终端并启动应用,通过按 Ctrl+Alt+T 快捷键。当它打开时,运行命令: + + sudo add-apt-repository ppa:forkotov02/ppa + +![qmmp-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-ppa.jpg) + +2. 在添加 PPA 后,通过更新软件来升级 Qmmp 播放器。刷新系统缓存,并通过以下命令安装软件: + + sudo apt-get update + + sudo apt-get install qmmp qmmp-plugin-pack + +就是这样。尽情享受吧! + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/09/qmmp-0-9-0-in-ubuntu/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://launchpad.net/~forkotov02/+archive/ubuntu/ppa From 9b132bfcf2a91464be11e399c880ba4a9412247b Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Thu, 10 Sep 2015 09:59:31 +0800 Subject: [PATCH 478/697] Update 20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md --- ...906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md b/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md index ac07a04b85..76f479cb80 100644 --- a/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md +++ 
b/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md @@ -63,7 +63,7 @@ Qmmp 0.9.0 是一个较大的版本,有许多新的功能,有许多改进和 via: http://ubuntuhandbook.org/index.php/2015/09/qmmp-0-9-0-in-ubuntu/ 作者:[Ji m][a] -译者:[译者ID](https://github.com/译者ID) +译者:[strugglingyouth](https://github.com/strugglingyouth) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From cd6462772d5e052aa505209b8d16efecdb204315 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 10 Sep 2015 16:15:54 +0800 Subject: [PATCH 479/697] =?UTF-8?q?20150910-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...e Free Software Foundation--30 years in.md | 149 ++++++++++++++++++ 1 file changed, 149 insertions(+) create mode 100644 sources/talk/20150910 The Free Software Foundation--30 years in.md diff --git a/sources/talk/20150910 The Free Software Foundation--30 years in.md b/sources/talk/20150910 The Free Software Foundation--30 years in.md new file mode 100644 index 0000000000..f782b2e876 --- /dev/null +++ b/sources/talk/20150910 The Free Software Foundation--30 years in.md @@ -0,0 +1,149 @@ +The Free Software Foundation: 30 years in +================================================================================ +![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_general_openfield.png?itok=tcXpYeHi) + +Welcome back, folks, to a new Six Degrees column. As usual, please send your thoughts on this piece to the comment box and your suggestions for future columns to [my inbox][1]. + +Now, I have to be honest with you all, this column went a little differently than I expected. + +A few weeks ago when thinking what to write, I mused over the notion of a piece about the [Free Software Foundation][2] celebrating its 30 year anniversary and how relevant and important its work is in today's computing climate. 
+ +To add some meat I figured I would interview [John Sullivan][3], executive director of the FSF. My plan was typical of many of my pieces: thread together an interesting narrative and quote pieces of the interview to give it color. + +Well, that all went out the window when John sent me a tremendously detailed, thoughtful, and descriptive interview. I decided therefore to present it in full as the main event, and to add some commentary throughout. Thus, this is quite a long column, but I think it paints a fascinating picture of a fascinating organization. I recommend you grab a cup of something delicious and settle in for a solid read. + +### The sands of change ### + +The Free Software Foundation was founded in 1985. To paint a picture of what computing was like back then, the [Amiga 1000][4] was released, C++ was becoming a dominant language, [Aldus PageMaker][5] was announced, and networking was just starting to grow. Oh, and that year [Careless Whisper][6] by Wham! was a major hit. + +Things have changed a lot in 30 years. Back in 1985 the FSF was primarily focused on building free pieces of software that were primarily useful to nerdy computer people. These days we have software, services, social networks, and more to consider. + +I first wanted to get a sense of what John feels are most prominent risks to software freedom today. + +"I think there's widespread agreement on the biggest risks for computer user freedom today, but maybe not on the names for them." + +"The first is what we might as well just call 'tiny computers everywhere.' The free software movement has succeeded to the point where laptops, desktops, and servers can run fully free operating systems doing anything users of proprietary systems can do. There are still a few holes, but they'll be closed. The challenge that remains in this area is to cut through the billion dollar marketing budgets and legal regimes working against us to actually get the systems into users hands." 
+ +"However, we have a serious problem on the set of computers whose primary common trait is that they are very small. Even though a car is not especially small, the computers in it are, so I include that form factor in this category, along with phones, tablets, glasses, watches, and so on. While these computers often have a basis in free software—for example, using the kernel Linux along with other free software like Android or GNU—their primary uses are to run proprietary applications and be shims for services that replace local computing with computing done on a server over which the user has no control. Since these devices serve vital functions, with some being primary means of communication for huge populations, some sitting very close to our bodies and our actual vital functions, some bearing responsibility for our physical safety, it is imperative that they run fully free systems under their users' control. Right now, they don't." + +John feels the risk here is not just the platforms and form factors, but the services integrates into them. + +"The services many of these devices talk to are the second major threat we face. It does us little good booting into a free system if we do our actual work and entertainment on companies' servers running software we have no access to at all. The point of free software is that we can see, modify, and share code. The existence of those freedoms even for nontechnical users provides a shield that prevents companies from controlling us. None of these freedoms exist for users of Facebook or Salesforce or Google Docs. Even more worrisome, we see a trend where people are accepting proprietary restrictions imposed on their local machines in order to have access to certain services. Browsers—including Firefox—are now automatically installing a DRM plugin in order to appease Netflix and other video giants. 
We need to work harder at developing free software decentralized replacements for media distribution that can actually empower users, artists, and user-artists, and for other services as well. For Facebook we have GNU social, pump.io, Diaspora, Movim, and others. For Salesforce, we have CiviCRM. For Google Docs, we have Etherpad. For media, we have GNU MediaGoblin. But all of these projects need more help, and many services don't have any replacement contenders yet." + +It is interesting that John mentions finding free software equivalents for common applications and services today. The FSF maintains a list of "High Priority Projects" that are designed to fill this gap. Unfortunately the capabilities of these projects varies tremendously and in an age where social media is so prominent, the software is only part of the problem: the real challenge is getting people to use it. + +This all begs the question of where the FSF fit in today's modern computing world. I am a fan of the FSF. I think the work they do is valuable and I contribute financially to support it too. They are an important organization for building an open computing culture, but all organizations need to grow, adjust, and adapt, particularly ones in the technology space. + +I wanted to get a better sense of what the FSF is doing today that it wasn't doing at it's inception. + +"We're speaking to a much larger audience than we were 30 years ago, and to a much broader audience. It's no longer just hackers and developers and researchers that need to know about free software. Everyone using a computer does, and it's quickly becoming the case that everyone uses a computer." + +John went on to provide some examples of these efforts. + +"We're doing coordinated public advocacy campaigns on issues of concern to the free software movement. 
Earlier in our history, we expressed opinions on these things, and took action on a handful, but in the last ten years we've put more emphasis on formulating and carrying out coherent campaigns. We've made especially significant noise in the area of Digital Restrictions Management (DRM) with Defective by Design, which I believe played a role in getting iTunes music off DRM (now of course, Apple is bringing DRM back with Apple Music). We've made attractive and useful introductory materials for people new to free software, like our [User Liberation animated video][7] and our [Email Self-Defense Guide][8]. + +We're also endorsing hardware that [respects users' freedoms][9]. Hardware distributors whose devices have been certified by the FSF to contain and require only free software can display a logo saying so. Expanding the base of free software users and the free software movement has two parts: convincing people to care, and then making it possible for them to act on that. Through this initiative, we encourage manufacturers and distributors to do the right thing, and we make it easy for users who have started to care about free software to buy what they need without suffering through hours and hours of research. We've certified a home WiFi router, 3D printers, laptops, and USB WiFi adapters, with more on the way. + +We're collecting all of the free software we can find in our [Free Software Directory][10]. We still have a long way to go on this—we're at only about 15,500 packages right now, and we can imagine many improvements to the design and function of the site—but I think this resource has great potential for helping users find the free software they need, especially users who aren't yet using a full GNU/Linux system. With the dangers inherent in downloading random programs off the Internet, there is a definite need for a curated collection like this. It also happens to provide a wealth of machine-readable data of use to researchers. 
+
+We're acting as the fiscal sponsor for several specific free software projects, enabling them to raise funds for development. Most of these projects are part of GNU (which we continue to provide many kinds of infrastructure for), but we also sponsor [Replicant][11], a fully free fork of Android designed to give users the freest mobile devices currently possible.
+
+We're helping developers use free software licenses properly, and we're following up on complaints about companies that aren't following the terms of the GPL. We help them fix their mistakes and distribute properly. RMS was in fact doing similar work with the precursors of the GPL very early on, but it's now an ongoing part of our work.
+
+Most of the specific things the FSF does now it wasn't doing 30 years ago, but the vision is little changed from the original paperwork—we aim to create a world where everything users want to do on any computer can be done using free software; a world where users control their computers and not the other way around."
+
+### A cult of personality ###
+
+There is little doubt in anyone's mind about the value the FSF brings. As John just highlighted, its efforts span not just the creation and licensing of free software, but also recognizing, certifying, and advocating a culture of freedom in technology.
+
+The head of the FSF is the inimitable Richard M. Stallman, commonly referred to as RMS.
+
+RMS is a curious character. He has demonstrated an unbelievable level of commitment to his ideas, philosophy, and ethical devotion to freedom in software.
+
+While he is sometimes mocked online for his social awkwardness, be it things said in his speeches, his bizarre travel requirements, or other sometimes cringeworthy moments, RMS's perspectives on software and freedom are generally rock-solid. He takes a remarkably consistent approach to his perspectives and he is clearly a careful thinker about not just his own thoughts but the wider movement he is leading.
My only criticism is that I think from time to time he somewhat over-eggs the pudding with the voracity of his words. But hey, given his importance in our world, I would rather take an extra egg than no pudding for anyone. O.K., I get that the whole pudding thing here was strained...
+
+So RMS is a key part of the FSF, but the organization is also much more than that. There are employees, a board, and many contributors. I was curious to see how much of a role RMS plays these days in the FSF. John shared this with me.
+
+"RMS is the FSF's President, and does that work without receiving a salary from the FSF. He continues his grueling global speaking schedule, advocating for free software and computer user freedom in dozens of countries each year. In the course of that, he meets with government officials as well as local activists connected with all varieties of social movements. He also raises funds for the FSF and inspires many people to volunteer."
+
+"In between engagements, he does deep thinking on issues facing the free software movement, and anticipates new challenges. Often this leads to new articles—he wrote a 3-part series for Wired earlier this year about free software and free hardware designs—or new ideas communicated to the FSF's staff as the basis for future projects."
+
+As we delved into the cult of personality, I wanted to tap John's perspectives on how much the free software movement has grown.
+
+I remember being at the [Open Source Think Tank][12] (an event that brings together execs from various open source organizations) and there was a case study where attendees were asked to recommend a license choice for a particular project. The vast majority of break-out groups recommended the Apache Software License (APL) over the GNU General Public License (GPL).
+
+This stuck in my mind because, since then, I have noticed that many companies seem to have opted for open licenses other than the GPL.
I was curious to see if John had noticed a trend towards the APL as opposed to the GPL.
+
+"Has there been? I'm not so sure. I gave a presentation at FOSDEM a few years ago called 'Is Copyleft Being Framed?' that showed some of the problems with the supposed data behind claims of shifts in license adoption. I'll be publishing an article soon on this, but here are some of the major problems:
+
+- Free software license choices do not exist in a vacuum. The number of people choosing proprietary software licenses also needs to be considered in order to draw the kinds of conclusions that people want to draw. I find it much more likely that lax permissive license choices (such as the Apache License or 3-clause BSD) are trading off with proprietary license choices, rather than with the GPL.
+- License counters often, ironically, don't publish the software they use to collect that data as free software. That means we can't inspect their methods or reproduce their results. Some people are now publishing the code they use, but certainly any that don't should be completely disregarded. Science has rules.
+- What counts as a thing with a license? Are we really counting an app under the APL that makes funny noises as 1:1 with GNU Emacs under GPLv3? If not, how do we decide which things to treat as equals? Are we only looking at software that actually works? Are we making sure not to double- and triple-count programs that exist on multiple hosting sites, and what about ports for different OSes?
+
+The question is interesting to ponder, but every conclusion I've seen so far has been extremely premature in light of the actual data. I'd much rather see a survey of developers asking about why they chose particular licenses for their projects than any more of these attempts to programmatically ascertain the license of programs and then ascribe human intentions onto patterns in that data.
+
+Copyleft is as vital as it ever was.
Permissively licensed software is still free software and on its face a good thing, but it is contingent and needs an accompanying strong social commitment to not incorporate it in proprietary software. If free software's major long-term impact is enabling businesses to more efficiently make products that restrict us, then we have achieved nothing for computer user freedom."
+
+### Rising to new challenges ###
+
+30 years is an impressive time for any organization to be around, particularly for one with such important goals that span so many different industries, professions, governments, and cultures.
+
+As I started to wrap up the interview I wanted to get a better sense of what the FSF's primary function is today, 30 years after the mission started.
+
+"I think the FSF is in a very interesting position of both being a steady rock and actively pushing the envelope."
+
+"We have core documents like the [Free Software Definition][13], the [GNU General Public License][14], and the [list we maintain of free and nonfree software licenses][15], which have been keystones in the construction of the world of free software we have today. People place a great deal of trust in us to stay true to the principles outlined in those documents, and to apply them correctly and wisely in our assessments of new products or practices in computing. In this role, we hold the ladder for others to climb. As a 501(c)(3) charity held legally accountable to the public interest, and about 85% funded by individuals, we have the right structure for this."
+
+"But we also push the envelope. We take on challenges that others say are too hard. I guess that means we also build ladders? Or maybe I should stop with the metaphors."
+
+While John may not be great with metaphors (like I am one to talk), the FSF is great at setting a mission and demonstrating a devout commitment to it. This mission starts with a belief that free software should be everywhere.
+
+"We are not satisfied with the idea that you can get a laptop that works with free software except for a few components. We're not satisfied that you can have a tablet that runs a lot of free software, and just uses proprietary software to communicate with networks and to accelerate video and to take pictures and to check in on your flight and to call an Uber and to... Well, we are happy about some such developments for sure, but we are also unhappy about the suggestion that we should be fully content with them. Any proprietary software on a system is both an injustice to the user and inherently a threat to users' security. These almost-free things can be stepping stones on the way to a free world, but only if we keep our feet moving."
+
+"In the early years of the FSF, we actually had to get a free operating system written. This has now been done by GNU and Linux and many collaborators, although there is always more software to write and bugs to fix. So while the FSF does still sponsor free software development in specific areas, there are thankfully many other organizations also doing this."
+
+A key part of the challenge John is referring to is getting the right hardware into the hands of the right people.
+
+"What we have been focusing on now are the challenges I highlighted in the first question. We are in desperate need of hardware in several different areas that fully supports free software. We have been talking a lot at the FSF about what we can do to address this, and I expect us to be making some significant moves to both increase our support for some of the projects already out there—as we have been doing to some extent through our Respects Your Freedom certification program—and possibly to launch some projects of our own. The same goes for the network service problem.
I think we need to tackle them together, because having full control over the mobile components has great potential for changing how we relate to services, and decentralizing more and more services will in turn shape the mobile components."
+
+"I hope folks will support the FSF as we work to grow and tackle these challenges. Hardware is expensive and difficult, as is making usable, decentralized, federated replacements for network services. We're going to need the resources and creativity of a lot of people. But, 30 years ago, a community rallied around RMS and the concept of copyleft to write an entire operating system. I've spent my last 12 years at the FSF because I believe we can rise to the new challenges in the same way."
+
+### Final thoughts ###
+
+In reading John's thoughtful responses to my questions, and in knowing various FSF members, the one sense that resonates for me is the sheer level of passion that is alive and kicking in the FSF. This is not an organization that has got bored or disillusioned with its mission. Its passion and commitment are as voracious as they have ever been.
+
+While I don't always agree with the FSF and I think its approach can be a little one-dimensional at times, I have been and will continue to be a huge fan and supporter of its work. The FSF represents the ethical heartbeat of much of the free software and open source work that happens across the world. It represents a world view that is pretty hard to the left, but I believe its passion and conviction help to bring people further to the right a little closer to the left too.
+
+Sure, RMS can be odd, somewhat hardline, and a little sensational, but he is precisely the kind of leader that is valuable in a movement that encapsulates a mixture of technology, ethics, and culture. We need an RMS in much the same way we need a Torvalds, a Shuttleworth, a Whitehurst, and a Zemlin.
These different people bring together a mixture of perspectives that ultimately maps to technology that can be adapted to almost any set of use cases, ethics, and ambitions.
+
+So, in closing, I want to thank the FSF for its tremendous efforts, and I wish the FSF and its fearless leaders, one Richard M. Stallman and one John Sullivan, another 30 years of fighting the good fight. Go get 'em!
+
+> This article is part of Jono Bacon's Six Degrees column, where he shares his thoughts and perspectives on culture, communities, and trends in open source.
+
+--------------------------------------------------------------------------------
+
+via: http://opensource.com/business/15/9/free-software-foundation-30-years
+
+作者:[Jono Bacon][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://opensource.com/users/jonobacon
+[1]:Welcome back, folks, to a new Six Degrees column. As usual, please send your thoughts on this piece to the comment box and your suggestions for future columns to my inbox.
+[2]:http://www.fsf.org/ +[3]:http://twitter.com/johns_fsf/ +[4]:https://en.wikipedia.org/wiki/Amiga_1000 +[5]:https://en.wikipedia.org/wiki/Adobe_PageMaker +[6]:https://www.youtube.com/watch?v=izGwDsrQ1eQ +[7]:http://fsf.org/ +[8]:http://emailselfdefense.fsf.org/ +[9]:http://fsf.org/ryf +[10]:http://directory.fsf.org/ +[11]:http://www.replicant.us/ +[12]:http://www.osthinktank.com/ +[13]:http://www.fsf.org/about/what-is-free-software +[14]:http://www.gnu.org/licenses/gpl-3.0.en.html +[15]:http://www.gnu.org/licenses/licenses.en.html \ No newline at end of file From 0382f3fb8e1e058f61bea6b1e5cf4f0134da21ef Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 10 Sep 2015 16:48:01 +0800 Subject: [PATCH 480/697] =?UTF-8?q?20150910-2=20=E9=80=89=E9=A2=98=20?= =?UTF-8?q?=E4=B8=8A=E7=99=BE=E4=B8=AA=E9=93=BE=E6=8E=A5=E7=9A=84=E5=B7=A8?= =?UTF-8?q?=E7=AF=87=E5=95=8A?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... of the world's best living programmers.md | 389 ++++++++++++++++++ 1 file changed, 389 insertions(+) create mode 100644 sources/talk/20150909 Superclass--15 of the world's best living programmers.md diff --git a/sources/talk/20150909 Superclass--15 of the world's best living programmers.md b/sources/talk/20150909 Superclass--15 of the world's best living programmers.md new file mode 100644 index 0000000000..70a9803b10 --- /dev/null +++ b/sources/talk/20150909 Superclass--15 of the world's best living programmers.md @@ -0,0 +1,389 @@ +Superclass: 15 of the world’s best living programmers +================================================================================ +When developers discuss who the world’s top programmer is, these names tend to come up a lot. + +![](http://images.techhive.com/images/article/2015/09/superman-620x465-100611650-orig.jpg) + +Image courtesy [tom_bullock CC BY 2.0][1] + +It seems like there are lots of programmers out there these days, and lots of really good programmers. 
But which one is the very best?
+
+Even though there’s no way to really say who the best living programmer is, that hasn’t stopped developers from frequently kicking the topic around. ITworld has solicited input and scoured coder discussion forums to see if there was any consensus. As it turned out, a handful of names did frequently get mentioned in these discussions.
+
+![](http://images.techhive.com/images/article/2015/09/margaret_hamilton-620x465-100611764-orig.jpg)
+
+Image courtesy [NASA][2]
+
+### Margaret Hamilton ###
+
+**Main claim to fame: The brains behind Apollo’s flight control software**
+
+Credentials: As the Director of the Software Engineering Division at Charles Stark Draper Laboratory, she headed up the team which [designed and built][3] the on-board [flight control software for NASA’s Apollo][4] and Skylab missions. Based on her Apollo work, she later developed the [Universal Systems Language][5] and [Development Before the Fact][6] paradigm. Pioneered the concepts of [asynchronous software, priority scheduling, and ultra-reliable software design][7]. Coined the term “[software engineering][8].” Winner of the [Augusta Ada Lovelace Award][9] in 1986 and [NASA’s Exceptional Space Act Award in 2003][10].
+
+Quotes: “Hamilton invented testing, she pretty much formalised Computer Engineering in the US.” [ford_beeblebrox][11]
+
+“I think before her (and without disrespect including Knuth) computer programming was (and to an extent remains) a branch of mathematics. However a flight control system for a spacecraft clearly moves programming into a different paradigm.” [Dan Allen][12]
+
+“...
she originated the term ‘software engineering’ — and offered a great example of how to do it.” [David Hamilton][13] + +“What a badass” [Drukered][14] + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_donald_knuth-620x465-100502872-orig.jpg) + +Image courtesy [vonguard CC BY-SA 2.0][15] + +### Donald Knuth ### + +**Main claim to fame: Author of The Art of Computer Programming** + +Credentials: Wrote the [definitive book on the theory of programming][16]. Created the TeX digital typesetting system. [First winner of the ACM’s Grace Murray Hopper Award][17] in 1971. Winner of the ACM’s [A. M. Turing][18] Award in 1974, the [National Medal of Science][19] in 1979 and the IEEE’s [John von Neumann Medal][20] in 1995. Named a [Fellow at the Computer History Museum][21] in 1998. + +Quotes: “... wrote The Art of Computer Programming which is probably the most comprehensive work on computer programming ever.” [Anonymous][22] + +“There is only one large computer program I have used in which there are to a decent approximation 0 bugs: Don Knuth's TeX. That's impressive.” [Jaap Weel][23] + +“Pretty awesome if you ask me.” [Mitch Rees-Jones][24] + +![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_ken-thompson-620x465-100502874-orig.jpg) + +Image courtesy [Association for Computing Machinery][25] + +### Ken Thompson ### + +**Main claim to fame: Creator of Unix** + +Credentials: Co-creator, [along with Dennis Ritchie][26], of Unix. Creator of the [B programming language][27], the [UTF-8 character encoding scheme][28], the ed [text editor][29], and co-developer of the Go programming language. Co-winner (along with Ritchie) of the [A.M. Turing Award][30] in 1983, [IEEE Computer Pioneer Award][31] in 1994, and the [National Medal of Technology][32] in 1998. Inducted as a [fellow of the Computer History Museum][33] in 1997. + +Quotes: “... probably the most accomplished programmer ever. 
Unix kernel, Unix tools, world-champion chess program Belle, Plan 9, Go Language.” [Pete Prokopowicz][34]
+
+“Ken's contributions, more than anyone else I can think of, were fundamental and yet so practical and timeless they are still in daily use.” [Jan Jannink][35]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_richard_stallman-620x465-100502868-orig.jpg)
+
+Image courtesy Jiel Beaumadier CC BY-SA 3.0
+
+### Richard Stallman ###
+
+**Main claim to fame: Creator of Emacs, GCC**
+
+Credentials: Founded the [GNU Project][36] and created many of its core tools, such as [Emacs, GCC, GDB][37], and [GNU Make][38]. Also founded the [Free Software Foundation][39]. Winner of the ACM's [Grace Murray Hopper Award][40] in 1990 and the [EFF's Pioneer Award in 1998][41].
+
+Quotes: “... there was the time when he single-handedly outcoded several of the best Lisp hackers around, in the Symbolics vs LMI fight.” [Srinivasan Krishnan][42]
+
+“Through his amazing mastery of programming and force of will, he created a whole sub-culture in programming and computers.” [Dan Dunay][43]
+
+“I might disagree on many things with the great man, but he is still one of the most important programmers, alive or dead” [Marko Poutiainen][44]
+
+“Try to imagine Linux without the prior work on the GNU project. Stallman's the bomb, yo.” [John Burnette][45]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_anders_hejlsberg-620x465-100502873-orig.jpg)
+
+Image courtesy [D.Begley CC BY 2.0][46]
+
+### Anders Hejlsberg ###
+
+**Main claim to fame: Creator of Turbo Pascal**
+
+Credentials: [The original author of what became Turbo Pascal][47], one of the most popular Pascal compilers and the first integrated development environment. Later, [led the building of Delphi][48], Turbo Pascal’s successor. [Chief designer and architect of C#][49]. Winner of [Dr. Dobb's Excellence in Programming Award][50] in 2001.
+
+Quotes: “He wrote the [Pascal] compiler in assembly language for both of the dominant PC operating systems of the day (DOS and CPM). It was designed to compile, link and run a program in seconds rather than minutes.” [Steve Wood][51]
+
+“I revere this guy - he created the development tools that were my favourite through three key periods along my path to becoming a professional software engineer.” [Stefan Kiryazov][52]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_doug_cutting-620x465-100502871-orig.jpg)
+
+Image courtesy [vonguard CC BY-SA 2.0][53]
+
+### Doug Cutting ###
+
+**Main claim to fame: Creator of Lucene**
+
+Credentials: [Developed the Lucene search engine, as well as Nutch][54], a web crawler, and [Hadoop][55], a set of tools for distributed processing of large data sets. A strong proponent of open-source (Lucene, Nutch and Hadoop are all open-source). A [former director of the Apache Software Foundation][56].
+
+Quotes: “... he is the same guy who has written an exceptional search framework (lucene/solr) and opened the big-data gateway to the world (hadoop).” [Rajesh Rao][57]
+
+“His creation/work on Lucene and Hadoop (among other projects) has created a tremendous amount of wealth and employment for folks in the world….” [Amit Nithianandan][58]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_sanjay_ghemawat-620x465-100502876-orig.jpg)
+
+Image courtesy [Association for Computing Machinery][59]
+
+### Sanjay Ghemawat ###
+
+**Main claim to fame: Key Google architect**
+
+Credentials: [Helped to design and implement some of Google’s large distributed systems][60], including MapReduce, BigTable, Spanner and Google File System. [Created Unix’s ical][61] calendaring system. Elected to the [National Academy of Engineering][62] in 2009. Winner of the [ACM-Infosys Foundation Award in the Computing Sciences][63] in 2012.
+
+Quote: “Jeff Dean's wingman.” [Ahmet Alp Balkan][64]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jeff_dean-620x465-100502866-orig.jpg)
+
+Image courtesy [Google][65]
+
+### Jeff Dean ###
+
+**Main claim to fame: The brains behind Google search indexing**
+
+Credentials: Helped to design and implement [many of Google’s large-scale distributed systems][66], including website crawling, indexing and searching, AdSense, MapReduce, BigTable and Spanner. Elected to the [National Academy of Engineering][67] in 2009. 2012 winner of the ACM’s [SIGOPS Mark Weiser Award][68] and the [ACM-Infosys Foundation Award in the Computing Sciences][69].
+
+Quotes: “... for bringing breakthroughs in data mining (GFS, Map and Reduce, Big Table).” [Natu Lauchande][70]
+
+“... conceived, built, and deployed MapReduce and BigTable, among a bazillion other things” [Erik Goldman][71]
+
+![](http://images.techhive.com/images/article/2015/09/linus_torvalds-620x465-100611765-orig.jpg)
+
+Image courtesy [Krd CC BY-SA 4.0][72]
+
+### Linus Torvalds ###
+
+**Main claim to fame: Creator of Linux**
+
+Credentials: Created the [Linux kernel][73] and [Git][74], an open source version control system. Winner of numerous awards and honors, including the [EFF Pioneer Award][75] in 1998, the [British Computer Society’s Lovelace Medal][76] in 2000, the [Millennium Technology Prize][77] in 2012 and the [IEEE Computer Society’s Computer Pioneer Award][78] in 2014. Also inducted into the [Computer History Museum’s Hall of Fellows][79] in 2008 and the [Internet Hall of Fame][80] in 2012.
+
+Quotes: “To put into perspective what an achievement this is, he wrote the Linux kernel in a few years while the GNU Hurd (a GNU-developed kernel) has been under development for 25 years and has still yet to release a production-ready example.” [Erich Ficker][81]
+
+“Torvalds is probably the programmer's programmer.” [Dan Allen][82]
+
+“He's pretty darn good.” [Alok Tripathy][83]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_john_carmack-620x465-100502867-orig.jpg)
+
+Image courtesy [QuakeCon CC BY 2.0][84]
+
+### John Carmack ###
+
+**Main claim to fame: Creator of Doom**
+
+Credentials: Cofounded id Software and [created such influential FPS games][85] as Wolfenstein 3D, Doom and Quake. Pioneered such ground-breaking computer graphics techniques as [adaptive tile refresh][86], [binary space partitioning][87], and surface caching. Inducted into the [Academy of Interactive Arts and Sciences Hall of Fame][88] in 2001, [won Emmy awards][89] in the Engineering & Technology category in 2007 and 2008, and given a lifetime achievement award by the [Game Developers Choice Awards][90] in 2010.
+
+Quotes: “He wrote his first rendering engine before he was 20 years old. The guy's a genius. I wish I were a quarter the programmer he is.” [Alex Dolinsky][91]
+
+“...
Wolfenstein 3D, Doom and Quake were revolutionary at the time and have influenced a generation of game designers.” [dniblock][92]
+
+“He can write basically anything in a weekend....” [Greg Naughton][93]
+
+“He is the Mozart of computer coding….” [Chris Morris][94]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_fabrice_bellard-620x465-100502870-orig.jpg)
+
+Image courtesy [Duff][95]
+
+### Fabrice Bellard ###
+
+**Main claim to fame: Creator of QEMU**
+
+Credentials: Created a [variety of well-known open-source software programs][96], including QEMU, a platform for hardware emulation and virtualization, FFmpeg, for handling multimedia data, the Tiny C Compiler and LZEXE, an executable file compressor. [Winner of the Obfuscated C Code Contest][97] in 2000 and 2001 and the [Google-O'Reilly Open Source Award][98] in 2011. Former world record holder for [calculating the most digits of Pi][99].
+
+Quotes: “I find Fabrice Bellard's work remarkable and impressive.” [raphinou][100]
+
+“Fabrice Bellard is the most productive programmer in the world....” [Pavan Yara][101]
+
+“He's like the Nikola Tesla of software engineering.” [Michael Valladolid][102]
+
+“He's a prolific serial achiever since the 1980s.” [Michael Biggins][103]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_jon_skeet-620x465-100502863-orig.jpg)
+
+Image courtesy [Craig Murphy CC BY 2.0][104]
+
+### Jon Skeet ###
+
+**Main claim to fame: Legendary Stack Overflow contributor**
+
+Credentials: Google engineer and author of [C# in Depth][105]. Holds the [highest reputation score of all time on Stack Overflow][106], answering, on average, 390 questions per month.
+
+Quotes: “Jon Skeet doesn't need a debugger, he just stares down the bug until the code confesses” [Steven A. Lowe][107]
+
+“When Jon Skeet's code fails to compile the compiler apologises.” [Dan Dyer][108]
+
+“Jon Skeet's code doesn't follow a coding convention.
It is the coding convention.” [Anonymous][109]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_image_adam_dangelo-620x465-100502875-orig.jpg)
+
+Image courtesy [Philip Neustrom CC BY 2.0][110]
+
+### Adam D'Angelo ###
+
+**Main claim to fame: Co-founder of Quora**
+
+Credentials: As an engineer at Facebook, [built initial infrastructure for its news feed][111]. Went on to become CTO and VP of engineering at Facebook, before leaving to co-found Quora. [Eighth place finisher at the USA Computing Olympiad][112] as a high school student in 2001. Member of [California Institute of Technology’s silver medal winning team][113] at the ACM International Collegiate Programming Contest in 2004. [Finalist in the Algorithm Coding Competition][114] of the TopCoder Collegiate Challenge in 2005.
+
+Quotes: “An 'All-Rounder' Programmer.” [Anonymous][115]
+
+“For every good thing I make he has like six.” [Mark Zuckerberg][116]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_petr_mitrichev-620x465-100502869-orig.jpg)
+
+Image courtesy [Facebook][117]
+
+### Petr Mitrichev ###
+
+**Main claim to fame: One of the top competitive programmers of all time**
+
+Credentials: [Two-time gold medal winner][118] in the International Olympiad in Informatics (2000, 2002). In 2006, [won the Google Code Jam][119] and was also the [TopCoder Open Algorithm champion][120]. Also, two-time winner of the Facebook Hacker Cup ([2011][121], [2013][122]).
At the time of this writing, [the second-ranked algorithm competitor on TopCoder][123] (handle: Petr) and also [ranked second by Codeforces][124].
+
+Quote: “He is an idol in competitive programming even here in India…” [Kavish Dwivedi][125]
+
+![](http://images.techhive.com/images/idge/imported/imageapi/2014/10/08/17/slide_gennady_korot-620x465-100502864-orig.jpg)
+
+Image courtesy [Ishandutta2007 CC BY-SA 3.0][126]
+
+### Gennady Korotkevich ###
+
+**Main claim to fame: Competitive programming prodigy**
+
+Credentials: Youngest participant ever (age 11) and [6-time gold medalist][127] (2007-2012) in the International Olympiad in Informatics. Part of [the winning team][128] at the ACM International Collegiate Programming Contest in 2013 and winner of the [2014 Facebook Hacker Cup][129]. At the time of this writing, [ranked first by Codeforces][130] (handle: Tourist) and [first among algorithm competitors by TopCoder][131].
+
+Quotes: “A programming prodigy!” [Prateek Joshi][132]
+
+“Gennady is definitely amazing, and a visible example of why I have a large development team in Belarus.” [Chris Howard][133]
+
+“Tourist is genius” [Nuka Shrinivas Rao][134]
+
+--------------------------------------------------------------------------------
+
+via: http://www.itworld.com/article/2823547/enterprise-software/158256-superclass-14-of-the-world-s-best-living-programmers.html#slide1
+
+作者:[Phil Johnson][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.itworld.com/author/Phil-Johnson/
+[1]:https://www.flickr.com/photos/tombullock/15713223772
+[2]:https://commons.wikimedia.org/wiki/File:Margaret_Hamilton_in_action.jpg
+[3]:http://klabs.org/home_page/hamilton.htm
+[4]:https://www.youtube.com/watch?v=DWcITjqZtpU&feature=youtu.be&t=3m12s
+[5]:http://www.htius.com/Articles/r12ham.pdf
+[6]:http://www.htius.com/Articles/Inside_DBTF.htm
+[7]:http://www.nasa.gov/home/hqnews/2003/sep/HQ_03281_Hamilton_Honor.html +[8]:http://www.nasa.gov/50th/50th_magazine/scientists.html +[9]:https://books.google.com/books?id=JcmV0wfQEoYC&pg=PA321&lpg=PA321&dq=ada+lovelace+award+1986&source=bl&ots=qGdBKsUa3G&sig=bkTftPAhM1vZ_3VgPcv-38ggSNo&hl=en&sa=X&ved=0CDkQ6AEwBGoVChMI3paoxJHWxwIVA3I-Ch1whwPn#v=onepage&q=ada%20lovelace%20award%201986&f=false +[10]:http://history.nasa.gov/alsj/a11/a11Hamilton.html +[11]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrswof +[12]:http://qr.ae/RFEZLk +[13]:http://qr.ae/RFEZUn +[14]:https://www.reddit.com/r/pics/comments/2oyd1y/margaret_hamilton_with_her_code_lead_software/cmrv9u9 +[15]:https://www.flickr.com/photos/44451574@N00/5347112697 +[16]:http://cs.stanford.edu/~uno/taocp.html +[17]:http://awards.acm.org/award_winners/knuth_1013846.cfm +[18]:http://amturing.acm.org/award_winners/knuth_1013846.cfm +[19]:http://www.nsf.gov/od/nms/recip_details.jsp?recip_id=198 +[20]:http://www.ieee.org/documents/von_neumann_rl.pdf +[21]:http://www.computerhistory.org/fellowawards/hall/bios/Donald,Knuth/ +[22]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answers/3063 +[23]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Jaap-Weel +[24]:http://qr.ae/RFE94x +[25]:http://amturing.acm.org/photo/thompson_4588371.cfm +[26]:https://www.youtube.com/watch?v=JoVQTPbD6UY +[27]:https://www.bell-labs.com/usr/dmr/www/bintro.html +[28]:http://doc.cat-v.org/bell_labs/utf-8_history +[29]:http://c2.com/cgi/wiki?EdIsTheStandardTextEditor +[30]:http://amturing.acm.org/award_winners/thompson_4588371.cfm +[31]:http://www.computer.org/portal/web/awards/cp-thompson +[32]:http://www.uspto.gov/about/nmti/recipients/1998.jsp +[33]:http://www.computerhistory.org/fellowawards/hall/bios/Ken,Thompson/ 
+[34]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Pete-Prokopowicz-1 +[35]:http://qr.ae/RFEWBY +[36]:https://groups.google.com/forum/#!msg/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J +[37]:http://www.emacswiki.org/emacs/RichardStallman +[38]:https://www.gnu.org/gnu/thegnuproject.html +[39]:http://www.emacswiki.org/emacs/FreeSoftwareFoundation +[40]:http://awards.acm.org/award_winners/stallman_9380313.cfm +[41]:https://w2.eff.org/awards/pioneer/1998.php +[42]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton/comment/4146397 +[43]:http://qr.ae/RFEaib +[44]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Marko-Poutiainen +[45]:http://qr.ae/RFEUqp +[46]:https://www.flickr.com/photos/begley/2979906130 +[47]:http://www.taoyue.com/tutorials/pascal/history.html +[48]:http://c2.com/cgi/wiki?AndersHejlsberg +[49]:http://www.microsoft.com/about/technicalrecognition/anders-hejlsberg.aspx +[50]:http://www.drdobbs.com/windows/dr-dobbs-excellence-in-programming-award/184404602 +[51]:http://qr.ae/RFEZrv +[52]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Stefan-Kiryazov +[53]:https://www.flickr.com/photos/vonguard/4076389963/ +[54]:http://www.wizards-of-os.org/archiv/sprecher/a_c/doug_cutting.html +[55]:http://hadoop.apache.org/ +[56]:https://www.linkedin.com/in/cutting +[57]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Shalin-Shekhar-Mangar/comment/2293071 +[58]:http://www.quora.com/Who-are-the-best-programmers-in-Silicon-Valley-and-why/answer/Amit-Nithianandan +[59]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm +[60]:http://research.google.com/pubs/SanjayGhemawat.html +[61]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat 
+[62]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009 +[63]:http://awards.acm.org/award_winners/ghemawat_1482280.cfm +[64]:http://www.quora.com/Google/Who-is-Sanjay-Ghemawat/answer/Ahmet-Alp-Balkan +[65]:http://research.google.com/people/jeff/index.html +[66]:http://research.google.com/people/jeff/index.html +[67]:http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=02062009 +[68]:http://news.cs.washington.edu/2012/10/10/uw-cse-ph-d-alum-jeff-dean-wins-2012-sigops-mark-weiser-award/ +[69]:http://awards.acm.org/award_winners/dean_2879385.cfm +[70]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Natu-Lauchande +[71]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Cosmin-Negruseri/comment/28399 +[72]:https://commons.wikimedia.org/wiki/File:LinuxCon_Europe_Linus_Torvalds_05.jpg +[73]:http://www.linuxfoundation.org/about/staff#torvalds +[74]:http://git-scm.com/book/en/Getting-Started-A-Short-History-of-Git +[75]:https://w2.eff.org/awards/pioneer/1998.php +[76]:http://www.bcs.org/content/ConWebDoc/14769 +[77]:http://www.zdnet.com/blog/open-source/linus-torvalds-wins-the-tech-equivalent-of-a-nobel-prize-the-millennium-technology-prize/10789 +[78]:http://www.computer.org/portal/web/pressroom/Linus-Torvalds-Named-Recipient-of-the-2014-IEEE-Computer-Society-Computer-Pioneer-Award +[79]:http://www.computerhistory.org/fellowawards/hall/bios/Linus,Torvalds/ +[80]:http://www.internethalloffame.org/inductees/linus-torvalds +[81]:http://qr.ae/RFEeeo +[82]:http://qr.ae/RFEZLk +[83]:http://www.quora.com/Software-Engineering/Who-are-some-of-the-greatest-currently-active-software-architects-in-the-world/answer/Alok-Tripathy-1 +[84]:https://www.flickr.com/photos/quakecon/9434713998 +[85]:http://doom.wikia.com/wiki/John_Carmack +[86]:http://thegamershub.net/2012/04/gaming-gods-john-carmack/ 
+[87]:http://www.shamusyoung.com/twentysidedtale/?p=4759 +[88]:http://www.interactive.org/special_awards/details.asp?idSpecialAwards=6 +[89]:http://www.itworld.com/article/2951105/it-management/a-fly-named-for-bill-gates-and-9-other-unusual-honors-for-tech-s-elite.html#slide8 +[90]:http://www.gamechoiceawards.com/archive/lifetime.html +[91]:http://qr.ae/RFEEgr +[92]:http://www.itworld.com/answers/topic/software/question/whos-best-living-programmer#comment-424562 +[93]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Greg-Naughton +[94]:http://money.cnn.com/2003/08/21/commentary/game_over/column_gaming/ +[95]:http://dufoli.wordpress.com/2007/06/23/ammmmaaaazing-night/ +[96]:http://bellard.org/ +[97]:http://www.ioccc.org/winners.html#B +[98]:http://www.oscon.com/oscon2011/public/schedule/detail/21161 +[99]:http://bellard.org/pi/pi2700e9/ +[100]:https://news.ycombinator.com/item?id=7850797 +[101]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/1718701 +[102]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Erik-Frey/comment/2454450 +[103]:http://qr.ae/RFEjhZ +[104]:https://www.flickr.com/photos/craigmurphy/4325516497 +[105]:http://www.amazon.co.uk/gp/product/1935182471?ie=UTF8&tag=developetutor-21&linkCode=as2&camp=1634&creative=19450&creativeASIN=1935182471 +[106]:http://stackexchange.com/leagues/1/alltime/stackoverflow +[107]:http://meta.stackexchange.com/a/9156 +[108]:http://meta.stackexchange.com/a/9138 +[109]:http://meta.stackexchange.com/a/9182 +[110]:https://www.flickr.com/photos/philipn/5326344032 +[111]:http://www.crunchbase.com/person/adam-d-angelo +[112]:http://www.exeter.edu/documents/Exeter_Bulletin/fall_01/oncampus.html +[113]:http://icpc.baylor.edu/community/results-2004 +[114]:https://www.topcoder.com/tc?module=Static&d1=pressroom&d2=pr_022205 
+[115]:http://qr.ae/RFfOfe +[116]:http://www.businessinsider.com/in-new-alleged-ims-mark-zuckerberg-talks-about-adam-dangelo-2012-9#ixzz369FcQoLB +[117]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1 +[118]:http://stats.ioinformatics.org/people/1849 +[119]:http://googlepress.blogspot.com/2006/10/google-announces-winner-of-global-code_27.html +[120]:http://community.topcoder.com/tc?module=SimpleStats&c=coder_achievements&d1=statistics&d2=coderAchievements&cr=10574855 +[121]:https://www.facebook.com/notes/facebook-hacker-cup/facebook-hacker-cup-finals/208549245827651 +[122]:https://www.facebook.com/hackercup/photos/a.329665040399024.91563.133954286636768/553381194694073/?type=1 +[123]:http://community.topcoder.com/tc?module=AlgoRank +[124]:http://codeforces.com/ratings +[125]:http://www.quora.com/Respected-Software-Engineers/Who-are-some-of-the-best-programmers-in-the-world/answer/Venkateswaran-Vicky/comment/1960855 +[126]:http://commons.wikimedia.org/wiki/File:Gennady_Korot.jpg +[127]:http://stats.ioinformatics.org/people/804 +[128]:http://icpc.baylor.edu/regionals/finder/world-finals-2013/standings +[129]:https://www.facebook.com/hackercup/posts/10152022955628845 +[130]:http://codeforces.com/ratings +[131]:http://community.topcoder.com/tc?module=AlgoRank +[132]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi +[133]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4720779 +[134]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4880549 \ No newline at end of file From 1d1e89a882157ae3e4de107bd999be943c55b7f5 Mon Sep 17 00:00:00 2001 From: Yu Ye Date: Thu, 10 Sep 2015 17:42:15 +0800 Subject: [PATCH 481/697] Update 20150901 5 best open source board games to play online.md --- .../20150901 5 best open 
source board games to play online.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/share/20150901 5 best open source board games to play online.md b/sources/share/20150901 5 best open source board games to play online.md index 505ca76f10..5df980d1db 100644 --- a/sources/share/20150901 5 best open source board games to play online.md +++ b/sources/share/20150901 5 best open source board games to play online.md @@ -1,3 +1,4 @@ +Translating by H-mudcup 5 best open source board games to play online ================================================================================ I have always had a fascination with board games, in part because they are a device of social interaction, they challenge the mind and, most importantly, they are great fun to play. In my misspent youth, myself and a group of friends gathered together to escape the horrors of the classroom, and indulge in a little escapism. The time provided an outlet for tension and rivalry. Board games help teach diplomacy, how to make and break alliances, bring families and friends together, and learn valuable lessons. 
@@ -191,4 +192,4 @@ via: http://www.linuxlinks.com/article/20150830011533893/BoardGames.html [2]:http://domination.sourceforge.net/ [3]:http://www.pychess.org/ [4]:http://sourceforge.net/projects/scrabble/ -[5]:http://www.gnubg.org/ \ No newline at end of file +[5]:http://www.gnubg.org/ From 43163c010b7ff07037e96e141098ad9eeb3a17c3 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 10 Sep 2015 20:21:02 +0800 Subject: [PATCH 482/697] PUB:20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop @strugglingyouth --- ...x Wireshark GUI freeze on Linux desktop.md | 25 +-- ...ow to Manage Users and Groups in RHEL 7.md | 162 ++++++++++-------- 2 files changed, 106 insertions(+), 81 deletions(-) rename {translated/tech => published}/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md (73%) diff --git a/translated/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md b/published/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md similarity index 73% rename from translated/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md rename to published/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md index 9db7231a68..a59a946857 100644 --- a/translated/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md +++ b/published/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md @@ -1,14 +1,6 @@ - -Linux 有问必答--如何解决 Linux 桌面上的 Wireshark GUI 死机 +Linux 有问必答:如何解决 Linux 上的 Wireshark 界面僵死 ================================================================================ -> **问题**: 当我试图在 Ubuntu 上的 Wireshark 中打开一个 pre-recorded 数据包转储时,它的 UI 突然死机,在我发起 Wireshark 的终端出现了下面的错误和警告。我该如何解决这个问题? 
- -Wireshark 是一个基于 GUI 的数据包捕获和嗅探工具。该工具被网络管理员普遍使用,网络安全工程师或开发人员对于各种任务的 packet-level 网络分析是必需的,例如在网络故障,漏洞测试,应用程序调试,或逆向协议工程是必需的。 Wireshark 允许记录存活数据包,并通过便捷的图形用户界面浏览他们的协议首部和有效负荷。 - -![](https://farm1.staticflickr.com/722/20584224675_f4d7a59474_c.jpg) - -这是 Wireshark 的 UI,尤其是在 Ubuntu 桌面下运行,有时会挂起或冻结出现以下错误,而你是向上或向下滚动分组列表视图时,就开始加载一个 pre-recorded 包转储文件。 - +> **问题**: 当我试图在 Ubuntu 上的 Wireshark 中打开一个 pre-recorded 数据包转储时,它的界面突然死机,在我运行 Wireshark 的终端出现了下面的错误和警告。我该如何解决这个问题? (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject' (wireshark:3480): GLib-GObject-CRITICAL **: g_object_set_qdata_full: assertion 'G_IS_OBJECT (object)' failed @@ -22,6 +14,15 @@ Wireshark 是一个基于 GUI 的数据包捕获和嗅探工具。该工具被 (wireshark:3480): GLib-GObject-CRITICAL **: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed (wireshark:3480): Gtk-CRITICAL **: gtk_widget_set_name: assertion 'GTK_IS_WIDGET (widget)' failed + +Wireshark 是一个基于 GUI 的数据包捕获和嗅探工具。该工具被网络管理员普遍使用,网络安全工程师或开发人员对于各种任务的数据包级的网络分析是必需的,例如在网络故障,漏洞测试,应用程序调试,或逆向协议工程是必需的。 Wireshark 允许实时记录数据包,并通过便捷的图形用户界面浏览他们的协议首部和有效负荷。 + +![](https://farm1.staticflickr.com/722/20584224675_f4d7a59474_c.jpg) + +这是 Wireshark 的 UI,尤其是在 Ubuntu 桌面下运行时,当你向上或向下滚动分组列表视图时,或开始加载一个 pre-recorded 包转储文件时,有时会挂起或冻结,并出现以下错误。 + +![](https://farm1.staticflickr.com/589/20062177334_47c0f2aeae_c.jpg) + 显然,这个错误是由 Wireshark 和叠加滚动条之间的一些不兼容造成的,在最新的 Ubuntu 桌面还没有被解决(例如,Ubuntu 15.04 的桌面)。 一种避免 Wireshark 的 UI 卡死的办法就是 **暂时禁用叠加滚动条**。在 Wireshark 上有两种方法来禁用叠加滚动条,这取决于你在桌面上如何启动 Wireshark 的。 @@ -46,7 +47,7 @@ Wireshark 是一个基于 GUI 的数据包捕获和嗅探工具。该工具被 Exec=env LIBOVERLAY_SCROLLBAR=0 wireshark %f -虽然这种解决方法将有利于所有桌面用户的 system-wide,但它将无法升级 Wireshark。如果你想保留修改的 .desktop 文件,如下所示将它复制到你的主目录。 +虽然这种解决方法可以在系统级帮助到所有桌面用户,但升级 Wireshark 就没用了。如果你想保留修改的 .desktop 文件,如下所示将它复制到你的主目录。 $ cp /usr/share/applications/wireshark.desktop ~/.local/share/applications/ @@ -56,7 +57,7 @@ via: http://ask.xmodulo.com/fix-wireshark-gui-freeze-linux-desktop.html 作者:[Dan Nanni][a] 
译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md b/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md index 1436621c4e..cbe4f0bdc6 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md +++ b/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md @@ -1,71 +1,79 @@ -RHCSA 系列: 如何管理RHEL7的用户和组 – Part 3 +RHCSA 系列(三): 如何管理 RHEL7 的用户和组 ================================================================================ -和管理其他Linux服务器一样,管理一个 RHEL 7 服务器 要求你能够添加,修改,暂停或删除用户帐户,并且授予他们文件,目录,其他系统资源所必要的权限。 + +和管理其它Linux服务器一样,管理一个 RHEL 7 服务器要求你能够添加、修改、暂停或删除用户帐户,并且授予他们执行其分配的任务所需的文件、目录、其它系统资源所必要的权限。 + ![User and Group Management in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/User-and-Group-Management-in-Linux.png) -RHCSA: 用户和组管理 – Part 3 +*RHCSA: 用户和组管理 – Part 3* -### 管理用户帐户## +###管理用户帐户## -如果想要给RHEL 7 服务器添加账户,你需要以root用户执行如下两条命令 +如果想要给RHEL 7 服务器添加账户,你需要以root用户执行如下两条命令之一: # adduser [new_account] # useradd [new_account] 当添加新的用户帐户时,默认会执行下列操作。 -- 他/她 的主目录就会被创建(一般是"/home/用户名",除非你特别设置) -- 一些隐藏文件 如`.bash_logout`, `.bash_profile` 以及 `.bashrc` 会被复制到用户的主目录,并且会为用户的回话提供环境变量.你可以进一步查看他们的相关细节。 -- 会为您的账号添加一个邮件池目录 -- 会创建一个和用户名同样的组 +- 它/她的主目录就会被创建(一般是"/home/用户名",除非你特别设置) +- 一些隐藏文件 如`.bash_logout`, `.bash_profile` 以及 `.bashrc` 会被复制到用户的主目录,它们会为用户的回话提供环境变量。你可以进一步查看它们的相关细节。 +- 会为您的账号添加一个邮件池目录。 +- 会创建一个和用户名同样的组(LCTT 译注:除非你给新创建的用户指定了组)。 + +用户帐户的全部信息被保存在`/etc/passwd`文件。这个文件以如下格式保存了每一个系统帐户的所有信息(字段以“:”分割) -用户帐户的全部信息被保存在`/etc/passwd `文件。这个文件以如下格式保存了每一个系统帐户的所有信息(以:分割) [username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell] -- `[username]` 和`[Comment]` 是用于自我解释的 -- 
‘x’表示帐户的密码保护(详细在`/etc/shadow`文件),就是我们用于登录的`[username]`. -- `[UID]` 和`[GID]`是用于显示`[username]` 的 用户认证和主用户组。 +- `[username]` 和`[Comment]` 其意自明,就是用户名和备注 +- 第二个‘x’表示帐户的启用了密码保护(记录在`/etc/shadow`文件),密码用于登录`[username]` +- `[UID]` 和`[GID]`是整数,它们表明了`[username]`的用户ID 和所属的主组ID -最后, +最后。 - `[Home directory]`显示`[username]`的主目录的绝对路径 - `[Default shell]` 是当用户登录系统后使用的默认shell -另外一个你必须要熟悉的重要的文件是存储组信息的`/etc/group`.因为和`/etc/passwd`类似,所以也是由:分割 +另外一个你必须要熟悉的重要的文件是存储组信息的`/etc/group`。和`/etc/passwd`类似,也是每行一个记录,字段由“:”分割 + [Group name]:[Group password]:[GID]:[Group members] - - - `[Group name]` 是组名 -- 这个组是否使用了密码 (如果是"X"意味着没有). +- 这个组是否使用了密码 (如果是"x"意味着没有) - `[GID]`: 和`/etc/passwd`中一样 -- `[Group members]`:用户列表,使用,隔开。里面包含组内的所有用户 +- `[Group members]`:用户列表,使用“,”隔开。里面包含组内的所有用户 + +添加过帐户后,任何时候你都可以通过 usermod 命令来修改用户账户信息,基本的语法如下: -添加过帐户后,任何时候你都可以通过 usermod 命令来修改用户战壕沟,基础的语法如下: # usermod [options] [username] 相关阅读 -- [15 ‘useradd’ Command Examples][1] -- [15 ‘usermod’ Command Examples][2] +- [15 ‘useradd’ 命令示例][1] +- [15 ‘usermod’ 命令示例][2] #### 示例1 : 设置帐户的过期时间 #### -如果你的公司有一些短期使用的帐户或者你相应帐户在有限时间内使用,你可以使用 `--expiredate` 参数 ,后加YYYY-MM-DD格式的日期。为了查看是否生效,你可以使用如下命令查看 +如果你的公司有一些短期使用的帐户或者你要在有限时间内授予访问,你可以使用 `--expiredate` 参数 ,后加YYYY-MM-DD 格式的日期。为了查看是否生效,你可以使用如下命令查看 + # chage -l [username] 帐户更新前后的变动如下图所示 + ![Change User Account Information](http://www.tecmint.com/wp-content/uploads/2015/03/Change-User-Account-Information.png) -修改用户信息 +*修改用户信息* #### 示例 2: 向组内追加用户 #### -除了创建用户时的主用户组,一个用户还能被添加到别的组。你需要使用 -aG或 -append -group 选项,后跟逗号分隔的组名 +除了创建用户时的主用户组,一个用户还能被添加到别的组。你需要使用 -aG或 -append -group 选项,后跟逗号分隔的组名。 + #### 示例 3: 修改用户主目录或默认Shell #### -如果因为一些原因,你需要修改默认的用户主目录(一般为 /home/用户名),你需要使用 -d 或 -home 参数,后跟绝对路径来修改主目录 -如果有用户想要使用其他的shell来取代bash(比如sh ),一般默认是bash .使用 usermod ,并使用 -shell 的参数,后加新的shell的路径 +如果因为一些原因,你需要修改默认的用户主目录(一般为 /home/用户名),你需要使用 -d 或 -home 参数,后跟绝对路径来修改主目录。 + +如果有用户想要使用其它的shell来取代默认的bash(比如zsh)。使用 usermod ,并使用 -shell 的参数,后加新的shell的路径。 + #### 示例 4: 展示组内的用户 #### 当把用户添加到组中后,你可以使用如下命令验证属于哪一个组 @@ -73,17 +81,17 @@ RHCSA: 
用户和组管理 – Part 3 # groups [username] # id [username] -下面图片的演示了示例2到示例四 +下面图片的演示了示例2到示例4 ![Adding User to Supplementary Group](http://www.tecmint.com/wp-content/uploads/2015/03/Adding-User-to-Supplementary-Group.png) -添加用户到额外的组 +*添加用户到额外的组* 在上面的示例中: # usermod --append --groups gacanepa,users --home /tmp --shell /bin/sh tecmint -如果想要从组内删除用户,省略 `--append` 切换,并且可以使用 `--groups` 来列举组内的用户 +如果想要从组内删除用户,取消 `--append` 选项,并使用 `--groups` 和你要用户属于的组的列表。 #### 示例 5: 通过锁定密码来停用帐户 #### @@ -91,120 +99,136 @@ RHCSA: 用户和组管理 – Part 3 #### 示例 6: 解锁密码 #### -当你想要重新启用帐户让他可以继续登录时,属于 -u 或 –unlock 选项来解锁用户的密码,就像示例5 介绍的那样 +当你想要重新启用帐户让它可以继续登录时,使用 -u 或 –unlock 选项来解锁用户的密码,就像示例5 介绍的那样 # usermod --unlock tecmint -下面的图片展示了示例5和示例6 +下面的图片展示了示例5和示例6: ![Lock Unlock User Account](http://www.tecmint.com/wp-content/uploads/2015/03/Lock-Unlock-User-Account.png) -锁定上锁用户 +*锁定上锁用户* #### 示例 7:删除组和用户 #### -如果要删除一个组,你需要使用 groupdel ,如果需要删除用户 你需要使用 userdel (添加 -r 可以删除主目录和邮件池的内容) +如果要删除一个组,你需要使用 groupdel ,如果需要删除用户 你需要使用 userdel (添加 -r 可以删除主目录和邮件池的内容)。 + # groupdel [group_name] # 删除组 # userdel -r [user_name] # 删除用户,并删除主目录和邮件池 -如果一些文件属于组,他们将不会被删除。但是组拥有者将会被设置为删除掉的组的GID -### 列举,设置,并且修改 ugo/rwx 权限 ### +如果一些文件属于该组,删除组时它们不会也被删除。但是组拥有者的名字将会被设置为删除掉的组的GID。 -著名的 [ls 命令][3] 是管理员最好的助手. 当我们使用 -l 参数, 这个工具允许您查看一个目录中的内容(或详细格式). +### 列举,设置,并且修改标准 ugo/rwx 权限 ### + +著名的 [ls 命令][3] 是管理员最好的助手. 
当我们使用 -l 参数, 这个工具允许您以长格式(或详细格式)查看一个目录中的内容。 + +而且,该命令还可以用于单个文件中。无论哪种方式,在“ls”输出中的前10个字符表示每个文件的属性。 -而且,该命令还可以应用于单个文件中。无论哪种方式,在“ls”输出中的前10个字符表示每个文件的属性。 这10个字符序列的第一个字符用于表示文件类型: - – (连字符): 一个标准文件 - d: 一个目录 - l: 一个符号链接 -- c: 字符设备(将数据作为字节流,即一个终端) -- b: 块设备(处理数据块,即存储设备) +- c: 字符设备(将数据作为字节流,例如终端) +- b: 块设备(以块的方式处理数据,例如存储设备) + +文件属性的接下来的九个字符,分为三个组,被称为文件模式,并注明读(r)、写(w)、和执行(x)权限授予文件的所有者、文件的所有组、和其它的用户(通常被称为“世界”)。 + +同文件上的读取权限允许文件被打开和读取一样,如果目录同时有执行权限时,就允许其目录内容被列出。此外,如果一个文件有执行权限,就允许它作为一个程序运行。 -文件属性的下一个九个字符,分为三个组,被称为文件模式,并注明读(r),写(w),并执行(x)授予文件的所有者,文件的所有组,和其他的用户(通常被称为“世界”)。 -在文件的读取权限允许打开和读取相同的权限时,允许其内容被列出,如果还设置了执行权限,还允许它作为一个程序和运行。 文件权限是通过chmod命令改变的,它的基本语法如下: # chmod [new_mode] file -new_mode是一个八进制数或表达式,用于指定新的权限。适合每一个随意的案例。或者您已经有了一个更好的方式来设置文件的权限,所以你觉得可以自由地使用最适合你自己的方法。 -八进制数可以基于二进制等效计算,可以从所需的文件权限的文件的所有者,所有组,和世界。一定权限的存在等于2的幂(R = 22,W = 21,x = 20),没有时意为0。例如: +new_mode 是一个八进制数或表达式,用于指定新的权限。随意试试各种权限看看是什么效果。或者您已经有了一个更好的方式来设置文件的权限,你也可以用你自己的方式自由地试试。 + +八进制数可以基于二进制等价计算,可以从所需的文件权限的文件的所有者、所有组、和世界组合成。每种权限都等于2的幂(R = 2\^2,W = 2\^1,x = 2\^0),没有时即为0。例如: + ![File Permissions](http://www.tecmint.com/wp-content/uploads/2015/03/File-Permissions.png) -文件权限 +*文件权限* 在八进制形式下设置文件的权限,如上图所示 # chmod 744 myfile -请用一分钟来对比一下我们以前的计算,在更改文件的权限后,我们的实际输出为: +请用马上来对比一下我们以前的计算,在更改文件的权限后,我们的实际输出为: ![Long List Format](http://www.tecmint.com/wp-content/uploads/2015/03/Long-List-Format.png) -长列表格式 +*长列表格式* #### 示例 8: 寻找777权限的文件 #### -出于安全考虑,你应该确保在正常情况下,尽可能避免777权限(读、写、执行的文件)。虽然我们会在以后的教程中教你如何更有效地找到所有的文件在您的系统的权限集的说明,你现在仍可以使用LS grep获取这种信息。 -在下面的例子,我们会寻找 /etc 目录下的777权限文件. 
注意,我们要使用第二章讲到的管道的知识[第二章:文件和目录管理][4]: +出于安全考虑,你应该确保在正常情况下,尽可能避免777权限(任何人可读、可写、可执行的文件)。虽然我们会在以后的教程中教你如何更有效地找到您的系统的具有特定权限的全部文件,你现在仍可以组合使用ls 和 grep来获取这种信息。 + +在下面的例子,我们会寻找 /etc 目录下的777权限文件。注意,我们要使用[第二章:文件和目录管理][4]中讲到的管道的知识: # ls -l /etc | grep rwxrwxrwx ![Find All Files with 777 Permission](http://www.tecmint.com/wp-content/uploads/2015/03/Find-All-777-Files.png) -查找所有777权限的文件 +*查找所有777权限的文件* #### 示例 9: 为所有用户指定特定权限 #### shell脚本,以及一些二进制文件,所有用户都应该有权访问(不只是其相应的所有者和组),应该有相应的执行权限(我们会讨论特殊情况下的问题): + # chmod a+x script.sh -**注意**: 我们可以设置文件模式使用表示用户权限的字母如“u”,组所有者权限的字母“g”,其余的为o 。所有权限为a.权限可以通过`+` 或 `-` 来管理。 +**注意**: 我们可以使用表达式设置文件模式,表示用户权限的字母如“u”,组所有者权限的字母“g”,其余的为“o” ,同时具有所有权限为“a”。权限可以通过`+` 或 `-` 来授予和收回。 ![Set Execute Permission on File](http://www.tecmint.com/wp-content/uploads/2015/03/Set-Execute-Permission-on-File.png) -为文件设置执行权限 +*为文件设置执行权限* -长目录列表还显示了该文件的所有者和其在第一和第二列中的组主。此功能可作为系统中文件的第一级访问控制方法: +长目录列表还用两列显示了该文件的所有者和所有组。此功能可作为系统中文件的第一级访问控制方法: ![Check File Owner and Group](http://www.tecmint.com/wp-content/uploads/2015/03/Check-File-Owner-and-Group.png) -检查文件的属主和属组 +*检查文件的所有者和所有组* + +改变文件的所有者,您应该使用chown命令。请注意,您可以在同时或分别更改文件的所有组: -改变文件的所有者,您将使用chown命令。请注意,您可以在同一时间或单独的更改文件的所有权: # chown user:group file -虽然可以在同一时间更改用户或组,或在同一时间的两个属性,但是不要忘记冒号区分,如果你想要更新其他属性,让另外的选项保持空白: - # chown :group file # Change group ownership only - # chown user: file # Change user ownership only +你可以更改用户或组,或在同时更改两个属性,但是不要忘记冒号区分,如果你想要更新其它属性,让另外的部分为空: + + # chown :group file # 仅改变所有组 + # chown user: file # 仅改变所有者 #### 示例 10:从一个文件复制权限到另一个文件#### -If you would like to “clone” ownership from one file to another, you can do so using the –reference flag, as follows: + 如果你想“克隆”一个文件的所有权到另一个,你可以这样做,使用–reference参数,如下: + # chown --reference=ref_file file ref_file的所有信息会复制给 file ![Clone File Ownership](http://www.tecmint.com/wp-content/uploads/2015/03/Clone-File-Ownership.png) -复制文件属主信息 +*复制文件属主信息* ### 设置 SETGID 协作目录 ### -你应该授予在一个特定的目录中拥有访问所有的文件的权限给一个特点的用户组,你将有可能使用目录设置setgid的方法。当setgid后设置,真实用户的有效GID成为团队的主人。 
-因此,任何用户都可以访问该文件的组所有者授予的权限的文件。此外,当setgid设置在一个目录中,新创建的文件继承同一组目录,和新创建的子目录也将继承父目录的setgid。 +假如你需要授予在一个特定的目录中拥有访问所有的文件的权限给一个特定的用户组,你有可能需要使用给目录设置setgid的方法。当setgid设置后,该真实用户的有效GID会变成属主的GID。 + +因此,任何访问该文件的用户会被授予该文件的属组的权限。此外,当setgid设置在一个目录中,新创建的文件继承组该目录的组,而且新创建的子目录也将继承父目录的setgid权限。 + # chmod g+s [filename] -为了设置 setgid 在八进制形式,预先准备好数字2 来给基本的权限 +要以八进制形式设置 setgid,需要在基本权限前缀以2。 + # chmod 2755 [directory] ### 总结 ### -扎实的用户和组管理知识,符合规则的,Linux权限管理,以及部分实践,可以帮你快速解决RHEL 7 服务器的文件权限。 -我向你保证,当你按照本文所概述的步骤和使用系统文档(和第一章解释的那样 [Part 1: Reviewing Essential Commands & System Documentation][5] of this series) 你将掌握基本的系统管理的能力。 +扎实的用户和组管理知识,以及标准和特殊的 Linux权限管理,通过实践,可以帮你快速解决 RHEL 7 服务器的文件权限问题。 -请随时让我们知道你是否有任何问题或意见使用下面的表格。 +我向你保证,当你按照本文所概述的步骤和使用系统文档(在本系列的[第一章 回顾基础命令及系统文档][5]中讲到), 你将掌握基本的系统管理的能力。 + +请随时使用下面的评论框让我们知道你是否有任何问题或意见。 -------------------------------------------------------------------------------- @@ -212,13 +236,13 @@ via: http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ 作者:[Gabriel Cánepa][a] 译者:[xiqingongzi](https://github.com/xiqingongzi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/add-users-in-linux/ [2]:http://www.tecmint.com/usermod-command-examples/ -[3]:http://www.tecmint.com/ls-interview-questions/ -[4]:http://www.tecmint.com/file-and-directory-management-in-linux/ +[3]:http://linux.cn/article-5349-1.html +[4]:https://www.linux.cn/article-6155-1.html [5]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ From a5d6dc3c5efb7a1d9e1b3ee8211038e3fcccd240 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 10 Sep 2015 20:36:08 +0800 Subject: [PATCH 483/697] PUB:20150819 Linuxcon--The Changing Role of the Server OS MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @bazz2 这篇翻译的不错! 
--- ...0150819 Linuxcon--The Changing Role of the Server OS.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) rename {translated/talk => published}/20150819 Linuxcon--The Changing Role of the Server OS.md (89%) diff --git a/translated/talk/20150819 Linuxcon--The Changing Role of the Server OS.md b/published/20150819 Linuxcon--The Changing Role of the Server OS.md similarity index 89% rename from translated/talk/20150819 Linuxcon--The Changing Role of the Server OS.md rename to published/20150819 Linuxcon--The Changing Role of the Server OS.md index 98a0f94b03..2c66635f89 100644 --- a/translated/talk/20150819 Linuxcon--The Changing Role of the Server OS.md +++ b/published/20150819 Linuxcon--The Changing Role of the Server OS.md @@ -1,6 +1,6 @@ LinuxCon: 服务器操作系统的转型 ================================================================================ -来自西雅图。容器迟早要改变世界,以及改变操作系统的角色。这是 Wim Coekaerts 带来的 LinuxCon 演讲主题,Coekaerts 是 Oracle 公司 Linux 与虚拟化工程的高级副总裁。 +西雅图报道。容器迟早要改变世界,以及改变操作系统的角色。这是 Wim Coekaerts 带来的 LinuxCon 演讲主题,Coekaerts 是 Oracle 公司 Linux 与虚拟化工程的高级副总裁。 ![](http://www.serverwatch.com/imagesvr_ce/6421/wim-200x150.jpg) @@ -8,7 +8,7 @@ Coekaerts 在开始演讲的时候拿出一张关于“桌面之年”的幻灯 “你需要操作系统做什么事情?”,Coekaerts 回答现场观众:“只需一件事:运行一个应用。操作系统负责管理硬件和资源,来让你的应用运行起来。” -Coakaerts 说在 Docker 容器的帮助下,我们的注意力再次集中在应用上,而在 Oracle,我们将注意力放在如何让应用更好地运行在操作系统上。 +Coakaerts 补充说,在 Docker 容器的帮助下,我们的注意力再次集中在应用上,而在 Oracle,我们将注意力放在如何让应用更好地运行在操作系统上。 “许多人过去常常需要繁琐地安装应用,而现在的年轻人只需要按一个按钮就能让应用在他们的移动设备上运行起来”。 @@ -20,7 +20,6 @@ Docker 的出现不代表虚拟机的淘汰,容器化过程需要经过很长 在这段时间内,容器会与虚拟机共存,并且我们需要一些工具,将应用在容器和虚拟机之间进行转换迁移。Coekaerts 举例说 Oracle 的 VirtualBox 就可以用来帮助用户运行 Docker,而它原来是被广泛用在桌面系统上的一项开源技术。现在 Docker 的 Kitematic 项目将在 Mac 上使用 VirtualBox 运行 Docker。 -### The Open Compute Initiative and Write Once, Deploy Anywhere for Containers ### ### 容器的开放计算计划和一次写随处部署 ### 一个能让容器成功的关键是“一次写,随处部署”的概念。而在容器之间的互操作领域,Linux 基金会的开放计算计划(OCI)扮演一个非常关键的角色。 @@ -43,7 +42,7 @@ via: http://www.serverwatch.com/server-news/linuxcon-the-changing-role-of-the-se 
作者:[Sean Michael Kerner][a]
译者:[bazz2](https://github.com/bazz2)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

From 6312580f624e2742d77503f05204aa4879e34f2e Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 10 Sep 2015 20:21:02 +0800
Subject: [PATCH 484/697] PUB:20150824 Linux about to gain a new file system--bcachefs

@bazz2
---
 ...nux about to gain a new file system--bcachefs.md | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)
 rename {translated/talk => published}/20150824 Linux about to gain a new file system--bcachefs.md (53%)

diff --git a/translated/talk/20150824 Linux about to gain a new file system--bcachefs.md b/published/20150824 Linux about to gain a new file system--bcachefs.md
similarity index 53%
rename from translated/talk/20150824 Linux about to gain a new file system--bcachefs.md
rename to published/20150824 Linux about to gain a new file system--bcachefs.md
index 4fe4bf8ff9..c7946e9fad 100644
--- a/translated/talk/20150824 Linux about to gain a new file system--bcachefs.md
+++ b/published/20150824 Linux about to gain a new file system--bcachefs.md
@@ -1,14 +1,15 @@
-Linux 界将出现一个新的文件系统:bcachefs
+Linux 上将出现一个新的文件系统:bcachefs
 ================================================================================
-这个有 5 年历史,由 Kent Oberstreet 创建,过去属于谷歌的文件系统,最近完成了关键的组件。Bcachefs 文件系统自称其性能和稳定性与 ext4 和 xfs 相同,而其他方面的功能又可以与 btrfs 和 zfs 相媲美。主要特性包括校验、压缩、多设备支持、缓存、快照与其他好用的特性。
-Bcachefs 来自 **bcache**,这是一个块级缓存层,从 bcaceh 到一个功能完整的[写时复制][1]文件系统,堪称是一项质的转变。
+
+这个文件系统已有 5 年历史,由曾在谷歌工作的 Kent Overstreet 创建,最近完成了全部关键组件。Bcachefs 文件系统自称其性能和稳定性与 ext4 和 xfs 相同,而其他方面的功能又可以与 btrfs 和 zfs 相媲美。主要特性包括校验、压缩、多设备支持、缓存、快照与其他“漂亮”的特性。

Bcachefs 来自 
**bcache**,这是一个块级缓存层。从 bcache 到一个功能完整的[写时复制][1]文件系统,堪称是一项质的转变。
+
+对于自己提出的问题“为什么要出一个新的文件系统”,Kent Overstreet 自问自答道:当我还在谷歌的时候,我与其他在 bcache 上工作的同事在偶然的情况下意识到我们正在使用的东西可以成为一个成熟文件系统的功能块,我们可以用 bcache 创建一个拥有干净而优雅设计的文件系统,而最重要的一点是,bcachefs 的主要目的就是在性能和稳定性上能与 ext4 和 xfs 匹敌,同时拥有 btrfs 和 zfs 的特性。

Overstreet 邀请人们在自己的系统上测试 bcachefs,可以通过邮件列表[通告]获取 bcachefs 的操作指南。

-Linux 生态系统中文件系统几乎处于一家独大状态,Fedora 在第 16 版的时候就想用 btrfs 换掉 ext4 作为其默认文件系统,但是到现在(LCTT:都出到 Fedora 22 了)还在使用 ext4。而几乎所有 Debian 系的发行版(Ubuntu、Mint、elementary OS 等)也使用 ext4 作为默认文件系统,并且这些主流的发生版都没有替换默认文件系统的意思。
+Linux 生态系统中文件系统几乎处于一家独大状态,Fedora 在第 16 版的时候就想用 btrfs 换掉 ext4 作为其默认文件系统,但是到现在(LCTT:都出到 Fedora 22 了)还在使用 ext4。而几乎所有 Debian 系的发行版(Ubuntu、Mint、elementary OS 等)也使用 ext4 作为默认文件系统,并且这些主流的发行版都没有替换默认文件系统的意思。

--------------------------------------------------------------------------------

via: http://www.linuxveda.com/2015/08/22/linux-gain-new-file-system-bcachefs/

作者:[Paul Hill][a]
译者:[bazz2](https://github.com/bazz2)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

From e2b356ccaff44bb682184e970bb5190f4d3da996 Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 11 Sep 2015 09:28:22 +0800
Subject: [PATCH 485/697] PUB:20150728 Process of the Linux kernel building
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@oska874 翻译的很不错,我建议根据我校对后的稿件,你 PR 到原文的 github ,作为翻译版。
此外,我发现这篇文章是用另外一个 ID: zlatcag 推送上来的?这是怎么回事呢? 
--- ...28 Process of the Linux kernel building.md | 690 ++++++++++++++++++ ...28 Process of the Linux kernel building.md | 674 ----------------- 2 files changed, 690 insertions(+), 674 deletions(-) create mode 100644 published/20150728 Process of the Linux kernel building.md delete mode 100644 translated/tech/20150728 Process of the Linux kernel building.md diff --git a/published/20150728 Process of the Linux kernel building.md b/published/20150728 Process of the Linux kernel building.md new file mode 100644 index 0000000000..13610c77f2 --- /dev/null +++ b/published/20150728 Process of the Linux kernel building.md @@ -0,0 +1,690 @@ +你知道 Linux 内核是如何构建的吗? +================================================================================ + +###介绍 + +我不会告诉你怎么在自己的电脑上去构建、安装一个定制化的 Linux 内核,这样的[资料](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code)太多了,它们会对你有帮助。本文会告诉你当你在内核源码路径里敲下`make` 时会发生什么。 + +当我刚刚开始学习内核代码时,[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 是我打开的第一个文件,这个文件看起来真令人害怕 :)。那时候这个 [Makefile](https://en.wikipedia.org/wiki/Make_%28software%29) 还只包含了`1591` 行代码,当我开始写本文时,内核已经是[4.2.0的第三个候选版本](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) 了。 + +这个 makefile 是 Linux 内核代码的根 makefile ,内核构建就始于此处。是的,它的内容很多,但是如果你已经读过内核源代码,你就会发现每个包含代码的目录都有一个自己的 makefile。当然了,我们不会去描述每个代码文件是怎么编译链接的,所以我们将只会挑选一些通用的例子来说明问题。而你不会在这里找到构建内核的文档、如何整洁内核代码、[tags](https://en.wikipedia.org/wiki/Ctags) 的生成和[交叉编译](https://en.wikipedia.org/wiki/Cross_compiler) 相关的说明,等等。我们将从`make` 开始,使用标准的内核配置文件,到生成了内核镜像 [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage) 结束。 + +如果你已经很了解 [make](https://en.wikipedia.org/wiki/Make_%28software%29) 工具那是最好,但是我也会描述本文出现的相关代码。 + +让我们开始吧! 
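在正式分析 Makefile 之前,先给想亲手跟一遍的读者一个示意。下面这段 shell 脚本把一次最常见的构建流程串了起来;注意,`build_kernel` 这个函数名是为了演示而虚构的,内核并不提供它,真正要做的只是在源码树根目录里依次执行其中的两条 `make` 命令(具体可用的构建目标请以 `make help` 的输出为准):

```shell
#!/bin/sh
# 纯属演示的示意脚本:把一次典型的内核构建流程包装成一个函数。
# build_kernel 是本文虚构的名字,内核本身并不自带这样的脚本。
build_kernel() {
    # $1:内核源码树的根目录
    if [ ! -f "$1/Makefile" ]; then
        echo "不是内核源码树:$1" >&2
        return 1
    fi
    make -C "$1" defconfig                  # 先生成一份默认的 .config 配置
    make -C "$1" V=1 -j"$(nproc)" bzImage   # 以详细输出模式并行构建 bzImage
}
```

其中第二条命令里的 `V=1` 控制输出的详细程度,`-j` 控制并行度;传给 `make` 的这类选项,正是根 Makefile 开头那一连串条件判断要逐一解析的东西。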
+ + +###编译内核前的准备 + +在开始编译前要进行很多准备工作。最主要的就是找到并配置好配置文件,`make` 命令要使用到的参数都需要从这些配置文件获取。现在就让我们深入内核的根 `makefile` 吧 + +内核的根 `Makefile` 负责构建两个主要的文件:[vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (内核镜像可执行文件)和模块文件。内核的 [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 从定义如下变量开始: + +```Makefile +VERSION = 4 +PATCHLEVEL = 2 +SUBLEVEL = 0 +EXTRAVERSION = -rc3 +NAME = Hurr durr I'ma sheep +``` + +这些变量决定了当前内核的版本,并且被使用在很多不同的地方,比如同一个 `Makefile` 中的 `KERNELVERSION` : + +```Makefile +KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION) +``` + +接下来我们会看到很多`ifeq` 条件判断语句,它们负责检查传递给 `make` 的参数。内核的 `Makefile` 提供了一个特殊的编译选项 `make help` ,这个选项可以生成所有的可用目标和一些能传给 `make` 的有效的命令行参数。举个例子,`make V=1` 会在构建过程中输出详细的编译信息,第一个 `ifeq` 就是检查传递给 make 的 `V=n` 选项。 + +```Makefile +ifeq ("$(origin V)", "command line") + KBUILD_VERBOSE = $(V) +endif +ifndef KBUILD_VERBOSE + KBUILD_VERBOSE = 0 +endif + +ifeq ($(KBUILD_VERBOSE),1) + quiet = + Q = +else + quiet=quiet_ + Q = @ +endif + +export quiet Q KBUILD_VERBOSE +``` +如果 `V=n` 这个选项传给了 `make` ,系统就会给变量 `KBUILD_VERBOSE` 选项附上 `V` 的值,否则的话`KBUILD_VERBOSE` 就会为 `0`。然后系统会检查 `KBUILD_VERBOSE` 的值,以此来决定 `quiet` 和`Q` 的值。符号 `@` 控制命令的输出,如果它被放在一个命令之前,这条命令的输出将会是 `CC scripts/mod/empty.o`,而不是`Compiling .... 
scripts/mod/empty.o`(LCTT 译注:CC 在 makefile 中一般都是编译命令)。在这段最后,系统导出了所有的变量。 + +下一个 `ifeq` 语句检查的是传递给 `make` 的选项 `O=/dir`,这个选项允许在指定的目录 `dir` 输出所有的结果文件: + +```Makefile +ifeq ($(KBUILD_SRC),) + +ifeq ("$(origin O)", "command line") + KBUILD_OUTPUT := $(O) +endif + +ifneq ($(KBUILD_OUTPUT),) +saved-output := $(KBUILD_OUTPUT) +KBUILD_OUTPUT := $(shell mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) \ + && /bin/pwd) +$(if $(KBUILD_OUTPUT),, \ + $(error failed to create output directory "$(saved-output)")) + +sub-make: FORCE + $(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \ + -f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS)) + +skip-makefile := 1 +endif # ifneq ($(KBUILD_OUTPUT),) +endif # ifeq ($(KBUILD_SRC),) +``` + +系统会检查变量 `KBUILD_SRC`,它代表内核代码的顶层目录,如果它是空的(第一次执行 makefile 时总是空的),我们会设置变量 `KBUILD_OUTPUT` 为传递给选项 `O` 的值(如果这个选项被传进来了)。下一步会检查变量 `KBUILD_OUTPUT` ,如果已经设置好,那么接下来会做以下几件事: + +* 将变量 `KBUILD_OUTPUT` 的值保存到临时变量 `saved-output`; +* 尝试创建给定的输出目录; +* 检查创建的输出目录,如果失败了就打印错误; +* 如果成功创建了输出目录,那么就在新目录重新执行 `make` 命令(参见选项`-C`)。 + +下一个 `ifeq` 语句会检查传递给 make 的选项 `C` 和 `M`: + +```Makefile +ifeq ("$(origin C)", "command line") + KBUILD_CHECKSRC = $(C) +endif +ifndef KBUILD_CHECKSRC + KBUILD_CHECKSRC = 0 +endif + +ifeq ("$(origin M)", "command line") + KBUILD_EXTMOD := $(M) +endif +``` + +第一个选项 `C` 会告诉 `makefile` 需要使用环境变量 `$CHECK` 提供的工具来检查全部 `c` 代码,默认情况下会使用[sparse](https://en.wikipedia.org/wiki/Sparse)。第二个选项 `M` 会用来编译外部模块(本文不做讨论)。 + +系统还会检查变量 `KBUILD_SRC`,如果 `KBUILD_SRC` 没有被设置,系统会设置变量 `srctree` 为`.`: + +```Makefile +ifeq ($(KBUILD_SRC),) + srctree := . +endif + +objtree := . 
+src		:= $(srctree)
+obj		:= $(objtree)
+
+export srctree objtree VPATH
+```
+
+这将会告诉 `Makefile` 内核的源码树就在执行 `make` 命令的目录,然后要设置 `objtree` 和其他变量为这个目录,并且将这些变量导出。接着就是要获取 `SUBARCH` 的值,这个变量代表了当前的系统架构(LCTT 译注:一般都指 CPU 架构):
+
+```Makefile
+SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
+				  -e s/sun4u/sparc64/ \
+				  -e s/arm.*/arm/ -e s/sa110/arm/ \
+				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
+				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
+				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
+```
+
+如你所见,系统执行 [uname](https://en.wikipedia.org/wiki/Uname) 得到机器、操作系统和架构的信息。因为我们得到的是 `uname` 的输出,所以我们需要做一些处理再赋给变量 `SUBARCH` 。获得 `SUBARCH` 之后就要设置 `SRCARCH` 和 `hdr-arch`,`SRCARCH` 提供了硬件架构相关代码的目录,`hdr-arch` 提供了相关头文件的目录:
+
+```Makefile
+ifeq ($(ARCH),i386)
+        SRCARCH := x86
+endif
+ifeq ($(ARCH),x86_64)
+        SRCARCH := x86
+endif
+
+hdr-arch  := $(SRCARCH)
+```
+
+注意:`ARCH` 是 `SUBARCH` 的别名。如果没有设置过代表内核配置文件路径的变量 `KCONFIG_CONFIG`,下一步系统会设置它,默认情况下就是 `.config` :
+
+```Makefile
+KCONFIG_CONFIG	?= .config
+export KCONFIG_CONFIG
+```
+
+以及编译内核过程中要用到的 [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29):
+
+```Makefile
+CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
+	  else if [ -x /bin/bash ]; then echo /bin/bash; \
+	  else echo sh; fi ; fi)
+```
+
+接下来就要设置一组和编译内核的编译器相关的变量。我们会设置主机的 `C` 和 `C++` 的编译器及相关配置项:
+
+```Makefile
+HOSTCC       = gcc
+HOSTCXX      = g++
+HOSTCFLAGS   = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -std=gnu89
+HOSTCXXFLAGS = -O2
+```
+
+接下来会去适配代表编译器的变量 `CC`,那为什么还要 `HOST*` 这些变量呢?这是因为 `CC` 是编译内核过程中要使用的目标架构的编译器,但是 `HOSTCC` 是要被用来编译一组 `host` 程序的(下面我们就会看到)。
+
+然后我们就看到变量 `KBUILD_MODULES` 和 `KBUILD_BUILTIN` 的定义,这两个变量决定了我们要编译什么东西(内核、模块或者两者):
+
+```Makefile
+KBUILD_MODULES :=
+KBUILD_BUILTIN := 1
+
+ifeq ($(MAKECMDGOALS),modules)
+  KBUILD_BUILTIN := $(if $(CONFIG_MODVERSIONS),1)
+endif
+```
+
+在这我们可以看到这些变量的定义,并且,如果我们仅仅传递了 `modules` 给 `make`,变量 `KBUILD_BUILTIN` 会依赖于内核配置选项 `CONFIG_MODVERSIONS`。
+
+下一步操作是引入下面的文件:
+
+```Makefile
+include scripts/Kbuild.include
+``` + +文件 [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) 或者又叫做 `Kernel Build System` 是一个用来管理构建内核及其模块的特殊框架。`kbuild` 文件的语法与 makefile 一样。文件[scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) 为 `kbuild` 系统提供了一些常规的定义。因为我们包含了这个 `kbuild` 文件,我们可以看到和不同工具关联的这些变量的定义,这些工具会在内核和模块编译过程中被使用(比如链接器、编译器、来自 [binutils](http://www.gnu.org/software/binutils/) 的二进制工具包 ,等等): + +```Makefile +AS = $(CROSS_COMPILE)as +LD = $(CROSS_COMPILE)ld +CC = $(CROSS_COMPILE)gcc +CPP = $(CC) -E +AR = $(CROSS_COMPILE)ar +NM = $(CROSS_COMPILE)nm +STRIP = $(CROSS_COMPILE)strip +OBJCOPY = $(CROSS_COMPILE)objcopy +OBJDUMP = $(CROSS_COMPILE)objdump +AWK = awk +... +... +... +``` + +在这些定义好的变量后面,我们又定义了两个变量:`USERINCLUDE` 和 `LINUXINCLUDE`。他们包含了头文件的路径(第一个是给用户用的,第二个是给内核用的): + +```Makefile +USERINCLUDE := \ + -I$(srctree)/arch/$(hdr-arch)/include/uapi \ + -Iarch/$(hdr-arch)/include/generated/uapi \ + -I$(srctree)/include/uapi \ + -Iinclude/generated/uapi \ + -include $(srctree)/include/linux/kconfig.h + +LINUXINCLUDE := \ + -I$(srctree)/arch/$(hdr-arch)/include \ + ... 
+```
+
+以及给 C 编译器的标准标志:
+
+```Makefile
+KBUILD_CFLAGS   := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+		   -fno-strict-aliasing -fno-common \
+		   -Werror-implicit-function-declaration \
+		   -Wno-format-security \
+		   -std=gnu89
+```
+
+这并不是最终确定的编译器标志,它们还可以在其他 makefile 里面更新(比如 `arch/` 里面的 kbuild)。变量定义完之后,全部会被导出供其他 makefile 使用。
+
+下面的两个变量 `RCS_FIND_IGNORE` 和 `RCS_TAR_IGNORE` 包含了被版本控制系统忽略的文件:
+
+```Makefile
+export RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o \
+			  -name CVS -o -name .pc -o -name .hg -o -name .git \) \
+			  -prune -o
+export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \
+			 --exclude CVS --exclude .pc --exclude .hg --exclude .git
+```
+
+这就是全部了,我们已经完成了所有的准备工作,下一步就是如何构建 `vmlinux` 了。
+
+###直面内核构建
+
+现在我们已经完成了所有的准备工作,根 makefile(注:内核根目录下的 makefile)的下一步工作就是和编译内核相关的了。在这之前,我们不会在终端看到 `make` 命令输出的任何东西。但是现在编译的第一步开始了,这里我们需要从内核根 makefile 的 [598](https://github.com/torvalds/linux/blob/master/Makefile#L598) 行开始,这里可以看到目标 `vmlinux`:
+
+```Makefile
+all: vmlinux
+	include arch/$(SRCARCH)/Makefile
+```
+
+不要操心我们略过的从 `export RCS_FIND_IGNORE.....` 到 `all: vmlinux.....` 这一部分 makefile 代码,它们只是负责根据各种配置文件(`make *.config`)生成不同目标内核的,因为之前我就说了这一部分我们只讨论构建内核的通用途径。
+
+目标 `all:` 是在命令行如果不指定具体目标时默认使用的目标。你可以看到这里包含了架构相关的 makefile(在这里就指的是 [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile))。从这一时刻起,我们会从这个 makefile 继续进行下去。如我们所见,目标 `all` 依赖于根 makefile 后面声明的 `vmlinux`:
+
+```Makefile
+vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
+```
+
+`vmlinux` 是 linux 内核的静态链接可执行文件格式。脚本 [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 把不同的编译好的子模块链接到一起形成了 vmlinux。
+
+第二个目标是 `vmlinux-deps`,它的定义如下:
+
+```Makefile
+vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN)
+```
+
+它是由内核代码下的每个顶级目录的 `built-in.o` 组成的。之后我们还会检查内核所有的目录,`kbuild` 会编译各个目录下所有的对应 `$(obj-y)` 的源文件。接着调用 `$(LD) -r` 把这些文件合并到一个 `built-in.o` 文件里。此时我们还没有 `vmlinux-deps`,所以目标 `vmlinux` 现在还不会被构建。对我而言 
`vmlinux-deps` 包含下面的文件: + +``` +arch/x86/kernel/vmlinux.lds arch/x86/kernel/head_64.o +arch/x86/kernel/head64.o arch/x86/kernel/head.o +init/built-in.o usr/built-in.o +arch/x86/built-in.o kernel/built-in.o +mm/built-in.o fs/built-in.o +ipc/built-in.o security/built-in.o +crypto/built-in.o block/built-in.o +lib/lib.a arch/x86/lib/lib.a +lib/built-in.o arch/x86/lib/built-in.o +drivers/built-in.o sound/built-in.o +firmware/built-in.o arch/x86/pci/built-in.o +arch/x86/power/built-in.o arch/x86/video/built-in.o +net/built-in.o +``` + +下一个可以被执行的目标如下: + +```Makefile +$(sort $(vmlinux-deps)): $(vmlinux-dirs) ; +$(vmlinux-dirs): prepare scripts + $(Q)$(MAKE) $(build)=$@ +``` + +就像我们看到的,`vmlinux-dir` 依赖于两部分:`prepare` 和 `scripts`。第一个 `prepare` 定义在内核的根 `makefile` 中,准备工作分成三个阶段: + +```Makefile +prepare: prepare0 +prepare0: archprepare FORCE + $(Q)$(MAKE) $(build)=. +archprepare: archheaders archscripts prepare1 scripts_basic + +prepare1: prepare2 $(version_h) include/generated/utsrelease.h \ + include/config/auto.conf + $(cmd_crmodverdir) +prepare2: prepare3 outputmakefile asm-generic +``` + +第一个 `prepare0` 展开到 `archprepare` ,后者又展开到 `archheader` 和 `archscripts`,这两个变量定义在 `x86_64` 相关的 [Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)。让我们看看这个文件。`x86_64` 特定的 makefile 从变量定义开始,这些变量都是和特定架构的配置文件 ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs),等等)有关联。在定义了编译 [16-bit](https://en.wikipedia.org/wiki/Real_mode) 代码的编译选项之后,根据变量 `BITS` 的值,如果是 `32`, 汇编代码、链接器、以及其它很多东西(全部的定义都可以在[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)找到)对应的参数就是 `i386`,而 `64` 就对应的是 `x86_84`。 + +第一个目标是 makefile 生成的系统调用列表(syscall table)中的 `archheaders` : + +```Makefile +archheaders: + $(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all +``` + +第二个目标是 makefile 里的 `archscripts`: + +```Makefile +archscripts: scripts_basic + $(Q)$(MAKE) $(build)=arch/x86/tools relocs +``` + + 我们可以看到 `archscripts` 是依赖于根 
[Makefile](https://github.com/torvalds/linux/blob/master/Makefile)里的 `scripts_basic` 。首先我们可以看出 `scripts_basic` 是按照 [scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) 的 makefile 执行 make 的:
+
+```Makefile
+scripts_basic:
+	$(Q)$(MAKE) $(build)=scripts/basic
+```
+
+`scripts/basic/Makefile` 包含了编译两个主机程序 `fixdep` 和 `bin2c` 的目标:
+
+```Makefile
+hostprogs-y	:= fixdep
+hostprogs-$(CONFIG_BUILD_BIN2C)     += bin2c
+always		:= $(hostprogs-y)
+
+$(addprefix $(obj)/,$(filter-out fixdep,$(always))): $(obj)/fixdep
+```
+
+第一个工具是 `fixdep`:用来优化 [gcc](https://gcc.gnu.org/) 生成的依赖列表,然后在重新编译源文件的时候告诉 make。第二个工具是 `bin2c`,它依赖于内核配置选项 `CONFIG_BUILD_BIN2C`,并且它是一个用来将标准输入接口(LCTT 译注:即 stdin)收到的二进制流通过标准输出接口(即:stdout)转换成 C 头文件的非常小的 C 程序。你可能注意到这里有些奇怪的标志,如 `hostprogs-y` 等。这个标志用于所有的 `kbuild` 文件,更多的信息你可以从 [documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt) 获得。在我们这里, `hostprogs-y` 告诉 `kbuild` 这里有个名为 `fixdep` 的程序,这个程序会通过和 `Makefile` 相同目录的 `fixdep.c` 编译而来。
+
+执行 make 之后,终端的第一个输出就是 `kbuild` 的结果:
+
+```
+$ make
+  HOSTCC  scripts/basic/fixdep
+```
+
+当目标 `scripts_basic` 被执行,目标 `archscripts` 就会 make [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) 下的 makefile 和目标 `relocs`:
+
+```Makefile
+$(Q)$(MAKE) $(build)=arch/x86/tools relocs
+```
+
+包含了[重定位](https://en.wikipedia.org/wiki/Relocation_%28computing%29) 的信息的代码 `relocs_32.c` 和 `relocs_64.c` 将会被编译,这可以在 `make` 的输出中看到:
+
+```Makefile
+  HOSTCC  arch/x86/tools/relocs_32.o
+  HOSTCC  arch/x86/tools/relocs_64.o
+  HOSTCC  arch/x86/tools/relocs_common.o
+  HOSTLD  arch/x86/tools/relocs
+```
+
+在编译完 `relocs.c` 之后会检查 `version.h`:
+
+```Makefile
+$(version_h): $(srctree)/Makefile FORCE
+	$(call filechk,version.h)
+	$(Q)rm -f $(old_version_h)
+```
+
+我们可以在输出看到它:
+
+```
+CHK     include/config/kernel.release
+```
+
+以及在内核的根 Makefile 使用 `arch/x86/include/generated/asm` 的目标 `asm-generic` 来构建 `generic` 汇编头文件。在目标 `asm-generic` 之后,`archprepare` 就完成了,所以目标 `prepare0` 
会接着被执行,如我上面所写: + +```Makefile +prepare0: archprepare FORCE + $(Q)$(MAKE) $(build)=. +``` + +注意 `build`,它是定义在文件 [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include),内容是这样的: + +```Makefile +build := -f $(srctree)/scripts/Makefile.build obj +``` + +或者在我们的例子中,它就是当前源码目录路径:`.`: + +```Makefile +$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=. +``` + +脚本 [scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) 通过参数 `obj` 给定的目录找到 `Kbuild` 文件,然后引入 `kbuild` 文件: + +```Makefile +include $(kbuild-file) +``` + +并根据这个构建目标。我们这里 `.` 包含了生成 `kernel/bounds.s` 和 `arch/x86/kernel/asm-offsets.s` 的 [Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild) 文件。在此之后,目标 `prepare` 就完成了它的工作。 `vmlinux-dirs` 也依赖于第二个目标 `scripts` ,它会编译接下来的几个程序:`filealias`,`mk_elfconfig`,`modpost` 等等。之后,`scripts/host-programs` 就可以开始编译我们的目标 `vmlinux-dirs` 了。 + +首先,我们先来理解一下 `vmlinux-dirs` 都包含了那些东西。在我们的例子中它包含了下列内核目录的路径: + +``` +init usr arch/x86 kernel mm fs ipc security crypto block +drivers sound firmware arch/x86/pci arch/x86/power +arch/x86/video net lib arch/x86/lib +``` + +我们可以在内核的根 [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 里找到 `vmlinux-dirs` 的定义: + +```Makefile +vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \ + $(core-y) $(core-m) $(drivers-y) $(drivers-m) \ + $(net-y) $(net-m) $(libs-y) $(libs-m))) + +init-y := init/ +drivers-y := drivers/ sound/ firmware/ +net-y := net/ +libs-y := lib/ +... +... +... +``` + +这里我们借助函数 `patsubst` 和 `filter`去掉了每个目录路径里的符号 `/`,并且把结果放到 `vmlinux-dirs` 里。所以我们就有了 `vmlinux-dirs` 里的目录列表,以及下面的代码: + +```Makefile +$(vmlinux-dirs): prepare scripts + $(Q)$(MAKE) $(build)=$@ +``` + +符号 `$@` 在这里代表了 `vmlinux-dirs`,这就表明程序会递归遍历从 `vmlinux-dirs` 以及它内部的全部目录(依赖于配置),并且在对应的目录下执行 `make` 命令。我们可以在输出看到结果: + +``` + CC init/main.o + CHK include/generated/compile.h + CC init/version.o + CC init/do_mounts.o + ... 
+  CC      arch/x86/crypto/glue_helper.o
+  AS      arch/x86/crypto/aes-x86_64-asm_64.o
+  CC      arch/x86/crypto/aes_glue.o
+  ...
+  AS      arch/x86/entry/entry_64.o
+  AS      arch/x86/entry/thunk_64.o
+  CC      arch/x86/entry/syscall_64.o
+```
+
+每个目录下的源代码将会被编译并且链接到 `built-in.o` 里:
+
+```
+$ find . -name built-in.o
+./arch/x86/crypto/built-in.o
+./arch/x86/crypto/sha-mb/built-in.o
+./arch/x86/net/built-in.o
+./init/built-in.o
+./usr/built-in.o
+...
+...
+```
+
+好了,所有的 `built-in.o` 都构建完了,现在我们回到目标 `vmlinux` 上。你应该还记得,目标 `vmlinux` 是在内核的根 makefile 里。在链接 `vmlinux` 之前,系统会构建 [samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation) 等等,但是如上文所述,我不会在本文描述这些。
+
+```Makefile
+vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
+	...
+	...
+	+$(call if_changed,link-vmlinux)
+```
+
+你可以看到,调用脚本 [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 的主要目的是把所有的 `built-in.o` 链接成一个静态可执行文件,和生成 [System.map](https://en.wikipedia.org/wiki/System.map)。最后我们来看看下面的输出:
+
+```
+  LINK    vmlinux
+  LD      vmlinux.o
+  MODPOST vmlinux.o
+  GEN     .version
+  CHK     include/generated/compile.h
+  UPD     include/generated/compile.h
+  CC      init/version.o
+  LD      init/built-in.o
+  KSYM    .tmp_kallsyms1.o
+  KSYM    .tmp_kallsyms2.o
+  LD      vmlinux
+  SORTEX  vmlinux
+  SYSMAP  System.map
+```
+
+`vmlinux` 和 `System.map` 生成在内核源码树根目录下。
+
+```
+$ ls vmlinux System.map
+System.map  vmlinux
+```
+
+这就是全部了,`vmlinux` 构建好了,下一步就是创建 [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage)。
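在进入 `bzImage` 之前,顺便看看刚刚生成的 `System.map`:它的每一行是“地址 符号类型 符号名”。下面用几行虚构的演示数据(并非真实内核的地址)示意如何按符号名查询地址:

```shell
# 构造一个迷你的 System.map 演示文件(这里的地址是虚构的演示数据)
cat > /tmp/System.map.demo <<'EOF'
ffffffff81000000 T _text
ffffffff81001000 T start_kernel
ffffffff82ab0000 B _end
EOF

# 第 3 列是符号名,匹配后打印第 1 列的地址
awk '$3 == "start_kernel" { print $1 }' /tmp/System.map.demo    # 输出:ffffffff81001000
```

对真实内核的 `System.map` 做同样的查询,就可以核对诸如 `_text`、`_end` 这类符号的地址。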
+
+###制作bzImage
+
+`bzImage` 就是压缩了的 linux 内核镜像。我们可以在构建了 `vmlinux` 之后通过执行 `make bzImage` 获得 `bzImage`。我们也可以仅仅执行不带任何参数的 `make` 来生成 `bzImage` ,因为它是在 [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) 里预定义的、默认生成的镜像:
+
+```Makefile
+all: bzImage
+```
+
+让我们看看这个目标,它能帮助我们理解这个镜像是怎么构建的。我已经说过了 `bzImage` 是被定义在 [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile),定义如下:
+
+```Makefile
+bzImage: vmlinux
+	$(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
+	$(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
+	$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@
+```
+
+在这里我们可以看到第一次为 boot 目录执行 `make`,在我们的例子里是这样的:
+
+```Makefile
+boot := arch/x86/boot
+```
+
+现在的主要目标是编译目录 `arch/x86/boot` 和 `arch/x86/boot/compressed` 的代码,构建 `setup.bin` 和 `vmlinux.bin`,最后用这两个文件生成 `bzImage`。第一个目标是定义在 [arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) 的 `$(obj)/setup.elf`:
+
+```Makefile
+$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
+	$(call if_changed,ld)
+```
+
+我们已经在目录 `arch/x86/boot` 有了链接脚本 `setup.ld`,和扩展到 `boot` 目录下全部源代码的变量 `SETUP_OBJS` 。我们可以看看第一个输出:
+
+```Makefile
+  AS      arch/x86/boot/bioscall.o
+  CC      arch/x86/boot/cmdline.o
+  AS      arch/x86/boot/copy.o
+  HOSTCC  arch/x86/boot/mkcpustr
+  CPUSTR  arch/x86/boot/cpustr.h
+  CC      arch/x86/boot/cpu.o
+  CC      arch/x86/boot/cpuflags.o
+  CC      arch/x86/boot/cpucheck.o
+  CC      arch/x86/boot/early_serial_console.o
+  CC      arch/x86/boot/edd.o
+```
+
+下一个源码文件是 [arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S),但是我们不能现在就编译它,因为这个目标依赖于下面两个头文件:
+
+```Makefile
+$(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h
+```
+
+第一个头文件 `voffset.h` 是使用 `sed` 脚本生成的,包含用 `nm` 工具从 `vmlinux` 获取的两个地址:
+
+```C
+#define VO__end 0xffffffff82ab0000
+#define VO__text 0xffffffff81000000
+```
+
+这两个地址是内核的起始和结束地址。第二个头文件 `zoffset.h` 在 
[arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile) 可以看出是依赖于目标 `vmlinux`的: + +```Makefile +$(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE + $(call if_changed,zoffset) +``` + +目标 `$(obj)/compressed/vmlinux` 依赖于 `vmlinux-objs-y` —— 说明需要编译目录 [arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) 下的源代码,然后生成 `vmlinux.bin`、`vmlinux.bin.bz2`,和编译工具 `mkpiggy`。我们可以在下面的输出看出来: + +```Makefile + LDS arch/x86/boot/compressed/vmlinux.lds + AS arch/x86/boot/compressed/head_64.o + CC arch/x86/boot/compressed/misc.o + CC arch/x86/boot/compressed/string.o + CC arch/x86/boot/compressed/cmdline.o + OBJCOPY arch/x86/boot/compressed/vmlinux.bin + BZIP2 arch/x86/boot/compressed/vmlinux.bin.bz2 + HOSTCC arch/x86/boot/compressed/mkpiggy +``` + +`vmlinux.bin` 是去掉了调试信息和注释的 `vmlinux` 二进制文件,加上了占用了 `u32` (LCTT 译注:即4-Byte)的长度信息的 `vmlinux.bin.all` 压缩后就是 `vmlinux.bin.bz2`。其中 `vmlinux.bin.all` 包含了 `vmlinux.bin` 和`vmlinux.relocs`(LCTT 译注:vmlinux 的重定位信息),其中 `vmlinux.relocs` 是 `vmlinux` 经过程序 `relocs` 处理之后的 `vmlinux` 镜像(见上文所述)。我们现在已经获取到了这些文件,汇编文件 `piggy.S` 将会被 `mkpiggy` 生成、然后编译: + +```Makefile + MKPIGGY arch/x86/boot/compressed/piggy.S + AS arch/x86/boot/compressed/piggy.o +``` + +这个汇编文件会包含经过计算得来的、压缩内核的偏移信息。处理完这个汇编文件,我们就可以看到 `zoffset` 生成了: + +```Makefile + ZOFFSET arch/x86/boot/zoffset.h +``` + +现在 `zoffset.h` 和 `voffset.h` 已经生成了,[arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) 里的源文件可以继续编译: + +```Makefile + AS arch/x86/boot/header.o + CC arch/x86/boot/main.o + CC arch/x86/boot/mca.o + CC arch/x86/boot/memory.o + CC arch/x86/boot/pm.o + AS arch/x86/boot/pmjump.o + CC arch/x86/boot/printf.o + CC arch/x86/boot/regs.o + CC arch/x86/boot/string.o + CC arch/x86/boot/tty.o + CC arch/x86/boot/video.o + CC arch/x86/boot/video-mode.o + CC arch/x86/boot/video-vga.o + CC arch/x86/boot/video-vesa.o + CC arch/x86/boot/video-bios.o +``` + +所有的源代码会被编译,他们最终会被链接到 `setup.elf` : + 
+```Makefile + LD arch/x86/boot/setup.elf +``` + +或者: + +``` +ld -m elf_x86_64 -T arch/x86/boot/setup.ld arch/x86/boot/a20.o arch/x86/boot/bioscall.o arch/x86/boot/cmdline.o arch/x86/boot/copy.o arch/x86/boot/cpu.o arch/x86/boot/cpuflags.o arch/x86/boot/cpucheck.o arch/x86/boot/early_serial_console.o arch/x86/boot/edd.o arch/x86/boot/header.o arch/x86/boot/main.o arch/x86/boot/mca.o arch/x86/boot/memory.o arch/x86/boot/pm.o arch/x86/boot/pmjump.o arch/x86/boot/printf.o arch/x86/boot/regs.o arch/x86/boot/string.o arch/x86/boot/tty.o arch/x86/boot/video.o arch/x86/boot/video-mode.o arch/x86/boot/version.o arch/x86/boot/video-vga.o arch/x86/boot/video-vesa.o arch/x86/boot/video-bios.o -o arch/x86/boot/setup.elf +``` + +最后的两件事是创建包含目录 `arch/x86/boot/*` 下的编译过的代码的 `setup.bin`: + +``` +objcopy -O binary arch/x86/boot/setup.elf arch/x86/boot/setup.bin +``` + +以及从 `vmlinux` 生成 `vmlinux.bin` : + +``` +objcopy -O binary -R .note -R .comment -S arch/x86/boot/compressed/vmlinux arch/x86/boot/vmlinux.bin +``` + +最最后,我们编译主机程序 [arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c),它将会用来把 `setup.bin` 和 `vmlinux.bin` 打包成 `bzImage`: + +``` +arch/x86/boot/tools/build arch/x86/boot/setup.bin arch/x86/boot/vmlinux.bin arch/x86/boot/zoffset.h arch/x86/boot/bzImage +``` + +实际上 `bzImage` 就是把 `setup.bin` 和 `vmlinux.bin` 连接到一起。最终我们会看到输出结果,就和那些用源码编译过内核的同行的结果一样: + +``` +Setup is 16268 bytes (padded to 16384 bytes). 
+System is 4704 kB +CRC 94a88f9a +Kernel: arch/x86/boot/bzImage is ready (#5) +``` + + +全部结束。 + +###结论 + +这就是本文的结尾部分。本文我们了解了编译内核的全部步骤:从执行 `make` 命令开始,到最后生成 `bzImage`。我知道,linux 内核的 makefile 和构建 linux 的过程第一眼看起来可能比较迷惑,但是这并不是很难。希望本文可以帮助你理解构建 linux 内核的整个流程。 + + +###链接 + +* [GNU make util](https://en.wikipedia.org/wiki/Make_%28software%29) +* [Linux kernel top Makefile](https://github.com/torvalds/linux/blob/master/Makefile) +* [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler) +* [Ctags](https://en.wikipedia.org/wiki/Ctags) +* [sparse](https://en.wikipedia.org/wiki/Sparse) +* [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage) +* [uname](https://en.wikipedia.org/wiki/Uname) +* [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) +* [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) +* [binutils](http://www.gnu.org/software/binutils/) +* [gcc](https://gcc.gnu.org/) +* [Documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt) +* [System.map](https://en.wikipedia.org/wiki/System.map) +* [Relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29) + +-------------------------------------------------------------------------------- + +via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled.md + +译者:[oska874](https://github.com/oska874) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150728 Process of the Linux kernel building.md b/translated/tech/20150728 Process of the Linux kernel building.md deleted file mode 100644 index b8ded80179..0000000000 --- a/translated/tech/20150728 Process of the Linux kernel building.md +++ /dev/null @@ -1,674 +0,0 @@ -如何构建Linux 内核 -================================================================================ -介绍 --------------------------------------------------------------------------------- - 
-我不会告诉你怎么在自己的电脑上去构建、安装一个定制化的Linux 内核,这样的[资料](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) 太多了,它们会对你有帮助。本文会告诉你当你在内核源码路径里敲下`make` 时会发生什么。当我刚刚开始学习内核代码时,[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 是我打开的第一个文件,这个文件看起来真令人害怕 :)。那时候这个[Makefile](https://en.wikipedia.org/wiki/Make_%28software%29) 还只包含了`1591` 行代码,当我开始写本文是,这个[Makefile](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) 已经是第三个候选版本了。 - -这个makefile 是Linux 内核代码的根makefile ,内核构建就始于此处。是的,它的内容很多,但是如果你已经读过内核源代码,你就会发现每个包含代码的目录都有一个自己的makefile。当然了,我们不会去描述每个代码文件是怎么编译链接的。所以我们将只会挑选一些通用的例子来说明问题,而你不会在这里找到构建内核的文档、如何整洁内核代码、[tags](https://en.wikipedia.org/wiki/Ctags) 的生成和[交叉编译](https://en.wikipedia.org/wiki/Cross_compiler) 相关的说明,等等。我们将从`make` 开始,使用标准的内核配置文件,到生成了内核镜像[bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage) 结束。 - -如果你已经很了解[make](https://en.wikipedia.org/wiki/Make_%28software%29) 工具那是最好,但是我也会描述本文出现的相关代码。 - -让我们开始吧 - - -编译内核前的准备 ---------------------------------------------------------------------------------- - -在开始编译前要进行很多准备工作。最主要的就是找到并配置好配置文件,`make` 命令要使用到的参数都需要从这些配置文件获取。现在就让我们深入内核的根`makefile` 吧 - -内核的根`Makefile` 负责构建两个主要的文件:[vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (内核镜像可执行文件)和模块文件。内核的 [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 从此处开始: - -```Makefile -VERSION = 4 -PATCHLEVEL = 2 -SUBLEVEL = 0 -EXTRAVERSION = -rc3 -NAME = Hurr durr I'ma sheep -``` - -这些变量决定了当前内核的版本,并且被使用在很多不同的地方,比如`KERNELVERSION` : - -```Makefile -KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION) -``` - -接下来我们会看到很多`ifeq` 条件判断语句,它们负责检查传给`make` 的参数。内核的`Makefile` 提供了一个特殊的编译选项`make help` ,这个选项可以生成所有的可用目标和一些能传给`make` 的有效的命令行参数。举个例子,`make V=1` 会在构建过程中输出详细的编译信息,第一个`ifeq` 就是检查传递给make的`V=n` 选项。 - -```Makefile -ifeq ("$(origin V)", "command line") - KBUILD_VERBOSE = $(V) -endif -ifndef KBUILD_VERBOSE - KBUILD_VERBOSE = 0 -endif - -ifeq ($(KBUILD_VERBOSE),1) - quiet 
= - Q = -else - quiet=quiet_ - Q = @ -endif - -export quiet Q KBUILD_VERBOSE -``` - -如果`V=n` 这个选项传给了`make` ,系统就会给变量`KBUILD_VERBOSE` 选项附上`V` 的值,否则的话`KBUILD_VERBOSE` 就会为`0`。然后系统会检查`KBUILD_VERBOSE` 的值,以此来决定`quiet` 和`Q` 的值。符号`@` 控制命令的输出,如果它被放在一个命令之前,这条命令的执行将会是`CC scripts/mod/empty.o`,而不是`Compiling .... scripts/mod/empty.o`(注:CC 在makefile 中一般都是编译命令)。最后系统仅仅导出所有的变量。下一个`ifeq` 语句检查的是传递给`make` 的选项`O=/dir`,这个选项允许在指定的目录`dir` 输出所有的结果文件: - -```Makefile -ifeq ($(KBUILD_SRC),) - -ifeq ("$(origin O)", "command line") - KBUILD_OUTPUT := $(O) -endif - -ifneq ($(KBUILD_OUTPUT),) -saved-output := $(KBUILD_OUTPUT) -KBUILD_OUTPUT := $(shell mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) \ - && /bin/pwd) -$(if $(KBUILD_OUTPUT),, \ - $(error failed to create output directory "$(saved-output)")) - -sub-make: FORCE - $(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \ - -f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS)) - -skip-makefile := 1 -endif # ifneq ($(KBUILD_OUTPUT),) -endif # ifeq ($(KBUILD_SRC),) -``` - -系统会检查变量`KBUILD_SRC`,如果他是空的(第一次执行makefile 时总是空的),并且变量`KBUILD_OUTPUT` 被设成了选项`O` 的值(如果这个选项被传进来了),那么这个值就会用来代表内核源码的顶层目录。下一步会检查变量`KBUILD_OUTPUT` ,如果之前设置过这个变量,那么接下来会做以下几件事: - -* 将变量`KBUILD_OUTPUT` 的值保存到临时变量`saved-output`; -* 尝试创建输出目录; -* 检查创建的输出目录,如果失败了就打印错误; -* 如果成功创建了输出目录,那么就在新目录重新执行`make` 命令(参见选项`-C`)。 - -下一个`ifeq` 语句会检查传递给make 的选项`C` 和`M`: - -```Makefile -ifeq ("$(origin C)", "command line") - KBUILD_CHECKSRC = $(C) -endif -ifndef KBUILD_CHECKSRC - KBUILD_CHECKSRC = 0 -endif - -ifeq ("$(origin M)", "command line") - KBUILD_EXTMOD := $(M) -endif -``` - -第一个选项`C` 会告诉`makefile` 需要使用环境变量`$CHECK` 提供的工具来检查全部`c` 代码,默认情况下会使用[sparse](https://en.wikipedia.org/wiki/Sparse)。第二个选项`M` 会用来编译外部模块(本文不做讨论)。因为设置了这两个变量,系统还会检查变量`KBUILD_SRC`,如果`KBUILD_SRC` 没有被设置,系统会设置变量`srctree` 为`.`: - -```Makefile -ifeq ($(KBUILD_SRC),) - srctree := . -endif - -objtree := . 
-src := $(srctree) -obj := $(objtree) - -export srctree objtree VPATH -``` - -这将会告诉`Makefile` 内核的源码树就在执行make 命令的目录。然后要设置`objtree` 和其他变量为执行make 命令的目录,并且将这些变量导出。接着就是要获取`SUBARCH` 的值,这个变量代表了当前的系统架构(注:一般都指CPU 架构): - -```Makefile -SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \ - -e s/sun4u/sparc64/ \ - -e s/arm.*/arm/ -e s/sa110/arm/ \ - -e s/s390x/s390/ -e s/parisc64/parisc/ \ - -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \ - -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ ) -``` - -如你所见,系统执行[uname](https://en.wikipedia.org/wiki/Uname) 得到机器、操作系统和架构的信息。因为我们得到的是`uname` 的输出,所以我们需要做一些处理在赋给变量`SUBARCH` 。获得`SUBARCH` 之后就要设置`SRCARCH` 和`hfr-arch`,`SRCARCH`提供了硬件架构相关代码的目录,`hfr-arch` 提供了相关头文件的目录: - -```Makefile -ifeq ($(ARCH),i386) - SRCARCH := x86 -endif -ifeq ($(ARCH),x86_64) - SRCARCH := x86 -endif - -hdr-arch := $(SRCARCH) -``` - -注意:`ARCH` 是`SUBARCH` 的别名。如果没有设置过代表内核配置文件路径的变量`KCONFIG_CONFIG`,下一步系统会设置它,默认情况下就是`.config` : - -```Makefile -KCONFIG_CONFIG ?= .config -export KCONFIG_CONFIG -``` -以及编译内核过程中要用到的[shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) - -```Makefile -CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \ - else if [ -x /bin/bash ]; then echo /bin/bash; \ - else echo sh; fi ; fi) -``` - -接下来就要设置一组和编译内核的编译器相关的变量。我们会设置主机的`C` 和`C++` 的编译器及相关配置项: - -```Makefile -HOSTCC = gcc -HOSTCXX = g++ -HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -std=gnu89 -HOSTCXXFLAGS = -O2 -``` - -下一步会去适配代表编译器的变量`CC`,那为什么还要`HOST*` 这些选项呢?这是因为`CC` 是编译内核过程中要使用的目标架构的编译器,但是`HOSTCC` 是要被用来编译一组`host` 程序的(下面我们就会看到)。然后我们就看看变量`KBUILD_MODULES` 和`KBUILD_BUILTIN` 的定义,这两个变量决定了我们要编译什么东西(内核、模块还是其他): - -```Makefile -KBUILD_MODULES := -KBUILD_BUILTIN := 1 - -ifeq ($(MAKECMDGOALS),modules) - KBUILD_BUILTIN := $(if $(CONFIG_MODVERSIONS),1) -endif -``` - -在这我们可以看到这些变量的定义,并且,如果们仅仅传递了`modules` 给`make`,变量`KBUILD_BUILTIN` 会依赖于内核配置选项`CONFIG_MODVERSIONS`。下一步操作是引入下面的文件: - -```Makefile -include scripts/Kbuild.include -``` - -文件`kbuild` 
,[Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) 或者又叫做 `Kernel Build System`是一个用来管理构建内核和模块的特殊框架。`kbuild` 文件的语法与makefile 一样。文件[scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) 为`kbuild` 系统同提供了一些原生的定义。因为我们包含了这个`kbuild` 文件,我们可以看到和不同工具关联的这些变量的定义,这些工具会在内核和模块编译过程中被使用(比如链接器、编译器、二进制工具包[binutils](http://www.gnu.org/software/binutils/),等等): - -```Makefile -AS = $(CROSS_COMPILE)as -LD = $(CROSS_COMPILE)ld -CC = $(CROSS_COMPILE)gcc -CPP = $(CC) -E -AR = $(CROSS_COMPILE)ar -NM = $(CROSS_COMPILE)nm -STRIP = $(CROSS_COMPILE)strip -OBJCOPY = $(CROSS_COMPILE)objcopy -OBJDUMP = $(CROSS_COMPILE)objdump -AWK = awk -... -... -... -``` - -在这些定义好的变量后面,我们又定义了两个变量:`USERINCLUDE` 和`LINUXINCLUDE`。他们包含了头文件的路径(第一个是给用户用的,第二个是给内核用的): - -```Makefile -USERINCLUDE := \ - -I$(srctree)/arch/$(hdr-arch)/include/uapi \ - -Iarch/$(hdr-arch)/include/generated/uapi \ - -I$(srctree)/include/uapi \ - -Iinclude/generated/uapi \ - -include $(srctree)/include/linux/kconfig.h - -LINUXINCLUDE := \ - -I$(srctree)/arch/$(hdr-arch)/include \ - ... -``` - -以及标准的C 编译器标志: -```Makefile -KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ - -fno-strict-aliasing -fno-common \ - -Werror-implicit-function-declaration \ - -Wno-format-security \ - -std=gnu89 -``` - -这并不是最终确定的编译器标志,他们还可以在其他makefile 里面更新(比如`arch/` 里面的kbuild)。变量定义完之后,全部会被导出供其他makefile 使用。下面的两个变量`RCS_FIND_IGNORE` 和 `RCS_TAR_IGNORE` 包含了被版本控制系统忽略的文件: - -```Makefile -export RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o \ - -name CVS -o -name .pc -o -name .hg -o -name .git \) \ - -prune -o -export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \ - --exclude CVS --exclude .pc --exclude .hg --exclude .git -``` - -这就是全部了,我们已经完成了所有的准备工作,下一个点就是如果构建`vmlinux`. 
- -直面构建内核 --------------------------------------------------------------------------------- - -现在我们已经完成了所有的准备工作,根makefile(注:内核根目录下的makefile)的下一步工作就是和编译内核相关的了。在我们执行`make` 命令之前,我们不会在终端看到任何东西。但是现在编译的第一步开始了,这里我们需要从内核根makefile的的[598](https://github.com/torvalds/linux/blob/master/Makefile#L598) 行开始,这里可以看到目标`vmlinux`: - -```Makefile -all: vmlinux - include arch/$(SRCARCH)/Makefile -``` - -不要操心我们略过的从`export RCS_FIND_IGNORE.....` 到`all: vmlinux.....` 这一部分makefile 代码,他们只是负责根据各种配置文件生成不同目标内核的,因为之前我就说了这一部分我们只讨论构建内核的通用途径。 - -目标`all:` 是在命令行如果不指定具体目标时默认使用的目标。你可以看到这里包含了架构相关的makefile(在这里就指的是[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile))。从这一时刻起,我们会从这个makefile 继续进行下去。如我们所见,目标`all` 依赖于根makefile 后面声明的`vmlinux`: - -```Makefile -vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE -``` - -`vmlinux` 是linux 内核的静态链接可执行文件格式。脚本[scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 把不同的编译好的子模块链接到一起形成了vmlinux。第二个目标是`vmlinux-deps`,它的定义如下: - -```Makefile -vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) -``` - -它是由内核代码下的每个顶级目录的`built-in.o` 组成的。之后我们还会检查内核所有的目录,`kbuild` 会编译各个目录下所有的对应`$obj-y` 的源文件。接着调用`$(LD) -r` 把这些文件合并到一个`build-in.o` 文件里。此时我们还没有`vmloinux-deps`, 所以目标`vmlinux` 现在还不会被构建。对我而言`vmlinux-deps` 包含下面的文件 - -``` -arch/x86/kernel/vmlinux.lds arch/x86/kernel/head_64.o -arch/x86/kernel/head64.o arch/x86/kernel/head.o -init/built-in.o usr/built-in.o -arch/x86/built-in.o kernel/built-in.o -mm/built-in.o fs/built-in.o -ipc/built-in.o security/built-in.o -crypto/built-in.o block/built-in.o -lib/lib.a arch/x86/lib/lib.a -lib/built-in.o arch/x86/lib/built-in.o -drivers/built-in.o sound/built-in.o -firmware/built-in.o arch/x86/pci/built-in.o -arch/x86/power/built-in.o arch/x86/video/built-in.o -net/built-in.o -``` - -下一个可以被执行的目标如下: - -```Makefile -$(sort $(vmlinux-deps)): $(vmlinux-dirs) ; -$(vmlinux-dirs): prepare scripts - $(Q)$(MAKE) $(build)=$@ -``` - -就像我们看到的,`vmlinux-dir` 依赖于两部分:`prepare` 
和`scripts`。第一个`prepare` 定义在内核的根`makefile` ,准备工作分成三个阶段: - -```Makefile -prepare: prepare0 -prepare0: archprepare FORCE - $(Q)$(MAKE) $(build)=. -archprepare: archheaders archscripts prepare1 scripts_basic - -prepare1: prepare2 $(version_h) include/generated/utsrelease.h \ - include/config/auto.conf - $(cmd_crmodverdir) -prepare2: prepare3 outputmakefile asm-generic -``` - -第一个`prepare0` 展开到`archprepare` ,后者又展开到`archheaders` 和`archscripts`,这两个目标定义在`x86_64` 相关的[Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)。让我们看看这个文件。`x86_64` 特定的makefile从变量定义开始,这些变量都是和特定架构的配置文件 ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs),等等)有关联。变量定义之后,这个makefile 定义了编译[16-bit](https://en.wikipedia.org/wiki/Real_mode)代码的编译选项,根据变量`BITS` 的值,如果是`32`, 汇编代码、链接器、以及其它很多东西(全部的定义都可以在[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)找到)对应的参数就是`i386`,而`64`就对应的是`x86_64`。这个makefile 里的第一个目标是`archheaders`,用来生成系统调用列表(syscall table): - -```Makefile -archheaders: - $(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all -``` - -这个makefile 里第二个目标就是`archscripts`: - -```Makefile -archscripts: scripts_basic - $(Q)$(MAKE) $(build)=arch/x86/tools relocs -``` - - 我们可以看到`archscripts` 是依赖于根[Makefile](https://github.com/torvalds/linux/blob/master/Makefile)里的`scripts_basic` 。首先我们可以看出`scripts_basic` 是按照[scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) 的makefile 执行make 的: - -```Makefile -scripts_basic: - $(Q)$(MAKE) $(build)=scripts/basic -``` - -`scripts/basic/Makefile`包含了编译两个主机程序`fixdep` 和`bin2c` 的目标: - -```Makefile -hostprogs-y := fixdep -hostprogs-$(CONFIG_BUILD_BIN2C) += bin2c -always := $(hostprogs-y) - -$(addprefix $(obj)/,$(filter-out fixdep,$(always))): $(obj)/fixdep -``` - -第一个工具是`fixdep`:用来优化[gcc](https://gcc.gnu.org/) 生成的依赖列表,然后在重新编译源文件的时候告诉make。第二个工具是`bin2c`,它依赖于内核配置选项`CONFIG_BUILD_BIN2C`,并且它是一个用来将标准输入接口(注:即stdin)收到的二进制流通过标准输出接口(注:即stdout)转换成C 头文件的非常小的C
程序。你可以注意到这里有些奇怪的标志,如`hostprogs-y`等。这些标志使用在所有的`kbuild` 文件,更多的信息你可以从[documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt) 获得。在我们的用例`hostprogs-y` 中,它告诉`kbuild` 这里有个名为`fixdep` 的程序,这个程序会通过和`Makefile` 相同目录的`fixdep.c` 编译而来。执行make 之后,终端的第一个输出就是`kbuild` 的结果: - -``` -$ make - HOSTCC scripts/basic/fixdep -``` - -当目标`scripts_basic` 被执行,目标`archscripts` 就会make [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) 下的makefile 和目标`relocs`: - -```Makefile -$(Q)$(MAKE) $(build)=arch/x86/tools relocs -``` - -代码`relocs_32.c` 和`relocs_64.c` 包含了[重定位](https://en.wikipedia.org/wiki/Relocation_%28computing%29) 的信息,将会被编译,这可以在`make` 的输出中看到: - -```Makefile - HOSTCC arch/x86/tools/relocs_32.o - HOSTCC arch/x86/tools/relocs_64.o - HOSTCC arch/x86/tools/relocs_common.o - HOSTLD arch/x86/tools/relocs -``` - -在编译完`relocs.c` 之后会检查`version.h`: - -```Makefile -$(version_h): $(srctree)/Makefile FORCE - $(call filechk,version.h) - $(Q)rm -f $(old_version_h) -``` - -我们可以在输出看到它: - -``` -CHK include/config/kernel.release -``` - -以及在内核根Makefile 使用`arch/x86/include/generated/asm`的目标`asm-generic` 来构建`generic` 汇编头文件。在目标`asm-generic` 之后,`archprepare` 就会被完成,所以目标`prepare0` 会接着被执行,如我上面所写: - -```Makefile -prepare0: archprepare FORCE - $(Q)$(MAKE) $(build)=. -``` - -注意`build`,它是定义在文件[scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include),内容是这样的: - -```Makefile -build := -f $(srctree)/scripts/Makefile.build obj -``` - -或者在我们的例子中,它就是当前源码目录路径——`.`: -```Makefile -$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=.
-``` - -参数`obj` 会告诉脚本[scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) 哪些目录包含`kbuild` 文件,脚本以此来寻找各个`kbuild` 文件: - -```Makefile -include $(kbuild-file) -``` - -然后根据这个构建目标。我们这里`.` 包含了[Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild),就用这个文件来生成`kernel/bounds.s` 和`arch/x86/kernel/asm-offsets.s`。这样目标`prepare` 就完成了它的工作。`vmlinux-dirs` 也依赖于第二个目标——`scripts` ,`scripts`会编译接下来的几个程序:`filealias`,`mk_elfconfig`,`modpost`等等。`scripts/host-programs` 编译完之后,我们的目标`vmlinux-dirs` 就可以开始编译了。第一步,我们先来理解一下`vmlinux-dirs` 都包含了哪些东西。在我们的例子中它包含了接下来要使用的内核目录的路径: - -``` -init usr arch/x86 kernel mm fs ipc security crypto block -drivers sound firmware arch/x86/pci arch/x86/power -arch/x86/video net lib arch/x86/lib -``` - -我们可以在内核的根[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 里找到`vmlinux-dirs` 的定义: - -```Makefile -vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \ - $(core-y) $(core-m) $(drivers-y) $(drivers-m) \ - $(net-y) $(net-m) $(libs-y) $(libs-m))) - -init-y := init/ -drivers-y := drivers/ sound/ firmware/ -net-y := net/ -libs-y := lib/ -... -... -... -``` - -这里我们借助函数`patsubst` 和`filter`去掉了每个目录路径末尾的符号`/`,并且把结果放到`vmlinux-dirs` 里。所以我们就有了`vmlinux-dirs` 里的目录的列表,以及下面的代码: - -```Makefile -$(vmlinux-dirs): prepare scripts - $(Q)$(MAKE) $(build)=$@ -``` - -符号`$@` 在这里代表了`vmlinux-dirs` 里的每个目录,这就表明程序会递归遍历`vmlinux-dirs` 以及它内部的全部目录(依赖于配置),并且在对应的目录下执行`make` 命令。我们可以在输出看到结果: - -``` - CC init/main.o - CHK include/generated/compile.h - CC init/version.o - CC init/do_mounts.o - ... - CC arch/x86/crypto/glue_helper.o - AS arch/x86/crypto/aes-x86_64-asm_64.o - CC arch/x86/crypto/aes_glue.o - ... - AS arch/x86/entry/entry_64.o - AS arch/x86/entry/thunk_64.o - CC arch/x86/entry/syscall_64.o -``` - -每个目录下的源代码将会被编译并且链接到`built-in.o` 里: - -``` -$ find . -name built-in.o -./arch/x86/crypto/built-in.o -./arch/x86/crypto/sha-mb/built-in.o -./arch/x86/net/built-in.o -./init/built-in.o -./usr/built-in.o -... -...
-``` - -好了,所有的`built-in.o` 都构建完了,现在我们回到目标`vmlinux` 上。你应该还记得,目标`vmlinux` 是在内核的根makefile 里。在链接`vmlinux` 之前,系统会构建[samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation)等等,但是如上文所述,我不会在本文描述这些。 - -```Makefile -vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE - ... - ... - +$(call if_changed,link-vmlinux) -``` - -你可以看到,`vmlinux` 的调用脚本[scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 的主要目的是把所有的`built-in.o` 链接成一个静态可执行文件、生成[System.map](https://en.wikipedia.org/wiki/System.map)。 最后我们来看看下面的输出: - -``` - LINK vmlinux - LD vmlinux.o - MODPOST vmlinux.o - GEN .version - CHK include/generated/compile.h - UPD include/generated/compile.h - CC init/version.o - LD init/built-in.o - KSYM .tmp_kallsyms1.o - KSYM .tmp_kallsyms2.o - LD vmlinux - SORTEX vmlinux - SYSMAP System.map -``` - -以及内核源码树根目录下的`vmlinux` 和`System.map` - -``` -$ ls vmlinux System.map -System.map vmlinux -``` - -这就是全部了,`vmlinux` 构建好了,下一步就是创建[bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage). 
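输出里的`SYSMAP System.map` 这一步可以用一个小例子来示意:`System.map` 本质上就是`vmlinux` 的符号表(`nm` 的输出)经过排序、过滤之后的结果(真实逻辑在脚本`scripts/mksysmap` 里)。下面是一个假设性的演示脚本,用伪造的符号数据代替真实的`vmlinux`:

```shell
# 假设性示意:用 sort + grep 从 nm 风格的符号列表生成 System.map 式的输出,
# 过滤掉无用符号(类型为 a/N/U/w 的行);真实实现见 scripts/mksysmap
make_system_map() {
    sort | grep -v -E ' [aNUw] '
}

printf '%s\n' \
    'ffffffff82ab0000 B _end' \
    'ffffffff81000000 T _text' \
    '0000000000000000 a .tmp.o' | make_system_map
# 输出只剩 _text 和 _end 两行,并且按地址升序排列
```

真实构建中,`scripts/link-vmlinux.sh` 在链接完成后正是用类似的一条管道从`vmlinux` 生成`System.map`。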
- -制作bzImage -------------------------------------------------------------------------------- - -`bzImage` 就是压缩了的linux 内核镜像。我们可以在构建了`vmlinux` 之后通过执行`make bzImage` 获得`bzImage`。同时,不带任何参数只执行`make` 也可以生成`bzImage` ,因为它是在[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) 里预定义的、默认生成的镜像: - -```Makefile -all: bzImage -``` - -让我们看看这个目标,它能帮助我们理解这个镜像是怎么构建的。我已经说过了`bzImage` 是定义在[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile),定义如下: - -```Makefile -bzImage: vmlinux - $(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE) - $(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot - $(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@ -``` - -在这里我们可以看到第一次为boot 目录执行`make`,在我们的例子里是这样的: - -```Makefile -boot := arch/x86/boot -``` - -现在的主要目标是编译目录`arch/x86/boot` 和`arch/x86/boot/compressed` 的代码,构建`setup.bin` 和`vmlinux.bin`,然后用这两个文件生成`bzImage`。第一个目标是定义在[arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) 的`$(obj)/setup.elf`: - -```Makefile -$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE - $(call if_changed,ld) -``` - -我们已经在目录`arch/x86/boot`有了链接脚本`setup.ld`,并且将变量`SETUP_OBJS` 扩展到`boot` 目录下的全部源代码。我们可以看看第一个输出: - -```Makefile - AS arch/x86/boot/bioscall.o - CC arch/x86/boot/cmdline.o - AS arch/x86/boot/copy.o - HOSTCC arch/x86/boot/mkcpustr - CPUSTR arch/x86/boot/cpustr.h - CC arch/x86/boot/cpu.o - CC arch/x86/boot/cpuflags.o - CC arch/x86/boot/cpucheck.o - CC arch/x86/boot/early_serial_console.o - CC arch/x86/boot/edd.o -``` - -下一个源码文件是[arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S),但是我们不能现在就编译它,因为这个目标依赖于下面两个头文件: - -```Makefile -$(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h -``` - -第一个头文件`voffset.h` 是使用`sed` 脚本生成的,包含用`nm` 工具从`vmlinux` 获取的两个地址: - -```C -#define VO__end 0xffffffff82ab0000 -#define VO__text 0xffffffff81000000 -``` - -这两个地址是内核的起始和结束地址。第二个头文件`zoffset.h`
在[arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile) 可以看出是依赖于目标`vmlinux`的: - -```Makefile -$(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE - $(call if_changed,zoffset) -``` - -目标`$(obj)/compressed/vmlinux` 依赖于变量`vmlinux-objs-y` —— 说明需要编译目录[arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) 下的源代码,然后生成`vmlinux.bin`, `vmlinux.bin.bz2`, 和编译工具 - `mkpiggy`。我们可以在下面的输出看出来: - -```Makefile - LDS arch/x86/boot/compressed/vmlinux.lds - AS arch/x86/boot/compressed/head_64.o - CC arch/x86/boot/compressed/misc.o - CC arch/x86/boot/compressed/string.o - CC arch/x86/boot/compressed/cmdline.o - OBJCOPY arch/x86/boot/compressed/vmlinux.bin - BZIP2 arch/x86/boot/compressed/vmlinux.bin.bz2 - HOSTCC arch/x86/boot/compressed/mkpiggy -``` - -`vmlinux.bin` 是去掉了调试信息和注释的`vmlinux` 二进制文件,加上了占用了`u32` (注:即4-Byte)的长度信息的`vmlinux.bin.all` 压缩后就是`vmlinux.bin.bz2`。其中`vmlinux.bin.all` 包含了`vmlinux.bin` 和`vmlinux.relocs`(注:vmlinux 的重定位信息),其中`vmlinux.relocs` 是`vmlinux` 经过程序`relocs` 处理之后的`vmlinux` 镜像(见上文所述)。我们现在已经获取到了这些文件,汇编文件`piggy.S` 将会被`mkpiggy` 生成、然后编译: - -```Makefile - MKPIGGY arch/x86/boot/compressed/piggy.S - AS arch/x86/boot/compressed/piggy.o -``` - -这个汇编文件会包含经过计算得来的、压缩内核的偏移信息。处理完这个汇编文件,我们就可以看到`zoffset` 生成了: - -```Makefile - ZOFFSET arch/x86/boot/zoffset.h -``` - -现在`zoffset.h` 和`voffset.h` 已经生成了,[arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) 里的源文件可以继续编译: - -```Makefile - AS arch/x86/boot/header.o - CC arch/x86/boot/main.o - CC arch/x86/boot/mca.o - CC arch/x86/boot/memory.o - CC arch/x86/boot/pm.o - AS arch/x86/boot/pmjump.o - CC arch/x86/boot/printf.o - CC arch/x86/boot/regs.o - CC arch/x86/boot/string.o - CC arch/x86/boot/tty.o - CC arch/x86/boot/video.o - CC arch/x86/boot/video-mode.o - CC arch/x86/boot/video-vga.o - CC arch/x86/boot/video-vesa.o - CC arch/x86/boot/video-bios.o -``` - -所有的源代码会被编译,他们最终会被链接到`setup.elf` : - -```Makefile - LD 
arch/x86/boot/setup.elf -``` - - -或者: - -``` -ld -m elf_x86_64 -T arch/x86/boot/setup.ld arch/x86/boot/a20.o arch/x86/boot/bioscall.o arch/x86/boot/cmdline.o arch/x86/boot/copy.o arch/x86/boot/cpu.o arch/x86/boot/cpuflags.o arch/x86/boot/cpucheck.o arch/x86/boot/early_serial_console.o arch/x86/boot/edd.o arch/x86/boot/header.o arch/x86/boot/main.o arch/x86/boot/mca.o arch/x86/boot/memory.o arch/x86/boot/pm.o arch/x86/boot/pmjump.o arch/x86/boot/printf.o arch/x86/boot/regs.o arch/x86/boot/string.o arch/x86/boot/tty.o arch/x86/boot/video.o arch/x86/boot/video-mode.o arch/x86/boot/version.o arch/x86/boot/video-vga.o arch/x86/boot/video-vesa.o arch/x86/boot/video-bios.o -o arch/x86/boot/setup.elf -``` - -最后两件事是创建包含目录`arch/x86/boot/*` 下的编译过的代码的`setup.bin`: - -``` -objcopy -O binary arch/x86/boot/setup.elf arch/x86/boot/setup.bin -``` - -以及从`vmlinux` 生成`vmlinux.bin` : - -``` -objcopy -O binary -R .note -R .comment -S arch/x86/boot/compressed/vmlinux arch/x86/boot/vmlinux.bin -``` - -最后,我们编译主机程序[arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c),它将会用来把`setup.bin` 和`vmlinux.bin` 打包成`bzImage`: - -``` -arch/x86/boot/tools/build arch/x86/boot/setup.bin arch/x86/boot/vmlinux.bin arch/x86/boot/zoffset.h arch/x86/boot/bzImage -``` - -实际上`bzImage` 就是把`setup.bin` 和`vmlinux.bin` 连接到一起。最终我们会看到输出结果,就和那些用源码编译过内核的同行的结果一样: - -``` -Setup is 16268 bytes (padded to 16384 bytes). 
-System is 4704 kB -CRC 94a88f9a -Kernel: arch/x86/boot/bzImage is ready (#5) -``` - - -全部结束。 - -结论 -================================================================================ - -这就是本文的最后一节。本文我们了解了编译内核的全部步骤:从执行`make` 命令开始,到最后生成`bzImage`。我知道,linux 内核的makefiles 和构建linux 的过程第一眼看起来可能比较迷惑,但是这并不是很难。希望本文可以帮助你理解构建linux 内核的整个流程。 - - -链接 -================================================================================ - -* [GNU make util](https://en.wikipedia.org/wiki/Make_%28software%29) -* [Linux kernel top Makefile](https://github.com/torvalds/linux/blob/master/Makefile) -* [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler) -* [Ctags](https://en.wikipedia.org/wiki/Ctags) -* [sparse](https://en.wikipedia.org/wiki/Sparse) -* [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage) -* [uname](https://en.wikipedia.org/wiki/Uname) -* [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) -* [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) -* [binutils](http://www.gnu.org/software/binutils/) -* [gcc](https://gcc.gnu.org/) -* [Documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt) -* [System.map](https://en.wikipedia.org/wiki/System.map) -* [Relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29) - --------------------------------------------------------------------------------- - -via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled.md - -译者:[译者ID](https://github.com/oska874) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 3fdea4dc7dc9fb9ad5ccc3f7284ff8824bf29c76 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 11 Sep 2015 16:20:20 +0800 Subject: [PATCH 486/697] =?UTF-8?q?20150911-1=20=E9=80=89=E9=A2=98=20RHCE?= =?UTF-8?q?=20=E6=96=B0=E7=9A=84=E4=B8=A4=E7=AF=87=206=E3=80=817?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit --- ...ile Sharing on Linux or Windows Clients.md | 208 ++++++++++++++++++ ...-based Authentication for Linux Clients.md | 188 ++++++++++++++++ 2 files changed, 396 insertions(+) create mode 100644 sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md create mode 100644 sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md diff --git a/sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md b/sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md new file mode 100644 index 0000000000..02a538cacc --- /dev/null +++ b/sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md @@ -0,0 +1,208 @@ +Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux/Windows Clients – Part 6 +================================================================================ +Since computers seldom work as isolated systems, it is to be expected that as a system administrator or engineer, you know how to set up and maintain a network with multiple types of servers. + +In this article and in the next of this series we will go through the essentials of setting up Samba and NFS servers with Windows/Linux and Linux clients, respectively. + +![Setup Samba File Sharing on Linux](http://www.tecmint.com/wp-content/uploads/2015/09/setup-samba-file-sharing-on-linux-windows-clients.png) + +RHCE: Setup Samba File Sharing – Part 6 + +This article will definitely come in handy if you’re called upon to set up file servers in corporate or enterprise environments where you are likely to find different operating systems and types of devices. 
+ +Since you can read about the background and the technical aspects of both Samba and NFS all over the Internet, in this article and the next we will cut right to the chase with the topic at hand. + +### Step 1: Installing Samba Server ### + +Our current testing environment consists of two RHEL 7 boxes and one Windows 8 machine, in that order: + + 1. Samba / NFS server [box1 (RHEL 7): 192.168.0.18], + 2. Samba client #1 [box2 (RHEL 7): 192.168.0.20] + 3. Samba client #2 [Windows 8 machine: 192.168.0.106] + +![Testing Setup for Samba](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Setup-for-Samba.png) + +Testing Setup for Samba + +On box1, install the following packages: + + # yum update && yum install samba samba-client samba-common + +On box2: + + # yum update && yum install samba samba-client samba-common cifs-utils + +Once the installation is complete, we’re ready to configure our share. + +### Step 2: Setting Up File Sharing Through Samba ### + +One of the reasons why Samba is so relevant is because it provides file and print services to SMB/CIFS clients, which causes those clients to see the server as if it were a Windows system (I must admit I tend to get a little emotional while writing about this topic as it was my first setup as a new Linux system administrator some years ago). + +**Adding system users and setting up permissions and ownership** + +To allow for group collaboration, we will create a group named finance with two users (user1 and user2) using the [useradd command][1], and a directory /finance in box1.
+ +We will also change the group owner of this directory to finance and set its permissions to 0770 (read, write, and execution permissions for the owner and the group owner): + + # groupadd finance + # useradd user1 + # useradd user2 + # usermod -a -G finance user1 + # usermod -a -G finance user2 + # mkdir /finance + # chmod 0770 /finance + # chgrp finance /finance + +### Step 3: Configuring SELinux and Firewalld ### + +In preparation to configure /finance as a Samba share, we will need to either disable SELinux or set the proper boolean and security context values as follows (otherwise, SELinux will prevent clients from accessing the share): + + # setsebool -P samba_export_all_ro=1 samba_export_all_rw=1 + # getsebool -a | grep samba_export + # semanage fcontext -a -t samba_share_t "/finance(/.*)?" + # restorecon /finance + +In addition, we must ensure that Samba traffic is allowed by [firewalld][2]: + + # firewall-cmd --permanent --add-service=samba + # firewall-cmd --reload + +### Step 4: Configure Samba Share ### + +Now it’s time to dive into the configuration file /etc/samba/smb.conf and add the section for our share: we want the members of the finance group to be able to browse the contents of /finance, and save / create files or subdirectories in it (which by default will have their permission bits set to 0770 and finance will be their group owner): + +**smb.conf** + +---------- + + [finance] + comment=Directory for collaboration of the company's finance team + browsable=yes + path=/finance + public=no + valid users=@finance + write list=@finance + writeable=yes + create mask=0770 + force create mode=0770 + force group=finance + +Save the file and then test it with the testparm utility. If there are any errors, the output of the following command will indicate what you need to fix.
Otherwise, it will display a review of your Samba server configuration: + +![Test Samba Configuration](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Samba-Configuration.png) + +Test Samba Configuration + +Should you want to add another share that is open to the public (meaning without any authentication whatsoever), create another section in /etc/samba/smb.conf and under the new share’s name copy the section above, only changing public=no to public=yes and not including the valid users and write list directives. + +### Step 5: Adding Samba Users ### + +Next, you will need to add user1 and user2 as Samba users. To do so, you will use the smbpasswd command, which interacts with Samba’s internal database. You will be prompted to enter a password that you will later use to connect to the share: + + # smbpasswd -a user1 + # smbpasswd -a user2 + +Finally, restart Samba, enable the service to start on boot, and make sure the share is actually available to network clients: + + # systemctl start smb + # systemctl enable smb + # smbclient -L localhost -U user1 + # smbclient -L localhost -U user2 + +![Verify Samba Share](http://www.tecmint.com/wp-content/uploads/2015/09/Verify-Samba-Share.png) + +Verify Samba Share + +At this point, the Samba file server has been properly installed and configured. Now it’s time to test this setup on our RHEL 7 and Windows 8 clients.
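A quick note on the create mask and force create mode pair used in the share definition above: per smb.conf(5), the mode of a newly created file is the mode requested by the client AND-ed with create mask, then OR-ed with force create mode. The shell helper below (a hypothetical sketch, not part of Samba) shows why every file in this share ends up as 0770:

```shell
# Hypothetical helper mirroring smb.conf(5) semantics:
# final mode = (requested mode AND create mask) OR force create mode
effective_mode() {
    printf '0%o\n' $(( ($1 & $2) | $3 ))
}

effective_mode 0666 0770 0770   # client asks for 0666 -> prints 0770
effective_mode 0644 0770 0770   # client asks for 0644 -> prints 0770
```

This is also why the execute bits survive even though SMB clients typically request plain 0666 for data files: force create mode unconditionally turns them on.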
+ +### Step 6: Mounting the Samba Share in Linux ### + +First, make sure the Samba share is accessible from this client: + + # smbclient -L 192.168.0.18 -U user2 + +![Mount Samba Share on Linux](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Share-on-Linux.png) + +Mount Samba Share on Linux + +(repeat the above command for user1) + +As with any other storage media, you can mount (and later unmount) this network share when needed: + + # mount //192.168.0.18/finance /media/samba -o username=user1 + +![Mount Samba Network Share](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Network-Share.png) + +Mount Samba Network Share + +(where /media/samba is an existing directory) + +or permanently, by adding the following entry in the /etc/fstab file: + +**fstab** + +---------- + + //192.168.0.18/finance /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0 + +Where the hidden file /media/samba/.smbcredentials (whose permissions and ownership have been set to 600 and root:root, respectively) contains two lines that indicate the username and password of an account that is allowed to use the share: + +**.smbcredentials** + +---------- + + username=user1 + password=PasswordForUser1 + +Finally, let’s create a file inside /finance and check the permissions and ownership: + + # touch /media/samba/FileCreatedInRHELClient.txt + +![Create File in Samba Share](http://www.tecmint.com/wp-content/uploads/2015/09/Create-File-in-Samba-Share.png) + +Create File in Samba Share + +As you can see, the file was created with 0770 permissions and ownership set to user1:finance. + +### Step 7: Mounting the Samba Share in Windows ### + +To mount the Samba share in Windows, go to My PC and choose Computer, then Map network drive.
Next, assign a letter for the drive to be mapped and check Connect using different credentials (the screenshots below are in Spanish, my native language): + +![Mount Samba Share in Windows](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Share-in-Windows.png) + +Mount Samba Share in Windows + +Finally, let’s create a file and check the permissions and ownership: + +![Create Files on Windows Samba Share](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Files-on-Windows-Samba-Share.png) + +Create Files on Windows Samba Share + + # ls -l /finance + +This time the file belongs to user2 since that’s the account we used to connect from the Windows client. + +### Summary ### + +In this article we have explained not only how to set up a Samba server and two clients using different operating systems, but also [how to configure the firewalld][3] and [SELinux on the server][4] to allow the desired group collaboration capabilities. + +Last, but not least, let me recommend the reading of the online [man page of smb.conf][5] to explore other configuration directives that may be more suitable for your case than the scenario described in this article. + +As always, feel free to drop a comment using the form below if you have any comments or suggestions. 
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/add-users-in-linux/ +[2]:http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in-firewall/ +[3]:http://www.tecmint.com/configure-firewalld-in-centos-7/ +[4]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ +[5]:https://www.samba.org/samba/docs/man/manpages-3/smb.conf.5.html \ No newline at end of file diff --git a/sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md b/sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md new file mode 100644 index 0000000000..2f0ca53dd5 --- /dev/null +++ b/sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md @@ -0,0 +1,188 @@ +Setting Up NFS Server with Kerberos-based Authentication for Linux Clients – Part 7 +================================================================================ +In the last article of this series, we reviewed [how to set up a Samba share over a network][1] that may consist of multiple types of operating systems. Now, if you need to set up file sharing for a group of Unix-like clients you will automatically think of the Network File System, or NFS for short. 
+ +![Setting Up NFS Server with Kerberos Authentication](http://www.tecmint.com/wp-content/uploads/2015/09/Setting-Kerberos-Authentication-with-NFS.jpg) + +RHCE Series: Setting Up NFS Server with Kerberos Authentication – Part 7 + +In this article we will walk you through the process of using Kerberos-based authentication for NFS shares. It is assumed that you already have set up an NFS server and a client. If not, please refer to [install and configure NFS server][2] – which will list the necessary packages that need to be installed and explain how to perform initial configurations on the server before proceeding further. + +In addition, you will want to configure both [SELinux][3] and [firewalld][4] to allow for file sharing through NFS. + +The following example assumes that your NFS share is located in /nfs in box2: + + # semanage fcontext -a -t public_content_rw_t "/nfs(/.*)?" + # restorecon -R /nfs + # setsebool -P nfs_export_all_rw on + # setsebool -P nfs_export_all_ro on + +(where the -P flag indicates persistence across reboots). + +Finally, don’t forget to: + +#### Create NFS Group and Configure NFS Share Directory #### + +1. Create a group called nfs and add the nfsnobody user to it, then change the permissions of the /nfs directory to 0770 and its group owner to nfs. Thus, nfsnobody (which is mapped to the client requests) will have write permissions on the share, and you won’t need to use no_root_squash in the /etc/exports file. + + # groupadd nfs + # usermod -a -G nfs nfsnobody + # chmod 0770 /nfs + # chgrp nfs /nfs + +2. Modify the exports file (/etc/exports) as follows to only allow access from box1 using Kerberos security (sec=krb5). + +**Note**: The value of anongid has been set to the GID of the nfs group that we created previously: + +**exports – Add NFS Share** + +---------- + + /nfs box1(rw,sec=krb5,anongid=1004) + +3. Re-export (-r) all (-a) the NFS shares.
Adding verbosity to the output (-v) is a good idea since it will provide helpful information to troubleshoot the server if something goes wrong: + + # exportfs -arv + +4. Restart and enable the NFS server and related services. Note that you don’t have to enable nfs-lock and nfs-idmapd because they will be automatically started by the other services on boot: + + # systemctl restart rpcbind nfs-server nfs-lock nfs-idmap + # systemctl enable rpcbind nfs-server + +#### Testing Environment and Other Prerequisites #### + +In this guide we will use the following test environment: + +- Client machine [box1: 192.168.0.18] +- NFS / Kerberos server [box2: 192.168.0.20] (also known as Key Distribution Center, or KDC for short). + +**Note**: that Kerberos service is crucial to the authentication scheme. + +As you can see, the NFS server and the KDC are hosted in the same machine for simplicity, although you can set them up in separate machines if you have more available. Both machines are members of the `mydomain.com` domain. + +Last but not least, Kerberos requires at least a basic schema of name resolution and the [Network Time Protocol][5] service to be present in both client and server since the security of Kerberos authentication is in part based upon the timestamps of tickets. 
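Because ticket validation fails when client and KDC clocks drift apart by more than the allowed tolerance (300 seconds by default, the clockskew setting in krb5.conf), it pays to sanity-check timestamps before digging into more exotic causes of authentication errors. The helper below is a hypothetical sketch, not part of any Kerberos tooling, comparing two epoch timestamps against that default:

```shell
# Hypothetical sketch: check whether two epoch timestamps are within
# Kerberos's default 300-second clock-skew tolerance (krb5.conf "clockskew")
skew_ok() {
    d=$(( $1 - $2 ))
    if [ "$d" -lt 0 ]; then d=$(( -d )); fi
    if [ "$d" -le 300 ]; then echo within-tolerance; else echo too-much-skew; fi
}

skew_ok 1000000100 1000000000   # 100s apart -> prints within-tolerance
skew_ok 1000000700 1000000000   # 700s apart -> prints too-much-skew
```

In practice you would feed it `date +%s` from each machine; keeping both boxes on chrony, as shown below, makes this check a formality.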
+ +To set up name resolution, we will use the /etc/hosts file in both client and server: + +**host file – Add DNS for Domain** + +---------- + + 192.168.0.18 box1.mydomain.com box1 + 192.168.0.20 box2.mydomain.com box2 + +In RHEL 7, chrony is the default software that is used for NTP synchronization: + + # yum install chrony + # systemctl start chronyd + # systemctl enable chronyd + +To make sure chrony is actually synchronizing your system’s time with time servers you may want to issue the following command two or three times and make sure the offset is getting nearer to zero: + + # chronyc tracking + +![Synchronize Server Time with Chrony](http://www.tecmint.com/wp-content/uploads/2015/09/Synchronize-Time-with-Chrony.png) + +Synchronize Server Time with Chrony + +### Installing and Configuring Kerberos ### + +To set up the KDC, install the following packages on both server and client (omit the server package in the client): + + # yum update && yum install krb5-server krb5-workstation pam_krb5 + +Once it is installed, edit the configuration files (/etc/krb5.conf and /var/kerberos/krb5kdc/kadm5.acl) and replace all instances of example.com (lowercase and uppercase) with `mydomain.com` as follows. + +Next, enable Kerberos through the firewall and start / enable the related services. + +**Important**: nfs-secure must be started and enabled on the client as well: + + # firewall-cmd --permanent --add-service=kerberos + # systemctl start krb5kdc kadmin nfs-secure + # systemctl enable krb5kdc kadmin nfs-secure + +Now create the Kerberos database (please note that this may take a while as it requires some level of entropy in your system.
To speed things up, I opened another terminal and ran ping -f localhost for 30-45 seconds): + + # kdb5_util create -s + +![Create Kerberos Database](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Kerberos-Database.png) + +Create Kerberos Database + +Next, using the kadmin.local tool, create an admin principal for root: + + # kadmin.local + # addprinc root/admin + +And add the Kerberos server to the database: + + # addprinc -randkey host/box2.mydomain.com + +Same with the NFS service for both client (box1) and server (box2). Please note that in the screenshot below I forgot to do it for box1 before quitting: + + # addprinc -randkey nfs/box2.mydomain.com + # addprinc -randkey nfs/box1.mydomain.com + +And exit by typing quit and pressing Enter: + +![Add Kerberos to NFS Server](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Kerboros-for-NFS.png) + +Add Kerberos to NFS Server + +Then obtain and cache a Kerberos ticket-granting ticket for root/admin: + + # kinit root/admin + # klist + +![Cache Kerberos](http://www.tecmint.com/wp-content/uploads/2015/09/Cache-kerberos-Ticket.png) + +Cache Kerberos + +The last step before actually using Kerberos is storing into a keytab file (in the server) the principals that are authorized to use Kerberos authentication: + + # kadmin.local + # ktadd host/box2.mydomain.com + # ktadd nfs/box2.mydomain.com + # ktadd nfs/box1.mydomain.com + +Finally, mount the share and perform a write test: + + # mount -t nfs4 -o sec=krb5 box2:/nfs /mnt + # echo "Hello from Tecmint.com" > /mnt/greeting.txt + +![Mount NFS Share](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-NFS-Share.png) + +Mount NFS Share + +Let’s now unmount the share, rename the keytab file in the client (to simulate that it’s not present) and try to mount the share again: + + # umount /mnt + # mv /etc/krb5.keytab /etc/krb5.keytab.orig + +![Mount Unmount Kerberos NFS Share](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Unmount-Kerberos-NFS-Share.png) +
+Mount Unmount Kerberos NFS Share + +Now you can use the NFS share with Kerberos-based authentication. + +### Summary ### + +In this article we have explained how to set up NFS with Kerberos authentication. Since there is much more to the topic than we can cover in a single guide, feel free to check the online [Kerberos documentation][6] and since Kerberos is a bit tricky to say the least, don’t hesitate to drop us a note using the form below if you run into any issue or need help with your testing or implementation. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/setting-up-nfs-server-with-kerberos-based-authentication/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/ +[2]:http://www.tecmint.com/configure-nfs-server/ +[3]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ +[4]:http://www.tecmint.com/firewalld-rules-for-centos-7/ +[5]:http://www.tecmint.com/install-ntp-server-in-centos/ +[6]:http://web.mit.edu/kerberos/krb5-1.12/doc/admin/admin_commands/ \ No newline at end of file From 93c32f1b55a7d3b915448e7df6eca4d47d41aa5a Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 11 Sep 2015 16:38:04 +0800 Subject: [PATCH 487/697] =?UTF-8?q?20150911-2=20=E9=80=89=E9=A2=98?= =?UTF-8?q?=EF=BC=8C=E8=BF=99=E4=B8=A4=E7=AF=87=E6=9C=89=E4=B8=80=E4=B8=AA?= =?UTF-8?q?=E7=AC=AC=E4=B8=80=E7=AF=87=EF=BC=8C=E6=98=AF=E4=BB=8A=E5=B9=B4?= =?UTF-8?q?=E4=B8=89=E6=9C=88=E4=BB=BD=E7=9A=84=EF=BC=8C=E6=96=87=E4=BB=B6?= =?UTF-8?q?=E5=90=8D=E5=86=99=E5=9C=A8=E8=AF=B4=E6=98=8E=E4=B8=AD=E4=BA=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 第一篇的文件名:20150316 5 Interesting Command Line Tips and 
Tricks in Linux--Part 1.md
---
 ...Command Line Tricks for Newbies--Part 2.md | 250 ++++++++++++++++
 ...e Types and System Time in Linu--Part 3.md | 279 ++++++++++++++++++
 2 files changed, 529 insertions(+)
 create mode 100644 sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md
 create mode 100644 sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md

diff --git a/sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md b/sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md
new file mode 100644
index 0000000000..09fd4c879d
--- /dev/null
+++ b/sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md
@@ -0,0 +1,250 @@
+10 Useful Linux Command Line Tricks for Newbies – Part 2
+================================================================================
+I remember when I first started using Linux: I was used to the graphical interface of Windows, and I truly hated the Linux terminal. Back then I found the commands hard to remember and hard to use properly. With time I realised the beauty, flexibility and usability of the Linux terminal, and, to be honest, a day doesn’t pass without my using it. Today, I would like to share some useful tricks and tips with Linux newcomers to ease their transition to Linux, or simply help them learn something new (hopefully).

+![10 Linux Commandline Tricks for Newbies](http://www.tecmint.com/wp-content/uploads/2015/09/10-Linux-Commandline-Tricks.jpg)
+
+10 Linux Commandline Tricks – Part 2
+
+- [5 Interesting Command Line Tips and Tricks in Linux – Part 1][1]
+- [5 Useful Commands to Manage Linux File Types – Part 3][2]
+
+This article intends to show you some useful tricks on how to use the Linux terminal like a pro with a minimal amount of skill. All you need is a Linux terminal and some free time to test these commands.
+
+### 1. 
Find the right command ###
+
+Executing the right command can be vital for your system. However, in Linux there are so many different commands that they are often hard to remember. So how do you search for the right command you need? The answer is apropos. All you need to run is:
+
+    # apropos "description"
+
+Where you should replace “description” with the actual description of the command you are looking for. Here is a good example:
+
+    # apropos "list directory"
+
+    dir (1) - list directory contents
+    ls (1) - list directory contents
+    ntfsls (8) - list directory contents on an NTFS filesystem
+    vdir (1) - list directory contents
+
+On the left you can see the commands and on the right their description.
+
+### 2. Execute Previous Command ###
+
+Many times you will need to execute the same command over and over again. While you can repeatedly press the Up key on your keyboard, you can use the history command instead. This command will list all commands you entered since you launched the terminal:
+
+    # history
+
+    1  fdisk -l
+    2  apt-get install gnome-paint
+    3  hostname tecmint.com
+    4  hostnamectl tecmint.com
+    5  man hostnamectl
+    6  hostnamectl --set-hostname tecmint.com
+    7  hostnamectl -set-hostname tecmint.com
+    8  hostnamectl set-hostname tecmint.com
+    9  mount -t "ntfs" -o
+    10  fdisk -l
+    11  mount -t ntfs-3g /dev/sda5 /mnt
+    12  mount -t rw ntfs-3g /dev/sda5 /mnt
+    13  mount -t -rw ntfs-3g /dev/sda5 /mnt
+    14  mount -t ntfs-3g /dev/sda5 /mnt
+    15  mount man
+    16  man mount
+    17  mount -t -o ntfs-3g /dev/sda5 /mnt
+    18  mount -o ntfs-3g /dev/sda5 /mnt
+    19  mount -ro ntfs-3g /dev/sda5 /mnt
+    20  cd /mnt
+    ...
+
+As you can see from the output above, you will receive a list of all commands that you have run. On each line you have a number indicating the row in which you entered the command. You can recall that command by using:
+
+    !#
+
+Where # should be changed with the actual number of the command. 
For better understanding, see the below example, based on the history output above:
+
+    !16
+
+Is equivalent to:
+
+    # man mount
+
+### 3. Use Midnight Commander ###
+
+If you are not used to using commands such as cd, cp, mv, rm, then you can use Midnight Commander. It is an easy to use visual shell in which you can also use the mouse:
+
+![Midnight Commander in Action](http://www.tecmint.com/wp-content/uploads/2015/09/mc-command.jpg)
+
+Midnight Commander in Action
+
+Thanks to the F1 – F12 keys, you can easily perform different tasks. Simply check the legend at the bottom. To select a file or folder, press the “Insert” key.
+
+In short, the Midnight Commander command is called “mc”. To install mc on your system simply run:
+
+    $ sudo apt-get install mc [On Debian based systems]
+
+----------
+
+    # yum install mc [On Fedora based systems]
+
+Here is a simple example of using Midnight Commander. Open mc by simply typing:
+
+    # mc
+
+Now use the TAB key to switch between windows – left and right. I have a LibreOffice file that I will move to the “Software” folder:
+
+![Midnight Commander Move Files](http://www.tecmint.com/wp-content/uploads/2015/09/Midnight-Commander-Move-Files.jpg)
+
+Midnight Commander Move Files
+
+To move the file to the new directory, press the F6 key on your keyboard. MC will now ask you for confirmation:
+
+![Move Files to New Directory](http://www.tecmint.com/wp-content/uploads/2015/09/Move-Files-to-new-Directory.png)
+
+Move Files to New Directory
+
+Once confirmed, the file will be moved to the new destination directory.
+
+Read More: [How to Use Midnight Commander File Manager in Linux][4]
+
+### 4. Shutdown Computer at Specific Time ###
+
+Sometimes you will need to shut down your computer some hours after your work hours have ended. You can configure your computer to shut down at a specific time by using:
+
+    $ sudo shutdown 21:00
+
+This will tell your computer to shut down at the specific time you have provided. 
You can also tell the system to shut down after a specific amount of minutes:
+
+    $ sudo shutdown +15
+
+That way the system will shut down in 15 minutes.
+
+### 5. Show Information about Known Users ###
+
+You can use a simple command to list your Linux system users and some basic information about them. Simply use:
+
+    # lslogins
+
+This should bring you the following output:
+
+    UID USER      PWD-LOCK PWD-DENY LAST-LOGIN  GECOS
+    0   root      0        0        Apr29/11:35 root
+    1   bin       0        1                    bin
+    2   daemon    0        1                    daemon
+    3   adm       0        1                    adm
+    4   lp        0        1                    lp
+    5   sync      0        1                    sync
+    6   shutdown  0        1        Jul19/10:04 shutdown
+    7   halt      0        1                    halt
+    8   mail      0        1                    mail
+    10  uucp      0        1                    uucp
+    11  operator  0        1                    operator
+    12  games     0        1                    games
+    13  gopher    0        1                    gopher
+    14  ftp       0        1                    FTP User
+    23  squid     0        1
+    25  named     0        1                    Named
+    27  mysql     0        1                    MySQL Server
+    47  mailnull  0        1
+    48  apache    0        1                    Apache
+    ...
+
+### 6. Search for Files ###
+
+Searching for files can sometimes be not as easy as you might think. A good example of searching for files is:
+
+    # find /home/user -type f
+
+This command will search for all files located in /home/user. The find command is an extremely powerful one and you can pass more options to it to make your search even more detailed. If you want to search for files larger than a given size, you can use:
+
+    # find . -type f -size +10M
+
+The above command will search from the current directory for all files that are larger than 10 MB (note the `+` sign – without it, find matches only files of exactly that size). Make sure not to run the command from the root directory of your Linux system as this may cause high I/O on your machine.
+
+One of the most frequently used combinations that I use find with is the “-exec” option, which basically allows you to run some actions on the results of the find command.
+
+For example, let’s say that we want to find all files in a directory and change their permissions. 
This can be easily done with:
+
+    # find /home/user/files/ -type f -exec chmod 644 {} \;
+
+The above command will search for all files in the specified directory recursively and will execute the chmod command on the found files. I am sure you will find many more uses of this command in the future; for now, read [35 Examples of Linux ‘find’ Command and Usage][5].
+
+### 7. Build Directory Trees with one Command ###
+
+You probably know that you can create new directories by using the mkdir command. So if you want to create a new folder you will run something like this:
+
+    # mkdir new_folder
+
+But what if you want to create 5 subfolders within that folder? Running mkdir 5 times in a row is not a good solution. Instead you can combine the -p option with brace expansion like this:
+
+    # mkdir -p new_folder/{folder_1,folder_2,folder_3,folder_4,folder_5}
+
+In the end you should have 5 folders located in new_folder:
+
+    # ls new_folder/
+
+    folder_1 folder_2 folder_3 folder_4 folder_5
+
+### 8. Copy File into Multiple Directories ###
+
+File copying is usually performed with the cp command. Copying a file usually looks like this:
+
+    # cp /path-to-file/my_file.txt /path-to-new-directory/
+
+Now imagine that you need to copy that file into multiple directories:
+
+    # cp /home/user/my_file.txt /home/user/1
+    # cp /home/user/my_file.txt /home/user/2
+    # cp /home/user/my_file.txt /home/user/3
+
+This is a bit absurd. Instead you can solve the problem with a simple one-line command:
+
+    # echo /home/user/1/ /home/user/2/ /home/user/3/ | xargs -n 1 cp /home/user/my_file.txt
+
+### 9. Deleting Larger Files ###
+
+Sometimes files can grow extremely large. I have seen cases where a single log file grew to over 250 GB due to poor administration. Removing the file with the rm utility might not be sufficient in such cases due to the fact that there is an extremely large amount of data that needs to be removed. The operation will be a “heavy” one and should be avoided. 
Instead, you can go with a really simple solution:
+
+    # > /path-to-file/huge_file.log
+
+Where of course you will need to change the path and the file name to match your case. The above command will simply write an empty output to the file. In simpler words, it will empty the file without causing high I/O on your system.
+
+### 10. Run Same Command on Multiple Linux Servers ###
+
+Recently one of our readers asked in our [LinuxSay forum][6] how to execute a single command on multiple Linux boxes at once using SSH. His machines’ IP addresses looked like this:
+
+    10.0.0.1
+    10.0.0.2
+    10.0.0.3
+    10.0.0.4
+    10.0.0.5
+
+So here is a simple solution to this issue. Collect the IP addresses of the servers in a file called list.txt, one under the other just as shown above. Then you can run:
+
+    # for i in $(cat list.txt); do ssh user@$i 'bash command'; done
+
+In the above example you will need to change “user” with the actual user with which you will be logging in, and “bash command” with the actual bash command you wish to execute. The method works best when you are [using passwordless authentication with SSH key][7] to your machines, as that way you will not need to enter the password for your user over and over again.
+
+Note that you may need to pass some additional parameters to the SSH command depending on your Linux boxes’ setup.
+
+### Conclusion ###
+
+The above examples are really simple ones and I hope they have helped you to find some of the beauty of Linux and how you can easily perform different operations that can take much more time on other operating systems. 
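A slightly more defensive version of the multi-server loop from tip 10 can be sketched as below. Everything in it is illustrative: the sample IPs, the `uptime` command, and the `echo` dry-run that stands in for the real `ssh` call (left as a comment).

```shell
#!/usr/bin/env bash
# Sketch: run one command on every host listed in list.txt (one address per line).
# For safety this dry-runs with echo; the commented ssh line is the real call.
set -u

printf '10.0.0.1\n10.0.0.2\n' > list.txt   # sample list; replace with your servers

cmd='uptime'
while IFS= read -r host; do
    [ -z "$host" ] && continue             # skip blank lines
    # ssh -o BatchMode=yes "user@$host" "$cmd"
    echo "would run '$cmd' on $host"
done < list.txt
```

Reading with `while read` instead of `for i in $(cat …)` keeps each line intact even if the file ever contains extra whitespace, and `BatchMode=yes` makes ssh fail fast instead of prompting when key authentication is not set up.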
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/10-useful-linux-command-line-tricks-for-newbies/
+
+作者:[Marin Todorov][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/marintodorov89/
+[1]:http://www.tecmint.com/5-linux-command-line-tricks/
+[2]:http://www.tecmint.com/manage-file-types-and-set-system-time-in-linux/
+[3]:http://www.tecmint.com/history-command-examples/
+[4]:http://www.tecmint.com/midnight-commander-a-console-based-file-manager-for-linux/
+[5]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/
+[6]:http://www.linuxsay.com/
+[7]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
\ No newline at end of file
diff --git a/sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md b/sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md
new file mode 100644
index 0000000000..2226b72297
--- /dev/null
+++ b/sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md
@@ -0,0 +1,279 @@
+5 Useful Commands to Manage File Types and System Time in Linux – Part 3
+================================================================================
+Adapting to the command line or terminal can be very hard for beginners who want to learn Linux. Because the terminal gives more control over a Linux system than GUI programs, one has to get used to running commands on the terminal. Therefore, to memorize different commands in Linux, you should use the terminal on a daily basis to understand how commands are used with different options and arguments. 
+
+![Manage File Types and Set Time in Linux](http://www.tecmint.com/wp-content/uploads/2015/09/Find-File-Types-in-Linux.jpg)
+
+Manage File Types and Set Time in Linux – Part 3
+
+Please go through our previous parts of this [Linux Tricks][1] series.
+
+- [5 Interesting Command Line Tips and Tricks in Linux – Part 1][2]
+- [10 Useful Linux Command Line Tricks for Newbies – Part 2][3]
+
+In this article, we are going to look at some tips and tricks of using a handful of commands to work with files and time on the terminal.
+
+### File Types in Linux ###
+
+In Linux, everything is considered a file: your devices, directories and regular files are all treated as files.
+
+There are different types of files on a Linux system:
+
+- Regular files, which may include commands, documents, music files, movies, images, archives and so on.
+- Device files, which are used by the system to access your hardware components. There are two types of device files: block files, which represent storage devices such as hard disks and read data in blocks, and character files, which read data in a character-by-character manner.
+- Hard links and soft links, which are used to access files from anywhere on a Linux filesystem.
+- Named pipes and sockets, which allow different processes to communicate with each other.
+
+#### 1. Determining the type of a file using ‘file’ command ####
+
+You can determine the type of a file by using the file command as follows. The screenshot below shows different examples of using the file command to determine the types of different files. 
+
+    tecmint@tecmint ~/Linux-Tricks $ dir
+    BACKUP                                master.zip
+    crossroads-stable.tar.gz              num.txt
+    EDWARD-MAYA-2011-2012-NEW-REMIX.mp3   reggea.xspf
+    Linux-Security-Optimization-Book.gif  tmp-link
+
+    tecmint@tecmint ~/Linux-Tricks $ file BACKUP/
+    BACKUP/: directory
+
+    tecmint@tecmint ~/Linux-Tricks $ file master.zip
+    master.zip: Zip archive data, at least v1.0 to extract
+
+    tecmint@tecmint ~/Linux-Tricks $ file crossroads-stable.tar.gz
+    crossroads-stable.tar.gz: gzip compressed data, from Unix, last modified: Tue Apr  5 15:15:20 2011
+
+    tecmint@tecmint ~/Linux-Tricks $ file Linux-Security-Optimization-Book.gif
+    Linux-Security-Optimization-Book.gif: GIF image data, version 89a, 200 x 259
+
+    tecmint@tecmint ~/Linux-Tricks $ file EDWARD-MAYA-2011-2012-NEW-REMIX.mp3
+    EDWARD-MAYA-2011-2012-NEW-REMIX.mp3: Audio file with ID3 version 2.3.0, contains: MPEG ADTS, layer III, v1, 192 kbps, 44.1 kHz, JntStereo
+
+    tecmint@tecmint ~/Linux-Tricks $ file /dev/sda1
+    /dev/sda1: block special
+
+    tecmint@tecmint ~/Linux-Tricks $ file /dev/tty1
+    /dev/tty1: character special
+
+#### 2. Determining the file type using ‘ls’ and ‘dir’ commands ####
+
+Another way of determining the type of a file is by performing a long listing using the ls and [dir][4] commands.
+
+Using ls -l to determine the type of a file.
+
+When you view the file permissions, the first character shows the file type and the other characters show the file permissions. 
+
+    tecmint@tecmint ~/Linux-Tricks $ ls -l
+    total 6908
+    drwxr-xr-x 2 tecmint tecmint    4096 Sep  9 11:46 BACKUP
+    -rw-r--r-- 1 tecmint tecmint 1075620 Sep  9 11:47 crossroads-stable.tar.gz
+    -rwxr----- 1 tecmint tecmint 5916085 Sep  9 11:49 EDWARD-MAYA-2011-2012-NEW-REMIX.mp3
+    -rw-r--r-- 1 tecmint tecmint   42122 Sep  9 11:49 Linux-Security-Optimization-Book.gif
+    -rw-r--r-- 1 tecmint tecmint   17627 Sep  9 11:46 master.zip
+    -rw-r--r-- 1 tecmint tecmint       5 Sep  9 11:48 num.txt
+    -rw-r--r-- 1 tecmint tecmint       0 Sep  9 11:46 reggea.xspf
+    -rw-r--r-- 1 tecmint tecmint       5 Sep  9 11:47 tmp-link
+
+Using ls -l to determine block and character files.
+
+    tecmint@tecmint ~/Linux-Tricks $ ls -l /dev/sda1
+    brw-rw---- 1 root disk 8, 1 Sep  9 10:53 /dev/sda1
+
+    tecmint@tecmint ~/Linux-Tricks $ ls -l /dev/tty1
+    crw-rw---- 1 root tty 4, 1 Sep  9 10:54 /dev/tty1
+
+Using dir -l to determine the type of a file.
+
+    tecmint@tecmint ~/Linux-Tricks $ dir -l
+    total 6908
+    drwxr-xr-x 2 tecmint tecmint    4096 Sep  9 11:46 BACKUP
+    -rw-r--r-- 1 tecmint tecmint 1075620 Sep  9 11:47 crossroads-stable.tar.gz
+    -rwxr----- 1 tecmint tecmint 5916085 Sep  9 11:49 EDWARD-MAYA-2011-2012-NEW-REMIX.mp3
+    -rw-r--r-- 1 tecmint tecmint   42122 Sep  9 11:49 Linux-Security-Optimization-Book.gif
+    -rw-r--r-- 1 tecmint tecmint   17627 Sep  9 11:46 master.zip
+    -rw-r--r-- 1 tecmint tecmint       5 Sep  9 11:48 num.txt
+    -rw-r--r-- 1 tecmint tecmint       0 Sep  9 11:46 reggea.xspf
+    -rw-r--r-- 1 tecmint tecmint       5 Sep  9 11:47 tmp-link
+
+#### 3. Counting number of files of a specific type ####
+
+Next we shall look at tips on counting the number of files of a specific type in a given directory using the ls, [grep][5] and [wc][6] commands. Communication between the commands is achieved through pipes.
+
+- grep – command to search according to a given pattern or regular expression.
+- wc – command to count lines, words and characters.
+
+**Counting number of regular files**
+
+In Linux, regular files are represented by the `-` symbol. 
+
+    tecmint@tecmint ~/Linux-Tricks $ ls -l | grep ^- | wc -l
+    7
+
+**Counting number of directories**
+
+In Linux, directories are represented by the `d` symbol.
+
+    tecmint@tecmint ~/Linux-Tricks $ ls -l | grep ^d | wc -l
+    1
+
+**Counting number of symbolic and hard links**
+
+In Linux, symbolic links are represented by the `l` symbol; hard links are indistinguishable from regular files in a long listing, so this count covers symbolic links only.
+
+    tecmint@tecmint ~/Linux-Tricks $ ls -l | grep ^l | wc -l
+    0
+
+**Counting number of block and character files**
+
+In Linux, block and character files are represented by the `b` and `c` symbols respectively.
+
+    tecmint@tecmint ~/Linux-Tricks $ ls -l /dev | grep ^b | wc -l
+    37
+    tecmint@tecmint ~/Linux-Tricks $ ls -l /dev | grep ^c | wc -l
+    159
+
+#### 4. Finding files on a Linux system ####
+
+Next we shall look at some commands one can use to find files on a Linux system; these include the locate, find, whatis and which commands.
+
+**Using the locate command to find files**
+
+In the output below, I am trying to locate the [Samba server configuration][7] for my system.
+
+    tecmint@tecmint ~/Linux-Tricks $ locate samba.conf
+    /usr/lib/tmpfiles.d/samba.conf
+    /var/lib/dpkg/info/samba.conffiles
+
+**Using the find command to find files**
+
+To learn how to use the find command in Linux, you can read our following article that shows more than 30+ practical examples and usage of the find command in Linux.
+
+- [35 Examples of ‘find’ Command in Linux][8]
+
+**Using the whatis command to locate commands**
+
+The whatis command is mostly used to locate commands, and it is special because it gives brief information about a command; it also finds configuration files and manual entries for a command. 
+
+    tecmint@tecmint ~/Linux-Tricks $ whatis bash
+    bash (1) - GNU Bourne-Again SHell
+
+    tecmint@tecmint ~/Linux-Tricks $ whatis find
+    find (1) - search for files in a directory hierarchy
+
+    tecmint@tecmint ~/Linux-Tricks $ whatis ls
+    ls (1) - list directory contents
+
+**Using the which command to locate commands**
+
+The which command is used to locate commands on the filesystem.
+
+    tecmint@tecmint ~/Linux-Tricks $ which mkdir
+    /bin/mkdir
+
+    tecmint@tecmint ~/Linux-Tricks $ which bash
+    /bin/bash
+
+    tecmint@tecmint ~/Linux-Tricks $ which find
+    /usr/bin/find
+
+    tecmint@tecmint ~/Linux-Tricks $ which ls
+    /bin/ls
+
+#### 5. Working with time on your Linux system ####
+
+When working in a networked environment, it is a good practice to keep the correct time on your Linux system. There are certain services on Linux systems that require correct time to work efficiently on a network.
+
+We shall look at commands you can use to manage time on your machine. In Linux, time is managed in two ways: system time and hardware time.
+
+The system time is managed by a system clock and the hardware time is managed by a hardware clock.
+
+To view your system time, date and timezone, use the date command as follows.
+
+    tecmint@tecmint ~/Linux-Tricks $ date
+    Wed Sep  9 12:25:40 IST 2015
+
+Set your system time using date -s or date --set="STRING" as follows.
+
+    tecmint@tecmint ~/Linux-Tricks $ sudo date -s "12:27:00"
+    Wed Sep  9 12:27:00 IST 2015
+
+    tecmint@tecmint ~/Linux-Tricks $ sudo date --set="12:27:00"
+    Wed Sep  9 12:27:00 IST 2015
+
+You can also set the time and date together as follows.
+
+    tecmint@tecmint ~/Linux-Tricks $ sudo date 090912302015
+    Wed Sep  9 12:30:00 IST 2015
+
+View the current date on a calendar using the cal command.
+
+    tecmint@tecmint ~/Linux-Tricks $ cal
+       September 2015
+    Su Mo Tu We Th Fr Sa
+           1  2  3  4  5
+     6  7  8  9 10 11 12
+    13 14 15 16 17 18 19
+    20 21 22 23 24 25 26
+    27 28 29 30
+
+View the hardware clock time using the hwclock command. 
+
+    tecmint@tecmint ~/Linux-Tricks $ sudo hwclock
+    Wednesday 09 September 2015 06:02:58 PM IST  -0.200081 seconds
+
+To set the hardware clock time, use hwclock --set --date="STRING" as follows.
+
+    tecmint@tecmint ~/Linux-Tricks $ sudo hwclock --set --date="09/09/2015 12:33:00"
+
+    tecmint@tecmint ~/Linux-Tricks $ sudo hwclock
+    Wednesday 09 September 2015 12:33:11 PM IST  -0.891163 seconds
+
+The system time is set from the hardware clock during booting, and when the system is shutting down, the hardware time is reset to the system time.
+
+Therefore, when you view the system time and hardware time, they are the same unless you change the system time. Your hardware time may be incorrect when the CMOS battery is weak.
+
+You can also set your system time from the hardware clock as follows.
+
+    $ sudo hwclock --hctosys
+
+It is also possible to set the hardware clock time from the system clock time as follows.
+
+    $ sudo hwclock --systohc
+
+To view how long your Linux system has been running, use the uptime command.
+
+    tecmint@tecmint ~/Linux-Tricks $ uptime
+    12:36:27 up  1:43,  2 users,  load average: 1.39, 1.34, 1.45
+
+    tecmint@tecmint ~/Linux-Tricks $ uptime -p
+    up 1 hour, 43 minutes
+
+    tecmint@tecmint ~/Linux-Tricks $ uptime -s
+    2015-09-09 10:52:47
+
+### Summary ###
+
+Understanding file types in Linux is a good practice for beginners, and managing time is critical, especially on servers, to keep services running reliably and efficiently. Hope you find this guide helpful. If you have any additional information, do not forget to post a comment. Stay connected to Tecmint. 
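As a closing sketch, the counting pipelines from section 3 can be wrapped in a small helper. The function name and the sample directory here are illustrative only, not part of the article:

```shell
#!/usr/bin/env bash
# Sketch: count entries in a directory by file type, using the first character
# of the ls -l mode field (- regular, d directory, l symlink, b block, c char).
count_types() {
    local dir="${1:-.}"
    local t
    for t in - d l b c; do
        # grep -c prints 0 when nothing matches, so each type always gets a line
        printf '%s: %s\n' "$t" "$(ls -l "$dir" | grep -c "^$t")"
    done
}

count_types /dev
```

The `total` line that ls -l prints first starts with a letter outside the five type characters, so it never skews the counts.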
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/manage-file-types-and-set-system-time-in-linux/ + +作者:[Aaron Kili][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/aaronkili/ +[1]:http://www.tecmint.com/tag/linux-tricks/ +[2]:http://www.tecmint.com/free-online-linux-learning-guide-for-beginners/ +[3]:http://www.tecmint.com/10-useful-linux-command-line-tricks-for-newbies/ +[4]:http://www.tecmint.com/linux-dir-command-usage-with-examples/ +[5]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/ +[6]:http://www.tecmint.com/wc-command-examples/ +[7]:http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/ +[8]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/ \ No newline at end of file From 2b6f609557e5530b3b946f2b8a920273ebb7b247 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Fri, 11 Sep 2015 20:19:14 +0800 Subject: [PATCH 488/697] [Translated] tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md --- ...h In Ubuntu And elementary OS With NaSC.md | 54 ---------------- ...ple in Ubuntu or Elementary OS via NaSC.md | 63 ------------------- ...h In Ubuntu And elementary OS With NaSC.md | 53 ++++++++++++++++ ...ple in Ubuntu or Elementary OS via NaSC.md | 62 ++++++++++++++++++ 4 files changed, 115 insertions(+), 117 deletions(-) delete mode 100644 sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md delete mode 100644 sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md create mode 100644 translated/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md create mode 100644 translated/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md 
diff --git a/sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md b/sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md deleted file mode 100644 index 512c0669f9..0000000000 --- a/sources/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md +++ /dev/null @@ -1,54 +0,0 @@ -ictlyh Translating -Do Simple Math In Ubuntu And elementary OS With NaSC -================================================================================ -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Make-Math-Simpler-with-NaSC.jpg) - -[NaSC][1], abbreviation Not a Soulver Clone, is a third party app developed for elementary OS. Whatever the name suggests, NaSC is heavily inspired by [Soulver][2], an OS X app for doing maths like a normal person. - -elementary OS itself draws from OS X and it is not a surprise that a number of the third party apps it has got, are also inspired by OS X apps. - -Coming back to NaSC, what exactly it means by “maths like a normal person “? Well, it means to write like how you think in your mind. As per the description of the app: - -> “Its an app where you do maths like a normal person. It lets you type whatever you want and smartly figures out what is math and spits out an answer on the right pane. Then you can plug those answers in to future equations and if that answer changes, so does the equations its used in.” - -Still not convinced? Here, take a look at this screenshot. - -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/NaSC.png) - -Now, you see what is ‘math for normal person’? Honestly, I am not a fan of such apps but it might be useful for some of you perhaps. Let’s see how can you install NaSC in elementary OS, Ubuntu and Linux Mint. - -### Install NaSC in Ubuntu, elementary OS and Mint ### - -There is a PPA available for installing NaSC. The PPA says ‘daily’ which could mean daily build (i.e. unstable) but in my quick test, it worked just fine. 
- -Open a terminal and use the following commands: - - sudo apt-add-repository ppa:nasc-team/daily - sudo apt-get update - sudo apt-get install nasc - -Here is a screenshot of NaSC in Ubuntu 15.04: - -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/NaSC-Ubuntu.png) - -If you want to remove it, you can use the following commands: - - sudo apt-get remove nasc - sudo apt-add-repository --remove ppa:nasc-team/daily - -If you try it, do share your experience with it. In addition to this, you can also try [Vocal podcast app for Linux][3] from third party elementary OS apps. - --------------------------------------------------------------------------------- - -via: http://itsfoss.com/math-ubuntu-nasc/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:http://parnold-x.github.io/nasc/ -[2]:http://www.acqualia.com/soulver/ -[3]:http://itsfoss.com/podcast-app-vocal-linux/ \ No newline at end of file diff --git a/sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md b/sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md deleted file mode 100644 index 2ddb2a072c..0000000000 --- a/sources/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md +++ /dev/null @@ -1,63 +0,0 @@ -ictlyh Translating -Make Math Simple in Ubuntu / Elementary OS via NaSC -================================================================================ -![](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-icon.png) - -NaSC (Not a Soulver Clone) is an open source software designed for Elementary OS to do arithmetics. It’s kinda similar to the Mac app [Soulver][1]. - -> Its an app where you do maths like a normal person. 
It lets you type whatever you want and smartly figures out what is math and spits out an answer on the right pane. Then you can plug those answers in to future equations and if that answer changes, so does the equations its used in. - -With NaSC you can for example: - -- Perform calculations with strangers you can define yourself -- Change the units and values ​​(in m cm, dollar euro …) -- Knowing the surface area of ​​a planet -- Solve of second-degree polynomial -- and more … - -![nasc-eos](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-eos.jpg) - -At the first launch, NaSC offers a tutorial that details possible features. You can later click the help icon on headerbar to get more. - -![nasc-help](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-help.jpg) - -In addition, the software allows to save your file in order to continue the work. It can be also shared on Pastebin with a defined time. - -### Install NaSC in Ubuntu / Elementary OS Freya: ### - -For Ubuntu 15.04, Ubuntu 15.10, Elementary OS Freya, open terminal from the Dash, App Launcher and run below commands one by one: - -1. Add the [NaSC PPA][2] via command: - - sudo apt-add-repository ppa:nasc-team/daily - -![nasc-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-ppa.jpg) - -2. If you’ve installed Synaptic Package Manager, search for and install `nasc` via it after clicking Reload button. - -Or run below commands to update system cache and install the software: - - sudo apt-get update - - sudo apt-get install nasc - -3. **(Optional)** To remove the software as well as NaSC, run: - - sudo apt-get remove nasc && sudo add-apt-repository -r ppa:nasc-team/daily - -For those who don’t want to add PPA, grab the .deb package directly from [this page][3]. 
- --------------------------------------------------------------------------------- - -via: http://ubuntuhandbook.org/index.php/2015/09/make-math-simple-in-ubuntu-elementary-os-via-nasc/ - -作者:[Ji m][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ubuntuhandbook.org/index.php/about/ -[1]:http://www.acqualia.com/soulver/ -[2]:https://launchpad.net/~nasc-team/+archive/ubuntu/daily/ -[3]:http://ppa.launchpad.net/nasc-team/daily/ubuntu/pool/main/n/nasc/ \ No newline at end of file diff --git a/translated/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md b/translated/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md new file mode 100644 index 0000000000..d65beef2a5 --- /dev/null +++ b/translated/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md @@ -0,0 +1,53 @@ +在 Ubuntu 和 Elementary OS 上使用 NaSC 进行简单数学运算 +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Make-Math-Simpler-with-NaSC.jpg) + +[NaSC][1],Not a Soulver Clone 的缩写,是为 elementary 操作系统开发的第三方应用程序。正如名字暗示的那样,NaSC 的灵感来源于 [Soulver][2],后者是像普通人一样进行数学计算的 OS X 应用。 + +Elementary OS 它自己本身借鉴了 OS X,也就不奇怪它的很多第三方应用灵感都来自于 OS X 应用。 + +回到 NaSC,“像普通人一样进行数学计算”到底是什么意思呢?事实上,它意味着正如你想的那样去书写。按照该应用程序的介绍: + +> “它能使你像平常那样进行计算。它允许你输入任何你想输入的,智能识别其中的数学部分并在右边面板打印出结果。然后你可以在后面的等式中使用这些结果,如果结果发生了改变,等式中使用的也会同样变化。” + +还不相信?让我们来看一个截图。 + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/NaSC.png) + +现在,你明白什么是 “像普通人一样做数学” 了吗?坦白地说,我并不是这类应用程序的粉丝,但对你们中的某些人可能会有用。让我们来看看怎么在 Elementary OS、Ubuntu 和 Linux Mint 上安装 NaSC。 + +### 在 Ubuntu、Elementary OS 和 Mint 上安装 NaSC ### + +安装 NaSC 有一个可用的 PPA。PPA 中说 ‘每日’,意味着所有构建(包括不稳定),但作为我的快速测试,并没什么影响。 + +打卡一个终端并运行下面的命令: + + sudo apt-add-repository ppa:nasc-team/daily + sudo apt-get update + sudo apt-get install nasc 
+ +这是 Ubuntu 15.04 中使用 NaSC 的一个截图: + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/NaSC-Ubuntu.png) + +如果你想卸载它,可以使用下面的命令: + + sudo apt-get remove nasc + sudo apt-add-repository --remove ppa:nasc-team/daily + +如果你试用了这个软件,要分享你的经验哦。除此之外,你也可以在第三方 Elementary OS 应用中体验[Vocal podcast app for Linux][3]。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/math-ubuntu-nasc/ + +作者:[Abhishek][a] +译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://parnold-x.github.io/nasc/ +[2]:http://www.acqualia.com/soulver/ +[3]:http://itsfoss.com/podcast-app-vocal-linux/ \ No newline at end of file diff --git a/translated/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md b/translated/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md new file mode 100644 index 0000000000..2d74b1efa5 --- /dev/null +++ b/translated/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md @@ -0,0 +1,62 @@ +在 Ubuntu 和 Elementary 上使用 NaSC 做简单数学运算 +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-icon.png) + +NaSC(Not a Soulver Clone,并非 Soulver 的克隆品)是为 Elementary 操作系统进行数学计算而设计的一款开源软件。类似于 Mac 上的 [Soulver][1]。 + +> 它能使你像平常那样进行计算。它允许你输入任何你想输入的,智能识别其中的数学部分并在右边面板打印出结果。然后你可以在后面的等式中使用这些结果,如果结果发生了改变,等式中使用的也会同样变化。 + +用 NaSC,你可以: + +- 自己定义复杂的计算 +- 改变单位和值(英尺、米、厘米,美元、欧元等) +- 了解行星的表面积 +- 解二次多项式 +- 以及其它 + +![nasc-eos](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-eos.jpg) + +第一次启动时,NaSC 提供了一个关于现有功能的教程。以后你还可以通过点击标题栏上的帮助图标再次查看。 + +![nasc-help](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-help.jpg) + +另外,这个软件还允许你保存文件以便以后继续工作。还可以在一定时间内通过粘贴板共用。 + +### 在 Ubuntu 或 Elementary OS Freya 上安装 NaSC: ### + +对于 
Ubuntu 15.04,Ubuntu 15.10,Elementary OS Freya,从 Dash 或应用启动器中打开终端,逐条运行下面的命令: + +1. 通过命令添加 [NaSC PPA][2]: + + sudo apt-add-repository ppa:nasc-team/daily + +![nasc-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-ppa.jpg) + +2. 如果安装了 Synaptic 软件包管理器,点击 ‘Reload’ 后搜索并安装 ‘nasc’。 + +或者运行下面的命令更新系统缓存并安装软件: + + sudo apt-get update + + sudo apt-get install nasc + +3. **(可选)** 要卸载软件以及 NaSC,运行: + + sudo apt-get remove nasc && sudo add-apt-repository -r ppa:nasc-team/daily + +对于不想添加 PPA 的人,可以直接从[该网页][3]获取 .deb 安装包。、 + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/09/make-math-simple-in-ubuntu-elementary-os-via-nasc/ + +作者:[Ji m][a] +译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:http://www.acqualia.com/soulver/ +[2]:https://launchpad.net/~nasc-team/+archive/ubuntu/daily/ +[3]:http://ppa.launchpad.net/nasc-team/daily/ubuntu/pool/main/n/nasc/ \ No newline at end of file From b0951c83ed82e583f6302c5e95c73814df5bfb0e Mon Sep 17 00:00:00 2001 From: ictlyh Date: Fri, 11 Sep 2015 20:28:20 +0800 Subject: [PATCH 489/697] [Translating] tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md --- ...og Files With Logrotate On Ubuntu 12.10.md | 117 ------------------ ...ile Sharing on Linux or Windows Clients.md | 1 + ...-based Authentication for Linux Clients.md | 1 + 3 files changed, 2 insertions(+), 117 deletions(-) delete mode 100644 sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md diff --git a/sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md b/sources/tech/20150906 How To Manage 
Log Files With Logrotate On Ubuntu 12.10.md deleted file mode 100644 index 0c7ae1a7e3..0000000000 --- a/sources/tech/20150906 How To Manage Log Files With Logrotate On Ubuntu 12.10.md +++ /dev/null @@ -1,117 +0,0 @@ -ictlyh Translating -How To Manage Log Files With Logrotate On Ubuntu 12.10 -================================================================================ -#### About Logrotate #### - -Logrotate is a utility/tool that manages activities like automatic rotation, removal and compression of log files in a system. This is an excellent tool to manage your logs conserve precious disk space. By having a simple yet powerful configuration file, different parameters of logrotation can be controlled. This gives complete control over the way logs can be automatically managed and need not necessitate manual intervention. - -### Prerequisites ### - -As a prerequisite, we are assuming that you have gone through the article on how to set up your droplet or VPS. If not, you can find the article [here][1]. This tutorial requires you to have a VPS up and running and have you log into it. - -#### Setup Logrotate #### - -### Step 1—Update System and System Packages ### - -Run the following command to update the package lists from apt-get and get the information on the newest versions of packages and their dependencies. - - sudo apt-get update - -### Step 2—Install Logrotate ### - -If logrotate is not already on your VPS, install it now through apt-get. - - sudo apt-get install logrotate - -### Step 3 — Confirmation ### - -To verify that logrotate was successfully installed, run this in the command prompt. - - logrotate - -Since the logrotate utility is based on configuration files, the above command will not rotate any files and will show you a brief overview of the usage and the switch options available. 
- -### Step 4—Configure Logrotate ### - -Configurations and default options for the logrotate utility are present in: - - /etc/logrotate.conf - -Some of the important configuration settings are : rotation-interval, log-file-size, rotation-count and compression. - -Application-specific log file information (to override the defaults) are kept at: - - /etc/logrotate.d/ - -We will have a look at a few examples to understand the concept better. - -### Step 5—Example ### - -An example application configuration setting would be the dpkg (Debian package management system), that is stored in /etc/logrotate.d/dpkg. One of the entries in this file would be: - - /var/log/dpkg.log { - monthly - rotate 12 - compress - delaycompress - missingok - notifempty - create 644 root root - } - -What this means is that: - -- the logrotation for dpkg monitors the /var/log/dpkg.log file and does this on a monthly basis this is the rotation interval. -- 'rotate 12' signifies that 12 days worth of logs would be kept. -- logfiles can be compressed using the gzip format by specifying 'compress' and 'delaycompress' delays the compression process till the next log rotation. 'delaycompress' will work only if 'compress' option is specified. -- 'missingok' avoids halting on any error and carries on with the next log file. -- 'notifempty' avoid log rotation if the logfile is empty. -- 'create ' creates a new empty file with the specified properties after log-rotation. - -Though missing in the above example, 'size' is also an important setting if you want to control the sizing of the logs growing in the system. - -A configuration setting of around 100MB would look like: - - size 100M - -Note that If both size and rotation interval are set, then size is taken as a higher priority. That is, if a configuration file has the following settings: - - monthly - size 100M - -then the logs are rotated once the file size reaches 100M and this need not wait for the monthly cycle. 
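To make that precedence rule concrete, here is a small illustrative sketch of the trigger logic just described — a simulation for understanding the behaviour, not logrotate's actual implementation:

```python
from datetime import date, timedelta

def should_rotate(size_bytes, last_rotated, today,
                  max_size=None, interval_days=None):
    # Size takes priority: hitting the configured limit rotates
    # immediately, without waiting for the time interval to elapse.
    if max_size is not None and size_bytes >= max_size:
        return True
    if interval_days is not None:
        return today - last_rotated >= timedelta(days=interval_days)
    return False

# "monthly" + "size 100M": a log that reaches 100M after only 10 days
# is rotated now rather than at the end of the monthly cycle.
print(should_rotate(100 * 1024**2, date(2013, 4, 1), date(2013, 4, 11),
                    max_size=100 * 1024**2, interval_days=30))  # True
```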
- -### Step 6—Cron Job ### - -You can also set the logrotation as a cron so that the manual process can be avoided and this is taken care of automatically. By specifying an entry in /etc/cron.daily/logrotate , the rotation is triggered daily. - -### Step 7—Status Check and Verification ### - -To verify if a particular log is indeed rotating or not and to check the last date and time of its rotation, check the /var/lib/logrotate/status file. This is a neatly formatted file that contains the log file name and the date on which it was last rotated. - - cat /var/lib/logrotate/status - -A few entries from this file, for example: - - "/var/log/lpr.log" 2013-4-11 - "/var/log/dpkg.log" 2013-4-11 - "/var/log/pm-suspend.log" 2013-4-11 - "/var/log/syslog" 2013-4-11 - "/var/log/mail.info" 2013-4-11 - "/var/log/daemon.log" 2013-4-11 - "/var/log/apport.log" 2013-4-11 - -Congratulations! You have logrotate installed in your system. Now, change the configuration settings as per your requirements. - -Try 'man logrotate' or 'logrotate -?' for more details. 
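The status file's format is simple enough to parse yourself if you want to check rotation dates programmatically; a short sketch using the sample entries shown above:

```python
import re

def parse_logrotate_status(text):
    # Each entry looks like: "/var/log/dpkg.log" 2013-4-11
    # Returns a mapping from log file path to last-rotation date string.
    entries = {}
    for line in text.splitlines():
        m = re.match(r'"(.+)"\s+(\d{4}-\d{1,2}-\d{1,2})$', line.strip())
        if m:
            entries[m.group(1)] = m.group(2)
    return entries

sample = '''"/var/log/lpr.log" 2013-4-11
"/var/log/dpkg.log" 2013-4-11'''
print(parse_logrotate_status(sample)["/var/log/dpkg.log"])  # 2013-4-11
```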
- --------------------------------------------------------------------------------- - -via: https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10 - -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.digitalocean.com/community/articles/initial-server-setup-with-ubuntu-12-04 \ No newline at end of file diff --git a/sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md b/sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md index 02a538cacc..b59955783a 100644 --- a/sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md +++ b/sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md @@ -1,3 +1,4 @@ +ictlyh Translating Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux/Windows Clients – Part 6 ================================================================================ Since computers seldom work as isolated systems, it is to be expected that as a system administrator or engineer, you know how to set up and maintain a network with multiple types of servers. 
diff --git a/sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md b/sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md index 2f0ca53dd5..e0341c5247 100644 --- a/sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md +++ b/sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md @@ -1,3 +1,4 @@ +ictlyh Translating Setting Up NFS Server with Kerberos-based Authentication for Linux Clients – Part 7 ================================================================================ In the last article of this series, we reviewed [how to set up a Samba share over a network][1] that may consist of multiple types of operating systems. Now, if you need to set up file sharing for a group of Unix-like clients you will automatically think of the Network File System, or NFS for short. From 1c4e41f3d7ade7e41bf026f0834d8260c730fb03 Mon Sep 17 00:00:00 2001 From: icybreaker Date: Fri, 11 Sep 2015 22:31:48 +0800 Subject: [PATCH 490/697] icybreaker translating --- ...data structures and algorithms make you a better developer.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md b/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md index 7152efa1ed..d90242e58a 100644 --- a/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md +++ b/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md @@ -1,3 +1,4 @@ +icybreaker translating How learning data structures and algorithms make you a better developer ================================================================================ From 7eb82b0882703f4f2c67a6608947a91bb0663002 Mon Sep 17 00:00:00 2001 From: geekpi Date: 
Sat, 12 Sep 2015 09:52:43 +0800 Subject: [PATCH 491/697] translating --- ...ased Client for Connecting Remote Unix or Linux Systems.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md b/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md index f36e1b21df..ac02e48d2f 100644 --- a/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md +++ b/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md @@ -1,3 +1,5 @@ +translating---geekpi + Mosh Shell – A SSH Based Client for Connecting Remote Unix/Linux Systems ================================================================================ Mosh, which stands for Mobile Shell is a command-line application which is used for connecting to the server from a client computer, over the Internet. It can be used as SSH and contains more feature than Secure Shell. It is an application similar to SSH, but with additional features. The application is written originally by Keith Winstein for Unix like operating system and released under GNU GPL v3. 
@@ -107,4 +109,4 @@ via: http://www.tecmint.com/install-mosh-shell-ssh-client-in-linux/ [2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ [3]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ [4]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ -[5]:http://www.tecmint.com/configure-firewalld-in-centos-7/ \ No newline at end of file +[5]:http://www.tecmint.com/configure-firewalld-in-centos-7/ From da767f2fa7f8a47a9384674d2671cfdd32c60925 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 12 Sep 2015 11:16:13 +0800 Subject: [PATCH 492/697] translated --- ...Connecting Remote Unix or Linux Systems.md | 112 ------------------ ...Connecting Remote Unix or Linux Systems.md | 111 +++++++++++++++++ 2 files changed, 111 insertions(+), 112 deletions(-) delete mode 100644 sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md create mode 100644 translated/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md diff --git a/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md b/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md deleted file mode 100644 index ac02e48d2f..0000000000 --- a/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md +++ /dev/null @@ -1,112 +0,0 @@ -translating---geekpi - -Mosh Shell – A SSH Based Client for Connecting Remote Unix/Linux Systems -================================================================================ -Mosh, which stands for Mobile Shell is a command-line application which is used for connecting to the server from a client computer, over the Internet. It can be used as SSH and contains more feature than Secure Shell. It is an application similar to SSH, but with additional features. 
The application is written originally by Keith Winstein for Unix like operating system and released under GNU GPL v3. - -![Mosh Shell SSH Client](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-SSH-Client.png) - -Mosh Shell SSH Client - -#### Features of Mosh #### - -- It is a remote terminal application that supports roaming. -- Available for all major UNIX-like OS viz., Linux, FreeBSD, Solaris, Mac OS X and Android. -- Intermittent Connectivity supported. -- Provides intelligent local echo. -- Line editing of user keystrokes supported. -- Responsive design and Robust Nature over wifi, cellular and long-distance links. -- Remain Connected even when IP changes. It usages UDP in place of TCP (used by SSH). TCP time out when connect is reset or new IP assigned but UDP keeps the connection open. -- The Connection remains intact when you resume the session after a long time. -- No network lag. Shows users typed key and deletions immediately without network lag. -- Same old method to login as it was in SSH. -- Mechanism to handle packet loss. - -### Installation of Mosh Shell in Linux ### - -On Debian, Ubuntu and Mint alike systems, you can easily install the Mosh package with the help of [apt-get package manager][1] as shown. - - # apt-get update - # apt-get install mosh - -On RHEL/CentOS/Fedora based distributions, you need to turn on third party repository called [EPEL][2], in order to install mosh from this repository using [yum package manager][3] as shown. - - # yum update - # yum install mosh - -On Fedora 22+ version, you need to use [dnf package manager][4] to install mosh as shown. - - # dnf install mosh - -### How do I use Mosh Shell? ### - -1. Let’s try to login into remote Linux server using mosh shell. 
- - $ mosh root@192.168.0.150 - -![Mosh Shell Remote Connection](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Remote-Connection.png) - -Mosh Shell Remote Connection - -**Note**: Did you see I got an error in connecting since the port was not open in my remote CentOS 7 box. A quick but not recommended solution I performed was: - - # systemctl stop firewalld [on Remote Server] - -The preferred way is to open a port and update firewall rules. And then connect to mosh on a predefined port. For in-depth details on firewalld you may like to visit this post. - -- [How to Configure Firewalld][5] - -2. Let’s assume that the default SSH port 22 was changed to port 70, in this case you can define custom port with the help of ‘-p‘ switch with mosh. - - $ mosh -p 70 root@192.168.0.150 - -3. Check the version of installed Mosh. - - $ mosh --version - -![Check Mosh Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mosh-Version.png) - -Check Mosh Version - -4. You can close mosh session type ‘exit‘ on the prompt. - - $ exit - -5. Mosh supports a lot of options, which you may see as: - - $ mosh --help - -![Mosh Shell Options](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Options.png) - -Mosh Shell Options - -#### Cons of Mosh Shell #### - -- Mosh requires additional prerequisite for example, allow direct connection via UDP, which was not required by SSH. -- Dynamic port allocation in the range of 60000-61000. The first open fort is allocated. It requires one port per connection. -- Default port allocation is a serious security concern, especially in production. -- IPv6 connections supported, but roaming on IPv6 not supported. -- Scrollback not supported. -- No X11 forwarding supported. -- No support for ssh-agent forwarding. - -### Conclusion ### - -Mosh is a nice small utility which is available for download in the repository of most of the Linux Distributions. 
Though it has a few discrepancies specially security concern and additional requirement it’s features like remaining connected even while roaming is its plus point. My recommendation is Every Linux-er who deals with SSH should try this application and mind it, Mosh is worth a try. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/install-mosh-shell-ssh-client-in-linux/ - -作者:[Avishek Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ -[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ -[3]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[4]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ -[5]:http://www.tecmint.com/configure-firewalld-in-centos-7/ diff --git a/translated/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md b/translated/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md new file mode 100644 index 0000000000..093b4cbc21 --- /dev/null +++ b/translated/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md @@ -0,0 +1,111 @@ +mosh - 一个基于SSH用于连接远程Unix/Linux系统的工具 +================================================================================ +Mosh表示移动Shell(Mobile Shell)是一个用于从客户端连接远程服务器的命令行工具。它可以像ssh那样使用并包含了更多的功能。它是一个类似ssh的程序,但是提供更多的功能。程序最初由Keith Winstein编写用于类Unix的操作系统中,发布于GNU GPL v3协议下。 + +![Mosh Shell SSH Client](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-SSH-Client.png) + +Mosh客户端 + +#### Mosh的功能 #### + +- 它是一个支持漫游的远程终端程序。 +- 在所有主流类Unix版本中可用如Linux、FreeBSD、Solaris、Mac OS X和Android。 +- 
中断连接支持 +- 支持智能本地echo +- 用户按键行编辑支持 +- 响应式设计及在wifi、3G、长距离连接下的鲁棒性 +- 在IP改变后保持连接。它使用UDP代替TCP(在SSH中使用)当连接被重置或者获得新的IP后TCP会超时但是UDP仍然保持连接。 +- 在你很长之间之后恢复会话时仍然保持连接。 +- 没有网络延迟。立即显示用户输入和删除而没有延迟 +- 像SSH那样支持一些旧的方式登录。 +- 包丢失处理机制 + +### Linux中mosh的安装 ### + +在Debian、Ubuntu和Mint类似的系统中,你可以很容易地用[apt-get包管理器][1]安装。 + + # apt-get update + # apt-get install mosh + +在基于RHEL/CentOS/Fedora的系统中,要使用[yum 包管理器][3]安装mosh,你需要打开第三方的[EPEL][2]。 + + # yum update + # yum install mosh + +在Fedora 22+的版本中,你需要使用[dnf包管理器][4]来安装mosh。 + + # dnf install mosh + +### 我该如何使用mosh? ### + +1. 让我们尝试使用mosh登录远程Linux服务器。 + + $ mosh root@192.168.0.150 + +![Mosh Shell Remote Connection](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Remote-Connection.png) + +mosh远程连接 + +**注意**:你有没有看到一个连接错误,因为我在CentOS 7中还有打开这个端口。一个快速但是我并不建议的解决方法是: + + # systemctl stop firewalld [on Remote Server] + +更好的方法是打开一个端口并更新防火墙规则。接着用mosh连接到预定义的端口中。至于更深入的细节,也许你会对下面的文章感兴趣。 + +- [如何配置Firewalld][5] + +2. 让我们假设把默认的22端口改到70,这时使用-p选项来使用自定义端口。 + + $ mosh -p 70 root@192.168.0.150 + +3. 检查mosh的版本 + + $ mosh --version + +![Check Mosh Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mosh-Version.png) + +检查mosh版本 + +4. 你可以输入‘exit’来退出mosh会话。 + + $ exit + +5. mosh支持很多选项,你可以用下面的方法看到: + + $ mosh --help + +![Mosh Shell Options](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Options.png) + +Mosh选项 + +#### mosh的利弊 #### + +- mosh有额外的需求,比如需要允许UDP直接连接,这在SSH不需要。 +- 动态分配的端口范围是60000-61000。第一个打开的端口是分配的。每个连接都需要一个端口。 +- 默认端口分配是一个严重的安全问题,尤其是在生产环境中。 +- 支持IPv6连接,但是不支持IPv6漫游。 +- 不支持回溯 +- 不支持X11转发 +- 不支持ssh-agent转发 + +### 总结 ### + +Mosh is a nice small utility which is available for download in the repository of most of the Linux Distributions. Though it has a few discrepancies specially security concern and additional requirement it’s features like remaining connected even while roaming is its plus point. My recommendation is Every Linux-er who deals with SSH should try this application and mind it, Mosh is worth a try. 
+mosh是一款在大多数linux发行版的仓库中可以下载的一款小工具。虽然它有一些差异尤其是安全问题和额外的需求,它的功能像漫游后保持连接是一个加分点。我的建议是任何一个使用ssh的linux用户都应该试试这个程序,mosh值得一试 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/install-mosh-shell-ssh-client-in-linux/ + +作者:[Avishek Kumar][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ +[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[3]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[4]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ +[5]:http://www.tecmint.com/configure-firewalld-in-centos-7/ From a7a033417cad1cbf2351b6e4c4a7bc7b76281b8e Mon Sep 17 00:00:00 2001 From: icybreaker Date: Sat, 12 Sep 2015 12:36:41 +0800 Subject: [PATCH 493/697] translated --- ... algorithms make you a better developer.md | 127 ------------------ ... 
algorithms make you a better developer.md | 123 +++++++++++++++++ 2 files changed, 123 insertions(+), 127 deletions(-) delete mode 100644 sources/talk/20150823 How learning data structures and algorithms make you a better developer.md create mode 100644 translated/talk/20150823 How learning data structures and algorithms make you a better developer.md diff --git a/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md b/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md deleted file mode 100644 index d90242e58a..0000000000 --- a/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md +++ /dev/null @@ -1,127 +0,0 @@ -icybreaker translating -How learning data structures and algorithms make you a better developer -================================================================================ - -> "I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful […] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important." --- Linus Torvalds - ---- - -> "Smart data structures and dumb code works a lot better than the other way around." --- Eric S. Raymond, The Cathedral and The Bazaar - -Learning about data structures and algorithms makes you a stonking good programmer. - -**Data structures and algorithms are patterns for solving problems.** The more of them you have in your utility belt, the greater variety of problems you'll be able to solve. You'll also be able to come up with more elegant solutions to new problems than you would otherwise be able to. - -You'll understand, ***in depth***, how your computer gets things done. This informs any technical decisions you make, regardless of whether or not you're using a given algorithm directly. 
Everything from memory allocation in the depths of your operating system, to the inner workings of your RDBMS to how your networking stack manages to send data from one corner of Earth to another. All computers rely on fundamental data structures and algorithms, so understanding them better makes you understand the computer better. - -Cultivate a broad and deep knowledge of algorithms and you'll have stock solutions to large classes of problems. Problem spaces that you had difficulty modelling before often slot neatly into well-worn data structures that elegantly handle the known use-cases. Dive deep into the implementation of even the most basic data structures and you'll start seeing applications for them in your day-to-day programming tasks. - -You'll also be able to come up with novel solutions to the somewhat fruitier problems you're faced with. Data structures and algorithms have the habit of proving themselves useful in situations that they weren't originally intended for, and the only way you'll discover these on your own is by having a deep and intuitive knowledge of at least the basics. - -But enough with the theory, have a look at some examples - -###Figuring out the fastest way to get somewhere### -Let's say we're creating software to figure out the shortest distance from one international airport to another. Assume we're constrained to following routes: - -![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/airport-graph-d2e32b3344b708383e405d67a80c29ea.svg) - -graph of destinations and the distances between them, how can we find the shortest distance say, from Helsinki to London? **Dijkstra's algorithm** is the algorithm that will definitely get us the right answer in the shortest time. - -In all likelihood, if you ever came across this problem and knew that Dijkstra's algorithm was the solution, you'd probably never have to implement it from scratch. 
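For readers who do want to implement it once from scratch, here is a minimal Python sketch of Dijkstra's algorithm over an adjacency map; note the routes and distances below are made up for illustration and are not the figure's data:

```python
import heapq

def dijkstra(graph, start, goal):
    # graph: {node: {neighbour: distance}}; returns (total_distance, path).
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (dist + weight, neighbour,
                                       path + [neighbour]))
    return float("inf"), []

# Hypothetical distances in km, for illustration only.
routes = {
    "Helsinki": {"Stockholm": 400, "Berlin": 1100},
    "Stockholm": {"London": 1400},
    "Berlin": {"London": 900},
}
print(dijkstra(routes, "Helsinki", "London"))
# (1800, ['Helsinki', 'Stockholm', 'London'])
```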
Just ***knowing*** about it would point you to a library implementation that solves the problem for you. - -If you did dive deep into the implementation, you'd be working through one of the most important graph algorithms we know of. You'd know that in practice it's a little resource intensive so an extension called A* is often used in it's place. It gets used everywhere from robot guidance to routing TCP packets to GPS pathfinding. - -###Figuring out the order to do things in### -Let's say you're trying to model courses on a new Massive Open Online Courses platform (like Udemy or Khan Academy). Some of the courses depend on each other. For example, a user has to have taken Calculus before she's eligible for the course on Newtonian Mechanics. Courses can have multiple dependencies. Here's are some examples of what that might look like written out in YAML: - - # Mapping from course name to requirements - # - # If you're a physcist or a mathematicisn and you're reading this, sincere - # apologies for the completely made-up dependency tree :) - courses: - arithmetic: [] - algebra: [arithmetic] - trigonometry: [algebra] - calculus: [algebra, trigonometry] - geometry: [algebra] - mechanics: [calculus, trigonometry] - atomic_physics: [mechanics, calculus] - electromagnetism: [calculus, atomic_physics] - radioactivity: [algebra, atomic_physics] - astrophysics: [radioactivity, calculus] - quantumn_mechanics: [atomic_physics, radioactivity, calculus] - -Given those dependencies, as a user, I want to be able to pick any course and have the system give me an ordered list of courses that I would have to take to be eligible. So if I picked `calculus`, I'd want the system to return the list: - - arithmetic -> algebra -> trigonometry -> calculus - -Two important constraints on this that may not be self-evident: - - - At every stage in the course list, the dependencies of the next course must be met. - - We don't want any duplicate courses in the list. 
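Both constraints can be satisfied with a short depth-first traversal of the dependency mapping; a minimal sketch using a subset of the made-up course map above (cycle detection is omitted for brevity, so it assumes the dependency graph is acyclic):

```python
def prerequisites(courses, target):
    # Depth-first topological sort: emit each course only after all of
    # its requirements, and visit each course at most once (no duplicates).
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in courses[name]:
            visit(dep)
        order.append(name)
    visit(target)
    return order

courses = {
    "arithmetic": [],
    "algebra": ["arithmetic"],
    "trigonometry": ["algebra"],
    "calculus": ["algebra", "trigonometry"],
}
print(" -> ".join(prerequisites(courses, "calculus")))
# arithmetic -> algebra -> trigonometry -> calculus
```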
- -This is an example of resolving dependencies, and the algorithm we're looking for to solve this problem is called topological sort (tsort). Tsort works on a dependency graph like we've outlined in the YAML above. Here's what that would look like in a graph (where each arrow means `requires`): - -![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/course-graph-2f60f42bb0dc95319954ce34c02705a2.svg) - -What topological sort does is take a graph like the one above and find an ordering in which all the dependencies are met at each stage. So if we took a sub-graph that only contained `radioactivity` and its dependencies, then ran tsort on it, we might get the following ordering: - - arithmetic - algebra - trigonometry - calculus - mechanics - atomic_physics - radioactivity - -This meets the requirements set out by the use case we described above. A user just has to pick `radioactivity` and they'll get an ordered list of all the courses they have to work through before they're allowed to take it. - -We don't even need to go into the details of how topological sort works before we put it to good use. In all likelihood, your programming language of choice probably has an implementation of it in the standard library. In the worst case scenario, your Unix probably has the `tsort` utility installed by default; run `man tsort` and have a play with it. - -###Other places tsort gets used### - - - **Tools like** `make` allow you to declare task dependencies. Topological sort is used under the hood to figure out what order the tasks should be executed in. - - **Any programming language that has a `require` directive**, indicating that the current file requires the code in a different file to be run first. Here topological sort can be used to figure out what order the files should be loaded in so that each is only loaded once and all dependencies are met. - - **Project management tools with Gantt charts**.
A Gantt chart is a graph that outlines all the dependencies of a given task and gives you an estimate of when it will be complete based on those dependencies. I'm not a fan of Gantt charts, but it's highly likely that tsort will be used to draw them. - -###Squeezing data with Huffman coding### -[Huffman coding](http://en.wikipedia.org/wiki/Huffman_coding) is an algorithm used for lossless data compression. It works by analyzing the data you want to compress and creating a binary code for each character. More frequently occurring characters get smaller codes, so `e` might be encoded as `111` while `x` might be `10010`. The codes are created so that they can be concatenated without a delimiter and still be decoded accurately. - -Huffman coding is used along with LZ77 in the DEFLATE algorithm, which is used by gzip to compress things. gzip is used all over the place, in particular for compressing files (typically anything with a `.gz` extension) and for http requests/responses in transit. - -Knowing how to implement and use Huffman coding has a number of benefits: - - - You'll know why a larger compression context results in better compression overall (e.g. the more you compress at once, the better the compression ratio). This is one of the proposed benefits of SPDY: that you get better compression on multiple HTTP requests/responses. - - You'll know that if you're compressing your javascript/css in transit anyway, it's completely pointless to run a minifier on them. Same goes for PNG files, which use DEFLATE internally for compression already. - - If you ever find yourself trying to forcibly decipher encrypted information, you may realize that since repeating data compresses better, the compression ratio of a given bit of ciphertext will help you determine its [block cipher mode of operation](http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation). - -###Picking what to learn next is hard### -Being a programmer involves learning constantly.
To operate as a web developer you need to know markup languages, high level languages like ruby/python, regular expressions, SQL and JavaScript. You need to know the fine details of HTTP, how to drive a unix terminal and the subtle art of object oriented programming. It's difficult to navigate that landscape effectively and choose what to learn next. - -I'm not a fast learner so I have to choose what to spend time on very carefully. As much as possible, I want to learn skills and techniques that are evergreen, that is, won't be rendered obsolete in a few years time. That means I'm hesitant to learn the javascript framework of the week or untested programming languages and environments. - -As long as our dominant model of computation stays the same, data structures and algorithms that we use today will be used in some form or another in the future. You can safely spend time on gaining a deep and thorough knowledge of them and know that they will pay dividends for your entire career as a programmer. - -###Sign up to the Happy Bear Software List### -Find this article useful? For a regular dose of freshly squeezed technical content delivered straight to your inbox, **click on the big green button below to sign up to the Happy Bear Software mailing list.** - -We'll only be in touch a few times per month and you can unsubscribe at any time. 
- --------------------------------------------------------------------------------- - -via: http://www.happybearsoftware.com/how-learning-data-structures-and-algorithms-makes-you-a-better-developer - -作者:[Happy Bear][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.happybearsoftware.com/ -[1]:http://en.wikipedia.org/wiki/Huffman_coding -[2]:http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation - - - diff --git a/translated/talk/20150823 How learning data structures and algorithms make you a better developer.md b/translated/talk/20150823 How learning data structures and algorithms make you a better developer.md new file mode 100644 index 0000000000..8125229719 --- /dev/null +++ b/translated/talk/20150823 How learning data structures and algorithms make you a better developer.md @@ -0,0 +1,123 @@ +学习数据结构与算法分析如何帮助您成为更优秀的开发人员? +================================================================================ + +> "相较于其它方式,我一直热衷于推崇围绕数据设计代码,我想这也是Git能够如此成功的一大原因[…]在我看来,区别程序员优劣的一大标准就在于他是否认为自己设计的代码或数据结构更为重要。" +-- Linus Torvalds + +--- + +> "优秀的数据结构与简陋的代码组合远比倒过来的组合方式更好。" +-- Eric S. 
Raymond, The Cathedral and The Bazaar + +学习数据结构与算法分析会让您成为一名出色的程序员。 + +**数据结构与算法分析是一种解决问题的思维模式** 在您的个人知识库中,数据结构与算法分析的相关知识储备越多,您将具备应对并解决越多各类繁杂问题的能力。掌握了这种思维模式,您还将有能力针对新问题提出更多以前想不到的漂亮的解决方案。 + +您将***更深入地***了解,计算机如何完成各项操作。无论您是否是直接使用给定的算法,它都影响着您作出的各种技术决定。从计算机操作系统的内存分配到RDBMS的内在工作机制,以及网络堆栈如何实现将数据从地球的一个角落发送至另一个角落这些大大小小的工作的完成,都离不开基础的数据结构与算法,理解并掌握它将会让您更了解计算机的运作机理。 + +对算法广泛深入的学习,能为您储备应对各大类问题的现成解决方案。之前建模困难时遇到的问题,如今通常都能纳入经典的数据结构中得到很好的解决。即使是最基础的数据结构,只要对它进行足够深入的钻研,您将会发现在每天的编程任务中都能经常用到这些知识。 + +有了这种思维模式,在遇到模棱两可的问题时,您会具备想出新颖解决方案的能力。数据结构与算法常常会在它们原本并非为之设计的场景中显示出用处,而要靠自己发现这些用法,您至少要对数据结构与算法分析的基础知识有深入直观的认识。 + +理论认识就讲到这里,让我们一起看看下面几个例子。 + +###最短路径问题### + +我们想要开发一个计算从一个国际机场出发到另一个国际机场的最短距离的软件。假设我们受限于以下路线: + +![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/airport-graph-d2e32b3344b708383e405d67a80c29ea.svg) + +从这张画出各机场之间距离的图中,我们如何才能找到最短距离,比方说从赫尔辛基到伦敦?**Dijkstra算法**是能让我们在最短的时间得到正确答案的适用算法。 + +实际上,如果您遇到过这类问题,并且知道可以用Dijkstra算法求解,您大可不必从零开始实现它,只需***知道***有这个算法,就能找到现成的库实现来帮您解决问题。 + +如果您深入研究了它的实现,就相当于学习了一项最重要的著名图论算法。您还会知道,该算法在实践中资源消耗较大,因此经常用名为A*的扩展算法来代替它。这个算法应用广泛,从机器人导航到TCP数据包路由,以及GPS寻径问题都能应用到这个算法。 + +###先后排序问题### + +您想要在开放式在线课程平台上(如Udemy或Khan学院)学习某课程,有些课程之间彼此依赖。例如,用户学习牛顿力学课程前必须先修微积分课程,课程之间可以有多种依赖关系。用YAML表述举例如下: + + # Mapping from course name to requirements + # + # If you're a physicist or a mathematician and you're reading this, sincere + # apologies for the completely made-up dependency tree :) + courses: + arithmetic: [] + algebra: [arithmetic] + trigonometry: [algebra] + calculus: [algebra, trigonometry] + geometry: [algebra] + mechanics: [calculus, trigonometry] + atomic_physics: [mechanics, calculus] + electromagnetism: [calculus, atomic_physics] + radioactivity: [algebra, atomic_physics] + astrophysics: [radioactivity, calculus] + quantum_mechanics: [atomic_physics, radioactivity, calculus] + +鉴于以上这些依赖关系,作为一名用户,我希望可以任选一门课程,系统能给出一份有序列表,列出我为达到要求而必须先学完的课程。如果我选择了`微积分`课程,我希望系统能返回以下列表: + + arithmetic -> algebra
-> trigonometry -> calculus + +这里有两个潜在的重要约束条件: + + - 在列表的每一步,下一门课程的所有先修课都必须已经出现在前面 + - 必修课列表中不能有重复项 + +这是一个解决依赖关系的例子,解决该问题的算法称作拓扑排序(tsort)。它适用于上述我们用YAML描述的依赖关系图,以下是该依赖关系的图示(其中箭头代表`需要先修`): + +![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/course-graph-2f60f42bb0dc95319954ce34c02705a2.svg) + +拓扑排序所做的,就是在如上所示的图中找到一个在每一步都满足所有依赖关系的顺序。因此如果我们只取出包含`radioactivity`及其依赖课程的子图,对它运行tsort,可能会得到如下的顺序表: + + arithmetic + algebra + trigonometry + calculus + mechanics + atomic_physics + radioactivity + +这符合我们上面描述的需求,用户只需选出`radioactivity`,就能得到学习它之前所有必修课程的有序列表。 + +在把该算法用起来之前,我们甚至不需要深入了解它的实现细节。一般来说,您选择的编程语言的标准库中很可能已有相应的实现。即使在最坏的情况下,您的Unix系统也默认安装了`tsort`工具,运行`man tsort`,动手试试它。 + +###其它拓扑排序适用场合### + + - **诸如`make`的工具**允许您声明任务之间的依赖关系,这里拓扑排序在底层被用来确定这些任务应该以怎样的顺序执行。 + - **任何有`require`指令的编程语言**,即要运行当前文件需先运行另一个文件的情况。这里拓扑排序用于确定文件的加载顺序,以保证每个文件只加载一次,且满足所有文件间的依赖关系要求。 + - **包含甘特图的项目管理工具**。甘特图能直观列出给定任务的所有依赖关系,并基于这些依赖关系给出任务完成时间的预估。我不常用到甘特图,但这些绘制甘特图的工具很可能会用到拓扑排序算法。 + +###霍夫曼编码实现数据压缩### +[霍夫曼编码](http://en.wikipedia.org/wiki/Huffman_coding)是一种用于无损数据压缩的编码算法。它的工作原理是先分析要压缩的数据,再为每个字符创建一个二进制编码。字符出现得越频繁,分配到的编码就越短。因此在某个数据集中`e`可能会编码为`111`,而`x`会编码为`10010`。这些编码的构造方式保证了:即使不加定界符直接串联,也能被准确地解码。 + +在gzip中使用的DEFLATE算法就结合了霍夫曼编码与LZ77一同实现数据压缩功能。gzip应用领域很广,特别是用于文件压缩(以`.gz`为扩展名的文件)以及传输中的http请求与应答。 + +学会实现并使用霍夫曼编码有如下益处: + + - 您会理解为什么更大的压缩上下文会获得更好的整体压缩效果(例如,一次压缩的内容越多,压缩率越高)。这也是SPDY协议被看好的原因之一:对多个HTTP请求/响应一起压缩,效果更好。 + - 您会了解,如果JavaScript/CSS文件在传输过程中本来就会被压缩,再对它们运行压缩器(minifier)就没有什么意义了。PNG文件也是类似,因为它们内部已经使用DEFLATE算法完成了压缩。 + - 如果您试图强行破译加密信息,您可能会意识到:由于重复数据的压缩效果更好,一段给定密文的压缩率能帮助您判断它所用的[分组密码工作模式](http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation)。
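为了更直观地理解上面的原理,下面用Python勾勒一个极简的霍夫曼编码实现(仅作示意;文中`e`和`x`的编码只是举例,实际编码取决于输入数据的字符频率):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # 每个堆元素是 [频率, 序号, {字符: 编码}];序号仅用于在频率相同时打破比较
    heap = [[freq, i, {ch: ""}] for i, (ch, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        # 反复合并频率最低的两棵"树":左侧编码前加 0,右侧前加 1
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in lo[2].items()}
        merged.update({ch: "1" + code for ch, code in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], next_id, merged])
        next_id += 1
    return heap[0][2]

text = "this is an example of a huffman tree"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)

# 前缀码的性质:从左到右扫描,一旦匹配到某个编码就能确定一个字符
reverse = {code: ch for ch, code in codes.items()}
decoded, buf = [], ""
for bit in encoded:
    buf += bit
    if buf in reverse:
        decoded.append(reverse[buf])
        buf = ""
assert "".join(decoded) == text            # 无定界符也能无损还原
assert len(codes[" "]) <= len(codes["x"])  # 高频字符的编码不长于低频字符
```

可以看到,出现频率高的字符确实得到了更短的编码,而且编码串联后不需要定界符也能唯一地解码。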
+ +###下一步选择学习什么是困难的### +作为一名程序员应当做好持续学习的准备。要成为一名web开发人员,您需要了解标记语言、Ruby/Python等高级语言、正则表达式、SQL和JavaScript,还需要了解HTTP的细节、如何使用UNIX终端,以及面向对象编程的精妙艺术。您很难有效地纵览这整个技术图景,因此选择下一步要学习什么是困难的。 + +我不是一个学得快的人,因此我不得不在时间花费上非常谨慎。我希望尽可能地学习有持久生命力的技能,即不会在几年内就过时的技术。这意味着我对本周流行的JavaScript框架或未经检验的编程语言和环境会持犹豫态度。 + +只要占主导地位的计算模型不变,我们如今使用的数据结构与算法在未来也会以某种形式继续适用。您可以放心地将时间投入到深入透彻地掌握它们之中,这些知识将在您作为程序员的整个职业生涯中持续地回报您。 + +-------------------------------------------------------------------------------- + +via: http://www.happybearsoftware.com/how-learning-data-structures-and-algorithms-makes-you-a-better-developer + +作者:[Happy Bear][a] +译者:[icybreaker](https://github.com/icybreaker) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.happybearsoftware.com/ +[1]:http://en.wikipedia.org/wiki/Huffman_coding +[2]:http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation + + + From a13b5f5b24f0a2ba5d49dfd616eb7f8e84d795bb Mon Sep 17 00:00:00 2001 From: wi-cuckoo Date: Sat, 12 Sep 2015 17:02:00 +0800 Subject: [PATCH 494/697] translated wi-cuckoo --- ...tall and Configure Plank Dock in Ubuntu.md | 67 ------------------- ...tall and Configure Plank Dock in Ubuntu.md | 66 ++++++++++++++++++ 2 files changed, 66 insertions(+), 67 deletions(-) delete mode 100644 sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md create mode 100644 translated/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md diff --git a/sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md b/sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md deleted file mode 100644 index 2ebefa9297..0000000000 --- a/sources/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md +++ /dev/null @@ -1,67 +0,0 @@ -translating wi-cuckoo -How to Download, Install, and Configure Plank Dock in Ubuntu
-================================================================================ -It’s a well-known fact that Linux is extremely customizable with users having a lot of options to choose from – be it the operating systems’ various distributions or desktop environments available for a single distro. Like users of any other OS, Linux users also have different tastes and preferences, especially when it comes to desktop. - -While some users aren’t particularly bothered about their desktop, others take special care to make sure that their desktop looks cool and attractive, something for which there are various applications available. One such application that brings life to your desktop – especially if you use a global menu on the top – is the dock. There are many dock applications available for Linux; if you’re looking for the simplest one, then look no further than [Plank][1], which we’ll be discussing in this article. - -**Note**: the examples and commands mentioned here have been tested on Ubuntu (version 14.10) and Plank version 0.9.1.1383. - -### Plank ### - -The official documentation describes Plank as the “simplest dock on the planet.” The project’s goal is to provide just what a dock needs, although it’s essentially a library which can be extended to create other dock programs with more advanced features. - -What’s worth mentioning here is that Plank, which comes pre-installed in elementary OS, is the underlying technology for Docky, a popular dock application which is very similar in functionality to Mac OS X’s Dock. - -### Download and Install ### - -You can download and install Plank by executing the following commands on your terminal: - - sudo add-apt-repository ppa:docky-core/stable - sudo apt-get update - sudo apt-get install plank - -Once installed successfully, you can open the application by typing the name Plank in Unity Dash (see image below), or open it from the App Menu if you aren’t using the Unity environment. 
- -![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-unity-dash.png) - -### Features ### - -Once the Plank dock is enabled, you’ll see it sitting at the center-bottom of your desktop. - -![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-enabled-new.jpg) - -As you can see in the image above, the dock contains some application icons with an orange color indication below those which are currently running. Needless to say, you can click an icon to open that application. Also, a right-click on any application icon will produce some more options that you might be interested in. For example, see the screen-shot below: - -![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-right-click-icons-new.jpg) - -To access the configuration options, you’ll have to do a right-click on Plank’s icon (which is the first one from the left), and then click the Preferences option. This will produce the following window. - -![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-preferences.png) - -As you can see, the preference window consists of two tabs: Appearance and Behavior, with the former being selected by default. The Appearance tab contains settings related to the Plank theme, the dock’s position, and alignment, as well as that related to icons, while the Behavior tab contains settings related to the dock itself. - -![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-behavior-settings.png) - -For example, I changed the position of the dock to Right from within the Appearance tab and locked the icons (which means no “Keep in Dock” option on right-click) from the Behavior tab. - -![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-right-lock-new.jpg) - -As you can see in the screen-shot above, the changes came into effect. Similarly, you can tweak any available setting as per your requirement. - -### Conclusion ### - -Like I said in the beginning, having a dock isn’t mandatory. 
However, using one definitely makes things convenient, especially if you’ve been using Mac and have recently switched over to Linux for whatever reason. For its part, Plank not only offers simplicity, but dependability and stability as well – the project is well-maintained. - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/download-install-configure-plank-dock-ubuntu/ - -作者:[Himanshu Arora][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/himanshu/ -[1]:https://launchpad.net/plank diff --git a/translated/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md b/translated/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md new file mode 100644 index 0000000000..1990e25a60 --- /dev/null +++ b/translated/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md @@ -0,0 +1,66 @@ +在 Ubuntu 里,如何下载,安装和配置 Plank Dock +============================================================================= +一个众所周知的事实就是,Linux 是一个用户可以高度自定义的系统,有很多选项可以选择 —— 作为操作系统,有各种各样的发行版,而对于单个发行版来说,又有很多桌面环境可以选择。与其他操作系统的用户一样,Linux 用户也有不同的口味和喜好,特别是对于桌面来说。 + +一些用户并非很在意他们的桌面,而其他一些则非常关心,要确保他们的桌面看起来很酷,很有吸引力,对于这种情况,有很多不错的应用可以派上用场。有一个应用可以给你的桌面带来活力 —— 特别是当你常用一个全局菜单的时候 —— 这就是 dock 。Linux 上有很多 dock 应用可选用;如果你希望是一个最简洁的,那么就选择 [Plank][1] 吧,文章接下来就要讨论这个应用。 + +**注意**:接下提到的例子和命令都已在 Ubuntu(版本 14.10)和 Plank(版本 0.9.1.1383)上测试通过。 + +### Plank ### + +官方的文档描述 Plank 是“这个星球上最简洁的 dock”。该项目的目的就是提供一个 dock 仅需要的功能,尽管这是很基础的一个库,却可以被扩展,创造其他的含更多高级功能的 dock 程序。 + +这里值得一提的就是,在 elementary OS 里,Plank 是预装的。并且 Plank 是 Docky 的基础,Docky 也是一个非常流行的 dock 应用,在功能上与 Mac OS X 的 Dock 非常相似。 + +### 下载和安装 ### + +通过在终端里执行下面的命令,可以下载并安装 Plank: + + sudo add-apt-repository ppa:docky-core/stable + sudo apt-get update + sudo apt-get install plank + +安装成功后,你就可以在 Unity 
Dash(见下面图片)里通过输入 Plank 来打开该应用,或者从应用菜单里面打开,如果你没有使用 Unity 环境的话。 + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-unity-dash.png) + +### 特性 ### + +当 Plank 启用后,你会看见它停靠在你桌面的底部中间位置。 + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-enabled-new.jpg) + +正如上面图片显示的那样,dock 包含许多带橙色的应用图标,这表明这些应用正处于运行状态。无需说,你可以点击一个图标来打开那个应用。同时,右击一个应用图标会给出更多的选项,你可能会感兴趣。举个例子,该下面的屏幕快照: + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-right-click-icons-new.jpg) + +为了获得配置的选项,你不得不右击一下 Plank 的图标(左数第一个),然后点击 Preferences 选项。这就会产生接下来的窗口。 + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-preferences.png) + +如你所见,Preferences 窗口包含两个标签:Apperance 和 Behavior,前者是默认选中的。Appearance 标签栏包含 Plank 主题相关的设置,dock 的位置,对齐,还有图标相关的,而 Behavior 标签栏包含 dock 本身相关的设定。 + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-behavior-settings.png) + +举个例子,我在 Appearance 里改变 dock 的位置为右侧,在 Behavior 里锁定图标(这表示右击选项中不再有 “Keep in Dock”)。 + +![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-right-lock-new.jpg) + +如你所见的上面屏幕快照一样,改变生效了。类似地,根据你个人需求,改变任何可用的设定。 + +### 结论 ### + +如我开始所说的那样,使用 dock 不是强制的。尽管如此,使用一个会让事情变得方便,特别是你习惯了 Mac,而最近由于一些原因切换到了 Linux 系统。就其本身而言,Plank 不仅提供简洁性,还有可信任和稳定性 —— 该项目一直被很好地维护着。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/download-install-configure-plank-dock-ubuntu/ + +作者:[Himanshu Arora][a] +译者:[wi-cuckoo](https://github.com/wi-cuckoo) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/himanshu/ +[1]:https://launchpad.net/plank From 78cdfeb0b08859ef0d6f8591020732a77e9e4a04 Mon Sep 17 00:00:00 2001 From: "Y.C.S.M" Date: Sat, 12 Sep 2015 20:22:20 +0800 Subject: [PATCH 495/697] Update 20150908 List Of 10 Funny Linux Commands.md complete the translation --- ...0150908 List Of 10 Funny Linux Commands.md | 50 +++++++++---------- 1 file 
changed, 25 insertions(+), 25 deletions(-) diff --git a/sources/tech/20150908 List Of 10 Funny Linux Commands.md b/sources/tech/20150908 List Of 10 Funny Linux Commands.md index 59464c6497..18d6628b04 100644 --- a/sources/tech/20150908 List Of 10 Funny Linux Commands.md +++ b/sources/tech/20150908 List Of 10 Funny Linux Commands.md @@ -1,11 +1,13 @@ translating by tnuoccalanosrep -List Of 10 Funny Linux Commands + +10条真心有趣的Linux命令 ================================================================================ -**Working from the Terminal is really fun. Today, we'll list really funny Linux commands which will bring smile on your face.** -**在终端工作是一件很有趣的事情。今天,我们将会列举一些有趣得让你笑出来的Linux命令。 + +**在终端工作是一件很有趣的事情。今天,我们将会列举一些能为你带来欢笑的有趣Linux命令。** + +### 1. rev ### + +创建一个文件,在文件里面输入几个单词,rev命令会将你写的东西反转输出到控制台。 -Create a file, type some words in this file, rev command will dump all words written by you in reverse. # rev @@ -14,15 +16,15 @@ Create a file, type some words in this file, rev command will dump all words wri ![Selection_001](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0011.png) ### 2. fortune ### + +这个命令没有被默认安装,用apt-get命令安装它,fortune命令会随机显示一些句子 -This command is not install by default, install with apt-get and fortune will display some random sentence. crank@crank-System:~$ sudo apt-get install fortune ![Selection_003](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0031.png) -Use **-s** option with fortune, it will limit the out to one sentence. +利用fortune命令的**-s**选项,它会将输出限制为一个句子。 + # fortune -s ![Selection_004](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0042.png) @@ -31,15 +33,14 @@ Use **-s** option with fortune, it will limit the out to one sentence. #yes -This command will keep displaying the string for infinite time until the process is killed by the user. +这个命令会不停打印字符串,直到用户把这个进程结束掉。 # yes unixmen ![Selection_005](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0054.png) ### 4.
figlet ### - -This command can be installed with apt-get, comes with some ascii fonts which are located in **/usr/share/figlet**. +这个命令可以用apt-get安装,安装之后,在**/usr/share/figlet**可以看到一些ascii字体文件。 cd /usr/share/figlet @@ -57,34 +58,33 @@ e.g. ![Selection_007](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0072.png) -You can try another options also. +当然,你也可以尝试使用其他的选项。 ### 5. asciiquarium ### - -This command will transform your terminal in to a Sea Aquarium. -Download term animator +这个命令会将你的终端变成一个海洋馆。 +下载term animator # wget http://search.cpan.org/CPAN/authors/id/K/KB/KBAUCOM/Term-Animation-2.4.tar.gz -Install and Configure above package. +安装并且配置这个包 # tar -zxvf Term-Animation-2.4.tar.gz # cd Term-Animation-2.4/ # perl Makefile.PL && make && make test # sudo make install -Install following package: +接着安装下面这个包: # apt-get install libcurses-perl -Download and install asciiquarium +下载并且安装asciiquarium # wget http://www.robobunny.com/projects/asciiquarium/asciiquarium.tar.gz # tar -zxvf asciiquarium.tar.gz # cd asciiquarium_1.0/ # cp asciiquarium /usr/local/bin/ -Run, +执行如下命令 # /usr/local/bin/asciiquarium @@ -95,13 +95,13 @@ Run, # apt-get install bb # bb -See what comes out: +看看会输出什么? ![Selection_009](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0092.png) ### 7. sl ### -Sometimes you type **sl** instead of **ls** by mistake,actually **sl** is a command and a locomotive engine will start moving if you type sl. +有的时候你可能把 **ls** 误打成了 **sl**,其实 **sl** 也是一个命令,如果你打 sl的话,你会看到一个移动的火车头 # apt-get install sl @@ -113,7 +113,7 @@ Sometimes you type **sl** instead of **ls** by mistake,actually **sl** is a com ### 8. cowsay ### -Very common command, is will display in ascii form whatever you wants to say. +一个很普遍的命令,它会用ascii显示你想说的话。 apt-get install cowsay @@ -123,7 +123,7 @@ Very common command, is will display in ascii form whatever you wants to say. 
![Selection_013](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0132.png) -Or, you can use another character instead of com, such characters are stored in **/usr/share/cowsay/cows** +或者,你可以用其他的角色来取代默认角色来说这句话,这些角色都存储在**/usr/share/cowsay/cows**目录下 # cd /usr/share/cowsay/cows @@ -134,6 +134,7 @@ Or, you can use another character instead of com, such characters are stored in ![Selection_014](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0141.png) or +或者 # cowsay -f bud-frogs.cow Rajneesh @@ -141,7 +142,7 @@ or ### 9. toilet ### -Yes, this is a command, it dumps ascii strings in colored form to the terminal. +你没看错,这是个命令来的,他会将字符串以彩色的ascii字符串形式输出到终端 # apt-get install toilet @@ -161,7 +162,7 @@ Yes, this is a command, it dumps ascii strings in colored form to the terminal. ### 10. aafire ### -Put you terminal on fire with aafire. +aafire能让你的终端燃起来。 # apt-get install libaa-bin @@ -171,8 +172,7 @@ Put you terminal on fire with aafire. ![Selection_019](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0191.png) -That it, Have fun with Linux Terminal!! - +就这么多,祝你们在Linux终端玩得开心哈!!! 
-------------------------------------------------------------------------------- via: http://www.unixmen.com/list-10-funny-linux-commands/ From 9ea105a0032da9dcdd394337b359dba0fb9ae427 Mon Sep 17 00:00:00 2001 From: "Y.C.S.M" Date: Sat, 12 Sep 2015 20:25:20 +0800 Subject: [PATCH 496/697] Update 20150908 List Of 10 Funny Linux Commands.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit adjust the output form,and change some translation --- sources/tech/20150908 List Of 10 Funny Linux Commands.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/sources/tech/20150908 List Of 10 Funny Linux Commands.md b/sources/tech/20150908 List Of 10 Funny Linux Commands.md index 18d6628b04..aeb45f8d28 100644 --- a/sources/tech/20150908 List Of 10 Funny Linux Commands.md +++ b/sources/tech/20150908 List Of 10 Funny Linux Commands.md @@ -113,7 +113,7 @@ e.g. ### 8. cowsay ### -一个很普遍的命令,它会用ascii显示你想说的话。 +一个很常见的命令,它会用ascii显示你想说的话。 apt-get install cowsay @@ -133,7 +133,6 @@ e.g. ![Selection_014](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0141.png) -or 或者 # cowsay -f bud-frogs.cow Rajneesh @@ -173,12 +172,13 @@ aafire能让你的终端燃起来。 ![Selection_019](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0191.png) 就这么多,祝你们在Linux终端玩得开心哈!!! 
+ -------------------------------------------------------------------------------- via: http://www.unixmen.com/list-10-funny-linux-commands/ 作者:[Rajneesh Upadhyay][a] -译者:[译者ID](https://github.com/译者ID) +译者:[tnuoccalanosrep](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From bb1bdf524c3a1500d50ae592dd96a0a65f327edf Mon Sep 17 00:00:00 2001 From: "Y.C.S.M" Date: Sat, 12 Sep 2015 20:33:29 +0800 Subject: [PATCH 497/697] create translated file [20150908 List Of 10 Funny Linux Commands] --- ...0150908 List Of 10 Funny Linux Commands.md | 186 ++++++++++++++++++ 1 file changed, 186 insertions(+) create mode 100644 translated/tech/20150908 List Of 10 Funny Linux Commands.md diff --git a/translated/tech/20150908 List Of 10 Funny Linux Commands.md b/translated/tech/20150908 List Of 10 Funny Linux Commands.md new file mode 100644 index 0000000000..aeb45f8d28 --- /dev/null +++ b/translated/tech/20150908 List Of 10 Funny Linux Commands.md @@ -0,0 +1,186 @@ +translating by tnuoccalanosrep + +10条真心有趣的Linux命令 +================================================================================ + +**在终端工作是一件很有趣的事情。今天,我们将会列举一些有趣得为你带来欢笑的Linux命令。** + +### 1. rev ### + +创建一个文件,在文件里面输入几个单词,rev命令会将你写的东西反转输出到控制台。 + + # rev + +![Selection_002](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0021.png) + +![Selection_001](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0011.png) + +### 2. fortune ### + +这个命令没有被默认安装,用apt-get命令安装它,fortune命令会随机显示一些句子 + + crank@crank-System:~$ sudo apt-get install fortune + +![Selection_003](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0031.png) + +利用fortune命令的**_s** 选项,他会限制一个句子的输出长度。 + + # fortune -s + +![Selection_004](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0042.png) + +### 3. 
yes ### + + #yes + +这个命令会不停打印字符串,直到用户把这进程给结束掉。 + + # yes unixmen + +![Selection_005](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0054.png) + +### 4. figlet ### +这个命令可以用apt-get安装,安装之后,在**/usr/share/figlet**可以看到一些ascii字体文件。 + + cd /usr/share/figlet + +---------- + + #figlet -f + +e.g. + + #figlet -f big.flf unixmen + +![Selection_006](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0062.png) + +#figlet -f block.flf unixmen + +![Selection_007](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0072.png) + +当然,你也可以尝试使用其他的选项。 + +### 5. asciiquarium ### +这个命令会将你的终端变成一个海洋馆。 +下载term animator + + # wget http://search.cpan.org/CPAN/authors/id/K/KB/KBAUCOM/Term-Animation-2.4.tar.gz + +安装并且配置这个包 + + # tar -zxvf Term-Animation-2.4.tar.gz + # cd Term-Animation-2.4/ + # perl Makefile.PL && make && make test + # sudo make install + +接着安装下面这个包: + + # apt-get install libcurses-perl + +下载并且安装asciiquarium + + # wget http://www.robobunny.com/projects/asciiquarium/asciiquarium.tar.gz + # tar -zxvf asciiquarium.tar.gz + # cd asciiquarium_1.0/ + # cp asciiquarium /usr/local/bin/ + +执行如下命令 + + # /usr/local/bin/asciiquarium + +![asciiquarium_1.1 : perl_008](http://www.unixmen.com/wp-content/uploads/2015/09/asciiquarium_1.1-perl_008.png) + +### 6. bb ### + + # apt-get install bb + # bb + +看看会输出什么? + +![Selection_009](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0092.png) + +### 7. sl ### + +有的时候你可能把 **ls** 误打成了 **sl**,其实 **sl** 也是一个命令,如果你打 sl的话,你会看到一个移动的火车头 + + # apt-get install sl + +---------- + + # sl + +![Selection_012](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0122.png) + +### 8. 
cowsay ### + +一个很常见的命令,它会用ascii显示你想说的话。 + + apt-get install cowsay + +---------- + + # cowsay + +![Selection_013](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0132.png) + +或者,你可以用其他的角色来取代默认角色来说这句话,这些角色都存储在**/usr/share/cowsay/cows**目录下 + + # cd /usr/share/cowsay/cows + +---------- + + cowsay -f ghostbusters.cow unixmen + +![Selection_014](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0141.png) + +或者 + + # cowsay -f bud-frogs.cow Rajneesh + +![Selection_015](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0151.png) + +### 9. toilet ### + +你没看错,这是个命令来的,他会将字符串以彩色的ascii字符串形式输出到终端 + + # apt-get install toilet + +---------- + + # toilet --gay unixmen + +![Selection_016](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0161.png) + + toilet -F border -F gay unixmen + +![Selection_020](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_020.png) + + toilet -f mono12 -F metal unixmen + +![Selection_018](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0181.png) + +### 10. aafire ### + +aafire能让你的终端燃起来。 + + # apt-get install libaa-bin + +---------- + + # aafire + +![Selection_019](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0191.png) + +就这么多,祝你们在Linux终端玩得开心哈!!! 
+ +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/list-10-funny-linux-commands/ + +作者:[Rajneesh Upadhyay][a] +译者:[tnuoccalanosrep](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/rajneesh/ From 7a26d47f807e57b80250ec615aa7182754cd5839 Mon Sep 17 00:00:00 2001 From: "Y.C.S.M" Date: Sat, 12 Sep 2015 20:35:08 +0800 Subject: [PATCH 498/697] Update 20150908 List Of 10 Funny Linux Commands.md remove the state of file in the head --- translated/tech/20150908 List Of 10 Funny Linux Commands.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/translated/tech/20150908 List Of 10 Funny Linux Commands.md b/translated/tech/20150908 List Of 10 Funny Linux Commands.md index aeb45f8d28..7219e16890 100644 --- a/translated/tech/20150908 List Of 10 Funny Linux Commands.md +++ b/translated/tech/20150908 List Of 10 Funny Linux Commands.md @@ -1,5 +1,3 @@ -translating by tnuoccalanosrep - 10条真心有趣的Linux命令 ================================================================================ From 7054c14d60581082e700f0fe14dd59fd523de653 Mon Sep 17 00:00:00 2001 From: "Y.C.S.M" Date: Sat, 12 Sep 2015 20:35:53 +0800 Subject: [PATCH 499/697] Delete 20150908 List Of 10 Funny Linux Commands.md --- ...0150908 List Of 10 Funny Linux Commands.md | 186 ------------------ 1 file changed, 186 deletions(-) delete mode 100644 sources/tech/20150908 List Of 10 Funny Linux Commands.md diff --git a/sources/tech/20150908 List Of 10 Funny Linux Commands.md b/sources/tech/20150908 List Of 10 Funny Linux Commands.md deleted file mode 100644 index aeb45f8d28..0000000000 --- a/sources/tech/20150908 List Of 10 Funny Linux Commands.md +++ /dev/null @@ -1,186 +0,0 @@ -translating by tnuoccalanosrep - -10条真心有趣的Linux命令 -================================================================================ - 
-**在终端工作是一件很有趣的事情。今天,我们将会列举一些有趣得为你带来欢笑的Linux命令。** - -### 1. rev ### - -创建一个文件,在文件里面输入几个单词,rev命令会将你写的东西反转输出到控制台。 - - # rev - -![Selection_002](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0021.png) - -![Selection_001](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0011.png) - -### 2. fortune ### - -这个命令没有被默认安装,用apt-get命令安装它,fortune命令会随机显示一些句子 - - crank@crank-System:~$ sudo apt-get install fortune - -![Selection_003](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0031.png) - -利用fortune命令的**_s** 选项,他会限制一个句子的输出长度。 - - # fortune -s - -![Selection_004](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0042.png) - -### 3. yes ### - - #yes - -这个命令会不停打印字符串,直到用户把这进程给结束掉。 - - # yes unixmen - -![Selection_005](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0054.png) - -### 4. figlet ### -这个命令可以用apt-get安装,安装之后,在**/usr/share/figlet**可以看到一些ascii字体文件。 - - cd /usr/share/figlet - ----------- - - #figlet -f - -e.g. - - #figlet -f big.flf unixmen - -![Selection_006](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0062.png) - -#figlet -f block.flf unixmen - -![Selection_007](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0072.png) - -当然,你也可以尝试使用其他的选项。 - -### 5. asciiquarium ### -这个命令会将你的终端变成一个海洋馆。 -下载term animator - - # wget http://search.cpan.org/CPAN/authors/id/K/KB/KBAUCOM/Term-Animation-2.4.tar.gz - -安装并且配置这个包 - - # tar -zxvf Term-Animation-2.4.tar.gz - # cd Term-Animation-2.4/ - # perl Makefile.PL && make && make test - # sudo make install - -接着安装下面这个包: - - # apt-get install libcurses-perl - -下载并且安装asciiquarium - - # wget http://www.robobunny.com/projects/asciiquarium/asciiquarium.tar.gz - # tar -zxvf asciiquarium.tar.gz - # cd asciiquarium_1.0/ - # cp asciiquarium /usr/local/bin/ - -执行如下命令 - - # /usr/local/bin/asciiquarium - -![asciiquarium_1.1 : perl_008](http://www.unixmen.com/wp-content/uploads/2015/09/asciiquarium_1.1-perl_008.png) - -### 6. 
bb ### - - # apt-get install bb - # bb - -看看会输出什么? - -![Selection_009](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0092.png) - -### 7. sl ### - -有的时候你可能把 **ls** 误打成了 **sl**,其实 **sl** 也是一个命令,如果你打 sl的话,你会看到一个移动的火车头 - - # apt-get install sl - ----------- - - # sl - -![Selection_012](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0122.png) - -### 8. cowsay ### - -一个很常见的命令,它会用ascii显示你想说的话。 - - apt-get install cowsay - ----------- - - # cowsay - -![Selection_013](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0132.png) - -或者,你可以用其他的角色来取代默认角色来说这句话,这些角色都存储在**/usr/share/cowsay/cows**目录下 - - # cd /usr/share/cowsay/cows - ----------- - - cowsay -f ghostbusters.cow unixmen - -![Selection_014](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0141.png) - -或者 - - # cowsay -f bud-frogs.cow Rajneesh - -![Selection_015](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0151.png) - -### 9. toilet ### - -你没看错,这是个命令来的,他会将字符串以彩色的ascii字符串形式输出到终端 - - # apt-get install toilet - ----------- - - # toilet --gay unixmen - -![Selection_016](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0161.png) - - toilet -F border -F gay unixmen - -![Selection_020](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_020.png) - - toilet -f mono12 -F metal unixmen - -![Selection_018](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0181.png) - -### 10. aafire ### - -aafire能让你的终端燃起来。 - - # apt-get install libaa-bin - ----------- - - # aafire - -![Selection_019](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0191.png) - -就这么多,祝你们在Linux终端玩得开心哈!!! 
- --------------------------------------------------------------------------------- - -via: http://www.unixmen.com/list-10-funny-linux-commands/ - -作者:[Rajneesh Upadhyay][a] -译者:[tnuoccalanosrep](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.unixmen.com/author/rajneesh/ From 552e482c8a37c4f0f0177d23f69aec7b56070319 Mon Sep 17 00:00:00 2001 From: Luoyuanhao Date: Sun, 13 Sep 2015 11:07:13 +0800 Subject: [PATCH 500/697] [Translated] tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md --- ...ile Sharing on Linux or Windows Clients.md | 209 ----------------- ...ile Sharing on Linux or Windows Clients.md | 211 ++++++++++++++++++ 2 files changed, 211 insertions(+), 209 deletions(-) delete mode 100644 sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md create mode 100644 translated/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md diff --git a/sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md b/sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md deleted file mode 100644 index b59955783a..0000000000 --- a/sources/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md +++ /dev/null @@ -1,209 +0,0 @@ -ictlyh Translating -Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux/Windows Clients – Part 6 -================================================================================ -Since computers seldom work as isolated systems, it is to be expected that as a 
system administrator or engineer, you know how to set up and maintain a network with multiple types of servers. - -In this article and in the next of this series we will go through the essentials of setting up Samba and NFS servers with Windows/Linux and Linux clients, respectively. - -![Setup Samba File Sharing on Linux](http://www.tecmint.com/wp-content/uploads/2015/09/setup-samba-file-sharing-on-linux-windows-clients.png) - -RHCE: Setup Samba File Sharing – Part 6 - -This article will definitely come in handy if you’re called upon to set up file servers in corporate or enterprise environments where you are likely to find different operating systems and types of devices. - -Since you can read about the background and the technical aspects of both Samba and NFS all over the Internet, in this article and the next we will cut right to the chase with the topic at hand. - -### Step 1: Installing Samba Server ### - -Our current testing environment consists of two RHEL 7 boxes and one Windows 8 machine, in that order: - - 1. Samba / NFS server [box1 (RHEL 7): 192.168.0.18], - 2. Samba client #1 [box2 (RHEL 7): 192.168.0.20] - 3. Samba client #2 [Windows 8 machine: 192.168.0.106] - -![Testing Setup for Samba](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Setup-for-Samba.png) - -Testing Setup for Samba - -On box1, install the following packages: - - # yum update && yum install samba samba-client samba-common - -On box2: - - # yum update && yum install samba samba-client samba-common cifs-utils - -Once the installation is complete, we’re ready to configure our share. - -### Step 2: Setting Up File Sharing Through Samba ### - -One of the reason why Samba is so relevant is because it provides file and print services to SMB/CIFS clients, which causes those clients to see the server as if it was a Windows system (I must admit I tend to get a little emotional while writing about this topic as it was my first setup as a new Linux system administrator some years ago). 
- -**Adding system users and setting up permissions and ownership** - -To allow for group collaboration, we will create a group named finance with two users (user1 and user2) with [useradd command][1] and a directory /finance in box1. - -We will also change the group owner of this directory to finance and set its permissions to 0770 (read, write, and execution permissions for the owner and the group owner): - - # groupadd finance - # useradd user1 - # useradd user2 - # usermod -a -G finance user1 - # usermod -a -G finance user2 - # mkdir /finance - # chmod 0770 /finance - # chgrp finance /finance - -### Step 3:​ Configuring SELinux and Firewalld ### - -In preparation to configure /finance as a Samba share, we will need to either disable SELinux or set the proper boolean and security context values as follows (otherwise, SELinux will prevent clients from accessing the share): - - # setsebool -P samba_export_all_ro=1 samba_export_all_rw=1 - # getsebool –a | grep samba_export - # semanage fcontext –at samba_share_t "/finance(/.*)?" - # restorecon /finance - -In addition, we must ensure that Samba traffic is allowed by the [firewalld][2]. - - # firewall-cmd --permanent --add-service=samba - # firewall-cmd --reload - -### Step 4: Configure Samba Share ### - -Now it’s time to dive into the configuration file /etc/samba/smb.conf and add the section for our share: we want the members of the finance group to be able to browse the contents of /finance, and save / create files or subdirectories in it (which by default will have their permission bits set to 0770 and finance will be their group owner): - -**smb.conf** - ----------- - - [finance] - comment=Directory for collaboration of the company's finance team - browsable=yes - path=/finance - public=no - valid users=@finance - write list=@finance - writeable=yes - create mask=0770 - Force create mode=0770 - force group=finance - -Save the file and then test it with the testparm utility. 
If there are any errors, the output of the following command will indicate what you need to fix. Otherwise, it will display a review of your Samba server configuration: - -![Test Samba Configuration](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Samba-Configuration.png) - -Test Samba Configuration - -Should you want to add another share that is open to the public (meaning without any authentication whatsoever), create another section in /etc/samba/smb.conf and under the new share’s name copy the section above, only changing public=no to public=yes and not including the valid users and write list directives. - -### Step 5: Adding Samba Users ### - -Next, you will need to add user1 and user2 as Samba users. To do so, you will use the smbpasswd command, which interacts with Samba’s internal database. You will be prompted to enter a password that you will later use to connect to the share: - - # smbpasswd -a user1 - # smbpasswd -a user2 - -Finally, restart Samba, enable the service to start on boot, and make sure the share is actually available to network clients: - - # systemctl start smb - # systemctl enable smb - # smbclient -L localhost –U user1 - # smbclient -L localhost –U user2 - -![Verify Samba Share](http://www.tecmint.com/wp-content/uploads/2015/09/Verify-Samba-Share.png) - -Verify Samba Share - -At this point, the Samba file server has been properly installed and configured. Now it’s time to test this setup on our RHEL 7 and Windows 8 clients. 
- -### Step 6:​ Mounting the Samba Share in Linux ### - -First, make sure the Samba share is accessible from this client: - -# smbclient –L 192.168.0.18 -U user2 - -![Mount Samba Share on Linux](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Share-on-Linux.png) - -Mount Samba Share on Linux - -(repeat the above command for user1) - -As any other storage media, you can mount (and later unmount) this network share when needed: - - # mount //192.168.0.18/finance /media/samba -o username=user1 - -![Mount Samba Network Share](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Network-Share.png) - -Mount Samba Network Share - -(where /media/samba is an existing directory) - -or permanently, by adding the following entry in /etc/fstab file: - -**fstab** - ----------- - - //192.168.0.18/finance /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0 - -Where the hidden file /media/samba/.smbcredentials (whose permissions and ownership have been set to 600 and root:root, respectively) contains two lines that indicate the username and password of an account that is allowed to use the share: - -**.smbcredentials** - ----------- - - username=user1 - password=PasswordForUser1 - -Finally, let’s create a file inside /finance and check the permissions and ownership: - - # touch /media/samba/FileCreatedInRHELClient.txt - -![Create File in Samba Share](http://www.tecmint.com/wp-content/uploads/2015/09/Create-File-in-Samba-Share.png) - -Create File in Samba Share - -As you can see, the file was created with 0770 permissions and ownership set to user1:finance. - -### Step 7: Mounting the Samba Share in Windows ### - -To mount the Samba share in Windows, go to My PC and choose Computer, then Map network drive. 
Next, assign a letter for the drive to be mapped and check Connect using different credentials (the screenshots below are in Spanish, my native language): - -![Mount Samba Share in Windows](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Share-in-Windows.png) - -Mount Samba Share in Windows - -Finally, let’s create a file and check the permissions and ownership: - -![Create Files on Windows Samba Share](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Files-on-Windows-Samba-Share.png) - -Create Files on Windows Samba Share - - # ls -l /finance - -This time the file belongs to user2 since that’s the account we used to connect from the Windows client. - -### Summary ### - -In this article we have explained not only how to set up a Samba server and two clients using different operating systems, but also [how to configure the firewalld][3] and [SELinux on the server][4] to allow the desired group collaboration capabilities. - -Last, but not least, let me recommend the reading of the online [man page of smb.conf][5] to explore other configuration directives that may be more suitable for your case than the scenario described in this article. - -As always, feel free to drop a comment using the form below if you have any comments or suggestions. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/add-users-in-linux/ -[2]:http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in-firewall/ -[3]:http://www.tecmint.com/configure-firewalld-in-centos-7/ -[4]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ -[5]:https://www.samba.org/samba/docs/man/manpages-3/smb.conf.5.html \ No newline at end of file diff --git a/translated/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md b/translated/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md new file mode 100644 index 0000000000..cb8fa59954 --- /dev/null +++ b/translated/tech/RHCE/Part 6 - Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux or Windows Clients.md @@ -0,0 +1,211 @@ +安装 Samba 并配置 Firewalld 和 SELinux 使得能在 Linux 和 Windows 之间共享文件 - 第六部分 +================================================================================ +由于计算机很少作为一个独立的系统工作,作为一个系统管理员或工程师,就应该知道如何在有多种类型的服务器之间搭设和维护网络。 + +在本篇以及该系列后面的文章中,我们会介绍用 Windows/Linux 配置 Samba 和 NFS 服务器以及 Linux 客户端。 + +![在 Linux 中配置 Samba 进行文件共享](http://www.tecmint.com/wp-content/uploads/2015/09/setup-samba-file-sharing-on-linux-windows-clients.png) + +RHCE 系列第六部分 - 设置 Samba 文件共享 + +如果有人叫你设置文件服务器用于协作或者配置很可能有多种不同类型操作系统和设备的企业环境,这篇文章就能派上用场。 + +由于你可以在网上找到很多关于 Samba 和 NFS 背景和技术方面的介绍,在这篇文章以及后续文章中我们就省略了这些部分直接进入到我们的主题。 + +### 步骤一: 安装 Samba 服务器 ### + +我们当前的测试环境包括两台 RHEL 7 和一台 Windows 8: + + 1. 
Samba / NFS 服务器 [box1 (RHEL 7): 192.168.0.18],
+ 2. Samba 客户端 #1 [box2 (RHEL 7): 192.168.0.20]
+ 3. Samba 客户端 #2 [Windows 8 machine: 192.168.0.106]
+
+![测试安装 Samba](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Setup-for-Samba.png)
+
+测试安装 Samba
+
+在 box1 中安装以下软件包:
+
+    # yum update && yum install samba samba-client samba-common
+
+在 box2 中:
+
+    # yum update && yum install samba samba-client samba-common cifs-utils
+
+安装完成后,就可以配置我们的共享了。
+
+### 步骤二: 设置通过 Samba 进行文件共享 ###
+
+Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(译者注:SMB 是微软和英特尔制定的一种通信协议,CIFS 是其中一个版本,更详细的介绍可以参考[Wiki][6])提供了文件和打印服务,这使得这些客户端眼中的服务器就像是一台 Windows 系统(我必须承认写这篇文章的时候我有一点激动,因为这是我多年前作为一个新手 Linux 系统管理员的第一次配置)。
+
+**添加系统用户并设置权限和属性**
+
+为了允许组协作,我们会在 box1 中用 [useradd 命令][1]创建一个组 finance、两个用户(user1 和 user2)以及目录 /finance。
+
+我们同时会把这个目录的组所有者更改为 finance 并把权限设置为 0770(所有者和组属主可读可写可执行):
+
+    # groupadd finance
+    # useradd user1
+    # useradd user2
+    # usermod -a -G finance user1
+    # usermod -a -G finance user2
+    # mkdir /finance
+    # chmod 0770 /finance
+    # chgrp finance /finance
+
+### 步骤三: 配置 SELinux 和 Firewalld ###
+
+在配置 /finance 作为 Samba 共享目录之前,我们需要像下面那样停用 SELinux 或设置恰当的布尔值和安全上下文(否则,SELinux 会阻止客户端访问共享目录):
+
+    # setsebool -P samba_export_all_ro=1 samba_export_all_rw=1
+    # getsebool -a | grep samba_export
+    # semanage fcontext -a -t samba_share_t "/finance(/.*)?"
+    # restorecon /finance
+
+另外我们必须确保 [firewalld][2] 允许 Samba 流量通过。
+
+    # firewall-cmd --permanent --add-service=samba
+    # firewall-cmd --reload
+
+### 步骤四: 配置 Samba 共享目录 ###
+
+现在我们来看看配置文件 /etc/samba/smb.conf 并添加用于共享的章节(section):我们希望组 finance 的成员可以浏览 /finance 的内容,并在里面保存/创建文件或者子目录(这些文件和子目录的默认权限为 0770,组所有者为 finance):
+
+**smb.conf**
+
+----------
+
+    [finance]
+    comment=Directory for collaboration of the company's finance team
+    browsable=yes
+    path=/finance
+    public=no
+    valid users=@finance
+    write list=@finance
+    writeable=yes
+    create mask=0770
+    Force create mode=0770
+    force group=finance
+
+保存文件,然后用 testparm 工具进行测试。如果有任何错误,下面这个命令的输出会提示你需要修复的地方;否则,它会显示你的 Samba 服务器配置的概览:
+
+![测试 Samba 配置](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Samba-Configuration.png)
+
+测试 Samba 配置
+
+如果你要添加另一个公开的共享目录(意味着不需要任何身份验证),在 /etc/samba/smb.conf 中新建一个章节,在新共享目录的名称下面复制上面的章节,只需要把 public=no 更改为 public=yes,并去掉 valid users 和 write list 这两个参数。
+
+### 步骤五: 添加 Samba 用户 ###
+
+下一步,你需要添加 user1 和 user2 作为 Samba 的用户。要做到这点,你需要用 smbpasswd 命令,它会和 Samba 的内部数据库进行交互。它会提示你输入一个密码,之后你会用这个密码连接共享目录:
+
+    # smbpasswd -a user1
+    # smbpasswd -a user2
+
+最后,重启 Samba,设置服务开机自动启动,并确保共享目录确实对网络客户端可用:
+
+    # systemctl start smb
+    # systemctl enable smb
+    # smbclient -L localhost -U user1
+    # smbclient -L localhost -U user2
+
+
+![验证 Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/09/Verify-Samba-Share.png)
+
+验证 Samba 共享
+
+到这里,已经正确安装和配置了 Samba 文件服务器。现在让我们在 RHEL 7 和 Windows 8 客户端中测试该配置。
+
+### 步骤六: 在 Linux 中挂载 Samba 共享 ###
+
+首先,确保客户端可以访问 Samba 共享:
+
+    # smbclient -L 192.168.0.18 -U user2
+
+
+![在 Linux 上挂载 Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Share-on-Linux.png)
+
+在 Linux 上挂载 Samba 共享
+
+(为 user1 重复上面的命令)
+
+正如任何其它存储介质,当你需要的时候你可以挂载(之后卸载)该网络共享:
+
+    # mount //192.168.0.18/finance /media/samba -o username=user1
+
+![挂载 Samba 网络共享](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Network-Share.png)
+
+挂载 Samba 网络共享
+
+(其中 /media/samba 是一个已有的目录)
+
+或者在 /etc/fstab 文件中添加下面的条目自动挂载:
+
+**fstab**
+
+---------- + + //192.168.0.18/finance /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0 + +其中隐藏文件 /media/samba/.smbcredentials(它的权限被设置为 600 和 root:root)有两行,指示允许使用共享的账户的用户名和密码: + +**.smbcredentials** + +---------- + + username=user1 + password=PasswordForUser1 + +最后,让我们在 /finance 中创建一个文件并检查权限和属性: + + # touch /media/samba/FileCreatedInRHELClient.txt + +![在 Samba 共享中创建文件](http://www.tecmint.com/wp-content/uploads/2015/09/Create-File-in-Samba-Share.png) + +在 Samba 共享中创建文件 + +正如你看到的,用权限 0770 和属主 user1:finance 创建了文件。 + +### 步骤七: 在 Windows 上挂载 Samba 共享 ### + +要在 Windows 上挂载 Samba 共享,进入 ‘我的计算机’ 并选择 ‘计算机’,‘网络驱动映射’。下一步,为要映射的驱动分配一个字母并用不同的认证检查连接(下面的截图使用我的母语西班牙语): + +![在 Windows 中挂载 Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Share-in-Windows.png) + +在 Windows 中挂载 Samba 共享 + +最后,让我们新建一个文件并检查权限和属性: + +![在 Windows Samba 共享中新建文件](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Files-on-Windows-Samba-Share.png) + +在 Windows Samba 共享中新建文件 + + # ls -l /finance + +这次文件属于 user2,因为这是我们用于从 Windows 客户端中连接的账户。 + +### 总结 ### + +在这篇文章中我们不仅介绍了如何使用不同操作系统设置 Samba 服务器和两个客户端,也介绍了[如何配置 Firewalld][3] 和 [服务器中的 SELinux][4] 以获取所需的组协作功能。 + +最后,同样重要的是,我推荐阅读网上的 [smb.conf man 手册][5] 查看其它可能针对你的情况比本文中介绍的场景更加合适的配置命令。 + +正如往常,欢迎在下面的评论框中留下你的评论或建议。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/ + +作者:[Gabriel Cánepa][a] +译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/add-users-in-linux/ +[2]:http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in-firewall/ +[3]:http://www.tecmint.com/configure-firewalld-in-centos-7/ +[4]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ 
+[5]:https://www.samba.org/samba/docs/man/manpages-3/smb.conf.5.html +[6]:https://en.wikipedia.org/wiki/Server_Message_Block \ No newline at end of file From b6b80a6daad6d8948d592dddc9e81821ede09510 Mon Sep 17 00:00:00 2001 From: Luoyuanhao Date: Sun, 13 Sep 2015 19:59:37 +0800 Subject: [PATCH 501/697] [Translated] tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md --- ...-based Authentication for Linux Clients.md | 189 ----------------- ...-based Authentication for Linux Clients.md | 190 ++++++++++++++++++ 2 files changed, 190 insertions(+), 189 deletions(-) delete mode 100644 sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md create mode 100644 translated/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md diff --git a/sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md b/sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md deleted file mode 100644 index e0341c5247..0000000000 --- a/sources/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md +++ /dev/null @@ -1,189 +0,0 @@ -ictlyh Translating -Setting Up NFS Server with Kerberos-based Authentication for Linux Clients – Part 7 -================================================================================ -In the last article of this series, we reviewed [how to set up a Samba share over a network][1] that may consist of multiple types of operating systems. Now, if you need to set up file sharing for a group of Unix-like clients you will automatically think of the Network File System, or NFS for short. 
- -![Setting Up NFS Server with Kerberos Authentication](http://www.tecmint.com/wp-content/uploads/2015/09/Setting-Kerberos-Authentication-with-NFS.jpg) - -RHCE Series: Setting Up NFS Server with Kerberos Authentication – Part 7 - -In this article we will walk you through the process of using Kerberos-based authentication for NFS shares. It is assumed that you already have set up a NFS server and a client. If not, please refer to [install and configure NFS server][2] – which will list the necessary packages that need to be installed and explain how to perform initial configurations on the server before proceeding further. - -In addition, you will want to configure both [SELinux][3] and [firewalld][4] to allow for file sharing through NFS. - -The following example assumes that your NFS share is located in /nfs in box2: - - # semanage fcontext -a -t public_content_rw_t "/nfs(/.*)?" - # restorecon -R /nfs - # setsebool -P nfs_export_all_rw on - # setsebool -P nfs_export_all_ro on - -(where the -P flag indicates persistence across reboots). - -Finally, don’t forget to: - -#### Create NFS Group and Configure NFS Share Directory #### - -1. Create a group called nfs and add the nfsnobody user to it, then change the permissions of the /nfs directory to 0770 and its group owner to nfs. Thus, nfsnobody (which is mapped to the client requests) will have write permissions on the share) and you won’t need to use no_root_squash in the /etc/exports file. - - # groupadd nfs - # usermod -a -G nfs nfsnobody - # chmod 0770 /nfs - # chgrp nfs /nfs - -2. Modify the exports file (/etc/exports) as follows to only allow access from box1 using Kerberos security (sec=krb5). - -**Note**: that the value of anongid has been set to the GID of the nfs group that we created previously: - -**exports – Add NFS Share** - ----------- - - /nfs box1(rw,sec=krb5,anongid=1004) - -3. Re-export (-r) all (-a) the NFS shares. 
Adding verbosity to the output (-v) is a good idea since it will provide helpful information to troubleshoot the server if something goes wrong: - - # exportfs -arv - -4. Restart and enable the NFS server and related services. Note that you don’t have to enable nfs-lock and nfs-idmapd because they will be automatically started by the other services on boot: - - # systemctl restart rpcbind nfs-server nfs-lock nfs-idmap - # systemctl enable rpcbind nfs-server - -#### Testing Environment and Other Prerequisites #### - -In this guide we will use the following test environment: - -- Client machine [box1: 192.168.0.18] -- NFS / Kerberos server [box2: 192.168.0.20] (also known as Key Distribution Center, or KDC for short). - -**Note**: that Kerberos service is crucial to the authentication scheme. - -As you can see, the NFS server and the KDC are hosted in the same machine for simplicity, although you can set them up in separate machines if you have more available. Both machines are members of the `mydomain.com` domain. - -Last but not least, Kerberos requires at least a basic schema of name resolution and the [Network Time Protocol][5] service to be present in both client and server since the security of Kerberos authentication is in part based upon the timestamps of tickets. 
- -To set up name resolution, we will use the /etc/hosts file in both client and server: - -**host file – Add DNS for Domain** - ----------- - - 192.168.0.18 box1.mydomain.com box1 - 192.168.0.20 box2.mydomain.com box2 - -In RHEL 7, chrony is the default software that is used for NTP synchronization: - - # yum install chrony - # systemctl start chronyd - # systemctl enable chronyd - -To make sure chrony is actually synchronizing your system’s time with time servers you may want to issue the following command two or three times and make sure the offset is getting nearer to zero: - - # chronyc tracking - -![Synchronize Server Time with Chrony](http://www.tecmint.com/wp-content/uploads/2015/09/Synchronize-Time-with-Chrony.png) - -Synchronize Server Time with Chrony - -### Installing and Configuring Kerberos ### - -To set up the KDC, install the following packages on both server and client (omit the server package in the client): - - # yum update && yum install krb5-server krb5-workstation pam_krb5 - -Once it is installed, edit the configuration files (/etc/krb5.conf and /var/kerberos/krb5kdc/kadm5.acl) and replace all instances of example.com (lowercase and uppercase) with `mydomain.com` as follows. - -Next, enable Kerberos through the firewall and start / enable the related services. - -**Important**: nfs-secure must be started and enabled on the client as well: - - # firewall-cmd --permanent --add-service=kerberos - # systemctl start krb5kdc kadmin nfs-secure - # systemctl enable krb5kdc kadmin nfs-secure - -Now create the Kerberos database (please note that this may take a while as it requires a some level of entropy in your system. 
To speed things up, I opened another terminal and ran ping -f localhost for 30-45 seconds): - - # kdb5_util create -s - -![Create Kerberos Database](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Kerberos-Database.png) - -Create Kerberos Database - -Next, using the kadmin.local tool, create an admin principal for root: - - # kadmin.local - # addprinc root/admin - -And add the Kerberos server to the database: - - # addprinc -randkey host/box2.mydomain.com - -Same with the NFS service for both client (box1) and server (box2). Please note that in the screenshot below I forgot to do it for box1 before quitting: - - # addprinc -randkey nfs/box2.mydomain.com - # addprinc -randkey nfs/box1.mydomain.com - -And exit by typing quit and pressing Enter: - -![Add Kerberos to NFS Server](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Kerboros-for-NFS.png) - -Add Kerberos to NFS Server - -Then obtain and cache Kerberos ticket-granting ticket for root/admin: - - # kinit root/admin - # klist - -![Cache Kerberos](http://www.tecmint.com/wp-content/uploads/2015/09/Cache-kerberos-Ticket.png) - -Cache Kerberos - -The last step before actually using Kerberos is storing into a keytab file (in the server) the principals that are authorized to use Kerberos authentication: - - # kdadmin.local - # ktadd host/box2.mydomain.com - # ktadd nfs/box2.mydomain.com - # ktadd nfs/box1.mydomain.com - -Finally, mount the share and perform a write test: - - # mount -t nfs4 -o sec=krb5 box2:/nfs /mnt - # echo "Hello from Tecmint.com" > /mnt/greeting.txt - -![Mount NFS Share](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-NFS-Share.png) - -Mount NFS Share - -Let’s now unmount the share, rename the keytab file in the client (to simulate it’s not present) and try to mount the share again: - - # umount /mnt - # mv /etc/krb5.keytab /etc/krb5.keytab.orig - -![Mount Unmount Kerberos NFS Share](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Unmount-Kerberos-NFS-Share.png) - 
-Mount Unmount Kerberos NFS Share - -Now you can use the NFS share with Kerberos-based authentication. - -### Summary ### - -In this article we have explained how to set up NFS with Kerberos authentication. Since there is much more to the topic than we can cover in a single guide, feel free to check the online [Kerberos documentation][6] and since Kerberos is a bit tricky to say the least, don’t hesitate to drop us a note using the form below if you run into any issue or need help with your testing or implementation. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/setting-up-nfs-server-with-kerberos-based-authentication/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/ -[2]:http://www.tecmint.com/configure-nfs-server/ -[3]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ -[4]:http://www.tecmint.com/firewalld-rules-for-centos-7/ -[5]:http://www.tecmint.com/install-ntp-server-in-centos/ -[6]:http://web.mit.edu/kerberos/krb5-1.12/doc/admin/admin_commands/ \ No newline at end of file diff --git a/translated/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md b/translated/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md new file mode 100644 index 0000000000..5eba70cd7a --- /dev/null +++ b/translated/tech/RHCE/Part 7 - Setting Up NFS Server with Kerberos-based Authentication for Linux Clients.md @@ -0,0 +1,190 @@ +第七部分 - 在 Linux 客户端配置基于 Kerberos 身份验证的 NFS 服务器 +================================================================================ +在本系列的前一篇文章,我们回顾了[如何在可能包括多种类型操作系统的网络上配置 Samba 共享][1]。现在,如果你需要为一组类-Unix 
客户端配置文件共享,很自然地你会想到网络文件系统,或简称 NFS。
+
+
+![设置使用 Kerberos 进行身份验证的 NFS 服务器](http://www.tecmint.com/wp-content/uploads/2015/09/Setting-Kerberos-Authentication-with-NFS.jpg)
+
+RHCE 系列:第七部分 - 设置使用 Kerberos 进行身份验证的 NFS 服务器
+
+在这篇文章中我们会介绍配置基于 Kerberos 身份验证的 NFS 共享的整个流程。假设你已经配置好了一个 NFS 服务器和一个客户端。如果还没有,可以参考 [安装和配置 NFS 服务器][2] - 它列出了需要安装的依赖软件包,并解释了在进行下一步之前如何在服务器上进行初始化配置。
+
+另外,你可能还需要配置 [SELinux][3] 和 [firewalld][4] 以允许通过 NFS 进行文件共享。
+
+下面的例子假设你的 NFS 共享目录在 box2 的 /nfs:
+
+    # semanage fcontext -a -t public_content_rw_t "/nfs(/.*)?"
+    # restorecon -R /nfs
+    # setsebool -P nfs_export_all_rw on
+    # setsebool -P nfs_export_all_ro on
+
+(其中 -P 选项表示该设置在重启后仍然有效)。
+
+最后,别忘了:
+
+#### 创建 NFS 组并配置 NFS 共享目录 ####
+
+1. 新建一个名为 nfs 的组并给它添加用户 nfsnobody,然后更改 /nfs 目录的权限为 0770,组属主为 nfs。于是,nfsnobody(客户端的请求会被映射到该用户)在共享目录有写的权限,你就不需要在 /etc/exports 文件中使用 no_root_squash(译者注:设为 root_squash 意味着在访问 NFS 服务器上的文件时,客户机上的 root 用户不会被当作 root 用户来对待)。
+
+    # groupadd nfs
+    # usermod -a -G nfs nfsnobody
+    # chmod 0770 /nfs
+    # chgrp nfs /nfs
+
+2. 像下面那样更改 exports 文件(/etc/exports),只允许从 box1 使用 Kerberos 安全验证的访问(sec=krb5)。
+
+**注意**:anongid 的值设置为之前新建的组 nfs 的 GID:
+
+**exports – 添加 NFS 共享**
+
+----------
+
+    /nfs box1(rw,sec=krb5,anongid=1004)
+
+3. 重新导出(-r)所有(-a)NFS 共享。为输出添加详情(-v)是个好主意,因为它提供了发生错误时解决问题的有用信息:
+
+    # exportfs -arv
+
+4. 
重启并启用 NFS 服务器以及相关服务。注意你不需要启动 nfs-lock 和 nfs-idmapd,因为系统启动时其它服务会自动启动它们: + + # systemctl restart rpcbind nfs-server nfs-lock nfs-idmap + # systemctl enable rpcbind nfs-server + +#### 测试环境和其它前提要求 #### + +在这篇指南中我们使用下面的测试环境: + +- 客户端机器 [box1: 192.168.0.18] +- NFS / Kerberos 服务器 [box2: 192.168.0.20] (也称为密钥分发中心,简称 KDC)。 + +**注意**:Kerberos 服务对整个身份验证方案至关重要。 + +正如你看到的,为了简便,NFS 服务器和 KDC 在同一台机器上,当然如果你有更多可用机器你也可以把它们安装在不同的机器上。两台机器都在 `mydomain.com` 域。 + +最后同样重要的是,Kerberos 要求客户端和服务器中至少有基本的域名解析机制和[网络时间协议][5]服务,因为 Kerberos 身份验证的安全性有一部分基于时间戳。 + +为了配置域名解析,我们在客户端和服务器中编辑 /etc/hosts 文件: + +**host 文件 – 为域添加 DNS** + +---------- + + 192.168.0.18 box1.mydomain.com box1 + 192.168.0.20 box2.mydomain.com box2 + +在 RHEL 7 中,chrony 是用于 NTP 同步的默认软件: + + # yum install chrony + # systemctl start chronyd + # systemctl enable chronyd + +为了确保 chrony 确实在和时间服务器同步你系统的时间,你可能要输入下面的命令两到三次,确保时间偏差尽可能接近 0: + + # chronyc tracking + + +![用 Chrony 同步服务器时间](http://www.tecmint.com/wp-content/uploads/2015/09/Synchronize-Time-with-Chrony.png) + +用 Chrony 同步服务器时间 + +### 安装和配置 Kerberos ### + +要设置 KDC,首先在客户端和服务器安装下面的软件包(客户端不需要 server 软件包): + + # yum update && yum install krb5-server krb5-workstation pam_krb5 + +安装完成后,编辑配置文件(/etc/krb5.conf 和 /var/kerberos/krb5kdc/kadm5.acl),像下面那样用 `mydomain.com` 替换所有 example.com。 + +下一步,确保 Kerberos 能通过防火墙并启动/启用相关服务。 + +**重要**:客户端也必须启动和启用 nfs-secure: + + # firewall-cmd --permanent --add-service=kerberos + # systemctl start krb5kdc kadmin nfs-secure + # systemctl enable krb5kdc kadmin nfs-secure + +现在创建 Kerberos 数据库(请注意这可能会需要一点时间,因为它会和你的系统进行多次交互)。为了加速这个过程,我打开了另一个终端并运行了 ping -f localhost 30 到 45 秒: + + # kdb5_util create -s + +![创建 Kerberos 数据库](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Kerberos-Database.png) + +创建 Kerberos 数据库 + +下一步,使用 kadmin.local 工具为 root 创建管理权限: + + # kadmin.local + # addprinc root/admin + +添加 Kerberos 服务器到数据库: + + # addprinc -randkey host/box2.mydomain.com + +在客户端(box1)和服务器(box2)上对 NFS 服务同样操作。请注意下面的截图中在退出前我忘了在 box1 上进行操作: + + # addprinc -randkey 
nfs/box2.mydomain.com + # addprinc -randkey nfs/box1.mydomain.com + +输入 quit 和回车键退出: + +![添加 Kerberos 到 NFS 服务器](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Kerboros-for-NFS.png) + +添加 Kerberos 到 NFS 服务器 + +为 root/admin 获取和缓存票据授权票据(ticket-granting ticket): + + # kinit root/admin + # klist + +![缓存 Kerberos](http://www.tecmint.com/wp-content/uploads/2015/09/Cache-kerberos-Ticket.png) + +缓存 Kerberos + +真正使用 Kerberos 之前的最后一步是保存被授权使用 Kerberos 身份验证的规则到一个密钥表文件(在服务器中): + + # kadmin.local + # ktadd host/box2.mydomain.com + # ktadd nfs/box2.mydomain.com + # ktadd nfs/box1.mydomain.com + +最后,挂载共享目录并进行一个写测试: + + # mount -t nfs4 -o sec=krb5 box2:/nfs /mnt + # echo "Hello from Tecmint.com" > /mnt/greeting.txt + +![挂载 NFS 共享](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-NFS-Share.png) + +挂载 NFS 共享 + +现在让我们卸载共享,在客户端中重命名密钥表文件(模拟它不存在)然后试着再次挂载共享目录: + + # umount /mnt + # mv /etc/krb5.keytab /etc/krb5.keytab.orig + +![挂载/卸载 Kerberos NFS 共享](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Unmount-Kerberos-NFS-Share.png) + +挂载/卸载 Kerberos NFS 共享 + +现在你可以使用基于 Kerberos 身份验证的 NFS 共享了。 + +### 总结 ### + +在这篇文章中我们介绍了如何设置带 Kerberos 身份验证的 NFS。和我们在这篇指南中介绍的相比,该主题还有很多相关内容,可以在 [Kerberos 手册][6] 查看,另外至少可以说 Kerberos 有一点棘手,如果你在测试或实现中遇到了任何问题或需要帮助,别犹豫在下面的评论框中告诉我们吧。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/setting-up-nfs-server-with-kerberos-based-authentication/ + +作者:[Gabriel Cánepa][a] +译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/ +[2]:http://www.tecmint.com/configure-nfs-server/ +[3]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ +[4]:http://www.tecmint.com/firewalld-rules-for-centos-7/ 
+[5]:http://www.tecmint.com/install-ntp-server-in-centos/ +[6]:http://web.mit.edu/kerberos/krb5-1.12/doc/admin/admin_commands/ \ No newline at end of file From 9db10d3e4ce0c896200f654eefcb749824a49aa0 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 13 Sep 2015 20:14:05 +0800 Subject: [PATCH 502/697] PUB:20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management @wi-cuckoo --- ...ence on RedHat Linux Package Management.md | 348 ++++++++++++++++++ ...ence on RedHat Linux Package Management.md | 348 ------------------ 2 files changed, 348 insertions(+), 348 deletions(-) create mode 100644 published/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md delete mode 100644 translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md diff --git a/published/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md b/published/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md new file mode 100644 index 0000000000..0ca331b72b --- /dev/null +++ b/published/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md @@ -0,0 +1,348 @@ +Shilpa Nair 分享的 RedHat Linux 包管理方面的面试经验 +======================================================================== +**Shilpa Nair 刚于2015年毕业。她之后去了一家位于 Noida,Delhi 的国家新闻电视台,应聘实习生的岗位。在她去年毕业季的时候,常逛 Tecmint 寻求作业上的帮助。从那时开始,她就常去 Tecmint。** + +![Linux Interview Questions on RPM](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Interview-Questions-on-RPM.jpeg) + +*有关 RPM 方面的 Linux 面试题* + +所有的问题和回答都是 Shilpa Nair 根据回忆重写的。 + +> “大家好!我是来自 Delhi 的Shilpa Nair。我不久前才顺利毕业,正寻找一个实习的机会。在大学早期的时候,我就对 UNIX 十分喜爱,所以我也希望这个机会能适合我,满足我的兴趣。我被提问了很多问题,大部分都是关于 RedHat 包管理的基础问题。” + +下面就是我被问到的问题,和对应的回答。我仅贴出了与 RedHat GNU/Linux 包管理相关的,也是主要被提问的。 + +### 1. Linux 里如何查找一个包安装与否?假设你需要确认 ‘nano’ 有没有安装,你怎么做? 
### + +**回答**:为了确认 nano 软件包有没有安装,我们可以使用 rpm 命令,配合 -q 和 -a 选项来查询所有已安装的包 + + # rpm -qa nano + 或 + # rpm -qa | grep -i nano + + nano-2.3.1-10.el7.x86_64 + +同时包的名字必须是完整的,不完整的包名会返回到提示符,不打印任何东西,就是说这包(包名字不全)未安装。下面的例子会更好理解些: + +我们通常使用 vim 替代 vi 命令。但是,如果我们查找安装包 vi/vim 的时候,我们就会看到标准输出上没有任何结果。 + + # vi + # vim + +尽管如此,我们仍然可以像上面一样运行 vi/vim 命令来清楚地知道包有没有安装。只是因为我们不知道它的完整包名才不能找到的。如果我们不确切知道完整的文件名,我们可以使用通配符: + + # rpm -qa vim* + + vim-minimal-7.4.160-1.el7.x86_64 + +通过这种方式,我们可以获得任何软件包的信息,安装与否。 + +### 2. 你如何使用 rpm 命令安装 XYZ 软件包? ### + +**回答**:我们可以使用 rpm 命令安装任何的软件包(*.rpm),像下面这样,选项 -i(安装),-v(冗余或者显示额外的信息)和 -h(在安装过程中,打印#号显示进度)。 + + # rpm -ivh peazip-1.11-1.el6.rf.x86_64.rpm + + Preparing... ################################# [100%] + Updating / installing... + 1:peazip-1.11-1.el6.rf ################################# [100%] + +如果要升级一个早期版本的包,应加上 -U 选项,选项 -v 和 -h 可以确保我们得到用 # 号表示的冗余输出,这增加了可读性。 + +### 3. 你已经安装了一个软件包(假设是 httpd),现在你想看看软件包创建并安装的所有文件和目录,你会怎么做? ### + +**回答**:使用选项 -l(列出所有文件)和 -q(查询)列出 httpd 软件包安装的所有文件(Linux 哲学:所有的都是文件,包括目录)。 + + # rpm -ql httpd + + /etc/httpd + /etc/httpd/conf + /etc/httpd/conf.d + ... + +### 4. 假如你要移除一个软件包,叫 postfix。你会怎么做? ### + +**回答**:首先我们需要知道安装的是哪个 postfix 软件包。查找到 postfix 的完整包名后,使用 -e(擦除/卸载软件包)和 -v(冗余输出)两个选项来实现。 + + # rpm -qa postfix* + + postfix-2.10.1-6.el7.x86_64 + +然后移除 postfix,如下: + + # rpm -ev postfix-2.10.1-6.el7.x86_64 + + Preparing packages... + postfix-2:3.0.1-2.fc22.x86_64 + +### 5. 获得一个已安装包的具体信息,如版本,发行号,安装日期,大小,总结和一个简短的描述。 ### + +**回答**:我们通过使用 rpm 的选项 -qi,后面接包名,可以获得关于一个已安装包的具体信息。 + +举个例子,为了获得 openssh 包的具体信息,我需要做的就是: + + # rpm -qi openssh + + [root@tecmint tecmint]# rpm -qi openssh + Name : openssh + Version : 6.8p1 + Release : 5.fc22 + Architecture: x86_64 + Install Date: Thursday 28 May 2015 12:34:50 PM IST + Group : Applications/Internet + Size : 1542057 + License : BSD + .... + +### 6. 
假如你不确定一个指定包的配置文件在哪,比如 httpd。你如何找到所有 httpd 提供的配置文件列表和位置? ### + +**回答**: 我们需要用选项 -c 接包名,这会列出所有配置文件的名字和他们的位置。 + + # rpm -qc httpd + + /etc/httpd/conf.d/autoindex.conf + /etc/httpd/conf.d/userdir.conf + /etc/httpd/conf.d/welcome.conf + /etc/httpd/conf.modules.d/00-base.conf + /etc/httpd/conf/httpd.conf + /etc/sysconfig/httpd + +相似地,我们可以列出所有相关的文档文件,如下: + + # rpm -qd httpd + + /usr/share/doc/httpd/ABOUT_APACHE + /usr/share/doc/httpd/CHANGES + /usr/share/doc/httpd/LICENSE + ... + +我们也可以列出所有相关的许可证文件,如下: + + # rpm -qL openssh + + /usr/share/licenses/openssh/LICENCE + +忘了说明上面的选项 -d 和 -L 分别表示 “文档” 和 “许可证”,抱歉。 + +### 7. 你找到了一个配置文件,位于‘/usr/share/alsa/cards/AACI.conf’,现在你不确定该文件属于哪个包。你如何查找出包的名字? ### + +**回答**:当一个包被安装后,相关的信息就存储在了数据库里。所以使用选项 -qf(-f 查询包拥有的文件)很容易追踪谁提供了上述的包。 + + # rpm -qf /usr/share/alsa/cards/AACI.conf + alsa-lib-1.0.28-2.el7.x86_64 + +类似地,我们可以查找(谁提供的)关于任何子包,文档和许可证文件的信息。 + +### 8. 你如何使用 rpm 查找最近安装的软件列表? ### + +**回答**:如刚刚说的,每一样被安装的文件都记录在了数据库里。所以这并不难,通过查询 rpm 的数据库,找到最近安装软件的列表。 + +我们通过运行下面的命令,使用选项 --last(打印出最近安装的软件)达到目的。 + + # rpm -qa --last + +上面的命令会打印出所有安装的软件,最近安装的软件在列表的顶部。 + +如果我们关心的是找出特定的包,我们可以使用 grep 命令从列表中匹配包(假设是 sqlite ),简单如下: + + # rpm -qa --last | grep -i sqlite + + sqlite-3.8.10.2-1.fc22.x86_64 Thursday 18 June 2015 05:05:43 PM IST + +我们也可以获得10个最近安装的软件列表,简单如下: + + # rpm -qa --last | head + +我们可以重定义一下,输出想要的结果,简单如下: + + # rpm -qa --last | head -n 2 + +上面的命令中,-n 代表数目,后面接一个常数值。该命令是打印2个最近安装的软件的列表。 + +### 9. 安装一个包之前,如果你要检查其依赖,你会怎么做? ### + +**回答**:检查一个 rpm 包(XYZ.rpm)的依赖,我们可以使用选项 -q(查询包),-p(指定包名)和 -R(查询/列出该包依赖的包,嗯,就是依赖)。 + + # rpm -qpR gedit-3.16.1-1.fc22.i686.rpm + + /bin/sh + /usr/bin/env + glib2(x86-32) >= 2.40.0 + gsettings-desktop-schemas + gtk3(x86-32) >= 3.16 + gtksourceview3(x86-32) >= 3.16 + gvfs + libX11.so.6 + ... + +### 10. rpm 是不是一个前端的包管理工具呢? 
### + +**回答**:**不是!**rpm 是一个后端管理工具,适用于基于 RPM(此处指 Redhat Package Management)的 Linux 发行版。 + +[YUM][1],全称 Yellowdog Updater Modified,是一个 RPM 的前端工具。YUM 命令自动完成所有工作,包括解决依赖和其他一切事务。 + +最近,[DNF][2](YUM 命令升级版)在 Fedora 22 发行版中取代了 YUM。尽管 YUM 仍然可以在 RHEL 和 CentOS 平台使用,我们也可以安装 dnf,与 YUM 命令共存使用。据说 DNF 较于 YUM 有很多改进。 + +知道更多总是好的,保持自我更新。现在我们移步到前端部分来谈谈。 + +### 11. 你如何列出一个系统上面所有可用的仓库列表? ### + +**回答**:简单地使用下面的命令,我们就可以列出一个系统上所有可用的仓库列表。 + + # yum repolist + 或 + # dnf repolist + + Last metadata expiration check performed 0:30:03 ago on Mon Jun 22 16:50:00 2015. + repo id repo name status + *fedora Fedora 22 - x86_64 44,762 + ozonos Repository for Ozon OS 61 + *updates Fedora 22 - x86_64 - Updates + +上面的命令仅会列出可用的仓库。如果你需要列出所有的仓库,不管可用与否,可以这样做。 + + # yum repolist all + 或 + # dnf repolist all + + Last metadata expiration check performed 0:29:45 ago on Mon Jun 22 16:50:00 2015. + repo id repo name status + *fedora Fedora 22 - x86_64 enabled: 44,762 + fedora-debuginfo Fedora 22 - x86_64 - Debug disabled + fedora-source Fedora 22 - Source disabled + ozonos Repository for Ozon OS enabled: 61 + *updates Fedora 22 - x86_64 - Updates enabled: 5,018 + updates-debuginfo Fedora 22 - x86_64 - Updates - Debug + +### 12. 你如何列出一个系统上所有可用并且安装了的包? ### + +**回答**:列出一个系统上所有可用的包,我们可以这样做: + + # yum list available + 或 + # dnf list available + + Last metadata expiration check performed 0:34:09 ago on Mon Jun 22 16:50:00 2015. + Available Packages + 0ad.x86_64 0.0.18-1.fc22 fedora + 0ad-data.noarch 0.0.18-1.fc22 fedora + 0install.x86_64 2.6.1-2.fc21 fedora + 0xFFFF.x86_64 0.3.9-11.fc22 fedora + 2048-cli.x86_64 0.9-4.git20141214.723738c.fc22 fedora + 2048-cli-nocurses.x86_64 0.9-4.git20141214.723738c.fc22 fedora + .... + +而列出一个系统上所有已安装的包,我们可以这样做。 + + # yum list installed + 或 + # dnf list installed + + Last metadata expiration check performed 0:34:30 ago on Mon Jun 22 16:50:00 2015. 
+ Installed Packages + GeoIP.x86_64 1.6.5-1.fc22 @System + GeoIP-GeoLite-data.noarch 2015.05-1.fc22 @System + NetworkManager.x86_64 1:1.0.2-1.fc22 @System + NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System + aajohan-comfortaa-fonts.noarch 2.004-4.fc22 @System + .... + +而要同时满足两个要求的时候,我们可以这样做。 + + # yum list + 或 + # dnf list + + Last metadata expiration check performed 0:32:56 ago on Mon Jun 22 16:50:00 2015. + Installed Packages + GeoIP.x86_64 1.6.5-1.fc22 @System + GeoIP-GeoLite-data.noarch 2015.05-1.fc22 @System + NetworkManager.x86_64 1:1.0.2-1.fc22 @System + NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System + aajohan-comfortaa-fonts.noarch 2.004-4.fc22 @System + acl.x86_64 2.2.52-7.fc22 @System + .... + +### 13. 你会怎么在一个系统上面使用 YUM 或 DNF 分别安装和升级一个包与一组包? ### + +**回答**:安装一个包(假设是 nano),我们可以这样做, + + # yum install nano + +而安装一组包(假设是 Haskell),我们可以这样做, + + # yum groupinstall 'haskell' + +升级一个包(还是 nano),我们可以这样做, + + # yum update nano + +而为了升级一组包(还是 haskell),我们可以这样做, + + # yum groupupdate 'haskell' + +### 14. 你会如何同步一个系统上面的所有安装软件到稳定发行版? ### + +**回答**:我们可以将一个系统上(假设是 CentOS 或者 Fedora)的所有包同步到稳定发行版,如下: + + # yum distro-sync [在 CentOS/ RHEL] + 或 + # dnf distro-sync [在 Fedora 20之后版本] + +似乎来面试之前你做了不少功课,很好!在进一步交谈前,我还想问一两个问题。 + +### 15. 
你对 YUM 本地仓库熟悉吗?你尝试过建立一个本地 YUM 仓库吗?让我们简单看看你会怎么建立一个本地 YUM 仓库。 ### + +**回答**:首先,感谢你的夸奖。回到问题,我必须承认我对本地 YUM 仓库十分熟悉,并且在我的本地主机上也部署过,作为测试用。 + +1、 为了建立本地 YUM 仓库,我们需要安装下面三个包: + + # yum install deltarpm python-deltarpm createrepo + +2、 新建一个目录(假设 /home/$USER/rpm),然后复制 RedHat/CentOS DVD 上的 RPM 包到这个文件夹下 + + # mkdir /home/$USER/rpm + # cp /path/to/rpm/on/DVD/*.rpm /home/$USER/rpm + +3、 新建基本的库头文件如下。 + + # createrepo -v /home/$USER/rpm + +4、 在路径 /etc/yum.repos.d 下创建一个 .repo 文件(如 abc.repo): + + cd /etc/yum.repos.d && cat << EOF > abc.repo + [local-installation] + name=yum-local + baseurl=file:///home/$USER/rpm + enabled=1 + gpgcheck=0 + EOF + +**重要**:用你的用户名替换掉 $USER。 + +以上就是创建一个本地 YUM 仓库所要做的全部工作。我们现在可以从这里安装软件了,相对快一些,安全一些,并且最重要的是不需要 Internet 连接。 + +好了!面试过程很愉快。我已经问完了。我会将你推荐给 HR。你是一个年轻且十分聪明的候选者,我们很愿意你加入进来。如果你有任何问题,你可以问我。 + +**我**:谢谢,这确实是一次愉快的面试,我感到今天非常幸运,可以搞定这次面试... + +显然,不会在这里结束。我问了很多问题,比如他们正在做的项目。我会担任什么角色,负责什么,,,balabalabala + +小伙伴们,这之后的 3 天会经过 HR 轮,到时候所有问题也会被写成文档。希望我当时表现不错。感谢你们所有的祝福。 + +谢谢伙伴们和 Tecmint,花时间来编辑我的面试经历。我相信 Tecmint 好伙伴们做了很大的努力,必须要赞一个。当我们与他人分享我们的经历的时候,其他人从我们这里知道了更多,而我们自己则发现了自己的不足。 + +这增加了我们的信心。如果你最近也有任何类似的面试经历,别自己藏着。分享出来!让我们所有人都知道。你可以使用如下的表单来与我们分享你的经历。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-rpm-package-management-interview-questions/ + +作者:[Avishek Kumar][a] +译者:[wi-cuckoo](https://github.com/wi-cuckoo) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[2]:https://linux.cn/article-5718-1.html diff --git a/translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md b/translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md deleted file mode 100644 index f095a31c65..0000000000 --- 
a/translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md +++ /dev/null @@ -1,348 +0,0 @@ -Shilpa Nair 分享了她面试 RedHat Linux 包管理方面的经验 -======================================================================== -**Shilpa Nair 刚于2015年毕业。她之后去了一家位于 Noida,Delhi 的国家新闻电视台,应聘实习生的岗位。在她去年毕业季的时候,常逛 Tecmint 寻求作业上的帮助。从那时开始,她就常去 Tecmint。** - -![Linux Interview Questions on RPM](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Interview-Questions-on-RPM.jpeg) - -有关 RPM 方面的 Linux 面试题 - -所有的问题和回答都是 Shilpa Nair 根据回忆重写的。 - -> “大家好!我是来自 Delhi 的Shilpa Nair。我不久前才顺利毕业,正寻找一个实习的机会。在大学早期的时候,我就对 UNIX 十分喜爱,所以我也希望这个机会能适合我,满足我的兴趣。我被提问了很多问题,大部分都是关于 RedHat 包管理的基础问题。” - -下面就是我被问到的问题,和对应的回答。我仅贴出了与 RedHat GNU/Linux 包管理相关的,也是主要被提问的。 - -### 1,里如何查找一个包安装与否?假设你需要确认 ‘nano’ 有没有安装,你怎么做? ### - -> **回答**:为了确认 nano 软件包有没有安装,我们可以使用 rpm 命令,配合 -q 和 -a 选项来查询所有已安装的包 -> -> # rpm -qa nano -> OR -> # rpm -qa | grep -i nano -> -> nano-2.3.1-10.el7.x86_64 -> -> 同时包的名字必须是完成的,不完整的包名返回提示,不打印任何东西,就是说这包(包名字不全)未安装。下面的例子会更好理解些: -> -> 我们通常使用 vim 替代 vi 命令。当时如果我们查找安装包 vi/vim 的时候,我们就会看到标准输出上没有任何结果。 -> -> # vi -> # vim -> -> 尽管如此,我们仍然可以通过使用 vi/vim 命令来清楚地知道包有没有安装。Here is ... name(这句不知道)。如果我们不确切知道完整的文件名,我们可以使用通配符: -> -> # rpm -qa vim* -> -> vim-minimal-7.4.160-1.el7.x86_64 -> -> 通过这种方式,我们可以获得任何软件包的信息,安装与否。 - -### 2. 你如何使用 rpm 命令安装 XYZ 软件包? ### - -> **回答**:我们可以使用 rpm 命令安装任何的软件包(*.rpm),像下面这样,选项 -i(install),-v(冗余或者显示额外的信息)和 -h(打印#号显示进度,在安装过程中)。 -> -> # rpm -ivh peazip-1.11-1.el6.rf.x86_64.rpm -> -> Preparing... ################################# [100%] -> Updating / installing... -> 1:peazip-1.11-1.el6.rf ################################# [100%] -> -> 如果要升级一个早期版本的包,应加上 -U 选项,选项 -v 和 -h 可以确保我们得到用 # 号表示的冗余输出,这增加了可读性。 - -### 3. 你已经安装了一个软件包(假设是 httpd),现在你想看看软件包创建并安装的所有文件和目录,你会怎么做? ### - -> **回答**:使用选项 -l(列出所有文件)和 -q(查询)列出 httpd 软件包安装的所有文件(Linux哲学:所有的都是文件,包括目录)。 -> -> # rpm -ql httpd -> -> /etc/httpd -> /etc/httpd/conf -> /etc/httpd/conf.d -> ... - -### 4. 假如你要移除一个软件包,叫 postfix。你会怎么做? 
### - -> **回答**:首先我们需要知道什么包安装了 postfix。查找安装 postfix 的包名后,使用 -e(擦除/卸载软件包)和 -v(冗余输出)两个选项来实现。 -> -> # rpm -qa postfix* -> -> postfix-2.10.1-6.el7.x86_64 -> -> 然后移除 postfix,如下: -> -> # rpm -ev postfix-2.10.1-6.el7.x86_64 -> -> Preparing packages... -> postfix-2:3.0.1-2.fc22.x86_64 - -### 5. 获得一个已安装包的具体信息,如版本,发行号,安装日期,大小,总结和一个间短的描述。 ### - -> **回答**:我们通过使用 rpm 的选项 -qi,后面接包名,可以获得关于一个已安装包的具体信息。 -> -> 举个例子,为了获得 openssh 包的具体信息,我需要做的就是: -> -> # rpm -qi openssh -> -> [root@tecmint tecmint]# rpm -qi openssh -> Name : openssh -> Version : 6.8p1 -> Release : 5.fc22 -> Architecture: x86_64 -> Install Date: Thursday 28 May 2015 12:34:50 PM IST -> Group : Applications/Internet -> Size : 1542057 -> License : BSD -> .... - -### 6. 假如你不确定一个指定包的配置文件在哪,比如 httpd。你如何找到所有 httpd 提供的配置文件列表和位置。 ### - -> **回答**: 我们需要用选项 -c 接包名,这会列出所有配置文件的名字和他们的位置。 -> -> # rpm -qc httpd -> -> /etc/httpd/conf.d/autoindex.conf -> /etc/httpd/conf.d/userdir.conf -> /etc/httpd/conf.d/welcome.conf -> /etc/httpd/conf.modules.d/00-base.conf -> /etc/httpd/conf/httpd.conf -> /etc/sysconfig/httpd -> -> 相似地,我们可以列出所有相关的文档文件,如下: -> -> # rpm -qd httpd -> -> /usr/share/doc/httpd/ABOUT_APACHE -> /usr/share/doc/httpd/CHANGES -> /usr/share/doc/httpd/LICENSE -> ... -> -> 我们也可以列出所有相关的证书文件,如下: -> -> # rpm -qL openssh -> -> /usr/share/licenses/openssh/LICENCE -> -> 忘了说明上面的选项 -d 和 -L 分别表示 “文档” 和 “证书”,抱歉。 - -### 7. 你进入了一个配置文件,位于‘/usr/share/alsa/cards/AACI.conf’,现在你不确定该文件属于哪个包。你如何查找出包的名字? ### - -> **回答**:当一个包被安装后,相关的信息就存储在了数据库里。所以使用选项 -qf(-f 查询包拥有的文件)很容易追踪谁提供了上述的包。 -> -> # rpm -qf /usr/share/alsa/cards/AACI.conf -> alsa-lib-1.0.28-2.el7.x86_64 -> -> 类似地,我们可以查找(谁提供的)关于任何子包,文档和证书文件的信息。 - -### 8. 你如何使用 rpm 查找最近安装的软件列表? 
### - -> **回答**:如刚刚说的,每一样被安装的文件都记录在了数据库里。所以这并不难,通过查询 rpm 的数据库,找到最近安装软件的列表。 -> -> 我们通过运行下面的命令,使用选项 -last(打印出最近安装的软件)达到目的。 -> -> # rpm -qa --last -> -> 上面的命令会打印出所有安装的软件,最近一次安装的软件在列表的顶部。 -> -> 如果我们关心的是找出特定的包,我们可以使用 grep 命令从列表中匹配包(假设是 sqlite ),简单如下: -> -> # rpm -qa --last | grep -i sqlite -> -> sqlite-3.8.10.2-1.fc22.x86_64 Thursday 18 June 2015 05:05:43 PM IST -> -> 我们也可以获得10个最近安装的软件列表,简单如下: -> -> # rpm -qa --last | head -> -> 我们可以重定义一下,输出想要的结果,简单如下: -> -> # rpm -qa --last | head -n 2 -> -> 上面的命令中,-n 代表数目,后面接一个常数值。该命令是打印2个最近安装的软件的列表。 - -### 9. 安装一个包之前,你如果要检查其依赖。你会怎么做? ### - -> **回答**:检查一个 rpm 包(XYZ.rpm)的依赖,我们可以使用选项 -q(查询包),-p(指定包名)和 -R(查询/列出该包依赖的包,嗯,就是依赖)。 -> -> # rpm -qpR gedit-3.16.1-1.fc22.i686.rpm -> -> /bin/sh -> /usr/bin/env -> glib2(x86-32) >= 2.40.0 -> gsettings-desktop-schemas -> gtk3(x86-32) >= 3.16 -> gtksourceview3(x86-32) >= 3.16 -> gvfs -> libX11.so.6 -> ... - -### 10. rpm 是不是一个前端的包管理工具呢? ### - -> **回答**:不是!rpm 是一个后端管理工具,适用于基于 Linux 发行版的 RPM (此处指 Redhat Package Management)。 -> -> [YUM][1],全称 Yellowdog Updater Modified,是一个 RPM 的前端工具。YUM 命令自动完成所有工作,包括解决依赖和其他一切事务。 -> -> 最近,[DNF][2](YUM命令升级版)在Fedora 22发行版中取代了 YUM。尽管 YUM 仍然可以在 RHEL 和 CentOS 平台使用,我们也可以安装 dnf,与 YUM 命令共存使用。据说 DNF 较于 YUM 有很多提高。 -> -> 知道更多总是好的,保持自我更新。现在我们移步到前端部分来谈谈。 - -### 11. 你如何列出一个系统上面所有可用的仓库列表。 ### - -> **回答**:简单地使用下面的命令,我们就可以列出一个系统上所有可用的仓库列表。 -> -> # yum repolist -> 或 -> # dnf repolist -> -> Last metadata expiration check performed 0:30:03 ago on Mon Jun 22 16:50:00 2015. -> repo id repo name status -> *fedora Fedora 22 - x86_64 44,762 -> ozonos Repository for Ozon OS 61 -> *updates Fedora 22 - x86_64 - Updates -> -> 上面的命令仅会列出可用的仓库。如果你需要列出所有的仓库,不管可用与否,可以这样做。 -> -> # yum repolist all -> or -> # dnf repolist all -> -> Last metadata expiration check performed 0:29:45 ago on Mon Jun 22 16:50:00 2015. 
-> repo id repo name status -> *fedora Fedora 22 - x86_64 enabled: 44,762 -> fedora-debuginfo Fedora 22 - x86_64 - Debug disabled -> fedora-source Fedora 22 - Source disabled -> ozonos Repository for Ozon OS enabled: 61 -> *updates Fedora 22 - x86_64 - Updates enabled: 5,018 -> updates-debuginfo Fedora 22 - x86_64 - Updates - Debug - -### 12. 你如何列出一个系统上所有可用并且安装了的包? ### - -> **回答**:列出一个系统上所有可用的包,我们可以这样做: -> -> # yum list available -> 或 -> # dnf list available -> -> ast metadata expiration check performed 0:34:09 ago on Mon Jun 22 16:50:00 2015. -> Available Packages -> 0ad.x86_64 0.0.18-1.fc22 fedora -> 0ad-data.noarch 0.0.18-1.fc22 fedora -> 0install.x86_64 2.6.1-2.fc21 fedora -> 0xFFFF.x86_64 0.3.9-11.fc22 fedora -> 2048-cli.x86_64 0.9-4.git20141214.723738c.fc22 fedora -> 2048-cli-nocurses.x86_64 0.9-4.git20141214.723738c.fc22 fedora -> .... -> -> 而列出一个系统上所有已安装的包,我们可以这样做。 -> -> # yum list installed -> or -> # dnf list installed -> -> Last metadata expiration check performed 0:34:30 ago on Mon Jun 22 16:50:00 2015. -> Installed Packages -> GeoIP.x86_64 1.6.5-1.fc22 @System -> GeoIP-GeoLite-data.noarch 2015.05-1.fc22 @System -> NetworkManager.x86_64 1:1.0.2-1.fc22 @System -> NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System -> aajohan-comfortaa-fonts.noarch 2.004-4.fc22 @System -> .... -> -> 而要同时满足两个要求的时候,我们可以这样做。 -> -> # yum list -> 或 -> # dnf list -> -> Last metadata expiration check performed 0:32:56 ago on Mon Jun 22 16:50:00 2015. -> Installed Packages -> GeoIP.x86_64 1.6.5-1.fc22 @System -> GeoIP-GeoLite-data.noarch 2015.05-1.fc22 @System -> NetworkManager.x86_64 1:1.0.2-1.fc22 @System -> NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System -> aajohan-comfortaa-fonts.noarch 2.004-4.fc22 @System -> acl.x86_64 2.2.52-7.fc22 @System -> .... - -### 13. 你会怎么分别安装和升级一个包与一组包,在一个系统上面使用 YUM/DNF? 
### - -> **回答**:安装一个包(假设是 nano),我们可以这样做, -> -> # yum install nano -> -> 而安装一组包(假设是 Haskell),我们可以这样做, -> -> # yum groupinstall 'haskell' -> -> 升级一个包(还是 nano),我们可以这样做, -> -> # yum update nano -> -> 而为了升级一组包(还是 haskell),我们可以这样做, -> -> # yum groupupdate 'haskell' - -### 14. 你会如何同步一个系统上面的所有安装软件到稳定发行版? ### - -> **回答**:我们可以一个系统上(假设是 CentOS 或者 Fedora)的所有包到稳定发行版,如下, -> -> # yum distro-sync [On CentOS/ RHEL] -> 或 -> # dnf distro-sync [On Fedora 20之后版本] - -似乎来面试之前你做了相当不多的功课,很好!在进一步交谈前,我还想问一两个问题。 - -### 15. 你对 YUM 本地仓库熟悉吗?你尝试过建立一个本地 YUM 仓库吗?让我们简单看看你会怎么建立一个本地 YUM 仓库。 ### - -> **回答**:首先,感谢你的夸奖。回到问题,我必须承认我对本地 YUM 仓库十分熟悉,并且在我的本地主机上也部署过,作为测试用。 -> -> 1. 为了建立本地 YUM 仓库,我们需要安装下面三个包: -> -> # yum install deltarpm python-deltarpm createrepo -> -> 2. 新建一个目录(假设 /home/$USER/rpm),然后复制 RedHat/CentOS DVD 上的 RPM 包到这个文件夹下 -> -> # mkdir /home/$USER/rpm -> # cp /path/to/rpm/on/DVD/*.rpm /home/$USER/rpm -> -> 3. 新建基本的库头文件如下。 -> -> # createrepo -v /home/$USER/rpm -> -> 4. 在路径 /etc/yum.repo.d 下创建一个 .repo 文件(如 abc.repo): -> -> cd /etc/yum.repos.d && cat << EOF > abc.repo -> [local-installation]name=yum-local -> baseurl=file:///home/$USER/rpm -> enabled=1 -> gpgcheck=0 -> EOF - -**重要**:用你的用户名替换掉 $USER。 - -以上就是创建一个本地 YUM 仓库所要做的全部工作。我们现在可以从这里安装软件了,相对快一些,安全一些,并且最重要的是不需要 Internet 连接。 - -好了!面试过程很愉快。我已经问完了。我会将你推荐给 HR。你是一个年轻且十分聪明的候选者,我们很愿意你加入进来。如果你有任何问题,你可以问我。 - -**我**:谢谢,这确实是一次愉快的面试,我感到非常幸运今天,然后这次面试就毁了。。。 - -显然,不会在这里结束。我问了很多问题,比如他们正在做的项目。我会担任什么角色,负责什么,,,balabalabala - -小伙伴们,3天以前 HR 轮的所有问题到时候也会被写成文档。希望我当时表现不错。感谢你们所有的祝福。 - -谢谢伙伴们和 Tecmint,花时间来编辑我的面试经历。我相信 Tecmint 好伙伴们做了很大的努力,必要要赞一个。当我们与他人分享我们的经历的时候,其他人从我们这里知道了更多,而我们自己则发现了自己的不足。 - -这增加了我们的信心。如果你最近也有任何类似的面试经历,别自己蔵着。分享出来!让我们所有人都知道。你可以使用如下的格式来与我们分享你的经历。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-rpm-package-management-interview-questions/ - -作者:[Avishek Kumar][a] -译者:[wi-cuckoo](https://github.com/wi-cuckoo) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ From f267c593a717152240e6eae44b9f92cab154f3cb Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 13 Sep 2015 20:46:49 +0800 Subject: [PATCH 503/697] PUB:20150827 Xtreme Download Manager Updated With Fresh GUI @mr-ping --- ...me Download Manager Updated With Fresh GUI.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) rename {translated/share => published}/20150827 Xtreme Download Manager Updated With Fresh GUI.md (87%) diff --git a/translated/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md b/published/20150827 Xtreme Download Manager Updated With Fresh GUI.md similarity index 87% rename from translated/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md rename to published/20150827 Xtreme Download Manager Updated With Fresh GUI.md index d9ab3ab9f3..bd55b6fb66 100644 --- a/translated/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md +++ b/published/20150827 Xtreme Download Manager Updated With Fresh GUI.md @@ -1,4 +1,4 @@ -Xtreme下载管理器升级全新用户界面 +Xtreme 下载管理器升级带来全新用户界面 ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-Linux.jpg) @@ -6,11 +6,11 @@ Xtreme下载管理器升级全新用户界面 Xtreme 下载管理器,也被称作 XDM 或 XDMAN,它是一个跨平台的下载管理器,可以用于 Linux、Windows 和 Mac OS X 系统之上。同时它兼容于主流的浏览器,如 Chrome, Firefox, Safari 等,因此当你从浏览器下载东西的时候可以直接使用 XDM 下载。 -当你的网络连接超慢并且需要管理下载文件的时候,像 XDM 这种软件可以帮到你大忙。例如说你在一个慢的要死的网络速度下下载一个超大文件, XDM 可以帮助你暂停并且继续下载。 +当你的网络连接超慢并且需要管理下载文件的时候,像 XDM 这种软件可以帮到你大忙。例如说你在一个慢的要死的网络速度下下载一个超大文件,或者你想要暂停和恢复下载的话, XDM 可以帮助你。 XDM 的主要功能: -- 暂停和继续下载 +- 暂停和恢复下载 - [从 YouTube 下载视频][3],其他视频网站同样适用 - 强制聚合 - 下载加速 @@ -23,11 
+23,11 @@ XDM 的主要功能: ![Old XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-700x400_c.jpg) -老版本XDM +*老版本XDM* ![New XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme_Download_Manager.png) -新版本XDM +*新版本XDM* ### 在基于 Ubuntu 的 Linux 发行版上安装 Xtreme下载管理器 ### @@ -48,15 +48,15 @@ XDM 的主要功能: 对于其他Linux发行版,可以通过以下连接下载: -- [Download Xtreme Download Manager][4] +- [下载 Xtreme 下载管理器][4] -------------------------------------------------------------------------------- via: http://itsfoss.com/xtreme-download-manager-install/ 作者:[Abhishek][a] -译者:[译者ID](https://github.com/mr-ping) -校对:[校对者ID](https://github.com/校对者ID) +译者:[mr-ping](https://github.com/mr-ping) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 86144a1ec0dc8b14d55eb215ae93365ccec75e63 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 13 Sep 2015 20:54:08 +0800 Subject: [PATCH 504/697] PUB:RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7 @xiqingongzi --- ...A Series--Part 03--How to Manage Users and Groups in RHEL 7.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech/RHCSA => published}/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md (100%) diff --git a/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md b/published/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md similarity index 100% rename from translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md rename to published/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md From 6c394ac2102bb1fb16239aa84133151e97e36cb9 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 13 Sep 2015 21:20:46 +0800 Subject: [PATCH 505/697] PUB:20150816 How to migrate MySQL to MariaDB on Linux @strugglingyouth --- ...ow to migrate MySQL to MariaDB on Linux.md | 23 +++++++++---------- 1 file 
changed, 11 insertions(+), 12 deletions(-) rename {translated/tech => published}/20150816 How to migrate MySQL to MariaDB on Linux.md (70%) diff --git a/translated/tech/20150816 How to migrate MySQL to MariaDB on Linux.md b/published/20150816 How to migrate MySQL to MariaDB on Linux.md similarity index 70% rename from translated/tech/20150816 How to migrate MySQL to MariaDB on Linux.md rename to published/20150816 How to migrate MySQL to MariaDB on Linux.md index 70856ec874..40818ae8a5 100644 --- a/translated/tech/20150816 How to migrate MySQL to MariaDB on Linux.md +++ b/published/20150816 How to migrate MySQL to MariaDB on Linux.md @@ -1,10 +1,9 @@ - 在 Linux 中怎样将 MySQL 迁移到 MariaDB 上 ================================================================================ -自从甲骨文收购 MySQL 后,很多 MySQL 的开发者和用户放弃了 MySQL 由于甲骨文对 MySQL 的开发和维护更多倾向于闭门的立场。在社区驱动下,促使更多人移到 MySQL 的另一个分支中,叫 MariaDB。在原有 MySQL 开发人员的带领下,MariaDB 的开发遵循开源的理念,并确保 [它的二进制格式与 MySQL 兼容][1]。Linux 发行版如 Red Hat 家族(Fedora,CentOS,RHEL),Ubuntu 和Mint,openSUSE 和 Debian 已经开始使用,并支持 MariaDB 作为 MySQL 的简易替换品。 +自从甲骨文收购 MySQL 后,由于甲骨文对 MySQL 的开发和维护更多倾向于闭门的立场,很多 MySQL 的开发者和用户放弃了 MySQL。在社区驱动下,促使更多人移到 MySQL 的另一个叫 MariaDB 的分支。在原有 MySQL 开发人员的带领下,MariaDB 的开发遵循开源的理念,并确保[它的二进制格式与 MySQL 兼容][1]。Linux 发行版如 Red Hat 家族(Fedora,CentOS,RHEL),Ubuntu 和 Mint,openSUSE 和 Debian 已经开始使用,并支持 MariaDB 作为 MySQL 的直接替换品。 -如果想要将 MySQL 中的数据库迁移到 MariaDB 中,这篇文章就是你所期待的。幸运的是,由于他们的二进制兼容性,MySQL-to-MariaDB 迁移过程是非常简单的。如果你按照下面的步骤,将 MySQL 迁移到 MariaDB 会是无痛的。 +如果你想要将 MySQL 中的数据库迁移到 MariaDB 中,这篇文章就是你所期待的。幸运的是,由于他们的二进制兼容性,MySQL-to-MariaDB 迁移过程是非常简单的。如果你按照下面的步骤,将 MySQL 迁移到 MariaDB 会是无痛的。 ### 准备 MySQL 数据库和表 ### @@ -69,7 +68,7 @@ ### 安装 MariaDB ### -在 CentOS/RHEL 7和Ubuntu(14.04或更高版本)上,最新的 MariaDB 包含在其官方源。在 Fedora 上,自19版本后 MariaDB 已经替代了 MySQL。如果你使用的是旧版本或 LTS 类型如 Ubuntu 13.10 或更早的,你仍然可以通过添加其官方仓库来安装 MariaDB。 +在 CentOS/RHEL 7和Ubuntu(14.04或更高版本)上,最新的 MariaDB 已经包含在其官方源。在 Fedora 上,自19 版本后 MariaDB 已经替代了 MySQL。如果你使用的是旧版本或 LTS 类型如 Ubuntu 13.10 或更早的,你仍然可以通过添加其官方仓库来安装 MariaDB。 [MariaDB 
网站][2] 提供了一个在线工具帮助你依据你的 Linux 发行版中来添加 MariaDB 的官方仓库。此工具为 openSUSE, Arch Linux, Mageia, Fedora, CentOS, RedHat, Mint, Ubuntu, 和 Debian 提供了 MariaDB 的官方仓库. @@ -103,7 +102,7 @@ $ sudo yum install MariaDB-server MariaDB-client -安装了所有必要的软件包后,你可能会被要求为 root 用户创建一个新密码。设置 root 的密码后,别忘了恢复备份的 my.cnf 文件。 +安装了所有必要的软件包后,你可能会被要求为 MariaDB 的 root 用户创建一个新密码。设置 root 的密码后,别忘了恢复备份的 my.cnf 文件。 $ sudo cp /opt/my.cnf /etc/mysql/ @@ -111,7 +110,7 @@ $ sudo service mariadb start -或者: +或: $ sudo systemctl start mariadb @@ -141,13 +140,13 @@ ### 结论 ### -如你在本教程中看到的,MySQL-to-MariaDB 的迁移并不难。MariaDB 相比 MySQL 有很多新的功能,你应该知道的。至于配置方面,在我的测试情况下,我只是将我旧的 MySQL 配置文件(my.cnf)作为 MariaDB 的配置文件,导入过程完全没有出现任何问题。对于配置文件,我建议你在迁移之前请仔细阅读MariaDB 配置选项的文件,特别是如果你正在使用 MySQL 的特殊配置。 +如你在本教程中看到的,MySQL-to-MariaDB 的迁移并不难。你应该知道,MariaDB 相比 MySQL 有很多新的功能。至于配置方面,在我的测试情况下,我只是将我旧的 MySQL 配置文件(my.cnf)作为 MariaDB 的配置文件,导入过程完全没有出现任何问题。对于配置文件,我建议你在迁移之前请仔细阅读 MariaDB 配置选项的文件,特别是如果你正在使用 MySQL 的特定配置。 -如果你正在运行更复杂的配置有海量的数据库和表,包括群集或主从复制,看一看 Mozilla IT 和 Operations 团队的 [更详细的指南][3] ,或者 [官方的 MariaDB 文档][4]。 +如果你正在运行有海量的表、包括群集或主从复制的数据库的复杂配置,看一看 Mozilla IT 和 Operations 团队的 [更详细的指南][3] ,或者 [官方的 MariaDB 文档][4]。 ### 故障排除 ### -1.在运行 mysqldump 命令备份数据库时出现以下错误。 +1、 在运行 mysqldump 命令备份数据库时出现以下错误。 $ mysqldump --all-databases --user=root --password --master-data > backupdb.sql @@ -155,7 +154,7 @@ mysqldump: Error: Binlogging on server not active -通过使用 "--master-data",你要在导出的输出中包含二进制日志信息,这对于数据库的复制和恢复是有用的。但是,二进制日志未在 MySQL 服务器启用。要解决这个错误,修改 my.cnf 文件,并在 [mysqld] 部分添加下面的选项。 +通过使用 "--master-data",你可以在导出的输出中包含二进制日志信息,这对于数据库的复制和恢复是有用的。但是,二进制日志未在 MySQL 服务器启用。要解决这个错误,修改 my.cnf 文件,并在 [mysqld] 部分添加下面的选项。 log-bin=mysql-bin @@ -176,8 +175,8 @@ via: http://xmodulo.com/migrate-mysql-to-mariadb-linux.html 作者:[Kristophorus Hadiono][a] -译者:[strugglingyouth](https://github.com/译者ID) -校对:[strugglingyouth](https://github.com/校对者ID) +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 64f540fec2d59111642016a8ad294dee5644eb4b Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 13 Sep 2015 22:47:06 +0800 Subject: [PATCH 506/697] PUB:20141223 Defending the Free Linux World MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @H-mudcup 这篇好难啊,尤其最后一段有句就是不明白。。 --- ...20141223 Defending the Free Linux World.md | 125 +++++++++++++++++ ...20141223 Defending the Free Linux World.md | 127 ------------------ 2 files changed, 125 insertions(+), 127 deletions(-) create mode 100644 published/20141223 Defending the Free Linux World.md delete mode 100644 translated/talk/20141223 Defending the Free Linux World.md diff --git a/published/20141223 Defending the Free Linux World.md b/published/20141223 Defending the Free Linux World.md new file mode 100644 index 0000000000..b554bf7f7e --- /dev/null +++ b/published/20141223 Defending the Free Linux World.md @@ -0,0 +1,125 @@ +守卫自由的 Linux 世界 +================================================================================ +![](http://www.linuxinsider.com/ai/908455/open-invention-network.jpg) + +**合作是开源的一部分。OIN 的 CEO Keith Bergelt 解释说,开放创新网络(Open Invention Network)模式允许众多企业和公司决定它们该在哪较量,在哪合作。随着开源的演变,“我们需要为合作创造渠道,否则我们将会有几百个团体把数十亿美元花费到同样的技术上。”** + +[开放创新网络(Open Invention Network)][1],即 OIN,正在全球范围内开展让 Linux 远离专利诉讼的伤害的活动。它的努力得到了一千多个公司的热烈回应,它们的加入让这股力量成为了历史上最大的反专利管理组织。 + +开放创新网络以白帽子组织的身份创建于2005年,目的是保护 Linux 免受来自许可证方面的困扰。包括 Google、 IBM、 NEC、 Novell、 Philips、 [Red Hat][2] 和 Sony 这些成员的董事会给予了它可观的经济支持。世界范围内的多个组织通过签署自由 OIN 协议加入了这个社区。 + +创立开放创新网络的组织成员把它当作利用知识产权保护 Linux 的大胆尝试。它的商业模式非常的难以理解。它要求它的成员采用免版权许可证,并永远放弃由于 Linux 相关知识产权起诉其他成员的机会。 + +然而,从 Linux 收购风波——想想服务器和云平台——那时起,保护 Linux 知识产权的策略就变得越加的迫切。 + +在过去的几年里,Linux 的版图曾经历了一场变革。OIN 不必再向人们解释这个组织的定义,也不必再解释为什么 Linux 需要保护。据 OIN 的 CEO Keith Bergelt 说,现在 Linux 的重要性得到了全世界的关注。 + +“我们已经见到了一场人们了解到 OIN 如何让合作受益的文化变革,”他对 LinuxInsider 说。 + +### 如何运作 ### + +开放创新网络使用专利权的方式创建了一个协作环境。这种方法有助于确保创新的延续。这已经使很多软件厂商、顾客、新型市场和投资者受益。 + 
+开放创新网络的专利证可以让任何公司、公共机构或个人免版权使用。这些权利的获得建立在签署者同意不会专为了维护专利而攻击 Linux 系统的基础上。
+
+OIN 确保 Linux 的源代码保持开放的状态。这让编程人员、设备厂商、独立软件开发者和公共机构在投资和使用 Linux 时不用过多的担心知识产权的问题。这让对 Linux 进行重新打包、嵌入和使用的公司省了不少钱。
+
+“随着版权许可证越来越广泛的使用,对 OIN 许可证的需求也变得更加的迫切。现在,人们正在寻找更加简单或更实用的解决方法”,Bergelt 说。
+
+OIN 法律防御援助对成员是免费的。成员必须承诺不对 OIN 名单上的软件发起专利诉讼。为了保护他们的软件,他们也同意提供他们自己的专利。最终,这些保证将让几十万的交叉许可通过该网络相互连接起来,Bergelt 如此解释道。
+
+### 填补法律漏洞 ###
+
+“OIN 正在做的事情是非常必要的。它提供了另一层 IP(知识产权)保护,”[休斯顿法律中心大学][3]的副教授 Greg R. Vetter 这样说道。
+
+他回答 LinuxInsider 说,第二版 GPL 许可证被某些人认为提供了隐含的专利许可,但是律师们更喜欢明确的许可。
+
+OIN 所提供的许可填补了这个空白。它还明确地覆盖了 Linux 内核。据 Vetter 说,明确的专利许可并不是 GPLv2 中的必要部分,但是这个部分被加入到了 GPLv3 中。(LCTT 译注:Linux 内核采用的是 GPLv2 的许可)
+
+拿一个以 GPLv3 许可发布了 10000 行代码的开发者来说。随着时间推移,其他开发者贡献的更多代码也加入到了这份知识产权当中。GPLv3 中的软件专利许可条款将基于所有参与的贡献者的专利,保护全部代码的使用,Vetter 如此说道。
+
+### 并不完全一样 ###
+
+专利权和许可证在法律结构上层层叠叠互相覆盖。弄清两者对开源软件的作用就像是穿越雷区。
+
+Vetter 说:“通常,许可证是授予建立在专利和版权法律上的额外权利的法律结构。许可证被认为是给予了人们做一些可能会侵犯到其他人知识产权的事的许可。”
+
+Vetter 指出,很多自由开源许可证(例如 Mozilla 公共许可、GNU GPLv3 以及 Apache 软件许可)融合了某些互惠专利权的形式。他还指出,像 BSD 和 MIT 这样的旧许可证则不会提到专利。
+
+一个软件的许可证让其他人可以在某种程度上使用这个编程人员创造的代码。版权对所属权的建立是自动的,只要某个人写或者画了某个原创的东西。然而,版权只覆盖了个别的表达方式和衍生的作品。它并没有涵盖代码的功能性或可用的想法。
+
+专利涵盖了功能性。专利权还可以被许可。版权可能无法保护某人如何独立地开发对另一个人的代码的实现,但是专利填补了这个小瑕疵,Vetter 解释道。
+
+### 寻找安全通道 ###
+
+许可证和专利混合的法律性质可能会对开源开发者产生威胁。据 [Chaotic Moon Studios][4] 的创办者之一、[IEEE][5] 计算机协会成员 William Hurley 说,对于某些人来说,即使是 GPL 也会成为威胁。
+
+“在很久以前,开源是个完全不同的世界。被彼此间的尊重和把代码视为艺术而非资产的观点所驱动,那时的程序和代码比现在更加的开放。我相信很多出于最好愿景所做的努力,最后几乎总是背负着意外的结果,”Hurley 这样告诉 LinuxInsider。
+
+他暗示说,一个成员数量超过 1000 的组织,可能会在知识产权保护的重要性方面意见不一。这可能会继续搅混开源生态系统这滩浑水。
+
+“最终,这些显现出了围绕着知识产权的常见的一些错误概念。拥有几千个开发者并不会减少风险——而是增加。给出专利许可的开发者越多,它们看起来就越值钱,”Hurley 说。“它们看起来越值钱,有着类似专利的或者其他知识产权的人就越可能试图利用并从中榨取他们自己的经济利益。”
+
+### 共享与竞争共存 ###
+
+竞合策略是开源的一部分。OIN 模型让各个公司能够决定他们将在哪竞争以及在哪合作,Bergelt 解释道。
+
+“开源演化中的许多改变已经把我们移到了另一个方向上。我们必须为合作创造渠道。否则我们将会有几百个团体把数十亿美元花费到同样的技术上,”他说。
+
+手机产业的革新就是个很好的例子。各个公司放出了不同的标准。没有共享,没有合作,Bergelt 解释道。
+
+他说:“这让我们在美国接触技术的能力落后了七到十年。我们接触设备的经验远远落后于世界其他地方的人。在我们用不上 CDMA (Code Division 
Multiple Access 码分多址访问通信技术)时对 GSM (Global System for Mobile Communications 全球移动通信系统) 还沾沾自喜。”
+
+### 改变格局 ###
+
+OIN 在去年经历了新增 400 个许可的激增。这意味着开源出现了新的趋势。
+
+Bergelt 说:“市场到达了一个临界点,组织内的人们终于意识到直白地合作和竞争的需要。结果是两件事同时进行。这可能会变得复杂、费力。”
+
+然而,这个由人们开始考虑合作和竞争的文化革新所驱动的转换过程是可以接受的。他解释说,这也是一个人们怎样拥抱开源的转变——尤其是在 Linux 这个开源社区的领导者项目上。
+
+还有一个迹象是,最具意义的新项目都没有在 GPLv3 许可下开发。
+
+### 两个总比一个好 ###
+
+“GPL 极为重要,但是事实是有一大堆许可模型正被使用着。在 Eclipse、Apache 和 Berkeley 许可中,专利问题的相对可解决性通常远远低于在 GPLv3 中的。”Bergelt 说。
+
+GPLv3 对于解决专利问题是个自然的补充——但是 GPL 自身不足以独自解决围绕专利使用的潜在冲突。所以 OIN 的设计是以能够补充版权许可为目的的,他补充道。
+
+然而,层层叠叠的专利和许可也许并没有带来多少好处。到最后,专利在几乎所有的案例中都被用于攻击目的——而不是防御目的,Bergelt 暗示说。
+
+“如果你不准备对其他人采取法律行动,那么对于你的知识产权来说专利可能并不是最佳的法律保护方式”,他说。“我们现在生活在一个对软件——开放的和专有的——误会重重的世界里。这些软件还被错误而过时的专利系统所捆绑。我们每天在工业化和被扼杀的创新中挣扎”,他说。
+
+### 法院是最后的手段 ###
+
+想到 OIN 的出现抑制了诉讼的泛滥就感到十分欣慰,Bergelt 说,或者至少可以说 OIN 的出现扼制了特定的某些威胁。
+
+“可以说我们让人们放下他们的武器。同时我们正在创建一种新的文化规范。一旦你加入这个模型中的非侵略性专利承诺,所产生的相关影响就是对合作的鼓励”,他说。
+
+如果你愿意承诺合作,你的第一反应就会趋向于不急着起诉。相反,你会想:如何让我们允许你使用我们所拥有的东西并让它为你赚钱,而同时我们也能使用你所拥有的东西,Bergelt 解释道。
+
+“OIN 是个多面的解决方式。它鼓励签署者创造双赢协议”,他说,“这让起诉成为最逼不得已的行为。那才是它的位置。”
+
+### 底线 ###
+
+Bergelt 坚信,OIN 的运作是为了阻止 Linux 受到专利伤害。在一个需要 Linux 的世界里,诉讼没有容身之地。
+
+唯一与之接近的是与微软之间的移动领域之争,其焦点在于软件栈中较上层的元素。那些法律挑战可能是为了提高使用了 Linux 的产品的持有成本,Bergelt 说。
+
+尽管如此,“这些并不是针对 Linux 的诉讼”,他说。“它们的重点并不在于 Linux 的核心。它们关注的是 Linux 系统里都有些什么。”
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxinsider.com/story/Defending-the-Free-Linux-World-81512.html
+
+作者:Jack M. 
Germain +译者:[H-mudcup](https://github.com/H-mudcup) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[1]:http://www.openinventionnetwork.com/ +[2]:http://www.redhat.com/ +[3]:http://www.law.uh.edu/ +[4]:http://www.chaoticmoon.com/ +[5]:http://www.ieee.org/ diff --git a/translated/talk/20141223 Defending the Free Linux World.md b/translated/talk/20141223 Defending the Free Linux World.md deleted file mode 100644 index cabc8af041..0000000000 --- a/translated/talk/20141223 Defending the Free Linux World.md +++ /dev/null @@ -1,127 +0,0 @@ -Translating by H-mudcup - -守卫自由的Linux世界 -================================================================================ -![](http://www.linuxinsider.com/ai/908455/open-invention-network.jpg) - -**"合作是开源的一部分。OIN的CEO Keith Bergelt解释说,开放创新网络(Open Invention Network)模式允许众多企业和公司决定它们该在哪较量,在哪合作。随着开源的演变,“我们需要为合作创造渠道。否则我们将会有几百个团体把数十亿美元花费到同样的技术上。”** - -[开放创新网络(Open Invention Network)][1],既OIN,正在全球范围内开展让 Linux 远离专利诉讼的伤害的活动。它的努力得到了一千多个公司的热烈回应,它们的加入让这股力量成为了历史上最大的反专利管理组织。 - -开放创新网络以白帽子组织的身份创建于2005年,目的是保护 Linux 免受来自许可证方面的困扰。包括Google、 IBM、 NEC、 Novell、 Philips、 [Red Hat][2] 和 Sony这些成员的董事会给予了它可观的经济支持。世界范围内的多个组织通过签署自由 OIN 协议加入了这个社区。 - -创立开放创新网络的组织成员把它当作利用知识产权保护 Linux 的大胆尝试。它的商业模式非常的难以理解。它要求它的成员持无专利证并永远放弃由于 Linux 相关知识产权起诉其他成员的机会。 - -然而,从 Linux 收购风波——想想服务器和云平台——那时起,保护 Linux 知识产权的策略就变得越加的迫切。 - -在过去的几年里,Linux 的版图曾经历了一场变革。OIN 不必再向人们解释这个组织的定义,也不必再解释为什么 Linux 需要保护。据 OIN 的 CEO Keith Bergelt 说,现在 Linux 的重要性得到了全世界的关注。 - -“我们已经见到了一场人们了解到OIN如何让合作受益的文化变革,”他对 LinuxInsider 说。 - -### 如何运作 ### - -开放创新网络使用专利权的方式创建了一个协作环境。这种方法有助于确保创新的延续。这已经使很多软件商贩、顾客、新型市场和投资者受益。 - -开放创新网络的专利证可以让任何公司、公共机构或个人免版权使用。这些权利的获得建立在签署者同意不会专为了维护专利而攻击 Linux 系统的基础上。 - -OIN 确保 Linux 的源代码保持开放的状态。这让编程人员、设备出售人员、独立软件开发者和公共机构在投资和使用 Linux 时不用过多的担心知识产权的问题。这让对 Linux 进行重新装配、嵌入和使用的公司省了不少钱。 - -“随着版权许可证越来越广泛的使用,对 OIN 许可证的需求也变得更加的迫切。现在,人们正在寻找更加简单或更功利的解决方法”,Bergelt 说。 - -OIN 法律防御援助对成员是免费的。成员必须承诺不对 OIN 
名单带上的软件发起专利诉讼。为了保护该软件,他们也同意提供他们自己的专利。最终,这些保证将导致几十万的交叉许可通过网络连接,Bergelt 如此解释道。 - -### 填补法律漏洞 ### - -“OIN 正在做的事情是非常必要的。它提供额另一层 IP 保护,”[休斯顿法律中心大学][3]的副教授 Greg R. Vetter 这样说道。 - -他回答 LinuxInsider 说,某些人设想的第二版 GPL 许可证会隐含的提供专利许可,但是律师们更喜欢明确的许可。 - -OIN 所提供的许可填补了这个空白。它还明确的覆盖了 Linux 核心。据 Vetter 说,明确的专利许可并不是 GPLv2 中的必要部分,但是这个部分曾在 GPLv3 中。 - -拿一个在 GPLv3 中写了10000行代码的代码编写者来说。随着时间推移,其他的代码编写者会贡献更多行的代码到 IP 中。GPLv3 中的软件专利许可条款将保护所有基于参与其中的贡献者的专利的全部代码的使用,Vetter 如此说道。 - -### 并不完全一样 ### - -专利权和许可证在法律结构上层层叠叠互相覆盖。弄清两者对开源软件的作用就像是穿越雷区。 - -Vetter 说“许可证是授予通常是建立在专利和版权法律上的额外权利的法律结构。许可证被认为是给予了人们做一些的可能会侵犯到其他人的 IP 权利的事的许可。” - -Vetter 指出,很多自由开源许可证(例如 Mozilla 公共许可、GNU、GPLv3 以及 Apache 软件许可)融合了某些互惠专利权的形式。Vetter 指出,像 BSD 和 MIT 这样旧的许可证不会提到专利。 - -一个软件的许可证让其他人可以在某种程度上使用这个编程人员创造的代码。版权对所属权的建立是自动的,只要某个人写或者画了某个原创的东西。然而,版权只覆盖了个别的表达方式和衍生的作品。他并没有涵盖代码的功能性或可用的想法。 - -专利涵盖了功能性。专利权还可以成为许可证。版权可能无法保护某人如何独立的对另一个人的代码的实现的开发,但是专利填补了这个小瑕疵,Vetter 解释道。 - -### 寻找安全通道 ### - -许可证和专利混合的法律性质可能会对开源开发者产生威胁。据 [Chaotic Moon Studios][4] 的创办者之一、 [IEEE][5] 计算机协会成员 William Hurley 说,对于某些人来说即使是 GPL 也会成为威胁。 - -"在很久以前,开源是个完全不同的世界。被彼此间的尊重和把代码视为艺术而非资产的观点所驱动,那时的程序和代码比现在更加的开放。我相信很多为最好的意图所做的努力几乎最后总是背负着意外的结果,"Hurley 这样告诉 LinuxInsider。 - -他暗示说,成员人数超越了1000人可能带来了一个关于知识产权保护重要性的混乱信息。这可能会继续搅混开源生态系统这滩浑水。 - -“最终,这些显现出了围绕着知识产权的常见的一些错误概念。拥有几千个开发者并不会减少风险——而是增加。给专利许可的开发者越多,它们看起来就越值钱,”Hurley 说。“它们看起来越值钱,有着类似专利的或者其他知识产权的人就越可能试图利用并从中榨取他们自己的经济利益。” - -### 共享与竞争共存 ### - -竞合策略是开源的一部分。OIN 模型让各个公司能够决定他们将在哪竞争以及在哪合作,Bergelt 解释道。 - -“开源演化中的许多改变已经把我们移到了另一个方向上。我们必须为合作创造渠道。否则我们将会有几百个团体把数十亿美元花费到同样的技术上,”他说。 - -手机产业的革新就是个很好的例子。各个公司放出了不同的标准。没有共享,没有合作,Bergelt 解释道。 - -他说:“这让我们在美国接触技术的能力落后了七到五年。我们接触设备的经验远远落后于世界其他地方的人。在我们等待 CDMA (Code Division Multiple Access 码分多址访问通信技术)时自满于 GSM (Global System for Mobile Communications 全球移动通信系统)。” - -### 改变格局 ### - -OIN 在去年经历了增长了400个新许可的浪潮。这意味着着开源有了新趋势。 - -Bergelt 说:“市场到达了一个临界点,组织内的人们终于意识到直白地合作和竞争的需要。结果是两件事同时进行。这可能会变得复杂、费力。” - -然而,这个由人们开始考虑合作和竞争的文化革新所驱动的转换过程是可以忍受的。他解释说,这也是人们在以把开源作为开源社区的最重要的工程的方式拥抱开源——尤其是 Linux——的转变。 - -还有一个迹象是,最具意义的新工程都没有在 GPLv3 许可下开发。 
- -### 二个总比一个好 ### - -“GPL 极为重要,但是事实是有一堆的许可模型正被使用着。在Eclipse、Apache 和 Berkeley 许可中,专利问题的相对可解决性通常远远低于在 GPLv3 中的。”Bergelt 说。 - -GPLv3 对于解决专利问题是个自然的补充——但是 GPL 自身不足以独自解决围绕专利使用的潜在冲突。所以 OIN 的设计是以能够补充版权许可为目的的,他补充道。 - -然而,层层叠叠的专利和许可也许并没有带来多少好处。到最后,专利在几乎所有的案例中都被用于攻击目的——而不是防御目的,Bergelt 暗示说。 - -“如果你不准备对其他人采取法律行动,那么对于你的知识财产来说专利可能并不是最佳的法律保护方式”,他说。“我们现在生活在一个对软件——开放和专有——误会重重的世界里。这些软件还被错误并过时的专利系统所捆绑。我们每天在工业化的被窒息的创新中挣扎”,他说。 - -### 法院是最后的手段### - -想到 OIN 的出现抑制了诉讼的泛滥就感到十分欣慰,Bergelt 说,或者至少可以说 OIN 的出现扼制了特定的某些威胁。 - -“可以说我们让人们放下它们了的武器。同时我们正在创建一种新的文化规范。一旦你入股这个模型中的非侵略专利,所产生的相关影响就是对合作的鼓励”,他说。 - -如果你愿意承诺合作,你的第一反应就会趋向于不急着起诉。相反的,你会想如何让我们允许你使用我们所拥有的东西并让它为你赚钱,而同时我们也能使用你所拥有的东西,Bergelt 解释道。 - -“OIN 是个多面的解决方式。他鼓励签署者创造双赢协议”,他说。“这让起诉成为最逼不得已的行为。那才是它的位置。” - -### 底线### - -Bergelt 坚信,OIN 的运作是为了阻止 Linux 受到专利伤害。在 Linux 的世界里没有诉讼的地方。 - -唯一临近的是和微软的移动大战,这主要关系到堆栈中高的元素。那些来自法律的挑战可能是为了提高包括使用 Linux 产品的所属权的成本,Bergelt 说。 - -尽管如此“这些并不是有关 Linux 诉讼”,他说。“他们的重点并不在于 Linux 的核心。他们关注的是 Linux 系统里都有些什么。” - --------------------------------------------------------------------------------- - -via: http://www.linuxinsider.com/story/Defending-the-Free-Linux-World-81512.html - -作者:Jack M. 
Germain -译者:[H-mudcup](https://github.com/H-mudcup) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:http://www.openinventionnetwork.com/ -[2]:http://www.redhat.com/ -[3]:http://www.law.uh.edu/ -[4]:http://www.chaoticmoon.com/ -[5]:http://www.ieee.org/ From 8f75ab08e0d47b9940625e190f407f42db039639 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 13 Sep 2015 22:55:10 +0800 Subject: [PATCH 507/697] Delete 20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md --- ....9.0 Winamp-like Audio Player in Ubuntu.md | 72 ------------------- 1 file changed, 72 deletions(-) delete mode 100644 sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md diff --git a/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md b/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md deleted file mode 100644 index 76f479cb80..0000000000 --- a/sources/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md +++ /dev/null @@ -1,72 +0,0 @@ -在 Ubuntu 上安装 Qmmp 0.9.0 类似 Winamp 的音频播放器 -================================================================================ -![](http://ubuntuhandbook.org/wp-content/uploads/2015/01/qmmp-icon-simple.png) - -Qmmp,基于 Qt 的音频播放器,与 Winamp 或 xmms 的用户界面类似,现在最新版本是0.9.0。PPA 已经在 Ubuntu 15.10,Ubuntu 15.04,Ubuntu 14.04,Ubuntu 12.04 和其衍生物中已经更新了。 - -Qmmp 0.9.0 是一个较大的版本,有许多新的功能,有许多改进和新的转变。它添加了如下功能: - -- 音频-信道序列转换器; -- 9通道支持均衡器; -- 艺术家专辑标签支持; -- 异步排序; -- 通过文件的修改日期排​​序; -- 按艺术家专辑排序; -- 支持多专栏; -- 有隐藏踪迹长度功能; -- 不用修改 qmmp.pri 来禁用插件(仅在 qmake 中)功能 -- 记住播放列表滚动位置功能; -- 排除提示数据文件功能; -- 更改用户代理功能; -- 改变窗口标题功能; -- 复位字体功能; -- 恢复默认快捷键功能; -- 默认热键为“Rename List”功能; -- 功能禁用弹出的 GME 插件; -- 简单的用户界面(QSUI)有以下变化: - - 增加了多列表的支持; - - 增加了按艺术家专辑排序; - - 增加了按文件的修改日期进行排序; - - 增加了隐藏歌曲长度功能; - - 增加了默认热键为“Rename List”; - - 增加了“Save List”功能到标签菜单; - - 增加了复位字体功能; - - 增加了复位快捷键功能; - - 改进了状态栏; - 
-它还改进了播放列表的通知,播放列表容器,采样率转换器,cmake 构建脚本,标题格式,在 mpeg 插件中支持 ape 标签,fileops 插件,降低了 cpu 占用率,改变默认的皮肤(炫光)和分离播放列表。 - -![qmmp-090](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-090.jpg) - -### 在 Ubuntu 中安装 Qmmp 0.9.0 : ### - -新版本已经制做了 PPA,适用于目前所有 Ubuntu 发行版和衍生版。 - -1. 添加 [Qmmp PPA][1]. - -从 Dash 中打开终端并启动应用,通过按 Ctrl+Alt+T 快捷键。当它打开时,运行命令: - - sudo add-apt-repository ppa:forkotov02/ppa - -![qmmp-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-ppa.jpg) - -2. 在添加 PPA 后,通过更新软件来升级 Qmmp 播放器。刷新系统缓存,并通过以下命令安装软件: - - sudo apt-get update - - sudo apt-get install qmmp qmmp-plugin-pack - -就是这样。尽情享受吧! - --------------------------------------------------------------------------------- - -via: http://ubuntuhandbook.org/index.php/2015/09/qmmp-0-9-0-in-ubuntu/ - -作者:[Ji m][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ubuntuhandbook.org/index.php/about/ -[1]:https://launchpad.net/~forkotov02/+archive/ubuntu/ppa From 9cd88e297833087e7bf0a8637cd5a61f8cd106a4 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 13 Sep 2015 22:55:38 +0800 Subject: [PATCH 508/697] Delete 20150906 Installing NGINX and NGINX Plus With Ansible.md --- ...lling NGINX and NGINX Plus With Ansible.md | 451 ------------------ 1 file changed, 451 deletions(-) delete mode 100644 sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md diff --git a/sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md b/sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md deleted file mode 100644 index 3fa66fe6b1..0000000000 --- a/sources/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md +++ /dev/null @@ -1,451 +0,0 @@ -translation by strugglingyouth -nstalling NGINX and NGINX Plus With Ansible -================================================================================ 
-Coming from a production operations background, I have learned to love all things related to automation. Why do something by hand if a computer can do it for you? But creating and implementing automation can be a difficult task given an ever-changing infrastructure and the various technologies surrounding your environments. This is why I love [Ansible][1]. Ansible is an open source tool for IT configuration management, deployment, and orchestration that is extremely easy to use. - -One of my favorite features of Ansible is that it is completely clientless. To manage a system, a connection is made over SSH, using either [Paramiko][2] (a Python library) or native [OpenSSH][3]. Another attractive feature of Ansible is its extensive selection of modules. These modules can be used to perform some of the common tasks of a system administrator. In particular, they make Ansible a powerful tool for installing and configuring any application across multiple servers, environments, and operating systems, all from one central location. - -In this tutorial I will walk you through the steps for using Ansible to install and deploy the open source [NGINX][4] software and [NGINX Plus][5], our commercial product. I’m showing deployment onto a [CentOS][6] server, but I have included details about deploying on Ubuntu servers in [Creating an Ansible Playbook for Installing NGINX and NGINX Plus on Ubuntu][7] below. - -For this tutorial I will be using Ansible version 1.9.2 and performing the deployment from a server running CentOS 7.1. - - $ ansible --version - ansible 1.9.2 - - $ cat /etc/redhat-release - CentOS Linux release 7.1.1503 (Core) - -If you don’t already have Ansible, you can get instructions for installing it [at the Ansible site][8]. - -If you are using CentOS, installing Ansible is easy as typing the following command. If you want to compile from source or for other distributions, see the instructions at the Ansible link provided just above. 
- - $ sudo yum install -y epel-release && sudo yum install -y ansible - -Depending on your environment, some of the commands in this tutorial might require sudo privileges. The path to the files, usernames, and destination servers are all values that will be specific to your environment. - -### Creating an Ansible Playbook for Installing NGINX (CentOS) ### - -First we create a working directory for our NGINX deployment, along with subdirectories and deployment configuration files. I usually recommend creating the directory in your home directory and show that in all examples in this tutorial. - - $ cd $HOME - $ mkdir -p ansible-nginx/tasks/ - $ touch ansible-nginx/deploy.yml - $ touch ansible-nginx/tasks/install_nginx.yml - -The directory structure now looks like this. You can check by using the tree command. - - $ tree $HOME/ansible-nginx/ - /home/kjones/ansible-nginx/ - ├── deploy.yml - └── tasks - └── install_nginx.yml - - 1 directory, 2 files - -If you do not have tree installed, you can do so using the following command. - - $ sudo yum install -y tree - -#### Creating the Main Deployment File #### - -Next we open **deploy.yml** in a text editor. I prefer vim for editing configuration files on the command line, and will use it throughout the tutorial. - - $ vim $HOME/ansible-nginx/deploy.yml - -The **deploy.yml** file is our main Ansible deployment file, which we’ll reference when we run the ansible‑playbook command in [Running Ansible to Deploy NGINX][9]. Within this file we specify the inventory for Ansible to use along with any other configuration files to include at runtime. - -In my example I use the [include][10] module to specify a configuration file that has the steps for installing NGINX. While it is possible to create a playbook in one very large file, I recommend that you separate the steps into smaller included files to keep things organized. 
Sample use cases for an include are copying static content, copying configuration files, or assigning variables for a more advanced deployment with configuration logic. - -Type the following lines into the file. I include the filename at the top in a comment for reference. - - # ./ansible-nginx/deploy.yml - - - hosts: nginx - tasks: - - include: 'tasks/install_nginx.yml' - -The hosts statement tells Ansible to deploy to all servers in the **nginx** group, which is defined in **/etc/ansible/hosts**. We’ll edit this file in [Creating the List of NGINX Servers below][11]. - -The include statement tells Ansible to read in and execute the contents of the **install_nginx.yml** file from the **tasks** directory during deployment. The file includes the steps for downloading, installing, and starting NGINX. We’ll create this file in the next section. - -#### Creating the Deployment File for NGINX #### - -Now let’s save our work to **deploy.yml** and open up **install_nginx.yml** in the editor. - - $ vim $HOME/ansible-nginx/tasks/install_nginx.yml - -The file is going to contain the instructions – written in [YAML][12] format – for Ansible to follow when installing and configuring our NGINX deployment. Each section (step in the process) starts with a name statement (preceded by hyphen) that describes the step. The string following name: is written to stdout during the Ansible deployment and can be changed as you wish. The next line of a section in the YAML file is the module that will be used during that deployment step. In the configuration below, both the [yum][13] and [service][14] modules are used. The yum module is used to install packages on CentOS. The service module is used to manage UNIX services. The final line or lines in a section specify any parameters for the module (in the example, these lines start with name and state). - -Type the following lines into the file. As with **deploy.yml**, the first line in our file is a comment that names the file for reference. 
The first section tells Ansible to install the **.rpm** file for CentOS 7 from the NGINX repository. This directs the package manager to install the most recent stable version of NGINX directly from NGINX. Modify the pathname as necessary for your CentOS version. A list of available packages can be found on the [open source NGINX website][15]. The next two sections tell Ansible to install the latest NGINX version using the yum module and then start NGINX using the service module. - -**Note:** In the first section, the pathname to the CentOS package appears on two lines only for space reasons. Type the entire path on a single line. - - # ./ansible-nginx/tasks/install_nginx.yml - - - name: NGINX | Installing NGINX repo rpm - yum: - name: http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm - - - name: NGINX | Installing NGINX - yum: - name: nginx - state: latest - - - name: NGINX | Starting NGINX - service: - name: nginx - state: started - -#### Creating the List of NGINX Servers #### - -Now that we have our Ansible deployment configuration files all set up, we need to tell Ansible exactly which servers to deploy to. We specify this in the Ansible **hosts** file I mentioned earlier. Let’s make a backup of the existing file and create a new one just for our deployment. - - $ sudo mv /etc/ansible/hosts /etc/ansible/hosts.backup - $ sudo vim /etc/ansible/hosts - -Type (or edit) the following lines in the file to create a group called **nginx** and list the servers to install NGINX on. You can designate servers by hostname, IP address, or in an array such as **server[1-3].domain.com**. Here I designate one server by its IP address. - - # /etc/ansible/hosts - - [nginx] - 172.16.239.140 - -#### Setting Up Security #### - -We are almost all set, but before deployment we need to ensure that Ansible has authorization to access our destination server over SSH. 
- -The preferred and most secure method is to add the Ansible deployment server’s RSA SSH key to the destination server’s **authorized_keys** file, which gives Ansible unrestricted SSH permissions on the destination server. To learn more about this configuration, see [Securing OpenSSH][16] on wiki.centos.org. This way you can automate your deployments without user interaction. - -Alternatively, you can request the password interactively during deployment. I strongly recommend that you use this method during testing only, because it is insecure and there is no way to track changes to a destination host’s fingerprint. If you want to do this, change the value of StrictHostKeyChecking from the default yes to no in the **/etc/ssh/ssh_config** file on each of your destination hosts. Then add the --ask-pass flag on the ansible-playbook command to have Ansible prompt for the SSH password. - -Here I illustrate how to edit the **ssh_config** file to disable strict host key checking on the destination server. We manually SSH into the server to which we’ll deploy NGINX and change the value of StrictHostKeyChecking to no. - - $ ssh kjones@172.16.239.140 - kjones@172.16.239.140's password:*********** - - [kjones@nginx ]$ sudo vim /etc/ssh/ssh_config - -After you make the change, save **ssh_config**, and connect to your Ansible server via SSH. The setting should look as below before you save your work. - - # /etc/ssh/ssh_config - - StrictHostKeyChecking no - -#### Running Ansible to Deploy NGINX #### - -If you have followed the steps in this tutorial, you can run the following command to have Ansible deploy NGINX. (Again, if you have set up RSA SSH key authentication, then the --ask-pass flag is not needed.) Run the command on the Ansible server with the configuration files we created above. - - $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx/deploy.yml - -Ansible prompts for the SSH password and produces output like the following. 
A recap that reports failed=0 like this one indicates that deployment succeeded. - - $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx/deploy.yml - SSH password: - - PLAY [all] ******************************************************************** - - GATHERING FACTS *************************************************************** - ok: [172.16.239.140] - - TASK: [NGINX | Installing NGINX repo rpm] ************************************* - changed: [172.16.239.140] - - TASK: [NGINX | Installing NGINX] ********************************************** - changed: [172.16.239.140] - - TASK: [NGINX | Starting NGINX] ************************************************ - changed: [172.16.239.140] - - PLAY RECAP ******************************************************************** - 172.16.239.140 : ok=4 changed=3 unreachable=0 failed=0 - -If you didn’t get a successful play recap, you can try running the ansible-playbook command again with the -vvvv flag (verbose with connection debugging) to troubleshoot the deployment process. - -When deployment succeeds (as it did for us on the first try), you can verify that NGINX is running on the remote server by running the following basic [cURL][17] command. Here it returns 200 OK. Success! We have successfully installed NGINX using Ansible. - - $ curl -Is 172.16.239.140 | grep HTTP - HTTP/1.1 200 OK - -### Creating an Ansible Playbook for Installing NGINX Plus (CentOS) ### - -Now that I’ve shown you how to install the open source version of NGINX, I’ll walk you through the steps for installing NGINX Plus. This requires some additional changes to the deployment configuration and showcases some of Ansible’s other features. - -#### Copying the NGINX Plus Certificate and Key to the Ansible Server #### - -To install and configure NGINX Plus with Ansible, we first need to copy the key and certificate for our NGINX Plus subscription from the [NGINX Plus Customer Portal][18] to the standard location on the Ansible deployment server. 
- -Access to the NGINX Plus Customer Portal is available for customers who have purchased NGINX Plus or are evaluating it. If you are interested in evaluating NGINX Plus, you can request a 30-day free trial [here][19]. You will receive a link to your trial certificate and key shortly after you sign up. - -On a Mac or Linux host, use the [scp][20] utility as I show here. On a Microsoft Windows host, you can use [WinSCP][21]. For this tutorial, I downloaded the files to my Mac laptop, then used scp to copy them to the Ansible server. These commands place both the key and certificate in my home directory. - - $ cd /path/to/nginx-repo-files/ - $ scp nginx-repo.* user@destination-server:. - -Next we SSH to the Ansible server, make sure the SSL directory for NGINX Plus exists, and move the files there. - - $ ssh user@destination-server - $ sudo mkdir -p /etc/ssl/nginx/ - $ sudo mv nginx-repo.* /etc/ssl/nginx/ - -Verify that your **/etc/ssl/nginx** directory contains both the certificate (**.crt**) and key (**.key**) files. You can check by using the tree command. - - $ tree /etc/ssl/nginx - /etc/ssl/nginx - ├── nginx-repo.crt - └── nginx-repo.key - - 0 directories, 2 files - -If you do not have tree installed, you can do so using the following command. - - $ sudo yum install -y tree - -#### Creating the Ansible Directory Structure #### - -The remaining steps are very similar to the ones for open source NGINX that we performed in [Creating an Ansible Playbook for Installing NGINX (CentOS)][22]. First we set up a working directory for our NGINX Plus deployment. Again I prefer creating it as a subdirectory of my home directory. - - $ cd $HOME - $ mkdir -p ansible-nginx-plus/tasks/ - $ touch ansible-nginx-plus/deploy.yml - $ touch ansible-nginx-plus/tasks/install_nginx_plus.yml - -The directory structure now looks like this. 
- - $ tree $HOME/ansible-nginx-plus/ - /home/kjones/ansible-nginx-plus/ - ├── deploy.yml - └── tasks - └── install_nginx_plus.yml - - 1 directory, 2 files - -#### Creating the Main Deployment File #### - -Next we use vim to create the **deploy.yml** file as for open source NGINX. - - $ vim ansible-nginx-plus/deploy.yml - -The only difference from the open source NGINX deployment is that we change the name of the included file to **install_nginx_plus.yml**. As a reminder, the file tells Ansible to deploy NGINX Plus on all servers in the **nginx** group (which is defined in **/etc/ansible/hosts**), and to read in and execute the contents of the **install_nginx_plus.yml** file from the **tasks** directory during deployment. - - # ./ansible-nginx-plus/deploy.yml - - - hosts: nginx - tasks: - - include: 'tasks/install_nginx_plus.yml' - -If you have not done so already, you also need to create the hosts file as detailed in [Creating the List of NGINX Servers][23] above. - -#### Creating the Deployment File for NGINX Plus #### - -Open **install_nginx_plus.yml** in a text editor. The file is going to contain the instructions for Ansible to follow when installing and configuring your NGINX Plus deployment. The commands and modules are specific to CentOS and some are unique to NGINX Plus. - - $ vim ansible-nginx-plus/tasks/install_nginx_plus.yml - -The first section uses the [file][24] module, telling Ansible to create the SSL directory for NGINX Plus as specified by the path and state arguments, set the ownership to root, and change the mode to 0700. - - # ./ansible-nginx-plus/tasks/install_nginx_plus.yml - - - name: NGINX Plus | Creating NGINX Plus ssl cert repo directory - file: path=/etc/ssl/nginx state=directory group=root mode=0700 - -The next two sections use the [copy][25] module to copy the NGINX Plus certificate and key from the Ansible deployment server to the NGINX Plus server during the deployment, again setting ownership to root and the mode to 0700. 
- - - name: NGINX Plus | Copying NGINX Plus repository certificate - copy: src=/etc/ssl/nginx/nginx-repo.crt dest=/etc/ssl/nginx/nginx-repo.crt owner=root group=root mode=0700 - - - name: NGINX Plus | Copying NGINX Plus repository key - copy: src=/etc/ssl/nginx/nginx-repo.key dest=/etc/ssl/nginx/nginx-repo.key owner=root group=root mode=0700 - -Next we tell Ansible to use the [get_url][26] module to download the CA certificate from the NGINX Plus repository at the remote location specified by the url argument, put it in the directory specified by the dest argument, and set the mode to 0700. - - - name: NGINX Plus | Downloading NGINX Plus CA certificate - get_url: url=https://cs.nginx.com/static/files/CA.crt dest=/etc/ssl/nginx/CA.crt mode=0700 - -Similarly, we tell Ansible to download the NGINX Plus repo file using the get_url module and copy it to the **/etc/yum.repos.d** directory on the NGINX Plus server. - - - name: NGINX Plus | Downloading yum NGINX Plus repository - get_url: url=https://cs.nginx.com/static/files/nginx-plus-7.repo dest=/etc/yum.repos.d/nginx-plus-7.repo mode=0700 - -The final two name sections tell Ansible to install and start NGINX Plus using the yum and service modules. - - - name: NGINX Plus | Installing NGINX Plus - yum: - name: nginx-plus - state: latest - - - name: NGINX Plus | Starting NGINX Plus - service: - name: nginx - state: started - -#### Running Ansible to Deploy NGINX Plus #### - -After saving the **install_nginx_plus.yml** file, we run the ansible-playbook command to deploy NGINX Plus. Again here we include the --ask-pass flag to have Ansible prompt for the SSH password and pass it to each NGINX Plus server, and specify the path to the main Ansible **deploy.yml** file. 
- - $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx-plus/deploy.yml - - PLAY [nginx] ****************************************************************** - - GATHERING FACTS *************************************************************** - ok: [172.16.239.140] - - TASK: [NGINX Plus | Creating NGINX Plus ssl cert repo directory] ************** - changed: [172.16.239.140] - - TASK: [NGINX Plus | Copying NGINX Plus repository certificate] **************** - changed: [172.16.239.140] - - TASK: [NGINX Plus | Copying NGINX Plus repository key] ************************ - changed: [172.16.239.140] - - TASK: [NGINX Plus | Downloading NGINX Plus CA certificate] ******************** - changed: [172.16.239.140] - - TASK: [NGINX Plus | Downloading yum NGINX Plus repository] ******************** - changed: [172.16.239.140] - - TASK: [NGINX Plus | Installing NGINX Plus] ************************************ - changed: [172.16.239.140] - - TASK: [NGINX Plus | Starting NGINX Plus] ************************************** - changed: [172.16.239.140] - - PLAY RECAP ******************************************************************** - 172.16.239.140 : ok=8 changed=7 unreachable=0 failed=0 - -The playbook recap was successful. Now we can run a quick curl command to verify that NGINX Plus is running. Great, we get 200 OK! Success! We have successfully installed NGINX Plus with Ansible. - - $ curl -Is http://172.16.239.140 | grep HTTP - HTTP/1.1 200 OK - -### Creating an Ansible Playbook for Installing NGINX and NGINX Plus on Ubuntu ### - -The process for deploying NGINX and NGINX Plus on [Ubuntu servers][27] is pretty similar to the process on CentOS, so instead of providing step-by-step instructions I’ll show the complete deployment files and and point out the slight differences from CentOS. - -First create the Ansible directory structure and the main Ansible deployment file, as for CentOS. 
Also create the **/etc/ansible/hosts** file as described in [Creating the List of NGINX Servers][28]. For NGINX Plus, you need to copy over the key and certificate as described in [Copying the NGINX Plus Certificate and Key to the Ansible Server][29]. - -Here’s the **install_nginx.yml** deployment file for open source NGINX. In the first section, we use the [apt_key][30] module to import the NGINX signing key. The next two sections use the [lineinfile][31] module to add the package URLs for Ubuntu 14.04 to the **sources.list** file. Lastly we use the [apt][32] module to update the cache and install NGINX (apt replaces the yum module we used for deploying to CentOS). - - # ./ansible-nginx/tasks/install_nginx.yml - - - name: NGINX | Adding NGINX signing key - apt_key: url=http://nginx.org/keys/nginx_signing.key state=present - - - name: NGINX | Adding sources.list deb url for NGINX - lineinfile: dest=/etc/apt/sources.list line="deb http://nginx.org/packages/mainline/ubuntu/ trusty nginx" - - - name: NGINX Plus | Adding sources.list deb-src url for NGINX - lineinfile: dest=/etc/apt/sources.list line="deb-src http://nginx.org/packages/mainline/ubuntu/ trusty nginx" - - - name: NGINX | Updating apt cache - apt: - update_cache: yes - - - name: NGINX | Installing NGINX - apt: - pkg: nginx - state: latest - - - name: NGINX | Starting NGINX - service: - name: nginx - state: started - -Here’s the **install_nginx.yml** deployment file for NGINX Plus. The first four sections set up the NGINX Plus key and certificate. Then we use the apt_key module to import the signing key as for open source NGINX, and the get_url module to download the apt configuration file for NGINX Plus. The [shell][33] module evokes a printf command that writes its output to the **nginx-plus.list** file in the **sources.list.d** directory. The final name modules are the same as for open source NGINX. 
- - # ./ansible-nginx-plus/tasks/install_nginx_plus.yml - - - name: NGINX Plus | Creating NGINX Plus ssl cert repo directory - file: path=/etc/ssl/nginx state=directory group=root mode=0700 - - - name: NGINX Plus | Copying NGINX Plus repository certificate - copy: src=/etc/ssl/nginx/nginx-repo.crt dest=/etc/ssl/nginx/nginx-repo.crt owner=root group=root mode=0700 - - - name: NGINX Plus | Copying NGINX Plus repository key - copy: src=/etc/ssl/nginx/nginx-repo.key dest=/etc/ssl/nginx/nginx-repo.key owner=root group=root mode=0700 - - - name: NGINX Plus | Downloading NGINX Plus CA certificate - get_url: url=https://cs.nginx.com/static/files/CA.crt dest=/etc/ssl/nginx/CA.crt mode=0700 - - - name: NGINX Plus | Adding NGINX Plus signing key - apt_key: url=http://nginx.org/keys/nginx_signing.key state=present - - - name: NGINX Plus | Downloading Apt-Get NGINX Plus repository - get_url: url=https://cs.nginx.com/static/files/90nginx dest=/etc/apt/apt.conf.d/90nginx mode=0700 - - - name: NGINX Plus | Adding sources.list url for NGINX Plus - shell: printf "deb https://plus-pkgs.nginx.com/ubuntu `lsb_release -cs` nginx-plus\n" >/etc/apt/sources.list.d/nginx-plus.list - - - name: NGINX Plus | Running apt-get update - apt: - update_cache: yes - - - name: NGINX Plus | Installing NGINX Plus via apt-get - apt: - pkg: nginx-plus - state: latest - - - name: NGINX Plus | Start NGINX Plus - service: - name: nginx - state: started - -We’re now ready to run the ansible-playbook command: - - $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx-plus/deploy.yml - -You should get a successful play recap. If you did not get a success, you can use the verbose flag to help troubleshoot your deployment as described in [Running Ansible to Deploy NGINX][34]. - -### Summary ### - -What I demonstrated in this tutorial is just the beginning of what Ansible can do to help automate your NGINX or NGINX Plus deployment. 
There are many useful modules ranging from user account management to custom configuration templates. If you are interested in learning more about these, please visit the extensive [Ansible documentation][35 site. - -To learn more about Ansible, come hear my talk on deploying NGINX Plus with Ansible at [NGINX.conf 2015][36], September 22–24 in San Francisco. - --------------------------------------------------------------------------------- - -via: https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/ - -作者:[Kevin Jones][a] -译者:[struggling](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.nginx.com/blog/author/kjones/ -[1]:http://www.ansible.com/ -[2]:http://www.paramiko.org/ -[3]:http://www.openssh.com/ -[4]:http://nginx.org/en/ -[5]:https://www.nginx.com/products/ -[6]:http://www.centos.org/ -[7]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#ubuntu -[8]:http://docs.ansible.com/ansible/intro_installation.html#installing-the-control-machine -[9]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#deploy-nginx -[10]:http://docs.ansible.com/ansible/playbooks_roles.html#task-include-files-and-encouraging-reuse -[11]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#list-nginx -[12]:http://docs.ansible.com/ansible/YAMLSyntax.html -[13]:http://docs.ansible.com/ansible/yum_module.html -[14]:http://docs.ansible.com/ansible/service_module.html -[15]:http://nginx.org/en/linux_packages.html -[16]:http://wiki.centos.org/HowTos/Network/SecuringSSH -[17]:http://curl.haxx.se/ -[18]:https://cs.nginx.com/ -[19]:https://www.nginx.com/#free-trial -[20]:http://linux.die.net/man/1/scp -[21]:https://winscp.net/eng/download.php -[22]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#playbook-nginx -[23]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#list-nginx 
-[24]:http://docs.ansible.com/ansible/file_module.html -[25]:http://docs.ansible.com/ansible/copy_module.html -[26]:http://docs.ansible.com/ansible/get_url_module.html -[27]:http://www.ubuntu.com/ -[28]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#list-nginx -[29]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#copy-cert-key -[30]:http://docs.ansible.com/ansible/apt_key_module.html -[31]:http://docs.ansible.com/ansible/lineinfile_module.html -[32]:http://docs.ansible.com/ansible/apt_module.html -[33]:http://docs.ansible.com/ansible/shell_module.html -[34]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#deploy-nginx -[35]:http://docs.ansible.com/ -[36]:https://www.nginx.com/nginxconf/ From 5eec461b75dc3896eb52dd3419d051bb79ecc217 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 13 Sep 2015 22:56:45 +0800 Subject: [PATCH 509/697] Create 20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md --- ....9.0 Winamp-like Audio Player in Ubuntu.md | 72 +++++++++++++++++++ 1 file changed, 72 insertions(+) create mode 100644 translated/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md diff --git a/translated/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md b/translated/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md new file mode 100644 index 0000000000..ac07a04b85 --- /dev/null +++ b/translated/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md @@ -0,0 +1,72 @@ +在 Ubuntu 上安装 Qmmp 0.9.0 类似 Winamp 的音频播放器 +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2015/01/qmmp-icon-simple.png) + +Qmmp,基于 Qt 的音频播放器,与 Winamp 或 xmms 的用户界面类似,现在最新版本是0.9.0。PPA 已经在 Ubuntu 15.10,Ubuntu 15.04,Ubuntu 14.04,Ubuntu 12.04 和其衍生物中已经更新了。 + +Qmmp 0.9.0 是一个较大的版本,有许多新的功能,有许多改进和新的转变。它添加了如下功能: + +- 音频-信道序列转换器; +- 9通道支持均衡器; +- 艺术家专辑标签支持; +- 异步排序; +- 
通过文件的修改日期排序; +- 按艺术家专辑排序; +- 支持多列; +- 隐藏音轨长度功能; +- 不用修改 qmmp.pri 来禁用插件(仅在 qmake 中)功能; +- 记住播放列表滚动位置功能; +- 排除提示数据文件功能; +- 更改用户代理功能; +- 改变窗口标题功能; +- 复位字体功能; +- 恢复默认快捷键功能; +- 默认热键为“Rename List”功能; +- 禁用 GME 插件弹出窗口功能; +- 简单的用户界面(QSUI)有以下变化: + - 增加了多列表的支持; + - 增加了按艺术家专辑排序; + - 增加了按文件的修改日期进行排序; + - 增加了隐藏歌曲长度功能; + - 增加了默认热键为“Rename List”; + - 增加了“Save List”功能到标签菜单; + - 增加了复位字体功能; + - 增加了复位快捷键功能; + - 改进了状态栏; + +它还改进了播放列表的通知、播放列表容器、采样率转换器、cmake 构建脚本和标题格式,在 mpeg 插件中支持了 ape 标签,改进了 fileops 插件,降低了 CPU 占用率,更改了默认皮肤(炫光)并分离了播放列表。 + +![qmmp-090](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-090.jpg) + +### 在 Ubuntu 中安装 Qmmp 0.9.0 : ### + +新版本已经制作了 PPA,适用于目前所有 Ubuntu 发行版和衍生版。 + +1. 添加 [Qmmp PPA][1]。 + +按 Ctrl+Alt+T 快捷键,或者从 Dash 中搜索并打开终端。打开终端后,运行命令: + + sudo add-apt-repository ppa:forkotov02/ppa + +![qmmp-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-ppa.jpg) + +2. 添加 PPA 后,可以通过“软件更新器”升级 Qmmp 播放器;或者刷新系统缓存后,通过以下命令安装软件: + + sudo apt-get update + + sudo apt-get install qmmp qmmp-plugin-pack + +就是这样。尽情享受吧!
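上面“添加 PPA、刷新缓存、安装软件”这几步,也可以合并成一个小脚本(仅为示意,PPA 地址来自上文;脚本默认只回显将要执行的命令,设置 DO_INSTALL=1 并以有 sudo 权限的用户运行时才会真正安装):

```shell
#!/bin/sh
# 示意脚本:把添加 PPA、刷新缓存、安装 Qmmp 三步合并在一起。
# 默认为“演练”模式,只打印命令;设置 DO_INSTALL=1 才真正执行。
run() {
    echo "+ $*"
    if [ "${DO_INSTALL:-0}" = "1" ]; then "$@"; fi
}

run sudo add-apt-repository -y ppa:forkotov02/ppa
run sudo apt-get update
run sudo apt-get install -y qmmp qmmp-plugin-pack
```

演练模式下脚本只会打印三条命令,便于先确认再执行。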
+ +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/09/qmmp-0-9-0-in-ubuntu/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://launchpad.net/~forkotov02/+archive/ubuntu/ppa From 9a914c9184d69099864cc3b5d71b1d75d0f3a459 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 13 Sep 2015 22:57:43 +0800 Subject: [PATCH 510/697] Create 20150906 Installing NGINX and NGINX Plus With Ansible.md --- ...lling NGINX and NGINX Plus With Ansible.md | 452 ++++++++++++++++++ 1 file changed, 452 insertions(+) create mode 100644 translated/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md diff --git a/translated/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md b/translated/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md new file mode 100644 index 0000000000..e80f080624 --- /dev/null +++ b/translated/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md @@ -0,0 +1,452 @@ +translation by strugglingyouth +Installing NGINX and NGINX Plus With Ansible +================================================================================ +在生产环境中,我更喜欢将所有事情都自动化。如果计算机能完成你的任务,何必需要你亲自动手呢?但是,在不断变化并存在多种技术的环境中,创建和实施自动化是一项艰巨的任务。这就是为什么我喜欢 [Ansible][1]。Ansible 是一个免费、开源的工具,可用于 IT 配置管理、部署和编排,使用起来非常方便。 + + +我最喜欢 Ansible 的一个特点是,它完全无需客户端。管理系统时,它通过 SSH 建立连接,使用 [Paramiko][2](一个 Python 库)或本地的 [OpenSSH][3]。Ansible 另一个吸引人的地方是它有许多可扩展的模块。这些模块可被系统管理员用于执行一些常见任务。特别是,借助 Ansible 这个强有力的工具,只需要一个控制节点,就可以在多台服务器、多种环境或操作系统上安装和配置任何程序。 + +在本教程中,我将带你使用 Ansible 完成开源 [NGINX][4] 和我们的商业产品 [NGINX Plus][5] 的安装和部署。我将在 [CentOS][6] 服务器上演示,但我也写了一个在 Ubuntu 服务器上部署的详细教程:[在 Ubuntu 上创建一个 Ansible Playbook 来安装 NGINX 和 NGINX Plus][7]。 + +在本教程中我将使用 Ansible 1.9.2 版本,并在 CentOS 7.1 服务器上部署运行。 + + $ ansible
--version + ansible 1.9.2 + + $ cat /etc/redhat-release + CentOS Linux release 7.1.1503 (Core) + +如果你还没有 Ansible,可以在 [Ansible 网站][8] 查看说明并安装它。 + +如果你使用的是 CentOS,安装 Ansible 十分简单,只要输入以下命令。如果你想使用源码编译安装或使用其他发行版,请参阅上面 Ansible 链接中的说明。 + + + $ sudo yum install -y epel-release && sudo yum install -y ansible + +根据环境的不同,在本教程中的命令有的可能需要 sudo 权限。文件路径,用户名,目标服务器的值取决于你的环境中。 + +### 创建一个 Ansible Playbook 来安装 NGINX (CentOS) ### + +首先,我们为 NGINX 的部署创建一个工作目录,以及子目录和部署配置文件目录。我通常建议在主目录中创建目录,在文章的所有例子中都会有说明。 + + $ cd $HOME + $ mkdir -p ansible-nginx/tasks/ + $ touch ansible-nginx/deploy.yml + $ touch ansible-nginx/tasks/install_nginx.yml + +目录结构看起来是这样的。你可以使用 tree 命令来查看。 + + $ tree $HOME/ansible-nginx/ + /home/kjones/ansible-nginx/ + ├── deploy.yml + └── tasks + └── install_nginx.yml + + 1 directory, 2 files + +如果你没有安装 tree 命令,使用以下命令去安装。 + + $ sudo yum install -y tree + +#### 创建主部署文件 #### + +接下来,我们在文本编辑器中打开 **deploy.yml**。我喜欢在命令行上使用 vim 来编辑配置文件,在整个教程中也都将使用它。 + + $ vim $HOME/ansible-nginx/deploy.yml + +**deploy.yml** 文件是 Ansible 部署的主要文件,[ 在使用 Ansible 部署 NGINX][9] 时,我们将运行 ansible‑playbook 命令执行此文件。在这个文件中,我们指定运行时 Ansible 使用的库以及其它配置文件。 + +在这个例子中,我使用 [include][10] 模块来指定配置文件一步一步来安装NGINX。虽然可以创建一个非常大的 playbook 文件,我建议你将其分割为小文件,以保证其可靠性。示例中的包括复制静态内容,复制配置文件,为更高级的部署使用逻辑配置设定变量。 + +在文件中输入以下行。包括顶部参考注释中的文件名。 + + # ./ansible-nginx/deploy.yml + + - hosts: nginx + tasks: + - include: 'tasks/install_nginx.yml' + + hosts 语句说明 Ansible 部署 **nginx** 组的所有服务器,服务器在 **/etc/ansible/hosts** 中指定。我们将编辑此文件来 [创建 NGINX 服务器的列表][11]。 + +include 语句说明 Ansible 在部署过程中从 **tasks** 目录下读取并执行 **install_nginx.yml** 文件中的内容。该文件包括以下几步:下载,安装,并启动 NGINX。我们将创建此文件在下一节。 + +#### 为 NGINX 创建部署文件 #### + +现在,先保存 **deploy.yml** 文件,并在编辑器中打开 **install_nginx.yml** 。 + + $ vim $HOME/ansible-nginx/tasks/install_nginx.yml + +该文件包含的说明有 - 以 [YAML][12] 格式写入 - 使用 Ansible 安装和配置 NGINX。每个部分(步骤中的过程)起始于一个 name 声明(前面连字符)描述此步骤。下面的 name 字符串:是 Ansible 部署过程中写到标准输出的,可以根据你的意愿来改变。YAML 文件中的下一个部分是在部署过程中将使用的模块。在下面的配置中,[yum][13] 和 [service][14] 模块使将被用。yum 模块用于在 CentOS 
上安装软件包。service 模块用于管理 UNIX 系统服务。每部分的最后一行或几行指定模块的参数(在本例中,这些行以 name 和 state 开始)。 + +在文件中输入以下行。和 **deploy.yml** 一样,文件的第一行是注明文件名的注释。第一部分让 Ansible 在 CentOS 7 上从 NGINX 仓库安装 **.rpm** 文件,即让软件包管理器直接从 NGINX 仓库安装最新最稳定的版本。请根据你的 CentOS 版本修改路径,可用的软件包列表可以在 [开源 NGINX 网站][15] 上找到。接下来的两节让 Ansible 使用 yum 模块安装最新的 NGINX 版本,然后使用 service 模块启动 NGINX。 + +**注意:** 在第一部分中,CentOS 软件包的路径名换行成了两行,输入时请把完整路径写在一行里。 + + # ./ansible-nginx/tasks/install_nginx.yml + + - name: NGINX | Installing NGINX repo rpm + yum: + name: http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm + + - name: NGINX | Installing NGINX + yum: + name: nginx + state: latest + + - name: NGINX | Starting NGINX + service: + name: nginx + state: started + +#### 创建 NGINX 服务器列表 #### + +现在,我们已经有了 Ansible 部署所需的全部配置文件,还需要告诉 Ansible 要部署哪些服务器。这通过 Ansible 的 **hosts** 文件指定。先备份现有的文件,再新建一个用于本次部署的文件。 + + $ sudo mv /etc/ansible/hosts /etc/ansible/hosts.backup + $ sudo vim /etc/ansible/hosts + +在文件中输入以下行,创建一个名为 **nginx** 的组并列出要安装 NGINX 的服务器。你可以通过主机名或 IP 地址指定服务器,也可以指定一个范围,例如 **server[1-3].domain.com**。在这里,我通过 IP 地址指定了一台服务器。 + + # /etc/ansible/hosts + + [nginx] + 172.16.239.140 + +#### 设置安全性 #### + +在部署之前,我们需要确保 Ansible 能通过 SSH 授权访问我们的目标服务器。 + +首选并且最安全的方法,是把 Ansible 部署服务器的 RSA SSH 公钥添加到目标服务器的 **authorized_keys** 文件中,这样 Ansible 就可以不受限制地通过 SSH 访问目标服务器。要了解更多关于此配置的内容,请参阅 wiki.centos.org 上的 [Securing OpenSSH][16]。这样,你就可以自动部署而无需用户交互。 + +另外,你也可以在部署过程中让 Ansible 提示输入密码。我强烈建议你只在测试过程中使用这种方法,因为它不安全,无法验证目标主机的身份。如果你想这样做,将每个目标主机 **/etc/ssh/ssh_config** 文件中 StrictHostKeyChecking 的默认值 yes 改为 no,然后在 ansible-playbook 命令中添加 --ask-pass 参数,让 Ansible 提示输入 SSH 密码。 + +在这里,我将举例说明如何编辑 **ssh_config** 文件,以禁用目标服务器上严格的主机密钥检查。我们手动 SSH 到将要部署 NGINX 的服务器,并将 StrictHostKeyChecking 的值更改为 no。 + + $ ssh kjones@172.16.239.140 + kjones@172.16.239.140's password:*********** + + [kjones@nginx ]$ sudo vim /etc/ssh/ssh_config + +更改后,保存 **ssh_config**,并通过 SSH 连接回你的 Ansible 服务器。保存后的设置应该如下所示。 + + # /etc/ssh/ssh_config + + StrictHostKeyChecking no + +#### 运行 Ansible 部署 NGINX #### + 
+如果你一直照本教程的步骤来做,你可以运行下面的命令来使用 Ansible 部署NGINX。(同样,如果你设置了 RSA SSH 密钥认证,那么--ask-pass 参数是不需要的。)在 Ansible 服务器运行命令,并使用我们上面创建的配置文件。 + + $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx/deploy.yml + +Ansible 提示输入 SSH 密码,输出如下。recap 中显示 failed=0 这条信息,意味着部署成功了。 + + $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx/deploy.yml + SSH password: + + PLAY [all] ******************************************************************** + + GATHERING FACTS *************************************************************** + ok: [172.16.239.140] + + TASK: [NGINX | Installing NGINX repo rpm] ************************************* + changed: [172.16.239.140] + + TASK: [NGINX | Installing NGINX] ********************************************** + changed: [172.16.239.140] + + TASK: [NGINX | Starting NGINX] ************************************************ + changed: [172.16.239.140] + + PLAY RECAP ******************************************************************** + 172.16.239.140 : ok=4 changed=3 unreachable=0 failed=0 + +如果你没有得到一个成功的 play recap,你可以尝试用 -vvvv 参数(带连接调试的详细信息)再次运行 ansible-playbook 命令来解决部署过程的问题。 + +当部署成功(因为我们不是第一次部署)后,你可以验证 NGINX 在远程服务器上运行基本的 [cURL][17] 命令。在这里,它会返回 200 OK。Yes!我们使用Ansible 成功安装了 NGINX。 + + $ curl -Is 172.16.239.140 | grep HTTP + HTTP/1.1 200 OK + +### 创建 Ansible Playbook 来安装 NGINX Plus (CentOS) ### + +现在,我已经展示了如何安装 NGINX 的开源版本,我将带你完成安装 NGINX Plus。这需要更改一些额外的部署配置,并展示了一些 Ansible 的其他功能。 + +#### 复制 NGINX Plus 上的证书和密钥到 Ansible 服务器 #### + +使用 Ansible 安装和配置 NGINX Plus 时,首先我们需要将 [NGINX Plus Customer Portal][18] 的密钥和证书复制到部署 Ansible 服务器上的标准位置。 + +购买了 NGINX Plus 或正在试用的客户也可以访问 NGINX Plus Customer Portal。如果你有兴趣测试 NGINX Plus,你可以申请免费试用30天[点击这里][19]。在你注册后不久你将收到一个试用证书和密钥的链接。 + +在 Mac 或 Linux 主机上,我在这里演示使用 [scp][20] 工具。在 Microsoft Windows 主机,可以使用 [WinSCP][21]。在本教程中,先下载文件到我的 Mac 笔记本电脑上,然后使用 scp 将其复制到 Ansible 服务器。密钥和证书的位置都在我的家目录下。 + + $ cd /path/to/nginx-repo-files/ + $ scp nginx-repo.* user@destination-server:. 
+ +接下来,我们通过 SSH 连接到 Ansible 服务器,确保 NGINX Plus 的 SSL 目录存在,移动文件到这儿。 + + $ ssh user@destination-server + $ sudo mkdir -p /etc/ssl/nginx/ + $ sudo mv nginx-repo.* /etc/ssl/nginx/ + +验证你的 **/etc/ssl/nginx** 目录包含证书(**.crt**)和密钥(**.key**)文件。你可以使用 tree 命令检查。 + + $ tree /etc/ssl/nginx + /etc/ssl/nginx + ├── nginx-repo.crt + └── nginx-repo.key + + 0 directories, 2 files + +如果你没有安装 tree,可以使用下面的命令去安装。 + + $ sudo yum install -y tree + +#### 创建 Ansible 目录结构 #### + +以下执行的步骤将和开源 NGINX 的非常相似在[创建安装 NGINX 的 Ansible Playbook 中(CentOS)][22]。首先,我们建一个工作目录为部署 NGINX Plus 使用。我喜欢将它创建为我主目录的子目录。 + + $ cd $HOME + $ mkdir -p ansible-nginx-plus/tasks/ + $ touch ansible-nginx-plus/deploy.yml + $ touch ansible-nginx-plus/tasks/install_nginx_plus.yml + +目录结构看起来像这样。 + + $ tree $HOME/ansible-nginx-plus/ + /home/kjones/ansible-nginx-plus/ + ├── deploy.yml + └── tasks + └── install_nginx_plus.yml + + 1 directory, 2 files + +#### 创建主部署文件 #### + +接下来,我们使用 vim 为开源的 NGINX 创建 **deploy.yml** 文件。 + + $ vim ansible-nginx-plus/deploy.yml + +和开源 NGINX 的部署唯一的区别是,我们将包含文件的名称修改为**install_nginx_plus.yml**。该文件告诉 Ansible 在 **nginx** 组中的所有服务器(**/etc/ansible/hosts** 中定义的)上部署 NGINX Plus ,然后在部署过程中从 **tasks** 目录读取并执行 **install_nginx_plus.yml** 的内容。 + + # ./ansible-nginx-plus/deploy.yml + + - hosts: nginx + tasks: + - include: 'tasks/install_nginx_plus.yml' + +如果你还没有这样做的话,你需要创建 hosts 文件,详细说明在上面的 [创建 NGINX 服务器的列表][23]。 + +#### 为 NGINX Plus 创建部署文件 #### + +在文本编辑器中打开 **install_nginx_plus.yml**。该文件在部署过程中使用 Ansible 来安装和配置 NGINX Plus。这些命令和模块仅针对 CentOS,有些是 NGINX Plus 独有的。 + + $ vim ansible-nginx-plus/tasks/install_nginx_plus.yml + +第一部分使用 [文件][24] 模块,告诉 Ansible 使用指定的路径和状态参数为 NGINX Plus 创建特定的 SSL 目录,设置根目录的权限,将权限更改为0700。 + + # ./ansible-nginx-plus/tasks/install_nginx_plus.yml + + - name: NGINX Plus | 创建 NGINX Plus ssl 证书目录 + file: path=/etc/ssl/nginx state=directory group=root mode=0700 + +接下来的两节使用 [copy][25] 模块从部署 Ansible 的服务器上将 NGINX Plus 的证书和密钥复制到 NGINX Plus 服务器上,再修改权根,将权限设置为0700。 + + - name: NGINX Plus | 复制 NGINX Plus repo 证书 + 
copy: src=/etc/ssl/nginx/nginx-repo.crt dest=/etc/ssl/nginx/nginx-repo.crt owner=root group=root mode=0700 + + - name: NGINX Plus | 复制 NGINX Plus 密钥 + copy: src=/etc/ssl/nginx/nginx-repo.key dest=/etc/ssl/nginx/nginx-repo.key owner=root group=root mode=0700 + +接下来,我们告诉 Ansible 使用 [get_url][26] 模块从 NGINX Plus 仓库下载 CA 证书在 url 参数指定的远程位置,通过 dest 参数把它放在指定的目录,并设置权限为 0700。 + + - name: NGINX Plus | 下载 NGINX Plus CA 证书 + get_url: url=https://cs.nginx.com/static/files/CA.crt dest=/etc/ssl/nginx/CA.crt mode=0700 + +同样,我们告诉 Ansible 使用 get_url 模块下载 NGINX Plus repo 文件,并将其复制到 **/etc/yum.repos.d** 目录下在 NGINX Plus 服务器上。 + + - name: NGINX Plus | 下载 yum NGINX Plus 仓库 + get_url: url=https://cs.nginx.com/static/files/nginx-plus-7.repo dest=/etc/yum.repos.d/nginx-plus-7.repo mode=0700 + +最后两节的 name 告诉 Ansible 使用 yum 和 service 模块下载并启动 NGINX Plus。 + + - name: NGINX Plus | 安装 NGINX Plus + yum: + name: nginx-plus + state: latest + + - name: NGINX Plus | 启动 NGINX Plus + service: + name: nginx + state: started + +#### 运行 Ansible 来部署 NGINX Plus #### + +在保存 **install_nginx_plus.yml** 文件后,然后运行 ansible-playbook 命令来部署 NGINX Plus。同样在这里,我们使用 --ask-pass 参数使用 Ansible 提示输入 SSH 密码并把它传递给每个 NGINX Plus 服务器,指定路径在 **deploy.yml** 文件中。 + + $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx-plus/deploy.yml + + PLAY [nginx] ****************************************************************** + + GATHERING FACTS *************************************************************** + ok: [172.16.239.140] + + TASK: [NGINX Plus | Creating NGINX Plus ssl cert repo directory] ************** + changed: [172.16.239.140] + + TASK: [NGINX Plus | Copying NGINX Plus repository certificate] **************** + changed: [172.16.239.140] + + TASK: [NGINX Plus | Copying NGINX Plus repository key] ************************ + changed: [172.16.239.140] + + TASK: [NGINX Plus | Downloading NGINX Plus CA certificate] ******************** + changed: [172.16.239.140] + + TASK: [NGINX Plus | Downloading yum NGINX Plus repository] 
******************** + changed: [172.16.239.140] + + TASK: [NGINX Plus | Installing NGINX Plus] ************************************ + changed: [172.16.239.140] + + TASK: [NGINX Plus | Starting NGINX Plus] ************************************** + changed: [172.16.239.140] + + PLAY RECAP ******************************************************************** + 172.16.239.140 : ok=8 changed=7 unreachable=0 failed=0 + +playbook 的 recap 是成功的。现在,使用 curl 命令来验证 NGINX Plus 是否在运行。太好了,我们得到的是 200 OK!成功了!我们使用 Ansible 成功地安装了 NGINX Plus。 + + $ curl -Is http://172.16.239.140 | grep HTTP + HTTP/1.1 200 OK + +### 在 Ubuntu 上创建一个 Ansible Playbook 来安装 NGINX 和 NGINX Plus ### + +此过程在 [Ubuntu 服务器][27] 上部署 NGINX 和 NGINX Plus 与 CentOS 很相似,我将一步一步的指导来完成整个部署文件,并指出和 CentOS 的细微差异。 + +首先和 CentOS 一样,创建 Ansible 目录结构和主要的 Ansible 部署文件。也创建 **/etc/ansible/hosts** 文件来描述 [创建 NGINX 服务器的列表][28]。对于 NGINX Plus,你也需要复制证书和密钥在此步中 [复制 NGINX Plus 证书和密钥到 Ansible 服务器][29]。 + +下面是开源 NGINX 的 **install_nginx.yml** 部署文件。在第一部分,我们使用 [apt_key][30] 模块导入 Nginx 的签名密钥。接下来的两节使用[lineinfile][31] 模块来添加 URLs 到 **sources.list** 文件中。最后,我们使用 [apt][32] 模块来更新缓存并安装 NGINX(apt 取代了我们在 CentOS 中部署时的 yum 模块)。 + + # ./ansible-nginx/tasks/install_nginx.yml + + - name: NGINX | 添加 NGINX 签名密钥 + apt_key: url=http://nginx.org/keys/nginx_signing.key state=present + + - name: NGINX | 为 NGINX 添加 sources.list deb url + lineinfile: dest=/etc/apt/sources.list line="deb http://nginx.org/packages/mainline/ubuntu/ trusty nginx" + + - name: NGINX Plus | 为 NGINX 添加 sources.list deb-src url + lineinfile: dest=/etc/apt/sources.list line="deb-src http://nginx.org/packages/mainline/ubuntu/ trusty nginx" + + - name: NGINX | 更新 apt 缓存 + apt: + update_cache: yes + + - name: NGINX | 安装 NGINX + apt: + pkg: nginx + state: latest + + - name: NGINX | 启动 NGINX + service: + name: nginx + state: started +下面是 NGINX Plus 的部署文件 **install_nginx.yml**。前四节设置了 NGINX Plus 密钥和证书。然后,我们用 apt_key 模块为开源的 NGINX 导入签名密钥,get_url 模块为 NGINX Plus 下载 apt 配置文件。[shell][33] 模块使用 printf 命令写下输出到 
**nginx-plus.list** 文件中在**sources.list.d** 目录。最终的 name 模块是为开源 NGINX 的。 + + # ./ansible-nginx-plus/tasks/install_nginx_plus.yml + + - name: NGINX Plus | 创建 NGINX Plus ssl 证书 repo 目录 + file: path=/etc/ssl/nginx state=directory group=root mode=0700 + + - name: NGINX Plus | 复制 NGINX Plus 仓库证书 + copy: src=/etc/ssl/nginx/nginx-repo.crt dest=/etc/ssl/nginx/nginx-repo.crt owner=root group=root mode=0700 + + - name: NGINX Plus | 复制 NGINX Plus 仓库密钥 + copy: src=/etc/ssl/nginx/nginx-repo.key dest=/etc/ssl/nginx/nginx-repo.key owner=root group=root mode=0700 + + - name: NGINX Plus | 安装 NGINX Plus CA 证书 + get_url: url=https://cs.nginx.com/static/files/CA.crt dest=/etc/ssl/nginx/CA.crt mode=0700 + + - name: NGINX Plus | 添加 NGINX Plus 签名密钥 + apt_key: url=http://nginx.org/keys/nginx_signing.key state=present + + - name: NGINX Plus | 安装 Apt-Get NGINX Plus 仓库 + get_url: url=https://cs.nginx.com/static/files/90nginx dest=/etc/apt/apt.conf.d/90nginx mode=0700 + + - name: NGINX Plus | 为 NGINX Plus 添加 sources.list url + shell: printf "deb https://plus-pkgs.nginx.com/ubuntu `lsb_release -cs` nginx-plus\n" >/etc/apt/sources.list.d/nginx-plus.list + + - name: NGINX Plus | 运行 apt-get update + apt: + update_cache: yes + + - name: NGINX Plus | 安装 NGINX Plus 通过 apt-get + apt: + pkg: nginx-plus + state: latest + + - name: NGINX Plus | 启动 NGINX Plus + service: + name: nginx + state: started + +现在我们已经准备好运行 ansible-playbook 命令: + + $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx-plus/deploy.yml + +你应该得到一个成功的 play recap。如果你没有成功,你可以使用 verbose 参数,以帮助你解决在 [运行 Ansible 来部署 NGINX][34] 中出现的问题。 + +### 小结 ### + +我在这个教程中演示是什么是 Ansible,可以做些什么来帮助你自动部署 NGINX 或 NGINX Plus,这仅仅是个开始。还有许多有用的模块,用户账号管理,自定义配置模板等。如果你有兴趣了解更多关于这些,请访问 [Ansible 官方文档][35]。 + +要了解更多关于 Ansible,来听我讲用 Ansible 部署 NGINX Plus 在[NGINX.conf 2015][36],9月22-24日在旧金山。 + +-------------------------------------------------------------------------------- + +via: https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/ + +作者:[Kevin Jones][a] 
+译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.nginx.com/blog/author/kjones/ +[1]:http://www.ansible.com/ +[2]:http://www.paramiko.org/ +[3]:http://www.openssh.com/ +[4]:http://nginx.org/en/ +[5]:https://www.nginx.com/products/ +[6]:http://www.centos.org/ +[7]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#ubuntu +[8]:http://docs.ansible.com/ansible/intro_installation.html#installing-the-control-machine +[9]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#deploy-nginx +[10]:http://docs.ansible.com/ansible/playbooks_roles.html#task-include-files-and-encouraging-reuse +[11]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#list-nginx +[12]:http://docs.ansible.com/ansible/YAMLSyntax.html +[13]:http://docs.ansible.com/ansible/yum_module.html +[14]:http://docs.ansible.com/ansible/service_module.html +[15]:http://nginx.org/en/linux_packages.html +[16]:http://wiki.centos.org/HowTos/Network/SecuringSSH +[17]:http://curl.haxx.se/ +[18]:https://cs.nginx.com/ +[19]:https://www.nginx.com/#free-trial +[20]:http://linux.die.net/man/1/scp +[21]:https://winscp.net/eng/download.php +[22]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#playbook-nginx +[23]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#list-nginx +[24]:http://docs.ansible.com/ansible/file_module.html +[25]:http://docs.ansible.com/ansible/copy_module.html +[26]:http://docs.ansible.com/ansible/get_url_module.html +[27]:http://www.ubuntu.com/ +[28]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#list-nginx +[29]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#copy-cert-key +[30]:http://docs.ansible.com/ansible/apt_key_module.html +[31]:http://docs.ansible.com/ansible/lineinfile_module.html +[32]:http://docs.ansible.com/ansible/apt_module.html 
+[33]:http://docs.ansible.com/ansible/shell_module.html +[34]:https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/#deploy-nginx +[35]:http://docs.ansible.com/ +[36]:https://www.nginx.com/nginxconf/ From 243840381bc7c0505242a566618368e44cb2de0d Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 14 Sep 2015 16:18:35 +0800 Subject: [PATCH 511/697] =?UTF-8?q?20150914-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md | 102 ++++++++++++++++++ ...orecasts from the command line on Linux.md | 76 +++++++++++++ ...move unused old kernel images on Ubuntu.md | 68 ++++++++++++ 3 files changed, 246 insertions(+) create mode 100644 sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md create mode 100644 sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md create mode 100644 sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md diff --git a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md new file mode 100644 index 0000000000..eecddacf62 --- /dev/null +++ b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md @@ -0,0 +1,102 @@ +How to Setup Node JS v4.0.0 on Ubuntu 14.04 / 15.04 +================================================================================ +Hi everyone, Node.JS Version 4.0.0 has been out, the popular server-side JavaScript platform has combines the Node.js and io.js code bases. This release represents the combined efforts encapsulated in both the Node.js project and the io.js project that are now combined in a single codebase. 
The most important change is that Node.js now ships with version 4.5 of Google's V8 JavaScript engine, which is the same version that ships with the current Chrome browser. Being able to track V8’s releases more closely means Node.js runs JavaScript faster, more securely, and with the ability to use many desirable ES6 language features. + +![Node JS](http://blog.linoxide.com/wp-content/uploads/2015/09/nodejs.png) + +Node.js 4.0.0 aims to provide an easy update path for current users of io.js and node, as there are no major API changes. Let’s see how you can easily get it installed and set up on an Ubuntu server by following this simple article. + +### Basic System Setup ### + +Node works perfectly on Linux, Macintosh, and Solaris operating systems, and among the Linux distributions it gives the best results on Ubuntu. That's why we will set it up on Ubuntu 15.04, while the same steps can be followed on Ubuntu 14.04. + +#### 1) System Resources #### + +The basic system resources for Node depend upon the size of your infrastructure requirements. In this tutorial we will set up Node on a server with 1 GB RAM, a 1 GHz processor and 10 GB of available disk space, with only a minimal set of packages installed, that is, with no web or database server packages. + +#### 2) System Update #### + +It is always recommended to keep your system up to date with the latest patches and updates, so before we move on to the installation of Node, let's log in to the server with super user privileges and run the update command. + + # apt-get update + +#### 3) Installing Dependencies #### + +Node JS only requires some basic system and software utilities to be present on your server for a successful installation, such as 'make', 'gcc' and 'wget'. Let's run the command below to get them installed if they are not already present.
+ + # apt-get install python gcc make g++ wget + +### Download Latest Node JS v4.0.0 ### + +Let's download the latest Node JS version 4.0.0 from the [Node JS Download Page][1]. + +![nodejs download](http://blog.linoxide.com/wp-content/uploads/2015/09/download.png) + +We will copy the link location of its latest package and download it using the 'wget' command as shown. + + # wget https://nodejs.org/download/rc/v4.0.0-rc.1/node-v4.0.0-rc.1.tar.gz + +Once the download completes, unpack the archive using the 'tar' command as shown. + + # tar -zxvf node-v4.0.0-rc.1.tar.gz + +![wget nodejs](http://blog.linoxide.com/wp-content/uploads/2015/09/wget.png) + +### Installing Node JS v4.0.0 ### + +Now we have to start the installation of Node JS from its downloaded source code. So, change into the unpacked directory and configure the source code by running its configuration script before compiling it on your Ubuntu server. + + root@ubuntu-15:~/node-v4.0.0-rc.1# ./configure + +![Installing NodeJS](http://blog.linoxide.com/wp-content/uploads/2015/09/configure.png) + +Now run the 'make install' command to compile and install the Node JS package as shown. + + root@ubuntu-15:~/node-v4.0.0-rc.1# make install + +The make command will take a couple of minutes while compiling the binaries, so after executing the above command, wait for a while and keep calm. + +### Testing Node JS Installation ### + +Once the compilation process is complete, we will test whether everything went fine. Let's run the following command to confirm the installed version of Node JS. + + root@ubuntu-15:~# node -v + v4.0.0-pre + +By executing 'node' without any arguments from the command line you will be dropped into the REPL (Read-Eval-Print-Loop), which has simplistic emacs line-editing and lets you interactively run JavaScript and see the results.
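Besides the interactive REPL, you can run one-liners non-interactively with node -e. The snippet below is a quick sanity check (a sketch, assuming node is on your PATH after the steps above) that also exercises an ES6 feature, template literals, enabled by the bundled V8 engine:

```shell
#!/bin/sh
# Quick non-interactive sanity checks for a fresh Node install
# (a sketch; assumes `node` is on PATH after `make install`).
command -v node >/dev/null 2>&1 || { echo "node not found on PATH"; exit 0; }

node -e 'console.log("1 + 2 =", 1 + 2)'
node -e 'console.log("Running", process.version)'
# ES6 template literals, one of the features enabled by the bundled V8:
node -e 'var who = "Node"; console.log(`Hello from ${who}`)'
```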
+ +![node version](http://blog.linoxide.com/wp-content/uploads/2015/09/node.png) + +### Writing Test Program ### + +We can also try out a very simple console program to test for a successful installation and the proper working of Node JS. To do so, we will create a file named "test.js", write the following code into it, and save the changes as shown. + + root@ubuntu-15:~# vim test.js + var util = require("util"); + console.log("Hello! This is a Node Test Program"); + :wq! + +Now, in order to run the above program, execute the command below from the command prompt. + + root@ubuntu-15:~# node test.js + +![Node Program](http://blog.linoxide.com/wp-content/uploads/2015/09/node-test.png) + +Upon successful installation we will get the output shown on the screen. The program loads the "util" class into a variable "util" and then uses the "util" object to perform the console tasks, while console.log is a command similar to cout in C++. + +### Conclusion ### + +That’s it. Hope this gives you a good idea of getting started with Node.js on Ubuntu, whether you are new to it or already developing applications with Node.js. After all, we can expect significant performance gains with Node JS version 4.0.0.
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/ubuntu-how-to/setup-node-js-4-0-ubuntu-14-04-15-04/ + +作者:[Kashif Siddique][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/kashifs/ +[1]:https://nodejs.org/download/rc/v4.0.0-rc.1/ \ No newline at end of file diff --git a/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md b/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md new file mode 100644 index 0000000000..c25d7684d1 --- /dev/null +++ b/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md @@ -0,0 +1,76 @@ +Linux FAQs with Answers--How to check weather forecasts from the command line on Linux +================================================================================ +> **Question**: I often check local weather forecasts on the Linux desktop. However, is there an easy way to access weather forecast information in the terminal environment, where I don't have access to desktop widgets or web browser? + +For Linux desktop users, there are many ways to access weather forecasts, e.g., using standalone weather apps, desktop widgets, or panel applets. If your work environment is terminal-based, there are also several ways to access weather forecasts from the command line. + +Among them is [wego][1], **a cute little weather app for the terminal**. Using an ncurses-based fancy interface, this command-line app allows you to see current weather conditions and forecasts at a glance. It retrieves the weather forecasts for the next 5 days via a weather forecast API. + +### Install Wego on Linux ### + +Installation of wego is pretty simple. 
wego is written in Go, so the first step is to [install the Go language][2]. After installing Go, proceed to install wego as follows.
+
+    $ go get github.com/schachmat/wego
+
+The wego tool will be installed under $GOPATH/bin, so add $GOPATH/bin to your $PATH variable.
+
+    $ echo 'export PATH="$PATH:$GOPATH/bin"' >> ~/.bashrc
+    $ source ~/.bashrc
+
+Now go ahead and invoke wego from the command line.
+
+    $ wego
+
+The first time you run wego, it will generate a config file (~/.wegorc), where you need to specify a weather API key.
+
+You can obtain a free API key from [worldweatheronline.com][3]. Free sign-up is quick and easy. You only need a valid email address.
+
+![](https://farm6.staticflickr.com/5781/21317466341_5a368b0d26_c.jpg)
+
+Your .wegorc will look like the following.
+
+![](https://farm6.staticflickr.com/5620/21121418558_df0d27cd0a_b.jpg)
+
+Other than the API key, you can specify in ~/.wegorc your preferred location, the use of metric or imperial units, and the language.
+
+Note that the weather API is rate-limited: 5 queries per second and 250 queries per day.
+
+When you invoke the wego command again, you will see the latest weather forecast (for your preferred location), shown as follows.
+
+![](https://farm6.staticflickr.com/5776/21121218110_dd51e03ff4_c.jpg)
+
+The displayed weather information includes: (1) temperature, (2) wind direction and speed, (3) visibility, and (4) precipitation amount and probability.
+
+By default, it will show a 3-day weather forecast. To change this behavior, you can supply the number of days (up to five) as an argument. For example, to see a 5-day forecast:
+
+    $ wego 5
+
+If you want to check the weather of any other location, you can specify the city name.
+
+    $ wego Seattle
+
+### Troubleshooting ###
+
+1. You encounter the following error while running wego.
+ + user: Current not implemented on linux/amd64 + +This error can happen when you run wego on a platform which is not supported by the native Go compiler gc (e.g., Fedora). In that case, you can compile the program using gccgo, a compiler-frontend for Go language. This can be done as follows. + + $ sudo yum install gcc-go + $ go get -compiler=gccgo github.com/schachmat/wego + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/weather-forecasts-command-line-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:https://github.com/schachmat/wego +[2]:http://ask.xmodulo.com/install-go-language-linux.html +[3]:https://developer.worldweatheronline.com/auth/register \ No newline at end of file diff --git a/sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md b/sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md new file mode 100644 index 0000000000..9f13543292 --- /dev/null +++ b/sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md @@ -0,0 +1,68 @@ +Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu +================================================================================ +> **Question**: I have upgraded the kernel on my Ubuntu many times in the past. Now I would like to uninstall unused old kernel images to save some disk space. What is the easiest way to uninstall earlier versions of the Linux kernel on Ubuntu? + +In Ubuntu environment, there are several ways for the kernel to get upgraded. On Ubuntu desktop, Software Updater allows you to check for and update to the latest kernel on a daily basis. 
On Ubuntu server, the unattended-upgrades package takes care of upgrading the kernel automatically as part of important security updates. Otherwise, you can manually upgrade the kernel using the apt-get or aptitude command.
+
+Over time, these ongoing kernel upgrades will leave you with a number of unused old kernel images accumulated on your system, wasting disk space. Each kernel image and its associated modules/header files occupy 200-400MB of disk space, so the space wasted by unused kernel images adds up quickly.
+
+![](https://farm1.staticflickr.com/636/21352725115_29ae7aab5f_c.jpg)
+
+The GRUB boot manager maintains a GRUB entry for each old kernel, in case you want to boot into it.
+
+![](https://farm6.staticflickr.com/5803/21164866468_07760fc23c_z.jpg)
+
+As part of disk cleanup, you can consider removing old kernel images if you haven't used them for a while.
+
+### How to Clean up Old Kernel Images ###
+
+Before you remove old kernel images, remember that it is recommended to keep at least two kernel images (the latest one and an extra older version), in case the primary one goes wrong. That said, let's see how to uninstall old kernel images on the Ubuntu platform.
+
+In Ubuntu, kernel images consist of the following packages.
+
+- **linux-image-**: kernel image
+- **linux-image-extra-**: extra kernel modules
+- **linux-headers-**: kernel header files
+
+First, check what kernel image(s) are installed on your system.
+
+    $ dpkg --list | grep linux-image
+    $ dpkg --list | grep linux-headers
+
+Among the listed kernel images, you can remove a particular version (e.g., 3.19.0-15) as follows.
+
+    $ sudo apt-get purge linux-image-3.19.0-15
+    $ sudo apt-get purge linux-headers-3.19.0-15
+
+The above commands will remove the kernel image and its associated kernel modules and header files.
+
+Note that removing an old kernel will automatically trigger the installation of the latest Linux kernel image if you haven't upgraded to it yet.
Also, after the old kernel is removed, GRUB configuration will automatically be updated to remove the corresponding GRUB entry from GRUB menu. + +If you have many unused kernels, you can remove multiple of them in one shot using the following shell expansion syntax. Note that this brace expansion will work only for bash or any compatible shells. + + $ sudo apt-get purge linux-image-3.19.0-{18,20,21,25} + $ sudo apt-get purge linux-headers-3.19.0-{18,20,21,25} + +![](https://farm6.staticflickr.com/5619/21352725355_39cc4fc2d0_c.jpg) + +The above command will remove 4 kernel images: 3.19.0-18, 3.19.0-20, 3.19.0-21 and 3.19.0-25. + +If GRUB configuration is not properly updated for whatever reason after old kernels are removed, you can try to update GRUB configuration manually with update-grub2 command. + + $ sudo update-grub2 + +Now reboot and verify that your GRUB menu has been properly cleaned up. + +![](https://farm1.staticflickr.com/593/20731623163_cccfeac854_z.jpg) + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/remove-kernel-images-ubuntu.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file From 9671625513710b143f7dda469ba3ad40a11c6fd9 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 14 Sep 2015 16:32:11 +0800 Subject: [PATCH 512/697] =?UTF-8?q?20150914-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
Using screenfetch and linux_logo Tools.md | 227 ++++++++++++++++++
 1 file changed, 227 insertions(+)
 create mode 100644 sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md

diff --git a/sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md b/sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md
new file mode 100644
index 0000000000..6640454f07
--- /dev/null
+++ b/sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md
@@ -0,0 +1,227 @@
+Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools
+================================================================================
+Do you want to display a super cool logo of your Linux distribution along with basic hardware information? Look no further: try the awesome screenfetch and linux_logo utilities.
+
+### Say hello to screenfetch ###
+
+screenFetch is a CLI bash script to show system/theme info in screenshots. It runs on Linux, OS X, FreeBSD and many other Unix-like systems. From the man page:
+
+> This handy Bash script can be used to generate one of those nifty terminal theme information + ASCII distribution logos you see in everyone's screenshots nowadays. It will auto-detect your distribution and display an ASCII version of that distribution's logo and some valuable information to the right.
+
+#### Installing screenfetch on Linux ####
+
+Open the Terminal application.
Simply type the following [apt-get command][1] on a Debian or Ubuntu or Mint Linux based system: + + $ sudo apt-get install screenfetch + +![](http://s0.cyberciti.org/uploads/cms/2015/09/ubuntu-debian-linux-apt-get-install-screenfetch.jpg) + +Fig.01: Installing screenfetch using apt-get + +#### Installing screenfetch Mac OS X #### + +Type the following command: + + $ brew install screenfetch + +![](http://s0.cyberciti.org/uploads/cms/2015/09/apple-mac-osx-install-screenfetch.jpg) + +Fig.02: Installing screenfetch using brew command + +#### Installing screenfetch on FreeBSD #### + +Type the following pkg command: + + $ sudo pkg install sysutils/screenfetch + +![](http://s0.cyberciti.org/uploads/cms/2015/09/freebsd-install-pkg-screenfetch.jpg) + +Fig.03: FreeBSD install screenfetch using pkg + +#### Installing screenfetch on Fedora Linux #### + +Type the following dnf command: + + $ sudo dnf install screenfetch + +![](http://s0.cyberciti.org/uploads/cms/2015/09/fedora-dnf-install-screenfetch.jpg) + +Fig.04: Fedora Linux 22 install screenfetch using dnf + +#### How do I use screefetch utility? #### + +Simply type the following command: + + $ screenfetch + +Here is the output from various operating system: + +![](http://s0.cyberciti.org/uploads/cms/2015/09/fedora-screenfetch-300x193.jpg) + +Screenfetch on Fedora + +![](http://s0.cyberciti.org/uploads/cms/2015/09/screenfetch-osx-300x213.jpg) + +Screenfetch on OS X + +![](http://s0.cyberciti.org/uploads/cms/2015/09/screenfetch-freebsd-300x143.jpg) + +Screenfetch on FreeBSD + +![](http://s0.cyberciti.org/uploads/cms/2015/09/debian-ubutnu-screenfetch-outputs-300x279.jpg) + +Screenfetch on Debian Linux + +#### Take screenshot #### + +To take a screenshot and to save a file, enter: + + $ screenfetch -s + +You will see a screenshot file at ~/Desktop/screenFetch-*.jpg. To take a screenshot and upload to imgur directly, enter: + + $ screenfetch -su imgur + +**Sample outputs:** + + -/+:. veryv@Viveks-MacBook-Pro + :++++. 
OS: 64bit Mac OS X 10.10.5 14F27 + /+++/. Kernel: x86_64 Darwin 14.5.0 + .:-::- .+/:-``.::- Uptime: 3d 1h 36m + .:/++++++/::::/++++++/:` Packages: 56 + .:///////////////////////:` Shell: bash 3.2.57 + ////////////////////////` Resolution: 2560x1600 1920x1200 + -+++++++++++++++++++++++` DE: Aqua + /++++++++++++++++++++++/ WM: Quartz Compositor + /sssssssssssssssssssssss. WM Theme: Blue + :ssssssssssssssssssssssss- Font: Not Found + osssssssssssssssssssssssso/` CPU: Intel Core i5-4288U CPU @ 2.60GHz + `syyyyyyyyyyyyyyyyyyyyyyyy+` GPU: Intel Iris + `ossssssssssssssssssssss/ RAM: 6405MB / 8192MB + :ooooooooooooooooooo+. + `:+oo+/:-..-:/+o+/- + + Taking shot in 3.. 2.. 1.. 0. + ==> Uploading your screenshot now...your screenshot can be viewed at http://imgur.com/HKIUznn + +You can visit [http://imgur.com/HKIUznn][2] to see uploaded screenshot. + +### Say hello to linux_logo ### + +The linux_logo program generates a color ANSI picture of a penguin which includes some system information obtained from the /proc filesystem. + +#### Installation #### + +Simply type the following command as per your Linux distro. + +#### Debian/Ubutnu/Mint #### + + # apt-get install linux_logo + +#### CentOS/RHEL/Older Fedora #### + + # yum install linux_logo + +#### Fedora Linux v22+ or newer #### + + # dnf install linux_logo + +#### Run it #### + +Simply type the following command: + + $ linux_logo + +![](http://s0.cyberciti.org/uploads/cms/2015/09/debian-linux_logo.jpg) + +linux_logo in action + +#### But wait, there's more! 
#### + +You can see a list of compiled in logos using: + + $ linux_logo -f -L list + +**Sample outputs:** + + Available Built-in Logos: + Num Type Ascii Name Description + 1 Classic Yes aix AIX Logo + 2 Banner Yes bsd_banner FreeBSD Logo + 3 Classic Yes bsd FreeBSD Logo + 4 Classic Yes irix Irix Logo + 5 Banner Yes openbsd_banner OpenBSD Logo + 6 Classic Yes openbsd OpenBSD Logo + 7 Banner Yes solaris The Default Banner Logos + 8 Banner Yes banner The Default Banner Logo + 9 Banner Yes banner-simp Simplified Banner Logo + 10 Classic Yes classic The Default Classic Logo + 11 Classic Yes classic-nodots The Classic Logo, No Periods + 12 Classic Yes classic-simp Classic No Dots Or Letters + 13 Classic Yes core Core Linux Logo + 14 Banner Yes debian_banner_2 Debian Banner 2 + 15 Banner Yes debian_banner Debian Banner (white) + 16 Classic Yes debian Debian Swirl Logos + 17 Classic Yes debian_old Debian Old Penguin Logos + 18 Classic Yes gnu_linux Classic GNU/Linux + 19 Banner Yes mandrake Mandrakelinux(TM) Banner + 20 Banner Yes mandrake_banner Mandrake(TM) Linux Banner + 21 Banner Yes mandriva Mandriva(TM) Linux Banner + 22 Banner Yes pld PLD Linux banner + 23 Classic Yes raspi An ASCII Raspberry Pi logo + 24 Banner Yes redhat RedHat Banner (white) + 25 Banner Yes slackware Slackware Logo + 26 Banner Yes sme SME Server Banner Logo + 27 Banner Yes sourcemage_ban Source Mage GNU/Linux banner + 28 Banner Yes sourcemage Source Mage GNU/Linux large + 29 Banner Yes suse SUSE Logo + 30 Banner Yes ubuntu Ubuntu Logo + + Do "linux_logo -L num" where num is from above to get the appropriate logo. + Remember to also use -a to get ascii version. 
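The numbered list above is easy to drive from a loop. The following bash sketch builds one linux_logo invocation per logo number taken from that list; it echoes each command instead of executing it, so it is safe to try even on a box where linux_logo is not installed. Drop the `echo` to actually draw the logos.

```shell
#!/bin/bash
# Logo numbers picked from the list above: 3 = FreeBSD, 16 = Debian, 29 = SUSE.
for num in 3 16 29; do
    # echo makes this a dry run; remove it to render each logo for real.
    echo linux_logo -f -L "$num"
done
```

Removing the `echo` on a system with linux_logo installed will render the three logos back to back.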
+
+To see the aix logo, enter:
+
+    $ linux_logo -f -L aix
+
+To see the openbsd logo:
+
+    $ linux_logo -f -L openbsd
+
+Or just see some random Linux logo:
+
+    $ linux_logo -f -L random_xy
+
+You [can combine a bash for loop as follows to display various logos][3]:
+
+![](http://s0.cyberciti.org/uploads/cms/2015/09/linux-logo-fun.gif)
+
+Gif 01: linux_logo and bash for loop for fun and profit
+
+### Getting help ###
+
+Simply type the following commands:
+
+    $ screenfetch -h
+    $ linux_logo -h
+
+**References**
+
+- [screenFetch home page][4]
+- [linux_logo home page][5]
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/hardware/howto-display-linux-logo-in-bash-terminal-using-screenfetch-linux_logo/
+
+作者:Vivek Gite
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
+[2]:http://imgur.com/HKIUznn
+[3]:http://www.cyberciti.biz/faq/bash-for-loop/
+[4]:https://github.com/KittyKatt/screenFetch
+[5]:https://github.com/deater/linux_logo
\ No newline at end of file
From 007556724fa2b291158124a880c0202c531d5eac Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 14 Sep 2015 22:31:28 +0800
Subject: [PATCH 513/697] translating

---
 ...swers--How to remove unused old kernel images on Ubuntu.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md b/sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md
index 9f13543292..c8ba164ee8 100644
--- a/sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md
+++ b/sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md
@@ -1,3 +1,5 @@
+translating----geekpi + Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu ================================================================================ > **Question**: I have upgraded the kernel on my Ubuntu many times in the past. Now I would like to uninstall unused old kernel images to save some disk space. What is the easiest way to uninstall earlier versions of the Linux kernel on Ubuntu? @@ -65,4 +67,4 @@ via: http://ask.xmodulo.com/remove-kernel-images-ubuntu.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file +[a]:http://ask.xmodulo.com/author/nanni From 8a35c84de2eeb090193a5f21a8a7a29e795cdd48 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 14 Sep 2015 23:03:49 +0800 Subject: [PATCH 514/697] =?UTF-8?q?=E6=9B=B4=E6=AD=A3=E9=93=BE=E6=8E=A5?= =?UTF-8?q?=E9=94=99=E8=AF=AF?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...cess And Use Cloud Storage (SkyDrive etc.) In Linux.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/published/201311/3 Ways to Access And Use Cloud Storage (SkyDrive etc.) In Linux.md b/published/201311/3 Ways to Access And Use Cloud Storage (SkyDrive etc.) In Linux.md index e1716a1cd8..8972c25a3d 100644 --- a/published/201311/3 Ways to Access And Use Cloud Storage (SkyDrive etc.) In Linux.md +++ b/published/201311/3 Ways to Access And Use Cloud Storage (SkyDrive etc.) 
In Linux.md @@ -7,7 +7,7 @@ ![](http://main.makeuseoflimited.netdna-cdn.com/wp-content/uploads/2013/10/linux_accessing_cloud_ubuntu_one.jpg) -使用这种方式的明显好处就是你可以通过使用他们各自的官方应用访问你的各种云存储。目前,提供官方Linux客户端的服务提供商有[SpiderOak](1), [Dropbox](2), [Ubuntu One](3),[Copy](5)。[Ubuntu One](3)虽不出名但的确是[一个不错的云存储竞争着](4)。[Copy][5]则提供比Dropbox更多的空间,是[Dropbox的替代选择之一](6)。使用这些官方Linux客户端可以保持你的电脑与他们的服务器之间的通信,还可以让你进行属性设置,如选择性同步。 +使用这种方式的明显好处就是你可以通过使用他们各自的官方应用访问你的各种云存储。目前,提供官方Linux客户端的服务提供商有[SpiderOak][1], [Dropbox][2], [Ubuntu One][3],[Copy][5]。[Ubuntu One][3]虽不出名但的确是[一个不错的云存储竞争着][4]。[Copy][5]则提供比Dropbox更多的空间,是[Dropbox的替代选择之一][6]。使用这些官方Linux客户端可以保持你的电脑与他们的服务器之间的通信,还可以让你进行属性设置,如选择性同步。 对于普通桌面用户,使用官方客户端是最好的选择,因为官方客户端可以提供最多的功能和最好的兼容性。使用它们也很简单,只需要下载他们对应你的发行版的软件包,然后安装安装完后在运行一下就Ok了。安装客户端时,它一般会指导你完成这些简单的过程。 @@ -25,9 +25,9 @@ 当你运行最后一条命令后,脚本会提醒你这是你第一次运行这个脚本。它将告诉你去浏览一个Dropbox的特定网页以便访问你的账户。它还会告诉你所有你需要放入网站的信息,这是为了让Dropbox给你App Key和App Secret以及赋予这个脚本你给予的访问权限。现在脚本就拥有了访问你账户的合法授权了。 -这些一旦完成,你就可以这个脚本执行各种任务了,例如上传、下载、删除、移动、复制、创建文件夹、查看文件、共享文件、查看文件信息和取消共享。对于全部的语法解释,你可以查看一下[这个页面](9)。 +这些一旦完成,你就可以这个脚本执行各种任务了,例如上传、下载、删除、移动、复制、创建文件夹、查看文件、共享文件、查看文件信息和取消共享。对于全部的语法解释,你可以查看一下[这个页面][9]。 -###通过[Storage Made Easy](7)将SkyDrive带到Linux上 +###通过[Storage Made Easy][7]将SkyDrive带到Linux上 微软并没有提供SkyDrive的官方Linux客户端,这一点也不令人惊讶。但是你并不意味着你不能在Linux上访问SkyDrive,记住:SkyDrive的web版本是可用的。 @@ -41,7 +41,7 @@ 第一次启动时。它会要求你登录,还有询问你要把云存储挂载到什么地方。在你做完了这些后,你就可以浏览你选择的文件夹,你还可以访问你的Storage Made Easy空间以及你的SkyDrive空间了!这种方法对于那些想在Linux上使用SkyDrive的人来说非常好,对于想把他们的多个云存储服务整合到一个地方的人来说也很不错。这种方法的缺点是你无法使用他们各自官方客户端中可以使用的特殊功能。 -因为现在在你的Linux桌面上也可以使用SkyDrive,接下来你可能需要阅读一下我写的[SkyDrive与Google Drive的比较](8)以便于知道究竟哪种更适合于你。 +因为现在在你的Linux桌面上也可以使用SkyDrive,接下来你可能需要阅读一下我写的[SkyDrive与Google Drive的比较][8]以便于知道究竟哪种更适合于你。 ###结论 From c8fcb72946f0c99ab54f75cfb30e19e79eada0c1 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 14 Sep 2015 23:21:19 +0800 Subject: [PATCH 515/697] translating 20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md --- ...4 How to Setup Node JS v4.0.0 on Ubuntu 14.04 
or 15.04.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md index eecddacf62..ba828d629d 100644 --- a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md +++ b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md @@ -1,3 +1,6 @@ + +translating by ezio + How to Setup Node JS v4.0.0 on Ubuntu 14.04 / 15.04 ================================================================================ Hi everyone, Node.JS Version 4.0.0 has been out, the popular server-side JavaScript platform has combines the Node.js and io.js code bases. This release represents the combined efforts encapsulated in both the Node.js project and the io.js project that are now combined in a single codebase. The most important change is this Node.js is ships with version 4.5 of Google's V8 JavaScript engine, which is the same version that ships with the current Chrome browser. So, being able to more closely track V8’s releases means Node.js runs JavaScript faster, more securely, and with the ability to use many desirable ES6 language features. @@ -99,4 +102,4 @@ via: http://linoxide.com/ubuntu-how-to/setup-node-js-4-0-ubuntu-14-04-15-04/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/kashifs/ -[1]:https://nodejs.org/download/rc/v4.0.0-rc.1/ \ No newline at end of file +[1]:https://nodejs.org/download/rc/v4.0.0-rc.1/ From f4d5f650772e17507510b38561d28a84d246f63e Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 14 Sep 2015 23:22:46 +0800 Subject: [PATCH 516/697] translating 20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md --- ... 
check weather forecasts from the command line on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md b/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md index c25d7684d1..11dd713f74 100644 --- a/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md +++ b/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md @@ -1,3 +1,5 @@ +translating by ezio + Linux FAQs with Answers--How to check weather forecasts from the command line on Linux ================================================================================ > **Question**: I often check local weather forecasts on the Linux desktop. However, is there an easy way to access weather forecast information in the terminal environment, where I don't have access to desktop widgets or web browser? 
@@ -73,4 +75,4 @@ via: http://ask.xmodulo.com/weather-forecasts-command-line-linux.html [a]:http://ask.xmodulo.com/author/nanni [1]:https://github.com/schachmat/wego [2]:http://ask.xmodulo.com/install-go-language-linux.html -[3]:https://developer.worldweatheronline.com/auth/register \ No newline at end of file +[3]:https://developer.worldweatheronline.com/auth/register From 28f84f998cc58d4ede96d911761f76713d4ab647 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 14 Sep 2015 23:37:54 +0800 Subject: [PATCH 517/697] PUB:20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script @mr-ping --- ...nux Kernel in Ubuntu Easily via A Script.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) rename {translated/tech => published}/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md (73%) diff --git a/translated/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md b/published/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md similarity index 73% rename from translated/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md rename to published/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md index dbe5dec7cd..a2e5f1e276 100644 --- a/translated/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md +++ b/published/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md @@ -1,22 +1,22 @@ -使用脚本便捷地在Ubuntu系统中安装最新的Linux内核 +使用脚本便捷地在 Ubuntu 中安装最新 Linux 内核 ================================================================================ ![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) 想要安装最新的Linux内核吗?一个简单的脚本就可以在Ubuntu系统中方便的完成这项工作。 -Michael Murphy 写了一个脚本用来将最新的候选版、标准版、或者低延时版内核安装到 Ubuntu 系统中。这个脚本会在询问一些问题后从 [Ubuntu kernel mainline page][1] 下载安装最新的 Linux 内核包。 +Michael Murphy 写了一个脚本用来将最新的候选版、标准版、或者低延时版的内核安装到 Ubuntu 系统中。这个脚本会在询问一些问题后从 [Ubuntu 内核主线页面][1] 下载安装最新的 Linux 
内核包。 ### 通过脚本来安装、升级Linux内核: ### -1. 点击 [github page][2] 右上角的 “Download Zip” 来下载脚本。 +1. 点击这个 [github 页面][2] 右上角的 “Download Zip” 来下载该脚本。 -2. 鼠标右键单击用户下载目录下的 Zip 文件,选择 “Extract Here” 将其解压到此处。 +2. 鼠标右键单击用户下载目录下的 Zip 文件,选择 “在此展开” 将其解压。 -3. 右键点击解压后的文件夹,选择 “Open in Terminal” 在终端中导航到此文件夹下。 +3. 右键点击解压后的文件夹,选择 “在终端中打开” 到此文件夹下。 ![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/open-terminal.jpg) -此时将会打开一个终端,并且自动导航到结果文件夹下。如果你找不到 “Open in Terminal” 选项的话,在 Ubuntu 软件中心搜索安装 `nautilus-open-terminal` ,然后重新登录系统即可(也可以再终端中运行 `nautilus -q` 来取代重新登录系统的操作)。 +此时将会打开一个终端,并且自动导航到目标文件夹下。如果你找不到 “在终端中打开” 选项的话,在 Ubuntu 软件中心搜索安装 `nautilus-open-terminal` ,然后重新登录系统即可(也可以再终端中运行 `nautilus -q` 来取代重新登录系统的操作)。 4. 当进入终端后,运行以下命令来赋予脚本执行本次操作的权限。 chmod +x * @@ -39,7 +39,7 @@ Michael Murphy 写了一个脚本用来将最新的候选版、标准版、或 ### 如何移除旧的(或新的)内核: ### -1. 从Ubuntu软件中心安装 Synaptic Package Manager。 +1. 从 Ubuntu 软件中心安装 Synaptic Package Manager。 2. 打开 Synaptic Package Manager 然后如下操作: @@ -68,8 +68,8 @@ Michael Murphy 写了一个脚本用来将最新的候选版、标准版、或 via: http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ 作者:[Ji m][a] -译者:[译者ID](https://github.com/mr-ping) -校对:[校对者ID](https://github.com/校对者ID) +译者:[mr-ping](https://github.com/mr-ping) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 375b0ece3a84bcb9b9f0e9e144d8adf9b00b8337 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 15 Sep 2015 09:23:05 +0800 Subject: [PATCH 518/697] translated --- ...move unused old kernel images on Ubuntu.md | 70 ------------------- ...move unused old kernel images on Ubuntu.md | 69 ++++++++++++++++++ 2 files changed, 69 insertions(+), 70 deletions(-) delete mode 100644 sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md create mode 100644 translated/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md diff --git a/sources/tech/20150914 Linux FAQs with Answers--How to remove 
unused old kernel images on Ubuntu.md b/sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md deleted file mode 100644 index c8ba164ee8..0000000000 --- a/sources/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md +++ /dev/null @@ -1,70 +0,0 @@ -translating----geekpi - -Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu -================================================================================ -> **Question**: I have upgraded the kernel on my Ubuntu many times in the past. Now I would like to uninstall unused old kernel images to save some disk space. What is the easiest way to uninstall earlier versions of the Linux kernel on Ubuntu? - -In Ubuntu environment, there are several ways for the kernel to get upgraded. On Ubuntu desktop, Software Updater allows you to check for and update to the latest kernel on a daily basis. On Ubuntu server, the unattended-upgrades package takes care of upgrading the kernel automatically as part of important security updates. Otherwise, you can manually upgrade the kernel using apt-get or aptitude command. - -Over time, this ongoing kernel upgrade will leave you with a number of unused old kernel images accumulated on your system, wasting disk space. Each kernel image and associated modules/header files occupy 200-400MB of disk space, and so wasted space from unused kernel images will quickly add up. - -![](https://farm1.staticflickr.com/636/21352725115_29ae7aab5f_c.jpg) - -GRUB boot manager maintains GRUB entries for each old kernel, in case you want to boot into it. - -![](https://farm6.staticflickr.com/5803/21164866468_07760fc23c_z.jpg) - -As part of disk cleaning, you can consider removing old kernel images if you haven't used them for a while. 
- -### How to Clean up Old Kernel Images ### - -Before you remove old kernel images, remember that it is recommended to keep at least two kernel images (the latest one and an extra older version), in case the primary one goes wrong. That said, let's see how to uninstall old kernel images on Ubuntu platform. - -In Ubuntu, kernel images consist of the following packages. - -- **linux-image-**: kernel image -- **linux-image-extra-**: extra kernel modules -- **linux-headers-**: kernel header files - -First, check what kernel image(s) are installed on your system. - - $ dpkg --list | grep linux-image - $ dpkg --list | grep linux-headers - -Among the listed kernel images, you can remove a particular version (e.g., 3.19.0-15) as follows. - - $ sudo apt-get purge linux-image-3.19.0-15 - $ sudo apt-get purge linux-headers-3.19.0-15 - -The above commands will remove the kernel image, and its associated kernel modules and header files. - -Note that removing an old kernel will automatically trigger the installation of the latest Linux kernel image if you haven't upgraded to it yet. Also, after the old kernel is removed, GRUB configuration will automatically be updated to remove the corresponding GRUB entry from GRUB menu. - -If you have many unused kernels, you can remove multiple of them in one shot using the following shell expansion syntax. Note that this brace expansion will work only for bash or any compatible shells. - - $ sudo apt-get purge linux-image-3.19.0-{18,20,21,25} - $ sudo apt-get purge linux-headers-3.19.0-{18,20,21,25} - -![](https://farm6.staticflickr.com/5619/21352725355_39cc4fc2d0_c.jpg) - -The above command will remove 4 kernel images: 3.19.0-18, 3.19.0-20, 3.19.0-21 and 3.19.0-25. - -If GRUB configuration is not properly updated for whatever reason after old kernels are removed, you can try to update GRUB configuration manually with update-grub2 command. - - $ sudo update-grub2 - -Now reboot and verify that your GRUB menu has been properly cleaned up. 
- -![](https://farm1.staticflickr.com/593/20731623163_cccfeac854_z.jpg) - --------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/remove-kernel-images-ubuntu.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni diff --git a/translated/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md b/translated/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md new file mode 100644 index 0000000000..61eec350a9 --- /dev/null +++ b/translated/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md @@ -0,0 +1,69 @@ +Linux有问必答--如何删除Ubuntu上不再使用的老内核 +================================================================================ +> **提问**:过去我已经在我的Ubuntu上升级了几次内核。现在我想要删除这些旧的内核镜像来节省我的磁盘空间。如何用最简单的方法删除Ubuntu上先前版本的内核? 
+ +在Ubuntu上,有几个方法来升级内核。在Ubuntu桌面中,软件更新允许你每天检查并更新到最新的内核上。在Ubuntu服务器上,一个无人值守的包会自动更新内核最为一项最要的安全更新。然而,你可以手动用apt-get或者aptitude命令来更新。 + +随着时间的流逝,持续的内核更新会在系统中积聚大量的不再使用的内核,浪费你的磁盘空间。每个内核镜像和其相关联的模块/头文件会占用200-400MB的磁盘空间,因此由不再使用的内核而浪费的磁盘空间会快速地增加。 + +![](https://farm1.staticflickr.com/636/21352725115_29ae7aab5f_c.jpg) + +GRUB管理器为每个旧内核都维护了一个GRUB入口,防止你想要进入它们。 + +![](https://farm6.staticflickr.com/5803/21164866468_07760fc23c_z.jpg) + +作为磁盘清理的一部分,如果你不再使用这些,你可以考虑清理掉这些镜像。 + +### 如何清理旧内核镜像 ### + +在删除旧内核之前,记住最好留有2个最近的内核(最新的和上一个版本),以防主要的版本出错。现在就让我们看看如何在Ubuntu上清理旧内核。 + +在Ubuntu内核镜像包哈了以下的包。 + +- **linux-image-**: 内核镜像 +- **linux-image-extra-**: 额外的内核模块 +- **linux-headers-**: 内核头文件 + +首先检查系统中安装的内核镜像。 + + $ dpkg --list | grep linux-image + $ dpkg --list | grep linux-headers + +在列出的内核镜像中,你可以移除一个特定的版本(比如3.19.0-15)。 + + $ sudo apt-get purge linux-image-3.19.0-15 + $ sudo apt-get purge linux-headers-3.19.0-15 + +上面的命令会删除内核镜像和它相关联的内核模块和头文件。 + +updated to remove the corresponding GRUB entry from GRUB menu. +注意如果你还没有升级内核那么删除旧内核会自动触发安装新内核。这样在删除旧内核之后,GRUB配置会自动升级来移除GRUB菜单中相关GRUB入口。 + +如果你有很多没用的内核,你可以用shell表达式来一次性地删除多个内核。注意这个括号表达式只在bash或者兼容的shell中才有效。 + + $ sudo apt-get purge linux-image-3.19.0-{18,20,21,25} + $ sudo apt-get purge linux-headers-3.19.0-{18,20,21,25} + +![](https://farm6.staticflickr.com/5619/21352725355_39cc4fc2d0_c.jpg) + +上面的命令会删除4个内核镜像:3.19.0-18、3.19.0-20、3.19.0-21 和 3.19.0-25。 + +如果GRUB配置由于任何原因在删除旧内核后没有正确升级,你可以尝试手动用update-grub2命令来更新配置。 + + $ sudo update-grub2 + +现在就重启来验证GRUB菜单已经正确清理了。 + +![](https://farm1.staticflickr.com/593/20731623163_cccfeac854_z.jpg) + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/remove-kernel-images-ubuntu.html + +作者:[Dan Nanni][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni From a44737b019d57dbca79bf1aefc57e03716e88834 Mon 
Sep 17 00:00:00 2001 From: wxy Date: Tue, 15 Sep 2015 19:53:05 +0800 Subject: [PATCH 519/697] PUB:20150824 Fix No Bootable Device Found Error After Installing Ubuntu @ictlyh --- ...ice Found Error After Installing Ubuntu.md | 37 ++++++++++--------- 1 file changed, 19 insertions(+), 18 deletions(-) rename {translated/tech => published}/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md (58%) diff --git a/translated/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md b/published/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md similarity index 58% rename from translated/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md rename to published/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md index 91aa23d6aa..4b8e84bf1d 100644 --- a/translated/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md +++ b/published/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md @@ -1,44 +1,45 @@ -修复安装完 Ubuntu 后无可引导设备错误 +修复安装完 Ubuntu 后无可引导设备的错误 ================================================================================ -通常情况下,我启动 Ubuntu 和 Windows 双系统,但是这次我决定完全消除 Windows 纯净安装 Ubuntu。纯净安装 Ubuntu 完成后,结束时屏幕输出 **no bootable device found** 而不是进入 GRUB 界面。显然,安装搞砸了 UEFI 引导设置。 + +通常情况下,我会安装启动 Ubuntu 和 Windows 的双系统,但是这次我决定完全消除 Windows 纯净安装 Ubuntu。纯净安装 Ubuntu 完成后,结束时屏幕输出 **无可引导设备(no bootable device found)** 而不是进入 GRUB 界面。显然,安装搞砸了 UEFI 引导设置。 ![安装完 Ubuntu 后无可引导设备](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_1.jpg) -我会告诉你我是如何修复**在宏碁笔记本上安装 Ubuntu 后出现无可引导设备错误**。我声明了我使用的是宏碁灵越 R13,这很重要,因为我们需要更改固件设置,而这些设置可能因制造商和设备有所不同。 +我会告诉你我是如何修复**在宏碁笔记本上安装 Ubuntu 后出现无可引导设备错误**的。我声明了我使用的是宏碁灵越 R13,这很重要,因为我们需要更改固件设置,而这些设置可能因制造商和设备有所不同。 因此在你开始这里介绍的步骤之前,先看一下发生这个错误时我计算机的状态: -- 我的宏碁灵越 R13 预装了 Windows8.1 和 UEFI 引导管理器 -- 关闭了 Secure boot(我的笔记本刚维修过,维修人员又启用了它,直到出现了问题我才发现)。你可以阅读这篇博文了解[如何在宏碁笔记本中关闭 secure boot][1] -- 我通过选择清除所有东西安装 
Ubuntu,例如现有的 Windows 8.1,各种分区等。 +- 我的宏碁灵越 R13 预装了 Windows 8.1 和 UEFI 引导管理器 +- 安全引导( Secure boot)没有关闭,(我的笔记本刚维修过,维修人员又启用了它,直到出现了问题我才发现)。你可以阅读这篇博文了解[如何在宏碁笔记本中关闭安全引导(secure boot)][1] +- 我选择了清除所有东西安装 Ubuntu,例如现有的 Windows 8.1,各种分区等 - 安装完 Ubuntu 之后,从硬盘启动时我看到无可引导设备错误。但能从 USB 设备正常启动 -在我看来,没有禁用 secure boot 可能是这个错误的原因。但是,我没有数据支撑我的观点。这仅仅是预感。有趣的是,双系统启动 Windows 和 Linux 经常会出现这两个 Grub 问题: +在我看来,没有禁用安全引导(secure boot)可能是这个错误的原因。但是,我没有数据支撑我的观点。这仅仅是预感。有趣的是,双系统启动 Windows 和 Linux 经常会出现这两个 Grub 问题: -- [error: no such partition grub rescue][2] -- [Minimal BASH like line editing is supported][3] +- [错误:没有 grub 救援分区][2] +- [支持最小化 BASH 式的行编辑][3] 如果你遇到类似的情况,你可以试试我的修复方法。 ### 修复安装完 Ubuntu 后无可引导设备错误 ### -请原谅我没有丰富的图片。我的一加相机不能很好地拍摄笔记本屏幕。 +请原谅我的图片质量很差。我的一加相机不能很好地拍摄笔记本屏幕。 #### 第一步 #### -关闭电源并进入 boot 设置。我需要在宏碁灵越 R13 上快速地按 Fn+F2。如果你使用固态硬盘的话要按的非常快,因为固态硬盘启动速度很快。取决于你的制造商,你可能要用 Del 或 F10 或者 F12。 +关闭电源并进入引导设置。我需要在宏碁灵越 R13 上快速地按下 Fn+F2。如果你使用固态硬盘的话要按的非常快,因为固态硬盘启动速度很快。这取决于你的制造商,你可能要用 Del 或 F10 或者 F12。 #### 第二步 #### -在 boot 设置中,确保启用了 Secure Boot。它在 Boot 标签里。 +在引导设置中,确保启用了 Secure Boot。它在 Boot 标签里。 #### 第三步 #### -进入到 Security 标签,查找 “Select an UEFI file as trusted for executing” 并敲击回车。 +进入到 Security 标签,找到 “选择一个用于执行的可信任 UEFI 文件(Select an UEFI file as trusted for executing)” 并敲击回车。 ![修复无可引导设备错误](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_2.jpg) -特意说明,我们这一步是要在你的设备中添加 UEFI 设置文件(安装 Ubuntu 的时候生成)到可信 UEFI 启动。如果你记得的话,UEFI 启动的主要目的是提供安全性,由于(可能)没有禁用 Secure Boot,设备不会试图从新安装的操作系统中启动。添加它到类似白名单的可信列表,会使设备从 Ubuntu UEFI 文件启动。 +特意说明,我们这一步是要在你的设备中添加 UEFI 设置文件(安装 Ubuntu 的时候生成)到可信 UEFI 启动中。如果你记得的话,UEFI 启动的主要目的是提供安全性,由于(可能)没有禁用安全引导(Secure Boot),设备不会试图从新安装的操作系统中启动。添加它到类似白名单的可信列表,会使设备从 Ubuntu UEFI 文件启动。 #### 第四步 #### @@ -48,13 +49,13 @@ #### 第五步 #### -你应该可以看到 ,敲击回车。 +你应该可以看到 \ 了,敲击回车。 ![在 UEFI 中修复设置](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_4.jpg) #### 第六步 #### -在下一个屏幕中你会看到 。耐心点,马上就好了。 +在下一个屏幕中你会看到 \。耐心点,马上就好了。 ![安装完 Ubuntu 
后修复启动错误](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_5.jpg) @@ -71,7 +72,7 @@ #### 第八步 #### -当我们添加它到可信 EFI 文件并执行时,按 F10 保存并退出。 +当我们添加它到可信 EFI 文件并执行后,按 F10 保存并退出。 ![保存并退出固件设置](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_8.jpg) @@ -87,7 +88,7 @@ via: http://itsfoss.com/no-bootable-device-found-ubuntu/ 作者:[Abhishek][a] 译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From b813f6633db8bdeb9e72d9f1d09c1ac0a776b94b Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Tue, 15 Sep 2015 20:49:22 +0800 Subject: [PATCH 520/697] Create 20140320 Best command line tools for linux performance monitoring.md --- ... tools for linux performance monitoring.md | 83 +++++++++++++++++++ 1 file changed, 83 insertions(+) create mode 100644 sources/tech/20140320 Best command line tools for linux performance monitoring.md diff --git a/sources/tech/20140320 Best command line tools for linux performance monitoring.md b/sources/tech/20140320 Best command line tools for linux performance monitoring.md new file mode 100644 index 0000000000..6b0ed1ebc4 --- /dev/null +++ b/sources/tech/20140320 Best command line tools for linux performance monitoring.md @@ -0,0 +1,83 @@ +Best command line tools for linux performance monitoring +================================================================================ +Sometimes a system can be slow and many reasons can be the root cause. To identify the process that is consuming memory, disk I/O or processor capacity you need to use tools to see what is happening in an operation system. + +There are many tools to monitor a GNU/Linux server. In this article, I am providing 7 monitoring tools and i hope it will help you. 
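All of these monitors format numbers that the kernel itself exposes; as a quick sanity check before installing anything, the raw counters can be read straight from `/proc` (a minimal sketch, Linux-specific paths):

```shell
# 1-, 5- and 15-minute load averages plus running/total task counts:
cat /proc/loadavg
# The first memory counters (MemTotal, MemFree, ...) that htop and
# friends format into nicer displays:
head -n 3 /proc/meminfo
```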
+ +###Htop +Htop is an alternative to the top command, providing an interactive process viewer with more user-friendly output than top. + + htop also provides a better way to navigate to any process using the keyboard Up/Down keys, and it can also be operated with the mouse. + + Check our previous post:[How to install and use htop on RHEL/Centos and Fedora linux][1] +![Htop(Linux Process Monitoring)](http://lintut.com/wp-content/uploads/2013/11/Screenshot-from-2013-11-26-144444.png) +###dstat +Dstat is a versatile replacement for vmstat, iostat, netstat and ifstat. Dstat overcomes some of their limitations and adds some extra features, more counters and flexibility. Dstat is handy for monitoring systems during performance tuning tests, benchmarks or troubleshooting. + + Dstat allows you to view all of your system resources in real time; you can, e.g., compare disk utilization in combination with interrupts from your IDE controller, or compare the network bandwidth numbers directly with the disk throughput (in the same interval). +Dstat gives you detailed selective information in columns and clearly indicates in what magnitude and unit the output is displayed. Less confusion, fewer mistakes. And most importantly, it makes it very easy to write plugins to collect your own counters and extend it in ways you never expected. + + Dstat's output is designed by default to be interpreted by humans in real time; however, you can export details as CSV output to a file to be imported later into Gnumeric or Excel to generate graphs. +Check our previous post:[How to install and use dstat on RHEL/CentOS, Fedora and Debian/Ubuntu based distributions][2] +![Example dstat output](http://lintut.com/wp-content/uploads/2013/12/Screenshot-from-2013-12-26-085128.png) +###Collectl +Collectl is a light-weight performance monitoring tool capable of reporting interactively as well as logging to disk. 
It reports statistics on cpu, disk, infiniband, lustre, memory, network, nfs, process, quadrics, slabs and more in an easy-to-read format. +In this article I will show you how to install Collectl and sample its usage on Debian/Ubuntu and RHEL/CentOS and Fedora linux. + + Check our previous post:[Collectl-Monitoring system resources][3] + ![Collectl screen](http://lintut.com/wp-content/uploads/2014/03/collectlscreen1.png) + +###Nmon +nmon is a beautiful tool to monitor linux system performance. It works on Linux, IBM AIX Unix, Power, x86, amd64 and ARM-based systems such as the Raspberry Pi. The nmon command displays and records local system information. The command can run either in interactive or recording mode. + + Check our previous post: [Nmon – linux monitoring tools][4] + ![nmon startup screen](http://lintut.com/wp-content/uploads/2013/12/Screenshot-from-2013-12-26-234246.png) +###Saidar +Saidar is a curses-based application to display system statistics. It uses the libstatgrab library, which provides cross-platform access to statistics about the system on which it's run. Reported statistics include CPU, load, processes, memory, swap, network input and output, and disk activity along with free disk space. + + Check our previous post:[Saidar – system monitoring tool][5] + ![saidar -c](http://lintut.com/wp-content/uploads/2013/08/Screenshot-from-2013-12-16-223053.png) +###Sar +The sar utility, which is part of the sysstat package, can be used to review historical performance data on your server. System resource utilization can be seen for given time frames to help troubleshoot performance issues, or to optimize performance. + + Check our previous post:[Using Sar To Monitor System Performance][6] + ![Sar command](http://lintut.com/wp-content/uploads/2014/03/sar-cpu-unix.jpg) + + ###Glances + Glances is a cross-platform curses-based command line monitoring tool written in Python which uses the psutil library to grab information from the system. 
Glances monitors CPU, Load Average, Memory, Network Interfaces, Disk I/O, Processes and File System space utilization. + + Glances can dynamically adapt the displayed information depending on the terminal size. It can also work in a client/server mode for remote monitoring. + + Check our previous post: [Glances – Real Time System Monitoring Tool for Linux][7] + ![Glances](http://lintut.com/wp-content/uploads/2013/09/Screenshot-from-2013-09-07-213127.png) + + ###Atop + [Atop](http://www.atoptool.nl/) is an interactive monitor to view the load on a Linux system. It shows the occupation of the most critical hardware resources at the system level, i.e. cpu, memory, disk and network. It also shows which processes are responsible for the indicated load with respect to cpu and memory load at the process level. Disk load is shown if per-process “storage accounting” is active in the kernel or if the kernel patch ‘cnt’ has been installed. Network load is only shown per process if the kernel patch ‘cnt’ has been installed. 
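To see which of the tools above are already present on a given box, a small hedged loop over the common binary names works anywhere a POSIX shell does (package names may differ slightly between distributions):

```shell
for tool in htop dstat collectl nmon saidar sar glances atop; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: installed at $(command -v "$tool")"
    else
        echo "$tool: not found (check your distribution's repositories)"
    fi
done
```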
+ + + + + + +-------------------------------------------------------------------------------- + +via: http://lintut.com/best-command-line-tools-for-linux-performance-monitring/ + +作者:[rasho][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[1]:http://lintut.com/install-htop-in-rhel-centos-fedora-linux/ +[2]:http://lintut.com/dstat-linux-monitoring-tools/ +[3]:http://lintut.com/collectl-monitoring-system-resources/ +[4]:http://lintut.com/nmon-linux-monitoring-tools/ +[5]:http://lintut.com/saidar-system-monitoring-tool/ +[6]:http://lintut.com/using-sar-to-monitor-system-performance/ +[7]:http://lintut.com/glances-an-eye-on-your-system/ +[8]:http://lintut.com/atop-linux-system-resource-monitor/ From b74d29d80d67856bff17053b03e42d3ef780ec2e Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 15 Sep 2015 23:15:19 +0800 Subject: [PATCH 521/697] PUB:RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps @FSSlc --- ...or Analyzing text with grep and regexps.md | 209 ++++++++++++++ ...or Analyzing text with grep and regexps.md | 258 ------------------ 2 files changed, 209 insertions(+), 258 deletions(-) create mode 100644 published/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md delete mode 100644 translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md diff --git a/published/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md b/published/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md new file mode 100644 index 0000000000..995aab93e1 --- /dev/null +++ b/published/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md @@ -0,0 +1,209 
@@ +RHCSA 系列(四): 编辑文本文件及分析文本 +================================================================================ + +作为系统管理员的日常职责的一部分,每个系统管理员都必须处理文本文件,这包括编辑已有文件(大多可能是配置文件),或创建新的文件。有这样一个说法,假如你想在 Linux 世界中挑起一场圣战,你可以询问系统管理员们,什么是他们最喜爱的编辑器以及为什么。在这篇文章中,我们并不打算那样做,但我们将向你呈现一些技巧,这些技巧对使用两款在 RHEL 7 中最为常用的文本编辑器: nano(由于其简单和易用,特别是对于新手来说)和 vi/m(由于其自身的几个特色使得它不仅仅是一个简单的编辑器)来说都大有裨益。我确信你可以找到更多的理由来使用其中的一个或另一个,或许其他的一些编辑器如 emacs 或 pico。这完全取决于你自己。 + +![学习 Nano 和 vi 编辑器](http://www.tecmint.com/wp-content/uploads/2015/03/Learn-Nano-and-vi-Editors.png) + +*RHCSA: 使用 Nano 和 Vim 编辑文本文件 – Part 4* + +### 使用 Nano 编辑器来编辑文件 ### + +要启动 nano,你可以在命令提示符下输入 `nano`,或可选地跟上一个文件名(在这种情况下,若文件存在,它将在编辑模式中被打开)。若文件不存在,或我们省略了文件名, nano 也将在编辑模式下开启,但将为我们开启一个空白屏以便开始输入: + +![Nano 编辑器](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Editor.png) + +*Nano 编辑器* + +正如你在上一张图片中所见的那样, nano 在屏幕的底部呈现出一些可以通过指定的快捷键来触发的功能(\^,即插入记号,代指 Ctrl 键)。它们中的一些是: + +- Ctrl + G: 触发一个帮助菜单,带有一个关于功能和相应的描述的完整列表; + +![Nano 编辑器帮助菜单](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Help.png) + +*Nano 编辑器帮助菜单* + +- Ctrl + O: 保存更改到一个文件。它可以让你用一个与源文件相同或不同的名称来保存该文件,然后按 Enter 键来确认。 + +![Nano 编辑器保存更改模式](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Save-Changes.png) + +*Nano 编辑器的保存更改模式* + +- Ctrl + X: 离开当前文件,假如更改没有被保存,则它们将被丢弃; +- Ctrl + R: 通过指定一个完整的文件路径,让你选择一个文件来将该文件的内容插入到当前文件中; + +![Nano: 插入文件内容到主文件中](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-File-Content.png) + +*Nano: 插入文件内容到主文件中* + +上图的操作将把 `/etc/passwd` 的内容插入到当前文件中。 + +- Ctrl + K: 剪切当前行; +- Ctrl + U: 粘贴; +- Ctrl + C: 取消当前的操作并返回先前的屏幕; + +为了轻松地在打开的文件中浏览, nano 提供了下面的功能: + +- Ctrl + F 和 Ctrl + B 分别先前或向后移动光标;而 Ctrl + P 和 Ctrl + N 则分别向上或向下移动一行,功能与箭头键相同; +- Ctrl + space 和 Alt + space 分别向前或向后移动一个单词; + +最后, + +- 假如你想将光标移动到文档中的特定位置,使用 Ctrl + _ (下划线) 并接着输入 X,Y 将准确地带你到 第 X 行,第 Y 列。 + +![在 nano 中定位到具体的行,列](http://www.tecmint.com/wp-content/uploads/2015/03/Column-Numbers.png) + +*在 nano 中定位到具体的行和列* + +上面的例子将带你到当前文档的第 15 行,第 14 列。 + +假如你可以回忆起你早期的 Linux 岁月,特别是当你刚从 Windows 迁移到 Linux 
中,你就可能会同意:对于一个新手来说,使用 nano 来开始学习是最好的方式。 + +### 使用 Vim 编辑器来编辑文件 ### + +Vim 是 vi 的加强版本,它是 Linux 中一个著名的文本编辑器,可在所有兼容 POSIX 的 *nix 系统中获取到,例如在 RHEL 7 中。假如你有机会并可以安装 Vim,请继续;假如不能,这篇文章中的大多数(若不是全部)的提示也应该可以正常工作。 + +Vim 的一个出众的特点是可以在多个不同的模式中进行操作: + +- 命令模式(Command Mode)将允许你在文件中跳转和输入命令,这些命令是由一个或多个字母组成的简洁且大小写敏感的组合。假如你想重复执行某个命令特定次数,你可以在这个命令前加上需要重复的次数(这个规则只有极少数例外)。例如, `yy`(或 `Y`,yank 的缩写)可以复制整个当前行,而 `4yy`(或 `4Y`)则复制整个从当前行到接下来的 3 行(总共 4 行)。 +- 我们总是可以通过敲击 `Esc` 键来进入命令模式(无论我们正工作在哪个模式下)。 +- 在末行模式(Ex Mode)中,你可以操作文件(包括保存当前文件和运行外部的程序或命令)。要进入末行模式,你必须从命令模式中(换言之,输入 `Esc` + `:`)输入一个冒号(`:`),再直接跟上你想使用的末行模式命令的名称。 +- 对于插入模式(Insert Mode),可以输入字母 `i` 进入,然后只需要输入文字即可。大多数的击键结果都将出现在屏幕中的文本中。 + +现在,让我们看看如何在 vim 中执行在上一节列举的针对 nano 的相同的操作。不要忘记敲击 Enter 键来确认 vim 命令。 + +为了从命令行中获取 vim 的完整手册,在命令模式下键入 `:help` 并敲击 Enter 键: + +![vim 编辑器帮助菜单](http://www.tecmint.com/wp-content/uploads/2015/03/vim-Help-Menu.png) + +*vim 编辑器帮助菜单* + +上面的部分呈现出一个内容列表,这些定义的小节则描述了 Vim 的特定话题。要浏览某一个小节,可以将光标放到它的上面,然后按 `Ctrl + ]` (闭方括号)。注意,底部的小节展示的是当前文件的内容。 + +1、 要保存更改到文件,在命令模式中运行下面命令中的任意一个,就可以达到这个目的: + +``` +:wq! +:x! +ZZ (是的,两个 ZZ,前面无需添加冒号) +``` + +2、 要离开并丢弃更改,使用 `:q!`。这个命令也将允许你离开上面描述过的帮助菜单,并返回到命令模式中的当前文件。 + +3、 剪切 N 行:在命令模式中键入 `Ndd`。 + +4、 复制 M 行:在命令模式中键入 `Myy`。 + +5、 粘贴先前剪贴或复制过的行:在命令模式中按 `P`键。 + +6、 要插入另一个文件的内容到当前文件: + + :r filename + +例如,插入 `/etc/fstab` 的内容,可以这样做: + +[在 vi 编辑器中插入文件的内容](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Content-vi-Editor.png) + +*在 vi 编辑器中插入文件的内容* + +7、 插入一个命令的输出到当前文档: + + :r! command + +例如,要在光标所在的当前位置后面插入日期和时间: + +![在 vi 编辑器中插入时间和日期](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Time-and-Date-in-vi-Editor.png) + +*在 vi 编辑器中插入时间和日期* + +在另一篇我写的文章中,([LFCS 系列(二)][1]),我更加详细地解释了在 vim 中可用的键盘快捷键和功能。或许你可以参考那个教程来查看如何使用这个强大的文本编辑器的更深入的例子。 + +### 使用 grep 和正则表达式来分析文本 ### + +到现在为止,你已经学习了如何使用 nano 或 vim 创建和编辑文件。打个比方说,假如你成为了一个文本编辑器忍者 – 那又怎样呢? 
在其他事情上,你也需要知道如何在文本中搜索正则表达式。 + +正则表达式(也称为 "regex" 或 "regexp") 是一种识别一个特定文本字符串或模式的方式,使得一个程序可以将这个模式和任意的文本字符串相比较。尽管利用 grep 来使用正则表达式值得用一整篇文章来描述,这里就让我们复习一些基本的知识: + +**1、 最简单的正则表达式是一个由数字和字母构成的字符串(例如,单词 "svm") ,或者两个(在使用两个字符串时,你可以使用 `|`(或) 操作符):** + + # grep -Ei 'svm|vmx' /proc/cpuinfo + +上面命令的输出结果中若有这两个字符串之一的出现,则标志着你的处理器支持虚拟化: + +![正则表达式示例](http://www.tecmint.com/wp-content/uploads/2015/03/Regular-Expression-Example.png) + +*正则表达式示例* + +**2、 第二种正则表达式是一个范围列表,由方括号包裹。** + +例如, `c[aeiou]t` 匹配字符串 cat、cet、cit、cot 和 cut,而 `[a-z]` 和 `[0-9]` 则相应地匹配小写字母或十进制数字。假如你想重复正则表达式 X 次,在正则表达式的后面立即输入 `{X}`即可。 + +例如,让我们从 `/etc/fstab` 中析出存储设备的 UUID: + + # grep -Ei '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}' -o /etc/fstab + +![在 Linux 中从一个文件中析出字符串](http://www.tecmint.com/wp-content/uploads/2015/03/Extract-String-from-a-File.png) + +*从一个文件中析出字符串* + +方括号中的第一个表达式 `[0-9a-f]` 被用来表示小写的十六进制字符,`{8}`是一个量词,暗示前面匹配的字符串应该重复的次数(在一个 UUID 中的开头序列是一个 8 个字符长的十六进制字符串)。 + +在圆括号中,量词 `{4}`和连字符暗示下一个序列是一个 4 个字符长的十六进制字符串,接着的量词 `({3})`表示前面的表达式要重复 3 次。 + +最后,在 UUID 中的最后一个 12 个字符长的十六进制字符串可以由 `[0-9a-f]{12}` 取得, `-o` 选项表示只打印出在 `/etc/fstab`中匹配行中的匹配的(非空)部分。 + +**3、 POSIX 字符类** + +|字符类|匹配 …| +|-----|-----| +| `[:alnum:]` | 任意字母或数字 [a-zA-Z0-9] | +| `[:alpha:]` |任意字母 [a-zA-Z] | +| `[:blank:]` |空格或制表符 | +| `[:cntrl:]` |任意控制字符 (ASCII 码的 0 至 32) | +| `[:digit:]` |任意数字 [0-9] | +| `[:graph:]` |任意可见字符 | +| `[:lower:]` |任意小写字母 [a-z] | +| `[:print:]` |任意非控制字符 | +| `[:space:]` |任意空格 | +| `[:punct:]` |任意标点字符 | +| `[:upper:]` |任意大写字母 [A-Z] | +| `[:xdigit:]` |任意十六进制数字 [0-9a-fA-F] | +| `[:word:]` |任意字母,数字和下划线 [a-zA-Z0-9_] | + +例如,我们可能会对查找已添加到我们系统中给真实用户的 UID 和 GID(参考“[RHCSA 系列(二): 如何进行文件和目录管理][2]”来回忆起这些知识)感兴趣。那么,我们将在 `/etc/passwd` 文件中查找 4 个字符长的序列: + + # grep -Ei [[:digit:]]{4} /etc/passwd + +![在文件中查找一个字符串](http://www.tecmint.com/wp-content/uploads/2015/03/Search-For-String-in-File.png) + +*在文件中查找一个字符串* + +上面的示例可能不是真实世界中使用正则表达式的最好案例,但它清晰地启发了我们如何使用 POSIX 字符类来使用 grep 分析文本。 + +### 总结 ### + + +在这篇文章中,我们已经提供了一些技巧来最大地利用针对命令行用户的两个文本编辑器 nano 和 
vim,这两个工具都有相关的扩展文档可供阅读,你可以分别查询它们的官方网站(链接在下面给出)以及使用“[RHCSA 系列(一): 回顾基础命令及系统文档][3]”中给出的建议。 + +#### 参考文件链接 #### + +- [http://www.nano-editor.org/][4] +- [http://www.vim.org/][5] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-how-to-use-nano-vi-editors/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/vi-editor-usage/ +[2]:https://linux.cn/article-6155-1.html +[3]:https://linux.cn/article-6133-1-rel.html +[4]:http://www.nano-editor.org/ +[5]:http://www.vim.org/ +[6]:http://www.tecmint.com/vi-editor-usage/ \ No newline at end of file diff --git a/translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md b/translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md deleted file mode 100644 index 8438ec0351..0000000000 --- a/translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md +++ /dev/null @@ -1,258 +0,0 @@ -RHCSA 系列:使用 Nano 和 Vim 编辑文本文件/使用 grep 和 regexps 分析文本 – Part 4 -================================================================================ -作为系统管理员的日常职责的一部分,每个系统管理员都必须处理文本文件,这包括编辑现存文件(大多可能是配置文件),或创建新的文件。有这样一个说法,假如你想在 Linux 世界中挑起一场圣战,你可以询问系统管理员们,什么是他们最喜爱的编辑器以及为什么。在这篇文章中,我们并不打算那样做,但我们将向你呈现一些技巧,这些技巧对使用两款在 RHEL 7 中最为常用的文本编辑器: nano(由于其简单和易用,特别是对于新手来说) 和 vi/m(由于其自身的几个特色使得它不仅仅是一个简单的编辑器)来说都大有裨益。我确信你可以找到更多的理由来使用其中的一个或另一个,或许其他的一些编辑器如 emacs 或 pico。这完全取决于你。 - -![学习 Nano 和 vi 编辑器](http://www.tecmint.com/wp-content/uploads/2015/03/Learn-Nano-and-vi-Editors.png) - -RHCSA: 使用 Nano 和 Vim 编辑文本文件 – Part 4 - -### 使用 Nano 编辑器来编辑文件 ### - -要启动 nano,你可以在命令提示符下输入 
`nano`,或选择性地跟上一个文件名(在这种情况下,若文件存在,它将在编辑模式中被打开)。若文件不存在,或我们省略了文件名, nano 也将在 编辑模式下开启,但将为我们开启一个空白屏以便开始输入: - -![Nano 编辑器](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Editor.png) - -Nano 编辑器 - -正如你在上一张图片中所见的那样, nano 在屏幕的底部呈现出一些功能,它们可以通过暗指的快捷键来触发(^,即插入记号,代指 Ctrl 键)。它们中的一些是: - -- Ctrl + G: 触发一个帮助菜单,带有一个关于功能和相应的描述的完整列表; -- Ctrl + X: 离开当前文件,假如更改没有被保存,则它们将被丢弃; -- Ctrl + R: 通过指定一个完整的文件路径,让你选择一个文件来将该文件的内容插入到当前文件中; - -![Nano 编辑器帮助菜单](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Help.png) - -Nano 编辑器帮助菜单 - -- Ctrl + O: 保存更改到一个文件。它将让你用一个与源文件相同或不同的名称来保存该文件,然后按 Enter 键来确认。 - -![Nano 编辑器保存更改模式](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Save-Changes.png) - -Nano 编辑器的保存更改模式 - -- Ctrl + X: 离开当前文件,假如更改没有被保存,则它们将被丢弃; -- Ctrl + R: 通过指定一个完整的文件路径,让你选择一个文件来将该文件的内容插入到当前文件中; - -![Nano: 插入文件内容到主文件中](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-File-Content.png) - -Nano: 插入文件内容到主文件中 - -上图的操作将把 `/etc/passwd` 的内容插入到当前文件中。 - -- Ctrl + K: 剪切当前行; -- Ctrl + U: 粘贴; -- Ctrl + C: 取消当前的操作并返回先前的屏幕; - -为了轻松地在打开的文件中浏览, nano 提供了下面的功能: - -- Ctrl + F 和 Ctrl + B 分别先前或向后移动光标;而 Ctrl + P 和 Ctrl + N 则分别向上或向下移动一行,功能与箭头键相同; -- Ctrl + space 和 Alt + space 分别向前或向后移动一个单词; - -最后, - -- 假如你想将光标移动到文档中的特定位置,使用 Ctrl + _ (下划线) 并接着输入 X,Y 将准确地带你到 第 X 行,第 Y 列。 - -![在 nano 中定位到具体的行,列](http://www.tecmint.com/wp-content/uploads/2015/03/Column-Numbers.png) - -在 nano 中定位到具体的行和列 - -上面的例子将带你到当前文档的第 15 行,第 14 列。 - -假如你可以回忆起你早期的 Linux 岁月,特别是当你刚从 Windows 迁移到 Linux 中,你就可能会同意:对于一个新手来说,使用 nano 来开始学习是最好的方式。 - -### 使用 Vim 编辑器来编辑文件 ### - - -Vim 是 vi 的加强版本,它是 Linux 中一个著名的文本编辑器,可在所有兼容 POSIX 的 *nix 系统中获取到,例如在 RHEL 7 中。假如你有机会并可以安装 Vim,请继续;假如不能,这篇文章中的大多数(若不是全部)的提示也应该可以正常工作。 - -Vim 的一个出众的特点是可以在多个不同的模式中进行操作: - -- 命令模式将允许你在文件中跳转和输入命令,这些命令是由一个或多个字母组成的简洁且对大小写敏感的组合。假如你想重复执行某个命令特定次,你可以在这个命令前加上需要重复的次数(这个规则只有极少数例外)。例如, yy(或 Y,yank 的缩写)可以复制整个当前行,而 4yy(或 4Y)则复制整个当前行到接着的 3 行(总共 4 行)。 -- 在 ex 模式中,你可以操作文件(包括保存当前文件和运行外部的程序或命令)。要进入 ex 模式,你必须在命令模式前(或其他词前,Esc + :)输入一个冒号(:),再直接跟上你想使用的 ex 模式命令的名称。 -- 对于插入模式,可以输入字母 i 
进入,我们只需要输入文字即可。大多数的键击结果都将出现在屏幕中的文本中。 -- 我们总是可以通过敲击 Esc 键来进入命令模式(无论我们正工作在哪个模式下)。 - -现在,让我们看看如何在 vim 中执行在上一节列举的针对 nano 的相同的操作。不要忘记敲击 Enter 键来确认 vim 命令。 - -为了从命令行中获取 vim 的完整手册,在命令模式下键入 `:help` 并敲击 Enter 键: - -![vim 编辑器帮助菜单](http://www.tecmint.com/wp-content/uploads/2015/03/vim-Help-Menu.png) - -vim 编辑器帮助菜单 - -上面的小节呈现出一个目录列表,而定义过的小节则主要关注 Vim 的特定话题。要浏览某一个小节,可以将光标放到它的上面,然后按 Ctrl + ] (闭方括号)。注意,底部的小节展示的是当前文件的内容。 - -1. 要保存更改到文件,在命令模式中运行下面命令中的任意一个,就可以达到这个目的: - -``` -:wq! -:x! -ZZ (是的,两个 ZZ,前面无需添加冒号) -``` - -2. 要离开并丢弃更改,使用 `:q!`。这个命令也将允许你离开上面描述过的帮助菜单,并返回到命令模式中的当前文件。 - -3. 剪切 N 行:在命令模式中键入 `Ndd`。 - -4. 复制 M 行:在命令模式中键入 `Myy`。 - -5. 粘贴先前剪贴或复制过的行:在命令模式中按 `P`键。 - -6. 要插入另一个文件的内容到当前文件: - - :r filename - -例如,插入 `/etc/fstab` 的内容,可以这样做: - -[在 vi 编辑器中插入文件的内容](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Content-vi-Editor.png) - -在 vi 编辑器中插入文件的内容 - -7. 插入一个命名的输出到当前文档: - - :r! command - -例如,要在光标所在的当前位置后面插入日期和时间: - -![在 vi 编辑器中插入时间和日期](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Time-and-Date-in-vi-Editor.png) - -在 vi 编辑器中插入时间和日期 - -在另一篇我写的文章中,([LFCS 系列的 Part 2][1]),我更加详细地解释了在 vim 中可用的键盘快捷键和功能。或许你可以参考那个教程来查看如何使用这个强大的文本编辑器的更深入的例子。 - -### 使用 Grep 和正则表达式来分析文本 ### - -到现在为止,你已经学习了如何使用 nano 或 vim 创建和编辑文件。打个比方说,假如你成为了一个文本编辑器忍者 – 那又怎样呢? 在其他事情上,你也需要知道如何在文本中搜索正则表达式。 - -正则表达式(也称为 "regex" 或 "regexp") 是一种识别一个特定文本字符串或模式的方式,使得一个程序可以将这个模式和任意的文本字符串相比较。尽管利用 grep 来使用正则表达式值得用一整篇文章来描述,这里就让我们复习一些基本的知识: - -**1. 最简单的正则表达式是一个由数字和字母构成的字符串(即,单词 "svm") 或两个(在使用两个字符串时,你可以使用 `|`(或) 操作符):** - - # grep -Ei 'svm|vmx' /proc/cpuinfo - -上面命令的输出结果中若有这两个字符串之一的出现,则标志着你的处理器支持虚拟化: - -![正则表达式示例](http://www.tecmint.com/wp-content/uploads/2015/03/Regular-Expression-Example.png) - -正则表达式示例 - -**2. 
第二种正则表达式是一个范围列表,由方括号包裹。** - -例如, `c[aeiou]t` 匹配字符串 cat,cet,cit,cot 和 cut,而 `[a-z]` 和 `[0-9]` 则相应地匹配小写字母或十进制数字。假如你想重复正则表达式 X 次,在正则表达式的后面立即输入 `{X}`即可。 - -例如,让我们从 `/etc/fstab` 中析出存储设备的 UUID: - - # grep -Ei '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}' -o /etc/fstab - -![在 Linux 中从一个文件中析出字符串](http://www.tecmint.com/wp-content/uploads/2015/03/Extract-String-from-a-File.png) - -从一个文件中析出字符串 - -方括号中的第一个表达式 `[0-9a-f]` 被用来表示小写的十六进制字符,`{8}`是一个量词,暗示前面匹配的字符串应该重复的次数(在一个 UUID 中的开头序列是一个 8 个字符长的十六进制字符串)。 - -在圆括号中,量词 `{4}`和连字符暗示下一个序列是一个 4 个字符长的十六进制字符串,接着的量词 `({3})`表示前面的表达式要重复 3 次。 - -最后,在 UUID 中的最后一个 12 个字符长的十六进制字符串可以由 `[0-9a-f]{12}` 取得, `-o` 选项表示只打印出在 `/etc/fstab`中匹配行中的匹配的(非空)部分。 - -**3. POSIX 字符类 ** - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
字符类匹配 …
 [[:alnum:]] 任意字母或数字 [a-zA-Z0-9]
 [[:alpha:]] 任意字母 [a-zA-Z]
 [[:blank:]] 空格或制表符
 [[:cntrl:]] 任意控制字符 (ASCII 码的 0 至 32)
 [[:digit:]] 任意数字 [0-9]
 [[:graph:]] 任意可见字符
 [[:lower:]] 任意小写字母 [a-z]
 [[:print:]] 任意非控制字符 -
 [[:space:]] 任意空格
 [[:punct:]] 任意标点字符
 [[:upper:]] 任意大写字母 [A-Z]
 [[:xdigit:]] 任意十六进制数字 [0-9a-fA-F]
 [:word:] 任意字母,数字和下划线 [a-zA-Z0-9_]
- -例如,我们可能会对查找已添加到我们系统中给真实用户的 UID 和 GID(参考这个系列的 [Part 2][2]来回忆起这些知识)感兴趣。那么,我们将在 `/etc/passwd` 文件中查找 4 个字符长的序列: - - # grep -Ei [[:digit:]]{4} /etc/passwd - -![在文件中查找一个字符串](http://www.tecmint.com/wp-content/uploads/2015/03/Search-For-String-in-File.png) - -在文件中查找一个字符串 - -上面的示例可能不是真实世界中使用正则表达式的最好案例,但它清晰地启发了我们如何使用 POSIX 字符类来使用 grep 分析文本。 - -### 总结 ### - - -在这篇文章中,我们已经提供了一些技巧来最大地利用针对命令行用户的两个文本编辑器 nano 和 vim,这两个工具都有相关的扩展文档可供阅读,你可以分别查询它们的官方网站(链接在下面给出)以及使用这个系列中的 [Part 1][3] 给出的建议。 - -#### 参考文件链接 #### - -- [http://www.nano-editor.org/][4] -- [http://www.vim.org/][5] - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-how-to-use-nano-vi-editors/ - -作者:[Gabriel Cánepa][a] -译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/vi-editor-usage/ -[2]:http://www.tecmint.com/file-and-directory-management-in-linux/ -[3]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ -[4]:http://www.nano-editor.org/ -[5]:http://www.vim.org/ From b7a1830262a2ff7f7aa29b579aa76a46a65e49bc Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 15 Sep 2015 23:33:49 +0800 Subject: [PATCH 522/697] PUB:20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu @strugglingyouth --- ....9.0 Winamp-like Audio Player in Ubuntu.md | 33 ++++++++----------- 1 file changed, 13 insertions(+), 20 deletions(-) rename {translated/tech => published}/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md (58%) diff --git a/translated/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md b/published/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md similarity index 58% rename from translated/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md rename to 
published/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md index ac07a04b85..1b5ca2ce43 100644 --- a/translated/tech/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md +++ b/published/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md @@ -1,40 +1,33 @@ -在 Ubuntu 上安装 Qmmp 0.9.0 类似 Winamp 的音频播放器 +在 Ubuntu 上安装类 Winamp 的音频播放器 Qmmp 0.9.0 ================================================================================ ![](http://ubuntuhandbook.org/wp-content/uploads/2015/01/qmmp-icon-simple.png) -Qmmp,基于 Qt 的音频播放器,与 Winamp 或 xmms 的用户界面类似,现在最新版本是0.9.0。PPA 已经在 Ubuntu 15.10,Ubuntu 15.04,Ubuntu 14.04,Ubuntu 12.04 和其衍生物中已经更新了。 +Qmmp,一个基于 Qt 的音频播放器,与 Winamp 或 xmms 的用户界面类似,现在最新版本是0.9.0。PPA 已经在 Ubuntu 15.10,Ubuntu 15.04,Ubuntu 14.04,Ubuntu 12.04 和其衍生版本中已经更新了。 Qmmp 0.9.0 是一个较大的版本,有许多新的功能,有许多改进和新的转变。它添加了如下功能: - 音频-信道序列转换器; - 9通道支持均衡器; -- 艺术家专辑标签支持; +- 支持艺术家专辑标签; - 异步排序; -- 通过文件的修改日期排​​序; -- 按艺术家专辑排序; -- 支持多专栏; -- 有隐藏踪迹长度功能; - 不用修改 qmmp.pri 来禁用插件(仅在 qmake 中)功能 - 记住播放列表滚动位置功能; -- 排除提示数据文件功能; +- 排除 cue 数据文件功能; - 更改用户代理功能; - 改变窗口标题功能; -- 复位字体功能; -- 恢复默认快捷键功能; -- 默认热键为“Rename List”功能; -- 功能禁用弹出的 GME 插件; -- 简单的用户界面(QSUI)有以下变化: - - 增加了多列表的支持; +- 禁用 gme 插件淡出的功能; +- 简单用户界面(QSUI)有以下变化: + - 增加了多列的支持; - 增加了按艺术家专辑排序; - 增加了按文件的修改日期进行排序; - 增加了隐藏歌曲长度功能; - - 增加了默认热键为“Rename List”; + - 增加了“Rename List”的默认热键; - 增加了“Save List”功能到标签菜单; - 增加了复位字体功能; - 增加了复位快捷键功能; - 改进了状态栏; -它还改进了播放列表的通知,播放列表容器,采样率转换器,cmake 构建脚本,标题格式,在 mpeg 插件中支持 ape 标签,fileops 插件,降低了 cpu 占用率,改变默认的皮肤(炫光)和分离播放列表。 +它还改进了播放列表的改变通知,播放列表容器,采样率转换器,cmake 构建脚本,标题格式化,在 mpeg 插件中支持 ape 标签,fileops 插件,降低了 cpu 占用率,改变默认的皮肤(炫光)和分离的播放列表。 ![qmmp-090](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-090.jpg) @@ -42,7 +35,7 @@ Qmmp 0.9.0 是一个较大的版本,有许多新的功能,有许多改进和 新版本已经制做了 PPA,适用于目前所有 Ubuntu 发行版和衍生版。 -1. 添加 [Qmmp PPA][1]. +1、 添加 [Qmmp PPA][1]. 
从 Dash 中打开终端并启动应用,通过按 Ctrl+Alt+T 快捷键。当它打开时,运行命令: @@ -50,7 +43,7 @@ Qmmp 0.9.0 是一个较大的版本,有许多新的功能,有许多改进和 ![qmmp-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/09/qmmp-ppa.jpg) -2. 在添加 PPA 后,通过更新软件来升级 Qmmp 播放器。刷新系统缓存,并通过以下命令安装软件: +2、 在添加 PPA 后,通过更新软件来升级 Qmmp 播放器。刷新系统缓存,并通过以下命令安装软件: sudo apt-get update @@ -63,8 +56,8 @@ Qmmp 0.9.0 是一个较大的版本,有许多新的功能,有许多改进和 via: http://ubuntuhandbook.org/index.php/2015/09/qmmp-0-9-0-in-ubuntu/ 作者:[Ji m][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 733781b3c63a0b1fb218a497c09e91a6ba91ab92 Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Wed, 16 Sep 2015 11:18:02 +0800 Subject: [PATCH 523/697] Create 20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md --- ...OS Runs 42 Percent of Dell PCs in China.md | 41 +++++++++++++++++++ 1 file changed, 41 insertions(+) create mode 100644 sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md diff --git a/sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md b/sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md new file mode 100644 index 0000000000..7368a21b70 --- /dev/null +++ b/sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md @@ -0,0 +1,41 @@ +Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China +================================================================================ +> Dell says that 42 percent of the PCs it sells in the Chinese market run Kylin, an open source operating system based on Ubuntu Linux that Canonical helped to create. + + Open source fans, rejoice: The Year of the Linux Desktop has arrived. 
Or something close to it is on the horizon in China, at least, where [Dell][1] has reported that more than 40 percent of the PCs it sells run a variant of [Ubuntu Linux][2] that [Canonical][3] helped develop. + + Specifically, Dell said that 42 percent of computers in China run NeoKylin, an operating system that originated as an effort in China to build a home-grown alternative to [Microsoft][4] (MSFT) Windows. Also known simply Kylin, the OS has been based on Ubuntu since 2013, when Canonical began collaborating with the Chinese government to create an Ubuntu variant tailored for the Chinese market. + + Earlier versions of Kylin, which has been around since 2001, were based on other operating systems, including FreeBSD, an open source Unix-like operating system that is distinct from Linux. + + Ubuntu Kylin looks and feels a lot like modern versions of Ubuntu proper. It sports the [Unity][5] interface and runs the standard suite of open source apps, as well as specialized ones such as Youker Assistant, a graphical front end that helps users manage basic computing tasks. Kylin's default theme makes it look just a little more like Windows than stock Ubuntu, however. + + Given the relative stagnation of the market for desktop Linux PCs in most of the world, Dell's announcement is striking. And in light of China's [hostility][6] toward modern editions of Windows, the news does not bode well for Microsoft's prospects in the Chinese market. + + Dell's comment on Linux PC sales in China—which appeared in the form of a statement by an executive to the Wall Street Journal—comes on the heels of the company's [announcement][7] of $125 million of new investment in China. 
+ ![Ubuntu Kylin](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/09/hey_2.png) + + + + + + + +-------------------------------------------------------------------------------- + +via: http://thevarguy.com/open-source-application-software-companies/091515/ubuntu-linux-based-open-source-os-runs-42-percent-dell-pc + +作者:[Christopher Tozzi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://thevarguy.com/author/christopher-tozzi +[1]:http://dell.com/ +[2]:http://ubuntu.com/ +[3]:http://canonical.com/ +[4]:http://microsoft.com/ +[5]:http://unity.ubuntu.com/ +[6]:http://www.wsj.com/articles/windows-8-faces-new-criticism-in-china-1401882772 +[7]:http://thevarguy.com/business-technology-solution-sales/091415/dell-125-million-directed-china-jobs-new-business-and-innovation From 266b05d0214a90e5ed03bc5023b0ccd44bce4c6a Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 16 Sep 2015 16:25:27 +0800 Subject: [PATCH 524/697] =?UTF-8?q?20150916-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...able Automatic System Updates In Ubuntu.md | 48 +++++++++++ ... 
which CPU core a process is running on.md | 81 +++++++++++++++++++ 2 files changed, 129 insertions(+) create mode 100644 sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md create mode 100644 sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md diff --git a/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md b/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md new file mode 100644 index 0000000000..40397f2c42 --- /dev/null +++ b/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md @@ -0,0 +1,48 @@ +Enable Automatic System Updates In Ubuntu +================================================================================ +Before seeing **how to enable automatic system updates in Ubuntu**, first let’s see why should we do it in the first place. + +By default Ubuntu checks for updates daily. When there are security updates, it shows immediately but for other updates (i.e. regular software updates) it pop ups once a week. So, if you have been using Ubuntu for a while, this may be a familiar sight for you: + +![Software Update notification in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu.png) + +Now if you are a normal desktop user, you don’t really care about what kind of updates are these. And this is not entirely a bad thing. You trust Ubuntu to provide you good updates, right? So, you just select ‘Install Now’ most of the time, don’t you? + +And all you do is to click on Install Now, why not enable the automatic system updates? Enabling automatic system updates means all the latest updates will be automatically downloaded and installed without requiring any actions from you. Isn’t it convenient? + +### Enable automatic updates in Ubuntu ### + +I am using Ubuntu 15.04 in this tutorial but the steps are the same for Ubuntu 14.04 as well. 
+
+Go to Unity Dash and look for Software & Updates:
+
+![Ubuntu Software Update Settings](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/08/Software_Update_Ubuntu.jpeg)
+
+This will open the Software sources settings for you. Click on the Updates tab here:
+
+![Software Updates settings in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu-1.png)
+
+Here you’ll see the default settings, which are a daily check for updates and an immediate notification for security updates.
+
+![Changing software update frequency](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu-2.png)
+
+All you need to do is change the action which reads “When there are” to “Download and install automatically”. This will download all the available updates and install them automatically.
+
+![Automatic updates in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu-3.png)
+
+That’s it. Close the window and you have automatic updates enabled in Ubuntu. In fact, this tutorial is pretty similar to [changing update notification frequency in Ubuntu][1].
+
+Do you use automatic update installation, or do you prefer to install updates manually?
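For readers who prefer a command-line route, the same end result can be sketched with the unattended-upgrades package. This is an illustrative aside rather than part of the original tutorial; the configuration shown below is the stock content that Ubuntu's reconfigure step generates, stated here as an assumption about a default Debian/Ubuntu setup:

```
# Hedged command-line alternative (not covered in the tutorial above).
# Install the helper package and enable it:
#   sudo apt-get install unattended-upgrades
#   sudo dpkg-reconfigure -plow unattended-upgrades
#
# Afterwards /etc/apt/apt.conf.d/20auto-upgrades should read:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

With those two lines in place, apt refreshes the package lists and installs pending upgrades on its own, which mirrors the GUI setting described above.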
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/automatic-system-updates-ubuntu/
+
+作者:[Abhishek][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:http://itsfoss.com/ubuntu-notify-updates-frequently/
\ No newline at end of file
diff --git a/sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md b/sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md
new file mode 100644
index 0000000000..3553f2b14e
--- /dev/null
+++ b/sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md
@@ -0,0 +1,81 @@
+Linux FAQs with Answers--How to find out which CPU core a process is running on
+================================================================================
+> Question: I have a Linux process running on my multi-core processor system. How can I find out which CPU core the process is running on?
+
+When you run performance-critical HPC applications or network-heavy workloads on [multi-core NUMA processors][1], CPU/memory affinity is one important factor to consider to maximize their performance. Scheduling closely related processes on the same NUMA node can reduce slow remote memory access. On processors like Intel's Sandy Bridge, which has an integrated PCIe controller, you want to schedule network I/O workloads on the same NUMA node as the NIC card to exploit PCI-to-CPU affinity.
+
+As part of performance tuning or troubleshooting, you may want to know on which CPU core (or NUMA node) a particular process is currently scheduled.
+
+Here are several ways to **find out which CPU core a given Linux process or thread is scheduled on**.
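All of the methods that follow report scheduler state that the kernel publishes under /proc. As an aside not found in the original answer, the same value can be read directly from /proc/&lt;pid&gt;/stat, whose 39th field is "processor", the CPU a task last ran on (see proc(5)). A minimal sketch, using the shell's own PID merely as a stand-in for a real target process:

```shell
# Read the CPU core a process last ran on straight from /proc.
# We strip everything up to the ')' that closes the comm field, so
# process names containing spaces cannot shift the field positions;
# "processor" is then the 37th field of the remainder.
pid=$$   # stand-in PID; replace with the process you are inspecting
core=$(sed 's/^.*) //' "/proc/$pid/stat" | awk '{print $37}')
echo "PID $pid last ran on CPU core $core"
```

Like the PSR column of ps, this is a snapshot: unless the process is pinned, the value can change between reads as the scheduler migrates the task.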
+ +### Method One ### + +If a process is explicitly pinned to a particular CPU core using commands like [taskset][2], you can find out the pinned CPU using the following taskset command: + + $ taskset -c -p + +For example, if the process you are interested in has PID 5357: + + $ taskset -c -p 5357 + +---------- + + pid 5357's current affinity list: 5 + +The output says the process is pinned to CPU core 5. + +However, if you haven't explicitly pinned the process to any CPU core, you will get something like the following as the affinity list. + + pid 5357's current affinity list: 0-11 + +The output indicates that the process can potentially be scheduled on any CPU core from 0 to 11. So in this case, taskset is not useful in identifying which CPU core the process is currently assigned to, and you should use other methods as described below. + +### Method Two ### + +The ps command can tell you the CPU ID each process/thread is currently assigned to (under "PSR" column). + + $ ps -o pid,psr,comm -p + +---------- + + PID PSR COMMAND + 5357 10 prog + +The output says the process with PID 5357 (named "prog") is currently running on CPU core 10. If the process is not pinned, the PSR column can keep changing over time depending on where the kernel scheduler assigns the process. + +### Method Three ### + +The top command can also show the CPU assigned to a given process. First, launch top command with "p" option. Then press 'f' key, and add "Last used CPU" column to the display. The currently used CPU core will appear under "P" (or "PSR") column. + + $ top -p 5357 + +![](https://farm6.staticflickr.com/5698/21429268426_e7d1d73a04_c.jpg) + +Compared to ps command, the advantage of using top command is that you can continuously monitor how the assigned CPU changes over time. + +### Method Four ### + +Yet another method to check the currently used CPU of a process/thread is to use [htop command][3]. + +Launch htop from the command line. 
Press key, go to "Columns", and add PROCESSOR under "Available Columns". + +The currently used CPU ID of each process will appear under "CPU" column. + +![](https://farm6.staticflickr.com/5788/21444522832_a5a206f600_c.jpg) + +Note that all previous commands taskset, ps and top assign CPU core IDs 0, 1, 2, ..., N-1. However, htop assigns CPU core IDs starting from 1 (upto N). + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/cpu-core-process-is-running.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:http://xmodulo.com/identify-cpu-processor-architecture-linux.html +[2]:http://xmodulo.com/run-program-process-specific-cpu-cores-linux.html +[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html \ No newline at end of file From 899c717a60c22c98c5a8ff97d5920a0bafd906c2 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 16 Sep 2015 16:35:12 +0800 Subject: [PATCH 525/697] =?UTF-8?q?20150916-2=20=E9=80=89=E9=A2=98=20RHCE?= =?UTF-8?q?=20=E4=B8=93=E9=A2=98=20=E7=AC=AC=E5=85=AB=E7=AF=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Network Security Service NSS for Apache.md | 211 ++++++++++++++++++ 1 file changed, 211 insertions(+) create mode 100644 sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md diff --git a/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md b/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md new file mode 100644 index 0000000000..317b2b3292 --- /dev/null +++ b/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using 
Network Security Service NSS for Apache.md @@ -0,0 +1,211 @@ +RHCE Series: Implementing HTTPS through TLS using Network Security Service (NSS) for Apache +================================================================================ +If you are a system administrator who is in charge of maintaining and securing a web server, you can’t afford to not devote your very best efforts to ensure that data served by or going through your server is protected at all times. + +![Setup Apache HTTPS Using SSL/TLS](http://www.tecmint.com/wp-content/uploads/2015/09/Setup-Apache-SSL-TLS-Server.png) + +RHCE Series: Implementing HTTPS through TLS using Network Security Service (NSS) for Apache – Part 8 + +In order to provide more secure communications between web clients and servers, the HTTPS protocol was born as a combination of HTTP and SSL (Secure Sockets Layer) or more recently, TLS (Transport Layer Security). + +Due to some serious security breaches, SSL has been deprecated in favor of the more robust TLS. For that reason, in this article we will explain how to secure connections between your web server and clients using TLS. + +This tutorial assumes that you have already installed and configured your Apache web server. If not, please refer to following article in this site before proceeding further. 
+
+- [Install LAMP (Linux, MySQL/MariaDB, Apache and PHP) on RHEL/CentOS 7][1]
+
+### Installation of OpenSSL and Utilities ###
+
+First off, make sure that Apache is running and that both http and https are allowed through the firewall:
+
+    # systemctl start httpd
+    # systemctl enable httpd
+    # firewall-cmd --permanent --add-service=http
+    # firewall-cmd --permanent --add-service=https
+
+Then install the necessary packages:
+
+    # yum update && yum install openssl mod_nss crypto-utils
+
+**Important**: Please note that you can replace mod_nss with mod_ssl in the command above if you want to use OpenSSL libraries instead of NSS (Network Security Service) to implement TLS (which one to use is left entirely up to you, but we will use NSS in this article as it is more robust; for example, it supports recent cryptography standards such as PKCS #11).
+
+Finally, uninstall mod_ssl if you chose to use mod_nss, or vice versa.
+
+    # yum remove mod_ssl
+
+### Configuring NSS (Network Security Service) ###
+
+After mod_nss is installed, its default configuration file is created as /etc/httpd/conf.d/nss.conf. You should then make sure that all of the Listen and VirtualHost directives point to port 443 (default port for HTTPS):
+
+nss.conf – Configuration File
+
+----------
+
+    Listen 443
+    VirtualHost _default_:443
+
+Then restart Apache and check whether the mod_nss module has been loaded:
+
+    # apachectl restart
+    # httpd -M | grep nss
+
+![Check Mod_NSS Module in Apache](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Mod_NSS-Module-in-Apache.png)
+
+Check Mod_NSS Module Loaded in Apache
+
+Next, the following edits should be made in the `/etc/httpd/conf.d/nss.conf` configuration file:
+
+1. Indicate the NSS database directory. You can use the default directory or create a new one. In this tutorial we will use the default:
+
+    NSSCertificateDatabase /etc/httpd/alias
+
+2.
Avoid manual passphrase entry on each system start by saving the password to the database directory in /etc/httpd/nss-db-password.conf: + + NSSPassPhraseDialog file:/etc/httpd/nss-db-password.conf + +Where /etc/httpd/nss-db-password.conf contains ONLY the following line and mypassword is the password that you will set later for the NSS database: + + internal:mypassword + +In addition, its permissions and ownership should be set to 0640 and root:apache, respectively: + + # chmod 640 /etc/httpd/nss-db-password.conf + # chgrp apache /etc/httpd/nss-db-password.conf + +3. Red Hat recommends disabling SSL and all versions of TLS previous to TLSv1.0 due to the POODLE SSLv3 vulnerability (more information [here][2]). + +Make sure that every instance of the NSSProtocol directive reads as follows (you are likely to find only one if you are not hosting other virtual hosts): + + NSSProtocol TLSv1.0,TLSv1.1 + +4. Apache will refuse to restart as this is a self-signed certificate and will not recognize the issuer as valid. For this reason, in this particular case you will have to add: + + NSSEnforceValidCerts off + +5. Though not strictly required, it is important to set a password for the NSS database: + + # certutil -W -d /etc/httpd/alias + +![Set Password for NSS Database](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Password-for-NSS-Database.png) + +Set Password for NSS Database + +### Creating a Apache SSL Self-Signed Certificate ### + +Next, we will create a self-signed certificate that will identify the server to our clients (please note that this method is not the best option for production environments; for such use you may want to consider buying a certificate verified by a 3rd trusted certificate authority, such as DigiCert). + +To create a new NSS-compliant certificate for box1 which will be valid for 365 days, we will use the genkey command. 
When this process completes: + + # genkey --nss --days 365 box1 + +Choose Next: + +![Create Apache SSL Key](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Apache-SSL-Key.png) + +Create Apache SSL Key + +You can leave the default choice for the key size (2048), then choose Next again: + +![Select Apache SSL Key Size](http://www.tecmint.com/wp-content/uploads/2015/09/Select-Apache-SSL-Key-Size.png) + +Select Apache SSL Key Size + +Wait while the system generates random bits: + +![Generating Random Key Bits](http://www.tecmint.com/wp-content/uploads/2015/09/Generating-Random-Bits.png) + +Generating Random Key Bits + +To speed up the process, you will be prompted to enter random text in your console, as shown in the following screencast. Please note how the progress bar stops when no input from the keyboard is received. Then, you will be asked to: + +1. Whether to send the Certificate Sign Request (CSR) to a Certificate Authority (CA): Choose No, as this is a self-signed certificate. + +2. to enter the information for the certificate. + +注:youtube 视频 + + +Finally, you will be prompted to enter the password to the NSS certificate that you set earlier: + + # genkey --nss --days 365 box1 + +![Apache NSS Certificate Password](http://www.tecmint.com/wp-content/uploads/2015/09/Apache-NSS-Password.png) + +Apache NSS Certificate Password + +At anytime, you can list the existing certificates with: + + # certutil –L –d /etc/httpd/alias + +![List Apache NSS Certificates](http://www.tecmint.com/wp-content/uploads/2015/09/List-Apache-Certificates.png) + +List Apache NSS Certificates + +And delete them by name (only if strictly required, replacing box1 by your own certificate name) with: + + # certutil -d /etc/httpd/alias -D -n "box1" + +if you need to.c + +### Testing Apache SSL HTTPS Connections ### + +Finally, it’s time to test the secure connection to our web server. 
When you point your browser to https://, you will get the well-known message “This connection is untrusted“: + +![Check Apache SSL Connection](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Connection.png) + +Check Apache SSL Connection + +In the above situation, you can click on Add Exception and then Confirm Security Exception – but don’t do it yet. Let’s first examine the certificate to see if its details match the information that we entered earlier (as shown in the screencast). + +To do so, click on View… –> Details tab above and you should see this when you select Issuer from the list: + +![Confirm Apache SSL Certificate Details](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Certificate-Details.png) + +Confirm Apache SSL Certificate Details + +Now you can go ahead, confirm the exception (either for this time or permanently) and you will be taken to your web server’s DocumentRoot directory via https, where you can inspect the connection details using your browser’s builtin developer tools: + +In Firefox you can launch it by right clicking on the screen, and choosing Inspect Element from the context menu, specifically through the Network tab: + +![Inspect Apache HTTPS Connection](http://www.tecmint.com/wp-content/uploads/2015/09/Inspect-Apache-HTTPS-Connection.png) + +Inspect Apache HTTPS Connection + +Please note that this is the same information as displayed before, which was entered during the certificate previously. There’s also a way to test the connection using command line tools: + +On the left (testing SSLv3): + + # openssl s_client -connect localhost:443 -ssl3 + +On the right (testing TLS): + + # openssl s_client -connect localhost:443 -tls1 + +![Testing Apache SSL and TLS Connections](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Apache-SSL-and-TLS.png) + +Testing Apache SSL and TLS Connections + +Refer to the screenshot above for more details. 
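As a side note, a comparable self-signed certificate can be produced with plain OpenSSL instead of NSS's genkey, which is handy for reproducing this test setup quickly. The sketch below is an illustration rather than part of the mod_nss workflow described in this article; the common name box1.example.com and the /tmp paths are invented for the example:

```shell
# Generate a throwaway RSA key and self-signed certificate with OpenSSL
# (an alternative to the NSS genkey flow described above), valid for
# 365 days, then print the subject and expiry date of what was issued.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=box1.example.com" \
    -keyout /tmp/box1.key -out /tmp/box1.crt
openssl x509 -in /tmp/box1.crt -noout -subject -enddate
```

The second command lets you confirm the certificate details in the terminal, much like the Issuer inspection in the browser shown above.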
+ +### Summary ### + +As I’m sure you already know, the presence of HTTPS inspires trust in visitors who may have to enter personal information in your site (from user names and passwords all the way to financial / bank account information). + +In that case, you will want to get a certificate signed by a trusted Certificate Authority as we explained earlier (the steps to set it up are identical with the exception that you will need to send the CSR to a CA, and you will get the signed certificate back); otherwise, a self-signed certificate as the one used in this tutorial will do. + +For more details on the use of NSS, please refer to the online help about [mod-nss][3]. And don’t hesitate to let us know if you have any questions or comments. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-apache-https-self-signed-certificate-using-nss/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/install-lamp-in-centos-7/ +[1]:http://www.tecmint.com/author/gacanepa/ +[2]:https://access.redhat.com/articles/1232123 +[3]:https://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html \ No newline at end of file From a9803952cf4c393a250de14573c4f7a4a74a144a Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 16 Sep 2015 19:31:21 +0800 Subject: [PATCH 526/697] Update 20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md --- ...--How to find out which CPU core a process is running on.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md b/sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md index 
3553f2b14e..ff305ad5ce 100644 --- a/sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md +++ b/sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Linux FAQs with Answers--How to find out which CPU core a process is running on ================================================================================ > Question: I have a Linux process running on my multi-core processor system. How can I find out which CPU core the process is running on? @@ -78,4 +79,4 @@ via: http://ask.xmodulo.com/cpu-core-process-is-running.html [a]:http://ask.xmodulo.com/author/nanni [1]:http://xmodulo.com/identify-cpu-processor-architecture-linux.html [2]:http://xmodulo.com/run-program-process-specific-cpu-cores-linux.html -[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html \ No newline at end of file +[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html From 0d8e39f4d283be219171823666f10b98e1fbe952 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Wed, 16 Sep 2015 22:54:18 +0800 Subject: [PATCH 527/697] Translating tech/20140320 Best command line tools for linux performance monitoring.md tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md --- ...0 Best command line tools for linux performance monitoring.md | 1 + ... 
through TLS using Network Security Service NSS for Apache.md | 1 + 2 files changed, 2 insertions(+) diff --git a/sources/tech/20140320 Best command line tools for linux performance monitoring.md b/sources/tech/20140320 Best command line tools for linux performance monitoring.md index 6b0ed1ebc4..a9fa3cc8bc 100644 --- a/sources/tech/20140320 Best command line tools for linux performance monitoring.md +++ b/sources/tech/20140320 Best command line tools for linux performance monitoring.md @@ -1,3 +1,4 @@ +ictlyh Translating Best command line tools for linux performance monitoring ================================================================================ Sometimes a system can be slow and many reasons can be the root cause. To identify the process that is consuming memory, disk I/O or processor capacity you need to use tools to see what is happening in an operation system. diff --git a/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md b/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md index 317b2b3292..a316797ebd 100644 --- a/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md +++ b/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md @@ -1,3 +1,4 @@ +ictlyh Translating RHCE Series: Implementing HTTPS through TLS using Network Security Service (NSS) for Apache ================================================================================ If you are a system administrator who is in charge of maintaining and securing a web server, you can’t afford to not devote your very best efforts to ensure that data served by or going through your server is protected at all times. 
From d772f13cc812fd221f91e4942e37a9abb9ef1b04 Mon Sep 17 00:00:00 2001 From: VicYu Date: Wed, 16 Sep 2015 23:03:52 +0800 Subject: [PATCH 528/697] Update 20150916 Enable Automatic System Updates In Ubuntu.md --- .../20150916 Enable Automatic System Updates In Ubuntu.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md b/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md index 40397f2c42..3d69413a2f 100644 --- a/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md +++ b/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md @@ -1,3 +1,5 @@ + Vic020 + Enable Automatic System Updates In Ubuntu ================================================================================ Before seeing **how to enable automatic system updates in Ubuntu**, first let’s see why should we do it in the first place. @@ -45,4 +47,4 @@ via: http://itsfoss.com/automatic-system-updates-ubuntu/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://itsfoss.com/author/abhishek/ -[1]:http://itsfoss.com/ubuntu-notify-updates-frequently/ \ No newline at end of file +[1]:http://itsfoss.com/ubuntu-notify-updates-frequently/ From 28ac776cf1ef94ec352f49b8d433669cb54463eb Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 16 Sep 2015 23:23:54 +0800 Subject: [PATCH 529/697] =?UTF-8?q?=E7=BF=BB=E5=B7=B2=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?Linux=20FAQs=20with=20Answers--How=20to=20check=20weather=20for?= =?UTF-8?q?ecasts=20from=20the=20command=20line=20on=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...orecasts from the command line on Linux.md | 52 ++++++++----------- 1 file changed, 22 insertions(+), 30 deletions(-) diff --git a/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md b/sources/tech/20150914 Linux FAQs with 
Answers--How to check weather forecasts from the command line on Linux.md index 11dd713f74..b7751e118a 100644 --- a/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md +++ b/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md @@ -1,64 +1,56 @@ -translating by ezio - -Linux FAQs with Answers--How to check weather forecasts from the command line on Linux +Linux 问与答:如何在Linux 命令行下浏览天气预报 ================================================================================ -> **Question**: I often check local weather forecasts on the Linux desktop. However, is there an easy way to access weather forecast information in the terminal environment, where I don't have access to desktop widgets or web browser? +> **Q**: 我经常在Linux 桌面查看天气预报。然而,是否有一种在终端环境下,不通过桌面小插件或者网络查询天气预报的方法? -For Linux desktop users, there are many ways to access weather forecasts, e.g., using standalone weather apps, desktop widgets, or panel applets. If your work environment is terminal-based, there are also several ways to access weather forecasts from the command line. +对于Linux 桌面用户来说,有很多办法获取天气预报,比如使用专门的天气应用,桌面小插件,或者面板小程序。但是如果你的工作环境实际与终端的,这里也有一些在命令行下获取天气的手段。 -Among them is [wego][1], **a cute little weather app for the terminal**. Using an ncurses-based fancy interface, this command-line app allows you to see current weather conditions and forecasts at a glance. It retrieves the weather forecasts for the next 5 days via a weather forecast API. +其中有一个就是 [wego][1],**一个终端下的小巧程序**。使用基于ncurses 的接口,这个命令行程序允许你查看当前的天气情况和之后的预报。它也会通过一个天气预报的API 收集接下来5 天的天气预报。 -### Install Wego on Linux ### - -Installation of wego is pretty simple. wego is written in Go language, thus the first step is to [install Go language][2]. After installing Go, proceed to install wego as follows. 
+### 在Linux 下安装Wego ### +安装wego 相当简单。wego 是用Go 编写的,引起第一个步骤就是安装[Go 语言][2]。然后再安装wego。 $ go get github.com/schachmat/wego -The wego tool will be installed under $GOPATH/bin. So add $GOPATH/bin to your $PATH variable. +wego 会被安装到$GOPATH/bin,所以要将$GOPATH/bin 添加到$PATH 环境变量。 $ echo 'export PATH="$PATH:$GOPATH/bin"' >> ~/.bashrc $ source ~/.bashrc -Now go ahead and invoke wego from the command line. +现在就可与直接从命令行启动wego 了。 $ wego -The first time you run wego, it will generate a config file (~/.wegorc), where you need to specify a weather API key. - -You can obtain a free API key from [worldweatheronline.com][3]. Free sign-up is quick and easy. You only need a valid email address. +第一次运行weg 会生成一个配置文件(~/.wegorc),你需要指定一个天气API key。 +你可以从[worldweatheronline.com][3] 获取一个免费的API key。免费注册和使用。你只需要提供一个有效的邮箱地址。 ![](https://farm6.staticflickr.com/5781/21317466341_5a368b0d26_c.jpg) -Your .wegorc will look like the following. +你的 .wegorc 配置文件看起来会这样: ![](https://farm6.staticflickr.com/5620/21121418558_df0d27cd0a_b.jpg) -Other than API key, you can specify in ~/.wegorc your preferred location, use of metric/imperial units, and language. - -Note that the weather API is rate-limited; 5 queries per second, and 250 queries per day. - -When you invoke wego command again, you will see the latest weather forecast (of your preferred location), shown as follows. +除了API key,你还可以把你想要查询天气的地方、使用的城市/国家名称、语言配置在~/.wegorc 中。 +注意,这个天气API 的使用有限制:每秒最多5 次查询,每天最多250 次查询。 +当你重新执行wego 命令,你将会看到最新的天气预报(当然是你的指定地方),如下显示。 ![](https://farm6.staticflickr.com/5776/21121218110_dd51e03ff4_c.jpg) -The displayed weather information includes: (1) temperature, (2) wind direction and speed, (3) viewing distance, and (4) precipitation amount and probability. - -By default, it will show 3-day weather forecast. To change this behavior, you can supply the number of days (upto five) as an argument. 
For example, to see 5-day forecast: +显示出来的天气信息包括:(1)温度,(2)风速和风向,(3)可视距离,(4)降水量和降水概率 +默认情况下会显示3 天的天气预报。如果要进行修改,可以通过参数改变天气范围(最多5天),比如要查看5 天的天气预报: $ wego 5 -If you want to check the weather of any other location, you can specify the city name. +如果你想检查另一个地方的天气,只需要提供城市名即可: $ wego Seattle -### Troubleshooting ### - -1. You encounter the following error while running wego. +### 问题解决 ### +1. 可能会遇到下面的错误: user: Current not implemented on linux/amd64 -This error can happen when you run wego on a platform which is not supported by the native Go compiler gc (e.g., Fedora). In that case, you can compile the program using gccgo, a compiler-frontend for Go language. This can be done as follows. - + 当你在一个不支持原生Go 编译器的环境下运行wego 时就会出现这个错误。在这种情况下你只需要使用gccgo ——一个Go 的编译器前端来编译程序即可。这一步可以通过下面的命令完成。 + $ sudo yum install gcc-go $ go get -compiler=gccgo github.com/schachmat/wego @@ -67,7 +59,7 @@ This error can happen when you run wego on a platform which is not supported by via: http://ask.xmodulo.com/weather-forecasts-command-line-linux.html 作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/oska874) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 287e371a44946a65ca0a286497a69d1f0926ba66 Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 16 Sep 2015 23:32:42 +0800 Subject: [PATCH 530/697] Update 20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md --- ... 
How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md index ba828d629d..bceee2953b 100644 --- a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md +++ b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md @@ -1,7 +1,5 @@ -translating by ezio - -How to Setup Node JS v4.0.0 on Ubuntu 14.04 / 15.04 +在ubunt 14.04/15.04 上配置Node JS v4.0.0 ================================================================================ Hi everyone, Node.JS Version 4.0.0 has been out, the popular server-side JavaScript platform has combines the Node.js and io.js code bases. This release represents the combined efforts encapsulated in both the Node.js project and the io.js project that are now combined in a single codebase. The most important change is this Node.js is ships with version 4.5 of Google's V8 JavaScript engine, which is the same version that ships with the current Chrome browser. So, being able to more closely track V8’s releases means Node.js runs JavaScript faster, more securely, and with the ability to use many desirable ES6 language features. @@ -96,7 +94,7 @@ That’s it. 
Hope this gives you a good idea of Node.js going with Node.js on Ub via: http://linoxide.com/ubuntu-how-to/setup-node-js-4-0-ubuntu-14-04-15-04/ 作者:[Kashif Siddique][a] -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/osk874) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From cad2df606325ac090f8ff9c24aef513919a1877c Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 16 Sep 2015 23:33:58 +0800 Subject: [PATCH 531/697] Create 20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md --- ...orecasts from the command line on Linux.md | 70 +++++++++++++++++++ 1 file changed, 70 insertions(+) create mode 100644 translated/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md diff --git a/translated/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md b/translated/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md new file mode 100644 index 0000000000..b7751e118a --- /dev/null +++ b/translated/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md @@ -0,0 +1,70 @@ +Linux 问与答:如何在Linux 命令行下浏览天气预报 +================================================================================ +> **Q**: 我经常在Linux 桌面查看天气预报。然而,是否有一种在终端环境下,不通过桌面小插件或者网络查询天气预报的方法? 
+ +对于Linux 桌面用户来说,有很多办法获取天气预报,比如使用专门的天气应用,桌面小插件,或者面板小程序。但是如果你的工作环境是基于终端的,这里也有一些在命令行下获取天气的手段。 + +其中有一个就是 [wego][1],**一个终端下的小巧程序**。它使用基于ncurses 的界面,这个命令行程序允许你查看当前的天气情况和之后的预报。它也会通过一个天气预报的API 收集接下来5 天的天气预报。 + +### 在Linux 下安装Wego ### +安装wego 相当简单。wego 是用Go 编写的,因此第一步就是安装[Go 语言][2]。然后再安装wego。 + + $ go get github.com/schachmat/wego + +wego 会被安装到$GOPATH/bin,所以要将$GOPATH/bin 添加到$PATH 环境变量。 + + $ echo 'export PATH="$PATH:$GOPATH/bin"' >> ~/.bashrc + $ source ~/.bashrc + +现在就可以直接从命令行启动wego 了。 + + $ wego + +第一次运行wego 会生成一个配置文件(~/.wegorc),你需要指定一个天气API key。 +你可以从[worldweatheronline.com][3] 获取一个免费的API key。注册和使用都是免费的。你只需要提供一个有效的邮箱地址。 + +![](https://farm6.staticflickr.com/5781/21317466341_5a368b0d26_c.jpg) + +你的 .wegorc 配置文件看起来会这样: + +![](https://farm6.staticflickr.com/5620/21121418558_df0d27cd0a_b.jpg) + +除了API key,你还可以把你想要查询天气的地方、使用的城市/国家名称、语言配置在~/.wegorc 中。 +注意,这个天气API 的使用有限制:每秒最多5 次查询,每天最多250 次查询。 +当你重新执行wego 命令,你将会看到最新的天气预报(当然是你指定的地方),如下显示。 + +![](https://farm6.staticflickr.com/5776/21121218110_dd51e03ff4_c.jpg) + +显示出来的天气信息包括:(1)温度,(2)风速和风向,(3)可视距离,(4)降水量和降水概率。 +默认情况下会显示3 天的天气预报。如果要进行修改,可以通过参数改变天气范围(最多5天),比如要查看5 天的天气预报: + + $ wego 5 + +如果你想检查另一个地方的天气,只需要提供城市名即可: + + $ wego Seattle + +### 问题解决 ### +1. 
可能会遇到下面的错误: + + user: Current not implemented on linux/amd64 + + 当你在一个不支持原生Go 编译器的环境下运行wego 时就会出现这个错误。在这种情况下你只需要使用gccgo ——一个Go 的编译器前端来编译程序即可。这一步可以通过下面的命令完成。 + + $ sudo yum install gcc-go + $ go get -compiler=gccgo github.com/schachmat/wego + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/weather-forecasts-command-line-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/oska874) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:https://github.com/schachmat/wego +[2]:http://ask.xmodulo.com/install-go-language-linux.html +[3]:https://developer.worldweatheronline.com/auth/register From 6f732f683c488f29ef40f3ca0bcf4fc4ab4343db Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 16 Sep 2015 23:34:28 +0800 Subject: [PATCH 532/697] Delete 20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md --- ...orecasts from the command line on Linux.md | 70 ------------------- 1 file changed, 70 deletions(-) delete mode 100644 sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md diff --git a/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md b/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md deleted file mode 100644 index b7751e118a..0000000000 --- a/sources/tech/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md +++ /dev/null @@ -1,70 +0,0 @@ -Linux 问与答:如何在Linux 命令行下浏览天气预报 -================================================================================ -> **Q**: 我经常在Linux 桌面查看天气预报。然而,是否有一种在终端环境下,不通过桌面小插件或者网络查询天气预报的方法? 
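The free-tier limits quoted above (at most 5 queries per second and 250 per day) make it worth avoiding redundant calls. Below is a minimal caching sketch; it is not part of wego itself, and the `weather` function name, the cache path, and the 15-minute freshness window are all invented for illustration. It assumes a `wego` executable is on `$PATH`.

```shell
# Hypothetical wrapper around wego (not part of the tool): reuse the
# last forecast for up to 15 minutes so repeated checks stay well
# inside the free API tier (5 queries/second, 250 queries/day).
# For brevity a single cache file is shared by all argument sets.
weather() {
    cache="${TMPDIR:-/tmp}/wego-cache-$(id -u)"
    # refresh when no usable cache exists or it is older than 15 minutes
    if [ ! -s "$cache" ] || [ -n "$(find "$cache" -mmin +15 2>/dev/null)" ]; then
        wego "$@" > "$cache" || { rm -f "$cache"; return 1; }
    fi
    cat "$cache"
}
```

Calling `weather` twice in quick succession then performs only one real query; deleting the cache file forces a refresh.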
- -对于Linux 桌面用户来说,有很多办法获取天气预报,比如使用专门的天气应用,桌面小插件,或者面板小程序。但是如果你的工作环境实际与终端的,这里也有一些在命令行下获取天气的手段。 - -其中有一个就是 [wego][1],**一个终端下的小巧程序**。使用基于ncurses 的接口,这个命令行程序允许你查看当前的天气情况和之后的预报。它也会通过一个天气预报的API 收集接下来5 天的天气预报。 - -### 在Linux 下安装Wego ### -安装wego 相当简单。wego 是用Go 编写的,引起第一个步骤就是安装[Go 语言][2]。然后再安装wego。 - - $ go get github.com/schachmat/wego - -wego 会被安装到$GOPATH/bin,所以要将$GOPATH/bin 添加到$PATH 环境变量。 - - $ echo 'export PATH="$PATH:$GOPATH/bin"' >> ~/.bashrc - $ source ~/.bashrc - -现在就可与直接从命令行启动wego 了。 - - $ wego - -第一次运行weg 会生成一个配置文件(~/.wegorc),你需要指定一个天气API key。 -你可以从[worldweatheronline.com][3] 获取一个免费的API key。免费注册和使用。你只需要提供一个有效的邮箱地址。 - -![](https://farm6.staticflickr.com/5781/21317466341_5a368b0d26_c.jpg) - -你的 .wegorc 配置文件看起来会这样: - -![](https://farm6.staticflickr.com/5620/21121418558_df0d27cd0a_b.jpg) - -除了API key,你还可以把你想要查询天气的地方、使用的城市/国家名称、语言配置在~/.wegorc 中。 -注意,这个天气API 的使用有限制:每秒最多5 次查询,每天最多250 次查询。 -当你重新执行wego 命令,你将会看到最新的天气预报(当然是你的指定地方),如下显示。 - -![](https://farm6.staticflickr.com/5776/21121218110_dd51e03ff4_c.jpg) - -显示出来的天气信息包括:(1)温度,(2)风速和风向,(3)可视距离,(4)降水量和降水概率 -默认情况下会显示3 天的天气预报。如果要进行修改,可以通过参数改变天气范围(最多5天),比如要查看5 天的天气预报: - - $ wego 5 - -如果你想检查另一个地方的天气,只需要提供城市名即可: - - $ wego Seattle - -### 问题解决 ### -1. 
可能会遇到下面的错误: - - user: Current not implemented on linux/amd64 - - 当你在一个不支持原生Go 编译器的环境下运行wego 时就会出现这个错误。在这种情况下你只需要使用gccgo ——一个Go 的编译器前端来编译程序即可。这一步可以通过下面的命令完成。 - - $ sudo yum install gcc-go - $ go get -compiler=gccgo github.com/schachmat/wego - --------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/weather-forecasts-command-line-linux.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/oska874) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni -[1]:https://github.com/schachmat/wego -[2]:http://ask.xmodulo.com/install-go-language-linux.html -[3]:https://developer.worldweatheronline.com/auth/register From 7ca9ec892bbf40a5c18426715afdc918b01342ff Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Thu, 17 Sep 2015 08:29:05 +0800 Subject: [PATCH 533/697] Create 20150916 Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice.md --- ...icrosoft Office in Favor of LibreOffice.md | 30 +++++++++++++++++++ 1 file changed, 30 insertions(+) create mode 100644 sources/talk/20150916 Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice.md diff --git a/sources/talk/20150916 Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice.md b/sources/talk/20150916 Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice.md new file mode 100644 index 0000000000..f47352ed26 --- /dev/null +++ b/sources/talk/20150916 Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice.md @@ -0,0 +1,30 @@ +Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice +================================================================================ +>**LibreItalia's Italo Vignoli [reports][1] that the Italian Ministry of Defense is about to migrate to the LibreOffice open-source software for productivity and 
adopt the Open Document Format (ODF), while moving away from proprietary software products.** + +The movement comes in the form of a [collaboration][1] between Italy's Ministry of Defense and the LibreItalia Association. Sonia Montegiove, President of the LibreItalia Association, and Ruggiero Di Biase, Rear Admiral and General Executive Manager of Automated Information Systems of the Ministry of Defense in Italy, signed a collaboration agreement to adopt the LibreOffice office suite in all of the Ministry's offices. + +While the LibreItalia non-profit organization promises to help the Italian Ministry of Defense with trainers for their offices across the country, the Ministry will start the implementation of the LibreOffice software in October 2015 with online training courses for their staff. The entire transition process is expected to be completed by the end of 2016. An Italian law lets officials find open source software alternatives to well-known commercial software. + +"Under the agreement, the Italian Ministry of Defense will develop educational content for a series of online training courses on LibreOffice, which will be released to the community under Creative Commons, while the partners, LibreItalia, will manage voluntarily the communication and training of trainers in the Ministry," says Italo Vignoli, Honorary President of LibreItalia. + +### The Ministry of Defense will adopt the Open Document Format (ODF) + +The initiative will allow the Italian Ministry of Defense to be independent from proprietary software applications, which are aimed at individual productivity, and adopt open source document format standards like the Open Document Format (ODF), which is used by default in the LibreOffice office suite. The project follows similar movements already made by the governments of other European countries, including the United Kingdom, France, Spain, Germany, and the Netherlands. 
+ +It would appear that numerous other public institutions all over Italy are using open source alternatives, including the Italian Region Emilia Romagna, Galliera Hospital in Genoa, Macerata, Cremona, Trento and Bolzano, Perugia, the municipalities of Bologna, ASL 5 of Veneto, Piacenza and Reggio Emilia, and many others. AGID (Agency for Digital Italy) welcomes this project and hopes that other public institutions will do the same. + + +-------------------------------------------------------------------------------- + +via: http://news.softpedia.com/news/italy-s-ministry-of-defense-to-drop-microsoft-office-in-favor-of-libreoffice-491850.shtml + +作者:[Marius Nestor][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://news.softpedia.com/editors/browse/marius-nestor +[1]:http://www.libreitalia.it/accordo-di-collaborazione-tra-associazione-libreitalia-onlus-e-difesa-per-ladozione-del-prodotto-libreoffice-quale-pacchetto-di-produttivita-open-source-per-loffice-automation/ +[2]:http://www.libreitalia.it/chi-siamo/ From bcaee6786ef129f9ada0f9a52ec0356dd40342ca Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 17 Sep 2015 09:50:16 +0800 Subject: [PATCH 534/697] translating --- ...Based Open Source OS Runs 42 Percent of Dell PCs in China.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md b/sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md index 7368a21b70..1961850b65 100644 --- a/sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md +++ b/sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md @@ -1,3 +1,5 @@ +translating---geekpi + Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China 
================================================================================ > Dell says that 42 percent of the PCs it sells in the Chinese market run Kylin, an open source operating system based on Ubuntu Linux that Canonical helped to create. From 80cde8ca85bde6d3b09f9047719993705abd0763 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 17 Sep 2015 10:27:04 +0800 Subject: [PATCH 535/697] translated --- ...OS Runs 42 Percent of Dell PCs in China.md | 43 ------------------- ...OS Runs 42 Percent of Dell PCs in China.md | 41 ++++++++++++++++++ 2 files changed, 41 insertions(+), 43 deletions(-) delete mode 100644 sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md create mode 100644 translated/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md diff --git a/sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md b/sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md deleted file mode 100644 index 1961850b65..0000000000 --- a/sources/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md +++ /dev/null @@ -1,43 +0,0 @@ -translating---geekpi - -Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China -================================================================================ -> Dell says that 42 percent of the PCs it sells in the Chinese market run Kylin, an open source operating system based on Ubuntu Linux that Canonical helped to create. - - Open source fans, rejoice: The Year of the Linux Desktop has arrived. Or something close to it is on the horizon in China, at least, where [Dell][1] has reported that more than 40 percent of the PCs it sells run a variant of [Ubuntu Linux][2] that [Canonical][3] helped develop. 
- - Specifically, Dell said that 42 percent of computers in China run NeoKylin, an operating system that originated as an effort in China to build a home-grown alternative to [Microsoft][4] (MSFT) Windows. Also known simply Kylin, the OS has been based on Ubuntu since 2013, when Canonical began collaborating with the Chinese government to create an Ubuntu variant tailored for the Chinese market. - - Earlier versions of Kylin, which has been around since 2001, were based on other operating systems, including FreeBSD, an open source Unix-like operating system that is distinct from Linux. - - Ubuntu Kylin looks and feels a lot like modern versions of Ubuntu proper. It sports the [Unity][5] interface and runs the standard suite of open source apps, as well as specialized ones such as Youker Assistant, a graphical front end that helps users manage basic computing tasks. Kylin's default theme makes it look just a little more like Windows than stock Ubuntu, however. - - Given the relative stagnation of the market for desktop Linux PCs in most of the world, Dell's announcement is striking. And in light of China's [hostility][6] toward modern editions of Windows, the news does not bode well for Microsoft's prospects in the Chinese market. - - Dell's comment on Linux PC sales in China—which appeared in the form of a statement by an executive to the Wall Street Journal—comes on the heels of the company's [announcement][7] of $125 million of new investment in China. 
- ![Ubuntu Kylin](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/09/hey_2.png) - - - - - - - --------------------------------------------------------------------------------- - -via: http://thevarguy.com/open-source-application-software-companies/091515/ubuntu-linux-based-open-source-os-runs-42-percent-dell-pc - -作者:[Christopher Tozzi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://thevarguy.com/author/christopher-tozzi -[1]:http://dell.com/ -[2]:http://ubuntu.com/ -[3]:http://canonical.com/ -[4]:http://microsoft.com/ -[5]:http://unity.ubuntu.com/ -[6]:http://www.wsj.com/articles/windows-8-faces-new-criticism-in-china-1401882772 -[7]:http://thevarguy.com/business-technology-solution-sales/091415/dell-125-million-directed-china-jobs-new-business-and-innovation diff --git a/translated/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md b/translated/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md new file mode 100644 index 0000000000..eea7af0368 --- /dev/null +++ b/translated/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md @@ -0,0 +1,41 @@ +基于Linux的Ubuntu开源操作系统在中国42%的Dell PC上运行 +================================================================================ +> Dell称它在中国市场出售的42%的PC运行的是Kylin,一款Canonical帮助创建的基于Ubuntu的操作系统。 + + 让开源粉丝欢喜的是:Linux桌面年来了。或者说中国正在接近这个目标,[Dell][1]报告称超过40%售卖的PC机运行的是 [Canonical][3]帮助开发的[Ubuntu Linux][2]。 + + 特别地,Dell称42%的中国电脑运行NeoKylin,一款中国本土倾力打造的用于替代[Microsoft][4] (MSFT) Windows的操作系统。它也简称麒麟,一款从2013年出来的基于Ubuntu的操作系统,也是这年开始Canonical公司与中国政府合作来建立一个专为中国市场Ubuntu变种。 + + 2001年左右早期版本的麒麟,都是基于其他操作系统,包括FreeBSD,一个开放源码的区别于Linux的类Unix操作系统。 + + 
Ubuntu的麒麟的外观和感觉很像Ubuntu的现代版本。它拥有的[Unity][5]界面,并运行标准开源套件,以及专门的如Youker助理程序,它是一个图形化的前端,帮助用户管理的基本计算任务。但是麒麟的默认主题使得它看起来有点像Windows而不是Ubuntu。 + + 鉴于桌面Linux PC市场在世界上大多数国家的相对停滞,戴尔的宣布是惊人的。并结合中国对现代windows的轻微[敌意][6],这个消息并不预示着微软在中国市场的前景。 + + 在Dell公司[宣布][7]在华投资1.25亿美元很快之后一位行政官给华尔街杂志的评论中提到了Dell在中国市场上PC的销售。 + ![Ubuntu Kylin](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/09/hey_2.png) + + + + + + + +-------------------------------------------------------------------------------- + +via: http://thevarguy.com/open-source-application-software-companies/091515/ubuntu-linux-based-open-source-os-runs-42-percent-dell-pc + +作者:[Christopher Tozzi][a] +译者:[geekpi](https://github.com/geeekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://thevarguy.com/author/christopher-tozzi +[1]:http://dell.com/ +[2]:http://ubuntu.com/ +[3]:http://canonical.com/ +[4]:http://microsoft.com/ +[5]:http://unity.ubuntu.com/ +[6]:http://www.wsj.com/articles/windows-8-faces-new-criticism-in-china-1401882772 +[7]:http://thevarguy.com/business-technology-solution-sales/091415/dell-125-million-directed-china-jobs-new-business-and-innovation From de7796e8175d0894822494dd5ec2d3bfe238087d Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 17 Sep 2015 10:29:44 +0800 Subject: [PATCH 536/697] PUB:20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux @strugglingyouth --- ...e number of threads in a process on Linux.md | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) rename {translated/tech => published}/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md (59%) diff --git a/translated/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md b/published/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md 
similarity index 59% rename from translated/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md rename to published/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md index 96bf143533..8667e99712 100644 --- a/translated/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md +++ b/published/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md @@ -1,5 +1,4 @@ - -Linux 有问必答 - 如何在 Linux 中统计一个进程的线程数 +Linux 有问必答:如何在 Linux 中统计一个进程的线程数 ================================================================================ > **问题**: 我正在运行一个程序,它在运行时会派生出多个线程。我想知道程序在运行时会有多少线程。在 Linux 中检查进程的线程数最简单的方法是什么? @@ -7,11 +6,11 @@ Linux 有问必答 - 如何在 Linux 中统计一个进程的线程数 ### 方法一: /proc ### - proc 伪文件系统,它驻留在 /proc 目录,这是最简单的方法来查看任何活动进程的线程数。 /proc 目录以可读文本文件形式输出,提供现有进程和系统硬件相关的信息如 CPU, interrupts, memory, disk, 等等. +proc 伪文件系统,它驻留在 /proc 目录,这是最简单的方法来查看任何活动进程的线程数。 /proc 目录以可读文本文件形式输出,提供现有进程和系统硬件相关的信息如 CPU、中断、内存、磁盘等等. 
$ cat /proc//status -上面的命令将显示进程 的详细信息,包括过程状态(例如, sleeping, running),父进程 PID,UID,GID,使用的文件描述符的数量,以及上下文切换的数量。输出也包括**进程创建的总线程数**如下所示。 +上面的命令将显示进程 \ 的详细信息,包括过程状态(例如, sleeping, running),父进程 PID,UID,GID,使用的文件描述符的数量,以及上下文切换的数量。输出也包括**进程创建的总线程数**如下所示。 Threads: @@ -23,11 +22,11 @@ Linux 有问必答 - 如何在 Linux 中统计一个进程的线程数 输出表明该进程有28个线程。 -或者,你可以在 /proc//task 中简单的统计目录的数量,如下所示。 +或者,你可以在 /proc//task 中简单的统计子目录的数量,如下所示。 $ ls /proc//task | wc -这是因为,对于一个进程中创建的每个线程,在 /proc//task 中会创建一个相应的目录,命名为其线程 ID。由此在 /proc//task 中目录的总数表示在进程中线程的数目。 +这是因为,对于一个进程中创建的每个线程,在 `/proc//task` 中会创建一个相应的目录,命名为其线程 ID。由此在 `/proc//task` 中目录的总数表示在进程中线程的数目。 ### 方法二: ps ### @@ -35,7 +34,7 @@ Linux 有问必答 - 如何在 Linux 中统计一个进程的线程数 $ ps hH p | wc -l -如果你想监视一个进程的不同线程消耗的硬件资源(CPU & memory),请参阅[此教程][1]。(注:此文我们翻译过) +如果你想监视一个进程的不同线程消耗的硬件资源(CPU & memory),请参阅[此教程][1]。 -------------------------------------------------------------------------------- @@ -43,9 +42,9 @@ via: http://ask.xmodulo.com/number-of-threads-process-linux.html 作者:[Dan Nanni][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://ask.xmodulo.com/author/nanni -[1]:http://ask.xmodulo.com/view-threads-process-linux.html +[1]:https://linux.cn/article-5633-1.html From 46a74b72f1a7ea9c3aee03c43e8985368475a588 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 17 Sep 2015 10:44:29 +0800 Subject: [PATCH 537/697] =?UTF-8?q?20150917-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ository with 44 Years of Unix Evolution.md | 202 ++++++++++++++++++ 1 file changed, 202 insertions(+) create mode 100644 sources/tech/20150917 A Repository with 44 Years of Unix Evolution.md diff --git a/sources/tech/20150917 A Repository with 44 Years of Unix Evolution.md b/sources/tech/20150917 A Repository with 44 Years of Unix Evolution.md new 
file mode 100644 index 0000000000..807cedf01d --- /dev/null +++ b/sources/tech/20150917 A Repository with 44 Years of Unix Evolution.md @@ -0,0 +1,202 @@ +A Repository with 44 Years of Unix Evolution +================================================================================ +### Abstract ### + +The evolution of the Unix operating system is made available as a version-control repository, covering the period from its inception in 1972 as a five thousand line kernel, to 2015 as a widely-used 26 million line system. The repository contains 659 thousand commits and 2306 merges. The repository employs the commonly used Git system for its storage, and is hosted on the popular GitHub archive. It has been created by synthesizing with custom software 24 snapshots of systems developed at Bell Labs, Berkeley University, and the 386BSD team, two legacy repositories, and the modern repository of the open source FreeBSD system. In total, 850 individual contributors are identified, the early ones through primary research. The data set can be used for empirical research in software engineering, information systems, and software archaeology. + +### 1 Introduction ### + +The Unix operating system stands out as a major engineering breakthrough due to its exemplary design, its numerous technical contributions, its development model, and its widespread use. The design of the Unix programming environment has been characterized as one offering unusual simplicity, power, and elegance [[1][1]]. On the technical side, features that can be directly attributed to Unix or were popularized by it include [[2][2]]: the portable implementation of the kernel in a high level language; a hierarchical file system; compatible file, device, networking, and inter-process I/O; the pipes and filters architecture; virtual file systems; and the shell as a user-selectable regular process. A large community contributed software to Unix from its early days [[3][3]], [[4][4],pp. 65-72]. 
This community grew immensely over time and worked using what are now termed open source software development methods [[5][5],pp. 440-442]. Unix and its intellectual descendants have also helped the spread of the C and C++ programming languages, parser and lexical analyzer generators (*yacc, lex*), document preparation tools (*troff, eqn, tbl*), scripting languages (*awk, sed, Perl*), TCP/IP networking, and configuration management systems (*SCCS, RCS, Subversion, Git*), while also forming a large part of the modern internet infrastructure and the web. + +Luckily, important Unix material of historical importance has survived and is nowadays openly available. Although Unix was initially distributed with relatively restrictive licenses, the most significant parts of its early development have been released by one of its right-holders (Caldera International) under a liberal license. Combining these parts with software that was developed or released as open source software by the University of California, Berkeley and the FreeBSD Project provides coverage of the system's development over a period ranging from June 20th 1972 until today. + +Curating and processing available snapshots as well as old and modern configuration management repositories allows the reconstruction of a new synthetic Git repository that combines under a single roof most of the available data. This repository documents in a digital form the detailed evolution of an important digital artefact over a period of 44 years. The following sections describe the repository's structure and contents (Section [II][6]), the way it was created (Section [III][7]), and how it can be used (Section [IV][8]). + +### 2 Data Overview ### + +The 1GB Unix history Git repository is made available for cloning on [GitHub][9].[1][10] Currently[2][11] the repository contains 659 thousand commits and 2306 merges from about 850 contributors. 
The contributors include 23 from the Bell Labs staff, 158 from Berkeley's Computer Systems Research Group (CSRG), and 660 from the FreeBSD Project. + +The repository starts its life at a tag identified as *Epoch*, which contains only licensing information and its modern README file. Various tag and branch names identify points of significance. + +- *Research-VX* tags correspond to six research editions that came out of Bell Labs. These start with *Research-V1* (4768 lines of PDP-11 assembly) and end with *Research-V7* (1820 mostly C files, 324kLOC). +- *Bell-32V* is the port of the 7th Edition Unix to the DEC/VAX architecture. +- *BSD-X* tags correspond to 15 snapshots released from Berkeley. +- *386BSD-X* tags correspond to two open source versions of the system, with the Intel 386 architecture kernel code mainly written by Lynne and William Jolitz. +- *FreeBSD-release/X* tags and branches mark 116 releases coming from the FreeBSD project. + +In addition, branches with a *-Snapshot-Development* suffix denote commits that have been synthesized from a time-ordered sequence of a snapshot's files, while tags with a *-VCS-Development* suffix mark the point along an imported version control history branch where a particular release occurred. + +The repository's history includes commits from the earliest days of the system's development, such as the following. + + commit c9f643f59434f14f774d61ee3856972b8c3905b1 + Author: Dennis Ritchie + Date: Mon Dec 2 18:18:02 1974 -0500 + Research V5 development + Work on file usr/sys/dmr/kl.c + +Merges between releases that happened along the system's evolution, such as the development of BSD 3 from BSD 2 and Unix 32/V, are also correctly represented in the Git repository as graph nodes with two parents. 
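The two-parent merge representation described above can be reproduced on a throwaway repository. This is only a sketch: the repository, branch name, files, and commit messages below are invented, not taken from the actual unix-history-repo.

```shell
# Build a toy repository whose release merge, like "BSD 3 from BSD 2
# and Unix 32/V", becomes a graph node with two parents.
set -e
work=$(mktemp -d)
git init -q "$work" && cd "$work"
git config user.name  "Demo User"
git config user.email "demo@example.org"
base=$(git symbolic-ref --short HEAD)   # default branch name
echo 'v1' > pipe.c
git add pipe.c && git commit -qm "snapshot import"
git checkout -qb bsd-development        # one line of development
echo 'bsd work' > tty.c
git add tty.c && git commit -qm "BSD development"
git checkout -q "$base"                 # the other line of development
echo 'v2' >> pipe.c
git commit -qam "mainline development"
git merge -q --no-ff -m "release merge" bsd-development
# the raw merge commit object carries one "parent" line per ancestor:
git cat-file -p HEAD | grep -c '^parent '    # prints 2
```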
+ +More importantly, the repository is constructed in a way that allows *git blame*, which annotates source code lines with the version, date, and author associated with their first appearance, to produce the expected code provenance results. For example, checking out the *BSD-4* tag, and running git blame on the kernel's *pipe.c* file will show lines written by Ken Thompson in 1974, 1975, and 1979, and by Bill Joy in 1980. This allows the automatic (though computationally expensive) detection of the code's provenance at any point of time. + +![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/provenance.png) + +Figure 1: Code provenance across significant Unix releases. + +As can be seen in Figure [1][12], a modern version of Unix (FreeBSD 9) still contains visible chunks of code from BSD 4.3, BSD 4.3 Net/2, and FreeBSD 2.0. Interestingly, the Figure shows that code developed during the frantic dash to create an open source operating system out of the code released by Berkeley (386BSD and FreeBSD 1.0) does not seem to have survived. The oldest code in FreeBSD 9 appears to be an 18-line sequence in the C library file timezone.c, which can also be found in the 7th Edition Unix file with the same name and a time stamp of January 10th, 1979 - 36 years ago. + +### 3 Data Collection and Processing ### + +The goal of the project is to consolidate data concerning the evolution of Unix in a form that helps the study of the system's evolution, by entering them into a modern revision repository. This involves collecting the data, curating them, and synthesizing them into a single Git repository. + +![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/branches.png) + +Figure 2: Imported Unix snapshots, repositories, and their mergers. + +The project is based on three types of data (see Figure [2][13]). 
First, snapshots of early released versions, which were obtained from the [Unix Heritage Society archive][14],[3][15] the [CD-ROM images][16] containing the full source archives of CSRG,[4][17] the [OldLinux site][18],[5][19] and the [FreeBSD archive][20].[6][21] Second, past and current repositories, namely the CSRG SCCS [[6][22]] repository, the FreeBSD 1 CVS repository, and the [Git mirror of modern FreeBSD development][23].[7][24] The first two were obtained from the same sources as the corresponding snapshots. + +The last, and most labour intensive, source of data was **primary research**. The release snapshots do not provide information regarding their ancestors and the contributors of each file. Therefore, these pieces of information had to be determined through primary research. The authorship information was mainly obtained by reading author biographies, research papers, internal memos, and old documentation scans; by reading and automatically processing source code and manual page markup; by communicating via email with people who were there at the time; by posting a query on the Unix *StackExchange* site; by looking at the location of files (in early editions the kernel source code was split into `usr/sys/dmr` and `/usr/sys/ken`); and by propagating authorship from research papers and manual pages to source code and from one release to others. (Interestingly, the 1st and 2nd Research Edition manual pages have an "owner" section, listing the person (e.g. *ken*) associated with the corresponding system command, file, system call, or library function. This section was not there in the 4th Edition, and resurfaced as the "Author" section in BSD releases.) Precise details regarding the source of the authorship information are documented in the project's files that are used for mapping Unix source code files to their authors and the corresponding commit messages. 
Finally, information regarding merges between source code bases was obtained from a [BSD family tree maintained by the NetBSD project][25].[8][26] + +The software and data files that were developed as part of this project, are [available online][27],[9][28] and, with appropriate network, CPU and disk resources, they can be used to recreate the repository from scratch. The authorship information for major releases is stored in files under the project's `author-path` directory. These contain lines with a regular expressions for a file path followed by the identifier of the corresponding author. Multiple authors can also be specified. The regular expressions are processed sequentially, so that a catch-all expression at the end of the file can specify a release's default authors. To avoid repetition, a separate file with a `.au` suffix is used to map author identifiers into their names and emails. One such file has been created for every community associated with the system's evolution: Bell Labs, Berkeley, 386BSD, and FreeBSD. For the sake of authenticity, emails for the early Bell Labs releases are listed in UUCP notation (e.g. `research!ken`). The FreeBSD author identifier map, required for importing the early CVS repository, was constructed by extracting the corresponding data from the project's modern Git repository. In total the commented authorship files (828 rules) comprise 1107 lines, and there are another 640 lines mapping author identifiers to names. + +The curation of the project's data sources has been codified into a 168-line `Makefile`. It involves the following steps. + +**Fetching** Copying and cloning about 11GB of images, archives, and repositories from remote sites. + +**Tooling** Obtaining an archiver for old PDP-11 archives from 2.9 BSD, and adjusting it to compile under modern versions of Unix; compiling the 4.3 BSD *compress* program, which is no longer part of modern Unix systems, in order to decompress the 386BSD distributions. 
+ +**Organizing** Unpacking archives using tar and *cpio*; combining three 6th Research Edition directories; unpacking all 1 BSD archives using the old PDP-11 archiver; mounting CD-ROM images so that they can be processed as file systems; combining the 8 and 62 386BSD floppy disk images into two separate files. + +**Cleaning** Restoring the 1st Research Edition kernel source code files, which were obtained from printouts through optical character recognition, into a format close to their original state; patching some 7th Research Edition source code files; removing metadata files and other files that were added after a release, to avoid obtaining erroneous time stamp information; patching corrupted SCCS files; processing the early FreeBSD CVS repository by removing CVS symbols assigned to multiple revisions with a custom Perl script, deleting CVS *Attic* files clashing with live ones, and converting the CVS repository into a Git one using *cvs2svn*. + +An interesting part of the repository representation is how snapshots are imported and linked together in a way that allows *git blame* to perform its magic. Snapshots are imported into the repository as sequential commits based on the time stamp of each file. When all files have been imported the repository is tagged with the name of the corresponding release. At that point one could delete those files, and begin the import of the next snapshot. Note that the *git blame* command works by traversing backwards a repository's history, and using heuristics to detect code moving and being copied within or across files. Consequently, deleted snapshots would create a discontinuity between them, and prevent the tracing of code between them. + +Instead, before the next snapshot is imported, all the files of the preceding snapshot are moved into a hidden look-aside directory named `.ref` (reference). They remain there, until all files of the next snapshot have been imported, at which point they are deleted. 
Because every file in the `.ref` directory matches exactly an original file, *git blame* can determine how source code moves from one version to the next via the `.ref` file, without ever displaying the `.ref` file. To further help the detection of code provenance, and to increase the representation's realism, each release is represented as a merge between the branch with the incremental file additions (*-Development*) and the preceding release. + +For a period in the 1980s, only a subset of the files developed at Berkeley were under SCCS version control. During that period our unified repository contains imports of both the SCCS commits, and the snapshots' incremental additions. At the point of each release, the SCCS commit with the nearest time stamp is found and is marked as a merge with the release's incremental import branch. These merges can be seen in the middle of Figure [2][29]. + +The synthesis of the various data sources into a single repository is mainly performed by two scripts. A 780-line Perl script (`import-dir.pl`) can export the (real or synthesized) commit history from a single data source (snapshot directory, SCCS repository, or Git repository) in the *Git fast export* format. The output is a simple text format that Git tools use to import and export commits. Among other things, the script takes as arguments the mapping of files to contributors, the mapping between contributor login names and their full names, the commit(s) from which the import will be merged, which files to process and which to ignore, and the handling of "reference" files. A 450-line shell script creates the Git repository and calls the Perl script with appropriate arguments to import each one of the 27 available historical data sources. 
The shell script also runs 30 tests that compare the repository at specific tags against the corresponding data sources, verify the appearance and disappearance of look-aside directories, and look for regressions in the count of tree branches and merges and the output of *git blame* and *git log*. Finally, *git* is called to garbage-collect and compress the repository from its initial 6GB size down to the distributed 1GB. + +### 4 Data Uses ### + +The data set can be used for empirical research in software engineering, information systems, and software archeology. Through its unique uninterrupted coverage of a period of more than 40 years, it can inform work on software evolution and handovers across generations. With thousandfold increases in processing speed and million-fold increases in storage capacity during that time, the data set can also be used to study the co-evolution of software and hardware technology. The move of the software's development from research labs, to academia, and to the open source community can be used to study the effects of organizational culture on software development. The repository can also be used to study how notable individuals, such as Turing Award winners (Dennis Ritchie and Ken Thompson) and captains of the IT industry (Bill Joy and Eric Schmidt), actually programmed. Another phenomenon worthy of study concerns the longevity of code, either at the level of individual lines, or as complete systems that were at times distributed with Unix (Ingres, Lisp, Pascal, Ratfor, Snobol, TMG), as well as the factors that lead to code's survival or demise. Finally, because the data set stresses Git, the underlying software repository storage technology, to its limits, it can be used to drive engineering progress in the field of revision management systems. + +![](http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/metrics.png) + +Figure 3: Code style evolution along Unix releases. 
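Each of the plotted code-style metrics reduces a release's entire source tree to a single number. As an illustration only — the tokenizer, keyword list, and sample snippets below are simplifying assumptions, not the study's actual measurement code — one such metric, mean identifier length, can be sketched as:

```python
import re

# Keywords to exclude from the identifier count (an illustrative subset
# of C keywords, not the study's exact definition).
KEYWORDS = {"auto", "char", "else", "for", "goto", "if", "int", "long",
            "register", "return", "static", "struct", "void", "while"}

def mean_identifier_length(source):
    """Mean length of identifier-like tokens in a C source string."""
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source)
    idents = [t for t in tokens if t not in KEYWORDS]
    return sum(len(t) for t in idents) / len(idents) if idents else 0.0

# Made-up snippets contrasting terse early style with a later style.
early = "int c; register int n; goto loop;"
later = "int character_count; static void update_totals(int line_length);"
print(mean_identifier_length(early))  # 2.0  (c, n, loop)
print(mean_identifier_length(later))  # 13.0 (character_count, update_totals, line_length)
```

Applied to every file of every tagged release (for example, by checking each release tag out of the repository), a function like this yields one point per release, which can then be smoothed into trend lines such as those plotted above.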
+ +Figure [3][30], which depicts trend lines (obtained with R's local polynomial regression fitting function) of some interesting code metrics along 36 major releases of Unix, demonstrates the evolution of code style and programming language use over very long timescales. This evolution can be driven by software and hardware technology affordances and requirements, software construction theory, and even social forces. The dates in the Figure have been calculated as the average date of all files appearing in a given release. As can be seen in it, over the past 40 years the mean length of identifiers and file names has steadily increased from 4 and 6 characters to 7 and 11 characters, respectively. We can also see less steady increases in the number of comments and decreases in the use of the *goto* statement, as well as the virtual disappearance of the *register* type modifier. + +### 5 Further Work ### + +Many things can be done to increase the repository's faithfulness and usefulness. Given that the build process is shared as open source code, it is easy to contribute additions and fixes through GitHub pull requests. The most useful community contribution would be to increase the coverage of imported snapshot files that are attributed to a specific author. Currently, about 90 thousand files (out of a total of 160 thousand) are getting assigned an author through a default rule. Similarly, there are about 250 authors (primarily early FreeBSD ones) for which only the identifier is known. Both are listed in the build repository's unmatched directory, and contributions are welcomed. Furthermore, the BSD SCCS and the FreeBSD CVS commits that share the same author and time-stamp can be coalesced into a single Git commit. Support can be added for importing the SCCS file comment fields, in order to bring into the repository the corresponding metadata. 
Finally, and most importantly, more branches of open source systems can be added, such as NetBSD OpenBSD, DragonFlyBSD, and *illumos*. Ideally, current right holders of other important historical Unix releases, such as System III, System V, NeXTSTEP, and SunOS, will release their systems under a license that would allow their incorporation into this repository for study. + +#### Acknowledgements #### + +The author thanks the many individuals who contributed to the effort. Brian W. Kernighan, Doug McIlroy, and Arnold D. Robbins helped with Bell Labs login identifiers. Clem Cole, Era Eriksson, Mary Ann Horton, Kirk McKusick, Jeremy C. Reed, Ingo Schwarze, and Anatole Shaw helped with BSD login identifiers. The BSD SCCS import code is based on work by H. Merijn Brand and Jonathan Gray. + +This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thalis - Athens University of Economics and Business - Software Engineering Research Platform. + +### References ### + +[[1]][31] + M. D. McIlroy, E. N. Pinson, and B. A. Tague, "UNIX time-sharing system: Foreword," *The Bell System Technical Journal*, vol. 57, no. 6, pp. 1899-1904, July-August 1978. + +[[2]][32] + D. M. Ritchie and K. Thompson, "The UNIX time-sharing system," *Bell System Technical Journal*, vol. 57, no. 6, pp. 1905-1929, July-August 1978. + +[[3]][33] + D. M. Ritchie, "The evolution of the UNIX time-sharing system," *AT&T Bell Laboratories Technical Journal*, vol. 63, no. 8, pp. 1577-1593, Oct. 1984. + +[[4]][34] + P. H. Salus, *A Quarter Century of UNIX*. Boston, MA: Addison-Wesley, 1994. + +[[5]][35] + E. S. Raymond, *The Art of Unix Programming*. Addison-Wesley, 2003. + +[[6]][36] + M. J. Rochkind, "The source code control system," *IEEE Transactions on Software Engineering*, vol. SE-1, no. 4, pp. 
255-265, 1975. + +---------- + +#### Footnotes: #### + +[1][37] - [https://github.com/dspinellis/unix-history-repo][38] + +[2][39] - Updates may add or modify material. To ensure replicability the repository's users are encouraged to fork it or archive it. + +[3][40] - [http://www.tuhs.org/archive_sites.html][41] + +[4][42] - [https://www.mckusick.com/csrg/][43] + +[5][44] - [http://www.oldlinux.org/Linux.old/distributions/386BSD][45] + +[6][46] - [http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/][47] + +[7][48] - [https://github.com/freebsd/freebsd][49] + +[8][50] - [http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree][51] + +[9][52] - [https://github.com/dspinellis/unix-history-make][53] + +-------------------------------------------------------------------------------- + +via: http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html + +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#MPT78 +[2]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#RT78 +[3]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Rit84 +[4]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Sal94 +[5]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#Ray03 +[6]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:data +[7]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:dev +[8]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#sec:use +[9]:https://github.com/dspinellis/unix-history-repo +[10]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAB 
+[11]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAC +[12]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:provenance +[13]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches +[14]:http://www.tuhs.org/archive_sites.html +[15]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAD +[16]:https://www.mckusick.com/csrg/ +[17]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAE +[18]:http://www.oldlinux.org/Linux.old/distributions/386BSD +[19]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAF +[20]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/ +[21]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAG +[22]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#SCCS +[23]:https://github.com/freebsd/freebsd +[24]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAH +[25]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree +[26]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAI +[27]:https://github.com/dspinellis/unix-history-make +[28]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFtNtAAJ +[29]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:branches +[30]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#fig:metrics +[31]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITEMPT78 +[32]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERT78 +[33]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERit84 +[34]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESal94 
+[35]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITERay03 +[36]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#CITESCCS +[37]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAB +[38]:https://github.com/dspinellis/unix-history-repo +[39]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAC +[40]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAD +[41]:http://www.tuhs.org/archive_sites.html +[42]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAE +[43]:https://www.mckusick.com/csrg/ +[44]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAF +[45]:http://www.oldlinux.org/Linux.old/distributions/386BSD +[46]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAG +[47]:http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/ +[48]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAH +[49]:https://github.com/freebsd/freebsd +[50]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAI +[51]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree +[52]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAJ +[53]:https://github.com/dspinellis/unix-history-make \ No newline at end of file From 9ec17292c9af74eec62628cdb2d6786ddf5e9d3a Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 17 Sep 2015 10:58:30 +0800 Subject: [PATCH 538/697] PUB:20150824 Basics Of NetworkManager Command Line Tool Nmcli @geekpi --- ... 
NetworkManager Command Line Tool Nmcli.md | 39 +++++++++---------- 1 file changed, 19 insertions(+), 20 deletions(-) rename {translated/tech => published}/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md (76%) diff --git a/translated/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md b/published/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md similarity index 76% rename from translated/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md rename to published/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md index 5ddb31d1ea..19b5a9cd02 100644 --- a/translated/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md +++ b/published/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md @@ -1,14 +1,15 @@ -网络管理命令行工具基础,Nmcli +Nmcli 网络管理命令行工具基础 ================================================================================ + ![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/08/networking1.jpg) ### 介绍 ### -在本教程中,我们会在CentOS / RHEL 7中讨论网络管理工具,也叫**nmcli**。那些使用**ifconfig**的用户应该在CentOS 7中避免使用这个命令。 +在本教程中,我们会在CentOS / RHEL 7中讨论网络管理工具(NetworkManager command line tool),也叫**nmcli**。那些使用**ifconfig**的用户应该在CentOS 7中避免使用**ifconfig** 了。 让我们用nmcli工具配置一些网络设置。 -### 要得到系统中所有接口的地址信息 ### +#### 要得到系统中所有接口的地址信息 #### [root@localhost ~]# ip addr show @@ -27,13 +28,13 @@ inet6 fe80::20c:29ff:fe67:2f4c/64 scope link valid_lft forever preferred_lft forever -#### 检索与连接的接口相关的数据包统计 #### +#### 检索与已连接的接口相关的数据包统计 #### [root@localhost ~]# ip -s link show eno16777736 **示例输出:** -![unxmen_(011)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0111.png) +![](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0111.png) #### 得到路由配置 #### @@ -50,11 +51,11 @@ 输出像traceroute,但是更加完整。 -![unxmen_0121](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_01211.png) +![](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_01211.png) ### nmcli 工具 ### -**Nmcli** 是一个非常丰富和灵活的命令行工具。nmcli使用的情况有: +**nmcli** 
是一个非常丰富和灵活的命令行工具。nmcli使用的情况有: - **设备** – 正在使用的网络接口 - **连接** – 一组配置设置,对于一个单一的设备可以有多个连接,可以在连接之间切换。 @@ -63,7 +64,7 @@ [root@localhost ~]# nmcli connection show -![unxmen_(013)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_013.png) +![](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_013.png) #### 得到特定连接的详情 #### @@ -71,7 +72,7 @@ **示例输出:** -![unxmen_(014)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0141.png) +![](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0141.png) #### 得到网络设备状态 #### @@ -89,7 +90,7 @@ 这里, -- **Connection add** – 添加新的连接 +- **connection add** – 添加新的连接 - **con-name** – 连接名 - **type** – 设备类型 - **ifname** – 接口名 @@ -100,7 +101,7 @@ Connection 'dhcp' (163a6822-cd50-4d23-bb42-8b774aeab9cb) successfully added. -#### 不同过dhcp分配IP,使用“static”添加地址 #### +#### 不通过dhcp分配IP,使用“static”添加地址 #### [root@localhost ~]# nmcli connection add con-name "static" ifname eno16777736 autoconnect no type ethernet ip4 192.168.1.240 gw4 192.168.1.1 @@ -112,25 +113,23 @@ [root@localhost ~]# nmcli connection up eno1 -Again Check, whether ip address is changed or not. 再检查一遍,ip地址是否已经改变 [root@localhost ~]# ip addr show -![unxmen_(015)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0151.png) +![](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0151.png) #### 添加DNS设置到静态连接中 #### [root@localhost ~]# nmcli connection modify "static" ipv4.dns 202.131.124.4 -#### 添加额外的DNS值 #### +#### 添加更多的DNS #### -[root@localhost ~]# nmcli connection modify "static" +ipv4.dns 8.8.8.8 + [root@localhost ~]# nmcli connection modify "static" +ipv4.dns 8.8.8.8 **注意**:要使用额外的**+**符号,并且要是**+ipv4.dns**,而不是**ip4.dns**。 - -添加一个额外的ip地址: +####添加一个额外的ip地址#### [root@localhost ~]# nmcli connection modify "static" +ipv4.addresses 192.168.200.1/24 @@ -138,11 +137,11 @@ Again Check, whether ip address is changed or not. 
[root@localhost ~]# nmcli connection up eno1 -![unxmen_(016)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_016.png) +![](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_016.png) 你会看见,设置生效了。 -完结 +完结。 -------------------------------------------------------------------------------- @@ -150,6 +149,6 @@ via: http://www.unixmen.com/basics-networkmanager-command-line-tool-nmcli/ 作者:Rajneesh Upadhyay 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From a0c5a6e951a91e6266cafdb1e0e1766935f6c7ea Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 17 Sep 2015 17:24:48 +0800 Subject: [PATCH 539/697] Update 20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md --- ... How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md index bceee2953b..63c2f115fa 100644 --- a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md +++ b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md @@ -1,13 +1,13 @@ 在ubunt 14.04/15.04 上配置Node JS v4.0.0 ================================================================================ -Hi everyone, Node.JS Version 4.0.0 has been out, the popular server-side JavaScript platform has combines the Node.js and io.js code bases. This release represents the combined efforts encapsulated in both the Node.js project and the io.js project that are now combined in a single codebase. The most important change is this Node.js is ships with version 4.5 of Google's V8 JavaScript engine, which is the same version that ships with the current Chrome browser. 
So, being able to more closely track V8’s releases means Node.js runs JavaScript faster, more securely, and with the ability to use many desirable ES6 language features. +大家好,Node.JS 4.0 发布了,主流的服务器端JS 平台已经将Node.js 和io.js 结合到一起。4.0 版就是两者结合的产物——共用一个代码库。这次最主要的变化是Node.js 封装了Google V8 4.5 JS 引擎,而这一版与当前的Chrome 一致。所以,紧跟V8 的版本号可以让Node.js 运行的更快、更安全,同时更好的利用ES6 的很多语言特性。 ![Node JS](http://blog.linoxide.com/wp-content/uploads/2015/09/nodejs.png) -Node.js 4.0.0 aims to provide an easy update path for current users of io.js and node as there are no major API changes. Let’s see how you can easily get it installed and setup on Ubuntu server by following this simple article. +Node.js 4.0 的目标是为io.js 当前用户提供一个简单的升级途径,所以这次并没有太多重要的API 变更。剩下的内容会让我们看到如何轻松的在ubuntu server 上安装、配置Node.js。 -### Basic System Setup ### +### 基础系统安装 ### Node works perfectly on Linux, Macintosh, and Solaris operating systems and among the Linux operating systems it has the best results using Ubuntu OS. That's why we are to setup it Ubuntu 15.04 while the same steps can be followed using Ubuntu 14.04. From 2d9aad0530ea5cbe8deedc9dd1f04b29d332ef24 Mon Sep 17 00:00:00 2001 From: Vic020 Date: Thu, 17 Sep 2015 21:50:09 +0800 Subject: [PATCH 540/697] Translated --- ...able Automatic System Updates In Ubuntu.md | 38 +++++++++---------- 1 file changed, 18 insertions(+), 20 deletions(-) diff --git a/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md b/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md index 3d69413a2f..ea320bd6e2 100644 --- a/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md +++ b/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md @@ -1,47 +1,45 @@ - Vic020 - -Enable Automatic System Updates In Ubuntu +开启Ubuntu系统自动升级 ================================================================================ -Before seeing **how to enable automatic system updates in Ubuntu**, first let’s see why should we do it in the first place. 
+在学习如何开启Ubuntu系统自动升级之前,先解释下为什么需要自动升级。

-By default Ubuntu checks for updates daily. When there are security updates, it shows immediately but for other updates (i.e. regular software updates) it pop ups once a week. So, if you have been using Ubuntu for a while, this may be a familiar sight for you:
+默认情况下,Ubuntu每天检查一次更新。但是软件升级提醒一周只会弹出一次;当有安全性升级时,则会立即弹出。所以,如果你已经使用Ubuntu一段时间,你肯定很熟悉这个画面:

-![Software Update notification in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu.png)
+![Ubuntu软件升级提醒](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu.png)

-Now if you are a normal desktop user, you don’t really care about what kind of updates are these. And this is not entirely a bad thing. You trust Ubuntu to provide you good updates, right? So, you just select ‘Install Now’ most of the time, don’t you?
+但是作为一个正常桌面用户,根本不会去关心有什么更新细节。而且这个提醒完全就是浪费时间,你肯定信任Ubuntu提供的升级补丁,对不对?所以,大部分情况你肯定会选择“现在安装”,对不对?

-And all you do is to click on Install Now, why not enable the automatic system updates? Enabling automatic system updates means all the latest updates will be automatically downloaded and installed without requiring any actions from you. Isn’t it convenient?
+所以,你需要做的就只是点一下升级按钮。现在,明白为什么需要自动系统升级了吧?开启自动系统升级意味着所有最新的更新都会自动下载并安装,并且没有请求确认。是不是很方便?

-### Enable automatic updates in Ubuntu ###
+### 开启Ubuntu自动升级 ###

-I am using Ubuntu 15.04 in this tutorial but the steps are the same for Ubuntu 14.04 as well.
+演示使用Ubuntu 15.04,Ubuntu 14.04步骤类似。

-Go to Unity Dash and look for Software & Updates:
+打开Unity Dash,找到软件&更新:

-![Ubuntu Software Update Settings](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/08/Software_Update_Ubuntu.jpeg)
+![Ubuntu 软件升级设置](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/08/Software_Update_Ubuntu.jpeg)

-This will open the Software sources settings for you.
Click on Updates tab here: +打开软件资源设置,切换到升级标签: -![Software Updates settings in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu-1.png) +![Ubuntu 软件升级设置](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu-1.png) -In here, you’ll see the default settings which is daily check for updates and immediate notification for security updates. +可以发现,默认设置就是每日检查并立即提醒安全升级。 -![Changing software update frequency](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu-2.png) +![改变软件更新频率](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu-2.png) -All you need to do is to change the action which reads “When there are” to “Download and install automatically”. This will download all the available updates and install them automatically. +改变 ‘当有安全升级’和‘当有其他升级’的选项为:下载并自动安装。 ![Automatic updates in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Software-Update-Ubntu-3.png) -That’s it. Close it and you have automatic updates enabled in Ubuntu. In fact this tutorial is pretty similar to [changing update notification frequency in Ubuntu][1]. +关闭对话框完成设定。这样每次Ubuntu检查更新后就会自动升级。事实上,这篇文章十分类似[改变Ubuntu升级提醒频率][1]。 -Do you use automatic updates installation or you prefer to install them manually? 
+你喜欢自动升级还是手动安装升级呢?欢迎评论。 -------------------------------------------------------------------------------- via: http://itsfoss.com/automatic-system-updates-ubuntu/ 作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) +译者:[Vic020/VicYu](http://vicyu.net) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e3d095702450e39c78c80fbfbbf085b3c20d342e Mon Sep 17 00:00:00 2001 From: Vic020 Date: Thu, 17 Sep 2015 21:56:34 +0800 Subject: [PATCH 541/697] Moved --- .../tech/20150916 Enable Automatic System Updates In Ubuntu.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20150916 Enable Automatic System Updates In Ubuntu.md (100%) diff --git a/sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md b/translated/tech/20150916 Enable Automatic System Updates In Ubuntu.md similarity index 100% rename from sources/tech/20150916 Enable Automatic System Updates In Ubuntu.md rename to translated/tech/20150916 Enable Automatic System Updates In Ubuntu.md From 016298b61e4a60a7bc2398d08aeaabfec03136ec Mon Sep 17 00:00:00 2001 From: KnightJoker <544133483@qq.com> Date: Thu, 17 Sep 2015 23:12:21 +0800 Subject: [PATCH 542/697] Translating by KnightJoker --- .../Learn with Linux--Master Your Math with These Linux Apps.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md b/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md index f9def558fb..c70122d6c5 100644 --- a/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md +++ b/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md @@ -1,3 +1,5 @@ +Translating by KnightJoker + Learn with Linux: Master Your Math with These Linux Apps ================================================================================ 
![](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-featured.png) From abb5dce140736fb501967b7c826e6d3c0de04561 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 18 Sep 2015 00:36:19 +0800 Subject: [PATCH 543/697] =?UTF-8?q?=E7=BF=BB=E5=B7=B2=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md | 43 +++++++++---------- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md index 63c2f115fa..453ab2c234 100644 --- a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md +++ b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md @@ -9,85 +9,84 @@ Node.js 4.0 的目标是为io.js 当前用户提供一个简单的升级途径 ### 基础系统安装 ### -Node works perfectly on Linux, Macintosh, and Solaris operating systems and among the Linux operating systems it has the best results using Ubuntu OS. That's why we are to setup it Ubuntu 15.04 while the same steps can be followed using Ubuntu 14.04. - -#### 1) System Resources #### +Node 在Linux,Macintosh,Solaris 这几个系统上都可以完美的运行,同时linux 的发行版本当中Ubuntu 是最合适的。这也是我们为什么要尝试在ubuntu 15.04 上安装Node,当然了在14.04 上也可以使用相同的步骤安装。 +#### 1) 系统资源 #### The basic system resources for Node depend upon the size of your infrastructure requirements. So, here in this tutorial we will setup Node with 1 GB RAM, 1 GHz Processor and 10 GB of available disk space with minimal installation packages installed on the server that is no web or database server packages are installed. -#### 2) System Update #### +#### 2) 系统更新 #### It always been recommended to keep your system upto date with latest patches and updates, so before we move to the installation on Node, let's login to your server with super user privileges and run update command. 
# apt-get update

-#### 3) Installing Dependencies ####
+#### 3) 安装依赖 ####

Node JS only requires some basic system and software utilities to be present on your server, for its successful installation like 'make' 'gcc' and 'wget'. Let's run the below command to get them installed if they are not already present.

 # apt-get install python gcc make g++ wget

-### Download Latest Node JS v4.0.0 ###
+### 下载最新版的Node JS v4.0.0 ###

-Let's download the latest Node JS version 4.0.0 by following this link of [Node JS Download Page][1].
+使用链接 [Node JS Download Page][1] 下载源代码。

![nodejs download](http://blog.linoxide.com/wp-content/uploads/2015/09/download.png)

-We will copy the link location of its latest package and download it using 'wget' command as shown.
+我们会复制最新源代码的链接,然后用`wget` 下载,命令如下:

 # wget https://nodejs.org/download/rc/v4.0.0-rc.1/node-v4.0.0-rc.1.tar.gz

-Once download completes, unpack using 'tar' command as shown.
+下载完成后使用命令`tar` 解压缩:

 # tar -zxvf node-v4.0.0-rc.1.tar.gz

![wget nodejs](http://blog.linoxide.com/wp-content/uploads/2015/09/wget.png)

-### Installing Node JS v4.0.0 ###
+### 安装 Node JS v4.0.0 ###

-Now we have to start the installation of Node JS from its downloaded source code. So, change your directory and configure the source code by running its configuration script before compiling it on your ubuntu server.
+现在可以开始使用下载好的源代码编译Node JS。开始编译前,你需要在ubuntu server 上运行配置脚本来设置要使用的目录和配置参数。

 root@ubuntu-15:~/node-v4.0.0-rc.1# ./configure

![Installing NodeJS](http://blog.linoxide.com/wp-content/uploads/2015/09/configure.png)

-Now run the 'make install' command to compile the Node JS installation package as shown.
+现在运行命令'make install' 编译安装Node JS:

 root@ubuntu-15:~/node-v4.0.0-rc.1# make install

-The make command will take a couple of minutes while compiling its binaries so after executinf above command, wait for a while and keep calm.
+make 命令会花费几分钟完成编译,冷静的等待一会。 -### Testing Node JS Installation ### +### 验证Node 安装 ### -Once the compilation process is complete, we will test it if every thing went fine. Let's run the following command to confirm the installed version of Node JS. +一旦编译任务完成,我们就可以开始验证安装工作是否OK。我们运行下列命令来确认Node JS 的版本。 root@ubuntu-15:~# node -v v4.0.0-pre -By executing 'node' without any arguments from the command-line you will be dropped into the REPL (Read-Eval-Print-Loop) that has simplistic emacs line-editing where you can interactively run JavaScript and see the results. - +在命令行下不带参数的运行`node` 就会进入REPL(Read-Eval-Print-Loop,读-执行-输出-循环)模式,它有一个简化版的emacs 行编辑器,通过它你可以交互式的运行JS和查看运行结果。 ![node version](http://blog.linoxide.com/wp-content/uploads/2015/09/node.png) -### Writing Test Program ### +### 写测试程序 ### -We can also try out a very simple console program to test the successful installation and proper working of Node JS. To do so we will create a file named "test.js" and write the following code into it and save the changes made in the file as shown. +我们也可以写一个很简单的终端程序来测试安装是否成功,并且工作正常。要完成这一点,我们将会创建一个“tes.js” 文件,包含一下代码,操作如下: root@ubuntu-15:~# vim test.js var util = require("util"); console.log("Hello! This is a Node Test Program"); :wq! -Now in order to run the above program, from the command prompt run the below command. +现在为了运行上面的程序,在命令行运行下面的命令。 root@ubuntu-15:~# node test.js ![Node Program](http://blog.linoxide.com/wp-content/uploads/2015/09/node-test.png) -So, upon successful installation we will get the output as shown in the screen, where as in the above program it loads the "util" class into a variable "util" and then uses the "util" object to perform the console tasks. While the console.log is a command similar to the cout in C++. +在一个成功安装了Node JS 的环境下运行上面的程序就会在屏幕上得到上图所示的输出,这个程序加载类 “util” 到变量“util” 中,接着用对象“util” 运行终端任务,console.log 这个命令作用类似C++ 里的cout -### Conclusion ### +### 结论 ### That’s it. Hope this gives you a good idea of Node.js going with Node.js on Ubuntu. 
If you are new to developing applications with Node.js. After all we can say that we can expect significant performance gains with Node JS Version 4.0.0.
+希望本文能够通过在ubuntu 上安装、运行Node.JS让你了解一下Node JS 的大概,如果你是刚刚开始使用Node.JS 开发应用程序。最后我们可以说我们能够通过Node JS v4.0.0 获取显著的性能。

--------------------------------------------------------------------------------

via: http://linoxide.com/ubuntu-how-to/setup-node-js-4-0-ubuntu-14-04-15-04/

作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/osk874)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/kashifs/
[1]:https://nodejs.org/download/rc/v4.0.0-rc.1/

From b1f44f63b152c2545c4ed5df21411637e2c9ad50 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Fri, 18 Sep 2015 00:37:16 +0800
Subject: [PATCH 544/697] =?UTF-8?q?=E7=A7=BB=E5=8A=A8=E6=96=87=E4=BB=B6?=
 =?UTF-8?q?=E5=88=B0translated?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md | 102 ++++++++++++++++++
 1 file changed, 102 insertions(+)
 create mode 100644 translated/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md

diff --git a/translated/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/translated/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md
new file mode 100644
index 0000000000..453ab2c234
--- /dev/null
+++ b/translated/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md
@@ -0,0 +1,102 @@
+
+在ubuntu 14.04/15.04 上配置Node JS v4.0.0
+================================================================================
+大家好,Node.JS 4.0 发布了,主流的服务器端JS 平台已经将Node.js 和io.js 结合到一起。4.0 版就是两者结合的产物——共用一个代码库。这次最主要的变化是Node.js 封装了Google V8 4.5 JS 引擎,而这一版与当前的Chrome 一致。所以,紧跟V8 的版本号可以让Node.js 运行的更快、更安全,同时更好的利用ES6 的很多语言特性。
+
+![Node JS](http://blog.linoxide.com/wp-content/uploads/2015/09/nodejs.png)
+
+Node.js 4.0 的目标是为io.js 当前用户提供一个简单的升级途径,所以这次并没有太多重要的API 变更。剩下的内容会让我们看到如何轻松的在ubuntu server 上安装、配置Node.js。
+
+### 基础系统安装 ###
+
+Node 在Linux,Macintosh,Solaris 这几个系统上都可以完美的运行,同时linux 的发行版本当中Ubuntu 是最合适的。这也是我们为什么要尝试在ubuntu 15.04 上安装Node,当然了在14.04 上也可以使用相同的步骤安装。
+#### 1) 系统资源 ####
+
+Node 所需的基本系统资源取决于你的基础架构需求。本教程中,我们将在一台 1 GB 内存、1 GHz 处理器、10 GB 可用磁盘空间、仅做了最小化安装(没有安装 web 服务或数据库服务软件包)的服务器上配置 Node。
+
+#### 2) 系统更新 ####
+
+我们一直推荐让系统保持安装最新的补丁和更新。所以在开始安装 Node 之前,让我们以超级用户权限登录服务器并运行更新命令。
+
+    # apt-get update
+
+#### 3) 安装依赖 ####
+
+Node JS 的安装只需要服务器上有一些基本的系统工具,比如 'make'、'gcc' 和 'wget'。如果还没有安装它们,运行下面的命令安装。
+
+    # apt-get install python gcc make g++ wget
+
+### 下载最新版的Node JS v4.0.0 ###
+
+使用链接 [Node JS Download Page][1] 下载源代码。
+
+![nodejs download](http://blog.linoxide.com/wp-content/uploads/2015/09/download.png)
+
+我们会复制最新源代码的链接,然后用`wget` 下载,命令如下:
+
+    # wget https://nodejs.org/download/rc/v4.0.0-rc.1/node-v4.0.0-rc.1.tar.gz
+
+下载完成后使用命令`tar` 解压缩:
+
+    # tar -zxvf node-v4.0.0-rc.1.tar.gz
+
+![wget nodejs](http://blog.linoxide.com/wp-content/uploads/2015/09/wget.png)
+
+### 安装 Node JS v4.0.0 ###
+
+现在可以开始使用下载好的源代码编译Node JS。在ubuntu server 上开始编译之前,你需要先运行配置脚本来设置要使用的目录和配置参数。
+
+    root@ubuntu-15:~/node-v4.0.0-rc.1# ./configure
+
+![Installing NodeJS](http://blog.linoxide.com/wp-content/uploads/2015/09/configure.png)
+
+现在运行命令'make install' 编译安装Node JS:
+
+    root@ubuntu-15:~/node-v4.0.0-rc.1# make install
+
+make 命令会花费几分钟完成编译,耐心等待一会儿。
+
+### 验证Node 安装 ###
+
+一旦编译任务完成,我们就可以开始验证安装工作是否OK。我们运行下列命令来确认Node JS 的版本。
+
+    root@ubuntu-15:~# node -v
+    v4.0.0-pre
+
+在命令行下不带参数的运行`node` 就会进入REPL(Read-Eval-Print-Loop,读-执行-输出-循环)模式,它有一个简化版的emacs 行编辑器,通过它你可以交互式的运行JS和查看运行结果。
+![node version](http://blog.linoxide.com/wp-content/uploads/2015/09/node.png)
+
+### 写测试程序 ###
+
+我们也可以写一个很简单的终端程序来测试安装是否成功,并且工作正常。要完成这一点,我们将会创建一个“test.js” 文件,包含以下代码,操作如下:
+
+    root@ubuntu-15:~# 
vim test.js + var util = require("util"); + console.log("Hello! This is a Node Test Program"); + :wq! + +现在为了运行上面的程序,在命令行运行下面的命令。 + + root@ubuntu-15:~# node test.js + +![Node Program](http://blog.linoxide.com/wp-content/uploads/2015/09/node-test.png) + +在一个成功安装了Node JS 的环境下运行上面的程序就会在屏幕上得到上图所示的输出,这个程序加载类 “util” 到变量“util” 中,接着用对象“util” 运行终端任务,console.log 这个命令作用类似C++ 里的cout + +### 结论 ### + +That’s it. Hope this gives you a good idea of Node.js going with Node.js on Ubuntu. If you are new to developing applications with Node.js. After all we can say that we can expect significant performance gains with Node JS Version 4.0.0. +希望本文能够通过在ubuntu 上安装、运行Node.JS让你了解一下Node JS 的大概,如果你是刚刚开始使用Node.JS 开发应用程序。最后我们可以说我们能够通过Node JS v4.0.0 获取显著的性能。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/ubuntu-how-to/setup-node-js-4-0-ubuntu-14-04-15-04/ + +作者:[Kashif Siddique][a] +译者:[译者ID](https://github.com/osk874) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/kashifs/ +[1]:https://nodejs.org/download/rc/v4.0.0-rc.1/ From 0e59499692182b2b3d137ddb4e8d6dc284d8e672 Mon Sep 17 00:00:00 2001 From: Ezio Date: Fri, 18 Sep 2015 00:37:39 +0800 Subject: [PATCH 545/697] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=8E=9F=E6=96=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md | 102 ------------------ 1 file changed, 102 deletions(-) delete mode 100644 sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md diff --git a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md deleted file mode 100644 index 453ab2c234..0000000000 --- a/sources/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md +++ 
/dev/null @@ -1,102 +0,0 @@ - -在ubunt 14.04/15.04 上配置Node JS v4.0.0 -================================================================================ -大家好,Node.JS 4.0 发布了,主流的服务器端JS 平台已经将Node.js 和io.js 结合到一起。4.0 版就是两者结合的产物——共用一个代码库。这次最主要的变化是Node.js 封装了Google V8 4.5 JS 引擎,而这一版与当前的Chrome 一致。所以,紧跟V8 的版本号可以让Node.js 运行的更快、更安全,同时更好的利用ES6 的很多语言特性。 - -![Node JS](http://blog.linoxide.com/wp-content/uploads/2015/09/nodejs.png) - -Node.js 4.0 的目标是为io.js 当前用户提供一个简单的升级途径,所以这次并没有太多重要的API 变更。剩下的内容会让我们看到如何轻松的在ubuntu server 上安装、配置Node.js。 - -### 基础系统安装 ### - -Node 在Linux,Macintosh,Solaris 这几个系统上都可以完美的运行,同时linux 的发行版本当中Ubuntu 是最合适的。这也是我们为什么要尝试在ubuntu 15.04 上安装Node,当然了在14.04 上也可以使用相同的步骤安装。 -#### 1) 系统资源 #### - -The basic system resources for Node depend upon the size of your infrastructure requirements. So, here in this tutorial we will setup Node with 1 GB RAM, 1 GHz Processor and 10 GB of available disk space with minimal installation packages installed on the server that is no web or database server packages are installed. - -#### 2) 系统更新 #### - -It always been recommended to keep your system upto date with latest patches and updates, so before we move to the installation on Node, let's login to your server with super user privileges and run update command. - - # apt-get update - -#### 3) 安装依赖 #### - -Node JS only requires some basic system and software utilities to be present on your server, for its successful installation like 'make' 'gcc' and 'wget'. Let's run the below command to get them installed if they are not already present. - - # apt-get install python gcc make g++ wget - -### 下载最新版的Node JS v4.0.0 ### - -使用链接 [Node JS Download Page][1] 下载源代码. 
- -![nodejs download](http://blog.linoxide.com/wp-content/uploads/2015/09/download.png) - -我们会复制最新源代码的链接,然后用`wget` 下载,命令如下: - - # wget https://nodejs.org/download/rc/v4.0.0-rc.1/node-v4.0.0-rc.1.tar.gz - -下载完成后使用命令`tar` 解压缩: - - # tar -zxvf node-v4.0.0-rc.1.tar.gz - -![wget nodejs](http://blog.linoxide.com/wp-content/uploads/2015/09/wget.png) - -### 安装 Node JS v4.0.0 ### - -现在可以开始使用下载好的源代码编译Nod JS。你需要在ubuntu serve 上开始编译前运行配置脚本来修改你要使用目录和配置参数。 - - root@ubuntu-15:~/node-v4.0.0-rc.1# ./configure - -![Installing NodeJS](http://blog.linoxide.com/wp-content/uploads/2015/09/configure.png) - -现在运行命令'make install' 编译安装Node JS: - - root@ubuntu-15:~/node-v4.0.0-rc.1# make install - -make 命令会花费几分钟完成编译,冷静的等待一会。 - -### 验证Node 安装 ### - -一旦编译任务完成,我们就可以开始验证安装工作是否OK。我们运行下列命令来确认Node JS 的版本。 - - root@ubuntu-15:~# node -v - v4.0.0-pre - -在命令行下不带参数的运行`node` 就会进入REPL(Read-Eval-Print-Loop,读-执行-输出-循环)模式,它有一个简化版的emacs 行编辑器,通过它你可以交互式的运行JS和查看运行结果。 -![node version](http://blog.linoxide.com/wp-content/uploads/2015/09/node.png) - -### 写测试程序 ### - -我们也可以写一个很简单的终端程序来测试安装是否成功,并且工作正常。要完成这一点,我们将会创建一个“tes.js” 文件,包含一下代码,操作如下: - - root@ubuntu-15:~# vim test.js - var util = require("util"); - console.log("Hello! This is a Node Test Program"); - :wq! - -现在为了运行上面的程序,在命令行运行下面的命令。 - - root@ubuntu-15:~# node test.js - -![Node Program](http://blog.linoxide.com/wp-content/uploads/2015/09/node-test.png) - -在一个成功安装了Node JS 的环境下运行上面的程序就会在屏幕上得到上图所示的输出,这个程序加载类 “util” 到变量“util” 中,接着用对象“util” 运行终端任务,console.log 这个命令作用类似C++ 里的cout - -### 结论 ### - -That’s it. Hope this gives you a good idea of Node.js going with Node.js on Ubuntu. If you are new to developing applications with Node.js. After all we can say that we can expect significant performance gains with Node JS Version 4.0.0. 
-希望本文能够通过在ubuntu 上安装、运行Node.JS让你了解一下Node JS 的大概,如果你是刚刚开始使用Node.JS 开发应用程序。最后我们可以说我们能够通过Node JS v4.0.0 获取显著的性能。 - --------------------------------------------------------------------------------- - -via: http://linoxide.com/ubuntu-how-to/setup-node-js-4-0-ubuntu-14-04-15-04/ - -作者:[Kashif Siddique][a] -译者:[译者ID](https://github.com/osk874) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/kashifs/ -[1]:https://nodejs.org/download/rc/v4.0.0-rc.1/ From 10fe32ed8436a33afcf5f80193e6750978e78faf Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 18 Sep 2015 09:15:08 +0800 Subject: [PATCH 546/697] PUB:20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu @geekpi --- ... to remove unused old kernel images on Ubuntu.md | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) rename {translated/tech => published}/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md (84%) diff --git a/translated/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md b/published/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md similarity index 84% rename from translated/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md rename to published/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md index 61eec350a9..4ac40ff485 100644 --- a/translated/tech/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md +++ b/published/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md @@ -1,14 +1,14 @@ -Linux有问必答--如何删除Ubuntu上不再使用的老内核 +Linux有问必答:如何删除Ubuntu上不再使用的旧内核 ================================================================================ > 
**提问**:过去我已经在我的Ubuntu上升级了几次内核。现在我想要删除这些旧的内核镜像来节省我的磁盘空间。如何用最简单的方法删除Ubuntu上先前版本的内核? -在Ubuntu上,有几个方法来升级内核。在Ubuntu桌面中,软件更新允许你每天检查并更新到最新的内核上。在Ubuntu服务器上,一个无人值守的包会自动更新内核最为一项最要的安全更新。然而,你可以手动用apt-get或者aptitude命令来更新。 +在Ubuntu上,有几个方法来升级内核。在Ubuntu桌面中,软件更新允许你每天检查并更新到最新的内核上。在Ubuntu服务器上,最为重要的安全更新项目之一就是 unattended-upgrades 软件包会自动更新内核。然而,你也可以手动用apt-get或者aptitude命令来更新。 随着时间的流逝,持续的内核更新会在系统中积聚大量的不再使用的内核,浪费你的磁盘空间。每个内核镜像和其相关联的模块/头文件会占用200-400MB的磁盘空间,因此由不再使用的内核而浪费的磁盘空间会快速地增加。 ![](https://farm1.staticflickr.com/636/21352725115_29ae7aab5f_c.jpg) -GRUB管理器为每个旧内核都维护了一个GRUB入口,防止你想要进入它们。 +GRUB管理器为每个旧内核都维护了一个GRUB入口,以备你想要使用它们。 ![](https://farm6.staticflickr.com/5803/21164866468_07760fc23c_z.jpg) @@ -18,7 +18,7 @@ GRUB管理器为每个旧内核都维护了一个GRUB入口,防止你想要进 在删除旧内核之前,记住最好留有2个最近的内核(最新的和上一个版本),以防主要的版本出错。现在就让我们看看如何在Ubuntu上清理旧内核。 -在Ubuntu内核镜像包哈了以下的包。 +在Ubuntu内核镜像包含了以下的包。 - **linux-image-**: 内核镜像 - **linux-image-extra-**: 额外的内核模块 @@ -36,7 +36,6 @@ GRUB管理器为每个旧内核都维护了一个GRUB入口,防止你想要进 上面的命令会删除内核镜像和它相关联的内核模块和头文件。 -updated to remove the corresponding GRUB entry from GRUB menu. 注意如果你还没有升级内核那么删除旧内核会自动触发安装新内核。这样在删除旧内核之后,GRUB配置会自动升级来移除GRUB菜单中相关GRUB入口。 如果你有很多没用的内核,你可以用shell表达式来一次性地删除多个内核。注意这个括号表达式只在bash或者兼容的shell中才有效。 @@ -52,7 +51,7 @@ updated to remove the corresponding GRUB entry from GRUB menu. 
$ sudo update-grub2 -现在就重启来验证GRUB菜单已经正确清理了。 +现在就重启来验证GRUB菜单是否已经正确清理了。 ![](https://farm1.staticflickr.com/593/20731623163_cccfeac854_z.jpg) @@ -62,7 +61,7 @@ via: http://ask.xmodulo.com/remove-kernel-images-ubuntu.html 作者:[Dan Nanni][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From dd8db52e1074d0674450934b327c94b676b2381f Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 18 Sep 2015 09:34:37 +0800 Subject: [PATCH 547/697] PUB:20150906 Do Simple Math In Ubuntu And elementary OS With NaSC @ictlyh --- ...o Simple Math In Ubuntu And elementary OS With NaSC.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) rename {translated/tech => published}/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md (90%) diff --git a/translated/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md b/published/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md similarity index 90% rename from translated/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md rename to published/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md index d65beef2a5..5b3c20d03a 100644 --- a/translated/tech/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md +++ b/published/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md @@ -18,9 +18,9 @@ Elementary OS 它自己本身借鉴了 OS X,也就不奇怪它的很多第三 ### 在 Ubuntu、Elementary OS 和 Mint 上安装 NaSC ### -安装 NaSC 有一个可用的 PPA。PPA 中说 ‘每日’,意味着所有构建(包括不稳定),但作为我的快速测试,并没什么影响。 +安装 NaSC 有一个可用的 PPA。PPA 是 ‘每日’,意味着每日构建(意即,不稳定),但作为我的快速测试,并没什么影响。 -打卡一个终端并运行下面的命令: +打开一个终端并运行下面的命令: sudo apt-add-repository ppa:nasc-team/daily sudo apt-get update @@ -35,7 +35,7 @@ Elementary OS 它自己本身借鉴了 OS X,也就不奇怪它的很多第三 sudo apt-get remove nasc sudo apt-add-repository --remove ppa:nasc-team/daily -如果你试用了这个软件,要分享你的经验哦。除此之外,你也可以在第三方 Elementary OS 应用中体验[Vocal 
podcast app for Linux][3]。

--------------------------------------------------------------------------------

via: http://itsfoss.com/math-ubuntu-nasc/

作者:[Abhishek][a]
译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From dd74cfa2ae4729732aadb907da97f921ea279cc0 Mon Sep 17 00:00:00 2001
From: DongShuaike
Date: Fri, 18 Sep 2015 09:36:39 +0800
Subject: [PATCH 548/697] Create 20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md

---
 ...R 0.98 INSTALL IN UBUNTU AND LINUX MINT.md | 60 +++++++++++++++++++
 1 file changed, 60 insertions(+)
 create mode 100644 sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md

diff --git a/sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md b/sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md
new file mode 100644
index 0000000000..ba371ca915
--- /dev/null
+++ b/sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md
@@ -0,0 +1,60 @@
+TERMINATOR 0.98: INSTALL IN UBUNTU AND LINUX MINT
+================================================================================
+[Terminator][1] arranges multiple terminals in one window. The goal of this project is to produce a useful tool for arranging terminals. It is inspired by programs such as gnome-multi-term, quadkonsole, etc. in that the main focus is arranging terminals in grids. Terminator 0.98 brings more polished tabs functionality, better layout saving/restoring, an improved preferences UI and numerous bug fixes. 
+
+![](http://www.ewikitech.com/wp-content/uploads/2015/09/Screenshot-from-2015-09-17-094828.png)
+
+###CHANGES/FEATURES IN TERMINATOR 0.98
+- A layout launcher was added which allows easily switching between layouts (use Alt + L to open the new layout switcher);
+- A new manual was added (use F1 to launch it);
+- When saving, a layout now remembers the following:
+	- * maximised and fullscreen status
+	- * window titles
+	- * which tab was active
+	- * which terminal was active
+	- * working directory for each terminal
+- Added options for enabling/disabling non-homogeneous tabs and scroll arrows;
+- Added shortcuts for scrolling up/down by line/half-page/page;
+- Added Ctrl+MouseWheel zoom in/out and Shift+MouseWheel page scroll up/down;
+- Added shortcuts for next/prev profile;
+- Improved consistency of the Custom Commands menu;
+- Added shortcuts/code to toggle All/Tab grouping;
+- Improved the watcher plugin;
+- Added a search bar wrap toggle;
+- Major cleanup and reorganisation of the preferences window, including a complete revamp of the global tab;
+- Added an option to set how long the ActivityWatcher plugin stays quiet for;
+- Many other improvements and bug fixes
+- [Click Here To See The Complete Changelog][2]
+
+###INSTALL TERMINATOR 0.98:
+
+Terminator 0.98 is available in a PPA. First we need to add the repository to the Ubuntu/Linux Mint system, then run the following commands in a terminal to install Terminator 0.98. 
+ + $ sudo add-apt-repository ppa:gnome-terminator/nightly + $ sudo apt-get update + $ sudo apt-get install terminator + +If you want to remove terminator, simply run following command in terminal, (Optional) + + $ sudo apt-get remove terminator + + + + + +-------------------------------------------------------------------------------- + +via: http://www.ewikitech.com/articles/linux/terminator-install-ubuntu-linux-mint/ + +作者:[admin][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.ewikitech.com/author/admin/ +[1]:https://launchpad.net/terminator +[2]:http://bazaar.launchpad.net/~gnome-terminator/terminator/trunk/view/head:/ChangeLog + + + From f609fd494c7b0d9dd59226fcfe8d1d7e06b30644 Mon Sep 17 00:00:00 2001 From: runningwater Date: Fri, 18 Sep 2015 10:22:10 +0800 Subject: [PATCH 549/697] translating by runningwater --- ...Best command line tools for linux performance monitoring.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20140320 Best command line tools for linux performance monitoring.md b/sources/tech/20140320 Best command line tools for linux performance monitoring.md index 6b0ed1ebc4..eb77aa60d6 100644 --- a/sources/tech/20140320 Best command line tools for linux performance monitoring.md +++ b/sources/tech/20140320 Best command line tools for linux performance monitoring.md @@ -1,3 +1,4 @@ +(translating by runningwater) Best command line tools for linux performance monitoring ================================================================================ Sometimes a system can be slow and many reasons can be the root cause. To identify the process that is consuming memory, disk I/O or processor capacity you need to use tools to see what is happening in an operation system. 
@@ -67,7 +68,7 @@ The sar utility, which is part of the systat package, can be used to review hist via: http://lintut.com/best-command-line-tools-for-linux-performance-monitring/ 作者:[rasho][a] -译者:[译者ID](https://github.com/译者ID) +译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From ffd53ee55e85409a54ce027904565eeaf0c90bc3 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 18 Sep 2015 11:55:57 +0800 Subject: [PATCH 550/697] =?UTF-8?q?=E9=87=8D=E5=A4=8D?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 这篇和你翻译的另外一篇基本上一致,不发表了。@ictlyh @DeadFire --- ...ple in Ubuntu or Elementary OS via NaSC.md | 62 ------------------- 1 file changed, 62 deletions(-) delete mode 100644 translated/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md diff --git a/translated/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md b/translated/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md deleted file mode 100644 index 2d74b1efa5..0000000000 --- a/translated/tech/20150906 Make Math Simple in Ubuntu or Elementary OS via NaSC.md +++ /dev/null @@ -1,62 +0,0 @@ -在 Ubuntu 和 Elementary 上使用 NaSC 做简单数学运算 -================================================================================ -![](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-icon.png) - -NaSC(Not a Soulver Clone,并非 Soulver 的克隆品)是为 Elementary 操作系统进行数学计算而设计的一款开源软件。类似于 Mac 上的 [Soulver][1]。 - -> 它能使你像平常那样进行计算。它允许你输入任何你想输入的,智能识别其中的数学部分并在右边面板打印出结果。然后你可以在后面的等式中使用这些结果,如果结果发生了改变,等式中使用的也会同样变化。 - -用 NaSC,你可以: - -- 自己定义复杂的计算 -- 改变单位和值(英尺、米、厘米,美元、欧元等) -- 了解行星的表面积 -- 解二次多项式 -- 以及其它 - -![nasc-eos](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-eos.jpg) - -第一次启动时,NaSC 提供了一个关于现有功能的教程。以后你还可以通过点击标题栏上的帮助图标再次查看。 - -![nasc-help](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-help.jpg) - 
-另外,这个软件还允许你保存文件以便以后继续工作。还可以在一定时间内通过粘贴板共用。 - -### 在 Ubuntu 或 Elementary OS Freya 上安装 NaSC: ### - -对于 Ubuntu 15.04,Ubuntu 15.10,Elementary OS Freya,从 Dash 或应用启动器中打开终端,逐条运行下面的命令: - -1. 通过命令添加 [NaSC PPA][2]: - - sudo apt-add-repository ppa:nasc-team/daily - -![nasc-ppa](http://ubuntuhandbook.org/wp-content/uploads/2015/09/nasc-ppa.jpg) - -2. 如果安装了 Synaptic 软件包管理器,点击 ‘Reload’ 后搜索并安装 ‘nasc’。 - -或者运行下面的命令更新系统缓存并安装软件: - - sudo apt-get update - - sudo apt-get install nasc - -3. **(可选)** 要卸载软件以及 NaSC,运行: - - sudo apt-get remove nasc && sudo add-apt-repository -r ppa:nasc-team/daily - -对于不想添加 PPA 的人,可以直接从[该网页][3]获取 .deb 安装包。、 - --------------------------------------------------------------------------------- - -via: http://ubuntuhandbook.org/index.php/2015/09/make-math-simple-in-ubuntu-elementary-os-via-nasc/ - -作者:[Ji m][a] -译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ubuntuhandbook.org/index.php/about/ -[1]:http://www.acqualia.com/soulver/ -[2]:https://launchpad.net/~nasc-team/+archive/ubuntu/daily/ -[3]:http://ppa.launchpad.net/nasc-team/daily/ubuntu/pool/main/n/nasc/ \ No newline at end of file From 606717fe970c7e64bccbe6f7b21abf63ddafca09 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 18 Sep 2015 12:31:04 +0800 Subject: [PATCH 551/697] Delete 20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md --- ... 
which CPU core a process is running on.md | 82 ------------------- 1 file changed, 82 deletions(-) delete mode 100644 sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md diff --git a/sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md b/sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md deleted file mode 100644 index ff305ad5ce..0000000000 --- a/sources/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md +++ /dev/null @@ -1,82 +0,0 @@ -translation by strugglingyouth -Linux FAQs with Answers--How to find out which CPU core a process is running on -================================================================================ -> Question: I have a Linux process running on my multi-core processor system. How can I find out which CPU core the process is running on? - -When you run performance-critical HPC applications or network-heavy workload on [multi-core NUMA processors][1], CPU/memory affinity is one important factor to consider to maximize their performance. Scheduling closely related processes on the same NUMA node can reduce slow remote memory access. On processors like Intel's Sandy Bridge processor which has an integrated PCIe controller, you want to schedule network I/O workload on the same NUMA node as the NIC card to exploit PCI-to-CPU affinity. - -As part of performance tuning or troubleshooting, you may want to know on which CPU core (or NUMA node) a particular process is currently scheduled. - -Here are several ways to **find out which CPU core is a given Linux process or a thread is scheduled on**. 
- -### Method One ### - -If a process is explicitly pinned to a particular CPU core using commands like [taskset][2], you can find out the pinned CPU using the following taskset command: - - $ taskset -c -p - -For example, if the process you are interested in has PID 5357: - - $ taskset -c -p 5357 - ----------- - - pid 5357's current affinity list: 5 - -The output says the process is pinned to CPU core 5. - -However, if you haven't explicitly pinned the process to any CPU core, you will get something like the following as the affinity list. - - pid 5357's current affinity list: 0-11 - -The output indicates that the process can potentially be scheduled on any CPU core from 0 to 11. So in this case, taskset is not useful in identifying which CPU core the process is currently assigned to, and you should use other methods as described below. - -### Method Two ### - -The ps command can tell you the CPU ID each process/thread is currently assigned to (under "PSR" column). - - $ ps -o pid,psr,comm -p - ----------- - - PID PSR COMMAND - 5357 10 prog - -The output says the process with PID 5357 (named "prog") is currently running on CPU core 10. If the process is not pinned, the PSR column can keep changing over time depending on where the kernel scheduler assigns the process. - -### Method Three ### - -The top command can also show the CPU assigned to a given process. First, launch top command with "p" option. Then press 'f' key, and add "Last used CPU" column to the display. The currently used CPU core will appear under "P" (or "PSR") column. - - $ top -p 5357 - -![](https://farm6.staticflickr.com/5698/21429268426_e7d1d73a04_c.jpg) - -Compared to ps command, the advantage of using top command is that you can continuously monitor how the assigned CPU changes over time. - -### Method Four ### - -Yet another method to check the currently used CPU of a process/thread is to use [htop command][3]. - -Launch htop from the command line. 
Press key, go to "Columns", and add PROCESSOR under "Available Columns". - -The currently used CPU ID of each process will appear under "CPU" column. - -![](https://farm6.staticflickr.com/5788/21444522832_a5a206f600_c.jpg) - -Note that all previous commands taskset, ps and top assign CPU core IDs 0, 1, 2, ..., N-1. However, htop assigns CPU core IDs starting from 1 (upto N). - --------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/cpu-core-process-is-running.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni -[1]:http://xmodulo.com/identify-cpu-processor-architecture-linux.html -[2]:http://xmodulo.com/run-program-process-specific-cpu-cores-linux.html -[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html From 8c7b7778bc887d31828d7a76d05822737266fb9e Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 18 Sep 2015 12:31:38 +0800 Subject: [PATCH 552/697] Create 20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md --- ... 
which CPU core a process is running on.md | 82 +++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 translated/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md diff --git a/translated/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md b/translated/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md new file mode 100644 index 0000000000..d901b95030 --- /dev/null +++ b/translated/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md @@ -0,0 +1,82 @@ +Linux 有问必答--如何找出哪个 CPU 内核正在运行进程 +================================================================================ +>问题:我有个 Linux 进程运行在多核处理器系统上。怎样才能找出哪个 CPU 内核正在运行该进程? + +当你运行需要较高性能的 HPC 程序或非常消耗网络资源的程序在 [多核 NUMA 处理器上][1],CPU/memory 的亲和力是限度其发挥最大性能的重要因素之一。在同一 NUMA 节点上调整程序的亲和力可以减少远程内存访问。像英特尔 Sandy Bridge 处理器,该处理器有一个集成的 PCIe 控制器,要调整同一 NUMA 节点的网络 I/O 负载可以使用 网卡控制 PCI 和 CPU 亲和力。 + +由于性能优化和故障排除只是一部分,你可能想知道哪个 CPU 内核(或 NUMA 节点)被调度运行特定的进程。 + +这里有几种方法可以 **找出哪个 CPU 内核被调度来运行 给定的 Linux 进程或线程**。 + +### 方法一 ### + +如果一个进程明确的被固定到 CPU 的特定内核,如使用 [taskset][2] 命令,你可以使用 taskset 命令找出被固定的 CPU 内核: + + $ taskset -c -p + +例如, 如果你对 PID 5357 这个进程有兴趣: + + $ taskset -c -p 5357 + +---------- + + pid 5357's current affinity list: 5 + +输出显示这个过程被固定在 CPU 内核 5。 + +但是,如果你没有明确固定进程到任何 CPU 内核,你会得到类似下面的亲和力列表。 + + pid 5357's current affinity list: 0-11 + +输出表明,该进程可能会被安排在从0到11中的任何一个 CPU 内核。在这种情况下,taskset 不会识别该进程当前被分配给哪个 CPU 内核,你应该使用如下所述的方法。 + +### 方法二 ### + +ps 命令可以告诉你每个进程/线程目前分配到的 (在“PSR”列)CPU ID。 + + + $ ps -o pid,psr,comm -p + +---------- + + PID PSR COMMAND + 5357 10 prog + +输出表示进程的 PID 为 5357(名为"prog")目前在CPU 内核 10 上运行着。如果该过程没有被固定,PSR 列可以保持随着时间变化,内核可能调度该进程到不同位置。 + +### 方法三 ### + +top 命令也可以显示 CPU 被分配给哪个进程。首先,在top 命令中使用“P”选项。然后按“f”键,显示中会出现 "Last used CPU" 列。目前使用的 CPU 内核将出现在 “P”(或“PSR”)列下。 + + $ top -p 5357 + +![](https://farm6.staticflickr.com/5698/21429268426_e7d1d73a04_c.jpg) + +相比于 ps 
命令,使用 top 命令的好处是,你可以随着时间的变化持续监视进程被分配到的 CPU。 + +### 方法四 ### + +另一种检查一个进程/线程当前使用的是哪个 CPU 内核的方法是使用 [htop 命令][3]。 + +从命令行启动 htop。按 F2 键进入设置,选择 "Columns",在 "Available Columns" 下添加 PROCESSOR 列。 + +每个进程当前使用的 CPU ID 将出现在“CPU”列中。 + +![](https://farm6.staticflickr.com/5788/21444522832_a5a206f600_c.jpg) + +请注意,前面使用的 taskset、ps 和 top 命令中,CPU 内核的编号是从 0 到 N-1;而 htop 的 CPU 内核编号则是从 1 开始的(直到 N)。 + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/cpu-core-process-is-running.html + +作者:[Dan Nanni][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:http://xmodulo.com/identify-cpu-processor-architecture-linux.html +[2]:http://xmodulo.com/run-program-process-specific-cpu-cores-linux.html +[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html From 615f47739b4c4edf7036d5c890cf8a16284e9376 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 18 Sep 2015 14:37:51 +0800 Subject: [PATCH 553/697] PUB:RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between @FSSlc --- ...Boot Shutdown and Everything in Between.md | 152 +++++++++++++ ...Boot Shutdown and Everything in Between.md | 214 ------------------ 2 files changed, 152 insertions(+), 214 deletions(-) create mode 100644 published/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md delete mode 100644 translated/tech/RHCSA/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md diff --git a/published/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md b/published/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md new file mode 100644 index 0000000000..36caaf5294 --- /dev/null +++ 
b/published/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md @@ -0,0 +1,152 @@ +RHCSA 系列(五): RHEL7 中的进程管理:开机,关机 +================================================================================ +我们将概括和简要地复习从你按开机按钮来打开你的 RHEL 7 服务器到呈现出命令行界面的登录屏幕之间所发生的所有事情,以此来作为这篇文章的开始。 + +![RHEL 7 开机过程](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Process.png) + +*Linux 开机过程* + +**请注意:** + +1. 相同的基本原则也可以应用到其他的 Linux 发行版本中,但可能需要较小的更改,并且 + +2. 下面的描述并不是旨在给出开机过程的一个详尽的解释,而只是介绍一些基础的东西 + +### Linux 开机过程 ### + +1. 初始化 POST(加电自检)并执行硬件检查; + +2. 当 POST 完成后,系统的控制权将移交给启动管理器的第一阶段(first stage),它存储在一个硬盘的引导扇区(对于使用 BIOS 和 MBR 的旧式的系统而言)或存储在一个专门的 (U)EFI 分区上。 + +3. 启动管理器的第一阶段完成后,接着进入启动管理器的第二阶段(second stage),通常使用的是 GRUB(GRand Unified Boot Loader 的简称),它驻留在 `/boot` 中,然后开始加载内核和驻留在 RAM 中的初始化文件系统(被称为 initramfs,它包含执行必要操作所需要的程序和二进制文件,以此来最终挂载真实的根文件系统)。 + +4. 接着,在展示过闪屏(splash)之后,呈现在我们眼前的是类似下图的画面,它允许我们选择一个操作系统和内核来启动: + + ![RHEL 7 开机屏幕](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Screen.png) + + *启动菜单屏幕* + +5. 
内核会对接入到系统的硬件进行设置,当根文件系统被挂载后,接着便启动 PID 为 1 的进程,这个进程将开始初始化其他的进程并最终呈现给我们一个登录提示符界面。 + + 注意:假如我们想在启动后查看这些信息,我们可以使用 [dmesg 命令][1],并使用这个系列里的上一篇文章中介绍过的工具(注:即 grep)来过滤它的输出。 + + ![登录屏幕和进程的 PID](http://www.tecmint.com/wp-content/uploads/2015/03/Login-Screen-Process-PID.png) + + *登录屏幕和进程的 PID* + +在上面的例子中,我们使用了大家熟知的 `ps` 命令来显示在系统启动过程中的一系列当前进程的信息,它们的父进程(或者换句话说,就是那个开启这些进程的进程)为 systemd(大多数现代的 Linux 发行版本已经切换到的系统和服务管理器): + + # ps -o ppid,pid,uname,comm --ppid=1 + +记住 `-o`(为 `--format` 的简写)选项允许你以一个自定义的格式来显示 ps 的输出,以此来满足你的需求;这个自定义格式使用 `man ps` 里 STANDARD FORMAT SPECIFIERS 一节中的特定关键词。 + +另一个你想自定义 ps 的输出而不是使用其默认输出的情形是:当你需要找到引起 CPU 或内存消耗过多的那些进程,并按照下列方式来对它们进行排序时: + + # ps aux --sort=+pcpu # 以 %CPU 来排序(增序) + # ps aux --sort=-pcpu # 以 %CPU 来排序(降序) + # ps aux --sort=+pmem # 以 %MEM 来排序(增序) + # ps aux --sort=-pmem # 以 %MEM 来排序(降序) + # ps aux --sort=+pcpu,-pmem # 结合 %CPU (增序) 和 %MEM (降序)来排列 + +![http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png](http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png) + +*自定义 ps 命令的输出* + +### systemd 的一个介绍 ### + +在 Linux 世界中,很少有能比在主流的 Linux 发行版本中采用 systemd 引起更多的争论的决定。systemd 的倡导者根据以下事实来表明其主要的优势: + +1. 在系统启动期间,systemd 允许并发地启动更多的进程(相比于先前的 SysVinit,SysVinit 似乎总是表现得更慢,因为它一个接一个地启动进程,检查一个进程是否依赖于另一个进程,然后等待某个守护进程启动后才能启动更多的服务),并且 +2. 在一个运行着的系统中,它用作一个动态的资源管理器:一个服务只有在被需要时才会被启动(以此来避免消耗系统资源),而不是在没有合理原因的情况下启动额外的服务。 +3. 
向后兼容 sysvinit 的脚本。 + + 另外请阅读: ['init' 和 'systemd' 背后的故事][2] + +systemd 由 systemctl 工具控制,假如你了解 SysVinit,你将会对以下的内容感到熟悉: + +- service 工具,在旧一点的系统中,它被用来管理 SysVinit 脚本,以及 +- chkconfig 工具,为系统服务升级和查询运行级别信息 +- shutdown 工具,你一定使用过几次来重启或关闭一个运行的系统。 + +下面的表格展示了使用传统的工具和 systemctl 之间的相似之处: + + +| 旧式工具 | Systemctl 等价命令 | 描述 | +|-------------|----------------------|-------------| +| service name start | systemctl start name | 启动 name (这里 name 是一个服务) | +| service name stop | systemctl stop name | 停止 name | +| service name condrestart | systemctl try-restart name | 重启 name (如果它已经运行了) | +| service name restart | systemctl restart name | 重启 name | +| service name reload | systemctl reload name | 重载 name 的配置 | +| service name status | systemctl status name | 显示 name 的当前状态 | +| service --status-all | systemctl | 显示当前所有服务的状态 | +| chkconfig name on | systemctl enable name | 通过一个特定的单元文件,让 name 可以在系统启动时运行(这个文件是一个符号链接)。启用或禁用一个启动时的进程,实际上是增加或移除一个到 /etc/systemd/system 目录中的符号链接。 | +| chkconfig name off | systemctl disable name | 通过一个特定的单元文件,让 name 可以在系统启动时禁止运行(这个文件是一个符号链接)。 | +| chkconfig --list name | systemctl is-enabled name | 确定 name (一个特定的服务)当前是否启用。| +| chkconfig --list | systemctl --type=service | 显示所有的服务及其是否启用或禁用。 | +| shutdown -h now | systemctl poweroff | 关机 | +| shutdown -r now | systemctl reboot | 重启系统 | + +systemd 也引进了单元(unit)(它可能是一个服务,一个挂载点,一个设备或者一个网络套接字)和目标(target)(它们定义了 systemd 如何去管理和同时开启几个相关的进程,可以认为它们与在基于 SysVinit 的系统中的运行级别等价,尽管事实上它们并不等价)的概念。 + +### 总结归纳 ### + +其他与进程管理相关的任务还有下列这些(但不仅限于此): + +**1. 
在考虑系统资源使用的情况下,调整一个进程的执行优先级:** + +这是通过 `renice` 工具来完成的,它可以改变一个或多个正在运行着的进程的调度优先级。简单来说,调度优先级是一个允许内核(当前只支持 >= 2.6 的版本)根据某个给定进程被分配的执行优先级(即友善度(niceness),从 -20 到 19)来为其分配系统资源的功能。 + +`renice` 的基本语法如下: + + # renice [-n] priority [-gpu] identifier + +在上面的通用命令中,第一个参数是将要使用的优先级数值,而另一个参数可以是进程 ID(这是默认的设定),进程组 ID,用户 ID 或者用户名。一个常规的用户(即除 root 以外的用户)只可以更改他或她所拥有的进程的调度优先级,并且只能增加友善度的层次(这意味着占用更少的系统资源)。 + +![在 Linux 中调整进程的优先级](http://www.tecmint.com/wp-content/uploads/2015/03/Process-Scheduling-Priority.png) + +*进程调度优先级* + +**2. 按照需要杀死一个进程(或终止其正常执行):** + +更精确地说,杀死一个进程指的是通过 [kill 或 pkill][3] 命令给该进程发送一个信号,让它优雅地(SIGTERM=15)或立即(SIGKILL=9)结束它的执行。 + +这两个工具的不同之处在于前一个被用来终止一个特定的进程或一个进程组,而后一个则允许你通过进程的名称和其他属性,执行相同的动作。 + +另外,pkill 与 pgrep 相捆绑,pgrep 会把符合条件的进程的 PID 提供给 pkill 来使用。例如,在运行下面的命令之前: + + # pkill -u gacanepa + +查看一眼由 gacanepa 所拥有的 PID 或许会带来点帮助: + + # pgrep -l -u gacanepa + +![找到用户拥有的 PID](http://www.tecmint.com/wp-content/uploads/2015/03/Find-PIDs-of-User.png) + +*找到用户拥有的 PID* + +默认情况下,kill 和 pkill 都发送 SIGTERM 信号给进程,如我们上面提到的那样,这个信号可以被忽略(即该进程可能会终止其自身的执行,也可能不终止),所以当你有合理的理由需要真正地停止一个运行着的进程时,就需要在命令行中指定 SIGKILL 信号: + + # kill -9 identifier # 杀死一个进程或一个进程组 + # kill -s SIGNAL identifier # 同上 + # pkill -s SIGNAL identifier # 通过名称或其他属性来杀死一个进程 + +### 结论 ### + +在这篇文章中,我们解释了在 RHEL 7 系统中,有关开机启动过程的基本知识,并分析了一些可用的工具来帮助你通过使用一般的程序和 systemd 特有的命令来管理进程。 + +请注意,这个列表并不旨在涵盖有关这个话题的所有花哨的工具,请随意使用下面的评论栏来添加你自己钟爱的工具和命令。同时欢迎你的提问和其他的评论。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:https://linux.cn/article-3587-1.html +[2]:http://www.tecmint.com/systemd-replaces-init-in-linux/ +[3]:https://linux.cn/article-2116-1.html diff --git a/translated/tech/RHCSA/RHCSA 
Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md b/translated/tech/RHCSA/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md deleted file mode 100644 index 91e2482e49..0000000000 --- a/translated/tech/RHCSA/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md +++ /dev/null @@ -1,214 +0,0 @@ -RHECSA 系列:RHEL7 中的进程管理:开机,关机,以及两者之间的所有其他事项 – Part 5 -================================================================================ -我们将概括和简要地复习从你按开机按钮来打开你的 RHEL 7 服务器到呈现出命令行界面的登录屏幕之间所发生的所有事情,以此来作为这篇文章的开始。 - -![RHEL 7 开机过程](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Process.png) - -Linux 开机过程 - -**请注意:** - -1. 相同的基本原则也可以应用到其他的 Linux 发行版本中,但可能需要较小的更改,并且 -2. 下面的描述并不是旨在给出开机过程的一个详尽的解释,而只是介绍一些基础的东西 - -### Linux 开机过程 ### - -1.初始化 POST(加电自检)并执行硬件检查; - -2.当 POST 完成后,系统的控制权将移交给启动管理器的第一阶段,它存储在一个硬盘的引导扇区(对于使用 BIOS 和 MBR 的旧式的系统)或存储在一个专门的 (U)EFI 分区上。 - -3.启动管理器的第一阶段完成后,接着进入启动管理器的第二阶段,通常大多数使用的是 GRUB(GRand Unified Boot Loader 的简称),它驻留在 `/boot` 中,反过来加载内核和驻留在 RAM 中的初始化文件系统(被称为 initramfs,它包含执行必要操作所需要的程序和二进制文件,以此来最终挂载真实的根文件系统)。 - -4.接着经历了闪屏过后,呈现在我们眼前的是类似下图的画面,它允许我们选择一个操作系统和内核来启动: - -![RHEL 7 开机屏幕](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Screen.png) - -启动菜单屏幕 - -5.然后内核对挂载到系统的硬件进行设置,一旦根文件系统被挂载,接着便启动 PID 为 1 的进程,反过来这个进程将初始化其他的进程并最终呈现给我们一个登录提示符界面。 - -注意:假如我们想在后面这样做(注:这句话我总感觉不通顺,不明白它的意思,希望改一下),我们可以使用 [dmesg 命令][1](注:这篇文章已经翻译并发表了,链接是 https://linux.cn/article-3587-1.html )并使用这个系列里的上一篇文章中解释过的工具(注:即 grep)来过滤它的输出。 - -![登录屏幕和进程的 PID](http://www.tecmint.com/wp-content/uploads/2015/03/Login-Screen-Process-PID.png) - -登录屏幕和进程的 PID - -在上面的例子中,我们使用了众所周知的 `ps` 命令来显示在系统启动过程中的一系列当前进程的信息,它们的父进程(或者换句话说,就是那个开启这些进程的进程) 为 systemd(大多数现代的 Linux 发行版本已经切换到的系统和服务管理器): - - # ps -o ppid,pid,uname,comm --ppid=1 - -记住 `-o`(为 -format 的简写)选项允许你以一个自定义的格式来显示 ps 的输出,以此来满足你的需求;这个自定义格式使用 man ps 里 STANDARD FORMAT SPECIFIERS 一节中的特定关键词。 - -另一个你想自定义 ps 
的输出而不是使用其默认输出的情形是:当你需要找到引起 CPU 或内存消耗过多的那些进程,并按照下列方式来对它们进行排序时: - - # ps aux --sort=+pcpu # 以 %CPU 来排序(增序) - # ps aux --sort=-pcpu # 以 %CPU 来排序(降序) - # ps aux --sort=+pmem # 以 %MEM 来排序(增序) - # ps aux --sort=-pmem # 以 %MEM 来排序(降序) - # ps aux --sort=+pcpu,-pmem # 结合 %CPU (增序) 和 %MEM (降序)来排列 - -![http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png](http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png) - -自定义 ps 命令的输出 - -### systemd 的一个介绍 ### - -在 Linux 世界中,很少有决定能够比在主流的 Linux 发行版本中采用 systemd 引起更多的争论。systemd 的倡导者根据以下事实命名其主要的优势: - -另外请阅读: ['init' 和 'systemd' 背后的故事][2] - -1. 在系统启动期间,systemd 允许并发地启动更多的进程(相比于先前的 SysVinit,SysVinit 似乎总是表现得更慢,因为它一个接一个地启动进程,检查一个进程是否依赖于另一个进程,然后等待守护进程去开启可以开始的更多的服务),并且 -2. 在一个运行着的系统中,它作为一个动态的资源管理器来工作。这样在开机期间,当一个服务被需要时,才启动它(以此来避免消耗系统资源)而不是在没有一个合理的原因的情况下启动额外的服务。 -3. 向后兼容 sysvinit 的脚本。 - -systemd 由 systemctl 工具控制,假如你带有 SysVinit 背景,你将会对以下的内容感到熟悉: - -- service 工具, 在旧一点的系统中,它被用来管理 SysVinit 脚本,以及 - chkconfig 工具, 为系统服务升级和查询运行级别信息 -- shutdown, 你一定使用过几次来重启或关闭一个运行的系统。 - -下面的表格展示了使用传统的工具和 systemctl 之间的相似之处: - -注:表格 -
| Legacy tool | Systemctl equivalent | Description |
|-------------|----------------------|-------------|
| service name start | systemctl start name | Start name (where name is a service) |
| service name stop | systemctl stop name | Stop name |
| service name condrestart | systemctl try-restart name | Restarts name (if it’s already running) |
| service name restart | systemctl restart name | Restarts name |
| service name reload | systemctl reload name | Reloads the configuration for name |
| service name status | systemctl status name | Displays the current status of name |
| service --status-all | systemctl | Displays the status of all current services |
| chkconfig name on | systemctl enable name | Enable name to run on startup as specified in the unit file (the file to which the symlink points). The process of enabling or disabling a service to start automatically on boot consists in adding or removing symbolic links inside the /etc/systemd/system directory. |
| chkconfig name off | systemctl disable name | Disables name to run on startup as specified in the unit file (the file to which the symlink points) |
| chkconfig --list name | systemctl is-enabled name | Verify whether name (a specific service) is currently enabled |
| chkconfig --list | systemctl --type=service | Displays all services and tells whether they are enabled or disabled |
| shutdown -h now | systemctl poweroff | Power-off the machine (halt) |
| shutdown -r now | systemctl reboot | Reboot the system |
- -systemd 也引进了单元(它可能是一个服务,一个挂载点,一个设备或者一个网络套接字)和目标(它们定义了 systemd 如何去管理和同时开启几个相关的进程,并可认为它们与在基于 SysVinit 的系统中的运行级别等价,尽管事实上它们并不等价)。 - -### 总结归纳 ### - -其他与进程管理相关,但并不仅限于下面所列的功能的任务有: - -**1. 在考虑到系统资源的使用上,调整一个进程的执行优先级:** - -这是通过 `renice` 工具来完成的,它可以改变一个或多个正在运行着的进程的调度优先级。简单来说,调度优先级是一个允许内核(当前只支持 >= 2.6 的版本)根据某个给定进程被分配的执行优先级(即优先级,从 -20 到 19)来为其分配系统资源的功能。 - -`renice` 的基本语法如下: - - # renice [-n] priority [-gpu] identifier - -在上面的通用命令中,第一个参数是将要使用的优先级数值,而另一个参数可以解释为进程 ID(这是默认的设定),进程组 ID,用户 ID 或者用户名。一个常规的用户(即除 root 以外的用户)只可以更改他或她所拥有的进程的调度优先级,并且只能增加优先级的层次(这意味着占用更少的系统资源)。 - -![在 Linux 中调整进程的优先级](http://www.tecmint.com/wp-content/uploads/2015/03/Process-Scheduling-Priority.png) - -进程调度优先级 - -**2. 按照需要杀死一个进程(或终止其正常执行):** - -更精确地说,杀死一个进程指的是通过 [kill 或 pkill][3]命令给该进程发送一个信号,让它优雅地(SIGTERM=15)或立即(SIGKILL=9)结束它的执行。 - -这两个工具的不同之处在于前一个被用来终止一个特定的进程或一个进程组,而后一个则允许你在进程的名称和其他属性的基础上,执行相同的动作。 - -另外, pkill 与 pgrep 相捆绑,pgrep 提供将受影响的进程的 PID 给 pkill 来使用。例如,在运行下面的命令之前: - - # pkill -u gacanepa - -查看一眼由 gacanepa 所拥有的 PID 或许会带来点帮助: - - # pgrep -l -u gacanepa - -![找到用户拥有的 PID](http://www.tecmint.com/wp-content/uploads/2015/03/Find-PIDs-of-User.png) - -找到用户拥有的 PID - -默认情况下,kill 和 pkiill 都发送 SIGTERM 信号给进程,如我们上面提到的那样,这个信号可以被忽略(即该进程可能会终止其自身的执行或者不终止),所以当你因一个合理的理由要真正地停止一个运行着的进程,则你将需要在命令行中带上特定的 SIGKILL 信号: - - # kill -9 identifier # 杀死一个进程或一个进程组 - # kill -s SIGNAL identifier # 同上 - # pkill -s SIGNAL identifier # 通过名称或其他属性来杀死一个进程 - -### 结论 ### - -在这篇文章中,我们解释了在 RHEL 7 系统中,有关开机启动过程的基本知识,并分析了一些可用的工具来帮助你通过使用一般的程序和 systemd 特有的命令来管理进程。 - -请注意,这个列表并不旨在涵盖有关这个话题的所有花哨的工具,请随意使用下面的评论栏来添加你自已钟爱的工具和命令。同时欢迎你的提问和其他的评论。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/ - -作者:[Gabriel Cánepa][a] -译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ 
-[1]:http://www.tecmint.com/dmesg-commands/ -[2]:http://www.tecmint.com/systemd-replaces-init-in-linux/ -[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/ From a2be4dbf95984395c2bb8c58964a2951d93d86ae Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 18 Sep 2015 17:28:59 +0800 Subject: [PATCH 554/697] =?UTF-8?q?20150918-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Remove Bookmarks In Ubuntu Beginner Tip.md | 48 ++++++ ...0918 Install Justniffer In Ubuntu 15.04.md | 151 ++++++++++++++++++ 2 files changed, 199 insertions(+) create mode 100644 sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md create mode 100644 sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md diff --git a/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md b/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md new file mode 100644 index 0000000000..9249a66e4d --- /dev/null +++ b/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md @@ -0,0 +1,48 @@ +How To Add And Remove Bookmarks In Ubuntu [Beginner Tip] +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Add-Bookmark.jpg) + +In this quick tip for absolute beginners, I am going to show you how to add bookmarks in Ubuntu File manager, Files. + +Now, if you wonder why would you do that, the answer is pretty simple. It gives you quick access, right in the left sidebar. For example, I [installed Copy in Ubuntu][1]. Now it has been created in /Home/Copy. Not a big deal to go in Home and then go in Copy directory, but I would like to access it rather quickly. So, I add a bookmark to it so that I would be able to access it from the sidebar. + +### Add a bookmark in Ubuntu ### + +Open Files. Go to the location which you want to save for quick access. 
You need to be inside the directory to bookmark it. + +Now, you have two ways to do it. + +#### Option 1: #### + +When you are in Files (file explorer in Ubuntu), look at the top for the global menu. You would see Bookmarks. Click on it and you’ll see the option to add the current location as a bookmark. + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Add-Bookmark-Ubuntu.jpeg) + +#### Option 2: #### + +You can simply press Ctrl+D and the current location will be added as a bookmark. + +As you can see, here is the newly added Copy directory in the left sidebar for quick access: + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Add-Bookmark-Ubuntu-1.jpeg) + +### Manage Bookmarks ### + +If you think you have got too many bookmarks or if you added a bookmark by mistake, you can remove a bookmark easily. Press Ctrl+B to access all the bookmarks. Here, select the desired bookmark and click on – to delete it. + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Remove-bookmark-ubuntu.png) + +That’s all you need to do to manage bookmarks in Ubuntu. I know it might be trivial for most users, but it might help people who are absolutely new to Ubuntu. 
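A side note for command-line users: on stock Ubuntu, Files (Nautilus) keeps these sidebar bookmarks in a plain-text file, `~/.config/gtk-3.0/bookmarks`, with one `file://` URI per line — that location and format are an assumption about the GTK3-based Files, so check your own system first. A minimal sketch of adding the same `~/Copy` bookmark from a terminal:

```shell
# Sketch: append a bookmark entry the way Ctrl+D would.
# Assumed format: "file://<absolute-path> [optional label]", one per line.
BOOKMARKS="$HOME/.config/gtk-3.0/bookmarks"
mkdir -p "$(dirname "$BOOKMARKS")"            # make sure the config dir exists
echo "file://$HOME/Copy Copy" >> "$BOOKMARKS" # URI plus a display label
cat "$BOOKMARKS"                              # Files reads this on next launch
```

Deleting the matching line from that file has the same effect as the – button described above.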
+ +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/add-remove-bookmarks-ubuntu/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://itsfoss.com/install-copy-in-ubuntu-14-04/ \ No newline at end of file diff --git a/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md b/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md new file mode 100644 index 0000000000..b4db9d16ba --- /dev/null +++ b/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md @@ -0,0 +1,151 @@ +Install Justniffer In Ubuntu 15.04 +================================================================================ +### Introduction ### + +[Justniffer][1] is a network protocol analyzer that can be used as an alternative to Snort. It is a very popular network analyzer tool; it works interactively to trace/sniff a live network. It can capture traffic from a live environment and supports “libpcap” and “tcpdump” file formats. It helps users perform analysis in a complex network where it is difficult to capture traffic with Wireshark. In particular, it helps to analyze application-layer traffic and can easily extract HTTP contents like images, scripts, HTML, etc. Justniffer is helpful in understanding how communication occurs among different components. + +### Features ### + +The advantage of Justniffer is that it collects all traffic from a complex network without affecting system performance, and it can save logs for future analysis. Some of the important features of Justniffer are: + +#### 1. Reliable TCP flow rebuilding #### + +It can record and reassemble TCP segments and IP fragments using a portion of the host Linux kernel. + +#### 2. Logging #### + +Logs are saved for future analysis and can be customized as and when required. 
+ +#### 3. Extensible #### + +Can be extended with external python, perl and bash scripts to get some additional results from analysis reports. + +#### 4. Performance Management #### + +Retrieve information on the basis of connection time, close time, response time, request time, etc. + +### Installation ### + +Justniffer can be installed from a PPA. + +To add the repo, run: + + $ sudo add-apt-repository ppa:oreste-notelli/ppa + +Update the system: + + $ sudo apt-get update + +Install the Justniffer tool: + + $ sudo apt-get install justniffer + +The install failed for me at first, so I ran the following command and tried the installation again: + + $ sudo apt-get -f install + +### Examples ### + +First of all, verify the installed version of Justniffer with the -V option; you will need superuser privileges to use this tool. + + $ sudo justniffer -V + +Sample output: + +![j](http://www.unixmen.com/wp-content/uploads/2015/09/j.png) + +**1. To dump traffic to the terminal in an Apache-like format for the eth1 interface, type** + + $ sudo justniffer -i eth1 + +Sample output: + +![Selection_001](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0013.png) + +**2. You can trace a running TCP stream with the following option** + + $ sudo justniffer -i eth1 -r + +Sample output: + +![Selection_002](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0023.png) + +**3. To get the response time of the web server, type** + + $ sudo justniffer -i eth1 -a " %response.time" + +Sample output: + +![Selection_003](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0033.png) + +**4. Read a tcpdump captured file with Justniffer** + +First, capture traffic with tcpdump: + + $ sudo tcpdump -w /tmp/file.cap -s0 -i eth0 + +Now read that data with justniffer: + + $ justniffer -f file.cap + +Sample output: + +![Selection_005](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0056.png) + +**5. 
Capture HTTP-only data** + + $ sudo justniffer -i eth1 -r -p "port 80 or port 8080" + +Sample output: + +![Selection_006](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0064.png) + +**6. Get HTTP-only data from a specific host** + + $ justniffer -i eth1 -r -p "host 192.168.1.250 and tcp port 80" + +Sample output: + +![Selection_007](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0074.png) + +**7. Capture data in a more precise format** + +When you type **justniffer -h**, you will see a lot of format keywords which help to retrieve data in a more precise way. + + $ justniffer -h + +Sample Output: + +![Selection_008](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0083.png) + +Let us retrieve data with some of the predefined parameters provided by justniffer: + + $ justniffer -i eth1 -l "%request.timestamp %request.header.host %request.url %response.time" + +Sample Output: + +![Selection_009](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0094.png) + +There are lots of options you can explore. + +### Conclusion ### + +Justniffer is a very nice tool for network testing. In my view, users who use Snort for network sniffing will find Justniffer a less complicated tool. It provides a lot of **FORMAT KEYWORDS**, which are very helpful for retrieving data in the specific format you need. You can log your network traffic to .cap files, which can be analyzed later to monitor network service performance. 
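As a hedged sketch of that post-processing idea (the two input lines below are made-up samples in the `%request.url %response.time` style shown in example 7, not real capture data), a short awk pipeline can summarize response times from a saved log:

```shell
# Sketch: average the last field (a response time in seconds) of a
# justniffer-style log. Two hypothetical sample lines stand in for a capture.
printf '/index.html 0.120\n/api/data 0.480\n' | \
  awk '{ sum += $NF; n++ } END { printf "avg response: %.3f s over %d requests\n", sum/n, n }'
# prints: avg response: 0.300 s over 2 requests
```

The same pipeline works on a real log file (`awk '...' justniffer.log`) as long as the response time is the last field of each line.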
+ +**Reference:** + +- [Justniffer website][2] + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/install-justniffer-ubuntu-15-04/ + +作者:[Rajneesh Upadhyay][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/rajneesh/ +[1]:http://sourceforge.net/projects/justniffer/?source=directory +[2]:http://justniffer.sourceforge.net/ \ No newline at end of file From 48cd390ced57f8faaa4b749d529779260efe16fb Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 19 Sep 2015 08:59:46 +0800 Subject: [PATCH 555/697] translating --- ... How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md b/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md index 9249a66e4d..dbb21cc226 100644 --- a/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md +++ b/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md @@ -1,3 +1,5 @@ +translating---geekpi + How To Add And Remove Bookmarks In Ubuntu [Beginner Tip] ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Add-Bookmark.jpg) @@ -45,4 +47,4 @@ via: http://itsfoss.com/add-remove-bookmarks-ubuntu/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://itsfoss.com/author/abhishek/ -[1]:http://itsfoss.com/install-copy-in-ubuntu-14-04/ \ No newline at end of file +[1]:http://itsfoss.com/install-copy-in-ubuntu-14-04/ From 2b4400f7b7a00a006a2626457f49d0987424dbad Mon Sep 17 00:00:00 2001 From: Luoyuanhao Date: Sat, 19 Sep 2015 09:03:12 +0800 Subject: [PATCH 556/697] 
Translating sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md --- .../20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md | 1 + sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md | 1 + ... through TLS using Network Security Service NSS for Apache.md | 1 + 3 files changed, 3 insertions(+) diff --git a/sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md b/sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md index ba371ca915..862a5bd9b5 100644 --- a/sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md +++ b/sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md @@ -1,3 +1,4 @@ +ictlyh Translating TERMINATOR 0.98: INSTALL IN UBUNTU AND LINUX MINT ================================================================================ [Terminator][1] multiple terminals in one window. The goal of this project is to produce a useful tool for arranging terminals. It is inspired by programs such as gnome-multi-term, quadkonsole, etc. in that the main focus is arranging terminals in grids. Terminator 0.98 bringing a more polished tabs functionality, better layout saving/restoring, improved preferences UI and numerous bug fixes. 
diff --git a/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md b/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md index b4db9d16ba..92d7db0bd9 100644 --- a/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md +++ b/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md @@ -1,3 +1,4 @@ +ictlyh Translating Install Justniffer In Ubuntu 15.04 ================================================================================ ### Introduction ### diff --git a/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md b/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md index 317b2b3292..a316797ebd 100644 --- a/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md +++ b/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md @@ -1,3 +1,4 @@ +ictlyh Translating RHCE Series: Implementing HTTPS through TLS using Network Security Service (NSS) for Apache ================================================================================ If you are a system administrator who is in charge of maintaining and securing a web server, you can’t afford to not devote your very best efforts to ensure that data served by or going through your server is protected at all times. 
From ab25dd89d2cce1cf7aa7a04d832893e566150624 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 19 Sep 2015 09:16:18 +0800 Subject: [PATCH 557/697] translated --- ...Remove Bookmarks In Ubuntu Beginner Tip.md | 32 +++++++++---------- 1 file changed, 15 insertions(+), 17 deletions(-) diff --git a/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md b/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md index dbb21cc226..25388915af 100644 --- a/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md +++ b/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md @@ -1,47 +1,45 @@ -translating---geekpi - -How To Add And Remove Bookmarks In Ubuntu [Beginner Tip] +如何在Ubuntu中添加和删除书签[新手技巧] ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Add-Bookmark.jpg) -In this quick tip for absolute beginners, I am going to show you how to add bookmarks in Ubuntu File manager, Files. +这是一篇面向纯新手的快速技巧,我将向你展示如何在Ubuntu文件管理器Files中添加书签。 -Now, if you wonder why would you do that, the answer is pretty simple. It gives you quick access, right in the left sidebar. For example, I [installed Copy in Ubuntu][1]. Now it has been created in /Home/Copy. Not a big deal to go in Home and then go in Copy directory, but I would like to access it rather quickly. So, I add a bookmark to it so that I would be able to access it from the sidebar. +现在,如果你想知道为什么要这么做,答案很简单:它可以让你在左边栏中快速访问。比如,我[在Ubuntu中安装了Copy][1],它创建了/Home/Copy。先进入Home目录再进入Copy目录并不是什么大事,但是我想要更快地访问它,因此我添加了一个书签,这样我就可以直接从侧边栏访问了。 -### Add a bookmark in Ubuntu ### +### 在Ubuntu中添加书签 ### -Open Files. Go to the location which you want to save for quick access. You need to be inside the directory to bookmark it. +打开Files,进入你想要保存以便快速访问的位置。你需要处于要标记书签的目录中。 -Now, you have two ways to do it. 
+现在,你有两种方法。 -#### Option 1: #### +#### 方法1: #### -When you are in Files (file explorer in Ubuntu), look at the top for the global menu. You would see Bookmarks. Click on it and you’ll see the option to add the current location as boomark. +当你在Files(Ubuntu中的文件管理器)中时,查看顶部的全局菜单,你会看到"书签"菜单。点击它,你会看到将当前位置保存为书签的选项。 ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Add-Bookmark-Ubuntu.jpeg) -#### Option 2: #### +#### 方法 2: #### -You can simply press Ctrl+D and the current location will be added as a bookmark. +你只需按下Ctrl+D,就可以将当前位置保存为书签。 -As you can see, here is the newly added Copy directory in the left sidebar for quick access: +如你所见,这里左边栏就有一个新添加的Copy目录: ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Add-Bookmark-Ubuntu-1.jpeg) -### Manage Bookmarks ### +### 管理书签 ### -If you think you have got too many bookmarks or if you added a bookmark by mistake, you can remove a bookmark easily. Press Ctrl+B to access all the bookmarks. Here, select the desired bookmark and click on – to delete it. +如果你觉得书签太多,或者错误地添加了一个书签,你可以很简单地删除它。按下Ctrl+B查看所有的书签,然后选择想要删除的书签并点击"–"删除它。 ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Remove-bookmark-ubuntu.png) -That’s all you need to do to manage bookmarks in Ubuntu. I know it might be trivial with most of the users, but it might help people who are absolutely new to Ubuntu. 
+这就是在Ubuntu中管理书签需要做的。我知道这对于大多数用户而言微不足道,但是这对Ubuntu的新手而言或许还有用。 -------------------------------------------------------------------------------- via: http://itsfoss.com/add-remove-bookmarks-ubuntu/ 作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://itsfoss.com/author/abhishek/ [1]:http://itsfoss.com/install-copy-in-ubuntu-14-04/ From e6a549f59cd1acb9710ef0ae30d05528bc3f18c5 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 19 Sep 2015 09:16:55 +0800 Subject: [PATCH 558/697] Rename sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md to translated/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md --- ...0918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md (100%) diff --git a/sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md b/translated/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md similarity index 100% rename from sources/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md rename to translated/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md From 8aacc129220137e9916036e0fa5994083616f159 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 19 Sep 2015 09:18:06 +0800 Subject: [PATCH 559/697] translating --- sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md b/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md index b4db9d16ba..ff4220c863 100644 --- a/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md +++ b/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md @@ -1,3 +1,6 @@ +translating----geekpi + + Install Justniffer 
In Ubuntu 15.04 ================================================================================ ### Introduction ### @@ -148,4 +151,4 @@ via: http://www.unixmen.com/install-justniffer-ubuntu-15-04/ [a]:http://www.unixmen.com/author/rajneesh/ [1]:http://sourceforge.net/projects/justniffer/?source=directory -[2]:http://justniffer.sourceforge.net/ \ No newline at end of file +[2]:http://justniffer.sourceforge.net/ From 9d6b1ed7c9ab90d9cf8eb87620dac1a26e9f5bdf Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 19 Sep 2015 09:22:41 +0800 Subject: [PATCH 560/697] Update 20150918 Install Justniffer In Ubuntu 15.04.md --- sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md b/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md index ff4220c863..d1c49dd960 100644 --- a/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md +++ b/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md @@ -1,6 +1,3 @@ -translating----geekpi - - Install Justniffer In Ubuntu 15.04 ================================================================================ ### Introduction ### From 8bab4141cb090a0218549925ae927b482fca7a3b Mon Sep 17 00:00:00 2001 From: Luoyuanhao Date: Sat, 19 Sep 2015 10:11:46 +0800 Subject: [PATCH 561/697] [Translated] tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md tech/20150918 Install Justniffer In Ubuntu 15.04.md [Move] tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md --- ...R 0.98 INSTALL IN UBUNTU AND LINUX MINT.md | 61 ------- ...0918 Install Justniffer In Ubuntu 15.04.md | 152 ------------------ ... 
which CPU core a process is running on.md | 0 ...R 0.98 INSTALL IN UBUNTU AND LINUX MINT.md | 60 +++++++ ...0918 Install Justniffer In Ubuntu 15.04.md | 151 +++++++++++++++++ 5 files changed, 211 insertions(+), 213 deletions(-) delete mode 100644 sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md delete mode 100644 sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md rename translated/{ => tech}/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md (100%) create mode 100644 translated/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md create mode 100644 translated/tech/20150918 Install Justniffer In Ubuntu 15.04.md diff --git a/sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md b/sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md deleted file mode 100644 index 862a5bd9b5..0000000000 --- a/sources/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md +++ /dev/null @@ -1,61 +0,0 @@ -ictlyh Translating -TERMINATOR 0.98: INSTALL IN UBUNTU AND LINUX MINT -================================================================================ -[Terminator][1] multiple terminals in one window. The goal of this project is to produce a useful tool for arranging terminals. It is inspired by programs such as gnome-multi-term, quadkonsole, etc. in that the main focus is arranging terminals in grids. Terminator 0.98 bringing a more polished tabs functionality, better layout saving/restoring, improved preferences UI and numerous bug fixes. 
- -![](http://www.ewikitech.com/wp-content/uploads/2015/09/Screenshot-from-2015-09-17-094828.png) - -###CHANGES/FEATURE TERMINATOR 0.98 -- Alayout launcher was added which allows easily switching between layouts (use Alt + L to open the new layout switcher); -- A new manual was added (use F1 to launch it); -- When saving, a layout now remembers the following: - - * maximised and fullscreen status - - * window titles - - * which tab was active - - * which terminal was active - - * working directory for each terminal -- Added options for enabling/disabling non-homogenous tabs and scroll arrows; -- Added shortcuts for scrolling up/down by line/half-page/page; -- Added Ctrl+MouseWheel Zoom in/out and Shift+MouseWheel page scroll up/down; -- Added shortcuts for next/prev profile; -- Improved consistency of Custom Commands menu; -- Added shortcuts/code to toggle All/Tab grouping; -- Improved watcher plugin; -- Added search bar wrap toggle; -- Major cleanup and reorganisation of the preferences window, including a complete revamp of the global tab. -- Added option to set how long ActivityWatcher plugin is quiet for; -- Many other improvements and bug fixes -- [Click Here To See Complete Changlog][2] - -###INSTALL TERMINATOR 0.98: - -Terminator 0.98 is available in PPA, Firstly we need to add repository in Ubuntu/Linux Mint system. Run following commands in terminal to install Terminator 0.98. 
- - $ sudo add-apt-repository ppa:gnome-terminator/nightly - $ sudo apt-get update - $ sudo apt-get install terminator - -If you want to remove terminator, simply run following command in terminal, (Optional) - - $ sudo apt-get remove terminator - - - - - --------------------------------------------------------------------------------- - -via: http://www.ewikitech.com/articles/linux/terminator-install-ubuntu-linux-mint/ - -作者:[admin][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.ewikitech.com/author/admin/ -[1]:https://launchpad.net/terminator -[2]:http://bazaar.launchpad.net/~gnome-terminator/terminator/trunk/view/head:/ChangeLog - - - diff --git a/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md b/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md deleted file mode 100644 index 92d7db0bd9..0000000000 --- a/sources/tech/20150918 Install Justniffer In Ubuntu 15.04.md +++ /dev/null @@ -1,152 +0,0 @@ -ictlyh Translating -Install Justniffer In Ubuntu 15.04 -================================================================================ -### Introduction ### - -[Justniffer][1] is a network protocol analyzer that can be used as alternative to Snort. It is a very popular network analyzer tool, it work interactively to trace/sniff a live network. It can capture traffic from a live environment, support “lipcap” a “tcpdump” file formats. It helps the users to perform analysis in a complex network where it is difficult to capture traffic with wireshark. Specially it help to analyze application layer traffic very significantly and can extract http contents like images, scripts, HTML etc easily. Justsniffer is helpful in understanding how communication occur among different components. 
- -### Features ### - -This is the advantage of Justniffer that it collect all traffic from a complex network without affecting system performance, and can save logs for future analysis, some of the important features of Justniffer are: - -#### 1. Reliable TCP flow rebuilding #### - -It can record and reassemble TCP segments and IP fragments using a portion of host Linux kernel. - -#### 2. Logging #### - -Log are saved for future analysis and can be customized as and when required. - -#### 3. Extensible #### - -Can be extended with external python, perl and bash scripts to get some additional results from analysis reports. - -#### 4. Performance Management #### - -Retrieve information on the basis of Connection time, close time, response time or request time etc. - -### Installation ### - -Justsniffer can be installed with ppa. - -To add the repo, run: - - $ sudo add-apt-repository ppa:oreste-notelli/ppa - -Update System: - - $ sudo apt-get update - -Install Justniffer tool: - - $ sudo apt-get install justniffer - -It failed to install in make then i run following command and try to reinstall service - - $ sudo apt-get -f install - -### Examples ### - -First of all verify installed version of Justniffer with -V option, you will need super user privileges to utilize that tool. - - $ sudo justniffer -V - -Sample output: - -![j](http://www.unixmen.com/wp-content/uploads/2015/09/j.png) - -**1. Dump Traffic to terminal in apache like format for eth1 interface, type** - - $ sudo justniffer -i eth1 - -Sample output: - -![Selection_001](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0013.png) - -**2. You can trace running tcp stream with following option** - - $ sudo justniffer -i eth1 -r - -Sample output: - -![Selection_002](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0023.png) - -**3. 
To get the response time of web server, type** - - $ sudo justniffer -i eth1 -a " %response.time" - -Sample output: - -![Selection_003](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0033.png) - -**4. Read a tcpdump captured file with Justniffer** - -First, capture traffic with tcpdump. - - $ sudo tcpdump -w /tmp/file.cap -s0 -i eth0 - -Now access that data with justniffer - - $ justniffer -f file.cap - -Sample output: - -![Selection_005](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0056.png) - -**5. Capture http only data** - - $ sudo justniffer -i eth1 -r -p "port 80 or port 8080" - -Sample output: - -![Selection_006](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0064.png) - -**6. Get http only data from a specific host** - - $ justniffer -i eth1 -r -p "host 192.168.1.250 and tcp port 80" - -Sample output: - -![Selection_007](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0074.png) - -**7. Capture data in a more preciser format** - -When you will type **justniffer -h** You will see a lots of format key words which help to get data in more preciser way - - $ justniffer -h - -Sample Output: - -![Selection_008](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0083.png) - -Let us retrieve data with some predefined parameter provided with justniffer - - $ justniffer -i eth1 -l "%request.timestamp %request.header.host %request.url %response.time" - -Sample Output: - -![Selection_009](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0094.png) - -There are lots of option which you can explore. - -### Conclusion ### - -Justniffer is a very nice tool for network testing. In my view users who are using Snort for network sniffing will know justniffer as an less complicated tool. It is provided with a lots of **FORMAT KEYWORDS** which are very helpful to retrieve data in specific formats as per your need. 
You can log your network in .cap file formats which can be analyzed later on to monitor network service performance. - -**Reference:** - -- [Justniffer website][2] - --------------------------------------------------------------------------------- - -via: http://www.unixmen.com/install-justniffer-ubuntu-15-04/ - -作者:[Rajneesh Upadhyay][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.unixmen.com/author/rajneesh/ -[1]:http://sourceforge.net/projects/justniffer/?source=directory -[2]:http://justniffer.sourceforge.net/ \ No newline at end of file diff --git a/translated/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md b/translated/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md similarity index 100% rename from translated/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md rename to translated/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md diff --git a/translated/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md b/translated/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md new file mode 100644 index 0000000000..d571566c62 --- /dev/null +++ b/translated/tech/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md @@ -0,0 +1,60 @@ +在 Ubuntu 和 Linux Mint 上安装 Terminator 0.98 +================================================================================ +[Terminator][1],可以在一个窗口中使用多个终端。该项目的目标之一是为排布终端提供一个有用的工具。它的灵感来自于类似 gnome-multi-term,quadkonsole 等程序,这些程序专注于以网格方式排布终端。Terminator 0.98 带来了更完善的标签功能,更好的布局保存/恢复,改进了偏好用户界面和许多 bug 修复。 + +![](http://www.ewikitech.com/wp-content/uploads/2015/09/Screenshot-from-2015-09-17-094828.png) + +###TERMINATOR 0.98 的更改和新特性 +- 添加了一个布局启动器,允许在不同布局之间简单切换(用 Alt + L 打开一个新的布局切换器); +- 添加了一个新的手册(使用 
F1 打开); +- 保存的时候,布局现在会记住: + - * 最大化和全屏状态 + - * 窗口标题 + - * 激活的标签 + - * 激活的终端 + - * 每个终端的工作目录 +- 添加选项用于启用/停用非同质标签和滚动箭头; +- 添加快捷键用于按行/半页/一页向上/下滚动; +- 添加使用 Ctrl+鼠标滚轮放大/缩小,Shift+鼠标滚轮向上/下滚动页面; +- 为下一个/上一个 profile 添加快捷键 +- 改进自定义命令菜单的一致性 +- 新增快捷方式/代码来切换所有/标签分组; +- 改进监视插件 +- 增加搜索栏切换; +- 清理和重新组织窗口偏好,包括对全局标签页的完整改版 +- 添加选项用于设置 ActivityWatcher 插件静默时间 +- 其它一些改进和 bug 修复 +- [点击此处查看完整更新日志][2] + +### 安装 Terminator 0.98: + +Terminator 0.98 有可用的 PPA,首先我们需要在 Ubuntu/Linux Mint 上添加软件源。在终端里运行下面的命令来安装 Terminator 0.98。 + + $ sudo add-apt-repository ppa:gnome-terminator/nightly + $ sudo apt-get update + $ sudo apt-get install terminator + +如果你想要移除 Terminator,只需要在终端中运行下面的命令(可选) + + $ sudo apt-get remove terminator + + + + + +-------------------------------------------------------------------------------- + +via: http://www.ewikitech.com/articles/linux/terminator-install-ubuntu-linux-mint/ + +作者:[admin][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.ewikitech.com/author/admin/ +[1]:https://launchpad.net/terminator +[2]:http://bazaar.launchpad.net/~gnome-terminator/terminator/trunk/view/head:/ChangeLog + + + diff --git a/translated/tech/20150918 Install Justniffer In Ubuntu 15.04.md b/translated/tech/20150918 Install Justniffer In Ubuntu 15.04.md new file mode 100644 index 0000000000..1d711ea08b --- /dev/null +++ b/translated/tech/20150918 Install Justniffer In Ubuntu 15.04.md @@ -0,0 +1,151 @@ +在 Ubuntu 15.04 上安装 Justniffer +================================================================================ +### 简介 ### + +[Justniffer][1] 是一个可用于替换 Snort 的网络协议分析器。它非常流行,可交互式地跟踪/探测一个网络连接。它能从实时环境中抓取流量,支持 “libpcap” 和 “tcpdump” 文件格式。它可以帮助用户分析一个用 wireshark 难以抓包的复杂网络。尤其是它可以有效地帮助分析应用层流量,能提取类似图像、脚本、HTML 等 http 内容。Justniffer 有助于理解不同组件之间是如何通信的。 + +### 功能 ### + +Justniffer 收集一个复杂网络的所有流量而不影响系统性能,这是 Justniffer 的一个优势,它还可以保存日志用于之后的分析,Justniffer 其它一些重要功能包括: + +#### 1. 
可靠的 TCP 流重建 #### + +它可以利用主机 Linux 内核的一部分来记录并重组 TCP 分段和 IP 分片。 + +#### 2. 日志 #### + +保存日志用于之后的分析,并能自定义保存内容和时间。 + +#### 3. 可扩展 #### + +可以通过外部 python、 perl 和 bash 脚本扩展来从分析报告中获取一些额外的结果。 + +#### 4. 性能管理 #### + +基于连接时间、关闭时间、响应时间或请求时间等提取信息。 + +### 安装 ### + +Justniffer 可以通过 PPA 安装: + +运行下面命令添加软件源: + + $ sudo add-apt-repository ppa:oreste-notelli/ppa + +更新系统: + + $ sudo apt-get update + +安装 Justniffer 工具: + + $ sudo apt-get install justniffer + +如果在 make 阶段安装失败了,可以运行下面的命令修复,然后再尝试重新安装: + + $ sudo apt-get -f install + +### 示例 ### + +首先用 -V 选项验证安装的 Justniffer 版本,你需要用超级用户权限来使用这个工具。 + + $ sudo justniffer -V + +示例输出: + +![j](http://www.unixmen.com/wp-content/uploads/2015/09/j.png) + +**1. 以类似 Apache 日志的格式将 eth1 接口的流量转储到终端** + + $ sudo justniffer -i eth1 + +示例输出: + +![Selection_001](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0013.png) + +**2. 可以用下面的选项跟踪正在运行的 TCP 流** + + $ sudo justniffer -i eth1 -r + +示例输出: + +![Selection_002](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0023.png) + +**3. 获取 web 服务器的响应时间** + + $ sudo justniffer -i eth1 -a " %response.time" + +示例输出: + +![Selection_003](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0033.png) + +**4. 使用 Justniffer 读取一个 tcpdump 抓取的文件** + +首先,用 tcpdump 抓取流量。 + + $ sudo tcpdump -w /tmp/file.cap -s0 -i eth0 + +然后用 Justniffer 访问数据 + + $ justniffer -f file.cap + +示例输出: + +![Selection_005](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0056.png) + +**5. 只抓取 http 数据** + + $ sudo justniffer -i eth1 -r -p "port 80 or port 8080" + +示例输出: + +![Selection_006](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0064.png) + +**6. 从一个指定主机获取 http 数据** + + $ justniffer -i eth1 -r -p "host 192.168.1.250 and tcp port 80" + +示例输出: + +![Selection_007](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0074.png) + +**7. 
以更精确的格式抓取数据** + +当你输入 **justniffer -h** 的时候,可以看到很多用于以更精确的方式获取数据的格式关键字 + + $ justniffer -h + +示例输出: + +![Selection_008](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0083.png) + +让我们用 Justniffer 根据预先定义的参数提取数据 + + $ justniffer -i eth1 -l "%request.timestamp %request.header.host %request.url %response.time" + +示例输出: + +![Selection_009](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0094.png) + +此外还有很多你可以探索的选项 + +### 总结 ### + +Justniffer 是一个用于网络测试的好工具。在我看来,对于那些用 Snort 来进行网络探测的用户来说,Justniffer 是一个更简单的工具。它提供了很多 **格式关键字** 用于按照你的需要精确地提取数据。你可以用 .cap 文件格式记录网络信息,之后用于分析和监视网络服务性能。 + +**参考资料:** + +- [Justniffer 官网][2] + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/install-justniffer-ubuntu-15-04/ + +作者:[Rajneesh Upadhyay][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/rajneesh/ +[1]:http://sourceforge.net/projects/justniffer/?source=directory +[2]:http://justniffer.sourceforge.net/ \ No newline at end of file From 9f7c2398586ebdb59018d7cfbef3145835b02645 Mon Sep 17 00:00:00 2001 From: icybreaker Date: Sat, 19 Sep 2015 11:03:18 +0800 Subject: [PATCH 562/697] icybreaker translating --- sources/talk/20150901 Is Linux Right For You.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150901 Is Linux Right For You.md b/sources/talk/20150901 Is Linux Right For You.md index 89044347ec..ddbffa8481 100644 --- a/sources/talk/20150901 Is Linux Right For You.md +++ b/sources/talk/20150901 Is Linux Right For You.md @@ -1,3 +1,4 @@ +icybreaker translating... Is Linux Right For You? ================================================================================ > Not everyone should opt for Linux -- for many users, remaining with Windows or OSX is the better choice. 
@@ -60,4 +61,4 @@ via: http://www.datamation.com/open-source/is-linux-right-for-you.html [a]:http://www.datamation.com/author/Matt-Hartley-3080.html [1]:http://www.psychocats.net/ubuntu/virtualbox [2]:http://www.howtogeek.com/howto/14912/create-a-persistent-bootable-ubuntu-usb-flash-drive/ -[3]:http://www.linuxandubuntu.com/home/dual-boot-ubuntu-15-04-14-10-and-windows-10-8-1-8-step-by-step-tutorial-with-screenshots \ No newline at end of file +[3]:http://www.linuxandubuntu.com/home/dual-boot-ubuntu-15-04-14-10-and-windows-10-8-1-8-step-by-step-tutorial-with-screenshots From 3227161d2200c51a5fa404abe8917666bdba9111 Mon Sep 17 00:00:00 2001 From: Luoyuanhao Date: Sun, 20 Sep 2015 09:08:42 +0800 Subject: [PATCH 563/697] [Translating]sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md --- ...mands to Manage File Types and System Time in Linu--Part 3.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md b/sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md index 2226b72297..6f4d1a63f5 100644 --- a/sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md +++ b/sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md @@ -1,3 +1,4 @@ +ictlyh Translating 5 Useful Commands to Manage File Types and System Time in Linux – Part 3 ================================================================================ Adapting to using the command line or terminal can be very hard for beginners who want to learn Linux. Because the terminal gives more control over a Linux system than GUIs programs, one has to get a used to running commands on the terminal. Therefore to memorize different commands in Linux, you should use the terminal on a daily basis to understand how commands are used with different options and arguments. 
From 7774856c6fc51b65a61bfca7c50760a502a8fa36 Mon Sep 17 00:00:00 2001 From: Luoyuanhao Date: Sun, 20 Sep 2015 10:38:26 +0800 Subject: [PATCH 564/697] [Translated]sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md --- ...Network Security Service NSS for Apache.md | 212 ------------------ ...Network Security Service NSS for Apache.md | 210 +++++++++++++++++ 2 files changed, 210 insertions(+), 212 deletions(-) delete mode 100644 sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md create mode 100644 translated/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md diff --git a/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md b/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md deleted file mode 100644 index a316797ebd..0000000000 --- a/sources/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md +++ /dev/null @@ -1,212 +0,0 @@ -ictlyh Translating -RHCE Series: Implementing HTTPS through TLS using Network Security Service (NSS) for Apache -================================================================================ -If you are a system administrator who is in charge of maintaining and securing a web server, you can’t afford to not devote your very best efforts to ensure that data served by or going through your server is protected at all times. 
- -![Setup Apache HTTPS Using SSL/TLS](http://www.tecmint.com/wp-content/uploads/2015/09/Setup-Apache-SSL-TLS-Server.png) - -RHCE Series: Implementing HTTPS through TLS using Network Security Service (NSS) for Apache – Part 8 - -In order to provide more secure communications between web clients and servers, the HTTPS protocol was born as a combination of HTTP and SSL (Secure Sockets Layer) or more recently, TLS (Transport Layer Security). - -Due to some serious security breaches, SSL has been deprecated in favor of the more robust TLS. For that reason, in this article we will explain how to secure connections between your web server and clients using TLS. - -This tutorial assumes that you have already installed and configured your Apache web server. If not, please refer to following article in this site before proceeding further. - -- [Install LAMP (Linux, MySQL/MariaDB, Apache and PHP) on RHEL/CentOS 7][1] - -### Installation of OpenSSL and Utilities ### - -First off, make sure that Apache is running and that both http and https are allowed through the firewall: - - # systemctl start http - # systemctl enable http - # firewall-cmd --permanent –-add-service=http - # firewall-cmd --permanent –-add-service=https - -Then install the necessary packages: - - # yum update && yum install openssl mod_nss crypto-utils - -**Important**: Please note that you can replace mod_nss with mod_ssl in the command above if you want to use OpenSSL libraries instead of NSS (Network Security Service) to implement TLS (which one to use is left entirely up to you, but we will use NSS in this article as it is more robust; for example, it supports recent cryptography standards such as PKCS #11). - -Finally, uninstall mod_ssl if you chose to use mod_nss, or viceversa. - - # yum remove mod_ssl - -### Configuring NSS (Network Security Service) ### - -After mod_nss is installed, its default configuration file is created as /etc/httpd/conf.d/nss.conf. 
You should then make sure that all of the Listen and VirtualHost directives point to port 443 (default port for HTTPS): - -nss.conf – Configuration File - ----------- - - Listen 443 - VirtualHost _default_:443 - -Then restart Apache and check whether the mod_nss module has been loaded: - - # apachectl restart - # httpd -M | grep nss - -![Check Mod_NSS Module in Apache](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Mod_NSS-Module-in-Apache.png) - -Check Mod_NSS Module Loaded in Apache - -Next, the following edits should be made in `/etc/httpd/conf.d/nss.conf` configuration file: - -1. Indicate NSS database directory. You can use the default directory or create a new one. In this tutorial we will use the default: - - NSSCertificateDatabase /etc/httpd/alias - -2. Avoid manual passphrase entry on each system start by saving the password to the database directory in /etc/httpd/nss-db-password.conf: - - NSSPassPhraseDialog file:/etc/httpd/nss-db-password.conf - -Where /etc/httpd/nss-db-password.conf contains ONLY the following line and mypassword is the password that you will set later for the NSS database: - - internal:mypassword - -In addition, its permissions and ownership should be set to 0640 and root:apache, respectively: - - # chmod 640 /etc/httpd/nss-db-password.conf - # chgrp apache /etc/httpd/nss-db-password.conf - -3. Red Hat recommends disabling SSL and all versions of TLS previous to TLSv1.0 due to the POODLE SSLv3 vulnerability (more information [here][2]). - -Make sure that every instance of the NSSProtocol directive reads as follows (you are likely to find only one if you are not hosting other virtual hosts): - - NSSProtocol TLSv1.0,TLSv1.1 - -4. Apache will refuse to restart as this is a self-signed certificate and will not recognize the issuer as valid. For this reason, in this particular case you will have to add: - - NSSEnforceValidCerts off - -5. 
Though not strictly required, it is important to set a password for the NSS database: - - # certutil -W -d /etc/httpd/alias - -![Set Password for NSS Database](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Password-for-NSS-Database.png) - -Set Password for NSS Database - -### Creating a Apache SSL Self-Signed Certificate ### - -Next, we will create a self-signed certificate that will identify the server to our clients (please note that this method is not the best option for production environments; for such use you may want to consider buying a certificate verified by a 3rd trusted certificate authority, such as DigiCert). - -To create a new NSS-compliant certificate for box1 which will be valid for 365 days, we will use the genkey command. When this process completes: - - # genkey --nss --days 365 box1 - -Choose Next: - -![Create Apache SSL Key](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Apache-SSL-Key.png) - -Create Apache SSL Key - -You can leave the default choice for the key size (2048), then choose Next again: - -![Select Apache SSL Key Size](http://www.tecmint.com/wp-content/uploads/2015/09/Select-Apache-SSL-Key-Size.png) - -Select Apache SSL Key Size - -Wait while the system generates random bits: - -![Generating Random Key Bits](http://www.tecmint.com/wp-content/uploads/2015/09/Generating-Random-Bits.png) - -Generating Random Key Bits - -To speed up the process, you will be prompted to enter random text in your console, as shown in the following screencast. Please note how the progress bar stops when no input from the keyboard is received. Then, you will be asked to: - -1. Whether to send the Certificate Sign Request (CSR) to a Certificate Authority (CA): Choose No, as this is a self-signed certificate. - -2. to enter the information for the certificate. 
- -注:youtube 视频 - - -Finally, you will be prompted to enter the password to the NSS certificate that you set earlier: - - # genkey --nss --days 365 box1 - -![Apache NSS Certificate Password](http://www.tecmint.com/wp-content/uploads/2015/09/Apache-NSS-Password.png) - -Apache NSS Certificate Password - -At anytime, you can list the existing certificates with: - - # certutil –L –d /etc/httpd/alias - -![List Apache NSS Certificates](http://www.tecmint.com/wp-content/uploads/2015/09/List-Apache-Certificates.png) - -List Apache NSS Certificates - -And delete them by name (only if strictly required, replacing box1 by your own certificate name) with: - - # certutil -d /etc/httpd/alias -D -n "box1" - -if you need to.c - -### Testing Apache SSL HTTPS Connections ### - -Finally, it’s time to test the secure connection to our web server. When you point your browser to https://, you will get the well-known message “This connection is untrusted“: - -![Check Apache SSL Connection](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Connection.png) - -Check Apache SSL Connection - -In the above situation, you can click on Add Exception and then Confirm Security Exception – but don’t do it yet. Let’s first examine the certificate to see if its details match the information that we entered earlier (as shown in the screencast). 
- -To do so, click on View… –> Details tab above and you should see this when you select Issuer from the list: - -![Confirm Apache SSL Certificate Details](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Certificate-Details.png) - -Confirm Apache SSL Certificate Details - -Now you can go ahead, confirm the exception (either for this time or permanently) and you will be taken to your web server’s DocumentRoot directory via https, where you can inspect the connection details using your browser’s builtin developer tools: - -In Firefox you can launch it by right clicking on the screen, and choosing Inspect Element from the context menu, specifically through the Network tab: - -![Inspect Apache HTTPS Connection](http://www.tecmint.com/wp-content/uploads/2015/09/Inspect-Apache-HTTPS-Connection.png) - -Inspect Apache HTTPS Connection - -Please note that this is the same information as displayed before, which was entered during the certificate previously. There’s also a way to test the connection using command line tools: - -On the left (testing SSLv3): - - # openssl s_client -connect localhost:443 -ssl3 - -On the right (testing TLS): - - # openssl s_client -connect localhost:443 -tls1 - -![Testing Apache SSL and TLS Connections](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Apache-SSL-and-TLS.png) - -Testing Apache SSL and TLS Connections - -Refer to the screenshot above for more details. - -### Summary ### - -As I’m sure you already know, the presence of HTTPS inspires trust in visitors who may have to enter personal information in your site (from user names and passwords all the way to financial / bank account information). 
In that case, you will want to get a certificate signed by a trusted Certificate Authority as we explained earlier (the steps to set it up are identical with the exception that you will need to send the CSR to a CA, and you will get the signed certificate back); otherwise, a self-signed certificate as the one used in this tutorial will do. - -For more details on the use of NSS, please refer to the online help about [mod-nss][3]. And don’t hesitate to let us know if you have any questions or comments. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-apache-https-self-signed-certificate-using-nss/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/install-lamp-in-centos-7/ -[1]:http://www.tecmint.com/author/gacanepa/ -[2]:https://access.redhat.com/articles/1232123 -[3]:https://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html \ No newline at end of file diff --git a/translated/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md b/translated/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md new file mode 100644 index 0000000000..5ff1f9fe65 --- /dev/null +++ b/translated/tech/RHCE/Part 8 - RHCE Series--Implementing HTTPS through TLS using Network Security Service NSS for Apache.md @@ -0,0 +1,210 @@ +RHCE 系列: 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS +================================================================================ +如果你是一个负责维护和保护 web 服务器安全的系统管理员,就必须尽最大的努力,确保经服务器处理和传输的数据在任何时候都受到保护。 +![使用 SSL/TLS 设置 Apache HTTPS](http://www.tecmint.com/wp-content/uploads/2015/09/Setup-Apache-SSL-TLS-Server.png) + +RHCE 系列:第八部分 - 使用网络安全服务(NSS)为 Apache 通过 TLS 实现 HTTPS + +为了在客户端和服务器之间提供更安全的连接,作为 HTTP 和 
SSL(安全套接层)或者最近称为 TLS(传输层安全)的组合,产生了 HTTPS 协议。
+
+由于一些严重的安全漏洞,SSL 已经被更健壮的 TLS 替代。由于这个原因,在这篇文章中我们会解析如何通过 TLS 实现你 web 服务器和客户端之间的安全连接。
+
+这里假设你已经安装并配置好了 Apache web 服务器。如果还没有,在进入下一步之前请阅读下面站点中的文章。
+
+- [在 RHEL/CentOS 7 上安装 LAMP(Linux,MySQL/MariaDB,Apache 和 PHP)][1]
+
+### 安装 OpenSSL 和一些工具包 ###
+
+首先,确保正在运行 Apache 并且允许 http 和 https 通过防火墙:
+
+    # systemctl start httpd
+    # systemctl enable httpd
+    # firewall-cmd --permanent --add-service=http
+    # firewall-cmd --permanent --add-service=https
+
+然后安装一些必需的软件包:
+
+    # yum update && yum install openssl mod_nss crypto-utils
+
+**重要**:请注意如果你想使用 OpenSSL 库而不是 NSS(网络安全服务)实现 TLS,你可以在上面的命令中用 mod\_ssl 替换 mod\_nss(使用哪一个取决于你,但在这篇文章中由于更加健壮我们会使用 NSS;例如,它支持最新的加密标准,比如 PKCS #11)。
+
+如果你使用 mod\_nss,首先要卸载 mod\_ssl,反之亦然。
+
+    # yum remove mod_ssl
+
+### 配置 NSS(网络安全服务)###
+
+安装完 mod\_nss 之后,会创建默认的配置文件 /etc/httpd/conf.d/nss.conf。你应该确保所有 Listen 和 VirtualHost 指令都指向 443 号端口(HTTPS 默认端口):
+
+nss.conf – 配置文件
+
+----------
+
+    Listen 443
+    VirtualHost _default_:443
+
+然后重启 Apache 并检查是否加载了 mod\_nss 模块:
+
+    # apachectl restart
+    # httpd -M | grep nss
+
+![在 Apache 中检查 mod_nss 模块](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Mod_NSS-Module-in-Apache.png)
+
+检查 Apache 是否加载 mod\_nss 模块
+
+下一步,在 `/etc/httpd/conf.d/nss.conf` 配置文件中做以下更改:
+
+1. 指定 NSS 数据库目录。你可以使用默认的目录或者新建一个。本文中我们使用默认的:
+
+    NSSCertificateDatabase /etc/httpd/alias
+
+2. 通过保存密码到数据库目录中的 /etc/httpd/nss-db-password.conf 文件避免每次系统启动时要手动输入密码:
+
+    NSSPassPhraseDialog file:/etc/httpd/nss-db-password.conf
+
+其中 /etc/httpd/nss-db-password.conf 只包含以下一行,其中 mypassword 是后面你为 NSS 数据库设置的密码:
+
+    internal:mypassword
+
+另外,要设置该文件的权限和属主为 0640 和 root:apache:
+
+    # chmod 640 /etc/httpd/nss-db-password.conf
+    # chgrp apache /etc/httpd/nss-db-password.conf
+
+3. 由于 POODLE SSLv3 漏洞,红帽建议停用 SSL 和 TLSv1.0 之前所有版本的 TLS(更多信息可以查看[这里][2])。
+
+确保 NSSProtocol 指令的每个实例都类似下面一样(如果你没有托管其它虚拟主机,很可能只有一条):
+
+    NSSProtocol TLSv1.0,TLSv1.1
+
+4. 由于这是一个自签名证书,不会被识别为有效的发行者,因此 Apache 会拒绝重启。由于这个原因,对于这种特殊情况我们还需要添加:
+
+    NSSEnforceValidCerts off
+
+5. 
虽然并不是严格要求,为 NSS 数据库设置一个密码同样很重要:
+
+    # certutil -W -d /etc/httpd/alias
+
+![为 NSS 数据库设置密码](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Password-for-NSS-Database.png)
+
+为 NSS 数据库设置密码
+
+### 创建一个 Apache SSL 自签名证书 ###
+
+下一步,我们会创建一个自签名证书,用于让客户端识别我们的服务器(请注意这个方法对于生产环境并不是最好的选择;对于生产环境你应该考虑购买由第三方可信证书颁发机构签发的证书,例如 DigiCert)。
+
+我们用 genkey 命令为 box1 创建有效期为 365 天的 NSS 兼容证书。完成这一步后:
+
+    # genkey --nss --days 365 box1
+
+选择 Next:
+
+![创建 Apache SSL 密钥](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Apache-SSL-Key.png)
+
+创建 Apache SSL 密钥
+
+你可以使用默认的密钥大小(2048),然后再次选择 Next:
+
+![选择 Apache SSL 密钥大小](http://www.tecmint.com/wp-content/uploads/2015/09/Select-Apache-SSL-Key-Size.png)
+
+选择 Apache SSL 密钥大小
+
+等待系统生成随机比特:
+
+![生成随机密钥比特](http://www.tecmint.com/wp-content/uploads/2015/09/Generating-Random-Bits.png)
+
+生成随机密钥比特
+
+为了加快速度,会提示你在控制台输入随机字符,正如下面的截图所示。请注意当没有从键盘接收到输入时进度条是如何停止的。然后,会让你选择:
+
+1. 是否发送证书签名请求(CSR)到一个证书颁发机构(CA):选择 No,因为这是一个自签名证书。
+
+2. 为证书输入信息。
+
+注:youtube 视频
+
+
+最后,会提示你输入之前为 NSS 数据库设置的密码:
+
+    # genkey --nss --days 365 box1
+
+![Apache NSS 证书密码](http://www.tecmint.com/wp-content/uploads/2015/09/Apache-NSS-Password.png)
+
+Apache NSS 证书密码
+
+在任何时候你都可以用以下命令列出现有的证书:
+
+    # certutil -L -d /etc/httpd/alias
+
+![列出 Apache NSS 证书](http://www.tecmint.com/wp-content/uploads/2015/09/List-Apache-Certificates.png)
+
+列出 Apache NSS 证书
+
+然后,如果需要的话,可以通过名字删除证书(请用你自己的证书名称替换 box1):
+
+    # certutil -d /etc/httpd/alias -D -n "box1"
+
+接下来,我们继续:
+
+### 测试 Apache SSL HTTPS 连接 ###
+
+最后,是时候测试到我们服务器的安全连接了。当你用浏览器打开 https://,你会看到著名的信息 “This connection is untrusted”:
+
+![检查 Apache SSL 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Connection.png)
+
+检查 Apache SSL 连接
+
+在上面的情况中,你可以点击添加例外(Add Exception) 然后确认安全例外(Confirm Security Exception) - 但先不要这么做。让我们首先检查证书,看它的信息是否和我们之前输入的相符(如截图所示)。
+
+要做到这点,点击上面的视图(View...)-> 详情(Details)选项卡,当你从列表中选择发行者时,你应该看到这个:
+
+![确认 Apache SSL 证书详情](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Certificate-Details.png)
+
+确认 
Apache SSL 证书详情
+
+现在,你可以继续确认例外(仅限此次或永久),然后会通过 https 把你带到你 web 服务器的 DocumentRoot 目录,在这里你可以使用你浏览器自带的开发者工具检查连接详情:
+
+在火狐浏览器中,你可以通过在屏幕中右击,然后从上下文菜单中选择检查元素(Inspect Element)启动开发者工具,尤其要注意网络选项卡:
+
+![检查 Apache HTTPS 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Inspect-Apache-HTTPS-Connection.png)
+
+检查 Apache HTTPS 连接
+
+请注意这和之前显示的在验证过程中输入的信息一致。还有一种方式是使用命令行工具测试连接:
+
+左边(测试 SSLv3):
+
+    # openssl s_client -connect localhost:443 -ssl3
+
+右边(测试 TLS):
+
+    # openssl s_client -connect localhost:443 -tls1
+
+![测试 Apache SSL 和 TLS 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Apache-SSL-and-TLS.png)
+
+测试 Apache SSL 和 TLS 连接
+
+参考上面的截图了解更详细信息。
+
+### 总结 ###
+
+我确信你已经知道,使用 HTTPS 会增加那些会在你站点中输入个人信息的访客的信任(从用户名和密码到任何商业/银行账户信息)。
+
+在那种情况下,你会希望获得由可信证书颁发机构签名的证书,正如我们之前解释的(设置的步骤相同,只是你需要将 CSR 发送到 CA,然后取回签名后的证书);另外的情况,就是像我们的例子中一样使用自签名证书。
+
+要获取更多关于使用 NSS 的详情,可以参考关于 [mod-nss][3] 的在线帮助。如果你有任何疑问或评论,请告诉我们。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-apache-https-self-signed-certificate-using-nss/
+
+作者:[Gabriel Cánepa][a]
+译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/install-lamp-in-centos-7/
+[1]:http://www.tecmint.com/author/gacanepa/
+[2]:https://access.redhat.com/articles/1232123
+[3]:https://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html
\ No newline at end of file
From d1fa680201f959ed80aacc1ac03751460cfb6e45 Mon Sep 17 00:00:00 2001
From: icybreaker
Date: Sun, 20 Sep 2015 11:23:08 +0800
Subject: [PATCH 565/697] translated by icybreaker

---
 .../talk/20150901 Is Linux Right For You.md | 64 -------------------
 .../talk/20150901 Is Linux Right For You.md | 63 ++++++++++++++++++
 2 files changed, 63 insertions(+), 64 deletions(-)
 delete mode 100644 sources/talk/20150901 Is Linux Right For You.md
 create mode 100644 
translated/talk/20150901 Is Linux Right For You.md diff --git a/sources/talk/20150901 Is Linux Right For You.md b/sources/talk/20150901 Is Linux Right For You.md deleted file mode 100644 index ddbffa8481..0000000000 --- a/sources/talk/20150901 Is Linux Right For You.md +++ /dev/null @@ -1,64 +0,0 @@ -icybreaker translating... -Is Linux Right For You? -================================================================================ -> Not everyone should opt for Linux -- for many users, remaining with Windows or OSX is the better choice. - -I enjoy using Linux on the desktop. Not because of software politics or because I despise other operating systems. I simply like Linux because it just works. - -It's been my experience that not everyone is cut out for the Linux lifestyle. In this article, I'll help you run through the pros and cons of making the switch to Linux so you can determine if switching is right for you. - -### When to make the switch ### - -Switching to Linux makes sense when there is a decisive reason to do so. The same can be said about moving from Windows to OS X or vice versa. In order to have success with switching, you must be able to identify your reason for jumping ship in the first place. - -For some people, the reason for switching is frustration with their current platform. Maybe the latest upgrade left them with a lousy experience and they're ready to chart new horizons. In other instances, perhaps it's simply a matter of curiosity. Whatever the motivation, you must have a good reason for switching operating systems. If you're pushing yourself in this direction without a good reason, then no one wins. - -However, there are exceptions to every rule. And if you're really interested in trying Linux on the desktop, then maybe coming to terms with a workable compromise is the way to go. 
- -### Starting off slow ### - -After trying Linux for the first time, I've seen people blast their Windows installation to bits because they had a good experience with Ubuntu on a flash drive for 20 minutes. Folks, this isn't a test. Instead I'd suggest the following: - -- Run the [Linux distro in a virtual machine][1] for a week. This means you are committing to running that distro for all browser work, email and other tasks you might otherwise do on that machine. -- If running a VM for a week is too resource intensive, try doing the same with a USB drive running Linux that offers [some persistent storage][2]. This will allow you to leave your main OS alone and intact. At the same time, you'll still be able to "live inside" of your Linux distribution for a week. -- If you find that everything is successful after a week of running Linux, the next step is to examine how many times you booted into Windows that week. If only occasionally, then the next step is to look into [dual-booting Windows][3] and Linux. For those of you that only found themselves using their Linux distro, it might be worth considering making the switch full time. -- Before you hose your Windows partition completely, it might make more sense to purchase a second hard drive to install Linux onto instead. This allows you to dual-boot, but to do so with ample hard drive space. It also makes Windows available to you if something should come up. - -### What do you gain adopting Linux? ### - -So what does one gain by switching to Linux? Generally it comes down to personal freedom for most people. With Linux, if something isn't to your liking, you're free to change it. Using Linux also saves users oodles of money in avoiding hardware upgrades and unnecessary software expenses. Additionally, you're not burdened with tracking down lost license keys for software. And if you dislike the direction a particular distribution is headed, you can switch to another distribution with minimal hassle. 
- -The sheer volume of desktop choice on the Linux desktop is staggering. This level of choice might even seem overwhelming to the newcomer. But if you find a distro base (Debian, Fedora, Arch, etc) that you like, the hard work is already done. All you need to do now is find a variation of the distro and the desktop environment you prefer. - -Now one of the most common complaints I hear is that there isn't much in the way of software for Linux. However, this isn't accurate at all. While other operating systems may have more of it, today's Linux desktop has applications to do just about anything you can think of. Video editing (home and pro-level), photography, office management, remote access, music (listening and creation), plus much, much more. - -### What you lose adopting Linux? ### - -As much as I enjoy using Linux, my wife's home office relies on OS X. She's perfectly content using Linux for some tasks, however she relies on OS X for specific software not available for Linux. This is a common problem that many people face when first looking at making the switch. You must decide whether or not you're going to be losing out on critical software if you make the switch. - -Sometimes the issue is because the software has content locked down with it. In other cases, it's a workflow and functionality that was found with the legacy applications and not with the software available for Linux. I myself have never experienced this type of challenge, but I know those who have. Many of the software titles available for Linux are also available for other operating systems. So if there is a concern about such things, I encourage you to try out comparable apps on your native OS first. - -Another thing you might lose by switching to Linux is the luxury of local support when you need it. People scoff at this, but I know of countless instances where a newcomer to Linux was dismayed to find their only recourse for solving Linux challenges was from strangers on the Web. 
This is especially problematic if their only PC is the one having issues. Windows and OS X users are spoiled in that there are endless support techs in cities all over the world that support their platform(s). - -### How to proceed from here ### - -Perhaps the single biggest piece of advice to remember is always have a fallback plan. Remember, once you wipe that copy of Windows 10 from your hard drive, you may find yourself spending money to get it reinstalled. This is especially true for those of you who upgrade from other Windows releases. Accepting this, persistent flash drives with Linux or dual-booting Windows and Linux is always a preferable way forward for newcomers. Odds are that you may be just fine and take to Linux like a fish to water. But having that fallback plan in place just means you'll sleep better at night. - -If instead you've been relying on a dual-boot installation for weeks and feel ready to take the plunge, then by all means do it. Wipe your drive and start off with a clean installation of your favorite Linux distribution. I've been a full time Linux enthusiast for years and I can tell you for certain, it's a great feeling. How long? Let's just say my first Linux experience was with early Red Hat. I finally installed a dedicated installation on my laptop by 2003. - -Existing Linux enthusiasts, where did you first get started? Was your switch an exciting one or was it filled with angst? Hit the Comments and share your experiences. 
- --------------------------------------------------------------------------------- - -via: http://www.datamation.com/open-source/is-linux-right-for-you.html - -作者:[Matt Hartley][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.datamation.com/author/Matt-Hartley-3080.html -[1]:http://www.psychocats.net/ubuntu/virtualbox -[2]:http://www.howtogeek.com/howto/14912/create-a-persistent-bootable-ubuntu-usb-flash-drive/ -[3]:http://www.linuxandubuntu.com/home/dual-boot-ubuntu-15-04-14-10-and-windows-10-8-1-8-step-by-step-tutorial-with-screenshots diff --git a/translated/talk/20150901 Is Linux Right For You.md b/translated/talk/20150901 Is Linux Right For You.md new file mode 100644 index 0000000000..60b56f6e25 --- /dev/null +++ b/translated/talk/20150901 Is Linux Right For You.md @@ -0,0 +1,63 @@ +Linux系统是否适合于您? +================================================================================ +> 并非人人都适合使用Linux--对许多用户来说,Windows或OSX会是更好的选择。 + +我喜欢使用Linux系统,并不是因为软件的政治性质,也不是不喜欢其他操作系统。我喜欢Linux系统因为它能满足我的需求并且确实适合使用。 + +我的经验是,并非人人都适合切换至“Linux的生活方式”。本文将帮助您通过分析使用Linux系统的利弊来供您自行判断使用Linux是否真正适合您。 + +### 什么时候更换系统? 
### 
+当有充分的理由时,将系统切换到Linux系统是很有意义的。这对Windows用户将系统更换到OSX或类似的情况都同样适用。为让您的系统转变成功,您必须首先确定为什么要做这种转换。
+
+对某些人来说,更换系统通常意味着他们不满于当前的系统操作平台。也许是最新的升级给了他们糟糕的用户体验,他们已准备好更换到别的系统,也许仅仅是因为对某个系统好奇。不管动机是什么,必须要有充分的理由支撑您做出更换操作系统的决定。如果没有一个充足的原因让您这样做,往往不会成功。
+
+然而事事都有例外。如果您确实对Linux非常感兴趣,或许可以选择一种折衷的方式。
+
+### 放慢起步的脚步 ###
+
+第一次尝试运行Linux系统后,我看到就有人开始批判Windows安装过程的费时,完全是因为他们20分钟就用闪存安装好Ubuntu的良好体验。但是伙伴们,这并不只是一次测验。相反,我有如下建议:
+
+- 花一周的时间尝试在[虚拟机上运行Linux系统][1]。这意味着您将在该系统上完成所有的浏览器工作、邮箱操作和其它想要完成的任务。
+- 如果运行虚拟机资源消耗太大,您可以尝试通过提供[持久存储][2]的USB驱动器来运行Linux,您的主操作系统将不受任何影响。与此同时,您仍可以运行Linux系统。
+- 运行Linux系统一周后,如果一切进展顺利,下一步您可以计算一下这周内登入Windows的次数。如果只是偶尔登录Windows系统,下一步就可以尝试运行Windows和Linux[双系统][3]。对那些只运行Linux系统的用户,可以考虑尝试将系统真正更换为Linux系统。
+- 在完全清除Windows分区之前,购买第二块硬盘来安装Linux系统或许更为明智。这样只要有充足的硬盘空间,您就可以使用双系统。如果必须要启动Windows系统做些事情,Windows系统也是可以运行的。
+
+### 使用Linux系统的好处是什么? ###
+
+将系统更换到Linux有什么好处呢?一般而言,这种好处对大多数人来说可以归结到释放个性化自由。在使用Linux系统的时候,如果您不喜欢某些设置,可以自行更改它们。同时使用Linux可以为用户节省大量的硬件升级开支和不必要的软件开支。另外,您不需再费力找寻已丢失的软件许可证密钥,而且如果您不喜欢某个发行版的发展方向,大可轻松地更换到别的发行版。
+
+Linux桌面可供选择的发行版之多,着实令人吃惊,如此多的选择甚至可能让新手无所适从。但是如果您找到了一个喜欢的发行版分支(Debian,Fedora,Arch等),最困难的工作其实已经完成了,您需要做的就是找到该分支的一个变体,并选择出您最喜欢的桌面环境。
+
+如今我听到的最常见的抱怨之一是用户发现适用于Linux的软件并不多。然而,这并不是事实。尽管别的操作系统可能会提供更多软件,但是如今的Linux也已经提供了足够多的应用程序满足您的各种需求,包括视频剪辑(家庭版和专业版),摄影,办公管理软件,远程访问,音乐软件,还有很多别的各类软件。
+
+### 使用Linux系统您会失去些什么? ###
+
+虽然我喜欢使用Linux,但我妻子的家庭办公系统依然依赖于OS X。对于用Linux系统完成一些特定的任务她心满意足,但是她仍依赖OS X上一些Linux没有提供的特定软件。这是许多想要更换系统的用户会遇到的一个常见的问题。如果要更换系统,您需要考虑是否愿意失去一些关键的软件工具。
+
+有时出现这种问题是因为软件锁定了内容。别的情况下,是因为传统应用程序的工作流和功能在Linux系统的软件中找不到对应。我自己并没有遇到过这类问题,但是我知道确实存在这些问题。许多Linux上的软件在其他操作系统上也都可以用。所以如果担心这类软件兼容问题,建议您先尝试在已有的系统上操作一下几款类似的应用程序。
+
+更换成Linux系统后,另一件您可能会失去的是本地系统支持服务。人们通常对此不以为然,但我知道,无数的新手在使用Linux时沮丧地发现,解决Linux上各种问题的唯一资源就是来自网络另一端的陌生人提供的帮助。如果他们仅有的一台PC就是出问题的那台,情况将特别麻烦。Windows和OS X的用户已经习惯各城市遍布了支持他们操作系统的各项技术服务。
+
+### 如何开启新旅程? 
### + +这里建议大家要记住最重要的就是经常做备份。如果您将Windows 10从硬盘中擦除,您会发现重新安装它又会花费金钱。对那些从其他Windows发布版本升级的用户来说尤其会遇到这种情况。接受这个建议,那就是对新手来说使用闪存安装Linux或使用Windows和Linux双系统都是更值得提倡的做法。您也许会如鱼得水般使用Linux系统,但是有了一份备份计划,您将高枕无忧。 + +相反,如果数周以来您一直依赖于使用双操作系统,但是已经准备好冒险去尝试一下单操作系统,那么就去做吧。格式化您的驱动器,重新安装您喜爱的Linux distribution。数年来我一直都是"全职"Linux使用爱好者,这里可以确定地告诉您,使用Linux系统感觉棒极了。这种感觉会持续多久?我第一次的Linux系统使用经验还是来自早期的Red Hat系统,2003年我已经决定在自己的笔记本上安装专用的Linux系统并一直使用至今。 + +Linux爱好者们,你们什么时候开始使用Linux的?您在最初更换成Linux系统时是兴奋还是焦虑呢?欢迎点击评论分享你们的经验。 + +-------------------------------------------------------------------------------- + +via: http://www.datamation.com/open-source/is-linux-right-for-you.html + +作者:[Matt Hartley][a] +译者:[icybreaker](https://github.com/icybreaker) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.datamation.com/author/Matt-Hartley-3080.html +[1]:http://www.psychocats.net/ubuntu/virtualbox +[2]:http://www.howtogeek.com/howto/14912/create-a-persistent-bootable-ubuntu-usb-flash-drive/ +[3]:http://www.linuxandubuntu.com/home/dual-boot-ubuntu-15-04-14-10-and-windows-10-8-1-8-step-by-step-tutorial-with-screenshots From 30f8d413f0e026f4dedc10d37ca415e23676e556 Mon Sep 17 00:00:00 2001 From: alim0x Date: Sun, 20 Sep 2015 18:39:33 +0800 Subject: [PATCH 566/697] [complete]18 - The history of Android --- .../18 - The history of Android.md | 83 ------------------- .../18 - The history of Android.md | 83 +++++++++++++++++++ 2 files changed, 83 insertions(+), 83 deletions(-) delete mode 100644 sources/talk/The history of Android/18 - The history of Android.md create mode 100644 translated/talk/The history of Android/18 - The history of Android.md diff --git a/sources/talk/The history of Android/18 - The history of Android.md b/sources/talk/The history of Android/18 - The history of Android.md deleted file mode 100644 index 3af4359680..0000000000 --- a/sources/talk/The history of Android/18 - The history of Android.md +++ 
/dev/null @@ -1,83 +0,0 @@ -安卓编年史 -================================================================================ -![安卓市场的新设计试水“卡片式”界面,这将成为谷歌的主要风格。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/play-store.png) -安卓市场的新设计试水“卡片式”界面,这将成为谷歌的主要风格。 -Ron Amadeo 供图 - -安卓推向市场已经有两年半时间了,安卓市场放出了它的第四版设计。这个新设计十分重要,因为它已经很接近谷歌的“卡片式”界面了。通过在小方块中显示应用或其他内容,谷歌可以使其设计在不同尺寸屏幕下无缝过渡而不受影响。内容可以像一个相册应用里的照片一样显示——给布局渲染填充一个内容块列表,加上屏幕包装,就完成了。更大的屏幕一次可以看到更多的内容块,小点的屏幕一次看到的内容就少。内容用了不一样的方式显示,谷歌还在右边新增了一个“分类”板块,顶部还有个巨大的热门应用滚动显示。 - -虽然设计上为更容易配置界面准备好准备好了,但功能上还没有。最初发布的市场版本锁定为横屏模式,而且还是蜂巢独占的。 - -![应用详情页和“我的应用”界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/device-2014-02-12-190002.png) -应用详情页和“我的应用”界面。 -Ron Amadeo 供图 - -新的市场不仅出售应用,还加入了书籍和电影租借。谷歌从2010年开始出售图书;之前只通过网站出售。新的市场将谷歌所有的内容销售聚合到了一处,进一步向苹果 iTunes 的主宰展开较量。虽然在“安卓市场”出售这些东西有点品牌混乱,因为大部分内容都不依赖于安卓才能使用。 - -![浏览器看起来非常像 Chrome,联系人使用了双面板界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/browsercontactst.png) -浏览器看起来非常像 Chrome,联系人使用了双面板界面。 -Ron Amadeo 供图 - -The new Browser added an honest-to-goodness tabs strip at the top of the interface. While this browser wasn't Chrome, it aped a lot of Chrome's design and features. Besides the pioneering tabs-on-top interface, it added Incognito tabs, which kept no history or autocomplete records. There was also an option to have a Chrome-style new tab page consisting of thumbnails of your most-viewed webpages. - -The new Browser even synced with Chrome. After signing in to the browser, it would download your Chrome bookmarks and automatically sign in to Google Web pages with your account. Bookmarking a page was as easy as tapping on the star icon in the address bar. Just like Google Maps, the browser dumped the zoom buttons and went with all gesture controls. - -The contacts app was finally removed from the phone app and broken out into a standalone app. The previous contacts/dialer hybrid was far too phone-centric for how people use a modern smartphone. 
Contacts housed information for e-mails, IM, texting, addresses, birthdays, and social networks, so tying it to the phone app makes just as much sense as trying it to Google Maps. With the telephony requirements out of the way, contacts could be simplified to a tab-less list of people. Honeycomb went with a dual pane view showing the full contact list on the left and contacts on the right. This again made use of a Fragments API; a hypothetical phone version of this app could show each panel as a single screen. - -The Honeycomb version of Contacts was the first version to have a quick scroll feature. When grabbing the left scroll bar, you could quickly scroll up and down, and a letter preview showed your current spot in the list. - -![The new YouTube app looked like something out of the Matrix.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/youtubes.png) -The new YouTube app looked like something out of the Matrix. -Photo by Ron Amadeo - -YouTube thankfully dumped the "unique" design Google came up with for 2.3 and gave the video service a cohesive design that looked like it belonged in Android. The main screen was a horizontally scrolling curved wall of video thumbnails that showed a most popular or (when signed in) personalized selection of videos. While Google never brought this design to phones, it could be considered an easily reconfigurable card interface. The action bar shined here as a reconfigurable toolbar. When not signed it, the action bar was filled with a search bar. When you were signed in, search shrank down to a button, and tabs for "Home," "Browse," and "Your Channel" were shown. - -![Honeycomb really liked to drive home that it was a computer interface with blue scaffolding. Movie Studio completes the Tron look with an orange theme.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/other2.png) -Honeycomb really liked to drive home that it was a computer interface with blue scaffolding. 
Movie Studio completes the Tron look with an orange theme. -Photo by Ron Amadeo - -The lone new app in Honeycomb was "Movie Studio," which was not a self-explanatory app and arrived with no explanations or instructions. As far as we could tell, you could import video clips, cut them up, and add text and scene transitions. Editing video—one of the most time consuming, difficult, and processor-intensive things you can do on a computer—on a tablet felt just a little too ambitious, and Google would completely remove this app in later versions. Our favorite part of Movie Studio was that it really completed the Tron theme. While the rest of the OS used blue highlights, this was all orange. (Movie Studio is an evil program!) - -![Widgets!](http://cdn.arstechnica.net/wp-content/uploads/2014/02/device-2014-02-12-202224.png) -Widgets! -Photo by Ron Amadeo - -Honeycomb brought a new widget framework that allowed for scrolling widgets, and the Gmail, Email, and Calendar widgets were upgraded to support it. YouTube and Books used a new widget that auto-scrolled through cards of content. By flicking up or down on the widget, you could scroll through the cards. We're not sure what the point of being constantly reminded of your book collection was, but it's there if you want it. While all of these widgets worked great on a 10-inch screen, Google never redesigned them for phones, making them practically useless on Android's most popular form factor. All the widgets had massive identifying headers and usually took up half the screen to show only a few items. - -![The scrollable Recent Apps and resizable widgets in Android 3.1.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/31new.jpg) -The scrollable Recent Apps and resizable widgets in Android 3.1. -Photo by Ron Amadeo - -Later versions of Honeycomb would fix many of the early problems 3.0 had. Android 3.1 was released three months after the first version of Honeycomb, and it brought several improvements. 
Resizable widgets were one of the biggest features added. After long pressing on a widget, a blue outline with grabbable handles would pop up around it, and dragging the handles around would resize the widget. The Recent Apps panel could now scroll vertically and held many more apps. The only feature missing from it at this point was the ability to swipe away apps. - -Today, an 0.1 upgrade is a major release, but in Honeycomb, point releases were considerably smaller. Besides the few UI tweaks, 3.1 added support for gamepads, keyboards, mice, and other input devices over USB and Bluetooth. It also offered a few more developer APIs. - -![Android 3.2's compatibility zoom and a typical stretched-out app on an Android tablet.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/device-2014-02-14-131132.jpg) -Android 3.2's compatibility zoom and a typical stretched-out app on an Android tablet. -Photo by Ron Amadeo - -Android 3.2 launched two months after 3.1, adding support for smaller sized tablets in the seven- to eight-inch range. It finally enabled SD card support, which the Xoom carried like a vestigial limb for the first five months of its life. - -Honeycomb was rushed out the door in order to be an ecosystem builder. No one will want an Android tablet if the tablet-specific apps aren't there, and Google knew it needed to get something in the hands of developers ASAP. At this early stage of Android's tablet ecosystem, the apps just weren't there. It was the biggest problem people had with the Xoom. - -3.2 added "Compatibility Zoom," which gave users a new option of stretching apps to the screen (as shown in the right picture) or zooming the normal app layout to fit the screen. Neither option was ideal, and without the app ecosystem to support it, Honeycomb devices sold pretty poorly. Google's tablet moves would eventually pay off though. Today, Android tablets have [taken the market share crown from iOS][1]. 
- ----------- - -![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg) - -[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work. - -[@RonAmadeo][t] - --------------------------------------------------------------------------------- - -via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/18/ - -译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:http://techcrunch.com/2014/03/03/gartner-195m-tablets-sold-in-2013-android-grabs-top-spot-from-ipad-with-62-share/ -[a]:http://arstechnica.com/author/ronamadeo -[t]:https://twitter.com/RonAmadeo diff --git a/translated/talk/The history of Android/18 - The history of Android.md b/translated/talk/The history of Android/18 - The history of Android.md new file mode 100644 index 0000000000..f4781cc621 --- /dev/null +++ b/translated/talk/The history of Android/18 - The history of Android.md @@ -0,0 +1,83 @@ +安卓编年史 +================================================================================ +![安卓市场的新设计试水“卡片式”界面,这将成为谷歌的主要风格。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/play-store.png) +安卓市场的新设计试水“卡片式”界面,这将成为谷歌的主要风格。 +Ron Amadeo 供图 + +安卓推向市场已经有两年半时间了,安卓市场放出了它的第四版设计。这个新设计十分重要,因为它已经很接近谷歌的“卡片式”界面了。通过在小方块中显示应用或其他内容,谷歌可以使其设计在不同尺寸屏幕下无缝过渡而不受影响。内容可以像一个相册应用里的照片一样显示——给布局渲染填充一个内容块列表,加上屏幕包装,就完成了。更大的屏幕一次可以看到更多的内容块,小点的屏幕一次看到的内容就少。内容用了不一样的方式显示,谷歌还在右边新增了一个“分类”板块,顶部还有个巨大的热门应用滚动显示。 + +虽然设计上为更容易配置界面准备好准备好了,但功能上还没有。最初发布的市场版本锁定为横屏模式,而且还是蜂巢独占的。 + +![应用详情页和“我的应用”界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/device-2014-02-12-190002.png) +应用详情页和“我的应用”界面。 +Ron Amadeo 供图 + +新的市场不仅出售应用,还加入了书籍和电影租借。谷歌从2010年开始出售图书;之前只通过网站出售。新的市场将谷歌所有的内容销售聚合到了一处,进一步向苹果 iTunes 
的主宰展开较量。虽然在“安卓市场”出售这些东西有点品牌混乱,因为大部分内容都不依赖于安卓才能使用。 + +![浏览器看起来非常像 Chrome,联系人使用了双面板界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/browsercontactst.png) +浏览器看起来非常像 Chrome,联系人使用了双面板界面。 +Ron Amadeo 供图 + +新浏览器界面顶部添加了标签页栏。尽管这个浏览器并不是 Chrome ,它模仿了许多 Chrome 的设计和特性。除了这个探索性的顶部标签页界面,浏览器还加入了隐身标签,在浏览网页时不保存历史记录和自动补全记录。它还有个选项可以让你拥有一个 Chrome 风格的新标签页,页面上包含你最经常访问的网页略缩图。 + +新浏览器甚至还能和 Chrome 同步。在浏览器登录后,它会下载你的 Chrome 书签并且自动登录你的谷歌账户。收藏一个页面只需点击地址栏的星形标志即可,和谷歌地图一样,浏览器抛弃了缩放按钮,完全改用手势控制。 + +联系人应用最终从电话应用中移除,并且独立为一个应用。之前的联系人/拨号混合式设计相对于人们使用现代智能手机的方式来说,过于以电话为中心了。联系人中存有电子邮件,IM,短信,地址,生日,以及社交网络等信息,所以将它们捆绑在电话应用里的意义和将它们放进谷歌地图里差不多。抛开了电话通讯功能,联系人能够简化成没有标签页的联系人列表。蜂巢采用了双面板视图,在左侧显示完整的联系人列表,右侧是联系人详情。应用利用了 Fragments API,通过它应用可以在同一屏显示多个面板界面。 + +蜂巢版本的联系人应用是第一个拥有快速滚动功能的版本。当按住左侧滚动条的时候,你可以快速上下拖动,应用会显示列表当前位置的首字母预览。 + +![新 Youtube 应用看起来像是来自黑客帝国。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/youtubes.png) +新 Youtube 应用看起来像是来自黑客帝国。 +Ron Amadeo 供图 + +谢天谢地 Youtube 终于抛弃了自安卓 2.3 以来的谷歌给予这个视频服务的“独特”设计,新界面设计与系统更加一体化。主界面是一个水平滚动的曲面墙,上面显示着最热门或者(登录之后)个人关注的视频。虽然谷歌从来没有将这个设计带到手机上,但它可以被认为是一个易于重新配置的卡片界面。操作栏在这里是个可配置的工具栏。没有登录时,操作栏由一个搜索栏填满。当你登录后,搜索缩小为一个按钮,“首页”,“浏览”和“你的频道”标签将会显示出来。 + +![蜂巢用一个蓝色框架的电脑界面来驱动主屏。电影工作室完全采用橙色电子风格主题。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/other2.png) +蜂巢用一个蓝色框架的电脑界面来驱动主屏。电影工作室完全采用橙色电子风格主题。 +Ron Amadeo 供图 + +蜂巢新增的应用“电影工作室”,这不是一个不言自明的应用,而且没有任何的解释或说明。就我们所知,你可以导入视频,剪切它们,添加文本和场景过渡。编辑视频——电脑上你可以做的最耗时,困难,以及处理器密集型任务之一——在平板上完成感觉有点野心过大了,谷歌在之后的版本里将其完全移除了。电影工作室里我们最喜欢的部分是它完全的电子风格主题。虽然系统的其它部分使用蓝色高亮,在这里是橙色的。(电影工作室是个邪恶的程序!) + +![小部件!](http://cdn.arstechnica.net/wp-content/uploads/2014/02/device-2014-02-12-202224.png) +小部件! 
+Ron Amadeo 供图
+
+蜂巢带来了新的部件框架,允许部件滚动,Gmail,Email 以及日历部件都升级了以支持该功能。Youtube 和书籍使用了新的部件,内容卡片可以自动滚动切换。在小部件上轻轻向上或向下滑动可以切换卡片。我们不确定你的书籍中哪些书会被显示出来,但如果你想要的话它就在那儿。尽管所有的这些小部件在10英寸屏幕上运行良好,谷歌从未将它们重新设计给手机,这让它们在安卓最流行的规格上几乎毫无用处。所有的小部件有个大块的标识标题栏,而且通常占据大半屏幕只显示很少的内容。
+
+![安卓3.1中可滚动的最近应用以及可自定义大小的小部件。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/31new.jpg)
+安卓3.1中可滚动的最近应用以及可自定义大小的小部件。
+Ron Amadeo 供图
+
+蜂巢后续的版本修复了3.0早期的一些问题。安卓3.1在蜂巢的第一个版本之后三个月放出,并带来了一些改进。小部件自定义大小是添加的最大特性之一。长按小部件之后,一个带有拖拽按钮的蓝色外框会显示出来,拖动按钮可以改变小部件尺寸。最近应用界面现在可以垂直滚动并且承载更多应用。这个版本唯一缺失的功能是滑动关闭应用。
+
+在今天,一个0.1版本的升级是个主要更新,但是在蜂巢,那只是个小更新。除了一些界面调整,3.1添加了对游戏手柄,键盘,鼠标以及其它USB和蓝牙输入设备的支持。它还提供了更多的开发者API。
+
+![安卓3.2的兼容性缩放和一个安卓平板上典型的展开视图应用。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/device-2014-02-14-131132.jpg)
+安卓3.2的兼容性缩放和一个安卓平板上典型的展开视图应用。
+Ron Amadeo 供图
+
+安卓3.2在3.1发布后两个月放出,添加了七到八英寸的小尺寸平板支持。3.2终于启用了SD卡支持,在此之前的五个月里,Xoom 的 SD 卡插槽一直像个退化的肢体一样闲置着。
+
+蜂巢匆匆问世是为了成为一个生态系统建设者。如果应用没有平板版本,没人会想要一个安卓平板的,所以谷歌知道需要尽快将东西送到开发者手中。在这个安卓平板生态的早期阶段,应用还没有到齐。这是拥有 Xoom 的人们所面临的最大的问题。
+
+3.2添加了“兼容缩放”,给了用户一个新选项,可以将应用拉伸适应屏幕(如右侧图片显示的那样)或缩放成正常的应用布局来适应屏幕。这些选项都不是很理想,没有应用生态来支持平板,蜂巢设备销售状况惨淡。但谷歌的平板决策最终还是会得到回报。今天,安卓平板已经[取代 iOS 占据了最大的市场份额][1]。
+
+----------
+
+![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
+
+[Ron Amadeo][a] / Ron是Ars Technica的评论编辑,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
+
+[@RonAmadeo][t]
+
+--------------------------------------------------------------------------------
+
+via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/18/
+
+译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[1]:http://techcrunch.com/2014/03/03/gartner-195m-tablets-sold-in-2013-android-grabs-top-spot-from-ipad-with-62-share/
+[a]:http://arstechnica.com/author/ronamadeo
+[t]:https://twitter.com/RonAmadeo
From 246a826b62bec223ff90ddbbe25865d41551677e Mon Sep 17 
00:00:00 2001 From: wxy Date: Sun, 20 Sep 2015 19:42:15 +0800 Subject: [PATCH 567/697] PUB:20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux @GOLinux --- ... Devanagari Support In Antergos And Arch Linux.md | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) rename {translated/tech => published}/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md (85%) diff --git a/translated/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md b/published/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md similarity index 85% rename from translated/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md rename to published/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md index 1bcc05a080..82063aae7a 100644 --- a/translated/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md +++ b/published/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md @@ -1,17 +1,19 @@ -为Antergos与Arch Linux添加印度语和梵文支持 +也许你需要在 Antergos 与 Arch Linux 中查看印度语和梵文? 
================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Indian-languages.jpg) -你们到目前或许知道,我最近一直在尝试体验[Antergos Linux][1]。在安装完[Antergos][2]后我所首先注意到的一些事情是在默认的Chromium浏览器中**没法正确显示印度语脚本**。 +你们到目前或许知道,我最近一直在尝试体验 [Antergos Linux][1]。在安装完[Antergos][2]后我所首先注意到的一些事情是在默认的 Chromium 浏览器中**没法正确显示印度语脚本**。 这是一件奇怪的事情,在我之前桌面Linux的体验中是从未遇到过的。起初,我认为是浏览器的问题,所以我安装了Firefox,然而问题依旧,Firefox也不能正确显示印度语。和Chromium不显示任何东西不同的是,Firefox确实显示了一些东西,但是毫无可读性。 ![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_1.jpeg) -Chromium中的印度语显示 + +*Chromium中的印度语显示* ![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_2.jpeg) -Firefox中的印度语显示 + +*Firefox中的印度语显示* 奇怪吧?那么,默认情况下基于Arch的Antergos Linux中没有印度语的支持吗?我没有去验证,但是我假设其它基于梵语脚本的印地语之类会产生同样的问题。 @@ -37,7 +39,7 @@ via: http://itsfoss.com/display-hindi-arch-antergos/ 作者:[Abhishek][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 6d61107e76a99b4e6b6a77a1d1bd08f000f212bd Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 20 Sep 2015 19:52:20 +0800 Subject: [PATCH 568/697] PUB:20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container @ictlyh --- ...ata Virtualization GA with OData in Docker Container.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) rename {translated/tech => published}/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md (85%) diff --git a/translated/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md b/published/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md similarity index 85% rename from 
translated/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md rename to published/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md index 4d14bbc904..6564e13924 100644 --- a/translated/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md +++ b/published/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md @@ -1,7 +1,7 @@ 如何在 Docker 容器中运行支持 OData 的 JBoss 数据虚拟化 GA -Howto Run JBoss Data Virtualization GA with OData in Docker Container ================================================================================ -大家好,我们今天来学习如何在一个 Docker 容器中运行支持 OData(译者注:Open Data Protocol,开放数据协议) 的 JBoss 数据虚拟化 6.0.0 GA(译者注:GA,General Availability,具体定义可以查看[WIKI][4])。JBoss 数据虚拟化是数据提供和集成解决方案平台,有多种分散的数据源时,转换为一种数据源统一对待,在正确的时间将所需数据传递给任意的应用或者用户。JBoss 数据虚拟化可以帮助我们将数据快速组合和转换为可重用的商业友好的数据模型,通过开放标准接口简单可用。它提供全面的数据抽取、联合、集成、转换,以及传输功能,将来自一个或多个源的数据组合为可重复使用和共享的灵活数据。要了解更多关于 JBoss 数据虚拟化的信息,可以查看它的[官方文档][1]。Docker 是一个提供开放平台用于打包,装载和以轻量级容器运行任何应用的开源平台。使用 Docker 容器我们可以轻松处理和启用支持 OData 的 JBoss 数据虚拟化。 + +大家好,我们今天来学习如何在一个 Docker 容器中运行支持 OData(译者注:Open Data Protocol,开放数据协议) 的 JBoss 数据虚拟化 6.0.0 GA(译者注:GA,General Availability,具体定义可以查看[WIKI][4])。JBoss 数据虚拟化是数据提供和集成解决方案平台,将多种分散的数据源转换为一种数据源统一对待,在正确的时间将所需数据传递给任意的应用或者用户。JBoss 数据虚拟化可以帮助我们将数据快速组合和转换为可重用的商业友好的数据模型,通过开放标准接口简单可用。它提供全面的数据抽取、联合、集成、转换,以及传输功能,将来自一个或多个源的数据组合为可重复使用和共享的灵活数据。要了解更多关于 JBoss 数据虚拟化的信息,可以查看它的[官方文档][1]。Docker 是一个提供开放平台用于打包,装载和以轻量级容器运行任何应用的开源平台。使用 Docker 容器我们可以轻松处理和启用支持 OData 的 JBoss 数据虚拟化。 下面是该指南中在 Docker 容器中运行支持 OData 的 JBoss 数据虚拟化的简单步骤。 @@ -78,7 +78,6 @@ Howto Run JBoss Data Virtualization GA with OData in Docker Container "LinkLocalIPv6PrefixLen": 0, ### 6. Web 界面 ### -### 6. 
Web Interface ### 现在,如果一切如期望的那样进行,当我们用浏览器打开 http://container-ip:8080/ 和 http://container-ip:9990 时会看到支持 oData 的 JBoss 数据虚拟化登录界面和 JBoss 管理界面。管理验证的用户名和密码分别是 admin 和 redhat1!数据虚拟化验证的用户名和密码都是 user。之后,我们可以通过 web 界面在内容间导航。 @@ -94,7 +93,7 @@ via: http://linoxide.com/linux-how-to/run-jboss-data-virtualization-ga-odata-doc 作者:[Arun Pyasi][a] 译者:[ictlyh](http://www.mutouxiaogui.cn/blog) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From ebbccfca891e4839bdfe20e4bd9d15f12460cb16 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 20 Sep 2015 22:26:19 +0800 Subject: [PATCH 569/697] PUB:RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage @FSSlc --- ...to Configure and Encrypt System Storage.md | 86 +++++++++---------- 1 file changed, 43 insertions(+), 43 deletions(-) rename {translated/tech/RHCSA => published}/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md (71%) diff --git a/translated/tech/RHCSA/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md b/published/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md similarity index 71% rename from translated/tech/RHCSA/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md rename to published/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md index 41890b2280..d33d4eb3d0 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md +++ b/published/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md @@ -1,28 +1,28 @@ -RHCSA 系列:使用 'Parted' 和 'SSM' 来配置和加密系统存储 – Part 6 +RHCSA 系列(六): 使用 Parted 和 SSM 来配置和加密系统存储 ================================================================================ 
-在本篇文章中,我们将讨论在 RHEL 7 中如何使用传统的工具来设置和配置本地系统存储,并介绍系统存储管理器(也称为 SSM),它将极大地简化上面的任务。 +在本篇文章中,我们将讨论在 RHEL 7 中如何使用传统的工具来设置和配置本地系统存储,并介绍系统存储管理器(也称为 SSM),它将极大地简化上面的任务。 ![配置和加密系统存储](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-and-Encrypt-System-Storage.png) -RHCSA: 配置和加密系统存储 – Part 6 +*RHCSA: 配置和加密系统存储 – Part 6* -请注意,我们将在这篇文章中展开这个话题,但由于该话题的宽泛性,我们将在下一期(Part 7)中继续介绍有关它的描述和使用。 +请注意,我们将在这篇文章中展开这个话题,但由于该话题的宽泛性,我们将在下一期中继续介绍有关它的描述和使用。 ### 在 RHEL 7 中创建和修改分区 ### 在 RHEL 7 中, parted 是默认的用来处理分区的程序,且它允许你: - 展示当前的分区表 -- 操纵(增加或减少分区的大小)现有的分区 +- 操纵(扩大或缩小分区的大小)现有的分区 - 利用空余的磁盘空间或额外的物理存储设备来创建分区 -强烈建议你在试图增加一个新的分区或对一个现有分区进行更改前,你应当确保设备上没有任何一个分区正在使用(`umount /dev/partition`),且假如你正使用设备的一部分来作为 swap 分区,在进行上面的操作期间,你需要将它禁用(`swapoff -v /dev/partition`) 。 +强烈建议你在试图增加一个新的分区或对一个现有分区进行更改前,你应当确保该设备上没有任何一个分区正在使用(`umount /dev/分区`),且假如你正使用设备的一部分来作为 swap 分区,在进行上面的操作期间,你需要将它禁用(`swapoff -v /dev/分区`) 。 -实施上面的操作的最简单的方法是使用一个安装介质例如一个 RHEL 7 安装 DVD 或 USB 以急救模式启动 RHEL(Troubleshooting → Rescue a Red Hat Enterprise Linux system),然后当让你选择一个选项来挂载现有的 Linux 安装时,选择'跳过'这个选项,接着你将看到一个命令行提示符,在其中你可以像下图显示的那样开始键入与在一个未被使用的物理设备上创建一个正常的分区时所用的相同的命令。 +实施上面的操作的最简单的方法是使用一个安装介质例如一个 RHEL 7 的 DVD 或 USB 安装盘以急救模式启动 RHEL(`Troubleshooting` → `Rescue a Red Hat Enterprise Linux system`),然后当让你选择一个选项来挂载现有的 Linux 安装时,选择“跳过”这个选项,接着你将看到一个命令行提示符,在其中你可以像下图显示的那样开始键入与在一个未被使用的物理设备上创建一个正常的分区时所用的相同的命令。 ![RHEL 7 急救模式](http://www.tecmint.com/wp-content/uploads/2015/04/RHEL-7-Rescue-Mode.png) -RHEL 7 急救模式 +*RHEL 7 急救模式* 要启动 parted,只需键入: @@ -32,17 +32,17 @@ RHEL 7 急救模式 ![创建新的分区](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partition.png) -创建新的分区 +*创建新的分区* -正如你所看到的那样,在这个例子中,我们正在使用一个 5 GB 的虚拟光驱。现在我们将要创建一个 4 GB 的主分区,然后将它格式化为 xfs 文件系统,它是 RHEL 7 中默认的文件系统。 +正如你所看到的那样,在这个例子中,我们正在使用一个 5 GB 的虚拟驱动器。现在我们将要创建一个 4 GB 的主分区,然后将它格式化为 xfs 文件系统,它是 RHEL 7 中默认的文件系统。 -你可以从一系列的文件系统中进行选择。你将需要使用 mkpart 来手动地创建分区,接着和平常一样,用 mkfs.fstype 来对分区进行格式化,因为 mkpart 并不支持许多现代的文件系统以达到即开即用。 +你可以从一系列的文件系统中进行选择。你将需要使用 `mkpart` 来手动地创建分区,接着和平常一样,用 `mkfs.类型` 来对分区进行格式化,因为 `mkpart` 
并不支持许多现代的文件系统,无法做到即开即用。 在下面的例子中,我们将为设备设定一个标记,然后在 `/dev/sdb` 上创建一个主分区 `(p)`,它从设备的 0% 开始,并在 4000MB(4 GB) 处结束。 ![在 Linux 中设定分区名称](http://www.tecmint.com/wp-content/uploads/2015/04/Label-Partition.png) -标记分区的名称 +*标记分区的名称* 接下来,我们将把分区格式化为 xfs 文件系统,然后再次打印出分区表,以此来确保更改已被应用。 @@ -51,11 +51,11 @@ RHEL 7 急救模式 ![在 Linux 中格式化分区](http://www.tecmint.com/wp-content/uploads/2015/04/Format-Partition-in-Linux.png) -格式化分区为 XFS 文件系统 +*格式化分区为 XFS 文件系统* -对于旧一点的文件系统,在 parted 中你应该使用 `resize` 命令来改变分区的大小。不幸的是,这只适用于 ext2, fat16, fat32, hfs, linux-swap, 和 reiserfs (若 libreiserfs 已被安装)。 +对于旧一点的文件系统,在 parted 中你可以使用 `resize` 命令来改变分区的大小。不幸的是,这只适用于 ext2, fat16, fat32, hfs, linux-swap, 和 reiserfs (若 libreiserfs 已被安装)。 -因此,改变分区大小的唯一方式是删除它然后再创建它(所以确保你对你的数据做了完整的备份!)。毫无疑问,在 RHEL 7 中默认的分区方案是基于 LVM 的。 +因此,改变分区大小的唯一方式是删除它然后再创建它(所以,确保你对你的数据做了完整的备份!)。毫无疑问,在 RHEL 7 中默认的分区方案是基于 LVM 的。 使用 parted 来移除一个分区,可以用: @@ -64,23 +64,23 @@ RHEL 7 急救模式 ![在 Linux 中移除分区](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-Partition-in-Linux.png) -移除或删除分区 +*移除或删除分区* ### 逻辑卷管理(LVM) ### -一旦一个磁盘被分好了分区,再去更改分区的大小就是一件困难或冒险的事了。基于这个原因,假如我们计划在我们的系统上对分区的大小进行更改,我们应当考虑使用 LVM 的可能性,而不是使用传统的分区系统。这样多个物理设备可以组成一个逻辑组,以此来寄宿可自定义数目的逻辑卷,而逻辑卷的增大或减少不会带来任何麻烦。 +一旦一个磁盘被分好了分区,再去更改分区的大小就是一件困难或冒险的事了。基于这个原因,假如我们计划在我们的系统上对分区的大小进行更改,我们应当考虑使用 LVM 的可能性,而不是使用传统的分区系统。这样多个物理设备可以组成一个逻辑组,以此来存放任意数目的逻辑卷,而逻辑卷的增大或减少不会带来任何麻烦。 简单来说,你会发现下面的示意图对记住 LVM 的基础架构或许有用。 ![LVM 的基本架构](http://www.tecmint.com/wp-content/uploads/2015/04/LVM-Diagram.png) -LVM 的基本架构 +*LVM 的基本架构* #### 创建物理卷,卷组和逻辑卷 #### 遵循下面的步骤是为了使用传统的卷管理工具来设置 LVM。由于你可以通过阅读这个网站上的 LVM 系列来扩展这个话题,我将只是概要的介绍设置 LVM 的基本步骤,然后与使用 SSM 来实现相同功能做个比较。 -**注**: 我们将使用整个磁盘 `/dev/sdb` 和 `/dev/sdc` 来作为 PVs (物理卷),但是否执行相同的操作完全取决于你。 +**注**: 我们将使用整个磁盘 `/dev/sdb` 和 `/dev/sdc` 来作为物理卷(PV),但是否执行相同的操作完全取决于你。 **1. 使用 /dev/sdb 和 /dev/sdc 中 100% 的可用磁盘空间来创建分区 `/dev/sdb1` 和 `/dev/sdc1`:** @@ -89,7 +89,7 @@ LVM 的基本架构 ![创建新分区](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partitions.png) -创建新分区 +*创建新分区* **2.
分别在 /dev/sdb1 和 /dev/sdc1 上共创建 2 个物理卷。** @@ -98,21 +98,21 @@ LVM 的基本架构 ![创建两个物理卷](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Physical-Volumes.png) -创建两个物理卷 +*创建两个物理卷* -记住,你可以使用 pvdisplay /dev/sd{b,c}1 来显示有关新建的 PV 的信息。 +记住,你可以使用 pvdisplay /dev/sd{b,c}1 来显示有关新建的物理卷的信息。 -**3. 在上一步中创建的 PV 之上创建一个 VG:** +**3. 在上一步中创建的物理卷之上创建一个卷组(VG):** # vgcreate tecmint_vg /dev/sd{b,c}1 ![在 Linux 中创建卷组](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Volume-Group.png) -创建卷组 +*创建卷组* -记住,你可使用 vgdisplay tecmint_vg 来显示有关新建的 VG 的信息。 +记住,你可使用 vgdisplay tecmint_vg 来显示有关新建的卷组的信息。 -**4. 像下面那样,在 VG tecmint_vg 之上创建 3 个逻辑卷:** +**4. 像下面那样,在卷组 tecmint_vg 之上创建 3 个逻辑卷(LV):** # lvcreate -L 3G -n vol01_docs tecmint_vg [vol01_docs → 3 GB] # lvcreate -L 1G -n vol02_logs tecmint_vg [vol02_logs → 1 GB] @@ -120,11 +120,11 @@ LVM 的基本架构 ![在 LVM 中创建逻辑卷](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Logical-Volumes.png) -创建逻辑卷 +*创建逻辑卷* -记住,你可以使用 lvdisplay tecmint_vg 来显示有关在 VG tecmint_vg 之上新建的 LV 的信息。 +记住,你可以使用 lvdisplay tecmint_vg 来显示有关在 tecmint_vg 之上新建的逻辑卷的信息。 -**5. 格式化每个逻辑卷为 xfs 文件系统格式(假如你计划在以后将要缩小卷的大小,请别使用 xfs 文件系统格式!):** +**5. 格式化每个逻辑卷为 xfs 文件系统格式(假如你计划在以后将要缩小卷的大小,请别使用 xfs 文件系统格式!):** # mkfs.xfs /dev/tecmint_vg/vol01_docs # mkfs.xfs /dev/tecmint_vg/vol02_logs @@ -138,7 +138,7 @@ LVM 的基本架构 #### 移除逻辑卷,卷组和物理卷 #### -**7.现在我们将进行与刚才相反的操作并移除 LV,VG 和 PV:** +**7.现在我们将进行与刚才相反的操作并移除逻辑卷、卷组和物理卷:** # lvremove /dev/tecmint_vg/vol01_docs # lvremove /dev/tecmint_vg/vol02_logs @@ -161,20 +161,20 @@ LVM 的基本架构 - 初始化块设备来作为物理卷 - 创建一个卷组 - 创建逻辑卷 -- 格式化 LV 和 +- 格式化逻辑卷,以及 - 只使用一个命令来挂载它们 -**9. 现在,我们可以使用下面的命令来展示有关 PV,VG 或 LV 的信息:** +**9. 现在,我们可以使用下面的命令来展示有关物理卷、卷组或逻辑卷的信息:** # ssm list dev # ssm list pool # ssm list vol -![检查有关 PV, VG,或 LV 的信息](http://www.tecmint.com/wp-content/uploads/2015/04/Display-LVM-Information.png) +![检查有关物理卷、卷组或逻辑卷的信息](http://www.tecmint.com/wp-content/uploads/2015/04/Display-LVM-Information.png) -检查有关 PV, VG,或 LV 的信息 +*检查有关物理卷、卷组或逻辑卷的信息* -**10. 
正如我们知道的那样, LVM 的一个显著的特点是可以在不停机的情况下更改(增大或缩小) 逻辑卷的大小:** +**10. 正如我们知道的那样, LVM 的一个显著的特点是可以在不停机的情况下更改(增大或缩小)逻辑卷的大小:** 假定在 vol02_logs 上我们用尽了空间,而 vol03_homes 还留有足够的空间。我们将把 vol03_homes 的大小调整为 4 GB,并使用剩余的空间来扩展 vol02_logs: @@ -184,7 +184,7 @@ LVM 的基本架构 ![查看卷的大小](http://www.tecmint.com/wp-content/uploads/2015/04/Check-LVM-Free-Space.png) -查看卷的大小 +*查看卷的大小* 然后执行: @@ -196,11 +196,11 @@ LVM 的基本架构 # ssm remove tecmint_vg -这个命令将返回一个提示,询问你是否确认删除 VG 和它所包含的 LV: +这个命令将返回一个提示,询问你是否确认删除卷组和它所包含的逻辑卷: ![移除逻辑卷和卷组](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-LV-VG.png) -移除逻辑卷和卷组 +*移除逻辑卷和卷组* ### 管理加密的卷 ### @@ -216,7 +216,7 @@ SSM 也给系统管理员提供了为新的或现存的卷加密的能力。首 我们的下一个任务是往 /etc/fstab 中添加条目来让这些逻辑卷在启动时可用,而不是使用设备识别编号(/dev/something)。 -我们将使用每个 LV 的 UUID (使得当我们添加其他的逻辑卷或设备后,我们的设备仍然可以被唯一的标记),而我们可以使用 blkid 应用来找到它们的 UUID: +我们将使用每个逻辑卷的 UUID (使得当我们添加其他的逻辑卷或设备后,我们的设备仍然可以被唯一的标记),而我们可以使用 blkid 应用来找到它们的 UUID: # blkid -o value UUID /dev/tecmint_vg/vol01_docs # blkid -o value UUID /dev/tecmint_vg/vol02_logs @@ -226,7 +226,7 @@ SSM 也给系统管理员提供了为新的或现存的卷加密的能力。首 ![找到逻辑卷的 UUID](http://www.tecmint.com/wp-content/uploads/2015/04/Logical-Volume-UUID.png) -找到逻辑卷的 UUID +*找到逻辑卷的 UUID* 接着,使用下面的内容来创建 /etc/crypttab 文件(请更改 UUID 来适用于你的设置): @@ -243,11 +243,11 @@ SSM 也给系统管理员提供了为新的或现存的卷加密的能力。首 # Logical volume vol03_homes /dev/mapper/homes /mnt/homes ext4 defaults 0 2 -现在重启(systemctl reboot),则你将被要求为每个 LV 输入密码。随后,你可以通过检查相应的挂载点来确保挂载操作是否成功: +现在重启(`systemctl reboot`),则你将被要求为每个逻辑卷输入密码。随后,你可以通过检查相应的挂载点来确保挂载操作是否成功: ![确保逻辑卷挂载点](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-LV-Mount-Points.png) -确保逻辑卷挂载点 +*确保逻辑卷挂载点* ### 总结 ### @@ -261,7 +261,7 @@ via: http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-p 作者:[Gabriel Cánepa][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 32dbd3b1e2563b6d2d038af3a2204a3ae3fb8369 Mon Sep 17 00:00:00 2001 From: 
wxy Date: Sun, 20 Sep 2015 23:40:36 +0800 Subject: [PATCH 570/697] PUB:20150826 Five Super Cool Open Source Games @H-mudcup --- ...50826 Five Super Cool Open Source Games.md | 66 +++++++++++++++++++ ...50826 Five Super Cool Open Source Games.md | 66 ------------------- 2 files changed, 66 insertions(+), 66 deletions(-) create mode 100644 published/20150826 Five Super Cool Open Source Games.md delete mode 100644 translated/share/20150826 Five Super Cool Open Source Games.md diff --git a/published/20150826 Five Super Cool Open Source Games.md b/published/20150826 Five Super Cool Open Source Games.md new file mode 100644 index 0000000000..418d6248a5 --- /dev/null +++ b/published/20150826 Five Super Cool Open Source Games.md @@ -0,0 +1,66 @@ +五大超酷的开源游戏 +================================================================================ + +在2014年和2015年,Linux 涌入了一堆流行的付费游戏,例如备受欢迎的无主之地(Borderlands)、巫师(Witcher)、死亡岛(Dead Island) 和 CS 系列游戏。虽然这是令人激动的消息,但玩家有这个支出预算吗?付费游戏很好,但更好的是由了解玩家喜好的开发者开发的免费的替代品。 + +前段时间,我偶然看到了一个三年前发布的 YouTube 视频,标题非常的有正能量 [5个不算糟糕的开源游戏][1]。虽然视频表扬了一些开源游戏,我还是更喜欢用一个更加热情的方式来切入这个话题,至少如标题所说。所以,下面是我的一份五大超酷开源游戏的清单。 + +### Tux Racer ### + +![Tux Racer](http://fossforce.com/wp-content/uploads/2015/08/tuxracer-550x413.jpg) + +*Tux Racer* + +[《Tux Racer》][2]是这份清单上的第一个游戏,因为我对这个游戏很熟悉。最近,我的兄弟和我为了参加[玩电脑的孩子们][4]项目,在[去墨西哥的路途中][3],Tux Racer 是孩子和教师都喜欢玩的游戏之一。在这个游戏中,玩家使用 Linux 吉祥物——企鹅 Tux——在下山雪道上以计时赛的方式进行比赛。玩家们不断挑战他们自己的最佳纪录。目前还没有多玩家版本,但这是有可能改变的。它适用于 Linux、OS X、Windows 和 Android。 + +### Warsow ### + +![Warsow](http://fossforce.com/wp-content/uploads/2015/08/warsow-550x413.jpg) + +*Warsow* + +[《Warsow》][5]网站介绍道:“设定是有未来感的卡通世界,Warsow 是个完全开放的适用于 Windows、Linux 和 Mac OS X平台的快节奏第一人称射击游戏(FPS)。Warsow 是跨网络的尊重和体育精神的的艺术。(Warsow is the Art of Respect and Sportsmanship Over the Web. 
大写回文字母组成 Warsow。)” 我很不情愿的把 FPS 类放到了这个列表中,因为很多人玩过这类的游戏,但是我的确被 Warsow 打动了。它对很多动作进行了优先级排序,游戏节奏很快,一开始就有八个武器。卡通化的风格让玩的过程变得没有那么严肃,更加的休闲,非常适合和亲友一同玩。然而,它却以充满竞争的游戏自居,并且当我体验这个游戏时,我发现周围确实有一些专家级的玩家。它适用于 Linux、Windows 和 OS X。 + +### M.A.R.S——一个荒诞的射击游戏 ### + +![M.A.R.S. - A ridiculous shooter](http://fossforce.com/wp-content/uploads/2015/08/MARS-screenshot-550x344.jpg) + +*M.A.R.S.——一个荒诞的射击游戏* + +[《M.A.R.S——一个荒诞的射击游戏》][6]之所以吸引人是因为它充满活力的色彩和画风。支持两个玩家使用同一个键盘,而一个在线多玩家版本目前正在开发中——这意味着想要和朋友们一起玩暂时还要等等。不论如何,它是个可以使用几个不同飞船和武器的有趣的太空射击游戏。飞船的形状不同,从普通的枪、激光、散射枪到更有趣的武器(随机出来的飞船中有一个会对敌人发射泡泡,这为这款混乱的游戏增添了很多乐趣)。游戏有几种模式,比如标准模式和对方进行殊死搏斗以获得高分或先达到某个分数线,还有其他的模式,空间球(Spaceball)、坟坑(Grave-itation Pit)和保加农炮(Cannon Keep)。它适用于 Linux、Windows 和 OS X。 + +### Valyria Tear ### + +![Valyria Tear](http://fossforce.com/wp-content/uploads/2015/08/bronnan-jump-to-enemy-550x413.jpg) + +*Valyria Tear* + +[Valyria Tear][7] 类似近年来拥有众多粉丝的角色扮演游戏(RPG)。故事设定在奇幻游戏的通用年代,充满了骑士、王国和魔法,以及主要角色 Bronann。设计团队在这个世界的设计上做的非常棒,实现了玩家对这类游戏所有的期望:隐藏的宝藏、偶遇的怪物、非玩家操纵角色(NPC)的互动以及所有 RPG 不可或缺的——在低级别的怪物上刷经验直到可以面对大 BOSS。我在试玩的时候,时间不允许我太过深入到这个游戏故事中,但是感兴趣的人可以看 YouTube 上由 Yohann Ferriera 用户发的‘[Let’s Play][8]’系列视频。它适用于 Linux、Windows 和 OS X。 + +### SuperTuxKart ### + +![SuperTuxKart](http://fossforce.com/wp-content/uploads/2015/08/hacienda_tux_antarctica-550x293.jpg) + +*SuperTuxKart* + +最后一个同样好玩的游戏是 [SuperTuxKart][9],一个效仿 Mario Kart(马里奥卡丁车)但丝毫不逊色的好游戏。它在2000年-2004年间开始以 Tux Kart 开发,但是在成品中有错误,结果开发就停止了几年。从2006年开始重新开发时起,它就一直在改进,直到四个月前0.9版首次发布。在游戏里,我们的老朋友 Tux 与马里奥和其他一些开源吉祥物一同开始。其中一个熟悉的面孔是 Suzanne,这是 Blender 的那只吉祥物猴子。画面很给力,游戏很流畅。虽然在线游戏还在计划阶段,但是分屏多玩家游戏是可以的。一个电脑最多可以供四个玩家同时玩。它适用于 Linux、Windows、OS X、AmigaOS 4、AROS 和 MorphOS。 + +-------------------------------------------------------------------------------- + +via: http://fossforce.com/2015/08/five-super-cool-open-source-games/ + +作者:Hunter Banks +译者:[H-mudcup](https://github.com/H-mudcup) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + 
+[1]:https://www.youtube.com/watch?v=BEKVl-XtOP8 +[2]:http://tuxracer.sourceforge.net/download.html +[3]:http://fossforce.com/2015/07/banks-family-values-texas-linux-fest/ +[4]:http://www.kidsoncomputers.org/an-amazing-week-in-oaxaca +[5]:https://www.warsow.net/download +[6]:http://mars-game.sourceforge.net/ +[7]:http://valyriatear.blogspot.com/ +[8]:https://www.youtube.com/channel/UCQ5KrSk9EqcT_JixWY2RyMA +[9]:http://supertuxkart.sourceforge.net/ diff --git a/translated/share/20150826 Five Super Cool Open Source Games.md b/translated/share/20150826 Five Super Cool Open Source Games.md deleted file mode 100644 index 30ca09e171..0000000000 --- a/translated/share/20150826 Five Super Cool Open Source Games.md +++ /dev/null @@ -1,66 +0,0 @@ -Translated by H-mudcup -五大超酷的开源游戏 -================================================================================ -在2014年和2015年,Linux 成了一堆流行商业品牌的家,例如备受欢迎的 Borderlands、Witcher、Dead Island 和 CS系列游戏。虽然这是令人激动的消息,但这跟玩家的预算有什么关系?商业品牌很好,但更好的是由了解玩家喜好的开发者开发的免费的替代品。 - -前段时间,我偶然看到了一个三年前发布的 YouTube 视频,标题非常的有正能量[5个不算糟糕的开源游戏][1]。虽然视频表扬了一些开源游戏,我还是更喜欢用一个更加热情的方式来切入这个话题,至少如标题所说。所以,下面是我的一份五大超酷开源游戏的清单。 - -### Tux Racer ### - -![Tux Racer](http://fossforce.com/wp-content/uploads/2015/08/tuxracer-550x413.jpg) - -Tux Racer - -[《Tux Racer》][2]是这份清单上的第一个游戏,因为我对这个游戏很熟悉。我和兄弟与[电脑上的孩子们][4]项目在[最近一次去墨西哥的路途中][3] Tux Racer 是孩子和教师都喜欢玩的游戏之一。在这个游戏中,玩家使用 Linux 吉祥物,企鹅 Tux,在下山雪道上以计时赛的方式进行比赛。玩家们不断挑战他们自己的最佳纪录。目前还没有多玩家版本,但这是有可能改变的。适用于 Linux、OS X、Windows 和 Android。 - -### Warsow ### - -![Warsow](http://fossforce.com/wp-content/uploads/2015/08/warsow-550x413.jpg) - -Warsow - -[《Warsow》][5]网站解释道:“设定是有未来感的卡通世界,Warsow 是个完全开放的适用于 Windows、Linux 和 Mac OS X平台的快节奏第一人称射击游戏(FPS)。Warsow 是尊重的艺术和网络中的体育精神。(Warsow is the Art of Respect and Sportsmanship Over the Web.大写字母组成Warsow。)” 我很不情愿的把 FPS 类放到了这个列表中,因为很多人玩过这类的游戏,但是我的确被 Warsow 打动了。它对很多动作进行了优先级排序,游戏节奏很快,一开始就有八个武器。卡通化的风格让玩的过程变得没有那么严肃,更加的休闲,非常适合可以和亲友一同玩。然而,他却以充满竞争的游戏自居,并且当我体验这个游戏时,我发现周围确实有一些专家级的玩家。适用于 Linux、Windows 和 OS X。 - -### 
M.A.R.S——一个荒诞的射击游戏 ### - -![M.A.R.S. - A ridiculous shooter](http://fossforce.com/wp-content/uploads/2015/08/MARS-screenshot-550x344.jpg) - -M.A.R.S.——一个荒诞的射击游戏 - -[《M.A.R.S——一个荒诞的射击游戏》][6]之所以吸引人是因为他充满活力的色彩和画风。支持两个玩家使用同一个键盘,而一个在线多玩家版本目前正在开发中——这意味着想要和朋友们一起玩暂时还要等等。不论如何,它是个可以使用几个不同飞船和武器的有趣的太空射击游戏。飞船的形状不同,从普通的枪、激光、散射枪到更有趣的武器(随机出来的飞船中有一个会对敌人发射泡泡,这为这款混乱的游戏增添了很多乐趣)。游戏几种模式,比如标准模式和对方进行殊死搏斗以获得高分或先达到某个分数线,还有其他的模式,空间球(Spaceball)、坟坑(Grave-itation Pit)和保加农炮(Cannon Keep)。适用于 Linux、Windows 和 OS X。 - -### Valyria Tear ### - -![Valyria Tear](http://fossforce.com/wp-content/uploads/2015/08/bronnan-jump-to-enemy-550x413.jpg) - -Valyria Tear - -[Valyria Tear][7] 类似几年来拥有众多粉丝的角色扮演游戏(RPG)。故事设定在梦幻游戏的通用年代,充满了骑士、王国和魔法,以及主要角色 Bronann。设计团队做的非常棒,在设计这个世界和实现玩家对这类游戏所有的期望:隐藏的宝藏、偶遇的怪物、非玩家操纵角色(NPC)的互动以及所有 RPG 不可或缺的:在低级别的怪物上刷经验直到可以面对大 BOSS。我在试玩的时候,时间不允许我太过深入到这个游戏故事中,但是感兴趣的人可以看 YouTube 上由 Yohann Ferriera 用户发的‘[Let’s Play][8]’系列视频。适用于 Linux、Windows 和 OS X。 - -### SuperTuxKart ### - -![SuperTuxKart](http://fossforce.com/wp-content/uploads/2015/08/hacienda_tux_antarctica-550x293.jpg) - -SuperTuxKart - -最后一个同样好玩的游戏是 [SuperTuxKart][9],一个效仿 Mario Kart(马里奥卡丁车)但丝毫不必原作差的好游戏。它在2000年-2004年间开始以 Tux Kart 开发,但是在成品中有错误,结果开发就停止了几年。从2006年开始重新开发时起,它就一直在改进,直到四个月前0.9版首次发布。在游戏里,我们的老朋友 Tux 与马里奥和其他一些开源吉祥物一同开始。其中一个熟悉的面孔是 Suzanne,Blender 的那只吉祥物猴子。画面很给力,游戏很流畅。虽然在线游戏还在计划阶段,但是分屏多玩家游戏是可以的。一个电脑最多可以四个玩家同时玩。适用于 Linux、Windows、OS X、AmigaOS 4、AROS 和 MorphOS。 - --------------------------------------------------------------------------------- - -via: http://fossforce.com/2015/08/five-super-cool-open-source-games/ - -作者:Hunter Banks -译者:[H-mudcup](https://github.com/H-mudcup) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.youtube.com/watch?v=BEKVl-XtOP8 -[2]:http://tuxracer.sourceforge.net/download.html -[3]:http://fossforce.com/2015/07/banks-family-values-texas-linux-fest/ -[4]:http://www.kidsoncomputers.org/an-amazing-week-in-oaxaca 
-[5]:https://www.warsow.net/download -[6]:http://mars-game.sourceforge.net/ -[7]:http://valyriatear.blogspot.com/ -[8]:https://www.youtube.com/channel/UCQ5KrSk9EqcT_JixWY2RyMA -[9]:http://supertuxkart.sourceforge.net/ From a248dc90a6e0dfccb3945217dea1f5ed0d9c8a84 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 21 Sep 2015 00:40:16 +0800 Subject: [PATCH 571/697] PUB:20150901 Is Linux Right For You MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @icybreaker 翻译的不错 ! --- published/20150901 Is Linux Right For You.md | 63 +++++++++++++++++++ .../talk/20150901 Is Linux Right For You.md | 63 ------------------- 2 files changed, 63 insertions(+), 63 deletions(-) create mode 100644 published/20150901 Is Linux Right For You.md delete mode 100644 translated/talk/20150901 Is Linux Right For You.md diff --git a/published/20150901 Is Linux Right For You.md b/published/20150901 Is Linux Right For You.md new file mode 100644 index 0000000000..af88b9bfee --- /dev/null +++ b/published/20150901 Is Linux Right For You.md @@ -0,0 +1,63 @@ +Linux 系统是否适合于您? +================================================================================ +> 并非人人都适合使用 Linux --对许多用户来说,Windows 或 OSX 会是更好的选择。 + +我喜欢使用 Linux 桌面系统,并不是因为软件的政治性质,也不是不喜欢其它操作系统。我喜欢 Linux 系统因为它能满足我的需求并且确实适合使用。 + +我的经验是,并非人人都适合切换至“Linux 的生活方式”。本文将帮助您通过分析使用 Linux 系统的利弊来供您自行判断使用 Linux 是否真正适合您。 + +### 什么时候更换系统? 
### + +当有充分的理由时,将系统切换到 Linux 系统是很有意义的。这对 Windows 用户将系统更换到 OSX 或类似的情况都同样适用。为让您的系统转变成功,您必须首先确定为什么要做这种转换。 + +对某些人来说,更换系统通常意味着他们不满于当前的系统操作平台。也许是最新的升级给了他们糟糕的用户体验,而他们也已准备好更换到别的系统,也许仅仅是因为对某个系统好奇。不管动机是什么,必须要有充分的理由支撑您做出更换操作系统的决定。如果没有一个充足的原因让您这样做,往往不会成功。 + +然而事事都有例外。如果您确实对 Linux 桌面非常感兴趣,或许可以选择一种折衷的方式。 + +### 放慢起步的脚步 ### + +第一次尝试运行 Linux 系统后,我看到就有人开始批判 Windows 安装过程的费时,完全是因为他们20分钟就用闪存安装好 Ubuntu 的良好体验。但是伙伴们,这并不只是一次测验。相反,我有如下建议: + +- 用一周的时间尝试在[虚拟机上运行 Linux 系统][1]。这意味着您将在该系统上执行所有的浏览器工作、邮箱操作和其它想要完成的任务。 +- 如果运行虚拟机资源消耗太大,您可以尝试用提供了[一些持久存储][2]的 USB 驱动器来运行 Linux,您的主操作系统将不受任何影响。与此同时,您仍可以运行 Linux 系统。 +- 运行 Linux 系统一周后,如果一切进展顺利,下一步您可以计算一下这周内登入 Windows 的次数。如果只是偶尔登录 Windows 系统,下一步就可以尝试运行 Windows 和 Linux 的[双系统][3]。对那些只运行了 Linux 系统的用户,可以考虑尝试将系统真正更换为 Linux 系统。 +- 在你完全删除 Windows 分区前,更应该购买一个新硬盘来安装 Linux 系统。这样有了充足的硬盘空间,您就可以使用双系统。如果必须要启动 Windows 系统做些事情的话,Windows 系统也是可以运行的。 + +### 使用 Linux 系统的好处是什么? ### + +将系统更换到 Linux 有什么好处呢?一般而言,这种好处对大多数人来说可以归结到释放个性自由。在使用 Linux 系统的时候,如果您不喜欢某些设置,可以自行更改它们。同时使用 Linux 可以为用户节省大量的硬件升级开支和不必要的软件开支。另外,您不需再费力找寻已丢失的软件许可证密钥,而且如果您不喜欢即将发布的系统版本,大可轻松地更换到别的版本。 + +在 Linux 桌面方面可以选择的桌面种类是惊人的多,看起来对新手来说做这种选择非常困难。但是如果您发现了喜欢的一款 Linux 版本(Debian、Fedora、Arch等),最困难的工作其实已经完成了,您需要做的就是找到各版本的区别并选择出您最喜欢的系统版本环境。 + +如今我听到的最常见的抱怨之一是用户发现没有太多的软件能适用于 Linux 系统。然而,这并不是事实。尽管别的操作系统可能会提供更多软件,但是如今的 Linux 也已经提供了足够多应用程序满足您的各种需求,包括视频剪辑(家用和专业级)、摄影、办公管理软件、远程访问、音乐软件、等等等等。 + +### 使用 Linux 系统您会失去些什么? ### + +虽然我喜欢使用 Linux,但我妻子的家庭办公依然依赖于 OS X。对于用 Linux 系统完成一些特定的任务她心满意足,但是她需要 OS X 来运行一些不支持 Linux 的软件。这是许多想要更换系统的用户会遇到的一个常见的问题。如果要更换系统,您需要考虑是否愿意失去一些关键的软件工具。 + +有时这个问题是因为软件的数据只能用该软件打开。别的情况下,是传统应用程序的工作流和功能并不适用于在 Linux 系统上可运行的软件。我自己并没有遇到过这类问题,但是我知道确实存在这些问题。许多 Linux 上的软件在其它操作系统上也都可以用。所以如果担心这类软件兼容问题,建议您先尝试在已有的系统上操作一下几款类似的应用程序。 + +更换成 Linux 系统后,另一件您可能会失去的是本地系统支持服务。人们通常会嘲笑这种愚蠢行径,但我知道,无数的新手在使用 Linux 时会发现解决 Linux 上各种问题的唯一资源就是来自网络另一端的陌生人提供的帮助。如果只是他们的 PC 遇到了一些问题,这将会比较麻烦。Windows 和 OS X 的用户已经习惯各城市遍布了支持他们操作系统的各项技术服务。 + +### 如何开启新旅程? 
### + +这里建议大家要记住最重要的就是总要有个回退方案。如果您将 Windows 10 从硬盘中擦除,您会发现重新安装它又会花费金钱。对那些从其它 Windows 发布版本升级的用户来说尤其会遇到这种情况。请接受这个建议,对新手来说使用闪存安装 Linux 或使用 Windows 和 Linux 双系统都是更值得提倡的做法。您也许会如鱼得水般使用 Linux系统,但是有了一份回退方案,您将高枕无忧。 + +相反,如果数周以来您一直依赖于使用双操作系统,但是已经准备好冒险去尝试一下单操作系统,那么就去做吧。格式化您的驱动器,重新安装您喜爱的 Linux 发行版。数年来我一直都是“全职” Linux 使用爱好者,这里可以确定地告诉您,使用 Linux 系统感觉棒极了。这种感觉会持续多久?我第一次的 Linux 系统使用经验还是来自早期的 Red Hat 系统,最终在2003年,我在自己的笔记本上整个安装了 Linux 系统。 + +Linux 爱好者们,你们什么时候开始使用 Linux 的?您在最初更换成 Linux 系统时是兴奋还是焦虑呢?欢迎点击评论分享你们的经验。 + +-------------------------------------------------------------------------------- + +via: http://www.datamation.com/open-source/is-linux-right-for-you.html + +作者:[Matt Hartley][a] +译者:[icybreaker](https://github.com/icybreaker) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.datamation.com/author/Matt-Hartley-3080.html +[1]:http://www.psychocats.net/ubuntu/virtualbox +[2]:http://www.howtogeek.com/howto/14912/create-a-persistent-bootable-ubuntu-usb-flash-drive/ +[3]:http://www.linuxandubuntu.com/home/dual-boot-ubuntu-15-04-14-10-and-windows-10-8-1-8-step-by-step-tutorial-with-screenshots diff --git a/translated/talk/20150901 Is Linux Right For You.md b/translated/talk/20150901 Is Linux Right For You.md deleted file mode 100644 index 60b56f6e25..0000000000 --- a/translated/talk/20150901 Is Linux Right For You.md +++ /dev/null @@ -1,63 +0,0 @@ -Linux系统是否适合于您? -================================================================================ -> 并非人人都适合使用Linux--对许多用户来说,Windows或OSX会是更好的选择。 - -我喜欢使用Linux系统,并不是因为软件的政治性质,也不是不喜欢其他操作系统。我喜欢Linux系统因为它能满足我的需求并且确实适合使用。 - -我的经验是,并非人人都适合切换至“Linux的生活方式”。本文将帮助您通过分析使用Linux系统的利弊来供您自行判断使用Linux是否真正适合您。 - -### 什么时候更换系统? 
### - -当有充分的理由时,将系统切换到Linux系统是很有意义的。这对Windows用户将系统更换到OSX或类似的情况都同样适用。为让您的系统转变成功,您必须首先确定为什么要做这种转换。 - -对某些人来说,更换系统通常意味着他们不满于当前的系统操作平台。也许是最新的升级给了他们糟糕的用户体验,他们已准备好更换到别的系统,也许仅仅是因为对某个系统好奇。不管动机是什么,必须要有充分的理由支撑您做出更换操作系统的决定。如果没有一个充足的原因让您这样做,往往不会成功。 - -然而事事都有例外。如果您确实对Linux非常感兴趣,或许可以选择一种折衷的方式。 - -### 放慢起步的脚步 ### - -第一次尝试运行Linux系统后,我看到就有人开始批判Windows安装过程的费时,完全是因为他们20分钟就用闪存安装好Ubuntu的良好体验。但是伙伴们,这并不只是一次测验。相反,我有如下建议: - -- 一周的时间尝试在[虚拟机上运行Linux系统][1]。这意味着您将在该系统上运行所有的浏览器工作、邮箱操作和其它想要完成的任务。 -- 如果运行虚拟机资源消耗太大,您可以尝试通过[存储持久性][2]的USB驱动器来运行Linux,您的主操作系统将不受任何影响。与此同时,您仍可以运行Linux系统。 -- 运行Linux系统一周后,如果一切进展顺利,下一步您可以计算一下这周内登入Windows的次数。如果只是偶尔登陆Windows系统,下一步就可以尝试运行Windows和Linux[双系统][3]。对那些只运行Linux系统的用户,可以考虑尝试将系统真正更换为Linux系统。 -- 在管理Windows分区前,有必要购买一个新硬盘来安装Linux系统。这样只要有充足的硬盘空间,您就可以使用双系统。如果想到必须要要启动Windows系统做些事情,Windows系统也是可以运行的。 - -### 使用Linux系统的好处是什么? ### - -将系统更换到Linux有什么好处呢?一般而言,这种好处对大多数人来说可以归结到释放个性化自由。在使用Linux系统的时候,如果您不喜欢某些设置,可以自行更改它们。同时使用Linux可以为用户节省大量的硬件升级开支和不必要的软件开支。另外,您不需再费力找寻已丢失的软件许可证密钥,而且如果您不喜欢即将发布的系统版本,大可轻松地更换到别的版本。 - -台式机首选Linux系统是令人吃惊的,看起来对新手来说做这种选择非常困难。但是如果您发现了喜欢的一款Linux版本(Debian,Fedora,Arch等),最困难的工作其实已经完成了,您需要做的就是找到各版本并选择出您最喜欢的系统版本环境。 - -如今我听到的最常见的抱怨之一是用户发现没有太多的软件格式能适用于Linux系统。然而,这并不是事实。尽管别的操作系统可能会提供更多软件,但是如今的Linux也已经提供了足够多应用程序满足您的各种需求,包括视频剪辑(家庭版和专业版),摄影,办公管理软件,远程访问,音乐软件,还有很多别的各类软件。 - -### 使用Linux系统您会失去些什么? ### - -虽然我喜欢使用Linux,但我妻子的家庭办公系统依然依赖于OS X。对于用Linux系统完成一些特定的任务她心满意足,但是她仍习惯于使用提供Linux不支持的一些软件的OS X系统。这是许多想要更换系统的用户会遇到的一个常见的问题。如果要更换系统,您需要考虑是否愿意失去一些关键的软件工具。 - -有时在Linux系统上遇到问题是因为软件会内容锁定。别的情况下,是在Linux系统上可运行的软件并不适用于传统应用程序的工作流和功能。我自己并没有遇到过这类问题,但是我知道确实存在这些问题。许多Linux上的软件在其他操作系统上也都可以用。所以如果担心这类软件兼容问题,建议您先尝试在已有的系统上操作一下几款类似的应用程序。 - -更换成Linux系统后,另一件您可能会失去的是本地系统支持服务。人们通常会嘲笑这种愚蠢行径,但我知道,无数的新手在使用Linux时会发现解决Linux上各种问题的唯一资源就是来自网络另一端的陌生人提供的帮助。如果只是他们的PC遇到了一些问题,这将会比较麻烦。Windows和OS X的用户已经习惯各城市遍布了支持他们操作系统的各项技术服务。 - -### 如何开启新旅程? 
### - -这里建议大家要记住最重要的就是经常做备份。如果您将Windows 10从硬盘中擦除,您会发现重新安装它又会花费金钱。对那些从其他Windows发布版本升级的用户来说尤其会遇到这种情况。接受这个建议,那就是对新手来说使用闪存安装Linux或使用Windows和Linux双系统都是更值得提倡的做法。您也许会如鱼得水般使用Linux系统,但是有了一份备份计划,您将高枕无忧。 - -相反,如果数周以来您一直依赖于使用双操作系统,但是已经准备好冒险去尝试一下单操作系统,那么就去做吧。格式化您的驱动器,重新安装您喜爱的Linux distribution。数年来我一直都是"全职"Linux使用爱好者,这里可以确定地告诉您,使用Linux系统感觉棒极了。这种感觉会持续多久?我第一次的Linux系统使用经验还是来自早期的Red Hat系统,2003年我已经决定在自己的笔记本上安装专用的Linux系统并一直使用至今。 - -Linux爱好者们,你们什么时候开始使用Linux的?您在最初更换成Linux系统时是兴奋还是焦虑呢?欢迎点击评论分享你们的经验。 - --------------------------------------------------------------------------------- - -via: http://www.datamation.com/open-source/is-linux-right-for-you.html - -作者:[Matt Hartley][a] -译者:[icybreaker](https://github.com/icybreaker) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.datamation.com/author/Matt-Hartley-3080.html -[1]:http://www.psychocats.net/ubuntu/virtualbox -[2]:http://www.howtogeek.com/howto/14912/create-a-persistent-bootable-ubuntu-usb-flash-drive/ -[3]:http://www.linuxandubuntu.com/home/dual-boot-ubuntu-15-04-14-10-and-windows-10-8-1-8-step-by-step-tutorial-with-screenshots From 980d64729a1480cec0fb9a79747c508ce2ee63aa Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 21 Sep 2015 15:40:14 +0800 Subject: [PATCH 572/697] =?UTF-8?q?20150921-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
The New Ubuntu 15.10 Default Wallpaper.md | 44 ++++ ...21 Configure PXE Server In Ubuntu 14.04.md | 249 ++++++++++++++++++ ...onCube Loaders on Ubuntu 14.04 or 15.04.md | 89 +++++++ ...Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md | 102 +++++++ 4 files changed, 484 insertions(+) create mode 100644 sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md create mode 100644 sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md create mode 100644 sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md create mode 100644 sources/tech/20150921 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md diff --git a/sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md b/sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md new file mode 100644 index 0000000000..557fcbc427 --- /dev/null +++ b/sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md @@ -0,0 +1,44 @@ +Meet The New Ubuntu 15.10 Default Wallpaper +================================================================================ +**The brand new default wallpaper for Ubuntu 15.10 Wily Werewolf has been unveiled. ** + +At first glance you may find little has changed from the origami-inspired ‘Suru’ design shipped with April’s release of Ubuntu 15.04. But look closer and you’ll see that the new default background does feature some subtle differences. + +For one it looks much lighter, helped by an orange glow emanating from the upper-left of the image. The angular folds and sections remain, but with the addition of blocky, rectangular sections. + +The new background has been designed by Canonical Design Team member Alex Milazzo. 
+ +![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/09/ubuntu-1510-wily-werewolf-wallpaper.jpg) + +The Ubuntu 15.10 default desktop wallpaper + +And just to show that there is a change, here is the Ubuntu 15.04 default wallpaper for comparison: + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/03/suru-desktop-wallpaper-ubuntu-vivid.jpg) + +The Ubuntu 15.04 default desktop wallpaper + +### Download Ubuntu 15.10 Wallpaper ### + +If you’re running daily builds of Ubuntu 15.10 Wily Werewolf and don’t yet see this as your default wallpaper you’ve not broken anything: the design has been unveiled but is, as of writing, yet to be packaged and uploaded to Wily itself. + +You don’t have to wait until October to use the new design as your desktop background. You can download the wallpaper in a huge HiDPI display friendly 4096×2304 resolution by hitting the button below. + +- [Download the new Ubuntu 15.10 Default Wallpaper][1] + +Finally, as we say this every time there’s a new wallpaper, you don’t have to care about the minutiae of distribution branding and design. If the new wallpaper is not to your tastes or you never keep it you can, as ever, easily change it — this isn’t the Ubuntu Phone after all! + +**Are you a fan of the refreshed look? Let us know in the comments below. 
**
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2015/09/ubuntu-15-10-wily-werewolf-default-wallpaper
+
+作者:[Joey-Elijah Sneddon][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:https://launchpadlibrarian.net/218258177/Wolf_Wallpaper_Desktop_4096x2304_Purple_PNG-24.png
\ No newline at end of file
diff --git a/sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md b/sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md
new file mode 100644
index 0000000000..0ac8dbc527
--- /dev/null
+++ b/sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md
@@ -0,0 +1,249 @@
+Configure PXE Server In Ubuntu 14.04
+================================================================================
+![](https://www.maketecheasier.com/assets/uploads/2015/09/pxe-featured.jpg)
+
+A PXE (Preboot Execution Environment) server allows the user to boot a Linux distribution from the network and install it on hundreds of PCs at a time without any Linux ISO images. If your clients’ computers don’t have CD/DVD or USB drives, or if you want to set up multiple computers at the same time in a large enterprise, then a PXE server can be used to save money and time.
+
+In this article we will show you how you can configure a PXE server in Ubuntu 14.04.
+
+### Configure Networking ###
+
+To get started, you first need to set up your PXE server to use a static IP. To set up a static IP address on your system, you need to edit the “/etc/network/interfaces” file.
+
+1. Open the “/etc/network/interfaces” file. 
+
+    sudo nano /etc/network/interfaces
+
+Add/edit as described below:
+
+    # The loopback network interface
+    auto lo
+    iface lo inet loopback
+    # The primary network interface
+    auto eth0
+    iface eth0 inet static
+    address 192.168.1.20
+    netmask 255.255.255.0
+    gateway 192.168.1.1
+    dns-nameservers 8.8.8.8
+
+Save the file and exit. This will set the server’s IP address to “192.168.1.20”. Restart the network service.
+
+    sudo /etc/init.d/networking restart
+
+### Install DHCP, TFTP and NFS: ###
+
+DHCP, TFTP and NFS are essential components for configuring a PXE server. First you need to update your system and install all the necessary packages.
+
+For this, run the following commands:
+
+    sudo apt-get update
+    sudo apt-get install isc-dhcp-server inetutils-inetd tftpd-hpa syslinux nfs-kernel-server
+
+### Configure DHCP Server: ###
+
+DHCP stands for Dynamic Host Configuration Protocol, and it is used mainly for dynamically distributing network configuration parameters, such as IP addresses, to interfaces and services. A DHCP server in a PXE environment allows clients to request and receive an IP address automatically to gain access to the network servers.
+
+1. Edit the “/etc/default/isc-dhcp-server” file.
+
+    sudo nano /etc/default/isc-dhcp-server
+
+Add/edit as described below:
+
+    INTERFACES="eth0"
+
+Save (Ctrl + o) and exit (Ctrl + x) the file.
+
+2. Edit the “/etc/dhcp/dhcpd.conf” file:
+
+    sudo nano /etc/dhcp/dhcpd.conf
+
+Add/edit as described below:
+
+    default-lease-time 600;
+    max-lease-time 7200;
+    subnet 192.168.1.0 netmask 255.255.255.0 {
+    range 192.168.1.21 192.168.1.240;
+    option subnet-mask 255.255.255.0;
+    option routers 192.168.1.20;
+    option broadcast-address 192.168.1.255;
+    filename "pxelinux.0";
+    next-server 192.168.1.20;
+    }
+
+Save the file and exit.
+
+3. Start the DHCP service.
+
+    sudo /etc/init.d/isc-dhcp-server start
+
+### Configure TFTP Server: ###
+
+TFTP is a file-transfer protocol which is similar to FTP. 
It is used where user authentication and directory visibility are not required. The TFTP server is always listening for PXE clients on the network. When it detects a PXE client on the network asking for PXE services, it serves a network package that contains the boot menu.
+
+1. To configure TFTP, edit the “/etc/inetd.conf” file.
+
+    sudo nano /etc/inetd.conf
+
+Add/edit as described below:
+
+    tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot
+
+Save and exit the file.
+
+2. Edit the “/etc/default/tftpd-hpa” file.
+
+    sudo nano /etc/default/tftpd-hpa
+
+Add/edit as described below:
+
+    TFTP_USERNAME="tftp"
+    TFTP_DIRECTORY="/var/lib/tftpboot"
+    TFTP_ADDRESS="0.0.0.0:69"
+    TFTP_OPTIONS="--secure"
+    RUN_DAEMON="yes"
+    OPTIONS="-l -s /var/lib/tftpboot"
+
+Save and exit the file.
+
+3. Enable the boot service for `inetd` so it starts automatically after every system reboot, and start the tftpd service.
+
+    sudo update-inetd --enable BOOT
+    sudo service tftpd-hpa start
+
+4. Check the status.
+
+    sudo netstat -lu
+
+It will show the following output:
+
+    Proto Recv-Q Send-Q Local Address Foreign Address State
+    udp 0 0 *:tftp *:*
+
+### Configure PXE boot files ###
+
+Now you need the PXE boot file “pxelinux.0” to be present in the TFTP root directory. Make a directory structure for TFTP, and copy all the bootloader files provided by syslinux from “/usr/lib/syslinux/” to “/var/lib/tftpboot/” by issuing the following commands:
+
+    sudo mkdir /var/lib/tftpboot
+    sudo mkdir /var/lib/tftpboot/pxelinux.cfg
+    sudo mkdir -p /var/lib/tftpboot/Ubuntu/14.04/amd64/
+    sudo cp /usr/lib/syslinux/vesamenu.c32 /var/lib/tftpboot/
+    sudo cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/
+
+#### Set up PXELINUX configuration file ####
+
+The PXE configuration file defines the boot menu displayed to the PXE client when it boots up and contacts the TFTP server. 
By default, when a PXE client boots up, it will use its own MAC address to specify which configuration file to read, so we need to create a default file that contains the list of kernels which are available to boot.
+
+Edit the PXE server configuration file with valid installation options.
+
+To edit “/var/lib/tftpboot/pxelinux.cfg/default”:
+
+    sudo nano /var/lib/tftpboot/pxelinux.cfg/default
+
+Add/edit as described below:
+
+    DEFAULT vesamenu.c32
+    TIMEOUT 100
+    PROMPT 0
+    MENU INCLUDE pxelinux.cfg/pxe.conf
+    NOESCAPE 1
+    LABEL Try Ubuntu 14.04 Desktop
+    MENU LABEL Try Ubuntu 14.04 Desktop
+    kernel Ubuntu/vmlinuz
+    append boot=casper netboot=nfs nfsroot=192.168.1.20:/var/lib/tftpboot/Ubuntu/14.04/amd64
+    initrd=Ubuntu/initrd.lz quiet splash
+    ENDTEXT
+    LABEL Install Ubuntu 14.04 Desktop
+    MENU LABEL Install Ubuntu 14.04 Desktop
+    kernel Ubuntu/vmlinuz
+    append boot=casper automatic-ubiquity netboot=nfs nfsroot=192.168.1.20:/var/lib/tftpboot/Ubuntu/14.04/amd64
+    initrd=Ubuntu/initrd.lz quiet splash
+    ENDTEXT
+
+Save and exit the file.
+
+Edit the “/var/lib/tftpboot/pxelinux.cfg/pxe.conf” file.
+
+    sudo nano /var/lib/tftpboot/pxelinux.cfg/pxe.conf
+
+Add/edit as described below:
+
+    MENU TITLE PXE Server
+    NOESCAPE 1
+    ALLOWOPTIONS 1
+    PROMPT 0
+    MENU WIDTH 80
+    MENU ROWS 14
+    MENU TABMSGROW 24
+    MENU MARGIN 10
+    MENU COLOR border 30;44 #ffffffff #00000000 std
+
+Save and exit the file.
+
+### Add Ubuntu 14.04 Desktop Boot Images to PXE Server ###
+
+For this, the Ubuntu kernel and initrd files are required. To get those files, you need the Ubuntu 14.04 Desktop ISO image. You can download the Ubuntu 14.04 ISO image into the /mnt folder by issuing the following commands:
+
+    cd /mnt
+    sudo wget http://releases.ubuntu.com/14.04/ubuntu-14.04.3-desktop-amd64.iso
+
+**Note**: the download URL might change as the ISO image is updated. Check the release page at http://releases.ubuntu.com/14.04/ for the latest download link if the above URL is not working. 
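Since the download URL changes as the ISO image is updated, it is also worth verifying the image before mounting it. A minimal sketch of such a check (`verify_iso.sh` is a hypothetical helper name; the expected checksum must come from the `SHA256SUMS` file published alongside the ISO, and the self-test value below is simply the SHA-256 of the word "hello" followed by a newline):

```shell
#!/bin/sh
# verify_iso.sh -- compare a file's SHA-256 checksum against an expected value.
# The expected value should come from the SHA256SUMS file published with the ISO.

verify_sha256() {
    file="$1"; expected="$2"
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "checksum OK: $file"
    else
        echo "checksum MISMATCH for $file: got $actual"
        return 1
    fi
}

# Self-test on a throwaway file, so the script can be exercised without the ISO:
demo=$(mktemp)
printf 'hello\n' > "$demo"
verify_sha256 "$demo" "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"
rm -f "$demo"
```

With the real image this would be invoked as `verify_sha256 /mnt/ubuntu-14.04.3-desktop-amd64.iso <sum-from-SHA256SUMS>`.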
+
+Mount the ISO file, and copy all the files to the TFTP folder by issuing the following commands:
+
+    sudo mount -o loop /mnt/ubuntu-14.04.3-desktop-amd64.iso /media/
+    sudo cp -r /media/* /var/lib/tftpboot/Ubuntu/14.04/amd64/
+    sudo cp -r /media/.disk /var/lib/tftpboot/Ubuntu/14.04/amd64/
+    sudo cp /media/casper/initrd.lz /media/casper/vmlinuz /var/lib/tftpboot/Ubuntu/
+
+### Configure NFS Server to Export ISO Contents ###
+
+Now you need to set up an installation source via the NFS protocol. You can also use HTTP or FTP for the installation source. Here I have used NFS to export the ISO contents.
+
+To configure the NFS server, you need to edit the “/etc/exports” file.
+
+    sudo nano /etc/exports
+
+Add/edit as described below:
+
+    /var/lib/tftpboot/Ubuntu/14.04/amd64 *(ro,async,no_root_squash,no_subtree_check)
+
+Save and exit the file. For the changes to take effect, export the share and start the NFS service.
+
+    sudo exportfs -a
+    sudo /etc/init.d/nfs-kernel-server start
+
+Now your PXE server is ready.
+
+### Configure Network Boot PXE Client ###
+
+A PXE client can be any computer system with a PXE network boot option. Now your clients can boot and install Ubuntu 14.04 Desktop by enabling the “Boot From Network” option in their system’s BIOS.
+
+You’re now ready to go – start your PXE client machine with network boot enabled, and you should see a sub-menu showing the Ubuntu 14.04 Desktop entries that we created.
+
+![pxe](https://www.maketecheasier.com/assets/uploads/2015/09/pxe.png)
+
+### Conclusion ###
+
+Configuring a network boot installation using a PXE server is an efficient and time-saving method. You can install hundreds of clients at a time on your local network. All you need is a PXE server and PXE-enabled clients. Try it out, and let us know if this works for you. 
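One troubleshooting tip worth appending: if a client obtains an address but never reaches the boot menu, compare the TFTP requests in the server log with the configuration file names PXELINUX actually asks for. Per the syslinux documentation, the client tries a fixed sequence of names under `pxelinux.cfg/` (client UUID first when one is available, which this sketch omits), and a small script can print that order for a given client:

```shell
#!/bin/sh
# pxe_cfg_names.sh -- print the file names a pxelinux client requests from
# pxelinux.cfg/, in order, for a given MAC and IPv4 address.
# (Lookup by client UUID, tried first when available, is omitted here.)
MAC="${1:-88:99:aa:bb:cc:dd}"
IP="${2:-192.168.1.25}"

# 1. MAC address, prefixed with the ARP type "01", with colons as dashes
echo "01-$(echo "$MAC" | tr ':' '-' | tr 'A-F' 'a-f')"

# 2. The IPv4 address in upper-case hex, shortened one digit at a time
HEX=$(printf '%02X%02X%02X%02X' $(echo "$IP" | tr '.' ' '))
while [ -n "$HEX" ]; do
    echo "$HEX"
    HEX=${HEX%?}
done

# 3. Finally, the catch-all file
echo "default"
```

For the defaults above, the client would try `01-88-99-aa-bb-cc-dd`, then `C0A80119`, `C0A8011`, and so on down to `C`, and finally `default`.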
+
+Reference:
+- [PXE Server wiki][1]
+- [PXE Server Ubuntu][2]
+
+Image credit: [fupsol_unl_20][3]
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/configure-pxe-server-ubuntu/
+
+作者:[Hitesh Jethva][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/hiteshjethva/
+[1]:https://en.wikipedia.org/wiki/Preboot_Execution_Environment
+[2]:https://help.ubuntu.com/community/PXEInstallServer
+[3]:https://www.flickr.com/photos/jhcalderon/3681926417/
\ No newline at end of file
diff --git a/sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md b/sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md
new file mode 100644
index 0000000000..c3f9dc366d
--- /dev/null
+++ b/sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md
@@ -0,0 +1,89 @@
+How to Setup IonCube Loaders on Ubuntu 14.04 / 15.04
+================================================================================
+ionCube Loader is an encryption/decryption utility for PHP applications which also assists in speeding up the pages that are served. It protects your website's PHP code from being viewed and run on unlicensed computers. Using ionCube-encoded and secured PHP files requires a component called the ionCube Loader to be installed on the web server and made available to PHP, which many PHP-based applications require. It handles the reading and execution of encoded files at run time. PHP can use the loader with one line added to the PHP configuration file ‘php.ini’.
+
+### Prerequisites ###
+
+In this article we will set up ionCube Loader on Ubuntu 14.04/15.04, so that it can be used in all PHP modes. 
The only requirements for this tutorial are that the "php.ini" file exists on your system and that a LEMP stack is installed on the server.
+
+### Download IonCube Loader ###
+
+Log in to your Ubuntu server and download the latest ionCube loader package matching your operating system architecture, depending on whether you are using a 32-bit or 64-bit OS. You can get the package by issuing the following command with super-user or root privileges.
+
+    # wget http://downloads3.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz
+
+![download ioncube](http://blog.linoxide.com/wp-content/uploads/2015/09/download1.png)
+
+After downloading, unpack the archive into the "/usr/local/src/" folder by issuing the following command.
+
+    # tar -zxvf ioncube_loaders_lin_x86-64.tar.gz -C /usr/local/src/
+
+![extracting archive](http://blog.linoxide.com/wp-content/uploads/2015/09/2-extract.png)
+
+After extracting the archive, we can see the list of all the modules present in it, but we need only the one matching the version of PHP installed on our system.
+
+To check your PHP version, you can run the below command to find the relevant module.
+
+    # php -v
+
+![ioncube modules](http://blog.linoxide.com/wp-content/uploads/2015/09/modules.png)
+
+From the output of the above command, we know that the PHP version installed on the system is 5.6.4, so we need to copy the appropriate module to the PHP modules folder.
+
+To do so, we will create a new folder named "ioncube" within the "/usr/local/" directory and copy the required ionCube loader modules into it.
+
+    root@ubuntu-15:/usr/local/src/ioncube# mkdir /usr/local/ioncube
+    root@ubuntu-15:/usr/local/src/ioncube# cp ioncube_loader_lin_5.6.so ioncube_loader_lin_5.6_ts.so /usr/local/ioncube/
+
+### PHP Configuration ###
+
+Now we need to put the following line into the PHP configuration file "php.ini", which is located in the "/etc/php5/cli/" folder, and then restart the web server services and the PHP module. 
+
+    # vim /etc/php5/cli/php.ini
+
+Add the loader as a zend_extension, pointing at the module copied above:
+
+    zend_extension = /usr/local/ioncube/ioncube_loader_lin_5.6.so
+
+![ioncube zend extension](http://blog.linoxide.com/wp-content/uploads/2015/09/zend-extension.png)
+
+In our scenario we have the Nginx web server installed, so we will run the following commands to restart its services.
+
+    # service php5-fpm restart
+    # service nginx restart
+
+![web services](http://blog.linoxide.com/wp-content/uploads/2015/09/web-services.png)
+
+### Testing IonCube Loader ###
+
+To test the ionCube loader in the PHP configuration of your website, create a test file called "info.php" with the following content and place it into the web directory of your web server.
+
+    # vim /usr/share/nginx/html/info.php
+
+    <?php
+    phpinfo();
+    ?>
+
+After placing the phpinfo script, save the changes, reload the web server services, and access "info.php" in your browser using your domain name or server’s IP address.
+
+You will be able to see the below section at the bottom of your PHP modules information.
+
+![php info](http://blog.linoxide.com/wp-content/uploads/2015/09/php-info.png)
+
+From the terminal, issue the following command to verify the PHP version; the output also shows that the ionCube PHP Loader is enabled.
+
+    # php -v
+
+![php ioncube loader](http://blog.linoxide.com/wp-content/uploads/2015/09/php-ioncube.png)
+
+The output of the PHP version command clearly indicates that the ionCube loader has been successfully integrated with PHP.
+
+### Conclusion ###
+
+At the end of this tutorial you have learnt about the installation and configuration of ionCube Loader on Ubuntu with the Nginx web server; there will be no real difference if you are using any other web server. Installing the loader is simple when it is done correctly, and on most servers its installation will work without a problem. However, there is no such thing as a "standard PHP installation", and servers can be set up in many different ways, with different features enabled or disabled. 
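Debugging a loader that silently fails to load usually comes down to two things: the `zend_extension` line and the path it points to. A small sketch of a check script (`check_loader` is a hypothetical helper name; the demo section at the bottom exists only so the script can be exercised anywhere, while on a real server you would point it at "/etc/php5/cli/php.ini"):

```shell
#!/bin/sh
# check_ioncube.sh -- confirm that a php.ini contains an ionCube
# zend_extension line and that the referenced loader file exists on disk.

check_loader() {
    ini="$1"
    line=$(grep -i '^[[:space:]]*zend_extension[[:space:]]*=.*ioncube' "$ini" | head -n 1)
    if [ -z "$line" ]; then
        echo "MISSING: no ionCube zend_extension line in $ini"
        return 1
    fi
    so=$(echo "$line" | sed 's/^[^=]*=[[:space:]]*//')
    if [ -f "$so" ]; then
        echo "OK: $so"
    else
        echo "NOT-FOUND: $so"
        return 1
    fi
}

# Demo on a throwaway php.ini so the check can be exercised anywhere;
# on a real server you would run: check_loader /etc/php5/cli/php.ini
demo=$(mktemp -d)
touch "$demo/ioncube_loader_lin_5.6.so"
printf 'zend_extension = %s\n' "$demo/ioncube_loader_lin_5.6.so" > "$demo/php.ini"
check_loader "$demo/php.ini"
rm -rf "$demo"
```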
+
+If you are on a shared server, then make sure that you have run the ioncube-loader-helper.php script and clicked the link to test the run-time installation. If you still face any such issue while doing your setup, feel free to contact us and leave us a comment.
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/ubuntu-how-to/setup-ioncube-loaders-ubuntu-14-04-15-04/
+
+作者:[Kashif Siddique][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
\ No newline at end of file
diff --git a/sources/tech/20150921 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/sources/tech/20150921 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md
new file mode 100644
index 0000000000..897550432b
--- /dev/null
+++ b/sources/tech/20150921 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md
@@ -0,0 +1,102 @@
+How to Setup Node JS v4.0.0 on Ubuntu 14.04 / 15.04
+================================================================================
+Hi everyone, Node.js version 4.0.0 is out: the popular server-side JavaScript platform has combined the Node.js and io.js code bases. This release represents the combined efforts encapsulated in both the Node.js project and the io.js project, now merged in a single codebase. The most important change is that Node.js now ships with version 4.5 of Google's V8 JavaScript engine, the same version that ships with the current Chrome browser. Being able to track V8’s releases more closely means Node.js runs JavaScript faster and more securely, with the ability to use many desirable ES6 language features. 
+
+![Node JS](http://blog.linoxide.com/wp-content/uploads/2015/09/nodejs.png)
+
+Node.js 4.0.0 aims to provide an easy update path for current users of io.js and Node, as there are no major API changes. Let’s see how you can easily get it installed and set up on an Ubuntu server by following this simple article.
+
+### Basic System Setup ###
+
+Node works perfectly on Linux, Macintosh, and Solaris operating systems, and among the Linux distributions it has the best results on Ubuntu. That's why we are going to set it up on Ubuntu 15.04, while the same steps can be followed on Ubuntu 14.04.
+
+**1) System Resources**
+
+The basic system resources for Node depend upon the size of your infrastructure requirements. So, here in this tutorial we will set up Node on a server with 1 GB RAM, a 1 GHz processor and 10 GB of available disk space, with a minimal set of packages installed, that is, with no web or database server packages installed.
+
+**2) System Update**
+
+It is always recommended to keep your system up to date with the latest patches and updates, so before we move to the installation of Node, let's log in to the server with super-user privileges and run the update command.
+
+    # apt-get update
+
+**3) Installing Dependencies**
+
+Node JS only requires some basic system and software utilities to be present on your server for a successful installation, such as 'python', 'gcc', 'make', 'g++' and 'wget'. Let's run the below command to get them installed if they are not already present.
+
+    # apt-get install python gcc make g++ wget
+
+### Download Latest Node JS v4.0.0 ###
+
+Let's download the latest Node JS version 4.0.0 from the [Node JS Download Page][1].
+
+![](http://blog.linoxide.com/wp-content/uploads/2015/09/download.png)
+
+We will copy the link location of its latest package and download it using the 'wget' command as shown.
+
+    # wget https://nodejs.org/download/rc/v4.0.0-rc.1/node-v4.0.0-rc.1.tar.gz
+
+Once the download completes, unpack it using the 'tar' command as shown. 
+
+    # tar -zxvf node-v4.0.0-rc.1.tar.gz
+
+![](http://blog.linoxide.com/wp-content/uploads/2015/09/wget.png)
+
+### Installing Node JS v4.0.0 ###
+
+Now we have to start the installation of Node JS from its downloaded source code. So, change into the unpacked directory and configure the source code by running its configuration script before compiling it on your Ubuntu server.
+
+    root@ubuntu-15:~/node-v4.0.0-rc.1# ./configure
+
+![](http://blog.linoxide.com/wp-content/uploads/2015/09/configure.png)
+
+Now run the 'make install' command to compile and install the Node JS package as shown.
+
+    root@ubuntu-15:~/node-v4.0.0-rc.1# make install
+
+The make command will take a few minutes to compile the binaries, so after executing the above command, wait for a while and keep calm.
+
+### Testing Node JS Installation ###
+
+Once the compilation process is complete, we will test whether everything went fine. Let's run the following command to confirm the installed version of Node JS.
+
+    root@ubuntu-15:~# node -v
+    v4.0.0-pre
+
+By executing 'node' without any arguments from the command line, you will be dropped into the REPL (Read-Eval-Print-Loop), which has simplistic emacs-style line editing and lets you interactively run JavaScript and see the results.
+
+![](http://blog.linoxide.com/wp-content/uploads/2015/09/node.png)
+
+### Writing Test Program ###
+
+We can also try out a very simple console program to test the successful installation and proper working of Node JS. To do so, we will create a file named "test.js", write the following code into it, and save the changes as shown.
+
+    root@ubuntu-15:~# vim test.js
+    var util = require("util");
+    console.log("Hello! This is a Node Test Program");
+    :wq!
+
+Now, in order to run the above program, run the below command from the command prompt. 
+
+    root@ubuntu-15:~# node test.js
+
+![](http://blog.linoxide.com/wp-content/uploads/2015/09/node-test.png)
+
+Upon successful installation we will get the output as shown in the screen. The above program loads the "util" module into a variable "util" and then uses console.log to print the message to the console; console.log is a command similar to cout in C++.
+
+### Conclusion ###
+
+That’s it. Hope this gives you a good idea of getting going with Node.js on Ubuntu, especially if you are new to developing applications with Node.js. All in all, we can expect significant performance gains with Node JS version 4.0.0.
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/ubuntu-how-to/setup-node-js-4-0-ubuntu-14-04-15-04/
+
+作者:[Kashif Siddique][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
+[1]:https://nodejs.org/download/rc/v4.0.0-rc.1/
\ No newline at end of file
From 066e66a5c2e80b3a36ef22df4f92f8792ad60499 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Mon, 21 Sep 2015 15:48:28 +0800
Subject: [PATCH 573/697] =?UTF-8?q?20150921-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ps for teaching open source development.md | 72 +++++++++++++++++++
 1 file changed, 72 insertions(+)
 create mode 100644 sources/talk/20150921 14 tips for teaching open source development.md

diff --git a/sources/talk/20150921 14 tips for teaching open source development.md b/sources/talk/20150921 14 tips for teaching open source development.md
new file mode 100644
index 0000000000..b2812d44c8
--- /dev/null
+++ b/sources/talk/20150921 14 tips for teaching open source development.md
@@ -0,0 +1,72 @@
+14 tips for teaching open source development 
+================================================================================ +Academia is an excellent platform for training and preparing the open source developers of tomorrow. In research, we occasionally open source software we write. We do this for two reasons. One, to promote the use of the tools we produce. And two, to learn more about the impact and issues other people face when using them. With this background of writing research software, I was tasked with redesigning the undergraduate software engineering course for second-year students at the University of Bradford. + +It was a challenge, as I was faced with 80 students coming for different degrees, including IT, business computing, and software engineering, all in the same course. The hardest part was working with students with a wide range of programming experience levels. Traditionally, the course had involved allowing students to choose their own teams, tasking them with building a garage database system and then submitting a report in the end as part of the assessment. + +I decided to redesign the course to give students insight into the process of working on real-world software teams. I divided the students into teams of five or six, based on their degrees and programming skills. The aim was to have an equal distribution of skills across the teams to prevent any unfair advantage of one team over another. + +### The core lessons ### + +The course format was updated to have both lectures and lab sessions. However, the lab session functioned as mentoring sessions, where instructors visited each team to ask for updates and see how the teams were progressing with the clients and the products. There were traditional lectures on project management, software testing, requirements engineering, and similar topics, supplemented by lab sessions and mentor meetings. 
These meetings allowed us to check up on students' progress and monitor whether they were following the software engineering methodologies taught in the lecture portion. Topics we taught this year included:
+
+- Requirements engineering
+- How to interact with clients and other team members
+- Software methodologies, such as agile and extreme programming approaches
+- How to use different software engineering approaches and work through sprints
+- Team meetings and documentation
+- Project management and Gantt charts
+- UML diagrams and system descriptions
+- Code revision control using Git
+- Software testing and bug tracking
+- Using open source libraries for their tools
+- Open source licenses and which one to use
+- Software delivery
+
+Along with these lectures, we had a few guest speakers from the corporate world talk about their practices in software product delivery. We also managed to get the university’s intellectual property lawyer to come and talk about IP issues surrounding software in the UK, and how to handle any intellectual property issues in software.
+
+### Collaboration tools ###
+
+To make all of the above possible, a number of tools were introduced. Students were trained on how to use them for their projects. These included:
+
+- Google Drive folders shared within the team and with the tutor, to maintain documents and spreadsheets for project descriptions, requirements gathering, meeting minutes, and time tracking for the project. This was an extremely efficient way to monitor progress and also provide feedback straight into the folders for each team.
+- [Basecamp][1] for document sharing as well, and later in the course we considered this as a possible replacement for Google Drive.
+- Bug-reporting tools such as [Mantis][2], which again allows only a limited number of users for free. 
Later, Git itself was used by the testers in the teams to report bugs in any of the tools.
+- Remote videoconferencing tools were used, as a number of clients were off-campus and sometimes not even in the same city. The students regularly used Skype to communicate with them, documenting their meetings and sometimes even recording them for later use.
+- A number of open source tool kits were also used for the students' projects. The students were allowed to choose their own tool kits and languages based on the requirements of the projects. The only condition was that these had to be open source and installable in the university labs, which the technical staff was extremely supportive of.
+- In the end, all teams had to deliver their projects to the client, including a complete working version of the software, documentation, and an open source license of their own choosing. Most of the teams chose the GPL version 3 license.
+
+### Tips and lessons learned ###
+
+In the end, it was a fun year and nearly all students did very well. Here are some of the lessons I learned which may help improve the course next year:
+
+1. Give the students a wide variety of choice in projects that are interesting, such as game development or mobile application development, and projects with real goals. Working with mundane database systems is not going to keep most students interested. While working on interesting projects, most students became self-learners, and also helped others in their teams and outside them to solve common issues. The course also had a message list, where students posted any issues they were encountering, in hopes of receiving advice from others. However, there was a drawback to this approach: the external examiners advised us to go back to a single type of project and a single language, to help narrow the assessment criteria for the students.
+1. Give students regular feedback on their performance at every stage. 
This could be done during the mentoring meetings with the teams, or at other stages, to help them improve their work for next time.
+1. Students are more than willing to work with clients from outside the university! They look forward to working with external company representatives or people outside the university, just because of the new experience. They were all able to display professional behavior when interacting with their mentors, which put the instructors at ease.
+1. A lot of teams left developing unit tests until the end of the project, which from an extreme programming methodology standpoint was a serious no-no. Maybe testing should be included in the assessments of the various stages to help remind students that they need to be developing unit tests in parallel with the software.
+1. In the class of 80, there were only four girls, each working in a different team. I observed that the boys were very ready to take on roles as team leads, assigning the most interesting code pieces to themselves, while the girls mostly followed instructions or did documentation. For some reason, the girls chose not to show authority or preferred not to code, even when they were encouraged by a female instructor. This is still a major issue that needs to be addressed.
+1. There are different styles of documentation, such as UML, state diagrams, and others. Allow students to learn them all, and merge this material with other courses during the year to improve their learning experience.
+1. Some students were very good developers, but some doing business computing had very little coding experience. The teams were encouraged to work together to dispel the idea that developers would get better marks than team members who were only doing meeting minutes or documentation. Roles were also rotated during mentoring sessions so that everyone got a chance to learn how to program.
+1. 
Allowing the teams to meet with their mentor every week was helpful in monitoring team activities. It also showed who was doing the most work. Usually students who were not participating in their groups would not come to meetings, and could be identified from the work being presented by other members every week.
+1. We encouraged students to attach licenses to their work and to identify intellectual property issues when working with external libraries and clients. This allowed students to think outside the box and learn about real-world software delivery problems.
+1. Give students room to choose their own technologies.
+1. Having teaching assistants is key. Managing 80 students was very difficult, especially in the weeks when they were being assessed. Next year I would definitely have teaching assistants helping me with the teams.
+1. Supportive tech support for the lab is very important. The university tech support was extremely supportive of the course. Next year, they are talking about having virtual machines assigned to teams, so the teams can install any software on their own virtual machine as needed.
+1. Teamwork helps. Most teams exhibited a supportive attitude toward other team members, and mentoring also helped.
+1. Additional support from other staff members is a plus. As a new academic, I needed to learn from experience and also seek advice at multiple points on how to handle certain students and teams when I was unsure how to engage them with the course. Support from senior staff members was very encouraging to me.
+
+In the end, it was a fun course—not only for me as an instructor, but for the students as well. There were some issues with learning objectives and traditional grading schemes that still need to be ironed out to reduce the workload they produced on the instructors. 
For next year, I plan to keep this same format, but hope to come up with a better grading scheme and introduce more software tools that can help monitor project activities and code revisions. + +-------------------------------------------------------------------------------- + +via: http://opensource.com/education/15/9/teaching-open-source-development-undergraduates + +作者:[Mariam Kiran][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://opensource.com/users/mariamkiran +[1]:https://basecamp.com/ +[2]:https://www.mantisbt.org/ \ No newline at end of file From 103420797f34fea2931a376fb921ec5d03daaa9f Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Mon, 21 Sep 2015 19:32:41 +0800 Subject: [PATCH 574/697] [Translated]RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md --- ...ntrol Essentials with SELinux in RHEL 7.md | 178 ------------------ ...ntrol Essentials with SELinux in RHEL 7.md | 177 +++++++++++++++++ 2 files changed, 177 insertions(+), 178 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md b/sources/tech/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md deleted file mode 100644 index 8d014dcc2e..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md +++ /dev/null @@ -1,178 +0,0 @@ -FSSlc translating - -RHCSA Series: Mandatory Access Control Essentials with SELinux in RHEL 7 – Part 13 
-================================================================================ -During this series we have explored in detail at least two access control methods: standard ugo/rwx permissions ([Manage Users and Groups – Part 3][1]) and access control lists ([Configure ACL’s on File Systems – Part 7][2]). - -![RHCSA Exam: SELinux Essentials and Control FileSystem Access](http://www.tecmint.com/wp-content/uploads/2015/06/SELinux-Control-File-System-Access.png) - -RHCSA Exam: SELinux Essentials and Control FileSystem Access - -Although necessary as first level permissions and access control mechanisms, they have some limitations that are addressed by Security Enhanced Linux (aka SELinux for short). - -One of such limitations is that a user can expose a file or directory to a security breach through a poorly elaborated chmod command and thus cause an unexpected propagation of access rights. As a result, any process started by that user can do as it pleases with the files owned by the user, where finally a malicious or otherwise compromised software can achieve root-level access to the entire system. - -With those limitations in mind, the United States National Security Agency (NSA) first devised SELinux, a flexible mandatory access control method, to restrict the ability of processes to access or perform other operations on system objects (such as files, directories, network ports, etc) to the least permission model, which can be modified later as needed. In few words, each element of the system is given only the access required to function. - -In RHEL 7, SELinux is incorporated into the kernel itself and is enabled in Enforcing mode by default. In this article we will explain briefly the basic concepts associated with SELinux and its operation. - -### SELinux Modes ### - -SELinux can operate in three different ways: - -- Enforcing: SELinux denies access based on SELinux policy rules, a set of guidelines that control the security engine. 
-- Permissive: SELinux does not deny access, but denials are logged for actions that would have been denied if running in enforcing mode. -- Disabled (self-explanatory). - -The `getenforce` command displays the current mode of SELinux, whereas `setenforce` (followed by a 1 or a 0) is used to change the mode to Enforcing or Permissive, respectively, during the current session only. - -In order to achieve persistence across logouts and reboots, you will need to edit the `/etc/selinux/config` file and set the SELINUX variable to either enforcing, permissive, or disabled: - - # getenforce - # setenforce 0 - # getenforce - # setenforce 1 - # getenforce - # cat /etc/selinux/config - -![Set SELinux Mode](http://www.tecmint.com/wp-content/uploads/2015/05/Set-SELinux-Mode.png) - -Set SELinux Mode - -Typically you will use setenforce to toggle between SELinux modes (enforcing to permissive and back) as a first troubleshooting step. If SELinux is currently set to enforcing while you’re experiencing a certain problem, and the same goes away when you set it to permissive, you can be confident you’re looking at a SELinux permissions issue. - -### SELinux Contexts ### - -A SELinux context consists of an access control environment where decisions are made based on SELinux user, role, and type (and optionally a level): - -- A SELinux user complements a regular Linux user account by mapping it to a SELinux user account, which in turn is used in the SELinux context for processes in that session, in order to explicitly define their allowed roles and levels. -- The concept of role acts as an intermediary between domains and SELinux users in that it defines which process domains and file types can be accessed. This will shield your system against vulnerability to privilege escalation attacks. -- A type defines an SELinux file type or an SELinux process domain. 
Under normal circumstances, processes are prevented from accessing files that other processes use, and and from accessing other processes, thus access is only allowed if a specific SELinux policy rule exists that allows it. - -Let’s see how all of that works through the following examples. - -**EXAMPLE 1: Changing the default port for the sshd daemon** - -In [Securing SSH – Part 8][3] we explained that changing the default port where sshd listens on is one of the first security measures to secure your server against external attacks. Let’s edit the `/etc/ssh/sshd_config` file and set the port to 9999: - - Port 9999 - -Save the changes, and restart sshd: - - # systemctl restart sshd - # systemctl status sshd - -![Change SSH Port](http://www.tecmint.com/wp-content/uploads/2015/05/Change-SSH-Port.png) - -Restart SSH Service - -As you can see, sshd has failed to start. But what happened? - -A quick inspection of `/var/log/audit/audit.log` indicates that sshd has been denied permissions to start on port 9999 (SELinux log messages include the word “AVC” so that they might be easily identified from other messages) because that is a reserved port for the JBoss Management service: - - # cat /var/log/audit/audit.log | grep AVC | tail -1 - -![Inspect SSH Logs](http://www.tecmint.com/wp-content/uploads/2015/05/Inspect-SSH-Logs.png) - -Inspect SSH Logs - -At this point you could disable SELinux (but don’t!) as explained earlier and try to start sshd again, and it should work. However, the semanage utility can tell us what we need to change in order for us to be able to start sshd in whatever port we choose without issues. - -Run, - - # semanage port -l | grep ssh - -to get a list of the ports where SELinux allows sshd to listen on. 
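The same lookup can be scripted before committing to a port change. The following is a minimal sketch, not part of the original article, that works against a hypothetical `semanage port -l` listing captured as text; on a real RHEL 7 host you would substitute the live command output:

```shell
# Minimal sketch: test whether a candidate port is already claimed by an
# SELinux port type before running `semanage port -a`.
# The $sample listing below is hypothetical; on a real host you would use:
#   sample=$(semanage port -l)
sample='jboss_management_port_t        tcp      4712, 9999
ssh_port_t                     tcp      22'

candidate=9998
# Split comma-separated port lists onto their own lines, then look for the
# candidate as a whole word.
if printf '%s\n' "$sample" | tr ',' '\n' | grep -Eq "(^|[[:space:]])${candidate}([[:space:]]|\$)"; then
    echo "port ${candidate} is already claimed by an SELinux port type"
else
    echo "port ${candidate} appears unclaimed; it can be added with semanage port -a"
fi
```

If the candidate port shows up in the listing, pick a different one or reuse the type that owns it, which is what the article does when it falls back from 9999 to 9998.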
- -![Semanage Tool](http://www.tecmint.com/wp-content/uploads/2015/05/SELinux-Permission.png) - -Semanage Tool - -So let’s change the port in /etc/ssh/sshd_config to Port 9998, add the port to the ssh_port_t context, and then restart the service: - - # semanage port -a -t ssh_port_t -p tcp 9998 - # systemctl restart sshd - # systemctl is-active sshd - -![Semanage Add Port](http://www.tecmint.com/wp-content/uploads/2015/05/Semenage-Add-Port.png) - -Semanage Add Port - -As you can see, the service was started successfully this time. This example illustrates the fact that SELinux controls the TCP port number to its own port type internal definitions. - -**EXAMPLE 2: Allowing httpd to send access sendmail** - -This is an example of SELinux managing a process accessing another process. If you were to implement mod_security and mod_evasive along with Apache in your RHEL 7 server, you need to allow httpd to access sendmail in order to send a mail notification in the wake of a (D)DoS attack. In the following command, omit the -P flag if you do not want the change to be persistent across reboots. - - # semanage boolean -1 | grep httpd_can_sendmail - # setsebool -P httpd_can_sendmail 1 - # semanage boolean -1 | grep httpd_can_sendmail - -![Allow Apache to Send Mails](http://www.tecmint.com/wp-content/uploads/2015/05/Allow-Apache-to-Send-Mails.png) - -Allow Apache to Send Mails - -As you can tell from the above example, SELinux boolean settings (or just booleans) are true / false rules embedded into SELinux policies. You can list all the booleans with `semanage boolean -l`, and alternatively pipe it to grep in order to filter the output. - -**EXAMPLE 3: Serving a static site from a directory other than the default one** - -Suppose you are serving a static website using a different directory than the default one (`/var/www/html`), say /websites (this could be the case if you’re storing your web files in a shared network drive, for example, and need to mount it at /websites). 
- -a). Create an index.html file inside /websites with the following contents: - - -

    <html><body>SELinux test</body></html>

- - -If you do, - - # ls -lZ /websites/index.html - -you will see that the index.html file has been labeled with the default_t SELinux type, which Apache can’t access: - -![Check SELinux File Permission](http://www.tecmint.com/wp-content/uploads/2015/05/Check-File-Permssion.png) - -Check SELinux File Permission - -b). Change the DocumentRoot directive in `/etc/httpd/conf/httpd.conf` to /websites and don’t forget to update the corresponding Directory block. Then, restart Apache. - -c). Browse to `http://`, and you should get a 503 Forbidden HTTP response. - -d). Next, change the label of /websites, recursively, to the httpd_sys_content_t type in order to grant Apache read-only access to that directory and its contents: - - # semanage fcontext -a -t httpd_sys_content_t "/websites(/.*)?" - -e). Finally, apply the SELinux policy created in d): - - # restorecon -R -v /websites - -Now restart Apache and browse to `http://` again and you will see the html file displayed correctly: - -![Verify Apache Page](http://www.tecmint.com/wp-content/uploads/2015/05/08part13.png) - -Verify Apache Page - -### Summary ### - -In this article we have gone through the basics of SELinux. Note that due to the vastness of the subject, a full detailed explanation is not possible in a single article, but we believe that the principles outlined in this guide will help you to move on to more advanced topics should you wish to do so. - -If I may, let me recommend two essential resources to start with: the [NSA SELinux page][4] and the [RHEL 7 SELinux User’s and Administrator’s][5] guide. - -Don’t hesitate to let us know if you have any questions or comments. 
- -------------------------------------------------------------------------------- - -via: http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups -[2]:http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ -[3]:http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network-services-in-rhel-7/ -[4]:https://www.nsa.gov/research/selinux/index.shtml -[5]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/SELinux_Users_and_Administrators_Guide/part_I-SELinux.html diff --git a/translated/tech/RHCSA/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md b/translated/tech/RHCSA/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md new file mode 100644 index 0000000000..4afbc105b7 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md @@ -0,0 +1,177 @@ +RHCSA 系列: 在 RHEL 7 中使用 SELinux 进行强制访问控制 – Part 13 +================================================================================ +在本系列的前面几篇文章中,我们已经详细地探索了至少两种访问控制方法:标准的 ugo/rwx 权限([管理用户和组 – Part 3][1]) 和访问控制列表([在文件系统中配置 ACL – Part 7][2])。 + +![RHCSA 认证:SELinux 精要和控制文件系统的访问](http://www.tecmint.com/wp-content/uploads/2015/06/SELinux-Control-File-System-Access.png) + +RHCSA 认证:SELinux 精要和控制文件系统的访问 + +尽管作为第一级别的权限和访问控制机制是必要的,但它们同样有一些局限,而这些局限则可以由安全增强 Linux(Security Enhanced Linux,简称为 SELinux) 来处理。 + +这些局限的一种情形是:某个用户可能通过一条考虑不周的 chmod 命令,将文件或目录暴露在安全风险之下,从而引起访问权限的意外扩散。这样一来,由该用户开启的任意进程都可以对属于该用户的文件进行任意的操作,最终一个恶意的或受损的软件可能会取得对整个系统的 root 级别访问权限。 + +考虑到这些局限性,美国国家安全局(NSA) 率先设计出了 
SELinux,一种强制的访问控制方法,它根据最小权限模型去限制进程在系统对象(如文件,目录,网络接口等)上的访问或执行其他操作的能力,而这些限制可以在后面根据需要进行修改。简单来说,系统的每一个元素都只被授予其功能所需的最小权限。 + +在 RHEL 7 中,SELinux 被并入了内核中,且默认情况下以强制模式开启。在这篇文章中,我们将简要地介绍有关 SELinux 及其相关操作的基本概念。 + +### SELinux 的模式 ### + +SELinux 可以以三种不同的模式运行: + +- 强制模式:SELinux 根据 SELinux 策略规则拒绝访问,这些规则是用以控制安全引擎的一系列准则; +- 宽容模式:SELinux 不拒绝访问,但对于那些运行在强制模式下会被拒绝访问的行为,它会进行记录; +- 关闭模式(不言自明,即 SELinux 没有实际运行)。 + +使用 `getenforce` 命令可以展示 SELinux 当前所处的模式,而 `setenforce` 命令(后面跟上一个 1 或 0) 则被用来将当前模式切换到强制模式或宽容模式,但只对当前的会话有效。 + +为了使得在登出和重启后上面的设置还能保持作用,你需要编辑 `/etc/selinux/config` 文件并将 SELINUX 变量的值设为 enforcing,permissive,disabled 中之一: + + # getenforce + # setenforce 0 + # getenforce + # setenforce 1 + # getenforce + # cat /etc/selinux/config + +![设置 SELinux 模式](http://www.tecmint.com/wp-content/uploads/2015/05/Set-SELinux-Mode.png) + +设置 SELinux 模式 + +通常情况下,你将使用 `setenforce` 来在 SELinux 模式间进行切换(从强制模式到宽容模式,或反之),以此来作为排错的第一步。假如 SELinux 当前被设置为强制模式,而你遇到了某些问题,但当你把 SELinux 切换为宽容模式后问题不再出现了,则你可以确信你遇到的是一个 SELinux 权限方面的问题。 + +### SELinux 上下文 ### + +一个 SELinux 上下文由一个访问控制环境所组成,在这个环境中,访问决定将基于 SELinux 的用户、角色和类型(以及可选的级别)来做出: + +- 一个 SELinux 用户是通过将一个常规的 Linux 用户账户映射到一个 SELinux 用户账户来实现的,反过来,在一个会话中,这个 SELinux 用户账户在 SELinux 上下文中被进程所使用,为的是能够显式地定义它们所允许的角色和级别。 +- 角色的概念是作为域和处于该域中的 SELinux 用户之间的媒介,它定义了 SELinux 可以访问到哪个进程域和哪些文件类型。这将保护您的系统免受提权漏洞的攻击。 +- 类型则定义了一个 SELinux 文件类型或一个 SELinux 进程域。在正常情况下,进程将会被禁止访问其他进程正使用的文件,也被禁止访问其他进程,只有当存在一个明确允许的 SELinux 策略规则时,访问才会被放行。 + +下面就让我们看看这些概念是如何在下面的例子中起作用的。 + +**例 1:改变 sshd 守护进程的默认端口** + +在[加固 SSH – Part 8][3] 中,我们解释了更改 sshd 所监听的默认端口是加固你的服务器免受外部攻击的首个安全措施。下面,就让我们编辑 `/etc/ssh/sshd_config` 文件并将端口设置为 9999: + + Port 9999 + +保存更改并重启 sshd: + + # systemctl restart sshd + # systemctl status sshd + +![更改 SSH 的端口](http://www.tecmint.com/wp-content/uploads/2015/05/Change-SSH-Port.png) + +重启 SSH 服务 + +正如你看到的那样, sshd 启动失败,但为什么会这样呢? 
+ +快速检查 `/var/log/audit/audit.log` 文件会发现 sshd 已经被拒绝在端口 9999 上开启(SELinux 日志信息包含单词 "AVC",所以这类信息可以被轻易地与其他信息相区分),因为这个端口是 JBoss 管理服务的保留端口: + + # cat /var/log/audit/audit.log | grep AVC | tail -1 + +![查看 SSH 日志](http://www.tecmint.com/wp-content/uploads/2015/05/Inspect-SSH-Logs.png) + +查看 SSH 日志 + +在这种情况下,你可以像先前解释的那样禁用 SELinux(但请不要这样做!),并尝试重启 sshd,且这种方法能够起效。但是, `semanage` 应用可以告诉我们在哪些端口上可以开启 sshd 而不会出现任何问题。 + +运行: + + # semanage port -l | grep ssh + +便可以得到一个 SELinux 允许 sshd 在哪些端口上监听的列表: + +![Semanage 工具](http://www.tecmint.com/wp-content/uploads/2015/05/SELinux-Permission.png) + +Semanage 工具 + +所以让我们在 `/etc/ssh/sshd_config` 中将端口更改为 9998 端口,增加这个端口到 ssh_port_t 的上下文,然后重启 sshd 服务: + + # semanage port -a -t ssh_port_t -p tcp 9998 + # systemctl restart sshd + # systemctl is-active sshd + +![Semanage 添加端口](http://www.tecmint.com/wp-content/uploads/2015/05/Semenage-Add-Port.png) + +Semanage 添加端口 + +如你所见,这次 sshd 服务被成功地开启了。这个例子告诉我们一个事实:SELinux 将 TCP 端口号限制在它自己的端口类型定义之内。 + +**例 2:允许 httpd 访问 sendmail** + +这是一个 SELinux 管理一个进程来访问另一个进程的例子。假如在你的 RHEL 7 服务器上,你要实现 Apache 的 [mod_security 和 mod_evasive](http://www.tecmint.com/protect-apache-using-mod_security-and-mod_evasive-on-rhel-centos-fedora/),你需要允许 httpd 访问 sendmail,以便在遭受到 (D)DoS 攻击时能够用邮件来提醒你。在下面的命令中,如果你不想使得更改在重启后仍然生效,请去掉 `-P` 选项。 + + # semanage boolean -1 | grep httpd_can_sendmail + # setsebool -P httpd_can_sendmail 1 + # semanage boolean -1 | grep httpd_can_sendmail + +![允许 Apache 发送邮件](http://www.tecmint.com/wp-content/uploads/2015/05/Allow-Apache-to-Send-Mails.png) + +允许 Apache 发送邮件 + +从上面的例子中,你可以知道 SELinux 布尔设定(或者只是布尔值)分别对应于 true 或 false,被嵌入到了 SELinux 策略中。你可以使用 `semanage boolean -l` 来列出所有的布尔值,也可以管道至 grep 命令以便筛选输出的结果。 + +**例 3:在一个特定目录而非默认目录下服务一个静态站点** + +假设你正使用一个不同于默认目录(`/var/www/html`)的目录来服务一个静态站点,例如 `/websites` 目录(这种情形会出现在当你把你的网络文件存储在一个共享网络设备上,并需要将它挂载在 /websites 目录时)。 + +a). 在 /websites 下创建一个 index.html 文件并包含如下的内容: + + +

    <html><body>SELinux test</body></html>

+ + +假如你执行 + + # ls -lZ /websites/index.html + +你将会看到这个 index.html 已经被标记上了 default_t SELinux 类型,而 Apache 不能访问这类文件: + +![检查 SELinux 文件的权限](http://www.tecmint.com/wp-content/uploads/2015/05/Check-File-Permssion.png) + +检查 SELinux 文件的权限 + +b). 将 `/etc/httpd/conf/httpd.conf` 中的 DocumentRoot 改为 /websites,并不要忘了 +更新相应的 Directory 代码块。然后重启 Apache。 + +c). 浏览到 `http://`,则你应该会得到一个 503 Forbidden 的 HTTP 响应。 + +d). 接下来,递归地改变 /websites 的标志,将它的标志变为 httpd_sys_content_t 类型,以便赋予 Apache 对这些目录和其内容的只读访问权限: + + # semanage fcontext -a -t httpd_sys_content_t "/websites(/.*)?" + +e). 最后,应用在 d) 中创建的 SELinux 策略: + + # restorecon -R -v /websites + +现在重启 Apache 并再次浏览到 `http://`,则你可以看到被正确展现出来的 html 文件: + +![确认 Apache 页面](http://www.tecmint.com/wp-content/uploads/2015/05/08part13.png) + +确认 Apache 页面 + +### 总结 ### + +在本文中,我们详细地介绍了 SELinux 的基础知识。请注意,由于这个主题的广泛性,在单篇文章中做出一个完全详尽的解释是不可能的,但我们相信,在这个指南中列出的基本原则将会对你进一步了解更高级的话题有所帮助,假如你想了解的话。 + +假如可以,请让我推荐两个必要的资源来入门 SELinux:[NSA SELinux 页面][4] 和 [针对用户和系统管理员的 RHEL 7 SELinux 指南][5]。 + +假如你有任何的问题或评论,请不要犹豫,让我们知晓吧。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups +[2]:http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ +[3]:http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network-services-in-rhel-7/ +[4]:https://www.nsa.gov/research/selinux/index.shtml +[5]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/SELinux_Users_and_Administrators_Guide/part_I-SELinux.html From f58ad625a30cbdd8002c5eb71dd6f119f335c9ab Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 
21 Sep 2015 19:58:04 +0800 Subject: [PATCH 575/697] Update 20150921 Configure PXE Server In Ubuntu 14.04.md --- sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md b/sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md index 0ac8dbc527..4265e59b4f 100644 --- a/sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md +++ b/sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Configure PXE Server In Ubuntu 14.04 ================================================================================ ![](https://www.maketecheasier.com/assets/uploads/2015/09/pxe-featured.jpg) @@ -246,4 +247,4 @@ via: https://www.maketecheasier.com/configure-pxe-server-ubuntu/ [a]:https://www.maketecheasier.com/author/hiteshjethva/ [1]:https://en.wikipedia.org/wiki/Preboot_Execution_Environment [2]:https://help.ubuntu.com/community/PXEInstallServer -[3]:https://www.flickr.com/photos/jhcalderon/3681926417/ \ No newline at end of file +[3]:https://www.flickr.com/photos/jhcalderon/3681926417/ From 568f24fe5f89abd61b5cc2a7f43f38c0f8a7ecbd Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 21 Sep 2015 22:20:02 +0800 Subject: [PATCH 576/697] PUB:20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems @geekpi --- ...Connecting Remote Unix or Linux Systems.md | 110 +++++++++++++++++ ...Connecting Remote Unix or Linux Systems.md | 111 ------------------ 2 files changed, 110 insertions(+), 111 deletions(-) create mode 100644 published/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md delete mode 100644 translated/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md diff --git a/published/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md b/published/20150826 Mosh 
Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md new file mode 100644 index 0000000000..2b5c37992d --- /dev/null +++ b/published/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md @@ -0,0 +1,110 @@ +mosh:一个基于 SSH 用于连接远程 Unix/Linux 系统的工具 +================================================================================ +Mosh 表示移动 Shell(Mobile Shell),是一个用于从客户端跨互联网连接远程服务器的命令行工具。它可以像 SSH 那样使用,但提供了比 Secure Shell 更多的功能。程序最初由 Keith Winstein 编写,用于类 Unix 的操作系统中,发布于 GNU GPL v3 协议下。 + +![Mosh Shell SSH Client](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-SSH-Client.png) + +*Mosh Shell SSH 客户端* + +#### Mosh的功能 #### + +- 它是一个支持漫游的远程终端程序。 +- 在所有主流的类 Unix 版本中可用,如 Linux、FreeBSD、Solaris、Mac OS X 和 Android。 +- 支持不稳定连接 +- 支持智能的本地回显 +- 支持用户输入的行编辑 +- 响应式设计及在 wifi、3G、长距离连接下的鲁棒性 +- 在 IP 改变后保持连接。它使用 UDP 代替 TCP(在 SSH 中使用),当连接被重置或者获得新的 IP 后 TCP 会超时,但是 UDP 仍然保持连接。 +- 在长时间断开之后恢复会话时仍然保持连接。 +- 没有网络延迟。立即显示用户输入和删除而没有延迟 +- 像 SSH 那样支持一些传统的登录方式。 +- 包丢失处理机制 + +### Linux 中 mosh 的安装 ### + +在 Debian、Ubuntu 和 Mint 类似的系统中,你可以很容易地用 [apt-get 包管理器][1]安装。 + + # apt-get update + # apt-get install mosh + +在基于 RHEL/CentOS/Fedora 的系统中,要使用 [yum 包管理器][3]安装 mosh,你需要打开第三方的 [EPEL][2]。 + + # yum update + # yum install mosh + +在 Fedora 22+的版本中,你需要使用 [dnf 包管理器][4]来安装 mosh。 + + # dnf install mosh + +### 我该如何使用 mosh? 
### + +1、 让我们尝试使用 mosh 登录远程 Linux 服务器。 + + $ mosh root@192.168.0.150 + +![Mosh Shell Remote Connection](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Remote-Connection.png) + +*mosh远程连接* + +**注意**:你可能会看到一个连接错误,这是因为我在 CentOS 7 中还没有打开这个端口。一个快速但是我并不建议的解决方法是: + + # systemctl stop firewalld [在远程服务器上] + +更好的方法是打开一个端口并更新防火墙规则。接着用 mosh 连接到预定义的端口中。至于更深入的细节,也许你会对下面的文章感兴趣。 + +- [如何配置 Firewalld][5] + +2、 让我们假设把默认的 22 端口改到 70,这时使用 -p 选项来使用自定义端口。 + + $ mosh -p 70 root@192.168.0.150 + +3、 检查 mosh 的版本 + + $ mosh --version + +![Check Mosh Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mosh-Version.png) + +*检查mosh版本* + +4、 你可以输入`exit`来退出 mosh 会话。 + + $ exit + +5、 mosh 支持很多选项,你可以用下面的方法看到: + + $ mosh --help + +![Mosh Shell Options](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Options.png) + +*Mosh 选项* + +#### mosh 的优缺点 #### + +- mosh 有额外的需求,比如需要允许 UDP 直接连接,这在 SSH 不需要。 +- 动态分配的端口范围是 60000-61000。第一个打开的端口是分配好的。每个连接都需要一个端口。 +- 默认的端口分配是一个严重的安全问题,尤其是在生产环境中。 +- 支持 IPv6 连接,但是不支持 IPv6 漫游。 +- 不支持回滚 +- 不支持 X11 转发 +- 不支持 ssh-agent 转发 + +### 总结 ### + +mosh 是一款可以在大多数 Linux 发行版的软件仓库中下载到的小工具。虽然它有一些不足,尤其是安全方面的顾虑和额外的需求,但它的功能,比如漫游后保持连接,是一个加分点。我的建议是任何一个使用 SSH 的 Linux 用户都应该试试这个程序,mosh 值得一试。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/install-mosh-shell-ssh-client-in-linux/ + +作者:[Avishek Kumar][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ +[2]:https://linux.cn/article-2324-1.html +[3]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[4]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ +[5]:http://www.tecmint.com/configure-firewalld-in-centos-7/ diff --git 
a/translated/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md b/translated/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md deleted file mode 100644 index 093b4cbc21..0000000000 --- a/translated/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md +++ /dev/null @@ -1,111 +0,0 @@ -mosh - 一个基于SSH用于连接远程Unix/Linux系统的工具 -================================================================================ -Mosh表示移动Shell(Mobile Shell)是一个用于从客户端连接远程服务器的命令行工具。它可以像ssh那样使用并包含了更多的功能。它是一个类似ssh的程序,但是提供更多的功能。程序最初由Keith Winstein编写用于类Unix的操作系统中,发布于GNU GPL v3协议下。 - -![Mosh Shell SSH Client](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-SSH-Client.png) - -Mosh客户端 - -#### Mosh的功能 #### - -- 它是一个支持漫游的远程终端程序。 -- 在所有主流类Unix版本中可用如Linux、FreeBSD、Solaris、Mac OS X和Android。 -- 中断连接支持 -- 支持智能本地echo -- 用户按键行编辑支持 -- 响应式设计及在wifi、3G、长距离连接下的鲁棒性 -- 在IP改变后保持连接。它使用UDP代替TCP(在SSH中使用)当连接被重置或者获得新的IP后TCP会超时但是UDP仍然保持连接。 -- 在你很长之间之后恢复会话时仍然保持连接。 -- 没有网络延迟。立即显示用户输入和删除而没有延迟 -- 像SSH那样支持一些旧的方式登录。 -- 包丢失处理机制 - -### Linux中mosh的安装 ### - -在Debian、Ubuntu和Mint类似的系统中,你可以很容易地用[apt-get包管理器][1]安装。 - - # apt-get update - # apt-get install mosh - -在基于RHEL/CentOS/Fedora的系统中,要使用[yum 包管理器][3]安装mosh,你需要打开第三方的[EPEL][2]。 - - # yum update - # yum install mosh - -在Fedora 22+的版本中,你需要使用[dnf包管理器][4]来安装mosh。 - - # dnf install mosh - -### 我该如何使用mosh? ### - -1. 让我们尝试使用mosh登录远程Linux服务器。 - - $ mosh root@192.168.0.150 - -![Mosh Shell Remote Connection](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Remote-Connection.png) - -mosh远程连接 - -**注意**:你有没有看到一个连接错误,因为我在CentOS 7中还有打开这个端口。一个快速但是我并不建议的解决方法是: - - # systemctl stop firewalld [on Remote Server] - -更好的方法是打开一个端口并更新防火墙规则。接着用mosh连接到预定义的端口中。至于更深入的细节,也许你会对下面的文章感兴趣。 - -- [如何配置Firewalld][5] - -2. 让我们假设把默认的22端口改到70,这时使用-p选项来使用自定义端口。 - - $ mosh -p 70 root@192.168.0.150 - -3. 
检查mosh的版本 - - $ mosh --version - -![Check Mosh Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mosh-Version.png) - -检查mosh版本 - -4. 你可以输入‘exit’来退出mosh会话。 - - $ exit - -5. mosh支持很多选项,你可以用下面的方法看到: - - $ mosh --help - -![Mosh Shell Options](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Options.png) - -Mosh选项 - -#### mosh的利弊 #### - -- mosh有额外的需求,比如需要允许UDP直接连接,这在SSH不需要。 -- 动态分配的端口范围是60000-61000。第一个打开的端口是分配的。每个连接都需要一个端口。 -- 默认端口分配是一个严重的安全问题,尤其是在生产环境中。 -- 支持IPv6连接,但是不支持IPv6漫游。 -- 不支持回溯 -- 不支持X11转发 -- 不支持ssh-agent转发 - -### 总结 ### - -Mosh is a nice small utility which is available for download in the repository of most of the Linux Distributions. Though it has a few discrepancies specially security concern and additional requirement it’s features like remaining connected even while roaming is its plus point. My recommendation is Every Linux-er who deals with SSH should try this application and mind it, Mosh is worth a try. -mosh是一款在大多数linux发行版的仓库中可以下载的一款小工具。虽然它有一些差异尤其是安全问题和额外的需求,它的功能像漫游后保持连接是一个加分点。我的建议是任何一个使用ssh的linux用户都应该试试这个程序,mosh值得一试 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/install-mosh-shell-ssh-client-in-linux/ - -作者:[Avishek Kumar][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ -[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ -[3]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[4]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ -[5]:http://www.tecmint.com/configure-firewalld-in-centos-7/ From 46052dcbcd15116ea4b25088fdca676bea26e304 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 21 Sep 2015 
23:11:22 +0800 Subject: [PATCH 577/697] PUB:RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares @FSSlc --- ...Lists) and Mounting Samba or NFS Shares.md | 76 +++++++++---------- 1 file changed, 35 insertions(+), 41 deletions(-) rename {translated/tech/RHCSA => published}/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md (67%) diff --git a/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md b/published/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md similarity index 67% rename from translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md rename to published/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md index a68d36de2b..94f42a47ba 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md +++ b/published/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md @@ -1,42 +1,40 @@ -RHCSA 系列:使用 ACL(访问控制列表) 和挂载 Samba/NFS 共享 – Part 7 +RHCSA 系列(六): 使用 ACL(访问控制列表) 和挂载 Samba/NFS 共享 ================================================================================ -在上一篇文章([RHCSA 系列 Part 6][1])中,我们解释了如何使用 parted 和 ssm 来设置和配置本地系统存储。 +在上一篇文章([RHCSA 系列(六)][1])中,我们解释了如何使用 parted 和 ssm 来设置和配置本地系统存储。 ![配置 ACL 及挂载 NFS/Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ACLs-and-Mounting-NFS-Samba-Shares.png) -RHCSA Series: 配置 ACL 及挂载 NFS/Samba 共享 – Part 7 +*RHCSA 系列: 配置 ACL 及挂载 NFS/Samba 共享 – Part 7* -我们也讨论了如何创建和在系统启动时使用一个密码来挂载加密的卷。另外,我们告诫过你要避免在挂载的文件系统上执行苛刻的存储管理操作。记住了这点后,现在,我们将回顾在 RHEL 7 中最常使用的文件系统格式,然后将涵盖有关手动或自动挂载、使用和卸载网络文件系统(CIFS 和 NFS)的话题以及在你的操作系统上实现访问控制列表的使用。 +我们也讨论了如何创建和在系统启动时使用一个密码来挂载加密的卷。另外,我们告诫过你要避免在挂载的文件系统上执行危险的存储管理操作。记住了这点后,现在,我们将回顾在 RHEL 7 
中最常使用的文件系统格式,然后将涵盖有关手动或自动挂载、使用和卸载网络文件系统(CIFS 和 NFS)的话题以及在你的操作系统上实现访问控制列表(Access Control List)的使用。 #### 前提条件 #### -在进一步深入之前,请确保你可使用 Samba 服务和 NFS 服务(注意在 RHEL 7 中 NFSv2 已不再被支持)。 +在进一步深入之前,请确保你可使用 Samba 服务和 NFS 服务(注意在 RHEL 7 中 NFSv2 已不再被支持)。 -在本次指导中,我们将使用一个IP 地址为 192.168.0.10 且同时运行着 Samba 服务和 NFS 服务的机子来作为服务器,使用一个 IP 地址为 192.168.0.18 的 RHEL 7 机子来作为客户端。在这篇文章的后面部分,我们将告诉你在客户端上你需要安装哪些软件包。 +在本次指导中,我们将使用一个IP 地址为 192.168.0.10 且同时运行着 Samba 服务和 NFS 服务的机器来作为服务器,使用一个 IP 地址为 192.168.0.18 的 RHEL 7 机器来作为客户端。在这篇文章的后面部分,我们将告诉你在客户端上你需要安装哪些软件包。 ### RHEL 7 中的文件系统格式 ### -从 RHEL 7 开始,由于 XFS 的高性能和可扩展性,它已经被引入所有的架构中来作为默认的文件系统。 -根据 Red Hat 及其合作伙伴在主流硬件上执行的最新测试,当前 XFS 已支持最大为 500 TB 大小的文件系统。 +从 RHEL 7 开始,由于 XFS 的高性能和可扩展性,它已经被作为所有的架构中的默认文件系统。根据 Red Hat 及其合作伙伴在主流硬件上执行的最新测试,当前 XFS 已支持最大为 500 TB 大小的文件系统。 -另外, XFS 启用了 user_xattr(扩展用户属性) 和 acl( -POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext4(对于 RHEL 7 来说, ext2 已过时),这意味着当挂载一个 XFS 文件系统时,你不必显式地在命令行或 /etc/fstab 中指定这些选项(假如你想在后一种情况下禁用这些选项,你必须显式地使用 no_acl 和 no_user_xattr)。 +另外,XFS 启用了 `user_xattr`(扩展用户属性) 和 `acl`(POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext4(对于 RHEL 7 来说,ext2 已过时),这意味着当挂载一个 XFS 文件系统时,你不必显式地在命令行或 /etc/fstab 中指定这些选项(假如你想在后一种情况下禁用这些选项,你必须显式地使用 `no_acl` 和 `no_user_xattr`)。 -请记住扩展用户属性可以被指定到文件和目录中来存储任意的额外信息如 mime 类型,字符集或文件的编码,而用户属性中的访问权限由一般的文件权限位来定义。 +请记住扩展用户属性可以给文件和目录指定,用来存储任意的额外信息如 mime 类型,字符集或文件的编码,而用户属性中的访问权限由一般的文件权限位来定义。 #### 访问控制列表 #### -作为一名系统管理员,无论你是新手还是专家,你一定非常熟悉与文件和目录有关的常规访问权限,这些权限为所有者,所有组和"世界"(所有的其他人)指定了特定的权限(可读,可写及可执行)。但如若你需要稍微更新你的记忆,请随意参考 [RHCSA 系列的 Part 3][3]. +作为一名系统管理员,无论你是新手还是专家,你一定非常熟悉与文件和目录有关的常规访问权限,这些权限为所有者,所有组和“世界”(所有的其他人)指定了特定的权限(可读,可写及可执行)。但如若你需要稍微更新下你的记忆,请参考 [RHCSA 系列(三)][3]. 但是,由于标准的 `ugo/rwx` 集合并不允许为不同的用户配置不同的权限,所以 ACL 便被引入了进来,为的是为文件和目录定义更加详细的访问权限,而不仅仅是这些特别指定的特定权限。 事实上, ACL 定义的权限是由文件权限位所特别指定的权限的一个超集。下面就让我们看看这个转换是如何在真实世界中被应用的吧。 -1. 存在两种类型的 ACL:访问 ACL,可被应用到一个特定的文件或目录上,以及默认 ACL,只可被应用到一个目录上。假如目录中的文件没有 ACL,则它们将继承它们的父目录的默认 ACL 。 +1. 存在两种类型的 ACL:访问 ACL,可被应用到一个特定的文件或目录上;以及默认 ACL,只可被应用到一个目录上。假如目录中的文件没有 ACL,则它们将继承它们的父目录的默认 ACL 。 2. 
从一开始, ACL 就可以为每个用户,每个组或不在文件所属组中的用户配置相应的权限。 -3. ACL 可使用 `setfacl` 来设置(和移除),可相应地使用 -m 或 -x 选项。 +3. ACL 可使用 `setfacl` 来设置(和移除),可相应地使用 -m 或 -x 选项。 例如,让我们创建一个名为 tecmint 的组,并将用户 johndoe 和 davenull 加入该组: @@ -53,36 +51,32 @@ POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext ![检验用户](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Users.png) -检验用户 +*检验用户* -现在,我们在 /mnt 下创建一个名为 playground 的目录,并在该目录下创建一个名为 testfile.txt 的文件。我们将设定该文件的属组为 tecmint,并更改它的默认 ugo/rwx 权限为 770(即赋予该文件的属主和属组可读,可写和可执行权限): +现在,我们在 /mnt 下创建一个名为 playground 的目录,并在该目录下创建一个名为 testfile.txt 的文件。我们将设定该文件的属组为 tecmint,并更改它的默认 `ugo/rwx` 权限为 770(即赋予该文件的属主和属组可读、可写和可执行权限): # mkdir /mnt/playground # touch /mnt/playground/testfile.txt + # chown :tecmint /mnt/playground/testfile.txt # chmod 770 /mnt/playground/testfile.txt 接着,依次切换为 johndoe 和 davenull 用户,并在文件中写入一些信息: - echo "My name is John Doe" > /mnt/playground/testfile.txt - echo "My name is Dave Null" >> /mnt/playground/testfile.txt - -到目前为止,一切正常。现在我们让用户 gacanepa 来向该文件执行写操作 – 则写操作将会失败,这是可以预料的。 - -但实际上我们需要用户 gacanepa(TA 不是组 tecmint 的成员)在文件 /mnt/playground/testfile.txt 上有写权限,那又该怎么办呢?首先映入你脑海里的可能是将该用户添加到组 tecmint 中。但那将使得他在所有该组具有写权限位的文件上均拥有写权限,但我们并不想这样,我们只想他能够在文件 /mnt/playground/testfile.txt 上有写权限。 - - # touch /mnt/playground/testfile.txt - # chown :tecmint /mnt/playground/testfile.txt - # chmod 777 /mnt/playground/testfile.txt # su johndoe $ echo "My name is John Doe" > /mnt/playground/testfile.txt $ su davenull $ echo "My name is Dave Null" >> /mnt/playground/testfile.txt + +到目前为止,一切正常。现在我们让用户 gacanepa 来向该文件执行写操作 – 则写操作将会失败,这是可以预料的。 + $ su gacanepa $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt ![管理用户的权限](http://www.tecmint.com/wp-content/uploads/2015/04/User-Permissions.png) -管理用户的权限 +*管理用户的权限* + +但实际上我们需要用户 gacanepa(他不是组 tecmint 的成员)在文件 /mnt/playground/testfile.txt 上有写权限,那又该怎么办呢?首先映入你脑海里的可能是将该用户添加到组 tecmint 中。但那将使得他在所有该组具有写权限位的文件上均拥有写权限,但我们并不想这样,我们只想他能够在文件 /mnt/playground/testfile.txt 上有写权限。 现在,让我们给用户 gacanepa 在 /mnt/playground/testfile.txt 文件上有读和写权限。 @@ 
-90,7 +84,7 @@ POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext # setfacl -R -m u:gacanepa:rwx /mnt/playground -则你将成功地添加一条 ACL,运行 gacanepa 对那个测试文件可写。然后切换为 gacanepa 用户,并再次尝试向该文件写入一些信息: +则你将成功地添加一条 ACL,允许 gacanepa 对那个测试文件可写。然后切换为 gacanepa 用户,并再次尝试向该文件写入一些信息: $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt @@ -100,9 +94,9 @@ POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext ![检查文件的 ACL](http://www.tecmint.com/wp-content/uploads/2015/04/Check-ACL-of-File.png) -检查文件的 ACL +*检查文件的 ACL* -要为目录设定默认 ACL(它的内容将被该目录下的文件继承,除非另外被覆写),在规则前添加 `d:`并特别指定一个目录名,而不是文件名: +要为目录设定默认 ACL(它的内容将被该目录下的文件继承,除非另外被覆写),在规则前添加 `d:`并特别指定一个目录名,而不是文件名: # setfacl -m d:o:r /mnt/playground @@ -111,7 +105,7 @@ POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext ![在 Linux 中设定默认 ACL](http://www.tecmint.com/wp-content/uploads/2015/04/Set-Default-ACL-in-Linux.png) -在 Linux 中设定默认 ACL +*在 Linux 中设定默认 ACL* [在官方的 RHEL 7 存储管理指导手册的第 20 章][3] 中提供了更多有关 ACL 的例子,我极力推荐你看一看它并将它放在身边作为参考。 @@ -129,7 +123,7 @@ POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext ![检查可用的 NFS 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Mount-NFS-Shares.png) -检查可用的 NFS 共享 +*检查可用的 NFS 共享* 要按照需求在本地客户端上使用命令行来挂载 NFS 网络共享,可使用下面的语法: @@ -139,7 +133,7 @@ POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext # mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs -若你得到如下的错误信息:“Job for rpc-statd.service failed. See “systemctl status rpc-statd.service”及“journalctl -xn” for details.”,请确保 `rpcbind` 服务被启用且已在你的系统中启动了。 +若你得到如下的错误信息:`Job for rpc-statd.service failed. 
See "systemctl status rpc-statd.service" and "journalctl -xn" for details.`,请确保 `rpcbind` 服务被启用且已在你的系统中启动了。 # systemctl enable rpcbind.socket # systemctl restart rpcbind.service @@ -162,7 +156,7 @@ Samba 代表一个特别的工具,使得在由 *nix 和 Windows 机器组成 ![检查 Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Samba-Shares.png) -检查 Samba 共享 +*检查 Samba 共享* 要在本地客户端上挂载 Samba 网络共享,你需要已安装好 cifs-utils 软件包: @@ -176,14 +170,14 @@ Samba 代表一个特别的工具,使得在由 *nix 和 Windows 机器组成 # mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba -其中 `smbcredentials` +其中 `.smbcredentials` 的内容是: username=gacanepa password=XXXXXX -是一个位于 root 用户的家目录(/root/) 中的隐藏文件,其权限被设置为 600,所以除了该文件的属主外,其他人对该文件既不可读也不可写。 +它是一个位于 root 用户的家目录(/root/) 中的隐藏文件,其权限被设置为 600,所以除了该文件的属主外,其他人对该文件既不可读也不可写。 -请注意 samba_share 是 Samba 分享的名称,由上面展示的 `smbclient -L remote_host` 所返回。 +请注意 samba_share 是 Samba 共享的名称,由上面展示的 `smbclient -L remote_host` 所返回。 现在,若你需要在系统启动时自动地使得 Samba 分享可用,可以向 /etc/fstab 文件添加一个像下面这样的有效条目: @@ -197,7 +191,7 @@ Samba 代表一个特别的工具,使得在由 *nix 和 Windows 机器组成 在这篇文章中,我们已经解释了如何在 Linux 中设置 ACL,并讨论了如何在一个 RHEL 7 客户端上挂载 CIFS 和 NFS 网络共享。 -我建议你去练习这些概念,甚至混合使用它们(试着在一个挂载的网络共享上设置 ACL),直至你感觉舒适。假如你有问题或评论,请随时随意地使用下面的评论框来联系我们。另外,请随意通过你的社交网络分享这篇文章。 +我建议你去练习这些概念,甚至混合使用它们(试着在一个挂载的网络共享上设置 ACL),直至你感觉掌握了。假如你有问题或评论,请随时随意地使用下面的评论框来联系我们。另外,请随意通过你的社交网络分享这篇文章。 -------------------------------------------------------------------------------- @@ -205,11 +199,11 @@ via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares 作者:[Gabriel Cánepa][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ -[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ +[1]:https://linux.cn/article-6257-1.html 
+[2]:https://linux.cn/article-6187-1.html [3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html \ No newline at end of file From 728f9bd1300ae6f2478ae2911c65511219a8ead4 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 22 Sep 2015 07:36:15 +0800 Subject: [PATCH 578/697] PUB:20150908 How to Download Install and Configure Plank Dock in Ubuntu @wi-cuckoo --- ...nload Install and Configure Plank Dock in Ubuntu.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) rename {translated/tech => published}/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md (85%) diff --git a/translated/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md b/published/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md similarity index 85% rename from translated/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md rename to published/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md index 1990e25a60..ea125f9bbe 100644 --- a/translated/tech/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md +++ b/published/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md @@ -1,4 +1,4 @@ -在 Ubuntu 里,如何下载,安装和配置 Plank Dock +在 Ubuntu 里如何下载、安装和配置 Plank Dock ============================================================================= 一个众所周知的事实就是,Linux 是一个用户可以高度自定义的系统,有很多选项可以选择 —— 作为操作系统,有各种各样的发行版,而对于单个发行版来说,又有很多桌面环境可以选择。与其他操作系统的用户一样,Linux 用户也有不同的口味和喜好,特别是对于桌面来说。 @@ -8,7 +8,7 @@ ### Plank ### -官方的文档描述 Plank 是“这个星球上最简洁的 dock”。该项目的目的就是提供一个 dock 仅需要的功能,尽管这是很基础的一个库,却可以被扩展,创造其他的含更多高级功能的 dock 程序。 +官方的文档描述 Plank 是“这个星球上最简洁的 dock”。该项目的目的就是仅提供一个 dock 需要的功能,尽管这是很基础的一个库,却可以被扩展,创造其他的含更多高级功能的 dock 程序。 这里值得一提的就是,在 elementary OS 里,Plank 是预装的。并且 Plank 是 Docky 的基础,Docky 也是一个非常流行的 dock 应用,在功能上与 Mac OS X 的 Dock 非常相似。 @@ -30,11 +30,11 @@ 
![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-enabled-new.jpg) -正如上面图片显示的那样,dock 包含许多带橙色的应用图标,这表明这些应用正处于运行状态。无需说,你可以点击一个图标来打开那个应用。同时,右击一个应用图标会给出更多的选项,你可能会感兴趣。举个例子,该下面的屏幕快照: +正如上面图片显示的那样,dock 包含许多带橙色标示的应用图标,这表明这些应用正处于运行状态。无需说,你可以点击一个图标来打开那个应用。同时,右击一个应用图标会给出更多的选项,你可能会感兴趣。举个例子,看下面的屏幕快照: ![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-right-click-icons-new.jpg) -为了获得配置的选项,你不得不右击一下 Plank 的图标(左数第一个),然后点击 Preferences 选项。这就会产生接下来的窗口。 +为了获得配置的选项,你需要右击一下 Plank 的图标(左数第一个),然后点击 Preferences 选项。这就会出现如下的窗口。 ![](https://www.maketecheasier.com/assets/uploads/2015/09/plank-preferences.png) @@ -58,7 +58,7 @@ via: https://www.maketecheasier.com/download-install-configure-plank-dock-ubuntu 作者:[Himanshu Arora][a] 译者:[wi-cuckoo](https://github.com/wi-cuckoo) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 387e52c704d2777bd4210ebe108fcb65959eee7a Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 22 Sep 2015 08:04:09 +0800 Subject: [PATCH 579/697] PUB:RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services @FSSlc --- ... 
Hostname and Enabling Network Services.md | 76 ++++++++++--------- 1 file changed, 39 insertions(+), 37 deletions(-) rename {translated/tech/RHCSA => published}/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md (59%) diff --git a/translated/tech/RHCSA/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md b/published/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md similarity index 59% rename from translated/tech/RHCSA/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md rename to published/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md index 82245f33b1..3494bf7355 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md +++ b/published/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md @@ -1,26 +1,27 @@ -RHCSA 系列:安全 SSH,设定主机名及开启网络服务 – Part 8 +RHCSA 系列(八): 加固 SSH,设定主机名及启用网络服务 ================================================================================ -作为一名系统管理员,你将经常使用一个终端模拟器来登陆到一个远程的系统中,执行一系列的管理任务。你将很少有机会坐在一个真实的(物理)终端前,所以你需要设定好一种方法来使得你可以登陆到你被要求去管理的那台远程主机上。 -事实上,当你必须坐在一台物理终端前的时候,就可能是你登陆到该主机的最后一种方法。基于安全原因,使用 Telnet 来达到以上目的并不是一个好主意,因为穿行在线缆上的流量并没有被加密,它们以文本方式在传送。 +作为一名系统管理员,你将经常使用一个终端模拟器来登录到一个远程的系统中,执行一系列的管理任务。你将很少有机会坐在一个真实的(物理)终端前,所以你需要设定好一种方法来使得你可以登录到你需要去管理的那台远程主机上。 + +事实上,当你必须坐在一台物理终端前的时候,就可能是你登录到该主机的最后一种方法了。基于安全原因,使用 Telnet 来达到以上目的并不是一个好主意,因为穿行在线缆上的流量并没有被加密,它们以明文方式在传送。 另外,在这篇文章中,我们也将复习如何配置网络服务来使得它在开机时被自动开启,并学习如何设置网络和静态或动态地解析主机名。 ![RHCSA: 安全 SSH 和开启网络服务](http://www.tecmint.com/wp-content/uploads/2015/05/Secure-SSH-Server-and-Enable-Network-Services.png) -RHCSA: 安全 SSH 和开启网络服务 – Part 8 +*RHCSA: 安全 SSH 和开启网络服务 – Part 8* ### 安装并确保 SSH 通信安全 ### -对于你来说,要能够使用 SSH 远程登陆到一个 RHEL 7 机子,你必须安装 `openssh`,`openssh-clients` 和 `openssh-servers` 软件包。下面的命令不仅将安装远程登陆程序,也会安装安全的文件传输工具以及远程文件复制程序: +对于你来说,要能够使用 
SSH 远程登录到一个 RHEL 7 机子,你必须安装 `openssh`,`openssh-clients` 和 `openssh-servers` 软件包。下面的命令不仅将安装远程登录程序,也会安装安全的文件传输工具以及远程文件复制程序: # yum update && yum install openssh openssh-clients openssh-servers -注意,安装上服务器所需的相应软件包是一个不错的主意,因为或许在某个时刻,你想使用同一个机子来作为客户端和服务器。 +注意,也安装上服务器所需的相应软件包是一个不错的主意,因为或许在某个时刻,你想使用同一个机子来作为客户端和服务器。 -在安装完成后,如若你想安全地访问你的 SSH 服务器,你还需要考虑一些基本的事情。下面的设定应该在文件 `/etc/ssh/sshd_config` 中得以呈现。 +在安装完成后,如若你想安全地访问你的 SSH 服务器,你还需要考虑一些基本的事情。下面的设定应该出现在文件 `/etc/ssh/sshd_config` 中。 -1. 更改 sshd 守护进程的监听端口,从 22(默认的端口值)改为一个更高的端口值(2000 或更大),但首先要确保所选的端口没有被占用。 +1、 更改 sshd 守护进程的监听端口,从 22(默认的端口值)改为一个更高的端口值(2000 或更大),但首先要确保所选的端口没有被占用。 例如,让我们假设你选择了端口 2500 。使用 [netstat][1] 来检查所选的端口是否被占用: @@ -30,17 +31,17 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 Port 2500 -2. 只允许协议 2: +2、 只允许协议 2(LCTT 译注:SSHv1 已经被证明不安全,默认情况下 SSHv1 和 SSHv2 都支持,所以应该显示去掉如下配置行的注释,并只支持 SSHv2。): Protocol 2 -3. 配置验证超时的时间为 2 分钟,不允许以 root 身份登陆,并将允许通过 ssh 登陆的人数限制到最小: +3、 配置验证超时的时间为 2 分钟,不允许以 root 身份登录,并将允许通过 ssh 登录的人数限制到最小: LoginGraceTime 2m PermitRootLogin no AllowUsers gacanepa -4. 假如可能,使用基于公钥的验证方式而不是使用密码: +4、 假如可能,使用基于公钥的验证方式而不是使用密码: PasswordAuthentication no RSAAuthentication yes @@ -48,13 +49,13 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 这假设了你已经在你的客户端机子上创建了带有你的用户名的一个密钥对,并将公钥复制到了你的服务器上。 -- [开启 SSH 无密码登陆][2] +- [开启 SSH 无密码登录][2] ### 配置网络和名称的解析 ### -1. 每个系统管理员应该对下面这个系统配置文件非常熟悉: +1、 每个系统管理员都应该对下面这个系统配置文件非常熟悉: -- /etc/hosts 被用来在小型网络中解析名称 <---> IP 地址。 +- /etc/hosts 被用来在小型网络中解析“名称” <---> “IP 地址”。 文件 `/etc/hosts` 中的每一行拥有如下的结构: @@ -64,7 +65,7 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 192.168.0.10 laptop laptop.gabrielcanepa.com.ar -2. `/etc/resolv.conf` 特别指定 DNS 服务器的 IP 地址和搜索域,它被用来在没有提供域名后缀时,将一个给定的查询名称对应为一个全称域名。 +2、 `/etc/resolv.conf` 特别指定 DNS 服务器的 IP 地址和搜索域,它被用来在没有提供域名后缀时,将一个给定的查询名称对应为一个全称域名。 在正常情况下,你不必编辑这个文件,因为它是由系统管理的。然而,若你非要改变 DNS 服务器的 IP 地址,建议你在该文件的每一行中,都应该遵循下面的结构: @@ -74,7 +75,7 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 nameserver 8.8.8.8 -3. 
`/etc/host.conf` 特别指定在一个网络中主机名被解析的方法和顺序。换句话说,告诉名称解析器使用哪个服务,并以什么顺序来使用。 +3、 `/etc/host.conf` 特别指定在一个网络中主机名被解析的方法和顺序。换句话说,告诉名称解析器使用哪个服务,并以什么顺序来使用。 尽管这个文件由几个选项,但最为常见和基本的设置包含如下的一行: @@ -82,12 +83,12 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 它意味着解析器应该首先查看 `resolv.conf` 中特别指定的域名服务器,然后到 `/etc/hosts` 文件中查找解析的名称。 -4. `/etc/sysconfig/network` 包含了所有网络接口的路由和全局主机信息。下面的值可能会被使用: +4、 `/etc/sysconfig/network` 包含了所有网络接口的路由和全局主机信息。下面的值可能会被使用: NETWORKING=yes|no HOSTNAME=value -其中的 value 应该是全称域名(FQDN)。 +其中的 value 应该是全称域名(FQDN)。 GATEWAY=XXX.XXX.XXX.XXX @@ -97,7 +98,7 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 在一个带有多个网卡的机器中, value 为网关设备名,例如 enp0s3。 -5. 位于 `/etc/sysconfig/network-scripts` 中的文件(网络适配器配置文件)。 +5、 位于 `/etc/sysconfig/network-scripts` 中的文件(网络适配器配置文件)。 在上面提到的目录中,你将找到几个被命名为如下格式的文本文件。 @@ -107,26 +108,27 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 ![检查网络连接状态](http://www.tecmint.com/wp-content/uploads/2015/05/Check-IP-Address.png) -检查网络连接状态 +*检查网络连接状态* 例如: ![网络文件](http://www.tecmint.com/wp-content/uploads/2015/05/Network-Files.png) -网络文件 +*网络文件* -除了环回接口,你还可以为你的网卡进行一个相似的配置。注意,假如设定了某些变量,它们将为这个特别的接口,覆盖掉 `/etc/sysconfig/network` 中定义的值。在这篇文章中,为了能够解释清楚,每行都被加上了注释,但在实际的文件中,你应该避免加上注释: +除了环回接口(loopback),你还可以为你的网卡指定相似的配置。注意,假如设定了某些变量,它们将为这个指定的接口覆盖掉 `/etc/sysconfig/network` 中定义的默认值。在这篇文章中,为了能够解释清楚,每行都被加上了注释,但在实际的文件中,你应该避免加上注释: - HWADDR=08:00:27:4E:59:37 # The MAC address of the NIC - TYPE=Ethernet # Type of connection - BOOTPROTO=static # This indicates that this NIC has been assigned a static IP. If this variable was set to dhcp, the NIC will be assigned an IP address by a DHCP server and thus the next two lines should not be present in that case. + HWADDR=08:00:27:4E:59:37 ### 网卡的 MAC 地址 + TYPE=Ethernet ### 连接类型 + BOOTPROTO=static ### 这代表着该网卡指定了一个静态地址。 + ### 如果这个值指定为 dhcp,这个网卡会从 DHCP 服务器获取 IP 地址,并且就不应该出现以下两行。 IPADDR=192.168.0.18 NETMASK=255.255.255.0 GATEWAY=192.168.0.1 - NM_CONTROLLED=no # Should be added to the Ethernet interface to prevent NetworkManager from changing the file. 
+ NM_CONTROLLED=no ### 应该给以太网卡设置,以便可以让 NetworkManager 可以修改这个文件。 NAME=enp0s3 UUID=14033805-98ef-4049-bc7b-d4bea76ed2eb - ONBOOT=yes # The operating system should bring up this NIC during boot + ONBOOT=yes ### 操作系统会在启动时打开这个网卡。 ### 设定主机名 ### @@ -138,7 +140,7 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 ![在RHEL 7 中检查系统的主机名](http://www.tecmint.com/wp-content/uploads/2015/05/Check-System-hostname.png) -检查系统的主机名 +*检查系统的主机名* 要更改主机名,使用 @@ -148,13 +150,13 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 # hostnamectl set-hostname cinderella -要想使得更改生效,你需要重启 hostnamed 守护进程(这样你就不必因为要应用更改而登出系统并再登陆系统): +要想使得更改生效,你需要重启 hostnamed 守护进程(这样你就不必因为要应用更改而登出并再登录系统): # systemctl restart systemd-hostnamed ![在 RHEL7 中设定系统主机名](http://www.tecmint.com/wp-content/uploads/2015/05/Set-System-Hostname.png) -设定系统主机名 +*设定系统主机名* 另外, RHEL 7 还包含 `nmcli` 工具,它可被用来达到相同的目的。要展示主机名,运行: @@ -170,13 +172,13 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 ![使用 nmcli 命令来设定主机名](http://www.tecmint.com/wp-content/uploads/2015/05/nmcli-command.png) -使用 nmcli 命令来设定主机名 +*使用 nmcli 命令来设定主机名* ### 在开机时开启网络服务 ### -作为本文的最后部分,就让我们看看如何确保网络服务在开机时被自动开启。简单来说,这个可通过创建符号链接到某些由服务的配置文件中的 [Install] 小节中指定的文件来实现。 +作为本文的最后部分,就让我们看看如何确保网络服务在开机时被自动开启。简单来说,这个可通过创建符号链接到某些由服务的配置文件中的 `[Install]` 小节中指定的文件来实现。 -以 firewalld(/usr/lib/systemd/system/firewalld.service) 为例: +以 firewalld(/usr/lib/systemd/system/firewalld.service) 为例: [Install] WantedBy=basic.target @@ -192,11 +194,11 @@ RHCSA: 安全 SSH 和开启网络服务 – Part 8 ![在开机时开启服务](http://www.tecmint.com/wp-content/uploads/2015/05/Enable-Service-at-System-Boot.png) -在开机时开启服务 +*在开机时开启服务* ### 总结 ### -在这篇文章中,我们总结了如何安装 SSH 及使用它安全地连接到一个 RHEL 服务器,如何改变主机名,并在最后如何确保在系统启动时开启服务。假如你注意到某个服务启动失败,你可以使用 `systemctl status -l [service]` 和 `journalctl -xn` 来进行排错。 +在这篇文章中,我们总结了如何安装 SSH 及使用它安全地连接到一个 RHEL 服务器;如何改变主机名,并在最后如何确保在系统启动时开启服务。假如你注意到某个服务启动失败,你可以使用 `systemctl status -l [service]` 和 `journalctl -xn` 来进行排错。 请随意使用下面的评论框来让我们知晓你对本文的看法。提问也同样欢迎。我们期待着你的反馈! 
@@ -206,10 +208,10 @@ via: http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network- 作者:[Gabriel Cánepa][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/ -[2]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ \ No newline at end of file +[2]:https://linux.cn/article-5444-1.html \ No newline at end of file From c8ef8ddabff51faaa815d5e2a445390772de8c13 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 22 Sep 2015 08:16:38 +0800 Subject: [PATCH 580/697] PUB:20150908 List Of 10 Funny Linux Commands @tnuoccalanosrep --- ...0150908 List Of 10 Funny Linux Commands.md | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) rename {translated/tech => published}/20150908 List Of 10 Funny Linux Commands.md (93%) diff --git a/translated/tech/20150908 List Of 10 Funny Linux Commands.md b/published/20150908 List Of 10 Funny Linux Commands.md similarity index 93% rename from translated/tech/20150908 List Of 10 Funny Linux Commands.md rename to published/20150908 List Of 10 Funny Linux Commands.md index 7219e16890..9b89da3b2b 100644 --- a/translated/tech/20150908 List Of 10 Funny Linux Commands.md +++ b/published/20150908 List Of 10 Funny Linux Commands.md @@ -1,4 +1,4 @@ -10条真心有趣的Linux命令 +10 条真心有趣的 Linux 命令 ================================================================================ **在终端工作是一件很有趣的事情。今天,我们将会列举一些有趣得为你带来欢笑的Linux命令。** @@ -29,7 +29,7 @@ ### 3. yes ### - #yes + # yes 这个命令会不停打印字符串,直到用户把这进程给结束掉。 @@ -38,6 +38,7 @@ ![Selection_005](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0054.png) ### 4. 
figlet ### + 这个命令可以用apt-get安装,安装之后,在**/usr/share/figlet**可以看到一些ascii字体文件。 cd /usr/share/figlet @@ -45,26 +46,25 @@ ---------- #figlet -f - -e.g. - #figlet -f big.flf unixmen ![Selection_006](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0062.png) -#figlet -f block.flf unixmen + #figlet -f block.flf unixmen ![Selection_007](http://www.unixmen.com/wp-content/uploads/2015/09/Selection_0072.png) 当然,你也可以尝试使用其他的选项。 ### 5. asciiquarium ### + 这个命令会将你的终端变成一个海洋馆。 -下载term animator + +下载term animator: # wget http://search.cpan.org/CPAN/authors/id/K/KB/KBAUCOM/Term-Animation-2.4.tar.gz -安装并且配置这个包 +安装并且配置这个包: # tar -zxvf Term-Animation-2.4.tar.gz # cd Term-Animation-2.4/ @@ -75,14 +75,14 @@ e.g. # apt-get install libcurses-perl -下载并且安装asciiquarium +下载并且安装asciiquarium: # wget http://www.robobunny.com/projects/asciiquarium/asciiquarium.tar.gz # tar -zxvf asciiquarium.tar.gz # cd asciiquarium_1.0/ # cp asciiquarium /usr/local/bin/ -执行如下命令 +执行如下命令: # /usr/local/bin/asciiquarium @@ -176,8 +176,8 @@ aafire能让你的终端燃起来。 via: http://www.unixmen.com/list-10-funny-linux-commands/ 作者:[Rajneesh Upadhyay][a] -译者:[tnuoccalanosrep](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[tnuoccalanosrep](https://github.com/tnuoccalanosrep) +校对:[wxy](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 713224c88c8e1dad1c13f65f3d798244090c1e53 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 22 Sep 2015 16:33:01 +0800 Subject: [PATCH 581/697] Delete 20150921 Configure PXE Server In Ubuntu 14.04.md --- ...21 Configure PXE Server In Ubuntu 14.04.md | 250 ------------------ 1 file changed, 250 deletions(-) delete mode 100644 sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md diff --git a/sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md b/sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md deleted file mode 100644 index 4265e59b4f..0000000000 --- 
a/sources/tech/20150921 Configure PXE Server In Ubuntu 14.04.md +++ /dev/null @@ -1,250 +0,0 @@ -translation by strugglingyouth -Configure PXE Server In Ubuntu 14.04 -================================================================================ -![](https://www.maketecheasier.com/assets/uploads/2015/09/pxe-featured.jpg) - -PXE (Preboot Execution Environment) Server allows the user to boot a Linux distribution from a network and install it on hundreds of PCs at a time without any Linux iso images. If your client’s computers don’t have CD/DVD or USB drives, or if you want to set up multiple computers at the same time in a large enterprise, then PXE server can be used to save money and time. - -In this article we will show you how you can configure a PXE server in Ubuntu 14.04. - -### Configure Networking ### - -To get started, you need to first set up your PXE server to use a static IP. To set up a static IP address in your system, you need to edit the “/etc/network/interfaces” file. - -1. Open the “/etc/network/interfaces” file. - - sudo nano /etc/network/interfaces - -Add/edit as described below: - - # The loopback network interface - auto lo - iface lo inet loopback - # The primary network interface - auto eth0 - iface eth0 inet static - address 192.168.1.20 - netmask 255.255.255.0 - gateway 192.168.1.1 - dns-nameservers 8.8.8.8 - -Save the file and exit. This will set its IP address to “192.168.1.20”. Restart the network service. - - sudo /etc/init.d/networking restart - -### Install DHCP, TFTP and NFS: ### - -DHCP, TFTP and NFS are essential components for configuring a PXE server. First you need to update your system and install all necessary packages. 
- -For this, run the following commands: - - sudo apt-get update - sudo apt-get install isc-dhcp-Server inetutils-inetd tftpd-hpa syslinux nfs-kernel-Server - -### Configure DHCP Server: ### - -DHCP stands for Dynamic Host Configuration Protocol, and it is used mainly for dynamically distributing network configuration parameters such as IP addresses for interfaces and services. A DHCP server in PXE environment allow clients to request and receive an IP address automatically to gain access to the network servers. - -1. Edit the “/etc/default/dhcp3-server” file. - - sudo nano /etc/default/dhcp3-server - -Add/edit as described below: - - INTERFACES="eth0" - -Save (Ctrl + o) and exit (Ctrl + x) the file. - -2. Edit the “/etc/dhcp3/dhcpd.conf” file: - - sudo nano /etc/dhcp/dhcpd.conf - -Add/edit as described below: - - default-lease-time 600; - max-lease-time 7200; - subnet 192.168.1.0 netmask 255.255.255.0 { - range 192.168.1.21 192.168.1.240; - option subnet-mask 255.255.255.0; - option routers 192.168.1.20; - option broadcast-address 192.168.1.255; - filename "pxelinux.0"; - next-Server 192.168.1.20; - } - -Save the file and exit. - -3. Start the DHCP service. - - sudo /etc/init.d/isc-dhcp-server start - -### Configure TFTP Server: ### - -TFTP is a file-transfer protocol which is similar to FTP. It is used where user authentication and directory visibility are not required. The TFTP server is always listening for PXE clients on the network. When it detects any network PXE client asking for PXE services, then it provides a network package that contains the boot menu. - -1. To configure TFTP, edit the “/etc/inetd.conf” file. - - sudo nano /etc/inetd.conf - -Add/edit as described below: - - tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot - -Save and exit the file. - -2. Edit the “/etc/default/tftpd-hpa” file. 
- - sudo nano /etc/default/tftpd-hpa - -Add/edit as described below: - - TFTP_USERNAME="tftp" - TFTP_DIRECTORY="/var/lib/tftpboot" - TFTP_ADDRESS="[:0.0.0.0:]:69" - TFTP_OPTIONS="--secure" - RUN_DAEMON="yes" - OPTIONS="-l -s /var/lib/tftpboot" - -Save and exit the file. - -3. Enable boot service for `inetd` to automatically start after every system reboot and start tftpd service. - - sudo update-inetd --enable BOOT - sudo service tftpd-hpa start - -4. Check status. - - sudo netstat -lu - -It will show the following output: - - Proto Recv-Q Send-Q Local Address Foreign Address State - udp 0 0 *:tftp *:* - -### Configure PXE boot files ### - -Now you need the PXE boot file “pxelinux.0” to be present in the TFTP root directory. Make a directory structure for TFTP, and copy all the bootloader files provided by syslinux from the “/usr/lib/syslinux/” to the “/var/lib/tftpboot/” path by issuing the following commands: - - sudo mkdir /var/lib/tftpboot - sudo mkdir /var/lib/tftpboot/pxelinux.cfg - sudo mkdir -p /var/lib/tftpboot/Ubuntu/14.04/amd64/ - sudo cp /usr/lib/syslinux/vesamenu.c32 /var/lib/tftpboot/ - sudo cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/ - -#### Set up PXELINUX configuration file #### - -The PXE configuration file defines the boot menu displayed to the PXE client when it boots up and contacts the TFTP server. By default, when a PXE client boots up, it will use its own MAC address to specify which configuration file to read, so we need to create that default file that contains the list of kernels which are available to boot. - -Edit the PXE Server configuration file with valid installation options. 
- -To edit “/var/lib/tftpboot/pxelinux.cfg/default,” - - sudo nano /var/lib/tftpboot/pxelinux.cfg/default - -Add/edit as described below: - - DEFAULT vesamenu.c32 - TIMEOUT 100 - PROMPT 0 - MENU INCLUDE pxelinux.cfg/PXE.conf - NOESCAPE 1 - LABEL Try Ubuntu 14.04 Desktop - MENU LABEL Try Ubuntu 14.04 Desktop - kernel Ubuntu/vmlinuz - append boot=casper netboot=nfs nfsroot=192.168.1.20:/var/lib/tftpboot/Ubuntu/14.04/amd64 - initrd=Ubuntu/initrd.lz quiet splash - ENDTEXT - LABEL Install Ubuntu 14.04 Desktop - MENU LABEL Install Ubuntu 14.04 Desktop - kernel Ubuntu/vmlinuz - append boot=casper automatic-ubiquity netboot=nfs nfsroot=192.168.1.20:/var/lib/tftpboot/Ubuntu/14.04/amd64 - initrd=Ubuntu/initrd.lz quiet splash - ENDTEXT - -Save and exit the file. - -Edit the “/var/lib/tftpboot/pxelinux.cfg/pxe.conf” file. - - sudo nano /var/lib/tftpboot/pxelinux.cfg/pxe.conf - -Add/edit as described below: - - MENU TITLE PXE Server - NOESCAPE 1 - ALLOWOPTIONS 1 - PROMPT 0 - MENU WIDTH 80 - MENU ROWS 14 - MENU TABMSGROW 24 - MENU MARGIN 10 - MENU COLOR border 30;44 #ffffffff #00000000 std - -Save and exit the file. - -### Add Ubuntu 14.04 Desktop Boot Images to PXE Server ### - -For this, Ubuntu kernel and initrd files are required. To get those files, you need the Ubuntu 14.04 Desktop ISO Image. You can download the Ubuntu 14.04 ISO image in the /mnt folder by issuing the following command: - - sudo cd /mnt - sudo wget http://releases.ubuntu.com/14.04/ubuntu-14.04.3-desktop-amd64.iso - -**Note**: the download URL might change as the ISO image is updated. Check out this website for the latest download link if the above URL is not working. 
- -Mount the ISO file, and copy all the files to the TFTP folder by issuing the following commands: - - sudo mount -o loop /mnt/ubuntu-14.04.3-desktop-amd64.iso /media/ - sudo cp -r /media/* /var/lib/tftpboot/Ubuntu/14.04/amd64/ - sudo cp -r /media/.disk /var/lib/tftpboot/Ubuntu/14.04/amd64/ - sudo cp /media/casper/initrd.lz /media/casper/vmlinuz /var/lib/tftpboot/Ubuntu/ - -### Configure NFS Server to Export ISO Contents ### - -Now you need to setup Installation Source Mirrors via NFS protocol. You can also use http and ftp for Installation Source Mirrors. Here I have used NFS to export ISO contents. - -To configure the NFS server, you need to edit the “/etc/exports” file. - - sudo nano /etc/exports - -Add/edit as described below: - - /var/lib/tftpboot/Ubuntu/14.04/amd64 *(ro,async,no_root_squash,no_subtree_check) - -Save and exit the file. For the changes to take effect, export and start NFS service. - - sudo exportfs -a - sudo /etc/init.d/nfs-kernel-server start - -Now your PXE Server is ready. - -### Configure Network Boot PXE Client ### - -A PXE client can be any computer system with a PXE network boot enable option. Now your clients can boot and install Ubuntu 14.04 Desktop by enabling “Boot From Network” options from their systems BIOS. - -You’re now ready to go – start your PXE Client Machine with the network boot enable option, and you should now see a sub-menu showing for your Ubuntu 14.04 Desktop that we created. - -![pxe](https://www.maketecheasier.com/assets/uploads/2015/09/pxe.png) - -### Conclusion ### - -Configuring network boot installation using PXE server is efficient and a time-saving method. You can install hundreds of client at a time in your local network. All you need is a PXE server and PXE enabled clients. Try it out, and let us know if this works for you. 
- -Reference: -- [PXE Server wiki][1] -- [PXE Server Ubuntu][2] - -Image credit: [fupsol_unl_20][3] - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/configure-pxe-server-ubuntu/ - -作者:[Hitesh Jethva][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/hiteshjethva/ -[1]:https://en.wikipedia.org/wiki/Preboot_Execution_Environment -[2]:https://help.ubuntu.com/community/PXEInstallServer -[3]:https://www.flickr.com/photos/jhcalderon/3681926417/ From a0933bc8102d1efd753357bcad3650a5ef9fac3f Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 22 Sep 2015 16:34:46 +0800 Subject: [PATCH 582/697] Create 20150921 Configure PXE Server In Ubuntu 14.04.md --- ...21 Configure PXE Server In Ubuntu 14.04.md | 251 ++++++++++++++++++ 1 file changed, 251 insertions(+) create mode 100644 translated/tech/20150921 Configure PXE Server In Ubuntu 14.04.md diff --git a/translated/tech/20150921 Configure PXE Server In Ubuntu 14.04.md b/translated/tech/20150921 Configure PXE Server In Ubuntu 14.04.md new file mode 100644 index 0000000000..eab3fb5224 --- /dev/null +++ b/translated/tech/20150921 Configure PXE Server In Ubuntu 14.04.md @@ -0,0 +1,251 @@ + + 在 Ubuntu 14.04 中配置 PXE 服务器 +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/09/pxe-featured.jpg) + +PXE(Preboot Execution Environment--预启动执行环境)服务器允许用户从网络中启动 Linux 发行版并且可以同时在数百台 PC 中安装而不需要 Linux ISO 镜像。如果你客户端的计算机没有 CD/DVD 或USB 引导盘,或者如果你想在大型企业中同时安装多台计算机,那么 PXE 服务器可以帮你节省时间和金钱。 + +在这篇文章中,我们将告诉你如何在 Ubuntu 14.04 配置 PXE 服务器。 + +### 配置网络 ### + +开始前,你需要先设置 PXE 服务器使用静态 IP。在你的系统中要使用静态 IP 地址,需要编辑 “/etc/network/interfaces” 文件。 + +1. 打开 “/etc/network/interfaces” 文件. 
+ + sudo nano /etc/network/interfaces + + 作如下修改: + + # 回环网络接口 + auto lo + iface lo inet loopback + # 主网络接口 + auto eth0 + iface eth0 inet static + address 192.168.1.20 + netmask 255.255.255.0 + gateway 192.168.1.1 + dns-nameservers 8.8.8.8 + +保存文件并退出。这将设置其 IP 地址为“192.168.1.20”。然后重新启动网络服务。 + + sudo /etc/init.d/networking restart + +### 安装 DHCP, TFTP 和 NFS: ### + +DHCP,TFTP 和 NFS 是 PXE 服务器的重要组成部分。首先,需要更新你的系统并安装所有需要的软件包。 + +为此,运行以下命令: + + sudo apt-get update + sudo apt-get install isc-dhcp-Server inetutils-inetd tftpd-hpa syslinux nfs-kernel-Server + +### 配置 DHCP 服务: ### + +DHCP 代表动态主机配置协议(Dynamic Host Configuration Protocol),并且它主要用于动态分配网络配置参数,如用于接口和服务的 IP 地址。在 PXE 环境中,DHCP 服务器允许客户端请求并自动获得一个 IP 地址来访问网络。 + +1. 编辑 “/etc/default/dhcp3-server” 文件. + + sudo nano /etc/default/dhcp3-server + + 作如下修改: + + INTERFACES="eth0" + +保存 (Ctrl + o) 并退出 (Ctrl + x) 文件. + +2. 编辑 “/etc/dhcp3/dhcpd.conf” 文件: + + sudo nano /etc/dhcp/dhcpd.conf + + 作如下修改: + + default-lease-time 600; + max-lease-time 7200; + subnet 192.168.1.0 netmask 255.255.255.0 { + range 192.168.1.21 192.168.1.240; + option subnet-mask 255.255.255.0; + option routers 192.168.1.20; + option broadcast-address 192.168.1.255; + filename "pxelinux.0"; + next-Server 192.168.1.20; + } + +保存文件并退出。 + +3. 启动 DHCP 服务. + + sudo /etc/init.d/isc-dhcp-server start + +### 配置 TFTP 服务器: ### + +TFTP 是一种文件传输协议,类似于 FTP。它不用进行用户认证也不能列出目录。TFTP 服务器总是监听网络上的 PXE 客户端。当它检测到网络中有 PXE 客户端请求 PXE 服务器时,它将提供包含引导菜单的网络数据包。 + +1. 配置 TFTP 时,需要编辑 “/etc/inetd.conf” 文件. + + sudo nano /etc/inetd.conf + + 作如下修改: + + tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot + + 保存文件并退出。 + +2. 编辑 “/etc/default/tftpd-hpa” 文件。 + + sudo nano /etc/default/tftpd-hpa + + 作如下修改: + + TFTP_USERNAME="tftp" + TFTP_DIRECTORY="/var/lib/tftpboot" + TFTP_ADDRESS="[:0.0.0.0:]:69" + TFTP_OPTIONS="--secure" + RUN_DAEMON="yes" + OPTIONS="-l -s /var/lib/tftpboot" + + 保存文件并退出。 + +3. 
使用 `xinetd` 让 boot 服务在每次系统开机时自动启动,并启动 tftpd 服务。 + + sudo update-inetd --enable BOOT + sudo service tftpd-hpa start + +4. 检查状态。 + + sudo netstat -lu + +它将如下所示: + + Proto Recv-Q Send-Q Local Address Foreign Address State + udp 0 0 *:tftp *:* + +### 配置 PXE 启动文件 ### + +现在,你需要将 PXE 引导文件 “pxelinux.0” 放在 TFTP 根目录下。为 TFTP 创建一个目录,并把 syslinux 在 “/usr/lib/syslinux/” 下提供的所有引导程序文件复制到 “/var/lib/tftpboot/” 下,操作如下: + + sudo mkdir /var/lib/tftpboot + sudo mkdir /var/lib/tftpboot/pxelinux.cfg + sudo mkdir -p /var/lib/tftpboot/Ubuntu/14.04/amd64/ + sudo cp /usr/lib/syslinux/vesamenu.c32 /var/lib/tftpboot/ + sudo cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/ + +#### 设置 PXELINUX 配置文件 #### + +PXE 配置文件定义了 PXE 客户端启动时显示的菜单,它能引导并与 TFTP 服务器关联。默认情况下,当一个 PXE 客户端启动时,它会使用自己的 MAC 地址指定要读取的配置文件,所以我们需要创建一个包含可引导内核列表的默认文件。 + +编辑 PXE 服务器配置文件,加入可用的安装选项。 + +编辑 “/var/lib/tftpboot/pxelinux.cfg/default” 文件: + + sudo nano /var/lib/tftpboot/pxelinux.cfg/default + + 作如下修改: + + DEFAULT vesamenu.c32 + TIMEOUT 100 + PROMPT 0 + MENU INCLUDE pxelinux.cfg/pxe.conf + NOESCAPE 1 + LABEL Try Ubuntu 14.04 Desktop + MENU LABEL Try Ubuntu 14.04 Desktop + kernel Ubuntu/vmlinuz + append boot=casper netboot=nfs nfsroot=192.168.1.20:/var/lib/tftpboot/Ubuntu/14.04/amd64 + initrd=Ubuntu/initrd.lz quiet splash + ENDTEXT + LABEL Install Ubuntu 14.04 Desktop + MENU LABEL Install Ubuntu 14.04 Desktop + kernel Ubuntu/vmlinuz + append boot=casper automatic-ubiquity netboot=nfs nfsroot=192.168.1.20:/var/lib/tftpboot/Ubuntu/14.04/amd64 + initrd=Ubuntu/initrd.lz quiet splash + ENDTEXT + +保存文件并退出。 + +编辑 “/var/lib/tftpboot/pxelinux.cfg/pxe.conf” 文件。 + + sudo nano /var/lib/tftpboot/pxelinux.cfg/pxe.conf + +作如下修改: + + MENU TITLE PXE Server + NOESCAPE 1 + ALLOWOPTIONS 1 + PROMPT 0 + MENU WIDTH 80 + MENU ROWS 14 + MENU TABMSGROW 24 + MENU MARGIN 10 + MENU COLOR border 30;44 #ffffffff #00000000 std + +保存文件并退出。 + +### 为 PXE 服务器添加 Ubuntu 14.04 桌面启动镜像 ### + +对于这一步,Ubuntu 内核和 initrd 文件是必需的。要获得这些文件,你需要 Ubuntu 14.04 桌面 ISO 镜像。你可以通过以下命令下载 Ubuntu 14.04 
ISO 镜像到 /mnt 目录: + + cd /mnt + sudo wget http://releases.ubuntu.com/14.04/ubuntu-14.04.3-desktop-amd64.iso + +**注意**: 下载用的 URL 可能会改变,因为 ISO 镜像会进行更新。如果上面的网址无法访问,请到 Ubuntu 发布页面查看最新的下载链接。 + +挂载 ISO 文件,使用以下命令将所有文件复制到 TFTP 文件夹中: + + sudo mount -o loop /mnt/ubuntu-14.04.3-desktop-amd64.iso /media/ + sudo cp -r /media/* /var/lib/tftpboot/Ubuntu/14.04/amd64/ + sudo cp -r /media/.disk /var/lib/tftpboot/Ubuntu/14.04/amd64/ + sudo cp /media/casper/initrd.lz /media/casper/vmlinuz /var/lib/tftpboot/Ubuntu/ + +### 将导出的 ISO 目录配置到 NFS 服务器上 ### + +现在,你需要通过 NFS 协议提供安装源镜像。你还可以使用 HTTP 和 FTP 来提供安装源镜像。在这里,我使用 NFS 导出 ISO 内容。 + +要配置 NFS 服务器,你需要编辑 “/etc/exports” 文件。 + + sudo nano /etc/exports + +作如下修改: + + /var/lib/tftpboot/Ubuntu/14.04/amd64 *(ro,async,no_root_squash,no_subtree_check) + +保存文件并退出。为使更改生效,导出共享目录并启动 NFS 服务。 + + sudo exportfs -a + sudo /etc/init.d/nfs-kernel-server start + +现在,你的 PXE 服务器已经准备就绪。 + +### 配置网络引导 PXE 客户端 ### + +任何支持 PXE 网络引导的系统都可以作为 PXE 客户端。要让客户端启动并安装 Ubuntu 14.04 桌面,需要在其 BIOS 中设置 “Boot From Network” 选项。 + +现在就可以动手了:用网络引导启动你的 PXE 客户端计算机,你应该能看到一个子菜单,显示了我们创建的 Ubuntu 14.04 桌面项。 + +![pxe](https://www.maketecheasier.com/assets/uploads/2015/09/pxe.png) + +### 结论 ### + +配置 PXE 服务器从网络启动安装能提高效率并节省时间。你可以在本地网络中同时安装数百个客户端。所有你需要的只是一个 PXE 服务器和能进行 PXE 启动的客户端。试试吧,如果这个对你有用请让我们知道。 + + +参考: +- [PXE Server wiki][1] +- [PXE Server Ubuntu][2] + +图片来源: [fupsol_unl_20][3] + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/configure-pxe-server-ubuntu/ + +作者:[Hitesh Jethva][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/hiteshjethva/ +[1]:https://en.wikipedia.org/wiki/Preboot_Execution_Environment +[2]:https://help.ubuntu.com/community/PXEInstallServer +[3]:https://www.flickr.com/photos/jhcalderon/3681926417/ From 
41b4a9c58009b81645a7b99dda32e9c80e29ac5f Mon Sep 17 00:00:00 2001 From: Ezio Date: Tue, 22 Sep 2015 17:20:02 +0800 Subject: [PATCH 583/697] =?UTF-8?q?=E5=B7=B2=E7=BB=8F=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=E8=BF=87=E4=BA=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md | 102 ------------------ 1 file changed, 102 deletions(-) delete mode 100644 sources/tech/20150921 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md diff --git a/sources/tech/20150921 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/sources/tech/20150921 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md deleted file mode 100644 index 897550432b..0000000000 --- a/sources/tech/20150921 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md +++ /dev/null @@ -1,102 +0,0 @@ -How to Setup Node JS v4.0.0 on Ubuntu 14.04 / 15.04 -================================================================================ -Hi everyone, Node.JS Version 4.0.0 has been out, the popular server-side JavaScript platform has combines the Node.js and io.js code bases. This release represents the combined efforts encapsulated in both the Node.js project and the io.js project that are now combined in a single codebase. The most important change is this Node.js is ships with version 4.5 of Google's V8 JavaScript engine, which is the same version that ships with the current Chrome browser. So, being able to more closely track V8’s releases means Node.js runs JavaScript faster, more securely, and with the ability to use many desirable ES6 language features. - -![Node JS](http://blog.linoxide.com/wp-content/uploads/2015/09/nodejs.png) - -Node.js 4.0.0 aims to provide an easy update path for current users of io.js and node as there are no major API changes. Let’s see how you can easily get it installed and setup on Ubuntu server by following this simple article. 
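Since Node.js 4.0.0 ships V8 4.5, the ES6 features mentioned above work without any `--harmony` flags. As a quick check after finishing the installation steps in this article, you can run a small sketch like the following (my own example, not part of the original article; the file name `es6-check.js` is arbitrary):

```javascript
'use strict';
// es6-check.js - a tiny sketch of ES6 features that ship enabled in
// Node.js 4.0.0 thanks to V8 4.5: let/const, arrow functions,
// template literals, classes and Set.

const greet = name => `Hello, ${name}!`;   // arrow function + template literal

class Release {                            // ES6 class syntax
  constructor(name, v8) { this.name = name; this.v8 = v8; }
  describe() { return `${this.name} (V8 ${this.v8})`; }
}

let seen = new Set(['io.js', 'node']);     // block-scoped let + ES6 Set

console.log(greet('Node.js'));                       // Hello, Node.js!
console.log(new Release('v4.0.0', '4.5').describe()); // v4.0.0 (V8 4.5)
console.log('merged projects:', seen.size);           // merged projects: 2
```

Run it with `node es6-check.js`; on Node.js 4.0.0 or later all three lines print without any feature flags.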
- -### Basic System Setup ### - -Node works perfectly on Linux, Macintosh, and Solaris operating systems and among the Linux operating systems it has the best results using Ubuntu OS. That's why we are to setup it Ubuntu 15.04 while the same steps can be followed using Ubuntu 14.04. - -**1) System Resources** - -The basic system resources for Node depend upon the size of your infrastructure requirements. So, here in this tutorial we will setup Node with 1 GB RAM, 1 GHz Processor and 10 GB of available disk space with minimal installation packages installed on the server that is no web or database server packages are installed. - -**2) System Update** - -It always been recommended to keep your system upto date with latest patches and updates, so before we move to the installation on Node, let's login to your server with super user privileges and run update command. - - # apt-get update - -**3) Installing Dependencies** - -Node JS only requires some basic system and software utilities to be present on your server, for its successful installation like 'make' 'gcc' and 'wget'. Let's run the below command to get them installed if they are not already present. - - # apt-get install python gcc make g++ wget - -### Download Latest Node JS v4.0.0 ### - -Let's download the latest Node JS version 4.0.0 by following this link of [Node JS Download Page][1]. - -![](http://blog.linoxide.com/wp-content/uploads/2015/09/download.png) - -We will copy the link location of its latest package and download it using 'wget' command as shown. - - # wget https://nodejs.org/download/rc/v4.0.0-rc.1/node-v4.0.0-rc.1.tar.gz - -Once download completes, unpack using 'tar' command as shown. - - # tar -zxvf node-v4.0.0-rc.1.tar.gz - -![](http://blog.linoxide.com/wp-content/uploads/2015/09/wget.png) - -### Installing Node JS v4.0.0 ### - -Now we have to start the installation of Node JS from its downloaded source code. 
So, change your directory and configure the source code by running its configuration script before compiling it on your ubuntu server. - - root@ubuntu-15:~/node-v4.0.0-rc.1# ./configure - -![](http://blog.linoxide.com/wp-content/uploads/2015/09/configure.png) - -Now run the 'make install' command to compile the Node JS installation package as shown. - - root@ubuntu-15:~/node-v4.0.0-rc.1# make install - -The make command will take a couple of minutes while compiling its binaries so after executinf above command, wait for a while and keep calm. - -### Testing Node JS Installation ### - -Once the compilation process is complete, we will test it if every thing went fine. Let's run the following command to confirm the installed version of Node JS. - - root@ubuntu-15:~# node -v - v4.0.0-pre - -By executing 'node' without any arguments from the command-line you will be dropped into the REPL (Read-Eval-Print-Loop) that has simplistic emacs line-editing where you can interactively run JavaScript and see the results. - -![](http://blog.linoxide.com/wp-content/uploads/2015/09/node.png) - -### Writing Test Program ### - -We can also try out a very simple console program to test the successful installation and proper working of Node JS. To do so we will create a file named "test.js" and write the following code into it and save the changes made in the file as shown. - - root@ubuntu-15:~# vim test.js - var util = require("util"); - console.log("Hello! This is a Node Test Program"); - :wq! - -Now in order to run the above program, from the command prompt run the below command. - - root@ubuntu-15:~# node test.js - -![](http://blog.linoxide.com/wp-content/uploads/2015/09/node-test.png) - -So, upon successful installation we will get the output as shown in the screen, where as in the above program it loads the "util" class into a variable "util" and then uses the "util" object to perform the console tasks. While the console.log is a command similar to the cout in C++. 
- -### Conclusion ### - -That’s it. Hope this gives you a good idea of Node.js going with Node.js on Ubuntu. If you are new to developing applications with Node.js. After all we can say that we can expect significant performance gains with Node JS Version 4.0.0. - --------------------------------------------------------------------------------- - -via: http://linoxide.com/ubuntu-how-to/setup-node-js-4-0-ubuntu-14-04-15-04/ - -作者:[Kashif Siddique][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/kashifs/ -[1]:https://nodejs.org/download/rc/v4.0.0-rc.1/ \ No newline at end of file From 4f33e9ff6741a806653e4a0f94d6bba2c00cd247 Mon Sep 17 00:00:00 2001 From: ezio Date: Tue, 22 Sep 2015 17:20:56 +0800 Subject: [PATCH 584/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md b/sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md index 5b4ad2251f..3e225ac866 100644 --- a/sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md +++ b/sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md @@ -1,3 +1,5 @@ +- translating by Ezio + Linux 4.3 Kernel To Add The MOST Driver Subsystem ================================================================================ While the [Linux 4.2][1] kernel hasn't been officially released yet, Greg Kroah-Hartman sent in early his pull requests for the various subsystems he maintains for the Linux 4.3 merge window. 
From b267e288b9f79b36298a0459f39e996b1f15e6b2 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 22 Sep 2015 21:55:38 +0800 Subject: [PATCH 585/697] PUB:20150818 Docker Working on Security Components Live Container Migration @bazz2 --- ...curity Components Live Container Migration.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) rename {translated/talk => published}/20150818 Docker Working on Security Components Live Container Migration.md (77%) diff --git a/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md b/published/20150818 Docker Working on Security Components Live Container Migration.md similarity index 77% rename from translated/talk/20150818 Docker Working on Security Components Live Container Migration.md rename to published/20150818 Docker Working on Security Components Live Container Migration.md index bd3f0451c7..2e60571b52 100644 --- a/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md +++ b/published/20150818 Docker Working on Security Components Live Container Migration.md @@ -1,16 +1,16 @@ -Docker Working on Security Components, Live Container Migration +Docker 在安全组件、实时容器迁移方面的进展 ================================================================================ ![Docker Container Talk](http://www.eweek.com/imagesvr_ce/1905/290x195DockerMarianna.jpg) -**Docker 开发者在 Containercon 上的演讲,谈论将来的容器在安全和实时迁移方面的创新** +**这是 Docker 开发者在 Containercon 上的演讲,谈论将来的容器在安全和实时迁移方面的创新** 来自西雅图的消息。当前 IT 界最热的词汇是“容器”,美国有两大研讨会:Linuxcon USA 和 Containercon,后者就是为容器而生的。 Docker 公司是开源 Docker 项目的商业赞助商,本次研讨会这家公司有 3 位高管带来主题演讲,但公司创始人 Solomon Hykes 没上场演讲。 -Hykes 曾在 2014 年的 Linuxcon 上进行过一次主题演讲,但今年的 Containeron 他只坐在观众席上。而工程部高级副总裁 Marianna Tessel、Docker 首席安全员 Diogo Monica 和核心维护员 Michael Crosby 为我们演讲 Docker 新增的功能和将来会有的功能。 +Hykes 曾在 2014 年的 Linuxcon 上进行过一次主题演讲,但今年的 Containeron 他只坐在观众席上。而工程部高级副总裁 Marianna Tessel、Docker 首席安全官 Diogo Monica 和核心维护员 Michael Crosby 为我们演讲 Docker 新增的功能和将来会有的功能。 -Tessel 强调 
Docker 现在已经被很多世界上最大的组织用在生产环境中,包括美国政府。Docker 也被用在小环境中,比如树莓派,一块树莓派上可以跑 2300 个容器。 +Tessel 强调 Docker 现在已经被很多世界上大型组织用在生产环境中,包括美国政府。Docker 也被用在小环境中,比如树莓派,一块树莓派上可以跑 2300 个容器。 “Docker 的功能正在变得越来越强大,而部署方法变得越来越简单。”Tessel 在会上说道。 @@ -18,9 +18,9 @@ Tessel 把 Docker 形容成一艘游轮,内部由强大而复杂的机器驱 Docker 试图解决的领域是简化安全配置。Tessel 认为对于大多数用户和组织来说,避免网络漏洞所涉及的安全问题是一个乏味而且复杂的过程。 -于是 Docker Content Trust 就出现在 Docker 1.8 release 版本中了。安全项目领导 Diogo Mónica 中加入 Tessel 上台讨论,说安全是一个难题,而 Docker Content Trust 就是为解决这个难道而存在的。 +于是 Docker Content Trust 就出现在 Docker 1.8 release 版本中了。安全项目领导 Diogo Mónica 中加入了 Tessel 的台上讨论,说安全是一个难题,而 Docker Content Trust 就是为解决这个难道而存在的。 -Docker Content Trusst 提供一种方法来验证一个 Docker 应用是否可信,以及多种方法来限制欺骗和病毒注入。 +Docker Content Trust 提供一种方法来验证一个 Docker 应用是否可信,以及多种方法来限制欺骗和病毒注入。 为了证明他的观点,Monica 做了个现场示范,演示 Content Trust 的效果。在一个实验中,一个网站在更新过程中其 Web App 被人为攻破,而当 Content Trust 启动后,这个黑客行为再也无法得逞。 @@ -32,7 +32,7 @@ Docker 首席维护员 Micheal Crosby 在台上做了个实时迁移的演示, 一个容器也可以克隆到另一个地方,Crosby 将他的克隆容器称为“多利”,就是世界上第一只被克隆出来的羊的名字。 -Tessel 也花了点时间聊了下 RunC 组件,这是个正在被 Open Container Initiative 作为多方开发的项目,目的是让窗口兼容 Linux、Windows 和 Solaris。 +Tessel 也花了点时间聊了下 RunC 组件,这是个正在被 Open Container Initiative 作为多方开发的项目,目的是让它可以从 Linux 扩展到包括 Windows 和 Solaris 在内的多种操作系统。 Tessel 总结说她不知道 Docker 的未来是什么样,但对此抱非常乐观的态度。 @@ -46,7 +46,7 @@ via: http://www.eweek.com/virtualization/docker-working-on-security-components-l 作者:[Sean Michael Kerner][a] 译者:[bazz2](https://github.com/bazz2) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 7ec5a135a47427bfaf2b1618e8d53f941639d12c Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 22 Sep 2015 22:29:38 +0800 Subject: [PATCH 586/697] PUB:20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China @geekpi --- ...OS Runs 42 Percent of Dell PCs in China.md | 38 +++++++++++++++++ ...OS Runs 42 Percent of Dell PCs in China.md | 41 ------------------- 2 files changed, 38 insertions(+), 41 
deletions(-) create mode 100644 published/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md delete mode 100644 translated/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md diff --git a/published/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md b/published/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md new file mode 100644 index 0000000000..6364017ac7 --- /dev/null +++ b/published/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md @@ -0,0 +1,38 @@ +Ubuntu 开源操作系统运行在中国 42% 的 Dell PC 上 +================================================================================ + +> Dell 称它在中国市场出售的 42% 的 PC 运行的是 Kylin,这是一款 Canonical 帮助开发的基于 Ubuntu 的操作系统。 + +让开源粉丝欢喜的是:Linux 桌面年来了。或者说中国正在接近这个目标,[Dell][1] 报告称它售卖的超过 40% 的 PC 机运行的是 [Canonical][3] 帮助开发的 [Ubuntu Linux][2]。 + +特别地,Dell 称 42% 的中国电脑运行 NeoKylin(中标麒麟),一款中国本土倾力打造的用于替代 [Microsoft][4] Windows的操作系统。它也简称麒麟,这是一款从 2013 年出来的基于 Ubuntu 的操作系统,也是这年开始 Canonical 公司与中国政府合作建立一个专供中国市场的 Ubuntu 变种。 + +麒麟的早期版本出现于 2001 年左右,也是基于其他操作系统,包括 FreeBSD,这是一个开放源码但是不同于 Linux 的类 Unix 操作系统。 + +Ubuntu 麒麟的外观和感觉很像 Ubuntu 的现代版本。它拥有的 [Unity][5] 界面,并运行开源软件的标准套件,以及专门的如 Youker 助理程序,它是一个图形化的前端,帮助用户管理基本计算任务。但是麒麟的默认主题使得它看起来有点像 Windows 而不是 Ubuntu。 + + 鉴于桌面 Linux PC 市场在世界上大多数国家的相对停滞,戴尔的宣布是令人吃惊的。结合中国对当前版本 windows 的轻微[敌意][6],这个消息并不看好着微软在中国市场的前景。 + +紧跟着 Dell 公司[宣布][7]在华投资1.25亿美元之后,一位决策者给华尔街杂志的评论中提到了 Dell 在中国市场上 PC 的销售。 + + ![Ubuntu Kylin](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/09/hey_2.png) + + +-------------------------------------------------------------------------------- + +via: http://thevarguy.com/open-source-application-software-companies/091515/ubuntu-linux-based-open-source-os-runs-42-percent-dell-pc + +作者:[Christopher Tozzi][a] +译者:[geekpi](https://github.com/geeekpi) +校对:[wxy](https://github.com/wxy) + +本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://thevarguy.com/author/christopher-tozzi +[1]:http://dell.com/ +[2]:http://ubuntu.com/ +[3]:http://canonical.com/ +[4]:http://microsoft.com/ +[5]:http://unity.ubuntu.com/ +[6]:http://www.wsj.com/articles/windows-8-faces-new-criticism-in-china-1401882772 +[7]:http://thevarguy.com/business-technology-solution-sales/091415/dell-125-million-directed-china-jobs-new-business-and-innovation diff --git a/translated/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md b/translated/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md deleted file mode 100644 index eea7af0368..0000000000 --- a/translated/talk/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md +++ /dev/null @@ -1,41 +0,0 @@ -基于Linux的Ubuntu开源操作系统在中国42%的Dell PC上运行 -================================================================================ -> Dell称它在中国市场出售的42%的PC运行的是Kylin,一款Canonical帮助创建的基于Ubuntu的操作系统。 - - 让开源粉丝欢喜的是:Linux桌面年来了。或者说中国正在接近这个目标,[Dell][1]报告称超过40%售卖的PC机运行的是 [Canonical][3]帮助开发的[Ubuntu Linux][2]。 - - 特别地,Dell称42%的中国电脑运行NeoKylin,一款中国本土倾力打造的用于替代[Microsoft][4] (MSFT) Windows的操作系统。它也简称麒麟,一款从2013年出来的基于Ubuntu的操作系统,也是这年开始Canonical公司与中国政府合作来建立一个专为中国市场Ubuntu变种。 - - 2001年左右早期版本的麒麟,都是基于其他操作系统,包括FreeBSD,一个开放源码的区别于Linux的类Unix操作系统。 - - Ubuntu的麒麟的外观和感觉很像Ubuntu的现代版本。它拥有的[Unity][5]界面,并运行标准开源套件,以及专门的如Youker助理程序,它是一个图形化的前端,帮助用户管理的基本计算任务。但是麒麟的默认主题使得它看起来有点像Windows而不是Ubuntu。 - - 鉴于桌面Linux PC市场在世界上大多数国家的相对停滞,戴尔的宣布是惊人的。并结合中国对现代windows的轻微[敌意][6],这个消息并不预示着微软在中国市场的前景。 - - 在Dell公司[宣布][7]在华投资1.25亿美元很快之后一位行政官给华尔街杂志的评论中提到了Dell在中国市场上PC的销售。 - ![Ubuntu Kylin](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/09/hey_2.png) - - - - - - - --------------------------------------------------------------------------------- - -via: 
http://thevarguy.com/open-source-application-software-companies/091515/ubuntu-linux-based-open-source-os-runs-42-percent-dell-pc - -作者:[Christopher Tozzi][a] -译者:[geekpi](https://github.com/geeekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://thevarguy.com/author/christopher-tozzi -[1]:http://dell.com/ -[2]:http://ubuntu.com/ -[3]:http://canonical.com/ -[4]:http://microsoft.com/ -[5]:http://unity.ubuntu.com/ -[6]:http://www.wsj.com/articles/windows-8-faces-new-criticism-in-china-1401882772 -[7]:http://thevarguy.com/business-technology-solution-sales/091415/dell-125-million-directed-china-jobs-new-business-and-innovation From 8a1c46c23d8cd30c32183951741a4d34f468b271 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 23 Sep 2015 16:44:40 +0800 Subject: [PATCH 587/697] =?UTF-8?q?20150923-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...e--Minimal Icon Theme For Linux Desktop.md | 86 +++++++++ ...mistic on OpenStack Revenue Opportunity.md | 37 ++++ ...o Upgrade From Oracle 11g To Oracle 12c.md | 165 ++++++++++++++++++ 3 files changed, 288 insertions(+) create mode 100644 sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md create mode 100644 sources/talk/20150921 Red Hat CEO Optimistic on OpenStack Revenue Opportunity.md create mode 100644 sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md diff --git a/sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md b/sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md new file mode 100644 index 0000000000..532004d419 --- /dev/null +++ b/sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md @@ -0,0 +1,86 @@ +Xenlism WildFire: Minimal Icon Theme For Linux Desktop 
+================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icon-theme-linux-3.png) + +It’s been some time since I covered an icon theme on It’s FOSS, perhaps because no theme caught my eye in recent times. There are a few which I consider the [best icon themes for Ubuntu][1], but these are mostly the known ones like Numix and Moka, and I am pretty content using Numix. + +But a few days back I came across [Xenlism WildFire][2] and I must say, it looks damn good. Minimalism is the current popular trend in the design world and Xenlism perfects it. Smooth and tranquil, Xenlism is inspired by Nokia’s MeeGo and Apple’s iOS icons. + +Have a look at some of its icons for various applications: + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons.png) + +Folder icons look like: + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons-1.png) + +The theme developer, [Nattapong Pullkhow][3], claims that the icon theme is best suited for GNOME, but it should work fine with Unity, KDE and Mate as well. + +### Install Xenlism Wildfire icon theme ### + +The Xenlism theme is around 230 MB in download size, which is slightly heavy for an icon theme, but considering that it has support for a huge number of applications, the size should not be surprising. 
+ +#### Installing in Ubuntu/Debian based Linux distributions #### + +To install it in Ubuntu variants, use the command below in a terminal to add the GPG key: + + sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 90127F5B + +After adding the key, use the following commands: + + echo "deb http://downloads.sourceforge.net/project/xenlism-wildfire/repo deb/" | sudo tee -a /etc/apt/sources.list + sudo apt-get update + sudo apt-get install xenlism-wildfire-icon-theme + +In addition to the icon theme, you can download a matching minimal wallpaper as well: + + sudo apt-get install xenlism-artwork-wallpapers + +#### Installing in Arch based Linux distributions #### + +You’ll have to edit the Pacman repository. In a terminal, use the following command: + + sudo nano /etc/pacman.conf + +Add the following section to this configuration file: + + [xenlism-arch] + SigLevel = Never + Server = http://downloads.sourceforge.net/project/xenlism-wildfire/repo/arch + +Update the system and install icon theme and wallpapers as following: + + sudo pacman -Syyu + sudo pacman -S xenlism-wildfire + +#### Using Xenlism icon theme #### + +In Ubuntu Unity, [use Unity Tweak Tool to change the icon theme][4]. In GNOME, [use Gnome Tweak Tool to change the theme][5]. I presume that you know how to do this part, but if you are stuck let me know and I’ll add some screenshots. + +Below is a screenshot of Xenlism icon theme in use in Ubuntu 15.04 Unity. Xenlism desktop wallpaper is in the background. + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons-2.png) + +It looks good, isn’t it? If you try it and like it, feel free to thank the developer: + +> [Xenlism is a stunning minimal icon theme for Linux. Thanks @xenatt for this beautiful theme.][6] + +I hope you like it. Do share your views on this icon theme or your preferred icon theme. Is Xenlism good enough to change your favorite icon theme? 
+ +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/xenlism-wildfire-theme/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://itsfoss.com/best-icon-themes-ubuntu-1404/ +[2]:http://xenlism.github.io/wildfire/ +[3]:https://plus.google.com/+NattapongPullkhow +[4]:http://itsfoss.com/install-numix-ubuntu/ +[5]:http://itsfoss.com/install-switch-themes-gnome-shell/ +[6]:https://twitter.com/share?text=Xenlism+is+a+stunning+minimal+icon+theme+for+Linux.+Thanks+%40xenatt+for+this+beautiful+theme.&via=itsfoss&related=itsfoss&url=http://itsfoss.com/xenlism-wildfire-theme/ \ No newline at end of file diff --git a/sources/talk/20150921 Red Hat CEO Optimistic on OpenStack Revenue Opportunity.md b/sources/talk/20150921 Red Hat CEO Optimistic on OpenStack Revenue Opportunity.md new file mode 100644 index 0000000000..d176ed8d77 --- /dev/null +++ b/sources/talk/20150921 Red Hat CEO Optimistic on OpenStack Revenue Opportunity.md @@ -0,0 +1,37 @@ +Red Hat CEO Optimistic on OpenStack Revenue Opportunity +================================================================================ +Red Hat continues to accelerate its growth thanks to an evolving mix of platform and infrastructure technology revolving around Linux and the cloud. Red Hat announced its second quarter fiscal 2016 financial results on September 21, once again exceeding expectations. + +![](http://www.serverwatch.com/imagesvr_ce/1212/icon-redhatcloud-r.jpg) + +For the quarter, Red Hat reported revenue of $504 million for a 13 percent year-over-year gain. Net Income was reported at $51 million, up from $47 Red Hatmillion in the second quarter of fiscal 2015. Looking forward, Red Hat provided some aggressive guidance for the coming quarter and the full year. 
For the third quarter, Red Hat provided guidance for revenue to be in the range of $519 million to $523 million, which is a 15 percent year-over-year gain. + +On a full year basis, Red Hat's full year guidance is for fiscal 2016 revenue of $2.044 billion, for a 14 percent year-over-year gain. + +Red Hat CFO Frank Calderoni commented during the earnings call that all of Red Hat's top 30 largest deals were approximately $1 million or more. He noted that Red Hat had four deals that were in excess of $5 million and one deal that was well over $10 million. As has been the case in recent years, cross selling across Red Hat products is strong with 65 percent of all deals including one or more components from Red Hat's group of application development and emerging technologies offerings. + +"We expect the growing adoption of these technologies, like Middleware, the RHEL OpenStack platform, OpenShift, cloud management and storage, to continue to drive revenue growth," Calderoni said. + +### OpenStack ### + +During the earnings call, Red Hat CEO Jim Whitehurst was repeatedly asked about the revenue prospects for OpenStack. Whitehurst said that the recently released Red Hat OpenStack Platform 7.0 is a big jump forward thanks to the improved installer. + +"It does a really good job of kind of identifying hardware and lighting it up," Whitehurst said. "Of course, that means there's a lot of work to do around certifying that hardware, making sure it lights up appropriately." + +Whitehurst said that he's starting to see a lot more production application start to move to the OpenStack cloud. He cautioned however that it's still largely the early adopters moving to OpenStack in production and it isn't quite mainstream, yet. + +From a competitive perspective, Whitehurst talked specifically about Microsoft, HP and Mirantis. 
In Whitehurst's view, many organizations will continue to use multiple operating systems, and if they choose Microsoft for one part, they are more likely to choose an open-source option as the alternative. Whitehurst said he doesn't see a lot of head-to-head competition against HP in cloud, but he does see Mirantis. + +"We've had several wins where people were moving away from Mirantis to RHEL," Whitehurst said. + +-------------------------------------------------------------------------------- + +via: http://www.serverwatch.com/server-news/red-hat-ceo-optimistic-on-openstack-revenue-opportunity.html + +作者:[Sean Michael Kerner][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm \ No newline at end of file diff --git a/sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md b/sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md new file mode 100644 index 0000000000..103f3922a5 --- /dev/null +++ b/sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md @@ -0,0 +1,165 @@ +How To Upgrade From Oracle 11g To Oracle 12c +================================================================================ +Hello all. + +Today we will go through how to upgrade from Oracle 11g to Oracle 12c. Let’s start then. + +For this, I will use a CentOS 7 64-bit Linux distribution. + +I am assuming that you have already installed Oracle 11g on your system. Here I will show what I did when I installed Oracle 11g. + +I select “Create and configure a database” for Oracle 11g, just like the image below. + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage1.png) + +Then I select “Desktop Class” for my Oracle 11g installation. For production you must select “Server Class”. 
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage2.png)
+
+Then you must enter all the paths for Oracle 11g, and your password as well. Below is what I used for my Oracle 11g installation. Make sure your password meets the Oracle password policy.
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage3.png)
+
+Next, I set the Inventory Directory path as below.
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage4.png)
+
+So far, I have shown what I did to install Oracle 11g, since that is what we are going to upgrade to 12c.
+
+Let's upgrade to Oracle 12c from Oracle 11g.
+
+You must download the two (2) zip files from this [link][1]. Download and unzip both files to the same directory. The file names are **linuxamd64_12c_database_1of2.zip** and **linuxamd64_12c_database_2of2.zip** respectively. Extracting or unzipping them will create a folder called database.
+
+Note: Before upgrading to 12c, make sure you have all the necessary packages installed on your CentOS system, that all path variables are OK, and that all other prerequisites are met before beginning.
+
+The following packages must be installed, with the correct versions:
+
+- binutils
+- compat-libstdc++
+- gcc
+- glibc
+- libaio
+- libgcc
+- libstdc++
+- make
+- sysstat
+- unixodbc
+
+Search for the correct rpm versions on the internet.
+
+You can also combine a query for multiple packages, and review the output for the correct versions. For example, type the following command in the terminal to check:
+
+    rpm -q binutils compat-libstdc++ gcc glibc libaio libgcc libstdc++ make sysstat unixodbc
+
+The following packages (or later or earlier versions) must be installed on your system:
+
+- binutils-2.23.52.0.1-12.el7.x86_64
+- compat-libcap1-1.10-3.el7.x86_64
+- gcc-4.8.2-3.el7.x86_64
+- gcc-c++-4.8.2-3.el7.x86_64
+- glibc-2.17-36.el7.i686
+- glibc-2.17-36.el7.x86_64
+- glibc-devel-2.17-36.el7.i686
+- glibc-devel-2.17-36.el7.x86_64
+- ksh
+- libaio-0.3.109-9.el7.i686
+- libaio-0.3.109-9.el7.x86_64
+- libaio-devel-0.3.109-9.el7.i686
+- libaio-devel-0.3.109-9.el7.x86_64
+- libgcc-4.8.2-3.el7.i686
+- libgcc-4.8.2-3.el7.x86_64
+- libstdc++-4.8.2-3.el7.i686
+- libstdc++-4.8.2-3.el7.x86_64
+- libstdc++-devel-4.8.2-3.el7.i686
+- libstdc++-devel-4.8.2-3.el7.x86_64
+- libXi-1.7.2-1.el7.i686
+- libXi-1.7.2-1.el7.x86_64
+- libXtst-1.2.2-1.el7.i686
+- libXtst-1.2.2-1.el7.x86_64
+- make-3.82-19.el7.x86_64
+- sysstat-10.1.5-1.el7.x86_64
+
+You will also need the unixODBC-2.3.1 or later driver.
+
+I hope you already have a user named oracle on your CentOS 7 system from when you installed Oracle 11g.
+
+Let's log in to CentOS as the user oracle.
+
+After logging in as oracle, open a terminal.
+
+Now change directory to the directory where you extracted both zip files. Then type the following in the terminal to begin the installation of 12c:
+
+    ./runInstaller
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212image5.png)
+
+If everything goes right, you will see something like the image below, which starts the installation process of 12c.
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage6.png)
+
+Then you can skip the updates, or you can download the latest updates. Updating is recommended for a production server, though I am skipping it here.
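As an aside, before continuing with the graphical steps: the package check described earlier can be scripted so that any missing prerequisite stands out at a glance. This is only a sketch, not part of the original procedure; the package names mirror the list given above, and `rpm -q` is the CentOS/RHEL query command already used in this article.

```shell
# Hypothetical helper: flag any prerequisite package that rpm cannot find.
# Trim or extend the list to match the requirements for your release.
missing=""
for pkg in binutils gcc glibc libaio libgcc libstdc++ make sysstat; do
    rpm -q "$pkg" >/dev/null 2>&1 || missing="$missing $pkg"
done
if [ -z "$missing" ]; then
    echo "all listed prerequisite packages are installed"
else
    echo "not installed:$missing"
fi
```

Note that this only confirms presence; you still need to compare the reported versions against the list above by eye.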
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage7.png)
+
+Now, select "Upgrade an existing database".
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage8.png)
+
+For the language, English is already selected. Click next to continue, or add languages according to your needs.
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage9.png)
+
+Now, select Enterprise Edition. You can choose based on your requirements.
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage10.png)
+
+Then select your path for the software location. This is pretty much self-explanatory.
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage11.png)
+
+For step 7, keep the default options, just like below.
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage12.png)
+
+In step 9, you will get a summary report like the image below.
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage13.png)
+
+If everything is fine, you can start your installation by clicking Install in step 9, which will take you to step 10.
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage14.png)
+
+In the process you might encounter some errors, and you will need to Google for the fixes. There are a number of errors you may encounter, so I am not covering them here.
+
+Be patient, and step 10 will show "Succeeded" for each item one by one. If not, search Google and take the necessary steps to fix the problem. Again, there are a number of errors you may encounter, and I can't provide all the details here.
+
+Now, configure the listener by simply following the on-screen instructions.
+
+After the listener process finishes, it will start the Database Upgrade Assistant. Select "Upgrade Oracle Database".
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/DUAimage15.png)
+
+In step 2, you will find that it shows the 11g location path along with the 12c location path. You will also see that it indicates Target Oracle Home Release 12 and Source Oracle Home Release 11. Click next in step 2 and move to step 3.
+
+![](http://www.unixmen.com/wp-content/uploads/2015/09/DUAimage16.png)
+
+Follow the on-screen instructions and finish the process.
+
+In the last step, you will get a success window showing that the upgrade of the Oracle database was successful.
+
+**A word of caution**: Before upgrading your production server to 12c, please make sure you have done the upgrade on some other workstation first, so that you can fix all the errors you will encounter along the way. Never try upgrading a production server without knowing all the details.
+
+--------------------------------------------------------------------------------
+
+via: http://www.unixmen.com/upgrade-from-oracle-11g-to-oracle-12c/
+
+作者:[Mohammad Forhad Iftekher][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.unixmen.com/author/forhad/
+[1]:http://www.oracle.com/technetwork/database/enterprise-edition/downloads/database12c-linux-download-1959253.html
\ No newline at end of file
From 9af8dc42e82ac53153caa22b04d81003a9e17326 Mon Sep 17 00:00:00 2001
From: alim0x
Date: Wed, 23 Sep 2015 20:37:05 +0800
Subject: [PATCH 588/697] [translating]19 - The history of Android

---
 .../The history of Android/19 - The history of Android.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/sources/talk/The history of Android/19 - The history of Android.md b/sources/talk/The history of Android/19 - The history of Android.md
index 32841f5be9..4fff9d0e37 100644
--- a/sources/talk/The history of Android/19 - The history of Android.md
+++ b/sources/talk/The history of Android/19 - The history of Android.md
@@ -1,3 +1,5 @@
+alim0x translating
+
 The history of Android
 ================================================================================
 ![Google Music Beta running on Gingerbread.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/device-2014-03-31-110613.png)
@@ -68,4 +70,4 @@ via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-histor
 [1]:http://arstechnica.com/gadgets/2011/05/hands-on-grooving-on-the-go-with-impressive-google-music-beta/
 [2]:http://cdn.arstechnica.net/wp-content/uploads/2014/02/32.png
 [a]:http://arstechnica.com/author/ronamadeo
-[t]:https://twitter.com/RonAmadeo
\ No newline at end of file
+[t]:https://twitter.com/RonAmadeo
From 1b7b9c857b65c92f3e742a09672310f18041276e Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 24 Sep 2015 09:08:59 +0800
Subject: [PATCH 589/697] translating

---
 ...1 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md b/sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md
index c3f9dc366d..5e0bb30e08 100644
--- a/sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md
+++ b/sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md
@@ -1,3 +1,5 @@
+translating----geekpi
+
 How to Setup IonCube Loaders on Ubuntu 14.04 / 15.04
 ================================================================================
 IonCube Loaders is an encryption/decryption utility for PHP applications which assists in speeding up the pages that are served. It also protects your website's PHP code from being viewed and run on unlicensed computers. Using ionCube encoded and secured PHP files requires a file called ionCube Loader to be installed on the web server and made available to PHP, which is often required for a lot of PHP based applications. It handles the reading and execution of encoded files at run time. PHP can use the loader with one line added to a PHP configuration file, 'php.ini'.
@@ -86,4 +88,4 @@ via: http://linoxide.com/ubuntu-how-to/setup-ioncube-loaders-ubuntu-14-04-15-04/
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
 
-[a]:http://linoxide.com/author/kashifs/
\ No newline at end of file
+[a]:http://linoxide.com/author/kashifs/
From 33e2644eb8ff5391c207d72e05b70ce0903fd821 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 24 Sep 2015 09:56:47 +0800
Subject: [PATCH 590/697] translated

---
 ...onCube Loaders on Ubuntu 14.04 or 15.04.md | 91 -------------------
 ...onCube Loaders on Ubuntu 14.04 or 15.04.md | 91 +++++++++++++++++++
 2 files changed, 91 insertions(+), 91 deletions(-)
 delete mode 100644 sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md
 create mode 100644 translated/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md

diff --git a/sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md b/sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md
deleted file mode 100644
index 5e0bb30e08..0000000000
--- a/sources/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md
+++ /dev/null
@@ -1,91 +0,0 @@
-translating----geekpi
-
-How to Setup IonCube Loaders on Ubuntu 14.04 / 15.04
-================================================================================
-IonCube Loaders is an encryption/decryption utility for PHP applications which assists in speeding up the pages that are served. It also protects your website's PHP code from being viewed and run on unlicensed computers. Using ionCube encoded and secured PHP files requires a file called ionCube Loader to be installed on the web server and made available to PHP, which is often required for a lot of PHP based applications. It handles the reading and execution of encoded files at run time. PHP can use the loader with one line added to a PHP configuration file, 'php.ini'.
-
-### Prerequisites ###
-
-In this article we will set up the installation of Ioncube Loader on Ubuntu 14.04/15.04, so that it can be used in all PHP modes. The only requirement for this tutorial is to have a "php.ini" file on your system, with the LEMP stack installed on the server.
-
-### Download IonCube Loader ###
-
-Log in to your Ubuntu server to download the latest IonCube loader package according to your operating system architecture, whether you are using a 32-bit or 64-bit OS. You can get the package by issuing the following command with super user privileges or as the root user.
-
-    # wget http://downloads3.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz
-
-![download ioncube](http://blog.linoxide.com/wp-content/uploads/2015/09/download1.png)
-
-After downloading, unpack the archive into the "/usr/local/src/" folder by issuing the following command.
-
-    # tar -zxvf ioncube_loaders_lin_x86-64.tar.gz -C /usr/local/src/
-
-![extracting archive](http://blog.linoxide.com/wp-content/uploads/2015/09/2-extract.png)
-
-After extracting the archive, we can see the list of all the modules present in it. But we need only the one relevant to the version of PHP installed on our system.
-
-To check your PHP version, you can run the command below to find the relevant modules.
-
-    # php -v
-
-![ioncube modules](http://blog.linoxide.com/wp-content/uploads/2015/09/modules.png)
-
-With reference to the output of the above command, we can see that the PHP version installed on the system is 5.6.4, so we need to copy the appropriate module to the PHP modules folder.
-
-To do so, we will create a new folder named "ioncube" within the "/usr/local/" directory and copy the required ioncube loader modules into it.
-
-    root@ubuntu-15:/usr/local/src/ioncube# mkdir /usr/local/ioncube
-    root@ubuntu-15:/usr/local/src/ioncube# cp ioncube_loader_lin_5.6.so ioncube_loader_lin_5.6_ts.so /usr/local/ioncube/
-
-### PHP Configuration ###
-
-Now we need to put the following line into the PHP configuration file "php.ini", which is located in the "/etc/php5/cli/" folder, and then restart your web server's services and the php module.
-
-    # vim /etc/php5/cli/php.ini
-
-![ioncube zend extension](http://blog.linoxide.com/wp-content/uploads/2015/09/zend-extension.png)
-
-In our scenario we have the Nginx web server installed, so we will run the following commands to restart its services.
-
-    # service php5-fpm restart
-    # service nginx restart
-
-![web services](http://blog.linoxide.com/wp-content/uploads/2015/09/web-services.png)
-
-### Testing IonCube Loader ###
-
-To test the ioncube loader in the PHP configuration for your website, create a test file called "info.php" with a phpinfo script and place it into the web directory of your web server.
-
-    # vim /usr/share/nginx/html/info.php
-
-Then save the changes after placing the phpinfo script, reload the web server services, and access "info.php" in your browser with your domain name or server's IP address.
-
-You will be able to see the below section at the bottom of your php modules information.
-
-![php info](http://blog.linoxide.com/wp-content/uploads/2015/09/php-info.png)
-
-From the terminal, issue the following command to verify the php version; it shows that the ionCube PHP Loader is enabled.
-
-    # php -v
-
-![php ioncube loader](http://blog.linoxide.com/wp-content/uploads/2015/09/php-ioncube.png)
-
-The output shown by the PHP version command clearly indicates that the IonCube loader has been successfully integrated with PHP.
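For reference, the php.ini change pictured in the configuration step above comes down to a single added line. The sketch below demonstrates it against a local working copy of the file (a hypothetical `php.ini.demo`), so the edit can be previewed safely; on the actual server the target is /etc/php5/cli/php.ini, and the loader path follows the copy step performed earlier.

```shell
# Demonstration on a working copy; substitute /etc/php5/cli/php.ini (as root)
# on a real server. The _ts loader variant is only for thread-safe PHP builds.
ini=php.ini.demo
touch "$ini"
cp "$ini" "$ini.bak"    # keep a rollback copy before editing
echo 'zend_extension = /usr/local/ioncube/ioncube_loader_lin_5.6.so' >> "$ini"
grep '^zend_extension' "$ini"    # confirm the line landed
```

After applying the real edit, restarting php5-fpm and nginx as shown above makes `php -v` report the ionCube PHP Loader.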
-
-### Conclusion ###
-
-At the end of this tutorial you learnt about the installation and configuration of ionCube Loader on Ubuntu with the Nginx web server; there will be no real difference if you are using any other web server. Installing the loader is simple when it is done correctly, and on most servers its installation will work without a problem. However, there is no such thing as a "standard PHP installation": servers can be set up in many different ways, and with different features enabled or disabled.
-
-If you are on a shared server, then make sure that you have run the ioncube-loader-helper.php script, and click the link to test the run-time installation. If you still face an issue while doing your setup, feel free to contact us and leave us a comment.
-
---------------------------------------------------------------------------------
-
-via: http://linoxide.com/ubuntu-how-to/setup-ioncube-loaders-ubuntu-14-04-15-04/
-
-作者:[Kashif Siddique][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/kashifs/
diff --git a/translated/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md b/translated/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md
new file mode 100644
index 0000000000..57652e7e44
--- /dev/null
+++ b/translated/tech/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md
@@ -0,0 +1,91 @@
+如何在Ubuntu 14.04 / 15.04中设置IonCube Loaders
+================================================================================
+IonCube Loaders是PHP中用于辅助加速页面的加解密工具,它还可以保护你的PHP代码不会在未授权的计算机上被查看和运行。使用ionCube编码加密过的PHP文件,需要在web服务器上安装一个叫做ionCube Loader的文件并供PHP调用,许多基于PHP的应用都需要它。它在运行时负责读取并执行编码后的文件。PHP只需在"php.ini"中添加一行配置就可以使用这个loader。
+
+### 前提条件 ###
+
+在这篇文章中,我们将在Ubuntu 14.04/15.04上安装Ioncube Loader,以便它可以在所有PHP模式中使用。本教程的唯一要求就是你的系统安装了LEMP,并且有"php.ini"文件。
+
+### 下载 IonCube Loader ###
+
+根据你系统的架构是32位或者64位,下载最新的IonCube loader包。你可以用超级用户权限或者root用户运行下面的命令。
+
+    # wget http://downloads3.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz
+
+![download ioncube](http://blog.linoxide.com/wp-content/uploads/2015/09/download1.png)
+
+下载完成后用下面的命令解压到"/usr/local/src/"。
+
+    # tar -zxvf ioncube_loaders_lin_x86-64.tar.gz -C /usr/local/src/
+
+![extracting archive](http://blog.linoxide.com/wp-content/uploads/2015/09/2-extract.png)
+
+解压完成后我们就可以看到其中所有的模块。但是我们只需要与我们安装的PHP版本相应的那个模块。
+
+要检查PHP版本,你可以运行下面的命令来找出相关的模块。
+
+    # php -v
+
+![ioncube modules](http://blog.linoxide.com/wp-content/uploads/2015/09/modules.png)
+
+根据上面的命令我们知道我们安装的是PHP 5.6.4,因此我们需要拷贝合适的模块到PHP模块目录下。
+
+首先我们在"/usr/local/"创建一个叫"ioncube"的目录,并复制需要的ioncube loader模块到这里。
+
+    root@ubuntu-15:/usr/local/src/ioncube# mkdir /usr/local/ioncube
+    root@ubuntu-15:/usr/local/src/ioncube# cp ioncube_loader_lin_5.6.so ioncube_loader_lin_5.6_ts.so /usr/local/ioncube/
+
+### PHP 配置 ###
+
+我们要在位于"/etc/php5/cli/"文件夹下的"php.ini"中加入下面的配置行,并重启web服务和php模块。
+
+    # vim /etc/php5/cli/php.ini
+
+![ioncube zend extension](http://blog.linoxide.com/wp-content/uploads/2015/09/zend-extension.png)
+
+此时我们安装的是nginx,因此我们用下面的命令来重启服务。
+
+    # service php5-fpm restart
+    # service nginx restart
+
+![web services](http://blog.linoxide.com/wp-content/uploads/2015/09/web-services.png)
+
+### 测试 IonCube Loader ###
+
+要为我们的网站测试ioncube loader,用下面的内容创建一个"info.php"文件,并放在网站的web目录下。
+
+    # vim /usr/share/nginx/html/info.php
+
+    <?php phpinfo(); ?>
+
+放入phpinfo脚本并保存后,重新加载web服务,然后在浏览器中用域名或者服务器IP地址访问"info.php"。
+
+你会在最下面的php模块信息里看到下面这段。
+
+![php info](http://blog.linoxide.com/wp-content/uploads/2015/09/php-info.png)
+
+在终端中运行下面的命令来验证php版本,可以看到ionCube PHP Loader已经启用了。
+
+    # php -v
+
+![php ioncube loader](http://blog.linoxide.com/wp-content/uploads/2015/09/php-ioncube.png)
+
+上面的php版本输出明显地显示了IonCube loader已经成功与PHP集成了。
+
+### 总结 ###
+
+到教程的最后,你已经了解了如何在安装有nginx的Ubuntu中安装和配置ionCube Loader;如果你正在使用其他的web服务,操作也没有明显的差别。正确操作的话,安装Loader是很简单的,并且在大多数服务器上的安装都不会有问题。然而并没有一个所谓的"标准PHP安装",服务可以通过许多方式安装,并启用或者禁用功能。
+
+如果你是在共享服务器上,那么确保运行了ioncube-loader-helper.php脚本,并点击链接来测试运行时安装。如果安装时你仍然遇到了问题,欢迎联系我们及给我们留下评论。
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/ubuntu-how-to/setup-ioncube-loaders-ubuntu-14-04-15-04/
+
+作者:[Kashif Siddique][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
From 8565e68b53aa44a41c29f7fa8b1c1d264ddc6153 Mon Sep 17 00:00:00 2001
From: GOLinux
Date: Thu, 24 Sep 2015 10:31:27 +0800
Subject: [PATCH 591/697] [Translated]20151007 Productivity Tools And Tips For Linux.md

---
 ...7 Productivity Tools And Tips For Linux.md | 79 -------------------
 ...7 Productivity Tools And Tips For Linux.md | 77 ++++++++++++++++++
 2 files changed, 77 insertions(+), 79 deletions(-)
 delete mode 100644 sources/tech/20151007 Productivity Tools And Tips For Linux.md
 create mode 100644 translated/tech/20151007 Productivity Tools And Tips For Linux.md

diff --git a/sources/tech/20151007 Productivity Tools And Tips For Linux.md b/sources/tech/20151007 Productivity Tools And Tips For Linux.md
deleted file mode 100644
index 2434669710..0000000000
--- a/sources/tech/20151007 Productivity Tools And Tips For Linux.md
+++ /dev/null
@@ -1,79 +0,0 @@
-Translating by GOLinux!
-Productivity Tools And Tips For Linux
-================================================================================
-![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Productivity-Tips-Linux.jpg)
-
-Since productivity in itself is a subjective term, I am not going into the details of what "productivity" I am talking about here. I am going to show you some tools and tips that could help you focus better, be efficient, and save time while working in Linux.
-
-### Productivity tools and tips for Linux ###
-
-Again, I am using Ubuntu at the time of writing this article. But the productivity tools and tips I am going to show you here should be applicable to most of the Linux distributions out there.
-
-#### Ambient Music ####
-
-[Music impacts productivity][1]. It is an open secret. From psychologists to management gurus, all have been advising the use of ambient noise to feel relaxed and concentrate on your work. I am not going to argue with it, because it works for me. I put my headphones on, and listening to the birds chirping and the wind blowing indeed helps me relax.
-
-In Linux, I use ANoise player as an ambient noise player. Thanks to the official PPA provided, you can easily [install Ambient Noise player in Ubuntu][2] and other Ubuntu based Linux distributions. Installing it lets you play the ambient music offline as well.
-
-Alternatively, you can always listen to ambient noise online. My favorite website for online ambient music is [Noisli][3]. Do give it a try.
-
-#### Task management app ####
-
-A good productive habit is to keep a to-do list. And if you combine it with the [Pomodoro Technique][4], it could work wonders. What I mean here is: create a to-do list and, if possible, assign those tasks a certain time. This will keep you on track with your planned tasks for the day.
-
-For this, I recommend the [Go For It!][5] app. You can install it in all major Linux distributions, and since it is based on [ToDo.txt][6], you can easily sync it with your smartphone as well. I have written a detailed guide on [how to use Go For It!][7].
-
-Alternatively, you can use [Sticky Notes][8] or [Google Keep][9]. If you need something more, like [Evernote][10], you can use these [open source alternatives for Evernote][11].
-
-#### Clipboard manager ####
-
-Ctrl+C and Ctrl+V are an integral part of our daily computer life. The only problem is that these important actions don't have a memory (by default). Suppose you copied something important and then accidentally copied something else; you'll lose what you had before.
-
-A clipboard manager comes in handy in such situations. It displays the history of things you have copied (to the clipboard) recently. You can copy text back to the clipboard from it.
-
-I prefer the [Diodon clipboard manager][12] for this purpose. It is actively developed and is available in the Ubuntu repositories.
-
-#### Recent notifications ####
-
-When you are busy with something else and a desktop notification blinks and fades away, what do you do? You wish that you could see what the notification was about, don't you? Recent Notification Indicator does this job. It keeps a history of all recent notifications. This way, you will never miss a desktop notification.
-
-You can read about the [Recent Notification Indicator here][13].
-
-#### Terminal Tips ####
-
-No, I am not going to show you all those Linux command tricks and shortcuts. That could fill up an entire blog. I am going to show you a couple of terminal hacks you can use to enhance your productivity.
-
-- **Change** sudo **password timeout**: By default, sudo commands require you to enter your password again after 15 minutes. This can be tiresome. You can actually change the default sudo password timeout. [This tutorial][14] shows you how to do that.
-- **Get desktop notification for command completion**: It's a common joke among IT guys that developers spend a lot of time waiting for programs to be compiled, and it is not entirely untrue. It does affect productivity, because while you wait for the programs to be compiled, you may end up doing something else and forget about the commands you had run in the terminal. A nicer way is to get a desktop notification when a command is completed. This way, you won't be distracted for long and can go back to what you were supposed to be doing earlier. Read about [how to get desktop notification for command completion][15].
-
-I know that this is not a comprehensive article about **increasing productivity**. But these little apps and tips may actually help you get more out of your valuable time.
-
-Now it's your turn. What programs or tips do you use to be more productive in Linux? Anything you want to share with the community?
-
--------------------------------------------------------------------------------
-
-via: http://itsfoss.com/productivity-tips-ubuntu/
-
-作者:[Abhishek][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[a]:http://itsfoss.com/author/abhishek/
-[1]:http://www.helpscout.net/blog/music-productivity/
-[2]:http://itsfoss.com/ambient-noise-music-player-ubuntu/
-[3]:http://www.noisli.com/
-[4]:https://en.wikipedia.org/wiki/Pomodoro_Technique
-[5]:http://manuel-kehl.de/projects/go-for-it/
-[6]:http://todotxt.com/
-[7]:http://itsfoss.com/go-for-it-to-do-app-in-linux/
-[8]:http://itsfoss.com/indicator-stickynotes-windows-like-sticky-note-app-for-ubuntu/
-[9]:http://itsfoss.com/install-google-keep-ubuntu-1310/
-[10]:https://evernote.com/
-[11]:http://itsfoss.com/5-evernote-alternatives-linux/
-[12]:https://esite.ch/tag/diodon/
-[13]:http://itsfoss.com/7-best-indicator-applets-for-ubuntu-13-10/
-[14]:http://itsfoss.com/change-sudo-password-timeout-ubuntu/
-[15]:http://itsfoss.com/notification-terminal-command-completion-ubuntu/
diff --git a/translated/tech/20151007 Productivity Tools And Tips For Linux.md b/translated/tech/20151007 Productivity Tools And Tips For Linux.md
new file mode 100644
index 0000000000..a3245013fa
--- /dev/null
+++ b/translated/tech/20151007 Productivity Tools And Tips For Linux.md
@@ -0,0 +1,77 @@
+Linux产能工具及其使用技巧
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Productivity-Tips-Linux.jpg)
+
+由于生产力本身是一个主观术语,我不打算详细解释我这里要讲到的"生产力"是什么。我打算给你们展示一些工具及其使用技巧,希望这会帮助你在Linux中工作时能更专注、更高效,并且能节省时间。
+
+### Linux产能工具及其使用技巧 ###
+
+再次说明,我在写下本文时正在使用的是Ubuntu。但是,我将要在这里展示给大家的产能工具及其使用技巧,适用于大多数Linux发行版。
+
+#### 外界的音乐 ####
+
+[音乐影响生产力][1],这已经是一个公开的秘密了。从心理学家到管理大师,他们都一直在建议使用外界的杂音来让自己放松并专注于工作。我不打算就此进行辩论,因为这对于我确实有效。我戴上耳机,然后倾听着鸟叫声和风声,这确实让我很放松。
+
+在Linux中,我使用ANoise播放器来播放外界的杂音。多亏了官方提供的PPA,你可以很容易地[安装Ambient Noise播放器到Ubuntu中][2],以及其它基于Ubuntu的Linux发行版中。安装它,也可以让它离线播放外界的音乐。
+
+另外,你也总可以在线听外界杂音。我最喜欢的在线外界音乐站点是[Noisli][3]。强烈推荐你试试这个。
+
+#### 任务管理应用 ####
+
+一个良好的生产习惯,就是制订一个任务列表。如果你将它和[番茄工作法][4]组合使用,那就可能创造奇迹了。这里我所说的是,创建一个任务列表,如果可能,将这些任务分配到特定的某个时间。这将会帮助你跟踪一天中计划好的任务。
+
+对于此,我推荐[Go For It!][5]应用。你可以将它安装到所有主流Linux发行版中,由于它基于[ToDo.txt][6],你也可以很容易地同步到你的智能手机中。我已经为此写了一个详尽的指南[如何使用Go For It!][7]。
+
+此外,你可以使用[Sticky Notes][8]或者[Google Keep][9]。如果你需要某些更类似[Evernote][10]的功能,你可以使用这些[Evernote的开源替代品][11]。
+
+#### 剪贴板管理器 ####
+
+Ctrl+C和Ctrl+V是我们日常计算机生活中不可缺少的一部分,它们唯一的不足之处在于,这些重要的活动不会被记住(默认情况下)。假如你拷贝了一些重要的东西,然后你意外地又拷贝了一些其它东西,你将丢失先前拷贝的东西。
+
+剪贴板管理器在这种情况下会派上用场,它可以显示你最近拷贝(到剪贴板的)内容的历史记录,你可以从它这里将文本拷贝回到剪贴板中。
+
+对于该目的,我更偏好[Diodon剪贴板管理器][12]。它处于活跃开发中,并且在Ubuntu的仓库中可以得到它。
+
+#### 最近通知 ####
+
+如果你正忙着处理其它事情,而此时一个桌面通知闪了出来又逐渐消失了,你会怎么做?你会想要看看通知都说了什么,不是吗?最近通知指示器就是用于处理此项工作,它会保留一个最近所有通知的历史记录。这样,你就永远不会错过桌面通知了。
+
+你可以在这里阅读[最近通知指示器][13]的相关信息。
+
+#### 终端技巧 ####
+
+不,我不打算给你们展示所有那些Linux命令技巧和快捷方法,那会写满整个博客了。我打算给你们展示一些终端黑技巧,你可以用它们来提高你的生产力。
+
+- **修改**sudo**密码超时**:默认情况下,sudo命令要求你在15分钟后再次输入密码,这真是让人讨厌。实际上,你可以修改默认的sudo密码超时。[此教程][14]会给你展示如何来实现。
+- **获取命令完成的桌面通知**:这是IT朋友们之间的一个常见的玩笑:开发者们花费大量时间来等待程序编译完成,而这不完全是正确的。但是,它确实影响到了生产力,因为在你等待程序编译完成时,你可能去做其它事情,并忘了你在终端中运行的命令。一个更好的途径,就是在命令完成时让它显示桌面通知。这样,你就不会长时间被打断,并且可以回到之前想要做的事情上。请阅读[如何获取命令完成的桌面通知][15]。
+
+我知道,这不是一篇全面涵盖**提升生产力**的文章。但是,这些小应用和小技巧可以在实际生活中帮助你在你宝贵的时间中做得更多。
+
+现在,该轮到你们了。在Linux中,你使用了什么程序或者技巧来提高生产力呢?有哪些东西你想要和社区分享呢?
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/productivity-tips-ubuntu/
+
+作者:[Abhishek][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:http://www.helpscout.net/blog/music-productivity/
+[2]:http://itsfoss.com/ambient-noise-music-player-ubuntu/
+[3]:http://www.noisli.com/
+[4]:https://en.wikipedia.org/wiki/Pomodoro_Technique
+[5]:http://manuel-kehl.de/projects/go-for-it/
+[6]:http://todotxt.com/
+[7]:http://itsfoss.com/go-for-it-to-do-app-in-linux/
+[8]:http://itsfoss.com/indicator-stickynotes-windows-like-sticky-note-app-for-ubuntu/
+[9]:http://itsfoss.com/install-google-keep-ubuntu-1310/
+[10]:https://evernote.com/
+[11]:http://itsfoss.com/5-evernote-alternatives-linux/
+[12]:https://esite.ch/tag/diodon/
+[13]:http://itsfoss.com/7-best-indicator-applets-for-ubuntu-13-10/
+[14]:http://itsfoss.com/change-sudo-password-timeout-ubuntu/
+[15]:http://itsfoss.com/notification-terminal-command-completion-ubuntu/
From 30cd2e15865dc19ea84a747b04a4afe63010f2c5 Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 24 Sep 2015 13:05:03 +0800
Subject: [PATCH 592/697] PUB:20150906 How To Set Up Your FTP Server In Linux
 @cvsher

---
 ...
How To Set Up Your FTP Server In Linux.md | 42 +++++++++---------- 1 file changed, 21 insertions(+), 21 deletions(-) rename {translated/tech => published}/20150906 How To Set Up Your FTP Server In Linux.md (62%) diff --git a/translated/tech/20150906 How To Set Up Your FTP Server In Linux.md b/published/20150906 How To Set Up Your FTP Server In Linux.md similarity index 62% rename from translated/tech/20150906 How To Set Up Your FTP Server In Linux.md rename to published/20150906 How To Set Up Your FTP Server In Linux.md index 8c754786fe..90a1895ff0 100644 --- a/translated/tech/20150906 How To Set Up Your FTP Server In Linux.md +++ b/published/20150906 How To Set Up Your FTP Server In Linux.md @@ -1,12 +1,12 @@ -如何在linux中搭建FTP服务 +如何在 linux 中搭建 FTP 服务 ===================================================================== ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Setup-FTP-Server-in-Linux.jpg) -在本教程中,我将会解释如何搭建你自己的FTP服务。但是,首先我们应该来的学习一下FTP是什么。 +在本教程中,我将会介绍如何搭建你自己的FTP服务。但是,首先我们应该来的学习一下FTP是什么。 ###FTP是什么?### -[FTP][1] 是文件传输协议(File Transfer Protocol)的缩写。顾名思义,FTP是用于计算机之间通过网络进行文件传输。你可以通过FTP在计算机账户间进行文件传输,也可以在账户和桌面计算机之间传输文件,或者访问在线软件文档。但是,需要注意的是多数的FTP站点的使用率非常高,并且在连接前需要进行多次尝试。 +[FTP][1] 是文件传输协议(File Transfer Protocol)的缩写。顾名思义,FTP用于计算机之间通过网络进行文件传输。你可以通过FTP在计算机账户间进行文件传输,也可以在账户和桌面计算机之间传输文件,或者访问在线软件归档。但是,需要注意的是多数的FTP站点的使用率非常高,可能需要多次重连才能连接上。 FTP地址和HTTP地址(即网页地址)非常相似,只是FTP地址使用ftp://前缀而不是http:// @@ -16,23 +16,23 @@ FTP地址和HTTP地址(即网页地址)非常相似,只是FTP地址使用f 现在,我们来开始一个特别的冒险,我们将会搭建一个FTP服务用于和家人、朋友进行文件共享。在本教程,我们将以[vsftpd][2]作为ftp服务。 -VSFTPD是一个自称为最安全的FTP服务端软件。事实上VSFTPD的前两个字母表示“非常安全的(very secure)”。该软件的构建绕开了FTP协议的漏洞。 +VSFTPD是一个自称为最安全的FTP服务端软件。事实上VSFTPD的前两个字母表示“非常安全的(very secure)”。该软件的构建绕开了FTP协议的漏洞。 -尽管如此,你应该知道还有更安全的方法进行文件管理和传输,如:SFTP(使用[OpenSSH][3])。FTP协议对于共享非敏感数据是非常有用和可靠的。 +尽管如此,你应该知道还有更安全的方法进行文件管理和传输,如:SFTP(使用[OpenSSH][3])。FTP协议对于共享非敏感数据是非常有用和可靠的。 -####在rpm distributions中安装VSFTPD:#### +####使用 rpm 安装VSFTPD:#### 你可以使用如下命令在命令行界面中快捷的安装VSFTPD: dnf -y install vsftpd -####在deb 
distributions中安装VSFTPD:#### +####使用 deb 安装VSFTPD:#### 你可以使用如下命令在命令行界面中快捷的安装VSFTPD: sudo apt-get install vsftpd -####在Arch distribution中安装VSFTPD:#### +####在Arch 中安装VSFTPD:#### 你可以使用如下命令在命令行界面中快捷的安装VSFTPD: @@ -52,41 +52,41 @@ VSFTPD是一个自称为最安全的FTP服务端软件。事实上VSFTPD的前 write_enable=YES -**允许本地用户登陆:** +**允许本地(系统)用户登录:** -为了允许文件/etc/passwd中记录的用户可以登陆ftp服务,“local_enable”标记必须设置为YES。 +为了允许文件/etc/passwd中记录的用户可以登录ftp服务,“local_enable”标记必须设置为YES。 local_enable=YES -**匿名用户登陆** +**匿名用户登录** -下面配置内容控制匿名用户是否允许登陆: +下面配置内容控制匿名用户是否允许登录: - # Allow anonymous login + # 允许匿名用户登录 anonymous_enable=YES - # No password is required for an anonymous login (Optional) - no_anon_password=YES - # Maximum transfer rate for an anonymous client in Bytes/second (Optional) + # 匿名登录不需要密码(可选) + no_anon_password=YES + # 匿名登录的最大传输速率,Bytes/second(可选) anon_max_rate=30000 - # Directory to be used for an anonymous login (Optional) + # 匿名登录的目录(可选) anon_root=/example/directory/ **根目录限制(Chroot Jail)** -(译者注:chroot jail是类unix系统中的一种安全机制,用于修改进程运行的根目录环境,限制该线程不能感知到其根目录树以外的其他目录结构和文件的存在。详情参看[chroot jail][4]) +( LCTT 译注:chroot jail是类unix系统中的一种安全机制,用于修改进程运行的根目录环境,限制该线程不能感知到其根目录树以外的其他目录结构和文件的存在。详情参看[chroot jail][4]) 有时我们需要设置根目录(chroot)环境来禁止用户离开他们的家(home)目录。在配置文件中增加/修改下面配置开启根目录限制(Chroot Jail): chroot_list_enable=YES chroot_list_file=/etc/vsftpd.chroot_list -“chroot_list_file”变量指定根目录监狱所包含的文件/目录(译者注:即用户只能访问这些文件/目录) +“chroot\_list\_file”变量指定根目录限制所包含的文件/目录( LCTT 译注:即用户只能访问这些文件/目录) 最后你必须重启ftp服务,在命令行中输入以下命令: sudo systemctl restart vsftpd -到此为止,你的ftp服务已经搭建完成并且启动了 +到此为止,你的ftp服务已经搭建完成并且启动了。 -------------------------------------------------------------------------------- @@ -94,7 +94,7 @@ via: http://itsfoss.com/set-ftp-server-linux/ 作者:[alimiracle][a] 译者:[cvsher](https://github.com/cvsher) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 908e33cfc0d415c71e419dd26ead7a716d43d930 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 
24 Sep 2015 13:08:23 +0800 Subject: [PATCH 593/697] PUB:20150916 Enable Automatic System Updates In Ubuntu @Vic020 --- .../20150916 Enable Automatic System Updates In Ubuntu.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20150916 Enable Automatic System Updates In Ubuntu.md (96%) diff --git a/translated/tech/20150916 Enable Automatic System Updates In Ubuntu.md b/published/20150916 Enable Automatic System Updates In Ubuntu.md similarity index 96% rename from translated/tech/20150916 Enable Automatic System Updates In Ubuntu.md rename to published/20150916 Enable Automatic System Updates In Ubuntu.md index ea320bd6e2..2a3ef499e9 100644 --- a/translated/tech/20150916 Enable Automatic System Updates In Ubuntu.md +++ b/published/20150916 Enable Automatic System Updates In Ubuntu.md @@ -1,4 +1,4 @@ -开启Ubuntu系统自动升级 +开启 Ubuntu 系统自动升级 ================================================================================ 在学习如何开启Ubuntu系统自动升级之前,先解释下为什么需要自动升级。 @@ -40,7 +40,7 @@ via: http://itsfoss.com/automatic-system-updates-ubuntu/ 作者:[Abhishek][a] 译者:[Vic020/VicYu](http://vicyu.net) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e8ad981738b23dafbcf32a082633d03d443e8e87 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 24 Sep 2015 15:19:48 +0800 Subject: [PATCH 594/697] PUB:RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server @FSSlc --- ...uring and Securing a Web and FTP Server.md | 65 +++++++++---------- 1 file changed, 32 insertions(+), 33 deletions(-) rename {translated/tech/RHCSA => published}/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md (58%) diff --git a/translated/tech/RHCSA/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md b/published/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP 
Server.md similarity index 58% rename from translated/tech/RHCSA/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md rename to published/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md index 190c32ece5..c1588ade99 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md +++ b/published/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md @@ -1,16 +1,16 @@ -RHCSA 系列: 安装,配置及加固一个 Web 和 FTP 服务器 – Part 9 +RHCSA 系列(九): 安装、配置及加固一个 Web 和 FTP 服务器 ================================================================================ -Web 服务器(也被称为 HTTP 服务器)是在网络中将内容(最为常见的是网页,但也支持其他类型的文件)进行处理并传递给客户端的服务。 +Web 服务器(也被称为 HTTP 服务器)是在网络中将内容(最为常见的是网页,但也支持其他类型的文件)进行处理并传递给客户端的服务。 -FTP 服务器是最为古老且最常使用的资源之一(即便到今天也是这样),在身份认证不是必须的情况下,它可使得在一个网络里文件对于客户端可用,因为 FTP 使用没有加密的用户名和密码。 +FTP 服务器是最为古老且最常使用的资源之一(即便到今天也是这样),在身份认证不是必须的情况下,它可通过客户端在一个网络访问文件,因为 FTP 使用没有加密的用户名和密码,所以有些情况下不需要验证也行。 在 RHEL 7 中可用的 web 服务器是版本号为 2.4 的 Apache HTTP 服务器。至于 FTP 服务器,我们将使用 Very Secure Ftp Daemon (又名 vsftpd) 来建立用 TLS 加固的连接。 ![配置和加固 Apache 和 FTP 服务器](http://www.tecmint.com/wp-content/uploads/2015/05/Install-Configure-Secure-Apache-FTP-Server.png) -RHCSA: 安装,配置及加固 Apache 和 FTP 服务器 – Part 9 +*RHCSA: 安装,配置及加固 Apache 和 FTP 服务器 – Part 9* -在这篇文章中,我们将解释如何在 RHEL 7 中安装,配置和加固 web 和 FTP 服务器。 +在这篇文章中,我们将解释如何在 RHEL 7 中安装、配置和加固 web 和 FTP 服务器。 ### 安装 Apache 和 FTP 服务器 ### @@ -35,7 +35,7 @@ RHCSA: 安装,配置及加固 Apache 和 FTP 服务器 – Part 9 ![确认 Apache Web 服务器](http://www.tecmint.com/wp-content/uploads/2015/05/Confirm-Apache-Web-Server.png) -确认 Apache Web 服务器 +*确认 Apache Web 服务器* 对于 ftp 服务器,在确保它如期望中的那样工作之前,我们必须进一步地配置它,我们将在几分钟后来做这件事。 @@ -43,7 +43,7 @@ RHCSA: 安装,配置及加固 Apache 和 FTP 服务器 – Part 9 Apache 的主要配置文件位于 `/etc/httpd/conf/httpd.conf` 中,但它可能依赖 `/etc/httpd/conf.d` 中的其他文件。 -尽管默认的配置对于大多数的情形是充分的,熟悉描述在 [官方文档][1] 中的所有可用选项是一个不错的主意。 +尽管默认的配置对于大多数的情形都够用了,但熟悉在 [官方文档][1] 中介绍的所有可用选项是一个不错的主意。 
同往常一样,在编辑主配置文件前先做一个备份: @@ -51,14 +51,14 @@ Apache 的主要配置文件位于 `/etc/httpd/conf/httpd.conf` 中,但它可 然后用你钟爱的文本编辑器打开它,并查找下面这些变量: -- ServerRoot: 服务器的配置,错误和日志文件保存的目录。 -- Listen: 通知 Apache 去监听特定的 IP 地址或端口。 -- Include: 允许包含其他配置文件,这个必须存在,否则,服务器将会崩溃。它恰好与 IncludeOptional 相反,假如特定的配置文件不存在,它将静默地忽略掉它们。 -- User 和 Group: 运行 httpd 服务的用户/组的名称。 -- DocumentRoot: Apache 为你的文档服务的目录。默认情况下,所有的请求将在这个目录中被获取,但符号链接和别名可能会被用于指向其他位置。 -- ServerName: 这个指令将设定用于识别它自身的主机名(或 IP 地址)和端口。 +- `ServerRoot`: 服务器的配置,错误和日志文件保存的目录。 +- `Listen`: 通知 Apache 去监听特定的 IP 地址或端口。 +- `Include`: 允许包含其他配置文件,要包含的文件必须存在,否则,服务器将会失败。它恰好与 IncludeOptional 相反,假如特定的配置文件不存在,它将静默地忽略掉它们。 +- `User` 和 `Group`: 运行 httpd 服务的用户/组的名称。 +- `DocumentRoot`: Apache 为你的文档所服务的目录。默认情况下,所有的请求将在这个目录中被获取,但符号链接和别名可能会被用于指向其他位置。 +- `ServerName`: 这个指令将设定用于识别它自身的主机名(或 IP 地址)和端口。 -安全措施的第一步将包含创建一个特定的用户和组(如 tecmint/tecmint)来运行 web 服务器以及更改默认的端口为一个更高的端口(在这个例子中为 9000): +安全措施的第一步将包含创建一个特定的用户和组(如 tecmint/tecmint)来运行 web 服务器,以及更改默认的端口为一个更高的端口(在这个例子中为 9000) (LCTT 译注:如果你的 Web 服务器对外公开提供服务,则不建议修改为非默认端口。): ServerRoot "/etc/httpd" Listen 192.168.0.18:9000 @@ -75,47 +75,46 @@ Apache 的主要配置文件位于 `/etc/httpd/conf/httpd.conf` 中,但它可 # systemctl restart httpd -并别忘了在防火墙中开启新的端口(和禁用旧的端口): - +并别忘了在防火墙中开启新的端口(并禁用旧的端口): # firewall-cmd --zone=public --remove-port=80/tcp --permanent # firewall-cmd --zone=public --add-port=9000/tcp --permanent # firewall-cmd --reload -请注意,由于 SELinux 的策略,你只可使用如下命令所返回的端口来分配给 web 服务器。 +请注意,由于 SELinux 策略,你只能给给 web 服务器使用如下命令所返回的端口。 # semanage port -l | grep -w '^http_port_t' -假如你想使用另一个端口(如 TCP 端口 8100)来给 httpd 服务,你必须将它加到 SELinux 的端口上下文: +假如你想让 httpd 服务使用另一个端口(如 TCP 端口 8100),你必须将它加到 SELinux 的端口上下文: # semanage port -a -t http_port_t -p tcp 8100 ![添加 Apache 端口到 SELinux 策略](http://www.tecmint.com/wp-content/uploads/2015/05/Add-Apache-Port-to-SELinux-Policies.png) -添加 Apache 端口到 SELinux 策略 +*添加 Apache 端口到 SELinux 策略* 为了进一步加固你安装的 Apache,请遵循以下步骤: 1. 运行 Apache 的用户不应该拥有访问 shell 的能力: - # usermod -s /sbin/nologin tecmint + # usermod -s /sbin/nologin tecmint -2. 
禁用目录列表功能,为的是阻止浏览器展示一个未包含 index.html 文件的目录里的内容。 +2. 禁用目录列表功能,这是为了阻止浏览器展示一个未包含 index.html 文件的目录里的内容。 -编辑 `/etc/httpd/conf/httpd.conf` (和虚拟主机的配置文件,假如有的话),并确保 Options 指令在顶级和目录块级别中(注:感觉这里我的翻译不对)都被设置为 None: + 编辑 `/etc/httpd/conf/httpd.conf` (以及虚拟主机的配置文件,假如有的话),并确保出现在顶层的和Directory 块中的 Options 指令都被设置为 None: - Options None + Options None -3. 在 HTTP 回应中隐藏有关 web 服务器和操作系统的信息。像下面这样编辑文件 `/etc/httpd/conf/httpd.conf`: +3. 在 HTTP 响应中隐藏有关 web 服务器和操作系统的信息。像下面这样编辑文件 `/etc/httpd/conf/httpd.conf`: - ServerTokens Prod - ServerSignature Off + ServerTokens Prod + ServerSignature Off 现在,你已经做好了从 `/var/www/html` 目录开始服务内容的准备了。 ### 配置并加固 FTP 服务器 ### -和 Apache 的情形类似, Vsftpd 的主配置文件 `(/etc/vsftpd/vsftpd.conf)` 带有详细的注释,且虽然对于大多数的应用实例,默认的配置应该足够了,但为了更有效率地操作 ftp 服务器,你应该开始熟悉相关的文档和 man 页 `(man vsftpd.conf)`(对于这点,再多的强调也不为过!)。 +和 Apache 的情形类似, Vsftpd 的主配置文件 `/etc/vsftpd/vsftpd.conf` 带有详细的注释,且虽然对于大多数的应用实例,默认的配置应该足够了,但为了更有效率地操作 ftp 服务器,你应该开始熟悉相关的文档和 man 页 `man vsftpd.conf`(对于这点,再多的强调也不为过!)。 在我们的示例中,使用了这些指令: @@ -135,7 +134,7 @@ Apache 的主要配置文件位于 `/etc/httpd/conf/httpd.conf` 中,但它可 userlist_enable=YES tcp_wrappers=YES -通过使用 `chroot_local_user=YES`,(默认情况下)本地用户在登陆之后,将马上被置于一个位于用户家目录的 chroot 环境中(注:这里的翻译也不准确)。这意味着本地用户将不能访问除其家目录之外的任何文件。 +通过使用 `chroot_local_user=YES`,(默认情况下)本地用户在登录之后,将被限制在以用户的家目录为 chroot 监狱的环境中。这意味着本地用户将不能访问除其家目录之外的任何文件。 最后,为了让 ftp 能够在用户的家目录中读取文件,设置如下的 SELinux 布尔值: @@ -145,19 +144,19 @@ Apache 的主要配置文件位于 `/etc/httpd/conf/httpd.conf` 中,但它可 ![查看 FTP 连接](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FTP-Connection.png) -查看 FTP 连接 +*查看 FTP 连接* 注意, `/var/log/xferlog` 日志将会记录下载和上传的情况,这与上图的目录列表一致: ![监视 FTP 的下载和上传情况](http://www.tecmint.com/wp-content/uploads/2015/05/Monitor-FTP-Download-Upload.png) -监视 FTP 的下载和上传情况 +*监视 FTP 的下载和上传情况* 另外请参考: [在 Linux 系统中使用 Trickle 来限制应用使用的 FTP 网络带宽][2] ### 总结 ### -在本教程中,我们解释了如何设置 web 和 ftp 服务器。由于这个主题的广泛性,涵盖这些话题的所有方面是不可能的(如虚拟网络主机)。因此,我推荐你也阅读这个网站中有关 [Apache][3] 的其他卓越的文章。 +在本教程中,我们解释了如何设置 web 和 ftp 服务器。由于这个主题的广泛性,涵盖这些话题的所有方面是不可能的(如虚拟主机)。因此,我推荐你也阅读这个网站中有关 [Apache][3] 
的其他卓越的文章。 -------------------------------------------------------------------------------- @@ -165,11 +164,11 @@ via: http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-an 作者:[Gabriel Cánepa][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://httpd.apache.org/docs/2.4/ -[2]:http://www.tecmint.com/manage-and-limit-downloadupload-bandwidth-with-trickle-in-linux/ +[2]:https://linux.cn/article-5517-1.html [3]:http://www.google.com/cse?cx=partner-pub-2601749019656699:2173448976&ie=UTF-8&q=virtual+hosts&sa=Search&gws_rd=cr&ei=Dy9EVbb0IdHisASnroG4Bw#gsc.tab=0&gsc.q=apache From f5f894ad5a018f855b64564f913575bdddd0f3fe Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Thu, 24 Sep 2015 22:12:07 +0800 Subject: [PATCH 595/697] Update RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...Part 14--Setting Up LDAP-based Authentication in RHEL 7.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md b/sources/tech/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md index 36bf319b19..e3425f5164 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Setting Up LDAP-based Authentication in RHEL 7 – Part 14 ================================================================================ We will begin this article by outlining some LDAP basics (what it is, where it is used and why) and 
show how to set up a LDAP server and configure a client to authenticate against it using Red Hat Enterprise Linux 7 systems. @@ -272,4 +274,4 @@ via: http://www.tecmint.com/setup-ldap-server-and-configure-client-authenticatio [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/automatic-rhel-installations-using-kickstart/ [2]:http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/ -[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/ch-Directory_Servers.html \ No newline at end of file +[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/ch-Directory_Servers.html From 182300618e4312a3d84561b2e690f23cab60cf92 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 25 Sep 2015 11:45:33 +0800 Subject: [PATCH 596/697] =?UTF-8?q?20150925-1=20=E9=80=89=E9=A2=98=20strug?= =?UTF-8?q?gling=20=E6=8E=A8=E8=8D=90=E5=B9=B6=E8=AE=A4=E9=A2=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...TTP 2 Now Fully Supported in NGINX Plus.md | 120 ++++++++++++++++++ 1 file changed, 120 insertions(+) create mode 100644 sources/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md diff --git a/sources/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md b/sources/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md new file mode 100644 index 0000000000..5d1059a38f --- /dev/null +++ b/sources/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md @@ -0,0 +1,120 @@ +struggling 翻译中 + +HTTP/2 Now Fully Supported in NGINX Plus +================================================================================ +Earlier this week we released [NGINX Plus R7][1] with support for HTTP/2. As the latest standard for the HTTP protocol, HTTP/2 is designed to bring increased performance and security to modern web applications. 
+ +The HTTP/2 implementation in NGINX Plus works seamlessly with existing sites and applications. Minimal changes are required, as NGINX Plus delivers HTTP/1.x and HTTP/2 traffic in parallel for the best experience, no matter what browser your users choose. + +HTTP/2 support is available in the optional **nginx‑plus‑http2** package only. The **nginx‑plus** and **nginx‑plus‑extras** packages provide SPDY support and are currently recommended for production sites because of wider browser support and code maturity. + +### Why Move to HTTP/2? ### + +HTTP/2 makes data transfer more efficient and more secure for your applications. HTTP/2 adds five key features that improve performance when compared to HTTP/1.x: + +- **True multiplexing** – HTTP/1.1 enforces strict in-order completion of requests that come in over a keepalive connection. A request must be satisfied before processing on the next one can begin. HTTP/2 eliminates this requirement and allows requests to be satisfied in parallel and out of order. +- **Single, persistent connection** – As HTTP/2 allows for true multiplexing of requests, all objects on a web page can now be downloaded in parallel over a single connection. WIth HTTP/1.x, multiple connections are used to download resources in parallel, leading to inefficient use of the underlying TCP protocol. +- **Binary encoding** – Header information is sent in compact, binary format, rather than plain text, saving bytes on the wire. +- **Header compression** – Headers are compressed using a purpose-built algorithm, HPACK compression, which further reduces the amount of data crossing the network. +- **SSL/TLS encryption** – With HTTP/2, SSL/TLS encryption is mandatory. This is not enforced in the [RFC][2], which allows for plain-text HTTP/2, but rather by all web browsers that currently implement HTTP/2. SSL/TLS makes your site more secure, and with all the performance improvements in HTTP/2, the performance penalty from encryption and decryption is mitigated. 
+ +To learn more about HTTP/2: + +- Please read our [white paper][3], which covers everything you need to know about HTTP/2. +- Download our [special edition of the High Performance Browser Networking ebook][4] by Ilya Grigorik of Google. + +### How NGINX Plus Implements HTTP/2 ### + +Our implementation of HTTP/2 is based on our support for SPDY, which is widely deployed (nearly 75% of websites that use SPDY use NGINX or NGINX Plus). With NGINX Plus, you can deploy HTTP/2 with very little change to your application infrastructure. This section discusses how NGINX Plus implements support for HTTP/2. + +#### An HTTP/2 Gateway #### + +![](https://www.nginx.com/wp-content/uploads/2015/09/http2-27-1024x300.png) + +NGINX Plus acts an HTTP/2 gateway. It talks HTTP/2 to client web browsers that support it, but translates HTTP/2 requests back to HTTP/1.x (or FastCGI, SCGI, uWSGI, etc. – whatever protocol you are currently using) for communication with back-end servers. + +#### Backward Compatibility #### + +![](https://www.nginx.com/wp-content/uploads/2015/09/http2-281-1024x581.png) + +For the foreseeable future you’ll need to support HTTP/2 and HTTP/1.x side by side. As of this writing, over 50% of users already run a web browser that [supports HTTP/2][5], but this also means almost 50% don’t. + +To support both HTTP/1.x and HTTP/2 side by side, NGINX Plus implements the Next Protocol Negotiation (NPN) extension to TLS. When a web browser connects to a server, it sends a list of supported protocols to the server. If the browser includes h2 – that is, HTTP/2 – in the list of supported protocols, NGINX Plus uses HTTP/2 for connections to that browser. If the browser doesn’t implement NPN, or doesn’t send h2 in its list of supported protocols, NGINX Plus falls back to HTTP/1.x. + +### Moving to HTTP/2 ### + +NGINX, Inc. aims to make the transition to HTTP/2 as seamless as possible. 
This section goes through the changes that need to be made to enable HTTP/2 for your applications, which include just a few changes to the configuration of NGINX Plus. + +#### Prerequisites #### + +Upgrade to the NGINX Plus R7 **nginx‑plus‑http2** package. Note that an HTTP/2-enabled version of the **nginx‑plus‑extras** package is not available at this time. + +#### Redirecting All Traffic to SSL/TLS #### + +If your app is not already encrypted with SSL/TLS, now would be a good time to make that move. Encrypting your app protects you from spying as well as from man-in-the-middle attacks. Some search engines even reward encrypted sites with [improved rankings][6] in search results. The following configuration block redirects all plain HTTP requests to the encrypted version of the site. + + server { + listen 80; + location / { + return 301 https://$host$request_uri; + } + } + +#### Enabling HTTP/2 #### + +To enable HTTP/2 support, simply add the http2 parameter to all [listen][7] directives. Also include the ssl parameter, required because browsers do not support HTTP/2 without encryption. + + server { + listen 443 ssl http2 default_server; + + ssl_certificate server.crt; + ssl_certificate_key server.key; + … + } + +If necessary, restart NGINX Plus, for example by running the nginx -s reload command. To verify that HTTP/2 translation is working, you can use the “HTTP/2 and SPDY indicator” plug-in available for [Google Chrome][8] and [Firefox][9]. + +### Caveats ### + +- Before installing the **nginx‑plus‑http2** package, you must remove the spdy parameter on all listen directives in your configuration (replace it with the http2 and ssl parameters to enable support for HTTP/2). With this package, NGINX Plus fails to start if any listen directives have the spdy parameter. +- If you are using a web application firewall (WAF) that is sitting in front of NGINX Plus, ensure that it is capable of parsing HTTP/2, or move it behind NGINX Plus. 
+- The “Server Push” feature defined in the HTTP/2 RFC is not supported in this release. Future releases of NGINX Plus might include it. +- NGINX Plus R7 supports both SPDY and HTTP/2. In a future release we will deprecate support for SPDY. Google is [deprecating SPDY][10] in early 2016, making it unnecessary to support both protocols at that point. +- If [ssl_prefer_server_ciphers][11] is set to on and/or a list of [ssl_ciphers][12] that are defined in [Appendix A: TLS 1.2 Ciper Suite Black List][13] is used, the browser will experience handshake-errors and not work. Please refer to [section 9.2.2 of the HTTP/2 RFC][14] for more details.- + +### Special Thanks ### + +NGINX, Inc. would like to thank [Dropbox][15] and [Automattic][16], who are heavy users of our software and graciously cosponsored the development of our HTTP/2 implementation. Their contributions have helped accelerate our ability to bring this software to you, and we hope you are able to support them in turn. + +![](https://www.nginx.com/wp-content/themes/nginx-theme/assets/img/landing-page/highperf_nginx_ebook.png) + +[O'REILLY'S BOOK ABOUT HTTP/2 & PERFORMANCE TUNING][17] + +-------------------------------------------------------------------------------- + +via: https://www.nginx.com/blog/http2-r7/ + +作者:[Faisal Memon][a] +译者:[struggling](https://github.com/struggling) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.nginx.com/blog/author/fmemon/ +[1]:https://www.nginx.com/blog/nginx-plus-r7-released/ +[2]:https://tools.ietf.org/html/rfc7540 +[3]:https://www.nginx.com/wp-content/uploads/2015/09/NGINX_HTTP2_White_Paper_v4.pdf +[4]:https://www.nginx.com/http2-ebook/ +[5]:http://caniuse.com/#feat=http2 +[6]:http://googlewebmastercentral.blogspot.co.uk/2014/08/https-as-ranking-signal.html +[7]:http://nginx.org/en/docs/http/ngx_http_core_module.html#listen 
+[8]:https://chrome.google.com/webstore/detail/http2-and-spdy-indicator/mpbpobfflnpcgagjijhmgnchggcjblin?hl=en +[9]:https://addons.mozilla.org/en-us/firefox/addon/spdy-indicator/ +[10]:http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html +[11]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_prefer_server_ciphers +[12]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers +[13]:https://tools.ietf.org/html/rfc7540#appendix-A +[14]:https://tools.ietf.org/html/rfc7540#section-9.2.2 +[15]:http://dropbox.com/ +[16]:http://automattic.com/ +[17]:https://www.nginx.com/http2-ebook/ \ No newline at end of file From 3720ab3835b71276962913179f2b434debdb02c2 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 25 Sep 2015 13:56:21 +0800 Subject: [PATCH 597/697] PUB:20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage @GOLinux --- ...artition into One Large Virtual Storage.md | 157 +++++++++++++++ ...artition into One Large Virtual Storage.md | 183 ------------------ 2 files changed, 157 insertions(+), 183 deletions(-) create mode 100644 published/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md delete mode 100644 translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md diff --git a/published/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md b/published/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md new file mode 100644 index 0000000000..eb462bfe6a --- /dev/null +++ b/published/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md @@ -0,0 +1,157 @@ +Mhddfs:将多个小分区合并成一个大的虚拟存储 +================================================================================ + +让我们假定你有30GB的电影,并且你有3个驱动器,每个的大小为20GB。那么,你会怎么来存放东西呢? 
+ +很明显,你可以将你的视频分割成2个或者3个不同的卷,并将它们手工存储到驱动器上。这当然不是一个好主意,它成了一项费力的工作,它需要你手工干预,而且花费你大量时间。 + +另外一个解决方案是创建一个 [RAID磁盘阵列][1]。然而,RAID在存储可靠性,磁盘空间可用性差等方面声名狼藉。另外一个解决方案,就是mhddfs。 + +![Combine Multiple Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Combine-Multiple-Partitions-in-Linux.png) + +*Mhddfs——在Linux中合并多个分区* + +mhddfs是一个用于Linux的设备驱动,它可以将多个挂载点合并到一个虚拟磁盘中。它是一个基于FUSE的驱动,提供了一个用于大数据存储的简单解决方案。它可以将所有小文件系统合并,创建一个单一的大虚拟文件系统,该文件系统包含其成员文件系统的所有内容,包括文件和空闲空间。 + +#### 你为什么需要Mhddfs? #### + +你的所有存储设备会创建为一个单一的虚拟池,它可以在启动时被挂载。这个小工具可以智能地照看并处理哪个存储满了,哪个存储空着,以及将数据写到哪个存储中。当你成功创建虚拟驱动器后,你可以使用[SAMBA][2]来共享你的虚拟文件系统。你的客户端将在任何时候都看到一个巨大的驱动器和大量的空闲空间。 + +#### Mhddfs特性 #### + +- 获取文件系统属性和系统信息。 +- 设置文件系统属性。 +- 创建、读取、移除和写入目录和文件。 +- 在单一设备上支持文件锁和硬链接。 + +|mhddfs的优点|mhddfs的缺点| +|-----------|-----------| +|适合家庭用户|mhddfs驱动没有内建在Linux内核中 | +|运行简单|运行时需要大量处理能力| +|没有明显的数据丢失|没有冗余解决方案| +|不需要分割文件|不支持移动硬链接| +|可以添加新文件到组成的虚拟文件系统|| +|可以管理文件保存的位置|| +|支持扩展文件属性|| + +### Linux中安装Mhddfs ### + +在Debian及其类似的移植系统中,你可以使用下面的命令来安装mhddfs包。 + + # apt-get update && apt-get install mhddfs + +![Install Mhddfs on Debian based Systems](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Ubuntu.png) + +*安装Mhddfs到基于Debian的系统中* + +在RHEL/CentOS Linux系统中,你需要开启[epel仓库][3],然后执行下面的命令来安装mhddfs包。 + + # yum install mhddfs + +在Fedora 22及以上系统中,你可以通过dnf包管理来获得它,就像下面这样。 + + # dnf install mhddfs + +![Install Mhddfs on Fedora](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Fedora.png) + +*安装Mhddfs到Fedora* + +如果万一mhddfs包不能从epel仓库获取到,那么你需要解决下面的依赖,然后像下面这样来编译源码并安装。 + +- FUSE头文件 +- GCC +- libc6头文件 +- uthash头文件 +- libattr1头文件(可选) + +接下来,只需从下面建议的地址下载最新的源码包,然后编译。 + + # wget http://mhddfs.uvw.ru/downloads/mhddfs_0.1.39.tar.gz + # tar -zxvf mhddfs*.tar.gz + # cd mhddfs-0.1.39/ + # make + +你应该可以在当前目录中看到mhddfs的二进制文件,以root身份将它移动到/usr/bin/和/usr/local/bin/中。 + + # cp mhddfs /usr/bin/ + # cp mhddfs /usr/local/bin/ + +一切搞定,mhddfs已经可以用了。 + +### 我怎么使用Mhddfs? 
### + +1、 让我们看看当前所有挂载到我们系统中的硬盘。 + + $ df -h + +![Check Mounted Devices](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mounted-Devices.gif) + +**样例输出** + + Filesystem Size Used Avail Use% Mounted on + + /dev/sda1 511M 132K 511M 1% /boot/efi + /dev/sda2 451G 92G 336G 22% / + /dev/sdb1 1.9T 161G 1.7T 9% /media/avi/BD9B-5FCE + /dev/sdc1 555M 555M 0 100% /media/avi/Debian 8.1.0 M-A 1 + +注意这里的‘挂载点’名称,我们后面会使用到它们。 + +2、 创建目录‘/mnt/virtual_hdd’,所有这些文件系统将会在这里组织到一起。 + + # mkdir /mnt/virtual_hdd + +3、 然后,挂载所有文件系统。你可以通过root或者FUSE组中的某个用户来完成。 + + # mhddfs /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd -o allow_other + +![Mount All File System in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Mount-All-File-System-in-Linux.png) + +*在Linux中挂载所有文件系统* + +**注意**:这里我们使用了所有硬盘的挂载点名称,很明显,你的挂载点名称会有所不同。也请注意“-o allow_other”选项可以让这个虚拟文件系统让其它所有人可见,而不仅仅是创建它的人。 + +4、 现在,运行“df -h”来看看所有文件系统。它应该包含了你刚才创建的那个。 + + $ df -h + +![Verify Virtual File System Mount](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Virtual-File-System.png) + +*验证虚拟文件系统挂载* + +你可以像对已挂在的驱动器那样给虚拟文件系统应用所有的选项。 + +5、 要在每次系统启动创建这个虚拟文件系统,你应该以root身份添加下面的这行代码(在你那里会有点不同,取决于你的挂载点)到/etc/fstab文件的末尾。 + + mhddfs# /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd fuse defaults,allow_other 0 0 + +6、 如果在任何时候你想要添加/移除一个新的驱动器到/从虚拟硬盘,你可以挂载一个新的驱动器,拷贝/mnt/vritual_hdd的内容,卸载卷,弹出你要移除的的驱动器并/或挂载你要包含的新驱动器。使用mhddfs命令挂载全部文件系统到Virtual_hdd下,这样就全部搞定了。 + +#### 我怎么卸载Virtual_hdd? 
#### + +卸载virtual_hdd相当简单,就像下面这样 + + # umount /mnt/virtual_hdd + +![Unmount Virtual Filesystem](http://www.tecmint.com/wp-content/uploads/2015/08/Unmount-Virtual-Filesystem.png) + +*卸载虚拟文件系统* + +注意,是umount,而不是unmount,很多用户都输错了。 + +到现在为止全部结束了。我正在写另外一篇文章,你们一定喜欢读的。到那时,请保持连线。请在下面的评论中给我们提供有用的反馈吧。请为我们点赞并分享,帮助我们扩散。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/combine-partitions-into-one-in-linux-using-mhddfs/ + +作者:[Avishek Kumar][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[2]:http://www.tecmint.com/mount-filesystem-in-linux/ +[3]:https://linux.cn/article-2324-1.html diff --git a/translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md b/translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md deleted file mode 100644 index 04d9f18eb9..0000000000 --- a/translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md +++ /dev/null @@ -1,183 +0,0 @@ -Mhddfs——将多个小分区合并成一个大的虚拟存储 -================================================================================ - -让我们假定你有30GB的电影,并且你有3个驱动器,每个的大小为20GB。那么,你会怎么来存放东西呢? - -很明显,你可以将你的视频分割成2个或者3个不同的卷,并将它们手工存储到驱动器上。这当然不是一个好主意,它成了一项费力的工作,它需要你手工干预,而且花费你大量时间。 - -另外一个解决方案是创建一个[RAID磁盘阵列][1]。然而,RAID在缺乏存储可靠性,磁盘空间可用性差等方面声名狼藉。另外一个解决方案,就是mhddfs。 - -![Combine Multiple Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Combine-Multiple-Partitions-in-Linux.png) -Mhddfs——在Linux中合并多个分区 - -mhddfs是一个用于Linux的驱动,它可以将多个挂载点合并到一个虚拟磁盘中。它是一个基于FUSE的驱动,提供了一个用于大数据存储的简单解决方案。它将所有小文件系统合并,以创建一个单一的大虚拟文件系统,该文件系统包含其成员文件系统的所有颗粒,包括文件和空闲空间。 - -#### 你为什么需要Mhddfs? 
#### - -你所有存储设备创建了一个单一的虚拟池,它可以在启动时被挂载。这个小工具可以智能地照看并处理哪个驱动器满了,哪个驱动器空着,将数据写到哪个驱动器中。当你成功创建虚拟驱动器后,你可以使用[SAMBA][2]来共享你的虚拟文件系统。你的客户端将在任何时候都看到一个巨大的驱动器和大量的空闲空间。 - -#### Mhddfs特性 #### - -- 获取文件系统属性和系统信息。 -- 设置文件系统属性。 -- 创建、读取、移除和写入目录和文件。 -- 支持文件锁和单一设备上的硬链接。 - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-<tr><td>mhddfs的优点</td><td>mhddfs的缺点</td></tr>
-<tr><td>适合家庭用户</td><td>mhddfs驱动没有内建在Linux内核中</td></tr>
-<tr><td>运行简单</td><td>运行时需要大量处理能力</td></tr>
-<tr><td>没有明显的数据丢失</td><td>没有冗余解决方案</td></tr>
-<tr><td>不分割文件</td><td>不支持移动硬链接</td></tr>
-<tr><td>添加新文件到合并的虚拟文件系统</td><td></td></tr>
-<tr><td>管理文件保存的位置</td><td></td></tr>
-<tr><td>扩展文件属性</td><td></td></tr>
- -### Linux中安装Mhddfs ### - -在Debian及其类似的移植系统中,你可以使用下面的命令来安装mhddfs包。 - - # apt-get update && apt-get install mhddfs - -![Install Mhddfs on Debian based Systems](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Ubuntu.png) -安装Mhddfs到基于Debian的系统中 - -在RHEL/CentOS Linux系统中,你需要开启[epel仓库][3],然后执行下面的命令来安装mhddfs包。 - - # yum install mhddfs - -在Fedora 22及以上系统中,你可以通过dnf包管理来获得它,就像下面这样。 - - # dnf install mhddfs - -![Install Mhddfs on Fedora](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Fedora.png) -安装Mhddfs到Fedora - -如果万一mhddfs包不能从epel仓库获取到,那么你需要解决下面的依赖,然后像下面这样来编译源码并安装。 - -- FUSE头文件 -- GCC -- libc6头文件 -- uthash头文件 -- libattr1头文件(可选) - -接下来,只需从下面建议的地址下载最新的源码包,然后编译。 - - # wget http://mhddfs.uvw.ru/downloads/mhddfs_0.1.39.tar.gz - # tar -zxvf mhddfs*.tar.gz - # cd mhddfs-0.1.39/ - # make - -你应该可以在当前目录中看到mhddfs的二进制文件,以root身份将它移动到/usr/bin/和/usr/local/bin/中。 - - # cp mhddfs /usr/bin/ - # cp mhddfs /usr/local/bin/ - -一切搞定,mhddfs已经可以用了。 - -### 我怎么使用Mhddfs? ### - -1.让我们看看当前所有挂载到我们系统中的硬盘。 - - - $ df -h - -![Check Mounted Devices](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mounted-Devices.gif) -**样例输出** - - Filesystem Size Used Avail Use% Mounted on - - /dev/sda1 511M 132K 511M 1% /boot/efi - /dev/sda2 451G 92G 336G 22% / - /dev/sdb1 1.9T 161G 1.7T 9% /media/avi/BD9B-5FCE - /dev/sdc1 555M 555M 0 100% /media/avi/Debian 8.1.0 M-A 1 - -注意这里的‘挂载点’名称,我们后面会使用到它们。 - -2.创建目录‘/mnt/virtual_hdd’,在这里,所有这些文件系统将被组成组。 - - - # mkdir /mnt/virtual_hdd - -3.然后,挂载所有文件系统。你可以通过root或者FUSE组中的某个成员来完成。 - - - # mhddfs /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd -o allow_other - -![Mount All File System in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Mount-All-File-System-in-Linux.png) -在Linux中挂载所有文件系统 - -**注意**:这里我们使用了所有硬盘的挂载点名称,很明显,你的挂载点名称会有所不同。也请注意“-o allow_other”选项可以让这个虚拟文件系统让其它所有人可见,而不仅仅是创建它的人。 - -4.现在,运行“df -h”来看看所有文件系统。它应该包含了你刚才创建的那个。 - - - $ df -h - -![Verify Virtual File System 
Mount](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Virtual-File-System.png) -验证虚拟文件系统挂载 - -你可以像对已挂在的驱动器那样给虚拟文件系统部署所有的选项。 - -5.要在每次系统启动创建这个虚拟文件系统,你应该以root身份添加下面的这行代码(在你那里会有点不同,取决于你的挂载点)到/etc/fstab文件的末尾。 - - mhddfs# /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd fuse defaults,allow_other 0 0 - -6.如果在任何时候你想要添加/移除一个新的驱动器到/从虚拟硬盘,你可以挂载一个新的驱动器,拷贝/mnt/vritual_hdd的内容,卸载卷,弹出你要移除的的驱动器并/或挂载你要包含的新驱动器。使用mhddfs命令挂载全部文件系统到Virtual_hdd下,这样就全部搞定了。 -#### 我怎么卸载Virtual_hdd? #### - -卸载virtual_hdd相当简单,就像下面这样 - - # umount /mnt/virtual_hdd - -![Unmount Virtual Filesystem](http://www.tecmint.com/wp-content/uploads/2015/08/Unmount-Virtual-Filesystem.png) -卸载虚拟文件系统 - -注意,是umount,而不是unmount,很多用户都输错了。 - -到现在为止全部结束了。我正在写另外一篇文章,你们一定喜欢读的。到那时,请保持连线到Tecmint。请在下面的评论中给我们提供有用的反馈吧。请为我们点赞并分享,帮助我们扩散。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/combine-partitions-into-one-in-linux-using-mhddfs/ - -作者:[Avishek Kumar][a] -译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/mount-filesystem-in-linux/ -[3]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ From a6de7c218cc5e41a227798fcc502dbd0c49bff83 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 25 Sep 2015 14:58:31 +0800 Subject: [PATCH 598/697] PUB:20150901 How to Defragment Linux Systems @geekpi --- ...0150901 How to Defragment Linux Systems.md | 126 ++++++++++++++++++ ...0150901 How to Defragment Linux Systems.md | 125 ----------------- 2 files changed, 126 insertions(+), 125 deletions(-) create mode 100644 published/20150901 How to Defragment Linux Systems.md delete mode 100644 translated/tech/20150901 How to Defragment Linux Systems.md diff --git 
a/published/20150901 How to Defragment Linux Systems.md b/published/20150901 How to Defragment Linux Systems.md new file mode 100644 index 0000000000..4c14b5fc5f --- /dev/null +++ b/published/20150901 How to Defragment Linux Systems.md @@ -0,0 +1,126 @@ +如何在 Linux 中整理磁盘碎片 +================================================================================ + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-featured.png) + +有一神话是 linux 的磁盘从来不需要整理碎片。在大多数情况下这是真的,大多数因为是使用的是优秀的日志系统(ext2、3、4等等)来处理文件系统。然而,在一些特殊情况下,碎片仍旧会产生。如果正巧发生在你身上,解决方法很简单。 + +### 什么是磁盘碎片 ### + +文件系统会按块更新文件,如果这些块没有连成一整块而是分布在磁盘的各个角落中时,就会形成磁盘碎片。这对于 FAT 和 FAT32 文件系统而言是这样的。在 NTFS 中这种情况有所减轻,但在 Linux(extX)中却几乎不会发生。下面是原因: + +在像 FAT 和 FAT32 这类文件系统中,文件紧挨着写入到磁盘中。文件之间没有空间来用于增长或者更新: + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fragmented.png) + +NTFS 中在文件之间保留了一些空间,因此有空间进行增长。但因块之间的空间是有限的,碎片也会随着时间出现。 + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-ntfs.png) + +Linux 的日志型文件系统采用了一个不同的方案。与文件相互挨着不同,每个文件分布在磁盘的各处,每个文件之间留下了大量的剩余空间。这就给文件更新和增长留下了很大的空间,碎片很少会发生。 + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-journal.png) + +此外,碎片一旦出现了,大多数 Linux 文件系统会尝试将文件和块重新连续起来。 + +### Linux 中的磁盘整理 ### + +除非你用的是一个很小的硬盘或者空间不够了,不然 Linux 很少会需要磁盘整理。一些可能需要磁盘整理的情况包括: + +- 如果你编辑的是大型视频文件或者 RAW 照片,但磁盘空间有限 +- 如果你使用一个老式硬件,如旧笔记本,你的硬盘会很小 +- 如果你的磁盘开始满了(大约使用了85%) +- 如果你的家目录中有许多小分区 + +最好的解决方案是购买一个大硬盘。如果不可能,磁盘碎片整理就很有用了。 + +### 如何检查碎片 ### + +`fsck` 命令会为你做这个,换句话说,如果你可以在 LiveCD 中运行它,那么就可以用于**所有卸载的分区**。 + +这一点很重要:**在已经挂载的分区中运行 fsck 将会严重危害到你的数据和磁盘**。 + +你已经被警告过了。开始之前,先做一个完整的备份。 + +**免责声明**: 本文的作者与本站将不会对您的文件、数据、系统或者其他损害负责。你需要自己承担风险。如果你继续,你需要接受并了解这点。 + +你应该启动到一个 live 会话中(如使用安装磁盘,系统救援CD等)并在你**卸载**的分区上运行 `fsck` 。要检查是否有任何问题,请在使用 root 权限运行下面的命令: + + fsck -fn [/path/to/your/partition] + +您可以运行以下命令找到分区的路径 + + sudo fdisk -l + +有一个在已挂载的分区中运行 `fsck`(相对)安全的方法是使用`-n`开关。这会对分区进行只读文件系统检查,而不会写入任何东西。当然,这并不能保证十分安全,你应该在创建备份之后进行。在 ext2 中,运行 + + sudo fsck.ext2 -fn 
/path/to/your/partition
+
+这会产生大量的输出,大多数错误信息的原因是分区已经挂载了。最后会给出一个碎片相关的信息。
+
+![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fsck.png)
+
+如果碎片率大于 20% 了,那么你应该开始整理你的磁盘碎片了。
+
+### 如何简单地在 Linux 中整理碎片 ###
+
+你要做的是备份你**所有**的文件和数据到另外一块硬盘中(手动**复制**他们),格式化分区,然后重新复制回去(不要使用备份软件)。日志型文件系统会把它们作为新的文件,并将它们整齐地放置到磁盘中而不产生碎片。
+
+要备份你的文件,运行
+
+    cp -afv [/path/to/source/partition]/* [/path/to/destination/folder]
+
+记住星号(*)是很重要的。
+
+注意:通常认为复制大文件或者大量文件,使用 `dd` 或许是最好的。这是一个非常底层的操作,它会复制一切,包含空闲的空间甚至是留下的垃圾。这不是我们想要的,因此这里最好使用 `cp`。
+
+现在你只需要删除源文件。
+
+    sudo rm -rf [/path/to/source/partition]/*
+
+**可选**:你可以使用如下命令将空闲空间用零填充。也可以用格式化来达到这点,但是如果你并没有复制整个分区而仅仅是复制大文件(它通常会形成碎片)的话,就不应该使用格式化的方法了。
+
+    sudo dd if=/dev/zero of=[/path/to/source/partition]/temp-zero.txt
+
+等待它结束。你可以用 `pv` 来监测进度。
+
+    sudo apt-get install pv
+    sudo dd if=/dev/zero | pv -tpreb | sudo dd of=[/path/to/source/partition]/temp-zero.txt
+
+![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-dd.png)
+
+这就完成了,只要删除这个用于填充的临时文件就行。
+
+    sudo rm [/path/to/source/partition]/temp-zero.txt
+
+待你清零了空闲空间(或者跳过了这步),重新复制回文件,将第一个`cp`命令翻转一下:
+
+    cp -afv [/path/to/original/destination/folder]/* [/path/to/original/source/partition]
+
+### 使用 e4defrag ###
+
+如果你想要简单的方法,安装 `e2fsprogs`,
+
+    sudo apt-get install e2fsprogs
+
+用 root 权限在分区中运行 `e4defrag`。如果你不想或不能卸载该分区,你可以使用它的挂载点而不是路径。要整理整个系统的碎片,运行:
+
+    sudo e4defrag /
+
+在挂载的情况下不保证成功(你也应该在它运行时不要使用你的系统),但是它比复制全部文件再重新复制回来简单多了。
+
+### 总结 ###
+
+linux 系统中由于它的日志型文件系统有效的数据处理很少会出现碎片。如果你因任何原因产生了碎片,简单的方法是重新分配你的磁盘,如复制出去所有文件并复制回来,或者使用`e4defrag`。然而重要的是保证你数据的安全,因此在进行任何可能影响你全部或者大多数文件的操作之前,确保你的文件已经被备份到了另外一个安全的地方去了。
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/defragment-linux/
+
+作者:[Attila Orosz][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/attilaorosz/
diff 
--git a/translated/tech/20150901 How to Defragment Linux Systems.md b/translated/tech/20150901 How to Defragment Linux Systems.md deleted file mode 100644 index 49d16a8f18..0000000000 --- a/translated/tech/20150901 How to Defragment Linux Systems.md +++ /dev/null @@ -1,125 +0,0 @@ -如何在Linux中整理磁盘碎片 -================================================================================ -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-featured.png) - -有一神话是linux的磁盘从来不需要整理碎片。在大多数情况下这是真的,大多数因为是使用的是优秀的日志系统(ext2、3、4等等)来处理文件系统。然而,在一些特殊情况下,碎片仍旧会产生。如果正巧发生在你身上,解决方法很简单。 - -### 什么是磁盘碎片 ### - -碎片发生在不同的小块中更新文件时,但是这些快没有形成连续完整的文件而是分布在磁盘的各个角落中。这对于FAT和FAT32文件系统而言是这样的。这在NTFS中有所减轻,在Linux(extX)中几乎不会发生。下面是原因。 - -在像FAT和FAT32这类文件系统中,文件紧挨着写入到磁盘中。文件之间没有空间来用于增长或者更新: - -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fragmented.png) - -NTFS中在文件之间保留了一些空间,因此有空间进行增长。因为块之间的空间是有限的,碎片也会随着时间出现。 - -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-ntfs.png) - -Linux的日志文件系统采用了一个不同的方案。与文件之间挨着不同,每个文件分布在磁盘的各处,每个文件之间留下了大量的剩余空间。这里有很大的空间用于更新和增长,并且碎片很少会发生。 - -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-journal.png) - -此外,碎片一旦出现了,大多数Linux文件系统会尝试将文件和块重新连续起来。 - -### Linux中的磁盘整理 ### - -除非你用的是一个很小的硬盘或者空间不够了,不然Linux很少会需要磁盘整理。一些可能需要磁盘整理的情况包括: - -- 如果你编辑的是大型视频文件或者原生照片,但磁盘空间有限 -- if you use older hardware like an old laptop, and you have a small hard drive -- 如果你的磁盘开始满了(大约使用了85%) -- 如果你的家目录中有许多小分区 - -最好的解决方案是购买一个大硬盘。如果不可能,磁盘碎片整理就很有用了。 - -### 如何检查碎片 ### - -`fsck`命令会为你做这个 -也就是说如果你可以在liveCD中运行它,那么就可以**卸载所有的分区**。 - -这一点很重要:**在已经挂载的分区中运行fsck将会严重危害到你的数据和磁盘**。 - -你已经被警告过了。开始之前,先做一个完整的备份。 - -**免责声明**: 本文的作者与Make Tech Easier将不会对您的文件、数据、系统或者其他损害负责。你需要自己承担风险。如果你继续,你需要接收并了解这点。 - -你应该启动到一个live会话中(如安装磁盘,系统救援CD等)并运行`fsck`卸载分区。要检查是否有任何问题,请在运行root权限下面的命令: - - fsck -fn [/path/to/your/partition] - -您可以检查一下运行中的分区的路径 - - sudo fdisk -l - -有一个(相对)安全地在已挂载的分区中运行`fsck`的方法是使用‘-n’开关。这会让分区处在只读模式而不能创建任何文件。当然,这里并不能保证安全,你应该在创建备份之后进行。在ext2中,运行 - - 
sudo fsck.ext2 -fn /path/to/your/partition - -会产生大量的输出-- 大多数错误信息的原因是分区已经挂载了。最后会给出一个碎片相关的信息。 - -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fsck.png) - -如果碎片大于20%了,那么你应该开始整理你的磁盘碎片了。 - -### 如何简单地在Linux中整理碎片 ### - -你要做的是备份你**所有**的文件和数据到另外一块硬盘中(手动**复制**他们)。格式化分区然后重新复制回去(不要使用备份软件)。日志系统会把它们作为新的文件,并将它们整齐地放置到磁盘中而不产生碎片。 - -要备份你的文件,运行 - - cp -afv [/path/to/source/partition]/* [/path/to/destination/folder] - -记住星号(*)是很重要的。 - -注意:通常认为复制大文件或者大量文件,使用dd或许是最好的。这是一个非常底层的操作,它会复制一切,包含空闲的空间甚至是留下的垃圾。这不是我们想要的,因此这里最好使用`cp`。 - -现在你只需要删除源文件。 - - sudo rm -rf [/path/to/source/partition]/* - -**可选**:你可以将空闲空间置零。你也可以用格式化来达到这点,但是例子中你并没有复制整个分区而仅仅是大文件(这很可能会造成碎片)。这恐怕不能成为一个选项。 - - sudo dd if=/dev/zero of=[/path/to/source/partition]/temp-zero.txt - -等待它结束。你可以用`pv`来监测进程。 - - sudo apt-get install pv - sudo pv -tpreb | of=[/path/to/source/partition]/temp-zero.txt - -![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-dd.png) - -这就完成了,只要删除临时文件就行。 - - sudo rm [/path/to/source/partition]/temp-zero.txt - -待你清零了空闲空间(或者跳过了这步)。重新复制回文件,将第一个cp命令翻转一下: - - cp -afv [/path/to/original/destination/folder]/* [/path/to/original/source/partition] - -### 使用 e4defrag ### - -如果你想要简单的方法,安装`e2fsprogs`, - - sudo apt-get install e2fsprogs - -用root权限在分区中运行 `e4defrag`。如果你不想卸载分区,你可以使用它的挂载点而不是路径。要整理整个系统的碎片,运行: - - sudo e4defrag / - -在挂载的情况下不保证成功(你也应该保证在它运行时停止使用你的系统),但是它比服务全部文件再重新复制回来简单多了。 - -### 总结 ### - -linux系统中很少会出现碎片因为它的文件系统有效的数据处理。如果你因任何原因产生了碎片,简单的方法是重新分配你的磁盘如复制所有文件并复制回来,或者使用`e4defrag`。然而重要的是保证你数据的安全,因此在进行任何可能影响你全部或者大多数文件的操作之前,确保你的文件已经被备份到了另外一个安全的地方去了。 - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/defragment-linux/ - -作者:[Attila Orosz][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/attilaorosz/ From 
9526719928404fb3ab25b040d4400cfafb9cecba Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Fri, 25 Sep 2015 17:46:10 +0800 Subject: [PATCH 599/697] mikecoder translating... --- ... Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md b/sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md index 532004d419..1b743f7b27 100644 --- a/sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md +++ b/sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md @@ -1,3 +1,5 @@ +mikecoder translating... + Xenlism WildFire: Minimal Icon Theme For Linux Desktop ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icon-theme-linux-3.png) @@ -83,4 +85,4 @@ via: http://itsfoss.com/xenlism-wildfire-theme/ [3]:https://plus.google.com/+NattapongPullkhow [4]:http://itsfoss.com/install-numix-ubuntu/ [5]:http://itsfoss.com/install-switch-themes-gnome-shell/ -[6]:https://twitter.com/share?text=Xenlism+is+a+stunning+minimal+icon+theme+for+Linux.+Thanks+%40xenatt+for+this+beautiful+theme.&via=itsfoss&related=itsfoss&url=http://itsfoss.com/xenlism-wildfire-theme/ \ No newline at end of file +[6]:https://twitter.com/share?text=Xenlism+is+a+stunning+minimal+icon+theme+for+Linux.+Thanks+%40xenatt+for+this+beautiful+theme.&via=itsfoss&related=itsfoss&url=http://itsfoss.com/xenlism-wildfire-theme/ From a54750182f74f48f767a55cad904a3c209271dc2 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Fri, 25 Sep 2015 23:02:15 +0800 Subject: [PATCH 600/697] Translating sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md --- .../20150923 How To Upgrade From Oracle 11g To Oracle 12c.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150923 How To Upgrade 
From Oracle 11g To Oracle 12c.md b/sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md index 103f3922a5..a43b5e2ac5 100644 --- a/sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md +++ b/sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md @@ -1,3 +1,4 @@ +ictlyh Translating How To Upgrade From Oracle 11g To Oracle 12c ================================================================================ Hello all. From 8cee9b8333a01ffe3bcc2bb39add7a1de52b37a1 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 26 Sep 2015 08:55:14 +0800 Subject: [PATCH 601/697] PUB:20150906 How to Install QGit Viewer in Ubuntu 14.04 @geekpi --- ... to Install QGit Viewer in Ubuntu 14.04.md | 31 ++++++++++--------- 1 file changed, 16 insertions(+), 15 deletions(-) rename {translated/tech => published}/20150906 How to Install QGit Viewer in Ubuntu 14.04.md (70%) diff --git a/translated/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md b/published/20150906 How to Install QGit Viewer in Ubuntu 14.04.md similarity index 70% rename from translated/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md rename to published/20150906 How to Install QGit Viewer in Ubuntu 14.04.md index 317e610a6f..e841c7f645 100644 --- a/translated/tech/20150906 How to Install QGit Viewer in Ubuntu 14.04.md +++ b/published/20150906 How to Install QGit Viewer in Ubuntu 14.04.md @@ -1,8 +1,9 @@ -如何在Ubuntu中安装QGit浏览器 +如何在 Ubuntu 中安装 QGit 客户端 ================================================================================ -QGit是一款Marco Costalba用Qt和C++写的开源GUI Git浏览器。它是一款在GUI环境下更好地提供浏览历史记录、提交记录和文件补丁的浏览器。它利用git命令行来执行并显示输出。它有一些常规的功能像浏览历史、比较、文件历史、文件标注、档案树。我们可以格式化并用选中的提交应用补丁,在两个实例之间拖拽并提交等等。它允许我们创建自定义的按钮来用它内置的生成器来执行特定的命令。 -这里有简单的几步在Ubuntu 14.04 LTS "Trusty"中编译并安装QGit浏览器。 +QGit是一款由Marco Costalba用Qt和C++写的开源的图形界面 Git 
客户端。它是一款可以在图形界面环境下更好地提供浏览版本历史、查看提交记录和文件补丁的客户端。它利用git命令行来执行并显示输出。它有一些常规的功能像浏览版本历史、比较、文件历史、文件标注、归档树。我们可以格式化并用选中的提交应用补丁,在两个或多个实例之间拖拽并提交等等。它允许我们用它内置的生成器来创建自定义的按钮去执行特定的命令。 + +这里有简单的几步在Ubuntu 14.04 LTS "Trusty"中编译并安装QGit客户端。 ### 1. 安装 QT4 库 ### @@ -16,7 +17,7 @@ QGit是一款Marco Costalba用Qt和C++写的开源GUI Git浏览器。它是一 $ sudo apt-get install git -现在,我们要使用下面的git命令来克隆仓库。 +现在,我们要使用下面的git命令来克隆QGit客户端的仓库。 $ git clone git://repo.or.cz/qgit4/redivivus.git @@ -30,25 +31,25 @@ QGit是一款Marco Costalba用Qt和C++写的开源GUI Git浏览器。它是一 ### 3. 编译 QGit ### -克隆之后,我们现在进入redivivus的目录,并创建我们编译需要的makefile文件。因此,要进入目录,我们要运行下面的命令。 +克隆之后,我们现在进入redivivus的目录,并创建我们编译需要的makefile文件。进入目录,运行下面的命令。 $ cd redivivus -接下来,我们运行下面的命令从qmake项目也就是qgit.pro来生成新的Makefile。 +接下来,我们运行下面的命令从qmake项目文件(qgit.pro)来生成新的Makefile。 $ qmake qgit.pro -生成Makefile之后,我们现在终于要编译qgit的源代码并得到二进制的输出。首先我们要安装make和g++包用于编译,因为这是一个用C++写的程序。 +生成Makefile之后,我们现在终于可以编译qgit的源代码并生成二进制。首先我们要安装make和g++包用于编译,因为这是一个用C++写的程序。 $ sudo apt-get install make g++ -现在,我们要用make命令来编译代码了 +现在,我们要用make命令来编译代码了。 $ make ### 4. 安装 QGit ### -成功编译QGit的源码之后,我们就要在Ubuntu 14.04中安装它了,这样就可以在系统中执行它。因此我们将运行下面的命令、 +成功编译QGit的源码之后,我们就要在Ubuntu 14.04中安装它了,这样就可以在系统中执行它。因此我们将运行下面的命令。 $ sudo make install @@ -75,30 +76,30 @@ QGit是一款Marco Costalba用Qt和C++写的开源GUI Git浏览器。它是一 [Desktop Entry] Name=qgit - GenericName=git GUI viewer + GenericName=git 图形界面 viewer Exec=qgit Icon=qgit Type=Application - Comment=git GUI viewer + Comment=git 图形界面 viewer Terminal=false MimeType=inode/directory; Categories=Qt;Development;RevisionControl; 完成之后,保存并退出。 -### 6. 运行 QGit 浏览器 ### +### 6. 
运行 QGit 客户端 ### QGit安装完成之后,我们现在就可以从任何启动器或者程序菜单中启动它了。要在终端下面运行QGit,我们可以像下面那样。 $ qgit -这会打开基于Qt4框架GUI模式的QGit。 +这会打开基于Qt4框架图形界面模式的QGit。 ![QGit Viewer](http://blog.linoxide.com/wp-content/uploads/2015/07/qgit-viewer.png) ### 总结 ### -QGit是一个很棒的基于QT的git浏览器。它可以在Linux、MAC OSX和 Microsoft Windows所有这三个平台中运行。它帮助我们很容易地浏览历史、版本、分支等等git仓库提供的信息。它减少了使用命令行的方式去执行诸如浏览版本、历史、比较功能的需求,并用图形化的方式来简化了这些任务。最新的qgit版本也在默认仓库中,你可以使用 **apt-get install qgit** 命令来安装。因此。qgit用它简单的GUI使得我们的工作更加简单和快速。 +QGit是一个很棒的基于QT的git客户端。它可以在Linux、MAC OSX和 Microsoft Windows所有这三个平台中运行。它帮助我们很容易地浏览历史、版本、分支等等git仓库提供的信息。它减少了使用命令行的方式去执行诸如浏览版本、历史、比较功能的需求,并用图形化的方式来简化了这些任务。最新的qgit版本也在默认仓库中,你可以使用 **apt-get install qgit** 命令来安装。因此,QGit用它简单的图形界面使得我们的工作更加简单和快速。 -------------------------------------------------------------------------------- @@ -106,7 +107,7 @@ via: http://linoxide.com/ubuntu-how-to/install-qgit-viewer-ubuntu-14-04/ 作者:[Arun Pyasi][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 911b3e616e9025b6211b90461709b311b871cc80 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 26 Sep 2015 10:14:24 +0800 Subject: [PATCH 602/697] PUB:RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs @xiqingongzi --- ...ks with Cron and Monitoring System Logs.md | 199 ++++++++++++++++++ ...ks with Cron and Monitoring System Logs.md | 195 ----------------- 2 files changed, 199 insertions(+), 195 deletions(-) create mode 100644 published/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md delete mode 100644 translated/tech/RHCSA/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md diff --git a/published/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md b/published/RHCSA Series--Part 
10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md new file mode 100644 index 0000000000..b6805bbf4c --- /dev/null +++ b/published/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md @@ -0,0 +1,199 @@ +RHCSA 系列(九): Yum 包管理、Cron 自动任务计划和监控系统日志 +================================================================================ + +在这篇文章中,我们将回顾如何在 RHEL7 中安装,更新和删除软件包。我们还将介绍如何使用 cron 进行任务自动化,并完成如何查找和监控系统日志文件,以及为什么这些技能是系统管理员必备技能。 + +![Yum Package Management Cron Jobs Log Monitoring Linux](http://www.tecmint.com/wp-content/uploads/2015/05/Yum-Package-Management-Cron-Job-Log-Monitoring-Linux.jpg) + +*RHCSA: Yum包管理、任务计划和系统监控 – Part 10* + +### 使用yum 管理包 ### + +要安装一个包以及所有尚未安装的依赖包,您可以使用: + + # yum -y install package_name(s) + +package_name(s) 需要是至少一个真实的软件包名 + +例如,安装 httpd 和 mlocate(按顺序),输入。 + + # yum -y install httpd mlocate + +**注意**: 字符 y 表示绕过执行下载和安装前的确认提示。如果需要提示,你可以不用它。 + +默认情况下,yum 将安装与操作系统体系结构相匹配的包,除非通过在包名加入架构名。 + +例如,在 64 位系统上,`yum install package`将安装包的 x86_64 版本,而 `yum install package.x86`(如果有的话)将安装 32 位的。 + +有时,你想安装一个包,但不知道它的确切名称。`search all` 选项可以在当前启用的软件库中的包名称和包描述中搜索它,或者`search`选项可以在包名称中搜索。 + +比如, + + # yum search log + +将搜索安装的软件库中名字和摘要与该词(log)类似的软件,而 + + # yum search all log + +也将在包描述和网址中寻找寻找相同的关键字。 + +一旦搜索返回包列表,您可能希望在安装前显示一些信息。这时 info 选项派上了用场: + + # yum info logwatch + +![Search Package Information](http://www.tecmint.com/wp-content/uploads/2015/05/Search-Package-Information.png) + +*搜索包信息* + +您可以定期用以下命令检查更新: + + # yum check-update + +上述命令将返回可以更新的所有已安装的软件包。在下图所示的例子中,只有 rhel-7-server-rpms 有可用更新: + +![Check For Package Updates](http://www.tecmint.com/wp-content/uploads/2015/05/Check-For-Updates.png) + +*检查包更新* + +然后,您可以更新该包, + + # yum update rhel-7-server-rpms + +如果有几个包可以一同更新,可以使用 ` yum update` 一次性更新所有的包。 + +当你知道一个可执行文件的名称,如 ps2pdf,但不知道那个包提供了它?你可以通过 `yum whatprovides “*/[executable]”`找到: + + # yum whatprovides “*/ps2pdf” + +![Find Package Belongs to Which 
Package](http://www.tecmint.com/wp-content/uploads/2015/05/Find-Package-Information.png) + +*查找文件属于哪个包* + +当删除包时,你可以使用 `yum remove Package` ,很简单吧?Yum 是一个完整而强大的包管理器。 + + # yum remove httpd + +- 参见: [20 个管理 RHEL 7 软件包的 Yum 命令][1] + +### 文本式 RPM 工具 ### + +RPM(又名 RPM 包管理器,原意是 RedHat 软件包管理器)也可用于安装或更新独立的`rpm`格式的软件包。 + +往往使用 `-Uvh` 表明如果这个包没有安装就安装它,如果已存在就尝试更新。这里`-U`表示更新、`-v`表示显示详细输出,用`-h`显示进度条。例如 + + # rpm -Uvh package.rpm + +rpm 的另一个典型的使用方法是列出所有安装的软件包, + + # rpm -qa + +![Query All RPM Packages](http://www.tecmint.com/wp-content/uploads/2015/05/Query-All-RPM-Packages.png) + +*查询所有包* + +- 参见: [20 个管理 RHEL 7 软件包的 RPM 命令][2] + +### 使用 Cron 调度任务 ### + +Linux 和 UNIX 类操作系统包括一个称为 Cron 的工具,允许你周期性调度任务(即命令或 shell 脚本)。cron 会每分钟定时检查 /var/spool/cron 目录中有在 /etc/passwd 帐户文件中指定用户名的文件。 + +执行命令时,命令输出是发送到该 crontab 的所有者(或者可以在 /etc/crontab,通过 MAILTO 环境变量中指定用户)。 + +crontab 文件(可以通过键入 `crontab -e`并按 Enter 键创建)的格式如下: + +![Crontab Entries](http://www.tecmint.com/wp-content/uploads/2015/05/Crontab-Format.png) + +*crontab条目* + +因此,如果我们想在每个月第二天上午2:15更新本地文件数据库(用于按名字或通配模式定位文件),我们需要添加以下 crontab 条目: + + 15 02 2 * * /bin/updatedb + +以上的条目的意思是:”每年每月第二天的凌晨 2:15 运行 /bin/updatedb,无论是周几”,我想你也猜到了。星号作为通配符。 + +正如我们前面所提到的,添加一个 cron 任务后,你可以看到一个名为 root 的文件被添加在 /var/spool/cron。该文件列出了所有的 crond 守护进程应该运行的任务: + + # ls -l /var/spool/cron + +![Check All Cron Jobs](http://www.tecmint.com/wp-content/uploads/2015/05/Check-All-Cron-Jobs.png) + +*检查所有cron任务* + +在上图中,显示当前用户的 crontab 可以使用 `cat /var/spool/cron` 或 + + # crontab -l + +如果你需要在一个更精细的时间上运行的任务(例如,一天两次或每月三次),cron 也可以做到。 + +例如,每个月1号和15号运行 /my/script 并将输出导出到 /dev/null (丢弃输出),您可以添加如下两个crontab 条目: + + 01 00 1 * * /myscript > /dev/null 2>&1 + 01 00 15 * * /my/script > /dev/null 2>&1 + +不过为了简单,你可以将他们合并: + + 01 00 1,15 * * /my/script > /dev/null 2>&1 + +跟着前面的例子,我们可以在每三个月的第一天的凌晨1:30运行 /my/other/script。 + + 30 01 1 1,4,7,10 * /my/other/script > /dev/null 2>&1 + +但是当你必须每隔某分钟、小时、天或月来重复某个任务时,你可以通过所需的频率来划分正确的时间。以下与前一个 crontab 条目具有相同的意义: + + 30 01 1 */3 * /my/other/script > /dev/null 
2>&1 + +或者也许你需要在一个固定的频率或系统启动后运行某个固定的工作,你可以使用下列五个字符串中的一个字符串来指示你想让你的任务计划工作的确切时间: + + @reboot 仅系统启动时运行 + @yearly 一年一次, 类似与 00 00 1 1 * + @monthly 一月一次, 类似与 00 00 1 * * + @weekly 一周一次, 类似与 00 00 * * 0 + @daily 一天一次, 类似与 00 00 * * * + @hourly 一小时一次, 类似与 00 * * * * + +- 参见:[11 个在 RHEL7 中调度任务的命令][3] + +### 定位和查看日志### + +系统日志存放(并轮转)在 /var/log 目录。根据 Linux 的文件系统层次标准(Linux Filesystem Hierarchy Standard),这个目录包括各种日志文件,并包含一些必要的子目录(如 audit、 httpd 或 samba ,如下图),并由相应的系统守护进程操作: + + # ls /var/log + +![Linux Log Files Location](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-Log-Files.png) + +*Linux 日志的位置* + +其他感兴趣的日志比如 [dmesg][4](包括了所有内核层缓冲区的消息),secure(记录要求用户认证的连接请求),messages(系统级信息),和 wtmp(记录了所有用户的登录、登出)。 + +日志是非常重要的,它们让你可以看到任何时刻发生在你的系统的事情,以及已经过去的事情。他们是无价的工具,可以排错和监测一个 Linux 服务器,通常使用 `tail -f` 命令来实时显示正在发生和写入日志的事件。 + +举个例子,如果你想看你的内核相关的日志,你需要输入如下命令: + + # tail -f /var/log/dmesg + +同样的,如果你想查看你的 Web 服务器日志,你需要输入如下命令: + + # tail -f /var/log/httpd/access.log + +### 总结 ### + +如果你知道如何有效的管理包、调度任务、以及知道在哪寻找系统当前和过去操作的信息,你可以放松工作而不会总被吓到。我希望这篇文章能够帮你学习或回顾这些基础知识。 + +如果你有任何问题或意见,请使用下面的表单反馈给我们。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/yum-package-management-cron-job-scheduling-monitoring-linux-logs/ + +作者:[Gabriel Cánepa][a] +译者:[xiqingongzi](https://github.com/xiqingongzi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[2]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ +[3]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/ +[4]:http://www.tecmint.com/dmesg-commands/ + diff --git a/translated/tech/RHCSA/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md b/translated/tech/RHCSA/RHCSA Series--Part 10--Yum 
Package Management, Automating Tasks with Cron and Monitoring System Logs.md deleted file mode 100644 index 3456361c0c..0000000000 --- a/translated/tech/RHCSA/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md +++ /dev/null @@ -1,195 +0,0 @@ -[xiqingongzi translating] -RHCSA Series: Yum 包管理, 自动任务计划和系统监控日志 – Part 10 -================================================================================ -在这篇文章中,我们将回顾如何在REHL7中安装,更新和删除软件包。我们还将介绍如何使用cron任务的自动化,并完成如何查找和监控系统日志文件以及为什么这些技能是系统管理员必备技能 - -![Yum Package Management Cron Jobs Log Monitoring Linux](http://www.tecmint.com/wp-content/uploads/2015/05/Yum-Package-Management-Cron-Job-Log-Monitoring-Linux.jpg) - -RHCSA: Yum包管理, 任务计划和系统监控 – 第十章 - -### 使用yum 管理包 ### - -要安装一个包以及所有尚未安装的依赖包,您可以使用: - - # yum -y install package_name(s) - - package_name(s) 需要是一个存在的包名 - -例如,安装httpd和mlocate(按顺序),类型。 - - # yum -y install httpd mlocate - -**注意**: 字符y表示绕过执行下载和安装前的确认提示,如果需要,你可以删除它 - -默认情况下,yum将安装与操作系统体系结构相匹配的包,除非通过在包名加入架构名 - -例如,在64位系统上,使用yum安装包将安装包的x86_64版本,而package.x86 yum安装(如果有的话)将安装32位。 - -有时,你想安装一个包,但不知道它的确切名称。搜索可以在当前启用的存储库中去搜索包名称或在它的描述中搜索,并分别进行。 - -比如, - - # yum search log - -将搜索安装的软件包中名字与该词类似的软件,而 - - # yum search all log - -也将在包描述和网址中寻找寻找相同的关键字 - -一旦搜索返回包列表,您可能希望在安装前显示一些信息。这时info选项派上用场: - - # yum info logwatch - -![Search Package Information](http://www.tecmint.com/wp-content/uploads/2015/05/Search-Package-Information.png) - -搜索包信息 - -您可以定期用以下命令检查更新: - - # yum check-update - -上述命令将返回可以更新的所有安装包。在下图所示的例子中,只有rhel-7-server-rpms有可用更新: - -![Check For Package Updates](http://www.tecmint.com/wp-content/uploads/2015/05/Check-For-Updates.png) -检查包更新 - -然后,您可以更新该包, - - # yum update rhel-7-server-rpms - -如果有几个包,可以一同更新,yum update 将一次性更新所有的包 - -现在,当你知道一个可执行文件的名称,如ps2pdf,但不知道那个包提供了它?你可以通过 `yum whatprovides “*/[executable]”`找到: - - # yum whatprovides “*/ps2pdf” - -![Find Package Belongs to Which Package](http://www.tecmint.com/wp-content/uploads/2015/05/Find-Package-Information.png) - -查找文件属于哪个包 
- -现在,当删除包时,你可以使用 yum remove Package ,很简单吧?Yum 是一个完整的强大的包管理器。 - - # yum remove httpd - -Read Also: [20 Yum Commands to Manage RHEL 7 Package Management][1] - -### 文本式RPM工具 ### - -RPM(又名RPM包管理器,或原本RedHat软件包管理器)也可用于安装或更新软件包来当他们在独立`rpm`包装形式。 - -往往使用`-Uvh` 表面这个包应该被安装而不是已存在或尝试更新。安装是`-U` ,显示详细输出用`-v`,显示进度条用`-h` 例如 - # rpm -Uvh package.rpm - -另一个典型的使用rpm 是产生一个列表,目前安装的软件包的code > rpm -qa(缩写查询所有) - - # rpm -qa - -![Query All RPM Packages](http://www.tecmint.com/wp-content/uploads/2015/05/Query-All-RPM-Packages.png) - -查询所有包 - -Read Also: [20 RPM Commands to Install Packages in RHEL 7][2] - -### Cron任务计划 ### - -Linux和UNIX类操作系统包括其他的工具称为Cron允许你安排任务(即命令或shell脚本)运行在周期性的基础上。每分钟定时检查/var/spool/cron目录中有在/etc/passwd帐户文件中指定名称的文件。 - -执行命令时,输出是发送到crontab的所有者(或者在/etc/crontab,在MailTO环境变量中指定的用户,如果它存在的话)。 - -crontab文件(这是通过键入crontab e和按Enter键创建)的格式如下: - -![Crontab Entries](http://www.tecmint.com/wp-content/uploads/2015/05/Crontab-Format.png) - -crontab条目 - -因此,如果我们想更新本地文件数据库(这是用于定位文件或图案)每个初二日上午2:15,我们需要添加以下crontab条目: - - 15 02 2 * * /bin/updatedb - -以上的条目写着:”每年每月第二天的凌晨2:15运行 /bin/updatedb“ 无论是周几”,我想你也猜到了。星号作为通配符 - -添加一个cron作业后,你可以看到一个文件名为root被添加在/var/spool/cron,正如我们前面所提到的。该文件列出了所有的crond守护进程运行的任务: - - # ls -l /var/spool/cron - -![Check All Cron Jobs](http://www.tecmint.com/wp-content/uploads/2015/05/Check-All-Cron-Jobs.png) - -检查所有cron工作 - -在上图中,显示当前用户的crontab可以使用 cat /var/spool/cron 或 - - # crontab -l - -如果你需要在一个更精细的时间上运行的任务(例如,一天两次或每月三次),cron也可以帮助你。 - -例如,每个月1号和15号运行 /my/script 并将输出导出到 /dev/null,您可以添加如下两个crontab条目: - - 01 00 1 * * /myscript > /dev/null 2>&1 - 01 00 15 * * /my/script > /dev/null 2>&1 - -不过为了简单,你可以将他们合并 - - 01 00 1,15 * * /my/script > /dev/null 2>&1 -在前面的例子中,我们可以在每三个月的第一天的凌晨1:30运行 /my/other/script . 
- - 30 01 1 1,4,7,10 * /my/other/script > /dev/null 2>&1 - -但是当你必须每一个“十”分钟,数小时,数天或数月的重复某个任务时,你可以通过所需的频率来划分正确的时间。以下为前一个crontab条目具有相同的意义: - - 30 01 1 */3 * /my/other/script > /dev/null 2>&1 - -或者也许你需要在一个固定的时间段或系统启动后运行某个固定的工作,例如。你可以使用下列五个字符串中的一个字符串来指示你想让你的任务计划工作的确切时间: - - @reboot 仅系统启动时运行. - @yearly 一年一次, 类似与 00 00 1 1 *. - @monthly 一月一次, 类似与 00 00 1 * *. - @weekly 一周一次, 类似与 00 00 * * 0. - @daily 一天一次, 类似与 00 00 * * *. - @hourly 一小时一次, 类似与 00 * * * *. - -Read Also: [11 Commands to Schedule Cron Jobs in RHEL 7][3] - -### 定位和查看日志### - -系统日志存放在 /var/log 目录.根据Linux的文件系统层次标准,这个目录包括各种日志文件,并包含一些必要的子目录(如 audit, httpd, 或 samba ,如下图),并由相应的系统守护进程操作 - - # ls /var/log - -![Linux Log Files Location](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-Log-Files.png) - -Linux 日志定位 - -其他有趣的日志比如 [dmesg][4](包括了所有内核缓冲区的信息),安全(用户认证尝试链接),信息(系统信息),和wtmp(记录了所有用户的登录登出) - -日志是非常重要的,他们让你可以看到是任何时刻发生在你的系统的事情,甚至是已经过去的事情。他们是无价的工具,解决和监测一个Linux服务器,并因此经常使用的 “tail -f command ”来实时显示正在发生并实时写入的事件。 - -举个例子,如果你想看你的内核的日志,你需要输入如下命令 - - # tail -f /var/log/dmesg - -同样的,如果你想查看你的网络服务器日志,你需要输入如下命令 - - # tail -f /var/log/httpd/access.log - -### 总结 ### - -如果你知道如何有效的管理包,安排任务,以及知道在哪寻找系统当前和过去操作的信息,你可以放心你将不会总是有太多的惊喜。我希望这篇文章能够帮你学习或回顾这些基础知识。 - -如果你有任何问题或意见,请使用下面的表格反馈给我们。 --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/yum-package-management-cron-job-scheduling-monitoring-linux-logs/ - -作者:[Gabriel Cánepa][a] -译者:[xiqingongzi](https://github.com/xiqingongzi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[2]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ -[3]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/ -[4]:http://www.tecmint.com/dmesg-commands/ - From 
698ec9bfbda32746c99ff85eb431711748ebf1a3 Mon Sep 17 00:00:00 2001 From: Mike Tang Date: Sat, 26 Sep 2015 11:04:02 +0800 Subject: [PATCH 603/697] translated/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md --- ...e--Minimal Icon Theme For Linux Desktop.md | 88 ------------------- ...e--Minimal Icon Theme For Linux Desktop.md | 86 ++++++++++++++++++ 2 files changed, 86 insertions(+), 88 deletions(-) delete mode 100644 sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md create mode 100644 translated/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md diff --git a/sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md b/sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md deleted file mode 100644 index 1b743f7b27..0000000000 --- a/sources/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md +++ /dev/null @@ -1,88 +0,0 @@ -mikecoder translating... - -Xenlism WildFire: Minimal Icon Theme For Linux Desktop -================================================================================ -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icon-theme-linux-3.png) - -It’s been some time that I covered an icon theme on It’s FOSS. Perhaps because no theme caught my eyes in recent times. There are a few which I consider the [best icon themes for Ubuntu][1] but these are mostly the known ones like Numix and Moka and I am pretty content using Numix. - -But a few days back I came across this [Xenslim WildFire][2] and I must say, it looks damn good. Minimalism is the current popular trend in the design world and Xenlism perfects it. Smooth and tranquil, Xenlism is inspired by Nokia’s meego and Apple iOS icon. 
- -Have a look at some of its icons for various applications: - -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons.png) - -Folder icons look like: - -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons-1.png) - -Theme developer, [Nattapong Pullkhow][3], claims that the icon theme is best suited for GNOME but it should work fine with Unity, KDE and Mate as well. - -### Install Xenlism Wildfire icon theme ### - -Xenlism Theme is around 230 MB in download size which is slightly heavy for an icon theme but considering that it has support for a huge number of applications, the size should not be surprising. - -#### Installing in Ubuntu/Debian based Linux distributions #### - -To install it in Ubuntu variants, use the command below in a terminal to add the GPG key: - - sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 90127F5B - -After adding the key, use the following commands: - - echo "deb http://downloads.sourceforge.net/project/xenlism-wildfire/repo deb/" | sudo tee -a /etc/apt/sources.list - sudo apt-get update - sudo apt-get install xenlism-wildfire-icon-theme - -In addition to the icon theme, you can download a matching minimal wallpaper as well: - - sudo apt-get install xenlism-artwork-wallpapers - -#### Installing in Arch based Linux distributions #### - -You’ll have to edit the Pacman repository. In a terminal, use the following command: - - sudo nano /etc/pacman.conf - -Add the following section to this configuration file: - - [xenlism-arch] - SigLevel = Never - Server = http://downloads.sourceforge.net/project/xenlism-wildfire/repo/arch - -Update the system and install icon theme and wallpapers as following: - - sudo pacman -Syyu - sudo pacman -S xenlism-wildfire - -#### Using Xenlism icon theme #### - -In Ubuntu Unity, [use Unity Tweak Tool to change the icon theme][4]. In GNOME, [use Gnome Tweak Tool to change the theme][5]. 
I presume that you know how to do this part, but if you are stuck let me know and I’ll add some screenshots. - -Below is a screenshot of Xenlism icon theme in use in Ubuntu 15.04 Unity. Xenlism desktop wallpaper is in the background. - -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons-2.png) - -It looks good, isn’t it? If you try it and like it, feel free to thank the developer: - -> [Xenlism is a stunning minimal icon theme for Linux. Thanks @xenatt for this beautiful theme.][6] - -I hope you like it. Do share your views on this icon theme or your preferred icon theme. Is Xenlism good enough to change your favorite icon theme? - --------------------------------------------------------------------------------- - -via: http://itsfoss.com/xenlism-wildfire-theme/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:http://itsfoss.com/best-icon-themes-ubuntu-1404/ -[2]:http://xenlism.github.io/wildfire/ -[3]:https://plus.google.com/+NattapongPullkhow -[4]:http://itsfoss.com/install-numix-ubuntu/ -[5]:http://itsfoss.com/install-switch-themes-gnome-shell/ -[6]:https://twitter.com/share?text=Xenlism+is+a+stunning+minimal+icon+theme+for+Linux.+Thanks+%40xenatt+for+this+beautiful+theme.&via=itsfoss&related=itsfoss&url=http://itsfoss.com/xenlism-wildfire-theme/ diff --git a/translated/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md b/translated/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md new file mode 100644 index 0000000000..5bd7655a9e --- /dev/null +++ b/translated/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md @@ -0,0 +1,86 @@ +Xenlism WildFire: 一个精美的 Linux 桌面版主题 +================================================================================ 
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icon-theme-linux-3.png)
+
+有很长一段时间,我都在使用同一个主题,没有更换过,可能是因为最近没有一款主题能满足我的需求。有一些我认为是 [Ubuntu 上最好的图标主题][1],比如 Numix 和 Moka,并且我一直对 Numix 比较满意。
+
+但是前几天,我遇到了 [Xenlism WildFire][2],并且我必须承认,它看起来非常漂亮。极简(Minimal)是当前设计界的流行趋势,而 Xenlism 将它表现得十分完美:平滑而美观。Xenlism 的设计受到了诺基亚 Meego 和苹果 iOS 图标的启发。
+
+让我们来看一下它的几个不同应用的图标:
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons.png)
+
+文件夹图标看起来像这样:
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons-1.png)
+
+主题开发者 [Nattapong Pullkhow][3] 说,这个图标主题最适合 GNOME,但是在 Unity、KDE 和 Mate 上也表现良好。
+
+### 安装 Xenlism Wildfire ###
+
+Xenlism 主题的下载大小大约有 230 MB,对于一个图标主题来说确实很大,但是考虑到它支持大量的应用程序,这个大小也就不那么令人吃惊了。
+
+#### 在 Ubuntu/Debian 上安装 Xenlism ####
+
+要在 Ubuntu 及其衍生版中安装它,先在终端中用以下命令添加 GPG 密钥:
+
+    sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 90127F5B
+
+添加完密钥之后,输入如下的命令进行安装:
+
+    echo "deb http://downloads.sourceforge.net/project/xenlism-wildfire/repo deb/" | sudo tee -a /etc/apt/sources.list
+    sudo apt-get update
+    sudo apt-get install xenlism-wildfire-icon-theme
+
+除了图标主题之外,你还可以下载配套的极简壁纸:
+
+    sudo apt-get install xenlism-artwork-wallpapers
+
+#### 在 Arch 上安装 Xenlism ####
+
+你需要编辑 Pacman 软件仓库配置。在终端中使用如下命令:
+
+    sudo nano /etc/pacman.conf
+
+在该配置文件中添加如下的小节:
+
+    [xenlism-arch]
+    SigLevel = Never
+    Server = http://downloads.sourceforge.net/project/xenlism-wildfire/repo/arch
+
+更新系统并安装:
+
+    sudo pacman -Syyu
+    sudo pacman -S xenlism-wildfire
+
+#### 使用 Xenlism 主题 ####
+
+在 Ubuntu Unity 中,[可以使用 Unity Tweak Tool 来更换图标主题][4]。在 GNOME 中,[可以使用 Gnome Tweak Tool 来更换][5]。我想你应该知道这一部分该怎么做;如果遇到困难,请告诉我,我会补充一些截图。
+
+下面是 Xenlism 图标主题在 Ubuntu 15.04 Unity 中的截图,桌面背景也使用了 Xenlism 的壁纸。
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons-2.png)
+
+这看起来真棒,不是吗?如果你试用之后觉得喜欢,不妨感谢一下它的开发者:
+
+> [Xenlism is a stunning minimal icon theme for Linux. 
Thanks @xenatt for this beautiful theme.][6] + +我希望你喜欢他。同时也希望你分享你对这个主题的看法,或者你喜欢的主题。Xenlism 真的很棒,可能会替换掉你最喜欢的主题。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/xenlism-wildfire-theme/ + +作者:[Abhishek][a] +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://itsfoss.com/best-icon-themes-ubuntu-1404/ +[2]:http://xenlism.github.io/wildfire/ +[3]:https://plus.google.com/+NattapongPullkhow +[4]:http://itsfoss.com/install-numix-ubuntu/ +[5]:http://itsfoss.com/install-switch-themes-gnome-shell/ +[6]:https://twitter.com/share?text=Xenlism+is+a+stunning+minimal+icon+theme+for+Linux.+Thanks+%40xenatt+for+this+beautiful+theme.&via=itsfoss&related=itsfoss&url=http://itsfoss.com/xenlism-wildfire-theme/ From 53ac741caea1b4c71c9b66e1332ed29fc18268ef Mon Sep 17 00:00:00 2001 From: KnightJoker <544133483@qq.com> Date: Sat, 26 Sep 2015 11:10:27 +0800 Subject: [PATCH 604/697] Translated by KnightJoker --- ...--Master Your Math with These Linux Apps.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md b/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md index c70122d6c5..a02c063999 100644 --- a/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md +++ b/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md @@ -1,21 +1,21 @@ Translating by KnightJoker -Learn with Linux: Master Your Math with These Linux Apps +用Linux学习:使用这些Linux应用来征服你的数学 ================================================================================ ![](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-featured.png) -This article is 
part of the [Learn with Linux][1] series:
+这篇文章是[用Linux学习][1]系列的一部分:

-- [Learn with Linux: Learning to Type][2]
-- [Learn with Linux: Physics Simulation][3]
-- [Learn with Linux: Learning Music][4]
-- [Learn with Linux: Two Geography Apps][5]
-- [Learn with Linux: Master Your Math with These Linux Apps][6]
+- [用Linux学习: 学习打字][2]
+- [用Linux学习: 物理模拟][3]
+- [用Linux学习: 学习音乐][4]
+- [用Linux学习: 两个地理应用程序][5]
+- [用Linux学习: 用这些Linux应用来征服你的数学][6]

-Linux offers great educational software and many excellent tools to aid students of all grades and ages in learning and practicing a variety of topics, often interactively. The “Learn with Linux” series of articles offers an introduction to a variety of educational apps and software.
-Mathematics is the core of computing. If one would expect a great operating system, such as GNU/Linux, to excel in and discipline, it would be Math. If you seek mathematical applications, you will not be disappointed. Linux offers many excellent tools that will make Mathematics look as intimidating as it ever did, but at least they will simplify your way of using it.
+Linux 提供了大量的教育软件和许多优秀的工具,以交互的方式帮助各个年龄段、各个年级的学生学习和练习各种各样的主题。“用Linux学习”系列的文章就为大家介绍这些教育软件和应用。
+数学是计算机的核心。如果说期望一个像 GNU/Linux 这样伟大的操作系统在某个学科上格外出色,那一定是数学。如果你在寻找数学应用程序,那么你不会失望:Linux 提供了很多优秀的工具,它们虽然不会让数学看起来不再令人生畏,但至少会简化你使用它的方式。
### Gnuplot ###

Gnuplot is a command-line scriptable and versatile graphing utility for different platforms. Despite its name, it is not part of the GNU operating system. Although it is not freely licensed, it’s free-ware (meaning it’s copyrighted but free to use). 
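Gnuplot's scripting side mentioned in the paragraph above can be sketched as a small non-interactive run. This is illustrative only — the file paths are made up for the example, and the `gnuplot` invocation is guarded in case gnuplot is not installed:

```shell
# A minimal sketch of gnuplot scripting: write a plot script, then run it
# non-interactively to render a PNG instead of an interactive window.
cat > /tmp/sine.gp <<'EOF'
set terminal png size 640,480
set output '/tmp/sine.png'
set title 'Sine Function'
plot sin(x)/x
EOF
# run it only if gnuplot is actually installed on this machine
command -v gnuplot >/dev/null 2>&1 && gnuplot /tmp/sine.gp
cat /tmp/sine.gp
```

Run this way, the same `plot sin(x)/x` curve shown interactively earlier goes straight to a PNG file, which is how gnuplot is typically used from shell scripts.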
From 2b1b2d3a4c2e0a990153ee8ed2cc863a1331362a Mon Sep 17 00:00:00 2001 From: KnightJoker <544133483@qq.com> Date: Sat, 26 Sep 2015 11:27:20 +0800 Subject: [PATCH 605/697] Translated by KnightJoker --- ...-Master Your Math with These Linux Apps.md | 56 +++++++++---------- 1 file changed, 27 insertions(+), 29 deletions(-) diff --git a/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md b/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md index a02c063999..f4625c6c13 100644 --- a/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md +++ b/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md @@ -1,4 +1,4 @@ -Translating by KnightJoker +Translated by KnightJoker 用Linux学习:使用这些Linux应用来征服你的数学 ================================================================================ @@ -18,98 +18,96 @@ Linux提供了大量的教育软件和许多优秀的工具来帮助所有年龄 数学是计算机的核心。如果有人用精益求精和纪律来预期一个伟大的操作系统,比如GNU/ Linux,那么这将是数学。如果你在寻求一些数学应用程序,那么你将不会感到失望。Linux提供了很多优秀的工具使得数学看起来和你曾经做过的一样令人畏惧,但实际上他们会简化你使用它的方式。 ### Gnuplot ### -Gnuplot is a command-line scriptable and versatile graphing utility for different platforms. Despite its name, it is not part of the GNU operating system. Although it is not freely licensed, it’s free-ware (meaning it’s copyrighted but free to use). - -To install `gnuplot` on an Ubuntu (or derivative) system, type +Gnuplot 是一个适用于不同平台的命令行脚本化和多功能的图形工具。尽管它的名字,并不是GNU操作系统的一部分。也没有免费授权,但它是免费软件(这意味着它受版权保护,但免费使用)。 +要在Ubuntu系统(或者衍生系统)上安装 `gnuplot`,输入: sudo apt-get install gnuplot gnuplot-x11 -into a terminal window. To start the program, type +进入一个终端窗口。启动该程序,输入: gnuplot -You will be presented with a simple command line interface +你会看到一个简单的命令行界面: ![learnmath-gnuplot](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot.png) -into which you can start typing functions directly. The plot command will draw a graph. 
+在其中您可以直接开始输入函数。绘图命令将绘制一个曲线图。 -Typing, for instance, +输入内容,例如, plot sin(x)/x -into the `gnuplot` prompt, will open another window, wherein the graph is presented. +随着`gnuplot的`提示,将会打开一个新的窗口,图像便会在里面呈现。 ![learnmath-gnuplot-plot1](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot1.png) -You can also set different attributes of the graphs in-line. For example, specifying “title” will give them just that. +你也可以在线这个图设置不同的属性,比如像这样指定“title” plot sin(x) title 'Sine Function', tan(x) title 'Tangent' ![learnmath-gnuplot-plot2](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot2.png) -You can give things a bit more depth and draw 3D graphs with the `splot` command. +使用`splot`命令,你可以给的东西更深入一点并且绘制3D图形 splot sin(x*y/20) ![learnmath-gnuplot-plot3](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot3.png) -The plot window has a few basic configuration options, +这个窗口有几个基本的配置选项, ![learnmath-gnuplot-options](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-options.png) -but the true power of `gnuplot` lies within its command line and scripting capabilities. The extensive full documentation of `gnuplot` can be found [here][7] with a great tutorial for the previous version [on the Duke University’s website][8]. +但是`gnuplot`的真正力量在于在它的命令行和脚本功能,`gnuplot`广泛完整的文档可在这里找到,并在[Duke大学网站][8]上面看见这个了不起的教程[7]的原始版本。 ### Maxima ### -[Maxima][9] is a computer algebra system developed from the original sources of Macsyma. According to its SourceForge page, +[Maxima][9]是从Macsyma原始资料开发的一个计算机代数系统,根据它的 SourceForge 页面, -> “Maxima is a system for the manipulation of symbolic and numerical expressions, including differentiation, integration, Taylor series, Laplace transforms, ordinary differential equations, systems of linear equations, polynomials, sets, lists, vectors, matrices and tensors. 
Maxima yields high precision numerical results by using exact fractions, arbitrary-precision integers and variable-precision floating-point numbers. Maxima can plot functions and data in two and three dimensions.”
+> “Maxima 是一个操作符号和数值表达式的系统,支持微分、积分、泰勒级数、拉普拉斯变换、常微分方程、线性方程组,以及多项式、集合、列表、向量、矩阵和张量的运算。Maxima 通过使用精确的分数、任意精度的整数和可变精度的浮点数来产生高精度的计算结果。Maxima 可以绘制二维和三维的函数和数据图像。”

-You will have binary packages for Maxima in most Ubuntu derivatives as well as the Maxima graphical interface. To install them all, type
+在大多数 Ubuntu 衍生系统中都有 Maxima 的二进制包,以及它的图形界面。要全部安装,输入:

    sudo apt-get install maxima xmaxima wxmaxima

-into a terminal window. Maxima is a command line utility with not much of a UI, but if you start `wxmaxima`, you’ll get into a simple, yet powerful GUI.
+Maxima 本身是一个没有什么 UI 的命令行工具,但如果你启动 `wxmaxima`,你会进入一个简单但功能强大的图形用户界面。

![learnmath-maxima](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima.png)

-You can start using this by simply starting to type. (Hint: Enter will add more lines; if you want to evaluate an expression, use “Shift + Enter.”)
+直接开始输入就可以使用了。(提示:回车只是换行;如果你想对表达式求值,请使用 “Shift + Enter”。)

-Maxima can be used for very simple problems, as it also acts as a calculator,
+Maxima 可以处理非常简单的问题,因为它也是一个计算器,

![learnmath-maxima-1and1](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-1and1.png)

-and much more complex ones as well.
+也可以处理复杂得多的问题。

![learnmath-maxima-functions](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-functions.png)

-It uses `gnuplot` to draw simple
+它使用 `gnuplot` 来绘制简单的图形,

![learnmath-maxima-plot](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-plot.png)

-and more elaborate graphs.
+或者绘制一些更复杂的图形。

![learnmath-maxima-plot2](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-plot2.png)

-(It needs the `gnuplot-x11` package to display them.) 
+(显示这些图形需要 `gnuplot-x11` 包。)

-Besides beautifying the expressions, Maxima makes it possible to export them in latex format, or do some operations on the highlighted functions with a right-click context menu,
+除了把表达式排版得漂亮之外,Maxima 还可以把它们导出为 latex 格式,或者通过右键快捷菜单对高亮的函数进行一些操作,

![learnmath-maxima-menu](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-menu.png)

-while its main menus offer an overwhelming amount of functionality. Of course, Maxima is capable of much more than this. It has an extensive documentation [available online][10].
+而它的主菜单提供的功能更是多得惊人。当然,Maxima 能做的远不止这些,它还有一份详尽的[在线文档][10]。

-### Conclusion ###
-
-Mathematics is not an easy subject, and the excellent math software on Linux does not make it look easier, yet these applications make using Mathematics much more straightforward and productive. The above two applications are just an introduction to what Linux has to offer. If you are seriously engaged in math and need even more functionality with great documentation, you should check out the [Mathbuntu project][11]. 
+### 总结 ### +数学不是一个简单的学科,这些在Linux上的优秀软件也没有使得数学更加简单,但是这些应用使得使用数学变得更加的简单和工程化。以上两种应用都只是介绍一下Linux的所提供的。如果你是认真从事数学和需要更多的功能与丰富的文档,那你更应该看看这些Mathbuntu项目。 -------------------------------------------------------------------------------- via: https://www.maketecheasier.com/learn-linux-maths/ 作者:[Attila Orosz][a] -译者:[译者ID](https://github.com/译者ID) +译者:[KnightJoker](https://github.com/KnightJoker/译者ID) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d03ac0d7020bed0bae5b60bba1ede89bd86875d1 Mon Sep 17 00:00:00 2001 From: KnightJoker <544133483@qq.com> Date: Sat, 26 Sep 2015 11:30:23 +0800 Subject: [PATCH 606/697] Translated by KnightJoker --- ...-Master Your Math with These Linux Apps.md | 126 ++++++++++++++++++ 1 file changed, 126 insertions(+) create mode 100644 translated/tech/Learn with Linux--Master Your Math with These Linux Apps.md diff --git a/translated/tech/Learn with Linux--Master Your Math with These Linux Apps.md b/translated/tech/Learn with Linux--Master Your Math with These Linux Apps.md new file mode 100644 index 0000000000..f4625c6c13 --- /dev/null +++ b/translated/tech/Learn with Linux--Master Your Math with These Linux Apps.md @@ -0,0 +1,126 @@ +Translated by KnightJoker + +用Linux学习:使用这些Linux应用来征服你的数学 +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-featured.png) + +这篇文章是[用Linux学习][1]系列的一部分: + +- [用Linux学习: 学习类型][2] +- [用Linux学习: 物理模拟][3] +- [用Linux学习: 学习音乐][4] +- [用Linux学习: 两个地理应用程序][5] +- [用Linux学习: 用这些Linux应用来征服你的数学][6] + + +Linux提供了大量的教育软件和许多优秀的工具来帮助所有年龄段的学生学习和练习各种各样的话题,常常以交互的方式。与Linux一起学习这一系列的文章则为这些各种各样的教育软件和应用提供了一个介绍。 + +数学是计算机的核心。如果有人用精益求精和纪律来预期一个伟大的操作系统,比如GNU/ Linux,那么这将是数学。如果你在寻求一些数学应用程序,那么你将不会感到失望。Linux提供了很多优秀的工具使得数学看起来和你曾经做过的一样令人畏惧,但实际上他们会简化你使用它的方式。 +### Gnuplot ### + +Gnuplot 是一个适用于不同平台的命令行脚本化和多功能的图形工具。尽管它的名字,并不是GNU操作系统的一部分。也没有免费授权,但它是免费软件(这意味着它受版权保护,但免费使用)。 + 
+要在Ubuntu系统(或者衍生系统)上安装 `gnuplot`,输入: + sudo apt-get install gnuplot gnuplot-x11 + +进入一个终端窗口。启动该程序,输入: + + gnuplot + +你会看到一个简单的命令行界面: + +![learnmath-gnuplot](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot.png) + +在其中您可以直接开始输入函数。绘图命令将绘制一个曲线图。 + +输入内容,例如, + + plot sin(x)/x + +随着`gnuplot的`提示,将会打开一个新的窗口,图像便会在里面呈现。 + +![learnmath-gnuplot-plot1](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot1.png) + +你也可以在线这个图设置不同的属性,比如像这样指定“title” + + plot sin(x) title 'Sine Function', tan(x) title 'Tangent' + +![learnmath-gnuplot-plot2](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot2.png) + +使用`splot`命令,你可以给的东西更深入一点并且绘制3D图形 + + splot sin(x*y/20) + +![learnmath-gnuplot-plot3](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot3.png) + +这个窗口有几个基本的配置选项, + +![learnmath-gnuplot-options](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-options.png) + +但是`gnuplot`的真正力量在于在它的命令行和脚本功能,`gnuplot`广泛完整的文档可在这里找到,并在[Duke大学网站][8]上面看见这个了不起的教程[7]的原始版本。 + +### Maxima ### + +[Maxima][9]是从Macsyma原始资料开发的一个计算机代数系统,根据它的 SourceForge 页面, + +> “Maxima是符号和数值的表达,包括微分,积分,泰勒级数,拉普拉斯变换,常微分方程,线性方程组,多项式,集合,列表,向量,矩阵和张量系统的操纵系统。Maxima通过精确的分数,任意精度的整数和可变精度浮点数产生高精度的计算结果。Maxima可以二维和三维中绘制函数和数据。“ + +你将会获得二进制包用于大多数Ubuntu衍生系统的Maxima以及它的图形界面中,插入所有包,输入: + + sudo apt-get install maxima xmaxima wxmaxima + +在终端窗口中,Maxima是一个没有太多UI的命令行工具,但如果你开始wxmaxima,你会进入一个简单但功能强大的图形用户界面。 + +![learnmath-maxima](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima.png) + +你可以开始输入这个来简单的一个开始。(提示:如果你想计算一个表达式,使用“Shift + Enter”回车后会增加更多的方法) + +Maxima可以用于一些简单的问题,因此也可以作为一个计算器, + +![learnmath-maxima-1and1](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-1and1.png) + +以及一些更复杂的问题, + +![learnmath-maxima-functions](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-functions.png) + +它使用`gnuplot`使得绘制简单, + 
+![learnmath-maxima-plot](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-plot.png) + +或者绘制一些复杂的图形. + +![learnmath-maxima-plot2](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-plot2.png) + +(它需要gnuplot-X11的包,来显示它们。) + +除了美化一些图形,Maxima也尽可能用latex格式导出它们,或者通过右键是捷菜单进行一些突出的操作. + +![learnmath-maxima-menu](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-menu.png) + +然而其主菜单还是提供了大量压倒性的功能,当然Maxima的功能远不止如此,这里也有一个广泛使用的在线文档。 + +### 总结 ### + +数学不是一个简单的学科,这些在Linux上的优秀软件也没有使得数学更加简单,但是这些应用使得使用数学变得更加的简单和工程化。以上两种应用都只是介绍一下Linux的所提供的。如果你是认真从事数学和需要更多的功能与丰富的文档,那你更应该看看这些Mathbuntu项目。 +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/learn-linux-maths/ + +作者:[Attila Orosz][a] +译者:[KnightJoker](https://github.com/KnightJoker/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:https://www.maketecheasier.com/series/learn-with-linux/ +[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ +[3]:https://www.maketecheasier.com/linux-physics-simulation/ +[4]:https://www.maketecheasier.com/linux-learning-music/ +[5]:https://www.maketecheasier.com/linux-geography-apps/ +[6]:https://www.maketecheasier.com/learn-linux-maths/ +[7]:http://www.gnuplot.info/documentation.html +[8]:http://people.duke.edu/~hpgavin/gnuplot.html +[9]:http://maxima.sourceforge.net/ +[10]:http://maxima.sourceforge.net/documentation.html +[11]:http://www.mathbuntu.org/ \ No newline at end of file From bb6acd41cf9f315903bc0d5ce82c450ac490c0ce Mon Sep 17 00:00:00 2001 From: luoyuanhao Date: Sat, 26 Sep 2015 13:18:41 +0800 Subject: [PATCH 607/697] [Translated]sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md --- ...o Upgrade From Oracle 11g To Oracle 12c.md | 166 ------------------ ...o Upgrade From Oracle 11g To Oracle 
12c.md | 165 +++++++++++++++++ 2 files changed, 165 insertions(+), 166 deletions(-) delete mode 100644 sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md create mode 100644 translated/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md diff --git a/sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md b/sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md deleted file mode 100644 index a43b5e2ac5..0000000000 --- a/sources/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md +++ /dev/null @@ -1,166 +0,0 @@ -ictlyh Translating -How To Upgrade From Oracle 11g To Oracle 12c -================================================================================ -Hello all. - -Today we will go through how to upgrade from oracle 11g to Oracle 12c. Let’s start then. - -For this, I will use CentOS 7 64 bit Linux distribution. - -I am assuming that you have already installed Oracle 11g on your system. Here I will show what I did when I installed Oracle 11g. - -I select “Create and configure a database” for Oracle 11g just like below image. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage1.png) - -Then I select “Desktop Class” for my Oracle 11g installation. For production you must select “Server Class”. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage2.png) - -Then you must enter all the paths for the Oracle 11g and your password as well. Below is mine for my Oracle 11g installation. Make sure you meet the Oracle password methodology for placing your password. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage3.png) - -Next, I set Inventory Directory path as below. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage4.png) - -Till now, I showed you what I had done to install Oracle 11g as we are going to upgrade to 12c. - -Let’s upgrade to Oracle 12c from Oracle 11g. - -You must download the two (2) zip files from this [link][1]. 
Download and unzip both files to the same directory. Files names are **linuxamd64_12c_database_1of2.zip** & **linuxamd64_12c_database_2of2.zip** respectively. After extracting or unzipping, It will create a folder called database. - -Note: Before upgrading to 12c, make sure you have all the necessary packages installed for your CentOS and all the path variable are OK and all other prerequisites are done before beginning. - -These are the following packages must be installed with correct version - -- binutils -- compat-libstdc++ -- gcc -- glibc -- libaio -- libgcc -- libstdc++ -- make -- sysstat -- unixodbc - -Search for your correct rpm version on the internet. - -You can also combine a query for multiple packages, and review the output for the correct versions. For example: - -Type the following command to check in the terminal - - rpm -q binutils compat-libstdc++ gcc glibc libaio libgcc libstdc++ make sysstat unixodbc - -The following packages (or later or earlier versions) must be installed on your system - -- binutils-2.23.52.0.1-12.el7.x86_64 -- compat-libcap1-1.10-3.el7.x86_64 -- gcc-4.8.2-3.el7.x86_64 -- gcc-c++-4.8.2-3.el7.x86_64 -- glibc-2.17-36.el7.i686 -- glibc-2.17-36.el7.x86_64 -- glibc-devel-2.17-36.el7.i686 -- glibc-devel-2.17-36.el7.x86_64 -- ksh -- libaio-0.3.109-9.el7.i686 -- libaio-0.3.109-9.el7.x86_64 -- libaio-devel-0.3.109-9.el7.i686 -- libaio-devel-0.3.109-9.el7.x86_64 -- libgcc-4.8.2-3.el7.i686 -- libgcc-4.8.2-3.el7.x86_64 -- libstdc++-4.8.2-3.el7.i686 -- libstdc++-4.8.2-3.el7.x86_64 -- libstdc++-devel-4.8.2-3.el7.i686 -- libstdc++-devel-4.8.2-3.el7.x86_64 -- libXi-1.7.2-1.el7.i686 -- libXi-1.7.2-1.el7.x86_64 -- libXtst-1.2.2-1.el7.i686 -- libXtst-1.2.2-1.el7.x86_64 -- make-3.82-19.el7.x86_64 -- sysstat-10.1.5-1.el7.x86_64 - -You will also need unixODBC-2.3.1 or later driver. - -I hope you already have a user on your CentOS 7 named oracle when you installed Oracle 11g. - -Let’s login onto CentOS by using user oracle. 
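As a side note, the oracle user's shell environment is usually prepared with Oracle-specific variables before running the installer. The following is only a sketch — every path and the SID below are assumptions, not values from this article; adjust them to your own layout:

```shell
# Example ~/.bash_profile fragment for the oracle user -- all values are
# illustrative assumptions; adapt ORACLE_BASE/ORACLE_HOME/ORACLE_SID to your system
export ORACLE_BASE=/u01/app/oracle                        # assumed base directory
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/dbhome_1   # assumed 12c home
export ORACLE_SID=orcl                                    # assumed instance name
export PATH=$ORACLE_HOME/bin:$PATH
```

With these in place, tools such as sqlplus and the listener utilities resolve from `$ORACLE_HOME/bin` without typing full paths.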
- -After login to your CentOS by user oracle, open a terminal on your CentOS. - -Now change directory and navigate to your extracted directory where you extracted both the zip files by using terminal. Now type the following in the terminal to begin installation of 12c. - - ./runInstaller - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212image5.png) - -If everything goes right then you will see something like below which will start the installation process of 12c. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage6.png) - -Then you can skip the updates or you can download the latest update. It is recommended that you must update it for production server. Though I am skipping it. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage7.png) - -Now, select upgrade an existing database. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage8.png) - -For language, English is already there. Click next to continue or you can add according to your need. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage9.png) - -Now, select Enterprise Edition. You can select upon your requirements. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage10.png) - -Then select your path for Software location. This is pretty much self-explanatory. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage11.png) - -For step 7, keep moving with the default options just like below. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage12.png) - -In step 9, you will get a summary report like below image. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage13.png) - -If everything is fine, you can start your installation by clicking install on step 9 and which will take you to step 10. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage14.png) - -In the process you might encounter some errors and you need to Goggle it for fix those errors. 
There are a number of errors you may encounter and hence I am not covering those here. - -Keep your patience and it will show Succeeded one by one for step 10. If not, search it on Google and do necessary steps to fix it. Again, as there are a number of errors you may encounter and I can’t provide all the details over here. - -Now, configure the listener just simply following on screen instruction. - -After finishing the process for listener, it will start the Database Upgrade Assistant. Select Upgrade Oracle Database. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/DUAimage15.png) - -In step 2, you will find that it will show the 11g location path along with 12c location path. Also you will find that it is indicating Target Oracle Home Release 12 from Source Oracle Home Release 11. Click next step 2 and move to step 3. - -![](http://www.unixmen.com/wp-content/uploads/2015/09/DUAimage16.png) - -Follow the on screen instructions and finished it. - -In the last step, you will get a success window where you will find that the update of oracle database was successful. - -**A word of caution**: Before upgrading to 12c for your production server, please make sure you have done it some other workstation so that you can fix all the errors, which you will encounter on the way of upgrading. Never try upgrading a production server without knowing all the details. 
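One practical pre-flight step implied above is to script the package prerequisite check rather than eyeballing `rpm -q` output. A hedged sketch follows — the installed-package list is simulated with a short file so the logic is self-contained; on a real host you would generate it with `rpm -qa --qf '%{NAME}\n'`, and the required list is abridged from the full list earlier in the article:

```shell
# Simulated "installed packages" list; on a real host:
#   rpm -qa --qf '%{NAME}\n' > /tmp/installed.txt
printf '%s\n' binutils gcc glibc make > /tmp/installed.txt

# Report every required package that is not in the installed list
for pkg in binutils gcc glibc libaio make sysstat unixodbc; do
    grep -qx "$pkg" /tmp/installed.txt || echo "missing: $pkg"
done > /tmp/missing.txt
cat /tmp/missing.txt
```

With the simulated list above, the loop flags `libaio`, `sysstat` and `unixodbc` as missing, which is exactly the kind of gap that otherwise surfaces as an installer error halfway through the upgrade.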
- --------------------------------------------------------------------------------- - -via: http://www.unixmen.com/upgrade-from-oracle-11g-to-oracle-12c/ - -作者:[Mohammad Forhad Iftekher][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.unixmen.com/author/forhad/ -[1]:http://www.oracle.com/technetwork/database/enterprise-edition/downloads/database12c-linux-download-1959253.html \ No newline at end of file diff --git a/translated/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md b/translated/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md new file mode 100644 index 0000000000..921b5d958f --- /dev/null +++ b/translated/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md @@ -0,0 +1,165 @@ +如何将 Oracle 11g 升级到 Orcale 12c +================================================================================ +大家好。 + +今天我们来学习一下如何将 Oracle 11g 升级到 Oracle 12c。开始吧。 + +在此,我使用的是 CentOS 7 64 位 Linux 发行版。 + +我假设你已经在你的系统上安装了 Oracle 11g。这里我会展示一下安装 Oracle 11g 时我的操作步骤。 + +我在 Oracle 11g 上选择 “Create and configure a database”,如下图所示。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage1.png) + +然后我选择安装 Oracle 11g “Decktop Class”。如果是生产环境,你必须选择 “Server Class”。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage2.png) + +然后你输入安装 Oracle 11g 的所有路径以及密码。下面是我自己的 Oracle 11g 安装配置。确保你正确输入了 Oracle 的密码。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage3.png) + +下一步,我按照如下设置 Inventory Directory。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage4.png) + +到这里,我已经向你展示了我安装 Oracle 11g 所做的工作,因为我们开始想升级到 12c。 + +让我们将 Oracle 11g 升级到 Oracle 12c 吧。 + +你需要从该[链接][1]上下载两个 zip 文件。下载并解压两个文件到相同目录。文件名为 **linuxamd64_12c_database_1of2.zip** & **linuxamd64_12c_database_2of2.zip**。提取或解压完后,它会创建一个名为 database 的文件夹。 + +注意:升级到 12c 之前,请确保在你的 CentOS 上已经安装了所有必须的软件包并且 path 
环境变量也已经正确配置,还有其它前提条件也已经满足。 + +下面是必须使用正确版本安装的一些软件包 + +- binutils +- compat-libstdc++ +- gcc +- glibc +- libaio +- libgcc +- libstdc++ +- make +- sysstat +- unixodbc + +在因特网上搜索正确的 rpm 版本。 + +你也可以用一个查询处理多个软件包,然后在输出中查找正确版本。例如: + +在终端中输入下面的命令 + + rpm -q binutils compat-libstdc++ gcc glibc libaio libgcc libstdc++ make sysstat unixodbc + +你的系统中必须安装了以下软件包(版本可能较新会旧) + +- binutils-2.23.52.0.1-12.el7.x86_64 +- compat-libcap1-1.10-3.el7.x86_64 +- gcc-4.8.2-3.el7.x86_64 +- gcc-c++-4.8.2-3.el7.x86_64 +- glibc-2.17-36.el7.i686 +- glibc-2.17-36.el7.x86_64 +- glibc-devel-2.17-36.el7.i686 +- glibc-devel-2.17-36.el7.x86_64 +- ksh +- libaio-0.3.109-9.el7.i686 +- libaio-0.3.109-9.el7.x86_64 +- libaio-devel-0.3.109-9.el7.i686 +- libaio-devel-0.3.109-9.el7.x86_64 +- libgcc-4.8.2-3.el7.i686 +- libgcc-4.8.2-3.el7.x86_64 +- libstdc++-4.8.2-3.el7.i686 +- libstdc++-4.8.2-3.el7.x86_64 +- libstdc++-devel-4.8.2-3.el7.i686 +- libstdc++-devel-4.8.2-3.el7.x86_64 +- libXi-1.7.2-1.el7.i686 +- libXi-1.7.2-1.el7.x86_64 +- libXtst-1.2.2-1.el7.i686 +- libXtst-1.2.2-1.el7.x86_64 +- make-3.82-19.el7.x86_64 +- sysstat-10.1.5-1.el7.x86_64 + +你也需要 unixODBC-2.3.1 或更新版本的驱动。 + +我希望你安装 Oracle 11g 的时候已经在你的 CentOS 7 上创建了名为 oracle 的用户。 + +让我们以用户 oracle 登录 CentOS。 + +以用户 oracle 登录到 CentOS 之后,在你的 CentOS上打开一个终端。 + +使用终端更改工作目录并导航到你解压两个 zip 文件的目录。在终端中输入以下命令开始安装 12c。 + + ./runInstaller + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212image5.png) + +如果一切顺利,你会看到类似下面的截图,已经开始安装 12c。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage6.png) + +然后你可以选择跳过更新或者下载最近更新。如果是生产服务器,建议你必须更新。我这里选择跳过。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage7.png) + +现在,选择升级现有数据库。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage8.png) + +对于语言,这里已经有 English。点击下一步继续,或者你可以根据你的需要添加语言。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage9.png) + +现在,选择企业版。你可以根据你的需求选择。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage10.png) + 
+然后选择软件位置路径,这些都是不言自明的。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage11.png) + +第七步,像下面这样使用默认的选择继续下一步。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage12.png) + +在第九步,你会看到一个类似下面这样的总结报告。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage13.png) + +如果一切正常,你可以点击步骤九中的 install 开始安装,进入步骤十。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage14.png) + +其中你可能会遇到一些错误,你需要通过谷歌找到这些错误的解决方法。你可能遇到的问题会有很多,因此我没有在这里详细介绍。 + +要有耐心,一步一步走下来最后它会告诉你成功了。否则,在谷歌上搜索做必要的操作解决问题。再一次说明,由于你可能会遇到的错误有很多,我无法在这里提供所有详细介绍。 + +现在,只需要按照下面屏幕指令配置监听器 + +配置完监听器之后,它会启动数据库升级助手(Database Upgrade Assistant)。选择 Upgrade Oracle Database。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/DUAimage15.png) + +在第二步,你会发现它显示了 11g 的位置路径以及 12c 的位置路径。同时你也会发现它指示说从原来的 Oracle Home Release 11 安装 Oracle Home Release 12.点击下一步进入步骤三。 + +![](http://www.unixmen.com/wp-content/uploads/2015/09/DUAimage16.png) + +按照屏幕上的说明完成安装。 + +在最后一步,你会看到一个成功窗口,其中你会看到成功升级了 oracle 数据库。 + +**一个忠告**:对于你的生产服务器,在升级到 12c 之前,请确保你已经在其它平台上测试过,以便你能修复升级过程中遇到的所有错误。永远不要尝试一无所知的时候就升级生产服务器。 + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/upgrade-from-oracle-11g-to-oracle-12c/ + +作者:[Mohammad Forhad Iftekher][a] +译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/forhad/ +[1]:http://www.oracle.com/technetwork/database/enterprise-edition/downloads/database12c-linux-download-1959253.html \ No newline at end of file From 999a554269f912644b3d66f9123ba1f87be4f215 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 26 Sep 2015 15:29:31 +0800 Subject: [PATCH 608/697] [Translated]sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md --- ...e Types and System Time in Linu--Part 3.md | 280 ------------------ ...e Types and System 
Time in Linu--Part 3.md | 279 +++++++++++++++++ 2 files changed, 279 insertions(+), 280 deletions(-) delete mode 100644 sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md create mode 100644 translated/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md diff --git a/sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md b/sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md deleted file mode 100644 index 6f4d1a63f5..0000000000 --- a/sources/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md +++ /dev/null @@ -1,280 +0,0 @@ -ictlyh Translating -5 Useful Commands to Manage File Types and System Time in Linux – Part 3 -================================================================================ -Adapting to using the command line or terminal can be very hard for beginners who want to learn Linux. Because the terminal gives more control over a Linux system than GUIs programs, one has to get a used to running commands on the terminal. Therefore to memorize different commands in Linux, you should use the terminal on a daily basis to understand how commands are used with different options and arguments. - -![Manage File Types and Set Time in Linux](http://www.tecmint.com/wp-content/uploads/2015/09/Find-File-Types-in-Linux.jpg) - -Manage File Types and Set Time in Linux – Part 3 - -Please go through our previous parts of this [Linux Tricks][1] series. - -- [5 Interesting Command Line Tips and Tricks in Linux – Part 1][2] -- [ Useful Commandline Tricks for Newbies – Part 2][3] - -In this article, we are going to look at some tips and tricks of using 10 commands to work with files and time on the terminal. - -### File Types in Linux ### - -In Linux, everything is considered as a file, your devices, directories and regular files are all considered as files. 
- -There are different types of files in a Linux system: - -- Regular files which may include commands, documents, music files, movies, images, archives and so on. -- Device files: which are used by the system to access your hardware components. - -There are two types of device files block files that represent storage devices such as harddisks, they read data in blocks and character files read data in a character by character manner. - -- Hardlinks and softlinks: they are used to access files from any where on a Linux filesystem. -- Named pipes and sockets: allow different processes to communicate with each other. - -#### 1. Determining the type of a file using ‘file’ command #### - -You can determine the type of a file by using the file command as follows. The screenshot below shows different examples of using the file command to determine the types of different files. - - tecmint@tecmint ~/Linux-Tricks $ dir - BACKUP master.zip - crossroads-stable.tar.gz num.txt - EDWARD-MAYA-2011-2012-NEW-REMIX.mp3 reggea.xspf - Linux-Security-Optimization-Book.gif tmp-link - - tecmint@tecmint ~/Linux-Tricks $ file BACKUP/ - BACKUP/: directory - - tecmint@tecmint ~/Linux-Tricks $ file master.zip - master.zip: Zip archive data, at least v1.0 to extract - - tecmint@tecmint ~/Linux-Tricks $ file crossroads-stable.tar.gz - crossroads-stable.tar.gz: gzip compressed data, from Unix, last modified: Tue Apr 5 15:15:20 2011 - - tecmint@tecmint ~/Linux-Tricks $ file Linux-Security-Optimization-Book.gif - Linux-Security-Optimization-Book.gif: GIF image data, version 89a, 200 x 259 - - tecmint@tecmint ~/Linux-Tricks $ file EDWARD-MAYA-2011-2012-NEW-REMIX.mp3 - EDWARD-MAYA-2011-2012-NEW-REMIX.mp3: Audio file with ID3 version 2.3.0, contains: MPEG ADTS, layer III, v1, 192 kbps, 44.1 kHz, JntStereo - - tecmint@tecmint ~/Linux-Tricks $ file /dev/sda1 - /dev/sda1: block special - - tecmint@tecmint ~/Linux-Tricks $ file /dev/tty1 - /dev/tty1: character special - -#### 2. 
Determining the file type using ‘ls’ and ‘dir’ commands #### - -Another way of determining the type of a file is by performing a long listing using the ls and [dir][4] commands. - -Using ls -l to determine the type of a file. - -When you view the file permissions, the first character shows the file type and the other charcters show the file permissions. - - tecmint@tecmint ~/Linux-Tricks $ ls -l - total 6908 - drwxr-xr-x 2 tecmint tecmint 4096 Sep 9 11:46 BACKUP - -rw-r--r-- 1 tecmint tecmint 1075620 Sep 9 11:47 crossroads-stable.tar.gz - -rwxr----- 1 tecmint tecmint 5916085 Sep 9 11:49 EDWARD-MAYA-2011-2012-NEW-REMIX.mp3 - -rw-r--r-- 1 tecmint tecmint 42122 Sep 9 11:49 Linux-Security-Optimization-Book.gif - -rw-r--r-- 1 tecmint tecmint 17627 Sep 9 11:46 master.zip - -rw-r--r-- 1 tecmint tecmint 5 Sep 9 11:48 num.txt - -rw-r--r-- 1 tecmint tecmint 0 Sep 9 11:46 reggea.xspf - -rw-r--r-- 1 tecmint tecmint 5 Sep 9 11:47 tmp-link - -Using ls -l to determine block and character files. - - tecmint@tecmint ~/Linux-Tricks $ ls -l /dev/sda1 - brw-rw---- 1 root disk 8, 1 Sep 9 10:53 /dev/sda1 - - tecmint@tecmint ~/Linux-Tricks $ ls -l /dev/tty1 - crw-rw---- 1 root tty 4, 1 Sep 9 10:54 /dev/tty1 - -Using dir -l to determine the type of a file. - - tecmint@tecmint ~/Linux-Tricks $ dir -l - total 6908 - drwxr-xr-x 2 tecmint tecmint 4096 Sep 9 11:46 BACKUP - -rw-r--r-- 1 tecmint tecmint 1075620 Sep 9 11:47 crossroads-stable.tar.gz - -rwxr----- 1 tecmint tecmint 5916085 Sep 9 11:49 EDWARD-MAYA-2011-2012-NEW-REMIX.mp3 - -rw-r--r-- 1 tecmint tecmint 42122 Sep 9 11:49 Linux-Security-Optimization-Book.gif - -rw-r--r-- 1 tecmint tecmint 17627 Sep 9 11:46 master.zip - -rw-r--r-- 1 tecmint tecmint 5 Sep 9 11:48 num.txt - -rw-r--r-- 1 tecmint tecmint 0 Sep 9 11:46 reggea.xspf - -rw-r--r-- 1 tecmint tecmint 5 Sep 9 11:47 tmp-link - -#### 3. 
Counting number of files of a specific type #### - -Next we shall look at tips on counting number of files of a specific type in a given directory using the ls, [grep][5] and [wc][6] commands. Communication between the commands is achieved through named piping. - -- grep – command to search according to a given pattern or regular expression. -- wc – command to count lines, words and characters. - -Counting number of regular files - -In Linux, regular files are represented by the `–` symbol. - - tecmint@tecmint ~/Linux-Tricks $ ls -l | grep ^- | wc -l - 7 - -**Counting number of directories** - -In Linux, directories are represented by the `d` symbol. - - tecmint@tecmint ~/Linux-Tricks $ ls -l | grep ^d | wc -l - 1 - -**Counting number of symbolic and hard links** - -In Linux, symblic and hard links are represented by the l symbol. - - tecmint@tecmint ~/Linux-Tricks $ ls -l | grep ^l | wc -l - 0 - -**Counting number of block and character files** - -In Linux, block and character files are represented by the `b` and `c` symbols respectively. - - tecmint@tecmint ~/Linux-Tricks $ ls -l /dev | grep ^b | wc -l - 37 - tecmint@tecmint ~/Linux-Tricks $ ls -l /dev | grep ^c | wc -l - 159 - -#### 4. Finding files on a Linux system #### - -Next we shall look at some commands one can use to find files on a Linux system, these include the locate, find, whatis and which commands. - -**Using the locate command to find files** - -In the output below, I am trying to locate the [Samba server configuration][7] for my system. - - tecmint@tecmint ~/Linux-Tricks $ locate samba.conf - /usr/lib/tmpfiles.d/samba.conf - /var/lib/dpkg/info/samba.conffiles - -**Using the find command to find files** - -To learn how to use the find command in Linux, you can read our following article that shows more than 30+ practical examples and usage of find command in Linux. 
- -- [35 Examples of ‘find’ Command in Linux][8] - -**Using the whatis command to locate commands** - -The whatis command is mostly used to locate commands and it is special because it gives information about a command, it also finds configurations files and manual entries for a command. - - tecmint@tecmint ~/Linux-Tricks $ whatis bash - bash (1) - GNU Bourne-Again SHell - - tecmint@tecmint ~/Linux-Tricks $ whatis find - find (1) - search for files in a directory hierarchy - - tecmint@tecmint ~/Linux-Tricks $ whatis ls - ls (1) - list directory contents - -**Using which command to locate commands** - -The which command is used to locate commands on the filesystem. - - tecmint@tecmint ~/Linux-Tricks $ which mkdir - /bin/mkdir - - tecmint@tecmint ~/Linux-Tricks $ which bash - /bin/bash - - tecmint@tecmint ~/Linux-Tricks $ which find - /usr/bin/find - - tecmint@tecmint ~/Linux-Tricks $ $ which ls - /bin/ls - -#### 5. Working with time on your Linux system #### - -When working in a networked environment, it is a good practice to keep the correct time on your Linux system. There are certain services on Linux systems that require correct time to work efficiently on a network. - -We shall look at commands you can use to manage time on your machine. In Linux, time is managed in two ways: system time and hardware time. - -The system time is managed by a system clock and the hardware time is managed by a hardware clock. - -To view your system time, date and timezone, use the date command as follows. - - tecmint@tecmint ~/Linux-Tricks $ date - Wed Sep 9 12:25:40 IST 2015 - -Set your system time using date -s or date –set=”STRING” as follows. - - tecmint@tecmint ~/Linux-Tricks $ sudo date -s "12:27:00" - Wed Sep 9 12:27:00 IST 2015 - - tecmint@tecmint ~/Linux-Tricks $ sudo date --set="12:27:00" - Wed Sep 9 12:27:00 IST 2015 - -You can also set time and date as follows. 
- - tecmint@tecmint ~/Linux-Tricks $ sudo date 090912302015 - Wed Sep 9 12:30:00 IST 2015 - -Viewing current date from a calendar using cal command. - - tecmint@tecmint ~/Linux-Tricks $ cal - September 2015 - Su Mo Tu We Th Fr Sa - 1 2 3 4 5 - 6 7 8 9 10 11 12 - 13 14 15 16 17 18 19 - 20 21 22 23 24 25 26 - 27 28 29 30 - -View hardware clock time using the hwclock command. - - tecmint@tecmint ~/Linux-Tricks $ sudo hwclock - Wednesday 09 September 2015 06:02:58 PM IST -0.200081 seconds - -To set the hardware clock time, use hwclock –set –date=”STRING” as follows. - - tecmint@tecmint ~/Linux-Tricks $ sudo hwclock --set --date="09/09/2015 12:33:00" - - tecmint@tecmint ~/Linux-Tricks $ sudo hwclock - Wednesday 09 September 2015 12:33:11 PM IST -0.891163 seconds - -The system time is set by the hardware clock during booting and when the system is shutting down, the hardware time is reset to the system time. - -Therefore when you view system time and hardware time, they are the same unless when you change the system time. Your hardware time may be incorrect when the CMOS battery is weak. - -You can also set your system time using time from the hardware clock as follows. - - $ sudo hwclock --hctosys - -It is also possible to set hardware clock time using the system clock time as follows. - - $ sudo hwclock --systohc - -To view how long your Linux system has been running, use the uptime command. - - tecmint@tecmint ~/Linux-Tricks $ uptime - 12:36:27 up 1:43, 2 users, load average: 1.39, 1.34, 1.45 - - tecmint@tecmint ~/Linux-Tricks $ uptime -p - up 1 hour, 43 minutes - - tecmint@tecmint ~/Linux-Tricks $ uptime -s - 2015-09-09 10:52:47 - -### Summary ### - -Understanding file types is Linux is a good practice for begginers, and also managing time is critical especially on servers to manage services reliably and efficiently. Hope you find this guide helpful. If you have any additional information, do not forget to post a comment. Stay connected to Tecmint. 
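The `ls | grep | wc` counting recipes from the file-types section can be folded into one small script. This is only a sketch, not part of the original article: the default directory argument and the output format are illustrative assumptions.

```shell
#!/bin/sh
# Sketch: count entries of each file type in a directory by matching the
# first character of the `ls -l` mode field, exactly as the article does.
# The default directory "." is an assumption; pass any path as $1.
dir=${1:-.}

regular=$(ls -l "$dir" | grep '^-' | wc -l)   # regular files
dirs=$(ls -l "$dir" | grep '^d' | wc -l)      # directories
links=$(ls -l "$dir" | grep '^l' | wc -l)     # symbolic links

echo "$dir: $regular regular file(s), $dirs directory(ies), $links symlink(s)"
```

Run against the article's sample ~/Linux-Tricks directory it should report 7 regular files, 1 directory and 0 symlinks, matching the counts shown earlier.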
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/manage-file-types-and-set-system-time-in-linux/ - -作者:[Aaron Kili][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/aaronkili/ -[1]:http://www.tecmint.com/tag/linux-tricks/ -[2]:http://www.tecmint.com/free-online-linux-learning-guide-for-beginners/ -[3]:http://www.tecmint.com/10-useful-linux-command-line-tricks-for-newbies/ -[4]:http://www.tecmint.com/linux-dir-command-usage-with-examples/ -[5]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/ -[6]:http://www.tecmint.com/wc-command-examples/ -[7]:http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/ -[8]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/ \ No newline at end of file diff --git a/translated/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md b/translated/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md new file mode 100644 index 0000000000..601f753341 --- /dev/null +++ b/translated/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md @@ -0,0 +1,279 @@ +Linux 中管理文件类型和系统时间的 5 个有用命令 - 第三部分 +================================================================================ +对于想学习 Linux 的初学者来说要适应使用命令行或者终端可能非常困难。由于终端比图形用户界面程序更能帮助用户控制 Linux 系统,我们必须习惯在终端中运行命令。因此为了有效记忆 Linux 不同的命令,你应该每天使用终端并明白怎样将命令和不同选项以及参数一同使用。 + +![在 Linux 中管理文件类型并设置时间](http://www.tecmint.com/wp-content/uploads/2015/09/Find-File-Types-in-Linux.jpg) + +在 Linux 中管理文件类型并设置时间 - 第三部分 + +请先查看我们 [Linux 小技巧][1]系列之前的文章。 + +- [Linux 中 5 个有趣的命令行提示和技巧 - 第一部分][2] +- [给新手的有用命令行技巧 - 第二部分][3] + +在这篇文章中,我们打算看看终端中 10 个和文件以及时间相关的提示和技巧。 + +### Linux 中的文件类型 ### + +在 Linux 中,一切皆文件,你的设备、目录以及普通文件都认为是文件。 + +Linux 系统中文件有不同的类型: + +- 
普通文件:可能包含命令、文档、音频文件、视频、图像,归档文件等。 +- 设备文件:系统用于访问你的硬件组件的文件。 + +设备文件又分两种:块设备文件(例如硬盘),以数据块为单位读取数据;字符设备文件,以逐个字符的方式读取数据。 + +- 硬链接和软链接:用于在 Linux 文件系统的任意地方访问文件。 +- 命名管道和套接字:允许不同的进程彼此之间交互。 + +#### 1. 用 ‘file’ 命令确定文件类型 #### + +你可以像下面这样使用 file 命令确定文件的类型。下面的截图显示了用 file 命令确定不同文件类型的例子。 + + tecmint@tecmint ~/Linux-Tricks $ dir + BACKUP master.zip + crossroads-stable.tar.gz num.txt + EDWARD-MAYA-2011-2012-NEW-REMIX.mp3 reggea.xspf + Linux-Security-Optimization-Book.gif tmp-link + + tecmint@tecmint ~/Linux-Tricks $ file BACKUP/ + BACKUP/: directory + + tecmint@tecmint ~/Linux-Tricks $ file master.zip + master.zip: Zip archive data, at least v1.0 to extract + + tecmint@tecmint ~/Linux-Tricks $ file crossroads-stable.tar.gz + crossroads-stable.tar.gz: gzip compressed data, from Unix, last modified: Tue Apr 5 15:15:20 2011 + + tecmint@tecmint ~/Linux-Tricks $ file Linux-Security-Optimization-Book.gif + Linux-Security-Optimization-Book.gif: GIF image data, version 89a, 200 x 259 + + tecmint@tecmint ~/Linux-Tricks $ file EDWARD-MAYA-2011-2012-NEW-REMIX.mp3 + EDWARD-MAYA-2011-2012-NEW-REMIX.mp3: Audio file with ID3 version 2.3.0, contains: MPEG ADTS, layer III, v1, 192 kbps, 44.1 kHz, JntStereo + + tecmint@tecmint ~/Linux-Tricks $ file /dev/sda1 + /dev/sda1: block special + + tecmint@tecmint ~/Linux-Tricks $ file /dev/tty1 + /dev/tty1: character special + +#### 2. 
用 ‘ls’ 和 ‘dir’ 命令确定文件类型 #### + +确定文件类型的另一种方式是用 ls 和 [dir][4] 命令显示一长串结果。 + +用 ls -l 确定一个文件的类型。 + +当你查看文件权限时,第一个字符显示了文件类型,其它字符显示文件权限。 + + tecmint@tecmint ~/Linux-Tricks $ ls -l + total 6908 + drwxr-xr-x 2 tecmint tecmint 4096 Sep 9 11:46 BACKUP + -rw-r--r-- 1 tecmint tecmint 1075620 Sep 9 11:47 crossroads-stable.tar.gz + -rwxr----- 1 tecmint tecmint 5916085 Sep 9 11:49 EDWARD-MAYA-2011-2012-NEW-REMIX.mp3 + -rw-r--r-- 1 tecmint tecmint 42122 Sep 9 11:49 Linux-Security-Optimization-Book.gif + -rw-r--r-- 1 tecmint tecmint 17627 Sep 9 11:46 master.zip + -rw-r--r-- 1 tecmint tecmint 5 Sep 9 11:48 num.txt + -rw-r--r-- 1 tecmint tecmint 0 Sep 9 11:46 reggea.xspf + -rw-r--r-- 1 tecmint tecmint 5 Sep 9 11:47 tmp-link + +使用 ls -l 确定块和字符文件 + + tecmint@tecmint ~/Linux-Tricks $ ls -l /dev/sda1 + brw-rw---- 1 root disk 8, 1 Sep 9 10:53 /dev/sda1 + + tecmint@tecmint ~/Linux-Tricks $ ls -l /dev/tty1 + crw-rw---- 1 root tty 4, 1 Sep 9 10:54 /dev/tty1 + +使用 dir -l 确定一个文件的类型。 + + tecmint@tecmint ~/Linux-Tricks $ dir -l + total 6908 + drwxr-xr-x 2 tecmint tecmint 4096 Sep 9 11:46 BACKUP + -rw-r--r-- 1 tecmint tecmint 1075620 Sep 9 11:47 crossroads-stable.tar.gz + -rwxr----- 1 tecmint tecmint 5916085 Sep 9 11:49 EDWARD-MAYA-2011-2012-NEW-REMIX.mp3 + -rw-r--r-- 1 tecmint tecmint 42122 Sep 9 11:49 Linux-Security-Optimization-Book.gif + -rw-r--r-- 1 tecmint tecmint 17627 Sep 9 11:46 master.zip + -rw-r--r-- 1 tecmint tecmint 5 Sep 9 11:48 num.txt + -rw-r--r-- 1 tecmint tecmint 0 Sep 9 11:46 reggea.xspf + -rw-r--r-- 1 tecmint tecmint 5 Sep 9 11:47 tmp-link + +#### 3. 
统计指定类型文件的数目 #### + +下面我们来看看在一个目录中用 ls,[grep][5] 和 [wc][6] 命令统计指定类型文件数目的技巧。命令之间通过管道(pipe)传递数据。 + +- grep – 用于根据给定模式或正则表达式进行搜索的命令。 +- wc – 用于统计行、字和字符的命令。 + +**统计普通文件的数目** + +在 Linux 中,普通文件用符号 `-` 表示。 + + tecmint@tecmint ~/Linux-Tricks $ ls -l | grep ^- | wc -l + 7 + +**统计目录的数目** + +在 Linux 中,目录用符号 `d` 表示。 + + tecmint@tecmint ~/Linux-Tricks $ ls -l | grep ^d | wc -l + 1 + +**统计符号链接和硬链接的数目** + +在 Linux 中,符号链接和硬链接用符号 `l` 表示。 + + tecmint@tecmint ~/Linux-Tricks $ ls -l | grep ^l | wc -l + 0 + +**统计块文件和字符文件的数目** + +在 Linux 中,块和字符文件用符号 `b` 和 `c` 表示。 + + tecmint@tecmint ~/Linux-Tricks $ ls -l /dev | grep ^b | wc -l + 37 + tecmint@tecmint ~/Linux-Tricks $ ls -l /dev | grep ^c | wc -l + 159 + +#### 4. 在 Linux 系统中查找文件 #### + +下面我们来看看在 Linux 系统中查找文件的一些命令,它们包括 locate、find、whatis 和 which 命令。 + +**用 locate 命令查找文件** + +在下面的输出中,我想要定位系统中的 [Samba 服务器配置文件][7]。 + + tecmint@tecmint ~/Linux-Tricks $ locate samba.conf + /usr/lib/tmpfiles.d/samba.conf + /var/lib/dpkg/info/samba.conffiles + +**用 find 命令查找文件** + +想要学习如何在 Linux 中使用 find 命令,你可以阅读我们以下的文章,里面列出了 find 命令的 30 多个例子和使用方法。 + +- [Linux 中 35 个 ‘find’ 命令示例][8] + +**用 whatis 命令定位命令** + +whatis 命令通常用于定位命令,它很特殊,因为它给出关于一个命令的信息,它还能查找配置文件和命令的帮助手册条目。 + + tecmint@tecmint ~/Linux-Tricks $ whatis bash + bash (1) - GNU Bourne-Again SHell + + tecmint@tecmint ~/Linux-Tricks $ whatis find + find (1) - search for files in a directory hierarchy + + tecmint@tecmint ~/Linux-Tricks $ whatis ls + ls (1) - list directory contents + +**用 which 命令定位命令** + +which 命令用于定位文件系统中的命令。 + + tecmint@tecmint ~/Linux-Tricks $ which mkdir + /bin/mkdir + + tecmint@tecmint ~/Linux-Tricks $ which bash + /bin/bash + + tecmint@tecmint ~/Linux-Tricks $ which find + /usr/bin/find + + tecmint@tecmint ~/Linux-Tricks $ which ls + /bin/ls + +#### 5. 处理 Linux 系统的时间 #### + +在联网环境中,保持你的 Linux 系统时间准确是一个好的习惯。Linux 系统中有很多服务要求时间正确才能在联网条件下正常工作。 + +让我们来看看你可以用来管理你机器时间的命令。在 Linux 中,有两种方式管理时间:系统时间和硬件时间。 + +系统时间由系统时钟管理,硬件时间由硬件时钟管理。 + +要查看你的系统时间、日期和时区,像下面这样使用 date 命令。 + + tecmint@tecmint ~/Linux-Tricks $ 
date + Wed Sep 9 12:25:40 IST 2015 + +像下面这样用 date -s 或 date --set="STRING" 设置系统时间。 + + tecmint@tecmint ~/Linux-Tricks $ sudo date -s "12:27:00" + Wed Sep 9 12:27:00 IST 2015 + + tecmint@tecmint ~/Linux-Tricks $ sudo date --set="12:27:00" + Wed Sep 9 12:27:00 IST 2015 + +你也可以像下面这样设置时间和日期。 + + tecmint@tecmint ~/Linux-Tricks $ sudo date 090912302015 + Wed Sep 9 12:30:00 IST 2015 + +使用 cal 命令从日历中查看当前日期。 + + tecmint@tecmint ~/Linux-Tricks $ cal + September 2015 + Su Mo Tu We Th Fr Sa + 1 2 3 4 5 + 6 7 8 9 10 11 12 + 13 14 15 16 17 18 19 + 20 21 22 23 24 25 26 + 27 28 29 30 + +使用 hwclock 命令查看硬件时钟时间。 + + tecmint@tecmint ~/Linux-Tricks $ sudo hwclock + Wednesday 09 September 2015 06:02:58 PM IST -0.200081 seconds + +要设置硬件时钟时间,像下面这样使用 hwclock --set --date="STRING" 命令。 + + tecmint@tecmint ~/Linux-Tricks $ sudo hwclock --set --date="09/09/2015 12:33:00" + + tecmint@tecmint ~/Linux-Tricks $ sudo hwclock + Wednesday 09 September 2015 12:33:11 PM IST -0.891163 seconds + +系统时间在启动时由硬件时钟设置;系统关闭时,硬件时间会被重置为当时的系统时间。 + +因此你查看系统时间和硬件时间时,它们是一样的,除非你更改了系统时间。当 CMOS 电池电量不足时,硬件时间可能不准确。 + +你也可以像下面这样使用硬件时钟的时间设置系统时间。 + + $ sudo hwclock --hctosys + +也可以像下面这样用系统时钟时间设置硬件时钟时间。 + + $ sudo hwclock --systohc + +要查看你的 Linux 系统已经运行了多长时间,可以使用 uptime 命令。 + + tecmint@tecmint ~/Linux-Tricks $ uptime + 12:36:27 up 1:43, 2 users, load average: 1.39, 1.34, 1.45 + + tecmint@tecmint ~/Linux-Tricks $ uptime -p + up 1 hour, 43 minutes + + tecmint@tecmint ~/Linux-Tricks $ uptime -s + 2015-09-09 10:52:47 + +### 总结 ### + +对于初学者来说,理解 Linux 中的文件类型是很好的练习;同时,时间管理也非常重要,尤其是在需要可靠、高效地管理服务的服务器上。希望这篇指南能对你有所帮助。如果你有任何反馈,别忘了给我们写评论。和 Tecmint 保持联系。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/manage-file-types-and-set-system-time-in-linux/ + +作者:[Aaron Kili][a] +译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + 
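补充一个小示例(非原文内容):用 GNU date 的 -d(--date)选项可以在不修改系统时间、也不需要 sudo 的情况下解析和换算时间。脚本中的两个时间点取自上文 uptime 的示例输出,仅作演示。

```shell
#!/bin/sh
# 用 GNU date 的 -d 选项解析时间字符串,而不改动系统时钟。
# 下面的两个时刻取自上文 uptime 的示例输出,仅作演示用。

# 将 Unix 纪元起点格式化输出(应为 1970-01-01 00:00:00)
date -u -d @0 '+%Y-%m-%d %H:%M:%S'

# 用 +%s 把时刻转成时间戳,再相减得到系统已运行的秒数
boot=$(date -u -d '2015-09-09 10:52:47' +%s)
now=$(date -u -d '2015-09-09 12:36:27' +%s)
echo "uptime: $((now - boot)) seconds"   # 6220 秒,约 1 小时 43 分,与 uptime 输出一致
```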
+[a]:http://www.tecmint.com/author/aaronkili/ +[1]:http://www.tecmint.com/tag/linux-tricks/ +[2]:http://www.tecmint.com/free-online-linux-learning-guide-for-beginners/ +[3]:http://www.tecmint.com/10-useful-linux-command-line-tricks-for-newbies/ +[4]:http://www.tecmint.com/linux-dir-command-usage-with-examples/ +[5]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/ +[6]:http://www.tecmint.com/wc-command-examples/ +[7]:http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/ +[8]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/ \ No newline at end of file From a001f8da461d0db3d86d4c233f057ce7e7181754 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sun, 27 Sep 2015 15:12:08 +0800 Subject: [PATCH 609/697] [Translating]sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md --- ...11 10 Useful Linux Command Line Tricks for Newbies--Part 2.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md b/sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md index 09fd4c879d..51933e540a 100644 --- a/sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md +++ b/sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md @@ -1,3 +1,4 @@ +ictlyh Translating 10 Useful Linux Command Line Tricks for Newbies – Part 2 ================================================================================ I remember when I first started using Linux and I was used to the graphical interface of Windows, I truly hated the Linux terminal. Back then I was finding the commands hard to remember and proper use of each one of them. With time I realised the beauty, flexibility and usability of the Linux terminal and to be honest a day doesn’t pass without using. 
Today, I would like to share some useful tricks and tips for Linux new comers to ease their transition to Linux or simply help them learn something new (hopefully). From dd5d88063ccebe939c96999e095ca364d9738847 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Sun, 27 Sep 2015 16:02:01 +0800 Subject: [PATCH 610/697] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E8=A7=84=E8=8C=83?= =?UTF-8?q?=E4=B8=8A=E4=BC=A0?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- LCTT翻译规范.md | 4 ++++ 1 file changed, 4 insertions(+) create mode 100644 LCTT翻译规范.md diff --git a/LCTT翻译规范.md b/LCTT翻译规范.md new file mode 100644 index 0000000000..b9a514f115 --- /dev/null +++ b/LCTT翻译规范.md @@ -0,0 +1,4 @@ +# Linux中国翻译规范 +1. 翻译中出现的专有名词,可参见Dict.md中的翻译。 +2. 英文人名,如无中文对应译名,一般不译。 +3. 缩写词,一般不须翻译,可考虑旁注中文全名。 \ No newline at end of file From 2ace123685ed6f3ef513e27b641103554afa0174 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sun, 27 Sep 2015 16:13:29 +0800 Subject: [PATCH 611/697] [Translated]sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md --- ...Command Line Tricks for Newbies--Part 2.md | 251 ----------------- ...Command Line Tricks for Newbies--Part 2.md | 253 ++++++++++++++++++ 2 files changed, 253 insertions(+), 251 deletions(-) delete mode 100644 sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md create mode 100644 translated/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md diff --git a/sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md b/sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md deleted file mode 100644 index 51933e540a..0000000000 --- a/sources/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md +++ /dev/null @@ -1,251 +0,0 @@ -ictlyh Translating -10 Useful Linux Command Line Tricks for Newbies – Part 2 -================================================================================ -I remember when 
I first started using Linux and I was used to the graphical interface of Windows, I truly hated the Linux terminal. Back then I was finding the commands hard to remember and proper use of each one of them. With time I realised the beauty, flexibility and usability of the Linux terminal and to be honest a day doesn’t pass without using. Today, I would like to share some useful tricks and tips for Linux new comers to ease their transition to Linux or simply help them learn something new (hopefully). - -![10 Linux Commandline Tricks for Newbies](http://www.tecmint.com/wp-content/uploads/2015/09/10-Linux-Commandline-Tricks.jpg) - -10 Linux Commandline Tricks – Part 2 - -- [5 Interesting Command Line Tips and Tricks in Linux – Part 1][1] -- [5 Useful Commands to Manage Linux File Types – Part 3][2] - -This article intends to show you some useful tricks how to use the Linux terminal like a pro with minimum amount of skills. All you need is a Linux terminal and some free time to test these commands. - -### 1. Find the right command ### - -Executing the right command can be vital for your system. However in Linux there are so many different command lines that they are often hard to remember. So how do you search for the right command you need? The answer is apropos. All you need to run is: - - # apropos - -Where you should change the “description” with the actual description of the command you are looking for. Here is a good example: - - # apropos "list directory" - - dir (1) - list directory contents - ls (1) - list directory contents - ntfsls (8) - list directory contents on an NTFS filesystem - vdir (1) - list directory contents - -On the left you can see the commands and on the right their description. - -### 2. Execute Previous Command ### - -Many times you will need to execute the same command over and over again. While you can repeatedly press the Up key on your keyboard, you can use the history command instead. 
This command will list all commands you entered since you launched the terminal: - - # history - - 1 fdisk -l - 2 apt-get install gnome-paint - 3 hostname tecmint.com - 4 hostnamectl tecmint.com - 5 man hostnamectl - 6 hostnamectl --set-hostname tecmint.com - 7 hostnamectl -set-hostname tecmint.com - 8 hostnamectl set-hostname tecmint.com - 9 mount -t "ntfs" -o - 10 fdisk -l - 11 mount -t ntfs-3g /dev/sda5 /mnt - 12 mount -t rw ntfs-3g /dev/sda5 /mnt - 13 mount -t -rw ntfs-3g /dev/sda5 /mnt - 14 mount -t ntfs-3g /dev/sda5 /mnt - 15 mount man - 16 man mount - 17 mount -t -o ntfs-3g /dev/sda5 /mnt - 18 mount -o ntfs-3g /dev/sda5 /mnt - 19 mount -ro ntfs-3g /dev/sda5 /mnt - 20 cd /mnt - ... - -As you will see from the output above, you will receive a list of all commands that you have ran. On each line you have number indicating the row in which you have entered the command. You can recall that command by using: - - !# - -Where # should be changed with the actual number of the command. For better understanding, see the below example: - - !501 - -Is equivalent to: - - # history - -### 3. Use midnight Commander ### - -If you are not used to using commands such cd, cp, mv, rm than you can use the midnight command. It is an easy to use visual shell in which you can also use mouse: - -![Midnight Commander in Action](http://www.tecmint.com/wp-content/uploads/2015/09/mc-command.jpg) - -Midnight Commander in Action - -Thanks to the F1 – F12 keys, you can easy perform different tasks. Simply check the legend at the bottom. To select a file or folder click the “Insert” button. - -In short the midnight command is called “mc“. To install mc on your system simply run: - - $ sudo apt-get install mc [On Debian based systems] - ----------- - - # yum install mc [On Fedora based systems] - -Here is a simple example of using midnight commander. Open mc by simply typing: - - # mc - -Now use the TAB button to switch between windows – left and right. 
I have a LibreOffice file that I will move to “Software” folder: - -![Midnight Commander Move Files](http://www.tecmint.com/wp-content/uploads/2015/09/Midnight-Commander-Move-Files.jpg) - -Midnight Commander Move Files - -To move the file in the new directory press F6 button on your keyboard. MC will now ask you for confirmation: - -![Move Files to New Directory](http://www.tecmint.com/wp-content/uploads/2015/09/Move-Files-to-new-Directory.png) - -Move Files to New Directory - -Once confirmed, the file will be moved in the new destination directory. - -Read More: [How to Use Midnight Commander File Manager in Linux][4] - -### 4. Shutdown Computer at Specific Time ### - -Sometimes you will need to shutdown your computer some hours after your work hours have ended. You can configure your computer to shut down at specific time by using: - - $ sudo shutdown 21:00 - -This will tell your computer to shut down at the specific time you have provided. You can also tell the system to shutdown after specific amount of minutes: - - $ sudo shutdown +15 - -That way the system will shut down in 15 minutes. - -### 5. Show Information about Known Users ### - -You can use a simple command to list your Linux system users and some basic information about them. Simply use: - - # lslogins - -This should bring you the following output: - - UID USER PWD-LOCK PWD-DENY LAST-LOGIN GECOS - 0 root 0 0 Apr29/11:35 root - 1 bin 0 1 bin - 2 daemon 0 1 daemon - 3 adm 0 1 adm - 4 lp 0 1 lp - 5 sync 0 1 sync - 6 shutdown 0 1 Jul19/10:04 shutdown - 7 halt 0 1 halt - 8 mail 0 1 mail - 10 uucp 0 1 uucp - 11 operator 0 1 operator - 12 games 0 1 games - 13 gopher 0 1 gopher - 14 ftp 0 1 FTP User - 23 squid 0 1 - 25 named 0 1 Named - 27 mysql 0 1 MySQL Server - 47 mailnull 0 1 - 48 apache 0 1 Apache - ... - -### 6. Search for Files ### - -Searching for files can sometimes be not as easy as you think. 
A good example for searching for files is: - - # find /home/user -type f - -This command will search for all files located in /home/user. The find command is extremely powerful one and you can pass more options to it to make your search even more detailed. If you want to search for files larger than given size, you can use: - - # find . -type f -size 10M - -The above command will search from current directory for all files that are larger than 10 MB. Make sure not to run the command from the root directory of your Linux system as this may cause high I/O on your machine. - -One of the most frequently used combinations that I use find with is “exec” option, which basically allows you to run some actions on the results of the find command. - -For example, lets say that we want to find all files in a directory and change their permissions. This can be easily done with: - - # find /home/user/files/ -type f -exec chmod 644 {} \; - -The above command will search for all files in the specified directory recursively and will executed chmod command on the found files. I am sure you will find many more uses on this command in future, for now read [35 Examples of Linux ‘find’ Command and Usage][5]. - -### 7. Build Directory Trees with one Command ### - -You probably know that you can create new directories by using the mkdir command. So if you want to create a new folder you will run something like this: - - # mkdir new_folder - -But what, if you want to create 5 subfolders within that folder? Running mkdir 5 times in a row is not a good solution. Instead you can use -p option like that: - - # mkdir -p new_folder/{folder_1,folder_2,folder_3,folder_4,folder_5} - -In the end you should have 5 folders located in new_folder: - - # ls new_folder/ - - folder_1 folder_2 folder_3 folder_4 folder_5 - -### 8. Copy File into Multiple Directories ### - -File copying is usually performed with the cp command. 
Copying a file usually looks like this: - - # cp /path-to-file/my_file.txt /path-to-new-directory/ - -Now imagine that you need to copy that file in multiple directories: - - # cp /home/user/my_file.txt /home/user/1 - # cp /home/user/my_file.txt /home/user/2 - # cp /home/user/my_file.txt /home/user/3 - -This is a bit absurd. Instead you can solve the problem with a simple one line command: - - # echo /home/user/1/ /home/user/2/ /home/user/3/ | xargs -n 1 cp /home/user/my_file.txt - -### 9. Deleting Larger Files ### - -Sometimes files can grow extremely large. I have seen cases where a single log file went over 250 GB large due to poor administrating skills. Removing the file with rm utility might not be sufficient in such cases due to the fact that there is extremely large amount of data that needs to be removed. The operation will be a “heavy” one and should be avoided. Instead, you can go with a really simple solution: - - # > /path-to-file/huge_file.log - -Where of course you will need to change the path and the file names with the exact ones to match your case. The above command will simply write an empty output to the file. In more simpler words it will empty the file without causing high I/O on your system. - -### 10. Run Same Command on Multiple Linux Servers ### - -Recently one of our readers asked in our [LinuxSay forum][6], how to execute single command to multiple Linux boxes at once using SSH. He had his machines IP addresses looking like this: - - 10.0.0.1 - 10.0.0.2 - 10.0.0.3 - 10.0.0.4 - 10.0.0.5 - -So here is a simple solution of this issue. Collect the IP addresses of the servers in a one file called list.txt one under other just as shown above. Then you can run: - - # for in $i(cat list.txt); do ssh user@$i 'bash command'; done - -In the above example you will need to change “user” with the actual user with which you will be logging and “bash command” with the actual bash command you wish to execute. 
The method is better working when you are [using passwordless authentication with SSH key][7] to your machines as that way you will not need to enter the password for your user over and over again. - -Note that you may need to pass some additional parameters to the SSH command depending on your Linux boxes setup. - -### Conclusion ### - -The above examples are really simple ones and I hope they have helped you to find some of the beauty of Linux and how you can easily perform different operations that can take much more time on other operating systems. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/10-useful-linux-command-line-tricks-for-newbies/ - -作者:[Marin Todorov][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/marintodorov89/ -[1]:http://www.tecmint.com/5-linux-command-line-tricks/ -[2]:http://www.tecmint.com/manage-file-types-and-set-system-time-in-linux/ -[3]:http://www.tecmint.com/history-command-examples/ -[4]:http://www.tecmint.com/midnight-commander-a-console-based-file-manager-for-linux/ -[5]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/ -[6]:http://www.linuxsay.com/ -[7]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ \ No newline at end of file diff --git a/translated/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md b/translated/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md new file mode 100644 index 0000000000..c2fcb279f1 --- /dev/null +++ b/translated/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md @@ -0,0 +1,253 @@ +给新手的 10 个有用 Linux 命令行技巧 - 第二部分 +================================================================================ +我记得我第一次使用 Linux 的时候,我还习惯于 Windows 的图形界面,我真的很讨厌 Linux 
终端。那时候我觉得命令难以记忆,不能正确使用它们。随着时间推移,我意识到了 Linux 终端的优美、灵活和可用性,说实话,我没有一天不使用它。今天,我很高兴和刚开始接触 Linux 的人一起来分享一些有用的技巧和提示,希望能帮助他们更好的向 Linux 过度,并帮助他们学到一些新的东西(希望如此)。 + +![给新手的 10 个命令行技巧](http://www.tecmint.com/wp-content/uploads/2015/09/10-Linux-Commandline-Tricks.jpg) + +10 个 Linux 命令行技巧 - 第二部分 + + +- [Linux 中 5 个有趣的命令行提示和技巧 - 第一部分][1] +- [管理 Linux 文件类型的 5 个有用命令 – 第三部分][2] + +这篇文章希望向你展示一些不需要很高的技术而可以像一个高手一样使用 Linux 终端的有用技巧。你只需要一个 Linux 终端和一些自由时间来体会这些命令。 + +### 1. 找到正确的命令 ### + +执行正确的命令对你的系统来说非常重要。然而在 Linux 中有很多通常难以记忆的不同的命令行。那么怎样才能找到你需要的正确命令呢?答案是 apropos。你只需要运行: + + # apropos + +其中你要用真正描述你要查找的命令的语句代替 “description”。这里有一个例子: + + # apropos "list directory" + + dir (1) - list directory contents + ls (1) - list directory contents + ntfsls (8) - list directory contents on an NTFS filesystem + vdir (1) - list directory contents + +左边你看到的是命令,右边是它们的描述。 + +### 2. 执行之前的命令 ### + +很多时候你需要一遍又一遍执行相同的命令。尽管你可以重复按你键盘上的 Up 键,你也可以用 history 命令。这个命令会列出自从你上次启动终端以来所有输入过的命令: + + # history + + 1 fdisk -l + 2 apt-get install gnome-paint + 3 hostname tecmint.com + 4 hostnamectl tecmint.com + 5 man hostnamectl + 6 hostnamectl --set-hostname tecmint.com + 7 hostnamectl -set-hostname tecmint.com + 8 hostnamectl set-hostname tecmint.com + 9 mount -t "ntfs" -o + 10 fdisk -l + 11 mount -t ntfs-3g /dev/sda5 /mnt + 12 mount -t rw ntfs-3g /dev/sda5 /mnt + 13 mount -t -rw ntfs-3g /dev/sda5 /mnt + 14 mount -t ntfs-3g /dev/sda5 /mnt + 15 mount man + 16 man mount + 17 mount -t -o ntfs-3g /dev/sda5 /mnt + 18 mount -o ntfs-3g /dev/sda5 /mnt + 19 mount -ro ntfs-3g /dev/sda5 /mnt + 20 cd /mnt + ... + +正如你上面看到的,你会得到一个你运行过的命令的列表。每一行中有一个数字表示你在第几行输入了命令。你可以通过以下方法重新调用该命令: + + !# + +其中要用命令的实际编号代替 #。为了更好的理解,请看下面的例子: + + !501 + +等价于: + + # history + +### 3. 
使用 midnight 命令 ### + +如果你不习惯使用类似 cd、cp、mv、rm 等命令,你可以使用 midnight 命令。它是一个简单的可视化 shell,你可以在上面使用鼠标: + + +![Midnight 命令](http://www.tecmint.com/wp-content/uploads/2015/09/mc-command.jpg) + +Midnight 命令 + +多亏了 F1 到 F12 键,你可以轻易地执行不同任务。只需要在底部选择对应的命令。要选择文件或者目录,点击 “Insert” 按钮。 + +简而言之 midnight 就是所谓的 “mc”。要安装 mc,只需要运行: + + $ sudo apt-get install mc [On Debian based systems] + +---------- + + # yum install mc [On Fedora based systems] + +下面是一个使用 midnight 命令器的简单例子。通过输入以下命令打开 mc: + + # mc + +现在使用 TAB 键选择不同的窗口 - 左和右。我有一个想要移动到 “Software” 目录的 LibreOffice 文件: + +![Midnight 命令移动文件](http://www.tecmint.com/wp-content/uploads/2015/09/Midnight-Commander-Move-Files.jpg) + +Midnight 命令移动文件 + +按 F6 按钮移动文件到新的目录。MC 会请求你确认: + +![移动文件到新目录](http://www.tecmint.com/wp-content/uploads/2015/09/Move-Files-to-new-Directory.png) + +移动文件到新目录 + +确认了之后,文件就会被移动到新的目标目录。 + +扩展阅读:[如何在 Linux 中使用 Midnight 命令文件管理器][4] + +### 4. 在指定时间关闭计算机 ### + +有时候你需要在结束工作几个小时后再关闭计算机。你可以通过使用下面的命令在指定时间关闭你的计算机: + + $ sudo shutdown 21:00 + +这会告诉你在你指定的时间关闭计算机。你也可以告诉系统在指定分钟后关闭: + + $ sudo shutdown +15 + +这表示计算机会在 15 分钟后关闭。 + +### 5. 显示已知用户的信息 ### + +你可以使用一个简单的命令列出你 Linux 系统的用户以及一些关于它们的基本信息。 + + # lslogins + +这会输出下面的结果: + + UID USER PWD-LOCK PWD-DENY LAST-LOGIN GECOS + 0 root 0 0 Apr29/11:35 root + 1 bin 0 1 bin + 2 daemon 0 1 daemon + 3 adm 0 1 adm + 4 lp 0 1 lp + 5 sync 0 1 sync + 6 shutdown 0 1 Jul19/10:04 shutdown + 7 halt 0 1 halt + 8 mail 0 1 mail + 10 uucp 0 1 uucp + 11 operator 0 1 operator + 12 games 0 1 games + 13 gopher 0 1 gopher + 14 ftp 0 1 FTP User + 23 squid 0 1 + 25 named 0 1 Named + 27 mysql 0 1 MySQL Server + 47 mailnull 0 1 + 48 apache 0 1 Apache + ... + +### 6. 查找文件 ### +### 6. Search for Files ### + +查找文件有时候并不像你想象的那么简单。一个搜索文件的好例子是: + + # find /home/user -type f + +这个命令会搜索 /home/user 目录下的所有文件。find 命令真的很强大,你可以传递更多选项给它使得你的搜索更加详细。如果你想搜索比特定大小大的文件,可以使用: + + # find . 
-type f -size 10M + +上面的命令会搜索当前目录中所有大于 10M 的文件。确保不要在你 Linux 系统的根目录运行该命令,因为这可能导致你的机器 I/O 瓶颈。 + +我最经常和 find 命令一起使用的选项之一是 “exec”,这允许你对 find 命令的结果运行一些操作。 + +例如,假如我们想查找一个目录中的所有文件并更改权限。可以通过以下简单命令完成: + + # find /home/user/files/ -type f -exec chmod 644 {} \; + +上面的命令会递归搜索指定目录内的所有文件,并对找到的文件执行 chmod 命令。推荐你阅读 [35 个 Linux ‘find’ 命令的使用方法][5],我肯定你会发现这个命令更多的使用方法。 + +### 7. 用一个命令创建目录树 ### + +你很可能知道可以使用 mkdir 命令创建新的目录。因此如果你想创建一个新的目录,你可能会运行: + + # mkdir new_folder + +但如果你想在该目录下创建 5 个子目录呢?运行 5 次 mkdir 命令并非是一个好的选择。相反你可以类似下面这样使用 -p 选项: + + # mkdir -p new_folder/{folder_1,folder_2,folder_3,folder_4,folder_5} + +最后你会在 new_folder 中有 5 个目录: + + # ls new_folder/ + + folder_1 folder_2 folder_3 folder_4 folder_5 + +### 8. 复制文件到多个目录 ### + +通常使用 cp 命令进行文件复制。复制文件通常看起来类似: + + # cp /path-to-file/my_file.txt /path-to-new-directory/ + +现在假设你需要复制该文件到多个目录: + + # cp /home/user/my_file.txt /home/user/1 + # cp /home/user/my_file.txt /home/user/2 + # cp /home/user/my_file.txt /home/user/3 + +这有点荒唐。相反,你可以用简单的一行命令解决问题: + + # echo /home/user/1/ /home/user/2/ /home/user/3/ | xargs -n 1 cp /home/user/my_file.txt + +### 9. 删除大文件 ### + +有时候文件可能会变得很大。我看过由于缺乏管理技能一个日志文件就超过 250G 的例子。用 rm 命令可能不足以删除该文件,因为有大量的数据需要移除。应该避免这个很“笨重”的操作。相反,你可以使用一个简单的方法解决这个问题: + + # > /path-to-file/huge_file.log + +当然你需要根据你实际情况替换路径和文件名。上面的命令写一个空输出到该文件。用更简单的话说它会清空文件而不会导致你的系统产生大的 I/O 消耗。 + +### 10. 
在多个 Linux 服务器上运行相同命令 ### + +最近我们的一个读者在 [LinuxSay 论坛][6]提问说如何通过 ssh 在多个 Linux 服务器上执行一个命令。他机器的 IP 地址是: + + 10.0.0.1 + 10.0.0.2 + 10.0.0.3 + 10.0.0.4 + 10.0.0.5 + +这里有一个简单的解决方法。收集服务器的 IP 地址到文件 list.txt 中,像上面那样一行一个。然后运行: + + # for in $i(cat list.txt); do ssh user@$i 'bash command'; done + +上面的命令中你需要用实际登录的用户替换 “user”,用你希望执行的实际命令替换 “bash command”。这个方法非常适用于通过[使用 SSH 密钥进行无密码验证][7],因为这样你不需要每次都为用户输入密码。 + +注意取决于你 Linux 系统的设置,你可能还需要传递一些额外的参数给 SSH 命令。 + +### 总结 ### + +上面的例子都很简单,我希望它们能帮助你发现 Linux 的优美之处,你如何能简单实现在其它操作系统上需要更多时间的不同操作。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/10-useful-linux-command-line-tricks-for-newbies/ + +作者:[Marin Todorov][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/marintodorov89/ +[1]:http://www.tecmint.com/5-linux-command-line-tricks/ +[2]:http://www.tecmint.com/manage-file-types-and-set-system-time-in-linux/ +[3]:http://www.tecmint.com/history-command-examples/ +[4]:http://www.tecmint.com/midnight-commander-a-console-based-file-manager-for-linux/ +[5]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/ +[6]:http://www.linuxsay.com/ +[7]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ \ No newline at end of file From 48f3d07d209970c7e32aa1a6543a9e52b4b1daf3 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 27 Sep 2015 17:07:45 +0800 Subject: [PATCH 612/697] translating --- ...0906 How to Install DNSCrypt and Unbound in Arch Linux.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md b/sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md index 98cb0e9b55..b0c6dec1b5 100644 --- a/sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md +++ 
b/sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md @@ -1,3 +1,6 @@ +translating---geekpi + + How to Install DNSCrypt and Unbound in Arch Linux ================================================================================ **DNSCrypt** is a protocol that encrypt and authenticate communications between a DNS client and a DNS resolver. Prevent from DNS spoofing or man in the middle-attack. DNSCrypt are available for most operating system, including Linux, Windows, MacOSX android and iOS. And in this tutorial I'm using archlinux with kernel 4.1. @@ -171,4 +174,4 @@ via: http://linoxide.com/tools/install-dnscrypt-unbound-archlinux/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://linoxide.com/author/arulm/ \ No newline at end of file +[a]:http://linoxide.com/author/arulm/ From cac21944417420c655ad482208877f66385f17b6 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 27 Sep 2015 17:59:20 +0800 Subject: [PATCH 613/697] translated --- ...tall DNSCrypt and Unbound in Arch Linux.md | 177 ------------------ ...tall DNSCrypt and Unbound in Arch Linux.md | 173 +++++++++++++++++ 2 files changed, 173 insertions(+), 177 deletions(-) delete mode 100644 sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md create mode 100644 translated/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md diff --git a/sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md b/sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md deleted file mode 100644 index b0c6dec1b5..0000000000 --- a/sources/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md +++ /dev/null @@ -1,177 +0,0 @@ -translating---geekpi - - -How to Install DNSCrypt and Unbound in Arch Linux -================================================================================ -**DNSCrypt** is a protocol that encrypt and authenticate communications between a DNS client and a 
DNS resolver. Prevent from DNS spoofing or man in the middle-attack. DNSCrypt are available for most operating system, including Linux, Windows, MacOSX android and iOS. And in this tutorial I'm using archlinux with kernel 4.1. - -Unbound is a DNS cache server used to resolve any DNS query received. If the user requests a new query, then unbound will store it as a cache, and when the user requests the same query for the second time, then unbound would take from the cache that have been saved. This will be faster than the first request query. - -And now I will try to install "DNSCrypt" to secure the dns communication, and make it faster with dns cache "Unbound". - -### Step 1 - Install yaourt ### - -Yaourt is one of AUR(Arch User Repository) helper that make archlinux users easy to install a program from AUR. Yaourt use same syntax as pacman, so you can install the program with yaourt. and this is easy way to install yaourt : - -1. Edit the arch repository configuration file with nano or vi, stored in a file "/etc/pacman.conf". - - $ nano /etc/pacman.conf - -2. Add at the bottom line yaourt repository, just paste script below : - - [archlinuxfr] - SigLevel = Never - Server = http://repo.archlinux.fr/$arch - -3. Save it with press "Ctrl + x" and then "Y". - -4. Now update the repository database and install yaourt with pacman command : - - $ sudo pacman -Sy yaourt - -### Step 2 - Install DNSCrypt and Unbound ### - -DNSCrypt and unbound available on archlinux repository, then you can install it with pacman command : - - $ sudo pacman -S dnscrypt-proxy unbound - -wait it and press "Y" for proceed with installation. - -### Step 3 - Install dnscrypt-autoinstall ### - -Dnscrypt-autoinstall is A script for installing and automatically configuring DNSCrypt on Linux-based systems. 
Dnscrypt-autoinstall is available in the AUR (Arch User Repository), so you must use the "yaourt" command to install it : - - $ yaourt -S dnscrypt-autoinstall - -Note : - --S = the same as pacman -S, to install a software package. - -### Step 4 - Run dnscrypt-autoinstall ### - -Run the "dnscrypt-autoinstall" command with root privileges to configure DNSCrypt automatically : - - $ sudo dnscrypt-autoinstall - -Press "Enter" to continue the configuration, then type "y" and choose the DNS provider you want to use. Here I use DNSCrypt.eu, which features no logging and DNSSEC. - -![DNSCrypt autoinstall](http://blog.linoxide.com/wp-content/uploads/2015/08/DNSCrypt-autoinstall.png) - -### Step 5 - Configure DNSCrypt and Unbound ### - -1. Open the dnscrypt configuration file "/etc/conf.d/dnscrypt-config" and make sure "DNSCRYPT_LOCALIP" points to the **localhost IP**; the "DNSCRYPT_LOCALPORT" port setting is up to you, and here I use port **40**. - - $ nano /etc/conf.d/dnscrypt-config - - DNSCRYPT_LOCALIP=127.0.0.1 - DNSCRYPT_LOCALIP2=127.0.0.2 - DNSCRYPT_LOCALPORT=40 - -![DNSCrypt Configuration](http://blog.linoxide.com/wp-content/uploads/2015/08/DNSCryptConfiguration.png) - -Save and exit. - -2. Now you can edit the unbound configuration in "/etc/unbound/". Edit the configuration file with the nano editor : - - $ nano /etc/unbound/unbound.conf - -3. Add the following lines at the end of the file : - - do-not-query-localhost: no - forward-zone: - name: "." - forward-addr: 127.0.0.1@40 - -Make sure the "**forward-addr**" port is the same as the "**DNSCRYPT_LOCALPORT**" setting in DNSCrypt. You can see that I use port **40**. - -![Unbound Configuration](http://blog.linoxide.com/wp-content/uploads/2015/08/UnboundConfiguration.png) - -Then save and exit. 
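Since a port mismatch between the two files silently breaks resolution, a quick sanity check can compare them before restarting anything. This is only a sketch using sample stand-in files under a `demo/` directory (the real paths are /etc/conf.d/dnscrypt-config and /etc/unbound/unbound.conf), and it assumes the value formats shown above:

```shell
# Create sample stand-ins mirroring the two config files from the article.
mkdir -p demo/conf.d demo/unbound
printf 'DNSCRYPT_LOCALIP=127.0.0.1\nDNSCRYPT_LOCALPORT=40\n' > demo/conf.d/dnscrypt-config
printf 'forward-zone:\n  name: "."\n  forward-addr: 127.0.0.1@40\n' > demo/unbound/unbound.conf

# Extract the port from each file and compare.
dnscrypt_port=$(sed -n 's/^DNSCRYPT_LOCALPORT=//p' demo/conf.d/dnscrypt-config)
unbound_port=$(sed -n 's/.*forward-addr: .*@\([0-9]*\).*/\1/p' demo/unbound/unbound.conf)

if [ "$dnscrypt_port" = "$unbound_port" ]; then
    echo "ports match: $dnscrypt_port"
else
    echo "port mismatch: dnscrypt=$dnscrypt_port unbound=$unbound_port" >&2
fi
```

Pointing the two `sed` commands at the real paths (as root) gives the same check on a live system.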
- -### Step 6 - Run DNSCrypt and Unbound, then Add to startup/Boot ### - -Run DNSCrypt and unbound with root privileges; you can do so with the systemctl command : - - $ sudo systemctl start dnscrypt-proxy unbound - -Add the services to startup at boot time. You can do it by running "systemctl enable" : - - $ sudo systemctl enable dnscrypt-proxy unbound - -The command will create symlinks of the services in the "/usr/lib/systemd/system/" directory. - -### Step 7 - Configure resolv.conf and restart all services ### - -Resolv.conf is a file used by Linux to configure the Domain Name System (DNS) resolver. It is just plain text created by the administrator, so you must edit it with root privileges and then make it immutable so that no one else can edit it. - -Edit it with the nano editor : - - $ nano /etc/resolv.conf - -Add the localhost IP "**127.0.0.1**", and now make it immutable with the "chattr" command : - - $ chattr +i /etc/resolv.conf - -Note : - -If you want to edit it again, make it writable with the command "chattr -i /etc/resolv.conf". - -Now you need to restart DNSCrypt, unbound, and the network : - - $ sudo systemctl restart dnscrypt-proxy unbound netctl - -If you see any errors, check your configuration files. - -### Testing ### - -1. Test DNSCrypt - -You can make sure that DNSCrypt is acting correctly by visiting https://dnsleaktest.com/, then clicking on "Standard Test" or "Extended Test" and waiting for the process to finish. - -And now you can see that DNSCrypt is working with DNSCrypt.eu as your DNS provider. - -![Testing DNSCrypt](http://blog.linoxide.com/wp-content/uploads/2015/08/TestingDNSCrypt.png) - -2. Test Unbound - -Now you should ensure that unbound is working correctly with the "dig" or "drill" command. 
- -These are the results of the dig command : - - $ dig linoxide.com - -In the results, note that the "Query time" is "533 msec" : - - ;; Query time: 533 msec - ;; SERVER: 127.0.0.1#53(127.0.0.1) - ;; WHEN: Sun Aug 30 14:48:19 WIB 2015 - ;; MSG SIZE rcvd: 188 - -Try the same command again, and you will see that the "Query time" is now "0 msec". - - ;; Query time: 0 msec - ;; SERVER: 127.0.0.1#53(127.0.0.1) - ;; WHEN: Sun Aug 30 14:51:05 WIB 2015 - ;; MSG SIZE rcvd: 188 - -![Unbound Test](http://blog.linoxide.com/wp-content/uploads/2015/08/UnboundTest.png) - -In the end, DNSCrypt secures the communications between DNS clients and the DNS resolver perfectly, and Unbound makes repeated requests faster by serving them from the cache it has saved. - -### Conclusion ### - -DNSCrypt is a protocol that can encrypt the data flow between a DNS client and a DNS resolver. DNSCrypt can run on various operating systems, both mobile and desktop. Choosing a DNS provider is also important: choose one that provides DNSSEC and keeps no logs. Unbound can be used as a DNS cache, thus speeding up the resolving process, because Unbound stores each request in its cache; when a client makes the same query again, unbound serves it from the saved cache. DNSCrypt and Unbound are a powerful combination for safety and speed. 
- --------------------------------------------------------------------------------- - -via: http://linoxide.com/tools/install-dnscrypt-unbound-archlinux/ - -作者:[Arul][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arulm/ diff --git a/translated/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md b/translated/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md new file mode 100644 index 0000000000..9977cbd09f --- /dev/null +++ b/translated/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md @@ -0,0 +1,173 @@ +如何在Arch Linux中安装DNSCrypt和Unbound +================================================================================ +**DNSCrypt**是一个用于加密和验证的DNS客户端和一个DNS解析器之间通信的协议。阻止DNS欺骗或中间人攻击。 DNSCrypt可用于大多数的操作系统,包括Linux,Windows,MacOSX的Android和iOS。而在本教程中我使用的是内核为4.1的archlinux。 + +Unbound是用来解析收到的任意DNS查询的DNS缓存服务器。如果用户请求一个新的查询,然后unbound将其存储到缓存中,并且当用户再次请求相同的请求时,unbound将采用已经保存的缓存。这将是第一次请求查询更快。 + +现在我将尝试安装“DNSCrypt”,以确保DNS的通信的安全,并用“Unbound”加速。 + +### 第一步 - 安装yaourt ### + +Yaourt是AUR(ARCH用户仓库)的辅助,使用户能够很容易地从AUR安装程序。 Yaourt和pacman一样使用相同的语法,这样你就可以使用yaourt安装该程序。下面是安装yaourt的简单方法: + +1. 用nano或者vi编辑arch仓库配置文件,保存在“/etc/pacman.conf”中。 + + $ nano /etc/pacman.conf + +2. 在底部填上你的yaourt仓库,粘贴下面的脚本: + + [archlinuxfr] + SigLevel = Never + Server = http://repo.archlinux.fr/$arch + +3. 用“"Ctrl + x”,接着用“Y”保存。 + +4. 
接着升级仓库数据库并用pacman安装yaourt: + + $ sudo pacman -Sy yaourt + +### 第二步 - 安装 DNSCrypt和Unbound ### + +DNSCrypt和unbound就在archlinux仓库中,你可以用下面的pacman命令安装: + + $ sudo pacman -S dnscrypt-proxy unbound + +接着在安装的过程中按下“Y”。 + +### 第三步 - 安装 dnscrypt-autoinstall ### + +Dnscrypt-autoinstall是一个自动在基于Linux的系统上安装和配置DNSCrypt的脚本。DNSCrypt在AUR中,因此你必须使用“yaourt”命令来安装它。 + + $ yaourt -S dnscrypt-autoinstall + +注意 : + +-S = 这和pacman -S安装程序一样。 + +### 第四步 - 运行dnscrypt-autoinstall ### + +用root权限运行“dnscrypt-autoinstall”开自动配置DNSCrypt。 + + $ sudo dnscrypt-autoinstall + +下一步中输入“回车”,接着输入"Y"来选择你想使用的DNS提供者,我这里使用不带日志和DNSSEC的DNSCrypt.eu。 + +![DNSCrypt autoinstall](http://blog.linoxide.com/wp-content/uploads/2015/08/DNSCrypt-autoinstall.png) + +### 第五步 - 配置DNSCrypt和Unbound ### + +1. 打开dnscrypt的“/etc/conf.d/dnscrypt-config” 配置文件中“DNSCRYPT_LOCALIP”指向**本地ip**,“DNSCRYPT_LOCALPORT”根据你本人的意愿配置,我是用的是**40**端口。 + + $ nano /etc/conf.d/dnscrypt-config + + DNSCRYPT_LOCALIP=127.0.0.1 + DNSCRYPT_LOCALIP2=127.0.0.2 + DNSCRYPT_LOCALPORT=40 + +![DNSCrypt Configuration](http://blog.linoxide.com/wp-content/uploads/2015/08/DNSCryptConfiguration.png) + +保存并退出。 + +2. 现在你用nanao编辑器编辑“/etc/unbound/”下unbound的配置文件: + + $ nano /etc/unbound/unbound.conf + +3. 在脚本最后添加下面的行: + + do-not-query-localhost: no + forward-zone: + name: "." 
+ forward-addr: 127.0.0.1@40 + +确保**forward-addr**和DNSCrypt中的“**DNSCRYPT_LOCALPORT**”一致。你看见我是用的是**40**端口。 + +![Unbound Configuration](http://blog.linoxide.com/wp-content/uploads/2015/08/UnboundConfiguration.png) + +接着保存并退出。 + +### 第六步 - 运行DNSCrypt和Unbound,接着添加到开机启动中 ### + +请用root权限运行DNSCrypt和unbound,你可以用systemctl命令来运行: + + $ sudo systemctl start dnscrypt-proxy unbound + +将服务添加到启动中。你可以运行“systemctl enable”: + + $ sudo systemctl enable dnscrypt-proxy unbound + +命令将会创建软链接到“/usr/lib/systemd/system/”目录的服务。 + +### 第七步 - 配置resolv.conf并重启所有服务 ### + +resolv.conf是一个在linux中用于配置DNS解析器的文件。它是一个由管理员创建的纯文本,因此你必须用root权限编辑并让它不能被其他人修改。 + +用nano编辑器编辑: + + $ nano /etc/resolv.conf + +并添加本地IP “**127.0.0.1**”,现在用“chattr”命令使他只读: + + $ chattr +i /etc/resolv.conf + +注意: + +如果你想要重新编辑,用“chattr -i /etc/resolv.conf”加入写权限。 + +现在你需要重启DNSCrypt和unbound和网络; + + $ sudo systemctl restart dnscrypt-proxy unbound netctl + +如果你看到错误,检查配置文件。 + +### 测试 ### + +1. 测试DNSCrypt + +你可以通过https://dnsleaktest.com/来确认DNSCrypt,点击“开始测试”或者“扩展测试”,并在程序运行期间等待。 + +现在你可以看到NSCrypt.eu就已经与作为DNS提供商的DNSCrypt协同工作了。 + +![Testing DNSCrypt](http://blog.linoxide.com/wp-content/uploads/2015/08/TestingDNSCrypt.png) + + +2. 
测试 Unbound + +现在你应该确保unbound可以正确地与“dig”和“drill”命令一起工作。 + +这是dig命令的结果: + + $ dig linoxide.com + +我们现在看下结果,“Query time”是“533 msec”: + + ;; Query time: 533 msec + ;; SERVER: 127.0.0.1#53(127.0.0.1) + ;; WHEN: Sun Aug 30 14:48:19 WIB 2015 + ;; MSG SIZE rcvd: 188 + +再次输入命令,我们看到“Query time”是“0 msec”。 + + ;; Query time: 0 msec + ;; SERVER: 127.0.0.1#53(127.0.0.1) + ;; WHEN: Sun Aug 30 14:51:05 WIB 2015 + ;; MSG SIZE rcvd: 188 + +![Unbound Test](http://blog.linoxide.com/wp-content/uploads/2015/08/UnboundTest.png) + +DNSCrypt加密通信在DNS客户端和解析端工作的很好,并且Unbound通过缓存让相同的请求在另一次请求同速度更快。 + +### 总结 ### + +DNSCrypt是一个可以加密DNS客户端和DNS解析器之间的数据流的协议。 DNSCrypt可以在不同的操作系统上运行,无论是移动端或桌面端。选择DNS提供商还包括一些重要的事情,选择那些提供DNSSEC同时没有日志的。Unbound可被用作DNS缓存,从而加快解析过程,因为Unbound将请求缓存,那么接下来客户端请求相同的查询时,unbound将从缓存中取出保存的值。 DNSCrypt和Unbound是针对安全性和速度的一个强大的组合。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/tools/install-dnscrypt-unbound-archlinux/ + +作者:[Arul][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arulm/ From 0b70b4fe451c74fa19642dde29e0655ce061bc9b Mon Sep 17 00:00:00 2001 From: alim0x Date: Sun, 27 Sep 2015 23:03:30 +0800 Subject: [PATCH 614/697] [translated]19 - The history of Android --- .../19 - The history of Android.md | 73 ------------------- .../19 - The history of Android.md | 71 ++++++++++++++++++ 2 files changed, 71 insertions(+), 73 deletions(-) delete mode 100644 sources/talk/The history of Android/19 - The history of Android.md create mode 100644 translated/talk/The history of Android/19 - The history of Android.md diff --git a/sources/talk/The history of Android/19 - The history of Android.md b/sources/talk/The history of Android/19 - The history of Android.md deleted file mode 100644 index 4fff9d0e37..0000000000 --- a/sources/talk/The history of Android/19 - The history 
of Android.md +++ /dev/null @@ -1,73 +0,0 @@ -alim0x translating - -The history of Android -================================================================================ -![Google Music Beta running on Gingerbread.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/device-2014-03-31-110613.png) -Google Music Beta running on Gingerbread. -Photo by Ron Amadeo - -### Google Music Beta—cloud storage in lieu of a content store ### - -While Honeycomb revamped the Google Music interface, the Music app didn't go directly from the Honeycomb design to Ice Cream Sandwich. In May 2011, Google launched "[Google Music Beta][1]," an online music locker that came along with a new Google Music app. - -The new Google Music app for 2.2 and up took a few design cues from the Cooliris Gallery, of all things, going with a changing, blurry image for the background. Just about everything was transparent: the pop-up menus, the tabs at the top, and the now-playing bar at the bottom. Individual songs or entire playlists could be downloaded to the device for offline playback, making Google Music an easy way to make sure your music was on all your devices. Besides the mobile app, there was also a Webapp, which allowed Google Music to work on any desktop computer. - -Google didn't have content deals in place with the record companies to start a music store yet, so its stop-gap solution was to allow users to store songs online and stream them to a device. Today, Google has content deals for individual song purchases and all-you-can-eat subscription modes, along with the music locker service. - -### Android 4.0, Ice Cream Sandwich—the modern era ### - -![The Samsung Galaxy Nexus, Android 4.0's launch device.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/samsung-i9250-galaxy-nexus-51.jpg) -The Samsung Galaxy Nexus, Android 4.0's launch device. 
- -Released in October 2011, Android 4.0, Ice Cream Sandwich, got the OS back on track with a release spanning phones and tablets, and it was once again open source. It was the first update to come to phones since Gingerbread, which meant the majority of Android's user base went almost a year without seeing an update. 4.0 was all about shrinking the Honeycomb design to smaller devices, bringing on-screen buttons, the action bar, and the new design language to phones. - -Ice Cream Sandwich debuted on the Samsung Galaxy Nexus, one of the first Android phones with a 720p screen. Along with the higher resolution, the Galaxy Nexus pushed phones to even larger sizes with a 4.65-inch screen—almost a full inch larger than the original Nexus One. This was called "too big" by many critics, but today many Android phones are even bigger. (Five inches is "normal" now.) Ice Cream Sandwich required a lot more power than Gingerbread did, and the Galaxy Nexus delivered with a dual core, 1.2Ghz TI OMAP processor and 1GB of RAM. - -In the US, the Galaxy Nexus debuted on Verizon with an LTE modem. Unlike previous Nexus devices, the most popular model—the Verizon version—was under the control of a carrier, and Google's software and updates had to be approved by Verizon before the phone could be updated. This led to delays in updates and the removal of software Verizon didn't like, namely Google Wallet. - -Thanks to the software improvements in Ice Cream Sandwich, Google finally achieved peak button removal on a phone. With the on-screen navigation buttons, the capacitive buttons could be removed, leaving the Galaxy Nexus with only power and volume buttons. - -![Android 4.0 shrunk down a lot of the Honeycomb design.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/2home.png) -Android 4.0 shrunk down a lot of the Honeycomb design. -Photo by Ron Amadeo - -The Tron aesthetic in Honeycomb was a little much. 
Immediately in Ice Cream Sandwich, Google started turning down some of the more sci-fi aspects of the design. The sci-fi clock font changed from a folded over semi-transparent thing to a thin, elegant, normal-looking font. The water ripple touch effect on the unlock circle was removed, and the alien Honeycomb clock widget was scrapped in favor of a more minimal design. The system buttons were redesigned, too, changing from blue outlines with the occasional thick side to thin, even, white outlines. The default wallpaper changed from the blue Honeycomb spaceship interior to a streaky, broken rainbow, which added some much-needed color to the default layout. - -The Honeycomb system bar features were split into a two-bar design for phones. At the top was the traditional status bar, and at the bottom was the new system bar, which housed the three system buttons: Back, Home, and Recent. A permanent search bar was added to the top of the home screen. The bar persisted on the screen the same way the dock did, so over the five home screens, it took up 20 icon spots. On the Honeycomb unlock screen, the small inner circle could be moved anywhere outside the larger circle to unlock the device. In Ice Cream Sandwich, you had to actually hit the unlock icon with the inner circle. This new accuracy requirement allowed Google to add another option to the lock screen: a camera shortcut. Dragging the inner circle to the camera icon would directly launch the camera, skipping the home screen. - -![A Phone OS meant a ton more apps, and the notification panel became a full-screen interface again.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/appsandnotic40.png) -A Phone OS meant a ton more apps, and the notification panel became a full-screen interface again. -Photo by Ron Amadeo - -The App drawer was still tabbed, but the "My Apps" tab from Honeycomb was replaced with "Widgets," which was a simple 2×3 thumbnail view of widgets. 
Like Honeycomb, this app drawer was paginated and had to be swiped through horizontally. (Android still uses this app drawer design today.) New in the app drawer was an Android Google+ app, which existed separately for some time. Along with it came a shortcut to "Messenger," the Google+ private messaging service. ("Messenger" is not to be confused with "Messaging," the stock SMS app.) - -Since we're back to a phone now, Messaging, News and Weather, Phone, and Voice Dialer returned, and Cordy, a tablet game, was removed. Our screenshots are from the Verizon variant, which, despite being a Nexus device, was sullied by crapware like "My Verizon Mobile," and "VZ Backup Assistant." In keeping with the de-Tronification theme of Ice Cream Sandwich, the Calendar and Camera icons now looked more like something from Planet Earth rather than alien artifacts. Clock, Downloads, Phone, and Android Market got new icons, too, and "Contacts" got a new icon and a new name, becoming "People." - -The Notification panel got a big overhaul, especially when compared to the [previous Gingerbread design][2]. There was now a top header featuring the date, a settings shortcut, and a "clear all." While first Honeycomb allowed users to dismiss individual notifications by tapping on an "X" in the notification, Ice Cream Sandwich's implementation was much more elegant: just swipe the individual notifications to the left or right and they cleared. Honeycomb had blue highlights, but the blue tone was all over the place. Ice Cream Sandwich unified almost everything to a single blue (hex code #33B5E5, if you want to get specific). The background of the notification panel was made transparent, and the "handle" at the bottom changed to a minimal blue circle with an opaque black background. - -![The main page of the Android Market changed back to black.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/market.png) -The main page of the Android Market changed back to black. 
-Photo by Ron Amadeo - -The Market got yet another redesign. It finally supported portrait mode again and added Music to the lineup of content you can buy in the store. The new Market extended the cards concept that debuted in Honeycomb and was the first version to use the same application on tablets and phones. The cards on the main page usually didn't link to apps, instead pointing to special promotional pages like "staff picks" or seasonal promotions. - ----------- - -![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg) - -[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work. - -[@RonAmadeo][t] - --------------------------------------------------------------------------------- - -via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/19/ - -译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:http://arstechnica.com/gadgets/2011/05/hands-on-grooving-on-the-go-with-impressive-google-music-beta/ -[2]:http://cdn.arstechnica.net/wp-content/uploads/2014/02/32.png -[a]:http://arstechnica.com/author/ronamadeo -[t]:https://twitter.com/RonAmadeo diff --git a/translated/talk/The history of Android/19 - The history of Android.md b/translated/talk/The history of Android/19 - The history of Android.md new file mode 100644 index 0000000000..2ea47bc778 --- /dev/null +++ b/translated/talk/The history of Android/19 - The history of Android.md @@ -0,0 +1,71 @@ +安卓编年史 +================================================================================ +![姜饼上的 Google Music Beta。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/device-2014-03-31-110613.png) +姜饼上的 Google Music Beta。 +Ron Amadeo 供图 + +### Google Music Beta —— 
取代内容商店的云存储 ### + +尽管蜂巢改进了 Google Music 的界面,但是音乐应用的设计并没有从蜂巢直接进化到冰淇淋三明治。2011年5月,谷歌发布了“[Google Music Beta][1]”,和新的 Google Music 应用一同到来的在线音乐存储。 + +新 Google Music 为安卓2.2及以上版本设计,借鉴了 Cooliris 相册的设计语言,但也有改变之处,背景使用了模糊处理的图片。几乎所有东西都是透明的:弹出菜单,顶部标签页,还有底部的正在播放栏。可以下载单独的歌曲或整个播放列表到设备上离线播放,这让 Google Music 成为一个让音乐同步到你所有设备的好途径。除了移动应用外,Google Music 还有一个 Web 应用,让它可以在任何一台桌面电脑上使用。 + +谷歌和唱片公司关于内容的合约还没有谈妥,音乐商店还没准备好,所以它的权宜之计是允许用户存储音乐到线上并下载到设备上。如今谷歌除了音乐存储服务外,还有单曲购买和订阅模式。 + +### Android 4.0, 冰淇淋三明治 —— 摩登时代 ### + +![三星 Galaxy Nexus,安卓4.0的首发设备。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/samsung-i9250-galaxy-nexus-51.jpg) +三星 Galaxy Nexus,安卓4.0的首发设备。 + +安卓4.0,冰淇淋三明治,在2011年10月发布,系统发布回到正轨,带来定期发布的手机和平板,并且安卓再次开源。这是自姜饼以来手机设备的第一个更新,意味着最主要的安卓用户群体近乎一年没有见到更新了。4.0随处可见缩小版的蜂巢设计,还将虚拟按键,操作栏(Action Bar),全新的设计语言带到了手机上。 + +冰淇淋三明治在三星 Galaxy Nexus 上首次亮相,也是最早带有720p显示屏的安卓手机之一。随着分辨率的提高,Galaxy Nexus 使用了更大的4.65英寸显示屏——几乎比最初的 Nexus One 大了一整英寸。这被许多批评者认为“太大了”,但如今的安卓设备甚至更大。(5英寸现在是“正常”的。)冰淇淋三明治比姜饼的性能要求更高,Galaxy Nexus 配备了一颗双核,1.2Ghz 德州仪器 OMAP 处理器和1GB的内存。 + +在美国,Galaxy Nexus 在 Verizon 首发并且支持 LTE。不像之前的 Nexus 设备,最流行的型号——Verizon版——是在运营商的控制之下,谷歌的软件和更新在手机得到更新之前要经过 Verizon 的核准。这导致了更新的延迟以及 Verizon 不喜欢的应用被移除,即便是 Google Wallet 也不例外。 + +多亏了冰淇淋三明治的软件改进,谷歌终于达成了移除手机上按钮的目标。有了虚拟导航键,实体电容按钮就可以移除了,最终 Galaxy Nexus 仅有电源和音量是实体按键。 + +![安卓4.0将很多蜂巢的设计缩小了。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/2home.png) +安卓4.0将很多蜂巢的设计缩小了。 +Ron Amadeo 供图 + +电子质感的审美在蜂巢中显得有点多。于是在冰淇淋三明治中,谷歌开始减少科幻风的设计。科幻风的时钟字体从半透明折叠风格转变成纤细,优雅,看起来更加正常的字体。解锁环的水面波纹效果被去除了,蜂巢中的外星风格时钟小部件也被极简设计所取代。系统按钮也经过了重新设计,原先的蓝色轮廓,偶尔的厚边框变成了细的,设置带有白色轮廓。默认壁纸从蜂巢的蓝色太空船内部变成条纹状,破碎的彩虹,给默认布局增添了不少迟来的色彩。 + +蜂巢的系统栏在手机上一分为二。在顶上是传统的状态栏,底部是新的系统栏,放着三个系统按钮:后退,主屏幕,最近应用。一个固定的搜索栏放置在了主屏幕顶部。该栏以和底栏一样的方式固定在屏幕上,所以在五个主屏上,它总共占据了20个图标大小的位置。在蜂巢的锁屏上,内部的小圆圈可以向大圆圈外的任意位置滑动来解锁设备。在冰淇淋三明治,你得把小圆圈移动到解锁图标上。这个新准确度要求允许谷歌向锁屏添加新的选项:一个相机快捷方式。将小圆圈拖向相机图标会直接启动相机,跳过了主屏幕。 + +![一个手机系统意味着更多的应用,通知面板重新回到了全屏界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/appsandnotic40.png) +一个手机系统意味着更多的应用,通知面板重新回到了全屏界面。 +Ron Amadeo 供图 + 
+应用抽屉还是标签页式的,但是蜂巢中的“我的应用”标签被“部件”标签页替代,这是个简单的2×3部件略缩图视图。像蜂巢里的那样,这个应用抽屉是分页的,需要水平滑动换页。(如今安卓仍在使用这个应用抽屉设计。)应用抽屉里新增的是 Google+ 应用,后来独立存在。还有一个“Messenger”快捷方式,是 Google+ 的私密信息服务。(不要混淆 “Messenger” 和已有的 “Messaging” 短信应用。) + +因为我们现在回到了手机上,所以短信,新闻和天气,电话,以及语音拨号都回来了,以及Cordy,一个平板的游戏,被移除了。尽管不是 Nexus 设备,我们的截图还是来自 Verizon 版的设备,可以从图上看到有像 “My Verizon Mobile” 和 “VZ Backup Assistant” 这样没用的应用。为了和冰淇淋三明治的去电子风格主题一致,日历和相机图标现在看起来更像是来自地球的东西而不是来自外星球。时钟,下载,电话,以及安卓市场同样得到了新图标,联系人“Contacts”获得了新图标,还有新名字“People”。 + +通知面板进行了大改造,特别是和[之前姜饼中的设计][2]相比而言。面板头部有个日期,一个设置的快捷方式,以及“清除所有”按钮。虽然蜂巢的第一个版本就允许用户通过通知右边的“X”消除单个通知,但是冰淇淋三明治的实现更加优雅:只要从左向右滑动通知即可。蜂巢有着蓝色高亮,但是蓝色色调到处都是。冰淇淋三明治几乎把所有地方的蓝色统一成一个(如果你想知道确定的值,hex码是#33B5E5)。通知面板的背景是透明的,底部的“把手”变为一个简单的小蓝圈,带着不透明的黑色背景。 + +![安卓市场的主页背景变成了黑色。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/market.png) +安卓市场的主页背景变成了黑色。 +Ron Amadeo 供图 + +市场获得了又一个新设计。它终于再次支持纵向模式,并且添加了音乐到商店中,你可以从中购买音乐。新的市场拓展了从蜂巢中引入的卡片概念,它还是第一个同时使用在手机和平板上的版本。主页上的卡片通常不是链接到应用的,而是指向特别的促销页面,像是“编辑精选”或季度促销。 + +---------- + +![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg) + +[Ron Amadeo][a] / Ron是Ars Technica的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。 + +[@RonAmadeo][t] + +-------------------------------------------------------------------------------- + +via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/19/ + +译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[1]:http://arstechnica.com/gadgets/2011/05/hands-on-grooving-on-the-go-with-impressive-google-music-beta/ +[2]:http://cdn.arstechnica.net/wp-content/uploads/2014/02/32.png +[a]:http://arstechnica.com/author/ronamadeo +[t]:https://twitter.com/RonAmadeo From 22441ffaf64c6fc4365038222ff827a2b03395ea Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 27 Sep 2015 23:09:33 +0800 Subject: [PATCH 615/697] PUB:20150906 How to Install DNSCrypt and Unbound in 
Arch Linux @geekpi --- ...tall DNSCrypt and Unbound in Arch Linux.md | 174 ++++++++++++++++++ ...tall DNSCrypt and Unbound in Arch Linux.md | 173 ----------------- 2 files changed, 174 insertions(+), 173 deletions(-) create mode 100644 published/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md delete mode 100644 translated/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md diff --git a/published/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md b/published/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md new file mode 100644 index 0000000000..c83f639e7e --- /dev/null +++ b/published/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md @@ -0,0 +1,174 @@ +如何在 Arch Linux 中安装 DNSCrypt 和 Unbound +================================================================================ + +**DNSCrypt** 是一个用于对 DNS 客户端和 DNS 解析器之间通信进行加密和验证的协议。它可以阻止 DNS 欺骗或中间人攻击。 DNSCrypt 可用于大多数的操作系统,包括 Linux,Windows,MacOSX ,Android 和 iOS。而在本教程中我使用的是内核为4.1的 archlinux。 + +**Unbound** 是用来解析收到的任意 DNS 查询的 DNS 缓存服务器。如果用户请求一个新的查询,unbound 会将其存储到缓存中,并且当用户再次请求相同的请求时,unbound 将采用已经保存的缓存。这将比第一次请求查询更快。 + +现在我将尝试安装“DNSCrypt”,以确保 DNS 的通信的安全,并用“Unbound”加速。 + +### 第一步 - 安装 yaourt ### + +Yaourt 是AUR(ARCH 用户仓库)的辅助工具之一,它可以使用户能够很容易地从 AUR 安装程序。 Yaourt 和 pacman 使用相同的语法,你可以使用 yaourt 安装该程序。下面是安装 yaourt 的简单方法: + +1、 用 nano 或者 vi 编辑 arch 仓库配置文件,存放在“/etc/pacman.conf”中。 + + $ nano /etc/pacman.conf + +2、 在 yaourt 仓库底部添加,粘贴下面的脚本: + + [archlinuxfr] + SigLevel = Never + Server = http://repo.archlinux.fr/$arch + +3、 用“Ctrl + x”,接着用“Y”保存。 + +4、 接着升级仓库数据库并用pacman安装yaourt: + + $ sudo pacman -Sy yaourt + +### 第二步 - 安装 DNSCrypt 和 Unbound ### + +DNSCrypt 和 unbound 就在 archlinux 仓库中,你可以用下面的 pacman 命令安装: + + $ sudo pacman -S dnscrypt-proxy unbound + +接着在安装的过程中按下“Y”。 + +### 第三步 - 安装 dnscrypt-autoinstall ### + +Dnscrypt-autoinstall 是一个在基于 Linux 的系统上自动安装和配置 DNSCrypt 的脚本。DNSCrypt 在 AUR 中,因此你必须使用“yaourt”命令来安装它。 + + $ yaourt -S dnscrypt-autoinstall + +注意 : + +-S = 这和 pacman -S 安装程序一样。 + 
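上面第一步中手工编辑 pacman.conf 追加 [archlinuxfr] 仓库的操作,也可以写成一个幂等的小脚本:重复执行时不会把同一小节追加两次。下面是一个假设性的 shell 草图(函数名 add_archlinuxfr_repo 是本文为演示虚构的,原文并未提供这样的脚本);实际使用时把 /etc/pacman.conf 作为参数传入,并以 root 权限运行:

```shell
# 假设性示例:幂等地向指定的 pacman.conf 追加 archlinuxfr 仓库配置
add_archlinuxfr_repo() {
    conf=$1
    # 若 [archlinuxfr] 小节已存在则直接返回,避免重复追加
    if grep -q '^\[archlinuxfr\]' "$conf" 2>/dev/null; then
        return 0
    fi
    # 用引号括起的 EOF 防止 $arch 被 shell 展开,按字面写入配置文件
    cat >> "$conf" <<'EOF'

[archlinuxfr]
SigLevel = Never
Server = http://repo.archlinux.fr/$arch
EOF
}
```

脚本追加完成后,再照常执行 `sudo pacman -Sy yaourt` 即可,效果与手工编辑相同。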
+### 第四步 - 运行 dnscrypt-autoinstall ### + +用 root 权限运行“dnscrypt-autoinstall”来自动配置 DNSCrypt。 + + $ sudo dnscrypt-autoinstall + +下一步中按下“回车”,接着输入"Y"来选择你想使用的 DNS 提供者,我这里使用不带日志和 DNSSEC 的 DNSCrypt.eu。 + +![DNSCrypt autoinstall](http://blog.linoxide.com/wp-content/uploads/2015/08/DNSCrypt-autoinstall.png) + +### 第五步 - 配置 DNSCrypt 和 Unbound ### + +1、 打开 dnscrypt 的“/etc/conf.d/dnscrypt-config” ,确认配置文件中“DNSCRYPT_LOCALIP”指向**本地ip**,“DNSCRYPT_LOCALPORT”根据你本人的意愿配置,我是用的是**40**端口。 + + $ nano /etc/conf.d/dnscrypt-config + + DNSCRYPT_LOCALIP=127.0.0.1 + DNSCRYPT_LOCALIP2=127.0.0.2 + DNSCRYPT_LOCALPORT=40 + +![DNSCrypt Configuration](http://blog.linoxide.com/wp-content/uploads/2015/08/DNSCryptConfiguration.png) + +保存并退出。 + +2、 现在你用 nano 编辑器编辑“/etc/unbound/”下 unbound 的配置文件: + + $ nano /etc/unbound/unbound.conf + +3、 在脚本最后添加下面的行: + + do-not-query-localhost: no + forward-zone: + name: "." + forward-addr: 127.0.0.1@40 + +确保**forward-addr**和DNSCrypt中的“**DNSCRYPT_LOCALPORT**”一致。如你所见,用的是**40**端口。 + +![Unbound Configuration](http://blog.linoxide.com/wp-content/uploads/2015/08/UnboundConfiguration.png) + +接着保存并退出。 + +### 第六步 - 运行 DNSCrypt 和 Unbound,接着添加到开机启动中 ### + +请用 root 权限运行 DNSCrypt 和 unbound,你可以用 systemctl 命令来运行: + + $ sudo systemctl start dnscrypt-proxy unbound + +将服务添加到启动中。你可以运行“systemctl enable”: + + $ sudo systemctl enable dnscrypt-proxy unbound + +命令将会创建软链接到“/usr/lib/systemd/system/”目录的服务。 + +### 第七步 - 配置 resolv.conf 并重启所有服务 ### + +resolv.conf 是一个在 linux 中用于配置 DNS 解析器的文件。它是一个由管理员创建的纯文本,因此你必须用 root 权限编辑并让它不能被其他人修改。 + +用 nano 编辑器编辑: + + $ nano /etc/resolv.conf + +并添加本地IP “**127.0.0.1**”。现在用“chattr”命令使他只读: + + $ chattr +i /etc/resolv.conf + +注意: + +如果你想要重新编辑,用“chattr -i /etc/resolv.conf”加入写权限。 + +现在你需要重启 DNSCrypt 和 unbound 和网络; + + $ sudo systemctl restart dnscrypt-proxy unbound netctl + +如果你看到错误,检查配置文件。 + +### 测试 ### + +1、 测试 DNSCrypt + +你可以通过 https://dnsleaktest.com/ 来确认 DNSCrypt,点击“标准测试”或者“扩展测试”,然后等待程序运行结束。 + +现在你可以看到 DNSCrypt.eu 就已经与作为 DNS 提供商的 DNSCrypt 协同工作了。 + +![Testing 
DNSCrypt](http://blog.linoxide.com/wp-content/uploads/2015/08/TestingDNSCrypt.png) + + +2、 测试 Unbound + +现在你应该确保 unbound 可以正确地与“dig”和“drill”命令一起工作。 + +这是 dig 命令的结果: + + $ dig linoxide.com + +我们现在看下结果,“Query time”是“533 msec”: + + ;; Query time: 533 msec + ;; SERVER: 127.0.0.1#53(127.0.0.1) + ;; WHEN: Sun Aug 30 14:48:19 WIB 2015 + ;; MSG SIZE rcvd: 188 + +再次输入命令,我们看到“Query time”是“0 msec”。 + + ;; Query time: 0 msec + ;; SERVER: 127.0.0.1#53(127.0.0.1) + ;; WHEN: Sun Aug 30 14:51:05 WIB 2015 + ;; MSG SIZE rcvd: 188 + +![Unbound Test](http://blog.linoxide.com/wp-content/uploads/2015/08/UnboundTest.png) + +DNSCrypt 对 DNS 客户端和解析端之间的通讯加密做的很好,并且 Unbound 通过缓存让相同的请求在另一次请求同速度更快。 + +### 总结 ### + +DNSCrypt 是一个可以加密 DNS 客户端和 DNS 解析器之间的数据流的协议。 DNSCrypt 可以在不同的操作系统上运行,无论是移动端或桌面端。选择 DNS 提供商还包括一些重要的事情,应选择那些提供 DNSSEC 同时没有日志的。Unbound 可被用作 DNS 缓存,从而加快解析过程,因为 Unbound 将请求缓存,那么接下来客户端请求相同的查询时,unbound 将从缓存中取出保存的值。 DNSCrypt 和 Unbound 是针对安全性和速度的一个强大的组合。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/tools/install-dnscrypt-unbound-archlinux/ + +作者:[Arul][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arulm/ diff --git a/translated/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md b/translated/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md deleted file mode 100644 index 9977cbd09f..0000000000 --- a/translated/tech/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md +++ /dev/null @@ -1,173 +0,0 @@ -如何在Arch Linux中安装DNSCrypt和Unbound -================================================================================ -**DNSCrypt**是一个用于加密和验证的DNS客户端和一个DNS解析器之间通信的协议。阻止DNS欺骗或中间人攻击。 DNSCrypt可用于大多数的操作系统,包括Linux,Windows,MacOSX的Android和iOS。而在本教程中我使用的是内核为4.1的archlinux。 - 
-Unbound是用来解析收到的任意DNS查询的DNS缓存服务器。如果用户请求一个新的查询,然后unbound将其存储到缓存中,并且当用户再次请求相同的请求时,unbound将采用已经保存的缓存。这将是第一次请求查询更快。 - -现在我将尝试安装“DNSCrypt”,以确保DNS的通信的安全,并用“Unbound”加速。 - -### 第一步 - 安装yaourt ### - -Yaourt是AUR(ARCH用户仓库)的辅助,使用户能够很容易地从AUR安装程序。 Yaourt和pacman一样使用相同的语法,这样你就可以使用yaourt安装该程序。下面是安装yaourt的简单方法: - -1. 用nano或者vi编辑arch仓库配置文件,保存在“/etc/pacman.conf”中。 - - $ nano /etc/pacman.conf - -2. 在底部填上你的yaourt仓库,粘贴下面的脚本: - - [archlinuxfr] - SigLevel = Never - Server = http://repo.archlinux.fr/$arch - -3. 用“"Ctrl + x”,接着用“Y”保存。 - -4. 接着升级仓库数据库并用pacman安装yaourt: - - $ sudo pacman -Sy yaourt - -### 第二步 - 安装 DNSCrypt和Unbound ### - -DNSCrypt和unbound就在archlinux仓库中,你可以用下面的pacman命令安装: - - $ sudo pacman -S dnscrypt-proxy unbound - -接着在安装的过程中按下“Y”。 - -### 第三步 - 安装 dnscrypt-autoinstall ### - -Dnscrypt-autoinstall是一个自动在基于Linux的系统上安装和配置DNSCrypt的脚本。DNSCrypt在AUR中,因此你必须使用“yaourt”命令来安装它。 - - $ yaourt -S dnscrypt-autoinstall - -注意 : - --S = 这和pacman -S安装程序一样。 - -### 第四步 - 运行dnscrypt-autoinstall ### - -用root权限运行“dnscrypt-autoinstall”开自动配置DNSCrypt。 - - $ sudo dnscrypt-autoinstall - -下一步中输入“回车”,接着输入"Y"来选择你想使用的DNS提供者,我这里使用不带日志和DNSSEC的DNSCrypt.eu。 - -![DNSCrypt autoinstall](http://blog.linoxide.com/wp-content/uploads/2015/08/DNSCrypt-autoinstall.png) - -### 第五步 - 配置DNSCrypt和Unbound ### - -1. 打开dnscrypt的“/etc/conf.d/dnscrypt-config” 配置文件中“DNSCRYPT_LOCALIP”指向**本地ip**,“DNSCRYPT_LOCALPORT”根据你本人的意愿配置,我是用的是**40**端口。 - - $ nano /etc/conf.d/dnscrypt-config - - DNSCRYPT_LOCALIP=127.0.0.1 - DNSCRYPT_LOCALIP2=127.0.0.2 - DNSCRYPT_LOCALPORT=40 - -![DNSCrypt Configuration](http://blog.linoxide.com/wp-content/uploads/2015/08/DNSCryptConfiguration.png) - -保存并退出。 - -2. 现在你用nanao编辑器编辑“/etc/unbound/”下unbound的配置文件: - - $ nano /etc/unbound/unbound.conf - -3. 在脚本最后添加下面的行: - - do-not-query-localhost: no - forward-zone: - name: "." 
- forward-addr: 127.0.0.1@40 - -确保**forward-addr**和DNSCrypt中的“**DNSCRYPT_LOCALPORT**”一致。你看见我是用的是**40**端口。 - -![Unbound Configuration](http://blog.linoxide.com/wp-content/uploads/2015/08/UnboundConfiguration.png) - -接着保存并退出。 - -### 第六步 - 运行DNSCrypt和Unbound,接着添加到开机启动中 ### - -请用root权限运行DNSCrypt和unbound,你可以用systemctl命令来运行: - - $ sudo systemctl start dnscrypt-proxy unbound - -将服务添加到启动中。你可以运行“systemctl enable”: - - $ sudo systemctl enable dnscrypt-proxy unbound - -命令将会创建软链接到“/usr/lib/systemd/system/”目录的服务。 - -### 第七步 - 配置resolv.conf并重启所有服务 ### - -resolv.conf是一个在linux中用于配置DNS解析器的文件。它是一个由管理员创建的纯文本,因此你必须用root权限编辑并让它不能被其他人修改。 - -用nano编辑器编辑: - - $ nano /etc/resolv.conf - -并添加本地IP “**127.0.0.1**”,现在用“chattr”命令使他只读: - - $ chattr +i /etc/resolv.conf - -注意: - -如果你想要重新编辑,用“chattr -i /etc/resolv.conf”加入写权限。 - -现在你需要重启DNSCrypt和unbound和网络; - - $ sudo systemctl restart dnscrypt-proxy unbound netctl - -如果你看到错误,检查配置文件。 - -### 测试 ### - -1. 测试DNSCrypt - -你可以通过https://dnsleaktest.com/来确认DNSCrypt,点击“开始测试”或者“扩展测试”,并在程序运行期间等待。 - -现在你可以看到NSCrypt.eu就已经与作为DNS提供商的DNSCrypt协同工作了。 - -![Testing DNSCrypt](http://blog.linoxide.com/wp-content/uploads/2015/08/TestingDNSCrypt.png) - - -2. 
测试 Unbound - -现在你应该确保unbound可以正确地与“dig”和“drill”命令一起工作。 - -这是dig命令的结果: - - $ dig linoxide.com - -我们现在看下结果,“Query time”是“533 msec”: - - ;; Query time: 533 msec - ;; SERVER: 127.0.0.1#53(127.0.0.1) - ;; WHEN: Sun Aug 30 14:48:19 WIB 2015 - ;; MSG SIZE rcvd: 188 - -再次输入命令,我们看到“Query time”是“0 msec”。 - - ;; Query time: 0 msec - ;; SERVER: 127.0.0.1#53(127.0.0.1) - ;; WHEN: Sun Aug 30 14:51:05 WIB 2015 - ;; MSG SIZE rcvd: 188 - -![Unbound Test](http://blog.linoxide.com/wp-content/uploads/2015/08/UnboundTest.png) - -DNSCrypt加密通信在DNS客户端和解析端工作的很好,并且Unbound通过缓存让相同的请求在另一次请求同速度更快。 - -### 总结 ### - -DNSCrypt是一个可以加密DNS客户端和DNS解析器之间的数据流的协议。 DNSCrypt可以在不同的操作系统上运行,无论是移动端或桌面端。选择DNS提供商还包括一些重要的事情,选择那些提供DNSSEC同时没有日志的。Unbound可被用作DNS缓存,从而加快解析过程,因为Unbound将请求缓存,那么接下来客户端请求相同的查询时,unbound将从缓存中取出保存的值。 DNSCrypt和Unbound是针对安全性和速度的一个强大的组合。 - --------------------------------------------------------------------------------- - -via: http://linoxide.com/tools/install-dnscrypt-unbound-archlinux/ - -作者:[Arul][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arulm/ From 91d7f86ad3dbe0245899b54acd9294332765f97d Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 28 Sep 2015 10:44:42 +0800 Subject: [PATCH 616/697] PUB:20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on @strugglingyouth --- ... 
which CPU core a process is running on.md | 21 +++++++++---------- 1 file changed, 10 insertions(+), 11 deletions(-) rename {translated/tech => published}/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md (63%) diff --git a/translated/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md b/published/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md similarity index 63% rename from translated/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md rename to published/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md index d901b95030..be9b16e5e4 100644 --- a/translated/tech/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md +++ b/published/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md @@ -1,16 +1,16 @@ -Linux 有问必答--如何找出哪个 CPU 内核正在运行进程 +Linux 有问必答:如何知道进程运行在哪个 CPU 内核上? ================================================================================ >问题:我有个 Linux 进程运行在多核处理器系统上。怎样才能找出哪个 CPU 内核正在运行该进程? 
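在往下看各种方法之前,可以先补充一条思路:直接读取 /proc 文件系统。按照 proc(5) 手册,/proc/&lt;pid&gt;/stat 的第 39 个字段(processor)记录的就是该进程上一次被调度运行时所在的 CPU 内核。下面是一个假设性的 shell 函数草图(函数名 cpu_of 是为演示虚构的):

```shell
# 假设性示例:从 /proc/<pid>/stat 中读取进程最后运行所在的 CPU 内核编号
cpu_of() {
    stat=$(cat "/proc/$1/stat") || return 1
    # 第 2 个字段 comm 可能包含空格和括号,先贪婪剥离到最后一个 ") " 之后
    rest=${stat##*) }
    # rest 从第 3 个字段(state)开始,因此第 39 个字段 processor 位于其中第 37 个位置
    set -- $rest
    echo "${37}"
}

# 用法示意:查看当前 shell 进程所在的 CPU 内核
cpu_of $$
```

注意,这个值只是进程“最近一次”运行所在的内核;如果进程没有被固定,内核随时可能把它调度到别的内核上,这一点和 ps 输出中的 PSR 列是一样的。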
-当你运行需要较高性能的 HPC 程序或非常消耗网络资源的程序在 [多核 NUMA 处理器上][1],CPU/memory 的亲和力是限度其发挥最大性能的重要因素之一。在同一 NUMA 节点上调整程序的亲和力可以减少远程内存访问。像英特尔 Sandy Bridge 处理器,该处理器有一个集成的 PCIe 控制器,要调整同一 NUMA 节点的网络 I/O 负载可以使用 网卡控制 PCI 和 CPU 亲和力。 +当你在 [多核 NUMA 处理器上][1]运行需要较高性能的 HPC(高性能计算)程序或非常消耗网络资源的程序时,CPU/memory 的亲和力是限度其发挥最大性能的重要因素之一。在同一 NUMA 节点上调度最相关的进程可以减少缓慢的远程内存访问。像英特尔 Sandy Bridge 处理器,该处理器有一个集成的 PCIe 控制器,你可以在同一 NUMA 节点上调度网络 I/O 负载(如网卡)来突破 PCI 到 CPU 亲和力限制。 -由于性能优化和故障排除只是一部分,你可能想知道哪个 CPU 内核(或 NUMA 节点)被调度运行特定的进程。 +作为性能优化和故障排除的一部分,你可能想知道特定的进程被调度到哪个 CPU 内核(或 NUMA 节点)上运行。 -这里有几种方法可以 **找出哪个 CPU 内核被调度来运行 给定的 Linux 进程或线程**。 +这里有几种方法可以 **找出哪个 CPU 内核被调度来运行给定的 Linux 进程或线程**。 ### 方法一 ### -如果一个进程明确的被固定到 CPU 的特定内核,如使用 [taskset][2] 命令,你可以使用 taskset 命令找出被固定的 CPU 内核: +如果一个进程使用 [taskset][2] 命令明确的被固定(pinned)到 CPU 的特定内核上,你可以使用 taskset 命令找出被固定的 CPU 内核: $ taskset -c -p @@ -22,19 +22,18 @@ Linux 有问必答--如何找出哪个 CPU 内核正在运行进程 pid 5357's current affinity list: 5 -输出显示这个过程被固定在 CPU 内核 5。 +输出显示这个过程被固定在 CPU 内核 5上。 但是,如果你没有明确固定进程到任何 CPU 内核,你会得到类似下面的亲和力列表。 pid 5357's current affinity list: 0-11 -输出表明,该进程可能会被安排在从0到11中的任何一个 CPU 内核。在这种情况下,taskset 不会识别该进程当前被分配给哪个 CPU 内核,你应该使用如下所述的方法。 +输出表明该进程可能会被安排在从0到11中的任何一个 CPU 内核。在这种情况下,taskset 不能识别该进程当前被分配给哪个 CPU 内核,你应该使用如下所述的方法。 ### 方法二 ### ps 命令可以告诉你每个进程/线程目前分配到的 (在“PSR”列)CPU ID。 - $ ps -o pid,psr,comm -p ---------- @@ -42,7 +41,7 @@ ps 命令可以告诉你每个进程/线程目前分配到的 (在“PSR”列 PID PSR COMMAND 5357 10 prog -输出表示进程的 PID 为 5357(名为"prog")目前在CPU 内核 10 上运行着。如果该过程没有被固定,PSR 列可以保持随着时间变化,内核可能调度该进程到不同位置。 +输出表示进程的 PID 为 5357(名为"prog")目前在CPU 内核 10 上运行着。如果该过程没有被固定,PSR 列会根据内核可能调度该进程到不同内核而改变显示。 ### 方法三 ### @@ -72,11 +71,11 @@ via: http://ask.xmodulo.com/cpu-core-process-is-running.html 作者:[Dan Nanni][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://ask.xmodulo.com/author/nanni 
[1]:http://xmodulo.com/identify-cpu-processor-architecture-linux.html [2]:http://xmodulo.com/run-program-process-specific-cpu-cores-linux.html -[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html +[3]:https://linux.cn/article-3141-1.html From 133a93b9dbf359fbc822dfa5db5ab7aa4b52f303 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 28 Sep 2015 11:00:26 +0800 Subject: [PATCH 617/697] PUB:20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility @GOLinux --- ...l Uptime of System With tuptime Utility.md | 55 ++++++++++--------- 1 file changed, 29 insertions(+), 26 deletions(-) rename {translated/tech => published}/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md (71%) diff --git a/translated/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md b/published/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md similarity index 71% rename from translated/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md rename to published/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md index 0d242c0be2..4c6356ec4e 100644 --- a/translated/tech/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md +++ b/published/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md @@ -1,19 +1,21 @@ -使用tuptime工具查看Linux服务器系统历史开机时间统计 +使用 tuptime 工具查看 Linux 服务器系统的开机时间的历史和统计 ================================================================================ -你们可以使用下面的工具来查看Linux或者类Unix系统运行了多长时间: -- uptime : 告诉你服务器运行了多长的时间。 -- lastt : 显示重启和关机时间。 -- tuptime : 报告系统的历史运行时间和统计运行时间,这是指重启之间的运行时间。和uptime命令类似,不过输出结果更有意思。 -#### 找出系统上次重启时间和日期 #### +你可以使用下面的工具来查看 Linux 或类 Unix 系统运行了多长时间: + +- uptime : 告诉你服务器运行了多长的时间。 +- lastt : 显示重启和关机时间。 +- tuptime : 
报告系统的运行时间历史和运行时间统计,这是指重启之间的运行时间。和 uptime 命令类似,不过输出结果更有意思。 + +### 找出系统上次重启时间和日期 ### 你[可以使用下面的命令来获取Linux操作系统的上次重启和关机时间及日期][1](在OSX/类Unix系统上也可以用): - ## Just show system reboot and shutdown date and time ### + ### 显示系统重启和关机时间 who -b last reboot last shutdown - ## Uptime info ## + ### 开机信息 uptime cat /proc/uptime awk '{ print "up " $1 /60 " minutes"}' /proc/uptime @@ -23,23 +25,24 @@ ![Fig.01: Various Linux commands in action to find out the server uptime](http://s0.cyberciti.org/uploads/cms/2015/09/uptime-w-awk-outputs.jpg) -图像01:用于找出服务器开机时间的多个Linux命令 +*图01:用于找出服务器开机时间的多个Linux命令* -**跟tuptime问打个招呼吧** +###跟 tuptime 问打个招呼吧### + +tuptime 命令行工具可以报告基于 Linux 的系统上的下列信息: -tuptime命令行工具可以报告基于Linux的系统上的下列信息: 1. 系统启动次数统计 2. 注册首次启动时间(也就是安装时间) -1. 正常关机和意外关机统计 -1. 平均开机时间和故障停机时间 -1. 当前开机时间 -1. 首次启动以来的开机和故障停机率 -1. 累积系统开机时间、故障停机时间和合计 -1. 报告每次启动、开机时间、关机和故障停机时间 +3. 正常关机和意外关机统计 +4. 平均开机时间和故障停机时间 +5. 当前开机时间 +6. 首次启动以来的开机和故障停机率 +7. 累积系统开机时间、故障停机时间和合计 +8. 报告每次启动、开机时间、关机和故障停机时间 #### 安装 #### -输入[下面的命令来克隆git仓库到Linux系统中][2]: +输入[下面的命令来克隆 git 仓库到 Linux 系统中][2]: $ cd /tmp $ git clone https://github.com/rfrail3/tuptime.git @@ -51,17 +54,17 @@ tuptime命令行工具可以报告基于Linux的系统上的下列信息: ![Fig.02: Cloning a git repo](http://s0.cyberciti.org/uploads/cms/2015/09/git-install-tuptime.jpg) -图像02:克隆git仓库 +*图02:克隆git仓库* -确保你随sys,optparse,os,re,string,sqlite3,datetime,disutils安装了Python v2.7和本地模块。 +确保你安装了带有 sys,optparse,os,re,string,sqlite3,datetime,disutils 和 locale 模块的 Python v2.7。 你可以像下面这样来安装: $ sudo tuptime-install.sh -或者,可以手工安装(根据基于systemd或非systemd的Linux的推荐方法): +或者,可以手工安装(基于 systemd 或非 systemd ): -$ sudo cp /tmp/tuptime/latest/cron.d/tuptime /etc/cron.d/tuptime + $ sudo cp /tmp/tuptime/latest/cron.d/tuptime /etc/cron.d/tuptime 如果系统是systemd的,拷贝服务文件并启用: @@ -73,7 +76,7 @@ $ sudo cp /tmp/tuptime/latest/cron.d/tuptime /etc/cron.d/tuptime $ sudo cp /tmp/tuptime/latest/init.d/tuptime.init.d-debian7 /etc/init.d/tuptime $ sudo update-rc.d tuptime defaults -**运行** +####运行#### 只需输入以下命令: @@ -83,9 +86,9 @@ $ sudo cp 
/tmp/tuptime/latest/cron.d/tuptime /etc/cron.d/tuptime ![Fig.03: tuptime in action](http://s0.cyberciti.org/uploads/cms/2015/09/tuptime-output.jpg) -图像03:tuptime工作中 +*图03:tuptime工作中* -在更新内核后,我重启了系统,然后再次输入了同样的命令: +在一次更新内核后,我重启了系统,然后再次输入了同样的命令: $ sudo tuptime System startups: 2 since 03:52:16 PM 08/21/2015 @@ -142,7 +145,7 @@ via: http://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-o 作者:Vivek Gite 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1fe54bbd4bd4b686ecc971786c6b9eb3c0762e4c Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 28 Sep 2015 18:09:08 +0800 Subject: [PATCH 618/697] Delete 20150925 HTTP 2 Now Fully Supported in NGINX Plus.md --- ...TTP 2 Now Fully Supported in NGINX Plus.md | 120 ------------------ 1 file changed, 120 deletions(-) delete mode 100644 sources/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md diff --git a/sources/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md b/sources/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md deleted file mode 100644 index 5d1059a38f..0000000000 --- a/sources/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md +++ /dev/null @@ -1,120 +0,0 @@ -struggling 翻译中 - -HTTP/2 Now Fully Supported in NGINX Plus -================================================================================ -Earlier this week we released [NGINX Plus R7][1] with support for HTTP/2. As the latest standard for the HTTP protocol, HTTP/2 is designed to bring increased performance and security to modern web applications. - -The HTTP/2 implementation in NGINX Plus works seamlessly with existing sites and applications. Minimal changes are required, as NGINX Plus delivers HTTP/1.x and HTTP/2 traffic in parallel for the best experience, no matter what browser your users choose. 
- -HTTP/2 support is available in the optional **nginx‑plus‑http2** package only. The **nginx‑plus** and **nginx‑plus‑extras** packages provide SPDY support and are currently recommended for production sites because of wider browser support and code maturity. - -### Why Move to HTTP/2? ### - -HTTP/2 makes data transfer more efficient and more secure for your applications. HTTP/2 adds five key features that improve performance when compared to HTTP/1.x: - -- **True multiplexing** – HTTP/1.1 enforces strict in-order completion of requests that come in over a keepalive connection. A request must be satisfied before processing on the next one can begin. HTTP/2 eliminates this requirement and allows requests to be satisfied in parallel and out of order. -- **Single, persistent connection** – As HTTP/2 allows for true multiplexing of requests, all objects on a web page can now be downloaded in parallel over a single connection. WIth HTTP/1.x, multiple connections are used to download resources in parallel, leading to inefficient use of the underlying TCP protocol. -- **Binary encoding** – Header information is sent in compact, binary format, rather than plain text, saving bytes on the wire. -- **Header compression** – Headers are compressed using a purpose-built algorithm, HPACK compression, which further reduces the amount of data crossing the network. -- **SSL/TLS encryption** – With HTTP/2, SSL/TLS encryption is mandatory. This is not enforced in the [RFC][2], which allows for plain-text HTTP/2, but rather by all web browsers that currently implement HTTP/2. SSL/TLS makes your site more secure, and with all the performance improvements in HTTP/2, the performance penalty from encryption and decryption is mitigated. - -To learn more about HTTP/2: - -- Please read our [white paper][3], which covers everything you need to know about HTTP/2. -- Download our [special edition of the High Performance Browser Networking ebook][4] by Ilya Grigorik of Google. 
- -### How NGINX Plus Implements HTTP/2 ### - -Our implementation of HTTP/2 is based on our support for SPDY, which is widely deployed (nearly 75% of websites that use SPDY use NGINX or NGINX Plus). With NGINX Plus, you can deploy HTTP/2 with very little change to your application infrastructure. This section discusses how NGINX Plus implements support for HTTP/2. - -#### An HTTP/2 Gateway #### - -![](https://www.nginx.com/wp-content/uploads/2015/09/http2-27-1024x300.png) - -NGINX Plus acts an HTTP/2 gateway. It talks HTTP/2 to client web browsers that support it, but translates HTTP/2 requests back to HTTP/1.x (or FastCGI, SCGI, uWSGI, etc. – whatever protocol you are currently using) for communication with back-end servers. - -#### Backward Compatibility #### - -![](https://www.nginx.com/wp-content/uploads/2015/09/http2-281-1024x581.png) - -For the foreseeable future you’ll need to support HTTP/2 and HTTP/1.x side by side. As of this writing, over 50% of users already run a web browser that [supports HTTP/2][5], but this also means almost 50% don’t. - -To support both HTTP/1.x and HTTP/2 side by side, NGINX Plus implements the Next Protocol Negotiation (NPN) extension to TLS. When a web browser connects to a server, it sends a list of supported protocols to the server. If the browser includes h2 – that is, HTTP/2 – in the list of supported protocols, NGINX Plus uses HTTP/2 for connections to that browser. If the browser doesn’t implement NPN, or doesn’t send h2 in its list of supported protocols, NGINX Plus falls back to HTTP/1.x. - -### Moving to HTTP/2 ### - -NGINX, Inc. aims to make the transition to HTTP/2 as seamless as possible. This section goes through the changes that need to be made to enable HTTP/2 for your applications, which include just a few changes to the configuration of NGINX Plus. - -#### Prerequisites #### - -Upgrade to the NGINX Plus R7 **nginx‑plus‑http2** package. 
Note that an HTTP/2-enabled version of the **nginx‑plus‑extras** package is not available at this time. - -#### Redirecting All Traffic to SSL/TLS #### - -If your app is not already encrypted with SSL/TLS, now would be a good time to make that move. Encrypting your app protects you from spying as well as from man-in-the-middle attacks. Some search engines even reward encrypted sites with [improved rankings][6] in search results. The following configuration block redirects all plain HTTP requests to the encrypted version of the site. - - server { - listen 80; - location / { - return 301 https://$host$request_uri; - } - } - -#### Enabling HTTP/2 #### - -To enable HTTP/2 support, simply add the http2 parameter to all [listen][7] directives. Also include the ssl parameter, required because browsers do not support HTTP/2 without encryption. - - server { - listen 443 ssl http2 default_server; - - ssl_certificate server.crt; - ssl_certificate_key server.key; - … - } - -If necessary, restart NGINX Plus, for example by running the nginx -s reload command. To verify that HTTP/2 translation is working, you can use the “HTTP/2 and SPDY indicator” plug-in available for [Google Chrome][8] and [Firefox][9]. - -### Caveats ### - -- Before installing the **nginx‑plus‑http2** package, you must remove the spdy parameter on all listen directives in your configuration (replace it with the http2 and ssl parameters to enable support for HTTP/2). With this package, NGINX Plus fails to start if any listen directives have the spdy parameter. -- If you are using a web application firewall (WAF) that is sitting in front of NGINX Plus, ensure that it is capable of parsing HTTP/2, or move it behind NGINX Plus. -- The “Server Push” feature defined in the HTTP/2 RFC is not supported in this release. Future releases of NGINX Plus might include it. -- NGINX Plus R7 supports both SPDY and HTTP/2. In a future release we will deprecate support for SPDY. 
Google is [deprecating SPDY][10] in early 2016, making it unnecessary to support both protocols at that point. -- If [ssl_prefer_server_ciphers][11] is set to on and/or a list of [ssl_ciphers][12] that are defined in [Appendix A: TLS 1.2 Ciper Suite Black List][13] is used, the browser will experience handshake-errors and not work. Please refer to [section 9.2.2 of the HTTP/2 RFC][14] for more details.- - -### Special Thanks ### - -NGINX, Inc. would like to thank [Dropbox][15] and [Automattic][16], who are heavy users of our software and graciously cosponsored the development of our HTTP/2 implementation. Their contributions have helped accelerate our ability to bring this software to you, and we hope you are able to support them in turn. - -![](https://www.nginx.com/wp-content/themes/nginx-theme/assets/img/landing-page/highperf_nginx_ebook.png) - -[O'REILLY'S BOOK ABOUT HTTP/2 & PERFORMANCE TUNING][17] - --------------------------------------------------------------------------------- - -via: https://www.nginx.com/blog/http2-r7/ - -作者:[Faisal Memon][a] -译者:[struggling](https://github.com/struggling) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.nginx.com/blog/author/fmemon/ -[1]:https://www.nginx.com/blog/nginx-plus-r7-released/ -[2]:https://tools.ietf.org/html/rfc7540 -[3]:https://www.nginx.com/wp-content/uploads/2015/09/NGINX_HTTP2_White_Paper_v4.pdf -[4]:https://www.nginx.com/http2-ebook/ -[5]:http://caniuse.com/#feat=http2 -[6]:http://googlewebmastercentral.blogspot.co.uk/2014/08/https-as-ranking-signal.html -[7]:http://nginx.org/en/docs/http/ngx_http_core_module.html#listen -[8]:https://chrome.google.com/webstore/detail/http2-and-spdy-indicator/mpbpobfflnpcgagjijhmgnchggcjblin?hl=en -[9]:https://addons.mozilla.org/en-us/firefox/addon/spdy-indicator/ -[10]:http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html 
-[11]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_prefer_server_ciphers
-[12]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers
-[13]:https://tools.ietf.org/html/rfc7540#appendix-A
-[14]:https://tools.ietf.org/html/rfc7540#section-9.2.2
-[15]:http://dropbox.com/
-[16]:http://automattic.com/
-[17]:https://www.nginx.com/http2-ebook/
\ No newline at end of file
From 86b844ef49163dfcd0db7aa63fcb253be0c97629 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Mon, 28 Sep 2015 18:09:48 +0800
Subject: [PATCH 619/697] Create 20150925 HTTP 2 Now Fully Supported in NGINX Plus.md

---
 ...TTP 2 Now Fully Supported in NGINX Plus.md | 126 ++++++++++++++++++
 1 file changed, 126 insertions(+)
 create mode 100644 translated/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md

diff --git a/translated/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md b/translated/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md
new file mode 100644
index 0000000000..0a9cf30ad3
--- /dev/null
+++ b/translated/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md
@@ -0,0 +1,126 @@
+
+NGINX Plus 现在完全支持 HTTP/2
+================================================================================
+本周早些时候,我们发布了支持 HTTP/2 的 [NGINX Plus R7][1]。作为 HTTP 协议的最新标准,HTTP/2 的设计为现代 Web 应用程序带来了更高的性能和安全性。

+NGINX Plus 可以与现有网站和应用程序无缝衔接地使用 HTTP/2,所需改动极小:无论用户选择什么浏览器,NGINX Plus 都会并行地提供 HTTP/1.x 和 HTTP/2 流量,以带来最佳体验。

+HTTP/2 支持仅在可选的 **nginx‑plus‑http2** 软件包中提供。**nginx‑plus** 和 **nginx‑plus‑extras** 软件包支持的是 SPDY 协议,由于浏览器支持面更广、代码也更成熟,目前仍推荐在生产站点上使用它们。

+### 为什么要使用 HTTP/2? 
###
+HTTP/2 使数据传输更高效,也让你的应用程序更安全。与 HTTP/1.x 相比,HTTP/2 有五个提高性能的关键特性:

+- **完全复用** – HTTP/1.1 强制要求通过 keepalive 连接传入的请求按严格的顺序完成:一个请求处理完毕之前,下一个请求的处理无法开始。HTTP/2 消除了这一要求,允许各个请求并行地、不按顺序地完成。

+- **单一的持久连接** – 由于 HTTP/2 允许请求真正复用,现在通过单一连接就可以并行下载网页上的所有对象。而在 HTTP/1.x 中,需要建立多个连接来并行下载资源,导致对底层 TCP 协议的使用效率很低。

+- **二进制编码** – Header 信息使用紧凑的二进制格式发送,而不是纯文本格式,节省了传输字节。

+- **Header 压缩** – Header 使用专门设计的 HPACK 压缩算法进行压缩,进一步减少了通过网络传输的数据量。

+- **SSL/TLS 加密** – 在 HTTP/2 中,SSL/TLS 是强制使用的。这一要求并非来自 [RFC][2](RFC 允许纯文本的 HTTP/2),而是来自当前所有实现了 HTTP/2 的 Web 浏览器。SSL/TLS 使你的网站更安全,而借助 HTTP/2 带来的种种性能提升,加密和解密造成的性能损失也得到了缓解。

+要了解更多关于 HTTP/2 的信息:

+- 请阅读我们的 [白皮书][3],它涵盖了你需要了解的关于 HTTP/2 的一切。
+- 下载由 Google 的 Ilya Grigorik 编写的 [特别版的高性能浏览器网络电子书][4] 。

+### NGINX Plus 如何实现 HTTP/2 ###

+我们的 HTTP/2 实现基于对 SPDY 的支持,而后者已经得到广泛部署(使用 SPDY 的网站中,近 75% 用的是 NGINX 或 NGINX Plus)。使用 NGINX Plus 部署 HTTP/2 时,几乎不需要改变你应用程序的配置。本节将讨论 NGINX Plus 如何实现对 HTTP/2 的支持。

+#### 一个 HTTP/2 网关 ####

+![](https://www.nginx.com/wp-content/uploads/2015/09/http2-27-1024x300.png)

+NGINX Plus 充当一个 HTTP/2 网关:它与支持 HTTP/2 的客户端 Web 浏览器之间使用 HTTP/2 通信,而在与后端服务器通信时,则把 HTTP/2 请求转换回 HTTP/1.x(或者 FastCGI、SCGI、uWSGI 等,取决于你目前正在使用的协议)。

+#### 向后兼容性 ####

+![](https://www.nginx.com/wp-content/uploads/2015/09/http2-281-1024x581.png)

+在可预见的未来,你需要同时支持 HTTP/2 和 HTTP/1.x。在撰写本文时,超过 50% 的用户使用的 Web 浏览器已经[支持 HTTP/2][5],但这也意味着近 50% 的人还没有使用。

+为了同时支持 HTTP/1.x 和 HTTP/2,NGINX Plus 实现了 TLS 的 NPN(Next Protocol Negotiation)扩展。当 Web 浏览器连接到服务器时,会把自己支持的协议列表发送到服务器端。如果浏览器支持的协议列表中包括 h2(即 HTTP/2),NGINX Plus 就对该浏览器的连接使用 HTTP/2;如果浏览器不支持 NPN,或者发送的协议列表中没有 h2,NGINX Plus 就回退到 HTTP/1.x。

+### 转向 HTTP/2 ###

+NGINX 公司的目标是让向 HTTP/2 的过渡尽可能无缝。本节介绍为应用程序启用 HTTP/2 所需的改动,其实只是 NGINX Plus 配置上的几处变化。

+#### 前提条件 ####

+升级到 NGINX Plus R7 的 **nginx‑plus‑http2** 软件包。
注意,目前 **nginx‑plus‑extras** 软件包尚不提供对 HTTP/2 的支持。 + +#### 重定向所有流量到 SSL/TLS #### + +如果你的应用程序尚未使用 SSL/TLS 加密,现在启用它正是一个好时机。加密你的应用程序可以保护你免受窃听以及其他中间人攻击。一些搜索引擎甚至会在搜索结果中对加密站点 [提高排名][6]。下面的配置块将所有普通的 HTTP 请求重定向到该网站的加密版本。 + + server { + listen 80; + location / { + return 301 https://$host$request_uri; + } + } + +#### 启用 HTTP/2 #### + +要启用对 HTTP/2 的支持,只需将 http2 参数添加到所有的 [listen][7] 指令中;同时还要加上 ssl 参数,因为浏览器不支持未加密的 HTTP/2 请求。 + + server { + listen 443 ssl http2 default_server; + + ssl_certificate server.crt; + ssl_certificate_key server.key; + … + } + +如果有必要,重启 NGINX Plus,例如通过运行 nginx -s reload 命令。要验证 HTTP/2 是否正常工作,你可以在 [Google Chrome][8] 和 [Firefox][9] 中使用 “HTTP/2 and SPDY indicator” 插件来检查。 + +### 注意事项 ### + +- 在安装 **nginx‑plus‑http2** 包之前,你必须删除配置文件中所有 listen 指令后的 spdy 参数(使用 http2 和 ssl 参数来替换它以启用对 HTTP/2 的支持)。使用这个包后,如果 listen 指令后有 spdy 参数,NGINX Plus 将无法启动。 + +- 如果你在 NGINX Plus 前端使用了 Web 应用防火墙(WAF),请确保它能够解析 HTTP/2,或者把它移到 NGINX Plus 后面。 + +- 此版本尚不支持 HTTP/2 RFC 中的 “Server Push” 特性。NGINX Plus 以后的版本可能会支持它。 + +- NGINX Plus R7 同时支持 SPDY 和 HTTP/2。在以后的版本中,我们将弃用对 SPDY 的支持。谷歌将在 2016 年初 [弃用 SPDY][10],因此也没有必要长期同时支持这两种协议。 + +- 如果将 [ssl_prefer_server_ciphers][11] 设置为 on,或者 [ssl_ciphers][12] 列表中使用了 [Appendix A: TLS 1.2 Cipher Suite Black List][13] 中列出的加密算法,浏览器会出现握手错误(handshake error)而无法正常工作。详细内容请参阅 [HTTP/2 RFC 的 9.2.2 节][14]。 + +### 特别感谢 ### + +NGINX 公司要感谢 [Dropbox][15] 和 [Automattic][16],他们是我们软件的重度使用者,并帮助我们实现了 HTTP/2。他们的贡献帮助我们加速完成这个软件,我们希望你也能支持他们。 + +![](https://www.nginx.com/wp-content/themes/nginx-theme/assets/img/landing-page/highperf_nginx_ebook.png) + +[O'REILLY'S BOOK ABOUT HTTP/2 & PERFORMANCE TUNING][17] + +-------------------------------------------------------------------------------- + +via: https://www.nginx.com/blog/http2-r7/ + +作者:[Faisal Memon][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.nginx.com/blog/author/fmemon/ 
+[1]:https://www.nginx.com/blog/nginx-plus-r7-released/ +[2]:https://tools.ietf.org/html/rfc7540 +[3]:https://www.nginx.com/wp-content/uploads/2015/09/NGINX_HTTP2_White_Paper_v4.pdf +[4]:https://www.nginx.com/http2-ebook/ +[5]:http://caniuse.com/#feat=http2 +[6]:http://googlewebmastercentral.blogspot.co.uk/2014/08/https-as-ranking-signal.html +[7]:http://nginx.org/en/docs/http/ngx_http_core_module.html#listen +[8]:https://chrome.google.com/webstore/detail/http2-and-spdy-indicator/mpbpobfflnpcgagjijhmgnchggcjblin?hl=en +[9]:https://addons.mozilla.org/en-us/firefox/addon/spdy-indicator/ +[10]:http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html +[11]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_prefer_server_ciphers +[12]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers +[13]:https://tools.ietf.org/html/rfc7540#appendix-A +[14]:https://tools.ietf.org/html/rfc7540#section-9.2.2 +[15]:http://dropbox.com/ +[16]:http://automattic.com/ +[17]:https://www.nginx.com/http2-ebook/ From d0f0839ebc66ad3956a9b3cff6270acc7ed74eb9 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 28 Sep 2015 22:48:14 +0800 Subject: [PATCH 620/697] PUB:20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2 @ictlyh --- ...Command Line Tricks for Newbies--Part 2.md | 42 +++++++++---------- 1 file changed, 20 insertions(+), 22 deletions(-) rename {translated/tech => published}/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md (85%) diff --git a/translated/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md b/published/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md similarity index 85% rename from translated/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md rename to published/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md index c2fcb279f1..86ae1ec668 100644 --- a/translated/tech/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 
2.md +++ b/published/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md @@ -1,20 +1,20 @@ -给新手的 10 个有用 Linux 命令行技巧 - 第二部分 +给新手的 10 个有用 Linux 命令行技巧 ================================================================================ -我记得我第一次使用 Linux 的时候,我还习惯于 Windows 的图形界面,我真的很讨厌 Linux 终端。那时候我觉得命令难以记忆,不能正确使用它们。随着时间推移,我意识到了 Linux 终端的优美、灵活和可用性,说实话,我没有一天不使用它。今天,我很高兴和刚开始接触 Linux 的人一起来分享一些有用的技巧和提示,希望能帮助他们更好的向 Linux 过度,并帮助他们学到一些新的东西(希望如此)。 + +我记得我第一次使用 Linux 的时候,我还习惯于 Windows 的图形界面,我真的很讨厌 Linux 终端。那时候我觉得命令难以记忆,不能正确使用它们。随着时间推移,我意识到了 Linux 终端的优美、灵活和可用性,说实话,我没有一天不使用它。今天,我很高兴和刚开始接触 Linux 的人一起来分享一些有用的技巧和提示,希望能帮助他们更好的向 Linux 过渡,并帮助他们学到一些新的东西(希望如此)。 ![给新手的 10 个命令行技巧](http://www.tecmint.com/wp-content/uploads/2015/09/10-Linux-Commandline-Tricks.jpg) -10 个 Linux 命令行技巧 - 第二部分 +*10 个 Linux 命令行技巧* +- [5 个有趣的 Linux 命令行技巧][1] +- [管理 Linux 文件类型的 5 个有用命令][2] -- [Linux 中 5 个有趣的命令行提示和技巧 - 第一部分][1] -- [管理 Linux 文件类型的 5 个有用命令 – 第三部分][2] - -这篇文章希望向你展示一些不需要很高的技术而可以像一个高手一样使用 Linux 终端的有用技巧。你只需要一个 Linux 终端和一些自由时间来体会这些命令。 +这篇文章希望向你展示一些不需要很高的技术就可以像一个高手一样使用 Linux 终端的有用技巧。你只需要一个 Linux 终端和一些自由时间来体会这些命令。 ### 1. 找到正确的命令 ### -执行正确的命令对你的系统来说非常重要。然而在 Linux 中有很多通常难以记忆的不同的命令行。那么怎样才能找到你需要的正确命令呢?答案是 apropos。你只需要运行: +执行正确的命令对你的系统来说非常重要。然而在 Linux 中有如此多的、难以记忆的各种的命令行。那么怎样才能找到你需要的正确命令呢?答案是 apropos。你只需要运行: # apropos @@ -31,7 +31,7 @@ ### 2. 
执行之前的命令 ### -很多时候你需要一遍又一遍执行相同的命令。尽管你可以重复按你键盘上的 Up 键,你也可以用 history 命令。这个命令会列出自从你上次启动终端以来所有输入过的命令: +很多时候你需要一遍又一遍执行相同的命令。尽管你可以重复按你键盘上的向上光标键,但你也可以用 history 命令替代。这个命令会列出自从你上次启动终端以来所有输入过的命令: # history @@ -73,12 +73,11 @@ 如果你不习惯使用类似 cd、cp、mv、rm 等命令,你可以使用 midnight 命令。它是一个简单的可视化 shell,你可以在上面使用鼠标: - ![Midnight 命令](http://www.tecmint.com/wp-content/uploads/2015/09/mc-command.jpg) -Midnight 命令 +*Midnight 命令* -多亏了 F1 到 F12 键,你可以轻易地执行不同任务。只需要在底部选择对应的命令。要选择文件或者目录,点击 “Insert” 按钮。 +借助 F1 到 F12 键,你可以轻易地执行不同任务。只需要在底部选择对应的命令。要选择文件或者目录,按下 “Insert” 键。 简而言之 midnight 就是所谓的 “mc”。要安装 mc,只需要运行: @@ -96,21 +95,21 @@ Midnight 命令 ![Midnight 命令移动文件](http://www.tecmint.com/wp-content/uploads/2015/09/Midnight-Commander-Move-Files.jpg) -Midnight 命令移动文件 +*Midnight 命令移动文件* 按 F6 按钮移动文件到新的目录。MC 会请求你确认: ![移动文件到新目录](http://www.tecmint.com/wp-content/uploads/2015/09/Move-Files-to-new-Directory.png) -移动文件到新目录 +*移动文件到新目录* 确认了之后,文件就会被移动到新的目标目录。 -扩展阅读:[如何在 Linux 中使用 Midnight 命令文件管理器][4] +- 扩展阅读:[如何在 Linux 中使用 Midnight 命令文件管理器][4] ### 4. 在指定时间关闭计算机 ### -有时候你需要在结束工作几个小时后再关闭计算机。你可以通过使用下面的命令在指定时间关闭你的计算机: +有时候你需要在下班几个小时后再关闭计算机。你可以通过使用下面的命令在指定时间关闭你的计算机: $ sudo shutdown 21:00 @@ -151,13 +150,12 @@ Midnight 命令移动文件 ... ### 6. 查找文件 ### -### 6. Search for Files ### 查找文件有时候并不像你想象的那么简单。一个搜索文件的好例子是: # find /home/user -type f -这个命令会搜索 /home/user 目录下的所有文件。find 命令真的很强大,你可以传递更多选项给它使得你的搜索更加详细。如果你想搜索比特定大小大的文件,可以使用: +这个命令会搜索 /home/user 目录下的所有文件。find 命令真的很强大,你可以传递更多选项给它使得你的搜索更加详细。如果你想搜索超过特定大小的文件,可以使用: # find . 
-type f -size 10M @@ -221,7 +219,7 @@ Midnight 命令移动文件 10.0.0.4 10.0.0.5 -这里有一个简单的解决方法。收集服务器的 IP 地址到文件 list.txt 中,像上面那样一行一个。然后运行: +这里有一个简单的解决方法。将服务器的 IP 地址写到文件 list.txt 中,像上面那样一行一个。然后运行: # for i in $(cat list.txt); do ssh user@$i 'bash command'; done @@ -239,15 +237,15 @@ via: http://www.tecmint.com/10-useful-linux-command-line-tricks-for-newbies/ 作者:[Marin Todorov][a] 译者:[ictlyh](http://mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/marintodorov89/ -[1]:http://www.tecmint.com/5-linux-command-line-tricks/ +[1]:https://linux.cn/article-5485-1.html [2]:http://www.tecmint.com/manage-file-types-and-set-system-time-in-linux/ [3]:http://www.tecmint.com/history-command-examples/ [4]:http://www.tecmint.com/midnight-commander-a-console-based-file-manager-for-linux/ [5]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/ [6]:http://www.linuxsay.com/ -[7]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ \ No newline at end of file +[7]:https://linux.cn/article-5202-1.html \ No newline at end of file From 5dfb2e347646d80411ef5e763f212ca46d6c1587 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 28 Sep 2015 23:14:00 +0800 Subject: [PATCH 621/697] PUB:RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables @FSSlc --- ...ic Control Using FirewallD and Iptables.md | 71 +++++++++---------- 1 file changed, 35 insertions(+), 36 deletions(-) rename {translated/tech/RHCSA => published}/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md (62%) diff --git a/translated/tech/RHCSA/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md b/published/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and 
Iptables.md similarity index 62% rename from translated/tech/RHCSA/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md rename to published/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md index 80e64c088d..f770d09353 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md +++ b/published/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md @@ -1,25 +1,25 @@ -RHCSA 系列: 防火墙简要和使用 FirewallD 和 Iptables 来控制网络流量 – Part 11 +RHCSA 系列(十一): 使用 firewalld 和 iptables 来控制网络流量 ================================================================================ -简单来说,防火墙就是一个基于一系列预先定义的规则(例如流量包的目的地或来源,流量的类型等)的安全系统,它控制着一个网络中的流入和流出流量。 +简单来说,防火墙就是一个基于一系列预先定义的规则(例如流量包的目的地或来源,流量的类型等)的安全系统,它控制着一个网络中的流入和流出流量。 ![使用 FirewallD 和 Iptables 来控制网络流量](http://www.tecmint.com/wp-content/uploads/2015/05/Control-Network-Traffic-Using-Firewall.png) -RHCSA: 使用 FirewallD 和 Iptables 来控制网络流量 – Part 11 +*RHCSA: 使用 FirewallD 和 Iptables 来控制网络流量 – Part 11* -在本文中,我们将回顾 firewalld 和 iptables 的基础知识。前者是 RHEL 7 中的默认动态防火墙守护进程,而后者则是针对 Linux 的传统的防火墙服务,大多数的系统和网络管理员都非常熟悉它,并且在 RHEL 7 中也可以获取到。 +在本文中,我们将回顾 firewalld 和 iptables 的基础知识。前者是 RHEL 7 中的默认动态防火墙守护进程,而后者则是针对 Linux 的传统的防火墙服务,大多数的系统和网络管理员都非常熟悉它,并且在 RHEL 7 中也可以用。 ### FirewallD 和 Iptables 的一个比较 ### 在后台, firewalld 和 iptables 服务都通过相同的接口来与内核中的 netfilter 框架相交流,这不足为奇,即它们都通过 iptables 命令来与 netfilter 交互。然而,与 iptables 服务相反, firewalld 可以在不丢失现有连接的情况下,在正常的系统操作期间更改设定。 -在默认情况下, firewalld 应该已经安装在你的 RHEL 系统中了,尽管它可能没有在运行。你可以使用下面的命令来确认(firewall-config 是用户界面配置工具): +在默认情况下, firewalld 应该已经安装在你的 RHEL 系统中了,尽管它可能没有在运行。你可以使用下面的命令来确认(firewall-config 是用户界面配置工具): # yum info firewalld firewall-config ![检查 FirewallD 的信息](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FirewallD-Information.png) -检查 FirewallD 的信息 +*检查 FirewallD 的信息* 以及, @@ -27,7 +27,7 @@ RHCSA: 使用 
FirewallD 和 Iptables 来控制网络流量 – Part 11 ![检查 FirewallD 的状态](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FirewallD-Status.png) -检查 FirewallD 的状态 +*检查 FirewallD 的状态* 另一方面, iptables 服务在默认情况下没有被包含在 RHEL 系统中,但可以被安装上。 @@ -38,13 +38,13 @@ RHCSA: 使用 FirewallD 和 Iptables 来控制网络流量 – Part 11 # systemctl start firewalld.service | iptables-service.service # systemctl enable firewalld.service | iptables-service.service -另外,请阅读:[管理 Systemd 服务的实用命令][1] (注: 本文已被翻译发表,在 https://linux.cn/article-5926-1.html) +另外,请阅读:[管理 Systemd 服务的实用命令][1] -至于配置文件, iptables 服务使用 `/etc/sysconfig/iptables` 文件(假如这个软件包在你的系统中没有被安装,则这个文件将不存在)。在一个被用作集群节点的 RHEL 7 机子上,这个文件长得像这样: +至于配置文件, iptables 服务使用 `/etc/sysconfig/iptables` 文件(假如这个软件包在你的系统中没有被安装,则这个文件将不存在)。在一个被用作集群节点的 RHEL 7 机子上,这个文件看起来是这样: ![Iptables 防火墙配置文件](http://www.tecmint.com/wp-content/uploads/2015/05/Iptables-Rules.png) -Iptables 防火墙配置文件 +*Iptables 防火墙配置文件* 而 firewalld 则在两个目录中存储它的配置文件,即 `/usr/lib/firewalld` 和 `/etc/firewalld`: @@ -52,33 +52,32 @@ Iptables 防火墙配置文件 ![FirewallD 的配置文件](http://www.tecmint.com/wp-content/uploads/2015/05/Firewalld-configuration.png) -FirewallD 的配置文件 +*FirewallD 的配置文件* -在这篇文章中后面,我们将进一步查看这些配置文件,在那之后,我们将在各处添加一些规则。 -现在,是时候提醒你了,你总可以使用下面的命令来找到更多有关这两个工具的信息。 +在这篇文章中后面,我们将进一步查看这些配置文件,在那之后,我们将在这两个地方添加一些规则。现在,是时候提醒你了,你总可以使用下面的命令来找到更多有关这两个工具的信息。 # man firewalld.conf # man firewall-cmd # man iptables -除了这些,记得查看一下当前系列的第一篇 [RHCSA 系列(一): 回顾基础命令及系统文档][2](注: 本文已被翻译发表,在 https://linux.cn/article-6133-1.html ),在其中我描述了几种渠道来得到安装在你的 RHEL 7 系统上的软件包的信息。 +除了这些,记得查看一下当前系列的第一篇 [RHCSA 系列(一): 回顾基础命令及系统文档][2],在其中我描述了几种渠道来得到安装在你的 RHEL 7 系统上的软件包的信息。 ### 使用 Iptables 来控制网络流量 ### -在进一步深入之前,或许你需要参考 Linux 基金会认证工程师(Linux Foundation Certified Engineer,LFCE) 系列中的 [配置 Iptables 防火墙 – Part 8][3] 来复习你脑中有关 iptables 的知识。 +在进一步深入之前,或许你需要参考 Linux 基金会认证工程师(Linux Foundation Certified Engineer,LFCE) 系列中的 [配置 Iptables 防火墙 – Part 8][3] 来复习你脑中有关 iptables 的知识。 **例 1:同时允许流入和流出的网络流量** -TCP 端口 80 和 443 是 Apache web 服务器使用的用来处理常规(HTTP) 和安全(HTTPS)网络流量的默认端口。你可以像下面这样在 enp0s3 
接口上允许流入和流出网络流量通过这两个端口: +TCP 端口 80 和 443 是 Apache web 服务器使用的用来处理常规(HTTP)和安全(HTTPS)网络流量的默认端口。你可以像下面这样在 enp0s3 接口上允许流入和流出网络流量通过这两个端口: # iptables -A INPUT -i enp0s3 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT # iptables -A OUTPUT -o enp0s3 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT # iptables -A INPUT -i enp0s3 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT # iptables -A OUTPUT -o enp0s3 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT -**例 2:从某个特定网络中阻挡所有(或某些)流入连接** +**例 2:从某个特定网络中阻挡所有(或某些)流入连接** -或许有时你需要阻挡来自于某个特定网络的所有(或某些)类型的来源流量,比方说 192.168.1.0/24: +或许有时你需要阻挡来自于某个特定网络的所有(或某些)类型的来源流量,比方说 192.168.1.0/24: # iptables -I INPUT -s 192.168.1.0/24 -j DROP @@ -90,7 +89,7 @@ TCP 端口 80 和 443 是 Apache web 服务器使用的用来处理常规(HTTP) **例 3:将流入流量重定向到另一个目的地** -假如你不仅使用你的 RHEL 7 机子来作为一个软件防火墙,而且还将它作为一个硬件防火墙,使得它位于两个不同的网络之间,则在你的系统 IP 转发一定已经被开启了。假如没有开启,你需要编辑 `/etc/sysctl.conf` 文件并将 `net.ipv4.ip_forward` 的值设为 1,即: +假如你不仅使用你的 RHEL 7 机子来作为一个软件防火墙,而且还将它作为一个硬件防火墙,使得它位于两个不同的网络之间,那么在你的系统上 IP 转发一定已经被开启了。假如没有开启,你需要编辑 `/etc/sysctl.conf` 文件并将 `net.ipv4.ip_forward` 的值设为 1,即: net.ipv4.ip_forward = 1 @@ -98,27 +97,27 @@ TCP 端口 80 和 443 是 Apache web 服务器使用的用来处理常规(HTTP) # sysctl -p /etc/sysctl.conf -例如,你可能在一个内部的机子上安装了一个打印机,它的 IP 地址为 192.168.0.10,CUPS 服务在端口 631 上进行监听(同时在你的打印服务器和你的防火墙上)。为了从防火墙另一边的客户端传递打印请求,你应该添加下面的 iptables 规则: +例如,你可能在一个内部的机子上安装了一个打印机,它的 IP 地址为 192.168.0.10,CUPS 服务在端口 631 上进行监听(同时在你的打印服务器和你的防火墙上)。为了从防火墙另一边的客户端传递打印请求,你应该添加下面的 iptables 规则: # iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 631 -j DNAT --to 192.168.0.10:631 -请记住 iptables 逐条地读取它的规则,所以请确保默认的策略或后面的规则不会重载上面例子中那些有下划线的规则。 +请记住 iptables 会逐条地读取它的规则,所以请确保默认的策略或后面的规则不会重载上面例子中那些规则。 ### FirewallD 入门 ### -引入 firewalld 的一个改变是区域(zone) (注:翻译参考了 https://fedoraproject.org/wiki/FirewallD/zh-cn) 的概念。它允许将网路划分为拥有不同信任级别的区域,由用户决定将设备和流量放置到哪个区域。 +firewalld 引入的一个变化是区域(zone) (注:翻译参考了 https://fedoraproject.org/wiki/FirewallD/zh-cn )。这个概念允许将网路划分为拥有不同信任级别的区域,由用户决定将设备和流量放置到哪个区域。 要获取活动的区域,使用: # 
firewall-cmd --get-active-zones -在下面的例子中,公用区域被激活了,并且 enp0s3 接口被自动地分配到了这个区域。要查看有关一个特定区域的所有信息,可使用: +在下面的例子中,public 区域是激活的,并且 enp0s3 接口被自动地分配到了这个区域。要查看有关一个特定区域的所有信息,可使用: # firewall-cmd --zone=public --list-all ![列出所有的 Firewalld 区域](http://www.tecmint.com/wp-content/uploads/2015/05/View-FirewallD-Zones.png) -列出所有的 Firewalld 区域 +*列出所有的 Firewalld 区域* 由于你可以在 [RHEL 7 安全指南][4] 中阅读到更多有关区域的知识,这里我们将仅列出一些特别的例子。 @@ -130,9 +129,9 @@ TCP 端口 80 和 443 是 Apache web 服务器使用的用来处理常规(HTTP) ![列出所有受支持的服务](http://www.tecmint.com/wp-content/uploads/2015/05/List-All-Supported-Services.png) -列出所有受支持的服务 +*列出所有受支持的服务* -要立刻且在随后的开机中使得 http 和 https 网络流量通过防火墙,可以这样: +要立刻生效且在随后重启后都可以让 http 和 https 网络流量通过防火墙,可以这样: # firewall-cmd --zone=MyZone --add-service=http # firewall-cmd --zone=MyZone --permanent --add-service=http @@ -140,13 +139,13 @@ TCP 端口 80 和 443 是 Apache web 服务器使用的用来处理常规(HTTP) # firewall-cmd --zone=MyZone --permanent --add-service=https # firewall-cmd --reload -假如 code>–zone 被忽略,则默认的区域(你可以使用 `firewall-cmd –get-default-zone`来查看)将会被使用。 +假如 `-–zone` 被忽略,则使用默认的区域(你可以使用 `firewall-cmd –get-default-zone`来查看)。 若要移除这些规则,可以在上面的命令中将 `add` 替换为 `remove`。 **例 5:IP 转发或端口转发** -首先,你需要查看在目标区域中,伪装是否被开启: +首先,你需要查看在目标区域中,伪装(masquerading)是否被开启: # firewall-cmd --zone=MyZone --query-masquerade @@ -154,7 +153,7 @@ TCP 端口 80 和 443 是 Apache web 服务器使用的用来处理常规(HTTP) ![在 firewalld 中查看伪装状态](http://www.tecmint.com/wp-content/uploads/2015/05/Check-masquerading.png) -查看伪装状态 +*查看伪装状态* 你可以为公共区域开启伪装: @@ -164,11 +163,11 @@ TCP 端口 80 和 443 是 Apache web 服务器使用的用来处理常规(HTTP) # firewall-cmd --zone=external --add-forward-port=port=631:proto=tcp:toport=631:toaddr=192.168.0.10 -并且别忘了重新加载防火墙。 +不要忘了重新加载防火墙。 -在 RHCSA 系列的 [Part 9][5] 你可以找到更深入的例子,在那篇文章中我们解释了如何允许或禁用通常被 web 服务器和 ftp 服务器使用的端口,以及在针对这两个服务所使用的默认端口被改变时,如何更改相应的规则。另外,你或许想参考 firewalld 的 wiki 来查看更深入的例子。 +在 RHCSA 系列的 [第九部分][5] 你可以找到更深入的例子,在那篇文章中我们解释了如何允许或禁用通常被 web 服务器和 ftp 服务器使用的端口,以及在针对这两个服务所使用的默认端口被改变时,如何更改相应的规则。另外,你或许想参考 firewalld 的 wiki 来查看更深入的例子。 -Read Also: [在 RHEL 7 中配置防火墙的几个实用的 
firewalld 例子][6] +- 延伸阅读: [在 RHEL 7 中配置防火墙的几个实用的 firewalld 例子][6] ### 总结 ### @@ -180,14 +179,14 @@ via: http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in 作者:[Gabriel Cánepa][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/ -[2]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ +[1]:https://linux.cn/article-5926-1.html +[2]:https://linux.cn/article-6133-1.html [3]:http://www.tecmint.com/configure-iptables-firewall/ [4]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html -[5]:http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-and-ftp-in-rhel/ +[5]:https://linux.cn/article-6286-1.html [6]:http://www.tecmint.com/firewalld-rules-for-centos-7/ \ No newline at end of file From 6a40a9822dad8799246ac0bff581c1b92cf6962a Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 29 Sep 2015 09:29:54 +0800 Subject: [PATCH 622/697] =?UTF-8?q?20150929-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...veloper's Journey into Linux Containers.md | 128 ++++++++++++++++++ 1 file changed, 128 insertions(+) create mode 100644 sources/tech/20150929 A Developer's Journey into Linux Containers.md diff --git a/sources/tech/20150929 A Developer's Journey into Linux Containers.md b/sources/tech/20150929 A Developer's Journey into Linux Containers.md new file mode 100644 index 0000000000..3b44992ef3 --- /dev/null +++ b/sources/tech/20150929 A Developer's Journey into Linux Containers.md @@ -0,0 +1,128 @@ +A Developer’s Journey into Linux Containers 
+================================================================================ +![](https://deis.com/images/blog-images/dev_journey_0.jpg) + +I'll let you in on a secret: all that DevOps cloud stuff that goes into getting my applications into the world is still a bit of a mystery to me. But, over time I've come to realize that understanding the ins and outs of large scale machine provisioning and application deployment is important knowledge for a developer to have. It's akin to being a professional musician. Of course you need to know how to play your instrument. But, if you don't understand how a recording studio works or how you fit into a symphony orchestra, you're going to have a hard time working in such environments. + +In the world of software development getting your code into our very big world is just as important as making it. DevOps counts and it counts a lot. + +So, in the spirit of bridging the gap between Dev and Ops I am going to present container technology to you from the ground up. Why containers? Because there is strong evidence to suggest that containers are the next step in machine abstraction: making a computer a place and no longer a thing. Understanding containers is a journey that we'll take together. + +In this article I am going to cover the concepts behind containerization. I am going to cover how a container differs from a virtual machine. I am going to go into the logic behind container construction as well as how containers fit into application architecture. I'll discuss how lightweight versions of the Linux operating system fit into the container ecosystem. I'll discuss using images to create reusable containers. Lastly I'll cover how clusters of containers allow your applications to scale quickly. + +In later articles I'll show you the step by step process to containerize a sample application and how to create a host cluster for your application's containers. 
Also, I’ll show you how to use a Deis to deploy the sample application to a VM on your local system as well as a variety of cloud providers. + +So let’s get started. + +### The Benefit of Virtual Machines ### + +In order to understand how containers fit into the scheme of things you need to understand the predecessor to containers: virtual machines. + +A [virtual machine][1] (VM) is a software abstraction of a computer that runs on a physical host computer. Configuring a virtual machine is akin to buying a typical computer: you define the number of CPUs you want along with desired RAM and disk storage capacity. Once the machine is configured, you load in the operating system and then any servers and applications you want the VM to support. + +Virtual machines allow you to run many simulations of a computer on a single hardware host. Here’s what that looks like with a handy diagram: + +![](https://deis.com/images/blog-images/dev_journey_1.png) + +Virtual machines bring efficiency to your hardware investment. You can buy a big, honking machine and run a lots of VMs on it. You can have a database VM sitting with a bunch of VMs with identical versions of your custom app running as a cluster. You can get a lot of scalability out of a finite hardware resources. If you find that you need more VMs and your host hardware has the capacity, you add what you need. Or, if you don’t need a VM, you simply bring the VM off line and delete the VM image. + +### The Limitations of Virtual Machines ### + +But, virtual machines do have limits. + +Say you create three VMs on a host as shown above. The host has 12 CPUs, 48 GB of RAM, and 3 TB of storage. Each VM is configured to have 4 CPUs, 16 GB of RAM and 1 TB of storage. So far, so good. The host has the capacity. + +But there is a drawback. All the resources allocated to a particular machine are dedicated, no matter what. Each machine has been allocated 16 GB of RAM. 
However, if the first VM never uses more than 1 GB of its RAM allocation, the remaining 15 GB just sit there unused. If the third VM uses only 100 GB of its 1 TB storage allocation, the remaining 900 GB is wasted space. + +There is no leveling of resources. Each VM owns what it is given. So, in a way we're back to that time before virtual machines when we were paying a lot of good money for unused resources. + +There is *another* drawback to VMs too. They can take a long time to spin up. So, if you are in a situation where your infrastructure needs to grow quickly, even when VM provisioning is automated, you can still find yourself twiddling your thumbs waiting for machines to come online. + +### Enter: Containers ### + +Conceptually, a container is a Linux process that thinks it is the only process running. The process knows only about things it is told to know about. Also, in terms of containerization, the container process is assigned its own IP address. This is important, so I will say it again. **In terms of containerization, the container process is assigned its own IP address**. Once given an IP address, the process is an identifiable resource within the host network. Then, you can issue a command to the container manager to map the container's IP address to an IP address on the host that is accessible to the public. Once this mapping takes place, for all intents and purposes, a container is a distinct machine accessible on the network, similar in concept to a virtual machine. + +Again, a container is an isolated Linux process that has a distinct IP address, thus making it identifiable on a network. Here's what that looks like as a diagram: + +![](https://deis.com/images/blog-images/dev_journey_2.png) + +A container/process shares resources on the host computer in a dynamic, cooperative manner. If the container needs only 1 GB of RAM, it uses only 1 GB. If it needs 4 GB, it uses 4 GB. It's the same with CPU utilization and storage. 
The allocation of CPU, memory and storage resources is dynamic, not static as is usual on a typical virtual machine. All of this resource sharing is managed by the container manager. + +Lastly, containers boot very quickly. + +So, the benefit of containers is: **you get the isolation and encapsulation of a virtual machine without the drawback of dedicated static resources**. Also, because containers load into memory fast, you get better performance when it comes to scaling many containers up. + +### Container Hosting, Configuration, and Management ### + +Computers that host containers run a version of Linux that is stripped down to the essentials. These days, the more popular underlying operating system for a host computer is [CoreOS, mentioned above][2]. There are others, however, such as [Red Hat Atomic Host][3] and [Ubuntu Snappy][4]. + +The Linux operating system is shared between all containers, minimising duplication and reducing the container footprint. Each container contains only what is unique to that specific container. Here's what that looks like in diagram form: + +![](https://deis.com/images/blog-images/dev_journey_3.png) + +You configure your container with the components it requires. A container component is called a **layer**. A layer is a container image. (You'll read more about container images in the following section.) You start with a base layer, which is typically the type of operating system you want in your container. (The container manager provides only the parts of your desired operating system that are not in the host OS.) As you construct the configuration of your container, you'll add layers: say, Apache if you want a web server, or PHP or Python runtimes if your container is running scripts. + +Layering is very versatile. If your application or service container requires PHP 5.2, you configure that container accordingly. If you have another application or service that requires PHP 5.6, no problem. 
You configure that container to use PHP 5.6. Unlike VMs, where you need to go through a lot of provisioning and installation hocus pocus to change a version of a runtime dependency, with containers you just redefine the layer in the container configuration file. + +All of the container versatility described previously is controlled by a piece of software called a container manager. Presently, the most popular container managers are [Docker][5] and [Rocket][6]. The figure above shows a host scenario in which Docker is the container manager and CoreOS is the host operating system. + +### Containers are Built with Images ### + +When it comes time for you to build your application into a container, you are going to assemble images. An image represents a template of a container that your container needs to do its work. (I know, containers within containers. Go figure.) Images are stored in a registry. Registries live on the network. + +Conceptually, a registry is similar to a [Maven][7] repository, for those of you from the Java world, or a [NuGet][8] server, for you .NET heads. You'll create a container configuration file that lists the images your application needs. Then you'll use the container manager to make a container that includes your application's code as well as constituent resources downloaded from a container registry. For example, if your application is made up of some PHP files, your container configuration file will declare that you get the PHP runtime from a registry. Also, you'll use the container configuration file to declare the .php files to copy into the container's file system. The container manager encapsulates all your application stuff into a distinct container that you'll run on a host computer, under a container manager. + +Here's a diagram that illustrates the concepts behind container creation: + +![](https://deis.com/images/blog-images/dev_journey_4.png) + +Let's take a detailed look at this diagram. 
+ +Here, (1) indicates there is a container configuration file that defines the stuff your container needs, as well as how your container is to be constructed. When you run your container on the host, the container manager will read the configuration file to get the container images you need from a registry on the cloud (2) and add the images as layers in your container. + +Also, if that constituent image requires other images, the container manager will get those images too and layer them in. At (3) the container manager will copy files into your container as required. + +If you use a provisioning service, such as [Deis][9], the application container you just created exists as an image (4) which the provisioning service will deploy to a cloud provider of your choice. Examples of cloud providers are AWS and Rackspace. + +### Containers in a Cluster ### + +Okay. So we can say there is a good case to be made that containers provide a greater degree of configuration flexibility and resource utilization than virtual machines. Still, this is not all of it. + +Where containers get really flexible is when they're clustered. Remember, a container has a distinct IP address. Thus, it can be put behind a load balancer. Once a container goes behind a load balancer, the game goes up a level. + +You can run a cluster of containers behind a load balancer container to achieve high performance, high availability computing. Here's one example setup: + +![](https://deis.com/images/blog-images/dev_journey_5.png) + +Let's say you've made an application that does some resource intensive work. Photograph processing, for example. Using a container provisioning technology such as [Deis][9], you can create a container image that has your photo processing application configured with all the resources upon which your photo processing application depends. Then, you can deploy one or many instances of your container image under a load balancer that resides on the host. 
Once the container image is made, you can keep it on the sidelines for introduction later on when the system becomes maxed out and more instances of your container are required in the cluster to meet the workload at hand. + +There is more good news. You don't have to manually configure the load balancer to accept your container image every time you add more instances into the environment. You can use service discovery technology to make it so that your container announces its availability to the balancer. Then, once informed, the balancer can start to route traffic to the new node. + +### Putting It All Together ### + +Container technology picks up where the virtual machine has left off. Host operating systems such as CoreOS, RHEL Atomic, and Ubuntu's Snappy, in conjunction with container management technologies such as Docker and Rocket, are making containers more popular every day. + +While containers are becoming more prevalent, they do take a while to master. However, once you get the hang of them, you can use provisioning technologies such as [Deis][9] to make container creation and deployment easier. + +Getting a conceptual understanding of containers is important as we move forward to actually doing some work with them. But, I imagine the concepts are hard to grasp without the actual hands-on experience to accompany the ideas in play. So, this is what we'll do in the next segment of this series: make some containers. 
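To give the image and layer ideas above a slightly more concrete shape before then, here is a small, purely illustrative sketch of a Docker-style container configuration file. The base image, package names, and paths are hypothetical examples, not a prescription for any real application:

```dockerfile
# Base layer: a minimal operating system image pulled from a registry
FROM ubuntu:14.04

# Additional layers: the web server and script runtime this hypothetical app needs
RUN apt-get update && apt-get install -y apache2 php5

# Copy the application's own .php files into the container's file system
COPY ./src /var/www/html

# The single process this container runs when it starts
CMD ["apachectl", "-D", "FOREGROUND"]
```

Each instruction adds one layer on top of the previous one, so swapping PHP 5.6 for PHP 5.2 is just a matter of changing the relevant layer's definition.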
+ +-------------------------------------------------------------------------------- + +via: https://deis.com/blog/2015/developer-journey-linux-containers + +作者:[Bob Reselman][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://deis.com/blog +[1]:https://en.wikipedia.org/wiki/Virtual_machine +[2]:https://coreos.com/using-coreos/ +[3]:http://www.projectatomic.io/ +[4]:https://developer.ubuntu.com/en/snappy/ +[5]:https://www.docker.com/ +[6]:https://coreos.com/blog/rocket/ +[7]:https://en.wikipedia.org/wiki/Apache_Maven +[8]:https://www.nuget.org/ +[9]:http://deis.com/learn \ No newline at end of file From 4bad08240ba3ba167d6c348f918880136798c3dc Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 29 Sep 2015 16:17:53 +0800 Subject: [PATCH 623/697] =?UTF-8?q?20150929-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rd Is Coming To Ubuntu and Ubuntu Touch.md | 49 +++++++++++++++++++ 1 file changed, 49 insertions(+) create mode 100644 sources/talk/20150929 A Slick New Set-Up Wizard Is Coming To Ubuntu and Ubuntu Touch.md diff --git a/sources/talk/20150929 A Slick New Set-Up Wizard Is Coming To Ubuntu and Ubuntu Touch.md b/sources/talk/20150929 A Slick New Set-Up Wizard Is Coming To Ubuntu and Ubuntu Touch.md new file mode 100644 index 0000000000..2c147fb3e3 --- /dev/null +++ b/sources/talk/20150929 A Slick New Set-Up Wizard Is Coming To Ubuntu and Ubuntu Touch.md @@ -0,0 +1,49 @@ +A Slick New Set-Up Wizard Is Coming To Ubuntu and Ubuntu Touch +================================================================================ +> Canonical aims to 'seduce and reassure' those unfamiliar with the OS by making a good first impression + +**The Ubuntu installer is set to undergo a dramatic makeover.** + +Ubuntu will modernise its out-of-the-box experience (OOBE) to be easier 
and quicker to complete, look more ‘seductive’ to new users, and better present the Ubuntu brand through its design. + +Ubiquity, the current Ubuntu installer, has largely remained unchanged since its [introduction back in 2010][1]. + +### First Impressions Are Everything ### + +Since the first thing most users see when trying Ubuntu for the first time is an installer (or set-up wizard, depending on device), the design team feel it’s “one of the most important categories of software usability”. + +“It essentially says how easy your software is to use, as well as introducing the user into your brand through visual design and tone of voice, which can convey familiarity and trust within your product.” + +Canonical’s new OOBE designs show a striking departure from the current look of the Ubiquity installer used by the Ubuntu desktop, and present a refined approach to the way mobile users ‘set up’ a new Ubuntu Phone. + +![Old design (left) and the new proposed design](http://www.omgubuntu.co.uk/wp-content/uploads/2015/09/desktop-2.jpg) + +Old design (left) and the new proposed design + +Detailing the designs in a [new blog post][2], the Canonical Design team say the aim of the revamp is to create a consistent out-of-the-box experience across Ubuntu devices. + +To do this it groups together “common first experiences found on the mobile, tablet and desktop” and unifies the steps and screens between each, something they say moves the OS closer to “achieving a seamless convergent platform.” + +![New Ubuntu installer on desktop/tablet (left) and phone](http://www.omgubuntu.co.uk/wp-content/uploads/2015/09/Convergence.jpg) + +New Ubuntu installer on desktop/tablet (left) and phone + +Implementation of the new ‘OOBE’ has already begun, according to Canonical, though as of writing there’s no firm word on when a revamped installer may land on either desktop or phone images.
+ +With the march to ‘desktop’ convergence now in full swing, and a(nother) stack of design changes set to hit the mobile build ahead of the first Ubuntu Phone that ‘transforms’ into a PC, chances are you won’t have to wait too long to try it out. + +**What do you think of the designs? How would you go about improving the Ubuntu set-up experience? Let us know in the comments below.** + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2015/09/new-look-ubuntu-installer-coming-soon + +作者:[Joey-Elijah Sneddon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:http://www.omgubuntu.co.uk/2010/09/ubuntu-10-10s-installer-slideshow-oozes-class +[2]:http://design.canonical.com/wp-content/uploads/Convergence.jpg \ No newline at end of file From ba499971b2071619f5af1f2b0db86b37fdf27391 Mon Sep 17 00:00:00 2001 From: alim0x Date: Tue, 29 Sep 2015 22:51:36 +0800 Subject: [PATCH 624/697] [translating]20 - The history of Android --- .../The history of Android/20 - The history of Android.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/talk/The history of Android/20 - The history of Android.md b/sources/talk/The history of Android/20 - The history of Android.md index 30db4ce5c2..75c89a1abc 100644 --- a/sources/talk/The history of Android/20 - The history of Android.md +++ b/sources/talk/The history of Android/20 - The history of Android.md @@ -1,3 +1,5 @@ +alim0x translating + The history of Android ================================================================================ ![Another Market design that was nothing like the old one.
This lineup shows the categories page, featured, a top apps list, and an app page.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/market-pages.png) @@ -90,4 +92,4 @@ via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-histor [1]:http://arstechnica.com/gadgets/2011/02/near-field-communications-a-technology-primer/ [2]:http://arstechnica.com/business/2012/01/google-launches-style-guide-for-android-developers/ [a]:http://arstechnica.com/author/ronamadeo -[t]:https://twitter.com/RonAmadeo \ No newline at end of file +[t]:https://twitter.com/RonAmadeo From cef53abb75fce3cd26ec50ee1faf7685a7c77231 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 30 Sep 2015 09:40:00 +0800 Subject: [PATCH 625/697] =?UTF-8?q?20150930-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...e Ansible (Automation Tool) in CentOS 7.md | 99 +++++++++++++++++++ 1 file changed, 99 insertions(+) create mode 100644 sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md diff --git a/sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md b/sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md new file mode 100644 index 0000000000..9d417ba1a6 --- /dev/null +++ b/sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md @@ -0,0 +1,99 @@ +Install and use Ansible (Automation Tool) in CentOS 7 +================================================================================ +Ansible is a free and open source configuration and automation tool for Unix-like operating systems. It is written in Python and is similar to Chef or Puppet, but one difference, and an advantage of Ansible, is that we don’t need to install any agent on the nodes. It uses SSH to communicate with its nodes. + +In this article we will install and configure Ansible on CentOS 7 and will try to manage its two nodes.
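Because Ansible is agentless, the only control-side state it needs is an inventory file plus SSH access to the nodes. The sketch below previews the shape of the inventory this article builds later; the group name and node IPs are the article's example values, and the file is written under /tmp so nothing on the system is touched:

```shell
# Sketch only: build a scratch copy of the inventory this article creates
# later in /etc/ansible/hosts (group name and IPs are the article's examples).
mkdir -p /tmp/ansible-demo
cat > /tmp/ansible-demo/hosts <<'EOF'
[test-servers]
192.168.1.9
192.168.1.10
EOF
# Count the node entries in the group we just defined.
grep -c '^192\.168\.1\.' /tmp/ansible-demo/hosts
```

With such a file in place, an ad-hoc command simply names the group, e.g. `ansible -i /tmp/ansible-demo/hosts -m ping test-servers`.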
+ +**Ansible Server** – ansible.linuxtechi.com ( 192.168.1.15 ) + +**Nodes** – 192.168.1.9 , 192.168.1.10 + +### Step 1: Set up the EPEL repository ### + +The Ansible package is not available in the default yum repositories, so we will enable the EPEL repository for CentOS 7 using the command below: + + [root@ansible ~]# rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm + +### Step 2: Install Ansible using the yum command ### + + [root@ansible ~]# yum install ansible + +Once the installation is complete, check the Ansible version: + + [root@ansible ~]# ansible --version + +![ansible-version](http://www.linuxtechi.com/wp-content/uploads/2015/09/ansible-version.jpg) + +### Step 3: Set up key-based SSH authentication with the nodes ### + +Generate keys on the Ansible server and copy the public key to the nodes. + + [root@ansible ~]# ssh-keygen + +![ssh-keygen](http://www.linuxtechi.com/wp-content/uploads/2015/09/ssh-keygen.jpg) + +Use the ssh-copy-id command to copy the Ansible server’s public key to its nodes. + +![ssh-copy-id-command](http://www.linuxtechi.com/wp-content/uploads/2015/09/ssh-copy-id-command.jpg) + +### Step 4: Define the nodes, or inventory of servers, for Ansible ### + +The file ‘**/etc/ansible/hosts**‘ maintains the inventory of servers for Ansible. + + [root@ansible ~]# vi /etc/ansible/hosts + [test-servers] + 192.168.1.9 + 192.168.1.10 + +Save and exit the file. + +Sample output of the hosts file: + +![ansible-host](http://www.linuxtechi.com/wp-content/uploads/2015/09/ansible-host.jpg) + +### Step 5: Now try to run commands from the Ansible server
### + +Check the connectivity of the ‘test-servers’ (the ansible nodes) using ping: + + [root@ansible ~]# ansible -m ping 'test-servers' + +![ansible-ping](http://www.linuxtechi.com/wp-content/uploads/2015/09/ansible-ping.jpg) + +#### Executing shell commands #### + +**Example 1: Check the uptime of the Ansible nodes** + + [root@ansible ~]# ansible -m command -a "uptime" 'test-servers' + +![ansible-uptime](http://www.linuxtechi.com/wp-content/uploads/2015/09/ansible-uptime.jpg) + +**Example 2: Check the kernel version of the nodes** + + [root@ansible ~]# ansible -m command -a "uname -r" 'test-servers' + +![kernel-version-ansible](http://www.linuxtechi.com/wp-content/uploads/2015/09/kernel-version-ansible.jpg) + +**Example 3: Add a user to the nodes** + + [root@ansible ~]# ansible -m command -a "useradd mark" 'test-servers' + [root@ansible ~]# ansible -m command -a "grep mark /etc/passwd" 'test-servers' + +![useradd-ansible](http://www.linuxtechi.com/wp-content/uploads/2015/09/useradd-ansible.jpg) + +**Example 4: Redirect the output of a command to a file** + + [root@ansible ~]# ansible -m command -a "df -Th" 'test-servers' > /tmp/command-output.txt + +![redirecting-output-ansible](http://www.linuxtechi.com/wp-content/uploads/2015/09/redirecting-output-ansible.jpg) + +-------------------------------------------------------------------------------- + +via: http://www.linuxtechi.com/install-and-use-ansible-in-centos-7/ + +作者:[Pradeep Kumar][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxtechi.com/author/pradeep/ \ No newline at end of file From 55bd010d91fe5cb389fc1858b38c689b02ac2e0b Mon Sep 17 00:00:00 2001 From: DeadFire Date: Wed, 30 Sep 2015 09:42:19 +0800 Subject: =?UTF-8?q?20150930-2=20=E9=80=89=E9=A2=98=20=20RH?= =?UTF-8?q?CE=20=E7=AC=AC=E4=B9=9D=E7=AF=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit --- ... (SMTP) using null-client Configuration.md | 152 ++++++++++++++++++ 1 file changed, 152 insertions(+) create mode 100644 sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md diff --git a/sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md b/sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md new file mode 100644 index 0000000000..2f89eb9064 --- /dev/null +++ b/sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md @@ -0,0 +1,152 @@ +How to Setup Postfix Mail Server (SMTP) using null-client Configuration – Part 9 +================================================================================ +Regardless of the many online communication methods that are available today, email remains a practical way to deliver messages from one end of the world to another, or to a person sitting in the office next to ours. + +The following image illustrates the process of email transport, starting with the sender until the message reaches the recipient’s inbox: + +![How Mail Setup Works](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png) + +How Mail Setup Works + +To make this possible, several things happen behind the scenes. In order for an email message to be delivered from a client application (such as [Thunderbird][1], Outlook, or webmail services such as Gmail or Yahoo! Mail) to a mail server, and from there to the destination server and finally to its intended recipient, an SMTP (Simple Mail Transfer Protocol) service must be in place in each server. + +That is the reason why in this article we will explain how to set up an SMTP server in RHEL 7 where emails sent by local users (even to other local users) are forwarded to a central mail server for easier access.
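Under the hood, the SMTP exchange mentioned above is a short plain-text dialogue between servers. The sketch below writes out what a minimal session would look like, using the example hostnames and addresses from this article; it only creates a local file, and replaying it against port 25 would of course require the servers configured below:

```shell
# Illustrative sketch of a raw SMTP session (example hosts/addresses from
# this article). Written to a file here; nothing is sent anywhere.
cat > /tmp/smtp-session.txt <<'EOF'
HELO box1.mydomain.com
MAIL FROM:<tecmint@mydomain.com>
RCPT TO:<tecmint@mydomain.com>
DATA
Subject: SMTP by hand

Hello from box1.
.
QUIT
EOF
# Once the relay is up, something like this could replay it:
#   nc mail.mydomain.com 25 < /tmp/smtp-session.txt
grep -E '^(MAIL FROM|RCPT TO)' /tmp/smtp-session.txt
```

The MAIL FROM/RCPT TO lines form the envelope, which is what Postfix routes on; the headers and body after DATA are only payload.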
+ +In the exam’s requirements this is called a null-client setup. + +Our test environment will consist of an originating mail server and a central mail server or relayhost. + + Original Mail Server: (hostname: box1.mydomain.com / IP: 192.168.0.18) + Central Mail Server: (hostname: mail.mydomain.com / IP: 192.168.0.20) + +For name resolution we will use the well-known /etc/hosts file on both boxes: + + 192.168.0.18 box1.mydomain.com box1 + 192.168.0.20 mail.mydomain.com mail + +### Installing Postfix and Firewall / SELinux Considerations ### + +To begin, we will need to (in both servers): + +**1. Install Postfix:** + + # yum update && yum install postfix + +**2. Start the service and enable it to run on future reboots:** + + # systemctl start postfix + # systemctl enable postfix + +**3. Allow mail traffic through the firewall:** + + # firewall-cmd --permanent --add-service=smtp + # firewall-cmd --add-service=smtp + +![Open Mail Server Port in Firewall](http://www.tecmint.com/wp-content/uploads/2015/09/Allow-Traffic-through-Firewall.png) + +Open Mail Server SMTP Port in Firewall + +**4. Configure Postfix on box1.mydomain.com.** + +Postfix’s main configuration file is located in /etc/postfix/main.cf. This file itself is a great documentation source as the included comments explain the purpose of the program’s settings. + +For brevity, let’s display only the lines that need to be edited (yes, you need to leave mydestination blank in the originating server; otherwise the emails will be stored locally as opposed to in a central mail server which is what we actually want): + +**Configure Postfix on box1.mydomain.com** + +---------- + + myhostname = box1.mydomain.com + mydomain = mydomain.com + myorigin = $mydomain + inet_interfaces = loopback-only + mydestination = + relayhost = 192.168.0.20 + +**5. 
Configure Postfix on mail.mydomain.com.** + +**Configure Postfix on mail.mydomain.com** + +---------- + + myhostname = mail.mydomain.com + mydomain = mydomain.com + myorigin = $mydomain + inet_interfaces = all + mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain + mynetworks = 192.168.0.0/24, 127.0.0.0/8 + +And set the related SELinux boolean to true permanently if not already done: + + # setsebool -P allow_postfix_local_write_mail_spool on + +![Set Postfix SELinux Permission](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Postfix-SELinux-Permission.png) + +Set Postfix SELinux Permission + +The above SELinux boolean will allow Postfix to write to the mail spool in the central server. + +**6. Restart the service on both servers for the changes to take effect:** + + # systemctl restart postfix + +If Postfix does not start correctly, you can use the following commands to troubleshoot: + + # systemctl -l status postfix + # journalctl -xn + # postconf -n + +### Testing the Postfix Mail Servers ### + +To test the mail servers, you can use any Mail User Agent (most commonly known as MUA for short) such as [mail or mutt][2]. + +Since mutt is a personal favorite, I will use it in box1 to send an email to user tecmint using an existing file (mailbody.txt) as the message body: + + # mutt -s "Part 9-RHCE series" tecmint@mydomain.com < mailbody.txt + +![Test Postfix Mail Server](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Postfix-Mail-Server.png) + +Test Postfix Mail Server + +Now go to the central mail server (mail.mydomain.com), log on as user tecmint, and check whether the email was received: + + # su - tecmint + # mail + +![Check Postfix Mail Server Delivery](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Postfix-Mail-Server-Delivery.png) + +Check Postfix Mail Server Delivery + +If the email was not received, check root’s mail spool for a warning or error notification.
You may also want to make sure that the SMTP service is running on both servers and that port 25 is open in the central mail server using the [nmap command][3]: + + # nmap -PN 192.168.0.20 + +![Troubleshoot Postfix Mail Server](http://www.tecmint.com/wp-content/uploads/2015/09/Troubleshoot-Postfix-Mail-Server.png) + +Troubleshoot Postfix Mail Server + +### Summary ### + +Setting up a mail server and a relay host as shown in this article is an essential skill that every system administrator must have, and represents the foundation for understanding and installing more complex scenarios, such as a mail server hosting a live domain for several (even hundreds or thousands of) email accounts. + +Please note that this kind of setup requires a DNS server, which is out of the scope of this guide, but you can use the following article to set up a DNS server: + +- [Setup Cache only DNS Server in CentOS/RHEL 07][4] + +Finally, I highly recommend you become familiar with Postfix’s configuration file (main.cf) and the program’s man page. If in doubt, don’t hesitate to drop us a line using the form below or using our forum, Linuxsay.com, where you will get almost immediate help from Linux experts from all around the world.
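The null-client settings covered above can also be sanity-checked with a few lines of shell. This is a hedged sketch that runs against a scratch copy of main.cf filled with this article's example values; on a real originating server you would point CONF at /etc/postfix/main.cf instead:

```shell
# Sanity-check a main.cf against the null-client settings described above.
# CONF points at a scratch copy filled with this article's example values;
# on a real box1 you would set CONF=/etc/postfix/main.cf instead.
CONF=/tmp/postfix-demo-main.cf
cat > "$CONF" <<'EOF'
myhostname = box1.mydomain.com
mydomain = mydomain.com
myorigin = $mydomain
inet_interfaces = loopback-only
mydestination =
relayhost = 192.168.0.20
EOF

check() {
  # check <parameter> <expected-value>: compare against the value in $CONF
  val=$(sed -n "s/^$1 *= *//p" "$CONF")
  if [ "$val" = "$2" ]; then echo "OK: $1"; else echo "FAIL: $1 (got '$val')"; fi
}

check mydestination ""
check inet_interfaces loopback-only
check relayhost 192.168.0.20
```

On an originating (null-client) server configured as above, all three checks should print OK; a non-empty mydestination is the most common reason mail stays local instead of reaching the relayhost.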
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/setup-postfix-mail-server-smtp-using-null-client-on-centos/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/install-thunderbird-17-in-ubuntu-xubuntu-linux-mint/ +[2]:http://www.tecmint.com/send-mail-from-command-line-using-mutt-command/ +[3]:http://www.tecmint.com/nmap-command-examples/ +[4]:http://www.tecmint.com/setup-dns-cache-server-in-centos-7/ \ No newline at end of file From e71dcfb140c85fd782521ad8cc2543fa432b8402 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 1 Oct 2015 10:38:15 +0800 Subject: [PATCH 627/697] translating --- ...0 Install and use Ansible (Automation Tool) in CentOS 7.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md b/sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md index 9d417ba1a6..f90b7ef4b5 100644 --- a/sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md +++ b/sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md @@ -1,3 +1,5 @@ +translating---geekpi + Install and use Ansible (Automation Tool) in CentOS 7 ================================================================================ Ansible is a free & open source Configuration and automation tool for UNIX like operating system. It is written in python and similar to Chef or Puppet but there is one difference and advantage of Ansible is that we don’t need to install any agent on the nodes. It uses SSH for making communication to its nodes. 
@@ -96,4 +98,4 @@ via: http://www.linuxtechi.com/install-and-use-ansible-in-centos-7/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.linuxtechi.com/author/pradeep/ \ No newline at end of file +[a]:http://www.linuxtechi.com/author/pradeep/ From bbb05c2d443c584533b53c9ec731df8edac82c1d Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 1 Oct 2015 11:16:55 +0800 Subject: [PATCH 628/697] translated --- ...e Ansible (Automation Tool) in CentOS 7.md | 49 +++++++++---------- 1 file changed, 24 insertions(+), 25 deletions(-) rename {sources => translated}/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md (57%) diff --git a/sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md b/translated/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md similarity index 57% rename from sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md rename to translated/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md index f90b7ef4b5..0527b51b9c 100644 --- a/sources/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md +++ b/translated/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md @@ -1,46 +1,44 @@ -translating---geekpi - -Install and use Ansible (Automation Tool) in CentOS 7 +在CentOS 7中安装并使用Ansible(自动化工具) ================================================================================ -Ansible is a free & open source Configuration and automation tool for UNIX like operating system. It is written in python and similar to Chef or Puppet but there is one difference and advantage of Ansible is that we don’t need to install any agent on the nodes. It uses SSH for making communication to its nodes. 
+Ansible是一款为类Unix系统开发的自由开源的配置和自动化工具。它用Python写成,和Chef、Puppet类似,但有一个不同点、也是Ansible的一个优势:我们不需要在节点中安装任何客户端。它使用SSH来和节点进行通信。 -In this article we will install and configure Ansible in CentOS 7 and will try to manage its two nodes. +本篇中我们将在CentOS 7上安装并配置Ansible,并且尝试管理两个节点。 -**Ansible Server** – ansible.linuxtechi.com ( 192.168.1.15 ) +**Ansible 服务端** – ansible.linuxtechi.com ( 192.168.1.15 ) **Nodes** – 192.168.1.9 , 192.168.1.10 -### Step :1 Set EPEL repository ### +### 第一步: 设置EPEL仓库 ### -Ansible package is not available in the default yum repositories, so we will enable epel repository for CentOS 7 using below commands +Ansible软件包默认不在yum仓库中,因此我们需要使用下面的命令启用epel仓库。 [root@ansible ~]# rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm -### Step:2 Install Anisble using yum command ### +### 第二步: 使用yum安装Ansible ### [root@ansible ~]# yum install ansible -Once the installation is completed, check the ansible version : +安装完成后,检查ansible版本: [root@ansible ~]# ansible --version ![ansible-version](http://www.linuxtechi.com/wp-content/uploads/2015/09/ansible-version.jpg) -### Step:3 Setup keys based SSH authentication with Nodes. ### +### 第三步: 设置用于节点鉴权的SSH密钥 ### -Generate keys on the Ansible server and copy public key to the nodes. +在Ansible服务端生成密钥,并且复制公钥到节点中。 root@ansible ~]# ssh-keygen ![ssh-keygen](http://www.linuxtechi.com/wp-content/uploads/2015/09/ssh-keygen.jpg) -Use ssh-copy-id command to copy public key of Ansible server to its nodes. +使用ssh-copy-id命令来复制Ansible公钥到节点中。 ![ssh-copy-id-command](http://www.linuxtechi.com/wp-content/uploads/2015/09/ssh-copy-id-command.jpg) -### Step:4 Define the nodes or inventory of servers for Ansible. ### +### 第四步:为Ansible定义节点的清单 ### -File ‘**/etc/ansible/hosts**‘ maintains the inventory of servers for Ansible. +文件 ‘**/etc/ansible/hosts**‘ 维护了Ansible中服务器的清单。
+文件 ‘**/etc/ansible/hosts**‘ 维护了Ansible中服务器的清单。 [root@ansible ~]# vi /etc/ansible/hosts [test-servers] @@ -48,41 +46,42 @@ File ‘**/etc/ansible/hosts**‘ maintains the inventory of servers for Ansible 192.168.1.10 Save and exit the file. +保存并退出文件 -Sample output of hosts file. +主机文件示例。 ![ansible-host](http://www.linuxtechi.com/wp-content/uploads/2015/09/ansible-host.jpg) -### Step:5 Now try to run the Commands from Ansible Server. ### +### 第五步:尝试在Ansible服务端运行命令 ### -Check the connectivity of ‘test-servers’ or ansible nodes using ping +使用ping检查‘test-servers’或者ansible节点的连通性。 [root@ansible ~]# ansible -m ping 'test-servers' ![ansible-ping](http://www.linuxtechi.com/wp-content/uploads/2015/09/ansible-ping.jpg) -#### Executing Shell commands : #### +#### 执行shell命令 #### -**Example :1 Check the uptime of Ansible nodes** +**例子1:检查Ansible节点的运行时间 ** [root@ansible ~]# ansible -m command -a "uptime" 'test-servers' ![ansible-uptime](http://www.linuxtechi.com/wp-content/uploads/2015/09/ansible-uptime.jpg) -**Example:2 Check Kernel Version of nodes** +**例子2:检查节点的内核版本 ** [root@ansible ~]# ansible -m command -a "uname -r" 'test-servers' ![kernel-version-ansible](http://www.linuxtechi.com/wp-content/uploads/2015/09/kernel-version-ansible.jpg) -**Example:3 Adding a user to the nodes** +**例子3:给节点增加用户 ** [root@ansible ~]# ansible -m command -a "useradd mark" 'test-servers' [root@ansible ~]# ansible -m command -a "grep mark /etc/passwd" 'test-servers' - + ![useradd-ansible](http://www.linuxtechi.com/wp-content/uploads/2015/09/useradd-ansible.jpg) -**Example:4 Redirecting the output of command to a file** +**例子4:重定向输出到文件中** [root@ansible ~]# ansible -m command -a "df -Th" 'test-servers' > /tmp/command-output.txt @@ -93,7 +92,7 @@ Check the connectivity of ‘test-servers’ or ansible nodes using ping via: http://www.linuxtechi.com/install-and-use-ansible-in-centos-7/ 作者:[Pradeep Kumar][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 
校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 4df4ad36b67da744e81b20981fa9ea14bcd63154 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Thu, 1 Oct 2015 21:57:35 +0800 Subject: [PATCH 629/697] Translating sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md --- ...Postfix Mail Server (SMTP) using null-client Configuration.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md b/sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md index 2f89eb9064..77b508db66 100644 --- a/sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md +++ b/sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md @@ -1,3 +1,4 @@ +ictlyh Translating How to Setup Postfix Mail Server (SMTP) using null-client Configuration – Part 9 ================================================================================ Regardless of the many online communication methods that are available today, email remains a practical way to deliver messages from one end of the world to another, or to a person sitting in the office next to ours. From aebe0a04fb45d1732002888baa596c8c6d76e5c5 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 2 Oct 2015 00:15:23 +0800 Subject: [PATCH 630/697] =?UTF-8?q?=E5=BD=92=E6=A1=A3=20201509?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- published/{ => 201509}/20141223 Defending the Free Linux World.md | 0 ... 
set up IPv6 BGP peering and filtering in Quagga BGP router.md | 0 ...Her Interview Experience on RedHat Linux Package Management.md | 0 ...wto Interactively Perform Tasks with Docker using Kitematic.md | 0 .../{ => 201509}/20150728 Process of the Linux kernel building.md | 0 ...50730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md | 0 ...nx as Rreverse Proxy or Load Balancer with Weave and Docker.md | 0 published/{ => 201509}/20150803 Managing Linux Logs.md | 0 ...e logging in Open vSwitch for debugging and troubleshooting.md | 0 .../20150811 How to Install Snort and Usage in Ubuntu 15.04.md | 0 ...omamndline Tool to Find and Delete Duplicate Files in Linux.md | 0 ...e Text-to-Speech Schedule a Job and Watch Commands in Linux.md | 0 ...JBoss Data Virtualization GA with OData in Docker Container.md | 0 .../{ => 201509}/20150813 Linux file system hierarchy v2.0.md | 0 .../20150816 How to migrate MySQL to MariaDB on Linux.md | 0 ...s--How to count the number of threads in a process on Linux.md | 0 ...h Answers--How to fix Wireshark GUI freeze on Linux desktop.md | 0 ...r Linux Birthday-- A 22 Years of Journey and Still Counting.md | 0 ...ker Working on Security Components Live Container Migration.md | 0 .../20150819 Linuxcon--The Changing Role of the Server OS.md | 0 .../20150820 A Look at What's Next for the Linux Kernel.md | 0 .../20150821 Top 4 open source command-line email clients.md | 0 .../20150824 Basics Of NetworkManager Command Line Tool Nmcli.md | 0 ... 
Fix No Bootable Device Found Error After Installing Ubuntu.md | 0 ...Add Hindi And Devanagari Support In Antergos And Arch Linux.md | 0 ...reate an AP in Ubuntu 15.04 to connect to Android or iPhone.md | 0 .../20150824 Linux about to gain a new file system--bcachefs.md | 0 ...ne Several Smaller Partition into One Large Virtual Storage.md | 0 ...4 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md | 0 .../{ => 201509}/20150826 Five Super Cool Open Source Games.md | 0 ...6 How to set up a system status page of your infrastructure.md | 0 ...SH Based Client for Connecting Remote Unix or Linux Systems.md | 0 .../20150827 Xtreme Download Manager Updated With Fresh GUI.md | 0 .../{ => 201509}/20150901 How to Defragment Linux Systems.md | 0 ...901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md | 0 .../20150901 How to automatically dim your screen on Linux.md | 0 ...stall The Latest Linux Kernel in Ubuntu Easily via A Script.md | 0 published/{ => 201509}/20150901 Is Linux Right For You.md | 0 ...'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md | 0 ...150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md | 0 ...ISH--A smart and user-friendly command line shell for Linux.md | 0 .../20150906 How To Set Up Your FTP Server In Linux.md | 0 .../20150906 How to Install DNSCrypt and Unbound in Arch Linux.md | 0 .../20150906 How to Install QGit Viewer in Ubuntu 14.04.md | 0 ...50906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md | 0 ... 
How to Download Install and Configure Plank Dock in Ubuntu.md | 0 .../{ => 201509}/20150908 List Of 10 Funny Linux Commands.md | 0 ...rical and Statistical Uptime of System With tuptime Utility.md | 0 ...911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md | 0 ...h Answers--How to remove unused old kernel images on Ubuntu.md | 0 ...x-Based Open Source OS Runs 42 Percent of Dell PCs in China.md | 0 .../20150916 Enable Automatic System Updates In Ubuntu.md | 0 ...ers--How to find out which CPU core a process is running on.md | 0 ...t 01--Reviewing Essential Commands and System Documentation.md | 0 ...ries--Part 02--How to Perform File and Directory Management.md | 0 ...A Series--Part 03--How to Manage Users and Groups in RHEL 7.md | 0 ...s with Nano and Vim or Analyzing text with grep and regexps.md | 0 ...nagement in RHEL 7--Boot Shutdown and Everything in Between.md | 0 ... 'Parted' and 'SSM' to Configure and Encrypt System Storage.md | 0 ...CLs (Access Control Lists) and Mounting Samba or NFS Shares.md | 0 ...ecuring SSH, Setting Hostname and Enabling Network Services.md | 0 ...--Installing, Configuring and Securing a Web and FTP Server.md | 0 ...ment, Automating Tasks with Cron and Monitoring System Logs.md | 0 ...ls and Network Traffic Control Using FirewallD and Iptables.md | 0 64 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => 201509}/20141223 Defending the Free Linux World.md (100%) rename published/{ => 201509}/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md (100%) rename published/{ => 201509}/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md (100%) rename published/{ => 201509}/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md (100%) rename published/{ => 201509}/20150728 Process of the Linux kernel building.md (100%) rename published/{ => 201509}/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md (100%) rename 
published/{ => 201509}/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md (100%) rename published/{ => 201509}/20150803 Managing Linux Logs.md (100%) rename published/{ => 201509}/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md (100%) rename published/{ => 201509}/20150811 How to Install Snort and Usage in Ubuntu 15.04.md (100%) rename published/{ => 201509}/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md (100%) rename published/{ => 201509}/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md (100%) rename published/{ => 201509}/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md (100%) rename published/{ => 201509}/20150813 Linux file system hierarchy v2.0.md (100%) rename published/{ => 201509}/20150816 How to migrate MySQL to MariaDB on Linux.md (100%) rename published/{ => 201509}/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md (100%) rename published/{ => 201509}/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md (100%) rename published/{ => 201509}/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md (100%) rename published/{ => 201509}/20150818 Docker Working on Security Components Live Container Migration.md (100%) rename published/{ => 201509}/20150819 Linuxcon--The Changing Role of the Server OS.md (100%) rename published/{ => 201509}/20150820 A Look at What's Next for the Linux Kernel.md (100%) rename published/{ => 201509}/20150821 Top 4 open source command-line email clients.md (100%) rename published/{ => 201509}/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md (100%) rename published/{ => 201509}/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md (100%) rename published/{ => 201509}/20150824 How 
To Add Hindi And Devanagari Support In Antergos And Arch Linux.md (100%) rename published/{ => 201509}/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md (100%) rename published/{ => 201509}/20150824 Linux about to gain a new file system--bcachefs.md (100%) rename published/{ => 201509}/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md (100%) rename published/{ => 201509}/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md (100%) rename published/{ => 201509}/20150826 Five Super Cool Open Source Games.md (100%) rename published/{ => 201509}/20150826 How to set up a system status page of your infrastructure.md (100%) rename published/{ => 201509}/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md (100%) rename published/{ => 201509}/20150827 Xtreme Download Manager Updated With Fresh GUI.md (100%) rename published/{ => 201509}/20150901 How to Defragment Linux Systems.md (100%) rename published/{ => 201509}/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md (100%) rename published/{ => 201509}/20150901 How to automatically dim your screen on Linux.md (100%) rename published/{ => 201509}/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md (100%) rename published/{ => 201509}/20150901 Is Linux Right For You.md (100%) rename published/{ => 201509}/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md (100%) rename published/{ => 201509}/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md (100%) rename published/{ => 201509}/20150906 FISH--A smart and user-friendly command line shell for Linux.md (100%) rename published/{ => 201509}/20150906 How To Set Up Your FTP Server In Linux.md (100%) rename published/{ => 201509}/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md (100%) rename published/{ => 201509}/20150906 How to Install QGit Viewer in 
Ubuntu 14.04.md (100%) rename published/{ => 201509}/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md (100%) rename published/{ => 201509}/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md (100%) rename published/{ => 201509}/20150908 List Of 10 Funny Linux Commands.md (100%) rename published/{ => 201509}/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md (100%) rename published/{ => 201509}/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md (100%) rename published/{ => 201509}/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md (100%) rename published/{ => 201509}/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md (100%) rename published/{ => 201509}/20150916 Enable Automatic System Updates In Ubuntu.md (100%) rename published/{ => 201509}/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md (100%) rename published/{ => 201509}/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md (100%) rename published/{ => 201509}/RHCSA Series--Part 02--How to Perform File and Directory Management.md (100%) rename published/{ => 201509}/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md (100%) rename published/{ => 201509}/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md (100%) rename published/{ => 201509}/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md (100%) rename published/{ => 201509}/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md (100%) rename published/{ => 201509}/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md (100%) rename published/{ => 201509}/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md 
(100%) rename published/{ => 201509}/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md (100%) rename published/{ => 201509}/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md (100%) rename published/{ => 201509}/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md (100%) diff --git a/published/20141223 Defending the Free Linux World.md b/published/201509/20141223 Defending the Free Linux World.md similarity index 100% rename from published/20141223 Defending the Free Linux World.md rename to published/201509/20141223 Defending the Free Linux World.md diff --git a/published/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md b/published/201509/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md similarity index 100% rename from published/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md rename to published/201509/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md diff --git a/published/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md b/published/201509/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md similarity index 100% rename from published/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md rename to published/201509/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md diff --git a/published/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md b/published/201509/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md similarity index 100% rename from published/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md rename to published/201509/20150722 Howto Interactively Perform Tasks with Docker using 
Kitematic.md diff --git a/published/20150728 Process of the Linux kernel building.md b/published/201509/20150728 Process of the Linux kernel building.md similarity index 100% rename from published/20150728 Process of the Linux kernel building.md rename to published/201509/20150728 Process of the Linux kernel building.md diff --git a/published/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md b/published/201509/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md similarity index 100% rename from published/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md rename to published/201509/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md diff --git a/published/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/published/201509/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md similarity index 100% rename from published/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md rename to published/201509/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md diff --git a/published/20150803 Managing Linux Logs.md b/published/201509/20150803 Managing Linux Logs.md similarity index 100% rename from published/20150803 Managing Linux Logs.md rename to published/201509/20150803 Managing Linux Logs.md diff --git a/published/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/published/201509/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md similarity index 100% rename from published/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md rename to published/201509/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md diff --git a/published/20150811 
How to Install Snort and Usage in Ubuntu 15.04.md b/published/201509/20150811 How to Install Snort and Usage in Ubuntu 15.04.md similarity index 100% rename from published/20150811 How to Install Snort and Usage in Ubuntu 15.04.md rename to published/201509/20150811 How to Install Snort and Usage in Ubuntu 15.04.md diff --git a/published/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md b/published/201509/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md similarity index 100% rename from published/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md rename to published/201509/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md diff --git a/published/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md b/published/201509/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md similarity index 100% rename from published/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md rename to published/201509/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md diff --git a/published/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md b/published/201509/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md similarity index 100% rename from published/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md rename to published/201509/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md diff --git a/published/20150813 Linux file system hierarchy v2.0.md b/published/201509/20150813 Linux file system hierarchy v2.0.md similarity index 100% rename from published/20150813 Linux file system hierarchy v2.0.md rename to published/201509/20150813 Linux file 
system hierarchy v2.0.md diff --git a/published/20150816 How to migrate MySQL to MariaDB on Linux.md b/published/201509/20150816 How to migrate MySQL to MariaDB on Linux.md similarity index 100% rename from published/20150816 How to migrate MySQL to MariaDB on Linux.md rename to published/201509/20150816 How to migrate MySQL to MariaDB on Linux.md diff --git a/published/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md b/published/201509/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md similarity index 100% rename from published/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md rename to published/201509/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md diff --git a/published/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md b/published/201509/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md similarity index 100% rename from published/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md rename to published/201509/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md diff --git a/published/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md b/published/201509/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md similarity index 100% rename from published/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md rename to published/201509/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md diff --git a/published/20150818 Docker Working on Security Components Live Container Migration.md b/published/201509/20150818 Docker Working on Security Components Live Container Migration.md similarity index 100% rename from published/20150818 Docker 
Working on Security Components Live Container Migration.md rename to published/201509/20150818 Docker Working on Security Components Live Container Migration.md diff --git a/published/20150819 Linuxcon--The Changing Role of the Server OS.md b/published/201509/20150819 Linuxcon--The Changing Role of the Server OS.md similarity index 100% rename from published/20150819 Linuxcon--The Changing Role of the Server OS.md rename to published/201509/20150819 Linuxcon--The Changing Role of the Server OS.md diff --git a/published/20150820 A Look at What's Next for the Linux Kernel.md b/published/201509/20150820 A Look at What's Next for the Linux Kernel.md similarity index 100% rename from published/20150820 A Look at What's Next for the Linux Kernel.md rename to published/201509/20150820 A Look at What's Next for the Linux Kernel.md diff --git a/published/20150821 Top 4 open source command-line email clients.md b/published/201509/20150821 Top 4 open source command-line email clients.md similarity index 100% rename from published/20150821 Top 4 open source command-line email clients.md rename to published/201509/20150821 Top 4 open source command-line email clients.md diff --git a/published/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md b/published/201509/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md similarity index 100% rename from published/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md rename to published/201509/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md diff --git a/published/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md b/published/201509/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md similarity index 100% rename from published/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md rename to published/201509/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md diff --git a/published/20150824 How To Add Hindi And Devanagari 
Support In Antergos And Arch Linux.md b/published/201509/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md similarity index 100% rename from published/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md rename to published/201509/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md diff --git a/published/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md b/published/201509/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md similarity index 100% rename from published/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md rename to published/201509/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md diff --git a/published/20150824 Linux about to gain a new file system--bcachefs.md b/published/201509/20150824 Linux about to gain a new file system--bcachefs.md similarity index 100% rename from published/20150824 Linux about to gain a new file system--bcachefs.md rename to published/201509/20150824 Linux about to gain a new file system--bcachefs.md diff --git a/published/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md b/published/201509/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md similarity index 100% rename from published/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md rename to published/201509/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md diff --git a/published/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md b/published/201509/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md similarity index 100% rename from published/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md rename to published/201509/20150824 Watch These Kids Having Fun With Linux Terminal In Ubuntu.md 
diff --git a/published/20150826 Five Super Cool Open Source Games.md b/published/201509/20150826 Five Super Cool Open Source Games.md similarity index 100% rename from published/20150826 Five Super Cool Open Source Games.md rename to published/201509/20150826 Five Super Cool Open Source Games.md diff --git a/published/20150826 How to set up a system status page of your infrastructure.md b/published/201509/20150826 How to set up a system status page of your infrastructure.md similarity index 100% rename from published/20150826 How to set up a system status page of your infrastructure.md rename to published/201509/20150826 How to set up a system status page of your infrastructure.md diff --git a/published/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md b/published/201509/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md similarity index 100% rename from published/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md rename to published/201509/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md diff --git a/published/20150827 Xtreme Download Manager Updated With Fresh GUI.md b/published/201509/20150827 Xtreme Download Manager Updated With Fresh GUI.md similarity index 100% rename from published/20150827 Xtreme Download Manager Updated With Fresh GUI.md rename to published/201509/20150827 Xtreme Download Manager Updated With Fresh GUI.md diff --git a/published/20150901 How to Defragment Linux Systems.md b/published/201509/20150901 How to Defragment Linux Systems.md similarity index 100% rename from published/20150901 How to Defragment Linux Systems.md rename to published/201509/20150901 How to Defragment Linux Systems.md diff --git a/published/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md b/published/201509/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md similarity index 100% rename 
from published/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md rename to published/201509/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md diff --git a/published/20150901 How to automatically dim your screen on Linux.md b/published/201509/20150901 How to automatically dim your screen on Linux.md similarity index 100% rename from published/20150901 How to automatically dim your screen on Linux.md rename to published/201509/20150901 How to automatically dim your screen on Linux.md diff --git a/published/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md b/published/201509/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md similarity index 100% rename from published/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md rename to published/201509/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md diff --git a/published/20150901 Is Linux Right For You.md b/published/201509/20150901 Is Linux Right For You.md similarity index 100% rename from published/20150901 Is Linux Right For You.md rename to published/201509/20150901 Is Linux Right For You.md diff --git a/published/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md b/published/201509/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md similarity index 100% rename from published/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md rename to published/201509/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md diff --git a/published/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md b/published/201509/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md similarity index 100% rename from published/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md rename to 
published/201509/20150906 Do Simple Math In Ubuntu And elementary OS With NaSC.md diff --git a/published/20150906 FISH--A smart and user-friendly command line shell for Linux.md b/published/201509/20150906 FISH--A smart and user-friendly command line shell for Linux.md similarity index 100% rename from published/20150906 FISH--A smart and user-friendly command line shell for Linux.md rename to published/201509/20150906 FISH--A smart and user-friendly command line shell for Linux.md diff --git a/published/20150906 How To Set Up Your FTP Server In Linux.md b/published/201509/20150906 How To Set Up Your FTP Server In Linux.md similarity index 100% rename from published/20150906 How To Set Up Your FTP Server In Linux.md rename to published/201509/20150906 How To Set Up Your FTP Server In Linux.md diff --git a/published/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md b/published/201509/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md similarity index 100% rename from published/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md rename to published/201509/20150906 How to Install DNSCrypt and Unbound in Arch Linux.md diff --git a/published/20150906 How to Install QGit Viewer in Ubuntu 14.04.md b/published/201509/20150906 How to Install QGit Viewer in Ubuntu 14.04.md similarity index 100% rename from published/20150906 How to Install QGit Viewer in Ubuntu 14.04.md rename to published/201509/20150906 How to Install QGit Viewer in Ubuntu 14.04.md diff --git a/published/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md b/published/201509/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md similarity index 100% rename from published/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md rename to published/201509/20150906 Install Qmmp 0.9.0 Winamp-like Audio Player in Ubuntu.md diff --git a/published/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md b/published/201509/20150908 
How to Download Install and Configure Plank Dock in Ubuntu.md similarity index 100% rename from published/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md rename to published/201509/20150908 How to Download Install and Configure Plank Dock in Ubuntu.md diff --git a/published/20150908 List Of 10 Funny Linux Commands.md b/published/201509/20150908 List Of 10 Funny Linux Commands.md similarity index 100% rename from published/20150908 List Of 10 Funny Linux Commands.md rename to published/201509/20150908 List Of 10 Funny Linux Commands.md diff --git a/published/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md b/published/201509/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md similarity index 100% rename from published/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md rename to published/201509/20150909 Linux Server See the Historical and Statistical Uptime of System With tuptime Utility.md diff --git a/published/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md b/published/201509/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md similarity index 100% rename from published/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md rename to published/201509/20150911 10 Useful Linux Command Line Tricks for Newbies--Part 2.md diff --git a/published/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md b/published/201509/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md similarity index 100% rename from published/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md rename to published/201509/20150914 Linux FAQs with Answers--How to remove unused old kernel images on Ubuntu.md diff --git a/published/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of 
Dell PCs in China.md b/published/201509/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md similarity index 100% rename from published/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md rename to published/201509/20150915 Ubuntu Linux-Based Open Source OS Runs 42 Percent of Dell PCs in China.md diff --git a/published/20150916 Enable Automatic System Updates In Ubuntu.md b/published/201509/20150916 Enable Automatic System Updates In Ubuntu.md similarity index 100% rename from published/20150916 Enable Automatic System Updates In Ubuntu.md rename to published/201509/20150916 Enable Automatic System Updates In Ubuntu.md diff --git a/published/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md b/published/201509/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md similarity index 100% rename from published/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md rename to published/201509/20150916 Linux FAQs with Answers--How to find out which CPU core a process is running on.md diff --git a/published/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md b/published/201509/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md similarity index 100% rename from published/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md rename to published/201509/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md diff --git a/published/RHCSA Series--Part 02--How to Perform File and Directory Management.md b/published/201509/RHCSA Series--Part 02--How to Perform File and Directory Management.md similarity index 100% rename from published/RHCSA Series--Part 02--How to Perform File and Directory Management.md rename to published/201509/RHCSA Series--Part 02--How to Perform File and Directory Management.md 
diff --git a/published/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md b/published/201509/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md similarity index 100% rename from published/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md rename to published/201509/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md diff --git a/published/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md b/published/201509/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md similarity index 100% rename from published/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md rename to published/201509/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md diff --git a/published/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md b/published/201509/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md similarity index 100% rename from published/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md rename to published/201509/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md diff --git a/published/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md b/published/201509/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md similarity index 100% rename from published/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md rename to published/201509/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md diff --git a/published/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md 
b/published/201509/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md similarity index 100% rename from published/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md rename to published/201509/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md diff --git a/published/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md b/published/201509/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md similarity index 100% rename from published/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md rename to published/201509/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md diff --git a/published/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md b/published/201509/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md similarity index 100% rename from published/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md rename to published/201509/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md diff --git a/published/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md b/published/201509/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md similarity index 100% rename from published/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md rename to published/201509/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md diff --git a/published/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md b/published/201509/RHCSA Series--Part 11--Firewall Essentials and 
Network Traffic Control Using FirewallD and Iptables.md similarity index 100% rename from published/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md rename to published/201509/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md From bc199705c35955c96ad2fda0b0064bc34111913e Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 2 Oct 2015 00:25:27 +0800 Subject: [PATCH 631/697] PUB:RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart' MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @FSSlc 快发完啦。加油哦· --- ... RHEL 7 Installations Using 'Kickstart'.md | 45 ++++++++++--------- 1 file changed, 23 insertions(+), 22 deletions(-) rename {translated/tech/RHCSA => published}/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md (64%) diff --git a/translated/tech/RHCSA/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md b/published/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md similarity index 64% rename from translated/tech/RHCSA/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md rename to published/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md index 25102ad8f9..27ae044218 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md +++ b/published/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md @@ -1,32 +1,33 @@ -RHCSA 系列: 使用 ‘Kickstart’完成 RHEL 7 的自动化安装 – Part 12 +RHCSA 系列(十二): 使用 Kickstart 完成 RHEL 7 的自动化安装 ================================================================================ -无论是在数据中心还是实验室环境,Linux 服务器很少是独立的机子,很可能有时你不得不安装多个以某种方式相互联系的机子。假如你将在单个服务器上手动安装 RHEL 7 所花的时间乘以你需要配置的机子个数,则这将导致你必须做出一场相当长的努力,而通过使用被称为 kicksta 的无人值守安装工具则可以避免这样的麻烦。 + +无论是在数据中心还是实验室环境,Linux 服务器很少是独立的机器,很可能有时你需要安装多个以某种方式相互联系的机器。假如你将在单个服务器上手动安装 RHEL 7 
所花的时间乘以你需要配置的机器数量,这将导致你必须做出一场相当长的努力,而通过使用被称为 kickstart 的无人值守安装工具则可以避免这样的麻烦。 在这篇文章中,我们将向你展示使用 kickstart 工具时所需的一切,以便在安装过程中,不用你时不时地照看“处在襁褓中”的服务器。 ![RHEL 7 的自动化 Kickstart 安装](http://www.tecmint.com/wp-content/uploads/2015/05/Automatic-Kickstart-Installation-of-RHEL-7.jpg) -RHCSA: RHEL 7 的自动化 Kickstart 安装 +*RHCSA: RHEL 7 的自动化 Kickstart 安装* #### Kickstart 和自动化安装简介 #### -Kickstart 是一种被用来执行无人值守操作系统安装和配置的自动化安装方法,主要被 RHEL(和其他 Fedora 的副产品,如 CentOS,Oracle Linux 等)所使用。因此,kickstart 安装方法可使得系统管理员只需考虑需要安装的软件包组和系统的配置,便可以得到相同的系统,从而省去必须手动安装这些软件包的麻烦。 +Kickstart 是一种被用来执行无人值守操作系统安装和配置的自动化安装方法,主要被 RHEL(以及其他 Fedora 的副产品,如 CentOS,Oracle Linux 等)所使用。因此,kickstart 安装方法可使得系统管理员只需考虑需要安装的软件包组和系统的配置,便可以得到相同的系统,从而省去必须手动安装这些软件包的麻烦。 -### 准备一次 Kickstart 安装 ### +### 准备 Kickstart 安装 ### -要执行一次 kickstart 安装,我们需要遵循下面的这些步骤: +要执行 kickstart 安装,我们需要遵循下面的这些步骤: 1. 创建一个 Kickstart 文件,它是一个带有多个预定义配置选项的纯文本文件。 -2. 使得 Kickstart 文件在可移动介质上可得,如一个硬盘或一个网络位置。客户端将使用 `rhel-server-7.0-x86_64-boot.iso` 镜像文件,而你还需要使得完全的 ISO 镜像(`rhel-server-7.0-x86_64-dvd.iso`)可从一个网络资源上获取得到,例如通过一个 FTP 服务器的 HTTP(在我们当前的例子中,我们将使用另一个 IP 地址为 192.168.0.18 的 RHEL 7 机子)。 +2. 将 Kickstart 文件保存在可移动介质上,如一个硬盘或一个网络位置。kickstart 客户端需要使用 `rhel-server-7.0-x86_64-boot.iso` 镜像文件,而你还需要可从一个网络资源上获取得到完整的 ISO 镜像 `rhel-server-7.0-x86_64-dvd.iso` ,例如通过一个 FTP 服务器的 HTTP 服务形式(在我们当前的例子中,我们将使用另一个 IP 地址为 192.168.0.18 的 RHEL 7 机器)。 3. 
开始 Kickstart 安装。 -为创建一个 kickstart 文件,请登陆你的红帽客户门户网站帐户,并使用 [Kickstart 配置工具][1] 来选择所需的安装选项。在向下滑动之前请仔细阅读每个选项,然后选择最适合你需求的选项: +要创建一个 kickstart 文件,请登录你的红帽客户门户网站(Red Hat Customer Portal)帐户,并使用 [Kickstart 配置工具][1] 来选择所需的安装选项。在向下滑动之前请仔细阅读每个选项,然后选择最适合你需求的选项: ![Kickstart 配置工具](http://www.tecmint.com/wp-content/uploads/2015/05/Kickstart-Configuration-Tool.png) -Kickstart 配置工具 +*Kickstart 配置工具* 假如你指定安装将通过 HTTP,FTP,NFS 来执行,请确保服务器上的防火墙允许这些服务通过。 @@ -59,13 +60,13 @@ Kickstart 配置工具 url --url=http://192.168.0.18//kickstart/media -这个目录是你解压 DVD 或 ISO 安装介质的地方。在执行解压之前,我们将把 ISO 安装文件作为一个回环设备挂载到 /media/rhel 目录下: +这个目录是你展开 DVD 或 ISO 安装介质内容的地方。在执行解压之前,我们将把 ISO 安装文件作为一个回环设备挂载到 /media/rhel 目录下: # mount -o loop /var/www/html/kickstart/rhel-server-7.0-x86_64-dvd.iso /media/rhel ![挂载 RHEL ISO 镜像](http://www.tecmint.com/wp-content/uploads/2015/05/Mount-RHEL-ISO-Image.png) -挂载 RHEL ISO 镜像 +*挂载 RHEL ISO 镜像* 接下来,复制 /media/rhel 中的全部文件到 /var/www/html/kickstart/media 目录: @@ -75,11 +76,11 @@ Kickstart 配置工具 ![Kickstart 媒体文件](http://www.tecmint.com/wp-content/uploads/2015/05/Kickstart-media-Files.png) -Kickstart 媒体文件 +*Kickstart 媒体文件* 现在,我们已经准备好开始 kickstart 安装了。 -不管你如何选择创建 kickstart 文件的方式,在执行安装之前检查这个文件的语法总是一个不错的主意。为此,我们需要安装 pykickstart 软件包。 +不管你如何选择创建 kickstart 文件的方式,在执行安装之前检查下这个文件的语法是否有误总是一个不错的主意。为此,我们需要安装 pykickstart 软件包。 # yum update && yum install pykickstart @@ -89,7 +90,7 @@ Kickstart 媒体文件 假如文件中的语法正确,你将不会得到任何输出,反之,假如文件中存在错误,你得到警告,向你提示在某一行中语法不正确或出错原因未知。 -### 执行一次 Kickstart 安装 ### +### 执行 Kickstart 安装 ### 首先,使用 rhel-server-7.0-x86_64-boot.iso 来启动你的客户端。当初始屏幕出现时,选择安装 RHEL 7.0 ,然后按 Tab 键来追加下面这一句,接着按 Enter 键: @@ -97,31 +98,31 @@ Kickstart 媒体文件 ![RHEL Kickstart 安装](http://www.tecmint.com/wp-content/uploads/2015/05/RHEL-Kickstart-Installation.png) -RHEL Kickstart 安装 +*RHEL Kickstart 安装* 其中 tecmint.bin 是先前创建的 kickstart 文件。 -当你按了 Enter 键后,自动安装就开始了,且你将看到一个列有正在被安装的软件的列表(软件包的数目和名称根据你所选择的程序和软件包组而有所不同): +当你按了 Enter 键后,自动安装就开始了,且你将看到一个列有正在被安装的软件的列表(软件包的数目和名称根据你所选择的程序和软件包组而有所不同): ![RHEL 7 的自动化 Kickstart 
安装](http://www.tecmint.com/wp-content/uploads/2015/05/Kickstart-Automatic-Installation.png) -RHEL 7 的自动化 Kickstart 安装 +*RHEL 7 的自动化 Kickstart 安装* 当自动化过程结束后,将提示你移除安装介质,接着你就可以启动到你新安装的系统中了: ![RHEL 7 启动屏幕](http://www.tecmint.com/wp-content/uploads/2015/05/RHEL-7.png) -RHEL 7 启动屏幕 +*RHEL 7 启动屏幕* 尽管你可以像我们前面提到的那样,手动地创建你的 kickstart 文件,但你应该尽可能地考虑使用受推荐的方式:你可以使用在线配置工具,或者使用在安装过程中创建的位于 root 家目录下的 anaconda-ks.cfg 文件。 -这个文件实际上就是一个 kickstart 文件,所以你或许想在选择好所有所需的选项(可能需要更改逻辑卷布局或机子上所用的文件系统)后手动地安装第一个机子,接着使用产生的 anaconda-ks.cfg 文件来自动完成其余机子的安装过程。 +这个文件实际上就是一个 kickstart 文件,你或许想在选择好所有所需的选项(可能需要更改逻辑卷布局或机器上所用的文件系统)后手动地安装第一个机器,接着使用产生的 anaconda-ks.cfg 文件来自动完成其余机器的安装过程。 -另外,使用在线配置工具或 anaconda-ks.cfg 文件来引导将来的安装将允许你使用一个加密的 root 密码来执行系统的安装。 +另外,使用在线配置工具或 anaconda-ks.cfg 文件来引导将来的安装将允许你在系统安装时以加密的形式设置 root 密码。 ### 总结 ### -既然你知道了如何创建 kickstart 文件并如何使用它们来自动完成 RHEL 7 服务器的安装,你就可以忘记时时照看安装进度的过程了。这将给你时间来做其他的事情,或者若你足够幸运,你还可以用来休闲一番。 +既然你知道了如何创建 kickstart 文件并如何使用它们来自动完成 RHEL 7 服务器的安装,你就可以不用时时照看安装进度的过程了。这将给你时间来做其他的事情,或者若你足够幸运,你还可以用来休闲一番。 无论以何种方式,请使用下面的评论栏来让我们知晓你对这篇文章的看法。提问也同样欢迎! @@ -133,7 +134,7 @@ via: http://www.tecmint.com/automatic-rhel-installations-using-kickstart/ 作者:[Gabriel Cánepa][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 5a09959df246e3aa3c5596c6c8a97801c59acbc9 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 2 Oct 2015 00:27:12 +0800 Subject: [PATCH 632/697] RHCSA Series --- ...t 01--Reviewing Essential Commands and System Documentation.md | 0 ...ries--Part 02--How to Perform File and Directory Management.md | 0 ...A Series--Part 03--How to Manage Users and Groups in RHEL 7.md | 0 ...s with Nano and Vim or Analyzing text with grep and regexps.md | 0 ...nagement in RHEL 7--Boot Shutdown and Everything in Between.md | 0 ... 
'Parted' and 'SSM' to Configure and Encrypt System Storage.md | 0 ...CLs (Access Control Lists) and Mounting Samba or NFS Shares.md | 0 ...ecuring SSH, Setting Hostname and Enabling Network Services.md | 0 ...--Installing, Configuring and Securing a Web and FTP Server.md | 0 ...ment, Automating Tasks with Cron and Monitoring System Logs.md | 0 ...ls and Network Traffic Control Using FirewallD and Iptables.md | 0 ...s--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md | 0 12 files changed, 0 insertions(+), 0 deletions(-) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md (100%) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 02--How to Perform File and Directory Management.md (100%) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md (100%) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md (100%) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md (100%) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md (100%) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md (100%) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md (100%) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md (100%) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md (100%) rename published/{201509 => RHCSA Series}/RHCSA Series--Part 11--Firewall Essentials and Network 
Traffic Control Using FirewallD and Iptables.md (100%) rename published/{ => RHCSA Series}/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md (100%) diff --git a/published/201509/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md b/published/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md similarity index 100% rename from published/201509/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md rename to published/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md diff --git a/published/201509/RHCSA Series--Part 02--How to Perform File and Directory Management.md b/published/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md similarity index 100% rename from published/201509/RHCSA Series--Part 02--How to Perform File and Directory Management.md rename to published/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md diff --git a/published/201509/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md b/published/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md similarity index 100% rename from published/201509/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md rename to published/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md diff --git a/published/201509/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md b/published/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md similarity index 100% rename from published/201509/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md rename to published/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep 
and regexps.md diff --git a/published/201509/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md b/published/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md similarity index 100% rename from published/201509/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md rename to published/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md diff --git a/published/201509/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md b/published/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md similarity index 100% rename from published/201509/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md rename to published/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md diff --git a/published/201509/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md b/published/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md similarity index 100% rename from published/201509/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md rename to published/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md diff --git a/published/201509/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md b/published/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md similarity index 100% rename from published/201509/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md rename to published/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting 
Hostname and Enabling Network Services.md diff --git a/published/201509/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md b/published/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md similarity index 100% rename from published/201509/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md rename to published/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md diff --git a/published/201509/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md b/published/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md similarity index 100% rename from published/201509/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md rename to published/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md diff --git a/published/201509/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md b/published/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md similarity index 100% rename from published/201509/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md rename to published/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md diff --git a/published/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md b/published/RHCSA Series/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md similarity index 100% rename from published/RHCSA Series--Part 12--Automate RHEL 7 Installations Using 'Kickstart'.md rename to published/RHCSA Series/RHCSA Series--Part 
12--Automate RHEL 7 Installations Using 'Kickstart'.md From 75d64b9116f548f8359cd8884605d0d8a4045def Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Fri, 2 Oct 2015 21:41:22 +0800 Subject: [PATCH 633/697] [Translated]RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md --- ... Up LDAP-based Authentication in RHEL 7.md | 277 ------------------ ... Up LDAP-based Authentication in RHEL 7.md | 275 +++++++++++++++++ 2 files changed, 275 insertions(+), 277 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md b/sources/tech/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md deleted file mode 100644 index e3425f5164..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md +++ /dev/null @@ -1,277 +0,0 @@ -FSSlc translating - -RHCSA Series: Setting Up LDAP-based Authentication in RHEL 7 – Part 14 -================================================================================ -We will begin this article by outlining some LDAP basics (what it is, where it is used and why) and show how to set up a LDAP server and configure a client to authenticate against it using Red Hat Enterprise Linux 7 systems. - -![Setup LDAP Server and Client Authentication](http://www.tecmint.com/wp-content/uploads/2015/06/setup-ldap-server-and-configure-client-authentication.png) - -RHCSA Series: Setup LDAP Server and Client Authentication – Part 14 - -As we will see, there are several other possible application scenarios, but in this guide we will focus entirely on LDAP-based authentication. 
In addition, please keep in mind that due to the vastness of the subject, we will only cover its basics here, but you can refer to the documentation outlined in the summary for more in-depth details. - -For the same reason, you will note that I have decided to leave out several references to man pages of LDAP tools for the sake of brevity, but the corresponding explanations are at a fingertip’s distance (man ldapadd, for example). - -That said, let’s get started. - -**Our Testing Environment** - -Our test environment consists of two RHEL 7 boxes: - - Server: 192.168.0.18. FQDN: rhel7.mydomain.com - Client: 192.168.0.20. FQDN: ldapclient.mydomain.com - -If you want, you can use the machine installed in [Part 12: Automate RHEL 7 installations][1] using Kickstart as client. - -#### What is LDAP? #### - -LDAP stands for Lightweight Directory Access Protocol and consists in a set of protocols that allows a client to access, over a network, centrally stored information (such as a directory of login shells, absolute paths to home directories, and other typical system user information, for example) that should be accessible from different places or available to a large number of end users (another example would be a directory of home addresses and phone numbers of all employees in a company). - -Keeping such (and more) information centrally means it can be more easily maintained and accessed by everyone who has been granted permissions to use it. - -The following diagram offers a simplified diagram of LDAP, and is described below in greater detail: - -![LDAP Diagram](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Diagram.png) - -LDAP Diagram - -Explanation of above diagram in detail. - -- An entry in a LDAP directory represents a single unit or information and is uniquely identified by what is called a Distinguished Name. 
-- An attribute is a piece of information associated with an entry (for example, addresses, available contact phone numbers, and email addresses). -- Each attribute is assigned one or more values consisting in a space-separated list. A value that is unique per entry is called a Relative Distinguished Name. - -That being said, let’s proceed with the server and client installations. - -### Installing and Configuring a LDAP Server and Client ### - -In RHEL 7, LDAP is implemented by OpenLDAP. To install the server and client, use the following commands, respectively: - - # yum update && yum install openldap openldap-clients openldap-servers - # yum update && yum install openldap openldap-clients nss-pam-ldapd - -Once the installation is complete, there are some things we look at. The following steps should be performed on the server alone, unless explicitly noted: - -**1. Make sure SELinux does not get in the way by enabling the following booleans persistently, both on the server and the client:** - - # setsebool -P allow_ypbind=0 authlogin_nsswitch_use_ldap=0 - -Where allow_ypbind is required for LDAP-based authentication, and authlogin_nsswitch_use_ldap may be needed by some applications. - -**2. Enable and start the service:** - - # systemctl enable slapd.service - # systemctl start slapd.service - -Keep in mind that you can also disable, restart, or stop the service with [systemctl][2] as well: - - # systemctl disable slapd.service - # systemctl restart slapd.service - # systemctl stop slapd.service - -**3. 
Since the slapd service runs as the ldap user (which you can verify with ps -e -o pid,uname,comm | grep slapd), such user should own the /var/lib/ldap directory in order for the server to be able to modify entries created by administrative tools that can only be run as root (more on this in a minute).** - -Before changing the ownership of this directory recursively, copy the sample database configuration file for slapd into it: - - # cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG - # chown -R ldap:ldap /var/lib/ldap - -**4. Set up an OpenLDAP administrative user and assign a password:** - - # slappasswd - -as shown in the next image: - -![Set LDAP Admin Password](http://www.tecmint.com/wp-content/uploads/2015/06/Set-LDAP-Admin-Password.png) - -Set LDAP Admin Password - -and create an LDIF file (ldaprootpasswd.ldif) with the following contents: - - dn: olcDatabase={0}config,cn=config - changetype: modify - add: olcRootPW - olcRootPW: {SSHA}PASSWORD - -where: - -- PASSWORD is the hashed string obtained earlier. -- cn=config indicates global config options. -- olcDatabase indicates a specific database instance name and can be typically found inside /etc/openldap/slapd.d/cn=config. - -Referring to the theoretical background provided earlier, the `ldaprootpasswd.ldif` file will add an entry to the LDAP directory. In that entry, each line represents an attribute: value pair (where dn, changetype, add, and olcRootPW are the attributes and the strings to the right of each colon are their corresponding values). - -You may want to keep this in mind as we proceed further, and please note that we are using the same Common Names `(cn=)` throughout the rest of this article, where each step depends on the previous one. - -**5. 
Now, add the corresponding LDAP entry by specifying the URI referring to the ldap server, where only the protocol/host/port fields are allowed.** - - # ldapadd -H ldapi:/// -f ldaprootpasswd.ldif - -The output should be similar to: - -![LDAP Configuration](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Configuration.png) - -LDAP Configuration - -and import some basic LDAP definitions from the `/etc/openldap/schema` directory: - - # for def in cosine.ldif nis.ldif inetorgperson.ldif; do ldapadd -H ldapi:/// -f /etc/openldap/schema/$def; done - -![LDAP Definitions](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Definitions.png) - -LDAP Definitions - -**6. Have LDAP use your domain in its database.** - -Create another LDIF file, which we will call `ldapdomain.ldif`, with the following contents, replacing your domain (in the Domain Component dc=) and password as appropriate: - - dn: olcDatabase={1}monitor,cn=config - changetype: modify - replace: olcAccess - olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" - read by dn.base="cn=Manager,dc=mydomain,dc=com" read by * none - - dn: olcDatabase={2}hdb,cn=config - changetype: modify - replace: olcSuffix - olcSuffix: dc=mydomain,dc=com - - dn: olcDatabase={2}hdb,cn=config - changetype: modify - replace: olcRootDN - olcRootDN: cn=Manager,dc=mydomain,dc=com - - dn: olcDatabase={2}hdb,cn=config - changetype: modify - add: olcRootPW - olcRootPW: {SSHA}PASSWORD - - dn: olcDatabase={2}hdb,cn=config - changetype: modify - add: olcAccess - olcAccess: {0}to attrs=userPassword,shadowLastChange by - dn="cn=Manager,dc=mydomain,dc=com" write by anonymous auth by self write by * none - olcAccess: {1}to dn.base="" by * read - olcAccess: {2}to * by dn="cn=Manager,dc=mydomain,dc=com" write by * read - -Then load it as follows: - - # ldapmodify -H ldapi:/// -f ldapdomain.ldif - -![LDAP Domain Configuration](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Domain-Configuration.png) - 
-LDAP Domain Configuration - -**7. Now it’s time to add some entries to our LDAP directory. Attributes and values are separated by a colon `(:)` in the following file, which we’ll name `baseldapdomain.ldif`:** - - dn: dc=mydomain,dc=com - objectClass: top - objectClass: dcObject - objectclass: organization - o: mydomain com - dc: mydomain - - dn: cn=Manager,dc=mydomain,dc=com - objectClass: organizationalRole - cn: Manager - description: Directory Manager - - dn: ou=People,dc=mydomain,dc=com - objectClass: organizationalUnit - ou: People - - dn: ou=Group,dc=mydomain,dc=com - objectClass: organizationalUnit - ou: Group - -Add the entries to the LDAP directory: - - # ldapadd -x -D cn=Manager,dc=mydomain,dc=com -W -f baseldapdomain.ldif - -![Add LDAP Domain Attributes and Values](http://www.tecmint.com/wp-content/uploads/2015/06/Add-LDAP-Domain-Configuration.png) - -Add LDAP Domain Attributes and Values - -**8. Create a LDAP user called ldapuser (adduser ldapuser), then create the definitions for a LDAP group in `ldapgroup.ldif`.** - - # adduser ldapuser - # vi ldapgroup.ldif - -Add following content. - - dn: cn=Manager,ou=Group,dc=mydomain,dc=com - objectClass: top - objectClass: posixGroup - gidNumber: 1004 - -where gidNumber is the GID in /etc/group for ldapuser) and load it: - - # ldapadd -x -W -D "cn=Manager,dc=mydomain,dc=com" -f ldapgroup.ldif - -**9. 
Add a LDIF file with the definitions for user ldapuser (`ldapuser.ldif`):** - - dn: uid=ldapuser,ou=People,dc=mydomain,dc=com - objectClass: top - objectClass: account - objectClass: posixAccount - objectClass: shadowAccount - cn: ldapuser - uid: ldapuser - uidNumber: 1004 - gidNumber: 1004 - homeDirectory: /home/ldapuser - userPassword: {SSHA}fiN0YqzbDuDI0Fpqq9UudWmjZQY28S3M - loginShell: /bin/bash - gecos: ldapuser - shadowLastChange: 0 - shadowMax: 0 - shadowWarning: 0 - -and load it: - - # ldapadd -x -D cn=Manager,dc=mydomain,dc=com -W -f ldapuser.ldif - -![LDAP User Configuration](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-User-Configuration.png) - -LDAP User Configuration - -Likewise, you can delete the user entry you just created: - - # ldapdelete -x -W -D cn=Manager,dc=mydomain,dc=com "uid=ldapuser,ou=People,dc=mydomain,dc=com" - -**10. Allow communication through the firewall:** - - # firewall-cmd --add-service=ldap - -**11. Last, but not least, enable the client to authenticate using LDAP.** - -To help us in this final step, we will use the authconfig utility (an interface for configuring system authentication resources). - -Using the following command, the home directory for the requested user is created if it doesn’t exist after the authentication against the LDAP server succeeds: - - # authconfig --enableldap --enableldapauth --ldapserver=rhel7.mydomain.com --ldapbasedn="dc=mydomain,dc=com" --enablemkhomedir --update - -![LDAP Client Configuration](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Client-Configuration.png) - -LDAP Client Configuration - -### Summary ### - -In this article we have explained how to set up basic authentication against a LDAP server. To further configure the setup described in the present guide, please refer to [Chapter 13 – LDAP Configuration][3] in the RHEL 7 System administrator’s guide, paying special attention to the security settings using TLS. 
- -Feel free to leave any questions you may have using the comment form below. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/setup-ldap-server-and-configure-client-authentication/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/automatic-rhel-installations-using-kickstart/ -[2]:http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/ -[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/ch-Directory_Servers.html diff --git a/translated/tech/RHCSA/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md b/translated/tech/RHCSA/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md new file mode 100644 index 0000000000..9aba04d2cb --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md @@ -0,0 +1,275 @@ +RHCSA 系列: 在 RHEL 7 中设置基于 LDAP 的认证 – Part 14 +================================================================================ +在这篇文章中,我们将首先罗列一些 LDAP 的基础知识(它是什么,它被用于何处以及为什么会被这样使用),然后向你展示如何使用 RHEL 7 系统来设置一个 LDAP 服务器以及配置一个客户端来使用它达到认证的目的。 + +![设置 LDAP 服务器及客户端认证](http://www.tecmint.com/wp-content/uploads/2015/06/setup-ldap-server-and-configure-client-authentication.png) + +RHCSA 系列:设置 LDAP 服务器及客户端认证 – Part 14 + +正如你将看到的那样,关于认证,还存在其他可能的应用场景,但在这篇指南中,我们将只关注基于 LDAP 的认证。另外,请记住,由于这个话题的广泛性,在这里我们将只涵盖它的基础知识,但你可以参考位于总结部分中列出的文档,以此来了解更加深入的细节。 + +基于相同的原因,你将注意到:为了简洁起见,我已经决定省略了几个位于 man 页中 LDAP 工具的参考,但相应命令的解释是近在咫尺的(例如,输入 man ldapadd)。 + +那还是让我们开始吧。 + +**我们的测试环境** + +我们的测试环境包含两台 RHEL 7 机子: + + Server: 192.168.0.18. FQDN: rhel7.mydomain.com + Client: 192.168.0.20. 
FQDN: ldapclient.mydomain.com + +如若你想,你可以使用在 [Part 12: RHEL 7 的自动化安装][1] 中使用 Kickstart 安装的机器来作为客户端。 + +#### LDAP 是什么? #### + +LDAP 代表轻量级目录访问协议(Lightweight Directory Access Protocol),并包含在一系列协议之中,这些协议允许一个客户端通过网络去获取集中存储的信息(例如登录 shell 的目录,家目录的绝对路径,或者其他典型的系统用户信息),而这些信息可以从不同的地方访问到或被很多终端用户获取到(另一个例子是含有某个公司所有雇员的家庭地址和电话号码的目录)。 + +对于那些被赋予了权限可以使用这些信息的人来说,将这些信息进行集中管理意味着可以更容易地维护和获取。 + +下面的图表提供了一个简化了的关于 LDAP 的示意图,且在下面将会进行更多的描述: + +![LDAP 示意图](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Diagram.png) + +LDAP 示意图 + +下面是对上面示意图的一个详细解释。 + +- 在一个 LDAP 目录中,一个条目代表一个独立的信息单元,被所谓的 Distinguished Name 唯一识别。 +- 一个属性是一些与某个条目相关的信息(例如地址,有效的联系电话号码和邮箱地址)。 +- 每个属性被分配有一个或多个值,这些值被包含在一个以空格为分隔符的列表中。每个条目中那个唯一的值被称为一个 Relative Distinguished Name。 + +接下来,就让我们进入到有关服务器和客户端安装的内容。 + +### 安装和配置一个 LDAP 服务器和客户端 ### + +在 RHEL 7 中, LDAP 由 OpenLDAP 实现。为了安装服务器和客户端,分别使用下面的命令: + + # yum update && yum install openldap openldap-clients openldap-servers + # yum update && yum install openldap openldap-clients nss-pam-ldapd + +一旦安装完成,我们还需要关注一些事情。除非显式地指明,下面的步骤都只在服务器上执行: + +**1. 在服务器和客户端上,为了确保 SELinux 不会造成妨碍,长久地开启下列的布尔值:** + + # setsebool -P allow_ypbind=0 authlogin_nsswitch_use_ldap=0 + +其中 `allow_ypbind` 为基于 LDAP 的认证所需要,而 `authlogin_nsswitch_use_ldap`则可能会被某些应用所需要。 + +**2. 开启并启动服务:** + + # systemctl enable slapd.service + # systemctl start slapd.service + +记住你也可以使用 [systemctl][2] 来禁用,重启或停止服务: + + # systemctl disable slapd.service + # systemctl restart slapd.service + # systemctl stop slapd.service + +**3. 由于 slapd 服务是由 ldap 用户来运行的(你可以使用 `ps -e -o pid,uname,comm | grep slapd` 来验证),为了使得服务器能够更改由管理工具创建的条目,这个用户应该有目录 `/var/lib/ldap` 的所有权,而这些管理工具仅可以由 root 用户来运行(紧接着有更多这方面的内容)。** + +在递归地更改这个目录的所有权之前,将 slapd 的示例数据库配置文件复制进这个目录: + + # cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG + # chown -R ldap:ldap /var/lib/ldap + +**4.
设置一个 OpenLDAP 管理用户并设置密码:** + + # slappasswd + +正如下一幅图所展示的那样: + +![设置 LDAP 管理密码](http://www.tecmint.com/wp-content/uploads/2015/06/Set-LDAP-Admin-Password.png) + +设置 LDAP 管理密码 + +然后以下面的内容创建一个 LDIF 文件(`ldaprootpasswd.ldif`): + + dn: olcDatabase={0}config,cn=config + changetype: modify + add: olcRootPW + olcRootPW: {SSHA}PASSWORD + +其中: + +- PASSWORD 是先前得到的经过哈希处理的字符串。 +- cn=config 指的是全局配置选项。 +- olcDatabase 指的是一个特定的数据库实例的名称,并且通常可以在 `/etc/openldap/slapd.d/cn=config` 目录中发现。 + +根据上面提供的理论背景,`ldaprootpasswd.ldif` 文件将添加一个条目到 LDAP 目录中。在那个条目中,每一行代表一个属性键值对(其中 dn,changetype,add 和 olcRootPW 为属性,每个冒号右边的字符串为相应的键值)。 + +随着我们的进一步深入,请记住上面的这些,并注意到在这篇文章的余下部分,我们使用相同的 Common Names `(cn=)`,而这些余下的步骤中的每一步都将与其上一步相关。 + +**5. 现在,通过特别指定相对于 ldap 服务的 URI ,添加相应的 LDAP 条目,其中只有 protocol/host/port 这几个域被允许使用。** + + # ldapadd -H ldapi:/// -f ldaprootpasswd.ldif + +上面命令的输出应该与下面的图像相似: + +![LDAP 配置](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Configuration.png) + +LDAP 配置 + +接着从 `/etc/openldap/schema` 目录导入一个基本的 LDAP 定义: + + # for def in cosine.ldif nis.ldif inetorgperson.ldif; do ldapadd -H ldapi:/// -f /etc/openldap/schema/$def; done + +![LDAP 定义](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Definitions.png) + +LDAP 定义 + +**6.
让 LDAP 在它的数据库中使用你的域名。** + +以下面的内容创建另一个 LDIF 文件,我们称之为 `ldapdomain.ldif`, 然后酌情替换这个文件中的域名(在域名分量 dc=) 和密码: + + dn: olcDatabase={1}monitor,cn=config + changetype: modify + replace: olcAccess + olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" + read by dn.base="cn=Manager,dc=mydomain,dc=com" read by * none + + dn: olcDatabase={2}hdb,cn=config + changetype: modify + replace: olcSuffix + olcSuffix: dc=mydomain,dc=com + + dn: olcDatabase={2}hdb,cn=config + changetype: modify + replace: olcRootDN + olcRootDN: cn=Manager,dc=mydomain,dc=com + + dn: olcDatabase={2}hdb,cn=config + changetype: modify + add: olcRootPW + olcRootPW: {SSHA}PASSWORD + + dn: olcDatabase={2}hdb,cn=config + changetype: modify + add: olcAccess + olcAccess: {0}to attrs=userPassword,shadowLastChange by + dn="cn=Manager,dc=mydomain,dc=com" write by anonymous auth by self write by * none + olcAccess: {1}to dn.base="" by * read + olcAccess: {2}to * by dn="cn=Manager,dc=mydomain,dc=com" write by * read + +接着使用下面的命令来加载: + + # ldapmodify -H ldapi:/// -f ldapdomain.ldif + +![LDAP 域名配置](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Domain-Configuration.png) + +LDAP 域名配置 + +**7. 现在,该是添加一些条目到我们的 LDAP 目录的时候了。在下面的文件中,属性和键值由一个冒号`(:)` 所分隔,这个文件我们将命名为 `baseldapdomain.ldif`:** + + dn: dc=mydomain,dc=com + objectClass: top + objectClass: dcObject + objectclass: organization + o: mydomain com + dc: mydomain + + dn: cn=Manager,dc=mydomain,dc=com + objectClass: organizationalRole + cn: Manager + description: Directory Manager + + dn: ou=People,dc=mydomain,dc=com + objectClass: organizationalUnit + ou: People + + dn: ou=Group,dc=mydomain,dc=com + objectClass: organizationalUnit + ou: Group + +添加条目到 LDAP 目录中: + + # ldapadd -x -D cn=Manager,dc=mydomain,dc=com -W -f baseldapdomain.ldif + +![添加 LDAP 域名,属性和键值](http://www.tecmint.com/wp-content/uploads/2015/06/Add-LDAP-Domain-Configuration.png) + +添加 LDAP 域名,属性和键值 + +**8. 
创建一个名为 ldapuser 的 LDAP 用户(`adduser ldapuser`),然后在`ldapgroup.ldif` 中为一个 LDAP 组创建定义。** + + # adduser ldapuser + # vi ldapgroup.ldif + +添加下面的内容: + + dn: cn=Manager,ou=Group,dc=mydomain,dc=com + objectClass: top + objectClass: posixGroup + gidNumber: 1004 + +其中 gidNumber 是 ldapuser 在 `/etc/group` 中的 GID,然后加载这个文件: + + # ldapadd -x -W -D "cn=Manager,dc=mydomain,dc=com" -f ldapgroup.ldif + +**9. 为用户 ldapuser 添加一个带有定义的 LDIF 文件(`ldapuser.ldif`):** + + dn: uid=ldapuser,ou=People,dc=mydomain,dc=com + objectClass: top + objectClass: account + objectClass: posixAccount + objectClass: shadowAccount + cn: ldapuser + uid: ldapuser + uidNumber: 1004 + gidNumber: 1004 + homeDirectory: /home/ldapuser + userPassword: {SSHA}fiN0YqzbDuDI0Fpqq9UudWmjZQY28S3M + loginShell: /bin/bash + gecos: ldapuser + shadowLastChange: 0 + shadowMax: 0 + shadowWarning: 0 + +并加载它: + + # ldapadd -x -D cn=Manager,dc=mydomain,dc=com -W -f ldapuser.ldif + +![LDAP 用户配置](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-User-Configuration.png) + +LDAP 用户配置 + +相似地,你可以删除你刚刚创建的用户条目: + + # ldapdelete -x -W -D cn=Manager,dc=mydomain,dc=com "uid=ldapuser,ou=People,dc=mydomain,dc=com" + +**10. 允许有关 ldap 的通信通过防火墙:** + + # firewall-cmd --add-service=ldap + +**11. 
最后(但同样重要的是),使用 LDAP 开启客户端的认证。**
+
+为了完成这最后一步,我们将使用 authconfig 工具(一个用于配置系统认证资源的界面)。
+
+使用下面的命令,在通过 LDAP 服务器认证成功后,假如请求的用户的家目录不存在,则将会被创建:
+
+ # authconfig --enableldap --enableldapauth --ldapserver=rhel7.mydomain.com --ldapbasedn="dc=mydomain,dc=com" --enablemkhomedir --update
+
+![LDAP 客户端认证](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Client-Configuration.png)
+
+LDAP 客户端认证
+
+### 总结 ###
+
+在这篇文章中,我们已经解释了如何利用一个 LDAP 服务器来设置基本的认证。若想对当前这个指南里描述的设置进行更深入的配置,请参考位于 RHEL 系统管理员指南里的 [第 13 章 – LDAP 的配置][3],并特别注意使用 TLS 来进行安全设定。
+
+请随意使用下面的评论框来留下你的提问。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/setup-ldap-server-and-configure-client-authentication/
+
+作者:[Gabriel Cánepa][a]
+译者:[FSSlc](https://github.com/FSSlc)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:http://www.tecmint.com/automatic-rhel-installations-using-kickstart/
+[2]:http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/
+[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/ch-Directory_Servers.html

From 917c08053985003e656d67a77cb8788efe10df48 Mon Sep 17 00:00:00 2001
From: wxy
Date: Sat, 3 Oct 2015 00:11:17 +0800
Subject: [PATCH 634/697] PUB:RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7 @FSSlc

---
 ...ntrol Essentials with SELinux in RHEL 7.md | 76 ++++++++++---------
 1 file changed, 39 insertions(+), 37 deletions(-)
 rename {translated/tech/RHCSA => published/RHCSA Series}/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md (52%)

diff --git a/translated/tech/RHCSA/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md b/published/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 
7.md similarity index 52% rename from translated/tech/RHCSA/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md rename to published/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md index 4afbc105b7..8e77f8495e 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md +++ b/published/RHCSA Series/RHCSA Series--Part 13--Mandatory Access Control Essentials with SELinux in RHEL 7.md @@ -1,28 +1,29 @@ -RHCSA 系列: 在 RHEL 7 中使用 SELinux 进行强制访问控制 – Part 13 +RHCSA 系列(十三): 在 RHEL 7 中使用 SELinux 进行强制访问控制 ================================================================================ -在本系列的前面几篇文章中,我们已经详细地探索了至少两种访问控制方法:标准的 ugo/rwx 权限([管理用户和组 – Part 3][1]) 和访问控制列表([在文件系统中配置 ACL – Part 7][2])。 + +在本系列的前面几篇文章中,我们已经详细地探索了至少两种访问控制方法:标准的 ugo/rwx 权限([RHCSA 系列(三): 如何管理 RHEL7 的用户和组][1]) 和访问控制列表([RHCSA 系列(七): 使用 ACL(访问控制列表) 和挂载 Samba/NFS 共享][2])。 ![RHCSA 认证:SELinux 精要和控制文件系统的访问](http://www.tecmint.com/wp-content/uploads/2015/06/SELinux-Control-File-System-Access.png) -RHCSA 认证:SELinux 精要和控制文件系统的访问 +*RHCSA 认证:SELinux 精要和控制文件系统的访问* -尽管作为第一级别的权限和访问控制机制是必要的,但它们同样有一些局限,而这些局限则可以由安全增强 Linux(Security Enhanced Linux,简称为 SELinux) 来处理。 +尽管作为第一级别的权限和访问控制机制是必要的,但它们同样有一些局限,而这些局限则可以由安全增强 Linux(Security Enhanced Linux,简称为 SELinux) 来处理。 -这些局限的一种情形是:某个用户可能通过一个未加详细阐述的 chmod 命令将一个文件或目录暴露在安全漏洞面前(注:这句我的翻译有点问题),从而引起访问权限的意外传播。结果,由该用户开启的任意进程可以对属于该用户的文件进行任意的操作,最终一个恶意的或受损的软件对整个系统可能会实现 root 级别的访问权限。 +这些局限的一种情形是:某个用户可能通过一个泛泛的 chmod 命令将文件或目录暴露出现了安全违例,从而引起访问权限的意外传播。结果,由该用户开启的任意进程可以对属于该用户的文件进行任意的操作,最终一个恶意的或有其它缺陷的软件可能会取得整个系统的 root 级别的访问权限。 -考虑到这些局限性,美国国家安全局(NSA) 率先设计出了 SELinux,一种强制的访问控制方法,它根据最小权限模型去限制进程在系统对象(如文件,目录,网络接口等)上的访问或执行其他的操作的能力,而这些限制可以在后面根据需要进行修改。简单来说,系统的每一个元素只给某个功能所需要的那些权限。 +考虑到这些局限性,美国国家安全局(NSA) 率先设计出了 SELinux,一种强制的访问控制方法,它根据最小权限模型去限制进程在系统对象(如文件,目录,网络接口等)上的访问或执行其他的操作的能力,而这些限制可以在之后根据需要进行修改。简单来说,系统的每一个元素只给某个功能所需要的那些权限。 -在 RHEL 7 中,SELinux 
被并入了内核中,且默认情况下以强制模式开启。在这篇文章中,我们将简要地介绍有关 SELinux 及其相关操作的基本概念。 +在 RHEL 7 中,SELinux 被并入了内核中,且默认情况下以强制模式(Enforcing)开启。在这篇文章中,我们将简要地介绍有关 SELinux 及其相关操作的基本概念。 ### SELinux 的模式 ### SELinux 可以以三种不同的模式运行: -- 强制模式:SELinux 根据 SELinux 策略规则拒绝访问,这些规则是用以控制安全引擎的一系列准则; -- 宽容模式:SELinux 不拒绝访问,但对于那些运行在强制模式下会被拒绝访问的行为,它会进行记录; -- 关闭 (不言自明,即 SELinux 没有实际运行). +- 强制模式(Enforcing):SELinux 基于其策略规则来拒绝访问,这些规则是用以控制安全引擎的一系列准则; +- 宽容模式(Permissive):SELinux 不会拒绝访问,但对于那些如果运行在强制模式下会被拒绝访问的行为进行记录; +- 关闭(Disabled) (不言自明,即 SELinux 没有实际运行). -使用 `getenforce` 命令可以展示 SELinux 当前所处的模式,而 `setenforce` 命令(后面跟上一个 1 或 0) 则被用来将当前模式切换到强制模式或宽容模式,但只对当前的会话有效。 +使用 `getenforce` 命令可以展示 SELinux 当前所处的模式,而 `setenforce` 命令(后面跟上一个 1 或 0) 则被用来将当前模式切换到强制模式(Enforcing)或宽容模式(Permissive),但只对当前的会话有效。 为了使得在登出和重启后上面的设置还能保持作用,你需要编辑 `/etc/selinux/config` 文件并将 SELINUX 变量的值设为 enforcing,permissive,disabled 中之一: @@ -35,15 +36,15 @@ SELinux 可以以三种不同的模式运行: ![设置 SELinux 模式](http://www.tecmint.com/wp-content/uploads/2015/05/Set-SELinux-Mode.png) -设置 SELinux 模式 +*设置 SELinux 模式* -通常情况下,你将使用 `setenforce` 来在 SELinux 模式间进行切换(从强制模式到宽容模式,或反之),以此来作为你排错的第一步。假如 SELinux 当前被设置为强制模式,而你遇到了某些问题,但当你把 SELinux 切换为宽容模式后问题不再出现了,则你可以确信你遇到了一个 SELinux 权限方面的问题。 +通常情况下,你应该使用 `setenforce` 来在 SELinux 模式间进行切换(从强制模式到宽容模式,或反之),以此来作为你排错的第一步。假如 SELinux 当前被设置为强制模式,而你遇到了某些问题,但当你把 SELinux 切换为宽容模式后问题不再出现了,则你可以确信你遇到了一个 SELinux 权限方面的问题。 ### SELinux 上下文 ### -一个 SELinux 上下文由一个权限控制环境所组成,在这个环境中,决定的做出将基于 SELinux 的用户,角色和类型(和可选的级别): +一个 SELinux 上下文(Context)由一个访问控制环境所组成,在这个环境中,决定的做出将基于 SELinux 的用户,角色和类型(和可选的级别): -- 一个 SELinux 用户是通过将一个常规的 Linux 用户账户映射到一个 SELinux 用户账户来实现的,反过来,在一个会话中,这个 SELinux 用户账户在 SELinux 上下文中被进程所使用,为的是能够显示地定义它们所允许的角色和级别。 +- 一个 SELinux 用户是通过将一个常规的 Linux 用户账户映射到一个 SELinux 用户账户来实现的,反过来,在一个会话中,这个 SELinux 用户账户在 SELinux 上下文中被进程所使用,以便能够明确定义它们所允许的角色和级别。 - 角色的概念是作为域和处于该域中的 SELinux 用户之间的媒介,它定义了 SELinux 可以访问到哪个进程域和哪些文件类型。这将保护您的系统免受提权漏洞的攻击。 - 类型则定义了一个 SELinux 文件类型或一个 SELinux 进程域。在正常情况下,进程将会被禁止访问其他进程正使用的文件,并禁止对其他进程进行访问。这样只有当一个特定的 SELinux 策略规则允许它访问时,才能够进行访问。 @@ -51,7 +52,7 @@ SELinux 
可以以三种不同的模式运行: **例 1:改变 sshd 守护进程的默认端口** -在[加固 SSH – Part 8][3] 中,我们解释了更改 sshd 所监听的默认端口是加固你的服务器免收外部攻击的首个安全措施。下面,就让我们编辑 `/etc/ssh/sshd_config` 文件并将端口设置为 9999: +在 [RHCSA 系列(八): 加固 SSH,设定主机名及启用网络服务][3] 中,我们解释了更改 sshd 所监听的默认端口是加固你的服务器免受外部攻击的首要安全措施。下面,就让我们编辑 `/etc/ssh/sshd_config` 文件并将端口设置为 9999: Port 9999 @@ -62,19 +63,19 @@ SELinux 可以以三种不同的模式运行: ![更改 SSH 的端口](http://www.tecmint.com/wp-content/uploads/2015/05/Change-SSH-Port.png) -重启 SSH 服务 +*重启 SSH 服务* 正如你看到的那样, sshd 启动失败,但为什么会这样呢? -快速检查 `/var/log/audit/audit.log` 文件会发现 sshd 已经被拒绝在端口 9999 上开启(SELinux 日志信息包含单词 "AVC",所以这类信息可以被轻易地与其他信息相区分),因为这个端口是 JBoss 管理服务的保留端口: +快速检查 `/var/log/audit/audit.log` 文件会发现 sshd 已经被拒绝在端口 9999 上开启(SELinux 的日志信息包含单词 "AVC",所以这类信息可以被轻易地与其他信息相区分),因为这个端口是 JBoss 管理服务的保留端口: # cat /var/log/audit/audit.log | grep AVC | tail -1 ![查看 SSH 日志](http://www.tecmint.com/wp-content/uploads/2015/05/Inspect-SSH-Logs.png) -查看 SSH 日志 +*查看 SSH 日志* -在这种情况下,你可以像先前解释的那样禁用 SELinux(但请不要这样做!),并尝试重启 sshd,且这种方法能够起效。但是, `semanage` 应用可以告诉我们在哪些端口上可以开启 sshd 而不会出现任何问题。 +在这种情况下,你可以像先前解释的那样禁用 SELinux(但请不要这样做!),并尝试重启 sshd,且这种方法能够起效。但是, `semanage` 应用可以告诉我们在哪些端口上可以开启 sshd 而不会出现任何问题。 运行: @@ -84,7 +85,7 @@ SELinux 可以以三种不同的模式运行: ![Semanage 工具](http://www.tecmint.com/wp-content/uploads/2015/05/SELinux-Permission.png) -Semanage 工具 +*Semanage 工具* 所以让我们在 `/etc/ssh/sshd_config` 中将端口更改为 9998 端口,增加这个端口到 ssh_port_t 的上下文,然后重启 sshd 服务: @@ -94,13 +95,13 @@ Semanage 工具 ![Semanage 添加端口](http://www.tecmint.com/wp-content/uploads/2015/05/Semenage-Add-Port.png) -Semanage 添加端口 +*semanage 添加端口* -如你所见,这次 sshd 服务被成功地开启了。这个例子告诉我们这个事实:SELinux 控制 TCP 端口数为它自己端口类型中间定义。 +如你所见,这次 sshd 服务被成功地开启了。这个例子告诉我们一个事实:SELinux 用它自己的端口类型的内部定义来控制 TCP 端口号。 **例 2:允许 httpd 访问 sendmail** -这是一个 SELinux 管理一个进程来访问另一个进程的例子。假如在你的 RHEL 7 服务器上,你要实现 Apache 的 mod_security 和 mod_evasive(注:这里少添加了一个链接,链接的地址是 http://www.tecmint.com/protect-apache-using-mod_security-and-mod_evasive-on-rhel-centos-fedora/),你需要允许 httpd 访问 sendmail,以便在遭受到 (D)DoS 攻击时能够用邮件来提醒你。在下面的命令中,如果你不想使得更改在重启后任然生效,请去掉 `-P` 选项。 
+这是一个 SELinux 管理一个进程来访问另一个进程的例子。假如在你的 RHEL 7 服务器上,[你要为 Apache 配置 mod\_security 和 mod\_evasive][6],你需要允许 httpd 访问 sendmail,以便在遭受到 (D)DoS 攻击时能够用邮件来提醒你。在下面的命令中,如果你不想使得更改在重启后仍然生效,请去掉 `-P` 选项。 # semanage boolean -1 | grep httpd_can_sendmail # setsebool -P httpd_can_sendmail 1 @@ -108,13 +109,13 @@ Semanage 添加端口 ![允许 Apache 发送邮件](http://www.tecmint.com/wp-content/uploads/2015/05/Allow-Apache-to-Send-Mails.png) -允许 Apache 发送邮件 +*允许 Apache 发送邮件* -从上面的例子中,你可以知道 SELinux 布尔设定(或者只是布尔值)分别对应于 true 或 false,被嵌入到了 SELinux 策略中。你可以使用 `semanage boolean -l` 来列出所有的布尔值,也可以管道至 grep 命令以便筛选输出的结果。 +从上面的例子中,你可以知道 SELinux 布尔设定(或者只是布尔值)分别对应于 true 或 false,被嵌入到了 SELinux 策略中。你可以使用 `semanage boolean -l` 来列出所有的布尔值,也可以管道至 grep 命令以便筛选输出的结果。 -**例 3:在一个特定目录而非默认目录下服务一个静态站点** +**例 3:在一个特定目录而非默认目录下提供一个静态站点服务** -假设你正使用一个不同于默认目录(`/var/www/html`)的目录来服务一个静态站点,例如 `/websites` 目录(这种情形会出现在当你把你的网络文件存储在一个共享网络设备上,并需要将它挂载在 /websites 目录时)。 +假设你正使用一个不同于默认目录(`/var/www/html`)的目录来提供一个静态站点服务,例如 `/websites` 目录(这种情形会出现在当你把你的网络文件存储在一个共享网络设备上,并需要将它挂载在 /websites 目录时)。 a). 在 /websites 下创建一个 index.html 文件并包含如下的内容: @@ -130,14 +131,14 @@ a). 在 /websites 下创建一个 index.html 文件并包含如下的内容: ![检查 SELinux 文件的权限](http://www.tecmint.com/wp-content/uploads/2015/05/Check-File-Permssion.png) -检查 SELinux 文件的权限 +*检查 SELinux 文件的权限* b). 将 `/etc/httpd/conf/httpd.conf` 中的 DocumentRoot 改为 /websites,并不要忘了 -更新相应的 Directory 代码块。然后重启 Apache。 +更新相应的 Directory 块。然后重启 Apache。 -c). 浏览到 `http://`,则你应该会得到一个 503 Forbidden 的 HTTP 响应。 +c). 浏览 `http://`,则你应该会得到一个 503 Forbidden 的 HTTP 响应。 -d). 接下来,递归地改变 /websites 的标志,将它的标志变为 httpd_sys_content_t 类型,以便赋予 Apache 对这些目录和其内容的只读访问权限: +d). 接下来,递归地改变 /websites 的标志,将它的标志变为 `httpd_sys_content_t` 类型,以便赋予 Apache 对这些目录和其内容的只读访问权限: # semanage fcontext -a -t httpd_sys_content_t "/websites(/.*)?" @@ -149,7 +150,7 @@ e). 
最后,应用在 d) 中创建的 SELinux 策略: ![确认 Apache 页面](http://www.tecmint.com/wp-content/uploads/2015/05/08part13.png) -确认 Apache 页面 +*确认 Apache 页面* ### 总结 ### @@ -165,13 +166,14 @@ via: http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ 作者:[Gabriel Cánepa][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups -[2]:http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ -[3]:http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network-services-in-rhel-7/ +[1]:https://linux.cn/article-6187-1.html +[2]:https://linux.cn/article-6263-1.html +[3]:https://linux.cn/article-6266-1.html [4]:https://www.nsa.gov/research/selinux/index.shtml [5]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/SELinux_Users_and_Administrators_Guide/part_I-SELinux.html +[6]:https://linux.cn/article-5639-1.html From 98f2d70679f881520388838e7e60c68be7277e80 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 3 Oct 2015 10:00:29 +0800 Subject: [PATCH 635/697] [Translated] sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md --- ... (SMTP) using null-client Configuration.md | 153 ------------------ ... 
(SMTP) using null-client Configuration.md | 153 ++++++++++++++++++ 2 files changed, 153 insertions(+), 153 deletions(-) delete mode 100644 sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md create mode 100644 translated/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md diff --git a/sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md b/sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md deleted file mode 100644 index 77b508db66..0000000000 --- a/sources/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md +++ /dev/null @@ -1,153 +0,0 @@ -ictlyh Translating -How to Setup Postfix Mail Server (SMTP) using null-client Configuration – Part 9 -================================================================================ -Regardless of the many online communication methods that are available today, email remains a practical way to deliver messages from one end of the world to another, or to a person sitting in the office next to ours. - -The following image illustrates the process of email transport starting with the sender until the message reaches the recipient’s inbox: - -![How Mail Setup Works](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png) - -How Mail Setup Works - -To make this possible, several things happen behind the scenes. In order for an email message to be delivered from a client application (such as [Thunderbird][1], Outlook, or webmail services such as Gmail or Yahoo! Mail) to a mail server, and from there to the destination server and finally to its intended recipient, a SMTP (Simple Mail Transfer Protocol) service must be in place in each server. 
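To make that last point concrete, this is roughly what a minimal hand-typed SMTP session against such a server looks like — an illustrative sketch only, with replies abbreviated (real banners and queue IDs will differ). Lines beginning with a three-digit code are the server's replies; everything else is what the client sends:

```
220 mail.mydomain.com ESMTP Postfix
HELO box1.mydomain.com
250 mail.mydomain.com
MAIL FROM:<tecmint@mydomain.com>
250 2.1.0 Ok
RCPT TO:<tecmint@mydomain.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: test

Hello from box1.
.
250 2.0.0 Ok: queued
QUIT
221 2.0.0 Bye
```

This is the conversation that happens behind the scenes every time a message moves from one hop to the next.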
- -That is the reason why in this article we will explain how to set up a SMTP server in RHEL 7 where emails sent by local users (even to other local users) are forwarded to a central mail server for easier access. - -In the exam’s requirements this is called a null-client setup. - -Our test environment will consist of an originating mail server and a central mail server or relayhost. - - Original Mail Server: (hostname: box1.mydomain.com / IP: 192.168.0.18) - Central Mail Server: (hostname: mail.mydomain.com / IP: 192.168.0.20) - -For name resolution we will use the well-known /etc/hosts file on both boxes: - - 192.168.0.18 box1.mydomain.com box1 - 192.168.0.20 mail.mydomain.com mail - -### Installing Postfix and Firewall / SELinux Considerations ### - -To begin, we will need to (in both servers): - -**1. Install Postfix:** - - # yum update && yum install postfix - -**2. Start the service and enable it to run on future reboots:** - - # systemctl start postfix - # systemctl enable postfix - -**3. Allow mail traffic through the firewall:** - - # firewall-cmd --permanent --add-service=smtp - # firewall-cmd --add-service=smtp - -![Open Mail Server Port in Firewall](http://www.tecmint.com/wp-content/uploads/2015/09/Allow-Traffic-through-Firewall.png) - -Open Mail Server SMTP Port in Firewall - -**4. Configure Postfix on box1.mydomain.com.** - -Postfix’s main configuration file is located in /etc/postfix/main.cf. This file itself is a great documentation source as the included comments explain the purpose of the program’s settings. 
- -For brevity, let’s display only the lines that need to be edited (yes, you need to leave mydestination blank in the originating server; otherwise the emails will be stored locally as opposed to in a central mail server which is what we actually want): - -**Configure Postfix on box1.mydomain.com** - ----------- - - myhostname = box1.mydomain.com - mydomain = mydomain.com - myorigin = $mydomain - inet_interfaces = loopback-only - mydestination = - relayhost = 192.168.0.20 - -**5. Configure Postfix on mail.mydomain.com.** - -**Configure Postfix on mail.mydomain.com** - ----------- - - myhostname = mail.mydomain.com - mydomain = mydomain.com - myorigin = $mydomain - inet_interfaces = all - mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain - mynetworks = 192.168.0.0/24, 127.0.0.0/8 - -And set the related SELinux boolean to true permanently if not already done: - - # setsebool -P allow_postfix_local_write_mail_spool on - -![Set Postfix SELinux Permission](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Postfix-SELinux-Permission.png) - -Set Postfix SELinux Permission - -The above SELinux boolean will allow Postfix to write to the mail spool in the central server. - -**6. Restart the service on both servers for the changes to take effect:** - - # systemctl restart postfix - -If Postfix does not start correctly, you can use following commands to troubleshoot. - - # systemctl –l status postfix - # journalctl –xn - # postconf –n - -### Testing the Postfix Mail Servers ### - -To test the mail servers, you can use any Mail User Agent (most commonly known as MUA for short) such as [mail or mutt][2]. 
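Before testing delivery with a MUA, it can help to confirm that the central server's SMTP port is reachable at all. nmap is used for that later in this section; if nmap is not installed, a crude reachability check can be done with bash's built-in `/dev/tcp` pseudo-device. A rough sketch, using the example address from this guide:

```shell
#!/bin/bash
# Fallback port probe for hosts where nmap is not installed, using
# bash's built-in /dev/tcp pseudo-device. The address below is the
# central mail server from this guide; adjust it to your environment.

check_port() {
    local host="$1" port="$2"
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

# Example: probe the central mail server's SMTP port.
check_port 192.168.0.20 25
```

A refused or filtered port simply prints `closed` (the latter after the three-second timeout), so the script is safe to run against any host.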
- -Since mutt is a personal favorite, I will use it in box1 to send an email to user tecmint using an existing file (mailbody.txt) as message body: - - # mutt -s "Part 9-RHCE series" tecmint@mydomain.com < mailbody.txt - -![Test Postfix Mail Server](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Postfix-Mail-Server.png) - -Test Postfix Mail Server - -Now go to the central mail server (mail.mydomain.com), log on as user tecmint, and check whether the email was received: - - # su – tecmint - # mail - -![Check Postfix Mail Server Delivery](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Postfix-Mail-Server-Delivery.png) - -Check Postfix Mail Server Delivery - -If the email was not received, check root’s mail spool for a warning or error notification. You may also want to make sure that the SMTP service is running on both servers and that port 25 is open in the central mail server using [nmap command][3]: - - # nmap -PN 192.168.0.20 - -![Troubleshoot Postfix Mail Server](http://www.tecmint.com/wp-content/uploads/2015/09/Troubleshoot-Postfix-Mail-Server.png) - -Troubleshoot Postfix Mail Server - -### Summary ### - -Setting up a mail server and a relay host as shown in this article is an essential skill that every system administrator must have, and represents the foundation to understand and install a more complex scenario such as a mail server hosting a live domain for several (even hundreds or thousands) of email accounts. - -(Please note that this kind of setup requires a DNS server, which is out of the scope of this guide), but you can use following article to setup DNS Server: - -- [Setup Cache only DNS Server in CentOS/RHEL 07][4] - -Finally, I highly recommend you become familiar with Postfix’s configuration file (main.cf) and the program’s man page. If in doubt, don’t hesitate to drop us a line using the form below or using our forum, Linuxsay.com, where you will get almost immediate help from Linux experts from all around the world. 
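One last troubleshooting aid: since main.cf is a plain "key = value" file, you can double-check which relayhost (or any other parameter) a given copy of it actually ends up with. On a real server `postconf relayhost` does this; the stand-alone sketch below only illustrates the idea, against a made-up sample file:

```shell
#!/bin/bash
# Tiny illustration of querying a Postfix-style "key = value" file.
# Purely a sketch -- on a real box prefer "postconf <parameter>".

get_param() {
    local file="$1" key="$2"
    # Split each line on " = " and print the value of the last match,
    # mirroring Postfix's last-setting-wins behavior.
    awk -F' *= *' -v k="$key" '$1 == k { print $2 }' "$file" | tail -n 1
}

# A miniature sample file resembling the null-client settings above.
sample=$(mktemp)
cat > "$sample" <<'EOF'
myhostname = box1.mydomain.com
mydomain = mydomain.com
relayhost = 192.168.0.20
EOF

get_param "$sample" relayhost
```

Printing the last match rather than the first mirrors how Postfix itself resolves duplicate parameter definitions.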
-
--------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/setup-postfix-mail-server-smtp-using-null-client-on-centos/
-
-作者:[Gabriel Cánepa][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/gacanepa/
-[1]:http://www.tecmint.com/install-thunderbird-17-in-ubuntu-xubuntu-linux-mint/
-[2]:http://www.tecmint.com/send-mail-from-command-line-using-mutt-command/
-[3]:http://www.tecmint.com/nmap-command-examples/
-[4]:http://www.tecmint.com/setup-dns-cache-server-in-centos-7/
\ No newline at end of file
diff --git a/translated/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md b/translated/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md
new file mode 100644
index 0000000000..ccc67dbb30
--- /dev/null
+++ b/translated/tech/RHCE/Part 9 - How to Setup Postfix Mail Server (SMTP) using null-client Configuration.md
@@ -0,0 +1,153 @@
+第九部分 - 如何使用零客户端配置 Postfix 邮件服务器(SMTP)
+================================================================================
+尽管现在有很多在线联系方式,邮件仍然是一个人传递信息给远在世界尽头或办公室里坐在我们旁边的另一个人的有效方式。
+
+下面的图描述了邮件从发送者发出直到信息到达接收者收件箱的传递过程。
+
+![邮件如何工作](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png)
+
+邮件如何工作
+
+要使这成为可能,背后发生了很多事情。为了使邮件信息从一个客户端应用程序(例如 [Thunderbird][1]、Outlook,或者网络邮件服务,例如 Gmail 或 Yahoo 邮件)传到一个邮件服务器,再从那里传到目标服务器,并最终到达收件人,每个服务器上都必须有 SMTP(简单邮件传输协议)服务。
+
+这就是为什么我们要在这篇博文中介绍如何在 RHEL 7 中设置 SMTP 服务器,使本地用户发送的邮件(即使是发给其他本地用户的)都被转发到一个中央邮件服务器,以便于访问。
+
+在考试的要求中,这称为零客户端(null-client)配置。
+
+我们的测试环境将包括一台原始邮件服务器和一台中央邮件服务器(即中继主机)。
+
+ 原始邮件服务器: (主机名: box1.mydomain.com / IP: 192.168.0.18)
+ 中央邮件服务器: (主机名: mail.mydomain.com / IP: 192.168.0.20)
+
+为了进行域名解析,我们在两台机器中都会使用众所周知的 /etc/hosts 文件:
+
+ 192.168.0.18 box1.mydomain.com box1
+ 192.168.0.20 mail.mydomain.com mail
+
+### 安装 Postfix 
和防火墙/SELinux 注意事项 ###
+
+首先,我们需要(在两台机器上):
+
+**1. 安装 Postfix:**
+
+ # yum update && yum install postfix
+
+**2. 启动服务并启用开机自动启动:**
+
+ # systemctl start postfix
+ # systemctl enable postfix
+
+**3. 允许邮件流量通过防火墙:**
+
+ # firewall-cmd --permanent --add-service=smtp
+ # firewall-cmd --add-service=smtp
+
+
+![在防火墙中开通邮件服务器端口](http://www.tecmint.com/wp-content/uploads/2015/09/Allow-Traffic-through-Firewall.png)
+
+在防火墙中开通邮件服务器端口
+
+**4. 在 box1.mydomain.com 配置 Postfix**
+
+Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身就是一份很好的文档,因为其中包含的注释解释了各个设置项的用途。
+
+为了简洁,我们只显示了需要编辑的行(是的,在原始服务器中你需要保留 mydestination 为空;否则邮件会被保存到本地而不是我们实际想要的中央邮件服务器):
+
+**在 box1.mydomain.com 配置 Postfix**
+
+----------
+
+ myhostname = box1.mydomain.com
+ mydomain = mydomain.com
+ myorigin = $mydomain
+ inet_interfaces = loopback-only
+ mydestination =
+ relayhost = 192.168.0.20
+
+**5. 在 mail.mydomain.com 配置 Postfix**
+
+** 在 mail.mydomain.com 配置 Postfix **
+
+----------
+
+ myhostname = mail.mydomain.com
+ mydomain = mydomain.com
+ myorigin = $mydomain
+ inet_interfaces = all
+ mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
+ mynetworks = 192.168.0.0/24, 127.0.0.0/8
+
+如果还没有设置,请将相关的 SELinux 布尔值永久设置为真:
+
+ # setsebool -P allow_postfix_local_write_mail_spool on
+
+![设置 Postfix SELinux 权限](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Postfix-SELinux-Permission.png)
+
+设置 Postfix SELinux 权限
+
+上面的 SELinux 布尔值会允许 Postfix 在中央服务器写入邮件池。
+
+**6. 
在两台机器上重启服务以使更改生效:**
+
+ # systemctl restart postfix
+
+如果 Postfix 没有正确启动,你可以使用下面的命令进行错误处理。
+
+ # systemctl -l status postfix
+ # journalctl -xn
+ # postconf -n
+
+### 测试 Postfix 邮件服务 ###
+
+为了测试邮件服务器,你可以使用任何邮件用户代理(通常简称为 MUA),例如 [mail 或 mutt][2]。
+
+由于我个人喜欢 mutt,我会在 box1 中使用它发送邮件给用户 tecmint,并把现有文件(mailbody.txt)作为信息内容:
+
+ # mutt -s "Part 9-RHCE series" tecmint@mydomain.com < mailbody.txt
+
+![测试 Postfix 邮件服务器](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Postfix-Mail-Server.png)
+
+测试 Postfix 邮件服务器
+
+现在到中央邮件服务器(mail.mydomain.com)以 tecmint 用户登录,并检查是否收到了邮件:
+
+ # su - tecmint
+ # mail
+
+![检查 Postfix 邮件服务器发送](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Postfix-Mail-Server-Delivery.png)
+
+检查 Postfix 邮件服务器发送
+
+如果没有收到邮件,请检查 root 用户的邮件池中的警告或者错误提示。你还可以使用 [nmap 命令][3] 确保两台服务器都运行了 SMTP 服务,并且中央邮件服务器打开了 25 号端口:
+
+ # nmap -PN 192.168.0.20
+
+![Postfix 邮件服务器错误处理](http://www.tecmint.com/wp-content/uploads/2015/09/Troubleshoot-Postfix-Mail-Server.png)
+
+Postfix 邮件服务器错误处理
+
+### 总结 ###
+
+像本文中展示的那样设置邮件服务器和中继主机,是每个系统管理员必须拥有的重要技能,也是理解和配置更复杂场景的基础,例如一个为拥有多个(甚至成百上千个)邮件账户的域名提供托管的邮件服务器。
+
+(请注意,这种类型的设置需要 DNS 服务器,这不在本文的介绍范围),但你可以参照下面的文章设置 DNS 服务器:
+
+- [在 CentOS/RHEL 07 上配置仅缓存的 DNS 服务器][4]
+
+最后,我强烈建议你熟悉 Postfix 的配置文件(main.cf)和这个程序的帮助手册。如果有任何疑问,别犹豫,使用下面的评论框或者我们的论坛 Linuxsay.com 告诉我们吧,你会从世界各地的 Linux 高手那里获得几乎即时的帮助。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/setup-postfix-mail-server-smtp-using-null-client-on-centos/
+
+作者:[Gabriel Cánepa][a]
+译者:[ictlyh](https://www.mutouxiaogui.cn/blog/)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:http://www.tecmint.com/install-thunderbird-17-in-ubuntu-xubuntu-linux-mint/
+[2]:http://www.tecmint.com/send-mail-from-command-line-using-mutt-command/
+[3]:http://www.tecmint.com/nmap-command-examples/
+[4]:http://www.tecmint.com/setup-dns-cache-server-in-centos-7/ \ No newline at end of file From d921b9a02529f881149afd06e26ffb78090bf092 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 3 Oct 2015 10:08:28 +0800 Subject: [PATCH 636/697] Translating sources/tech/20150929 A Developer's Journey into Linux Containers.md --- .../tech/20150929 A Developer's Journey into Linux Containers.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150929 A Developer's Journey into Linux Containers.md b/sources/tech/20150929 A Developer's Journey into Linux Containers.md index 3b44992ef3..63e4f14940 100644 --- a/sources/tech/20150929 A Developer's Journey into Linux Containers.md +++ b/sources/tech/20150929 A Developer's Journey into Linux Containers.md @@ -1,3 +1,4 @@ +ictlyh Translating A Developer’s Journey into Linux Containers ================================================================================ ![](https://deis.com/images/blog-images/dev_journey_0.jpg) From 06d853be482798adf2682de6a4550cf193b53509 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 3 Oct 2015 16:43:19 +0800 Subject: [PATCH 637/697] Translated sources/tech/20150929 A Developer's Journey into Linux Containers.md --- ...veloper's Journey into Linux Containers.md | 129 ------------------ ...veloper's Journey into Linux Containers.md | 128 +++++++++++++++++ 2 files changed, 128 insertions(+), 129 deletions(-) delete mode 100644 sources/tech/20150929 A Developer's Journey into Linux Containers.md create mode 100644 translated/tech/20150929 A Developer's Journey into Linux Containers.md diff --git a/sources/tech/20150929 A Developer's Journey into Linux Containers.md b/sources/tech/20150929 A Developer's Journey into Linux Containers.md deleted file mode 100644 index 63e4f14940..0000000000 --- a/sources/tech/20150929 A Developer's Journey into Linux Containers.md +++ /dev/null @@ -1,129 +0,0 @@ -ictlyh Translating -A Developer’s Journey into Linux Containers 
-================================================================================ -![](https://deis.com/images/blog-images/dev_journey_0.jpg) - -I’ll let you in on a secret: all that DevOps cloud stuff that goes into getting my applications into the world is still a bit of a mystery to me. But, over time I’ve come to realize that understanding the ins and outs of large scale machine provisioning and application deployment is important knowledge for a developer to have. It’s akin to being a professional musician. Of course you need know how to play your instrument. But, if you don’t understand how a recording studio works or how you fit into a symphony orchestra, you’re going to have a hard time working in such environments. - -In the world of software development getting your code into our very big world is just as important as making it. DevOps counts and it counts a lot. - -So, in the spirit of bridging the gap between Dev and Ops I am going to present container technology to you from the ground up. Why containers? Because there is strong evidence to suggest that containers are the next step in machine abstraction: making a computer a place and no longer a thing. Understanding containers is a journey that we’ll take together. - -In this article I am going to cover the concepts behind containerization. I am going to cover how a container differs from a virtual machine. I am going to go into the logic behind containers construction as well as how containers fit into application architecture. I’ll discussion how lightweight versions of the Linux operating system fits into the container ecosystem. I’ll discuss using images to create reusable containers. Lastly I’ll cover how clusters of containers allow your applications to scale quickly. - -In later articles I’ll show you the step by step process to containerize a sample application and how to create a host cluster for your application’s containers. 
Also, I’ll show you how to use a Deis to deploy the sample application to a VM on your local system as well as a variety of cloud providers. - -So let’s get started. - -### The Benefit of Virtual Machines ### - -In order to understand how containers fit into the scheme of things you need to understand the predecessor to containers: virtual machines. - -A [virtual machine][1] (VM) is a software abstraction of a computer that runs on a physical host computer. Configuring a virtual machine is akin to buying a typical computer: you define the number of CPUs you want along with desired RAM and disk storage capacity. Once the machine is configured, you load in the operating system and then any servers and applications you want the VM to support. - -Virtual machines allow you to run many simulations of a computer on a single hardware host. Here’s what that looks like with a handy diagram: - -![](https://deis.com/images/blog-images/dev_journey_1.png) - -Virtual machines bring efficiency to your hardware investment. You can buy a big, honking machine and run a lots of VMs on it. You can have a database VM sitting with a bunch of VMs with identical versions of your custom app running as a cluster. You can get a lot of scalability out of a finite hardware resources. If you find that you need more VMs and your host hardware has the capacity, you add what you need. Or, if you don’t need a VM, you simply bring the VM off line and delete the VM image. - -### The Limitations of Virtual Machines ### - -But, virtual machines do have limits. - -Say you create three VMs on a host as shown above. The host has 12 CPUs, 48 GB of RAM, and 3 TB of storage. Each VM is configured to have 4 CPUs, 16 GB of RAM and 1 TB of storage. So far, so good. The host has the capacity. - -But there is a drawback. All the resources allocated to a particular machine are dedicated, no matter what. Each machine has been allocated 16 GB of RAM. 
However, if the first VM never uses more than 1 GB of its RAM allocation, the remaining 15 GB just sit there unused. If the third VM uses only 100 GB of its 1 TB storage allocation, the remaining 900 GB is wasted space.

There is no leveling of resources. Each VM owns what it is given. So, in a way we're back to that time before virtual machines when we were paying a lot of good money for unused resources.

There is *another* drawback to VMs too. They can take a long time to spin up. So, if you are in a situation where your infrastructure needs to grow quickly, even in a situation when VM provisioning is automated, you can still find yourself twiddling your thumbs waiting for machines to come online.

### Enter: Containers ###

Conceptually, a container is a Linux process that thinks it is the only process running. The process knows only about things it is told to know about. Also, in terms of containerization, the container process is assigned its own IP address. This is important, so I will say it again. **In terms of containerization, the container process is assigned its own IP address**. Once given an IP address, the process is an identifiable resource within the host network. Then, you can issue a command to the container manager to map the container's IP address to an IP address on the host that is accessible to the public. Once this mapping takes place, for all intents and purposes, a container is a distinct machine accessible on the network, similar in concept to a virtual machine.

Again, a container is an isolated Linux process that has a distinct IP address, thus making it identifiable on a network. Here's what that looks like as a diagram:

![](https://deis.com/images/blog-images/dev_journey_2.png)

A container/process shares resources on the host computer in a dynamic, cooperative manner. If the container needs only 1 GB of RAM, it uses only 1 GB. If it needs 4 GB, it uses 4 GB. It's the same with CPU utilization and storage. 
The allocation of CPU, memory and storage resources is dynamic, not static as is usual on a typical virtual machine. All of this resource sharing is managed by the container manager.

Lastly, containers boot very quickly.

So, the benefit of containers is: **you get the isolation and encapsulation of a virtual machine without the drawback of dedicated static resources**. Also, because containers load into memory fast, you get better performance when it comes to scaling many containers up.

### Container Hosting, Configuration, and Management ###

Computers that host containers run a version of Linux that is stripped down to the essentials. These days, the more popular underlying operating system for a host computer is [CoreOS, mentioned above][2]. There are others, however, such as [Red Hat Atomic Host][3] and [Ubuntu Snappy][4].

The Linux operating system is shared between all containers, minimising duplication and reducing the container footprint. Each container contains only what is unique to that specific container. Here's what that looks like in diagram form:

![](https://deis.com/images/blog-images/dev_journey_3.png)

You configure your container with the components it requires. A container component is called a **layer**. A layer is a container image. (You'll read more about container images in the following section.) You start with a base layer, which is typically the type of operating system you want in your container. (The container manager will provide only the parts of your desired operating system that are not in the host OS.) As you construct the configuration of your container, you'll add layers, say Apache if you want a web server, or PHP or Python runtimes if your container is running scripts.

Layering is very versatile. If your application or service container requires PHP 5.2, you configure that container accordingly. If you have another application or service that requires PHP 5.6, no problem. 
You configure that container to use PHP 5.6. Unlike VMs, where you need to go through a lot of provisioning and installation hocus pocus to change a version of a runtime dependency, with containers you just redefine the layer in the container configuration file. - -All of the container versatility described previously is controlled by a piece of software called a container manager. Presently, the most popular container managers are [Docker][5] and [Rocket][6]. The figure above shows a host scenario in which Docker is the container manager and CoreOS is the host operating system. - -### Containers are Built with Images ### - -When it comes time for you to build your application into a container, you are going to assemble images. An image represents a template of a container that your container needs to do its work. (I know, containers within containers. Go figure.) Images are stored in a registry. Registries live on the network. - -Conceptually, a registry is similar to a [Maven][7] repository, for those of you from the Java world, or a [NuGet][8] server, for you .NET heads. You’ll create a container configuration file that lists the images your application needs. Then you’ll use the container manager to make a container that includes your application’s code as well as constituent resources downloaded from a container registry. For example, if your application is made up of some PHP files, your container configuration file will declare that you get the PHP runtime from a registry. Also, you’ll use the container configuration file to declare the .php files to copy into the container’s file system. The container manager encapsulates all your application stuff into a distinct container that you’ll run on a host computer, under a container manager. - -Here’s a diagram that illustrates the concepts behind container creation: - -![](https://deis.com/images/blog-images/dev_journey_4.png) - -Let’s take a detailed look at this diagram. 
- -Here, (1) indicates there is a container configuration file that defines the stuff your container needs, as well as how your container is to be constructed. When you run your container on the host, the container manager will read the configuration file to get the container images you need from a registry on the cloud (2) and add the images as layers in your container. - -Also, if that constituent image requires other images, the container manager will get those images too and layer them in. At (3) the container manager will copy in files to your container as is required. - -If you use a provisioning service, such as [Deis][9], the application container you just created exists as an image (4) which the provisioning service will deploy to a cloud provider of your choice. Examples of cloud providers are AWS and Rackspace. - -### Containers in a Cluster ### - -Okay. So we can say there is a good case to be made that containers provide a greater degree of configuration flexibility and resource utilization than virtual machines. Still, this is not all of it. - -Where containers get really flexible is when they’re clustered. Remember, a container has a distinct IP address. Thus, it can be put behind a load balancer. Once a container goes behind a load balancer, the game goes up a level. - -You can run a cluster of containers behind a load balancer container to achieve high performance, high availability computing. Here’s one example setup: - -![](https://deis.com/images/blog-images/dev_journey_5.png) - -Let’s say you’ve made an application that does some resource intensive work. Photograph processing, for example. Using a container provisioning technology such as [Deis][9], you can create a container image that has your photo processing application configured with all the resources upon which your photo processing application depends. Then, you can deploy one or many instances of your container image under a load balancer that resides on the host. 
Once the container image is made, you can keep it on the sidelines for introduction later on when the system becomes maxed out and more instances of your container are required in the cluster to meet the workload at hand. - -There is more good news. You don’t have to manually configure the load balancer to accept your container image every time you add more instances into the environment. You can use service discovery technology to make it so that your container announces its availability to the balancer. Then, once informed, the balancer can start to route traffic to the new node. - -### Putting It All Together ### - -Container technology picks up where the virtual machine has left off. Host operating systems such as CoreOS, RHEL Atomic, and Ubuntu’s Snappy, in conjunction with container management technologies such as Docker and Rocket, are making containers more popular every day. - -While containers are becoming more prevalent, they do take a while to master. However, once you get the hang of them, you can use provisioning technologies such as [Deis][9] to make container creation and deployment easier. - -Getting a conceptual understanding of containers is important as we move forward to actually doing some work with them. But, I imagine the concepts are hard to grasp without the actual hands-on experience to accompany the ideas in play. So, this is what we’ll do in the next segment of this series: make some containers. 
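Before we get to that, here is a minimal sketch of the kind of container configuration file discussed above, using Docker syntax as one concrete possibility (the base image name, the source path, and the app name are illustrative assumptions on my part, not details taken from this article):

```dockerfile
# Base layer: an image that bundles a PHP runtime with the Apache web server.
FROM php:5.6-apache

# Copy the application's .php files into the container's file system.
COPY src/ /var/www/html/
```

Built with something like `docker build -t photo-app .` and started with `docker run -d -p 8080:80 photo-app`, this maps the container's port 80 to port 8080 on the host, which is the IP/port mapping idea described earlier.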
- --------------------------------------------------------------------------------- - -via: https://deis.com/blog/2015/developer-journey-linux-containers - -作者:[Bob Reselman][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://deis.com/blog -[1]:https://en.wikipedia.org/wiki/Virtual_machine -[2]:https://coreos.com/using-coreos/ -[3]:http://www.projectatomic.io/ -[4]:https://developer.ubuntu.com/en/snappy/ -[5]:https://www.docker.com/ -[6]:https://coreos.com/blog/rocket/ -[7]:https://en.wikipedia.org/wiki/Apache_Maven -[8]:https://www.nuget.org/ -[9]:http://deis.com/learn \ No newline at end of file diff --git a/translated/tech/20150929 A Developer's Journey into Linux Containers.md b/translated/tech/20150929 A Developer's Journey into Linux Containers.md new file mode 100644 index 0000000000..a71b5e8fb3 --- /dev/null +++ b/translated/tech/20150929 A Developer's Journey into Linux Containers.md @@ -0,0 +1,128 @@ +开发者的 Linux 容器之旅 +================================================================================ +![](https://deis.com/images/blog-images/dev_journey_0.jpg) + +我告诉你一个秘密:使得我的应用程序进入到全世界的所有云计算的东西,对我来说仍然有一点神秘。但随着时间流逝,我意识到理解大规模机器配置和应用程序部署的来龙去脉对一个开发者来说是非常重要的知识。这类似于成为一个专业的音乐家。你当然需要知道如何使用你的乐器。但是,如果你不知道一个录音室是如何工作的,或者你如何适应一个交响乐团,你在这样的环境中工作会变得非常困难。 + +在软件开发的世界里,使你的代码进入我们更大的世界正如写出它来一样重要。部署很重要,而且非常重要。 + +因此,为了弥合开发和部署之间的间隔,我会从头开始介绍容器技术。为什么是容器?因为有强有力的证据表明,容器是机器抽象的下一步:使计算机成为场所而不再是一个东西。理解容器是我们共同的旅程。 + +在这篇文章中,我会介绍容器化背后的概念、容器和虚拟机的区别,以及容器构建背后的逻辑和它是如何适应应用程序架构的。我会探讨轻量级的 Linux 操作系统是如何适应容器生态系统的。我还会讨论使用镜像创建可重用的容器。最后我会介绍容器集群如何使你的应用程序可以快速扩展。 + +在后面的文章中,我会一步一步向你介绍容器化一个示例应用程序的过程,以及如何为你的应用程序容器创建一个托管集群。同时,我会向你展示如何使用 Deis 将你的示例应用程序部署到你本地系统以及多种云供应商的虚拟机上。 + +让我们开始吧。 + +### 虚拟机的好处 ### + +为了理解容器如何适应事物发展,你首先要了解容器的前身:虚拟机。 + +[虚拟机][1] 是运行在物理宿主机上的软件抽象。配置一个虚拟机就像是购买一台计算机:你需要定义你想要的 CPU 数目,RAM 和磁盘存储容量。配置好了机器后,你为它装上操作系统,然后装上你想让虚拟机支持的任何服务器或者应用程序。 + 
+虚拟机允许你在一台硬件主机上运行多个模拟计算机。这是一个简单的示意图: + +![](https://deis.com/images/blog-images/dev_journey_1.png) + +虚拟机使你能充分利用你的硬件资源。你可以购买一台大型机然后在上面运行多个虚拟机。你可以有一个数据库虚拟机以及很多运行相同版本定制应用程序的虚拟机构成的集群。你可以在有限的硬件资源上获得很多的扩展能力。如果你觉得你需要更多的虚拟机而且你的宿主硬件还有容量,你可以添加任何你想要的。或者,如果你不再需要一个虚拟机,你可以关闭该虚拟机并删除虚拟机镜像。 + +### 虚拟机的局限 ### + +但是,虚拟机确实有局限。 + +如上面所示,假如你在一个主机上创建了三个虚拟机。主机有 12 个 CPU,48 GB 内存和 3TB 的存储空间。每个虚拟机配置为有 4 个 CPU,16 GB 内存和 1TB 存储空间。到现在为止,一切都还好。主机有这个容量。 + +但这里有个缺陷。所有分配给一个虚拟机的资源,无论是什么,都是专有的。每台机器都分配了 16 GB 的内存。但是,如果第一个虚拟机永不会使用超过 1GB 分配的内存,剩余的 15 GB 就会被浪费在那里。如果第三个虚拟机只使用分配的 1TB 存储空间中的 100GB,其余的 900GB 就成为浪费空间。 + +这里没有资源的均衡调配。每台虚拟机拥有分配给它的所有资源。因此,在某种方式上我们又回到了虚拟机之前,把大部分金钱花费在未使用的资源上。 + +虚拟机还有*另一个*缺陷。扩展它们需要很长时间。如果你处于基础设施需要快速增长的情形,即使虚拟机配置是自动的,你仍然会发现你的很多时间都浪费在等待机器上线。 + +### 来到:容器 ### + +概念上来说,容器是一个认为只有它自己在运行的 Linux 进程。该进程只知道别人告诉它的东西。另外,在容器化方面,该容器进程也分配了它自己的 IP 地址。这点很重要,我会再次重复。**在容器化方面,容器进程有它自己的 IP 地址**。一旦给予了一个 IP 地址,该进程就是宿主网络中可识别的资源。然后,你可以在容器管理器上运行命令,使容器 IP 映射到主机中能访问公网的 IP 地址。该映射发生后,实际上,一个容器就是网络上一个可访问的独立机器,概念上类似于虚拟机。 + +再次说明,容器是拥有独立 IP 地址、从而在网络上可被识别的一个独立 Linux 进程。下面是一个示意图: + +![](https://deis.com/images/blog-images/dev_journey_2.png) + +容器/进程以动态合作的方式共享主机上的资源。如果容器只需要 1GB 内存,它就只会使用 1GB。如果它需要 4GB,就会使用 4GB。CPU 和存储空间利用也是如此。CPU、内存和存储空间的分配是动态的,和典型虚拟机的静态方式不同。所有这些资源的共享都由容器管理器管理。 + +最后,容器能快速启动。 + +因此,容器的好处是:**你获得了虚拟机独立和封装的好处而抛弃了专有静态资源的缺陷**。另外,由于容器能快速加载到内存,在扩展到多个容器时你能获得更好的性能。 + +### 容器托管、配置和管理 ### + +托管容器的计算机运行着只保留了必要部分的精简 Linux 版本。现在,宿主计算机流行的底层操作系统是上面提到的 [CoreOS][2]。当然还有其它,例如 [Red Hat Atomic Host][3] 和 [Ubuntu Snappy][4]。 + +所有容器共享同一个 Linux 操作系统,减少了重复和冗余,降低了容器的占用。每个容器只包括该容器唯一的部分。下面是一个示意图: + +![](https://deis.com/images/blog-images/dev_journey_3.png) + +你用它所需的组件配置容器。一个容器组件被称为**层**。一层就是一个容器镜像(你会在后面的部分看到更多关于容器镜像的介绍)。你从一个基本层开始,这通常是你想在容器中使用的操作系统。(容器管理器只提供你想要的操作系统在宿主操作系统中不存在的部分。)当你构建配置你的容器时,你会添加层,例如你想要添加网络服务器 Apache,如果容器要运行脚本,则需要添加 PHP 或 Python 运行时。 + +分层非常灵活。如果应用程序或者服务容器需要 PHP 5.2 版本,你相应地配置该容器即可。如果你有另一个应用程序或者服务需要 PHP 5.6 版本,没问题,你可以使用 PHP 5.6 配置该容器。不像虚拟机,更改一个版本的运行时依赖时你需要经过大量的配置和安装过程;对于容器你只需要在容器配置文件中重新定义层。 + 
+所有上面描述的容器多功能性都由一个称为容器管理器的软件控制。现在,最流行的容器管理器是 [Docker][5] 和 [Rocket][6]。上面的示意图展示了容器管理器是 Docker,宿主操作系统是 CoreOS 的主机情景。 + +### 容器由镜像构成 ### + +当你需要将你的应用程序构建到容器时,你就会编译镜像。镜像代表了你的容器完成其工作所需的容器模板。(没错,容器里还有容器。)镜像被保存在网络上的注册表里。 + +从概念上讲,注册表类似于一个使用 Java 的人眼中的 [Maven][7] 仓库,或者一个使用 .NET 的人眼中的 [NuGet][8] 服务器。你会创建一个列出了你应用程序所需镜像的容器配置文件。然后你使用容器管理器创建一个包括了你应用程序代码以及从注册表中下载的构成资源的容器。例如,如果你的应用程序包括了一些 PHP 文件,你的容器配置文件会声明你会从注册表中获取 PHP 运行时。另外,你还要使用容器配置文件声明需要复制到容器文件系统中的 .php 文件。容器管理器会封装你应用程序的所有东西为一个独立容器。该容器将会在容器管理器的管理下运行在宿主计算机上。 + +这是一个容器创建背后概念的示意图: + +![](https://deis.com/images/blog-images/dev_journey_4.png) + +让我们仔细看看这个示意图。 + +(1)表示一个定义了你容器所需东西以及你容器如何构建的容器配置文件。当你在主机上运行容器时,容器管理器会读取配置文件,从云上的注册表中获取你需要的容器镜像,(2)作为层将镜像添加到你的容器。 + +另外,如果组成镜像需要其它镜像,容器管理器也会获取这些镜像并把它们作为层添加进来。(3)容器管理器会将需要的文件复制到容器中。 + +如果你使用了配置服务,例如 [Deis][9],你刚刚创建的应用程序容器会作为镜像存在(4),配置服务会将它部署到你选择的云供应商上,例如 AWS 和 Rackspace 这样的云供应商。 + +### 集群中的容器 ### + +好了。可以说,有充分的理由认为容器比虚拟机提供了更好的配置灵活性和资源利用率。但是,这并不是全部。 + +容器真正灵活是在集群中。记住,每个容器有一个独立的 IP 地址。因此,能把它放到负载均衡器后面。将容器放到负载均衡器后面,就上升了一个层次。 + +你可以在一个负载均衡容器后运行容器集群以获得更高的性能和高可用计算。这是一个例子: + +![](https://deis.com/images/blog-images/dev_journey_5.png) + +假如你开发了一个进行资源密集型工作的应用程序,例如图片处理。使用类似 [Deis][9] 的容器配置技术,你可以创建一个包括了你图片处理程序以及你图片处理程序需要的所有资源的容器镜像。然后,你可以部署一个或多个容器镜像实例到主机的负载均衡器之后。一旦创建了容器镜像,你可以先把它放在一边,等到系统负载快要达到极限、集群需要更多的容器实例来满足手头的工作量时,再向集群中添加新的实例。 + +这里还有更多好消息。你不需要每次添加实例到环境中时手动配置负载均衡器以便接受你的容器镜像。你可以使用服务发现技术告知均衡器你容器的可用性。然后,一旦获知,均衡器就会将流量分发到新的节点。 + +### 全部放在一起 ### + +容器技术完善了虚拟机不包括的部分。类似 CoreOS、RHEL Atomic、和 Ubuntu 的 Snappy 宿主操作系统,和类似 Docker 和 Rocket 的容器管理技术结合起来,使得容器变得日益流行。 + +尽管容器变得越来越普遍,掌握它们还是需要一段时间。但是,一旦你懂得了它们的窍门,你可以使用类似 [Deis][9] 的配置技术使容器创建和部署变得更加简单。 + +概念上理解容器和进一步实际使用它们完成工作一样重要。但我认为不实际动手把想法付诸实践,概念也难以理解。因此,本系列的下一阶段就是:创建一些容器。 + +-------------------------------------------------------------------------------- + +via: https://deis.com/blog/2015/developer-journey-linux-containers + +作者:[Bob Reselman][a] +译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://deis.com/blog +[1]:https://en.wikipedia.org/wiki/Virtual_machine +[2]:https://coreos.com/using-coreos/ +[3]:http://www.projectatomic.io/ +[4]:https://developer.ubuntu.com/en/snappy/ +[5]:https://www.docker.com/ +[6]:https://coreos.com/blog/rocket/ +[7]:https://en.wikipedia.org/wiki/Apache_Maven +[8]:https://www.nuget.org/ +[9]:http://deis.com/learn \ No newline at end of file From 28cb827b3e3865b2c6abef30cc60e20f704c4d7a Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sat, 3 Oct 2015 20:41:07 +0800 Subject: [PATCH 638/697] Update RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译。 --- ...als of Virtualization and Guest Administration with KVM.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md b/sources/tech/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md index d9e06bd876..6d25bf914f 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Essentials of Virtualization and Guest Administration with KVM – Part 15 ================================================================================ If you look up the word virtualize in a dictionary, you will find that it means “to create a virtual (rather than actual) version of something”. 
In computing, the term virtualization refers to the possibility of running multiple operating systems simultaneously and isolated one from another, on top of the same physical (hardware) system, known in the virtualization schema as host. @@ -185,4 +187,4 @@ via: http://www.tecmint.com/kvm-virtualization-basics-and-guest-administration/ [3]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ [4]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Getting_Started_Guide/index.html [5]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/index.html -[6]:http://www.tecmint.com/install-and-configure-kvm-in-linux/ \ No newline at end of file +[6]:http://www.tecmint.com/install-and-configure-kvm-in-linux/ From 80b652ca92553cf1567a676dc6fd183f206dd377 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 4 Oct 2015 00:24:25 +0800 Subject: [PATCH 639/697] PUB:20150930 Install and use Ansible (Automation Tool) in CentOS 7 @geekpi --- ...e Ansible (Automation Tool) in CentOS 7.md | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) rename {translated/tech => published}/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md (80%) diff --git a/translated/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md b/published/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md similarity index 80% rename from translated/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md rename to published/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md index 0527b51b9c..f80f6a6125 100644 --- a/translated/tech/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md +++ b/published/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md @@ -1,12 +1,13 @@ -在CentOS 7中安装并使用Ansible(自动化工具) +在 CentOS 7 中安装并使用自动化工具 Ansible 
================================================================================ -Ansible是一款为类Unix系统开发的免费开源配置和自动化工具。它用Python写成并且和Chef和Puppet相似,但是有一个不同和好处是我们不需要在节点中安装任何客户端。它使用SSH来和节点进行通信。 + +Ansible是一款为类Unix系统开发的自由开源的配置和自动化工具。它用Python写成,类似于Chef和Puppet,但是有一个不同和优点是我们不需要在节点中安装任何客户端。它使用SSH来和节点进行通信。 本篇中我们将在CentOS 7上安装并配置Ansible,并且尝试管理两个节点。 -**Ansible 服务端** – ansible.linuxtechi.com ( 192.168.1.15 ) +- **Ansible 服务端** – ansible.linuxtechi.com ( 192.168.1.15 ) - **Nodes** – 192.168.1.9 , 192.168.1.10 +- **节点** – 192.168.1.9 , 192.168.1.10 ### 第一步: 设置EPEL仓库 ### @@ -38,17 +39,16 @@ Ansible仓库默认不在yum仓库中,因此我们需要使用下面的命令 ### 第四步:为Ansible定义节点的清单 ### -文件 ‘**/etc/ansible/hosts**‘ 维护了Ansible中服务器的清单。 +文件 `/etc/ansible/hosts` 维护着Ansible中服务器的清单。 [root@ansible ~]# vi /etc/ansible/hosts [test-servers] 192.168.1.9 192.168.1.10 -Save and exit the file. -保存并退出文件 +保存并退出文件。 -主机文件示例。 +主机文件示例如下: ![ansible-host](http://www.linuxtechi.com/wp-content/uploads/2015/09/ansible-host.jpg) @@ -62,19 +62,19 @@ Save and exit the file. 
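顺带一提,清单文件除了罗列 IP,还支持为整组主机附加连接参数(下面是一个假设性的示例片段,其中的组名和参数仅作演示,并非本文实验环境的一部分):

```ini
# 假设性的 /etc/ansible/hosts 片段:按组管理主机,并为整组设置 SSH 连接用户
[test-servers]
192.168.1.9
192.168.1.10

[test-servers:vars]
ansible_ssh_user=root
```

这样在执行 ansible 命令时就不必每次单独指定连接用户。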
#### 执行shell命令 #### -**例子1:检查Ansible节点的运行时间 ** +**例子1:检查Ansible节点的运行时间(uptime)** [root@ansible ~]# ansible -m command -a "uptime" 'test-servers' ![ansible-uptime](http://www.linuxtechi.com/wp-content/uploads/2015/09/ansible-uptime.jpg) -**例子2:检查节点的内核版本 ** +**例子2:检查节点的内核版本** [root@ansible ~]# ansible -m command -a "uname -r" 'test-servers' ![kernel-version-ansible](http://www.linuxtechi.com/wp-content/uploads/2015/09/kernel-version-ansible.jpg) -**例子3:给节点增加用户 ** +**例子3:给节点增加用户** [root@ansible ~]# ansible -m command -a "useradd mark" 'test-servers' [root@ansible ~]# ansible -m command -a "grep mark /etc/passwd" 'test-servers' @@ -93,7 +93,7 @@ via: http://www.linuxtechi.com/install-and-use-ansible-in-centos-7/ 作者:[Pradeep Kumar][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 447f142a76236ef7e722eb881db82fc850ecc9d7 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 4 Oct 2015 00:47:03 +0800 Subject: [PATCH 640/697] PUB:RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7 @FSSlc --- ... 
Up LDAP-based Authentication in RHEL 7.md | 56 +++++++++---------- 1 file changed, 28 insertions(+), 28 deletions(-) rename {translated/tech/RHCSA => published/RHCSA Series}/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md (76%) diff --git a/translated/tech/RHCSA/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md b/published/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md similarity index 76% rename from translated/tech/RHCSA/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md rename to published/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md index 9aba04d2cb..a071f4dc33 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md +++ b/published/RHCSA Series/RHCSA Series--Part 14--Setting Up LDAP-based Authentication in RHEL 7.md @@ -1,43 +1,43 @@ -RHCSA 系列: 在 RHEL 7 中设置基于 LDAP 的认证 – Part 14 +RHCSA 系列(十四): 在 RHEL 7 中设置基于 LDAP 的认证 ================================================================================ -在这篇文章中,我们将首先罗列一些 LDAP 的基础知识(它是什么,它被用于何处以及为什么会被这样使用),然后向你展示如何使用 RHEL 7 系统来设置一个 LDAP 服务器以及配置一个客户端来使用它达到认证的目的。 +在这篇文章中,我们将首先罗列一些 LDAP 的基础知识(它是什么,它被用于何处以及为什么会被这样使用),然后向你展示如何使用 RHEL 7 系统来设置一个 LDAP 服务器以及配置一个客户端来使用它达到认证的目的。 ![设置 LDAP 服务器及客户端认证](http://www.tecmint.com/wp-content/uploads/2015/06/setup-ldap-server-and-configure-client-authentication.png) -RHCSA 系列:设置 LDAP 服务器及客户端认证 – Part 14 +*RHCSA 系列:设置 LDAP 服务器及客户端认证 – Part 14* 正如你将看到的那样,关于认证,还存在其他可能的应用场景,但在这篇指南中,我们将只关注基于 LDAP 的认证。另外,请记住,由于这个话题的广泛性,在这里我们将只涵盖它的基础知识,但你可以参考位于总结部分中列出的文档,以此来了解更加深入的细节。 -基于相同的原因,你将注意到:为了简洁起见,我已经决定省略了几个位于 man 页中 LDAP 工具的参考,但相应命令的解释是近在咫尺的(例如,输入 man ldapadd)。 +基于相同的原因,你将注意到:为了简洁起见,我已经决定省略了几个位于 man 页中 LDAP 工具的参考,但相应命令的解释是近在咫尺的(例如,输入 man ldapadd)。 那还是让我们开始吧。 **我们的测试环境** -我们的测试环境包含两台 RHEL 7 机子: +我们的测试环境包含两台 RHEL 7机器: Server: 192.168.0.18. FQDN: rhel7.mydomain.com Client: 192.168.0.20. 
FQDN: ldapclient.mydomain.com -如若你想,你可以使用在 [Part 12: RHEL 7 的自动化安装][1] 中使用 Kickstart 安装的机子来作为客户端。 +如若你想,你可以使用在 [RHCSA 系列(十二): 使用 Kickstart 完成 RHEL 7 的自动化安装][1] 中使用 Kickstart 安装的机子来作为客户端。 #### LDAP 是什么? #### -LDAP 代表轻量级目录访问协议(Lightweight Directory Access Protocol),并包含在一系列协议之中,这些协议允许一个客户端通过网络去获取集中存储的信息(例如登陆 shell 的目录,家目录的绝对路径,或者其他典型的系统用户信息),而这些信息可以从不同的地方访问到或被很多终端用户获取到(另一个例子是含有某个公司所有雇员的家庭地址和电话号码的目录)。 +LDAP 代表轻量级目录访问协议(Lightweight Directory Access Protocol),并包含在一系列协议之中,这些协议允许一个客户端通过网络去获取集中存储的信息(例如所登录的 shell 的路径,家目录的绝对路径,或者其他典型的系统用户信息),而这些信息可以从不同的地方访问到或被很多终端用户获取到(另一个例子是含有某个公司所有雇员的家庭地址和电话号码的目录)。 对于那些被赋予了权限可以使用这些信息的人来说,将这些信息进行集中管理意味着可以更容易地维护和获取。 -下面的图表提供了一个简化了的关于 LDAP 的示意图,且在下面将会进行更多的描述: +下面的图表提供了一个简化了的关于 LDAP 的示意图,在下面将会进行更多的描述: ![LDAP 示意图](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Diagram.png) -LDAP 示意图 +*LDAP 示意图* 下面是对上面示意图的一个详细解释。 -- 在一个 LDAP 目录中,一个条目代表一个独立单元或信息,被所谓的 Distinguished Name 唯一识别。 -- 一个属性是一些与某个条目相关的信息(例如地址,有效的联系电话号码和邮箱地址)。 -- 每个属性被分配有一个或多个值,这些值被包含在一个以空格为分隔符的列表中。每个条目中那个唯一的值被称为一个 Relative Distinguished Name。 +- 在一个 LDAP 目录中,一个条目(entry)代表一个独立单元或信息,被所谓的 Distinguished Name(DN,区别名) 唯一识别。 +- 一个属性(attribute)是一些与某个条目相关的信息(例如地址,有效的联系电话号码和邮箱地址)。 +- 每个属性被分配有一个或多个值(value),这些值被包含在一个以空格为分隔符的列表中。每个条目中那个唯一的值被称为一个 Relative Distinguished Name(RDN,相对区别名)。 接下来,就让我们进入到有关服务器和客户端安装的内容。 @@ -67,7 +67,7 @@ LDAP 示意图 # systemctl restart slapd.service # systemctl stop slapd.service -**3. 由于 slapd 服务是由 ldap 用户来运行的(你可以使用 `ps -e -o pid,uname,comm | grep slapd` 来验证),为了使得服务器能够更改由管理工具创建的条目,这个用户应该有目录 `/var/lib/ldap` 的所有权,而这些管理工具仅可以由 root 用户来运行(紧接着有更多这方面的内容)。** +**3. 
由于 slapd 服务是由 ldap 用户来运行的(你可以使用 `ps -e -o pid,uname,comm | grep slapd` 来验证),为了使得服务器能够更改由管理工具创建的条目,该用户应该有目录 `/var/lib/ldap` 的所有权,而这些管理工具仅可以由 root 用户来运行(紧接着有更多这方面的内容)。** 在递归地更改这个目录的所有权之前,将 slapd 的示例数据库配置文件复制进这个目录: @@ -78,11 +78,11 @@ LDAP 示意图 # slappasswd -正如下一福图所展示的那样: +正如下一幅图所展示的那样: ![设置 LDAP 管理密码](http://www.tecmint.com/wp-content/uploads/2015/06/Set-LDAP-Admin-Password.png) -设置 LDAP 管理密码 +*设置 LDAP 管理密码* 然后以下面的内容创建一个 LDIF 文件(`ldaprootpasswd.ldif`): @@ -97,9 +97,9 @@ LDAP 示意图 - cn=config 指的是全局配置选项。 - olcDatabase 指的是一个特定的数据库实例的名称,并且通常可以在 `/etc/openldap/slapd.d/cn=config` 目录中发现。 -根据上面提供的理论背景,`ldaprootpasswd.ldif` 文件将添加一个条目到 LDAP 目录中。在那个条目中,每一行代表一个属性键值对(其中 dn,changetype,add 和 olcRootPW 为属性,每个冒号右边的字符串为相应的键值)。 +根据上面提供的理论背景,`ldaprootpasswd.ldif` 文件将添加一个条目到 LDAP 目录中。在那个条目中,每一行代表一个属性键值对(其中 dn,changetype,add 和 olcRootPW 为属性,每个冒号右边的字符串为相应的键值)。 -随着我们的进一步深入,请记住上面的这些,并注意到在这篇文章的余下部分,我们使用相同的 Common Names `(cn=)`,而这些余下的步骤中的每一步都将与其上一步相关。 +随着我们的进一步深入,请记住上面的这些,并注意到在这篇文章的余下部分,我们使用相同的 Common Names(通用名) `(cn=)`,而这些余下的步骤中的每一步都将与其上一步相关。 **5. 现在,通过特别指定相对于 ldap 服务的 URI ,添加相应的 LDAP 条目,其中只有 protocol/host/port 这几个域被允许使用。** @@ -109,7 +109,7 @@ LDAP 示意图 ![LDAP 配置](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Configuration.png) -LDAP 配置 +*LDAP 配置* 接着从 `/etc/openldap/schema` 目录导入一个基本的 LDAP 定义: @@ -117,11 +117,11 @@ LDAP 配置 ![LDAP 定义](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Definitions.png) -LDAP 定义 +*LDAP 定义* **6. 让 LDAP 在它的数据库中使用你的域名。** -以下面的内容创建另一个 LDIF 文件,我们称之为 `ldapdomain.ldif`, 然后酌情替换这个文件中的域名(在域名分量 dc=) 和密码: +以下面的内容创建另一个 LDIF 文件,我们称之为 `ldapdomain.ldif`, 然后酌情替换这个文件中的域名(在域名部分(Domain Component) dc=) 和密码: dn: olcDatabase={1}monitor,cn=config changetype: modify @@ -158,7 +158,7 @@ LDAP 定义 ![LDAP 域名配置](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Domain-Configuration.png) -LDAP 域名配置 +*LDAP 域名配置* **7. 
现在,该是添加一些条目到我们的 LDAP 目录的时候了。在下面的文件中,属性和键值由一个冒号`(:)` 所分隔,这个文件我们将命名为 `baseldapdomain.ldif`:** @@ -188,7 +188,7 @@ LDAP 域名配置 ![添加 LDAP 域名,属性和键值](http://www.tecmint.com/wp-content/uploads/2015/06/Add-LDAP-Domain-Configuration.png) -添加 LDAP 域名,属性和键值 +*添加 LDAP 域名,属性和键值* **8. 创建一个名为 ldapuser 的 LDAP 用户(`adduser ldapuser`),然后在`ldapgroup.ldif` 中为一个 LDAP 组创建定义。** @@ -206,7 +206,7 @@ LDAP 域名配置 # ldapadd -x -W -D "cn=Manager,dc=mydomain,dc=com" -f ldapgroup.ldif -**9. 为用户 ldapuser 添加一个带有定义的 LDIF 文件(`ldapuser.ldif`):** +**9. 为用户 ldapuser 添加一个带有定义的 LDIF 文件(`ldapuser.ldif`):** dn: uid=ldapuser,ou=People,dc=mydomain,dc=com objectClass: top @@ -231,7 +231,7 @@ LDAP 域名配置 ![LDAP 用户配置](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-User-Configuration.png) -LDAP 用户配置 +*LDAP 用户配置* 相似地,你可以删除你刚刚创建的用户条目: @@ -243,7 +243,7 @@ LDAP 用户配置 **11. 最后,但并非最不重要的是使用 LDAP 开启客户端的认证。** -为了在最后一步中对我们有所帮助,我们将使用 authconfig 工具(一个配置系统认证资源的界面)。 +为了在最后一步中对我们有所帮助,我们将使用 authconfig 工具(一个配置系统认证资源的界面)。 使用下面的命令,在通过 LDAP 服务器认证成功后,假如请求的用户的家目录不存在,则将会被创建: @@ -251,7 +251,7 @@ LDAP 用户配置 ![LDAP 客户端认证](http://www.tecmint.com/wp-content/uploads/2015/06/LDAP-Client-Configuration.png) -LDAP 客户端认证 +*LDAP 客户端认证* ### 总结 ### @@ -265,11 +265,11 @@ via: http://www.tecmint.com/setup-ldap-server-and-configure-client-authenticatio 作者:[Gabriel Cánepa][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/automatic-rhel-installations-using-kickstart/ +[1]:https://linux.cn/article-6335-1.html [2]:http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/ [3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/ch-Directory_Servers.html From 6848556ba56c6e576d065dc736ea4705f27b7aa7 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 4 Oct 2015 
12:34:54 +0800 Subject: [PATCH 641/697] translating --- .../20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md b/sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md index 557fcbc427..39a0c7f2cf 100644 --- a/sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md +++ b/sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md @@ -1,3 +1,5 @@ +translating---geekpi + Meet The New Ubuntu 15.10 Default Wallpaper ================================================================================ **The brand new default wallpaper for Ubuntu 15.10 Wily Werewolf has been unveiled. ** @@ -41,4 +43,4 @@ via: http://www.omgubuntu.co.uk/2015/09/ubuntu-15-10-wily-werewolf-default-wallp 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:https://launchpadlibrarian.net/218258177/Wolf_Wallpaper_Desktop_4096x2304_Purple_PNG-24.png \ No newline at end of file +[1]:https://launchpadlibrarian.net/218258177/Wolf_Wallpaper_Desktop_4096x2304_Purple_PNG-24.png From 1334b83ce7ef8edce9e1e606b304f21ca8b18736 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 4 Oct 2015 12:57:02 +0800 Subject: [PATCH 642/697] translated --- ... The New Ubuntu 15.10 Default Wallpaper.md | 46 ------------------- ... 
The New Ubuntu 15.10 Default Wallpaper.md | 44 ++++++++++++++++++ 2 files changed, 44 insertions(+), 46 deletions(-) delete mode 100644 sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md create mode 100644 translated/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md diff --git a/sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md b/sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md deleted file mode 100644 index 39a0c7f2cf..0000000000 --- a/sources/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md +++ /dev/null @@ -1,46 +0,0 @@ -translating---geekpi - -Meet The New Ubuntu 15.10 Default Wallpaper -================================================================================ -**The brand new default wallpaper for Ubuntu 15.10 Wily Werewolf has been unveiled. ** - -At first glance you may find little has changed from the origami-inspired ‘Suru’ design shipped with April’s release of Ubuntu 15.04. But look closer and you’ll see that the new default background does feature some subtle differences. - -For one it looks much lighter, helped by an orange glow emanating from the upper-left of the image. The angular folds and sections remain, but with the addition of blocky, rectangular sections. - -The new background has been designed by Canonical Design Team member Alex Milazzo. 
- -![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/09/ubuntu-1510-wily-werewolf-wallpaper.jpg) - -The Ubuntu 15.10 default desktop wallpaper - -And just to show that there is a change, here is the Ubuntu 15.04 default wallpaper for comparison: - -![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/03/suru-desktop-wallpaper-ubuntu-vivid.jpg) - -The Ubuntu 15.04 default desktop wallpaper - -### Download Ubuntu 15.10 Wallpaper ### - -If you’re running daily builds of Ubuntu 15.10 Wily Werewolf and don’t yet see this as your default wallpaper, you’ve not broken anything: the design has been unveiled but is, as of writing, yet to be packaged and uploaded to Wily itself. - -You don’t have to wait until October to use the new design as your desktop background. You can download the wallpaper in a huge, HiDPI-display-friendly 4096×2304 resolution by hitting the button below. - -- [Download the new Ubuntu 15.10 Default Wallpaper][1] - -Finally, as we say every time there’s a new wallpaper, you don’t have to care about the minutiae of distribution branding and design. If the new wallpaper is not to your tastes or you’d never keep it, you can, as ever, easily change it — this isn’t the Ubuntu Phone after all! - -**Are you a fan of the refreshed look? Let us know in the comments below. 
** -------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2015/09/ubuntu-15-10-wily-werewolf-default-wallpaper - -作者:[Joey-Elijah Sneddon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:https://launchpadlibrarian.net/218258177/Wolf_Wallpaper_Desktop_4096x2304_Purple_PNG-24.png diff --git a/translated/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md b/translated/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md new file mode 100644 index 0000000000..53751b8449 --- /dev/null +++ b/translated/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md @@ -0,0 +1,44 @@ +与新的Ubuntu 15.10默认壁纸相遇 +================================================================================ +**全新的Ubuntu 15.10 Wily Werewolf默认壁纸已经亮相。** + +乍一看你几乎无法发现与今年4月发布的Ubuntu 15.04中受到折纸启发的‘Suru’设计有什么差别。但是仔细看你就会发现默认背景有一些细微差别。 + +其中一点是整体更亮了,这得益于图片左上角发出的橘黄色光。棱角分明的折痕和色块保留了下来,但是增加了大块的矩形部分。 + +新的背景由Canonical设计团队的Alex Milazzo设计。 + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/09/ubuntu-1510-wily-werewolf-wallpaper.jpg) + +Ubuntu 15.10 默认桌面背景 + +只是为了显示改变,这个是Ubuntu 15.04的默认壁纸作为比较: + +![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/03/suru-desktop-wallpaper-ubuntu-vivid.jpg) + +Ubuntu 15.04 默认壁纸 + +### 下载Ubuntu 15.10 壁纸 ### + +如果你正运行的是Ubuntu 15.10 Wily Werewolf每日构建版本但还没有看到这个默认壁纸,这并不代表出了什么问题:设计虽已亮相,但在撰写本文时还没有打包进Wily中。 + +你不必等到10月份来使用新的设计来作为你的桌面背景。你可以点击下面的按钮下载4096×2304高清壁纸。 + +- [下载Ubuntu 15.10新的默认壁纸][1] + +最后,如我们每次在有新壁纸时说的,你不必在意发布版品牌和设计细节。如果这个新壁纸不合你的口味,或者你不想一直用它,和往常一样,你随时可以轻易地换掉它,毕竟这不是Ubuntu Phone! 
+ +**你是这个新外观的粉丝么?请在下面的评论中告诉我们。** + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2015/09/ubuntu-15-10-wily-werewolf-default-wallpaper + +作者:[Joey-Elijah Sneddon][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/117485690627814051450/?rel=author +[1]:https://launchpadlibrarian.net/218258177/Wolf_Wallpaper_Desktop_4096x2304_Purple_PNG-24.png From 4f03530ecde18cbb0b565da74927fa31f53ef017 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sun, 4 Oct 2015 17:11:58 +0800 Subject: [PATCH 643/697] [Translated]RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md --- ...ation and Guest Administration with KVM.md | 190 ----------------- ...ation and Guest Administration with KVM.md | 191 ++++++++++++++++++ 2 files changed, 191 insertions(+), 190 deletions(-) delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md create mode 100644 translated/tech/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md b/sources/tech/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md deleted file mode 100644 index 6d25bf914f..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md +++ /dev/null @@ -1,190 +0,0 @@ -FSSlc translating - -RHCSA Series: Essentials of Virtualization and Guest Administration with KVM – Part 15 -================================================================================ -If you look up the word virtualize in a dictionary, you will find that it means “to 
create a virtual (rather than actual) version of something”. In computing, the term virtualization refers to the possibility of running multiple operating systems simultaneously and isolated one from another, on top of the same physical (hardware) system, known in the virtualization schema as host. - -![KVM Virtualization Basics and KVM Guest Administration](http://www.tecmint.com/wp-content/uploads/2015/06/RHCSA-Part15.png) - -RHCSA Series: Essentials of Virtualization and Guest Administration with KVM – Part 15 - -Through the use of the virtual machine monitor (also known as hypervisor), virtual machines (referred to as guests) are provided virtual resources (i.e. CPU, RAM, storage, network interfaces, to name a few) from the underlying hardware. - -With that in mind, it is plain to see that one of the main advantages of virtualization is cost savings (in equipment and network infrastructure and in terms of maintenance effort) and a substantial reduction in the physical space required to accommodate all the necessary hardware. - -Since this brief how-to cannot cover all virtualization methods, I encourage you to refer to the documentation listed in the summary for further details on the subject. - -Please keep in mind that the present article is intended to be a starting point to learn the basics of virtualization in RHEL 7 using [KVM][1] (Kernel-based Virtual Machine) with command-line utilities, and not an in-depth discussion of the topic. - -### Verifying Hardware Requirements and Installing Packages ### - -In order to set up virtualization, your CPU must support it. You can verify whether your system meets the requirements with the following command: - - # grep -E 'svm|vmx' /proc/cpuinfo - -In the following screenshot we can see that the current system (with an AMD microprocessor) supports virtualization, as indicated by svm. If we had an Intel-based processor, we would see vmx instead in the results of the above command. 
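As an aside, the same check is easy to script. The sketch below is not part of the original article; it parses `/proc/cpuinfo`-style text for the same svm/vmx flags the grep above looks for:

```python
import re

def virt_flags(cpuinfo_text):
    """Collect the hardware virtualization flags (svm for AMD, vmx for
    Intel) from /proc/cpuinfo-style text, mirroring the grep shown above."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found.update(re.findall(r"\b(svm|vmx)\b", line))
    return found

# Demonstrate on a sample flags line (on a real host, read /proc/cpuinfo):
sample = "flags\t\t: fpu vme de pse msr vmx sse sse2"
print(sorted(virt_flags(sample)))  # ['vmx']
```

An empty result means the CPU does not advertise either flag, which matches the grep producing no output.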
- -![Check KVM Support](http://www.tecmint.com/wp-content/uploads/2015/06/Check-KVM-Support.png) - -Check KVM Support - -In addition, you will need to have virtualization capabilities enabled in the firmware of your host (BIOS or UEFI). - -Now install the necessary packages: - -- qemu-kvm is an open source virtualizer that provides hardware emulation for the KVM hypervisor whereas qemu-img provides a command line tool for manipulating disk images. -- libvirt includes the tools to interact with the virtualization capabilities of the operating system. -- libvirt-python contains a module that permits applications written in Python to use the interface supplied by libvirt. -- libguestfs-tools: miscellaneous system administrator command line tools for virtual machines. -- virt-install: other command-line utilities for virtual machine administration. - - # yum update && yum install qemu-kvm qemu-img libvirt libvirt-python libguestfs-tools virt-install - -Once the installation completes, make sure you start and enable the libvirtd service: - - # systemctl start libvirtd.service - # systemctl enable libvirtd.service - -By default, each virtual machine will only be able to communicate with the rest in the same physical server and with the host itself. To allow the guests to reach other machines inside our LAN and also the Internet, we need to set up a bridge interface in our host (say br0, for example) by, - -1. adding the following line to our main NIC configuration (most likely `/etc/sysconfig/network-scripts/ifcfg-enp0s3`): - - BRIDGE=br0 - -2. 
creating the configuration file for br0 (/etc/sysconfig/network-scripts/ifcfg-br0) with these contents (note that you may have to change the IP address, gateway address, and DNS information): - - DEVICE=br0 - TYPE=Bridge - BOOTPROTO=static - IPADDR=192.168.0.18 - NETMASK=255.255.255.0 - GATEWAY=192.168.0.1 - NM_CONTROLLED=no - DEFROUTE=yes - PEERDNS=yes - PEERROUTES=yes - IPV4_FAILURE_FATAL=no - IPV6INIT=yes - IPV6_AUTOCONF=yes - IPV6_DEFROUTE=yes - IPV6_PEERDNS=yes - IPV6_PEERROUTES=yes - IPV6_FAILURE_FATAL=no - NAME=br0 - ONBOOT=yes - DNS1=8.8.8.8 - DNS2=8.8.4.4 - -3. finally, enabling packet forwarding by making, in `/etc/sysctl.conf`, - - net.ipv4.ip_forward = 1 - -and loading the changes to the current kernel configuration: - - # sysctl -p - -Note that you may also need to tell firewalld that this kind of traffic should be allowed. Remember that you can refer to the article on that topic in this same series ([Part 11: Network Traffic Control Using FirewallD and Iptables][2]) if you need help to do that. - -### Creating VM Images ### - -By default, VM images will be created to `/var/lib/libvirt/images` and you are strongly advised to not change this unless you really need to, know what you’re doing, and want to handle SELinux settings yourself (such topic is out of the scope of this tutorial but you can refer to Part 13 of the RHCSA series: [Mandatory Access Control Essentials with SELinux][3] if you want to refresh your memory). - -This means that you need to make sure that you have allocated the necessary space in that filesystem to accommodate your virtual machines. 
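The "allocate the necessary space" advice above can be checked programmatically. This is a small standard-library sketch, not from the article; the directory is the libvirt default named above:

```python
import os
import shutil

def free_gib(path):
    """Available space, in GiB, on the filesystem holding `path`."""
    return shutil.disk_usage(path).free / (1024 ** 3)

# /var/lib/libvirt/images is the libvirt default mentioned above; fall back
# to the filesystem root when it does not exist (e.g. on a non-KVM host).
target = "/var/lib/libvirt/images"
if not os.path.isdir(target):
    target = os.path.abspath(os.sep)
print("%.1f GiB free on %s" % (free_gib(target), target))
```

Compare the reported figure against the disk sizes you plan to pass to virt-install.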
- -The following command will create a virtual machine named `tecmint-virt01` with 1 virtual CPU, 1 GB (=1024 MB) of RAM, and 20 GB of disk space (represented by `/var/lib/libvirt/images/tecmint-virt01.img`) using the rhel-server-7.0-x86_64-dvd.iso image located inside /home/gacanepa/ISOs as installation media and the br0 as network bridge: - - # virt-install \ - --network bridge=br0 - --name tecmint-virt01 \ - --ram=1024 \ - --vcpus=1 \ - --disk path=/var/lib/libvirt/images/tecmint-virt01.img,size=20 \ - --graphics none \ - --cdrom /home/gacanepa/ISOs/rhel-server-7.0-x86_64-dvd.iso - --extra-args="console=tty0 console=ttyS0,115200" - -If the installation file was located in a HTTP server instead of an image stored in your disk, you will have to replace the –cdrom flag with –location and indicate the address of the online repository. - -As for the –graphics none option, it tells the installer to perform the installation in text-mode exclusively. You can omit that flag if you are using a GUI interface and a VNC window to access the main VM console. Finally, with –extra-args we are passing kernel boot parameters to the installer that set up a serial VM console. - -The installation should now proceed as a regular (real) server now. If not, please review the steps listed above. - -### Managing Virtual Machines ### - -These are some typical administration tasks that you, as a system administrator, will need to perform on your virtual machines. Note that all of the following commands need to be run from your host: - -**1. List all VMs:** - - # virsh list --all - -From the output of the above command you will have to note the Id for the virtual machine (although it will also return its name and current status) because you will need it for most administration tasks related to a particular VM. - -**2. Display information about a guest:** - - # virsh dominfo [VM Id] - -**3. 
Start, restart, or stop a guest operating system:** - - # virsh start | reboot | shutdown [VM Id] - -**4. Access a VM’s serial console if networking is not available and no X server is running on the host:** - - # virsh console [VM Id] - -**Note** that this will require that you add the serial console configuration information to the `/etc/grub.conf` file (refer to the argument passed to the –extra-args option when the VM was created). - -**5. Modify assigned memory or virtual CPUs:** - -First, shutdown the guest: - - # virsh shutdown [VM Id] - -Edit the VM configuration for RAM: - - # virsh edit [VM Id] - -Then modify - - [Memory size here without brackets] - -Restart the VM with the new settings: - - # virsh create /etc/libvirt/qemu/tecmint-virt01.xml - -Finally, change the memory dynamically: - - # virsh setmem [VM Id] [Memory size here without brackets] - -For CPU: - - # virsh edit [VM Id] - -Then modify - - [Number of CPUs here without brackets] - -For further commands and details, please refer to table 26.1 in Chapter 26 of the RHEL 5 Virtualization guide (that guide, though a bit old, includes an exhaustive list of virsh commands used for guest administration). - -### SUMMARY ### - -In this article we have covered some basic aspects of virtualization with KVM in RHEL 7, which is both a vast and a fascinating topic, and I hope it will be helpful as a starting guide for you to later explore more advanced subjects found in the official [RHEL virtualization][4] getting started and [deployment / administration guides][5]. - -In addition, you can refer to the preceding articles in [this KVM series][6] in order to clarify or expand some of the concepts explained here. 
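To recap the memory-resize sequence above in scriptable form, here is a sketch that only composes the virsh command lines rather than running them. The VM name is the one from the article; `setmaxmem` is my addition (the article edits the XML with `virsh edit` instead) and should be checked against your virsh version:

```python
def resize_mem_commands(vm, mem_kib):
    """Compose the virsh command lines for a RAM change: shut the guest
    down, raise the maximum, set the new size, and start it again.
    (`setmaxmem` is an assumption; verify it on your virsh version.)"""
    return [
        ["virsh", "shutdown", vm],
        ["virsh", "setmaxmem", vm, str(mem_kib)],
        ["virsh", "setmem", vm, str(mem_kib)],
        ["virsh", "start", vm],
    ]

# Print rather than execute, so the sketch is safe to run anywhere:
for cmd in resize_mem_commands("tecmint-virt01", 2 * 1024 * 1024):  # 2 GiB in KiB
    print(" ".join(cmd))
```

Building argument lists this way also makes the sequence easy to feed to `subprocess.run` later.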
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/kvm-virtualization-basics-and-guest-administration/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.linux-kvm.org/page/Main_Page -[2]:http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in-firewall/ -[3]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ -[4]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Getting_Started_Guide/index.html -[5]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/index.html -[6]:http://www.tecmint.com/install-and-configure-kvm-in-linux/ diff --git a/translated/tech/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md b/translated/tech/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md new file mode 100644 index 0000000000..c4f83fee61 --- /dev/null +++ b/translated/tech/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md @@ -0,0 +1,191 @@ +RHCSA 系列: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 +================================================================================ +假如你在词典中查一下单词 “virtualize”,你将会发现它的意思是 “创造某些事物的一个虚拟物(而非真实的)”。在计算机行业中,术语虚拟化指的是:在相同的物理(硬件)系统上,同时运行多个操作系统,且这几个系统相互隔离的可能性,而那个硬件在虚拟化架构中被称作宿主机(host)。 + +![KVM 虚拟化基础和 KVM 虚拟机管理](http://www.tecmint.com/wp-content/uploads/2015/06/RHCSA-Part15.png) + +RHCSA 系类: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 + +通过使用虚拟机监视器(也被称为虚拟机管理程序 hypervisor),虚拟机(被称为 guest)由底层的硬件来提供虚拟资源(举几个例来说如 CPU,RAM,存储介质,网络接口等)。 + +考虑到这一点就可以清楚地看出,虚拟化的主要优点是节约成本(在设备和网络基础设施,及维护工作等方面)和显著地减少容纳所有必要硬件所需的物理空间。 + 
+由于这个简单的指南不能涵盖所有的虚拟化方法,我鼓励你参考在总结部分中列出的文档,以此对这个话题做更深入的了解。 + +请记住当前文章只是用于在 RHEL 7 中用命令行工具使用 [KVM][1] (Kernel-based Virtual Machine) 学习虚拟化基础知识的一个起点,而并不是对这个话题的深入探讨。 + +### 检查硬件要求并安装软件包 ### + +为了设置虚拟化,你的 CPU 必须能够支持它。你可以使用下面的命令来查看你的系统是否满足这个要求: + + # grep -E 'svm|vmx' /proc/cpuinfo + +在下面的截图中,我们可以看到当前的系统(带有一个 AMD 的微处理器)支持虚拟化,svm 字样的存在暗示了这一点。假如我们有一个 Intel 系列的处理器,我们将会看到上面命令的结果将会出现 vmx 字样。 + +![检查 KVM 支持](http://www.tecmint.com/wp-content/uploads/2015/06/Check-KVM-Support.png) + +检查 KVM 支持 + +另外,你需要在你宿主机的硬件(BIOS 或 UEFI)中开启虚拟化。 + +现在,安装必要的软件包: + +- qemu-kvm 是一个开源的虚拟机程序,为 KVM 虚拟机监视器提供硬件仿真,而 qemu-img 则提供了一个操纵磁盘镜像的命令行工具。 +- libvirt 包含与操作系统的虚拟化功能交互的工具。 +- libvirt-python 包含一个模块,它允许用 Python 写的应用来使用由 libvirt 提供的接口。 +- libguestfs-tools 包含各式各样的针对虚拟机的系统管理员命令行工具。 +- virt-install 包含针对虚拟机管理的其他命令行工具。 + + + # yum update && yum install qemu-kvm qemu-img libvirt libvirt-python libguestfs-tools virt-install + +一旦安装完全,请确保你启动并开启了 libvirtd 服务: + + # systemctl start libvirtd.service + # systemctl enable libvirtd.service + +默认情况下,每个虚拟机将只能够与相同的物理服务器和宿主机自身通信。要使得虚拟机能够访问位于局域网或因特网中的其他机器,我们需要像下面这样在我们的宿主机上设置一个桥接接口(比如说 br0): + +1. 添加下面的一行到我们的 NIC 主配置中(一般是 `/etc/sysconfig/network-scripts/ifcfg-enp0s3` 这个文件): + + BRIDGE=br0 + +2. 使用下面的内容(注意,你可能必须更改 IP 地址,网关地址和 DNS 信息)为 br0 创建一个配置文件(`/etc/sysconfig/network-scripts/ifcfg-br0`): + + + DEVICE=br0 + TYPE=Bridge + BOOTPROTO=static + IPADDR=192.168.0.18 + NETMASK=255.255.255.0 + GATEWAY=192.168.0.1 + NM_CONTROLLED=no + DEFROUTE=yes + PEERDNS=yes + PEERROUTES=yes + IPV4_FAILURE_FATAL=no + IPV6INIT=yes + IPV6_AUTOCONF=yes + IPV6_DEFROUTE=yes + IPV6_PEERDNS=yes + IPV6_PEERROUTES=yes + IPV6_FAILURE_FATAL=no + NAME=br0 + ONBOOT=yes + DNS1=8.8.8.8 + DNS2=8.8.4.4 + +3. 
最后通过使得文件`/etc/sysctl.conf` 中的 + + net.ipv4.ip_forward = 1 + +来开启包转发并加载更改到当前的内核配置中: + + # sysctl -p + +注意,你可能还需要告诉 firewalld 这类的流量应当被允许通过防火墙。假如你需要这样做,记住你可以参考这个系列的 [Part 11: 使用 firewalld 和 iptables 来进行网络流量控制][2]。 + +### 创建虚拟机镜像 ### + +默认情况下,虚拟机镜像将会被创建到 `/var/lib/libvirt/images` 中,且强烈建议你不要更改这个设定,除非你真的需要那么做且知道你在做什么,并能自己处理有关 SELinux 的设定(这个话题已经超出了本教程的讨论范畴,但你可以参考这个系列的第 13 部分[使用 SELinux 来进行强制访问控制][3],假如你想更新你的知识的话)。 + +这意味着你需要确保你在文件系统中分配了必要的空间来容纳你的虚拟机。 + +下面的命令将使用位于 `/home/gacanepa/ISOs`目录下的 rhel-server-7.0-x86_64-dvd.iso 镜像文件和 br0 这个网桥来创建一个名为 `tecmint-virt01` 的虚拟机,它有一个虚拟 CPU,1 GB(=1024 MB)的 RAM,20 GB 的磁盘空间(由`/var/lib/libvirt/images/tecmint-virt01.img`所代表): + + + # virt-install \ + --network bridge=br0 + --name tecmint-virt01 \ + --ram=1024 \ + --vcpus=1 \ + --disk path=/var/lib/libvirt/images/tecmint-virt01.img,size=20 \ + --graphics none \ + --cdrom /home/gacanepa/ISOs/rhel-server-7.0-x86_64-dvd.iso + --extra-args="console=tty0 console=ttyS0,115200" + +假如安装文件位于一个 HTTP 服务器上,而不是存储在你磁盘中的镜像中,你必须将上面的 `-cdrom` 替换为 `-location`,并明显地指出在线存储仓库的地址。 + +至于上面的 `–graphics none` 选项,它告诉安装程序只以文本模式执行安装过程。假如你使用一个 GUI 界面和一个 VNC 窗口来访问主虚拟机控制台,则可以省略这个选项。最后,使用 `–extra-args`参数,我们将传递内核启动参数给安装程序,以此来设置一个串行的虚拟机控制台。 + +现在,安装应当作为一个正常的(真实的)服务来执行了。假如没有,请查看上面列出的步骤。 + +### 管理虚拟机 ### + +作为一个系统管理员,还有一些典型的管理任务需要你在虚拟机上去完成。注:下面所有的命令都需要在你的宿主机上运行: + +**1. 列出所有的虚拟机:** + + # virsh list --all + +你必须留意上面命令输出中的虚拟机 ID(尽管上面的命令还会返回虚拟机的名称和当前的状态),因为你需要它来执行有关某个虚拟机的大多数管理任务。 + +**2. 显示某个虚拟机的信息:** + + # virsh dominfo [VM Id] + +**3. 开启,重启或停止一个虚拟机操作系统:** + + # virsh start | reboot | shutdown [VM Id] + +**4. 假如网络无法连接且在宿主机上没有运行 X 服务器,可以使用下面的目录来访问虚拟机的串行控制台:** + + # virsh console [VM Id] + +**注** 这需要你添加一个串行控制台配置信息到 `/etc/grub.conf` 文件中(参考刚才创建虚拟机时传递给`–extra-args`选项的参数)。 + +**5. 
修改分配的内存或虚拟 CPU:** + +首先,关闭虚拟机: + + # virsh shutdown [VM Id] + +为 RAM 编辑虚拟机的配置: + + # virsh edit [VM Id] + +然后更改 + + [内存大小,这里没有括号] + +使用新的设定重启虚拟机: + + # virsh create /etc/libvirt/qemu/tecmint-virt01.xml + +最后,可以使用下面的命令来动态地改变内存的大小: + + # virsh setmem [VM Id] [内存大小,这里没有括号] + +对于 CPU,使用: + + # virsh edit [VM Id] + +然后更改 + + [CPU 数目,这里没有括号] + +至于更深入的命令和细节,请参考 RHEL 5 虚拟化指南(这个指南尽管有些陈旧,但包括了用于管理虚拟机的 virsh 命令的详尽清单)的第 26 章里的表 26.1。 + +### 总结 ### + +在这篇文章中,我们涵盖了在 RHEL 7 中如何使用 KVM 和虚拟化的一些基本概念,这个话题是一个广泛且令人着迷的话题。并且我希望它能成为你在随后阅读官方的 [RHEL 虚拟化入门][4] 和 [RHEL 虚拟化部署和管理指南][5] ,探索更高级的主题时的起点教程,并给你带来帮助。 + +另外,为了分辨或拓展这里解释的某些概念,你还可以参考先前包含在 [KVM 系列][6] 中的文章。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/kvm-virtualization-basics-and-guest-administration/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.linux-kvm.org/page/Main_Page +[2]:http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in-firewall/ +[3]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ +[4]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Getting_Started_Guide/index.html +[5]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/index.html +[6]:http://www.tecmint.com/install-and-configure-kvm-in-linux/ From d497c369e5ca20a1c96f0d597772c20ee3a19e5f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Mon, 5 Oct 2015 18:27:40 +0800 Subject: [PATCH 644/697] Create 20151005 pyinfo() A good looking phpinfo-like python script.md --- ...good looking phpinfo-like python script.md | 28 +++++++++++++++++++ 1 file changed, 28 insertions(+) create mode 100644 
sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md
diff --git a/sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md b/sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md
new file mode 100644
index 0000000000..3f3e3291eb
--- /dev/null
+++ b/sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md
@@ -0,0 +1,28 @@
+pyinfo() A good looking phpinfo-like python script
+================================================================================
+Being a native php guy, I'm used to having phpinfo(), giving me easy access to php.ini settings, loaded modules and so on. So of course I wanted to call the non-existent pyinfo() function, to no avail. My fingers quickly pressed CTRL-E to google for an implementation of it; surely someone must've ported it already?
+
+Yes, someone did. But oh my, was it ugly. Preposterous! Since I cannot stand ugly layouts *cough*, I just had to build my own. So I took the code I found and cleaned up the layout to make it better. The official Python website isn't that bad layout-wise, so why not borrow their colors and background images? Yes, that sounds like a plan to me.
+
+[Gist here][1] | [Download here][2] | [Example here][3]
+
+Mind you, I only ran it on a Python 2.6.4 server, so anything else is at your own risk (but it should be no problem to port it to any other version). To get it working, just import the file and call pyinfo() while catching the function's return value. Print that on the screen. Huzzah!
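The "import, call, print" usage just described can be sketched as a minimal script. The `pyinfo` import is the article's module; the fallback stub is mine, so the example stands alone:

```python
# The real import is the article's pyinfo.py module; the stub below is a
# placeholder so this sketch runs even where pyinfo.py is not on the path.
try:
    from pyinfo import pyinfo
except ImportError:
    def pyinfo():
        return "<html><body>pyinfo placeholder</body></html>"

output = pyinfo()   # catch the function's return value...
print(output[:40])  # ...and print it (truncated for the demo)
```

In a CGI or WSGI context you would return the string instead of printing it, as the mod_wsgi snippet below shows.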
+
+For those who did not get that and are using [mod_wsgi][4], run it using something like this (replace that path, of course):
+```
+def application(environ, start_response):
+    import sys
+    path = 'YOUR_WWW_ROOT_DIRECTORY'
+    if path not in sys.path:
+        sys.path.append(path)
+    from pyinfo import pyinfo
+    output = pyinfo()
+    start_response('200 OK', [('Content-type', 'text/html')])
+    return [output]
+```
+
+
+[1]:https://gist.github.com/951825#file_pyinfo.py
+[2]:http://bran.name/dump/pyinfo.zip
[3]:http://bran.name/dump/pyinfo/index.py From fb472172c9c1c018ac810799c7d988af5acea5cb Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 7 Oct 2015 10:39:04 +0800 Subject: [PATCH 646/697] PUB:RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @FSSlc 这个系列终于完结啦。辛苦。 --- ...ation and Guest Administration with KVM.md | 46 ++++++++++--------- 1 file changed, 24 insertions(+), 22 deletions(-) rename {translated/tech => published/RHCSA Series}/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md (70%) diff --git a/translated/tech/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md b/published/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md similarity index 70% rename from translated/tech/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md rename to published/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md index c4f83fee61..03ea45452e 100644 --- a/translated/tech/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md +++ b/published/RHCSA Series/RHCSA Series--Part 15--Essentials of Virtualization and Guest Administration with KVM.md @@ -1,18 +1,19 @@ -RHCSA 系列: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 +RHCSA 系列(十五): 虚拟化基础和使用 KVM 进行虚拟机管理 ================================================================================ -假如你在词典中查一下单词 “virtualize”,你将会发现它的意思是 “创造某些事物的一个虚拟物(而非真实的)”。在计算机行业中,术语虚拟化指的是:在相同的物理(硬件)系统上,同时运行多个操作系统,且这几个系统相互隔离的可能性,而那个硬件在虚拟化架构中被称作宿主机(host)。 + +假如你在词典中查一下单词 “虚拟化(virtualize)”,你将会发现它的意思是 “创造某些事物的一个虚拟物(而非真实的)”。在计算机行业中,术语虚拟化(virtualization)指的是:在相同的物理(硬件)系统上,同时运行多个操作系统,且这几个系统相互隔离的**可能性**,而那个硬件在虚拟化架构中被称作宿主机(host)。 ![KVM 虚拟化基础和 KVM 虚拟机管理](http://www.tecmint.com/wp-content/uploads/2015/06/RHCSA-Part15.png) 
-RHCSA 系类: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 +*RHCSA 系列: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15* -通过使用虚拟机监视器(也被称为虚拟机管理程序 hypervisor),虚拟机(被称为 guest)由底层的硬件来提供虚拟资源(举几个例来说如 CPU,RAM,存储介质,网络接口等)。 +通过使用虚拟机监视器(也被称为虚拟机管理程序(hypervisor)),虚拟机(被称为 guest)由底层的硬件来供给虚拟资源(举几个例子来说,如 CPU,RAM,存储介质,网络接口等)。 考虑到这一点就可以清楚地看出,虚拟化的主要优点是节约成本(在设备和网络基础设施,及维护工作等方面)和显著地减少容纳所有必要硬件所需的物理空间。 由于这个简单的指南不能涵盖所有的虚拟化方法,我鼓励你参考在总结部分中列出的文档,以此对这个话题做更深入的了解。 -请记住当前文章只是用于在 RHEL 7 中用命令行工具使用 [KVM][1] (Kernel-based Virtual Machine) 学习虚拟化基础知识的一个起点,而并不是对这个话题的深入探讨。 +请记住当前文章只是用于在 RHEL 7 中用命令行工具使用 [KVM][1] (Kernel-based Virtual Machine(基于内核的虚拟机)) 学习虚拟化基础知识的一个起点,而并不是对这个话题的深入探讨。 ### 检查硬件要求并安装软件包 ### @@ -24,7 +25,7 @@ RHCSA 系类: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 ![检查 KVM 支持](http://www.tecmint.com/wp-content/uploads/2015/06/Check-KVM-Support.png) -检查 KVM 支持 +*检查 KVM 支持* 另外,你需要在你宿主机的硬件(BIOS 或 UEFI)中开启虚拟化。 @@ -36,21 +37,22 @@ RHCSA 系类: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 - libguestfs-tools 包含各式各样的针对虚拟机的系统管理员命令行工具。 - virt-install 包含针对虚拟机管理的其他命令行工具。 +命令如下: # yum update && yum install qemu-kvm qemu-img libvirt libvirt-python libguestfs-tools virt-install -一旦安装完全,请确保你启动并开启了 libvirtd 服务: +一旦安装完成,请确保你启动并开启了 libvirtd 服务: # systemctl start libvirtd.service # systemctl enable libvirtd.service -默认情况下,每个虚拟机将只能够与相同的物理服务器和宿主机自身通信。要使得虚拟机能够访问位于局域网或因特网中的其他机器,我们需要像下面这样在我们的宿主机上设置一个桥接接口(比如说 br0): +默认情况下,每个虚拟机将只能够与放在相同的物理服务器上的虚拟机以及宿主机自身通信。要使得虚拟机能够访问位于局域网或因特网中的其他机器,我们需要像下面这样在我们的宿主机上设置一个桥接接口(比如说 br0): -1. 添加下面的一行到我们的 NIC 主配置中(一般是 `/etc/sysconfig/network-scripts/ifcfg-enp0s3` 这个文件): +1、 添加下面的一行到我们的 NIC 主配置中(类似 `/etc/sysconfig/network-scripts/ifcfg-enp0s3` 这样的文件): BRIDGE=br0 -2. 使用下面的内容(注意,你可能必须更改 IP 地址,网关地址和 DNS 信息)为 br0 创建一个配置文件(`/etc/sysconfig/network-scripts/ifcfg-br0`): +2、 使用下面的内容(注意,你可能需要更改 IP 地址,网关地址和 DNS 信息)为 br0 创建一个配置文件(`/etc/sysconfig/network-scripts/ifcfg-br0`): DEVICE=br0 @@ -75,7 +77,7 @@ RHCSA 系类: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 DNS1=8.8.8.8 DNS2=8.8.4.4 -3. 
最后通过使得文件`/etc/sysctl.conf` 中的 +3、 最后在文件`/etc/sysctl.conf` 中设置: net.ipv4.ip_forward = 1 @@ -83,11 +85,11 @@ RHCSA 系类: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 # sysctl -p -注意,你可能还需要告诉 firewalld 这类的流量应当被允许通过防火墙。假如你需要这样做,记住你可以参考这个系列的 [Part 11: 使用 firewalld 和 iptables 来进行网络流量控制][2]。 +注意,你可能还需要告诉 firewalld 让这类的流量应当被允许通过防火墙。假如你需要这样做,记住你可以参考这个系列的 [使用 firewalld 和 iptables 来控制网络流量][2]。 ### 创建虚拟机镜像 ### -默认情况下,虚拟机镜像将会被创建到 `/var/lib/libvirt/images` 中,且强烈建议你不要更改这个设定,除非你真的需要那么做且知道你在做什么,并能自己处理有关 SELinux 的设定(这个话题已经超出了本教程的讨论范畴,但你可以参考这个系列的第 13 部分[使用 SELinux 来进行强制访问控制][3],假如你想更新你的知识的话)。 +默认情况下,虚拟机镜像将会被创建到 `/var/lib/libvirt/images` 中,且强烈建议你不要更改这个设定,除非你真的需要那么做且知道你在做什么,并能自己处理有关 SELinux 的设定(这个话题已经超出了本教程的讨论范畴,但你可以参考这个系列的第 13 部分 [使用 SELinux 来进行强制访问控制][3],假如你想更新你的知识的话)。 这意味着你需要确保你在文件系统中分配了必要的空间来容纳你的虚拟机。 @@ -104,11 +106,11 @@ RHCSA 系类: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 --cdrom /home/gacanepa/ISOs/rhel-server-7.0-x86_64-dvd.iso --extra-args="console=tty0 console=ttyS0,115200" -假如安装文件位于一个 HTTP 服务器上,而不是存储在你磁盘中的镜像中,你必须将上面的 `-cdrom` 替换为 `-location`,并明显地指出在线存储仓库的地址。 +假如安装文件位于一个 HTTP 服务器上,而不是存储在你磁盘中的镜像中,你必须将上面的 `-cdrom` 替换为 `-location`,并明确地指出在线存储仓库的地址。 -至于上面的 `–graphics none` 选项,它告诉安装程序只以文本模式执行安装过程。假如你使用一个 GUI 界面和一个 VNC 窗口来访问主虚拟机控制台,则可以省略这个选项。最后,使用 `–extra-args`参数,我们将传递内核启动参数给安装程序,以此来设置一个串行的虚拟机控制台。 +至于上面的 `–graphics none` 选项,它告诉安装程序只以文本模式执行安装过程。假如你使用一个 GUI 界面和一个 VNC 窗口来访问主虚拟机控制台,则可以省略这个选项。最后,使用 `–extra-args` 参数,我们将传递内核启动参数给安装程序,以此来设置一个串行的虚拟机控制台。 -现在,安装应当作为一个正常的(真实的)服务来执行了。假如没有,请查看上面列出的步骤。 +现在,所安装的虚拟机应当可以作为一个正常的(真实的)服务来运行了。假如没有,请查看上面列出的步骤。 ### 管理虚拟机 ### @@ -128,11 +130,11 @@ RHCSA 系类: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 # virsh start | reboot | shutdown [VM Id] -**4. 假如网络无法连接且在宿主机上没有运行 X 服务器,可以使用下面的目录来访问虚拟机的串行控制台:** +**4. 假如网络无法连接且在宿主机上没有运行 X 服务器,可以使用下面的命令来访问虚拟机的串行控制台:** # virsh console [VM Id] -**注** 这需要你添加一个串行控制台配置信息到 `/etc/grub.conf` 文件中(参考刚才创建虚拟机时传递给`–extra-args`选项的参数)。 +**注**:这需要你添加一个串行控制台配置信息到 `/etc/grub.conf` 文件中(参考刚才创建虚拟机时传递给`-extra-args`选项的参数)。 **5. 
修改分配的内存或虚拟 CPU:** @@ -146,7 +148,7 @@ RHCSA 系类: 虚拟化基础和使用 KVM 进行虚拟机管理 – Part 15 然后更改 - [内存大小,这里没有括号] + [内存大小,注意不要加上方括号] 使用新的设定重启虚拟机: @@ -178,14 +180,14 @@ via: http://www.tecmint.com/kvm-virtualization-basics-and-guest-administration/ 作者:[Gabriel Cánepa][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.linux-kvm.org/page/Main_Page -[2]:http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in-firewall/ -[3]:http://www.tecmint.com/selinux-essentials-and-control-filesystem-access/ +[2]:https://linux.cn/article-6315-1.html +[3]:https://linux.cn/article-6339-1.html [4]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Getting_Started_Guide/index.html [5]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/index.html [6]:http://www.tecmint.com/install-and-configure-kvm-in-linux/ From b6d889890710d20ba1acef3ec7db9c71e668032d Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 7 Oct 2015 23:15:37 +0800 Subject: [PATCH 647/697] PUB:20150921 Meet The New Ubuntu 15.10 Default Wallpaper @geekpi --- ... 
The New Ubuntu 15.10 Default Wallpaper.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) rename {translated/share => published}/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md (70%) diff --git a/translated/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md b/published/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md similarity index 70% rename from translated/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md rename to published/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md index 53751b8449..dc27b3609c 100644 --- a/translated/share/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md +++ b/published/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md @@ -1,8 +1,8 @@ -与新的Ubuntu 15.10默认壁纸相遇 +看看新的 Ubuntu 15.10 默认壁纸 ================================================================================ -**全新的Ubuntu 15.10 Wily Werewolf默认壁纸已经亮相 ** +**全新的Ubuntu 15.10 Wily Werewolf默认壁纸已经亮相** -乍一看你几乎无法发现与今天4月发布的Ubuntu 15.04中收到折纸启发的‘Suru’设计有什么差别。但是仔细看你就会发现默认背景有一些细微差别。 +乍一看你几乎无法发现与今天4月发布的Ubuntu 15.04中受到折纸启发的‘Suru’设计有什么差别。但是仔细看你就会发现默认背景有一些细微差别。 其中一点是更淡,受到由左上角图片发出的橘黄色光的帮助。保持了角褶皱和色块,但是增加了块和矩形部分。 @@ -10,25 +10,25 @@ ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/09/ubuntu-1510-wily-werewolf-wallpaper.jpg) -Ubuntu 15.10 默认桌面背景 +*Ubuntu 15.10 默认桌面背景* -只是为了显示改变,这个是Ubuntu 15.04的默认壁纸作为比较: +为了凸显变化,这个是Ubuntu 15.04的默认壁纸作为比较: ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/03/suru-desktop-wallpaper-ubuntu-vivid.jpg) -Ubuntu 15.04 默认壁纸 +*Ubuntu 15.04 默认壁纸* ### 下载Ubuntu 15.10 壁纸 ### -如果你正运行的是Ubuntu 15.10 Wily Werewolf每日编译版本,那么你无法看到这个默认壁纸:设计已经亮相但是还没有打包到Wily中。 +如果你正运行的是Ubuntu 15.10 Wily Werewolf每日构建版本,那么你无法看到这个默认壁纸:设计已经亮相但是还没有打包到Wily中。 你不必等到10月份来使用新的设计来作为你的桌面背景。你可以点击下面的按钮下载4096×2304高清壁纸。 - [下载Ubuntu 15.10新的默认壁纸][1] -最后,如我们每次在有新壁纸时说的,你不必在意发布版品牌和设计细节。如果壁纸不和你的口味或者不想永远用它,轻易地就换掉毕竟这不是Ubuntu Phone! +最后,如我们每次在有新壁纸时说的,你不必在意发布版品牌和设计细节。如果壁纸不合你的口味或者不想永远用它,轻易地就换掉,毕竟这不是Ubuntu Phone! 
-**你是你版本的粉丝么?在评论中让我们知道 ** +**你是新版本的粉丝么?在评论中让我们知道** -------------------------------------------------------------------------------- @@ -36,7 +36,7 @@ via: http://www.omgubuntu.co.uk/2015/09/ubuntu-15-10-wily-werewolf-default-wallp 作者:[Joey-Elijah Sneddon][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a410e2d6b53e12fa1c251fec0e5d6ea3acd959d9 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 8 Oct 2015 00:04:52 +0800 Subject: [PATCH 648/697] =?UTF-8?q?20151007-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ource Media Player MPlayer 1.2 Released.md | 62 +++++++++++++ ...l Script Opens In Text Editor In Ubuntu.md | 39 ++++++++ ...wnload Videos Using youtube-dl In Linux.md | 93 +++++++++++++++++++ ...7 Productivity Tools And Tips For Linux.md | 78 ++++++++++++++++ 4 files changed, 272 insertions(+) create mode 100644 sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md create mode 100644 sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md create mode 100644 sources/tech/20151007 How To Download Videos Using youtube-dl In Linux.md create mode 100644 sources/tech/20151007 Productivity Tools And Tips For Linux.md diff --git a/sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md b/sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md new file mode 100644 index 0000000000..518a2a4135 --- /dev/null +++ b/sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md @@ -0,0 +1,62 @@ +Open Source Media Player MPlayer 1.2 Released +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/MPlayer-1.2.jpg) + +Almost three years after [MPlaayer][1] 1.1, the new 
version of MPlayer has been released last week. MPlayer 1.2 brings support for many new codecs in this release.
+
+MPlayer is a cross-platform, open source media player. Its name is an abbreviation of “Movie Player”. MPlayer has been one of the oldest video players for Linux and during the last 15 years, it has inspired a number of other media players. Some of the famous media players based on MPlayer are:
+
+- [MPV][2]
+- SMPlayer
+- KPlayer
+- GNOME MPlayer
+- Deepin Player
+
+#### What’s new in MPlayer 1.2? ####
+
+- Compatibility with FFmpeg 2.8
+- VDPAU hardware acceleration for H.265/HEVC
+- A number of new codecs supported via FFmpeg
+- Improvements in TV and DVB support
+- GUI improvements
+- external dependency on libdvdcss/libdvdnav packages
+
+#### Install MPlayer 1.2 in Linux ####
+
+Most Linux distributions still ship MPlayer 1.1. If you want to use the new MPlayer 1.2, you’ll have to compile it from the source code, which can be tricky at times for beginners.
+
+I have used Ubuntu 15.04 for the installation of MPlayer 1.2. Installation instructions will remain the same for all Linux distributions except the part where you need to install yasm.
+
+Open a terminal and use the following commands:
+
+    wget http://www.mplayerhq.hu/MPlayer/releases/MPlayer-1.2.tar.xz
+
+    tar xvf MPlayer-1.2.tar.xz
+
+    cd MPlayer-1.2
+
+    sudo apt-get install yasm
+
+    ./configure
+
+When you run make, it will print a lot of output on the terminal screen and take some time to build. Have patience.
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/mplayer-1-2-released/
+
+作者:[Abhishek][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:https://www.mplayerhq.hu/
+[2]:http://mpv.io/
\ No newline at end of file
diff --git a/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md b/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md
new file mode 100644
index 0000000000..95f7bb4ee5
--- /dev/null
+++ b/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md
@@ -0,0 +1,39 @@
+Fix Shell Script Opens In Text Editor In Ubuntu
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Run-Shell-Script-on-Double-Click.jpg)
+
+When you double click on a shell script (.sh file), what do you expect? The normal expectation would be that it is executed. But this might not be the case in Ubuntu, or rather in the case of Files (Nautilus). You may go crazy yelling “Run, File, Run”, but the file won’t run; instead, it gets opened in Gedit.
+
+I know that you would ask, does the file have execute permission? And I say, yes. The shell script has execute permission, but if I double click on it, it still opens in a text editor. I don’t want that, and if you are facing the same issue, I assume you don’t want it either.
+
+I know that you would have been advised to run it in the terminal, and I know that would work, but that’s not an excuse for the GUI way not to work, is it?
+
+In this quick tutorial, we shall see **how to make a shell script run by double clicking on it**.
+
+#### Fix Shell script opens in text editor in Ubuntu ####
+
+The reason shell scripts open in a text editor is the default behavior set in Files (the file manager in Ubuntu). In earlier versions, it would ask whether you wanted to run the file or open it for editing. This default behavior was changed in later versions.
+
+To fix it, open the file manager and, from the top menu, click on **Preferences**:
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/execute-shell-program-ubuntu-1.png)
+
+Next, in **Files preferences**, go to the **Behavior** tab and you’ll see the option “**Executable Text Files**”.
+
+By default, it is set to “View executable text files when they are opened”. I would advise you to change it to “Ask each time” so that you’ll have the choice whether to execute or edit it, but of course you can also set it to execute by default. Your choice here, really.
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/execute-shell-program-ubuntu-2.png)
+
+I hope this quick tip helped you to fix this little ‘issue’. Questions and suggestions are always welcome.
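For readers who prefer the terminal, the same preference can in principle be toggled from the command line. The following sketch is an assumption on my part — the `gsettings` schema and key names are not mentioned in the article and can differ between Nautilus versions — so it checks that the key exists first and falls back to a plain message otherwise:

```shell
#!/bin/sh
# Hypothetical CLI equivalent of the "Executable Text Files" preference
# change described above. SCHEMA and KEY are assumptions that may differ
# between GNOME/Nautilus versions; we only call gsettings when the key
# is actually present, and report what happened either way.
SCHEMA="org.gnome.nautilus.preferences"
KEY="executable-text-activation"
if command -v gsettings >/dev/null 2>&1 \
   && gsettings list-keys "$SCHEMA" 2>/dev/null | grep -qx "$KEY"; then
    # Accepted values are typically 'launch', 'display' and 'ask'.
    gsettings set "$SCHEMA" "$KEY" 'ask'
    result="set to 'ask'"
else
    result="skipped ($SCHEMA/$KEY not available on this system)"
fi
echo "$KEY: $result"
```

On systems where the key exists, this corresponds to choosing “Ask each time” in the dialog shown above.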
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/shell-script-opens-text-editor/
+
+作者:[Abhishek][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
\ No newline at end of file
diff --git a/sources/tech/20151007 How To Download Videos Using youtube-dl In Linux.md b/sources/tech/20151007 How To Download Videos Using youtube-dl In Linux.md
new file mode 100644
index 0000000000..fa7dcbed6c
--- /dev/null
+++ b/sources/tech/20151007 How To Download Videos Using youtube-dl In Linux.md
@@ -0,0 +1,93 @@
+How To Download Videos Using youtube-dl In Linux
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Download-YouTube-Videos.jpeg)
+
+I know you have already seen [how to download YouTube videos][1]. But those tools were mostly GUI ways. I am going to show you how to download YouTube videos via the terminal using youtube-dl.
+
+### [youtube-dl][2] ###
+
+youtube-dl is a small, Python-based command-line tool that allows you to download videos from YouTube.com, Dailymotion, Google Video, Photobucket, Facebook, Yahoo, Metacafe, Depositfiles and a few more similar sites. It is written in Python and only requires a Python interpreter to run, so it is not platform restricted: it should run on any Unix, Windows or Mac OS X based system.
+
+The youtube-dl tool supports resuming interrupted downloads. If youtube-dl is killed (for example by Ctrl-C or due to loss of Internet connectivity) in the middle of a download, you can simply re-run it with the same YouTube video URL. It will automatically resume the unfinished download, as long as a partial download is present in the current directory. This means you don’t need a [download][3] manager for resuming downloads.
+
+#### Installing youtube-dl ####
+
+If you are running an Ubuntu-based Linux distribution, you can install it using this command:
+
+    sudo apt-get install youtube-dl
+
+For any Linux distribution, you can quickly install youtube-dl on your system through the command line interface with:
+
+    sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl
+
+After fetching the file, you need to set an executable permission on the script so it executes properly.
+
+    sudo chmod a+rx /usr/local/bin/youtube-dl
+
+#### Use YouTube-DL to Download Videos: ####
+
+To download a video file, simply run the following command, where “VIDEO_URL” is the URL of the video that you want to download.
+
+    youtube-dl VIDEO_URL
+
+#### Download YouTube Videos in Multiple Formats: ####
+
+These days YouTube videos are available in different resolutions, so you first need to check the available video formats of a given YouTube video. For that, run youtube-dl with the “-F” option. It will show you a list of available formats.
+
+    youtube-dl -F http://www.youtube.com/watch?v=BlXaGWbFVKY
+
+Its output will be like:
+
+    Setting language
+    BlXaGWbFVKY: Downloading video webpage
+    BlXaGWbFVKY: Downloading video info webpage
+    BlXaGWbFVKY: Extracting video information
+    Available formats:
+    37 : mp4 [1080×1920]
+    46 : webm [1080×1920]
+    22 : mp4 [720×1280]
+    45 : webm [720×1280]
+    35 : flv [480×854]
+    44 : webm [480×854]
+    34 : flv [360×640]
+    18 : mp4 [360×640]
+    43 : webm [360×640]
+    5 : flv [240×400]
+    17 : mp4 [144×176]
+
+Now, among the available video formats, choose the one that you like. For example, to download the low-resolution MP4 version numbered 17 above, you would use:
+
+    youtube-dl -f 17 http://www.youtube.com/watch?v=BlXaGWbFVKY
+
+#### Download subtitles of videos using youtube-dl ####
+
+First check if there are subtitles available for the video.
To list all subtitles for a video, use the command below:
+
+    youtube-dl --list-subs https://www.youtube.com/watch?v=Ye8mB6VsUHw
+
+To download all subtitles, but not the video:
+
+    youtube-dl --all-subs --skip-download https://www.youtube.com/watch?v=Ye8mB6VsUHw
+
+#### Download entire playlist ####
+
+To download a playlist, simply run the following command, where “playlist_url” is the URL of the playlist that you want to download.
+
+    youtube-dl -cit playlist_url
+
+youtube-dl is a versatile tool that provides a number of functionalities. No wonder it is such a popular command line tool.
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/download-youtube-linux/
+
+作者:[alimiracle][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/ali/
+[1]:http://itsfoss.com/download-youtube-videos-ubuntu/
+[2]:https://rg3.github.io/youtube-dl/
+[3]:http://itsfoss.com/xtreme-download-manager-install/
\ No newline at end of file
diff --git a/sources/tech/20151007 Productivity Tools And Tips For Linux.md b/sources/tech/20151007 Productivity Tools And Tips For Linux.md
new file mode 100644
index 0000000000..7aa53de511
--- /dev/null
+++ b/sources/tech/20151007 Productivity Tools And Tips For Linux.md
@@ -0,0 +1,78 @@
+Productivity Tools And Tips For Linux
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Productivity-Tips-Linux.jpg)
+
+Since productivity in itself is a subjective term, I am not going into the details of what “productivity” I am talking about here. I am going to show you some tools and tips that could help you focus better, be more efficient and save time while working in Linux.
+
+### Productivity tools and tips for Linux ###
+
+Again, I am using Ubuntu at the time of writing this article, but the productivity tools and tips I am going to show you here should be applicable to most of the Linux distributions out there.
+
+#### Ambient Music ####
+
+[Music impacts productivity][1]. It is an open secret. From psychologists to management gurus, all have been advising the use of ambient noise to feel relaxed and concentrate on your work. I am not going to argue with it, because it works for me. Putting my headphones on and listening to birds chirping and wind blowing indeed helps me relax.
+
+In Linux, I use ANoise as my ambient noise player. Thanks to the official PPA, you can easily [install the Ambient Noise player in Ubuntu][2] and other Ubuntu-based Linux distributions. Installing it lets you play the ambient music offline as well.
+
+Alternatively, you can always listen to ambient noise online. My favorite website for online ambient music is [Noisli][3]. Do give it a try.
+
+#### Task management app ####
+
+A good productive habit is to keep a to-do list. And if you combine it with the [Pomodoro Technique][4], it could work wonders. What I mean here is: create a to-do list and, if possible, assign those tasks a certain time. This will keep you on track with your planned tasks for the day.
+
+For this, I recommend the [Go For It!][5] app. You can install it in all major Linux distributions, and since it is based on [ToDo.txt][6], you can easily sync it with your smartphone as well. I have written a detailed guide on [how to use Go For It!][7].
+
+Alternatively, you can use [Sticky Notes][8] or [Google Keep][9]. If you are more of an [Evernote][10] fan, you can use these [open source alternatives for Evernote][11].
+
+#### Clipboard manager ####
+
+Ctrl+C and Ctrl+V are an integral part of our daily computer life. The only problem is that these important actions don’t have a memory (by default).
Suppose you copied something important and then accidentally copied something else; you’ll lose what you had before.
+
+A clipboard manager comes in handy in such a situation. It displays a history of the things you have copied (to the clipboard) recently, and you can copy text back to the clipboard from it.
+
+I prefer the [Diodon clipboard manager][12] for this purpose. It is actively developed and is available in the Ubuntu repositories.
+
+#### Recent notifications ####
+
+When you are busy with something else and a desktop notification blinks and fades away, what do you do? You wish that you could see what the notification was about, don’t you? The Recent Notification indicator does this job. It keeps a history of all recent notifications, so you will never miss a desktop notification again.
+
+You can read about the [Recent Notification Indicator here][13].
+
+#### Terminal Tips ####
+
+No, I am not going to show you all those Linux command tricks and shortcuts. That could make up an entire blog. I am going to show you a couple of terminal hacks you could use to enhance your productivity.
+
+
+- **Change** sudo **password timeout**: By default, sudo commands require you to re-enter your password after 15 minutes. This could be tiresome. You can actually change the default sudo password timeout. [This tutorial][14] shows you how to do that.
+- **Get desktop notification for command completion**: It’s a common joke among IT guys that developers spend a lot of time waiting for programs to be compiled, and it is not entirely true. But it does affect productivity, because while you wait for the programs to be compiled, you may end up doing something else and forget about the commands you had run in the terminal. A nicer way would be to get a desktop notification when a command is completed. This way, you won’t be distracted for long and can go back to what you were supposed to be doing earlier. Read about [how to get desktop notification for command completion][15].
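The command-completion tip in the list above can be sketched as a tiny wrapper script. Note that `notify-send` is an assumption here (it ships with libnotify on most desktops but is not named in the article), so the sketch falls back to a terminal bell and a printed message when it is missing:

```shell
#!/bin/sh
# Sketch of the "desktop notification on command completion" tip above.
# Replace 'sleep 1' with your real long-running command (make, wget, ...).
# 'notify-send' (libnotify) is an assumption; if it is absent we fall
# back to a terminal bell plus a printed message.
sleep 1
status=$?
msg="Command finished with exit status $status"
if command -v notify-send >/dev/null 2>&1; then
    notify-send "Terminal" "$msg"
else
    printf '\a%s\n' "$msg"
fi
```

In day-to-day use you would simply append the notification to the command itself, along the lines of `make; notify-send "Terminal" "make finished"`.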
+ +I know that this is not a comprehensive article about **increasing productivity**. But these little apps and tips may actually help you to get more out of your valuable time. + +Now it’s your turn. What programs or tips you use to be more productive in Linux? Something you want to share with the community? + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/productivity-tips-ubuntu/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://www.helpscout.net/blog/music-productivity/ +[2]:http://itsfoss.com/ambient-noise-music-player-ubuntu/ +[3]:http://www.noisli.com/ +[4]:https://en.wikipedia.org/wiki/Pomodoro_Technique +[5]:http://manuel-kehl.de/projects/go-for-it/ +[6]:http://todotxt.com/ +[7]:http://itsfoss.com/go-for-it-to-do-app-in-linux/ +[8]:http://itsfoss.com/indicator-stickynotes-windows-like-sticky-note-app-for-ubuntu/ +[9]:http://itsfoss.com/install-google-keep-ubuntu-1310/ +[10]:https://evernote.com/ +[11]:http://itsfoss.com/5-evernote-alternatives-linux/ +[12]:https://esite.ch/tag/diodon/ +[13]:http://itsfoss.com/7-best-indicator-applets-for-ubuntu-13-10/ +[14]:http://itsfoss.com/change-sudo-password-timeout-ubuntu/ +[15]:http://itsfoss.com/notification-terminal-command-completion-ubuntu/ \ No newline at end of file From f483c967b6c0b81371472a18dcd0a1fb320612ba Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 8 Oct 2015 08:11:21 +0800 Subject: [PATCH 649/697] PUB:20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip @geekpi --- ... 
And Remove Bookmarks In Ubuntu Beginner Tip.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) rename {translated/tech => published}/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md (71%) diff --git a/translated/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md b/published/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md similarity index 71% rename from translated/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md rename to published/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md index 25388915af..0cc17b9465 100644 --- a/translated/tech/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md +++ b/published/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md @@ -1,26 +1,26 @@ -如何在Ubuntu中添加和删除书签[新手技巧] +[新手技巧] 如何在Ubuntu中添加和删除书签 ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Add-Bookmark.jpg) 这是一篇对完全是新手的一篇技巧,我将向你展示如何在Ubuntu文件管理器中添加书签。 -现在如果你想知道为什么要这么做,答案很简单。它可以让你可以快速地在左边栏中访问。比如。我[在Ubuntu中安装了Copy][1]。现在它创建了/Home/Copy。先进入Home目录再进入Copy目录并不是一件大事,但是我想要更快地访问它。因此我添加了一个书签这样我就可以直接从侧边栏访问了。 +现在如果你想知道为什么要这么做,答案很简单。它可以让你可以快速地在左边栏中访问。比如,我[在Ubuntu中安装了Copy 云服务][1]。它创建在/Home/Copy。先进入Home目录再进入Copy目录并不是很麻烦,但是我想要更快地访问它。因此我添加了一个书签这样我就可以直接从侧边栏访问了。 ### 在Ubuntu中添加书签 ### 打开Files。进入你想要保存快速访问的目录。你需要在标记书签的目录里面。 -现在,你有两种方法。 +现在,你有两种方法: #### 方法1: #### -当你在Files中时(Ubuntu中的文件管理器),查看顶部菜单。你会看到书签按钮。点击它你会看到将当前路径保存为书签的选项。 +当你在Files(Ubuntu中的文件管理器)中时,查看顶部菜单。你会看到书签按钮。点击它你会看到将当前路径保存为书签的选项。 ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Add-Bookmark-Ubuntu.jpeg) #### 方法 2: #### -你可以直接按下Ctrl+D就可以将当前位置保存位书签。 +你可以直接按下Ctrl+D就可以将当前位置保存为书签。 如你所见,这里左边栏就有一个新添加的Copy目录: @@ -32,7 +32,7 @@ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Remove-bookmark-ubuntu.png) -这就是在Ubuntu中管理书签需要做的。我知道这对于大多数用户而言很贱,但是这也许多Ubuntu的新手而言或许还有用。 
+这就是在Ubuntu中管理书签需要做的。我知道这对于大多数用户而言很简单,但是这也许多Ubuntu的新手而言或许还有用。 -------------------------------------------------------------------------------- @@ -40,7 +40,7 @@ via: http://itsfoss.com/add-remove-bookmarks-ubuntu/ 作者:[Abhishek][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 74b6410450549e5ad4e9021e0087a3b361bf9a0f Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 8 Oct 2015 08:30:18 +0800 Subject: [PATCH 650/697] PUB:20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3 @ictlyh --- ...e Types and System Time in Linu--Part 3.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) rename {translated/tech => published}/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md (90%) diff --git a/translated/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md b/published/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md similarity index 90% rename from translated/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md rename to published/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md index 601f753341..bfa4068997 100644 --- a/translated/tech/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md +++ b/published/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md @@ -1,17 +1,17 @@ -Linux 中管理文件类型和系统时间的 5 个有用命令 - 第三部分 +5 个在 Linux 中管理文件类型和系统时间的有用命令 ================================================================================ 对于想学习 Linux 的初学者来说要适应使用命令行或者终端可能非常困难。由于终端比图形用户界面程序更能帮助用户控制 Linux 系统,我们必须习惯在终端中运行命令。因此为了有效记忆 Linux 不同的命令,你应该每天使用终端并明白怎样将命令和不同选项以及参数一同使用。 ![在 Linux 
中管理文件类型并设置时间](http://www.tecmint.com/wp-content/uploads/2015/09/Find-File-Types-in-Linux.jpg) -在 Linux 中管理文件类型并设置时间 - 第三部分 +*在 Linux 中管理文件类型并设置时间* -请先查看我们 [Linux 小技巧][1]系列之前的文章。 +请先查看我们 Linux 小技巧系列之前的文章: -- [Linux 中 5 个有趣的命令行提示和技巧 - 第一部分][2] -- [给新手的有用命令行技巧 - 第二部分][3] +- [5 个有趣的 Linux 命令行技巧][2] +- [给新手的 10 个有用 Linux 命令行技巧][3] -在这篇文章中,我们打算看看终端中 10 个和文件以及时间相关的提示和技巧。 +在这篇文章中,我们打算看看终端中 5 个和文件以及时间相关的提示和技巧。 ### Linux 中的文件类型 ### @@ -22,10 +22,10 @@ Linux 系统中文件有不同的类型: - 普通文件:可能包含命令、文档、音频文件、视频、图像,归档文件等。 - 设备文件:系统用于访问你硬件组件。 -这里有两种表示存储设备的设备文件块文件,例如硬盘,它们以快读取数据,字符文件,以逐个字符读取数据。 +这里有两种表示存储设备的设备文件:块文件,例如硬盘,它们以块读取数据;字符文件,以逐个字符读取数据。 - 硬链接和软链接:用于在 Linux 文件系统的任意地方访问文件。 -- 命名管道和套接字:允许不同的进程彼此之间交互。 +- 命名管道和套接字:允许不同的进程之间进行交互。 #### 1. 用 ‘file’ 命令确定文件类型 #### @@ -219,7 +219,7 @@ which 命令用于定位文件系统中的命令。 20 21 22 23 24 25 26 27 28 29 30 -使用 hwclock 命令查看硬件始终时间。 +使用 hwclock 命令查看硬件时钟时间。 tecmint@tecmint ~/Linux-Tricks $ sudo hwclock Wednesday 09 September 2015 06:02:58 PM IST -0.200081 seconds @@ -231,7 +231,7 @@ which 命令用于定位文件系统中的命令。 tecmint@tecmint ~/Linux-Tricks $ sudo hwclock Wednesday 09 September 2015 12:33:11 PM IST -0.891163 seconds -系统时间是由硬件始终时间在启动时设置的,系统关闭时,硬件时间被重置为系统时间。 +系统时间是由硬件时钟时间在启动时设置的,系统关闭时,硬件时间被重置为系统时间。 因此你查看系统时间和硬件时间时,它们是一样的,除非你更改了系统时间。当你的 CMOS 电量不足时,硬件时间可能不正确。 @@ -256,7 +256,7 @@ which 命令用于定位文件系统中的命令。 ### 总结 ### -对于初学者来说理解 Linux 中的文件类型是一个好的尝试,同时时间管理也非常重要,尤其是在需要可靠有效地管理服务的服务器上。希望这篇指南能对你有所帮助。如果你有任何反馈,别忘了给我们写评论。和 Tecmint 保持联系。 +对于初学者来说理解 Linux 中的文件类型是一个好的尝试,同时时间管理也非常重要,尤其是在需要可靠有效地管理服务的服务器上。希望这篇指南能对你有所帮助。如果你有任何反馈,别忘了给我们写评论。和我们保持联系。 -------------------------------------------------------------------------------- @@ -264,16 +264,16 @@ via: http://www.tecmint.com/manage-file-types-and-set-system-time-in-linux/ 作者:[Aaron Kili][a] 译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/aaronkili/ 
[1]:http://www.tecmint.com/tag/linux-tricks/ -[2]:http://www.tecmint.com/free-online-linux-learning-guide-for-beginners/ -[3]:http://www.tecmint.com/10-useful-linux-command-line-tricks-for-newbies/ +[2]:https://linux.cn/article-5485-1.html +[3]:https://linux.cn/article-6314-1.html [4]:http://www.tecmint.com/linux-dir-command-usage-with-examples/ -[5]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/ +[5]:https://linux.cn/article-2250-1.html [6]:http://www.tecmint.com/wc-command-examples/ [7]:http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/ [8]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/ \ No newline at end of file From 1c158b7d0ad49ee5c78b8e995db7d831f53fb8e6 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 8 Oct 2015 09:15:44 +0800 Subject: [PATCH 651/697] PUB:20150925 HTTP 2 Now Fully Supported in NGINX Plus @strugglingyouth --- ...TTP 2 Now Fully Supported in NGINX Plus.md | 127 ++++++++++++++++++ ...TTP 2 Now Fully Supported in NGINX Plus.md | 126 ----------------- 2 files changed, 127 insertions(+), 126 deletions(-) create mode 100644 published/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md delete mode 100644 translated/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md diff --git a/published/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md b/published/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md new file mode 100644 index 0000000000..f02b42b5a1 --- /dev/null +++ b/published/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md @@ -0,0 +1,127 @@ +NGINX Plus 现在完全支持 HTTP/2 +================================================================================ +早些时候,我们发布了支持 HTTP/2 协议的 [NGINX Plus R7][1]。作为 HTTP 协议的最新标准,HTTP/2 的设计为现在的 web 应用程序带来了更高的性能和安全性。(LCTT 译注: [开源版本的 NGINX 1.95 也支持 HTTP/2 了][18]。) + +NGINX Plus 所实现的 HTTP/2 协议可与现有的网站和应用程序进行无缝衔接。只需要一点改变,不管用户选择什么样的浏览器,NGINX Plus 都能为用户同时提供 HTTP/1.x 与HTTP/2 的最佳体验。 + +要支持 HTTP/2 仅需通过可选的 **nginx‑plus‑http2** 软件包。**nginx‑plus** 和 
**nginx‑plus‑extras** 软件包支持 SPDY 协议,目前推荐用于生产站点,因为其被大多数浏览器所支持并且代码也是相当成熟了。 + +### 为什么要使用 HTTP/2? ### + +HTTP/2 使数据传输更高效,对你的应用程序更安全。 HTTP/2 相比于 HTTP/1.x 有五个提高性能特点: + +- **完全复用** – 在一个保持激活(keepalive)的连接上,HTTP/1.1 强制按严格的顺序来处理请求。一个请求必须在下一个请求开始前结束。 HTTP/2 消除了这一要求,允许并行和乱序来处理请求。 + +- **单一,持久连接** – 由于 HTTP/2 允许请求完全复用,所以可以通过单一连接并行下载网页上的所有对象。在 HTTP/1.x 中,使用多个连接来并行下载资源,从而导致使用底层 TCP 协议效率很低。 + +- **二进制编码** – Header 信息使用紧凑的二进制格式发送,而不是纯文本格式,节省了传输字节。 + +- **Header 压缩** – Headers 使用专用的 HPACK 压缩算法来进行压缩,这进一步降低数据通过网络传输的字节。 + +- **SSL/TLS 加密** – 在 HTTP/2 中,强制使用 SSL/TLS。在 [RFC][2] 中并没有强制,其允许纯文本的 HTTP/2,但是当前所有实现 HTTP/2的 Web 浏览器都只支持加密。 SSL/TLS 可以使你的网站更安全,并且使用 HTTP/2 各项性能会有提升,加密和解密过程的性能损失就减少了。 + +要了解更多关于 HTTP/2: + +- 请阅读我们的 [白皮书][3],它涵盖了你需要了解HTTP/2 的一切。 +- 下载由 Google 的 Ilya Grigorik 编写的 [特别版的高性能浏览器网络电子书][4] 。 + +### NGINX Plus 如何实现 HTTP/2 ### + +我们的 HTTP/2 实现是基于 SPDY 支持的,它已经被广泛部署(使用了 NGINX 或 NGINX Plus 的网站近 75% 都使用了 SPDY)。使用 NGINX Plus 部署 HTTP/2 时,几乎不会改变你应用程序的配置。本节将讨论 NGINX Plus如何实现对 HTTP/2 的支持。 + +#### 一个 HTTP/2 网关 #### + +![](https://www.nginx.com/wp-content/uploads/2015/09/http2-27-1024x300.png) + +NGINX Plus 作为一个 HTTP/2 网关。它与支持 HTTP/2 的客户端 Web 浏览器用 HTTP/2 通讯,而转换 HTTP/2 请求给后端服务器通信时使用 HTTP/1.x(或者 FastCGI, SCGI, uWSGI, 等等. 
– 取决于你目前正在使用的协议)。 + +#### 向后兼容性 #### + +![](https://www.nginx.com/wp-content/uploads/2015/09/http2-281-1024x581.png) + +在一段时间内,你需要同时支持 HTTP/2 和 HTTP/1.x。在撰写本文时,超过50%的用户使用的 Web 浏览器已经[支持 HTTP/2][5],但这也意味着近50%的人还没有使用。 + +为了同时支持 HTTP/1.x 和 HTTP/2,NGINX Plus 实现了 TLS 上的 Next Protocol Negotiation (NPN)扩展。当 Web 浏览器连接到服务器时,其将所支持的协议列表发送到服务器端。如果浏览器支持的协议列表中包括 h2 - 即 HTTP/2,NGINX Plus 将使用 HTTP/2 连接到浏览器。如果浏览器不支持 NPN 或在发送支持的协议列表中没有 h2,NGINX Plus 将继续回落到 HTTP/1.x。 + +### 转向 HTTP/2 ### + +NGINX 公司会尽可能帮助大家无缝过渡到使用 HTTP/2。本节介绍了通过对你应用进行改变来启用对 HTTP/2 支持,其中只需对 NGINX Plus 配置进行几个变化。 + +#### 前提条件 #### + +使用 **nginx‑plus‑http2** 软件包升级到 NGINX Plus R7。注意现在还没有支持 HTTP/2 版本的 **nginx‑plus‑extras** 软件包。 + +#### 重定向所有流量到 SSL/TLS #### + +如果你的应用尚未使用 SSL/TLS 加密,现在启用它正是一个好的时机。加密你的应用程序可以保护你免受间谍以及来自其他中间人的攻击。一些搜索引擎甚至在搜索结果中对加密站点[提高排名][6]。下面的配置块重定向所有的普通 HTTP 请求到该网站的加密版本。 + + server { + listen 80; + location / { + return 301 https://$host$request_uri; + } + } + +#### 启用 HTTP/2 #### + +要启用对 HTTP/2 的支持,只需将 http2 参数添加到所有的 [listen][7] 指令中,也要包括 SSL 参数,因为浏览器不支持不加密的 HTTP/2 请求。 + + server { + listen 443 ssl http2 default_server; + + ssl_certificate server.crt; + ssl_certificate_key server.key; + … + } + +如果有必要,重启 NGINX Plus,例如通过运行 `nginx -s reload` 命令。要验证 HTTP/2 是否正常工作,你可以在 [Google Chrome][8] 和 [Firefox][9] 中使用 “HTTP/2 and SPDY indicator” 插件来检查。 + +### 注意事项 ### + +- 在安装 **nginx‑plus‑http2** 包之前, 你必须删除配置文件中所有 listen 指令后的 SPDY 参数(使用 http2 和 ssl 参数来替换它以启用对 HTTP/2 的支持)。使用这个包后,如果 listen 指令后有 spdy 参数,NGINX Plus 将无法启动。 + +- 如果你在 NGINX Plus 前端使用了 Web 应用防火墙(WAF),请确保它能够解析 HTTP/2,或者把它移到 NGINX Plus 后面。 + +- 此版本不支持在 HTTP/2 RFC 中定义的 “Server Push” 特性。 NGINX Plus 以后的版本可能会支持它。 + +- NGINX Plus R7 同时支持 SPDY 和 HTTP/2(LCTT 译注:但是你只能同时使用其中一种)。在以后的版本中,我们将弃用对 SPDY 的支持。谷歌在2016年初将 [弃用 SPDY][10],因此同时支持这两种协议也非必要。 + +- 如果 [ssl_prefer_server_ciphers][11] 设置为 on 或者使用了定义在 [Appendix A: TLS 1.2 Ciper Suite Black List][13] 中的 [ssl_ciphers][12] 列表时,浏览器会出现 handshake-errors 而无法正常工作。详细内容请参阅 [section 9.2.2 of the HTTP/2 RFC][14]。 + +### 特别感谢 ### + +NGINX 
公司要感谢 [Dropbox][15] 和 [Automattic][16],他们是我们软件的重度使用者,并帮助我们实现 HTTP/2。他们的贡献帮助我们加速完成这个软件,我们希望你也能支持他们。 + +![](https://www.nginx.com/wp-content/themes/nginx-theme/assets/img/landing-page/highperf_nginx_ebook.png) + +[O'REILLY'S BOOK ABOUT HTTP/2 & PERFORMANCE TUNING][17] + +-------------------------------------------------------------------------------- + +via: https://www.nginx.com/blog/http2-r7/ + +作者:[Faisal Memon][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.nginx.com/blog/author/fmemon/ +[1]:https://www.nginx.com/blog/nginx-plus-r7-released/ +[2]:https://tools.ietf.org/html/rfc7540 +[3]:https://www.nginx.com/wp-content/uploads/2015/09/NGINX_HTTP2_White_Paper_v4.pdf +[4]:https://www.nginx.com/http2-ebook/ +[5]:http://caniuse.com/#feat=http2 +[6]:http://googlewebmastercentral.blogspot.co.uk/2014/08/https-as-ranking-signal.html +[7]:http://nginx.org/en/docs/http/ngx_http_core_module.html#listen +[8]:https://chrome.google.com/webstore/detail/http2-and-spdy-indicator/mpbpobfflnpcgagjijhmgnchggcjblin?hl=en +[9]:https://addons.mozilla.org/en-us/firefox/addon/spdy-indicator/ +[10]:http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html +[11]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_prefer_server_ciphers +[12]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers +[13]:https://tools.ietf.org/html/rfc7540#appendix-A +[14]:https://tools.ietf.org/html/rfc7540#section-9.2.2 +[15]:http://dropbox.com/ +[16]:http://automattic.com/ +[17]:https://www.nginx.com/http2-ebook/ +[18]:http://mailman.nginx.org/pipermail/nginx-announce/2015/000162.html \ No newline at end of file diff --git a/translated/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md b/translated/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md deleted file mode 100644 index 0a9cf30ad3..0000000000 
--- a/translated/tech/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md +++ /dev/null @@ -1,126 +0,0 @@ - -NGINX Plus 现在完全支持 HTTP/2 -================================================================================ -本周早些时候,我们发布了对 HTTP/2 支持的 [NGINX Plus R7][1]。作为 HTTP 协议的最新标准,HTTP/2 的设计对现在的 web 应用程序带来了更高的性能和安全性。 - -NGINX Plus 使用 HTTP/2 协议可与现有的网站和应用程序进行无缝衔接。最微小的变化就是不管用户选择什么样的浏览器,NGINX Plus 都能为用户提供 HTTP/1.x 与HTTP/2 并发运行带来的最佳体验。 - -要支持 HTTP/2 仅需提供 **nginx‑plus‑http2** 软件包。**nginx‑plus** 和 **nginx‑plus‑extras** 软件包支持 SPDY 协议,目前推荐用于生产站点,因为其被大多数浏览器所支持并且代码也是相当成熟了。 - -### 为什么要使用 HTTP/2? ### -HTTP/2 使数据传输更高效,对你的应用程序更安全。 HTTP/2 相比于 HTTP/1.x 有五个提高性能特点: - -- **完全复用** – HTTP/1.1 强制按严格的顺序来对一个请求建立连接。请求建立必须在下一个进程开始之前完成。 HTTP/2 消除了这一要求,允许并行和乱序来完成请求的建立。 - -- **单一,持久连接** – 由于 HTTP/2 允许请求真正的复用,现在通过单一连接可以并行下载网页上的所有对象。在 HTTP/1.x 中,使用多个连接来并行下载资源,从而导致使用底层 TCP 协议效率很低。 - -- **二进制编码** – Header 信息使用紧凑二进制格式发送,而不是纯文本格式,节省了传输字节。 - -- **Header 压缩** – Headers 使用专用的算法来进行压缩,HPACK 压缩,这进一步降低数据通过网络传输的字节。 - -- **SSL/TLS encryption** – 在 HTTP/2 中,强制使用 SSL/TLS。在 [RFC][2] 中并没有强制,其允许纯文本的 HTTP/2,它是由当前 Web 浏览器执行 HTTP/2 的。 SSL/TLS 使你的网站更安全,并且使用 HTTP/2 所有性能会有提升,加密和解密过程的性能也有所提升。 - -要了解更多关于 HTTP/2: - -- 请阅读我们的 [白皮书][3],它涵盖了你需要了解HTTP/2 的一切。 -- 下载由 Google 的 Ilya Grigorik 编写的 [特别版的高性能浏览器网络电子书][4] 。 - -### NGINX Plus 如何实现 HTTP/2 ### - -实现 HTTP/2 要基于对 SPDY 的支持,它已经被广泛部署(使用了 NGINX 或 NGINX Plus 的网站近 75% 都使用了 SPDY)。使用 NGINX Plus 部署 HTTP/2 时,几乎不会改变你应用程序的配置。本节将讨论 NGINX Plus如何实现对 HTTP/2 的支持。 - -#### 一个 HTTP/2 网关 #### - -![](https://www.nginx.com/wp-content/uploads/2015/09/http2-27-1024x300.png) - -NGINX Plus 作为一个 HTTP/2 网关。它谈到 HTTP/2 对客户端 Web 浏览器支持,但传输 HTTP/2 请求返回给后端服务器通信时使用 HTTP/1.x(或者 FastCGI, SCGI, uWSGI, 等等. 
– 取决于你目前正在使用的协议)。 - -#### 向后兼容性 #### - -![](https://www.nginx.com/wp-content/uploads/2015/09/http2-281-1024x581.png) - -在不久的未来,你需要同时支持 HTTP/2 和 HTTP/1.x。在撰写本文时,超过50%的用户使用的 Web 浏览器已经[支持 HTTP/2][5],但这也意味着近50%的人还没有使用。 - -为了同时支持 HTTP/1.x 和 HTTP/2,NGINX Plus 实现了将 Next Protocol Negotiation (NPN协议)扩展到 TLS 中。当 Web 浏览器连接到服务器时,其将所支持的协议列表发送到服务器端。如果浏览器支持的协议列表中包括 h2 - 即,HTTP/2,NGINX Plus 将使用 HTTP/2 连接到浏览器。如果浏览器不支持 NPN 或在发送支持的协议列表中没有 h2,NGINX Plus 将继续使用 HTTP/1.x。 - -### 转向 HTTP/2 ### - -NGINX,公司尽可能无缝过渡到使用 HTTP/2。本节通过对你应用程序的改变来启用对 HTTP/2 的支持,其中只包括对 NGINX Plus 配置的几个变化。 - -#### 前提条件 #### - -使用 **nginx‑plus‑http2** 软件包升级到 NGINX Plus R7 . 注意启用 HTTP/2 版本在此时不需要使用 **nginx‑plus‑extras** 软件包。 - -#### 重定向所有流量到 SSL/TLS #### - -如果你的应用程序尚未使用 SSL/TLS 加密,现在启用它正是一个好的时机。加密你的应用程序可以保护你免受间谍以及来自其他中间人的攻击。一些搜索引擎甚至在搜索结果中对加密站点 [提高排名][6]。下面的配置块重定向所有的普通 HTTP 请求到该网站的加密版本。 - - server { - listen 80; - location / { - return 301 https://$host$request_uri; - } - } - -#### 启用 HTTP/2 #### - -要启用对 HTTP/2 的支持,只需将 http2 参数添加到所有的 [listen][7] 指令中,包括 SSL 参数,因为浏览器不支持不加密的 HTTP/2 请求。 - - server { - listen 443 ssl http2 default_server; - - ssl_certificate server.crt; - ssl_certificate_key server.key; - … - } - -如果有必要,重启 NGINX Plus,例如通过运行 nginx -s reload 命令。要验证 HTTP/2 是否正常工作,你可以在 [Google Chrome][8] 和 [Firefox][9] 中使用 “HTTP/2 and SPDY indicator” 插件来检查。 - -### 注意事项 ### - -- 在安装 **nginx‑plus‑http2** 包之前, 你必须删除配置文件中所有 listen 指令后的 SPDY 参数(使用 http2 和 ssl 参数来替换它以启用对 HTTP/2 的支持)。使用这个包后,如果 listen 指令后有 spdy 参数,NGINX Plus 将无法启动。 - -- 如果你在 NGINX Plus 前端使用了 Web 应用防火墙(WAF),请确保它能够解析 HTTP/2,或者把它移到 NGINX Plus 后面。 - -- 此版本在 HTTP/2 RFC 不支持 “Server Push” 特性。 NGINX Plus 以后的版本可能会支持它。 - -- NGINX Plus R7 同时支持 SPDY 和 HTTP/2。在以后的版本中,我们将弃用对 SPDY 的支持。谷歌在2016年初将 [弃用 SPDY][10],因此同时支持这两种协议也非必要。 - -- 如果 [ssl_prefer_server_ciphers][11] 设置为 on 或者 [ssl_ciphers][12] 列表被定义在 [Appendix A: TLS 1.2 Ciper Suite Black List][13] 使用时,浏览器会出现 handshake-errors 而无法正常工作。详细内容请参阅 [section 9.2.2 of the HTTP/2 RFC][14]。 - -### 特别感谢 ### - -NGINX,公司要感谢 [Dropbox][15] 和 
[Automattic][16],他们是我们软件的重度使用者,并帮助我们实现 HTTP/2。他们的贡献帮助我们加速完成这个软件,我们希望你也能支持他们。 - -![](https://www.nginx.com/wp-content/themes/nginx-theme/assets/img/landing-page/highperf_nginx_ebook.png) - -[O'REILLY'S BOOK ABOUT HTTP/2 & PERFORMANCE TUNING][17] - --------------------------------------------------------------------------------- - -via: https://www.nginx.com/blog/http2-r7/ - -作者:[Faisal Memon][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.nginx.com/blog/author/fmemon/ -[1]:https://www.nginx.com/blog/nginx-plus-r7-released/ -[2]:https://tools.ietf.org/html/rfc7540 -[3]:https://www.nginx.com/wp-content/uploads/2015/09/NGINX_HTTP2_White_Paper_v4.pdf -[4]:https://www.nginx.com/http2-ebook/ -[5]:http://caniuse.com/#feat=http2 -[6]:http://googlewebmastercentral.blogspot.co.uk/2014/08/https-as-ranking-signal.html -[7]:http://nginx.org/en/docs/http/ngx_http_core_module.html#listen -[8]:https://chrome.google.com/webstore/detail/http2-and-spdy-indicator/mpbpobfflnpcgagjijhmgnchggcjblin?hl=en -[9]:https://addons.mozilla.org/en-us/firefox/addon/spdy-indicator/ -[10]:http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html -[11]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_prefer_server_ciphers -[12]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers -[13]:https://tools.ietf.org/html/rfc7540#appendix-A -[14]:https://tools.ietf.org/html/rfc7540#section-9.2.2 -[15]:http://dropbox.com/ -[16]:http://automattic.com/ -[17]:https://www.nginx.com/http2-ebook/ From 614c7a938c1123df9818204dfb961a67f159623f Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Thu, 8 Oct 2015 09:34:45 +0800 Subject: [PATCH 652/697] Update 20151005 pyinfo() A good looking phpinfo-like python script.md --- ...0151005 pyinfo() A good looking phpinfo-like python script.md | 1 + 1 file 
changed, 1 insertion(+) diff --git a/sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md b/sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md index 7b38244e76..f096bc5fc6 100644 --- a/sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md +++ b/sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md @@ -1,3 +1,4 @@ +translation by strugglingyouth pyinfo() A good looking phpinfo-like python script ================================================================================ Being a native php guy, I'm used to having phpinfo(), giving me easy access to php.ini settings and loaded modules etc. So ofcourse I wanted to call the not existing pyinfo() function, to no avail. My fingers quickly pressed CTRL-E to google for a implementation of it, someone must've ported it already? From fd6a035bbf0994bf75a50166f906f90eb7ace258 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 8 Oct 2015 11:14:12 +0800 Subject: [PATCH 653/697] PUB:20150906 Installing NGINX and NGINX Plus With Ansible MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @strugglingyouth 文章很长,辛苦了。不过要注意英文的倒装语句。 --- ...lling NGINX and NGINX Plus With Ansible.md | 103 +++++++++--------- 1 file changed, 50 insertions(+), 53 deletions(-) rename {translated/tech => published}/20150906 Installing NGINX and NGINX Plus With Ansible.md (59%) diff --git a/translated/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md b/published/20150906 Installing NGINX and NGINX Plus With Ansible.md similarity index 59% rename from translated/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md rename to published/20150906 Installing NGINX and NGINX Plus With Ansible.md index e80f080624..10e16e3082 100644 --- a/translated/tech/20150906 Installing NGINX and NGINX Plus With Ansible.md +++ b/published/20150906 Installing NGINX and NGINX Plus With Ansible.md @@ -1,14 +1,12 @@ -translation by 
strugglingyouth -nstalling NGINX and NGINX Plus With Ansible +使用 ansible 安装 NGINX 和 NGINX Plus ================================================================================ -在生产环境中,我会更喜欢做与自动化相关的所有事情。如果计算机能完成你的任务,何必需要你亲自动手呢?但是,在不断变化并存在多种技术的环境中,创建和实施自动化是一项艰巨的任务。这就是为什么我喜欢[Ansible][1]。Ansible是免费的,开源的,对于 IT 配置管理,部署和业务流程,使用起来非常方便。 +在生产环境中,我会更喜欢做与自动化相关的所有事情。如果计算机能完成你的任务,何必需要你亲自动手呢?但是,在不断变化并存在多种技术的环境中,创建和实施自动化是一项艰巨的任务。这就是为什么我喜欢 [Ansible][1] 的原因。Ansible 是一个用于 IT 配置管理,部署和业务流程的开源工具,使用起来非常方便。 +我最喜欢 Ansible 的一个特点是,它是完全无客户端的。要管理一个系统,通过 SSH 建立连接,它使用[Paramiko][2](一个 Python 库)或本地的 [OpenSSH][3]。Ansible 另一个吸引人的地方是它有许多可扩展的模块。这些模块可被系统管理员用于执行一些的常见任务。特别是,它们使用 Ansible 这个强有力的工具可以跨多个服务器、环境或操作系统安装和配置任何程序,只需要一个控制节点。 -我最喜欢 Ansible 的一个特点是,它是完全无客户端。要管理一个系统,通过 SSH 建立连接,也使用了[Paramiko][2](一个 Python 库)或本地的 [OpenSSH][3]。Ansible 另一个吸引人的地方是它有许多可扩展的模块。这些模块可被系统管理员用于执行一些的相同任务。特别是,它们使用 Ansible 这个强有力的工具可以安装和配置任何程序在多个服务器上,环境或操作系统,只需要一个控制节点。 +在本教程中,我将带你使用 Ansible 完成安装和部署开源 [NGINX][4] 和我们的商业产品 [NGINX Plus][5]。我将在 [CentOS][6] 服务器上演示,但我也在下面的“在 Ubuntu 上创建 Ansible Playbook 来安装 NGINX 和 NGINX Plus”小节中包含了在 Ubuntu 服务器上部署的细节。 -在本教程中,我将带你使用 Ansible 完成安装和部署开源[NGINX][4] 和 [NGINX Plus][5],我们的商业产品。我将在 [CentOS][6] 服务器上演示,但我也写了一个详细的教程关于在 Ubuntu 服务器上部署[在 Ubuntu 上创建一个 Ansible Playbook 来安装 NGINX 和 NGINX Plus][7] 。 - -在本教程中我将使用 Ansible 1.9.2 版本的,并在 CentOS 7.1 服务器上部署运行。 +在本教程中我将使用 Ansible 1.9.2 版本,并在 CentOS 7.1 服务器上部署运行。 $ ansible --version ansible 1.9.2 @@ -20,14 +18,13 @@ nstalling NGINX and NGINX Plus With Ansible 如果你使用的是 CentOS,安装 Ansible 十分简单,只要输入以下命令。如果你想使用源码编译安装或使用其他发行版,请参阅上面 Ansible 链接中的说明。 - $ sudo yum install -y epel-release && sudo yum install -y ansible -根据环境的不同,在本教程中的命令有的可能需要 sudo 权限。文件路径,用户名,目标服务器的值取决于你的环境中。 +根据环境的不同,在本教程中的命令有的可能需要 sudo 权限。文件路径,用户名和目标服务器取决于你的环境的情况。 ### 创建一个 Ansible Playbook 来安装 NGINX (CentOS) ### -首先,我们为 NGINX 的部署创建一个工作目录,以及子目录和部署配置文件目录。我通常建议在主目录中创建目录,在文章的所有例子中都会有说明。 +首先,我们要为 NGINX 的部署创建一个工作目录,包括子目录和部署配置文件。我通常建议在你的主目录中创建该目录,在文章的所有例子中都会有说明。 $ cd $HOME $ mkdir -p ansible-nginx/tasks/ @@ -54,11 
+51,11 @@ nstalling NGINX and NGINX Plus With Ansible $ vim $HOME/ansible-nginx/deploy.yml -**deploy.yml** 文件是 Ansible 部署的主要文件,[ 在使用 Ansible 部署 NGINX][9] 时,我们将运行 ansible‑playbook 命令执行此文件。在这个文件中,我们指定运行时 Ansible 使用的库以及其它配置文件。 +**deploy.yml** 文件是 Ansible 部署的主要文件,在“使用 Ansible 部署 NGINX”小节中,我们运行 ansible‑playbook 命令时会使用此文件。在这个文件中,我们指定 Ansible 运行时使用的库以及其它配置文件。 -在这个例子中,我使用 [include][10] 模块来指定配置文件一步一步来安装NGINX。虽然可以创建一个非常大的 playbook 文件,我建议你将其分割为小文件,以保证其可靠性。示例中的包括复制静态内容,复制配置文件,为更高级的部署使用逻辑配置设定变量。 +在这个例子中,我使用 [include][10] 模块来指定配置文件一步一步来安装NGINX。虽然可以创建一个非常大的 playbook 文件,我建议你将其分割为小文件,让它们更有条理。include 的示例中可以复制静态内容,复制配置文件,为更高级的部署使用逻辑配置设定变量。 -在文件中输入以下行。包括顶部参考注释中的文件名。 +在文件中输入以下行。我在顶部的注释包含了文件名用于参考。 # ./ansible-nginx/deploy.yml @@ -66,21 +63,21 @@ nstalling NGINX and NGINX Plus With Ansible tasks: - include: 'tasks/install_nginx.yml' - hosts 语句说明 Ansible 部署 **nginx** 组的所有服务器,服务器在 **/etc/ansible/hosts** 中指定。我们将编辑此文件来 [创建 NGINX 服务器的列表][11]。 +hosts 语句说明 Ansible 部署 **nginx** 组的所有服务器,服务器在 **/etc/ansible/hosts** 中指定。我们会在下面的“创建 NGINX 服务器列表”小节编辑此文件。 -include 语句说明 Ansible 在部署过程中从 **tasks** 目录下读取并执行 **install_nginx.yml** 文件中的内容。该文件包括以下几步:下载,安装,并启动 NGINX。我们将创建此文件在下一节。 +include 语句说明 Ansible 在部署过程中从 **tasks** 目录下读取并执行 **install\_nginx.yml** 文件中的内容。该文件包括以下几步:下载,安装,并启动 NGINX。我们将在下一节创建此文件。 #### 为 NGINX 创建部署文件 #### -现在,先保存 **deploy.yml** 文件,并在编辑器中打开 **install_nginx.yml** 。 +现在,先保存 **deploy.yml** 文件,并在编辑器中打开 **install\_nginx.yml** 。 $ vim $HOME/ansible-nginx/tasks/install_nginx.yml -该文件包含的说明有 - 以 [YAML][12] 格式写入 - 使用 Ansible 安装和配置 NGINX。每个部分(步骤中的过程)起始于一个 name 声明(前面连字符)描述此步骤。下面的 name 字符串:是 Ansible 部署过程中写到标准输出的,可以根据你的意愿来改变。YAML 文件中的下一个部分是在部署过程中将使用的模块。在下面的配置中,[yum][13] 和 [service][14] 模块使将被用。yum 模块用于在 CentOS 上安装软件包。service 模块用于管理 UNIX 的服务。在这部分的最后一行或几行指定了几个模块的参数(在本例中,这些行以 name 和 state 开始)。 +该文件包含有指令(使用 [YAML][12] 格式写的), Ansible 会按照指令安装和配置我们的 NGINX 部署过程。每个节(过程中的步骤)起始于一个描述此步骤的 `name` 语句(前面有连字符)。 `name` 后的字符串是 Ansible 部署过程中输出到标准输出的,可以根据你的意愿来修改。YAML 文件中的节的下一行是在部署过程中将使用的模块。在下面的配置中,使用了 [`yum`][13] 和 [`service`][14] 
模块。`yum` 模块用于在 CentOS 上安装软件包。`service` 模块用于管理 UNIX 的服务。在这个节的最后一行或几行指定了几个模块的参数(在本例中,这些行以 `name` 和 `state` 开始)。 -在文件中输入以下行。对于 **deploy.yml**,在我们文件的第一行是关于文件名的注释。第一部分说明 Ansible 从 NGINX 仓库安装 **.rpm** 文件在CentOS 7 上。这说明软件包管理器直接从 NGINX 仓库安装最新最稳定的版本。需要在你的 CentOS 版本上修改路径。可使用包的列表可以在 [开源 NGINX 网站][15] 上找到。接下来的两节说明 Ansible 使用 yum 模块安装最新的 NGINX 版本,然后使用 service 模块启动 NGINX。 +在文件中输入以下行。就像 **deploy.yml**,在我们文件的第一行是用于参考的文件名的注释。第一个节告诉 Ansible 在CentOS 7 上从 NGINX 仓库安装该 **.rpm** 文件。这让软件包管理器直接从 NGINX 仓库安装最新最稳定的版本。根据你的 CentOS 版本修改路径。所有可用的包的列表可以在 [开源 NGINX 网站][15] 上找到。接下来的两节告诉 Ansible 使用 `yum` 模块安装最新的 NGINX 版本,然后使用 `service` 模块启动 NGINX。 -**注意:** 在第一部分中,CentOS 包中的路径名是连着的两行。在一行上输入其完整路径。 +**注意:** 在第一个节中,CentOS 包中的路径名可能由于宽度显示为连着的两行。请在一行上输入其完整路径。 # ./ansible-nginx/tasks/install_nginx.yml @@ -100,12 +97,12 @@ include 语句说明 Ansible 在部署过程中从 **tasks** 目录下读取并 #### 创建 NGINX 服务器列表 #### -现在,我们有 Ansible 部署所有配置的文件,我们需要告诉 Ansible 部署哪个服务器。我们需要在 Ansible 中指定 **hosts** 文件。先备份现有的文件,并新建一个新文件来部署。 +现在,我们设置好了 Ansible 部署的所有配置文件,我们需要告诉 Ansible 部署哪个服务器。我们需要在 Ansible 中指定 **hosts** 文件。先备份现有的文件,并新建一个新文件来部署。 $ sudo mv /etc/ansible/hosts /etc/ansible/hosts.backup $ sudo vim /etc/ansible/hosts -在文件中输入以下行来创建一个名为 **nginx** 的组并列出安装 NGINX 的服务器。你可以指定服务器通过主机名,IP 地址,或者在一个区域,例如 **server[1-3].domain.com**。在这里,我指定一台服务器通过 IP 地址。 +在文件中输入(或编辑)以下行来创建一个名为 **nginx** 的组并列出安装 NGINX 的服务器。你可以通过主机名、IP 地址、或者在一个范围,例如 **server[1-3].domain.com** 来指定服务器。在这里,我通过 IP 地址指定一台服务器。 # /etc/ansible/hosts @@ -114,20 +111,20 @@ include 语句说明 Ansible 在部署过程中从 **tasks** 目录下读取并 #### 设置安全性 #### -在部署之前,我们需要确保 Ansible 已通过 SSH 授权能访问我们的目标服务器。 +接近完成了,但在部署之前,我们需要确保 Ansible 已被授权通过 SSH 访问我们的目标服务器。 -首选并且最安全的方法是添加 Ansible 所要部署服务器的 RSA SSH 密钥到目标服务器的 **authorized_keys** 文件中,这给 Ansible 在目标服务器上的 SSH 权限不受限制。要了解更多关于此配置,请参阅 [安全的 OpenSSH][16] 在 wiki.centos.org。这样,你就可以自动部署而无需用户交互。 +首选并且最安全的方法是添加 Ansible 所要部署服务器的 RSA SSH 密钥到目标服务器的 **authorized\_keys** 文件中,这给予 Ansible 在目标服务器上的不受限制 SSH 权限。要了解更多关于此配置,请参阅 wiki.centos.org 上 [安全加固 OpenSSH][16]。这样,你就可以自动部署而无需用户交互。 
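作为补充,下面给出一个把公钥“幂等”追加到 authorized_keys 的最小 shell 示意(与上文的 RSA SSH 密钥方式对应)。注意:其中的文件路径 `/tmp/demo_authorized_keys` 与密钥内容都是演示用的占位值,实际操作时应替换为目标服务器上的 `~/.ssh/authorized_keys` 和你自己的公钥:

```shell
# 演示:幂等地追加一行公钥(文件路径与密钥内容均为占位,仅作演示)
keys_file=/tmp/demo_authorized_keys
pubkey='ssh-rsa AAAAB3-example-placeholder ansible@control'

touch "$keys_file"
# 只有当这一行还不存在时才追加;重复执行不会产生重复条目
grep -qxF "$pubkey" "$keys_file" || printf '%s\n' "$pubkey" >> "$keys_file"
grep -qxF "$pubkey" "$keys_file" || printf '%s\n' "$pubkey" >> "$keys_file"

grep -c 'ansible@control' "$keys_file"   # 输出 1,说明没有重复追加
```

这样即使把密钥分发步骤放进脚本里反复运行,也不会把 authorized_keys 写乱。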
-另外,你也可以在部署过程中需要输入密码。我强烈建议你只在测试过程中使用这种方法,因为它是不安全的,没有办法判断目标主机的身份。如果你想这样做,将每个目标主机 **/etc/ssh/ssh_config** 文件中 StrictHostKeyChecking 的默认值 yes 改为 no。然后在 ansible-playbook 命令中添加 --ask-pass参数来表示 Ansible 会提示输入 SSH 密码。 +另外,你也可以在部署过程中要求输入密码。我强烈建议你只在测试过程中使用这种方法,因为它是不安全的,没有办法跟踪目标主机的身份(fingerprint)变化。如果你想这样做,将每个目标主机 **/etc/ssh/ssh\_config** 文件中 StrictHostKeyChecking 的默认值 yes 改为 no。然后在 ansible-playbook 命令中添加 --ask-pass 参数来让 Ansible 提示输入 SSH 密码。 -在这里,我将举例说明如何编辑 **ssh_config** 文件来禁用在目标服务器上严格的主机密钥检查。我们手动 SSH 到我们将部署 NGINX 的服务器并将StrictHostKeyChecking 的值更改为 no。 +在这里,我将举例说明如何编辑 **ssh\_config** 文件来禁用在目标服务器上严格的主机密钥检查。我们手动连接 SSH 到我们将部署 NGINX 的服务器,并将 StrictHostKeyChecking 的值更改为 no。 $ ssh kjones@172.16.239.140 kjones@172.16.239.140's password:*********** [kjones@nginx ]$ sudo vim /etc/ssh/ssh_config -当你更改后,保存 **ssh_config**,并通过 SSH 连接到你的 Ansible 服务器。保存后的设置应该如下图所示。 +当你更改后,保存 **ssh\_config**,并通过 SSH 连接到你的 Ansible 服务器。保存后的设置应该如下所示。 # /etc/ssh/ssh_config @@ -135,7 +132,7 @@ include 语句说明 Ansible 在部署过程中从 **tasks** 目录下读取并 #### 运行 Ansible 部署 NGINX #### -如果你一直照本教程的步骤来做,你可以运行下面的命令来使用 Ansible 部署NGINX。(同样,如果你设置了 RSA SSH 密钥认证,那么--ask-pass 参数是不需要的。)在 Ansible 服务器运行命令,并使用我们上面创建的配置文件。 +如果你一直照本教程的步骤来做,你可以运行下面的命令来使用 Ansible 部署 NGINX。(再次提示,如果你设置了 RSA SSH 密钥认证,那么 --ask-pass 参数是不需要的。)在 Ansible 服务器运行命令,并使用我们上面创建的配置文件。 $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx/deploy.yml @@ -163,7 +160,7 @@ Ansible 提示输入 SSH 密码,输出如下。recap 中显示 failed=0 这条 如果你没有得到一个成功的 play recap,你可以尝试用 -vvvv 参数(带连接调试的详细信息)再次运行 ansible-playbook 命令来解决部署过程的问题。 -当部署成功(因为我们不是第一次部署)后,你可以验证 NGINX 在远程服务器上运行基本的 [cURL][17] 命令。在这里,它会返回 200 OK。Yes!我们使用Ansible 成功安装了 NGINX。 +当部署成功(假如我们是第一次部署)后,你可以在远程服务器上运行基本的 [cURL][17] 命令验证 NGINX 。在这里,它会返回 200 OK。Yes!我们使用 Ansible 成功安装了 NGINX。 $ curl -Is 172.16.239.140 | grep HTTP HTTP/1.1 200 OK @@ -174,11 +171,11 @@ Ansible 提示输入 SSH 密码,输出如下。recap 中显示 failed=0 这条 #### 复制 NGINX Plus 上的证书和密钥到 Ansible 服务器 #### -使用 Ansible 安装和配置 NGINX Plus 时,首先我们需要将 [NGINX Plus Customer Portal][18] 的密钥和证书复制到部署 Ansible 服务器上的标准位置。 +使用 
Ansible 安装和配置 NGINX Plus 时,首先我们需要将 [NGINX Plus Customer Portal][18] NGINX Plus 订阅的密钥和证书复制到 Ansible 部署服务器上的标准位置。 -购买了 NGINX Plus 或正在试用的客户也可以访问 NGINX Plus Customer Portal。如果你有兴趣测试 NGINX Plus,你可以申请免费试用30天[点击这里][19]。在你注册后不久你将收到一个试用证书和密钥的链接。 +购买了 NGINX Plus 或正在试用的客户也可以访问 NGINX Plus Customer Portal。如果你有兴趣测试 NGINX Plus,你可以申请免费试用30天,[点击这里][19]。在你注册后不久你将收到一个试用证书和密钥的链接。 -在 Mac 或 Linux 主机上,我在这里演示使用 [scp][20] 工具。在 Microsoft Windows 主机,可以使用 [WinSCP][21]。在本教程中,先下载文件到我的 Mac 笔记本电脑上,然后使用 scp 将其复制到 Ansible 服务器。密钥和证书的位置都在我的家目录下。 +在 Mac 或 Linux 主机上,我在这里使用 [scp][20] 工具演示。在 Microsoft Windows 主机,可以使用 [WinSCP][21]。在本教程中,先下载文件到我的 Mac 笔记本电脑上,然后使用 scp 将其复制到 Ansible 服务器。密钥和证书的位置都在我的家目录下。 $ cd /path/to/nginx-repo-files/ $ scp nginx-repo.* user@destination-server:. @@ -189,7 +186,7 @@ Ansible 提示输入 SSH 密码,输出如下。recap 中显示 failed=0 这条 $ sudo mkdir -p /etc/ssl/nginx/ $ sudo mv nginx-repo.* /etc/ssl/nginx/ -验证你的 **/etc/ssl/nginx** 目录包含证书(**.crt**)和密钥(**.key**)文件。你可以使用 tree 命令检查。 +验证你的 **/etc/ssl/nginx** 目录包含了证书(**.crt**)和密钥(**.key**)文件。你可以使用 tree 命令检查。 $ tree /etc/ssl/nginx /etc/ssl/nginx @@ -204,7 +201,7 @@ Ansible 提示输入 SSH 密码,输出如下。recap 中显示 failed=0 这条 #### 创建 Ansible 目录结构 #### -以下执行的步骤将和开源 NGINX 的非常相似在[创建安装 NGINX 的 Ansible Playbook 中(CentOS)][22]。首先,我们建一个工作目录为部署 NGINX Plus 使用。我喜欢将它创建为我主目录的子目录。 +以下执行的步骤和我们的“创建 Ansible Playbook 来安装 NGINX(CentOS)”小节中部署开源 NGINX 的非常相似。首先,我们建一个工作目录用于部署 NGINX Plus 使用。我喜欢将它创建为我主目录的子目录。 $ cd $HOME $ mkdir -p ansible-nginx-plus/tasks/ @@ -223,11 +220,11 @@ Ansible 提示输入 SSH 密码,输出如下。recap 中显示 failed=0 这条 #### 创建主部署文件 #### -接下来,我们使用 vim 为开源的 NGINX 创建 **deploy.yml** 文件。 +接下来,像开源的 NGINX 一样,我们使用 vim 创建 **deploy.yml** 文件。 $ vim ansible-nginx-plus/deploy.yml -和开源 NGINX 的部署唯一的区别是,我们将包含文件的名称修改为**install_nginx_plus.yml**。该文件告诉 Ansible 在 **nginx** 组中的所有服务器(**/etc/ansible/hosts** 中定义的)上部署 NGINX Plus ,然后在部署过程中从 **tasks** 目录读取并执行 **install_nginx_plus.yml** 的内容。 +和开源 NGINX 的部署唯一的区别是,我们将包含文件的名称修改为 **install\_nginx\_plus.yml**。该文件告诉 Ansible 在 **nginx** 组中的所有服务器(**/etc/ansible/hosts** 
中定义的)上部署 NGINX Plus ,然后在部署过程中从 **tasks** 目录读取并执行 **install\_nginx\_plus.yml** 的内容。 # ./ansible-nginx-plus/deploy.yml @@ -235,22 +232,22 @@ Ansible 提示输入 SSH 密码,输出如下。recap 中显示 failed=0 这条 tasks: - include: 'tasks/install_nginx_plus.yml' -如果你还没有这样做的话,你需要创建 hosts 文件,详细说明在上面的 [创建 NGINX 服务器的列表][23]。 +如果你之前没有安装过的话,你需要创建 hosts 文件,详细说明在上面的“创建 NGINX 服务器的列表”小节。 #### 为 NGINX Plus 创建部署文件 #### -在文本编辑器中打开 **install_nginx_plus.yml**。该文件在部署过程中使用 Ansible 来安装和配置 NGINX Plus。这些命令和模块仅针对 CentOS,有些是 NGINX Plus 独有的。 +在文本编辑器中打开 **install\_nginx\_plus.yml**。该文件包含了使用 Ansible 来安装和配置 NGINX Plus 部署过程中的指令。这些命令和模块仅针对 CentOS,有些是 NGINX Plus 独有的。 $ vim ansible-nginx-plus/tasks/install_nginx_plus.yml -第一部分使用 [文件][24] 模块,告诉 Ansible 使用指定的路径和状态参数为 NGINX Plus 创建特定的 SSL 目录,设置根目录的权限,将权限更改为0700。 +第一节使用 [`file`][24] 模块,告诉 Ansible 使用指定的`path`和`state`参数为 NGINX Plus 创建特定的 SSL 目录,设置属主为 root,将权限 `mode` 更改为0700。 # ./ansible-nginx-plus/tasks/install_nginx_plus.yml - name: NGINX Plus | 创建 NGINX Plus ssl 证书目录 file: path=/etc/ssl/nginx state=directory group=root mode=0700 -接下来的两节使用 [copy][25] 模块从部署 Ansible 的服务器上将 NGINX Plus 的证书和密钥复制到 NGINX Plus 服务器上,再修改权根,将权限设置为0700。 +接下来的两节使用 [copy][25] 模块从 Ansible 部署服务器上将 NGINX Plus 的证书和密钥复制到 NGINX Plus 服务器上,再修改属主为 root,权限 `mode` 为0700。 - name: NGINX Plus | 复制 NGINX Plus repo 证书 copy: src=/etc/ssl/nginx/nginx-repo.crt dest=/etc/ssl/nginx/nginx-repo.crt owner=root group=root mode=0700 @@ -258,17 +255,17 @@ Ansible 提示输入 SSH 密码,输出如下。recap 中显示 failed=0 这条 - name: NGINX Plus | 复制 NGINX Plus 密钥 copy: src=/etc/ssl/nginx/nginx-repo.key dest=/etc/ssl/nginx/nginx-repo.key owner=root group=root mode=0700 -接下来,我们告诉 Ansible 使用 [get_url][26] 模块从 NGINX Plus 仓库下载 CA 证书在 url 参数指定的远程位置,通过 dest 参数把它放在指定的目录,并设置权限为 0700。 +接下来,我们告诉 Ansible 使用 [`get_url`][26] 模块在 url 参数指定的远程位置从 NGINX Plus 仓库下载 CA 证书,通过 `dest` 参数把它放在指定的目录 `dest` ,并设置权限 `mode` 为 0700。 - name: NGINX Plus | 下载 NGINX Plus CA 证书 get_url: url=https://cs.nginx.com/static/files/CA.crt dest=/etc/ssl/nginx/CA.crt mode=0700 -同样,我们告诉 Ansible 使用 get_url 
模块下载 NGINX Plus repo 文件,并将其复制到 **/etc/yum.repos.d** 目录下在 NGINX Plus 服务器上。 +同样,我们告诉 Ansible 使用 `get_url` 模块下载 NGINX Plus repo 文件,并将其复制到 NGINX Plus 服务器上的 **/etc/yum.repos.d** 目录下。 - name: NGINX Plus | 下载 yum NGINX Plus 仓库 get_url: url=https://cs.nginx.com/static/files/nginx-plus-7.repo dest=/etc/yum.repos.d/nginx-plus-7.repo mode=0700 -最后两节的 name 告诉 Ansible 使用 yum 和 service 模块下载并启动 NGINX Plus。 +最后两节的 `name` 告诉 Ansible 使用 `yum` 和 `service` 模块下载并启动 NGINX Plus。 - name: NGINX Plus | 安装 NGINX Plus yum: @@ -282,7 +279,7 @@ Ansible 提示输入 SSH 密码,输出如下。recap 中显示 failed=0 这条 #### 运行 Ansible 来部署 NGINX Plus #### -在保存 **install_nginx_plus.yml** 文件后,然后运行 ansible-playbook 命令来部署 NGINX Plus。同样在这里,我们使用 --ask-pass 参数使用 Ansible 提示输入 SSH 密码并把它传递给每个 NGINX Plus 服务器,指定路径在 **deploy.yml** 文件中。 +在保存 **install\_nginx\_plus.yml** 文件后,运行 ansible-playbook 命令来部署 NGINX Plus。同样在这里,我们使用 --ask-pass 参数使用 Ansible 提示输入 SSH 密码并把它传递给每个 NGINX Plus 服务器,并指定主配置文件路径 **deploy.yml** 文件。 $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx-plus/deploy.yml @@ -315,18 +312,18 @@ Ansible 提示输入 SSH 密码,输出如下。recap 中显示 failed=0 这条 PLAY RECAP ******************************************************************** 172.16.239.140 : ok=8 changed=7 unreachable=0 failed=0 -playbook 的 recap 是成功的。现在,使用 curl 命令来验证 NGINX Plus 是否在运行。太好了,我们得到的是 200 OK!成功了!我们使用 Ansible 成功地安装了 NGINX Plus。 +playbook 的 recap 成功完成。现在,使用 curl 命令来验证 NGINX Plus 是否在运行。太好了,我们得到的是 200 OK!成功了!我们使用 Ansible 成功地安装了 NGINX Plus。 $ curl -Is http://172.16.239.140 | grep HTTP HTTP/1.1 200 OK -### 在 Ubuntu 上创建一个 Ansible Playbook 来安装 NGINX 和 NGINX Plus ### +### 在 Ubuntu 上创建 Ansible Playbook 来安装 NGINX 和 NGINX Plus ### -此过程在 [Ubuntu 服务器][27] 上部署 NGINX 和 NGINX Plus 与 CentOS 很相似,我将一步一步的指导来完成整个部署文件,并指出和 CentOS 的细微差异。 +在 [Ubuntu 服务器][27] 上部署 NGINX 和 NGINX Plus 的过程与 CentOS 很相似,我将一步一步的指导来完成整个部署文件,并指出和 CentOS 的细微差异。 -首先和 CentOS 一样,创建 Ansible 目录结构和主要的 Ansible 部署文件。也创建 **/etc/ansible/hosts** 文件来描述 [创建 NGINX 服务器的列表][28]。对于 NGINX Plus,你也需要复制证书和密钥在此步中 [复制 NGINX Plus 证书和密钥到 Ansible 
服务器][29]。 +首先和 CentOS 一样,创建 Ansible 目录结构和 Ansible 主部署文件。也按“创建 NGINX 服务器的列表”小节的描述创建 **/etc/ansible/hosts** 文件。对于 NGINX Plus,你也需要安装“复制 NGINX Plus 证书和密钥到 Ansible 服务器”小节的描述复制证书和密钥。 -下面是开源 NGINX 的 **install_nginx.yml** 部署文件。在第一部分,我们使用 [apt_key][30] 模块导入 Nginx 的签名密钥。接下来的两节使用[lineinfile][31] 模块来添加 URLs 到 **sources.list** 文件中。最后,我们使用 [apt][32] 模块来更新缓存并安装 NGINX(apt 取代了我们在 CentOS 中部署时的 yum 模块)。 +下面是开源 NGINX 的 **install\_nginx.yml** 部署文件。在第一节,我们使用 [`apt_key`][30] 模块导入 NGINX 的签名密钥。接下来的两节使用 [`lineinfile`][31] 模块来添加 Ubuntu 14.04 的软件包 URL 到 **sources.list** 文件中。最后,我们使用 [`apt`][32] 模块来更新缓存并安装 NGINX(`apt` 取代了我们在 CentOS 中部署时的 `yum` 模块)。 # ./ansible-nginx/tasks/install_nginx.yml @@ -352,7 +349,8 @@ playbook 的 recap 是成功的。现在,使用 curl 命令来验证 NGINX Plu service: name: nginx state: started -下面是 NGINX Plus 的部署文件 **install_nginx.yml**。前四节设置了 NGINX Plus 密钥和证书。然后,我们用 apt_key 模块为开源的 NGINX 导入签名密钥,get_url 模块为 NGINX Plus 下载 apt 配置文件。[shell][33] 模块使用 printf 命令写下输出到 **nginx-plus.list** 文件中在**sources.list.d** 目录。最终的 name 模块是为开源 NGINX 的。 + +下面是 NGINX Plus 的部署文件 **install\_nginx.yml**。前四节设置了 NGINX Plus 密钥和证书。然后,我们像开源的 NGINX 一样用 `apt_key` 模块导入签名密钥,`get_url` 模块为 NGINX Plus 下载 `apt` 配置文件。[`shell`][33] 模块使用 `printf` 命令写下输出到 **sources.list.d** 目录中的 **nginx-plus.list** 文件。最终的 `name` 模块和开源 NGINX 一样。 # ./ansible-nginx-plus/tasks/install_nginx_plus.yml @@ -395,13 +393,12 @@ playbook 的 recap 是成功的。现在,使用 curl 命令来验证 NGINX Plu $ sudo ansible-playbook --ask-pass $HOME/ansible-nginx-plus/deploy.yml -你应该得到一个成功的 play recap。如果你没有成功,你可以使用 verbose 参数,以帮助你解决在 [运行 Ansible 来部署 NGINX][34] 中出现的问题。 +你应该得到一个成功的 play recap。如果你没有成功,你可以使用冗余参数,以帮助你解决出现的问题。 ### 小结 ### -我在这个教程中演示是什么是 Ansible,可以做些什么来帮助你自动部署 NGINX 或 NGINX Plus,这仅仅是个开始。还有许多有用的模块,用户账号管理,自定义配置模板等。如果你有兴趣了解更多关于这些,请访问 [Ansible 官方文档][35]。 +我在这个教程中演示是什么是 Ansible,可以做些什么来帮助你自动部署 NGINX 或 NGINX Plus,这仅仅是个开始。还有许多有用的模块,包括从用户账号管理到自定义配置模板等。如果你有兴趣了解关于这些的更多信息,请访问 [Ansible 官方文档][35]。 -要了解更多关于 Ansible,来听我讲用 Ansible 部署 NGINX Plus 在[NGINX.conf 2015][36],9月22-24日在旧金山。 
-------------------------------------------------------------------------------- @@ -409,7 +406,7 @@ via: https://www.nginx.com/blog/installing-nginx-nginx-plus-ansible/ 作者:[Kevin Jones][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 38aeb0a1ff2f2137bc3424bcd21536555c65a50d Mon Sep 17 00:00:00 2001 From: joeren Date: Fri, 9 Oct 2015 08:16:14 +0800 Subject: [PATCH 654/697] Update 20151007 Productivity Tools And Tips For Linux.md --- sources/tech/20151007 Productivity Tools And Tips For Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151007 Productivity Tools And Tips For Linux.md b/sources/tech/20151007 Productivity Tools And Tips For Linux.md index 7aa53de511..2434669710 100644 --- a/sources/tech/20151007 Productivity Tools And Tips For Linux.md +++ b/sources/tech/20151007 Productivity Tools And Tips For Linux.md @@ -1,3 +1,4 @@ +Translating by GOLinux! 
Productivity Tools And Tips For Linux ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Productivity-Tips-Linux.jpg) @@ -75,4 +76,4 @@ via: http://itsfoss.com/productivity-tips-ubuntu/ [12]:https://esite.ch/tag/diodon/ [13]:http://itsfoss.com/7-best-indicator-applets-for-ubuntu-13-10/ [14]:http://itsfoss.com/change-sudo-password-timeout-ubuntu/ -[15]:http://itsfoss.com/notification-terminal-command-completion-ubuntu/ \ No newline at end of file +[15]:http://itsfoss.com/notification-terminal-command-completion-ubuntu/ From 3acfc713642d73f5fd884cc228eaea9c87ac67ec Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 9 Oct 2015 10:57:52 +0800 Subject: [PATCH 655/697] =?UTF-8?q?20151009-1=20=E9=80=89=E9=A2=98=20RAID?= =?UTF-8?q?=20=E7=AC=AC=E5=85=AB=E7=AF=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Data and Rebuild Failed Software RAID's.md | 167 ++++++++++++++++++ 1 file changed, 167 insertions(+) create mode 100644 sources/tech/RAID/Part 8 - How to Recover Data and Rebuild Failed Software RAID's.md diff --git a/sources/tech/RAID/Part 8 - How to Recover Data and Rebuild Failed Software RAID's.md b/sources/tech/RAID/Part 8 - How to Recover Data and Rebuild Failed Software RAID's.md new file mode 100644 index 0000000000..2acf1dfd86 --- /dev/null +++ b/sources/tech/RAID/Part 8 - How to Recover Data and Rebuild Failed Software RAID's.md @@ -0,0 +1,167 @@ +How to Recover Data and Rebuild Failed Software RAID’s – Part 8 +================================================================================ +In the previous articles of this [RAID series][1] you went from zero to RAID hero. We reviewed several software RAID configurations and explained the essentials of each one, along with the reasons why you would lean towards one or the other depending on your specific scenario. 
+ +![Recover Rebuild Failed Software RAID's](http://www.tecmint.com/wp-content/uploads/2015/10/Recover-Rebuild-Failed-Software-RAID.png) + +Recover Rebuild Failed Software RAID’s – Part 8 + +In this guide we will discuss how to rebuild a software RAID array without data loss when in the event of a disk failure. For brevity, we will only consider a RAID 1 setup – but the concepts and commands apply to all cases alike. + +#### RAID Testing Scenario #### + +Before proceeding further, please make sure you have set up a RAID 1 array following the instructions provided in Part 3 of this series: [How to set up RAID 1 (Mirror) in Linux][2]. + +The only variations in our present case will be: + +1) a different version of CentOS (v7) than the one used in that article (v6.5), and + +2) different disk sizes for /dev/sdb and /dev/sdc (8 GB each). + +In addition, if SELinux is enabled in enforcing mode, you will need to add the corresponding labels to the directory where you’ll mount the RAID device. Otherwise, you’ll run into this warning message while attempting to mount it: + +![SELinux RAID Mount Error](http://www.tecmint.com/wp-content/uploads/2015/10/SELinux-RAID-Mount-Error.png) + +SELinux RAID Mount Error + +You can fix this by running: + + # restorecon -R /mnt/raid1 + +### Setting up RAID Monitoring ### + +There is a variety of reasons why a storage device can fail (SSDs have greatly reduced the chances of this happening, though), but regardless of the cause you can be sure that issues can occur anytime and you need to be prepared to replace the failed part and to ensure the availability and integrity of your data. + +A word of advice first. Even when you can inspect /proc/mdstat in order to check the status of your RAIDs, there’s a better and time-saving method that consists of running mdadm in monitor + scan mode, which will send alerts via email to a predefined recipient. 
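For reference, scripted inspection of /proc/mdstat is also straightforward. The sketch below is fed with sample text standing in for the real file (the device names are illustrative, since no live array is assumed here); a failed member shows up with an (F) flag next to its name:

```shell
# Sketch: count failed members in mdstat-style output.
# The variable below holds sample data standing in for /proc/mdstat.
mdstat_sample='md0 : active raid1 sdc1[1](F) sdb1[0]
      8382400 blocks super 1.2 [2/1] [U_]'

failed=$(printf '%s\n' "$mdstat_sample" | grep -c '(F)')
echo "failed members: $failed"   # prints: failed members: 1
```

On a real system you would read /proc/mdstat itself instead of the sample variable, but as noted above, the monitor + scan email alerts remain the more reliable option.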
+ +To set this up, add the following line in /etc/mdadm.conf: + + MAILADDR user@ + +In my case: + + MAILADDR gacanepa@localhost + +![RAID Monitoring Email Alerts](http://www.tecmint.com/wp-content/uploads/2015/10/RAID-Monitoring-Email-Alerts.png) + +RAID Monitoring Email Alerts + +To run mdadm in monitor + scan mode, add the following crontab entry as root: + + @reboot /sbin/mdadm --monitor --scan --oneshot + +By default, mdadm will check the RAID arrays every 60 seconds and send an alert if it finds an issue. You can modify this behavior by adding the `--delay` option to the crontab entry above along with the amount of seconds (for example, `--delay` 1800 means 30 minutes). + +Finally, make sure you have a Mail User Agent (MUA) installed, such as [mutt or mailx][3]. Otherwise, you will not receive any alerts. + +In a minute we will see what an alert sent by mdadm looks like. + +### Simulating and Replacing a failed RAID Storage Device ### + +To simulate an issue with one of the storage devices in the RAID array, we will use the `--manage` and `--set-faulty` options as follows: + + # mdadm --manage --set-faulty /dev/md0 /dev/sdc1 + +This will result in /dev/sdc1 being marked as faulty, as we can see in /proc/mdstat: + +![Stimulate Issue with RAID Storage](http://www.tecmint.com/wp-content/uploads/2015/10/Stimulate-Issue-with-RAID-Storage.png) + +Stimulate Issue with RAID Storage + +More importantly, let’s see if we received an email alert with the same warning: + +![Email Alert on Failed RAID Device](http://www.tecmint.com/wp-content/uploads/2015/10/Email-Alert-on-Failed-RAID-Device.png) + +Email Alert on Failed RAID Device + +In this case, you will need to remove the device from the software RAID array: + + # mdadm /dev/md0 --remove /dev/sdc1 + +Then you can physically remove it from the machine and replace it with a spare part (/dev/sdd, where a partition of type fd has been previously created): + + # mdadm --manage /dev/md0 --add /dev/sdd1 + +Luckily for us, the 
system will automatically start rebuilding the array with the part that we just added. We can test this by marking /dev/sdb1 as faulty, removing it from the array, and making sure that the file tecmint.txt is still accessible at /mnt/raid1: + + # mdadm --detail /dev/md0 + # mount | grep raid1 + # ls -l /mnt/raid1 | grep tecmint + # cat /mnt/raid1/tecmint.txt + +![Confirm Rebuilding RAID Array](http://www.tecmint.com/wp-content/uploads/2015/10/Rebuilding-RAID-Array.png) + +Confirm Rebuilding RAID Array + +The image above clearly shows that after adding /dev/sdd1 to the array as a replacement for /dev/sdc1, the rebuilding of data was automatically performed by the system without intervention on our part. + +Though not strictly required, it’s a great idea to have a spare device in handy so that the process of replacing the faulty device with a good drive can be done in a snap. To do that, let’s re-add /dev/sdb1 and /dev/sdc1: + + # mdadm --manage /dev/md0 --add /dev/sdb1 + # mdadm --manage /dev/md0 --add /dev/sdc1 + +![Replace Failed Raid Device](http://www.tecmint.com/wp-content/uploads/2015/10/Replace-Failed-Raid-Device.png) + +Replace Failed Raid Device + +### Recovering from a Redundancy Loss ### + +As explained earlier, mdadm will automatically rebuild the data when one disk fails. But what happens if 2 disks in the array fail? Let’s simulate such scenario by marking /dev/sdb1 and /dev/sdd1 as faulty: + + # umount /mnt/raid1 + # mdadm --manage --set-faulty /dev/md0 /dev/sdb1 + # mdadm --stop /dev/md0 + # mdadm --manage --set-faulty /dev/md0 /dev/sdd1 + +Attempts to re-create the array the same way it was created at this time (or using the `--assume-clean` option) may result in data loss, so it should be left as a last resort. 
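Before going down that road, it can help to quantify the damage first. The sketch below (again fed with sample text rather than a live array) derives the missing-member count from the "[total/active]" field that mdstat reports for each array:

```shell
# Sketch: derive the number of missing members from the "[total/active]"
# field of an mdstat-style status line (sample data, not a live array).
status_line='8382400 blocks super 1.2 [2/0] [__]'
missing=$(printf '%s\n' "$status_line" | awk '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /^\[[0-9]+\/[0-9]+\]$/) {
            # split "[2/0]" on the characters "[", "]" and "/"
            split($i, a, "[][/]")
            print a[2] - a[3]
        }
}')
echo "missing members: $missing"   # prints: missing members: 2
```

A result of 2 on a two-disk RAID 1, as in this scenario, confirms that redundancy is fully lost and data recovery (rather than a simple re-add) is required.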
+ +Let’s try to recover the data from /dev/sdb1, for example, into a similar disk partition (/dev/sde1 – note that this requires that you create a partition of type fd in /dev/sde before proceeding) using ddrescue: + + # ddrescue -r 2 /dev/sdb1 /dev/sde1 + +![Recovering Raid Array](http://www.tecmint.com/wp-content/uploads/2015/10/Recovering-Raid-Array.png) + +Recovering Raid Array + +Please note that up to this point, we haven’t touched /dev/sdb or /dev/sdd, the partitions that were part of the RAID array. + +Now let’s rebuild the array using /dev/sde1 and /dev/sdf1: + + # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[e-f]1 + +Please note that in a real situation, you will typically use the same device names as with the original array, that is, /dev/sdb1 and /dev/sdc1 after the failed disks have been replaced with new ones. + +In this article I have chosen to use extra devices to re-create the array with brand new disks and to avoid confusion with the original failed drives. + +When asked whether to continue writing array, type Y and press Enter. The array should be started and you should be able to watch its progress with: + + # watch -n 1 cat /proc/mdstat + +When the process completes, you should be able to access the content of your RAID: + +![Confirm Raid Content](http://www.tecmint.com/wp-content/uploads/2015/10/Raid-Content.png) + +Confirm Raid Content + +### Summary ### + +In this article we have reviewed how to recover from RAID failures and redundancy losses. However, you need to remember that this technology is a storage solution and DOES NOT replace backups. + +The principles explained in this guide apply to all RAID setups alike, as well as the concepts that we will cover in the next and final guide of this series (RAID management). + +If you have any questions about this article, feel free to drop us a note using the comment form below. We look forward to hearing from you! 
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/recover-data-and-rebuild-failed-software-raid/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[2]:http://www.tecmint.com/create-raid1-in-linux/ +[3]:http://www.tecmint.com/send-mail-from-command-line-using-mutt-command/ \ No newline at end of file From a67a6b1dfaeaa36ce66bbe90b8e6c9a8be6928a2 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 9 Oct 2015 11:41:18 +0800 Subject: [PATCH 656/697] Update Part 8 - How to Recover Data and Rebuild Failed Software RAID's.md --- ...- How to Recover Data and Rebuild Failed Software RAID's.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/RAID/Part 8 - How to Recover Data and Rebuild Failed Software RAID's.md b/sources/tech/RAID/Part 8 - How to Recover Data and Rebuild Failed Software RAID's.md index 2acf1dfd86..57f93d50b2 100644 --- a/sources/tech/RAID/Part 8 - How to Recover Data and Rebuild Failed Software RAID's.md +++ b/sources/tech/RAID/Part 8 - How to Recover Data and Rebuild Failed Software RAID's.md @@ -1,3 +1,4 @@ +translation by strugglingyouth How to Recover Data and Rebuild Failed Software RAID’s – Part 8 ================================================================================ In the previous articles of this [RAID series][1] you went from zero to RAID hero. We reviewed several software RAID configurations and explained the essentials of each one, along with the reasons why you would lean towards one or the other depending on your specific scenario. 
@@ -164,4 +165,4 @@ via: http://www.tecmint.com/recover-data-and-rebuild-failed-software-raid/ [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ [2]:http://www.tecmint.com/create-raid1-in-linux/ -[3]:http://www.tecmint.com/send-mail-from-command-line-using-mutt-command/ \ No newline at end of file +[3]:http://www.tecmint.com/send-mail-from-command-line-using-mutt-command/ From 31d56aa19639e919cf432954ee8db69b98f484bb Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 9 Oct 2015 14:13:35 +0800 Subject: [PATCH 657/697] PUB:20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @osk874 这篇翻译质量欠佳,还有漏译。 --- ...Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md | 102 ++++++++++++++++++ ...Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md | 102 ------------------ 2 files changed, 102 insertions(+), 102 deletions(-) create mode 100644 published/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md delete mode 100644 translated/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md diff --git a/published/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/published/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md new file mode 100644 index 0000000000..b96571c9fc --- /dev/null +++ b/published/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md @@ -0,0 +1,102 @@ +在 Ubuntu 14.04/15.04 上配置 Node JS v4.0.0 +================================================================================ +大家好,Node.JS 4.0 发布了,这个流行的服务器端 JS 平台合并了 Node.js 和 io.js 的代码,4.0 版就是这两个项目结合的产物——现在合并为一个代码库。这次最主要的变化是 Node.js 封装了4.5 版本的 Google V8 JS 引擎,与当前的 Chrome 所带的一致。所以,紧跟 V8 的发布可以让 Node.js 运行的更快、更安全,同时更好的利用 ES6 的很多语言特性。 + +![Node JS](http://blog.linoxide.com/wp-content/uploads/2015/09/nodejs.png) + +Node.js 4.0 发布的主要目标是为 io.js 用户提供一个简单的升级途径,所以这次并没有太多重要的 API 变更。下面的内容让我们来看看如何轻松的在 ubuntu server 上安装、配置 Node.js。 + 
+### 基础系统安装 ### + +Node 在 Linux,Macintosh,Solaris 这几个系统上都可以完美的运行,linux 的发行版本当中使用 Ubuntu 相当适合。这也是我们为什么要尝试在 ubuntu 15.04 上安装 Node.js,当然了在 14.04 上也可以使用相同的步骤安装。 + +#### 1) 系统资源 #### + +Node.js 所需的基本的系统资源取决于你的架构需要。本教程我们会在一台 1GB 内存、 1GHz 处理器和 10GB 磁盘空间的服务器上进行,最小安装即可,不需要安装 Web 服务器或数据库服务器。 + +#### 2) 系统更新 #### + +在我们安装 Node.js 之前,推荐你将系统更新到最新的补丁和升级包,所以请登录到系统中使用超级用户运行如下命令: + + # apt-get update + +#### 3) 安装依赖 #### + +Node.js 仅需要你的服务器上有一些基本系统和软件功能,比如 'make'、'gcc'和'wget' 之类的。如果你还没有安装它们,运行如下命令安装: + + # apt-get install python gcc make g++ wget + +### 下载最新版的Node JS v4.0.0 ### + +访问链接 [Node JS Download Page][1] 下载源代码. + +![nodejs download](http://blog.linoxide.com/wp-content/uploads/2015/09/download.png) + +复制其中的最新的源代码的链接,然后用`wget` 下载,命令如下: + + # wget https://nodejs.org/download/rc/v4.0.0-rc.1/node-v4.0.0-rc.1.tar.gz + +下载完成后使用命令`tar` 解压缩: + + # tar -zxvf node-v4.0.0-rc.1.tar.gz + +![wget nodejs](http://blog.linoxide.com/wp-content/uploads/2015/09/wget.png) + +### 安装 Node JS v4.0.0 ### + +现在可以开始使用下载好的源代码编译 Node.js。在开始编译前,你需要在 ubuntu server 上切换到源代码解压缩后的目录,运行 configure 脚本来配置源代码。 + + root@ubuntu-15:~/node-v4.0.0-rc.1# ./configure + +![Installing NodeJS](http://blog.linoxide.com/wp-content/uploads/2015/09/configure.png) + +现在运行命令 'make install' 编译安装 Node.js: + + root@ubuntu-15:~/node-v4.0.0-rc.1# make install + +make 命令会花费几分钟完成编译,安静的等待一会。 + +### 验证 Node.js 安装 ### + +一旦编译任务完成,我们就可以开始验证安装工作是否 OK。我们运行下列命令来确认 Node.js 的版本。 + + root@ubuntu-15:~# node -v + v4.0.0-pre + +在命令行下不带参数的运行`node` 就会进入 REPL(Read-Eval-Print-Loop,读-执行-输出-循环)模式,它有一个简化版的emacs 行编辑器,通过它你可以交互式的运行JS和查看运行结果。 + +![node version](http://blog.linoxide.com/wp-content/uploads/2015/09/node.png) + +### 编写测试程序 ### + +我们也可以写一个很简单的终端程序来测试安装是否成功,并且工作正常。要做这个,我们将会创建一个“test.js” 文件,包含以下代码,操作如下: + + root@ubuntu-15:~# vim test.js + var util = require("util"); + console.log("Hello! This is a Node Test Program"); + :wq! 
+ +现在为了运行上面的程序,在命令行运行下面的命令。 + + root@ubuntu-15:~# node test.js + +![Node Program](http://blog.linoxide.com/wp-content/uploads/2015/09/node-test.png) + +在一个成功安装了 Node JS 的环境下运行上面的程序就会在屏幕上得到上图所示的输出,这个程序加载类 “util” 到变量 “util” 中,接着用对象 “util” 运行终端任务,console.log 这个命令作用类似 C++ 里的cout + +### 结论 ### + +就是这些了。如果你刚刚开始使用 Node.js 开发应用程序,希望本文能够通过在 ubuntu 上安装、运行 Node.js 让你了解一下Node.js 的大概。最后,我们可以认为我们可以期待 Node JS v4.0.0 能够取得显著性能提升。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/ubuntu-how-to/setup-node-js-4-0-ubuntu-14-04-15-04/ + +作者:[Kashif Siddique][a] +译者:[osk874](https://github.com/osk874) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/kashifs/ +[1]:https://nodejs.org/download/rc/v4.0.0-rc.1/ diff --git a/translated/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/translated/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md deleted file mode 100644 index 453ab2c234..0000000000 --- a/translated/tech/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md +++ /dev/null @@ -1,102 +0,0 @@ - -在ubunt 14.04/15.04 上配置Node JS v4.0.0 -================================================================================ -大家好,Node.JS 4.0 发布了,主流的服务器端JS 平台已经将Node.js 和io.js 结合到一起。4.0 版就是两者结合的产物——共用一个代码库。这次最主要的变化是Node.js 封装了Google V8 4.5 JS 引擎,而这一版与当前的Chrome 一致。所以,紧跟V8 的版本号可以让Node.js 运行的更快、更安全,同时更好的利用ES6 的很多语言特性。 - -![Node JS](http://blog.linoxide.com/wp-content/uploads/2015/09/nodejs.png) - -Node.js 4.0 的目标是为io.js 当前用户提供一个简单的升级途径,所以这次并没有太多重要的API 变更。剩下的内容会让我们看到如何轻松的在ubuntu server 上安装、配置Node.js。 - -### 基础系统安装 ### - -Node 在Linux,Macintosh,Solaris 这几个系统上都可以完美的运行,同时linux 的发行版本当中Ubuntu 是最合适的。这也是我们为什么要尝试在ubuntu 15.04 上安装Node,当然了在14.04 上也可以使用相同的步骤安装。 -#### 1) 系统资源 #### - -The basic system resources for Node depend upon the size of your infrastructure requirements. 
So, here in this tutorial we will setup Node with 1 GB RAM, 1 GHz Processor and 10 GB of available disk space with minimal installation packages installed on the server that is no web or database server packages are installed. - -#### 2) 系统更新 #### - -It always been recommended to keep your system upto date with latest patches and updates, so before we move to the installation on Node, let's login to your server with super user privileges and run update command. - - # apt-get update - -#### 3) 安装依赖 #### - -Node JS only requires some basic system and software utilities to be present on your server, for its successful installation like 'make' 'gcc' and 'wget'. Let's run the below command to get them installed if they are not already present. - - # apt-get install python gcc make g++ wget - -### 下载最新版的Node JS v4.0.0 ### - -使用链接 [Node JS Download Page][1] 下载源代码. - -![nodejs download](http://blog.linoxide.com/wp-content/uploads/2015/09/download.png) - -我们会复制最新源代码的链接,然后用`wget` 下载,命令如下: - - # wget https://nodejs.org/download/rc/v4.0.0-rc.1/node-v4.0.0-rc.1.tar.gz - -下载完成后使用命令`tar` 解压缩: - - # tar -zxvf node-v4.0.0-rc.1.tar.gz - -![wget nodejs](http://blog.linoxide.com/wp-content/uploads/2015/09/wget.png) - -### 安装 Node JS v4.0.0 ### - -现在可以开始使用下载好的源代码编译Nod JS。你需要在ubuntu serve 上开始编译前运行配置脚本来修改你要使用目录和配置参数。 - - root@ubuntu-15:~/node-v4.0.0-rc.1# ./configure - -![Installing NodeJS](http://blog.linoxide.com/wp-content/uploads/2015/09/configure.png) - -现在运行命令'make install' 编译安装Node JS: - - root@ubuntu-15:~/node-v4.0.0-rc.1# make install - -make 命令会花费几分钟完成编译,冷静的等待一会。 - -### 验证Node 安装 ### - -一旦编译任务完成,我们就可以开始验证安装工作是否OK。我们运行下列命令来确认Node JS 的版本。 - - root@ubuntu-15:~# node -v - v4.0.0-pre - -在命令行下不带参数的运行`node` 就会进入REPL(Read-Eval-Print-Loop,读-执行-输出-循环)模式,它有一个简化版的emacs 行编辑器,通过它你可以交互式的运行JS和查看运行结果。 -![node version](http://blog.linoxide.com/wp-content/uploads/2015/09/node.png) - -### 写测试程序 ### - -我们也可以写一个很简单的终端程序来测试安装是否成功,并且工作正常。要完成这一点,我们将会创建一个“tes.js” 文件,包含一下代码,操作如下: - - root@ubuntu-15:~# 
vim test.js - var util = require("util"); - console.log("Hello! This is a Node Test Program"); - :wq! - -现在为了运行上面的程序,在命令行运行下面的命令。 - - root@ubuntu-15:~# node test.js - -![Node Program](http://blog.linoxide.com/wp-content/uploads/2015/09/node-test.png) - -在一个成功安装了Node JS 的环境下运行上面的程序就会在屏幕上得到上图所示的输出,这个程序加载类 “util” 到变量“util” 中,接着用对象“util” 运行终端任务,console.log 这个命令作用类似C++ 里的cout - -### 结论 ### - -That’s it. Hope this gives you a good idea of Node.js going with Node.js on Ubuntu. If you are new to developing applications with Node.js. After all we can say that we can expect significant performance gains with Node JS Version 4.0.0. -希望本文能够通过在ubuntu 上安装、运行Node.JS让你了解一下Node JS 的大概,如果你是刚刚开始使用Node.JS 开发应用程序。最后我们可以说我们能够通过Node JS v4.0.0 获取显著的性能。 - --------------------------------------------------------------------------------- - -via: http://linoxide.com/ubuntu-how-to/setup-node-js-4-0-ubuntu-14-04-15-04/ - -作者:[Kashif Siddique][a] -译者:[译者ID](https://github.com/osk874) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/kashifs/ -[1]:https://nodejs.org/download/rc/v4.0.0-rc.1/ From 94385f848247fb60bf4d57dad869c1e6513e26de Mon Sep 17 00:00:00 2001 From: alim0x Date: Fri, 9 Oct 2015 20:56:26 +0800 Subject: [PATCH 658/697] [translating]20151007 Open Source Media Player MPlayer 1.2 Released --- ...pen Source Media Player MPlayer 1.2 Released.md | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md b/sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md index 518a2a4135..52a6887786 100644 --- a/sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md +++ b/sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md @@ -1,3 +1,5 @@ +alim0x translating + Open Source Media Player MPlayer 1.2 Released 
================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/MPlayer-1.2.jpg) @@ -30,19 +32,19 @@ I have used Ubuntu 15.04 for the installation of MPlayer 1.2. Installation instr Open a terminal and use the following commands: wget http://www.mplayerhq.hu/MPlayer/releases/MPlayer-1.2.tar.xz - + tar xvf MPlayer-1.1.1.tar.xz - + cd MPlayer-1.2 - + sudo apt-get install yasm - + ./configure When you run make, it will throw a number of things on the terminal screen and takes some time to build it. Have patience. make - + sudo make install If you feel uncomfortable using the source code, I advise you to either wait forMPlayer 1.2 to land in the repositories of your Linux distribution or use an alternate like MPV. @@ -59,4 +61,4 @@ via: http://itsfoss.com/mplayer-1-2-released/ [a]:http://itsfoss.com/author/abhishek/ [1]:https://www.mplayerhq.hu/ -[2]:http://mpv.io/ \ No newline at end of file +[2]:http://mpv.io/ From 913bda2922ce5c68811172a8fc0badf32f4e549f Mon Sep 17 00:00:00 2001 From: bazz2 Date: Sat, 10 Oct 2015 09:05:21 +0800 Subject: [PATCH 659/697] [translated]How to filter BGP routes in Quagga BGP router --- ... filter BGP routes in Quagga BGP router.md | 202 ------------------ ... 
filter BGP routes in Quagga BGP router.md | 201 +++++++++++++++++ 2 files changed, 201 insertions(+), 202 deletions(-) delete mode 100644 sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md create mode 100644 translated/tech/20150202 How to filter BGP routes in Quagga BGP router.md diff --git a/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md b/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md deleted file mode 100644 index f227e0c506..0000000000 --- a/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md +++ /dev/null @@ -1,202 +0,0 @@ -[bazz222] -How to filter BGP routes in Quagga BGP router -================================================================================ -In the [previous tutorial][1], we demonstrated how to turn a CentOS box into a BGP router using Quagga. We also covered basic BGP peering and prefix exchange setup. In this tutorial, we will focus on how we can control incoming and outgoing BGP prefixes by using **prefix-list** and **route-map**. - -As described in earlier tutorials, BGP routing decisions are made based on the prefixes received/advertised. To ensure error-free routing, it is recommended that you use some sort of filtering mechanism to control these incoming and outgoing prefixes. For example, if one of your BGP neighbors starts advertising prefixes which do not belong to them, and you accept such bogus prefixes by mistake, your traffic can be sent to that wrong neighbor, and end up going nowhere (so-called "getting blackholed"). To make sure that such prefixes are not received or advertised to any neighbor, you can use prefix-list and route-map. The former is a prefix-based filtering mechanism, while the latter is a more general prefix-based policy mechanism used to fine-tune actions. - -We will show you how to use prefix-list and route-map in Quagga. - -### Topology and Requirement ### - -In this tutorial, we assume the following topology. 
- -![](https://farm8.staticflickr.com/7394/16407625405_4f7d24d1f6_c.jpg) - -Service provider A has already established an eBGP peering with service provider B, and they are exchanging routing information between them. The AS and prefix details are as stated below. - -- **Peering block**: 192.168.1.0/24 -- **Service provider A**: AS 100, prefix 10.10.0.0/16 -- **Service provider B**: AS 200, prefix 10.20.0.0/16 - -In this scenario, service provider B wants to receive only prefixes 10.10.10.0/23, 10.10.10.0/24 and 10.10.11.0/24 from provider A. - -### Quagga Installation and BGP Peering ### - -In the [previous tutorial][1], we have already covered the method of installing Quagga and setting up BGP peering. So we will not go through the details here. Nonetheless, I am providing a summary of BGP configuration and prefix advertisements: - -![](https://farm8.staticflickr.com/7428/16219986668_97cb193b15_c.jpg) - -The above output indicates that the BGP peering is up. Router-A is advertising multiple prefixes towards router-B. Router-B, on the other hand, is advertising a single prefix 10.20.0.0/16 to router-A. Both routers are receiving the prefixes without any problems. - -### Creating Prefix-List ### - -In a router, a prefix can be blocked with either an ACL or prefix-list. Using prefix-list is often preferred to ACLs since prefix-list is less processor intensive than ACLs. Also, prefix-list is easier to create and maintain. - - ip prefix-list DEMO-PRFX permit 192.168.0.0/23 - -The above command creates prefix-list called 'DEMO-FRFX' that allows only 192.168.0.0/23. - -Another great feature of prefix-list is that we can specify a range of subnet mask(s). Take a look at the following example: - - ip prefix-list DEMO-PRFX permit 192.168.0.0/23 le 24 - -The above command creates prefix-list called 'DEMO-PRFX' that permits prefixes between 192.168.0.0/23 and /24, which are 192.168.0.0/23, 192.168.0.0/24 and 192.168.1.0/24. The 'le' operator means less than or equal to. 
You can also use 'ge' operator for greater than or equal to. - -A single prefix-list statement can have multiple permit/deny actions. Each statement is assigned a sequence number which can be determined automatically or specified manually. - -Multiple prefix-list statements are parsed one by one in the increasing order of sequence numbers. When configuring prefix-list, we should keep in mind that there is always an **implicit deny** at the end of all prefix-list statements. This means that anything that is not explicitly allowed will be denied. - -To allow everything, we can use the following prefix-list statement which allows any prefix starting from 0.0.0.0/0 up to anything with subnet mask /32. - - ip prefix-list DEMO-PRFX permit 0.0.0.0/0 le 32 - -Now that we know how to create prefix-list statements, we will create prefix-list called 'PRFX-LST' that will allow prefixes required in our scenario. - - router-b# conf t - router-b(config)# ip prefix-list PRFX-LST permit 10.10.10.0/23 le 24 - -### Creating Route-Map ### - -Besides prefix-list and ACLs, there is yet another mechanism called route-map, which can control prefixes in a BGP router. In fact, route-map can fine-tune possible actions more flexibly on the prefixes matched with an ACL or prefix-list. - -Similar to prefix-list, a route-map statement specifies permit or deny action, followed by a sequence number. Each route-map statement can have multiple permit/deny actions with it. For example: - - route-map DEMO-RMAP permit 10 - -The above statement creates route-map called 'DEMO-RMAP', and adds permit action with sequence 10. Now we will use match command under sequence 10. - - router-a(config-route-map)# match (press ? 
in the keyboard) - ----------- - - as-path Match BGP AS path list - community Match BGP community list - extcommunity Match BGP/VPN extended community list - interface match first hop interface of route - ip IP information - ipv6 IPv6 information - metric Match metric of route - origin BGP origin code - peer Match peer address - probability Match portion of routes defined by percentage value - tag Match tag of route - -As we can see, route-map can match many attributes. We will match a prefix in this tutorial. - - route-map DEMO-RMAP permit 10 - match ip address prefix-list DEMO-PRFX - -The match command will match the IP addresses permitted by the prefix-list 'DEMO-PRFX' created earlier (i.e., prefixes 192.168.0.0/23, 192.168.0.0/24 and 192.168.1.0/24). - -Next, we can modify the attributes by using the set command. The following example shows possible use cases of set. - - route-map DEMO-RMAP permit 10 - match ip address prefix-list DEMO-PRFX - set (press ? in keyboard) - ----------- - - aggregator BGP aggregator attribute - as-path Transform BGP AS-path attribute - atomic-aggregate BGP atomic aggregate attribute - comm-list set BGP community list (for deletion) - community BGP community attribute - extcommunity BGP extended community attribute - forwarding-address Forwarding Address - ip IP information - ipv6 IPv6 information - local-preference BGP local preference path attribute - metric Metric value for destination routing protocol - metric-type Type of metric - origin BGP origin code - originator-id BGP originator ID attribute - src src address for route - tag Tag value for routing protocol - vpnv4 VPNv4 information - weight BGP weight for routing table - -As we can see, the set command can be used to change many attributes. For a demonstration purpose, we will set BGP local preference. 
- - route-map DEMO-RMAP permit 10 - match ip address prefix-list DEMO-PRFX - set local-preference 500 - -Just like prefix-list, there is an implicit deny at the end of all route-map statements. So we will add another permit statement in sequence number 20 to permit everything. - - route-map DEMO-RMAP permit 10 - match ip address prefix-list DEMO-PRFX - set local-preference 500 - ! - route-map DEMO-RMAP permit 20 - -The sequence number 20 does not have a specific match command, so it will, by default, match everything. Since the decision is permit, everything will be permitted by this route-map statement. - -If you recall, our requirement is to only allow/deny some prefixes. So in our scenario, the set command is not necessary. We will just use one permit statement as follows. - - router-b# conf t - router-b(config)# route-map RMAP permit 10 - router-b(config-route-map)# match ip address prefix-list PRFX-LST - -This route-map statement should do the trick. - -### Applying Route-Map ### - -Keep in mind that ACLs, prefix-list and route-map are not effective unless they are applied to an interface or a BGP neighbor. Just like ACLs or prefix-list, a single route-map statement can be used with any number of interfaces or neighbors. However, any one interface or a neighbor can support only one route-map statement for inbound, and one for outbound traffic. - -We will apply the created route-map to the BGP configuration of router-B for neighbor 192.168.1.1 with incoming prefix advertisement. - - router-b# conf terminal - router-b(config)# router bgp 200 - router-b(config-router)# neighbor 192.168.1.1 route-map RMAP in - -Now, we check the routes advertised and received by using the following commands. 
- -For advertised routes: - - show ip bgp neighbor-IP advertised-routes - -For received routes: - - show ip bgp neighbor-IP routes - -![](https://farm8.staticflickr.com/7424/16221405429_4d86119548_c.jpg) - -You can see that while router-A is advertising four prefixes towards router-B, router-B is accepting only three prefixes. If we check the range, we can see that only the prefixes that are allowed by route-map are visible on router-B. All other prefixes are discarded. - -**Tip**: If there is no change in the received prefixes, try resetting the BGP session using the command: "clear ip bgp neighbor-IP". In our case: - - clear ip bgp 192.168.1.1 - -As we can see, the requirement has been met. We can create similar prefix-list and route-map statements in routers A and B to further control inbound and outbound prefixes. - -I am summarizing the configuration in one place so you can see it all at a glance. - - router bgp 200 - network 10.20.0.0/16 - neighbor 192.168.1.1 remote-as 100 - neighbor 192.168.1.1 route-map RMAP in - ! - ip prefix-list PRFX-LST seq 5 permit 10.10.10.0/23 le 24 - ! - route-map RMAP permit 10 - match ip address prefix-list PRFX-LST - -### Summary ### - -In this tutorial, we showed how we can filter BGP routes in Quagga by defining prefix-list and route-map. We also demonstrated how we can combine prefix-list with route-map to fine-control incoming prefixes. You can create your own prefix-list and route-map in a similar way to match your network requirements. These tools are one of the most effective ways to protect the production network from route poisoning and advertisement of bogon routes. - -Hope this helps. 
- --------------------------------------------------------------------------------- - -via: http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html - -作者:[Sarmed Rahman][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/sarmed -[1]:http://xmodulo.com/centos-bgp-router-quagga.html diff --git a/translated/tech/20150202 How to filter BGP routes in Quagga BGP router.md b/translated/tech/20150202 How to filter BGP routes in Quagga BGP router.md new file mode 100644 index 0000000000..53ce40cac6 --- /dev/null +++ b/translated/tech/20150202 How to filter BGP routes in Quagga BGP router.md @@ -0,0 +1,201 @@ +如何使用 Quagga BGP(边界网关协议)路由器来过滤 BGP 路由 +================================================================================ +在[之前的文章][1]中,我们介绍了如何使用 Quagga 将 CentOS 服务器变成一个 BGP 路由器,也介绍了 BGP 对等体和前缀交换设置。在本教程中,我们将重点放在如何使用**前缀列表**和**路由映射**来分别控制数据注入和数据输出。 + +之前的文章已经说过,BGP 的路由判定是基于前缀的收取和前缀的广播。为避免错误的路由,你需要使用一些过滤机制来控制这些前缀的收发。举个例子,如果你的一个 BGP 邻居开始广播一个本不属于它们的前缀,而你也将错就错地接收了这些不正常前缀,并且也将它转发到网络上,这个转发过程会不断进行下去,永不停止(所谓的“黑洞”就这样产生了)。所以确保这样的前缀不会被收到,或者不会转发到任何网络,要达到这个目的,你可以使用前缀列表和路由映射。前者是基于前缀的过滤机制,后者是更为常用的基于前缀的策略,可用于精调过滤机制。 + +本文会向你展示如何在 Quagga 中使用前缀列表和路由映射。 + +### 拓扑和需求 ### + +本教程使用下面的拓扑结构。 + +![](https://farm8.staticflickr.com/7394/16407625405_4f7d24d1f6_c.jpg) + +服务供应商A和供应商B已经将对方设置成为 eBGP 对等体,实现互相通信。他们的自治系统号和前缀分别如下所示。 + +- **对等区段**: 192.168.1.0/24 +- **服务供应商A**: 自治系统号 100, 前缀 10.10.0.0/16 +- **服务供应商B**: 自治系统号 200, 前缀 10.20.0.0/16 + +在这个场景中,供应商B只想从A接收 10.10.10.0/23, 10.10.10.0/24 和 10.10.11.0/24 三个前缀。 + +### 安装 Quagga 和设置 BGP 对等体 ### + +在[之前的教程][1]中,我们已经写了安装 Quagga 和设置 BGP 对等体的方法,所以这里就不再详细说明了,只简单介绍下 BGP 配置和前缀广播: + +![](https://farm8.staticflickr.com/7428/16219986668_97cb193b15_c.jpg) + +上图说明 BGP 对等体已经开启。Router-A 在向 router-B 广播多个前缀,而 Router-B 也在向 router-A 广播一个前缀 10.20.0.0/16。两个路由器都能正确无误地收发前缀。 + +### 创建前缀列表 ### + +路由器可以使用 ACL 
或前缀列表来过滤一个前缀。前缀列表比 ACL 更常用,因为前者处理步骤少,而且易于创建和维护。 + + ip prefix-list DEMO-PRFX permit 192.168.0.0/23 + +上面的命令创建了名为“DEMO-FRFX”的前缀列表,只允许存在 192.168.0.0/23 这个前缀。 + +前缀列表的另一个牛X功能是支持子网掩码区间,请看下面的例子: + + ip prefix-list DEMO-PRFX permit 192.168.0.0/23 le 24 + +这个命令创建的前缀列表包含在 192.168.0.0/23 和 /24 之间的前缀,分别是 192.168.0.0/23, 192.168.0.0/24 and 192.168.1.0/24。运算符“le”表示小于等于,你也可以使用“ge”表示大于等于。 + +一个前缀列表语句可以有多个允许或拒绝操作。每个语句都自动或手动地分配有一个序列号。 + +如果存在多个前缀列表语句,则这些语句会按序列号顺序被依次执行。在配置前缀列表的时候,我们需要注意在所有前缀列表语句后面的**隐性拒绝**属性,就是说凡是不被明显允许的,都会被拒绝。 + +如果要设置成允许所有前缀,前缀列表语句设置如下: + + ip prefix-list DEMO-PRFX permit 0.0.0.0/0 le 32 + +我们已经知道如何创建前缀列表语句了,现在我们要创建一个名为“PRFX-LST”的前缀列表,来满足我们实验场景的需求。 + + router-b# conf t + router-b(config)# ip prefix-list PRFX-LST permit 10.10.10.0/23 le 24 + +### 创建路由映射 ### + +除了前缀列表和 ACL,这里还有另一种机制,叫做路由映射,也可以在 BGP 路由器中控制前缀。事实上,路由映射针对前缀匹配的微调效果比前缀列表和 ACL 都强。 + +与前缀列表类似,路由映射语句也可以指定允许和拒绝操作,也需要分配一个序列号。每个路由匹配可以有多个允许或拒绝操作。例如: + + route-map DEMO-RMAP permit 10 + +上面的语句创建了名为“DEMO-RMAP”的路由映射,添加序列号为10的允许操作。现在我们在这个序列号所对应的路由映射下使用 match 命令进行匹配。 + + router-a(config-route-map)# match (press ? in the keyboard) + +---------- + + as-path Match BGP AS path list + community Match BGP community list + extcommunity Match BGP/VPN extended community list + interface match first hop interface of route + ip IP information + ipv6 IPv6 information + metric Match metric of route + origin BGP origin code + peer Match peer address + probability Match portion of routes defined by percentage value + tag Match tag of route + +如你所见,路由映射可以匹配很多属性,本教程需要匹配一个前缀。 + + route-map DEMO-RMAP permit 10 + match ip address prefix-list DEMO-PRFX + +这个 match 命令会匹配之前建好的前缀列表中允许的 IP 地址(也就是前缀 192.168.0.0/23, 192.168.0.0/24 和 192.168.1.0/24)。 + +接下来,我们可以使用 set 命令来修改这些属性。例子如下: + + route-map DEMO-RMAP permit 10 + match ip address prefix-list DEMO-PRFX + set (press ? 
in keyboard) + +---------- + + aggregator BGP aggregator attribute + as-path Transform BGP AS-path attribute + atomic-aggregate BGP atomic aggregate attribute + comm-list set BGP community list (for deletion) + community BGP community attribute + extcommunity BGP extended community attribute + forwarding-address Forwarding Address + ip IP information + ipv6 IPv6 information + local-preference BGP local preference path attribute + metric Metric value for destination routing protocol + metric-type Type of metric + origin BGP origin code + originator-id BGP originator ID attribute + src src address for route + tag Tag value for routing protocol + vpnv4 VPNv4 information + weight BGP weight for routing table + +如你所见,set 命令也可以修改很多属性。为了作个示范,我们修改一下 BGP 的 local-preference 这个属性。 + + route-map DEMO-RMAP permit 10 + match ip address prefix-list DEMO-PRFX + set local-preference 500 + +如同前缀列表,路由映射语句的末尾也有隐性拒绝操作。所以我们需要添加另外一个允许语句(使用序列号20)来允许所有前缀。 + + route-map DEMO-RMAP permit 10 + match ip address prefix-list DEMO-PRFX + set local-preference 500 + ! 
+ route-map DEMO-RMAP permit 20 + +序列号20未指定任何匹配命令,所以默认匹配所有前缀。在这个路由映射语句中,所有的前缀都被允许。 + +回想一下,我们的需求是只允许或只拒绝一些前缀,所以上面的 set 命令不应该存在于这个场景中。我们只需要一个允许语句,如下如示: + + router-b# conf t + router-b(config)# route-map RMAP permit 10 + router-b(config-route-map)# match ip address prefix-list PRFX-LST + +这个路由映射才是我们需要的效果。 + +### 应用路由映射 ### + +注意,在被应用于一个接口或一个 BGP 邻居之前,ACL、前缀列表和路由映射都不会生效。与 ACL 和前缀列表一样,一条路由映射语句也能被多个接口或邻居使用。然而,一个接口或一个邻居只能有一条路由映射语句应用于输入端,以及一条路由映射语句应用于输出端。 + +下面我们将这条路由映射语句应用于 router-B 的 BGP 配置,为 router-B 的邻居 192.168.1.1 设置输入前缀广播。 + + router-b# conf terminal + router-b(config)# router bgp 200 + router-b(config-router)# neighbor 192.168.1.1 route-map RMAP in + +现在检查下广播路由和收取路由。 + +显示广播路由的命令: + + show ip bgp neighbor-IP advertised-routes + +显示收取路由的命令: + + show ip bgp neighbor-IP routes + +![](https://farm8.staticflickr.com/7424/16221405429_4d86119548_c.jpg) + +可以看到,router-A 有4条路由前缀到达 router-B,而 router-B 只接收3条。查看一下范围,我们就能知道只有被路由映射允许的前缀才能在 router-B 上显示出来,其他的前缀一概丢弃。 + +**小提示**:如果接收前缀内容没有刷新,试试重置下 BGP 会话,使用这个命令:clear ip bgp neighbor-IP。本教程中命令如下: + + clear ip bgp 192.168.1.1 + +我们能看到系统已经满足我们的要求了。接下来我们可以在 router-A 和 router-B 上创建相似的前缀列表和路由映射语句来更好地控制输入输出的前缀。 + +这里把配置过程总结一下,方便查看。 + + router bgp 200 + network 10.20.0.0/16 + neighbor 192.168.1.1 remote-as 100 + neighbor 192.168.1.1 route-map RMAP in + ! + ip prefix-list PRFX-LST seq 5 permit 10.10.10.0/23 le 24 + ! 
+ route-map RMAP permit 10 + match ip address prefix-list PRFX-LST + +### 总结 ### + +在本教程中我们演示了如何在 Quagga 中设置前缀列表和路由映射来过滤 BGP 路由。我们也展示了如何将前缀列表结合进路由映射来进行输入前缀的微调功能。你可以参考这些方法来设置满足自己需求的前缀列表和路由映射。这些工具是保护网络免受路由毒化和来自 bogon 路由(LCTT 译注:指不该出现在internet路由表中的地址)的广播。 + +希望本文对你有帮助。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html + +作者:[Sarmed Rahman][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/sarmed +[1]:http://xmodulo.com/centos-bgp-router-quagga.html From 1176dd91ba9d5a1bff21f112b85c1a3e72c736d4 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 10 Oct 2015 10:00:50 +0800 Subject: [PATCH 660/697] PUB:20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop @MikeCoder --- ...e--Minimal Icon Theme For Linux Desktop.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) rename {translated/share => published}/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md (62%) diff --git a/translated/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md b/published/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md similarity index 62% rename from translated/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md rename to published/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md index 5bd7655a9e..0049dd5a6e 100644 --- a/translated/share/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md +++ b/published/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md @@ -1,12 +1,12 @@ -Xenlism WildFire: 一个精美的 Linux 桌面版主题 +Xenlism WildFire: Linux 桌面的极简风格图标主题 ================================================================================ 
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icon-theme-linux-3.png) -有那么一段时间,我一直使用一个主题,没有更换过。可能是在最近的一段时间都没有一款主题能满足我的需求。有那么一些我认为是[Ubuntu 上最好的图标主题][1],比如 Numix 和 Moka,并且,我一直也对 Numix 比较满意。 +有那么一段时间我没更换主题了,可能最近的一段时间没有一款主题能让我眼前一亮了。我考虑过更换 [Ubuntu 上最好的图标主题][1],但是它们和 Numix 和 Moka 差不多,而且我觉得 Numix 也不错。 -但是,一段时间后,我使用了[Xenslim WildFire][2],并且我必须承认,他看起来太好了。Minimail 是当前比较流行的设计趋势。并且 Xenlism 完美的表现了它。平滑和美观。Xenlism 收到了诺基亚的 Meego 和苹果图标的影响。 +但是前几天我试了试 [Xenslim WildFire][2],我必须承认,它看起来太棒了。极简风格是设计界当前的流行趋势,而 Xenlism 完美的表现了这种风格。平滑而美观,Xenlism 显然受到了诺基亚的 Meego 和苹果图标的影响。 -让我们来看一下他的几个不同应用的图标: +让我们来看一下它的几个不同应用的图标: ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons.png) @@ -14,15 +14,15 @@ Xenlism WildFire: 一个精美的 Linux 桌面版主题 ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons-1.png) -主题开发者,[Nattapong Pullkhow][3], 说,这个图标主题最适合 GNOME,但是在 Unity 和 KDE,Mate 上也表现良好。 +主题开发者 [Nattapong Pullkhow][3] 说,这个图标主题最适合 GNOME,但是在 Unity 和 KDE,Mate 上也表现良好。 ### 安装 Xenlism Wildfire ### -Xenlism Theme 大约有 230 MB, 对于一个主题来说确实很大,但是考虑到它支持的庞大的软件数量,这个大小,确实也不是那么令人吃惊。 +Xenlism Theme 大约有 230 MB, 对于一个主题来说确实很大,但是考虑到它所支持的庞大的软件数量,这个大小,确实也不是那么令人吃惊。 #### 在 Ubuntu/Debian 上安装 Xenlism #### -在 Ubuntu 的变种中安装前,用以下的命令添加 GPG 秘钥: +在 Ubuntu 系列中安装之前,用以下的命令添加 GPG 秘钥: sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 90127F5B @@ -42,7 +42,7 @@ Xenlism Theme 大约有 230 MB, 对于一个主题来说确实很大,但是考 sudo nano /etc/pacman.conf - 添加如下的代码块,在配置文件中: +添加如下的代码块,在配置文件中: [xenlism-arch] SigLevel = Never @@ -55,17 +55,17 @@ Xenlism Theme 大约有 230 MB, 对于一个主题来说确实很大,但是考 #### 使用 Xenlism 主题 #### -在 Ubuntu Unity, [可以使用 Unity Tweak Tool 来改变主题][4]. In GNOME, [使用 Gnome Tweak Tool 改变主题][5]. 
我确信你会接下来的步骤,如果你不会,请来信通知我,我会继续完善这篇文章。 +在 Ubuntu Unity, [可以使用 Unity Tweak Tool 来改变主题][4]。 在 GNOME 中,[使用 Gnome Tweak Tool 改变主题][5]。 我确信你会接下来的步骤,如果你不会,请来信通知我,我会继续完善这篇文章。 这就是 Xenlism 在 Ubuntu 15.04 Unity 中的截图。同时也使用了 Xenlism 桌面背景。 ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Xenlism-icons-2.png) -这看来真棒,不是吗?如果你试用了,并且喜欢他,你可以感谢他的开发者: +这看来真棒,不是吗?如果你试用了,并且喜欢它,你可以感谢它的开发者: -> [Xenlism is a stunning minimal icon theme for Linux. Thanks @xenatt for this beautiful theme.][6] +> [Xenlism 是一个用于 Linux 的、令人兴奋的极简风格的图标主题,感谢 @xenatt 做出这么漂亮的主题。][6] -我希望你喜欢他。同时也希望你分享你对这个主题的看法,或者你喜欢的主题。Xenlism 真的很棒,可能会替换掉你最喜欢的主题。 +我希望你喜欢它。同时也希望你分享你对这个主题的看法,或者你喜欢的主题。Xenlism 真的很棒,可能会替换掉你最喜欢的主题。 -------------------------------------------------------------------------------- @@ -73,7 +73,7 @@ via: http://itsfoss.com/xenlism-wildfire-theme/ 作者:[Abhishek][a] 译者:[MikeCoder](https://github.com/MikeCoder) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8e153dbcf9d46db2c8af5c05b96e34a167ec924f Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 10 Oct 2015 10:20:05 +0800 Subject: [PATCH 661/697] =?UTF-8?q?=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @Yuking-net --- ...Debian dropping the Linux Standard Base.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 sources/news/20150930 Debian dropping the Linux Standard Base.md diff --git a/sources/news/20150930 Debian dropping the Linux Standard Base.md b/sources/news/20150930 Debian dropping the Linux Standard Base.md new file mode 100644 index 0000000000..dc3e3adcce --- /dev/null +++ b/sources/news/20150930 Debian dropping the Linux Standard Base.md @@ -0,0 +1,66 @@ +Debian dropping the Linux Standard Base +======================================= + +The Linux Standard Base (LSB) is a [specification][1] that purports to define the services 
and application-level ABIs that a Linux distribution will provide for use by third-party programs. But some in the Debian project are questioning the value of maintaining LSB compliance—it has become, they say, a considerable amount of work for little measurable benefit.

The LSB was first released in 2001, and was modeled to a degree on the [POSIX][2] and [Single UNIX Specification][3] standards. Today, the LSB is maintained by a [working group][4] at the Linux Foundation. The most recent release was [LSB 5.0][5] in June 2015. It defines five LSB modules (Core, Desktop, Languages, Imaging, and Trial Use).

The bulk of each module consists of a list of required libraries and the mandatory version for each, plus a description of the public functions and data definitions for each library. Other contents of the modules include naming and organizational specifications, such as the filesystem layout in the [Filesystem Hierarchy Standard (FHS)][6] or directory specifications like the Freedesktop [XDG Base Directory][7] specification.

In what appears to be sheer coincidence, during the same week that LSB 5.0 was released, a discussion arose within the Debian project as to whether or not maintaining LSB compliance was a worthwhile pursuit for Debian. After LSB compliance was mentioned in passing in another thread, Didier Raboud took the opportunity to [propose][8] scaling back Debian's compliance efforts to the bare minimum. As it stands today, he said, Debian's `lsb-*` meta-packages attempt to require the correct versions of the libraries mentioned in the standard, but no one is actually checking that all of the symbols and data definitions are met as a result.

Furthermore, the LSB continues to grow; the 4.1 release (the most recent when Debian "jessie" was released) consisted of "*1493 components, 1672 libs, 38491 commands, 30176 classes and 716202 interfaces*," he said. 
No one seems interested in checking those details in the Debian packages, he noted, adding that "*I've held an LSB BoF last year at DebConf, and discussed src:lsb with various people back then, and what I took back was 'roughly no one cares'.*" Just as importantly, though, the lack of interest does not seem to be limited to Debian: + + The crux of the issue is, I think, whether this whole game is worth the work: I am yet to hear about software distribution happening through LSB packages. There are only _8_ applications by 6 companies on the LSB certified applications list, of which only one is against LSB >= 4. + +Raboud proposed that Debian drop everything except for the [lsb-base][9] package (which currently includes a small set of shell functions for use by the init system) and the [lsb-release][10] package (which provides a simple tool that users can use to query the identity of the distribution and what level of LSB compliance it advertises). + +In a follow-up [message][11], he noted that changing the LSB to be, essentially, "*whatever Debian as well as all other actors in the FLOSS world are _actually_ doing*" might make the standard—and the effort to support it in Debian—more valuable. But here again, he questioned whether anyone was interested in pursuing that objective. + +If his initial comments about lack of interest in LSB were not evidence enough, a full three months then went by with no one offering any support for maintaining the LSB-compliance packages and two terse votes in favor of dropping them. Consequently, on September 17, Raboud [announced][12] that he had gutted the `src:lsb` package (leaving just `lsb-base` and `lsb-release` as described) and uploaded it to the "unstable" archive. That minimalist set of tools will allow an interested user to start up the next Debian release and query whether or not it is LSB-compliant—and the answer will be "no."
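The query step described above can be sketched from a shell. This is only an illustration, assuming the `lsb_release` tool from the lsb-release package is available, with `/etc/os-release` as a fallback; neither is guaranteed to be installed on a given system:

```shell
# Ask the running system what identity (and LSB level, if any) it advertises.
# lsb_release comes from the lsb-release package described above;
# /etc/os-release is a fallback that most modern distributions ship.
if command -v lsb_release >/dev/null 2>&1; then
    lsb_release --all 2>/dev/null
elif [ -r /etc/os-release ]; then
    . /etc/os-release
    printf 'Distributor: %s\nRelease: %s\n' "${NAME:-unknown}" "${VERSION_ID:-unknown}"
else
    echo 'Distributor: unknown'
fi
```

On a system that has dropped the compliance metapackages, the LSB version line reported by `lsb_release` (if the tool is present at all) is typically empty or `n/a`.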
+ +Raboud added that Debian does still plan to maintain FHS compliance, even though it is dropping LSB compliance: + + But Debian's not throwing all of the LSB overboard: we're still firmly standing behind the FHS (version 2.3 through Debian Policy; although 3.0 was released in August this year) and our SysV init scripts mostly conform to LSB VIII.22.{2-8}. But don't get me wrong, this src:lsb upload is an explicit move away from the LSB. + +After the announcement, Nikolaus Rath [replied][13] that some proprietary applications expect `ld-lsb.so*` symbolic links to be present in `/lib` and `/lib64`, and that those symbolic links had been provided by the `lsb-*` package set. Raboud [suggested][14] that the links should be provided by the `libc6` package instead; package maintainer Aurelien Jarno [said][15] he would accept such a patch if it was provided. + +The only remaining wrinkle, it seems, is that there are some printer-driver packages that expect some measure of LSB compliance. Raboud had noted in his first message that [OpenPrinting][16] drivers were the only example of LSB-compliant packages he had seen actually distributed. Michael Biebl [noted][17] that there was one such driver package in the main archive; Raboud [replied][18] that he believed the package in question ought to be moved to the non-free repository anyway, since it contained a binary driver. + +With that, the issue appears to be settled, at least for the current Debian development cycle. What will be more interesting, naturally, will be to see what effect, if any, the decision has on broader LSB acceptance. As Raboud alluded to, the number of distributions that are certified as LSB-compliant is [small][19]. It is hard not to notice that those distributions are largely of the "enterprise" variety. 
+ +Perhaps, then, LSB compliance is still important to some business sectors, but it is hard to know how many customers of those enterprise distributions genuinely care about the LSB certification stamp. If Debian's experience is anything to go by, however, general interest in such certification may be in steep decline. + +--- + +via:https://lwn.net/Articles/658809/ + +作者:Nathan Willis +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译, +[Linux中国](https://linux.cn/) 荣誉推出 + + +[1]:http://refspecs.linuxfoundation.org/lsb.shtml +[2]:https://en.wikipedia.org/wiki/POSIX +[3]:https://en.wikipedia.org/wiki/Single_UNIX_Specification +[4]:http://www.linuxfoundation.org/collaborate/workgroups/lsb +[5]:http://www.linuxfoundation.org/collaborate/workgroups/lsb/lsb-50 +[6]:http://www.linuxfoundation.org/collaborate/workgroups/lsb/fhs +[7]:http://standards.freedesktop.org/basedir-spec/basedir-spec-0.6.html +[8]:https://lwn.net/Articles/658838/ +[9]:https://packages.debian.org/sid/lsb-base +[10]:https://packages.debian.org/sid/lsb-release +[11]:https://lwn.net/Articles/658842/ +[12]:/Articles/658843/ +[13]:/Articles/658846/ +[14]:/Articles/658847/ +[15]:/Articles/658848/ +[16]:http://www.linuxfoundation.org/collaborate/workgroups/openprinting/ +[17]:/Articles/658844/ +[18]:/Articles/658845/ + + From 4ef5ad00d95d66767c9693f9e5b6205f4e8622a9 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sat, 10 Oct 2015 12:34:21 +0800 Subject: [PATCH 662/697] Delete 20151005 pyinfo() A good looking phpinfo-like python script.md --- ...good looking phpinfo-like python script.md | 40 ------------------- 1 file changed, 40 deletions(-) delete mode 100644 sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md diff --git a/sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md b/sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md 
deleted file mode 100644 index f096bc5fc6..0000000000 --- a/sources/tech/20151005 pyinfo() A good looking phpinfo-like python script.md +++ /dev/null @@ -1,40 +0,0 @@ -translation by strugglingyouth -pyinfo() A good looking phpinfo-like python script -================================================================================ -Being a native php guy, I'm used to having phpinfo(), giving me easy access to php.ini settings and loaded modules etc. So ofcourse I wanted to call the not existing pyinfo() function, to no avail. My fingers quickly pressed CTRL-E to google for a implementation of it, someone must've ported it already? - -Yes, someone did. But oh my was it ugly. Preposterous! Since I cannot stand ugly layouts *cough*, I just had to build my own. So I used the code I found and cleaned up the layout to make it better. The official python website isnt that bad layout-wise, so why not steal their colors and background images? Yes that sounds like a plan to me. - -[Gits Here][1] | [Download here][2] | [Example here][3] - -Mind you, I only ran it on a python 2.6.4 server, so anything else is at your own risk (but it should be no problem to port it to any other version). To get it working, just import the file and call pyinfo() while catching the function's return value. Print that on the screen. Huzzah! 
- -For those who did not get that and are using [mod_wsgi][4], run it using something like this (replace that path ofcourse): -``` -def application(environ, start_response): - import sys - path = 'YOUR_WWW_ROOT_DIRECTORY' - if path not in sys.path: - sys.path.append(path) - from pyinfo import pyinfo - output = pyinfo() - start_response('200 OK', [('Content-type', 'text/html')]) - return [output] -``` ---- - -via:http://bran.name/articles/pyinfo-a-good-looking-phpinfo-like-python-script/ - -作者:[Bran van der Meer][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译, -[Linux中国](https://linux.cn/) 荣誉推出 - - -[a]:http://bran.name/resume/ -[1]:https://gist.github.com/951825#file_pyinfo.py -[2]:http://bran.name/dump/pyinfo.zip -[3]:http://bran.name/dump/pyinfo/index.py -[4]:http://code.google.com/p/modwsgi/ From dbc374103072a2b9c241e921343dd86264ad595e Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sat, 10 Oct 2015 12:37:32 +0800 Subject: [PATCH 663/697] Create 20151005 pyinfo() A good looking phpinfo-like python script.md --- ...good looking phpinfo-like python script.md | 40 +++++++++++++++++++ 1 file changed, 40 insertions(+) create mode 100644 translated/tech/20151005 pyinfo() A good looking phpinfo-like python script.md diff --git a/translated/tech/20151005 pyinfo() A good looking phpinfo-like python script.md b/translated/tech/20151005 pyinfo() A good looking phpinfo-like python script.md new file mode 100644 index 0000000000..6d7e71396d --- /dev/null +++ b/translated/tech/20151005 pyinfo() A good looking phpinfo-like python script.md @@ -0,0 +1,40 @@ +pyinfo():一个像 phpinfo 一样的 Python 脚本 +================================================================================ +作为一个热衷于 php 的家伙,我已经习惯了使用 phpinfo() 函数来让我轻松访问 php.ini 里的配置和加载的模块等。当然我也想要调用一个并不存在的 pyinfo() 函数,但没有成功。我迅速地在 google 上搜索:是否已经有人实现了它?
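顺带一提,这类 pyinfo 风格的报告,本质上就是收集解释器信息并渲染成 HTML。下面是一个假想的极简示意(并非文中的 pyinfo 模块,函数名 mini_pyinfo 为演示所设):

```python
# 一个假想的极简 pyinfo 风格函数(并非文中的 pyinfo 模块):
# 收集几项解释器信息并渲染成 HTML 表格,演示这类报告的基本思路。
import platform
import sys


def mini_pyinfo():
    rows = {
        "Python version": sys.version.split()[0],
        "Platform": platform.platform(),
        "Executable": sys.executable,
        "sys.path entries": str(len(sys.path)),
    }
    cells = "".join(
        "<tr><td>{0}</td><td>{1}</td></tr>".format(key, value)
        for key, value in rows.items()
    )
    return "<table>{0}</table>".format(cells)


if __name__ == "__main__":
    print(mini_pyinfo())
```

真正的 pyinfo 在此基础上补充了样式和更多条目,但核心流程与此类似。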
+ +是的,有人已经实现了。但是,对我来说它非常难看。荒谬!因为我无法忍受丑陋的布局 *咳嗽*,我不得不亲自动手来做。所以我用找到的代码,并重新进行布局使之更好看点。Python 官方网站的布局看起来不错,那么何不盗取他们的颜色和背景图片呢?是的,这听起来像一个计划。 + +[Gits Here][1] | [Download here][2] | [Example here][3] + +提醒你下,我仅仅在 Python 2.6.4 上运行过它,所以在别的版本上可能有风险(不过将它移植到其他版本应该是没有问题的)。为了能使它工作,只需要导入 pyinfo 文件,调用 pyinfo() 函数并捕获其返回值,再将其打印到屏幕上。好嘞! + +如果你没弄明白上面的意思,并且正在使用 [mod_wsgi][4],可以像下面这样运行它(当然得替换路径): + +``` +def application(environ, start_response): + import sys + path = 'YOUR_WWW_ROOT_DIRECTORY' + if path not in sys.path: + sys.path.append(path) + from pyinfo import pyinfo + output = pyinfo() + start_response('200 OK', [('Content-type', 'text/html')]) + return [output] +``` +--- + +via:http://bran.name/articles/pyinfo-a-good-looking-phpinfo-like-python-script/ + +作者:[Bran van der Meer][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译, +[Linux中国](https://linux.cn/) 荣誉推出 + + +[a]:http://bran.name/resume/ +[1]:https://gist.github.com/951825#file_pyinfo.py +[2]:http://bran.name/dump/pyinfo.zip +[3]:http://bran.name/dump/pyinfo/index.py +[4]:http://code.google.com/p/modwsgi/ From 55d860348b6c4bd7b55ef83675482203d49a515a Mon Sep 17 00:00:00 2001 From: alim0x Date: Sat, 10 Oct 2015 16:46:53 +0800 Subject: [PATCH 664/697] [translated]20151007 Open Source Media Player MPlayer 1.2 Released --- ...ource Media Player MPlayer 1.2 Released.md | 64 ------------------- ...ource Media Player MPlayer 1.2 Released.md | 62 ++++++++++++++++++ 2 files changed, 62 insertions(+), 64 deletions(-) delete mode 100644 sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md create mode 100644 translated/share/20151007 Open Source Media Player MPlayer 1.2 Released.md diff --git a/sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md b/sources/share/20151007 Open Source Media Player MPlayer 1.2 Released.md deleted file mode 100644 index 52a6887786..0000000000 --- a/sources/share/20151007 Open
Source Media Player MPlayer 1.2 Released.md +++ /dev/null @@ -1,64 +0,0 @@ -alim0x translating - -Open Source Media Player MPlayer 1.2 Released -================================================================================ -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/MPlayer-1.2.jpg) - -Almost three years after [MPlaayer][1] 1.1, the new version of MPlayer has been released last week. MPlayer 1.2 brings up support for many new codecs in this release. - -MPlayer is a cross-platform, open source media player. Its name is an abbreviation of “Movie Player”. MPlayer has been one of the oldest video players for Linux and during last 15 years, it has inspired a number of other media players. Some of the famous media players based on MPlayer are: - -- [MPV][2] -- SMPlayer -- KPlayer -- GNOME MPlayer -- Deepin Player - -#### What’s new in MPlayer 1.2? #### - -- Compatibility with FFmpeg 2.8 -- VDPAU hardware acceleration for H.265/HEVC -- A number of new codecs supported via FFmpeg -- Improvements in TV and DVB support -- GUI improvements -- external dependency on libdvdcss/libdvdnav packages - -#### Install MPlayer 1.2 in Linux #### - -Most Linux distributions are still having MPlayer 1.1. If you want to use the new MPlayer 1.2, you’ll have to compile it from the source code which could be tricky at times for beginners. - -I have used Ubuntu 15.04 for the installation of MPlayer 1.2. Installation instructions will remain the same for all Linux distributions except the part where you need to install yasm. - -Open a terminal and use the following commands: - - wget http://www.mplayerhq.hu/MPlayer/releases/MPlayer-1.2.tar.xz - - tar xvf MPlayer-1.1.1.tar.xz - - cd MPlayer-1.2 - - sudo apt-get install yasm - - ./configure - -When you run make, it will throw a number of things on the terminal screen and takes some time to build it. Have patience. 
- - make - - sudo make install - -If you feel uncomfortable using the source code, I advise you to either wait forMPlayer 1.2 to land in the repositories of your Linux distribution or use an alternate like MPV. - --------------------------------------------------------------------------------- - -via: http://itsfoss.com/mplayer-1-2-released/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:https://www.mplayerhq.hu/ -[2]:http://mpv.io/ diff --git a/translated/share/20151007 Open Source Media Player MPlayer 1.2 Released.md b/translated/share/20151007 Open Source Media Player MPlayer 1.2 Released.md new file mode 100644 index 0000000000..fa2584bc0c --- /dev/null +++ b/translated/share/20151007 Open Source Media Player MPlayer 1.2 Released.md @@ -0,0 +1,62 @@ +开源媒体播放器 MPlayer 1.2 发布 +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/MPlayer-1.2.jpg) + +在 [MPlaayer][1] 1.1 发布将近3年后,新版 MPlayer 终于在上周发布了。在新版本 MPlayer 1.2 中带来了对许多新编码的解码支持。 + +MPlayer 是一款跨平台的开源媒体播放器。它的名字是“Movie Player”的缩写。MPlayer 已经成为 Linux 上最老牌的媒体播放器之一,在过去的15年里,它还启发了许多其他媒体播放器。著名的基于 MPlayer 的媒体播放器有: + +- [MPV][2] +- SMPlayer +- KPlayer +- GNOME MPlayer +- Deepin Player(深度影音) + +#### MPlayer 1.2 更新了些什么? 
#### + +- 兼容 FFmpeg 2.8 +- 对 H.265/HEVC 的 VDPAU 硬件加速 +- 通过 FFmpeg 支持一些新的编解码器 +- 改善电视与数字视频广播支持 +- 界面优化 +- libdvdcss/libdvdnav 包外部依赖 + +#### 在 Linux 中安装 MPlayer 1.2 #### + +大多数 Linux 发行版仓库中还是 MPlayer 1.1 版本。如果你想使用新的 MPlayer 1.2 版本,你需要从源码手动编译,这对新手来说可能有点棘手。 + +我是在 Ubuntu 15.04 上安装的 MPlayer 1.2。除了需要安装 yasm 的地方以外,对所有 Linux 发行版来说安装说明都是一样的。 + +打开一个终端,运行下列命令: + + wget http://www.mplayerhq.hu/MPlayer/releases/MPlayer-1.2.tar.xz + + tar xvf MPlayer-1.2.tar.xz + + cd MPlayer-1.2 + + sudo apt-get install yasm + + ./configure + +在你运行 make 的时候,在你的终端屏幕上会显示一些东西,并且你需要一些时间来编译它。保持耐心。 + + make + + sudo make install + +如果你觉得从源码编译不大习惯的话,我建议你等待 MPlayer 1.2 提交到你的 Linux 发行版仓库中,或者用其它的播放器替代,比如 MPV。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/mplayer-1-2-released/ + +作者:[Abhishek][a] +译者:[alim0x](https://github.com/alim0x) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:https://www.mplayerhq.hu/ +[2]:http://mpv.io/ From 05e21d97122dd0190a227239ef604a57d6df84aa Mon Sep 17 00:00:00 2001 From: alim0x Date: Sat, 10 Oct 2015 16:50:48 +0800 Subject: [PATCH 665/697] =?UTF-8?q?=E7=BA=A0=E6=AD=A3=E6=8B=BC=E5=86=99?= =?UTF-8?q?=E9=94=99=E8=AF=AF?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20151007 Open Source Media Player MPlayer 1.2 Released.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/share/20151007 Open Source Media Player MPlayer 1.2 Released.md b/translated/share/20151007 Open Source Media Player MPlayer 1.2 Released.md index fa2584bc0c..622adebd61 100644 --- a/translated/share/20151007 Open Source Media Player MPlayer 1.2 Released.md +++ b/translated/share/20151007 Open Source Media Player MPlayer 1.2 Released.md @@ -2,7 +2,7 @@
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/MPlayer-1.2.jpg) -在 [MPlaayer][1] 1.1 发布将近3年后,新版 MPlayer 终于在上周发布了。在新版本 MPlayer 1.2 中带来了对许多新编码的解码支持。 +在 [MPlayer][1] 1.1 发布将近3年后,新版 MPlayer 终于在上周发布了。在新版本 MPlayer 1.2 中带来了对许多新编码的解码支持。 MPlayer 是一款跨平台的开源媒体播放器。它的名字是“Movie Player”的缩写。MPlayer 已经成为 Linux 上最老牌的媒体播放器之一,在过去的15年里,它还启发了许多其他媒体播放器。著名的基于 MPlayer 的媒体播放器有: From dffe69612546fc03a056ab679c9ab886bb1bc6ee Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 10 Oct 2015 22:47:34 +0800 Subject: [PATCH 666/697] translating --- ...0151007 Fix Shell Script Opens In Text Editor In Ubuntu.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md b/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md index 95f7bb4ee5..147f082f4c 100644 --- a/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md +++ b/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md @@ -1,3 +1,5 @@ +translating---geekpi + Fix Shell Script Opens In Text Editor In Ubuntu ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Run-Shell-Script-on-Double-Click.jpg) @@ -36,4 +38,4 @@ via: http://itsfoss.com/shell-script-opens-text-editor/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 -[a]:http://itsfoss.com/author/abhishek/ \ No newline at end of file +[a]:http://itsfoss.com/author/abhishek/ From 26bee9751b358b87e9310609019c8a3e284af017 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 10 Oct 2015 23:23:55 +0800 Subject: [PATCH 667/697] Translating sources/tech/20151007 How To Download Videos Using youtube-dl In Linux.md --- .../20151007 How To Download Videos Using youtube-dl In Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20151007 How To Download Videos Using youtube-dl In Linux.md 
b/sources/tech/20151007 How To Download Videos Using youtube-dl In Linux.md index fa7dcbed6c..f72bde05de 100644 --- a/sources/tech/20151007 How To Download Videos Using youtube-dl In Linux.md +++ b/sources/tech/20151007 How To Download Videos Using youtube-dl In Linux.md @@ -1,3 +1,4 @@ +ictlyh Translating How To Download Videos Using youtube-dl In Linux ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Download-YouTube-Videos.jpeg) From 97b4b470294147884cba9c91c06684397558fac6 Mon Sep 17 00:00:00 2001 From: Yuking-net Date: Sun, 11 Oct 2015 00:37:55 +0800 Subject: [PATCH 668/697] Update 20150930 Debian dropping the Linux Standard Base.md --- .../news/20150930 Debian dropping the Linux Standard Base.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/news/20150930 Debian dropping the Linux Standard Base.md b/sources/news/20150930 Debian dropping the Linux Standard Base.md index dc3e3adcce..22a5a79a44 100644 --- a/sources/news/20150930 Debian dropping the Linux Standard Base.md +++ b/sources/news/20150930 Debian dropping the Linux Standard Base.md @@ -37,7 +37,7 @@ Perhaps, then, LSB compliance is still important to some business sectors, but i via:https://lwn.net/Articles/658809/ 作者:Nathan Willis -译者:[译者ID](https://github.com/译者ID) +译者:[Yuking](https://github.com/Yuking-net) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译, From 9b10df39584038ca4dcb9812dee54725a1506b66 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 11 Oct 2015 08:58:39 +0800 Subject: [PATCH 669/697] translated --- ...l Script Opens In Text Editor In Ubuntu.md | 26 +++++++++---------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md b/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md index 147f082f4c..f1d9f7253f 100644 
--- a/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md +++ b/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md @@ -1,39 +1,37 @@ -translating---geekpi - -Fix Shell Script Opens In Text Editor In Ubuntu +修复Shell脚本在Ubuntu中用文本编辑器打开的方式 ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Run-Shell-Script-on-Double-Click.jpg) -When you double click on a shell script (.sh file) what do you expect? The normal expectation would be that it is executed. But this might not be the case in Ubuntu, or I should better say in case of Files (Nautilus). You may go crazy yelling "Run, File, Run", but the file won't run and instead it gets opened in Gedit. +当你双击一个脚本(.sh文件)的时候,你期望发生什么?通常的想法是执行它。但是在Ubuntu下面却不是这样,或者我应该更确切地说是在Files(Nautilus)中。你可能会疯狂地大叫"运行文件,运行文件",但是文件没有运行而是用Gedit打开了。 -I know that you would say, does the file has execute permission? And I say, yes. The shell script has execute permission but still if I double click on it, it is opened in a text editor. I don't want it and if you are facing the same issue, I assume that even you don't want it. +我知道你也许会说文件有可执行权限么?我会说是的。脚本有可执行权限但是当我双击它的时候,它还是用文本编辑器打开了。我不希望这样。如果你遇到了同样的问题,我想你也许也不希望这样。 -I know that you would have been advised to run it in the terminal and I know that it would work but that's not an excuse for the GUI way to not work. Is it?
+shell脚本用文件编辑器打开的原因是Files(Ubuntu中的文件管理器)中的默认行为设置。在更早的版本中,它或许会询问你是否运行文件或者用编辑器打开。默认的行位在新的版本中被修改了。 -To fix it, go in file manager and from the top menu and click on **Preference**: +要修复这个,进入文件管理器,并在菜单中点击**选项**: ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/execute-shell-program-ubuntu-1.png) -Next in **Files preferences**, go to **Behavior** tab and you’ll see the option of “**Executables Text Files**“. +接下来在**文件选项**中进入**行为**标签中,你会看到**文本文件执行**选项。 -By default, it would have been set to “View executable text files when they are opened”. I would advise you to change it to “Ask each time” so that you’ll have the choice whether to execute it or edit but of course you can set it by default for execution. Your choice here really. +默认情况下,它被设置成“在打开是显示文本文件”。我建议你把它改成“每次询问”,这样你可以选择是执行还是编辑了,当然了你也可以选择默认执行。你可以自行选择。 ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/execute-shell-program-ubuntu-2.png) -I hope this quick tip helped you to fix this little ‘issue’. Questions and suggestions are always welcomed. 
+我希望这个贴士可以帮你修复这个小“问题”。欢迎提出问题和建议。 -------------------------------------------------------------------------------- via: http://itsfoss.com/shell-script-opens-text-editor/ 作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 742bf1845becf48abe80a6de791e2b31e50bfe21 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 11 Oct 2015 08:59:29 +0800 Subject: [PATCH 670/697] Rename sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md to translated/tech/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md --- .../20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md => translated/tech/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md (100%) diff --git a/sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md b/translated/tech/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md similarity index 100% rename from sources/tech/20151007 Fix Shell Script Opens In Text Editor In Ubuntu.md rename to translated/tech/20151007-Fix-Shell-Script-Opens-In-Text Editor In Ubuntu.md From f2d9efbae401c19b1c3efc7cabfce14882592696 Mon Sep 17 00:00:00 2001 From: alim0x Date: Sun, 11 Oct 2015 10:53:19 +0800 Subject: [PATCH 671/697] [translating]20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools --- ...re Info Using screenfetch and linux_logo Tools.md | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md b/sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md index 6640454f07..92b47aab50 
100644 --- a/sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md +++ b/sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md @@ -1,3 +1,5 @@ +alim0x translating + Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools ================================================================================ Do you want to display a super cool logo of your Linux distribution along with basic hardware information? Look no further try awesome screenfetch and linux_logo utilities. @@ -80,7 +82,7 @@ To take a screenshot and to save a file, enter: You will see a screenshot file at ~/Desktop/screenFetch-*.jpg. To take a screenshot and upload to imgur directly, enter: - $ screenfetch -su imgur + $ screenfetch -su imgur **Sample outputs:** @@ -100,7 +102,7 @@ You will see a screenshot file at ~/Desktop/screenFetch-*.jpg. To take a screens `ossssssssssssssssssssss/ RAM: 6405MB / 8192MB :ooooooooooooooooooo+. `:+oo+/:-..-:/+o+/- - + Taking shot in 3.. 2.. 1.. 0. ==> Uploading your screenshot now...your screenshot can be viewed at http://imgur.com/HKIUznn @@ -130,7 +132,7 @@ Simply type the following command as per your Linux distro. Simply type the following command: - $ linux_logo + $ linux_logo ![](http://s0.cyberciti.org/uploads/cms/2015/09/debian-linux_logo.jpg) @@ -176,7 +178,7 @@ You can see a list of compiled in logos using: 28 Banner Yes sourcemage Source Mage GNU/Linux large 29 Banner Yes suse SUSE Logo 30 Banner Yes ubuntu Ubuntu Logo - + Do "linux_logo -L num" where num is from above to get the appropriate logo. Remember to also use -a to get ascii version. 
@@ -224,4 +226,4 @@ via: http://www.cyberciti.biz/hardware/howto-display-linux-logo-in-bash-terminal [2]:http://imgur.com/HKIUznn [3]:http://www.cyberciti.biz/faq/bash-for-loop/ [4]:https://github.com/KittyKatt/screenFetch -[5]:https://github.com/deater/linux_logo \ No newline at end of file +[5]:https://github.com/deater/linux_logo From 2bf99c2b3cc4a254f2c0d962be1e2164dc9c19a8 Mon Sep 17 00:00:00 2001 From: Yuking Date: Mon, 12 Oct 2015 01:21:57 +0800 Subject: [PATCH 672/697] Changes to be committed: modified: sources/news/20150930 Debian dropping the Linux Standard Base.md --- ...Debian dropping the Linux Standard Base.md | 37 +++++++++---------- 1 file changed, 17 insertions(+), 20 deletions(-) diff --git a/sources/news/20150930 Debian dropping the Linux Standard Base.md b/sources/news/20150930 Debian dropping the Linux Standard Base.md index 22a5a79a44..1f9144358a 100644 --- a/sources/news/20150930 Debian dropping the Linux Standard Base.md +++ b/sources/news/20150930 Debian dropping the Linux Standard Base.md @@ -1,38 +1,35 @@ -Debian dropping the Linux Standard Base -======================================= +Debian正在抛弃Linux标准规范 +======================== -The Linux Standard Base (LSB) is a [specification][1] that purports to define the services and application-level ABIs that a Linux distribution will provide for use by third-party programs. But some in the Debian project are questioning the value of maintaining LSB compliance—it has become, they say, a considerable amount of -work for little measurable benefit. +Linux标准规范(LSB)是一个[规范][1],意图定义Linux发行版为第三方程序提供的服务和应用层ABI。但Debian项目内的某些人正在质疑维持LSB一致性的价值,他们认为,该项工作的工作量巨大,但好处有限。 -The LSB was first released in 2001, and was modeled to a degree on the [POSIX][2] and [Single UNIX Specification][3] standards. Today, the LSB is maintained by a [working group][4] at the Linux Foundation. The most recent release was [LSB 5.0][5] in June 2015. It defines five LSB modules (Core, Desktop, Languages, Imaging, and Trial Use).
+LSB于2001年首次公布,建立在[POSIX][2]和[单一UNIX规范][3]的基础之上。目前,LSB由Linux基金会的一个[工作小组][4]维护。最新的版本是于2015年6月发布的[LSB 5.0][5]。它定义了五个LSB模块(核心、桌面、语言、图像和试用)。 -The bulk of each module consists of a list of required libraries and the mandatory version for each, plus a description of the public functions and data definitions for each library. Other contents of the modules include naming and organizational specifications, such as the filesystem layout in the [Filesystem Hierarchy Standard (FHS)][6] or directory specifications like the Freedesktop [XDG Base Directory][7] specification. +每个模块都包含了一系列所需的库及其强制性版本,外加对每个库的公共函数和数据定义的描述。这些模块还包括命名和组织规范,如[文件系统层次标准(FHS)][6]中的文件系统布局或像Freedesktop的[XDG基础目录][7]规范这样的目录规范。 -In what appears to be sheer coincidence, during the same week that LSB 5.0 was released, a discussion arose within the Debian project as to whether or not maintaining LSB compliance was a worthwhile pursuit for Debian. After LSB compliance was mentioned in passing in another thread, Didier Raboud took the opportunity to [propose][8] scaling back Debian's compliance efforts to the bare minimum. As it stands today, he said, Debian's `lsb-*` meta-packages attempt to require the correct versions of the libraries mentioned in the standard, but no one is actually checking that all of the symbols and data definitions are met as a result. +似乎只是一个巧合,就在 LSB 5.0 发布的那一周,Debian 项目内部针对Debian是否值得追求维持LSB一致性进行了一次讨论。在另一个帖子中,在提及LSB一致性后,Didier Raboud顺势[提议][8]将Debian的一致性工作维持在最低水平。他说,目前的情况是,Debian的`lsb-*`元包要求有该标准中提及的库的正确版本,但事实上却没有人去检查所有的符号和数据定义是否满足要求。 -Furthermore, the LSB continues to grow; the 4.1 release (the most recent when Debian "jessie" was released) consisted of "*1493 components, 1672 libs, 38491 commands, 30176 classes and 716202 interfaces*," he said.
No one seems interested in checking those details in the Debian packages, he noted, adding that "*I've held an LSB BoF last year at DebConf, and discussed src:lsb with various people back then, and what I took back was 'roughly no one cares'.*" Just as importantly, though, the lack of interest does not seem to be limited to Debian: +另外,LSB还不断在膨胀;他说,4.1版(Debian "jessie"发布时的最新版本)包含"*1493个组件、1672个库、38491条命令、30176个类和716202个接口*"。似乎没有人有兴趣检查Debian包中的这些细节,他解释道,"*去年在DebConf上我举行过一次LSB BoF,后来又与很多人讨论过src:lsb,我得到的结论是'几乎没有人在意'*"。但同样重要的是,兴趣的缺乏似乎并不仅局限于Debian: - The crux of the issue is, I think, whether this whole game is worth the work: I am yet to hear about software distribution happening through LSB packages. There are only _8_ applications by 6 companies on the LSB certified applications list, of which only one is against LSB >= 4. + 我认为,这个问题的关键在于是否值得去玩这整个游戏:我还没听说有哪个软件通过LSB包来发行。LSB认证的应用清单上只有6个公司的_8_个应用,其中仅有一个 LSB >= 4。 -Raboud proposed that Debian drop everything except for the [lsb-base][9] package (which currently includes a small set of shell functions for use by the init system) and the [lsb-release][10] package (which provides a simple tool that users can use to query the identity of the distribution and what level of LSB compliance it advertises). +Raboud提议Debian摈弃除了[lsb-base][9]包(目前包括一个用于启动系统所需的小的shell函数集合)和[lsb-release][10]包(提供一个简单工具,用户可用它查询发行版的身份以及该发行版宣称的与哪个LSB级别一致)之外的所有内容。 -In a follow-up [message][11], he noted that changing the LSB to be, essentially, "*whatever Debian as well as all other actors in the FLOSS world are _actually_ doing*" might make the standard—and the effort to support it in Debian—more valuable. But here again, he questioned whether anyone was interested in pursuing that objective.
+[后来][11],他又称,将LSB基本上改变为“*Debian和FLOSS世界中的所有其它演员所实际做的任何事*”可能会使得该标准(以及在Debian为支持它所做的工作)更有价值。但此时他再次质疑是否有人会对推动这个目标有兴趣。 -If his initial comments about lack of interest in LSB were not evidence enough, a full three months then went by with no one offering any support for maintaining the LSB-compliance packages and two terse votes in favor of dropping them. Consequently, on September 17, Raboud [announced][12] that he had gutted the `src:lsb` package (leaving just `lsb-base` and `lsb-release` as described) and uploaded it to the "unstable" archive. That minimalist set of tools will allow an interested user to start up the next Debian release and query whether or not it is LSB-compliant—and the answer will be "no." +如果说他最初称LSB中缺乏兴趣没有足够的证据,随后整整三个月之内没有任何人对维持LSB-一致性的包提供支持,并进行了两次拋弃它们的投票。最后,9月17日,Raboud[宣布][12]他已经抽掉`src:lsb`包(如前所述,保留了`lsb-base`和`lsb-relese`),将将其上载到“unstable”档案中。感兴趣的用户可通过最小限度的工具集来启动下一个Debian版本,并查询它是否符合LSB-一致性:结果将为“否”。 -Raboud added that Debian does still plan to maintain FHS compliance, even though it is dropping LSB compliance: +Raboud补充说,即便摈弃了LSB一致性,Debian仍计划保留FHS一致性: - But Debian's not throwing all of the LSB overboard: we're still firmly standing behind the FHS (version 2.3 through Debian Policy; although 3.0 was released in August this year) and our SysV init scripts mostly conform to LSB VIII.22.{2-8}. But don't get me wrong, this src:lsb upload is an explicit move away from the LSB. + 但Debian并没有放弃所有的LSB:我们仍将严格遵守FHS(Debian Policy中的版本2.3;虽然今年8月已经发布了3.0),而且我们的SysV启动脚本几乎全部遵循VIII.22.{2-8}。但请不要误解我们,此次src:lsb上载明确说明我们将离开LSB。 -After the announcement, Nikolaus Rath [replied][13] that some proprietary applications expect `ld-lsb.so*` symbolic links to be present in `/lib` and `/lib64`, and that those symbolic links had been provided by the `lsb-*` package set. Raboud [suggested][14] that the links should be provided by the `libc6` package instead; package maintainer Aurelien Jarno [said][15] he would accept such a patch if it was provided. 
+在该宣告之后,Nikolaus Rath[回应][13]称某些私有应用依赖`/lib`和`/lib64`中的符号链接`ld-lsb.so*`,而这些符号链接由`lsb-*`包提供。Raboud则[建议][14]应改由`libc6`包提供;包维护人员Aurelien Jarno[称][15],如果提供这样一个补丁,他将会接受它。 -The only remaining wrinkle, it seems, is that there are some printer-driver packages that expect some measure of LSB compliance. Raboud had noted in his first message that [OpenPrinting][16] drivers were the only example of LSB-compliant packages he had seen actually distributed. Michael Biebl [noted][17] that there was one such driver package in the main archive; Raboud [replied][18] that he believed the package in question ought to be moved to the non-free repository anyway, since it contained a binary driver. +似乎唯一的遗留问题只是某些打印机驱动包会依赖LSB一致性。Raboud称,在其首个贴子中已经说明,据他所知,实际发布的唯一一个依赖LSB一致性的包为[OpenPrinting][16]驱动程序。Michael Biebl[称][17],主档案中有这样一个驱动包;Raboud则[回应][18]说,他相信有问题的包应该被移到非自由仓库,因其包括了一个二进制驱动。 -With that, the issue appears to be settled, at least for the current Debian development cycle. What will be more interesting, naturally, will be to see what effect, if any, the decision has on broader LSB acceptance. As Raboud alluded to, the number of distributions that are certified as LSB-compliant is [small][19]. It is hard not to notice that those distributions are largely of the "enterprise" variety. +于是,这个问题看上去已经尘埃落定,至于对于目前的Debian开发周期来说是如此的状况。很自然的是,未来让人更感觉兴趣的是,如果该决定存在一些影响的话,那么人们将会看到它对更广泛的LSB接受度有何影响。正如Raboud所说的那样,被认证为LSB-一致性的发行版数量很[小][19]。人们很难不注意到这些发行版很大程度上是“企业”的变种。 -Perhaps, then, LSB compliance is still important to some business sectors, but it is hard to know how many customers of those enterprise distributions genuinely care about the LSB certification stamp. If Debian's experience is anything to go by, however, general interest in such certification may be in steep decline. 
- ---- +也许,对某些商业领域来说,LSB仍很重要,但很难知道有多少那些企业发行版的客户真正关心LSB认证标签。然而,如果Debian按此发展下去,对这种认证的一般兴趣可能会急剧下降。 via:https://lwn.net/Articles/658809/ From 3e94db32aa89d7eeb68f06ea8870b4ef3ae09bbd Mon Sep 17 00:00:00 2001 From: Yuking-net Date: Mon, 12 Oct 2015 01:27:56 +0800 Subject: [PATCH 673/697] Changes to be committed: modified: sources/news/20150930 Debian dropping the Linux Standard Base.md --- .../news/20150930 Debian dropping the Linux Standard Base.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/news/20150930 Debian dropping the Linux Standard Base.md b/sources/news/20150930 Debian dropping the Linux Standard Base.md index 1f9144358a..b29f729aff 100644 --- a/sources/news/20150930 Debian dropping the Linux Standard Base.md +++ b/sources/news/20150930 Debian dropping the Linux Standard Base.md @@ -17,7 +17,7 @@ Raboud提议Debian摈弃除了[lsb-base][9]包(目前包括一个用于启动 [后来][11],他又称,将LSB基本上改变为“*Debian和FLOSS世界中的所有其它演员所实际做的任何事*”可能会使得该标准(以及在Debian为支持它所做的工作)更有价值。但此时他再次质疑是否有人会对推动这个目标有兴趣。 -如果说他最初称LSB中缺乏兴趣没有足够的证据,随后整整三个月之内没有任何人对维持LSB-一致性的包提供支持,并进行了两次拋弃它们的投票。最后,9月17日,Raboud[宣布][12]他已经抽掉`src:lsb`包(如前所述,保留了`lsb-base`和`lsb-relese`),将将其上载到“unstable”档案中。感兴趣的用户可通过最小限度的工具集来启动下一个Debian版本,并查询它是否符合LSB-一致性:结果将为“否”。 +如果说他最初称LSB中缺乏兴趣没有足够的证据,随后整整三个月之内没有任何人对维持LSB-一致性的包提供支持,并进行了两次拋弃它们的投票。最后,9月17日,Raboud[宣布][12]他已经抽掉`src:lsb`包(如前所述,保留了`lsb-base`和`lsb-release`),将将其上载到“unstable”档案中。感兴趣的用户可通过最小限度的工具集来启动下一个Debian版本,并查询它是否符合LSB-一致性:结果将为“否”。 Raboud补充说,即便摈弃了LSB一致性,Debian仍计划保留FHS一致性: From e9f12804e05dbe3bcc44d6ae5a6e543d971c1602 Mon Sep 17 00:00:00 2001 From: "yuking_net@sohu.com" Date: Mon, 12 Oct 2015 01:32:11 +0800 Subject: [PATCH 674/697] Changes to be committed: modified: sources/news/20150930 Debian dropping the Linux Standard Base.md --- .../news/20150930 Debian dropping the Linux Standard Base.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/news/20150930 Debian dropping the Linux Standard Base.md b/sources/news/20150930 Debian dropping the Linux 
Standard Base.md index b29f729aff..9fca1694dc 100644 --- a/sources/news/20150930 Debian dropping the Linux Standard Base.md +++ b/sources/news/20150930 Debian dropping the Linux Standard Base.md @@ -1,5 +1,5 @@ -Debian正在拋弃Linux标准规范 -======================== +Debian拋弃Linux标准规范 +======================= Linux标准规范(LSB)是一个意图定义一个Linux发行版提供的第三方程序的服务和应用层ABI的[规范][1]。但Debian项目内的某些人正在质疑维持LSB一致性的价值,他们认为,该项工作的工作量巨大,但好处有限。 From 8978eee36928cfd8590bb44c4d499a3395384658 Mon Sep 17 00:00:00 2001 From: "yuking_net@sohu.com" Date: Mon, 12 Oct 2015 01:38:56 +0800 Subject: [PATCH 675/697] Changes to be committed: modified: 20150930 Debian dropping the Linux Standard Base.md --- ... Debian dropping the Linux Standard Base.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/sources/news/20150930 Debian dropping the Linux Standard Base.md b/sources/news/20150930 Debian dropping the Linux Standard Base.md index 9fca1694dc..f3d8b5214c 100644 --- a/sources/news/20150930 Debian dropping the Linux Standard Base.md +++ b/sources/news/20150930 Debian dropping the Linux Standard Base.md @@ -1,33 +1,33 @@ Debian拋弃Linux标准规范 ======================= -Linux标准规范(LSB)是一个意图定义一个Linux发行版提供的第三方程序的服务和应用层ABI的[规范][1]。但Debian项目内的某些人正在质疑维持LSB一致性的价值,他们认为,该项工作的工作量巨大,但好处有限。 +Linux标准规范(LSB)是一个意图定义一个Linux发行版提供的第三方程序的服务和应用层ABI的[规范][1]。但Debian项目内的某些人正在质疑维持兼容LSB的价值,他们认为,该项工作的工作量巨大,但好处有限。 LSB于2001年首次公布,建立在[POSIX][2]和[单一UNIX规范][3]的基础之上。目前,LSB由Linux基金会的一个[工作小组][4]维护。最新的版本是于2015年6月发布的[LSB 5.0][5]。它定义了五个LSB模块(核芯、桌面、语言、图形和试用)。 每个模块都包含了一系列所需的库及其强制性版本,外加对每个库的公共函数和数据定义的描述。这些模块还包括命名和组织规范,如[文件系统层次标准(FHS)][6]中的文件系统布局或象Freedesktop的[XDG基础目录][7]规范这样的目录规范。 -似乎只是一个巧合,就在 LSB 5.0 发布的那一周,Debian 项目内部针对Debian是否值得追求维持LSB一致性进行了一次讨论。在另一个贴子中,在提及LSB一致性后,Didier Raboud顺势[提议][8]将Debian的一致性工作维持在最低水平。他说,目前的情况是,Debian的“lsb-*”元包要求有该标准中提及的库的正确版本,但事实上却没有人去检查所有的符号和数据定义是否满足要求。 +似乎只是一个巧合,就在 LSB 5.0 发布的那一周,Debian 项目内部针对Debian是否值得追求维持兼容LSB进行了一次讨论。在另一个贴子中,在提及兼容LSB后,Didier 
Raboud顺势[提议][8]将Debian的兼容工作维持在最低水平。他说,目前的情况是,Debian的“lsb-*”元包要求有该标准中提及的库的正确版本,但事实上却没有人去检查所有的符号和数据定义是否满足要求。 -另外,LSB还不断在膨胀;他说,4.1版(Debian “jessie”发布时的最新版本)包含“*1493个组件、1672个库、38491条命令、30176个类和716202个接口*”。似乎没有人有兴趣检查Debian包中的这些细节,他解释道,“*去年在DebConf上我举行过一次LSB BoF,后来又与很多人讨论过src:lsb,我收回自己的‘几乎没有人在意’的说法*”。但,重要的是,兴趣的缺乏似乎并不仅局限于Debian: +另外,LSB还不断在膨胀;他说,4.1版(Debian “jessie”发布时的最新版本)包含“*1493个组件、1672个库、38491条命令、30176个类和716202个接口*”。似乎没有人有兴趣检查Debian包中的这些细节,他解释道,“*去年在DebConf上我举行过一次LSB BoF,后来又与很多人讨论过src:lsb,我收回自己的‘几乎没有人在意’的说法*”。但,重要的是,Debian似乎并不仅局限于兴趣的缺乏: 我认为,这个问题的关键在于是否值得去玩这整个游戏:我还没听说有哪个软件通过LSB包来发行。LSB认证的应用清单上只有6个公司的_8_个应用,其中仅有一个 LSB >= 4。 -Raboud提议Debian摈弃除了[lsb-base][9]包(目前包括一个用于启动系统所需的小的shell函数集合)和[lsb-release][10]包(提供一个简单工具,用户可用它查询发行版的身份以及该发行版宣称的与哪个LSB级别一致)之外的所有内容。 +Raboud提议Debian摈弃除了[lsb-base][9]包(目前包括一个用于启动系统所需的小的shell函数集合)和[lsb-release][10]包(提供一个简单工具,用户可用它查询发行版的身份以及该发行版宣称的与哪个LSB级别兼容)之外的所有内容。 [后来][11],他又称,将LSB基本上改变为“*Debian和FLOSS世界中的所有其它演员所实际做的任何事*”可能会使得该标准(以及在Debian为支持它所做的工作)更有价值。但此时他再次质疑是否有人会对推动这个目标有兴趣。 -如果说他最初称LSB中缺乏兴趣没有足够的证据,随后整整三个月之内没有任何人对维持LSB-一致性的包提供支持,并进行了两次拋弃它们的投票。最后,9月17日,Raboud[宣布][12]他已经抽掉`src:lsb`包(如前所述,保留了`lsb-base`和`lsb-release`),将将其上载到“unstable”档案中。感兴趣的用户可通过最小限度的工具集来启动下一个Debian版本,并查询它是否符合LSB-一致性:结果将为“否”。 +如果说他最初称LSB中缺乏兴趣没有足够的证据,随后整整三个月之内没有任何人对维持LSB兼容的包提供支持,并进行了两次拋弃它们的投票。最后,9月17日,Raboud[宣布][12]他已经抽掉`src:lsb`包(如前所述,保留了`lsb-base`和`lsb-release`),将将其上载到“unstable”档案中。感兴趣的用户可通过最小限度的工具集来启动下一个Debian版本,并查询它是否兼容LSB:结果将为“否”。 -Raboud补充说,即便摈弃了LSB一致性,Debian仍计划保留FHS一致性: +Raboud补充说,即便摈弃了兼容LSB,Debian仍计划继续兼容FHS: 但Debian并没有放弃所有的LSB:我们仍将严格遵守FHS(Debian Policy中的版本2.3;虽然今年8月已经发布了3.0),而且我们的SysV启动脚本几乎全部遵循VIII.22.{2-8}。但请不要误解我们,此次src:lsb上载明确说明我们将离开LSB。 在该宣告之后,Nikolaus Rath[回应][13]称某些私有应用依赖`/lib`和`/lib64`中的符号链接`ld-lsb.so*`,而这些符号链接由`lsb-*`包提供。Raboud则[建议][14]应改由`libc6`包提供;包维护人员Aurelien Jarno[称][15],如果提供这样一个补丁,他将会接受它。 -似乎唯一的遗留问题只是某些打印机驱动包会依赖LSB一致性。Raboud称,在其首个贴子中已经说明,据他所知,实际发布的唯一一个依赖LSB一致性的包为[OpenPrinting][16]驱动程序。Michael 
Biebl[称][17],主档案中有这样一个驱动包;Raboud则[回应][18]说,他相信有问题的包应该被移到非自由仓库,因其包括了一个二进制驱动。 +似乎唯一的遗留问题只是某些打印机驱动包会依赖LSB兼容。Raboud称,在其首个贴子中已经说明,据他所知,实际发布的唯一一个依赖LSB兼容的包为[OpenPrinting][16]驱动程序。Michael Biebl[称][17],主档案中有这样一个驱动包;Raboud则[回应][18]说,他相信有问题的包应该被移到非自由仓库,因其包括了一个二进制驱动。 -于是,这个问题看上去已经尘埃落定,至于对于目前的Debian开发周期来说是如此的状况。很自然的是,未来让人更感觉兴趣的是,如果该决定存在一些影响的话,那么人们将会看到它对更广泛的LSB接受度有何影响。正如Raboud所说的那样,被认证为LSB-一致性的发行版数量很[小][19]。人们很难不注意到这些发行版很大程度上是“企业”的变种。 +于是,这个问题看上去已经尘埃落定,至于对于目前的Debian开发周期来说是如此的状况。很自然的是,未来让人更感觉兴趣的是,如果该决定存在一些影响的话,那么人们将会看到它对更广泛的LSB接受度有何影响。正如Raboud所说的那样,被认证为LSB兼容的发行版数量很[小][19]。人们很难不注意到这些发行版很大程度上是“企业”的变种。 也许,对某些商业领域来说,LSB仍很重要,但很难知道有多少那些企业发行版的客户真正关心LSB认证标签。然而,如果Debian按此发展下去,对这种认证的一般兴趣可能会急剧下降。 @@ -59,5 +59,5 @@ via:https://lwn.net/Articles/658809/ [16]:http://www.linuxfoundation.org/collaborate/workgroups/openprinting/ [17]:/Articles/658844/ [18]:/Articles/658845/ - +[19]:https://www.linuxbase.org/lsb-cert/productdir.php?by_lsb From e4fb9ea72f6a766670d091d326511c9af1b5a199 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 12 Oct 2015 10:35:41 +0800 Subject: [PATCH 676/697] PUB:20150930 Debian dropping the Linux Standard Base @Yuking-net --- ...Debian dropping the Linux Standard Base.md | 65 +++++++++++++++++++ ...Debian dropping the Linux Standard Base.md | 63 ------------------ 2 files changed, 65 insertions(+), 63 deletions(-) create mode 100644 published/20150930 Debian dropping the Linux Standard Base.md delete mode 100644 sources/news/20150930 Debian dropping the Linux Standard Base.md diff --git a/published/20150930 Debian dropping the Linux Standard Base.md b/published/20150930 Debian dropping the Linux Standard Base.md new file mode 100644 index 0000000000..c854ba8dfe --- /dev/null +++ b/published/20150930 Debian dropping the Linux Standard Base.md @@ -0,0 +1,65 @@ +Debian 拋弃 Linux 标准规范(LSB) +======================= + +Linux 标准规范(LSB)是一个意图定义 Linux 发行版为第三方程序所提供的服务和应用层 ABI(Application Binary Interfaces,程序二进制界面) 的[规范][1]。但 Debian 项目内的某些人正在质疑是否值得维持兼容 LSB,他们认为,该项工作的工作量巨大,但好处有限。 + +LSB 
于2001年首次公布,其模型建立在 [POSIX][2] 和[单一 UNIX 规范(Single UNIX Specification)][3]的基础之上。目前,LSB 由 Linux 基金会的一个[工作小组][4]维护。最新的版本是于2015年6月发布的 [LSB 5.0][5]。它定义了五个 LSB 模块(核芯(core)、桌面、语言、成像(imaging)和试用)。 + +每个模块都包含了一系列所需的库及其强制性版本,外加对每个库的公共函数和数据定义的描述。这些模块还包括命名和组织规范,如[文件系统层次标准(FHS,Filesystem Hierarchy Standard)][6]中的文件系统布局或象 Freedesktop 的[XDG 基础目录(XDG Base Directory)][7]规范这样的目录规范。 + +似乎只是一个巧合,就在 LSB 5.0 发布的同一周,Debian 项目内部针对其是否值得保持兼容 LSB 进行了一次讨论。在另一个贴子中,在提及兼容 LSB 后,Didier Raboud 顺势[提议][8]将 Debian 的兼容工作维持在最低水平。他说,目前的情况是,Debian 的“lsb-*” 元包(meta-packages)试图规定该标准中提及的库的正确版本,但事实上却没有人去检查所有的符号和数据定义是否满足要求。 + +另外,LSB 还不断在膨胀;他说,LSB 4.1 版(接近 Debian “jessie” 发布时的最新版本)包含“*1493个组件、1672个库、38491条命令、30176个类和716202个接口*”。似乎没有人有兴趣检查 Debian 包中的这些细节,他解释道,又补充说,“*去年在 DebConf 上我举行过一次 LSB BoF,后来又与很多人讨论过 src:lsb,而我带回来的结论是‘几乎没有人在意’*”。但重要的是,这种兴趣的缺乏似乎并不仅局限于 Debian: + + 我认为,这个问题的关键在于是否值得去玩这整个游戏:我还没听说有哪个软件通过 LSB 包来发行。LSB 认证的应用清单上只有 6 个公司的_8_个应用,其中仅有一个是针对不低于 LSB 4 的。 + +Raboud 提议 Debian 摈弃除了 [lsb-base][9] 包(目前包括一个供 init 系统使用的小的 shell 函数集合)和 [lsb-release][10] 包(提供一个简单工具,用户可用它查询发行版的身份以及该发行版宣称的与哪个 LSB 级别兼容)之外的所有内容。 + +[后来][11],他又称,将 LSB 基本上改变为“*Debian 和 FLOSS 世界中的所有的其它人所_实际_做的任何事*”可能会使得该标准(以及在 Debian 为支持它所做的工作)更有价值。但此时他再次质疑是否有人会对推动这个目标有兴趣。 + +如果说他最初关于 LSB 乏人问津的说法还证据不足的话,那么随后整整三个月里,没有任何人对维护这些 LSB 兼容包表示支持,只有两票简短的赞成拋弃它们的投票。最后,9月17日,Raboud [宣布][12]他已经抽掉 `src:lsb` 包(如前所述,保留了`lsb-base` 和 `lsb-release`),并将其上载到 “unstable” 归档中。这个最小的工具集可以让感兴趣的用户在启动了下一个 Debian 版本后查询它是否兼容 LSB:结果将为“否”。 + +Raboud 补充说,即便摈弃了兼容 LSB,Debian 仍计划继续兼容 FHS: + + 但 Debian 并没有放弃所有的 LSB:我们仍将严格遵守 FHS(即 Debian Policy 所要求的 2.3 版;虽然今年8月已经发布了 3.0 版),而且我们的 SysV 启动脚本几乎全部遵循 LSB VIII.22.{2-8}。但请不要误解,此次 src:lsb 上载明确说明我们将离开 LSB。 + +在该宣告之后,Nikolaus Rath [回应][13]称某些私有应用依赖`/lib`和`/lib64`中的符号链接`ld-lsb.so*`,而这些符号链接由`lsb-*`包提供。Raboud 则[建议][14]应改由`libc6`包提供;该包维护人员 Aurelien Jarno [称][15],如果提供这样一个补丁,他将会接受它。 + +似乎唯一的遗留问题只是某些打印机驱动包会依赖 LSB 兼容。Raboud 称,在其首个贴子中已经说明,据他所知,实际发布的唯一一个依赖 LSB 兼容的包为 [OpenPrinting][16] 驱动程序。Michael Biebl [称][17],主归档中有这样一个驱动包;Raboud 则[回应][18]说,他认为这个有问题的包应该被移到非自由仓库,因其包括了一个二进制驱动。 + 
+于是,这个问题看上去已经尘埃落定,至少对于目前的 Debian 开发周期来说是如此。自然,更有意思的是看看这一决定(如果有什么影响的话)会对更广泛的 LSB 接受度产生何种影响。正如 Raboud 所说的那样,被认证为 LSB 兼容的发行版数量很[少][19]。人们很难不注意到这些发行版很大程度上是“企业”的变种。 + +也许,对某些商业领域来说,LSB 仍很重要,但很难知道有多少那些企业发行版的客户真正关心 LSB 认证标签。然而,如果 Debian 的经验靠得住的话,对这种认证的一般兴趣可能会急剧下降。 + +---- + +via:https://lwn.net/Articles/658809/ + +作者:Nathan Willis +译者:[Yuking](https://github.com/Yuking-net) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译, +[Linux中国](https://linux.cn/) 荣誉推出 + + +[1]:http://refspecs.linuxfoundation.org/lsb.shtml +[2]:https://en.wikipedia.org/wiki/POSIX +[3]:https://en.wikipedia.org/wiki/Single_UNIX_Specification +[4]:http://www.linuxfoundation.org/collaborate/workgroups/lsb +[5]:http://www.linuxfoundation.org/collaborate/workgroups/lsb/lsb-50 +[6]:http://www.linuxfoundation.org/collaborate/workgroups/lsb/fhs +[7]:http://standards.freedesktop.org/basedir-spec/basedir-spec-0.6.html +[8]:https://lwn.net/Articles/658838/ +[9]:https://packages.debian.org/sid/lsb-base +[10]:https://packages.debian.org/sid/lsb-release +[11]:https://lwn.net/Articles/658842/ +[12]:https://lwn.net/Articles/658843/ +[13]:https://lwn.net/Articles/658846/ +[14]:https://lwn.net/Articles/658847/ +[15]:https://lwn.net/Articles/658848/ +[16]:http://www.linuxfoundation.org/collaborate/workgroups/openprinting/ +[17]:https://lwn.net/Articles/658844/ +[18]:https://lwn.net/Articles/658845/ +[19]:https://www.linuxbase.org/lsb-cert/productdir.php?by_lsb + diff --git a/sources/news/20150930 Debian dropping the Linux Standard Base.md b/sources/news/20150930 Debian dropping the Linux Standard Base.md deleted file mode 100644 index f3d8b5214c..0000000000 --- a/sources/news/20150930 Debian dropping the Linux Standard Base.md +++ /dev/null @@ -1,63 +0,0 @@ -Debian拋弃Linux标准规范 -======================= - -Linux标准规范(LSB)是一个意图定义一个Linux发行版提供的第三方程序的服务和应用层ABI的[规范][1]。但Debian项目内的某些人正在质疑维持兼容LSB的价值,他们认为,该项工作的工作量巨大,但好处有限。 - 
-LSB于2001年首次公布,建立在[POSIX][2]和[单一UNIX规范][3]的基础之上。目前,LSB由Linux基金会的一个[工作小组][4]维护。最新的版本是于2015年6月发布的[LSB 5.0][5]。它定义了五个LSB模块(核芯、桌面、语言、图形和试用)。 - -每个模块都包含了一系列所需的库及其强制性版本,外加对每个库的公共函数和数据定义的描述。这些模块还包括命名和组织规范,如[文件系统层次标准(FHS)][6]中的文件系统布局或象Freedesktop的[XDG基础目录][7]规范这样的目录规范。 - -似乎只是一个巧合,就在 LSB 5.0 发布的那一周,Debian 项目内部针对Debian是否值得追求维持兼容LSB进行了一次讨论。在另一个贴子中,在提及兼容LSB后,Didier Raboud顺势[提议][8]将Debian的兼容工作维持在最低水平。他说,目前的情况是,Debian的“lsb-*”元包要求有该标准中提及的库的正确版本,但事实上却没有人去检查所有的符号和数据定义是否满足要求。 - -另外,LSB还不断在膨胀;他说,4.1版(Debian “jessie”发布时的最新版本)包含“*1493个组件、1672个库、38491条命令、30176个类和716202个接口*”。似乎没有人有兴趣检查Debian包中的这些细节,他解释道,“*去年在DebConf上我举行过一次LSB BoF,后来又与很多人讨论过src:lsb,我收回自己的‘几乎没有人在意’的说法*”。但,重要的是,Debian似乎并不仅局限于兴趣的缺乏: - - 我认为,这个问题的关键在于是否值得去玩这整个游戏:我还没听说有哪个软件通过LSB包来发行。LSB认证的应用清单上只有6个公司的_8_个应用,其中仅有一个 LSB >= 4。 - -Raboud提议Debian摈弃除了[lsb-base][9]包(目前包括一个用于启动系统所需的小的shell函数集合)和[lsb-release][10]包(提供一个简单工具,用户可用它查询发行版的身份以及该发行版宣称的与哪个LSB级别兼容)之外的所有内容。 - -[后来][11],他又称,将LSB基本上改变为“*Debian和FLOSS世界中的所有其它演员所实际做的任何事*”可能会使得该标准(以及在Debian为支持它所做的工作)更有价值。但此时他再次质疑是否有人会对推动这个目标有兴趣。 - -如果说他最初称LSB中缺乏兴趣没有足够的证据,随后整整三个月之内没有任何人对维持LSB兼容的包提供支持,并进行了两次拋弃它们的投票。最后,9月17日,Raboud[宣布][12]他已经抽掉`src:lsb`包(如前所述,保留了`lsb-base`和`lsb-release`),将将其上载到“unstable”档案中。感兴趣的用户可通过最小限度的工具集来启动下一个Debian版本,并查询它是否兼容LSB:结果将为“否”。 - -Raboud补充说,即便摈弃了兼容LSB,Debian仍计划继续兼容FHS: - - 但Debian并没有放弃所有的LSB:我们仍将严格遵守FHS(Debian Policy中的版本2.3;虽然今年8月已经发布了3.0),而且我们的SysV启动脚本几乎全部遵循VIII.22.{2-8}。但请不要误解我们,此次src:lsb上载明确说明我们将离开LSB。 - -在该宣告之后,Nikolaus Rath[回应][13]称某些私有应用依赖`/lib`和`/lib64`中的符号链接`ld-lsb.so*`,而这些符号链接由`lsb-*`包提供。Raboud则[建议][14]应改由`libc6`包提供;包维护人员Aurelien Jarno[称][15],如果提供这样一个补丁,他将会接受它。 - -似乎唯一的遗留问题只是某些打印机驱动包会依赖LSB兼容。Raboud称,在其首个贴子中已经说明,据他所知,实际发布的唯一一个依赖LSB兼容的包为[OpenPrinting][16]驱动程序。Michael Biebl[称][17],主档案中有这样一个驱动包;Raboud则[回应][18]说,他相信有问题的包应该被移到非自由仓库,因其包括了一个二进制驱动。 - -于是,这个问题看上去已经尘埃落定,至于对于目前的Debian开发周期来说是如此的状况。很自然的是,未来让人更感觉兴趣的是,如果该决定存在一些影响的话,那么人们将会看到它对更广泛的LSB接受度有何影响。正如Raboud所说的那样,被认证为LSB兼容的发行版数量很[小][19]。人们很难不注意到这些发行版很大程度上是“企业”的变种。 - 
-也许,对某些商业领域来说,LSB仍很重要,但很难知道有多少那些企业发行版的客户真正关心LSB认证标签。然而,如果Debian按此发展下去,对这种认证的一般兴趣可能会急剧下降。 - -via:https://lwn.net/Articles/658809/ - -作者:Nathan Willis -译者:[Yuking](https://github.com/Yuking-net) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译, -[Linux中国](https://linux.cn/) 荣誉推出 - - -[1]:http://refspecs.linuxfoundation.org/lsb.shtml -[2]:https://en.wikipedia.org/wiki/POSIX -[3]:https://en.wikipedia.org/wiki/Single_UNIX_Specification -[4]:http://www.linuxfoundation.org/collaborate/workgroups/lsb -[5]:http://www.linuxfoundation.org/collaborate/workgroups/lsb/lsb-50 -[6]:http://www.linuxfoundation.org/collaborate/workgroups/lsb/fhs -[7]:http://standards.freedesktop.org/basedir-spec/basedir-spec-0.6.html -[8]:https://lwn.net/Articles/658838/ -[9]:https://packages.debian.org/sid/lsb-base -[10]:https://packages.debian.org/sid/lsb-release -[11]:https://lwn.net/Articles/658842/ -[12]:/Articles/658843/ -[13]:/Articles/658846/ -[14]:/Articles/658847/ -[15]:/Articles/658848/ -[16]:http://www.linuxfoundation.org/collaborate/workgroups/openprinting/ -[17]:/Articles/658844/ -[18]:/Articles/658845/ -[19]:https://www.linuxbase.org/lsb-cert/productdir.php?by_lsb - From 6fcd54d4e70e7e257afc760f11ff8dc292209656 Mon Sep 17 00:00:00 2001 From: locez Date: Mon, 12 Oct 2015 18:46:23 +0800 Subject: [PATCH 677/697] translating by locez --- ...908 How to Run ISO Files Directly From the HDD with GRUB2.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md b/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md index 7de3640532..147bd7b625 100644 --- a/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md +++ b/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md @@ -1,7 +1,7 @@ +Translating by Locez How to Run ISO Files Directly From the HDD with GRUB2 
================================================================================ ![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-featured.png) - Most Linux distros offer a live environment, which you can boot up from a USB drive, for you to test the system without installing. You can either use it to evaluate the distro or as a disposable OS. While it is easy to copy these onto a USB disk, in certain cases one might want to run the same ISO image often or run different ones regularly. GRUB 2 can be configured so that you do not need to burn the ISOs to disk or use a USB drive, but need to run a live environment directly form the boot menu. ### Obtaining and checking bootable ISO images ### From 9955e76d8b2392f583dea7ad6ee70b5ac18b98b9 Mon Sep 17 00:00:00 2001 From: locez Date: Mon, 12 Oct 2015 21:33:25 +0800 Subject: [PATCH 678/697] translated --- ... Files Directly From the HDD with GRUB2.md | 59 +++++++++---------- 1 file changed, 27 insertions(+), 32 deletions(-) diff --git a/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md b/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md index 147bd7b625..8af47b2d31 100644 --- a/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md +++ b/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md @@ -1,37 +1,32 @@ -Translating by Locez -How to Run ISO Files Directly From the HDD with GRUB2 +如何使用 GRUB 2 直接从硬盘运行 ISO 文件 ================================================================================ ![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-featured.png) -Most Linux distros offer a live environment, which you can boot up from a USB drive, for you to test the system without installing. You can either use it to evaluate the distro or as a disposable OS. 
While it is easy to copy these onto a USB disk, in certain cases one might want to run the same ISO image often or run different ones regularly. GRUB 2 can be configured so that you do not need to burn the ISOs to disk or use a USB drive, but need to run a live environment directly form the boot menu. +大多数 Linux 发行版都会提供一个可以从 USB 启动的 live 环境,以便用户无需安装即可测试系统。我们可以用它来评测这个发行版或仅仅是当成一个一次性系统,并且很容易将这些文件复制到一个 U 盘上,在某些情况下,我们可能需要经常运行同一个或不同的 ISO 镜像。GRUB 2 可以配置成直接从启动菜单运行一个 live 环境,而不需要烧录这些 ISO 到硬盘或 USB 设备。 -### Obtaining and checking bootable ISO images ### +### 获取和检查可启动的 ISO 镜像 ### +为了获取 ISO 镜像,我们通常应该访问所需要的发行版的网站下载与我们架构兼容的镜像文件。如果这个镜像可以从 U 盘启动,那它也应该可以从 GRUB 菜单启动。 -To obtain an ISO image, you should usually visit the website of the desired distribution and download any image that is compatible with your setup. If the image can be started from a USB, it should be able to start from the GRUB menu as well. - -Once the image has finished downloading, you should check its integrity by running a simple md5 check on it. This will output a long combination of numbers and alphanumeric characters +当镜像下载完后,我们应该通过 MD5 校验检查它的完整性。这会输出一大串数字与字母合成的序列。 ![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-md5.png) -which you can compare against the MD5 checksum provided on the download page. The two should be identical. +将这个序列与下载页提供的 MD5 校验码进行比较,两者应该完全相同。 -### Setting up GRUB 2 ### +### 配置 GRUB 2 ### +ISO 镜像文件包含了整个系统。我们要做的仅仅是告诉 GRUB 2 哪里可以找到 kernel 和 initramdisk 或 initram 文件系统(这取决于我们所使用的发行版)。 -ISO images contain full systems. All you need to do is direct GRUB2 to the appropriate file, and tell it where it can find the kernel and the initramdisk or initram filesystem (depending on which one your distribution uses). 
+在下面的例子中,一个 Kubuntu 15.04 live 环境将被配置到 Ubuntu 14.04 盒子的 Grub 启动菜单项。这应该能在大多数新的以 Ubuntu 为基础的系统上运行。如果你是其他系统并且想实现一些其它的东西,你可以从[这些文件][1]获取灵感,但这会要求你拥有一点 GRUB 使用经验。 -In this example, a Kubuntu 15.04 live environment will be set up to run on an Ubuntu 14.04 box as a Grub menu item. It should work for most newer Ubuntu-based systems and derivatives. If you have a different system or want to achieve something else, you can get some ideas on how to do this from one of [these files][1], although it will require a little experience with GRUB. +这个例子的文件 `kubuntu-15.04-desktop-amd64.iso` -In this example the file `kubuntu-15.04-desktop-amd64.iso` +放在位于 `/dev/sda1` 的 `/home/maketecheasier/TempISOs/` 上. -lives in `/home/maketecheasier/TempISOs/` on `/dev/sda1`. - -To make GRUB2 look for it in the right place, you need to edit the +为了使 GRUB 2 能正确找到它,我们应该编辑 /etc/grub.d40-custom ![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-40-custom-empty.png) -To start Kubuntu from the above location, add the following code (after adjusting it to your needs) below the commented section, without modifying the original content. - menuentry "Kubuntu 15.04 ISO" { set isofile="/home/maketecheasier/TempISOs/kubuntu-15.04-desktop-amd64.iso" loopback loop (hd0,1)$isofile @@ -42,52 +37,52 @@ To start Kubuntu from the above location, add the following code (after adjustin ![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-40-custom-new.png) -### Breaking down the above code ### +### 分析上述代码 ### -First set up a variable named `$menuentry`. This is where the ISO file is located. If you want to change to a different ISO, you need to change the bit where it says set `isofile="/path/to/file/name-of-iso-file-.iso"`. +首先设置了一个变量名 `$menuentry` ,这是 ISO 文件的所在位置 。如果你想改变一个 ISO ,你应该修改 `isofile="/path/to/file/name-of-iso-file-.iso"`. -The next line is where you specify the loopback device; you also need to give it the right partition number. 
This is the bit where it says +下一行是指定回环设备,且必须给出正确的分区号码。 loopback loop (hd0,1)$isofile -Note the hd0,1 bit; it is important. This means first HDD, first partition (`/dev/sda1`). +注意 hd0,1 这里非常重要,它的意思是第一硬盘,第一分区 (`/dev/sda1`)。 -GRUB’s naming here is slightly confusing. For HDDs, it starts counting from “0”, making the first HDD #0, the second one #1, the third one #2, etc. However, for partitions, it will start counting from 1. First partition is #1, second is #2, etc. There might be a good reason for this but not necessarily a sane one (UX-wise it is a disaster, to be sure).. +GRUB 的命名在这里稍微有点困惑,对于硬盘来说,它从 “0” 开始计数,第一块硬盘为 #0 ,第二块为 #1 ,第三块为 #2 ,依此类推。但是对于分区来说,它从 “1” 开始计数,第一个分区为 #1 ,第二个分区为 #2 ,依此类推。也许这里有一个很好的原因,但肯定不是明智的(明显用户体验很糟糕).. -This makes fist disk, first partition, which in Linux would usually look something like `/dev/sda1` become `hd0,1` in GRUB2. The second disk, third partition would be `hd1,3`, and so on. +在 Linux 中第一块硬盘,第一个分区是 `/dev/sda1` ,但在 GRUB2 中则是 `hd0,1` 。第二块硬盘,第三个分区则是 `hd1,3`, 依此类推. -The next important line is +下一个重要的行是 linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash -It will load the kernel image. On newer Ubuntu Live CDs, this would be in the `/casper` directory and called `vmlinuz.efi`. If you use a different system, your kernel might be missing the `.efi` extension or be located somewhere else entirely (You can easily check this by opening the ISO file with an archive manager and looking inside `/casper.`). The last options, `quiet splash`, would be your regular GRUB options, if you care to change them. +这会载入内核镜像,在新的 Ubuntu Live CD 中,内核被存放在 `/casper` 目录,并且命名为 `vmlinuz.efi` 。如果你使用的是其它系统,可能会没有 `.efi` 扩展名或内核被存放在其它地方 (可以使用归档管理器打开 ISO 文件在 `/casper` 中查找确认)。最后一个选项, `quiet splash`, 是一个常规的 GRUB 选项无论你是否在意改动它们。 -Finally +最后 initrd (loop)/casper/initrd.lz -will load `initrd`, which is responsible to load a RAMDisk into memory for bootup. 
+这会载入 `initrd` ,它负责载入 RAMDisk 到内存用于启动。 -### Booting into your live system ### +### 启动 live 系统 ### -To make it all work, you will only need to update GRUB2 +做完上面所有的步骤后,需要更新 GRUB2 sudo update-grub ![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-updare-grub.png) -When you reboot your system, you should be presented with a new GRUB entry which will allow you to load into the ISO image you’ve just set up. +当重启系统后,应该可以看见一个新的,并且允许我们启动刚刚配置的 ISO 镜像的 GRUB 条目 ![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-grub-menu.png) -Selecting the new entry should boot you into the live environment, just like booting from a DVD or USB would. +选择这个新条目就允许我们像从 DVD 或 U 盘中启动一个 live 环境一样。 -------------------------------------------------------------------------------- via: https://www.maketecheasier.com/run-iso-files-hdd-grub2/ 作者:[Attila Orosz][a] -译者:[译者ID](https://github.com/译者ID) +译者:[译者ID](https://github.com/locez) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 979c24833d29d086ef5fa1a83de08b855a08ab38 Mon Sep 17 00:00:00 2001 From: locez Date: Mon, 12 Oct 2015 21:34:49 +0800 Subject: [PATCH 679/697] translated --- ...908 How to Run ISO Files Directly From the HDD with GRUB2.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md b/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md index 8af47b2d31..49850d831e 100644 --- a/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md +++ b/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md @@ -82,7 +82,7 @@ GRUB 的命名在这里稍微有点困惑,对于硬盘来说,它从 “0” via: https://www.maketecheasier.com/run-iso-files-hdd-grub2/ 作者:[Attila Orosz][a] -译者:[译者ID](https://github.com/locez) +译者:[Locez](https://github.com/locez) 校对:[校对者ID](https://github.com/校对者ID) 本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 86ec23a9b8dc3e8297687d2974538c912743fc3d Mon Sep 17 00:00:00 2001 From: locez Date: Mon, 12 Oct 2015 21:36:30 +0800 Subject: [PATCH 680/697] translated --- ... Files Directly From the HDD with GRUB2.md | 91 +++++++++++++++++++ 1 file changed, 91 insertions(+) create mode 100644 translated/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md diff --git a/translated/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md b/translated/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md new file mode 100644 index 0000000000..49850d831e --- /dev/null +++ b/translated/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md @@ -0,0 +1,91 @@ +如何使用 GRUB 2 直接从硬盘运行 ISO 文件 +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-featured.png) +大多数 Linux 发行版都会提供一个可以从 USB 启动的 live 环境,以便用户无需安装即可测试系统。我们可以用它来评测这个发行版或仅仅是当成一个一次性系统,并且很容易将这些文件复制到一个 U 盘上,在某些情况下,我们可能需要经常运行同一个或不同的 ISO 镜像。GRUB 2 可以配置成直接从启动菜单运行一个 live 环境,而不需要烧录这些 ISO 到硬盘或 USB 设备。 + +### 获取和检查可启动的 ISO 镜像 ### +为了获取 ISO 镜像,我们通常应该访问所需要的发行版的网站下载与我们架构兼容的镜像文件。如果这个镜像可以从 U 盘启动,那它也应该可以从 GRUB 菜单启动。 + +当镜像下载完后,我们应该通过 MD5 校验检查它的完整性。这会输出一大串数字与字母合成的序列。 + +![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-md5.png) + +将这个序列与下载页提供的 MD5 校验码进行比较,两者应该完全相同。 + +### 配置 GRUB 2 ### +ISO 镜像文件包含了整个系统。我们要做的仅仅是告诉 GRUB 2 哪里可以找到 kernel 和 initramdisk 或 initram 文件系统(这取决于我们所使用的发行版)。 + +在下面的例子中,一个 Kubuntu 15.04 live 环境将被配置到 Ubuntu 14.04 盒子的 Grub 启动菜单项。这应该能在大多数新的以 Ubuntu 为基础的系统上运行。如果你是其他系统并且想实现一些其它的东西,你可以从[这些文件][1]获取灵感,但这会要求你拥有一点 GRUB 使用经验。 + +这个例子的文件 `kubuntu-15.04-desktop-amd64.iso` + +放在位于 `/dev/sda1` 的 `/home/maketecheasier/TempISOs/` 上. 
+ +为了使 GRUB 2 能正确找到它,我们应该编辑 + + /etc/grub.d/40_custom + +![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-40-custom-empty.png) + + menuentry "Kubuntu 15.04 ISO" { + set isofile="/home/maketecheasier/TempISOs/kubuntu-15.04-desktop-amd64.iso" + loopback loop (hd0,1)$isofile + echo "Starting $isofile..." + linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash + initrd (loop)/casper/initrd.lz + } + +![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-40-custom-new.png) + +### 分析上述代码 ### + +首先设置了一个变量 `$isofile`,这是 ISO 文件的所在位置。如果你想换用另一个 ISO,应该修改 `isofile="/path/to/file/name-of-iso-file-.iso"` 中的路径。 + +下一行是指定回环设备,且必须给出正确的分区号码。 + + loopback loop (hd0,1)$isofile + +注意 hd0,1 这里非常重要,它的意思是第一硬盘,第一分区 (`/dev/sda1`)。 + +GRUB 的命名在这里稍微有点困惑,对于硬盘来说,它从 “0” 开始计数,第一块硬盘为 #0 ,第二块为 #1 ,第三块为 #2 ,依此类推。但是对于分区来说,它从 “1” 开始计数,第一个分区为 #1 ,第二个分区为 #2 ,依此类推。也许这样做有它的原因,但未必是合理的(明显用户体验很糟糕)。 + +在 Linux 中第一块硬盘,第一个分区是 `/dev/sda1` ,但在 GRUB2 中则是 `hd0,1` 。第二块硬盘,第三个分区则是 `hd1,3`,依此类推。
+ +下一个重要的行是 + + linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash + +这会载入内核镜像。在新的 Ubuntu Live CD 中,内核存放在 `/casper` 目录,命名为 `vmlinuz.efi`。如果你使用的是其它系统,内核可能没有 `.efi` 扩展名,或存放在其它地方(可以使用归档管理器打开 ISO 文件,在 `/casper` 中查找确认)。最后的 `quiet splash` 是常规的 GRUB 选项,如果你想的话也可以改动它们。 + +最后 + + initrd (loop)/casper/initrd.lz + +这会载入 `initrd`,它负责将 RAMDisk 载入内存用于启动。 + +### 启动 live 系统 ### + +做完上面所有的步骤后,需要更新 GRUB2: + + sudo update-grub + +![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-updare-grub.png) + +重启系统后,应该可以看见一个新的 GRUB 条目,它允许我们启动刚刚配置的 ISO 镜像。 + +![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-grub-menu.png) + +选择这个新条目,就可以像从 DVD 或 U 盘启动一样进入 live 环境。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/run-iso-files-hdd-grub2/ + +作者:[Attila Orosz][a] +译者:[Locez](https://github.com/locez) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ +[1]:http://git.marmotte.net/git/glim/tree/grub2 \ No newline at end of file From 1bfb726e4b66eae5febd28551100a872af46dfd2 Mon Sep 17 00:00:00 2001 From: Locez Date: Mon, 12 Oct 2015 13:38:14 +0800 Subject: [PATCH 681/697] delete --- ...
Files Directly From the HDD with GRUB2.md | 91 ------------------- 1 file changed, 91 deletions(-) delete mode 100644 sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md diff --git a/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md b/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md deleted file mode 100644 index 49850d831e..0000000000 --- a/sources/tech/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md +++ /dev/null @@ -1,91 +0,0 @@ -如何使用 GRUB 2 直接从硬盘运行 ISO 文件 -================================================================================ -![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-featured.png) -大多数 Linux 发行版都会提供一个可以从 USB 启动的 live 环境,以便用户无需安装即可测试系统。我们可以用它来评测这个发行版或仅仅是当成一个一次性系统,并且很容易将这些文件复制到一个 U 盘上,在某些情况下,我们可能需要经常运行同一个或不同的 ISO 镜像。GRUB 2 可以配置成直接从启动菜单运行一个 live 环境,而不需要烧录这些 ISO 到硬盘或 USB 设备。 - -### 获取和检查可启动的 ISO 镜像 ### -为了获取 ISO 镜像,我们通常应该访问所需要的发行版的网站下载与我们架构兼容的镜像文件。如果这个镜像可以从 U 盘启动,那它也应该可以从 GRUB 菜单启动。 - -当镜像下载完后,我们应该通过 MD5 校验检查它的完整性。这会输出一大串数字与字母合成的序列。 - -![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-md5.png) - -将这个序列与下载页提供的 MD5 校验码进行比较,两者应该完全相同。 - -### 配置 GRUB 2 ### -ISO 镜像文件包含了整个系统。我们要做的仅仅是告诉 GRUB 2 哪里可以找到 kernel 和 initramdisk 或 initram 文件系统(这取决于我们所使用的发行版)。 - -在下面的例子中,一个 Kubuntu 15.04 live 环境将被配置到 Ubuntu 14.04 盒子的 Grub 启动菜单项。这应该能在大多数新的以 Ubuntu 为基础的系统上运行。如果你是其他系统并且想实现一些其它的东西,你可以从[这些文件][1]获取灵感,但这会要求你拥有一点 GRUB 使用经验。 - -这个例子的文件 `kubuntu-15.04-desktop-amd64.iso` - -放在位于 `/dev/sda1` 的 `/home/maketecheasier/TempISOs/` 上. - -为了使 GRUB 2 能正确找到它,我们应该编辑 - - /etc/grub.d40-custom - -![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-40-custom-empty.png) - - menuentry "Kubuntu 15.04 ISO" { - set isofile="/home/maketecheasier/TempISOs/kubuntu-15.04-desktop-amd64.iso" - loopback loop (hd0,1)$isofile - echo "Starting $isofile..." 
- linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash - initrd (loop)/casper/initrd.lz - } - -![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-40-custom-new.png) - -### 分析上述代码 ### - -首先设置了一个变量名 `$menuentry` ,这是 ISO 文件的所在位置 。如果你想改变一个 ISO ,你应该修改 `isofile="/path/to/file/name-of-iso-file-.iso"`. - -下一行是指定回环设备,且必须给出正确的分区号码。 - - loopback loop (hd0,1)$isofile - -注意 hd0,1 这里非常重要,它的意思是第一硬盘,第一分区 (`/dev/sda1`)。 - -GRUB 的命名在这里稍微有点困惑,对于硬盘来说,它从 “0” 开始计数,第一块硬盘为 #0 ,第二块为 #1 ,第三块为 #2 ,依此类推。但是对于分区来说,它从 “1” 开始计数,第一个分区为 #1 ,第二个分区为 #2 ,依此类推。也许这里有一个很好的原因,但肯定不是明智的(明显用户体验很糟糕).. - -在 Linux 中第一块硬盘,第一个分区是 `/dev/sda1` ,但在 GRUB2 中则是 `hd0,1` 。第二块硬盘,第三个分区则是 `hd1,3`, 依此类推. - -下一个重要的行是 - - linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash - -这会载入内核镜像,在新的 Ubuntu Live CD 中,内核被存放在 `/casper` 目录,并且命名为 `vmlinuz.efi` 。如果你使用的是其它系统,可能会没有 `.efi` 扩展名或内核被存放在其它地方 (可以使用归档管理器打开 ISO 文件在 `/casper` 中查找确认)。最后一个选项, `quiet splash`, 是一个常规的 GRUB 选项无论你是否在意改动它们。 - -最后 - - initrd (loop)/casper/initrd.lz - -这会载入 `initrd` ,它负责载入 RAMDisk 到内存用于启动。 - -### 启动 live 系统 ### - -做完上面所有的步骤后,需要更新 GRUB2 - - sudo update-grub - -![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-updare-grub.png) - -当重启系统后,应该可以看见一个新的,并且允许我们启动刚刚配置的 ISO 镜像的 GRUB 条目 - -![](https://www.maketecheasier.com/assets/uploads/2015/07/rundirectiso-grub-menu.png) - -选择这个新条目就允许我们像从 DVD 或 U 盘中启动一个 live 环境一样。 - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/run-iso-files-hdd-grub2/ - -作者:[Attila Orosz][a] -译者:[Locez](https://github.com/locez) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/attilaorosz/ -[1]:http://git.marmotte.net/git/glim/tree/grub2 \ No newline at end of file From 7887eb2411739bc76c5044416a70f2bc412b0239 Mon Sep 17 00:00:00 2001 From: DeadFire 
Date: Mon, 12 Oct 2015 14:25:34 +0800 Subject: [PATCH 682/697] =?UTF-8?q?20151012-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ut Linux Try Linux Desktop on the Cloud.md | 44 +++++++++ ...tory Of Aix HP-UX Solaris BSD And LINUX.md | 99 +++++++++++++++++++ 2 files changed, 143 insertions(+) create mode 100644 sources/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md create mode 100644 sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md diff --git a/sources/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md b/sources/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md new file mode 100644 index 0000000000..286d6ba816 --- /dev/null +++ b/sources/share/20151012 Curious about Linux Try Linux Desktop on the Cloud.md @@ -0,0 +1,44 @@ +Curious about Linux? Try Linux Desktop on the Cloud +================================================================================ +Linux maintains a very small market share as a desktop operating system. Current surveys estimate its share to be a mere 2%; contrast that with the various strains (no pun intended) of Windows which total nearly 90% of the desktop market. For Linux to challenge Microsoft's monopoly on the desktop, there needs to be a simple way of learning about this different operating system. And it would be naive to believe a typical Windows user is going to buy a second machine, tinker with partitioning a hard disk to set up a multi-boot system, or just jump ship to Linux without an easy way back. + +![](http://www.linuxlinks.com/portal/content/reviews/Cloud/CloudComputing.png) + +We have examined a number of risk-free ways users can experiment with Linux without dabbling with partition management. Various options include Live CD/DVDs, USB keys and desktop virtualization software. 
For the latter, I can strongly recommend VMWare (VMWare Player) or Oracle VirtualBox, two relatively easy and free ways of installing and running multiple operating systems on a desktop or laptop computer. Each virtual machine has its own share of CPU, memory, network interfaces etc which is isolated from other virtual machines. But virtual machines still require some effort to get Linux up and running, and a reasonably powerful machine. Too much effort for a mere inquisitive mind. + +It can be difficult to break down preconceptions. Many Windows users will have experimented with free software that is available on Linux. But there are many facets to learn on Linux. And it takes time to become accustomed to the way things work in Linux. + +Surely there should be an effortless way for a beginner to experiment with Linux for the first time? Indeed there is; step forward the online cloud lab. + +### LabxNow ### + +![LabxNow](http://www.linuxlinks.com/portal/content/reviews/Cloud/Screenshot-LabxNow.png) + +LabxNow provides a free service for general users offering Linux remote desktop over the browser. The developers promote the service as having a personal remote lab (to play around, develop, whatever!) that will be accessible from anywhere, with the internet of course. + +The service currently offers a free virtual private server with 2 cores, 4GB RAM and 10GB SSD space. The service runs on a 4 AMD 6272 CPU with 128GB RAM. + +#### Features include: #### + +- Machine images: Ubuntu 14.04 with Xfce 4.10, RHEL 6.5, CentOS with Gnome, and Oracle +- Hardware: CPU - 1 or 2 cores; RAM: 512MB, 1GB, 2GB or 4GB +- Fast network for data transfers +- Works with all popular browsers +- Install anything, run anything - an excellent way to experiment and learn all about Linux without any risk +- Easily add, delete, manage and customize VMs +- Share VMs, Remote desktop support + +All you need is a reasonable Internet connected device. 
Forget about high cost VPS, domain space or hardware support. LabxNow offers a great way of experimenting with Ubuntu, RHEL and CentOS. It gives Windows users an excellent environment to dip their toes into the wonderful world of Linux. Further, it allows users to do (programming) work from anywhere in the world without the stress of installing Linux on each machine. Point your web browser at [www.labxnow.org/labxweb/][1].
+
+There are other services (mostly paid) that allow users to experiment with Linux. These include CloudSigma, which offers a free 7-day trial, and Icebergs.io (full root access via HTML5). But for now, LabxNow gets my recommendation.
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxlinks.com/article/20151003095334682/LinuxCloud.html
+
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://www.labxnow.org/labxweb/
\ No newline at end of file
diff --git a/sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md b/sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md
new file mode 100644
index 0000000000..0832929789
--- /dev/null
+++ b/sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md
@@ -0,0 +1,99 @@
+The Brief History Of Aix, HP-UX, Solaris, BSD, And LINUX
+================================================================================
+Always remember that when doors close on you, other doors open. [Ken Thompson][1] and [Dennis Ritchie][2] are a great example of that saying. They were two of the best information technology specialists of the **20th** century, as they created the **UNIX** system, which is considered one of the most influential and inspirational pieces of software ever written.
+
+### The UNIX systems beginning at Bell Labs ###
+
+**UNIX**, which was originally called **UNICS** (**UN**iplexed **I**nformation and **C**omputing **S**ervice), has a great family tree and was never born by itself. The grandfather of UNIX was **CTSS** (the **C**ompatible **T**ime **S**haring **S**ystem), and the father was the **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice) project, which supported interactive timesharing for mainframe computers used by huge communities of users.
+
+UNIX was born at **Bell Labs** in **1969**, created by **Ken Thompson** and, later, **Dennis Ritchie**. These two great researchers and scientists had worked on a collaborative project with **General Electric** and the **Massachusetts Institute of Technology** to create an interactive timesharing system called Multics.
+
+Multics was created to combine timesharing with other technological advances, allowing users to phone the computer from remote terminals, then edit documents, read e-mail, run calculations, and so on.
+
+Over the next five years, AT&T corporate invested millions of dollars in the Multics project. They purchased a mainframe computer called the GE-645 and dedicated to the effort the top researchers at Bell Labs, such as Ken Thompson, Stuart Feldman, Dennis Ritchie, M. Douglas McIlroy, Joseph F. Ossanna, and Robert Morris. The project was ambitious, but it fell troublingly behind schedule. In the end, AT&T leaders decided to abandon the project.
+
+Bell Labs managers decided to stop any further work on operating systems, which left many researchers frustrated and upset. But thanks to Thompson, Ritchie, and a few researchers who ignored their bosses' instructions and kept working with love in their labs, UNIX was created as one of the greatest operating systems of all time.
+
+UNIX started its life on a PDP-7 minicomputer, which was a testing machine for Thompson's ideas about operating system design and a platform for Thompson and Ritchie's game simulation called Space Travel.
+
+> "What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication," Dennis Ritchie said.
+
+UNIX came close to being the first system under which a programmer could sit down directly at a machine and start composing programs on the fly, exploring possibilities and testing while composing. Throughout its lifetime, UNIX kept growing in capability by attracting skilled volunteer effort from programmers impatient with the limitations of other operating systems.
+
+UNIX received its first funding for a PDP-11/20 in 1970; the UNIX operating system was then officially named and could run on the PDP-11/20. The first real job for UNIX came in 1971: supporting word processing for the patent department at Bell Labs.
+
+### The C revolution on UNIX systems ###
+
+Dennis Ritchie invented a higher-level programming language called "**C**" in **1972**; later, he and Ken Thompson decided to rewrite UNIX in "C" to give the system more portability options. They wrote and debugged almost 100,000 lines of code that year. The migration to the "C" language resulted in highly portable software that required only a relatively small amount of machine-dependent code to be replaced when porting UNIX to another computing platform.
+
+UNIX was first formally presented to the outside world in 1973 at the Symposium on Operating Systems Principles, where Dennis Ritchie and Ken Thompson delivered a paper. AT&T then released Version 5 of the UNIX system and licensed it to educational institutions, and in 1975 it licensed Version 6 of UNIX to companies for the first time, at a cost of **$20,000**. The most widely used version of UNIX was Version 7, released in 1980, for which anybody could purchase a license, though on very restrictive terms. The license included the source code and the machine-dependent kernel, which was written in PDP-11 assembly language. In general, versions of UNIX systems were identified by the editions of their user manuals.
+
+### The AIX System ###
+
+In **1983**, **Microsoft** had a plan to make **Xenix** MS-DOS's multiuser successor, and that year the Xenix-based Altos 586, with **512 KB** of RAM and a **10 MB** hard drive, was created at a cost of $8,000. By 1984, there were 100,000 UNIX installations around the world running System V Release 2. In 1986, 4.3BSD was released, which included an internet name server, and the **AIX system** was announced by **IBM**, with an installation base of over 250,000. AIX is based on UNIX System V, has BSD roots, and is a hybrid of both.
+
+AIX was the first operating system to introduce a **journaled file system (JFS)** and an integrated Logical Volume Manager (LVM). IBM ported AIX to its RS/6000 platform by 1989. Version 5L was a breakthrough release, introduced in 2001 to provide Linux affinity and logical partitioning with the Power4 servers.
+
+AIX introduced virtualization in 2004 with AIX 5.3 and Advanced Power Virtualization (APV), which offered symmetric multi-threading, micro-partitioning, and shared processor pools.
+
+In 2007, IBM started to enhance its virtualization product, coinciding with the release of AIX 6.1 and the Power6 architecture. They also rebranded Advanced Power Virtualization to PowerVM.
+
+The enhancements included a form of workload partitioning called WPARs, which are similar to Solaris zones/containers, but with much better functionality.
+
+### The HP-UX System ###
+
+**Hewlett-Packard's UNIX (HP-UX)** was originally based on System V Release 3. The system initially ran exclusively on the PA-RISC HP 9000 platform. Version 1 of HP-UX was released in 1984.
+
+Version 9 introduced SAM, its character-based graphical user interface (GUI), from which one can administer the system. Version 10 was introduced in 1995 and brought some changes to the layout of the system file and directory structure, making it similar to AT&T SVR4.
+
+Version 11 was introduced in 1997. It was HP's first release to support 64-bit addressing. In 2000, this release was rebranded as 11i, as HP introduced operating environments and bundled groups of layered applications for specific Information Technology purposes.
+
+In 2001, Version 11.20 was introduced with support for Itanium systems. HP-UX was the first UNIX to use ACLs (Access Control Lists) for file permissions, and it was also one of the first to introduce built-in support for a Logical Volume Manager.
+
+Nowadays, HP-UX uses Veritas as its primary file system, due to the partnership between Veritas and HP.
+
+HP-UX is up to release 11i v3, update 4.
+
+### The Solaris System ###
+
+Sun's UNIX version, **Solaris**, was the successor of **SunOS** and was introduced in 1992. SunOS was originally based on the BSD (Berkeley Software Distribution) flavor of UNIX, but SunOS versions 5.0 and later were based on UNIX System V Release 4, which was rebranded as Solaris.
+
+SunOS version 1.0 was introduced with support for Sun-1 and Sun-2 systems in 1983. Version 2.0 came later, in 1985. In 1987, Sun and AT&T announced that they would collaborate on a project to merge System V and BSD into a single release, based on SVR4.
+
+Solaris 2.4 was Sun's first SPARC/x86 release. The last release of SunOS was version 4.1.4, announced in November 1994. Solaris 7 was the first 64-bit UltraSPARC release, and it added native support for file system metadata logging.
+
+Solaris 9 was introduced in 2002, with support for Linux capabilities and the Solaris Volume Manager. Then Solaris 10 was introduced in 2005, with a number of innovations, such as support for Solaris Containers, the new ZFS file system, and Logical Domains.
+
+The Solaris system is presently up to version 10; the latest update was released in 2008.
+
+### Linux ###
+
+By 1991 there was a growing need for a free alternative to commercial systems, so **Linus Torvalds** set out to create a new free operating system kernel that eventually became **Linux**. Linux started with a small number of "C" files, under a license which prohibited commercial distribution. Linux is a UNIX-like system but is different from UNIX.
+
+Version 3.18 was introduced in 2015 under the GNU Public License. IBM said that more than 18 million lines of code are Open Source and available to developers.
+
+The GNU Public License has become the most widely used free software license you can find nowadays. In accordance with Open Source principles, this license permits individuals and organizations the freedom to distribute, run, share by copying, study, and also modify the code of the software.
+
+### UNIX vs. Linux: Technical Overview ###
+
+- Linux encourages more diversity, and Linux developers come from a wider range of backgrounds, with different experiences and opinions.
+- Linux can run on a wider range of platforms and architecture types than UNIX.
+- Developers of commercial UNIX editions have a specific target platform and audience in mind for their operating system.
+- **Linux is more secure than UNIX** as it is less affected by virus threats or malware attacks.
Linux has had about 60-100 viruses to date, but at the same time none of them are currently spreading. On the other hand, UNIX has had 85-120 viruses but some of them are still spreading. +- With commands of UNIX, tools and elements are rarely changed, and even some interfaces and command lines arguments still remain in later versions of UNIX. +- Some Linux development projects get funded on a voluntary basis such as Debian. The other projects maintain a community version of commercial Linux distributions such as SUSE with openSUSE and Red Hat with Fedora. +- Traditional UNIX is about scale up, but on the other hand Linux is about scale out. + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/ + +作者:[M.el Khamlichi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/pirat9/ +[1]:http://www.unixmen.com/ken-thompson-unix-systems-father/ +[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/ \ No newline at end of file From 228f7edab1e9eda0183ce80ec8cf9f215049452b Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 12 Oct 2015 14:41:44 +0800 Subject: [PATCH 683/697] =?UTF-8?q?20151012-1=20=E9=80=89=E9=A2=98?= =?UTF-8?q?=E8=A1=A5=E5=85=85=E4=B8=80=E4=B8=AA=E5=9B=BE=E5=83=8F?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...1012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md b/sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md index 0832929789..f45f901b3d 100644 --- a/sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md +++ b/sources/talk/20151012 The Brief 
History Of Aix HP-UX Solaris BSD And LINUX.md @@ -1,5 +1,7 @@ The Brief History Of Aix, HP-UX, Solaris, BSD, And LINUX ================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png) + Always remember that when doors close on you, other doors open. [Ken Thompson][1] and [Dennis Richie][2] are a great example for such saying. They were two of the best information technology specialists in the **20th** century as they created the **UNIX** system which is considered one the most influential and inspirational software that ever written. ### The UNIX systems beginning at Bell Labs ### From b9aa6be17cb07db9c84e7d2b6498d41c2e5423a9 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 12 Oct 2015 15:21:40 +0800 Subject: [PATCH 684/697] =?UTF-8?q?20151012-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...51012 What is a good IDE for R on Linux.md | 61 ++++ ...012 10 Useful Utilities For Linux Users.md | 263 ++++++++++++++++++ ...012 How To Use iPhone In Antergos Linux.md | 81 ++++++ ... device permission permanently on Linux.md | 53 ++++ ... 
about built-in kernel modules on Linux.md | 53 ++++ ...sword change at the next login on Linux.md | 54 ++++ 6 files changed, 565 insertions(+) create mode 100644 sources/share/20151012 What is a good IDE for R on Linux.md create mode 100644 sources/tech/20151012 10 Useful Utilities For Linux Users.md create mode 100644 sources/tech/20151012 How To Use iPhone In Antergos Linux.md create mode 100644 sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md create mode 100644 sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md create mode 100644 sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md diff --git a/sources/share/20151012 What is a good IDE for R on Linux.md b/sources/share/20151012 What is a good IDE for R on Linux.md new file mode 100644 index 0000000000..fa0af9f921 --- /dev/null +++ b/sources/share/20151012 What is a good IDE for R on Linux.md @@ -0,0 +1,61 @@ +What is a good IDE for R on Linux +================================================================================ +Some time ago, I covered some of the [best IDEs for C/C++][1] on Linux. Obviously C and C++ are not the only programming languages out there, and it is time to turn to something a bit more specific. + +If you have ever done some statistics, it is possible that you have encountered the [language R][2]. If you have not, I really recommend this open source programming language which is tailored for statistics and data mining. Coming from a coding background, you might be thrown off a bit by the syntax, but hopefully you will get seduced by the speed of its vector operations. In short, try it. And to do so, what better way to start with an IDE? R being a cross platform language, there are a bunch of good IDEs which make data analysis in R far more pleasurable. 
If you are very attached to a particular editor, there are also some very good plugins to turn that editor into a fully-fledged R IDE. + +Here is a list of five good IDEs for R language in Linux environment. + +### 1. RStudio ### + +![](https://c1.staticflickr.com/1/603/22093054381_431383ab60_c.jpg) + +Let’s start hard with maybe one of the most popular R IDEs out there: [RStudio][3]. In addition to common IDE features like syntax highlighting and code completion, RStudio stands out for its integration of R documentation, its powerful debugger and its multiple views system. If you start with R, I can only recommend RStudio as the R console on the side is perfect for testing your code in real time, and the object explorer will help you understand what kind of data you are dealing with. Finally, what really conquered me was the integration of the plots visualiser, making it easy to export your graphs as images. On the downside, RStudio lacks the shortcuts and the advanced settings to make it a perfect IDE. Still, with a free version under AGPL license, Linux users have no excuses not to give this IDE a try. + +### 2. Emacs with ESS ### + +![](https://c2.staticflickr.com/6/5824/22056857776_a14a4e7e1b_c.jpg) + +In my last post about IDEs, some people were disappointed by the absence of Emacs in my list. My main reason for that is that Emacs is kind of the wild card of IDE: you could place it on any list for any languages. But things are different for [R with the ESS plugin][4]. Emacs Speaks Statistics (ESS) is an amazing plugin which completely changes the way you use the Emacs editor and really fits the needs of R coders. A bit like RStudio which has multiple views, Emacs with ESS displays presents two panels: one with the code and one with an R console, making it easy to test your code in real time and explore the objects. But ESS's real strength is its seamless integration with other Emacs plugins you might have installed and its advanced configuration options. 
In short, if you like your Emacs shortcuts, you will like to be able to use them in an environment that makes sense for R development. For full disclosure, however, I have heard of and experienced some efficiency issues when dealing with a lot of data in ESS. Nothing too major to be a problem, but just enough have me prefer RStudio. + +### 3. Vim with Vim-R-plugin ### + +![](https://c1.staticflickr.com/1/680/22056923916_abe3531bb4_b.jpg) + +Because I do not want to discriminate after talking about Emacs, I also tried the equivalent for Vim: the [Vim-R-plugin][5]. Using the terminal tool called tmux, this plugin makes it possible to have an R console open and code at the same time. But most importantly, it brings syntax highlighting and omni-completion for R objects to Vim. You can also easily access R documentation and browse objects. But once again, the strength comes from its extensive customization capacities and the speed of Vim. If you are tempted by this option, I direct you to the extremely thorough [documentation][6] on installing and setting up your environment. + +### 4. Gedit with RGedit ### + +![](https://c1.staticflickr.com/1/761/22056923956_1413f60b42_c.jpg) + +If neither Emacs or Vim is your cup of tea, and what you like is your default Gnome editor, then [RGedit][7] is made for you: a plugin to code in R from Gedit. Gedit is known to be more powerful than what it looks. With a very large library of plugins, it is possible to do a lot with it. And RGedit is precisely the plugin you need to code in R from Gedit. It comes with the classic syntax highlighting and integration of the R console at the bottom of the screen, but also a bunch of unique features like multiple profiles, code folding, file explorer, and even a GUI wizard to generate code from snippets. Despite my indifference towards Gedit, I have to admit that these features go beyond the basic plugin functionality and really make a difference when you spend a lot of time analyzing data. 
The only shadow is that the last update is from 2013. I really hope that this project can pick up again. + +### 5. RKWard ### + +![](https://c2.staticflickr.com/6/5643/21896132829_2ea8f3a320_c.jpg) + +Finally, last but not least, [RKWard][8] is an R IDE made for KDE environments. What I love the most about it is its name. But honestly, its package management system and spreadsheet-like data editor come in close second. In addition to that, it includes an easy system for plotting and importing data, and can be extended by plugins. If you are not a fan of the KDE feel, you might be a bit uncomfortable, but if you are, I would really recommend checking it out. + +To conclude, whether you are new to R or not, these IDEs might be useful to you. It does not matter if you prefer something that stands for itself, or a plugin for your favorite editor, I am sure that you will appreciate one of the features these software provide. I am also sure I missed a lot of good IDEs for R, which deserve to be on this list. So since you wrote a lot of very good comments for the post on the IDEs for C/C++, I invite you to do the same here and share your knowledge. + +What do you feel is a good IDE for R on Linux? Please let us know in the comments. 
+ +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/good-ide-for-r-on-linux.html + +作者:[Adrien Brochard][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/adrien +[1]:http://xmodulo.com/good-ide-for-c-cpp-linux.html +[2]:https://www.r-project.org/ +[3]:https://www.rstudio.com/ +[4]:http://ess.r-project.org/ +[5]:http://www.vim.org/scripts/script.php?script_id=2628 +[6]:http://www.lepem.ufc.br/jaa/r-plugin.html +[7]:http://rgedit.sourceforge.net/ +[8]:https://rkward.kde.org/ \ No newline at end of file diff --git a/sources/tech/20151012 10 Useful Utilities For Linux Users.md b/sources/tech/20151012 10 Useful Utilities For Linux Users.md new file mode 100644 index 0000000000..b39679a9fc --- /dev/null +++ b/sources/tech/20151012 10 Useful Utilities For Linux Users.md @@ -0,0 +1,263 @@ +10 Useful Utilities For Linux Users +================================================================================ +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/09/linux-656x445.png) + +### Introduction ### + +In this tutorial, I have collected 10 useful utility tools for Linux users which will include various network monitoring, system auditing or some another random commands which can help users to enhance their productivity. I hope you will enjoy them. + +#### 1. w #### + +Display who is logged into the system and what process executed by them. + + $w + +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_023.png) + +for help + + $w -h + +for current user + + $w + +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_024.png) + +#### 2. nmon #### + +Nmon or nigel’s monitor is a tool which displays performance information of the system. 
+
+    $ sudo apt-get install nmon
+
+----------
+
+    $ nmon
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_001.png)
+
+nmon can dump information related to network, cpu, memory or disk usage.
+
+**nmon cpu info (press c)**
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_002.png)
+
+**nmon network info (press n)**
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_003.png)
+
+**nmon disk info (press d)**
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_004.png)
+
+#### 3. ncdu ####
+
+A command-line utility that is a curses-based version of 'du'; it is used to analyze the disk space occupied by various directories.
+
+    $apt-get install ncdu
+
+----------
+
+    $ncdu /
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_006.png)
+
+Final output:
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_007.png)
+
+Press n to order by name or press s to order by file size (default).
+
+#### 4. slurm ####
+
+A command-line utility for console-based network interface bandwidth monitoring; it displays an ASCII-based graph.
+
+    $ apt-get install slurm
+
+Examples:
+
+    $ slurm -i
+
+----------
+
+    $ slurm -i eth1
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0091.png)
+
+**options**
+
+- Press **l** to display lx/tx led.
+- Press **c** to switch to classic mode.
+- Press **r** to refresh the screen.
+- Press **q** to quit.
+
+#### 5. findmnt ####
+
+The findmnt command is used to find mounted file systems. It lists mounted devices, can also mount or unmount devices as and when required, and comes as a part of util-linux.
+
+Examples:
+
+    $findmnt
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0101.png)
+
+To get output in list format.
+
+    $ findmnt -l
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0111.png)
+
+List file systems mounted in fstab:
+
+    $ findmnt -s
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0122.png)
+
+List mounted file systems by type:
+
+    $ findmnt -t ext4
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0131.png)
+
+#### 6. dstat ####
+
+A combined and flexible tool which can be used to monitor memory, process, network or disk performance; it is a good replacement for ifstat, iostat, vmstat etc.
+
+    $ apt-get install dstat
+
+Examples:
+
+Detailed info about cpu, hard disk and network:
+
+    $ dstat
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0141.png)
+
+- **-c** cpu
+
+    $ dstat -c
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0151.png)
+
+More detailed information about cpu, disk (sda1) and load:
+
+    $ dstat -cdl -D sda1
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_017.png)
+
+- **-d** disk
+
+    $ dstat -d
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0161.png)
+
+#### 7. saidar ####
+
+Another CLI-based system statistics monitoring tool; it provides information about disk usage, network, memory, swap etc.
+
+    $ sudo apt-get install saidar
+
+Examples:
+
+    $ saidar
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0181.png)
+
+Enable colored output:
+
+    $ saidar -c
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0191.png)
+
+#### 8. ss ####
+
+ss, or socket statistics, is a good alternative to netstat. It gathers information directly from kernel space and is faster than the netstat utility.
+
+Examples:
+
+List all connections:
+
+    $ ss | less
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0201.png)
+
+Grab only TCP traffic:
+
+    $ ss -A tcp
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0211.png)
+
+Grab process name and PID:
+
+    $ ss -ltp
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0221.png)
+
+#### 9. ccze ####
+
+A tool that decorates your logs :).
+
+    $ apt-get install ccze
+
+Examples:
+
+    $ tailf /var/log/syslog | ccze
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0231.png)
+
+List ccze modules:
+
+    $ ccze -l
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0241.png)
+
+Save the log as HTML:
+
+    tailf /var/log/syslog | ccze -h > /home/tux/Desktop/rajneesh.html
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_025.png)
+
+#### 10. ranwhen.py ####
+
+A Python-based terminal utility that displays system activity graphically. Details are presented in a very colorful histogram.
+
+Add the Python PPA:
+
+    $ sudo apt-add-repository ppa:fkrull/deadsnakes
+
+Update the system:
+
+    $ sudo apt-get update
+
+Install Python 3.2:
+
+    $ sudo apt-get install python3.2
+
+- [Download ranwhen.py][1]
+
+Unzip the download and enter the directory:
+
+    $ unzip ranwhen-master.zip && cd ranwhen-master
+
+Run the tool:
+
+    $ python3.2 ranwhen.py
+
+![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_026.png)
+
+### Conclusion ###
+
+These are less popular yet important Linux administration tools. They can help users in their day-to-day activities. In our upcoming articles, we will try to bring you some more admin/user tools.
+
+Have fun!
+
+--------------------------------------------------------------------------------
+
+via: http://www.unixmen.com/10-useful-utilities-linux-users/
+
+作者:[Rajneesh Upadhyay][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.unixmen.com/author/rajneesh/
+[1]:https://github.com/p-e-w/ranwhen/archive/master.zip
\ No newline at end of file
diff --git a/sources/tech/20151012 How To Use iPhone In Antergos Linux.md b/sources/tech/20151012 How To Use iPhone In Antergos Linux.md
new file mode 100644
index 0000000000..0186a214d4
--- /dev/null
+++ b/sources/tech/20151012 How To Use iPhone In Antergos Linux.md
@@ -0,0 +1,81 @@
+How To Use iPhone In Antergos Linux
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-Antergos-Arch-Linux.jpg)
+
+Troubles with your iPhone and Arch Linux? iPhone and Linux have never really gotten along well. In this tutorial, I am going to show you how you can use an iPhone in Antergos Linux. Since Antergos is based on Arch Linux, the same steps should be applicable to other Arch-based Linux distros such as Manjaro Linux.
+
+I recently bought a brand new iPhone 6S, and when I connected it to Antergos Linux to copy some pictures, it was not detected at all. I could see that the iPhone was being charged, and I had allowed the iPhone to 'trust the computer', but nothing at all was detected. I tried to run dmesg, but there was no trace of the iPhone or Apple there. What is funny is that [libimobiledevice][1] was installed as well, which always fixes the [iPhone mount issue in Ubuntu][2].
+
+I am going to show you how I am using an iPhone 6S running iOS 9 in Antergos. The process leans heavily on the command line, but I presume that since you are in Arch Linux territory, you are not scared of the terminal (and you should not be).
+ +### Mount iPhone in Arch Linux ### + +**Step 1**: Unplug your iPhone, if it is already plugged in. + +**Step 2**: Now, open a terminal and use the following command to install some necessary packages. Don’t worry if they are already installed. + + sudo pacman -Sy ifuse usbmuxd libplist libimobiledevice + +**Step 3**: Once these programs and libraries are installed, reboot your system. + + sudo reboot + +**Step 4**: Make a directory where you want the iPhone to be mounted. I would suggest making a directory named iPhone in your home directory. + + mkdir ~/iPhone + +**Step 5**: Unlock your phone and plug it in. If asked to trust the computer, allow it. + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux-2.jpeg) + +**Step 6**: Verify that iPhone is recognized by the system this time. + + dmesg | grep -i iphone + +This should show you some result with iPhone and Apple in it. Something like this: + + [ 31.003392] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached + [ 40.950883] ipheth 2-1:4.2: Apple iPhone USB Ethernet now disconnected + [ 47.471897] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached + [ 82.967116] ipheth 2-1:4.2: Apple iPhone USB Ethernet now disconnected + [ 106.735932] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached + +This means that iPhone has been successfully recognized by Antergos/Arch Linux. + +**Step 7**: When everything is set, it’s time to mount the iPhone. Use the command below: + + ifuse ~/iPhone + +Since we created the mount directory in home, it won’t need root access and you should also be able to see it easily in your home directory. If the command is successful, you won’t see any output. + +Go back to Files and see if the iPhone is recognized or not. For me, it looks like this in Antergos: + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux.jpeg) + +You can access the files in this directory. 
Copy files from it or to it. + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux-1.jpeg) + +**Step 8**: When you want to unmount it, you should use this command: + + sudo umount ~/iPhone + +### Worked for you? ### + +I know that it is not very convenient and ideally, iPhone should be recognized as any other USB storage device but things don’t always behave as they are expected to. Good thing is that a little DIY hack can always fix the issue and it gives a sense of achievement (at least to me). That being said, I must say Antergos should work to fix this issue so that iPhone can be mounted by default. + +Did this trick work for you? If you have questions or suggestions, feel free to drop a comment. + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/iphone-antergos-linux/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://www.libimobiledevice.org/ +[2]:http://itsfoss.com/mount-iphone-ipad-ios-7-ubuntu-13-10/ \ No newline at end of file diff --git a/sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md b/sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md new file mode 100644 index 0000000000..8af62bfd75 --- /dev/null +++ b/sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md @@ -0,0 +1,53 @@ +Linux FAQs with Answers--How to change USB device permission permanently on Linux +================================================================================ +> **Question**: I am trying to run gpsd on my Linux with a USB GPS receiver. However, I am getting the following errors from gpsd. 
+
+>
+> gpsd[377]: gpsd:ERROR: read-only device open failed: Permission denied
+> gpsd[377]: gpsd:ERROR: /dev/ttyUSB0: device activation failed.
+> gpsd[377]: gpsd:ERROR: device open failed: Permission denied - retrying read-only
+>
+> It looks like gpsd does not have permission to access the USB device (/dev/ttyUSB0). How can I change its default permission mode permanently on Linux?

+When you run a process that wants to read or write to a USB device, the user/group of the process must have the appropriate permission to do so. Of course you can change the permission of your USB device manually with the chmod command, but such a manual permission change is only temporary. The USB device will revert to its default permission mode when you reboot your Linux machine.
+
+![](https://farm6.staticflickr.com/5741/20848677843_202ff53303_c.jpg)
+
+As a permanent solution, you can create a udev-based USB permission rule which assigns any custom permission mode of your choice. Here is how to do it.
+
+First, you need to identify the vendor ID and product ID of your USB device. For that, use the lsusb command.
+
+    $ lsusb -vvv
+
+![](https://farm1.staticflickr.com/731/20848677743_39f76eb403_c.jpg)
+
+From the lsusb output, find your USB device's entry, and look for the "idVendor" and "idProduct" fields. In this example, we have idVendor (0x067b) and idProduct (0x2303).
+
+Next, create a new udev rule as follows.
+
+    $ sudo vi /etc/udev/rules.d/50-myusb.rules
+
+----------
+
+    SUBSYSTEMS=="usb", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", GROUP="users", MODE="0666"
+
+Replace the "idVendor" and "idProduct" values with your own. **MODE="0666"** indicates the preferred permission of the USB device.
+
+Now reboot your machine or reload the udev rules:
+
+    $ sudo udevadm control --reload
+
+Then re-plug the device and verify its permission.
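Once the rule is in place and the device has been re-plugged, the quickest check is to read the device node's mode directly. The snippet below is a sketch: the `/dev/ttyUSB0` path is carried over from the example above and may differ on your machine, so the second half demonstrates the same `stat`-based check against a temporary stand-in file, which can run anywhere without the hardware attached.

```shell
# Check the real device (path is an example -- adjust to your device):
#   stat -c '%a %G' /dev/ttyUSB0    # with the rule above, expect "666 users"
#
# The same check demonstrated against a stand-in file instead of hardware:
tmp=$(mktemp)
chmod 0666 "$tmp"                   # give the file the mode from the rule
mode=$(stat -c '%a' "$tmp")         # numeric permission bits, e.g. 666
[ "$mode" = "666" ] && echo "mode OK: $mode"
rm -f "$tmp"
```

If the real device still shows its old mode, the rule did not match; re-check the idVendor/idProduct values against the lsusb output.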
+ +![](https://farm1.staticflickr.com/744/21282872179_9a4a05d768_b.jpg) + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/change-usb-device-permission-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file diff --git a/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md b/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md new file mode 100644 index 0000000000..b7a5be3975 --- /dev/null +++ b/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md @@ -0,0 +1,53 @@ +Linux FAQs with Answers--How to find information about built-in kernel modules on Linux +================================================================================ +> **Question**: I would like to know what modules are built into the kernel of my Linux system, and what parameters are available in each module. Is there a way to get a list of all built-in kernel modules and device drivers, and find detailed information about them? + +The modern Linux kernel has been growing significantly over the years to support a wide variety of hardware devices, file systems and networking functions. During this time, "loadable kernel modules (LKM)" came into being in order to keep the kernel from getting bloated, while flexibly extending its capabilities and hardware support under different environments, without having to rebuild it. 
+ +The Linux kernel shipped with the latest Linux distributions comes with relatively a small number of "built-in modules", while the rest of hardware-specific drivers or custom capabilities exist as "loadable modules" which you can selectively load or unload. + +The built-in modules are statically compiled into the kernel. Unlike loadable kernel modules which can be dynamically loaded, unloaded, looked up or listed using commands like modprobe, insmod, rmmod, modinfo or lsmod, built-in kernel modules are always loaded in the kernel upon boot-up, and cannot be managed with these commands. + +### Find a List of Built-in Kernel Modules ### + +To get a list of all built-in modules, run the following command. + + $ cat /lib/modules/$(uname -r)/modules.builtin + +![](https://farm1.staticflickr.com/697/21481933835_ef6b9c71e1_c.jpg) + +You can also get a hint on what modules are built-in by running: + +![](https://farm6.staticflickr.com/5643/21295025949_57f5849c36_c.jpg) + +### Find Parameters of Built-in Kernel Modules ### + +Each kernel module, whether it's built-in or loadable, comes with a set of parameters. For loadable kernel modules, the modinfo command will show parameter information about them. However, this command will not work with built-in modules. You will simply get the following error. + + modinfo: ERROR: Module XXXXXX not found. + +If you want to check what parameters are available in a given built-in module, and what their values are, you can instead examine the content in **/sys/module** directory. + +Under /sys/module directory, you will find sub-directories named after existing kernel modules (both built-in and loadable). Then in each module directory, there is a directory named "parameters", which lists all available parameters for the module. + +For example, let's say you want to find out parameters of a built-in module called tcp_cubic (the default TCP implementation of the kernel). 
Then you can do this: + + $ ls /sys/module/tcp_cubic/parameters + +And check the value of each parameter by reading a corresponding file. + + $ cat /sys/module/tcp_cubic/parameters/tcp_friendliness + +![](https://farm6.staticflickr.com/5639/21293886250_a199b9c8f7_c.jpg) + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/find-information-builtin-kernel-modules-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file diff --git a/sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md b/sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md new file mode 100644 index 0000000000..aadd2d1e81 --- /dev/null +++ b/sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md @@ -0,0 +1,54 @@ +Linux FAQs with Answers--How to force password change at the next login on Linux +================================================================================ +> **Question**: I manage a Linux server for multiple users to share. I have just created a new user account with some default password, and I want the user to change the default password immediately after the first login. Is there a way to force a user to change his/her password at the next login? + +In multi-user Linux environment, it's a standard practice to create user accounts with some random default password. Then after a successful login, a new user can change the default password to his or her own. For security reasons, it is often recommended to "force" users to change the default password after the first login to make sure that the initial one-time password is no longer used. 
+
+Here is **how to force a user to change his or her password on the next login**.
+
+Every user account in Linux is associated with various password-related configurations and information. For example, it remembers the date of the last password change, the minimum/maximum number of days between password changes, when to expire the current password, etc.
+
+A command-line tool called chage can access and adjust these password expiration configurations. You can use this tool to force a password change for any user at the next login.
+
+To view password expiration information of a particular user (e.g., alice), run the following command. Note that you need root privilege only when you are checking the password age information of a user other than yourself.
+
+    $ sudo chage -l alice
+
+![](https://c1.staticflickr.com/1/727/21955581605_5471e61ee0_c.jpg)
+
+### Force Password Change for a User ###
+
+If you want to force a user to change his or her password, use the following command.
+
+    $ sudo chage -d0 alice
+
+Originally the "-d" option is supposed to set the "age" of a password (in terms of the number of days since January 1st, 1970 when the password was last changed). So "-d0" indicates that the password was changed on January 1st, 1970, which essentially expires the current password, and causes it to be changed on the next login.
+
+Another way to expire the current password is via the passwd command.
+
+    $ sudo passwd -e alice
+
+The above command has the same effect as "chage -d0", making the current password of the user expire immediately.
+
+Now check the password information of the user again, and you will see:
+
+![](https://c2.staticflickr.com/6/5770/21767501480_ba88f00d80_c.jpg)
+
+When you log in again, you will be asked to change the password. You will need to verify the current password one more time before the change.
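As a side note on the day-count form that "-d" uses: chage counts days since January 1st, 1970 (the Unix epoch), so a value of 0 lands on a date so far in the past that the password is instantly considered expired. A small illustration of that arithmetic (GNU date assumed):

```shell
# chage -d takes "days since 1970-01-01"; 0 therefore means the epoch itself.
seconds=$(date -u +%s)           # seconds since 1970-01-01 00:00:00 UTC
days=$(( seconds / 86400 ))      # 86400 seconds per day
echo "today is day $days since the epoch"
```

Any positive day number in the past would work the same way; 0 is simply the earliest and most unambiguous choice.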
+ +![](https://c2.staticflickr.com/6/5835/21929638636_eed4d69cb9_c.jpg) + +To set more comprehensive password policies (e.g., password complexity, reuse prevention), you can use PAM. See [the article][1] for more detail. + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/force-password-change-next-login-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:http://xmodulo.com/set-password-policy-linux.html \ No newline at end of file From 097f331da2b62d82e480f22c7123f02e2463cfce Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 12 Oct 2015 15:35:59 +0800 Subject: [PATCH 685/697] =?UTF-8?q?20151012-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ices from Ubuntu Command Line Using Mop.md | 90 +++++++++++++++++++ ...ber sed and awk All Linux admins should.md | 60 +++++++++++++ 2 files changed, 150 insertions(+) create mode 100644 sources/tech/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md create mode 100644 sources/tech/20151012 Remember sed and awk All Linux admins should.md diff --git a/sources/tech/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md b/sources/tech/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md new file mode 100644 index 0000000000..b23e772ca3 --- /dev/null +++ b/sources/tech/20151012 How to Monitor Stock Prices from Ubuntu Command Line Using Mop.md @@ -0,0 +1,90 @@ +How to Monitor Stock Prices from Ubuntu Command Line Using Mop +================================================================================ +Having a side income is always good, especially when you can easily manage the work along with your full time job. 
If your regular work involves working on an Internet-connected computer, trading stocks is a popular option to earn a few extra bucks.
+
+While there are quite a few stock-monitoring applications available for Linux, most of them are GUI-based. What if you're a Linux professional who spends a lot (or all) of your time working on machines that do not have any GUI installed? Are you out of luck? Well, no, there are some command line stock-tracking tools, including Mop, which we'll be discussing in this article.
+
+### Mop ###
+
+Mop, as already mentioned in the introduction above, is a command line tool that displays continuous and updated information about the US stock markets and individual stocks. Implemented in the Go programming language, the project is the brainchild of Michael Dvorkin.
+
+### Download and Install ###
+
+Since the project is implemented in Go, before installing the tool you'll first have to make sure that the language toolchain is installed on your machine. Following are the steps required to install Go on a Debian-based system like Ubuntu:
+
+    sudo apt-get install golang
+    mkdir ~/workspace
+    echo 'export GOPATH="$HOME/workspace"' >> ~/.bashrc
+    source ~/.bashrc
+
+Once Go is installed, the next step is to install the Mop tool and set up the environment, something which you can do by running the following commands:
+
+    sudo apt-get install git
+    go get github.com/michaeldv/mop
+    cd $GOPATH/src/github.com/michaeldv/mop
+    make install
+    export PATH="$PATH:$GOPATH/bin"
+
+Once done, just run the following command to execute Mop:
+
+    mop
+
+### Features ###
+
+When you run the Mop command for the first time, you'll see an output similar to the following.
+
+![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-first-run.jpg)
+
+As you can see in the image above, the output – which auto-refreshes frequently – contains information related to various popular stock exchanges around the world as well as individual stocks.
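One practical note on the installation above: the `export PATH="$PATH:$GOPATH/bin"` step only affects the current shell session, so the binary may stop being found after you open a new terminal. You may want to persist the setting the same way the guide persists GOPATH (a sketch; adjust if your shell configuration lives elsewhere):

```shell
# Make $GOPATH/bin (where `make install` placed the binary) available
# in future shells as well, not just the current one:
echo 'export PATH="$PATH:$GOPATH/bin"' >> ~/.bashrc
source ~/.bashrc
```

After this, new login shells will pick up the Go bin directory automatically.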
+ +### Add/remove stocks ### + +Mop allows you to easily add/remove individual stocks to and from the output list. To add a stock, all you have to do is to press “+” and mention the stock listing name when prompted. For example, the following screenshot was taken while adding Facebook (FB) to the list. + +![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-add-stock.png) + +As I pressed the “+” key, a row containing text “Add tickers:” appeared, prompting me to add the stock listing name – I added FB for Facebook and pressed Enter. The output refreshed, and the new stock was added to the list: + +![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-stock-added.png) + +Similarly, you can delete a stock listing by pressing “-” and mentioning its name. + +#### Group stocks based on value #### + +There is a way to group stocks based on whether their value is going up or down – all you have to do is to press the “g” key. Following this, the stocks which are advancing will be grouped together and shown in green, while those whose value is going down will be represented in black. + +Here is an example screenshot. + +![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-group-stocks-profit-loss.png) + +#### Column sort #### + +Mop also allows you to change the sort order of individual columns. For this you first need to press “o” (this will select the first column by default), and then use the left and right arrow keys to select the column you want to sort. Once done, press enter to sort the column contents. + +For example, the following screenshot shows the output after the contents of the first column were sorted in descending alphabetical order. + +![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-change-order.png) + +**Note**: to better understand, compare it with the previous screenshot. 
+ +#### Other options #### + +Other available options include “p” for pausing market data and stock updates, “q” or “esc” for quitting the command line application, and “?” for displaying the help page. + +![](https://www.maketecheasier.com/assets/uploads/2015/09/mop-help.png) + +### Conclusion ### + +Mop is a basic stock monitoring tool that doesn’t offer a plethora of features, but does what it promises. It’s quite obvious that the tool is not for stock trading professionals but is still a decent choice if all you want to do is some basic stock tracking on a command line-only machine. + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/monitor-stock-prices-ubuntu-command-line/ + +作者:[Himanshu Arora][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/himanshu/ \ No newline at end of file diff --git a/sources/tech/20151012 Remember sed and awk All Linux admins should.md b/sources/tech/20151012 Remember sed and awk All Linux admins should.md new file mode 100644 index 0000000000..f52f748859 --- /dev/null +++ b/sources/tech/20151012 Remember sed and awk All Linux admins should.md @@ -0,0 +1,60 @@ +Remember sed and awk? All Linux admins should +================================================================================ +![](http://images.techhive.com/images/article/2015/03/linux-100573790-primary.idge.jpg) + +Credit: Shutterstock + +**We aren’t doing the next generation of Linux and Unix admins any favors by forgetting init scripts and fundamental tools** + +I happened across a post on Reddit by chance, [asking about textfile manipulation][1]. It was a fairly simple request, similar to those that folks in Unix see nearly every day. In this case, it was how to remove all duplicate lines in a file, keeping one instance of each. 
This sounds relatively easy, but can get a bit complicated if the source file is sufficiently large and random. + +There are countless answers to this problem. You could write a script in nearly any language to do this, with varying levels of complexity and time investment, which I suspect is what most would do. It might take 20 or 60 minutes depending on skill level, but armed with Perl, Python, or Ruby, you could make quick work of it. + +Or you could use the answer stated in that thread, which warmed my heart: Just use awk. + +That answer is the most concise and simplest solution to the problem by far. It’s one line: + + awk '!seen[$0]++' . + +Let’s take a look at this. + +In this command, there’s a lot of hidden code. Awk is a text processing language, and as such it makes a lot of assumptions. For starters, what you see here is actually the meat of a for loop. Awk assumes you want to loop through every line of the input file, so you don’t need to explicitly state it. Awk also assumes you want to print the postprocessed output, so you don’t need to state that either. Finally, Awk then assumes the loop ends when the last statement finishes, so no need to state it. + +The string seen in this example is the name given to an associative array. $0 is a variable that represents the entirety of the current line of the file. Thus, this command translates to “Evaluate every line in this file, and if you haven’t seen this line before, print it.” Awk does this by adding $0 to the seen array if it doesn’t already exist and incrementing the value so that it will not match the pattern the next time around and, thus, not print. + +Some will see this as elegant, while others may see this as obfuscation. Anyone who uses awk on a daily basis will be in the first group. Awk is designed to do this. You can write multiline programs in awk. You can even write [disturbingly complex functions in awk][2]. 
But at the end of the day, awk is designed to do text processing, generally within a pipe. Eliminating the extraneous cruft of loop definition is simply a shortcut for a very common use case. If you like, you could write the same thing as the following:
+
+    awk '{ if (!seen[$0]) print $0; seen[$0]++ }'
+
+It would lead to the same result.
+
+Awk is the perfect tool for this job. Nevertheless, I believe many admins -- especially newer admins -- would jump into [Bash][3] or Python to try to accomplish this task, because knowledge of awk and what it can do seems to be fading as time goes on. I think it may be an indicator of things to come, where problems that have been solved for decades suddenly emerge again, based on lack of exposure to the previous solutions.
+
+The shell, grep, sed, and awk are fundamentals of Unix computing. If you're not completely comfortable with their use, you're artificially hamstrung, because they form the basis of interaction with Unix systems via the CLI and shell scripting. One of the best ways to learn how these tools work is by observing and working with live examples, which every Unix flavor has in spades with their init systems -- or had, in the case of Linux distros that have adopted [systemd][4].
+
+Millions of Unix admins learned how shell scripting and Unix tools worked by reading, writing, modifying, and working with init scripts. Init scripts differ greatly from OS to OS, even from distribution to distribution in the case of Linux, but they are all rooted in sh, and they all use core CLI tools like sed, awk, and grep.
+
+I've heard many complaints that init scripts are "ancient" and "difficult," but in fact, init scripts use the same tools that Unix admins work with every day, and thus provide an excellent way to become more familiar and comfortable with those tools. Saying that init scripts are hard to read or difficult to work with is to admit that you lack fundamental familiarity with the Unix toolset.
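Returning to the dedup one-liner for a moment, a self-contained comparison makes its behavior concrete (the input file here is fabricated for illustration): the awk idiom keeps the first occurrence of each line in its original position, whereas `sort -u` also removes duplicates but reorders the input -- often not what you want with config files or logs.

```shell
printf 'beta\nalpha\nbeta\ngamma\nalpha\n' > /tmp/dupes.txt

awk '!seen[$0]++' /tmp/dupes.txt   # prints: beta, alpha, gamma (first-seen order)
sort -u /tmp/dupes.txt             # prints: alpha, beta, gamma (sorted order)
```

The order-preserving property is the reason the awk answer is more than just a shorter `sort -u`.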
+ +Speaking of things found on Reddit, I also came across this question from a budding Linux sys admin, [asking whether he should bother to learn sysvinit][5]. Most of the answers in the thread are good -- yes, definitely learn sysvinit and systemd. One commenter even notes that init scripts are a great way to learn Bash, and another states that the Fortune 50 company he works for has no plans to move to a systemd-based release. + +But it concerns me that this is a question at all. If we continue down the path of eliminating scripts and roping off core system elements within our operating systems, we will inadvertently make it harder for new admins to learn the fundamental Unix toolset due to the lack of exposure. + +I’m not sure why some want to cover up Unix internals with abstraction after abstraction, but such a path may reduce a generation of Unix admins to hapless button pushers dependent on support contracts. I’m pretty sure that would not be a good development. + +-------------------------------------------------------------------------------- + +via: http://www.infoworld.com/article/2985804/linux/remember-sed-awk-linux-admins-should.html + +作者:[Paul Venezia][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.infoworld.com/author/Paul-Venezia/ +[1]:https://www.reddit.com/r/linuxadmin/comments/3lwyko/how_do_i_remove_every_occurence_of_duplicate_line/ +[2]:http://intro-to-awk.blogspot.com/2008/08/awk-more-complex-examples.html +[3]:http://www.infoworld.com/article/2613338/linux/linux-how-to-script-a-bash-crash-course.html +[4]:http://www.infoworld.com/article/2608798/data-center/systemd--harbinger-of-the-linux-apocalypse.html +[5]:https://www.reddit.com/r/linuxadmin/comments/3ltq2y/when_i_start_learning_about_linux_administration/ \ No newline at end of file From ee9e3e82c867fb401b25edb71fdd0758924d2566 Mon Sep 17 00:00:00 
2001
From: DeadFire
Date: Mon, 12 Oct 2015 16:37:11 +0800
Subject: [PATCH 686/697] =?UTF-8?q?20151012-4=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...co Virtual Private Networking on Docker.md | 322 ++++++++++++++++++
 ...up DockerUI--a Web Interface for Docker.md | 111 ++++++
 ...etup Red Hat Ceph Storage on CentOS 7.0.md | 250 ++++++++++++++
 3 files changed, 683 insertions(+)
 create mode 100644 sources/tech/20151012 Getting Started to Calico Virtual Private Networking on Docker.md
 create mode 100644 sources/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md
 create mode 100644 sources/tech/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md

diff --git a/sources/tech/20151012 Getting Started to Calico Virtual Private Networking on Docker.md b/sources/tech/20151012 Getting Started to Calico Virtual Private Networking on Docker.md
new file mode 100644
index 0000000000..27d60729e9
--- /dev/null
+++ b/sources/tech/20151012 Getting Started to Calico Virtual Private Networking on Docker.md
@@ -0,0 +1,322 @@
+Getting Started with Calico Virtual Private Networking on Docker
+================================================================================
+Calico is free and open source software for virtual networking in data centers. It takes a pure Layer 3 approach to highly scalable cloud virtual networking. It integrates seamlessly with cloud orchestration systems such as OpenStack and Docker clusters in order to enable secure IP communication between virtual machines and containers. It implements a highly efficient vRouter in each node that takes advantage of the existing Linux kernel forwarding engine. One of Calico's most compelling capabilities is that it can peer directly with the data center's physical fabric, whether L2 or L3, without NAT, tunnels, on/off ramps, or overlays.
Calico makes full use of Docker to run its containers on the nodes, which makes it portable and very easy to ship, pack and deploy. Calico has the following salient features out of the box.
+
+- It can scale to tens of thousands of servers and millions of workloads.
+- Calico is easy to deploy, operate and diagnose.
+- It is open source software licensed under Apache License version 2 and uses open standards.
+- It supports containers, virtual machines and bare-metal workloads.
+- It supports both IPv4 and IPv6 internet protocols.
+- It is designed internally to support rich, flexible and secure network policy.
+
+In this tutorial, we'll set up virtual private networking between two nodes running Calico, using Docker technology. Here are some easy steps on how we can do that.
+
+### 1. Installing etcd ###
+
+To get started with Calico virtual private networking, we'll need a linux machine running etcd. As CoreOS comes with etcd preinstalled and preconfigured, we can use CoreOS, but if we want to configure Calico on other linux distributions, we'll need to set it up ourselves. As we are running Ubuntu 14.04 LTS, we'll first install and configure etcd on our machine. To install etcd on our Ubuntu box, we'll add the official PPA repository of Calico by running the following command on the machine on which we want to run the etcd server. Here, we'll be installing etcd on our 1st node.
+
+    # apt-add-repository ppa:project-calico/icehouse
+
+    The primary source of Ubuntu packages for Project Calico based on OpenStack Icehouse, an open source solution for virtual networking in cloud data centers.
Find out more at http://www.projectcalico.org/
+    More info: https://launchpad.net/~project-calico/+archive/ubuntu/icehouse
+    Press [ENTER] to continue or ctrl-c to cancel adding it
+    gpg: keyring `/tmp/tmpi9zcmls1/secring.gpg' created
+    gpg: keyring `/tmp/tmpi9zcmls1/pubring.gpg' created
+    gpg: requesting key 3D40A6A7 from hkp server keyserver.ubuntu.com
+    gpg: /tmp/tmpi9zcmls1/trustdb.gpg: trustdb created
+    gpg: key 3D40A6A7: public key "Launchpad PPA for Project Calico" imported
+    gpg: Total number processed: 1
+    gpg: imported: 1 (RSA: 1)
+    OK
+
+Then, we'll need to edit /etc/apt/preferences and make changes to prefer Calico-provided packages for Nova and Neutron.
+
+    # nano /etc/apt/preferences
+
+We'll need to add the following lines into it.
+
+    Package: *
+    Pin: release o=LP-PPA-project-calico-*
+    Pin-Priority: 100
+
+![Calico PPA Config](http://blog.linoxide.com/wp-content/uploads/2015/10/calico-ppa-config.png)
+
+Next, we'll also add the official BIRD PPA for Ubuntu 14.04 LTS, so that bug fixes are installed before they're available in the main Ubuntu repo.
+
+    # add-apt-repository ppa:cz.nic-labs/bird
+
+    The BIRD Internet Routing Daemon PPA (by upstream & .deb maintainer)
+    More info: https://launchpad.net/~cz.nic-labs/+archive/ubuntu/bird
+    Press [ENTER] to continue or ctrl-c to cancel adding it
+    gpg: keyring `/tmp/tmphxqr5hjf/secring.gpg' created
+    gpg: keyring `/tmp/tmphxqr5hjf/pubring.gpg' created
+    gpg: requesting key F9C59A45 from hkp server keyserver.ubuntu.com
+    gpg: /tmp/tmphxqr5hjf/trustdb.gpg: trustdb created
+    gpg: key F9C59A45: public key "Launchpad Datové schránky" imported
+    gpg: Total number processed: 1
+    gpg: imported: 1 (RSA: 1)
+    OK
+
+Now that the PPAs are added, we'll update the local repository index and then install etcd on our machine.
+
+    # apt-get update
+
+To install etcd on our Ubuntu machine, we'll run the following apt command.
+
+    # apt-get install etcd python-etcd
+
+### 2.
Starting Etcd ###
+
+After the installation is complete, we'll configure etcd. Here, we'll edit **/etc/init/etcd.conf** using a text editor, appending the exec **/usr/bin/etcd** line and its options so that it looks like the configuration below.
+
+    # nano /etc/init/etcd.conf
+    exec /usr/bin/etcd --name="node1" \
+    --advertise-client-urls="http://10.130.65.71:2379,http://10.130.65.71:4001" \
+    --listen-client-urls="http://0.0.0.0:2379,http://0.0.0.0:4001" \
+    --listen-peer-urls "http://0.0.0.0:2380" \
+    --initial-advertise-peer-urls "http://10.130.65.71:2380" \
+    --initial-cluster-token $(uuidgen) \
+    --initial-cluster "node1=http://10.130.65.71:2380" \
+    --initial-cluster-state "new"
+
+![Configuring ETCD](http://blog.linoxide.com/wp-content/uploads/2015/10/configuring-etcd.png)
+
+**Note**: In the above configuration, replace 10.130.65.71 and node1 with the private ip address and hostname of your etcd server box. When done editing, save and exit the file.
+
+We can get the private ip address of our etcd server by running the following command.
+
+    # ifconfig
+
+![ifconfig](http://blog.linoxide.com/wp-content/uploads/2015/10/ifconfig1.png)
+
+As our etcd configuration is done, we'll now start the etcd service on our Ubuntu node. To start the etcd daemon, we'll run the following command.
+
+    # service etcd start
+
+After that, we'll check whether etcd is really running or not. To ensure that, we'll run the following command.
+
+    # service etcd status
+
+### 3. Installing Docker ###
+
+Next, we'll install Docker on both of our nodes running Ubuntu. To install the latest release of Docker, we'll simply run the following command.
+
+    # curl -sSL https://get.docker.com/ | sh
+
+![Docker Engine Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-engine-installation.png)
+
+After the installation is completed, we'll start the docker daemon in order to make sure that it's running before we move on to Calico.
+
+    # service docker restart
+
+    docker stop/waiting
+    docker start/running, process 3056
+
+### 3. Installing Calico ###
+
+We'll now install calico on our linux machines in order to run the calico containers. We'll need to install Calico on every node which we want to connect to the Calico network. To install Calico, we'll run the following command under root or sudo permission.
+
+#### On 1st Node ####
+
+    # wget https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
+
+    --2015-09-28 12:08:59-- https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
+    Resolving github.com (github.com)... 192.30.252.129
+    Connecting to github.com (github.com)|192.30.252.129|:443... connected.
+    ...
+    Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 54.231.9.9
+    Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|54.231.9.9|:443... connected.
+    HTTP request sent, awaiting response... 200 OK
+    Length: 6166661 (5.9M) [application/octet-stream]
+    Saving to: 'calicoctl'
+    100%[=========================================>] 6,166,661 1.47MB/s in 6.7s
+    2015-09-28 12:09:08 (898 KB/s) - 'calicoctl' saved [6166661/6166661]
+
+    # chmod +x calicoctl
+
+Having made it executable, we'll now make the calicoctl binary available as a command from any directory. To do so, we'll run the following command.
+
+    # mv calicoctl /usr/bin/
+
+#### On 2nd Node ####
+
+    # wget https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
+
+    --2015-09-28 12:09:03-- https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
+    Resolving github.com (github.com)... 192.30.252.131
+    Connecting to github.com (github.com)|192.30.252.131|:443... connected.
+    ...
+    Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 54.231.8.113
+    Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|54.231.8.113|:443... connected.
+    HTTP request sent, awaiting response... 200 OK
+    Length: 6166661 (5.9M) [application/octet-stream]
+    Saving to: 'calicoctl'
+    100%[=========================================>] 6,166,661 1.47MB/s in 5.9s
+    2015-09-28 12:09:11 (1022 KB/s) - 'calicoctl' saved [6166661/6166661]
+
+    # chmod +x calicoctl
+
+Having made it executable, we'll make the calicoctl binary available as a command from any directory. To do so, we'll run the following command.
+
+    # mv calicoctl /usr/bin/
+
+Likewise, we'll need to execute the above commands to install it on every other node.
+
+### 4. Starting Calico services ###
+
+After we have installed calico on each of our nodes, we'll start our Calico services. To start the calico services, we'll run the following commands.
+
+#### On 1st Node ####
+
+    # calicoctl node
+
+    WARNING: Unable to detect the xt_set module. Load with `modprobe xt_set`
+    WARNING: Unable to detect the ipip module. Load with `modprobe ipip`
+    No IP provided. Using detected IP: 10.130.61.244
+    Pulling Docker image calico/node:v0.6.0
+    Calico node is running with id: fa0ca1f26683563fa71d2ccc81d62706e02fac4bbb08f562d45009c720c24a43
+
+#### On 2nd Node ####
+
+Next, we'll export a global variable in order to connect our calico nodes to the same etcd server, which is hosted on node1 in our case.
To do so, we'll run the following command on each of our nodes.
+
+    # export ETCD_AUTHORITY=10.130.61.244:2379
+
+Then, we'll run the calicoctl node container on our second node.
+
+    # calicoctl node
+
+    WARNING: Unable to detect the xt_set module. Load with `modprobe xt_set`
+    WARNING: Unable to detect the ipip module. Load with `modprobe ipip`
+    No IP provided. Using detected IP: 10.130.61.245
+    Pulling Docker image calico/node:v0.6.0
+    Calico node is running with id: 70f79c746b28491277e28a8d002db4ab49f76a3e7d42e0aca8287a7178668de4
+
+This command should be executed on every node on which we want to start Calico services. The above command starts a container on the respective node. To check whether the container is running or not, we'll run the following docker command.
+
+    # docker ps
+
+![Docker Running Containers](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-running-containers.png)
+
+If we see output similar to that shown above, we can confirm that the Calico containers are up and running.
+
+### 5. Starting Containers ###
+
+Next, we'll start a few containers on each of our nodes running Calico services. We'll assign a different name to each of the containers running Ubuntu. Here, workload-A, workload-B, etc. have been assigned as the unique name for each of the containers. To do so, we'll run the following command.
+
+#### On 1st Node ####
+
+    # docker run --net=none --name workload-A -tid ubuntu
+
+    Unable to find image 'ubuntu:latest' locally
+    latest: Pulling from library/ubuntu
+    ...
+    91e54dfb1179: Already exists
+    library/ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
+    Digest: sha256:73fbe2308f5f5cb6e343425831b8ab44f10bbd77070ecdfbe4081daa4dbe3ed1
+    Status: Downloaded newer image for ubuntu:latest
+    a1ba9105955e9f5b32cbdad531cf6ecd9cab0647d5d3d8b33eca0093605b7a18
+
+    # docker run --net=none --name workload-B -tid ubuntu
+
+    89dd3d00f72ac681bddee4b31835c395f14eeb1467300f2b1b9fd3e704c28b7d
+
+#### On 2nd Node ####
+
+    # docker run --net=none --name workload-C -tid ubuntu
+
+    Unable to find image 'ubuntu:latest' locally
+    latest: Pulling from library/ubuntu
+    ...
+    91e54dfb1179: Already exists
+    library/ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
+    Digest: sha256:73fbe2308f5f5cb6e343425831b8ab44f10bbd77070ecdfbe4081daa4dbe3ed1
+    Status: Downloaded newer image for ubuntu:latest
+    24e2d5d7d6f3990b534b5643c0e483da5b4620a1ac2a5b921b2ba08ebf754746
+
+    # docker run --net=none --name workload-D -tid ubuntu
+
+    c6f28d1ab8f7ac1d9ccc48e6e4234972ed790205c9ca4538b506bec4dc533555
+
+Similarly, if we have more nodes, we can run an ubuntu docker container on each of them by running the above command with a different container name.
+
+### 6. Assigning IP addresses ###
+
+After we have got our docker containers running on each of our hosts, we'll add networking support to the containers. Now, we'll assign a new ip address to each of the containers using calicoctl. This will add a new network interface to the containers with the assigned ip addresses. To do so, we'll run the following commands on the hosts running the containers.
+
+#### On 1st Node ####
+
+    # calicoctl container add workload-A 192.168.0.1
+    # calicoctl container add workload-B 192.168.0.2
+
+#### On 2nd Node ####
+
+    # calicoctl container add workload-C 192.168.0.3
+    # calicoctl container add workload-D 192.168.0.4
+
+### 7.
Adding Policy Profiles ###
+
+After our containers have networking interfaces and ip addresses assigned, we'll need to add policy profiles to enable networking between the containers. After adding the profiles, the containers will be able to communicate with each other only if they have a common profile assigned. That means that if they have different profiles assigned, they won't be able to communicate with each other. So, before we can assign them, we'll first need to create some new profiles. That can be done on either of the hosts. Here, we'll run the following command on the 1st Node.
+
+    # calicoctl profile add A_C
+
+    Created profile A_C
+
+    # calicoctl profile add B_D
+
+    Created profile B_D
+
+After the profiles have been created, we'll simply add our workloads to the required profiles. Here, in this tutorial, we'll place workloads A and C in a common profile A_C, and workloads B and D in a common profile B_D. To do so, we'll run the following commands on our hosts.
+
+#### On 1st Node ####
+
+    # calicoctl container workload-A profile append A_C
+    # calicoctl container workload-B profile append B_D
+
+#### On 2nd Node ####
+
+    # calicoctl container workload-C profile append A_C
+    # calicoctl container workload-D profile append B_D
+
+### 8. Testing the Network ###
+
+After we've added policy profiles to each of our containers using calicoctl, we'll now test whether our networking is working as expected. We'll take a node and a workload and try to communicate with the other containers running on the same or a different node. Due to the profiles, we should be able to communicate only with the containers having a common profile. So, in this case, workload A should be able to communicate only with C and vice versa, whereas workload A shouldn't be able to communicate with B or D. To test the network, we'll ping the containers having common profiles from the 1st host, which runs workloads A and B.
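The pass/fail matrix that these ping tests should produce follows mechanically from one rule: two workloads can reach each other exactly when they share a profile. Here is a minimal sketch of that rule in plain shell (the profile assignments are hard-coded from the steps above; this only models the expected result and does not touch Docker or Calico):

```shell
#!/bin/sh
# Expected-reachability helper: with Calico profiles, two workloads can
# communicate if and only if they share a profile.
profile_of() {
    case "$1" in
        workload-A|workload-C) echo "A_C" ;;
        workload-B|workload-D) echo "B_D" ;;
    esac
}

can_talk() {
    # Compare the two workloads' profiles and report the expectation.
    if [ "$(profile_of "$1")" = "$(profile_of "$2")" ]; then
        echo "$1 -> $2: reachable"
    else
        echo "$1 -> $2: blocked"
    fi
}

can_talk workload-A workload-C   # reachable (both in A_C)
can_talk workload-B workload-D   # reachable (both in B_D)
can_talk workload-A workload-D   # blocked (different profiles)
can_talk workload-B workload-C   # blocked (different profiles)
```

Comparing each ping result against this helper's output is a quick way to confirm the policy behaves as designed.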
+
+We'll first ping workload-C, which has ip 192.168.0.3, using workload-A as shown below.
+
+    # docker exec workload-A ping -c 4 192.168.0.3
+
+Then, we'll ping workload-D, which has ip 192.168.0.4, using workload-B as shown below.
+
+    # docker exec workload-B ping -c 4 192.168.0.4
+
+![Ping Test Success](http://blog.linoxide.com/wp-content/uploads/2015/10/ping-test-success.png)
+
+Now, we'll check whether we're able to ping the containers having different profiles. We'll now ping workload-D, which has ip address 192.168.0.4, using workload-A.
+
+    # docker exec workload-A ping -c 4 192.168.0.4
+
+After that, we'll try to ping workload-C, which has ip address 192.168.0.3, using workload-B.
+
+    # docker exec workload-B ping -c 4 192.168.0.3
+
+![Ping Test Failed](http://blog.linoxide.com/wp-content/uploads/2015/10/ping-test-failed.png)
+
+Hence, the workloads having the same profile could ping each other, whereas those having different profiles could not.
+
+### Conclusion ###
+
+Calico is an awesome project providing an easy way to configure a virtual network using the latest docker technology. It is considered a great open source solution for virtual networking in cloud data centers. These days, Calico is being tried out by people on different cloud platforms like AWS, DigitalOcean, GCE and more. As Calico is still experimental, its stable version hasn't been released yet and it is still in pre-release. The project provides well-documented tutorials and manuals on its [official documentation site][1].
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/calico-virtual-private-networking-docker/
+
+作者:[Arun Pyasi][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arunp/
+[1]:http://docs.projectcalico.org/
\ No newline at end of file
diff --git a/sources/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md b/sources/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md
new file mode 100644
index 0000000000..b712c8ebd6
--- /dev/null
+++ b/sources/tech/20151012 How to Setup DockerUI--a Web Interface for Docker.md
@@ -0,0 +1,111 @@
+How to Setup DockerUI - a Web Interface for Docker
+================================================================================
+Docker is gaining popularity day by day. The idea of running a complete operating system inside a container, rather than inside a virtual machine, is an awesome technology. Docker has made the lives of millions of system administrators and developers much easier, letting them get their work done in no time. It is an open source technology that provides an open platform to pack, ship, share and run any application as a lightweight container, regardless of which operating system runs on the host. It has no boundaries of language support, frameworks or packaging systems, and can be run anywhere, anytime, from small home computers to high-end servers. Running and managing docker containers can be a bit difficult and time consuming, so there is a web-based application named DockerUI which makes managing and running containers pretty simple. DockerUI is highly beneficial to people who are not very familiar with the linux command line but want to run containerized applications.
DockerUI is an open source web-based application best known for its beautiful design and its simple, easy-to-use interface for running and managing docker containers.
+
+Here are some easy steps on how we can set up Docker Engine with DockerUI on our linux machine.
+
+### 1. Installing Docker Engine ###
+
+First of all, we'll install the docker engine on our linux machine. Thanks to its developers, docker is very easy to install on any major linux distribution. To install the docker engine, we'll need to run the following command for the distribution we are running.
+
+#### On Ubuntu/Fedora/CentOS/RHEL/Debian ####
+
+Docker maintainers have written an awesome script that can be used to install the docker engine on Ubuntu 15.04/14.10/14.04, CentOS 6.x/7, Fedora 22, RHEL 7 and Debian 8.x distributions of linux. This script detects the distribution of linux installed on our machine, then adds the required repository to the filesystem, updates the local repository index and finally installs the docker engine and required dependencies from it. To install the docker engine using that script, we'll need to run the following command under root or sudo mode.
+
+    # curl -sSL https://get.docker.com/ | sh
+
+#### On OpenSuse/SUSE Linux Enterprise ####
+
+To install the docker engine on a machine running OpenSuse 13.1/13.2 or SUSE Linux Enterprise Server 12, we'll simply execute the zypper command. We'll install docker using zypper, as the latest docker engine is available in the official repository. To do so, we'll run the following command under root/sudo mode.
+
+    # zypper in docker
+
+#### On ArchLinux ####
+
+Docker is available in the official repository of Archlinux as well as in the AUR packages maintained by the community. So, we have two options to install docker on Archlinux. To install docker using the official arch repository, we'll need to run the following pacman command.
+
+    # pacman -S docker
+
+But if we want to install docker from the Archlinux User Repository, i.e. the AUR, then we'll need to execute the following command.
+
+    # yaourt -S docker-git
+
+### 2. Starting Docker Daemon ###
+
+After docker is installed, we'll start the docker daemon so that we can run docker containers and manage them. We'll run the following command to make sure that the docker daemon is installed and to start it.
+
+#### On SysVinit ####
+
+    # service docker start
+
+#### On Systemd ####
+
+    # systemctl start docker
+
+### 3. Installing DockerUI ###
+
+Installing DockerUI is even easier than installing the docker engine. We just need to pull the dockerui image from the Docker Registry Hub and run it inside a container. To do so, we'll simply run the following command.
+
+    # docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui
+
+![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png)
+
+Here, in the above command, as the default port of the dockerui web application server is 9000, we simply map it to the same port on the host with the -p flag. With the -v flag, we mount the docker socket into the container. The --privileged flag is required for hosts using SELinux.
+
+After executing the above command, we'll check whether the dockerui container is running or not by running the following command.
+
+    # docker ps
+
+![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png)
+
+### 4. Pulling an Image ###
+
+Currently, we cannot pull an image directly from DockerUI, so we'll need to pull docker images from the linux console/terminal. To do so, we'll run the following command.
+
+    # docker pull ubuntu
+
+![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png)
+
+The above command will pull an image tagged as ubuntu from the official [Docker Hub][1].
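Under the hood, docker pull is just a call to the Docker Remote API — the same API DockerUI talks to over the socket. As an illustration, the equivalent request can be composed as below (a dry-run sketch: it only builds and prints the curl command rather than sending it; running the printed command needs curl 7.40+ for --unix-socket support):

```shell
#!/bin/sh
# Sketch: the Docker Remote API request behind 'docker pull ubuntu'.
# Dry run only -- we build and print the curl command instead of sending it.
IMAGE="ubuntu"
TAG="latest"
CMD="curl --unix-socket /var/run/docker.sock -X POST http://localhost/images/create?fromImage=${IMAGE}&tag=${TAG}"
echo "$CMD"
```

Nothing here is DockerUI-specific; /images/create is the standard Remote API endpoint for pulling an image, so the same request works against any docker daemon socket.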
Similarly, we can pull more images that we require and are available in the hub.
+
+### 5. Managing with DockerUI ###
+
+After we have started the dockerui container, we can now use it to start, pause, stop and remove docker containers and images, along with the many other activities dockerui features. First of all, we'll need to open the web application using our web browser. To do so, we'll need to point our browser to http://ip-address:9000 or http://mydomain.com:9000, according to the configuration of our system. By default, there is no login authentication needed for user access, but we can configure our web server to add authentication. To start a container, we first need an image of the required application we want to run a container with.
+
+#### Create a Container ####
+
+To create a container, we'll need to go to the section named Images and then click on the image id from which we want to create a container. After clicking on the required image id, we'll click on the Create button, and then we'll be asked to enter the required properties for our container. After everything is set and done, we'll click on the Create button to finally create the container.
+
+![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png)
+
+#### Stop a Container ####
+
+To stop a container, we'll need to go to the Containers page and then select the container we want to stop. Now, we'll click on the Stop option, which we can see under the Actions drop-down menu.
+
+![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png)
+
+#### Pause and Resume ####
+
+To pause a container, we simply select the required container by keeping a check mark on it, and then click the Pause option under Actions.
This will pause the running container; we can then simply resume it by selecting the Unpause option from the Actions drop-down menu.
+
+#### Kill and Remove ####
+
+As with the tasks above, it's pretty easy to kill and remove a container or an image. We just need to check/select the required container or image and then select the Kill or Remove button from the application, according to our need.
+
+### Conclusion ###
+
+DockerUI makes beautiful use of the Docker Remote API to provide an awesome web interface for managing docker containers. The developers have designed and developed this application in pure HTML and JS. It is currently incomplete and still under heavy development, so we don't recommend using it in production at this time. It makes it pretty easy for users to manage their containers and images with simple clicks, without needing to execute lines of commands for small jobs. If we want to contribute to DockerUI, we can simply visit its [Github Repository][2]. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you!
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/
+
+作者:[Arun Pyasi][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arunp/
+[1]:https://hub.docker.com/
+[2]:https://github.com/crosbymichael/dockerui/
\ No newline at end of file
diff --git a/sources/tech/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md b/sources/tech/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md
new file mode 100644
index 0000000000..6f5f3f3b65
--- /dev/null
+++ b/sources/tech/20151012 How to Setup Red Hat Ceph Storage on CentOS 7.0.md
@@ -0,0 +1,250 @@
+How to Setup Red Hat Ceph Storage on CentOS 7.0
+================================================================================
+Ceph is an open source software platform that stores data on a single distributed computer cluster. When you are planning to build a cloud, then on top of the other requirements you have to decide how to implement your storage. Open source Ceph is one of Red Hat's mature technologies, based on an object-store system called RADOS, with a set of gateway APIs that present the data in block, file, and object modes. As a result of its open source nature, this portable storage platform may be installed and used in public or private clouds. The topology of a Ceph cluster is designed around replication and information distribution, which are intrinsic to the design and provide data integrity. It is designed to be fault-tolerant, and can run on commodity hardware, but can also be run on a number of more advanced systems with the right setup.
+
+Ceph can be installed on any Linux distribution, but it requires a recent kernel and other up-to-date libraries in order to run properly.
In this tutorial, we will be using CentOS 7.0 with minimal installation packages on it.
+
+### System Resources ###
+
+    **CEPH-STORAGE**
+    OS: CentOS Linux 7 (Core)
+    RAM: 1 GB
+    CPU: 1 CPU
+    DISK: 20 GB
+    Network: 45.79.136.163
+    FQDN: ceph-storage.linoxide.com
+
+    **CEPH-NODE**
+    OS: CentOS Linux 7 (Core)
+    RAM: 1 GB
+    CPU: 1 CPU
+    DISK: 20 GB
+    Network: 45.79.171.138
+    FQDN: ceph-node.linoxide.com
+
+### Pre-Installation Setup ###
+
+There are a few steps that we need to perform on each of our nodes before setting up Ceph storage. The first thing is to make sure that each node's networking is configured with an FQDN that is reachable from the other nodes.
+
+**Configure Hosts**
+
+To set up the hosts entries on each node, open the default hosts configuration file as shown below.
+
+    # vi /etc/hosts
+
+----------
+
+    45.79.136.163 ceph-storage ceph-storage.linoxide.com
+    45.79.171.138 ceph-node ceph-node.linoxide.com
+
+**Install VMware Tools**
+
+If you are working in a VMware virtual environment, it's recommended to have its open VM tools installed. You can install them using the command below.
+
+    # yum install -y open-vm-tools
+
+**Firewall Setup**
+
+If you are working in a restrictive environment where your local firewall is enabled, then make sure that the following ports are allowed on your Ceph storage admin node and client nodes.
+
+You must open ports 80, 2003, and 4505-4506 on your admin Calamari node, and allow inbound port 80 to the Ceph admin or Calamari node, so that clients in your network can access the Calamari web user interface.
+
+You can start and enable the firewall in CentOS 7 with the commands given below.
+
+    # systemctl start firewalld
+    # systemctl enable firewalld
+
+To allow the mentioned ports on the admin Calamari node, run the following commands.
+
+    # firewall-cmd --zone=public --add-port=80/tcp --permanent
+    # firewall-cmd --zone=public --add-port=2003/tcp --permanent
+    # firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
+    # firewall-cmd --reload
+
+On the Ceph Monitor nodes you have to allow the following port in the firewall.
+
+    # firewall-cmd --zone=public --add-port=6789/tcp --permanent
+
+Then allow the following range of default ports for talking to clients and monitors and for sending data to other OSDs.
+
+    # firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
+
+It is quite reasonable to disable the firewall and SELinux settings if you are working in a non-production environment, so we are going to disable the firewall and SELinux in our test environment.
+
+    # systemctl stop firewalld
+    # systemctl disable firewalld
+
+**System Update**
+
+Now update your system and then give it a reboot to implement the required changes.
+
+    # yum update
+    # shutdown -r 0
+
+### Setup CEPH User ###
+
+Now we will create a separate sudo user that will be used to install the ceph-deploy utility on each node, and we will allow that user passwordless access on each node, because it needs to install software and configuration files without prompting for passwords on the Ceph nodes.
+
+To create a new user with its own home directory, run the below command on the ceph-storage host.
+
+    [root@ceph-storage ~]# useradd -d /home/ceph -m ceph
+    [root@ceph-storage ~]# passwd ceph
+
+Each user created on the nodes must have sudo rights; you can assign sudo rights to the user by running the following commands as shown.
+
+    [root@ceph-storage ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
+    ceph ALL = (root) NOPASSWD:ALL
+
+    [root@ceph-storage ~]# sudo chmod 0440 /etc/sudoers.d/ceph
+
+### Setup SSH-Key ###
+
+Now we will generate SSH keys on the admin ceph node and then copy the key to each of the Ceph cluster nodes.
+
+Let's run the following commands on the ceph-node to copy its SSH key to ceph-storage.
+
+    [root@ceph-node ~]# ssh-keygen
+    Generating public/private rsa key pair.
+    Enter file in which to save the key (/root/.ssh/id_rsa):
+    Created directory '/root/.ssh'.
+    Enter passphrase (empty for no passphrase):
+    Enter same passphrase again:
+    Your identification has been saved in /root/.ssh/id_rsa.
+    Your public key has been saved in /root/.ssh/id_rsa.pub.
+    The key fingerprint is:
+    5b:*:*:*:*:*:*:*:*:*:c9 root@ceph-node
+    The key's randomart image is:
+    +--[ RSA 2048]----+
+
+----------
+
+    [root@ceph-node ~]# ssh-copy-id ceph@ceph-storage
+
+![SSH key](http://blog.linoxide.com/wp-content/uploads/2015/10/k3.png)
+
+### Configure PID Count ###
+
+To configure the PID count value, we will use the following commands to check the default kernel value. By default, the maximum number of threads is a rather small '32768'.
+We will configure this value to allow a higher number of threads by editing the system configuration file as shown in the image.
+
+![Change PID Value](http://blog.linoxide.com/wp-content/uploads/2015/10/3-PID-value.png)
+
+### Setup Your Administration Node Server ###
+
+With all the networking set up and verified, we will now install ceph-deploy as the user ceph. So, check the hosts entries by opening the file.
+
+    # vim /etc/hosts
+    45.79.136.163 ceph-storage
+    45.79.171.138 ceph-node
+
+Now, to add the CEPH repository, run the below command.
+
+    # rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm
+
+![Adding EPEL](http://blog.linoxide.com/wp-content/uploads/2015/10/k1.png)
+
+Or, create a new file and fill in the CEPH repository parameters, but do not forget to mention your current release and distribution.
+
+    [root@ceph-storage ~]# vi /etc/yum.repos.d/ceph.repo
+
+----------
+
+    [ceph-noarch]
+    name=Ceph noarch packages
+    baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
+    enabled=1
+    gpgcheck=1
+    type=rpm-md
+    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
+
+After this, update your system and install the ceph-deploy package.
+
+### Installing CEPH-Deploy Package ###
+
+To update the system with the latest CEPH repository and other packages, we will run the following command, which also installs ceph-deploy.
+
+    # yum update -y && yum install ceph-deploy -y
+
+### Setup the cluster ###
+
+Create a new directory on the admin ceph-node and move into it, to collect all output files and logs, by using the following commands.
+
+    # mkdir ~/ceph-cluster
+    # cd ~/ceph-cluster
+
+----------
+
+    # ceph-deploy new storage
+
+![setup ceph cluster](http://blog.linoxide.com/wp-content/uploads/2015/10/k4.png)
+
+Upon successful execution of the above command, you can see it creating its configuration files.
+Now, to configure the default CEPH configuration file, open it with any editor and place the following two lines under its global section, reflecting your public network.
+
+    # vim ceph.conf
+    osd pool default size = 1
+    public network = 45.79.0.0/16
+
+### Installing CEPH ###
+
+We are now going to install CEPH on each of the nodes associated with our CEPH cluster. To do so, we use the following command to install CEPH on both of our nodes, that is, ceph-storage and ceph-node, as shown below.
+
+    # ceph-deploy install ceph-node ceph-storage
+
+![installing ceph](http://blog.linoxide.com/wp-content/uploads/2015/10/k5.png)
+
+This will take some time while it processes all the required repositories and installs the required packages.
+
+Once the CEPH installation process is complete on both nodes, we will proceed to create the monitor and gather keys by running the following command on the same node.
+
+    # ceph-deploy mon create-initial
+
+![CEPH Initial Monitor](http://blog.linoxide.com/wp-content/uploads/2015/10/k6.png)
+
+### Setup OSDs and OSD Daemons ###
+
+Now we will set up the disk storage. To do so, first run the below command to list all of your usable disks.
+
+    # ceph-deploy disk list ceph-storage
+
+The result is the list of disks on your storage node that you will use for creating the OSDs. Let's run the following commands, which contain your disk names, as shown below.
+
+    # ceph-deploy disk zap storage:sda
+    # ceph-deploy disk zap storage:sdb
+
+Now, to finalize the OSD setup, let's run the below commands to set up the journaling disk along with the data disk.
+
+    # ceph-deploy osd prepare storage:sdb:/dev/sda
+    # ceph-deploy osd activate storage:/dev/sdb1:/dev/sda1
+
+You will have to repeat the same commands on all the nodes; note that they will wipe everything present on the disks. Afterwards, to have a functioning cluster, we need to copy the various keys and configuration files from the admin ceph-node to all the associated nodes by using the following command.
+
+    # ceph-deploy admin ceph-node ceph-storage
+
+### Testing CEPH ###
+
+We have almost completed the CEPH cluster setup. Let's run the below commands on the admin ceph-node to check the status of the running cluster.
+
+    # ceph status
+    # ceph health
+    HEALTH_OK
+
+So, if you did not get any error message from ceph status, that means you have successfully set up your CEPH storage cluster on CentOS 7.
+
+### Conclusion ###
+
+In this detailed article we learned how to set up a CEPH storage cluster using two virtual machines with CentOS 7 installed on them. The cluster can be used as a backup or as local storage, and can serve other virtual machines by creating pools on it. We hope you found this article helpful. Do share your experiences when you try this at your end.
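As a final aside, the `ceph health` check used in the Testing CEPH section lends itself to a small retry loop when scripting cluster bring-up. The sketch below is illustrative only and not part of the original walkthrough: `health_is_ok` and `wait_for_health` are hypothetical helper names, and the canned demo strings at the bottom stand in for a live cluster.

```shell
# Hedged sketch: poll `ceph health` until the cluster reports HEALTH_OK.
# `health_is_ok` and `wait_for_health` are hypothetical helpers, not part
# of the tutorial; adjust the retry count and sleep for your own cluster.

health_is_ok() {
    # A healthy cluster prints a status line beginning with HEALTH_OK.
    case "$1" in
        HEALTH_OK*) return 0 ;;
        *)          return 1 ;;
    esac
}

wait_for_health() {
    tries=${1:-10}
    i=1
    while [ "$i" -le "$tries" ]; do
        status=$(ceph health 2>/dev/null)
        if health_is_ok "$status"; then
            echo "cluster healthy after $i check(s)"
            return 0
        fi
        sleep 5
        i=$((i + 1))
    done
    echo "cluster still unhealthy: $status" >&2
    return 1
}

# Demo with canned strings instead of a live cluster:
health_is_ok "HEALTH_OK" && echo "status parsing: ok"
health_is_ok "HEALTH_WARN clock skew detected on mon.ceph-node" || echo "status parsing: warn detected"
```

In this tutorial's environment such a script would run on the admin ceph-node after `ceph-deploy admin`; it only reads status and changes no cluster state.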
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/storage/setup-red-hat-ceph-storage-centos-7-0/ + +作者:[Kashif Siddique][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/kashifs/ \ No newline at end of file From 71f363bf918af18b36618ea486c74a982ce47f86 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 12 Oct 2015 21:35:16 +0800 Subject: [PATCH 687/697] Update 20151012 10 Useful Utilities For Linux Users.md --- sources/tech/20151012 10 Useful Utilities For Linux Users.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151012 10 Useful Utilities For Linux Users.md b/sources/tech/20151012 10 Useful Utilities For Linux Users.md index b39679a9fc..24df805fc6 100644 --- a/sources/tech/20151012 10 Useful Utilities For Linux Users.md +++ b/sources/tech/20151012 10 Useful Utilities For Linux Users.md @@ -1,3 +1,4 @@ +translation by strugglingyouth 10 Useful Utilities For Linux Users ================================================================================ ![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/09/linux-656x445.png) @@ -260,4 +261,4 @@ via: http://www.unixmen.com/10-useful-utilities-linux-users/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.unixmen.com/author/rajneesh/ -[1]:https://github.com/p-e-w/ranwhen/archive/master.zip \ No newline at end of file +[1]:https://github.com/p-e-w/ranwhen/archive/master.zip From 6c32aeae4d3ced81373eb351d73b77bd5ee0f317 Mon Sep 17 00:00:00 2001 From: alim0x Date: Mon, 12 Oct 2015 22:36:22 +0800 Subject: [PATCH 688/697] [translated]20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools --- ... 
Using screenfetch and linux_logo Tools.md | 106 +++++++++--------- 1 file changed, 52 insertions(+), 54 deletions(-) rename {sources => translated}/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md (63%) diff --git a/sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md b/translated/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md similarity index 63% rename from sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md rename to translated/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md index 92b47aab50..4e9dce31ca 100644 --- a/sources/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md +++ b/translated/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md @@ -1,90 +1,88 @@ -alim0x translating - -Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools +用 screenfetch 和 linux_logo 工具显示带有酷炫 Linux 标志的基本硬件信息 ================================================================================ -Do you want to display a super cool logo of your Linux distribution along with basic hardware information? Look no further try awesome screenfetch and linux_logo utilities. +想在屏幕上显示出你的 Linux 发行版的酷炫标志和基本硬件信息吗?不用找了,来试试超赞的 screenfetch 和 linux_logo 工具。 -### Say hello to screenfetch ### +### 来见见 screenfetch 吧 ### -screenFetch is a CLI bash script to show system/theme info in screenshots. It runs on a Linux, OS X, FreeBSD and many other Unix-like system. 
From the man page: +screenFetch 是一个能够在截屏中显示系统/主题信息的命令行脚本。它可以在 Linux,OS X,FreeBSD 以及其它的许多类Unix系统上使用。来自 man 手册的说明: -> This handy Bash script can be used to generate one of those nifty terminal theme information + ASCII distribution logos you see in everyone's screenshots nowadays. It will auto-detect your distribution and display an ASCII version of that distribution's logo and some valuable information to the right. +> 这个方便的 Bash 脚本可以用来生成那些漂亮的终端主题信息和 ASCII 发行版标志,就像如今你在别人的截屏里看到的那样。它会自动检测你的发行版并显示 ASCII 版的发行版标志,并且在右边显示一些有价值的信息。 -#### Installing screenfetch on Linux #### +#### 在 Linux 上安装 screenfetch #### -Open the Terminal application. Simply type the following [apt-get command][1] on a Debian or Ubuntu or Mint Linux based system: +打开终端应用。在基于 Debian 或 Ubuntu 或 Mint 的系统上只需要输入下列 [apt-get 命令][1]: $ sudo apt-get install screenfetch ![](http://s0.cyberciti.org/uploads/cms/2015/09/ubuntu-debian-linux-apt-get-install-screenfetch.jpg) -Fig.01: Installing screenfetch using apt-get +图一:用 apt-get 安装 screenfetch -#### Installing screenfetch Mac OS X #### +#### 在 Mac OS X 上安装 screenfetch #### -Type the following command: +输入下列命令: $ brew install screenfetch ![](http://s0.cyberciti.org/uploads/cms/2015/09/apple-mac-osx-install-screenfetch.jpg) -Fig.02: Installing screenfetch using brew command +图二:用 brew 命令安装 screenfetch -#### Installing screenfetch on FreeBSD #### +#### 在 FreeBSD 上安装 screenfetch #### -Type the following pkg command: +输入下列 pkg 命令: $ sudo pkg install sysutils/screenfetch ![](http://s0.cyberciti.org/uploads/cms/2015/09/freebsd-install-pkg-screenfetch.jpg) -Fig.03: FreeBSD install screenfetch using pkg +图三:在 FreeBSD 用 pkg 安装 screenfetch -#### Installing screenfetch on Fedora Linux #### +#### 在 Fedora 上安装 screenfetch #### -Type the following dnf command: +输入下列 dnf 命令: $ sudo dnf install screenfetch ![](http://s0.cyberciti.org/uploads/cms/2015/09/fedora-dnf-install-screenfetch.jpg) -Fig.04: Fedora Linux 22 install screenfetch using dnf +图四:在 Fedora 22 用 dnf 安装 screenfetch 
-#### How do I use screefetch utility? #### +#### 我该怎么使用 screefetch 工具? #### -Simply type the following command: +只需输入以下命令: $ screenfetch -Here is the output from various operating system: +这是不同系统的输出: ![](http://s0.cyberciti.org/uploads/cms/2015/09/fedora-screenfetch-300x193.jpg) -Screenfetch on Fedora +Fedora 上的 Screenfetch ![](http://s0.cyberciti.org/uploads/cms/2015/09/screenfetch-osx-300x213.jpg) -Screenfetch on OS X +OS X 上的 Screenfetch ![](http://s0.cyberciti.org/uploads/cms/2015/09/screenfetch-freebsd-300x143.jpg) -Screenfetch on FreeBSD +FreeBSD 上的 Screenfetch ![](http://s0.cyberciti.org/uploads/cms/2015/09/debian-ubutnu-screenfetch-outputs-300x279.jpg) -Screenfetch on Debian Linux +Debian 上的 Screenfetch -#### Take screenshot #### +#### 获取截屏 #### -To take a screenshot and to save a file, enter: +要获取截屏并保存成文件,输入: $ screenfetch -s -You will see a screenshot file at ~/Desktop/screenFetch-*.jpg. To take a screenshot and upload to imgur directly, enter: +你会看到一个文件 ~/Desktop/screenFetch-*.jpg。获取截屏并直接上传到 imgur,输入: $ screenfetch -su imgur -**Sample outputs:** +**输出示例:** -/+:. veryv@Viveks-MacBook-Pro :++++. OS: 64bit Mac OS X 10.10.5 14F27 @@ -106,45 +104,45 @@ You will see a screenshot file at ~/Desktop/screenFetch-*.jpg. To take a screens Taking shot in 3.. 2.. 1.. 0. ==> Uploading your screenshot now...your screenshot can be viewed at http://imgur.com/HKIUznn -You can visit [http://imgur.com/HKIUznn][2] to see uploaded screenshot. +你可以访问 [http://imgur.com/HKIUznn][2] 来查看上传的截屏。 -### Say hello to linux_logo ### +### 再来看看 linux_logo ### -The linux_logo program generates a color ANSI picture of a penguin which includes some system information obtained from the /proc filesystem. +linux_logo 程序生成一个彩色的 ANSI 版企鹅图片,还包含一些来自 /proc 的系统信息。 -#### Installation #### +#### 安装 #### -Simply type the following command as per your Linux distro. 
+只需按照你的 Linux 发行版输入对应的命令: #### Debian/Ubutnu/Mint #### # apt-get install linux_logo -#### CentOS/RHEL/Older Fedora #### +#### CentOS/RHEL/旧版 Fedora #### # yum install linux_logo -#### Fedora Linux v22+ or newer #### +#### Fedora Linux v22+ 或更新版本 #### # dnf install linux_logo -#### Run it #### +#### 运行它 #### -Simply type the following command: +只需输入下列命令: $ linux_logo ![](http://s0.cyberciti.org/uploads/cms/2015/09/debian-linux_logo.jpg) -linux_logo in action +运行 linux_logo -#### But wait, there's more! #### +#### 等等,还有更多! #### -You can see a list of compiled in logos using: +你可以用这个命令查看内置的标志列表: $ linux_logo -f -L list -**Sample outputs:** +**输出示例:** Available Built-in Logos: Num Type Ascii Name Description @@ -182,42 +180,42 @@ You can see a list of compiled in logos using: Do "linux_logo -L num" where num is from above to get the appropriate logo. Remember to also use -a to get ascii version. -To see aix logo, enter: +查看 aix 的标志,输入: $ linux_logo -f -L aix -To see openbsd logo: +查看 openbsd 的标志: $ linux_logo -f -L openbsd -Or just see some random Linux logo: +或者只是随机看看一些 Linux 标志: $ linux_logo -f -L random_xy -You [can combine bash for loop as follows to display various logos][3], enter: +你[可以像下面那样结合 bash 的循环来显示不同的标志][3],输入: ![](http://s0.cyberciti.org/uploads/cms/2015/09/linux-logo-fun.gif) -Gif 01: linux_logo and bash for loop for fun and profie +动图1: linux_logo 和 bash 循环,既有趣又能发朋友圈耍酷 -### Getting help ### +### 获取帮助 ### -Simply type the following command: +输入下列命令: $ screefetch -h $ linux_logo -h -**References** +**参考** -- [screenFetch home page][4] -- [linux_logo home page][5] +- [screenFetch 主页][4] +- [linux_logo 主页][5] -------------------------------------------------------------------------------- via: http://www.cyberciti.biz/hardware/howto-display-linux-logo-in-bash-terminal-using-screenfetch-linux_logo/ 作者:Vivek Gite -译者:[译者ID](https://github.com/译者ID) +译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID) 本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 13274d89f5bf2413d685284050e58dcd83a0867c Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 13 Oct 2015 08:41:35 +0800 Subject: [PATCH 689/697] Update 20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md --- ...ow to change USB device permission permanently on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md b/sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md index 8af62bfd75..4c07c48f90 100644 --- a/sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md +++ b/sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md @@ -1,3 +1,5 @@ +translating----geekpi + Linux FAQs with Answers--How to change USB device permission permanently on Linux ================================================================================ > **Question**: I am trying to run gpsd on my Linux with a USB GPS receiver. However, I am getting the following errors from gpsd. 
@@ -50,4 +52,4 @@ via: http://ask.xmodulo.com/change-usb-device-permission-linux.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file +[a]:http://ask.xmodulo.com/author/nanni From bd616af903eca9eb6e80b8957d776122ae7fe80f Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 13 Oct 2015 08:41:54 +0800 Subject: [PATCH 690/697] Update 20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md --- ...find information about built-in kernel modules on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md b/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md index b7a5be3975..d816a975f7 100644 --- a/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md +++ b/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md @@ -1,3 +1,5 @@ +translating----geekpi + Linux FAQs with Answers--How to find information about built-in kernel modules on Linux ================================================================================ > **Question**: I would like to know what modules are built into the kernel of my Linux system, and what parameters are available in each module. Is there a way to get a list of all built-in kernel modules and device drivers, and find detailed information about them? 
@@ -50,4 +52,4 @@ via: http://ask.xmodulo.com/find-information-builtin-kernel-modules-linux.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file +[a]:http://ask.xmodulo.com/author/nanni From 9dad14cf1ce72f67ffd19f139fabe5a44624f25a Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 13 Oct 2015 08:42:27 +0800 Subject: [PATCH 691/697] Update 20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md --- ...How to force password change at the next login on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md b/sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md index aadd2d1e81..8e192f112b 100644 --- a/sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md +++ b/sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md @@ -1,3 +1,5 @@ +translating----geekpi + Linux FAQs with Answers--How to force password change at the next login on Linux ================================================================================ > **Question**: I manage a Linux server for multiple users to share. I have just created a new user account with some default password, and I want the user to change the default password immediately after the first login. Is there a way to force a user to change his/her password at the next login? 
@@ -51,4 +53,4 @@ via: http://ask.xmodulo.com/force-password-change-next-login-linux.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://ask.xmodulo.com/author/nanni -[1]:http://xmodulo.com/set-password-policy-linux.html \ No newline at end of file +[1]:http://xmodulo.com/set-password-policy-linux.html From c6ff7fc4596ef3a0ce963f720ac79c98f4174866 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 13 Oct 2015 08:53:38 +0800 Subject: [PATCH 692/697] translated --- ... device permission permanently on Linux.md | 55 ------------------- ... device permission permanently on Linux.md | 54 ++++++++++++++++++ 2 files changed, 54 insertions(+), 55 deletions(-) delete mode 100644 sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md create mode 100644 translated/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md diff --git a/sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md b/sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md deleted file mode 100644 index 4c07c48f90..0000000000 --- a/sources/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md +++ /dev/null @@ -1,55 +0,0 @@ -translating----geekpi - -Linux FAQs with Answers--How to change USB device permission permanently on Linux -================================================================================ -> **Question**: I am trying to run gpsd on my Linux with a USB GPS receiver. However, I am getting the following errors from gpsd. -> -> gpsd[377]: gpsd:ERROR: read-only device open failed: Permission denied -> gpsd[377]: gpsd:ERROR: /dev/ttyUSB0: device activation failed. 
-> gpsd[377]: gpsd:ERROR: device open failed: Permission denied - retrying read-only -> -> Looks like gpsd does not have permission to access the USB device (/dev/ttyUSB0). How can I change its default permission mode permanently on Linux? - -When you run a process that wants to read or write to a USB device, the user/group of the process must have appropriate permission to do so. Of course you can change the permission of your USB device manually with chmod command, but such manual permission change will be temporary. The USB device will revert to its default permission mode when you reboot your Linux machine. - -![](https://farm6.staticflickr.com/5741/20848677843_202ff53303_c.jpg) - -As a permanent solution, you can create a udev-based USB permission rule which assigns any custom permission mode of your choice. Here is how to do it. - -First, you need to identify the vendorID and productID of your USB device. For that, use lsusb command. - - $ lsusb -vvv - -![](https://farm1.staticflickr.com/731/20848677743_39f76eb403_c.jpg) - -From the lsusb output, find your USB device's entry, and look for "idVendor" and "idProduct" fields. In this example, we have idVendor (0x067b) and idProduct (0x2303). - -Next, create a new udev rule as follows. - - $ sudo vi /etc/udev/rules.d/50-myusb.rules - ----------- - - SUBSYSTEMS=="usb", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", GROUP="users", MODE="0666" - -Replace "idVendor" and "idProduct" values with your own. **MODE="0666"** indicates the preferred permission of the USB device. - -Now reboot your machine or reload udev rules: - - $ sudo udevadm control --reload - -Then verify the permission of the USB device. 
-
-![](https://farm1.staticflickr.com/744/21282872179_9a4a05d768_b.jpg)
-
--------------------------------------------------------------------------------
-
-via: http://ask.xmodulo.com/change-usb-device-permission-linux.html
-
-作者:[Dan Nanni][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://ask.xmodulo.com/author/nanni
diff --git a/translated/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md b/translated/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md
new file mode 100644
index 0000000000..533bbfad5b
--- /dev/null
+++ b/translated/tech/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md
@@ -0,0 +1,54 @@
+Linux有问必答 -- 如何在Linux中永久修改USB设备权限
+================================================================================
+> **提问**:当我尝试在Linux中运行USB GPS接收器时我遇到了下面来自gpsd的错误。
+>
+> gpsd[377]: gpsd:ERROR: read-only device open failed: Permission denied
+> gpsd[377]: gpsd:ERROR: /dev/ttyUSB0: device activation failed.
+> gpsd[377]: gpsd:ERROR: device open failed: Permission denied - retrying read-only
+>
+> 看上去gpsd没有权限访问USB设备(/dev/ttyUSB0)。我该如何永久修改它在Linux上的权限?
+
+当你在运行一个会读取或者写入USB设备的进程时,进程的用户/组必须有权限这么做。当然你可以手动用chmod命令改变USB设备的权限,但是手动的权限改变只是暂时的。USB设备会在下次重启时恢复它的默认权限。
+
+![](https://farm6.staticflickr.com/5741/20848677843_202ff53303_c.jpg)
+
+作为一个永久的方式,你可以创建一个基于udev的USB权限规则,它可以根据你的选择分配任何权限模式。下面是该如何做。
+
+首先,你需要找出USB设备的vendorID和productID。使用lsusb命令。
+
+    $ lsusb -vvv
+
+![](https://farm1.staticflickr.com/731/20848677743_39f76eb403_c.jpg)
+
+上面lsusb的输出中,找出你的USB设备,并找出"idVendor"和"idProduct"字段。本例中,我们的结果是idVendor (0x067b)和 idProduct (0x2303)。
+
+下面创建一个新的udev规则。
+
+    $ sudo vi /etc/udev/rules.d/50-myusb.rules
+
+----------
+
+    SUBSYSTEMS=="usb", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", GROUP="users", MODE="0666"
+
+用你自己的"idVendor"和"idProduct"来替换。**MODE="0666"**表示USB设备的权限。
+
+现在重启电脑并重新加载udev规则:
+
+    $ sudo udevadm control --reload
+
+接着验证USB设备的权限。
+
+![](https://farm1.staticflickr.com/744/21282872179_9a4a05d768_b.jpg)
+
+--------------------------------------------------------------------------------
+
+via: http://ask.xmodulo.com/change-usb-device-permission-linux.html
+
+作者:[Dan Nanni][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://ask.xmodulo.com/author/nanni
From 6c146196cb80e9d14d31c1aa3ab5d0985c512fbe Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 13 Oct 2015 09:13:38 +0800
Subject: [PATCH 693/697] translated

---
 ...
about built-in kernel modules on Linux.md | 32 +++++++++---------- 1 file changed, 15 insertions(+), 17 deletions(-) diff --git a/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md b/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md index d816a975f7..3b37ef3c91 100644 --- a/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md +++ b/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md @@ -1,42 +1,40 @@ -translating----geekpi - -Linux FAQs with Answers--How to find information about built-in kernel modules on Linux +Linux有问必答--如何找出Linux中内置模块的信息 ================================================================================ -> **Question**: I would like to know what modules are built into the kernel of my Linux system, and what parameters are available in each module. Is there a way to get a list of all built-in kernel modules and device drivers, and find detailed information about them? +> **提问**:我想要知道Linux系统中内核内置的模块,以及每个模块的参数。有什么方法可以得到内置模块和设备驱动的列表,以及它们的详细信息呢? -The modern Linux kernel has been growing significantly over the years to support a wide variety of hardware devices, file systems and networking functions. During this time, "loadable kernel modules (LKM)" came into being in order to keep the kernel from getting bloated, while flexibly extending its capabilities and hardware support under different environments, without having to rebuild it. +现代Linux内核正在随着时间迅速地增长来支持大量的硬件、文件系统和网络功能。在此期间,“可加载模块”的引入防止内核变得越来越臃肿,以及在不同的环境中灵活地扩展功能及硬件支持,而不必重新构建内核。 -The Linux kernel shipped with the latest Linux distributions comes with relatively a small number of "built-in modules", while the rest of hardware-specific drivers or custom capabilities exist as "loadable modules" which you can selectively load or unload. 
+最新的Linux发行版的内核只带了相对较小的“内置模块”,其余的特定硬件驱动或者自定义功能作为“可加载模块”,让你选择性地加载或卸载。

-The built-in modules are statically compiled into the kernel. Unlike loadable kernel modules which can be dynamically loaded, unloaded, looked up or listed using commands like modprobe, insmod, rmmod, modinfo or lsmod, built-in kernel modules are always loaded in the kernel upon boot-up, and cannot be managed with these commands.
+内置模块被静态地编译进了内核。不像可加载内核模块可以使用modprobe、insmod、rmmod、modinfo或者lsmod等命令动态地加载、卸载、查询,内置的模块总是在启动时就加载进了内核,不会被这些命令管理。

-### Find a List of Built-in Kernel Modules ###
+### 找出内置模块列表 ###

-To get a list of all built-in modules, run the following command.
+要得到内置模块列表,运行下面的命令。

     $ cat /lib/modules/$(uname -r)/modules.builtin

 ![](https://farm1.staticflickr.com/697/21481933835_ef6b9c71e1_c.jpg)

-You can also get a hint on what modules are built-in by running:
+你也可以用下面的命令来查看有哪些内置模块:

 ![](https://farm6.staticflickr.com/5643/21295025949_57f5849c36_c.jpg)

-### Find Parameters of Built-in Kernel Modules ###
+### 找出内置模块参数 ###

-Each kernel module, whether it's built-in or loadable, comes with a set of parameters. For loadable kernel modules, the modinfo command will show parameter information about them. However, this command will not work with built-in modules. You will simply get the following error.
+每个内核模块无论是内置的还是可加载的都有一系列的参数。对于可加载模块,modinfo命令显示它们的参数信息。然而这个命令不对内置模块管用。你会得到下面的错误。

     modinfo: ERROR: Module XXXXXX not found.

-If you want to check what parameters are available in a given built-in module, and what their values are, you can instead examine the content in **/sys/module** directory.
+如果你想要查看内置模块的参数,以及它们的值,你可以在**/sys/module** 下检查它们的内容。

-Under /sys/module directory, you will find sub-directories named after existing kernel modules (both built-in and loadable). Then in each module directory, there is a directory named "parameters", which lists all available parameters for the module.
+在 /sys/module目录下,你可以找到内核模块(包含内置和可加载的)命名的子目录。结合则进入每个模块目录,这里有个“parameters”目录,列出了这个模块所有的参数。 -For example, let's say you want to find out parameters of a built-in module called tcp_cubic (the default TCP implementation of the kernel). Then you can do this: +比如你要找出tcp_cubic(内核默认的TCP实现)模块的参数。你可以这么做: $ ls /sys/module/tcp_cubic/parameters -And check the value of each parameter by reading a corresponding file. +接着阅读这个文件查看每个参数的值。 $ cat /sys/module/tcp_cubic/parameters/tcp_friendliness @@ -47,7 +45,7 @@ And check the value of each parameter by reading a corresponding file. via: http://ask.xmodulo.com/find-information-builtin-kernel-modules-linux.html 作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f84e372378f16cd602c10cedca8facff60bb092f Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 13 Oct 2015 09:14:07 +0800 Subject: [PATCH 694/697] Rename sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md to translated/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md --- ... 
to find information about built-in kernel modules on Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md (100%) diff --git a/sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md b/translated/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md similarity index 100% rename from sources/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md rename to translated/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md From cdd9166ed17971b41a9fe75c6cb8ebf0f830dca0 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 13 Oct 2015 09:48:44 +0800 Subject: [PATCH 695/697] translated --- ...sword change at the next login on Linux.md | 56 ------------------- ...sword change at the next login on Linux.md | 55 ++++++++++++++++++ 2 files changed, 55 insertions(+), 56 deletions(-) delete mode 100644 sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md create mode 100644 translated/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md diff --git a/sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md b/sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md deleted file mode 100644 index 8e192f112b..0000000000 --- a/sources/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md +++ /dev/null @@ -1,56 +0,0 @@ -translating----geekpi - -Linux FAQs with Answers--How to force password change at the next login on Linux -================================================================================ -> 
**Question**: I manage a Linux server for multiple users to share. I have just created a new user account with some default password, and I want the user to change the default password immediately after the first login. Is there a way to force a user to change his/her password at the next login? - -In multi-user Linux environment, it's a standard practice to create user accounts with some random default password. Then after a successful login, a new user can change the default password to his or her own. For security reasons, it is often recommended to "force" users to change the default password after the first login to make sure that the initial one-time password is no longer used. - -Here is **how to force a user to change his or her password on the next login**. - -Every user account in Linux is associated with various password-related configurations and information. For example, it remembers the date of the last password change, the minimum/maximum number of days between password changes, and when to expire the current password, etc. - -A command-line tool called chage can access and adjust password expiration related configurations. You can use this tool to force password change of any user at the next login. - -To view password expiration information of a particular user (e.g., alice), run the following command. Note that you need root privilege only when you are checking password age information of any other user than yourself. - - $ sudo chage -l alice - -![](https://c1.staticflickr.com/1/727/21955581605_5471e61ee0_c.jpg) - -### Force Password Change for a User ### - -If you want to force a user to change his or her password, use the following command. - - $ sudo chage -d0 - -Originally the "-d " option is supposed to set the "age" of a password (in terms of the number of days since January 1st, 1970 when the password was last changed). 
So "-d0" indicates that the password was changed on January 1st, 1970, which essentially expires the current password, and causes it to be changed on the next login. - -Another way to expire the current password is via passwd command. - - $ sudo passwd -e - -The above command has the same effect of "chage -d0", making the current password of the user expire immediately. - -Now check the password information of the user again, and you will see: - -![](https://c2.staticflickr.com/6/5770/21767501480_ba88f00d80_c.jpg) - -When you log in again, you will be asked to change the password. You will need to verify the current password one more time before the change. - -![](https://c2.staticflickr.com/6/5835/21929638636_eed4d69cb9_c.jpg) - -To set more comprehensive password policies (e.g., password complexity, reuse prevention), you can use PAM. See [the article][1] for more detail. - --------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/force-password-change-next-login-linux.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni -[1]:http://xmodulo.com/set-password-policy-linux.html diff --git a/translated/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md b/translated/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md new file mode 100644 index 0000000000..509d0b6e45 --- /dev/null +++ b/translated/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md @@ -0,0 +1,55 @@ +Linux有问必答--如何强制在下次登录Linux时更换密码 +================================================================================ +> **提问**:我管理着一台多人共享的Linux服务器。我刚使用默认密码创建了一个新用户,但是我想用户在第一次登录时更换密码。有没有什么方法可以让他/她在下次登录时修改密码呢? 
+ +在多用户Linux环境中,标准实践是使用一个默认的随机密码创建一个用户账户。成功登录后,新用户再把默认密码改成自己的。出于安全考虑,经常建议“强制”用户在第一次登录时修改默认密码,以确保这个一次性使用的初始密码不会再被使用。 + +下面是**如何强制用户在下次登录时修改他/她的密码**。 + +每个Linux用户账户都关联着各种密码相关的配置和信息。比如,它记录着上次修改密码的日期、两次修改密码之间的最小/最大天数、密码何时过期,等等。 + +一个叫chage的命令行工具可以访问并调整密码过期相关的配置。你可以使用这个工具来强制用户在下次登录时修改密码。 + +要查看特定用户(比如:alice)的密码过期信息,运行下面的命令。注意,只有在查看自己之外其他用户的密码信息时才需要root权限。 + + $ sudo chage -l alice + +![](https://c1.staticflickr.com/1/727/21955581605_5471e61ee0_c.jpg) + +### 强制用户修改密码 ### + +如果你想要强制用户去修改他/她的密码,使用下面的命令。 + + $ sudo chage -d0 + +“-d ”参数本来是用来设置密码“年龄”的(即从1970年1月1日到上次修改密码那天的天数)。因此“-d0”表示密码上次修改的时间是1970年1月1日,这会让当前密码立即过期,从而强制用户在下次登录时修改密码。 + +另一个让当前密码过期的办法是使用passwd命令。 + + $ sudo passwd -e + +上面的命令和“chage -d0”作用一样,让当前用户的密码立即过期。 + +现在再次查看该用户的密码信息,你会看到: + +![](https://c2.staticflickr.com/6/5770/21767501480_ba88f00d80_c.jpg) + +当该用户再次登录时,就会被要求修改密码。修改之前还需要再验证一次当前密码。 + +![](https://c2.staticflickr.com/6/5835/21929638636_eed4d69cb9_c.jpg) + +要设置更全面的密码策略(如密码复杂度、防止重复使用),则可以使用PAM。参见[这篇文章][1]了解更多详情。 + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/force-password-change-next-login-linux.html + +作者:[Dan Nanni][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:http://xmodulo.com/set-password-policy-linux.html From 5159b6aa09c44cdcb9fc5878b5f8f32ffbfc748a Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 13 Oct 2015 16:20:32 +0800 Subject: [PATCH 696/697] =?UTF-8?q?20151013-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...odo--A ToDo List Manager For DIY Lovers.md | 55 ++++++++++++++++ ...3 DFileManager--Cover Flow File Manager.md | 63 +++++++++++++++++++ 2 files changed, 118 insertions(+) create mode 100644 sources/share/20151013 Mytodo--A ToDo List
Manager For DIY Lovers.md create mode 100644 sources/tech/20151013 DFileManager--Cover Flow File Manager.md diff --git a/sources/share/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md b/sources/share/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md new file mode 100644 index 0000000000..828de7258c --- /dev/null +++ b/sources/share/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md @@ -0,0 +1,55 @@ +Mytodo: A ToDo List Manager For DIY Lovers +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Mytodo-Linux.jpg) + +Usually, I focus on applications that are hassle free and easy to use (read GUI oriented). That’s the reason why I included [Go For It][1] to do program in the list of [Linux productivity tools][2]. Today, I am going to show you yet another to-do list application that is slightly different than the rest. + +[Mytodo][3] is an open source to-do list program that puts you, the user, in command of everything. Unlike most other similar programs, Mytodo is more DIY hobbyist oriented because it lets you configure the server (if you want to use it on multiple computers), provides a command line interface among other main features. + +It is written in Python and thus it could be used in all Linux distros and other operating systems such as Windows. 
+ +Some of the main features of Mytodo are: + +- Both GUI and command line interface +- Configure your own server +- Add users/password +- Written in Python +- Searchable by tag +- To-do list can be displayed with [Conky][4] + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Mytodo-list.jpeg) + +GUI + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Mytodo-list-cli.jpeg) + +Command Line + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Mytodo-list-conky.jpeg) + +Conky displaying the to-do list + +You can find the source code and configuration instructions on the Github link below: + +- [Download and Configure Mytodo ][5] + +While some people might not like all this command-line and configuration work, it certainly has its own pleasure. I’ll let you try it and decide whether Mytodo suits your needs and taste. + +Image credit: https://pixabay.com/en/to-do-list-task-list-notes-written-734587 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/mytodo-list-manager/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://itsfoss.com/go-for-it-to-do-app-in-linux/ +[2]:http://itsfoss.com/productivity-tips-ubuntu/ +[3]:https://github.com/mohamed-aziz/mytodo +[4]:http://itsfoss.com/conky-gui-ubuntu-1304/ +[5]:https://github.com/mohamed-aziz/mytodo \ No newline at end of file diff --git a/sources/tech/20151013 DFileManager--Cover Flow File Manager.md b/sources/tech/20151013 DFileManager--Cover Flow File Manager.md new file mode 100644 index 0000000000..9c96fe9553 --- /dev/null +++ b/sources/tech/20151013 DFileManager--Cover Flow File Manager.md @@ -0,0 +1,63 @@ +DFileManager: Cover Flow File Manager +================================================================================
+A real gem of a file manager absent from the standard Ubuntu repositories but sporting a unique feature. That’s DFileManager in a twitterish statement. + +A tricky question to answer is just how many open source Linux applications are available. Just out of curiosity, you can type at the shell: + + ~$ for f in /var/lib/apt/lists/*Packages; do printf '%5d %s\n' $(grep '^Package: ' "$f" | wc -l) ${f##*/}; done | sort -rn + +On my Ubuntu 15.04 system, it produces the following results: + +![Ubuntu 15.04 Packages](http://www.linuxlinks.com/portal/content/reviews/FileManagers/UbuntuPackages.png) + +As the screenshot above illustrates, there are approximately 39,000 packages in the Universe repository, and around 8,500 packages in the main repository. These numbers sound like a lot. But there is a smorgasbord of open source applications, utilities, and libraries that don’t have an Ubuntu team generating a package. And more importantly, there are some real treasures missing from the repositories which can only be discovered by compiling source code. DFileManager is one such utility. It is a Qt based cross-platform file manager which is in an early stage of development. Qt provides single-source portability across all major desktop operating systems. + +In the absence of a binary package, the user needs to compile the code. For some tools, this can be problematic, particularly if the application depends on any obscure libraries, or on specific versions which may be incompatible with other software installed on the system. + +### Installation ### + +Fortunately, DFileManager is simple to compile. The installation instructions on the developer’s website provide most of the steps necessary for my creaking Ubuntu box, but a few essential packages were missing (why is it always that way, however many libraries clutter up your filesystem?)
To prepare my system, download the source code from GitHub, and then compile the software, I entered the following commands at the shell: + + ~$ sudo apt-get install qt5-default qt5-qmake libqt5x11extras5-dev + ~$ git clone git://git.code.sf.net/p/dfilemanager/code dfilemanager-code + ~$ cd dfilemanager-code + ~$ mkdir build + ~$ cd build + ~$ cmake ../ -DCMAKE_INSTALL_PREFIX=/usr + ~$ make + ~$ sudo make install + +You can then start the application by typing at the shell: + + ~$ dfm + +Here is a screenshot of DFileManager in action, with the main attraction in full view: the Cover Flow view. This offers the ability to slide through items in the current folder with an attractive feel. It’s ideal for viewing photos. The file manager bears a resemblance to Finder (the default file manager and graphical user interface shell used on all Macintosh operating systems), which may appeal to you. + +![DFileManager in action](http://www.linuxlinks.com/portal/content/reviews/FileManagers/Screenshot-dfm.png) + +### Features: ### + +- 4 views: Icons, Details, Columns, and Cover Flow +- Categorised bookmarks with Places and Devices +- Tabs +- Simple searching and filtering +- Customizable thumbnails for filetypes including multimedia files +- Information bar which can be undocked +- Open folders and files with one click +- Option to queue IO operations +- Remembers some view properties for each folder +- Show hidden files + +DFileManager is not a replacement for KDE’s Dolphin, but do give it a go. It’s a file manager that really helps the user browse files. And don’t forget to give feedback to the developer; that’s a contribution anyone can offer.
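A side note on the package-counting one-liner near the top of this article: it is easy to mistype, so here is a sketch of the same loop run against a throwaway directory, which makes it tryable on any system (on a Debian/Ubuntu box you would point it at /var/lib/apt/lists/*Packages instead; `grep -c` replaces `grep | wc -l` with an equivalent count):

```shell
# Build a small stand-in for /var/lib/apt/lists so the loop runs anywhere.
mkdir -p /tmp/lists-demo
printf 'Package: a\nPackage: b\n' > /tmp/lists-demo/main_Packages
printf 'Package: c\n'             > /tmp/lists-demo/universe_Packages

# Same shape as the article's one-liner: count "Package:" stanzas per file,
# print "<count> <filename>", and sort by count, highest first.
for f in /tmp/lists-demo/*Packages; do
    printf '%5d %s\n' "$(grep -c '^Package: ' "$f")" "${f##*/}"
done | sort -rn
#     2 main_Packages
#     1 universe_Packages
```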
+ +-------------------------------------------------------------------------------- + +via: http://gofk.tumblr.com/post/131014089537/dfilemanager-cover-flow-file-manager-a-real-gem + +作者:[gofk][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://gofk.tumblr.com/ \ No newline at end of file From e957ef65370227143642b9e7136e08bb248c7801 Mon Sep 17 00:00:00 2001 From: alim0x Date: Tue, 13 Oct 2015 19:36:26 +0800 Subject: [PATCH 697/697] [translating]Mytodo--A ToDo List Manager For D... --- .../20151013 Mytodo--A ToDo List Manager For DIY Lovers.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/share/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md b/sources/share/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md index 828de7258c..0bac6cc4d0 100644 --- a/sources/share/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md +++ b/sources/share/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md @@ -1,3 +1,5 @@ +alim0x translating + Mytodo: A ToDo List Manager For DIY Lovers ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Mytodo-Linux.jpg) @@ -52,4 +54,4 @@ via: http://itsfoss.com/mytodo-list-manager/ [2]:http://itsfoss.com/productivity-tips-ubuntu/ [3]:https://github.com/mohamed-aziz/mytodo [4]:http://itsfoss.com/conky-gui-ubuntu-1304/ -[5]:https://github.com/mohamed-aziz/mytodo \ No newline at end of file +[5]:https://github.com/mohamed-aziz/mytodo
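A closing note on the chage article earlier in this log: it explains "-d0" as dating the last password change to 1970-01-01, and shadow(5) additionally treats a last-change value of 0 as an explicit "change required" flag. The sketch below mirrors (but does not reproduce) the check a login-time PAM module performs; the function name is my own illustration, not a real API.

```shell
# Field 3 of an /etc/shadow entry holds the last password change as
# days since 1970-01-01; both `chage -d0` and `passwd -e` set it to 0.
# A value of 0 means "the user must change the password at next login".
must_change_password() {
    last_changed=$1
    if [ "$last_changed" -eq 0 ]; then
        return 0    # flagged: login will demand a new password
    fi
    return 1        # not flagged; normal min/max age rules apply
}

must_change_password 0 && echo "password change forced at next login"
# prints: password change forced at next login
```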