diff --git a/published/20180406 How To Register The Oracle Linux System With The Unbreakable Linux Network (ULN).md b/published/20180406 How To Register The Oracle Linux System With The Unbreakable Linux Network (ULN).md index 3685ab2245..414fd634be 100644 --- a/published/20180406 How To Register The Oracle Linux System With The Unbreakable Linux Network (ULN).md +++ b/published/20180406 How To Register The Oracle Linux System With The Unbreakable Linux Network (ULN).md @@ -5,13 +5,13 @@ Oracle Linux 系统如何去注册使用坚不可摧 Linux 网络(ULN) 甚至我也不知道关于它的信息,我是最近才了解了有关它的信息,想将这些内容共享给其他人。因此写了这篇文章,它将指导你去注册 Oracle Linux 系统去使用坚不可摧 Linux 网络(ULN) 。 -这将允许你去注册系统以尽快获得软件更新和其它的补丁。 +这将允许你去注册系统以获得软件更新和其它的 ASAP 补丁。 ### 什么是坚不可摧 Linux 网络 ULN 代表坚不可摧 Linux 网络Unbreakable Linux Network,它是由 Oracle 所拥有的。如果你去 Oracle OS 支持中去激活这个订阅,你就可以注册你的系统去使用坚不可摧 Linux 网络(ULN)。 -ULN 为 Oracle Linux 和 Oracle VM 提供软件补丁、更新、以及修复,此外还有在 yum、Ksplice、以及支持策略上的信息。你也可以通过它来下载原始发行版中没有包含的有用的安装包。 +ULN 为 Oracle Linux 和 Oracle VM 提供软件补丁、更新、以及修复,这些信息同时提供在 yum、Ksplice、并提供支持策略。你也可以通过它来下载原始发行版中没有包含的有用的安装包。 ULN 的告警提示工具会周期性地使用 ULN 进行检查,当有更新的时候它给你发送警报信息。 diff --git a/sources/talk/20180611 12 fiction books for Linux and open source types.md b/sources/talk/20180611 12 fiction books for Linux and open source types.md new file mode 100644 index 0000000000..db21ae0e7f --- /dev/null +++ b/sources/talk/20180611 12 fiction books for Linux and open source types.md @@ -0,0 +1,113 @@ +12 fiction books for Linux and open source types +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/book_list_fiction_sand_vacation_read.jpg?itok=IViIZu8J) + +For this book list, I reached out to our writer community to ask which fiction books they would recommend to their peers. What I love about this question and the answers that follow is this list gives us a deeper look into their personalities. 
Fiction favorites are unlike non-fiction recommendations in that your technical skills and interests may have an influence on what you like to read, but it's much more your personality and life experiences that draw you to pick out, and love, a particular fiction book.
+
These people are your people. I hope you find something interesting to add to your reading list.
+
+**[Ancillary Justice][1] by Ann Leckie**
+
+Open source is all about how one individual can start a movement. Somehow at the same time, it's about the power of a voluntary collective moving together towards a common goal. Ancillary Justice makes you ponder both concepts.
+
+This book is narrated by Breq, who is an "ancillary," an enslaved human body that was grafted into the soul of a warship. When that warship was destroyed, Breq kept all the ship's memories and its identity but then had to live in a single body instead of thousands. In spite of the huge change in her power, Breq has a cataclysmic influence on all around her, and she inspires both loyalty and love. She may have once been enslaved to an AI, but now that she is free, she is powerful. She learns to adapt to exercising her free will, and the decisions she makes change her and the world around her. Breq pushes for openness in the rigid Radch, the dominant society of the book. Her actions transform the Radch into something new.
+
+Ancillary Justice is also about language, loyalty, sacrifice, and the disastrous effects of secrecy. Once you've read this book, you will never feel the same about what makes someone or something human. What makes you YOU? Can who you are really be destroyed while your body still lives?
+
+Like the open source movement, Ancillary Justice makes you think and question the status quo of the novel and of the world around you. Read it.
(Recommendation and review by [Ingrid Towey][2])
+
+**[Cryptonomicon][3] by Neal Stephenson**
+
+Set during WWII and the present day, or near future at the time of writing, Cryptonomicon captures the excitement of a startup, the perils of war, community action against authority, and the perils of cryptography. It's a book to keep coming back to, as it has multiple layers and combines a techy outlook with intrigue and a decent love story. It does a good job of asking interesting questions like "is technology always an unbounded good?" and of making you realise that the people of yesterday were just as clever, and human, as we are today. (Recommendation and review by [Mike Bursell][4])
+
+**[Daemon][5] by Daniel Suarez**
+
+Daemon is the first in a two-part series that details the events that happen when a computer daemon (process) is awakened and wreaks havoc on the world. The story is an exciting thriller that borders on creepy due to the realism in how the technology is portrayed, and it outlines just how dependent we are on technology. (Recommendation and review by [Jay LaCroix][6])
+
+**[Going Postal][7] by Terry Pratchett**
+
+This book is a good read for Linux and open source enthusiasts because of the depth and relatability of its characters, the humor, and the unique outsider narration that goes into the book. Terry Pratchett books are like Jim Henson movies: fiercely creative, appealing to all but especially the maker, tinkerer, hacker, and those daring to dream.
+
+The main character is a chancer, a fly-by-night who has never considered the results of their actions. They are not committed to anything and have never formed real (non-monetary) connections. The story follows on from the outcomes of their actions, a tale of redemption taking the protagonist on an out-of-control adventure. It's funny, edgy, and unfamiliar, much like the initial 1990s introduction to Linux was for me.
(Recommendation and review by [Lewis Cowles][8])
+
+**[Microserfs][9] by Douglas Coupland**
+
+Anyone who lived through the dotcom bubble of the 1990s will identify with this heartwarming tale of a young group of Microsoft engineers who end up leaving the company for a startup, moving to Silicon Valley, and becoming each other's support through life, death, love, and loss.
+
+There is a lot of humor to be found in this book, like this line: "This is my computer. There are many like it, but this one is mine..." The line is a riff on the Rifleman's Creed: "This is my rifle. There are many like it..."
+
+If you've ever spent 16 hours a day coding while fueling yourself with Skittles and Mountain Dew, this story is for you. (Recommendation and review by [Jet Anderson][10])
+
+**[Open Source][11] by M. M. Frick**
+
+Casey Shenk is a vending-machine technician from Savannah, Georgia, by day and blogger by night. Casey's keen insights into the details of news reports, both true and false, lead him to unravel a global plot involving arms sales, the Middle East, Russia, Israel and the highest levels of power in the United States. Casey connects the pieces using "Open Source Intelligence," which is simply reading and analyzing information that is free and open to the public.
+
+I bought this book because of the title, just as I was learning about open source, three years ago. I thought this would be a book on open source fiction. Unfortunately, the book has nothing to do with open source as we define it. I had hoped that Casey would use some open source tools or open source methods in his investigation, such as Wireshark or Maltego, and write his posts with LibreOffice, WordPress and such. However, "open source" simply refers to the fact that his sources are "open."
+
+Although I was disappointed that this book was not what I expected, Frick, a Navy officer, packed the book with well-researched and interesting twists and turns.
If you are looking for a book that involves Linux, command lines, GitHub, or any other open source elements, then this is not the book for you. (Recommendation and review by [Jeff Macharyas][12])
+
+**[The Tao of Pooh][13] by Benjamin Hoff**
+
+Linux and the open source ethos is a way of approaching life and getting things done that relies on both the individual and collective goodwill of the community it serves. Leadership and service are earned through individual contribution and merit rather than through arbitrary assignment of value in traditional hierarchies. This is the natural way of getting things done. The power of open source is its authentic gift of self to a community of developers and end users. Being part of such a community of developers and contributors invites each member to share their unique gift with the wider world. In The Tao of Pooh, Hoff celebrates that unique gift of self, using the metaphor of Winnie the Pooh wed with Taoist philosophy. (Recommendation and review by [Don Watkins][14])
+
+**[The Golem and the Jinni][15] by Helene Wecker**
+
+The eponymous otherworldly beings accidentally find themselves in New York City in the early 1900s and have to restart their lives far from their homelands. It's rare to find a book with such an original premise, let alone one that can follow through with it so well and with such heart. (Recommendation and review by [VM Brasseur][16])
+
+**[The Rise of the Meritocracy][17] by Michael Young**
+
+Meritocracy—one of the most pervasive and controversial notions circulating in open source discourses—is for some critics nothing more than a quaint fiction. No surprise for them, then, that the term originated there. Michael Young's dystopian science fiction novel introduced the term into popular culture in 1958; the eponymous concept characterizes a 2034 society entirely bent on rewarding the best, the brightest, and the most talented.
"Today we frankly recognize that democracy can be no more than aspiration, and have rule not so much by the people as by the cleverest people," writes the book's narrator in this pseudo-sociological account of future history, "not an aristocracy of birth, not a plutocracy of wealth, but a true meritocracy of talent."
+
+Would a truly meritocratic society work as intended? We might only imagine. Young's answer, anyway, has serious consequences for the fictional sociologist. (Recommendation and review by [Bryan Behrenshausen][18])
+
+**[Throne of the Crescent Moon][19] by Saladin Ahmed**
+
+The protagonist, Adoulla, is a man who just wants to retire from ghul hunting and settle down, but the world has other plans for him. Accompanied by his assistant and a vengeful young warrior, he sets off to end the ghul scourge and exact revenge. While it sounds like your typical fantasy romp, the Middle Eastern setting of the story sets it apart while the tight and skillful writing of Ahmed pulls you in. (Recommendation and review by [VM Brasseur][16])
+
+**[Walkaway][20] by Cory Doctorow**
+
+It's hard to approach this science fiction book because it's so different from other science fiction books. It's timely because in an age of rage―producing a seemingly endless parade of dystopia in fiction and in reality―this book is hopeful. We need hopeful things. Open source fans will like it because the source of that hope is open, shared technology. I don't want to give too much away, but let's just say this book exists in a world where advanced 3D printing is so mainstream (and old) that you can practically 3D print anything. Basic needs of Maslow's hierarchy are essentially taken care of, so you're left with human relationships.
+
+"You wouldn't steal a car" turns into "you can fork a house or a city." This creates a present that can constantly be remade, so the attachment to things becomes practically unnecessary. Thus, people can―and do―just walk away.
This wonderful (and complicated) future setting is the ever-present reality surrounding a group of characters, their complicated relationships, and a complex class struggle in a post-scarcity world. + +Best book I've read in years. Thanks, Cory! (Recommendation and review by [Kyle Conway][21]) + +**[Who Moved My Cheese?][22] by Spencer Johnson** + +The secret to success for leading open source projects and open companies is agility and motivating everyone to move beyond their comfort zones to embrace change. Many people find change difficult and do not see the advantage that comes from the development of an agile mindset. This book is about the difference in how mice and people experience and respond to change. It's an easy read and quick way to expand your mind and think differently about whatever problem you're facing today. (Recommendation and review by [Don Watkins][14]) + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/6/fiction-book-list + +作者:[Jen Wike Huger][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/remyd +[1]:https://www.annleckie.com/novel/ancillary-justice/ +[2]:https://opensource.com/users/i-towey +[3]:https://www.amazon.com/Cryptonomicon-Neal-Stephenson-ebook/dp/B000FC11A6/ref=sr_1_1?s=books&ie=UTF8&qid=1528311017&sr=1-1&keywords=Cryptonomicon +[4]:https://opensource.com/users/mikecamel +[5]:https://www.amazon.com/DAEMON-Daniel-Suarez/dp/0451228731 +[6]:https://opensource.com/users/jlacroix +[7]:https://www.amazon.com/Going-postal-Terry-PRATCHETT/dp/0385603428 +[8]:https://opensource.com/users/lewiscowles1986 +[9]:https://www.amazon.com/Microserfs-Douglas-Coupland/dp/0061624268 +[10]:https://opensource.com/users/thatsjet 
+[11]:https://www.amazon.com/Open-Source-M-Frick/dp/1453719989 +[12]:https://opensource.com/users/jeffmacharyas +[13]:https://www.amazon.com/Tao-Pooh-Benjamin-Hoff/dp/0140067477 +[14]:https://opensource.com/users/don-watkins +[15]:https://www.amazon.com/Golem-Jinni-Novel-P-S/dp/0062110845 +[16]:https://opensource.com/users/vmbrasseur +[17]:https://www.amazon.com/Rise-Meritocracy-Classics-Organization-Management/dp/1560007044 +[18]:https://opensource.com/users/bbehrens +[19]:https://www.amazon.com/Throne-Crescent-Moon-Kingdoms/dp/0756407788 +[20]:https://craphound.com/category/walkaway/ +[21]:https://opensource.com/users/kreyc +[22]:https://www.amazon.com/Moved-Cheese-Spencer-Johnson-M-D/dp/0743582853 diff --git a/sources/talk/20180613 AI Is Coming to Edge Computing Devices.md b/sources/talk/20180613 AI Is Coming to Edge Computing Devices.md new file mode 100644 index 0000000000..0dc34c9ba3 --- /dev/null +++ b/sources/talk/20180613 AI Is Coming to Edge Computing Devices.md @@ -0,0 +1,66 @@ +AI Is Coming to Edge Computing Devices +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ai-edge.jpg?itok=nuNfRbW8) + +Very few non-server systems run software that could be called machine learning (ML) and artificial intelligence (AI). Yet, server-class “AI on the Edge” applications are coming to embedded devices, and Arm intends to fight with Intel and AMD over every last one of them. + +Arm recently [announced][1] a new [Cortex-A76][2] architecture that is claimed to boost the processing of AI and ML algorithms on edge computing devices by a factor of four. This does not include ML performance gains promised by the new Mali-G76 GPU. There’s also a Mali-V76 VPU designed for high-res video. The Cortex-A76 and two Mali designs are designed to “complement” Arm’s Project Trillium Machine Learning processors (see below). 
+
+### Improved performance
+
+The Cortex-A76 differs from the [Cortex-A73][3] and [Cortex-A75][4] IP designs in that it’s designed as much for laptops as for smartphones and high-end embedded devices. Cortex-A76 provides “35 percent more performance year-over-year,” compared to Cortex-A75, claims Arm. The IP, which is expected to arrive in products a year from now, is also said to provide 40 percent improved efficiency.
+
+Like Cortex-A75, which is equivalent to the latest Kryo cores available on Qualcomm’s [Snapdragon 845][5], the Cortex-A76 supports [DynamIQ][6], Arm’s more flexible version of its big.LITTLE multi-core scheme. Unlike Cortex-A75, which was announced with a Cortex-A55 companion chip, Arm had no new DynamIQ companion for the Cortex-A76.
+
+Cortex-A76 enhancements are said to include decoupled branch prediction and instruction fetch, as well as Arm’s first 4-wide decode core, which boosts the maximum instructions-per-cycle capability. There’s also higher integer and vector execution throughput, including support for dual-issue native 16B (128-bit) vector and floating-point units. Finally, the new full-cache memory hierarchy is “co-optimized for latency and bandwidth,” says Arm.
+
+Unlike the latest high-end Cortex-A releases, Cortex-A76 represents “a brand new microarchitecture,” says Arm. This is confirmed by [AnandTech’s][7] usual deep-dive analysis. Cortex-A73 and -A75 debuted elements of the new “Artemis” architecture, but the Cortex-A76 is built from scratch with Artemis.
+
+The Cortex-A76 should arrive on 7nm-fabricated TSMC products running at 3GHz, says AnandTech. The 4x improvements in ML workloads are primarily due to new optimizations in the ASIMD pipelines “and how dot products are handled,” says the story.
+
+Meanwhile, [The Register][8] noted that Cortex-A76 is Arm’s first design that will exclusively run 64-bit kernel-level code. The cores will support 32-bit code, but only at non-privileged levels, says the story.
+
+### Mali-G76 GPU and Mali-V76 VPU
+
+The new Mali-G76 GPU announced with Cortex-A76 targets gaming, VR, AR, and on-device ML. The Mali-G76 is said to provide 30 percent more efficiency and performance density and 1.5x improved performance for mobile gaming. The Bifrost architecture GPU also provides 2.7x ML performance improvements compared to the Mali-G72, which was announced last year with the Cortex-A75.
+
+The Mali-V76 VPU supports UHD 8K viewing experiences. It’s aimed at 4x4 video walls, which are especially popular in China, and is designed to support the 8K video coverage that Japan has promised for the 2020 Olympics. 8K@60 streams require four times the bandwidth of 4K@60 streams. To achieve this, Arm added an extra AXI bus and doubled the line buffers throughout the video pipeline. The VPU also supports 8K@30 decode.
+
+### Project Trillium’s ML chip detailed
+
+Arm previously revealed other details about the [Machine Learning][9] (ML) processor, also referred to as MLP. The ML chip will accelerate AI applications including machine translation and face recognition.
+
+The new processor architecture is part of the Project Trillium initiative for AI, and follows Arm’s second-gen Object Detection (OD) Processor for optimizing visual processing and people/object detection. The ML design will initially debut as a co-processor in mobile phones by late 2019.
+
+Numerous block diagrams for the MLP were published by [AnandTech][10], which was briefed on the design. While stating that any judgment about the performance of the still unfinished ML IP will require next year’s silicon release, the publication says that the ML chip appears to check off all the requirements of a neural network accelerator, including providing efficient convolutional computations and data movement while also enabling sufficient programmability.
+ +Arm claims the chips will provide >3TOPs per Watt performance in 7nm designs with absolute throughputs of 4.6TOPs, deriving a target power of approximately 1.5W. For programmability, MLP will initially target Android’s [Neural Networks API][11] and [Arm’s NN SDK][12]. + +Join us at [Open Source Summit + Embedded Linux Conference Europe][13] in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/6/ai-coming-edge-computing-devices + +作者:[Eric Brown][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/ericstephenbrown +[1]:https://www.arm.com/news/2018/05/arm-announces-new-suite-of-ip-for-premium-mobile-experiences +[2]:https://community.arm.com/processors/b/blog/posts/cortex-a76-laptop-class-performance-with-mobile-efficiency +[3]:https://www.linux.com/news/mediateks-10nm-mobile-focused-soc-will-tap-cortex-a73-and-a32 +[4]:http://linuxgizmos.com/arm-debuts-cortex-a75-and-cortex-a55-with-ai-in-mind/ +[5]:http://linuxgizmos.com/hot-chips-on-parade-at-mwc-and-embedded-world/ +[6]:http://linuxgizmos.com/arm-boosts-big-little-with-dynamiq-and-launches-linux-dev-kit/ +[7]:https://www.anandtech.com/show/12785/arm-cortex-a76-cpu-unveiled-7nm-powerhouse +[8]:https://www.theregister.co.uk/2018/05/31/arm_cortex_a76/ +[9]:https://developer.arm.com/products/processors/machine-learning/arm-ml-processor +[10]:https://www.anandtech.com/show/12791/arm-details-project-trillium-mlp-architecture +[11]:https://developer.android.com/ndk/guides/neuralnetworks/ +[12]:https://developer.arm.com/products/processors/machine-learning/arm-nn +[13]:https://events.linuxfoundation.org/events/elc-openiot-europe-2018/ diff 
--git a/sources/tech/20170829 An Advanced System Configuration Utility For Ubuntu Power Users.md b/sources/tech/20170829 An Advanced System Configuration Utility For Ubuntu Power Users.md new file mode 100644 index 0000000000..5292c290cc --- /dev/null +++ b/sources/tech/20170829 An Advanced System Configuration Utility For Ubuntu Power Users.md @@ -0,0 +1,139 @@
+An Advanced System Configuration Utility For Ubuntu Power Users
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-4-1-720x340.png)
+
+**Ubunsys** is a Qt-based advanced system utility for Ubuntu and its derivatives. Most of the configuration can be easily done from the command line by advanced users. In case you don’t want to use the CLI all the time, you can use the Ubunsys utility to configure your Ubuntu desktop system or its derivatives such as Linux Mint, Elementary OS, etc. Ubunsys can be used to modify system configuration, install, remove, and update packages, remove old kernels, enable/disable sudo access, install the mainline kernel, update software repositories, clean up junk files, upgrade your Ubuntu to the latest version, and so on. All of the aforementioned actions can be done with simple mouse clicks. You don’t need to depend on CLI mode anymore. Here is the list of things you can do with Ubunsys:
+
+ * Install, update, and remove packages.
+ * Update and upgrade software repositories.
+ * Install mainline Kernel.
+ * Remove old and unused Kernels.
+ * Full system update.
+ * Complete system upgrade to the next available version.
+ * Upgrade to the latest development version.
+ * Clean up junk files from your system.
+ * Enable and/or disable sudo access without password.
+ * Make sudo passwords visible when you type them in the Terminal.
+ * Enable and/or disable hibernation.
+ * Enable and/or disable firewall.
+ * Open, backup and import sources.list.d and sudoers files.
+ * Show/hide hidden startup items.
+ * Enable and/or disable login sounds.
+ * Configure dual boot.
+ * Enable/disable lock screen.
+ * Smart system update.
+ * Update and/or run all scripts at once using Scripts Manager.
+ * Exec normal user installation script from git.
+ * Check system integrity and missing GPG keys.
+ * Repair network.
+ * Fix broken packages.
+ * And more yet to come.
+
+**Important note:** Ubunsys is not for Ubuntu beginners. It is dangerous and not a stable version yet. It might break your system. If you’re new to Ubuntu, don’t use it. If you are very curious to use this application, go through each option carefully and proceed at your own risk. Do not forget to back up your important data before using this application.
+
+### Ubunsys – An Advanced System Configuration Utility For Ubuntu Power Users
+
+#### Install Ubunsys
+
+The Ubunsys developer has made a PPA to make the installation process much easier. Ubunsys will currently work on Ubuntu 16.04 LTS and Ubuntu 17.04 64-bit editions.
+
+Run the following commands one by one to add the Ubunsys PPA and install it.
+```
+sudo add-apt-repository ppa:adgellida/ubunsys
+
+sudo apt-get update
+
+sudo apt-get install ubunsys
+
+```
+
+If the PPA doesn’t work, head over to the [**releases page**][1], download and install the Ubunsys package depending upon the architecture you use.
+
+#### Usage
+
+Once installed, launch Ubunsys from the menu. This is what the Ubunsys main interface looks like.
+
+![][3]
+
+As you can see, Ubunsys has four main sections, namely **Packages**, **Tweaks**, **System**, and **Repair**. There are one or more sub-sections available for each main tab to do different operations.
+
+**Packages**
+
+This section allows you to install, remove, and update packages.
+
+![][4]
+
+**Tweaks**
+
+In this section, we can do various system tweaks such as:
+
+ * Open, backup, import sources.list and sudoers file;
+ * Configure dual boot;
+ * Enable/disable login sound, firewall, lock screen, hibernation, sudo access without password.
You can also enable or disable passwordless sudo access for specific users.
+ * Make passwords visible while typing them in the Terminal (Disable Asterisks).
+
+![][5]
+
+**System**
+
+This section is further categorized into three sub-categories, one for each distinct user type.
+
+The **Normal user** tab allows us to:
+
+ * Update and upgrade packages and software repos.
+ * Clean the system.
+ * Exec normal user installation script.
+
+The **Advanced user** section allows us to:
+
+ * Clean old/unused kernels.
+ * Install the mainline kernel.
+ * Do a smart package update.
+ * Upgrade the system.
+
+The **Developer** section allows us to upgrade the Ubuntu system to the latest development version.
+
+![][6]
+
+**Repair**
+
+This is the fourth and last section of Ubunsys. As the name says, this section allows us to repair our system, network, and missing GPG keys, and fix broken packages.
+
+![][7]
+
+As you can see, Ubunsys helps you to do any system configuration, maintenance, and software management tasks with a few mouse clicks. You don’t need to depend on the Terminal anymore. Ubunsys can help you to accomplish any advanced tasks. Again, I warn you, it’s not for beginners and it is not stable yet. So, you can expect bugs and crashes when using it. Use it with care after studying the options and their impact.
+
+Cheers!
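As a concrete sketch of the fallback path mentioned in the install section (downloading a release package when the PPA fails), the manual route looks roughly like this. The `vX.Y.Z` tag and the asset name are placeholders, not real release identifiers; check the releases page for the current ones before running the commented commands.

```shell
# Sketch of the manual fallback install from the GitHub releases page.
# The vX.Y.Z tag and asset name are placeholders; look up the real ones
# at https://github.com/adgellida/ubunsys/releases before running.
ARCH=$(dpkg --print-architecture 2>/dev/null || echo amd64)
PKG="ubunsys_${ARCH}.deb"
URL="https://github.com/adgellida/ubunsys/releases/download/vX.Y.Z/${PKG}"
echo "Would download: ${URL}"
# wget "${URL}"
# sudo dpkg -i "${PKG}"
# sudo apt-get install -f    # pull in any missing dependencies
```

The `apt-get install -f` step matters because `dpkg -i` does not resolve dependencies on its own.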
+
+**Resource:**
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/ubunsys-advanced-system-configuration-utility-ubuntu-power-users/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://github.com/adgellida/ubunsys/releases
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-1.png
+[4]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-2.png
+[5]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-5.png
+[6]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-9.png
+[7]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-11.png
diff --git a/sources/tech/20180115 How debuggers really work.md b/sources/tech/20180115 How debuggers really work.md index 452bc67823..8bde4eaad5 100644 --- a/sources/tech/20180115 How debuggers really work.md +++ b/sources/tech/20180115 How debuggers really work.md @@ -1,3 +1,4 @@
+translating by sunxi
 How debuggers really work
 ======
diff --git a/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md b/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md new file mode 100644 index 0000000000..9bda5fa335 --- /dev/null +++ b/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md @@ -0,0 +1,170 @@
+The Easiest PDO Tutorial (Basics)
+======
+
+![](http://www.theitstuff.com/wp-content/uploads/2018/04/php-language.jpg)
+
+Approximately 80% of the web is powered by PHP, and a similarly high number goes for SQL as well. Up until PHP version 5.5, we had the **mysql_** commands for accessing MySQL databases, but they were eventually deprecated due to insufficient security.
+
+This happened with PHP 5.5 in 2013, and as I write this article, the year is 2018 and we are on PHP 7.2. The deprecation of **mysql_** brought 2 major ways of accessing the database: the **mysqli** and the **PDO** libraries.
+
+Now, though the mysqli library was the official successor, PDO gained more fame for a simple reason: mysqli could only support MySQL databases, whereas PDO supports 12 different types of database drivers. Also, PDO had several more features that made it the better choice for most developers. You can see some of the feature comparisons in the table below:
+
+| | PDO | MySQLi |
+| --- | --- | --- |
+| Database support | 12 drivers | Only MySQL |
+| Paradigm | OOP | Procedural + OOP |
+| Prepared Statements (Client Side) | Yes | No |
+| Named Parameters | Yes | No |
+
+Now I guess it is pretty clear why PDO is the choice for most developers, so let’s dig into it and hopefully we will cover most of the PDO you need in this article itself.
+
+### Connection
+
+The first step is connecting to the database, and since PDO is completely object-oriented, we will be using an instance of the PDO class.
+
+The first thing we do is define the host, database name, user, password, and the database charset.
+
+`$host = 'localhost';`
+
+`$db = 'theitstuff';`
+
+`$user = 'root';`
+
+`$pass = 'root';`
+
+`$charset = 'utf8mb4';`
+
+`$dsn = "mysql:host=$host;dbname=$db;charset=$charset";`
+
+`$conn = new PDO($dsn, $user, $pass);`
+
+After that, as you can see in the code above, we have created the **DSN** variable, which simply holds the information about the database. For some people running MySQL on external servers, you can also adjust the port number by simply supplying **port=$port_number** in the DSN.
+
+Finally, you can create an instance of the PDO class; I have used the **$conn** variable and supplied the **$dsn, $user, $pass** parameters.
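One detail worth adding to the connection step: by default, PDO reports many failures silently, so a common pattern is to switch it into exception mode and wrap the constructor in a `try`/`catch`. The sketch below uses SQLite's in-memory driver as a stand-in DSN, purely so it can run without a MySQL server; with MySQL you would pass `$dsn, $user, $pass` exactly as shown above.

```php
<?php
// Sketch: enable exception-based error reporting on the connection.
// 'sqlite::memory:' is a stand-in DSN so this runs without a MySQL
// server; substitute the $dsn, $user, $pass from the example above.
$options = [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, // throw PDOException on errors
];

try {
    $conn = new PDO('sqlite::memory:', null, null, $options);
    echo "connected\n";
} catch (PDOException $e) {
    echo 'Connection failed: ' . $e->getMessage() . "\n";
}
```

With `ERRMODE_EXCEPTION` set, a typo in a later `query()` or `prepare()` call fails loudly instead of quietly returning `false`, which makes the examples that follow much easier to debug.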
If you have followed this, you should now have an object named $conn that is an instance of the PDO connection class. Now it’s time to get into the database and run some queries.
+
+### A simple SQL Query
+
+Let us now run a simple SQL query.
+
+`$tis = $conn->query('SELECT name, age FROM students');`
+
+`while ($row = $tis->fetch())`
+
+`{`
+
+`echo $row['name']."\t";`
+
+`echo $row['age'];`
+
+`echo "<br/>";`
+
+`}`
+
+This is the simplest form of running a query with PDO. We first created a variable called **$tis** (short for TheITStuff), and then you can see the syntax as we used the query function from the $conn object that we had created.
+
+We then ran a while loop and created a **$row** variable to fetch the contents from the **$tis** object, and finally echoed out each row by calling out the column name.
+
+Easy, wasn’t it? Now let’s get to prepared statements.
+
+### Prepared Statements
+
+Prepared statements were one of the major reasons people started using PDO, as they can prevent SQL injections.
+
+There are 2 basic methods available: you can use either positional or named parameters.
+
+#### Positional parameters
+
+Let us see an example of a query using positional parameters.
+
+`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");`
+
+`$tis->bindValue(1,'mike');`
+
+`$tis->bindValue(2,22);`
+
+`$tis->execute();`
+
+In the above example, we have placed 2 question marks and later used the **bindValue()** function to map the values into the query. The values are bound to the position of the question mark in the statement.
+
+I could also use variables instead of directly supplying values by using the **bindParam()** function; an example would be this.
+
+`$name='Rishabh'; $age=20;`
+
+`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");`
+
+`$tis->bindParam(1,$name);`
+
+`$tis->bindParam(2,$age);`
+
+`$tis->execute();`
+
+#### Named Parameters
+
+Named parameters are also prepared statements that map values/variables to a named position in the query. Since there is no positional binding, they are very efficient in queries that use the same variable multiple times.
`$name='Rishabh'; $age=20;`

`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(:name, :age)");`

`$tis->bindParam(':name', $name);`

`$tis->bindParam(':age', $age);`

`$tis->execute();`

The only change you will notice is that I used **:name** and **:age** as placeholders and then mapped variables to them. The colon before the parameter name is required; it is what tells PDO that the placeholder stands for a variable.

You can similarly use **bindValue()** to directly map values using named parameters as well.

### Fetching the Data

PDO is very rich when it comes to fetching data, and it offers a number of formats in which you can get the data from your database.

You can use **PDO::FETCH_ASSOC** to fetch associative arrays, **PDO::FETCH_NUM** to fetch numeric arrays, and **PDO::FETCH_OBJ** to fetch rows as objects.

`$tis = $conn->prepare("SELECT * FROM STUDENTS");`

`$tis->execute();`

`$result = $tis->fetchAll(PDO::FETCH_ASSOC);`

You can see that I have used **fetchAll** since I wanted all matching records. If only one row is expected or desired, you can simply use **fetch**.

Now that we have fetched the data, it is time to loop through it, and that is extremely easy.

`foreach($result as $lnu){`

`echo $lnu['name'];`

`echo $lnu['age']."<br/>";`

`}`

You can see that since I had requested associative arrays, I am accessing individual members by their names.

There is absolutely no problem in specifying the fetch mode on every call, but you can also set a default fetch mode when defining the connection variable itself.

All you need to do is create an options array where you put in all your default configs and simply pass the array to the PDO constructor.

`$options = [`

` PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,`

`];`

`$conn = new PDO($dsn, $user, $pass, $options);`

This was a very brief and quick intro to PDO; we will publish a more advanced tutorial soon. If you have any difficulties understanding any part of the tutorial, do let me know in the comment section and I’ll be there for you.


--------------------------------------------------------------------------------

via: http://www.theitstuff.com/easiest-pdo-tutorial-basics

作者:[Rishabh Kandari][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.theitstuff.com/author/reevkandari
diff --git a/sources/tech/20180525 15 books for kids who (you want to) love Linux and open source.md b/sources/tech/20180525 15 books for kids who (you want to) love Linux and open source.md
deleted file mode 100644
index 2f6872fa20..0000000000
--- a/sources/tech/20180525 15 books for kids who (you want to) love Linux and open source.md
+++ /dev/null
@@ -1,116 +0,0 @@
-pinewall translating
-
-15 books for kids who (you want to) love Linux and open source
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_library_reading_list.jpg?itok=O3GvU1gH)
-In my job I've heard professionals in tech, from C-level executives to everyone in between, say they want their own kids to learn more about [Linux][1]
and [open source][2]. Some of them seem to have an easy time with their kids following closely in their footsteps. And some have a tough time getting their kids to see what makes Linux and open source so cool. Maybe their time will come, maybe it won't. There's a lot of interesting, valuable stuff out there in this big world. - -Either way, if you have a kid or know a kid that may be interested in learning more about making something with code or hardware, from games to robots, this list is for you. - -### 15 books for kids with a focus on Linux and open source - -[Adventures in Raspberry Pi][3] by Carrie Anne Philbin - -The tiny, credit-card sized Raspberry Pi has become a huge hit among kids—and adults—interested in programming. It does everything your desktop can do, but with a few basic programming skills you can make it do so much more. With simple instructions, fun projects, and solid skills, Adventures in Raspberry Pi is the ultimate kids' programming guide! (Recommendation by [Joshua Allen Holm][4] | Review is an excerpt from the book's abstract) - -[Automate the Boring Stuff with Python][5] by Al Sweigart - -This is a classic introduction to programming that's written clearly enough for a motivated 11-year-old to understand and enjoy. Readers will quickly find themselves working on practical and useful tasks while picking up good coding practices almost by accident. The best part: If you like, you can read the whole book online. (Recommendation and review by [DB Clinton][6]) - -[Coding Games in Scratch][7] by Jon Woodcock - -Written for children ages 8-12 with little to no coding experience, this straightforward visual guide uses fun graphics and easy-to-follow instructions to show young learners how to build their own computer projects using Scratch, a popular free programming language. 
(Recommendation by [Joshua Allen Holm][4] | Review is an excerpt from the book's abstract) - -[Doing Math with Python][8] by Amit Saha - -Whether you're a student or a teacher who's curious about how you can use Python for mathematics, this book is for you. Beginning with simple mathematical operations in the Python shell to the visualization of data using Python libraries like matplotlib, this books logically takes the reader step by easily followed step from the basics to more complex operations. This book will invite your curiosity about the power of Python with mathematics. (Recommendation and review by [Don Watkins][9]) - -[Girls Who Code: Learn to Code and Change the World][10] by Reshma Saujani - -From the leader of the movement championed by Sheryl Sandberg, Malala Yousafzai, and John Legend, this book is part how-to, part girl-empowerment, and all fun. Bursting with dynamic artwork, down-to-earth explanations of coding principles, and real-life stories of girls and women working at places like Pixar and NASA, this graphically animated book shows what a huge role computer science plays in our lives and how much fun it can be. (Recommendation by [Joshua Allen Holm][4] | Review is an excerpt from the book's abstract) - -[Invent Your Own Computer Games with Python][11] by Al Sweigart - -This book will teach you how to make computer games using the popular Python programming language—even if you’ve never programmed before! Begin by building classic games like Hangman, Guess the Number, and Tic-Tac-Toe, and then work your way up to more advanced games, like a text-based treasure hunting game and an animated collision-dodging game with sound effects. 
(Recommendation by [Joshua Allen Holm][4] | Review is an excerpt from the book's abstract) - -[Lauren Ipsum: A Story About Computer Science and Other Improbable Things][12] by Carlos Bueno - -Written in the spirit of Alice in Wonderland, Lauren Ipsum takes its heroine through a slightly magical world whose natural laws are the laws of logic and computer science and whose puzzles can be solved only through learning and applying the principles of computer code. Computers are never mentioned, but they're at the center of it all. (Recommendation and review by [DB Clinton][6]) - -[Learn Java the Easy Way: A Hands-On Introduction to Programming][13] by Bryson Payne - -Java is the world's most popular programming language, but it’s known for having a steep learning curve. This book takes the chore out of learning Java with hands-on projects that will get you building real, functioning apps right away. (Recommendation by [Joshua Allen Holm][4] | Review is an excerpt from the book's abstract) - -[Lifelong Kindergarten][14] by Mitchell Resnick - -Kindergarten is becoming more like the rest of school. In this book, learning expert Mitchel Resnick argues for exactly the opposite: The rest of school (even the rest of life) should be more like kindergarten. To thrive in today's fast-changing world, people of all ages must learn to think and act creatively―and the best way to do that is by focusing more on imagining, creating, playing, sharing, and reflecting, just as children do in traditional kindergartens. Drawing on experiences from more than 30 years at MIT's Media Lab, Resnick discusses new technologies and strategies for engaging young people in creative learning experiences. (Recommendation by [Don Watkins][9] | Review from Amazon) - -[Python for Kids][15] by Jason Briggs - -Jason Briggs has taken the art of teaching Python programming to a new level in this book that can easily be an introductory text for teachers and students as well as parents and kids. 
Complex concepts are presented with step-by-step directions that will have even neophyte programmers experiencing the success that invites you to learn more. This book is an extremely readable, playful, yet powerful introduction to Python programming. You will learn fundamental data structures like tuples, lists, and maps. The reader is shown how to create functions, reuse code, and use control structures like loops and conditional statements. Kids will learn how to create games and animations, and they will experience the power of Tkinter to create advanced graphics. (Recommendation and review by [Don Watkins][9]) - -[Scratch Programming Playground][16] by Al Sweigart - -Scratch programming is often seen as a playful way to introduce young people to programming. In this book, Al Sweigart demonstrates that Scratch is in fact a much more powerful programming language than most people realize. Masterfully written and presented in his own unique style, Al will have kids exploring the power of Scratch to create complex graphics and animation in no time. (Recommendation and review by [Don Watkins][9]) - -[Secret Coders][17] by Mike Holmes - -From graphic novel superstar (and high school computer programming teacher) Gene Luen Yang comes a wildly entertaining new series that combines logic puzzles and basic programming instruction with a page-turning mystery plot. Stately Academy is the setting, a school that is crawling with mysteries to be solved! (Recommendation by [Joshua Allen Holm][4] | Review is an excerpt from the book's abstract) - -[So, You Want to Be a Coder?: The Ultimate Guide to a Career in Programming, Video Game Creation, Robotics, and More!][18] by Jane Bedell - -Love coding? Make your passion your profession with this comprehensive guide that reveals a whole host of careers working with code. 
(Recommendation by [Joshua Allen Holm][4] | Review is an excerpt from the book's abstract) - -[Teach Your Kids to Code][19] by Bryson Payne - -Are you looking for a playful way to introduce children to programming with Python? Bryson Payne has written a masterful book that uses the metaphor of turtle graphics in Python. This book will have you creating simple programs that are the basis for advanced Python programming. This book is a must-read for anyone who wants to teach young people to program. (Recommendation and review by [Don Watkins][9]) - -[The Children's Illustrated Guide to Kubernetes][20] by Matt Butcher, illustrated by Bailey Beougher - -Introducing Phippy, an intrepid little PHP app, and her journey to Kubernetes. (Recommendation by [Chris Short][21] | Review from [Matt Butcher's blog post][20].) - -### Bonus books for babies - -[CSS for Babies][22], [Javascript for Babies][23], and [HTML for Babies][24] by Sterling Children's - -These concept books familiarize young ones with the kind of shapes and colors that make up web-based programming languages. This beautiful book is a colorful introduction to coding and the web, and it's the perfect gift for any technologically minded family. (Recommendation by [Chris Short][21] | Review from Amazon) - -Have other books for babies or kids to share? Let us know in the comments. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/books-kids-linux-open-source - -作者:[Jen Wike Huger][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/remyd -[1]:https://opensource.com/resources/linux -[2]:https://opensource.com/article/18/3/what-open-source-programming -[3]:https://www.amazon.com/Adventures-Raspberry-Carrie-Anne-Philbin/dp/1119046025 -[4]:https://opensource.com/users/holmja -[5]:https://automatetheboringstuff.com/ -[6]:https://opensource.com/users/dbclinton -[7]:https://www.goodreads.com/book/show/25733628-coding-games-in-scratch -[8]:https://nostarch.com/doingmathwithpython -[9]:https://opensource.com/users/don-watkins -[10]:https://www.amazon.com/Girls-Who-Code-Learn-Change/dp/042528753X -[11]:http://inventwithpython.com/invent4thed/ -[12]:https://www.amazon.com/gp/product/1593275749/ref=as_li_tl?ie=UTF8&tag=projemun-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=1593275749&linkId=e05e1f12176c4959cc1aa1a050908c4a -[13]:https://nostarch.com/learnjava -[14]:http://lifelongkindergarten.net/ -[15]:https://nostarch.com/pythonforkids -[16]:https://nostarch.com/scratchplayground -[17]:http://www.secret-coders.com/ -[18]:https://www.amazon.com/So-You-Want-Coder-Programming/dp/1582705798?tag=ad-backfill-amzn-no-or-one-good-20 -[19]:https://opensource.com/education/15/9/review-bryson-payne-teach-your-kids-code -[20]:https://deis.com/blog/2016/kubernetes-illustrated-guide/ -[21]:https://opensource.com/users/chrisshort -[22]:https://www.amazon.com/CSS-Babies-Code-Sterling-Childrens/dp/1454921560/ -[23]:https://www.amazon.com/Javascript-Babies-Code-Sterling-Childrens/dp/1454921579/ -[24]:https://www.amazon.com/HTML-Babies-Code-Sterling-Childrens/dp/1454921552 diff --git 
a/sources/tech/20180607 Mesos and Kubernetes- It-s Not a Competition.md b/sources/tech/20180607 Mesos and Kubernetes- It-s Not a Competition.md
new file mode 100644
index 0000000000..a168ac9f4a
--- /dev/null
+++ b/sources/tech/20180607 Mesos and Kubernetes- It-s Not a Competition.md
@@ -0,0 +1,66 @@
+Mesos and Kubernetes: It's Not a Competition
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/architecture-barge-bay-161764_0.jpg?itok=vNChG5fb)
+
+The roots of Mesos can be traced back to 2009, when Ben Hindman was a PhD student at the University of California, Berkeley working on parallel programming. They were doing massive parallel computations on 128-core chips, trying to solve multiple problems such as making software and libraries run more efficiently on those chips. He started talking with fellow students to see if they could borrow ideas from parallel processing and multiple threads and apply them to cluster management.
+
+“Initially, our focus was on Big Data,” said Hindman. Back then, Big Data was really hot and Hadoop was one of the hottest technologies. “We recognized that the way people were running things like Hadoop on clusters was similar to the way that people were running multiple threaded applications and parallel applications,” said Hindman.
+
+However, it was not very efficient, so they started thinking about how it could be done better through cluster management and resource management. “We looked at many different technologies at that time,” Hindman recalled.
+
+Hindman and his colleagues, however, decided to adopt a novel approach. “We decided to create a lower level of abstraction for resource management, and run other services on top of that to do scheduling and other things,” said Hindman. “That’s essentially the essence of Mesos -- to separate out the resource management part from the scheduling part.”
+
+It worked, and Mesos has been going strong ever since.
+ +### The project goes to Apache + +The project was founded in 2009. In 2010 the team decided to donate the project to the Apache Software Foundation (ASF). It was incubated at Apache and in 2013, it became a Top-Level Project (TLP). + +There were many reasons why the Mesos community chose Apache Software Foundation, such as the permissiveness of Apache licensing, and the fact that they already had a vibrant community of other such projects. + +It was also about influence. A lot of people working on Mesos were also involved with Apache, and many people were working on projects like Hadoop. At the same time, many folks from the Mesos community were working on other Big Data projects like Spark. This cross-pollination led all three projects -- Hadoop, Mesos, and Spark -- to become ASF projects. + +It was also about commerce. Many companies were interested in Mesos, and the developers wanted it to be maintained by a neutral body instead of being a privately owned project. + +### Who is using Mesos? + +A better question would be, who isn’t? Everyone from Apple to Netflix is using Mesos. However, Mesos had its share of challenges that any technology faces in its early days. “Initially, I had to convince people that there was this new technology called ‘containers’ that could be interesting as there is no need to use virtual machines,” said Hindman. + +The industry has changed a great deal since then, and now every conversation around infrastructure starts with ‘containers’ -- thanks to the work done by Docker. Today convincing is not needed, but even in the early days of Mesos, companies like Apple, Netflix, and PayPal saw the potential. They knew they could take advantage of containerization technologies in lieu of virtual machines. “These companies understood the value of containers before it became a phenomenon,” said Hindman. + +These companies saw that they could have a bunch of containers, instead of virtual machines. 
All they needed was something to manage and run these containers, and they embraced Mesos. Some of the early users of Mesos included Apple, Netflix, PayPal, Yelp, OpenTable, and Groupon. + +“Most of these organizations are using Mesos for just running arbitrary services,” said Hindman, “But there are many that are using it for doing interesting things with data processing, streaming data, analytics workloads and applications.” + +One of the reasons these companies adopted Mesos was the clear separation between the resource management layers. Mesos offers the flexibility that companies need when dealing with containerization. + +“One of the things we tried to do with Mesos was to create a layering so that people could take advantage of our layer, but also build whatever they wanted to on top,” said Hindman. “I think that's worked really well for the big organizations like Netflix and Apple.” + +However, not every company is a tech company; not every company has or should have this expertise. To help those organizations, Hindman co-founded Mesosphere to offer services and solutions around Mesos. “We ultimately decided to build DC/OS for those organizations which didn’t have the technical expertise or didn't want to spend their time building something like that on top.” + +### Mesos vs. Kubernetes? + +People often think in terms of x versus y, but it’s not always a question of one technology versus another. Most technologies overlap in some areas, and they can also be complementary. “I don't tend to see all these things as competition. I think some of them actually can work in complementary ways with one another,” said Hindman. + +“In fact the name Mesos stands for ‘middle’; it’s kind of a middle OS,” said Hindman, “We have the notion of a container scheduler that can be run on top of something like Mesos. 
When Kubernetes first came out, we actually embraced it in the Mesos ecosystem and saw it as another way of running containers in DC/OS on top of Mesos.”
+
+The Mesos community also resurrected a project called [Marathon][1] (a container orchestrator for Mesos and DC/OS), which they have made a first-class citizen in the Mesos ecosystem. However, Marathon does not really compare with Kubernetes. “Kubernetes does a lot more than what Marathon does, so you can’t swap them with each other,” said Hindman. “At the same time, we have done many things in Mesos that are not in Kubernetes. So, these technologies are complementary to each other.”
+
+Instead of viewing such technologies as adversarial, they should be seen as beneficial to the industry. It’s not duplication of technologies; it’s diversity. According to Hindman, “it could be confusing for the end user in the open source space because it's hard to know which technologies are suitable for what kind of workload, but that’s the nature of the beast called Open Source.”
+
+That just means there are more choices, and everybody wins.
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/6/mesos-and-kubernetes-its-not-competition + +作者:[Swapnil Bhartiya][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/arnieswap +[1]:https://mesosphere.github.io/marathon/ diff --git a/sources/tech/20180608 How to use screen scraping tools to extract data from the web.md b/sources/tech/20180608 How to use screen scraping tools to extract data from the web.md new file mode 100644 index 0000000000..2737123f8e --- /dev/null +++ b/sources/tech/20180608 How to use screen scraping tools to extract data from the web.md @@ -0,0 +1,207 @@ +How to use screen scraping tools to extract data from the web +====== +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG) +A perfect internet would deliver data to clients in the format of their choice, whether it's CSV, XML, JSON, etc. The real internet teases at times by making data available, but usually in HTML or PDF documents—formats designed for data display rather than data interchange. Accordingly, the [screen scraping][1] of yesteryear—extracting displayed data and converting it to the requested format—is still relevant today. + +Perl has outstanding tools for screen scraping, among them the `HTML::TableExtract` package described in the Scraping program below. + +### Overview of the scraping program + +The screen-scraping program has two main pieces, which fit together as follows: + + * The file data.html contains the data to be scraped. The data in this example, which originated in a university site under renovation, addresses the issue of whether the income associated with a college degree justifies the degree's cost. 
The data includes median incomes, percentiles, and other information about areas of study such as computing, engineering, and liberal arts. To run the Scraping program, the data.html file should be hosted on a web server, in my case a local Nginx server. A standalone Perl web server such as `HTTP::Server::PSGI` or `HTTP::Server::Simple` would do as well. + * The file scrape.pl contains the Scraping program, which uses features from the `Plack/PSGI` packages, in particular a Plack web server. The Scraping program is launched from the command line (as explained below). A user enters the URL for the Plack server (`localhost:5000/`) in a browser, and the following happens: + * The browser connects to the Plack server, an instance of `HTTP::Server::PSGI`, and issues a GET request for the Scraping program. The single slash (`/`) at the end of the URL identifies this program. (A modern browser would add the closing slash even if the user failed to do so.) + * The Scraping program then issues a GET request for the data.html document. If the request succeeds, the application extracts the relevant data from the document using the `HTML::TableExtract` package, saves the extracted data to a file, and takes some basic statistical measures that represent processing the extracted data. An HTML report like the following is returned to the user's browser. + + +![HTML report generated by the Scraping program][3] + +Fig. 1: Final report from the Scraping program + +The request traffic from the user's browser to the Plack server and then to the server hosting the data.html document (e.g., Nginx) can be depicted as follows: +``` +              GET localhost:5000/             GET localhost:80/data.html + +user's browser------------------->Plack server-------------------------->Nginx + +``` + +The final step involves only the Plack server and the user's browser: +``` +             reportFinal.html + +Plack server------------------>user's browser + +``` + +Fig. 
1 above shows the final report document. + +### The scraping program in detail + +The source code and data file (data.html) are available from my [website][4] in a ZIP file that includes a README. Here is a quick summary of the pieces, and clarifications will follow: +``` +data.html             ## data source to be hosted by a web server + +scrape.pl             ## main source code, run with the plackup utility (see below) + +Stats::Controller.pm  ## handles request routing, data extraction, and processing + +Stats::Util.pm        ## utility functions used in Controller.pm + +report.html           ## HTML template used to generate the report + +rawData.dat           ## the extracted data + +``` + +The `Plack/PSGI` packages come with a command-line utility named `plackup`, which can be used to launch the Scraping program. With `%` as the command-line prompt, the command for starting the Scraping program is: +``` +% plackup scrape.pl + +``` + +The `plackup` command starts a standalone Plack web server that hosts the Scraping program. The Scraping code handles request routing, extracts data from the data.html document, produces some basic statistical measures, and then uses the `Template::Recall` package to generate an HTML report for the user. Because the Plack server runs indefinitely, the Scraping program prints the process ID, which can be used to kill the server and the Scraping app. + +`Plack/PSGI` supports Rails-style routing in which an HTTP request is dispatched to a specific request handler based on two factors: + + * The HTTP request method (verb) such as GET or POST. + * The Uniform Resource Identifier (URI or noun) for the requested resource; in this case the standalone finishing slash (`/`) in the URL `http://localhost:5000/` that a user enters in a browser once the Scraping program has launched. 
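Stripped of any routing helpers, the same noun/verb dispatch can be sketched directly against the raw PSGI interface. The sketch below is a generic illustration, not the Scraping program's actual code; the handler body and the placeholder HTML are assumptions for demonstration only:

```perl
use strict;
use warnings;

# A PSGI application is just a code reference: it receives the environment
# hash and returns [status, headers, body]. Dispatch on verb + noun by hand.
my $app = sub {
    my $env = shift;
    if ($env->{REQUEST_METHOD} eq 'GET' && $env->{PATH_INFO} eq '/') {
        # A real handler would scrape the data and build the report here.
        return [200, ['Content-Type' => 'text/html'],
                ['<p>report goes here</p>']];
    }
    # No matching noun/verb combination.
    return [404, ['Content-Type' => 'text/plain'], ['Not Found']];
};

$app;    # plackup evaluates the file and uses this coderef as the app
```

Saved to a file and started with `plackup`, this serves `GET /` and lets every other method/path combination fall through to the 404 branch, which is the essence of the routing described above.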
+
+
+
+The Scraping program handles only one type of request: a GET for the resource named `/`, and this resource is the screen-scraping and data-processing code in my `Stats::Controller` package. Here, for review, is the `Plack/PSGI` routing setup, right at the top of the source file scrape.pl:
+```
+my $router = router {
+
+    match '/', {method => 'GET'},   ## noun/verb combo: / is noun, GET is verb
+
+    to {controller => 'Controller', action => 'index'}; ## handler is function get_index
+
+    # Other actions as needed
+
+};
+
+```
+
+The request handler `Controller::get_index` has only high-level logic, leaving the screen-scraping and report-generating details to utility functions in the Util.pm file, as described in the following section.
+
+### The screen-scraping code
+
+Recall that the Plack server dispatches a GET request for `localhost:5000/` to the Scraping program's `get_index` function. This function, as the request handler, then starts the job of retrieving the data to be scraped, scraping the data, and generating the final report. The data-retrieval part falls to a utility function, which uses Perl's `LWP::UserAgent` package to get the data from whatever server is hosting the data.html document. With the data document in hand, the Scraping program invokes the utility function `extract_from_html` to do the data extraction.
+
+The data.html document happens to be well-formed XML, which means a Perl package such as `XML::LibXML` could be used to extract the data through an explicit XML parse. However, the `HTML::TableExtract` package is inviting because it bypasses the tedium of XML parses, and (in very little code) delivers a Perl hash with the extracted data. Data aggregates in HTML documents usually occur in lists or tables, and the `HTML::TableExtract` package targets tables.
Here are the three critical lines of code for the data extraction:
+```
+my $col_headers = col_headers(); ## col_headers() returns a reference to an array of the table's column names
+
+my $te = HTML::TableExtract->new(headers => $col_headers);
+
+$te->parse($page);  ## $page is data.html
+
+```
+
+The `$col_headers` variable holds a reference to a Perl array of strings, each a column header in the HTML document:
+```
+sub col_headers {    ## column headers in the HTML table
+
+    return ["Area",
+
+            "MedianWage",
+
+            ...
+
+            "BoostFromGradDegree"];
+
+} ## col_headers
+
+```
+
+After the call to the `TableExtract::parse` function, the Scraping program uses the `TableExtract::rows` function to iterate over the rows of extracted data—rows of data without the HTML markup. These rows, as Perl lists, are added to a Perl hash named `%majors_hash`, which can be depicted as follows:
+
+ * Each key identifies an area of study such as Computing or Engineering.
+
+ * The value of each key is the list of seven extracted data items, where seven is the number of columns in the HTML table. For Computing, the list with annotations is:
+```
+    name            median  % with this degree  income boost from GD
+     /                 /            /            /
+ (Computing  55000  75000  112000  5.1%  32.0%  31.0%)   ## data items
+              /              \           \
+        25th-ptile      75th-ptile  % going on for GD = grad degree
+```
+
+
+
+
+The hash with the extracted data is written to the local file rawData.dat:
+```
+ForeignLanguage 50000 35000 75000 3.5% 54% 101%
+LiberalArts 47000 32000 70000 9.7% 41% 48%
+...
+Engineering 78000 54000 104000 8.2% 37% 32%
+Computing 75000 51000 112000 5.1% 32% 31%
+...
+PublicPolicy 50000 36000 74000 2.3% 24% 45%
+```
+
+The next step is to process the extracted data, in this case by doing rudimentary statistical analysis using the `Statistics::Descriptive` package. In Fig.
1 above, the statistical summary is presented in a separate table at the bottom of the report.
+
+### The report-generation code
+
+The final step in the Scraping program is to generate a report. Perl has options for generating HTML, and `Template::Recall` is among them. As the name suggests, the package generates HTML from an HTML template, which is a mix of standard HTML markup and customized tags that serve as placeholders for data generated from backend code. The template file is report.html, and the backend function of interest is `Controller::generate_report`. Here is how the code and the template interact.
+
+The report document (Fig. 1) has two tables. The top table is generated through iteration, as each row has the same columns (area of study, income for the 25th percentile, and so on). In each iteration, the code creates a hash with values for a particular area of study:
+```
+my %row = (
+     major => $key,
+     wage  => '$' . commify($values[0]), ## commify turns 1234 into 1,234
+     p25   => '$' . commify($values[1]),
+     p75   => '$' . commify($values[2]),
+     pop   => $values[3],
+     grad  => $values[4],
+     boost => $values[5]
+);
+
+```
+
+The hash keys are Perl [barewords][5] such as `major` and `wage` that represent items in the list of data values extracted earlier from the HTML data document. The corresponding HTML template looks like this:
+```
+[ === even  === ]
+
+   ['major']
+   ['p25']
+   ['wage']
+   ['p75']
+   ['pop']
+   ['grad']
+   ['boost']
+
+[=== end1 ===]
+```
+
+The customized tags are in square brackets. The tags at the top and the bottom mark the beginning and the end, respectively, of a template region to be rendered. The other customized tags identify individual targets for the backend code. For example, the template column identified as `major` matches the hash entry with `major` as the key.
Here is the call in the backend code that binds the data to the customized tags:
+```
+print OUTFILE $tr->render('end1');
+
+```
+
+The reference `$tr` is to a `Template::Recall` instance, and `OUTFILE` is the report file reportFinal.html, which is generated from the template file report.html together with the backend code. If all goes well, the reportFinal.html document is what the user sees in the browser (see Fig. 1).
+
+The Scraping program draws from excellent Perl packages such as `Plack/PSGI`, `LWP::UserAgent`, `HTML::TableExtract`, `Template::Recall`, and `Statistics::Descriptive` to deal with the often messy task of screen-scraping for data. These packages play together nicely, as each targets a specific subtask. Finally, the Scraping program might be extended to cluster the extracted data: The `Algorithm::KMeans` package is suited for this extension and could use the data persisted in the rawData.dat file.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/6/screen-scraping
+
+作者:[Marty Kalin][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/mkalindepauledu
+[1]:https://en.wikipedia.org/wiki/Data_scraping#Screen_scraping
+[2]:/file/399886
+[3]:https://opensource.com/sites/default/files/uploads/scrapeshot.png (HTML report generated by the Scraping program)
+[4]:http://condor.depaul.edu/mkalin
+[5]:https://en.wiktionary.org/wiki/bareword
diff --git a/sources/tech/20180611 Turn Your Raspberry Pi into a Tor Relay Node.md b/sources/tech/20180611 Turn Your Raspberry Pi into a Tor Relay Node.md
deleted file mode 100644
index c0f14a6609..0000000000
--- a/sources/tech/20180611 Turn Your Raspberry Pi into a Tor Relay Node.md
+++ /dev/null
@@ -1,134 +0,0 @@
-Translating by qhwdw
-Turn Your
Raspberry Pi into a Tor Relay Node -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tor-onion-router.jpg?itok=6WUl0ElH) - -If you’re anything like me, you probably got yourself a first- or second-generation Raspberry Pi board when they first came out, played with it for a while, but then shelved it and mostly forgot about it. After all, unless you’re a robotics enthusiast, you probably don’t have that much use for a computer with a pretty slow processor and 256 megabytes of RAM. This is not to say that there aren’t cool things you can do with one of these, but between work and other commitments, I just never seem to find the right time for some good old nerding out. - -However, if you would like to put it to good use without sacrificing too much of your time or resources, you can turn your old Raspberry Pi into a perfectly functioning Tor relay node. - -### What is a Tor Relay node - -You have probably heard about the [Tor project][1] before, but just in case you haven’t, here’s a very quick summary. The name “Tor” stands for “The Onion Router” and it is a technology created to combat online tracking and other privacy violations. - -Everything you do on the Internet leaves a set of digital footprints in every piece of equipment that your IP packets traverse: all of the switches, routers, load balancers and destination websites log the IP address from which your session originated and the IP address of the internet resource you are accessing (and often its hostname, [even when using HTTPS][2]). If you’re browsing from home, then your IP can be directly mapped to your household. If you’re using a VPN service ([as you should be][3]), then your IP can be mapped to your VPN provider, and then they are the ones who can map it to your household. In any case, odds are that someone somewhere is assembling an online profile on you based on the sites you visit and how much time you spend on each of them. 
Such profiles are then sold, aggregated with matching profiles collected from other services, and then monetized by ad networks. At least, that’s the optimist’s view of how that data is used -- I’m sure you can think of many examples of how your online usage profiles can be used against you in much more nefarious ways. - -The Tor project attempts to provide a solution to this problem by making it impossible (or, at least, unreasonably difficult) to trace the endpoints of your IP session. Tor achieves this by bouncing your connection through a chain of anonymizing relays, consisting of an entry node, relay node, and exit node: - - 1. The **entry node** only knows your IP address, and the IP address of the relay node, but not the final destination of the request; - - 2. The **relay node** only knows the IP address of the entry node and the IP address of the exit node, and neither the origin nor the final destination - - 3. The **exit node** **** only knows the IP address of the relay node and the final destination of the request; it is also the only node that can decrypt the traffic before sending it over to its final destination - - - - -Relay nodes play a crucial role in this exchange because they create a cryptographic barrier between the source of the request and the destination. Even if exit nodes are controlled by adversaries intent on stealing your data, they will not be able to know the source of the request without controlling the entire Tor relay chain. - -As long as there are plenty of relay nodes, your privacy when using the Tor network remains protected -- which is why I heartily recommend that you set up and run a relay node if you have some home bandwidth to spare. 
- -#### Things to keep in mind regarding Tor relays - -A Tor relay node only receives encrypted traffic and sends encrypted traffic -- it never accesses any other sites or resources online, so you do not need to worry that someone will browse any worrisome sites directly from your home IP address. Having said that, if you reside in a jurisdiction where offering anonymity-enhancing services is against the law, then, obviously, do not operate your own Tor relay. You may also want to check if operating a Tor relay is against the terms and conditions of your internet access provider. - -### What you will need - - * A Raspberry Pi (any model/generation) with some kind of enclosure - - * An SD card with [Raspbian Stretch Lite][4] - - * An ethernet cable - - * A micro-USB cable for power - - * A keyboard and an HDMI-capable monitor (to use during the setup) - - - - -This guide will assume that you are setting this up on your home connection behind a generic cable or ADSL modem router that performs NAT translation (and it almost certainly does). Most of them have a USB port you can use to power up your Raspberry Pi, and if you’re only using the wifi functionality of the router, then it should have a free ethernet port for you to plug into. However, before we get to the point where we can set-and-forget your Raspberry Pi, we’ll need to set it up as a Tor relay node, for which you’ll need a keyboard and a monitor. - -### The bootstrap script - -I’ve adapted a popular Tor relay node bootstrap script for use with Raspbian Stretch -- you can find it in my GitHub repository here: . Once you have booted up your Raspberry Pi and logged in with the default “pi” user, do the following: -``` -sudo apt-get install -y git -git clone https://github.com/mricon/tor-relay-bootstrap-rpi -cd tor-relay-bootstrap-rpi -sudo ./bootstrap.sh - -``` - -Here is what the script will do: - - 1. Install the latest OS updates to make sure your Pi is fully patched - - 2. 
Configure your system for automated unattended updates, so you automatically receive security patches when they become available - - 3. Install Tor software - - 4. Tell your NAT router to forward the necessary ports to reach your relay (the ports we’ll use are 443 and 8080, since they are least likely to be filtered by your internet provider) - - - - -Once the script is done, you’ll need to configure the torrc file -- but first, decide how much bandwidth you’ll want to donate to Tor traffic. First, type “[Speed Test][5]” into Google and click the “Run Speed Test” button. You can disregard the “Download speed” result, as your Tor relay can only operate as fast as your maximum upload bandwidth. - -Therefore, take the “Mbps upload” number, divide by 8 and multiply by 1024 to find out the bandwidth speed in Kilobytes per second. E.g. if you got 21.5 Mbps for your upload speed, then that number is: -``` -21.5 Mbps / 8 * 1024 = 2752 KBytes per second - -``` - -You’ll want to limit your relay bandwidth to about half that amount, and allow bursting to about three-quarters of it. Once decided, open /etc/tor/torrc using your favourite editor and tweak the bandwidth settings. -``` -RelayBandwidthRate 1300 KBytes -RelayBandwidthBurst 2400 KBytes - -``` - -Of course, if you’re feeling more generous, then feel free to put in higher numbers, though you don’t want to max out your outgoing bandwidth -- it will noticeably impact your day-to-day usage if these numbers are set too high. - -While you have that file open, you should set two more things. First, the Nickname -- just for your own recordkeeping, and second the ContactInfo line, which should list a single email address. Since your relay will be running unattended, you should use an email address that you regularly check -- you will receive an alert from the “Tor Weather” service if your relay goes offline for longer than 48 hours. 
-``` -Nickname myrpirelay -ContactInfo you@example.com - -``` - -Save the file and reboot the system to start the Tor relay. - -### Testing to make sure Tor traffic is flowing - -If you would like to make sure that the relay is functioning, you can run the “arm” tool: -``` -sudo -u debian-tor arm - -``` - -It will take a while to start, especially on older-generation boards, but eventually it will show you a bar chart of incoming and outgoing traffic (or error messages that will help you troubleshoot your setup). - -Once you are convinced that everything is functioning, you can unplug the keyboard and the monitor and relocate the Raspberry Pi into the basement where it will quietly sit and shuffle encrypted bits around. Congratulations, you’ve helped improve privacy and combat malicious tracking online! - -Learn more about Linux through the free ["Introduction to Linux" ][6] course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/intro-to-linux/2018/6/turn-your-raspberry-pi-tor-relay-node - -作者:[Konstantin Ryabitsev][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/mricon -[1]:https://www.torproject.org/ -[2]:https://en.wikipedia.org/wiki/Server_Name_Indication#Security_implications -[3]:https://www.linux.com/blog/2017/10/tips-secure-your-network-wake-krack -[4]:https://www.raspberrypi.org/downloads/raspbian/ -[5]:https://www.google.com/search?q=speed+test -[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180614 Bash tips for everyday at the command line.md b/sources/tech/20180614 Bash tips for everyday at the command line.md new file mode 100644 index 0000000000..219c6e5cf0 
--- /dev/null
+++ b/sources/tech/20180614 Bash tips for everyday at the command line.md
@@ -0,0 +1,593 @@
+Bash tips for everyday at the command line
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_keyboard_code.jpg?itok=YEtvcZOj)
+
+As the default shell for many of the Linux and Unix variants, Bash includes such a wide variety of underused features that it was hard to decide what to discuss. Ultimately, I decided to focus on Bash tips that make day-to-day activities easier.
+
+As a consultant, I see a wide variety of environments and work styles. I drew on this experience to narrow the tips to four broad categories: terminal and line tricks, navigation and files, history, and helpful commands. These categories are completely arbitrary and serve more to organize my own thoughts than as any kind of definitive classification. Many of the tips included here might subjectively fit in more than one category.
+
+Without further ado, here are some of the most helpful Bash tricks I have encountered.
+
+### Working with Bash history
+
+One of the best ways to increase your productivity is to learn to use the Bash history more effectively. With that in mind, perhaps one of the most important tweaks you can make in a multi-user environment is to enable the `histappend` option in your shell. To do that, simply run the following command:
+```
+shopt -s histappend
+
+```
+
+This allows multiple terminal sessions to write to the history at the same time. In most environments this option is not enabled. That means that histories are often lost if you have more than a single Bash session open (either locally or over SSH).
+
+Another common task is to repeat the last command with `sudo`. For example, suppose you want to create the directory `/etc/ansible/facts.d` with `mkdir`. Unless you are root, this command will fail.
From what I have observed, most users hit the `up` arrow, scroll to the beginning of the line, and add the `sudo` command. There is an easier way. Simply run the command like this:
+```
+sudo !!
+
+```
+
+Bash will run `sudo` and then the entirety of the previous command. Here is exactly what it looks like when run in sequence:
+```
+[user@centos ~]$ mkdir -p /etc/ansible/facts.d
+
+mkdir: cannot create directory ‘/etc/ansible’: Permission denied
+
+[user@centos ~]$ sudo !!
+
+sudo mkdir -p /etc/ansible/facts.d
+
+```
+
+When the **`!!`** is run, the full command is echoed out to the terminal so you know what was just executed.
+
+Similar but used much less frequently is the **`!*`** shortcut. This tells Bash that you want all of the *arguments* from the previous command to be repeated in the current command. This could be useful for a command that has a lot of arguments you want to reuse. A simple example is creating a bunch of files and then changing the permissions on them:
+```
+[user@centos tmp]$ touch file1 file2 file3 file4
+
+[user@centos tmp]$ chmod 777 !*
+
+chmod 777 file1 file2 file3 file4
+
+```
+
+It is handy only in a specific set of circumstances, but it may save you some keystrokes.
+
+Speaking of saving keystrokes, let's talk about finding commands in your history. Most users will do something like this:
+```
+history | grep <search term>
+
+```
+
+However, there is an easier way to search your history. If you press
+```
+ctrl + r
+
+```
+
+Bash will do a reverse search of your history. As you start typing, results will begin to appear. For example:
+```
+(reverse-i-search)`hist': shopt -s histappend
+
+```
+
+In the above example, I typed `hist` and it matched the `shopt` command we covered earlier. If you continue pressing `ctrl + r`, Bash will continue to search backward through all of the other matches.
+
+Our last trick isn't a trick as much as a helpful command you can use to count and display the most-used commands in your history.
+```
+[user@centos tmp]$ history | awk 'BEGIN {FS="[ \t]+|\\|"} {print $3}' | sort | uniq -c | sort -nr | head
+
+81 ssh
+
+50 sudo
+
+46 ls
+
+45 ping
+
+39 cd
+
+29 nvidia-xrun
+
+20 nmap
+
+19 export
+
+```
+
+In this example, you can see that `ssh` is by far the most-used command in my history at the moment.
+
+### Navigation and file naming
+
+You probably already know that if you type a command, filename, or folder name, you can hit the `tab` key once to complete the wording for you. This works if there is a single exact match. However, you might not know that if you hit `tab` twice, it will show you all of the matches based on what you have typed. For example:
+```
+[user@centos tmp]$ cd /lib
+
+lib/ lib64/
+
+```
+
+This can be very useful for file system navigation. Another helpful trick is to enable `cdspell` in your shell. You can do this by issuing the `shopt -s cdspell` command. This will help correct your typos:
+```
+[user@centos etc]$ cd /tpm
+
+/tmp
+
+[user@centos tmp]$ cd /ect
+
+/etc
+
+```
+
+It's not perfect, but every little bit helps!
+
+Once you have successfully changed directories, what if you need to return to your previous directory? This is not a big deal if you are not very deep into the directory tree. But if you are in a fairly deep path, such as `/var/lib/flatpak/exports/share/applications/`, you could type:
+```
+cd /va<tab>/lib/fla<tab>/ex<tab>/sh<tab>/app<tab>
+
+```
+
+Fortunately, Bash remembers your previous directory, and you can return there by simply typing `cd -`.
Here is what it would look like: +``` +[user@centos applications]$ pwd + +/var/lib/flatpak/exports/share/applications + + + +[user@centos applications]$ cd /tmp + +[user@centos tmp]$ pwd + +/tmp + + + +[user@centos tmp]$ cd - + +/var/lib/flatpak/exports/share/applications + +``` + +That's all well and good, but what if you have a bunch of directories you want to navigate within easily? Bash has you covered there as well. There is a variable you can set that will help you navigate more effectively. Here is an example: +``` +[user@centos applications]$ export CDPATH='~:/var/log:/etc' + +[user@centos applications]$ cd hp + +/etc/hp + + + +[user@centos hp]$ cd Downloads + +/home/user/Downloads + + + +[user@centos Downloads]$ cd ansible + +/etc/ansible + + + +[user@centos Downloads]$ cd journal + +/var/log/journal + +``` + +In the above example, I set my home directory (indicated with the tilde: `~`), `/var/log` and `/etc`. Anything at the top level of these directories will be auto-filled in when you reference them. Directories that are not at the base of the directories listed in `CDPATH` will not be found. If, for example, the directory you are after was `/etc/ansible/facts.d/` this would not complete by typing `cd facts.d`. This is because while the directory `ansible` is found under `/etc`, `facts.d` is not. Therefore, `CDPATH` is useful for getting to the top of a tree that you access frequently, but it may get cumbersome to manage when you're browsing a large folder structure. + +Finally, let's talk about two common use cases that everyone does at some point: Changing a file extension and renaming files. At first glance, this may sound like the same thing, but Bash offers a few different tricks to accomplish these tasks. + +While it may be a "down-and-dirty" operation, most users at some point need to create a quick copy of a file they are working on. Most will copy the filename exactly and simply append a file extension like `.old` or `.bak`. 
There is a quick shortcut for this in Bash. Suppose you have a filename like `spideroak_inotify_db.07pkh3` that you want to keep a copy of. You could type:
+```
+cp spideroak_inotify_db.07pkh3 spideroak_inotify_db.07pkh3.bak
+
+```
+
+You can make quick work of this by using copy/paste operations, using the tab complete, possibly using one of the shortcuts to repeat an argument, or simply typing the whole thing out. However, the command below should prove even quicker once you get used to typing it:
+```
+cp spideroak_inotify_db.07pkh3{,.bak}
+
+```
+
+This (as you can guess) copies the file by appending the `.bak` extension to the filename. That's great, you might say, but I want to rename a large number of files at once. Sure, you could write a for loop to deal with these (and in fact, I often do this for something complicated) but why would you when there is a handy utility called `rename`? There is some difference in the usage of this utility between Debian/Ubuntu and CentOS/Arch. The Debian-based rename uses a sed-like syntax:
+```
+user@ubuntu-1604:/tmp$ for x in `seq 1 5`; do touch old_text_file_${x}.txt; done
+
+user@ubuntu-1604:/tmp$ ls old_text_file_*
+
+old_text_file_1.txt old_text_file_3.txt old_text_file_5.txt
+
+old_text_file_2.txt old_text_file_4.txt
+
+user@ubuntu-1604:/tmp$ rename 's/old_text_file/shiney_new_doc/' *.txt
+
+user@ubuntu-1604:/tmp$ ls shiney_new_doc_*
+
+shiney_new_doc_1.txt shiney_new_doc_3.txt shiney_new_doc_5.txt
+
+shiney_new_doc_2.txt shiney_new_doc_4.txt
+
+```
+
+On a CentOS or Arch box it would look similar:
+```
+[user@centos /tmp]$ for x in `seq 1 5`; do touch old_text_file_${x}.txt; done
+
+[user@centos /tmp]$ ls old_text_file_*
+
+old_text_file_1.txt old_text_file_3.txt old_text_file_5.txt
+
+old_text_file_2.txt old_text_file_4.txt
+
+[user@centos tmp]$ rename old_text_file centos_new_doc *.txt
+
+[user@centos tmp]$ ls centos_new_doc_*
+
+centos_new_doc_1.txt centos_new_doc_3.txt
centos_new_doc_5.txt
+
+centos_new_doc_2.txt centos_new_doc_4.txt
+
+```
+
+### Bash key bindings
+
+Bash has a lot of built-in keyboard shortcuts. You can find a list of them by typing `bind -p`. I thought it would be useful to highlight several, although some may be well-known.
+```
+    ctrl + _ (undo)
+
+    ctrl + t (swap two characters)
+
+    ALT + t (swap two words)
+
+    ALT + . (prints last argument from previous command)
+
+    ctrl + x + * (expand glob/star)
+
+    ctrl + arrow (move forward a word)
+
+    ALT + f (move forward a word)
+
+    ALT + b (move backward a word)
+
+    ctrl + x + ctrl + e (opens the command string in an editor so that you can edit it before execution)
+
+    ctrl + e (move cursor to end)
+
+    ctrl + a (move cursor to start)
+
+    ctrl + xx (move to the opposite end of the line)
+
+    ctrl + u (cuts everything before the cursor)
+
+    ctrl + k (cuts everything after the cursor)
+
+    ctrl + y (pastes from the buffer)
+
+    ctrl + l (clears screen)
+
+```
+
+I won't discuss the more obvious ones. However, some of the most useful shortcuts I have found are the ones that let you delete words (or sections of text) and undo them. Suppose you were going to stop a bunch of services using `systemd`, but you only wanted to start a few of them after some operation has completed. You might do something like this:
+```
+systemctl stop httpd mariadb nfs smbd
+
+```
+
+You can recall that line and use the word-deletion shortcuts to trim away the services you don't need. But what if you removed one too many? No problem—simply use `ctrl + _` to undo the last edit.
+
+The other cut commands allow you to quickly remove everything from the cursor to the end or beginning of the line (using `ctrl + k` and `ctrl + u`, respectively). This has the added benefit of placing the cut text into the terminal buffer so you can paste it later on (using `ctrl + y`). These commands are hard to demonstrate here, so I strongly encourage you to try them out on your own.
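While the cut-and-paste keys themselves are interactive, you can at least inspect how they are wired up by filtering the `bind -p` listing mentioned earlier. A quick check (output may vary with your inputrc settings) might look like this:
```
# Show which readline functions the cut/paste keys are bound to.
# In a non-interactive shell, `bind` emits a harmless
# "line editing not enabled" warning, silenced here.
bash -c 'bind -p' 2>/dev/null | grep -E 'unix-line-discard|kill-line|yank'
```

On a stock setup you should see lines mapping `\C-u` to `unix-line-discard`, `\C-k` to `kill-line`, and `\C-y` to `yank`.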
+
+Last but not least, I'd like to mention a seldom-used key combination that can be extremely handy in confined environments such as containers. If a command you are typing ever looks garbled because of previous output, there is a solution: pressing `ctrl + x + ctrl + e` will open the command in whichever editor is set in the environment variable EDITOR. This will allow you to edit a long or garbled command in a text editor that (potentially) can wrap text. Saving your work and exiting, just as you would when working on a normal file, will execute the command upon leaving the editor.
+
+### Miscellaneous tips
+
+You may find that having colors displayed in your Bash shell can enhance your experience. If you are using a session that does not have colorization enabled, below is a series of commands you can place in your `.bash_profile` to add color to your session. These are fairly straightforward and should not require an in-depth explanation:
For example, to run two commands back-to-back, regardless of each one's exit status, use the `;` to separate the commands, as seen below: +``` +[user@centos /tmp]$ du -hsc * ; df -h + +``` + +This simply calculates the amount of space each file in the current directory takes up (and sums it), then it queries the system for the disk usage per block device. These commands will run regardless of any errors generated by the `du` command. + +What if you want an action to be taken upon successful completion of the first command? You can use the `&&` shorthand to indicate that you want to run the second command only if the first command returns a successful exit status. For example, suppose you want to reboot a machine only if the updates are successful: +``` +[root@arch ~]$ pacman -Syu --noconfirm && reboot + +``` + +Sometimes when running a command, you may want to capture its output. Most people know about the `tee` command, which will copy standard output to both the terminal and a file. However, if you want to capture more complex output from, say, `strace`, you will need to start working with [I/O redirection][1]. The details of I/O redirection are beyond the scope of this short article, but for our purposes we are concerned with `STDOUT` and `STDERR`. The best way to capture exactly what you are seeing is to combine the two in one file. To do this, use the `2>&1` redirection. +``` +[root@arch ~]$ strace -p 1140 > strace_output.txt 2>&1 + +``` + +This will put all of the relevant output into a file called `strace_output.txt` for viewing later. + +Sometimes during a long-running command, you may need to pause the execution of a task. You can use the 'stop' shortcut `ctrl + z` to stop (but not kill) a job. The job gets added to the job queue, but you will no longer see the job until you resume it. This job may be resumed at a later time by using the foreground command `fg`. + +In addition, you may also simply pause a job with `ctrl + s`. 
The job stays in the foreground, but the terminal stops displaying its output, and the shell is not returned to you. The job may be resumed by pressing `ctrl + q`.
+
+If you are working in a graphical environment with many terminals open, you may find it handy to have keyboard shortcuts for copying and pasting output. To do so, use the following shortcuts:
+```
+# Copies highlighted text
+
+ctrl + shift + c
+
+# Pastes text in buffer
+
+ctrl + shift + v
+
+```
+
+Suppose in the output of an executing command you see another command being executed, and you want to get more information. There are a few ways to do this. If this command is in your path somewhere, you can run the `which` command to find out where that command is located on your disk:
+```
+[root@arch ~]$ which ls
+
+/usr/bin/ls
+
+```
+
+With this information, you can inspect the binary with the `file` command:
+```
+[root@arch ~]$ file /usr/bin/ls
+
+/usr/bin/ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=d4e02b88e596e4f82c6cc62a5bc4ce5827209a49, stripped
+
+```
+
+You can see all sorts of information, but the most important for most users is the `ELF 64-bit LSB` nonsense. This essentially means that it is a precompiled binary as opposed to a script or other type of executable. A related tool you can use to inspect commands is the `command` tool itself. Simply running `command -V <command>` will give you different types of information:
+```
+[root@arch ~]$ command -V ls
+
+ls is aliased to `ls --color=auto`
+
+[root@arch ~]$ command -V bash
+
+bash is /usr/bin/bash
+
+[root@arch ~]$ command -V shopt
+
+shopt is a shell builtin
+
+```
+
+Last but definitely not least, one of my favorite tricks, especially when working with containers or in environments where I have little knowledge or control, is the `echo` command.
This command can be used for everything from checking that your `for` loop will produce the expected sequence to checking whether remote ports are open. The syntax for checking a port is very simple: `echo > /dev/<protocol>/<host>/<port>`. For example:
+```
+user@ubuntu-1604:~$ echo > /dev/tcp/192.168.99.99/222
+
+-bash: connect: Connection refused
+
+-bash: /dev/tcp/192.168.99.99/222: Connection refused
+
+user@ubuntu-1604:~$ echo > /dev/tcp/192.168.99.99/22
+
+```
+
+If the port is closed to the type of connection you are trying to make, you will get a `Connection refused` message. If the packet is successfully sent, there will be no output.
+
+I hope these tips make Bash more efficient and enjoyable to use. There are many more tricks hidden in Bash than I've listed here. What are some of your favorites?
+
+#### Appendix 1. List of tips and tricks covered
+
+```
+# History related
+
+ctrl + r (reverse search)
+
+!! (rerun last command)
+
+!* (reuse arguments from previous command)
+
+!$ (use last argument of last command)
+
+shopt -s histappend (allow multiple terminals to write to the history file)
+
+history | awk 'BEGIN {FS="[ \t]+|\\|"} {print $3}' | sort | uniq -c | sort -nr | head (list the most used history commands)
+
+# File and navigation
+
+cp /home/foo/realllylongname.cpp{,-old}
+
+cd -
+
+rename 's/text_to_find/been_renamed/' *.txt
+
+export CDPATH='/var/log:~' (variable is used with the cd built-in.)
+
+# Colourize bash
+
+# enable colors
+
+eval "`dircolors -b`"
+
+# force ls to always use color and type indicators
+
+alias ls='ls -hF --color=auto'
+
+# make the dir command work kinda like in windows (long format)
+
+alias dir='ls --color=auto --format=long'
+
+# make grep highlight results using color
+
+export GREP_OPTIONS='--color=auto'
+
+export LESS_TERMCAP_mb=$'\E[01;31m'
+
+export LESS_TERMCAP_md=$'\E[01;33m'
+
+export LESS_TERMCAP_me=$'\E[0m'
+
+export LESS_TERMCAP_se=$'\E[0m' # end the info box
+
+export LESS_TERMCAP_so=$'\E[01;42;30m' # begin the info box
+
+export LESS_TERMCAP_ue=$'\E[0m'
+
+export LESS_TERMCAP_us=$'\E[01;36m'
+
+# Bash shortcuts
+
+    shopt -s cdspell (corrects typos)
+
+    ctrl + _ (undo)
+
+    ctrl + arrow (move forward a word)
+
+    ctrl + a (move cursor to start)
+
+    ctrl + e (move cursor to end)
+
+    ctrl + k (cuts everything after the cursor)
+
+    ctrl + l (clears screen)
+
+    ctrl + q (resume command that is in the foreground)
+
+    ctrl + s (pause a long running command in the foreground)
+
+    ctrl + t (swap two characters)
+
+    ctrl + u (cuts everything before the cursor)
+
+    ctrl + x + ctrl + e (opens the command string in an editor so that you can edit it before it runs)
+
+    ctrl + x + * (expand glob/star)
+
+    ctrl + xx (move to the opposite end of the line)
+
+    ctrl + y (pastes from the buffer)
+
+    ctrl + shift + c/v (copy/paste into terminal)
+
+# Running commands in sequence
+
+&& (run second command if the first is successful)
+
+; (run second command regardless of success of first one)
+
+# Redirecting I/O
+
+2>&1 (redirect stdout and stderr to a file)
+
+# check for open ports
+
+echo > /dev/tcp/<host>/<port>
+
+`<command>` (use back ticks to shell out)
+
+# Examine executable
+
+which <command>
+
+file <command>
+
+command -V <command> (tells you whether <command> is a built-in, binary or alias)
+
+```
+
+--------------------------------------------------------------------------------
+
+via:
https://opensource.com/article/18/5/bash-tricks
+
+作者:[Steve Ovens][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/stratusss
+[1]:https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-i-o-redirection
diff --git a/sources/tech/20180615 4 tools for building embedded Linux systems.md b/sources/tech/20180615 4 tools for building embedded Linux systems.md
new file mode 100644
index 0000000000..a6ff059ecb
--- /dev/null
+++ b/sources/tech/20180615 4 tools for building embedded Linux systems.md
@@ -0,0 +1,183 @@
+4 tools for building embedded Linux systems
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6)
+
+Linux is being deployed into a much wider array of devices than Linus Torvalds anticipated when he was working on it in his dorm room. The variety of supported chip architectures is astounding and has led to Linux in devices large and small, from [huge IBM mainframes][1] to [tiny devices][2] no bigger than their connection ports, and everything in between. It is used in large enterprise data centers, internet infrastructure devices, and personal development systems. It also powers consumer electronics, mobile phones, and many Internet of Things devices.
+
+When building Linux software for desktop and enterprise-class devices, developers typically use a desktop distribution such as [Ubuntu][3] on their build machines to have an environment as close as possible to the one where the software will be deployed. Tools such as [VirtualBox][4] and [Docker][5] allow even better alignment between development, testing, and production environments.
+
+### What is an embedded system?
+ +Wikipedia defines an [embedded system][6] as: "A computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints." + +I find it simple enough to say that an embedded system is a computer that most people don't think of as a computer. Its primary role is to serve as an appliance of some sort, and it is not considered a general-purpose computing platform. + +The development environment in embedded systems programming is usually very different from the testing and production environments. They may use different chip architectures, software stacks, and even operating systems. Development workflows are very different for embedded developers vs. desktop and web developers. Typically, the build output will consist of an entire software image for the target device, including the kernel, device drivers, libraries, and application software (and sometimes the bootloader). + +In this article, I will present a survey of four commonly available options for building embedded Linux systems. I will give a flavor for what it's like to work with each and provide enough information to help readers decide which tool to use for their design. I won't teach you how to use any of them; there are plenty of in-depth online learning resources once you have narrowed your choices. No option is right for all use cases, and I hope to present enough details to direct your decision. + +### Yocto + +The [Yocto][7] project is [defined][8] as "an open source collaboration project that provides templates, tools, and methods to help you create custom Linux-based systems for embedded products regardless of the hardware architecture." It is a collection of recipes, configuration values, and dependencies used to create a custom Linux runtime image tailored to your specific needs. + +Full disclosure: most of my work in embedded Linux has focused on the Yocto project, and my knowledge and bias to this system will likely be evident. 
+
+Yocto uses [OpenEmbedded][9] as its build system. Technically the two are separate projects; in practice, however, users do not need to understand the distinction, and the project names are frequently used interchangeably.
+
+The output of a Yocto project build consists broadly of three components:
+
+ * **Target run-time binaries:** These include the bootloader, kernel, kernel modules, root filesystem image, and any other auxiliary files needed to deploy Linux to the target platform.
+ * **Package feed:** This is the collection of software packages available to be installed on your target. You can select the package format (e.g., deb, rpm, ipk) based on your needs. Some of them may be preinstalled in the target runtime binaries; however, it is also possible to build packages for installation into a deployed system.
+ * **Target SDK:** This is the collection of libraries and header files representing the software installed on your target. Application developers use it when building their code to ensure they are linked with the appropriate libraries.
+
+
+
+#### Advantages
+
+The Yocto project is widely used in the industry and has backing from many influential companies. Additionally, it has a large and vibrant developer [community][10] and [ecosystem][11] contributing to it. The combination of open source enthusiasts and corporate sponsors helps drive the Yocto project.
+
+There are many options for getting support with Yocto. There are books and other training materials if you wish to do it yourself. Many engineers with experience in Yocto are available if you want to hire expertise. And many commercial organizations provide turnkey Yocto-based products or services-based implementation and customization for your design.
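For a sense of what this customization looks like in practice, here is a minimal, hedged sketch of the kind of settings a designer might append to a build's `conf/local.conf`. The variable names are standard Yocto/BitBake variables, but the machine, package format, and package choices below are purely illustrative:

```shell
# Hedged sketch: common per-build tweaks appended to conf/local.conf
mkdir -p conf
cat >> conf/local.conf <<'EOF'
# Build for the 64-bit QEMU machine (swap in your board's MACHINE)
MACHINE = "qemux86-64"
# Package feed format: ipk, rpm, or deb
PACKAGE_CLASSES = "package_ipk"
# Add an extra package (a small SSH server) to every image
IMAGE_INSTALL:append = " dropbear"
EOF
```

With settings like these in place, a subsequent `bitbake core-image-minimal` run would produce an ipk-based image for the QEMU x86-64 machine with the dropbear SSH server included.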
+
+The Yocto project is easily expanded through [layers][12], which can be published independently to add additional functionality, to target platforms not available in the project releases, or to store customizations unique to your system. Layers can be added to your configuration to add unique features that are not specifically included in the stock releases; for example, the "[meta-browser][13]" layer contains recipes for web browsers, which can be easily built for your system. Because they are independently maintained, layers can be on a different release schedule (tuned to the layers' development velocity) than the standard Yocto releases.
+
+Yocto has arguably the widest device support of any of the options discussed in this article. Due to support from many semiconductor and board manufacturers, it's likely Yocto will support any target platform you choose. The direct Yocto [releases][14] support only a few boards (to allow for proper testing and release cycles); however, a standard working model is to use external board support layers.
+
+Finally, Yocto is extremely flexible and customizable. Customizations for your specific application can be stored in a layer for encapsulation and isolation. Customizations unique to a feature layer are generally stored as part of the layer itself, which allows the same settings to be applied simultaneously to multiple system configurations. Yocto also provides a well-defined layer priority and override capability. This allows you to define the order in which layers are applied and searched for metadata. It also enables you to override settings in layers with higher priority; for instance, many customizations to existing recipes will be added in your private layers, with the order precisely controlled by the priorities.
+
+#### Disadvantages
+
+The biggest disadvantage with the Yocto project is the learning curve. It takes significant time and effort to learn the system and truly understand it. 
Depending on your needs, this may be too large of an investment in technologies and competence that are not central to your application. In such cases, working with one of the commercial vendors may be a good option.
+
+Development build times and resources are fairly high for Yocto project builds. The number of packages that need to be built, including the toolchain, kernel, and all target runtime components, is significant. Development workstations for Yocto developers tend to be large systems. Using a compact notebook is not recommended. This can be mitigated by using cloud-based build servers available from many providers. Additionally, Yocto has a built-in caching mechanism that allows it to reuse previously built components when it determines that the parameters for building a particular package have not changed.
+
+#### Recommendation
+
+Using the Yocto project for your next embedded Linux design is a strong choice. Of the options presented here, it is the most broadly applicable regardless of your target use case. The broad industry support, active community, and wide platform support make this a good choice for most designers.
+
+### Buildroot
+
+The [Buildroot][15] project is defined as "a simple, efficient, and easy-to-use tool to generate embedded Linux systems through cross-compilation." It shares many of the same objectives as the Yocto project; however, it is focused on simplicity and minimalism. In general, Buildroot will disable all optional compile-time settings for all packages (with a few notable exceptions), resulting in the smallest possible system. It will be up to the system designer to enable the settings that are appropriate for a given device.
+
+Buildroot builds all components from source but does not support on-target package management. As such, it is sometimes called a firmware generator, since the images are largely fixed at build time. 
Applications can update the target filesystem, but there is no mechanism to install new packages into a running system.
+
+The Buildroot output consists broadly of three components:
+
+ * The root filesystem image and any other auxiliary files needed to deploy Linux to the target platform
+ * The kernel, bootloader, and kernel modules appropriate for the target hardware
+ * The toolchain used to build all the target binaries
+
+
+
+#### Advantages
+
+Buildroot's focus on simplicity means that, in general, it is easier to learn than Yocto. The core build system is written in Make and is short enough to allow a developer to understand the entire system while being expandable enough to meet the needs of embedded Linux developers. The Buildroot core generally only handles common use cases, but it is expandable via scripting.
+
+The Buildroot system uses normal Makefiles and the Kconfig language for its configuration. Kconfig was developed by the Linux kernel community and is widely used in open source projects, making it familiar to many developers.
+
+Due to the design goal of disabling all optional build-time settings, Buildroot will generally produce the smallest possible images using the out-of-the-box configuration. The build times and build host resources will likewise be smaller, in general, than those of the Yocto project.
+
+#### Disadvantages
+
+The focus on simplicity and minimal enabled build options means that you may need to do significant customization to configure a Buildroot build for your application. Additionally, all configuration options are stored in a single file, which means that if you have multiple hardware platforms, you will need to make each of your customization changes for each platform.
+
+Any change to the system configuration file requires a full rebuild of all packages. This is somewhat mitigated by the minimal image sizes and build times compared with Yocto, but it can result in long builds while you are tweaking your configuration. 
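Because that configuration file is ordinary Kconfig, a board's settings can be captured in a small defconfig fragment and kept under version control. A hedged sketch follows; the `BR2_*` symbols are real Buildroot options, though this particular selection is illustrative:

```shell
# Hedged sketch: saving a minimal Buildroot defconfig fragment
cat > board_defconfig <<'EOF'
BR2_aarch64=y
BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
BR2_PACKAGE_BUSYBOX=y
BR2_TARGET_ROOTFS_EXT2=y
EOF
# Inside a Buildroot tree, this fragment would be expanded into a full
# .config with: make defconfig BR2_DEFCONFIG=$PWD/board_defconfig
```

Keeping one such fragment per board is one way to soften the single-configuration-file limitation described above.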
+
+Intermediate package state caching is not enabled by default and is not as thorough as the Yocto implementation. This means that, while the first build may be shorter than an equivalent Yocto build, subsequent builds may require rebuilding of many components.
+
+#### Recommendation
+
+Using Buildroot for your next embedded Linux design is a good choice for most applications. If your design requires multiple hardware types or other differences, you may want to reconsider due to the complexity of synchronizing multiple configurations; however, for a system consisting of a single setup, Buildroot will likely work well for you.
+
+### OpenWRT/LEDE
+
+The [OpenWRT][16] project was started to develop custom firmware for consumer routers. Many of the low-cost routers available at your local retailer are capable of running a Linux system, though perhaps not out of the box. The manufacturers of these routers may not provide frequent updates to address new threats, and even if they do, the mechanisms to install updated images are difficult and error-prone. The OpenWRT project produces updated firmware images for many devices that have been abandoned by their manufacturers and gives these devices a new lease on life.
+
+The OpenWRT project's primary deliverables are binary images for a large number of commercial devices. There are network-accessible package repositories that allow device end users to add new software to their systems. The OpenWRT build system is a general-purpose build system, which allows developers to create custom versions to meet their own requirements and add new packages, but its primary focus is target binaries.
+
+#### Advantages
+
+If you are looking for replacement firmware for a commercial device, OpenWRT should be on your list of options. It is well-maintained and may protect you from issues that the manufacturer's firmware cannot. You can add extra functionality as well, making your devices more useful. 
+ +If your embedded design is networking-focused, OpenWRT is a good choice. Networking applications are the primary use case for OpenWRT, and you will likely find many of those software packages available in it. + +#### Disadvantages + +OpenWRT imposes significant policy decisions on your design (vs. Yocto and Buildroot). If these decisions don't meet your design goals, you may have to do non-trivial modifications. + +Allowing package-based updates in a fleet of deployed devices is difficult to manage. This, by definition, results in a different software load than what your QA team tested. Additionally, it is difficult to guarantee atomic installs with most package managers, and an ill-timed power cycle can leave your device in an unpredictable state. + +#### Recommendation + +OpenWRT is a good choice for hobbyist projects or for reusing commercial hardware. It is also a good choice for networking applications. If you need significant customization from the default setup, you may prefer Buildroot or Yocto. + +### Desktop distros + +A common approach to designing embedded Linux systems is to start with a desktop distribution, such as [Debian][17] or [Red Hat][18], and remove unneeded components until the installed image fits into the footprint of your target device. This is the approach taken for the popular [Raspbian][19] distribution for the [Raspberry Pi][20] platform. + +#### Advantages + +The primary advantage of this approach is familiarity. Often, embedded Linux developers are also desktop Linux users and are well-versed in their distro of choice. Using a similar environment on the target may allow developers to get started more quickly. Depending on the chosen distribution, many additional tools can be installed using standard packaging tools such as apt and yum. + +It may be possible to attach a display and keyboard to your target device and do all your development directly there. 
For developers new to the embedded space, this is likely to be a more familiar environment and removes the need to configure and use a tricky cross-development setup. + +The number of packages available for most desktop distributions is generally greater than that available for the embedded-specific builders discussed previously. Due to the larger user base and wider variety of use cases, you may be able to find all the runtime packages you need for your application already built and ready for use. + +#### Disadvantages + +Using the target as your primary development environment is likely to be slow. Running compiler tools is a resource-intensive operation and, depending on how much code you are building, may hinder your performance. + +With some exceptions, desktop distributions are not designed to accommodate low-resource systems, and it may be difficult to adequately trim your target images. Similarly, the expected workflow in a desktop environment is not ideal for most embedded designs. Getting a reproducible environment in this fashion is difficult. Manually adding and deleting packages is error-prone. This can be scripted using distribution-specific tools, such as [debootstrap][21] for Debian-based systems. To further improve [reproducibility][21], you can use a configuration management tool, such as [CFEngine][22] (which, full disclosure, is made by my employer, [Mender.io][23]). However, you are still at the mercy of the distribution provider, who will update packages to meet their needs, not yours. + +#### Recommendation + +Be wary of this approach for a product you plan to take to market. This is a fine model for hobbyist applications; however, for products that need support, this approach is likely going to be trouble. While you may be able to get a faster start, it may cost you time and effort in the long run. 
+ +### Other considerations + +This discussion has focused on build systems' functionality, but there are usually non-functional requirements that may affect your decision. If you have already selected your system-on-chip (SoC) or board, your choice will likely be dictated by the vendor. If your vendor provides a board support package (BSP) for a given system, using it will normally save quite a bit of time, but please research the BSP's quality to avoid issues later in your development cycle. + +If your budget allows, you may want to consider using a commercial vendor for your target OS. There are companies that will provide a validated and supported configuration of many of the options discussed here, and, unless you have expertise in embedded Linux build systems, this is a good choice and will allow you to focus on your core competency. + +As an alternative, you may consider commercial training for your development staff. This is likely to be cheaper than a commercial OS provider and will allow you to be more self-sufficient. This is a quick way to get over the learning curve for the basics of the build system you choose. + +Finally, you may already have some developers with experience with one or more of the systems. If you have engineers who have a preference, it is certainly worth taking that into consideration as you make your decision. + +### Summary + +There are many choices available for building embedded Linux systems, each with advantages and disadvantages. It is crucial to prioritize this part of your design, as it is extremely costly to switch systems later in the process. In addition to these options, new systems are being developed all the time. Hopefully, this discussion will provide some context for reviewing new systems (and the ones mentioned here) and help you make a solid decision for your next project. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/6/embedded-linux-build-tools + +作者:[Drew Moseley][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/drewmoseley +[1]:https://en.wikipedia.org/wiki/Linux_on_z_Systems +[2]:http://www.picotux.com/ +[3]:https://www.ubuntu.com/ +[4]:https://www.virtualbox.org/ +[5]:https://www.docker.com/ +[6]:https://en.wikipedia.org/wiki/Embedded_system +[7]:https://yoctoproject.org/ +[8]:https://www.yoctoproject.org/about/ +[9]:https://www.openembedded.org/ +[10]:https://www.yoctoproject.org/community/ +[11]:https://www.yoctoproject.org/ecosystem/participants/ +[12]:https://layers.openembedded.org/layerindex/branch/master/layers/ +[13]:https://layers.openembedded.org/layerindex/branch/master/layer/meta-browser/ +[14]:https://yoctoproject.org/downloads +[15]:https://buildroot.org/ +[16]:https://openwrt.org/ +[17]:https://www.debian.org/ +[18]:https://www.redhat.com/ +[19]:https://www.raspbian.org/ +[20]:https://www.raspberrypi.org/ +[21]:https://wiki.debian.org/Debootstrap +[22]:https://cfengine.com/ +[23]:http://Mender.io diff --git a/sources/tech/20180615 5 Commands for Checking Memory Usage in Linux.md b/sources/tech/20180615 5 Commands for Checking Memory Usage in Linux.md new file mode 100644 index 0000000000..05c3da4f6c --- /dev/null +++ b/sources/tech/20180615 5 Commands for Checking Memory Usage in Linux.md @@ -0,0 +1,195 @@ +5 Commands for Checking Memory Usage in Linux +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/top-main.jpg?itok=WYAw6yJ1) +The Linux operating system includes a plethora of tools, all of which are ready to help you administer your systems. 
From simple file and directory tools to very complex security commands, there’s not much you can’t do on Linux. And, although regular desktop users may not need to become familiar with these tools at the command line, they’re mandatory for Linux admins. Why? First, you will have to work with a GUI-less Linux server at some point. Second, command-line tools often offer far more power and flexibility than their GUI alternatives.
+
+Determining memory usage is a skill you might need should a particular app go rogue and commandeer system memory. When that happens, it’s handy to know you have a variety of tools available to help you troubleshoot. Or, maybe you need to gather information about a Linux swap partition or detailed information about your installed RAM? There are commands for that as well. Let’s dig into the various Linux command-line tools that can help you check system memory usage. These tools aren’t terribly hard to use, and in this article, I’ll show you five different ways to approach the problem.
+
+I’ll be demonstrating on the [Ubuntu Server 18.04 platform][1]. You should, however, find all of these commands available on your distribution of choice. Even better, you shouldn’t need to install a single thing (as most of these tools are included).
+
+With that said, let’s get to work.
+
+### top
+
+I want to start out with the most obvious tool. The top command provides a dynamic, real-time view of a running system. Included in that system summary is the ability to check memory usage on a per-process basis. That’s very important, as you could easily have multiple iterations of the same command consuming different amounts of memory. Although you won’t find this on a headless server, say you’ve opened Chrome and noticed your system slowing down. Issue the top command to see that Chrome has numerous processes running (one per tab - Figure 1).
+
+![top][3]
+
+Figure 1: Multiple instances of Chrome appearing in the top command. 
+ +[Used with permission][4] + +Chrome isn’t the only app to show multiple processes. You see the Firefox entry in Figure 1? That’s the primary process for Firefox, whereas the Web Content processes are the open tabs. At the top of the output, you’ll see the system statistics. On my machine (a [System76 Leopard Extreme][5]), I have a total of 16GB of RAM available, of which just over 10GB is in use. You can then comb through the list and see what percentage of memory each process is using. + +One of the things top is very good for is discovering Process ID (PID) numbers of services that might have gotten out of hand. With those PIDs, you can then set about to troubleshoot (or kill) the offending tasks. + +If you want to make top a bit more memory-friendly, issue the command top -o %MEM, which will cause top to sort all processes by memory used (Figure 2). + +![top][7] + +Figure 2: Sorting process by memory used in top. + +[Used with permission][4] + +The top command also gives you a real-time update on how much of your swap space is being used. + +### free + +Sometimes, however, top can be a bit much for your needs. You may only need to see the amount of free and used memory on your system. For that, there is the free command. The free command displays: + + * Total amount of free and used physical memory + + * Total amount of swap memory in the system + + * Buffers and caches used by the kernel + + + + +From your terminal window, issue the command free. The output of this command is not in real time. Instead, what you’ll get is an instant snapshot of the free and used memory in that moment (Figure 3). + +![free][9] + +Figure 3: The output of the free command is simple and clear. + +[Used with permission][4] + +You can, of course, make free a bit more user-friendly by adding the -m option, like so: free -m. This will report the memory usage in MB (Figure 4). + +![free][11] + +Figure 4: The output of the free command in a more human-readable form. 
+
+[Used with permission][4]
+
+Of course, if your system is even remotely modern, you’ll want to use the -g option (gigabytes), as in free -g.
+
+If you need memory totals, you can add the t option like so: free -mt. This will simply total the amount of memory in columns (Figure 5).
+
+![total][13]
+
+Figure 5: Having free total your memory columns for you.
+
+[Used with permission][4]
+
+### vmstat
+
+Another very handy tool to have at your disposal is vmstat. This particular command is a one-trick pony that reports virtual memory statistics. The vmstat command will report stats on:
+
+ * Processes
+
+ * Memory
+
+ * Paging
+
+ * Block IO
+
+ * Traps
+
+ * Disks
+
+ * CPU
+
+
+
+
+The best way to issue vmstat is by using the -s switch, like vmstat -s. This will report your stats in a single column (which is so much easier to read than the default report). The vmstat command will give you more information than you need (Figure 6), but more is always better (in such cases).
+
+![vmstat][15]
+
+Figure 6: Using the vmstat command to check memory usage.
+
+[Used with permission][4]
+
+### dmidecode
+
+What if you want to find out detailed information about your installed system RAM? For that, you could use the dmidecode command. This particular tool is the DMI table decoder, which dumps a system’s DMI table contents into a human-readable format. If you’re unsure as to what the DMI table is, it’s a means to describe what a system is made of (as well as possible evolutions for a system).
+
+To run the dmidecode command, you do need sudo privileges. So issue the command sudo dmidecode -t 17. The output of the command (Figure 7) can be lengthy, as it displays information for all memory-type devices. So if you don’t have the ability to scroll, you might want to send the output of that command to a file, like so: sudo dmidecode -t 17 > dmi_info, or pipe it to the less command, as in sudo dmidecode -t 17 | less. 
+
+![dmidecode][17]
+
+Figure 7: The output of the dmidecode command.
+
+[Used with permission][4]
+
+### /proc/meminfo
+
+You might be asking yourself, “Where do these commands get this information from?” In some cases, they get it from the /proc/meminfo file. Guess what? You can read that file directly with the command less /proc/meminfo. By using the less command, you can scroll up and down through that lengthy output to find exactly what you need (Figure 8).
+
+![/proc/meminfo][19]
+
+Figure 8: The output of the less /proc/meminfo command.
+
+[Used with permission][4]
+
+One thing you should know about /proc/meminfo: This is not a real file. Instead, /proc/meminfo is a virtual file that contains real-time, dynamic information about the system. In particular, you’ll want to check the values for:
+
+ * MemTotal
+
+ * MemFree
+
+ * MemAvailable
+
+ * Buffers
+
+ * Cached
+
+ * SwapCached
+
+ * SwapTotal
+
+ * SwapFree
+
+
+
+
+If you want to get fancy with /proc/meminfo you can use it in conjunction with the egrep command like so: egrep --color 'Mem|Cache|Swap' /proc/meminfo. This will produce an easy-to-read listing of all entries that contain Mem, Cache, and Swap ... with a splash of color (Figure 9).
+
+![/proc/meminfo][21]
+
+Figure 9: Making /proc/meminfo easier to read.
+
+[Used with permission][4]
+
+### Keep learning
+
+One of the first things you should do is read the manual pages for each of these commands (so man top, man free, man vmstat, man dmidecode). Starting with the man pages for commands is always a great way to learn so much more about how a tool works on Linux.
+
+Learn more about Linux through the free ["Introduction to Linux"][22] course from The Linux Foundation and edX. 
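And since every tool above ultimately reads from /proc/meminfo, you can also pull the key values straight into your own scripts. A small, hedged sketch using awk:

```shell
# Print a few key memory values (in kB) directly from /proc/meminfo
awk '/^(MemTotal|MemFree|MemAvailable|SwapTotal|SwapFree):/ {
    printf "%-14s %12d kB\n", $1, $2
}' /proc/meminfo
```

On most modern Linux systems this prints the five listed values in kilobytes, one per line.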
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/5-commands-checking-memory-usage-linux + +作者:[Jack Wallen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/jlwallen +[1]:https://www.ubuntu.com/download/server +[2]:/files/images/memory1jpg +[3]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/memory_1.jpg?itok=fhhhUL_l (top) +[4]:/licenses/category/used-permission +[5]:https://system76.com/desktops/leopard +[6]:/files/images/memory2jpg +[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/memory_2.jpg?itok=zuVkQfvv (top) +[8]:/files/images/memory3jpg +[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/memory_3.jpg?itok=rvuQp3t0 (free) +[10]:/files/images/memory4jpg +[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/memory_4.jpg?itok=K_luLLPt (free) +[12]:/files/images/memory5jpg +[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/memory_5.jpg?itok=q50atcsX (total) +[14]:/files/images/memory6jpg +[15]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/memory_6.jpg?itok=bwFnUVmy (vmstat) +[16]:/files/images/memory7jpg +[17]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/memory_7.jpg?itok=UNHIT_P6 (dmidecode) +[18]:/files/images/memory8jpg +[19]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/memory_8.jpg?itok=t87jvmJJ (/proc/meminfo) +[20]:/files/images/memory9jpg +[21]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/memory_9.jpg?itok=t-iSMEKq (/proc/meminfo) +[22]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180615 BLUI- An easy way to 
create game UI.md b/sources/tech/20180615 BLUI- An easy way to create game UI.md
new file mode 100644
index 0000000000..8e2b798d08
--- /dev/null
+++ b/sources/tech/20180615 BLUI- An easy way to create game UI.md
@@ -0,0 +1,57 @@
+BLUI: An easy way to create game UI
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gaming_plugin_blui_screenshot.jpg?itok=91nnYCt_)
+
+Game development engines have become increasingly accessible in the last few years. Engines like Unity, which has always been free to use, and Unreal, which recently switched from a subscription-based service to a free service, allow independent developers access to the same industry-standard tools used by AAA publishers. While neither of these engines is open source, each has enabled the growth of open source ecosystems around it.
+
+Within these engines are plugins that allow developers to enhance the base capabilities of the engine by adding specific applications. These apps can range from simple asset packs to more complicated things, like artificial intelligence (AI) integrations. These plugins vary widely across creators. Some are offered by the engine development studios and others by individuals. Many of the latter are open source plugins.
+
+### What is BLUI?
+
+As part of an indie game development studio, I've experienced the perks of using open source plugins on proprietary game engines. One open source plugin, [BLUI][1] by Aaron Shea, has been instrumental in our team's development process. It allows us to create user interface (UI) components using web-based programming like HTML/CSS and JavaScript. We chose to use this open source plugin, even though Unreal Engine (our engine of choice) has a built-in UI editor that achieves a similar purpose. We chose to use open source alternatives for three main reasons: their accessibility, their ease of implementation, and the active, supportive online communities that accompany open source programs. 
+ +In Unreal Engine's earliest versions, the only means we had of creating UI in the game were the engine's native UI integration, Autodesk's Scaleform application, or a few select subscription-based Unreal integrations spread throughout the Unreal community. In all those cases, the solutions were either incapable of providing a competitive UI solution for indie developers, too expensive for small teams, or exclusively for large-scale teams and AAA developers. + +After commercial products and Unreal's native integration failed us, we looked to the indie community for solutions. There we discovered BLUI. It not only integrates with Unreal Engine seamlessly but also maintains a robust and active community that frequently pushes updates and ensures the documentation is easily accessible for indie developers. BLUI gives developers the ability to import HTML files into the Unreal Engine and program them further from inside the engine. This allows UI created through web languages to integrate with the game's code, assets, and other elements with the full power of HTML, CSS, JavaScript, and other web languages. It also provides full support for the open source [Chromium Embedded Framework][2]. + +### Installing and using BLUI + +The basic process for using BLUI involves first creating the UI via HTML. Developers may use any tool at their disposal to achieve this, including bootstrapped JavaScript code, external APIs, or any database code. Once this HTML page is ready, you can install the plugin the same way you would install any Unreal plugin and load or create a project. Once the project is loaded, you can place a BLUI function anywhere within an Unreal UI blueprint, or hardcode it via C++. Developers can call functions from within their HTML page or change variables easily using BLUI's internal functions. + +![Integrating BLUI into Unreal Engine 4 blueprints][4] + +Integrating BLUI into Unreal Engine 4 blueprints. 
+ +In our current project, we use BLUI to sync UI elements with the in-game soundtrack to provide visual feedback to the rhythm aspects of the game mechanics. It's easy to integrate custom engine programming with the BLUI plugin. + +![Using BLUI to sync UI elements with the soundtrack.][6] + +Using BLUI to sync UI elements with the soundtrack. + +Implementing BLUI into Unreal Engine 4 is a trivial process thanks to the [documentation][7] on the BLUI GitHub page. There is also [a forum][8] populated with supportive Unreal Engine developers eager to both ask and answer questions regarding the plugin and any issues that appear when implementing the tool. + +### Open source advantages + +Open source plugins enable expanded creativity within the confines of proprietary game engines. They continue to lower the barrier of entry into game development and can produce in-game mechanics and assets no one has seen before. As access to proprietary game development engines continues to grow, the open source plugin community will become more important. Rising creativity will inevitably outpace proprietary software, and open source will be there to fill the gaps and facilitate the development of truly unique games. And that novelty is exactly what makes indie games so great! 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/6/blui-game-development-plugin + +作者:[Uwana Ikaiddi][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/uwikaiddi +[1]:https://github.com/AaronShea/BLUI +[2]:https://bitbucket.org/chromiumembedded/cef +[3]:/file/400616 +[4]:https://opensource.com/sites/default/files/uploads/blui_gaming_plugin-integratingblui.png (Integrating BLUI into Unreal Engine 4 blueprints) +[5]:/file/400621 +[6]:https://opensource.com/sites/default/files/uploads/blui_gaming_plugin-syncui.png (Using BLUI to sync UI elements with the soundtrack.) +[7]:https://github.com/AaronShea/BLUI/wiki +[8]:https://forums.unrealengine.com/community/released-projects/29036-blui-open-source-html5-js-css-hud-ui diff --git a/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md b/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md new file mode 100644 index 0000000000..e548213483 --- /dev/null +++ b/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md @@ -0,0 +1,1029 @@ +Complete Sed Command Guide [Explained with Practical Examples] +====== +In a previous article, I showed the [basic usage of Sed][1], the stream editor, on a practical use case. Today, be prepared to gain more insight about Sed as we take an in-depth tour of the sed execution model. This will also be an opportunity to make an exhaustive review of all Sed commands and to dive into their details and subtleties. So, if you are ready, launch a terminal, [download the test files][2] and sit comfortably before your keyboard: we will start our exploration right now! 
+ +### A little bit of theory on Sed + +![complete reference guide to sed commands][4] + +#### A first look at the sed execution model + +To truly understand Sed you must first understand the tool’s execution model. + +When processing data, Sed reads one line of input at a time and stores it into the so-called pattern space. All Sed’s transformations apply to the pattern space. Transformations are described by one-letter commands provided on the command line or in an external Sed script file. Most Sed commands can be preceded by an address, or an address range, to limit their scope. + +By default, Sed prints the content of the pattern space at the end of each processing cycle, that is, just before overwriting the pattern space with the next line of input. We can summarize that model like this: + + 1. Try to read the next input line into the pattern space + + 2. If the read was successful: + + 1. Apply, in script order, all commands whose address matches the current input line + + 2. If sed was not launched in quiet mode (`-n`) print the content of the (potentially modified) pattern space + + 3. go back to 1. + + + + +Since the content of the pattern space is lost after each line is processed, it is not suitable for long-term storage. For that purpose, Sed has a second buffer, the hold space. Sed never clears, puts or gets data from the hold space unless you explicitly request it. We will investigate that more in depth later when studying the exchange, get and hold commands. + +#### The Sed abstract machine + +The model explained above is what you will see described in many Sed tutorials. Indeed, it is correct enough to understand the most basic Sed programs. But when you start digging into more advanced commands, you will see it is not sufficient. So let’s try to be a little bit more formal now. 
+ +Actually, Sed can be viewed as implementing an [abstract machine][5] whose [state][6] is defined by three [buffers][7], two [registers][8], and two [flags][9]: + + * **three buffers** to store arbitrary length text. Yes: three! In the basic execution model we talked about the pattern and hold spaces, but Sed has a third buffer: the append queue. From the Sed script perspective, it is a write-only buffer that Sed will flush automatically at predefined moments of its execution (broadly speaking, before reading a new line from the input, or just before quitting). + + * Sed also maintains **two registers** : the line counter (LC), which holds the number of lines read from the input, and the program counter (PC), which always holds the index (“position” in the script) of the next command to execute. Sed automatically increments the PC as part of its main loop. But using specific commands, a script can also directly modify the PC to skip or repeat parts of the program. This is how loops or conditional statements can be implemented with Sed. More on that in the dedicated branches section below. + + * Finally, **two flags** can modify the behavior of certain Sed commands: the auto-print flag (AP) and the substitution flag (SF). When the auto-print flag is set, Sed will automatically print the content of the pattern space before overwriting it (notably, but not only, before reading a new line of input). When the auto-print flag is clear (“not set”), Sed will never print the content of the pattern space without an explicit command in the script. You can clear the auto-print flag by running Sed in “quiet mode” (using the `-n` command line option or by using the special comment `#n` on the very first line of the script). The “substitution flag” is set by the substitution command (the `s` command) when both its address and search pattern match the content of the pattern space. 
The substitution flag is cleared at the start of each new cycle, when a new line is read from input, or after a conditional branch is taken. Here again, we will revisit that topic in detail in the branches section. + + + + +In addition, Sed maintains the list of commands having entered their address range (more on that in the range addresses section) as well as a couple of file handles to read and write data. You will find some more information on that in the read and write command descriptions. + +#### A more accurate Sed execution model + +As a picture is worth a thousand words, I drew a flowchart describing the Sed execution model. I left a couple of things aside, like dealing with multiple input files or error handling, but I think this should be sufficient for you to understand the behavior of any Sed program and to avoid wasting your time by groping around while writing your own Sed scripts. + +![The Sed execution model][10] + +You may have noticed I didn’t describe the command-specific actions in the flowchart above. We will see that in detail for each command. So, without further ado, let’s start our tour! + +### The print command + +The print command (`p`) displays the content of the pattern space at the moment it is executed. It does not change the state of the Sed abstract machine in any way. + +![The Sed `print` command][11] + +For example: +``` +sed -e 'p' inputfile + +``` + +The command above will print each line of the input file … twice. Once because you explicitly requested it using the `print` command, and a second time implicitly at the end of the processing loop (because we didn’t launch Sed in “quiet mode” here). + +If we are not interested in seeing each line twice, we have two ways of fixing that: +``` +sed -n -e 'p' inputfile # quiet mode with explicit print +sed -e '' inputfile # empty "do nothing" program, implicit print + +``` + +Note: the `-e` option introduces a Sed command. It is used to distinguish between commands and file names. 
Since a Sed invocation must contain at least one command, the `-e` flag is optional for that first command. However, I have the habit of using it, mostly for consistency with more complex cases where I have to give multiple Sed expressions on the command line. I’ll let you decide whether this is a good or a bad habit, but I will follow that convention in the rest of the article. + +### Addresses + +Obviously, the print command is not very useful by itself. However, if you add an address before the command to apply it only to some lines of the input file, it suddenly becomes able to filter out unwanted lines from a file. But what’s an address for Sed? And how are the “lines” of the input file identified? + +#### Line numbers + +A Sed address can be either a line number (with the extension of `$` to mean “the last line”) or a regular expression. When using line numbers, you have to remember that lines are numbered starting from one in Sed, not from zero. +``` +sed -n -e '1p' inputfile # print only the first line of the file +sed -n -e '5p' inputfile # print only line 5 +sed -n -e '$p' inputfile # print the last line of the file +sed -n -e '0p' inputfile # will result in an error because 0 is not a valid line number + +``` + +According to the [POSIX specifications][12], line numbers are cumulative if you specify several input files. In other words, the line counter is not reset when Sed opens a new input file. So, the two commands below will do the same thing, printing only one line of text to the output: +``` +sed -n -e '1p' inputfile1 inputfile2 inputfile3 +cat inputfile1 inputfile2 inputfile3 | sed -n -e '1p' + +``` + +Actually, this is exactly how POSIX defines multiple file handling: + +> If multiple file operands are specified, the named files shall be read in the order specified and the concatenation shall be edited. 
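That cumulative numbering is easy to check with two throwaway files (the file names below are just placeholders for this illustration):

```shell
# Create two hypothetical one-line input files
printf 'first\n'  > /tmp/sed_demo_1
printf 'second\n' > /tmp/sed_demo_2

# Ask for "line 1" across both files: only "first" is printed,
# because the line counter is not reset when Sed opens the second
# file ("second" is line 2 of the concatenation).
sed -n -e '1p' /tmp/sed_demo_1 /tmp/sed_demo_2
```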
+ +However, some Sed implementations offer command line options to change that behavior, like the GNU Sed `-s` flag (which is also implicitly applied when using the GNU Sed `-i` flag): +``` +sed -sn -e '1p' inputfile1 inputfile2 inputfile3 + +``` + +If your implementation of Sed supports such non-standard options, I’ll let you check its man page for the details regarding them. + +#### Regular expressions + +I’ve said Sed addresses could be line numbers or regular expressions. But what is this regex thing? + +In just a few words, a [regular expression][13] is a way to describe a set of strings. If a given string belongs to the set described by a regular expression, we say the string matches the regular expression. + +A regular expression may contain literal characters that must be present verbatim in a string to have a match. For example, all letters and digits behave that way, as well as most printable characters. However, some symbols have a special meaning: + + * They could represent anchors, like `^` and `$` that respectively denote the start or end of a line; + + * other symbols can serve as placeholders for entire sets of characters (like the dot that matches any single character, or the square brackets that are used to define a custom character set); + + * others again are quantifiers that denote repetitions (like the [Kleene star][14] that means 0, 1 or several occurrences of the previous pattern). + + + + +My goal here is not to give you a regular expression tutorial. So I will stick with just a few examples. However, feel free to search the web for more on that topic: regular expressions are a really powerful feature available in many standard Unix commands and programming languages, and a skill every Unix user should master. 
+ +Here are a few examples used in the context of a Sed address: +``` +sed -n -e '/systemd/p' inputfile # print only lines *containing* the literal string "systemd" +sed -n -e '/nologin$/p' inputfile # print only lines ending with "nologin" +sed -n -e '/^bin/p' inputfile # print only lines starting with "bin" +sed -n -e '/^$/p' inputfile # print only empty lines (i.e.: nothing between the start and end of a line) +sed -n -e '/./p' inputfile # print only lines containing a character (i.e. print only non-empty lines) +sed -n -e '/^.$/p' inputfile # print only lines containing exactly one character +sed -n -e '/admin.*false/p' inputfile # print only lines containing "admin" followed by "false" (with any number of arbitrary characters between them) +sed -n -e '/1[0,3]/p' inputfile # print only lines containing a "1" followed by a "0", a comma or a "3" (inside brackets, the comma is an ordinary set member!) +sed -n -e '/1[0-2]/p' inputfile # print only lines containing a "1" followed by a "0", "1" or "2" +sed -n -e '/1.*2/p' inputfile # print only lines containing the character "1" followed by a "2" (with any number of arbitrary characters between them) +sed -n -e '/1[0-9]*2/p' inputfile # print only lines containing the character "1" followed by zero, one or more digits, followed by a "2" + +``` + +If you want to remove the special meaning of a character in a regular expression (including the regex delimiter symbol), you have to precede it with a backslash: +``` +# Print all lines containing the string "/usr/sbin/nologin" +sed -ne '/\/usr\/sbin\/nologin/p' inputfile + +``` + +You are not limited to using only the slash as the regular expression delimiter in an address. You can use any other character that suits your needs and tastes by preceding the first delimiter with a backslash. 
This is particularly useful when you have addresses that should match literal slashes, like when working with file paths: +``` +# Both commands are perfectly identical +sed -ne '/\/usr\/sbin\/nologin/p' inputfile +sed -ne '\=/usr/sbin/nologin=p' inputfile + +``` + +#### Extended regular expressions + +By default, the Sed regular expression engine only understands the [POSIX basic regular expression][15] syntax. If you need [extended regular expressions][16], you must add the `-E` flag to the Sed command. Extended regular expressions add a couple of extra features to basic regular expressions and, perhaps most importantly, they require far fewer backslashes. I let you compare: +``` +sed -n -e '/\(www\)\|\(mail\)/p' inputfile +sed -En -e '/(www)|(mail)/p' inputfile + +``` + +#### The bracket quantifier + +One powerful feature of regular expressions is the [range quantifier][17] `{,}`. Actually, when written exactly like that, this quantifier is a perfect synonym for the `*` quantifier. However, you can explicitly add a lower and upper bound on each side of the comma, something that gives a tremendous amount of flexibility. When the lower bound of the range quantifier is omitted, it is assumed to be zero. When the upper bound is omitted, it is assumed to be infinity: + +| Bracket notation | Shorthand | Description | +| --- | --- | --- | +| {,} | * | zero, one or many occurrences of the preceding regex | +| {,1} | ? | zero or one occurrence of the preceding regex | +| {1,} | + | one or many occurrences of the preceding regex | +| {n,n} | {n} | exactly n occurrences of the preceding regex | + +The bracket notation is available in basic regular expressions too, but it requires backslashes. According to POSIX, the only quantifiers available in basic regular expressions are the star and the bracket notation (with backslashes, `\{m,n\}`). Many regex engines do support the `\?` and `\+` notations as an extension. However, why tempt the devil? 
If you need those quantifiers, using extended regular expressions will be both easier to write and more portable. + +If I took the time to talk about the bracket notation for regex quantifiers, it is because that feature is often useful in Sed scripts for counting characters. +``` +sed -En -e '/^.{35}$/p' inputfile # Print lines containing exactly 35 characters +sed -En -e '/^.{0,35}$/p' inputfile # Print lines containing 35 characters or fewer +sed -En -e '/^.{,35}$/p' inputfile # Print lines containing 35 characters or fewer +sed -En -e '/^.{35,}$/p' inputfile # Print lines containing 35 characters or more +sed -En -e '/.{35}/p' inputfile # I let you figure out this one by yourself (test it!) + +``` + +#### Range addresses + +All the addresses we used so far were unique addresses. When using a unique address, the command is applied only to the line(s) matching that address. However, Sed also supports range addresses. They are used to apply a command to all lines between the start and the end address of the range: +``` +sed -n -e '1,5p' inputfile # print only lines 1 through 5 +sed -n -e '5,$p' inputfile # print from line 5 to the end of the file + +sed -n -e '/www/,/systemd/p' inputfile # print from the first line matching the /www/ regular expression to the next line matching the /systemd/ regular expression. + +``` + +If the same line number is used both for the start and end address, the range is reduced to that line. Actually, if the second address is a number less than or equal to the line number of the first selected line of the range, only one line will be selected: +``` +printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '4,4p' + 4 bd +printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '4,3p' + 4 bd + +``` + +This is somewhat tricky, but the rule given in the previous paragraph also applies when the start address is a regular expression. 
In that case, Sed will compare the line number of the first line matching the regex with the explicit line number given as the end address. Once again, if the end line number is lower than or equal to the start line number, the range will be reduced to one line: +``` +# The /b/,4 address will match *three* one-line ranges +# since each matching line has a line number >= 4 +printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '/b/,4p' + 4 bd + 5 be + 6 bf + +# I let you figure out by yourself how many ranges are matched +# by that second example: +printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '/d/,4p' + 1 ad + 2 ae + 3 af + 4 bd + 7 cd + +``` + +However, the behavior of Sed is different when the end address is a regular expression. In that case, the first line of the range is not tested against the end address, so the range will contain at least two lines (except of course if there is not enough input data): +``` +printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '/b/,/d/p' + 4 bd + 5 be + 6 bf + 7 cd + +printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '4,/d/p' + 4 bd + 5 be + 6 bf + 7 cd + +``` + +#### Complement + +Adding an exclamation mark (`!`) after an address selects lines not matching that address. For example: +``` +sed -n -e '5!p' inputfile # Print all lines except line 5 +sed -n -e '5,10!p' inputfile # Print all lines except lines 5 to 10 +sed -n -e '/sys/!p' inputfile # Print all lines except those containing "sys" + +``` + +#### Conjunctions + +Sed allows you to group commands in blocks using braces (`{…}`). You can leverage that feature to combine several addresses. For example, let’s compare the output of those two commands: +``` +sed -n -e '/usb/{ +/daemon/p +}' inputfile + +sed -n -e '/usb.*daemon/p' inputfile + +``` + +By nesting commands in a block, we will select lines containing “usb” and “daemon” in any order, whereas the regular expression “usb.*daemon” would only match lines where the “usb” string appears before the “daemon” string. 
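To make that difference concrete, here is a small self-contained check using inline `printf` data instead of the test file:

```shell
# One line with "usb" before "daemon", one with the opposite order
printf 'usb daemon\ndaemon usb\n' | sed -n -e '/usb/{/daemon/p;}'
# Nested blocks print BOTH lines: the order of the words is irrelevant

printf 'usb daemon\ndaemon usb\n' | sed -n -e '/usb.*daemon/p'
# The single regex prints only the first line: "usb" must come first
```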
+ +After that long digression, let’s now go back to our review of the various Sed commands. + +### The quit command + +The quit command will stop Sed at the end of the current processing loop iteration. + +![The Sed `quit` command][18] + +The `quit` command is a way to stop input processing before reaching the end of the input file. Why would someone want to do that? + +Well, if you remember, we’ve seen you can print lines 1 through 5 of a file using the following command: +``` +sed -n -e '1,5p' inputfile + +``` + +With most implementations of Sed, the tool will read and cycle over all the remaining input lines, even if only the first five can produce a result. This may have a significant impact if your file contains millions of rows (or even worse, if you read from an infinite stream of data like `/dev/urandom` for example). + +Using the quit command, the same program can be rewritten much more efficiently: +``` +sed -e '5q' inputfile + +``` + +Here, since I did not use the `-n` option, Sed will implicitly print the pattern space at the end of each cycle, but it will quit, and thus stop reading data, after having processed line 5. + +We could use a similar trick to print only a specific line of a file. That will be a good occasion to see several ways of providing multiple Sed expressions from the command line. 
The three variations below take advantage of Sed accepting several commands, either as different `-e` options, or in the same expression, but separated by newlines or semicolons: +``` +sed -n -e '5p' -e '5q' inputfile + +sed -n -e ' + 5p + 5q +' inputfile + +sed -n -e '5p;5q' inputfile + +``` + +If you remember, we’ve seen earlier that we can group commands using braces, something that we could use here to avoid repeating the same address twice: +``` +# By grouping commands +sed -e '5{ + p + q +}' inputfile + +# Which can be shortened as: +sed '5{p;q;}' inputfile + +# As an extension to POSIX, some implementations make the semicolon before the closing brace optional: +sed '5{p;q}' inputfile + +``` + +### The substitution command + +You can imagine the substitution command as the Sed equivalent of the search-replace feature you can find in most WYSIWYG editors. An equivalent, albeit a more powerful one. The substitution command being one of the best-known Sed commands, it is largely documented on the web. + +![The Sed `substitution` command][19] + +We have already covered it [in a previous article][20] so I will not repeat myself here. However, there are some key points to remember if you’re not yet completely familiar with it: + + * The substitution command takes two parameters: the search pattern and the replacement string: `sed s/:/-----/ inputfile` + + * The command and its arguments are separated by an arbitrary character. Mostly by habit, 99% of the time we use a slash, but any other character can be used: `sed s%:%-----% inputfile`, `sed sX:X-----X inputfile` or even `sed 's : ----- ' inputfile` + + * By default, the substitution is applied only to the first matching substring of the pattern space. 
You can change that by specifying the match index as a flag after the command: `sed 's/:/-----/1' inputfile`, `sed 's/:/-----/2' inputfile`, `sed 's/:/-----/3' inputfile`, … + + * If you want to perform a substitution globally (i.e.: on each non-overlapping match of the pattern space), you need to add the `g` flag: `sed 's/:/-----/g' inputfile` + + * In the replacement string, any occurrence of the ampersand (`&`) will be replaced by the substring matching the search pattern: `sed 's/:/-&&&-/g' inputfile`, `sed 's/…./& /g' inputfile` + + * Parentheses (`(…)` in extended regex or `\(…\)` in basic regex) introduce a capturing group. That is a part of the matching string that can be referenced in the replacement string. `\1` is the content of the first capturing group, `\2` the content of the second one, and so on: `sed -E 's/(.)(.)/\2\1/g' inputfile`, `sed -E 's/(.):x:(.):(.*)/\1:\3/' inputfile` (the latter works because [the star regular expression quantifier is greedy][21], and matches as many characters as it can) + + * In the search pattern or the replacement string, you can remove the special meaning of any character by preceding it with a backslash: `sed 's/:/--\&--/g' inputfile`, `sed 's/\//\\/g' inputfile` + + + + +As all this might seem a little bit abstract, here are a couple of examples. To start, let’s say I want to display the first field of my test input file padded on the right with spaces up to 20 characters; I could write something like this: +``` +sed < inputfile -E -e ' + s/:/ / # replace the first field separator by 20 spaces + s/(.{20}).*/\1/ # keep only the first 20 characters of the line + s/.*/| & |/ # add vertical bars for a nice output +' + +``` + +As a second example, if I want to change the UID/GID of the user sonia to 1100, I could write something like this: +``` +sed -En -e ' + /sonia/{ + s/[0-9]+/1100/g + p + }' inputfile + +``` + +Notice the `g` option at the end of the substitution command. 
It modifies its behavior, so all occurrences of the search pattern are replaced. Without that option, only the first one would be. + +By the way, this is also a good occasion to mention that the print command displays the content of the pattern space at the moment the command is executed. So, I can obtain a before-after output like this: +``` +sed -En -e ' + /sonia/{ + p + s/[0-9]+/1100/g + p + }' inputfile + +``` + +Actually, since printing a line after a substitution is a common use case, the substitution command also accepts the `p` option for that purpose: +``` +sed -En -e '/sonia/s/[0-9]+/1100/gp' inputfile + +``` + +Finally, this review wouldn’t be exhaustive without mentioning the `w` option of the substitution command. We will examine it in detail later. + +#### The delete command + +The delete command (`d`) is used to clear the pattern space and immediately start the next cycle. By doing so, it will also skip the implicit print of the pattern space even if the auto-print flag is set. + +![The Sed `delete` command][22] + +A particularly inefficient way of printing only the first five lines of a file would be: +``` +sed -e '6,$d' inputfile + +``` + +I’ll let you guess why I said this was inefficient. If this is not obvious, try to re-read the section concerning the quit command. The answer is there! + +The delete command is particularly useful when combined with regular expression-based addresses to remove matching lines from the output: +``` +sed -e '/systemd/d' inputfile + +``` + +#### The next command + +This command prints the current pattern space if Sed is not running in quiet mode, then, in all cases, it reads the next input line into the pattern space and executes the remaining commands of the current cycle with the new pattern space. + +![The Sed `next` command][23] + +A common use case of the next command is to skip lines: +``` +cat -n inputfile | sed -n -e 'n;n;p' + +``` + +In the example above, Sed will implicitly read the first line of the input file. 
But the `next` command discards the pattern space (without displaying it, because of the `-n` option) and replaces it with the next line from the input. And the second `next` command does the same thing, this time skipping line 2 of the input. And finally, the script explicitly prints the pattern space, which now contains the third line of the input. Then Sed starts a new cycle, implicitly reading line 4, then skipping it, as well as line 5, because of the `next` commands, and it prints line 6. And again and again until the end of the file. Concretely, this prints one line out of every three of the input file. + +Using the next command, we can also find a couple of other ways to display the first five lines of a file: +``` +cat -n inputfile | sed -n -e '1{p;n;p;n;p;n;p;n;p}' +cat -n inputfile | sed -n -e 'p;n;p;n;p;n;p;n;p;q' +cat -n inputfile | sed -e 'n;n;n;n;q' + +``` + +More interestingly, the next command is also very useful when you want to process lines relative to some address: +``` +cat -n inputfile | sed -n '/pulse/p' # print lines containing "pulse" +cat -n inputfile | sed -n '/pulse/{n;p}' # print the line following + # the line containing "pulse" +cat -n inputfile | sed -n '/pulse/{n;n;p}' # print the line following + # the line following + # the line containing "pulse" + +``` + +### Working with the hold space + +Until now, the commands we’ve seen dealt only with the pattern space. However, as we mentioned at the very top of this article, there is a second buffer, the hold space, entirely under the control of the user. This will be the purpose of the commands described in this section. + +#### The exchange command + +As its name implies, the exchange command (`x`) will swap the content of the hold and pattern spaces. Remember: as long as you haven’t put anything into the hold space, it is empty. 
+ +![The Sed `exchange` command][24] + +As a first example, we may use the exchange command to print the first two lines of a file in reverse order: +``` +cat -n inputfile | sed -n -e 'x;n;p;x;p;q' + +``` + +Of course, you don’t have to use the content of the hold space immediately after having set it, since the hold space remains untouched as long as you don’t explicitly modify it. In the following example, I use it to move the first line of the input after the fifth one: +``` +cat -n inputfile | sed -n -e ' + 1{x;n} # Swap the hold and pattern space + # to store line 1 into the hold buffer + # and then read line two + 5{ + p # print line 5 + x # swap the hold and pattern space to get + # back the content of line one into the + # pattern space + } + + 1,5p # triggered on lines 2 through 5 + # (not a typo! try to figure out why this rule + # is NOT executed for line 1;) +' + +``` + +#### The hold commands + +The hold command (`h`) is used to store the content of the pattern space into the hold space. However, as opposed to the exchange command, this time the content of the pattern space is left unchanged. The hold commands come in two flavors: + + * `h` +that will copy the content of the pattern space into the hold space, overwriting any value already present + + * `H` +that will append the content of the pattern space to the hold space, using a newline as separator + + + + +![The Sed `hold` command][25] + +The example above using the exchange command can be rewritten using the hold command instead: +``` +cat -n inputfile | sed -n -e ' + 1{h;n} # Store line 1 into the hold buffer and continue + 5{ # on line 5 + x # switch the pattern and hold space + # (now the pattern space contains the line 1) + H # append the line 1 after the line 5 in the hold space + x # switch again to get back lines 5 and 1 into + # the pattern space + } + + 1,5p # triggered on lines 2 through 5 + # (not a typo! 
try to figure why this rule + # is NOT executed for line 1;) +' + +``` + +#### The get command + +The get command (`g`) does the exact opposite of the hold command: it takes the content of the hold space and put it into the pattern space. Here again, it comes in two flavors: + + * `g` +that will copy the content of the hold space into the pattern space, overwriting any value already present + + * `G` +that will append the content of the hold space to the pattern space, using a newline as separator + + + + +![The Sed `get` command][26] + +Together, the hold and get commands allow to store and recall data. As a little challenge, I let you rewrite the example of the previous section to put the line 1 of the input file after the line 5, but this time using the get and hold commands (lower- or upper-case version), but without using the exchange command. With a little bit of luck, it should be simpler that way! + +In the meantime, I can show you another example that could serve for your inspiration. The goal here is to separate the users having a login shell from the others: +``` +cat -n inputfile | sed -En -e ' + \=(/usr/sbin/nologin|/bin/false)$= { H;d; } + # Append matching lines to the hold buffer + # and continue to next cycle + p # Print other lines + $ { g;p } # On the last line, + # get and print the content of the hold buffer +' + +``` + +### print, delete and next revisited + +Now you’ve gained more familiarities with the hold space, let me go back on the `print`, `delete` and `next` commands. We already talked about the lower case `p`, `d` and `n` commands. But they also have an upper case version. 
As it seems to be a convention with Sed, the uppercase versions of those commands are related to multi-line buffers:

  * `P`
print the content of the pattern space up to the first newline

  * `D`
delete the content of the pattern space up to and including the first newline, then restart a cycle with the remaining text without reading any new input

  * `N`
read and append a new line of input to the pattern space, using the newline character as a separator between the old and new data. Continue the execution of the current cycle.

![The Sed uppercase `Delete` command][27]
![The Sed uppercase `Next` command][28]

The main use case for those commands is to implement queues ([FIFO lists][29]). The canonical example is removing the last 5 lines from a file:
```
cat -n inputfile | sed -En -e '
  1 { N;N;N;N }  # ensure the pattern space contains *five* lines

  N              # append a sixth line onto the queue
  P              # Print the head of the queue
  D              # Remove the head of the queue
'

```

As a second example, we could display input data in two columns:
```
# Print in two columns
sed < inputfile -En -e '
  $!N   # Append a new line to the pattern space
        # *except* on the last line of input
        # This is a trick required to deal with
        # inconsistencies between GNU Sed and POSIX Sed
        # when using N on the last line of input
        # https://www.gnu.org/software/sed/manual/sed.html#N_005fcommand_005flast_005fline

        # Right pad the first field of the first line
        # with spaces and discard the rest of the line
  s/:.*\n/                    \n/
  s/:.*//           # Discard all but the first field on the second line
  s/(.{20}).*\n/\1/ # Trim and join lines
  p                 # Print the result
'

```

### Branching

We just saw that Sed has buffering capabilities through the hold space. But it also has test and branch instructions. Having both of those features makes Sed a [Turing complete][30] language. It may sound silly, but that means you can write any program using Sed.
You can, but that does not mean it would be an easy task, nor that the result would be particularly efficient.

However, don’t panic. In this article, we will stay with simple examples of tests and branches. Even if these capabilities seem limited at first sight, remember that some people have written calculators, Tetris, and many other kinds of applications using sed!

#### Labels and branches

In some respects, you can see Sed as a very limited assembly language. So you won’t find high-level “for” or “while” loops or “if … else” statements, but you can implement them using branches.

![The Sed `branch` command][31]

If you take a look at the flowchart describing the Sed execution model at the top of this article, you can see that Sed automatically increments the program counter, resulting in the commands being executed in the order they appear in the program. However, using branch instructions, you can break that sequential flow by continuing the execution with any command of your choice in the program. The destination of a jump is explicitly defined using a label.

![The Sed `label` command][32]

Here is an example:
```
echo hello | sed -ne '
  :start   # Put the "start" label on that line of the program
  p        # Print the pattern buffer
  b start  # Continue execution at the :start label
' | less

```

The behavior of that Sed program is very close to the `yes` command: it takes a string and produces an infinite stream of lines containing that string.

Branching to a label as we did bypasses all of Sed’s automatic features: it does not read any input, nor print anything, nor update any buffer. It just jumps to a different instruction instead of executing the next one in the source program order.

It is worth mentioning that, without any label specified as an argument, the branch command (`b`) branches to the end of the program, so Sed will start a new cycle.
This may be useful to bypass some instructions, and thus may be used as an alternative to blocks:
```
cat -n inputfile | sed -ne '
/usb/!b
/daemon/!b
p
'

```

#### Conditional branch

Until now, we’ve seen the so-called unconditional branches, even if the term is somewhat misleading in this context since Sed commands are always conditional based on their optional address.

However, in a more traditional sense, an unconditional branch is a branch that, when executed, will always jump to the specified destination, whereas a conditional branch may or may not jump to the specified instruction depending on the current state of the system.

Sed has only one conditional instruction, the test (`t`) command. It jumps to a different instruction only if a substitution was executed since the start of the current cycle or since the previous conditional branch. More formally, the test command branches only if the substitution flag is set.

![The Sed `test` command][33]

With the test instruction, you can easily perform loops in a Sed program. As a practical example, you can use that to pad lines to a certain length (something you can’t do with a regex alone):
```
# Center text
cut -d: -f1 inputfile | sed -Ee '
  :start
  s/^(.{,19})$/ \1 /    # Pad lines shorter than 20 chars with
                        # a space at the start and another one
                        # at the end
  t start               # Go back to :start if we added a space
  s/(.{20}).*/| \1 |/   # Keep only the first 20 chars of the line
                        # to fix the off-by-one error caused by
                        # odd lines
'

```

If you carefully read the previous example, you’ve noticed that I cheated a little bit by using the cut command to pre-process the data before feeding it to sed.
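As a quick check of the loop’s behavior, here is the same padding logic run on a single hypothetical username, fed inline instead of coming from inputfile (a sketch; I use an explicit `{0,19}` interval for portability):

```shell
# Each iteration adds one space on each side; `t start` loops back
# as long as the substitution succeeded, i.e. while the line is
# still shorter than 20 characters.
printf 'root\n' | sed -Ee '
  :start
  s/^(.{0,19})$/ \1 /
  t start
  s/(.{20}).*/| \1 |/
'
```

The word ends up centered in a 20-character field delimited by `|` markers.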

We can, however, perform the same task using only sed, at the cost of a small modification to the program:
```
cat inputfile | sed -Ee '
  s/:.*//               # Remove all but the first field
  t start
  :start
  s/^(.{,19})$/ \1 /    # Pad lines shorter than 20 chars with
                        # a space at the start and another one
                        # at the end
  t start               # Go back to :start if we added a space
  s/(.{20}).*/| \1 |/   # Keep only the first 20 chars of the line
                        # to fix the off-by-one error caused by
                        # odd lines
'

```

In the above example, you may be surprised by the following construct:
```
t start
:start

```

At first sight, the branch here seems useless since it will jump to the instruction that would have been executed anyway. However, if you read the definition of the `test` command attentively, you will see that it branches only if there was a substitution since the start of the current cycle or since the previous test command was executed. In other words, the test instruction has the side effect of clearing the substitution flag. This is exactly the purpose of the code fragment above. It is a trick you will often see in Sed programs containing conditional branches, used to avoid false positives when several substitution commands are involved.

I agree, though, that clearing the substitution flag wasn’t absolutely mandatory here, since the specific substitution command I used is idempotent once it has padded the string to the right length. So one extra iteration will not change the result. However, look at this second example now:
```
# Classify user accounts based on their login program
cat inputfile | sed -Ene '
  s/^/login=/
  /nologin/s/^/type=SERV /
  /false/s/^/type=SERV /
  t print
  s/^/type=USER /
  :print
  s/:.*//p
'

```

My hope here was to tag the user accounts with either “SERV” or “USER” depending on the configured default login program. If you ran it, you saw the “SERV” tag as expected. However, there is no trace of the “USER” tag in the output. Why?
Because the `t print` instruction always branches: whatever the content of the line, the substitution flag was set by the very first substitution command of the program. Once set, the flag remains set until the next line is read, or until the next test command is executed. That gives us the solution to fix the program:
```
# Classify user accounts based on the login program
cat inputfile | sed -Ene '
  s/^/login=/

  t classify  # clear the "substitution flag"
  :classify

  /nologin/s/^/type=SERV /
  /false/s/^/type=SERV /
  t print
  s/^/type=USER /
  :print
  s/:.*//p
'

```

### Handling verbatim text

Sed is a text editor. A non-interactive one. But a text editor nevertheless. It wouldn’t be complete without some facility to insert literal text in the output. I’m not a big fan of that feature since I find the syntax awkward (even by Sed standards), but sometimes you can’t avoid it.

In the strict POSIX syntax, all three commands to change (`c`), insert (`i`) or append (`a`) some literal text to the output follow the same specific syntax: the command letter is followed by a backslash, and the text to insert starts on the next line of the script:
```
head -5 inputfile | sed '
1i\
# List of user accounts
$a\
# end
'

```

To insert multiple lines of text, you must end each of them with a backslash:
```
head -5 inputfile | sed '
1i\
# List of user accounts\
# (users 1 through 5)
$a\
# end
'

```

Some Sed implementations, like GNU Sed, make the newline after the initial backslash optional, even when forced into `--posix` mode. I didn’t find anything in the standard that authorizes that alternate syntax, so use it at your own risk if portability is at a premium (or leave a comment if I missed that feature in the specifications!):
```
# non-POSIX syntax:
head -5 inputfile | sed -e '
1i \# List of user accounts
$a\# end
'

```

Some Sed implementations also make the initial backslash completely optional.
Since this is, without any doubt this time, a vendor-specific extension to the POSIX specifications, I’ll let you check the manual of the sed version you use to see if it supports that syntax.

After that quick overview, let’s review those commands in more detail, starting with the change command, which I haven’t presented yet.

#### The change command

The change command (`c\`) deletes the pattern space and starts a new cycle, just like the `d` command. The only difference is that the user-provided text is written to the output when the command is executed.

![The Sed `change` command][34]
```
cat -n inputfile | sed -e '
/systemd/c\
# :REMOVED:
s/:.*// # This will NOT be applied to the "changed" text
'

```

If the change command is associated with a range address, the text is output only once, when reaching the last line of the range. This somehow makes it an exception to the convention that a Sed command is applied repeatedly to all lines of its range address:
```
cat -n inputfile | sed -e '
19,22c\
# :REMOVED:
s/:.*// # This will NOT be applied to the "changed" text
'

```

As a consequence, if you want the change command to be repeated for every line in a range, you have no choice but to wrap it inside a block:
```
cat -n inputfile | sed -e '
19,22{c\
# :REMOVED:
}
s/:.*// # This will NOT be applied to the "changed" text
'

```

#### The insert command

The insert command (`i\`) immediately prints the user-provided text on the output. It does not alter the program flow or buffer content in any way.

![The Sed `insert` command][35]
```
# display the first five user names with a title on the first row
sed < inputfile -e '
1i\
USER NAME
s/:.*//
5q
'

```

#### The append command

The append command (`a\`) queues some text to be displayed when the next line of input is read.
The text is output at the end of the current cycle (including at the end of the program) or when a new line is read from the input using either the `n` or `N` command.

![The Sed `append` command][36]

The same example as above, but this time inserting a footer instead of a header:
```
sed < inputfile -e '
5a\
USER NAME
s/:.*//
5q
'

```

#### The read command

There is a fourth command to insert literal content into the output stream: the read command (`r`). It works exactly like the append command, but instead of taking text hardcoded in the Sed script itself, it writes the content of a file to the output.

The read command only schedules the file to be read. The latter is actually read when the append queue is flushed, not when the read command is executed. This may have implications if there are concurrent accesses to the file to be read, if that file is not a regular file (for example, if it’s a character device or a named pipe), or if the file is modified during processing.

As an illustration, if you use the write command (which we will see in detail in the next section) together with the read command to write to and re-read from a temporary file, you can obtain some creative results (using a French equivalent of the [Shiritori][37] game as an illustration):
```
printf "%s\n" "Trois p'tits chats" "Chapeau d' paille" "Paillasson" |
sed -ne '
  r temp
  a\
  ----
  w temp
'

```

This concludes the list of Sed commands dedicated to inserting literal text into the output stream. My last example was mostly for fun, but since I mentioned the write command there, it makes a perfect transition to the next section, where we will see how to write data to an external file from Sed.

### Alternate output

Sed is designed with the idea that all text transformations will end up being written to the standard output of the process. However, Sed also has some provisions to send data to alternate destinations.
You have two ways to do that: using the dedicated write command, or by adding the write flag to a substitution command.

#### The write command

The write command (`w`) appends the content of the pattern space to the given destination file. POSIX requires the destination file to be created by Sed before it starts processing any input data. If the file already exists, it is overwritten.

![The Sed `write` command][38]

As a consequence, even if you never actually write to a file, it will be created anyway. For example, the following Sed program will create/overwrite the “output” file, even though the write command is never executed:
```
echo | sed -ne '
q        # immediately quit
w output # this command is never executed
'

```

You can have several write commands referencing the same destination file. All write commands targeting the same file append content to that file (in more or less the same manner as the `>>` shell redirection):
```
sed < inputfile -ne '
/:\/bin\/false$/w server
/:\/usr\/sbin\/nologin$/w server
w output
'
cat server

```

#### The substitution command `write` flag

A long time ago now, we saw that the substitution command has the `p` option for the common use case of printing the pattern space after a substitution. In a very similar manner, it also has a `w` option to write the pattern space to a file after a substitution:
```
sed < inputfile -ne '
s/:.*\/nologin$//w server
s/:.*\/false$//w server
'
cat server

```

### Comments

I have already used comments countless times in this article, but I never took the time to introduce them formally, so let’s fix that: like in most programming languages, a comment is a way to add free-form text that the software will not try to interpret. The Sed syntax being rather cryptic, I can’t insist enough on the need to comment your scripts. Otherwise, they will be hard to understand for anyone except their author.
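As an illustration, here is a sketch of a small commented script; the script itself merely extracts login names from passwd-style input (the sample data is made up):

```shell
# A comment runs from the `#` to the end of the line.
printf 'root:x:0:0\ndaemon:x:1:1\n' | sed -n '
  s/:.*//  # keep only the first field (the login name)
  p        # print it (auto-print was disabled by -n)
'
```
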

![The Sed `comment` command][39]

However, like many other parts of Sed, comments have their own share of subtleties. First, and most important, comments are not a syntactic construct, but full-fledged commands in Sed. A do-nothing (“no-op”) command, but a command anyway. At least, that is how they are defined by POSIX. So, strictly speaking, they should only be allowed where other commands are allowed.

Most Sed implementations relax that requirement by allowing inline comments, as I have used them all over the place in this article.

To close on that topic, it is worth mentioning the very special case of the `#n` comment (an octothorpe followed by the letter n, without any space). If that exact comment is found on the very first line of a script, Sed should switch to quiet mode (i.e., clear the auto-print flag), just as if the `-n` option had been specified on the command line.

### The commands you will rarely need

Now, we have reviewed the commands that will allow you to write 99.99% of your scripts. But this tour wouldn’t be exhaustive if I didn’t mention the last remaining Sed commands. I left them aside until now because I have rarely needed them. But maybe you have examples of practical use cases where you found them useful. If that is the case, do not hesitate to share them with us in the comment section.

#### The line number command

The `=` command writes to the standard output the number of the line currently read by Sed, that is, the content of the line counter register. There is no way to capture that number in one of the Sed buffers, nor to format the output. These two limitations severely reduce the usefulness of that command.

![The Sed `line number` command][40]

Remember that in strict POSIX compliance mode, when several input files are given on the command line, Sed does not reset that counter but continues to increment it, just as if all the files were concatenated.
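A minimal sketch illustrating that behavior with two throwaway files:

```shell
# The line counter is not reset between input files: the second
# file's single line is reported as line 2, not line 1.
printf 'alpha\n' > /tmp/first
printf 'bravo\n' > /tmp/second
sed -n '=' /tmp/first /tmp/second
```
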
Some Sed implementations, like GNU Sed, have options to reset the counter after each input file.

#### The unambiguous print command

The `l` command (a lowercase letter ell) is similar to the print command (`p`), but the content of the pattern space is written in an unambiguous form. To quote [the POSIX standard][12]:

> The characters listed in XBD Escape Sequences and Associated Actions ( ‘\\\’, ‘\a’, ‘\b’, ‘\f’, ‘\r’, ‘\t’, ‘\v’ ) shall be written as the corresponding escape sequence; the ‘\n’ in that table is not applicable. Non-printable characters not in that table shall be written as one three-digit octal number (with a preceding `<backslash>`) for each byte in the character (most significant byte first). Long lines shall be folded, with the point of folding indicated by writing a `<backslash>` followed by a `<newline>`; the length at which folding occurs is unspecified, but should be appropriate for the output device. The end of each line shall be marked with a ‘$’.

![The Sed `unambiguous print` command][41]

I suspect this command was once used to exchange data over non-[8-bit clean channels][42]. As for myself, I have never used it for anything other than debugging purposes.

#### The transliterate command

The transliterate (`y`) command allows mapping characters of the pattern space from a source set to a destination set. It is quite similar to the `tr` command, although more limited.

![The Sed `transliterate` command][43]
```
# The `y` c0mm4nd 1s for h4x0rz only
sed < inputfile -e '
  s/:.*//
  y/abcegio/48<3610/
'

```

While the transliterate command syntax bears some resemblance to the substitution command syntax, it does not accept any option after the replacement string. The transliteration is always global.

Beware that the transliterate command requires all the characters in both the original and destination sets to be given verbatim.
That means the following Sed program does not do what you might think at first sight:
```
# BEWARE: this doesn't do what you may think!
sed < inputfile -e '
  s/:.*//
  y/[a-z]/[A-Z]/
'

```

### The last word
```
# What will this do?
# Hint: the answer is not far away...
sed -E '
  s/.*\W(.*)/\1/
  h
  ${ x; p; }
  d' < inputfile

```

I can’t believe we did it! We’ve reviewed all the Sed commands. If you’ve reached this point, you deserve congratulations, especially if you took the time to try the different examples on your system!

As you’ve seen, Sed is a complex beast, not only because of its terse syntax but also because of all the various corner cases and subtle differences in command behavior. No doubt we can blame historical reasons for that. Despite these drawbacks, it is a powerful tool, and even today it remains one of the most useful commands of the Unix toolbox. Now that it is time to conclude this article, I wouldn’t do so without first asking you a favor: please share your favorite or most creative piece of Sed scripting with us in the comment section. If we have enough of them, we could publish a compilation of those Sed gems!
+ +-------------------------------------------------------------------------------- + +via: https://linuxhandbook.com/sed-reference-guide/ + +作者:[Sylvain Leroux][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://linuxhandbook.com/author/sylvain/ +[1]:https://linuxhandbook.com/sed-command-basics/ +[2]:https://gist.github.com/s-leroux/5cb36435bac46c10cfced26e4bf5588c +[3]:https://linuxhandbook.com/wp-content/plugins/jetpack/modules/lazy-images/images/1x1.trans.gif +[4]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05/sed-reference-guide.jpeg?resize=702%2C395&ssl=1 +[5]:http://mathworld.wolfram.com/AbstractMachine.html +[6]:https://en.wikipedia.org/wiki/State_(computer_science) +[7]:https://en.wikipedia.org/wiki/Data_buffer +[8]:https://en.wikipedia.org/wiki/Processor_register#Categories_of_registers +[9]:https://www.computerhope.com/jargon/f/flag.htm +[10]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-flowchart.png?w=702&ssl=1 +[11]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-print-command.png?w=702&ssl=1 +[12]:http://pubs.opengroup.org/onlinepubs/9699919799/utilities/sed.html +[13]:https://www.regular-expressions.info/ +[14]:https://chortle.ccsu.edu/FiniteAutomata/Section07/sect07_16.html +[15]:https://www.regular-expressions.info/posix.html#bre +[16]:https://www.regular-expressions.info/posix.html#ere +[17]:https://www.regular-expressions.info/repeat.html#limit +[18]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-quit-command.png?w=702&ssl=1 +[19]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-substitution-command.png?w=702&ssl=1 +[20]:https://linuxhandbook.com/?p=128 +[21]:https://www.regular-expressions.info/repeat.html#greedy 
+[22]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-delete-command.png?w=702&ssl=1 +[23]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-next-command.png?w=702&ssl=1 +[24]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-exchange-command.png?w=702&ssl=1 +[25]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-hold-command.png?w=702&ssl=1 +[26]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-get-command.png?w=702&ssl=1 +[27]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-delete-upper-command.png?w=702&ssl=1 +[28]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-next-upper-command.png?w=702&ssl=1 +[29]:https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics) +[30]:https://chortle.ccsu.edu/StructuredC/Chap01/struct01_5.html +[31]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-branch-command.png?w=702&ssl=1 +[32]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-label-command.png?w=702&ssl=1 +[33]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-test-command.png?w=702&ssl=1 +[34]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-change-command.png?w=702&ssl=1 +[35]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-insert-command.png?w=702&ssl=1 +[36]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-append-command.png?w=702&ssl=1 +[37]:https://en.wikipedia.org/wiki/Shiritori +[38]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-write-command.png?w=702&ssl=1 +[39]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-comment-command.png?w=702&ssl=1 +[40]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-current-line-command.png?w=702&ssl=1 +[41]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-unambiguous-print-command.png?w=702&ssl=1 
[42]:https://en.wikipedia.org/wiki/8-bit_clean
[43]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-transliterate-command.png?w=702&ssl=1
diff --git a/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md b/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md
new file mode 100644
index 0000000000..d03dd4527b
--- /dev/null
+++ b/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md
@@ -0,0 +1,186 @@
How To Rename Multiple Files At Once In Linux
======

![](https://www.ostechnix.com/wp-content/uploads/2018/06/Rename-Multiple-Files-720x340.png)

As you may already know, we use the **mv** command to rename or move files and directories in Unix-like operating systems. But the mv command doesn’t support renaming multiple files at once. Worry not. In this tutorial, we are going to learn to rename multiple files at once using the **mmv** command in Linux. This command is used to move, copy, append and rename files in bulk using standard wildcards in Unix-like operating systems.

### Rename Multiple Files At Once In Linux

The mmv utility is available in the default repositories of Debian-based systems. To install it on Debian, Ubuntu, Linux Mint, run the following command:
```
$ sudo apt-get install mmv

```

Let us say you have the following files in your current directory.
```
$ ls
a1.txt a2.txt a3.txt

```

Now you want to rename all files that start with the letter “a” to “b”. Of course, you could do this manually in a few seconds. But what if you have hundreds of files to rename? It is quite a time-consuming process. Here is where the **mmv** command comes in handy.

To rename all files starting with the letter “a” to “b”, simply run:
```
$ mmv a\* b\#1

```

Let us check if the files have been renamed or not.
```
$ ls
b1.txt b2.txt b3.txt

```

As you can see, all files starting with the letter “a” (i.e., a1.txt, a2.txt, a3.txt) have been renamed to b1.txt, b2.txt, b3.txt.
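If mmv happens to be unavailable, this particular rename can also be approximated with a plain shell loop (a sketch only; it lacks mmv’s collision detection):

```shell
# Strip the leading "a" from each matching name and prepend "b".
for f in a*.txt; do
  mv "$f" "b${f#a}"
done
```
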

**Explanation**

In the above example, the first parameter (a\\*) is the ‘from’ pattern and the second parameter (b\\#1) is the ‘to’ pattern. As per the above example, mmv will look for any filenames starting with the letter ‘a’ and rename the matched files according to the second parameter, i.e., the ‘to’ pattern. We use wildcards, such as ‘*’, ‘?’ and ‘[]’, to match one or more arbitrary characters. Please be mindful that you must escape the wildcard characters, otherwise they will be expanded by the shell and mmv won’t understand them.

The ‘#1’ in the ‘to’ pattern is a wildcard index. It matches the first wildcard found in the ‘from’ pattern. A ‘#2’ in the ‘to’ pattern would match the second wildcard, and so on. In our example, we have only one wildcard (the asterisk), so we write #1. The hash sign should be escaped as well. Also, you can enclose the patterns in quotes.

You can even rename all files with a certain extension to a different extension. For example, to rename all **.txt** files to the **.doc** file format in the current directory, simply run:
```
$ mmv \*.txt \#1.doc

```

Here is another example. Let us say you have the following files.
```
$ ls
abcd1.txt abcd2.txt abcd3.txt

```

You want to replace the first occurrence of **abc** with **xyz** in all files in the current directory. How would you do it?

Simple.
```
$ mmv '*abc*' '#1xyz#2'

```

Please note that in the above example, I have enclosed the patterns in single quotes.

Let us check if “abc” has actually been replaced with “xyz” or not.
```
$ ls
xyzd1.txt xyzd2.txt xyzd3.txt

```

See? The files **abcd1.txt**, **abcd2.txt**, and **abcd3.txt** have been renamed to **xyzd1.txt**, **xyzd2.txt**, and **xyzd3.txt**.

Another notable feature of the mmv command is that you can just print the output instead of renaming the files, using the **-n** option as shown below.
+``` +$ mmv -n a\* b\#1 +a1.txt -> b1.txt +a2.txt -> b2.txt +a3.txt -> b3.txt + +``` + +This way you can simply verify what mmv command would actually do before renaming the files. + +For more details, refer man pages. +``` +$ man mmv + +``` + +**Update:** + +The **Thunar file manager** has built-in **bulk rename** option by default. If you’re using thunar, it much easier to rename files than using mmv command. + +Thunar is available in the default repositories of most Linux distributions. + +To install it on Arch-based systems, run: +``` +$ sudo pacman -S thunar + +``` + +On RHEL, CentOS: +``` +$ sudo yum install thunar + +``` + +On Fedora: +``` +$ sudo dnf install thunar + +``` + +On openSUSE: +``` +$ sudo zypper install thunar + +``` + +On Debian, Ubuntu, Linux Mint: +``` +$ sudo apt-get install thunar + +``` + +Once installed, you can launch bulk rename utility from menu or from the application launcher. To launch it from Terminal, use the following command: +``` +$ thunar -B + +``` + +This is how bulk rename looks like. + +[![][1]][2] + +Click the plus sign and choose the list of files you want to rename. Bulk rename can rename the name of the files, the suffix of the files or both the name and the suffix of the files. Thunar currently supports the following Bulk Renamers: + + * Insert Date or Time + + * Insert or Overwrite + + * Numbering + + * Remove Characters + + * Search & Replace + + * Uppercase / Lowercase + + + + +When you select one of these criteria from the picklist, you will see a preview of your changes in the New Name column, as shown in the below screenshot. + +![][3] + +Once you choose the criteria, click on **Rename Files** option to rename the files. + +You can also open bulk renamer from within Thunar by selecting two or more files. After choosing the files, press F2 or right click and choose **Rename**. + +And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-rename-multiple-files-at-once-in-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2018/06/bulk-rename.png +[3]:http://www.ostechnix.com/wp-content/uploads/2018/06/bulk-rename-1.png diff --git a/sources/tech/20180615 How to Mount and Use an exFAT Drive on Ubuntu Linux.md b/sources/tech/20180615 How to Mount and Use an exFAT Drive on Ubuntu Linux.md new file mode 100644 index 0000000000..cc51937703 --- /dev/null +++ b/sources/tech/20180615 How to Mount and Use an exFAT Drive on Ubuntu Linux.md @@ -0,0 +1,56 @@ +How to Mount and Use an exFAT Drive on Ubuntu Linux +====== +**Brief: This quick tutorial shows you how to enable exFAT file system support on Ubuntu and other Ubuntu-based Linux distributions. This way you won’t see any error while mounting exFAT drives on your system.** + +### Problem mounting exFAT disk on Ubuntu + +The other day, I tried to use an external USB key formatted in exFAT format that contained a file of around 10 GB in size. As soon as I plugged the USB key, my Ubuntu 16.04 throw an error complaining that it **cannot mount unknown filesystem type ‘exfat’**. 
+
+![Fix exfat drive mount error on Ubuntu Linux][1]
+
+The exact error message was this:
+**Error mounting /dev/sdb1 at /media/abhishek/SHADI DATA: Command-line `mount -t “exfat” -o “uhelper=udisks2,nodev,nosuid,uid=1001,gid=1001,iocharset=utf8,namecase=0,errors=remount-ro,umask=0077” “/dev/sdb1” “/media/abhishek/SHADI DATA”‘ exited with non-zero exit status 32: mount: unknown filesystem type ‘exfat’**
+
+### The reason behind this exFAT mount error
+
+Microsoft’s favorite [FAT file system][2] is limited to files up to 4 GB in size. You cannot transfer a file bigger than 4 GB to a FAT drive. To overcome the limitations of the FAT filesystem, Microsoft introduced the [exFAT][3] file system in 2006.
+
+Like most Microsoft-related technology, the exFAT file format is proprietary. Ubuntu and many other Linux distributions don’t provide proprietary exFAT support by default. This is why you see the mount error with exFAT drives.
+
+### How to mount exFAT drive on Ubuntu Linux
+
+![Fix exFAT mount error on Ubuntu Linux][4]
+
+The solution to this problem is simple. All you need to do is enable exFAT support.
+
+I am going to show the commands for Ubuntu, but this should be applicable to other Ubuntu-based distributions such as [Linux Mint][5], elementary OS etc.
+
+Open a terminal (Ctrl+Alt+T shortcut in Ubuntu) and use the following command:
+```
+sudo apt install exfat-fuse exfat-utils

+```
+
+Once you have installed these packages, go to the file manager and click on the USB disk again to mount it. There is no need to replug the USB. It should be mounted straightaway.
+
+#### Did it help you?
+
+I hope this quick tip helped you fix the exFAT mount error on your Linux distribution. If you have any further questions, suggestions or a simple thanks, please use the comment box below.
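As a quick sanity check after installing the packages, you can look for the user-space mount helper they provide before replugging the drive. This is only a sketch; the helper names `mount.exfat`/`mount.exfat-fuse` are what the exfat-fuse package typically ships, which is an assumption here:

```shell
# Sketch: check whether an exFAT mount helper is available in PATH.
if command -v mount.exfat >/dev/null 2>&1 || command -v mount.exfat-fuse >/dev/null 2>&1; then
    echo "exFAT support looks ready"
else
    echo "no exFAT helper found - install exfat-fuse and exfat-utils first"
fi
```

If the helper is present, the file manager (and a manual `mount -t exfat …`) should be able to mount the drive.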
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/mount-exfat/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/exfat-mount-error-linux.jpeg +[2]:http://www.ntfs.com/fat-systems.htm +[3]:https://en.wikipedia.org/wiki/ExFAT +[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/exfat-mount-error-featured-800x450.jpeg +[5]:https://linuxmint.com/ diff --git a/sources/tech/20180618 5 open source alternatives to Dropbox.md b/sources/tech/20180618 5 open source alternatives to Dropbox.md new file mode 100644 index 0000000000..d94b4537aa --- /dev/null +++ b/sources/tech/20180618 5 open source alternatives to Dropbox.md @@ -0,0 +1,122 @@ +5 open source alternatives to Dropbox +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dropbox.jpg?itok=qFwcqboT) + +Dropbox is the 800-pound gorilla of filesharing applications. Even though it's a massively popular tool, you may choose to use an alternative. + +Maybe that's because you're dedicated to the [open source way][1] for all the good reasons, including security and freedom, or possibly you've been spooked by data breaches. Or perhaps the pricing plan doesn't work out in your favor for the amount of storage you actually need. + +Fortunately, there are a variety of open source filesharing applications out there that give you more storage, security, and control over your data at a far lower price than Dropbox charges. How much lower? Try free, if you're a bit tech savvy and have a Linux server to use. 
+
+Here are five of the best open source alternatives to Dropbox, plus a few others that you might want to consider.
+
+### ownCloud
+
+![](https://opensource.com/sites/default/files/uploads/owncloud.png)
+
+[ownCloud][2], launched in 2010, is the oldest application on this list, but don't let that fool you: It's still very popular (with over 1.5 million users, according to the company) and actively maintained by a community of 1,100 contributors, with updates released regularly.
+
+Its primary features—file and folder sharing, document collaboration—are similar to Dropbox's. Its primary difference (aside from its [open source license][3]) is that your files are hosted on your private Linux server or cloud, giving you complete control over your data. (Self-hosting is a common thread among the apps on this list.)
+
+With ownCloud, you can sync and access files through clients for Linux, MacOS, or Windows computers or mobile apps for Android and iOS devices, and provide password-protected links to others for collaboration or file upload/download. Data transfers are secured by end-to-end encryption (E2EE) and SSL encryption. You can also expand its functionality with a wide variety of third-party apps available in its [marketplace][4], and there is also a paid, commercially licensed enterprise edition.
+
+ownCloud offers comprehensive [documentation][5], including an installation guide and manuals for users, admins, and developers, and you can access its [source code][6] in its GitHub repository.
+
+### NextCloud
+
+![](https://opensource.com/sites/default/files/uploads/nextcloud.png)
+
+[NextCloud][7] spun out of ownCloud in 2016 and shares much of the same functionality. Nextcloud [touts][8] its high security and regulatory compliance as a distinguishing feature. It has HIPAA (healthcare) and GDPR (privacy) compliance features and offers extensive data-policy enforcement, encryption, user management, and auditing capabilities.
It also encrypts data during transfer and at rest and integrates with mobile device management and authentication mechanisms (including LDAP/AD, single sign-on, two-factor authentication, etc.).
+
+Like the other solutions on this list, NextCloud is self-hosted, but if you don't want to roll your own NextCloud server on Linux, the company partners with several [providers][9] for setup and hosting and sells servers, appliances, and support. A [marketplace][10] offers numerous apps to extend its features.
+
+NextCloud's [documentation][11] page offers thorough information for users, admins, and developers as well as links to its forums, IRC channel, and social media pages for community-based support. If you'd like to contribute, access its source code, report a bug, check out its (AGPLv3) license, or just learn more, visit the project's [GitHub repository][12].
+
+### Seafile
+
+![](https://opensource.com/sites/default/files/uploads/seafile.png)
+
+[Seafile][13] may not have the bells and whistles (or app ecosystem) of ownCloud or Nextcloud, but it gets the job done. Essentially, it acts as a virtual drive on your Linux server to extend your desktop storage and allow you to share files selectively with password protection and various levels of permission (i.e., read-only or read/write).
+
+Its collaboration features include per-folder access control, password-protected download links, and Git-like version control and retention. Files are secured with two-factor authentication, file encryption, and AD/LDAP integration, and they're accessible from Windows, MacOS, Linux, iOS, or Android devices.
+
+For more information, visit Seafile's [GitHub repository][14], [server manual][15], [wiki][16], and [forums][17]. Note that Seafile's community edition is licensed under [GPLv2][18], but its professional edition is not open source.
+ +### OnionShare + +![](https://opensource.com/sites/default/files/uploads/onionshare.png) + +[OnionShare][19] is a cool app that does one thing: It allows you to share individual files or folders securely and, if you want, anonymously. There's no server to set up or maintain—all you need to do is [download and install][20] the app on MacOS, Windows, or Linux. Files are always hosted on your own computer; when you share a file, OnionShare creates a web server, makes it accessible as a Tor Onion service, and generates an unguessable .onion URL that allows the recipient to access the file via [Tor browser][21]. + +You can set limits on your fileshare, such as limiting the number of times it can be downloaded or using an auto-stop timer, which sets a strict expiration date/time after which the file is inaccessible (even if it hasn't been accessed yet). + +OnionShare is licensed under [GPLv3][22]; for more information, check out its GitHub [repository][22], which also includes [documentation][23] that covers the features in this easy-to-use filesharing application. + +### Pydio Cells + +![](https://opensource.com/sites/default/files/uploads/pydiochat.png) + +[Pydio Cells][24], which achieved stability in May 2018, is a complete overhaul of the Pydio filesharing application's core server code. Due to limitations with Pydio's PHP-based backend, the developers decided to rewrite the backend in the Go server language with a microservices architecture. (The frontend is still based on PHP.) + +Pydio Cells includes the usual filesharing and version control features, as well as in-app messaging, mobile apps (Android and iOS), and a social network-style approach to collaboration. Security includes OpenID Connect-based authentication, encryption at rest, security policies, and more. Advanced features are included in the enterprise distribution, but there's plenty of power for most small and midsize businesses and home users in the community (or "Home") version. 
+
+You can [download][25] Pydio Cells for Linux and MacOS. For more information, check out the [documentation FAQ][26], [source code][27] repository, and [AGPLv3 license][28].
+
+### Others to consider
+
+If these choices don't meet your needs, you may want to consider these open source filesharing-type applications.
+
+  * If your main goal is to sync files between devices, rather than to share files, check out [Syncthing][29].
+  * If you're a Git fan and don't need a mobile app, you might appreciate [SparkleShare][30].
+  * If you primarily want a place to aggregate all your personal data, take a look at [Cozy][31].
+  * And, if you're looking for a lightweight or dedicated filesharing tool, peruse [Scott Nesbitt's review][32] of some lesser-known options.
+
+
+
+What is your favorite open source filesharing application? Let us know in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/alternatives/dropbox
+
+作者:[Opensource.com][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com
+[1]:https://opensource.com/open-source-way
+[2]:https://owncloud.org/
+[3]:https://www.gnu.org/licenses/agpl-3.0.html
+[4]:https://marketplace.owncloud.com/
+[5]:https://doc.owncloud.com/
+[6]:https://github.com/owncloud
+[7]:https://nextcloud.com/
+[8]:https://nextcloud.com/secure/
+[9]:https://nextcloud.com/providers/
+[10]:https://apps.nextcloud.com/
+[11]:https://nextcloud.com/support/
+[12]:https://github.com/nextcloud
+[13]:https://www.seafile.com/en/home/
+[14]:https://github.com/haiwen/seafile
+[15]:https://manual.seafile.com/
+[16]:https://seacloud.cc/group/3/wiki/
+[17]:https://forum.seafile.com/
+[18]:https://github.com/haiwen/seafile/blob/master/LICENSE.txt
+[19]:https://onionshare.org/
+[20]:https://onionshare.org/#downloads
+[21]:https://www.torproject.org/
+[22]:https://github.com/micahflee/onionshare/blob/develop/LICENSE
+[23]:https://github.com/micahflee/onionshare/wiki
+[24]:https://pydio.com/en
+[25]:https://pydio.com/download/
+[26]:https://pydio.com/en/docs/faq
+[27]:https://github.com/pydio/cells
+[28]:https://github.com/pydio/pydio-core/blob/develop/LICENSE
+[29]:https://syncthing.net/
+[30]:http://www.sparkleshare.org/
+[31]:https://cozy.io/en/
+[32]:https://opensource.com/article/17/3/file-sharing-tools
diff --git a/sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md b/sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md
new file mode 100644
index 0000000000..04644aebb2
--- /dev/null
+++ b/sources/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md
@@ -0,0 +1,154 @@
+What's all the C Plus Fuss? Bjarne Stroustrup warns of dangerous future plans for his C++
+======
+
+![](https://regmedia.co.uk/2018/06/15/shutterstock_38621860.jpg?x=442&y=293&crop=1)
+
+**Interview** Earlier this year, Bjarne Stroustrup, creator of C++, managing director in the technology division of Morgan Stanley, and a visiting professor of computer science at Columbia University in the US, wrote [a letter][1] inviting those overseeing the evolution of the programming language to “Remember the Vasa!”
+
+Easy for a Dane to understand, no doubt, but perhaps more of a stretch for those with a few gaps in their knowledge of 17th century Scandinavian history. The Vasa was a Swedish warship, commissioned by King Gustavus Adolphus. It was the most powerful warship in the Baltic Sea from its maiden voyage on August 10, 1628, until a few minutes later, when it sank.
+
+The formidable Vasa suffered from a design flaw: it was top-heavy, so much so that it was [undone by a gust of wind][2].
By invoking the memory of the capsized ship, Stroustrup served up a cautionary tale about the risks facing C++ as more and more features get added to the language.
+
+Quite a few such features have been suggested. Stroustrup cited 43 proposals in his letter. He contends that those participating in the evolution of the ISO standard language, a group known as [WG21][3], are working to advance the language but not together.
+
+In his letter, he wrote:
+
+>Individually, many proposals make sense. Together they are insanity to the point of endangering the future of C++.
+
+He makes clear that he doesn’t interpret the fate of the Vasa to mean that incremental improvements spell doom. Rather, he takes it as a lesson to build a solid foundation, to learn from experience and to test thoroughly.
+
+With the recent conclusion of the C++ Standardization Committee Meeting in Rapperswil, Switzerland, earlier this month, Stroustrup addressed a few questions put to him by _The Register_ about what's next for the language. (The most recent version is C++17, which arrived last year; the next version, C++20, is under development and expected in 2020.)
+
+**_Register:_ In your note, Remember the Vasa!, you wrote:**
+
+>The foundation begun in C++11 is not yet complete, and C++17 did little to make our foundation more solid, regular, and complete. Instead, it added significant surface complexity and increased the number of features people need to learn. C++ could crumble under the weight of these – mostly not quite fully-baked – proposals. We should not spend most of our time creating increasingly complicated facilities for experts, such as ourselves.
+
+**Is C++ too challenging for newcomers, and if so, what features do you believe would make the language more accessible?**
+
+_**Stroustrup:**_ Some parts of C++ are too challenging for newcomers.
+
+On the other hand, there are parts of C++ that make it far more accessible to newcomers than C or 1990s C++.
The difficulty is to get the larger community to focus on those parts and help beginners and casual C++ users to avoid the parts that are there to support implementers of advanced libraries.
+
+I recommend the [C++ Core Guidelines][4] as an aid for that.
+
+Also, my “A Tour of C++” can help people get on the right track with modern C++ without getting lost in 1990s complexities or ensnared by modern facilities meant for expert use. The second edition of “A Tour of C++” covering C++17 and parts of C++20 is on its way to the stores.
+
+I and others have taught C++ to first-year university students with no previous programming experience in three months. It can be done as long as you don’t try to dig into every obscure corner of the language and focus on modern C++.
+
+“Making simple things simple” is a long-term goal of mine. Consider the C++11 range-for loop:
+```
+for (int& x : v) ++x; // increment each element of the container v
+
+```
+
+where v can be just about any container. In C and C-style C++, that might look like this:
+```
+for (int i=0; i<v.size(); ++i) ++v[i]; // increment each element of the container v
+
+```
+
+本书引导你编写刽子手Hangman,猜数字,井字游戏Tic-Tac-Toe 这样的经典游戏,后续更进一步编写高级一些的游戏,例如文字版寻宝游戏,以及带音效和动画的碰撞与闪避collision-dodging游戏。([Joshua Allen Holm][4] 推荐,评论节选于本书的内容提要)
+
+[Lauren Ipsum:关于计算机科学和一些不可思议事物的故事][12],作者 Carlos Bueno
+
+本书采用爱丽丝梦游仙境的风格,女主角 Lauren Ipsum 来到一个稍微具有魔幻色彩的世界。这个世界的自然法则是逻辑学和计算机科学,世界中的谜题只能通过学习计算机编程原理并编写代码解开。书中没有提及计算机,但其作为世界的核心存在。([DB Clinton][6] 推荐并评论)
+
+[Java 轻松学][13],作者 Bryson Payne
+
+Java 是全世界最流行的编程语言,但众所周知上手比较难。本书让 Java 学习不再困难,通过若干实操项目,让你马上学会构建真实可运行的应用。([Joshua Allen Holm][4] 推荐,评论节选于本书的内容提要)
+
+[终身幼儿园][14],作者 Mitchell Resnick
+
+幼儿园正变得越来越像学校。在本书中,学习专家 Mitchel Resnick 提出相反的看法:学校(甚至人的一生)应该更像幼儿园。要适应当今快速变化的世界,各个年龄段的人们都必须学会开创性思维和行动;想要达到这个目标,最好的方式就是更加专注于想象、创造、玩耍、分享和反馈,就像孩子在传统幼儿园中那样。基于在 MIT 媒体实验室Media Lab 30 多年的经历,Resnick 讨论了新的技术和策略,可以让年轻人拥有开创性的学习体验。([Don Watkins][9] 推荐,评论来自 Amazon 书评)
+
+[趣学 Python:教孩子学编程][15],作者 Jason Briggs
+
+在本书中,Jason Briggs 将 Python
编程教学艺术提升到新的高度。我们可以很容易地将本书用作入门书,适用群体可以是教师/学生,也可以是父母/儿童。通过一步步引导的方式介绍复杂概念,让编程新手也可以成功完成,进一步激发学习欲望。本书是一本极为易读、寓教于乐但又强大的 Python 编程入门书。读者将学习基础数据结构,包括元组tuples、列表lists、映射maps等,学习如何创建函数、重用代码或者使用包括循环和条件语句在内的控制结构。孩子们还将学习如何创建游戏和动画,体验 Tkinter 的强大并创建高级图形。([Don Watkins][9] 推荐并评论)
+
+[Scratch 编程园地][16],作者 Al Sweigart
+
+Scratch 编程一般被视为一种寓教于乐的教年轻人编程的方式。在本书中,Al Sweigart 告诉我们 Scratch 是一种超出绝大多数人想象的强大编程语言。独特的风格,大师级的编写和呈现。Al 让孩子通过创造复杂图形和动画,短时间内认识到 Scratch 的强大之处。([Don Watkins][9] 推荐并评论)
+
+[秘密编程者][17],作者 Gene Luen Yang,插图作者 Mike Holmes
+
+Gene Luen Yang 是漫画小说超级巨星,也是一所高中的计算机编程教师。他推出了一个非常有趣的系列作品,将逻辑谜题、基础编程指令与引人入胜的解谜情节结合起来。故事发生在 Stately Academy 这所学校,其中充满有待解开的谜团。([Joshua Allen Holm][4] 推荐,评论节选于本书的内容提要)
+
+[想成为编程者吗?编程、视频游戏制作、机器人等职业终极指南!][18],作者 Jane Bedell
+
+酷爱编程?这本书易于理解,描绘了以编程为生的完整图景,激发你的热情,磨练你的专业技能。([Joshua Allen Holm][4] 推荐,评论节选于本书的内容提要)
+
+[教孩子编程][19],作者 Bryson Payne
+
+你是否在寻找一种寓教于乐的方式教你的孩子 Python 编程呢?Bryson Payne 已经写了这本大师级的书。本书通过乌龟图形打比方,引导你编写一些简单程序,为高级 Python 编程打下基础。如果你打算教年轻人编程,这本书不容错过。([Don Watkins][9] 推荐并评论)
+
+[图解 Kubernetes(儿童版)][20],作者 Matt Butcher,插画作者 Bailey Beougher
+
+介绍了 Phippy 这个勇敢的 PHP 小应用及其 Kubernetes 之旅。([Chris Short][21] 推荐,评论来自 [Matt Butcher 的博客][20])
+
+### 给宝宝的福利书
+
+[宝宝的 CSS][22]、[宝宝的 Javascript][23]、[宝宝的 HTML][24],作者 Sterling Children's
+
+这本概念书让宝宝熟悉图形和颜色的种类,这些是互联网编程语言的基石。这本漂亮的书用富有色彩的方式介绍了编程和互联网,对于技术人士的家庭而言,本书是一份绝佳的礼物。([Chris Short][21] 推荐,评论来自 Amazon 书评)
+
+你是否有想要分享的适合宝宝或儿童的书呢?请在评论区留言告诉我们。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/5/books-kids-linux-open-source
+
+作者:[Jen Wike Huger][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[pinewall](https://github.com/pinewall)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/remyd
+[1]:https://opensource.com/resources/linux
+[2]:https://opensource.com/article/18/3/what-open-source-programming
+[3]:https://www.amazon.com/Adventures-Raspberry-Carrie-Anne-Philbin/dp/1119046025 +[4]:https://opensource.com/users/holmja +[5]:https://automatetheboringstuff.com/ +[6]:https://opensource.com/users/dbclinton +[7]:https://www.goodreads.com/book/show/25733628-coding-games-in-scratch +[8]:https://nostarch.com/doingmathwithpython +[9]:https://opensource.com/users/don-watkins +[10]:https://www.amazon.com/Girls-Who-Code-Learn-Change/dp/042528753X +[11]:http://inventwithpython.com/invent4thed/ +[12]:https://www.amazon.com/gp/product/1593275749/ref=as_li_tl?ie=UTF8&tag=projemun-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=1593275749&linkId=e05e1f12176c4959cc1aa1a050908c4a +[13]:https://nostarch.com/learnjava +[14]:http://lifelongkindergarten.net/ +[15]:https://nostarch.com/pythonforkids +[16]:https://nostarch.com/scratchplayground +[17]:http://www.secret-coders.com/ +[18]:https://www.amazon.com/So-You-Want-Coder-Programming/dp/1582705798?tag=ad-backfill-amzn-no-or-one-good-20 +[19]:https://opensource.com/education/15/9/review-bryson-payne-teach-your-kids-code +[20]:https://deis.com/blog/2016/kubernetes-illustrated-guide/ +[21]:https://opensource.com/users/chrisshort +[22]:https://www.amazon.com/CSS-Babies-Code-Sterling-Childrens/dp/1454921560/ +[23]:https://www.amazon.com/Javascript-Babies-Code-Sterling-Childrens/dp/1454921579/ +[24]:https://www.amazon.com/HTML-Babies-Code-Sterling-Childrens/dp/1454921552 diff --git a/translated/tech/20180611 Turn Your Raspberry Pi into a Tor Relay Node.md b/translated/tech/20180611 Turn Your Raspberry Pi into a Tor Relay Node.md new file mode 100644 index 0000000000..53296cd8c3 --- /dev/null +++ b/translated/tech/20180611 Turn Your Raspberry Pi into a Tor Relay Node.md @@ -0,0 +1,133 @@ +将你的树莓派打造成一个 Tor 中继节点 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tor-onion-router.jpg?itok=6WUl0ElH) + +你是否和我一样,在第一代或者第二代树莓派发布时买了一个,玩了一段时间就把它搁置“吃灰”了。毕竟,除非你是机器人爱好者,否则一般不太可能去长时间使用一个处理器很慢的并且内存只有 256 MB 
的计算机。这并不是说你不能用它去做一件很酷的东西,但是在工作和其它任务之间,我还没有看到用一些旧的物件发挥新作用的机会。
+
+然而,如果你想去好好利用它并且不想花费你太多的时间和资源的话,可以将你的旧树莓派打造成一个完美的 Tor 中继节点。
+
+### Tor 中继节点是什么
+
+在此之前你或许听说过 [Tor 项目][1],如果恰好你没有听说过,我简单给你介绍一下,“Tor” 是 “The Onion Router(洋葱路由器)” 的缩写,它是用来对付在线追踪和其它违反隐私行为的技术。
+
+不论你在因特网上做什么事情,都会在你的 IP 包通过的设备上留下一些数字“脚印”:所有的交换机、路由器、负载均衡,以及目标网络记录的来自你的原始会话的 IP 地址,以及你访问的因特网资源(经常是主机名、[甚至是在使用 HTTPS 时][2])的 IP 地址。如果你是在家中上因特网,那么你的 IP 地址可以直接映射到你的家庭所在地。如果你使用了 VPN 服务([你应该使用][3]),那么你的 IP 地址是映射到你的 VPN 提供商那里,而 VPN 提供商是可以映射到你的家庭所在地的。无论如何,有可能在某个地方的某个人正在根据你访问的网络和在网站上呆了多长时间来为你建立一个个人的在线资料。然后将这个资料进行出售,并与从其它服务上收集的资料进行聚合,然后利用广告网络进行赚钱。至少,这是乐观主义者对如何利用这些数据的一些看法 —— 我相信你还可以找到更多的更恶意地使用这些数据的例子。
+
+Tor 项目尝试去提供一个解决这种问题的方案,使他们不可能(或者至少是更加困难)追踪到你的终端 IP 地址。Tor 是通过让你的连接在一个由匿名的入口节点、中继节点、和出口节点组成的匿名中继链上反复跳转的方式来实现防止追踪的目的:
+
+ 1. **入口节点** 只知道你的 IP 地址和中继节点的 IP 地址,但是不知道你最终要访问的目标 IP 地址
+
+ 2. **中继节点** 只知道入口节点和出口节点的 IP 地址,以及既不是源也不是最终目标的 IP 地址
+
+ 3. **出口节点** 仅知道中继节点和最终目标地址,它是在到达最终目标地址之前解密流量的节点
+
+
+
+
+中继节点在这个交换过程中扮演一个关键的角色,因为它在源请求和目标地址之间创建了一个加密的障碍。甚至在意图偷窥你数据的对手控制了出口节点的情况下,在他们没有完全控制整个 Tor 中继链的情况下仍然无法知道请求源在哪里。
+
+只要存在大量的中继节点,你的隐私就会得到保护 —— 这就是我为什么真诚地建议你,如果你的家庭宽带有空闲的时候去配置和运行一个中继节点。
+
+#### 考虑去做 Tor 中继时要记住的一些事情
+
+一个 Tor 中继节点仅发送和接收加密流量 —— 它从不访问任何其它站点或者在线资源,因此你不用担心有人会利用你的家庭 IP 地址去直接浏览一些令人担心的站点。话虽如此,但是如果你居住在一个提供匿名增强服务(anonymity-enhancing services)是违法行为的司法管辖区的话,那么你还是不要运营你的 Tor 中继节点了。你还需要去查看你的因特网服务提供商的服务条款是否允许你去运营一个 Tor 中继。
+
+### 需要哪些东西
+
+  * 一个带完整外围附件的树莓派(任何型号/代次都行)
+
+  * 一张有 [Raspbian Stretch Lite][4] 的 SD 卡
+
+  * 一根以太网线缆
+
+  * 一根用于供电的 micro-USB 线缆
+
+  * 一个键盘和带 HDMI 接口的显示器(在配置期间使用)
+
+
+
+
+本指南假设你已经配置好了你的家庭网络连接的线缆或者 ADSL 路由器,它用于运行 NAT 转换(它几乎是必需的)。大多数型号的树莓派都有一个可用于为树莓派供电的 USB 端口,如果你只是使用路由器的 WiFi 功能,那么路由器应该有空闲的以太网口。但是在我们将树莓派设置为一个“配置完不管”的 Tor 中继之前,我们还需要一个键盘和显示器。
+
+### 引导脚本
+
+我改编了一个很流行的 Tor 中继节点引导脚本以适配在树莓派上使用 —— 你可以在我的 GitHub 仓库上找到它。你用它引导树莓派并使用缺省的用户 “pi” 登入之后,做如下的工作:
+```
+sudo apt-get install -y git
+git clone https://github.com/mricon/tor-relay-bootstrap-rpi
+cd tor-relay-bootstrap-rpi
+sudo ./bootstrap.sh

+```
+
+这个脚本将做如下的工作:
+
+ 1. 
安装最新版本的操作系统更新以确保树莓派打了所有的补丁
+
+ 2. 将系统配置为无人值守自动更新,以确保有可用更新时会自动接收并安装
+
+ 3. 安装 Tor 软件
+
+ 4. 告诉你的 NAT 路由器去转发所需要的端口(端口一般是 443 和 8080,因为这两个端口最不可能被因特网提供商过滤掉)上的数据包到你的中继节点
+
+
+
+
+脚本运行完成后,你需要去配置 torrc 文件 —— 但是首先,你需要决定打算贡献给 Tor 流量多大带宽。首先,在 Google 中输入 “[Speed Test][5]”,然后点击 “Run Speed Test” 按钮。你可以不用管 “Download speed” 的结果,因为你的 Tor 中继能处理的速度不会超过最大的上行带宽。
+
+所以,将 “Mbps upload” 的数字除以 8,然后再乘以 1024,结果就是每秒多少 KB 的带宽速度。比如,如果你得到的上行带宽是 21.5 Mbps,那么这个数字应该是:
+```
+21.5 Mbps / 8 * 1024 = 2752 KBytes per second

+```
+
+你可以限制你的中继带宽为那个数字的一半,并允许突发带宽为那个数字的四分之三。确定好之后,使用喜欢的文本编辑器打开 /etc/tor/torrc 文件,调整好带宽设置。
+```
+RelayBandwidthRate 1300 KBytes
+RelayBandwidthBurst 2400 KBytes

+```
+
+当然,如果你想更慷慨,你可以将那几个设置的数字调得更大,但是尽量不要设置为最大的出口带宽 —— 如果设置得太高,它会影响你的日常使用。
+
+你打开那个文件之后,你应该去设置更多的东西。首先是昵称 —— 只是为了你自己保存记录;第二个是联系信息,只需要一个电子邮件地址。由于你的中继是运行在无人值守模式下的,你应该使用一个定期检查的电子邮件地址 —— 如果你的中继节点离线超过 48 个小时,你将收到 “Tor Weather” 服务的告警信息。
+```
+Nickname myrpirelay
+ContactInfo you@example.com

+```
+
+保存文件并重启系统以启动 Tor 中继。
+
+### 测试它确认有 Tor 流量通过
+
+如果你想去确认中继节点的功能,你可以运行 “arm” 工具:
+```
+sudo -u debian-tor arm

+```
+
+它需要一点时间才会显示,尤其是在较老的板子上。它通常会给你显示一个表示入站和出站流量(或者是错误信息,它将有助于你去排错)的柱状图。
+
+一旦你确信它运行正常,就可以将键盘和显示器拔掉了,然后将树莓派放到地下室,它就可以在那里悄悄地呆着并到处转发加密的比特了。恭喜你,你已经在帮助改善隐私和防范在线的恶意跟踪了!
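上面的带宽换算也可以用一小段 shell 脚本来完成(仅为示意;假设你以 Mbps 为单位输入上行带宽,并按文中“一半/四分之三”的比例生成 torrc 建议值):

```shell
# 将上行带宽(Mbps)换算为 torrc 使用的 KBytes/s:Mbps / 8 * 1024
# 以文中的 21.5 Mbps 为例
upload_mbps=21.5
rate=$(awk -v m="$upload_mbps" 'BEGIN { printf "%d", m / 8 * 1024 }')
echo "每秒 ${rate} KB"
echo "RelayBandwidthRate $((rate / 2)) KBytes"
echo "RelayBandwidthBurst $((rate * 3 / 4)) KBytes"
```

脚本输出的是精确值;文中 torrc 示例里的 1300/2400 是在此基础上凑整后的结果。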
通过来自 Linux 基金会和 edX 的免费课程 [“Linux 入门”][6] 来学习更多的 Linux 知识。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/intro-to-linux/2018/6/turn-your-raspberry-pi-tor-relay-node
+
+作者:[Konstantin Ryabitsev][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/mricon
+[1]:https://www.torproject.org/
+[2]:https://en.wikipedia.org/wiki/Server_Name_Indication#Security_implications
+[3]:https://www.linux.com/blog/2017/10/tips-secure-your-network-wake-krack
+[4]:https://www.raspberrypi.org/downloads/raspbian/
+[5]:https://www.google.com/search?q=speed+test
+[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux