12 fiction books for Linux and open source types
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/book_list_fiction_sand_vacation_read.jpg?itok=IViIZu8J)

For this book list, I reached out to our writer community to ask which fiction books they would recommend to their peers. What I love about this question, and the answers that follow, is that the list gives us a deeper look into the writers' personalities. Fiction favorites are unlike non-fiction recommendations: your technical skills and interests may influence what you like to read, but it's much more your personality and life experiences that draw you to pick out, and love, a particular fiction book.

These people are your people. I hope you find something interesting to add to your reading list.

**[Ancillary Justice][1] by Ann Leckie**

Open source is all about how one individual can start a movement. Somehow, at the same time, it's about the power of a voluntary collective moving together toward a common goal. Ancillary Justice makes you ponder both concepts.

This book is narrated by Breq, an "ancillary": an enslaved human body that was grafted into the soul of a warship. When that warship was destroyed, Breq kept all the ship's memories and its identity but had to live in a single body instead of thousands. In spite of the huge change in her power, Breq has a cataclysmic influence on all around her, and she inspires both loyalty and love. She may have once been enslaved to an AI, but now that she is free, she is powerful. She learns to adapt to exercising her free will, and the decisions she makes change her and the world around her. Breq pushes for openness in the rigid Radch, the dominant society of the book. Her actions transform the Radch into something new.

Ancillary Justice is also about language, loyalty, sacrifice, and the disastrous effects of secrecy. Once you've read this book, you will never feel the same about what makes someone or something human. What makes you YOU? Can who you are really be destroyed while your body still lives?

Like the open source movement, Ancillary Justice makes you think and question the status quo of the novel and of the world around you. Read it. (Recommendation and review by [Ingrid Towey][2])

**[Cryptonomicon][3] by Neal Stephenson**

Set during WWII and the present day, or the near future at the time of writing, Cryptonomicon captures the excitement of a startup, the perils of war, community action against authority, and the perils of cryptography. It's a book to keep coming back to, as it has multiple layers and combines a techy outlook with intrigue and a decent love story. It does a good job of asking interesting questions like "is technology always an unbounded good?" and of making you realise that the people of yesterday were just as clever, and human, as we are today. (Recommendation and review by [Mike Bursell][4])

**[Daemon][5] by Daniel Suarez**

Daemon is the first in a two-part series that details the events that happen when a computer daemon (process) is awakened and wreaks havoc on the world. The story is an exciting thriller that borders on creepy due to the realism in how the technology is portrayed, and it outlines just how dependent we are on technology. (Recommendation and review by [Jay LaCroix][6])

**[Going Postal][7] by Terry Pratchett**

This book is a good read for Linux and open source enthusiasts because of the depth and relatability of its characters, its humor, and the unique outsider's perspective that runs through the book. Terry Pratchett books are like Jim Henson movies: fiercely creative, appealing to all, but especially to the maker, the tinkerer, the hacker, and those daring to dream.

The main character is a chancer, a fly-by-night who has never considered the results of their actions. They are not committed to anything and have never formed real (non-monetary) connections. The story follows the outcomes of their actions, a tale of redemption taking the protagonist on an out-of-control adventure. It's funny, edgy, and unfamiliar, much like the initial 1990s introduction to Linux was for me. (Recommendation and review by [Lewis Cowles][8])

**[Microserfs][9] by Douglas Coupland**

Anyone who lived through the dotcom bubble of the 1990s will identify with this heartwarming tale of a young group of Microsoft engineers who end up leaving the company for a startup, moving to Silicon Valley, and becoming each other's support through life, death, love, and loss.

There is a lot of humor to be found in this book, like this line: "This is my computer. There are many like it, but this one is mine..." The line riffs on the Rifleman's Creed: "This is my rifle. There are many like it..."

If you've ever spent 16 hours a day coding, while fueling yourself with Skittles and Mountain Dew, this story is for you. (Recommendation and review by [Jet Anderson][10])

**[Open Source][11] by M. M. Frick**

Casey Shenk is a vending-machine technician from Savannah, Georgia by day and a blogger by night. Casey's keen insights into the details of news reports, both true and false, lead him to unravel a global plot involving arms sales, the Middle East, Russia, Israel, and the highest levels of power in the United States. Casey connects the pieces using "open source intelligence," which is simply reading and analyzing information that is free and open to the public.

I bought this book because of the title, just as I was learning about open source three years ago. I thought this would be a book of open source fiction. Unfortunately, the book has nothing to do with open source as we define it. I had hoped that Casey would use some open source tools or methods in his investigation, such as Wireshark or Maltego, and write his posts with LibreOffice, WordPress, and such. However, "open source" simply refers to the fact that his sources are "open."

Although I was disappointed that this book was not what I expected, Frick, a Navy officer, packed the book with well-researched and interesting twists and turns. If you are looking for a book that involves Linux, command lines, GitHub, or any other open source elements, then this is not the book for you. (Recommendation and review by [Jeff Macharyas][12])

**[The Tao of Pooh][13] by Benjamin Hoff**

Linux and the open source ethos are a way of approaching life and getting things done that relies on both the individual and the collective goodwill of the community it serves. Leadership and service are earned through individual contribution and merit rather than through the arbitrary assignment of value in traditional hierarchies. This is the natural way of getting things done. The power of open source is its authentic gift of self to a community of developers and end users. Being part of such a community of developers and contributors invites each of us to share our unique gift with the wider world. In The Tao of Pooh, Hoff celebrates that unique gift of self, using the metaphor of Winnie the Pooh wed with Taoist philosophy. (Recommendation and review by [Don Watkins][14])

**[The Golem and the Jinni][15] by Helene Wecker**

The eponymous otherworldly beings accidentally find themselves in New York City in the early 1900s and have to restart their lives far from their homelands. It's rare to find a book with such an original premise, let alone one that can follow through with it so well and with such heart. (Recommendation and review by [VM Brasseur][16])

**[The Rise of the Meritocracy][17] by Michael Young**

Meritocracy—one of the most pervasive and controversial notions circulating in open source discourses—is, for some critics, nothing more than a quaint fiction. No surprise for them, then, that the term originated in fiction. Michael Young's dystopian science fiction novel introduced the term into popular culture in 1958; the eponymous concept characterizes a 2034 society entirely bent on rewarding the best, the brightest, and the most talented. "Today we frankly recognize that democracy can be no more than aspiration, and have rule not so much by the people as by the cleverest people," writes the book's narrator in this pseudo-sociological account of future history, "not an aristocracy of birth, not a plutocracy of wealth, but a true meritocracy of talent."

Would a truly meritocratic society work as intended? We might only imagine. Young's answer, anyway, has serious consequences for the fictional sociologist. (Recommendation and review by [Bryan Behrenshausen][18])

**[Throne of the Crescent Moon][19] by Saladin Ahmed**

The protagonist, Adoulla, is a man who just wants to retire from ghul hunting and settle down, but the world has other plans for him. Accompanied by his assistant and a vengeful young warrior, he sets off to end the ghul scourge and find their revenge. While it sounds like your typical fantasy romp, the Middle Eastern setting of the story sets it apart while the tight and skillful writing of Ahmed pulls you in. (Recommendation and review by [VM Brasseur][16])

**[Walkaway][20] by Cory Doctorow**

It's hard to approach this science fiction book because it's so different from other science fiction books. It's timely because, in an age of rage―one producing a seemingly endless parade of dystopia in fiction and in reality―this book is hopeful. We need hopeful things. Open source fans would like it because it is hopeful precisely thanks to open, shared technology. I don't want to give too much away, but let's just say this book exists in a world where advanced 3D printing is so mainstream (and old) that you can practically 3D print anything. The basic needs of Maslow's hierarchy are essentially taken care of, so you're left with human relationships.

"You wouldn't steal a car" turns into "you can fork a house or a city." This creates a present that can constantly be remade, so the attachment to things becomes practically unnecessary. Thus, people can―and do―just walk away. This wonderful (and complicated) future setting is the ever-present reality surrounding a group of characters, their complicated relationships, and a complex class struggle in a post-scarcity world.

Best book I've read in years. Thanks, Cory! (Recommendation and review by [Kyle Conway][21])

**[Who Moved My Cheese?][22] by Spencer Johnson**

The secret to success in leading open source projects and open companies is agility and motivating everyone to move beyond their comfort zones to embrace change. Many people find change difficult and do not see the advantage that comes from developing an agile mindset. This book is about the difference in how mice and people experience and respond to change. It's an easy read and a quick way to expand your mind and think differently about whatever problem you're facing today. (Recommendation and review by [Don Watkins][14])

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/fiction-book-list

Author: [Jen Wike Huger][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/remyd
[1]:https://www.annleckie.com/novel/ancillary-justice/
[2]:https://opensource.com/users/i-towey
[3]:https://www.amazon.com/Cryptonomicon-Neal-Stephenson-ebook/dp/B000FC11A6/ref=sr_1_1?s=books&ie=UTF8&qid=1528311017&sr=1-1&keywords=Cryptonomicon
[4]:https://opensource.com/users/mikecamel
[5]:https://www.amazon.com/DAEMON-Daniel-Suarez/dp/0451228731
[6]:https://opensource.com/users/jlacroix
[7]:https://www.amazon.com/Going-postal-Terry-PRATCHETT/dp/0385603428
[8]:https://opensource.com/users/lewiscowles1986
[9]:https://www.amazon.com/Microserfs-Douglas-Coupland/dp/0061624268
[10]:https://opensource.com/users/thatsjet
[11]:https://www.amazon.com/Open-Source-M-Frick/dp/1453719989
[12]:https://opensource.com/users/jeffmacharyas
[13]:https://www.amazon.com/Tao-Pooh-Benjamin-Hoff/dp/0140067477
[14]:https://opensource.com/users/don-watkins
[15]:https://www.amazon.com/Golem-Jinni-Novel-P-S/dp/0062110845
[16]:https://opensource.com/users/vmbrasseur
[17]:https://www.amazon.com/Rise-Meritocracy-Classics-Organization-Management/dp/1560007044
[18]:https://opensource.com/users/bbehrens
[19]:https://www.amazon.com/Throne-Crescent-Moon-Kingdoms/dp/0756407788
[20]:https://craphound.com/category/walkaway/
[21]:https://opensource.com/users/kreyc
[22]:https://www.amazon.com/Moved-Cheese-Spencer-Johnson-M-D/dp/0743582853

AI Is Coming to Edge Computing Devices
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ai-edge.jpg?itok=nuNfRbW8)

Very few non-server systems run software that could be called machine learning (ML) or artificial intelligence (AI). Yet, server-class “AI on the Edge” applications are coming to embedded devices, and Arm intends to fight with Intel and AMD over every last one of them.

Arm recently [announced][1] a new [Cortex-A76][2] architecture that is claimed to boost the processing of AI and ML algorithms on edge computing devices by a factor of four. This does not include ML performance gains promised by the new Mali-G76 GPU. There’s also a Mali-V76 VPU designed for high-res video. The Cortex-A76 and the two Mali designs are designed to “complement” Arm’s Project Trillium Machine Learning processors (see below).

### Improved performance

The Cortex-A76 differs from the [Cortex-A73][3] and [Cortex-A75][4] IP designs in that it’s designed as much for laptops as for smartphones and high-end embedded devices. The Cortex-A76 provides “35 percent more performance year-over-year” compared to the Cortex-A75, claims Arm. The IP, which is expected to arrive in products a year from now, is also said to provide 40 percent improved efficiency.

Like the Cortex-A75, which is equivalent to the latest Kryo cores available on Qualcomm’s [Snapdragon 845][5], the Cortex-A76 supports [DynamIQ][6], Arm’s more flexible version of its big.LITTLE multi-core scheme. Unlike the Cortex-A75, which was announced with a Cortex-A55 companion chip, Arm has no new DynamIQ companion for the Cortex-A76.

Cortex-A76 enhancements are said to include decoupled branch prediction and instruction fetch, as well as Arm’s first 4-wide decode core, which boosts the maximum instructions-per-cycle capability. There’s also higher integer and vector execution throughput, including support for dual-issue native 16B (128-bit) vector and floating-point units. Finally, the new full-cache memory hierarchy is “co-optimized for latency and bandwidth,” says Arm.

Unlike the latest high-end Cortex-A releases, the Cortex-A76 represents “a brand new microarchitecture,” says Arm. This is confirmed by [AnandTech’s][7] usual deep-dive analysis. The Cortex-A73 and -A75 debuted elements of the new “Artemis” architecture, but the Cortex-A76 is built from scratch on Artemis.

The Cortex-A76 should arrive in 7nm-fabricated TSMC products running at 3GHz, says AnandTech. The 4x improvements in ML workloads are primarily due to new optimizations in the ASIMD pipelines “and how dot products are handled,” says the story.

Meanwhile, [The Register][8] noted that the Cortex-A76 is Arm’s first design that will exclusively run 64-bit kernel-level code. The cores will support 32-bit code, but only at non-privileged levels, says the story.

### Mali-G76 GPU and Mali-V76 VPU

The new Mali-G76 GPU announced with the Cortex-A76 targets gaming, VR, AR, and on-device ML. The Mali-G76 is said to provide 30 percent more efficiency and performance density and 1.5x improved performance for mobile gaming. The Bifrost-architecture GPU also provides 2.7x ML performance improvements compared to the Mali-G72, which was announced last year with the Cortex-A75.

The Mali-V76 VPU supports UHD 8K viewing experiences. It’s aimed at 4x4 video walls, which are especially popular in China, and it is designed to support the 8K video coverage that Japan is promising for the 2020 Olympics. 8K@60 streams require four times the bandwidth of 4K@60 streams. To achieve this, Arm added an extra AXI bus and doubled the line buffers throughout the video pipeline. The VPU also supports 8K@30 decode.

### Project Trillium’s ML chip detailed

Arm previously revealed other details about the [Machine Learning][9] (ML) processor, also referred to as the MLP. The ML chip will accelerate AI applications including machine translation and face recognition.

The new processor architecture is part of the Project Trillium initiative for AI and follows Arm’s second-gen Object Detection (OD) Processor for optimizing visual processing and people/object detection. The ML design will initially debut as a co-processor in mobile phones by late 2019.

Numerous block diagrams for the MLP were published by [AnandTech][10], which was briefed on the design. While stating that any judgment about the performance of the still-unfinished ML IP will require next year’s silicon release, the publication says that the ML chip appears to check off all the requirements of a neural network accelerator, including providing efficient convolutional computations and data movement while also enabling sufficient programmability.

Arm claims the chips will provide >3TOPs per watt in 7nm designs with absolute throughputs of 4.6TOPs, implying a target power of approximately 1.5W. For programmability, the MLP will initially target Android’s [Neural Networks API][11] and [Arm’s NN SDK][12].
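
(The arithmetic checks out: 4.6 TOPs ÷ 3 TOPs per watt ≈ 1.5 W.)
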
Join us at [Open Source Summit + Embedded Linux Conference Europe][13] in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/6/ai-coming-edge-computing-devices

Author: [Eric Brown][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.linux.com/users/ericstephenbrown
[1]:https://www.arm.com/news/2018/05/arm-announces-new-suite-of-ip-for-premium-mobile-experiences
[2]:https://community.arm.com/processors/b/blog/posts/cortex-a76-laptop-class-performance-with-mobile-efficiency
[3]:https://www.linux.com/news/mediateks-10nm-mobile-focused-soc-will-tap-cortex-a73-and-a32
[4]:http://linuxgizmos.com/arm-debuts-cortex-a75-and-cortex-a55-with-ai-in-mind/
[5]:http://linuxgizmos.com/hot-chips-on-parade-at-mwc-and-embedded-world/
[6]:http://linuxgizmos.com/arm-boosts-big-little-with-dynamiq-and-launches-linux-dev-kit/
[7]:https://www.anandtech.com/show/12785/arm-cortex-a76-cpu-unveiled-7nm-powerhouse
[8]:https://www.theregister.co.uk/2018/05/31/arm_cortex_a76/
[9]:https://developer.arm.com/products/processors/machine-learning/arm-ml-processor
[10]:https://www.anandtech.com/show/12791/arm-details-project-trillium-mlp-architecture
[11]:https://developer.android.com/ndk/guides/neuralnetworks/
[12]:https://developer.arm.com/products/processors/machine-learning/arm-nn
[13]:https://events.linuxfoundation.org/events/elc-openiot-europe-2018/

An Advanced System Configuration Utility For Ubuntu Power Users
======

![](https://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-4-1-720x340.png)

**Ubunsys** is a Qt-based advanced system utility for Ubuntu and its derivatives. Advanced users can do most of this configuration easily from the command line, but in case you don’t want to use the CLI all the time, you can use Ubunsys to configure your Ubuntu desktop system or its derivatives, such as Linux Mint or Elementary OS. Ubunsys can be used to modify system configuration; install, remove, and update packages and old kernels; enable or disable sudo access; install a mainline kernel; update software repositories; clean up junk files; upgrade your Ubuntu to the latest version; and so on. All of the aforementioned actions can be done with simple mouse clicks. You don’t need to depend on the CLI anymore. Here is the list of things you can do with Ubunsys:

  * Install, update, and remove packages.
  * Update and upgrade software repositories.
  * Install a mainline kernel.
  * Remove old and unused kernels.
  * Do a full system update.
  * Do a complete system upgrade to the next available version.
  * Upgrade to the latest development version.
  * Clean up junk files from your system.
  * Enable and/or disable passwordless sudo access.
  * Make sudo passwords visible when you type them in the Terminal.
  * Enable and/or disable hibernation.
  * Enable and/or disable the firewall.
  * Open, back up, and import sources.list.d and sudoers files.
  * Show/hide hidden startup items.
  * Enable and/or disable login sounds.
  * Configure dual boot.
  * Enable/disable the lock screen.
  * Do a smart system update.
  * Update and/or run all scripts at once using the Scripts Manager.
  * Run the normal-user installation script from git.
  * Check system integrity and missing GPG keys.
  * Repair the network.
  * Fix broken packages.
  * And more yet to come.

**Important note:** Ubunsys is not for Ubuntu beginners. It is dangerous and not a stable release yet. It might break your system. If you’re new to Ubuntu, don’t use it. If you are very curious to use this application, go through each option carefully and proceed at your own risk. Do not forget to back up your important data before using this application.

### Ubunsys – An Advanced System Configuration Utility For Ubuntu Power Users

#### Install Ubunsys

The Ubunsys developer has made a PPA to make the installation process much easier. Ubunsys will currently work on Ubuntu 16.04 LTS and Ubuntu 17.04 64-bit editions.

Run the following commands one by one to add the Ubunsys PPA and install it:

```
sudo add-apt-repository ppa:adgellida/ubunsys

sudo apt-get update

sudo apt-get install ubunsys
```

If the PPA doesn’t work, head over to the [**releases page**][1], then download and install the Ubunsys package for the architecture you use, as shown in the sketch below.
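
For example, on a 64-bit system the manual installation would look something like this (a minimal sketch; the exact .deb filename and version below are hypothetical, so check the releases page for the real one):

```
## Download a release asset (hypothetical filename - verify on the releases page)
wget https://github.com/adgellida/ubunsys/releases/download/v0.8.2/ubunsys_amd64.deb

## Install the package, then pull in any missing dependencies
sudo dpkg -i ubunsys_amd64.deb
sudo apt-get install -f
```
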

#### Usage

Once installed, launch Ubunsys from the Menu. This is how the Ubunsys main interface looks.

![][3]

As you can see, Ubunsys has four main sections, namely **Packages**, **Tweaks**, **System**, and **Repair**. There are one or more sub-sections under each main tab for different operations.

**Packages**

This section allows you to install, remove, and update packages.

![][4]

**Tweaks**

In this section, we can do various system tweaks, such as:

  * Open, back up, and import the sources.list and sudoers files.
  * Configure dual boot.
  * Enable or disable the login sound, firewall, lock screen, hibernation, and passwordless sudo access. You can also enable or disable passwordless sudo access for specific users.
  * Make passwords visible while typing them in the Terminal (disable asterisks).

![][5]

**System**

This section is further categorized into three sub-categories, each for a distinct user type.

The **Normal user** tab allows us to:

  * Update and upgrade packages and software repos.
  * Clean the system.
  * Run the normal-user installation script.

The **Advanced user** section allows us to:

  * Clean old/unused kernels.
  * Install a mainline kernel.
  * Do a smart package update.
  * Upgrade the system.

The **Developer** section allows us to upgrade the Ubuntu system to the latest development version.

![][6]

**Repair**

This is the fourth and last section of Ubunsys. As the name says, this section allows us to repair our system and network, restore missing GPG keys, and fix broken packages.

![][7]

As you can see, Ubunsys helps you do system configuration, maintenance, and software management tasks with a few mouse clicks. You don’t need to depend on the Terminal anymore. Ubunsys can help you accomplish advanced tasks as well. Again, I warn you: it’s not for beginners, and it is not stable yet. So expect bugs and crashes when using it. Use it with care after studying the options and their impact.

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/ubunsys-advanced-system-configuration-utility-ubuntu-power-users/

Author: [SK][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.ostechnix.com/author/sk/
[1]:https://github.com/adgellida/ubunsys/releases
[3]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-1.png
[4]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-2.png
[5]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-5.png
[6]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-9.png
[7]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-11.png

The Easiest PDO Tutorial (Basics)
======

![](http://www.theitstuff.com/wp-content/uploads/2018/04/php-language.jpg)

Approximately 80% of the web is powered by PHP, and a similarly high number of sites run on SQL. Up until PHP version 5.5, we had the **mysql_** functions for accessing MySQL databases, but they were eventually deprecated due to insufficient security.

The deprecation happened with PHP 5.5 in 2013, and as I write this article, the year is 2018 and we are on PHP 7.2. The deprecation of **mysql_** left two major ways of accessing the database: the **mysqli** and **PDO** libraries.

Although the mysqli library was the official successor, PDO gained more fame for a simple reason: mysqli supports only MySQL databases, whereas PDO supports 12 different database drivers. PDO also has several more features that make it the better choice for most developers. You can see some of the feature comparisons in the table below:

| | PDO | MySQLi |
| --- | --- | --- |
| Database support | 12 drivers | Only MySQL |
| Paradigm | OOP | Procedural + OOP |
| Prepared statements (client side) | Yes | No |
| Named parameters | Yes | No |

Now I guess it is pretty clear why PDO is the choice of most developers, so let’s dig into it; hopefully this article will cover most of the PDO you need.

### Connection

The first step is connecting to the database, and since PDO is completely object oriented, we will be using an instance of the PDO class.

The first thing we do is define the host, database name, user, password, and the database charset.

```
$host = 'localhost';
$db = 'theitstuff';
$user = 'root';
$pass = 'root';
$charset = 'utf8mb4';

$dsn = "mysql:host=$host;dbname=$db;charset=$charset";
$conn = new PDO($dsn, $user, $pass);
```

After that, as you can see in the code above, we have created the **DSN** variable; the DSN is simply a string that holds the information about the database. If you are running MySQL on an external server, you can also adjust the port number by adding **port=$port_number** to the DSN.

Finally, we create an instance of the PDO class: I have used the **$conn** variable and supplied the **$dsn, $user, $pass** parameters. If you have followed this, you should now have an object named $conn that is an instance of the PDO connection class. Now it’s time to get into the database and run some queries.
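
In real code you will usually also want to know when the connection fails. A minimal sketch (this error-handling pattern is my addition, not part of the original tutorial) wraps the constructor in a try/catch block and turns on exceptions for all subsequent PDO errors:

```
<?php
$dsn = "mysql:host=localhost;dbname=theitstuff;charset=utf8mb4";

try {
    $conn = new PDO($dsn, 'root', 'root');
    // Make PDO throw exceptions instead of failing silently
    $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
} catch (PDOException $e) {
    // The constructor throws PDOException on a bad host, user, or password
    echo "Connection failed: " . $e->getMessage();
}
```
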

### A simple SQL Query

Let us now run a simple SQL query.

```
$tis = $conn->query('SELECT name, age FROM students');

while ($row = $tis->fetch()) {
    echo $row['name']."\t";
    echo $row['age'];
    echo "<br>";
}
```

This is the simplest form of running a query with PDO. We first created a variable called **$tis** (short for TheITStuff), and then you can see the syntax: we used the query function of the $conn object that we created earlier.

We then ran a while loop, using a **$row** variable to fetch the contents from the **$tis** object, and finally echoed out each row by calling out the column names.

Easy, wasn’t it? Now let’s get to prepared statements.

### Prepared Statements

Prepared statements were one of the major reasons people started using PDO, as prepared statements can prevent SQL injection.

There are 2 basic methods available: you can use either positional or named parameters.

#### Positional parameters

Let us see an example of a query using positional parameters.

```
$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");
$tis->bindValue(1, 'mike');
$tis->bindValue(2, 22);
$tis->execute();
```

In the above example, we have placed 2 question marks and later used the **bindValue()** function to map the values into the query. The values are bound to the positions of the question marks in the statement.

I could also use variables instead of directly supplying values by using the **bindParam()** function; an example of the same would be this:

```
$name = 'Rishabh'; $age = 20;

$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");
$tis->bindParam(1, $name);
$tis->bindParam(2, $age);
$tis->execute();
```

### Named Parameters

Named parameters are also prepared statements that map values/variables to a named position in the query. Since there is no positional binding, this is very efficient in queries that use the same variable multiple times.

```
$name = 'Rishabh'; $age = 20;

$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(:name, :age)");
$tis->bindParam(':name', $name);
$tis->bindParam(':age', $age);
$tis->execute();
```

The only change you will notice is that I used **:name** and **:age** as placeholders and then mapped the variables to them. The colon before each parameter is required; it lets PDO know that the position is for a variable.

You can similarly use **bindValue()** to map values directly using named parameters as well.
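
As a shortcut, you can also skip the bind calls entirely and pass an associative array straight to **execute()**; this sketch (my addition, not from the original tutorial) is equivalent to the bindParam() version above:

```
$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(:name, :age)");

// Keys of the array match the named placeholders in the statement
$tis->execute([':name' => 'Rishabh', ':age' => 20]);
```
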

### Fetching the Data

PDO is very rich when it comes to fetching data, and it offers a number of formats in which you can get the data from your database.

You can use **PDO::FETCH_ASSOC** to fetch associative arrays, **PDO::FETCH_NUM** to fetch numeric arrays, and **PDO::FETCH_OBJ** to fetch objects.

```
$tis = $conn->prepare("SELECT * FROM STUDENTS");
$tis->execute();
$result = $tis->fetchAll(PDO::FETCH_ASSOC);
```

You can see that I have used **fetchAll** since I wanted all matching records. If only one row is expected or desired, you can simply use **fetch**.

Now that we have fetched the data, it is time to loop through it, and that is extremely easy.

```
foreach ($result as $lnu) {
    echo $lnu['name'];
    echo $lnu['age']."<br>";
}
```

You can see that since I requested associative arrays, I am accessing individual members by their names.

Though there is absolutely no problem in defining how you want your data delivered, you can actually set a default mode when defining the connection variable itself.

All you need to do is create an options array where you put in all your default configs, and simply pass the array to the PDO constructor.

```
$options = [
    PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
];

$conn = new PDO($dsn, $user, $pass, $options);
```

This was a very brief and quick intro to PDO; we will be making an advanced tutorial soon. If you had any difficulties understanding any part of the tutorial, do let me know in the comment section, and I’ll be there for you.

--------------------------------------------------------------------------------

via: http://www.theitstuff.com/easiest-pdo-tutorial-basics

Author: [Rishabh Kandari][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:http://www.theitstuff.com/author/reevkandari

pinewall translating

MySQL without the MySQL: An introduction to the MySQL Document Store
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_business_sign_store.jpg?itok=g4QibRqg)

MySQL can act as a NoSQL JSON document store so programmers can save data without having to normalize it, set up schemas, or even have a clue what their data looks like before starting to code. Since MySQL version 5.7, and in MySQL 8.0, developers can store JSON documents in a column of a table. By adding the new X DevAPI, you can stop embedding nasty strings of structured query language in your code and replace them with API calls that support modern programming design.

Very few developers have any formal training in structured query language (SQL), relational theory, sets, or other foundations of relational databases. But they need a secure, reliable data store. Add in a dearth of available database administrators, and things can get very messy quickly.

The [MySQL Document Store][1] allows programmers to store data without having to create an underlying schema, normalize data, or do any of the other tasks normally required to use a database. A JSON document collection is created and can then be used.

### JSON data type

This is all based on the JSON data type introduced a few years ago in MySQL 5.7. It provides a roughly 1GB column in a row of a table. The data has to be valid JSON or the server will return an error, but developers are free to use that space as they want.
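
For reference, a plain-SQL sketch of that feature looks like this (the table and column names here are my own illustration, not from the article):

```
-- A JSON column holds whole documents; invalid JSON is rejected by the server
CREATE TABLE example_docs (
  id  INT AUTO_INCREMENT PRIMARY KEY,
  doc JSON
);

INSERT INTO example_docs (doc)
VALUES ('{"Name": "Dave", "State": "Texas", "foo": "bar"}');
```
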

### X DevAPI

The old MySQL protocol is showing its age after almost a quarter-century, so a new protocol was developed called [X DevAPI][2]. It includes a new high-level session concept that allows code to scale from one server to many with non-blocking, asynchronous I/O that follows common host-language programming patterns. The focus is put on using CRUD (create, replace, update, delete) patterns while following modern practices and coding styles. Or, to put it another way, you no longer have to embed ugly strings of SQL statements in your beautiful, pristine code.

### Coding examples

A new shell, creatively called the [MySQL Shell][3], supports this new protocol. It can be used to set up high-availability clusters, check servers for upgrade readiness, and interact with MySQL servers. This interaction can be done in three modes: JavaScript, Python, and SQL.

The coding examples that follow are in the JavaScript mode of the MySQL Shell; it has a `JS>` prompt.

Here, we will log in as `dstokes` with the password `password` to the local system and a schema named `demo`. There is a pointer to the schema demo that is named `db`.

```
$ mysqlsh dstokes:password@localhost/demo

JS> db.createCollection("example")
JS> db.example.add(
      {
        Name: "Dave",
        State: "Texas",
        foo: "bar"
      }
    )
JS>
```

Above we logged into the server, connected to the `demo` schema, created a collection named `example`, and added a record, all without creating a table definition or using SQL. We can use or abuse this data as our whims desire. This is not an object-relational mapper; there is no mapping of code to SQL, because the new protocol “speaks” at the server layer.

### Node.js supported

The new shell is pretty sweet; you can do a lot with it, but you will probably want to use your programming language of choice. The following example uses the `world_x` demo database to search for a record with the `_id` field matching "CAN." We point to the desired collection in the schema and issue a `find` command with the desired parameters. Again, there’s no SQL involved.

```
var mysqlx = require('@mysql/xdevapi');

mysqlx.getSession({              // Auth to server
  host: 'localhost',
  port: '33060',
  dbUser: 'root',
  dbPassword: 'password'
}).then(function (session) {     // use world_x.country.info
  var schema = session.getSchema('world_x');
  var collection = schema.getCollection('countryinfo');

  collection                     // Get row for 'CAN'
    .find("$._id == 'CAN'")
    .limit(1)
    .execute(doc => console.log(doc))
    .then(() => console.log("\n\nAll done"));

  session.close();
})
```

Here is another example in PHP that looks for "USA":

```
<?php
// Connection parameters
$user = 'root';
$passwd = 'S3cret#';
$host = 'localhost';
$port = '33060';
$connection_uri = 'mysqlx://'.$user.':'.$passwd.'@'.$host.':'.$port;
echo $connection_uri . "\n";

// Connect as a Node Session
$nodeSession = mysql_xdevapi\getNodeSession($connection_uri);

// "USE world_x" schema
$schema = $nodeSession->getSchema("world_x");

// Specify collection to use
$collection = $schema->getCollection("countryinfo");

// SELECT * FROM world_x WHERE _id = "USA"
$result = $collection->find('_id = "USA"')->execute();

// Fetch/Display data
$data = $result->fetchAll();
var_dump($data);
?>
```

Note that the `find` operator used in both examples looks pretty much the same between the two different languages. This consistency should help developers who hop between programming languages or those looking to reduce the learning curve with a new language.

Other supported languages include C, Java, Python, and JavaScript, and more are planned.

### Best of both worlds

Did I mention that the data entered in this NoSQL fashion is also available from the SQL side of MySQL? Or that the new NoSQL method can access relational data in old-fashioned relational tables? You now have the option to use your MySQL server as a SQL server, a NoSQL server, or both.

Dave Stokes will present "MySQL Without the SQL—Oh My!" at [Southeast LinuxFest][4], June 8-10, in Charlotte, N.C.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/mysql-document-store

Author: [Dave Stokes][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/davidmstokes
[1]:https://www.mysql.com/products/enterprise/document_store.html
[2]:https://dev.mysql.com/doc/x-devapi-userguide/en/
[3]:https://dev.mysql.com/downloads/shell/
[4]:http://www.southeastlinuxfest.org/

Mesos and Kubernetes: It's Not a Competition
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/architecture-barge-bay-161764_0.jpg?itok=vNChG5fb)

The roots of Mesos can be traced back to 2009, when Ben Hindman was a PhD student at the University of California, Berkeley working on parallel programming. They were doing massive parallel computations on 128-core chips, trying to solve multiple problems such as making software and libraries run more efficiently on those chips. He started talking with fellow students to see if they could borrow ideas from parallel processing and multiple threads and apply them to cluster management.

“Initially, our focus was on Big Data,” said Hindman. Back then, Big Data was really hot and Hadoop was one of the hottest technologies. “We recognized that the way people were running things like Hadoop on clusters was similar to the way that people were running multiple threaded applications and parallel applications,” said Hindman.

However, it was not very efficient, so they started thinking about how it could be done better through cluster management and resource management. “We looked at many different technologies at that time,” Hindman recalled.

Hindman and his colleagues, however, decided to adopt a novel approach. “We decided to create a lower level of abstraction for resource management, and run other services on top of that to do scheduling and other things,” said Hindman. “That’s essentially the essence of Mesos -- to separate out the resource management part from the scheduling part.”

It worked, and Mesos has been going strong ever since.

### The project goes to Apache

The project was founded in 2009. In 2010, the team decided to donate the project to the Apache Software Foundation (ASF). It was incubated at Apache, and in 2013 it became a Top-Level Project (TLP).

There were many reasons why the Mesos community chose the Apache Software Foundation, such as the permissiveness of Apache licensing and the fact that the ASF already had a vibrant community of other such projects.

It was also about influence. A lot of people working on Mesos were also involved with Apache, and many were working on projects like Hadoop. At the same time, many folks from the Mesos community were working on other Big Data projects like Spark. This cross-pollination led all three projects -- Hadoop, Mesos, and Spark -- to become ASF projects.

It was also about commerce. Many companies were interested in Mesos, and the developers wanted it to be maintained by a neutral body instead of being a privately owned project.

### Who is using Mesos?

A better question would be, who isn’t? Everyone from Apple to Netflix is using Mesos. However, Mesos had its share of the challenges that any technology faces in its early days. “Initially, I had to convince people that there was this new technology called ‘containers’ that could be interesting as there is no need to use virtual machines,” said Hindman.

The industry has changed a great deal since then, and now every conversation around infrastructure starts with ‘containers’ -- thanks to the work done by Docker. Today no convincing is needed, but even in the early days of Mesos, companies like Apple, Netflix, and PayPal saw the potential. They knew they could take advantage of containerization technologies in lieu of virtual machines. “These companies understood the value of containers before it became a phenomenon,” said Hindman.

These companies saw that they could have a bunch of containers instead of virtual machines. All they needed was something to manage and run these containers, and they embraced Mesos. Some of the early users of Mesos included Apple, Netflix, PayPal, Yelp, OpenTable, and Groupon.

“Most of these organizations are using Mesos for just running arbitrary services,” said Hindman, “but there are many that are using it for doing interesting things with data processing, streaming data, analytics workloads, and applications.”

One of the reasons these companies adopted Mesos was the clear separation between the resource management layers. Mesos offers the flexibility that companies need when dealing with containerization.

“One of the things we tried to do with Mesos was to create a layering so that people could take advantage of our layer, but also build whatever they wanted to on top,” said Hindman. “I think that's worked really well for the big organizations like Netflix and Apple.”

However, not every company is a tech company; not every company has or should have this expertise. To help those organizations, Hindman co-founded Mesosphere to offer services and solutions around Mesos. “We ultimately decided to build DC/OS for those organizations which didn’t have the technical expertise or didn't want to spend their time building something like that on top.”

### Mesos vs. Kubernetes?

People often think in terms of x versus y, but it’s not always a question of one technology versus another. Most technologies overlap in some areas, and they can also be complementary. “I don't tend to see all these things as competition. I think some of them actually can work in complementary ways with one another,” said Hindman.

“In fact the name Mesos stands for ‘middle’; it’s kind of a middle OS,” said Hindman. “We have the notion of a container scheduler that can be run on top of something like Mesos. When Kubernetes first came out, we actually embraced it in the Mesos ecosystem and saw it as another way of running containers in DC/OS on top of Mesos.”

Mesos also resurrected a project called [Marathon][1] (a container orchestrator for Mesos and DC/OS), which it has made a first-class citizen in the Mesos ecosystem. However, Marathon does not really compare with Kubernetes. “Kubernetes does a lot more than what Marathon does, so you can’t swap them with each other,” said Hindman. “At the same time, we have done many things in Mesos that are not in Kubernetes. So, these technologies are complementary to each other.”

Instead of viewing such technologies as adversarial, they should be seen as beneficial to the industry. It’s not duplication of technologies; it’s diversity. According to Hindman, “it could be confusing for the end user in the open source space because it's hard to know which technologies are suitable for what kind of workload, but that’s the nature of the beast called Open Source.”

That just means there are more choices, and everybody wins.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/6/mesos-and-kubernetes-its-not-competition

Author: [Swapnil Bhartiya][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.linux.com/users/arnieswap
[1]:https://mesosphere.github.io/marathon/

How to use screen scraping tools to extract data from the web
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG)

A perfect internet would deliver data to clients in the format of their choice, whether it's CSV, XML, JSON, etc. The real internet teases at times by making data available, but usually in HTML or PDF documents—formats designed for data display rather than data interchange. Accordingly, the [screen scraping][1] of yesteryear—extracting displayed data and converting it to the requested format—is still relevant today.

Perl has outstanding tools for screen scraping, among them the `HTML::TableExtract` package described in the Scraping program below.

### Overview of the scraping program

The screen-scraping program has two main pieces, which fit together as follows:

  * The file data.html contains the data to be scraped. The data in this example, which originated in a university site under renovation, addresses the issue of whether the income associated with a college degree justifies the degree's cost. The data includes median incomes, percentiles, and other information about areas of study such as computing, engineering, and liberal arts. To run the Scraping program, the data.html file should be hosted on a web server, in my case a local Nginx server. A standalone Perl web server such as `HTTP::Server::PSGI` or `HTTP::Server::Simple` would do as well.
  * The file scrape.pl contains the Scraping program, which uses features from the `Plack/PSGI` packages, in particular a Plack web server. The Scraping program is launched from the command line (as explained below). A user enters the URL for the Plack server (`localhost:5000/`) in a browser, and the following happens:
    * The browser connects to the Plack server, an instance of `HTTP::Server::PSGI`, and issues a GET request for the Scraping program. The single slash (`/`) at the end of the URL identifies this program. (A modern browser would add the closing slash even if the user failed to do so.)
    * The Scraping program then issues a GET request for the data.html document. If the request succeeds, the application extracts the relevant data from the document using the `HTML::TableExtract` package, saves the extracted data to a file, and takes some basic statistical measures that represent processing the extracted data. An HTML report like the following is returned to the user's browser.

![HTML report generated by the Scraping program][3]

Fig. 1: Final report from the Scraping program

The request traffic from the user's browser to the Plack server and then to the server hosting the data.html document (e.g., Nginx) can be depicted as follows:

```
              GET localhost:5000/             GET localhost:80/data.html
user's browser------------------->Plack server-------------------------->Nginx
```

The final step involves only the Plack server and the user's browser:

```
            reportFinal.html
Plack server------------------>user's browser
```

Fig. 1 above shows the final report document.
### The scraping program in detail
|
||||
|
||||
The source code and data file (data.html) are available from my [website][4] in a ZIP file that includes a README. Here is a quick summary of the pieces, and clarifications will follow:
|
||||
```
|
||||
data.html ## data source to be hosted by a web server
|
||||
|
||||
scrape.pl ## main source code, run with the plackup utility (see below)
|
||||
|
||||
Stats::Controller.pm ## handles request routing, data extraction, and processing
|
||||
|
||||
Stats::Util.pm ## utility functions used in Controller.pm
|
||||
|
||||
report.html ## HTML template used to generate the report
|
||||
|
||||
rawData.dat ## the extracted data
|
||||
|
||||
```
|
||||
|
||||
The `Plack/PSGI` packages come with a command-line utility named `plackup`, which can be used to launch the Scraping program. With `%` as the command-line prompt, the command for starting the Scraping program is:
|
||||
```
|
||||
% plackup scrape.pl
|
||||
|
||||
```
|
||||
|
||||
The `plackup` command starts a standalone Plack web server that hosts the Scraping program. The Scraping code handles request routing, extracts data from the data.html document, produces some basic statistical measures, and then uses the `Template::Recall` package to generate an HTML report for the user. Because the Plack server runs indefinitely, the Scraping program prints the process ID, which can be used to kill the server and the Scraping app.
|
||||
|
||||
`Plack/PSGI` supports Rails-style routing in which an HTTP request is dispatched to a specific request handler based on two factors:

`Plack/PSGI` supports Rails-style routing in which an HTTP request is dispatched to a specific request handler based on two factors:

  * The HTTP request method (verb), such as GET or POST.
  * The Uniform Resource Identifier (URI, or noun) for the requested resource; in this case, the standalone finishing slash (`/`) in the URL `http://localhost:5000/` that a user enters in a browser once the Scraping program has launched.
The Scraping program handles only one type of request: a GET for the resource named `/`, and this resource is the screen-scraping and data-processing code in my `Stats::Controller` package. Here, for review, is the `Plack/PSGI` routing setup, right at the top of source file scrape.pl:
|
||||
```
|
||||
my $router = router {
|
||||
|
||||
match '/', {method => 'GET'}, ## noun/verb combo: / is noun, GET is verb
|
||||
|
||||
to {controller => 'Controller', action => 'index'}; ## handler is function get_index
|
||||
|
||||
# Other actions as needed
|
||||
|
||||
};
|
||||
|
||||
```
|
||||
|
||||
The request handler `Controller::get_index` has only high-level logic, leaving the screen-scraping and report-generating details to utility functions in the Util.pm file, as described in the following section.
|
||||
|
||||
### The screen-scraping code
|
||||
|
||||
Recall that the Plack server dispatches a GET request for `localhost:5000/` to the Scraping program's `get_index` function. This function, as the request handler, then starts the job of retrieving the data to be scraped, scraping the data, and generating the final report. The data-retrieval part falls to a utility function, which uses Perl's `LWP::Agent` package to get the data from whatever server is hosting the data.html document. With the data document in hand, the Scraping program invokes the utility function `extract_from_html` to do the data extraction.

The data.html document happens to be well-formed XML, which means a Perl package such as `XML::LibXML` could be used to extract the data through an explicit XML parse. However, the `HTML::TableExtract` package is inviting because it bypasses the tedium of XML parsing and (in very little code) delivers a Perl hash with the extracted data. Data aggregates in HTML documents usually occur in lists or tables, and the `HTML::TableExtract` package targets tables. Here are the three critical lines of code for the data extraction:

```
my $col_headers = col_headers(); ## col_headers() returns a reference to an array of the table's column names

my $te = HTML::TableExtract->new(headers => $col_headers);

$te->parse($page);               ## $page holds the contents of data.html
```

The `$col_headers` variable refers to a Perl array of strings, each a column header in the HTML document:

```
sub col_headers {    ## column headers in the HTML table
    return ["Area",
            "MedianWage",
            ...
            "BoostFromGradDegree"];
} ## col_headers
```

After the call to the `TableExtract::parse` function, the Scraping program uses the `TableExtract::rows` function to iterate over the rows of extracted data—rows of data without the HTML markup. These rows, as Perl lists, are added to a Perl hash named `%majors_hash`, which can be depicted as follows:

  * Each key identifies an area of study, such as Computing or Engineering.
  * The value of each key is the list of seven extracted data items, where seven is the number of columns in the HTML table. For Computing, the list (annotated to match the rawData.dat row shown below) is:

```
    name      median  25th-ptile  75th-ptile  % with degree  % going on for GD  income boost from GD
(Computing    75000     51000      112000         5.1%             32%                 31%)        ## GD = grad degree
```
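
To make the extraction step concrete, here is a self-contained sketch that uses a made-up two-column table rather than the article's seven-column data.html, but builds the same kind of hash:

```
use strict;
use warnings;
use HTML::TableExtract;

## Inline stand-in for data.html (hypothetical two-column table).
my $page = <<'HTML';
<table>
  <tr><th>Area</th><th>MedianWage</th></tr>
  <tr><td>Computing</td><td>75000</td></tr>
  <tr><td>Engineering</td><td>78000</td></tr>
</table>
HTML

my $te = HTML::TableExtract->new(headers => ['Area', 'MedianWage']);
$te->parse($page);

my %majors_hash;
for my $table ($te->tables) {
    for my $row ($table->rows) {
        my ($area, @data) = @$row;      ## first cell is the key
        $majors_hash{$area} = \@data;   ## remaining cells are the values
    }
}
print "$_ => @{ $majors_hash{$_} }\n" for sort keys %majors_hash;
```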
The hash with the extracted data is written to the local file rawData.dat:

```
ForeignLanguage 50000 35000 75000 3.5% 54% 101%
LiberalArts 47000 32000 70000 9.7% 41% 48%
...
Engineering 78000 54000 104000 8.2% 37% 32%
Computing 75000 51000 112000 5.1% 32% 31%
...
PublicPolicy 50000 36000 74000 2.3% 24% 45%
```

The next step is to process the extracted data, in this case by doing rudimentary statistical analysis using the `Statistics::Descriptive` package. In Fig. 1 above, the statistical summary is presented in a separate table at the bottom of the report.
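
As an illustration of that package (a minimal sketch of mine, not the article's code), computing summary statistics over the median wages might look like this:

```
use strict;
use warnings;
use Statistics::Descriptive;

my @median_wages = (50000, 47000, 78000, 75000, 50000);  ## sample values from rawData.dat

my $stat = Statistics::Descriptive::Full->new();
$stat->add_data(@median_wages);

printf "mean:   %.2f\n", $stat->mean();
printf "median: %.2f\n", $stat->median();
printf "stddev: %.2f\n", $stat->standard_deviation();
```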

### The report-generation code

The final step in the Scraping program is to generate a report. Perl has options for generating HTML, and `Template::Recall` is among them. As the name suggests, the package generates HTML from an HTML template, which is a mix of standard HTML markup and customized tags that serve as placeholders for data generated from backend code. The template file is report.html, and the backend function of interest is `Controller::generate_report`. Here is how the code and the template interact.

The report document (Fig. 1) has two tables. The top table is generated through iteration, as each row has the same columns (area of study, income for the 25th percentile, and so on). In each iteration, the code creates a hash with values for a particular area of study:

```
my %row = (
    major      => $key,
    wage       => '$' . commify($values[0]), ## commify turns 1234 into 1,234
    p25        => '$' . commify($values[1]),
    p75        => '$' . commify($values[2]),
    population => $values[3],
    grad       => $values[4],
    boost      => $values[5]
);
```

The hash keys are Perl [barewords][5], such as `major` and `wage`, that represent items in the list of data values extracted earlier from the HTML data document. The corresponding HTML template looks like this:

```
[ === even === ]
<tr class='even'>
   <td>['major']</td>
   <td align='right'>['p25']</td>
   <td align='right'>['wage']</td>
   <td align='right'>['p75']</td>
   <td align='right'>['population']</td>
   <td align='right'>['grad']</td>
   <td align='right'>['boost']</td>
</tr>
[=== end1 ===]
```

The customized tags are in square brackets. The tags at the top and the bottom mark the beginning and the end, respectively, of a template region to be rendered. The other customized tags identify individual targets for the backend code. For example, the template column identified as `major` matches the hash entry with `major` as the key. Here is the call in the backend code that binds the data to the customized tags:

```
print OUTFILE $tr->render('end1');
```

The reference `$tr` points to a `Template::Recall` instance, and `OUTFILE` is the report file reportFinal.html, which is generated from the template file report.html together with the backend code. If all goes well, the reportFinal.html document is what the user sees in the browser (see Fig. 1).

The Scraping program draws on excellent Perl packages such as `Plack/PSGI`, `LWP::UserAgent`, `HTML::TableExtract`, `Template::Recall`, and `Statistics::Descriptive` to deal with the often messy task of screen-scraping for data. These packages play together nicely, as each targets a specific subtask. Finally, the Scraping program might be extended to cluster the extracted data: the `Algorithm::KMeans` package is suited for this extension and could use the data persisted in the rawData.dat file.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/screen-scraping

作者:[Marty Kalin][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mkalindepauledu
[1]:https://en.wikipedia.org/wiki/Data_scraping#Screen_scraping
[2]:/file/399886
[3]:https://opensource.com/sites/default/files/uploads/scrapeshot.png (HTML report generated by the Scraping program)
[4]:http://condor.depaul.edu/mkalin
[5]:https://en.wiktionary.org/wiki/bareword
@ -1,134 +0,0 @@

Translating by qhwdw

Turn Your Raspberry Pi into a Tor Relay Node
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tor-onion-router.jpg?itok=6WUl0ElH)

If you’re anything like me, you probably got yourself a first- or second-generation Raspberry Pi board when they first came out, played with it for a while, but then shelved it and mostly forgot about it. After all, unless you’re a robotics enthusiast, you probably don’t have that much use for a computer with a pretty slow processor and 256 megabytes of RAM. This is not to say that there aren’t cool things you can do with one of these, but between work and other commitments, I just never seem to find the right time for some good old nerding out.

However, if you would like to put it to good use without sacrificing too much of your time or resources, you can turn your old Raspberry Pi into a perfectly functioning Tor relay node.

### What is a Tor relay node?

You have probably heard about the [Tor project][1] before, but just in case you haven’t, here’s a very quick summary. The name “Tor” stands for “The Onion Router” and it is a technology created to combat online tracking and other privacy violations.

Everything you do on the Internet leaves a set of digital footprints in every piece of equipment that your IP packets traverse: all of the switches, routers, load balancers, and destination websites log the IP address from which your session originated and the IP address of the internet resource you are accessing (and often its hostname, [even when using HTTPS][2]). If you’re browsing from home, then your IP can be directly mapped to your household. If you’re using a VPN service ([as you should be][3]), then your IP can be mapped to your VPN provider, and then they are the ones who can map it to your household. In any case, odds are that someone somewhere is assembling an online profile on you based on the sites you visit and how much time you spend on each of them. Such profiles are then sold, aggregated with matching profiles collected from other services, and then monetized by ad networks. At least, that’s the optimist’s view of how that data is used -- I’m sure you can think of many examples of how your online usage profiles can be used against you in much more nefarious ways.

The Tor project attempts to provide a solution to this problem by making it impossible (or, at least, unreasonably difficult) to trace the endpoints of your IP session. Tor achieves this by bouncing your connection through a chain of anonymizing relays, consisting of an entry node, relay node, and exit node:

  1. The **entry node** only knows your IP address and the IP address of the relay node, but not the final destination of the request;
  2. The **relay node** only knows the IP address of the entry node and the IP address of the exit node, and neither the origin nor the final destination;
  3. The **exit node** only knows the IP address of the relay node and the final destination of the request; it is also the only node that can decrypt the traffic before sending it over to its final destination.

Relay nodes play a crucial role in this exchange because they create a cryptographic barrier between the source of the request and the destination. Even if exit nodes are controlled by adversaries intent on stealing your data, they will not be able to know the source of the request without controlling the entire Tor relay chain.

As long as there are plenty of relay nodes, your privacy when using the Tor network remains protected -- which is why I heartily recommend that you set up and run a relay node if you have some home bandwidth to spare.

#### Things to keep in mind regarding Tor relays

A Tor relay node only receives encrypted traffic and sends encrypted traffic -- it never accesses any other sites or resources online, so you do not need to worry that someone will browse any worrisome sites directly from your home IP address. Having said that, if you reside in a jurisdiction where offering anonymity-enhancing services is against the law, then, obviously, do not operate your own Tor relay. You may also want to check whether operating a Tor relay is against the terms and conditions of your internet access provider.

### What you will need

  * A Raspberry Pi (any model/generation) with some kind of enclosure
  * An SD card with [Raspbian Stretch Lite][4]
  * An ethernet cable
  * A micro-USB cable for power
  * A keyboard and an HDMI-capable monitor (to use during the setup)

This guide assumes that you are setting this up on your home connection behind a generic cable or ADSL modem router that performs NAT translation (and it almost certainly does). Most of them have a USB port you can use to power up your Raspberry Pi, and if you’re only using the wifi functionality of the router, then it should have a free ethernet port for you to plug into. However, before we get to the point where we can set-and-forget your Raspberry Pi, we’ll need to set it up as a Tor relay node, for which you’ll need a keyboard and a monitor.

### The bootstrap script

I’ve adapted a popular Tor relay node bootstrap script for use with Raspbian Stretch -- you can find it in my GitHub repository here: <https://github.com/mricon/tor-relay-bootstrap-rpi>. Once you have booted up your Raspberry Pi and logged in with the default “pi” user, do the following:

```
sudo apt-get install -y git
git clone https://github.com/mricon/tor-relay-bootstrap-rpi
cd tor-relay-bootstrap-rpi
sudo ./bootstrap.sh
```

Here is what the script will do:

  1. Install the latest OS updates to make sure your Pi is fully patched
  2. Configure your system for automated unattended updates, so you automatically receive security patches when they become available
  3. Install the Tor software
  4. Tell your NAT router to forward the necessary ports to reach your relay (the ports we’ll use are 443 and 8080, since they are least likely to be filtered by your internet provider)

Once the script is done, you’ll need to configure the torrc file -- but first, decide how much bandwidth you want to donate to Tor traffic. To find out, type “[Speed Test][5]” into Google and click the “Run Speed Test” button. You can disregard the “Download speed” result, as your Tor relay can only operate as fast as your maximum upload bandwidth.

Therefore, take the “Mbps upload” number, divide by 8, and multiply by 1024 to find your bandwidth in kilobytes per second. E.g., if you got 21.5 Mbps for your upload speed, then that number is:

```
21.5 Mbps / 8 * 1024 = 2752 KBytes per second
```

You’ll want to limit your relay bandwidth to about half that amount and allow bursting to about three-quarters of it. Once decided, open /etc/tor/torrc using your favourite editor and tweak the bandwidth settings:

```
RelayBandwidthRate 1300 KBytes
RelayBandwidthBurst 2400 KBytes
```

Of course, if you’re feeling more generous, then feel free to put in higher numbers, though you don’t want to max out your outgoing bandwidth -- it will noticeably impact your day-to-day usage if these numbers are set too high.

While you have that file open, you should set two more things. First, the Nickname -- just for your own recordkeeping; and second, the ContactInfo line, which should list a single email address. Since your relay will be running unattended, you should use an email address that you regularly check -- you will receive an alert from the “Tor Weather” service if your relay goes offline for longer than 48 hours.

```
Nickname myrpirelay
ContactInfo you@example.com
```

Save the file and reboot the system to start the Tor relay.
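
If you would rather not reboot, restarting just the Tor service should also pick up the new settings (assuming the bootstrap script left Tor managed by systemd, which Raspbian Stretch uses; the instance unit name can vary):

```
sudo systemctl restart tor

# Follow the log to confirm the relay bootstraps; on Debian-based
# systems the running instance is usually the tor@default unit
sudo journalctl -u tor@default -f
```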
### Testing to make sure Tor traffic is flowing

If you would like to make sure that the relay is functioning, you can run the “arm” tool:

```
sudo -u debian-tor arm
```

It will take a while to start, especially on older-generation boards, but eventually it will show you a bar chart of incoming and outgoing traffic (or error messages that will help you troubleshoot your setup).

Once you are convinced that everything is functioning, you can unplug the keyboard and the monitor and relocate the Raspberry Pi into the basement, where it will quietly sit and shuffle encrypted bits around. Congratulations, you’ve helped improve privacy and combat malicious tracking online!

Learn more about Linux through the free ["Introduction to Linux"][6] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/intro-to-linux/2018/6/turn-your-raspberry-pi-tor-relay-node

作者:[Konstantin Ryabitsev][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/mricon
[1]:https://www.torproject.org/
[2]:https://en.wikipedia.org/wiki/Server_Name_Indication#Security_implications
[3]:https://www.linux.com/blog/2017/10/tips-secure-your-network-wake-krack
[4]:https://www.raspberrypi.org/downloads/raspbian/
[5]:https://www.google.com/search?q=speed+test
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -0,0 +1,593 @@

Bash tips for every day at the command line
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_keyboard_code.jpg?itok=YEtvcZOj)

As the default shell for many Linux and Unix variants, Bash includes a wide variety of underused features, so it was hard to decide what to discuss. Ultimately, I decided to focus on Bash tips that make day-to-day activities easier.

As a consultant, I see a wide variety of environments and work styles. I drew on this experience to narrow the tips to four broad categories: terminal and line tricks, navigation and files, history, and helpful commands. These categories are completely arbitrary and serve more to organize my own thoughts than as any kind of definitive classification. Many of the tips included here might subjectively fit in more than one category.

Without further ado, here are some of the most helpful Bash tricks I have encountered.

### Working with Bash history

One of the best ways to increase your productivity is to learn to use the Bash history more effectively. With that in mind, perhaps one of the most important tweaks you can make in a multi-user environment is to enable the `histappend` option in your shell. To do that, simply run the following command:

```
shopt -s histappend
```

This allows multiple terminal sessions to write to the history at the same time. In most environments this option is not enabled, which means that histories are often lost if you have more than a single Bash session open (either locally or over SSH).
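
To make the setting permanent, add it to your `~/.bashrc` (a common convention; the exact startup file varies by distribution). Pairing it with `history -a` in `PROMPT_COMMAND` also flushes each command to disk as soon as it runs:

```
# in ~/.bashrc
shopt -s histappend                             # append to, rather than overwrite, the history file
PROMPT_COMMAND="history -a; $PROMPT_COMMAND"    # write each command to the history file immediately
```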

Another common task is to repeat the last command with `sudo`. For example, suppose you want to create the directory `/etc/ansible/facts.d` with `mkdir`. Unless you are root, this command will fail. From what I have observed, most users hit the `up` arrow, scroll to the beginning of the line, and add `sudo`. There is an easier way. Simply run the command like this:

```
sudo !!
```

Bash will run `sudo` and then the entirety of the previous command. Here is exactly what it looks like when run in sequence:

```
[user@centos ~]$ mkdir -p /etc/ansible/facts.d
mkdir: cannot create directory ‘/etc/ansible’: Permission denied

[user@centos ~]$ sudo !!
sudo mkdir -p /etc/ansible/facts.d
```

When the **`!!`** is run, the full command is echoed out to the terminal so you know what was just executed.

Similar, but used much less frequently, is the **`!*`** shortcut. This tells Bash that you want all of the *arguments* from the previous command to be repeated in the current command. This could be useful for a command that has a lot of arguments you want to reuse. A simple example is creating a bunch of files and then changing the permissions on them:

```
[user@centos tmp]$ touch file1 file2 file3 file4
[user@centos tmp]$ chmod 777 !*
chmod 777 file1 file2 file3 file4
```

It is handy only in a specific set of circumstances, but it may save you some keystrokes.

Speaking of saving keystrokes, let's talk about finding commands in your history. Most users will do something like this:

```
history | grep <some command>
```

However, there is an easier way to search your history. If you press:

```
ctrl + r
```

Bash will do a reverse search of your history. As you start typing, results will begin to appear. For example:

```
(reverse-i-search)`hist': shopt -s histappend
```

In the above example, I typed `hist` and it matched the `shopt` command we covered earlier. If you continue pressing `ctrl + r`, Bash will continue to search backward through all of the other matches.

Our last trick isn't a trick as much as a helpful command you can use to count and display the most-used commands in your history:

```
[user@centos tmp]$ history | awk 'BEGIN {FS="[ \t]+|\\|"} {print $3}' | sort | uniq -c | sort -nr | head
     81 ssh
     50 sudo
     46 ls
     45 ping
     39 cd
     29 nvidia-xrun
     20 nmap
     19 export
```

In this example, you can see that `ssh` is by far the most-used command in my history at the moment.

### Navigation and file naming

You probably already know that if you type a command, filename, or folder name, you can hit the `tab` key once to complete the wording for you. This works if there is a single exact match. However, you might not know that if you hit `tab` twice, it will show you all of the matches based on what you have typed. For example:

```
[user@centos tmp]$ cd /lib <tab><tab>
lib/   lib64/
```

This can be very useful for filesystem navigation. Another helpful trick is to enable `cdspell` in your shell. You can do this by issuing the `shopt -s cdspell` command. This will help correct your typos:

```
[user@centos etc]$ cd /tpm
/tmp
[user@centos tmp]$ cd /ect
/etc
```

It's not perfect, but every little bit helps!

Once you have successfully changed directories, what if you need to return to your previous directory? This is not a big deal if you are not very deep into the directory tree. But if you are in a fairly deep path, such as `/var/lib/flatpak/exports/share/applications/`, you could type:

```
cd /va<tab>/lib/fla<tab>/ex<tab>/sh<tab>/app<tab>
```

Fortunately, Bash remembers your previous directory, and you can return there by simply typing `cd -`. Here is what it would look like:

```
[user@centos applications]$ pwd
/var/lib/flatpak/exports/share/applications

[user@centos applications]$ cd /tmp
[user@centos tmp]$ pwd
/tmp

[user@centos tmp]$ cd -
/var/lib/flatpak/exports/share/applications
```

That's all well and good, but what if you have a bunch of directories you want to navigate within easily? Bash has you covered there as well. There is a variable you can set that will help you navigate more effectively. Here is an example:

```
[user@centos applications]$ export CDPATH='~:/var/log:/etc'
[user@centos applications]$ cd hp
/etc/hp

[user@centos hp]$ cd Downloads
/home/user/Downloads

[user@centos Downloads]$ cd ansible
/etc/ansible

[user@centos ansible]$ cd journal
/var/log/journal
```

In the above example, I set my home directory (indicated with the tilde: `~`), `/var/log`, and `/etc`. Anything at the top level of these directories will be auto-filled in when you reference it. Directories that are not at the base of the directories listed in `CDPATH` will not be found. If, for example, the directory you are after is `/etc/ansible/facts.d/`, typing `cd facts.d` will not get you there. This is because while the directory `ansible` is found under `/etc`, `facts.d` is not. Therefore, `CDPATH` is useful for getting to the top of a tree you access frequently, but it may get cumbersome to manage when you're browsing a large folder structure.

Finally, let's talk about two common use cases that everyone handles at some point: changing a file extension and renaming files. At first glance, this may sound like the same thing, but Bash offers a few different tricks to accomplish these tasks.

While it may be a "down-and-dirty" operation, most users at some point need to create a quick copy of a file they are working on. Most will copy the filename exactly and simply append a file extension like `.old` or `.bak`. There is a quick shortcut for this in Bash. Suppose you have a filename like `spideroak_inotify_db.07pkh3` that you want to keep a copy of. You could type:

```
cp spideroak_inotify_db.07pkh3 spideroak_inotify_db.07pkh3.bak
```

You can make quick work of this by using copy/paste operations, using tab completion, possibly using one of the shortcuts to repeat an argument, or simply typing the whole thing out. However, the command below should prove even quicker once you get used to typing it:

```
cp spideroak_inotify_db.07pkh3{,.old}
```

This (as you can guess) copies the file by appending the `.old` file extension to it. That's great, you might say, but I want to rename a large number of files at once. Sure, you could write a `for` loop to deal with these (and in fact, I often do this for something complicated), but why would you when there is a handy utility called `rename`? There is some difference in the usage of this utility between Debian/Ubuntu and CentOS/Arch. The Debian-based `rename` uses a sed-like syntax:

```
user@ubuntu-1604:/tmp$ for x in `seq 1 5`; do touch old_text_file_${x}.txt; done

user@ubuntu-1604:/tmp$ ls old_text_file_*
old_text_file_1.txt  old_text_file_3.txt  old_text_file_5.txt
old_text_file_2.txt  old_text_file_4.txt

user@ubuntu-1604:/tmp$ rename 's/old_text_file/shiney_new_doc/' *.txt

user@ubuntu-1604:/tmp$ ls shiney_new_doc_*
shiney_new_doc_1.txt  shiney_new_doc_3.txt  shiney_new_doc_5.txt
shiney_new_doc_2.txt  shiney_new_doc_4.txt
```

On a CentOS or Arch box, it would look similar:

```
[user@centos tmp]$ for x in `seq 1 5`; do touch old_text_file_${x}.txt; done

[user@centos tmp]$ ls old_text_file_*
old_text_file_1.txt  old_text_file_3.txt  old_text_file_5.txt
old_text_file_2.txt  old_text_file_4.txt

[user@centos tmp]$ rename old_text_file centos_new_doc *.txt

[user@centos tmp]$ ls centos_new_doc_*
centos_new_doc_1.txt  centos_new_doc_3.txt  centos_new_doc_5.txt
centos_new_doc_2.txt  centos_new_doc_4.txt
```
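
For comparison, here is the `for` loop equivalent (my own sketch) for a box where neither flavor of `rename` is installed; it uses Bash parameter expansion to swap the prefix:

```
for f in old_text_file_*.txt; do
    mv -- "$f" "${f/old_text_file/portable_new_doc}"   # ${var/pattern/replacement} substitutes the first match
done
```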

### Bash key bindings

Bash has a lot of built-in keyboard shortcuts. You can find a list of them by typing `bind -p`. I thought it would be useful to highlight several, although some may be well-known:

```
ctrl + _ (undo)
ctrl + t (swap two characters)
ALT + t (swap two words)
ALT + . (prints last argument from previous command)
ctrl + x + * (expand glob/star)
ctrl + arrow (move forward a word)
ALT + f (move forward a word)
ALT + b (move backward a word)
ctrl + x + ctrl + e (opens the command string in an editor so that you can edit it before execution)
ctrl + e (move cursor to end)
ctrl + a (move cursor to start)
ctrl + xx (move to the opposite end of the line)
ctrl + u (cuts everything before the cursor)
ctrl + k (cuts everything after the cursor)
ctrl + y (pastes from the buffer)
ctrl + l (clears screen)
```

I won't discuss the more obvious ones. However, some of the most useful shortcuts I have found are the ones that let you delete words (or sections of text) and undo the deletion. Suppose you were going to stop a bunch of services using `systemd`, but you only wanted to start a few of them after some operation has completed. You might do something like this:

```
systemctl stop httpd mariadb nfs smbd
<hit the up arrow to get the previous command>
<use 'ctrl + w' to remove the unwanted arguments>
```

But what if you removed one too many? No problem—simply use `ctrl + _` to undo the last edit.

The other cut commands allow you to quickly remove everything from the cursor to the end or beginning of the line (using `ctrl + k` and `ctrl + u`, respectively). This has the added benefit of placing the cut text into the terminal buffer so you can paste it later on (using `ctrl + y`). These commands are hard to demonstrate here, so I strongly encourage you to try them out on your own.

Last but not least, I'd like to mention a seldom-used key combination that can be extremely handy in confined environments such as containers. If a command ever looks garbled by previous output, there is a solution: pressing `ctrl + x + ctrl + e` will open the command in whichever editor is set in the environment variable EDITOR. This will allow you to edit a long or garbled command in a text editor that (potentially) can wrap text. Saving your work and exiting, just as you would when working on a normal file, will execute the command upon leaving the editor.

### Miscellaneous tips

You may find that having colors displayed in your Bash shell can enhance your experience. If you are using a session that does not have colorization enabled, below is a series of commands you can place in your `.bash_profile` to add color to your session. These are fairly straightforward and should not require an in-depth explanation:

```
# enable colors
eval "`dircolors -b`"

# force ls to always use color and type indicators
alias ls='ls -hF --color=auto'

# make the dir command work kinda like in windows (long format)
alias dir='ls --color=auto --format=long'

# make grep highlight results using color
export GREP_OPTIONS='--color=auto'

# Add some colour to LESS/MAN pages
export LESS_TERMCAP_mb=$'\E[01;31m'
export LESS_TERMCAP_md=$'\E[01;33m'
export LESS_TERMCAP_me=$'\E[0m'
export LESS_TERMCAP_se=$'\E[0m'
export LESS_TERMCAP_so=$'\E[01;42;30m'
export LESS_TERMCAP_ue=$'\E[0m'
export LESS_TERMCAP_us=$'\E[01;36m'
```

Along with adjusting the various options within Bash, you can also use some neat tricks to save time. For example, to run two commands back-to-back, regardless of each one's exit status, use `;` to separate the commands, as seen below:

```
[user@centos /tmp]$ du -hsc * ; df -h
```

This simply calculates the amount of space each file in the current directory takes up (and sums it), then queries the system for the disk usage per block device. These commands run regardless of any errors generated by the `du` command.

What if you want an action to be taken upon successful completion of the first command? You can use the `&&` shorthand to indicate that you want to run the second command only if the first command returns a successful exit status. For example, suppose you want to reboot a machine only if the updates are successful:

```
[root@arch ~]$ pacman -Syu --noconfirm && reboot
```

Sometimes when running a command, you may want to capture its output. Most people know about the `tee` command, which will copy standard output to both the terminal and a file. However, if you want to capture more complex output from, say, `strace`, you will need to start working with [I/O redirection][1]. The details of I/O redirection are beyond the scope of this short article, but for our purposes we are concerned with `STDOUT` and `STDERR`. The best way to capture exactly what you are seeing is to combine the two in one file. To do this, use the `2>&1` redirection:

```
[root@arch ~]$ strace -p 1140 > strace_output.txt 2>&1
```

This will put all of the relevant output into a file called `strace_output.txt` for viewing later.

Sometimes during a long-running command, you may need to pause its execution. You can use the 'stop' shortcut `ctrl + z` to stop (but not kill) a job. The job gets added to the job queue, but you will no longer see it until you resume it. The job may be resumed at a later time by using the foreground command `fg`.

In addition, you may also simply pause a job with `ctrl + s` (terminal flow control). The job and its output stay in the terminal foreground, and use of the shell is not returned to the user. The job may be resumed by pressing `ctrl + q`.
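
Here is a quick illustration of the `ctrl + z` flow (any long-running command works; `sleep` stands in here), including `jobs` to list suspended tasks and `bg` to resume one in the background:

```
$ sleep 600
^Z
[1]+  Stopped                 sleep 600
$ jobs          # list suspended and background jobs
[1]+  Stopped                 sleep 600
$ bg %1         # resume job 1 in the background...
$ fg %1         # ...or bring it back to the foreground
```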

If you are working in a graphical environment with many terminals open, you may find it handy to have keyboard shortcuts for copying and pasting output. To do so, use the following shortcuts:

```
# Copies highlighted text
ctrl + shift + c

# Pastes text in buffer
ctrl + shift + v
```

Suppose in the output of an executing command you see another command being executed, and you want to get more information about it. There are a few ways to do this. If this command is in your path somewhere, you can run the `which` command to find out where that command is located on your disk:

```
[root@arch ~]$ which ls
/usr/bin/ls
```

With this information, you can inspect the binary with the `file` command:

```
[root@arch ~]$ file /usr/bin/ls
/usr/bin/ls: ELF 64-bit LSB pie executable x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=d4e02b88e596e4f82c6cc62a5bc4ce5827209a49, stripped
```

You can see all sorts of information, but the most important for most users is the `ELF 64-bit LSB` nonsense. This essentially means that it is a precompiled binary, as opposed to a script or other type of executable. A related tool you can use to inspect commands is the `command` tool itself. Simply running `command -V <command>` will give you different types of information:

```
[root@arch ~]$ command -V ls
ls is aliased to `ls --color=auto`

[root@arch ~]$ command -V bash
bash is /usr/bin/bash

[root@arch ~]$ command -V shopt
shopt is a shell builtin
```

Last but definitely not least, one of my favorite tricks, especially when working with containers or in environments where I have little knowledge or control, is the `echo` command. This command can be used to do everything from checking to make sure your `for` loop will run the expected sequence to allowing you to check whether remote ports are open. The syntax to check for an open port is very simple: `echo > /dev/<udp or tcp>/<server ip>/<port>`. For example:

```
user@ubuntu-1604:~$ echo > /dev/tcp/192.168.99.99/222
-bash: connect: Connection refused
-bash: /dev/tcp/192.168.99.99/222: Connection refused

user@ubuntu-1604:~$ echo > /dev/tcp/192.168.99.99/22
```

If the port is closed to the type of connection you are trying to make, you will get a `Connection refused` message. If the packet is successfully sent, there will be no output.
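
The other use mentioned above—checking what a `for` loop will do—deserves a quick sketch of its own: prefix the destructive command with `echo`, and the loop prints what it would run instead of running it:

```
user@ubuntu-1604:~$ for x in $(seq 1 3); do echo rm -f file_${x}.txt; done
rm -f file_1.txt
rm -f file_2.txt
rm -f file_3.txt
```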

I hope these tips make Bash more efficient and enjoyable to use. There are many more tricks hidden in Bash than I've listed here. What are some of your favorites?

#### Appendix 1. List of tips and tricks covered

```
# History related
ctrl + r (reverse search)
!! (rerun last command)
!* (reuse arguments from previous command)
!$ (use last argument of last command)
shopt -s histappend (allow multiple terminals to write to the history file)
history | awk 'BEGIN {FS="[ \t]+|\\|"} {print $3}' | sort | uniq -c | sort -nr | head (list the most-used history commands)

# File and navigation
cp /home/foo/realllylongname.cpp{,-old}
cd -
rename 's/text_to_find/been_renamed/' *.txt
export CDPATH='/var/log:~' (variable is used with the cd built-in)

# Colourize bash

# enable colors
eval "`dircolors -b`"
# force ls to always use color and type indicators
alias ls='ls -hF --color=auto'
# make the dir command work kinda like in windows (long format)
alias dir='ls --color=auto --format=long'
# make grep highlight results using color
export GREP_OPTIONS='--color=auto'

export LESS_TERMCAP_mb=$'\E[01;31m'
export LESS_TERMCAP_md=$'\E[01;33m'
export LESS_TERMCAP_me=$'\E[0m'
export LESS_TERMCAP_se=$'\E[0m' # end the info box
export LESS_TERMCAP_so=$'\E[01;42;30m' # begin the info box
export LESS_TERMCAP_ue=$'\E[0m'
export LESS_TERMCAP_us=$'\E[01;36m'

# Bash shortcuts
shopt -s cdspell (corrects typos)
ctrl + _ (undo)
ctrl + arrow (move forward a word)
ctrl + a (move cursor to start)
ctrl + e (move cursor to end)
ctrl + k (cuts everything after the cursor)
ctrl + l (clears screen)
ctrl + q (resume a command paused in the foreground)
ctrl + s (pause a long-running command in the foreground)
ctrl + t (swap two characters)
ctrl + u (cuts everything before the cursor)
ctrl + x + ctrl + e (opens the command string in an editor so that you can edit it before it runs)
ctrl + x + * (expand glob/star)
ctrl + xx (move to the opposite end of the line)
ctrl + y (pastes from the buffer)
ctrl + shift + c/v (copy/paste into terminal)

# Running commands in sequence
&& (run second command if the first is successful)
; (run second command regardless of success of first one)

# Redirecting I/O
2>&1 (redirect stdout and stderr to a file)

# Check for open ports
echo > /dev/tcp/<server ip>/<port>
`` (use backticks to shell out)

# Examine executable
which <command>
file <path/to/file>
command -V <some command binary> (tells you whether <some binary> is a built-in, binary, or alias)
```

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/bash-tricks

作者:[Steve Ovens][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/stratusss
[1]:https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-i-o-redirection
@ -0,0 +1,154 @@

What's all the C Plus Fuss? Bjarne Stroustrup warns of dangerous future plans for his C++
======

![](https://regmedia.co.uk/2018/06/15/shutterstock_38621860.jpg?x=442&y=293&crop=1)

**Interview** Earlier this year, Bjarne Stroustrup, creator of C++, managing director in the technology division of Morgan Stanley, and a visiting professor of computer science at Columbia University in the US, wrote [a letter][1] inviting those overseeing the evolution of the programming language to “Remember the Vasa!”

Easy for a Dane to understand no doubt, but perhaps more of a stretch for those with a few gaps in their knowledge of 17th century Scandinavian history. The Vasa was a Swedish warship, commissioned by King Gustavus Adolphus. It was the most powerful warship in the Baltic Sea from its maiden voyage on August 10, 1628, until a few minutes later, when it sank.

The formidable Vasa suffered from a design flaw: it was top-heavy, so much so that it was [undone by a gust of wind][2]. By invoking the memory of the capsized ship, Stroustrup served up a cautionary tale about the risks facing C++ as more and more features get added to the language.

Quite a few such features have been suggested. Stroustrup cited 43 proposals in his letter. He contends that those participating in the evolution of the ISO standard language, a group known as [WG21][3], are working to advance the language, but not together.

In his letter, he wrote:

>Individually, many proposals make sense. Together they are insanity to the point of endangering the future of C++.

He makes clear that he doesn’t interpret the fate of the Vasa to mean that incremental improvements spell doom. Rather, he takes it as a lesson to build a solid foundation, to learn from experience, and to test thoroughly.

With the recent conclusion of the C++ Standardization Committee meeting in Rapperswil, Switzerland, earlier this month, Stroustrup addressed a few questions put to him by _The Register_ about what's next for the language. (The most recent version is C++17, which arrived last year; the next version, C++20, is under development and expected in 2020.)

**_Register:_ In your note, Remember the Vasa!, you wrote:**

>The foundation begun in C++11 is not yet complete, and C++17 did little to make our foundation more solid, regular, and complete. Instead, it added significant surface complexity and increased the number of features people need to learn. C++ could crumble under the weight of these – mostly not quite fully-baked – proposals. We should not spend most of our time creating increasingly complicated facilities for experts, such as ourselves.

**Is C++ too challenging for newcomers, and if so, what features do you believe would make the language more accessible?**

_**Stroustrup:**_ Some parts of C++ are too challenging for newcomers.

On the other hand, there are parts of C++ that make it far more accessible to newcomers than C or 1990s C++. The difficulty is to get the larger community to focus on those parts and help beginners and casual C++ users to avoid the parts that are there to support implementers of advanced libraries.

I recommend the [C++ Core Guidelines][4] as an aide for that.

Also, my “A Tour of C++” can help people get on the right track with modern C++ without getting lost in 1990s complexities or ensnarled by modern facilities meant for expert use. The second edition of “A Tour of C++”, covering C++17 and parts of C++20, is on its way to the stores.

I and others have taught C++ to first-year university students with no previous programming experience in three months. It can be done as long as you don’t try to dig into every obscure corner of the language and focus on modern C++.

“Making simple things simple” is a long-term goal of mine. Consider the C++11 range-for loop:

```
for (int& x : v) ++x; // increment each element of the container v
```

where v can be just about any container. In C and C-style C++, that might look like this:

```
for (int i=0; i<MAX; i++) ++v[i]; // increment each element of the array v
```

Some people complained that adding the range-for loop made C++ more complicated, and they were obviously correct because it added a feature, but it made the _use_ of C++ simpler. It also eliminated some common errors with the use of the traditional for loop.

Another example is the C++11 standard thread library. It is far simpler to use and less error-prone than using the POSIX or Windows thread C APIs directly.
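
To illustrate that point (a minimal sketch of mine, not an example from the interview), launching and joining a thread in portable C++11 takes only a few lines:

```
#include <iostream>
#include <thread>

int main() {
    // std::thread runs the callable on a new thread of execution
    std::thread worker([] { std::cout << "hello from a worker thread\n"; });
    worker.join();  // wait for the thread to finish before main exits
    return 0;
}
```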

**_Register:_ How would you characterize the current state of the language?**

_**Stroustrup:**_ C++11 was a major improvement of C++, and C++14 completed that work. C++17 added quite a few features without offering much support for novel techniques. C++20 looks like it might become a major improvement. The state of compilers and standard-library implementations is excellent and very close to the latest standards. C++17 is already usable. The tool support is improving steadily. There are lots of third-party libraries and many new tools. Unfortunately, those can be hard to find.

The worries I expressed in the Vasa paper relate to a standards process that combines over-enthusiasm for novel facilities with perfectionism that delays significant improvements. “The best is the enemy of the good.” There were 160 participants at the June Rapperswil meeting. It is hard to keep a consistent focus in a group that large and diverse. There is also a tendency for experts to design more for themselves than for the community at large.

**Register: Is there a desired state for the language, or rather do you strive simply for a desired adaptability to what programmers require at any given time?**

**Stroustrup:** Both. I’d like to see C++ supporting a guaranteed completely type-safe and resource-safe style of programming. This should not be done by restricting applicability or adding cost, but by improved expressiveness and performance. I think it can be done and that the approach of giving programmers better (and easier to use) language facilities can get us there.

That end-goal will not be met soon, or through language design alone. We need a combination of improved language features, better libraries, static analysis, and rules for effective programming. The C++ Core Guidelines is part of my broad, long-term approach to improve the quality of C++ code.

**Register: Is there an identifiable threat to C++? If so, what form does that take? (e.g. slow evolution, the attraction of emerging low-level languages, etc...your note seems to suggest it may be too many proposals.)**

**Stroustrup:** Certainly; we have had 400 papers this year already. They are not all new proposals, of course. Many relate to the necessary and unglamorous work of precisely specifying the language and its standard library, but the volume is getting unmanageable. You can find all the committee papers on the WG21 website.

I wrote “Remember the Vasa!” as a call to action. I am scared of the pressure to add language features to address immediate needs and fashions, rather than to strengthen the language foundations (e.g. improving the static type system). Adding anything new, however minor, carries a cost, such as implementation, teaching, and tools upgrades. Major features are those that change the way we think about programming. Those are the ones we must concentrate on.

The committee has established a “Direction Group” of experienced people with strong track records in many areas of the language, the standard library, implementation, and real-world use. I’m a member, and we wrote up something on direction, design philosophy, and suggested areas of emphasis.

For C++20, we recommend focusing on:

  * Concepts
  * Modules (offering proper modularity and dramatic compile-time improvements)
  * Ranges (incl. some of the infinite sequence extensions)
  * Networking in the standard library

After the Rapperswil meeting, the odds are reasonable, though getting modules and networking is obviously a stretch. I’m an optimist, and the committee members are working very hard.

I don’t worry about other languages or new languages. I like programming languages. If a new language offers something useful that other languages don’t, it has a role and we can all learn from it. And then, of course, each language has its own problems. Many of C++’s problems relate to its very wide range of application areas, its very large and diverse user population, and overenthusiasm. Most language communities would love to have such problems.

**Register: Are there any architectural decisions about the language you've reconsidered?**

**Stroustrup:** I always consider older decisions and designs when I work on something new. For example, see my History of Programming papers 1, 2.

There are no major decisions I regret, though there is hardly any feature I wouldn’t do somewhat differently if I had to do it again.

As ever, the ability to deal directly with hardware plus zero-overhead abstraction is the guiding idea. The use of constructors and destructors to handle resources is key (RAII), and the STL is a good example of what can be done in a C++ library.
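
As a quick, unofficial illustration of the RAII idea he mentions (my sketch, not from the interview): the resource is acquired in the constructor and released in the destructor, so cleanup happens on every exit path, including exceptions:

```
#include <cstdio>
#include <stdexcept>

class File {
    std::FILE* f_;
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("open failed");
    }
    ~File() { std::fclose(f_); }    // runs even if an exception unwinds the stack
    File(const File&) = delete;     // single owner; copying would double-close
    File& operator=(const File&) = delete;
    std::FILE* handle() const { return f_; }
};

int main() {
    try {
        File f("data.txt");         // hypothetical input file
        // ... read from f.handle() ...
    }                               // f goes out of scope here; the file is closed
    catch (const std::exception&) {
        // open failed; nothing to clean up
    }
}
```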

**Register: Does the three-year release cadence, adopted in 2011 it seems, still work? I ask because Java has been dealing with a desire for faster iteration.**

**Stroustrup:** I think C++20 will be delivered on time (like C++14 and C++17 were) and that the major compilers will conform to it almost instantly. I also hope that C++20 will be a major improvement over C++17.

I don’t worry too much about how other languages manage their releases. C++ is controlled by a committee working under ISO rules, rather than by a corporation or a “beneficent dictator for life.” This will not change. For ISO standards, C++’s three-year cycle is a dramatic innovation. The norm is a 5- or 10-year cycle.

**Register: In your note you wrote:**

>We need a reasonably coherent language that can be used by 'ordinary programmers' whose main concern is to ship great applications on time.

**Are changes to the language sufficient to address this, or might this also involve more accessible tooling and educational support?**

**Stroustrup:** I try hard to communicate my ideas of what C++ is and how it might be used, and I encourage others to do the same.

In particular, I encourage presenters and authors to make useful ideas accessible to the great mass of C++ programmers, rather than demonstrating how clever they are by presenting complicated examples and techniques. My 2017 CppCon keynote was “Learning and Teaching C++”, and it also pointed to the need for better tools.

I mentioned build support and package managers. Those have traditionally been areas of weakness for C++. The standards committee now has a tools Study Group and will probably soon have an Education Study Group.

The C++ community has traditionally been completely disorganized, but over the last five years many more meetings and blogs have sprung up to satisfy the community’s appetite for news and support. CppCon, isocpp.org, and Meeting C++ are examples.

Design in a committee is very hard. However, committees are a fact of life in all large projects. I am concerned, but being concerned and facing up to the problems is necessary for success.

**Register: How would you characterize the C++ community process? Are there aspects of the communication and decision-making procedure that you'd like to see changed?**

**Stroustrup:** C++ doesn’t have a corporately controlled “community process”; it has an ISO standards process. We can’t significantly change the ISO rules. Ideally, we’d have a small full-time “secretariat” making the final decisions and setting directions, but that’s not going to happen. Instead, we have hundreds of people discussing online, about 160 people voting on technical issues, and about 70 organizations and 11 nations formally voting on the resulting proposals. That’s messy, but sometimes we make it work.

**Register: Finally, what upcoming C++ features do you feel will be most beneficial for C++ users?**

**Stroustrup:**

  + Concepts to significantly simplify generic programming (see the short sketch after this list)
  + Parallel algorithms – there is no easier way to use the power of the concurrency features of modern hardware
  + Coroutines, if the committee can decide on those for C++20
  + Modules to improve the way we organize our source code and dramatically improve compile times. I hope we can get such modules, but it is not yet certain that we can do that for C++20
  + A standard networking library, but it is not yet certain that we can do that for C++20

In addition:

  + Contracts (run-time checked pre-conditions, post-conditions, and assertions) could become significant for many
  + The date and time-zone support library will be significant for many (in industry)
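
For readers who have not yet seen concepts, here is a minimal, unofficial sketch (mine, not Stroustrup's) of the simplification he means; the constraint reads like documentation and produces a clear error at the call site instead of a template backtrace:

```
#include <concepts>

// C++20 abbreviated function template: only types satisfying
// std::integral are accepted, and the return type is constrained too.
std::integral auto twice(std::integral auto n) {
    return n + n;
}

int main() {
    int i = twice(21);   // fine: int is integral
    // twice(2.5);       // error: double does not satisfy std::integral
    return i == 42 ? 0 : 1;
}
```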

**Register: Is there anything else you'd like to add?**

**Stroustrup:** If the C++ standards committee can focus on major issues to solve major problems, C++20 will be great. Until then, we have C++17, which is still far better than many people’s outdated impressions of C++. ®

--------------------------------------------------------------------------------

via: https://www.theregister.co.uk/2018/06/18/bjarne_stroustrup_c_plus_plus/

作者:[Thomas Claburn][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.theregister.co.uk/Author/3190
[1]:http://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0977r0.pdf
[2]:https://www.vasamuseet.se/en/vasa-history/disaster
[3]:http://open-std.org/JTC1/SC22/WG21/
[4]:https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md
@ -0,0 +1,119 @@
|
||||
不像 MySQL 的 MySQL:MySQL 文档存储介绍
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_business_sign_store.jpg?itok=g4QibRqg)
|
||||
|
||||
MySQL 可以提供 NoSQL JSON <ruby>文档存储<rt>Document Store</rt></ruby>了,这样开发者保存数据前无需<ruby>规范化<rt>normalize</rt></ruby>数据、创建数据库,也无需在开发之前就制定好数据样式。从 MySQL 5.7 版本和 MySQL 8.0 版本开始,开发者可以在表的一列中存储 JSON 文档。由于引入 X DevAPI,你可以从你的代码中移除令人不爽的结构化查询字符串,改为使用支持现代编程设计的 API 调用。
|
||||
|
||||
系统学习过结构化查询语言(SQL)、<ruby>关系理论<rt>relational theory</rt></ruby>和其它关系数据库底层理论的开发者并不多,但他们需要一个安全可靠的数据存储。如果数据库管理人员不足,事情很快就会变得一团糟,
|
||||
|
||||
[MySQL 文档存储][1] 允许开发者跳过底层数据结构创建、数据规范化和其它使用传统数据库时需要做的工作,直接存储数据。只需创建一个 JSON <ruby>文档集合<rt>document collection</rt></ruby>,接着就可以使用了。
|
||||
|
||||
### JSON 数据类型
|
||||
|
||||
所有这一切都基于多年前 MySQL 5.7 引入的 JSON 数据类型。允许在表的一行中提供大约 1GB 的列。数据必须是有效的 JSON,否则服务器会报错;但开发者可以自由使用这些空间。
|
||||
|
||||
### X DevAPI
|
||||
|
||||
旧的 MySQL 协议已经历经差不多四分之一个世纪,已经显现出疲态,因此新的协议被开发出来,协议名为 [X DevAPI][2]。协议引入高级会话概念,允许代码从单台服务器扩展到多台,使用符合<ruby>通用主机编程语言样式<rt>common host-language programming patterns</rt></ruby>的非阻塞异步 I/O。需要关注的是如何遵循现代实践和编码风格,同时使用 CRUD (create, replace, update, delete) 样式。换句话说,你不再需要在你精美、淳朴的代码中嵌入丑陋的 SQL 语句字符串。

### Code examples

A new shell, called the [MySQL Shell][3], supports the new protocol. Besides talking to MySQL servers, it can be used to set up high-availability clusters and check servers for upgrade readiness. It supports three modes of interaction: JavaScript, Python, and SQL.

The code examples below use the MySQL Shell in JavaScript mode, as the `JS>` prompt indicates.

Here, we log in as user `dstokes` with password `password` to the `demo` schema on the local system; `db` is a pointer to the `demo` schema.

```
$ mysqlsh dstokes:password@localhost/demo
JS> db.createCollection("example")
JS> db.example.add(
      {
        Name: "Dave",
        State: "Texas",
        foo: "bar"
      }
    )
JS>
```

In the example above, we log in to the server, connect to the `demo` schema, create a collection named `example`, and insert a record -- all without creating a table or writing any SQL. We can use (or abuse) the data however we can imagine. It is not an object-relational mapper, because there is no mapping of code to SQL; the new protocol talks directly to the server layer.

### Node.js support

The new shell is pretty neat and you can get a lot done with it, but you'll probably want to use your programming language of choice. The following example uses the `world_x` sample database to search for a record whose `_id` field matches "CAN." We point at the desired collection in the database and call `find` with the parameters we want. Again, no SQL involved.

```
var mysqlx = require('@mysql/xdevapi');
mysqlx.getSession({              // Auth to server
  host: 'localhost',
  port: '33060',
  dbUser: 'root',
  dbPassword: 'password'
}).then(function (session) {     // use world_x.countryinfo
  var schema = session.getSchema('world_x');
  var collection = schema.getCollection('countryinfo');

  return collection              // Get document for 'CAN'
    .find("$._id == 'CAN'")
    .limit(1)
    .execute(doc => console.log(doc))
    .then(() => console.log("\n\nAll done"))
    .then(() => session.close()); // close only after the query completes
});
```

The next example uses PHP to look for a record whose `_id` field matches "USA":

```
<?PHP
// Connection parameters
$user = 'root';
$passwd = 'S3cret#';
$host = 'localhost';
$port = '33060';
$connection_uri = 'mysqlx://'.$user.':'.$passwd.'@'.$host.':'.$port;
echo $connection_uri . "\n";

// Connect as a Node Session
$nodeSession = mysql_xdevapi\getNodeSession($connection_uri);
// "USE world_x" schema
$schema = $nodeSession->getSchema("world_x");
// Specify collection to use
$collection = $schema->getCollection("countryinfo");
// Equivalent of SELECT * FROM countryinfo WHERE _id = "USA"
$result = $collection->find('_id = "USA"')->execute();
// Fetch/Display data
$data = $result->fetchAll();
var_dump($data);
?>
```

Note that the `find` operator used in both examples looks remarkably similar across the two languages. This consistency should help developers who hop between programming languages, and it lowers the learning curve for those picking up a new one.

Other supported languages include C, Java, Python, and JavaScript, with more planned.
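
For comparison, a Python version of the same lookup might look like the following sketch (assuming the `mysqlx` module that ships with MySQL Connector/Python and the same `world_x` sample database):

```
import mysqlx

session = mysqlx.get_session({
    'host': 'localhost', 'port': 33060,
    'user': 'root', 'password': 'password'
})
schema = session.get_schema('world_x')
collection = schema.get_collection('countryinfo')

# Same find() verb and filter style as the JavaScript and PHP examples
result = collection.find("_id = 'CAN'").limit(1).execute()
for doc in result.fetch_all():
    print(doc)

session.close()
```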

### Benefit from both approaches

Did I mention that data you enter the NoSQL way can also be used from the SQL side? Or that the new NoSQL method can access data living in old-fashioned relational tables? You now have options for using your MySQL server: as an SQL server, as a NoSQL server, or as both. A short sketch of this dual access follows.
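
Here is an illustrative Python sketch of that mix (it assumes the `example` collection created earlier in the `demo` schema; collections are stored under the hood as a table with a JSON `doc` column, which plain SQL can query):

```
import mysqlx

session = mysqlx.get_session('mysqlx://dstokes:password@localhost:33060')
# NoSQL in, SQL out: query the collection's backing table with ordinary SQL
result = session.sql("SELECT doc->>'$.Name' AS name FROM demo.example").execute()
for row in result.fetch_all():
    print(row[0])
session.close()
```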

Dave Stokes will present "MySQL without the SQL -- oh my!" at [Southeast LinuxFest][4], June 8-10, in Charlotte, North Carolina.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/mysql-document-store

Author: [Dave Stokes][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [pinewall](https://github.com/pinewall)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/davidmstokes
[1]:https://www.mysql.com/products/enterprise/document_store.html
[2]:https://dev.mysql.com/doc/x-devapi-userguide/en/
[3]:https://dev.mysql.com/downloads/shell/
[4]:http://www.southeastlinuxfest.org/

@ -0,0 +1,133 @@

Turn your Raspberry Pi into a Tor relay node
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tor-onion-router.jpg?itok=6WUl0ElH)

If you're anything like me, you bought a Raspberry Pi when the first or second generation came out, played with it for a while, and then set it aside to gather dust. After all, unless you're a robotics hobbyist, you're unlikely to spend much time on a slow computer with just 256 MB of RAM. That's not to say you can't build something cool with one, but between work and other commitments, I never found the chance to put old hardware to new use.

However, if you'd like to put it to good use without investing much of your time or resources, consider turning your old Raspberry Pi into a perfectly good Tor relay node.

### What is a Tor relay node?

You may have heard of the [Tor project][1] before; if not, here's a quick summary. "Tor" stands for "The Onion Router," and it is a technology created to combat online tracking and other privacy violations.

Everything you do on the internet leaves a set of digital footprints in every piece of equipment your IP packets traverse: all of the switches, routers, and load balancers, plus the destination network, record the IP address your session originated from, as well as the IP address of the internet resource you accessed (and often its hostname, [even when using HTTPS][2]). If you're browsing from home, your IP address can be mapped directly to where your household lives. If you use a VPN service ([as you should][3]), then your IP address maps to your VPN provider, and your VPN provider can map it back to your home. Either way, odds are that someone somewhere is building an online profile of you based on the sites you visit and how long you spend on each of them. Such profiles are then sold, aggregated with profiles collected from other services, and monetized via ad networks. At least, that's the optimist's view of how the data is used -- I'm sure you can come up with plenty of more malicious examples.

The Tor project attempts to provide a solution to this problem by making it impossible (or at least unreasonably difficult) to trace your endpoint IP address. Tor achieves this by bouncing your connection through a chain of anonymizing relays consisting of entry nodes, relay nodes, and exit nodes (the toy sketch after the list below illustrates the idea):

  1. The **entry node** knows your IP address and the IP address of the relay node, but not the final destination of the request
  2. The **relay node** knows the IP addresses of the entry node and the exit node, but neither the origin nor the final destination
  3. The **exit node** knows only the relay node and the final destination; it is the node that decrypts the traffic before it reaches its final destination
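
To make that "each node knows only its neighbors" property concrete, here is a toy Python sketch. It is purely illustrative -- real Tor encrypts each layer so that only the intended node can peel it -- but it shows how a layered message reveals just one next hop at a time:

```
import json

def wrap(payload, route):
    # Build the onion: the innermost layer holds the real request
    for nxt in reversed(route):
        payload = json.dumps({"next": nxt, "data": payload})
    return payload

def peel(layer):
    msg = json.loads(layer)
    return msg["next"], msg["data"]

onion = wrap("GET example.com", ["relay", "exit", "destination"])
hop, onion = peel(onion)  # entry node learns only "relay", sees an opaque blob
hop, onion = peel(onion)  # relay node learns only "exit" -- not source or target
hop, onion = peel(onion)  # exit node learns the destination and the request
print(hop, onion)         # -> destination GET example.com
```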

The relay node plays a crucial role in this exchange because it creates a cryptographic barrier between the source of the request and the destination. Even if adversaries intent on spying on your data control the exit node, they cannot learn where the request originated unless they control the entire Tor relay chain.

As long as there are plenty of relay nodes, your privacy stays protected -- which is why I heartily recommend that you set up and run a relay node if you have some home bandwidth to spare.

#### Things to keep in mind when considering a Tor relay

A Tor relay node only receives and forwards encrypted traffic -- it never accesses any sites or online resources itself -- so you don't need to worry that someone will use your home IP address to directly browse unsavory sites. That said, if you live in a jurisdiction where offering anonymity-enhancing services is against the law, do not run a Tor relay. You should also check whether your internet service provider's terms of service permit running one.

### What you will need

  * A Raspberry Pi (any model or generation) with its usual peripherals
  * An SD card with [Raspbian Stretch Lite][4]
  * An Ethernet cable
  * A micro-USB cable for power
  * A keyboard and an HDMI-capable monitor (for use during setup)

This guide assumes that you already have a home cable or ADSL router performing NAT translation (and you almost certainly do). Most Raspberry Pi models have a USB port that can be used to power the board, and if you're only using your router's WiFi, the router should have a spare Ethernet port. But before we turn the Pi into a "set it and forget it" Tor relay, we'll need the keyboard and monitor.

### The bootstrap script

I have adapted a popular Tor relay bootstrap script for use on the Raspberry Pi -- you can find it in my GitHub repository at <https://github.com/mricon/tor-relay-bootstrap-rpi>. Once you have booted the Pi and logged in as the default "pi" user, do the following:

```
sudo apt-get install -y git
git clone https://github.com/mricon/tor-relay-bootstrap-rpi
cd tor-relay-bootstrap-rpi
sudo ./bootstrap.sh
```

Here is what the script does:

  1. Installs the latest OS updates so the Pi is fully patched
  2. Configures the system for unattended automatic updates, so it receives and installs updates as they become available
  3. Installs the Tor software
  4. Tells your NAT router to forward the necessary ports to the relay (the ports used are 443 and 8080, since they are the least likely to be filtered by your internet provider)

Once the script completes, you'll need to configure the torrc file -- but first, decide how much bandwidth you want to donate to Tor traffic. To find out what you have, search Google for "[Speed Test][5]" and click the "Run Speed Test" button. You can ignore the "Download speed" result, because your Tor relay can never forward traffic faster than your maximum upload bandwidth.

So, take the "Mbps upload" number, divide it by 8, and multiply by 1024 to get the bandwidth in kilobytes per second. For example, if your upload bandwidth is 21.5 Mbps, the math works out to:

```
21.5 Mbps / 8 * 1024 = 2752 KBytes per second
```

Limit your relay bandwidth rate to half of that number, and allow it to burst to three-quarters of it. Once you've decided, open /etc/tor/torrc in your favorite editor and adjust the bandwidth settings:

```
RelayBandwidthRate 1300 KBytes
RelayBandwidthBurst 2400 KBytes
```

Of course, if you're feeling more generous, you can raise those numbers, though try not to set them to your maximum outgoing bandwidth -- doing so would noticeably affect your own daily use.
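
If you'd rather not do the arithmetic by hand, a tiny helper like the following (an illustrative Python sketch, not part of the bootstrap script) computes both settings from your measured upload speed:

```
def torrc_bandwidth(upload_mbps):
    """Suggest torrc settings: rate at half the available KB/s,
    burst at three-quarters of it."""
    kbytes_per_sec = upload_mbps / 8 * 1024
    rate = int(kbytes_per_sec * 0.5)
    burst = int(kbytes_per_sec * 0.75)
    return ("RelayBandwidthRate %d KBytes\n"
            "RelayBandwidthBurst %d KBytes" % (rate, burst))

print(torrc_bandwidth(21.5))
# RelayBandwidthRate 1376 KBytes
# RelayBandwidthBurst 2064 KBytes
```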

While you have the file open, set two more things. First, a nickname -- this is just for your own record-keeping. Second, contact information -- a single email address will do. Since the relay will run unattended, use an email address you check regularly -- the "Tor Weather" service will alert you if your relay goes offline for more than 48 hours.

```
Nickname myrpirelay
ContactInfo you@example.com
```

Save the file and reboot the system to start the Tor relay.

### Testing to make sure Tor traffic is flowing

If you'd like to confirm that the relay is doing its job, you can run the "arm" tool:

```
sudo -u debian-tor arm
```

It will take a while to start, especially on older boards. It usually shows you a bar chart of inbound and outbound traffic (or error messages, which will help you troubleshoot).

Once you're satisfied that everything is working, unplug the keyboard and monitor, relocate the Pi to the basement, and leave it there quietly passing encrypted bits around. Congratulations -- you've helped improve privacy and defend against malicious tracking online!

Learn more about Linux through the free ["Introduction to Linux"][6] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/intro-to-linux/2018/6/turn-your-raspberry-pi-tor-relay-node

Author: [Konstantin Ryabitsev][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://www.linux.com/users/mricon
[1]:https://www.torproject.org/
[2]:https://en.wikipedia.org/wiki/Server_Name_Indication#Security_implications
[3]:https://www.linux.com/blog/2017/10/tips-secure-your-network-wake-krack
[4]:https://www.raspberrypi.org/downloads/raspbian/
[5]:https://www.google.com/search?q=speed+test
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux