diff --git a/sources/talk/20180611 12 fiction books for Linux and open source types.md b/sources/talk/20180611 12 fiction books for Linux and open source types.md
new file mode 100644
index 0000000000..db21ae0e7f
--- /dev/null
+++ b/sources/talk/20180611 12 fiction books for Linux and open source types.md
@@ -0,0 +1,113 @@
+12 fiction books for Linux and open source types
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/book_list_fiction_sand_vacation_read.jpg?itok=IViIZu8J)
+
+For this book list, I reached out to our writer community to ask which fiction books they would recommend to their peers. What I love about this question, and the answers that follow, is that the list gives us a deeper look into the writers' personalities. Fiction favorites are unlike non-fiction recommendations: your technical skills and interests may influence what you like to read, but it's much more your personality and life experiences that draw you to pick out, and love, a particular fiction book.
+
+These people are your people. I hope you find something interesting to add to your reading list.
+
+**[Ancillary Justice][1] by Ann Leckie**
+
+Open source is all about how one individual can start a movement. Yet, at the same time, it's about the power of a voluntary collective moving together towards a common goal. Ancillary Justice makes you ponder both concepts.
+
+This book is narrated by Breq, who is an "ancillary," an enslaved human body that was grafted into the soul of a warship. When that warship was destroyed, Breq kept all the ship's memories and its identity but then had to live in a single body instead of thousands. In spite of the huge change in her power, Breq has a cataclysmic influence on all around her, and she inspires both loyalty and love. She may have once been enslaved to an AI, but now that she is free, she is powerful. She learns to adapt to exercising her free will, and the decisions she makes change her and the world around her. Breq pushes for openness in the rigid Radch, the dominant society of the book. Her actions transform the Radch into something new.
+
+Ancillary Justice is also about language, loyalty, sacrifice, and the disastrous effects of secrecy. Once you've read this book, you will never feel the same about what makes someone or something human. What makes you YOU? Can who you are really be destroyed while your body still lives?
+
+Like the open source movement, Ancillary Justice makes you think and question the status quo of the novel and of the world around you. Read it. (Recommendation and review by [Ingrid Towey][2])
+
+**[Cryptonomicon][3] by Neal Stephenson**
+
+Set during WWII and the present day (or the near future at the time of writing), Cryptonomicon captures the excitement of a startup, the perils of war, community action against authority, and the pitfalls of cryptography. It's a book to keep coming back to, as it has multiple layers and combines a techy outlook with intrigue and a decent love story. It does a good job of asking interesting questions like "is technology always an unbounded good?" and of making you realise that the people of yesterday were just as clever, and as human, as we are today. (Recommendation and review by [Mike Bursell][4])
+
+**[Daemon][5] by Daniel Suarez**
+
+Daemon is the first in a two-part series that details the events that happen when a computer daemon (process) is awakened and wreaks havoc on the world. The story is an exciting thriller that borders on creepy due to the realism in how the technology is portrayed, and it outlines just how dependent we are on technology. (Recommendation and review by [Jay LaCroix][6])
+
+**[Going Postal][7] by Terry Pratchett**
+
+This book is a good read for Linux and open source enthusiasts because of the depth and relatability of its characters, its humor, and the unique outsider narration that runs through the book. Terry Pratchett books are like Jim Henson movies: fiercely creative, appealing to all, but especially to the maker, the tinkerer, the hacker, and those daring to dream.
+
+The main character is a chancer, a fly-by-night who has never considered the results of their actions. They are not committed to anything and have never formed real (non-monetary) connections. The story follows the outcomes of their actions, a tale of redemption that takes the protagonist on an out-of-control adventure. It's funny, edgy, and unfamiliar, much like the initial 1990s introduction to Linux was for me. (Recommendation and review by [Lewis Cowles][8])
+
+**[Microserfs][9] by Douglas Coupland**
+
+Anyone who lived through the dotcom bubble of the 1990s will identify with this heartwarming tale of a young group of Microsoft engineers who end up leaving the company for a startup, moving to Silicon Valley, and becoming each other's support through life, death, love, and loss.
+
+There is a lot of humor to be found in this book, like this line: "This is my computer. There are many like it, but this one is mine..." It's a riff on the Rifleman's Creed: "This is my rifle. There are many like it..."
+
+If you've ever spent 16 hours a day coding, while fueling yourself with Skittles and Mountain Dew, this story is for you. (Recommendation and review by [Jet Anderson][10])
+
+**[Open Source][11] by M. M. Frick**
+
+Casey Shenk is a vending-machine technician from Savannah, Georgia, by day and a blogger by night. Casey's keen insights into the details of news reports, both true and false, lead him to unravel a global plot involving arms sales, the Middle East, Russia, Israel, and the highest levels of power in the United States. Casey connects the pieces using "Open Source Intelligence," which is simply reading and analyzing information that is free and open to the public.
+
+I bought this book because of its title three years ago, just as I was learning about open source. I thought it would be a book of open source fiction. Unfortunately, the book has nothing to do with open source as we define it. I had hoped that Casey would use some open source tools or methods in his investigation, such as Wireshark or Maltego, and write his posts with LibreOffice, WordPress, and the like. However, "open source" simply refers to the fact that his sources are "open."
+
+Although I was disappointed that this book was not what I expected, Frick, a Navy officer, packed the book with well-researched and interesting twists and turns. If you are looking for a book that involves Linux, command lines, GitHub, or any other open source elements, then this is not the book for you. (Recommendation and review by [Jeff Macharyas][12])
+
+**[The Tao of Pooh][13] by Benjamin Hoff**
+
+Linux and the open source ethos is a way of approaching life and getting things done that relies on both the individual and the collective goodwill of the community it serves. Leadership and service are earned through individual contribution and merit rather than the arbitrary assignment of value in traditional hierarchies. This is the natural way of getting things done. The power of open source is its authentic gift of self to a community of developers and end users. Being part of such a community of developers and contributors invites us to share our unique gift with the wider world. In The Tao of Pooh, Hoff celebrates that unique gift of self, using the metaphor of Winnie the Pooh wed with Taoist philosophy. (Recommendation and review by [Don Watkins][14])
+
+**[The Golem and the Jinni][15] by Helene Wecker**
+
+The eponymous otherworldly beings accidentally find themselves in New York City in the early 1900s and have to restart their lives far from their homelands. It's rare to find a book with such an original premise, let alone one that can follow through with it so well and with such heart. (Recommendation and review by [VM Brasseur][16])
+
+**[The Rise of the Meritocracy][17] by Michael Young**
+
+Meritocracy—one of the most pervasive and controversial notions circulating in open source discourses—is for some critics nothing more than a quaint fiction. No surprise for them, then, that the term originated in fiction. Michael Young's dystopian science fiction novel introduced the term into popular culture in 1958; the eponymous concept characterizes a 2034 society entirely bent on rewarding the best, the brightest, and the most talented. "Today we frankly recognize that democracy can be no more than aspiration, and have rule not so much by the people as by the cleverest people," writes the book's narrator in this pseudo-sociological account of future history, "not an aristocracy of birth, not a plutocracy of wealth, but a true meritocracy of talent."
+
+Would a truly meritocratic society work as intended? We can only imagine. Young's answer, anyway, has serious consequences for the fictional sociologist. (Recommendation and review by [Bryan Behrenshausen][18])
+
+**[Throne of the Crescent Moon][19] by Saladin Ahmed**
+
+The protagonist, Adoulla, is a man who just wants to retire from ghul hunting and settle down, but the world has other plans for him. Accompanied by his assistant and a vengeful young warrior, he sets off to end the ghul scourge and find revenge. While it sounds like your typical fantasy romp, the Middle Eastern setting of the story sets it apart, and Ahmed's tight, skillful writing pulls you in. (Recommendation and review by [VM Brasseur][16])
+
+**[Walkaway][20] by Cory Doctorow**
+
+It's hard to approach this science fiction book because it's so different from other science fiction books. It's timely because, in an age of rage―one producing a seemingly endless parade of dystopias in fiction and in reality―this book is hopeful. We need hopeful things. Open source fans will like it because it is hopeful thanks to open, shared technology. I don't want to give too much away, but let's just say this book exists in a world where advanced 3D printing is so mainstream (and old) that you can practically 3D print anything. The basic needs of Maslow's hierarchy are essentially taken care of, so you're left with human relationships.
+
+"You wouldn't steal a car" turns into "you can fork a house or a city." This creates a present that can constantly be remade, so the attachment to things becomes practically unnecessary. Thus, people can―and do―just walk away. This wonderful (and complicated) future setting is the ever-present reality surrounding a group of characters, their complicated relationships, and a complex class struggle in a post-scarcity world.
+
+Best book I've read in years. Thanks, Cory! (Recommendation and review by [Kyle Conway][21])
+
+**[Who Moved My Cheese?][22] by Spencer Johnson**
+
+The secret to success in leading open source projects and open companies is agility, plus motivating everyone to move beyond their comfort zones and embrace change. Many people find change difficult and do not see the advantage that comes from developing an agile mindset. This book is about the difference in how mice and people experience and respond to change. It's an easy read and a quick way to expand your mind and think differently about whatever problem you're facing today. (Recommendation and review by [Don Watkins][14])
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/6/fiction-book-list
+
+作者:[Jen Wike Huger][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/remyd
+[1]:https://www.annleckie.com/novel/ancillary-justice/
+[2]:https://opensource.com/users/i-towey
+[3]:https://www.amazon.com/Cryptonomicon-Neal-Stephenson-ebook/dp/B000FC11A6/ref=sr_1_1?s=books&ie=UTF8&qid=1528311017&sr=1-1&keywords=Cryptonomicon
+[4]:https://opensource.com/users/mikecamel
+[5]:https://www.amazon.com/DAEMON-Daniel-Suarez/dp/0451228731
+[6]:https://opensource.com/users/jlacroix
+[7]:https://www.amazon.com/Going-postal-Terry-PRATCHETT/dp/0385603428
+[8]:https://opensource.com/users/lewiscowles1986
+[9]:https://www.amazon.com/Microserfs-Douglas-Coupland/dp/0061624268
+[10]:https://opensource.com/users/thatsjet
+[11]:https://www.amazon.com/Open-Source-M-Frick/dp/1453719989
+[12]:https://opensource.com/users/jeffmacharyas
+[13]:https://www.amazon.com/Tao-Pooh-Benjamin-Hoff/dp/0140067477
+[14]:https://opensource.com/users/don-watkins
+[15]:https://www.amazon.com/Golem-Jinni-Novel-P-S/dp/0062110845
+[16]:https://opensource.com/users/vmbrasseur
+[17]:https://www.amazon.com/Rise-Meritocracy-Classics-Organization-Management/dp/1560007044
+[18]:https://opensource.com/users/bbehrens
+[19]:https://www.amazon.com/Throne-Crescent-Moon-Kingdoms/dp/0756407788
+[20]:https://craphound.com/category/walkaway/
+[21]:https://opensource.com/users/kreyc
+[22]:https://www.amazon.com/Moved-Cheese-Spencer-Johnson-M-D/dp/0743582853
diff --git a/sources/talk/20180613 AI Is Coming to Edge Computing Devices.md b/sources/talk/20180613 AI Is Coming to Edge Computing Devices.md
new file mode 100644
index 0000000000..0dc34c9ba3
--- /dev/null
+++ b/sources/talk/20180613 AI Is Coming to Edge Computing Devices.md
@@ -0,0 +1,66 @@
+AI Is Coming to Edge Computing Devices
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ai-edge.jpg?itok=nuNfRbW8)
+
+Very few non-server systems run software that could be called machine learning (ML) and artificial intelligence (AI). Yet, server-class “AI on the Edge” applications are coming to embedded devices, and Arm intends to fight with Intel and AMD over every last one of them.
+
+Arm recently [announced][1] a new [Cortex-A76][2] architecture that is claimed to boost the processing of AI and ML algorithms on edge computing devices by a factor of four. This does not include ML performance gains promised by the new Mali-G76 GPU. There’s also a Mali-V76 VPU designed for high-res video. The Cortex-A76 and the two Mali designs are intended to “complement” Arm’s Project Trillium Machine Learning processors (see below).
+
+### Improved performance
+
+The Cortex-A76 differs from the [Cortex-A73][3] and [Cortex-A75][4] IP designs in that it’s designed as much for laptops as for smartphones and high-end embedded devices. Cortex-A76 provides “35 percent more performance year-over-year,” compared to Cortex-A75, claims Arm. The IP, which is expected to arrive in products a year from now, is also said to provide 40 percent improved efficiency.
+
+Like the Cortex-A75, which is equivalent to the latest Kryo cores available in Qualcomm’s [Snapdragon 845][5], the Cortex-A76 supports [DynamIQ][6], Arm’s more flexible version of its big.LITTLE multi-core scheme. Unlike the Cortex-A75, which was announced with a Cortex-A55 companion chip, Arm had no new DynamIQ companion for the Cortex-A76.
+
+Cortex-A76 enhancements are said to include decoupled branch prediction and instruction fetch, as well as Arm’s first 4-wide decode core, which boosts the maximum instruction per cycle capability. There’s also higher integer and vector execution throughput, including support for dual-issue native 16B (128-bit) vector and floating-point units. Finally, the new full-cache memory hierarchy is “co-optimized for latency and bandwidth,” says Arm.
+
+Unlike the latest high-end Cortex-A releases, Cortex-A76 represents “a brand new microarchitecture,” says Arm. This is confirmed by [AnandTech’s][7] usual deep-dive analysis. Cortex-A73 and -A75 debuted elements of the new “Artemis” architecture, but the Cortex-A76 is built from scratch with Artemis.
+
+The Cortex-A76 should arrive on 7nm-fabricated TSMC products running at 3GHz, says AnandTech. The 4x improvements in ML workloads are primarily due to new optimizations in the ASIMD pipelines “and how dot products are handled,” says the story.
+
+Meanwhile, [The Register][8] noted that the Cortex-A76 is Arm’s first design that will exclusively run 64-bit kernel-level code. The cores will support 32-bit code, but only at non-privileged levels, says the story.
+
+### Mali-G76 GPU and Mali-V76 VPU
+
+The new Mali-G76 GPU announced with Cortex-A76 targets gaming, VR, AR, and on-device ML. The Mali-G76 is said to provide 30 percent more efficiency and performance density and 1.5x improved performance for mobile gaming. The Bifrost architecture GPU also provides 2.7x ML performance improvements compared to the Mali-G72, which was announced last year with the Cortex-A75.
+
+The Mali-V76 VPU supports UHD 8K viewing experiences. It’s aimed at 4x4 video walls, which are especially popular in China, and it is designed to support the 8K video coverage that Japan is promising for the 2020 Olympics. 8K@60 streams require four times the bandwidth of 4K@60 streams. To achieve this, Arm added an extra AXI bus and doubled the line buffers throughout the video pipeline. The VPU also supports 8K@30 decode.
+
+### Project Trillium’s ML chip detailed
+
+Arm previously revealed other details about the [Machine Learning][9] (ML) processor, also referred to as MLP. The ML chip will accelerate AI applications including machine translation and face recognition.
+
+The new processor architecture is part of the Project Trillium initiative for AI, and follows Arm’s second-gen Object Detection (OD) Processor for optimizing visual processing and people/object detection. The ML design will initially debut as a co-processor in mobile phones by late 2019.
+
+Numerous block diagrams for the MLP were published by [AnandTech][10], which was briefed on the design. While stating that any judgment about the performance of the still unfinished ML IP will require next year’s silicon release, the publication says that the ML chip appears to check off all the requirements of a neural network accelerator, including providing efficient convolutional computations and data movement while also enabling sufficient programmability.
+
+Arm claims the chips will provide >3TOPs per Watt performance in 7nm designs with absolute throughputs of 4.6TOPs, implying a target power of approximately 1.5W. For programmability, the MLP will initially target Android’s [Neural Networks API][11] and [Arm’s NN SDK][12].
+
+Join us at [Open Source Summit + Embedded Linux Conference Europe][13] in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2018/6/ai-coming-edge-computing-devices
+
+作者:[Eric Brown][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/ericstephenbrown
+[1]:https://www.arm.com/news/2018/05/arm-announces-new-suite-of-ip-for-premium-mobile-experiences
+[2]:https://community.arm.com/processors/b/blog/posts/cortex-a76-laptop-class-performance-with-mobile-efficiency
+[3]:https://www.linux.com/news/mediateks-10nm-mobile-focused-soc-will-tap-cortex-a73-and-a32
+[4]:http://linuxgizmos.com/arm-debuts-cortex-a75-and-cortex-a55-with-ai-in-mind/
+[5]:http://linuxgizmos.com/hot-chips-on-parade-at-mwc-and-embedded-world/
+[6]:http://linuxgizmos.com/arm-boosts-big-little-with-dynamiq-and-launches-linux-dev-kit/
+[7]:https://www.anandtech.com/show/12785/arm-cortex-a76-cpu-unveiled-7nm-powerhouse
+[8]:https://www.theregister.co.uk/2018/05/31/arm_cortex_a76/
+[9]:https://developer.arm.com/products/processors/machine-learning/arm-ml-processor
+[10]:https://www.anandtech.com/show/12791/arm-details-project-trillium-mlp-architecture
+[11]:https://developer.android.com/ndk/guides/neuralnetworks/
+[12]:https://developer.arm.com/products/processors/machine-learning/arm-nn
+[13]:https://events.linuxfoundation.org/events/elc-openiot-europe-2018/
diff --git a/sources/tech/20170829 An Advanced System Configuration Utility For Ubuntu Power Users.md b/sources/tech/20170829 An Advanced System Configuration Utility For Ubuntu Power Users.md
new file mode 100644
index 0000000000..5292c290cc
--- /dev/null
+++ b/sources/tech/20170829 An Advanced System Configuration Utility For Ubuntu Power Users.md
@@ -0,0 +1,139 @@
+An Advanced System Configuration Utility For Ubuntu Power Users
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-4-1-720x340.png)
+
+**Ubunsys** is a Qt-based advanced system utility for Ubuntu and its derivatives. Advanced users can do most of this configuration easily from the command line; in case you don’t want to use the CLI all the time, you can use Ubunsys to configure your Ubuntu desktop system or its derivatives, such as Linux Mint, Elementary OS, etc. Ubunsys can be used to modify system configuration; install, remove, and update packages and old kernels; enable or disable sudo access; install a mainline kernel; update software repositories; clean up junk files; upgrade your Ubuntu to the latest version; and so on. All of the aforementioned actions can be done with simple mouse clicks. You don’t need to depend on the CLI anymore. Here is the list of things you can do with Ubunsys:
+
+ * Install, update, and remove packages.
+ * Update and upgrade software repositories.
+ * Install mainline Kernel.
+ * Remove old and unused Kernels.
+ * Full system update.
+ * Complete System upgrade to next available version.
+ * Upgrade to latest development version.
+ * Clean up junk files from your system.
+ * Enable and/or disable sudo access without password.
+ * Make Sudo Passwords visible when you type them in the Terminal.
+ * Enable and/or disable hibernation.
+ * Enable and/or disable firewall.
+ * Open, backup and import sources.list.d and sudoers files.
+ * Show or hide hidden startup items.
+ * Enable and/or disable Login sounds.
+ * Configure dual boot.
+ * Enable/disable Lock screen.
+ * Smart system update.
+ * Update and/or run all scripts at once using Scripts Manager.
+ * Run normal user installation scripts from Git.
+ * Check system integrity and missing GPG keys.
+ * Repair network.
+ * Fix broken packages.
+ * And more yet to come.
+
+
+
+**Important note:** Ubunsys is not for Ubuntu beginners. It is dangerous and not yet stable. It might break your system. If you’re new to Ubuntu, don’t use it. If you are very curious about this application, go through each option carefully and proceed at your own risk. Do not forget to back up your important data before using this application.
+
+### Ubunsys – An Advanced System Configuration Utility For Ubuntu Power Users
+
+#### Install Ubunsys
+
+The Ubunsys developer has created a PPA to make the installation process much easier. Ubunsys currently works on Ubuntu 16.04 LTS and Ubuntu 17.04 64-bit editions.
+
+Run the following commands one by one to add the Ubunsys PPA and install it:
+```
+sudo add-apt-repository ppa:adgellida/ubunsys
+
+sudo apt-get update
+
+sudo apt-get install ubunsys
+
+```
+
+If the PPA doesn’t work, head over to the [**releases page**][1], download and install the Ubunsys package depending upon the architecture you use.
+
+#### Usage
+
+Once installed, launch Ubunsys from the menu. This is how the Ubunsys main interface looks.
+
+![][3]
+
+As you can see, Ubunsys has four main sections, namely **Packages**, **Tweaks**, **System**, and **Repair**. There are one or more sub-sections available for each main tab to perform different operations.
+
+**Packages**
+
+This section allows you to install, remove, and update packages.
+
+![][4]
+
+**Tweaks**
+
+In this section, we can apply various system tweaks, such as:
+
+ * Open, back up, and import the sources.list and sudoers files;
+ * Configure dual boot;
+ * Enable or disable login sound, firewall, lock screen, hibernation, and sudo access without password. You can also enable or disable passwordless sudo access for specific users;
+ * Make passwords visible while typing them in the Terminal (disable asterisks).
+
+
+
+![][5]
+
+**System**
+
+This section is further categorized into three sub-categories, each for a distinct user type.
+
+The **Normal user** tab allows us to,
+
+ * Update, upgrade packages and software repos.
+ * Clean system.
+ * Run normal user installation scripts.
+
+
+
+The **Advanced user** section allows us to,
+
+ * Clean Old/Unused Kernels.
+ * Install mainline Kernel.
+ * Do a smart package update.
+ * Upgrade system.
+
+
+
+The **Developer** section allows us to upgrade the Ubuntu system to the latest development version.
+
+![][6]
+
+**Repair**
+
+This is the fourth and last section of Ubunsys. As the name says, this section allows us to repair our system and network, restore missing GPG keys, and fix broken packages.
+
+![][7]
+
+As you can see, Ubunsys helps you perform any system configuration, maintenance, and software management task with a few mouse clicks. You don’t need to depend on the Terminal anymore. Ubunsys can help you accomplish any advanced task. Again, I warn you: it’s not for beginners, and it is not stable yet. So, you can expect bugs and crashes when using it. Use it with care after studying the options and their impact.
+
+Cheers!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/ubunsys-advanced-system-configuration-utility-ubuntu-power-users/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://github.com/adgellida/ubunsys/releases
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-1.png
+[4]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-2.png
+[5]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-5.png
+[6]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-9.png
+[7]:http://www.ostechnix.com/wp-content/uploads/2017/08/Ubunsys-11.png
diff --git a/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md b/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md
new file mode 100644
index 0000000000..9bda5fa335
--- /dev/null
+++ b/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md
@@ -0,0 +1,170 @@
+The Easiest PDO Tutorial (Basics)
+======
+
+![](http://www.theitstuff.com/wp-content/uploads/2018/04/php-language.jpg)
+
+Approximately 80% of the web is powered by PHP, and a similarly high share runs on SQL. Up until PHP version 5.5, we had the **mysql_** functions for accessing MySQL databases, but they were eventually deprecated due to insufficient security.
+
+This happened with PHP 5.5 in 2013, and as I write this article in 2018, we are on PHP 7.2. The deprecation of **mysql_** brought two major ways of accessing the database: the **mysqli** and the **PDO** libraries.
+
+Now, though the mysqli library was the official successor, PDO gained more fame for a simple reason: mysqli could only support MySQL databases, whereas PDO supports 12 different types of database drivers. PDO also had several more features that made it the better choice for most developers. You can see some of the feature comparisons in the table below:
+
+| | PDO | MySQLi |
+| --- | --- | --- |
+| Database support | 12 drivers | Only MySQL |
+| Paradigm | OOP | Procedural + OOP |
+| Prepared statements (client side) | Yes | No |
+| Named parameters | Yes | No |
+
+Now I guess it is pretty clear why PDO is the choice of most developers, so let’s dig in; we will try to cover most of the PDO you need in this article.
+
+### Connection
+
+The first step is connecting to the database, and since PDO is completely object oriented, we will be using an instance of the PDO class.
+
+The first thing we do is define the host, database name, user, password, and database charset.
+
+```
+$host = 'localhost';
+$db = 'theitstuff';
+$user = 'root';
+$pass = 'root';
+$charset = 'utf8mb4';
+
+$dsn = "mysql:host=$host;dbname=$db;charset=$charset";
+$conn = new PDO($dsn, $user, $pass);
+```
+
+After that, as you can see in the code above, we created the **DSN** variable; the DSN is simply a variable that holds the connection information for the database. If you are running MySQL on an external server, you can also set the port number by adding **port=$port_number** to the DSN, as in the sketch below.
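+
+For instance, here is a minimal sketch of a DSN for a server listening on a non-default port (3307 is just a made-up value for illustration):
+
+```
+$port = 3307; // hypothetical port for a remote MySQL server
+$dsn = "mysql:host=$host;port=$port;dbname=$db;charset=$charset";
+```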
+
+Finally, we create an instance of the PDO class. I have used the **$conn** variable and supplied the **$dsn, $user, $pass** parameters. If you have followed this, you should now have an object named $conn that is an instance of the PDO connection class. Now it’s time to get into the database and run some queries.
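+
+One note before we do: the PDO constructor throws a `PDOException` if the connection fails, so a common pattern (optional for everything that follows) is to wrap the connection in a try/catch block. A minimal sketch:
+
+```
+try {
+    // Also ask PDO to throw exceptions for query errors, not just connection errors
+    $conn = new PDO($dsn, $user, $pass, [
+        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
+    ]);
+} catch (PDOException $e) {
+    // Avoid echoing $e->getMessage() on a public page; it can leak connection details
+    die('Connection failed.');
+}
+```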
+
+### A simple SQL Query
+
+Let us now run a simple SQL query.
+
+```
+$tis = $conn->query('SELECT name, age FROM students');
+
+while ($row = $tis->fetch()) {
+    echo $row['name'] . "\t";
+    echo $row['age'];
+    echo "<br/>";
+}
+```
+
+This is the simplest form of running a query with PDO. We first created a variable called **$tis** (short for TheITStuff), and you can see the syntax: we used the **query()** function of the $conn object we created earlier.
+
+We then ran a while loop, using a **$row** variable to fetch the contents of the **$tis** object, and finally echoed each row by calling out the column names.
+
+Easy, wasn’t it? Now let’s move on to prepared statements.
+
+### Prepared Statements
+
+Prepared statements were one of the major reasons people started using PDO, as they can prevent SQL injection.
+
+There are two basic methods available: you can use either positional or named parameters.
+
+#### Positional parameters
+
+Let us see an example of a query using positional parameters.
+
+```
+$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");
+$tis->bindValue(1, 'mike');
+$tis->bindValue(2, 22);
+$tis->execute();
+```
+
+In the above example, we placed two question marks and later used the **bindValue()** function to map values into the query. The values are bound to the position of the question marks in the statement.
+
+I could also use variables instead of directly supplying values by using the **bindParam()** function, as in this example:
+
+```
+$name = 'Rishabh';
+$age = 20;
+
+$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");
+$tis->bindParam(1, $name);
+$tis->bindParam(2, $age);
+$tis->execute();
+```
+
+#### Named parameters
+
+Named parameters are also prepared statements; they map values/variables to named placeholders in the query. Since there is no positional binding, they are very efficient in queries that use the same variable multiple times.
+
+```
+$name = 'Rishabh';
+$age = 20;
+
+$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(:name, :age)");
+$tis->bindParam(':name', $name);
+$tis->bindParam(':age', $age);
+$tis->execute();
+```
+
+The only change you will notice is that I used **:name** and **:age** as placeholders and then mapped variables to them. The colon is used before the parameter name, and it is extremely important: it lets PDO know that the position is for a variable.
+
+You can similarly use **bindValue()** to map values directly using named parameters, as sketched below.
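+
+As a rough sketch of that, here is the same insert with literal values bound to the named placeholders; passing an associative array straight to **execute()** is a common shorthand that skips explicit binding entirely:
+
+```
+$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(:name, :age)");
+
+// Bind literal values to the named placeholders
+$tis->bindValue(':name', 'Rishabh');
+$tis->bindValue(':age', 20);
+$tis->execute();
+
+// Shorthand: execute() does the binding for you
+// (running it again inserts a second, identical row)
+$tis->execute([':name' => 'Rishabh', ':age' => 20]);
+```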
+
+### Fetching the Data
+
+PDO is very rich when it comes to fetching data and it actually offers a number of formats in which you can get the data from your database.
+
+You can use **PDO::FETCH_ASSOC** to fetch associative arrays, **PDO::FETCH_NUM** to fetch numeric arrays, and **PDO::FETCH_OBJ** to fetch rows as objects.
+
+```
+$tis = $conn->prepare("SELECT * FROM STUDENTS");
+$tis->execute();
+
+$result = $tis->fetchAll(PDO::FETCH_ASSOC);
+```
+
+You can see that I have used **fetchAll** since I wanted all matching records. If only one row is expected or desired, you can simply use **fetch** instead.
+
+Now that we have fetched the data it is time to loop through it and that is extremely easy.
+
+```
+foreach ($result as $lnu) {
+    echo $lnu['name'];
+    echo $lnu['age'] . "<br/>";
+}
+```
+
+You can see that since I had requested associative arrays, I am accessing individual members by their names.
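+
+For comparison, had I requested **PDO::FETCH_OBJ**, each row would come back as an object and its members would be accessed as properties. A quick sketch of the same loop:
+
+```
+$tis->execute(); // re-run the statement, fetching rows as objects this time
+$result = $tis->fetchAll(PDO::FETCH_OBJ);
+
+foreach ($result as $lnu) {
+    echo $lnu->name;          // property access instead of an array index
+    echo $lnu->age . "<br/>";
+}
+```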
+
+Though there is absolutely no problem with specifying how you want your data delivered on each call, you can also set a default fetch mode when creating the connection itself.
+
+All you need to do is create an options array holding your default configuration and simply pass the array to the PDO constructor.
+
+```
+$options = [
+    PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
+];
+
+$conn = new PDO($dsn, $user, $pass, $options);
+```
+
+This was a very brief and quick intro to PDO; we will be making an advanced tutorial soon. If you had any difficulties understanding any part of this tutorial, let me know in the comment section, and I’ll be there for you.
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.theitstuff.com/easiest-pdo-tutorial-basics
+
+作者:[Rishabh Kandari][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.theitstuff.com/author/reevkandari
diff --git a/sources/tech/20180605 MySQL without the MySQL- An introduction to the MySQL Document Store.md b/sources/tech/20180605 MySQL without the MySQL- An introduction to the MySQL Document Store.md
deleted file mode 100644
index dd51b9dc66..0000000000
--- a/sources/tech/20180605 MySQL without the MySQL- An introduction to the MySQL Document Store.md
+++ /dev/null
@@ -1,167 +0,0 @@
-pinewall translating
-
-MySQL without the MySQL: An introduction to the MySQL Document Store
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_business_sign_store.jpg?itok=g4QibRqg)
-
-MySQL can act as a NoSQL JSON Document Store so programmers can save data without having to normalize data, set up schemas, or even have a clue what their data looks like before starting to code. Since MySQL version 5.7 and in MySQL 8.0, developers can store JSON documents in a column of a table. By adding the new X DevAPI, you can stop embedding nasty strings of structured query language in your code and replace them with API calls that support modern programming design.
-
-Very few developers have any formal training in structured query language (SQL), relational theory, sets, or other foundations of relational databases. But they need a secure, reliable data store. Add in a dearth of available database administrators, and things can get very messy quickly.
-
-The [MySQL Document Store][1] allows programmers to store data without having to create an underlying schema, normalize data, or any of the other tasks normally required to use a database. A JSON document collection is created and can then be used.
-
-### JSON data type
-
-This is all based on the JSON data type introduced a few years ago in MySQL 5.7. This provides a roughly 1GB column in a row of a table. The data has to be valid JSON or the server will return an error, but developers are free to use that space as they want.
-
-### X DevAPI
-
-The old MySQL protocol is showing its age after almost a quarter-century, so a new protocol was developed called [X DevAPI][2]. It includes a new high-level session concept that allows code to scale from one server to many with non-blocking, asynchronous I/O that follows common host-language programming patterns. The focus is put on using CRUD (create, replace, update, delete) patterns while following modern practices and coding styles. Or, to put it another way, you no longer have to embed ugly strings of SQL statements in your beautiful, pristine code.
-
-### Coding examples
-
-A new shell, creatively called the [MySQL Shell][3] , supports this new protocol. It can be used to set up high-availability clusters, check servers for upgrade readiness, and interact with MySQL servers. This interaction can be done in three modes: JavaScript, Python, and SQL.
-
-The coding examples that follow are in the JavaScript mode of the MySQL Shell; it has a `JS>` prompt.
-
-Here, we will log in as `dstokes` with the password `password` to the local system and a schema named `demo`. There is a pointer to the schema demo that is named `db`.
-```
-$ mysqlsh dstokes:password@localhost/demo
-
-JS> db.createCollection("example")
-
-JS> db.example.add(
-
- {
-
- Name: "Dave",
-
- State: "Texas",
-
- foo : "bar"
-
- }
-
- )
-
-JS>
-
-```
-
-Above we logged into the server, connected to the `demo` schema, created a collection named `example`, and added a record, all without creating a table definition or using SQL. We can use or abuse this data as our whims desire. This is not an object-relational mapper, as there is no mapping the code to the SQL because the new protocol “speaks” at the server layer.
-
-### Node.js supported
-
-The new shell is pretty sweet; you can do a lot with it, but you will probably want to use your programming language of choice. The following example uses the `world_x` demo database to search for a record with the `_id` field matching "CAN." We point to the desired collection in the schema and issue a `find` command with the desired parameters. Again, there’s no SQL involved.
-```
-var mysqlx = require('@mysql/xdevapi');
-
-mysqlx.getSession({ //Auth to server
-
- host: 'localhost',
-
- port: '33060',
-
- dbUser: 'root',
-
- dbPassword: 'password'
-
-}).then(function (session) { // use world_x.country.info
-
- var schema = session.getSchema('world_x');
-
- var collection = schema.getCollection('countryinfo');
-
-
-
-collection // Get row for 'CAN'
-
- .find("$._id == 'CAN'")
-
- .limit(1)
-
- .execute(doc => console.log(doc))
-
- .then(() => console.log("\n\nAll done"));
-
-
-
- session.close();
-
-})
-
-```
-
-Here is another example in PHP that looks for "USA":
-```
-getSchema("world_x");
-
-// Specify collection to use
-
- $collection = $schema->getCollection("countryinfo");
-
-// SELECT * FROM world_x WHERE _id = "USA"
-
- $result = $collection->find('_id = "USA"')->execute();
-
-// Fetch/Display data
-
- $data = $result->fetchAll();
-
- var_dump($data);
-
-?>#!/usr/bin/phpmysql_xdevapi\getNodeSession
-
-```
-
-Note that the `find` operator used in both examples looks pretty much the same between the two different languages. This consistency should help developers who hop between programming languages or those looking to reduce the learning curve with a new language.
-
-Other supported languages include C, Java, Python, and JavaScript, and more are planned.
-
-### Best of both worlds
-
-Did I mention that the data entered in this NoSQL fashion is also available from the SQL side of MySQL? Or that the new NoSQL method can access relational data in old-fashioned relational tables? You now have the option to use your MySQL server as a SQL server, a NoSQL server, or both.
-
-Dave Stokes will present "MySQL Without the SQL—Oh My!" at [Southeast LinuxFest][4], June 8-10, in Charlotte, N.C.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/6/mysql-document-store
-
-作者:[Dave Stokes][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/davidmstokes
-[1]:https://www.mysql.com/products/enterprise/document_store.html
-[2]:https://dev.mysql.com/doc/x-devapi-userguide/en/
-[3]:https://dev.mysql.com/downloads/shell/
-[4]:http://www.southeastlinuxfest.org/
diff --git a/sources/tech/20180607 Mesos and Kubernetes- It-s Not a Competition.md b/sources/tech/20180607 Mesos and Kubernetes- It-s Not a Competition.md
new file mode 100644
index 0000000000..a168ac9f4a
--- /dev/null
+++ b/sources/tech/20180607 Mesos and Kubernetes- It-s Not a Competition.md
@@ -0,0 +1,66 @@
+Mesos and Kubernetes: It's Not a Competition
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/architecture-barge-bay-161764_0.jpg?itok=vNChG5fb)
+
+The roots of Mesos can be traced back to 2009, when Ben Hindman was a PhD student at the University of California, Berkeley working on parallel programming. He and his colleagues were doing massive parallel computations on 128-core chips, trying to solve multiple problems, such as making software and libraries run more efficiently on those chips. He started talking with fellow students to see if they could borrow ideas from parallel processing and multiple threads and apply them to cluster management.
+
+“Initially, our focus was on Big Data,” said Hindman. Back then, Big Data was really hot and Hadoop was one of the hottest technologies. “We recognized that the way people were running things like Hadoop on clusters was similar to the way that people were running multiple threaded applications and parallel applications,” said Hindman.
+
+However, it was not very efficient, so they started thinking how it could be done better through cluster management and resource management. “We looked at many different technologies at that time,” Hindman recalled.
+
+Hindman and his colleagues, however, decided to adopt a novel approach. “We decided to create a lower level of abstraction for resource management, and run other services on top of that to do scheduling and other things,” said Hindman. “That’s essentially the essence of Mesos -- to separate out the resource management part from the scheduling part.”
+
+It worked, and Mesos has been going strong ever since.
+
+### The project goes to Apache
+
+The project was founded in 2009. In 2010 the team decided to donate the project to the Apache Software Foundation (ASF). It was incubated at Apache and in 2013, it became a Top-Level Project (TLP).
+
+There were many reasons why the Mesos community chose Apache Software Foundation, such as the permissiveness of Apache licensing, and the fact that they already had a vibrant community of other such projects.
+
+It was also about influence. A lot of people working on Mesos were also involved with Apache, and many people were working on projects like Hadoop. At the same time, many folks from the Mesos community were working on other Big Data projects like Spark. This cross-pollination led all three projects -- Hadoop, Mesos, and Spark -- to become ASF projects.
+
+It was also about commerce. Many companies were interested in Mesos, and the developers wanted it to be maintained by a neutral body instead of being a privately owned project.
+
+### Who is using Mesos?
+
+A better question would be, who isn’t? Everyone from Apple to Netflix is using Mesos. However, Mesos had its share of challenges that any technology faces in its early days. “Initially, I had to convince people that there was this new technology called ‘containers’ that could be interesting as there is no need to use virtual machines,” said Hindman.
+
+The industry has changed a great deal since then, and now every conversation around infrastructure starts with ‘containers’ -- thanks to the work done by Docker. Today, no convincing is needed, but even in the early days of Mesos, companies like Apple, Netflix, and PayPal saw the potential. They knew they could take advantage of containerization technologies in lieu of virtual machines. “These companies understood the value of containers before it became a phenomenon,” said Hindman.
+
+These companies saw that they could have a bunch of containers, instead of virtual machines. All they needed was something to manage and run these containers, and they embraced Mesos. Some of the early users of Mesos included Apple, Netflix, PayPal, Yelp, OpenTable, and Groupon.
+
+“Most of these organizations are using Mesos for just running arbitrary services,” said Hindman, “But there are many that are using it for doing interesting things with data processing, streaming data, analytics workloads and applications.”
+
+One of the reasons these companies adopted Mesos was the clear separation between the resource management layers. Mesos offers the flexibility that companies need when dealing with containerization.
+
+“One of the things we tried to do with Mesos was to create a layering so that people could take advantage of our layer, but also build whatever they wanted to on top,” said Hindman. “I think that's worked really well for the big organizations like Netflix and Apple.”
+
+However, not every company is a tech company; not every company has or should have this expertise. To help those organizations, Hindman co-founded Mesosphere to offer services and solutions around Mesos. “We ultimately decided to build DC/OS for those organizations which didn’t have the technical expertise or didn't want to spend their time building something like that on top.”
+
+### Mesos vs. Kubernetes?
+
+People often think in terms of x versus y, but it’s not always a question of one technology versus another. Most technologies overlap in some areas, and they can also be complementary. “I don't tend to see all these things as competition. I think some of them actually can work in complementary ways with one another,” said Hindman.
+
+“In fact the name Mesos stands for ‘middle’; it’s kind of a middle OS,” said Hindman, “We have the notion of a container scheduler that can be run on top of something like Mesos. When Kubernetes first came out, we actually embraced it in the Mesos ecosystem and saw it as another way of running containers in DC/OS on top of Mesos.”
+
+Mesos also resurrected a project called [Marathon][1] (a container orchestrator for Mesos and DC/OS), which they have made a first-class citizen in the Mesos ecosystem. However, Marathon does not really compare with Kubernetes. “Kubernetes does a lot more than what Marathon does, so you can’t swap them with each other,” said Hindman. “At the same time, we have done many things in Mesos that are not in Kubernetes. So, these technologies are complementary to each other.”
+
+Instead of viewing such technologies as adversarial, they should be seen as beneficial to the industry. It’s not duplication of technologies; it’s diversity. According to Hindman, “it could be confusing for the end user in the open source space because it's hard to know which technologies are suitable for what kind of workload, but that’s the nature of the beast called Open Source.”
+
+That just means there are more choices, and everybody wins.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2018/6/mesos-and-kubernetes-its-not-competition
+
+作者:[Swapnil Bhartiya][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/arnieswap
+[1]:https://mesosphere.github.io/marathon/
diff --git a/sources/tech/20180608 How to use screen scraping tools to extract data from the web.md b/sources/tech/20180608 How to use screen scraping tools to extract data from the web.md
new file mode 100644
index 0000000000..2737123f8e
--- /dev/null
+++ b/sources/tech/20180608 How to use screen scraping tools to extract data from the web.md
@@ -0,0 +1,207 @@
+How to use screen scraping tools to extract data from the web
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG)
+A perfect internet would deliver data to clients in the format of their choice, whether it's CSV, XML, JSON, etc. The real internet teases at times by making data available, but usually in HTML or PDF documents—formats designed for data display rather than data interchange. Accordingly, the [screen scraping][1] of yesteryear—extracting displayed data and converting it to the requested format—is still relevant today.
+
+Perl has outstanding tools for screen scraping, among them the `HTML::TableExtract` package described in the Scraping program below.
+
+### Overview of the scraping program
+
+The screen-scraping program has two main pieces, which fit together as follows:
+
+ * The file data.html contains the data to be scraped. The data in this example, which originated in a university site under renovation, addresses the issue of whether the income associated with a college degree justifies the degree's cost. The data includes median incomes, percentiles, and other information about areas of study such as computing, engineering, and liberal arts. To run the Scraping program, the data.html file should be hosted on a web server, in my case a local Nginx server. A standalone Perl web server such as `HTTP::Server::PSGI` or `HTTP::Server::Simple` would do as well.
+ * The file scrape.pl contains the Scraping program, which uses features from the `Plack/PSGI` packages, in particular a Plack web server. The Scraping program is launched from the command line (as explained below). A user enters the URL for the Plack server (`localhost:5000/`) in a browser, and the following happens:
+ * The browser connects to the Plack server, an instance of `HTTP::Server::PSGI`, and issues a GET request for the Scraping program. The single slash (`/`) at the end of the URL identifies this program. (A modern browser would add the closing slash even if the user failed to do so.)
+ * The Scraping program then issues a GET request for the data.html document. If the request succeeds, the application extracts the relevant data from the document using the `HTML::TableExtract` package, saves the extracted data to a file, and takes some basic statistical measures that represent processing the extracted data. An HTML report like the following is returned to the user's browser.
+
+
+![HTML report generated by the Scraping program][3]
+
+Fig. 1: Final report from the Scraping program
+
+The request traffic from the user's browser to the Plack server and then to the server hosting the data.html document (e.g., Nginx) can be depicted as follows:
+```
+ GET localhost:5000/ GET localhost:80/data.html
+
+user's browser------------------->Plack server-------------------------->Nginx
+
+```
+
+The final step involves only the Plack server and the user's browser:
+```
+ reportFinal.html
+
+Plack server------------------>user's browser
+
+```
+
+Fig. 1 above shows the final report document.
+
+### The scraping program in detail
+
+The source code and data file (data.html) are available from my [website][4] in a ZIP file that includes a README. Here is a quick summary of the pieces, and clarifications will follow:
+```
+data.html ## data source to be hosted by a web server
+
+scrape.pl ## main source code, run with the plackup utility (see below)
+
+Stats::Controller.pm ## handles request routing, data extraction, and processing
+
+Stats::Util.pm ## utility functions used in Controller.pm
+
+report.html ## HTML template used to generate the report
+
+rawData.dat ## the extracted data
+
+```
+
+The `Plack/PSGI` packages come with a command-line utility named `plackup`, which can be used to launch the Scraping program. With `%` as the command-line prompt, the command for starting the Scraping program is:
+```
+% plackup scrape.pl
+
+```
+
+The `plackup` command starts a standalone Plack web server that hosts the Scraping program. The Scraping code handles request routing, extracts data from the data.html document, produces some basic statistical measures, and then uses the `Template::Recall` package to generate an HTML report for the user. Because the Plack server runs indefinitely, the Scraping program prints the process ID, which can be used to kill the server and the Scraping app.
+
+`Plack/PSGI` supports Rails-style routing in which an HTTP request is dispatched to a specific request handler based on two factors:
+
+ * The HTTP request method (verb) such as GET or POST.
+ * The Uniform Resource Identifier (URI or noun) for the requested resource; in this case the standalone finishing slash (`/`) in the URL `http://localhost:5000/` that a user enters in a browser once the Scraping program has launched.
+
+
+
+The Scraping program handles only one type of request: a GET for the resource named `/`, and this resource is the screen-scraping and data-processing code in my `Stats::Controller` package. Here, for review, is the `Plack/PSGI` routing setup, right at the top of source file scrape.pl:
+```
+my $router = router {
+
+ match '/', {method => 'GET'}, ## noun/verb combo: / is noun, GET is verb
+
+ to {controller => 'Controller', action => 'index'}; ## handler is function get_index
+
+ # Other actions as needed
+
+};
+
+```
+
+The request handler `Controller::get_index` has only high-level logic, leaving the screen-scraping and report-generating details to utility functions in the Util.pm file, as described in the following section.
+
+### The screen-scraping code
+
+Recall that the Plack server dispatches a GET request for `localhost:5000/` to the Scraping program's `get_index` function. This function, as the request handler, then starts the job of retrieving the data to be scraped, scraping the data, and generating the final report. The data-retrieval part falls to a utility function, which uses Perl's `LWP::Agent` package to get the data from whatever server is hosting the data.html document. With the data document in hand, the Scraping program invokes the utility function `extract_from_html` to do the data extraction.
+
+The data.html document happens to be well-formed XML, which means a Perl package such as `XML::LibXML` could be used to extract the data through an explicit XML parse. However, the `HTML::TableExtract` package is inviting because it bypasses the tedium of XML parses, and (in very little code) delivers a Perl hash with the extracted data. Data aggregates in HTML documents usually occur in lists or tables, and the `HTML::TableExtract` package targets tables. Here are the three critical lines of code for the data extraction:
+```
+my $col_headers = col_headers(); ## col_headers() returns an array of the table's column names
+
+my $te = HTML::TableExtract->new(headers => $col_headers);
+
+$te->parse($page); ## $page is data.html
+
+```
+
+The `$col_headers` refers to a Perl array of strings, each a column header in the HTML document:
+```
+sub col_headers { ## column headers in the HTML table
+
+ return ["Area",
+
+ "MedianWage",
+
+ ...
+
+ "BoostFromGradDegree"];
+
+}
+
+```
+
+After the call to the `TableExtract::parse` function, the Scraping program uses the `TableExtract::rows` function to iterate over the rows of extracted data—rows of data without the HTML markup. These rows, as Perl lists, are added to a Perl hash named `%majors_hash`, which can be depicted as follows:
+
+ * Each key identifies an area of study such as Computing or Engineering.
+
+ * The value of each key is the list of seven extracted data items, where seven is the number of columns in the HTML table. For Computing, the list with annotations is:
+```
+ name median % with this degree income boost from GD
+ / / / /
+ (Computing 55000 75000 112000 5.1% 32.0% 31.0%) ## data items
+ / \ \
+ 25th-ptile 75th-ptile % going on for GD = grad degree
+```
+
+
+
+
+The hash with the extracted data is written to the local file rawData.dat:
+```
+ForeignLanguage 50000 35000 75000 3.5% 54% 101%
+LiberalArts 47000 32000 70000 9.7% 41% 48%
+...
+Engineering 78000 54000 104000 8.2% 37% 32%
+Computing 75000 51000 112000 5.1% 32% 31%
+...
+PublicPolicy 50000 36000 74000 2.3% 24% 45%
+```
+
+The next step is to process the extracted data, in this case by doing rudimentary statistical analysis using the `Statistics::Descriptive` package. In Fig. 1 above, the statistical summary is presented in a separate table at the bottom of the report.
+
+### The report-generation code
+
+The final step in the Scraping program is to generate a report. Perl has options for generating HTML, and `Template::Recall` is among them. As the name suggests, the package generates HTML from an HTML template, which is a mix of standard HTML markup and customized tags that serve as placeholders for data generated from backend code. The template file is report.html, and the backend function of interest is `Controller::generate_report`. Here is how the code and the template interact.
+
+The report document (Fig. 1) has two tables. The top table is generated through iteration, as each row has the same columns (area of study, income for the 25th percentile, and so on). In each iteration, the code creates a hash with values for a particular area of study:
+```
+my %row = (
+ major => $key,
+ wage => '$' . commify($values[0]), ## commify turns 1234 into 1,234
+ p25 => '$' . commify($values[1]),
+ p75 => '$' . commify($values[2]),
+ population => $values[3],
+ grad => $values[4],
+ boost => $values[5]
+);
+
+```
+
+The hash keys are Perl [barewords][5] such as `major` and `wage` that represent items in the list of data values extracted earlier from the HTML data document. The corresponding HTML template looks like this:
+```
+[ === even === ]
+