-
-```
-
-I leave it as an exercise for you to carve out the sections for sidebar.tpl and footer.tpl.
-
-Note the lines in bold. I added them to facilitate a “login bar” at the top of every webpage. Once you’ve logged into the application, you will see the bar like so:
-
-![][17]
-
-This login bar works in conjunction with the GetSession code snippet we saw in activeContent(). The logic is, if the user is logged in (i.e., there is a non-nil session), then we set the InSession parameter to a value (any value), which tells the templating engine to use the “Welcome” bar instead of “Login”. We also extract the user’s first name from the session so that we can present the friendly greeting “Welcome, Richard”.
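-
-Concretely, the controller side looks something like this sketch (the session name “acme” and the map key match the code later in this article; the exact body of activeContent() is abbreviated):
-```
-sess := this.GetSession("acme")
-if sess != nil {
-    this.Data["InSession"] = 1 // any non-empty value will do
-    m := sess.(map[string]interface{})
-    this.Data["First"] = m["first"] // for "Welcome, Richard"
-}
-
-```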
-
-The home page, represented by index.tpl, uses the following snippet from index.html:
-```
-Welcome to StarDust
-// to save space, I won't enter the remainder
-// of the snippet
-
-```
-
-#### Special Note
-
-The template files for the user module reside in the ‘user’ directory within ‘views’, just to keep things tidy. So, for example, the call to activeContent() for login is:
-```
-this.activeContent("user/login")
-
-```
-
-### Controller
-
-A controller handles requests by handing them off to the appropriate function or ‘method’. We only have one controller for our application and it’s defined in default.go. The default method Get() for handling a GET operation is associated with our home page:
-```
-func (this *MainController) Get() {
-    this.activeContent("index")
-
-    // This page requires login
-    sess := this.GetSession("acme")
-    if sess == nil {
-        this.Redirect("/user/login/home", 302)
-        return
-    }
-    m := sess.(map[string]interface{})
-    fmt.Println("username is", m["username"])
-    fmt.Println("logged in at", m["timestamp"])
-}
-
-```
-
-I’ve made login a requirement for accessing this page. Logging in means creating a session, which by default expires after 3600 seconds of inactivity. A session is typically maintained on the client side by a ‘cookie’.
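-
-If the default lifetime doesn’t suit you, it can be adjusted in main.go before beego.Run() (a sketch, assuming Beego’s SessionGCMaxLifetime setting):
-```
-beego.SessionGCMaxLifetime = 7200 // seconds of inactivity before expiry
-
-```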
-
-In order to support sessions in the application, the ‘SessionOn’ flag must be set to true. There are two ways to do this:
-
- 1. Insert ‘beego.SessionOn = true’ in the main program, main.go.
- 2. Insert ‘sessionon = true’ in the configuration file, app.conf, which can be found in the ‘conf’ directory.
-
-
-
-I chose #1. (But note that I used the configuration file to set ‘EnableAdmin’ to true: ‘enableadmin = true’. EnableAdmin allows you to use the Supervisor Module in Beego, which keeps track of CPU, memory, the Garbage Collector, threads, etc., via port 8088.)
-
-#### The Main Program
-
-The main program is also where we initialize the database to be used with the ORM (Object Relational Mapping) component. ORM makes it more convenient to perform database activities within our application. The main program’s init():
-```
-func init() {
- orm.RegisterDriver("sqlite", orm.DR_Sqlite)
- orm.RegisterDataBase("default", "sqlite3", "acme.db")
- name := "default"
- force := false
- verbose := false
- err := orm.RunSyncdb(name, force, verbose)
- if err != nil {
- fmt.Println(err)
- }
-}
-
-```
-
-To use SQLite, we must import ‘go-sqlite3’, which can be installed with the command:
-```
-$ go get github.com/mattn/go-sqlite3
-
-```
-
-As you can see in the code snippet, the SQLite driver must be registered, and ‘acme.db’ must be registered as our default database.
-
-Recall in models.go, there was an init() function:
-```
-func init() {
- orm.RegisterModel(new(AuthUser))
-}
-
-```
-
-The database model has to be registered so that the appropriate table can be generated. To ensure that this init() function is executed, you must import ‘models’ without actually using it within the main program, as follows:
-```
-import _ "acme/models"
-
-```
-
-RunSyncdb() is used to autogenerate the tables when you start the program. (This is very handy for creating the database tables without having to **manually** do it in the database command line utility.) If you set ‘force’ to true, it will drop any existing tables and recreate them.
-
-#### The User Module
-
-User.go contains all the methods for handling login, registration, profile, etc. There are several third-party packages we need to import; they provide support for email, PBKDF2, and UUID. But first we must get them into our project…
-```
-$ go get github.com/alexcesaro/mail/gomail
-$ go get github.com/twinj/uuid
-
-```
-
-I originally got **github.com/gokyle/pbkdf2**, but this package was pulled from GitHub, so you can no longer get it. I’ve incorporated this package into my source under the ‘utilities’ folder, and the import is:
-```
-import pk "acme/utilities/pbkdf2"
-
-```
-
-The ‘pk’ is a convenient alias so that I don’t have to type the rather unwieldy ‘pbkdf2’.
-
-#### ORM
-
-It’s pretty straightforward to use ORM. The basic pattern is to create an ORM object, specify the ‘default’ database, and select which ORM operation you want, e.g.,
-```
-o := orm.NewOrm()
-o.Using("default")
-err := o.Insert(&user) // or
-err := o.Read(&user, "Email") // or
-err := o.Update(&user) // or
-err := o.Delete(&user)
-
-```
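-
-For example, fetching the AuthUser record for a login attempt looks roughly like this (a sketch; Email is the field we read by, as in the list above):
-```
-user := models.AuthUser{Email: email}
-o := orm.NewOrm()
-o.Using("default")
-if err := o.Read(&user, "Email"); err == orm.ErrNoRows {
-    // no such user; flash an error and bail
-}
-
-```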
-
-#### Flash
-
-By the way, Beego provides a way to present notifications on your webpage through the use of ‘flash’. Basically, you create a ‘flash’ object, give it your notification message, store the flash in the controller, and then retrieve the message in the template file, e.g.,
-```
-flash := beego.NewFlash()
-flash.Error("You've goofed!") // or
-flash.Notice("Well done!")
-flash.Store(&this.Controller)
-
-```
-
-And in your template file, reference the Error flash with:
-```
-{{if .flash.error}}
-  {{.flash.error}}
-{{end}}
-
-```
-
-#### Form Validation
-
-Once the user posts a request (by pressing the Submit button, for example), our handler must extract and validate the form input. So, first, check that we have a POST operation:
-```
-if this.Ctx.Input.Method() == "POST" {
-
-```
-
-Let’s get a form element, say, email:
-```
-email := this.GetString("email")
-
-```
-
-The string “email” is the same as in the HTML form:
-```
-<input type="text" name="email">
-
-```
-
-To validate it, we create a validation object, specify the type of validation, and then check to see if there are any errors:
-```
-valid := validation.Validation{}
-valid.Email(email, "email") // must be a proper email address
-if valid.HasErrors() {
- for _, err := range valid.Errors {
-
-```
-
-What you do with the errors is up to you. I like to present all of them at once to the user, so as I go through the range of valid.Errors, I add them to a list of errors that will eventually be used in the template file. Hence, the full snippet:
-```
-if this.Ctx.Input.Method() == "POST" {
- email := this.GetString("email")
- password := this.GetString("password")
- valid := validation.Validation{}
- valid.Email(email, "email")
- valid.Required(password, "password")
- if valid.HasErrors() {
- errormap := []string{}
- for _, err := range valid.Errors {
- errormap = append(errormap, "Validation failed on "+err.Key+": "+err.Message+"\n")
- }
- this.Data["Errors"] = errormap
- return
- }
-
-```
-
-### The User Management Methods
-
-We’ve looked at the major pieces of the controller. Now, we get to the meat of the application, the user management methods:
-
- * Login()
- * Logout()
- * Register()
- * Verify()
- * Profile()
- * Remove()
-
-
-
-Recall that we saw references to these functions in the router. The router associates each URL (and HTTP request) with the corresponding controller method.
-
-#### Login()
-
-Let’s look at the pseudocode for this method:
-```
-if the HTTP request is "POST" then
- Validate the form (extract the email address and password).
- Read the password hash from the database, keying on email.
- Compare the submitted password with the one on record.
- Create a session for this user.
-endif
-
-```
-
-In order to compare passwords, we need to give pk.MatchPassword() a variable with members ‘Hash’ and ‘Salt’ that are **byte slices**. Hence,
-```
-var x pk.PasswordHash
-
-x.Hash = make([]byte, 32)
-x.Salt = make([]byte, 16)
-// after x has the password hash from the database, then...
-
-if !pk.MatchPassword(password, &x) {
-    flash.Error("Bad password")
-    flash.Store(&this.Controller)
-    return
-}
-
-```
-
-Creating a session is trivial, but we want to store some useful information in the session, as well. So we make a map and store first name, email address, and the time of login:
-```
-m := make(map[string]interface{})
-m["first"] = user.First
-m["username"] = email
-m["timestamp"] = time.Now()
-this.SetSession("acme", m)
-this.Redirect("/"+back, 302) // go to previous page after login
-
-```
-
-Incidentally, the name “acme” passed to SetSession is completely arbitrary; you just need to reference the same name to get the same session.
-
-#### Logout()
-
-This one is trivially easy. We delete the session and redirect to the home page.
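-
-In full, it is something like this sketch (the template name is an assumption; the session name matches Login()):
-```
-func (this *MainController) Logout() {
-    this.activeContent("user/logout")
-
-    this.DelSession("acme")
-    this.Redirect("/", 302)
-}
-
-```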
-
-#### Register()
-```
-if the HTTP request is "POST" then
- Validate the form.
- Create the password hash for the submitted password.
- Prepare new user record.
- Convert the password hash to hexadecimal string.
- Generate a UUID and insert the user into database.
- Send a verification email.
- Flash a message on the notification page.
-endif
-
-```
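-
-The hashing and UUID steps look roughly like this (a sketch: pk.HashPassword and uuid.NewV4 are assumed from the imported packages’ APIs, and the AuthUser field names are illustrative):
-```
-h := pk.HashPassword(password) // yields Hash and Salt byte slices
-user.Password = hex.EncodeToString(h.Hash)
-user.Salt = hex.EncodeToString(h.Salt)
-u := uuid.NewV4()
-user.Reg_key = u.String() // mailed to the user for verification
-o := orm.NewOrm()
-o.Using("default")
-err := o.Insert(&user)
-
-```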
-
-To send a verification email to the user, we use **gomail** …
-```
-link := "http://localhost:8080/user/verify/" + u // u is UUID
-host := "smtp.gmail.com"
-port := 587
-msg := gomail.NewMessage()
-msg.SetAddressHeader("From", "acmecorp@gmail.com", "ACME Corporation")
-msg.SetHeader("To", email)
-msg.SetHeader("Subject", "Account Verification for ACME Corporation")
-msg.SetBody("text/html", "To verify your account, please click on the link: "+link+"
Best Regards, ACME Corporation")
-m := gomail.NewMailer(host, "youraccount@gmail.com", "YourPassword", port)
-if err := m.Send(msg); err != nil {
- return false
-}
-
-```
-
-I chose Gmail as my email relay (you will need to open your own account). Note that Gmail ignores the “From” address (in our case, “[acmecorp@gmail.com][18]”) because Gmail does not permit you to alter the sender address in order to prevent phishing.
-
-#### Notice()
-
-This special router method is for displaying a flash message on a notification page. It’s not really a user module function; it’s general enough that you can use it in many other places.
-
-#### Profile()
-
-We’ve already discussed all the pieces in this function. The pseudocode is:
-```
-Login required; check for a session.
-Get user record from database, keyed on email (or username).
-if the HTTP request is "POST" then
- Validate the form.
- if there is a new password then
- Validate the new password.
- Create the password hash for the new password.
- Convert the password hash to hexadecimal string.
- endif
- Compare submitted current password with the one on record.
- Update the user record.
-    - update the username stored in the session
-endif
-
-```
-
-#### Verify()
-
-The verification email contains a link which, when clicked by the recipient, causes Verify() to process the UUID. Verify() attempts to read the user record, keyed on the UUID or registration key, and if it’s found, then the registration key is removed from the database.
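-
-A sketch of that lookup (the Reg_key field name and the route-parameter accessor are assumptions):
-```
-u := this.Ctx.Input.Param(":uuid")
-user := models.AuthUser{Reg_key: u}
-o := orm.NewOrm()
-o.Using("default")
-if o.Read(&user, "Reg_key") == nil {
-    user.Reg_key = "" // verified; clear the key
-    o.Update(&user)
-}
-
-```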
-
-#### Remove()
-
-Remove() is pretty much like Login(), except that instead of creating a session, you delete the user record from the database.
-
-### Exercise
-
-I left out one user management method: What if the user has forgotten his password? We should provide a way to reset the password. I leave this as an exercise for you. All the pieces you need are in this tutorial. (Hint: You’ll need to do it in a way similar to Registration verification. You should add a new Reset_key to the AuthUser table. And make sure the user email address exists in the database before you send the Reset email!)
-
-[Okay, so I’ll give you the [exercise solution][19]. I’m not cruel.]
-
-### Wrapping Up
-
-Let’s review what we’ve learned. We covered the mapping of URLs to request handlers in the router. We showed how to incorporate a CSS template design into our views. We discussed the ORM package, and how it’s used to perform database operations. We examined a number of third-party utilities useful in writing our application. The end result is a component useful in many scenarios.
-
-This is a great deal of material in a tutorial, but I believe it’s the best way to get started in writing a practical application.
-
-[For further material, look at the [sequel][20] to this article, as well as the [final edition][21].]
-
---------------------------------------------------------------------------------
-
-via: https://medium.com/@richardeng/a-word-from-the-beegoist-d562ff8589d7
-
-作者:[Richard Kenneth Eng][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://medium.com/@richardeng?source=post_header_lockup
-[1]:http://tour.golang.org/
-[2]:http://golang.org/
-[3]:http://beego.me/
-[4]:https://medium.com/@richardeng/in-the-beginning-61c7e63a3ea6
-[5]:http://www.mysql.com/
-[6]:http://www.sqlite.org/
-[7]:https://code.google.com/p/liteide/
-[8]:http://macromates.com/
-[9]:http://notepad-plus-plus.org/
-[10]:https://medium.com/@richardeng/back-to-the-future-9db24d6bcee1
-[11]:http://en.wikipedia.org/wiki/Acme_Corporation
-[12]:https://github.com/horrido/acme
-[13]:http://en.wikipedia.org/wiki/Regular_expression
-[14]:http://en.wikipedia.org/wiki/PBKDF2
-[15]:http://en.wikipedia.org/wiki/Universally_unique_identifier
-[16]:http://www.freewebtemplates.com/download/free-website-template/stardust-141989295/
-[17]:https://cdn-images-1.medium.com/max/1600/1*1OpYy1ISYGUaBy0U_RJ75w.png
-[18]:mailto:acmecorp@gmail.com
-[19]:https://github.com/horrido/acme-exercise
-[20]:https://medium.com/@richardeng/a-word-from-the-beegoist-ii-9561351698eb
-[21]:https://medium.com/@richardeng/a-word-from-the-beegoist-iii-dbd6308b2594
-[22]: http://golang.org/
-[23]: http://beego.me/
-[24]: http://revel.github.io/
-[25]: http://www.web2py.com/
-[26]: https://medium.com/@richardeng/the-zen-of-web2py-ede59769d084
-[27]: http://www.seaside.st/
-[28]: http://en.wikipedia.org/wiki/Object-relational_mapping
diff --git a/sources/tech/20151127 Research log- gene signatures and connectivity map.md b/sources/tech/20151127 Research log- gene signatures and connectivity map.md
new file mode 100644
index 0000000000..f4e7faa4bc
--- /dev/null
+++ b/sources/tech/20151127 Research log- gene signatures and connectivity map.md
@@ -0,0 +1,133 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Research log: gene signatures and connectivity map)
+[#]: via: (https://www.jtolio.com/2015/11/research-log-gene-signatures-and-connectivity-map)
+[#]: author: (jtolio.com https://www.jtolio.com/)
+
+Research log: gene signatures and connectivity map
+======
+
+Happy Thanksgiving everyone!
+
+### Context
+
+This is the third post in my continuing series on my attempts at research. Previously we talked about:
+
+ * [what I’m doing, cell states, and microarrays][1]
+ * and then [more about microarrays and R][2].
+
+
+
+By the end of last week we had discussed how to get a table of normalized gene expression intensities that looks like this:
+
+```
+ENSG00000280099_at 0.15484421
+ENSG00000280109_at 0.16881395
+ENSG00000280178_at -0.19621641
+ENSG00000280316_at 0.08622216
+ENSG00000280401_at 0.15966256
+ENSG00000281205_at -0.02085352
+...
+```
+
+The reason for doing this is to figure out which genes are related, and perhaps more importantly, what a cell is even doing.
+
+_Summary:_ new post, also, I’m bringing back the short section summaries.
+
+### Cell lines
+
+The first thing to do when trying to figure out what cells are doing is to choose a cell. There’s all sorts of cells. Healthy brain cells, cancerous blood cells, bruised skin cells, etc.
+
+For any experiment, you’ll need a control to eliminate noise and apply statistical tests for validity. If you don’t use a control, the effect you’re seeing may not even exist, and so for any experiment with cells, you will need a control cell.
+
+Cells often divide, which means that a cell, once chosen, will duplicate itself for you in the presence of the appropriate resources. Not all cells divide ad nauseam, which provides some challenges, but many cells under study luckily do.
+
+So, a _cell line_ is simply a set of cells that have all replicated from a specific chosen initial cell. Any set of cells from a cell line will be as identical as possible (unless you screwed up! geez). They will be the same type of cell with the same traits and behaviors, at least, as much as possible.
+
+_Summary:_ a cell line is a large amount of cells that are as close to being the same as possible.
+
+### Perturbagens
+
+There are many things that might affect what a cell is doing. Drugs, agitation, temperature, disease, cancer, gene splicing, small molecules (maybe you give a cell more iron or calcium or something), hormones, light, Jello, ennui, etc. Given any particular cell line, giving a cell from that cell line one of these _perturbagens_, or, perturbing the cell in a specific way, when compared to a control will say what that cell does differently in the face of that perturbagen.
+
+If you’d like to find out what exactly a certain type of cell does when you give it lemon lime soda, then you choose the right cell line, leave out some control cells and give the rest of the cells soda.
+
+Then, you measure gene expression intensities for both the control cells and the perturbed cells. The _differential expression_ of genes between the perturbed cells and the control cells is likely due to the introduction of the lemon lime soda.
+
+Genes that end up getting expressed _more_ in the presence of the soda are considered _up-regulated_, whereas genes that end up getting expressed _less_ are considered _down-regulated_. The degree to which a gene is up or down regulated constitutes how much of an effect the soda may have had on that gene.
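+
+As a toy illustration (mine, not from any real pipeline; the numbers and the 0.2 cutoff are made up), the bookkeeping is just a subtraction and a threshold per gene:
+
+```
+// normalized expression intensities per gene
+control := map[string]float64{"SHH": 0.10, "TP53": 0.52}
+perturbed := map[string]float64{"SHH": 0.91, "TP53": 0.09}
+for gene, c := range control {
+	diff := perturbed[gene] - c
+	switch {
+	case diff > 0.2:
+		fmt.Println(gene, "up-regulated by", diff)
+	case diff < -0.2:
+		fmt.Println(gene, "down-regulated by", diff)
+	}
+}
+```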
+
+Of course, all of this has such a significant amount of experimental noise that you could find pretty much anything. You’ll need to replicate your experiment independently a few times before you publish that lemon lime soda causes increased expression in the [Sonic hedgehog gene][3].
+
+_Summary:_ A perturbagen is something you introduce/do to a cell to change its behavior, such as drugs or throwing it at a wall or something. The wall perturbagen.
+
+### Gene signature
+
+For a given change or perturbagen to a cell, we now have enough to compute lists of up-regulated and down-regulated genes and the magnitude change in expression for each gene.
+
+This gene expression pattern for some subset of important genes (perhaps the most changed in expression) is called a _gene signature_, and gene signatures are very useful. By comparing signatures, you can:
+
+ * identify or compare cell states
+ * find sets of positively or negatively correlated genes
+ * find similar disease signatures
+ * find similar drug signatures
+ * find drug signatures that might counteract opposite disease signatures.
+
+
+
+(That last bullet point is essentially where I’m headed with my research.)
+
+_Summary:_ a gene signature is a short summary of the most important gene expression differences a perturbagen causes in a cell.
+
+### Drugs!
+
+The pharmaceutical industry is constantly on the lookout for new breakthrough drugs that might represent huge windfalls in cash, and drugs don’t always work as planned. Many drugs spend years in research and development, only to ultimately find poor efficacy or adoption. Sometimes drugs even become known [much more for their side-effects than their originally intended therapy][4].
+
+The practical upshot is that there’s countless FDA-approved drugs that represent decades of work that are simply underused or even unused entirely. These drugs have already cleared many challenging regulatory hurdles, but are simply and quite literally cures looking for a disease.
+
+If even just one of these drugs can be given a new lease on life for some yet-to-be-cured disease, then perhaps we can give some people new leases on life!
+
+_Summary:_ instead of developing new drugs, there’s already lots of drugs that aren’t being used. Maybe we can find matching diseases!
+
+### The Connectivity Map project
+
+The [Broad Institute’s Connectivity Map project][5] isn’t particularly new anymore, but it represents a groundbreaking and promising idea - we can dump a bunch of signatures into a database and construct all sorts of new hypotheses we might not even have thought to check before.
+
+To prove out the usefulness of this idea, the Connectivity Map (or cmap) project chose 5 different cell lines (all cancer cells, which are easy to get to replicate!) and a library of FDA approved drugs, and then gave some cells these drugs.
+
+They then constructed a database of all of the signatures they computed for each possible perturbagen they measured. Finally, they built a web interface where a user can upload a gene signature and get back a list of all of the signatures they collected, ordered from most to least similar. You can totally go sign up and [try it out][5].
+
+This simple tool is surprisingly powerful. It allows you to find similar drugs to a drug you know, but it also allows you to find drugs that might counteract a disease you’ve created a signature for.
+
+Ultimately, the project led to [a number of successful applications][6]. So useful was it that the Broad Institute has doubled down and created the much larger and more comprehensive [LINCS Project][7] that targets an order of magnitude more cell lines (77) and more perturbagens (42,532, compared to cmap’s 6100). You can sign up and use that one too!
+
+_Summary_: building a system that supports querying signature connections has already proved to be super useful.
+
+### Whew
+
+Alright, I wrote most of this on a plane yesterday but since I should now be spending time with family I’m going to cut it short here.
+
+Stay tuned for next week!
+
+--------------------------------------------------------------------------------
+
+via: https://www.jtolio.com/2015/11/research-log-gene-signatures-and-connectivity-map
+
+作者:[jtolio.com][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.jtolio.com/
+[b]: https://github.com/lujun9972
+[1]: https://www.jtolio.com/writing/2015/11/research-log-cell-states-and-microarrays/
+[2]: https://www.jtolio.com/writing/2015/11/research-log-r-and-more-microarrays/
+[3]: https://en.wikipedia.org/wiki/Sonic_hedgehog
+[4]: https://en.wikipedia.org/wiki/Sildenafil#History
+[5]: https://www.broadinstitute.org/cmap/
+[6]: https://www.broadinstitute.org/cmap/publications.jsp
+[7]: http://www.lincscloud.org/
diff --git a/sources/tech/20160302 Go channels are bad and you should feel bad.md b/sources/tech/20160302 Go channels are bad and you should feel bad.md
new file mode 100644
index 0000000000..0ad2a5ed97
--- /dev/null
+++ b/sources/tech/20160302 Go channels are bad and you should feel bad.md
@@ -0,0 +1,443 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Go channels are bad and you should feel bad)
+[#]: via: (https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-should-feel-bad)
+[#]: author: (jtolio.com https://www.jtolio.com/)
+
+Go channels are bad and you should feel bad
+======
+
+_Update: If you’re coming to this blog post from a compendium titled “Go is not good,” I want to make it clear that I am ashamed to be on such a list. Go is absolutely the least worst programming language I’ve ever used. At the time I wrote this, I wanted to curb a trend I was seeing, namely, overuse of one of the more warty parts of Go. I still think channels could be much better, but overall, Go is wonderful. It’s like if your favorite toolbox had [this][1] in it; the tool can have uses (even if it could have had more uses), and it can still be your favorite toolbox!_
+
+_Update 2: I would be remiss if I didn’t point out this excellent survey of real issues: [Understanding Real-World Concurrency Bugs In Go][2]. A significant finding of this survey is that… Go channels cause lots of bugs._
+
+I’ve been using Google’s [Go programming language][3] on and off since mid-to-late 2010, and I’ve had legitimate product code written in Go for [Space Monkey][4] since January 2012 (before Go 1.0!). My initial experience with Go was back when I was researching Hoare’s [Communicating Sequential Processes][5] model of concurrency and the [π-calculus][6] under [Matt Might][7]’s [UCombinator research group][8] as part of my ([now redirected][9]) PhD work to better enable multicore development. Go was announced right then (how serendipitous!) and I immediately started kicking tires.
+
+It quickly became a core part of Space Monkey development. Our production systems at Space Monkey currently account for over 425k lines of pure Go (_not_ counting all of our vendored libraries, which would make it just shy of 1.5 million lines), so not the most Go you’ll ever see, but for the relatively young language we’re heavy users. We’ve [written about our Go usage][10] before. We’ve open-sourced some fairly heavily used libraries; many people seem to be fans of our [OpenSSL bindings][11] (which are faster than [crypto/tls][12], but please keep openssl itself up-to-date!), our [error handling library][13], [logging library][14], and [metric collection library/zipkin client][15]. We use Go, we love Go, we think it’s the least bad programming language for our needs we’ve used so far.
+
+Although I don’t think I can talk myself out of mentioning my widely avoided [goroutine-local-storage library][16] here either (which, even though it’s a hack you shouldn’t use, is a beautiful hack), hopefully my other experience will suffice as valid credentials that I kind of know what I’m talking about before I explain my deliberately inflammatory post title.
+
+![][17]
+
+### Wait, what?
+
+If you ask the proverbial programmer on the street what’s so special about Go, she’ll most likely tell you that Go is most known for channels and goroutines. Go’s theoretical underpinnings are heavily based in Hoare’s CSP model, which is itself incredibly fascinating and interesting and I firmly believe has much more to yield than we’ve appropriated so far.
+
+CSP (and the π-calculus) both use communication as the core synchronization primitive, so it makes sense Go would have channels. Rob Pike has been fascinated with CSP (with good reason) for a [considerable][18] [while][19] [now][20].
+
+But from a pragmatic perspective (which Go prides itself on), Go got channels wrong. Channels as implemented are pretty much a solid anti-pattern in my book at this point. Why? Dear reader, let me count the ways.
+
+#### You probably won’t end up using just channels.
+
+Hoare’s Communicating Sequential Processes is a computational model where essentially the only synchronization primitive is sending or receiving on a channel. As soon as you use a mutex, semaphore, or condition variable, bam, you’re no longer in pure CSP land. Go programmers often tout this model and philosophy through the chanting of the [cached thought][21] “[share memory by communicating][22].”
+
+So let’s try and write a small program using just CSP in Go! Let’s make a high score receiver. All we will do is keep track of the largest high score value we’ve seen. That’s it.
+
+First, we’ll make a `Game` struct.
+
+```
+type Game struct {
+ bestScore int
+ scores chan int
+}
+```
+
+`bestScore` isn’t going to be protected by a mutex! That’s fine, because we’ll simply have one goroutine manage its state and receive new scores over a channel.
+
+```
+func (g *Game) run() {
+ for score := range g.scores {
+ if g.bestScore < score {
+ g.bestScore = score
+ }
+ }
+}
+```
+
+Okay, now we’ll make a helpful constructor to start a game.
+
+```
+func NewGame() (g *Game) {
+ g = &Game{
+ bestScore: 0,
+ scores: make(chan int),
+ }
+ go g.run()
+ return g
+}
+```
+
+Next, let’s assume someone has given us a `Player` that can return scores. It might also return an error, cause hey maybe the incoming TCP stream can die or something, or the player quits.
+
+```
+type Player interface {
+ NextScore() (score int, err error)
+}
+```
+
+To handle the player, we’ll assume all errors are fatal and pass received scores down the channel.
+
+```
+func (g *Game) HandlePlayer(p Player) error {
+ for {
+ score, err := p.NextScore()
+ if err != nil {
+ return err
+ }
+ g.scores <- score
+ }
+}
+```
+
+Yay! Okay, we have a `Game` type that can keep track of the highest score a `Player` receives in a thread-safe way.
+
+You wrap up your development and you’re on your way to having customers. You make this game server public and you’re incredibly successful! Lots of games are being created with your game server.
+
+Soon, you discover people sometimes leave your game. Lots of games no longer have any players playing, but nothing stopped the game loop. You are getting overwhelmed by dead `(*Game).run` goroutines.
+
+**Challenge:** fix the goroutine leak above without mutexes or panics. For real, scroll up to the above code and come up with a plan for fixing this problem using just channels.
+
+I’ll wait.
+
+For what it’s worth, it totally can be done with channels only, but observe the simplicity of the following solution which doesn’t even have this problem:
+
+```
+type Game struct {
+ mtx sync.Mutex
+ bestScore int
+}
+
+func NewGame() *Game {
+ return &Game{}
+}
+
+func (g *Game) HandlePlayer(p Player) error {
+ for {
+ score, err := p.NextScore()
+ if err != nil {
+ return err
+ }
+ g.mtx.Lock()
+ if g.bestScore < score {
+ g.bestScore = score
+ }
+ g.mtx.Unlock()
+ }
+}
+```
+
+Which one would you rather work on? Don’t be deceived into thinking that the channel solution somehow makes this more readable and understandable in more complex cases. Teardown is very hard. This sort of teardown is a piece of cake with a mutex, but one of the hardest things to get right with only Go’s channels. Also, if anyone replies that channels sending channels is easier to reason about here, it will cause me an immediate head-to-desk motion.
+
+Importantly, this particular case might actually be _easily_ solved _with channels_ with some runtime assistance Go doesn’t provide! Unfortunately, as it stands, there are simply a surprising number of problems that are solved better with traditional synchronization primitives than with Go’s version of CSP. We’ll talk about what Go could have done to make this case easier later.
+
+**Exercise:** Still skeptical? Try making both solutions above (channel-only vs. mutex-only) stop asking for scores from `Players` once `bestScore` is 100 or greater. Go ahead and open your text editor. This is a small, toy problem.
+
+The summary here is that you will be using traditional synchronization primitives in addition to channels if you want to do anything real.
+
+#### Channels are slower than implementing it yourself
+
+One of the things I assumed about Go being so heavily based in CSP theory is that there should be some pretty killer scheduler optimizations the runtime can make with channels. Perhaps channels aren’t always the most straightforward primitive, but surely they’re efficient and fast, right?
+
+![][23]
+
+As [Dustin Hiatt][24] points out on [Tyler Treat’s post about Go][25],
+
+> Behind the scenes, channels are using locks to serialize access and provide threadsafety. So by using channels to synchronize access to memory, you are, in fact, using locks; locks wrapped in a threadsafe queue. So how do Go’s fancy locks compare to just using mutex’s from their standard library `sync` package? The following numbers were obtained by using Go’s builtin benchmarking functionality to serially call Put on a single set of their respective types.
+
+```
+> BenchmarkSimpleSet-8 3000000 391 ns/op
+> BenchmarkSimpleChannelSet-8 1000000 1699 ns/op
+>
+```
+
+It’s a similar story with unbuffered channels, or even the same test under contention instead of run serially.
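+
+For flavor, benchmarks of that shape look something like this sketch (not Hiatt’s actual code; assumes a _test.go file importing `sync` and `testing`):
+
+```
+func BenchmarkMutexPut(b *testing.B) {
+	set := map[int]bool{}
+	var mtx sync.Mutex
+	for i := 0; i < b.N; i++ {
+		mtx.Lock()
+		set[i] = true
+		mtx.Unlock()
+	}
+}
+
+func BenchmarkChannelPut(b *testing.B) {
+	ch := make(chan int)
+	done := make(chan struct{})
+	go func() {
+		set := map[int]bool{}
+		for v := range ch {
+			set[v] = true
+		}
+		close(done)
+	}()
+	for i := 0; i < b.N; i++ {
+		ch <- i
+	}
+	close(ch)
+	<-done
+}
+```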
+
+Perhaps the Go scheduler will improve, but in the meantime, good old mutexes and condition variables are very good, efficient, and fast. If you want performance, you use the tried and true methods.
+
+#### Channels don’t compose well with other concurrency primitives
+
+Alright, so hopefully I have convinced you that you’ll at least be interacting with primitives besides channels sometimes. The standard library certainly seems to prefer traditional synchronization primitives over channels.
+
+Well guess what, it’s actually somewhat challenging to use channels alongside mutexes and condition variables correctly!
+
+One of the interesting things about channels that makes a lot of sense coming from CSP is that channel sends are synchronous. A channel send and channel receive are intended to be synchronization barriers, and the send and receive should happen at the same virtual time. That’s wonderful if you’re in well-executed CSP-land.
+
+![][26]
+
+Pragmatically, Go channels also come in a buffered variety. You can allocate a fixed amount of space to account for possible buffering so that sends and receives are disparate events, but the buffer size is capped. Go doesn’t provide a way to have arbitrarily sized buffers - you have to allocate the buffer size in advance. _This is fine_, I’ve seen people argue on the mailing list, _because memory is bounded anyway._
+
+Wat.
+
+This is a bad answer. There’s all sorts of reasons to use an arbitrarily buffered channel. If we knew everything up front, why even have `malloc`?
+
+Not having arbitrarily buffered channels means that a naive send on _any_ channel could block at any time. You want to send on a channel and update some other bookkeeping under a mutex? Careful! Your channel send might block!
+
+```
+// ...
+s.mtx.Lock()
+// ...
+s.ch <- val // might block!
+s.mtx.Unlock()
+// ...
+```
+
+This is a recipe for dining philosopher dinner fights. If you take a lock, you should quickly update state and release it and not do anything blocking under the lock if possible.
+
+There is a way to do a non-blocking send on a channel in Go, but it’s not the default behavior. Assume we have a channel `ch := make(chan int)` and we want to send the value `1` on it without blocking. Here is the minimum amount of typing you have to do to send without blocking:
+
+```
+select {
+case ch <- 1: // it sent
+default: // it didn't
+}
+```
+
+This isn’t what naturally leaps to mind for beginning Go programmers.
+
+The summary is that because many operations on channels block, it takes careful reasoning about philosophers and their dining to successfully use channel operations alongside and under mutex protection, without causing deadlocks.
+
+#### Callbacks are strictly more powerful and don’t require unnecessary goroutines.
+
+![][27]
+
+Whenever an API uses a channel, or whenever I point out that a channel makes something hard, someone invariably points out that I should just spin up a goroutine to read off the channel and make whatever translation or fix I need as it reads off the channel.
+
+Um, no. What if my code is in a hotpath? There’s very few instances that require a channel, and if your API could have been designed with mutexes, semaphores, and callbacks and no additional goroutines (because all event edges are triggered by API events), then using a channel forces me to add another stack of memory allocation to my resource usage. Goroutines are much lighter weight than threads, yes, but lighter weight doesn’t mean the lightest weight possible.
+
+As I’ve formerly [argued in the comments on an article about using channels][28] (lol the internet), your API can _always_ be more general, _always_ more flexible, and take drastically fewer resources if you use callbacks instead of channels. “Always” is a scary word, but I mean it here. There’s proof-level stuff going on.
+
+If someone provides a callback-based API to you and you need a channel, you can provide a callback that sends on a channel with little overhead and full flexibility.
+
+If, on the other hand, someone provides a channel-based API to you and you need a callback, you have to spin up a goroutine to read off the channel _and_ you have to hope that no one tries to send more on the channel when you’re done reading so you cause blocked goroutine leaks.
+
+For a super simple real-world example, check out the [context interface][29] (which incidentally is an incredibly useful package and what you should be using instead of [goroutine-local storage][16]):
+
+```
+type Context interface {
+ ...
+ // Done returns a channel that closes when this work unit should be canceled.
+ Done() <-chan struct{}
+
+ // Err returns a non-nil error when the Done channel is closed
+ Err() error
+ ...
+}
+```
+
+Imagine all you want to do is log the corresponding error when the `Done()` channel fires. What do you have to do? If you don’t have a good place you’re already selecting on a channel, you have to spin up a goroutine to deal with it:
+
+```
+go func() {
+ <-ctx.Done()
+ logger.Errorf("canceled: %v", ctx.Err())
+}()
+```
+
+What if `ctx` gets garbage collected without closing the channel `Done()` returned? Whoops! Just leaked a goroutine!
+
+Now imagine we changed `Done`’s signature:
+
+```
+// Done calls cb when this work unit should be canceled.
+Done(cb func())
+```
+
+First off, logging is so easy now. Check it out: `ctx.Done(func() { log.Errorf("canceled: %v", ctx.Err()) })`. But let’s say you really do need some select behavior. You can just call it like this:
+
+```
+ch := make(chan struct{})
+ctx.Done(func() { close(ch) })
+```
+
+Voila! No expressiveness lost by using a callback instead. `ch` works like the channel `Done()` used to return, and in the logging case we didn’t need to spin up a whole new stack. I got to keep my stack traces (if our log package is inclined to use them); I got to avoid another stack allocation and another goroutine to give to the scheduler.
+
+Next time you use a channel, ask yourself if there’s some goroutines you could eliminate if you used mutexes and condition variables instead. If the answer is yes, your code will be more efficient if you change it. And if you’re trying to use channels just to be able to use the `range` keyword over a collection, I’m going to have to ask you to put your keyboard away or just go back to writing Python books.
+
+![more like Zooey De-channel, amirite][30]
+
+#### The channel API is inconsistent and just cray-cray
+
+Closing or sending on a closed channel panics! Why? If you want to close a channel, you need to either synchronize its closed state externally (with mutexes and so forth that don’t compose well!) so that other writers don’t write to or close a closed channel, or just charge forward and close or write to closed channels and expect you’ll have to recover any raised panics.
+
+This is such bizarre behavior. Almost every other operation in Go has a way to avoid a panic (type assertions have the `, ok =` pattern, for example), but with channels you just get to deal with it.
+
+Okay, so when a send will fail, channels panic. I guess that makes some kind of sense. But unlike almost everything else with nil values, sending to a nil channel won’t panic. Instead, it will block forever! That’s pretty counter-intuitive. That might be useful behavior, just like having a can-opener attached to your weed-whacker might be useful (and found in Skymall), but it’s certainly unexpected. Unlike interacting with nil maps (which do implicit pointer dereferences), nil interfaces (implicit pointer dereferences), unchecked type assertions, and all sorts of other things, nil channels exhibit actual channel behavior, as if a brand new channel was just instantiated for this operation.
+
+Receives are slightly nicer. What happens when you receive on a closed channel? Well, that works - you get a zero value. Okay that makes sense I guess. Bonus! Receives allow you to do a `, ok =`-style check if the channel was open when you received your value. Thank heavens we get `, ok =` here.
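+
+To be concrete, here is how receives behave on a closed channel:
+
+```
+ch := make(chan int, 1)
+ch <- 42
+close(ch)
+v, ok := <-ch // v == 42, ok == true (buffered value still delivered)
+v, ok = <-ch  // v == 0, ok == false (closed and drained)
+```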
+
+But what happens if you receive from a nil channel? _Also blocks forever!_ Yay! Don’t try and use the fact that your channel is nil to keep track of if you closed it!
+
+### What are channels good for?
+
+Of course channels are good for some things (they are a generic container after all), and there are certain things you can only do with them (`select`).
+
+#### They are another special-cased generic datastructure
+
+Go programmers are so used to arguments about generics that I can feel the PTSD coming on just by bringing up the word. I’m not here to talk about it so wipe the sweat off your brow and let’s keep moving.
+
+Whatever your opinion of generics is, Go’s maps, slices, and channels are data structures that support generic element types, because they’ve been special-cased into the language.
+
+In a language that doesn’t allow you to write your own generic containers, _anything_ that allows you to better manage collections of things is valuable. Here, channels are a thread-safe datastructure that supports arbitrary value types.
+
+So that’s useful! That can save some boilerplate I suppose.
+
+I’m having trouble counting this as a win for channels.
+
+#### Select
+
+The main thing you can do with channels is the `select` statement. Here you can wait on a fixed number of inputs for events. It’s kind of like epoll, but you have to know upfront how many sockets you’re going to be waiting on.
+
+This is truly a useful language feature. Channels would be a complete wash if not for `select`. But holy smokes, let me tell you about the first time you decide you might need to select on multiple things but you don’t know how many and you have to use `reflect.Select`.
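+
+In case you haven’t had the pleasure, it goes something like this (`chans` is an assumed `[]chan int` whose length is only known at runtime):
+
+```
+cases := make([]reflect.SelectCase, 0, len(chans))
+for _, ch := range chans {
+	cases = append(cases, reflect.SelectCase{
+		Dir:  reflect.SelectRecv,
+		Chan: reflect.ValueOf(ch),
+	})
+}
+// chosen is the case index, val is a reflect.Value you still have to
+// unwrap, and ok reports whether the chosen channel was open
+chosen, val, ok := reflect.Select(cases)
+```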
+
+### How could channels be better?
+
+It’s really tough to say what the most tactical thing the Go language team could do for Go 2.0 is (the Go 1.0 compatibility guarantee is good but hand-tying), but that won’t stop me from making some suggestions.
+
+#### Select on condition variables!
+
+We could just obviate the need for channels! This is where I propose we get rid of some sacred cows, but let me ask you this, how great would it be if you could select on any custom synchronization primitive? (A: So great.) If we had that, we wouldn’t need channels at all.
+
+#### GC could help us?
+
+In the very first example, we could easily solve the high score server cleanup with channels if we were able to use directionally-typed channel garbage collection to help us clean up.
+
+![][31]
+
+As you know, Go has directionally-typed channels. You can have a channel type that only supports reading (`<-chan`) and a channel type that only supports writing (`chan<-`). Great!
+
+Go also has garbage collection. It’s clear that certain kinds of bookkeeping are just too onerous and we shouldn’t make the programmer deal with them. We clean up unused memory! Garbage collection is useful and neat.
+
+So why not help clean up unused or deadlocked channel reads? Instead of having `make(chan Whatever)` return one bidirectional channel, have it return two single-direction channels (`chanReader, chanWriter := make(chan Type)`).
+
+Let’s reconsider the original example:
+
+```
+type Game struct {
+ bestScore int
+ scores chan<- int
+}
+
+func run(bestScore *int, scores <-chan int) {
+ // we don't keep a reference to a *Game directly because then we'd be holding
+ // onto the send side of the channel.
+ for score := range scores {
+ if *bestScore < score {
+ *bestScore = score
+ }
+ }
+}
+
+func NewGame() (g *Game) {
+ // this make(chan) return style is a proposal!
+ scoreReader, scoreWriter := make(chan int)
+ g = &Game{
+ bestScore: 0,
+ scores: scoreWriter,
+ }
+ go run(&g.bestScore, scoreReader)
+ return g
+}
+
+func (g *Game) HandlePlayer(p Player) error {
+ for {
+ score, err := p.NextScore()
+ if err != nil {
+ return err
+ }
+ g.scores <- score
+ }
+}
+```
+
+If garbage collection closed a channel when we could prove no more values are ever coming down it, this solution is completely fixed. Yes yes, the comment in `run` is indicative of the existence of a rather large gun aimed at your foot, but at least the problem is easily solvable now, whereas it really wasn’t before. Furthermore, a smart compiler could probably make appropriate proofs to reduce the damage from said foot-gun.
+
+#### Other smaller issues
+
+ * **Dup channels?** \- If we could use an equivalent of the `dup` syscall on channels, then we could also solve the multiple producer problem quite easily. Each producer could close their own `dup`-ed channel without ruining the other producers.
+ * **Fix the channel API!** \- Close isn’t idempotent? Send on closed channel panics with no way to avoid it? Ugh!
+ * **Arbitrarily buffered channels** \- If we could make buffered channels with no fixed buffer size limit, then we could make channels that don’t block.
+
+
+
+### What do we tell people about Go then?
+
+If you haven’t yet, please go take a look at my current favorite programming post: [What Color is Your Function][32]. Without being about Go specifically, this blog post much more eloquently than I could lays out exactly why goroutines are Go’s best feature (and incidentally one of the ways Go is better than Rust for some applications).
+
+If you’re still writing code in a programming language that forces keywords like `yield` on you to get high performance, concurrency, or an event-driven model, you are living in the past, whether or not you or anyone else knows it. Go is so far one of the best entrants I’ve seen of languages that implement an M:N threading model that’s not 1:1, and dang that’s powerful.
+
+So, tell folks about goroutines.
+
+If I had to pick one other leading feature of Go, it’s interfaces. Statically-typed [duck typing][33] makes extending and working with your own or someone else’s project so fun and amazing it’s probably worth me writing an entirely different set of words about it some other time.
+
+### So…
+
+I keep seeing people charge in to Go, eager to use channels to their full potential. Here’s my advice to you.
+
+**JUST STAHP IT**
+
+When you’re writing APIs and interfaces, as bad as the advice “never” can be, I’m pretty sure there’s never a time where channels are better, and every Go API I’ve used that used channels I’ve ended up having to fight. I’ve never thought “oh good, there’s a channel here;” it’s always instead been some variant of _**WHAT FRESH HELL IS THIS?**_
+
+So, _please, please use channels where appropriate and only where appropriate._
+
+In all of my Go code I work with, I can count on one hand the number of times channels were really the best choice. Sometimes they are. That’s great! Use them then. But otherwise just stop.
+
+![][34]
+
+_Special thanks for the valuable feedback provided by my proof readers Jeff Wendling, [Andrew Harding][35], [George Shank][36], and [Tyler Treat][37]._
+
+If you want to work on Go with us at Space Monkey, please [hit me up][38]!
+
+--------------------------------------------------------------------------------
+
+via: https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-should-feel-bad
+
+作者:[jtolio.com][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.jtolio.com/
+[b]: https://github.com/lujun9972
+[1]: https://blog.codinghorror.com/content/images/uploads/2012/06/6a0120a85dcdae970b017742d249d5970d-800wi.jpg
+[2]: https://songlh.github.io/paper/go-study.pdf
+[3]: https://golang.org/
+[4]: http://www.spacemonkey.com/
+[5]: https://en.wikipedia.org/wiki/Communicating_sequential_processes
+[6]: https://en.wikipedia.org/wiki/%CE%A0-calculus
+[7]: http://matt.might.net
+[8]: http://www.ucombinator.org/
+[9]: https://www.jtolio.com/writing/2015/11/research-log-cell-states-and-microarrays/
+[10]: https://www.jtolio.com/writing/2014/04/go-space-monkey/
+[11]: https://godoc.org/github.com/spacemonkeygo/openssl
+[12]: https://golang.org/pkg/crypto/tls/
+[13]: https://godoc.org/github.com/spacemonkeygo/errors
+[14]: https://godoc.org/github.com/spacemonkeygo/spacelog
+[15]: https://godoc.org/gopkg.in/spacemonkeygo/monitor.v1
+[16]: https://github.com/jtolds/gls
+[17]: https://www.jtolio.com/images/wat/darth-helmet.jpg
+[18]: https://en.wikipedia.org/wiki/Newsqueak
+[19]: https://en.wikipedia.org/wiki/Alef_%28programming_language%29
+[20]: https://en.wikipedia.org/wiki/Limbo_%28programming_language%29
+[21]: https://lesswrong.com/lw/k5/cached_thoughts/
+[22]: https://blog.golang.org/share-memory-by-communicating
+[23]: https://www.jtolio.com/images/wat/jon-stewart.jpg
+[24]: https://twitter.com/HiattDustin
+[25]: http://bravenewgeek.com/go-is-unapologetically-flawed-heres-why-we-use-it/
+[26]: https://www.jtolio.com/images/wat/obama.jpg
+[27]: https://www.jtolio.com/images/wat/yael-grobglas.jpg
+[28]: http://www.informit.com/articles/article.aspx?p=2359758#comment-2061767464
+[29]: https://godoc.org/golang.org/x/net/context
+[30]: https://www.jtolio.com/images/wat/zooey-deschanel.jpg
+[31]: https://www.jtolio.com/images/wat/joel-mchale.jpg
+[32]: http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
+[33]: https://en.wikipedia.org/wiki/Duck_typing
+[34]: https://www.jtolio.com/images/wat/michael-cera.jpg
+[35]: https://github.com/azdagron
+[36]: https://twitter.com/taterbase
+[37]: http://bravenewgeek.com
+[38]: https://www.jtolio.com/contact/
diff --git a/sources/tech/20170115 Magic GOPATH.md b/sources/tech/20170115 Magic GOPATH.md
new file mode 100644
index 0000000000..1d4cd16e24
--- /dev/null
+++ b/sources/tech/20170115 Magic GOPATH.md
@@ -0,0 +1,119 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Magic GOPATH)
+[#]: via: (https://www.jtolio.com/2017/01/magic-gopath)
+[#]: author: (jtolio.com https://www.jtolio.com/)
+
+Magic GOPATH
+======
+
+_**Update:** With the advent of Go 1.11 and [Go modules][1], this whole post is now useless. Unset your GOPATH entirely and switch to Go modules today!_
+
+Maybe someday I’ll start writing about things besides Go again.
+
+Go requires that you set an environment variable for your workspace called your `GOPATH`. The `GOPATH` is one of the most confusing aspects of Go to newcomers and even relatively seasoned developers alike. It’s not immediately clear what would be better, but finding a good `GOPATH` value has implications for your source code repository layout, how many separate projects you have on your computer, how default project installation instructions work (via `go get`), and even how you interoperate with other projects and libraries.
+
+It’s taken until Go 1.8 to decide to [set a default][2] and that small change was one of [the most talked about code reviews][3] for the 1.8 release cycle.
+
+After [writing about GOPATH himself][4], [Dave Cheney][5] [asked me][6] to write a blog post about what I do.
+
+### My proposal
+
+I set my `GOPATH` to always be the current working directory, unless a parent directory is clearly the `GOPATH`.
+
+Here’s the relevant part of my `.bashrc`:
+
+```
+# bash command to output calculated GOPATH.
+calc_gopath() {
+ local dir="$PWD"
+
+ # we're going to walk up from the current directory to the root
+ while true; do
+
+ # if there's a '.gopath' file, use its contents as the GOPATH relative to
+ # the directory containing it.
+ if [ -f "$dir/.gopath" ]; then
+ ( cd "$dir";
+ # allow us to squash this behavior for cases we want to use vgo
+ if [ "$(cat .gopath)" != "" ]; then
+ cd "$(cat .gopath)";
+ echo "$PWD";
+ fi; )
+ return
+ fi
+
+ # if there's a 'src' directory, the parent of that directory is now the
+ # GOPATH
+ if [ -d "$dir/src" ]; then
+ echo "$dir"
+ return
+ fi
+
+ # we can't go further, so bail. we'll make the original PWD the GOPATH.
+ if [ "$dir" == "/" ]; then
+ echo "$PWD"
+ return
+ fi
+
+ # now we'll consider the parent directory
+ dir="$(dirname "$dir")"
+ done
+}
+
+my_prompt_command() {
+ export GOPATH="$(calc_gopath)"
+
+ # you can have other neat things in here. I also set my PS1 based on git
+ # state
+}
+
+case "$TERM" in
+xterm*|rxvt*)
+ # Bash provides an environment variable called PROMPT_COMMAND. The contents
+ # of this variable are executed as a regular Bash command just before Bash
+ # displays a prompt. Let's only set it if we're in some kind of graphical
+ # terminal I guess.
+ PROMPT_COMMAND=my_prompt_command
+ ;;
+*)
+ ;;
+esac
+```
+
+The benefits are fantastic. If you want to quickly `go get` something and not have it clutter up your workspace, you can do something like:
+
+```
+cd $(mktemp -d) && go get github.com/the/thing
+```
+
+On the other hand, if you’re jumping between multiple projects (whether or not they have the full workspace checked in or are just library packages), the `GOPATH` is set accurately.
+
+More flexibly, if you have a tree where some parent directory is outside of the `GOPATH` but you want to set the `GOPATH` anyways, you can create a `.gopath` file and it will automatically set your `GOPATH` correctly any time your shell is inside that directory.
+
+The whole thing is super nice. I kinda can’t imagine doing something else anymore.
+
+### Fin.
+
+--------------------------------------------------------------------------------
+
+via: https://www.jtolio.com/2017/01/magic-gopath
+
+作者:[jtolio.com][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.jtolio.com/
+[b]: https://github.com/lujun9972
+[1]: https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more
+[2]: https://rakyll.org/default-gopath/
+[3]: https://go-review.googlesource.com/32019/
+[4]: https://dave.cheney.net/2016/12/20/thinking-about-gopath
+[5]: https://dave.cheney.net/
+[6]: https://twitter.com/davecheney/status/811334240247812097
diff --git a/sources/tech/20170320 Whiteboard problems in pure Lambda Calculus.md b/sources/tech/20170320 Whiteboard problems in pure Lambda Calculus.md
new file mode 100644
index 0000000000..02200befe7
--- /dev/null
+++ b/sources/tech/20170320 Whiteboard problems in pure Lambda Calculus.md
@@ -0,0 +1,836 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Whiteboard problems in pure Lambda Calculus)
+[#]: via: (https://www.jtolio.com/2017/03/whiteboard-problems-in-pure-lambda-calculus)
+[#]: author: (jtolio.com https://www.jtolio.com/)
+
+Whiteboard problems in pure Lambda Calculus
+======
+
+My team at [Vivint][1], the [Space Monkey][2] group, stopped doing whiteboard interviews a while ago. We certainly used to do them, but we’ve transitioned to homework problems or actually just hiring a candidate as a short term contractor for a day or two to solve real work problems and see how that goes. Whiteboard interviews are kind of like [Festivus][3] but in a bad way: you get the feats of strength and then the airing of grievances. Unfortunately, modern programming is nothing like writing code in front of a roomful of strangers with only a whiteboard and a marker, so it’s probably not best to optimize for that.
+
+Nonetheless, [Kyle][4]’s recent (wonderful, amazing) post titled [acing the technical interview][5] got me thinking about fun ways to approach whiteboard problems as an interviewee. Kyle’s [Church-encodings][6] made me wonder how many “standard” whiteboard problems you could solve in pure lambda calculus. If this isn’t seen as a feat of strength by your interviewers, there will certainly be some airing of grievances.
+
+➡️️ **Update**: I’ve made a lambda calculus web playground so you can run lambda calculus right in your browser! I’ve gone through and made links to examples in this post with it. Check it out at
+
+### Lambda calculus
+
+Wait, what is lambda calculus? Did I learn that in high school?
+
+Big-C “Calculus” of course usually refers to derivatives, integrals, Taylor series, etc. You might have learned about Calculus in high school, but this isn’t that.
+
+More generally, a little-c “calculus” is really just any system of calculation. The [lambda calculus][7] is essentially a formalization of the smallest set of primitives needed to make a completely [Turing-complete][8] programming language. Expressions in the language can only be one of three things.
+
+ * An expression can define a function that takes exactly one argument (no more, no less) and then has another expression as the body.
+ * An expression can call a function by applying two subexpressions.
+ * An expression can reference a variable.
+
+
+
+Here is the entire grammar:
+
+```
+<expr> ::= <variable>
+         | `λ` <variable> `.` <expr>
+         | `(` <expr> <expr> `)`
+```
+
+That’s it. There’s nothing else you can do. There are no numbers, strings, booleans, pairs, structs, anything. Every value is a function that takes one argument. All variables refer to these functions, and all functions can do is return another function, either directly, or by calling yet another function. There’s nothing else to help you.
+
+To be honest, it’s a little surprising that this is even Turing-complete. How do you do branches or loops or recursion? This seems too simple to work, right?
+
+A common whiteboard problem is the [fizz buzz problem][9]. The goal is to write a function that prints out all the numbers from 0 to 100, but instead of printing numbers divisible by 3 it prints “fizz”, and instead of printing numbers divisible by 5 it prints “buzz”, and in the case of both it prints “fizzbuzz”. It’s a simple toy problem but it’s touted as a good whiteboard problem because evidently many self-proclaimed programmers can’t solve it. Maybe part of that is cause whiteboard problems suck? I dunno.
+
+Anyway, here’s fizz buzz in pure lambda calculus:
+
+```
+(λU.(λY.(λvoid.(λ0.(λsucc.(λ+.(λ*.(λ1.(λ2.(λ3.(λ4.(λ5.(λ6.(λ7.(λ8.(λ9.(λ10.(λnum.(λtrue.(λfalse.(λif.(λnot.(λand.(λor.(λmake-pair.(λpair-first.(λpair-second.(λzero?.(λpred.(λ-.(λeq?.(λ/.(λ%.(λnil.(λnil?.(λcons.(λcar.(λcdr.(λdo2.(λdo3.(λdo4.(λfor.(λprint-byte.(λprint-list.(λprint-newline.(λzero-byte.(λitoa.(λfizzmsg.(λbuzzmsg.(λfizzbuzzmsg.(λfizzbuzz.(fizzbuzz (((num 1) 0) 1)) λn.((for n) λi.((do2 (((if (zero? ((% i) 3))) λ_.(((if (zero? ((% i) 5))) λ_.(print-list fizzbuzzmsg)) λ_.(print-list fizzmsg))) λ_.(((if (zero? ((% i) 5))) λ_.(print-list buzzmsg)) λ_.(print-list (itoa i))))) (print-newline nil)))) ((cons (((num 0) 7) 0)) ((cons (((num 1) 0) 5)) ((cons (((num 1) 2) 2)) ((cons (((num 1) 2) 2)) ((cons (((num 0) 9) 8)) ((cons (((num 1) 1) 7)) ((cons (((num 1) 2) 2)) ((cons (((num 1) 2) 2)) nil))))))))) ((cons (((num 0) 6) 6)) ((cons (((num 1) 1) 7)) ((cons (((num 1) 2) 2)) ((cons (((num 1) 2) 2)) nil))))) ((cons (((num 0) 7) 0)) ((cons (((num 1) 0) 5)) ((cons (((num 1) 2) 2)) ((cons (((num 1) 2) 2)) nil))))) λn.(((Y λrecurse.λn.λresult.(((if (zero? n)) λ_.(((if (nil? result)) λ_.((cons zero-byte) nil)) λ_.result)) λ_.((recurse ((/ n) 10)) ((cons ((+ zero-byte) ((% n) 10))) result)))) n) nil)) (((num 0) 4) 8)) λ_.(print-byte (((num 0) 1) 0))) (Y λrecurse.λl.(((if (nil? l)) λ_.void) λ_.((do2 (print-byte (car l))) (recurse (cdr l)))))) PRINT_BYTE) λn.λf.((((Y λrecurse.λremaining.λcurrent.λf.(((if (zero? remaining)) λ_.void) λ_.((do2 (f current)) (((recurse (pred remaining)) (succ current)) f)))) n) 0) f)) λa.do3) λa.do2) λa.λb.b) λl.(pair-second (pair-second l))) λl.(pair-first (pair-second l))) λe.λl.((make-pair true) ((make-pair e) l))) λl.(not (pair-first l))) ((make-pair false) void)) λm.λn.((- m) ((* ((/ m) n)) n))) (Y λ/.λm.λn.(((if ((eq? m) n)) λ_.1) λ_.(((if (zero? ((- m) n))) λ_.0) λ_.((+ 1) ((/ ((- m) n)) n)))))) λm.λn.((and (zero? ((- m) n))) (zero? ((- n) m)))) λm.λn.((n pred) m)) λn.(((λn.λf.λx.(pair-second ((n λp.((make-pair (f (pair-first p))) (pair-first p))) ((make-pair x) x))) n) succ) 0)) λn.((n λ_.false) true)) λp.(p false)) λp.(p true)) λx.λy.λt.((t x) y)) λa.λb.((a true) b)) λa.λb.((a b) false)) λp.λt.λf.((p f) t)) λp.λa.λb.(((p a) b) void)) λt.λf.f) λt.λf.t) λa.λb.λc.((+ ((+ ((* ((* 10) 10)) a)) ((* 10) b))) c)) (succ 9)) (succ 8)) (succ 7)) (succ 6)) (succ 5)) (succ 4)) (succ 3)) (succ 2)) (succ 1)) (succ 0)) λm.λn.λx.(m (n x))) λm.λn.λf.λx.((((m succ) n) f) x)) λn.λf.λx.(f ((n f) x))) λf.λx.x) λx.(U U)) (U λh.λf.(f λx.(((h h) f) x)))) λf.(f f))
+```
+
+➡️️ [Try it out in your browser!][10]
+
+(This program expects a function called `PRINT_BYTE` to be defined, which takes a Church-encoded numeral, turns it into a byte, writes it to `stdout`, and then returns the same Church-encoded numeral. Expecting a function that has side effects might disqualify this from being pure, but that’s certainly debatable.)
+
+Don’t be deceived! I said there were no native numbers or lists or control structures in lambda calculus and I meant it. `0`, `7`, `if`, and `+` are all _variables_ that represent _functions_ and have to be constructed before they can be used in the code block above.
+
+### What? What’s happening here?
+
+Okay let’s start over and build up to fizz buzz. We’re going to need a lot. We’re going to need to build up concepts of numbers, logic, and lists all from scratch. Ask your interviewers if they’re comfortable cause this might be a while.
+
+Here is a basic lambda calculus function:
+
+```
+λx.x
+```
+
+This is the identity function and it is equivalent to the following Javascript:
+
+```
+function(x) { return x; }
+```
+
+It takes an argument and returns it! We can call the identity function with another value. Function calling in many languages looks like `f(x)`, but in lambda calculus, it looks like `(f x)`.
+
+```
+(λx.x y)
+```
+
+This will return `y`. Once again, here’s equivalent Javascript:
+
+```
+function(x) { return x; }(y)
+```
+
+Aside: If you’re already familiar with lambda calculus, my formulation of precedence is such that `(λx.x y)` is not the same as `λx.(x y)`. `(λx.x y)` applies `y` to the identity function `λx.x`, and `λx.(x y)` is a function that applies `y` to its argument `x`. Perhaps not what you’re used to, but the parser was way more straightforward, and programming with it this way seems a bit more natural, believe it or not.
+
+Okay, great. We can call functions. What if we want to pass more than one argument?
+
+### Currying
+
+Imagine the following Javascript function:
+
+```
+let s1 = function(f, x) { return f(x); }
+```
+
+We want to call it with two arguments, another function and a value, and we want the function to then be called on the value, and have its result returned. Can we do this while using only one argument?
+
+[Currying][11] is a technique for dealing with this. Instead of taking two arguments, take the first argument and return another function that takes the second argument. Here’s the Javascript:
+
+```
+let s2 = function(f) {
+ return function(x) {
+ return f(x);
+ }
+};
+```
+
+Now, `s1(f, x)` is the same as `s2(f)(x)`. So the equivalent lambda calculus for `s2` is then
+
+```
+λf.λx.(f x)
+```
+
+Calling this function with `g` for `f` and `y` for `x` is like so:
+
+```
+((s2 g) y)
+```
+
+or
+
+```
+((λf.λx.(f x) g) y)
+```
+
+The equivalent Javascript here is:
+
+```
+function(f) {
+ return function(x) {
+    return f(x);
+ }
+}(g)(y)
+```
+
+### Numbers
+
+Since everything is a function, we might feel a little stuck with what to do about numbers. Luckily, [Alonzo Church][12] already figured it out for us! When you have a number, often what you want to do is represent how many times you might do something.
+
+So let’s represent a number as how many times we’ll apply a function to a value. This is called a [Church numeral][13]. If we have `f` and `x`, `0` will mean we don’t call `f` at all, and just return `x`. `1` will mean we call `f` one time, `2` will mean we call `f` twice, and so on.
+
+Here are some definitions! (N.B.: assignment isn’t actually part of lambda calculus, but it makes writing down definitions easier)
+
+```
+0 = λf.λx.x
+```
+
+Here, `0` takes a function `f`, a value `x`, and never calls `f`. It just returns `x`. `f` is called 0 times.
+
+```
+1 = λf.λx.(f x)
+```
+
+Like `0`, `1` takes `f` and `x`, but here it calls `f` exactly once. Let’s see how this continues for other numbers.
+
+```
+2 = λf.λx.(f (f x))
+3 = λf.λx.(f (f (f x)))
+4 = λf.λx.(f (f (f (f x))))
+5 = λf.λx.(f (f (f (f (f x)))))
+```
+
+`5` is a function that takes `f`, `x`, and calls `f` 5 times!
+
+Okay, this is convenient, but how are we going to do math on these numbers?
+
+### Successor
+
+Let’s make a _successor_ function that takes a number and returns a new number that calls `f` just one more time.
+
+```
+succ = λn. λf.λx.(f ((n f) x))
+```
+
+`succ` is a function that takes a Church-encoded number, `n`. The spaces after `λn.` are ignored. I put them there to indicate that we expect to usually call `succ` with one argument, curried or no. `succ` then returns another Church-encoded number, `λf.λx.(f ((n f) x))`. What is it doing? Let’s break it down.
+
+ * `((n f) x)` looks like that time we needed to call a function that took two “curried” arguments. So we’re calling `n`, which is a Church numeral, with two arguments, `f` and `x`. This is going to call `f` `n` times!
+ * `(f ((n f) x))` This is calling `f` again, one more time, on the result of the previous value.
+
+
+
+So does `succ` work? Let’s see what happens when we call `(succ 1)`. We should get the `2` we defined earlier!
+
+```
+ (succ 1)
+-> (succ λf.λx.(f x)) # resolve the variable 1
+-> (λn.λf.λx.(f ((n f) x)) λf.λx.(f x)) # resolve the variable succ
+-> λf.λx.(f ((λf.λx.(f x) f) x)) # call the outside function. replace n
+ # with the argument
+
+let's sidebar and simplify the subexpression
+ (λf.λx.(f x) f)
+-> λx.(f x) # call the function, replace f with f!
+
+now we should be able to simplify the larger subexpression
+ ((λf.λx.(f x) f) x)
+-> (λx.(f x) x) # sidebar above
+-> (f x) # call the function, replace x with x!
+
+let's go back to the original now
+ λf.λx.(f ((λf.λx.(f x) f) x))
+-> λf.λx.(f (f x)) # subexpression simplification above
+```
+
+and done! That last line is identical to the `2` we defined originally! It calls `f` twice.
+
+### Math
+
+Now that we have the successor function, if your interviewers haven’t checked out, tell them that fizz buzz isn’t too far away now; we have [Peano Arithmetic][14]! They can then check their interview bingo cards and see if they’ve increased their winnings.
+
+No but for real, since we have the successor function, we can now easily do addition and multiplication, which we will need for fizz buzz.
+
+First, recall that a number `n` is a function that takes another function `f` and an initial value `x` and applies `f` _n_ times. So if you have two numbers _m_ and _n_, what you want to do is apply `succ` to `m` _n_ times!
+
+```
++ = λm.λn.((n succ) m)
+```
+
+Here, `+` is a variable. If it’s not a lambda expression or a function call, it’s a variable!
+
+Multiplication is similar, but instead of applying `succ` to `m` _n_ times, we’re going to add `m` to `0` `n` times.
+
+First, note that if `((+ m) n)` is adding `m` and `n`, then that means that `(+ m)` is a _function_ that adds `m` to its argument. So we want to apply the function `(+ m)` to `0` `n` times.
+
+```
+* = λm.λn.((n (+ m)) 0)
+```
+
+Yay! We have multiplication and addition now.
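+
+Just to sanity-check, here’s roughly how `((+ 2) 3)` reduces with these definitions:
+
+```
+  ((+ 2) 3)
+-> ((3 succ) 2)             # apply succ to 2, 3 times
+-> (succ (succ (succ 2)))
+-> 5                        # λf.λx.(f (f (f (f (f x)))))
+```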
+
+### Logic
+
+We’re going to need booleans and if statements and logic tests and so on. So, let’s talk about booleans. Recall how with numbers, what we kind of wanted with a number `n` is to do something _n_ times. Similarly, what we want with booleans is to do one of two things, either/or, but not both. Alonzo Church to the rescue again.
+
+Let’s have booleans be functions that take two arguments (curried of course), where the `true` boolean will return the first option, and the `false` boolean will return the second.
+
+```
+true = λt.λf.t
+false = λt.λf.f
+```
+
+So that we can demonstrate booleans, we’re going to define a simple sample function called `zero?` that returns `true` if a number `n` is zero, and `false` otherwise:
+
+```
+zero? = λn.((n λ_.false) true)
+```
+
+To explain: if we have a Church numeral for 0, it will call the first argument it gets called with 0 times and just return the second argument. In other words, 0 will just return the second argument and that’s it. Otherwise, any other number will call the first argument at least once. So, `zero?` will take `n` and give it a function that throws away its argument and always returns `false` whenever it’s called, and start it off with `true`. Only zero values will return `true`.
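+
+Watch it work on `0` and on `2`:
+
+```
+  (zero? 0)
+-> ((0 λ_.false) true)      # 0 calls λ_.false zero times
+-> true
+
+  (zero? 2)
+-> ((2 λ_.false) true)      # 2 calls λ_.false twice
+-> (λ_.false (λ_.false true))
+-> false
+```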
+
+➡️️ [Try it out in your browser!][15]
+
+We can now write an `if'` function to make use of these boolean values. `if'` will take a predicate value `p` (the boolean) and two options `a` and `b`.
+
+```
+if' = λp.λa.λb.((p a) b)
+```
+
+You can use it like this:
+
+```
+(((if' (zero? n))
+  (something-when-zero x))
+ (something-when-not-zero y))
+```
+
+One thing that’s weird about this construction is that the interpreter is going to evaluate both branches (my lambda calculus interpreter is [eager][16] instead of [lazy][17]). Both `something-when-zero` and `something-when-not-zero` are going to be called to determine what to pass in to `if'`. To make it so that we don’t actually call the function in the branch we don’t want to run, let’s protect the logic in another function. We’ll name the argument to the function `_` to indicate that we want to just throw it away.
+
+```
+(((if (zero? n))
+  λ_. (something-when-zero x))
+ λ_. (something-when-not-zero y))
+```
+
+This means we’re going to have to make a new `if` function that calls the correct branch with a throwaway argument, like `0` or something.
+
+```
+if = λp.λa.λb.(((p a) b) 0)
+```
+
+Okay, now we have booleans and `if`!
+
+### Currying part deux
+
+At this point, you might be getting sick of how calling something with multiple curried arguments involves all these extra parentheses. `((f a) b)` is annoying, can’t we just do `(f a b)`?
+
+It’s not part of the strict grammar, but my interpreter makes this small concession. `(a b c)` will be expanded to `((a b) c)` by the parser. `(a b c d)` will be expanded to `(((a b) c) d)` by the parser, and so on.
+
+So, for the rest of the post, for ease of explanation, I’m going to use this [syntax sugar][18]. Observe how using `if` changes:
+
+```
+(if (zero? n)
+ λ_. (something-when-zero x)
+ λ_. (something-when-not-zero y))
+```
+
+It’s a little better.
+
+### More logic
+
+Let’s talk about `and`, `or`, and `not`!
+
+`and` returns true if and only if both `a` and `b` are true. Let’s define it!
+
+```
+and = λa.λb.
+  (if a
+ λ_. b
+ λ_. false)
+```
+
+`or` returns true if `a` is true or if `b` is true:
+
+```
+or = λa.λb.
+  (if a
+ λ_. true
+ λ_. b)
+```
+
+`not` just returns the opposite of whatever it was given:
+
+```
+not = λa.
+  (if a
+ λ_. false
+ λ_. true)
+```
+
+It turns out these can be written a bit more simply, but they’re basically doing the same thing:
+
+```
+and = λa.λb.(a b false)
+or = λa.λb.(a true b)
+not = λp.λt.λf.(p f t)
+```
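+
+For instance, with the sugar from the last section:
+
+```
+  (and true false)
+-> (true false false)       # true picks its first argument
+-> false
+
+  (or false true)
+-> (false true true)        # false picks its second argument
+-> true
+```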
+
+➡️️ [Try it out in your browser!][19]
+
+### Pairs!
+
+Sometimes it’s nice to keep data together. Let’s make a little 2-tuple type! We want three functions. We want a function called `make-pair` that will take two arguments and return a “pair”, we want a function called `pair-first` that will return the first element of the pair, and we want a function called `pair-second` that will return the second element. How can we achieve this? You’re almost certainly in the interview room alone, but now’s the time to yell “Alonzo Church”!
+
+```
+make-pair = λx.λy. λa.(a x y)
+```
+
+`make-pair` is going to take two arguments, `x` and `y`, and they will be the elements of the pair. The pair itself is a function that takes an “accessor” `a` that will be given `x` and `y`. All `a` has to do is take the two arguments and return the one it wants.
+
+Here is someone making a pair with variables `1` and `2`:
+
+```
+(make-pair 1 2)
+```
+
+This returns:
+
+```
+λa.(a 1 2)
+```
+
+There’s a pair! Now we just need to access the values inside.
+
+Remember how `true` takes two arguments and returns the first one and `false` takes two arguments and returns the second one?
+
+```
+pair-first = λp.(p true)
+pair-second = λp.(p false)
+```
+
+`pair-first` is going to take a pair `p` and give it `true` as the accessor `a`. `pair-second` is going to give the pair `false` as the accessor.
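+
+Here’s the whole round trip on the pair from above:
+
+```
+  (pair-first (make-pair 1 2))
+-> (pair-first λa.(a 1 2))
+-> (λa.(a 1 2) true)
+-> (true 1 2)               # true returns the first of two arguments
+-> 1
+```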
+
+Voilà, you can now store 2-tuples of values and recover the data from them.
+
+➡️️ [Try it out in your browser!][20]
+
+### Lists!
+
+We’re going to construct [linked lists][21]. Each list item needs two things: the value at the current position in the list and a reference to the rest of the list.
+
+One additional caveat is we want to be able to identify an empty list, so we’re going to store whether or not the current value is the end of a list as well. In [LISP][22]-based programming languages, the end of the list is the special value `nil`, and checking if we’ve hit the end of the list is accomplished with the `nil?` predicate.
+
+Because we want to distinguish `nil` from a list with a value, we’re going to store three things in each linked list item. Whether or not the list is empty, and if not, the value and the rest of the list. So we need a 3-tuple.
+
+Once we have pairs, other-sized tuples are easy. For instance, a 3-tuple is just one pair with another pair inside for one of the slots.
+
+For each list element, we’ll store:
+
+```
+[not-empty [value rest-of-list]]
+```
+
+As an example, a list element with a value of `1` would look like:
+
+```
+[true [1 remainder]]
+```
+
+whereas `nil` will look like
+
+```
+[false whatever]
+```
+
+That second part of `nil` just doesn’t matter.
+
+First, let’s define `nil` and `nil?`:
+
+```
+nil = (make-pair false false)
+nil? = λl. (not (pair-first l))
+```
+
+The important thing about `nil` is that the first element in the pair is `false`.
+
+Now that we have an empty list, let’s define how to add something to the front of it. In LISP-based languages, the operation to _construct_ a new list element is called `cons`, so we’ll call this `cons`, too.
+
+`cons` will take a value and an existing list and return a new list with the given value at the front of the list.
+
+```
+cons = λvalue.λlist.
+ (make-pair true (make-pair value list))
+```
+
+`cons` is returning a pair where, unlike `nil`, the first element of the pair is `true`. This represents that there’s something in the list here. The second pair element is what we wanted in our linked list: the value at the current position, and a reference to the rest of the list.
+
+So how do we access things in the list? Let’s define two functions called `head` and `tail`. `head` is going to return the value at the front of the list, and `tail` is going to return everything but the front of the list. In LISP-based languages, these functions are sometimes called `car` and `cdr` for surprisingly [esoteric reasons][23]. `head` and `tail` have undefined behavior here when called on `nil`, so let’s just assume `nil?` is false for the list and keep going.
+
+```
+head = λlist. (pair-first (pair-second list))
+tail = λlist. (pair-second (pair-second list))
+```
+
+Both `head` and `tail` first get `(pair-second list)`, which returns the tuple that has the value and reference to the remainder. Then, they use either `pair-first` or `pair-second` to get the current value or the rest of the list.
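+
+Tracing a small example makes the layering clear:
+
+```
+  (head (cons 5 nil))
+-> (head (make-pair true (make-pair 5 nil)))
+-> (pair-first (pair-second (make-pair true (make-pair 5 nil))))
+-> (pair-first (make-pair 5 nil))
+-> 5
+```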
+
+Great, we have lists!
+
+➡️️ [Try it out in your browser!][24]
+
+### Recursion and loops
+
+Let’s make a simple function that sums up a list of numbers.
+
+```
+sum = λlist.
+ (if (nil? list)
+ λ_. 0
+ λ_. (+ (head list) (sum (tail list))))
+```
+
+If the list is empty, let’s return 0. If the list has an element, let’s add that element to the sum of the rest of the list. [Recursion][25] is a cornerstone tool of computer science, and being able to assume a solution to a subproblem to solve a problem is super neat!
+
+Okay, except, this doesn’t work like this in lambda calculus. Remember how I said assignment wasn’t something that exists in lambda calculus? If you have:
+
+```
+x = y
+
+```
+
+This really means that the rest of the program becomes the body of a function that takes `x`, and that function is immediately applied to `y`:
+
+```
+(λx.<rest of the program> y)
+```
+
+In the case of our sum definition, we have:
+
+```
+(λsum.
+  <rest of the program>
+
+ λlist.
+ (if (nil? list)
+ λ_. 0
+ λ_. (+ (head list) (sum (tail list)))))
+```
+
+What that means is `sum` doesn’t have any access to itself. It can’t call itself like we’ve written, because when it tries to call `sum`, it’s undefined!
+
+This is a pretty crushing blow, but it turns out there’s a mind bending and completely unexpected trick the universe has up its sleeve.
+
+Assume we wrote `sum` so that it takes two arguments: first, a reference to something `sum`-like that we’ll call `helper`, and then the list. If we could figure out how to solve the recursion problem, then we could use this `sum`. Let’s do that.
+
+```
+sum = λhelper.λlist.
+ (if (nil? list)
+ λ_. 0
+ λ_. (+ (head list) (helper (tail list))))
+```
+
+But hey! When we call `sum`, we have a reference to `sum` then! Let’s just give `sum` itself before the list.
+
+```
+(sum sum list)
+```
+
+This seems promising, but unfortunately now the `helper` invocation inside of `sum` is broken. `helper` is just `sum` and `sum` expects a reference to itself. Let’s try again, changing the `helper` call:
+
+```
+sum = λhelper.λlist.
+ (if (nil? list)
+ λ_. 0
+ λ_. (+ (head list) (helper helper (tail list))))
+
+(sum sum list)
+```
+
+We did it! This actually works! We engineered recursion out of math! At no point does `sum` refer to itself inside of itself, and yet we managed to make a recursive function anyways!
+
+➡️️ [Try it out in your browser!][26]
+
+Despite the minor miracle we’ve just performed, we’ve now changed how we program recursion: every recursive function has to be called with itself. This isn’t the end of the world, but it’s a little annoying. Luckily for us, there’s a function that cleans this all right up, called the [Y combinator][27].
+
+The _Y combinator_ is probably now more famously known as [a startup incubator][28], or perhaps even more so as the domain name for one of the most popular sites that has a different name than its URL, [Hacker News][29], but fixed point combinators such as the Y combinator have had a longer history.
+
+The Y combinator can be defined in different ways, but the definition I’m using is:
+
+```
+Y = λf.(λx.(x x) λx.(f λy.((x x) y)))
+```
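+
+The property that matters: `(Y g)` reduces to `g` applied to a function that behaves just like `(Y g)` again, so `g`’s first argument becomes a working self-reference. Roughly:
+
+```
+  (Y g)
+-> (λx.(x x) λx.(g λy.((x x) y)))
+-> (λx.(g λy.((x x) y)) λx.(g λy.((x x) y)))
+-> (g λy.(... y))           # where the ... behaves like (Y g)
+```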
+
+You might consider reading an excellent tutorial such as [this one][30] or [this one][31] to see how the Y combinator can be derived.
+
+Anyway, `Y` will make our original `sum` work as expected.
+
+```
+sum = (Y λhelper.λlist.
+ (if (nil? list)
+ λ_. 0
+ λ_. (+ (head list) (helper (tail list)))))
+```
+
+We can now call `(sum list)` without any wacky doubling of the function name, either inside or outside of the function. Hooray!
+
+➡️️ [Try it out in your browser!][32]
+
+### More math
+
+“Get ready to do more math! We now have enough building blocks to do subtraction, division, and modulo, which we’ll need for fizz buzz,” you tell the security guards that are approaching you.
+
+Just like addition, before we define subtraction we’ll define a predecessor function. Unlike addition, the predecessor function `pred` is much more complicated than the successor function `succ`.
+
+The basic idea is we’re going to create a pair to keep track of the previous value. We’ll start from zero and build up `n` but also drag the previous value such that at `n` we also have `n - 1`. Notably, this solution does not figure out how to deal with negative numbers. The predecessor of 0 will be 0, and negatives will have to be dealt with some other time and some other way.
+
+First, we’ll make a helper function that takes a pair of numbers and returns a new pair where the first number in the old pair is the second number in the new pair, and the new first number is the successor of the old first number.
+
+```
+pred-helper = λpair.
+ (make-pair (succ (pair-first pair)) (pair-first pair))
+```
+
+Make sense? If we call `pred-helper` on a pair `[0 0]`, the result will be `[1 0]`. If we call it on `[1 0]`, the result will be `[2 1]`. Essentially this helper slides older numbers off to the right.
+
+Okay, so now we’re going to call `pred-helper` _n_ times, with a starting pair of `[0 0]`, and then get the _second_ value, which should be `n - 1` when we’re done, from the pair.
+
+```
+pred = λn.
+ (pair-second (n pred-helper (make-pair 0 0)))
+```
+
+We can combine these two functions now for the full effect:
+
+```
+pred = λn.
+ (pair-second
+ (n
+ λpair.(make-pair (succ (pair-first pair)) (pair-first pair))
+ (make-pair 0 0)))
+```
+
+➡️️ [Try it out in your browser!][33]
+
+Now that we have `pred`, subtraction is easy! To subtract `n` from `m`, we’re going to apply `pred` to `m` _n_ times.
+
+```
+- = λm.λn.(n pred m)
+```
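+
+For example, subtracting 1 from 3 applies `pred` once:
+
+```
+  (- 3 1)
+-> (1 pred 3)               # apply pred to 3, 1 time
+-> (pred 3)
+-> 2
+```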
+
+Keep in mind that if `n` is equal to _or greater than_ `m`, the result of `(- m n)` will be zero, since there are no negative numbers and the predecessor of `0` is `0`. This fact means we can implement some new logic tests. Let’s make `(ge? m n)` return `true` if `m` is greater than or equal to `n` and make `(le? m n)` return `true` if `m` is less than or equal to `n`.
+
+```
+ge? = λm.λn.(zero? (- n m))
+le? = λm.λn.(zero? (- m n))
+```
+
+If we have greater-than-or-equal-to and less-than-or-equal-to, then we can make equal!
+
+```
+eq? = λm.λn.(and (ge? m n) (le? m n))
+```
+
+Now we have enough for integer division! The idea for integer division of `m` by `n` is that we will keep count of the times we can subtract `n` from `m` without going past zero.
+
+```
+/ = (Y λ/.λm.λn.
+ (if (eq? m n)
+ λ_. 1
+ λ_. (if (le? m n)
+ λ_. 0
+ λ_. (+ 1 (/ (- m n) n)))))
+```
+
+Once we have subtraction, multiplication, and integer division, we can create modulo.
+
+```
+% = λm.λn. (- m (* (/ m n) n))
+```
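+
+For instance, `(% 7 3)` works out like this:
+
+```
+  (% 7 3)
+-> (- 7 (* (/ 7 3) 3))
+-> (- 7 (* 2 3))            # integer division: (/ 7 3) is 2
+-> (- 7 6)
+-> 1
+```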
+
+➡️️ [Try it out in your browser!][34]
+
+### Aside about performance
+
+You might be wondering about performance at this point. Every time we subtract one from 100, we count up from 0 to 100 to generate 99. This effect compounds itself for division and modulo. The truth is that Church numerals and other encodings aren’t very performant! Just like how tapes in Turing machines aren’t a particularly efficient way to deal with data, Church encodings are most interesting from a theoretical perspective for proving facts about computation.
+
+That doesn’t mean we can’t make things faster though!
+
+Lambda calculus is purely functional and side-effect free, which means that all sorts of optimizations can be applied. Functions can be aggressively memoized. In other words, once a specific function and its arguments have been computed, there’s no need to compute them ever again. The result of that function will always be the same anyways. Further, functions can be computed lazily and only if needed. What this means is if a branch of your program’s execution renders a result that’s never used, the compiler can decide to just not run that part of the program and end up with the exact same result.
+
+[My interpreter][35] does have side effects, since programs written in it can cause the system to write output to the user via the special built-in function `PRINT_BYTE`. As a result, I didn’t choose lazy evaluation. The only optimization I chose was aggressive memoization for all functions that are side-effect free. The memoization still has room for improvement, but the result is much faster than a naive implementation.
+
+### Output
+
+“We’re rounding the corner on fizz buzz!” you shout at the receptionist as security drags you around the corner on the way to the door. “We just need to figure out how to communicate results to the user!”
+
+Unfortunately, lambda calculus can’t communicate with your operating system kernel without some help, but a small concession is all we need. [Sheepda][35] provides a single built-in function `PRINT_BYTE`. `PRINT_BYTE` takes a number as its argument (a Church encoded numeral) and prints the corresponding byte to the configured output stream (usually `stdout`).
+
+With `PRINT_BYTE`, we’re going to need to reference a number of different [ASCII bytes][36], so we should make writing numbers in code easier. Earlier we defined numbers 0 - 5, so let’s continue and define numbers 6 - 10.
+
+```
+6 = (succ 5)
+7 = (succ 6)
+8 = (succ 7)
+9 = (succ 8)
+10 = (succ 9)
+```
+
+Now let’s define a helper to create three-digit decimal numbers.
+
+```
+num = λa.λb.λc.(+ (+ (* (* 10 10) a) (* 10 b)) c)
+```
+
+The newline byte is decimal 10. Here’s a function to print newlines!
+
+```
+print-newline = λ_.(PRINT_BYTE (num 0 1 0))
+```
+
+### Doing multiple things
+
+Now that we have this `PRINT_BYTE` function, we have functions that can cause side-effects. We want to call `PRINT_BYTE` but we don’t care about its return value. We need a way to call multiple functions in sequence.
+
+What if we make a function that takes two arguments and throws away the first one again?
+
+```
+do2 = λ_.λx.x
+```
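+
+Because the interpreter is eager, both arguments are evaluated (and their side effects happen) in order before `do2` runs, and `do2` just hands back the second result. A tiny sketch that should print “Hi”:
+
+```
+(do2 (PRINT_BYTE (num 0 7 2))    # prints H (ASCII 72), result thrown away
+     (PRINT_BYTE (num 1 0 5)))   # prints i (ASCII 105), result returned
+```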
+
+Here’s a function to print every value in a list:
+
+```
+print-list = (Y λrecurse.λlist.
+ (if (nil? list)
+ λ_. 0
+ λ_. (do2 (PRINT_BYTE (head list))
+ (recurse (tail list)))))
+```
+
+And here’s a function that works like a for loop. It calls `f` with every number from `0` up to (but not including) `n`. It uses a small helper function that continues to call itself until `i` is equal to `n`, and starts `i` off at `0`.
+
+```
+for = λn.λf.(
+ (Y λrecurse.λi.
+ (if (eq? i n)
+ λ_. void
+ λ_. (do2 (f i)
+ (recurse (succ i)))))
+ 0)
+```
+
+### Converting an integer to a string
+
+The last thing we need to complete fizz buzz is a function that turns a number into a string of bytes to print. You might have noticed the `print-num` calls in some of the web-based examples above. We’re going to see how to make it! Writing this function is sometimes a whiteboard problem in its own right. In C, this function is called `itoa`, for integer to ASCII.
+
+Here’s an example of how it works. Imagine the number we’re converting to bytes is `123`. We can get the `3` out by doing `(% 123 10)`, which will be `3`. Then we can divide by `10` to get `12`, and then start over. `(% 12 10)` is `2`. We’ll loop down until we hit zero.
+
+Once we have a number, we can convert it to ASCII by adding the value of the `'0'` ASCII byte. Then we can make a list of ASCII bytes for use with `print-list`.
+
+```
+zero-char = (num 0 4 8) # the ascii code for the byte that represents 0.
+
+itoa = λn.(
+ (Y λrecurse.λn.λresult.
+ (if (zero? n)
+ λ_. (if (nil? result)
+ λ_. (cons zero-char nil)
+ λ_. result)
+ λ_. (recurse (/ n 10) (cons (+ zero-char (% n 10)) result))))
+ n nil)
+
+print-num = λn.(print-list (itoa n))
+```
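+
+To see why the digits come out most-significant-first, here’s a schematic trace of `(itoa 123)`, writing `c1`, `c2`, `c3` as shorthand for `(+ zero-char digit)`:
+
+```
+  (itoa 123)
+-> (recurse 123 nil)
+-> (recurse 12 (cons c3 nil))
+-> (recurse 1 (cons c2 (cons c3 nil)))
+-> (recurse 0 (cons c1 (cons c2 (cons c3 nil))))
+-> (cons c1 (cons c2 (cons c3 nil)))    # the bytes for "123"
+```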
+
+### Fizz buzz
+
+“Here we go,” you shout at the building you just got kicked out of, “here’s how you do fizz buzz.”
+
+First, we need to define three strings: “Fizz”, “Buzz”, and “Fizzbuzz”.
+
+```
+fizzmsg = (cons (num 0 7 0) # F
+ (cons (num 1 0 5) # i
+ (cons (num 1 2 2) # z
+ (cons (num 1 2 2) # z
+ nil))))
+buzzmsg = (cons (num 0 6 6) # B
+ (cons (num 1 1 7) # u
+ (cons (num 1 2 2) # z
+ (cons (num 1 2 2) # z
+ nil))))
+fizzbuzzmsg = (cons (num 0 7 0) # F
+ (cons (num 1 0 5) # i
+ (cons (num 1 2 2) # z
+ (cons (num 1 2 2) # z
+ (cons (num 0 9 8) # b
+ (cons (num 1 1 7) # u
+ (cons (num 1 2 2) # z
+ (cons (num 1 2 2) # z
+ nil))))))))
+```
+
+Okay, now let’s define a function that will run from 0 to `n` and output numbers, fizzes, and buzzes:
+
+```
+fizzbuzz = λn.
+ (for n λi.
+ (do2
+ (if (zero? (% i 3))
+ λ_. (if (zero? (% i 5))
+ λ_. (print-list fizzbuzzmsg)
+ λ_. (print-list fizzmsg))
+ λ_. (if (zero? (% i 5))
+ λ_. (print-list buzzmsg)
+ λ_. (print-list (itoa i))))
+ (print-newline 0)))
+```
+
+Let’s do the first 20!
+
+```
+(fizzbuzz (num 0 2 0))
+```
+
+➡️️ [Try it out in your browser!][37]
+
+### Reverse a string
+
+“ENCORE!” you shout to no one as the last cars pull out of the company parking lot. Everyone’s gone home but this is your last night before the restraining order goes through.
+
+```
+reverse-list = λlist.(
+ (Y λrecurse.λold.λnew.
+ (if (nil? old)
+ λ_.new
+ λ_.(recurse (tail old) (cons (head old) new))))
+ list nil)
+```
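+
+Schematically, the helper walks `old` while consing each head onto `new`, so elements come back out in reverse order:
+
+```
+  (reverse-list (cons 1 (cons 2 nil)))
+-> (recurse (cons 1 (cons 2 nil)) nil)
+-> (recurse (cons 2 nil) (cons 1 nil))
+-> (recurse nil (cons 2 (cons 1 nil)))
+-> (cons 2 (cons 1 nil))                # 2, then 1
+```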
+
+➡️️ [Try it out in your browser!][38]
+
+### Sheepda
+
+As I mentioned, I wrote a lambda calculus interpreter called [Sheepda][35] for playing around. By itself it’s a fun read if you’re interested in learning more about how to write programming language interpreters. Lambda calculus is as simple of a language as you can make, so the interpreter is very simple itself!
+
+It’s written in Go and thanks to [GopherJS][39] it’s what powers the [web playground][40].
+
+There are some fun projects if someone’s interested in getting more involved. Using the library to prune lambda expression trees and simplify expressions if possible would be a start! I’m sure my fizz buzz implementation isn’t as minimal as it could be, and playing [code golf][41] with it would be pretty neat!
+
+Feel free to fork [Sheepda][35], star it, bop it, twist it, or even pull it!
+
+--------------------------------------------------------------------------------
+
+via: https://www.jtolio.com/2017/03/whiteboard-problems-in-pure-lambda-calculus
+
+作者:[jtolio.com][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.jtolio.com/
+[b]: https://github.com/lujun9972
+[1]: https://www.vivint.com/
+[2]: https://www.spacemonkey.com/
+[3]: https://en.wikipedia.org/wiki/Festivus
+[4]: https://twitter.com/aphyr
+[5]: https://aphyr.com/posts/340-acing-the-technical-interview
+[6]: https://en.wikipedia.org/wiki/Church_encoding
+[7]: https://en.wikipedia.org/wiki/Lambda_calculus
+[8]: https://en.wikipedia.org/wiki/Turing_completeness
+[9]: https://imranontech.com/2007/01/24/using-fizzbuzz-to-find-developers-who-grok-coding/
+[10]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBZmFsc2UlMkMlMjJvdXRwdXQlMjIlM0ElMjJvdXRwdXQlMjIlMkMlMjJjb2RlJTIyJTNBJTIyKCVDRSVCQlUuKCVDRSVCQlkuKCVDRSVCQnZvaWQuKCVDRSVCQjAuKCVDRSVCQnN1Y2MuKCVDRSVCQiUyQi4oJUNFJUJCKi4oJUNFJUJCMS4oJUNFJUJCMi4oJUNFJUJCMy4oJUNFJUJCNC4oJUNFJUJCNS4oJUNFJUJCNi4oJUNFJUJCNy4oJUNFJUJCOC4oJUNFJUJCOS4oJUNFJUJCMTAuKCVDRSVCQm51bS4oJUNFJUJCdHJ1ZS4oJUNFJUJCZmFsc2UuKCVDRSVCQmlmLiglQ0UlQkJub3QuKCVDRSVCQmFuZC4oJUNFJUJCb3IuKCVDRSVCQm1ha2UtcGFpci4oJUNFJUJCcGFpci1maXJzdC4oJUNFJUJCcGFpci1zZWNvbmQuKCVDRSVCQnplcm8lM0YuKCVDRSVCQnByZWQuKCVDRSVCQi0uKCVDRSVCQmVxJTNGLiglQ0UlQkIlMkYuKCVDRSVCQiUyNS4oJUNFJUJCbmlsLiglQ0UlQkJuaWwlM0YuKCVDRSVCQmNvbnMuKCVDRSVCQmNhci4oJUNFJUJCY2RyLiglQ0UlQkJkbzIuKCVDRSVCQmRvMy4oJUNFJUJCZG80LiglQ0UlQkJmb3IuKCVDRSVCQnByaW50LWJ5dGUuKCVDRSVCQnByaW50LWxpc3QuKCVDRSVCQnByaW50LW5ld2xpbmUuKCVDRSVCQnplcm8tYnl0ZS4oJUNFJUJCaXRvYS4oJUNFJUJCZml6em1zZy4oJUNFJUJCYnV6em1zZy4oJUNFJUJCZml6emJ1enptc2cuKCVDRSVCQmZpenpidXp6LihmaXp6YnV6eiUyMCgoKG51bSUyMDEpJTIwMCklMjAxKSklMjAlQ0UlQkJuLigoZm9yJTIwbiklMjAlQ0UlQkJpLigoZG8yJTIwKCgoaWYlMjAoemVybyUzRiUyMCgoJTI1JTIwaSklMjAzKSkpJTIwJUNFJUJCXy4oKChpZiUyMCh6ZXJvJTNGJTIwKCglMjUlMjBpKSUyMDUpKSklMjAlQ0UlQkJfLihwcmludC1saXN0JTIwZml6emJ1enptc2cpKSUyMCVDRSVCQl8uKHByaW50LWxpc3QlMjBmaXp6bXNnKSkpJTIwJUNFJUJCXy4oKChpZiUyMCh6ZXJvJTNGJTIwKCglMjUlMjBpKSUyMDUpKSklMjAlQ0UlQkJfLihwcmludC1saXN0JTIwYnV6em1zZykpJTIwJUNFJUJCXy4ocHJpbnQtbGlzdCUyMChpdG9hJTIwaSkpKSkpJTIwKHByaW50LW5ld2xpbmUlMjBuaWwpKSkpJTIwKChjb25zJTIwKCgobnVtJTIwMCklMjA3KSUyMDApKSUyMCgoY29ucyUyMCgoKG51bSUyMDEpJTIwMCklMjA1KSklMjAoKGNvbnMlMjAoKChudW0lMjAxKSUyMDIpJTIwMikpJTIwKChjb25zJTIwKCgobnVtJTIwMSklMjAyKSUyMDIpKSUyMCgoY29ucyUyMCgoKG51bSUyMDApJTIwOSklMjA4KSklMjAoKGNvbnMlMjAoKChudW0lMjAxKSUyMDEpJTIwNykpJTIwKChjb25zJTIwKCgobnVtJTIwMSklMjAyKSUyMDIpKSUyMCgoY29ucyUyMCgoKG51bSUyMDEpJTIwMiklMjAyKSklMjBuaWwpKSkpKSkpKSklMjAoKGNvbnMlMjAoKChudW0lMjAwKSUyMDYpJTIwNikpJTIwKChjb25zJTIwKCgobnVtJTIwMSklMjAxKSUyMDcpKSUyMCgoY29ucyUyMCgoKG51bSUyMDEpJTIwMiklMjAyKSklMjAoKGNvbnMlMjAoKChudW0lMjAxKSUyMDIpJTIwMikpJTIwbmlsKSkpKSklMjAoKGNvbnMlMjAoKChudW0lMjAwKSUyMDcpJTIwMCkpJTIwKChjb25zJTIwKCgobnVtJTIwMSklMjAwKSUyMDUpKSUyMCgoY29ucyUyMCgoKG51bSUyMDEpJTIwMiklMjAyKSklMjAoKGNvbnMlMjAoKChudW0lMjAxKSUyMDIpJTIwMikpJTIwbmlsKSkpKSklMjAlQ0UlQkJuLigoKFklMjAlQ0UlQkJyZWN1cnNlLiVDRSVCQm4uJUNFJUJCcmVzdWx0LigoKGlmJTIwKHplcm8lM0YlMjBuKSklMjAlQ0UlQkJfLigoKGlmJTIwKG5pbCUzRiUyMHJlc3VsdCkpJTIwJUNFJUJCXy4oKGNvbnMlMjB6ZXJvLWJ5dGUpJTIwbmlsKSklMjAlQ0UlQkJfLnJlc3VsdCkpJTIwJUNFJUJCXy4oKHJlY3Vyc2UlMjAoKCUyRiUyMG4pJTIwMTApKSUyMCgoY29ucyUyMCgoJTJCJTIwemVyby1ieXRlKSUyMCgoJTI1JTIwbiklMjAxMCkpKSUyMHJlc3VsdCkpKSklMjBuKSUyMG5pbCkpJTIwKCgobnVtJTIwMCklMjA0KSUyMDgpKSUyMCVDRSVCQl8uKHByaW50LWJ5dGUlMjAoKChudW0lMjAwKSUyMDEpJTIwMCkpKSUyMChZJTIwJUNFJUJCcmVjdXJzZS4lQ0UlQkJsLigoKGlmJTIwKG5pbCUzRiUyMGwpKSUyMCVDRSVCQl8udm9pZCklMjAlQ0UlQkJfLigoZG8yJTIwKHByaW50LWJ5dGUlMjAoY2FyJTIwbCkpKSUyMChyZWN1cnNlJTIwKGNkciUyMGwpKSkpKSklMjBQUklOVF9CWVRFKSUyMCVDRSVCQm4uJUNFJUJCZi4oKCgoWSUyMCVDRSVCQnJlY3Vyc2UuJUNFJUJCcmVtYWluaW5nLiVDRSVCQmN1cnJlbnQuJUNFJUJCZi4oKChpZiUyMCh6ZXJvJTNGJTIwcmVtYWluaW5nKSklMjAlQ0UlQkJfLnZvaWQpJTIwJUNFJUJCXy4oKGRvMiUyMChmJTIwY3VycmVudCkpJTIwKCgocmVjdXJzZSUyMChwcmVkJTIwcmVtYWluaW5nKSklMjAoc3VjYyUyMGN1cnJlbnQpKSUyMGYpKSkpJTIwbiklMjAwKSUyMGYpKSUyMCVDRSVCQmEuZG8zKSUyMCVDRSVCQmEuZG8yKSUyMCVDRSVCQmEuJUNFJUJCYi5iKSUyMCVDRSVCQmwuKHBhaXItc2Vjb25kJTIwKHBhaXItc2Vjb25kJTIwbCkpKSUyMCVDRSVCQmwuKHBhaXItZmlyc3QlMjAocGFpci1zZWNvbmQlMjBsKSkpJTIwJUNFJUJCZS4lQ0UlQkJsLigobWFrZS1wYWlyJTIwdHJ1ZSklMjAoKG1ha2UtcGFpciUyMGUpJTIwbCkpKSUyMCVDRSVCQmwuKG5vdCUyMChwYWlyLWZpcnN0JTIwbCkpKSUyMCgobWFrZS1wYWlyJTIwZmFsc2UpJTIwdm9pZCkpJTIwJUNFJUJCbS4lQ0UlQkJuLigoLSUyMG0pJTIwKCgqJTIwKCglMkYlMjBtKSUyMG4pKSUyMG4pKSklMjAoWSUyMCVDRSVCQiUyRi4lQ0UlQkJtLiVDRSVCQm4uKCgoaWYlMjAoKGVxJTNGJTIwbSklMjBuKSklMjAlQ0UlQkJfLjEpJTIwJUNFJUJCXy4oKChpZiUyMCh6ZXJvJTNGJTIwKCgtJTIwbSklMjBuKSkpJTIwJUNFJUJCXy4wKSUyMCVDRSVCQl8uKCglMkIlMjAxKSUyMCgoJTJGJTIwKCgtJTIwbSklMjBuKSklMjBuKSkpKSkpJTIwJUNFJUJCbS4lQ0UlQkJuLigoYW5kJTIwKHplcm8lM0YlMjAoKC0lMjBtKSUyMG4pKSklMjAoemVybyUzRiUyMCgoLSUyMG4pJTIwbSkpKSklMjAlQ0UlQkJtLiVDRSVCQm4uKChuJTIwcHJlZCklMjBtKSklMjAlQ0UlQkJuLigoKCVDRSVCQm4uJUNFJUJCZi4lQ0UlQkJ4LihwYWlyLXNlY29uZCUyMCgobiUyMCVDRSVCQnAuKChtYWtlLXBhaXIlMjAoZiUyMChwYWlyLWZpcnN0JTIwcCkpKSUyMChwYWlyLWZpcnN0JTIwcCkpKSUyMCgobWFrZS1wYWlyJTIweCklMjB4KSkpJTIwbiklMjBzdWNjKSUyMDApKSUyMCVDRSVCQm4uKChuJTIwJUNFJUJCXy5mYWxzZSklMjB0cnVlKSklMjAlQ0UlQkJwLihwJTIwZmFsc2UpKSUyMCVDRSVCQnAuKHAlMjB0cnVlKSklMjAlQ0UlQkJ4LiVDRSVCQnkuJUNFJUJCdC4oKHQlMjB4KSUyMHkpKSUyMCVDRSVCQmEuJUNFJUJCYi4oKGElMjB0cnVlKSUyMGIpKSUyMCVDRSVCQmEuJUNFJUJCYi4oKGElMjBiKSUyMGZhbHNlKSklMjAlQ0UlQkJwLiVDRSVCQnQuJUNFJUJCZi4oKHAlMjBmKSUyMHQpKSUyMCVDRSVCQnAuJUNFJUJCYS4lQ0UlQkJiLigoKHAlMjBhKSUyMGIpJTIwdm9pZCkpJTIwJUNFJUJCdC4lQ0UlQkJmLmYpJTIwJUNFJUJCdC4lQ0UlQkJmLnQpJTIwJUNFJUJCYS4lQ0UlQkJiLiVDRSVCQmMuKCglMkIlMjAoKCUyQiUyMCgoKiUyMCgoKiUyMDEwKSUyMDEwKSklMjBhKSklMjAoKColMjAxMCklMjBiKSkpJTIwYykpJTIwKHN1Y2MlMjA5KSklMjAoc3VjYyUyMDgpKSUyMChzdWNjJTIwNykpJTIwKHN1Y2MlMjA2KSklMjAoc3VjYyUyMDUpKSUyMChzdWNjJTIwNCkpJTIwKHN1Y2MlMjAzKSklMjAoc3VjYyUyMDIpKSUyMChzdWNjJTIwMSkpJTIwKHN1Y2MlMjAwKSklMjAlQ0UlQkJtLiVDRSVCQm4uJUNFJUJCeC4obSUyMChuJTIweCkpKSUyMCVDRSVCQm0uJUNFJUJCbi4lQ0UlQkJmLiVDRSVCQnguKCgoKG0lMjBzdWNjKSUyMG4pJTIwZiklMjB4KSklMjAlQ0UlQkJuLiVDRSVCQmYuJUNFJUJCeC4oZiUyMCgobiUyMGYpJTIweCkpKSUyMCVDRSVCQmYuJUNFJUJCeC54KSUyMCVDRSVCQnguKFUlMjBVKSklMjAoVSUyMCVDRSVCQmguJUNFJUJCZi4oZiUyMCVDRSVCQnguKCgoaCUyMGgpJTIwZiklMjB4KSkpKSUyMCVDRSVCQmYuKGYlMjBmKSklNUNuJTIyJTdE
+[11]: https://en.wikipedia.org/wiki/Currying
+[12]: https://en.wikipedia.org/wiki/Alonzo_Church
+[13]: https://en.wikipedia.org/wiki/Church_encoding#Church_numerals
+[14]: https://en.wikipedia.org/wiki/Peano_axioms#Arithmetic
+[15]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBZmFsc2UlMkMlMjJvdXRwdXQlMjIlM0ElMjJyZXN1bHQlMjIlMkMlMjJjb2RlJTIyJTNBJTIyMCUyMCUzRCUyMCVDRSVCQmYuJUNFJUJCeC54JTVDbjElMjAlM0QlMjAlQ0UlQkJmLiVDRSVCQnguKGYlMjB4KSU1Q24yJTIwJTNEJTIwJUNFJUJCZi4lQ0UlQkJ4LihmJTIwKGYlMjB4KSklNUNuc3VjYyUyMCUzRCUyMCVDRSVCQm4uJUNFJUJCZi4lQ0UlQkJ4LihmJTIwKChuJTIwZiklMjB4KSklNUNuJTVDbnRydWUlMjAlMjAlM0QlMjAlQ0UlQkJ0LiVDRSVCQmYudCU1Q25mYWxzZSUyMCUzRCUyMCVDRSVCQnQuJUNFJUJCZi5mJTVDbiU1Q256ZXJvJTNGJTIwJTNEJTIwJUNFJUJCbi4oKG4lMjAlQ0UlQkJfLmZhbHNlKSUyMHRydWUpJTVDbiU1Q24lMjMlMjB0cnklMjBjaGFuZ2luZyUyMHRoZSUyMG51bWJlciUyMHplcm8lM0YlMjBpcyUyMGNhbGxlZCUyMHdpdGglNUNuKHplcm8lM0YlMjAwKSU1Q24lNUNuJTIzJTIwdGhlJTIwb3V0cHV0JTIwd2lsbCUyMGJlJTIwJTVDJTIyJUNFJUJCdC4lQ0UlQkJmLnQlNUMlMjIlMjBmb3IlMjB0cnVlJTIwYW5kJTIwJTVDJTIyJUNFJUJCdC4lQ0UlQkJmLmYlNUMlMjIlMjBmb3IlMjBmYWxzZS4lMjIlN0Q=
+[16]: https://en.wikipedia.org/wiki/Eager_evaluation
+[17]: https://en.wikipedia.org/wiki/Lazy_evaluation
+[18]: https://en.wikipedia.org/wiki/Syntactic_sugar
+[19]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBZmFsc2UlMkMlMjJvdXRwdXQlMjIlM0ElMjJyZXN1bHQlMjIlMkMlMjJjb2RlJTIyJTNBJTIyMCUyMCUzRCUyMCVDRSVCQmYuJUNFJUJCeC54JTVDbjElMjAlM0QlMjAlQ0UlQkJmLiVDRSVCQnguKGYlMjB4KSU1Q24yJTIwJTNEJTIwJUNFJUJCZi4lQ0UlQkJ4LihmJTIwKGYlMjB4KSklNUNuMyUyMCUzRCUyMCVDRSVCQmYuJUNFJUJCeC4oZiUyMChmJTIwKGYlMjB4KSkpJTVDbnN1Y2MlMjAlM0QlMjAlQ0UlQkJuLiVDRSVCQmYuJUNFJUJCeC4oZiUyMCgobiUyMGYpJTIweCkpJTVDbiU1Q250cnVlJTIwJTIwJTNEJTIwJUNFJUJCdC4lQ0UlQkJmLnQlNUNuZmFsc2UlMjAlM0QlMjAlQ0UlQkJ0LiVDRSVCQmYuZiU1Q24lNUNuemVybyUzRiUyMCUzRCUyMCVDRSVCQm4uKChuJTIwJUNFJUJCXy5mYWxzZSklMjB0cnVlKSU1Q24lNUNuaWYlMjAlM0QlMjAlQ0UlQkJwLiVDRSVCQmEuJUNFJUJCYi4oKChwJTIwYSklMjBiKSUyMDApJTVDbmFuZCUyMCUzRCUyMCVDRSVCQmEuJUNFJUJCYi4oYSUyMGIlMjBmYWxzZSklNUNub3IlMjAlM0QlMjAlQ0UlQkJhLiVDRSVCQmIuKGElMjB0cnVlJTIwYiklNUNubm90JTIwJTNEJTIwJUNFJUJCcC4lQ0UlQkJ0LiVDRSVCQmYuKHAlMjBmJTIwdCklNUNuJTVDbiUyMyUyMHRyeSUyMGNoYW5naW5nJTIwdGhpcyUyMHVwISU1Q24oaWYlMjAob3IlMjAoemVybyUzRiUyMDEpJTIwKHplcm8lM0YlMjAwKSklNUNuJTIwJTIwJTIwJTIwJUNFJUJCXy4lMjAyJTVDbiUyMCUyMCUyMCUyMCVDRSVCQl8uJTIwMyklMjIlN0Q=
+[20]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBZmFsc2UlMkMlMjJvdXRwdXQlMjIlM0ElMjJyZXN1bHQlMjIlMkMlMjJjb2RlJTIyJTNBJTIyMCUyMCUzRCUyMCVDRSVCQmYuJUNFJUJCeC54JTVDbjElMjAlM0QlMjAlQ0UlQkJmLiVDRSVCQnguKGYlMjB4KSU1Q24yJTIwJTNEJTIwJUNFJUJCZi4lQ0UlQkJ4LihmJTIwKGYlMjB4KSklNUNuMyUyMCUzRCUyMCVDRSVCQmYuJUNFJUJCeC4oZiUyMChmJTIwKGYlMjB4KSkpJTVDbiU1Q250cnVlJTIwJTIwJTNEJTIwJUNFJUJCdC4lQ0UlQkJmLnQlNUNuZmFsc2UlMjAlM0QlMjAlQ0UlQkJ0LiVDRSVCQmYuZiU1Q24lNUNubWFrZS1wYWlyJTIwJTNEJTIwJUNFJUJCeC4lQ0UlQkJ5LiUyMCVDRSVCQmEuKGElMjB4JTIweSklNUNucGFpci1maXJzdCUyMCUzRCUyMCVDRSVCQnAuKHAlMjB0cnVlKSU1Q25wYWlyLXNlY29uZCUyMCUzRCUyMCVDRSVCQnAuKHAlMjBmYWxzZSklNUNuJTVDbiUyMyUyMHRyeSUyMGNoYW5naW5nJTIwdGhpcyUyMHVwISU1Q25wJTIwJTNEJTIwKG1ha2UtcGFpciUyMDIlMjAzKSU1Q24ocGFpci1zZWNvbmQlMjBwKSUyMiU3RA==
+[21]: https://en.wikipedia.org/wiki/Linked_list
+[22]: https://en.wikipedia.org/wiki/Lisp_%28programming_language%29
+[23]: https://en.wikipedia.org/wiki/CAR_and_CDR#Etymology
+[24]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBZmFsc2UlMkMlMjJvdXRwdXQlMjIlM0ElMjJyZXN1bHQlMjIlMkMlMjJjb2RlJTIyJTNBJTIyMCUyMCUzRCUyMCVDRSVCQmYuJUNFJUJCeC54JTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwMSUyMCUzRCUyMCVDRSVCQmYuJUNFJUJCeC4oZiUyMHgpJTIwJTIwJTIwJTIwJTIwMiUyMCUzRCUyMCVDRSVCQmYuJUNFJUJCeC4oZiUyMChmJTIweCkpJTIwJTIwJTIwJTIwMyUyMCUzRCUyMCVDRSVCQmYuJUNFJUJCeC4oZiUyMChmJTIwKGYlMjB4KSkpJTVDbnRydWUlMjAlMjAlM0QlMjAlQ0UlQkJ0LiVDRSVCQmYudCUyMCUyMCUyMCUyMGZhbHNlJTIwJTNEJTIwJUNFJUJCdC4lQ0UlQkJmLmYlNUNuJTVDbm1ha2UtcGFpciUyMCUzRCUyMCVDRSVCQnguJUNFJUJCeS4lMjAlQ0UlQkJhLihhJTIweCUyMHkpJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwcGFpci1maXJzdCUyMCUzRCUyMCVDRSVCQnAuKHAlMjB0cnVlKSUyMCUyMCUyMCUyMCUyMHBhaXItc2Vjb25kJTIwJTNEJTIwJUNFJUJCcC4ocCUyMGZhbHNlKSU1Q24lNUNubmlsJTIwJTNEJTIwKG1ha2UtcGFpciUyMGZhbHNlJTIwZmFsc2UpJTIwJTIwJTIwJTIwJTIwbmlsJTNGJTIwJTNEJTIwJUNFJUJCbC4lMjAobm90JTIwKHBhaXItZmlyc3QlMjBsKSklNUNuY29ucyUyMCUzRCUyMCVDRSVCQnZhbHVlLiVDRSVCQmxpc3QuKG1ha2UtcGFpciUyMHRydWUlMjAobWFrZS1wYWlyJTIwdmFsdWUlMjBsaXN0KSklNUNuJTVDbmhlYWQlMjAlM0QlMjAlQ0UlQkJsaXN0LiUyMChwYWlyLWZpcnN0JTIwKHBhaXItc2Vjb25kJTIwbGlzdCkpJTVDbnRhaWwlMjAlM0QlMjAlQ0UlQkJsaXN0LiUyMChwYWlyLXNlY29uZCUyMChwYWlyLXNlY29uZCUyMGxpc3QpKSU1Q24lNUNuJTIzJTIwdHJ5JTIwY2hhbmdpbmclMjB0aGlzJTIwdXAhJTVDbmwlMjAlM0QlMjAoY29ucyUyMDElMjAoY29ucyUyMDIlMjAoY29ucyUyMDMlMjBuaWwpKSklNUNuKGhlYWQlMjAodGFpbCUyMGwpKSUyMiU3RA==
+[25]: https://en.wikipedia.org/wiki/Recursion
+[26]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBdHJ1ZSUyQyUyMm91dHB1dCUyMiUzQSUyMm91dHB1dCUyMiUyQyUyMmNvZGUlMjIlM0ElMjJzdW0lMjAlM0QlMjAlQ0UlQkJoZWxwZXIuJUNFJUJCbGlzdC4lNUNuJTIwJTIwKGlmJTIwKG5pbCUzRiUyMGxpc3QpJTVDbiUyMCUyMCUyMCUyMCUyMCUyMCVDRSVCQl8uJTIwMCU1Q24lMjAlMjAlMjAlMjAlMjAlMjAlQ0UlQkJfLiUyMCglMkIlMjAoaGVhZCUyMGxpc3QpJTIwKGhlbHBlciUyMGhlbHBlciUyMCh0YWlsJTIwbGlzdCkpKSklNUNuJTVDbnJlc3VsdCUyMCUzRCUyMChzdW0lMjBzdW0lMjAoY29ucyUyMDElMjAoY29ucyUyMDIlMjAoY29ucyUyMDMlMjBuaWwpKSkpJTVDbiU1Q24lMjMlMjB3ZSdsbCUyMGV4cGxhaW4lMjBob3clMjBwcmludC1udW0lMjB3b3JrcyUyMGxhdGVyJTJDJTIwYnV0JTIwd2UlMjBuZWVkJTIwaXQlMjB0byUyMHNob3clMjB0aGF0JTIwc3VtJTIwaXMlMjB3b3JraW5nJTVDbihwcmludC1udW0lMjByZXN1bHQpJTIyJTdE
+[27]: https://en.wikipedia.org/wiki/Fixed-point_combinator#Fixed_point_combinators_in_lambda_calculus
+[28]: https://www.ycombinator.com/
+[29]: https://news.ycombinator.com/
+[30]: http://matt.might.net/articles/implementation-of-recursive-fixed-point-y-combinator-in-javascript-for-memoization/
+[31]: http://kestas.kuliukas.com/YCombinatorExplained/
+[32]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBdHJ1ZSUyQyUyMm91dHB1dCUyMiUzQSUyMm91dHB1dCUyMiUyQyUyMmNvZGUlMjIlM0ElMjJZJTIwJTNEJTIwJUNFJUJCZi4oJUNFJUJCeC4oeCUyMHgpJTIwJUNFJUJCeC4oZiUyMCVDRSVCQnkuKCh4JTIweCklMjB5KSkpJTVDbiU1Q25zdW0lMjAlM0QlMjAoWSUyMCVDRSVCQmhlbHBlci4lQ0UlQkJsaXN0LiU1Q24lMjAlMjAoaWYlMjAobmlsJTNGJTIwbGlzdCklNUNuJTIwJTIwJTIwJTIwJTIwJTIwJUNFJUJCXy4lMjAwJTVDbiUyMCUyMCUyMCUyMCUyMCUyMCVDRSVCQl8uJTIwKCUyQiUyMChoZWFkJTIwbGlzdCklMjAoaGVscGVyJTIwKHRhaWwlMjBsaXN0KSkpKSklNUNuJTVDbiUyMyUyMHdlJ2xsJTIwZXhwbGFpbiUyMGhvdyUyMHRoaXMlMjB3b3JrcyUyMGxhdGVyJTJDJTIwYnV0JTIwd2UlMjBuZWVkJTIwaXQlMjB0byUyMHNob3clMjB0aGF0JTIwc3VtJTIwaXMlMjB3b3JraW5nJTVDbnByaW50LW51bSUyMCUzRCUyMCVDRSVCQm4uKHByaW50LWxpc3QlMjAoaXRvYSUyMG4pKSU1Q24lNUNuKHByaW50LW51bSUyMChzdW0lMjAoY29ucyUyMDElMjAoY29ucyUyMDIlMjAoY29ucyUyMDMlMjBuaWwpKSkpKSUyMiU3RA
+[33]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBdHJ1ZSUyQyUyMm91dHB1dCUyMiUzQSUyMm91dHB1dCUyMiUyQyUyMmNvZGUlMjIlM0ElMjIwJTIwJTNEJTIwJUNFJUJCZi4lQ0UlQkJ4LnglNUNuMSUyMCUzRCUyMCVDRSVCQmYuJUNFJUJCeC4oZiUyMHgpJTVDbjIlMjAlM0QlMjAlQ0UlQkJmLiVDRSVCQnguKGYlMjAoZiUyMHgpKSU1Q24zJTIwJTNEJTIwJUNFJUJCZi4lQ0UlQkJ4LihmJTIwKGYlMjAoZiUyMHgpKSklNUNuJTVDbnByZWQlMjAlM0QlMjAlQ0UlQkJuLiU1Q24lMjAlMjAocGFpci1zZWNvbmQlNUNuJTIwJTIwJTIwJTIwKG4lNUNuJTIwJTIwJTIwJTIwJTIwJUNFJUJCcGFpci4obWFrZS1wYWlyJTIwKHN1Y2MlMjAocGFpci1maXJzdCUyMHBhaXIpKSUyMChwYWlyLWZpcnN0JTIwcGFpcikpJTVDbiUyMCUyMCUyMCUyMCUyMChtYWtlLXBhaXIlMjAwJTIwMCkpKSU1Q24lNUNuJTIzJTIwd2UnbGwlMjBleHBsYWluJTIwaG93JTIwcHJpbnQtbnVtJTIwd29ya3MlMjBsYXRlciElNUNuKHByaW50LW51bSUyMChwcmVkJTIwMykpJTVDbiUyMiU3RA==
+[34]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBdHJ1ZSUyQyUyMm91dHB1dCUyMiUzQSUyMm91dHB1dCUyMiUyQyUyMmNvZGUlMjIlM0ElMjIlMkIlMjAlM0QlMjAlQ0UlQkJtLiVDRSVCQm4uKG0lMjBzdWNjJTIwbiklNUNuKiUyMCUzRCUyMCVDRSVCQm0uJUNFJUJCbi4obiUyMCglMkIlMjBtKSUyMDApJTVDbi0lMjAlM0QlMjAlQ0UlQkJtLiVDRSVCQm4uKG4lMjBwcmVkJTIwbSklNUNuJTJGJTIwJTNEJTIwKFklMjAlQ0UlQkIlMkYuJUNFJUJCbS4lQ0UlQkJuLiU1Q24lMjAlMjAoaWYlMjAoZXElM0YlMjBtJTIwbiklNUNuJTIwJTIwJTIwJTIwJTIwJTIwJUNFJUJCXy4lMjAxJTVDbiUyMCUyMCUyMCUyMCUyMCUyMCVDRSVCQl8uJTIwKGlmJTIwKGxlJTNGJTIwbSUyMG4pJTVDbiUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCVDRSVCQl8uJTIwMCU1Q24lMjAlMjAlMjAlMjAlMjAlMjAlMjAlMjAlMjAlMjAlMjAlMjAlMjAlMjAlQ0UlQkJfLiUyMCglMkIlMjAxJTIwKCUyRiUyMCgtJTIwbSUyMG4pJTIwbikpKSkpJTVDbiUyNSUyMCUzRCUyMCVDRSVCQm0uJUNFJUJCbi4lMjAoLSUyMG0lMjAoKiUyMCglMkYlMjBtJTIwbiklMjBuKSklNUNuJTVDbihwcmludC1udW0lMjAoJTI1JTIwNyUyMDMpKSUyMiU3RA==
+[35]: https://github.com/jtolds/sheepda/
+[36]: https://en.wikipedia.org/wiki/ASCII#Code_chart
+[37]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBdHJ1ZSUyQyUyMm91dHB1dCUyMiUzQSUyMm91dHB1dCUyMiUyQyUyMmNvZGUlMjIlM0ElMjIlMjMlMjBkZWZpbmUlMjB0aGUlMjBtZXNzYWdlcyU1Q25maXp6bXNnJTIwJTNEJTIwKGNvbnMlMjAobnVtJTIwMCUyMDclMjAwKSUyMChjb25zJTIwKG51bSUyMDElMjAwJTIwNSklMjAoY29ucyUyMChudW0lMjAxJTIwMiUyMDIpJTIwKGNvbnMlMjAobnVtJTIwMSUyMDIlMjAyKSUyMG5pbCkpKSklNUNuYnV6em1zZyUyMCUzRCUyMChjb25zJTIwKG51bSUyMDAlMjA2JTIwNiklMjAoY29ucyUyMChudW0lMjAxJTIwMSUyMDcpJTIwKGNvbnMlMjAobnVtJTIwMSUyMDIlMjAyKSUyMChjb25zJTIwKG51bSUyMDElMjAyJTIwMiklMjBuaWwpKSkpJTVDbmZpenpidXp6bXNnJTIwJTNEJTIwKGNvbnMlMjAobnVtJTIwMCUyMDclMjAwKSUyMChjb25zJTIwKG51bSUyMDElMjAwJTIwNSklMjAoY29ucyUyMChudW0lMjAxJTIwMiUyMDIpJTIwKGNvbnMlMjAobnVtJTIwMSUyMDIlMjAyKSU1Q24lMjAlMjAlMjAlMjAoY29ucyUyMChudW0lMjAwJTIwOSUyMDgpJTIwKGNvbnMlMjAobnVtJTIwMSUyMDElMjA3KSUyMChjb25zJTIwKG51bSUyMDElMjAyJTIwMiklMjAoY29ucyUyMChudW0lMjAxJTIwMiUyMDIpJTIwbmlsKSkpKSkpKSklNUNuJTVDbiUyMyUyMGZpenpidXp6JTVDbmZpenpidXp6JTIwJTNEJTIwJUNFJUJCbi4lNUNuJTIwJTIwKGZvciUyMG4lMjAlQ0UlQkJpLiU1Q24lMjAlMjAlMjAlMjAoZG8yJTVDbiUyMCUyMCUyMCUyMCUyMCUyMChpZiUyMCh6ZXJvJTNGJTIwKCUyNSUyMGklMjAzKSklNUNuJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJUNFJUJCXy4lMjAoaWYlMjAoemVybyUzRiUyMCglMjUlMjBpJTIwNSkpJTVDbiUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCVDRSVCQl8uJTIwKHByaW50LWxpc3QlMjBmaXp6YnV6em1zZyklNUNuJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJUNFJUJCXy4lMjAocHJpbnQtbGlzdCUyMGZpenptc2cpKSU1Q24lMjAlMjAlMjAlMjAlMjAlMjAlMjAlMjAlMjAlMjAlQ0UlQkJfLiUyMChpZiUyMCh6ZXJvJTNGJTIwKCUyNSUyMGklMjA1KSklNUNuJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJUNFJUJCXy4lMjAocHJpbnQtbGlzdCUyMGJ1enptc2cpJTVDbiUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCVDRSVCQl8uJTIwKHByaW50LWxpc3QlMjAoaXRvYSUyMGkpKSkpJTVDbiUyMCUyMCUyMCUyMCUyMCUyMChwcmludC1uZXdsaW5lJTIwbmlsKSkpJTVDbiU1Q24lMjMlMjBydW4lMjBmaXp6YnV6eiUyMDIwJTIwdGltZXMlNUNuKGZpenpidXp6JTIwKG51bSUyMDAlMjAyJTIwMCkpJTIyJTdE
+[38]: https://jtolds.github.io/sheepda/#JTdCJTIyc3RkbGliJTIyJTNBdHJ1ZSUyQyUyMm91dHB1dCUyMiUzQSUyMm91dHB1dCUyMiUyQyUyMmNvZGUlMjIlM0ElMjJoZWxsby13b3JsZCUyMCUzRCUyMChjb25zJTIwKG51bSUyMDAlMjA3JTIwMiklMjAoY29ucyUyMChudW0lMjAxJTIwMCUyMDEpJTIwKGNvbnMlMjAobnVtJTIwMSUyMDAlMjA4KSUyMChjb25zJTIwKG51bSUyMDElMjAwJTIwOCklMjAoY29ucyUyMChudW0lMjAxJTIwMSUyMDEpJTVDbiUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMChjb25zJTIwKG51bSUyMDAlMjA0JTIwNCklMjAoY29ucyUyMChudW0lMjAwJTIwMyUyMDIpJTIwKGNvbnMlMjAobnVtJTIwMSUyMDElMjA5KSUyMChjb25zJTIwKG51bSUyMDElMjAxJTIwMSklMjAoY29ucyUyMChudW0lMjAxJTIwMSUyMDQpJTVDbiUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMCUyMChjb25zJTIwKG51bSUyMDElMjAwJTIwOCklMjAoY29ucyUyMChudW0lMjAxJTIwMCUyMDApJTIwKGNvbnMlMjAobnVtJTIwMCUyMDMlMjAzKSUyMG5pbCkpKSkpKSkpKSkpKSklNUNuJTVDbnJldmVyc2UtbGlzdCUyMCUzRCUyMCVDRSVCQmxpc3QuKCU1Q24lMjAlMjAoWSUyMCVDRSVCQnJlY3Vyc2UuJUNFJUJCb2xkLiVDRSVCQm5ldy4lNUNuJTIwJTIwJTIwJTIwKGlmJTIwKG5pbCUzRiUyMG9sZCklNUNuJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJUNFJUJCXy5uZXclNUNuJTIwJTIwJTIwJTIwJTIwJTIwJTIwJTIwJUNFJUJCXy4ocmVjdXJzZSUyMCh0YWlsJTIwb2xkKSUyMChjb25zJTIwKGhlYWQlMjBvbGQpJTIwbmV3KSkpKSU1Q24lMjAlMjBsaXN0JTIwbmlsKSU1Q24lNUNuKGRvNCU1Q24lMjAlMjAocHJpbnQtbGlzdCUyMGhlbGxvLXdvcmxkKSU1Q24lMjAlMjAocHJpbnQtbmV3bGluZSUyMHZvaWQpJTVDbiUyMCUyMChwcmludC1saXN0JTIwKHJldmVyc2UtbGlzdCUyMGhlbGxvLXdvcmxkKSklNUNuJTIwJTIwKHByaW50LW5ld2xpbmUlMjB2b2lkKSklMjIlN0Q=
+[39]: https://github.com/gopherjs/gopherjs
+[40]: https://jtolds.github.io/sheepda/
+[41]: https://en.wikipedia.org/wiki/Code_golf
diff --git a/sources/tech/20180319 How to not be a white male asshole, by a former offender.md b/sources/tech/20180319 How to not be a white male asshole, by a former offender.md
new file mode 100644
index 0000000000..3478787ea1
--- /dev/null
+++ b/sources/tech/20180319 How to not be a white male asshole, by a former offender.md
@@ -0,0 +1,153 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to not be a white male asshole, by a former offender)
+[#]: via: (https://www.jtolio.com/2018/03/how-to-not-be-a-white-male-asshole-by-a-former-offender)
+[#]: author: (jtolio.com https://www.jtolio.com/)
+
+How to not be a white male asshole, by a former offender
+======
+
+_Huge thanks to Caitlin Jarvis for editing, contributing to, and proofreading this post._
+
+First off, let’s start with some assumptions. You, dear reader, don’t intend to cause anyone harm. You have good intentions, see yourself as a good person, and are interested in self-improvement. That’s great!
+
+Second, I don’t actually know for sure if I’m not still a current offender. I might be! It’s certainly something I’ll never be done working on.
+
+### 1\. You don’t know what others are going through
+
+Unfortunately, your good intentions are not enough to make sure the experiences of others are, in fact, good, because we live in a world of asymmetric information. If another person’s dog just died unbeknownst to you and you start talking excitedly about how great dogs are to try to cheer them up, you may end up making them even sadder. You know things other people don’t, and others know things you don’t.
+
+So when I say that if you are a white man, there is an invisible world of experiences happening all around you that you are inherently blind to, it’s because of asymmetric information. You can’t know what others are going through because you are not an impartial observer of a system. _You exist within the system._
+
+![][1]
+
+Let me show you what I mean: did you know a recent survey found that _[81 percent of women have experienced sexual harassment of some kind][2]_? Fully 1 out of every 2 women you know has had to deal specifically with _unwanted sexual touching_.
+
+What should have been most amazing about the [#MeToo movement][3] was not how many women reported harassment, but how many men were surprised.
+
+### 2\. You can inadvertently contribute to a racist, sexist, or prejudiced society
+
+I [previously wrote a lot about how small little interactions can add up][4], illustrating that even if you don’t intend to subject someone to racism, sexism, or some other prejudice, you might be doing it anyway. Intentions are meaningless when your actions amplify the negative experience of someone else.
+
+An example from [Maisha Johnson in Everyday Feminism][5]:
+
+> Black women deal with people touching our hair a lot. Now you know. Okay, there’s more to it than that: Black women deal with people touching our hair a _hell_ of a lot.
+>
+> If you approach a Black woman saying “I just have to feel your hair,” it’s pretty safe to assume this isn’t the first time she’s heard that.
+>
+> Everyone who asks me if they can touch follows a long line of people othering me – including strangers who touch my hair without asking. The psychological impact of having people constantly feel entitled to my personal space has worn me down.
+
+Another example is that men frequently demand proof. Even though it makes sense in general to check your sources for something, the predominant response of men when confronted with claims of sexist treatment is to [ask for evidence][6]. Because this happens so frequently, this action _itself_ contributes to the sexist subjugation of women. The parallel universe women live in is so distinct from the experiences of men that men can’t believe their ears, and treat the report of a victim with skepticism.
+
+As you might imagine, this sort of effect is not limited to asking women for evidence or hair touching. Microaggressions are real and everywhere; the accumulation of lots of small things can be enormous.
+
+If you’re someone in charge of building things, this can be even more important and an even greater responsibility. If you build an app that is blind to the experiences of people who don’t look or act like you, you can significantly amplify negative experiences for others by causing systemic and system-wide issues.
+
+### 3\. The only way to stop contributing is to continually listen to others
+
+If you don’t already know what others are going through, and by not knowing what others are going through you may be subjecting them to prejudice even if you don’t mean to, what can you do to help others avoid prejudice? You can listen to them! People who are experiencing prejudice _don’t want to be experiencing prejudice_ and tend to be vocal about the experience. It is your job to really listen and then turn around and change the way you approach these situations in the future.
+
+### 4\. How do I listen?
+
+To listen to someone, you need to have empathy. You need to actually care about them. You need to process what they’re saying and not treat them with suspicion.
+
+Listening is very different from interjecting and arguing. Listening to others is different from making them do the work to educate you. It is your job to find the experiences of others you haven’t had and learn from them without demanding a curriculum.
+
+When people say you should just believe marginalized people, [no one is asking you to check your critical thinking at the door][7]. What you’re being asked to do is to be aware that your incredulity is a further reminder that you are not experiencing the same thing. Worse - white men acting incredulous is _so unbelievably common_ that it itself is a microaggression. Don’t be a sea lion:
+
+![][8]
+
+#### Aside about diversity of experience vs. diversity of thought.
+
+When trying to find others to listen to, who should you find? Recently, a growing number of people have argued that all diversity really requires is different viewpoints, and that diversity of thought is the ultimate goal.
+
+I want to point out that this is not the kind of diversity that will be useful to you. It’s easy to have a bunch of different opinions and then reject them when they complicate your life. What you want to be listening to is diversity of _experience_. Some experiences can’t be chosen. You can choose to be contrarian, but you can’t choose the color of your skin.
+
+### 5\. Where do I listen?
+
+What you need is a way to be a fly on the wall and observe the life experiences of others through their words and perspectives. Being friends and hanging out with people who are different from you is great. Getting out of monocultures is fantastic. Holding your company to diversity and inclusion initiatives is wonderful.
+
+But if you still need more or you live somewhere like Utah?
+
+What if there was a website where people from all walks of life opted in to talking about their day and what they’re feeling and experiencing from their viewpoint in a way you could read? It’d be almost like seeing the world through their eyes.
+
+Yep, this blog post is an unsolicited Twitter ad. Twitter definitely has its share of problems, but after [writing about how I finally figured out Twitter][9], in 2014 I decided to embark on a year-long effort to use Twitter (I wasn’t really using it before) to follow mostly women or people of color in my field and just see what the field is like for them on a day to day basis.
+
+Listening to others in this way blew my mind clean open. Suddenly I was aware of this invisible world around me, much of which is still invisible. Now, I’m looking for it, and I catch glimpses. I would challenge anyone and everyone to do this. Make sure the content you’re consuming is predominantly viewpoints from life experiences you haven’t had.
+
+If you need a start, here are some links to accounts to fill your Twitter feed up with:
+
+ * [200 Women of Color in Tech on Twitter][10]
+ * [Women Engineers on Twitter][11]
+
+
+
+You can also check out [who I follow][12], though I should warn you that I also follow a lot of political accounts and joke accounts, and my following someone is not an endorsement.
+
+It’s also worth pointing out that no individual can possibly speak for an entire class of people, but if 38 out of 50 women are saying they’re dealing with something, you should listen.
+
+### 6\. Does this work?
+
+Listening to others works, but you don’t have to just take my word for it. Here are two specific and recent experience reports of people changing their worldview for the better by listening to others:
+
+ * [A professor at the University of New Brunswick][13]
+ * [A senior design developer at Microsoft][14]
+
+
+
+You can see what a profound and fast impact this had on me: by early 2015, only a few months into my Twitter experiment, I was worked up enough to write [my unicycle post][4] in response to what I was reading on Twitter.
+
+Having diverse perspectives in a workplace has even been shown to [increase productivity][15] and [increase creativity][16].
+
+### 7\. Don’t stop there!
+
+Not everyone is as growth-oriented as you. Just because you’re listening now doesn’t mean others are hearing the same distribution of experiences.
+
+If this is new to you, it’s not new to marginalized people. Imagine how tired they must be of trying to convince everyone that their experiences are real, valid, and ongoing. Help get the word out! Repeat and retweet what women and minorities say. Give them credit. In meetings at your work, give credit to others for their ideas and amplify their voices.
+
+Did you know that [non-white or female bosses who push diversity are judged negatively by their peers and managers][17] but white male bosses are not? If you’re a white male, use your position where others can’t.
+
+If you need an example list of things your company can do, [here’s a list Susan Fowler wrote after her experience at Uber][18].
+
+Speak up, use your experiences to help others.
+
+### 8\. Am I not prejudiced now?
+
+The asymmetry of experiences we all have means we’re all inherently prejudiced to some degree and will likely continue to contribute to a prejudiced society. That said, the first step to fixing it is admitting it!
+
+There will always be work to do. You will always need to keep listening, keep learning, and work to improve every day.
+
+--------------------------------------------------------------------------------
+
+via: https://www.jtolio.com/2018/03/how-to-not-be-a-white-male-asshole-by-a-former-offender
+
+作者:[jtolio.com][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.jtolio.com/
+[b]: https://github.com/lujun9972
+[1]: https://www.jtolio.com/images/mrmouse.jpg
+[2]: https://www.npr.org/sections/thetwo-way/2018/02/21/587671849/a-new-survey-finds-eighty-percent-of-women-have-experienced-sexual-harassment
+[3]: https://en.wikipedia.org/wiki/Me_Too_movement
+[4]: https://www.jtolio.com/2015/03/what-riding-a-unicycle-can-teach-us-about-microaggressions/
+[5]: https://everydayfeminism.com/2015/09/dont-touch-black-womens-hair/
+[6]: https://twitter.com/ArielDumas/status/970692180766490630
+[7]: https://www.elle.com/culture/career-politics/a13977980/me-too-movement-false-accusations-believe-women/
+[8]: https://www.jtolio.com/images/sealion.png
+[9]: https://www.jtolio.com/2009/03/i-finally-figured-out-twitter/
+[10]: http://peopleofcolorintech.com/articles/a-list-of-200-women-of-color-on-twitter/
+[11]: https://github.com/ryanburgess/female-engineers-twitter
+[12]: https://twitter.com/jtolds/following
+[13]: https://www.theglobeandmail.com/opinion/ill-start-2018-by-recognizing-my-white-privilege/article37472875/
+[14]: https://micahgodbolt.com/blog/changing-your-worldview/
+[15]: http://edis.ifas.ufl.edu/hr022
+[16]: https://faculty.insead.edu/william-maddux/documents/PSPB-learning-paper.pdf
+[17]: https://digest.bps.org.uk/2017/07/12/non-white-or-female-bosses-who-push-diversity-are-judged-negatively-by-their-peers-and-managers/
+[18]: https://www.susanjfowler.com/blog/2017/5/20/five-things-tech-companies-can-do-better
diff --git a/sources/tech/20180507 Multinomial Logistic Classification.md b/sources/tech/20180507 Multinomial Logistic Classification.md
new file mode 100644
index 0000000000..01fb7b2e90
--- /dev/null
+++ b/sources/tech/20180507 Multinomial Logistic Classification.md
@@ -0,0 +1,215 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Multinomial Logistic Classification)
+[#]: via: (https://www.jtolio.com/2018/05/multinomial-logistic-classification)
+[#]: author: (jtolio.com https://www.jtolio.com/)
+
+Multinomial Logistic Classification
+======
+
+_This article was originally a problem I wrote for a coding competition I hosted, Vivint’s 2017 Game of Codes (now offline). The goal of this problem was not only to be a fun challenge but also to teach contestants almost everything they needed to know to build a neural network from scratch. I thought it might be neat to revive it on my site! If machine learning still sounds scary and foreign to you, you should feel much more at ease after working through this problem. I left out the details of [back-propagation][1], and a single-layer neural network isn’t really a neural network, but in this problem you can learn how to train and run a complete model! There’s lots of maybe scary-looking math but honestly if you can [multiply matrices][2] you should be fine._
+
+In this problem, you’re going to build and train a machine learning model… from scratch! Don’t be intimidated - it will be much easier than it sounds!
+
+### What is machine learning?
+
+_Machine learning_ is a broad and growing range of topics, but essentially the idea is to teach the computer how to find patterns in large amounts of data, then use those patterns to make predictions. Surprisingly, the techniques that have been developed allow computers to translate languages, drive cars, recognize cats, synthesize voice, understand your music tastes, cure diseases, and even adjust your thermostat!
+
+You might be surprised to learn that since about 2010, the entire artificial intelligence and machine learning community has reorganized around a surprisingly small and common toolbox for all of these problems. So, let’s dive into this toolbox!
+
+### Classification
+
+One of the most fundamental ways of solving problems in machine learning is by recasting problems as _classification_ problems. In other words, if you can describe a problem as data that needs labels, you can use machine learning!
+
+Machine learning will go through a phase of _training_, where data and existing labels are provided to the system. As a motivating example, imagine you have a large collection of photos that either contain hot dogs or don’t. Some of your photos have already been labeled as containing a hot dog or not; for the rest, we want to build a system that will automatically label them “hotdog” or “nothotdog.” During training, we attempt to build a model of what exactly the essence of each label is. In this case, we will run all of our existing labeled photos through the system so it can learn what makes a hot dog a hot dog.
+
+After training, we run the unseen photos through the model and use the model to generate classifications. If you provide a new photo to your hotdog/nothotdog model, your model should be able to tell you if the photo contains a hot dog, assuming your model had a good training data set and was able to capture the core concept of what a hot dog is.
+
+Many different types of problems can be described as classification problems. As an example, perhaps you want to predict which word comes next in a sequence. Given four words, a classifier can label them as “likely the fourth word follows the first three” or “not likely.” Alternatively, the classification label for three words could be the most likely word to follow those three.
+
+### How I learned to stop worrying and love multinomial logistic classification
+
+Okay, let’s do the simplest thing we can think of to take input data and classify it.
+
+Let’s imagine the data we want to classify is a big list of values. If what we have is a 16 by 16 pixel picture, we’re going to put all the pixels in one big row so we have 256 pixel values in a row. So we’ll say \\(\mathbf{x}\\) is a vector in 256 dimensions, and each dimension is a pixel value.
+
+We have two labels, “hotdog” and “nothotdog.” Just like any other machine learning system, our system will never be 100% confident with a classification, so we will need to output confidence probabilities. The output of our system will be a two-dimensional vector, \\(\mathbf{p}\\). \\(p_0\\) will represent the probability that the input should be labeled “hotdog” and \\(p_1\\) will represent the probability that the input should be labeled “nothotdog.”
+
+How do we take a vector in 256 (or \\(\dim(\mathbf{x})\\)) dimensions and make something in just 2 (or \\(\dim(\mathbf{p})\\)) dimensions? Why, [matrix multiplication][2] of course! If you have a matrix with 2 rows and 256 columns, multiplying it by a 256-dimensional vector will result in a 2-dimensional one.
+
+Surprisingly, this is actually really close to the final construction of our classifier, but there are two problems:
+
+ 1. If one of the input \\(\mathbf{x}\\)s is all zeros, the output will have to be all zeros too. But we need one of the output dimensions to be nonzero!
+ 2. There’s nothing guaranteeing the probabilities in the output will be non-negative and all sum to 1.
+
+
+
+The first problem is easy: we add a bias vector \\(\mathbf{b}\\), turning our matrix multiplication into a standard linear equation of the form \\(\mathbf{W}\cdot\mathbf{x}+\mathbf{b}=\mathbf{y}\\).
+
+The second problem can be solved by using the [softmax function][3]. For a given vector \\(\mathbf{v}\\), softmax is defined as:
+
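+\\[\text{softmax}(\mathbf{v})_i = \frac{e^{v_i}}{\sum_{j=0}^{n-1} e^{v_j}}\\]
+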
+In case the \\(\sum\\) scares you, \\(\sum_{j=0}^{n-1}\\) is basically a math “for loop.” All it’s saying is that we’re going to add together everything that comes after it (\\(e^{v_j}\\)) for every \\(j\\) value from 0 to \\(n-1\\).
+
+Softmax is a neat function! The output will be a vector where the largest dimension in the input will be the closest number to 1, no dimensions will be less than zero, and all dimensions sum to 1. Here are some examples:
+
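+For example, \\(\text{softmax}\left(\left[0, 0\right]\right) = \left[0.5, 0.5\right]\\), and \\(\text{softmax}\left(\left[1, 2, 3\right]\right) \approx \left[0.090, 0.245, 0.665\right]\\): the largest input claims the largest share, and everything sums to 1.
+
+In case code reads easier than math, here’s a small sketch of softmax in Go (the language choice and names are mine, not part of the original problem):
+
+```
+package main
+
+import (
+	"fmt"
+	"math"
+)
+
+// softmax maps any vector to a probability distribution:
+// every entry is positive and all entries sum to 1.
+func softmax(v []float64) []float64 {
+	out := make([]float64, len(v))
+	sum := 0.0
+	for i, x := range v {
+		out[i] = math.Exp(x)
+		sum += out[i]
+	}
+	for i := range out {
+		out[i] /= sum
+	}
+	return out
+}
+
+func main() {
+	fmt.Println(softmax([]float64{0, 0}))    // [0.5 0.5]
+	fmt.Println(softmax([]float64{1, 2, 3})) // ≈ [0.090 0.245 0.665]
+}
+```
+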
+Unbelievably, these are all the building blocks you need for a linear model! Let’s put all the blocks together. If you already have \\(\mathbf{W}\cdot\mathbf{x}+\mathbf{b}=\mathbf{y}\\), your prediction \\(\mathbf{p}\\) can be found as \\(\text{softmax}\left(\mathbf{y}\right)\\). More fully, given an input \\(\mathbf{x}\\) and a trained model \\(\left(\mathbf{W},\mathbf{b}\right)\\), your prediction \\(\mathbf{p}\\) is:
+
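+\\[\mathbf{p} = \text{softmax}\left(\mathbf{W}\cdot\mathbf{x}+\mathbf{b}\right)\\]
+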
+Once again, in this context, \\(p_0\\) is the probability given the model that the input should be labeled “hotdog” and \\(p_1\\) is the probability given the model that the input should be labeled “nothotdog.”
+
+It’s kind of amazing that a linear model such as this one is all you need for good success, even with things as complex as handwriting recognition.
+
+### Scoring
+
+How do we find \\(\mathbf{W}\\) and \\(\mathbf{b}\\)? It might surprise you but we’re going to start off by guessing some random numbers and then changing them until we aren’t predicting things too badly (via a process known as [gradient descent][4]). But what does “too badly” mean?
+
+Recall that we have data that we’ve already labeled. We already have photos labeled “hotdog” and “nothotdog” in what’s called our _training set_. For each photo, we’re going to take whatever our current model is (\\(\mathbf{W}\\) and \\(\mathbf{b}\\)) and find \\(\mathbf{p}\\). Perhaps for one photo (that really is of a hot dog) our \\(\mathbf{p}\\) looks like this:
+
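+\\[\mathbf{p} = \left[0.4, 0.6\right]\\]
+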
+This isn’t great! Our model says that the photo should be labeled “nothotdog” with 60% probability, but it is a hot dog.
+
+We need a bit more terminology. So far, we’ve only talked about one sample, one label, and one prediction at a time, but obviously we have lots of samples, lots of labels, and lots of predictions, and we want to score how our model does not just on one sample, but on all of our training samples. Assume we have \\(s\\) training samples, each sample has \\(d\\) dimensions, and there are \\(l\\) labels. In the case of our 16 by 16 pixel hot dog photos, \\(d = 256\\) and \\(l = 2\\). We’ll refer to sample \\(i\\) as \\(\mathbf{x}^{(i)}\\), our prediction for sample \\(i\\) as \\(\mathbf{p}^{(i)}\\), and the correct label vector for sample \\(i\\) as \\(\mathbf{L}^{(i)}\\). \\(\mathbf{L}^{(i)}\\) is a vector that is all zeros except for the dimension corresponding to the correct label, where that dimension is a 1. In other words, we have \\(\mathbf{p}^{(i)} = \text{softmax}\left(\mathbf{W}\cdot\mathbf{x}^{(i)}+\mathbf{b}\right)\\), and we want \\(\mathbf{p}^{(i)}\\) to be as close to \\(\mathbf{L}^{(i)}\\) as possible, for all \\(s\\) samples.
+
+To score our model, we’re going to compute something called the _average cross entropy loss_. In general, [loss][5] is used to mean how off the mark a machine learning model is. While there are many ways of calculating loss, we’re going to use average [cross entropy][6] because it has some nice properties.
+
+Here’s the definition of the average cross entropy loss across all samples:
+
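+\\[\text{loss} = -\frac{1}{s}\sum_{i=0}^{s-1}\sum_{m=0}^{l-1} L_m^{(i)} \cdot \ln\left(p_m^{(i)}\right)\\]
+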
+All we need to do is find \\(\mathbf{W}\\) and \\(\mathbf{b}\\) that make this loss smallest. How do we do that?
+
+### Training
+
+As we said before, we will start \\(\mathbf{W}\\) and \\(\mathbf{b}\\) off with random values. For each value, choose a floating-point random number between -1 and 1.
+
+Of course, we’ll need to correct these values given the training data, and we now have enough information to describe how we will back-propagate corrections.
+
+The plan is to process all of the training data enough times that the loss drops to an “acceptable level.” Each time through the training data we’ll collect all of the predictions, and at the end we’ll update \\(\mathbf{W}\\) and \\(\mathbf{b}\\) with the information we’ve found.
+
+One problem that can occur is that your model might overcorrect after each run. A simple way to limit overcorrection is to add a “learning rate”, usually designated \\(\alpha\\), which is some small fraction. You get to choose the learning rate! A good default choice for \\(\alpha\\) is 0.1.
+
+At the end of each run through all of the training data, here’s how you update \\(\mathbf{W}\\) and \\(\mathbf{b}\\):
+
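+\\[W_{m,n} \leftarrow W_{m,n} - \frac{\alpha}{s}\sum_{i=0}^{s-1}\left(p_m^{(i)} - L_m^{(i)}\right)\cdot x_n^{(i)}\\]
+
+\\[b_m \leftarrow b_m - \frac{\alpha}{s}\sum_{i=0}^{s-1}\left(p_m^{(i)} - L_m^{(i)}\right)\\]
+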
+Because this notation is starting to get out of hand, let’s review what each symbol means.
+
+ * \\(W_{m,n}\\) is the cell in weight matrix \\(\mathbf{W}\\) at row \\(m\\) and column \\(n\\).
+ * \\(b_m\\) is the \\(m\\)-th dimension in the “bias” vector \\(\mathbf{b}\\).
+ * \\(\alpha\\) is again your learning rate, 0.1, and \\(s\\) is how many training samples you have.
+ * \\(x_n^{(i)}\\) is the \\(n\\)-th dimension of sample \\(i\\).
+ * Likewise, \\(p_m^{(i)}\\) and \\(L_m^{(i)}\\) are the \\(m\\)-th dimensions of our prediction and true labels for sample \\(i\\), respectively. Remember that for each sample \\(i\\), \\(L_m^{(i)}\\) is zero for all but the dimension corresponding to the correct label, where it is 1.
+
+
+
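+To make the procedure concrete, here’s a sketch in Go of one full pass over the training data, reusing the `softmax` function from the earlier sketch (the structure and names are my own, not a required implementation):
+
+```
+// trainEpoch does one pass over all of the training data and applies the
+// batch update to W and b. W is l×d, b has length l, each xs[i] has length
+// d, and each Ls[i] is the one-hot label vector for sample i.
+func trainEpoch(W [][]float64, b []float64, xs, Ls [][]float64, alpha float64) {
+	s := float64(len(xs))
+	l, d := len(W), len(W[0])
+
+	gradW := make([][]float64, l)
+	for m := range gradW {
+		gradW[m] = make([]float64, d)
+	}
+	gradB := make([]float64, l)
+
+	for i, x := range xs {
+		// Forward pass: p = softmax(W·x + b).
+		y := make([]float64, l)
+		for m := 0; m < l; m++ {
+			y[m] = b[m]
+			for n := 0; n < d; n++ {
+				y[m] += W[m][n] * x[n]
+			}
+		}
+		p := softmax(y)
+
+		// Accumulate the (p - L) terms from the update equations above.
+		for m := 0; m < l; m++ {
+			diff := p[m] - Ls[i][m]
+			gradB[m] += diff
+			for n := 0; n < d; n++ {
+				gradW[m][n] += diff * x[n]
+			}
+		}
+	}
+
+	// Apply the averaged corrections, scaled by the learning rate.
+	for m := 0; m < l; m++ {
+		b[m] -= alpha / s * gradB[m]
+		for n := 0; n < d; n++ {
+			W[m][n] -= alpha / s * gradW[m][n]
+		}
+	}
+}
+```
+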
+If you’re curious how we got these equations, we applied the [chain rule][7] to calculate partial derivatives of the total loss. It’s hairy, and this problem description is already too long!
+
+Anyway, once you’ve updated your \\(\mathbf{W}\\) and \\(\mathbf{b}\\), you start the whole process over!
+
+### When do we stop?
+
+Knowing when to stop is a hard problem. How low your loss goes is a function of your learning rate, how many iterations you run over your training data, and a huge number of other factors. On the flip side, if you train your model so your loss is too low, you run the risk of overfitting your model to your training data, so it won’t work well on data it hasn’t seen before.
+
+One of the more common ways of deciding when to [stop training][8] is to have a separate validation set of samples we check our success on and stop when we stop improving. But for this problem, to keep things simple what we’re going to do is just keep track of how our loss changes and stop when the loss stops changing as much.
+
+After the first 10 iterations, your loss will have changed 9 times (the very first iteration has no previous loss to compare against). Take the average of those 9 changes and stop training when your latest loss change is less than one hundredth of that average.
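+
+Here’s a sketch of that stopping rule in Go (again my own construction; it averages over all of the changes seen so far):
+
+```
+// stoppedImproving implements the stopping rule above: once we have at
+// least 10 losses, stop when the latest change is less than one hundredth
+// of the average change. losses holds the loss after each training pass.
+func stoppedImproving(losses []float64) bool {
+	if len(losses) < 10 {
+		return false
+	}
+	total := 0.0
+	for i := 1; i < len(losses); i++ {
+		total += math.Abs(losses[i] - losses[i-1])
+	}
+	avg := total / float64(len(losses)-1)
+	latest := math.Abs(losses[len(losses)-1] - losses[len(losses)-2])
+	return latest < avg/100
+}
+```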
+
+### Tie it all together
+
+Alright! If you’ve stuck with me this far, you’ve learned to implement a multinomial logistic classifier using gradient descent, [back-propagation][1], and [one-hot encoding][9]. Good job!
+
+You should now be able to write a program that takes labeled training samples, trains a model, then takes unlabeled test samples and predicts labels for them!
+
+### Your program
+
+As input your program should take vectors of floating-point values, followed by a label. Some of the labels will be question marks. Your program should output the correct label for all of the question marks it sees. The label your program should output will always be one it has seen training examples of.
+
+Your program will pass the tests if it labels 75% or more of the unlabeled data correctly.
+
+### Where to learn more
+
+If you want to learn more or dive deeper into optimizing your solution, you may be interested in the first section of [Udacity’s free course on Deep Learning][10], or [Dom Luma’s tutorial on building a mini-TensorFlow][11].
+
+### Example
+
+#### Input
+
+```
+ 0.93 -1.52 1.32 0.05 1.72 horse
+ 1.57 -1.74 0.92 -1.33 -0.68 staple
+ 0.18 1.24 -1.53 1.53 0.78 other
+ 1.96 -1.29 -1.50 -0.19 1.47 staple
+ 1.24 0.15 0.73 -0.22 1.15 battery
+ 1.41 -1.56 1.04 1.09 0.66 horse
+-0.70 -0.93 -0.18 0.75 0.88 horse
+ 1.12 -1.45 -1.26 -0.43 -0.05 staple
+ 1.89 0.21 -1.45 0.47 0.62 other
+-0.60 -1.87 0.82 -0.66 1.86 staple
+-0.80 -1.99 1.74 0.65 1.46 horse
+-0.03 1.35 0.11 -0.92 -0.04 battery
+-0.24 -0.03 0.58 1.32 -1.51 horse
+-0.60 -0.70 1.61 0.56 -0.66 horse
+ 1.29 -0.39 -1.57 -0.45 1.63 staple
+ 0.87 1.59 -1.61 -1.79 1.47 battery
+ 1.86 1.92 0.83 -0.34 1.06 battery
+-1.09 -0.81 1.47 1.82 0.06 horse
+-0.99 -1.00 -1.45 -1.02 -1.06 staple
+-0.82 -0.56 0.82 0.79 -1.02 horse
+-1.86 0.77 -0.58 0.82 -1.94 other
+ 0.15 1.18 -0.87 0.78 2.00 other
+ 1.18 0.79 1.08 -1.65 -0.73 battery
+ 0.37 1.78 0.01 0.06 -0.50 other
+-0.35 0.31 1.18 -1.83 -0.57 battery
+ 0.91 1.14 -1.85 0.39 0.07 other
+-1.61 0.28 -0.31 0.93 0.77 other
+-0.11 -1.75 -1.66 -1.55 -0.79 staple
+ 0.05 1.03 -0.23 1.49 1.66 other
+-1.99 0.43 -0.99 1.72 0.52 other
+-0.30 0.40 -0.70 0.51 0.07 other
+-0.54 1.92 -1.13 -1.53 1.73 battery
+-0.52 0.44 -0.84 -0.11 0.10 battery
+-1.00 -1.82 -1.19 -0.67 -1.18 staple
+-1.81 0.10 -1.64 -1.47 -1.86 battery
+-1.77 0.53 -1.28 0.55 -1.15 other
+ 0.29 -0.28 -0.41 0.70 1.80 horse
+-0.91 0.02 1.60 -1.44 -1.89 battery
+ 1.24 -0.42 -1.30 -0.80 -0.54 staple
+-1.98 -1.15 0.54 -0.14 -1.24 staple
+ 1.26 -1.02 -1.08 -1.27 1.65 ?
+ 1.97 1.14 0.51 0.96 -0.36 ?
+ 0.99 0.14 -0.97 -1.90 -0.87 ?
+ 1.54 -1.83 1.59 1.98 -0.41 ?
+-1.81 0.34 -0.83 0.90 -1.60 ?
+```
+
+#### Output
+
+```
+staple
+other
+battery
+horse
+other
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.jtolio.com/2018/05/multinomial-logistic-classification
+
+作者:[jtolio.com][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.jtolio.com/
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Backpropagation
+[2]: https://en.wikipedia.org/wiki/Matrix_multiplication
+[3]: https://en.wikipedia.org/wiki/Softmax_function
+[4]: https://en.wikipedia.org/wiki/Gradient_descent
+[5]: https://en.wikipedia.org/wiki/Loss_function
+[6]: https://en.wikipedia.org/wiki/Cross_entropy
+[7]: https://en.wikipedia.org/wiki/Chain_rule
+[8]: https://en.wikipedia.org/wiki/Early_stopping
+[9]: https://en.wikipedia.org/wiki/One-hot
+[10]: https://classroom.udacity.com/courses/ud730
+[11]: https://nbviewer.jupyter.org/github/domluna/labs/blob/master/Build%20Your%20Own%20TensorFlow.ipynb
diff --git a/sources/tech/20180705 Building a Messenger App- Schema.md b/sources/tech/20180705 Building a Messenger App- Schema.md
new file mode 100644
index 0000000000..39b9bf97c2
--- /dev/null
+++ b/sources/tech/20180705 Building a Messenger App- Schema.md
@@ -0,0 +1,114 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a Messenger App: Schema)
+[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-schema/)
+[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
+
+Building a Messenger App: Schema
+======
+
+New post on building a messenger app. You already know this kind of app; it lets you have conversations with your friends. [Facebook Messenger][1], [WhatsApp][2] and [Skype][3] are a few examples. Though those apps let you send pictures, stream video, record audio, chat with large groups of people, and so on, we’ll keep it simple and just send text messages between two users.
+
+We’ll use [CockroachDB][4] as the SQL database, [Go][5] as the backend language, and JavaScript to make a web app.
+
+In this first post, we’ll work through the database design.
+
+```
+CREATE TABLE users (
+ id SERIAL NOT NULL PRIMARY KEY,
+ username STRING NOT NULL UNIQUE,
+ avatar_url STRING,
+ github_id INT NOT NULL UNIQUE
+);
+```
+
+Of course, this app requires users. We’ll go with social login. I chose just [GitHub][6], so we keep a reference to the GitHub user ID there.
+
+```
+CREATE TABLE conversations (
+ id SERIAL NOT NULL PRIMARY KEY,
+ last_message_id INT,
+ INDEX (last_message_id DESC)
+);
+```
+
+Each conversation references the last message. Every time we insert a new message, we’ll go and update this field. (I’ll add the foreign key constraint below).
+
+You could instead find the last message by grouping messages per conversation, but that would add much more complexity to the queries.
+
+```
+CREATE TABLE participants (
+ user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
+ conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
+ messages_read_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ PRIMARY KEY (user_id, conversation_id)
+);
+```
+
+Even though I said conversations will be between just two users, we’ll go with a design that allows adding multiple participants to a conversation later. That’s why we have a participants table between conversations and users.
+
+To know whether the user has unread messages, we have the `messages_read_at` field. Every time the user reads messages in a conversation, we update this value, so we can compare it with the `created_at` field of the conversation’s last message.
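+
+To make that comparison concrete, here’s a rough sketch of what such a check could look like from Go in a later part (assuming the `database/sql` handle we’ll set up then; this isn’t code from this post):
+
+```
+// hasUnreadMessages reports whether the conversation's last message was
+// created after the given participant last read it.
+func hasUnreadMessages(db *sql.DB, userID, conversationID string) (bool, error) {
+	var unread bool
+	err := db.QueryRow(`
+		SELECT participants.messages_read_at < messages.created_at
+		FROM participants
+		INNER JOIN conversations ON participants.conversation_id = conversations.id
+		INNER JOIN messages ON conversations.last_message_id = messages.id
+		WHERE participants.user_id = $1 AND participants.conversation_id = $2
+	`, userID, conversationID).Scan(&unread)
+	return unread, err
+}
+```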
+
+```
+CREATE TABLE messages (
+ id SERIAL NOT NULL PRIMARY KEY,
+ content STRING NOT NULL,
+ user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
+ conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
+ created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+ INDEX(created_at DESC)
+);
+```
+
+Last but not least is the messages table: it saves a reference to the user who created the message and the conversation it belongs to. It has an index on `created_at` too, to sort messages.
+
+```
+ALTER TABLE conversations
+ADD CONSTRAINT fk_last_message_id_ref_messages
+FOREIGN KEY (last_message_id) REFERENCES messages ON DELETE SET NULL;
+```
+
+And yep, that’s the foreign key constraint I mentioned above.
+
+These four tables will do the trick. You can save these statements to a file and pipe them to the Cockroach CLI. First start a new node:
+
+```
+cockroach start --insecure --host 127.0.0.1
+```
+
+Then create the database and tables:
+
+```
+cockroach sql --insecure -e "CREATE DATABASE messenger"
+cat schema.sql | cockroach sql --insecure -d messenger
+```
+
+* * *
+
+That’s it. In the next part we’ll do the login. Wait for it.
+
+[Source Code][7]
+
+--------------------------------------------------------------------------------
+
+via: https://nicolasparada.netlify.com/posts/go-messenger-schema/
+
+作者:[Nicolás Parada][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://nicolasparada.netlify.com/
+[b]: https://github.com/lujun9972
+[1]: https://www.messenger.com/
+[2]: https://www.whatsapp.com/
+[3]: https://www.skype.com/
+[4]: https://www.cockroachlabs.com/
+[5]: https://golang.org/
+[6]: https://github.com/
+[7]: https://github.com/nicolasparada/go-messenger-demo
diff --git a/sources/tech/20180706 Building a Messenger App- OAuth.md b/sources/tech/20180706 Building a Messenger App- OAuth.md
new file mode 100644
index 0000000000..72f8c4e3f6
--- /dev/null
+++ b/sources/tech/20180706 Building a Messenger App- OAuth.md
@@ -0,0 +1,448 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a Messenger App: OAuth)
+[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-oauth/)
+[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
+
+Building a Messenger App: OAuth
+======
+
+[Previous part: Schema][1].
+
+In this post we start the backend by adding social login.
+
+This is how it works: the user clicks a link that redirects them to the GitHub authorization page. The user grants access to their info and gets redirected back, logged in. The next time they try to log in, they won’t be asked to grant permission again; it’s remembered, so the login flow is as fast as a single click.
+
+Internally, the story is more complex, though. First we need to register a new [OAuth app on GitHub][2].
+
+The important part is the callback URL. Set it to `http://localhost:3000/api/oauth/github/callback`. In development we are on localhost, so when you ship the app to production, register a separate app with the correct callback URL.
+
+This will give you a client id and a secret key. Don’t share them with anyone 👀
+
+With that out of the way, let’s start writing some code. Create a `main.go` file:
+
+```
+package main
+
+// context, encoding/json, strings and time are used by handlers we add
+// later in this post, and gonanoid generates the OAuth state below.
+import (
+	"context"
+	"database/sql"
+	"encoding/json"
+	"fmt"
+	"log"
+	"net/http"
+	"net/url"
+	"os"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/gorilla/securecookie"
+	"github.com/joho/godotenv"
+	"github.com/knq/jwt"
+	_ "github.com/lib/pq"
+	gonanoid "github.com/matoous/go-nanoid"
+	"github.com/matryer/way"
+	"golang.org/x/oauth2"
+	"golang.org/x/oauth2/github"
+)
+
+var origin *url.URL
+var db *sql.DB
+var githubOAuthConfig *oauth2.Config
+var cookieSigner *securecookie.SecureCookie
+var jwtSigner jwt.Signer
+
+func main() {
+ godotenv.Load()
+
+ port := intEnv("PORT", 3000)
+ originString := env("ORIGIN", fmt.Sprintf("http://localhost:%d/", port))
+ databaseURL := env("DATABASE_URL", "postgresql://root@127.0.0.1:26257/messenger?sslmode=disable")
+ githubClientID := os.Getenv("GITHUB_CLIENT_ID")
+ githubClientSecret := os.Getenv("GITHUB_CLIENT_SECRET")
+ hashKey := env("HASH_KEY", "secret")
+ jwtKey := env("JWT_KEY", "secret")
+
+ var err error
+ if origin, err = url.Parse(originString); err != nil || !origin.IsAbs() {
+ log.Fatal("invalid origin")
+ return
+ }
+
+ if i, err := strconv.Atoi(origin.Port()); err == nil {
+ port = i
+ }
+
+ if githubClientID == "" || githubClientSecret == "" {
+ log.Fatalf("remember to set both $GITHUB_CLIENT_ID and $GITHUB_CLIENT_SECRET")
+ return
+ }
+
+ if db, err = sql.Open("postgres", databaseURL); err != nil {
+ log.Fatalf("could not open database connection: %v\n", err)
+ return
+ }
+ defer db.Close()
+ if err = db.Ping(); err != nil {
+ log.Fatalf("could not ping to db: %v\n", err)
+ return
+ }
+
+ githubRedirectURL := *origin
+ githubRedirectURL.Path = "/api/oauth/github/callback"
+ githubOAuthConfig = &oauth2.Config{
+ ClientID: githubClientID,
+ ClientSecret: githubClientSecret,
+ Endpoint: github.Endpoint,
+ RedirectURL: githubRedirectURL.String(),
+ Scopes: []string{"read:user"},
+ }
+
+ cookieSigner = securecookie.New([]byte(hashKey), nil).MaxAge(0)
+
+ jwtSigner, err = jwt.HS256.New([]byte(jwtKey))
+ if err != nil {
+ log.Fatalf("could not create JWT signer: %v\n", err)
+ return
+ }
+
+ router := way.NewRouter()
+ router.HandleFunc("GET", "/api/oauth/github", githubOAuthStart)
+ router.HandleFunc("GET", "/api/oauth/github/callback", githubOAuthCallback)
+ router.HandleFunc("GET", "/api/auth_user", guard(getAuthUser))
+
+ log.Printf("accepting connections on port %d\n", port)
+ log.Printf("starting server at %s\n", origin.String())
+ addr := fmt.Sprintf(":%d", port)
+ if err = http.ListenAndServe(addr, router); err != nil {
+ log.Fatalf("could not start server: %v\n", err)
+ }
+}
+
+func env(key, fallbackValue string) string {
+ v, ok := os.LookupEnv(key)
+ if !ok {
+ return fallbackValue
+ }
+ return v
+}
+
+func intEnv(key string, fallbackValue int) int {
+ v, ok := os.LookupEnv(key)
+ if !ok {
+ return fallbackValue
+ }
+ i, err := strconv.Atoi(v)
+ if err != nil {
+ return fallbackValue
+ }
+ return i
+}
+```
+
+Install dependencies:
+
+```
+go get -u github.com/gorilla/securecookie
+go get -u github.com/joho/godotenv
+go get -u github.com/knq/jwt
+go get -u github.com/lib/pq
+go get -u github.com/matoous/go-nanoid
+go get -u github.com/matryer/way
+go get -u golang.org/x/oauth2
+```
+
+We use a `.env` file to save secret keys and other configurations. Create it with at least this content:
+
+```
+GITHUB_CLIENT_ID=your_github_client_id
+GITHUB_CLIENT_SECRET=your_github_client_secret
+```
+
+The other environment variables we use are:
+
+ * `PORT`: The port in which the server runs. Defaults to `3000`.
+ * `ORIGIN`: Your domain. Defaults to `http://localhost:3000/`. The port can also be extracted from this.
+ * `DATABASE_URL`: The Cockroach address. Defaults to `postgresql://root@127.0.0.1:26257/messenger?sslmode=disable`.
+ * `HASH_KEY`: Key to sign cookies. Yeah, we’ll use signed cookies for security.
+ * `JWT_KEY`: Key to sign JSON web tokens.
+
+
+
+Because they have default values, you don’t need to write them in the `.env` file.
+
+After reading the configuration and connecting to the database, we create an OAuth config. We use the origin to build the callback URL (the same one we registered on the GitHub page). And we set the scope to “read:user”, which gives us permission to read public user info; we just need the username and avatar. Then we initialize the cookie and JWT signers, define some endpoints, and start the server.
+
+Before implementing those HTTP handlers, let’s write a couple of helper functions to send HTTP responses.
+
+```
+func respond(w http.ResponseWriter, v interface{}, statusCode int) {
+ b, err := json.Marshal(v)
+ if err != nil {
+ respondError(w, fmt.Errorf("could not marshal response: %v", err))
+ return
+ }
+ w.Header().Set("Content-Type", "application/json; charset=utf-8")
+ w.WriteHeader(statusCode)
+ w.Write(b)
+}
+
+func respondError(w http.ResponseWriter, err error) {
+ log.Println(err)
+ http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
+}
+```
+
+The first one marshals `v` to JSON and sends it with the given status code; the second one logs the error to the console and returns a `500 Internal Server Error`.
+
+### OAuth Start
+
+So, the user clicks on a link that says “Access with GitHub”… That link points to this endpoint, `/api/oauth/github`, which will redirect the user to GitHub.
+
+```
+func githubOAuthStart(w http.ResponseWriter, r *http.Request) {
+ state, err := gonanoid.Nanoid()
+ if err != nil {
+		respondError(w, fmt.Errorf("could not generate state: %v", err))
+ return
+ }
+
+ stateCookieValue, err := cookieSigner.Encode("state", state)
+ if err != nil {
+ respondError(w, fmt.Errorf("could not encode state cookie: %v", err))
+ return
+ }
+
+ http.SetCookie(w, &http.Cookie{
+ Name: "state",
+ Value: stateCookieValue,
+ Path: "/api/oauth/github",
+ HttpOnly: true,
+ })
+ http.Redirect(w, r, githubOAuthConfig.AuthCodeURL(state), http.StatusTemporaryRedirect)
+}
+```
+
+OAuth2 uses a mechanism to prevent CSRF attacks, so it requires a “state”. We use nanoid (the `gonanoid` package we installed) to create a random string and use that as the state. We also save it as a signed cookie so we can verify it in the callback.
+
+### OAuth Callback
+
+Once the user grants access to their info on the GitHub page, they will be redirected to this endpoint. The URL comes with the state and a code in the query string: `/api/oauth/github/callback?state=&code=`
+
+```
+const jwtLifetime = time.Hour * 24 * 14
+
+type GithubUser struct {
+ ID int `json:"id"`
+ Login string `json:"login"`
+ AvatarURL *string `json:"avatar_url,omitempty"`
+}
+
+type User struct {
+ ID string `json:"id"`
+ Username string `json:"username"`
+ AvatarURL *string `json:"avatarUrl"`
+}
+
+func githubOAuthCallback(w http.ResponseWriter, r *http.Request) {
+ stateCookie, err := r.Cookie("state")
+ if err != nil {
+ http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
+ return
+ }
+
+ http.SetCookie(w, &http.Cookie{
+ Name: "state",
+ Value: "",
+ MaxAge: -1,
+ HttpOnly: true,
+ })
+
+ var state string
+ if err = cookieSigner.Decode("state", stateCookie.Value, &state); err != nil {
+ http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
+ return
+ }
+
+ q := r.URL.Query()
+
+ if state != q.Get("state") {
+ http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
+ return
+ }
+
+ ctx := r.Context()
+
+ t, err := githubOAuthConfig.Exchange(ctx, q.Get("code"))
+ if err != nil {
+ respondError(w, fmt.Errorf("could not fetch github token: %v", err))
+ return
+ }
+
+ client := githubOAuthConfig.Client(ctx, t)
+ resp, err := client.Get("https://api.github.com/user")
+ if err != nil {
+ respondError(w, fmt.Errorf("could not fetch github user: %v", err))
+ return
+ }
+	// Close the body even if decoding fails below.
+	defer resp.Body.Close()
+
+	var githubUser GithubUser
+	if err = json.NewDecoder(resp.Body).Decode(&githubUser); err != nil {
+		respondError(w, fmt.Errorf("could not decode github user: %v", err))
+		return
+	}
+
+ tx, err := db.BeginTx(ctx, nil)
+ if err != nil {
+ respondError(w, fmt.Errorf("could not begin tx: %v", err))
+ return
+	}
+	// Roll back on any early return; this is a no-op after a successful commit.
+	defer tx.Rollback()
+
+ var user User
+ if err = tx.QueryRowContext(ctx, `
+ SELECT id, username, avatar_url FROM users WHERE github_id = $1
+ `, githubUser.ID).Scan(&user.ID, &user.Username, &user.AvatarURL); err == sql.ErrNoRows {
+ if err = tx.QueryRowContext(ctx, `
+ INSERT INTO users (username, avatar_url, github_id) VALUES ($1, $2, $3)
+ RETURNING id
+ `, githubUser.Login, githubUser.AvatarURL, githubUser.ID).Scan(&user.ID); err != nil {
+ respondError(w, fmt.Errorf("could not insert user: %v", err))
+ return
+ }
+ user.Username = githubUser.Login
+ user.AvatarURL = githubUser.AvatarURL
+ } else if err != nil {
+ respondError(w, fmt.Errorf("could not query user by github ID: %v", err))
+ return
+ }
+
+ if err = tx.Commit(); err != nil {
+ respondError(w, fmt.Errorf("could not commit to finish github oauth: %v", err))
+ return
+ }
+
+ exp := time.Now().Add(jwtLifetime)
+ token, err := jwtSigner.Encode(jwt.Claims{
+ Subject: user.ID,
+ Expiration: json.Number(strconv.FormatInt(exp.Unix(), 10)),
+ })
+ if err != nil {
+ respondError(w, fmt.Errorf("could not create token: %v", err))
+ return
+ }
+
+ expiresAt, _ := exp.MarshalText()
+
+ data := make(url.Values)
+ data.Set("token", string(token))
+ data.Set("expires_at", string(expiresAt))
+
+ http.Redirect(w, r, "/callback?"+data.Encode(), http.StatusTemporaryRedirect)
+}
+```
+
+First we decode the cookie with the state we saved before and compare it with the state that comes in the query string. If they don’t match, we return a `418 I'm a Teapot` error.
+
+Then we exchange the code for a token. This token is used to create an HTTP client to make requests to the GitHub API. So we do a GET request to `https://api.github.com/user`. This endpoint will give us the current authenticated user info in JSON format. We decode it to get the user ID, login (username) and avatar URL.
+
+Then we try to find a user with that GitHub ID on the database. If none is found, we create one using that data.
+
+Then, with the found or newly created user, we issue a JSON web token with the user ID as the Subject and redirect to the frontend with the token, alongside the expiration date in the query string.
+
+The web app will be for another post, but the URL you are redirected to is `/callback?token=&expires_at=`. There we’ll have some JavaScript to extract the token and expiration date from the URL and do a GET request to `/api/auth_user` with the token in the `Authorization` header in the form of `Bearer token_here` to get the authenticated user and save it to localStorage.
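+
+Until that frontend exists, you can exercise the endpoint with any HTTP client. Here’s a minimal sketch in Go (reading the token from a `TOKEN` environment variable is just my convention for this example):
+
+```
+package main
+
+import (
+	"io"
+	"log"
+	"net/http"
+	"os"
+)
+
+func main() {
+	// Paste the token from the /callback redirect into this variable.
+	token := os.Getenv("TOKEN")
+
+	req, err := http.NewRequest("GET", "http://localhost:3000/api/auth_user", nil)
+	if err != nil {
+		log.Fatal(err)
+	}
+	req.Header.Set("Authorization", "Bearer "+token)
+
+	resp, err := http.DefaultClient.Do(req)
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer resp.Body.Close()
+
+	// The response body is the authenticated user as JSON.
+	io.Copy(os.Stdout, resp.Body)
+}
+```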
+
+### Guard Middleware
+
+To get the current authenticated user we use a middleware. That’s because in future posts we’ll have more endpoints that require authentication, and a middleware allows us to share that functionality.
+
+```
+type ContextKey struct {
+ Name string
+}
+
+var keyAuthUserID = ContextKey{"auth_user_id"}
+
+func guard(handler http.HandlerFunc) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ var token string
+ if a := r.Header.Get("Authorization"); strings.HasPrefix(a, "Bearer ") {
+ token = a[7:]
+ } else if t := r.URL.Query().Get("token"); t != "" {
+ token = t
+ } else {
+ http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
+ return
+ }
+
+ var claims jwt.Claims
+ if err := jwtSigner.Decode([]byte(token), &claims); err != nil {
+ http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
+ return
+ }
+
+ ctx := r.Context()
+ ctx = context.WithValue(ctx, keyAuthUserID, claims.Subject)
+
+ handler(w, r.WithContext(ctx))
+ }
+}
+```
+
+First we try to read the token from the `Authorization` header or from a `token` in the URL query string. If none is found, we return a `401 Unauthorized` error. Then we decode the claims in the token and use the Subject as the current authenticated user ID.
+
+Now, we can wrap any `http.HandlerFunc` that needs authentication with this middleware, and we’ll have the authenticated user ID in the context.
+
+```
+var guarded = guard(func(w http.ResponseWriter, r *http.Request) {
+	authUserID := r.Context().Value(keyAuthUserID).(string)
+	fmt.Fprintln(w, authUserID) // use the ID; otherwise Go complains about an unused variable
+})
+```
+
+### Get Authenticated User
+
+```
+func getAuthUser(w http.ResponseWriter, r *http.Request) {
+ ctx := r.Context()
+ authUserID := ctx.Value(keyAuthUserID).(string)
+
+ var user User
+ if err := db.QueryRowContext(ctx, `
+ SELECT username, avatar_url FROM users WHERE id = $1
+ `, authUserID).Scan(&user.Username, &user.AvatarURL); err == sql.ErrNoRows {
+ http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
+ return
+ } else if err != nil {
+ respondError(w, fmt.Errorf("could not query auth user: %v", err))
+ return
+ }
+
+ user.ID = authUserID
+
+ respond(w, user, http.StatusOK)
+}
+```
+
+We use the guard middleware to get the current authenticated user ID and query the database for that user.
+
+* * *
+
+That will cover the OAuth process on the backend. In the next part we’ll see how to start conversations with other users.
+
+[Source Code][3]
+
+--------------------------------------------------------------------------------
+
+via: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
+
+作者:[Nicolás Parada][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://nicolasparada.netlify.com/
+[b]: https://github.com/lujun9972
+[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
+[2]: https://github.com/settings/applications/new
+[3]: https://github.com/nicolasparada/go-messenger-demo
diff --git a/sources/tech/20180708 Building a Messenger App- Conversations.md b/sources/tech/20180708 Building a Messenger App- Conversations.md
new file mode 100644
index 0000000000..6789d1d4a1
--- /dev/null
+++ b/sources/tech/20180708 Building a Messenger App- Conversations.md
@@ -0,0 +1,351 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a Messenger App: Conversations)
+[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-conversations/)
+[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
+
+Building a Messenger App: Conversations
+======
+
+This post is the 3rd in a series:
+
+ * [Part 1: Schema][1]
+ * [Part 2: OAuth][2]
+
+
+
+In our messenger app, messages are grouped into conversations between two participants. You start a conversation by providing the user you want to chat with; the conversation is created (if it doesn’t exist already) and you can start sending messages to it.
+
+On the front-end we’re interested in showing a list of the latest conversations, each with its last message and the name and avatar of the other participant.
+
+In this post, we’ll code the endpoints to start a conversation, list the latest and find a single one.
+
+Inside the `main()` function add these routes.
+
+```
+router.HandleFunc("POST", "/api/conversations", requireJSON(guard(createConversation)))
+router.HandleFunc("GET", "/api/conversations", guard(getConversations))
+router.HandleFunc("GET", "/api/conversations/:conversationID", guard(getConversation))
+```
+
+These three endpoints require authentication, so we use the `guard()` middleware. There is also a new middleware that checks that the request content type is JSON.
+
+### Require JSON Middleware
+
+```
+func requireJSON(handler http.HandlerFunc) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ if ct := r.Header.Get("Content-Type"); !strings.HasPrefix(ct, "application/json") {
+ http.Error(w, "Content type of application/json required", http.StatusUnsupportedMediaType)
+ return
+ }
+ handler(w, r)
+ }
+}
+```
+
+If the request isn’t JSON, it responds with a `415 Unsupported Media Type` error.
+
+### Create Conversation
+
+```
+type Conversation struct {
+ ID string `json:"id"`
+ OtherParticipant *User `json:"otherParticipant"`
+ LastMessage *Message `json:"lastMessage"`
+ HasUnreadMessages bool `json:"hasUnreadMessages"`
+}
+```
+
+So, a conversation holds a reference to the other participant and the last message. It also has a bool field that tells whether it has unread messages.
+
+```
+type Message struct {
+ ID string `json:"id"`
+ Content string `json:"content"`
+ UserID string `json:"-"`
+ ConversationID string `json:"conversationID,omitempty"`
+ CreatedAt time.Time `json:"createdAt"`
+ Mine bool `json:"mine"`
+ ReceiverID string `json:"-"`
+}
+```
+
+Messages are for the next post, but I define the struct now since we are using it. Most of the fields are the same as in the database table. We have `Mine` to tell whether the message is owned by the current authenticated user, and `ReceiverID` will be used to filter messages once we add realtime capabilities.
+
+Let’s write the HTTP handler then. It’s quite long, but don’t be scared.
+
+```
+func createConversation(w http.ResponseWriter, r *http.Request) {
+ var input struct {
+ Username string `json:"username"`
+ }
+ defer r.Body.Close()
+ if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
+ http.Error(w, err.Error(), http.StatusBadRequest)
+ return
+ }
+
+ input.Username = strings.TrimSpace(input.Username)
+ if input.Username == "" {
+ respond(w, Errors{map[string]string{
+ "username": "Username required",
+ }}, http.StatusUnprocessableEntity)
+ return
+ }
+
+ ctx := r.Context()
+ authUserID := ctx.Value(keyAuthUserID).(string)
+
+ tx, err := db.BeginTx(ctx, nil)
+ if err != nil {
+ respondError(w, fmt.Errorf("could not begin tx: %v", err))
+ return
+ }
+ defer tx.Rollback()
+
+ var otherParticipant User
+ if err := tx.QueryRowContext(ctx, `
+ SELECT id, avatar_url FROM users WHERE username = $1
+ `, input.Username).Scan(
+ &otherParticipant.ID,
+ &otherParticipant.AvatarURL,
+ ); err == sql.ErrNoRows {
+ http.Error(w, "User not found", http.StatusNotFound)
+ return
+ } else if err != nil {
+ respondError(w, fmt.Errorf("could not query other participant: %v", err))
+ return
+ }
+
+ otherParticipant.Username = input.Username
+
+ if otherParticipant.ID == authUserID {
+		http.Error(w, "Try starting a conversation with someone else", http.StatusForbidden)
+ return
+ }
+
+ var conversationID string
+ if err := tx.QueryRowContext(ctx, `
+ SELECT conversation_id FROM participants WHERE user_id = $1
+ INTERSECT
+ SELECT conversation_id FROM participants WHERE user_id = $2
+ `, authUserID, otherParticipant.ID).Scan(&conversationID); err != nil && err != sql.ErrNoRows {
+ respondError(w, fmt.Errorf("could not query common conversation id: %v", err))
+ return
+ } else if err == nil {
+ http.Redirect(w, r, "/api/conversations/"+conversationID, http.StatusFound)
+ return
+ }
+
+ var conversation Conversation
+ if err = tx.QueryRowContext(ctx, `
+ INSERT INTO conversations DEFAULT VALUES
+ RETURNING id
+ `).Scan(&conversation.ID); err != nil {
+ respondError(w, fmt.Errorf("could not insert conversation: %v", err))
+ return
+ }
+
+ if _, err = tx.ExecContext(ctx, `
+ INSERT INTO participants (user_id, conversation_id) VALUES
+ ($1, $2),
+ ($3, $2)
+ `, authUserID, conversation.ID, otherParticipant.ID); err != nil {
+ respondError(w, fmt.Errorf("could not insert participants: %v", err))
+ return
+ }
+
+ if err = tx.Commit(); err != nil {
+ respondError(w, fmt.Errorf("could not commit tx to create conversation: %v", err))
+ return
+ }
+
+ conversation.OtherParticipant = &otherParticipant
+
+ respond(w, conversation, http.StatusCreated)
+}
+```
+
+For this endpoint you do a POST request to `/api/conversations` with a JSON body containing the username of the user you want to chat with.
+
+So first it decodes the request body into a struct with the username. Then it validates that the username is not empty.
+
+```
+type Errors struct {
+ Errors map[string]string `json:"errors"`
+}
+```
+
+This is the `Errors` struct. It’s just a map of validation errors. If you submit an empty username, you get this JSON with a `422 Unprocessable Entity` error:
+
+```
+{
+ "errors": {
+ "username": "Username required"
+ }
+}
+```
+
+Then, we begin an SQL transaction. We only received a username, but we need the actual user ID. So the first part of the transaction is to query for the ID and avatar of that user (the other participant). If the user is not found, we respond with a `404 Not Found` error. Also, if the user happens to be the same as the current authenticated user, we respond with `403 Forbidden`; a conversation needs two different users.
+
+Then, we try to find a conversation those two users have in common. We use `INTERSECT` for that. If there is one, we redirect to it at `/api/conversations/{conversationID}` and return.
+
+If no common conversation was found, we continue by creating a new one and adding the two participants. Finally, we `COMMIT` the transaction and respond with the newly created conversation.
+
+### Get Conversations
+
+This endpoint `/api/conversations` is to get all the conversations of the current authenticated user.
+
+```
+func getConversations(w http.ResponseWriter, r *http.Request) {
+ ctx := r.Context()
+ authUserID := ctx.Value(keyAuthUserID).(string)
+
+ rows, err := db.QueryContext(ctx, `
+ SELECT
+ conversations.id,
+ auth_user.messages_read_at < messages.created_at AS has_unread_messages,
+ messages.id,
+ messages.content,
+ messages.created_at,
+ messages.user_id = $1 AS mine,
+ other_users.id,
+ other_users.username,
+ other_users.avatar_url
+ FROM conversations
+ INNER JOIN messages ON conversations.last_message_id = messages.id
+ INNER JOIN participants other_participants
+ ON other_participants.conversation_id = conversations.id
+ AND other_participants.user_id != $1
+ INNER JOIN users other_users ON other_participants.user_id = other_users.id
+ INNER JOIN participants auth_user
+ ON auth_user.conversation_id = conversations.id
+ AND auth_user.user_id = $1
+ ORDER BY messages.created_at DESC
+ `, authUserID)
+ if err != nil {
+ respondError(w, fmt.Errorf("could not query conversations: %v", err))
+ return
+ }
+ defer rows.Close()
+
+ conversations := make([]Conversation, 0)
+ for rows.Next() {
+ var conversation Conversation
+ var lastMessage Message
+ var otherParticipant User
+ if err = rows.Scan(
+ &conversation.ID,
+ &conversation.HasUnreadMessages,
+ &lastMessage.ID,
+ &lastMessage.Content,
+ &lastMessage.CreatedAt,
+ &lastMessage.Mine,
+ &otherParticipant.ID,
+ &otherParticipant.Username,
+ &otherParticipant.AvatarURL,
+ ); err != nil {
+ respondError(w, fmt.Errorf("could not scan conversation: %v", err))
+ return
+ }
+
+ conversation.LastMessage = &lastMessage
+ conversation.OtherParticipant = &otherParticipant
+ conversations = append(conversations, conversation)
+ }
+
+ if err = rows.Err(); err != nil {
+ respondError(w, fmt.Errorf("could not iterate over conversations: %v", err))
+ return
+ }
+
+ respond(w, conversations, http.StatusOK)
+}
+```
+
+This handler just queries the database. It queries the conversations table with some joins… First, to the messages table to get the last message. Then to participants, with a condition that the participant’s ID is not the one of the current authenticated user; this is the other participant. Then it joins the users table to get that participant’s username and avatar. Finally, it joins participants again, with the opposite condition, so that participant is the current authenticated user. We compare `messages_read_at` with the message `created_at` to know whether the conversation has unread messages, and we use the message `user_id` to check whether it’s “mine” or not.
+
+Note that this query assumes that a conversation has just two users; it only works for that scenario. Also, if you want to show a count of the unread messages, this design isn’t good. I think you could add an `unread_messages_count` `INT` field on the `participants` table, increment it each time a new message is created, and reset it when the user reads them. A sketch of that idea follows.
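+
+Here’s that sketch in Go (the column and function names are mine; this is not part of the app we’re building):
+
+```
+// bumpUnreadCount is a sketch of the alternative design, assuming a new
+// column: ALTER TABLE participants ADD COLUMN unread_messages_count INT
+// NOT NULL DEFAULT 0. Run it inside the create-message transaction so
+// every participant except the sender gets one more unread message.
+func bumpUnreadCount(ctx context.Context, tx *sql.Tx, conversationID, senderID string) error {
+	_, err := tx.ExecContext(ctx, `
+		UPDATE participants SET unread_messages_count = unread_messages_count + 1
+		WHERE conversation_id = $1 AND user_id != $2
+	`, conversationID, senderID)
+	return err
+}
+
+// resetUnreadCount would run whenever the user reads the conversation.
+func resetUnreadCount(ctx context.Context, db *sql.DB, conversationID, userID string) error {
+	_, err := db.ExecContext(ctx, `
+		UPDATE participants SET unread_messages_count = 0
+		WHERE conversation_id = $1 AND user_id = $2
+	`, conversationID, userID)
+	return err
+}
+```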
+
+Then it iterates over the rows, scans each one into a slice of conversations, and responds with them at the end.
+
+### Get Conversation
+
+This endpoint, `/api/conversations/{conversationID}`, responds with a single conversation by its ID.
+
+```
+func getConversation(w http.ResponseWriter, r *http.Request) {
+ ctx := r.Context()
+ authUserID := ctx.Value(keyAuthUserID).(string)
+ conversationID := way.Param(ctx, "conversationID")
+
+ var conversation Conversation
+ var otherParticipant User
+ if err := db.QueryRowContext(ctx, `
+ SELECT
+ IFNULL(auth_user.messages_read_at < messages.created_at, false) AS has_unread_messages,
+ other_users.id,
+ other_users.username,
+ other_users.avatar_url
+ FROM conversations
+ LEFT JOIN messages ON conversations.last_message_id = messages.id
+ INNER JOIN participants other_participants
+ ON other_participants.conversation_id = conversations.id
+ AND other_participants.user_id != $1
+ INNER JOIN users other_users ON other_participants.user_id = other_users.id
+ INNER JOIN participants auth_user
+ ON auth_user.conversation_id = conversations.id
+ AND auth_user.user_id = $1
+ WHERE conversations.id = $2
+ `, authUserID, conversationID).Scan(
+ &conversation.HasUnreadMessages,
+ &otherParticipant.ID,
+ &otherParticipant.Username,
+ &otherParticipant.AvatarURL,
+ ); err == sql.ErrNoRows {
+ http.Error(w, "Conversation not found", http.StatusNotFound)
+ return
+ } else if err != nil {
+ respondError(w, fmt.Errorf("could not query conversation: %v", err))
+ return
+ }
+
+ conversation.ID = conversationID
+ conversation.OtherParticipant = &otherParticipant
+
+ respond(w, conversation, http.StatusOK)
+}
+```
+
+The query is quite similar. We’re not interested in showing the last message, so we omit those fields, but we still need the message to know whether the conversation has unread messages. This time we do a `LEFT JOIN` instead of an `INNER JOIN` because `last_message_id` is nullable; otherwise we wouldn’t get any rows for conversations without messages. We use `IFNULL` in the `has_unread_messages` comparison for the same reason. Lastly, we filter by ID.
+
+If the query returns no rows, we respond with a `404 Not Found` error, otherwise `200 OK` with the found conversation.
+
+* * *
+
+And that concludes the conversation endpoints.
+
+Wait for the next post to create and list messages 👋
+
+[Source Code][3]
+
+--------------------------------------------------------------------------------
+
+via: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
+
+作者:[Nicolás Parada][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://nicolasparada.netlify.com/
+[b]: https://github.com/lujun9972
+[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
+[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
+[3]: https://github.com/nicolasparada/go-messenger-demo
diff --git a/sources/tech/20180710 Building a Messenger App- Messages.md b/sources/tech/20180710 Building a Messenger App- Messages.md
new file mode 100644
index 0000000000..55e596df64
--- /dev/null
+++ b/sources/tech/20180710 Building a Messenger App- Messages.md
@@ -0,0 +1,315 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a Messenger App: Messages)
+[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-messages/)
+[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
+
+Building a Messenger App: Messages
+======
+
+This post is the 4th in a series:
+
+ * [Part 1: Schema][1]
+ * [Part 2: OAuth][2]
+ * [Part 3: Conversations][3]
+
+
+
+In this post we’ll code the endpoints to create a message and list messages, plus an endpoint to update the last time a participant read messages. Start by adding these routes in the `main()` function.
+
+```
+router.HandleFunc("POST", "/api/conversations/:conversationID/messages", requireJSON(guard(createMessage)))
+router.HandleFunc("GET", "/api/conversations/:conversationID/messages", guard(getMessages))
+router.HandleFunc("POST", "/api/conversations/:conversationID/read_messages", guard(readMessages))
+```
+
+Messages go into conversations, so these endpoints include the conversation ID.
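+
+Inside each handler, `way` exposes the `:conversationID` segment through the request context; the handlers below read it like so:
+
+```
+ctx := r.Context()
+conversationID := way.Param(ctx, "conversationID")
+```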
+
+### Create Message
+
+This endpoint handles POST requests to `/api/conversations/{conversationID}/messages` with a JSON body containing just the message content, and returns the newly created message. It has two side effects: it updates the conversation’s `last_message_id` and the participant’s `messages_read_at`.
+
+```
+func createMessage(w http.ResponseWriter, r *http.Request) {
+ var input struct {
+ Content string `json:"content"`
+ }
+ defer r.Body.Close()
+ if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
+ http.Error(w, err.Error(), http.StatusBadRequest)
+ return
+ }
+
+ errs := make(map[string]string)
+ input.Content = removeSpaces(input.Content)
+ if input.Content == "" {
+ errs["content"] = "Message content required"
+ } else if len([]rune(input.Content)) > 480 {
+ errs["content"] = "Message too long. 480 max"
+ }
+ if len(errs) != 0 {
+ respond(w, Errors{errs}, http.StatusUnprocessableEntity)
+ return
+ }
+
+ ctx := r.Context()
+ authUserID := ctx.Value(keyAuthUserID).(string)
+ conversationID := way.Param(ctx, "conversationID")
+
+ tx, err := db.BeginTx(ctx, nil)
+ if err != nil {
+ respondError(w, fmt.Errorf("could not begin tx: %v", err))
+ return
+ }
+ defer tx.Rollback()
+
+ isParticipant, err := queryParticipantExistance(ctx, tx, authUserID, conversationID)
+ if err != nil {
+ respondError(w, fmt.Errorf("could not query participant existance: %v", err))
+ return
+ }
+
+ if !isParticipant {
+ http.Error(w, "Conversation not found", http.StatusNotFound)
+ return
+ }
+
+ var message Message
+ if err := tx.QueryRowContext(ctx, `
+ INSERT INTO messages (content, user_id, conversation_id) VALUES
+ ($1, $2, $3)
+ RETURNING id, created_at
+ `, input.Content, authUserID, conversationID).Scan(
+ &message.ID,
+ &message.CreatedAt,
+ ); err != nil {
+ respondError(w, fmt.Errorf("could not insert message: %v", err))
+ return
+ }
+
+ if _, err := tx.ExecContext(ctx, `
+ UPDATE conversations SET last_message_id = $1
+ WHERE id = $2
+ `, message.ID, conversationID); err != nil {
+ respondError(w, fmt.Errorf("could not update conversation last message ID: %v", err))
+ return
+ }
+
+ if err = tx.Commit(); err != nil {
+ respondError(w, fmt.Errorf("could not commit tx to create a message: %v", err))
+ return
+ }
+
+ go func() {
+ if err = updateMessagesReadAt(nil, authUserID, conversationID); err != nil {
+ log.Printf("could not update messages read at: %v\n", err)
+ }
+ }()
+
+ message.Content = input.Content
+ message.UserID = authUserID
+ message.ConversationID = conversationID
+ // TODO: notify about new message.
+ message.Mine = true
+
+ respond(w, message, http.StatusCreated)
+}
+```
+
+First, it decodes the request body into a struct with the message content. Then, it validates that the content is not empty and is at most 480 characters, counted as runes (Unicode code points) rather than bytes.
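+
+A quick aside on the `[]rune` conversion: `len()` on a string counts bytes, while `len([]rune(s))` counts characters (code points), which is what we want to limit. For example:
+
+```
+package main
+
+import "fmt"
+
+func main() {
+	s := "héllo 😊"
+	fmt.Println(len(s))         // 11: len() counts bytes of the UTF-8 encoding
+	fmt.Println(len([]rune(s))) // 7: []rune counts characters (code points)
+}
+```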
+
+```
+var rxSpaces = regexp.MustCompile("\\s+")
+
+func removeSpaces(s string) string {
+ if s == "" {
+ return s
+ }
+
+ lines := make([]string, 0)
+ for _, line := range strings.Split(s, "\n") {
+ line = rxSpaces.ReplaceAllLiteralString(line, " ")
+ line = strings.TrimSpace(line)
+ if line != "" {
+ lines = append(lines, line)
+ }
+ }
+ return strings.Join(lines, "\n")
+}
+```
+
+This is the function that normalizes whitespace. It iterates over each line, collapses consecutive whitespace characters into a single space, trims the line, and joins the non-empty lines back together.
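+
+For example, calling the function defined above:
+
+```
+fmt.Println(removeSpaces("  hello   world \n\n\t bye  "))
+// Output:
+// hello world
+// bye
+```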
+
+After the validation, it starts an SQL transaction. First, it queries for the participant’s existence in the conversation.
+
+```
+func queryParticipantExistance(ctx context.Context, tx *sql.Tx, userID, conversationID string) (bool, error) {
+ if ctx == nil {
+ ctx = context.Background()
+ }
+ var exists bool
+ if err := tx.QueryRowContext(ctx, `SELECT EXISTS (
+ SELECT 1 FROM participants
+ WHERE user_id = $1 AND conversation_id = $2
+ )`, userID, conversationID).Scan(&exists); err != nil {
+ return false, err
+ }
+ return exists, nil
+}
+```
+
+I extracted it into a function because it’s reused later.
+
+If the user isn’t a participant of the conversation, we return a `404 Not Found` error.
+
+Then, it inserts the message and updates the conversation’s `last_message_id`. From this point on, `last_message_id` cannot be `NULL` because we don’t allow deleting messages.
+
+Then it commits the transaction and updates the participant’s `messages_read_at` in a goroutine.
+
+```
+func updateMessagesReadAt(ctx context.Context, userID, conversationID string) error {
+ if ctx == nil {
+ ctx = context.Background()
+ }
+
+ if _, err := db.ExecContext(ctx, `
+ UPDATE participants SET messages_read_at = now()
+ WHERE user_id = $1 AND conversation_id = $2
+ `, userID, conversationID); err != nil {
+ return err
+ }
+ return nil
+}
+```
+
+Before responding with the new message, we must notify about it. This belongs to the realtime part we’ll code in the next post, so I left a comment there.
+
+### Get Messages
+
+This endpoint handles GET requests to `/api/conversations/{conversationID}/messages`. It responds with a JSON array of all the messages in the conversation. It also has the same side effect of updating the participant’s `messages_read_at`.
+
+```
+func getMessages(w http.ResponseWriter, r *http.Request) {
+ ctx := r.Context()
+ authUserID := ctx.Value(keyAuthUserID).(string)
+ conversationID := way.Param(ctx, "conversationID")
+
+ tx, err := db.BeginTx(ctx, &sql.TxOptions{ReadOnly: true})
+ if err != nil {
+ respondError(w, fmt.Errorf("could not begin tx: %v", err))
+ return
+ }
+ defer tx.Rollback()
+
+ isParticipant, err := queryParticipantExistance(ctx, tx, authUserID, conversationID)
+ if err != nil {
+ respondError(w, fmt.Errorf("could not query participant existance: %v", err))
+ return
+ }
+
+ if !isParticipant {
+ http.Error(w, "Conversation not found", http.StatusNotFound)
+ return
+ }
+
+ rows, err := tx.QueryContext(ctx, `
+ SELECT
+ id,
+ content,
+ created_at,
+ user_id = $1 AS mine
+ FROM messages
+ WHERE messages.conversation_id = $2
+ ORDER BY messages.created_at DESC
+ `, authUserID, conversationID)
+ if err != nil {
+ respondError(w, fmt.Errorf("could not query messages: %v", err))
+ return
+ }
+ defer rows.Close()
+
+ messages := make([]Message, 0)
+ for rows.Next() {
+ var message Message
+ if err = rows.Scan(
+ &message.ID,
+ &message.Content,
+ &message.CreatedAt,
+ &message.Mine,
+ ); err != nil {
+ respondError(w, fmt.Errorf("could not scan message: %v", err))
+ return
+ }
+
+ messages = append(messages, message)
+ }
+
+ if err = rows.Err(); err != nil {
+ respondError(w, fmt.Errorf("could not iterate over messages: %v", err))
+ return
+ }
+
+ if err = tx.Commit(); err != nil {
+ respondError(w, fmt.Errorf("could not commit tx to get messages: %v", err))
+ return
+ }
+
+ go func() {
+ if err = updateMessagesReadAt(nil, authUserID, conversationID); err != nil {
+ log.Printf("could not update messages read at: %v\n", err)
+ }
+ }()
+
+ respond(w, messages, http.StatusOK)
+}
+```
+
+First, it begins an SQL transaction in read-only mode. It checks for the participant’s existence and queries all the messages. In each message, we compare against the current authenticated user ID to know whether the user owns the message (`mine`). Then it commits the transaction, updates the participant’s `messages_read_at` in a goroutine, and responds with the messages.
+
+### Read Messages
+
+This endpoint handles POST requests to `/api/conversations/{conversationID}/read_messages`, with no request or response body. In the frontend we’ll make this request each time a new message arrives in the realtime stream.
+
+```
+func readMessages(w http.ResponseWriter, r *http.Request) {
+ ctx := r.Context()
+ authUserID := ctx.Value(keyAuthUserID).(string)
+ conversationID := way.Param(ctx, "conversationID")
+
+ if err := updateMessagesReadAt(ctx, authUserID, conversationID); err != nil {
+ respondError(w, fmt.Errorf("could not update messages read at: %v", err))
+ return
+ }
+
+ w.WriteHeader(http.StatusNoContent)
+}
+```
+
+It uses the same function we’ve been using to update the participant’s `messages_read_at`; here it runs synchronously, since updating that timestamp is the whole point of the request.
+
+* * *
+
+That concludes it. Realtime messages are the only part left in the backend; wait for them in the next post.
+
+[Source Code][4]
+
+--------------------------------------------------------------------------------
+
+via: https://nicolasparada.netlify.com/posts/go-messenger-messages/
+
+作者:[Nicolás Parada][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://nicolasparada.netlify.com/
+[b]: https://github.com/lujun9972
+[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
+[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
+[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
+[4]: https://github.com/nicolasparada/go-messenger-demo
diff --git a/sources/tech/20180710 Building a Messenger App- Realtime Messages.md b/sources/tech/20180710 Building a Messenger App- Realtime Messages.md
new file mode 100644
index 0000000000..71479495b2
--- /dev/null
+++ b/sources/tech/20180710 Building a Messenger App- Realtime Messages.md
@@ -0,0 +1,175 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a Messenger App: Realtime Messages)
+[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/)
+[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
+
+Building a Messenger App: Realtime Messages
+======
+
+This post is the 5th in a series:
+
+ * [Part 1: Schema][1]
+ * [Part 2: OAuth][2]
+ * [Part 3: Conversations][3]
+ * [Part 4: Messages][4]
+
+
+
+For realtime messages we’ll use [Server-Sent Events][5]: an open connection through which we can stream data. We’ll have an endpoint where the user subscribes to all the messages sent to them.
+
+### Message Clients
+
+Before the HTTP part, let’s code a map to keep track of all the clients listening for messages; a `sync.Map` works here because it will be read and written concurrently by many goroutines. Initialize it globally like so:
+
+```
+type MessageClient struct {
+ Messages chan Message
+ UserID string
+}
+
+var messageClients sync.Map
+```
+
+### New Message Created
+
+Remember that in the [last post][4], when we created the message, we left a “TODO” comment. There we’ll dispatch a goroutine with this function.
+
+```
+go messageCreated(message)
+```
+
+Insert that line just where we left the comment.
+
+```
+func messageCreated(message Message) error {
+ if err := db.QueryRow(`
+ SELECT user_id FROM participants
+ WHERE user_id != $1 and conversation_id = $2
+ `, message.UserID, message.ConversationID).
+ Scan(&message.ReceiverID); err != nil {
+ return err
+ }
+
+ go broadcastMessage(message)
+
+ return nil
+}
+
+func broadcastMessage(message Message) {
+ messageClients.Range(func(key, _ interface{}) bool {
+ client := key.(*MessageClient)
+ if client.UserID == message.ReceiverID {
+ client.Messages <- message
+ }
+ return true
+ })
+}
+```
+
+The function queries for the recipient ID (the other participant’s ID) and then sends the message to all of that user’s connected clients.
+
+### Subscribe to Messages
+
+Let’s go to the `main()` function and add this route:
+
+```
+router.HandleFunc("GET", "/api/messages", guard(subscribeToMessages))
+```
+
+This endpoint handles GET requests on `/api/messages`. The request should be an [EventSource][6] connection. It responds with an event stream in which the data is JSON formatted.
+
+```
+func subscribeToMessages(w http.ResponseWriter, r *http.Request) {
+ if a := r.Header.Get("Accept"); !strings.Contains(a, "text/event-stream") {
+ http.Error(w, "This endpoint requires an EventSource connection", http.StatusNotAcceptable)
+ return
+ }
+
+ f, ok := w.(http.Flusher)
+ if !ok {
+ respondError(w, errors.New("streaming unsupported"))
+ return
+ }
+
+ ctx := r.Context()
+ authUserID := ctx.Value(keyAuthUserID).(string)
+
+ h := w.Header()
+ h.Set("Cache-Control", "no-cache")
+ h.Set("Connection", "keep-alive")
+ h.Set("Content-Type", "text/event-stream")
+
+ messages := make(chan Message)
+ defer close(messages)
+
+ client := &MessageClient{Messages: messages, UserID: authUserID}
+ messageClients.Store(client, nil)
+ defer messageClients.Delete(client)
+
+ for {
+ select {
+ case <-ctx.Done():
+ return
+ case message := <-messages:
+ if b, err := json.Marshal(message); err != nil {
+ log.Printf("could not marshall message: %v\n", err)
+ fmt.Fprintf(w, "event: error\ndata: %v\n\n", err)
+ } else {
+ fmt.Fprintf(w, "data: %s\n\n", b)
+ }
+ f.Flush()
+ }
+ }
+}
+```
+
+First it checks that the request has the correct headers and that the server supports streaming. Then we create a channel of messages, wrap it in a client, and store the client in the clients map. Each time a new message is created, it goes into this channel, so we can read from it with a `for-select` loop.
+
+Server-Sent Events uses this format to send data:
+
+```
+data: some data here\n\n
+```
+
+We are sending it in JSON format:
+
+```
+data: {"foo":"bar"}\n\n
+```
+
+We are using `fmt.Fprintf()` to write to the response writer in this format, flushing the data in each iteration of the loop.
+
+This loops until the connection is closed using the request context. We deferred closing the channel and deleting the client, so when the loop ends, the channel is closed and the client won’t receive any more messages.
+
+As a side note, the JavaScript API to work with Server-Sent Events (EventSource) doesn’t support setting custom headers 😒 So we cannot set `Authorization: Bearer <token>`, and that’s the reason why the `guard()` middleware also reads the token from the URL query string.
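+
+That middleware was written back in the [OAuth post][2]; as a rough sketch (the function name and details here are illustrative, not the exact implementation), the fallback could look like this:
+
+```
+// Illustrative sketch: prefer the Authorization header, and fall back
+// to a "token" query string parameter for EventSource connections.
+func tokenFromRequest(r *http.Request) string {
+	if a := r.Header.Get("Authorization"); strings.HasPrefix(a, "Bearer ") {
+		return strings.TrimPrefix(a, "Bearer ")
+	}
+	return r.URL.Query().Get("token")
+}
+```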
+
+* * *
+
+That concludes the realtime messages. I’d like to say that’s everything in the backend, but to code the frontend I’ll add one more endpoint to log in: a login that will be just for development.
+
+[Source Code][7]
+
+--------------------------------------------------------------------------------
+
+via: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
+
+作者:[Nicolás Parada][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://nicolasparada.netlify.com/
+[b]: https://github.com/lujun9972
+[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
+[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
+[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
+[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
+[5]: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events
+[6]: https://developer.mozilla.org/en-US/docs/Web/API/EventSource
+[7]: https://github.com/nicolasparada/go-messenger-demo
diff --git a/sources/tech/20180712 Building a Messenger App- Development Login.md b/sources/tech/20180712 Building a Messenger App- Development Login.md
new file mode 100644
index 0000000000..e12fb3c56a
--- /dev/null
+++ b/sources/tech/20180712 Building a Messenger App- Development Login.md
@@ -0,0 +1,145 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a Messenger App: Development Login)
+[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-dev-login/)
+[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
+
+Building a Messenger App: Development Login
+======
+
+This post is the 6th in a series:
+
+ * [Part 1: Schema][1]
+ * [Part 2: OAuth][2]
+ * [Part 3: Conversations][3]
+ * [Part 4: Messages][4]
+ * [Part 5: Realtime Messages][5]
+
+
+
+We already implemented login through GitHub, but if we want to play around with the app, we need a couple of users to test it. In this post we’ll add an endpoint to log in as any user just by giving a username. This endpoint will be just for development.
+
+Start by adding this route in the `main()` function.
+
+```
+router.HandleFunc("POST", "/api/login", requireJSON(login))
+```
+
+### Login
+
+This function handles POST requests to `/api/login` with a JSON body containing just a username, and returns the authenticated user, a token and its expiration date in JSON format.
+
+```
+func login(w http.ResponseWriter, r *http.Request) {
+ if origin.Hostname() != "localhost" {
+ http.NotFound(w, r)
+ return
+ }
+
+ var input struct {
+ Username string `json:"username"`
+ }
+ if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
+ http.Error(w, err.Error(), http.StatusBadRequest)
+ return
+ }
+ defer r.Body.Close()
+
+ var user User
+ if err := db.QueryRowContext(r.Context(), `
+ SELECT id, avatar_url
+ FROM users
+ WHERE username = $1
+ `, input.Username).Scan(
+ &user.ID,
+ &user.AvatarURL,
+ ); err == sql.ErrNoRows {
+ http.Error(w, "User not found", http.StatusNotFound)
+ return
+ } else if err != nil {
+ respondError(w, fmt.Errorf("could not query user: %v", err))
+ return
+ }
+
+ user.Username = input.Username
+
+ exp := time.Now().Add(jwtLifetime)
+ token, err := issueToken(user.ID, exp)
+ if err != nil {
+ respondError(w, fmt.Errorf("could not create token: %v", err))
+ return
+ }
+
+ respond(w, map[string]interface{}{
+ "authUser": user,
+ "token": token,
+ "expiresAt": exp,
+ }, http.StatusOK)
+}
+```
+
+First it checks that we are on localhost; otherwise it responds with `404 Not Found`. It decodes the body, skipping validation since this is just for development. Then it queries the database for a user with the given username; if none is found, it returns `404 Not Found`. Then it issues a new JSON web token using the user ID as the subject.
+
+```
+func issueToken(subject string, exp time.Time) (string, error) {
+ token, err := jwtSigner.Encode(jwt.Claims{
+ Subject: subject,
+ Expiration: json.Number(strconv.FormatInt(exp.Unix(), 10)),
+ })
+ if err != nil {
+ return "", err
+ }
+ return string(token), nil
+}
+```
+
+The function does the same thing we did [previously][2]; I just moved it into a function to reuse code.
+
+After creating the token, it responds with the user, token and expiration date.
+
+### Seed Users
+
+Now you can add some users to the database to play with.
+
+```
+INSERT INTO users (id, username) VALUES
+ (1, 'john'),
+ (2, 'jane');
+```
+
+You can save it to a file and pipe it to the Cockroach CLI.
+
+```
+cat seed_users.sql | cockroach sql --insecure -d messenger
+```
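+
+With the users seeded, you can exercise the endpoint right away. Here’s an illustrative smoke test; it assumes the server listens on `localhost:3000`, so adjust the address to your setup:
+
+```
+package main
+
+import (
+	"fmt"
+	"io/ioutil"
+	"log"
+	"net/http"
+	"strings"
+)
+
+func main() {
+	// Log in as one of the seeded users ("localhost:3000" is an assumption).
+	resp, err := http.Post(
+		"http://localhost:3000/api/login",
+		"application/json",
+		strings.NewReader(`{"username":"john"}`),
+	)
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer resp.Body.Close()
+
+	body, err := ioutil.ReadAll(resp.Body)
+	if err != nil {
+		log.Fatal(err)
+	}
+	fmt.Println(resp.Status)  // expect "200 OK"
+	fmt.Println(string(body)) // authUser, token and expiresAt
+}
+```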
+
+* * *
+
+That’s it. Once you deploy the code to production and use your own domain, this login function won’t be available.
+
+This post concludes the backend.
+
+[Source Code][6]
+
+--------------------------------------------------------------------------------
+
+via: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
+
+作者:[Nicolás Parada][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://nicolasparada.netlify.com/
+[b]: https://github.com/lujun9972
+[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
+[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
+[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
+[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
+[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
+[6]: https://github.com/nicolasparada/go-messenger-demo
diff --git a/sources/tech/20180716 Building a Messenger App- Access Page.md b/sources/tech/20180716 Building a Messenger App- Access Page.md
new file mode 100644
index 0000000000..21671b92f6
--- /dev/null
+++ b/sources/tech/20180716 Building a Messenger App- Access Page.md
@@ -0,0 +1,459 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a Messenger App: Access Page)
+[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-access-page/)
+[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
+
+Building a Messenger App: Access Page
+======
+
+This post is the 7th in a series:
+
+ * [Part 1: Schema][1]
+ * [Part 2: OAuth][2]
+ * [Part 3: Conversations][3]
+ * [Part 4: Messages][4]
+ * [Part 5: Realtime Messages][5]
+ * [Part 6: Development Login][6]
+
+
+
+Now that we’re done with the backend, let’s move to the frontend. I will go with a single-page application.
+
+Let’s start by creating a file `static/index.html` with the following content.
+
+```
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="utf-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <title>Messenger</title>
+    <link rel="stylesheet" href="/styles.css">
+    <script src="/main.js" type="module"></script>
+</head>
+<body></body>
+</html>
+```
+
+This HTML file must be served for every URL, and JavaScript will take care of rendering the correct page.
+
+So let’s go to `main.go` for a moment, and in the `main()` function add the following route:
+
+```
+router.Handle("GET", "/...", http.FileServer(SPAFileSystem{http.Dir("static")}))
+
+type SPAFileSystem struct {
+ fs http.FileSystem
+}
+
+func (spa SPAFileSystem) Open(name string) (http.File, error) {
+ f, err := spa.fs.Open(name)
+ if err != nil {
+ return spa.fs.Open("index.html")
+ }
+ return f, nil
+}
+```
+
+We use a custom file system so that, instead of returning `404 Not Found` for unknown URLs, it serves the `index.html`.
+
+### Router
+
+In the `index.html` we loaded two files: `styles.css` and `main.js`. I leave styling to your taste.
+
+Let’s move on to `main.js`. Create a `static/main.js` file with the following content:
+
+```
+import { guard } from './auth.js'
+import Router from './router.js'
+
+let currentPage
+const disconnect = new CustomEvent('disconnect')
+const router = new Router()
+
+router.handle('/', guard(view('home'), view('access')))
+router.handle('/callback', view('callback'))
+router.handle(/^\/conversations\/([^\/]+)$/, guard(view('conversation'), view('access')))
+router.handle(/^\//, view('not-found'))
+
+router.install(async result => {
+ document.body.innerHTML = ''
+ if (currentPage instanceof Node) {
+ currentPage.dispatchEvent(disconnect)
+ }
+ currentPage = await result
+ if (currentPage instanceof Node) {
+ document.body.appendChild(currentPage)
+ }
+})
+
+function view(pageName) {
+ return (...args) => import(`/pages/${pageName}-page.js`)
+ .then(m => m.default(...args))
+}
+```
+
+If you are a follower of this blog, you already know how this works. That router is the one shown [here][7]. Just download it from [@nicolasparada/router][8] and save it to `static/router.js`.
+
+We registered four routes. At the root `/` we show the home or access page depending on whether the user is authenticated. At `/callback` we show the callback page. At `/conversations/{conversationID}` we show the conversation or access page, again depending on whether the user is authenticated, and for every other URL we show a not-found page.
+
+We tell the router to render the result to the document body and dispatch a `disconnect` event to each page before leaving.
+
+We have each page in a different file and we import them with the new dynamic `import()`.
+
+### Auth
+
+`guard()` is a function that, given two functions, executes the first one if the user is authenticated, or the second one if not. It comes from `auth.js`, so let’s create a `static/auth.js` file with the following content:
+
+```
+export function isAuthenticated() {
+ const token = localStorage.getItem('token')
+ const expiresAtItem = localStorage.getItem('expires_at')
+ if (token === null || expiresAtItem === null) {
+ return false
+ }
+
+ const expiresAt = new Date(expiresAtItem)
+ if (isNaN(expiresAt.valueOf()) || expiresAt <= new Date()) {
+ return false
+ }
+
+ return true
+}
+
+export function guard(fn1, fn2) {
+ return (...args) => isAuthenticated()
+ ? fn1(...args)
+ : fn2(...args)
+}
+
+export function getAuthUser() {
+ if (!isAuthenticated()) {
+ return null
+ }
+
+ const authUser = localStorage.getItem('auth_user')
+ if (authUser === null) {
+ return null
+ }
+
+ try {
+ return JSON.parse(authUser)
+ } catch (_) {
+ return null
+ }
+}
+```
+
+`isAuthenticated()` checks for `token` and `expires_at` from localStorage to tell if the user is authenticated. `getAuthUser()` gets the authenticated user from localStorage.
+
+When we log in, we’ll save all this data to localStorage, so it will all make sense then.
+
+### Access Page
+
+![access page screenshot][9]
+
+Let’s start with the access page. Create a file `static/pages/access-page.js` with the following content:
+
+```
+const template = document.createElement('template')
+template.innerHTML = `
+    <h1>Messenger</h1>
+    <a href="/api/oauth/github" onclick="event.stopPropagation()">Access with GitHub</a>
+`
+
+export default function accessPage() {
+ return template.content
+}
+```
+
+Because the router intercepts all the link clicks to do its navigation, we must prevent the event propagation for this link in particular.
+
+Clicking on that link will redirect us to the backend, then to GitHub, then to the backend and then to the frontend again; to the callback page.
+
+### Callback Page
+
+Create the file `static/pages/callback-page.js` with the following content:
+
+```
+import http from '../http.js'
+import { navigate } from '../router.js'
+
+export default async function callbackPage() {
+ const url = new URL(location.toString())
+ const token = url.searchParams.get('token')
+ const expiresAt = url.searchParams.get('expires_at')
+
+ try {
+ if (token === null || expiresAt === null) {
+ throw new Error('Invalid URL')
+ }
+
+ const authUser = await getAuthUser(token)
+
+ localStorage.setItem('auth_user', JSON.stringify(authUser))
+ localStorage.setItem('token', token)
+ localStorage.setItem('expires_at', expiresAt)
+ } catch (err) {
+ alert(err.message)
+ } finally {
+ navigate('/', true)
+ }
+}
+
+function getAuthUser(token) {
+ return http.get('/api/auth_user', { authorization: `Bearer ${token}` })
+}
+```
+
+The callback page doesn’t render anything. It’s an async function that does a GET request to `/api/auth_user` using the token from the URL query string and saves all the data to localStorage. Then it redirects to `/`.
+
+### HTTP
+
+There is an HTTP module. Create a `static/http.js` file with the following content:
+
+```
+import { isAuthenticated } from './auth.js'
+
+async function handleResponse(res) {
+ const body = await res.clone().json().catch(() => res.text())
+
+ if (res.status === 401) {
+ localStorage.removeItem('auth_user')
+ localStorage.removeItem('token')
+ localStorage.removeItem('expires_at')
+ }
+
+ if (!res.ok) {
+ const message = typeof body === 'object' && body !== null && 'message' in body
+ ? body.message
+ : typeof body === 'string' && body !== ''
+ ? body
+ : res.statusText
+ throw Object.assign(new Error(message), {
+ url: res.url,
+ statusCode: res.status,
+ statusText: res.statusText,
+ headers: res.headers,
+ body,
+ })
+ }
+
+ return body
+}
+
+function getAuthHeader() {
+ return isAuthenticated()
+ ? { authorization: `Bearer ${localStorage.getItem('token')}` }
+ : {}
+}
+
+export default {
+ get(url, headers) {
+ return fetch(url, {
+ headers: Object.assign(getAuthHeader(), headers),
+ }).then(handleResponse)
+ },
+
+ post(url, body, headers) {
+ const init = {
+ method: 'POST',
+ headers: getAuthHeader(),
+ }
+ if (typeof body === 'object' && body !== null) {
+ init.body = JSON.stringify(body)
+ init.headers['content-type'] = 'application/json; charset=utf-8'
+ }
+ Object.assign(init.headers, headers)
+ return fetch(url, init).then(handleResponse)
+ },
+
+ subscribe(url, callback) {
+ const urlWithToken = new URL(url, location.origin)
+ if (isAuthenticated()) {
+ urlWithToken.searchParams.set('token', localStorage.getItem('token'))
+ }
+ const eventSource = new EventSource(urlWithToken.toString())
+ eventSource.onmessage = ev => {
+ let data
+ try {
+ data = JSON.parse(ev.data)
+ } catch (err) {
+ console.error('could not parse message data as JSON:', err)
+ return
+ }
+ callback(data)
+ }
+ const unsubscribe = () => {
+ eventSource.close()
+ }
+ return unsubscribe
+ },
+}
+```
+
+This module is a wrapper around the [fetch][10] and [EventSource][11] APIs. The most important part is that it adds the JSON web token to the requests.
+
+### Home Page
+
+![home page screenshot][12]
+
+So, when the user logs in, the home page will be shown. Create a `static/pages/home-page.js` file with the following content:
+
+```
+import { getAuthUser } from '../auth.js'
+import { avatar } from '../shared.js'
+
+export default function homePage() {
+ const authUser = getAuthUser()
+ const template = document.createElement('template')
+ template.innerHTML = `
+    <div>
+        <div>
+            ${avatar(authUser)}
+            <span>${authUser.username}</span>
+        </div>
+        <button id="logout-button">Logout</button>
+    </div>
+ `
+ const page = template.content
+ page.getElementById('logout-button').onclick = onLogoutClick
+ return page
+}
+
+function onLogoutClick() {
+ localStorage.clear()
+ location.reload()
+}
+```
+
+For this post, this is the only content we render on the home page. We show the current authenticated user and a logout button.
+
+When the user clicks to log out, we clear everything inside localStorage and reload the page.
+
+### Avatar
+
+That `avatar()` function shows the user’s avatar. Because it’s used in more than one place, I moved it into a `shared.js` file. Create the file `static/shared.js` with the following content:
+
+```
+export function avatar(user) {
+ return user.avatarUrl === null
+        ? `<figure class="avatar" data-initial="${user.username[0]}"></figure>`
+        : `<img class="avatar" src="${user.avatarUrl}" alt="${user.username}'s avatar">`
+}
+```
+
+We use a small figure with the user’s initial in case the avatar URL is null.
+
+You can show the initial with a little CSS, using the `attr()` function.
+
+```
+.avatar[data-initial]::after {
+ content: attr(data-initial);
+}
+```
+
+### Development Login
+
+![access page with login form screenshot][13]
+
+In the previous post we coded a login for development. Let’s add a form for that on the access page. Go to `static/pages/access-page.js` and modify it a little.
+
+```
+import http from '../http.js'
+
+const template = document.createElement('template')
+template.innerHTML = `
+    <h1>Messenger</h1>
+    <form id="login-form">
+        <input type="text" placeholder="Username" required>
+        <button>Login</button>
+    </form>
+    <a href="/api/oauth/github" onclick="event.stopPropagation()">Access with GitHub</a>
+`
+
+export default function accessPage() {
+ const page = template.content.cloneNode(true)
+ page.getElementById('login-form').onsubmit = onLoginSubmit
+ return page
+}
+
+async function onLoginSubmit(ev) {
+ ev.preventDefault()
+
+ const form = ev.currentTarget
+ const input = form.querySelector('input')
+ const submitButton = form.querySelector('button')
+
+ input.disabled = true
+ submitButton.disabled = true
+
+ try {
+ const payload = await login(input.value)
+ input.value = ''
+
+ localStorage.setItem('auth_user', JSON.stringify(payload.authUser))
+ localStorage.setItem('token', payload.token)
+ localStorage.setItem('expires_at', payload.expiresAt)
+
+ location.reload()
+ } catch (err) {
+ alert(err.message)
+ setTimeout(() => {
+ input.focus()
+ }, 0)
+ } finally {
+ input.disabled = false
+ submitButton.disabled = false
+ }
+}
+
+function login(username) {
+ return http.post('/api/login', { username })
+}
+```
+
+I added a login form. When the user submits the form, it does a POST request to `/api/login` with the username, saves all the data to localStorage, and reloads the page.
+
+Remember to remove this form once you are done with the frontend.
+
+* * *
+
+That’s all for this post. In the next one, we’ll continue with the home page to add a form to start conversations and display a list with the latest ones.
+
+[Source Code][14]
+
+--------------------------------------------------------------------------------
+
+via: https://nicolasparada.netlify.com/posts/go-messenger-access-page/
+
+作者:[Nicolás Parada][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://nicolasparada.netlify.com/
+[b]: https://github.com/lujun9972
+[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
+[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
+[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
+[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
+[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
+[6]: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
+[7]: https://nicolasparada.netlify.com/posts/js-router/
+[8]: https://unpkg.com/@nicolasparada/router
+[9]: https://nicolasparada.netlify.com/img/go-messenger-access-page/access-page.png
+[10]: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
+[11]: https://developer.mozilla.org/en-US/docs/Web/API/EventSource
+[12]: https://nicolasparada.netlify.com/img/go-messenger-access-page/home-page.png
+[13]: https://nicolasparada.netlify.com/img/go-messenger-access-page/access-page-v2.png
+[14]: https://github.com/nicolasparada/go-messenger-demo
diff --git a/sources/tech/20180719 Building a Messenger App- Home Page.md b/sources/tech/20180719 Building a Messenger App- Home Page.md
new file mode 100644
index 0000000000..ddec2c180f
--- /dev/null
+++ b/sources/tech/20180719 Building a Messenger App- Home Page.md
@@ -0,0 +1,255 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a Messenger App: Home Page)
+[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-home-page/)
+[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
+
+Building a Messenger App: Home Page
+======
+
+This post is the 8th in a series:
+
+ * [Part 1: Schema][1]
+ * [Part 2: OAuth][2]
+ * [Part 3: Conversations][3]
+ * [Part 4: Messages][4]
+ * [Part 5: Realtime Messages][5]
+ * [Part 6: Development Login][6]
+ * [Part 7: Access Page][7]
+
+
+
+Continuing the frontend, let’s finish the home page in this post. We’ll add a form to start conversations and a list with the latest ones.
+
+### Conversation Form
+
+![conversation form screenshot][8]
+
+In the `static/pages/home-page.js` file, add some markup to the HTML view.
+
+```
+<form id="conversation-form">
+    <input type="search" placeholder="Start conversation with..." required>
+</form>
+```
+
+Add that form just below the section in which we displayed the auth user and logout button.
+
+```
+page.getElementById('conversation-form').onsubmit = onConversationSubmit
+```
+
+Now we can listen to the “submit” event to create the conversation.
+
+```
+import http from '../http.js'
+import { navigate } from '../router.js'
+
+async function onConversationSubmit(ev) {
+ ev.preventDefault()
+
+ const form = ev.currentTarget
+ const input = form.querySelector('input')
+
+ input.disabled = true
+
+ try {
+ const conversation = await createConversation(input.value)
+ input.value = ''
+ navigate('/conversations/' + conversation.id)
+ } catch (err) {
+ if (err.statusCode === 422) {
+ input.setCustomValidity(err.body.errors.username)
+ } else {
+ alert(err.message)
+ }
+ setTimeout(() => {
+ input.focus()
+ }, 0)
+ } finally {
+ input.disabled = false
+ }
+}
+
+function createConversation(username) {
+ return http.post('/api/conversations', { username })
+}
+```
+
+On submit we do a POST request to `/api/conversations` with the username and redirect to the conversation page (for the next post).
+
+### Conversation List
+
+![conversation list screenshot][9]
+
+In the same file, we are going to make the `homePage()` function async to load the conversations first.
+
+```
+export default async function homePage() {
+ const conversations = await getConversations().catch(err => {
+ console.error(err)
+ return []
+ })
+ /*...*/
+}
+
+function getConversations() {
+ return http.get('/api/conversations')
+}
+```
+
+Then, add a list in the markup to render conversations there.
+
+```
+<ol id="conversations"></ol>
+```
+
+Add it just below the current markup.
+
+```
+const conversationsOList = page.getElementById('conversations')
+for (const conversation of conversations) {
+ conversationsOList.appendChild(renderConversation(conversation))
+}
+```
+
+So we can append each conversation to the list.
+
+```
+import { avatar, escapeHTML } from '../shared.js'
+
+function renderConversation(conversation) {
+ const messageContent = escapeHTML(conversation.lastMessage.content)
+ const messageDate = new Date(conversation.lastMessage.createdAt).toLocaleString()
+
+ const li = document.createElement('li')
+ li.dataset['id'] = conversation.id
+ if (conversation.hasUnreadMessages) {
+ li.classList.add('has-unread-messages')
+ }
+ li.innerHTML = `
+        <a href="/conversations/${conversation.id}">
+            <div>
+                ${avatar(conversation.otherParticipant)}
+                <span>${conversation.otherParticipant.username}</span>
+            </div>
+            <div>
+                <p>${messageContent}</p>
+                <time>${messageDate}</time>
+            </div>
+        </a>
+ `
+ return li
+}
+```
+
+Each conversation item contains a link to the conversation page and displays the other participant’s info and a preview of the last message. Also, you can use `.hasUnreadMessages` to add a class to the item and do some styling with CSS: maybe a bolder font or an accent color.
+
+Note that we’re escaping the message content. That function comes from `static/shared.js`:
+
+```
+export function escapeHTML(str) {
+ return str
+        .replace(/&/g, '&amp;')
+        .replace(/</g, '&lt;')
+        .replace(/>/g, '&gt;')
+        .replace(/"/g, '&quot;')
+        .replace(/'/g, '&#039;')
+}
+```
+
+That prevents displaying the message the user wrote as HTML. If the user happens to write something like:
+
+```
+<script>alert('evil message')</script>
+```
+
+It would be very annoying because that script will be executed 😅
+So yeah, always remember to escape content from untrusted sources.
+
+### Messages Subscription
+
+Last but not least, I want to subscribe to the message stream here.
+
+```
+const unsubscribe = subscribeToMessages(onMessageArrive)
+page.addEventListener('disconnect', unsubscribe)
+```
+
+Add that line in the `homePage()` function.
+
+```
+function subscribeToMessages(cb) {
+ return http.subscribe('/api/messages', cb)
+}
+```
+
+The `subscribe()` function returns another function that, once called, closes the underlying connection. That’s why I passed it to the “disconnect” event: when the user leaves the page, the event stream will be closed.
+
+```
+async function onMessageArrive(message) {
+ const conversationLI = document.querySelector(`li[data-id="${message.conversationID}"]`)
+ if (conversationLI !== null) {
+ conversationLI.classList.add('has-unread-messages')
+ conversationLI.querySelector('a > div > p').textContent = message.content
+ conversationLI.querySelector('a > div > time').textContent = new Date(message.createdAt).toLocaleString()
+ return
+ }
+
+ let conversation
+ try {
+ conversation = await getConversation(message.conversationID)
+ conversation.lastMessage = message
+ } catch (err) {
+ console.error(err)
+ return
+ }
+
+ const conversationsOList = document.getElementById('conversations')
+ if (conversationsOList === null) {
+ return
+ }
+
+ conversationsOList.insertAdjacentElement('afterbegin', renderConversation(conversation))
+}
+
+function getConversation(id) {
+ return http.get('/api/conversations/' + id)
+}
+```
+
+Every time a new message arrives, we query for the conversation item in the DOM. If it’s found, we add the `has-unread-messages` class to the item and update the view. If it’s not found, it means the message is from a new conversation created just now, so we do a GET request to `/api/conversations/{conversationID}` to get the conversation in which the message was created and prepend it to the conversation list.
+
+* * *
+
+That covers the home page 😊
+In the next post we’ll code the conversation page.
+
+[Source Code][10]
+
+--------------------------------------------------------------------------------
+
+via: https://nicolasparada.netlify.com/posts/go-messenger-home-page/
+
+作者:[Nicolás Parada][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://nicolasparada.netlify.com/
+[b]: https://github.com/lujun9972
+[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
+[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
+[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
+[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
+[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
+[6]: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
+[7]: https://nicolasparada.netlify.com/posts/go-messenger-access-page/
+[8]: https://nicolasparada.netlify.com/img/go-messenger-home-page/conversation-form.png
+[9]: https://nicolasparada.netlify.com/img/go-messenger-home-page/conversation-list.png
+[10]: https://github.com/nicolasparada/go-messenger-demo
diff --git a/sources/tech/20180720 Building a Messenger App- Conversation Page.md b/sources/tech/20180720 Building a Messenger App- Conversation Page.md
new file mode 100644
index 0000000000..c721b48161
--- /dev/null
+++ b/sources/tech/20180720 Building a Messenger App- Conversation Page.md
@@ -0,0 +1,269 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a Messenger App: Conversation Page)
+[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-conversation-page/)
+[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
+
+Building a Messenger App: Conversation Page
+======
+
+This post is the 9th and last in a series:
+
+ * [Part 1: Schema][1]
+ * [Part 2: OAuth][2]
+ * [Part 3: Conversations][3]
+ * [Part 4: Messages][4]
+ * [Part 5: Realtime Messages][5]
+ * [Part 6: Development Login][6]
+ * [Part 7: Access Page][7]
+ * [Part 8: Home Page][8]
+
+
+
+In this post we’ll code the conversation page: the chat between two users. At the top we’ll show info about the other participant, below that a list of the latest messages, and a message form at the bottom.
+
+### Chat heading
+
+![chat heading screenshot][9]
+
+Let’s start by creating the file `static/pages/conversation-page.js` with the following content:
+
+```
+import http from '../http.js'
+import { navigate } from '../router.js'
+import { avatar, escapeHTML } from '../shared.js'
+
+export default async function conversationPage(conversationID) {
+ let conversation
+ try {
+ conversation = await getConversation(conversationID)
+ } catch (err) {
+ alert(err.message)
+ navigate('/', true)
+ return
+ }
+
+ const template = document.createElement('template')
+ template.innerHTML = `
+        <div>
+            <a href="/">← Back</a>
+            ${avatar(conversation.otherParticipant)}
+            <span>${conversation.otherParticipant.username}</span>
+        </div>
+ `
+ const page = template.content
+ return page
+}
+
+function getConversation(id) {
+ return http.get('/api/conversations/' + id)
+}
+```
+
+This page receives the conversation ID the router extracted from the URL.
+
+First it does a GET request to `/api/conversations/{conversationID}` to get info about the conversation. In case of error, we show it and redirect back to `/`. Then we render info about the other participant.
+
+### Message List
+
+![chat heading screenshot][10]
+
+We’ll also fetch the latest messages to display them.
+
+```
+let conversation, messages
+try {
+ [conversation, messages] = await Promise.all([
+ getConversation(conversationID),
+ getMessages(conversationID),
+ ])
+}
+```
+
+Update the `conversationPage()` function to fetch the messages too. We use `Promise.all()` to do both requests at the same time.
+
+```
+function getMessages(conversationID) {
+ return http.get(`/api/conversations/${conversationID}/messages`)
+}
+```
+
+A GET request to `/api/conversations/{conversationID}/messages` gets the latest messages of the conversation.
+
+```
+<ol id="messages"></ol>
+```
+
+Now, add that list to the markup.
+
+```
+const messagesOList = page.getElementById('messages')
+for (const message of messages.reverse()) {
+ messagesOList.appendChild(renderMessage(message))
+}
+```
+
+So we can append messages to the list. Since the query returns the latest messages first, we reverse the array to show the oldest at the top.
+
+```
+function renderMessage(message) {
+ const messageContent = escapeHTML(message.content)
+ const messageDate = new Date(message.createdAt).toLocaleString()
+
+ const li = document.createElement('li')
+ if (message.mine) {
+ li.classList.add('owned')
+ }
+ li.innerHTML = `
+        <p>${messageContent}</p>
+        <time>${messageDate}</time>
+ `
+ return li
+}
+```
+
+Each message item displays the message content itself along with its timestamp. Using `.mine`, we append a different class to the item, so you can, for example, align your own messages to the right with CSS.
+
+### Message Form
+
+![chat heading screenshot][11]
+
+```
+<form id="message-form">
+    <input type="text" placeholder="Type something" maxlength="480" required>
+    <button>Send</button>
+</form>
+```
+
+Add that form to the current markup.
+
+```
+page.getElementById('message-form').onsubmit = messageSubmitter(conversationID)
+```
+
+Attach an event listener to the “submit” event.
+
+```
+function messageSubmitter(conversationID) {
+ return async ev => {
+ ev.preventDefault()
+
+ const form = ev.currentTarget
+ const input = form.querySelector('input')
+ const submitButton = form.querySelector('button')
+
+ input.disabled = true
+ submitButton.disabled = true
+
+ try {
+ const message = await createMessage(input.value, conversationID)
+ input.value = ''
+ const messagesOList = document.getElementById('messages')
+ if (messagesOList === null) {
+ return
+ }
+
+ messagesOList.appendChild(renderMessage(message))
+ } catch (err) {
+ if (err.statusCode === 422) {
+ input.setCustomValidity(err.body.errors.content)
+ } else {
+ alert(err.message)
+ }
+ } finally {
+ input.disabled = false
+ submitButton.disabled = false
+
+ setTimeout(() => {
+ input.focus()
+ }, 0)
+ }
+ }
+}
+
+function createMessage(content, conversationID) {
+ return http.post(`/api/conversations/${conversationID}/messages`, { content })
+}
+```
+
+We make use of [partial application][12] to have the conversation ID available in the “submit” event handler. It takes the message content from the input and does a POST request to `/api/conversations/{conversationID}/messages` with it. Then it appends the newly created message to the list.
+
+### Messages Subscription
+
+To make it realtime, we’ll subscribe to the message stream on this page too.
+
+```
+page.addEventListener('disconnect', subscribeToMessages(messageArriver(conversationID)))
+```
+
+Add that line in the `conversationPage()` function.
+
+```
+function subscribeToMessages(cb) {
+ return http.subscribe('/api/messages', cb)
+}
+
+function messageArriver(conversationID) {
+ return message => {
+ if (message.conversationID !== conversationID) {
+ return
+ }
+
+ const messagesOList = document.getElementById('messages')
+ if (messagesOList === null) {
+ return
+ }
+ messagesOList.appendChild(renderMessage(message))
+ readMessages(message.conversationID)
+ }
+}
+
+function readMessages(conversationID) {
+ return http.post(`/api/conversations/${conversationID}/read_messages`)
+}
+```
+
+We also make use of partial application to have the conversation ID here.
+When a new message arrives, first we check whether it’s from this conversation. If it is, we append a message item to the list and do a POST request to `/api/conversations/{conversationID}/read_messages` to update the last time the participant read messages.
+
+* * *
+
+That concludes this series. The messenger app is now functional.
+
+~~I’ll add pagination on the conversation and message list, also user searching before sharing the source code. I’ll updated once it’s ready along with a hosted demo 👨💻~~
+
+[Source Code][13] • [Demo][14]
+
+--------------------------------------------------------------------------------
+
+via: https://nicolasparada.netlify.com/posts/go-messenger-conversation-page/
+
+作者:[Nicolás Parada][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://nicolasparada.netlify.com/
+[b]: https://github.com/lujun9972
+[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
+[2]: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
+[3]: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
+[4]: https://nicolasparada.netlify.com/posts/go-messenger-messages/
+[5]: https://nicolasparada.netlify.com/posts/go-messenger-realtime-messages/
+[6]: https://nicolasparada.netlify.com/posts/go-messenger-dev-login/
+[7]: https://nicolasparada.netlify.com/posts/go-messenger-access-page/
+[8]: https://nicolasparada.netlify.com/posts/go-messenger-home-page/
+[9]: https://nicolasparada.netlify.com/img/go-messenger-conversation-page/heading.png
+[10]: https://nicolasparada.netlify.com/img/go-messenger-conversation-page/list.png
+[11]: https://nicolasparada.netlify.com/img/go-messenger-conversation-page/form.png
+[12]: https://en.wikipedia.org/wiki/Partial_application
+[13]: https://github.com/nicolasparada/go-messenger-demo
+[14]: https://go-messenger-demo.herokuapp.com/
diff --git a/sources/tech/20181111 Some notes on running new software in production.md b/sources/tech/20181111 Some notes on running new software in production.md
new file mode 100644
index 0000000000..bfdfb66a44
--- /dev/null
+++ b/sources/tech/20181111 Some notes on running new software in production.md
@@ -0,0 +1,151 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Some notes on running new software in production)
+[#]: via: (https://jvns.ca/blog/2018/11/11/understand-the-software-you-use-in-production/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+Some notes on running new software in production
+======
+
+I’m working on a talk for kubecon in December! One of the points I want to get across is the amount of time/investment it takes to use new software in production without causing really serious incidents, and what that’s looked like for us in our use of Kubernetes.
+
+To start out, this post isn’t blanket advice. There are lots of times when it’s totally fine to just use software and not worry about **how** it works exactly. So let’s start by talking about when it’s important to invest.
+
+### when it matters: 99.99%
+
+If you’re running a service with a low SLO like 99% I don’t think it matters that much to understand the software you run in production. You can be down for like 2 hours a month! If something goes wrong, just fix it and it’s fine.
+
+At 99.99%, it’s different. That’s 45 minutes / year of downtime, and if you find out about a serious issue for the first time in production it could easily take you 20 minutes or more to revert the change. That’s half your uptime budget for the year!
+
+### when it matters: software that you’re using heavily
+
+Also, even if you’re running a service with a 99.99% SLO, it’s impossible to develop a super deep understanding of every single piece of software you’re using. For example, a web service might use:
+
+ * 100 library dependencies
+ * the filesystem (so there’s linux filesystem code!)
+ * the network (linux networking code!)
+ * a database (like postgres)
+ * a proxy (like nginx/haproxy)
+
+
+
+If you’re only reading like 2 files from disk, you don’t need to do a super deep dive into Linux filesystems internals, you can just read the file from disk.
+
+What I try to do in practice is identify the components which we rely on the most (or have the most unusual use cases for!), and invest time into understanding those. These are usually pretty easy to identify because they’re the ones which will cause the most problems :)
+
+### when it matters: new software
+
+Understanding your software especially matters for newer/less mature software projects, because it’s more likely to have bugs or just not have matured enough to be used by most people without having to worry. I’ve spent a bunch of time recently with Kubernetes/Envoy which are both relatively new projects, and neither of those are remotely in the category of “oh, it’ll just work, don’t worry about it”. I’ve spent many hours debugging weird surprising edge cases with both of them and learning how to configure them in the right way.
+
+### a playbook for understanding your software
+
+The playbook for understanding the software you run in production is pretty simple. Here it is:
+
+ 1. Start using it in production in a non-critical capacity (by sending a small percentage of traffic to it, on a less critical service, etc)
+ 2. Let that bake for a few weeks.
+ 3. Run into problems.
+ 4. Fix the problems. Go to step 3.
+
+
+
+Repeat until you feel like you have a good handle on this software’s failure modes and are comfortable running it in a more critical capacity. Let’s talk about that in a little more detail, though:
+
+### what running into bugs looks like
+
+For example, I’ve been spending a lot of time with Envoy in the last year. Some of the issues we’ve seen along the way are: (in no particular order)
+
+ * One of the default settings resulted in retry & timeout headers not being respected
+ * Envoy (as a client) doesn’t support TLS session resumption, so servers with a large amount of Envoy clients get DDOSed by TLS handshakes
+  * Envoy’s active healthchecking means that your services get healthchecked by every client. This is mostly okay but (again) services with many clients can get overwhelmed by it.
+  * Having every client independently healthcheck every server interacts somewhat poorly with services which are under heavy load, and can exacerbate performance issues by removing up-but-slow servers from the load balancer rotation.
+ * Envoy doesn’t retry failed connections by default
+ * it frequently segfaults when given incorrect configuration
+ * various issues with it segfaulting because of resource leaks / memory safety issues
+  * hosts running out of disk space because we didn’t rotate Envoy log files often enough
+
+
+
+A lot of these aren’t bugs – they’re just cases where we expected the default configuration to do one thing, and it did another. This happens all the time, and it can result in really serious incidents. Figuring out how to configure a complicated piece of software appropriately takes a lot of time, and you just have to account for that.
+
+And Envoy is great software! The maintainers are incredibly responsive, they fix bugs quickly and its performance is good. It’s overall been quite stable and it’s done well in production. But just because something is great software doesn’t mean you won’t also run into 10 or 20 relatively serious issues along the way that need to be addressed in one way or another. And it’s helpful to understand those issues **before** putting the software in a really critical place.
+
+### try to have each incident only once
+
+My view is that running new software in production inevitably results in incidents. The trick:
+
+ 1. Make sure the incidents aren’t too serious (by making ‘production’ a less critical system first)
+ 2. Whenever there’s an incident (even if it’s not that serious!!!), spend the time necessary to understand exactly why it happened and how to make sure it doesn’t happen again
+
+
+
+My experience so far has been that it’s actually relatively possible to pull off “have every incident only once”. When we investigate issues and implement remediations, usually that issue **never comes back**. The remediation can either be:
+
+ * a configuration change
+ * reporting a bug upstream and either fixing it ourselves or waiting for a fix
+ * a workaround (“this software doesn’t work with 10,000 clients? ok, we just won’t use it with in cases where there are that many clients for now!“, “oh, a memory leak? let’s just restart it every hour”)
+
+
+
+Knowledge-sharing is really important here too – it’s always unfortunate when one person finds an incident in production, fixes it, but doesn’t explain the issue to the rest of the team so somebody else ends up causing the same incident again later because they didn’t hear about the original incident.
+
+### Understand what is ok to break and isn’t
+
+Another huge part of understanding the software I run in production is understanding which parts are OK to break (aka “if this breaks, it won’t result in a production incident”) and which aren’t. This lets me **focus**: I can put big boxes around some components and decide “ok, if this breaks it doesn’t matter, so I won’t pay super close attention to it”.
+
+For example, with Kubernetes:
+
+ok to break:
+
+ * any stateless control plane component can crash or be cycled out or go down for 5 minutes at any time. If we had 95% uptime for the kubernetes control plane that would probably be fine, it just needs to be working most of the time.
+  * kubernetes networking (the system where you give every pod an IP address) can break as much as it wants because we decided not to use it to start
+
+
+
+not ok:
+
+ * for us, if etcd goes down for 10 minutes, that’s ok. If it goes down for 2 hours, it’s not
+ * containers not starting or crashing on startup (iam issues, docker not starting containers, bugs in the scheduler, bugs in other controllers) is serious and needs to be looked at immediately
+ * containers not having access to the resources they need (because of permissions issues, etc)
+ * pods being terminated unexpectedly by Kubernetes (if you configure kubernetes wrong it can terminate your pods!)
+
+
+
+with Envoy, the breakdown is pretty different:
+
+ok to break:
+
+ * if the envoy control plane goes down for 5 minutes, that’s fine (it’ll keep working with stale data)
+ * segfaults on startup due to configuration errors are sort of okay because they manifest so early and they’re unlikely to surprise us (if the segfault doesn’t happen the 1st time, it shouldn’t happen the 200th time)
+
+
+
+not ok:
+
+ * Envoy crashes / segfaults are not good – if it crashes, network connections don’t happen
+ * if the control server serves incorrect or incomplete data that’s extremely dangerous and can result in serious production incidents. (so downtime is fine, but serving incorrect data is not!)
+
+
+
+Neither of these lists is complete, but they’re examples of what I mean by “understand your software”.
+
+### sharing ok to break / not ok lists is useful
+
+I think these “ok to break” / “not ok” lists are really useful to share, because even if they’re not 100% the same for every user, the lessons are pretty hard won. I’d be curious to hear about your breakdown of what kinds of failures are ok / not ok for software you’re using!
+
+Figuring out all the failure modes of a new piece of software and how they apply to your situation can take months. (this is why when you ask your database team “hey can we just use NEW DATABASE” they look at you in such a pained way). So anything we can do to help other people learn faster is amazing.
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2018/11/11/understand-the-software-you-use-in-production/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
diff --git a/sources/tech/20181118 An example of how C-- destructors are useful in Envoy.md b/sources/tech/20181118 An example of how C-- destructors are useful in Envoy.md
new file mode 100644
index 0000000000..f95f17db01
--- /dev/null
+++ b/sources/tech/20181118 An example of how C-- destructors are useful in Envoy.md
@@ -0,0 +1,130 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (An example of how C++ destructors are useful in Envoy)
+[#]: via: (https://jvns.ca/blog/2018/11/18/c---destructors---really-useful/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+An example of how C++ destructors are useful in Envoy
+======
+
+For a while now I’ve been working with a C++ project (Envoy), and sometimes I need to contribute to it, so my C++ skills have gone from “nonexistent” to “really minimal”. I’ve learned what an initializer list is and that a method starting with `~` is a destructor. I almost know what an lvalue and an rvalue are but not quite.
+
+But the other day when writing some C++ code I figured out something exciting about how to use destructors that I hadn’t realized! (the tl;dr of this post for people who know C++ is “julia finally understands what RAII is and that it is useful” :))
+
+### what’s a destructor?
+
+C++ has objects. When a C++ object goes out of scope, the compiler inserts a call to its destructor. So if you have some code like
+
+```
+int do_thing() {
+    Thing x{}; // this calls the Thing constructor
+    return 2;
+} // x goes out of scope here, so its destructor gets called
+```
+
+there will be a call to x’s destructor at the end of the `do_thing` function. So the code C++ generates looks something like:
+
+ * make new thing
+ * call the new thing’s destructor
+ * return 2
+
+
+
+Obviously destructors are way more complicated than this. They need to get called when there are exceptions! And sometimes they get called manually. And for lots of other reasons too. But there are 10 million things to know about C++ and that is not what we’re doing today, we are just talking about one thing.
+
+### what happens in a destructor?
+
+A lot of the time memory gets freed, which is how you avoid having memory leaks. But that’s not what we’re talking about in this post! We are talking about something more interesting.
+
+### the thing we’re interested in: Envoy circuit breakers
+
+So I’ve been working with Envoy a lot. 3-second Envoy refresher: it’s an HTTP proxy; your application makes requests to Envoy, which then proxies the request to the servers the application wants to talk to.
+
+One very useful feature Envoy has is this thing called “circuit breakers”. Basically the idea is that if your application makes 50 billion connections to a service, that will probably overwhelm the service. So Envoy keeps track of how many TCP connections you’ve made to a service, and will stop you from making new requests if you hit the limit. The default `max_connections` limit is 1024.
+
+### how do you track connection count?
+
+To maintain a circuit breaker on the number of TCP connections, you need to keep an accurate count of how many TCP connections are currently open! How do you do that? Well, the way it works is to maintain a `connections` counter and:
+
+ * every time a connection is opened, increment the counter
+ * every time a connection is destroyed (because of a reset / timeout / whatever), decrement the counter
+ * when creating a new connection, check that the `connections` counter is not over the limit
+
+
+
+that’s all! And incrementing the counter when creating a new connection is pretty easy. But how do you make sure that the counter gets _decremented_ when the connection is destroyed? Connections can be destroyed in a lot of ways (they can time out! they can be closed by Envoy! they can be closed by the server! maybe something else I haven’t thought of could happen!) and it seems very easy to accidentally miss a way of closing them.
+
+### destructors to the rescue
+
+The way Envoy solves this problem is to create a connection object (called `ActiveClient` in the HTTP connection pool) for every connection.
+
+Then it:
+
+ * increments the counter in the constructor ([code][1])
+ * decrements the counter in the destructor ([code][2])
+ * checks the counter when a new connection is created ([code][3])
+
+
+
+The beauty of this is that now you don’t need to make sure that the counter gets decremented in all the right places, you now just need to organize your code so that the `ActiveClient` object’s destructor gets called when the connection has closed.
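+
+Here’s a minimal sketch of what that pattern looks like (my own toy version, just to illustrate the idea – `ConnectionCounter` and this little `ActiveClient` are made up, not Envoy’s actual code):
+
+```
+#include <cassert>
+#include <cstdio>
+
+struct ConnectionCounter {
+    int connections = 0;
+    int max_connections = 1024;
+};
+
+class ActiveClient {
+public:
+    explicit ActiveClient(ConnectionCounter& counter) : counter_(counter) {
+        counter_.connections++;  // one more open connection
+    }
+    ~ActiveClient() {
+        counter_.connections--;  // always runs when the object is destroyed
+    }
+    // copying would corrupt the count (two decrements for one increment)
+    ActiveClient(const ActiveClient&) = delete;
+    ActiveClient& operator=(const ActiveClient&) = delete;
+
+private:
+    ConnectionCounter& counter_;
+};
+
+int main() {
+    ConnectionCounter counter;
+    if (counter.connections < counter.max_connections) {
+        ActiveClient client{counter};                                // increments
+        std::printf("open connections: %d\n", counter.connections);  // prints 1
+    }  // client goes out of scope here, so the destructor decrements
+    std::printf("open connections: %d\n", counter.connections);      // prints 0
+    assert(counter.connections == 0);
+    return 0;
+}
+```
+
+However the object goes away – scope exit, an exception unwinding the stack, being erased from a container – the decrement runs exactly once.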
+
+Where does the `ActiveClient` destructor get called in Envoy? Well, Envoy maintains 2 lists of clients (`ready_clients` and `busy_clients`), and when a connection gets closed, Envoy removes the client from those lists. And when it does that, it doesn’t need to do any extra cleanup!! In C++, anytime an object is removed from a list, its destructor is called. So `client.removeFromList(ready_clients_);` takes care of all the cleanup. And there’s no chance of forgetting to decrement the counter!! It will definitely always happen unless you accidentally leave the object on one of these lists, which would be a bug anyway because the connection is closed :)
+
+### RAII
+
+The pattern Envoy is using here is an extremely common C++ programming pattern called “resource acquisition is initialization”. I find that name very confusing, but that’s what it’s called. Basically the way it works is:
+
+ * identify a resource (like “connection”) where a lot of things need to happen when the connection is initialized / finished
+ * make a class for that connection
+ * put all the initialization / finishing code in the constructor / destructor
+ * make sure the object’s destructor method gets called when appropriate! (by removing it from a vector / having it go out of scope)
+
+
+
+Previously I knew about using this pattern for kind of obvious things (make sure all the memory gets freed in the destructor, or make sure file descriptors get closed). But I didn’t realize it was also useful for cleanup that’s less obviously a resource, like “decrement a counter”.
+
+The reason this pattern works is because the C++ compiler/standard library does a bunch of work to make sure that destructors get called when you’re done with an object – the compiler inserts destructor calls at the end of each block of code, after exceptions, and many standard library collections make sure destructors are called when you remove an object from a collection.
+
+### RAII gives you prompt, deterministic, and hard-to-screw-up cleanup of resources
+
+The exciting thing here is that this programming pattern gives you a way to schedule cleaning up resources that’s:
+
+ * easy to ensure always happens (when the object goes away, it always happens, even if there was an exception!)
+ * prompt & deterministic (it happens right away and it’s guaranteed to happen!)
+
+
+
+### what languages have RAII?
+
+C++ and Rust have RAII. Probably other languages too. Java, Python, Go, and garbage collected languages in general do not. In a garbage collected language you can often set up finalizers to be run when the object is GC’d. But often (like in this case, with the connection count) you want things to be cleaned up **right away** when the object is no longer in use, not some indeterminate period later whenever GC happens to run.
+
+Python context managers are a related idea, you could do something like:
+
+```
+with conn_pool.connection() as conn:
+    ...  # use the connection; cleanup runs when the block exits, even on errors
+```
+
+### that’s all for now!
+
+Hopefully this explanation of RAII is interesting and mostly correct. Thanks to Kamal for clarifying some RAII things for me!
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2018/11/18/c---destructors---really-useful/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://github.com/envoyproxy/envoy/blob/200b0e41641be46471c2ce3d230aae395fda7ded/source/common/http/http1/conn_pool.cc#L301
+[2]: https://github.com/envoyproxy/envoy/blob/200b0e41641be46471c2ce3d230aae395fda7ded/source/common/http/http1/conn_pool.cc#L315
+[3]: https://github.com/envoyproxy/envoy/blob/200b0e41641be46471c2ce3d230aae395fda7ded/source/common/http/http1/conn_pool.cc#L97
diff --git a/sources/tech/20181209 How do you document a tech project with comics.md b/sources/tech/20181209 How do you document a tech project with comics.md
new file mode 100644
index 0000000000..02d4981875
--- /dev/null
+++ b/sources/tech/20181209 How do you document a tech project with comics.md
@@ -0,0 +1,100 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How do you document a tech project with comics?)
+[#]: via: (https://jvns.ca/blog/2018/12/09/how-do-you-document-a-tech-project-with-comics/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+How do you document a tech project with comics?
+======
+
+Every so often I get email from people saying basically “hey julia! we have an open source project! we’d like to use comics / zines / art to document our project! Can we hire you?”.
+
+spoiler: the answer is “no, you can’t hire me” – I don’t do commissions. But I do think this is a cool idea and I’ve often wished I had something more useful to say to people than “no”, so if you’re interested in this, here are some ideas about how to accomplish it!
+
+### zine != drawing
+
+First, a terminology distinction. One weird thing I’ve noticed is that people frequently refer to individual tech drawings as “zines”. I think this is due to me communicating poorly somehow, but – drawings are not zines! A zine is a **printed booklet**, like a small maga**zine**. You wouldn’t call a photo of a model in Vogue a magazine! The magazine has like a million pages! An individual drawing is a drawing/comic/graphic/whatever. Just clarifying this because I think it causes a bit of unnecessary confusion.
+
+### comics without good information are useless
+
+Usually when folks ask me “hey, could we make a comic explaining X”, it doesn’t seem like they have a clear idea of what information exactly they want to get across, they just have a vague idea that maybe it would be cool to draw some comics. This makes sense – figuring out what information would be useful to tell people is very hard!! It’s 80% of what I spend my time on when making comics.
+
+You should think about comics the same way as any kind of documentation – start with the information you want to convey, who your target audience is, and how you want to distribute it (twitter? on your website? in person?), and figure out how to illustrate it after :). The information is the main thing, not the art!
+
+Once you have a clear story about what you want to get across, you can start trying to think about how to represent it using illustrations!
+
+### focus on concepts that don’t change
+
+Drawing comics is a much bigger investment than writing documentation (it takes me like 5x longer to convey the same information in a comic than in writing). So use it wisely! Because it’s not that easy to edit, if you’re going to make something a comic you want to focus on concepts that are very unlikely to change. So talk about the core ideas in your project instead of the exact command line arguments it takes!
+
+Here are a couple of options for how you could use comics/illustrations to document your project!
+
+### option 1: a single graphic
+
+One format you might want to try is a single, small graphic explaining what your project is about and why folks might be interested in it. For example: [this zulip comic][1]
+
+This is a short thing, you could post it on Twitter or print it as a pamphlet to give out. The information content here would probably be basically what’s on your project homepage, but presented in a more fun/exciting way :)
+
+You can put a pretty small amount of information in a single comic. With that Zulip comic, the things I picked out were:
+
+ * zulip is sort of like slack, but it has threads
+ * it’s easy to keep track of threads even if the conversation takes place over several days
+ * you can much more easily selectively catch up with Zulip
+ * zulip is open source
+ * there’s an open zulip server you can try out
+
+
+
+That’s not a lot of information! It’s 50 words :). So to do this effectively you need to distill your project down to 50 words in a way that’s still useful. It’s not easy!
+
+### option 2: many comics
+
+Another approach you can take is to make a more in depth comic / illustration, like [google’s guide to kubernetes][2] or [the children’s illustrated guide to kubernetes][3].
+
+To do this, you need a much stronger concept than “uh, I want to explain our project” – you want to have a clear target audience in mind! For example, if I were drawing a set of Docker comics, I’d probably focus on folks who want to use Docker in production, so I’d want to discuss:
+
+ * publishing your containers to a public/private registry
+ * some best practices for tagging your containers
+ * how to make sure your hosts don’t run out of disk space from downloading too many containers
+ * how to use layers to save on disk space / download less stuff
+ * whether it’s reasonable to run the same containers in production & in dev
+
+
+
+That’s totally different from the set of comics I’d write for folks who just want to use Docker to develop locally!
+
+### option 3: a printed zine
+
+The main thing that differentiates this from “many comics” is that zines are printed! Because of that, for this to make sense you need to have a place to give out the printed copies! Maybe you’re going to present your project at a major conference? Maybe you give workshops about your project and want to give out the zine to folks in the workshop as notes? Maybe you want to mail it to people?
+
+### how to hire someone to help you
+
+There are basically 3 ways to hire someone:
+
+ 1. Hire someone who both understands (or can quickly learn) the technology you want to document and can illustrate well. These folks are tricky to find and probably expensive (I certainly wouldn’t do a project like this for less than $10,000 even if I did do commissions), just because programmers can usually charge a pretty high consulting rate. I’d guess that the main failure mode here is that it might be impossible/very hard to find someone, and it might be expensive.
+ 2. Collaborate with an illustrator to draw it for you. The main failure mode here is that if you don’t give the illustrator clear explanations of your tech to work with, you.. won’t end up with a clear and useful explanation. From what I’ve seen, **most folks underinvest in writing clear explanations for their illustrators** – I’ve seen a few really adorable tech comics that I don’t find useful or clear at all. I’d love to see more people do a better job of this. What’s the point of having an adorable illustration if it doesn’t teach anyone anything? :)
+ 3. Draw it yourself :). This is what I do, obviously. stick figures are okay!
+
+
+
+Most people seem to use method #3 – I’m not actually aware of any tech folks who have done commissioned comics (though I’m sure it’s happened!). I think method #2 is a great option and I’d love to see more folks do it. Paying illustrators is really fun!
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2018/12/09/how-do-you-document-a-tech-project-with-comics/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://twitter.com/b0rk/status/986444234365521920
+[2]: https://cloud.google.com/kubernetes-engine/kubernetes-comic/
+[3]: https://thenewstack.io/kubernetes-gets-childrens-book/
diff --git a/sources/tech/20181215 New talk- High Reliability Infrastructure Migrations.md b/sources/tech/20181215 New talk- High Reliability Infrastructure Migrations.md
new file mode 100644
index 0000000000..93755329c7
--- /dev/null
+++ b/sources/tech/20181215 New talk- High Reliability Infrastructure Migrations.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (New talk: High Reliability Infrastructure Migrations)
+[#]: via: (https://jvns.ca/blog/2018/12/15/new-talk--high-reliability-infrastructure-migrations/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+New talk: High Reliability Infrastructure Migrations
+======
+
+On Tuesday I gave a talk at KubeCon called [High Reliability Infrastructure Migrations][1]. The abstract was:
+
+> For companies with high availability requirements (99.99% uptime or higher), running new software in production comes with a lot of risks. But it’s possible to make significant infrastructure changes while maintaining the availability your customers expect! I’ll give you a toolbox for derisking migrations and making infrastructure changes with confidence, with examples from our Kubernetes & Envoy experience at Stripe.
+
+### video
+
+#### slides
+
+Here are the slides:
+
+since everyone always asks, I drew them in the Notability app on an iPad. I do this because it’s faster than trying to use regular slides software and I can make better slides.
+
+### a few notes
+
+Here are a few links & notes about things I mentioned in the talk
+
+#### skycfg: write functions, not YAML
+
+I talked about how my team is working on non-YAML interfaces for configuring Kubernetes. The demo is at [skycfg.fun][2], and it’s [on GitHub here][3]. It’s based on [Starlark][4], a configuration language that’s a subset of Python.
+
+My coworker [John][5] has promised that he’ll write a blog post about it at some point, and I’m hoping that’s coming soon :)
+
+#### no haunted forests
+
+I mentioned a deploy system rewrite we did. John has a great blog post about when rewrites are a good idea and how he approached that rewrite called [no haunted forests][6].
+
+#### ignore most kubernetes ecosystem software
+
+One small point that I made in the talk was that on my team we ignore almost all software in the Kubernetes ecosystem so that we can focus on a few core pieces (Kubernetes & Envoy, plus some small things like kiam). I wanted to mention this because I think often in Kubernetes land it can seem like everyone is using Cool New Things (helm! istio! knative! eep!). I’m sure those projects are great but I find it much simpler to stay focused on the basics and I wanted people to know that it’s okay to do that if that’s what works for your company.
+
+I think the reality is that actually a lot of folks are still trying to work out how to use this new software in a reliable and secure way.
+
+#### other talks
+
+I haven’t watched other Kubecon talks yet, but here are 2 links:
+
+I heard good things about [this keynote from melanie cebula about kubernetes at airbnb][7], and I’m excited to see [this talk about kubernetes security][8]. The [slides from that security talk look useful][9].
+
+Also I’m very excited to see Kelsey Hightower’s keynote as always, but that recording isn’t up yet. If you have other Kubecon talks to recommend I’d love to know what they are.
+
+#### my first work talk I’m happy with
+
+I usually give talks about debugging tools, or side projects, or how I approach my job at a high level – not on the actual work that I do at my job. What I talked about in this talk is basically what I’ve been learning how to do at work for the last ~2 years. Figuring out how to make big infrastructure changes safely took me a long time (and I’m not done!), and so I hope this talk helps other folks do the same thing.
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2018/12/15/new-talk--high-reliability-infrastructure-migrations/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://www.youtube.com/watch?v=obB2IvCv-K0
+[2]: http://skycfg.fun
+[3]: https://github.com/stripe/skycfg
+[4]: https://github.com/bazelbuild/starlark
+[5]: https://john-millikin.com/
+[6]: https://john-millikin.com/sre-school/no-haunted-forests
+[7]: https://www.youtube.com/watch?v=ytu3aUCwlSg&index=127&t=0s&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU
+[8]: https://www.youtube.com/watch?v=a03te8xEjUg&index=65&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU&t=0s
+[9]: https://schd.ws/hosted_files/kccna18/1c/KubeCon%20NA%20-%20This%20year%2C%20it%27s%20about%20security%20-%2020181211.pdf
diff --git a/sources/tech/20181229 Some nonparametric statistics math.md b/sources/tech/20181229 Some nonparametric statistics math.md
new file mode 100644
index 0000000000..452c295781
--- /dev/null
+++ b/sources/tech/20181229 Some nonparametric statistics math.md
@@ -0,0 +1,178 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Some nonparametric statistics math)
+[#]: via: (https://jvns.ca/blog/2018/12/29/some-initial-nonparametric-statistics-notes/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+Some nonparametric statistics math
+======
+
+I’m trying to understand nonparametric statistics a little more formally. This post may not be that intelligible because I’m still pretty confused about nonparametric statistics, there is a lot of math, and I make no attempt to explain any of the math notation. I’m working towards being able to explain this stuff in a much more accessible way but first I would like to understand some of the math!
+
+There’s some MathJax in this post so the math may or may not render in an RSS reader.
+
+Some questions I’m interested in:
+
+ * what is nonparametric statistics exactly?
+ * what guarantees can we make? are there formulas we can use?
+ * why do methods like the bootstrap method work?
+
+
+
+since these notes are from reading a math book, and math books are extremely dense, this is basically going to be “I read 7 pages of this math book and here are some points I’m confused about”
+
+### what’s nonparametric statistics?
+
+Today I’m looking at “all of nonparametric statistics” by Larry Wasserman. He defines nonparametric inference as:
+
+> a set of modern statistical methods that aim to keep the number of underlying assumptions as weak as possible
+
+Basically my interpretation of this is that – instead of assuming that your data comes from a specific family of distributions (like the normal distribution) and then trying to estimate the parameters of that distribution, you don’t make many assumptions about the distribution (“this is just some data!!“). Not having to make assumptions is nice!
+
+There aren’t **no** assumptions though – he says
+
+> we assume that the distribution $F$ lies in some set $\mathfrak{F}$ called a **statistical model**. For example, when estimating a density $f$, we might assume that $$ f \in \mathfrak{F} = \left\\{ g : \int(g^{\prime\prime}(x))^2dx \leq c^2 \right\\}$$ which is the set of densities that are not “too wiggly”.
+
+I have not too much intuition for the condition $\int(g^{\prime\prime}(x))^2dx \leq c^2$. I calculated that integral for [the normal distribution on wolfram alpha][1] and got 4, which is a good start. (4 is not infinity!)
+
+some questions I still have about this definition:
+
+ * what’s an example of a probability density function that _doesn’t_ satisfy that $\int(g^{\prime\prime}(x))^2dx \leq c^2$ condition? (probably something with an infinite number of tiny wiggles, and I don’t think any distribution i’m interested in in practice would have an infinite number of tiny wiggles?)
+ * why does the density function being “too wiggly” cause problems for nonparametric inference? very unclear as yet.
+
+
+
+### we still have to assume independence
+
+One assumption we **won’t** get away from is that the samples in the data we’re dealing with are independent. Often data in the real world actually isn’t really independent, but I think what people do a lot of the time is make a good effort at something approaching independence, and then close their eyes and pretend it is?
+
+### estimating the density function
+
+Okay! Here’s a useful section! Let’s say that I have 100,000 data points from a distribution. I can draw a histogram like this of those data points:
+
+![][2]
+
+If I have 100,000 data points, it’s pretty likely that that histogram is pretty close to the actual distribution. But this is math, so we should be able to make that statement precise, right?
+
+For example suppose that 5% of the points in my sample are more than 100. Is the probability that a point is greater than 100 **actually** 0.05? The book gives a nice formula for this:
+
+$$ \mathbb{P}(|\widehat{P}_n(A) - P(A)| > \epsilon ) \leq 2e^{-2n\epsilon^2} $$
+
+(by [“Hoeffding’s inequality”][3] which I’ve never heard of before). Fun aside about that inequality: here’s a nice jupyter notebook by henry wallace using it to [identify the most common Boggle words][4].
+
+here, in our example:
+
+ * n is 100,000 (the number of data points we have)
+ * $A$ is the set of points more than 100
+ * $\widehat{P}_n(A)$ is the empirical probability that a point is more than 100 (0.05)
+ * $P(A)$ is the actual probability
+ * $\epsilon$ is how far off we’re willing to let our estimate be (here, 0.01)
+
+
+
+So, what’s the probability that the **real** probability is _not_ between 0.04 and 0.06? $\epsilon = 0.01$, so it’s at most $2e^{-2 \times 100,000 \times (0.01)^2} \approx 4 \times 10^{-9}$ (according to wolfram alpha)
+
+here is a table of how sure we can be:
+
+ * 100,000 data points: 4e-9 (TOTALLY CERTAIN that 4% - 6% of points are more than 100)
+ * 10,000 data points: 0.27 (27% probability that we’re wrong! that’s… not bad?)
+ * 1,000 data points: 1.6 (we know the probability we’re wrong is less than.. 160%? that’s not good!)
+ * 100 data points: lol
+
+
+
+so basically, in this case, using this formula: 100,000 data points is AMAZING, 10,000 data points is pretty good, and 1,000 is much less useful. If we have 1000 data points and we see that 5% of them are more than 100, we DEFINITELY CANNOT CONCLUDE that 4% to 6% of points are more than 100. But (using the same formula) we can use $\epsilon = 0.04$ and conclude that with 92% probability 1% to 9% of points are more than 100. So we can still learn some stuff from 1000 data points!
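+
+If you want to play with these numbers, here’s a tiny program (mine, not from the book) that just plugs values into the $2e^{-2n\epsilon^2}$ bound, which is where the table above comes from:
+
+```
+#include <cmath>
+#include <cstdio>
+
+// Hoeffding bound: P(|empirical probability - true probability| > eps)
+// is at most 2 * exp(-2 * n * eps^2)
+double hoeffding_bound(double n, double eps) {
+    return 2.0 * std::exp(-2.0 * n * eps * eps);
+}
+
+int main() {
+    for (double n : {100000.0, 10000.0, 1000.0, 100.0}) {
+        std::printf("n = %6.0f, eps = 0.01: bound = %g\n", n, hoeffding_bound(n, 0.01));
+    }
+    // with 1000 points, widening eps to 0.04 gives a bound that's actually useful
+    std::printf("n = 1000, eps = 0.04: bound = %g\n", hoeffding_bound(1000, 0.04));
+    return 0;
+}
+```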
+
+This intuitively feels pretty reasonable to me – like it makes sense to me that if you have NO IDEA what your distribution is, then with 100,000 points you’d be able to make quite strong inferences, and that with 1000 you can do a lot less!
+
+### more data points are exponentially better?
+
+One thing that I think is really cool about this estimating the density function formula is that how sure you can be of your inferences scales **exponentially** with the size of your dataset (this is the $e^{-2n\epsilon^2}$). It also scales exponentially with the square of the error tolerance $\epsilon$ (so being within 0.01 is VERY DIFFERENT from being within 0.04). So 100,000 data points isn’t 10x better than 10,000 data points – by this bound it’s more like 65,000,000x better (the bound drops from 0.27 to about $4 \times 10^{-9}$).
+
+Is that true in other places? If so that seems like a super useful intuition! I still feel pretty uncertain about this, but having some basic intuition about “how much more useful is 10,000 data points than 1,000 data points?” feels like a really good thing.
+
+### some math about the bootstrap
+
+The next chapter is about the bootstrap! Basically the way the bootstrap works is:
+
+ 1. you want to estimate some statistic (like the median) of your distribution
+ 2. the bootstrap lets you get an estimate and also the variance of that estimate
+ 3. you do this by repeatedly sampling with replacement from your data and then calculating the statistic you want (like the median) on your samples
+
+
+
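+Here’s a quick sketch of those 3 steps for the median (my own toy code, not from the book – the made-up exponential “data” is only there so the example runs end to end):
+
+```
+#include <algorithm>
+#include <cstdio>
+#include <random>
+#include <vector>
+
+double median(std::vector<double> xs) {
+    std::sort(xs.begin(), xs.end());
+    size_t n = xs.size();
+    return n % 2 ? xs[n / 2] : (xs[n / 2 - 1] + xs[n / 2]) / 2.0;
+}
+
+int main() {
+    std::mt19937 rng{42};
+    // some fake "observed" data, standing in for a real dataset
+    std::exponential_distribution<double> data_dist{1.0};
+    std::vector<double> data(1000);
+    for (auto& x : data) x = data_dist(rng);
+
+    // resample with replacement, recomputing the median each time
+    std::uniform_int_distribution<size_t> pick{0, data.size() - 1};
+    std::vector<double> medians;
+    for (int b = 0; b < 10000; b++) {
+        std::vector<double> resample(data.size());
+        for (auto& x : resample) x = data[pick(rng)];
+        medians.push_back(median(resample));
+    }
+
+    // the spread of the bootstrap medians estimates the variance of our estimate
+    double mean = 0;
+    for (double m : medians) mean += m;
+    mean /= medians.size();
+    double var = 0;
+    for (double m : medians) var += (m - mean) * (m - mean);
+    var /= medians.size() - 1;
+    std::printf("bootstrap median estimate: %f, variance: %f\n", mean, var);
+    return 0;
+}
+```
+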
+Beyond that sketch, I’m not going to go too much into how to implement the bootstrap method, because it’s explained in a lot of places on the internet. Let’s talk about the math!
+
+I think in order to say anything meaningful about bootstrap estimates I need to learn a new term: a **consistent estimator**.
+
+### What’s a consistent estimator?
+
+Wikipedia says:
+
+> In statistics, a **consistent estimator** or **asymptotically consistent estimator** is an estimator — a rule for computing estimates of a parameter $\theta_0$ — having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to $\theta_0$.
+
+This includes some terms where I forget what they mean (what’s “converges in probability” again?). But this seems like a very good thing! If I’m estimating some parameter (like the median), I would DEFINITELY LIKE IT TO BE TRUE that if I do it with an infinite amount of data then my estimate works. An estimator that is not consistent does not sound very useful!
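+
+(For reference, the standard definition: a sequence of estimates $\widehat{\theta}_n$ “converges in probability” to $\theta_0$ if, for every $\epsilon > 0$,
+
+$$ \lim_{n \rightarrow \infty} \mathbb{P}(|\widehat{\theta}_n - \theta_0| > \epsilon) = 0 $$
+
+that is, the chance that the estimate is more than $\epsilon$ away from the true value goes to 0 as you get more data.)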
+
+### why/when are bootstrap estimators consistent?
+
+spoiler: I have no idea. The book says the following:
+
+> Consistency of the bootstrap can now be expressed as follows.
+>
+> **3.19 Theorem**. Suppose that $\mathbb{E}(X_1^2) < \infty$. Let $T_n = g(\overline{X}_n)$ where $g$ is continuously differentiable at $\mu = \mathbb{E}(X_1)$ and that $g^\prime(\mu) \neq 0$. Then,
+>
+> $$ \sup_u \left| \mathbb{P}_{\widehat{F}_n} \left( \sqrt{n} (T(\widehat{F}_n^*) - T(\widehat{F}_n)) \leq u \right) - \mathbb{P}_{F} \left( \sqrt{n} (T(\widehat{F}_n) - T(F)) \leq u \right) \right| \rightarrow^\text{a.s.} 0 $$
+>
+> **3.21 Theorem**. Suppose that $T(F)$ is Hadamard differentiable with respect to $d(F,G)= \sup_x|F(x)-G(x)|$ and that $0 < \int L^2_F(x) dF(x) < \infty$. Then,
+>
+> $$ \sup_u \left| \mathbb{P}_{\widehat{F}_n} \left( \sqrt{n} (T(\widehat{F}_n^*) - T(\widehat{F}_n)) \leq u \right) - \mathbb{P}_{F} \left( \sqrt{n} (T(\widehat{F}_n) - T(F)) \leq u \right) \right| \rightarrow^\text{P} 0 $$
+
+things I understand about these theorems:
+
+ * the two formulas they’re concluding are the same, except I think one is about convergence “almost surely” and one about “convergence in probability”. I don’t remember what either of those mean.
+ * I think for our purposes of doing Regular Boring Things we can replace “Hadamard differentiable” with “differentiable”
+ * I think they don’t actually show the consistency of the bootstrap, they’re actually about consistency of the bootstrap confidence interval estimate (which is a different thing)
+
+
+
+I don’t really understand how they’re related to consistency, and in particular the $\sup_u$ thing is weird, like if you’re looking at $\mathbb{P}(something < u)$, wouldn’t you want to minimize $u$ and not maximize it? Maybe it’s a typo and it should be $\inf_u$?
+
+it concludes:
+
+> there is a tendency to treat the bootstrap as a panacea for all problems. But the bootstrap requires regularity conditions to yield valid answers. It should not be applied blindly.
+
+### this book does not seem to explain why the bootstrap is consistent
+
+In the appendix (3.7) it gives a sketch of a proof for showing that estimating the **median** using the bootstrap is consistent. I don’t think this book actually gives a proof anywhere that bootstrap estimates in general are consistent, which was pretty surprising to me. It gives a bunch of references to papers. Though I guess bootstrap confidence intervals are the most important thing?
+
+### that’s all for now
+
+This is all extremely stream of consciousness and I only spent 2 hours trying to work through this, but some things I think I learned in the last couple hours are:
+
+ 1. maybe having more data is exponentially better? (is this true??)
+ 2. “consistency” of an estimator is a thing, not all estimators are consistent
+ 3. understanding when/why nonparametric bootstrap estimators are consistent in general might be very hard (the proof that the bootstrap median estimator is consistent already seems very complicated!)
+ 4. bootstrap confidence intervals are not the same thing as bootstrap estimators. Maybe I’ll learn the difference next!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2018/12/29/some-initial-nonparametric-statistics-notes/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://www.wolframalpha.com/input/?i=integrate+(d%2Fdx(d%2Fdx(exp(-x%5E2))))%5E2++dx+from+x%3D-infinity+to+infinity
+[2]: https://jvns.ca/images/nonpar-histogram.png
+[3]: https://en.wikipedia.org/wiki/Hoeffding%27s_inequality
+[4]: https://nbviewer.jupyter.org/github/henrywallace/games/blob/master/boggle/boggle.ipynb#Estimating-Word-Probabilities
diff --git a/sources/tech/20190129 A few early marketing thoughts.md b/sources/tech/20190129 A few early marketing thoughts.md
new file mode 100644
index 0000000000..79cc6b1b1d
--- /dev/null
+++ b/sources/tech/20190129 A few early marketing thoughts.md
@@ -0,0 +1,164 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (A few early marketing thoughts)
+[#]: via: (https://jvns.ca/blog/2019/01/29/marketing-thoughts/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+A few early marketing thoughts
+======
+
+At some point last month I said I might write more about business, so here are some very early marketing thoughts for my zine business (!). The question I’m trying to make some progress on in this post is: “how to do marketing in a way that feels good?”
+
+### what’s the point of marketing?
+
+Okay! What’s marketing? What’s the point? I think the ideal way marketing works is:
+
+ 1. you somehow tell a person about a thing
+ 2. you explain somehow why the thing will be useful to them / why it is good
+ 3. they buy it and they like the thing because it’s what they expected
+
+
+
+(or, when you explain it they see that they don’t want it and don’t buy it which is good too!!)
+
+So basically as far as I can tell good marketing is just explaining what the thing is and why it is good in a clear way.
+
+### what internet marketing techniques do people use?
+
+I’ve been thinking a bit about internet marketing techniques I see people using on me recently. Here are a few examples of internet marketing techniques I’ve seen:
+
+ 1. word of mouth (“have you seen this cool new thing?!”)
+ 2. twitter / instagram marketing (build a twitter/instagram account)
+ 3. email marketing (“build a mailing list with a bajillion people on it and sell to them”)
+ 4. email marketing (“tell your existing users about features that they already have that they might want to use”)
+ 5. social proof marketing (“jane from georgia bought a sweater”), eg fomo.com
+ 6. cart notifications (“you left this sweater in your cart??! did you mean to buy it? maybe you should buy it!”)
+ 7. content marketing (which is fine but whenever people refer to my writing as ‘content’ I get grumpy :))
+
+
+
+### you need _some_ way to tell people about your stuff
+
+Something that is definitely true about marketing is that you need some way to tell new people about the thing you are doing. So for me when I’m thinking about running a business it’s less about “should i do marketing” and more like “well obviously i have to do marketing, how do i do it in a way that i feel good about?”
+
+### what’s up with email marketing?
+
+I feel like every single piece of internet marketing advice I read says “you need a mailing list”. This is advice that I haven’t really taken to heart – technically I have 2 mailing lists:
+
+ 1. the RSS feed for this blog, which sends out new blog posts to a mailing list for folks who don’t use RSS (which 3000 of you get)
+ 2. ’s list, for comics / new zine announcements (780 people subscribe to that! thank you!)
+
+
+
+but definitely neither of them is a Machine For Making Sales and I’ve put in almost no efforts in that direction yet.
+
+here are a few things I’ve noticed about marketing mailing lists:
+
+ * most marketing mailing lists are boring but some marketing mailing lists are actually interesting! For example I kind of like [amy hoy][1]’s emails.
+ * Someone told me recently that they have 200,000 people on their mailing list (?!!) which made the “a mailing list is a machine for making money” concept make a lot more sense to me. I wonder if people who make a lot of money from their mailing lists all have huge 10k+ person mailing lists like this?
+
+
+
+### what works for me: twitter
+
+Right now for my zines business I’d guess maybe 70% of my sales come from Twitter. The main thing I do is tweet pages from zines I’m working on (for example: yesterday’s [comic about ss][2]). The comics are usually good and fun so invariably they get tons of retweets, which means that I end up with lots of followers, which means that when I later put up the zine for sale lots of people will buy it.
+
+And of course people don’t _have_ to buy the zines, I post most of what ends up in my zines on twitter for free, so it feels like a nice way to do it. Everybody wins, I think.
+
+(side note: when I started getting tons of new followers from my comics I was actually super worried that it would make my experience of Twitter way worse. That hasn’t happened! the new followers all seem totally reasonable and I still get a lot of really interesting twitter replies which is wonderful ❤)
+
+I don’t try to hack/optimize this really: I just post comics when I make them and I try to make them good.
+
+### a small Twitter innovation: putting my website on the comics
+
+Here’s one small marketing change that I made that I think makes sense!
+
+In the past, I didn’t put anything about how to buy my comics on the comics I posted on Twitter, just my Twitter username. Like this:
+
+![][3]
+
+After a while, I realized people were asking me all the time “hey, can I buy a book/collection? where do these come from? how do I get more?“! I think a marketing secret is “people actually want to buy things that are good, it is useful to tell people where they can buy things that are good”.
+
+So just recently I’ve started adding my website and a note about my current project on the comics I post on Twitter. It doesn’t say much: just “❤ these comics? buy a collection! wizardzines.com” and “page 11 of my upcoming bite size networking zine”. Here’s what it looks like:
+
+![][4]
+
+I feel like this strikes a pretty good balance between “julia you need to tell people what you’re doing otherwise how are they supposed to buy things from you” and “omg too many sales pitches everywhere”? I’ve only started doing this recently so we’ll see how it goes.
+
+### should I work on a mailing list?
+
+It seems like the same thing that works on twitter would work by email if I wanted to put in the time (email people comics! when a zine comes out, email them about the zine and they can buy it if they want!).
+
+One thing I LOVE about Twitter though is that people always reply to the comics I post with their own tips and tricks that they love and I often learn something new. I feel like email would be nowhere near as fun :)
+
+But I still think this is a pretty good idea: keeping up with twitter can be time consuming and I bet a lot of people would like to get occasional email with programming drawings. (would you?)
+
+One thing I’m not sure about is – a lot of marketing mailing lists seem to use somewhat aggressive techniques to get new emails (a lot of popups on a website, or adding everyone who signs up to their service / buys a thing to a marketing list) and while I’m basically fine with that (unsubscribing is easy!), I’m not sure that it’s what I’d want to do, and maybe less aggressive techniques will work just as well? We’ll see.
+
+### should I track conversion rates?
+
+A piece of marketing advice I assume people give a lot is “be data driven, figure out what things convert the best, etc”. I hardly do this at all – gumroad used to tell me that most of my sales came from Twitter, which was good to know, but right now I have basically no idea how it works.
+
+Doing a bunch of work to track conversion rates feels bad to me: it seems like it would be really easy to go down a dumb rabbit hole of “oh, let’s try to increase conversion by 5%” instead of just focusing on making really good and cool things.
+
+My guess is that what will work best for me for a while is to have some data that tells me in broad strokes how the business works (like “about 70% of sales come from twitter”) and just leave it at that.
+
+### should I do advertising?
+
+I had a conversation with Kamal about this post that went:
+
+ * julia: “hmm, maybe I should talk about ads?”
+ * julia: “wait, are ads marketing?”
+ * kamal: “yes ads are marketing”
+
+
+
+So, ads! I don’t know anything about advertising except that you can advertise on Facebook or Twitter or Google. Some practical (as opposed to ethical) questions I have about advertising:
+
+ * how do you choose what keywords to advertise on?
+ * are there actually cheap keywords, like is ‘file descriptors’ cheap?
+ * how much do you need to pay per click? (for some weird linux keywords, google estimated 20 cents a click?)
+ * can you use ads effectively for something that costs $10?
+
+
+
+This seems nontrivial to learn about and I don’t think I’m going to try soon.
+
+### other marketing things
+
+a few other things I’ve thought about:
+
+ * I learned about “social proof marketing” sites like fomo.com yesterday which makes popups on your site like “someone bought COOL THING 3 hours ago”. This seems like it has some utility (people are actually buying things from me all the time, maybe that’s useful to share somehow?) but those popups feel a bit cheap to me and I don’t really think it’s something I’d want to do right now.
+ * similarly a lot of sites like to inject these popups like “HELLO PLEASE SIGN UP FOR OUR MAILING LIST”. similar thoughts. I’ve been putting an email signup link in the footer which seems like a good balance between discoverable and annoying. As an example of a popup which isn’t too intrusive, though: nate berkopec has [one on his site][5] which feels really reasonable! (scroll to the bottom to see it)
+
+
+
+Maybe marketing is all about “make your things discoverable without being annoying”? :)
+
+### that’s all!
+
+Hopefully some of this was interesting! Obviously the most important thing in all of this is to make cool things that are useful to people, but I think cool useful writing does not actually sell itself!
+
+If you have thoughts about what kinds of marketing have worked well for you / you’ve felt good about I would love to hear them!
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/01/29/marketing-thoughts/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://stackingthebricks.com/
+[2]: https://twitter.com/b0rk/status/1090058524137345025
+[3]: https://jvns.ca/images/kill.jpeg
+[4]: https://jvns.ca/images/ss.jpeg
+[5]: https://www.speedshop.co/2019/01/10/three-activerecord-mistakes.html
diff --git a/sources/tech/20190129 Create an online store with this Java-based framework.md b/sources/tech/20190129 Create an online store with this Java-based framework.md
deleted file mode 100644
index 6fb9bc5a6b..0000000000
--- a/sources/tech/20190129 Create an online store with this Java-based framework.md
+++ /dev/null
@@ -1,235 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (laingke)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Create an online store with this Java-based framework)
-[#]: via: (https://opensource.com/article/19/1/scipio-erp)
-[#]: author: (Paul Piper https://opensource.com/users/madppiper)
-
-Create an online store with this Java-based framework
-======
-Scipio ERP comes with a large range of applications and functionality.
-
-
-So you want to sell products or services online, but either can't find a fitting software or think customization would be too costly? [Scipio ERP][1] may just be what you are looking for.
-
-Scipio ERP is a Java-based open source e-commerce framework that comes with a large range of applications and functionality. The project was forked from [Apache OFBiz][2] in 2014 with a clear focus on better customization and a more modern appeal. The e-commerce component is quite extensive and works in a multi-store setup, internationally, and with a wide range of product configurations, and it's also compatible with modern HTML frameworks. The software also provides standard applications for many other business cases, such as accounting, warehouse management, or sales force automation. It's all highly standardized and therefore easy to customize, which is great if you are looking for more than a virtual cart.
-
-The system makes it very easy to keep up with modern web standards, too. All screens are constructed using the system's "[templating toolkit][3]," an easy-to-learn macro set that separates HTML from all applications. Because of it, every application is already standardized to the core. Sounds confusing? It really isn't—it all looks a lot like HTML, but you write a lot less of it.
-
-### Initial setup
-
-Before you get started, make sure you have Java 1.8 (or greater) SDK and a Git client installed. Got it? Great! Next, check out the master branch from GitHub:
-
-```
-git clone https://github.com/ilscipio/scipio-erp.git
-cd scipio-erp
-git checkout master
-```
-
-To set up the system, simply run **./install.sh** and select either option from the command line. Throughout development, it is best to stick to an **installation for development** (Option 1), which will also install a range of demo data. For professional installations, you can modify the initial config data ("seed data") so it will automatically set up the company and catalog data for you. By default, the system will run with an internal database, but it [can also be configured][4] with a wide range of relational databases such as PostgreSQL and MariaDB.
-
-![Setup wizard][6]
-
-Follow the setup wizard to complete your initial configuration,
-
-Start the system with **./start.sh** and head over to **** to complete the configuration. If you installed with demo data, you can log in with username **admin** and password **scipio**. During the setup wizard, you can set up a company profile, accounting, a warehouse, your product catalog, your online store, and additional user profiles. Keep the website entries on the product store configuration screen for now. The system allows you to run multiple webstores with different underlying code; unless you want to do that, it is easiest to stick to the defaults.
-
-Congratulations, you just installed Scipio ERP! Play around with the screens for a minute or two to get a feel for the functionality.
-
-### Shortcuts
-
-Before you jump into the customization, here are a few handy commands that will help you along the way:
-
- * Create a shop-override: **./ant create-component-shop-override**
- * Create a new component: **./ant create-component**
- * Create a new theme component: **./ant create-theme**
- * Create admin user: **./ant create-admin-user-login**
- * Various other utility functions: **./ant -p**
- * Utility to install & update add-ons: **./git-addons help**
-
-
-
-Also, make a mental note of the following locations:
-
- * Scripts to run Scipio as a service: **/tools/scripts/**
- * Log output directory: **/runtime/logs**
- * Admin application: ****
- * E-commerce application: ****
-
-
-
-Last, Scipio ERP structures all code in the following five major directories:
-
- * Framework: framework-related sources, the application server, generic screens, and configurations
- * Applications: core applications
- * Addons: third-party extensions
- * Themes: modifies the look and feel
- * Hot-deploy: your own components
-
-
-
-Aside from a few configurations, you will be working within the hot-deploy and themes directories.
-
-### Webstore customizations
-
-To really make the system your own, start thinking about [components][7]. Components are a modular approach to override, extend, and add to the system. Think of components as self-contained web modules that capture information on databases ([entity][8]), functions ([services][9]), screens ([views][10]), [events and actions][11], and web applications. Thanks to components, you can add your own code while remaining compatible with the original sources.
-
-Run **./ant create-component-shop-override** and follow the steps to create your webstore component. A new directory will be created inside of the hot-deploy directory, which extends and overrides the original e-commerce application.
-
-![component directory structure][13]
-
-A typical component directory structure.
-
-Your component will have the following directory structure:
-
- * config: configurations
- * data: seed data
- * entitydef: database table definitions
- * script: Groovy script location
- * servicedef: service definitions
- * src: Java classes
- * webapp: your web application
- * widget: screen definitions
-
-
-
-Additionally, the **ivy.xml** file allows you to add Maven libraries to the build process and the **ofbiz-component.xml** file defines the overall component and web application structure. Apart from the obvious, you will also find a **controller.xml** file inside the web apps' **WEB-INF** directory. This allows you to define request entries and connect them to events and screens. For screens alone, you can also use the built-in CMS functionality, but stick to the core mechanics first. Familiarize yourself with **/applications/shop/** before introducing changes.
-
-#### Adding custom screens
-
-Remember the [templating toolkit][3]? You will find it used on every screen. Think of it as a set of easy-to-learn macros that structure all content. Here's an example:
-
-```
-<@section title="Title">
- <@heading id="slider">Slider@heading>
- <@row>
- <@cell columns=6>
- <@slider id="" class="" controls=true indicator=true>
- <@slide link="#" image="https://placehold.it/800x300">Just some content…@slide>
- <@slide title="This is a title" link="#" image="https://placehold.it/800x300">@slide>
- @slider>
- @cell>
- <@cell columns=6>Second column@cell>
- @row>
-@section>
-```
-
-Not too difficult, right? Meanwhile, themes contain the HTML definitions and styles. This hands the power over to your front-end developers, who can define the output of each macro and otherwise stick to their own build tools for development.
-
-Let's give it a quick try. First, define a request on your own webstore. You will modify the code for this. A built-in CMS is also available at **** , which allows you to create new templates and screens in a much more efficient way. It is fully compatible with the templating toolkit and comes with example templates that can be adopted to your preferences. But since we are trying to understand the system here, let's go with the more complicated way first.
-
-Open the **[controller.xml][14]** file inside of your shop's webapp directory. The controller keeps track of request events and performs actions accordingly. The following will create a new request under **/shop/test** :
-
-```
-
-
-
-
-
-```
-
-You can define multiple responses and, if you want, you could use an event or a service call inside the request to determine which response you may want to use. I opted for a response of type "view." A view is a rendered response; other types are request-redirects, forwards, and alike. The system comes with various renderers and allows you to determine the output later; to do so, add the following:
-
-```
-
-
-```
-
-Replace **my-component** with your own component name. Then you can define your very first screen by adding the following inside the tags within the **widget/CommonScreens.xml** file:
-
-```
-
-
-
-
-
-
-
-
-
-
-
-
-
-```
-
-Screens are actually quite modular and consist of multiple elements ([widgets, actions, and decorators][15]). For the sake of simplicity, leave this as it is for now, and complete the new webpage by adding your very first templating toolkit file. For that, create a new **webapp/mycomponent/test/test.ftl** file and add the following:
-
-```
-<@alert type="info">Success!@alert>
-```
-
-![Custom screen][17]
-
-A custom screen.
-
-Open **** and marvel at your own accomplishments.
-
-#### Custom themes
-
-Modify the look and feel of the shop by creating your very own theme. All themes can be found as components inside of the themes folder. Run **./ant create-theme** to add your own.
-
-![theme component layout][19]
-
-A typical theme component layout.
-
-Here's a list of the most important directories and files:
-
- * Theme configuration: **data/*ThemeData.xml**
- * Theme-specific wrapping HTML: **includes/*.ftl**
- * Templating Toolkit HTML definition: **includes/themeTemplate.ftl**
- * CSS class definition: **includes/themeStyles.ftl**
- * CSS framework: **webapp/theme-title/***
-
-
-
-Take a quick look at the Metro theme in the toolkit; it uses the Foundation CSS framework and makes use of all the things above. Afterwards, set up your own theme inside your newly constructed **webapp/theme-title** directory and start developing. The Foundation-shop theme is a very simple shop-specific theme implementation that you can use as a basis for your own work.
-
-Voila! You have set up your own online store and are ready to customize!
-
-![Finished Scipio ERP shop][21]
-
-A finished shop based on Scipio ERP.
-
-### What's next?
-
-Scipio ERP is a powerful framework that simplifies the development of complex e-commerce applications. For a more complete understanding, check out the project [documentation][7], try the [online demo][22], or [join the community][23].
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/scipio-erp
-
-作者:[Paul Piper][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/madppiper
-[b]: https://github.com/lujun9972
-[1]: https://www.scipioerp.com
-[2]: https://ofbiz.apache.org/
-[3]: https://www.scipioerp.com/community/developer/freemarker-macros/
-[4]: https://www.scipioerp.com/community/developer/installation-configuration/configuration/#database-configuration
-[5]: /file/419711
-[6]: https://opensource.com/sites/default/files/uploads/setup_step5_sm.jpg (Setup wizard)
-[7]: https://www.scipioerp.com/community/developer/architecture/components/
-[8]: https://www.scipioerp.com/community/developer/entities/
-[9]: https://www.scipioerp.com/community/developer/services/
-[10]: https://www.scipioerp.com/community/developer/views-requests/
-[11]: https://www.scipioerp.com/community/developer/events-actions/
-[12]: /file/419716
-[13]: https://opensource.com/sites/default/files/uploads/component_structure.jpg (component directory structure)
-[14]: https://www.scipioerp.com/community/developer/views-requests/request-controller/
-[15]: https://www.scipioerp.com/community/developer/views-requests/screen-widgets-decorators/
-[16]: /file/419721
-[17]: https://opensource.com/sites/default/files/uploads/success_screen_sm.jpg (Custom screen)
-[18]: /file/419726
-[19]: https://opensource.com/sites/default/files/uploads/theme_structure.jpg (theme component layout)
-[20]: /file/419731
-[21]: https://opensource.com/sites/default/files/uploads/finished_shop_1_sm.jpg (Finished Scipio ERP shop)
-[22]: https://www.scipioerp.com/demo/
-[23]: https://forum.scipioerp.com/
diff --git a/sources/tech/20190217 Organizing this blog into categories.md b/sources/tech/20190217 Organizing this blog into categories.md
new file mode 100644
index 0000000000..e8a03f1bdd
--- /dev/null
+++ b/sources/tech/20190217 Organizing this blog into categories.md
@@ -0,0 +1,155 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Organizing this blog into categories)
+[#]: via: (https://jvns.ca/blog/2019/02/17/organizing-this-blog-into-categories/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+Organizing this blog into categories
+======
+
+Today I organized the front page of this blog ([jvns.ca][1]) into CATEGORIES! Now it is actually possible to make some sense of what is on here!! There are 28 categories (computer networking! learning! “how things work”! career stuff! many more!). I am so excited about this.
+
+How it works: Every post is in only 1 category. Obviously the categories aren’t “perfect” (there is a “how things work” category and a “kubernetes” category and a “networking” category, and so for a “how container networking works in kubernetes” post I need to just pick one) but I think it’s really nice and I’m hoping that it’ll make the blog easier for folks to navigate.
+
+If you’re interested in more of the story of how I’m thinking about this: I’ve been a little dissatisfied for a long time with how this blog is organized. Here’s where I started, in 2013, with a pretty classic blog layout (this is Octopress, which was a Jekyll Wordpress-lookalike theme that was cool back then and which served me very well for a long time):
+
+![][2]
+
+### problem with “show the 5 most recent posts”: you don’t know what the person’s writing is about!
+
+This is a super common way to organize a blog: on the homepage of your blog, you display maybe the 5 most recent posts, and then maybe have a “previous” link.
+
+The thing I find tricky about this (as a blog reader) is that
+
+ 1. it’s hard to hunt through their back catalog to find cool things they’ve written
+ 2. it’s SO HARD to get an overall sense for the body of a person’s work by reading 1 blog post at a time
+
+
+
+### next attempt: show every post in chronological order
+
+My next attempt at blog organization was to show every post on the homepage in chronological order. This was inspired by [Dan Luu’s blog][3], which takes a super minimal approach. I switched to this (according to the internet archive) sometime in early 2016. Here’s what it looked like (with some CSS issues :))
+
+![][4]
+
+The reason I like this “show every post in chronological order” approach more is that when I discover a new blog, I like to obsessively binge read through the whole thing to see all the cool stuff the person has written. [Rachel by the bay][5] also organizes her writing this way, and when I found her blog I was like OMG WOW THIS IS AMAZING I MUST READ ALL OF THIS NOW and being able to look through all the entries quickly and start reading ones that caught my eye was SO FUN.
+
+[Will Larson’s blog][6] also has a “list of all posts” page which I find useful because it’s a good blog, and sometimes I want to refer back to something he wrote months ago and can’t remember what it was called, and being able to scan through all the titles makes it easier to do that.
+
+I was pretty happy with this and that’s how it’s been for the last 3 years.
+
+### problem: a chronological list of 390 posts still kind of sucks
+
+As of today, I have 390 posts here (360,000 words! that’s, like, four 300-page books! eep!). This is objectively a lot of writing and I would like people new to the blog to be able to navigate it and actually have some idea what’s going on.
+
+And this blog is not actually just a totally disorganized group of words! I have a lot of specific interests: I’ve written probably 30 posts about computer networking, 15ish on ML/statistics, 20ish career posts, etc. And when I write a new Kubernetes post or whatever, it’s usually at least sort of related to some ongoing train of thought I have about Kubernetes. And it’s totally obvious to _me_ what other posts that post is related to, but obviously to a new person it’s not at all clear what the trains of thought are in this blog.
+
+### solution for now: assign every post 1 (just 1) category
+
+My new plan is to assign every post a single category. I got this idea from [Itamar Turner-Trauring’s site][7].
+
+Here are the initial categories:
+
+ * Cool computer tools / features / ideas
+ * Computer networking
+ * How a computer thing works
+ * Kubernetes / containers
+ * Zines / comics
+ * On writing comics / zines
+ * Conferences
+ * Organizing conferences
+ * Businesses / marketing
+ * Statistics / machine learning / data analysis
+ * Year in review
+ * Infrastructure / operations engineering
+ * Career / work
+ * Working with others / communication
+ * Remote work
+ * Talks transcripts / podcasts
+ * On blogging / speaking
+ * On learning
+ * Rust
+ * Linux debugging / tracing tools
+ * Debugging stories
+ * Fan posts about awesome work by other people
+ * Inclusion
+ * rbspy
+ * Performance
+ * Open source
+ * Linux systems stuff
+ * Recurse Center (my daily posts during my RC batch)
+
+
+
+I guess you can tell this is a systems-y blog because there are 8 different systems-y categories (kubernetes, infrastructure, linux debugging tools, rust, debugging stories, performance, linux systems stuff, and how a computer thing works) :).
+
+But it was nice to see that I also have this huge career / work category! And that category is pretty meaningful to me: it includes a lot of things that I struggled with and that were hard for me to learn. And I get to put all my machine learning posts together, which is an area I worked in for 3 years and am still super interested in and every so often learn a new thing about!
+
+### How I assign the categories: a big text file
+
+I came up with a scheme for assigning the categories that I thought was really fun! I knew immediately that coming up with categories in advance would be impossible (how was I supposed to know that “fan posts about awesome work by other people” was a substantial category?).
+
+So instead, I took kind of a Marie Kondo approach: I wrote a script to just dump all the titles of every blog post into a text file, and then I just used vim to organize them roughly into similar sections. Seeing everything in one place (a la marie kondo) really helped me see the patterns and figure out what some categories were.
+
+[Here’s the final result of that text file][8]. I think having a lightweight way of organizing the posts all in one file made a huge difference and that it would have been impossible for me to have seen the patterns otherwise.
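+
+For the curious, here’s a hypothetical Python sketch of the dumping step (it assumes Hugo-style posts with a `title:` line in the front matter, which may not match the real setup):
+
+```
+#!/usr/bin/env python3
+# Hypothetical sketch: collect every post title into one text file so the
+# titles can be shuffled around by hand in vim. Assumes Hugo-style markdown
+# posts with a `title: ...` front matter line.
+from pathlib import Path
+
+with open("titles.txt", "w") as out:
+    for post in sorted(Path("content/post").glob("*.markdown")):
+        for line in post.read_text().splitlines():
+            if line.startswith("title:"):
+                out.write(line[len("title:"):].strip().strip('"') + "\n")
+                break
+```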
+
+### How I implemented it: a hugo taxonomy
+
+Once I had that big text file, I wrote [a janky python script][9] to assign the categories in that text file to the actual posts.
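+
+Roughly, such a script might parse the organized file back into title → category pairs; a hypothetical Python sketch (the real script and file format surely differ):
+
+```
+#!/usr/bin/env python3
+# Hypothetical sketch: a line like "== Career" starts a section and the lines
+# under it are post titles; build a title -> category mapping that could then
+# be written into each post's front matter. The real script surely differs.
+categories = {}
+current = None
+for line in open("scripts/titles.txt"):
+    line = line.strip()
+    if line.startswith("=="):              # a section header names a category
+        current = line.lstrip("= ").strip()
+    elif line and current:
+        categories[line] = current
+print(categories)
+```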
+
+I use Hugo for this blog, so I also needed to tell Hugo about the categories. This blog already technically has tags (though they’re woefully underused; I didn’t want to delete them), and it turns out that in Hugo you can define arbitrary taxonomies. So I defined a new taxonomy for these sections (right now it’s called, unimaginatively, `juliasections`).
+
+The details of how I did this are pretty boring but [here’s the hugo template that makes it display on the homepage][10]. I used this [Hugo documentation page on taxonomies a lot][11].
+
+### organizing my site is cool! reverse chronology maybe isn’t the best possible thing!
+
+Amy Hoy has this interesting article called [how the blog broke the web][12] about how the rise of blog software made people adopt a site format that maybe didn’t serve what they were writing the best.
+
+I don’t personally feel that mad about the blog / reverse chronology organization: I like blogging! I think it was nice for the first 6 years or whatever to be able to just write things that I think are cool without thinking about where they “fit”. It’s worked really well for me.
+
+But today, 360,000 words in, I think it makes sense to add a little more structure :).
+
+### what it looks like now!
+
+Here’s what the new front page organization looks like! These are the blogging / learning / rust sections! I think it’s cool how you can see the evolution of some of my thinking (I sure have written a lot of posts about asking questions :)).
+
+![][13]
+
+### I ❤ the personal website
+
+This is also part of why I love having a personal website that I can organize any way I want: for both of my main sites ([jvns.ca][1] and now [wizardzines.com][14]) I have total control over how they appear! And I can evolve them over time at my own pace if I decide something a little different will work better for me. I’ve gone from a jekyll blog to octopress to a custom-designed octopress blog to Hugo and made a ton of little changes over time. It’s so nice.
+
+I think it’s fun that these 3 screenshots are each 3 years apart – what I wanted in 2013 is not the same as 2016 is not the same as 2019! This is okay!
+
+And I really love seeing how other people choose to organize their personal sites! Please keep making cool different personal sites.
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/02/17/organizing-this-blog-into-categories/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://jvns.ca
+[2]: https://jvns.ca/images/website-2013.png
+[3]: https://danluu.com
+[4]: https://jvns.ca/images/website-2016.png
+[5]: https://rachelbythebay.com/w/
+[6]: https://lethain.com/all-posts/
+[7]: https://codewithoutrules.com/worklife/
+[8]: https://github.com/jvns/jvns.ca/blob/2f7b2723994628a5348069dd87b3df68c2f0285c/scripts/titles.txt
+[9]: https://github.com/jvns/jvns.ca/blob/2f7b2723994628a5348069dd87b3df68c2f0285c/scripts/parse_titles.py
+[10]: https://github.com/jvns/jvns.ca/blob/25d239a3ba36c1bae1d055d2b7d50a4f1d0489ef/themes/orange/layouts/index.html#L39-L59
+[11]: https://gohugo.io/templates/taxonomy-templates/
+[12]: https://stackingthebricks.com/how-blogs-broke-the-web/
+[13]: https://jvns.ca/images/website-2019.png
+[14]: https://wizardzines.com
diff --git a/sources/tech/20190315 New zine- Bite Size Networking.md b/sources/tech/20190315 New zine- Bite Size Networking.md
new file mode 100644
index 0000000000..cd47c5619a
--- /dev/null
+++ b/sources/tech/20190315 New zine- Bite Size Networking.md
@@ -0,0 +1,100 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (New zine: Bite Size Networking!)
+[#]: via: (https://jvns.ca/blog/2019/03/15/new-zine--bite-size-networking-/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+New zine: Bite Size Networking!
+======
+
+Last week I released a new zine: Bite Size Networking! It’s the third zine in the “bite size” series:
+
+ 1. [Bite Size Linux][1]
+ 2. [Bite Size Command Line][2]
+ 3. [Bite Size Networking][3]
+
+
+
+You can get it for $10 at [wizardzines.com][3]! (or $150/$250/$600 for the corporate rate).
+
+Here’s the cover and table of contents!
+
+[![][4]][5]
+
+A few people have asked for a 3-pack with all 3 “bite size” zines which is coming soon!
+
+### why this zine?
+
+In the last few years I’ve been doing a lot of networking at work, and along the way I’ve gone from “uh, what even is tcpdump” to “yes I can just type in `sudo tcpdump -c 200 -n port 443 -i lo`” without even thinking twice about it. As usual this zine is the resource I wish I had 4 years ago. There are so many things it took me a long time to figure out how to do like:
+
+ * inspect SSL certificates
+ * make DNS queries
+ * figure out what server is using that port
+ * find out whether the firewall is causing you problems or not
+ * capture / search network traffic on a machine
+
+
+
+and as often happens with computers none of them are really that hard!! But the man pages for the tools you need to do these things are Very Long and as usual don’t differentiate between “everybody always uses this option and you 10000% need to know it” and “you will never use this option it does not matter”. So I spent a long time staring sadly at the tcpdump man page.
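+
+None of these need to stay mysterious. As a taste, here’s a hypothetical Python sketch of two of the tasks above, making a DNS query and inspecting an SSL certificate (the zine itself covers the command-line tools; `example.com` is a stand-in host):
+
+```
+#!/usr/bin/env python3
+# Hypothetical sketch: make a DNS query and inspect an SSL certificate.
+# (The zine covers command-line tools; example.com is a stand-in host.)
+import socket
+import ssl
+
+host = "example.com"
+
+# DNS query: resolve the host to its addresses
+for *_, sockaddr in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP):
+    print("resolved:", sockaddr[0])
+
+# SSL certificate: connect and print the expiry date
+ctx = ssl.create_default_context()
+with socket.create_connection((host, 443)) as raw:
+    with ctx.wrap_socket(raw, server_hostname=host) as tls:
+        print("certificate expires:", tls.getpeercert()["notAfter"])
+```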
+
+the pitch for this zine is:
+
+> It’s Thursday afternoon and your users are reporting SSL errors in production and you don’t know why. Or an HTTP header isn’t being set correctly and it’s breaking the site. Or you just got a notification that your site’s SSL certificate is expiring in 2 days. Or you need to update DNS to point to a new server. Or a server suddenly isn’t able to connect to a service. And networking maybe isn’t your full time job, but you still need to get the problem fixed.
+
+Kamal (my partner) proofreads all my zines and we hit an exciting milestone with this one: this is the first zine where he was like “wow, I really did not know a lot of the stuff in this zine”. This is of course because I’ve spent a lot more time than him debugging weird networking things, and when you practice something you get better at it :)
+
+### a couple of example pages
+
+Here are a couple of example pages, to give you an idea of what’s in the zine:
+
+![][6] ![][7]
+
+### next thing to get better at: getting feedback!
+
+One thing I’ve realized is that while I get a ton of help from people while writing these zines (I read probably a thousand tweets from people suggesting ideas for things to include in the zine), I don’t get as much feedback from people about the final product as I’d like!
+
+I often hear positive things (“I love them!”, “thank you so much!”, “this helped me in my job!”) but I’d really love to hear more about which bits specifically helped the most and what didn’t make as much sense or what you would have liked to see more of. So I’ll probably be asking a few questions about that to people who buy this zine!
+
+### selling zines is going well
+
+When I made the switch about a year ago from “every zine I release is free” to “the old zines are free but all the new ones are not free” it felt scary! It’s been startlingly totally fine and a very positive thing. Sales have been really good, people take the work more seriously, I can spend more time on them, and I think the quality has gone up.
+
+And I’ve been doing occasional [giveaways][8] for people who can’t afford a $10 zine, which feels like a nice way to handle “some people legitimately can’t afford $10 and I would like to get them information too”.
+
+### what’s next?
+
+I’m not sure yet! A few options:
+
+ * kubernetes
+ * more about linux concepts (bite size linux part II)
+ * how to do statistics using simulations
+ * something else!
+
+
+
+We’ll see what I feel most inspired by :)
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/03/15/new-zine--bite-size-networking-/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://wizardzines.com/zines/bite-size-linux/
+[2]: https://wizardzines.com/zines/bite-size-command-line/
+[3]: https://wizardzines.com/zines/bite-size-networking/
+[4]: https://jvns.ca/images/bite-size-networking-cover.png
+[5]: https://gum.co/bite-size-networking
+[6]: https://jvns.ca/images/ngrep.png
+[7]: https://jvns.ca/images/ping.png
+[8]: https://twitter.com/b0rk/status/1104368319816220674
diff --git a/sources/tech/20190326 Why are monoidal categories interesting.md b/sources/tech/20190326 Why are monoidal categories interesting.md
new file mode 100644
index 0000000000..37aaef753a
--- /dev/null
+++ b/sources/tech/20190326 Why are monoidal categories interesting.md
@@ -0,0 +1,134 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Why are monoidal categories interesting?)
+[#]: via: (https://jvns.ca/blog/2019/03/26/what-are-monoidal-categories/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+Why are monoidal categories interesting?
+======
+
+Hello! Someone on Twitter asked a question about tensor categories recently and I remembered “oh, I know something about that!! These are a cool thing!“. Monoidal categories are also called “tensor categories” and I think that term feels a little more concrete: one of the biggest examples of a tensor category is the category of vector spaces with the tensor product as the way you combine vectors / functions. “Monoidal” means “has an associative binary operation with an identity”, and with vector spaces the tensor product is the “associative binary operation” it’s referring to. So I’m going to mostly use “tensor categories” in this post instead.
+
+So here’s a quick stab at explaining why tensor categories are cool. I’m going to make a lot of oversimplifications which I figure is better than trying to explain category theory from the ground up. I’m not a category theorist (though I spent 2 years in grad school doing a bunch of category theory) and I will almost certainly say wrong things about category theory.
+
+In this post I’m going to try to talk about [Seven Sketches in Compositionality: An Invitation to Applied Category Theory][1] using mostly plain English.
+
+### tensor categories aren’t monads
+
+If you have been around functional programming for a bit, you might see the words “monoid” and “categories” and wonder “oh, is julia writing about monads, like in Haskell”? I am not!!
+
+There is a sentence “monads are a monoid in the category of endofunctors” which includes both the word “monoid” and “category” but that is not what I am talking about at all. We’re not going to talk about types or Haskell or monads or anything.
+
+#### tensor categories are about proving (or defining) things with pictures
+
+Here’s what I think is a really nice example from the [“Seven Sketches in Compositionality”][1] PDF (on page 47):
+
+![][2]
+
+The idea here is that you have 3 inequalities
+
+ 1. `t <= v + w`
+ 2. `w + u <= x + z`
+ 3. `v + x <= y`,
+
+
+
+and you want to prove that `t + u <= y + z`.
+
+You can do this algebraically pretty easily.
+
+But in this diagram they’ve done something really different! They’ve sort of drawn the inequalities as boxes with lines coming out of them for each variable, and then you can see that you end up with a `t` and a `u` on the left and a `y` and a `z` on the right, and so maybe that means that `t + u <= y + z`.
+
+The first time I saw something like this in a math class I felt like – what? what is happening? you can’t just draw PICTURES to prove things?!! And of course you can’t _just_ draw pictures to prove things.
+
+What’s actually happening in pictures like this is that when you put 2 things next to each other in the picture (like `t` and `u`), that actually represents the “tensor product” of `t` and `u`. In this case the “tensor product” is defined to be addition. And the tensor product (addition in this case) has some special properties –
+
+ 1. it’s associative
+ 2. if `a <= b` and `c <= d` then `a + c <= b + d`
+
+
+
+so saying that this picture proves that `t + u <= y + z` **actually** means that you can read a proof off the diagram in a straightforward way:
+
+```
+ t + u
+<= (v + w) + u
+= v + (w + u)
+<= v + (x + z)
+= (v + x) + z
+<= y + z
+```
+
+So all the things that “look like they would work” according to the picture actually do work in practice because our tensor product thing is associative and because addition works nicely with the `<=` relationship. The book explains all this in a lot more detail.
+
+### draw vector spaces with “string diagrams”
+
+Proving this simple inequality is kind of boring though! We want to do something more interesting, so let’s talk about vector spaces! Here’s a diagram that includes some vector spaces (U1, U2, V1, V2) and some functions (f,g) between them.
+
+![][3]
+
+Again, here what it means to have U1 stacked on top of U2 is that we’re taking a tensor product of U1 and U2. And the tensor product is associative, so there’s no ambiguity if we stack 3 or 4 vector spaces together!
+
+This is all explained in a lot more detail in this nice blog post called [introduction to string diagrams][4] (which I took that picture from).
+
+### define the trace of a matrix with a picture
+
+So far this is pretty boring! But in a [follow up blog post][5], they talk about something more outrageous: you can (using vector space duality) take the lines in one of these diagrams and move them **backwards** and make loops. So that lets us define the trace of a function `f : V -> V` like this:
+
+![][6]
+
+This is a really outrageous thing! We’ve said, hey, we have a function and we want to get a number in return right? Okay, let’s just… draw a circle around it so that there are no lines left coming out of it, and then that will be a number! That seems a lot more natural and prettier than the usual way of defining the trace of a matrix (“sum up the numbers on the diagonal”)!
+
+When I first saw this I thought it was super cool that just drawing a circle is actually a legitimate way of defining a mathematical concept!
+
+### how are tensor category diagrams different from regular category theory diagrams?
+
+If you see “tensor categories let you prove things with pictures” you might think “well, the whole point of category theory is to prove things with pictures, so what?“. I think there are a few things that are different in tensor category diagrams:
+
+ 1. with string diagrams, the lines are objects and the boxes are functions which is the opposite of how usual category theory diagrams are
+  2. putting things next to each other in the diagram has a specific meaning (“take the tensor product of those 2 things”) whereas in usual category theory diagrams it doesn’t. being able to combine things in this way is powerful!
+ 3. half circles have a specific meaning (“take the dual”)
+  4. you can use specific elements of a space (e.g., a vector space) in a diagram, which usually you wouldn’t do in a category theory diagram (the objects would be the whole vector space, not one element of that vector space)
+
+
+
+### what does this have to do with programming?
+
+Even though this is usually a programming blog I don’t know whether this particular thing really has anything to do with programming; I just remembered I thought it was cool. I wrote my [master’s thesis][7] (which I will link to even though it’s not very readable) on topological quantum computing, which involves a bunch of monoidal categories.
+
+Some of the diagrams in this post are sort of why I got interested in that area in the first place – I thought it was really cool that you could formally define / prove things with pictures. And useful things, like the trace of a matrix!
+
+### edit: some ways this might be related to programming
+
+Someone pointed me to a couple of twitter threads (coincidentally from this week!!) that relate tensor categories & diagrammatic methods to programming:
+
+ 1. [this thread from @KenScambler][8] (“My best kept secret* is that string & wiring diagrams–plucked straight out of applied category theory–are _fabulous_ for software and system design.)
+ 2. [this other thread by him of 31 interesting related things to this topic][9]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/03/26/what-are-monoidal-categories/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://arxiv.org/pdf/1803.05316.pdf
+[2]: https://jvns.ca/images/monoidal-preorder.png
+[3]: https://jvns.ca/images/tensor-vector.png
+[4]: https://qchu.wordpress.com/2012/11/05/introduction-to-string-diagrams/
+[5]: https://qchu.wordpress.com/2012/11/06/string-diagrams-duality-and-trace/
+[6]: https://jvns.ca/images/trace.png
+[7]: https://github.com/jvns/masters-thesis/raw/master/thesis.pdf
+[8]: https://twitter.com/KenScambler/status/1108738366529400832
+[9]: https://twitter.com/KenScambler/status/1109474342822244353
diff --git a/sources/tech/20190403 Use Git as the backend for chat.md b/sources/tech/20190403 Use Git as the backend for chat.md
deleted file mode 100644
index e564bbc6e7..0000000000
--- a/sources/tech/20190403 Use Git as the backend for chat.md
+++ /dev/null
@@ -1,141 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Use Git as the backend for chat)
-[#]: via: (https://opensource.com/article/19/4/git-based-chat)
-[#]: author: (Seth Kenlon https://opensource.com/users/seth)
-
-Use Git as the backend for chat
-======
-GIC is a prototype chat application that showcases a novel way to use Git.
-![Team communication, chat][1]
-
-[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at GIC, a Git-based chat application.
-
-### Meet GIC
-
-While the authors of Git probably expected frontends to be created for Git, they undoubtedly never expected Git would become the backend for, say, a chat client. Yet, that's exactly what developer Ephi Gabay did with his experimental proof-of-concept [GIC][3]: a chat client written in [Node.js][4] using Git as its backend database.
-
-GIC is by no means intended for production use. It's purely a programming exercise, but it's one that demonstrates the flexibility of open source technology. What's astonishing is that the client consists of just 300 lines of code, excluding the Node libraries and Git itself. And that's one of the best things about the chat client and about open source: the ability to build upon existing work. Seeing is believing, so you should give GIC a look for yourself.
-
-### Get set up
-
-GIC uses Git as its engine, so you need an empty Git repository to serve as its chatroom and logger. The repository can be hosted anywhere, as long as you and anyone who needs access to the chat service has access to it. For instance, you can set up a Git repository on a free Git hosting service like GitLab and grant chat users contributor access to the Git repository. (They must be able to make commits to the repository, because each chat message is a literal commit.)
-
-If you're hosting it yourself, create a centrally located bare repository. Each user in the chat must have an account on the server where the bare repository is located. You can create accounts specific to Git with Git hosting software like [Gitolite][5] or [Gitea][6], or you can give them individual user accounts on your server, possibly using **git-shell** to restrict their access to Git.
-
-Performance is best on a self-hosted instance. Whether you host your own or you use a hosting service, the Git repository you create must have an active branch, or GIC won't be able to make commits as users chat because there is no Git HEAD. The easiest way to ensure that a branch is initialized and active is to commit a README or license file upon creation. If you don't do that, you can create and commit one after the fact:
-
-```
-$ echo "chat logs" > README
-$ git add README
-$ git commit -m 'just creating a HEAD ref'
-$ git push -u origin HEAD
-```
-
-### Install GIC
-
-Since GIC is based on Git and written in Node.js, you must first install Git, Node.js, and the Node package manager, npm (which should be bundled with Node). The command to install these differs depending on your Linux or BSD distribution, but here's an example command on Fedora:
-
-```
-$ sudo dnf install git nodejs
-```
-
-If you're not running Linux or BSD, follow the installation instructions on [git-scm.com][7] and [nodejs.org][8].
-
-There's no install process, as such, for GIC. Each user (Alice and Bob, in this example) must clone the repository to their hard drive:
-
-```
-$ git clone https://github.com/ephigabay/GIC GIC
-```
-
-Change directory into the GIC directory and install the Node.js dependencies with **npm** :
-
-```
-$ cd GIC
-$ npm install
-```
-
-Wait for the Node modules to download and install.
-
-### Configure GIC
-
-The only configuration GIC requires is the location of your Git chat repository. Edit the **config.js** file:
-
-```
-module.exports = {
-gitRepo: 'seth@example.com:/home/gitchat/chatdemo.git',
-messageCheckInterval: 500,
-branchesCheckInterval: 5000
-};
-```
-
-
-Test your connection to the Git repository before trying GIC, just to make sure your configuration is sane:
-
-```
-$ git clone --quiet seth@example.com:/home/gitchat/chatdemo.git > /dev/null
-```
-
-Assuming you receive no errors, you're ready to start chatting.
-
-### Chat with Git
-
-From within the GIC directory, start the chat client:
-
-```
-$ npm start
-```
-
-When the client first launches, it must clone the chat repository. Since it's nearly an empty repository, it won't take long. Type your message and press Enter to send a message.
-
-![GIC][10]
-
-A Git-based chat client. What will they think of next?
-
-As the greeting message says, a branch in Git serves as a chatroom or channel in GIC. There's no way to create a new branch from within the GIC UI, but if you create one in another terminal session or in a web UI, it shows up immediately in GIC. It wouldn't take much to patch some IRC-style commands into GIC.
-
-After chatting for a while, take a look at your Git repository. Since the chat happens in Git, the repository itself is also a chat log:
-
-```
-$ git log --pretty=format:"%p %cn %s"
-4387984 Seth Kenlon Hey Chani, did you submit a talk for All Things Open this year?
-36369bb Chani No I didn't get a chance. Did you?
-[...]
-```
-
-### Exit GIC
-
-Not since Vim has there been an application as difficult to stop as GIC. You see, there is no way to stop GIC. It will continue to run until it is killed. When you're ready to stop GIC, open another terminal tab or window and issue this command:
-
-```
-$ kill `pgrep npm`
-```
-
-GIC is a novelty. It's a great example of how an open source ecosystem encourages and enables creativity and exploration and challenges us to look at applications from different angles. Try GIC out. Maybe it will give you ideas. At the very least, it's a great excuse to spend an afternoon with Git.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/4/git-based-chat
-
-作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
-[2]: https://git-scm.com/
-[3]: https://github.com/ephigabay/GIC
-[4]: https://nodejs.org/en/
-[5]: http://gitolite.com
-[6]: http://gitea.io
-[7]: http://git-scm.com
-[8]: http://nodejs.org
-[9]: mailto:seth@example.com
-[10]: https://opensource.com/sites/default/files/uploads/gic.jpg (GIC)
diff --git a/sources/tech/20190409 Working with variables on Linux.md b/sources/tech/20190409 Working with variables on Linux.md
deleted file mode 100644
index da4fec5ea9..0000000000
--- a/sources/tech/20190409 Working with variables on Linux.md
+++ /dev/null
@@ -1,267 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Working with variables on Linux)
-[#]: via: (https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all)
-[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
-
-Working with variables on Linux
-======
-Variables often look like $var, but they also look like $1, $*, $? and $$. Let's take a look at what all these $ values can tell you.
-![Mike Lawrence \(CC BY 2.0\)][1]
-
-A lot of important values are stored on Linux systems in what we call “variables,” but there are actually several types of variables and some interesting commands that can help you work with them. In a previous post, we looked at [environment variables][2] and where they are defined. In this post, we're going to look at variables that are used on the command line and within scripts.
-
-### User variables
-
-While it's quite easy to set up a variable on the command line, there are a few interesting tricks. To set up a variable, all you need to do is something like this:
-
-```
-$ myvar=11
-$ myvar2="eleven"
-```
-
-To display the values, you simply do this:
-
-```
-$ echo $myvar
-11
-$ echo $myvar2
-eleven
-```
-
-You can also work with your variables. For example, to increment a numeric variable, you could use any of these commands:
-
-```
-$ myvar=$((myvar+1))
-$ echo $myvar
-12
-$ ((myvar=myvar+1))
-$ echo $myvar
-13
-$ ((myvar+=1))
-$ echo $myvar
-14
-$ ((myvar++))
-$ echo $myvar
-15
-$ let "myvar=myvar+1"
-$ echo $myvar
-16
-$ let "myvar+=1"
-$ echo $myvar
-17
-$ let "myvar++"
-$ echo $myvar
-18
-```
-
-With some of these, you can add more than 1 to a variable's value. For example:
-
-```
-$ myvar0=0
-$ ((myvar0++))
-$ echo $myvar0
-1
-$ ((myvar0+=10))
-$ echo $myvar0
-11
-```
-
-With all these choices, you'll probably find at least one that is easy to remember and convenient to use.
-
-You can also _unset_ a variable — basically undefining it.
-
-```
-$ unset myvar
-$ echo $myvar
-```
-
-Another interesting option is that you can set up a variable and make it **read-only**. In other words, once set to read-only, its value cannot be changed (at least not without some very tricky command line wizardry). That means you can't unset it either.
-
-```
-$ readonly myvar3=1
-$ echo $myvar3
-1
-$ ((myvar3++))
--bash: myvar3: readonly variable
-$ unset myvar3
--bash: unset: myvar3: cannot unset: readonly variable
-```
-
-You can use any of those setting and incrementing options for assigning and manipulating variables within scripts, but there are also some very useful _internal variables_ for working within scripts. Note that you can't reassign their values or increment them.
-
-### Internal variables
-
-There are quite a few variables that can be used within scripts to evaluate arguments and display information about the script itself.
-
- * $1, $2, $3 etc. represent the first, second, third, etc. arguments to the script.
- * $# represents the number of arguments.
- * $* represents the string of arguments.
- * $0 represents the name of the script itself.
- * $? represents the return code of the previously run command (0=success).
- * $$ shows the process ID for the script.
- * $PPID shows the process ID for your shell (the parent process for the script).
-
-
-
-Some of these variables also work on the command line but show related information:
-
- * $0 shows the name of the shell you're using (e.g., -bash).
- * $$ shows the process ID for your shell.
- * $PPID shows the process ID for your shell's parent process (for me, this is sshd).
-
-
-
-If we throw all of these variables into a script just to see the results, we might do this:
-
-```
-#!/bin/bash
-
-echo $0
-echo $1
-echo $2
-echo $#
-echo $*
-echo $?
-echo $$
-echo $PPID
-```
-
-When we call this script, we'll see something like this:
-
-```
-$ tryme one two three
-/home/shs/bin/tryme <== script name
-one <== first argument
-two <== second argument
-3 <== number of arguments
-one two three <== all arguments
-0 <== return code from previous echo command
-10410 <== script's process ID
-10109 <== parent process's ID
-```
-
-If we check the process ID of the shell once the script is done running, we can see that it matches the PPID displayed within the script:
-
-```
-$ echo $$
-10109 <== shell's process ID
-```
-
-Of course, we're more likely to use these variables in considerably more useful ways than simply displaying their values. Let's check out some ways we might do this.
-
-Checking to see if arguments have been provided:
-
-```
-if [ $# == 0 ]; then
- echo "$0 filename"
- exit 1
-fi
-```
-
-Checking to see if a particular process is running:
-
-```
-ps -ef | grep apache2 > /dev/null
-if [ $? != 0 ]; then
- echo Apache is not running
- exit
-fi
-```
-
-Verifying that a file exists before trying to access it:
-
-```
-if [ $# -lt 2 ]; then
- echo "Usage: $0 lines filename"
- exit 1
-fi
-
-if [ ! -f $2 ]; then
- echo "Error: File $2 not found"
- exit 2
-else
- head -$1 $2
-fi
-```
-
-And in this little script, we check if the correct number of arguments have been provided, if the first argument is numeric, and if the second argument is an existing file.
-
-```
-#!/bin/bash
-
-if [ $# -lt 2 ]; then
- echo "Usage: $0 lines filename"
- exit 1
-fi
-
-if [[ $1 != [0-9]* ]]; then
- echo "Error: $1 is not numeric"
- exit 2
-fi
-
-if [ ! -f $2 ]; then
- echo "Error: File $2 not found"
- exit 3
-else
- echo top of file
- head -$1 $2
-fi
-```
-
-### Renaming variables
-
-When writing a complicated script, it's often useful to assign names to the script's arguments rather than continuing to refer to them as $1, $2, and so on. By the 35th line, someone reading your script might have forgotten what $2 represents. It will be a lot easier on that person if you assign an important parameter's value to $filename or $numlines.
-
-```
-#!/bin/bash
-
-if [ $# -lt 2 ]; then
- echo "Usage: $0 lines filename"
- exit 1
-else
- numlines=$1
- filename=$2
-fi
-
-if [[ $numlines != [0-9]* ]]; then
- echo "Error: $numlines is not numeric"
- exit 2
-fi
-
-if [ ! -f $filename ]; then
- echo "Error: File $filename not found"
- exit 3
-else
- echo top of file
- head -$numlines $filename
-fi
-```
-
-Of course, this example script does nothing more than run the head command to show the top X lines in a file, but it is meant to show how internal parameters can be used within scripts to help ensure the script runs well or fails with at least some clarity.
-
-**[ Watch Sandra Henry-Stocker's Two-Minute Linux Tips[to learn how to master a host of Linux commands][3] ]**
-
-Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all
-
-作者:[Sandra Henry-Stocker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/04/variable-key-keyboard-100793080-large.jpg
-[2]: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html
-[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
-[4]: https://www.facebook.com/NetworkWorld/
-[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190528 A Quick Look at Elvish Shell.md b/sources/tech/20190528 A Quick Look at Elvish Shell.md
index 82927332a7..778965d442 100644
--- a/sources/tech/20190528 A Quick Look at Elvish Shell.md
+++ b/sources/tech/20190528 A Quick Look at Elvish Shell.md
@@ -1,4 +1,3 @@
-Translating by name1e5s
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
diff --git a/sources/tech/20190623 What does debugging a program look like.md b/sources/tech/20190623 What does debugging a program look like.md
new file mode 100644
index 0000000000..7cc7c1432e
--- /dev/null
+++ b/sources/tech/20190623 What does debugging a program look like.md
@@ -0,0 +1,184 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What does debugging a program look like?)
+[#]: via: (https://jvns.ca/blog/2019/06/23/a-few-debugging-resources/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+What does debugging a program look like?
+======
+
+I was debugging with a friend who’s a relatively new programmer yesterday, and showed them a few debugging tips. Then I was thinking about how to teach debugging this morning, and [mentioned on Twitter][1] that I’d never seen a really good guide to debugging your code. (there are a ton of really great replies by Anne Ogborn to that tweet if you are interested in debugging tips)
+
+As usual, I got a lot of helpful answers and now I have a few ideas about how to teach debugging skills / describe the process of debugging.
+
+### a couple of debugging resources
+
+I was hoping for more links to debugging books/guides, but here are the 2 recommendations I got:
+
+**“Debugging” by David Agans**: Several people recommended the book [Debugging][2], which looks like a nice and fairly short book that explains a debugging strategy. I haven’t read it yet (though I ordered it to see if I should be recommending it) and the rules laid out in the book (“understand the system”, “make it fail”, “quit thinking and look”, “divide and conquer”, “change one thing at a time”, “keep an audit trail”, “check the plug”, “get a fresh view”, and “if you didn’t fix it, it ain’t fixed”) seem extremely reasonable :). He also has a charming [debugging poster][3].
+
+**“How to debug” by John Regehr**: [How to Debug][4] is a very good blog post based on Regehr’s experience teaching a university embedded systems course. Lots of good advice. He also has a [blog post reviewing 4 books about debugging][5], including Agans’ book.
+
+### reproduce your bug (but how do you do that?)
+
+The rest of this post is going to be an attempt to aggregate different ideas about debugging people tweeted at me.
+
+Somewhat obviously, everybody agrees that being able to consistently reproduce a bug is important if you want to figure out what’s going on. I have an intuitive sense for how to do this but I’m not sure how to **explain** how to go from “I saw this bug twice” to “I can consistently reproduce this bug on demand on my laptop”, and I wonder whether the techniques you use to do this depend on the domain (backend web dev, frontend, mobile, games, C++ programs, embedded etc).
+
+### reproduce your bug _quickly_
+
+Everybody also agrees that it’s extremely useful to be able to reproduce the bug quickly (if it takes you 3 minutes to check if every change helped, iterating is VERY SLOW).
+
+A few suggested approaches:
+
+ * for something that requires clicking on a bunch of things in a browser to reproduce, recording what you clicked on with [Selenium][6] and getting Selenium to replay the UI interactions (suggested [here][7])
+  * writing a unit test that reproduces the bug (if you can; a sketch follows this list). bonus: you can add this to your test suite later if it makes sense
+ * writing a script / finding a command line incantation that does it (like `curl MY_APP.local/whatever`)
+
+
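+For the unit-test approach, a hypothetical sketch (pytest style; `myapp.parse_header` is a made-up stand-in for whatever function misbehaves):
+
+```
+# Hypothetical sketch: pin the bug down as a failing test you can re-run instantly.
+# `myapp.parse_header` is a made-up stand-in for the misbehaving function.
+from myapp import parse_header
+
+def test_header_with_trailing_whitespace():
+    # the observed bug: trailing whitespace breaks header parsing
+    assert parse_header("Content-Type: text/html \r\n") == ("Content-Type", "text/html")
+```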
+
+### accept that it’s probably your code’s fault
+
+Sometimes I see a problem and I’m like “oh, library X has a bug”, “oh, it’s DNS”, “oh, SOME OTHER THING THAT IS NOT MY CODE is broken”. And sometimes it’s not my code! But in general between an established library and my code that I wrote last month, usually it’s my code that I wrote last month that’s the problem :).
+
+### start doing experiments
+
+@act_gardner gave a [nice, short explanation of what you have to do after you reproduce your bug][8]
+
+> I try to encourage people to first fully understand the bug - What’s happening? What do you expect to happen? When does it happen? When does it not happen? Then apply their mental model of the system to guess at what could be breaking and come up with experiments.
+>
+> Experiments could be changing or removing code, making API calls from a REPL, trying new inputs, poking at memory values with a debugger or print statements.
+
+I think the loop here may be:
+
+  * make a guess about one aspect of what might be happening (“this variable is set to X where it should be Y”, “the server is being sent the wrong request”, “this code is never running at all”)
+ * do experiment to check that guess
+ * repeat until you understand what’s going on
+
+
+
+### change one thing at a time
+
+Everybody definitely agrees that it is important to change one thing at a time when doing an experiment to verify an assumption.
+
+### check your assumptions
+
+A lot of debugging is realizing that something you were **sure** was true (“wait this request is going to the new server, right, not the old one???”) is actually… not true. I made an attempt to [list some common incorrect assumptions][9]. Here are some examples:
+
+ * this variable is set to X (“that filename is definitely right”)
+ * that variable’s value can’t possibly have changed between X and Y
+ * this code was doing the right thing before
+ * this function does X
+ * I’m editing the right file
+ * there can’t be any typos in that line I wrote it is just 1 line of code
+ * the documentation is correct
+ * the code I’m looking at is being executed at some point
+ * these two pieces of code execute sequentially and not in parallel
+ * the code does the same thing when compiled in debug / release mode (or with -O2 and without, or…)
+ * the compiler is not buggy (though this is last on purpose, the compiler is only very rarely to blame :))
+
+
+
+### weird methods to get information
+
+There are a lot of normal ways to do experiments to check your assumptions / guesses about what the code is doing (print out variable values, use a debugger, etc). Sometimes, though, you’re in a more difficult environment where you can’t print things out and don’t have access to a debugger (or it’s inconvenient to do those things, maybe because there are too many events). Some ways to cope:
+
+ * [adding sounds on mobile][10]: “In the mobile world, I live on this advice. Xcode can play a sound when you hit a breakpoint (and continue without stopping). I place them certain places in the code, and listen for buzzing Tink to indicate tight loops or Morse/Pop pairs to catch unbalanced events” (also [this tweet][11])
+ * there’s a very cool talk about [using XCode to play sound for iOS debugging here][12]
+ * [adding LEDs][13]: “When I did embedded dev ages ago on grids of transputers, we wired up an LED to an unused pin on each chip. It was surprisingly effective for diagnosing parallelism issues.”
+ * [string][14]: “My networks prof told me about a hack he saw at Xerox in the early days of Ethernet: a tap in the coax with an amp and motor and piece of string. The busier the network was, the faster the string twirled.”
+ * [peep][15] is a “network auralizer” that translates what’s happening on your system into sounds. I spent 10 minutes trying to get it to compile and failed so far but it looks very fun and I want to try it!!
+
+
+
+The point here is that information is the most important thing and you need to do whatever’s necessary to get information.
+
+### write your code so it’s easier to debug
+
+Another point a few people brought up is that you can improve your program to make it easier to debug. tef has a nice post about this: [Write code that’s easy to delete, and easy to debug too][16]. I thought this was very true:
+
+> Debuggable code isn’t necessarily clean, and code that’s littered with checks or error handling rarely makes for pleasant reading.
+
+I think one interpretation of “easy to debug” is “every single time there’s an error, the program reports to you exactly what happened in an easy to understand way”. Whenever my program has a problem and says something like “error: failure to connect to SOME_IP port 443: connection timeout” I’m like THANK YOU THAT IS THE KIND OF THING I WANTED TO KNOW and I can check if I need to fix a firewall thing or if I got the wrong IP for some reason or what.
+
+One simple example of this recently: I was making a request to a server I wrote and the response I got was “upstream connect error or disconnect/reset before headers”. This is an nginx error which basically in this case boiled down to “your program crashed before it sent anything in response to the request”. Figuring out the cause of the crash was pretty easy, but having better error handling (returning an error instead of crashing) would have saved me a little time because instead of having to go check the cause of the crash, I could have just read the error message and figured out what was going on right away.
+
+### error messages are better than silently failing
+
+To get closer to the dream of “every single time there’s an error, the program reports to you exactly what happened in an easy to understand way” you also need to be disciplined about immediately returning an error message instead of silently writing incorrect data / passing a nonsense value to another function which will do WHO KNOWS WHAT with it and cause you a gigantic headache. This means adding code like this:
+
+```
+if UNEXPECTED_THING:
+    raise RuntimeError("oh no THING happened")
+```
+
+This isn’t easy to get right (it’s not always obvious where you should be raising errors!) but it really helps a lot.
+
+### failure: print out a stack of errors, not just one error.
+
+Related to returning helpful errors that make it easy to debug: Rust has a really incredible error handling library [called failure][17] which basically lets you return a chain of errors instead of just one error, so you can print out a stack of errors like:
+
+```
+"error starting server process" caused by
+"error initializing logging backend" caused by
+"connection failure: timeout connecting to 1.2.3.4 port 1234".
+```
+
+This is SO MUCH MORE useful than just `connection failure: timeout connecting to 1.2.3.4 port 1234` by itself because it tells you the significance of 1.2.3.4 (it’s something to do with the logging backend!). And I think it’s also more useful than `connection failure: timeout connecting to 1.2.3.4 port 1234` with a stack trace, because it summarizes at a high level the parts that went wrong instead of making you read all the lines in the stack trace (some of which might not be relevant!).
+
+tools like this in other languages:
+
+ * Go: the idiom to do this seems to be to just concatenate your stack of errors together as a big string so you get “error: thing one: error: thing two : error: thing three” which works okay but is definitely a lot less structured than `failure`’s system
+ * Java: I hear you can give exceptions causes but haven’t used that myself
+  * Python 3: you can use `raise ... from` which sets the `__cause__` attribute on the exception and then your exceptions will be separated by `The above exception was the direct cause of the following exception:` (a sketch follows this list)
+
+
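+A hypothetical Python sketch of that last one, mimicking the `failure`-style chain above (the function names are made up):
+
+```
+# Hypothetical sketch of Python 3 exception chaining with `raise ... from`.
+# The function names are made up; the error text mirrors the chain above.
+def connect():
+    raise TimeoutError("connection failure: timeout connecting to 1.2.3.4 port 1234")
+
+def init_logging():
+    try:
+        connect()
+    except TimeoutError as e:
+        raise RuntimeError("error initializing logging backend") from e
+
+try:
+    init_logging()
+except RuntimeError as e:
+    raise RuntimeError("error starting server process") from e
+
+# The traceback prints each exception, separated by:
+# "The above exception was the direct cause of the following exception:"
+```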
+
+If you know how to do this in other languages I’d be interested to hear!
+
+### understand what the error messages mean
+
+One sub debugging skill that I take for granted a lot of the time is understanding what error messages mean! I came across this nice graphic explaining [common Python errors and what they mean][18], which breaks down things like `NameError`, `IOError`, etc.
+
+I think a reason interpreting error messages is hard is that understanding a new error message might mean learning a new concept – `NameError` can mean “Your code uses a variable outside the scope where it’s defined”, but to really understand that you need to understand what variable scope is! I ran into this a lot when learning Rust – the Rust compiler would be like “you have a weird lifetime error” and I’d be like “ugh ok Rust I get it I will go actually learn about how lifetimes work now!”.
+
+And a lot of the time error messages are caused by a problem very different from the text of the message, like how “upstream connect error or disconnect/reset before headers” might mean “julia, your server crashed!”. The skill of understanding what error messages mean is often not transferable when you switch to a new area (if I started writing a lot of React or something tomorrow, I would probably have no idea what any of the error messages meant!). So this definitely isn’t just an issue for beginner programmers.
+
+### that’s all for now!
+
+I feel like the big thing I’m missing when talking about debugging skills is a stronger understanding of where people get stuck with debugging – it’s easy to say “well, you need to reproduce the problem, then make a more minimal reproduction, then start coming up with guesses and verifying them, and improve your mental model of the system, and then figure it out, then fix the problem and hopefully write a test to make it not come back”, but – where are people actually getting stuck in practice? What are the hardest parts? I have some sense of what the hardest parts usually are for me but I’m still not sure what the hardest parts usually are for someone newer to debugging their code.
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/06/23/a-few-debugging-resources/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://twitter.com/b0rk/status/1142825259546140673
+[2]: http://debuggingrules.com/
+[3]: http://debuggingrules.com/?page_id=40
+[4]: https://blog.regehr.org/archives/199
+[5]: https://blog.regehr.org/archives/849
+[6]: https://www.seleniumhq.org/
+[7]: https://twitter.com/AnnieTheObscure/status/1142843984642899968
+[8]: https://twitter.com/act_gardner/status/1142838587437830144
+[9]: https://twitter.com/b0rk/status/1142812831420768257
+[10]: https://twitter.com/cocoaphony/status/1142847665690030080
+[11]: https://twitter.com/AnnieTheObscure/status/1142842421954244608
+[12]: https://qnoid.com/2013/06/08/Sound-Debugging.html
+[13]: https://twitter.com/wombatnation/status/1142887843963867136
+[14]: https://twitter.com/irvingreid/status/1142887472441040896
+[15]: http://peep.sourceforge.net/intro.html
+[16]: https://programmingisterrible.com/post/173883533613/code-to-debug
+[17]: https://github.com/rust-lang-nursery/failure
+[18]: https://pythonforbiologists.com/29-common-beginner-errors-on-one-page/
diff --git a/sources/tech/20190628 Get your work recognized- write a brag document.md b/sources/tech/20190628 Get your work recognized- write a brag document.md
new file mode 100644
index 0000000000..e13dd2a07b
--- /dev/null
+++ b/sources/tech/20190628 Get your work recognized- write a brag document.md
@@ -0,0 +1,256 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Get your work recognized: write a brag document)
+[#]: via: (https://jvns.ca/blog/brag-documents/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+Get your work recognized: write a brag document
+======
+
+There’s this idea that, if you do great work at your job, people will (or should!) automatically recognize that work and reward you for it with promotions / increased pay. In practice, it’s often more complicated than that – some kinds of important work are more visible/memorable than others. It’s frustrating to have done something really important and later realize that you didn’t get rewarded for it just because the people making the decision didn’t understand or remember what you did. So I want to talk about a tactic that I and lots of people I work with have used!
+
+This blog post isn’t just about being promoted or getting raises though. The ideas here have actually been more useful to me to help me reflect on themes in my work, what’s important to me, what I’m learning, and what I’d like to be doing differently. But they’ve definitely helped with promotions!
+
+You can also [skip to the brag document template at the end][1].
+
+### you don’t remember everything you did
+
+One thing I’m always struck by when it comes to performance review time is a feeling of “wait, what _did_ I do in the last 6 months?”. This is a kind of demoralizing feeling and it’s usually not based in reality, more in “I forgot what cool stuff I actually did”.
+
+I invariably end up having to spend a bunch of time looking through my pull requests, tickets, launch emails, design documents, and more. I always end up finding small (and sometimes not-so-small) things that I completely forgot I did, like:
+
+ * mentored an intern 5 months ago
+ * did a small-but-important security project
+ * spent a few weeks helping get an important migration over the line
+ * helped X put together this design doc
+ * etcetera!
+
+
+
+### your manager doesn’t remember everything you did
+
+And if you don’t remember everything important you did, your manager (no matter how great they are!) probably doesn’t either. And they need to explain to other people why you should be promoted or given an evaluation like “exceeds expectations” (“X’s work is so awesome!!!!” doesn’t fly).
+
+So if your manager is going to effectively advocate for you, they need help.
+
+### here’s the tactic: write a document listing your accomplishments
+
+The tactic is pretty simple! Instead of trying to remember everything you did with your brain, maintain a “brag document” that lists everything so you can refer to it when you get to performance review season! This is a pretty common tactic – when I started doing this I mentioned it to more experienced people and they were like “oh yeah, I’ve been doing that for a long time, it really helps”.
+
+Where I work we call this a “brag document” but I’ve heard other names for the same concept like “hype document” or “list of stuff I did” :).
+
+There’s a basic template for a brag document at the end of this post.
+
+### share your brag document with your manager
+
+When I first wrote a brag document I was kind of nervous about sharing it with my manager. It felt weird to be like “hey, uh, look at all the awesome stuff I did this year, I wrote a long document listing everything”. But my manager was really thankful for it – I think his perspective was “this makes my job way easier, now I can look at the document when writing your perf review instead of trying to remember what happened”.
+
+Giving them a document that explains your accomplishments will really help your manager advocate for you in discussions about your performance and come to any meetings they need to have prepared.
+
+Brag documents also **really** help with manager transitions – if you get a new manager 3 months before an important performance review that you want to do well on, giving them a brag document outlining your most important work & its impact will help them understand what you’ve been doing even though they may not have been aware of any of your work before.
+
+### share it with your peer reviewers
+
+Similarly, if your company does peer feedback as part of the promotion/perf process – share your brag document with your peer reviewers!! Every time someone shares their doc with me I find it SO HELPFUL with writing their review for much the same reasons it’s helpful to share it with your manager – it reminds me of all the amazing things they did, and when they list their goals in their brag document it also helps me see what areas they might be most interested in feedback on.
+
+On some teams at work it’s a team norm to share a brag document with peer reviewers to make it easier for them.
+
+### explain the big picture
+
+In addition to just listing accomplishments, in your brag document you can write the narrative explaining the big picture of your work. Have you been really focused on security? On building your product skills & having really good relationships with your users? On building a strong culture of code review on the team?
+
+In my brag document, I like to do this by making a section for areas that I’ve been focused on (like “security”) and listing all the work I’ve done in that area there. This is especially good if you’re working on something fuzzy like “building a stronger culture of code review” where all the individual actions you do towards that might be relatively small and there isn’t a big shiny ship.
+
+### use your brag document to notice patterns
+
+In the past I’ve found the brag document useful not just to hype my accomplishments, but also to reflect on the work I’ve done. Some questions it’s helped me with:
+
+ * What work do I feel most proud of?
+ * Are there themes in these projects I should be thinking about? What’s the big picture of what I’m working on? (am I working a lot on security? localization?).
+ * What do I wish I was doing more / less of?
+ * Which of my projects had the effect I wanted, and which didn’t? Why might that have been?
+ * What could have gone better with project X? What might I want to do differently next time?
+
+
+
+### you can write it all at once or update it every 2 weeks
+
+Many people have told me that it works best for them if they take a few minutes to update their brag document every 2 weeks. For me it actually works better to do a single marathon session every 6 months or every year where I look through everything I did and reflect on it all at once. Try out different approaches and see what works for you!
+
+### don’t forget to include the fuzzy work
+
+A lot of us work on fuzzy projects that can feel hard to quantify, like:
+
+ * improving code quality on the team / making code reviews a little more in depth
+ * making on call easier
+ * building a more fair interview process / performance review system
+ * refactoring / driving down technical debt
+
+
+
+A lot of people will leave this kind of work out because they don’t know how to explain why it’s important. But I think this kind of work is especially important to put into your brag document because it’s the most likely to fall under the radar! One way to approach this is to, for each goal:
+
+ 1. explain your goal for the work (why do you think it’s important to refactor X piece of code?)
+ 2. list some things you’ve done towards that goal
+ 3. list any effects you’ve seen of the work, even if they’re a little indirect
+
+
+
+If you tell your coworkers this kind of work is important to you and tell them what you’ve been doing, maybe they can also give you ideas about how to do it more effectively or make the effects of that work more obvious!
+
+### encourage each other to celebrate accomplishments
+
+One nice side effect of having a shared idea that it’s normal/good to maintain a brag document at work is that I sometimes see people encouraging each other to record & celebrate their accomplishments (“hey, you should put that in your brag doc, that was really good!”). It can be hard to see the value of your work sometimes, especially when you’re working on something hard, and an outside perspective from a friend or colleague can really help you see why what you’re doing is important.
+
+Brag documents are good when you use them on your own to advocate for yourself, but I think they’re better as a collaborative effort to recognize where people are excelling.
+
+Next, I want to talk about a couple of structures that we’ve used to help people recognize their accomplishments.
+
+### the brag workshop: help people list their accomplishments
+
+The way this “brag document” practice started in the first place is that my coworker [Karla][2] and I wanted to help other women in engineering advocate for themselves more in the performance review process. The idea is that some people undersell their accomplishments more than they should, so we wanted to encourage those people to “brag” a little bit and write down what they did that was important.
+
+We did this by running a “brag workshop” just before performance review season. The format of the workshop is like this:
+
+**Part 1: write the document: 1-2 hours**. Everybody sits down with their laptop, starts looking through their pull requests, tickets they resolved, design docs, etc, and puts together a list of important things they did in the last 6 months.
+
+**Part 2: pair up and make the impact of your work clearer: 1 hour**. The goal of this part is to pair up, review each other’s documents, and identify places where people haven’t bragged “enough” – maybe they worked on a project that was extremely critical to the company but didn’t highlight how important it was, maybe they improved test performance but didn’t say that they made the tests 3 times faster and that it improved everyone’s developer experience. It’s easy to accidentally write “I shipped $feature” and miss the follow up (“… which caused $thing to happen”). Another person reading through your document can help you catch the places where you need to clarify the impact.
+
+### biweekly brag document writing session
+
+Another approach to helping people remember their accomplishments: my friend Dave gets some friends together every couple of weeks or so for everyone to update their brag documents. It’s a nice way for people to talk about work that they’re happy about & celebrate it a little bit, and updating your brag document as you go can be easier than trying to remember everything you did all at once at the end of the year.
+
+These don’t have to be people in the same company or even in the same city – that group meets over video chat and has people from many different companies doing this together from Portland, Toronto, New York, and Montreal.
+
+In general, especially if you’re someone who really cares about your work, I think it’s really positive to share your goals & accomplishments (and the things that haven’t gone so well too!) with your friends and coworkers. It makes it feel less like you’re working alone and more like everyone is supporting each other in accomplishing what they want.
+
+### thanks
+
+Thanks to Karla Burnett who I worked with on spreading this idea at work, to Dave Vasilevsky for running brag doc writing sessions, to Will Larson who encouraged me to start one [of these][3] in the first place, to my manager Jay Shirley for always being encouraging & showing me that this is a useful way to work with a manager, and to Allie, Dan, Laura, Julian, Kamal, Stanley, and Vaibhav for reading a draft of this.
+
+I’d also recommend the blog post [Hype Yourself! You’re Worth It!][4] by Aashni Shah which talks about a similar approach.
+
+## Appendix: brag document template
+
+Here’s a template for a brag document! Usually I make one brag document per year. (“Julia’s 2017 brag document”). I think it’s okay to make it quite long / comprehensive – 5-10 pages or more for a year of work doesn’t seem like too much to me, especially if you’re including some graphs/charts / screenshots to show the effects of what you did.
+
+One thing I want to emphasize, for people who don’t like to brag, is – **you don’t have to try to make your work sound better than it is**. Just make it sound **exactly as good as it is**! For example “was the primary contributor to X new feature that’s now used by 60% of our customers and has gotten Y positive feedback”.
+
+### Goals for this year:
+
+ * List your major goals here! Sharing your goals with your manager & coworkers is really nice because it helps them see how they can support you in accomplishing those goals!
+
+
+
+### Goals for next year
+
+ * If it’s getting towards the end of the year, maybe start writing down what you think your goals for next year might be.
+
+
+
+### Projects
+
+For each one, go through:
+
+ * What your contributions were (did you come up with the design? Which components did you build? Was there some useful insight like “wait, we can cut scope and do what we want by doing way less work” that you came up with?)
+ * The impact of the project – who was it for? Are there numbers you can attach to it? (saved X dollars? shipped new feature that has helped sell Y big deals? Improved performance by X%? Used by X internal users every day?). Did it support some important non-numeric company goal (required to pass an audit? helped retain an important user?)
+
+
+
+Remember: don’t forget to explain what the results of your work actually were! It’s often important to go back a few months later and fill in what actually happened after you launched the project.
+
+### Collaboration & mentorship
+
+Examples of things in this category:
+
+ * Helping others in an area you’re an expert in (like “other engineers regularly ask me for one-off help solving weird bugs in their CSS” or “quoting from the C standard at just the right moment”)
+ * Mentoring interns / helping new team members get started
+ * Writing really clear emails/meeting notes
+ * Foundational code that other people built on top of
+ * Improving monitoring / dashboards / on call
+ * Any code review that you spent a particularly long time on / that you think was especially important
+ * Important questions you answered (“helped Risha from OTHER_TEAM with a lot of questions related to Y”)
+ * Mentoring someone on a project (“gave Ben advice from time to time on leading his first big project”)
+ * Giving an internal talk or workshop
+
+
+
+### Design & documentation
+
+List design docs & documentation that you worked on
+
+ * Design docs: I usually just say “wrote design for X” or “reviewed design for X”
+ * Documentation: maybe briefly explain the goal behind this documentation (for example “we were getting a lot of questions about X, so I documented it and now we can answer the questions more quickly”)
+
+
+
+### Company building
+
+This is a category we have at work – it basically means “things you did to help the company overall, not just your project / team”. Some things that go in here:
+
+ * Going above & beyond with interviewing or recruiting (doing campus recruiting, etc)
+ * Improving important processes, like the interview process or writing better onboarding materials
+
+
+
+### What you learned
+
+My friend Julian suggested this section and I think it’s a great idea – try listing important things you learned or skills you’ve acquired recently! Some examples of skills you might be learning or improving:
+
+ * how to do performance analysis & make code run faster
+ * internals of an important piece of software (like the JVM or Postgres or Linux)
+ * how to use a library (like React)
+ * how to use an important tool (like the command line or Firefox dev tools)
+ * about a specific area of programming (like localization or timezones)
+ * an area like product management / UX design
+ * how to write a clear design doc
+ * a new programming language
+
+
+
+It’s really easy to lose track of what skills you’re learning, and usually when I reflect on this I realize I learned a lot more than I thought and also notice things that I’m _not_ learning that I wish I was.
+
+### Outside of work
+
+It’s also often useful to track accomplishments outside of work, like:
+
+ * blog posts
+ * talks/panels
+ * open source work
+ * Industry recognition
+
+
+
+I think this can be a nice way to highlight how you’re thinking about your career outside of strictly what you’re doing at work.
+
+This can also include other non-career-related things you’re proud of, if that feels good to you! Some people like to keep a combined personal + work brag document.
+
+### General prompts
+
+If you’re feeling stuck for things to mention, try:
+
+ * If you were trying to convince a friend to come join your company/team, what would you tell them about your work?
+ * Did anybody tell you you did something well recently?
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/brag-documents/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://jvns.ca/blog/brag-documents/#template
+[2]: https://karla.io/
+[3]: https://lethain.com/career-narratives/
+[4]: http://blog.aashni.me/2019/01/hype-yourself-youre-worth-it/
diff --git a/sources/tech/20190718 What you need to know to be a sysadmin.md b/sources/tech/20190718 What you need to know to be a sysadmin.md
index bd482f3ca4..55947b8456 100644
--- a/sources/tech/20190718 What you need to know to be a sysadmin.md
+++ b/sources/tech/20190718 What you need to know to be a sysadmin.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: (WangYueScream )
+[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/tech/20190730 How to manage logs in Linux.md b/sources/tech/20190730 How to manage logs in Linux.md
deleted file mode 100644
index cebfbc5f99..0000000000
--- a/sources/tech/20190730 How to manage logs in Linux.md
+++ /dev/null
@@ -1,110 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to manage logs in Linux)
-[#]: via: (https://www.networkworld.com/article/3428361/how-to-manage-logs-in-linux.html)
-[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
-
-How to manage logs in Linux
-======
-Log files on Linux systems contain a LOT of information — more than you'll ever have time to view. Here are some tips on how you can make use of it without ... drowning in it.
-![Greg Lobinski \(CC BY 2.0\)][1]
-
-Managing log files on Linux systems can be incredibly easy or painful. It all depends on what you mean by log management.
-
-If all you mean is how you can go about ensuring that your log files don’t eat up all the disk space on your Linux server, the issue is generally quite straightforward. Log files on Linux systems will automatically roll over, and the system will only maintain a fixed number of the rolled-over logs. Even so, glancing over what can easily be a group of 100 files can be overwhelming. In this post, we'll take a look at how the log rotation works and some of the most relevant log files.
-
-**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
-
-### Automatic log rotation
-
-Log files rotate frequently. The current log acquires a slightly different file name and a new log file is established. Take the syslog file as an example. This file is something of a catch-all for a lot of normal system messages. If you **cd** over to **/var/log** and take a look, you’ll probably see a series of syslog files like this:
-
-```
-$ ls -l syslog*
--rw-r----- 1 syslog adm 28996 Jul 30 07:40 syslog
--rw-r----- 1 syslog adm 71212 Jul 30 00:00 syslog.1
--rw-r----- 1 syslog adm 5449 Jul 29 00:00 syslog.2.gz
--rw-r----- 1 syslog adm 6152 Jul 28 00:00 syslog.3.gz
--rw-r----- 1 syslog adm 7031 Jul 27 00:00 syslog.4.gz
--rw-r----- 1 syslog adm 5602 Jul 26 00:00 syslog.5.gz
--rw-r----- 1 syslog adm 5995 Jul 25 00:00 syslog.6.gz
--rw-r----- 1 syslog adm 32924 Jul 24 00:00 syslog.7.gz
-```
-
-The syslog files are rolled over at midnight each night; the older ones are kept for a week and then the oldest is deleted. The syslog.7.gz file will be tossed off the system and syslog.6.gz will be renamed syslog.7.gz. The remainder of the log files will follow suit until syslog becomes syslog.1 and a new syslog file is created. Some syslog files will be larger than others, but in general, none will likely ever get very large and you’ll never see more than eight of them. This gives you just over a week to review any data they collect.
-
-The number of files maintained for any particular log file depends on the log file itself. For some, you may have as many as 13. Notice how the older files – both for syslog and dpkg – are gzipped to save space. The thinking here is likely that you’ll be most interested in the recent logs. Older logs can be unzipped with **gunzip** as needed.
-
-```
-# ls -t dpkg*
-dpkg.log dpkg.log.3.gz dpkg.log.6.gz dpkg.log.9.gz dpkg.log.12.gz
-dpkg.log.1 dpkg.log.4.gz dpkg.log.7.gz dpkg.log.10.gz
-dpkg.log.2.gz dpkg.log.5.gz dpkg.log.8.gz dpkg.log.11.gz
-```
-
-Log files can be rotated based on age, as well as by size. Keep this in mind as you examine your log files.
-
-Log file rotation can be configured differently if you are so inclined, though the defaults work for most Linux sysadmins. Take a look at files like **/etc/rsyslog.conf** and **/etc/logrotate.conf** for some of the details.
-
-### Making use of your log files
-
-Managing log files should also include using them from time to time. The first step in making use of log files should probably include getting used to what each log file can tell you about how your system is working and what problems it might have run into. Reading log files from top to bottom is almost never a good option, but knowing how to pull information from them can be of great benefit when you want to get a sense of how well your system is working or need to track down a problem. This also suggests that you have a general idea what kind of information is stored in each file. For example:
-
-```
-$ who wtmp | tail -10 show the most recent logins
-$ who wtmp | grep shark show recent logins for a particular user
-$ grep "sudo:" auth.log see who is using sudo
-$ tail dmesg look at kernel messages
-$ tail dpkg.log see recently installed and updated packages
-$ more ufw.log see firewall activity (i.e., if you are using ufw)
-```
-
-Some commands that you run will also extract information from your log files. If you want to see, for example, a list of system reboots, you can use a command like this:
-
-```
-$ last reboot
-reboot system boot 5.0.0-20-generic Tue Jul 16 13:19 still running
-reboot system boot 5.0.0-15-generic Sat May 18 17:26 - 15:19 (21+21:52)
-reboot system boot 5.0.0-13-generic Mon Apr 29 10:55 - 15:34 (18+04:39)
-```
-
-### Using more advanced log managers
-
-While you can write scripts to make it easier to find interesting information in your log files, you should also be aware that there are some very sophisticated tools available for log file analysis. Some correlate information from multiple sources to get a fuller picture of what’s happening on your network. They may provide real-time monitoring, as well. Tools such as [Solarwinds Log & Event Manager][3] and [PRTG Network Monitor][4] (which includes log monitoring) come to mind.
-
-There are also some free tools that can help with analyzing log files. These include:
-
- * **Logwatch** — program to scan system logs for interesting lines
- * **Logcheck** — system log analyzer and reporter
-
-
-
-I'll provide some insights and help on these tools in upcoming posts.
-
-**[ Also see: [Invaluable tips and tricks for troubleshooting Linux][5] ]**
-
-Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3428361/how-to-manage-logs-in-linux.html
-
-作者:[Sandra Henry-Stocker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/07/logs-100806633-large.jpg
-[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
-[3]: https://www.esecurityplanet.com/products/solarwinds-log-event-manager-siem.html
-[4]: https://www.paessler.com/prtg
-[5]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
-[6]: https://www.facebook.com/NetworkWorld/
-[7]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190804 Learn how to Install LXD - LXC Containers in Ubuntu.md b/sources/tech/20190804 Learn how to Install LXD - LXC Containers in Ubuntu.md
new file mode 100644
index 0000000000..b4e1a2667b
--- /dev/null
+++ b/sources/tech/20190804 Learn how to Install LXD - LXC Containers in Ubuntu.md
@@ -0,0 +1,508 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Learn how to Install LXD / LXC Containers in Ubuntu)
+[#]: via: (https://www.linuxtechi.com/install-lxd-lxc-containers-from-scratch/)
+[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)
+
+Learn how to Install LXD / LXC Containers in Ubuntu
+======
+
+Let me start by explaining what a container is: it is a normal process on the host machine (any Linux-based machine) with the following characteristics:
+
+ * It feels like a VM, but it is not.
+ * Uses the host kernel.
+ * Cannot boot a different operating system.
+ * Cannot load its own kernel modules.
+ * Does not need “**init**” running as PID (process ID) 1
+
+
+
+[![Learn-LXD-LXC-Containers][1]][2]
+
+LXC (**LinuX Containers**) technology was developed long ago and is an operating-system-level virtualization technology. It has existed since the days of BSD and System V Release 4 (popular Unix flavors during the 1980s and 90s), but until recently no one knew how much it could help us save in terms of resource utilization. Because of this technology change, all enterprises are moving towards the adoption of virtualization (be it the cloud or Docker containers), which also helps in better management of **OpEx (operational expenditure)** and **CapEx (capital expenditure)** costs. Using this technique, we can create and run multiple isolated Linux virtual environments on a single Linux host machine (called the control host). LXC mainly uses Linux’s cgroups and namespaces functionality, which was introduced from kernel version 2.6.24 onwards. In parallel, many advancements in hypervisors happened, such as **KVM**, **QEMU**, **Hyper-V**, **ESXi** etc. KVM (Kernel-based Virtual Machine) in particular, which is part of the Linux kernel, helped drive this kind of advancement.
+
+The difference between LXC and LXD is that LXC is the original, older way to manage containers and is still supported; all LXC commands start with “**lxc-**”, like “**lxc-create**” & “**lxc-info**”. LXD is the newer way to manage containers, where a single lxc command is used for all container operations and management.
+
+All of us know that “**Docker**” utilizes LXC and was developed using the Go language, cgroups, namespaces and, finally, the Linux kernel itself. Docker was built and developed using LXC as its basic foundation block. Docker depends on the underlying infrastructure & hardware, using the operating system as the medium. However, Docker is a portable and easily deployable container engine; all its dependencies are run using a virtual container on most Linux-based servers. Cgroups and namespaces are the building-block concepts for both LXC and Docker containers. Following is a brief description of these concepts.
+
+### Cgroups (Control Groups)
+
+With cgroups, each resource has its own hierarchy: CPU, memory, block I/O etc. each get their own control-group hierarchy. Cgroups have the following characteristics:
+
+ * Each process belongs to one node in each hierarchy
+ * Each hierarchy starts with one (root) node
+ * Initially all processes start at the root node, so “each node” is equivalent to “group of processes”
+ * Hierarchies are independent, e.g. CPU, block I/O, memory etc.
+
+
+
+As explained earlier, there are various cgroup types, as listed below (a quick way to peek at them on a live system follows the list):
+
+1) **Memory Cgroups**
+
+a) Keeps track of pages used by each group.
+
+b) File read/write/mmap from block devices
+
+c) Anonymous memory(stack, heap etc)
+
+d) Each memory page is charged to a group
+
+e) Pages can be shared across multiple groups
+
+2) **CPU Cgroups**
+
+a) Tracks user/system CPU time
+
+b) Tracks usage per CPU
+
+c) Allows setting weights
+
+d) Cannot set CPU limits
+
+3) **Block IO Cgroup**
+
+a) Keeps track of reads/writes (I/Os)
+
+b) Sets throttles (limits) for each group (per block device)
+
+c) Sets relative weights for each group (per block device)
+
+4) **Devices Cgroup**
+
+a) Controls what the group can do on device nodes
+
+b) Permissions include read/write/mknod
+
+5) **Freezer Cgroup**
+
+a) Allows freezing/thawing a group of processes
+
+b) Similar to SIGSTOP/SIGCONT
+
+c) Cannot be detected by processes
+
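+To make this a little more concrete, here is one way to peek at cgroups on a live system (a quick sketch; path layouts and hierarchy names vary by distribution and cgroup version):
+
+```
+# Show which cgroup (one per hierarchy) the current shell belongs to
+root@linuxtechi:~$ cat /proc/self/cgroup
+
+# The hierarchies themselves are exposed as a filesystem
+root@linuxtechi:~$ ls /sys/fs/cgroup/
+```
+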
+### NameSpaces
+
+Namespaces provide processes with their own view of the system. Each process is in one namespace of each type.
+
+There are multiple namespace types (a quick way to inspect them follows the list):
+
+ * PID – Processes within a PID namespace only see processes in the same PID namespace
+ * Net – Processes within a given network namespace get their own private network stack.
+ * Mnt – Processes can have their own “root” and private “mount” points.
+ * UTS – Gives the container its own hostname
+ * IPC – Allows processes to have their own IPC semaphores, IPC message queues and shared memory
+ * USR – Allows mapping of UIDs/GIDs
+
+
+
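+For a quick look at the namespaces a process belongs to, the commands below should work on most modern systems (a sketch; `lsns` ships with the util-linux package):
+
+```
+# Namespace handles for the current shell, one per namespace type
+root@linuxtechi:~$ ls -l /proc/$$/ns
+
+# A friendlier listing, if util-linux is installed
+root@linuxtechi:~$ lsns --task $$
+```
+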
+### Installation and configuration of LXD containers
+
+To install LXD on an Ubuntu system (18.04 LTS), we can start with the LXD installation using the below apt commands:
+
+```
+root@linuxtechi:~$ sudo apt update
+root@linuxtechi:~$ sudo apt install lxd -y
+```
+
+Once LXD is installed, we can start with its initialization as below (most of the time the default options are fine):
+
+```
+root@linuxtechi:~$ sudo lxd init
+```
+
+![lxc-init-ubuntu-system][1]
+
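+If you would rather not answer the interactive prompts, recent LXD releases also support a non-interactive initialization that accepts sane defaults (a hedged aside; flag support may vary with your LXD version):
+
+```
+# Initialize LXD with default storage and network settings, no prompts
+root@linuxtechi:~$ sudo lxd init --auto
+```
+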
+Once LXD is initialized successfully, run the below command to verify the information:
+
+```
+root@linuxtechi:~$ sudo lxc info | more
+```
+
+![lxc-info-command][1]
+
+Use the below command to check whether any container image has been downloaded onto our host:
+
+```
+root@linuxtechi:~$ sudo lxc image list
++-------+-------------+--------+-------------+------+------+-------------+
+| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
++-------+-------------+--------+-------------+------+------+-------------+
+root@linuxtechi:~$
+```
+
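+As an aside (a sketch, not part of the original flow; the “bionic” alias is just an example), an image can also be pre-seeded into the local store so that later launches don’t have to download it:
+
+```
+# Copy the Ubuntu 18.04 image from the public remote into the local image store
+root@linuxtechi:~$ sudo lxc image copy ubuntu:18.04 local: --alias bionic
+root@linuxtechi:~$ sudo lxc image list
+```
+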
+A quick and easy way to start a first container on Ubuntu 18.04 (or any supported Ubuntu flavor) is with the following command. The container name we have provided is “shashi”:
+
+```
+root@linuxtechi:~$ sudo lxc launch ubuntu:18.04 shashi
+Creating shashi
+Starting shashi
+root@linuxtechi:~$
+```
+
+To list the LXC containers present on the system:
+
+```
+root@linuxtechi:~$ sudo lxc list
++--------+---------+-----------------------+-----------------------------------------------+------------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++--------+---------+-----------------------+-----------------------------------------------+------------+-----------+
+| shashi | RUNNING | 10.122.140.140 (eth0) | fd42:49da:7c44:cebe:216:3eff:fea4:ea06 (eth0) | PERSISTENT | 0 |
++--------+---------+-----------------------+-----------------------------------------------+------------+-----------+
+root@linuxtechi:~$
+```
+
+Other container management commands for LXD are listed below:
+
+**Note:** In below examples, shashi is my container name
+
+**How to get a bash shell in your LXD Container?**
+
+```
+root@linuxtechi:~$ sudo lxc exec shashi bash
+root@linuxtechi:~#
+```
+
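+A full shell isn’t always needed; a single command can also be run inside the container (a sketch using the same container name; the eth0 interface name is an assumption):
+
+```
+# Run a one-off command inside the container without opening a shell
+root@linuxtechi:~$ sudo lxc exec shashi -- ip addr show eth0
+```
+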
+**How to Stop, Start & Restart an LXD Container?**
+
+```
+root@linuxtechi:~$ sudo lxc stop shashi
+root@linuxtechi:~$ sudo lxc list
++--------+---------+------+------+------------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++--------+---------+------+------+------------+-----------+
+| shashi | STOPPED | | | PERSISTENT | 0 |
++--------+---------+------+------+------------+-----------+
+root@linuxtechi:~$
+root@linuxtechi:~$ sudo lxc start shashi
+root@linuxtechi:~$ sudo lxc restart shashi
+```
+
+**How to delete an LXD Container?**
+
+```
+root@linuxtechi:~$ sudo lxc stop shashi
+root@linuxtechi:~$ sudo lxc delete shashi
+root@linuxtechi:~$ sudo lxc list
++------+-------+------+------+------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++------+-------+------+------+------+-----------+
+root@linuxtechi:~$
+```
+
+**How to take snapshot of LXD container and then restore it?**
+
+Let’s assume we have a pkumar container based on a CentOS 7 image, so to take its snapshot use the following:
+
+```
+root@linuxtechi:~$ sudo lxc snapshot pkumar pkumar_snap0
+```
+
+Use the below command to verify the snapshot:
+
+```
+root@linuxtechi:~$ sudo lxc info pkumar | grep -i Snapshots -A2
+Snapshots:
+ pkumar_snap0 (taken at 2019/08/02 19:39 UTC) (stateless)
+root@linuxtechi:~$
+```
+
+Use the below command to restore the LXD container from its snapshot:
+
+Syntax:
+
+$ lxc restore {container_name} {snapshot_name}
+
+```
+root@linuxtechi:~$ sudo lxc restore pkumar pkumar_snap0
+root@linuxtechi:~$
+```
+
+**How to delete LXD container snapshot?**
+
+```
+$ sudo lxc delete <container_name>/<snapshot_name>
+```
+
+**How to set Memory, CPU and Disk Limit on LXD container?**
+
+Syntax to set Memory limit:
+
+# lxc config set <container_name> limits.memory <Memory_Size>KB/MB/GB
+
+Syntax to set CPU limit:
+
+# lxc config set <container_name> limits.cpu {Number_of_CPUs}
+
+Syntax to Set Disk limit:
+
+# lxc config device set <container_name> root size <Size_MB/GB>
+
+**Note:** Setting a disk limit requires a btrfs or ZFS storage backend.
+
+Let’s set limits on memory and CPU for the container shashi using the following commands:
+
+```
+root@linuxtechi:~$ sudo lxc config set shashi limits.memory 256MB
+root@linuxtechi:~$ sudo lxc config set shashi limits.cpu 2
+```
+
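+Following the disk-limit syntax above (a sketch: it assumes the storage pool is btrfs or ZFS, that the container has a local ‘root’ disk device, and the 5GB size is just an example), a disk limit can be applied and the resulting configuration verified:
+
+```
+root@linuxtechi:~$ sudo lxc config device set shashi root size 5GB
+# Verify the limits that were applied
+root@linuxtechi:~$ sudo lxc config show shashi
+```
+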
+### Install and configure LXC container (commands and operations)
+
+To install LXC on your Ubuntu system, use the beneath apt command:
+
+```
+root@linuxtechi:~$ sudo apt install lxc -y
+```
+
+In earlier versions of LXC, the command “**lxc-clone**” was used; it was later deprecated. Now the “**lxc-copy**” command is widely used for cloning operations.
+
+**Note:** To get “lxc-copy” command working, use the following installation steps,
+
+```
+root@linuxtechi:~$ sudo apt install lxc1 -y
+```
+
+**Creating Linux Containers using the templates**
+
+LXC provides ready-made templates for easy installation of Linux containers. Templates are usually found in the directory path /usr/share/lxc/templates, but a fresh installation will not include them, so to download the templates to your local system, run the beneath command:
+
+```
+root@linuxtechi:~$ sudo apt install lxc-templates -y
+```
+
+Once lxc-templates is installed successfully, the templates will be available:
+
+```
+root@linuxtechi:~$ sudo ls /usr/share/lxc/templates/
+lxc-alpine lxc-centos lxc-fedora lxc-oci lxc-plamo lxc-sparclinux lxc-voidlinux
+lxc-altlinux lxc-cirros lxc-fedora-legacy lxc-openmandriva lxc-pld lxc-sshd
+lxc-archlinux lxc-debian lxc-gentoo lxc-opensuse lxc-sabayon lxc-ubuntu
+lxc-busybox lxc-download lxc-local lxc-oracle lxc-slackware lxc-ubuntu-cloud
+root@linuxtechi:~$
+```
+
+Let’s launch a container using a template.
+
+Syntax: lxc-create -n <container_name> -t <template_name>
+
+```
+root@linuxtechi:~$ sudo lxc-create -n shashi_lxc -t ubuntu
+………………………
+invoke-rc.d: could not determine current runlevel
+invoke-rc.d: policy-rc.d denied execution of start.
+Current default time zone: 'Etc/UTC'
+Local time is now: Fri Aug 2 11:46:42 UTC 2019.
+Universal Time is now: Fri Aug 2 11:46:42 UTC 2019.
+
+##
+# The default user is 'ubuntu' with password 'ubuntu'!
+# Use the 'sudo' command to run tasks as root in the container.
+##
+………………………………………
+root@linuxtechi:~$
+```
+
+Once the container has been created from the template, we can log in to its console using the following steps:
+
+```
+root@linuxtechi:~$ sudo lxc-start -n shashi_lxc -d
+root@linuxtechi:~$ sudo lxc-console -n shashi_lxc
+
+Connected to tty 1
+Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
+
+Ubuntu 18.04.2 LTS shashi_lxc pts/0
+
+shashi_lxc login: ubuntu
+Password:
+Last login: Fri Aug 2 12:00:35 UTC 2019 on pts/0
+Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-20-generic x86_64)
+To run a command as administrator (user "root"), use "sudo <command>".
+See "man sudo_root" for details.
+
+root@linuxtechi_lxc:~$ free -h
+ total used free shared buff/cache available
+Mem: 3.9G 23M 3.8G 112K 8.7M 3.8G
+Swap: 1.9G 780K 1.9G
+root@linuxtechi_lxc:~$ grep -c processor /proc/cpuinfo
+1
+root@linuxtechi_lxc:~$ df -h /
+Filesystem Size Used Avail Use% Mounted on
+/dev/sda1 40G 7.4G 31G 20% /
+root@linuxtechi_lxc:~$
+```
+
+Now log out or exit from the container and go back to the host machine’s login window. With the lxc-ls command we can see that the shashi_lxc container has been created.
+
+```
+root@linuxtechi:~$ sudo lxc-ls
+shashi_lxc
+root@linuxtechi:~$
+```
+
+The “**lxc-ls -f**” command provides details including the IP address of the container, as below:
+
+```
+root@linuxtechi:~$ sudo lxc-ls -f
+NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
+shashi_lxc RUNNING 0 - 10.0.3.190 - false
+root@linuxtechi:~$
+```
+
+The “**lxc-info -n <container_name>**” command provides all the required details, including state, IP address etc.:
+
+```
+root@linuxtechi:~$ sudo lxc-info -n shashi_lxc
+Name: shashi_lxc
+State: RUNNING
+PID: 6732
+IP: 10.0.3.190
+CPU use: 2.38 seconds
+BlkIO use: 240.00 KiB
+Memory use: 27.75 MiB
+KMem use: 5.04 MiB
+Link: vethQ7BVGU
+ TX bytes: 2.01 KiB
+ RX bytes: 9.52 KiB
+ Total bytes: 11.53 KiB
+root@linuxtechi:~$
+```
+
+**How to Start, Stop, Restart and Delete LXC containers**
+
+```
+$ lxc-start -n <container_name>
+$ lxc-stop -n <container_name>
+$ lxc-destroy -n <container_name>
+```
+
+**LXC Cloning operation**
+
+Now for the main cloning operation to be performed on the LXC container. The following steps are involved.
+
+As described earlier, LXC offers a feature for cloning a container from an existing container. Run the following command to clone the existing “shashi_lxc” container to a new container, “shashi_lxc_clone”.
+
+**Note:** Before starting the cloning operation, we first have to stop the existing container using the “**lxc-stop**” command.
+
+```
+root@linuxtechi:~$ sudo lxc-stop -n shashi_lxc
+root@linuxtechi:~$ sudo lxc-copy -n shashi_lxc -N shashi_lxc_clone
+root@linuxtechi:~$ sudo lxc-ls
+shashi_lxc shashi_lxc_clone
+root@linuxtechi:~$
+```
+
+Now start the cloned container:
+
+```
+root@linuxtechi:~$ sudo lxc-start -n shashi_lxc_clone
+root@linuxtechi:~$ sudo lxc-ls -f
+NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
+shashi_lxc STOPPED 0 - - - false
+shashi_lxc_clone RUNNING 0 - 10.0.3.201 - false
+root@linuxtechi:~$
+```
+
+With the above set of commands the cloning operation is done, and the new clone “shashi_lxc_clone” has been created. We can log in to this LXC container’s console with the below steps:
+
+```
+root@linuxtechi:~$ sudo lxc-console -n shashi_lxc_clone
+
+Connected to tty 1
+Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
+Ubuntu 18.04.2 LTS shashi_lxc pts/0
+
+shashi_lxc login:
+```
+
+**LXC Network configuration and commands**
+
+We can attach to the newly created container, but to log in to this container remotely using SSH or any other means, we have to make some minimal configuration changes, as explained below:
+
+```
+root@linuxtechi:~$ sudo lxc-attach -n shashi_lxc_clone
+root@linuxtechi_lxc:/#
+root@linuxtechi_lxc:/# useradd -m shashi
+root@linuxtechi_lxc:/# passwd shashi
+Enter new UNIX password:
+Retype new UNIX password:
+passwd: password updated successfully
+root@linuxtechi_lxc:/#
+```
+
+First, install the SSH server using the following command so that a smooth “ssh” connection can be established:
+
+```
+root@linuxtechi_lxc:/# apt install openssh-server -y
+```
+
+Now get the IP address of the existing LXC container using the following command:
+
+```
+root@linuxtechi_lxc:/# ip addr show eth0|grep inet
+ inet 10.0.3.201/24 brd 10.0.3.255 scope global dynamic eth0
+ inet6 fe80::216:3eff:fe82:e251/64 scope link
+root@linuxtechi_lxc:/#
+```
+
+From a new console window on the host machine, use the following command to connect to this container over SSH:
+
+```
+root@linuxtechi:~$ ssh 10.0.3.201
+root@linuxtechi's password:
+$
+```
+
+Now we have logged in to the container using an SSH session.
+
+**LXC process related commands**
+
+```
+root@linuxtechi:~$ ps aux|grep lxc|grep -v grep
+```
+
+![lxc-process-ubuntu-system][1]
+
+**LXC snapshot operation**
+
+Snapshotting is one of the main operations; it helps in taking a point-in-time snapshot of an LXC container image. These snapshot images can be kept for later use.
+
+```
+root@linuxtechi:~$ sudo lxc-stop -n shashi_lxc
+root@linuxtechi:~$ sudo lxc-snapshot -n shashi_lxc
+root@linuxtechi:~$
+```
+
+The snapshot path can be located using the following command.
+
+```
+root@linuxtechi:~$ sudo lxc-snapshot -L -n shashi_lxc
+snap0 (/var/lib/lxc/shashi_lxc/snaps) 2019:08:02 20:28:49
+root@linuxtechi:~$
+```
+
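+To roll the container back to a snapshot, the same tool’s restore option can be used (a sketch; snap0 is the snapshot name from the listing above, and restore-flag behavior may vary across lxc-snapshot versions):
+
+```
+root@linuxtechi:~$ sudo lxc-snapshot -n shashi_lxc -r snap0
+```
+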
+**Conclusion:**
+
+LXC (LinuX Containers) is one of the early container technologies. Understanding the concepts and learning about LXC will help in a deeper understanding of other containers, like Docker containers. This article has provided deeper insights into cgroups and namespaces, which are essential concepts for a better understanding of containers. Many of the LXC operations, like cloning, snapshotting, network operation etc., are covered with command-line examples.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/install-lxd-lxc-containers-from-scratch/
+
+作者:[Shashidhar Soppin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linuxtechi.com/author/shashidhar/
+[b]: https://github.com/lujun9972
+[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]: https://www.linuxtechi.com/wp-content/uploads/2019/08/Learn-LXD-LXC-Containers.jpg
diff --git a/sources/tech/20190812 Why const Doesn-t Make C Code Faster.md b/sources/tech/20190812 Why const Doesn-t Make C Code Faster.md
deleted file mode 100644
index c30d5bddfe..0000000000
--- a/sources/tech/20190812 Why const Doesn-t Make C Code Faster.md
+++ /dev/null
@@ -1,402 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (LazyWolfLin)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Why const Doesn't Make C Code Faster)
-[#]: via: (https://theartofmachinery.com/2019/08/12/c_const_isnt_for_performance.html)
-[#]: author: (Simon Arneaud https://theartofmachinery.com)
-
-Why const Doesn't Make C Code Faster
-======
-
-In a post a few months back I said [it’s a popular myth that `const` is helpful for enabling compiler optimisations in C and C++][1]. I figured I should explain that one, especially because I used to believe it was obviously true, myself. I’ll start off with some theory and artificial examples, then I’ll do some experiments and benchmarks on a real codebase: Sqlite.
-
-### A simple test
-
-Let’s start with what I used to think was the simplest and most obvious example of how `const` can make C code faster. First, let’s say we have these two function declarations:
-
-```
-void func(int *x);
-void constFunc(const int *x);
-```
-
-And suppose we have these two versions of some code:
-
-```
-void byArg(int *x)
-{
- printf("%d\n", *x);
- func(x);
- printf("%d\n", *x);
-}
-
-void constByArg(const int *x)
-{
- printf("%d\n", *x);
- constFunc(x);
- printf("%d\n", *x);
-}
-```
-
-To do the `printf()`, the CPU has to fetch the value of `*x` from RAM through the pointer. Obviously, `constByArg()` can be made slightly faster because the compiler knows that `*x` is constant, so there’s no need to load its value a second time after `constFunc()` does its thing. It’s just printing the same thing. Right? Let’s see the assembly code generated by GCC with optimisations cranked up:
-
-```
-$ gcc -S -Wall -O3 test.c
-$ view test.s
-```
-
-Here’s the full assembly output for `byArg()`:
-
-```
-byArg:
-.LFB23:
- .cfi_startproc
- pushq %rbx
- .cfi_def_cfa_offset 16
- .cfi_offset 3, -16
- movl (%rdi), %edx
- movq %rdi, %rbx
- leaq .LC0(%rip), %rsi
- movl $1, %edi
- xorl %eax, %eax
- call __printf_chk@PLT
- movq %rbx, %rdi
- call func@PLT # The only instruction that's different in constFoo
- movl (%rbx), %edx
- leaq .LC0(%rip), %rsi
- xorl %eax, %eax
- movl $1, %edi
- popq %rbx
- .cfi_def_cfa_offset 8
- jmp __printf_chk@PLT
- .cfi_endproc
-```
-
-The only difference between the generated assembly code for `byArg()` and `constByArg()` is that `constByArg()` has a `call constFunc@PLT`, just like the source code asked. The `const` itself has literally made zero difference.
-
-Okay, that’s GCC. Maybe we just need a sufficiently smart compiler. Is Clang any better?
-
-```
-$ clang -S -Wall -O3 -emit-llvm test.c
-$ view test.ll
-```
-
-Here’s the IR. It’s more compact than assembly, so I’ll dump both functions so you can see what I mean by “literally zero difference except for the call”:
-
-```
-; Function Attrs: nounwind uwtable
-define dso_local void @byArg(i32*) local_unnamed_addr #0 {
- %2 = load i32, i32* %0, align 4, !tbaa !2
- %3 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %2)
- tail call void @func(i32* %0) #4
- %4 = load i32, i32* %0, align 4, !tbaa !2
- %5 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %4)
- ret void
-}
-
-; Function Attrs: nounwind uwtable
-define dso_local void @constByArg(i32*) local_unnamed_addr #0 {
- %2 = load i32, i32* %0, align 4, !tbaa !2
- %3 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %2)
- tail call void @constFunc(i32* %0) #4
- %4 = load i32, i32* %0, align 4, !tbaa !2
- %5 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %4)
- ret void
-}
-```
-
-### Something that (sort of) works
-
-Here’s some code where `const` actually does make a difference:
-
-```
-void localVar()
-{
- int x = 42;
- printf("%d\n", x);
- constFunc(&x);
- printf("%d\n", x);
-}
-
-void constLocalVar()
-{
- const int x = 42; // const on the local variable
- printf("%d\n", x);
- constFunc(&x);
- printf("%d\n", x);
-}
-```
-
-Here’s the assembly for `localVar()`, which has two instructions that have been optimised out of `constLocalVar()`:
-
-```
-localVar:
-.LFB25:
- .cfi_startproc
- subq $24, %rsp
- .cfi_def_cfa_offset 32
- movl $42, %edx
- movl $1, %edi
- movq %fs:40, %rax
- movq %rax, 8(%rsp)
- xorl %eax, %eax
- leaq .LC0(%rip), %rsi
- movl $42, 4(%rsp)
- call __printf_chk@PLT
- leaq 4(%rsp), %rdi
- call constFunc@PLT
- movl 4(%rsp), %edx # not in constLocalVar()
- xorl %eax, %eax
- movl $1, %edi
- leaq .LC0(%rip), %rsi # not in constLocalVar()
- call __printf_chk@PLT
- movq 8(%rsp), %rax
- xorq %fs:40, %rax
- jne .L9
- addq $24, %rsp
- .cfi_remember_state
- .cfi_def_cfa_offset 8
- ret
-.L9:
- .cfi_restore_state
- call __stack_chk_fail@PLT
- .cfi_endproc
-```
-
-The LLVM IR is a little clearer. The `load` just before the second `printf()` call has been optimised out of `constLocalVar()`:
-
-```
-; Function Attrs: nounwind uwtable
-define dso_local void @localVar() local_unnamed_addr #0 {
- %1 = alloca i32, align 4
- %2 = bitcast i32* %1 to i8*
- call void @llvm.lifetime.start.p0i8(i64 4, i8* nonnull %2) #4
- store i32 42, i32* %1, align 4, !tbaa !2
- %3 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 42)
- call void @constFunc(i32* nonnull %1) #4
- %4 = load i32, i32* %1, align 4, !tbaa !2
- %5 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @.str, i64 0, i64 0), i32 %4)
- call void @llvm.lifetime.end.p0i8(i64 4, i8* nonnull %2) #4
- ret void
-}
-```
-
-Okay, so, `constLocalVar()` has successfully elided the reloading of `*x`, but maybe you’ve noticed something a bit confusing: it’s the same `constFunc()` call in the bodies of `localVar()` and `constLocalVar()`. If the compiler can deduce that `constFunc()` didn’t modify `*x` in `constLocalVar()`, why can’t it deduce that the exact same function call didn’t modify `*x` in `localVar()`?
-
-The explanation gets closer to the heart of why C `const` is impractical as an optimisation aid. C `const` effectively has two meanings: it can mean the variable is a read-only alias to some data that may or may not be constant, or it can mean the variable is actually constant. If you cast away `const` from a pointer to a constant value and then write to it, the result is undefined behaviour. On the other hand, it’s okay if it’s just a `const` pointer to a value that’s not constant.
-
-This possible implementation of `constFunc()` shows what that means:
-
-```
-// x is just a read-only pointer to something that may or may not be a constant
-void constFunc(const int *x)
-{
- // local_var is a true constant
- const int local_var = 42;
-
- // Definitely undefined behaviour by C rules
- doubleIt((int*)&local_var);
- // Who knows if this is UB?
- doubleIt((int*)x);
-}
-
-void doubleIt(int *x)
-{
- *x *= 2;
-}
-```
-
-`localVar()` gave `constFunc()` a `const` pointer to non-`const` variable. Because the variable wasn’t originally `const`, `constFunc()` can be a liar and forcibly modify it without triggering UB. So the compiler can’t assume the variable has the same value after `constFunc()` returns. The variable in `constLocalVar()` really is `const`, though, so the compiler can assume it won’t change — because this time it _would_ be UB for `constFunc()` to cast `const` away and write to it.
-
-The `byArg()` and `constByArg()` functions in the first example are hopeless because the compiler has no way of knowing if `*x` really is `const`.
-
-But why the inconsistency? If the compiler can assume that `constFunc()` doesn’t modify its argument when called in `constLocalVar()`, surely it can go ahead and apply the same optimisations to other `constFunc()` calls, right? Nope. The compiler can’t assume `constLocalVar()` is ever run at all. If it isn’t (say, because it’s just some unused extra output of a code generator or macro), `constFunc()` can sneakily modify data without ever triggering UB.
-
-You might want to read the above explanation and examples a few times, but don’t worry if it sounds absurd: it is. Unfortunately, writing to `const` variables is the worst kind of UB: most of the time the compiler can’t know if it even would be UB. So most of the time, when the compiler sees `const`, it has to assume that someone, somewhere could cast it away, which means the compiler can’t use it for optimisation. This is true in practice because enough real-world C code has “I know what I’m doing” casting away of `const`.
-
-In short, a whole lot of things can prevent the compiler from using `const` for optimisation, including receiving data from another scope using a pointer, or allocating data on the heap. Even worse, in most cases where `const` can be used by the compiler, it’s not even necessary. For example, any decent compiler can figure out that `x` is constant in the following code, even without `const`:
-
-```
-int x = 42, y = 0;
-printf("%d %d\n", x, y);
-y += x;
-printf("%d %d\n", x, y);
-```
-
-TL;DR: `const` is almost useless for optimisation because
-
- 1. Except for special cases, the compiler has to ignore it because other code might legally cast it away
- 2. In most of the exceptions to #1, the compiler can figure out a variable is constant, anyway
-
-
-
-### C++
-
-There’s another way `const` can affect code generation if you’re using C++: function overloads. You can have `const` and non-`const` overloads of the same function, and maybe the non-`const` can be optimised (by the programmer, not the compiler) to do less copying or something.
-
-```
-void foo(int *p)
-{
- // Needs to do more copying of data
-}
-
-void foo(const int *p)
-{
- // Doesn't need defensive copies
-}
-
-int main()
-{
- const int x = 42;
- // const-ness affects which overload gets called
- foo(&x);
- return 0;
-}
-```
-
-On the one hand, I don’t think this is exploited much in practical C++ code. On the other hand, to make a real difference, the programmer has to make assumptions that the compiler can’t make because they’re not guaranteed by the language.
-
-### An experiment with Sqlite3
-
-That’s enough theory and contrived examples. How much effect does `const` have on a real codebase? I thought I’d do a test on the Sqlite database (version 3.30.0) because
-
- * It actually uses `const`
- * It’s a non-trivial codebase (over 200KLOC)
- * As a database, it includes a range of things from string processing to arithmetic to date handling
- * It can be tested with CPU-bound loads
-
-
-
-Also, the author and contributors have put years of effort into performance optimisation already, so I can assume they haven’t missed anything obvious.
-
-#### The setup
-
-I made two copies of [the source code][2] and compiled one normally. For the other copy, I used this hacky preprocessor snippet to turn `const` into a no-op:
-
-```
-#define const
-```
-
-(GNU) `sed` can add that to the top of each file with something like `sed -i '1i#define const' *.c *.h`.
-
-Sqlite makes things slightly more complicated by generating code using scripts at build time. Fortunately, compilers make a lot of noise when `const` and non-`const` code are mixed, so it was easy to detect when this happened, and tweak the scripts to include my anti-`const` snippet.
-
-Directly diffing the compiled results is a bit pointless because a tiny change can affect the whole memory layout, which can change pointers and function calls throughout the code. Instead I took a fingerprint of the disassembly (`objdump -d libsqlite3.so.0.8.6`), using the binary size and mnemonic for each instruction. For example, this function:
-
-```
-000000000005d570 :
- 5d570: 4c 8d 05 59 a2 ff ff lea -0x5da7(%rip),%r8 # 577d0
- 5d577: e9 04 fe ff ff jmpq 5d380
- 5d57c: 0f 1f 40 00 nopl 0x0(%rax)
-```
-
-would turn into something like this:
-
-```
-sqlite3_blob_read 7lea 5jmpq 4nopl
-```
-
-I left all the Sqlite build settings as-is when compiling anything.
-
-#### Analysing the compiled code
-
-The `const` version of libsqlite3.so was 4,740,704 bytes, about 0.1% larger than the 4,736,712 bytes of the non-`const` version. Both had 1374 exported functions (not including low-level helpers like stuff in the PLT), and a total of 13 had any difference in fingerprint.
-
-A few of the changes were because of the dumb preprocessor hack. For example, here’s one of the changed functions (with some Sqlite-specific definitions edited out):
-
-```
-#define LARGEST_INT64 (0xffffffff|(((int64_t)0x7fffffff)<<32))
-#define SMALLEST_INT64 (((int64_t)-1) - LARGEST_INT64)
-
-static int64_t doubleToInt64(double r){
- /*
- ** Many compilers we encounter do not define constants for the
- ** minimum and maximum 64-bit integers, or they define them
- ** inconsistently. And many do not understand the "LL" notation.
- ** So we define our own static constants here using nothing
- ** larger than a 32-bit integer constant.
- */
- static const int64_t maxInt = LARGEST_INT64;
- static const int64_t minInt = SMALLEST_INT64;
-
- if( r<=(double)minInt ){
- return minInt;
- }else if( r>=(double)maxInt ){
- return maxInt;
- }else{
- return (int64_t)r;
- }
-}
-```
-
-Removing `const` makes those constants into `static` variables. I don’t see why anyone who didn’t care about `const` would make those variables `static`. Removing both `static` and `const` makes GCC recognise them as constants again, and we get the same output. Three of the 13 functions had spurious changes because of local `static const` variables like this, but I didn’t bother fixing any of them.
-
-Sqlite uses a lot of global variables, and that’s where most of the real `const` optimisations came from. Typically they were things like a comparison with a variable being replaced with a constant comparison, or a loop being partially unrolled a step. (The [Radare toolkit][3] was handy for figuring out what the optimisations did.) A few changes were underwhelming. `sqlite3ParseUri()` is 487 instructions, but the only difference `const` made was taking this pair of comparisons:
-
-```
-test %al, %al
-je
-cmp $0x23, %al
-je
-```
-
-And swapping their order:
-
-```
-cmp $0x23, %al
-je
-test %al, %al
-je
-```
-
-#### Benchmarking
-
-Sqlite comes with a performance regression test, so I tried running it a hundred times for each version of the code, still using the default Sqlite build settings. Here are the timing results in seconds:
-
-| | const | No const
----|---|---
-Minimum | 10.658s | 10.803s
-Median | 11.571s | 11.519s
-Maximum | 11.832s | 11.658s
-Mean | 11.531s | 11.492s
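-
-(For reference: a harness along these lines reproduces this kind of measurement. It is a sketch only; `./speedtest` is a hypothetical stand-in for the benchmark binary.)
-
-```
-# run the benchmark 100 times, logging GNU time's wall-clock seconds
-for i in $(seq 100); do
-    /usr/bin/time -f '%e' ./speedtest 2>> times.txt
-done
-sort -n times.txt | sed -n '1p;50p;100p'   # min, ~median, max
-```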
-
-Personally, I’m not seeing enough evidence of a difference worth caring about. I mean, I removed `const` from the entire program, so if it made a significant difference, I’d expect it to be easy to see. But maybe you care about any tiny difference because you’re doing something absolutely performance critical. Let’s try some statistical analysis.
-
-I like using the Mann-Whitney U test for stuff like this. It’s similar to the more-famous t test for detecting differences in groups, but it’s more robust to the kind of complex random variation you get when timing things on computers (thanks to unpredictable context switches, page faults, etc). Here’s the result:
-
-| | const | No const
----|---|---
-N | 100 | 100
-Mean rank | 121.38 | 79.62
-Mann-Whitney U | 2912 |
-Z | -5.10 |
-2-sided p value | <10^-6 |
-HL median difference | -0.056s |
-95% confidence interval | -0.077s – -0.038s |
-
-The U test has detected a statistically significant difference in performance. But, surprise, it’s actually the non-`const` version that’s faster — by about 60ms, or 0.5%. It seems like the small number of “optimisations” that `const` enabled weren’t worth the cost of extra code. It’s not like `const` enabled any major optimisations like auto-vectorisation. Of course, your mileage may vary with different compiler flags, or compiler versions, or codebases, or whatever, but I think it’s fair to say that if `const` were effective at improving C performance, we’d have seen it by now.
-
-### So, what’s `const` for?
-
-For all its flaws, C/C++ `const` is still useful for type safety. In particular, combined with C++ move semantics and `std::unique_ptr`s, `const` can make pointer ownership explicit. Pointer ownership ambiguity was a huge pain in old C++ codebases over ~100KLOC, so personally I’m grateful for that alone.
-
-However, I used to go beyond using `const` for meaningful type safety. I’d heard it was best practice to use `const` literally as much as possible for performance reasons. I’d heard that when performance really mattered, it was important to refactor code to add more `const`, even in ways that made it less readable. That made sense at the time, but I’ve since learned that it’s just not true.
-
---------------------------------------------------------------------------------
-
-via: https://theartofmachinery.com/2019/08/12/c_const_isnt_for_performance.html
-
-作者:[Simon Arneaud][a]
-选题:[lujun9972][b]
-译者:[LazyWolfLin](https://github.com/LazyWolfLin)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://theartofmachinery.com
-[b]: https://github.com/lujun9972
-[1]: https://theartofmachinery.com/2019/04/05/d_as_c_replacement.html#const-and-immutable
-[2]: https://sqlite.org/src/doc/trunk/README.md
-[3]: https://rada.re/r/
diff --git a/sources/tech/20190823 The Linux kernel- Top 5 innovations.md b/sources/tech/20190823 The Linux kernel- Top 5 innovations.md
deleted file mode 100644
index 95e35bc309..0000000000
--- a/sources/tech/20190823 The Linux kernel- Top 5 innovations.md
+++ /dev/null
@@ -1,105 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (The Linux kernel: Top 5 innovations)
-[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
-[#]: author: (Seth Kenlon https://opensource.com/users/seth)
-
-The Linux kernel: Top 5 innovations
-======
-Want to know what the actual (not buzzword) innovations are when it
-comes to the Linux kernel? Read on.
-![Penguin with green background][1]
-
-The word _innovation_ gets bandied about in the tech industry almost as much as _revolution_, so it can be difficult to differentiate hyperbole from something that’s actually exciting. The Linux kernel has been called innovative, but then again it’s also been called the biggest hack in modern computing, a monolith in a micro world.
-
-Setting aside marketing and modeling, Linux is arguably the most popular kernel of the open source world, and it’s introduced some real game-changers over its nearly 30-year life span.
-
-### Cgroups (2.6.24)
-
-Back in 2007, Paul Menage and Rohit Seth got the esoteric [_control groups_ (cgroups)][2] feature added to the kernel (the current implementation of cgroups is a rewrite by Tejun Heo.) This new technology was initially used as a way to ensure, essentially, quality of service for a specific set of tasks.
-
-For example, you could create a control group definition (cgroup) for all tasks associated with your web server, another cgroup for routine backups, and yet another for general operating system requirements. You could then control a percentage of resources for each cgroup, such that your OS and web server get the bulk of system resources while your backup processes have access to whatever is left.
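-
-As a flavor of the interface, here is a minimal sketch using the cgroup v1 filesystem (the group name "backups" and the mount point are assumptions; systems on cgroup v2 expose different file names):
-
-```
-# create a cgroup and cap it at roughly 25% of one CPU (cgroup v1 interface)
-sudo mkdir /sys/fs/cgroup/cpu/backups
-echo 100000 | sudo tee /sys/fs/cgroup/cpu/backups/cpu.cfs_period_us
-echo 25000  | sudo tee /sys/fs/cgroup/cpu/backups/cpu.cfs_quota_us
-echo $$     | sudo tee /sys/fs/cgroup/cpu/backups/cgroup.procs   # move this shell in
-```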
-
-What cgroups has become most famous for, though, is its role as the technology driving the cloud today: containers. In fact, cgroups were originally named [process containers][3]. It was no great surprise when they were adopted by projects like [LXC][4], [CoreOS][5], and Docker.
-
-The floodgates being opened, the term _containers_ justly became synonymous with Linux, and the concept of microservice-style cloud-based “apps” quickly became the norm. These days, it’s hard to get away from cgroups, they’re so prevalent. Every large-scale infrastructure (and probably your laptop, if you run Linux) takes advantage of cgroups in a meaningful way, making your computing experience more manageable and more flexible than ever.
-
-For example, you might already have installed [Flathub][6] or [Flatpak][7] on your computer, or maybe you’ve started using [Kubernetes][8] and/or [OpenShift][9] at work. Regardless, if the term “containers” is still hazy for you, you can gain a hands-on understanding of containers from [Behind the scenes with Linux containers][10].
-
-### LKMM (4.17)
-
-In 2018, the hard work of Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, and several others, got merged into the mainline Linux kernel to provide formal memory models. The Linux Kernel Memory [Consistency] Model (LKMM) subsystem is a set of tools describing the Linux memory coherency model, as well as producing _litmus tests_ (**klitmus**, specifically) for testing.
-
-As systems become more complex in physical design (more CPU cores are added, cache and RAM grow, and so on), it becomes harder for them to know which address space is required by which CPU, and when. For example, if CPU0 needs to write data to a shared variable in memory, and CPU1 needs to read that value, then CPU0 must write before CPU1 attempts to read. Similarly, if values are written in one order to memory, then there’s an expectation that they are also read in that same order, regardless of which CPU or CPUs are doing the reading.
-
-Even on a single CPU, memory management requires a specific task order. A simple action such as **x = y** requires a CPU to load the value of **y** from memory, and then store that value in **x**. Placing the value stored in **y** into the **x** variable cannot occur _before_ the CPU has read the value from memory. There are also address dependencies: **x[n] = 6** requires that **n** is loaded before the CPU can store the value of six.
-
-LKMM helps identify and trace these memory patterns in code. It does this in part with a tool called **herd**, which defines the constraints imposed by a memory model (in the form of logical axioms), and then enumerates all possible outcomes consistent with these constraints.
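-
-As a sketch of what running such a check looks like (assuming the herdtools7 suite is installed; the litmus test named here is one of the samples shipped in the kernel’s tools/memory-model directory):
-
-```
-# from the root of a Linux kernel source tree (4.17 or later)
-cd tools/memory-model
-herd7 -conf linux-kernel.cfg litmus-tests/SB+poonceonces.litmus
-```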
-
-### Low-latency patch (2.6.38)
-
-Long ago, in the days before 2011, if you wanted to do "serious" [multimedia work on Linux][11], you had to obtain a low-latency kernel. This mostly applied to [audio recording][12] while adding lots of real-time effects (such as singing into a microphone and adding reverb, and hearing your voice in your headset with no noticeable delay). There were distributions, such as [Ubuntu Studio][13], that reliably provided such a kernel, so in practice it wasn't much of a hurdle, just a significant caveat when choosing your distribution as an artist.
-
-However, if you weren’t using Ubuntu Studio, or you had some need to update your kernel before your distribution got around to it, you had to go to the rt-patches web page, download the kernel patches, apply them to your kernel source code, compile, and install manually.
-
-And then, with the release of kernel version 2.6.38, this process was all over. The Linux kernel suddenly, as if by magic, had low-latency code (according to benchmarks, latency decreased by a factor of 10, at least) built-in by default. No more downloading patches, no more compiling. Everything just worked, and all because of a small 200-line patch implemented by Mike Galbraith.
-
-For open source multimedia artists the world over, it was a game-changer. Things got so good from 2011 on that in 2016, I challenged myself to [build a Digital Audio Workstation (DAW) on a Raspberry Pi v1 (model B)][14] and found that it worked surprisingly well.
-
-### RCU (2.5)
-
-RCU, or Read-Copy-Update, is a system defined in computer science that allows multiple processor threads to read from shared memory. It does this by deferring updates, but also marking them as updated, to ensure that the data’s consumers read the latest version. Effectively, this means that reads happen concurrently with updates.
-
-The typical RCU cycle is a little like this:
-
- 1. Remove pointers to data to prevent other readers from referencing it.
- 2. Wait for readers to complete their critical processes.
- 3. Reclaim the memory space.
-
-
-
-Dividing the update stage into removal and reclamation phases means the updater performs the removal immediately while deferring reclamation until all active readers are complete (either by blocking them or by registering a callback to be invoked upon completion).
-
-While the concept of read-copy-update was not invented for the Linux kernel, its implementation in Linux is a defining example of the technology.
-
-### Collaboration (0.01)
-
-The final answer to the question of what the Linux kernel innovated will always be, above all else, collaboration. Call it good timing, call it technical superiority, call it hackability, or just call it open source, but the Linux kernel and the many projects that it enabled are a glowing example of collaboration and cooperation.
-
-And it goes well beyond just the kernel. People from all walks of life have contributed to open source, arguably _because_ of the Linux kernel. Linux was, and remains to this day, a major force of [Free Software][15], inspiring users to bring their code, art, ideas, or just themselves, to a global, productive, and diverse community of humans.
-
-### What’s your favorite innovation?
-
-This list is biased toward my own interests: containers, non-uniform memory access (NUMA), and multimedia. I’ve surely left your favorite kernel innovation off the list. Tell me about it in the comments!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
-[2]: https://en.wikipedia.org/wiki/Cgroups
-[3]: https://lkml.org/lkml/2006/10/20/251
-[4]: https://linuxcontainers.org
-[5]: https://coreos.com/
-[6]: http://flathub.org
-[7]: http://flatpak.org
-[8]: http://kubernetes.io
-[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
-[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
-[11]: http://slackermedia.info
-[12]: https://opensource.com/article/17/6/qtractor-audio
-[13]: http://ubuntustudio.org
-[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
-[15]: http://fsf.org
diff --git a/sources/tech/20190824 How to compile a Linux kernel in the 21st century.md b/sources/tech/20190824 How to compile a Linux kernel in the 21st century.md
index 0740c0b3a0..5821826706 100644
--- a/sources/tech/20190824 How to compile a Linux kernel in the 21st century.md
+++ b/sources/tech/20190824 How to compile a Linux kernel in the 21st century.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (luming)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/tech/20190827 curl exercises.md b/sources/tech/20190827 curl exercises.md
new file mode 100644
index 0000000000..36eae2743b
--- /dev/null
+++ b/sources/tech/20190827 curl exercises.md
@@ -0,0 +1,84 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (curl exercises)
+[#]: via: (https://jvns.ca/blog/2019/08/27/curl-exercises/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+curl exercises
+======
+
+Recently I’ve been interested in how people learn things. I was reading Kathy Sierra’s great book [Badass: Making Users Awesome][1]. It talks about the idea of _deliberate practice_.
+
+The idea is that you find a small micro-skill that can be learned in maybe 3 sessions of 45 minutes, and focus on learning that micro-skill. So, as an exercise, I was trying to think of a computer skill that I thought could be learned in 3 45-minute sessions.
+
+I thought that making HTTP requests with `curl` might be a skill like that, so here are some curl exercises as an experiment!
+
+### what’s curl?
+
+curl is a command line tool for making HTTP requests. I like it because it’s an easy way to test that servers or APIs are doing what I think, but it’s a little confusing at first!
+
+Here’s a drawing explaining curl’s most important command line arguments (which is page 6 of my [Bite Size Networking][2] zine).
+
+
+
+### fluency is valuable
+
+With any command line tool, I think having fluency is really helpful. It’s really nice to be able to just type in the thing you need. For example recently I was testing out the Gumroad API and I was able to just type in:
+
+```
+curl https://api.gumroad.com/v2/sales \
+ -d "access_token=" \
+ -X GET -d "before=2016-09-03"
+```
+
+and get things working from the command line.
+
+### 21 curl exercises
+
+These exercises are about understanding how to make different kinds of HTTP requests with curl. They’re a little repetitive on purpose. They exercise basically everything I do with curl.
+
+To keep it simple, we’re going to make a lot of our requests to the same website: https://httpbin.org. httpbin is a service that accepts HTTP requests and then tells you what request you made.
+
+ 1. Request https://httpbin.org
+ 2. Request https://httpbin.org/anything. httpbin.org/anything will look at the request you made, parse it, and echo back to you what you requested. curl’s default is to make a GET request.
+ 3. Make a POST request to https://httpbin.org/anything
+ 4. Make a GET request to https://httpbin.org/anything, but this time add some query parameters (set `value=panda`).
+ 5. Request google’s robots.txt file ([www.google.com/robots.txt][3])
+ 6. Make a GET request to https://httpbin.org/anything and set the header `User-Agent: elephant`.
+ 7. Make a DELETE request to https://httpbin.org/anything
+ 8. Request https://httpbin.org/anything and also get the response headers
+ 9. Make a POST request to https://httpbin.org/anything with the JSON body `{"value": "panda"}`
+ 10. Make the same POST request as the previous exercise, but set the Content-Type header to `application/json` (because POST requests need to have a content type that matches their body). Look at the `json` field in the response to see the difference from the previous one. (Sample commands for this pair appear after the list.)
+ 11. Make a GET request to https://httpbin.org/anything and set the header `Accept-Encoding: gzip` (what happens? why?)
+ 12. Put a bunch of JSON in a file and then make a POST request to https://httpbin.org/anything with the JSON in that file as the body
+ 13. Make a request to https://httpbin.org/image and set the header ‘Accept: image/png’. Save the output to a PNG file and open the file in an image viewer. Try the same thing with different `Accept:` headers.
+ 14. Make a PUT request to https://httpbin.org/anything
+ 15. Request https://httpbin.org/image/jpeg, save it to a file, and open that file in your image editor.
+ 16. Request https://www.twitter.com. You’ll get an empty response. Get curl to show you the response headers too, and try to figure out why the response was empty.
+ 17. Make any request to https://httpbin.org/anything and just set some nonsense headers (like `panda: elephant`)
+ 18. Request and . Request them again and get curl to show the response headers.
+ 19. Request https://httpbin.org/anything and set a username and password (with `-u username:password`)
+ 20. Download the Twitter homepage (https://twitter.com) in Spanish by setting the `Accept-Language: es-ES` header.
+ 21. Make a request to the Stripe API with curl (see Stripe’s API documentation for how; they give you a test API key). Try making exactly the same request to https://httpbin.org/anything.
+
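+If you want to check your work on exercises 9 and 10, one possible shape is (a sketch; httpbin echoes the request back, so compare the `json` field in the two responses):
+
+```
+# exercise 9: POST a JSON-looking body (curl labels it as form data by default)
+curl -X POST https://httpbin.org/anything -d '{"value": "panda"}'
+
+# exercise 10: same body, but declared as JSON via the Content-Type header
+curl -X POST https://httpbin.org/anything \
+  -H 'Content-Type: application/json' \
+  -d '{"value": "panda"}'
+```
+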
+
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/08/27/curl-exercises/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://www.amazon.com/Badass-Making-Awesome-Kathy-Sierra/dp/1491919019
+[2]: https://wizardzines.com/zines/bite-size-networking
+[3]: http://www.google.com/robots.txt
diff --git a/sources/tech/20190828 Managing Ansible environments on MacOS with Conda.md b/sources/tech/20190828 Managing Ansible environments on MacOS with Conda.md
deleted file mode 100644
index 7aa3a4181b..0000000000
--- a/sources/tech/20190828 Managing Ansible environments on MacOS with Conda.md
+++ /dev/null
@@ -1,174 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Managing Ansible environments on MacOS with Conda)
-[#]: via: (https://opensource.com/article/19/8/using-conda-ansible-administration-macos)
-[#]: author: (James Farrell https://opensource.com/users/jamesf)
-
-Managing Ansible environments on MacOS with Conda
-======
-Conda corrals everything you need for Ansible into a virtual environment
-and keeps it separate from your other projects.
-![CICD with gears][1]
-
-If you are a Python developer using MacOS and involved with Ansible administration, you may want to use the Conda package manager to keep your Ansible work separate from your core OS and other local projects.
-
-Ansible is based on Python. Conda is not required to make Ansible work on MacOS, but it does make managing Python versions and package dependencies easier. This allows you to use an upgraded Python version on MacOS and keep Python package dependencies separate between your system, Ansible, and other programming projects.
-
-There are other ways to install Ansible on MacOS. You could use [Homebrew][2], but if you are into Python development (or Ansible development), you might find managing Ansible in a Python virtual environment reduces some confusion. I find this to be simpler; rather than trying to load a Python version and dependencies into the system or in **/usr/local**, Conda helps me corral everything I need for Ansible into a virtual environment and keep it all completely separate from other projects.
-
-This article focuses on using Conda to manage Ansible as a Python project to keep it clean and separated from other projects. Read on to learn how to install Conda, create a new virtual environment, install Ansible, and test it.
-
-### Prelude
-
-Recently, I wanted to learn [Ansible][3], so I needed to figure out the best way to install it.
-
-I am generally wary of installing things into my daily use workstation. I especially dislike applying manual updates to the vendor's default OS installation (a preference I developed from years of Unix system administration). I really wanted to use Python 3.7, but MacOS packages the older 2.7, and I was not going to install any global Python packages that might interfere with the core MacOS system.
-
-So, I started my Ansible work using a local Ubuntu 18.04 virtual machine. This provided a real level of safe isolation, but I soon found that managing it was tedious. I set out to see how to get a flexible but isolated Ansible system on native MacOS.
-
-Since Ansible is based on Python, Conda seemed to be the ideal solution.
-
-### Installing Conda
-
-Conda is an open source utility that provides convenient package- and environment-management features. It can help you manage multiple versions of Python, install package dependencies, perform upgrades, and maintain project isolation. If you are manually managing Python virtual environments, Conda will help streamline and manage your work. Surf on over to the [Conda documentation][4] for all the details.
-
-I chose the [Miniconda][5] Python 3.7 installation for my workstation because I wanted the latest Python version. Regardless of which version you select, you can always install new virtual environments with other versions of Python.
-
-To install Conda, download the PKG format file, do the usual double-click, and select the "Install for me only" option. The install took about 158MB of space on my system.
-
-After the installation, bring up a terminal to see what you have. You should see:
-
- * A new **miniconda3** directory in your **home**
- * The shell prompt modified to prepend the word "(base)"
- * **.bash_profile** updated with Conda-specific settings
-
-
-
-Now that the base is installed, you have your first Python virtual environment. Running the usual Python version check should prove this, and your PATH will point to the new location:
-
-
-```
-(base) $ which python
-/Users/jfarrell/miniconda3/bin/python
-(base) $ python --version
-Python 3.7.1
-```
-
-Now that Conda is installed, the next step is to set up a virtual environment, then get Ansible installed and running.
-
-### Creating a virtual environment for Ansible
-
-I want to keep Ansible separate from my other Python projects, so I created a new virtual environment and switched over to it:
-
-
-```
-(base) $ conda create --name ansible-env --clone base
-(base) $ conda activate ansible-env
-(ansible-env) $ conda env list
-```
-
-The first command clones the Conda base into a new virtual environment called **ansible-env**. The clone brings in the Python 3.7 version and a bunch of default Python modules that you can add to, remove, or upgrade as needed.
-
-The second command changes the shell context to this new **ansible-env** environment. It sets the proper paths for Python and the modules it contains. Notice that your shell prompt changes after the **conda activate ansible-env** command.
-
-The third command is not required; it lists what Python modules are installed with their version and other data.
-
-You can always switch out of a virtual environment and into another with Conda's **activate** command. This will bring you back to the base: **conda activate base**.
-
-### Installing Ansible
-
-There are various ways to install Ansible, but using Conda keeps the Ansible version and all desired dependencies packaged in one place. Conda provides the flexibility both to keep everything separated and to add in other new environments as needed (as I'll demonstrate later).
-
-To install a relatively recent version of Ansible, use:
-
-
-```
-(base) $ conda activate ansible-env
-(ansible-env) $ conda install -c conda-forge ansible
-```
-
-Since Ansible is not part of Conda's default channels, the **-c** is used to search and install from an alternate channel. Ansible is now installed into the **ansible-env** virtual environment and is ready to use.
-
-### Using Ansible
-
-Now that you have installed a Conda virtual environment, you're ready to use it. First, make sure the node you want to control has your workstation's SSH key installed to the right user account.
-
-Bring up a new shell and run some basic Ansible commands:
-
-
-```
-(base) $ conda activate ansible-env
-(ansible-env) $ ansible --version
-ansible 2.8.1
- config file = None
- configured module search path = ['/Users/jfarrell/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
- ansible python module location = /Users/jfarrell/miniconda3/envs/ansibleTest/lib/python3.7/site-packages/ansible
- executable location = /Users/jfarrell/miniconda3/envs/ansibleTest/bin/ansible
- python version = 3.7.1 (default, Dec 14 2018, 13:28:58) [Clang 4.0.1 (tags/RELEASE_401/final)]
-(ansible-env) $ ansible all -m ping -u ansible
-192.168.99.200 | SUCCESS => {
- "ansible_facts": {
- "discovered_interpreter_python": "/usr/bin/python"
- },
- "changed": false,
- "ping": "pong"
-}
-```
-
-Now that Ansible is working, you can pull your playbooks out of source control and start using them from your MacOS workstation.
-
-### Cloning the new Ansible for Ansible development
-
-This part is purely optional; it's only needed if you want additional virtual environments to modify Ansible or to safely experiment with questionable Python modules. You can clone your main Ansible environment into a development copy with:
-
-
-```
-(ansible-env) $ conda create --name ansible-dev --clone ansible-env
-(ansible-env) $ conda activate ansible-dev
-(ansible-dev) $
-```
-
-### Gotchas to look out for
-
-Occasionally you may get into trouble with Conda. You can usually delete a bad environment with:
-
-
-```
-$ conda activate base
-$ conda remove --name ansible-dev --all
-```
-
-If you get errors that you cannot resolve, you can usually delete the environment directly by finding it in **~/miniconda3/envs** and removing the entire directory. If the base becomes corrupt, you can remove the entire **~/miniconda3** directory and reinstall it from the PKG file. Just be sure to preserve any desired environments you have in **~/miniconda3/envs**, or use the Conda tools to dump the environment configuration and recreate it later.
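-
-A sketch of that dump-and-recreate cycle (the file name is an arbitrary choice):
-
-```
-(base) $ conda env export --name ansible-env > ansible-env.yml   # dump the configuration
-(base) $ conda env create --file ansible-env.yml                 # recreate it later
-```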
-
-The **sshpass** program is not included on MacOS. It is needed only if your Ansible work requires you to supply Ansible with an SSH login password. You can find the current [sshpass source][6] on SourceForge.
-
-Finally, the base Conda Python module list may lack some Python modules you need for your work. If you need to install one, the **conda install <package>** command is preferred, but **pip** can be used where needed, and Conda will recognize the installed modules.
-
-### Conclusion
-
-Ansible is a powerful automation utility that's worth all the effort to learn. Conda is a simple and effective Python virtual environment management tool.
-
-Keeping software installs separated on your MacOS environment is a prudent approach to maintain stability and sanity with your daily work environment. Conda can be especially helpful to upgrade your Python version, separate Ansible from your other projects, and safely hack on Ansible.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/8/using-conda-ansible-administration-macos
-
-作者:[James Farrell][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/jamesf
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
-[2]: https://brew.sh/
-[3]: https://docs.ansible.com/?extIdCarryOver=true&sc_cid=701f2000001OH6uAAG
-[4]: https://conda.io/projects/conda/en/latest/index.html
-[5]: https://docs.conda.io/en/latest/miniconda.html
-[6]: https://sourceforge.net/projects/sshpass/
diff --git a/sources/tech/20190830 How to Create and Use Swap File on Linux.md b/sources/tech/20190830 How to Create and Use Swap File on Linux.md
deleted file mode 100644
index bfda3bcdbe..0000000000
--- a/sources/tech/20190830 How to Create and Use Swap File on Linux.md
+++ /dev/null
@@ -1,261 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (hello-wn)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to Create and Use Swap File on Linux)
-[#]: via: (https://itsfoss.com/create-swap-file-linux/)
-[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
-
-How to Create and Use Swap File on Linux
-======
-
-This tutorial discusses the concept of a swap file in Linux, why it is used, and its advantages over the traditional swap partition. You’ll learn how to create a swap file or resize it.
-
-### What is a swap file in Linux?
-
-A swap file allows Linux to simulate the disk space as RAM. When your system starts running out of RAM, it swaps some of the content of RAM onto the disk. This frees up the RAM to serve more important processes. When the RAM is free again, it swaps back the data from the disk. I recommend [reading this article to learn more about swap on Linux][1].
-
-Traditionally, swap space is used as a separate partition on the disk. When you install Linux, you create a separate partition just for swap. But this trend has changed in recent years.
-
-With a swap file, you don’t need a separate partition anymore. You create a file under root and tell your system to use it as the swap space.
-
-With a dedicated swap partition, resizing the swap space is a nightmare and, in many cases, an impossible task. But with swap files, you can resize them as you like.
-
-Recent versions of Ubuntu and some other Linux distributions have started [using the swap file by default][2]. Even if you don’t create a swap partition, Ubuntu creates a swap file of around 1 GB on its own.
-
-Let’s see some more on swap files.
-
-![][3]
-
-### Check swap space in Linux
-
-Before you go and start adding swap space, it would be a good idea to check whether you have swap space already available in your system.
-
-You can check it with the [free command in Linux][4]. In my case, my [Dell XPS][5] has 14GB of swap.
-
-```
-free -h
- total used free shared buff/cache available
-Mem: 7.5G 4.1G 267M 971M 3.1G 2.2G
-Swap: 14G 0B 14G
-```
-
-The free command gives you the size of the swap space but it doesn’t tell you if it’s a real swap partition or a swap file. The swapon command is better in this regard.
-
-```
-swapon --show
-NAME TYPE SIZE USED PRIO
-/dev/nvme0n1p4 partition 14.9G 0B -2
-```
-
-As you can see, I have 14.9 GB of swap space and it’s on a separate partition. If it was a swap file, the type would have been file instead of partition.
-
-```
-swapon --show
-NAME TYPE SIZE USED PRIO
-/swapfile file 2G 0B -2
-```
-
-If you don’t have swap space on your system, it should show something like this:
-
-```
-free -h
- total used free shared buff/cache available
-Mem: 7.5G 4.1G 267M 971M 3.1G 2.2G
-Swap: 0B 0B 0B
-```
-
-The swapon command won’t show any output.
-
-### Create swap file on Linux
-
-If your system doesn’t have swap space or if you think the swap space is not adequate, you can create a swap file on Linux. You can create multiple swap files as well.
-
-Let’s see how to create a swap file on Linux. I am using Ubuntu 18.04 in this tutorial but it should work on other Linux distributions as well.
-
-#### Step 1: Make a new swap file
-
-First things first: create a file of the size you want for your swap space. Let’s say that I want to add 1 GB of swap space to my system. Use the fallocate command to create a file of size 1 GB.
-
-```
-sudo fallocate -l 1G /swapfile
-```
-
-It is recommended to allow only root to read and write to the swap file. You’ll even see a warning like “insecure permissions 0644, 0600 suggested” when you try to use this file as a swap area.
-
-```
-sudo chmod 600 /swapfile
-```
-
-Do note that the name of the swap file could be anything. If you need multiple swap spaces, you can give it any appropriate name like swap_file_1, swap_file_2 etc. It’s just a file with a predefined size.
-
-#### Step 2: Mark the new file as swap space
-
-You need to tell the Linux system that this file will be used as swap space. You can do that with the [mkswap][7] tool.
-
-```
-sudo mkswap /swapfile
-```
-
-You should see an output like this:
-
-```
-Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
-no label, UUID=7e1faacb-ea93-4c49-a53d-fb40f3ce016a
-```
-
-#### Step 3: Enable the swap file
-
-Now your system knows that the file swapfile can be used as swap space. But it is not done yet. You need to enable the swap file so that your system can start using this file as swap.
-
-```
-sudo swapon /swapfile
-```
-
-Now if you check the swap space, you should see that your Linux system recognizes and uses it as the swap area:
-
-```
-swapon --show
-NAME TYPE SIZE USED PRIO
-/swapfile file 1024M 0B -2
-```
-
-#### Step 4: Make the changes permanent
-
-Whatever you have done so far is temporary. Reboot your system and all the changes will disappear.
-
-You can make the changes permanent by adding the newly created swap file to the /etc/fstab file.
-
-It’s always a good idea to make a backup before you make any changes to the /etc/fstab file.
-
-```
-sudo cp /etc/fstab /etc/fstab.back
-```
-
-Now you can add the following line to the end of /etc/fstab file:
-
-```
-/swapfile none swap sw 0 0
-```
-
-You can do it manually using a [command line text editor][8] or just use the following command:
-
-```
-echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
-```
-
-Now you have everything in place. Your swap file will be used even after you reboot your Linux system.
-
-### Adjust swappiness
-
-The swappiness parameter determines how often the swap space should be used. The swappiness value ranges from 0 to 100. A higher value means the swap space will be used more frequently.
-
-The default swappiness on Ubuntu desktop is 60, while on servers it is 1. You can check the swappiness with the following command:
-
-```
-cat /proc/sys/vm/swappiness
-```
-
-Why should servers use a low swappiness? Because swap is slower than RAM, and for better performance, RAM should be utilized as much as possible. On servers, performance is crucial, and hence the swappiness is kept as low as possible.
-
-
-You can change the swappiness on the fly using the sysctl command:
-
-```
-sudo sysctl vm.swappiness=25
-```
-
-This change is only temporary though. If you want to make it permanent, you can edit the /etc/sysctl.conf file and add the swappiness value at the end of the file:
-
-```
-vm.swappiness=25
-```
-
-### Resizing swap space on Linux
-
-There are a couple of ways you can resize the swap space on Linux. But before you see that, you should learn a few things around it.
-
-When you ask your system to stop using a swap file for swap area, it transfers all the data (pages to be precise) back to RAM. So you should have enough free RAM before you swap off.
-
-This is why a good practice is to create and enable another temporary swap file. This way, when you swap off the original swap area, your system will use the temporary swap file. Now you can resize the original swap space. You can manually remove the temporary swap file or leave it as it is and it will be automatically deleted on the next boot.
-
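-A sketch of that temporary swap file approach, reusing the same commands from earlier (the name /tmp_swapfile is arbitrary):
-
-```
-sudo fallocate -l 1G /tmp_swapfile
-sudo chmod 600 /tmp_swapfile
-sudo mkswap /tmp_swapfile
-sudo swapon /tmp_swapfile
-```
-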
-If you have enough free RAM or if you created a temporary swap space, swapoff your original file.
-
-```
-sudo swapoff /swapfile
-```
-
-Now you can use fallocate command to change the size of the file. Let’s say, you change it to 2 GB in size:
-
-```
-sudo fallocate -l 2G /swapfile
-```
-
-Now mark the file as swap space again:
-
-```
-sudo mkswap /swapfile
-```
-
-And turn the swap on again:
-
-```
-sudo swapon /swapfile
-```
-
-You may also choose to have multiple swap files at the same time.
-
-### Removing swap file in Linux
-
-You may have your reasons for not using swap file on Linux. If you want to remove it, the process is similar to what you just saw in resizing the swap.
-
-First, make sure that you have enough free RAM. Now swap off the file:
-
-```
-sudo swapoff /swapfile
-```
-
-The next step is to remove the respective entry from the /etc/fstab file.
-
-And in the end, you can remove the file to free up the space:
-
-```
-sudo rm /swapfile
-```
-
-**Do you swap?**
-
-I think you now have a good understanding of the swap file concept in Linux. You can now easily create swap files or resize them as per your need.
-
-If you have anything to add on this topic or if you have any doubts, please leave a comment below.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/create-swap-file-linux/
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/swap-size/
-[2]: https://help.ubuntu.com/community/SwapFaq
-[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/swap-file-linux.png?resize=800%2C450&ssl=1
-[4]: https://linuxhandbook.com/free-command/
-[5]: https://itsfoss.com/dell-xps-13-ubuntu-review/
-[6]: https://itsfoss.com/fix-missing-system-settings-ubuntu-1404-quick-tip/
-[7]: http://man7.org/linux/man-pages/man8/mkswap.8.html
-[8]: https://itsfoss.com/command-line-text-editors-linux/
-[9]: https://itsfoss.com/replace-linux-from-dual-boot/
diff --git a/sources/tech/20190830 git exercises- navigate a repository.md b/sources/tech/20190830 git exercises- navigate a repository.md
new file mode 100644
index 0000000000..bfafd73d66
--- /dev/null
+++ b/sources/tech/20190830 git exercises- navigate a repository.md
@@ -0,0 +1,84 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (git exercises: navigate a repository)
+[#]: via: (https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+git exercises: navigate a repository
+======
+
+I think the [curl exercises][1] the other day went well, so today I woke up and wanted to try writing some Git exercises. Git is a big thing to learn, probably too big to learn in a few hours, so my first idea for how to break it down was by starting by **navigating** a repository.
+
+I was originally going to use a toy test repository, but then I thought – why not a real repository? That’s way more fun! So we’re going to navigate the repository for the Ruby programming language. You don’t need to know any C to do this exercise, it’s just about getting comfortable with looking at how files in a repository change over time.
+
+### clone the repository
+
+To get started, clone the repository:
+
+```
+git clone https://github.com/ruby/ruby
+```
+
+The big difference between this repository and most of the repositories you’ll work with in real life is that it doesn’t have branches, but it DOES have lots of tags, which are similar to branches in that they’re both just pointers to a commit. So we’ll do exercises with tags instead of branches. The way you _change_ tags and branches is very different, but the way you _look at_ tags and branches is exactly the same.
+
+### a git SHA always refers to the same code
+
+The most important thing to keep in mind while doing these exercises is that a git SHA like `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71` always refers to the same code, as explained in this page. This page is from a zine I wrote with Katie Sylor-Miller called [Oh shit, git!][2]. (She also has a great site called https://ohshitgit.com/ that inspired the zine).
+
+
+
+We’ll be using git SHAs really heavily in the exercises to get you used to working with them and to help understand how they correspond to tags and branches.
+
+### git subcommands we’ll be using
+
+All of these exercises only use 5 git subcommands:
+
+```
+git checkout
+git log (--oneline, --author, and -S will be useful)
+git diff (--stat will be useful)
+git show
+git status
+```
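+
+For example, here’s roughly what these look like in practice (a small sketch; run it inside the cloned repository, reusing a tag pair and a commit ID from the exercises below):
+
+```
+git log --oneline -5 -- hash.c          # recent commits touching one file
+git diff --stat v1_8_6_187 v1_8_6_188   # how much changed between two tags
+git show 3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4   # inspect a single commit
+```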
+
+### exercises
+
+ 1. Check out matz’s commit of Ruby from 1998. The commit ID is `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`. Find out how many lines of code Ruby was at that time.
+ 2. Check out the current master branch
+ 3. Look at the history for the file `hash.c`. What was the last commit ID that changed that file?
+ 4. Get a diff of how `hash.c` has changed in the last 20ish years: compare that file on the master branch to the file at commit `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`.
+ 5. Find a recent commit that changed `hash.c` and look at the diff for that commit
+ 6. This repository has a bunch of **tags** for every Ruby release. Get a list of all the tags.
+ 7. Find out how many files changed between tag `v1_8_6_187` and tag `v1_8_6_188`
+ 8. Find a commit (any commit) from 2015 and check it out, look at the files very briefly, then go back to the master branch.
+ 9. Find out what commit the tag `v1_8_6_187` corresponds to.
+ 10. List the directory `.git/refs/tags`. Run `cat .git/refs/tags/v1_8_6_187` to see the contents of one of those files.
+ 11. Find out what commit ID `HEAD` corresponds to right now.
+ 12. Find out how many commits have been made to the `test/` directory
+ 13. Get a diff of `lib/telnet.rb` between the commits `65a5162550f58047974793cdc8067a970b2435c0` and `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71`. How many lines of that file were changed?
+ 14. How many commits were made between Ruby 2.5.1 and 2.5.2 (tags `v2_5_1` and `v2_5_2`) (this one is a tiny bit tricky, there’s more than one step)
+ 15. How many commits were authored by `matz` (Ruby’s creator)?
+ 16. What’s the most recent commit that included the word `tkutil`?
+ 17. Check out the commit `e51dca2596db9567bd4d698b18b4d300575d3881` and create a new branch that points at that commit.
+ 18. Run `git reflog` to see all the navigating of the repository you’ve done so far
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://jvns.ca/blog/2019/08/27/curl-exercises/
+[2]: https://wizardzines.com/zines/oh-shit-git/
diff --git a/sources/tech/20190901 How to write zines with simple tools.md b/sources/tech/20190901 How to write zines with simple tools.md
new file mode 100644
index 0000000000..05b21f047e
--- /dev/null
+++ b/sources/tech/20190901 How to write zines with simple tools.md
@@ -0,0 +1,138 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to write zines with simple tools)
+[#]: via: (https://jvns.ca/blog/2019/09/01/ways-to-write-zines-without-fancy-tools/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+How to write zines with simple tools
+======
+
+People often ask me what tools I use to write my zines ([the answer is here][1]). Answering this question as written has always felt slightly off to me, though, and I couldn’t figure out why for a long time.
+
+I finally realized last week that instead of “what tools do you use to write zines?” some people may have actually wanted to know “how can I do this myself?”! And “buy a $500 iPad” is not a terribly useful answer to that question – it’s not how I got started, iPads are kind of a weird fancy way to write zines, and most people don’t have them.
+
+So this blog post is about more traditional (and easier to get started with) ways to write zines.
+
+We’re going to start out by talking about the mechanics of how to write the zine, and then talk about how to assemble it into a booklet.
+
+### Way 1: Write it on paper
+
+This is how I made my first zine (spying on your programs with strace).
+
+Here’s an example of a page I drew on paper this morning pretty quickly. It looks kind of bad because I scanned it with my phone, but if you use a real scanner (like I did with the strace PDF above), the scanned version comes out better.
+
+
+
+### Way 2: Use a Google doc
+
+The next option is to use a Google doc (or whatever other word processor you prefer). [Here’s the Google doc I wrote for the below image][2], and here’s what it looks like:
+
+
+
+The key thing about this Google doc approach is to apply some “less is more”. It’s intended to be printed as part of a booklet on **half** a sheet of letter paper, which means everything needs to be twice as big for it to look good.
+
+### Way 3: Use an iPad
+
+This is what I do (use the Notability app on iPad). I’m not going to talk about this method much because this post is about using more readily available tools.
+
+
+
+### Way 4: Use a single sheet of paper
+
+This is a subset of “Write it on paper” – the [Wikibooks page on zine making][3] has a great guide that shows how to write out a tiny zine on 1 piece of paper and then fold it up to make a little booklet. Here are the pictures of the steps from the Wikibooks page:
+
+
+
+Sumana Harihareswara’s [Playing with python][4] zine is a nice example of a zine that’s intended to be folded up in that way.
+
+### Way 5: Adobe Illustrator
+
+I’ve never used Adobe Illustrator so I’m not going to pretend that I know anything about it or put together an example using it, but I hear it’s a way people do book layout.
+
+### booklets: the photocopier method
+
+So you’ve written a bunch of pages and want to assemble them into a booklet. One way to do this (and what I did for my first zine about strace!) is the photocopier method. There’s a great guide by Julia Gfrörer in [this tweet][5], which I’m going to reproduce here:
+
+![][6]
+![][7]
+![][8]
+![][9]
+
+That explanation is excellent and I don’t have anything to add. I did it that way and it worked great.
+
+If you want to buy a print copy of that how-to-make-zines zine from Thuban Press, you can [get it here on Etsy][10].
+
+### booklets: the computer method
+
+If you’ve made your zine in Google Docs or in another computery way, you probably want a more computery way of assembling the pages into a booklet.
+
+**what I use: pdflatex**
+
+I do this using the `pdfpages` LaTeX extension. This sounds complicated but it’s not really, you don’t need to learn latex or anything. You just need to have pdflatex on your system, which is a `sudo apt install texlive-base` away on Ubuntu. The steps are:
+
+ 1. Get a PDF with the pages from your zine (pages need to be a multiple of 4)
+ 2. Get the latex file from [this gist][11]
+ 3. Replace `/home/bork/http-zine.pdf` with the path to your PDF and `1-28` with `1-however many pages are in your zine`.
+ 4. run `pdflatex formatted-zine.tex`
+ 5. Tweak the parameters until it looks the way you want. The [documentation for the pdfpages package is here][12]
+
+
+
+I like using this relatively complicated method because there are always small tweaks I want to make like “oh, the right margin is too big, crop it a little bit” and the pdfpages package has tons of options that let me make those tweaks.
+
+**other methods**
+
+ 1. On Linux you can use the `pdfjam` bash script, which is just a wrapper around the pdfpages latex package. This is what I used to do but today I find it simpler to use the pdfpages latex package directly.
+ 2. There’s a program called [Booklet Creator][13] for Mac and Windows that [@mrfb uses][14]. It looks pretty simple to use.
+ 3. If you convert your PDF to a ps file (with `pdf2ps` for instance), `psnup` can do this. I tried `cat file.ps | psbook | psnup -2 > booklet.ps` and it worked, though the resulting PDFs are a little slow to load in my PDF viewer for some reason.
+ 4. there are probably a ton more ways to do this, if you know more let me know
+
+
+
+### making zines is easy and low tech
+
+That’s all! I mostly wanted to explain that zines are an easy low tech thing to do and if you think making them sounds fun, you definitely 100% do not need to use any fancy expensive tools to do it, you can literally use some sheets of paper, a Sharpie, a pen, and spend $3 at your local print shop to use the photocopier.
+
+### resources
+
+summary of the resources I linked to:
+
+ * Guide to putting together zines with a photocopier by Julia Gfrörer: [this tweet][5], [get it on Etsy][10]
+ * [Wikibooks page on zine making][3]
+ * Notes on making zines using Google Docs: [this twitter thread][14]
+ * [Stolen Sharpie Revolution][15] (the first book I read about making zines). You can also get it on Amazon if you want but it’s probably better to buy directly from their site.
+ * [Booklet Creator][13]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/09/01/ways-to-write-zines-without-fancy-tools/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://twitter.com/b0rk/status/1160171769833185280
+[2]: https://docs.google.com/document/d/1byzfXC0h6hNFlWXaV9peJpX-GamJOrJ70x9nu1dZ-m0/edit?usp=sharing
+[3]: https://en.m.wikibooks.org/wiki/Zine_Making/Putting_pages_together
+[4]: https://www.harihareswara.net/pix/playing-with-python-zine/playing-with-python-zine.pdf
+[5]: https://twitter.com/thorazos/status/1158556879485906944
+[6]: https://pbs.twimg.com/media/EBQFUC0X4AAPTU1?format=jpg&name=small
+[7]: https://pbs.twimg.com/media/EBQFUC0XsAEBhHf?format=jpg&name=small
+[8]: https://pbs.twimg.com/media/EBQFUC1XUAAKDIB?format=jpg&name=small
+[9]: https://pbs.twimg.com/media/EBQFUDRX4AMkIAr?format=jpg&name=small
+[10]: https://www.etsy.com/thorazos/listing/693692176/thuban-press-guide-to-analog-self?utm_source=Copy&utm_medium=ListingManager&utm_campaign=Share&utm_term=so.lmsm&share_time=1565113962419
+[11]: https://gist.github.com/jvns/b3de1d658e2b44aebb485c35fb1a7a0f
+[12]: http://texdoc.net/texmf-dist/doc/latex/pdfpages/pdfpages.pdf
+[13]: https://www.bookletcreator.com/
+[14]: https://twitter.com/mrfb/status/1159478532545888258
+[15]: http://www.stolensharpierevolution.org/
diff --git a/sources/tech/20190905 How to Change Themes in Linux Mint.md b/sources/tech/20190905 How to Change Themes in Linux Mint.md
deleted file mode 100644
index 6f1c1ce3da..0000000000
--- a/sources/tech/20190905 How to Change Themes in Linux Mint.md
+++ /dev/null
@@ -1,103 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (qfzy1233)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to Change Themes in Linux Mint)
-[#]: via: (https://itsfoss.com/install-themes-linux-mint/)
-[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
-
-How to Change Themes in Linux Mint
-======
-
-Using Linux Mint is, from the start, a unique experience because of its main desktop environment: Cinnamon. This is one of the main [reasons why I love Linux Mint][1].
-
-Since Mint’s dev team [started to take design more seriously][2], the “Themes” applet became an important way not only to choose new themes, icons, buttons, window borders and mouse pointers, but also to install new themes directly from it. Interested? Let’s jump into it.
-
-### How to change themes in Linux Mint
-
-Search for themes in the Menu and open the Themes applet.
-
-![Theme Applet provides an easy way of installing and changing themes][3]
-
-In the applet there’s an “Add/Remove” button, pretty simple, huh? Clicking on it, you can see Cinnamon Spices (Cinnamon’s official addons repository) themes, ordered first by popularity.
-
-![Installing new themes in Linux Mint Cinnamon][4]
-
-To install one, all you need to do is click on your preferred theme and wait for it to download. After that, the theme will be available at the “Desktop” option on the first page of the applet. Just double click on one of the installed themes to start using it.
-
-![Changing themes in Linux Mint Cinnamon][5]
-
-Here’s the default Linux Mint look:
-
-![Linux Mint Default Theme][6]
-
-And here’s after I change the theme:
-
-![Linux Mint with Carta Theme][7]
-
-All the themes are also available at the Cinnamon Spices site with more information and bigger screenshots, so you can take a better look at how your system will look.
-
-[Browse Cinnamon Themes][8]
-
-### Installing third party themes in Linux Mint
-
-_“I saw this amazing theme on another site and it is not available at Cinnamon Spices…”_
-
-Cinnamon Spices has a good collection of themes but you’ll still find that the theme you saw some place else is not available on the official Cinnamon website.
-
-Well, it would be nice if there was another way, huh? You might imagine that there is (I mean…obviously there is). So, first things first, there are other websites where you and I can find new cool themes.
-
-I’ll recommend going to Cinnamon Look and browsing themes there. If you like something, download it.
-
-[Get more themes at Cinnamon Look][9]
-
-After the preferred theme is downloaded, you will have a compressed file with all you need for the installation. Extract it and save it at ~/.themes. Confused? The “~” in the file path is actually your home folder: /home/{YOURUSER}/.themes.
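-
-If you prefer the terminal, the same thing can be done roughly like this (a sketch; the archive name is hypothetical):
-
-```
-mkdir -p ~/.themes
-unzip ~/Downloads/some-theme.zip -d ~/.themes   # hypothetical file name
-```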
-
-So go to your Home directory. Press Ctrl+H to [show hidden files in Linux][11]. If you don’t see a .themes folder, create a new one and name it .themes. Remember that the dot at the beginning of the folder name is important.
-
-Copy the extracted theme folder from your Downloads directory to the .themes folder in your Home.
-
-After that, look for the installed theme at the applet above mentioned.
-
-Note
-
-Remember that themes must be made to work on Cinnamon; even though Cinnamon is a fork of GNOME, not all themes made for GNOME work on Cinnamon.
-
-Changing theme is one part of Cinnamon customization. You can also [change the looks of Linux Mint by changing the icons][12].
-
-I hope you now know how to change themes in Linux Mint. Which theme are you going to use?
-
-### João Gondim
-
-Linux enthusiast from Brasil.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/install-themes-linux-mint/
-
-作者:[It's FOSS Community][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/itsfoss/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/tiny-features-linux-mint-cinnamon/
-[2]: https://itsfoss.com/linux-mint-new-design/
-[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-1.jpg?resize=800%2C625&ssl=1
-[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-2.jpg?resize=800%2C625&ssl=1
-[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-3.jpg?resize=800%2C450&ssl=1
-[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-mint-default-theme.jpg?resize=800%2C450&ssl=1
-[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-mint-carta-theme.jpg?resize=800%2C450&ssl=1
-[8]: https://cinnamon-spices.linuxmint.com/themes
-[9]: https://www.cinnamon-look.org/
-[10]: https://itsfoss.com/failed-to-start-session-ubuntu-14-04/
-[11]: https://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/
-[12]: https://itsfoss.com/install-icon-linux-mint/
diff --git a/sources/tech/20190909 How to Setup Multi Node Elastic Stack Cluster on RHEL 8 - CentOS 8.md b/sources/tech/20190909 How to Setup Multi Node Elastic Stack Cluster on RHEL 8 - CentOS 8.md
new file mode 100644
index 0000000000..f56e708426
--- /dev/null
+++ b/sources/tech/20190909 How to Setup Multi Node Elastic Stack Cluster on RHEL 8 - CentOS 8.md
@@ -0,0 +1,476 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8)
+[#]: via: (https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/)
+[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
+
+How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8
+======
+
+The Elastic Stack, widely known as the **ELK stack**, is a group of open source products comprising **Elasticsearch**, **Logstash** and **Kibana**. The Elastic Stack is developed and maintained by the company Elastic. Using the Elastic Stack, you can feed system logs to Logstash, a data collection engine that accepts logs or data from any source, normalizes them, and forwards them to Elasticsearch for **analyzing**, **indexing**, **searching** and **storing**. Finally, with Kibana you can visualize that data and create interactive graphs and diagrams based on user queries.
+
+[![Elastic-Stack-Cluster-RHEL8-CentOS8][1]][2]
+
+In this article we will demonstrate how to set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8 servers. Here are the details of my Elastic Stack cluster:
+
+### Elasticsearch:
+
+ * Three Servers with Minimal RHEL 8 / CentOS 8
+  * IPs & Hostnames – 192.168.56.40 (elasticsearch1.linuxtechi.local), 192.168.56.50 (elasticsearch2.linuxtechi.local), 192.168.56.60 (elasticsearch3.linuxtechi.local)
+
+
+
+### Logstash:
+
+ * Two Servers with minimal RHEL 8 / CentOS 8
+  * IPs & Hostnames – 192.168.56.20 (logstash1.linuxtechi.local), 192.168.56.30 (logstash2.linuxtechi.local)
+
+
+
+### Kibana:
+
+ * One Server with minimal RHEL 8 / CentOS 8
+ * Hostname – kibana.linuxtechi.local
+ * IP – 192.168.56.10
+
+
+
+### Filebeat:
+
+ * One Server with minimal CentOS 7
+ * IP & hostname – 192.168.56.70 (web-server)
+
+
+
+Let’s start with the Elasticsearch cluster setup.
+
+#### Set up the 3-node Elasticsearch cluster
+
+As stated above, I have set aside three nodes for the Elasticsearch cluster. Log in to each node, set the hostname and configure the yum/dnf repositories.
+
+Use the hostnamectl command to set the hostname on the respective nodes:
+
+```
+[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi. local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi. local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi. local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+```
+
+On a CentOS 8 system we don’t need to configure any OS package repository, and on a RHEL 8 server, if you have a valid subscription, subscribe the system with Red Hat to get access to the package repositories. In case you want to configure a local yum/dnf repository for OS packages, refer to the following URL:
+
+[How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File][3]
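+
+On RHEL 8, registering the system might look like this (a minimal sketch; the username below is a placeholder):
+
+```
+~]# subscription-manager register --username <rhn-username> --auto-attach
+```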
+
+Configure the Elasticsearch package repository on all the nodes. Create a file named elastic.repo under the /etc/yum.repos.d/ folder with the following content:
+
+```
+~]# vi /etc/yum.repos.d/elastic.repo
+[elasticsearch-7.x]
+name=Elasticsearch repository for 7.x packages
+baseurl=https://artifacts.elastic.co/packages/7.x/yum
+gpgcheck=1
+gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
+enabled=1
+autorefresh=1
+type=rpm-md
+```
+
+Save and exit the file.
+
+Use the following rpm command on all three nodes to import Elastic’s public signing key:
+
+```
+~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
+```
+
+Add the following lines to the /etc/hosts file on all three nodes:
+
+```
+192.168.56.40 elasticsearch1.linuxtechi.local
+192.168.56.50 elasticsearch2.linuxtechi.local
+192.168.56.60 elasticsearch3.linuxtechi.local
+```
+
+Install Java on all three nodes using the dnf command:
+
+```
+[root@linuxtechi ~]# dnf install java-openjdk -y
+[root@linuxtechi ~]# dnf install java-openjdk -y
+[root@linuxtechi ~]# dnf install java-openjdk -y
+```
+
+Install Elasticsearch using the following dnf command on all three nodes:
+
+```
+[root@linuxtechi ~]# dnf install elasticsearch -y
+[root@linuxtechi ~]# dnf install elasticsearch -y
+[root@linuxtechi ~]# dnf install elasticsearch -y
+```
+
+**Note:** If the OS firewall is enabled and running on each Elasticsearch node, allow the following ports using these firewall-cmd commands:
+
+```
+~]# firewall-cmd --permanent --add-port=9300/tcp
+~]# firewall-cmd --permanent --add-port=9200/tcp
+~]# firewall-cmd --reload
+```
+
+Configure Elasticsearch: edit the file “**/etc/elasticsearch/elasticsearch.yml**” on all three nodes and add the following:
+
+```
+~]# vim /etc/elasticsearch/elasticsearch.yml
+…………………………………………
+cluster.name: opn-cluster
+node.name: elasticsearch1.linuxtechi.local
+network.host: 192.168.56.40
+http.port: 9200
+discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
+cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
+……………………………………………
+```
+
+**Note:** On each node, set the correct hostname in the node.name parameter and the correct IP address in the network.host parameter; the other parameters remain the same.
+
+Now start and enable the Elasticsearch service on all three nodes using the following systemctl commands:
+
+```
+~]# systemctl daemon-reload
+~]# systemctl enable elasticsearch.service
+~]# systemctl start elasticsearch.service
+```
+
+Use the following ‘ss’ command to verify that the elasticsearch node has started listening on port 9200:
+
+```
+[root@linuxtechi ~]# ss -tunlp | grep 9200
+tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:* users:(("java",pid=2734,fd=256))
+[root@linuxtechi ~]#
+```
+
+Use the following curl commands to verify the Elasticsearch cluster status:
+
+```
+[root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200
+[root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty
+```
+
+The output of the above commands should look something like this:
+
+![Elasticsearch-cluster-status-rhel8][1]
+
+The above output confirms that we have successfully created a 3-node Elasticsearch cluster and that the cluster status is green.
+
+**Note:** If you want to modify the JVM heap size, edit the file “**/etc/elasticsearch/jvm.options**” and change the following parameters to suit your environment:
+
+ * -Xms1g
+ * -Xmx1g
+
+
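+You can check the currently configured values with grep (the output below shows the 1 GB default); remember that Elasticsearch must be restarted for any heap change to take effect:
+
+```
+~]# grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options
+-Xms1g
+-Xmx1g
+```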
+
+Now let’s move on to the Logstash nodes.
+
+#### Install and Configure Logstash
+
+Perform the following steps on both Logstash nodes.
+
+Log in to both nodes and set the hostname using the following hostnamectl commands:
+
+```
+[root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+[root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+```
+
+Add the following entries to the /etc/hosts file on both logstash nodes:
+
+```
+~]# vi /etc/hosts
+192.168.56.40 elasticsearch1.linuxtechi.local
+192.168.56.50 elasticsearch2.linuxtechi.local
+192.168.56.60 elasticsearch3.linuxtechi.local
+```
+
+Save and exit the file.
+
+Configure the Logstash repository on both nodes: create a file **logstash.repo** under the folder /etc/yum.repos.d/ with the following content:
+
+```
+~]# vi /etc/yum.repos.d/logstash.repo
+[elasticsearch-7.x]
+name=Elasticsearch repository for 7.x packages
+baseurl=https://artifacts.elastic.co/packages/7.x/yum
+gpgcheck=1
+gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
+enabled=1
+autorefresh=1
+type=rpm-md
+```
+
+Save and exit the file, then run the following rpm command to import the signing key:
+
+```
+~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
+```
+
+Install Java OpenJDK on both nodes using the following dnf command:
+
+```
+~]# dnf install java-openjdk -y
+```
+
+Run the following dnf command on both nodes to install Logstash:
+
+```
+[root@linuxtechi ~]# dnf install logstash -y
+[root@linuxtechi ~]# dnf install logstash -y
+```
+
+Now configure Logstash. Perform the steps below on both logstash nodes.
+
+Create a logstash conf file; to do that, first copy the sample logstash file into ‘/etc/logstash/conf.d/’:
+
+```
+# cd /etc/logstash/
+# cp logstash-sample.conf conf.d/logstash.conf
+```
+
+Edit the conf file and update the following content:
+
+```
+# vi conf.d/logstash.conf
+
+input {
+ beats {
+ port => 5044
+ }
+}
+
+output {
+ elasticsearch {
+ hosts => ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
+ index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
+ #user => "elastic"
+ #password => "changeme"
+ }
+}
+```
+
+Under the output section, specify the FQDNs of all three Elasticsearch nodes in the hosts parameter; leave the other parameters as they are.
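+
+Optionally, you can validate the configuration before starting the service (the binary path below assumes the standard RPM layout):
+
+```
+~]# /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf
+```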
+
+Allow the logstash port “5044” in the OS firewall using the following firewall-cmd commands:
+
+```
+~ # firewall-cmd --permanent --add-port=5044/tcp
+~ # firewall-cmd --reload
+```
+
+Now start and enable the Logstash service by running the following systemctl commands on both nodes:
+
+```
+~]# systemctl start logstash
+~]# systemctl enable logstash
+```
+
+Use the following ss command to verify that the logstash service has started listening on port 5044:
+
+```
+[root@linuxtechi ~]# ss -tunlp | grep 5044
+tcp LISTEN 0 128 *:5044 *:* users:(("java",pid=2416,fd=96))
+[root@linuxtechi ~]#
+```
+
+The above output confirms that Logstash has been installed and configured successfully. Let’s move on to the Kibana installation.
+
+#### Install and Configure Kibana
+
+Log in to the Kibana node and set the hostname with the **hostnamectl** command:
+
+```
+[root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+```
+
+Edit the /etc/hosts file and add the following lines:
+
+```
+192.168.56.40 elasticsearch1.linuxtechi.local
+192.168.56.50 elasticsearch2.linuxtechi.local
+192.168.56.60 elasticsearch3.linuxtechi.local
+```
+
+Set up the Kibana repository with the following:
+
+```
+[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo
+[elasticsearch-7.x]
+name=Elasticsearch repository for 7.x packages
+baseurl=https://artifacts.elastic.co/packages/7.x/yum
+gpgcheck=1
+gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
+enabled=1
+autorefresh=1
+type=rpm-md
+
+[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
+```
+
+Execute the following command to install Kibana:
+
+```
+[root@linuxtechi ~]# yum install kibana -y
+```
+
+Configure Kibana by editing the file “**/etc/kibana/kibana.yml**”
+
+```
+[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
+…………
+server.host: "kibana.linuxtechi.local"
+server.name: "kibana.linuxtechi.local"
+elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
+…………
+```
+
+Start and enable the kibana service:
+
+```
+[root@linuxtechi ~]# systemctl start kibana
+[root@linuxtechi ~]# systemctl enable kibana
+```
+
+Allow the Kibana port ‘5601’ in the OS firewall:
+
+```
+[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
+success
+[root@linuxtechi ~]# firewall-cmd --reload
+success
+[root@linuxtechi ~]#
+```
+
+Access the Kibana portal / GUI using the following URL (the hostname and port we configured above):
+
+http://kibana.linuxtechi.local:5601
+
+[![Kibana-Dashboard-rhel8][1]][4]
+
+From the dashboard, we can also check our Elastic Stack cluster status:
+
+[![Stack-Monitoring-Overview-RHEL8][1]][5]
+
+This confirms that we have successfully set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8.
+
+Now let’s send some logs to the logstash nodes via Filebeat from another Linux server. In my case I have one CentOS 7 server, and I will push all the important logs from this server to logstash via Filebeat.
+
+Log in to the CentOS 7 server and install the filebeat package using the following rpm command:
+
+```
+[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
+Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
+Preparing... ################################# [100%]
+Updating / installing...
+ 1:filebeat-7.3.1-1 ################################# [100%]
+[root@linuxtechi ~]#
+```
+
+Edit the /etc/hosts file and add the following entries:
+
+```
+192.168.56.20 logstash1.linuxtechi.local
+192.168.56.30 logstash2.linuxtechi.local
+```
+
+Now configure Filebeat so that it sends logs to the logstash nodes using a load-balancing technique. Edit the file “**/etc/filebeat/filebeat.yml**” and update the following parameters.
+
+Under the ‘**filebeat.inputs:**’ section, change ‘**enabled: false**’ to ‘**enabled: true**’ and, under the “**paths**” parameter, specify the location of the log files to send to logstash. In the Elasticsearch output section, comment out “**output.elasticsearch**” and its **hosts** parameter. In the Logstash output section, uncomment “**output.logstash:**” and “**hosts:**”, add both logstash nodes to the hosts parameter, and also add “**loadbalance: true**”.
+
+```
+[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml
+……………………….
+filebeat.inputs:
+- type: log
+ enabled: true
+ paths:
+ - /var/log/messages
+ - /var/log/dmesg
+ - /var/log/maillog
+ - /var/log/boot.log
+#output.elasticsearch:
+ # hosts: ["localhost:9200"]
+
+output.logstash:
+ hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"]
+ loadbalance: true
+………………………………………
+```
+
+Start and enable the filebeat service using the following systemctl commands:
+
+```
+[root@linuxtechi ~]# systemctl start filebeat
+[root@linuxtechi ~]# systemctl enable filebeat
+```
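+
+To confirm that Filebeat can actually reach the Logstash nodes, you can optionally run its built-in self-tests (the exact output will vary with your setup):
+
+```
+[root@linuxtechi ~]# filebeat test config
+[root@linuxtechi ~]# filebeat test output
+```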
+
+Now go to the Kibana GUI and verify whether the new indices are visible:
+
+Choose the Management option from the left sidebar and then click on Index Management under Elasticsearch:
+
+[![Elasticsearch-index-management-Kibana][1]][6]
+
+As we can see above, the indices are visible now. Let’s create an index pattern.
+
+Click on “Index Patterns” in the Kibana section; it will prompt us to create a new pattern. Click on “**Create Index Pattern**” and specify the pattern name as “**filebeat**”:
+
+[![Define-Index-Pattern-Kibana-RHEL8][1]][7]
+
+Click on Next Step
+
+Choose “**Timestamp**” as the time filter for the index pattern and then click on “Create index pattern”:
+
+[![Time-Filter-Index-Pattern-Kibana-RHEL8][1]][8]
+
+[![filebeat-index-pattern-overview-Kibana][1]][9]
+
+Now click on Discover to see the real-time filebeat index data:
+
+[![Discover-Kibana-REHL8][1]][10]
+
+This confirms that the Filebeat agent has been configured successfully and that we can see real-time logs on the Kibana dashboard.
+
+That’s all from this article. Please don’t hesitate to share your feedback and comments if these steps helped you set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/
+
+作者:[Pradeep Kumar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linuxtechi.com/author/pradeep/
+[b]: https://github.com/lujun9972
+[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elastic-Stack-Cluster-RHEL8-CentOS8.jpg
+[3]: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/
+[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Kibana-Dashboard-rhel8.jpg
+[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Stack-Monitoring-Overview-RHEL8.jpg
+[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elasticsearch-index-management-Kibana.jpg
+[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Define-Index-Pattern-Kibana-RHEL8.jpg
+[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Time-Filter-Index-Pattern-Kibana-RHEL8.jpg
+[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/filebeat-index-pattern-overview-Kibana.jpg
+[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Discover-Kibana-REHL8.jpg
diff --git a/sources/tech/20190909 How to use Terminator on Linux to run multiple terminals in one window.md b/sources/tech/20190909 How to use Terminator on Linux to run multiple terminals in one window.md
new file mode 100644
index 0000000000..6ee0820fdf
--- /dev/null
+++ b/sources/tech/20190909 How to use Terminator on Linux to run multiple terminals in one window.md
@@ -0,0 +1,118 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to use Terminator on Linux to run multiple terminals in one window)
+[#]: via: (https://www.networkworld.com/article/3436784/how-to-use-terminator-on-linux-to-run-multiple-terminals-in-one-window.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+How to use Terminator on Linux to run multiple terminals in one window
+======
+Providing an option for multiple GNOME terminals within a single window frame, Terminator lets you flexibly align your workspace to suit your needs.
+Sandra Henry-Stocker
+
+If you’ve ever wished that you could line up multiple terminal windows and organize them in a single window frame, we may have some good news for you. The Linux **Terminator** can do this for you. No problemo!
+
+### Splitting windows
+
+Terminator will initially open like a terminal window with a single window. Once you mouse click within that window, however, it will bring up an options menu that gives you the flexibility to make changes. You can choose “**split horizontally**” or “**split vertically**” to split the window you are currently positioned in into two smaller windows. In fact, with these menu choices, complete with tiny illustrations of the resultant split (resembling **=** and **||**), you can split windows repeatedly if you like. Of course, if you split the overall window into more than six or nine sections, you might just find that they're too small to be used effectively.
+
+**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][1] ]**
+
+Using ASCII art to illustrate the process of splitting windows, you might see something like this:
+
+```
++-------------------+ +-------------------+ +-------------------+
+| | | | | |
+| | | | | |
+| | ==> |-------------------| ==> |-------------------|
+| | | | | | |
+| | | | | | |
++-------------------+ +-------------------+ +-------------------+
+ Original terminal Split horizontally Split vertically
+```
+
+Another option for splitting windows is to use control sequences like **Ctrl+Shift+e** to split a window vertically and **Ctrl+Shift+o** (“o” as in “open”) to split the screen horizontally.
+
+Once Terminator has split into smaller windows for you, you can click in any window to use it and move from window to window as your work dictates.
+
+### Maximizing a window
+
+If you want to ignore all but one of your windows for a while and focus on just one, you can click in that window and select the "**Maximize**" option from the menu. That window will then grow to claim all of the space. Click again and select "**Restore all terminals**" to return to the multi-window display. **Ctrl+Shift+x** will toggle between the normal and maximized settings.
+
+The window size indicators (e.g., 80x15) on window labels display the number of characters per line and the number of lines per window that each window provides.
+
+### Closing windows
+
+To close any window, bring up the Terminator menu and select **Close**. Other windows will adjust themselves to take up the space until you close the last remaining window.
+
+### Saving your customized setup(s)
+
+Setting up your customized terminator settings as your default once you've split your overall window into multiple segments is quite easy. Select **Preferences** from the pop-up menu and then **Layouts** from the tab along the top of the window that opens. You should then see **New Layout** listed. Just click on the **Save** option at the bottom and **Close** on the bottom right. Terminator will save your settings in **~/.config/terminator/config** and will then use this file every time you use it.
+
+You can also enlarge your overall window by stretching it with your mouse. Again, if you want to retain the changes, select **Preferences** from the menu, **Layouts** and then **Save** and **Close** again.
+
+### Choosing between saved configurations
+
+If you like, you can set up multiple options for your Terminator window arrangements by maintaining a number of config files, renaming each afterwards (e.g., config-1, config-2) and then moving your choice into place as **~/.config/terminator/config** when you want to use that layout. Here's an example script for doing something like this. It lets you choose between three pre-configured window arrangements:
+
+```
+#!/bin/bash
+# Present a menu of saved Terminator layouts and swap the chosen one into place
+
+PS3='Terminator options: '
+options=("Split 1" "Split 2" "Split 3" "Quit")
+select opt in "${options[@]}"
+do
+    case $opt in
+        "Split 1")
+            config=config-1
+            break
+            ;;
+        "Split 2")
+            config=config-2
+            break
+            ;;
+        "Split 3")
+            config=config-3
+            break
+            ;;
+        *)
+            exit
+            ;;
+    esac
+done
+
+cd ~/.config/terminator
+cp config config-    # keep a backup of the current config
+cp $config config    # move the chosen layout into place
+cd
+terminator &         # launch Terminator with the chosen layout
+```
+
+You could give the options more meaningful names than "config-1" if that helps.
+
+### Wrap-up
+
+Terminator is a good choice for setting up multiple windows to work on related tasks. If you've never used it, you'll probably need to install it first with a command such as "sudo apt install terminator" or "sudo yum install -y terminator".
+
+Hopefully, you will enjoy using Terminator. And, as another character of the same name might say, "I'll be back!"
+
+Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3436784/how-to-use-terminator-on-linux-to-run-multiple-terminals-in-one-window.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[2]: https://www.facebook.com/NetworkWorld/
+[3]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190911 10 Ansible modules you need to know.md b/sources/tech/20190911 10 Ansible modules you need to know.md
new file mode 100644
index 0000000000..51b0078f86
--- /dev/null
+++ b/sources/tech/20190911 10 Ansible modules you need to know.md
@@ -0,0 +1,381 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (10 Ansible modules you need to know)
+[#]: via: (https://opensource.com/article/19/9/must-know-ansible-modules)
+[#]: author: (DirectedSoul https://opensource.com/users/directedsoulhttps://opensource.com/users/markphttps://opensource.com/users/rich-butkevichttps://opensource.com/users/jairojuniorhttps://opensource.com/users/marcobravohttps://opensource.com/users/johnsimcall)
+
+10 Ansible modules you need to know
+======
+See examples and learn the most important modules for automating
+everyday tasks with Ansible.
+![Text editor on a browser, in blue][1]
+
+[Ansible][2] is an open source IT configuration management and automation platform. It uses human-readable YAML templates so users can program repetitive tasks to happen automatically without having to learn an advanced programming language.
+
+Ansible is agentless, which means the nodes it manages do not require any software to be installed on them. This eliminates potential security vulnerabilities and makes overall management smoother.
+
+Ansible [modules][3] are standalone scripts that can be used inside an Ansible playbook. A playbook consists of a play, and a play consists of tasks. These concepts may seem confusing if you're new to Ansible, but as you begin writing and working more with playbooks, they will become familiar.
+
+There are some modules that are frequently used in automating everyday tasks; those are the ones that we will cover in this article.
+
+Ansible has three main files that you need to consider:
+
+  * **Host/inventory file:** Contains the entry of the nodes that need to be managed (a short example follows this list)
+ * **Ansible.cfg file:** Located by default at **/etc/ansible/ansible.cfg**, it has the necessary privilege escalation options and the location of the inventory file
+ * **Main file:** A playbook that has modules that perform various tasks on a host listed in an inventory or host file
+
+
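+A minimal inventory might look like this (the hostnames and group names are purely illustrative):
+
+```
+$ cat /etc/ansible/hosts
+[webservers]
+web1.example.com
+web2.example.com
+
+[dbservers]
+db1.example.com
+```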
+
+### Module 1: Package management
+
+There is a module for most popular package managers, such as DNF and APT, to enable you to install any package on a system. Functionality depends entirely on the package manager, but usually these modules can install, upgrade, downgrade, remove, and list packages. The names of relevant modules are easy to guess. For example, the DNF module is [dnf_module][4], the old YUM module (required for Python 2 compatibility) is [yum_module][5], while the APT module is [apt_module][6], the Slackpkg module is [slackpkg_module][7], and so on.
+
+#### Example 1:
+
+
+```
+- name: install the latest version of Apache and MariaDB
+ dnf:
+ name:
+ - httpd
+ - mariadb-server
+ state: latest
+```
+
+This installs the Apache web server and the MariaDB SQL database.
+
+#### Example 2:
+
+
+```
+- name: Install a list of packages
+ yum:
+ name:
+ - nginx
+ - postgresql
+ - postgresql-server
+ state: present
+```
+
+This installs the listed packages, showing how a single task can install multiple packages at once.
+
+### Module 2: Service
+
+After installing a package, you need a module to start it. The [service module][8] enables you to start, stop, and reload the services of installed packages; this comes in pretty handy.
+
+#### Example 1:
+
+
+```
+- name: Start service foo, based on running process /usr/bin/foo
+ service:
+ name: foo
+ pattern: /usr/bin/foo
+ state: started
+```
+
+This starts the service **foo**.
+
+#### Example 2:
+
+
+```
+- name: Restart network service for interface eth0
+ service:
+ name: network
+ state: restarted
+ args: eth0
+```
+
+This restarts the network service of the interface **eth0**.
+
+### Module 3: Copy
+
+The [copy module][9] copies a file from the local or remote machine to a location on the remote machine.
+
+#### Example 1:
+
+
+```
+- name: Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
+ copy:
+ src: /mine/ntp.conf
+ dest: /etc/ntp.conf
+ owner: root
+ group: root
+ mode: '0644'
+ backup: yes
+```
+
+#### Example 2:
+
+
+```
+- name: Copy file with owner and permission, using symbolic representation
+ copy:
+ src: /srv/myfiles/foo.conf
+ dest: /etc/foo.conf
+ owner: foo
+ group: foo
+ mode: u=rw,g=r,o=r
+```
+
+### Module 4: Debug
+
+The [debug module][10] prints statements during execution and can be useful for debugging variables or expressions without having to halt the playbook.
+
+#### Example 1:
+
+
+```
+- name: Display all variables/facts known for a host
+ debug:
+ var: hostvars[inventory_hostname]
+ verbosity: 4
+```
+
+This displays all the variable information for a host that is defined in the inventory file.
+
+#### Example 2:
+
+
+```
+- name: Write some content in a file /tmp/foo.txt
+ copy:
+ dest: /tmp/foo.txt
+ content: |
+ Good Morning!
+ Awesome sunshine today.
+ register: display_file_content
+- name: Debug display_file_content
+ debug:
+ var: display_file_content
+ verbosity: 2
+```
+
+This registers the output of the copy module and displays it only when you specify a verbosity of 2 or higher. For example:
+
+
+```
+ansible-playbook demo.yaml -vv
+```
+
+### Module 5: File
+
+The [file module][11] manages the file and its properties.
+
+ * It sets attributes of files, symlinks, or directories.
+ * It also removes files, symlinks, or directories.
+
+
+
+#### Example 1:
+
+
+```
+- name: Change file ownership, group and permissions
+ file:
+ path: /etc/foo.conf
+ owner: foo
+ group: foo
+ mode: '0644'
+```
+
+This sets the ownership, group, and permissions (**0644**) of the file **/etc/foo.conf**.
+
+#### Example 2:
+
+
+```
+- name: Create a directory if it does not exist
+ file:
+ path: /etc/some_directory
+ state: directory
+ mode: '0755'
+```
+
+This creates a directory named **some_directory** and sets the permission to **0755**.
+
+### Module 6: Lineinfile
+
+The [lineinfile module][12] manages lines in a text file.
+
+ * It ensures a particular line is in a file or replaces an existing line using a back-referenced regular expression.
+ * It's primarily useful when you want to change just a single line in a file.
+
+
+
+#### Example 1:
+
+
+```
+- name: Ensure SELinux is set to enforcing mode
+ lineinfile:
+ path: /etc/selinux/config
+ regexp: '^SELINUX='
+ line: SELINUX=enforcing
+```
+
+This sets the value of **SELINUX=enforcing**.
+
+#### Example 2:
+
+
+```
+- name: Add a line to a file if the file does not exist, without passing regexp
+ lineinfile:
+ path: /etc/resolv.conf
+ line: 192.168.1.99 foo.lab.net foo
+ create: yes
+```
+
+This adds an entry for the IP and hostname in the **resolv.conf** file.
+
+### Module 7: Git
+
+The [git module][13] manages git checkouts of repositories to deploy files or software.
+
+#### Example 1:
+
+
+```
+# Example: create a git archive from a repo
+- git:
+    repo: https://github.com/ansible/ansible-examples.git
+    dest: /src/ansible-examples
+    archive: /tmp/ansible-examples.zip
+```
+
+#### Example 2:
+
+
+```
+- git:
+    repo: https://github.com/ansible/ansible-examples.git
+    dest: /src/ansible-examples
+    separate_git_dir: /src/ansible-examples.git
+```
+
+This clones a repo with a separate Git directory.
+
+### Module 8: Cli_command
+
+The [cli_command module][14], first available in Ansible 2.7, provides a platform-agnostic way of pushing text-based configurations to network devices over the **network_cli** connection plugin. (Note that the examples below use the closely related **cli_config** module, which pushes configuration in the same platform-agnostic way.)
+
+#### Example 1:
+
+
+```
+- name: commit with comment
+ cli_config:
+ config: set system host-name foo
+ commit_comment: this is a test
+```
+
+This sets the hostname for a switch and exits with a commit message.
+
+#### Example 2:
+
+
+```
+- name: configurable backup path
+ cli_config:
+ config: "{{ lookup('template', 'basic/config.j2') }}"
+ backup: yes
+ backup_options:
+ filename: backup.cfg
+ dir_path: /home/user
+```
+
+This backs up a config to a different destination file.
+
+### Module 9: Archive
+
+The [archive module][15] creates a compressed archive of one or more files. By default, it assumes the compression source exists on the target.
+
+#### Example 1:
+
+
+```
+- name: Compress directory /path/to/foo/ into /path/to/foo.tgz
+ archive:
+ path: /path/to/foo
+ dest: /path/to/foo.tgz
+```
+
+#### Example 2:
+
+
+```
+- name: Create a bz2 archive of multiple files, rooted at /path
+ archive:
+ path:
+ - /path/to/foo
+ - /path/wong/foo
+ dest: /path/file.tar.bz2
+ format: bz2
+```
+
+### Module 10: Command
+
+One of the most basic but useful modules, the [command module][16] takes the command name followed by a list of space-delimited arguments.
+
+#### Example 1:
+
+
+```
+- name: return motd to registered var
+ command: cat /etc/motd
+ register: mymotd
+```
+
+#### Example 2:
+
+
+```
+- name: Change the working directory to somedir/ and run the command as db_owner if /path/to/database does not exist.
+ command: /usr/bin/make_database.sh db_user db_name
+ become: yes
+ become_user: db_owner
+ args:
+ chdir: somedir/
+ creates: /path/to/database
+```
+
+### Conclusion
+
+There are tons of modules available in Ansible, but these ten are the most basic and powerful ones you can use for an automation job. As your requirements change, you can learn about other useful modules by entering **ansible-doc <module-name>** on the command line or by referring to the [official documentation][17].
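+
+For example (the module name and keyword here are just illustrations):
+
+```
+$ ansible-doc yum                    # full documentation for one module
+$ ansible-doc -l | grep -i archive   # list available modules matching a keyword
+```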
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/must-know-ansible-modules
+
+作者:[DirectedSoul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/directedsoulhttps://opensource.com/users/markphttps://opensource.com/users/rich-butkevichttps://opensource.com/users/jairojuniorhttps://opensource.com/users/marcobravohttps://opensource.com/users/johnsimcall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_blue_text_editor_web.png?itok=lcf-m6N7 (Text editor on a browser, in blue)
+[2]: https://www.ansible.com/
+[3]: https://docs.ansible.com/ansible/latest/user_guide/modules.html
+[4]: https://docs.ansible.com/ansible/latest/modules/dnf_module.html
+[5]: https://docs.ansible.com/ansible/latest/modules/yum_module.html
+[6]: https://docs.ansible.com/ansible/latest/modules/apt_module.html
+[7]: https://docs.ansible.com/ansible/latest/modules/slackpkg_module.html
+[8]: https://docs.ansible.com/ansible/latest/modules/service_module.html
+[9]: https://docs.ansible.com/ansible/latest/modules/copy_module.html
+[10]: https://docs.ansible.com/ansible/latest/modules/debug_module.html
+[11]: https://docs.ansible.com/ansible/latest/modules/file_module.html
+[12]: https://docs.ansible.com/ansible/latest/modules/lineinfile_module.html
+[13]: https://docs.ansible.com/ansible/latest/modules/git_module.html#git-module
+[14]: https://docs.ansible.com/ansible/latest/modules/cli_command_module.html
+[15]: https://docs.ansible.com/ansible/latest/modules/archive_module.html
+[16]: https://docs.ansible.com/ansible/latest/modules/command_module.html
+[17]: https://docs.ansible.com/
diff --git a/sources/tech/20190911 4 open source cloud security tools.md b/sources/tech/20190911 4 open source cloud security tools.md
new file mode 100644
index 0000000000..5a9e6d9d83
--- /dev/null
+++ b/sources/tech/20190911 4 open source cloud security tools.md
@@ -0,0 +1,90 @@
+[#]: collector: (lujun9972)
+[#]: translator: (hopefully2333)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 open source cloud security tools)
+[#]: via: (https://opensource.com/article/19/9/open-source-cloud-security)
+[#]: author: (Alison NaylorAaron Rinehart https://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/sethhttps://opensource.com/users/bretthunoldtcomhttps://opensource.com/users/aaronrineharthttps://opensource.com/users/marcobravo)
+
+4 open source cloud security tools
+======
+Find and eliminate vulnerabilities in the data you store in AWS and
+GitHub.
+![Tools in a cloud][1]
+
+If your day-to-day as a developer, system administrator, full-stack engineer, or site reliability engineer involves Git pushes, commits, and pulls to and from GitHub and deployments to Amazon Web Services (AWS), security is a persistent concern. Fortunately, open source tools are available to help your team avoid common mistakes that could cost your organization thousands of dollars.
+
+This article describes four open source tools that can help improve your security practices when you're developing on GitHub and AWS. Also, in the spirit of open source, I've joined forces with three security experts—[Travis McPeak][2], senior cloud security engineer at Netflix; [Rich Monk][3], senior principal information security analyst at Red Hat; and [Alison Naylor][4], principal information security analyst at Red Hat—to contribute to this article.
+
+We've separated each tool by scenario, but they are not mutually exclusive.
+
+### 1\. Find sensitive data with Gitrob
+
+You need to find any potentially sensitive information present in your team's Git repos so you can remove it. It may make sense for you to use tools focused on attacking an application or a system using a red/blue team model, in which an infosec team is divided in two: an attack team (a.k.a. a red team) and a defense team (a.k.a. a blue team). Having a red team try to penetrate your systems and applications is a lot better than waiting for an adversary to do so. Your red team might try using [Gitrob][5], a tool that can clone and crawl through your Git repositories looking for credentials and sensitive files.
+
+Even though tools like Gitrob could be used for harm, the idea here is for your infosec team to use it to find inadvertently disclosed sensitive data that belongs to your organization (such as AWS keypairs or other credentials that were committed by mistake). That way, you can get your repositories fixed and sensitive data expunged—hopefully before an adversary finds them. Remember to remove not only the affected files but [also their history][6]!
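+
+Running Gitrob against your own organization might look like this (the organization name is a placeholder; see the project README for the full list of options):
+
+```
+$ gitrob acme-corp
+```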
+
+### 2\. Avoid committing sensitive data with git-secrets
+
+While it's important to find and remove sensitive information in your Git repos, wouldn't it be better to avoid committing those secrets in the first place? Mistakes happen, but you can protect yourself from public embarrassment by using [git-secrets][7]. This tool allows you to set up hooks that scan your commits, commit messages, and merges looking for common patterns for secrets. Choose patterns that match the credentials your team uses, such as AWS access keys and secret keys. If it finds a match, your commit is rejected and a potential crisis averted.
+
+It's simple to set up git-secrets for your existing repos, and you can apply a global configuration to protect all future repositories you initialize or clone. You can also use git-secrets to scan your repos (and all previous revisions) to search for secrets before making them public.
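+
+A typical first-time setup might look like this (run inside an existing repository; the AWS patterns ship with the tool):
+
+```
+$ git secrets --install        # add the scanning hooks to this repo
+$ git secrets --register-aws   # register the common AWS credential patterns
+$ git secrets --scan           # scan the working tree for matches
+```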
+
+### 3\. Create temporary credentials with Key Conjurer
+
+It's great to have a little extra insurance to prevent inadvertently publishing stored secrets, but maybe we can do even better by not storing credentials at all. Keeping track of credentials generally—including who has access to them, where they are stored, and when they were last rotated—is a hassle. However, programmatically generating temporary credentials can avoid a lot of those issues altogether, neatly side-stepping the issue of storing secrets in Git repos. Enter [Key Conjurer][8], which was created to address this need. For more on why Riot Games created Key Conjurer and how they developed it, read _[Key conjurer: our policy of least privilege][9]_.
+
+### 4\. Apply least privilege automatically with Repokid
+
+Anyone who has taken a security 101 course knows that least privilege is the best practice for role-based access control configuration. Sadly, outside school, it becomes prohibitively difficult to apply least-privilege policies manually. An application's access requirements change over time, and developers are too busy to trim back their permissions manually. [Repokid][10] uses data that AWS provides about identity and access management (IAM) use to automatically right-size policies. Repokid helps even the largest organizations apply least privilege automatically in AWS.
+
+### Tools, not silver bullets
+
+These tools are by no means silver bullets, but they are just that: tools! So, make sure you work with the rest of your organization to understand the use cases and usage patterns for your cloud services before trying to implement any of these tools or other controls.
+
+Becoming familiar with the best practices documented by all your cloud and code repository services should be taken seriously as well. The following articles will help you do so.
+
+**For AWS:**
+
+ * [Best practices for managing AWS access keys][11]
+ * [AWS security audit guidelines][12]
+
+
+
+**For GitHub:**
+
+ * [Introducing new ways to keep your code secure][13]
+ * [GitHub Enterprise security best practices][14]
+
+
+
+Last but not least, reach out to your infosec team; they should be able to provide you with ideas, recommendations, and guidelines for your team's success. Always remember: security is everyone's responsibility, not just theirs.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/open-source-cloud-security
+
+作者:[Alison NaylorAaron Rinehart][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/sethhttps://opensource.com/users/bretthunoldtcomhttps://opensource.com/users/aaronrineharthttps://opensource.com/users/marcobravo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud)
+[2]: https://twitter.com/travismcpeak?lang=en
+[3]: https://github.com/rmonk
+[4]: https://www.linkedin.com/in/alperkins/
+[5]: https://github.com/michenriksen/gitrob
+[6]: https://help.github.com/en/articles/removing-sensitive-data-from-a-repository
+[7]: https://github.com/awslabs/git-secrets
+[8]: https://github.com/RiotGames/key-conjurer
+[9]: https://technology.riotgames.com/news/key-conjurer-our-policy-least-privilege
+[10]: https://github.com/Netflix/repokid
+[11]: https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
+[12]: https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html
+[13]: https://github.blog/2019-05-23-introducing-new-ways-to-keep-your-code-secure/
+[14]: https://github.blog/2015-10-09-github-enterprise-security-best-practices/
diff --git a/sources/tech/20190911 How to Collect System and Application Metrics using Metricbeat.md b/sources/tech/20190911 How to Collect System and Application Metrics using Metricbeat.md
new file mode 100644
index 0000000000..194fd077e6
--- /dev/null
+++ b/sources/tech/20190911 How to Collect System and Application Metrics using Metricbeat.md
@@ -0,0 +1,161 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Collect System and Application Metrics using Metricbeat)
+[#]: via: (https://www.linuxtechi.com/collect-system-application-metrics-metricbeat/)
+[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
+
+How to Collect System and Application Metrics using Metricbeat
+======
+
+**Metricbeat** is a lightweight shipper (or agent) used to collect system metrics and application metrics and send them to an Elastic Stack server (i.e. **Elasticsearch**). Here, system metrics refers to CPU, memory, disk and network stats (IOPS), and application metrics means monitoring and collecting the metrics of applications like **Apache**, **NGINX**, **Docker**, **Kubernetes**, **Redis** etc. For Metricbeat to work, we must first make sure that we have a healthy Elastic Stack setup up and running. Please refer to the URL below to set up the Elastic Stack:
+
+**[How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8][1]**
+
+In this article we will demonstrate how to install Metricbeat on Linux servers, show how Metricbeat sends data to the Elastic Stack server (i.e. Elasticsearch), and then verify from the Kibana GUI whether the metrics data is visible.
+
+### Install Metricbeat on CentOS / RHEL Servers
+
+On CentOS / RHEL servers, metricbeat is installed using the following rpm command:
+
+```
+[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.3.1-x86_64.rpm
+```
+
+For Debian-based systems, use the following commands to install metricbeat:
+
+```
+~]# curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.3.1-amd64.deb
+~]# dpkg -i metricbeat-7.3.1-amd64.deb
+```
+
+Add the following lines to the /etc/hosts file, as we will be using the FQDNs of Elasticsearch and Kibana in the metricbeat config file and commands.
+
+**Note:** Change the IPs and hostnames to match your setup.
+
+```
+192.168.56.40 elasticsearch1.linuxtechi.local
+192.168.56.50 elasticsearch2.linuxtechi.local
+192.168.56.60 elasticsearch3.linuxtechi.local
+192.168.56.10 kibana.linuxtechi.local
+```
+
+### Configure Metricbeat on Linux Server (CentOS / RHEL / Debian)
+
+When the metricbeat rpm or deb package is installed, its configuration file (**metricbeat.yml**) is created under “**/etc/metricbeat/**“. Let’s edit this configuration file and tell the system to send system and application metrics data to the Elasticsearch servers.
+
+```
+[root@linuxtechi ~]# vi /etc/metricbeat/metricbeat.yml
+
+setup.kibana:
+ host: "kibana.linuxtechi.local:5601"
+output.elasticsearch:
+ hosts: ["elasticsearch1.linuxtechi.local:9200","elasticsearch2.linuxtechi.local:9200","elasticsearch3.linuxtechi.local:9200"]
+```
+
+Save and exit the file.
+
+**Note:** Replace the Elasticsearch and Kibana details with those that suit your environment.
+
+Run the following metricbeat command so that the metrics dashboards become available in the Kibana GUI:
+
+```
+[root@linuxtechi ~]# metricbeat setup -e -E output.elasticsearch.hosts=['elasticsearch1.linuxtechi.local:9200','elasticsearch2.linuxtechi.local:9200','elasticsearch3.linuxtechi.local:9200'] -E setup.kibana.host=kibana.linuxtechi.local:5601
+```
+
+The output of the above command should look something like this:
+
+![metricbeat-command-output-linuxserver][2]
+
+The above output confirms that the metrics dashboards loaded successfully in the Kibana GUI. Metricbeat will now send metrics data to the Elastic Stack server every 10 seconds.
+
+Let’s start and enable the metricbeat service using the following commands:
+
+```
+[root@linuxtechi ~]# systemctl start metricbeat
+[root@linuxtechi ~]# systemctl enable metricbeat
+```
+
+Now go to the Kibana GUI and click on Dashboard in the left sidebar:
+
+[![Kibana-GUI-Dashbaord-Option][2]][3]
+
+In the next window we will see the available metrics dashboards; search for ‘**system**’ and then choose the System Metrics dashboard:
+
+[![Choose-Metric-Dashbaord-Kibana][2]][4]
+
+[![Metricbeat-System-Overview-ECS-Kibana][2]][5]
+
+As we can see, system metrics data is available on the dashboard; these metrics are collected based on the entries in the file “**/etc/metricbeat/modules.d/system.yml**”.
+
+Let’s suppose we want to collect application metrics data as well. We first have to enable the respective modules; to enable the Apache and MySQL metrics modules, run the following command on the client machine:
+
+```
+[root@linuxtechi ~]# metricbeat modules enable apache mysql
+Enabled apache
+Enabled mysql
+[root@linuxtechi ~]#
+```
+
+Once we enable the modules, we can edit their yml files:
+
+```
+[root@linuxtechi ~]# vi /etc/metricbeat/modules.d/apache.yml
+…
+- module: apache
+ period: 10s
+ hosts: ["http://192.168.56.70"]
+…
+```
+
+Change the IP in the hosts parameter to suit your environment.
+
+Similarly, edit the mysql yml file and change the MySQL root credentials to suit your environment:
+
+```
+[root@linuxtechi ~]# vi /etc/metricbeat/modules.d/mysql.yml
+………
+- module: mysql
+ metricsets:
+ - status
+ - galera_status
+ period: 10s
+hosts: ["root:root@linuxtechi(127.0.0.1:3306)/"]
+………
+```
+
+After making the changes, restart the metricbeat service:
+
+```
+[root@linuxtechi ~]# systemctl restart metricbeat
+```
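+
+Optionally, you can verify the configuration and the connection to Elasticsearch with metricbeat’s built-in self-tests:
+
+```
+[root@linuxtechi ~]# metricbeat test config
+[root@linuxtechi ~]# metricbeat test output
+```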
+
+Now go to the Kibana GUI and, under the Dashboard option, look for the MySQL metrics:
+
+[![Metricbeat-MySQL-Overview-ECS-Kibana][2]][6]
+
+As we can see above, MySQL metrics data is visible; this confirms that we have successfully installed and configured Metricbeat.
+
+That’s all from this tutorial. If these steps helped you set up Metricbeat, please do share your feedback and comments.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/collect-system-application-metrics-metricbeat/
+
+作者:[Pradeep Kumar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linuxtechi.com/author/pradeep/
+[b]: https://github.com/lujun9972
+[1]: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/
+[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Kibana-GUI-Dashbaord-Option.jpg
+[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Metric-Dashbaord-Kibana.jpg
+[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Metricbeat-System-Overview-ECS-Kibana.jpg
+[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Metricbeat-MySQL-Overview-ECS-Kibana.jpg
diff --git a/sources/tech/20190912 An introduction to Markdown.md b/sources/tech/20190912 An introduction to Markdown.md
new file mode 100644
index 0000000000..1e0a990913
--- /dev/null
+++ b/sources/tech/20190912 An introduction to Markdown.md
@@ -0,0 +1,166 @@
+[#]: collector: (lujun9972)
+[#]: translator: (qfzy1233)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (An introduction to Markdown)
+[#]: via: (https://opensource.com/article/19/9/introduction-markdown)
+[#]: author: (Juan Islas https://opensource.com/users/xislashttps://opensource.com/users/mbbroberghttps://opensource.com/users/scottnesbitthttps://opensource.com/users/scottnesbitthttps://opensource.com/users/f%C3%A1bio-emilio-costahttps://opensource.com/users/don-watkinshttps://opensource.com/users/greg-phttps://opensource.com/users/marcobravohttps://opensource.com/users/alanfdosshttps://opensource.com/users/scottnesbitthttps://opensource.com/users/jamesf)
+
+An introduction to Markdown
+======
+Write once and convert your text into multiple formats. Here's how to
+get started with Markdown.
+![Woman programming][1]
+
+For a long time, I thought all the files I saw on GitLab and GitHub with an **.md** extension were written in a file type exclusively for developers. That changed a few weeks ago when I started using Markdown. It quickly became the most important tool in my daily work.
+
+Markdown makes my life easier. I just need to add a few symbols to what I'm already writing and, with the help of a browser extension or an open source program, I can transform my text into a variety of commonly used formats such as ODT, email (more on that later), PDF, and EPUB.
+
+### What is Markdown?
+
+A friendly reminder from [Wikipedia][2]:
+
+> Markdown is a lightweight markup language with plain text formatting syntax.
+
+What this means to you is that by using just a few extra symbols in your text, Markdown helps you create a document with an explicit structure. When you take notes in plain text (in a notepad application, for example), there's nothing to indicate which text is meant to be bold or italic. In ordinary text, you might write a link as **http://example.com** one time, then as just **example.com**, and later **go to the website (example.com)**. There's no internal consistency.
+
+But if you write the way Markdown prescribes, your text has internal consistency. Computers like consistency because it enables them to follow strict instructions without worrying about exceptions.
+
+Trust me; once you learn to use Markdown, every writing task will be, in some way, easier and better than before. So let's learn it.
+
+### Markdown basics
+
+The following rules are the basics for writing in Markdown.
+
+ 1. Create a text file with an **.md** extension (for example, **example.md**). You can use any text editor (even a word processor like LibreOffice or Microsoft Word), as long as you remember to save it as a _text_ file.
+
+
+
+![Names of Markdown files][3]
+
+ 2. Write whatever you want, just as you usually do:
+
+
+```
+Lorem ipsum
+
+Consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
+Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
+Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
+
+De Finibus Bonorum et Malorum
+
+Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo.
+Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt.
+
+ Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem.
+```
+
+ 3. Make sure to place an empty line between paragraphs. That might feel unnatural if you're used to writing business letters or traditional prose, where paragraphs have only one new line and maybe even an indentation before the first word. For Markdown, an empty line (some word processors mark this with **¶**, called a Pilcrow symbol) guarantees a new paragraph is created when you convert it to another format like HTML.
+
+ 4. Designate titles and subtitles. For the document's title, add a pound or hash (**#**) symbol and a space before the text (e.g., **# Lorem ipsum**). The first subtitle level uses two (**## De Finibus Bonorum et Malorum**), the next level gets three (**### Third Subtitle**), and so on. Note that there is a space between the pound sign and the first word.
+
+
+```
+# Lorem ipsum
+
+Consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
+Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
+Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
+
+## De Finibus Bonorum et Malorum
+
+Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo.
+Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt.
+
+ Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem.
+```
+
+ 5. If you want **bold** letters, just place the letters between two asterisks (stars) with no spaces: ****This will be in bold****.
+
+
+
+
+![Bold text in Markdown][4]
+
+ 6. For _italics_, put the text between underline symbols with no spaces: **_I want this text to be in italics_**.
+
+
+
+![Italics text in Markdown][5]
+
+ 7. To insert a link (like [Markdown Tutorial][6]), put the text you want to link in brackets and the URL in parentheses with no spaces between them:
+**[Markdown Tutorial](https://www.markdowntutorial.com)**.
+
+
+
+![Hyperlinks in Markdown][7]
+
+ 8. Blockquotes are written with a greater-than (**>**) symbol and a space before the text you want to quote: **> A famous quote**.
+
+
+
+![Blockquote text in Markdown][8]
+
+### Markdown tutorials and tip sheets
+
+These tips will get you started writing in Markdown, but it has a lot more functions than just bold and italics and links. The best way to learn Markdown is to use it, but I recommend investing 15 minutes stepping through the simple [Markdown Tutorial][6] to practice these rules and learn a couple more.
+
+Because modern Markdown is an amalgamation of many different interpretations of the idea of structured text, the [CommonMark][9] project defines a spec with a rigid set of rules to bring clarity to Markdown. It might be helpful to keep a [CommonMark-compliant cheatsheet][10] on hand when writing.
+
+### What you can do with Markdown
+
+Markdown lets you write anything you want—once—and transform it into almost any kind of format you want to use. The following examples show how to turn simple text written in Markdown into different formats. You don't need multiple formats of your text—you can start from a single source and then… rule the world!
+
+ 1. **Simple note-taking:** You can write your notes in Markdown and, the moment you save them, the open source note application [Turtl][11] interprets your text file and shows you the formatted result. You can have your notes anywhere!
+
+
+
+![Turtl application][12]
+
+ 2. **PDF files:** With the [Pandoc][13] application, you can convert your Markdown into a PDF with one simple command: **pandoc <file.md> -o <file.pdf>**. (See the example after this list.)
+
+
+
+![Markdown text converted to PDF with Pandoc][14]
+
+ 3. **Email:** You can also convert Markdown text into an HTML-formatted email by installing the browser extension [Markdown Here][15]. To use it, just select your Markdown text, use Markdown Here to translate it into HTML, and send your message using your favorite email client.
+
+
+
+![Markdown text converted to email with Markdown Here][16]
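+
+As an example of the Pandoc command from item 2 above, converting a hypothetical lorem.md into a PDF looks like this (Pandoc, plus a PDF engine such as pdflatex, must be installed):
+
+```
+$ pandoc lorem.md -o lorem.pdf
+```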
+
+### Start using it
+
+You don't need a special application to use Markdown—you just need a text editor and the tips above. It's compatible with how you already write; all you need to do is use it, so give it a try.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/introduction-markdown
+
+作者:[Juan Islas][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/xislas
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
+[2]: https://en.wikipedia.org/wiki/Markdown
+[3]: https://opensource.com/sites/default/files/uploads/markdown_names_md-1.png (Names of Markdown files)
+[4]: https://opensource.com/sites/default/files/uploads/markdown_bold.png (Bold text in Markdown)
+[5]: https://opensource.com/sites/default/files/uploads/markdown_italic.png (Italics text in Markdown)
+[6]: https://www.markdowntutorial.com/
+[7]: https://opensource.com/sites/default/files/uploads/markdown_link.png (Hyperlinks in Markdown)
+[8]: https://opensource.com/sites/default/files/uploads/markdown_blockquote.png (Blockquote text in Markdown)
+[9]: https://commonmark.org/help/
+[10]: https://opensource.com/downloads/cheat-sheet-markdown
+[11]: https://turtlapp.com/
+[12]: https://opensource.com/sites/default/files/uploads/markdown_turtl_02.png (Turtl application)
+[13]: https://opensource.com/article/19/5/convert-markdown-to-word-pandoc
+[14]: https://opensource.com/sites/default/files/uploads/markdown_pdf.png (Markdown text converted to PDF with Pandoc)
+[15]: https://markdown-here.com/
+[16]: https://opensource.com/sites/default/files/uploads/markdown_mail_02.png (Markdown text converted to email with Markdown Here)
diff --git a/sources/tech/20190912 Bash Script to Send a Mail About New User Account Creation.md b/sources/tech/20190912 Bash Script to Send a Mail About New User Account Creation.md
deleted file mode 100644
index a65013ff04..0000000000
--- a/sources/tech/20190912 Bash Script to Send a Mail About New User Account Creation.md
+++ /dev/null
@@ -1,126 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Bash Script to Send a Mail About New User Account Creation)
-[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-user-creation-send-email/)
-[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-
-Bash Script to Send a Mail About New User Account Creation
-======
-
-For some purposes, you may need to keep track of new user creation details on Linux.
-
-Also, you may need to send the details by mail.
-
-This may be part of an audit objective, or the security team may wish to monitor this for tracking purposes.
-
-We can do this in another way, as we have already described in the previous article.
-
- * **[Bash script to send a mail when new user account is created in system][1]**
-
-
-
-There are many open source monitoring tools available for Linux.
-
-But I don’t think they have a way to track the new user creation process and alert the administrator when that happens.
-
-So how can we achieve this?
-
-We can write our own Bash script to achieve this.
-
-We have added many useful shell scripts in the past. If you want to check them out, go to the link below.
-
- * **[How to automate day to day activities using shell scripts?][2]**
-
-
-
-### What does this script really do?
-
-This will take a backup of the “/etc/passwd” file twice a day (beginning of the day and end of the day), which will enable you to get new user creation details for the specified date.
-
-We need to add the below two cronjobs to copy the “/etc/passwd” file.
-
-```
-# crontab -e
-
-1 0 * * * cp /etc/passwd /opt/scripts/passwd-start-$(date +"%Y-%m-%d")
-59 23 * * * cp /etc/passwd /opt/scripts/passwd-end-$(date +"%Y-%m-%d")
-```
-
-It uses the "diff" command to detect the difference between the two files, and if any difference is found for yesterday's date, the script will send an email alert with the new user details to the given email address.
-
-We can’t run this script often because user creation is not happening frequently. However, we plan to run this script once a day.
-
-Therefore, you can get a consolidated report on new user creation.
-
-**Note:** We used our email id in the script for demonstration purpose. So we ask you to use your email id instead.
-
-```
-# vi /opt/scripts/new-user-detail.sh
-
-#!/bin/bash
-mv /opt/scripts/passwd-start-$(date --date='yesterday' '+%Y-%m-%d') /opt/scripts/passwd-start
-mv /opt/scripts/passwd-end-$(date --date='yesterday' '+%Y-%m-%d') /opt/scripts/passwd-end
-ucount=$(diff /opt/scripts/passwd-start /opt/scripts/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3 | wc -l)
-if [ $ucount -gt 0 ]
-then
-SUBJECT="ATTENTION: New User Account is created on server : `date --date='yesterday' '+%b %e'`"
-MESSAGE="/tmp/new-user-logs.txt"
-TO="[email protected]"
-echo "Hostname: `hostname`" >> $MESSAGE
-echo -e "\n" >> $MESSAGE
-echo "The New User Details are below." >> $MESSAGE
-echo "+------------------------------+" >> $MESSAGE
-diff /opt/scripts/passwd-start /opt/scripts/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3 >> $MESSAGE
-echo "+------------------------------+" >> $MESSAGE
-mail -s "$SUBJECT" "$TO" < $MESSAGE
-rm $MESSAGE
-fi
-```
-
-Set executable permission on the "new-user-detail.sh" file.
-
-```
-$ chmod +x /opt/scripts/new-user-detail.sh
-```
-
-Finally, add a cronjob to automate this. It runs daily at 7 AM.
-
-```
-# crontab -e
-
-0 7 * * * /bin/bash /opt/scripts/new-user-detail.sh
-```
-
-**Note:** You will receive an email alert at 7 AM every day, containing the new user details for yesterday's date.
-
-**Output:** The output will be the same as the one below.
-
-```
-# cat /tmp/new-user-logs.txt
-
-Hostname: CentOS.2daygeek.com
-
-The New User Details are below.
-+------------------------------+
-tuser3
-+------------------------------+
-```
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/linux-shell-script-to-monitor-user-creation-send-email/
-
-作者:[Magesh Maruthamuthu][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/magesh/
-[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/linux-bash-script-to-monitor-user-creation-send-email/
-[2]: https://www.2daygeek.com/category/shell-script/
diff --git a/sources/tech/20190912 How to fix common pitfalls with the Python ORM tool SQLAlchemy.md b/sources/tech/20190912 How to fix common pitfalls with the Python ORM tool SQLAlchemy.md
new file mode 100644
index 0000000000..c373e85502
--- /dev/null
+++ b/sources/tech/20190912 How to fix common pitfalls with the Python ORM tool SQLAlchemy.md
@@ -0,0 +1,208 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to fix common pitfalls with the Python ORM tool SQLAlchemy)
+[#]: via: (https://opensource.com/article/19/9/common-pitfalls-python)
+[#]: author: (Zach Todd https://opensource.com/users/zchtodd)
+
+How to fix common pitfalls with the Python ORM tool SQLAlchemy
+======
+Seemingly small choices made when using SQLAlchemy can have important
+implications on the object-relational mapping toolkit's performance.
+![A python with a package.][1]
+
+Object-relational mapping ([ORM][2]) makes life easier for application developers, in no small part because it lets you interact with a database in a language you may know (such as Python) instead of raw SQL queries. [SQLAlchemy][3] is a Python ORM toolkit that provides access to SQL databases using Python. It is a mature ORM tool that adds the benefit of model relationships, a powerful query construction paradigm, easy serialization, and much more. Its ease of use, however, makes it easy to forget what is going on behind the scenes. Seemingly small choices made using SQLAlchemy can have important performance implications.
+
+This article explains some of the top performance issues developers encounter when using SQLAlchemy and how to fix them.
+
+### Retrieving an entire result set when you only need the count
+
+Sometimes a developer just needs a count of results, but instead of utilizing a database count, all the results are fetched and the count is done with **len** in Python.
+
+
+```
+count = len(User.query.filter_by(acct_active=True).all())
+```
+
+Using SQLAlchemy's **count** method instead will do the count on the server side, resulting in far less data sent to the client. Calling **all()** in the prior example also results in the instantiation of model objects, which can become expensive quickly, given enough rows.
+
+Unless more than the count is required, just use the **count** method.
+
+
+```
+count = User.query.filter_by(acct_active=True).count()
+```
+
+### Retrieving entire models when you only need a few columns
+
+In many cases, only a few columns are needed when issuing a query. Instead of returning entire model instances, SQLAlchemy can fetch only the columns you're interested in. This not only reduces the amount of data sent but also avoids the need to instantiate entire objects. Working with tuples of column data instead of models can be quite a bit faster.
+
+
+```
+result = User.query.all()
+for user in result:
+ print(user.name, user.email)
+```
+
+Instead, select only what is needed using the **with_entities** method.
+
+
+```
+result = User.query.with_entities(User.name, User.email).all()
+for (username, email) in result:
+ print(username, email)
+```
+
+### Updating one object at a time inside a loop
+
+Avoid using loops to update collections individually. While the database may execute a single update very quickly, the roundtrip time between the application and database servers will quickly add up. In general, strive for fewer queries where reasonable.
+
+
+```
+for user in users_to_update:
+ user.acct_active = True
+ db.session.add(user)
+```
+
+Use the bulk update method instead.
+
+
+```
+query = User.query.filter(User.id.in_([user.id for user in users_to_update]))
+query.update({"acct_active": True}, synchronize_session=False)
+```
+
+### Triggering cascading deletes
+
+ORM allows easy configuration of relationships on models, but there are some subtle behaviors that can be surprising. Most databases maintain relational integrity through foreign keys and various cascade options. SQLAlchemy allows you to define models with foreign keys and cascade options, but the ORM has its own cascade logic that can preempt the database.
+
+Consider the following models.
+
+
+```
+class Artist(Base):
+ __tablename__ = "artist"
+
+ id = Column(Integer, primary_key=True)
+ songs = relationship("Song", cascade="all, delete")
+
+class Song(Base):
+ __tablename__ = "song"
+
+ id = Column(Integer, primary_key=True)
+ artist_id = Column(Integer, ForeignKey("artist.id", ondelete="CASCADE"))
+```
+
+Deleting artists will cause the ORM to issue **delete** queries on the Song table, thus preventing the deletes from happening as a result of the foreign key. This behavior can become a bottleneck with complex relationships and a large number of records.
+
+Include the **passive_deletes** option to ensure that the database is managing relationships. Be sure, however, that your database is capable of this. SQLite, for example, does not manage foreign keys by default.
+
+
+```
+songs = relationship("Song", cascade="all, delete", passive_deletes=True)
+```
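+
+For SQLite specifically, foreign-key enforcement can be switched on per connection. Here is a minimal sketch using SQLAlchemy's event system (my own addition, not from the original article):
+
+```
+from sqlalchemy import event
+from sqlalchemy.engine import Engine
+
+@event.listens_for(Engine, "connect")
+def enable_sqlite_fks(dbapi_connection, connection_record):
+    # SQLite ships with foreign-key enforcement off; turn it on for each new connection
+    cursor = dbapi_connection.cursor()
+    cursor.execute("PRAGMA foreign_keys=ON")
+    cursor.close()
+```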
+
+### Relying on lazy loading when eager loading should be used
+
+Lazy loading is the default SQLAlchemy approach to relationships. Building from the last example, this implies that loading an artist does not simultaneously load his or her songs. This is usually a good idea, but the separate queries can be wasteful if certain relationships always need to be loaded.
+
+Popular serialization frameworks like [Marshmallow][4] can trigger a cascade of queries if relationships are allowed to load in a lazy fashion.
+
+There are a few ways to control this behavior. The simplest method is through the relationship function itself.
+
+
+```
+songs = relationship("Song", lazy="joined", cascade="all, delete")
+```
+
+This will cause a left join to be added to any query for artists, and as a result, the **songs** collection will be immediately available. Although more data is returned to the client, there are potentially far fewer roundtrips.
+
+SQLAlchemy offers finer-grained control for situations where such a blanket approach can't be taken. The **joinedload()** function can be used to toggle joined loading on a per-query basis.
+
+
+```
+from sqlalchemy.orm import joinedload
+
+artist = Artist.query.options(joinedload(Artist.songs)).first()
+print(artist.songs)  # Already loaded; does not incur a roundtrip
+```
+
+### Using the ORM for a bulk record import
+
+The overhead of constructing full model instances becomes a major bottleneck when importing thousands of records. Imagine, for example, loading thousands of song records from a file where each song has first been converted to a dictionary.
+
+
+```
+for song in songs:
+ db.session.add(Song(**song))
+```
+
+Instead, bypass the ORM and use just the parameter binding functionality of core SQLAlchemy.
+
+
+```
+batch = []
+insert_stmt = Song.__table__.insert()
+for song in songs:
+    # Send the accumulated rows to the database once the batch exceeds 1,000 entries
+    if len(batch) > 1000:
+        db.session.execute(insert_stmt, batch)
+        batch.clear()
+    batch.append(song)
+# Insert any remaining rows
+if batch:
+    db.session.execute(insert_stmt, batch)
+```
+
+Keep in mind that this method naturally skips any client-side ORM logic you might depend on, such as Python-based column defaults. While this method is faster than loading objects as full model instances, your database may have bulk loading methods that are faster. PostgreSQL, for example, has the **COPY** command that offers perhaps the best performance for loading large numbers of records.
+
+### Calling commit or flush prematurely
+
+There are many occasions when you need to associate a child record to its parent, or vice versa. One obvious way of doing this is to flush the session so that the record in question will be assigned an ID.
+
+
+```
+artist = Artist(name="Bob Dylan")
+song = Song(title="Mr. Tambourine Man")
+
+db.session.add(artist)
+db.session.flush()
+
+song.artist_id = artist.id
+```
+
+Committing or flushing more than once per request is usually unnecessary and undesirable. A database flush involves forcing disk writes on the database server, and in most circumstances, the client will block until the server can acknowledge that the data has been written.
+
+SQLAlchemy can track relationships and manage keys behind the scenes.
+
+
+```
+artist = Artist(name="Bob Dylan")
+song = Song(title="Mr. Tambourine Man")
+
+artist.songs.append(song)
+```
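+
+A single commit at the end is then enough; with the default save-update cascade, adding the artist to the session pulls the appended song in with it (a sketch continuing the example above):
+
+```
+db.session.add(artist)  # song is cascaded into the session via the relationship
+db.session.commit()     # artist.id and song.artist_id are populated here
+```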
+
+### Wrapping up
+
+I hope this list of common pitfalls can help you avoid these issues and keep your application running smoothly. As always, when diagnosing a performance problem, measurement is key. Most databases offer performance diagnostics that can help you pinpoint issues, such as the PostgreSQL **pg_stat_statements** module.
+
+* * *
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/common-pitfalls-python
+
+作者:[Zach Todd][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/zchtodd
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_snake_file_box.jpg?itok=UuDVFLX- (A python with a package.)
+[2]: https://en.wikipedia.org/wiki/Object-relational_mapping
+[3]: https://www.sqlalchemy.org/
+[4]: https://marshmallow.readthedocs.io/en/stable/
diff --git a/sources/tech/20190912 New zine- HTTP- Learn your browser-s language.md b/sources/tech/20190912 New zine- HTTP- Learn your browser-s language.md
new file mode 100644
index 0000000000..85e3a6428a
--- /dev/null
+++ b/sources/tech/20190912 New zine- HTTP- Learn your browser-s language.md
@@ -0,0 +1,197 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (New zine: HTTP: Learn your browser's language!)
+[#]: via: (https://jvns.ca/blog/2019/09/12/new-zine-on-http/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+New zine: HTTP: Learn your browser's language!
+======
+
+Hello! I’ve released a new zine! It’s called “HTTP: Learn your browser’s language!”
+
+You can get it for $12 at https://gum.co/http-zine. If you buy it, you’ll get a PDF that you can either read on your computer or print out.
+
+Here’s the cover and table of contents:
+
+[![][1]][2]
+
+### why http?
+
+I got the idea for this zine from talking to [Marco Rogers][3] – he mentioned that he thought that new web developers / mobile developers would really benefit from understanding the fundamentals of HTTP better. I thought “OOH I LOVE TALKING ABOUT HTTP”, wrote a few pages about HTTP, saw they were helping people, and decided to write a whole zine about HTTP.
+
+HTTP is important to understand because it runs the entire web – if you understand how HTTP requests and responses work, then it makes it WAY EASIER to debug why your web application isn’t working properly. Caching, cookies, and a lot of web security are implemented using HTTP headers, so if you don’t understand HTTP headers those things seem kind of like impenetrable magic. But actually the HTTP protocol is fundamentally pretty simple – there are a lot of complicated details but the basics are pretty easy to understand.
+
+So the goal of this zine is to teach you the basics so you can easily look up and understand the details when you need them.
+
+### what it looks like printed out
+
+All of my zines are best printed out (though you get a PDF you can read on your computer too!), so here are a couple of pictures of what it looks like when printed. I always ask my illustrator to make both a black and white version and a colour version of the cover so that it looks great when printed on a black and white printer.
+
+[![][4]][2]
+
+(if you click on that “same origin policy” image, you can make it bigger)
+
+The zine comes with 4 print PDFs in addition to a PDF you can just read on your computer/phone:
+
+ * letter / colour
+ * letter / b&w
+ * a4 / colour
+ * a4 / b&w
+
+
+
+### zines for your team
+
+You can also buy this zine for your team members at work to help them learn HTTP!
+
+I’ve been trying to get the pricing right for this for a while – I used to do it based on size of company, but that didn’t seem quite right because sometimes people would want to buy the zine for a small team at a big company. So I’ve switched to pricing based on the number of copies you want to distribute at your company.
+
+Here’s the link: [zines for your team!][5].
+
+### the tweets
+
+When I started writing zines, I would just sit down, write down the things I thought were important, and be done with it.
+
+In the last year and a half or so I’ve taken a different approach – instead of writing everything and then releasing it, I write a page at a time, post the page to Twitter, and then improve it and decide what page to write next based on the questions/comments I get on Twitter. If someone replies to the tweet and asks a question that shows that what I wrote is unclear, I can improve it! (I love getting replies on twitter asking clarifying questions!).
+
+Here are all the initial drafts of the pages I wrote and posted on twitter, in chronological order. Some of the pages didn’t make it into the zine at all, and I needed to do a lot of editing at the end to figure out the right order and make them all work coherently together in a zine instead of being a bunch of independent tweets.
+
+ * Jun 30: [anatomy of a HTTP request][13]
+ * Jul 1: [http status codes][6]
+ * Jul 2: [anatomy of a HTTP response][7]
+ * Jul 2: [POST requests][8]
+ * Jul 2: [an example POST request][9]
+ * Jul 28: [the same origin policy][10]
+ * Jul 28: [what’s HTTP?][11]
+ * Jul 30: [the most important HTTP request headers][12]
+ * Aug 4: [content delivery networks][14]
+ * Aug 6: [caching headers][15]
+ * Aug 6: [how cookies work][16]
+ * Aug 7: [redirects][17]
+ * Aug 8: [45 seconds on the Accept-Language HTTP header][18]
+ * Aug 9: [HTTPS: HTTP + security][19]
+ * Aug 9: [today in 45 second video experiments: the Range header][20]
+ * Aug 9: [some HTTP exercises to try][21]
+ * Aug 10: [some security headers][22]
+ * Aug 12: [using HTTP APIs][23]
+ * Aug 13: [what’s with those headers that start with x-?][24]
+ * Aug 13: [important HTTP response headers][25]
+ * Aug 14: [HTTP request methods (part 1)][26]
+ * Aug 14: [HTTP request methods (part 2)][27]
+ * Aug 15: [how URLs work][28]
+ * Aug 16: [CORS][29]
+ * Aug 19: [why the same origin policy matters][30]
+ * Aug 21: [HTTP headers][31]
+ * Aug 24: [how to learn more about HTTP][32]
+ * Aug 25: [HTTP/2][33]
+ * Aug 27: [certificates][34]
+
+
+
+Writing zines one tweet at a time has been really fun. I think it improves the quality a lot, because I get a ton of feedback along the way that I can use to make the zine better. There are also some experimental 45 second tiny videos in that list, which are definitely not part of the zine, but which were fun to make and which I might expand on in the future.
+
+### examplecat.com
+
+One tiny easter egg in the zine: I have a lot of examples of HTTP requests, and I wasn’t sure for a long time what domain I should use for the examples. I used example.com a bunch, and google.com and twitter.com sometimes, but none of those felt quite right.
+
+A couple of days before publishing the zine I finally had an epiphany – my example on the cover was requesting a picture of a cat, so I registered examplecat.com, which just has a single picture of a cat. It also has an ASCII cat if you’re browsing in your terminal.
+
+```
+$ curl https://examplecat.com/cat.txt -i
+HTTP/2 200
+accept-ranges: bytes
+cache-control: public, max-age=0, must-revalidate
+content-length: 33
+content-type: text/plain; charset=UTF-8
+date: Thu, 12 Sep 2019 16:48:16 GMT
+etag: "ac5affa59f554a1440043537ae973790-ssl"
+strict-transport-security: max-age=31536000
+age: 5
+server: Netlify
+x-nf-request-id: c5060abc-0399-4b44-94bf-c481e22c2b50-1772748
+
+\ /\
+ ) ( ')
+( / )
+ \(__)|
+```
+
+### more zines at wizardzines.com
+
+If you’re interested in the idea of programming zines and haven’t seen my zines before, I have a bunch more at wizardzines.com. There are 6 free zines there:
+
+ * [so you want to be a wizard][35]
+ * [let’s learn tcpdump!][36]
+ * [spying on your programs with strace][37]
+ * [networking! ACK!][38]
+ * [linux debugging tools you’ll love][39]
+ * [profiling and tracing with perf][40]
+
+
+
+### next zine: not sure yet!
+
+Some things I’m considering for the next zine:
+
+ * debugging skills (I started writing a bunch of pages about debugging but switched gears to the HTTP zine because I got really excited about that. but debugging is my favourite thing so I’d like to get this done at some point)
+ * gdb (a short zine in the spirit of [let’s learn tcpdump][36])
+ * relational databases (what’s up with transactions?)
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/09/12/new-zine-on-http/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://jvns.ca/images/http-zine-cover.png
+[2]: https://gum.co/http-zine
+[3]: https://twitter.com/polotek
+[4]: https://jvns.ca/images/http-zine-cover.jpeg
+[5]: https://wizardzines.com/zines-team/
+[6]: https://twitter.com/b0rk/status/1145824140462608387
+[7]: https://twitter.com/b0rk/status/1145896193077256197
+[8]: https://twitter.com/b0rk/status/1146054159214567424
+[9]: https://twitter.com/b0rk/status/1146065212560179202
+[10]: https://twitter.com/b0rk/status/1155493682885341184
+[11]: https://twitter.com/b0rk/status/1155318552129396736
+[12]: https://twitter.com/b0rk/status/1156048630220017665
+[13]: https://twitter.com/b0rk/status/1145362860136177664
+[14]: https://twitter.com/b0rk/status/1158012032651862017
+[15]: https://twitter.com/b0rk/status/1158726129508868097
+[16]: https://twitter.com/b0rk/status/1158848054142873603
+[17]: https://twitter.com/b0rk/status/1159163613938167808
+[18]: https://twitter.com/b0rk/status/1159492669384658944
+[19]: https://twitter.com/b0rk/status/1159812119099060224
+[20]: https://twitter.com/b0rk/status/1159829608595804160
+[21]: https://twitter.com/b0rk/status/1159839824594915335
+[22]: https://twitter.com/b0rk/status/1160185182323970050
+[23]: https://twitter.com/b0rk/status/1160933788949655552
+[24]: https://twitter.com/b0rk/status/1161283690925834241
+[25]: https://twitter.com/b0rk/status/1161262574031265793
+[26]: https://twitter.com/b0rk/status/1161679906415218690
+[27]: https://twitter.com/b0rk/status/1161680137865367553
+[28]: https://twitter.com/b0rk/status/1161997141876903936
+[29]: https://twitter.com/b0rk/status/1162392625057583104
+[30]: https://twitter.com/b0rk/status/1163460967067541504
+[31]: https://twitter.com/b0rk/status/1164181027469832196
+[32]: https://twitter.com/b0rk/status/1165277002791829510
+[33]: https://twitter.com/b0rk/status/1165623594917007362
+[34]: https://twitter.com/b0rk/status/1166466933912494081
+[35]: https://wizardzines.com/zines/wizard/
+[36]: https://wizardzines.com/zines/tcpdump/
+[37]: https://wizardzines.com/zines/strace/
+[38]: https://wizardzines.com/zines/networking/
+[39]: https://wizardzines.com/zines/debugging/
+[40]: https://wizardzines.com/zines/perf/
diff --git a/sources/tech/20190913 How to Find and Replace a String in File Using the sed Command in Linux.md b/sources/tech/20190913 How to Find and Replace a String in File Using the sed Command in Linux.md
new file mode 100644
index 0000000000..bfb85529d4
--- /dev/null
+++ b/sources/tech/20190913 How to Find and Replace a String in File Using the sed Command in Linux.md
@@ -0,0 +1,352 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Find and Replace a String in File Using the sed Command in Linux)
+[#]: via: (https://www.2daygeek.com/linux-sed-to-find-and-replace-string-in-files/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+How to Find and Replace a String in File Using the sed Command in Linux
+======
+
+When you are working on text files, you may need to find and replace a string in a file.
+
+The sed command is mostly used to replace text in a file.
+
+This can be done using the sed command or the awk command in Linux.
+
+In this tutorial, we will show you how to do this using the sed command; the awk command will be covered separately.
+
+### What is the sed Command
+
+sed stands for Stream Editor. It is used to perform basic text manipulation in Linux, and it can perform various functions such as searching, finding, modifying, inserting, or deleting text in files.
+
+Also, it performs complex regular expression pattern matching.
+
+It can be used for the following purpose.
+
+ * To find and replace matches with a given format.
+ * To find and replace specific lines that match a given format.
+ * To find and replace the entire line that matches the given format.
+ * To search and replace two different patterns simultaneously.
+
+
+
+The sixteen examples listed in this article will help you master the sed command.
+
+If you want to remove a line from a file using the sed command, go to the following article.
+
+**`Note:`** Since this is a demonstration article, we use the sed command without the `-i` option, so the results are only printed to the Linux terminal and the source file is left unchanged.
+
+But if you want to change the source file in a real environment, use the `-i` option with the sed command, as shown in the sketch below.
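+
+A quick sketch of an in-place edit (my own example, using the demo file introduced below) that also keeps a backup of the original file with a `.bak` suffix:
+
+```
+# sed -i.bak 's/unix/linux/g' sed-test.txt
+```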
+
+Common Syntax for sed to replace a string.
+
+```
+sed -i 's/Search_String/Replacement_String/g' Input_File
+```
+
+First, we need to understand the sed syntax to do this. The details are below.
+
+ * `sed:` It’s a Linux command.
+ * `-i:` By default, sed prints the results to the standard output. When you add this option, sed edits the files in place. If you add a suffix (for example, `-i.bak`), a backup of the original file will be created.
+ * `s:` The s is the substitute command.
+ * `Search_String:` The string or regular expression to search for.
+ * `Replacement_String:` The replacement string.
+ * `g:` Global replacement flag. By default, the sed command replaces the first occurrence of the pattern in each line and won’t replace the other occurrences in the line. But all occurrences will be replaced when the replacement flag is provided.
+ * `/:` Delimiter character.
+ * `Input_File:` The filename on which you want to perform the action.
+
+
+
+Let us look at some commonly used examples of the sed command to search and replace text in files.
+
+We have created the below file for demonstration purposes.
+
+```
+# cat sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 1) How to Find and Replace the “first” Occurrence of the Pattern on a Line
+
+The below sed command replaces the word **unix** with **linux** in the file. This only changes the first instance of the pattern on each line.
+
+```
+# sed 's/unix/linux/' sed-test.txt
+
+1 Unix linux unix 23
+2 linux Linux 34
+3 linuxlinux UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 2) How to Find and Replace the “Nth” Occurrence of the Pattern on a Line
+
+Use the numeric flags 1, 2, …, N to replace the corresponding occurrence of a pattern in a line.
+
+The below sed command replaces the second instance of the “unix” pattern with “linux” in a line.
+
+```
+# sed 's/unix/linux/2' sed-test.txt
+
+1 Unix unix linux 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 3) How to Search and Replace all Instances of the Pattern in a Line
+
+The below sed command replaces all instances of the “unix” pattern with “linux” on each line, because “g” means global replacement.
+
+```
+# sed 's/unix/linux/g' sed-test.txt
+
+1 Unix linux linux 23
+2 linux Linux 34
+3 linuxlinux UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 4) How to Find and Replace all Instances of the Pattern in a Line from the “Nth” Occurrence
+
+The below sed command replaces all occurrences of the pattern in a line, starting from the “Nth” occurrence.
+
+```
+# sed 's/unix/linux/2g' sed-test.txt
+
+1 Unix unix linux 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 5) Search and Replace the pattern on a specific line number
+
+You can replace the string on a specific line number. The below sed command replaces the pattern “unix” with “linux” only on the 3rd line.
+
+```
+# sed '3 s/unix/linux/' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxlinux UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 6) How to Find and Replace Pattern in a Range of Lines
+
+You can specify the range of line numbers to replace the string.
+
+The below sed command replaces the “unix” pattern with “linux” on lines 1 through 3.
+
+```
+# sed '1,3 s/unix/linux/' sed-test.txt
+
+1 Unix linux unix 23
+2 linux Linux 34
+3 linuxlinux UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 7) How to Find and Change the pattern in the Last Line
+
+The below sed command allows you to replace the matching string only in the last line.
+
+It replaces the “Linux” pattern with “Unix” only on the last line.
+
+```
+# sed '$ s/Linux/Unix/' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Unix is free and opensource operating system
+```
+
+### 8) How to Find and Replace only the Right Matching Word in a Line
+
+As you might have noticed, the substring “linuxunix” was replaced with “linuxlinux” in the 6th example. If you want to replace only the exact matching word, use the word-boundary expression “\b” on both ends of the search string.
+
+```
+# sed '1,3 s/\bunix\b/linux/' sed-test.txt
+
+1 Unix linux unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 9) How to Search and Replace the Pattern Case-Insensitively
+
+Everyone knows that Linux is case sensitive. To make the pattern match case-insensitively, use the I flag.
+
+```
+# sed 's/unix/linux/gI' sed-test.txt
+
+1 linux linux linux 23
+2 linux Linux 34
+3 linuxlinux linuxLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 10) How to Find and Replace a String that Contains the Delimiter Character
+
+When the string you search for or replace contains the delimiter character, you need to use the backslash “\” to escape the slash.
+
+In this example, we are going to replace “/bin/bash” with “/usr/bin/fish”.
+
+```
+# sed 's/\/bin\/bash/\/usr\/bin\/fish/g' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /usr/bin/fish CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+The above sed command works as expected, but it looks bad. To simplify this, most people use the vertical bar “|” as the delimiter instead. So, I advise you to go with it.
+
+```
+# sed 's|/bin/bash|/usr/bin/fish/|g' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /usr/bin/fish/ CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 11) How to Find and Replace Digits with a Given Pattern
+
+Similarly, digits can be replaced with a pattern. The below sed command replaces all digits matching “[0-9]” with the string “number”.
+
+```
+# sed 's/[0-9]/number/g' sed-test.txt
+
+number Unix unix unix numbernumber
+number linux Linux numbernumber
+number linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 12) How to Find and Replace only Two-Digit Numbers with a Pattern
+
+If you want to replace only the two-digit numbers with the pattern, use the sed command below.
+
+```
+# sed 's/\b[0-9]\{2\}\b/number/g' sed-test.txt
+
+1 Unix unix unix number
+2 linux Linux number
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 13) How to Print only Replaced Lines with the sed Command
+
+If you want to display only the changed lines, use the below sed command.
+
+ * p – It prints the replaced line. Without the “-n” option, each replaced line would be printed twice.
+ * n – The “-n” option suppresses the automatic output, so only the lines printed by the “p” flag are shown.
+
+
+
+```
+# sed -n 's/Unix/Linux/p' sed-test.txt
+
+1 Linux unix unix 23
+3 linuxunix LinuxLinux
+```
+
+### 14) How to Run Multiple sed Commands at Once
+
+The following sed command detects and replaces two different patterns simultaneously.
+
+The below sed command searches for the “linuxunix” and “CentOS” patterns, replacing them with “LINUXUNIX” and “RHEL8” respectively, in a single pass.
+
+```
+# sed -e 's/linuxunix/LINUXUNIX/g' -e 's/CentOS/RHEL8/g' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 LINUXUNIX UnixLinux
+linux /bin/bash RHEL8 Linux OS
+Linux is free and opensource operating system
+```
+
+The following sed command searches for two different patterns and replaces both of them with a single string.
+
+The below sed command searches for the “linuxunix” and “CentOS” patterns, replacing both with “Fedora30” in a single pass.
+
+```
+# sed -e 's/\(linuxunix\|CentOS\)/Fedora30/g' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 Fedora30 UnixLinux
+linux /bin/bash Fedora30 Linux OS
+Linux is free and opensource operating system
+```
+
+### 15) How to Find and Replace the Entire Line if the Given Pattern Matches
+
+If the pattern matches, you can use the sed command to replace the entire line with the new line. This can be done using the “c” command.
+
+```
+# sed '/OS/ c New Line' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+New Line
+Linux is free and opensource operating system
+```
+
+### 16) How to Search and Replace Lines that Match a Pattern
+
+You can specify a pattern for the sed command to match on a line. When the pattern matches, the sed command searches for the string to be replaced.
+
+The below sed command first looks for lines that have the “OS” pattern, then replaces the word “Linux” with “ArchLinux” on those lines.
+
+```
+# sed '/OS/ s/Linux/ArchLinux/' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS ArchLinux OS
+Linux is free and opensource operating system
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-sed-to-find-and-replace-string-in-files/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
diff --git a/sources/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md b/sources/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md
new file mode 100644
index 0000000000..877845b87a
--- /dev/null
+++ b/sources/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md
@@ -0,0 +1,197 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Configure SFTP Server with Chroot in Debian 10)
+[#]: via: (https://www.linuxtechi.com/configure-sftp-chroot-debian10/)
+[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
+
+How to Configure SFTP Server with Chroot in Debian 10
+======
+
+**SFTP** stands for Secure File Transfer Protocol / SSH File Transfer Protocol. It is one of the most common methods used to transfer files securely over SSH from our local system to a remote server and vice versa. The main advantage of SFTP is that we don’t need to install any additional package except ‘**openssh-server**’; in most Linux distributions, the ‘openssh-server’ package is part of the default installation. Another benefit of SFTP is that we can allow a user to use SFTP only, not SSH.
+
+![Configure-sftp-debian10][2]
+
+Recently Debian 10, code name ‘Buster’, has been released. In this article we will demonstrate how to configure SFTP with a chroot ‘jail’-like environment on a Debian 10 system. Here, a chroot jail-like environment means that users cannot go beyond their respective home directories, or in other words, users cannot change directories outside their home directories. Following are the lab details:
+
+ * OS = Debian 10
+ * IP Address = 192.168.56.151
+
+
+
+Let’s jump into SFTP Configuration Steps,
+
+### Step:1) Create a Group for sftp using groupadd command
+
+Open the terminal, create a group with a name “**sftp_users**” using below groupadd command,
+
+```
+root@linuxtechi:~# groupadd sftp_users
+```
+
+### Step:2) Add Users to Group ‘sftp_users’ and set permissions
+
+In case you want to create new user and want to add that user to ‘sftp_users’ group, then run the following command,
+
+**Syntax:** # useradd -m -G sftp_users <user_name>
+
+Let’s suppose the user name is ‘jonathan’
+
+```
+root@linuxtechi:~# useradd -m -G sftp_users jonathan
+```
+
+Set the password using the following chpasswd command,
+
+```
+root@linuxtechi:~# echo "jonathan:" | chpasswd
+```
+
+In case you want to add an existing user to the ‘sftp_users’ group, then run the beneath usermod command. Let’s suppose the already existing user name is ‘chris’:
+
+```
+root@linuxtechi:~# usermod -aG sftp_users chris
+```
+
+Now set the required ownership on the users’ home directories,
+
+```
+root@linuxtechi:~# chown root /home/jonathan /home/chris/
+```
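+
+For the chroot to work, OpenSSH requires that each user’s home directory be owned by root and not writable by any other user. You can verify this quickly (an extra sanity check, not part of the original steps):
+
+```
+root@linuxtechi:~# ls -ld /home/jonathan /home/chris
+```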
+
+Create an upload folder in both users’ home directories and set the correct ownership,
+
+```
+root@linuxtechi:~# mkdir /home/jonathan/upload
+root@linuxtechi:~# mkdir /home/chris/upload
+root@linuxtechi:~# chown jonathan /home/jonathan/upload
+root@linuxtechi:~# chown chris /home/chris/upload
+```
+
+**Note:** Users like jonathan and chris can upload files and directories to their upload folder from their local systems.
+
+### Step:3) Edit sftp configuration file (/etc/ssh/sshd_config)
+
+As we have already stated, SFTP operations are done over SSH, so its configuration file is “**/etc/ssh/sshd_config**“. Before making any changes, I would suggest first taking a backup, and then edit this file and add the following content:
+
+```
+root@linuxtechi:~# cp /etc/ssh/sshd_config /etc/ssh/sshd_config-org
+root@linuxtechi:~# vim /etc/ssh/sshd_config
+………
+#Subsystem sftp /usr/lib/openssh/sftp-server
+Subsystem sftp internal-sftp
+
+Match Group sftp_users
+ X11Forwarding no
+ AllowTcpForwarding no
+ ChrootDirectory %h
+ ForceCommand internal-sftp
+…………
+```
+
+Save & exit the file.
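+
+Optionally, validate the configuration before restarting the service (an extra sanity check; ‘sshd -t’ prints nothing when the file is valid):
+
+```
+root@linuxtechi:~# sshd -t
+```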
+
+To make the above changes take effect, restart the SSH service using the following systemctl command,
+
+```
+root@linuxtechi:~# systemctl restart sshd
+```
+
+In the above ‘sshd_config’ file, we have commented out the line which starts with “Subsystem” and added the new entry “Subsystem sftp internal-sftp”, along with the new lines explained below:
+
+“**Match Group sftp_users”** –> It means that if a user is a part of the ‘sftp_users’ group, then the rules mentioned below this entry will be applied.
+
+“**ChrootDirectory %h**” –> It means users can only change directories within their respective home directories and cannot go beyond them. In other words, users are not permitted to change directories outside their own; they get a jail-like environment within their directories and can’t access any other user’s or system directories.
+
+“**ForceCommand internal-sftp**” –> It means users are limited to the sftp command only.
+
+### Step:4) Test and Verify sftp
+
+Log in to any other Linux system on the same network as your SFTP server, and then try to SSH to the SFTP server as the users that we added to the ‘sftp_users’ group.
+
+```
+[root@linuxtechi ~]# ssh root@linuxtechi
+root@linuxtechi's password:
+Write failed: Broken pipe
+[root@linuxtechi ~]# ssh root@linuxtechi
+root@linuxtechi's password:
+Write failed: Broken pipe
+[root@linuxtechi ~]#
+```
+
+The above confirms that users are not allowed to SSH. Now try sftp using the following commands,
+
+```
+[root@linuxtechi ~]# sftp root@linuxtechi
+root@linuxtechi's password:
+Connected to 192.168.56.151.
+sftp> ls -l
+drwxr-xr-x 2 root 1001 4096 Sep 14 07:52 debian10-pkgs
+-rw-r--r-- 1 root 1001 155 Sep 14 07:52 devops-actions.txt
+drwxr-xr-x 2 1001 1002 4096 Sep 14 08:29 upload
+```
+
+Let’s try to download a file using the sftp ‘**get**‘ command,
+
+```
+sftp> get devops-actions.txt
+Fetching /devops-actions.txt to devops-actions.txt
+/devops-actions.txt 100% 155 0.2KB/s 00:00
+sftp>
+sftp> cd /etc
+Couldn't stat remote file: No such file or directory
+sftp> cd /root
+Couldn't stat remote file: No such file or directory
+sftp>
+```
+
+The above output confirms that we are able to download files from our SFTP server to the local machine, and apart from this, we have also verified that users cannot change directories.
+
+Let’s try to upload a file to the “**upload**” folder,
+
+```
+sftp> cd upload/
+sftp> put metricbeat-7.3.1-amd64.deb
+Uploading metricbeat-7.3.1-amd64.deb to /upload/metricbeat-7.3.1-amd64.deb
+metricbeat-7.3.1-amd64.deb 100% 38MB 38.4MB/s 00:01
+sftp> ls -l
+-rw-r--r-- 1 1001 1002 40275654 Sep 14 09:18 metricbeat-7.3.1-amd64.deb
+sftp>
+```
+
+This confirms that we have successfully uploaded a file from our local system to the SFTP server.
+
+Now test the SFTP server with the WinSCP tool; enter the SFTP server IP address along with the user’s credentials,
+
+![Winscp-sftp-debian10][3]
+
+Click on Login and then try to download and upload files
+
+![Download-file-winscp-debian10-sftp][4]
+
+Now try to upload files to the upload folder,
+
+![Upload-File-using-winscp-Debian10-sftp][5]
+
+The above window confirms that uploading is also working fine. That’s all from this article; if these steps helped you configure an SFTP server with a chroot environment in Debian 10, then please do share your feedback and comments.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/configure-sftp-chroot-debian10/
+
+作者:[Pradeep Kumar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linuxtechi.com/author/pradeep/
+[b]: https://github.com/lujun9972
+[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Configure-sftp-debian10.jpg
+[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Winscp-sftp-debian10.jpg
+[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Download-file-winscp-debian10-sftp.jpg
+[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Upload-File-using-winscp-Debian10-sftp.jpg
diff --git a/sources/tech/20190916 Constraint programming by example.md b/sources/tech/20190916 Constraint programming by example.md
new file mode 100644
index 0000000000..c434913c5e
--- /dev/null
+++ b/sources/tech/20190916 Constraint programming by example.md
@@ -0,0 +1,163 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Constraint programming by example)
+[#]: via: (https://opensource.com/article/19/9/constraint-programming-example)
+[#]: author: (Oleksii Tsvietnov https://opensource.com/users/oleksii-tsvietnov)
+
+Constraint programming by example
+======
+Understand constraint programming with an example application that
+converts a character's case and ASCII codes.
+![Math formulas in green writing][1]
+
+There are many different ways to solve problems in computing. You might "brute force" your way to a solution by calculating as many possibilities as you can, or you might take a procedural approach and carefully establish the known factors that influence the correct answer. In [constraint programming][2], a problem is viewed as a series of limitations on what could possibly be a valid solution. This paradigm can be applied to effectively solve a group of problems that can be translated to variables and constraints or represented as a mathematic equation. In this way, it is related to the Constraint Satisfaction Problem ([CSP][3]).
+
+Using a declarative programming style, it describes a general model with certain properties. In contrast to the imperative style, it doesn't tell _how_ to achieve something, but rather _what_ to achieve. Instead of defining a set of instructions with only one obvious way to compute values, constraint programming declares relationships between variables within constraints. A final model makes it possible to compute the values of variables regardless of direction or changes. Thus, any change in the value of one variable affects the whole system (i.e., all other variables), and to satisfy defined constraints, it leads to recomputing the other values.
+
+As an example, let's take Pythagoras' theorem: **a² + b² = c²**. The _constraint_ is represented by this equation, which has three _variables_ (a, b, and c), and each has a _domain_ (non-negative). Using the imperative programming style, to compute any of the variables if we have the other two, we would need to create three different functions (because each variable is computed by a different equation):
+
+ * c = √(a² + b²)
+ * a = √(c² - b²)
+ * b = √(c² - a²)
+
+
+
+These functions satisfy the main constraint, and to check domains, each function should validate the input. Moreover, at least one more function would be needed for choosing an appropriate function according to the provided variables. This is one of the possible solutions:
+
+
+```
+def pythagoras(*, a=None, b=None, c=None):
+ ''' Computes a side of a right triangle '''
+
+    # Validate: exactly one side may be unknown; the other two must be positive
+    if len([i for i in (a, b, c) if i is None or i <= 0]) != 1:
+        raise SystemExit("ERROR: you need to provide exactly two positive variables")
+
+ # Compute
+ if a is None:
+ return (c**2 - b**2)**0.5
+ elif b is None:
+ return (c**2 - a**2)**0.5
+ else:
+ return (a**2 + b**2)**0.5
+```
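+
+For example, to get the hypotenuse of a 3-4-5 right triangle, provide the two legs and leave **c** undefined:
+
+```
+print(pythagoras(a=3, b=4))  # 5.0
+```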
+
+To see the difference with the constraint programming approach, I'll show an example of a "problem" with four variables and a constraint that is not represented by a straight mathematic equation. This is a converter that can change characters' cases (lower-case to/from capital/upper-case) and return the ASCII codes for each. Hence, at any time, the converter is aware of all four values and reacts immediately to any changes. The idea of creating this example was fully inspired by John DeNero's [Fahrenheit-Celsius converter][4].
+
+Here is a diagram of a constraint system:
+
+![Constraint system model][5]
+
+The represented "problem" is translated into a constraint system that consists of nodes (constraints) and connectors (variables). Connectors provide an interface for getting and setting values. They also check the variables' domains. When one value changes, that particular connector notifies all its connected nodes about the change. Nodes, in turn, satisfy constraints, calculate new values, and propagate them to other connectors across the system by "asking" them to set a new value. Propagation is done using the message-passing technique that means connectors and nodes get messages (synchronously) and react accordingly. For instance, if the system gets the **A** letter on the "capital letter" connector, the other three connectors provide an appropriate result according to the defined constraint on the nodes: 97, a, and 65. It's not allowed to set any other lower-case letters (e.g., b) on that connector because each connector has its own domain.
+
+When all connectors are linked to nodes, which are defined by constraints, the system is fully set and ready to get values on any of four connectors. Once it's set, the system automatically calculates and sets values on the rest of the connectors. There is no need to check what variable was set and which functions should be called, as is required in the imperative approach—that is relatively easy to achieve with a few variables but gets interesting in case of tens or more.
+
+### How it works
+
+The full source code is available in my [GitHub repo][6]. I'll dig a little bit into the details to explain how the system is built.
+
+First, define the connectors by giving them names and setting domains as a function of one argument:
+
+
+```
+import constraint_programming as cp
+
+small_ascii = cp.connector('Small Ascii', lambda x: x >= 97 and x <= 122)
+small_letter = cp.connector('Small Letter', lambda x: x >= 'a' and x <= 'z')
+capital_ascii = cp.connector('Capital Ascii', lambda x: x >= 65 and x <= 90)
+capital_letter = cp.connector('Capital Letter', lambda x: x >= 'A' and x <= 'Z')
+```
+
+Second, link these connectors to nodes. There are two types: _code_ (translates letters back and forth to ASCII codes) and _aA_ (translates small letters to capital and back):
+
+
+```
+code(small_letter, small_ascii)
+code(capital_letter, capital_ascii)
+aA(small_letter, capital_letter)
+```
+
+These two nodes differ in which functions should be called, but they are derived from a general constraint function:
+
+
+```
+def code(conn1, conn2):
+ return cp.constraint(conn1, conn2, ord, chr)
+
+def aA(conn1, conn2):
+ return cp.constraint(conn1, conn2, str.upper, str.lower)
+```
+
+Each node has only two connectors. If there is an update on a first connector, then a first function is called to calculate the value of another connector (variable). The same happens if a second connector's value changes. For example, if the _code_ node gets **A** on the **conn1** connector, then the function **ord** will be used to get its ASCII code. And, the other way around, if the _aA_ node gets **A** on the **conn2** connector, then it needs to use the **str.lower** function to get the correct small letter on the **conn1**. Every node is responsible for computing new values and "sending" a message to another connector that there is a new value to set. This message carries the name of the node that is asking to set the new value, along with the new value itself.
+
+
+```
+def set_value(src_constr, value):
+ if (not domain is None) and (not domain(value)):
+ raise ValueOutOfDomain(link, value)
+ link['value'] = value
+ for constraint in constraints:
+ if constraint is not src_constr:
+ constraint['update'](link)
+```
+
+When a connector receives the **set** message, it runs the **set_value** function to check a domain, sets a new value, and sends the "update" message to another node. It is just a notification that the value on that connector has changed.
+
+
+```
+def update(src_conn):
+ if src_conn is conn1:
+ conn2['set'](node, constr1(conn1['value']))
+ else:
+ conn1['set'](node, constr2(conn2['value']))
+```
+
+Then, the notified node requests this new value on the connector, computes a new value for another connector, and so on until the whole system changes. That's how the propagation works.
+
+But how does the message passing happen? It is implemented as accessing keys of dictionaries. Both functions (connector and constraint) return a _dispatch dictionary_. Such a dictionary contains _messages_ as keys and _closures_ as values. By accessing a key, let's say, **set**, a dictionary returns the function **set_value** (closure) that has access to all local names of the "connector" function.
+
+
+```
+# A dispatch dictionary
+link = { 'name': name,
+ 'value': None,
+ 'connect': connect,
+ 'set': set_value,
+ 'constraints': get_constraints }
+
+return link
+```
+
+Having a dictionary as a return value makes it possible to create multiple closures (functions) with access to the same local state to operate on. Then these closures are callable by using keys as a type of message.
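+
+To make the message-passing pattern concrete, here is a minimal, self-contained sketch of a dispatch dictionary (the names are hypothetical and not taken from the library): two closures share the same local state, and callers pick a behavior by using a key as the message.
+
+
+```
+def make_counter():
+    count = 0  # local state shared by the closures below
+
+    def increment():
+        nonlocal count
+        count += 1
+
+    def value():
+        return count
+
+    # A dispatch dictionary: messages are keys, closures are values.
+    return {'increment': increment, 'value': value}
+
+counter = make_counter()
+counter['increment']()
+counter['increment']()
+print(counter['value']())  # prints 2
+```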
+
+### Why use constraint programming?
+
+Constraint programming can give you a new perspective on difficult problems. It's not something you can use in every situation, but it may well open new opportunities for solutions. If you find yourself up against an equation that seems difficult to solve reliably in code, try looking at it from a different angle. If the angle that seems to work best is constraint programming, you now have an example of how it can be implemented.
+
+* * *
+
+_This article was originally published on [Oleksii Tsvietnov's blog][7] and is reprinted with his permission._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/constraint-programming-example
+
+作者:[Oleksii Tsvietnov][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/oleksii-tsvietnov
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_math_formulas.png?itok=B59mYTG3 (Math formulas in green writing)
+[2]: https://en.wikipedia.org/wiki/Constraint_programming
+[3]: https://vorakl.com/articles/csp/
+[4]: https://composingprograms.com/pages/24-mutable-data.html#propagating-constraints
+[5]: https://opensource.com/sites/default/files/uploads/constraint-system.png (Constraint system model)
+[6]: https://github.com/vorakl/composingprograms.com/tree/master/char_converter
+[7]: https://vorakl.com/articles/char-converter/
diff --git a/sources/tech/20190916 Copying large files with Rsync, and some misconceptions.md b/sources/tech/20190916 Copying large files with Rsync, and some misconceptions.md
new file mode 100644
index 0000000000..ae314e2a2e
--- /dev/null
+++ b/sources/tech/20190916 Copying large files with Rsync, and some misconceptions.md
@@ -0,0 +1,101 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Copying large files with Rsync, and some misconceptions)
+[#]: via: (https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/)
+[#]: author: (Daniel Leite de Abreu https://fedoramagazine.org/author/dabreu/)
+
+Copying large files with Rsync, and some misconceptions
+======
+
+![][1]
+
+There is a notion that people working in the IT industry often copy and paste from internet howtos. We all do it, and the copying and pasting itself is not a problem. The problem is when we run things without understanding them.
+
+Some years ago, a friend who used to work on my team needed to copy virtual machine templates from site A to site B. They could not understand why the file they copied was 10GB on site A but became 100GB on site B.
+
+The friend believed that _rsync_ is a magic tool that should just “sync” the file as it is. However, what most of us forget is to understand what _rsync_ really is, how it is used, and, most importantly in my opinion, where it comes from. This article provides some further information about _rsync_ and an explanation of what happened in that story.
+
+### About rsync
+
+_rsync_ is a tool created by Andrew Tridgell and Paul Mackerras, who were motivated by the following problem:
+
+Imagine you have two files, _file_A_ and _file_B_. You wish to update _file_B_ to be the same as _file_A_. The obvious method is to copy _file_A_ onto _file_B_.
+
+Now imagine that the two files are on two different servers connected by a slow communications link, for example, a dial-up IP link. If _file_A_ is large, copying it onto _file_B_ will be slow, and sometimes not even possible. To make it more efficient, you could compress _file_A_ before sending it, but that would usually only gain a factor of 2 to 4.
+
+Now assume that _file_A_ and _file_B_ are quite similar, and to speed things up, you take advantage of this similarity. A common method is to send just the differences between _file_A_ and _file_B_ down the link and then use that list of differences to reconstruct the file on the remote end.
+
+The problem is that the normal methods for creating a set of differences between two files rely on being able to read both files. Thus they require that both files be available beforehand at one end of the link. If they are not both available on the same machine, these algorithms cannot be used. (Once you have copied the file over, you don't need the differences.) This is the problem that _rsync_ addresses.
+
+The _rsync_ algorithm efficiently computes which parts of a source file match parts of an existing destination file. Matching parts then do not need to be sent across the link; all that is needed is a reference to the part of the destination file. Only parts of the source file which are not matching need to be sent over.
+
+The receiver can then construct a copy of the source file using the references to parts of the existing destination file and the original material.
+
+Additionally, the data sent to the receiver can be compressed using any of a range of common compression algorithms for further speed improvements.
+
+As you can see, the _rsync_ algorithm addresses the problem in a lovely way.
+
+After this introduction to _rsync_, back to the story!
+
+### Problem 1: Thin provisioning
+
+There were two things that would help the friend understand what was going on.
+
+The problem with the file getting significantly bigger on the other side was caused by Thin Provisioning (TP) being enabled on the source system, a method of optimizing the efficiency of available space in Storage Area Networks (SAN) or Network Attached Storage (NAS).
+
+The source file took up only 10GB because TP was enabled, but when it was transferred with _rsync_ without any additional options, the destination received the full 100GB. _rsync_ could not do the magic automatically; it had to be configured.
+
+The flag that does this work is _-S_ or _\--sparse_, and it tells _rsync_ to handle sparse files efficiently. It does what it says: it only sends the non-empty data, so both source and destination will have a 10GB file.
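+
+If you want to see what sparse handling means before trusting it with a 100GB template, you can experiment with a small sparse file. This is a sketch using GNU coreutils' **truncate**, reusing the article's example destination:
+
+```
+$ truncate -s 100M sparse.img    # create a 100MB sparse file
+$ ls -lh sparse.img              # apparent size: 100M
+$ du -h sparse.img               # blocks actually allocated: 0
+$ rsync -avS sparse.img syncuser@host1:/destination
+```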
+
+### Problem 2: Updating files
+
+The second problem appeared when sending over an updated file. The destination was now receiving just the 10GB, but the whole file (containing the virtual disk) was always transferred, even when only a single configuration file had changed on that virtual disk. In other words, only a small portion of the file changed, but the entire file was resent.
+
+The command used for this transfer was:
+
+```
+rsync -avS vmdk_file syncuser@host1:/destination
+```
+
+Again, understanding how _rsync_ works would help with this problem as well.
+
+The above is the biggest misconception about rsync. Many of us think _rsync_ will simply send the delta updates of the files, and that it will automatically update only what needs to be updated. But this is not the default behaviour of _rsync_.
+
+As the man page says, the default behaviour of _rsync_ is to create a new copy of the file in the destination and to move it into the right place when the transfer is completed.
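+
+You can watch this default in action: during a transfer, _rsync_ writes to a hidden temporary file in the destination directory and renames it into place at the end. A sketch of what a listing might show mid-update (the random suffix here is made up):
+
+```
+$ ls -a /destination
+.  ..  .vmdk_file.a8X3Zq  vmdk_file
+```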
+
+To change this default behaviour of _rsync_, you have to set the following flags and then rsync will send only the deltas:
+
+```
+--inplace update destination files in-place
+--partial keep partially transferred files
+--append append data onto shorter files
+--progress show progress during transfer
+```
+
+So the full command that would do exactly what the friend wanted is:
+
+```
+rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination
+```
+
+Note that the sparse flag _-S_ had to be removed, for two reasons. The first is that you can not use _\--sparse_ and _\--inplace_ together when sending a file over the wire (versions of rsync older than 3.1.3 reject that combination). And second, once you have sent a file over with _\--sparse_, you can't update it with _\--inplace_ anymore.
+
+So even though the friend ended up copying 100GB over the wire, that only had to happen once. All the following updates copied only the differences, making the process extremely efficient.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/
+
+作者:[Daniel Leite de Abreu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/dabreu/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/rsync-816x345.jpg
diff --git a/sources/tech/20190916 How to freeze and lock your Linux system (and why you would want to).md b/sources/tech/20190916 How to freeze and lock your Linux system (and why you would want to).md
new file mode 100644
index 0000000000..886974a8c0
--- /dev/null
+++ b/sources/tech/20190916 How to freeze and lock your Linux system (and why you would want to).md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to freeze and lock your Linux system (and why you would want to))
+[#]: via: (https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+How to freeze and lock your Linux system (and why you would want to)
+======
+What it means to freeze a terminal window and lock a screen -- and how to manage these activities on your Linux system.
+Sandra Henry-Stocker
+
+How you freeze and "thaw out" a screen on a Linux system depends a lot on what you mean by these terms. Sometimes “freezing a screen” might mean freezing a terminal window so that activity within that window comes to a halt. Sometimes it means locking your screen so that no one can walk up to your system when you're fetching another cup of coffee and type commands on your behalf.
+
+In this post, we'll examine how you can use and control these actions.
+
+**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][1] ]**
+
+### How to freeze a terminal window on Linux
+
+You can freeze a terminal window on a Linux system by typing **Ctrl+S** (hold control key and press "s"). Think of the "s" as meaning "start the freeze". If you continue typing commands after doing this, you won't see the commands you type or the output you would expect to see. In fact, the commands will pile up in a queue and will be run only when you reverse the freeze by typing **Ctrl+Q**. Think of this as "quit the freeze".
+
+One easy way to view how this works is to use the date command and then type **Ctrl+S**. Then type the date command again and wait a few minutes before typing **Ctrl+Q**. You'll see something like this:
+
+```
+$ date
+Mon 16 Sep 2019 06:47:34 PM EDT
+$ date
+Mon 16 Sep 2019 06:49:49 PM EDT
+```
+
+The gap between the two times shown will indicate that the second date command wasn't run until you unfroze your window.
+
+Terminal windows can be frozen and unfrozen whether you're sitting at the computer screen or running remotely using a tool such as PuTTY.
+
+And here's a little trick that can come in handy. If a terminal window appears to be unresponsive, one possibility is that you or someone else inadvertently typed **Ctrl+S**. In that case, entering **Ctrl+Q**, just in case it resolves the problem, is not a bad idea.
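+
+What's happening under the hood is XON/XOFF software flow control, handled by the terminal driver. If you'd rather **Ctrl+S** never freeze your terminal at all, you can switch flow control off for the current session with the standard **stty** utility:
+
+```
+$ stty -ixon    # disable Ctrl+S/Ctrl+Q flow control in this terminal
+$ stty ixon     # turn it back on
+```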
+
+### How to lock your screen
+
+To lock your screen before you leave your desk, either **Ctrl+Alt+L** or **Super+L** (i.e., holding down the Windows key and pressing L) should work. Once your screen is locked, you will have to enter your password to log back in.
+
+### Automatic screen locking on Linux systems
+
+While best practice suggests that you lock your screen whenever you are about to leave your desk, Linux systems usually lock automatically after a period of no activity. The timing for "blanking" a screen (making it go dark) and actually locking the screen (requiring a login to use it again) depends on settings that you can adjust to your personal preferences.
+
+To change how long it takes for your screen to go dark when using the GNOME screensaver, open your settings window and select **Power** and then **Blank screen**. You can choose times between 1 and 15 minutes, or never. To select how long after blanking the screen locks, go to settings, select **Privacy**, and then **Screen Lock**. Settings should include 1, 2, 3, 5, and 30 minutes or one hour.
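+
+If you prefer to script these preferences, GNOME also exposes them through the **gsettings** command. This is a sketch that assumes a recent GNOME desktop; schema and key names can vary between versions:
+
+```
+$ gsettings set org.gnome.desktop.session idle-delay 300         # blank after 5 minutes
+$ gsettings set org.gnome.desktop.screensaver lock-enabled true
+$ gsettings set org.gnome.desktop.screensaver lock-delay 60      # lock 1 minute after blanking
+```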
+
+### How to lock your screen from the command line
+
+If you are using the GNOME screensaver, you can also lock the screen from the command line using this command:
+
+```
+gnome-screensaver-command -l
+```
+
+That's a lowercase L for "lock".
+
+### How to check your lockscreen state
+
+You can also use the **gnome-screensaver-command** command to check whether your screen is locked. With the **\--query** option, the command tells you whether the screen is currently locked (i.e., active). With the **\--time** option, it tells you how long the lock has been in effect. Here's a sample script:
+
+```
+#!/bin/bash
+
+gnome-screensaver-command --query
+gnome-screensaver-command --time
+```
+
+Running the script will show output like this:
+
+```
+$ ./check_lockscreen
+The screensaver is active
+The screensaver has been active for 1013 seconds.
+```
+
+### Wrap-up
+
+Freezing your terminal window is easy if you remember the proper control sequences. For screen locking, how well it works depends on the controls you put in place for yourself or whether you're comfortable working with the defaults.
+
+**[ Also see: [Invaluable tips and tricks for troubleshooting Linux][2] ]**
+
+Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[2]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190916 How to start developing with .NET.md b/sources/tech/20190916 How to start developing with .NET.md
new file mode 100644
index 0000000000..8dae5addd0
--- /dev/null
+++ b/sources/tech/20190916 How to start developing with .NET.md
@@ -0,0 +1,170 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to start developing with .NET)
+[#]: via: (https://opensource.com/article/19/9/getting-started-net)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+How to start developing with .NET
+======
+Learn the basics to get up and running with the .NET development
+platform.
+![Coding on a computer][1]
+
+The .NET framework was released in 2000 by Microsoft. An open source implementation of the platform, [Mono][2], was the center of controversy in the early 2000s because Microsoft held several patents for .NET technology and could have used those patents to end Mono implementations. Fortunately, in 2014, Microsoft declared that the .NET development platform would be open source under the MIT license from then on, and in 2016, Microsoft purchased Xamarin, the company that produces Mono.
+
+Both .NET and Mono have grown into cross-platform programming environments for C#, F#, GTK#, Visual Basic, Vala, and more. Applications created with .NET and Mono have been delivered to Linux, BSD, Windows, MacOS, Android, and even some gaming consoles. You can use either .NET or Mono to develop .NET applications. Both are open source, and both have active and vibrant communities. This article focuses on getting started with Microsoft's implementation of the .NET environment.
+
+### How to install .NET
+
+The .NET downloads are divided into packages: one containing just a .NET runtime, and the other a .NET software development kit (SDK) containing the .NET Core and runtime. Depending on your platform, there may be several variants of even these packages, accounting for architecture and OS version. To start developing with .NET, you must [install the SDK][3]. This gives you the [dotnet][4] terminal or PowerShell command, which you can use to create and build projects.
+
+#### Linux
+
+To install .NET on Linux, first, add the Microsoft Linux software repository to your computer.
+
+On Fedora:
+
+
+```
+$ sudo rpm --import
+$ sudo wget -q -O /etc/yum.repos.d/microsoft-prod.repo
+```
+
+On Ubuntu:
+
+
+```
+$ wget -q -O packages-microsoft-prod.deb
+$ sudo dpkg -i packages-microsoft-prod.deb
+```
+
+Next, install the SDK using your package manager, replacing **<X.Y>** with the current version of the .NET release:
+
+On Fedora:
+
+
+```
+$ sudo dnf install dotnet-sdk-<X.Y>
+```
+
+On Ubuntu:
+
+
+```
+$ sudo apt install apt-transport-https
+$ sudo apt update
+$ sudo apt install dotnet-sdk-<X.Y>
+```
+
+Once all the packages are downloaded and installed, confirm the installation by opening a terminal and typing:
+
+
+```
+$ dotnet --version
+X.Y.Z
+```
+
+#### Windows
+
+If you're on Microsoft Windows, you probably already have the .NET runtime installed. However, to develop .NET applications, you must also install the .NET Core SDK.
+
+First, [download the installer][3]. To keep your options open, download .NET Core for cross-platform development (the .NET Framework is Windows-only). Once the **.exe** file is downloaded, double-click it to launch the installation wizard, and click through the two-step install process: accept the license and allow the install to proceed.
+
+![Installing dotnet on Windows][5]
+
+Afterward, open PowerShell from your Application menu in the lower-left corner. In PowerShell, type a test command:
+
+
+```
+PS C:\Users\osdc> dotnet
+```
+
+If you see information about a dotnet installation, .NET has been installed correctly.
+
+#### MacOS
+
+If you're on an Apple Mac, [download the Mac installer][3], which comes in the form of a **.pkg** package. Download and double-click on the **.pkg** file and click through the installer. You may need to grant permission for the installer since the package is not from the App Store.
+
+Once all packages are downloaded and installed, confirm the installation by opening a terminal and typing:
+
+
+```
+$ dotnet --version
+X.Y.Z
+```
+
+### Hello .NET
+
+A sample "hello world" application written in .NET is provided with the **dotnet** command. Or, more accurately, the command provides the sample application.
+
+First, create a project directory and the required code infrastructure using the **dotnet** command with the **new** and **console** options to create a new console-only application. Use the **-o** option to specify a project name:
+
+
+```
+$ dotnet new console -o hellodotnet
+```
+
+This creates a directory called **hellodotnet** in your current directory. Change into your project directory and have a look around:
+
+
+```
+$ cd hellodotnet
+$ dir
+hellodotnet.csproj obj Program.cs
+```
+
+The file **Program.cs** is an empty C# file containing a simple Hello World application. Open it in a text editor to view it. Microsoft's Visual Studio Code is a cross-platform, open source application built with dotnet in mind, and while it's not a bad text editor, it also collects a lot of data about its user (and grants itself permission to do so in the license applied to its binary distribution). If you want to try out Visual Studio Code, consider using [VSCodium][6], a distribution of Visual Studio Code that's built from the MIT-licensed source code _without_ the telemetry (read the [documentation][7] for options to disable other forms of tracking in even this build). Alternatively, just use your existing favorite text editor or IDE.
+
+The boilerplate code in a new console application is:
+
+
+```
+using System;
+
+namespace hellodotnet
+{
+ class Program
+ {
+ static void Main(string[] args)
+ {
+ Console.WriteLine("Hello World!");
+ }
+ }
+}
+```
+
+To run the program, use the **dotnet run** command:
+
+
+```
+$ dotnet run
+Hello World!
+```
+
+That's the basic workflow of .NET and the **dotnet** command. The full [C# guide for .NET][8] is available, and everything there is relevant to .NET. For examples of .NET in action, follow [Alex Bunardzic][9]'s mutation testing articles here on opensource.com.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/getting-started-net
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
+[2]: https://www.monodevelop.com/
+[3]: https://dotnet.microsoft.com/download
+[4]: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet?tabs=netcore21
+[5]: https://opensource.com/sites/default/files/uploads/dotnet-windows-install.jpg (Installing dotnet on Windows)
+[6]: https://vscodium.com/
+[7]: https://github.com/VSCodium/vscodium/blob/master/DOCS.md
+[8]: https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/
+[9]: https://opensource.com/users/alex-bunardzic (View user profile.)
diff --git a/sources/tech/20190916 Linux commands to display your hardware information.md b/sources/tech/20190916 Linux commands to display your hardware information.md
new file mode 100644
index 0000000000..f0a13905e5
--- /dev/null
+++ b/sources/tech/20190916 Linux commands to display your hardware information.md
@@ -0,0 +1,417 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux commands to display your hardware information)
+[#]: via: (https://opensource.com/article/19/9/linux-commands-hardware-information)
+[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
+
+Linux commands to display your hardware information
+======
+Get the details on what's inside your computer from the command line.
+![computer screen ][1]
+
+There are many reasons you might need to find out details about your computer hardware. For example, if you need help fixing something and post a plea in an online forum, people will immediately ask you for specifics about your computer. Or, if you want to upgrade your computer, you'll need to know what you have and what you can have. You need to interrogate your computer to discover its specifications.
+
+The easiest way to do that is with one of the standard Linux GUI programs:
+
+ * [i-nex][2] collects hardware information and displays it in a manner similar to the popular [CPU-Z][3] under Windows.
+ * [HardInfo][4] displays hardware specifics and even includes a set of eight popular benchmark programs you can run to gauge your system's performance.
+ * [KInfoCenter][5] and [Lshw][6] also display hardware details and are available in many software repositories.
+
+
+
+Alternatively, you could open up the box and read the labels on the disks, memory, and other devices. Or you could enter the boot-time panels—the so-called UEFI or BIOS panels. Just hit [the proper program function key][7] during the boot process to access them. These two methods give you hardware details but omit software information.
+
+Or, you could issue a Linux line command. Wait a minute… that sounds difficult. Why would you do this?
+
+Sometimes it's easy to find a specific bit of information through a well-targeted line command. Perhaps you don't have a GUI program available or don't want to install one.
+
+Probably the main reason to use line commands is for writing scripts. Whether you employ the Linux shell or another programming language, scripting typically requires coding line commands.
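+
+For example, a short script (a sketch; output formats can vary between distributions) might combine two of the commands covered below into a one-line-per-item report:
+
+
+```
+#!/bin/bash
+# Print a tiny hardware summary by parsing two standard commands.
+echo "CPU:    $(lscpu | grep -i 'model name' | head -n 1)"
+echo "Memory: $(free -m | awk '/^Mem:/ {print $2 " MB total, " $4 " MB free"}')"
+```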
+
+Many line commands for detecting hardware must be issued under root authority. So either switch to the root user ID, or issue the command under your regular user ID preceded by **sudo**:
+
+
+```
+sudo <command>
+```
+
+and respond to the password prompt.
+
+This article introduces many of the most useful line commands for system discovery. The quick reference chart at the end summarizes them.
+
+### Hardware overview
+
+There are several line commands that will give you a comprehensive overview of your computer's hardware.
+
+The **inxi** command lists details about your system, CPU, graphics, audio, networking, drives, partitions, sensors, and more. Forum participants often ask for its output when they're trying to help others solve problems. It's a standard diagnostic for problem-solving:
+
+
+```
+inxi -Fxz
+```
+
+The **-F** flag means you'll get full output, **x** adds details, and **z** masks out personally identifying information like MAC and IP addresses.
+
+The **hwinfo** and **lshw** commands display much of the same information in different formats:
+
+
+```
+hwinfo --short
+```
+
+or
+
+
+```
+lshw -short
+```
+
+The long forms of these two commands spew out exhaustive—but hard to read—output:
+
+
+```
+hwinfo
+```
+
+or
+
+
+```
+lshw
+```
+
+### CPU details
+
+You can learn everything about your CPU through line commands. View CPU details by issuing either the **lscpu** command or its close relative **lshw**:
+
+
+```
+lscpu
+```
+
+or
+
+
+```
+lshw -C cpu
+```
+
+In both cases, the last few lines of output list all the CPU's capabilities. Here you can find out whether your processor supports specific features.
+
+With all these commands, you can reduce verbiage and narrow any answer down to a single detail by parsing the command output with the **grep** command. For example, to view only the CPU make and model:
+
+
+```
+lshw -C cpu | grep -i product
+```
+
+To view just the CPU's speed in megahertz:
+
+
+```
+lscpu | grep -i mhz
+```
+
+or its [BogoMips][8] power rating:
+
+
+```
+lscpu | grep -i bogo
+```
+
+The **-i** flag on the **grep** command simply ensures your search ignores whether the output it searches is upper or lower case.
+
+### Memory
+
+Linux line commands enable you to gather all possible details about your computer's memory. You can even determine whether you can add extra memory to the computer without opening up the box.
+
+To list each memory stick and its capacity, issue the **dmidecode** command:
+
+
+```
+dmidecode -t memory | grep -i size
+```
+
+For more specifics on system memory, including type, size, speed, and voltage of each RAM stick, try:
+
+
+```
+lshw -short -C memory
+```
+
+One thing you'll surely want to know is the maximum memory you can install on your computer:
+
+
+```
+dmidecode -t memory | grep -i max
+```
+
+Now find out whether there are any open slots to insert additional memory sticks. You can do this without opening your computer by issuing this command:
+
+
+```
+lshw -short -C memory | grep -i empty
+```
+
+A null response means all the memory slots are already in use.
+
+Determining how much video memory you have requires a pair of commands. First, list all devices with the **lspci** command and limit the output displayed to the video device you're interested in:
+
+
+```
+lspci | grep -i vga
+```
+
+The output line that identifies the video controller will typically look something like this:
+
+
+```
+00:02.0 VGA compatible controller: Intel Corporation 82Q35 Express Integrated Graphics Controller (rev 02)
+```
+
+Now reissue the **lspci** command, referencing the video device number as the selected device:
+
+
+```
+lspci -v -s 00:02.0
+```
+
+The output line identified as _prefetchable_ is the amount of video RAM on your system:
+
+
+```
+...
+Memory at f0100000 (32-bit, non-prefetchable) [size=512K]
+I/O ports at 1230 [size=8]
+Memory at e0000000 (32-bit, prefetchable) [size=256M]
+Memory at f0000000 (32-bit, non-prefetchable) [size=1M]
+...
+```
+
+Finally, to show current memory use in megabytes, issue:
+
+
+```
+free -m
+```
+
+This tells how much memory is free, how much is in use, the size of the swap area, and whether it's being used. For example, the output might look like this:
+
+
+```
+ total used free shared buff/cache available
+Mem: 11891 1326 8877 212 1687 10077
+Swap: 1999 0 1999
+```
+
+The **top** command gives you more detail on memory use. It shows current overall memory and CPU use and also breaks it down by process ID, user ID, and the commands being run. It displays full-screen text output:
+
+
+```
+top
+```
+
+### Disks, filesystems, and devices
+
+You can easily determine whatever you wish to know about disks, partitions, filesystems, and other devices.
+
+To display a single line describing each disk device:
+
+
+```
+lshw -short -C disk
+```
+
+Get details on any specific SATA disk, such as its model and serial numbers, supported modes, sector count, and more with:
+
+
+```
+hdparm -i /dev/sda
+```
+
+Of course, you should replace **sda** with **sdb** or another device mnemonic if necessary.
+
+To list all disks with all their defined partitions, along with the size of each, issue:
+
+
+```
+lsblk
+```
+
+For more detail, including the number of sectors, size, filesystem ID and type, and partition starting and ending sectors:
+
+
+```
+fdisk -l
+```
+
+To start up Linux, you need to identify mountable partitions to the [GRUB][9] bootloader. You can find this information with the **blkid** command. It lists each partition's unique identifier (UUID) and its filesystem type (e.g., ext3 or ext4):
+
+
+```
+blkid
+```
+
+To list the mounted filesystems, their mount points, and the space used and available for each (in megabytes):
+
+
+```
+df -m
+```
+
+Finally, you can list details for all USB and PCI buses and devices with these commands:
+
+
+```
+lsusb
+```
+
+or
+
+
+```
+lspci
+```
+
+### Network
+
+Linux offers tons of networking line commands. Here are just a few.
+
+To see hardware details about your network card, issue:
+
+
+```
+lshw -C network
+```
+
+Traditionally, the command to show network interfaces was **ifconfig**:
+
+
+```
+ifconfig -a
+```
+
+But many people now use:
+
+
+```
+ip link show
+```
+
+or
+
+
+```
+netstat -i
+```
+
+In reading the output, it helps to know common network abbreviations:
+
+**Abbreviation** | **Meaning**
+---|---
+**lo** | Loopback interface
+**eth0** or **enp*** | Ethernet interface
+**wlan0** | Wireless interface
+**ppp0** | Point-to-Point Protocol interface (used by a dial-up modem, PPTP VPN connection, or USB modem)
+**vboxnet0** or **vmnet*** | Virtual machine interface
+
+The asterisks in this table are wildcard characters, serving as placeholders for whatever series of characters appears from system to system.
+
+To show your default gateway and routing tables, issue either of these commands:
+
+
+```
+ip route | column -t
+```
+
+or
+
+
+```
+netstat -r
+```
+
+### Software
+
+Let's conclude with two commands that display low-level software details. For example, what if you want to know whether you have the latest firmware installed? This command shows the UEFI or BIOS date and version:
+
+
+```
+dmidecode -t bios
+```
+
+What is the kernel version, and is it 64-bit? And what is the network hostname? To find out, issue:
+
+
+```
+uname -a
+```
+
+### Quick reference chart
+
+This chart summarizes all the commands covered in this article:
+
+| Task | Command(s) |
+|---|---|
+| Display info about all hardware | **inxi -Fxz** _\--or--_ **hwinfo --short** _\--or--_ **lshw -short** |
+| Display all CPU info | **lscpu** _\--or--_ **lshw -C cpu** |
+| Show CPU features (e.g., PAE, SSE2) | **lshw -C cpu \| grep -i capabilities** |
+| Report whether the CPU is 32- or 64-bit | **lshw -C cpu \| grep -i width** |
+| Show current memory size and configuration | **dmidecode -t memory \| grep -i size** _\--or--_ **lshw -short -C memory** |
+| Show maximum memory for the hardware | **dmidecode -t memory \| grep -i max** |
+| Determine whether memory slots are available | **lshw -short -C memory \| grep -i empty** (a null answer means no slots available) |
+| Determine the amount of video memory | **lspci \| grep -i vga**, then reissue with the device number, for example **lspci -v -s 00:02.0**; the VRAM is the _prefetchable_ value |
+| Show current memory use | **free -m** _\--or--_ **top** |
+| List the disk drives | **lshw -short -C disk** |
+| Show detailed information about a specific disk drive | **hdparm -i /dev/sda** (replace **sda** if necessary) |
+| List information about disks and partitions | **lsblk** (simple) _\--or--_ **fdisk -l** (detailed) |
+| List partition IDs (UUIDs) | **blkid** |
+| List mounted filesystems, their mount points, and megabytes used and available for each | **df -m** |
+| List USB devices | **lsusb** |
+| List PCI devices | **lspci** |
+| Show network card details | **lshw -C network** |
+| Show network interfaces | **ifconfig -a** _\--or--_ **ip link show** _\--or--_ **netstat -i** |
+| Display routing tables | **ip route \| column -t** _\--or--_ **netstat -r** |
+| Display UEFI/BIOS info | **dmidecode -t bios** |
+| Show kernel version, network hostname, more | **uname -a** |
+
+Do you have a favorite command that I overlooked? Please add a comment and share it.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/linux-commands-hardware-information
+
+作者:[Howard Fosdick][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/howtech
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK (computer screen )
+[2]: http://sourceforge.net/projects/i-nex/
+[3]: https://www.cpuid.com/softwares/cpu-z.html
+[4]: http://sourceforge.net/projects/hardinfo.berlios/
+[5]: https://userbase.kde.org/KInfoCenter
+[6]: http://www.binarytides.com/linux-lshw-command/
+[7]: http://www.disk-image.com/faq-bootmenu.htm
+[8]: https://en.wikipedia.org/wiki/BogoMips
+[9]: https://www.dedoimedo.com/computers/grub.html
diff --git a/sources/tech/20190916 The Emacs Series Exploring ts.el.md b/sources/tech/20190916 The Emacs Series Exploring ts.el.md
new file mode 100644
index 0000000000..06e724d4ab
--- /dev/null
+++ b/sources/tech/20190916 The Emacs Series Exploring ts.el.md
@@ -0,0 +1,366 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Emacs Series Exploring ts.el)
+[#]: via: (https://opensourceforu.com/2019/09/the-emacs-series-exploring-ts-el/)
+[#]: author: (Shakthi Kannan https://opensourceforu.com/author/shakthi-kannan/)
+
+The Emacs Series Exploring ts.el
+======
+
+[![][1]][2]
+
+_In this article, the author reviews the ts.el date and time library for Emacs. Written by Adam Porter, ts.el is still in the development phase and has been released under the GNU General Public License v3.0._
+
+The ts.el package uses intuitive names for date and time functions. It internally uses UNIX timestamps and depends on both the ‘dash’ and ‘s’ Emacs libraries. The parts of the date are computed lazily and also cached for performance. The source code is available at __. In this article, we will explore the API functions available from the ts.el library.
+
+**Installation**
+The package does not have a tagged release yet; hence, you should download it from the repository and add it to your Emacs load path to use it. You should also have the ‘dash’ and ‘s’ libraries installed and loaded in your Emacs environment. You can then load the library using the following command:
+
+```
+(require 'ts)
+```
+
+**Usage**
+Let us explore the various functions available to retrieve parts of the date from the ts.el library. When the examples were executed, the date was ‘Friday July 5, 2019’. The ts-dow function can be used to obtain the day of the week, as shown below:
+
+```
+(ts-dow (ts-now))
+5
+```
+
+_ts-now_ is a Lisp construct that returns a ‘ts’ struct set to the current time. It is defined in ts.el as follows:
+
+```
+(defsubst ts-now ()
+  "Return `ts' struct set to now."
+  (make-ts :unix (float-time)))
+```
+
+The day of the week starts from Monday (1), and hence Friday has the value of 5. An abbreviated form of the day can be fetched using the _ts-day-abbr_ function. In the following example, ‘Friday’ is shortened to ‘Fri’.
+
+```
+(ts-day-abbr (ts-now))
+"Fri"
+```
+
+The day of the week in full form can be obtained using the _ts-day-name_ function, as shown below:
+
+```
+(ts-day-name (ts-now))
+“Friday”
+```
+
+The twelve months from January to December are numbered from 1 to 12 respectively. Hence, for the month of July, the index number is 7. This numeric value for the month can be retrieved using the ‘ts-month’ API. For example:
+
+```
+(ts-month (ts-now))
+7
+```
+
+If you want a three-character abbreviation for the month’s name, you can use the ts-month-abbr function as shown below:
+
+```
+(ts-month-abbr (ts-now))
+“Jul”
+```
+
+The _ts-month-name_ function can be used to obtain the full name of the month. For example:
+
+```
+(ts-month-name (ts-now))
+“July”
+```
+
+While _ts-dow_ returns the day of the week, the _ts-day_ function returns the day of the month. In our example, the date was July 5, so it returns 5:
+
+```
+(ts-day (ts-now))
+5
+```
+
+The _ts-year_ API returns the year. In our example, it is ‘2019’ as shown below:
+
+```
+(ts-year (ts-now))
+2019
+```
+
+The hour, minute, and second can be retrieved using the _ts-hour_, _ts-minute_ and _ts-second_ functions, respectively. Examples of these functions are given below:
+
+```
+(ts-hour (ts-now))
+18
+
+(ts-minute (ts-now))
+19
+
+(ts-second (ts-now))
+5
+```
+
+The UNIX timestamps are in UTC, by default. The _ts-tz-offset_ function returns the offset from UTC. Indian Standard Time (IST) is five and a half hours ahead of UTC, and hence this function returns ‘+0530’ as shown below:
+
+```
+(ts-tz-offset (ts-now))
+"+0530"
+```
+
+The _ts-tz-abbr_ API returns an abbreviated form of the time zone. In our case, ‘IST’ is returned for the Indian Standard Time.
+
+```
+(ts-tz-abbr (ts-now))
+"IST"
+```
+
+The _ts-adjustf_ function applies the time adjustments passed to the timestamp and the _ts-format_ function formats the timestamp as a string. A couple of examples are given below:
+
+```
+(let ((ts (ts-now)))
+  (ts-adjustf ts 'day 1)
+  (ts-format nil ts))
+"2019-07-06 18:23:24 +0530"
+
+(let ((ts (ts-now)))
+  (ts-adjustf ts 'year 1 'month 3 'day 5)
+  (ts-format nil ts))
+"2020-10-10 18:24:07 +0530"
+```
+
+You can use the _ts-dec_ function to decrement the timestamp. For example:
+
+```
+(ts-day-name (ts-dec 'day 1 (ts-now)))
+"Thursday"
+```
+
+The threading macro syntax can also be used with the ts-dec function as shown below:
+
+```
+(->> (ts-now) (ts-dec 'day 2) ts-day-name)
+"Wednesday"
+```
+
+The UNIX epoch is the number of seconds that have elapsed since January 1, 1970 (midnight UTC/GMT). The ts-unix function returns an epoch UNIX timestamp as illustrated below:
+
+```
+(ts-unix (ts-adjust 'day -2 (ts-now)))
+1562158551.0 ;; Wednesday, July 3, 2019 6:25:51 PM GMT+05:30
+```
+
+An hour has 3600 seconds and a day has 86400 seconds. You can compare epoch timestamps as shown in the following example:
+
+```
+(/ (- (ts-unix (ts-now))
+      (ts-unix (ts-adjust 'day -4 (ts-now))))
+   86400)
+4
+```
+
+The _ts-difference_ function returns the difference between two timestamps, while the _ts-human-duration_ function returns the property list (_plist_) values of years, days, hours, minutes and seconds. For example:
+
+```
+(ts-human-duration
+ (ts-difference (ts-now)
+                (ts-dec 'day 3 (ts-now))))
+(:years 0 :days 3 :hours 0 :minutes 0 :seconds 0)
+```
+
+A number of aliases are available for the hour, minute, second, year, month and day format string constructors. A few examples are given below:
+
+```
+(ts-hour (ts-now))
+18
+(ts-H (ts-now))
+18
+
+
+(ts-minute (ts-now))
+46
+(ts-min (ts-now))
+46
+(ts-M (ts-now))
+46
+
+(ts-second (ts-now))
+16
+(ts-sec (ts-now))
+16
+(ts-S (ts-now))
+16
+
+(ts-year (ts-now))
+2019
+(ts-Y (ts-now))
+2019
+
+(ts-month (ts-now))
+7
+(ts-m (ts-now))
+7
+
+(ts-day (ts-now))
+5
+(ts-d (ts-now))
+5
+```
+
+You can parse a string into a timestamp object using the ts-parse function. For example:
+
+```
+(ts-format nil (ts-parse "Fri Dec 6 2019 18:48:00"))
+"2019-12-06 18:48:00 +0530"
+```
+
+You can also format the difference between two timestamps in a human readable format as shown in the following example:
+
+```
+(ts-human-format-duration
+ (ts-difference (ts-now)
+                (ts-adjust 'day -1 'hour -3 'minute -2 'second -4 (ts-now))))
+"1 days, 3 hours, 2 minutes, 4 seconds"
+```
+
+The timestamp comparator operations are also defined in ts.el. The ts< function compares if one epoch UNIX timestamp is less than the other. Its definition is as follows:
+
+```
+(defun ts< (a b)
+  "Return non-nil if timestamp A is less than timestamp B."
+  (< (ts-unix a) (ts-unix b)))
+```
+
+In the example given below, the current timestamp is not less than the previous day and hence it returns nil.
+
+```
+(ts< (ts-now) (ts-adjust 'day -1 (ts-now)))
+nil
+```
+
+Similarly, we have other comparator functions like ts>, ts=, ts>= and ts<=. A few examples of these function use cases are given below:
+
+```
+(ts> (ts-now) (ts-adjust 'day -1 (ts-now)))
+t
+
+(ts= (ts-now) (ts-now))
+nil
+
+(ts>= (ts-now) (ts-adjust 'day -1 (ts-now)))
+t
+
+(ts<= (ts-now) (ts-adjust 'day -2 (ts-now)))
+nil
+```
+
+**Benchmarking**
+A few performance tests can be conducted to compare the Emacs internal time values versus the UNIX timestamps. The benchmarking tests can be executed by including the bench-multi macro and bench-multi-process-results function available from __ in your Emacs environment.
+You will also need to load the dash-functional library to use the -on function.
+
+```
+(require 'dash-functional)
+```
+
+The following tests have been executed on an Intel(R) Core(TM) i7-3740QM CPU at 2.70GHz with eight cores, 16GB RAM and running Ubuntu 18.04 LTS.
+
+**Formatting**
+The first benchmarking exercise is to compare the formatting of the UNIX timestamp and the Emacs internal time. The Emacs Lisp code to run the test is shown below:
+
+```
+(let ((format "%Y-%m-%d %H:%M:%S"))
+  (bench-multi :times 100000
+    :forms (("Unix timestamp" (format-time-string format 1544311232))
+            ("Internal time" (format-time-string format '(23564 20962 864324 108000))))))
+```
+
+The output appears as an s-expression:
+
+```
+(("Form" "x faster than next" "Total runtime" "# of GCs" "Total GC runtime")
+ hline
+ ("Internal time" "1.11" "2.626460" 13 "0.838733")
+ ("Unix timestamp" "slowest" "2.921408" 13 "0.920814"))
+```
+
+The abbreviation ‘GC’ refers to garbage collection. A tabular representation of the above results is given below:
+
+[![][3]][4]
+
+We observe that formatting the internal time is slightly faster.
+
+**Getting the current time**
+The functions to obtain the current time can be compared using the following test:
+
+```
+(bench-multi :times 100000
+  :forms (("Unix timestamp" (float-time))
+          ("Internal time" (current-time))))
+```
+
+The results are shown below:
+
+[![][5]][6]
+
+We observe that using the Unix timestamp is faster.
+
+**Parsing**
+The third benchmarking exercise is to compare parsing functions on a date timestamp string. The corresponding test code is given below:
+
+```
+(let* ((s "Wed 10 Jul 2019"))
+  (bench-multi :times 100000
+    :forms (("ts-parse" (ts-parse s))
+            ("ts-parse ts-unix" (ts-unix (ts-parse s))))))
+```
+
+The _ts-parse_ function alone is slightly faster than the combination of _ts-parse_ and _ts-unix_, as seen in the results:
+
+[![][7]][8]
+
+**A new timestamp versus blanking fields**
+The last performance comparison is between creating a new timestamp and blanking the fields. The relevant test code is as follows:
+
+```
+(let* ((a (ts-now)))
+  (bench-multi :times 100000
+    :ensure-equal t
+    :forms (("New" (let ((ts (copy-ts a)))
+                     (setq ts (ts-fill ts))
+                     (make-ts :unix (ts-unix ts))))
+            ("Blanking" (let ((ts (copy-ts a)))
+                          (setq ts (ts-fill ts))
+                          (ts-reset ts))))))
+```
+
+The output of the benchmarking exercise is given below:
+
+[![][9]][10]
+
+We observe that creating a new timestamp is slightly faster than blanking the fields.
+You are encouraged to read the ts.el README and notes.org from the GitHub repository __ for more information.
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/09/the-emacs-series-exploring-ts-el/
+
+作者:[Shakthi Kannan][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/shakthi-kannan/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/GPL-emacs-1.jpg?resize=696%2C435&ssl=1 (GPL emacs)
+[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/GPL-emacs-1.jpg?fit=800%2C500&ssl=1
+[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1-1.png?resize=350%2C151&ssl=1
+[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1-1.png?ssl=1
+[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2-1.png?resize=350%2C191&ssl=1
+[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2-1.png?ssl=1
+[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/3.png?resize=350%2C144&ssl=1
+[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/3.png?ssl=1
+[9]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/4.png?resize=350%2C149&ssl=1
+[10]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/4.png?ssl=1
diff --git a/sources/tech/20190917 Getting started with Zsh.md b/sources/tech/20190917 Getting started with Zsh.md
new file mode 100644
index 0000000000..d48391eab7
--- /dev/null
+++ b/sources/tech/20190917 Getting started with Zsh.md
@@ -0,0 +1,232 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Getting started with Zsh)
+[#]: via: (https://opensource.com/article/19/9/getting-started-zsh)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+Getting started with Zsh
+======
+Improve your shell game by upgrading from Bash to Z-shell.
+![bash logo on green background][1]
+
+Z-shell (or Zsh) is an interactive Bourne-like POSIX shell known for its abundance of innovative features. Z-Shell users often cite its many conveniences and credit it for increased efficiency and extensive customization.
+
+If you're relatively new to Linux or Unix but experienced enough to have opened a terminal and run a few commands, you have probably used the Bash shell. Bash is arguably the definitive free software shell, partly because of its progressive features and partly because it ships as the default shell on most of the popular Linux and Unix operating systems. However, the more you use a shell, the more you start to find small things that might be better for the way you want to use it. If there's one thing open source is famous for, it's _choice_. Many people choose to "graduate" from Bash to Z.
+
+### What is Zsh?
+
+A shell is just an interface to your operating system. An interactive shell allows you to type in commands through what is called _standard input_, or **stdin**, and get output through _standard output_ and _standard error_, or **stdout** and **stderr**. There are many shells, including Bash, Csh, Ksh, Tcsh, Dash, and Zsh. Each has features based on what its programmers thought would be best for a shell. Whether those features are good or bad is up to you, the end user.
+
+Zsh has features like interactive Tab completion, automated file searching, regex integration, advanced shorthand for defining command scope, and a rich theme engine. These features are included in an otherwise familiar Bourne-like shell environment, meaning that if you already know and love Bash, you'll find Zsh familiar—except with more features. You might think of it as a kind of Bash++.
+
+### Installing Zsh
+
+Install Zsh with your package manager.
+
+On Fedora, RHEL, and CentOS:
+
+
+```
+$ sudo dnf install zsh
+```
+
+On Ubuntu and Debian:
+
+
+```
+$ sudo apt install zsh
+```
+
+On MacOS, you can install it using MacPorts:
+
+
+```
+$ sudo port install zsh
+```
+
+Or with Homebrew:
+
+
+```
+$ brew install zsh
+```
+
+It's possible to run Zsh on Windows, but only on top of a Linux or Linux-like layer such as [Windows Subsystem for Linux][2] (WSL) or [Cygwin][3]. That installation is out of scope for this article, so refer to Microsoft documentation.
+
+### Setting up Zsh
+
+Zsh is not a terminal emulator; it's a shell that runs inside a terminal emulator. So, to launch Zsh, you must first launch a terminal window such as GNOME Terminal, Konsole, Terminal, iTerm2, rxvt, or another terminal of your preference. Then you can launch Zsh by typing:
+
+
+```
+$ zsh
+```
+
+The first time you launch Zsh, you're asked to choose some configuration options. These can all be changed later, so press **1** to continue.
+
+
+```
+This is the Z Shell configuration function for new users, zsh-newuser-install.
+
+(q) Quit and do nothing.
+
+(0) Exit, creating the file ~/.zshrc
+
+(1) Continue to the main menu.
+```
+
+There are four categories of preferences, so just start at the top.
+
+ 1. The first category lets you choose how many commands are retained in your shell history file. By default, it's set to 1,000 lines.
+ 2. Zsh completion is one of its most exciting features. To keep things simple, consider activating it with its default options until you get used to how it works. Press **1** for default options, **2** to set options manually.
+ 3. Choose Emacs or Vi key bindings. Bash uses Emacs bindings, so you may be used to that already.
+ 4. Finally, you can learn about (and set or unset) some of Zsh's subtle features. For instance, you can stop using the **cd** command by allowing Zsh to initiate a directory change when you provide a non-executable path with no command. To activate one of these extra options, type the option number and enter **s** to _set_ it. Try turning on all options to get the full Zsh experience. You can unset them later by editing **~/.zshrc**.
+
+
+
+To complete configuration, press **0**.
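+
+All of these choices simply end up in **~/.zshrc**. A minimal hand-written equivalent might look like this (a sketch; the option names are standard Zsh, the values are examples):
+
+
+```
+# ~/.zshrc: a minimal equivalent of the new-user wizard's choices
+HISTFILE=~/.zsh_history             # where command history is stored
+HISTSIZE=1000                       # history lines kept in memory
+SAVEHIST=1000                       # history lines saved to the file
+autoload -Uz compinit && compinit   # turn on Tab completion
+bindkey -e                          # Emacs key bindings (use bindkey -v for Vi)
+setopt autocd                       # change directory without typing cd
+```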
+
+### Using Zsh
+
+At first, Zsh feels a lot like using Bash, which is unmistakably one of its many features. There are serious differences between, for instance, Bash and Tcsh, so being able to switch between Bash and Zsh is a convenience that makes Zsh easy to try and easy to use at home if you have to use Bash at work or on your server.
+
+#### Change directory with Zsh
+
+It's the small differences that make Zsh nice. First, try changing the directory to your Documents folder _without the **cd** command_. It seems too good to be true; but if you enter a directory path with no further instruction, Zsh changes to that directory:
+
+
+```
+% Documents
+% pwd
+/home/seth/Documents
+```
+
+That renders an error in Bash or any other normal shell. But Zsh is far from normal, and this is just the beginning.
+
+#### Search with Zsh
+
+When you want to find a file using a normal shell, you probably resort to the **find** or **locate** command. At the very least, you may have used **ls -R** for a recursive listing of a set of directories. Zsh has a built-in feature allowing it to find a file in the current or any other subdirectory.
+
+For instance, assume you have two files called **foo.txt**. One is located in your current directory, and the other is in a subdirectory called **foo**. In a Bash shell, you can list the file in the current directory with:
+
+
+```
+$ ls
+foo.txt
+```
+
+and you can list the other one by stating the subdirectory's path explicitly:
+
+
+```
+$ ls foo
+foo.txt
+```
+
+To list both, you must use the **-R** switch, maybe combined with **grep**:
+
+
+```
+$ ls -R | grep foo.txt
+foo.txt
+foo.txt
+```
+
+But in Zsh, you can use the **\*\*** shorthand:
+
+
+```
+% ls **/foo.txt
+foo.txt
+foo.txt
+```
+
+And you can use this syntax with any command, not just with **ls**. Imagine your increased efficiency when moving specific file types from one collection of directories to a single location, or concatenating snippets of text into a file, or grepping through logs.
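+
+For instance (hypothetical file names), the same recursive wildcard gathers every JPG from all subdirectories, concatenates scattered text snippets, and searches every log in the tree:
+
+
+```
+% mv **/*.JPG ~/Pictures/
+% cat **/notes-*.txt > all-notes.txt
+% grep -i "error" **/*.log
+```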
+
+### Using Zsh Tab completion
+
+Tab completion is a power-user feature in Bash and some other shells, and it took the Unix world by storm when it became commonplace. No longer did Unix users have to resort to wildcards when typing long and tedious paths (such as **/h*/s*h/V*/SCS/sc*/comp*/t*/a*/*9/04/LS*boat*v**, which is a lot easier than typing **/home/seth/Videos/SCS/scenes/composite/takes/approved/109/04/LS_boat-port-cargo-mover.mkv**). Instead, they could just press the Tab key when they entered enough of a unique string. For example, if you know there's only one directory starting with an **h** at the root level of your system, you might type **/h** and then hit Tab. It's fast, it's simple, it's efficient. It also confirms a path exists; if Tab doesn't complete anything, you know you're looking in the wrong place or you mistyped part of the path.
+
+However, if you have many directories that share five or more of the same first letters, Tab staunchly refuses to complete. While in most modern terminals it will (at least) reveal the files blocking it from guessing what you mean, it usually takes two Tab presses to reveal them; therefore, Tab completion often becomes such an interplay of letters and Tabs across your keyboard that you feel like you're training for a piano recital.
+
+Zsh solves this minor annoyance by cycling through possible completions. If you type **ls ~/D** and press Tab, Zsh completes your command with **Documents** first; if you press Tab again, it offers **Downloads**, and so on until you find the one you want.
+
+### Wildcards in Zsh
+
+Wildcards behave differently in Zsh than what Bash users are used to. First of all, they can be modified. For example, if you want to list all folders in your current directory, you can use a modified wildcard:
+
+
+```
+% ls
+dir0 dir1 dir2 file0 file1
+% ls *(/)
+dir0 dir1 dir2
+```
+
+In this example, the **(/)** qualifies the results of the wildcard so Zsh will display only directories. To list just the files, use **(.)**. To list symlinks, use **(@)**. To list executable files, use **(*)**.
+
+
+```
+% ls ~/bin/*(*)
+fop exify tt
+```
+
+Zsh isn't aware of file types only. It can also list according to modification time, using the same wildcard modifier convention. For example, if you want to find a file that was modified within the past eight hours, use the **mh** modifier (for **modified** and **hours**) and the negative integer of hours:
+
+
+```
+% ls ~/Documents/*(mh-8)
+cal.org game.org home.org
+```
+
+To find a file modified more than (for instance) two days ago, the modifiers change to **md** (for **modified** and **day**) with a positive integer:
+
+
+```
+% ls ~/Documents/*(md+2)
+holiday.org
+```
+
+There's a lot more you can do with wildcard modifiers and qualifiers, so read the [Zsh man page][4] for full details.
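+
+Qualifiers can also be combined. Two examples that work in a stock Zsh (the matching files are whatever happens to be in your directory):
+
+```
+% ls -l *(.om[1])    # the single most recently modified plain file
+% ls *(.Lm+10)       # plain files larger than 10MB
+```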
+
+#### The wildcard side effect
+
+To use wildcards the way you would use them in Bash, sometimes they must be escaped in Zsh. For instance, if you're copying some files to your server in Bash, you might use a wildcard like this:
+
+
+```
+$ scp IMG_*.JPG seth@example.com:~/www/ph*/*19/09/14
+```
+
+That works in Bash, but Zsh returns an error because it tries to expand the wildcards on the remote side before issuing the **scp** command. To avoid this, you must escape the remote wildcards:
+
+
+```
+% scp IMG_*.JPG seth@example.com:~/www/ph\*/\*19/09/14
+```
+
+It's these types of little exceptions that can frustrate you when you're switching to a new shell. There aren't many when using Zsh (there are probably more when switching back to Bash after experiencing Zsh) but when they happen, remain calm and be explicit. Rarely will you go wrong to adhere strictly to POSIX—but if that fails, look up the problem to solve it and move on. [Hyperpolyglot.org][5] has proven invaluable to many users stuck on one shell at work and another at home.
+
+In my next Zsh article, I'll show you how to install themes and plugins to make your Z-Shell even Z-ier.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/getting-started-zsh
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
+[2]: https://devblogs.microsoft.com/commandline/category/bash-on-ubuntu-on-windows/
+[3]: https://www.cygwin.com/
+[4]: https://linux.die.net/man/1/zsh
+[5]: http://hyperpolyglot.org/unix-shells
diff --git a/sources/tech/20190917 Talking to machines- Lisp and the origins of AI.md b/sources/tech/20190917 Talking to machines- Lisp and the origins of AI.md
new file mode 100644
index 0000000000..795f4c731b
--- /dev/null
+++ b/sources/tech/20190917 Talking to machines- Lisp and the origins of AI.md
@@ -0,0 +1,115 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Talking to machines: Lisp and the origins of AI)
+[#]: via: (https://opensource.com/article/19/9/command-line-heroes-lisp)
+[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
+
+Talking to machines: Lisp and the origins of AI
+======
+The Command Line Heroes podcast explores the invention of Lisp and the
+rise of thinking computers powered by open source software.
+![Listen to the Command Line Heroes Podcast][1]
+
+Artificial intelligence (AI) is all the rage today, and its massive impact on the world is still to come, says the [Association for the Advancement of Artificial Intelligence][2] (AAAI). According to an article on [Nanalyze][3]:
+
+> "The vast majority of nearly 2,000 experts polled by the Pew Research Center in 2014 said they anticipate robotics and artificial intelligence will permeate wide segments of daily life by 2025. A 2015 study covering 17 countries found that artificial intelligence and related technologies added an estimated 0.4 percentage point on average to those countries' annual GDP growth between 1993 and 2007, accounting for just over one-tenth of those countries' overall GDP growth during that time."
+
+However, this is not the first time AI has garnered so much attention. When was AI first popular, and what does that have to do with the obscure-but-often-loved programming language Lisp?
+
+The second-to-last podcast of [Command Line Heroes][4]' third season dives into these topics and leaves us thinking about open source at the core of AI.
+
+### Before the term AI
+
+Thinking machines have been a curiosity for centuries, long before they could be realized. In the 1800s, computer science pioneers Charles Babbage and Ada Lovelace imagined an analytical engine capable of predictions far beyond human skills, such as correctly selecting the winning horse in a race.
+
+In the 1940s and '50s, Alan Turing defined what it would look like for intelligent machines to emulate human intelligence; that's what we now call the Turing Test. In his 1950 [research paper][5], Turing's "imitation game" set out to convince someone they were communicating with a human in another room when, in reality, it was a machine.
+
+While these theories inspired imaginative debate, they became less theoretical as computer hardware began providing enough power to begin experimenting.
+
+### Why Lisp is at the heart of AI theory
+
+John McCarthy, the person who coined the term "artificial intelligence," is also the person who reinvented how we program in order to create thinking machines. His reimagined approach was codified into the Lisp programming language. As [Paul Graham][6] wrote:
+
+> "In 1960, [John McCarthy][7] published a remarkable paper in which he did for programming something like what Euclid did for geometry. He showed how, given a handful of simple operators and a notation for functions, you can build a whole programming language. He called this language Lisp, for 'List Processing,' because one of his key ideas was to use a simple data structure called a list for both code and data.
+>
+> "It's worth understanding what McCarthy discovered, not just as a landmark in the history of computers, but as a model for what programming is tending to become in our own time. It seems to me that there have been two really clean, consistent models of programming so far: the C model and the Lisp model. These two seem points of high ground, with swampy lowlands between them. As computers have grown more powerful, the new languages being developed have been [moving steadily][8] toward the Lisp model. A popular recipe for new programming languages in the past 20 years has been to take the C model of computing and add to it, piecemeal, parts taken from the Lisp model, like runtime typing and garbage collection."
+
+I remember when I first wrote Lisp for a computer science class. After wrapping my head around its seemingly infinite number of parentheses, I uncovered a beautiful pattern of thought: Can I think through what I want this software to do?
+
+![The elegance of Lisp programming is timeless][9]
+
+That sounds silly: computers process what we code them to do, but there's something about recursion that made me think in a wildly different light. It's exciting to learn that 15 years ago, I may have been tapping into the big-picture changes McCarthy was describing.
+
+### Why the slowdown in AI?
+
+By the mid-to-late 1960s, McCarthy's work had paved the way for a new field of research, in which AI, machine learning (ML), and deep learning all became possibilities. And Lisp became the accepted standard in this emerging field. It's said that in 1968, McCarthy made a wager with David Levy, a Scottish chess master, that in 10 years a computer would be able to beat Levy in a chess match. Why did it take nearly 30 years to get to the famous [Deep Blue vs. Garry Kasparov][10] match?
+
+Command Line Heroes explores one theory: that for-profit investment in AI pulled essential talent from academia, where they were advancing the science, and pushed them onto a different path. Whether or not this was the reason, the world of AI fell into a "winter," where the people pursuing it were considered unrealistic.
+
+This AI winter lasted for quite some time. In 2005, The [_New York Times_ reported][11] that AI had become so stigmatized that "some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."
+
+### Where is AI now?
+
+Fast forward to today, when talking about AI or ML is a fast pass to getting people's attention—but that attention isn't always positive. Many are concerned that AI will remove millions of jobs from the world. Others say it will [create][12] millions more jobs than are lost.
+
+The jury is still out. [McKinsey's research][13] on the job loss vs. job gain debate is fascinating. When you take into account growing world consumption, aging populations, "marketization" of previously unpaid domestic work, and other factors, you find that the answer depends on your outlook.
+
+One thing is for sure: AI will be a significant part of our lives, and it will have much wider implications than other areas of tech. For this reason (among others), examining the [misconceptions around ethics and bias in AI][14] is essential.
+
+### Open source and AI
+
+McCarthy had a dream that machines could have common sense. His AI goals included open source from the very beginning; this is visualized on Red Hat's beautifully animated webpage on the [origins of AI and its open source roots][15].
+
+[![Origins of AI and open source screenshot][16]][15]
+
+If we are to achieve the goals of McCarthy, Turing, or other AI pioneers, I believe it will be because of the open source community behind the technology. Part of the reason AI's popularity bounced back is because of open source: languages, frameworks, and the datasets we analyze are increasingly open. Here are a handful of things to explore:
+
+ * [Learn enough Python and R][17] to be part of this future
+ * [Explore Python libraries][18] that will bulk up your skills
+ * Understand how [AI and ML are related][19]
+ * Explore [free and open datasets][20]
+ * Use modern implementations of Lisp, [available under open source licenses][21]
+
+
+
+It's possible that early AI explored the right ideas in the wrong decade. World-class computers back then weren't even as powerful as today's cellphones, and each one was shared by dozens of individuals. Today, many of us own multiple supercomputers and carry them with us all the time. For this reason, among others, the future of AI is strong and its highest achievements are yet to come.
+
+_Command Line Heroes has covered programming languages for all of Season 3. [Subscribe so that you don't miss the last episode of the season][4], and I would love to hear your thoughts in the comments below._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/command-line-heroes-lisp
+
+作者:[Matthew Broberg][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mbbroberg
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_hereoes_ep7_blog-header-292x521.png?itok=lI4DXvq2 (Listen to the Command Line Heroes Podcast)
+[2]: http://aaai.org/
+[3]: https://www.nanalyze.com/2016/11/artificial-intelligence-definition/
+[4]: https://www.redhat.com/en/command-line-heroes
+[5]: https://www.csee.umbc.edu/courses/471/papers/turing.pdf
+[6]: http://www.paulgraham.com/rootsoflisp.html
+[7]: http://www-formal.stanford.edu/jmc/index.html
+[8]: http://www.paulgraham.com/diff.html
+[9]: https://opensource.com/sites/default/files/uploads/lisp_cycles.png (The elegance of Lisp programming is timeless)
+[10]: https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov
+[11]: https://www.nytimes.com/2005/10/14/technology/behind-artificial-intelligence-a-squadron-of-bright-real-people.html
+[12]: https://singularityhub.com/2019/01/01/ai-will-create-millions-more-jobs-than-it-will-destroy-heres-how/
+[13]: https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages
+[14]: https://opensource.com/article/19/8/4-misconceptions-ethics-and-bias-ai
+[15]: https://www.redhat.com/en/open-source-stories/ai-revolutionaries/origins-ai-open-source
+[16]: https://opensource.com/sites/default/files/uploads/origins_aiopensource.png (Origins of AI and open source screenshot)
+[17]: https://opensource.com/article/19/5/learn-python-r-data-science
+[18]: https://opensource.com/article/18/5/top-8-open-source-ai-technologies-machine-learning
+[19]: https://opensource.com/tags/ai-and-machine-learning
+[20]: https://opensource.com/article/19/2/learn-data-science-ai
+[21]: https://www.cliki.net/Common+Lisp+implementation
diff --git a/sources/tech/20190917 What-s Good About TensorFlow 2.0.md b/sources/tech/20190917 What-s Good About TensorFlow 2.0.md
new file mode 100644
index 0000000000..a00306d6c5
--- /dev/null
+++ b/sources/tech/20190917 What-s Good About TensorFlow 2.0.md
@@ -0,0 +1,328 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What’s Good About TensorFlow 2.0?)
+[#]: via: (https://opensourceforu.com/2019/09/whats-good-about-tensorflow-2-0/)
+[#]: author: (Siva Rama Krishna Reddy B https://opensourceforu.com/author/siva-krishna/)
+
+What’s Good About TensorFlow 2.0?
+======
+
+[![][1]][2]
+
+_Version 2.0 of TensorFlow is focused on simplicity and ease of use. It has been strengthened with updates like eager execution and intuitive higher level APIs accompanied by flexible model building. It is platform agnostic, and makes APIs more consistent, while removing those that are redundant._
+
+Machine learning and artificial intelligence are experiencing a revolution these days, primarily due to three major factors. The first is the increased computing power available within small form factors such as GPUs, NPUs, and TPUs. The second is the breakthrough in machine learning algorithms: state-of-the-art algorithms, and hence models, are available that infer faster. Finally, huge amounts of labelled data, which are essential for deep learning models to perform well, are now available.
+
+TensorFlow is an open source AI framework from Google which arms researchers and developers with the right tools to build novel models. It was made open source in 2015 and, in the past few years, has evolved with various enhancements covering operator support, programming languages, hardware support, data sets, official models, and distributed training and deployment strategies.
+
+TensorFlow 2.0 was released recently at the TensorFlow Developer Summit. It has major changes across the stack, some of which will be discussed from the developers’ point of view.
+
+TensorFlow 2.0 is primarily focused on the ease-of-use, power and scalability aspects. Ease is ensured in terms of simplified APIs, Keras being the main high level API interface; eager execution is available by default. Version 2.0 is powerful in the sense of being flexible and running much faster than earlier, with more optimisation. Finally, it is more scalable since it can be deployed on high-end distributed environments as well as on small edge devices.
+
+This new release streamlines the various components involved, from data preparation all the way up to deployment on various targets. High-speed data processing pipelines are offered by tf.data, high level APIs are offered by tf.keras, and there are simplified APIs to access various distribution strategies on targets like the CPU, GPU, and TPU. TensorFlow 2.0 offers a unique packaging format called SavedModel that can be deployed to the cloud through TensorFlow Serving, to edge devices through TensorFlow Lite, and to web applications through the newly introduced TensorFlow.js; various other language bindings are also available.
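+
+As a rough sketch of what a tf.data input pipeline looks like in practice (the file list, buffer size, and batch size below are illustrative placeholders):
+
+```
+import tensorflow as tf
+
+filenames = ["train-0.tfrecord", "train-1.tfrecord"]  # placeholder file list
+
+# Parse, shuffle, batch, and prefetch records in one declarative pipeline
+dataset = (tf.data.TFRecordDataset(filenames)
+           .shuffle(buffer_size=1024)
+           .batch(32)
+           .prefetch(tf.data.experimental.AUTOTUNE))
+```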
+
+![Figure 1: The evolution of TensorFlow][3]
+
+TensorFlow.js was announced at the developer summit with off-the-shelf pretrained models for the browser, Node, desktop, and mobile native applications. The inclusion of Swift was also announced. Looking at some of the performance improvements since last year, the latest release claims a training speedup of 1.8x on NVIDIA Tesla V100, a 1.6x training speedup on Google Cloud TPUv2, and a 3.3x inference speedup on Intel Skylake.
+
+**Upgrade to 2.0**
+The new release offers a utility _tf_upgrade_v2_ to convert a 1.x Python application script to a 2.0 compatible script. It does most of the job in converting the 1.x deprecated API to a newer compatibility API. An example of the same can be seen below:
+
+```
+test-pc:~$ cat test-infer-v1.py
+
+# Tensorflow imports
+import tensorflow as tf
+
+save_path = 'checkpoints/dev'
+with tf.gfile.FastGFile("./trained-graph.pb", 'rb') as f:
+    graph_def = tf.GraphDef()
+    graph_def.ParseFromString(f.read())
+    tf.import_graph_def(graph_def, name='')
+
+with tf.Session(graph=tf.get_default_graph()) as sess:
+    input_data = sess.graph.get_tensor_by_name("DecodeJPGInput:0")
+    output_data = sess.graph.get_tensor_by_name("final_result:0")
+
+    image = 'elephant-299.jpg'
+    if not tf.gfile.Exists(image):
+        tf.logging.fatal('File does not exist %s', image)
+    image_data = tf.gfile.FastGFile(image, 'rb').read()
+
+    result = sess.run(output_data, {'DecodeJPGInput:0': image_data})
+    print(result)
+
+test-pc:~$ tf_upgrade_v2 --infile test-infer-v1.py --outfile test-infer-v2.py
+
+INFO line 5:5: Renamed 'tf.gfile.FastGFile' to 'tf.compat.v1.gfile.FastGFile'
+INFO line 6:16: Renamed 'tf.GraphDef' to 'tf.compat.v1.GraphDef'
+INFO line 10:9: Renamed 'tf.Session' to 'tf.compat.v1.Session'
+INFO line 10:26: Renamed 'tf.get_default_graph' to 'tf.compat.v1.get_default_graph'
+INFO line 15:15: Renamed 'tf.gfile.Exists' to 'tf.io.gfile.exists'
+INFO line 16:12: Renamed 'tf.logging.fatal' to 'tf.compat.v1.logging.fatal'
+INFO line 17:21: Renamed 'tf.gfile.FastGFile' to 'tf.compat.v1.gfile.FastGFile'
+TensorFlow 2.0 Upgrade Script
+-----------------------------
+Converted 1 files
+Detected 0 issues that require attention
+-------------------------------------------------------------
+Make sure to read the detailed log 'report.txt'
+
+test-pc:~$ cat test-infer-v2.py
+
+# Tensorflow imports
+import tensorflow as tf
+
+save_path = 'checkpoints/dev'
+with tf.compat.v1.gfile.FastGFile("./trained-graph.pb", 'rb') as f:
+    graph_def = tf.compat.v1.GraphDef()
+    graph_def.ParseFromString(f.read())
+    tf.import_graph_def(graph_def, name='')
+
+with tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph()) as sess:
+    input_data = sess.graph.get_tensor_by_name("DecodeJPGInput:0")
+    output_data = sess.graph.get_tensor_by_name("final_result:0")
+
+    image = 'elephant-299.jpg'
+    if not tf.io.gfile.exists(image):
+        tf.compat.v1.logging.fatal('File does not exist %s', image)
+    image_data = tf.compat.v1.gfile.FastGFile(image, 'rb').read()
+
+    result = sess.run(output_data, {'DecodeJPGInput:0': image_data})
+    print(result)
+```
+
+As we can see here, the _tf_upgrade_v2_ utility converts all the deprecated APIs to compatible v1 APIs, to make them work with 2.0.
+
+**Eager execution:** Eager execution allows real-time evaluation of Tensors without calling _session.run_. A major advantage with eager execution is that we can print the Tensor values any time for debugging.
+With TensorFlow 1.x, the code is:
+
+```
+test-pc:~$python3
+Python 3.6.7 (default, Oct 22 2018, 11:32:17)
+[GCC 8.2.0] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> import tensorflow as tf
+>>> print(tf.__version__)
+1.14.0
+>>> tf.add(2,3)
+<tf.Tensor 'Add:0' shape=() dtype=int32>
+```
+
+TensorFlow 2.0, on the other hand, evaluates the result the moment we call the API:
+
+```
+test-pc:~$python3
+Python 3.6.7 (default, Oct 22 2018, 11:32:17)
+[GCC 8.2.0] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> import tensorflow as tf
+>>> print(tf.__version__)
+2.0.0-beta1
+>>> tf.add(2,3)
+<tf.Tensor: id=2, shape=(), dtype=int32, numpy=5>
+```
+
+In v1.x, the resulting Tensor doesn’t display the value and we need to execute the graph under a session to get the value, but in v2.0 the values are implicitly computed and available for debugging.
+
+**Keras**
+Keras (_tf.keras_) is now the official high level API. It has been enhanced with many compatible low level APIs. The redundancy across Keras and TensorFlow is removed, and most of the APIs are now available with Keras. The low level operators are still accessible through tf.raw_ops.
+We can now save the Keras model directly as a Tensorflow SavedModel, as shown below:
+
+```
+# Save Model to SavedModel
+saved_model_path = tf.keras.experimental.export_saved_model(model, '/path/to/model')
+
+# Load the SavedModel
+new_model = tf.keras.experimental.load_from_saved_model(saved_model_path)
+
+# new_model is now keras Model object.
+new_model.summary()
+```
+
+Earlier, APIs related to various layers, optimisers, metrics, and loss functions were distributed across Keras and native TensorFlow. The latest enhancements unify them as _tf.keras.optimizers.*_, _tf.keras.metrics.*_, _tf.keras.losses.*_, and _tf.keras.layers.*_.
+The RNN layers are now much simpler compared to v1.x.
+With TensorFlow 1.x, the commands given are:
+
+```
+if tf.test.is_gpu_available():
+    model.add(tf.keras.layers.CuDNNLSTM(32))
+else:
+    model.add(tf.keras.layers.LSTM(32))
+```
+
+With TensorFlow 2.0, the commands given are:
+
+```
+# This will use a CuDNN kernel when a GPU is available.
+model.add(tf.keras.layers.LSTM(32))
+```
+
+TensorBoard integration is now a simple call back, as shown below:
+
+```
+tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
+
+model.fit(
+    x_train, y_train, epochs=5,
+    validation_data=[x_test, y_test],
+    callbacks=[tb_callback])
+```
+
+With this simple callback addition, TensorBoard is available in the browser so you can watch all the statistics in real time.
+Keras offers unified distribution strategies, and a few lines of code can enable the required strategy as shown below:
+
+```
+strategy = tf.distribute.MirroredStrategy()
+
+with strategy.scope():
+    model = tf.keras.models.Sequential([
+        tf.keras.layers.Dense(64, input_shape=[10]),
+        tf.keras.layers.Dense(64, activation='relu'),
+        tf.keras.layers.Dense(10, activation='softmax')])
+
+model.compile(optimizer='adam',
+              loss='categorical_crossentropy',
+              metrics=['accuracy'])
+```
+
+As shown above, the model definition under the desired scope is all we need to apply the desired strategy. Very soon, there will be support for multi-node synchronous and TPU strategy, and later, for parameter server strategy.
+
+![Figure 2: Coral products with edge TPU][4]
+
+**TensorFlow function**
+Function is a major upgrade that impacts the way we write TensorFlow applications. The new version introduces tf.function, which simplifies the applications and makes it very close to writing a normal Python application.
+A sample _tf.function_ definition looks like the code snippet below. Here, the _tf.function_ decorator lets the user define a function as a TensorFlow operator, and all optimisation is applied automatically. The function also runs faster than it would under plain eager execution. APIs like _tf.control_dependencies_, _tf.global_variables_initializer_, _tf.cond_, and _tf.while_loop_ are no longer needed with _tf.function_. The user-defined functions are polymorphic by default, i.e., we may pass tensors of mixed types.
+
+```
+test-pc:~$ cat tf-test.py
+import tensorflow as tf
+
+print(tf.__version__)
+
+@tf.function
+def add(a, b):
+    return (a+b)
+
+print(add(tf.ones([2,2]), tf.ones([2,2])))
+
+test-pc:~$ python3 tf-test.py
+2.0.0-beta1
+tf.Tensor(
+[[2. 2.]
+[2. 2.]], shape=(2, 2), dtype=float32)
+```
+
+Here is another example that demonstrates automatic control flow and Autograph in action. Autograph automatically converts Python conditionals and while loops into TensorFlow operators:
+
+```
+test-pc:~$ cat tf-test-control.py
+import tensorflow as tf
+
+print(tf.__version__)
+
+@tf.function
+def f(x):
+    while tf.reduce_sum(x) > 1:
+        x = tf.tanh(x)
+    return x
+
+print(f(tf.random.uniform([10])))
+
+test-pc:~$ python3 tf-test-control.py
+
+2.0.0-beta1
+tf.Tensor(
+[0.10785562 0.11102211 0.11347286 0.11239681 0.03989326 0.10335539
+0.11030331 0.1135259 0.11357211 0.07324989], shape=(10,), dtype=float32)
+```
+
+We can see Autograph in action with the following API over the function.
+
+```
+print(tf.autograph.to_code(f)) # f is the function name
+```
+
+**TensorFlow Lite**
+The latest advancements in edge devices add neural network accelerators. Google has released EdgeTPU, Intel has the edge inference platform Movidius, Huawei mobile devices have the Kirin based NPU, Qualcomm has come up with NPE SDK to accelerate on the Snapdragon chipsets using Hexagon power and, recently, Samsung released Exynos 9 with NPU. An edge device optimised framework is necessary to support these hardware ecosystems.
+
+Unlike TensorFlow, which is widely used in high power-consuming server infrastructure, edge devices are challenging in terms of reduced computing power, limited memory, and battery constraints. TensorFlow Lite is aimed at bringing TensorFlow models directly onto the edge with minimal effort. The TF Lite model format is different from TensorFlow's. A TF Lite converter is available to convert a TensorFlow SavedModel to a TF Lite model.
+
+Though TensorFlow Lite is evolving, there are limitations too, such as in the number of operations supported, and the unsupported semantics like control-flows and RNNs. In its early days, TF Lite used a TOCO converter and there were a few challenges for the developer community. A brand new 2.0 converter is planned to be released soon. There are claims that using TF Lite results in huge improvements across the CPU, GPU and TPU.
+
+TF Lite introduces delegates to accelerate parts of the graph on an accelerator. We may choose a specific delegate for a specific sub-graph, if needed.
+
+```
+#import “tensorflow/lite/delegates/gpu/metal_delegate.h”
+
+// Initialize interpreter with GPU delegate
+std::unique_ptr<tflite::Interpreter> interpreter;
+InterpreterBuilder(*model, resolver)(&interpreter);
+auto* delegate = NewGpuDelegate(nullptr);  // default config
+if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return false;
+
+// Run inference
+while (true) {
+  WriteToInputTensor(interpreter->typed_input_tensor<float>(0));
+  if (interpreter->Invoke() != kTfLiteOk) return false;
+  ReadFromOutputTensor(interpreter->typed_output_tensor<float>(0));
+}
+
+// Clean up
+interpreter = nullptr;
+DeleteGpuDelegate(delegate);
+```
+
+As shown above, we can choose GPUDelegate, and modify the graph with the respective kernel’s runtime. TF Lite is going to support the Android NNAPI delegate, in order to support all the hardware that is supported by NNAPI. For edge devices, CPU optimisation is also important, as not all edge devices are equipped with accelerators; hence, there is a plan to support further optimisations for ARM and x86.
+
+Optimisations based on quantisation and pruning are evolving to reduce the size and processing demands of models. Quantisation generally can reduce model size by 4x (i.e., 32-bit to 8-bit). Models with more convolution layers may get faster by 10 to 50 per cent on the CPU. Fully connected and RNN layers may speed up operation by 3x.
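+
+As a rough illustration of how simple this has become, a post-training quantisation pass with the TF Lite converter may look like the sketch below (the SavedModel path is a placeholder):
+
+```
+import tensorflow as tf
+
+# Convert a SavedModel to TF Lite with default post-training quantisation
+converter = tf.lite.TFLiteConverter.from_saved_model('./saved_model')  # placeholder path
+converter.optimizations = [tf.lite.Optimize.DEFAULT]
+tflite_model = converter.convert()
+
+with open('model_quant.tflite', 'wb') as f:
+    f.write(tflite_model)
+```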
+
+TF Lite now supports post-training quantisation, which greatly reduces model size along with compute demands. TensorFlow 2.0 offers simplified APIs to build models with quantisation and pruning optimisations.
+A normal dense layer without quantisation looks like what follows:
+
+```
+tf.keras.layers.Dense(512, activation='relu')
+```
+
+Whereas a quantised dense layer looks like what's shown below:
+
+```
+quantize.Quantize(tf.keras.layers.Dense(512, activation='relu'))
+```
+
+Pruning is a technique used to drop connections that are ineffective. In general, 'dense' layers contain lots of connections that don't influence the output. Such connections can be dropped by setting their weight to zero. Tensors with lots of zeros may be represented as 'sparse' and can be compressed, and a sparse tensor also requires fewer operations.
+Building a layer with _prune_ is as simple as using the following command:
+
+```
+prune.Prune(tf.keras.layers.Dense(512, activation='relu'))
+```
+
+In the pipeline, there is Keras-based quantised training and Keras-based connection pruning. These optimisations may push TF Lite ahead of competing frameworks.
+
+**Coral**
+Coral is a new platform for creating products with on-device ML acceleration. The first product here features Google’s Edge TPU in SBC and USB form factors. TensorFlow Lite is officially supported on this platform, with the salient features being very fast inference speed, privacy and no reliance on network connection.
+
+More details related to hardware specifications, pricing, and a getting started guide can be found at __.
+
+With these advances as well as a wider ecosystem, it’s very evident that TensorFlow may become the leading framework for artificial intelligence and machine learning, similar to how Android evolved in the mobile world.
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/09/whats-good-about-tensorflow-2-0/
+
+作者:[Siva Rama Krishna Reddy B][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/siva-krishna/
+[b]: https://github.com/lujun9972
+[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2018/09/ML-with-tensorflow.jpg?resize=696%2C328&ssl=1 (ML with tensorflow)
+[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2018/09/ML-with-tensorflow.jpg?fit=1200%2C565&ssl=1
+[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-The-evolution-of-TensorFlow.jpg?resize=350%2C117&ssl=1
+[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-2-Coral-products-with-edge-TPU.jpg?resize=350%2C198&ssl=1
diff --git a/sources/tech/20190918 Adding themes and plugins to Zsh.md b/sources/tech/20190918 Adding themes and plugins to Zsh.md
new file mode 100644
index 0000000000..60af63d667
--- /dev/null
+++ b/sources/tech/20190918 Adding themes and plugins to Zsh.md
@@ -0,0 +1,210 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Adding themes and plugins to Zsh)
+[#]: via: (https://opensource.com/article/19/9/adding-plugins-zsh)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+Adding themes and plugins to Zsh
+======
+Expand Z-shell's capabilities with themes and plugins installed with Oh
+My Zsh.
+![Someone wearing a hardhat and carrying code ][1]
+
+In my [previous article][2], I explained how to get started with [Z-shell][2] (Zsh). For some users, the most exciting thing about Zsh is its ability to adopt new themes. It's so easy to theme Zsh both because of the active community designing visuals for the shell and also because of the [Oh My Zsh][3] project, which makes it trivial to install them.
+
+Theming is one of those changes you notice immediately, so if you don't feel like you changed shells when you installed Zsh, you'll definitely feel it once you've adopted one of the 100+ themes bundled with Oh My Zsh. There's a lot more to Oh My Zsh than just pretty themes, though; there are also hundreds of plugins that add features to your Z-shell environment.
+
+### Installing Oh My Zsh
+
+The [ohmyz.sh][3] website encourages you to install the framework by running a script over the internet from your computer. While the Oh My Zsh project is almost certainly trustworthy, it's generally ill-advised to blindly run scripts on your system. If you want to run the install script, you can download it, read it, and run it after you're satisfied you understand what it's doing.
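+
+For example, you could fetch the script, read it, and only then run it. A sketch of that flow (the URL is an assumption based on where the project hosts its install script, so verify it yourself):
+
+```
+% curl -fsSL -o install.sh https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh
+% less install.sh    # read it before you run it
+% sh install.sh
+```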
+
+If you download the script and read it, you may notice that installation is only a three-step process:
+
+#### 1\. Clone oh-my-zsh
+
+First, clone the oh-my-zsh repository into a directory called **~/.oh-my-zsh**:
+
+
+```
+% git clone http://github.com/robbyrussell/oh-my-zsh ~/.oh-my-zsh
+```
+
+#### 2\. Switch the config file
+
+Next, back up your existing **.zshrc** file and move the default one from the oh-my-zsh install into its place. You can do this in one command using the **-b** (backup) option for **mv**, as long as your version of the **mv** command includes that option:
+
+
+```
+% mv -b \
+~/.oh-my-zsh/templates/zshrc.zsh-template \
+~/.zshrc
+```
+
+#### 3\. Edit the config
+
+By default, Oh My Zsh's configuration is pretty bland, so you might want to reintegrate your custom **~/.zshrc** into the **.oh-my-zsh** config. To do that, append your old config to the end of the new one using the [cat command][4]:
+
+
+```
+% cat ~/.zshrc~ >> ~/.zshrc
+```
+
+To see the default configuration and learn about some of the options it provides, open **~/.zshrc** in your favorite text editor. The file is well-commented, so it's a great way to get a good idea of what's possible.
+
+For instance, you can change the location of your **.oh-my-zsh** directory. At installation, it resides at the base of your home directory, but modern Linux convention, as defined by the [Free Desktop][5] specification, is to place directories that extend the functionality of applications in the **~/.local/share** directory. You can change it in **~/.zshrc** by editing the line:
+
+
+```
+# Path to your oh-my-zsh installation.
+export ZSH=$HOME/.local/share/oh-my-zsh
+```
+
+then moving the directory to that location:
+
+
+```
+% mv ~/.oh-my-zsh \
+$HOME/.local/share/oh-my-zsh
+```
+
+If you're using MacOS, the specification is less clear, but arguably the most appropriate place for the directory is **$HOME/Library/Application\ Support**.
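+
+In that case, the corresponding line in **~/.zshrc** might read like this (the exact path is an assumption, not something the specification mandates):
+
+```
+# hypothetical MacOS location for the oh-my-zsh directory
+export ZSH=$HOME/Library/Application\ Support/oh-my-zsh
+```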
+
+### Relaunching Zsh
+
+After editing the config, you have to relaunch your shell. Before you do that, make sure you've finished any in-progress config changes; for instance, don't change the path of **.oh-my-zsh** then forget to move the directory to its new location. If you don't want to relaunch your shell, you can **source** the config file, just as you can with Bash:
+
+
+```
+% source ~/.zshrc
+➜ .oh-my-zsh git:(master) ✗
+```
+
+You can ignore any warnings about missing update files; they will be resolved upon relaunch.
+
+### Changing your theme
+
+Installing Oh My Zsh sets your Z-shell theme to **robbyrussell**, a theme by the project's maintainer. This theme's changes are minimal, mostly involving the color of your prompt.
+
+To view all the available themes, list the contents of the **.oh-my-zsh** theme directory:
+
+
+```
+➜ .oh-my-zsh git:(master) ✗ ls \
+~/.local/share/oh-my-zsh/themes
+3den.zsh-theme
+adben.zsh-theme
+af-magic.zsh-theme
+afowler.zsh-theme
+agnoster.zsh-theme
+[...]
+```
+
+To see screenshots of themes before trying them, visit the Oh My Zsh [wiki][6]. For even more themes, visit the [External themes][7] wiki page.
+
+Most themes are simple to set up and use. Just change the value of the theme name in **.zshrc** and reload the config:
+
+
+```
+➜ ~ sed -i \
+'s/_THEME=\"robbyrussell\"/_THEME=\"linuxonly\"/g' \
+~/.zshrc
+➜ ~ source ~/.zshrc
+seth@darkstar:pts/0->/home/skenlon (0) ➜
+```
+
+Other themes require extra configuration. For example, to use the **agnoster** theme, you must first install the Powerline font. This is an open source font, and it's probably in your software repository if you're running Linux. Install it with:
+
+
+```
+➜ ~ sudo dnf install powerline-fonts
+```
+
+Set your theme in the config:
+
+
+```
+➜ ~ sed -i \
+'s/_THEME=\"linuxonly\"/_THEME=\"agnoster\"/g' \
+~/.zshrc
+```
+
+and then relaunch (a simple **source** won't work). Upon relaunch, you will see the new theme:
+
+![agnoster theme][8]
+
+### Installing plugins
+
+Over 200 plugins ship with Oh My Zsh, and you can see them by looking in **.oh-my-zsh/plugins**. Each plugin directory has a README file explaining what the plugin does.
+
+Some plugins are relatively simple. For instance, the **dnf**, **ubuntu**, **brew**, and **macports** plugins are collections of aliases to simplify interactions with the DNF, Apt, Homebrew, and MacPorts package managers.
+
+Others are more complex. The **git** plugin, active by default, detects when you're working in a [Git repository][9] and updates your shell prompt so that it lists the current branch and even indicates whether there are unmerged changes.
+
+To activate a plugin, add it to the plugin setting in **~/.zshrc**. For example, to add the **dnf** and **pass** plugins, open **~/.zshrc** in your favorite text editor:
+
+
+```
+plugins=(git dnf pass)
+```
+
+Save your changes and reload your Zsh session:
+
+
+```
+% source ~/.zshrc
+```
+
+The plugins are now active. You can test the **dnf** plugin by using one of the aliases it provides:
+
+
+```
+% dnfs fop
+====== Name Exactly Matched: fop ======
+fop.noarch : XSL-driven print formatter
+```
+
+Different plugins do different things, so you may want to install only one or two at a time to help you learn the new capabilities of your shell.
+
+#### Cheating
+
+Some Oh My Zsh plugins are pretty generic. If you look at a plugin that claims to be a Z-shell plugin and the code is also compatible with Bash, then you can use it in your Bash shell. Some plugins require Z-shell-specific functions, so this won't work with all of them. But you can load plugins like **dnf**, **ubuntu**, **[firewalld][10]**, and others into a Bash shell by using **source** to load the plugin of your choice. For example:
+
+
+```
+if [ -d $HOME/.local/share/oh-my-zsh/plugins ]; then
+ source $HOME/.local/share/oh-my-zsh/plugins/dnf/dnf.plugin.zsh
+fi
+```
+
+### To Z or not to Z
+
+Z-shell is a powerful shell both for its built-in features and the plugins contributed by its passionate community. Whether you use it as your primary shell or just as a shell you visit on weekends or holidays, you owe it to yourself to try it out.
+
+What are your favorite Z-shell themes and plugins? Tell us in the comments!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/adding-plugins-zsh
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code )
+[2]: https://opensource.com/article/19/9/getting-started-zsh
+[3]: https://ohmyz.sh/
+[4]: https://opensource.com/article/19/2/getting-started-cat-command
+[5]: http://freedesktop.org
+[6]: https://github.com/robbyrussell/oh-my-zsh/wiki/Themes
+[7]: https://github.com/robbyrussell/oh-my-zsh/wiki/External-themes
+[8]: https://opensource.com/sites/default/files/uploads/zsh-agnoster.jpg (agnoster theme)
+[9]: https://opensource.com/resources/what-is-git
+[10]: https://opensource.com/article/19/7/make-linux-stronger-firewalls
diff --git a/sources/tech/20190918 How to remove carriage returns from text files on Linux.md b/sources/tech/20190918 How to remove carriage returns from text files on Linux.md
new file mode 100644
index 0000000000..45b8a8b89d
--- /dev/null
+++ b/sources/tech/20190918 How to remove carriage returns from text files on Linux.md
@@ -0,0 +1,114 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to remove carriage returns from text files on Linux)
+[#]: via: (https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+How to remove carriage returns from text files on Linux
+======
+When carriage returns (also referred to as Ctrl+M's) get on your nerves, don't fret. There are several easy ways to show them the door.
+[Kim Siever][1]
+
+Carriage returns go back a long way – as far back as typewriters on which a mechanism or a lever swung the carriage that held a sheet of paper to the right so that suddenly letters were being typed on the left again. They have persevered in text files on Windows, but were never used on Linux systems. This incompatibility sometimes causes problems when you’re trying to process files on Linux that were created on Windows, but it's an issue that is very easily resolved.
+
+The carriage return character, also referred to as **Ctrl+M**, would show up as an octal 15 if you were looking at the file with an **od** (octal dump) command. The characters **CRLF** are often used to represent the carriage return and linefeed sequence that ends lines on Windows text files. Those who like to gaze at octal dumps will spot the **\r \n**. Linux text files, by comparison, end with just linefeeds.
+
+**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
+
+Here's a sample of **od** output with the lines containing the **CRLF** characters in both octal and character form highlighted.
+
+```
+$ od -bc testfile.txt
+0000000 124 150 151 163 040 151 163 040 141 040 164 145 163 164 040 146
+ T h i s i s a t e s t f
+0000020 151 154 145 040 146 162 157 155 040 127 151 156 144 157 167 163
+ i l e f r o m W i n d o w s
+0000040 056 015 012 111 164 047 163 040 144 151 146 146 145 162 145 156 <==
+ . \r \n I t ' s d i f f e r e n <==
+0000060 164 040 164 150 141 156 040 141 040 125 156 151 170 040 164 145
+ t t h a n a U n i x t e
+0000100 170 164 040 146 151 154 145 015 012 167 157 165 154 144 040 142 <==
+ x t f i l e \r \n w o u l d b <==
+```
+
+While these characters don’t represent a huge problem, they can sometimes interfere when you want to parse the text files in some way and don’t want to have to code around their presence or absence.
+
+### 3 ways to remove carriage return characters from text files
+
+Fortunately, there are several ways to easily remove carriage return characters. Here are three options:
+
+#### dos2unix
+
+You might need to go through the trouble of installing it, but **dos2unix** is probably the easiest way to turn Windows text files into Unix/Linux text files. One command with one argument, and you’re done. No second file name is required. The file will be changed in place.
+
+```
+$ dos2unix testfile.txt
+dos2unix: converting file testfile.txt to Unix format...
+```
+
+You should see the file length decrease, depending on how many lines it contains. A file with 100 lines would likely shrink by 99 characters, since only the last line will not end with the **CRLF** characters.
+
+Before:
+
+```
+-rw-rw-r-- 1 shs shs 121 Sep 14 19:11 testfile.txt
+```
+
+After:
+
+```
+-rw-rw-r-- 1 shs shs 118 Sep 14 19:12 testfile.txt
+```
+
+If you need to convert a large collection of files, don't fix them one at a time. Instead, put them all in a directory by themselves and run a command like this:
+
+```
+$ find . -type f -exec dos2unix {} \;
+```
+
+In this command, we use find to locate regular files and then run the **dos2unix** command to convert them one at a time. The {} in the command is replaced by the filename. You should be sitting in the directory with the files when you run it. This command could damage other types of files, such as those that contain octal 15 characters in some context other than a text file (e.g., bytes in an image file).
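+
+If you want to be more conservative and only touch files you know are text, a narrower variation (assuming your text files all end in .txt) is:
+
+```
+$ find . -type f -name "*.txt" -exec dos2unix {} \;
+```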
+
+#### sed
+
+You can also use **sed**, the stream editor, to remove carriage returns. You will, however, have to supply a second file name. Here’s an example:
+
+```
+$ sed -e "s/^M//" before.txt > after.txt
+```
+
+One important thing to note is that you DON’T type what that command appears to be. You must enter **^M** by typing **Ctrl+V** followed by **Ctrl+M**. The “s” is the substitute command. The slashes separate the text we’re looking for (the Ctrl+M) and the text (nothing in this case) that we’re replacing it with.
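+
+If typing the literal control character feels error-prone, GNU sed also understands the **\r** escape, so this equivalent form avoids the Ctrl+V dance (anchoring on the end of the line so only trailing carriage returns are removed):
+
+```
+$ sed -e 's/\r$//' before.txt > after.txt
+```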
+
+#### vi
+
+You can even remove carriage return (**Ctrl+M**) characters with **vi**, although this assumes you’re not running through hundreds of files and are maybe making some other changes, as well. You would type “**:**” to go to the command line and then type the string shown below. As with **sed**, the **^M** portion of this command requires typing **Ctrl+V** to get the **^** and then **Ctrl+M** to insert the **M**. The **%s** is a substitute operation, the slashes again separate the characters we want to remove and the text (nothing) we want to replace it with. The “**g**” (global) means to do this on every line in the file.
+
+```
+:%s/^M//g
+```
+
+#### Wrap-up
+
+The **dos2unix** command is probably the easiest to remember and most reliable way to remove carriage returns from text files. Other options are a little trickier to use, but they provide the same basic function.
+
+Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://www.flickr.com/photos/kmsiever/5895380540/in/photolist-9YXnf5-cNmpxq-2KEvib-rfecPZ-9snnkJ-2KAcDR-dTxzKW-6WdgaG-6H5i46-2KzTZX-7cnSw7-e3bUdi-a9meh9-Zm3pD-xiFhs-9Hz6YM-ar4DEx-4PXAhw-9wR4jC-cihLcs-asRFJc-9ueXvG-aoWwHq-atwL3T-ai89xS-dgnntH-5en8Te-dMUDd9-aSQVn-dyZqij-cg4SeS-abygkg-f2umXt-Xk129E-4YAeNn-abB6Hb-9313Wk-f9Tot-92Yfva-2KA7Sv-awSCtG-2KDPzb-eoPN6w-FE9oi-5VhaNf-eoQgx7-eoQogA-9ZWoYU-7dTGdG-5B1aSS
+[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/translated/tech/20190129 Create an online store with this Java-based framework.md b/translated/tech/20190129 Create an online store with this Java-based framework.md
new file mode 100644
index 0000000000..5c0a9ab78e
--- /dev/null
+++ b/translated/tech/20190129 Create an online store with this Java-based framework.md
@@ -0,0 +1,235 @@
+[#]: collector: (lujun9972)
+[#]: translator: (laingke)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Create an online store with this Java-based framework)
+[#]: via: (https://opensource.com/article/19/1/scipio-erp)
+[#]: author: (Paul Piper https://opensource.com/users/madppiper)
+
+Create an online store with this Java-based framework
+======
+Scipio ERP comes with a large range of applications and functionality.
+
+
+So you want to sell products or services online but either can't find a fitting software package or think customization would be too costly? [Scipio ERP][1] may well be what you are looking for.
+
+Scipio ERP is a Java-based, open source e-commerce framework that comes with a large range of applications and functionality. The project was forked from [Apache OFBiz][2] in 2014 with a focus on better customization and a more modern appeal. The e-commerce component is quite extensive: it works in multi-store setups, handles internationalization, supports a wide range of product configurations, and is compatible with modern HTML frameworks. The software also provides standard applications for many other business cases, such as accounting, warehouse management, or sales-force automation. It's all highly standardized and therefore easy to customize, which is great if you are looking for more than a virtual cart.
+
+The system also makes it very easy to keep up with modern web standards. All screens are constructed using the system's "[templating toolkit][3]," an easy-to-learn macro set that separates HTML from all applications. Because of it, every application is already standardized to the core. Sounds confusing? It really isn't: it all looks a lot like HTML, but you write a lot less of it.
+
+### Initial setup
+
+Before you get started, please make sure you have the Java 1.8 (or greater) SDK and a Git client installed. Done? Great! Next, check out the master branch from GitHub:
+
+```
+git clone https://github.com/ilscipio/scipio-erp.git
+cd scipio-erp
+git checkout master
+```
+
+To install the system, simply run **./install.sh** and select either option from the command line. Throughout development, it is best to stick with **installation for development** (option 1), which will also install a range of demo data. For professional installations, you can modify the initial config data ("seed data") so it will automatically set up the company and catalog data for you. By default, the system will run with an internal database, but it [can also be configured][4] with a wide range of relational databases such as PostgreSQL and MariaDB.
+
+![Setup wizard][6]
+
+Follow the setup wizard to complete your initial configuration.
+
+Start the system with **./start.sh** and open the link **** to complete the configuration. If you installed with demo data, you can log in with username **admin** and password **scipio**. During the setup wizard, you can set up a company profile, accounting, a warehouse, your product catalog, your online store, and additional user profiles. Skip the website entities on the product store configuration screen for now. The system allows you to run multiple webstores with different underlying code; unless you want to do that, it is easiest to stick with the defaults.
+
+Congratulations, you just installed Scipio ERP! Play around with the screens for a minute or two to get a feel for the functionality.
+
+### Shortcuts
+
+Before you jump into the customization, here are a few handy commands that will help you along the way:
+
+ * Create a shop override: **./ant create-component-shop-override**
+ * Create a new component: **./ant create-component**
+ * Create a new theme component: **./ant create-theme**
+ * Create admin user: **./ant create-admin-user-login**
+ * Various other utility functions: **./ant -p**
+ * Utility to install and update add-ons: **./git-addons help**
+
+
+
+Also, make a mental note of the following locations:
+
+ * Scripts to run Scipio as a service: **/tools/scripts/**
+ * Log output directory: **/runtime/logs**
+ * Admin application: ****
+ * E-commerce application: ****
+
+
+
+Last, Scipio ERP structures all code in the following five major directories:
+
+ * Framework: framework-related sources, the application server, generic screens, and configurations
+ * Applications: core applications
+ * Addons: third-party extensions
+ * Themes: modifies the look and feel
+ * Hot-deploy: your own components
+
+
+
+Aside from a few configurations, you will be working within the hot-deploy and themes directories.
+
+### Webstore customizations
+
+To truly make the system your own, start thinking in terms of [components][7]. Components are a modular approach to override, extend, and add to the system. Think of components as self-contained web modules that capture information on databases ([entities][8]), functions ([services][9]), screens ([views][10]), [events and actions][11], and web applications. Thanks to components, you can add your own code while staying compatible with the original sources.
+
+Run **./ant create-component-shop-override** and follow the steps to create your webstore component. A new directory will be created inside the hot-deploy directory, which extends and overrides the original e-commerce application.
+
+![component directory structure][13]
+
+A typical component directory structure.
+
+Your component will have the following directory structure:
+
+ * config: configurations
+ * data: seed data
+ * entitydef: database table definitions
+ * script: Groovy script location
+ * servicedef: service definitions
+ * src: Java classes
+ * webapp: your web application
+ * widget: screen definitions
+
+
+
+Additionally, the **ivy.xml** file allows you to add Maven libraries to the build process, and the **ofbiz-component.xml** file defines the overall component and web application structure. Apart from the obvious, you will also find a **controller.xml** file inside the web app's **WEB-INF** directory. This allows you to define request entries and connect them to events and screens. For screens alone, you can also use the built-in CMS functionality, but it is best to stick with the core mechanics first. Familiarize yourself with **/applications/shop/** before introducing changes.
+
+#### Adding custom screens
+
+Remember the [templating toolkit][3]? You will find it used on every screen. Think of it as a set of easy-to-learn macros that structure all content. Here's an example:
+
+```
+<@section title="Title">
+    <@heading id="slider">Slider</@heading>
+    <@row>
+        <@cell columns=6>
+            <@slider id="" class="" controls=true indicator=true>
+                <@slide link="#" image="https://placehold.it/800x300">Just some content…</@slide>
+                <@slide title="This is a title" link="#" image="https://placehold.it/800x300"></@slide>
+            </@slider>
+        </@cell>
+        <@cell columns=6>Second column</@cell>
+    </@row>
+</@section>
+```
+
+Not too difficult, right? Meanwhile, themes contain the HTML definitions and styles. This hands the power over to your front-end developers, who can define the output of each macro and otherwise stick to their own build tools for development.
+
+Let's give it a quick try. First, define a request on your own webstore. You will modify the code for this. A built-in CMS is also accessible at ****, which allows you to create new templates and screens in a much more efficient way. It is fully compatible with the templating toolkit and comes with example templates that can be adopted to your preferences. But since we are trying to understand the system here, let's go with the more complicated way first.
+
+Open the **[controller.xml][14]** file inside your shop's webapp directory. The controller keeps track of request events and performs actions accordingly. The following will create a new request under **/shop/test**:
+
+```
+<!-- Request Mappings -->
+<!-- Illustrative reconstruction; the original XML of this snippet did not survive extraction -->
+<request-map uri="test">
+    <security https="true" auth="false"/>
+    <response name="success" type="view" value="test"/>
+</request-map>
+```
+
+You can define multiple responses and, if you want, you can use an event or a service call inside the request to determine which response you may want to use. I opted for a response of type "view." A view is a rendered response; other types are request redirects, forwards, and so on. The system comes with various renderers that let you determine the output later; for this, add the following:
+
+```
+<!-- Illustrative reconstruction; the original XML of this snippet did not survive extraction -->
+<view-map name="test" type="screen" page="component://mycomponent/widget/CommonScreens.xml#test"/>
+```
+
+Replace **my-component** with your own component name. Then you can define your very first screen by adding the following inside the enclosing screens tags of the **widget/CommonScreens.xml** file:
+
+```
+<!-- Illustrative reconstruction; the original XML of this snippet did not survive extraction -->
+<screen name="test">
+    <section>
+        <widgets>
+            <decorator-screen name="shop-decorator" location="${parameters.mainDecoratorLocation}">
+                <decorator-section name="body">
+                    <platform-specific>
+                        <html><html-template location="component://mycomponent/webapp/mycomponent/test/test.ftl"/></html>
+                    </platform-specific>
+                </decorator-section>
+            </decorator-screen>
+        </widgets>
+    </section>
+</screen>
+```
+
+Shop screens are actually quite modular and consist of multiple elements ([widgets, actions, and decorators][15]). For the sake of simplicity, leave those as they are for now, and complete the new webpage by adding your first templating toolkit file. For that, create a new **webapp/mycomponent/test/test.ftl** file and add the following:
+
+```
+<@alert type="info">Success!</@alert>
+```
+
+![Custom screen][17]
+
+A custom screen.
+
+Open **** and marvel at your own accomplishment.
+
+#### Customizing the look and feel
+
+Modify the look and feel of the shop by creating your own theme. All themes can be found as components inside the themes folder. Run **./ant create-theme** to create your own.
+
+![theme component layout][19]
+
+A typical theme component layout.
+
+Here's a list of the most important directories and files:
+
+ * Theme configuration: **data/\*ThemeData.xml**
+ * Theme-specific wrapping HTML: **includes/\*.ftl**
+ * Templating toolkit HTML definitions: **includes/themeTemplate.ftl**
+ * CSS class definitions: **includes/themeStyles.ftl**
+ * CSS framework: **webapp/theme-title/**
+
+
+
+Take a quick look at the Metro theme in the toolkit; it uses the Foundation CSS framework and makes good use of it. Then set up your own theme inside the newly constructed **webapp/theme-title** directory and start developing. The Foundation-shop theme is a very simple shop-specific theme implementation that you can use as a basis for your own work.
+
+Voilà! You have set up your own online store and are ready to customize!
+
+![Finished Scipio ERP shop][21]
+
+A finished shop based on Scipio ERP.
+
+### What's next?
+
+Scipio ERP is a powerful framework that simplifies the development of complex e-commerce applications. For a more complete understanding, check out the project [documentation][7], try the [online demo][22], or [join the community][23].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/scipio-erp
+
+作者:[Paul Piper][a]
+选题:[lujun9972][b]
+译者:[laingke](https://github.com/laingke)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/madppiper
+[b]: https://github.com/lujun9972
+[1]: https://www.scipioerp.com
+[2]: https://ofbiz.apache.org/
+[3]: https://www.scipioerp.com/community/developer/freemarker-macros/
+[4]: https://www.scipioerp.com/community/developer/installation-configuration/configuration/#database-configuration
+[5]: /file/419711
+[6]: https://opensource.com/sites/default/files/uploads/setup_step5_sm.jpg (Setup wizard)
+[7]: https://www.scipioerp.com/community/developer/architecture/components/
+[8]: https://www.scipioerp.com/community/developer/entities/
+[9]: https://www.scipioerp.com/community/developer/services/
+[10]: https://www.scipioerp.com/community/developer/views-requests/
+[11]: https://www.scipioerp.com/community/developer/events-actions/
+[12]: /file/419716
+[13]: https://opensource.com/sites/default/files/uploads/component_structure.jpg (component directory structure)
+[14]: https://www.scipioerp.com/community/developer/views-requests/request-controller/
+[15]: https://www.scipioerp.com/community/developer/views-requests/screen-widgets-decorators/
+[16]: /file/419721
+[17]: https://opensource.com/sites/default/files/uploads/success_screen_sm.jpg (Custom screen)
+[18]: /file/419726
+[19]: https://opensource.com/sites/default/files/uploads/theme_structure.jpg (theme component layout)
+[20]: /file/419731
+[21]: https://opensource.com/sites/default/files/uploads/finished_shop_1_sm.jpg (Finished Scipio ERP shop)
+[22]: https://www.scipioerp.com/demo/
+[23]: https://forum.scipioerp.com/
diff --git a/translated/tech/20190823 The Linux kernel- Top 5 innovations.md b/translated/tech/20190823 The Linux kernel- Top 5 innovations.md
new file mode 100644
index 0000000000..cdf455f02a
--- /dev/null
+++ b/translated/tech/20190823 The Linux kernel- Top 5 innovations.md
@@ -0,0 +1,108 @@
+[#]: collector: (lujun9972)
+[#]: translator: (heguangzhi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Linux kernel: Top 5 innovations)
+[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+The Linux kernel: Top 5 innovations
+======
+Want to know what's truly innovative (and not just hyped) about the Linux kernel? Read on.
+![Penguin with green background][1]
+
+The word _innovation_ gets bandied about in the tech industry almost as much as _revolution_, so it can be difficult to differentiate hyperbole from something that's actually exciting. The Linux kernel has been called innovative, but then again it's also been called the biggest hack in modern computing, a monolith in a micro world.
+
+Marketing and buzzwords aside, Linux is arguably the most popular kernel of the open source world, and it has introduced some real game-changers over its nearly 30-year life span.
+
+### Cgroups (2.6.24)
+
+Back in 2007, Paul Menage and Rohit Seth got the esoteric [_control groups_ (cgroups)][2] feature added to the kernel (the current implementation of cgroups is a rewrite by Tejun Heo). This new technology was initially used as a way to ensure, essentially, quality of service for a specific set of tasks.
+
+For example, you could create a control group definition (cgroup) for all tasks associated with your web server, another cgroup for routine backups, and yet another for general operating system requirements. You could then control a percentage of resources for each cgroup, such that your OS and web server get the bulk of system resources while your backup processes have access to whatever is left.
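+
+To make that concrete, here is roughly what carving out a CPU cap for a web service looks like against the cgroup-v1 filesystem (the group name and process ID are made up for illustration):
+
+```
+# create a cgroup capped at half of one CPU (50ms of every 100ms period)
+$ sudo mkdir /sys/fs/cgroup/cpu/websvc
+$ echo 50000 | sudo tee /sys/fs/cgroup/cpu/websvc/cpu.cfs_quota_us
+# move the web server process (PID 1234 here) into the group
+$ echo 1234 | sudo tee /sys/fs/cgroup/cpu/websvc/tasks
+```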
+
+What cgroups are most famous for, though, is their role as the technology driving the cloud today: containers. In fact, cgroups were originally named [process containers][3]. It was no great surprise when they were adopted by projects like [LXC][4], [CoreOS][5], and Docker.
+
+The floodgates having opened, the term _containers_ justly became synonymous with Linux, and the concept of microservice-style, cloud-based "applications" quickly became the norm. These days, it's hard to get away from cgroups, they are so prevalent. Every large-scale infrastructure (and probably your laptop, if you run Linux) takes advantage of cgroups in a meaningful way, making your computing experience more manageable and more flexible than ever.
+
+For example, you might already have installed [Flathub][6] or [Flatpak][7] on your computer, or maybe you've started using [Kubernetes][8] and/or [OpenShift][9] at work. Regardless, if the term "container" is still hazy for you, you can gain a hands-on understanding of containers from [Behind the scenes with Linux containers][10].
+
+### LKMM (4.17)
+
+In 2018, the hard work of Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, and several others got merged into the mainline Linux kernel to provide a formal memory model. The Linux Kernel Memory [Consistency] Model (LKMM) subsystem is a set of tools describing the Linux memory coherency model, as well as producing test cases under that model.
+
+
+The more complex systems become in their physical design (more CPU cores added, caches and RAM growing, and so on), the harder it is for them to know which address space is needed by which CPU, and when. For example, if CPU0 needs to write data to a shared variable in memory and CPU1 needs to read that value, then CPU0 must write before CPU1 attempts to read. Similarly, if values are written to memory in one order, then there's an expectation that they are also read in that same order, regardless of which CPU or CPUs are doing the reading.
+
+Even on a single CPU, memory management requires a specific order. A simple operation like **x = y** requires the CPU to load the value of **y** from memory and then store that value in **x**. Placing the value stored in **y** into the **x** variable cannot occur before the CPU has read the value from memory. There are also address dependencies: **x[n] = 6** requires that **n** is loaded before the CPU can store the value of six.
+
+LKMM helps identify and trace these memory patterns in code. It does this in part with a tool called **herd**, which defines the constraints imposed by a memory model (in the form of logical formulas) and then enumerates all possible outcomes consistent with these constraints.
+
+### The low-latency patch (2.6.38)
+
+
+Long ago, in the days before 2011, if you wanted to do [multimedia work on Linux][11], you had to obtain a low-latency kernel. This mostly applied to [audio recording][12] while adding lots of real-time effects (such as singing into a microphone, adding reverb, and hearing your voice in your headphones with no noticeable delay). There were distributions, such as [Ubuntu Studio][13], that reliably provided such a kernel, so in practice it was no great obstacle, just a significant caveat for artists choosing a distribution.
+
+However, if you weren't using Ubuntu Studio, or you needed to update your kernel before your distribution got around to it, you had to go to the rt-patches web page, download the kernel patches, apply them to your kernel source code, compile, and install manually.
+
+And then, with the release of kernel version 2.6.38, this process was all over. The Linux kernel suddenly, as if by magic, had low-latency code built in by default (according to benchmarks, latency decreased by at least a factor of 10). No more downloading patches, no more compiling. Everything just worked, all because of a small 200-line patch written by Mike Galbraith.
+
+对于全世界的开源多媒体艺术家来说,这是一个游戏规则的改变。从2011年开始到2016年事情变得如此美好,我向自己做了一个挑战,要求[在树莓派v1(型号B)上建造一个数字音频工作站(DAW)][14],结果发现它运行得出奇地好。
+
+### RCU (2.5)
+
+RCU, or read-copy-update, is a system defined in computer science that allows multiple processor threads to read from shared memory. It does this by deferring updates while also marking them as updated, which ensures that readers see the latest content. Effectively, this means that reads happen concurrently with updates.
+
+A typical RCU cycle is a little like this:
+
+ 1. Remove pointers to the data so that other readers cannot reference it.
+ 2. Wait for readers to complete their critical sections.
+ 3. Reclaim the memory.
+
+Dividing the update stage into removal and reclamation phases means the updater performs the removal immediately while deferring reclamation until all active readers are complete (either by blocking them or by registering a callback to be invoked upon completion).
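+
+In kernel code, that cycle maps onto a small, well-known API. The sketch below is kernel-style C: it uses real in-kernel calls such as rcu_read_lock() and synchronize_rcu(), so it illustrates the pattern rather than being a standalone userspace program, and the config structure and its field are invented for the example.
+```
+/* Kernel-style sketch of the RCU cycle above (illustrative only). */
+#include <linux/rcupdate.h>
+#include <linux/slab.h>
+
+struct config {
+    int threshold;
+};
+
+static struct config __rcu *active_config;
+
+/* Reader: runs concurrently with updaters and never blocks. */
+int read_threshold(void)
+{
+    int t;
+
+    rcu_read_lock();                 /* enter read-side critical section */
+    t = rcu_dereference(active_config)->threshold;
+    rcu_read_unlock();               /* leave critical section */
+    return t;
+}
+
+/* Updater: publish a new version, then reclaim the old one. */
+void set_threshold(int threshold)
+{
+    struct config *new_cfg = kmalloc(sizeof(*new_cfg), GFP_KERNEL);
+    struct config *old_cfg;
+
+    if (!new_cfg)
+        return;
+    new_cfg->threshold = threshold;
+    old_cfg = rcu_dereference_protected(active_config, 1);
+    rcu_assign_pointer(active_config, new_cfg); /* 1: unhook old data  */
+    synchronize_rcu();                          /* 2: wait for readers */
+    kfree(old_cfg);                             /* 3: reclaim memory   */
+}
+```
+
+Note how the three list items above appear directly as the last three lines of the updater; the readers pay almost nothing for this guarantee, which is why RCU scales so well on read-mostly data.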
+
+Although the concept of read-copy-update was not invented for the Linux kernel, its implementation in Linux is a defining example of the technology.
+
+### Collaboration (0.01)
+
+The final answer to the question of what the Linux kernel innovated, and the most important one, is collaboration. Call it good timing, call it technical superiority, call it hackability, or just call it open source, but the Linux kernel and the many projects it enables are a shining example of collaboration and cooperation.
+
+And it goes well beyond the kernel. People from all walks of life have contributed to open source, arguably because of the Linux kernel. Linux was, and remains, a major force of [free software][15], inspiring people to bring their code, art, ideas, or just themselves to a global, productive, and diverse community of humans.
+
+### What's your favorite innovation?
+
+This list is biased toward my own interests: containers, non-uniform memory access (NUMA), and multimedia. I've surely left your favorite kernel innovation off the list. Tell me about it in the comments!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations
+
+Author: [Seth Kenlon][a]
+Topic selection: [lujun9972][b]
+Translator: [heguangzhi](https://github.com/heguangzhi)
+Proofreader: [校对者ID](https://github.com/校对者ID)
+
+This article was translated as an original work by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
+[2]: https://en.wikipedia.org/wiki/Cgroups
+[3]: https://lkml.org/lkml/2006/10/20/251
+[4]: https://linuxcontainers.org
+[5]: https://coreos.com/
+[6]: http://flathub.org
+[7]: http://flatpak.org
+[8]: http://kubernetes.io
+[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
+[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
+[11]: http://slackermedia.info
+[12]: https://opensource.com/article/17/6/qtractor-audio
+[13]: http://ubuntustudio.org
+[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
+[15]: http://fsf.org
diff --git a/translated/tech/20190906 How to put an HTML page on the internet.md b/translated/tech/20190906 How to put an HTML page on the internet.md
new file mode 100644
index 0000000000..61339a2c63
--- /dev/null
+++ b/translated/tech/20190906 How to put an HTML page on the internet.md
@@ -0,0 +1,70 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to put an HTML page on the internet)
+[#]: via: (https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+How to put an HTML page on the internet
+======
+
+One thing I love about the internet is how easy it is to put a static page on it. Someone asked me today how to do it, so I thought I'd quickly write it down!
+
+### It's just an HTML page
+
+All of my sites are just static HTML and CSS. My web design skills are relatively modest ( is the most complicated site I've developed on my own), so keeping all my sites relatively simple means I can make changes/fixes without it taking a lot of time.
+
+So in this article we'll take the simplest possible approach: just one HTML page.
+
+### The HTML page
+
+The website we're going to put on the internet is just one file, called `index.html`. You can find it at , a GitHub repository containing just that one file.
+
+The HTML file has some CSS in it to make it look a little less boring, partly copied from <https://example.com>.
+
+### 如何将 HTML 页面放在互联网上
+
+Here are the steps:
+
+ 1. Sign up for a [Neocities][1] account
+ 2. Copy the index.html into the index.html of your own Neocities site
+ 3. Done
+
+The index.html page above lives at [julia-example-website.neocities.com][2]; if you view source, you'll see that it's the same HTML as in the GitHub repository.
+
+I think this is probably the simplest way to put an HTML page on the internet (and it's a throwback to GeoCities, which is how I made my first website in 2003) :). I also like that Neocities (like [glitch][3], which I also love) makes it possible to experiment, learn, and have fun.
+
+### 其他选择
+
+This is by no means the only easy way; GitHub Pages, GitLab Pages, and Netlify will all automatically publish a site when you push to a Git repository, and they're all very easy to use (just connect them to your GitHub repository). I personally use the Git repository approach because having things not in Git makes me nervous: I like to know what changes I'm actually pushing to my pages. But if you just want to put an HTML/CSS site on the internet for the first time, I think Neocities is a really nice way to do it.
+
+If you're not just playing around and want to use the site for a real purpose, you'll probably want to buy a domain so you can change hosting providers in the future, but that's a little less simple.
+
+### This is a good starting point for learning HTML
+
+If you're familiar with editing files in Git and want to practice HTML/CSS, I think putting it on a website is a fun way to do it! I really like how simple this is: there's literally just one file, so there's nothing else fancy to understand.
+
+There are also lots of ways to make this more complicated/elaborate; this blog, for example, is actually generated with [Hugo][4], which produces a bunch of HTML files that then go on the internet, but it's always nice to start from the basics.
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/
+
+Author: [Julia Evans][a]
+Topic selection: [lujun9972][b]
+Translator: [geekpi](https://github.com/geekpi)
+Proofreader: [校对者ID](https://github.com/校对者ID)
+
+This article was translated as an original work by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://neocities.org/
+[2]: https://julia-example-website.neocities.org/
+[3]: https://glitch.com
+[4]: https://gohugo.io/
diff --git a/translated/tech/20190913 An introduction to Virtual Machine Manager.md b/translated/tech/20190913 An introduction to Virtual Machine Manager.md
new file mode 100644
index 0000000000..786efdb14b
--- /dev/null
+++ b/translated/tech/20190913 An introduction to Virtual Machine Manager.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (An introduction to Virtual Machine Manager)
+[#]: via: (https://opensource.com/article/19/9/introduction-virtual-machine-manager)
+[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
+
+An introduction to Virtual Machine Manager
+======
+Virt-manager provides a full range of options for Linux virtualization.
+![A person programming][1]
+
+In my [series of articles][2] about [GNOME Boxes][3], I explained how Linux users can quickly spin up virtual machines on their desktops. Boxes can create a virtual machine in a pinch when all you need is a simple configuration.
+
+However, if you need to configure more details in your virtual machine, you'll want a tool that provides comprehensive options for disks, network interface cards (NICs), and other hardware. That's where [Virtual Machine Manager][4] (virt-manager) comes in. If you don't see it in your application menu, you can install it from your package manager or on the command line:
+
+ * On Fedora: **sudo dnf install virt-manager**
+ * On Ubuntu: **sudo apt install virt-manager**
+
+
+
+Once installed, you can launch it from the application menu or by typing **virt-manager** on the command line.
+
+![Virtual Machine Manager's main screen][5]
+
+To demonstrate creating a virtual machine with virt-manager, I'll set up a Red Hat Enterprise Linux 8 virtual machine.
+
+To begin, click **File**, then **New Virtual Machine**. Virt-manager's developers have labeled each step (e.g., Step 1 of 5) to keep things simple. Click **Local install media** and **Forward**.
+
+![Step 1 virtual machine creation][6]
+
+On the next screen, select the ISO file of the operating system you want to install. (My RHEL 8 image is in my Downloads directory.) Virt-manager automatically detects the operating system.
+
+![Step 2 Choose the ISO File][7]
+
+In step 3, you can specify the virtual machine's memory and CPUs. The defaults are 1,024MB of memory and one CPU.
+
+![Step 3 Set CPU and Memory][8]
+
+I want to give RHEL plenty of room to run, and the hardware I'm using has room to spare, so I'll increase them to 4,096MB and two CPUs (respectively).
+
+The next step configures the virtual machine's storage; the default is a 10GB disk. (I'll keep this setting, but you can adjust it to your needs.) You can also choose an existing disk image or create one in a custom location.
+
+![Step 4 Configure VM Storage][9]
+
+Step 5 is where you name the virtual machine and click Finish. That is the equivalent of creating a virtual machine, or a Box in GNOME Boxes. Although it's technically the last step, you have several options (as the screenshot below shows). Since the advantage of virt-manager is the ability to customize a virtual machine, I'll check the box labeled **Customize configuration before install** before clicking **Finish**.
+
+Because I chose to customize the configuration, virt-manager opens a screen with a set of devices and settings. This is the fun part!
+
+Here you get another chance to name the virtual machine. In the list on the left, you can view details on various aspects, such as CPU, memory, disks, controllers, and many other items. For example, I can click **CPUs** to verify the change I made in step 3.
+
+![Changing the CPU count][10]
+
+I can also confirm the amount of memory I set.
+
+When running a VM as a server, I usually disable or remove the sound card. To do so, select **Sound** and click **Remove**, or right-click **Sound** and choose **Remove Hardware**.
+
+You can also add hardware with the **Add Hardware** button at the bottom. This brings up the **Add New Virtual Hardware** screen, where you can add additional storage devices, memory, sound cards, and more. It's like having access to a well-stocked (if virtual) computer hardware warehouse.
+
+![The Add New Hardware screen][11]
+
+Once you're happy with the VM's configuration, click **Begin Installation**, and the system will boot and begin installing the specified operating system from the ISO.
+
+![Begin installing the OS][12]
+
+Once complete, it reboots, and your new VM is ready to use.
+
+![Red Hat Enterprise Linux 8 running in VMM][13]
+
+Virtual Machine Manager is a powerful tool for desktop Linux users. It's open source and an excellent alternative to proprietary, closed virtualization products.
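+
+Everything the wizard does is also scriptable, because virt-manager is a front end for libvirt. As a rough sketch (not the application's own code), here is how a similar define-and-start operation looks through libvirt's C API; the domain XML is trimmed to a bare minimum, and the name, memory, and CPU values simply mirror the wizard choices above.
+```
+/* Sketch: define and start a VM through libvirt's C API.
+ * Build with: gcc vm.c -lvirt. The XML is a bare-bones example. */
+#include <stdio.h>
+#include <libvirt/libvirt.h>
+
+int main(void)
+{
+    virConnectPtr conn = virConnectOpen("qemu:///system");
+    if (!conn) {
+        fprintf(stderr, "failed to connect to the hypervisor\n");
+        return 1;
+    }
+
+    /* 4,096MB of memory and two vCPUs, as chosen in the wizard. */
+    const char *xml =
+        "<domain type='kvm'>"
+        "  <name>rhel8</name>"
+        "  <memory unit='MiB'>4096</memory>"
+        "  <vcpu>2</vcpu>"
+        "  <os><type arch='x86_64'>hvm</type></os>"
+        "</domain>";
+
+    virDomainPtr dom = virDomainDefineXML(conn, xml); /* persist the definition */
+    if (dom && virDomainCreate(dom) == 0)             /* boot the VM */
+        printf("VM started\n");
+
+    if (dom)
+        virDomainFree(dom);
+    virConnectClose(conn);
+    return 0;
+}
+```
+
+A real definition would also declare a disk and installation media in the XML; the point is simply that the GUI and the API drive the same machinery.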
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/introduction-virtual-machine-manager
+
+Author: [Alan Formy-Duval][a]
+Topic selection: [lujun9972][b]
+Translator: [geekpi](https://github.com/geekpi)
+Proofreader: [校对者ID](https://github.com/校对者ID)
+
+This article was translated as an original work by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
+
+[a]: https://opensource.com/users/alanfdoss
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb (A person programming)
+[2]: https://opensource.com/sitewide-search?search_api_views_fulltext=GNOME%20Box
+[3]: https://wiki.gnome.org/Apps/Boxes
+[4]: https://virt-manager.org/
+[5]: https://opensource.com/sites/default/files/1-vmm_main_0.png (Virtual Machine Manager's main screen)
+[6]: https://opensource.com/sites/default/files/2-vmm_step1_0.png (Step 1 virtual machine creation)
+[7]: https://opensource.com/sites/default/files/3-vmm_step2.png (Step 2 Choose the ISO File)
+[8]: https://opensource.com/sites/default/files/4-vmm_step3default.png (Step 3 Set CPU and Memory)
+[9]: https://opensource.com/sites/default/files/6-vmm_step4.png (Step 4 Configure VM Storage)
+[10]: https://opensource.com/sites/default/files/9-vmm_customizecpu.png (Changing the CPU count)
+[11]: https://opensource.com/sites/default/files/11-vmm_addnewhardware.png (The Add New Hardware screen)
+[12]: https://opensource.com/sites/default/files/12-vmm_rhelbegininstall.png
+[13]: https://opensource.com/sites/default/files/13-vmm_rhelinstalled_0.png (Red Hat Enterprise Linux 8 running in VMM)