Clear expired articles

@Starryi @MjSeven these two have also been withdrawn
This commit is contained in:
Xingyu Wang 2022-03-20 07:58:33 +08:00
parent 0079a9bd8f
commit f019bb97b7
7 changed files with 0 additions and 1871 deletions

View File

@ -1,160 +0,0 @@
[#]: subject: "Fedora 36 Release Date and New Features"
[#]: via: "https://news.itsfoss.com/fedora-36-release-date-features/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Fedora 36 Release Date and New Features
======
Fedora 36 is one of the [most anticipated releases of this year][1].
While we look forward to every major release, last year's [Fedora 35][2] set up some exciting expectations with GNOME 41 and a new KDE flavor (Kinoite).
Fret not; if you cannot wait for Fedora 36, I shall highlight the essential details about the release.
### Fedora 36 Release Date
As per the official schedule, the early release date for Fedora 36 beta is **March 15, 2022**. And, the delayed date (in case) for Fedora 36 beta is **March 22, 2022**.
Once the public beta testing is complete, the final release can be expected on **April 19, 2022**. In case of a delay, the release date will be pushed to **April 26, 2022**.
You should also note that Fedora Linux 34 will reach its end of life on **May 17, 2022**.
![][3]
You can try to get your hands on Fedora 36 now with a nightly build (linked at the bottom of this article), but with a few weeks to go for the beta release, you should wait it out.
### Fedora 36 Features
![][4]
As usual, Fedora 36 features the latest GNOME and other additions and improvements.
The key highlights include:
### 1\. GNOME 42
[GNOME 42][5] is an exciting upgrade with various visual and functional changes.
It also includes performance and visual tweaks, among other improvements. If you missed [GNOME 41 feature additions][6], you should also check that.
Of course, you should expect to find all the changes with Fedora 36. Here, I shall highlight those details using Fedora 36 (if you didn't catch up with GNOME 42).
### 2\. System-wide Dark Mode
![][7]
Fedora 36 enjoys the system-wide dark mode introduced with GNOME 42.
While we had dark mode implementations on other Linux distributions, GNOME 42 helped Fedora 36 become an attractive option for desktop users.
The dark mode blends in perfectly and gives a clean GNOME experience.
### 3\. New Wallpapers
Without a new wallpaper, every other improvement sounds dull.
So, the Fedora Design Team brings along a beautifully crafted wallpaper in Fedora 36: an interesting illustration of a painted landscape.
![][8]
The default wallpaper has day and night variants. The daytime wallpaper is shown above; here's the artwork for the night:
![][8]
Both look fantastic and soothing to the eyes.
### 4\. Linux Kernel 5.17
Fedora 36 is known to offer the latest Linux kernel releases. As of now, it is running a release candidate of the upcoming Linux kernel 5.17.
With the final Fedora 36 release, you should expect the stable version of Linux Kernel 5.17.
### 5\. Dark/Light Wallpapers
Along with the new default wallpapers for Fedora 36, it also features a dark/light mode wallpaper collection introduced with GNOME 42.
![][9]
As of now, testing the Fedora 36 Workstation (pre-release version), I can only find one of the wallpapers, not the whole collection that you get with GNOME 42.
So, you can probably expect more additions with Fedora 36 beta release.
You can select the wallpapers, with their available dark/light variants, from the Appearance settings.
### 6\. Screenshot User Interface and Native Screen Recording
The new screenshot user interface introduced with GNOME 42 is a fantastic addition. Also, with just a toggle, you can start recording your screen!
![][10]
And you can see it in action with Fedora 36, working perfectly fine.
### 7\. Desktop Environment Updates
For obvious reasons, you should expect the latest desktop environments with Fedora 36.
The bare minimum should be GNOME 42, [KDE Plasma 5.24][11], and Xfce 4.16.
In addition to that, LXQt has been updated to 1.0.0.
### 8\. Important Technical Changes
Along with the visual changes and the Linux Kernel upgrade, there are various technical improvements with Fedora 36.
Some of them worth mentioning include:
* Updated the system OpenJDK package from Java 11 to Java 17.
* Introduction of the upcoming Golang 1.18.
* Switch to Noto fonts as the default for various languages to ensure consistency in text rendering.
* Recommended packages will no longer be pulled in on future automatic upgrades if you do not already have them installed.
* GNU Toolchain update to gcc 12 and glibc 2.35.
* Fixes for upgradability issues in some cases.
* Default Wayland session with the NVIDIA proprietary driver.
* Updated PHP stack to the latest 8.1.x.
* The RPM database will be relocated to the /usr directory; it currently resides in /var.
For more technical details, you can refer to the [official changeset][12]. If you want to download the pre-release version, you can grab the ISO from the button below.
[Fedora 36 (Pre-Release)][13]
### Wrapping Up
Fedora 36 is going to be an exciting release.
When it releases, I'm looking forward to trying the Wayland session with NVIDIA's proprietary driver on Fedora 36 Workstation.
_What are you looking forward to in this release? Let me know in the comments down below._
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/fedora-36-release-date-features/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/linux-distro-releases-2022/
[2]: https://news.itsfoss.com/fedora-35-release/
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQzOSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ2OCIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[5]: https://news.itsfoss.com/gnome-42-features/
[6]: https://news.itsfoss.com/gnome-41-release/
[7]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjM2OCIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[8]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU4NSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[9]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU1MyIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[10]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjcxMCIgd2lkdGg9IjczOCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[11]: https://news.itsfoss.com/kde-plasma-5-24-lts-release/
[12]: https://fedoraproject.org/wiki/Releases/36/ChangeSet
[13]: https://kojipkgs.fedoraproject.org/compose/branched/latest-Fedora-36/compose/Workstation/x86_64/iso/

View File

@ -1,706 +0,0 @@
[#]: subject: "Writing Advanced Web Applications with Go"
[#]: via: "https://www.jtolio.com/2017/01/writing-advanced-web-applications-with-go"
[#]: author: "jtolio.com https://www.jtolio.com/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Writing Advanced Web Applications with Go
======
Web development in many programming environments often requires subscribing to some full framework ethos. With [Ruby][1], it's usually [Rails][2] but could be [Sinatra][3] or something else. With [Python][4], it's often [Django][5] or [Flask][6]. With [Go][7], it's…
If you spend some time in Go communities like the [Go mailing list][8] or the [Go subreddit][9], you'll find Go newcomers frequently wondering what web framework is best to use. [There][10] [are][11] [quite][12] [a][13] [few][14] [Go][15] [frameworks][16] ([and][17] [then][18] [some][19]), so which one is best seems like a reasonable question. Without fail, though, the strong recommendation of the Go community is to [avoid web frameworks entirely][20] and just stick with the standard library as long as possible. Here's [an example from the Go mailing list][21] and here's [one from the subreddit][22].
It's not bad advice! The Go standard library is very rich and flexible, much more so than in many other languages, and designing a web application in Go with just the standard library is definitely a good choice.
Even when these Go frameworks call themselves minimalistic, they can't seem to avoid using a different request handler interface than the standard library's default [http.Handler][23], and I think this is the biggest source of angst about why frameworks should be avoided. If everyone standardized on [http.Handler][23], then dang, all sorts of things would be interoperable!
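For reference, the interface everyone would be standardizing on is tiny. Here is the complete definition of [http.Handler][23] in the standard library:
```
type Handler interface {
    ServeHTTP(ResponseWriter, *Request)
}
```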
Before Go 1.7, it made some sense to give in and use a different interface for handling HTTP requests. But now that [http.Request][24] has the [Context][25] and [WithContext][26] methods, there truly isn't a good reason any longer.
I've done a fair share of web development in Go and I'm here to share with you both some standard library development patterns I've learned and some code I've found myself frequently needing. The code I'm sharing is not meant to replace the standard library, but to augment it.
Overall, if this blog post feels like it's predominantly plugging various little standalone libraries from my [Webhelp non-framework][27], that's because it is. It's okay, they're little standalone libraries. Only use the ones you want!
If you're new to Go web development, I suggest reading the Go documentation's [Writing Web Applications][28] article first.
### Middleware
A frequent design pattern for server-side web development is the concept of _middleware_, where some portion of the request handler wraps some other portion of the request handler and does some preprocessing or routing or something. This is a big component of how [Express][29] is organized on [Node][30], and how Express middleware and [Negroni][17] middleware work is almost line-for-line identical in design.
Good use cases for middleware are things such as:
* making sure a user is logged in, redirecting if not,
* making sure the request came over HTTPS,
* making sure a session is set up and loaded from a session database,
* making sure we logged information before and after the request was handled,
* making sure the request was routed to the right handler,
* and so on.
Composing your web app as essentially a chain of middleware handlers is a very powerful and flexible approach. It allows you to avoid a lot of [cross-cutting concerns][31] and have your code factored in very elegant and easy-to-maintain ways. By wrapping a set of handlers with middleware that ensures a user is logged in prior to actually attempting to handle the request, the individual handlers no longer need mistake-prone copy-and-pasted code to ensure the same thing.
So, middleware is good. However, if Negroni or other frameworks are any indication, you'd think the standard library's `http.Handler` isn't up to the challenge. Negroni adds its own `negroni.Handler` just for the sake of making middleware easier. There's no reason for this.
Here is a full middleware implementation for ensuring a user is logged in, assuming a `GetUser(*http.Request)` function but otherwise just using the standard library:
```
func RequireUser(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        user, err := GetUser(req)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        if user == nil {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        h.ServeHTTP(w, req)
    })
}
```
Here's how it's used (just wrap another handler!):
```
func main() {
http.ListenAndServe(":8080", RequireUser(http.HandlerFunc(myHandler)))
}
```
Express, Negroni, and other frameworks expect this kind of signature for a middleware-supporting handler:
```
type Handler interface {
    // don't do this!
    ServeHTTP(rw http.ResponseWriter, req *http.Request, next http.HandlerFunc)
}
```
There's really no reason for adding the `next` argument - it reduces cross-library compatibility. So I say, don't use `negroni.Handler` (or similar). Just use `http.Handler`!
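Sticking with `http.Handler` also keeps composition trivial: middleware composes by plain nesting, and if the nesting gets deep, a tiny helper is all you need. Here's a hypothetical `Chain` helper (my sketch, not part of any library mentioned in this post):
```
// Chain wraps h in the given middleware, with the first middleware
// listed becoming the outermost wrapper.
func Chain(h http.Handler, middleware ...func(http.Handler) http.Handler) http.Handler {
    for i := len(middleware) - 1; i >= 0; i-- {
        h = middleware[i](h)
    }
    return h
}
```
With that, `Chain(http.HandlerFunc(myHandler), RequireUser)` reads top-down, and every piece involved is still a plain `http.Handler`.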
### Composability
Hopefully I've sold you on middleware as a good design philosophy.
Probably the most commonly used type of middleware is request routing, or muxing (seems like we should call this demuxing, but what do I know). Some frameworks are almost solely focused on request routing. [gorilla/mux][32] seems more popular than any other part of the [Gorilla][33] library. I think the reason for this is that even though the Go standard library is completely full featured and has a good [ServeMux][34] implementation, it doesn't make the right thing the default.
So! Let's talk about request routing and consider the following problem. You, web developer extraordinaire, want to serve some HTML from your web server at `/hello/` but also want to serve some static assets from `/static/`. Let's take a quick stab.
```
package main

import (
    "net/http"
)

func hello(w http.ResponseWriter, req *http.Request) {
    w.Write([]byte("hello, world!"))
}

func main() {
    mux := http.NewServeMux()
    mux.Handle("/hello/", http.HandlerFunc(hello))
    mux.Handle("/static/", http.FileServer(http.Dir("./static-assets")))
    http.ListenAndServe(":8080", mux)
}
```
If you visit `http://localhost:8080/hello/`, you'll be rewarded with a friendly “hello, world!” message.
If you visit `http://localhost:8080/static/` on the other hand (assuming you have a folder of static assets in `./static-assets`), you'll be surprised and frustrated. This code tries to find the source content for the request `/static/my-file` at `./static-assets/static/my-file`! There's an extra `/static` in there!
Okay, so this is why `http.StripPrefix` exists. Let's fix it.
```
mux.Handle("/static/", http.StripPrefix("/static",
    http.FileServer(http.Dir("./static-assets"))))
```
`mux.Handle` combined with `http.StripPrefix` is such a common pattern that I think it should be the default. Whenever a request router processes a certain amount of URL elements, it should strip them off the request so the wrapped `http.Handler` doesn't need to know its absolute URL and only needs to be concerned with its relative one.
In [Russ Cox][35]'s recent [TiddlyWeb backend][36], I would argue that every time `strings.TrimPrefix` is needed to remove the full URL from the handler's incoming path arguments, it is an unnecessary cross-cutting concern, unfortunately imposed by `http.ServeMux`. (An example is [line 201 in tiddly.go][37].)
I'd much rather have the default `mux` behavior work more like a directory of registered elements that by default strips off the ancestor directory before handing the request to the next middleware handler. It's much more composable. To this end, I've written a simple muxer that works in this fashion called [whmux.Dir][38]. It is essentially `http.ServeMux` and `http.StripPrefix` combined. Here's the previous example reworked to use it:
```
package main

import (
    "net/http"

    "gopkg.in/webhelp.v1/whmux"
)

func hello(w http.ResponseWriter, req *http.Request) {
    w.Write([]byte("hello, world!"))
}

func main() {
    mux := whmux.Dir{
        "hello":  http.HandlerFunc(hello),
        "static": http.FileServer(http.Dir("./static-assets")),
    }

    http.ListenAndServe(":8080", mux)
}
```
There are other useful mux implementations inside the [whmux][39] package that demultiplex on various aspects of the request path, request method, request host, or pull arguments out of the request and place them into the context, such as a [whmux.IntArg][40] or [whmux.StringArg][41]. This brings us to [contexts][42].
### Contexts
Request contexts are a recent addition to the Go 1.7 standard library, but the idea of [contexts has been around since mid-2014][43]. As of Go 1.7, they were added to the standard library ([“context”][42]), but are available for older Go releases in the original location ([“golang.org/x/net/context”][44]).
First, here's the definition of the `context.Context` type that `(*http.Request).Context()` returns:
```
type Context interface {
    Done() <-chan struct{}
    Err() error
    Deadline() (deadline time.Time, ok bool)
    Value(key interface{}) interface{}
}
```
Talking about `Done()`, `Err()`, and `Deadline()` would be enough for an entirely different blog post, so I'm going to ignore them at least for now and focus on `Value(interface{})`.
As a motivating problem, let's say that the `GetUser(*http.Request)` function we assumed earlier is expensive, and we only want to call it once per request. We certainly don't want to call it once to check that a user is logged in, and then again when we actually need the `*User` value. With `(*http.Request).WithContext` and `context.WithValue`, we can pass the `*User` down to the next middleware precomputed!
Here's the new middleware:
```
type userKey int
func RequireUser(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
user, err := GetUser(req)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
if user == nil {
http.Error(w, "unauthorized", http.StatusUnauthorized)
return
}
ctx := r.Context()
ctx = context.WithValue(ctx, userKey(0), user)
h.ServeHTTP(w, req.WithContext(ctx))
})
}
```
Now, handlers that are protected by this `RequireUser` handler can load the previously computed `*User` value like this:
```
if user, ok := req.Context().Value(userKey(0)).(*User); ok {
    // there's a valid user!
}
```
Contexts allow us to pass optional values to handlers down the chain in a way that is relatively type-safe and flexible. None of the above context logic requires anything outside of the standard library.
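A related idiom worth copying (a sketch of common practice, not something these libraries require; it assumes the `context` package is imported) is to hide the key and the type assertion behind a pair of small helpers, so the rest of your code never touches `userKey` directly:
```
// WithUser returns a copy of ctx that carries the given user.
func WithUser(ctx context.Context, user *User) context.Context {
    return context.WithValue(ctx, userKey(0), user)
}

// UserFromContext extracts the user previously stored by WithUser, if any.
func UserFromContext(ctx context.Context) (*User, bool) {
    user, ok := ctx.Value(userKey(0)).(*User)
    return user, ok
}
```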
#### Aside about context keys
There was a curious piece of code in the above example. At the top, we defined a `type userKey int`, and then always used it as `userKey(0)`.
One of the possible problems with contexts is that the `Value()` interface lends itself to a global namespace where you can stomp on other context users and use conflicting key names. Above, we used `type userKey` because it's an unexported type in your package. It will never compare equal (without a cast) to any other type, including `int`, in Go. This gives us a way to namespace keys to your package, even though the `Value()` method is still a sort of global namespace.
Because the need for this is so common, the `webhelp` package defines a [GenSym()][45] helper that will create a brand new, never-before-seen, unique value for use as a context key.
If we used [GenSym()][45], then `type userKey int` would become `var userKey = webhelp.GenSym()` and `userKey(0)` would simply become `userKey`.
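If you're wondering how a helper like that can work, the trick is that every allocation has a distinct address, and interface values holding pointers only compare equal when the pointers match. A minimal sketch (not webhelp's actual implementation) could be as small as:
```
// GenSym returns a fresh, never-before-seen value suitable for use as a
// context key. Each call allocates a new pointer, and distinct pointers
// never compare equal, so keys from separate calls can't collide.
func GenSym() interface{} {
    return new(int)
}
```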
#### Back to whmux.StringArg
Armed with this new context behavior, we can now present a `whmux.StringArg` example:
```
package main

import (
    "fmt"
    "net/http"

    "gopkg.in/webhelp.v1/whmux"
)

var (
    pageName = whmux.NewStringArg()
)

func page(w http.ResponseWriter, req *http.Request) {
    name := pageName.Get(req.Context())
    fmt.Fprintf(w, "Welcome to %s", name)
}

func main() {
    // pageName.Shift pulls the next /-delimited string out of the request's
    // URL.Path and puts it into the context instead.
    pageHandler := pageName.Shift(http.HandlerFunc(page))

    http.ListenAndServe(":8080", whmux.Dir{
        "wiki": pageHandler,
    })
}
```
### Pre-Go-1.7 support
Contexts let you do some pretty cool things. But let's say you're stuck with something before Go 1.7 (for instance, App Engine is currently Go 1.6).
That's okay! I've backported all of the neat new context features to Go 1.6 and earlier in a forwards-compatible way!
With the [whcompat][46] package, `req.Context()` becomes `whcompat.Context(req)`, and `req.WithContext(ctx)` becomes `whcompat.WithContext(req, ctx)`. The `whcompat` versions work with all releases of Go. Yay!
There's a bit of unpleasantness behind the scenes to make this happen. Specifically, for pre-1.7 builds, a global map indexed by `req.URL` is kept, and a finalizer is installed on `req` to clean up. So don't change what `req.URL` points to and this will work fine. In practice it's not a problem.
`whcompat` adds additional backwards-compatibility helpers. In Go 1.7 and on, the context's `Done()` channel is closed (and `Err()` is set) whenever the request is done processing. If you want this behavior in Go 1.6 and earlier, just use the [whcompat.DoneNotify][47] middleware.
In Go 1.8 and on, the context's `Done()` channel is closed when the client goes away, even if the request hasn't completed. If you want this behavior in Go 1.7 and earlier, just use the [whcompat.CloseNotify][48] middleware, though beware that it costs an extra goroutine.
### Error handling
How you handle errors can be another cross-cutting concern, but with good application of context and middleware, it too can be beautifully cleaned up so that the responsibilities lie in the correct place.
Problem statement: your `RequireUser` middleware needs to handle an authentication error differently between your HTML endpoints and your JSON API endpoints. You want to use `RequireUser` for both types of endpoints, but with your HTML endpoints you want to return a user-friendly error page, and with your JSON API endpoints you want to return an appropriate JSON error state.
In my opinion, the right thing to do is to have contextual error handlers, and luckily, we have a context for contextual information!
First, we need an error handler interface.
```
type ErrHandler interface {
    HandleError(w http.ResponseWriter, req *http.Request, err error)
}
```
Next, let's make a middleware that registers the error handler in the context:
```
var errHandler = webhelp.GenSym() // see the aside about context keys

func HandleErrWith(eh ErrHandler, h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        ctx := context.WithValue(whcompat.Context(req), errHandler, eh)
        h.ServeHTTP(w, whcompat.WithContext(req, ctx))
    })
}
```
Last, let's make a function that will use the registered error handler for errors:
```
func HandleErr(w http.ResponseWriter, req *http.Request, err error) {
    if handler, ok := whcompat.Context(req).Value(errHandler).(ErrHandler); ok {
        handler.HandleError(w, req, err)
        return
    }
    log.Printf("error: %v", err)
    http.Error(w, "internal server error", http.StatusInternalServerError)
}
```
Now, as long as everything uses `HandleErr` to handle errors, our JSON API can handle errors with JSON responses, and our HTML endpoints can handle errors with HTML responses.
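To make the contract concrete, here's a minimal sketch (mine, purely illustrative) of an `ErrHandler` that renders errors as JSON, assuming `encoding/json` is imported:
```
type jsonErrHandler struct{}

// HandleError satisfies the ErrHandler interface defined above by
// rendering the error as a small JSON object.
func (jsonErrHandler) HandleError(w http.ResponseWriter, req *http.Request, err error) {
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusInternalServerError) // a real handler would pick a status per error
    json.NewEncoder(w).Encode(map[string]string{"err": err.Error()})
}
```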
Of course, the [wherr][49] package implements this all for you, and the [whjson][49] package even implements a friendly JSON API error handler.
Here's how you might use it:
```
var userKey = webhelp.GenSym()

func RequireUser(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        user, err := GetUser(req)
        if err != nil {
            wherr.Handle(w, req, wherr.InternalServerError.New("failed to get user"))
            return
        }
        if user == nil {
            wherr.Handle(w, req, wherr.Unauthorized.New("no user found"))
            return
        }
        ctx := req.Context()
        ctx = context.WithValue(ctx, userKey, user)
        h.ServeHTTP(w, req.WithContext(ctx))
    })
}

func userpage(w http.ResponseWriter, req *http.Request) {
    user := req.Context().Value(userKey).(*User)
    w.Header().Set("Content-Type", "text/html")
    userpageTmpl.Execute(w, user)
}

func username(w http.ResponseWriter, req *http.Request) {
    user := req.Context().Value(userKey).(*User)
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(map[string]interface{}{"user": user})
}

func main() {
    http.ListenAndServe(":8080", whmux.Dir{
        "api": wherr.HandleWith(whjson.ErrHandler,
            RequireUser(whmux.Dir{
                "username": http.HandlerFunc(username),
            })),
        "user": RequireUser(http.HandlerFunc(userpage)),
    })
}
```
#### Aside about the spacemonkeygo/errors package
The default [wherr.Handle][50] implementation understands all of the [error classes defined in the wherr top level package][51].
These error classes are implemented using the [spacemonkeygo/errors][52] library and the [spacemonkeygo/errors/errhttp][53] extensions. You dont have to use this library or these errors, but the benefit is that your error instances can be extended to include HTTP status code messages and information, which once again, provides for a nice elimination of cross-cutting concerns in your error handling logic.
See the [spacemonkeygo/errors][52] package for more details.
_**Update 2018-04-19:** After a few years of use, my friend condensed some lessons we learned and the best parts of `spacemonkeygo/errors` into a new, more concise, better library, over at [github.com/zeebo/errs][54]. Consider using that instead!_
### Sessions
Go's standard library has great support for cookies, but cookies by themselves aren't usually what a developer thinks of when she thinks about sessions. Cookies are unencrypted, unauthenticated, and readable by the user, and perhaps you don't want that with your session data.
Further, sessions can be stored in cookies, but could also be stored in a database to provide features like session revocation and querying. There are lots of potential details in the implementation of sessions.
Request handlers, however, probably don't care too much about the implementation details of the session. Request handlers usually just want a bucket of keys and values they can store safely and securely.
The [whsess][55] package implements middleware for registering an arbitrary session store (a default cookie-based session store is provided), and implements helpers for retrieving and saving new values into the session.
The default cookie-based session store implements encryption and authentication via the excellent [nacl/secretbox][56] package.
Usage is like this:
```
func handler(w http.ResponseWriter, req *http.Request) {
    ctx := whcompat.Context(req)
    sess, err := whsess.Load(ctx, "namespace")
    if err != nil {
        wherr.Handle(w, req, err)
        return
    }
    if loggedIn, _ := sess.Values["logged_in"].(bool); loggedIn {
        views, _ := sess.Values["views"].(int64)
        sess.Values["views"] = views + 1
        sess.Save(w)
    }
}

func main() {
    http.ListenAndServe(":8080", whsess.HandlerWithStore(
        whsess.NewCookieStore(secret), http.HandlerFunc(handler)))
}
```
### Logging
The Go standard library by default doesn't log incoming requests, outgoing responses, or even just what port the HTTP server is listening on.
The [whlog][57] package implements all three. The [whlog.LogRequests][58] middleware will log requests as they start. The [whlog.LogResponses][59] middleware will log requests as they end, along with status code and timing information. [whlog.ListenAndServe][60] will log the address the server ultimately listens on (if you specify “:0” as your address, a port will be randomly chosen, and [whlog.ListenAndServe][60] will log it).
[whlog.LogResponses][59] deserves special mention for how it does what it does. It uses the [whmon][61] package to instrument the outgoing `http.ResponseWriter` to keep track of response information.
Usage is like this:
```
func main() {
whlog.ListenAndServe(":8080", whlog.LogResponses(whlog.Default, handler))
}
```
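If you're curious how [whmon][61] can observe the response, the underlying trick (sketched here from first principles, not whmon's exact code) is to wrap the `http.ResponseWriter` before handing it to the next handler:
```
// statusRecorder wraps an http.ResponseWriter and remembers the status
// code and body size so they can be logged after the handler returns.
type statusRecorder struct {
    http.ResponseWriter
    status int
    bytes  int64
}

func (r *statusRecorder) WriteHeader(code int) {
    r.status = code
    r.ResponseWriter.WriteHeader(code)
}

func (r *statusRecorder) Write(p []byte) (int, error) {
    if r.status == 0 {
        r.status = http.StatusOK // a Write without WriteHeader implies 200
    }
    n, err := r.ResponseWriter.Write(p)
    r.bytes += int64(n)
    return n, err
}
```
A logging middleware can then pass `&statusRecorder{ResponseWriter: w}` down the chain and log the recorded fields afterward.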
#### App Engine logging
App Engine logging is unconventional crazytown. The standard library logger doesn't work by default on App Engine, because App Engine logs _require_ the request context. This is unfortunate for libraries that don't necessarily run on App Engine all the time, as their logging information doesn't make it to the App Engine request-specific logger.
Unbelievably, this is fixable with [whgls][62], which uses my terrible, terrible (but recently improved) [Goroutine-local storage library][63] to store the request context on the current stack, register a new log output, and fix logging so standard library logging works with App Engine again.
### Template handling
Go's standard library [html/template][64] package is excellent, but you'll be unsurprised to find there are a few tasks I do with it so commonly that I've written additional support code.
The [whtmpl][65] package really does two things. First, it provides a number of useful helper methods for use within templates, and second, it takes some friction out of managing a large number of templates.
When writing templates, one thing you can do is call out to other registered templates for small values. A good example might be some sort of list element. You can have a template that renders the list element, and then your template that renders your list can use the list element template in turn.
Use of another template within a template might look like this:
```
<ul>
  {{ range .List }}
    {{ template "list_element" . }}
  {{ end }}
</ul>
```
You're now rendering the `list_element` template with the list element from `.List`. But what if you want to also pass the current user `.User`? Unfortunately, you can only pass one argument from one template to another. If you have two arguments you want to pass to another template, with the standard library, you're out of luck.
The [whtmpl][65] package adds three helper functions to aid you here, `makepair`, `makemap`, and `makeslice` (more docs under the [whtmpl.Collection][66] type). `makepair` is the simplest. It takes two arguments and constructs a [whtmpl.Pair][67]. Fixing our example above would look like this now:
```
<ul>
  {{ $user := .User }}
  {{ range .List }}
    {{ template "list_element" (makepair . $user) }}
  {{ end }}
</ul>
```
The second thing [whtmpl][65] does is make defining lots of templates easy, by optionally automatically naming templates after the name of the file the template is defined in.
For example, say you have three files.
Here's `pkg.go`:
```
package views

import "gopkg.in/webhelp.v1/whtmpl"

var Templates = whtmpl.NewCollection()
```
Here's `landing.go`:
```
package views

var _ = Templates.MustParse(`{{ template "header" . }}
<h1>Landing!</h1>`)
```
And here's `header.go`:
```
package views

var _ = Templates.MustParse(`<title>My website!</title>`)
```
Now, you can import your new `views` package and render the `landing` template this easily:
```
func handler(w http.ResponseWriter, req *http.Request) {
    views.Templates.Render(w, req, "landing", map[string]interface{}{})
}
```
### User authentication
I've written two Webhelp-style authentication libraries that I end up using frequently.
The first is an OAuth2 library, [whoauth2][68]. I've written up [an example application that authenticates with Google, Facebook, and Github][69].
The second, [whgoth][70], is a wrapper around [markbates/goth][71]. My portion isn't quite complete yet (some fixes are still necessary for optional App Engine support), but it will support more non-OAuth2 authentication sources (like Twitter) when it is done.
### Route listing
Surprise! If you've used [webhelp][27]-based handlers and middleware for your whole app, you automatically get route listing for free, via the [whroute][72] package.
My web serving code's `main` method often has a form like this:
```
switch flag.Arg(0) {
case "serve":
    panic(whlog.ListenAndServe(*listenAddr, routes))
case "routes":
    whroute.PrintRoutes(os.Stdout, routes)
default:
    fmt.Printf("Usage: %s <serve|routes>\n", os.Args[0])
}
```
Here's some example output:
```
GET /auth/_cb/
GET /auth/login/
GET /auth/logout/
GET /
GET /account/apikeys/
POST /account/apikeys/
GET /project/<int>/
GET /project/<int>/control/<int>/
POST /project/<int>/control/<int>/sample/
GET /project/<int>/control/
  Redirect: f(req)
POST /project/<int>/control/
POST /project/<int>/control_named/<string>/sample/
GET /project/<int>/control_named/
  Redirect: f(req)
GET /project/<int>/sample/<int>/
GET /project/<int>/sample/<int>/similar[/<*>]
GET /project/<int>/sample/
  Redirect: f(req)
POST /project/<int>/search/
GET /project/
  Redirect: /
POST /project/
```
### Other little things
[webhelp][27] has a number of other subpackages:
* [whparse][73] assists in parsing optional request arguments.
* [whredir][74] provides some handlers and helper methods for doing redirects in various cases.
* [whcache][75] creates request-specific mutable storage for caching various computations and database-loaded data. Mutability helps helper functions that aren't used as middleware share data.
* [whfatal][76] uses panics to simplify early request handling termination. Probably avoid this package unless you want to anger other Go developers.
### Summary
Designing your web project as a collection of composable middlewares goes quite a long way to simplify your code design, eliminate cross-cutting concerns, and create a more flexible development environment. Use my [webhelp][27] package if it helps you.
Or don't! Whatever! It's still a free country last I checked.
#### Update
Peter Kieltyka points me to his [Chi framework][77], which actually does seem to do the right things with respect to middleware, handlers, and contexts - certainly much more so than all the other frameworks I've seen. So, shoutout to Peter and the team at Pressly!
--------------------------------------------------------------------------------
via: https://www.jtolio.com/2017/01/writing-advanced-web-applications-with-go
Author: [jtolio.com][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://www.jtolio.com/
[b]: https://github.com/lujun9972
[1]: https://www.ruby-lang.org/
[2]: http://rubyonrails.org/
[3]: http://www.sinatrarb.com/
[4]: https://www.python.org/
[5]: https://www.djangoproject.com/
[6]: http://flask.pocoo.org/
[7]: https://golang.org/
[8]: https://groups.google.com/d/forum/golang-nuts
[9]: https://www.reddit.com/r/golang/
[10]: https://revel.github.io/
[11]: https://gin-gonic.github.io/gin/
[12]: http://iris-go.com/
[13]: https://beego.me/
[14]: https://go-macaron.com/
[15]: https://github.com/go-martini/martini
[16]: https://github.com/gocraft/web
[17]: https://github.com/urfave/negroni
[18]: https://godoc.org/goji.io
[19]: https://echo.labstack.com/
[20]: https://medium.com/code-zen/why-i-don-t-use-go-web-frameworks-1087e1facfa4
[21]: https://groups.google.com/forum/#!topic/golang-nuts/R_lqsTTBh6I
[22]: https://www.reddit.com/r/golang/comments/1yh6gm/new_to_go_trying_to_select_web_framework/
[23]: https://golang.org/pkg/net/http/#Handler
[24]: https://golang.org/pkg/net/http/#Request
[25]: https://golang.org/pkg/net/http/#Request.Context
[26]: https://golang.org/pkg/net/http/#Request.WithContext
[27]: https://godoc.org/gopkg.in/webhelp.v1
[28]: https://golang.org/doc/articles/wiki/
[29]: https://expressjs.com/
[30]: https://nodejs.org/en/
[31]: https://en.wikipedia.org/wiki/Cross-cutting_concern
[32]: https://github.com/gorilla/mux
[33]: https://github.com/gorilla/
[34]: https://golang.org/pkg/net/http/#ServeMux
[35]: https://swtch.com/~rsc/
[36]: https://github.com/rsc/tiddly
[37]: https://github.com/rsc/tiddly/blob/8f9145ac183e374eb95d90a73be4d5f38534ec47/tiddly.go#L201
[38]: https://godoc.org/gopkg.in/webhelp.v1/whmux#Dir
[39]: https://godoc.org/gopkg.in/webhelp.v1/whmux
[40]: https://godoc.org/gopkg.in/webhelp.v1/whmux#IntArg
[41]: https://godoc.org/gopkg.in/webhelp.v1/whmux#StringArg
[42]: https://golang.org/pkg/context/
[43]: https://blog.golang.org/context
[44]: https://godoc.org/golang.org/x/net/context
[45]: https://godoc.org/gopkg.in/webhelp.v1#GenSym
[46]: https://godoc.org/gopkg.in/webhelp.v1/whcompat
[47]: https://godoc.org/gopkg.in/webhelp.v1/whcompat#DoneNotify
[48]: https://godoc.org/gopkg.in/webhelp.v1/whcompat#CloseNotify
[49]: https://godoc.org/gopkg.in/webhelp.v1/wherr
[50]: https://godoc.org/gopkg.in/webhelp.v1/wherr#Handle
[51]: https://godoc.org/gopkg.in/webhelp.v1/wherr#pkg-variables
[52]: https://godoc.org/github.com/spacemonkeygo/errors
[53]: https://godoc.org/github.com/spacemonkeygo/errors/errhttp
[54]: https://github.com/zeebo/errs
[55]: https://godoc.org/gopkg.in/webhelp.v1/whsess
[56]: https://godoc.org/golang.org/x/crypto/nacl/secretbox
[57]: https://godoc.org/gopkg.in/webhelp.v1/whlog
[58]: https://godoc.org/gopkg.in/webhelp.v1/whlog#LogRequests
[59]: https://godoc.org/gopkg.in/webhelp.v1/whlog#LogResponses
[60]: https://godoc.org/gopkg.in/webhelp.v1/whlog#ListenAndServe
[61]: https://godoc.org/gopkg.in/webhelp.v1/whmon
[62]: https://godoc.org/gopkg.in/webhelp.v1/whgls
[63]: https://godoc.org/github.com/jtolds/gls
[64]: https://golang.org/pkg/html/template/
[65]: https://godoc.org/gopkg.in/webhelp.v1/whtmpl
[66]: https://godoc.org/gopkg.in/webhelp.v1/whtmpl#Collection
[67]: https://godoc.org/gopkg.in/webhelp.v1/whtmpl#Pair
[68]: https://godoc.org/gopkg.in/go-webhelp/whoauth2.v1
[69]: https://github.com/go-webhelp/whoauth2/blob/v1/examples/group/main.go
[70]: https://godoc.org/gopkg.in/go-webhelp/whgoth.v1
[71]: https://github.com/markbates/goth
[72]: https://godoc.org/gopkg.in/webhelp.v1/whroute
[73]: https://godoc.org/gopkg.in/webhelp.v1/whparse
[74]: https://godoc.org/gopkg.in/webhelp.v1/whredir
[75]: https://godoc.org/gopkg.in/webhelp.v1/whcache
[76]: https://godoc.org/gopkg.in/webhelp.v1/whfatal
[77]: https://github.com/pressly/chi

View File

@ -1,119 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Magic GOPATH)
[#]: via: (https://www.jtolio.com/2017/01/magic-gopath)
[#]: author: (jtolio.com https://www.jtolio.com/)
Magic GOPATH
======
_**Update:** With the advent of Go 1.11 and [Go modules][1], this whole post is now useless. Unset your GOPATH entirely and switch to Go modules today!_
Maybe someday I'll start writing about things besides Go again.
Go requires that you set an environment variable for your workspace called your `GOPATH`. The `GOPATH` is one of the most confusing aspects of Go to newcomers and even relatively seasoned developers alike. It's not immediately clear what would be better, but finding a good `GOPATH` value has implications for your source code repository layout, how many separate projects you have on your computer, how default project installation instructions work (via `go get`), and even how you interoperate with other projects and libraries.
It took until Go 1.8 to decide to [set a default][2], and that small change was one of [the most talked about code reviews][3] for the 1.8 release cycle.
After [writing about GOPATH himself][4], [Dave Cheney][5] [asked me][6] to write a blog post about what I do.
### My proposal
I set my `GOPATH` to always be the current working directory, unless a parent directory is clearly the `GOPATH`.
Here's the relevant part of my `.bashrc`:
```
# bash command to output calculated GOPATH.
calc_gopath() {
    local dir="$PWD"

    # we're going to walk up from the current directory to the root
    while true; do

        # if there's a '.gopath' file, use its contents as the GOPATH relative to
        # the directory containing it.
        if [ -f "$dir/.gopath" ]; then
            ( cd "$dir";
              # allow us to squash this behavior for cases we want to use vgo
              if [ "$(cat .gopath)" != "" ]; then
                  cd "$(cat .gopath)";
                  echo "$PWD";
              fi; )
            return
        fi

        # if there's a 'src' directory, the parent of that directory is now the
        # GOPATH
        if [ -d "$dir/src" ]; then
            echo "$dir"
            return
        fi

        # we can't go further, so bail. we'll make the original PWD the GOPATH.
        if [ "$dir" == "/" ]; then
            echo "$PWD"
            return
        fi

        # now we'll consider the parent directory
        dir="$(dirname "$dir")"
    done
}

my_prompt_command() {
    export GOPATH="$(calc_gopath)"

    # you can have other neat things in here. I also set my PS1 based on git
    # state
}

case "$TERM" in
xterm*|rxvt*)
    # Bash provides an environment variable called PROMPT_COMMAND. The contents
    # of this variable are executed as a regular Bash command just before Bash
    # displays a prompt. Let's only set it if we're in some kind of graphical
    # terminal I guess.
    PROMPT_COMMAND=my_prompt_command
    ;;
*)
    ;;
esac
```
The benefits are fantastic. If you want to quickly `go get` something and not have it clutter up your workspace, you can do something like:
```
cd $(mktemp -d) && go get github.com/the/thing
```
On the other hand, if you're jumping between multiple projects (whether or not they have the full workspace checked in or are just library packages), the `GOPATH` is set accurately.
More flexibly, if you have a tree where some parent directory is outside of the `GOPATH` but you want to set the `GOPATH` anyways, you can create a `.gopath` file and it will automatically set your `GOPATH` correctly any time your shell is inside that directory.
The whole thing is super nice. I kinda can't imagine doing something else anymore.
### Fin.
--------------------------------------------------------------------------------
via: https://www.jtolio.com/2017/01/magic-gopath
Author: [jtolio.com][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://www.jtolio.com/
[b]: https://github.com/lujun9972
[1]: https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more
[2]: https://rakyll.org/default-gopath/
[3]: https://go-review.googlesource.com/32019/
[4]: https://dave.cheney.net/2016/12/20/thinking-about-gopath
[5]: https://dave.cheney.net/
[6]: https://twitter.com/davecheney/status/811334240247812097

View File

@ -1,171 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Starryi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Live video streaming with open source Video.js)
[#]: via: (https://opensource.com/article/20/2/video-streaming-tools)
[#]: author: (Aaron J. Prisk https://opensource.com/users/ricepriskytreat)
Live video streaming with open source Video.js
======
Video.js is a widely used open source player framework that will serve your live video stream to a wide range of devices.
![video editing dashboard][1]
Last year, I wrote about [creating a video streaming server with Linux][2]. That project uses the Real-Time Messaging Protocol (RTMP), the Nginx web server, Open Broadcast Studio (OBS), and the VLC media player.
I used VLC to play our video stream, which may be fine for a small local deployment but isn't very practical on a large scale. First, your viewers have to use VLC, and RTMP streams can provide inconsistent playback. This is where [Video.js][3] comes into play! Video.js is an open source JavaScript framework for creating custom HTML5 video players. Video.js is incredibly powerful, and it's used by a host of very popular websites—largely due to its open nature and how easy it is to get up and running.
### Get started with Video.js
This project is based on the video streaming project I wrote about last year. Since that project was set up to serve RTMP streams, to use Video.js, you'll need to make some adjustments to that Nginx configuration. HTTP Live Streaming ([HLS][4]) is a widely used protocol developed by Apple that will serve your stream better to a multitude of devices. HLS will take your stream, break it into chunks, and serve it via a specialized playlist, as in the example below. This allows for a more fault-tolerant stream that can play on more devices.
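If you're curious what that playlist looks like, it's just a small text file (`.m3u8`) listing the chunks in order. A simplified example (the segment names and durations here are illustrative) might be:
```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:3.000,
STREAM-KEY-0.ts
#EXTINF:3.000,
STREAM-KEY-1.ts
#EXTINF:3.000,
STREAM-KEY-2.ts
```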
First, create a directory that will house the HLS stream and give Nginx permission to write to it:
```
mkdir /mnt/hls
chown www:www /mnt/hls
```
Next, fire up your text editor, open the nginx.conf file, and add the following under the **application live** section:
```
        application live {
            live on;

            # Turn on HLS
            hls on;
            hls_path /mnt/hls/;
            hls_fragment 3;
            hls_playlist_length 60;

            # disable consuming the stream from nginx as rtmp
            deny play all;
        }
```
Take note of the HLS fragment and playlist length settings. You may want to adjust them later, depending on your streaming needs, but this is a good baseline to start with. Next, we need to ensure that Nginx is able to listen for requests from our player and understand how to present it to the user. So, we'll want to add a new section at the bottom of our nginx.conf file.
```
server {
        listen 8080;
        location / {
            # Disable cache
            add_header 'Cache-Control' 'no-cache';
            # CORS setup
            add_header 'Access-Control-Allow-Origin' '*' always;
            add_header 'Access-Control-Expose-Headers' 'Content-Length';
            # allow CORS preflight requests
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain charset=UTF-8';
                add_header 'Content-Length' 0;
                return 204;
            }
            types {
                application/dash+xml mpd;
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /mnt/;
        }
    }
```
Visit Video.js's [Getting started][5] page to download the latest release and check out the release notes. Also on that page, Video.js has a great introductory template you can use to create a very basic web player. I'll break down the important bits of that template and insert the pieces you need to get your new HTML player to use your stream.
The **head** links in the Video.js library from a content-delivery network (CDN). You can also opt to download and store Video.js locally on your web server if you want.
```
<head>
  <link href="https://vjs.zencdn.net/7.5.5/video-js.css" rel="stylesheet" />
  <!-- If you'd like to support IE8 (for Video.js versions prior to v7) -->
  <script src="https://vjs.zencdn.net/ie8/1.1.2/videojs-ie8.min.js"></script>
</head>
```
Now to the real meat of the player. The **body** section sets the parameters of how the video player will be displayed. Within the **video** element, you need to define the properties of your player. How big do you want it to be? Do you want it to have a poster (i.e., a thumbnail)? Does it need any special player controls? This example defines a simple 600x600 pixel player with an appropriate (to me) thumbnail featuring Beastie (the BSD Demon) and Tux (the Linux penguin).
```
<body>
  <video
    id="my-video"
    class="video-js"
    controls
    preload="auto"
    width="600"
    height="600"
    poster="BEASTIE-TUX.jpg"
    data-setup="{}"
  >
```
Now that you've set how you want your player to look, you need to tell it what to play. Video.js can handle a large number of different formats, including HLS streams.
```
    <source src="http://MY-WEB-SERVER:8080/hls/STREAM-KEY.m3u8" type="application/x-mpegURL" />
    <p class="vjs-no-js">
      To view this video please enable JavaScript, and consider upgrading to a
      web browser that
      <a href="https://videojs.com/html5-video-support/" target="_blank"
        >supports HTML5 video</a
      >
    </p>
  </video>
```
### Record your streams
Keeping a copy of your streams is super easy. Just add the following at the bottom of your **application live** section in the nginx.conf file:
```
# Enable stream recording
record all;
record_path /mnt/recordings/;
record_unique on;
```
Make sure that **record_path** exists and that Nginx has permissions to write to it:
```
chown -R www:www /mnt/recordings
```
### Down the stream
That's it! You should now have a spiffy new HTML5-friendly live video player. There are lots of great resources out there on how to expand all your video-making adventures. If you have any questions or suggestions, feel free to reach out to me on [Twitter][7] or leave a comment below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/video-streaming-tools
Author: [Aaron J. Prisk][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/ricepriskytreat
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
[2]: https://opensource.com/article/19/1/basic-live-video-streaming-server
[3]: https://videojs.com/
[4]: https://en.wikipedia.org/wiki/HTTP_Live_Streaming
[5]: https://videojs.com/getting-started
[6]: https://vjs.zencdn.net/ie8/1.1.2/videojs-ie8.min.js
[7]: https://twitter.com/AKernelPanic

View File

@ -1,236 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What happens when you update your DNS?)
[#]: via: (https://jvns.ca/blog/how-updating-dns-works/)
[#]: author: (Julia Evans https://jvns.ca/)
What happens when you update your DNS?
======
I've seen a lot of people get confused about updating their site's DNS records to change the IP address. Why is it slow? Do you really have to wait 2 days for everything to update? Why do some people see the new IP and some people see the old IP? What's happening?
So I wanted to write a quick exploration of what's happening behind the scenes when you update a DNS record.
### how DNS works: recursive vs authoritative DNS servers
First, we need to explain a little bit about DNS. There are 2 kinds of DNS servers: **authoritative** and **recursive**.
**authoritative** DNS servers (also known as **nameservers**) have a database of IP addresses for each domain they're responsible for. For example, right now an authoritative DNS server for github.com is ns-421.awsdns-52.com. You can ask it for github.com's IP like this:
```
dig @ns-421.awsdns-52.com github.com
```
**recursive** DNS servers, by themselves, don't know anything about who owns what IP address. They figure out the IP address for a domain by asking the right authoritative DNS servers, and then cache that IP address in case they're asked again. 8.8.8.8 is a recursive DNS server.
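You can point `dig` at a recursive server the same way; for example, asking Google's public resolver directly:
```
dig @8.8.8.8 github.com
```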
When people visit your website, they're probably making their DNS queries to a recursive DNS server. So, how do recursive DNS servers work? Let's see!
### how does a recursive DNS server query for github.com?
Let's go through an example of what a recursive DNS server (like 8.8.8.8) does when you ask it for an IP address (A record) for github.com. First, if it already has something cached, it'll give you what it has cached. But what if all of its caches are expired? Here's what happens:
**step 1**: it has IP addresses for the root DNS servers hardcoded in its source code. You can see this in [unbound's source code here][1]. Let's say it picks `198.41.0.4` to start with. Here's the [official source][2] for those hardcoded IP addresses, also known as a “root hints file”.
**step 2**: Ask the root nameservers about `github.com`.
We can roughly reproduce what happens with `dig`. What this gives us is a new authoritative nameserver to ask: a nameserver for `.com`, with the IP `192.5.6.30`.
```
$ dig @198.41.0.4 github.com
...
com. 172800 IN NS a.gtld-servers.net.
...
a.gtld-servers.net. 172800 IN A 192.5.6.30
...
```
The details of the DNS response are a little more complicated than that: in this case, there's an authority section with some NS records and an additional section with A records, so you don't need to do an extra lookup to get the IP addresses of those nameservers.
(in practice, 99.99% of the time it'll already have the address of the `.com` nameservers cached, but we're pretending we're really starting from scratch)
**step 3**: Ask the `.com` nameservers about `github.com`.
```
$ dig @192.5.6.30 github.com
...
github.com. 172800 IN NS ns-421.awsdns-52.com.
ns-421.awsdns-52.com. 172800 IN A 205.251.193.165
...
```
We have a new IP address to ask! This one is the nameserver for `github.com`.
**step 4**: Ask the `github.com` nameservers about `github.com`.
We're almost done!
```
$ dig @205.251.193.165 github.com
github.com. 60 IN A 140.82.112.4
```
Hooray!! We have an `A` record for `github.com`! Now the recursive nameserver has `github.com`'s IP address and can return it back to you. And it could do all of this by only hardcoding a few IP addresses: the addresses of the root nameservers.
### how to see all of a recursive DNS server's steps: `dig +trace`
When I want to see what a recursive DNS server would do when resolving a domain, I run:
```
$ dig @8.8.8.8 +trace github.com
```
This shows all the DNS records that it requests, starting at the root DNS servers: all 4 of the steps that we just went through.
### let's update some DNS records!
Now that we know the basics of how DNS works, let's update some DNS records and see what happens.
When you update your DNS records, there are two main options:
1. keep the same nameservers
2. change nameservers
### let's talk about TTLs
We've forgotten something important though! TTLs! You know how we said earlier that the recursive DNS server will cache records until they expire? The way it decides whether the record should expire is by looking at its **TTL** or “time to live”.
In this example, the TTL for the A record that github's nameserver returns is `60`, which means 60 seconds:
```
$ dig @205.251.193.165 github.com
github.com. 60 IN A 140.82.112.4
```
That's a pretty short TTL, and _in theory_, if everybody's DNS implementation followed the [DNS standard][3], it means that if Github decided to change the IP address for `github.com`, everyone should get the new IP address within 60 seconds. Let's see how that plays out in practice.
### option 1: update a DNS record on the same nameservers
First, I updated my nameservers (Cloudflare) to have a new DNS record: an A record that maps `test.jvns.ca` to `1.2.3.4`.
```
$ dig @8.8.8.8 test.jvns.ca
test.jvns.ca. 299 IN A 1.2.3.4
```
This worked immediately! There was no need to wait at all, because there was no `test.jvns.ca` DNS record before that could have been cached. Great. But it looks like the new record is cached for ~5 minutes (299 seconds).
So, what if we try to change that IP? I changed it to `5.6.7.8`, and then ran the same DNS query.
```
$ dig @8.8.8.8 test.jvns.ca
test.jvns.ca. 144 IN A 1.2.3.4
```
Hmm, it seems like that DNS server still has the `1.2.3.4` record cached for another 144 seconds. Interestingly, if I query `8.8.8.8` multiple times I actually get inconsistent results: sometimes it'll give me the new IP and sometimes the old IP, I guess because 8.8.8.8 actually load balances across a bunch of different backends which each have their own cache.
After I waited 5 minutes, all of the `8.8.8.8` caches had updated and were always returning the new `5.6.7.8` record. Awesome. That was pretty fast!
### you can't always rely on the TTL
As with most internet protocols, not everything obeys the DNS specification. Some ISP DNS servers will cache records for longer than the TTL specifies, like maybe for 2 days instead of 5 minutes. And people can always hardcode the old IP address in their /etc/hosts.
What I'd expect to happen in practice when updating a DNS record with a 5 minute TTL is that a large percentage of clients will move over to the new IPs quickly (like within 15 minutes), and then there will be a bunch of stragglers that slowly update over the next few days.
### option 2: updating your nameservers
So we've seen that when you update an IP address without changing your nameservers, a lot of DNS servers will pick up the new IP pretty quickly. Great. But what happens if you change your nameservers? Let's try it!
I didn't want to update the nameservers for my blog, so instead I went with a different domain I own and use in the examples for the [HTTP zine][4]: `examplecat.com`.
Previously, my nameservers were set to dns1.p01.nsone.net. I decided to switch them over to Google's nameservers (`ns-cloud-b1.googledomains.com`, etc.).
When I made the change, my domain registrar somewhat ominously popped up the message “Changes to examplecat.com saved. They'll take effect within the next 48 hours”. Then I set up a new A record for the domain, to make it point to `1.2.3.4`.
Okay, let's see if that did anything…
```
$ dig @8.8.8.8 examplecat.com
examplecat.com. 17 IN A 104.248.50.87
```
No change. If I ask a different DNS server, it knows the new IP:
```
$ dig @1.1.1.1 examplecat.com
examplecat.com. 299 IN A 1.2.3.4
```
but 8.8.8.8 is still clueless. The reason 1.1.1.1 sees the new IP even though I just changed it 5 minutes ago is presumably that nobody had ever queried 1.1.1.1 about examplecat.com before, so it had nothing in its cache.
### nameserver TTLs are much longer
The reason that my registrar was saying “THIS WILL TAKE 48 HOURS” is that the TTLs on NS records (which are how recursive nameservers know which nameserver to ask) are MUCH longer!
The new nameserver is definitely returning the new IP address for `examplecat.com`:
```
$ dig @ns-cloud-b1.googledomains.com examplecat.com
examplecat.com. 300 IN A 1.2.3.4
```
But remember what happened when we queried for the `github.com` nameservers, way back?
```
$ dig @192.5.6.30 github.com
...
github.com. 172800 IN NS ns-421.awsdns-52.com.
ns-421.awsdns-52.com. 172800 IN A 205.251.193.165
...
```
172800 seconds is 48 hours! So nameserver updates will in general take a lot longer to expire from caches and propagate than just updating an IP address without changing your nameserver.
### how do your nameservers get updated?
When I update the nameservers for `examplecat.com`, what happens is that the `.com` nameserver gets a new `NS` record for the domain, pointing at the new nameservers. Like this:
```
$ dig ns @j.gtld-servers.net examplecat.com
examplecat.com. 172800 IN NS ns-cloud-b1.googledomains.com
```
But how does that new NS record get there? What happens is that I tell my **domain registrar** what I want the new nameservers to be by updating it on the website, and then my domain registrar tells the `.com` nameservers to make the update.
For `.com`, these updates happen pretty fast (within a few minutes), but I think for some other TLDs the TLD nameservers might not apply updates as quickly.
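If you want to check whether the TLD nameservers have picked up a change without waiting on anyone’s cache, `dig +trace` is handy: it starts at the root and follows the delegation down, so the NS records it prints come straight from the authoritative servers:
```
$ dig +trace examplecat.com
```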
### your program’s DNS resolver library might also cache DNS records
One more reason TTLs might not be respected in practice: many programs need to resolve DNS names, and some programs will also cache DNS records indefinitely in memory (until the program is restarted).
For example, AWS has an article on [Setting the JVM TTL for DNS Name Lookups][5]. I haven’t written that much JVM code that does DNS lookups myself, but from a little Googling about the JVM and DNS it seems like you can configure the JVM so that it caches every DNS lookup indefinitely. (like [this elasticsearch issue][6])
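Per that AWS article, one documented knob is the `networkaddress.cache.ttl` security property; a line like this in the JVM’s `java.security` file bounds the cache (60 here is an arbitrary example value):
```
# cache successful JVM DNS lookups for 60 seconds instead of forever
networkaddress.cache.ttl=60
```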
### that’s all!
I hope this helps you understand what’s going on when updating your DNS!
As a disclaimer, again TTLs definitely don’t tell the whole story about DNS propagation: some recursive DNS servers definitely don’t respect TTLs, even if the major ones like 8.8.8.8 do. So even if you’re just updating an A record with a short TTL, it’s very possible that in practice you’ll still get some requests to the old IP for a day or two.
Also, I changed the nameservers for `examplecat.com` back to their old values after publishing this post.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/how-updating-dns-works/
Author: [Julia Evans][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://github.com/NLnetLabs/unbound/blob/6e0756e819779d9cc2a14741b501cadffe446c93/iterator/iter_hints.c#L131
[2]: https://www.iana.org/domains/root/files
[3]: https://tools.ietf.org/html/rfc1035
[4]: https://wizardzines.com/zines/http/
[5]: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-jvm-ttl.html
[6]: https://github.com/elastic/elasticsearch/issues/16412

[#]: subject: "D Declarations for C and C++ Programmers"
[#]: via: "https://theartofmachinery.com/2020/08/18/d_declarations_for_c_programmers.html"
[#]: author: "Simon Arneaud https://theartofmachinery.com"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
D Declarations for C and C++ Programmers
======
Because D was originally created by a C++ compiler writer, Walter Bright, [it’s an easy language for C and C++ programmers to learn][1], but there are little differences in the way declarations work. I learned them piecemeal in different places, but I’m going to dump a bunch in this one post.
### `char* p`
If you want to declare a pointer in C, both of the following work:
```
char *p;
char* p;
```
Some people prefer the second form because it puts all the type information to one side. At least, that’s what it looks like. Trouble is, you can fall into this trap:
```
char* p, q; // Gotcha! p is a pointer to a char, and q is a char in C
```
“Type information on the left” isn’t really how C works. D, on the other hand, _does_ put all the type information to the left, so this works the way it appears:
```
char* p, q; // Both p and q are of type char* in D
```
D also accepts the `char *p` syntax, but the rule I go by is `char *p` when writing C, and `char* p` when writing D, just because that matches how the languages actually work, so no gotchas.
### Digression: how C declarations work
This isn’t about D, but helps to make sense of the subtler differences between C and D declarations.
C declarations are implicit about types. `char *p` doesn’t really say, “`p` is of type `char*`”; it says “the type of `p` is such that `*p` evaluates to a `char`”. Likewise:
```
int a[8]; // a[i] evaluates to an int (=> a is an array of ints)
int (*f)(double); // (*f)(0.5) evaluates to an int (=> f is a pointer to a function taking a double, returning an int)
```
There’s a kind of theoretical elegance to this implicit approach, but 1) it’s backwards and makes complex types confusing, 2) the theoretical elegance only goes so far because everything’s a special case. For example, `int a[8];` declares an array `a`, but makes the expression `a[8]` undefined. You can only use certain operations, so `int 2*a;` doesn’t work, and neither does `double 1.0 + sin(x);`. The expression `4[a]` is equivalent to `a[4]`, but you can’t declare an array with `int 4[a];`. C++ gave up on the theory when it introduced reference syntax like `int &x;`.
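If you’ve never seen that `4[a]` trick, here’s a tiny, contrived C program showing it really is just pointer arithmetic:
```
#include <stdio.h>

int main(void)
{
    int a[8] = {10, 11, 12, 13, 14, 15, 16, 17};

    /* a[4] means *(a + 4), and addition commutes, so 4[a] names the same element */
    printf("%d %d\n", a[4], 4[a]);   /* prints "14 14" */
    return 0;
}
```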
### `function` and `delegate`
D has a special `function` keyword for declaring function pointers using the “type information on the left” approach. It makes the declaration of function pointers use the same syntax as the declaration of a function:
```
int foo();
int[] bar();
int function() foo_p = &foo;
int[] function() bar_p = &bar;
```
Note that the `&` is _required_ to get the address of a function in D (unlike in C and C++). If you want to have an array of pointers, you just add `[]` to the end of the type, just like you do with any other type. Similarly for making pointers to types:
```
int function()[] foo_pa = [&foo];
int function()* foo_pp = &foo_p;
int function()[]* foo_pap = &foo_pa;
```
Here’s the C equivalent for comparison:
```
int (*foo_p)() = &foo;
int (*foo_pa[])() = {&foo};
int (**foo_pp)() = &foo_p;
int (*(*foo_pap)[])() = &foo_pa;
```
It’s rare to need these complicated types, but the logic for the D declarations is much simpler.
There’s also the `delegate` keyword, which works in exactly the same way for [“fat function pointers”][2].
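Here’s a rough sketch of the difference (the names are mine): taking the address of a free function gives you a `function`, while taking it through an instance gives you a `delegate` that carries the instance’s context pointer along with the code pointer:
```
int twice(int x) { return 2 * x; }

struct Counter
{
    int n;
    int next() { return ++n; }
}

void main()
{
    int function(int) f = &twice;   // plain code pointer
    Counter c;
    int delegate() step = &c.next;  // code pointer + context pointer (c)
    assert(f(21) == 42);
    assert(step() == 1 && step() == 2);
}
```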
### Arrays
The most obvious difference from C is that D uses the “type information on the left” approach:
```
// int a[8]; is an error in D
int[8] a;
```
Another difference is in the order of indices for multidimensional arrays. E.g., this C code:
```
int a[4][64];
```
translates to this in D:
```
int[64][4] a;
```
Here’s the rule for understanding the D ordering:
```
T[4] a;
static assert (is(typeof(a[0]) == T));
```
If `T` represents a type, then `T[4]` is always an array of 4 `T`s. Sounds obvious, but it means that if `T` is `int[64]`, `int[64][4]` must be an array of 4 `int[64]`s.
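Note that even though the type reads in the opposite order, the indexing order ends up the same as in C. A quick sketch:
```
int[64][4] a;   // same memory layout as C's int a[4][64]
static assert(is(typeof(a[0]) == int[64]));

void touch()
{
    a[3][63] = 1;   // first index runs 0..3, second 0..63, just like C's a[3][63]
}
```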
### `auto`
C has had `auto` as a storage class keyword since the early days, but it got mostly forgotten because it’s only allowed in the one place it’s the default, anyway. (It effectively means “this variable goes on the stack”.) C++ repurposed the keyword to enable automatic type deduction.
You can also use `auto` with automatic type deduction in D, but it’s not actually required. Type deduction is always enabled in D; you just need to make your declaration unambiguously a declaration. For example, these work in D (but not all in C++):
```
auto x1 = 42;
const x2 = 42;
static x3 = 42;
```
### No need for forward declarations at global scope
This code works:
```
// Legal, but not required in D
// void bar();
void foo()
{
bar();
}
void bar()
{
// ...
}
```
Similarly for structs and classes. Order of definition doesn’t matter, and forward declarations aren’t required.
Order does matter in local scope, though:
```
void foo()
{
// Error!
bar();
void bar()
{
// ...
}
}
```
Either the definition of `bar()` needs to be put before its usage, or `bar()` needs a forward declaration.
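For completeness, here’s the reordered version, which compiles fine:
```
void foo()
{
    void bar()
    {
        // ...
    }
    bar(); // fine: bar() is defined before its first use
}
```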
### `const()`
The `const` keyword in C declarations can be confusing. (Think `const int *p` vs `int const *p` vs `const int const *p`.) D supports the same syntax, but also allows `const` with parentheses:
```
// non-constant pointer to constant int
const(int)* p1;
// constant pointer to constant int
const(int*) p2;
```
[`const` is transitive in D][3], anyway, and this syntax makes it much clearer. The same parenthetical syntax works with `immutable`, too. Although C-style syntax is supported by D, I always prefer the parenthetical style; one more reason comes up in the section on function qualifiers below.
### `ref`
`ref` is the D alternative to C++’s references. In D, `ref` doesn’t create a new type, it just controls how the instance of the type is stored in memory (i.e., it’s a storage class). C++ acts as if references are types, but references have so many special restrictions that they’re effectively like a complex version of a storage class (in Walter’s words, C++ references try to be both a floor wax and a dessert topping). For example, C++ treats `int&` like a type, but forbids declaring an array of `int&`.
As a former C++ programmer, I used to write D function arguments like this:
```
void foo(const ref S s);
```
Now I write them like this:
```
void foo(ref const(S) s);
```
The difference becomes more obvious with more complex types. Treating `ref` like a storage class ends up being cleaner because that’s the way it actually is in D.
Currently `ref` is only supported with function arguments or `foreach` loop variables, so you can’t declare a regular local variable to be `ref`.
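Here’s a small sketch (my own example) of both supported uses:
```
void addOne(ref int x)
{
    ++x; // modifies the caller's variable directly, no pointer syntax needed
}

void main()
{
    int n = 41;
    addOne(n);
    assert(n == 42);

    int[3] a = [1, 2, 3];
    foreach (ref e; a)  // ref loop variable writes back into the array
        ++e;
    assert(a[] == [2, 3, 4]);
}
```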
### Function qualifiers
D’s backward-compatible support for the C-style `const` keyword creates an unfortunate gotcha:
```
struct S
{
// Confusing!
const int* foo()
{
// ...
}
}
```
`foo()` doesn’t return a `const int*`. The `const` applies to the `foo()` member function itself, meaning that it works on `const` instances of `S` and returns a (non-`const`) `int*`. To avoid that trap, I always use the D-style `const()` syntax, and write member function qualifiers on the right:
```
struct S
{
const(int)* foo()
{
// ...
}
int* bar() const
{
// ...
}
}
```
### Syntax ambiguities
C++ allows initialising struct and class instances without an `=` sign:
```
S s(42);
```
This syntax famously leads to ambiguities with function declaration syntax in special cases (Scott Meyers’ “most vexing parse”). [People like Herb Sutter have written enough about it.][4] D only supports initialisation with `=`:
```
S s = S(42);
// Alternatively:
auto s = S(42);
```
C syntax has some weird corners, too. Here’s a simple one:
```
x*y;
```
That looks like a useless multiplication between two variables, but logically it could be a declaration of `y` as a pointer to a type `x`. Expression and declaration are totally different parses that depend on what the symbol `x` means in this scope. (Even worse, if it’s a declaration, then the new `y` could shadow an existing `y`, which could affect later parses.) So C compilers need to track symbols in a symbol table while parsing, which is why C has forward declarations in practice.
D sidesteps the ambiguity by requiring a typecast to `void` if you really want to write an arithmetic expression without assigning it to anything:
```
int x, y;
cast(void)(x*y);
```
I’ve never seen useful code do that, but that rule helps D parse simply without forward declarations.
Here’s another quirk of C syntax. Remember that C declarations work by having a basic type on the left, followed by expressions that evaluate to that type? C allows parentheses in those expressions, and doesn’t care about whitespace as long as symbols don’t run together. That means these two declarations are equivalent:
```
int x;
int(x);
```
But what if, instead of `int`, we use some symbol that might be a typedef?
```
// Is this a declaration of x, or a function call?
t(x);
```
Just for fun, we can exploit shadowing and C’s archaic type rules:
```
typedef (*x)();
main()
{
x(x);
x(x);
}
```
The first line makes `x` a typedef to a function pointer type. The first `x(x);` redeclares `x` to be a function pointer variable, shadowing the typedef. The second `x(x);` is a function call that passes `x` as an argument. Yes, this code actually compiles, but it’s undefined behaviour because the function pointer is dereferenced without being initialised.
D avoids this chaos thanks to its “all type information on the left” rule. There’s no need to put parentheses around symbols in declarations, so `x(y);` is always a function call.
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2020/08/18/d_declarations_for_c_programmers.html
Author: [Simon Arneaud][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://ddili.org/ders/d.en/index.html
[2]: https://tour.dlang.org/tour/en/basics/delegates
[3]: https://dlang.org/articles/const-faq.html#transitive-const
[4]: https://herbsutter.com/2013/05/09/gotw-1-solution/

[#]: subject: "Robust and Race-free Server Logging using Named Pipes"
[#]: via: "https://theartofmachinery.com/2020/10/10/logging_with_named_pipes.html"
[#]: author: "Simon Arneaud https://theartofmachinery.com"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Robust and Race-free Server Logging using Named Pipes
======
If you do any server administration work, you’ll have worked with log files. And if your servers need to be reliable, you’ll know that log files are a common source of problems, especially when you need to rotate or ship them (which is practically always). In particular, moving files around causes race conditions.
Thankfully, there are better ways. With named pipes, you can have a simple and robust logging stack, with no race conditions, and without patching your servers to support some network logging protocol.
### The problems with rotating log files
First, let’s talk about the problems. Race conditions are generally a problem with popular file-based logging setups, whether you’re rotating logs into archival storage, or shipping them to a remote log processing stack, or whatever. To keep things concrete, though, let me talk about [logrotate][1], just because it’s a popular tool.
Say you have a log file at `/var/log/foo`. It gets pretty big, and you want to process the logs periodically and start with a new, empty file. So you (or your distro maintainers) set up logrotate with various rules about when to rotate the file.
By default, logrotate will rename the file (to something like `/var/log/foo.1`) and create a new `/var/log/foo` to write to. That (mostly) works for software that runs intermittently (such as a package manager that does software updates). But it won’t do any good if the log file is generated by a long-running server. The server only uses the filename when it opens the file; after that it just keeps writing to its open file descriptor. That means it will keep writing to the old file (now named `/var/log/foo.1`), and the new `/var/log/foo` file will stay empty.
To handle this use-case, logrotate supports another mode: `copytruncate`. In this mode, instead of renaming, logrotate will copy the contents of `/var/log/foo` to an archival file, and then truncate the original file to zero length. As long as the server has the log file open in append mode, it will automatically write new logs to the start of the file, without needing to detect the truncation and do a file seek (the kernel handles that).
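For reference, a `copytruncate` stanza looks something like this (the path and rotation settings are made up for illustration):
```
# illustrative logrotate config
/var/log/foo {
    daily
    rotate 7
    compress
    copytruncate
}
```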
That `copytruncate` mode creates a race condition, though. Any log lines that are written after the copy but before the truncation will get destroyed. Actually, you tend to get the same race condition even with the default move-and-create mode. That’s because there’s not much point just splitting up the logs into multiple files. Most systems are configured to do something like compress the old log file, but ultimately you need to delete the old, uncompressed data, which creates the same race as truncating. (In practice, this race isn’t so likely for occasional log writers, like package managers, and the `delaycompress` flag to logrotate makes it rarer, albeit by making the log handling a bit more complicated.)
Some servers, like [Nginx][2], support a modification of the default logrotate mode:
1. Rename the old file
2. Create the new file
3. (New step) notify the server that it needs to reopen its log file.
This works (as long as the logs processor doesn’t delete the old file before the server has finished reopening), but it requires special support from the server, and you’re out of luck with most software. There’s a lot of software out there, and log file handling just isn’t interesting enough to get high on the to-do list. This approach also only works for long-running servers.
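With Nginx, for example, the rename-and-notify dance looks something like this (paths are illustrative; `USR1` is Nginx’s log-reopening signal):
```
$ mv /var/log/nginx/access.log /var/log/nginx/access.log.1
$ kill -USR1 "$(cat /var/run/nginx.pid)"   # tell the master process to reopen its log files
$ sleep 1                                  # give workers a moment before touching the old file
```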
I think this is a good point to stop and take a step back. Having multiple processes juggle log files around on disk without any synchronisation is just an inherently painful way to do things. It causes bugs and makes logging stacks complicated ([here’s just one of many examples][3]). One alternative is to use some network protocol like MQTT or networked syslog, but, realistically, most servers won’t support the one you want. And they shouldn’t have to — log files are a great interface for log writers.
That’s okay because *nix “everything is a file” lets us easily get a file interface on the writer side, with a streaming interface on the reader side.
### Named pipes 101
Maybe you’ve seen pipes in pipelines like this:
```
$ sort user_log.txt | uniq
```
The pipe connecting `sort` and `uniq` is a temporary, anonymous communication channel that `sort` writes to and `uniq` reads from. Named pipes are less common, but they’re also communication channels. The only difference is that they persist on the filesystem as if they were files.
Open up a terminal and `cd` into some temporary working directory. The following creates a named pipe and uses `cat` to open a writer:
```
$ mkfifo p
$ # This cat command will sit waiting for input
$ cat > p
```
Leave that `cat` command waiting, and open up another terminal in the same directory. In this terminal, start your reader:
```
$ # This will sit waiting for data to come over the pipe
$ cat p
```
Now as you type things into the writer end, you’ll see them appear in the reader end. `cat` will use line buffering in interactive mode, so data will get transferred every time you start a new line.
`cat` doesn’t have to know anything about pipes for this to work — the pipe acts like a file as long as you just naïvely read or write to it. But if you check, you’ll see the data isn’t stored anywhere. You can pump gigabytes through a pipe without filling up any disk space. Once the data has been read once, it’s lost. (You can have multiple readers, but only one will receive any buffer-load of data.)
Another thing that makes pipes useful for communication is their buffering and blocking. You can start writing before any readers open the pipe, and data gets temporarily buffered inside the kernel until a reader comes along. If the reader starts first, its read will block, waiting for data from the writer. (The writer will also block if the pipe buffer gets full.) If you try the two-terminal experiment again with a regular file, you’ll see that the reader `cat` will eagerly read all the data it can and then exit.
### An annoying problem and a simple solution
Maybe you’re seeing how named pipes can help with logging: Servers can write to log “files” that are actually named pipes, and a logging stack can read log data directly from the named pipe without letting a single line fall onto the floor. You do whatever you want with the logs, without any racy juggling of files on disk.
There’s one annoying problem: the writer doesn’t need a reader to start writing, but if a reader opens the pipe and then closes it, the writer gets a `SIGPIPE` (“broken pipe”), which will kill it by default. (Try killing the reader `cat` while typing things into the writer to see what I mean.) Similarly, a reader can read without a writer, but if a writer opens the pipe and then closes it, that will be treated like an end of file. Although the named pipe persists on disk, it isn’t a stable communication channel if log writers and log readers can restart (as they will on a real server).
There’s a solution that’s a bit weird but very simple. Multiple processes can open the pipe for reading and writing, and the pipe will only close when _all_ readers or _all_ writers close it. All we need for a stable logging pipe is a daemon that holds the named pipe open for both reading and writing, without doing any actual reading or writing. I set this up on my personal server last year, and I wrote [a tiny, zero-config program to act as my pipe-holding daemon][4]. It just opens every file in its current working directory for both reading and writing. I run it from a directory that has symbolic links to every named pipe in my logging stack. The program runs in a loop that ends in a `wait()` for a `SIGHUP`. If I ever update the symlinks in the directory, I give the daemon a `kill -HUP` and it reopens them all. Sure, it could do its own directory watching, but the `SIGHUP` approach is simple and predictable, and the whole thing works reliably. Thanks to the pipe buffer, log writers and log readers can be shut down and restarted independently, any time, without breakage.
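You can approximate the pipe-holding trick in plain shell (a sketch of the idea, not the author’s program): on Linux, opening the FIFO read-write from one long-lived process keeps both ends alive, so writers never see a missing reader and readers never see EOF:
```
$ mkfifo /var/log/foo.pipe
$ # hold the pipe open for both reading and writing from a do-nothing process
$ sleep infinity 3<>/var/log/foo.pipe &
```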
My server uses the [s6 supervision suite][5] to manage daemons, so I have s6-log reading from each logging pipe. The bottom part of the [s6-log documentation page][6] has some good insights into the problems with popular logging systems, and good ideas about better ways to do things.
### Imagine: a world without log rotation
Strictly speaking, named pipes aren’t necessary for race-free logs processing. The s6 suite encourages writing logs to some file descriptor (like standard error), and letting the supervision suite make sure those file descriptors point to something useful. However, the named pipe approach adds a few benefits:
* It doesn’t require any co-ordination between writer and reader
* It integrates nicely with the software we have today
* It gives things meaningful names (rather than `/dev/fd/4`)
I’ve worked with companies that spend about as much on their logging stacks as on their serving infrastructure, and, no, “we do logs processing” isn’t in their business models. Of course, log rotation and log shipping aren’t the only problems to blame, but it feels so wrong that we’ve made logs so complicated. If you work on any logging system, consider if you really need to juggle log files around. You could be helping to make the world a better place.
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2020/10/10/logging_with_named_pipes.html
Author: [Simon Arneaud][a]
Curator: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://github.com/logrotate/logrotate
[2]: https://www.nginx.com/resources/wiki/start/topics/examples/logrotation/
[3]: https://community.splunk.com/t5/Getting-Data-In/Why-copytruncate-logrotate-does-not-play-well-with-splunk/td-p/196112
[4]: https://gitlab.com/sarneaud/fileopenerd
[5]: http://www.skarnet.org/software/s6/index.html
[6]: http://www.skarnet.org/software/s6/s6-log.html